Naoto Yokoya (横矢直人)
University of Tokyo (東京大学)

Keywords: hyperspectral, remote sensing, pattern recognition, data fusion

Journal papers

  1. J. Xia, N. Yokoya, and A. Iwasaki, “Classification of large-sized hyperspectral imagery using fast machine learning algorithms,” Journal of Applied Remote Sensing, accepted on June 27, 2017.


  2. N. Yokoya, C. Grohnfeldt, and J. Chanussot, “Hyperspectral and multispectral data fusion: a comparative review of the recent literature,” IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 2, pp. 29-56, 2017.

    Abstract: In recent years, enormous efforts have been made to design image processing algorithms to enhance the spatial resolution of hyperspectral (HS) imagery. One of the most commonly addressed problems is the fusion of HS data with higher-spatial-resolution multispectral (MS) data. Various techniques have been proposed to solve this data fusion problem based on different theories including component substitution, multiresolution analysis, spectral unmixing, and Bayesian probability. This paper presents a comparative review of those HS-MS fusion techniques with extensive experiments. Ten state-of-the-art HS-MS fusion methods are compared by assessing their fusion performance both quantitatively and visually. Eight data sets featuring different geographical and sensor characteristics are used in the experiments to evaluate the generalizability and versatility of the fusion algorithms. To maximize the fairness and transparency of this comparison, publicly available source codes are used, and parameters are individually tuned for maximum performance. Additionally, the impact of spatial resolution enhancement on classification is investigated. Robustness against various factors characterizing the HS-MS fusion problem is systematically analyzed for all methods under comparison. The algorithm characteristics are summarized, and methods with high general versatility are clarified. The paper also provides possible future directions for the development of HS-MS fusion.
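
    A note on the quantitative assessment mentioned above: one index widely used in this comparison literature is the spectral angle mapper (SAM). The following is a minimal NumPy sketch of SAM with toy data and assumed (rows, cols, bands) array shapes; it is an illustration, not code from the paper.

```python
import numpy as np

def spectral_angle_mapper(fused, reference):
    """Mean spectral angle (radians) between two hyperspectral cubes of shape (rows, cols, bands)."""
    x = fused.reshape(-1, fused.shape[-1]).astype(float)
    y = reference.reshape(-1, reference.shape[-1]).astype(float)
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1) + 1e-12)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

# toy usage: a reference cube and a slightly perturbed "fused" cube
rng = np.random.default_rng(0)
reference = rng.random((50, 50, 100))
fused = reference + 0.01 * rng.standard_normal(reference.shape)
print(spectral_angle_mapper(fused, reference))  # small angle -> good spectral fidelity
```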


  3. N. Yokoya, “Texture-guided multisensor superresolution for remotely sensed images,” Remote Sensing, vol. 9, no. 4: 316, 2017.

    Abstract: This paper presents a novel technique, namely texture-guided multisensor superresolution (TGMS), for fusing a pair of multisensor multiresolution images to enhance the spatial resolution of a lower-resolution data source. TGMS is based on multiresolution analysis, taking object structures and image textures in the higher-resolution image into consideration. TGMS is designed to be robust against misregistration and the resolution ratio and applicable to a wide variety of multisensor superresolution problems in remote sensing. The proposed methodology is applied to six different types of multisensor superresolution, which fuse the following image pairs: multispectral and panchromatic images, hyperspectral and panchromatic images, hyperspectral and multispectral images, optical and synthetic aperture radar images, thermal-hyperspectral and RGB images, and digital elevation model and multispectral images. The experimental results demonstrate the effectiveness and high general versatility of TGMS.


  4. D. Hong, N. Yokoya, and X. X. Zhu, “Learning a robust local manifold representation for hyperspectral dimensionality reduction,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., accepted on Feb. 26, 2017.

    Abstract: Local manifold learning has been successfully applied to hyperspectral dimensionality reduction in order to embed nonlinear and non-convex manifolds in the data. Local manifold learning is mainly characterized by affinity matrix construction, which is composed of two steps: neighbor selection and computation of affinity weights. There is a challenge in each step: (1) neighbor selection is sensitive to complex spectral variability due to non-uniform data distribution, illumination variations, and sensor noise; (2) the computation of affinity weights is challenging due to highly correlated spectral signatures in the neighborhood. To address the two issues, in this work a novel manifold learning methodology based on locally linear embedding (LLE) is proposed through learning a robust local manifold representation (RLMR). More specifically, a hierarchical neighbor selection (HNS) is designed to progressively eliminate the effects of complex spectral variability using joint normalization (JN) and to robustly compute affinity (or reconstruction) weights, reducing multicollinearity, via refined neighbor selection (RNS). Additionally, an idea that combines spatial-spectral information is introduced into the proposed manifold learning methodology to further improve the robustness of affinity calculations. Classification is explored as a potential application for validating the proposed algorithm. Classification accuracy with different dimensionality reduction methods is evaluated and compared, while two kinds of strategies are applied in selecting the training and test samples: random sampling and region-based sampling. Experimental results show that the classification accuracy obtained by the proposed method is superior to that of state-of-the-art dimensionality reduction methods.
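
    For orientation, the sketch below runs plain locally linear embedding (the baseline that RLMR builds on) with scikit-learn and feeds the embedding to a k-NN classifier; the robust neighbor selection, joint normalization, and spatial-spectral extensions proposed in the paper are not reproduced here, and the toy data and parameters are assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# toy stand-in for (n_pixels, n_bands) hyperspectral spectra and labels
rng = np.random.default_rng(0)
X = rng.random((1000, 120))
y = rng.integers(0, 5, size=1000)

# plain LLE embedding (neighbor selection + reconstruction weights)
embedding = LocallyLinearEmbedding(n_neighbors=20, n_components=15).fit_transform(X)

# evaluate the reduced features with a simple classifier, as in the paper's validation
Z_tr, Z_te, y_tr, y_te = train_test_split(embedding, y, train_size=0.1, random_state=0)
acc = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr).score(Z_te, y_te)
print("toy classification accuracy:", acc)
```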


  5. N. Yokoya, X. X. Zhu, and A. Plaza, “Multisensor coupled spectral unmixing for time-series analysis,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 5, pp. 2842-2857, 2017.

    Abstract: We present a new framework, called multisensor coupled spectral unmixing (MuCSUn), that solves unmixing problems involving a set of multisensor time-series spectral images in order to understand dynamic changes of the surface at a subpixel scale. The proposed methodology couples multiple unmixing problems based on regularization on graphs between the time-series data to obtain robust and stable unmixing solutions beyond data modalities due to different sensor characteristics and the effects of non-optimal atmospheric correction. Atmospheric normalization and cross-calibration of spectral response functions are integrated into the framework as a preprocessing step. The proposed methodology is quantitatively validated using a synthetic dataset that includes seasonal and trend changes on the surface and the residuals of non-optimal atmospheric correction. The experiments on the synthetic dataset clearly demonstrate the efficacy of MuCSUn and the importance of the preprocessing step. We further apply our methodology to a real time-series data set composed of 11 Hyperion and 22 Landsat-8 images taken over Fukushima, Japan, from 2011 to 2015. The proposed methodology successfully obtains robust and stable unmixing results and clearly visualizes class-specific changes at a subpixel scale in the considered study area.


  6. J. Xia, N. Yokoya, and A. Iwasaki, “Hyperspectral image classification with canonical correlation forests,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 1, pp. 421-431, 2017.

    Abstract: Multiple classifier systems or ensemble learning is an effective tool for providing accurate classification results of hyperspectral remote sensing images. Two well-known ensemble learning classifiers for hyperspectral data are random forest (RF) and rotation forest (RoF). In this paper, we propose to use a novel decision tree (DT) ensemble method, namely, canonical correlation forest (CCF). More specifically, several individual canonical correlation trees (CCTs), which are binary DTs that use canonical correlation components for the hyperplane splitting, are used to construct the CCF. Additionally, we adopt the projection bootstrap technique in CCF, in which the full spectral bands are retained for split selection in the projected space. The aforementioned techniques allow the CCF to improve the accuracy of member classifiers and the diversity within the ensemble. Furthermore, the CCF is extended to spectral–spatial frameworks that incorporate Markov random fields, extended multiattribute profiles (EMAPs), and the ensemble of independent component analysis and rolling guidance filter (E-ICA-RGF). Experimental results on six hyperspectral data sets indicate the comparative effectiveness of the proposed method, in terms of accuracy and computational complexity, compared with RF and RoF, and show that CCF is a promising approach for hyperspectral image classification not only with spectral information but also in spectral–spatial frameworks.
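
    Canonical correlation forests are not available in scikit-learn, but the core idea of a canonical correlation tree (splitting in a CCA-projected feature space) can be loosely imitated as below: CCA against one-hot labels, a decision tree on the projected features, bootstrap sampling, and majority voting. This is an illustrative approximation under those assumptions, not the authors' CCF implementation (which also uses the projection bootstrap), and all data are toy placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.tree import DecisionTreeClassifier

def fit_cca_tree_ensemble(X, y, n_classes, n_trees=20, n_components=3, seed=0):
    """Loose CCF-style ensemble: CCA-projected features + decision trees on bootstraps."""
    rng = np.random.default_rng(seed)
    Y = np.eye(n_classes)[y]                      # one-hot label matrix for CCA
    models = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))     # bootstrap sample
        cca = CCA(n_components=n_components).fit(X[idx], Y[idx])
        tree = DecisionTreeClassifier(random_state=0).fit(cca.transform(X[idx]), y[idx])
        models.append((cca, tree))
    return models

def predict_majority(models, X):
    votes = np.stack([tree.predict(cca.transform(X)) for cca, tree in models]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])  # majority vote

# toy data: 500 spectra with 50 bands, 4 classes
rng = np.random.default_rng(1)
X, y = rng.random((500, 50)), rng.integers(0, 4, 500)
pred = predict_majority(fit_cca_tree_ensemble(X, y, n_classes=4), X)
```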


  7. U. Heiden, A. Iwasaki, A. Müller, M. Schlerf, T. Udelhoven, K. Uto, N. Yokoya, and J. Chanussot, “Foreword to the special issue on hyperspectral remote sensing and imaging spectroscopy,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 9, no. 9, pp. 3904-3908, 2016.

  8. N. Yokoya, J. C. W. Chan, and K. Segl, “Potential of resolution-enhanced hyperspectral data for mineral mapping using simulated EnMAP and Sentinel-2 images,” Remote Sensing, vol. 8, no. 3: 172, 2016.

    Abstract: Spaceborne hyperspectral images are useful for large scale mineral mapping. Acquired at a ground sampling distance (GSD) of 30 m, the Environmental Mapping and Analysis Program (EnMAP) will be capable of putting many issues related to environment monitoring and resource exploration in perspective with measurements in the spectral range between 420 and 2450 nm. However, a higher spatial resolution is preferable for many applications. This paper investigates the potential of fusion-based resolution enhancement of hyperspectral data for mineral mapping. A pair of EnMAP and Sentinel-2 images is generated from a HyMap scene over a mining area. The simulation is based on well-established sensor end-to-end simulation tools. The EnMAP image is fused with Sentinel-2 10-m-GSD bands using a matrix factorization method to obtain resolution-enhanced EnMAP data at a 10 m GSD. Quality assessments of the enhanced data are conducted using quantitative measures and continuum removal and both show that high spectral and spatial fidelity are maintained. Finally, the results of spectral unmixing are compared with those expected from high-resolution hyperspectral data at a 10 m GSD. The comparison demonstrates high resemblance and shows the great potential of the resolution enhancement method for EnMAP type data in mineral mapping.


  9. M.A. Veganzones, M. Simoes, G. Licciardi, N. Yokoya, J.M. Bioucas-Dias, and J. Chanussot, “Hyperspectral super-resolution of locally low rank images from complementary multisource data,” IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 274-288, 2016.

    Abstract: Remote sensing hyperspectral images (HSI) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images (MSI) in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSI are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution via local dictionary learning using endmember induction algorithms (HSR-LDL-EIA). We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.


  10. L. Loncan, L. B. Almeida, J. Bioucas Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes, J. Y. Tourneret, M. A. Veganzones, G. Vivone, Q. Wei, and N. Yokoya, “Hyperspectral pansharpening: a review,” IEEE Geoscience and Remote Sensing Magazine, vol. 3, no. 3, pp. 27-46, 2015.

    Abstract: Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state of the art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian and matrix factorization) are analyzed. These methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.


  11. T. Matsuki, N. Yokoya, and A. Iwasaki, “Hyperspectral tree species classification of Japanese complex mixed forest with the aid of LiDAR data,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 5, pp. 2177-2187, 2015.

    Abstract: The classification of tree species in forests is an important task for forest maintenance and management. With the increase in the spatial resolution of remote sensing imagery, individual tree classification is the next research target for forest inventory. In this work, we propose a methodology that combines hyperspectral and LiDAR data for individual tree classification, which can be extended to areas of shadow caused by the illumination of tree crowns with sunlight. To remove the influence of shadows in hyperspectral data, an unmixing-based correction is applied as preprocessing. Spectral features of trees are obtained by principal component analysis of the hyperspectral data. The sizes and shapes of individual trees are derived from the LiDAR data after individual tree crown delineation. Both spectral and tree-crown features are combined and input into a support vector machine classifier pixel-by-pixel. This procedure is applied to data taken over the Tama Forest Science Garden in Tokyo, Japan, to classify it into 16 classes of tree species. It is found that both shadow correction and tree-crown information improve the classification performance, which is further improved by postprocessing based on tree-crown information derived from the LiDAR data. Regarding the classification results in the case of 10% training data, when using random sampling of pixels to select training samples, a classification accuracy of 82% was obtained, while the use of reference polygons as a more practical means of sample selection reduced the accuracy to 71%. These values are respectively 21.5% and 9% higher than those obtained using hyperspectral data only.
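
    The pixel-wise classification step described above (spectral principal components combined with LiDAR-derived crown features, fed to an SVM) can be sketched as follows; the arrays, feature counts, and SVM parameters are placeholders, not the values used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# toy stand-ins: per-pixel spectra, LiDAR crown features (e.g. height, size), labels
rng = np.random.default_rng(0)
spectra = rng.random((2000, 100))
crown = rng.random((2000, 2))
labels = rng.integers(0, 16, 2000)

spectral_features = PCA(n_components=10).fit_transform(spectra)  # spectral features
features = np.hstack([spectral_features, crown])                 # combine with crown features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(features, labels)
predicted_species = clf.predict(features)                        # per-pixel tree species
```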


  12. N. Yokoya and A. Iwasaki, “Object detection based on sparse representation and Hough voting for optical remote sensing imagery,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 5, pp. 2053-2062, 2015.

    Abstract: We present a novel method for detecting instances of an object class or specific object in high-spatial-resolution optical remote sensing images. The proposed method integrates sparse representations for local-feature detection into generalized-Hough-transform object detection. Object parts are detected via class-specific sparse image representations of patches using learned target and background dictionaries, and their co-occurrence is spatially integrated by Hough voting, which enables object detection. We aim to efficiently detect target objects using a small set of positive training samples by matching essential object parts with a target dictionary while the residuals are explained by a background dictionary. Experimental results show that the proposed method achieves state-of-the-art performance for several examples including object-class detection and specific-object identification.
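
    The part-detection step above rests on comparing sparse reconstruction residuals under a target dictionary and a background dictionary. A minimal sketch of that residual test, using orthogonal matching pursuit from scikit-learn and random placeholder dictionaries (the Hough-voting stage that spatially aggregates detected parts is not shown):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_residual(D, x, n_nonzero=5):
    """Reconstruction residual of patch x coded over dictionary D (columns = atoms)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False).fit(D, x)
    return np.linalg.norm(x - D @ omp.coef_)

# placeholder learned dictionaries and a test patch (8x8 patch flattened to 64 values)
rng = np.random.default_rng(0)
D_target = rng.standard_normal((64, 40))
D_background = rng.standard_normal((64, 60))
patch = rng.standard_normal(64)

# a patch is treated as an object part if the target dictionary explains it better
is_object_part = sparse_residual(D_target, patch) < sparse_residual(D_background, patch)
```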


  13. M. Kokawa, N. Yokoya, H. Ashida, J. Sugiyama, M. Tsuta, M. Yoshimura, K. Fujita, and M. Shibata, “Visualization of gluten, starch, and butter in pie pastry by fluorescence fingerprint imaging,” Food and Bioprocess Technology, online ISSN: 1935-5149, Sep. 2014.

    Abstract: The distribution of starches, proteins, and fat in baked foods determines their texture and palatability, and there is a great demand for techniques to visualize the distributions of these constituents. In this study, the distributions of gluten, starch, and butter in pie pastry were visualized without any staining by using the fluorescence fingerprint (FF). The FF, also known as the excitation–emission matrix (EEM), is a set of fluorescence spectra acquired at consecutive excitation wavelengths. Fluorescence images of the sample were acquired with excitation and emission wavelengths in the ranges of 270–320 and 350–420 nm, respectively, at 10-nm increments. The FFs of each pixel were unmixed into the FFs and abundances of five constituents, gluten, starch, butter, ferulic acid, and the microscope slide, by using the least squares method coupled with constraints of non-negativity, full additivity, and quantum restraint on the abundances of the slide glass. The calculated abundances of butter, starch, and gluten at each pixel were converted to shades of red (R), green (G), and blue (B), respectively, and an RGB image showing the distribution of these three constituents was composited. The composited images showed high correspondence with the images acquired with the conventional staining method. Furthermore, the ratio of gluten, starch, and butter in short pastry was calculated from the abundance images. The calculated ratio was 16.6:37.6:45.8, which was very close to the actual ratio of 12.7:38.8:48.5, and further proved the accuracy of this imaging method.
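
    The per-pixel unmixing step above is a constrained least-squares problem. The sketch below handles only the non-negativity constraint with SciPy's nnls and approximates additivity by renormalizing; the full additivity and slide-glass constraints of the paper are not reproduced, and the reference fingerprints are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# E: flattened reference fluorescence fingerprints (one column per constituent,
# e.g. gluten, starch, butter, ferulic acid, slide glass); pixel: measured FF
rng = np.random.default_rng(0)
E = rng.random((6 * 8, 5))                      # e.g. 6 excitation x 8 emission wavelengths
true_abundances = np.array([0.35, 0.35, 0.2, 0.05, 0.05])
pixel = E @ true_abundances + 0.01 * rng.random(48)

abundances, _ = nnls(E, pixel)                  # least squares with non-negativity
abundances /= abundances.sum()                  # crude stand-in for the additivity constraint
print(abundances)
```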


  14. N. Yokoya, S. Nakazawa, T. Matsuki, and A. Iwasaki, “Fusion of hyperspectral and LiDAR data for landscape visual quality assessment,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 6, pp. 2419-2425, 2014.

    Abstract: Landscape visual quality is an important factor associated with daily experiences and influences our quality of life. In this work, the authors present a method of fusing airborne hyperspectral and mapping light detection and ranging (LiDAR) data for landscape visual quality assessment. From the fused hyperspectral and LiDAR data, classification and depth images at any location can be obtained, enabling physical features such as land-cover properties and openness to be quantified. The relationship between physical features and human landscape preferences is learned using least absolute shrinkage and selection operator (LASSO) regression. The proposed method is applied to the hyperspectral and LiDAR datasets provided for the 2013 IEEE GRSS Data Fusion Contest. The results showed that the proposed method successfully learned a human perception model that enables the prediction of landscape visual quality at any viewpoint for a given demographic used for training. This work is expected to contribute to automatic landscape assessment and optimal spatial planning using remote sensing data.
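
    The preference model above is learned with LASSO regression; below is a minimal scikit-learn sketch with synthetic physical features (land-cover fractions, openness, and so on) and synthetic preference scores, purely as an illustration of the regression step rather than the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# toy physical features per viewpoint and toy preference scores
rng = np.random.default_rng(0)
X = rng.random((200, 12))
w_true = np.zeros(12)
w_true[[0, 3, 7]] = [1.5, -0.8, 0.6]            # only a few features matter
y = X @ w_true + 0.05 * rng.standard_normal(200)

model = LassoCV(cv=5).fit(X, y)                 # L1 penalty selects relevant features
print(model.coef_)                              # sparse weights
print(model.predict(X[:3]))                     # predicted visual quality for viewpoints
```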


  15. N. Yokoya, J. Chanussot, and A. Iwasaki, “Nonlinear unmixing of hyperspectral data using semi-nonnegative matrix factorization,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 2, pp. 1430-1437, 2014.

    Abstract: Nonlinear spectral mixture models have recently received particular attention in hyperspectral image processing. In this work, we present a novel optimization method of nonlinear unmixing based on a generalized bilinear model (GBM), which considers the second-order scattering of photons in a spectral mixture model. Semi-nonnegative matrix factorization (Semi-NMF) is used for the optimization to process a whole image in matrix form. When endmember spectra are given, the optimization of abundance and interaction abundance fractions converges to a local optimum by alternating update rules with simple implementation. The proposed method is evaluated using synthetic datasets with respect to its robustness against the accuracy of endmember extraction and spectral complexity, and it shows smaller errors in abundance fractions than conventional methods. GBM-based unmixing using Semi-NMF is applied to the analysis of an airborne hyperspectral image taken over an agricultural field with many endmembers, and it visualizes the impact of a nonlinear interaction on abundance maps at reasonable computational cost.
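
    For reference, the generalized bilinear model used above augments the linear mixture with pairwise endmember interaction terms. The sketch below implements only this forward model with NumPy (the Semi-NMF optimization that the paper contributes is not shown); shapes and values are toy assumptions.

```python
import numpy as np

def gbm_mixture(E, a, gamma):
    """Generalized bilinear model: x = E a + sum_{i<j} gamma_ij a_i a_j (e_i * e_j).

    E: (n_bands, p) endmember spectra, a: (p,) abundances, gamma: (p, p) interaction coefficients.
    """
    x = E @ a
    p = E.shape[1]
    for i in range(p):
        for j in range(i + 1, p):
            x += gamma[i, j] * a[i] * a[j] * (E[:, i] * E[:, j])
    return x

# toy example: three endmembers over 50 bands
rng = np.random.default_rng(0)
E = rng.random((50, 3))
a = np.array([0.5, 0.3, 0.2])
gamma = rng.random((3, 3))           # only the upper triangle (i < j) is used
observed = gbm_mixture(E, a, gamma)
```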


  16. N. Yokoya, N. Mayumi, and A. Iwasaki, “Cross-calibration for data fusion of EO-1/Hyperion and Terra/ASTER,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 6, no. 2, pp. 419-426, 2013.

    Abstract: The data fusion of low spatial-resolution hyperspectral and high spatial-resolution multispectral images enables the production of high spatial-resolution hyperspectral data with small spectral distortion. EO-1/Hyperion is the world’s first hyperspectral sensor. It was launched in 2001 and has a similar orbit to Terra/ASTER. In this work, we apply hyperspectral and multispectral data fusion to EO-1/Hyperion and Terra/ASTER datasets by the preprocessing of datasets and the onboard cross-calibration of sensor characteristics. The relationship of the spectral response function is determined by convex optimization by comparing hyperspectral and multispectral images over the same spectral range. After accurate image registration, the relationship of the point spread function is obtained by estimating a matrix that acts as Gaussian blur filter between two images. Two pansharpening-based methods and one unmixing-based method are adopted for hyperspectral and multispectral data fusion and their properties are investigated.


  17. N. Yokoya, T. Yairi, and A. Iwasaki, “Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 2, pp. 528-537, 2012.

    Abstract: Coupled non-negative matrix factorization (CNMF) unmixing is proposed for the fusion of low-spatial-resolution hyperspectral and high-spatial-resolution multispectral data to produce fused data with high spatial and spectral resolutions. Both hyperspectral and multispectral data are alternately unmixed into endmember and abundance matrices by the CNMF algorithm based on a linear spectral mixture model. Sensor observation models that relate the two data are built into the initialization matrix of each NMF unmixing procedure. This algorithm is physically straightforward and easy to implement owing to its simple update rules. Simulations with various image datasets demonstrate that the CNMF algorithm can produce high-quality fused data both in terms of spatial and spectral domains, which contributes to the accurate identification and classification of materials observed at a high spatial resolution.
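
    The coupling described above alternates two NMF problems that share the endmember matrix through sensor observation models. The following is a loose NumPy sketch of that structure with standard multiplicative updates and toy degradation operators; it omits the sum-to-one constraints and the initialization strategy of the published CNMF algorithm, and all shapes are assumptions.

```python
import numpy as np

def nmf_updates(X, W, H, n_iter=200, update_W=True, update_H=True, eps=1e-9):
    """Multiplicative-update NMF steps for X ~ W H with non-negative factors."""
    for _ in range(n_iter):
        if update_H:
            H *= (W.T @ X) / (W.T @ W @ H + eps)
        if update_W:
            W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy coupled data: hs = W A S (low spatial res), ms = R W A (low spectral res)
rng = np.random.default_rng(0)
k, n_bands, n_ms, n_hi, ratio = 4, 60, 4, 400, 4
A_true = rng.dirichlet(np.ones(k), size=n_hi).T                       # high-res abundances
W_true = rng.random((n_bands, k))                                     # endmember spectra
R = rng.random((n_ms, n_bands)); R /= R.sum(axis=1, keepdims=True)    # spectral response
S = np.kron(np.eye(n_hi // ratio), np.full((ratio, 1), 1.0 / ratio))  # spatial blur/downsample
hs, ms = W_true @ A_true @ S, R @ W_true @ A_true

# CNMF-style alternation: each unmixing keeps one factor tied to the other problem
W, A_hi = rng.random((n_bands, k)), rng.random((k, n_hi))
for _ in range(5):
    W, _ = nmf_updates(hs, W, A_hi @ S, update_H=False)    # unmix HS: refine endmembers
    _, A_hi = nmf_updates(ms, R @ W, A_hi, update_W=False) # unmix MS: refine abundances

fused = W @ A_hi   # high-spatial-resolution hyperspectral estimate
```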


  18. N. Yokoya, N. Miyamura, and A. Iwasaki, “Detection and correction of spectral and spatial misregistrations for hyperspectral data using phase correlation method,” Applied Optics, vol. 49, no. 24, pp. 4568-4575, 2010.

    Abstract: Hyperspectral imaging sensors suffer from spectral and spatial misregistrations. These artifacts prevent the accurate acquisition of spectra and thus reduce classification accuracy. The main objective of this work is to detect and correct spectral and spatial misregistrations of hyperspectral images. The Hyperion visible near-infrared (VNIR) subsystem is used as an example. An image registration method based on phase correlation demonstrates the precise detection of the spectral and spatial misregistrations. Cubic spline interpolation using estimated properties makes it possible to modify the spectral signatures. The accuracy of the proposed postlaunch estimation of the Hyperion characteristics is comparable to that of the prelaunch measurements, which enables the precise onboard calibration of hyperspectral sensors.
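
    The core detection step above relies on phase correlation between band images. A minimal integer-shift version is sketched below with toy data (the paper estimates subpixel misregistrations, which this example does not do):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (row, col) shift of image a relative to image b via phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized cross-power spectrum
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks from the far half of the array back to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# toy check: shift an image by (3, -5) pixels and recover the shift
rng = np.random.default_rng(0)
image = rng.random((128, 128))
shifted = np.roll(image, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, image))          # -> (3, -5)
```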

Conference papers

  1. D. Hong, N. Yokoya, J. Chanussot, and X. X. Zhu, “Learning a low-coherence dictionary to address spectral variability for hyperspectral unmixing,” Proc. ICIP, Beijing, China, September 17-20, 2017.

    Abstract: This paper presents a novel spectral mixture model to address spectral variability in inverse problems of hyperspectral unmixing. Based on the linear mixture model (LMM), our model introduces a spectral variability dictionary to account for any residuals that cannot be explained by the LMM. Atoms in the dictionary are assumed to be low-coherent with spectral signatures of endmembers. A dictionary learning technique is proposed to learn the spectral variability dictionary while solving unmixing problems simultaneously. Experimental results on synthetic and real datasets demonstrate that the performance of the proposed method is superior to state-of-the-art methods.


  2. N. Yokoya, P. Ghamisi, and J. Xia, “Multimodal, multitemporal, and multisource global data fusion for local climate zones classification based on ensemble learning,” Proc. IGARSS, Texas, USA, July 23-28, 2017.

    Abstract: This paper presents a new methodology for classification of local climate zones based on ensemble learning techniques. Landsat-8 data and open street map data are used to extract spectral-spatial features, including spectral reflectance, spectral indexes, and morphological profiles fed to subsequent classification methods as inputs. Canonical correlation forests and rotation forests are used for the classification step. The final classification map is generated by majority voting on different classification maps obtained by the two classifiers using multiple training subsets. The proposed method achieved an overall accuracy of 74.94% and a kappa coefficient of 0.71 in the 2017 IEEE GRSS Data Fusion Contest.


  3. J. Xia, N. Yokoya, and A. Iwasaki, “Tree species classification in Japanese mixed forest with hyperspectral and LiDAR data using rotation forest algorithm,” Proc. EARSeL IS, Zurich, Switzerland, April 19-21, 2017.


  4. J. Xia, N. Yokoya, and A. Iwasaki, “A novel ensemble classifier of hyperspectral and LiDAR data using morphological features,” Proc. ICASSP, New Orleans, US, March 5-9, 2017.


  5. J. Xia, N. Yokoya, and A. Iwasaki, “Mapping of large size hyperspectral imagery using fast machine learning algorithms,” Proc. ACRS, Colombo, Sri Lanka, October 17-21, 2016.


  6. N. Yokoya, X. X. Zhu, and A. Plaza, “Graph-regularized coupled spectral unmixing for multisensor time-series analysis,” Proc. WHISPERS, LA, US, August 21-24, 2016.

    Abstract: A new methodology that solves unmixing problems involving a set of multisensor time-series spectral images is proposed in order to understand dynamic changes of the surface at a subpixel scale. The proposed methodology couples multiple unmixing problems via regularization on graphs between the multisensor time-series data to obtain robust and stable unmixing solutions beyond data modalities owing to different sensor characteristics and the effects of non-optimal atmospheric correction. A synthetic dataset that includes seasonal and trend changes on the surface and the residuals of non-optimal atmospheric correction is used for numerical validation. Experimental results demonstrate the effectiveness of the proposed methodology.


  7. N. Yokoya and P. Ghamisi, “Land-cover monitoring using time-series hyperspectral data via fractional-order Darwinian particle swarm optimization segmentation,” Proc. WHISPERS, LA, US, August 21-24, 2016.

    Abstract: This paper presents a new method for unsupervised detection of multiple changes using time-series hyperspectral data. The proposed method is based on fractional-order Darwinian particle swarm optimization (FODPSO) segmentation. The proposed method is applied to monitor land-cover changes following the Fukushima Daiichi nuclear disaster using multitemporal Hyperion images. Experimental results indicate that the integration of segmentation and a time-series of hyperspectral images has great potential for unsupervised detection of multiple changes.


  8. J. C.-W. Chan and N. Yokoya, “Mapping land covers of Brussels capital region using spatially enhanced hyperspectral images,” Proc. WHISPERS, LA, US, August 21-24, 2016.


  9. D. Hong, N. Yokoya, and X. X. Zhu, “The K-LLE algorithm for nonlinear dimensionality reduction of large-scale hyperspectral data,” Proc. WHISPERS, LA, US, August 21-24, 2016.


  10. D. Hong, N. Yokoya, and X. X. Zhu, “Local manifold learning with robust neighbors selection for hyperspectral dimensionality reduction,” Proc. IGARSS, Beijing, China, July 10-15, 2016.


  11. N. Yokoya and A. Iwasaki, “Generalized-Hough-transform object detection using class-specific sparse representation for local-feature detection,” Proc. IGARSS, Milan, Italy, July 26-31, 2015.


  12. D. Niina, N. Yokoya, and A. Iwasaki, “Detector anomaly detection and stripe correction of hyperspectral data,” Proc. IGARSS, Milan, Italy, July 26-31, 2015.


  13. L. Loncan, L. B. Almeida, J. Bioucas Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. A. Licciardi, M. Simoes, J. Y. Tourneret, M. A. Veganzones, G. Vivone, Q. Wei, and N. Yokoya, “Comparison of nine hyperspectral pansharpening methods,” Proc. IGARSS, Milan, Italy, July 26-31, 2015.


  14. N. Yokoya and X. X. Zhu, “Graph regularized coupled spectral unmixing for change detection,” Proc. WHISPERS, Tokyo, Japan, June 2-5, 2015.

    Abstract: This paper presents a methodology of coupled spectral unmixing for multitemporal hyperspectral data analysis. Coupled spectral unmixing simultaneously extracts the sets of spectral signatures of endmembers and the respective abundance maps from multiple spectral images with differences in observation conditions and sensor characteristics. The problem is formulated in the framework of coupled nonnegative matrix factorization. A graph regularization that reflects spectral correlation between two images on abundance fractions is introduced into the optimization of coupled spectral unmixing to consider temporal changes of the Earth's surface. An alternating optimization algorithm is investigated using the method of Lagrange multipliers to guarantee stable convergence. The proposed method was applied to dual-temporal Hyperion images taken over the Fukushima Daiichi nuclear power plant. Experimental results showed that the proposed method can extract essential information on the Earth's surface in a data-driven manner beyond multitemporal data modality.


  15. T. Takayama, N. Yokoya, and A. Iwasaki, “Optimal hyperspectral classification for paddy field with semisupervised self-learning,” Proc. WHISPERS, Tokyo, Japan, June 2-5, 2015.

    Abstract: Monitoring and management of paddy fields are key elements not only for stable production but also for ensuring national food security. Classification of growth stages with remote sensing data is expected to be a highly effective solution, as it can capture a large area in a single observation. In general, pixel-based classification is one of the most attractive choices. However, acquiring a sufficient number of field survey plots for classification is not easy in terms of time and cost. This problem can negatively impact the accuracy of the classification map. In this paper, we propose a semisupervised classification method that considers the characteristics of paddy fields in order to provide an optimal classification map from hyperspectral data.


  16. N. Yokoya, M. Kokawa, and J. Sugiyama, “Spectral unmixing of fluorescence fingerprint imagery for visualization of constituents in pie pastry,” Proc. ICIP, Paris, France, October 27-30, 2014.

    Abstract: In this work, we present a new method that combines fluorescence fingerprint (FF) imaging and spectral unmixing to visualize microstructures in food. The method is applied to visualization of three constituents, gluten, starch, and butter, in two types of pie pastry. It is challenging to discriminate between starch and butter because both of them can be represented by similar FFs of low intensities. Two optimization approaches of FF unmixing that consider qualitative knowledge are presented and validated by comparison to the conventional staining method. Although starch and butter were represented by very similar FFs, a constrained-least-squares method with abundance quantization successfully visualized the distributions of constituents in pie pastry.


  17. C. F. Liew, N. Yokoya, and T. Yairi, “Facial alignment by using sparse initialization and random forest,” Proc. ICIP, Paris, France, October 27-30, 2014.


  18. N. Yokoya and A. Iwasaki, “Object localization based on sparse representation for remote sensing imagery,” Proc. IGARSS, Québec, Canada, July 13-18, 2014.

    Abstract: In this paper, we propose a new object localization method named sparse representation based object localization (SROL), which is based on the generalized Hough-transform-based approach using sparse representations for parts detection. The proposed method was applied to car and ship detection in remote sensing images and its performance was compared to those of state-of-the-art methods. Experimental results showed that the SROL algorithm can accurately localize categorical objects or a specific object using a small size of training data.


  19. N. Yokoya and A. Iwasaki, “Airborne unmixing-based hyperspectral super-resolution using RGB imagery,” Proc. IGARSS, Québec, Canada, July 13-18, 2014.

    Abstract: This paper presents an airborne experiment on unmixing-based hyperspectral super-resolution using RGB imagery. Preprocessing is described to ensure spatial and spectral consistency between hyperspectral and RGB images. An extended version of coupled nonnegative matrix factorization (CNMF) is introduced for multisensor hyperspectral super-resolution to deal with a challenging problem setting, i.e., only three spectral channels for higher spatial information and a 10-fold difference of ground sampling distance. The proposed method successfully estimated the high-spatial-resolution red-edge image. Numerical evaluation by comparing the high-spatial-resolution hyperspectral image to ground-measured spectra demonstrated recovery of pure-pixel spectra by the proposed method.


  20. N. Yokoya and A. Iwasaki, “Effect of unmixing-based hyperspectral super-resolution on target detection,” Proc. WHISPERS, Lausanne, Switzerland, June 24-27, 2014.

    Abstract: We present an airborne experiment on unmixing-based hyperspectral super-resolution using RGB imagery to examine the restoration of pure spectra comparing with ground-measured spectra and demonstrate its impact on target detection. An extended version of coupled nonnegative matrix factorization (CNMF) is used for hyperspectral super-resolution to deal with a challenging problem setting. Our experiment showed that the extended CNMF can restore pure spectra, which contribute to accurate target detection.


  21. T. Matsuki, N. Yokoya, and A. Iwasaki, “Hyperspectral tree species classification with an aid of LiDAR data,” Proc. WHISPERS, Lausanne, Switzerland, June 24-27, 2014.

    Abstract: Classification of tree species is one of the most important applications in remote sensing. A methodology to classify tree species using hyperspectral and LiDAR data is proposed. The data processing consists of shadow correction, individual tree crown delineation, classification by support vector machine (SVM), and postprocessing by a smoothing filter. The authors applied this procedure to data taken over the Tama Forest Science Garden in Tokyo, Japan, and classified it into 16 classes of tree species. As a result, the authors achieved a classification accuracy of 79% with 10% training data, which is 17% higher than that obtained using hyperspectral data only. Shadow correction and morphological processing derived from LiDAR data increase the accuracy by 3% and 14%, respectively.


  22. T. Matsuki, N. Yokoya, and A. Iwasaki, “Fusion of hyperspectral and LiDAR data for tree species classification,” Proc. 34th ACRS, Bali, Indonesia, Oct. 20-24, 2013.

    Abstract: Tree species classification is one of the most important applications in remote sensing. In this study, the authors propose a methodology to classify tree species using hyperspectral and LiDAR data. The method consists of shadow correction, individual tree crown delineation, and classification by support vector machine (SVM). Shadows in hyperspectral data are modified by unmixing. Individual tree crown delineation is achieved by a local-maxima and region-growing method applied to a LiDAR-derived canopy height model (CHM). The input variables of the SVM classifiers are the principal components of the hyperspectral data and the canopy form (height and size). The authors applied this method to the hyperspectral and LiDAR dataset taken over the Tama Forest Science Garden in Tokyo and classified the data into 19 classes. As a result, we achieved a classification accuracy of 68%, which is 20% higher than that obtained using hyperspectral data only.


  23. N. Yokoya and A. Iwasaki, “Hyperspectral and multispectral data fusion mission on hyperspectral imager suite (HISUI),” Proc. IGARSS, Melbourne, Australia, Jul. 21-26, 2013.

    Abstract: Hyperspectral Imager Suite (HISUI) is the Japanese next-generation earth-observing sensor composed of hyperspectral and multispectral imagers. Unmixing-based fusion of hyperspectral and multispectral data enables the production of high-spatial-resolution hyperspectral data. A HISUI-simulated imaging system combining the two imagers was developed for verification experiments to investigate the feasibility and clarify the whole procedure of the hyperspectral and multispectral data fusion mission on HISUI. Airborne experiments are planned as simulation tests of HISUI higher-order products. The experimental results of the ground-based observation showed the importance of preprocessing and cross-calibration for the final quality of the fused data, which contributes to the practical use of hyperspectral and multispectral data fusion.


  24. N. Yokoya and A. Iwasaki, “Design of combined optical imagers using unmixing-based hyperspectral data fusion,” Proc. WHISPERS, Florida, USA, Jun. 25-28, 2013.

    Abstract: Unmixing-based hyperspectral and multispectral data fusion enables the production of high-spatial-resolution and hyperspectral imagery with small spectral errors. In this work, we present sensor design of combined optical imagers using unmixing-based data fusion, which aims to fuse hyperspectral and multispectral sensors and improve the performance of the final fused data. Owing to the degeneracy of the data cloud and additive noise, there is an optimal range in the relationship of spatial resolutions between two imagers.


  25. N. Yokoya and A. Iwasaki, “Optimal design of hyperspectral imager suite (HISUI) for hyperspectral and multispectral data fusion,” Proc. ISRS, Tokyo, Japan, May 15-17, 2013.

    Abstract: The spatial resolution of hyperspectral sensors is lower than that of multispectral and panchromatic imagers to maintain better signal-to-noise ratios (SNRs). Hyperspectral Imager Suite (HISUI) is a Japanese next-generation earth-observing imager consisting of hyperspectral and multispectral cameras that have 30 and 5 m ground sampling distances (GSDs), respectively. Unmixing-based hyperspectral and multispectral data fusion enables the production of high-spatial-resolution hyperspectral imagery with small spectral errors by alternately unmixing the two datasets to obtain endmember spectra and high-spatial-resolution abundance maps. In this work, we present a sensor design of combined optical imagers using unmixing-based data fusion, which aims to fuse hyperspectral and multispectral sensors and improve the performance of the final fused data. Assuming HISUI/VNIR specifications, such as spectral bandwidth and SNR, we investigate the optimal relationship of spatial resolution from the perspective of data fusion. The final performance of fused data is determined by the accuracies of the two unmixings. Owing to the degeneracy of the data cloud, there is an optimal range in the relationship of spatial resolutions between two imagers. When the spatial resolution of the multispectral imager is fixed at 5 m, 20-30 m is the optimal range for the spatial resolution of the hyperspectral imager, which is the actual design point of HISUI. Hyperspectral and multispectral data fusion can usher in a new concept for satellite sensor design that aims to obtain high-spatial-resolution and high-spectral-resolution data by observing spectral information using a hyperspectral camera and spatial information provided by a multispectral camera. HISUI is a promising sensor that enables hyperspectral and multispectral data fusion and the production of high-spatial-resolution hyperspectral data, which will bring a major breakthrough in hyperspectral remote sensing applications.


  26. N. Yokoya, J. Chanussot, and A. Iwasaki, “Generalized bilinear model based nonlinear unmixing using semi-nonnegative matrix factorization,” Proc. IGARSS, Munich, Germany, Jul. 22-27, 2012.

    Abstract: Nonlinear spectral mixing models have been recently receiving attention in hyperspectral image processing. This paper presents a novel optimization method for nonlinear unmixing based on a generalized bilinear model (GBM), which considers second-order scattering effects. Semi-nonnegative matrix factorization is used for the optimization to process a whole image in a matrix form. The proposed method is applied to an airborne hyperspectral image with many endmembers and shows good performance both in unmixing quality and computational cost with simple implementation. The effect of endmember extraction on nonlinear unmixing is investigated and the impacts of the nonlinearity on abundance maps are demonstrated.


  27. A. Iwasaki, N. Yokoya, T. Arai, Y. Itoh, and N. Miyamura, “Similarity measure for spatial-spectral registration in hyperspectral era,” Proc. IGARSS, Munich, Germany, Jul. 22-27, 2012.

    Abstract: In the hyperspectral era, data registration in the spectral domain is an important issue in addition to that in the spatial domain. Detection of the smile and keystone phenomena that are caused by aberrations in the spectrometer is related to this registration activity, which is crucial for data fusion research. Hyperspectral Imager Suite (HISUI) is a next-generation Japanese optical sensor that is composed of a hyperspectral imager and a multispectral imager, and it will be launched on the Advanced Land Observing Satellite 3 (ALOS-3). Three similarity measures, normalized cross correlation (NCC), phase correlation (PC), and mutual information (MI), for spatial-spectral registration of hyperspectral data are discussed for Level-1 data processing of HISUI.


  28. N. Yokoya, J. Chanussot, and A. Iwasaki, “Hyperspectral and multispectral data fusion based on nonlinear unmixing,” Proc. WHISPERS, Shanghai, China, Jun. 4-7, 2012.

    Abstract: Data fusion of low spatial-resolution hyperspectral (HS) and high spatial-resolution multispectral (MS) images based on a linear mixing model (LMM) enables the production of high spatial-resolution HS data with small spectral distortion. This paper extends LMM-based HS-MS data fusion to a nonlinear mixing model using a bilinear mixing model (BMM), which considers second-order scattering of photons between two distinct materials. A generalized bilinear model (GBM) is able to deal with the underlying assumptions in the BMM. The GBM is applied to HS-MS data fusion to produce high-quality fused data that account for the multiple-scattering effect. Semi-nonnegative matrix factorization (Semi-NMF), which can be easily incorporated with the existing LMM-based fusion method, is introduced as a new optimization method for GBM unmixing. Compared with LMM-based HS-MS data fusion, the proposed method showed better results on synthetic datasets.


  29. N. Yokoya, T. Yairi, and A. Iwasaki, “Coupled non-negative matrix factorization for hyperspectral and multispectral data fusion: application for pasture classification,” Proc. IGARSS, Vancouver, Canada, Jul. 24-29, 2011.

    Abstract: Coupled non-negative matrix factorization (CNMF) is introduced for hyperspectral and multispectral data fusion. The CNMF fused data have little spectral distortion while enhancing the spatial resolution of all hyperspectral band images owing to its unmixing-based algorithm. CNMF is applied to a synthetic dataset generated from real airborne hyperspectral data taken over a pasture area. The spectral quality of the fused data is evaluated by the classification accuracy of pasture types. The experimental results show that CNMF enables accurate identification and classification of the observed materials at a fine spatial resolution.


  30. N. Yokoya, T. Yairi, and A. Iwasaki, “Hyperspectral, multispectral, and panchromatic data fusion based on non-negative matrix factorization,” Proc. WHISPERS, Lisbon, Portugal, Jun. 6-9, 2011.

    Abstract: Coupled non-negative matrix factorization (CNMF) is applied to hyperspectral, multispectral, and panchromatic data fusion. This unmixing-based method extracts and fuses hyperspectral endmember spectra and high-spatial-resolution abundance maps using these three data sources. An experiment with a synthetic dataset simulating ALOS-3 (Advanced Land Observing Satellite 3) data shows that the CNMF method can produce fused data that have both high spatial and spectral resolution with small spectral distortion.


  31. N. Yokoya and A. Iwasaki, “A maximum noise fraction transform based on a sensor noise model for hyperspectral data,” Proc. 31st ACRS, Hanoi, Vietnam, Nov. 1-5, 2010.

    Abstract: The maximum noise fraction (MNF) transform, which produces the improved order of components by signal to noise ratio (SNR), has been commonly used for spectral feature extraction from hyperspectral remote sensing data before image classification. When hyperspectral data contains a spectral distortion, also known as a “smile” property, the first component of the MNF, which should have high image quality, suffers from noisy brightness gradient pattern which thus reduces classification accuracy. This is probably because the classic noise estimation of the MNF is different from the real noise model. The noise estimation is the most important procedure because the noise covariance matrix determines the characteristics of the MNF transform. An improved noise estimation method from a single image based on a noise model of a charge coupled device (CCD) sensor is introduced to enhance the feature extraction performance of the MNF. This method is applied to both airborne and spaceborne hyperspectral data, acquired from the airborne visible infrared/imaging spectrometer (AVIRIS) and the EO-1/Hyperion, respectively. The experiment for the Hyperion data demonstrates that the proposed MNF is resistant to the spectral distortion of hyperspectral data. Furthermore, the image classification experiment for the AVIRIS Indian pines data using the MNF as a preprocessing step to extract spectral features shows that the proposed method extracts higher SNR components in lower MNF components than the existing feature extraction methods.
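
    For context, the MNF transform itself amounts to a PCA after whitening the data by an estimated noise covariance. The sketch below uses the classic shift-difference noise estimate (the paper's contribution is to replace this with a CCD sensor noise model estimated from a single image); the cube and its shapes are toy assumptions.

```python
import numpy as np

def mnf_transform(X, noise):
    """MNF: whiten by the noise covariance, then apply PCA; components ordered by SNR.

    X, noise: (n_pixels, n_bands) data matrix and an estimate of its noise component.
    """
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(noise, rowvar=False))
    whitener = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T   # Cn^{-1/2}
    Xw = Xc @ whitener
    _, _, Vt = np.linalg.svd(Xw, full_matrices=False)                   # PCA of whitened data
    return Xw @ Vt.T

# toy hyperspectral cube and the classic horizontal shift-difference noise estimate
rng = np.random.default_rng(0)
cube = rng.random((80, 80, 30))
X = cube.reshape(-1, 30)
noise_est = (cube[:, :-1, :] - cube[:, 1:, :]).reshape(-1, 30) / np.sqrt(2)
components = mnf_transform(X, noise_est)
```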


  32. N. Yokoya, N. Miyamura, and A. Iwasaki, “Preprocessing of hyperspectral imagery with consideration of smile and keystone properties,” Proc. SPIE 7857-10, Incheon, Korea, Oct. 11-15, 2010.

    Abstract: Satellite hyperspectral imaging sensors suffer from “smile” and “keystone” properties, which appear as distortions of the spectrum image. The smile property is a center-wavelength shift and the keystone property is a band-to-band misregistration. These distortions degrade the spectral information and reduce classification accuracies. Furthermore, these properties may change after launch. Therefore, in the preprocessing of satellite hyperspectral imagery, the onboard correction of the smile and keystone properties using only the observed images is an important issue, as is the radiometric and geometric correction. The main objective of this work is to build a prototype of hyperspectral image preprocessing that takes the smile and keystone properties into consideration. Image registration based on phase correlation is proposed to detect the smile and keystone properties. By estimating the distortion of an atmospheric absorption line, the smile property is detected, and by estimating band-to-band misregistration, the keystone property is detected. Cubic spline interpolation is adopted to modify the spectrum because of its good trade-off between smoothness and shape preservation. The smile and keystone correction is built into the preprocessing together with the radiometric and geometric correction. The Hyperion visible near-infrared (VNIR) subsystem is used as simulation data. Maximum noise fraction (MNF) analysis confirms that the smile and keystone distortions are corrected. The precise detection and correction of the smile and keystone properties make it possible to maximize the spectral performance of the hyperspectral imagery. The proposed method is a prototype of the preprocessing for future satellite hyperspectral sensors.


  33. N. Yokoya, N. Miyamura, and A. Iwasaki, “Detection and correction of spectral and spatial misregistration for hyperspectral data,” Proc. IGARSS, Honolulu, HI, Jul. 25-30, 2010.

    Abstract: Hyperspectral imaging sensors suffer from spectral and spatial misregistrations. These artifacts prevent the accurate acquisition of the spectra and thus reduce classification accuracy. The main objective of this work is to detect and correct spectral and spatial misregistrations of hyperspectral images. The Hyperion visible near-infrared (VNIR) subsystem is used as an example. An image registration method using normalized cross-correlation for characteristic lines in spectrum image demonstrates its effectiveness for detection of the spectral and spatial misregistrations. Cubic spline interpolation using estimated properties makes it possible to modify the spectral signatures. The accuracy of the proposed postlaunch estimation of the Hyperion properties has been proven to be comparable to that of the prelaunch measurements, which enables the precise onboard calibration of hyperspectral sensors.


  34. A. Iwasaki, M. Koga, H. Kanno, N. Yokoya, T. Okuda, and K. Saito, “Challenge of ASTER digital elevation model,” Proc. IGARSS, Honolulu, HI, Jul. 25-30, 2010.

    Abstract: The accuracy of the digital elevation model (DEM) obtained by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), which has along-track stereo vision, is investigated. The pointing offset and stability of the radiometer are one cause of the geometric deviation of the ASTER DEM and its attached orthorectified image. A correction methodology to be implemented in the data processing is suggested. Fine-tuning of the image-matching procedure leads to better reproduction of the topography. A comparison with a reference DEM is described.

Technical report

  1. N. Yokoya and A. Iwasaki, “Airborne hyperspectral data over Chikusei,” Space Appl. Lab., Univ. Tokyo, Japan, Tech. Rep. SAL-2016-05-27, May 2016.

    Abstract: Airborne hyperspectral datasets were acquired by Hyperspec-VNIR-C (Headwall Photonics Inc.) over agricultural and urban areas in Chikusei, Ibaraki, Japan, on July 29, 2014, as one of the flight campaigns supported by KAKENHI 24360347. This technical report summarizes the experiment. The hyperspectral data and ground truth were made available to the scientific community.