Naoto Yokoya
University of Tokyo
Keywords: hyperspectral, remote sensing, pattern recognition, data fusion

Research Topics

Hyperspectral Superresolution
Hyperspectral Image Restoration
Spectral Unmixing
Hyperspectral Classification
Object Detection
Change Detection
Interdisciplinary Application

Hyperspectral Superresolution

Hyperspectral and Multispectral Data Fusion

Coupled non-negative matrix factorization (CNMF) unmixing is proposed for the fusion of low-spatial-resolution hyperspectral and high-spatial-resolution multispectral data to produce fused data with both high spatial and high spectral resolution. The hyperspectral and multispectral data are alternately unmixed into endmember and abundance matrices by the CNMF algorithm based on a linear spectral mixture model. Sensor observation models that relate the two datasets are built into the initialization matrix of each NMF unmixing procedure. The algorithm is physically straightforward and easy to implement owing to its simple update rules. Simulations with various image datasets demonstrate that CNMF can produce high-quality fused data in both the spatial and spectral domains, which contributes to the accurate identification and classification of materials observed at high spatial resolution.
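As a rough illustration of the alternating scheme, the sketch below implements coupled NMF with standard Lee-Seung multiplicative updates, assuming a known spectral response matrix R and spatial degradation operator S; the function and variable names are illustrative, and the initialization and stopping criteria of the published algorithm are simplified.

```python
import numpy as np

def nmf_update_W(V, W, H, eps=1e-9):
    """One multiplicative update of W in V ≈ W H (Frobenius-norm NMF)."""
    return W * (V @ H.T) / (W @ H @ H.T + eps)

def nmf_update_H(V, W, H, eps=1e-9):
    """One multiplicative update of H in V ≈ W H."""
    return H * (W.T @ V) / (W.T @ W @ H + eps)

def cnmf_fusion(Y_h, Y_m, R, S, n_end=10, n_outer=5, n_inner=100, seed=0):
    """Coupled NMF sketch for hyperspectral/multispectral fusion.

    Y_h : (L, n_low)      low-spatial-resolution hyperspectral image (bands x pixels)
    Y_m : (l, n_high)     high-spatial-resolution multispectral image
    R   : (l, L)          spectral response matrix (MS bands from HS bands)
    S   : (n_high, n_low) spatial degradation (blur + downsampling) operator
    Returns the fused high-spatial-resolution hyperspectral image E @ A.
    """
    rng = np.random.default_rng(seed)
    L = Y_h.shape[0]
    n_high = Y_m.shape[1]
    E = np.abs(rng.standard_normal((L, n_end)))       # endmember spectra
    A = np.abs(rng.standard_normal((n_end, n_high)))  # high-resolution abundances

    for _ in range(n_outer):
        # Unmix the hyperspectral image: refine E with the abundances fixed
        # to the spatially degraded high-resolution abundances A @ S.
        A_low = A @ S
        for _ in range(n_inner):
            E = nmf_update_W(Y_h, E, A_low)
        # Unmix the multispectral image: refine A with the endmembers fixed
        # to the spectrally degraded endmembers R @ E.
        E_ms = R @ E
        for _ in range(n_inner):
            A = nmf_update_H(Y_m, E_ms, A)

    return E @ A  # fused data: hyperspectral bands at multispectral resolution
```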
Related Publication
N. Yokoya, T. Yairi, and A. Iwasaki, "Coupled non-negative matrix factorization unmixing for hyperspectral and multispectral data fusion," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 2, pp. 528-537, 2012.

Hyperion and ASTER Data Fusion

The fusion of low-spatial-resolution hyperspectral and high-spatial-resolution multispectral images enables the production of high-spatial-resolution hyperspectral data with little spectral distortion. EO-1/Hyperion, the first spaceborne hyperspectral sensor, was launched in 2000 and flies in an orbit similar to that of Terra/ASTER. In this work, we apply hyperspectral and multispectral data fusion to EO-1/Hyperion and Terra/ASTER datasets after preprocessing and cross-calibration of the sensor characteristics. The relationship between the spectral response functions is determined by convex optimization, comparing the hyperspectral and multispectral images over the same spectral range. After accurate image registration, the relationship between the point spread functions is obtained by estimating a matrix that acts as a Gaussian blur filter between the two images. Two pansharpening-based methods and one unmixing-based method are adopted for hyperspectral and multispectral data fusion, and their properties are investigated.
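As an illustration of the spectral-response estimation, the sketch below models each multispectral band as a non-negative combination of hyperspectral bands and solves for the weights with non-negative least squares, a simple convex program; the function name and the assumption of spatially co-registered inputs are mine, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_srf(Y_h, Y_m):
    """Estimate a relative spectral response matrix R such that Y_m ≈ R @ Y_h.

    Y_h : (L, n) hyperspectral spectra (bands x pixels), co-registered with Y_m
    Y_m : (l, n) multispectral spectra over the same pixels
    Each multispectral band is fit as a non-negative combination of the
    hyperspectral bands by non-negative least squares.
    """
    L = Y_h.shape[0]
    l = Y_m.shape[0]
    R = np.zeros((l, L))
    for b in range(l):
        R[b], _ = nnls(Y_h.T, Y_m[b])  # min ||Y_h.T r - y_b||_2  s.t.  r >= 0
    return R
```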
Related Publication
N. Yokoya, N. Mayumi, and A. Iwasaki, "Cross-calibration for data fusion of EO-1/Hyperion and Terra/ASTER," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 6, no. 2, pp. 419-426, 2013.

Hyperspectral Image Restoration

Smile and Keystone Correction

Hyperspectral imaging sensors suffer from spectral and spatial misregistrations. These artifacts prevent the accurate acquisition of spectra and thus reduce classification accuracy. The main objective of this work is to detect and correct the spectral and spatial misregistrations of hyperspectral images, commonly called "smile" and "keystone", respectively. The Hyperion visible near-infrared (VNIR) subsystem is used as an example. An image registration method based on phase correlation precisely detects the spectral and spatial misregistrations. Cubic spline interpolation using the estimated properties makes it possible to correct the spectral signatures. The accuracy of the proposed postlaunch estimation of the Hyperion characteristics is comparable to that of the prelaunch measurements, which enables precise onboard calibration of hyperspectral sensors.
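The registration step relies on phase correlation; a minimal integer-pixel sketch is given below (subpixel accuracy, which smile/keystone estimation requires, would additionally fit the correlation peak), with illustrative names.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the translation between two images from the peak of the
    normalized cross-power spectrum (phase correlation)."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross_power = F_a * np.conj(F_b)
    cross_power /= np.abs(cross_power) + 1e-12     # keep only the phase
    corr = np.fft.ifft2(cross_power).real          # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]  # wrap to signed shifts
    return shift  # (row shift, column shift) in pixels, integer precision
```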
Related Publication
N. Yokoya, N. Miyamura, and A. Iwasaki, "Detection and correction of spectral and spatial misregistrations for hyperspectral data using phase correlation method," Applied Optics, vol. 49, no. 24, pp. 4568-4575, 2010.

Spectral Unmixing

Nonlinear Spectral Unmixing

Nonlinear spectral mixture models have recently received particular attention in hyperspectral image processing. We present a novel optimization method for nonlinear unmixing based on the generalized bilinear model (GBM), which accounts for the second-order scattering of photons in the spectral mixture model. Semi-nonnegative matrix factorization (Semi-NMF) is used for the optimization, allowing a whole image to be processed in matrix form. When the endmember spectra are given, the optimization of the abundance and interaction-abundance fractions converges to a local optimum via alternating update rules that are simple to implement. The proposed method is evaluated on synthetic datasets with respect to its robustness to endmember-extraction accuracy and spectral complexity, and it yields smaller abundance errors than conventional methods. GBM-based unmixing using Semi-NMF is also applied to an airborne hyperspectral image taken over an agricultural field with many endmembers; it visualizes the impact of nonlinear interactions on the abundance maps at reasonable computational cost.
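For illustration only, the sketch below constructs the GBM's bilinear terms from given endmembers and estimates linear and interaction abundances per pixel by non-negative least squares; the paper instead uses Semi-NMF multiplicative updates on the whole image in matrix form and constrains the interaction coefficients as gamma_ij * a_i * a_j with gamma_ij in [0, 1], which this simplified version relaxes.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

def gbm_unmix(Y, E):
    """Simplified generalized bilinear model (GBM) unmixing sketch.

    Y : (L, n) observed spectra (bands x pixels)
    E : (L, p) known endmember spectra, p >= 2
    The GBM adds second-order terms e_i * e_j (elementwise) with interaction
    abundances; here linear and interaction abundances are estimated jointly
    per pixel by non-negative least squares. The interaction coefficients are
    treated as free non-negative variables, a relaxation of the GBM constraint.
    """
    L, p = E.shape
    pairs = list(combinations(range(p), 2))
    F = np.stack([E[:, i] * E[:, j] for i, j in pairs], axis=1)  # bilinear endmembers
    M = np.hstack([E, F])                                        # extended mixing matrix
    A = np.zeros((p, Y.shape[1]))            # linear abundances
    B = np.zeros((len(pairs), Y.shape[1]))   # interaction abundances
    for k in range(Y.shape[1]):
        x, _ = nnls(M, Y[:, k])
        A[:, k], B[:, k] = x[:p], x[p:]
    return A, B
```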

Related Publication
N. Yokoya, J. Chanussot, and A. Iwasaki, "Nonlinear unmixing of hyperspectral data using semi-nonnegative matrix factorization," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 2, pp. 1430-1437, 2014.

Spectral Unmixing of Fluorescence Fingerprint Imagery

The distributions of starches, proteins, and fat in baked foods determine their texture and palatability, and there is great demand for techniques to visualize the distributions of these constituents. In this study, the distributions of gluten, starch, and butter in pie pastry were visualized without any staining by using the fluorescence fingerprint (FF). The FF, also known as the excitation–emission matrix (EEM), is a set of fluorescence spectra acquired at consecutive excitation wavelengths. The FF of each pixel was unmixed into the FFs and abundances of five constituents (gluten, starch, butter, ferulic acid, and the microscope slide) by least squares coupled with constraints of non-negativity, full additivity, and a quantum restraint on the abundance of the slide. The composite RGB image, assigning the abundance images of butter, gluten, and starch to red, blue, and green, respectively, showed high correspondence with images acquired by the conventional staining method.
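A minimal sketch of the constrained least-squares unmixing is given below, assuming reference FFs for the pure constituents are available; full additivity is enforced softly via an appended weighted row, and the quantum restraint on the slide abundance used in the paper is omitted. All names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(Y, S, delta=1e3):
    """Least-squares unmixing with non-negativity and (soft) sum-to-one constraints.

    Y : (d, n) measured fluorescence fingerprints, flattened per pixel
    S : (d, p) reference fingerprints of the pure constituents
    The full-additivity constraint is enforced by appending a heavily weighted
    row of ones to the system before solving with non-negative least squares.
    """
    d, p = S.shape
    S_aug = np.vstack([S, delta * np.ones((1, p))])
    A = np.zeros((p, Y.shape[1]))
    for k in range(Y.shape[1]):
        y_aug = np.append(Y[:, k], delta)
        A[:, k], _ = nnls(S_aug, y_aug)
    return A  # abundance fractions per constituent per pixel
```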
Related Publication
M. Kokawa, N. Yokoya, H. Ashida, J. Sugiyama, M. Tsuta, M. Yoshimura, K. Fujita, and M. Shibata, "Visualization of gluten, starch, and butter in pie pastry by fluorescence fingerprint imaging," Food and Bioprocess Technology, online ISSN: 1935-5149, Sep. 2014.

Hyperspectral Classification

IEEE Data Fusion Contest 2013: Best Classification Challenge

We participated in the Best Classification Challenge of the IEEE Data Fusion Contest 2013 and placed 5th out of more than 50 participants. Hyperspectral and LiDAR-derived DSM datasets were provided with training data for 15 classes. The hyperspectral data contain a large cloud shadow, which makes classification challenging. Our method is summarized as follows. As preprocessing, shadows cast by clouds and buildings in the hyperspectral data were detected and corrected using illumination distributions calculated from the spectra and a geometric shadow map obtained from the LiDAR-derived DSM. The DSM was also corrected to reduce terrain effects. The nearest neighbor (NN) algorithm was then applied to a fused feature combining spectral angle and height difference. The final classification map was obtained by applying iterative high-dimensional bilateral filtering to the NN classification map to reduce noise.
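The classification step might look like the sketch below, which combines the spectral angle to each training sample with a weighted height difference and assigns the label of the nearest neighbor; the weight w and all names are illustrative, and the shadow correction and bilateral filtering steps are not shown.

```python
import numpy as np

def spectral_angle(x, Y):
    """Spectral angle (radians) between spectrum x and each row of Y."""
    num = Y @ x
    den = np.linalg.norm(Y, axis=1) * np.linalg.norm(x) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def nn_classify(test_spec, test_height, train_spec, train_height, train_labels, w=1.0):
    """Nearest-neighbor classification on a fused spectral-angle / height-difference distance.

    test_spec   : (n, L) test spectra        test_height  : (n,) DSM heights
    train_spec  : (m, L) training spectra    train_height : (m,) DSM heights
    train_labels: (m,)   class labels (NumPy array)
    w           : weight balancing the two distance terms (illustrative)
    """
    labels = np.empty(test_spec.shape[0], dtype=train_labels.dtype)
    for i in range(test_spec.shape[0]):
        d = spectral_angle(test_spec[i], train_spec) + w * np.abs(test_height[i] - train_height)
        labels[i] = train_labels[np.argmin(d)]
    return labels
```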

Object Detection

Object Detection for High-Resolution Optical Remote Sensing Imagery

The spatial resolution of optical remote sensing imagers has been improving, particularly over the last decade, with, for example, the GeoEye, WorldView, and Pleiades series. Automated object detection is required to complement human interpretation of high-spatial-resolution optical remote sensing imagery. We have developed a novel method for detecting instances of an object class, or a specific object, in high-spatial-resolution optical remote sensing images. The proposed method integrates sparse representations for local-feature detection into generalized-Hough-transform object detection. Object parts are detected via class-specific sparse image representations of patches using learned target and background dictionaries, and their co-occurrence is spatially integrated by Hough voting, which enables object detection. We aim to efficiently detect target objects using a small set of positive training samples by matching essential object parts against the target dictionary while the residuals are explained by the background dictionary.
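A rough sketch of the detection idea follows: each patch is sparse-coded against target and background dictionaries with orthogonal matching pursuit, and patches better explained by the target dictionary cast Hough votes for an object center through a per-atom offset table; the offset table, the voting rule, and all names are assumptions, and the published method is more elaborate.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def detect_by_hough_voting(patches, positions, D_target, D_background,
                           offsets, image_shape, n_nonzero=5):
    """Sparse-representation part detection with Hough voting (sketch).

    patches      : (n, d) vectorized local patches, positions : (n, 2) their centers
    D_target     : (d, k_t) dictionary learned from object parts
    D_background : (d, k_b) dictionary learned from background
    offsets      : (k_t, 2) offset from each target atom to the object center (illustrative)
    Returns a vote accumulator whose peaks indicate detected object locations.
    """
    votes = np.zeros(image_shape)
    for x, pos in zip(patches, positions):
        a_t = orthogonal_mp(D_target, x, n_nonzero_coefs=n_nonzero)
        a_b = orthogonal_mp(D_background, x, n_nonzero_coefs=n_nonzero)
        r_t = np.linalg.norm(x - D_target @ a_t)        # target reconstruction residual
        r_b = np.linalg.norm(x - D_background @ a_b)    # background reconstruction residual
        if r_t < r_b:                                   # patch looks like an object part
            atom = int(np.argmax(np.abs(a_t)))          # dominant target atom
            center = np.round(pos + offsets[atom]).astype(int)
            if 0 <= center[0] < image_shape[0] and 0 <= center[1] < image_shape[1]:
                votes[center[0], center[1]] += 1        # Hough vote for the object center
    return votes
```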
Related Publication
N. Yokoya and A. Iwasaki, "Object detection based on sparse representation and Hough voting for optical remote sensing imagery," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 5, pp. 2053-2062, 2015.

Change Detection

Multisensor Coupled Spectral Unmixing for Time-Series Analysis

Improved understanding of surface dynamics is expected from synergistic analysis of time-series spaceborne hyperspectral and multispectral images (e.g., EnMAP, Sentinel-2, and Landsat-8). We propose a new framework, called multisensor coupled spectral unmixing (MuCSUn), that solves unmixing problems involving a set of multisensor time-series spectral images to analyze dynamic changes of the surface at a subpixel scale. The proposed methodology couples the individual unmixing problems via regularization on graphs between the time-series data to obtain unmixing solutions that are robust and stable across data modalities, despite differences in sensor characteristics and the effects of imperfect atmospheric correction. The methodology was applied to a real dataset composed of 11 Hyperion and 22 Landsat-8 images and produced robust and stable results that visualize class-specific changes at a subpixel scale.
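The coupling can be caricatured as a graph-regularized projected-gradient step over the abundance matrices, as sketched below; this is not the MuCSUn algorithm itself, and the objective, step size, and names are assumptions.

```python
import numpy as np

def coupled_unmix_step(Y_list, E_list, A_list, W, lam=0.1, lr=1e-3):
    """One projected-gradient step of a graph-regularized coupled unmixing sketch.

    Y_list : list of (L_t, n) co-registered spectral images (possibly different sensors)
    E_list : list of (L_t, p) endmember matrices, one per image
    A_list : list of (p, n) abundance matrices, one per image
    W      : (T, T) graph weights coupling pairs of acquisitions
    The penalty lam * sum_{t,s} W[t,s] * ||A_t - A_s||_F^2 encourages abundances
    of connected acquisitions to agree (illustrative objective).
    """
    T = len(Y_list)
    new_A = []
    for t in range(T):
        grad = E_list[t].T @ (E_list[t] @ A_list[t] - Y_list[t])  # data-fit gradient
        for s in range(T):                                        # graph coupling gradient
            grad += 2 * lam * W[t, s] * (A_list[t] - A_list[s])
        A = np.maximum(A_list[t] - lr * grad, 0.0)                # non-negativity
        A /= A.sum(axis=0, keepdims=True) + 1e-12                 # simple surrogate for sum-to-one
        new_A.append(A)
    return new_A
```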
Related Publication
N. Yokoya, X. X. Zhu, and A. Plaza, "Multisensor coupled spectral unmixing for time-series analysis," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 5, pp. 2842-2857, 2017.

Interdisciplinary Application

Landscape Visual Quality Assessment

Landscape visual quality is an important factor in daily experience and influences our quality of life. In this work, we present a method for fusing airborne hyperspectral and light detection and ranging (LiDAR) data for landscape visual quality assessment. From the fused hyperspectral and LiDAR data, classification and depth images at any location can be obtained, enabling physical features such as land-cover properties and openness to be quantified. The relationship between the physical features and human landscape preferences is learned using least absolute shrinkage and selection operator (LASSO) regression. The proposed method is applied to the hyperspectral and LiDAR datasets provided for the 2013 IEEE GRSS Data Fusion Contest. The results show that the proposed method successfully learns a human perception model that enables the prediction of landscape visual quality at any viewpoint for the demographic group used for training. This work is expected to contribute to automatic landscape assessment and optimal spatial planning using remote sensing data.
The original concept of this work was developed for the Best Paper Challenge of the IEEE Data Fusion Contest 2013, where we placed 4th out of 30 international teams.
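The preference-learning step can be sketched with scikit-learn's LASSO as below; the cross-validated variant, the feature layout, and all names are illustrative choices rather than the paper's exact setup.

```python
from sklearn.linear_model import LassoCV

def learn_preference_model(features, scores):
    """Learn a landscape-preference model with LASSO regression (sketch).

    features : (n_views, n_features) physical features extracted from the fused
               hyperspectral/LiDAR data (e.g., land-cover fractions, openness)
    scores   : (n_views,) human preference ratings for the same viewpoints
    LassoCV selects the regularization strength by cross-validation and drives
    the weights of uninformative features to zero.
    """
    return LassoCV(cv=5).fit(features, scores)

# Predicted visual quality for new viewpoints:
# quality = learn_preference_model(X_train, y_train).predict(X_new)
```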
Related Publication
N. Yokoya, S. Nakazawa, T. Matsuki, and A. Iwasaki, "Fusion of hyperspectral and LiDAR data for landscape visual quality assessment," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 6, pp. 2419-2425, 2014.