Reduced dimensionality hyperspectral classification using finite mixture models

Vikram Jayaram, University of Texas at El Paso

Abstract

Classification of hyperspectral imaging (HSI) data is a challenging problem for two main reasons. First, because of the limited spatial resolution of HSI sensors and/or the distance to the observed scene, the images invariably contain pixels composed of several materials. It is desirable to resolve the contributions of these constituents from the observed image without relying on high-spatial-resolution imagery. Remote sensing cameras are designed to capture a wide spectral range, which motivates the use of post-processing techniques to distinguish materials by their spectral signatures. Second, the training data available for most pattern recognition problems in HSI processing is severely inadequate. Hughes demonstrated the impact of this problem on a theoretical basis within the framework of statistical classifiers. To address this second problem, feature extraction and optimal band selection are the methods most commonly used to find useful features in high-dimensional data. On the other hand, reduced-dimensionality algorithms suffer a theoretical loss of performance, which arises from reducing the data to features and then approximating those features with PDFs.

Traditional hyperspectral classification algorithms have used deterministic techniques such as the spectral angle mapper (SAM) and minimum Euclidean distance (MED). Although these supervised methods are easy to implement, they do not adequately model the inherent behavior of the HSI pixel vector in the reduced-dimensional case, primarily because they lack a statistical treatment. Hyperspectral data represents a mixture of several component spectra from many classifiable sources. Knowledge of the contributions of the underlying sources to the recorded spectra is valuable in many remote sensing applications and thus demands further investigation.
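To make the two deterministic baselines concrete, here is a minimal sketch (not taken from the dissertation) of per-pixel SAM and MED classification against per-class mean spectra; the function names and the `class_means` argument are illustrative assumptions:

```python
import numpy as np

def sam_angle(pixel, ref):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = pixel @ ref / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_sam(pixel, class_means):
    """SAM rule: assign the class whose mean spectrum subtends the smallest angle."""
    return int(np.argmin([sam_angle(pixel, m) for m in class_means]))

def classify_med(pixel, class_means):
    """MED rule: assign the class whose mean spectrum is nearest in Euclidean distance."""
    return int(np.argmin([np.linalg.norm(pixel - m) for m in class_means]))
```

Both rules depend only on geometric distance to a class prototype, which is why they carry no statistical description of within-class variability.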
In this dissertation, we propose a hidden Markov model (HMM) based probability density function (PDF) classifier for the reduced-dimensional feature space. The proposed classifier is derived from two finite mixture models. We utilize a Gaussian mixture model (GMM) with dynamic component allocation to model the mixture classes; the classification scheme incorporates the HMM, which in turn uses the GMM to represent class-specific features. The HMM is a powerful stochastic model that can closely approximate many naturally occurring phenomena. However, a single HMM cannot easily discriminate among a wide variety of signal classes; instead, it is best to design one specifically for each signal type and feature type.

The Markovian principle assumes that consecutive samples are statistically independent when conditioned on the samples that preceded them. This leads to an elegant HMM formulation that employs a set of M PDFs of dimension P. The HMM regards each of the K samples as having originated from one of the M possible states, with a distinct probability that the underlying model "jumps" from one state to another. Our approach uses an unsupervised learning scheme for maximum-likelihood (ML) parameter estimation that combines model selection and estimation in a single algorithm, and it can be applied to any parametric mixture model trained with the EM algorithm. Our experiments show that the proposed method both models and synthesizes well the observations of HSI data in a reduced-dimensional feature space. The likelihood measurements obtained from the HMM-trained classes are then used to derive the classifier rules. We provide comparative results of the proposed methodology against the ML classifier and popular deterministic classifiers such as the MED and the parallelepiped.
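The mixture-model building block can be sketched as follows. This is a simplified illustration, not the dissertation's implementation: it fits a fixed number of diagonal-covariance Gaussian components by EM (the dissertation's scheme additionally allocates components dynamically and embeds the mixtures in an HMM), and classification assigns a feature set to the class model with the highest log-likelihood:

```python
import numpy as np

def _log_densities(X, w, mu, var):
    """Per-sample, per-component log densities log(w_m N(x | mu_m, var_m))."""
    return (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                    + np.log(2 * np.pi * var)).sum(axis=2)
            + np.log(w))

def fit_gmm(X, M, iters=50, seed=0):
    """Fit an M-component diagonal-covariance GMM to X (K x P) with EM."""
    rng = np.random.default_rng(seed)
    K, P = X.shape
    mu = X[rng.choice(K, M, replace=False)]            # initialize means from data
    var = np.tile(X.var(axis=0) + 1e-6, (M, 1))        # initialize variances
    w = np.full(M, 1.0 / M)                            # uniform mixing weights
    for _ in range(iters):
        # E-step: responsibilities (row-normalized, so the max shift cancels)
        logp = _log_densities(X, w, mu, var)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / K
        mu = (r.T @ X) / nk[:, None]
        var = np.maximum((r.T @ (X ** 2)) / nk[:, None] - mu ** 2, 1e-6)
    return w, mu, var

def gmm_loglik(X, w, mu, var):
    """Total log-likelihood of X under the fitted mixture (log-sum-exp over components)."""
    logp = _log_densities(X, w, mu, var)
    m = logp.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(logp - m).sum(axis=1))).sum())
```

A likelihood-based decision rule then reduces to `argmax` of `gmm_loglik` over the per-class models, which is the same comparison the HMM-based classifier performs with class-conditional likelihoods.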
The classification results show that the proposed classifier model outperforms the other classifiers in our study in terms of overall classification accuracy. Unlike classical classifiers, the derived technique can circumvent the curse of dimensionality if each class can be represented (statistically described) by a separate low-dimensional feature set.

The outcome of this dissertation presents a seamless integration of advanced data-analysis and modeling tools for scientists, advancing the state of the practice in applying satellite image data to many types of Earth System Science studies. With the ever-increasing volume of Earth Science data and the computational requirements of Earth system models, the proposed methodology could establish a new paradigm of methods that increases both the productivity of researchers in the Science Mission Directorate and the science return from NASA data. (Abstract shortened by UMI.)

Subject Area

Engineering, Electronics and Electrical

Recommended Citation

Jayaram, Vikram, "Reduced dimensionality hyperspectral classification using finite mixture models" (2009). ETD Collection for University of Texas, El Paso. AAI3371745.
http://digitalcommons.utep.edu/dissertations/AAI3371745