Nonlinear graph-based dimensionality reduction (DR) algorithms have been shown to be very effective at yielding low-dimensional representations of hyperspectral image data. However, the steps of graph construction and eigenvector computation often suffer from prohibitive computational and memory requirements. In this paper, we develop a semi-supervised deep auto-encoder network (SSDAN) that is capable of generating mappings that approximate the embeddings computed by the nonlinear DR methods. The SSDAN is trained on only a small subset of the original data and enables an expert user to provide constraints that bias data points from the same class towards being mapped closely together. Once the SSDAN is trained on a small subset of the data, it can be used to map the rest of the data to the lower-dimensional space without requiring the complicated out-of-sample extension procedures that are often necessary in nonlinear DR methods. Experiments on publicly available hyperspectral imagery (Indian Pines and Salinas) demonstrate that SSDANs compute low-dimensional embeddings that yield good results when input to pixel-wise classification algorithms.
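The core idea above — train a network on a small subset whose embedding coordinates a nonlinear DR method has already produced, then map the remaining pixels directly through the network — can be illustrated with a toy numpy sketch. This is not the authors' SSDAN (no class-based constraints, and a single hidden layer rather than a deep architecture); the synthetic data, network sizes, and learning-rate choices are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 "pixels" with 10 spectral bands, plus 2-D
# embedding coordinates as if some nonlinear DR method had produced
# them (here simulated as a smooth nonlinear function of the bands).
X = rng.normal(size=(200, 10))
Y = np.tanh(X @ rng.normal(size=(10, 2)))      # surrogate DR embedding

train = slice(0, 50)                            # small labeled subset
test = slice(50, 200)                           # out-of-sample pixels

# One-hidden-layer network trained by full-batch gradient descent to
# regress embedding coordinates from spectra (toy stand-in for SSDAN).
W1 = rng.normal(scale=0.1, size=(10, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 2));  b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X[train] @ W1 + b1)             # hidden activations
    P = H @ W2 + b2                             # predicted coordinates
    G = (P - Y[train]) / 50                     # dMSE/dP
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H**2)                # backprop through tanh
    gW1 = X[train].T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The trained network maps unseen pixels directly -- no graph-based
# out-of-sample extension (e.g. a Nystroem-style step) is needed.
pred = np.tanh(X[test] @ W1 + b1) @ W2 + b2
mse = float(np.mean((pred - Y[test]) ** 2))
```

The point of the sketch is the last two lines: the mapping is a closed-form forward pass, so extending the embedding to new pixels costs one matrix product per layer rather than a new eigenvector computation.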
KEYWORDS: Target detection, Hyperspectral target detection, Detection and tracking algorithms, Hyperspectral imaging, RGB color model, Image segmentation, Control systems, Silver, Principal component analysis, Algorithm development
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets interactively. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
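The biased-cut machinery the abstract refers to can be sketched in a few lines of numpy on a toy graph: compute the spectrum of the normalized graph Laplacian once, then, for each user-provided seed, reweight the eigenvectors by their correlation with the seed (the combination from Maji, Vishnoi, and Malik's BNC formulation). The two-cluster data, affinity bandwidth, and bias parameter gamma below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image": two spectral clusters of 15 points each in 3-D.
A = rng.normal(loc=0.0, scale=0.3, size=(15, 3))
B = rng.normal(loc=2.0, scale=0.3, size=(15, 3))
X = np.vstack([A, B])
n = len(X)

# Gaussian affinity graph over the points.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)
D = W.sum(1)

# Symmetric normalized Laplacian and its full spectrum (computed once;
# each new seed below reuses it, which is what makes BNC interactive).
Dm = 1.0 / np.sqrt(D)
L = np.eye(n) - Dm[:, None] * W * Dm[None, :]
lam, U = np.linalg.eigh(L)
V = Dm[:, None] * U                      # generalized eigenvectors

# Seed vector marking one expert-labeled "target" pixel in cluster A,
# made D-orthogonal to the constant vector.
s = np.zeros(n); s[0] = 1.0
s -= (D @ s) / D.sum()

# Biased combination: weight each eigenvector by its D-inner product
# with the seed, damped by (lam - gamma); skip the trivial eigenvector.
gamma = -0.1
x = sum((V[:, i] @ (D * s)) / (lam[i] - gamma) * V[:, i]
        for i in range(1, n))
```

Because the spectrum is computed once and only the cheap weighted sum depends on the seed, swapping in a different expert label re-biases the cut almost for free — the property the abstract exploits for interactive target selection.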
Nonlinear graph-based dimensionality reduction algorithms such as Laplacian Eigenmaps (LE) and Schroedinger Eigenmaps (SE) have been shown to be very effective at yielding low-dimensional representations of hyperspectral image data. However, the steps of graph construction and eigenvector computation required by LE and SE can be prohibitively costly as the number of image pixels grows. In this paper, we propose pre-clustering the hyperspectral image into Simple Linear Iterative Clustering (SLIC) superpixels and then performing LE- or SE-based dimensionality reduction with the superpixels as input. We then investigate how different superpixel size and regularity choices yield trade-offs between improvements in computational efficiency and accuracy of subsequent classification using the low-dimensional representations.
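The pipeline described — aggregate pixels into superpixels, then run graph-based DR on the much smaller superpixel set — can be sketched as follows. For self-containment the sketch pools a regular grid of blocks instead of running real SLIC (actual SLIC superpixels adapt to image content), and the cube size, bandwidth, and block size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hyperspectral cube: 20x20 pixels, 8 bands.
cube = rng.normal(size=(20, 20, 8))

# Stand-in for SLIC: pool 4x4-pixel blocks and average their spectra.
# The graph shrinks from 400 pixel nodes to 25 superpixel nodes, which
# is the source of the computational savings discussed above.
block = 4
sp = cube.reshape(20 // block, block, 20 // block, block, 8).mean(axis=(1, 3))
S = sp.reshape(-1, 8)                   # 25 superpixel mean spectra

# Laplacian Eigenmaps on the superpixels: Gaussian affinity, then the
# smallest nontrivial eigenvectors of the random-walk Laplacian.
d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0.0)
D = W.sum(1)
L = np.eye(len(S)) - W / D[:, None]     # random-walk normalized Laplacian
lam, V = np.linalg.eig(L)
order = np.argsort(lam.real)
emb = V[:, order[1:3]].real             # 2-D embedding, skipping the trivial one

print(emb.shape)                        # → (25, 2): one point per superpixel
```

The trade-off the abstract investigates shows up directly here: larger blocks shrink the eigenproblem further but average away more within-superpixel spectral detail, which is what eventually degrades downstream classification.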