We explore some variants of “Gaussianization” for characterizing the distribution of background pixels in multi-spectral and hyperspectral imagery, and then use this characterization to develop algorithms for target detection. We consider two very different problems, anomalous change detection and gas-phase plume detection, as ways to explore the applicability of Gaussianization for remote sensing image analysis. One variant is an extension of the Gaussianization concept to non-Gaussian reference distributions; in particular, we show that using the multivariate t as the reference distribution often leads to better target detection performance. Since we are no longer, strictly speaking, Gaussianizing, we call the method iterative rotation and remarginalization. In our scheme, the remarginalization is achieved with a parametric transformation function built up from a linear basis of (hard or soft) hinge functions, which provide explicitly differentiable and enforceably monotonic remarginalization functions. An efficient knot-pruning strategy enables rapid training of these functions. Also, for remote sensing imagery with many spectral channels, we have found it advantageous to pre-whiten the data with axes aligned to principal components, and then to selectively Gaussianize only the top principal components, treating the lower-variance directions as “already Gaussian.” This provides a computationally faster and empirically more effective Gaussianization for spectral imagery.
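As a concrete illustration of the remarginalization functions described above, the sketch below builds a monotonic one-dimensional transform from a linear basis of hard hinge (ReLU) functions and evaluates its explicit derivative. The knot placement, coefficient values, and toy usage are illustrative assumptions; this is not the paper's training procedure or knot-pruning strategy.

```python
# Minimal sketch (not the authors' code): a monotonic 1-D "remarginalization"
# transform built from a linear basis of hard hinge functions.
import numpy as np

def hinge_basis(x, knots):
    """Hard hinges max(x - t, 0), one column per knot t."""
    return np.maximum(x[:, None] - knots[None, :], 0.0)

def monotone_transform(x, knots, slope0, coeffs):
    """f(x) = slope0 * x + sum_k coeffs_k * relu(x - t_k).
    With slope0 > 0 and coeffs >= 0 the transform is strictly increasing,
    which keeps the change-of-variables Jacobian well defined."""
    return slope0 * x + hinge_basis(x, knots) @ coeffs

def transform_derivative(x, knots, slope0, coeffs):
    """Explicit derivative: slope0 plus the coefficients of active hinges."""
    return slope0 + (x[:, None] > knots[None, :]).astype(float) @ coeffs

# Toy usage: map a skewed sample through the monotone hinge expansion.
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.0, size=1000)
knots = np.quantile(x, np.linspace(0.1, 0.9, 9))   # knots at empirical quantiles
slope0, coeffs = 0.1, np.full(len(knots), 0.05)    # nonnegative -> monotone
y = monotone_transform(x, knots, slope0, coeffs)
dy = transform_derivative(x, knots, slope0, coeffs)
assert np.all(dy > 0)                              # monotonicity is enforced
```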
Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the earth than that of any single source; the challenge is in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. Meeting this challenge requires effective algorithms for combining multi-modal imagery over time to identify subtle but real changes amid the intrinsic data variation. Here, we implement a joint-distribution framework for multi-sensor anomalous change detection (MSACD) that can effectively account for these differences in modality, and does not require any signal resampling of the pixel measurements. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use the multi-year construction of SoFi Stadium in California as our testbed, and exploit synthetic aperture radar imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We show results for MSACD using real imagery with implanted, measurable changes, as well as real imagery with real, observable changes, including scaling our analysis over multiple years.
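To make the joint-distribution idea concrete, here is a minimal sketch of a Gaussian-model anomalous change detector: each co-registered pixel pair is scored by how improbable it is under the joint distribution relative to the product of the two per-image marginals. The function name, covariance estimation, and toy data are assumptions for illustration, not the MSACD implementation used in the paper.

```python
# Sketch of a joint-distribution anomalous change detector under a Gaussian
# model: score = -log p(x, y) + log p(x) + log p(y), up to constants.
import numpy as np

def acd_scores(X, Y):
    """X: (N, dx) pixels from image 1; Y: (N, dy) co-registered pixels from
    image 2 (possibly a different modality). Larger score = more anomalous."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Z = np.hstack([X, Y])
    n = len(Z)
    Kz = Z.T @ Z / n          # joint covariance
    Kx = X.T @ X / n          # marginal covariance, image 1
    Ky = Y.T @ Y / n          # marginal covariance, image 2
    qz = np.einsum('ni,ij,nj->n', Z, np.linalg.inv(Kz), Z)
    qx = np.einsum('ni,ij,nj->n', X, np.linalg.inv(Kx), X)
    qy = np.einsum('ni,ij,nj->n', Y, np.linalg.inv(Ky), Y)
    return qz - qx - qy

# Toy usage: 10,000 "pixels" from a 4-band and a 2-band sensor.
rng = np.random.default_rng(1)
X = rng.normal(size=(10000, 4))
Y = 0.5 * X[:, :2] + rng.normal(scale=0.5, size=(10000, 2))
scores = acd_scores(X, Y)
```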
In this work, we leverage deep learning to reproduce and expand Synthetic Aperture Radar (SAR)-based deforestation detections generated using a probabilistic Bayesian model. Our Bayesian-updating deforestation detections leverage SAR backscatter and InSAR coherence to perform change detection on forested areas and detect deforestation regardless of cloud cover. However, this model does not capture all deforestation events and is better suited to near-real-time alerting than to accurate forest loss acreage estimates. Here, we use SAR-based probabilistic detections as deforestation labels and Sentinel-2 optical composites as input features to train a neural network to differentiate deforested patches at various stages of regrowth from native forest. The deep learning model predictions demonstrate excellent recall of the original Bayesian labels but low precision, largely because the model provides better coverage of deforestation and detects deforested patches not included in the imperfect Bayesian labels. These results provide an avenue to improve existing deforestation models, specifically with regard to their ability to quantify deforested acreage.
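The sketch below illustrates the kind of training setup described above: optical composite patches as input features and SAR-derived Bayesian detections as (imperfect) binary labels. The band count, patch size, architecture, and optimizer settings are stand-in assumptions, not the authors' configuration.

```python
# Hypothetical patch classifier: Sentinel-2-style composites in,
# deforested vs. native-forest logit out.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, n_bands=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),          # single logit: deforested vs. native forest
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

# Toy training step on random tensors standing in for (patches, labels).
model = PatchClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
patches = torch.randn(8, 10, 32, 32)          # 8 patches, 10 bands, 32x32 px
labels = torch.randint(0, 2, (8,)).float()    # Bayesian-model detections as labels
loss = loss_fn(model(patches), labels)
loss.backward()
opt.step()
```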
Gaussianization is a recently suggested approach for density estimation from data drawn from a decidedly non-Gaussian, and possibly high-dimensional, distribution. The key idea is to learn a transformation that, when applied to the data, leads to an approximately Gaussian distribution. The density, for any given point in the original distribution, is then given by the absolute value of the determinant of the transformation's Jacobian at that point, multiplied by the (analytically known) density of the Gaussian for the transformed data. In this work, we investigate the use of distilled machine learning to provide a compact implementation of the Gaussianization transform (which in usual practice is obtained iteratively), thereby enabling faster computation, better-controlled regularization, and more direct estimation of the Jacobian. While density estimation underlies many statistical analyses, our interest is in hyperspectral detection problems.
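The change-of-variables identity underlying this density estimate can be checked on a toy case where the Gaussianizing transform is known in closed form. In the sketch below, lognormal data is Gaussianized exactly by the logarithm, and the reconstructed density p(x) = N(T(x)) |dT/dx| matches the analytic lognormal density; this is purely illustrative and is not the distilled-network approach of the paper.

```python
# Worked 1-D example of the change-of-variables density estimate.
import numpy as np
from scipy.stats import norm, lognorm

x = np.linspace(0.1, 5.0, 50)
z = np.log(x)                        # Gaussianizing transform T(x)
jac = 1.0 / x                        # dT/dx, so |det J| = 1/x in this 1-D case
p_est = norm.pdf(z) * np.abs(jac)    # Gaussian density of T(x) times |det J|
p_true = lognorm.pdf(x, s=1.0)       # analytic lognormal(0, 1) density
assert np.allclose(p_est, p_true)    # the two agree exactly here
```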
In this work we utilize generative adversarial networks (GANs) to synthesize realistic transformations for remote sensing imagery in the multispectral domain. Despite the apparent perceptual realism of the transformed images at first glance, we show that a deep learning classifier can very easily be trained to differentiate between real and GAN-generated images, likely due to subtle but pervasive artifacts introduced by the GAN during the synthesis process. We also show that a very low-amplitude adversarial attack can easily fool the aforementioned deep learning classifier, although these types of attacks can be partially mitigated via adversarial training. Finally, we explore the features utilized by the classifier to differentiate real images from GAN-generated ones, and how adversarial training causes the classifier to focus on different, lower-frequency features.
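As an illustration of the low-amplitude attacks referred to above, the sketch below applies a fast gradient sign method (FGSM) perturbation to a stand-in real-versus-GAN classifier. The classifier architecture, the choice of FGSM, and the epsilon value are assumptions; the paper's exact attack and model are not reproduced here.

```python
# FGSM-style perturbation of inputs to a stand-in real-vs-GAN classifier.
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, eps=0.005):
    """Perturb images by eps * sign(grad of loss w.r.t. the input)."""
    images = images.clone().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(images).squeeze(1), labels.float())
    loss.backward()
    return (images + eps * images.grad.sign()).detach()

# Toy usage with random chips; 0 = real, 1 = GAN-generated.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
images = torch.rand(4, 3, 64, 64)
labels = torch.tensor([0, 1, 0, 1])
adv_images = fgsm_attack(model, images, labels)
```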
Global climate warming is rapidly reducing Arctic sea ice volume and extent. The associated perennial sea ice loss has economic and global security implications for Arctic Ocean navigability, since sea ice cover dictates whether an Arctic route is open to shipping. Thus, understanding changes in sea ice thickness, concentration, and drift is essential for operation planning and routing. However, changes in sea ice cover on scales up to a few days and kilometers are challenging to detect and forecast; current sea ice models may not capture quickly changing conditions on the short timescales needed for navigation. Assimilating observations into these predictive models requires frequent, high-resolution morphological information about the ice pack, which is operationally difficult to obtain. We suggest an approach to mitigate this challenge by using machine learning (ML) to interpret satellite-based synthetic aperture radar (SAR) imagery. In this study, we derive ML models for the analysis of SAR data to improve short-term local sea ice monitoring at high spatial resolutions, enabling more accurate analysis of Arctic navigability. We develop an algorithm/classifier that can analyze Sentinel-1 SAR imagery with the potential to inform operational sea ice forecasting models. We focus on detecting two sea ice features of interest to Arctic navigability: ridges and leads (fractures in the pack ice). These can be considered local extremes in terms of ice thickness, a crucial parameter for navigation. We build models to detect these ice features using machine learning techniques. Both our ridge and lead detection models perform as well as, if not better than, state-of-the-art methods. These models demonstrate Sentinel-1's ability to capture sea ice conditions, suggesting the potential for Sentinel-1 global coverage imagery to inform sea ice forecasting models.
In this work we demonstrate that generative adversarial networks (GANs) can be used to generate realistic pervasive changes in RGB remote sensing imagery, even in an unpaired training setting. We investigate some transformation quality metrics based on deep embedding of the generated and real images which enable visualization and understanding of the training dynamics of the GAN, and provide a useful measure in terms of quantifying how distinguishable the generated images are from real images. We also identify some artifacts introduced by the GAN in the generated images, which are likely to contribute to the differences seen between the real and generated samples in the deep embedding feature space even in cases where the real and generated samples appear perceptually similar.
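One simple embedding-based distinguishability measure, sketched below, fits Gaussians to the deep embeddings of real and generated chips and computes a Frechet-style distance between them. The embedding network is left as a placeholder, and this particular metric is an assumed stand-in rather than the exact quality metrics investigated in the paper.

```python
# Frechet-style distance between embedding clouds of real and GAN chips.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(E_real, E_gen):
    """E_real, E_gen: (N, d) arrays of deep-embedding vectors."""
    mu_r, mu_g = E_real.mean(0), E_gen.mean(0)
    C_r = np.cov(E_real, rowvar=False)
    C_g = np.cov(E_gen, rowvar=False)
    covmean = sqrtm(C_r @ C_g).real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(C_r + C_g - 2 * covmean))

# Toy usage with random stand-ins for embeddings of real vs. generated images;
# a larger value indicates more easily distinguishable distributions.
rng = np.random.default_rng(2)
d = frechet_distance(rng.normal(size=(500, 64)),
                     rng.normal(loc=0.1, size=(500, 64)))
```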
Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the earth than that of any single source; the challenge is in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. Meeting this challenge requires effective algorithms for combining heterogeneous data to identify subtle but important changes amid the intrinsic data variation. The major obstacle to using heterogeneous satellite data to monitor anomalous changes across time is this: subtle but real changes on the ground can be overwhelmed by artifacts that are simply due to the change in modality. Here, we implement a joint-distribution framework for anomalous change detection that can effectively "normalize" for these changes in modality, and does not require any phenomenological resampling of the pixel signal. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use the multi-year construction of the Los Angeles Stadium at Hollywood Park (in Inglewood, CA) as our testbed, and exploit synthetic aperture radar (SAR) imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We explore results for anomalous change detection between Sentinel-2 and Landsat 8 over time, and also show results for anomalous change detection between Sentinel-1 SAR imagery and Sentinel-2 multispectral imagery.