The Air Force Institute of Technology (AFIT) created the AFIT Sensor and Scene Emulation Tool (ASSET), which aims to produce accurate and realistic electro-optical and infrared (EO/IR) data. While working to validate ASSET's cloud-free radiometry calculations, researchers demonstrated that the radiometric accuracy of synthetic data can be improved using Hyperspectral Imagery (HSI). This research addresses the lack of accurate HSI reflectance data required by ASSET to perform scene generation with two novel machine learning (ML) models and a scene generation process. Two Convolutional Neural Network (CNN) models, a U-Net and a Pix2Pix Generative Adversarial Network (GAN), are trained using multi-sensor data, including land classification, elevation, texture, and HSI image data. The ML models process image chips as inputs to a novel rendering process, generating realistic whole-Earth hyperspectral reflectance maps between 480 nm and 2500 nm. To assess the accuracy of these models and the rendering process, generated data were compared against truth HSI data using Mean Absolute Error (MAE), Mean Squared Error (MSE), and image quality metrics such as Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR). This paper details the current stage of model development and the possible contributions of the models to ASSET and synthetic scene generation.
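The evaluation metrics named above (MAE, MSE, SSIM, PSNR) can be sketched in a few lines. This is a minimal illustration, not the authors' evaluation code: the global single-window SSIM below is a simplification of the standard windowed formulation, and the assumed reflectance data range of 1.0 is a placeholder.

```python
import numpy as np

def mae(truth, pred):
    """Mean Absolute Error between a truth cube and a generated cube."""
    return float(np.mean(np.abs(truth - pred)))

def mse(truth, pred):
    """Mean Squared Error between a truth cube and a generated cube."""
    return float(np.mean((truth - pred) ** 2))

def psnr(truth, pred, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for a perfect match."""
    m = mse(truth, pred)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

def ssim_global(truth, pred, data_range=1.0):
    """Global (single-window) SSIM -- a simplification of the usual
    sliding-window SSIM, using the standard stabilizing constants."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = truth.mean(), pred.mean()
    var_x, var_y = truth.var(), pred.var()
    cov = ((truth - mu_x) * (pred - mu_y)).mean()
    return float((2 * mu_x * mu_y + c1) * (2 * cov + c2)
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

For identical inputs these reduce to the expected limits (zero error, unit SSIM, infinite PSNR), which makes them a convenient sanity check before comparing generated reflectance maps against truth HSI.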
The AFIT Sensor and Scene Emulation Tool (ASSET) is a physics-based model used to generate synthetic data sets of wide field-of-view (WFOV) electro-optical and infrared (EO/IR) sensors with realistic radiometric properties, noise characteristics, and sensor artifacts. This effort evaluates the use of Convolutional Neural Networks (CNNs) trained on samples of real space-based hyperspectral data paired with panchromatic imagery as a method of generating synthetic hyperspectral reflectance data from wide-band imagery inputs to improve the radiometric accuracy of ASSET. Further, the effort demonstrates how these updates will improve ASSET's radiometric accuracy through comparisons to NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). To place the development of synthetic hyperspectral reflectance data in context, the scene generation process implemented in ASSET is also presented in detail.
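One way to form the hyperspectral/panchromatic training pairs described above is to derive a wide-band chip from each hyperspectral chip by collapsing the spectral axis. The sketch below assumes uniform band weighting; an actual sensor pairing would use the instrument's spectral response function, and the helper names here are illustrative, not from the paper.

```python
import numpy as np

def panchromatic_from_hsi(hsi_cube, band_weights=None):
    """Collapse an HSI cube of shape (H, W, bands) into one wide-band image.

    Uniform weights are a simplifying assumption standing in for a real
    sensor's spectral response function.
    """
    n_bands = hsi_cube.shape[-1]
    if band_weights is None:
        band_weights = np.full(n_bands, 1.0 / n_bands)
    # Weighted sum over the spectral axis yields an (H, W) panchromatic chip.
    return np.tensordot(hsi_cube, band_weights, axes=([-1], [0]))

def make_training_pairs(hsi_chips):
    """Pair each hyperspectral chip with its derived panchromatic input."""
    return [(panchromatic_from_hsi(chip), chip) for chip in hsi_chips]
```

A CNN such as the U-Net or Pix2Pix GAN mentioned above would then be trained to invert this mapping, predicting the hyperspectral chip from the panchromatic input.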