Space-based sensor platforms, including both current and planned future satellites, are capable of surveilling Earth-based objects and scenes from high altitudes. Overhead persistent infrared (OPIR) is a growing surveillance technique in which thermal-waveband infrared sensors are deployed on orbiting satellites to look down and image the Earth. A key challenge is achieving sufficient image resolution to detect, differentiate and identify ground-based objects while monitoring through the atmosphere. Demonstrations have shown machine learning algorithms to be capable of processing image-based scenes, detecting and recognizing targets amongst surrounding clutter. Performant algorithms must be robustly trained to successfully complete such a complex task, which typically requires a large set of training data on which statistical predictions can be based. Electro-optical infrared (EO/IR) remote sensing applications, including OPIR surveillance, necessitate a substantial image database with suitable variation for adept learning to occur. Diversity in background scenes, vehicle operational state, season, time of day and weather conditions can be included in training image sets to ensure sufficient algorithm input variety for OPIR applications. However, acquiring such a diverse overhead image set from measured sources can be a challenge, especially in thermal infrared wavebands (e.g., MWIR and LWIR) when adversarial vehicles are of interest. In this work, MuSES™ and CoTherm™ are used to generate synthetic OPIR imagery of several ground vehicles under a range of weather conditions, times of day and background scenes. The performance of a YOLO (“you only look once”) deep learning algorithm is studied and reported, with a focus on how image resolution impacts detection/recognition performance. The image resolution of future space-based sensor platforms will surely increase, so this study seeks to understand the sensitivity of OPIR algorithm performance to overhead image resolution.
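The link between overhead image resolution and detection/recognition performance can be illustrated with a classic EO/IR rule of thumb. The abstract does not specify its resolution metric, so the following is only a minimal sketch, assuming Johnson-style criteria (roughly 2 pixels across a target's critical dimension to detect, 8 to recognize, 13 to identify) and a hypothetical ground sample distance (GSD):

```python
# Illustrative sketch: pixels-on-target for an overhead sensor, scored with
# Johnson-style thresholds. The thresholds and GSD values are assumptions for
# illustration, not figures taken from the study.

def pixels_on_target(target_size_m: float, gsd_m: float) -> float:
    """Pixels spanning a target's critical dimension at a given GSD."""
    return target_size_m / gsd_m

def johnson_level(pixels: float) -> str:
    """Johnson-criteria rules of thumb expressed in pixels
    (~2 px detect, ~8 px recognize, ~13 px identify)."""
    if pixels >= 13:
        return "identify"
    if pixels >= 8:
        return "recognize"
    if pixels >= 2:
        return "detect"
    return "sub-resolution"

# A ~6 m ground vehicle at 0.5 m GSD spans 12 pixels: recognizable,
# but marginal for identification.
print(johnson_level(pixels_on_target(6.0, 0.5)))  # -> recognize
```

Sweeping the GSD in such a calculation gives intuition for why a deep learning detector's performance should degrade sharply once targets fall below a few pixels in extent.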
Machine learning algorithms are capable of processing image-based scenes, detecting and recognizing embedded targets. This has been demonstrated by data scientists and computer vision engineers, but performant algorithms must be robustly trained to successfully complete such a complex task. This typically requires a large set of training data on which the algorithm can base statistical predictions. Electro-optical infrared (EO/IR) remote sensing applications necessitate a substantial image database with suitable variation for adept learning to occur. For human detection/recognition applications, diversity in clothing ensembles, pose, season, time of day, sensor platform perspective, scene background and weather conditions can be included in training image sets to ensure sufficient input variety. However, acquiring such a diverse image set from measured sources can be a challenge, especially in thermal infrared wavebands (e.g., MWIR and LWIR). Alternatively, generating synthetic imagery with appropriate features is possible and has been shown to perform well, but a careful methodology must be followed if robust training is to be accomplished. In this work, MuSES™ and CoTherm™ are used to generate synthetic EO/IR remote sensing imagery of various human dismounts with a range of clothing, poses and environmental factors. The performance of a YOLO (“you only look once”) deep learning algorithm is studied, and sensitivity conclusions are discussed.
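One common way to organize such a synthetic image campaign is to enumerate a full factorial matrix of simulation conditions and render one image per combination. The factor names and values below are hypothetical placeholders, not the actual MuSES/CoTherm configuration used in the work:

```python
# Minimal sketch of enumerating a simulation condition matrix for synthetic
# training imagery. All factor levels are illustrative assumptions.
from itertools import product

factors = {
    "clothing": ["field jacket", "t-shirt", "parka"],
    "pose": ["standing", "walking", "crouching"],
    "time_of_day": ["dawn", "noon", "dusk", "midnight"],
    "weather": ["clear", "overcast", "rain"],
}

# Full factorial: every combination becomes one rendering job.
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(conditions))  # 3 * 3 * 4 * 3 = 108 renderings
```

Even this small matrix yields over a hundred distinct scenes, which illustrates why simulation is attractive when equivalent measured thermal imagery would be costly or impractical to collect.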
The capabilities of machine learning algorithms for observing image-based scenes and recognizing embedded targets have been demonstrated by data scientists and computer vision engineers. Performant algorithms must be well-trained to complete such a complex task automatically, and this requires a large set of training data on which to base statistical predictions. For electro-optical infrared (EO/IR) remote sensing applications, a substantial image database with suitable variation is necessary. Numerous times of day, sensor perspectives, scene backgrounds, weather conditions and target mission profiles could be included in the training image set to ensure sufficient variety. Acquiring such a diverse image set from measured sources can be a challenge; generating synthetic imagery with appropriate features is possible but must be done with care if robust training is to be accomplished. In this work, MuSES™ and CoTherm™ are used to generate synthetic EO/IR remote sensing imagery of various high-value targets with a range of environmental factors. The impact of simulation choices on image generation and algorithm performance is studied with standard computer vision deep learning convolutional neural networks and a measured imagery benchmark. Differences discovered in the usage and efficacy of synthetic and measured imagery are reported.
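Comparing algorithm performance on synthetic versus measured imagery typically comes down to scoring predicted bounding boxes against ground truth with intersection-over-union (IoU), the standard overlap metric for detection networks such as YOLO. The paper's exact evaluation metrics are not given here, so the following is a generic sketch:

```python
# Hedged sketch of IoU scoring for detection evaluation; box coordinates are
# illustrative, not data from the study.

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction overlapping half of a 10x10 ground-truth box scores 1/3.
truth = (0.0, 0.0, 10.0, 10.0)
pred = (5.0, 0.0, 15.0, 10.0)
print(round(iou(truth, pred), 3))  # -> 0.333
```

A detection is usually counted as correct when IoU exceeds a threshold (0.5 is a common convention), and tallying such matches across a measured-imagery benchmark is one way differences between synthetic and measured training sets become visible.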