This PDF file contains the front matter associated with SPIE Proceedings Volume 12736, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Using YOLO (a convolutional neural network) for camouflage evaluation in combination with a genetic algorithm (GA), we investigated which details and colours are important for the visual camouflage of a soldier. Depending on the distance, details like the face and legs, or the soldier's silhouette, appeared most important for detection. GA optimization yielded a set of optimal colours that depended on whether the evolution targeted a specific location or an average over a scene, as the immediate background differs per location within a scene. We validated our results in a human observer experiment.
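The evaluation loop described in this abstract can be sketched as a minimal genetic algorithm. The detector below is a stand-in scoring function (the study used YOLO detection confidence), and the background colour, population size and mutation step are illustrative placeholders:

```python
import random

random.seed(0)

# Stand-in "detector": the study scored patterns with YOLO confidence; this
# proxy simply uses mean squared colour distance to a hypothetical scene
# average, purely to make the GA loop concrete.
BACKGROUND = (70, 90, 40)

def detectability(pattern):
    return sum((c - b) ** 2 for rgb in pattern
               for c, b in zip(rgb, BACKGROUND)) / (3 * len(pattern))

def random_pattern(n=4):
    return [tuple(random.randint(0, 255) for _ in range(3)) for _ in range(n)]

def mutate(pattern, rate=0.2, step=30):
    return [tuple(min(255, max(0, c + random.randint(-step, step)))
                  if random.random() < rate else c for c in rgb)
            for rgb in pattern]

def evolve(pop_size=30, generations=200):
    pop = [random_pattern() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=detectability)              # lower = better camouflage
        survivors = pop[: pop_size // 2]         # elitist selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=detectability)

best = evolve()
```

In the real setup, evolving against a single image optimizes for that location, while averaging the fitness over many scene crops yields the scene-level palette the abstract contrasts it with.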
The threat of AI-based surveillance and reconnaissance systems that has emerged in recent years has made it necessary to develop new camouflage and deception measures directed against them. A primary example is adversarial attack camouflage, achieved by employing specifically calculated digital patterns that are more or less conspicuous to human observers but can effectively deceive an AI. In most cases, however, only photo manipulations showing the pattern in optimal frontal positioning are used to evaluate its effectiveness. This paper demonstrates a comprehensive evaluation methodology that examines both the visual conspicuity and the effectiveness of AI camouflage and deception methods as a function of spatial and angular positioning, providing an evaluation measure as well as advice for the application of patch camouflage. The distances and viewing angles at which DAAC is still effective are investigated to produce a spatial effectiveness map; the shape, extent and intensity of the effective region can then be used as an evaluation measure.
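A spatial effectiveness map of the kind described can be illustrated with a toy model; the decay constants, grid and threshold below are hypothetical placeholders rather than measured values:

```python
import math

def patch_effectiveness(distance_m, angle_deg, d0=30.0, a0=45.0):
    """Hypothetical model: an adversarial patch fools the detector best when
    seen frontally and up close; effectiveness decays with distance (scale d0)
    and off-axis viewing angle (scale a0). Constants are placeholders."""
    angular = max(0.0, math.cos(math.radians(angle_deg))) ** (90.0 / a0)
    return math.exp(-distance_m / d0) * angular

def effectiveness_map(distances, angles, threshold=0.2):
    """Grid of effectiveness values plus the cells where the patch still
    'works' (effectiveness above threshold): the spatial effectiveness map."""
    grid = [[patch_effectiveness(d, a) for a in angles] for d in distances]
    effective = [(d, a) for i, d in enumerate(distances)
                 for j, a in enumerate(angles) if grid[i][j] >= threshold]
    return grid, effective

distances = [5, 10, 20, 40, 80]       # metres
angles = [0, 15, 30, 45, 60, 75]      # degrees off the frontal axis
grid, effective = effectiveness_map(distances, angles)
```

The shape and extent of the `effective` region is exactly the kind of summary measure the evaluation proposes; in practice the grid values would come from detector runs on imagery captured at each distance/angle cell.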
Deep-learning-based architectures such as Convolutional Neural Networks (CNNs) have become quite efficient in recent years at detecting camouflaged objects that would easily be overlooked by a human observer. Consequently, countermeasures have been developed in the form of adversarial attack patterns, which can confuse CNNs by causing false classifications while maintaining the original camouflage properties in the visible spectrum. In this paper, we describe the steps in generating suitable adversarial camouflage patterns based on the Dual Attribute Adversarial Camouflage (DAAC) technique proposed in [Wang et al. 2021], which evades detection by artificial intelligence as well as by human observers. The aim here is to develop an efficient camouflage with the added ability to confuse more than a single network without compromising camouflage against human observers. To achieve this, two different approaches are suggested and the results of first tests are presented.
The Air Force Institute of Technology (AFIT) created the AFIT Sensor and Scene Emulation Tool (ASSET), which aims to produce accurate and realistic electro-optical and infrared (EO/IR) data. While working to validate ASSET's cloud-free radiometry calculations, researchers demonstrated that the radiometric accuracy of synthetic data can be improved using Hyperspectral Imagery (HSI). This research addresses the lack of accurate HSI reflectance data required by ASSET to perform scene generation, using two novel machine learning (ML) models and a scene generation process. Two Convolutional Neural Network (CNN) models, a U-Net and a Pix2Pix Generative Adversarial Network (GAN), are trained using multi-sensor data including land classification, elevation, texture, and HSI image data. The ML model processes image chips as inputs to a novel rendering process, generating realistic whole-Earth hyperspectral reflectance maps between 480 nm and 2500 nm. To study the accuracy of this model and rendering process, generated data were compared against ground-truth HSI data using Mean Absolute Error (MAE), Mean Squared Error (MSE), and image quality metrics such as Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR). This paper details the current stage of model development and the possible contributions of the model to ASSET and synthetic scene generation.
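The comparison metrics listed above are straightforward to compute; the sketch below uses a single-window (global) SSIM, whereas library implementations slide a window over the image, and the "truth" and "generated" arrays are synthetic stand-ins:

```python
import numpy as np

def mae(x, y):
    return float(np.mean(np.abs(x - y)))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM (library versions average over sliding windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Synthetic stand-in for a truth/generated reflectance-band pair:
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
generated = np.clip(truth + rng.normal(0.0, 0.05, truth.shape), 0.0, 1.0)
```

In the paper's setting, these would be evaluated per band across the 480-2500 nm reflectance maps and aggregated.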
Our study employs a simulation technique ('hybrid simulation'), which combines sensor imagery of real environments with virtual objects to generate realistic target imagery for signature assessment. This method is preferred over entirely virtual scenes, for which it can be challenging to replicate reality with sufficient fidelity. In prior research, we investigated three methods for creating hybrid scenes using Unity software: i) using recordings of a scale model of the target within the scene, ii) using material capture, which involves recording probe spheres in the scene to map colors directly onto the virtual target, and iii) combining 360-degree light field measurements with BRDF measurements to integrate a virtual target into the scene. Although the hybrid scenes demonstrated reasonable agreement with the validation scenes, discrepancies in both color and luminance were observed. To address this, we replaced the Unity rendering engine with the validated RADIANCE lighting simulation raytracing engine. In this study, hybrid scenes are rendered using the RADIANCE engine and compared to validation scenes. Additionally, we introduce a second layer of interaction that enables the virtual target to affect the recorded imagery; the virtual target can now cast shadows onto the recorded scene and alter its appearance. The combined improvement of the rendering engine and the inclusion of the interaction between the virtual target and the real environment has resulted in more realistic scenes, making them more appropriate for camouflage assessment.
Artificially inserted objects in synthetic aperture radar (SAR) images are an important component of modern electronic warfare, e.g. for the concealment of real targets or the creation of false ones. Target simulation can be achieved through software and hardware techniques by means of a radar target simulator (RTS). In this work, we designed an experiment involving an RTS on the ground and a SAR sensor mounted on an aircraft. The study aims to assess the performance of the RTS, analyse the measured RTS signals in SAR imagery, and evaluate their usefulness for future missions. The SAR sensor was Fraunhofer FHR's MIRANDA35, operating in the Ka-band with a signal bandwidth of 600 MHz. The RTS can simulate an adjustable number of targets with configurable intensities and slant-range positions. During the experiments, the RTS was observed from different viewing angles, depending on the trajectory of the aircraft carrying MIRANDA35. The RTS signals were generated using the delay-based technique, so the target's location in the focused SAR image varied with the time delay. Five corner reflectors were placed on the ground, enabling a comparison of the RTS signatures with those of the fixed reflectors. The theoretical position and backscatter of the simulated targets, based on the RTS configuration, were compared to those measured in the SAR images. The results showed that the measured and theoretical slant-range values differed by about 2 meters.
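The delay-based technique maps an extra retransmission delay onto an apparent down-range shift through the two-way propagation time. A small worked sketch (the 600 MHz bandwidth is from the abstract; the example range and delay are hypothetical):

```python
C = 299_792_458.0  # speed of light (m/s)

def apparent_slant_range(true_range_m, extra_delay_s):
    """Delay-based RTS: retransmitting the received chirp after an extra
    delay shifts the simulated target down-range by c * delay / 2, because
    radar ranging assumes two-way propagation."""
    return true_range_m + C * extra_delay_s / 2.0

def range_resolution(bandwidth_hz):
    """Slant-range resolution of a chirped SAR: c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# MIRANDA35's 600 MHz bandwidth gives ~0.25 m slant-range resolution, so the
# ~2 m offset reported above spans roughly eight resolution cells.
res = range_resolution(600e6)
shifted = apparent_slant_range(1000.0, 1e-6)   # 1 us extra delay (hypothetical)
```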
Achieving a low signature in forest- and foliage-dominated backgrounds is much desired. Cellulose nanofibre (CNF), or nanocellulose, is a non-toxic biopolymer that is biodegradable and renewable. Its other physical and optical properties, such as thermal stability, high mechanical strength and high infrared (IR) emissivity, give it the potential to replace hazardous components in camouflage applications where a more environmentally friendly alternative is preferred. CNF-based films were produced with optical properties similar to those of leaves in terms of colour. The investigated films used carboxymethylated CNF as a matrix; by adding cenospheres (hollow alumino-silicate spheres), chlorophyll as a green dye and other additives, the films' optical and physical properties could be modified to mimic a leaf. The physical structure had a rough, paper-like texture, and the optical properties could be altered by changing the amounts of cellulose, cenospheres and other additives. Initial analysis using UV-VIS-NIR and IR spectroscopy shows optical properties similar to those of a fresh maple leaf. In the UV-VIS region of 0.25–0.8 μm, a clear chlorophyll peak and a red edge are visible. The CNF films give high reflectance in the SWIR region of 1–2.5 μm. A peak of higher reflectance is observed in the IR region of 3–6 μm, while the reflectance decreases at longer wavelengths; silver nanowires, however, could increase the reflectance at longer IR wavelengths. This early work in exploring nanocellulose-based films for camouflage applications gives a glimpse of what capabilities CNF might bring in the future.
Adapting to natural backgrounds is important, yet very difficult, for achieving effective camouflage. One interesting aspect is the optical depth of natural materials as a function of both the geographical region in which they occur and wavelength. The effective optical thickness of sand, for instance, varies from scene to scene, and the amount of light that is reflected and transmitted is lower in the visible wavelength region than in the near-infrared region. Such variations in optical properties should be accounted for to ensure that synthetic camouflage material adapts well to natural backgrounds over a large set of natural scenes as well as over a large range of wavelengths. The aim of this study has been to study the optical reflectance properties of thin sand layers as a function of their thickness, given in weight per area. Sand is abundant in many parts of the world and relevant for many camouflage purposes. Specifically, we have studied the spectral reflectance (350–2500 nm) of 1–6 layers of sand covering a generic target object with a known spectral signature. We also present mathematical models that estimate the reflectance of the sand layer samples for a given layer thickness. The models are easy to use, and we show that they are able to reproduce the spectral reflectance properties of the sand samples. The model also allows an estimation of the optical extinction coefficient as a function of wavelength for a given sand type. This enables further estimation of the transmittance and absorptance of the sand layers based on the experimental reflectance data, which we explore in the paper. We also explore how well the model captures the optical properties of sand compared with corresponding optical properties of other natural materials often found in thin layers, such as leaves and snow.
We believe our results will be a valuable contribution to developing multi-layered camouflage material that accounts for the variation of optical depth.
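One plausible form for such a layer model is an exponential saturation toward the thick-layer reflectance; the sketch below is an illustrative stand-in for the paper's model, with hypothetical parameter values:

```python
import math

def layer_reflectance(m, r_inf, r_bg, k):
    """Reflectance of a sand layer with areal density m (kg/m^2) over a
    backing of reflectance r_bg. R saturates toward the thick-layer value
    r_inf; the factor 2 represents the two-way path through the layer.
    This exponential-saturation form is an illustrative stand-in for the
    paper's model."""
    return r_inf + (r_bg - r_inf) * math.exp(-2.0 * k * m)

def fit_extinction(m1, r1, r_inf, r_bg):
    """Invert the model for the extinction coefficient k from a single
    (areal density, reflectance) measurement."""
    return -math.log((r1 - r_inf) / (r_bg - r_inf)) / (2.0 * m1)

# One layer of "sand" (hypothetical k = 0.8 m^2/kg) over a bright target:
r1 = layer_reflectance(1.5, 0.4, 0.9, 0.8)
k_est = fit_extinction(1.5, r1, 0.4, 0.9)
```

Fitting k per wavelength against the measured 1–6 layer spectra is what would yield the spectral extinction coefficient the abstract mentions.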
Background matching is an essential form of camouflage, adopted by humans especially within military applications in terms of signature reduction. Recent sensor developments have created a need for novel camouflage effective in the shortwave infrared (SWIR). The absorption of electromagnetic radiation in the SWIR is heavily influenced by water; in a forest environment, the water content of plants is of considerable importance. It can therefore be inferred that fabrics with higher moisture levels would exhibit reduced detectability at SWIR wavelengths compared to their dry counterparts. In this study, the optical properties of different fabrics, in both their dry and hydrated states, were evaluated with SWIR imaging and UV-VIS-NIR spectroscopy and compared to foliage. Two methods of hydration were used: water and nanocellulose. In addition, an assessment of the evaporation of water from the hydrated fabric samples was carried out. Two fabrics were surface-treated with hydrophobic compounds in order to modify the evaporation; the treatments included water-repelling agents and silica nanoparticles (SNPs). The hydrophobic surface modifications did not appear to prolong the evaporation of water from the studied fabrics. Hydrating the fabrics appears to transform the electro-optical signature to closely resemble foliage.
Minimizing the electro-optical signatures of soldiers against modern sensors is a challenging task, but one of high importance and benefit for operative soldiers who need to stay undetected. Optimizing camouflage uniforms for winter conditions efficiently reduces soldier signatures in winter scenes, especially in the visual spectrum. Snow usually dominates winter scenes and is difficult to mimic because the spectral properties of snow change with several parameters such as grain size, structure, and wetness. Developing efficient winter camouflage thus requires knowledge and data on the spectral properties of snow. This paper presents spectral data on common snow types in Norway and evaluates the camouflage performance of several winter uniforms of different colors and patterns. We assessed and ranked the camouflage performance of the uniforms quantitatively in the visible spectrum using an observer-based photosimulation in which many soldiers searched for targets in various Norwegian winter scenes. By collecting a large number of detection times, indicating how difficult it was for an observer to detect each camouflage in each of the unique winter scenes, it was possible to rank the camouflage targets quantitatively. The results show how each camouflage performed (given by time of detection or as a percentage) compared to all the other camouflages in the test for each scene. The photosimulation method is time-consuming, but it gives a realistic estimation of camouflage performance over the different scenes. We discuss the performance of the various winter camouflages in relation to their patterns and their similarity to snow (color coordinates).
Platform signature reduction in the thermal infrared (TIR) wavelength range can be accomplished efficiently by applying a low-emissive paint to a surface heated by internal heat sources, in order to decrease the apparent temperature and reduce the risk of detection by imaging TIR sensors. The main ingredients of low-emissive paints are an IR-reflecting pigment mixed into a binder with low IR absorption. Aluminium (Al) flakes are normally used as the IR-reflecting pigment; they are available with different surface treatments, morphologies and size distributions, and are used in applications such as industrial coatings, automotive coatings and printing inks. In this work, however, we replace Al flakes with silver nanowires (AgNWs), making use of their longitudinal plasmon resonance, or optical antenna effect, which gives rise to high reflectance in the IR. The spectral reflectance is affected by the length of the AgNWs. We have used AgNWs of different lengths from two producers to study their effect on the low-emissive properties, and have produced paints with different concentrations of AgNWs in an IR-transparent binder with different additives, but without any visual pigments. The investigation methods used are spectral total directional hemispherical reflectance in the IR range, microscopy of paint films and thermal imaging of sample surfaces. We also compare AgNWs with Al flakes to identify advantages and disadvantages.
In this paper, we describe our efforts to investigate the usefulness of very simple surface temperature modelling, based on a single input parameter, from an operational perspective. A particular infrared sensor would require a minimum temperature difference (contrast) between target (CUBI) and background (sky, for instance) to declare a detection, but it does not ‘care’ if the contrast is larger than that. We use a linear model of the solar irradiance to predict the instantaneous daytime surface temperatures. To operationalize the modelling we calculate rates for true/false positive and negative predictions and show that even simple instantaneous (memory-less) modelling yields good predictions up to 80 % of the time. For modelling of nighttime surface temperatures, we investigate the usefulness of using cloud cover, wind speed, but also out-radiation measurements to predict these temperatures. In this case, we achieve high rates of correct predictions, up to 80 %.
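The operationalization described, thresholding a predicted contrast and tallying true/false positives and negatives, can be sketched as follows; the linear coefficients, sensor threshold and measurement values are hypothetical placeholders:

```python
def predict_contrast(irradiance_wm2, a=0.012, b=-1.5):
    """Instantaneous linear model of target-background contrast (K) driven by
    solar irradiance alone; slope and offset are illustrative, not the fitted
    values from the measurements."""
    return a * irradiance_wm2 + b

def confusion_rates(measured_contrast, irradiance, threshold=2.0):
    """A sensor 'detects' when contrast exceeds its threshold; compare
    predicted against measured detections and tally outcome rates."""
    tp = fp = tn = fn = 0
    for m, e in zip(measured_contrast, irradiance):
        pred, real = predict_contrast(e) > threshold, m > threshold
        if pred and real:
            tp += 1
        elif pred:
            fp += 1
        elif real:
            fn += 1
        else:
            tn += 1
    n = len(measured_contrast)
    return {"tp": tp / n, "fp": fp / n, "tn": tn / n, "fn": fn / n}

irr = [0, 100, 300, 500, 800]          # W/m^2, hypothetical samples
meas = [-1.4, -0.2, 2.3, 4.9, 8.1]     # K, hypothetical measured contrasts
rates = confusion_rates(meas, irr)
```

This captures the point made above: the model only needs to land on the correct side of the sensor threshold, not reproduce the exact contrast.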
In this paper we present a numerical method for computing atmospheric radiance along a ray path. This makes it possible to evaluate relevant features of Infrared Search & Track (IRST) systems, such as the Target Contrast Irradiance (TCI), i.e. the difference between the irradiance received with and without an obstacle along a fixed line of sight. The proposed model evaluates the propagation of electromagnetic waves in a parametric spectral band by means of a computational procedure that relies on the integral solution of the radiative transfer differential equation in an inhomogeneous domain such as the atmosphere. Atmospheric properties are reconstructed by agile offline use of the MODTRAN® tool and then summarised in a synthetic data structure that can easily be loaded at simulation initialization. The atmospheric data structure captures the macroscopic properties of homogeneous layers, among them temperature, absorption, scattering and the uniform angular average of incoming radiance. Atmospheric propagation is fully described by accounting for spherical/ellipsoidal geometry, extinction and emission terms, i.e. thermal and scattering radiance sources. The computational algorithm has two main functions: one reconstructs the slant path, identifying the local path increments in the atmospheric layers; the other computes the incoming radiation along the ray path, allowing both background and target radiance to be computed. In other words, the core algorithm constitutes a discrete functional that takes the local path increments in the respective layers as input and returns radiance and transmittance. Moreover, given the electro-optical sensor resolution and target features, the theoretical performance of a parametric IRST can be evaluated by means of the TCI.
The numerical approach described in this paper, called IR-ART (InfraRed Atmospheric Radiative Transfer), constitutes an accurate, fast and versatile algorithm, based on a full physical description, that can be used to evaluate ideal performance or be integrated in a real-time simulation context to reconstruct realistic electro-optical flux in several operating modes (e.g. IRST and FLIR) and IR spectral bands. Furthermore, it can be combined with 3D target modelling to better evaluate cross-section and emission.
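The layer-marching core of such an algorithm can be sketched in a few lines: each homogeneous layer attenuates the incoming radiance by its transmittance and adds its own emission. The two-layer "atmosphere" below is a toy example, not MODTRAN-derived data:

```python
def path_radiance(layers, background_radiance=0.0):
    """March a ray through homogeneous layers, ordered from the far end of the
    line of sight toward the sensor. Each layer of transmittance tau and
    source radiance B attenuates what enters it and adds its own emission:
    L_out = L_in * tau + B * (1 - tau). Returns the radiance reaching the
    sensor and the end-to-end transmittance (a discrete form of the integral
    RTE solution)."""
    L, T = background_radiance, 1.0
    for tau, B in layers:
        L = L * tau + B * (1.0 - tau)
        T *= tau
    return L, T

# Toy two-layer atmosphere between a background (L = 10) and the sensor:
layers = [(0.9, 5.0), (0.7, 8.0)]     # (transmittance, source radiance)
L_bg, T = path_radiance(layers, background_radiance=10.0)
# Target contrast compares the same path with and without the target present:
L_tgt, _ = path_radiance(layers, background_radiance=14.0)
contrast = L_tgt - L_bg
```

Note how the contrast at the sensor is the at-source contrast scaled by the end-to-end transmittance, which is the quantity a TCI evaluation needs.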
This paper focuses on the design and implementation of a micrometeorological weather station tailored for swarm applications in ground-based scenarios. The swarm concept refers to the integration of numerous low-cost devices that work together to optimise efficiency. Our weather station integrates sensors for GPS, temperature, humidity, air pressure, wind metrics, particle measurements and radiation parameters, and incorporates a camera for enhanced data insights. Built on a state-of-the-art senseBox MCU, an Arduino Mega and a Raspberry Pi, the stations provide a low-cost solution. Field tests have verified and validated the station's capabilities; while some limitations have been identified, the station's swarm applicability holds promise for large-scale weather data collection.
Synthetic imagery is very useful for visible signature studies because of the control, flexibility and replicability of simulated environments. For study results to be meaningful, however, synthetic images must closely replicate reality, so validating their radiometric representation is a key question. Recent research on extracting spectral reflectance from real digital photographs could be adapted to compare the spectral reflectance of objects in synthetic scenes to their real-world counterparts. This paper is a preliminary study using real-world spectral radiance data (a combination of spectral reflectance and scene illumination) and associated RGB images to train a machine learning model to predict the spectral radiance of objects in any RGB image. Preliminary results using two machine learning algorithms, namely a support vector machine and a multi-layer perceptron, show promise for predicting spectral radiance from RGB images. Future research will attempt to improve the model by supplying a much larger pool of training data, measuring the spectral response of our camera, and using image information from an earlier stage of the imaging pipeline, such as camera raw values instead of RGB values.
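The regression step can be sketched with a dependency-free linear stand-in (the paper trains support vector machine and multi-layer perceptron regressors); the spectra, camera response and band count below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 31-band spectra and the RGB values a hypothetical
# camera would record for them. A ridge-regularized linear map keeps the
# sketch dependency-free; SVM/MLP regressors would replace it in practice.
n_samples, n_bands = 200, 31
spectra = rng.random((n_samples, n_bands))
camera_response = rng.random((n_bands, 3))
rgb = spectra @ camera_response + rng.normal(0.0, 0.01, (n_samples, 3))

def fit_rgb_to_spectra(rgb, spectra, lam=1e-3):
    """Ridge regression: W minimizing ||rgb @ W - spectra||^2 + lam ||W||^2."""
    A = rgb.T @ rgb + lam * np.eye(rgb.shape[1])
    return np.linalg.solve(A, rgb.T @ spectra)

W = fit_rgb_to_spectra(rgb, spectra)
rmse = float(np.sqrt(np.mean((rgb @ W - spectra) ** 2)))
```

The 3-to-31 mapping is inherently underdetermined, which is one reason the abstract looks toward richer inputs such as camera raw values; the residual error of this sketch reflects that.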
Handheld thermal imaging devices can capture images in quick succession, each with a slightly different orientation; such image series can be combined through multi-image deconvolution to produce an improved image. Implementing deconvolution algorithms that exploit all the information contained in the series, to produce an image whose field of view is as large as the coverage of all collected images, is challenging because the series covers a possibly non-square area. In this paper, we present a multi-image deconvolution method that addresses this boundary condition problem. First, we determine the relative geometric transformations between the images to define a rectangular canvas that can accommodate the full field of view covered by the image series. Next, we formulate the deconvolution problem as a regularized minimization problem with two terms: (i) the residue between the forward transformation applied to the reconstruction candidate and the measured images, and (ii) a regularization term that takes image priors into account. To accommodate the non-square coverage of the combined images, which produces boundary artifacts when the forward model is used during iterative minimization, we recast the problem into one where the original scene is masked, thereby mitigating the effects of unknown image values beyond the image boundaries. We characterize our method on both synthetic and experimental images, and observe both visual and quantitative improvements at the boundaries, where distortions are attenuated.
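The masked formulation can be illustrated in one dimension: the mask removes the data residual where no measurement exists, so the unknown region outside the covered area no longer drives boundary artifacts. The kernel, signal and parameters below are toy choices, not the paper's 2-D setup:

```python
import numpy as np

def convolve_same(x, h):
    return np.convolve(x, h, mode="same")

def masked_deconvolve(y, h, mask, lam=0.05, lr=0.5, iters=500):
    """Minimize ||mask * (h*x - y)||^2 + lam * ||diff(x)||^2 by gradient
    descent. The mask zeroes the data residual where no measurement exists,
    so unknown values beyond the measured field of view stop producing
    boundary artifacts (1-D sketch of the masked formulation)."""
    x = y.copy().astype(float)
    h_flip = h[::-1]
    for _ in range(iters):
        r = mask * (convolve_same(x, h) - y)
        grad = convolve_same(r, h_flip)   # adjoint of the blur operator
        d = np.diff(x)
        grad[1:] += lam * d               # gradient of the smoothness prior
        grad[:-1] -= lam * d
        x = x - lr * grad
    return x

# Blurred step signal with an unmeasured border region:
h = np.array([0.25, 0.5, 0.25])
x_true = np.zeros(64)
x_true[20:40] = 1.0
y = convolve_same(x_true, h)
mask = np.ones(64)
mask[:4] = 0.0                            # no data at the left border
mask[-4:] = 0.0                           # no data at the right border
x_rec = masked_deconvolve(y, h, mask)
err = float(np.mean((x_rec - x_true)[8:-8] ** 2))
```

The paper's method additionally registers multiple shifted observations onto one rectangular canvas before solving; the masking principle is the same.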
The histogram-probabilistic multi-hypothesis tracker (H-PMHT) is an efficient parametric mixture-fitting approach to the multi-target track-before-detect (TBD) problem. It has been shown to give performance comparable to other methods at a fraction of the computational cost. In the original derivation of the H-PMHT, the mixing proportions are both coupled and uncorrelated over time, which may not hold true in practical scenarios involving fluctuating target amplitudes. In this paper, the mixing proportions are modeled according to a Poisson mixture measurement process. In contrast to existing approaches, a more general Markov chain prior based on the generalized inverse Gaussian (GIG) distribution is used as a prior on the Poisson mixing rates. The proposed method provides an alternative solution to the data association uncertainty in clutter, giving accurate and robust signal-to-noise ratio (SNR) estimates by utilizing the GIG Markov chain. The results are validated on simulated data.
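The amplitude-tracking idea can be illustrated with a sequential gamma-Poisson filter, the gamma being a limiting member of the GIG family the paper builds on; the forgetting factor and counts below are illustrative, not the paper's model:

```python
def poisson_rate_filter(counts, shape0=1.0, rate0=1.0, forget=0.5):
    """Sequential conjugate filtering of a fluctuating Poisson mixing rate.
    'forget' discounts old information each scan so the posterior-mean rate
    can track amplitude fluctuations; smaller values track faster but give
    noisier estimates. Returns the posterior mean after each scan."""
    a, b = shape0, rate0
    means = []
    for n in counts:
        a = forget * a + n          # discount old evidence, absorb new count
        b = forget * b + 1.0        # one exposure interval per scan
        means.append(a / b)         # posterior mean of a Gamma(a, b) rate
    return means

# A target whose mean amplitude jumps from ~5 to ~20 counts per scan:
counts = [5, 6, 4, 5, 21, 19, 22, 20]
est = poisson_rate_filter(counts)
```

A full GIG Markov chain prior generalizes this conjugate recursion while keeping the same predict/update structure.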
This paper introduces a web-based quiz game designed to enhance the training of image interpreters in identifying target object signatures. Inspired by image interpreters informally testing each other's knowledge during breaks, the tool mimics their practice of naming target objects displayed on self-made index cards. The underlying didactic concept is similar to the spaced repetition method. The quiz provides quick, focused tasks that are easily accessible on desktops and mobile devices (e.g., smartphones). Gamification features such as high scores and achievements add an element of fun and motivation similar to the observed casual training. Built-in editors allow users to contribute tasks, fostering a sense of community. The tool's adaptivity engine adjusts task difficulty based on user performance, further enhancing the learning experience. Although the tool was developed in a military context, its dynamic training approach, which promotes sustained engagement and skill improvement, is not domain-specific, making it suitable for a wide range of learning content.
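The spaced-repetition scheduling the didactic concept refers to can be sketched with a simplified SM-2-style update; this is illustrative of the method, not the tool's actual implementation:

```python
def next_interval(interval_days, ease, quality):
    """Simplified SM-2-style update: a correct answer (quality >= 3) grows
    the review interval by the ease factor; a failure resets the interval
    and lowers the ease. Quality is graded 0-5."""
    if quality < 3:                          # failed: relearn soon
        return 1, max(1.3, ease - 0.2)
    ease = max(1.3, ease + 0.1 * (quality - 3))
    return max(1, round(interval_days * ease)), ease

# A signature card answered well three times, then missed once:
interval, ease = 1, 2.5
history = []
for q in [5, 4, 5, 2]:
    interval, ease = next_interval(interval, ease, q)
    history.append(interval)
```

The growing gaps between reviews of well-known signatures, and the quick return of missed ones, are what make the short quiz sessions efficient.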
Current aerospace development creates an urgent need for the integrative system design of satellite-borne MWIR/LWIR hyperspectral imaging spectrometers, to address the problem of detecting targets with weak thermal contrast against the background at night, which can hardly be solved by traditional thermal infrared imaging systems. To efficiently optimize the imaging indices of such a spectrometer, i.e. ground sample distance (GSD), spectral resolution and noise equivalent temperature difference (NETD), this paper proposes a novel integrative system design method based on evaluating target detection performance through the multidimensional signal-to-clutter ratio (SCR). For assumed Gaussian target and background statistics, the multidimensional SCR is the primary parameter describing the detection performance of a variety of detection algorithms based on the generalized maximum likelihood formulation, especially when the thermal contrast between target and background approaches zero. We therefore calculate the multidimensional SCR from MWIR/LWIR hyperspectral images obtained by simulating the satellite-borne hyperspectral imaging chain for given imaging indices, as a proxy for detection performance. Based on training datasets composed of multidimensional SCR values and imaging indices, random forest regression is used to identify the sensitivity of each imaging index with respect to the multidimensional SCR. This sensitivity analysis helps determine the key indices to optimize, guiding the integrative system design. More importantly, the relationship between the SCR and the imaging indices can be predicted through random forest learning, which can be applied to further global optimization of the imaging indices with related optimization algorithms.
With the proposed method, the integrative system design is closely tied to the target detection task, meeting satellite-borne detection performance requirements, and manufacturing cost can be reduced by avoiding excessive index optimization.
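For Gaussian background statistics, the multidimensional SCR can be computed as a Mahalanobis distance between the target spectrum and the background; the sketch below uses synthetic data and an illustrative formulation:

```python
import numpy as np

def multidim_scr(target_spectrum, background_pixels):
    """Multidimensional signal-to-clutter ratio for Gaussian background
    statistics: the Mahalanobis distance between the target spectrum and the
    background mean under the background covariance (an illustrative
    formulation of the quantity used to rank detection performance)."""
    mu = background_pixels.mean(axis=0)
    cov = np.cov(background_pixels, rowvar=False)
    d = target_spectrum - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(3)
bands = 8                                     # toy band count
background = rng.normal(0.0, 1.0, (500, bands))
weak_target = background.mean(axis=0) + 0.1   # near-zero contrast per band
strong_target = background.mean(axis=0) + 2.0
scr_weak = multidim_scr(weak_target, background)
scr_strong = multidim_scr(strong_target, background)
```

Recomputing this quantity across simulated imaging chains with different GSD, spectral resolution and NETD settings produces the (index, SCR) training pairs the random forest regression consumes.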