This PDF file contains the front matter associated with SPIE Proceedings Volume 9452, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Typically, the modeling of linear and shift-invariant (LSI) imaging systems requires a complete description of each subcomponent in order to estimate the final system transfer function. To validate the modeled behavior, measurements are performed on each component. When dealing with packaged systems, there are many situations where some, if not all, of the component data are unknown. In these cases, the system is treated as a blackbox, and system-level measurements are used to estimate the transfer characteristics in order to model performance. This correspondence outlines the blackbox measured system component in the Night Vision Integrated Performance Model (NV-IPM). We describe how estimates of performance can be achieved with complete or incomplete measurements and how assumptions affect the final range predictions. The blackbox measured component is the final output of a measurement characterization and is used to validate the performance of delivered and prototype systems.
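A minimal numeric sketch of the idea, with hypothetical Gaussian-optics and sinc-detector MTFs: when component data are available, the system MTF is the cascade (product) of the component MTFs; in the blackbox case, a single measured system-level curve stands in for the whole chain.

```python
import numpy as np

# Hypothetical component MTFs: the cascade is their product.  In the
# blackbox case a single measured system-level curve replaces the chain.
xi = np.linspace(0.0, 20.0, 201)            # spatial frequency (cy/mrad)

mtf_optics = np.exp(-2.0 * (np.pi * 0.01 * xi) ** 2)   # assumed Gaussian blur
mtf_detector = np.abs(np.sinc(0.025 * xi))             # assumed pixel footprint
mtf_cascade = mtf_optics * mtf_detector                # component-level model

# Blackbox: one measured curve (simulated here with measurement noise)
# stands in for the entire cascade.
mtf_measured = mtf_cascade + 0.01 * np.random.randn(xi.size)
```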
Measuring the performance of a cathode ray tube (CRT) or liquid crystal display (LCD) is necessary to enable end-to-end system modeling and characterization of currently used high-performance analog imaging systems, such as 2nd Generation FLIR systems. If the display is color, the performance measurements are made more difficult by the underlying structure of the color pixel as compared to a monochrome pixel. Of the various characteristics of interest, we focus on determining the gamma value of a display. Gamma quantifies the non-linear response between the input gray scale and the displayed luminance. If the displayed image can be corrected for the display's gamma, an accurate scene can be presented or characterized for laboratory measurements such as MRT (Minimum Resolvable Temperature) and CTF (Contrast Threshold Function). In this paper, we present a method to determine the gamma of a color display using the Prichard 1980A photometer. Gamma corrections were applied to the test images to validate the accuracy of the computed gamma value. The method presented here is simple and easily implemented with a Prichard photometer.
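As an illustration of the underlying model (not the paper's measurement procedure), gamma can be estimated by a log-log linear fit of photometer luminance readings against commanded gray level; the readings below are invented for the example.

```python
import numpy as np

# Photometer readings (cd/m^2) at commanded gray levels -- invented values.
gray = np.array([32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
lum = np.array([1.2, 5.3, 13.0, 24.6, 40.9, 62.4, 89.6, 120.0])

# Model: L = L_max * (g / g_max)**gamma, so log L is linear in log(g/g_max).
gamma = np.polyfit(np.log(gray / gray.max()), np.log(lum / lum.max()), 1)[0]
print(f"estimated gamma = {gamma:.2f}")          # ~2.2 for these values

# Gamma correction: pre-distort gray levels so displayed luminance is linear.
corrected = 255.0 * (gray / 255.0) ** (1.0 / gamma)
```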
Researchers at the US Army Night Vision and Electronic Sensors Directorate have added the functionality of Machine Vision MRT (MV-MRT) to the NVLabCap software package. While the original calculations of MV-MRT were compared to human observers' performance using digital imagery in a previous effort,1 the technical approach was not tested on 8-bit imagery using a variety of sensors in a variety of gain and level settings. Now that it is simpler to determine the MV-MRT for a sensor in multiple gain settings, it is prudent to compare the results of MV-MRT in multiple gain settings to the performance of human observers for thermal imaging systems that are linear and shift-invariant. Here, a comparison of the results for an LWIR system to trained human observers is presented.
The Future E-O (FEO) program was established to develop a flexible, modular, automated test capability as part of the Next Generation Automatic Test System (NGATS) program to support the test and diagnostic needs of currently fielded U.S. Army electro-optical (E-O) devices, while remaining expandable to address the requirements of future Navy, Marine Corps, and Air Force E-O systems. Santa Barbara Infrared (SBIR) has designed, fabricated, and delivered three (3) prototype FEO systems for engineering and logistics evaluation prior to anticipated full-scale production beginning in 2016. In addition to a detailed overview of the FEO system hardware design, features, and testing capabilities, we describe the integration of SBIR's EO-IR sensor and laser test software package, IRWindows 4™, into the FEO to automate test execution, data collection and analysis, and the archiving and reporting of results.
Accurate signal intensity transfer function (SITF) measurements are necessary to determine the calibration factor in the 3D noise calculation of an electro-optical imaging system. The typical means of measuring a sensor's SITF is to place the sensor in a flooded-field environment at a distance relatively close to the aperture of the emitter. Unfortunately, this arrangement can allow additional contributions to the SITF in the form of scattering or stray light if the optics of the system under test are not designed properly. Engineers at the US Army Night Vision and Electronic Sensors Directorate are working to determine a means of evaluating the contribution due to scattering or stray light.
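For context, a minimal sketch of how an SITF is commonly reduced to a calibration factor: fit the mean output versus blackbody temperature difference and use the slope to convert 3D-noise components from counts to equivalent temperature. All values here are hypothetical.

```python
import numpy as np

# Mean output (counts) versus blackbody delta-T (K) -- hypothetical values.
delta_T = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
counts = np.array([480.0, 505.0, 531.0, 556.0, 580.0])

sitf = np.polyfit(delta_T, counts, 1)[0]   # slope, counts per kelvin
print(f"SITF = {sitf:.1f} counts/K")

# The calibration factor converts a 3D-noise component from counts to an
# equivalent temperature difference.
sigma_tvh_counts = 3.2                     # hypothetical noise term
sigma_tvh_kelvin = sigma_tvh_counts / sitf
```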
There is a growing interest in developing helmet-mounted digital imaging systems (HMDIS) for integration into military aircraft cockpits. This interest stems from the multiple advantages of digital over analog imaging, such as image fusion from multiple sensors, data processing to enhance image contrast, superposition of non-imaging data over the image, and sending images to a remote location for analysis. There are several properties an HMDIS must have in order to aid the pilot during night operations. In addition to resolution, image refresh rate, dynamic range, and sensor uniformity over the entire Focal Plane Array (FPA), the imaging system must have the sensitivity to detect the limited night light available, filtered through cockpit transparencies. Digital sensor sensitivity is generally measured monochromatically using a laser with a wavelength near the peak detector quantum efficiency, and is generally reported as either the Noise Equivalent Power (NEP) or Noise Equivalent Irradiance (NEI). This paper proposes a test system that measures the NEI of Short-Wave Infrared (SWIR) digital imaging systems using a broadband source that simulates the night spectrum. This method has a few advantages over a monochromatic method: the test conditions provide a spectrum closer to what is experienced by the end user, and the resulting NEI may be compared directly to modeled night-glow irradiance calculations. This comparison may be used to assess the Technology Readiness Level of the imaging system for the application. The test system is being developed under a Cooperative Research and Development Agreement (CRADA) with the Air Force Research Laboratory.
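A reduced sketch of the NEI bookkeeping, assuming a linear sensor, dark-subtracted frames, and an invented band-integrated irradiance: NEI is the irradiance at which the signal-to-noise ratio falls to one.

```python
import numpy as np

# Hypothetical broadband irradiance at the FPA and simulated frames.
E_band = 2.0e-10                                  # W/cm^2, assumed known
frames = np.random.normal(800.0, 12.0, (100, 64, 64))
dark_offset = 500.0                               # assumed dark level

signal = frames.mean() - dark_offset              # mean response (counts)
noise = frames.std(axis=0).mean()                 # temporal noise (counts)

nei = E_band * noise / signal                     # irradiance at SNR = 1
print(f"NEI ~ {nei:.2e} W/cm^2")
```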
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize the uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
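One conventional way to frame such a guideline, offered here as a plausible illustration rather than the paper's method, is a two-proportion power calculation for the smallest performance difference the test must resolve.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Observers (or trials) per condition to resolve p1 vs p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# e.g., resolving a change in probability of identification from 0.50 to 0.65
print(n_per_group(0.50, 0.65))
```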
Panoramic infrared imaging is relatively new and has many applications, including tower-mounted security systems, shipboard protection, and platform situational awareness. In this paper, we review metrics and methods that can be used to analyze requirements for an infrared panoramic imaging system for military vehicles. We begin with a broad view of general military requirements organized into three categories: survivability, mobility, and lethality. A few requirements for the sensor modes of operation across all categories are selected so that the panoramic system design can address as many needs as possible while remaining affordable. Metrics and associated methods that can translate military operational requirements into panoramic imager requirements are discussed in detail.
A 3-D simulation of the polarization-dependent reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation considers polarized or unpolarized laser sources and calculates the polarization states upon reflection at the sea surface. It is suitable for the radiance calculation of the scene in different spectral wavebands (e.g., near-infrared, SWIR), not including the camera degradations. The simulation also considers a bistatic configuration of laser source and receiver as well as different atmospheric conditions. In the SWIR, the detected total power of reflected laser light is compared with data collected in a field trial. Our computer simulation combines the 3-D simulation of a maritime scene (open sea/clear sky) with the simulation of polarized or unpolarized laser light reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. To predict the input to a camera equipped with a linear polarizer, the polarized sea surface radiance must be calculated for the specific waveband. The s- and p-polarization states are calculated for the emitted sea surface radiance and the specularly reflected sky radiance to determine the total polarized sea surface radiance of each component. The states of polarization and the radiance of laser light specularly reflected at the wind-roughened sea surface are calculated by considering the s- and p-components of the electric field of the laser light with respect to the specular plane of incidence. This is done using the formalism of their coherence matrices according to E. Wolf [1]. Additionally, an analytical statistical sea surface BRDF (bidirectional reflectance distribution function) is considered for the reflection of laser light radiances. Validation of the simulation results is required to ensure model credibility and applicability to maritime laser applications. For validation purposes, field measurement data (images and meteorological data) were analyzed. An infrared laser, with or without a mounted polarizer, produced laser beam reflections at the water surface, and images were recorded by a camera equipped with a polarizer in horizontal or vertical alignment. The validation is done by numerical comparison of the measured total laser power extracted from the recorded images with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam and different alignments of the polarizer for the laser (vertical/horizontal/without) and the camera (vertical/horizontal).
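One building block of such a model is the Fresnel split of the field into s- and p-components at a sea-surface facet. A minimal sketch, assuming a flat air-water interface and an assumed SWIR refractive index; the full simulation adds wave slopes, the BRDF, and coherence-matrix propagation.

```python
import numpy as np

def fresnel_rs_rp(theta_i_deg, n1=1.0, n2=1.33):   # n2: assumed water index
    ti = np.radians(theta_i_deg)
    tt = np.arcsin(n1 * np.sin(ti) / n2)           # Snell's law
    rs = (n1*np.cos(ti) - n2*np.cos(tt)) / (n1*np.cos(ti) + n2*np.cos(tt))
    rp = (n2*np.cos(ti) - n1*np.cos(tt)) / (n2*np.cos(ti) + n1*np.cos(tt))
    return rs**2, rp**2                            # s/p power reflectances

Rs, Rp = fresnel_rs_rp(60.0)
dop = abs(Rs - Rp) / (Rs + Rp)   # degree of polarization of reflected light
print(f"Rs={Rs:.3f}, Rp={Rp:.3f}, DoP={dop:.2f}")
```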
Atmospheric turbulence is a well-known phenomenon that often degrades image quality due to intensity fluctuations, distortion, and blur in electro-optic and thermal imaging systems. To properly assess the performance of an imaging system over the typical turbulence trade space, a time consuming and costly field study is often required. A fast and realistic turbulence simulation will allow the performance assessment of an imaging system under various turbulence conditions to be done as well as provide input data for the evaluation of turbulence mitigation algorithms in a cost efficient manner. The simulation is based on an empirical model with parameters derived from the first and second-order statistics of imaging distortions measured from field collected data. The dataset consists of image sequences recorded with a variable frame rate visible camera from strong to weak turbulence conditions. The simulation uses pristine, single images containing no turbulence effects as an input and produces image sequences degraded by the specified turbulence. Target range, optics diameter, wavelength, detector integration time, and the wind velocity component perpendicular to the propagation path all contribute to the severity of the atmospheric turbulence distortions and are included in the simulation. The addition of the detector integration time expands the functionality of the simulation tool to include imagers with lower frames rates. Examples are presented demonstrating the utility of the turbulence simulation.
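A heavily reduced sketch of this style of empirical simulation: warp a pristine frame with a spatially correlated random shift field whose standard deviation and correlation length would, in practice, come from the field-measured statistics. The parameter values here are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def turbulent_frame(img, shift_sigma=1.5, corr_px=12, rng=np.random):
    """Warp one pristine frame with a correlated random shift field."""
    dy = gaussian_filter(rng.randn(*img.shape), corr_px)
    dx = gaussian_filter(rng.randn(*img.shape), corr_px)
    for d in (dy, dx):
        d *= shift_sigma / d.std()        # impose the measured shift sigma
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode='nearest')

pristine = np.random.rand(128, 128)       # stand-in for an input image
sequence = [turbulent_frame(pristine) for _ in range(10)]
```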
Several image processing techniques for turbulence mitigation have been shown to be effective under a wide range of long-range capture conditions; however, complex, dynamic scenes have often required manual interaction with the algorithm's underlying parameters to achieve optimal results. While this level of interaction is sustainable in some workflows, in-field determination of ideal processing parameters greatly diminishes usefulness for many operators. Additionally, some use cases, such as those that rely on unmanned collection, lack human-in-the-loop usage. To address this shortcoming, we have extended a well-known turbulence mitigation algorithm based on bispectral averaging with a number of techniques that greatly reduce (and often eliminate) the need for operator interaction. Automation was added for turbulence strength estimation (Fried's parameter) and for the determination of optimal local averaging windows that balance turbulence mitigation against the preservation of dynamic scene content (non-turbulent motion). These modifications deliver a level of enhancement quality approaching that of manual tuning, without operator interaction. As a consequence, the range of operational scenarios where this technology is of benefit has been significantly expanded.
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion, and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.
A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank and select the best candidate target, and gate the selected target to further improve tracker performance. This paper describes a new adaptive tracker algorithm added to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). The new adaptive tracking algorithm is an optional feature used with any of the existing internal NTCS or user-defined seeker algorithms (e.g., binary centroid, intensity centroid, and threshold intensity centroid). The algorithm segments the detected pixels into clusters, and the smallest set of clusters that meets the detection criterion is obtained by using a knapsack algorithm to identify the set of clusters that should not be used. The rectangular area containing the chosen clusters defines an inner boundary, from which a weighted centroid is calculated as the aim-point. A track-gate is then positioned around the clusters, taking into account the rate of change of the bounding area and compensating for any gimbal displacement. A sequence of scenarios is used to test the new tracking algorithm on a generic unclassified DDG ShipIR model, with and without flares, and to demonstrate how some of the key seeker signals are impacted by both the ship and flare intrinsic signatures.
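A toy version of the selection and aim-point steps, with a greedy stand-in for the knapsack formulation described above and invented cluster values:

```python
import numpy as np

# Each cluster: (summed intensity, y centroid, x centroid) -- invented values.
clusters = [(120.0, 40.0, 52.0), (80.0, 44.0, 60.0), (15.0, 10.0, 90.0)]
detect_threshold = 180.0          # hypothetical detection criterion

chosen, total = [], 0.0
for c in sorted(clusters, key=lambda c: -c[0]):    # brightest first (greedy)
    if total >= detect_threshold:
        break
    chosen.append(c)
    total += c[0]

# Aim-point: intensity-weighted centroid of the chosen clusters.
w = np.array([c[0] for c in chosen])
aim_y = np.dot(w, [c[1] for c in chosen]) / w.sum()
aim_x = np.dot(w, [c[2] for c in chosen]) / w.sum()
print(f"aim-point = ({aim_y:.1f}, {aim_x:.1f})")
```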
Accurate infrared signature prediction of targets, such as humans or ground vehicles, depends primarily on the realistic prediction of physical temperatures. Thermal model development typically requires a geometric description of the target (i.e., a 3D surface mesh) along with material properties for characterizing the thermal response to simulated weather conditions. Once an accurate thermal solution has been obtained, signature predictions for an EO/IR spectral waveband can be generated. The image rendering algorithm should consider the radiative emissions, diffuse/specular reflections, and atmospheric effects to depict how an object in a natural scene would be perceived by an EO/IR sensor. The EO/IR rendering process within MuSES, developed by ThermoAnalytics, can be used to create a synthetic radiance image that predicts the energy detected by a specific sensor just prior to passing through its optics. For additional realism, blurring due to lens diffraction and noise due to variations in photon detection can also be included, via specification of sensor characteristics. Additionally, probability of detection can be obtained via the Targeting Task Performance (TTP) metric, making it possible to predict a target’s at-range detectability to a particular threat sensor. In this paper, we will investigate the at-range contrast and detectability of some example targets and examine the effect of various techniques such as sub-pixel sampling and target pixel thresholding.
Existing FLIR detection models such as NVThermIP and NV-IPM, from the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD), use only basic inputs to describe the target and background (area of the target, and average and RMS temperatures of both the target and background). The objective of this work is to bridge the gap between more sophisticated FLIR detection models (of the sensor) and high-fidelity signature models, such as the NATO-standard ShipIR model. A custom API is developed to load an existing ShipIR scenario model and perform the analysis from any user-specified range, altitude, and attack angle. The analysis consists of computing the total area of the target (m²), the average and RMS variation in target source temperature, and the average and RMS variation in the apparent temperature of the background. These results are then fed into the associated sensor model in NV-IPM to determine its probability of detection (versus range). Since ShipIR computes and attenuates the spectral source radiance at every pixel, the blackbody source and apparent temperatures are easily obtained for each point using numerical iteration (on temperature), using the spectral attenuation and path emissions from MODTRAN (already used by ShipIR to predict the apparent target and background radiance). In addition to performing the above calculations on the whole target area, a variable threshold and clustering algorithm is used to analyse whether a sub-area of the target, with a higher-contrast signature but smaller size, is more likely to be detected. The methods and results of this analysis should provide the basis for a more formal interface between the two models.
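The numerical iteration on temperature can be pictured as inverting the band-integrated Planck radiance, for example by bisection. A minimal sketch assuming a 3-5 um band and radiance in W/(m^2 sr); the actual analysis folds in MODTRAN attenuation and path emission.

```python
import numpy as np
from scipy.integrate import quad

C1, C2 = 1.191042e8, 1.4387752e4          # W um^4 / (m^2 sr), um K

def planck(lam_um, T):                     # spectral radiance, W/(m^2 sr um)
    return C1 / lam_um**5 / (np.exp(C2 / (lam_um * T)) - 1.0)

def band_radiance(T, lo=3.0, hi=5.0):      # band-integrated, W/(m^2 sr)
    return quad(planck, lo, hi, args=(T,))[0]

def apparent_temperature(L_band, T_lo=200.0, T_hi=1000.0, tol=1e-3):
    while T_hi - T_lo > tol:               # bisection on temperature
        T_mid = 0.5 * (T_lo + T_hi)
        if band_radiance(T_mid) < L_band:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

print(apparent_temperature(band_radiance(310.0)))   # recovers ~310 K
```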
The Thermal Range Model (TRM4)1 developed by the Fraunhofer IOSB of Germany is a commonly used performance model for military target acquisition systems. There are many similarities between the U.S. Army Night Vision Integrated Performance Model (NV-IPM)2 and TRM4. Almost all of the camera performance characterizations, such as signal-to-noise calculations and modulation transfer theory, are identical; only the human vision model and performance metrics differ. Utilizing the new Custom Component Generator in NV-IPM, we develop a component to calculate the Average Modulation at Optimal Phase (AMOP) and Minimum Difference Signal Perceived (MDSP) metrics used in TRM4. The results are compared with actual TRM4 results for a variety of imaging systems. This effort demonstrates that NV-IPM is a versatile system design tool that can easily be extended to include a variety of image quality and performance metrics.
Using the latest models from the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD), a survey of monochrome and color imaging systems at daylight and low light levels is conducted. Each camera system is evaluated and compared under several different assumptions, such as equivalent field of view with equal and variable f/#, common lens focal length and aperture, with high-dynamic-range comparisons, and over several light levels. The modeling is done using the Targeting Task Performance (TTP) metric in the latest version of the Night Vision Integrated Performance Model (NV-IPM). The comparison is performed over the V parameter, the main output of the TTP metric. Probability of identification (PID) versus range predictions are a direct non-linear mapping of the V parameter as a function of range. Finally, a comparison between the performance of a Bayer-filtered color camera, the same camera with the IR-block filter removed, and a monochrome version of the same camera is also conducted.
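The non-linear mapping referred to above is the standard target transfer probability function used with the TTP metric; the V-versus-range values in this sketch are hypothetical.

```python
import numpy as np

def pid(V, V50):
    """Target transfer probability function of the TTP metric."""
    E = 1.51 + 0.24 * (V / V50)
    return (V / V50) ** E / (1.0 + (V / V50) ** E)

ranges_km = np.array([0.5, 1.0, 2.0, 4.0])
V_of_R = np.array([60.0, 30.0, 15.0, 7.5])   # hypothetical V at each range
print(pid(V_of_R, V50=20.0))                 # PID falls off with range
```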
Modern thermal imaging lenses for uncooled detectors are high-aperture systems; very often, their aperture-based f-number is faster than 1.2. The impact of this on the depth of field is dramatic, especially for narrow-field lenses. Users would like to know how image quality changes, with and without refocusing, for objects at different distances from the camera core. The depth-of-field approach presented here is based on the lens-specific through-focus MTF, averaged over the detector area and determined at the detector Nyquist frequency, which is defined by the pixel pitch. In this way, the specific lens and the specific FPA geometry (pixel pitch, detector area) are considered. The condition that the through-focus MTF at full Nyquist must be higher than 0.25 defines a certain symmetrical depth of focus. This criterion provides good discrimination for reasonable lens/detector combinations. The examples chosen reflect the current development of uncooled camera cores. The symmetrical depth of focus is transferred to object space using paraxial relations. This defines a typical depth-of-field diagram containing three functions: hyperfocal distance, and nearest and furthest distances versus the focus distance (best focus). Pictures taken with an IR camera illustrate the effect on the depth of field and its dependence on focal length; these pictures confirm the methodology. A separate problem is the acceptable drop of resolution in combination with a specific camera core and specific object scenes. We propose to evaluate the MTF at half the Nyquist frequency; this quantifies the resolution loss without refocus, in accordance with the IR-picture degradation at the limits of the depth of field. The approach is applied to different commercially available lenses, and pictures illustrate the depth of field for different pixel pitches and pixel counts.
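For the transfer to object space, the usual paraxial depth-of-field relations apply. A sketch with a blur tolerance of one 17 um pixel standing in for the through-focus-MTF > 0.25 criterion (the paper's actual criterion is MTF-based):

```python
def dof(f_mm, fnum, focus_mm, c_mm=0.017):    # c: blur tolerance, ~one pixel
    H = f_mm**2 / (fnum * c_mm) + f_mm        # hyperfocal distance
    near = focus_mm * (H - f_mm) / (H + focus_mm - 2.0 * f_mm)
    far = (focus_mm * (H - f_mm) / (H - focus_mm)
           if focus_mm < H else float('inf'))
    return H, near, far

H, near, far = dof(f_mm=50.0, fnum=1.2, focus_mm=20000.0)
print(f"hyperfocal {H/1e3:.1f} m; sharp from {near/1e3:.1f} to {far/1e3:.1f} m")
```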
Human visual system (HVS) "resolution" (a.k.a. visual acuity) varies with illumination level, target characteristics, and target contrast. For signage, computer displays, cell phones, and TVs, a viewing distance and display size are selected; the number of display pixels is then chosen such that each pixel subtends 1 min⁻¹. Resolution of low-contrast targets is quite different: it is best described by Barten's contrast sensitivity function. Target acquisition models predict maximum range when the display pixel subtends 3.3 min⁻¹. The optimum viewing distance is nearly independent of magnification. Noise increases the optimum viewing distance.
The design and modeling of compressive sensing (CS) imagers is difficult due to the complexity and non-linearity of the system and reconstruction algorithm. The Night Vision Integrated Performance Model (NV-IPM) is a linear imaging system design tool that is very useful for complex system trade studies. The custom component generator included in NV-IPM will be used to incorporate a recently published theory for CS that links measurement noise, easily calculated with NV-IPM, to the noise of the reconstructed CS image, given the estimated sparsity of the scene and the number of measurements as inputs. As the sparsity also depends on other factors, such as the optical transfer function and the scene content, an empirical relationship will be developed between the linear model within NV-IPM and the non-linear reconstruction algorithm using measured test data. Using the theory, a CS imager with a varying number of measurements will be compared to a notional traditional imager.
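A heavily hedged sketch of the kind of noise bookkeeping involved; the sqrt(N/M) amplification used here is a common rule of thumb for random measurement matrices, not the specific published theory the paper incorporates.

```python
import numpy as np

N, M, K = 1024, 256, 20          # scene size, measurements, assumed sparsity
sigma_meas = 0.01                # per-measurement noise (NV-IPM-style input)

x = np.zeros(N)                  # K-sparse stand-in scene
x[np.random.choice(N, K, replace=False)] = 1.0
Phi = np.random.randn(M, N) / np.sqrt(M)          # random measurement matrix
y = Phi @ x + sigma_meas * np.random.randn(M)     # noisy CS measurements

sigma_recon = sigma_meas * np.sqrt(N / M)         # rule-of-thumb estimate
print(f"estimated reconstruction noise ~ {sigma_recon:.3f}")
```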
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying and modeling the human visual discrimination process as it pertains to military imaging systems. In order to develop sensor performance models, human observers are trained to expert levels in the identification of military vehicles. From 1998 until 2006, the experimental stimuli were block-randomized, meaning that stimuli with similar difficulty levels (for example, in terms of distance from target, blur, or noise) were presented together in blocks of approximately 24 images, but the order of images within a block was random. Starting in 2006, complete randomization came into vogue, meaning that difficulty could change image to image. It was thought that this would provide a more statistically robust result. In this study we investigated the impact of the two types of randomization on performance in two groups of observers matched for skill to create equivalent groups. It is hypothesized that Soldiers in the Completely Randomized condition will have to shift their decision criterion more frequently than Soldiers in the Block Randomized group, and this shifting is expected to impede performance, so that Soldiers in the Block Randomized group perform better.
This report describes tasks comparing the simulated performance levels of infrared (IR) sensing systems in detecting, recognizing, and identifying (DRI) targets using the Night Vision Integrated Performance Model (NV-IPM) version 1.1. Both mid-wave infrared (MWIR) and long-wave infrared (LWIR) systems, chosen to represent the current state of the art, were analyzed across various environmental conditions. These conditions included a range of both man-made and natural obscurants, selected to simulate atmospheric conditions commonly experienced throughout the world. This report investigates the validity of the NV-IPM, down-selects top-performing systems from an original set, and provides detailed performance analysis of these best-of-breed systems in various environmental scenarios. Six sensing systems were evaluated against a variety of environmental variations: Indium-Antimonide (InSb) MWIR, Mercury-Cadmium-Telluride (MCT) MWIR, nBn InSb MWIR, Quantum Well Infrared Photodetector (QWIP) LWIR, uncooled LWIR, and a dual-band MCT MWIR/LWIR system. Specifications for the IR systems were obtained from manufacturers or relevant published literature. Simulation results indicated the nBn InSb MWIR system was the strongest performer in many of the tests.
NVESD's new integrated sensor performance model, NV-IPM, replaces the discrete spectral-band models that preceded it (NVTherm, SSCamIP, etc.). Many advanced modeling functions are now more readily available, easier to implement, and integrated within a single model architecture. For the legacy model user with ongoing modeling duties, however, the conversion of legacy decks to NV-IPM is of more immediate concern than mastering the many "power features" now available. This paper addresses the process by which the legacy model user can make a smooth transition to NV-IPM, including the conversion of legacy sensor decks to NV-IPM format, differences in the parameters entered in the new versus the old models, and a comparison of the predicted performance differences between NV-IPM and the legacy models. Examples are presented to demonstrate the ease of sensor deck conversion from legacy models and to highlight enhanced model capabilities available with minimal transition effort.
The latest version of the U.S. Army imager performance model, the Night Vision Integrated Performance Model (NV-IPM), is now contained within a single, systems-engineering-oriented design environment. This new model interface allows sensor systems to be represented using modular, reusable components. A new feature, added in version 1.3 of NV-IPM, allows users to create custom components that can be incorporated into modeled systems. The ability to modify existing component definitions and create entirely new components greatly enhances the extensibility of the model architecture. In this paper we discuss the structure of the custom component and parameter generators and provide several examples where this feature can be used to easily create new and unique component definitions within the model.
The Networked Imaging Sensor (NIS) model takes as input target acquisition probability as a function of time for individuals or individual imaging sensors, and outputs target acquisition probability for a collection of imaging sensors and individuals. System target acquisition takes place the moment the first sensor or individual acquires the target. The derivation of the NIS model implies it is applicable to multiple moving sensors and targets. The principal assumption of the NIS model is the independence of the events that give rise to the input target acquisition probabilities. To investigate the validity of the NIS model, we consider a collection of single images where neither the sensor nor the target is moving. This paper investigates the ability of the NIS model to predict system target acquisition performance when multiple observers view first- and second-generation thermal field-of-view imagery containing either zero or one stationary target, in a laboratory environment, when observers have a maximum of 12, 17, or unlimited seconds to acquire the target. Modeled and measured target acquisition performance are in good agreement.
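Under the stated independence assumption, the network-level probability takes the familiar series-system form shown below; the observer curves are invented for the example.

```python
import numpy as np

def nis_probability(p_individual):
    """p_individual: (n_sensors, n_times) acquisition probabilities."""
    p = np.asarray(p_individual)
    return 1.0 - np.prod(1.0 - p, axis=0)   # first acquisition by anyone

t = np.array([0.0, 4.0, 8.0, 12.0])         # seconds
p1 = np.array([0.0, 0.30, 0.50, 0.60])      # observer/sensor 1
p2 = np.array([0.0, 0.20, 0.40, 0.55])      # observer/sensor 2
print(nis_probability([p1, p2]))            # >= max(p1, p2) at every time
```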
A model was developed to understand the effects of spatial resolution and signal-to-noise ratio (SNR) on the detection and tracking performance of wide-field, diffraction-limited electro-optic and infrared motion imagery systems. False-positive detection probability and false-positive rate per frame were calculated as a function of target-to-background contrast and object size. Results showed that moving objects are fundamentally more difficult to detect than stationary objects, because for fixed objects the SNR increases and the false-positive detection probability diminishes rapidly with successive frames, whereas for moving objects the false detection rate remains constant or increases with successive frames. The model specifies that the desired performance of a detection system, measured by the false-positive detection rate, can be achieved by imaging system designs with different combinations of SNR and spatial resolution, usually requiring several pixels resolving the object; this capability to trade off resolution and SNR enables system design trades and cost optimization. For operational use, the detection thresholds required to achieve a particular false detection rate can be calculated. Interestingly, for moderate-size images the model converges to the Johnson criteria: Johnson found that an imaging system with an SNR >3.5 has a probability of detection >50% when the resolution on the object is 4 pixels or more. Under these conditions our model finds the false-positive rate is less than one per hundred image frames, and the ratio of the probability of object detection to false-positive detection is much greater than one. The model was programmed in MATLAB to generate simulated image frames for visualization.
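To make the threshold/false-alarm bookkeeping concrete, a sketch assuming zero-mean Gaussian noise and independent per-pixel tests; requiring detections to persist over several pixels or frames, as the model above does, lowers these rates substantially.

```python
from math import sqrt, erfc

def p_fa_per_pixel(threshold_sigma):
    """One-sided Gaussian exceedance probability."""
    return 0.5 * erfc(threshold_sigma / sqrt(2.0))

pixels_per_frame = 640 * 480
for t in (3.5, 4.0, 4.5, 5.0):
    fp = pixels_per_frame * p_fa_per_pixel(t)
    print(f"threshold {t:.1f} sigma -> {fp:8.2f} false positives/frame")
```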
In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but greatly reduces development cost.
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.
High pixel temperatures for IR scene projector arrays face materials challenges of oxidation, diffusion, and recrystallization. For cost effective development of new high-temperature materials, we have designed and fabricated simplified pixels for testing. These consist of resistive elements, traces, and bond pads sandwiched between dielectric layers on Si wafers. Processing involves a pad exposure etch, a pixel outline etch, and an undercut etch to thermally isolate the resistive element from the substrate. Test pixels were successfully fabricated by electron-beam lithography using a combination of wet and dry etching.
Infrared scene projectors (IRSPs) are a key part of performing dynamic testing of infrared (IR) imaging systems. Two important properties of an IRSP system are apparent temperature and thermal resolution. Infrared scene projector technology continues to progress, with several systems capable of producing high apparent temperatures currently available or under development. These systems use different emitter pixel technologies, including resistive arrays, digital micro-mirror devices (DMDs), liquid crystals and LEDs, to produce dynamic infrared scenes. A common theme amongst these systems is the specification of the bit depth of the read-in integrated circuit (RIIC) or projector engine, as opposed to specifying the desired thermal resolution as a function of radiance (or apparent temperature). For IRSPs, producing an accurate simulation of a realistic scene or scenario may require simulating radiance values that range over multiple orders of magnitude. Under these conditions, the necessary resolution or “step size” at low temperature values may be much smaller than what is acceptable at very high temperature values. A single bit depth value specified at the RIIC, especially when combined with variable transfer functions between commanded input and radiance output, may not offer the best representation of a customer’s desired radiance resolution. In this paper, we discuss some of the various factors that affect thermal resolution of a scene projector system, and propose some specification guidelines regarding thermal resolution to help better define the real needs of an IR scene projector system.
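A small sketch of why a single RIIC bit depth does not pin down thermal resolution: with a non-linear transfer function (a purely illustrative quadratic here), the radiance step per least-significant bit varies by orders of magnitude across the range.

```python
import numpy as np

bits = 16
codes = np.arange(2 ** bits, dtype=float)
radiance = 1e-4 + 5e-12 * codes ** 2       # illustrative nonlinear transfer

step = np.diff(radiance)                   # radiance resolution per LSB
print(f"step near bottom of range: {step[100]:.2e}")
print(f"step near top of range:    {step[-100]:.2e}")  # far coarser
```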
In an effort to improve technology for performance testing and calibration of multispectral and hyperspectral imagers, the National Institute of Standards and Technology (NIST) has been developing a Hyperspectral Image Projector (HIP) capable of projecting dynamic scenes that include distinct, programmable spectra in each of its 1024x768 spatial pixels. The HIP comprises a spectral engine, a light source capable of generating the spectra in the scene, coupled to a spatial engine capable of projecting the spectra into the correct locations in the scene. In the prototype HIP, the light exiting the Visible-Near-Infrared (VNIR) / Short-Wavelength Infrared (SWIR) spectral engine is spectrally dispersed and needs to be spectrally homogenized before it enters the spatial engine. In this paper we describe the results of a study of several different techniques for performing this spectral homogenization. These techniques include an integrating sphere, a liquid light guide, a randomized fiber bundle, and an engineered diffuser, in various combinations. The spectral uniformity of projected HIP scenes is measured and analyzed using the spectral angle mapper (SAM) algorithm over the VNIR spectral range. The SAM provides a way to analyze the spectral uniformity independently of the radiometric uniformity. The goal of the homogenizer is a spectrally uniform and bright projected image. An integrating sphere provides the most spectrally uniform image, but at a great loss of light compared with the other methods. The randomized fiber bundle generally outperforms the liquid light guide in both spectral homogenization and brightness. Using an engineered diffuser with the randomized fiber bundle increases the spectral uniformity by a factor of five, with a decrease in brightness by a factor of five, compared with the randomized fiber bundle alone. The combination of an engineered diffuser with a randomized fiber bundle provides spectral uniformity comparable to the integrating sphere while enabling 40 times greater brightness.
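The SAM computation itself is compact: the angle between each pixel's spectrum and a reference spectrum, insensitive to overall brightness. A minimal sketch with invented spectra:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle mapper: brightness-independent spectral difference."""
    cosang = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))   # radians

ref = np.array([0.2, 0.5, 0.9, 0.6, 0.3])          # reference spectrum
pix = 0.7 * ref + 0.01 * np.random.randn(5)        # dimmer, slightly off
print(spectral_angle(pix, ref))    # small angle despite brightness change
```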
Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its infrared scene projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input-drive-to-radiance-output mapping for each emitter pixel. This requires many highly accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System, or CRS), which incorporates all necessary measurement, calibration, and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
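A much-reduced sketch of the general NUC idea: fit one emitter's drive-to-radiance response and invert the fit to obtain the drive needed for a commanded radiance. The cubic fit and the data are assumptions for illustration; AI-NUC's actual fitting and data vetting are more involved.

```python
import numpy as np

drives = np.array([0.1, 0.3, 0.5, 0.7, 0.9])         # commanded inputs
radiance = np.array([0.02, 0.11, 0.30, 0.58, 0.95])  # one pixel's output

coef = np.polyfit(drives, radiance, 3)               # fitted response curve

def drive_for(target):
    """Invert the fit: drive value producing the target radiance."""
    roots = np.roots(np.polyadd(coef, [0.0, 0.0, 0.0, -target]))
    real = roots[np.isreal(roots)].real
    return real[(real >= 0.0) & (real <= 1.0)][0]

print(drive_for(0.5))   # NUC table entry for mid-scale radiance, this pixel
```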
Fast and accurate computation of light path deviation due to atmospheric refraction is an important requirement for real-time simulation of optical imaging sensor systems. A large body of existing literature covers various methods for applying Snell's law to the light path ray tracing problem. This paper discusses the adaptation to real-time simulation of the atmospheric refraction ray tracing techniques used in mid-1980s LOWTRAN releases. The refraction ray trace algorithm published in a LOWTRAN-6 technical report by Kneizys et al. has been coded in MATLAB for development and in C for simulation use. To this published algorithm we have added tuning parameters for variable path segment lengths, and extensions for Earth-grazing and exoatmospheric "near Earth" ray paths. Model atmosphere properties used to exercise the refraction algorithm were obtained from tables published in another LOWTRAN-6-related report. The LOWTRAN-6-based refraction model is applicable to atmospheric propagation at wavelengths in the IR and visible bands of the electromagnetic spectrum. It has been used during the past two years by engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) in support of several advanced imaging sensor simulations. Recently, a faster (but sufficiently accurate) method using Gauss-Chebyshev quadrature for evaluating the refraction integral was adopted.
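A compact sketch of the physics underlying such layer tracing: in a spherically stratified atmosphere the refractive invariant n(r) r sin(z) is conserved along the ray (Bouguer's formula), so the apparent zenith angle can be propagated between heights. The exponential refractivity profile below is a stand-in for the LOWTRAN model-atmosphere tables.

```python
import numpy as np

R_E = 6371.0e3                                    # Earth radius (m)

def n_of_h(h_m):                                  # assumed refractivity profile
    return 1.0 + 2.7e-4 * np.exp(-h_m / 8000.0)

h0, z0 = 10.0, np.radians(88.0)                   # start height, zenith angle
invariant = n_of_h(h0) * (R_E + h0) * np.sin(z0)  # conserved along the ray

for h in np.linspace(h0, 30000.0, 7):             # apparent angle vs height
    z = np.arcsin(invariant / (n_of_h(h) * (R_E + h)))
    print(f"h = {h:8.0f} m   zenith angle = {np.degrees(z):.4f} deg")
```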
The infrared (IR) energy radiated from any source passes through the atmosphere before reaching the sensor. As a result, the total signature captured by the IR sensor is significantly modified by atmospheric effects. The dominant physical quantities that constitute these atmospheric effects are the atmospheric transmittance and the atmospheric path radiance: the incoming IR radiation is attenuated by the transmittance, and the path radiance is added on top of the attenuated radiation. In IR scene simulations, OpenGL is widely used for rendering. In the literature there are studies that model the atmospheric effects in an IR band using OpenGL's exponential fog model, as suggested by Beer's law. In the standard OpenGL pipeline, the fog model needs single equivalent OpenGL variables for the transmittance and path radiance, which actually depend both on the distance between the source and the sensor and on the wavelength of interest. In conditions where the range dependency cannot be modeled as an exponential function, it is not accurate to replace the atmospheric quantities with a single parameter. The introduction of the OpenGL Shading Language (GLSL) has enabled developers to use the GPU more flexibly. In this paper, a novel method is proposed for modeling the atmospheric effects using least-squares polynomial fitting, evaluated in programmable OpenGL shader programs built with GLSL. In this context, a radiative transfer model code is used to obtain the transmittance and path radiance data. Then, polynomial fits are computed for the range dependency of these variables. Hence, the atmospheric effects model data that must be uploaded to GPU memory are significantly reduced. Moreover, the error due to fitting is negligible as long as narrow IR bands are used.
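The fitting step can be sketched offline in a few lines; the transmittance samples below are invented, and the resulting handful of coefficients is what would be uploaded as shader uniforms and evaluated per fragment in GLSL.

```python
import numpy as np

ranges = np.linspace(0.1, 10.0, 50)                          # km
tau = 0.9 * np.exp(-0.18 * ranges) + 0.02 * np.sqrt(ranges)  # invented samples

coef = np.polyfit(ranges, tau, 4)      # 5 coefficients replace a full table
fit = np.polyval(coef, ranges)
print(f"max fit error: {np.abs(fit - tau).max():.2e}")

# In GLSL the coefficients would be shader uniforms, evaluated per fragment
# with Horner's scheme; the same would be done for path radiance.
```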
A new definition is proposed for the calibration of Night Vision Goggle (NVG) gain. This definition is based on the measurement of radiometric input and output quantities of the NVG. While the old definition used the "equivalent fL," a non-SI-traceable luminance unit, the new definition utilizes radiance quantities that are traceable to SI units through NIST standards. The new NVG gain matches the previous one through the application of a correction coefficient originating from the conversion of radiance to luminance units. The new definition was tested at the NIST Night Vision Calibration Facility, and the measurement results were compared to data obtained with a Hoffman Test Set Model ANV-126. Comparing the radiometric quantities of the Hoffman Test Set with those measured by the NIST transfer-standard radiometer indicates that the observed differences of up to 15% were due to calibration and experimental errors of the ANV-126 Test Set. In view of the different spectral characteristics of luminophores that can be utilized in NVG designs, the simulation of the NVG output for gain measurement was performed. The NVG output was simulated with a sphere-based source using different LEDs, and the measured gain was compared to that obtained with the ANV-126 internal luminance meter. An NVG gain uncertainty analysis was performed for Type A, B, and C goggles.
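The radiance-to-luminance conversion behind the correction coefficient is the standard photopic weighting. A rough sketch with a Gaussian stand-in for the tabulated V(lambda) curve and an assumed flat output spectrum:

```python
import numpy as np

lam = np.linspace(380e-9, 780e-9, 401)                 # wavelength (m)
V = np.exp(-0.5 * ((lam - 555e-9) / 45e-9) ** 2)       # rough photopic curve
L_e = np.full_like(lam, 1e4)                           # assumed flat spectral
                                                       # radiance, W/(m^2 sr m)

L_v = 683.0 * np.sum(L_e * V) * (lam[1] - lam[0])      # luminance, cd/m^2
print(f"luminance = {L_v:.3f} cd/m^2")
# The correction coefficient is the ratio of this luminance to the
# band-integrated radiance, and depends on the luminophore spectrum.
```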
We developed IR image generation software (S/W) applicable to moving objects. To validate the S/W, the IR signal and the surface temperature were measured from a test ship operating along a designated route on the sea, and the weather conditions and ship positions were measured simultaneously. Calculations of the surface temperature and the IR signal of the test ship were performed using the measured weather data. Results obtained from the measurement and the numerical analysis show fairly good agreement, demonstrating the applicability of the developed S/W for analyzing IR signals from moving objects.
Most mass-produced, commercially available, and fielded military reflective imaging systems operate across broad swaths of the visible, near-infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrast between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, is sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distributions of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene even when the waveband-integrated signatures would be about the same, but it is an expensive, complex, non-compact, and slow solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
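A toy version of the contrast argument: two invented reflectance spectra that cross within the band produce little broadband contrast but substantial contrast through a narrow, well-placed filter.

```python
import numpy as np

lam = np.linspace(0.9, 1.7, 400)                    # SWIR wavelengths (um)
rho_t = 0.3 + 0.2 * np.sin(6.0 * lam)               # target reflectance
rho_b = 0.3 + 0.2 * np.sin(6.0 * lam + 2.5)         # background reflectance

def band_contrast(center_um, width_um):
    f = np.exp(-0.5 * ((lam - center_um) / width_um) ** 2)   # tunable filter
    s_t, s_b = np.sum(rho_t * f), np.sum(rho_b * f)          # band integrals
    return abs(s_t - s_b) / (s_t + s_b)

print(f"broadband:  {band_contrast(1.3, 2.0):.3f}")    # spectral detail lost
print(f"tuned band: {band_contrast(1.36, 0.05):.3f}")  # contrast recovered
```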