In 2023, Richards and Hübner proposed the silux as a new standard unit of irradiance for the full 350-1100 nm band, specifically addressing the mismatch between the photopic response of the human eye and the spectral sensitivity of new low-light silicon CMOS sensors with enhanced NIR response. This spectral mismatch can lead to significant errors when the traditional lux unit is used to quantify the signal available to a different camera system. In this correspondence, we demonstrate a per-pixel calibration of a camera to create the first imaging siluxmeter. To do this, we developed a comprehensive per-pixel model as well as the experimental and data reduction methods to estimate its parameters. These parameters are then combined into an updated NV-IPM measured-system component that provides the conversion factor from device units of DN to silux, lux, and other radiometric units. Additionally, the accuracy of the measurements and modeling is assessed through comparisons to field observations and by validating/transferring the calibration from one low-light camera to another. Following this process, other low-light cameras can be calibrated and applied to scenes such that they may be accurately characterized using silux as the standard unit.
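As an illustration of such a per-pixel conversion, the sketch below fits a linear gain/offset model to dark-corrected frames at known silux levels and inverts it to report a scene in silux. The linear form, function names, and units are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def fit_per_pixel_response(frames, irradiances):
    """frames: (N, H, W) mean dark-corrected frames at N known silux levels.
    Assumes a hypothetical linear model DN[i,j] = gain[i,j]*E + offset[i,j]."""
    E = np.asarray(irradiances)                    # (N,) known silux levels
    A = np.stack([E, np.ones_like(E)], axis=1)     # (N, 2) design matrix
    y = frames.reshape(len(E), -1)                 # (N, H*W)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit per pixel
    gain = coef[0].reshape(frames.shape[1:])       # DN per silux
    offset = coef[1].reshape(frames.shape[1:])     # dark/stray signal in DN
    return gain, offset

def dn_to_silux(frame, gain, offset):
    # Invert the linear model pixel-by-pixel to report the scene in silux.
    return (frame - offset) / gain
```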
This report explores how various mechanisms affect the response time of event-based cameras (EBCs). EBCs are unconventional electro-optical/IR vision sensors that are sensitive only to changing light. Because their operation is essentially “frameless,” their response time does not depend on a frame rate or readout time, but rather on the number of activated pixels, the magnitude of the background light, local fabrication defects, and the analog configuration of the pixel. A test apparatus was devised using a commercial off-the-shelf EBC to extract the sensor latency’s dependence on each parameter. Across various illumination levels, results show that mean latency and temporal jitter can increase by a factor of 10 depending on the configuration of the bias parameters. Furthermore, worst-case latency can exceed 1–2 ms even when only 0.005% of the array is activated simultaneously. These and many other findings in the report are intended to inform the use of event-based sensing technology in applications where latency is critical to success.
Neuromorphic sensors (also known as event-based cameras) behave differently than traditional imaging sensors: they respond only to changes in stimuli as they occur. They typically offer higher dynamic range and temporal resolution than traditional imaging systems while using less power, because a pixel outputs data only when a stimulus occurs at that pixel. Neuromorphic sensors have a variety of uses, from temporal anomaly detection to autonomous driving. While the information in the output of a neuromorphic sensor correlates to a change in stimuli, there has been no defined means to characterize neuromorphic sensors in order to predict performance for a given stimulus. This study focuses on the measurement of the temporal and spatial response of a neuromorphic sensor, with additional discussion of modeling performance based upon these measurements.
Direct measurement of F-number presents a known challenge in characterizing electro-optical and infrared imaging systems. Conventional methods typically require the sensor to be evaluated separately from the lens, indirectly calculating F-number from measurements of effective focal length and entrance-pupil diameter. When a focal plane array is positioned behind the optics and cannot be removed, potential options include quantifying signal-to-noise ratio or depth of field using incoherent light; in either case, the result is subject to extraneous camera parameters and is sensitive to noise, aberrations, etc. To address these issues, we propose an alternative measurement routine that utilizes a coherent point source at the focus of an off-axis Newtonian collimator to generate collimated light. This allows us to place the system under test at optical infinity, where retroreflections from its focal plane depend solely on angle of incidence, wavelength of illumination, and F-number. Thus, by measuring retroreflected power as a function of incidence angle, we can back out the system’s F-number with a high degree of confidence. We demonstrate this concept through numerical simulation and laboratory testing, along with an unconventional knife-edge technique for gauging the entrance-pupil diameter in situ. Together these two measurements enable us to calculate effective focal length (and in turn pixel pitch, by measuring instantaneous field of view) for a comprehensive system description. We further show that a working F-number and effective image distance are attainable through this method for finite-conjugate systems. These tools improve our ability to update existing system models with objective measurements.
This manuscript presents a systematic approach for developing new measurements and evaluation techniques through
modeling and simulation. A proposed sequence of steps is outlined, starting with defining the desired measurable(s), going
through model development and exploration, conducting experiments, and publishing results. This framework, based on
the scientific method, provides a structured process for creating robust, well-defined measurement procedures before
experiments are performed. The approach is demonstrated through a case study on measuring camera-to-display system
latency. A simulation tool is described that enables exploration of how different experimental parameters like camera
temporal response, display properties, and source characteristics impact the measurement and associated uncertainties.
Several examples illustrate using the tool to establish notional guidelines for optimizing the experimental design. The
simulation-driven process aims to increase confidence in new measurement techniques by incrementally refining models,
identifying assumptions, and evaluating potential error sources prior to costly physical implementation. In support of the
reproducible research effort, the tools developed for this work are available on the MathWorks file exchange.
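As a minimal example of the kind of simulation tool described, the sketch below Monte Carlos one latency measurement: a source flash arrives at a random phase relative to the camera frame clock, is reported at the next frame boundary, and then appears at the next display refresh. All parameters and the uniform-phase assumption are illustrative choices, not the published tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_latency_trials(n_trials=10000, true_latency_ms=25.0,
                            frame_period_ms=16.7, display_period_ms=8.3):
    """Monte Carlo sketch of camera-to-display latency measurement uncertainty."""
    flash_phase = rng.uniform(0, frame_period_ms, n_trials)   # random arrival phase
    capture_delay = frame_period_ms - flash_phase             # wait for frame boundary
    display_phase = rng.uniform(0, display_period_ms, n_trials)
    return true_latency_ms + capture_delay + display_phase    # measured values

m = simulate_latency_trials()
print(f"mean {m.mean():.1f} ms, std {m.std():.1f} ms")  # sampling-induced spread
```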
Modern electro-optical systems increasingly incorporate multiple electro-optical sensors, each adding unique wavebands, alignment considerations, and distortion effects. Laboratory testing of these systems traditionally requires multiple measurement setups to determine metrics such as inter-sensor alignment/distortion, near/far focus performance, latency, etc.; a multi-spectral scene has been created to support many simultaneous, objective measurements from a single mounting position. In some cases, a multi-spectral scene is the only way to test new system-of-systems units, because traditional tests do not engage with or exercise their built-in algorithms (e.g., fusion). In 2023, Parker et al. developed a multi-band scene with a diverse target set in order to test camera systems. In this correspondence, we describe a comprehensive and precise calibration of the scene. Among the methods used was a pair of reference cameras (reflective and emissive, with a fixed extrinsic relationship) translated across the entire field of view. Transformation matrices were determined to map pixel locations to angle; subsequent imaging of the target scene will yield precise locations of each feature, and comparisons between modeled and recorded images from varied camera positions will validate the success of the calibration. This process will allow various measurements, across multiple wavebands, to be taken simultaneously and efficiently for a wide range of modern electro-optical systems.
An efficient means to determine a camera’s location in a volume is to incorporate fiducials at known locations in the volume and triangulate from their found locations. ArUco markers are an efficient choice for this approach because many pre-defined routines for locating them exist in open-source software such as OpenCV. The algorithms used to determine whether an ArUco marker is present are not always well characterized, yet the algorithm always produces a definitive output: either a marker was found at a specific location or it was not. Many parameters affect the accuracy of detection and the calculated pose estimate, including system blur, image entropy, input illumination, additional camera attributes, the size of the marker, its orientation, and its distance. Because each of these variables impacts the detection algorithm, each variable space must be tested to determine the operating bounds for a given set of ArUco markers. This correspondence demonstrates a method to quantify ArUco detection performance using a simulation that separates each of the previously defined variables. Using virtually constructed imagery that simulates these effects, it is possible to create a sufficiently large data set to give definitive detection performance for ArUco targets as a function of each variable for the OpenCV algorithm.
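As a minimal sketch of one axis of such a sweep (system blur), the example below generates a marker, blurs it at increasing strengths, and records whether OpenCV’s detector still finds it. It assumes the OpenCV 4.7+ ArUco API; the dictionary, marker ID, and sizes are arbitrary choices.

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)  # 200 px marker, ID 7
scene = np.full((400, 400), 255, np.uint8)
scene[100:300, 100:300] = marker                            # place marker in scene

for sigma in [0.5, 1.0, 2.0, 4.0, 8.0]:                     # simulated system blur
    blurred = cv2.GaussianBlur(scene, (0, 0), sigma)
    corners, ids, _ = detector.detectMarkers(blurred)
    found = ids is not None and 7 in ids.flatten()
    print(f"sigma={sigma:4.1f}  detected={found}")
```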
Terahertz (THz) imaging systems use active sources, specialized optics, and detectors in order to penetrate certain materials. Each of these components has design and manufacturing characteristics (e.g., coherence for sources, aberrations for optics, and dynamic range and noise for detectors) that can lead to nonideal performance of the overall imaging system. Thus, system designers are frequently challenged to design systems that approach theoretical performance, making quantitative measurement of imaging performance a key feedback element of system design. Quantitative evaluation of actual THz system performance will be performed using many of the same figures of merit developed for imaging at other wavelengths (e.g., infrared imaging systems nominally operating in the shorter 3-12 μm wavelength range). The suitability and limitations of these evaluation criteria will be analyzed as part of the process for improving the modeling and design of high-performance THz imaging systems.
Many modern electro-optical systems incorporate multiple electro-optical sensors, each having unique wavebands, alignment, and distortion. Traditional laboratory testing requires multiple measurement setups for metrics like inter-channel sensor alignment, near/far focus performance, color accuracy, etc. In this study, a calibrated scene is developed for objective measurements of multiple electro-optical cameras from a single mounting position. This scene uses multiple targets (of varying size and shape), multiple flat fields (blackbodies and Spectralon panels), and temporal sources. Some targets work well in both the emissive and reflective bands, allowing relative distortion to be measured accurately. Specific attention was given to testing in the presence of scene-based algorithms such as auto-gain/level/exposure, where bright and dark objects are used to drive dynamic range. This approach allows for various measurements to be taken simultaneously and efficiently.
Sensitivity of a camera is most often measured by recording video segments while viewing a scene that is constant in both space and time. This video, commonly referred to as a noise cube, provides information about how much the signals vary away from the average. In this work, we describe the systematic decomposition of noise cubes into components. First, the average of a noise cube (when combined with other cube measurements) is used to determine the camera’s Signal Transfer Function (SiTF). Removing the average results in a cube that exhibits variations in both spatial and temporal directions. These variations also occur at different scales (spatial/temporal frequencies); we therefore propose applying a 3-dimensional filter to separate fast and slow variation. Slowly varying temporal variation can indicate an artifact in the measurement, the camera signal, or the camera’s response to the measurement. Slowly varying spatial variation can be considered non-uniformity, and conventional metrics applied. Fast-varying spatial/temporal noise is combined and evaluated through the conventional 3D noise model (providing seven independent noise measurements). In support of the reproducible research effort, the functions associated with this work can be found on the MathWorks file exchange.
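For reference, the classical 3D noise decomposition invoked above can be sketched as follows. This minimal version omits the SiTF calibration, the fast/slow 3-D filtering, and finite-sampling corrections described in the text.

```python
import numpy as np

def noise_3d(cube):
    """Classical NVESD 3D noise decomposition of a (T, V, H) noise cube into the
    seven orthogonal components, returned as standard deviations."""
    U = cube - cube.mean()                              # remove global mean
    m = lambda a, ax: a.mean(axis=ax, keepdims=True)    # directional averaging
    Nt = m(U, (1, 2)); Nv = m(U, (0, 2)); Nh = m(U, (0, 1))
    Ntv = m(U, 2) - Nt - Nv                             # row noise (temporal)
    Nth = m(U, 1) - Nt - Nh                             # column noise (temporal)
    Nvh = m(U, 0) - Nv - Nh                             # fixed pattern noise
    Ntvh = U - Nt - Nv - Nh - Ntv - Nth - Nvh           # random spatio-temporal
    comps = dict(t=Nt, v=Nv, h=Nh, tv=Ntv, th=Nth, vh=Nvh, tvh=Ntvh)
    return {k: float(np.std(v)) for k, v in comps.items()}
```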
In an ideal world, each camera pixel would exhibit the same behavior and response when stimulated with an incoming signal. In the real world, however, variations between pixels’ responses (gain) and dark-current/extraneous-signal/etc. (offset) require a non-uniformity correction (NUC). The residual pixel-to-pixel variation following a NUC is the fixed pattern noise of the camera. For thermal cameras, the ability to NUC is critical, as a pixel’s gain and offset typically change with temperature. Moreover, the offset typically drifts in time, even when the camera is at equilibrium. These additional dependencies on time and temperature make the “fixed” pattern noise not fixed, and make measurement agreement between laboratories much more difficult. In this work, we describe a modification of the standard thermal camera noise measurement procedure and analysis (at some specified equilibrium temperature) that removes the time dependence of the fixed pattern noise measurement. Additionally, we describe a temporal measurement to characterize the time-dependent nature of “fixed” pattern noise. We show that this behavior is stationary and independent of the direction of time since the NUC was defined. The temporal behavior is well described by a combination of power-law and linear time dependence. With this, new metrics can be considered to evaluate how frequently to conduct a NUC, depending on operational requirements.
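As an illustration of the reported temporal behavior, the sketch below fits a power-law-plus-linear model to a synthetic drift curve; the functional form follows the abstract, while the coefficient names and data are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def fpn_drift(t, a, b, c):
    # Power-law plus linear drift of "fixed" pattern noise after a NUC.
    return a * np.power(t, b) + c * t

t = np.linspace(1, 3600, 50)                     # seconds since NUC (example)
rng = np.random.default_rng(1)
sigma_fpn = fpn_drift(t, 0.05, 0.4, 1e-5) + rng.normal(0, 1e-3, t.size)

popt, _ = curve_fit(fpn_drift, t, sigma_fpn, p0=(0.1, 0.5, 1e-5))
print("a=%.3g  b=%.3g  c=%.3g" % tuple(popt))
```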
Specifications for microbolometer defective detector pixel outages, cluster sizes, and row/column outages are common for many electro-optical imaging programs. These specifications for bad pixels and clusters often do not take into account the user’s ability to perceive the lack of information from areas of a focal plane with outages that are replaced using substitution algorithms. This is because defective pixels are typically specified as a sensor parameter, without taking into account a camera’s system-level descriptors: modulation transfer function (MTF), outage substitution strategy, post-processing MTF, display performance, and the observer’s psychophysical performance. These parameters combine to determine the total system MTF, which can be used to determine the minimum resolution at which a replaced pixel or cluster can be observed. This study analyzes different defective pixel specifications and their visibility based on the system-level descriptors, and proposes specifications that are better aligned to camera performance.
An accurate prediction of the number of pixels on a target is critical in modeling a camera’s ability to perform a task. This requires accurate knowledge of the angle subtended by a pixel of interest, which can be calculated from a specification sheet or lens prescription. When such information is not available, it can be recovered through a measurement of a known-size target at a known distance. In this correspondence, we utilize canonical images (ideal simple functions) together with non-linear optimization to provide sub-pixel target localization. This allows for accurate and repeatable measurement of the angular sampling of a camera. Additionally, the use of well-defined shapes and accurate location determination can be used to determine blur, rotation, motion, contrast, distortion, and other camera metrics.
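A minimal sketch of the approach: fit a canonical blurred-disk model to an image by non-linear least squares to recover a sub-pixel center. The logistic edge profile and parameterization are illustrative assumptions, not the authors’ exact canonical functions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import expit

def disk_model(params, yy, xx):
    """Ideal blurred disk: center (cy, cx), radius r, edge width s,
    amplitude A, background B; a smooth edge stands in for system blur."""
    cy, cx, r, s, A, B = params
    rho = np.hypot(yy - cy, xx - cx)
    return B + A * expit((r - rho) / s)

def locate_target(image, p0):
    """Sub-pixel localization by least squares against the canonical image."""
    yy, xx = np.indices(image.shape, dtype=float)
    resid = lambda p: (disk_model(p, yy, xx) - image).ravel()
    return least_squares(resid, p0).x

# Example: recover a known center to sub-pixel precision on synthetic data.
yy, xx = np.indices((64, 64), dtype=float)
truth = disk_model((31.3, 32.7, 10, 0.8, 100, 10), yy, xx)
noisy = truth + np.random.default_rng(2).normal(0, 0.5, truth.shape)
fit = locate_target(noisy, p0=(30, 30, 12, 1.0, 90, 5))
print("center estimate:", fit[:2])
```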
Image-intensified systems are compact, low-power devices that convert visible through near-infrared illumination into visible imagery. These devices provide usable imagery in a variety of ambient illuminations, and they are a preferred means for night imaging. Even though the device consists only of objective or relay optics and an image intensifier tube, performing critical measurements of device performance has traditionally required disassembling the device to test the image intensifier tube alone. This is a non-trivial process that requires the hardware to be re-aligned and re-purged during re-assembly. Using proper sources, reference cameras, and image processing techniques, it is possible to fully characterize an image-intensified device for its relevant measurable parameters (signal-to-noise ratio, tube gain, and limiting resolution) without disassembly. This paper outlines the classic component-level image intensifier measurement methodology, the assumptions on performance that support those measurement techniques, and the new measurement procedure. A comparison of measurement results using both methods demonstrates the validity of this new measurement approach.
Time and resource constraints often limit the number of cameras available to establish statistical confidence in determining
if a device meets a desired range performance requirement. For thermal cameras, measurements of sampling/resolution,
sensitivity and temporal response are combined through the Targeting Task Performance (TTP) metric to predict range.
To accommodate a large volume of cameras, we utilized a rotation stage to iterate across the required measurements, with
only a single connection instance to the camera. Automation in collection, processing, and device communication reduces
opportunities for human error, further improving confidence in the results. To accommodate variations in mounting,
cameras were automatically registered to the measurement setup, ensuring accurate analysis and facilitating automatic
processing. Additional efficiency was accomplished through processing the measurements in parallel with data collection,
reducing the time for full analysis of a single camera from 30 minutes down to 4 minutes. From this work, a statistically
relevant sampling of range was accumulated, along with other metrics, to gain insight into manufacturing repeatability,
correlated metrics, and datasets for device emulation. In support of the reproducible research effort, many of the analysis
scripts used in this work are available for download at [1].
At NVESD, the targeting task performance (TTP) metric applies a weighting of different system specifications, determined from the scene geometry, to calculate a probability of task performance. In this correspondence, we detail how to utilize an imaging system specification document to obtain a baseline performance estimate using the Night Vision Integrated Performance Model (NV-IPM), the corresponding input requirements, and potential assumptions. We then discuss how measurements can be performed to update the model to provide a more accurate prediction of performance, detailing the procedures taken at the NVESD Advanced Sensor Evaluation Facility (ASEF) utilizing the Night Vision Laboratory Capture (NVLabCap) software. Finally, we show how the outputs of the measurement can be compared to those of the initial specification-sheet-based model and evaluated against a requirements document. The modeling components and data set produced for this work are available upon request and will serve as a means to benchmark performance for both modeling and measurement methods.
Typical thermal system performance measurements are taken from a sensor’s digital or analog output, while the performance of the display is characterized separately. This can be an improper assumption, because additional signal processing can occur between the sensor test port and the display. Recent research has focused on the characterization of thermal system displays for better model fidelity. The next evolution in this research is to introduce a means for characterizing thermal system signal intensity transfer function (SITF) and three-dimensional noise (3DN) performance for systems that have a display as well as a known digital output. This correspondence presents an attempted means to characterize the SITF and 3DN performance of a thermal system when using only a display as the output.
KEYWORDS: Cameras, Sensors, Optical filters, Black bodies, Long wavelength infrared, Mid-IR, Collimators, Integrating spheres, Imaging systems, Signal to noise ratio
The spectral response of a camera is often reported as the product of the individual responses of the elements in the optical path: the lens element coatings, filter (if present), FPA window and coating, and detector elements. These data are often incomplete or inaccurate, as vendors typically provide limited spectral data, or the data are measured under conditions different from those in the camera system (e.g., a normal-incidence assumption). We have designed and built an instrument for the measurement of the normalized spectral response of camera systems in the thermal bands (MWIR, 3-6 μm, and LWIR, 8-12 μm). The design utilizes a series of narrowband filters, a cavity blackbody, and other components for conditioning the stimulus presented to the camera. The normalized camera spectral response is obtained by comparing the camera response through each narrowband filter against a reference measurement. In this paper we discuss the modeling and analysis in support of the design, and show the final design and some preliminary measurements.
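A sketch of the reduction implied above, assuming the narrowband limit (camera response constant across each filter): the relative response at each filter center is the measured signal divided by the Planck-weighted in-band stimulus. The filter descriptions and names are placeholders.

```python
import numpy as np

H = 6.62607015e-34; C = 2.99792458e8; KB = 1.380649e-23

def planck_radiance(lam_m, T):
    """Spectral radiance of the cavity blackbody (W sr^-1 m^-3)."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

def normalized_response(dn, filters, T_bb):
    """dn[i]: background-subtracted camera signal through narrowband filter i;
    filters[i]: (center_um, width_um, peak_transmission)."""
    r = []
    for dn_i, (c_um, w_um, tau) in zip(dn, filters):
        lam = c_um * 1e-6
        flux = planck_radiance(lam, T_bb) * tau * (w_um * 1e-6)  # in-band stimulus
        r.append(dn_i / flux)                                    # DN per unit flux
    r = np.asarray(r)
    return r / r.max()                                           # normalized response
```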
Typically, a system-level characterization of a thermal imaging device includes characterizing the objective optics, detector, and readout electronics. Ultimately, the thermal imagery is converted to an 8-bit signal and presented on a display for human visual consumption. In some situations, direct characterization of the pre-sample imaging system is not possible, and measurements must be performed by analyzing the output of its display. Additionally, the performance of the display and display optics are significant contributors to the performance of the imaging system, yet both are assumed to be ideal in many respects. In this paper, we describe how the underlying imaging system non-uniformity relates to additional display contributions within the total system non-uniformity. The paper is divided into three parts: the technique and considerations needed to properly measure a system through its display, how this information can be used in the NV-IPM performance model, and a comparison of performance from measurements at the pre-sample readout versus measurements taken only at the display.
The Modulation Transfer Function (MTF) of an imaging device is a strong indicator of its resolution-limited performance. The MTF at the system level is commonly treated as separable, with the optical MTF multiplying the post-optic (detector) MTF to give the system MTF. As new detector materials and methods have become available, and as the manufacturing of detectors has been separated from that of the optical system, independently measuring the MTF of the detector is of great interest. In this correspondence, a procedure for measuring the post-optic MTF of a mid-wave (3-5 micron) sampled imager is described. This is accomplished through a careful measurement of a reference optic that is later installed to allow for a final system MTF measurement. The key finding is that matching the chromatic shape of the illumination between the optic and system MTF measurements is critical, as in both measurements the effective MTF is weighted by the source and detector spectral shapes. This is most easily accomplished through the use of narrow bandpass filters. Our results are consistent across bandpass filter cut-on and F-number.
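Under the separability assumption stated above, and with both measurements sharing the same effective spectral weighting, the post-optic term follows by division, as in the identity below (spatial frequency ξ):

```latex
% Separable-system assumption from the text; both measurements must share the
% same effective source/detector spectrum for the ratio to isolate the
% post-optic (detector) term.
\mathrm{MTF}_{\mathrm{sys}}(\xi) = \mathrm{MTF}_{\mathrm{optics}}(\xi)\,
\mathrm{MTF}_{\mathrm{det}}(\xi)
\quad\Longrightarrow\quad
\mathrm{MTF}_{\mathrm{det}}(\xi) =
\frac{\mathrm{MTF}_{\mathrm{sys}}(\xi)}{\mathrm{MTF}_{\mathrm{optics}}(\xi)}
```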
The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the number of bits available, relative to a fixed dynamic range, determines the severity of quantization artifacts. From a measurement standpoint, measurements should be performed at the highest bit-depth available. In this correspondence, we describe the relationship between higher and lower bit-depth measurements. The limits to which quantization alters the observed measurements are presented; specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines for how systems of lower bit-depth should be characterized and the corresponding experimental methods.
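A quick numerical illustration of the bit-depth effect on one of these metrics: quantize Gaussian noise, as measured by a high-bit-depth reference, to fewer bits and compare the recovered standard deviation. The signal levels and step sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantized_noise_std(true_std_lsb, bits, full_scale_bits=16, n=200000):
    """Std of Gaussian noise after requantization to fewer bits,
    expressed in LSBs of the 16-bit reference."""
    step = 2 ** (full_scale_bits - bits)         # reference LSBs per low-res DN
    x = 2**15 + rng.normal(0, true_std_lsb, n)   # mid-scale signal plus noise
    q = np.round(x / step) * step                # quantize, re-express in ref LSBs
    return q.std()

for bits in (16, 12, 10, 8):
    print(bits, "bits ->", f"{quantized_noise_std(20.0, bits):.1f} LSB (true 20.0)")
```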
When new and unique task difficulties must be determined, it is important to use methodologies that are consistent with previous research. Unfortunately, some new tasks break the paradigm of past research and require new techniques in order to properly determine their difficulty. This paper describes the process of determining the difficulty of tasks that are unique in that they have a null case (where no object or motion is present) and in that they have been requested to be quantified in environments that potentially contain high amounts of atmospheric turbulence. Because each of the calculated V50s was based upon an assumption, a secondary field collection was necessary in order to validate which model assumptions correlated properly to field performance data.
Thermal systems with a narrow spectral bandpass and mid-wave thermal imagers are useful for a variety of imaging applications. The sensitivity of these classes of systems is increasing, along with the performance requirements against which they are evaluated in the laboratory. Unfortunately, uncertainty in the blackbody temperature, along with the temporal instability of the blackbody, can introduce uncontrolled laboratory environmental effects that increase the measured noise. If the temporal uncertainty and accuracy of a particular blackbody are known, then confidence intervals can be adjusted for source accuracy and instability. Additionally, because thermal currents may be a large source of temporal noise in narrow-band systems, a means to mitigate them is presented and results are discussed.
KEYWORDS: 3D metrology, 3D modeling, 3D image processing, Imaging systems, Sensors, Nonuniformity corrections, Data modeling, Convolution, Performance modeling, Image processing
When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective pixel identification and implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the MathWorks file exchange [1].
Laboratory measurements on thermal imaging systems are critical to understanding their performance in a field
environment. However, it is rarely a straightforward process to directly inject thermal measurements into thermal
performance modeling software to acquire meaningful results. Some of the sources of discrepancies between
laboratory and field measurements are sensor gain and level, dynamic range, sensor display and display brightness,
and the environment where the sensor is operating. If measurements for the aforementioned parameters could
be performed, a more accurate description of sensor performance in a particular environment is possible. This
research will also include the procedure for turning both laboratory and field measurements into a system model.
KEYWORDS: Stray light, Sensors, Collimators, Signal to noise ratio, Imaging systems, Modulation transfer functions, Reflectivity, Light scattering, Scattering, Black bodies
Accurate signal intensity transfer function (SITF) measurements are necessary to determine the calibration factor in the 3D noise calculation for an electro-optical imaging system. The typical means of measuring a sensor’s SITF is to place the sensor in a flooded-field environment at a distance relatively close to the aperture of the emitter. Unfortunately, this arrangement can allow additional contributions to the SITF in the form of scattering or stray light if the optics of the system under test are not designed properly. Engineers at the US Army Night Vision and Electronic Sensors Directorate are working to determine a means of evaluating the contribution due to scattering or stray light.
KEYWORDS: Modulation transfer functions, Systems modeling, Contrast transfer function, Visual process modeling, Sensors, Performance modeling, Night vision, Signal processing, Integrated modeling, Transmittance
The latest version of the U.S. Army imager performance model, the Night Vision Integrated Performance Model (NV-IPM), is now contained within a single, system engineering oriented design environment. This new model interface allows sensor systems to be represented using modular, reusable components. A new feature, added in version 1.3 of the NV-IPM, allows users to create custom components which can be incorporated into modeled systems. The ability to modify existing component definitions and create entirely new components in the model greatly enhances the extensibility of the model architecture. In this paper we will discuss the structure of the custom component and parameter generators and provide several examples where this feature can be used to easily create new and unique component definitions within the model.
KEYWORDS: Signal to noise ratio, Systems modeling, Interference (communication), Performance modeling, Imaging systems, Modulation transfer functions, Visual process modeling, Temperature metrology, Mid-IR, Eye models
Typically, the modeling of linear and shift-invariant (LSI) imaging systems requires a complete description of each subcomponent in order to estimate the final system transfer function. To validate the modeled behavior, measurements are performed on each component. When dealing with packaged systems, there are many situations where some, if not all, data is unknown. For these cases, the system is considered a blackbox, and system level measurements are used to estimate the transfer characteristics in order to model performance. This correspondence outlines the blackbox measured system component in the Night Vision Integrated Performance Model (NV-IPM). We describe how estimates of performance can be achieved with complete or incomplete measurements and how assumptions affect the final range. The blackbox measured component is the final output of a measurement characterization and is used to validate performance of delivered and prototype systems.
KEYWORDS: Sensors, Imaging systems, Black bodies, Modulation transfer functions, Machine vision, Temperature metrology, Contrast transfer function, Cameras, Systems modeling, Eye
Researchers at the US Army Night Vision and Electronic Sensors Directorate have added the functionality of Machine Vision MRT (MV-MRT) to the NVLabCap software package. While the original calculations of MV-MRT were compared to human observer performance using digital imagery in a previous effort [1], the technical approach was not tested on 8-bit imagery using a variety of sensors in a variety of gain and level settings. Now that it is simpler to determine the MV-MRT for a sensor in multiple gain settings, it is prudent to compare the results of MV-MRT in multiple gain settings to the performance of human observers for thermal imaging systems that are linear and shift-invariant. Here, a comparison of the results for an LWIR system to trained human observers is presented.
The necessity of color balancing in day color cameras complicates both laboratory measurements and modeling for task performance prediction. In this proceeding, we discuss how the raw camera performance can be measured and characterized. We further demonstrate how these measurements can be modeled in the Night Vision Integrated Performance Model (NV-IPM) and how the modeled results can be applied to additional experimental conditions beyond those used during characterization. We also present the theoretical framework behind the color camera component in NV-IPM, where an effective monochromatic imaging system is created by applying a color correction to the raw color camera output and generating a color-corrected grayscale image. The modeled performance shows excellent agreement with measurements for both monochromatic and colored scenes. The NV-IPM components developed for this work are available in NV-IPM v1.2.
KEYWORDS: Sensors, Modulation transfer functions, Systems modeling, Data modeling, Performance modeling, Video, Image sensors, Cameras, Software development, Imaging systems
Engineers at the US Army Night Vision and Electronic Sensors Directorate have recently developed a software package called NVLabCap. This software not only captures sequential frames from thermal and visible sensors, but can also perform measurements of the signal intensity transfer function, 3-dimensional noise, field of view, super-resolved modulation transfer function, and image boresight. Additionally, this software package, along with a set of commonly known inputs for a given thermal imaging sensor, can be used to automatically create an NV-IPM element for that measured system. This model data can be used to determine if a sensor under test is within certain tolerances, and the model can be used to objectively quantify measured versus specified system performance.
KEYWORDS: Eye, Image segmentation, Sensors, Imaging systems, Thermography, Black bodies, Video, Minimum resolvable temperature difference, Image processing, Human vision and color perception
The GStreamer architecture allows for simple modularized processing. Individual GStreamer elements have been developed that allow for control, measurement, and ramping of a blackbody; for capturing continuous imagery from a sensor; for segmenting out an MRTD target; for applying a blur equivalent to that of a human eye and a display; and for thresholding a processed target contrast for "calling" it. A discussion of each of the components is followed by an analysis of its performance relative to that of human observers.
KEYWORDS: Modulation transfer functions, Interference (communication), Signal to noise ratio, Imaging systems, Fourier transforms, Spatial frequencies, Systems modeling, Sensors, Performance modeling, 3D metrology
The modulation transfer function (MTF) measurement is critical for understanding the performance of an EOIR system. Unfortunately, due to both spatially correlated and spatially uncorrelated noise sources, the performance of the MTF measurement (specifically near the cutoff) can be severely degraded. When using a 2D imaging system, the intrinsic sampling of the 1D edge spread function (ESF) allows redundant samples to be averaged, suppressing the noise contributions. The increase in the signal-to-noise ratio depends on the angle of the edge with respect to the sampling, along with the specified re-sampling rate. In this paper, we demonstrate how the information in the final ESF can be used to identify the contribution of noise. With an estimate of the noise, the noise-limited portion of the MTF measurement can be identified. We also demonstrate how the noise-limited portion of the MTF measurement can be used in combination with a fitting routine to provide a smoothed measurement.
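A minimal sketch of the oversampled-ESF idea described above: project pixels onto the edge normal, bin at a chosen re-sampling rate, differentiate to an LSF, and transform to an MTF. The noise-floor identification and fitting routine from the paper are omitted, and the binning choices are illustrative.

```python
import numpy as np

def slanted_edge_mtf(img, edge_angle_deg, bins_per_pixel=4):
    """Minimal slanted-edge MTF sketch for a 2D image of a straight edge."""
    yy, xx = np.indices(img.shape, dtype=float)
    theta = np.deg2rad(edge_angle_deg)
    d = xx * np.cos(theta) + yy * np.sin(theta)         # distance along edge normal
    order = np.argsort(d.ravel())
    d_s, v_s = d.ravel()[order], img.ravel()[order]
    nb = int((d_s[-1] - d_s[0]) * bins_per_pixel)       # oversampled ESF bins
    idx = ((d_s - d_s[0]) * bins_per_pixel).astype(int).clip(0, nb - 1)
    esf = np.bincount(idx, v_s, nb) / np.maximum(np.bincount(idx, minlength=nb), 1)
    lsf = np.gradient(esf)                              # differentiate ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf * np.hanning(lsf.size)))
    mtf /= mtf[0]                                       # normalize to DC
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / bins_per_pixel)  # cycles/pixel
    return freq, mtf
```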
KEYWORDS: Sensors, Thermal modeling, Image sensors, Imaging systems, Thermography, 3D metrology, Systems modeling, Current controlled current source, Performance modeling, Received signal strength
While it is now common practice to use trend removal to eliminate low-frequency fixed pattern noise in thermal imaging systems, there is still some disagreement as to whether one means of trend removal is better than another, and whether or not the strength of the trend removal should be limited. The different methods for trend removal are presented, along with an analysis of the calculated noise as a function of their strengths for various thermal imaging systems. In addition, trend removals were originally put in place to suppress the low-frequency component of the Sigma VH term. It is now prudent to perform a trend removal at an intermediate noise calculation step in order to suppress the low-frequency component of both the Sigma V and Sigma H components. A discussion of the ramifications of this change in measurement is included for thermal modeling considerations.
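One common form of trend removal, shown here for illustration (the polynomial order and separable per-axis fit are one choice among the methods compared): fit and subtract low-order polynomials along the vertical and horizontal mean profiles before computing the directional noise terms.

```python
import numpy as np

def remove_trend(frame, order=2):
    """Fit-and-subtract low-order polynomial trends along each axis of a 2D
    frame, preserving the global mean (analogous to a two-way residual)."""
    v = np.arange(frame.shape[0]); h = np.arange(frame.shape[1])
    col_means = frame.mean(axis=1)                    # vertical profile
    row_means = frame.mean(axis=0)                    # horizontal profile
    v_trend = np.polyval(np.polyfit(v, col_means, order), v)
    h_trend = np.polyval(np.polyfit(h, row_means, order), h)
    # Each trend contains the global mean, so add it back once.
    return frame - v_trend[:, None] - h_trend[None, :] + frame.mean()
```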
Multiple source band image fusion can sometimes be a multi-step process that consists of several intermediate
image processing steps. Typically, each of these steps is required to be in a particular arrangement in order to
produce a unique output image. GStreamer is an open source, cross platform multimedia framework, and using
this framework, engineers at NVESD have produced a software package that allows for real time manipulation
of processing steps for rapid prototyping in image fusion.
Predicting an accurate Minimum Resolvable Temperature Difference (MRTD) for a thermal imaging system is often hindered by inaccurate measurements of system gain and display characteristics. Variations in these terms are often blamed for poor agreement between model predictions and measured MRTD. By averaging over repeated human measurements and carefully recording all system parameters affecting image quality, it should be possible to make an accurate prediction of MRTD performance at any resolvable frequency. Utilizing the latest NVESD performance models, with updates for noise, apparent target angle, and human vision, predicted MRTs are compared with measured curves. We present results for one well-characterized mid-wave thermal staring system.
Recent developments in image fusion give the user community many options for ways of presenting the imagery to
an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate
have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion
algorithms and color parameters based upon collected imagery and videos from environments that are typical
to observers in a military environment. After performing multiple multi-band data collections in a variety
of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are
presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how
specific scenarios should affect the presentation of fused imagery.
The presence of noise in an IR system adversely impacts task performance in many cases. Typically, when modeling the effect of noise on task performance, the focus is on the noise generated at the front end of the system (detector, amplifier, etc.). However, there are cases when noise may arise in the post-sample portion of the system due to different display technologies, etc. This paper presents a means to determine the effect of display noise on the sensor system noise under a variety of conditions. A modeling study demonstrates how display noise affects predicted modeled performance.
The modulation transfer function (MTF) of optical systems is often derived by taking the Fourier transform (FT) of a
measured line spread function. Recently, methods of performing Fourier transforms that are common in infrared
spectroscopy have been applied to MTF calculations. Proper apodization and phase correction have been shown to
improve MTF calculations in optical systems. In this paper, these methods, as well as another filtering algorithm based on phase, are applied to under-sampled optical systems. Results with and without the additional processing are presented, and the differences are discussed.
The predicted Minimum Resolvable Temperature (MRT) values from five MRT models are compared to the measured MRT values for eighteen long-wave thermal imaging systems. The most accurate model, which is based upon the output of NVTherm IP, has an advantage over the other candidate models because it accounts for performance degradations due to blur and bar sampling. Models based upon the FLIR 92 model tended to predict overly optimistic values at all frequencies. The earliest MRT models for staring arrays did not incorporate advanced eye effects and tended to provide pessimistic estimates as the frequency approached the Nyquist limit.
The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate image quality features correlate to actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by the traditional Pearson correlation.
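To illustrate why a monotonic measure can differ from Pearson correlation, the sketch below scores a feature that relates to performance through a saturating nonlinearity. Spearman’s rho stands in for a monotonic measure here; the paper’s coefficient is its own construction.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(4)
feature = rng.uniform(0, 5, 200)
# Performance saturates with the feature: monotonic but far from linear.
performance = 1 - np.exp(-2 * feature) + rng.normal(0, 0.02, feature.size)

print("Pearson  r   = %.3f" % pearsonr(feature, performance)[0])   # penalized
print("Spearman rho = %.3f" % spearmanr(feature, performance)[0])  # near 1
```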
KEYWORDS: Modulation transfer functions, Imaging systems, Sensors, Analog electronics, Optical signal processing, Optical testing, Infrared imaging, Objectives, NVThermIP, Systems modeling
Using measured quantities, it is possible to arrive at a reasonable approximation of the optics MTF of a long-wave undersampled imaging system. Certain reasonable assumptions concerning the format of the data from the imaging system should be made in order to ensure that there are no image processing artifacts. For systems that contain imaging artifacts, such as those with an analog output, there are too many secondary effects that degrade the predicted optics MTF beyond a reasonable approximation.
KEYWORDS: Modulation transfer functions, Apodization, Fourier transforms, Optical transfer functions, Signal to noise ratio, Phase shift keying, Sensors, Systems modeling, Infrared spectroscopy, Algorithm development
Fourier transform methods common in infrared spectroscopy were applied to the problem of calculating the modulation
transfer function (MTF) from a system's measured line spread function (LSF). Algorithms, including apodization and
phase correction, are discussed in their application to remove unwanted noise from the higher frequency portion of the
MTF curve. In general, these methods were found to significantly improve the calculated MTF. Apodization reduces
the proportion of noise by discarding areas of the LSF where there is no appreciable signal. Phase correction
significantly reduces the rectification of noise that occurs when the MTF is calculated by taking the power spectrum of
the complex optical transfer function (OTF).
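A rough sketch of the processing chain described (apodize the LSF where there is no appreciable signal, then phase-correct the OTF so that noise averages to zero rather than rectifying); the baseline handling, window choice, and phase-smoothing length are illustrative assumptions.

```python
import numpy as np

def mtf_from_lsf(lsf, apod_width=None):
    """Apodized, phase-corrected MTF estimate from a measured LSF."""
    lsf = lsf - np.median(lsf[:10])                   # crude baseline removal
    n = lsf.size
    if apod_width is None:
        apod_width = n
    w = np.zeros(n)
    c = np.argmax(np.abs(lsf))                        # center window on the peak
    lo, hi = max(0, c - apod_width // 2), min(n, c + apod_width // 2)
    w[lo:hi] = np.hanning(hi - lo)                    # apodization window
    otf = np.fft.rfft(lsf * w)
    phase = np.unwrap(np.angle(otf))
    smooth = np.convolve(phase, np.ones(5) / 5, mode="same")  # low-res phase
    corrected = np.real(otf * np.exp(-1j * smooth))   # phase-corrected spectrum;
    return corrected / corrected[0]                   # noise can now average to zero
```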
This paper discusses the Modulation Transfer Functions (MTF) associated with image motion. The paper describes MTF for line-of-sight vibration, electronic stabilization, and translation of the target within the field of view. A model for oculomotor system tracking is presented. The common procedure of treating vibration blur as Gaussian is reasonably accurate in most cases. However, the common practice of ignoring motion blur leads to substantial error when modeling search tasks.
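For reference, the standard closed forms for the motion MTFs discussed (a Gaussian model of random line-of-sight vibration with rms displacement σ, and linear translation through a distance d during the integration time), written for spatial frequency ξ:

```latex
% Gaussian model of random line-of-sight vibration (rms displacement \sigma)
% and linear target/line-of-sight translation through distance d:
\mathrm{MTF}_{\mathrm{vib}}(\xi) = \exp\!\left(-2\pi^{2}\sigma^{2}\xi^{2}\right),
\qquad
\mathrm{MTF}_{\mathrm{lin}}(\xi) = \mathrm{sinc}(d\,\xi)
  = \frac{\sin(\pi d\,\xi)}{\pi d\,\xi}
```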
US Army thermal target acquisition models based on the Johnson metric do not accurately predict sensor performance with electronic zoom (E-zoom). For this reason, NVTherm2002 removed the limiting E-zoom Modulation Transfer Functions (MTF) to agree better with measured performance results. In certain scenarios, especially with under-sampled staring sensors, that model shows incorrect performance improvements with E-zoomed images. The current Army model, NVThermIP, based upon the new targeting task performance (TTP) metric, more accurately models range performance in these cases. E-zoom provides system design flexibility when the system is limited to a single optical field of view and/or eye distance is constrained by ergonomic factors. This paper demonstrates that target acquisition range performance, modeled using the TTP metric, increases only up to an optimized magnification and decreases beyond this optimal value. A design "rule of thumb" is provided to determine this optimal magnification. NVThermIP modeled range performance is supported with E-zoom perception experiment results.
The performance of image fusion algorithms is evaluated using image fusion quality metrics and observer performance
in identification perception experiments. Image Intensified (I2) and LWIR images are used as the inputs to the fusion
algorithms. The test subjects are tasked to identify potentially threatening handheld objects in both the original and
fused images. The metrics used for evaluation are mutual information (MI), fusion quality index (FQI), weighted fusion
quality index (WFQI), and edge-dependent fusion quality index (EDFQI). Some of the fusion algorithms under
consideration are based on Peter Burt's Laplacian Pyramid, Toet's Ratio of Low Pass (RoLP or contrast ratio), and
Waxman's Opponent Processing. Also considered in this paper are pixel averaging, superposition, multi-scale
decomposition, and shift invariant discrete wavelet transform (SIDWT). The fusion algorithms are compared using
human performance in an object-identification perception experiment. The observer responses are then compared to the
image fusion quality metrics to determine the amount of correlation, if any. The results of the perception test indicated
that the opponent processing and ratio of contrast algorithms yielded the greatest observer performance on average.
Task difficulty (V50) associated with the I2 and LWIR imagery for each fusion algorithm is also reported.
There have been numerous applications of super-resolution reconstruction algorithms to improve the range performance of infrared imagers. These studies show there can be a dramatic improvement in range performance when super-resolution algorithms are applied to under-sampled imager outputs. These improvements occur when the imager is moving relative to the target, which creates a different spatial sampling of the field of view for each frame. The degree of performance benefit is dependent on the relative sizes of the detector/spacing and the optical blur spot in focal plane space. The blur spot size on the focal plane is dependent on the system F-number. Hence, in this paper we provide the range of these sensor characteristics for which there is a benefit from super-resolution reconstruction algorithms. Additionally, we quantify the potential performance improvements associated with these algorithms. We also provide three infrared sensor examples to show the range of improvements associated with the provided guidelines.