Despite the ubiquity and maturity of imaging systems in the modern world, new imaging technologies continue to emerge that allow humanity to see into new spaces. For example, a new medical capability uses a laser system that can image neural activity through the skull in real time. At the other extreme is NASA's Dragonfly project, which will use an imaging system on an unmanned aerial vehicle to navigate, image, and possibly find life on Saturn's moon Titan. This talk will cover these and other new imaging technologies that promise to change how we view our world.
Hyperspectral imaging (HSI) technologies span the electro-optical and infrared domains. Longwave infrared (LWIR) HSI is particularly well suited for chemical and material identification in both day and night conditions because longwave signals depend on thermal emission and material composition. However, exploitation performance is affected by spectral data quality, which is driven by fundamental sensor noise characteristics, focal plane array health, spectral and radiometric calibration accuracy, and weather conditions. Previous algorithms have focused on quantifying spectral quality in the visible, near-infrared, and shortwave infrared domains. More recently, we developed a spectral image quality equation (SIQE) based on the Bayesian Information Criterion (BIC) for quantifying the spectral quality of LWIR HSI data. Here, we further develop the algorithm to provide a more intuitive interpretation of the resulting BIC scores by transforming them into a metric that more closely resembles target detection scores. In addition to showing how SIQE correlates with noise-equivalent spectral radiance, we illustrate several applications of SIQE, including the impact of atmospheric/environmental interferences and calibration errors. Our results show that SIQE is an effective metric for quantifying hyperspectral data quality and can therefore be used to filter data cubes before applying exploitation algorithms.
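To make the role of the BIC concrete, the sketch below shows one way a per-cube quality score could be derived from BIC model-order selection on the spectral covariance. The parameter counting, the logistic mapping, and the constant `scale` are illustrative assumptions, not the SIQE formulation described above.

```python
# Illustrative sketch only: BIC scoring of signal-subspace order for a hyperspectral
# cube, loosely mirroring how a BIC value could be mapped to a bounded,
# detection-score-like metric. Not the authors' SIQE algorithm.
import numpy as np

def bic_by_order(cube, max_order=20):
    """Compute BIC versus assumed signal-subspace dimension for a (rows, cols, bands) cube."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    n = X.shape[0]
    X -= X.mean(axis=0)                               # mean-center pixel spectra
    evals = np.linalg.eigvalsh((X.T @ X) / n)[::-1]   # covariance eigenvalues, largest first

    bic = np.empty(max_order)
    for q in range(1, max_order + 1):
        sigma2 = max(evals[q:].mean(), 1e-12)         # residual (noise) variance estimate
        loglik = -0.5 * n * (bands - q) * (np.log(2 * np.pi * sigma2) + 1.0)
        k = q * bands + 1                             # crude parameter count for order q
        bic[q - 1] = -2.0 * loglik + k * np.log(n)
    return bic

def detection_like_score(bic, scale=1e5):
    """Map the BIC gain of the best order over order 1 onto (0, 1); 'scale' is a guess."""
    gain = bic[0] - bic.min()                         # larger gain -> more recoverable signal
    return 1.0 / (1.0 + np.exp(-gain / scale))

# Toy usage on a synthetic low-rank-plus-noise cube
rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 5)) @ rng.normal(size=(5, 128))
cube += 0.05 * rng.normal(size=cube.shape)
print(detection_like_score(bic_by_order(cube)))
```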
LiDAR systems typically use a single fixed-frequency pulsed laser to obtain ranging and reflectance information from a complex scene. In recent years, there has been increased interest in multispectral (MS) LiDAR. Here, we report progress in the development of an MS LiDAR with agile wavelength selection. Broadcast wavelengths are selected from a spectrally broad source, in a pre-programmed or at-will fashion, to support target discrimination using 2D information. In this study, where measured reflectance spectra of the target of interest and the background are provided, an L1 band-selection algorithm is used to identify the wavebands most valuable for distinguishing between scene elements. Anomaly detection methods have also been successfully demonstrated and will be discussed. Furthermore, we investigate the use of a silicon photomultiplier (SiPM) device for collecting pulse returns from targets such as vegetation, minerals, and human-made objects with varying spatial and spectral properties. In particular, we assess the impact of the device response to (1) different focal plane spot illumination conditions and (2) bias level settings, and discuss the implications for radiometric accuracy and target discrimination capability.
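As a concrete illustration of the band-selection idea, the following sketch picks the wavebands where the provided target and background reflectance spectra differ most in the L1 sense. The synthetic spectra, the band count k, and the function name are assumptions for illustration, not the algorithm used in the study.

```python
# Hypothetical sketch of L1-style band selection for a wavelength-agile MS LiDAR:
# pick the k wavebands where target and background reflectance differ most.
import numpy as np

def select_bands_l1(target_refl, background_refl, wavelengths, k=4):
    """Return the k wavelengths with the largest |target - background| reflectance gap."""
    diff = np.abs(np.asarray(target_refl) - np.asarray(background_refl))
    idx = np.argsort(diff)[::-1][:k]          # indices of the k largest L1 contributions
    return np.sort(wavelengths[idx]), idx

# Example with synthetic spectra over 400-2400 nm
wl = np.linspace(400, 2400, 201)
target = 0.3 + 0.2 * np.exp(-((wl - 1600) / 120.0) ** 2)   # notional mineral-like feature
backgr = 0.3 + 0.05 * np.sin(wl / 300.0)                   # notional background spectrum
bands, _ = select_bands_l1(target, backgr, wl, k=4)
print("broadcast wavelengths (nm):", bands)
```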
Many modern LIDAR platforms contain an integrated RGB camera for capturing contextual imagery. However, these RGB cameras do not collect a near-infrared (NIR) color channel, omitting information useful for many analytical purposes. This raises the question of whether LIDAR data, collected in the NIR, can substitute for an actual NIR image in this situation. Generating a LIDAR-based NIR image is potentially useful when another source of NIR, such as satellite imagery, is unavailable. LIDAR is an active sensing system that operates very differently from a passive system, and thus requires additional processing and calibration to approximate the output of a passive instrument. We examine methods of approximating passive NIR images from LIDAR for real-world datasets and assess their differences from true NIR images.
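A minimal sketch of the kind of processing involved is shown below: range-correcting LiDAR return intensity and gridding it into a pseudo-NIR raster. The inverse-square range correction, reference range, cell size, and percentile stretch are assumptions; a practical workflow would additionally require radiometric calibration against a passive NIR reference.

```python
# Hedged sketch: rasterize range-corrected LiDAR return intensity into a pseudo-NIR image.
import numpy as np

def lidar_to_pseudo_nir(x, y, rng, intensity, cell=1.0, ref_range=1000.0):
    """Grid range-corrected intensities (inverse-square correction) onto a 2D raster."""
    i_corr = intensity * (rng / ref_range) ** 2          # undo 1/R^2 intensity falloff
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)
    h, w = row.max() + 1, col.max() + 1
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    np.add.at(acc, (row, col), i_corr)                   # accumulate intensities per cell
    np.add.at(cnt, (row, col), 1.0)                      # count returns per cell
    img = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    # Stretch to 8-bit for visual comparison against a passive NIR channel
    lo, hi = np.percentile(img[cnt > 0], [2, 98])
    return np.clip((img - lo) / max(hi - lo, 1e-9) * 255, 0, 255).astype(np.uint8)
```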
KEYWORDS: 3D modeling, LIDAR, Data modeling, Clouds, Data storage, Buildings, TIN, Statistical modeling, Global Positioning System, Instrument modeling
Airborne LIDAR instruments are capable of delivering high-density point clouds, but sampling is inherently uneven in both 2D and 3D space due to collection patterns as well as effects like occlusion. Taking full advantage of the detail available when creating 3D models therefore requires that resolution be adaptable to the amount of localized data. Voxel-based modeling of LIDAR has proven advantageous in many situations, but the traditional use of a fixed grid size prevents full realization of the potential resolution. Allowing voxel sizes to vary across the model using spatial subdivision techniques overcomes this limitation. An important part of this process is defining an appropriate limit of resolution for different sections of a model, and we incorporate information gained through tracing of LIDAR pulses to guide this decision process. Real-world data are used to demonstrate our results, and we show how dynamic resolution voxelization of LIDAR allows for both reduced storage requirements and improved modeling flexibility.
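The sketch below illustrates the general idea of data-adaptive voxel sizing with a simple octree: cells subdivide only while they contain enough points and remain above a minimum edge length, so dense regions receive small voxels and sparse or occluded regions stay coarse. The thresholds and the dictionary-based node representation are placeholders, and the pulse-tracing criterion described above is not modeled.

```python
# Illustrative sketch (not the authors' implementation) of dynamic-resolution voxelization.
import numpy as np

def build_octree(points, origin, size, max_points=32, min_size=0.25):
    """Recursively subdivide a cubic cell; dense regions end up with smaller voxels."""
    node = {"origin": origin, "size": size, "count": len(points), "children": []}
    if len(points) <= max_points or size <= min_size:
        return node                                   # leaf: resolution adapted to local density
    half = size / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                o = origin + half * np.array([dx, dy, dz])
                mask = np.all((points >= o) & (points < o + half), axis=1)
                if mask.any():
                    node["children"].append(
                        build_octree(points[mask], o, half, max_points, min_size))
    return node

# Toy usage: 10,000 random points in a 10 m cube
pts = np.random.default_rng(0).uniform(0.0, 10.0, size=(10000, 3))
tree = build_octree(pts, origin=np.zeros(3), size=10.0)
```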
Various phenomena in geographic regions cause the pixels of a scene to be spectrally mixed. The mixtures may be linear or nonlinear. It could simply be that the pixel size of a sensor is too large, so that many pixels contain patches of different materials within them (linear), or there could be microscopic mixtures and multiple scattering occurring within pixels (nonlinear). Often, scenes contain both linear and nonlinear mixing on a pixel-by-pixel basis. Furthermore, appropriate endmembers in a scene are not always easy to determine. A reference spectral library of materials may or may not be available; yet, even if a library is available, using it directly for spectral unmixing may not always be fruitful. This study investigates a generalized kernel-based method for spectral unmixing that attempts to determine whether each pixel in a scene is linear or nonlinear, and adapts to compute a mixture model at each pixel accordingly. The effort also investigates a kernel-based support vector method for determining spectral endmembers in a scene. Two scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods: a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, and an AVIRIS scene collected over the Gulf of Mexico oil spill region during the Deepwater Horizon incident.
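To ground the per-pixel linear/nonlinear decision, here is a simplified stand-in (not the generalized kernel method of the study): each pixel is unmixed with a constrained linear model, and pixels whose reconstruction residual is large are flagged as candidates for a nonlinear (kernel) model. The sum-to-one weight, residual threshold, and function names are assumptions.

```python
# Simplified sketch of a per-pixel linear-vs-nonlinear mixing decision.
import numpy as np
from scipy.optimize import nnls

def unmix_linear(pixel, E, w=100.0):
    """Non-negative least squares with a soft sum-to-one row (an FCLS-style approximation).
    pixel: (bands,) reflectance; E: (bands, n_endmembers) endmember matrix."""
    A = np.vstack([E, w * np.ones((1, E.shape[1]))])   # appended row softly enforces sum-to-one
    b = np.append(pixel, w)
    abund, _ = nnls(A, b)
    return abund, float(np.linalg.norm(pixel - E @ abund))

def flag_nonlinear(cube, E, resid_threshold=0.05):
    """Return abundance maps plus a boolean mask of pixels better served by a nonlinear model."""
    rows, cols, bands = cube.shape
    abund = np.zeros((rows, cols, E.shape[1]))
    nonlinear = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            a, resid = unmix_linear(cube[r, c], E)
            abund[r, c] = a
            nonlinear[r, c] = resid > resid_threshold   # candidate for kernel-based unmixing
    return abund, nonlinear
```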
Fourier transform infrared spectroscopy is a standard technique for the remote detection of gaseous vapors. However, as algorithms mature and hyperspectral imaging in the longwave infrared becomes more prominent in ground-based applications, it is important to determine optimum detection parameters because of potentially high data rates. One parameter, spectral resolution, is of particular interest because (1) it can be easily changed and (2) it has a significant effect on the data rate. The following presents a mathematical foundation for determining the spectral resolution for vapor detection in the presence of atmospheric interferants such as water vapor and ozone. Results are validated using real-world longwave infrared hyperspectral data from several open-air chemical simulant releases.
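The following synthetic experiment sketches the kind of trade the paper formalizes: a narrow simulant feature and a broad interferant are smoothed to several candidate resolutions, and a matched-filter SNR is compared against the resulting number of spectral channels. All spectra, noise levels, and constants are invented for illustration and do not reproduce the paper's derivation.

```python
# Illustrative numerical experiment: matched-filter SNR versus spectral resolution.
import numpy as np
from scipy.ndimage import gaussian_filter1d

wn = np.arange(800.0, 1300.0, 0.5)                        # wavenumber grid, cm^-1
gas = np.exp(-((wn - 1050) / 4.0) ** 2)                   # narrow simulant feature
h2o = 0.5 * np.exp(-((wn - 1045) / 40.0) ** 2)            # broad interferant feature

def mf_snr(resolution_cm1, noise=0.01, n_bg=5000, rng=np.random.default_rng(0)):
    """Matched-filter SNR after smoothing to a given FWHM resolution and decimating."""
    sigma_pts = resolution_cm1 / 2.355 / 0.5               # FWHM -> Gaussian sigma in samples
    step = max(int(round(resolution_cm1 / 0.5)), 1)        # critically sample the resolution
    t = gaussian_filter1d(gas, sigma_pts)[::step]
    # Background clutter: random amounts of interferant plus white sensor noise
    bg = (rng.normal(0, 1, (n_bg, 1)) * gaussian_filter1d(h2o, sigma_pts)[None, :]
          + rng.normal(0, noise, (n_bg, wn.size)))[:, ::step]
    C = np.cov(bg, rowvar=False) + 1e-8 * np.eye(t.size)
    return float(np.sqrt(t @ np.linalg.solve(C, t))), t.size

for res in (1, 2, 4, 8, 16):
    snr, nbands = mf_snr(res)
    print(f"resolution {res:>2} cm^-1: bands={nbands:4d}  matched-filter SNR={snr:6.1f}")
```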
Long-wave infrared hyperspectral sensors provide the ability to detect gas plumes at stand-off distances. A number of detection algorithms have been developed for such applications, but in situations where the gas is released against a complex background and is at air temperature, these detectors can generate a considerable number of false alarms. To make matters more difficult, the gas tends to have non-uniform concentrations throughout the plume, making it spatially similar to the false alarms. Simple post-processing using median filters can remove a number of the false alarms, but at the cost of removing a significant amount of the gas plume as well. We approach the problem using an adaptive subpixel detector and morphological processing techniques. The adaptive subpixel detection algorithm is able to detect the gas plume against the complex background. We then use morphological processing techniques to isolate the gas plume while simultaneously rejecting nearly all false alarms. Results will be demonstrated on a set of ground-based long-wave infrared hyperspectral image sequences.
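As a small illustration of the morphological post-processing step (with an assumed structuring element and blob-size threshold, not the exact pipeline above), the sketch below thresholds a subpixel-detector score map, applies a binary opening to remove speckle, and keeps only connected components large enough to be a plausible plume.

```python
# Minimal sketch of morphological false-alarm rejection on a per-frame detection mask.
import numpy as np
from scipy import ndimage

def clean_detection_mask(score_map, threshold, min_blob_pixels=25):
    """Threshold a subpixel-detector score map, then keep only large connected blobs."""
    mask = score_map > threshold
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))   # knock out isolated speckle
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))     # pixel count per blob
    keep = np.isin(labels, np.flatnonzero(sizes >= min_blob_pixels) + 1)
    return keep
```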