There are two multispectral Mastcam imagers on the Mars Science Laboratory (MSL) rover Curiosity. The left imager has a wide field of view but low resolution, whereas the right imager has a narrow field of view but high resolution. Most of the time, the two imagers operate independently. It is therefore interesting to explore fusing the left and right images to produce stereo images. However, the extremely short baseline between the two imagers makes stereo 3D reconstruction challenging. In this paper, we test the feasibility of using Mastcam images for stereo 3D reconstruction. We used a five-point algorithm to perform epipolar rectification and then applied a census-based semi-global matching algorithm to the rectified stereo pairs to produce disparity maps. Preliminary tests using Mastcam images of two scenes were performed to assess the robustness of the processing pipeline.
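The census-based matching cost at the heart of the semi-global matcher can be sketched in a few lines of numpy. The window size, bit packing, and function names below are our own illustrative choices, not details of the actual pipeline:

```python
import numpy as np

def census_transform(img, win=3):
    """Encode each pixel as a bit string: one bit per neighbor in a
    win x win window, set when the neighbor is darker than the center."""
    r = win // 2
    code = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return code

def hamming_cost(left_code, right_code):
    """Per-pixel matching cost: number of differing census bits."""
    x = left_code ^ right_code
    cost = np.zeros(x.shape, dtype=np.uint8)
    while np.any(x):
        cost += (x & np.uint64(1)).astype(np.uint8)
        x = x >> np.uint64(1)
    return cost
```

In a full semi-global matcher, this per-pixel Hamming cost would be computed for every candidate disparity and then regularized by the semi-global path aggregation step.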
The Mastcam multispectral imagers onboard the Mars rover Curiosity have been collecting data since 2012. There are two imagers onboard the rover. The left imager has a wide field of view but three times lower resolution than the right; the right imager has a narrow field of view but high resolution. Left and right images can be combined to generate stereo and disparity images. However, stereo images generated in the conventional way are limited to the resolution of the left imager. Ideally, it would be more useful to science teams and rover operators if stereo images could be generated at the resolution of the right imager, which is three times better. Recently, we developed algorithms that fuse the left and right images to create left images at the resolution of the right. Consequently, high-resolution stereo images can be generated, and disparity images can be generated as well. In this document, we summarize the development of new JMARS layers that display the enhanced left images produced by pansharpening and deep learning algorithms, the high-resolution stereo images, and the high-resolution disparity maps. The details of the workflow are described, and some demonstration examples are given.
The objective of this paper is to detect the type of vegetation so that a more accurate Digital Terrain Model (DTM) can be generated by excluding vegetation (such as trees) from the Digital Surface Model (DSM) based on its type. Different inpainting methods can then be applied to restore the terrain information at the removed vegetation pixels and obtain a more accurate DTM. We trained three DeepLabV3+ models on three datasets collected at different resolutions. Among the three models, the one trained on the dataset whose image resolution is closest to that of the test images performed best, and its semantic segmentation results look highly promising.
To accurately extract a digital terrain model (DTM), it is necessary to remove heights due to vegetation such as trees and shrubs, and man-made structures such as buildings and bridges, from the digital surface model (DSM). The resulting DTM can then be used for construction planning, land surveying, etc. The DTM extraction process normally involves two steps. First, accurate land cover classification is required. Second, an image inpainting step fills in the pixels removed due to trees, buildings, bridges, etc. In this paper, we focus on the second step: using image inpainting algorithms for terrain reconstruction. In particular, we evaluate seven conventional and deep-learning-based inpainting algorithms from the literature on two datasets. Both objective and subjective comparisons were carried out, and some algorithms were observed to yield slightly better performance than others.
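For intuition, the simplest conventional baseline among such inpainting algorithms is harmonic (diffusion) inpainting, which fills the removed pixels by repeatedly averaging their neighbors. The numpy sketch below is our own minimal illustration and does not correspond to any specific algorithm evaluated in the paper:

```python
import numpy as np

def diffusion_inpaint(dsm, mask, iters=500):
    """Fill masked (vegetation/building) pixels by iterative neighbor
    averaging (Jacobi iterations of Laplace's equation).
    dsm: 2-D height grid; mask: True where heights must be reconstructed.
    Note: np.roll wraps at the borders, so the mask should be interior."""
    z = dsm.astype(float).copy()
    z[mask] = z[~mask].mean()            # crude initialization
    for _ in range(iters):
        avg = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        z[mask] = avg[mask]              # only masked pixels are updated
    return z
```

Because harmonic filling cannot recreate texture or ridgelines, the deep-learning-based inpainting algorithms compared in the paper aim to do better than this baseline on complex terrain.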
The Adaptive Infrared Imaging Spectroradiometer (AIRIS) is a longwave infrared (LWIR) sensor for remote detection of chemical agents such as nerve gas; it can be considered a hyperspectral imager with 20 bands. In this paper, we present a systematic and practical approach to detecting and classifying chemical vapors at a distance. Our approach involves constructing a spectral signature library of different vapors, practical preprocessing procedures, and effective detection and classification algorithms. In particular, the preprocessing performs vapor signature extraction with adaptive background subtraction and normalization, and detection and classification use the spectral angle mapper (SAM), a signature-based target detection method. We conducted extensive vapor detection analyses on AIRIS data that include TEP and DMMP vapors at different concentrations, collected at different distances and times of day. We observed promising detection results for both low- and high-concentration vapor releases.
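The SAM detector reduces to comparing the angle between each pixel spectrum and a library signature. A minimal numpy sketch (the threshold, function names, and (H, W, B) array layout are illustrative assumptions, not details of the AIRIS processing chain):

```python
import numpy as np

def sam_detect(cube, signature, threshold):
    """Spectral angle mapper: flag pixels whose spectral angle to the
    library signature is below threshold.
    cube: (H, W, B) image cube; signature: (B,) library spectrum."""
    flat = cube.reshape(-1, cube.shape[-1])
    num = flat @ signature
    den = np.linalg.norm(flat, axis=1) * np.linalg.norm(signature) + 1e-12
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))   # radians
    return (angles < threshold).reshape(cube.shape[:2])
```

Because the angle ignores the magnitude of the spectrum, SAM is relatively insensitive to overall illumination and concentration scaling, which is one reason it suits vapor plumes of varying strength.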
The Mastcam multispectral imagers onboard the Mars rover Curiosity have been collecting data since 2012. There are two imagers. The left imager has a wide field of view but three times lower resolution than the right; the right imager has a narrow field of view but high resolution. Left and right images can be combined to generate stereo images. However, stereo images generated in the conventional way are limited to the resolution of the left imager. Ideally, it would be more useful to science teams and rover operators if stereo images could be generated at the resolution of the right imager, which is three times better. Recently, we developed algorithms that fuse the left and right images to create left images at the resolution of the right. Consequently, high-resolution stereo images can be generated, and disparity images can be generated as well. In this paper, we summarize the development of a data processing pipeline that takes left and right Mastcam images from the Planetary Data System (PDS) archive, performs pansharpening to enhance the left images with help from the right images, generates high-resolution stereo images and disparity maps, and saves the processed images back into the PDS archive. The details of the workflow, including the image alignment, pansharpening, stereo image formation, and disparity map generation algorithms, are summarized, and some demonstration examples are given.
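The abstract does not commit to a single pansharpening method; as one classical illustration of the idea (not necessarily the algorithm used in the pipeline), a Brovey-style transform rescales each upsampled low-resolution band by the ratio of the high-resolution image to the band-average intensity:

```python
import numpy as np

def brovey_pansharpen(ms_up, pan):
    """Brovey transform sketch: inject spatial detail from the
    high-resolution image into each multispectral band.
    ms_up: (H, W, B) MS bands already upsampled to the pan grid;
    pan: (H, W) high-resolution (here, right-camera) intensity image."""
    intensity = ms_up.mean(axis=2) + 1e-12   # avoid division by zero
    return ms_up * (pan / intensity)[..., None]
```

The ratio preserves the relative band shape of each pixel (its spectral character) while replacing the spatial detail with that of the high-resolution image, which is the basic trade-off behind most component-substitution pansharpening methods.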
Ground object detection is important for many civilian applications. Counting the number of cars in parking lots can provide very useful information to shop owners. Tent detection and counting can help humanitarian agencies assess and plan logistics to help refugees. In this paper, we present some preliminary results on ground object detection using high-resolution WorldView images. Our approach is simple and semi-automated. A user first manually selects some object signatures from a given image and builds a signature library. We then use the spectral angle mapper (SAM) to search for objects automatically. Finally, the detected objects are counted for statistical data collection. We applied our approach to tent detection at a refugee camp near the Syrian-Jordanian border. Both multispectral WorldView images with eight bands at 2 m resolution and pansharpened images with four bands at 0.5 m resolution were used. Moreover, synthetic hyperspectral (HS) images derived from the above multispectral (MS) images were also used for object detection. Receiver operating characteristic (ROC) curves as well as detection maps were used throughout our studies.
Change detection using hyperspectral images is important in surveillance and reconnaissance operations. The process involves two images: a reference and a test image. Many algorithms, such as chronochrome (CC) and covariance equalization (CE), have been proposed in the past. In this paper, we present a new nonlinear change detection framework for hyperspectral images, motivated by the band ratioing concept. First, image segmentation is applied to the reference image. For each segmented subimage of the reference image, the bands with the most and least variation are found, and new images are formed by dividing the two bands; band-ratioed images are formed in the test image in the same way. Second, we apply CC or CE to generate residual images. Finally, anomaly detection algorithms are applied to detect changes. Actual hyperspectral images were used in our studies, and receiver operating characteristic (ROC) curves were used to compare the various options. Results showed that this approach can achieve excellent detection performance.
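The chronochrome step is a least-squares linear map from reference spectra to test spectra; what feeds the anomaly detector is the prediction residual. A minimal numpy sketch (the pixel-by-band matrix layout and function name are our own assumptions):

```python
import numpy as np

def chronochrome_residual(ref, tst):
    """Chronochrome change detection sketch: predict the test spectra
    from the reference via the minimum-MSE linear map, then return the
    prediction residual (large residuals indicate change).
    ref, tst: (N, B) pixel-by-band matrices of co-registered images."""
    x = ref - ref.mean(axis=0)           # remove the means first
    y = tst - tst.mean(axis=0)
    n = x.shape[0]
    cxx = x.T @ x / n                    # reference covariance
    cyx = y.T @ x / n                    # cross-covariance
    T = cyx @ np.linalg.pinv(cxx)        # least-squares predictor
    return y - x @ T.T                   # residual image (N, B)
```

If the test image is an exact linear (illumination-like) transformation of the reference, the residual vanishes, so only genuine changes survive to the anomaly detection stage.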
In its 2015 ROSES solicitation, NASA expressed strong interest in improving the accuracy of Mars surface characterization using satellite images. The Thermal Emission Imaging System (THEMIS), an imager with a spatial resolution of 100 meters, has 10 infrared bands between 6 and 15 micrometers. The Thermal Emission Spectrometer (TES), an imager with a spatial resolution of 3 km, has 143 bands between 5 and 50 micrometers. While both imagers have a variety of applications, it would be ideal to generate high-spatial and high-spectral resolution data products by fusing their respective outputs. We present a novel approach to fusing THEMIS and TES satellite images, aiming to improve orbital characterization of Mars’ surface. First, the THEMIS bands must undergo atmospheric compensation (AC) due to the presence of dust, ice, carbon dioxide, etc. A systematic AC procedure using elevation information and spectrally uniform pixels has been developed and implemented. Second, a set of proven pansharpening algorithms has been applied to fuse the two sets of images. The pansharpened images have the spatial resolution of THEMIS and the spectral resolution of TES. The results of extensive experiments using THEMIS and TES data collected near the Syrtis Major region (one of the three final candidate landing sites for the Mars 2020 rover) clearly demonstrate the feasibility of the proposed approach.
Surveillance images downlinked from unmanned air vehicles (UAVs) may have corrupted pixels due to channel interference from an adversary’s jammer. Moreover, the images may be deliberately downsampled to conserve the scarce bandwidth available to UAVs. As a result, automatic target recognition (ATR) performance may degrade significantly because of the poor image quality caused by corrupted and missing pixels. In this paper, we present preliminary results of a novel approach to automatic target recognition from corrupted images. First, we present a new matrix completion algorithm to reconstruct missing pixels in electro-optical (EO) images. Second, we extensively evaluated the algorithm on many EO images with different missing rates and observed that the recovery performance, in terms of peak signal-to-noise ratio (PSNR), is very good. Third, we compared it with a state-of-the-art algorithm and found that our performance is superior. Finally, experiments using an ATR algorithm showed that target detection performance (precision and recall) improved after applying our algorithm, compared to results generated using interpolated images.
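Although the paper's matrix completion algorithm is new, the standard singular value thresholding (SVT) scheme of Cai, Candès, and Shen conveys the family's core idea: iteratively shrink the singular values and feed back the error on observed pixels. The sketch below is that textbook scheme, not the paper's algorithm, and the parameters are illustrative:

```python
import numpy as np

def svt_complete(observed, mask, tau=50.0, step=1.0, iters=200):
    """Singular value thresholding sketch for matrix completion:
    recover a low-rank matrix from the entries where mask is True.
    observed: matrix with arbitrary values at unobserved entries;
    mask: bool array marking the trusted (uncorrupted) pixels."""
    Y = np.zeros_like(observed, dtype=float)
    X = np.zeros_like(observed, dtype=float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += step * np.where(mask, observed - X, 0.0)    # feed back observed error
    return X
```

For images, the low-rank assumption is applied patch-wise in practice; `tau` trades off rank against data fit and is usually scaled with the matrix dimensions.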
This paper presents a practical approach to target detection in hyperspectral images. In target detection, it is normally assumed that ground truth target signatures collected in a laboratory are available, and one uses them to search for targets in a given image. However, directly applying the laboratory signatures to real data is not appropriate due to environmental differences between the ground truth data and the real data. Conventional atmospheric compensation schemes such as MODTRAN can help improve target detection performance, but their computational load is huge, which may prohibit real-time applications. We present results of an alternative technique known as in-scene compensation, which is appealing because no complicated tools such as MODTRAN are needed. Two in-scene methods for the visible near-infrared/short-wave infrared range have been developed in the literature: the empirical line method (ELM) and vegetation normalization (VN). Both approaches have advantages and disadvantages. We propose a hybrid in-scene compensation method that can be considered a combination of ELM and VN, which we call ELM-augmented VN (EAVN). A key advantage of EAVN is that it combines the advantages of ELM and VN while eliminating their disadvantages: compared to ELM, there is no need for two or more known target pixels in the test scene; compared to VN, there is no need for dark pixels. Extensive experimental results using ground-based sensor data showed that the EAVN algorithm provides excellent compensation for environmental changes. After compensation, the receiver operating characteristic performance of target detection improved significantly, by orders of magnitude in a number of cases, compared to two standard compensation methods: quick atmospheric correction and internal average relative reflectance correction.
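The ELM half of the hybrid can be made concrete: from two or more in-scene reference pixels of known reflectance, fit a per-band gain and offset mapping measured radiance to reflectance. This numpy sketch is a generic ELM illustration (function name and data layout are ours, not the paper's EAVN implementation):

```python
import numpy as np

def empirical_line(radiance_panels, reflectance_panels):
    """Empirical line method sketch: per-band linear fit
    reflectance ~ gain * radiance + offset from in-scene reference
    panels of known reflectance.
    radiance_panels, reflectance_panels: (P, B) with P >= 2 panels."""
    P, B = radiance_panels.shape
    gains = np.empty(B)
    offsets = np.empty(B)
    for b in range(B):
        A = np.column_stack([radiance_panels[:, b], np.ones(P)])
        (g, o), *_ = np.linalg.lstsq(A, reflectance_panels[:, b], rcond=None)
        gains[b] = g
        offsets[b] = o
    return gains, offsets
```

EAVN's contribution, as described above, is to obtain this kind of per-band correction without requiring two or more known target pixels in the scene, by borrowing the vegetation reference idea from VN.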
NASA is developing a new generation of audio systems for astronauts based on directional speakers and microphone arrays. However, since the helmet environment is highly reverberant, the inbound signal from the directional speaker may still enter the outbound path (the microphone array), resulting in an annoying positive feedback loop. To improve communication quality between astronauts, it is necessary to develop a digital filtering system that minimizes the interaction between inbound and outbound signals.
In this paper, we present the following results. First, we set up experiments under three scenarios: office, bowl, and helmet, and collected data in each. Second, three adaptive filters, normalized least mean squares (NLMS), affine projection (AP), and recursive least squares (RLS), were applied to the experimental data. We also developed a new frequency-domain adaptive filter called FDAFSS (frequency-domain adaptive filter (FDAF) with spectral subtraction (SS)), which combines FDAF and SS. FDAFSS was compared with the NLMS, AP, RLS, FDAF, and SS filters and yielded better performance in terms of perceptual evaluation of speech quality (PESQ). Moreover, FDAFSS is fast and yields uniform convergence across frequency bands.
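As a reference point for the time-domain filters, an NLMS echo canceller fits in a dozen lines of numpy. The tap count and step size below are illustrative choices, not the values used in the helmet experiments:

```python
import numpy as np

def nlms_cancel(x, d, taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS echo canceller sketch: adapt an FIR filter so its
    output tracks the echo of the inbound signal x contained in the
    microphone signal d; returns the error (echo-free) signal and the
    final weights."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # newest sample first
        e[n] = d[n] - w @ u               # subtract the estimated echo
        w += mu * e[n] * u / (u @ u + eps)  # normalized step
    return e, w
```

The per-sample normalization by the input energy is what distinguishes NLMS from plain LMS and makes its convergence less sensitive to the signal level, a property the frequency-domain FDAFSS variant extends per frequency band.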
In this paper, we summarize our efforts in using three different radars (impulse, swept-frequency, and continuous-wave) for through-the-wall sensing. The purpose is to understand the pros and cons of each of the three radars. Through extensive experiments, it was found that the radars are complementary, and multiple radars are needed for different through-the-wall target detection and tracking scenarios.
Endmember extraction in hyperspectral images (HSI) is a critical step for target detection and abundance estimation. In this paper, we propose a new approach to endmember extraction that takes advantage of the sparsity of the linear representation of an HSI spectral vector. Sparsity is measured by the l0 norm of the abundance vector, and it is well known that the l1 norm closely resembles the l0 norm in promoting sparsity while keeping the minimization problem convex and tractable. Adding the l1 norm term to the objective function results in a constrained quadratic program that can be solved effectively as a linear complementarity problem (LCP). Unlike existing methods, which require expensive computations in each iteration, the LCP solver requires only pivoting steps, which are extremely simple and efficient for the unmixing problem since the number of signatures in the reconstruction basis is reasonably small. Preliminary experiments with the proposed methods for both supervised and unsupervised abundance decomposition showed competitive results compared to least-squares-based methods such as fully constrained least squares (FCLS). Furthermore, combining our unsupervised decomposition with anomaly detection yields a decent target detection algorithm compared to methods that require prior information about target and background signatures.
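The l1-regularized nonnegative unmixing objective can also be minimized by simple projected gradient steps. The sketch below is an ISTA-style stand-in for the paper's LCP pivoting solver, useful for checking results on small problems (all parameter choices are ours):

```python
import numpy as np

def sparse_unmix(E, y, lam=0.0, iters=5000):
    """Sparsity-promoting nonnegative unmixing sketch: minimize
    0.5*||E a - y||^2 + lam*||a||_1 subject to a >= 0 by projected
    gradient descent (under a >= 0, ||a||_1 is simply sum(a)).
    E: (bands, endmembers) signature matrix; y: (bands,) pixel spectrum."""
    a = np.zeros(E.shape[1])
    L = np.linalg.norm(E, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = E.T @ (E @ a - y)         # least-squares gradient
        a = np.maximum(a - (grad + lam) / L, 0.0)   # step, then project
    return a
```

With lam = 0 this reduces to nonnegative least squares; increasing lam drives small abundances exactly to zero, which is the sparsity effect the l1 term is added to produce. Unlike the LCP pivoting approach, this first-order scheme needs many cheap iterations rather than a few pivots.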