As the Air Force pushes toward reliance on autonomous systems for navigation, situational awareness, threat analysis, and target engagement, several requisite technologies must be developed. Key among these is the concept of 'trust' in the autonomous system to perform its task. The term 'trust' has many application-specific definitions. We propose that properly calibrated algorithm confidence is essential to establishing trust. To accomplish properly calibrated confidence, we present a framework for assessing algorithm performance and estimating the confidence of a classifier's declaration. This framework has applications to improved algorithm trust, fusion, and diagnostics. We present a metric for comparing the quality of performance modeling and examine three different implementations of performance models on a synthetic dataset over a variety of operating conditions.
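One common way to quantify calibration is the expected calibration error (ECE): bin declarations by reported confidence and average the gap between empirical accuracy and mean confidence. This is a minimal sketch of that general idea, not the metric proposed in the abstract; the equal-width binning and bin count are illustrative choices.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Expected calibration error: bin declarations by reported confidence,
    then average |empirical accuracy - mean confidence| weighted by bin mass."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0.0:
            sel = (conf >= lo) & (conf <= hi)
        else:
            sel = (conf > lo) & (conf <= hi)
        if sel.any():
            ece += sel.mean() * abs(correct[sel].mean() - conf[sel].mean())
    return ece
```

A classifier reporting 0.8 confidence that is right 80% of the time scores zero; one reporting 0.99 while right half the time scores near 0.5.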
Since radar imaging of buried objects involves propagation through media that are at best partially known, there is mismatch between the forward model used in the inversion and the propagation behavior actually observed in the measured data. The mismatch can cause degradation and/or reduced resolution in the imagery, which limits the automatic target recognition features that can be extracted from the imagery. Recently, several research groups have advocated backpropagation of interferometric measurements as a more statistically stable estimator of targets in the presence of forward model errors and in the presence of clutter. Specifically, the lifting approach to inverse problems [Demanet and Jugnon, 2017] has been proposed as a robust approach to inversion in the presence of forward model mismatch that can produce reconstructions with fidelity comparable to direct inversion with the matched model. We apply this technique to radar imaging of buried targets to determine if it can produce enhanced imagery in the presence of limited knowledge of the surrounding ground geometry and/or material properties. In this paper we describe the algorithm implementation and present results for both simulated and measured data. The results show that the approach has significant potential for enhancing images of buried objects in scenarios with realistic forward model mismatch. However, we have observed significant sensitivity to surrounding clutter and to the choice of regularization. Mitigating these sensitivities is a topic of ongoing research.
Backprojection of cross-correlated array data, using algorithms such as coherent interferometric imaging (Borcea et al., 2006), has been advanced as a method to improve the statistical stability of images of targets in an inhomogeneous medium. Recently, the Windowed Beamforming Energy (WBE) function algorithm has been introduced as a functionally equivalent approach that is significantly less computationally burdensome (Borcea et al., 2011). WBE produces similar results through the use of a quadratic function summing signals after beamforming in transmission and reception, and windowing in the time domain. We investigate the application of WBE to improve the detection of buried targets with forward looking ground penetrating MIMO radar (FLGPR) data. The formulation of WBE, as well as the software implementation of WBE for the FLGPR data collection, will be discussed. WBE imaging results are compared to standard backprojection and Coherence Factor imaging. Additionally, the effectiveness of WBE on field-collected data is demonstrated qualitatively through images and quantitatively through the use of a CFAR statistic on buried targets of a variety of contrast levels.
This paper discusses the application of several image formation techniques to forward looking ground penetrating radar (FLGPR) data to observe if they improve target-to-clutter ratio. Specifically, regularized imaging with L1 and total variation constraints and coherence-factor filtered images are considered. The technical framework and software implementation of each of these image formation techniques are discussed, and results of applying the techniques to field collected data are presented. The results from the different techniques are compared to standard backprojection and compared to each other in terms of image quality and target-to-clutter ratio.
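Of the techniques above, the coherence factor is the simplest to state: the ratio of coherent-sum energy to channel-by-channel energy across N channels. A minimal numpy sketch, assuming per-channel complex signals stacked along the first axis (the stabilizing epsilon is an illustrative choice):

```python
import numpy as np

def coherence_factor(signals, axis=0):
    """CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2): equals 1 for fully coherent
    channel signals and is small (~1/N) for incoherent, random-phase signals."""
    s = np.asarray(signals)
    num = np.abs(s.sum(axis=axis)) ** 2
    den = s.shape[axis] * (np.abs(s) ** 2).sum(axis=axis)
    return num / np.maximum(den, 1e-30)
```

Multiplying a backprojection image pixelwise by its coherence factor suppresses pixels whose per-channel contributions do not add in phase, which is the basic mechanism behind coherence-factor filtering.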
With all of the new remote sensing modalities available, and with ever increasing capabilities and frequency of collection, there is a desire to fundamentally understand/quantify the information content in the collected image data relative to various exploitation goals, such as detection/classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have sufficiently flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in the form of both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification theory is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. All of this framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies, and of utilizing the pseudo-likelihood methods.
KEYWORDS: Error analysis, Information theory, Monte Carlo methods, Data modeling, Statistical analysis, Analytical research, Matrices, Imaging systems, Computer simulations, Systems modeling, Radar
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been known for some time but was recently made more prominent by research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models, the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
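To illustrate the Monte Carlo computation: the triangle divergence delta(p, q) = integral of (p - q)^2 / (p + q) can be rewritten as 2 * E_m[((p - q) / (p + q))^2] with m = (p + q)/2, so drawing equally from p and q gives a simple estimator. This is a sketch of that identity, not the paper's estimator; the unit-variance Gaussian densities are illustrative.

```python
import numpy as np
from scipy import stats

def triangle_divergence_mc(p_pdf, q_pdf, p_draw, q_draw, n=200_000, seed=0):
    """Monte Carlo estimate of delta(p,q) = int (p-q)^2/(p+q) dx, obtained by
    sampling from the equal mixture m = (p+q)/2 (half from p, half from q)."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([p_draw(n // 2, rng), q_draw(n // 2, rng)])
    p, q = p_pdf(x), q_pdf(x)
    r = (p - q) / (p + q)
    return 2.0 * np.mean(r ** 2)

# Illustrative one-dimensional example: two unit-variance Gaussians
p = stats.norm(0.0, 1.0)
q = stats.norm(1.0, 1.0)
delta_mc = triangle_divergence_mc(p.pdf, q.pdf,
                                  lambda k, rng: p.rvs(k, random_state=rng),
                                  lambda k, rng: q.rvs(k, random_state=rng))
```

The estimator is exactly zero when p = q, and for smooth low-dimensional densities it can be checked against direct numerical quadrature.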
KEYWORDS: Principal component analysis, Sensors, Explosives, Ground penetrating radar, Detection and tracking algorithms, General packet radio service, Radar, Land mines, Target detection, Signal to noise ratio
Explosive hazards are a deadly threat in modern conflicts; hence, detecting them before they cause injury or death is of paramount importance. One method of buried explosive hazard discovery relies on data collected from ground penetrating radar (GPR) sensors. Threat detection with downward looking GPR is challenging due to large returns from non-target objects and clutter. This leads to a large number of false alarms (FAs), and since the responses of clutter and targets can form very similar signatures, classifier design is not trivial. One approach to combat these issues uses robust principal component analysis (RPCA) to enhance target signatures while suppressing clutter and background responses, though there are many versions of RPCA. This work applies some of these RPCA techniques to GPR sensor data and evaluates their merit using the peak signal-to-clutter ratio (SCR) of the RPCA-processed B-scans. Experimental results on government furnished data show that while some of the RPCA methods yield similar results, there are indeed some methods that outperform others. Furthermore, we show that the computation time required by the different RPCA methods varies widely, and the selection of tuning parameters in the RPCA algorithms has a major effect on the peak SCR.
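The evaluation metric can be made concrete with a short sketch. The mask convention and the dB definition below (peak magnitude in a declared target region over the RMS magnitude of everything else) are our assumptions, not necessarily the exact definition applied to the government furnished data:

```python
import numpy as np

def peak_scr_db(bscan, target_mask):
    """Peak signal-to-clutter ratio in dB: peak magnitude inside the declared
    target region divided by the RMS magnitude of the remaining (clutter) pixels."""
    mag = np.abs(bscan)
    peak = mag[target_mask].max()
    clutter_rms = np.sqrt(np.mean(mag[~target_mask] ** 2))
    return 20.0 * np.log10(peak / clutter_rms)
```

Running this on the raw and RPCA-processed B-scans with the same target mask gives the per-method peak-SCR comparison described above.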
This paper investigates the enhancements to detection of buried unexploded ordnance achieved by combining ground penetrating radar (GPR) data with electromagnetic induction (EMI) data. Novel features from both the GPR and the EMI sensors are concatenated into a long feature vector, on which a non-parametric classifier is then trained. The classifier is a boosting classifier based on tree classifiers, which allows for disparate feature values. The fusion algorithm was applied to a government-provided dataset from an outdoor testing site, and significant performance enhancements were obtained relative to classifiers trained solely on the GPR or EMI data. It is shown that the performance enhancements come from a combination of improvements in detection and in clutter rejection.
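Boosting over a concatenated, scale-disparate feature vector can be sketched with decision stumps, since threshold tests are invariant to per-feature scaling; the paper's classifier is tree-based boosting, so this stump version is a simplified stand-in rather than the actual algorithm:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=5):
    """AdaBoost with depth-1 stumps on labels y in {-1, +1}; threshold tests
    handle disparate feature scales (e.g. concatenated GPR + EMI features)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                      # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # reweight toward mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

Because each weak learner only compares a single feature to a threshold, the wildly different numeric ranges of the GPR and EMI features need no normalization.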
This paper investigates the use of the apex-shifted hyperbolic Radon transform to improve detection of buried unexploded ordnance with ground penetrating radar (GPR). The forward transform, motivated by the physical signatures generated by targets, is defined and implemented. The adjoint of the transform is derived and implemented as well. The transform and its adjoint are used to filter out responses that do not exhibit the hyperbolic structure characteristic of GPR target responses. The effectiveness of filtering out clutter via this hyperbolic Radon transform procedure is demonstrated qualitatively on several examples of GPR B-scan imagery from a government-provided dataset collected at an outdoor testing site. Furthermore, a quantitative assessment of its utility within a detection algorithm is given in terms of improved ROC curve performance on the same dataset.
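A minimal nearest-sample discretization of an apex-shifted hyperbolic Radon pair, written so that the adjoint passes an exact dot-product test. The single fixed velocity and nearest-neighbor interpolation are simplifications of what a fielded implementation would use:

```python
import numpy as np

def hyper_radon_forward(m, xs, ts, x0s, t0s, v):
    """Spread each model coefficient m[i0, j0] along the apex-shifted
    hyperbola t = sqrt(t0^2 + ((x - x0)/v)^2), nearest time sample."""
    dt = ts[1] - ts[0]
    d = np.zeros((len(xs), len(ts)))
    for i0, x0 in enumerate(x0s):
        for j0, t0 in enumerate(t0s):
            t = np.sqrt(t0 ** 2 + ((xs - x0) / v) ** 2)
            k = np.rint((t - ts[0]) / dt).astype(int)
            ok = (k >= 0) & (k < len(ts))
            d[np.arange(len(xs))[ok], k[ok]] += m[i0, j0]
    return d

def hyper_radon_adjoint(d, xs, ts, x0s, t0s, v):
    """Exact adjoint: sum the data along each hyperbola (same index map)."""
    dt = ts[1] - ts[0]
    m = np.zeros((len(x0s), len(t0s)))
    for i0, x0 in enumerate(x0s):
        for j0, t0 in enumerate(t0s):
            t = np.sqrt(t0 ** 2 + ((xs - x0) / v) ** 2)
            k = np.rint((t - ts[0]) / dt).astype(int)
            ok = (k >= 0) & (k < len(ts))
            m[i0, j0] = d[np.arange(len(xs))[ok], k[ok]].sum()
    return m
```

Filtering then amounts to applying the adjoint, keeping only strong hyperbolic energy in the Radon domain, and mapping back with the forward operator; the dot-product identity <A m, d> = <m, A' d> is the standard check that the adjoint is implemented correctly.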
This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective “ground-plane” and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and an irregular air-ground interface.
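The fully vectorized iterative scheme for per-pixel quartics can be sketched as a Newton iteration over arrays of coefficients. The actual Snell reflection-point quartic coefficients depend on the antenna and ground-plane geometry, so the test polynomials here are illustrative stand-ins:

```python
import numpy as np

def newton_quartic(coeffs, x0, iters=50, tol=1e-12):
    """Vectorized Newton iteration for many quartics at once.
    coeffs: array of shape (5, N); coeffs[k] multiplies x**k.
    x0:     array of N starting points (one per pixel/quartic)."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        p = (coeffs[4] * x ** 4 + coeffs[3] * x ** 3
             + coeffs[2] * x ** 2 + coeffs[1] * x + coeffs[0])
        dp = (4 * coeffs[4] * x ** 3 + 3 * coeffs[3] * x ** 2
              + 2 * coeffs[2] * x + coeffs[1])
        step = p / dp
        x -= step                       # simultaneous Newton update, all pixels
        if np.max(np.abs(step)) < tol:
            break
    return x
```

Because every pixel's update is a few elementwise array operations, the per-pixel root solve stays fully vectorized with no Python-level loop over pixels.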
KEYWORDS: Principal component analysis, General packet radio service, Detection and tracking algorithms, Target detection, Radon, Ground penetrating radar, Signal detection, Sensors, Independent component analysis, Signal processing
This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
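The RPCA step itself (principal component pursuit) can be sketched with the inexact augmented Lagrange multiplier method: the data matrix is split into a low-rank background/clutter component and a sparse anomaly component. The parameter choices below (lambda = 1/sqrt(max(m,n)), the mu schedule) follow common defaults and are assumptions rather than the exact variant used in the paper:

```python
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Robust PCA via inexact ALM: M ~ L (low rank) + S (sparse)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable initialization
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular-value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = M - L - S
        Y += mu * Z
        mu = min(mu * rho, 1e7 * (1.25 / norm2))
        if np.linalg.norm(Z, 'fro') <= tol * np.linalg.norm(M, 'fro'):
            break
    return L, S
```

In the GPR pipeline, M holds the (frequency-domain) scans, L absorbs the slowly varying ground/clutter response, and the sparse S carries the anomaly energy that feeds the prescreener.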
This paper explores the effectiveness of an anomaly detection algorithm for downward-looking ground penetrating radar (GPR) and electromagnetic induction (EMI) data. Threat detection with GPR is challenged by high responses to non-target/clutter objects, leading to a large number of false alarms (FAs), and since the responses of target and clutter signatures are so similar, classifier design is not trivial. We suggest a method based on a Run Packing (RP) algorithm to fuse GPR and EMI data into a composite confidence map to improve detection as measured by the area-under-ROC (NAUC) metric. We examine the value of a multiple kernel learning (MKL) support vector machine (SVM) classifier using image features such as histogram of oriented gradients (HOG), local binary patterns (LBP), and local statistics. Experimental results on government furnished data show that use of our proposed fusion and classification methods improves the NAUC when compared with the results from individual sensors and a single kernel SVM classifier.
With all of the new remote sensing modalities available, with ever increasing capabilities, there is a constant desire to extend the current state of the art in physics-based feature extraction and to introduce new and innovative techniques that enable the exploitation within and across modalities, i.e., fusion. A key component of this process is finding the associated features from the various imaging modalities that provide key information in terms of exploitative fusion. Further, it is desired to have an automatic methodology for assessing the information in the features from the various imaging modalities, in the presence of uncertainty. In this paper we propose a novel approach for assessing, quantifying, and isolating the information in the features via a joint statistical modeling of the features with the Gaussian Copula framework. This framework allows for a very general modeling of distributions on each of the features while still modeling the conditional dependence between the features, and the final output is a relatively accurate estimate of the information-theoretic J-divergence metric, which is directly related to discriminability. A very useful aspect of this approach is that it can be used to assess which features are most informative, and what is the information content as a function of key uncertainties (e.g., geometry) and collection parameters (e.g., SNR and resolution). We show some results of applying the Gaussian Copula framework and estimating the J-Divergence on HRR data as generated from the AFRL public release data set known as the Backhoe Data Dome.
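Two building blocks of this framework can be sketched compactly: mapping features to latent Gaussian scores through their empirical CDFs (the first step of fitting a Gaussian copula), and the closed-form J-divergence between the fitted latent Gaussians. The rank-based CDF estimate and the zero-mean latent assumption are simplifications of the full framework:

```python
import numpy as np
from scipy import stats

def copula_normal_scores(X):
    """Map each feature (column of X) to latent Gaussian scores via its
    empirical CDF; the latent correlation matrix is then np.corrcoef of these."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1.0
    return stats.norm.ppf(ranks / (n + 1.0))

def j_divergence_zero_mean(R1, R2):
    """Symmetric KL (J-) divergence between N(0, R1) and N(0, R2)."""
    d = R1.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(R1, R2))
                  + np.trace(np.linalg.solve(R2, R1)) - 2 * d)
```

Per class, one would estimate R = np.corrcoef(copula_normal_scores(X), rowvar=False) from that class's feature samples, and then compare classes (or feature subsets) through the J-divergence of the fitted models; larger values indicate more discriminable features.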
Two major missions of surveillance systems are imaging and ground moving target indication (GMTI). Recent advances in coded-aperture electro-optical systems have enabled persistent surveillance systems with extremely large fields of regard. The areas of interest for these surveillance systems are typically urban, with spatial topologies having a very definite structure. We incorporate aspects of a priori information on this structure into our aperture code designs to enable optimized dealiasing operations for undersampled focal plane arrays. Our framework enables us to design aperture codes to minimize mean square error for image reconstruction or to maximize signal-to-clutter ratio for GMTI detection. In this paper we present a technical overview of our code design methodology and show the results of our designed codes on simulated DIRSIG mega-scene data.
Previously, we demonstrated useful and novel features of the General Dynamics QuickStar adaptive-optics testbed
utilizing Phase Diversity (PD) as the wavefront sensor operating on a point object. Point objects are relatively easy to
produce in the laboratory and simplify the calibration procedure. However, for some applications, natural or artificial
beacons may not be readily available and a wavefront sensor that operates on extended scenes is required. Accordingly,
the QuickStar testbed has been augmented to allow PD to operate on natural three-dimensional solar-illuminated scenes
external to the QuickStar laboratory. In addition, a computationally efficient chip-selection strategy has been developed
that allows PD to operate on chips with favorable scene content. Finally, a covariance matrix has been developed that
provides an accuracy estimate for PD wavefront-parameter estimates. The covariance can be used by the controls
algorithm to properly weight the correction applied according to the accuracy of the estimates. These advances suggest
that PD is a sufficiently mature technology for use in adaptive optics systems that require operation with extended
scenes.
Phase Diversity (PD) is a wavefront-sensing technology that offers certain advantages in an Adaptive-Optics
(AO) system. Historically, PD has not been considered for use in AO applications because computations have
been prohibitive. However, algorithmic and computational-hardware advances have recently allowed use of PD
in AO applications. PD is an attractive candidate for AO applications for a variety of reasons. The optical
hardware required is simple to implement and eliminates non-common-path errors. In addition, PD has
been shown to work well with extended scenes that are encountered, for example, when imaging low-contrast solar
granulation. PD can estimate high-order continuous aberrations as well as wavefront discontinuities characteristic
of segmented-aperture or sparse-aperture telescope designs. Furthermore, the fundamental information content
in a PD data set is shown to be greater than that of the correlation Shack-Hartmann wavefront sensor for the
limiting case of unresolved objects. These advantages coupled with recent laboratory results (extended-scene
closed-loop AO with PD sampling at 100 Hz) highlight the maturation of not only the PD concept and algorithm
but the technology as an emerging and viable wavefront sensor for use in AO applications.
Space-variant blur occurs when imaging through volume turbulence over sufficiently large fields of view. Space-variant effects are particularly severe in horizontal-path imaging, slant-path (air-to-ground or ground-to-air) geometries, and ground-based imaging of low-elevation satellites or astronomical objects. In these geometries, the isoplanatic angle can be comparable to or even smaller than the diffraction-limited resolution angle. Clearly, space-invariant methods used in conjunction with mosaicing will fail in this regime. Our approach to this problem has been to generalize the method of Phase-diverse Speckle (PDS) by using a physically motivated distributed phase-screen model to accomplish both pre- and post-detection correction. Previously reported simulation results have demonstrated the reconstruction of near diffraction-limited imagery from imagery which was severely degraded by space-variant blur. In this paper, we present a novel adaptation of the space-variant PDS scheme for use as a beacon-less wavefront sensor in a multi-conjugate AO system when imaging extended scenes. We then present results of simulation experiments demonstrating that this multi-conjugate AO-compensation scheme is very effective in improving the quality and resolution of collected imagery.
There is currently much interest in deploying large space-based telescopes for various applications including fine-resolution astronomical imaging and earth observing. Often a large primary mirror is synthesized by the precise alignment of several smaller mirror segments. Misalignment or misfigure of these segments results in phase errors which degrade the resolution of collected imagery. Phase diversity (PD) is a technique used to infer unknown phase aberrations from image data. It requires the collection of two or more images of the same object, each incorporating a known phase perturbation in addition to the unknown aberrations. Statistical estimation techniques are employed to identify a combination of object and aberrations that is consistent with all of the collected images. The wavefront-sensing performance of PD is evaluated through simulation for a variety of signal and aberration strengths. The aberrations are parameterized by piston and tilt misalignment of each segment. An unknown extended scene is imaged, complicating the estimation procedure. Since wavefront correction is often an iterative process, moderate estimation errors can be corrected by subsequent estimates. The interpretation of iterative wavefront adjustments as creating new phase-diversity channels suggests a more sophisticated processing approach, called Actuated Phase Diversity. This technique is shown to significantly improve PD wavefront-sensing performance.
Imaging through volume turbulence gives rise to anisoplanatism (space-variant blur). The effects of volume turbulence on imaging are often modeled through the use of a sequence of phase screens distributed along the optical path. Wallner recently derived a prescription for the optimal functional form and location of multiple phase screens for use in simulating the effects of volume turbulence in infinite-range imaging geometries. We generalized Wallner's method to accommodate the finite range case and to have a more optimal functional form for the phase screens. These methods can also be used for designing a multi-conjugate AO system. Examples of optimal solutions are given for horizontal-path finite-range imaging cases.
Space-variant blur occurs when imaging through volume turbulence over sufficiently large fields of view. This condition arises in a variety of imaging geometries, including astronomical imaging, horizontal-path imaging, and slant-path (e.g. air-to-ground) imaging. Space-variant effects are particularly severe when much of the optical path is immersed in turbulent media. We present a novel post-processing algorithm based on the technique of phase-diverse speckle (PDS) and a physical model for the space-variant blur. PDS imaging is a combination of phase diversity and speckle imaging which has proven to be an effective post-processing technique for cases with space-invariant blur. We present the details of the algorithm modified to accommodate space-variance and demonstrate its performance with results from both simulation experiments and real-data experiments. The results show that the space-variant PDS algorithm is very effective in cases involving severe space-variant blur, which causes correction techniques based on space-invariant models to fail.
By illuminating an object with a laser and collecting far-field speckle intensity patterns at a regularly spaced sequence of wavelengths, one obtains the squared magnitude of the 3D Fourier transform of the object. Performing 3D phase retrieval to reconstruct a 3D image (consisting of complex-valued voxels) is relatively difficult unless one has a tight support constraint. An alternative is to perform averaging of the autocovariance of the far-field speckle intensities, over an ensemble of speckle realizations, to estimate the squared magnitude of the Fourier transform of the underlying (incoherent) reflectivity of the object, by the correlography method. This also gives us an incoherent-image-autocorrelation estimate, from which we can derive an initial support constraint. Since the image, being incoherent, is real-valued and nonnegative, performing phase retrieval on this data is easier and more robust. Unfortunately, the resolution for correlography is only moderate since the SNR is low at the higher spatial frequencies. However, one can then use a thresholded version of that reconstructed incoherent image as a tight support constraint for performing phase retrieval on the original speckle intensity patterns to reconstruct a fine-resolution, coherent image. The fact that the objects are opaque plays an important role in the robustness of this approach. We will show successful reconstruction results from real data collected in the laboratory as part of the PROCLAIM (Phase Retrieval with an Opacity Constraint for LAser IMaging) effort.
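The support-plus-nonnegativity phase retrieval step can be illustrated with Fienup's basic error-reduction iteration. The PROCLAIM processing is considerably more sophisticated; this sketch only conveys the core alternating-projection idea on a small 2D example:

```python
import numpy as np

def error_reduction(Fmag, support, n_iter=200, seed=0):
    """Basic error-reduction phase retrieval: alternately impose the measured
    Fourier magnitudes and the object-domain support/nonnegativity constraints
    (the latter being valid for a real, incoherent image)."""
    rng = np.random.default_rng(seed)
    g = rng.random(Fmag.shape) * support        # random nonneg start on support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = Fmag * np.exp(1j * np.angle(G))     # keep measured magnitudes
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0) # support + nonnegativity
    return g
```

The tighter the support mask, the faster and more reliably the iteration locks in, which is exactly why the correlography-derived support estimate is valuable.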
Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than would be a spectral matched filter or similar algorithm. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.
In this paper we describe a novel method of automatic target detection applied directly to the synthetic aperture radar (SAR) phase history. Our algorithm is based on a sequential likelihood ratio test (Wald test). The time dynamic behavior of the SAR phase history is modeled as a 2D autoregressive process. The sequential test attempts to dynamically ascertain the presence/absence of a target while the SAR phase history data is being collected. A target/no target decision can then be made during the collection aperture. System resources such as collection aperture and image formation processing can be dynamically reallocated depending on scene content. In contrast, image based detection methods wait until the entire aperture is collected, an image formed, then an algorithm is applied. We will show that significant savings in collection aperture can be obtained using this detection structure which may increase system search rates.
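The sequential decision logic here is Wald's sequential probability ratio test (SPRT). A scalar-Gaussian sketch follows; the paper's statistic is built on a 2D autoregressive model of the phase history, which this simplification omits, and the Gaussian mean-shift hypotheses are illustrative:

```python
import numpy as np

def sprt_gaussian_mean(x_stream, mu0, mu1, sigma, alpha=1e-3, beta=1e-3):
    """Wald SPRT for H1: mean=mu1 vs H0: mean=mu0 on an i.i.d. Gaussian
    stream; returns (decision, number of samples consumed)."""
    A = np.log((1 - beta) / alpha)   # accept H1 when the LLR crosses A
    B = np.log(beta / (1 - alpha))   # accept H0 when the LLR crosses B
    llr = 0.0
    for n, x in enumerate(x_stream, 1):
        # per-sample log-likelihood ratio for Gaussian mean shift
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", n
```

The appeal for SAR is exactly what the abstract describes: the test typically terminates after far fewer samples than a fixed-length collection, freeing aperture and processing resources early.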
We compare phase diversity and curvature wavefront sensing. Besides having completely different reconstruction algorithms, the two methods measure data in different domains: phase diversity very near to the focal plane, and curvature wavefront sensing far from the focal plane in quasi-pupil planes, which enable real-time computation of the wavefront using analog techniques. By using information-theoretic lower bounds, we show that the price of measuring far from the focal plane is an increased error in estimating the phase. Curvature wavefront sensing is currently operating in the field, but phase diversity should produce superior estimates as real-time computing develops.
Generating a 3D image can be accomplished by gathering 3D far-field heterodyne array data with multiple laser wavelengths and performing a 3D Fourier transform. However, since heterodyne detection is difficult at optical frequencies, the collection system can be greatly simplified if direct detection is performed instead. Then to reconstruct an image one would need a phase-retrieval algorithm. To assist the reconstruction algorithm, we place bounds on the support of the illuminated object, derived from the support of the autocorrelation function, which can be computed from the Fourier intensity data. We have developed 3D locator sets for getting tight bounds on the object support. These new locator sets are more powerful than those for 2D imaging, and some of them make explicit use of the fact that the illuminated opaque object is effectively a 2D surface embedded in 3D space. For those cases in which it is tight enough, the locator set itself may be all that we need to give an accurate height profile of the object.
The effect of focus anisoplanatism upon the performance of an astronomical laser guide star (LGS) adaptive optics (AO) system can in principle be reduced if the lowest order wavefront aberrations are sensed and corrected using a natural guide star (NGS). For this approach to be useful, the noise performance of the wavefront sensor (WFS) used for the NGS measurements must be optimized to enable operation with the dimmest possible source. Two candidate sensors for this application are the Shack-Hartmann sensor and “phase-diverse phase retrieval,” a comparatively novel approach in which the phase distortion is estimated from two or more well-sampled, full-aperture images of the NGS measured with known adjustments applied to the phase profile. We present analysis and simulation results on the noise-limited performance of these two methods for a sample LGS AO observing scenario. The common parameters for this comparison are the NGS signal level, the sensing wavelength, the second-order statistics of the phase distortion, and the RMS detector read noise. Free parameters for the two approaches are the Shack-Hartmann subaperture geometry, the focus biases used for the phase-diversity measurements, and the algorithms used to estimate the wavefront. We find that phase-diverse phase retrieval provides consistently superior wavefront estimation accuracy when the NGS signal level is high. For lower NGS signal levels on the order of 10^3 photodetection events, the Shack-Hartmann (phase diversity) approach is preferred at an RMS detector read noise level of 5 (0) electrons/pixel.
In phase diverse speckle imaging, one collects a time series of phase-diversity image sets. From these data it is possible to jointly estimate the object and each realization of the aberrations. Current approaches model the total aberration phase screen in some deterministic, parametric fashion. For a typical scenario, however, one has more information than this. Specifically, the total aberration phase screen is caused by fixed aberrations combined with dynamic (time-varying), turbulence-induced aberrations for which we have some knowledge about the stochastic behavior. One important example is where the dynamic aberrations derive from Kolmogorov turbulence. In this context, utilizing this extra information has the potential for being a powerful aid in the joint aberration/object estimation. In addition, such a framework would provide a relatively simple method for calibrating fixed aberrations in an imaging system. The natural framework for utilizing the stochastic nature of the wavefronts is that of Bayesian statistical inference, where one imposes an a priori probability distribution on the turbulence-induced wavefronts. In this paper, we present the general Bayesian approach for this joint-estimation problem of the fixed aberrations, the dynamic aberrations, and the object from phase-diverse speckle data. We then discuss issues related to theoretical performance, numerical implementation, and applications. Finally we provide simulation results which demonstrate improvement in PDS image reconstructions resulting from the Bayesian estimation approach.
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The overall model incorporates four components: a mission flight model, a multispectral target and background signature model, a multispectral sensor model, and a multispectral target detection model. Emphasis is placed on estimating the effects of mission and sensor parameters on multispectral target detection algorithm performance. Thus, the model ideally supports mission and multispectral sensor trade studies which require optimization of the system's overall target detection performance. The model and a typical example of performance prediction results are presented.
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms, and associated performance models, that are based on data obeying multivariate Gaussian probability density functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power-law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
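The Box-Cox step in this pipeline can be sketched in the univariate case: pick the exponent that makes the transformed data most nearly Gaussian (by maximizing the profile log-likelihood, including the transform's Jacobian), then estimate Gaussian statistics on the transformed data. The grid-search range, sample distribution, and function names below are illustrative assumptions, not part of the ERIM model.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power-law transform of positive data."""
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

def boxcox_lambda(x, grid=np.linspace(-2.0, 2.0, 401)):
    """Grid-search the lambda maximizing the profile log-likelihood of
    Gaussianity (variance term plus the Jacobian of the transform)."""
    n, logx_sum = len(x), np.log(x).sum()
    def loglik(lam):
        return -0.5 * n * np.log(np.var(boxcox(x, lam))) + (lam - 1.0) * logx_sum
    return max(grid, key=loglik)

rng = np.random.default_rng(1)
skewed = rng.exponential(scale=2.0, size=5000)   # strongly non-Gaussian input
lam = boxcox_lambda(skewed)
z = boxcox(skewed, lam)
mu, sigma = z.mean(), z.std()                    # near-Gaussian statistics
```

In practice, `scipy.stats.boxcox` performs the same lambda fit; a Gaussian-based detector can then be applied in the transformed domain using the estimated statistics.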
We describe a band selection process based on wavelet analysis of hyperspectral data which naturally decomposes the data into sub-bands. Wavelet analysis allows the control of the position, resolution, and envelope of the specific spectral sub-bands which will be selected. The sub-band sets are selected to maximize the Kullback-Leibler distance between specific classes of materials for a specific dimensionality constraint or discrimination performance goal. A sequential construction of the sub-band sets is used as an approximation to the global maximization operation over all possible sub-band sets. A max/min strategy is also introduced to provide a robust framework for sub-band selection when faced with multiple materials. We show band selection and material classification results of this technique applied to Fourier transform spectrometer data.
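The sequential construction described above can be sketched as greedy forward selection: at each step, add the sub-band that most increases the Kullback-Leibler distance between two Gaussian class models restricted to the selected bands. The class statistics and band count below are synthetic placeholders, and this two-class sketch omits the wavelet decomposition and the max/min multi-material strategy.

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL divergence between multivariate Gaussians, N0 || N1."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    d = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + d @ inv1 @ d - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def greedy_band_select(mu0, cov0, mu1, cov1, k):
    """Sequentially add the band giving the largest KL gain."""
    selected = []
    for _ in range(k):
        remaining = [b for b in range(len(mu0)) if b not in selected]
        def score(b):
            idx = np.array(selected + [b])
            return kl_gauss(mu0[idx], cov0[np.ix_(idx, idx)],
                            mu1[idx], cov1[np.ix_(idx, idx)])
        selected.append(max(remaining, key=score))
    return selected

# Synthetic two-class example: class means differ mainly in bands 3 and 7.
nb = 10
mu0 = np.zeros(nb)
mu1 = np.zeros(nb); mu1[3] = 2.0; mu1[7] = 1.5
cov0 = cov1 = np.eye(nb)
bands = greedy_band_select(mu0, cov0, mu1, cov1, 2)  # picks 3 first, then 7
```

With identity covariances the KL distance of a subset reduces to half the sum of squared mean gaps, so the greedy order follows the largest mean separations, which makes the example easy to verify by hand.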
The method of phase diversity has been used in the context of incoherent imaging to jointly estimate an object that is being imaged and phase aberrations induced by atmospheric turbulence. The method requires a parametric model for the phase-aberration function. Typically, the parameters are coefficients to a finite set of basis functions. Care must be taken in selecting a parameterization that properly balances accuracy in the representation of the phase-aberration function with stability in the estimates. It is well known that overparameterization can result in unstable estimates. Thus a certain amount of model mismatch is often desirable. We derive expressions that quantify the bias and variance in object and aberration estimates as a function of parameter dimension.
An automated system for the SAR/ISAR imaging of rigid bodies which are undergoing arbitrarily complicated unknown motions is being developed. This system determines, from only the radar data, all observable parameters of motion, on a pulse-by-pulse basis. The approach makes it possible to: (1) exploit any type of relative motion: translational, rotational, two-dimensional, three-dimensional, deterministic, or stochastic; no prior parametric assumptions on the functional form of the motion are required; (2) require only the radar data; no ancillary motion measurement system on either the radar platform or on the target is required; (3) automatically provide all the motion information needed to form correctly scaled images, without cross-range scale ambiguities; (4) make full use of all the radar data; no signals returning from a target are discarded; and (5) require a known computation time, which is not signal dependent, as all iterative processes used have known, guaranteed convergence rates.
A variety of clever pre- and post-detection methods have been developed to image through atmospheric turbulence. These methods have been developed and exercised primarily for use in the isoplanatic (space-invariant) imaging case. Multi-conjugate methods have been investigated for the accommodation of space-variance with compensated imaging; however, multiple guide stars, wavefront sensors, and deformable mirrors are required, making a practical implementation uncertain. Post-detection correction methods can be extended to the space-variant case with mosaicing methods, but such approaches provide suboptimal estimates.