Data decorrelation and energy compaction are the two fundamental characteristics of wavelets that led to wavelet-based image compression models. The wavelet transform is not a perfect whitening transform, but it can be viewed as an approximation to the Karhunen-Loeve transform (KLT). In general, decorrelation does not imply statistical independence; thus, a wavelet transform produces coefficients that exhibit inter- and intra-band dependencies. The energy compaction property of a wavelet is reflected in coding performance, which can be measured by its coding gain. This paper investigates these two important aspects of bi-orthogonal wavelets in the context of lossy compression. The investigation suggests that simple predictive models are sufficient to capture the dependencies exhibited by the wavelet coefficients. The paper also compares metrics that measure the performance of bi-orthogonal wavelets in lossy coding schemes.
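As a rough illustration of the coding gain measure mentioned above (not taken from the paper), the sketch below estimates a subband coding gain from the variances of a 2-D wavelet decomposition; the PyWavelets calls, the 'bior4.4' filter choice, and the omission of the synthesis-filter weighting used for bi-orthogonal filter banks are all simplifying assumptions.

```python
# Sketch (assumption, not the paper's code): estimate the subband coding gain of a
# 2-D wavelet decomposition as the ratio of the arithmetic to the geometric mean of
# the subband variances, each weighted by the fraction of coefficients it holds.
# For bi-orthogonal filters a fuller treatment would also weight each subband by the
# energy of its synthesis basis functions; that factor is omitted here.
import numpy as np
import pywt

def coding_gain(image, wavelet="bior4.4", levels=3):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    subbands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    n_total = sum(b.size for b in subbands)
    variances = np.array([b.var() for b in subbands])
    fractions = np.array([b.size / n_total for b in subbands])
    arithmetic = np.sum(fractions * variances)      # average coefficient energy
    geometric = np.prod(variances ** fractions)     # weighted geometric mean
    return arithmetic / geometric

# Example: coding_gain(np.random.randn(256, 256)) is close to 1 for white noise.
```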
This paper discusses the utility of scale-angle continuous wavelet transform features for object classification. These features serve as input to two recognition tasks: character recognition and target recognition in FLIR images. The resulting recognition algorithm is robust to noise and allows data reduction. A comparative study is made between two types of directional wavelets derived from the Mexican hat wavelet and conventional template matching.
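A minimal sketch of what a scale-angle energy feature could look like, assuming an anisotropic Mexican-hat-style wavelet evaluated in the Fourier domain; the anisotropy factor, normalization, and grid construction below are illustrative choices, not taken from the paper.

```python
# Sketch (assumed formulation): scale-angle energy density of the 2-D continuous
# wavelet transform, evaluated in the Fourier domain via Parseval's relation with
# an anisotropic ("directional") Mexican-hat-style wavelet.
import numpy as np

def mexican_hat_ft(kx, ky, eps=0.25):
    """Fourier transform of an anisotropic Mexican hat (eps is illustrative)."""
    k2 = kx**2 + (eps * ky)**2
    return k2 * np.exp(-k2 / 2.0)

def scale_angle_energy(image, scales, angles):
    f_hat = np.fft.fft2(image.astype(float))
    kx = np.fft.fftfreq(image.shape[1]) * 2 * np.pi
    ky = np.fft.fftfreq(image.shape[0]) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    energy = np.zeros((len(scales), len(angles)))
    for i, a in enumerate(scales):
        for j, th in enumerate(angles):
            # rotate then dilate the frequency grid
            kxr = np.cos(th) * KX + np.sin(th) * KY
            kyr = -np.sin(th) * KX + np.cos(th) * KY
            psi_hat = mexican_hat_ft(a * kxr, a * kyr)
            energy[i, j] = np.sum(np.abs(f_hat * psi_hat) ** 2)
    return energy  # peaks indicate dominant scales/orientations of image features
```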
In this work, we introduce a detection scheme that identifies regions of interest during the intermediate stages of the image formation process for ultra-wideband (UWB) synthetic aperture radar. Traditional detection methods manipulate the data after image formation. However, this approach wastes computational resources by resolving the entire scene to completion, including areas dominated by benign clutter. As an alternative, we introduce a multiscale focus of attention (FOA) algorithm that processes intermediate radar data from a quadtree-based backprojection image formation algorithm. As the stages of the quadtree algorithm progress, the FOA thresholds a detection statistic that estimates the signal-to-background ratio for increasingly smaller subpatches. Whenever a subpatch fails the detection test, the FOA cues the image formation processor to terminate further processing of that subpatch. We demonstrate that the FOA decreases the overall computational load of the image formation process by a factor of two. We also show that the new FOA method produces fewer false alarms than the two-parameter CFAR FOA over a small database of UWB radar data.
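The pruning idea can be sketched as follows, assuming a crude peak-to-mean ratio in place of the paper's signal-to-background estimate; the threshold, patch sizes, and the statistic itself are placeholders.

```python
# Minimal sketch of the focus-of-attention pruning logic (not the paper's code).
# A quadtree walk thresholds a crude signal-to-background statistic on each
# subpatch; subpatches that fail the test are never refined further, which is
# what saves image-formation work in the actual system.
import numpy as np

def signal_to_background(patch):
    """Illustrative statistic: peak magnitude over mean background magnitude."""
    mag = np.abs(patch)
    return mag.max() / (mag.mean() + 1e-12)

def foa_quadtree(patch, threshold=5.0, min_size=32, origin=(0, 0), hits=None):
    if hits is None:
        hits = []
    if signal_to_background(patch) < threshold:
        return hits                             # cue the image former to stop here
    h, w = patch.shape
    if h <= min_size or w <= min_size:
        hits.append((origin, patch.shape))      # region of interest at finest level
        return hits
    for dy in (0, h // 2):
        for dx in (0, w // 2):
            sub = patch[dy:dy + h // 2, dx:dx + w // 2]
            foa_quadtree(sub, threshold, min_size,
                         (origin[0] + dy, origin[1] + dx), hits)
    return hits
```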
The Extended Fractal (EF) feature has been shown to lower the false alarm rate for the focus of attention (FOA) stage of a synthetic aperture radar (SAR) automatic target recognition (ATR) system. The feature is both contrast and size sensitive, and can thus discriminate between targets and many types of cultural clutter at the earliest stages of the ATR. In this paper we modify the EF feature so that one can 'tune' the size sensitivity to the specific targets of interest. We show how to optimize the EF feature using target chip data from the public MSTAR database. We demonstrate improvements in performance for FOA algorithms that include the new feature by comparing the receiver operating characteristic (ROC) curves for all possible combinations of FOA algorithms incorporating the EF, two-parameter CFAR, and variance features. Finally, we perform timing experiments on the fused detector to demonstrate the feasibility of implementing the detector in a real system.
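For illustration only, the following sketch shows one generic way to compare ROC curves for single and fused detection statistics; the rank-normalized AND-style fusion and the threshold sweep are assumptions, not the paper's method or features.

```python
# Illustrative sketch (not from the paper): ROC curves for single and fused
# detection statistics. Fusion here is a simple AND-style rule implemented as
# the minimum of per-feature scores after rank normalization.
import numpy as np

def roc_curve(scores, labels, n_points=100):
    """Probability of detection vs. false-alarm rate by sweeping a threshold."""
    thresholds = np.quantile(scores, np.linspace(0, 1, n_points))
    pd, pfa = [], []
    for t in thresholds:
        detected = scores >= t
        pd.append(np.mean(detected[labels == 1]))
        pfa.append(np.mean(detected[labels == 0]))
    return np.array(pfa), np.array(pd)

def rank_normalize(x):
    return np.argsort(np.argsort(x)) / (len(x) - 1)

def fuse_and(*feature_scores):
    """AND-like fusion: a sample scores high only if every feature scores high."""
    return np.min([rank_normalize(f) for f in feature_scores], axis=0)
```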
This paper investigates methods to improve template-based synthetic aperture radar (SAR) automatic target recognition (ATR). The approach uses clustering methods motivated by the vector quantization (VQ) literature to search for templates that best represent the signature variability of target chips. The ATR performance using these new templates is compared to the performance using standard templates. For the baseline SAR ATR, templates are generated over uniform angular bins in the pose space. A merge method is able to generate templates that provide a nonuniform sampling of the pose space, and these templates produce modest gains in ATR performance over the standard templates.
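A minimal sketch of the vector-quantization flavor of template design, assuming a k-means-style iteration on vectorized target chips; the distortion measure, initialization, and the paper's merge method are not reproduced here.

```python
# Sketch (assumption): forming SAR templates by clustering target chips with a
# k-means-style procedure, in the spirit of vector-quantization codebook design.
# Squared Euclidean distance on vectorized chips is an illustrative choice.
import numpy as np

def kmeans_templates(chips, n_templates, n_iter=20, seed=0):
    """chips: array of shape (n_chips, H, W); returns (n_templates, H, W)."""
    rng = np.random.default_rng(seed)
    data = chips.reshape(len(chips), -1).astype(float)
    centers = data[rng.choice(len(data), n_templates, replace=False)]
    for _ in range(n_iter):
        # assign each chip to its nearest template
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each template as the mean of its assigned chips
        for k in range(n_templates):
            if np.any(labels == k):
                centers[k] = data[labels == k].mean(axis=0)
    return centers.reshape((n_templates,) + chips.shape[1:])
```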
This paper investigates the use of Continuous Wavelet Transform (CWT) features for detection of targets in low-resolution FLIR imagery. We specifically use the CWT features corresponding to the integration of target features at all relevant scales and orientations. These features are combined with nonlinear transformations (thresholding, enhancement, morphological operations). We compare our previous results using the Mexican hat wavelet with those obtained using two types of directional wavelets: the Morlet wavelet and the Cauchy wavelets. The algorithm was tested on the TRIM2 database.
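The scale-integrated detection map can be sketched roughly as below, assuming an isotropic Mexican hat applied via the FFT and a simple quantile threshold in place of the paper's nonlinear and morphological post-processing; the scale set is illustrative.

```python
# Sketch (assumed formulation): a detection map built by summing the magnitude
# of the 2-D Mexican hat CWT over a set of scales, then thresholding. The
# directional (Morlet, Cauchy) variants are not shown.
import numpy as np

def mexican_hat_response(image, scales):
    f_hat = np.fft.fft2(image.astype(float))
    kx = np.fft.fftfreq(image.shape[1]) * 2 * np.pi
    ky = np.fft.fftfreq(image.shape[0]) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    total = np.zeros(image.shape)
    for a in scales:
        psi_hat = (a**2 * k2) * np.exp(-(a**2 * k2) / 2.0)  # isotropic Mexican hat
        total += np.abs(np.fft.ifft2(f_hat * psi_hat))
    return total

def detect(image, scales=(1, 2, 4, 8), quantile=0.99):
    resp = mexican_hat_response(image, scales)
    return resp > np.quantile(resp, quantile)   # candidate target mask
```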
In this work, we evaluate the robustness of template-matching schemes for automatic target recognition (ATR) against the effects of clutter layover. Our experiments characterize the performance of template-matching ATR in various image transform domains as a function of the signal-to-clutter ratio (SCR). The purpose of these transforms is to enhance the target features in a chip while suppressing features representative of background clutter or simple noise. The ATR experiments were performed on synthetic aperture radar imagery using target chips from the public-domain MSTAR database. The transforms include pointwise nonlinearities such as the logarithm and power operations. The templates are generated using the training portion of the MSTAR database at the nominal SCR. Many different ATR parameterizations are considered for each transform domain, with templates built to represent ranges of aspect angles in uniform angular bins of 5, 10, 15, 30, and 45 degrees. The different ATRs were evaluated using the testing portion of the database, to which synthetic clutter was added to lower the SCR.
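A hedged sketch of the template-matching step, assuming a pointwise log transform followed by normalized cross-correlation against stored templates; the actual transforms, pose binning, and scoring used in the paper may differ.

```python
# Minimal sketch: template-matching classification after a pointwise log
# transform, scored by normalized cross-correlation between a chip and each
# stored template (an illustrative, not definitive, formulation).
import numpy as np

def log_transform(chip, eps=1e-6):
    return np.log(np.abs(chip) + eps)        # compress SAR dynamic range

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify(chip, templates):
    """templates: dict mapping (target_class, pose_bin) -> template array."""
    x = log_transform(chip)
    scores = {key: ncc(x, log_transform(t)) for key, t in templates.items()}
    return max(scores, key=scores.get)        # best-matching class/pose pair
```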
Two texture-based features and one amplitude-based feature are evaluated as detection statistics for synthetic aperture radar (SAR) imagery. The statistics include a local variance, an extended fractal, and a two-parameter CFAR feature. The paper compares the effectiveness of focus of attention (FOA) algorithms built from any combination of the three statistics. The public MSTAR database is used to derive receiver operating characteristic (ROC) curves for the different detectors at various signal-to-clutter ratios (SCRs). The database contains one-foot-resolution X-band SAR imagery. The results indicate that the extended fractal statistic provides the best target/clutter discrimination, and the variance statistic is the most robust against SCR. In fact, the extended fractal statistic combines the intensity-difference information also used by the CFAR feature with the spatial extent of the higher-intensity pixels to produce an attractive detection statistic.
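Of the three statistics, the two-parameter CFAR is standard enough to sketch; the window sizes below are illustrative, and the variance and extended fractal features are not reproduced.

```python
# Sketch of the standard two-parameter CFAR statistic: for each test pixel, the
# statistic is (x - mu) / sigma, where mu and sigma are estimated from a
# surrounding ring of clutter pixels outside a guard area.
import numpy as np

def two_parameter_cfar(image, guard=8, clutter=16):
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    r = guard + clutter
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = image[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            mask = np.ones(window.shape, dtype=bool)
            mask[clutter:-clutter, clutter:-clutter] = False  # drop guard + test area
            mu, sigma = window[mask].mean(), window[mask].std()
            out[i, j] = (image[i, j] - mu) / (sigma + 1e-12)
    return out   # threshold this map to obtain CFAR detections
```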
This paper treats the compression of Synthetic Aperture Radar (SAR) imagery. SAR images are difficult to compress, relative to natural images, because SAR imagery contains inherent high-frequency speckle. Today's state-of-the-art coders are designed to work with natural images, which have lower frequency content; thus, their performance on SAR falls short. In this paper we give an overview of the performance of popular compression techniques, and investigate three approaches to improve the quality of SAR compression at low bit rates. First, we look at the design of optimal quantizers, which we obtain by training on SAR data. Second, we explore the use of perceptual properties of the human visual system to improve subjective coding quality. Third, we consider the use of a model that separates the SAR image into structural and textural components. The paper concludes with a subjective evaluation of the algorithms based on the CCIR recommendation for the assessment of picture quality.
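As one plausible reading of the optimal quantizers obtained by training on SAR data, the sketch below runs a Lloyd-Max iteration on training samples; the codebook size and initialization are illustrative assumptions.

```python
# Sketch (assumption): training a scalar quantizer on SAR coefficient samples
# with the Lloyd-Max iteration.
import numpy as np

def lloyd_max(samples, n_levels=16, n_iter=50):
    samples = np.sort(samples.astype(float).ravel())
    # initialize reconstruction levels at uniform quantiles of the training data
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        # decision boundaries: midpoints between adjacent reconstruction levels
        bounds = 0.5 * (levels[:-1] + levels[1:])
        cells = np.searchsorted(bounds, samples)
        # reconstruction levels: centroid (mean) of the samples in each cell
        for k in range(n_levels):
            if np.any(cells == k):
                levels[k] = samples[cells == k].mean()
    return levels, 0.5 * (levels[:-1] + levels[1:])

# To quantize new data x: idx = np.searchsorted(bounds, x); xq = levels[idx]
```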
In images, anomalies such as edges or object boundaries take on a perceptual significance that is far greater than their numerical energy contribution to the image. The wavelet transform highlights these anomalies by representing them with significant coefficients. The contribution of a wavelet coefficient to the perceptual quality of the image is related to its magnitude. Degradation in image quality due to compression manifests as a reduction in the magnitude of the wavelet coefficients. Since significant wavelet coefficients appear across different scales and orientations, it is important to observe the wavelet transform at different scales and orientations. In this paper, the wavelet transform of a given image and of the reconstructed images at various quality levels are represented in the form of the energy density plots suggested in Reference 1. A quality metric is proposed based on the absolute difference between the energy densities corresponding to the original and reconstructed images. Preliminary results obtained using the scale-based image quality evaluation strategy are reported.
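A simplified, assumption-laden version of such a metric is sketched below using a discrete wavelet decomposition in place of the paper's energy density plots; the per-scale normalization and the 'bior4.4' filter are illustrative choices.

```python
# Sketch (assumed formulation): a scale-based quality metric computed as the sum
# of absolute differences between per-scale wavelet energy densities of the
# original and reconstructed images.
import numpy as np
import pywt

def scale_energy_density(image, wavelet="bior4.4", levels=4):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # energy per detail scale, normalized by the number of coefficients
    return np.array([sum((b**2).sum() for b in detail) / sum(b.size for b in detail)
                     for detail in coeffs[1:]])

def quality_metric(original, reconstructed):
    e_orig = scale_energy_density(original)
    e_rec = scale_energy_density(reconstructed)
    return np.abs(e_orig - e_rec).sum()   # smaller means closer to the original
```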
This paper presents a method to evaluate image quality using the continuous wavelet transform. The method utilizes a bank of filters tuned to different scales and orientations to extract the image details. The filters are designed according to the criterion suggested by Antoine and Murenzi. The wavelet transform of a given image and the reconstructed images at various quality levels are represented in the form of energy density plots. These density plots highlight image features such as edges, object boundaries and texture. Thus, they represent the details contained in the image. A quality metric is proposed based on the absolute difference between the energy densities corresponding to the original and reconstructed images. The proposed metric is used to measure the relative quality of the image. In addition, the metric is also used to study the performance of a specific ATR algorithm as a function of image quality.
The increase in the number of multimedia databases consisting of images has created a need for a quick method to search these databases for a particular type of image. An image retrieval system outputs images from the database that are similar to the query image in terms of shape, color, and texture. Within the scope of our work, we study the performance of multiscale Hurst parameters as texture features for database image retrieval over a database consisting of homogeneous textures. These extended Hurst features represent a generalization of the Hurst parameter for fractional Brownian motion (fBm), where the extended parameters quantify the texture roughness of an image at various scales. We compare the retrieval performance of the extended parameters against traditional Hurst features and features obtained from the Gabor wavelet. Gabor wavelets have previously been suggested for image retrieval applications because they can be tuned to obtain texture information for a number of different scales and orientations. In our experiments, we form a database combining textures from the Bonn, Brodatz, and MIT VisTex databases. Over the hybrid database, the extended fractal features were able to retrieve a higher percentage of similar textures than the Gabor features. Furthermore, the fractal features are faster to compute than the Gabor features.
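A minimal sketch of multiscale Hurst-style features, assuming they are estimated from the log-variance of image increments at dyadic lags; the use of adjacent-scale slopes as the "extended" features is an assumption made for illustration, not the paper's exact definition.

```python
# Sketch (assumption): multiscale Hurst-style texture features from the variance
# of image increments at dyadic lags. For fBm, Var ~ lag^(2H), so the slope of
# log-variance vs. log-lag gives 2H; per-scale slopes give one feature per scale.
import numpy as np

def increment_log_variances(image, lags=(1, 2, 4, 8, 16)):
    image = image.astype(float)
    v = [np.var(image[:, lag:] - image[:, :-lag]) +
         np.var(image[lag:, :] - image[:-lag, :]) for lag in lags]
    return np.log2(np.asarray(lags)), np.log2(np.asarray(v))

def hurst_features(image, lags=(1, 2, 4, 8, 16)):
    x, y = increment_log_variances(image, lags)
    global_h = 0.5 * np.polyfit(x, y, 1)[0]        # single fBm-like Hurst exponent
    per_scale_h = 0.5 * np.diff(y) / np.diff(x)    # slopes between adjacent scales
    return global_h, per_scale_h                   # feature vector for retrieval
```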