We evaluate recently developed randomized matrix decomposition methods for fast lossless compression and reconstruction of hyperspectral imaging (HSI) data. Simple random projection methods have previously been shown to be effective for lossy compression without severely affecting object identification and classification performance. We build upon these methods to develop a new double-random projection method that may enable secure transmission of compressed data. For HSI data, the distribution of elements in the resulting residual matrix, i.e., the original data minus its low-rank representation, exhibits low entropy relative to the original data, which favors high compression ratios. We show both theoretically and empirically that randomized methods combined with residual-coding algorithms can lead to effective lossless compression of HSI data. Numerical tests on real large-scale HSI data demonstrate the promise of this approach. In addition, we show that randomized techniques are applicable to encoding on resource-constrained on-board sensor systems, where the core matrix-vector multiplications can be easily implemented on computing platforms such as graphics processing units or field-programmable gate arrays.
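A minimal sketch of the randomized low-rank-plus-residual idea described above, using NumPy. The matrix X stands in for an HSI cube flattened to (bands × pixels); the rank k and the random seed are illustrative choices, not values from the paper, and the double-projection and entropy-coding stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((128, 4096))          # placeholder HSI data, bands x pixels
k = 10                               # target rank (illustrative)

# Random projection: sketch the range of X with a Gaussian test matrix.
Omega = rng.standard_normal((X.shape[1], k))
Y = X @ Omega                        # sample the column space
Q, _ = np.linalg.qr(Y)               # orthonormal basis for the sketch

X_low = Q @ (Q.T @ X)                # rank-k representation
R = X - X_low                        # residual to be entropy-coded

# The residual concentrates near zero, so it is far more compressible than
# X itself; a lossless pipeline would store Q, Q.T @ X, and the coded R.
print("relative residual norm:", np.linalg.norm(R) / np.linalg.norm(X))
```

Because the heavy work is two matrix products, this encoder maps directly onto the GPU/FPGA implementations mentioned above.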
Nonnegative matrix factorization and its variants are powerful techniques for the analysis of hyperspectral images
(HSI). Nonnegative matrix underapproximation (NMU) is a recent closely related model that uses additional
underapproximation constraints enabling the extraction of features (e.g., abundance maps in HSI) in a recursive
way while preserving nonnegativity. We propose to further improve NMU by using spatial information: we incorporate into the model the fact that neighboring pixels are likely to contain the same materials. This approach thus incorporates structural and textural information from neighboring pixels. We use an ℓ1-norm penalty term, which is better suited to preserving sharp changes, and solve the corresponding optimization problem using iteratively reweighted least squares. The effectiveness of the approach is illustrated by an analysis of the real-world Cuprite dataset.
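A minimal sketch of the basic recursive rank-one NMU scheme (without this paper's spatial ℓ1 term), enforcing the underapproximation constraint xyᵀ ≤ M through a simple Lagrangian dual-ascent update. Initialization, step sizes, and iteration counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def nmu_rank_one(M, n_iter=200):
    x = np.maximum(M.mean(axis=1), 1e-9)
    y = np.maximum(M.mean(axis=0), 1e-9)
    Lam = np.zeros_like(M)                 # multipliers for xy^T <= M
    for t in range(1, n_iter + 1):
        A = M - Lam                        # shifted data
        # Nonnegative alternating least squares updates for x and y.
        x = np.maximum(A @ y, 0) / max(y @ y, 1e-12)
        y = np.maximum(A.T @ x, 0) / max(x @ x, 1e-12)
        # Dual ascent: penalize entries where xy^T exceeds M.
        Lam = np.maximum(Lam + (np.outer(x, y) - M) / t, 0)
    return x, y

rng = np.random.default_rng(1)
M = rng.random((50, 40))                    # placeholder pixels x bands data
factors = []
for _ in range(3):                          # extract features recursively
    x, y = nmu_rank_one(M)
    factors.append((x, y))
    M = np.maximum(M - np.outer(x, y), 0)   # nonnegative residual
```

The underapproximation constraint keeps each residual nonnegative, which is what allows the recursion to peel off one abundance map per step.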
This work describes numerical methods for the joint reconstruction and segmentation of spectral images
taken by compressive sensing coded aperture snapshot spectral imagers (CASSI). In a snapshot, a CASSI
captures a two-dimensional (2D) array of measurements that is an encoded representation of both spectral
information and 2D spatial information of a scene, resulting in significant savings in acquisition time and data
storage. The double-disperser coded aperture snapshot imager (DD-CASSI) captures an encoded hyperspectral image from which the original hyperspectral cube is recovered by solving a highly underdetermined inverse problem with regularization terms such as total variation minimization. The reconstruction process decodes the
2D measurements to render a three-dimensional spatio-spectral estimate of the scene, and is therefore an
indispensable component of the spectral imager. In this study, we seek a particular form of the compressed
sensing solution that assumes spectrally homogeneous segments in the two spatial dimensions, and greatly
reduces the number of unknowns. The proposed method generalizes popular active contour segmentation
algorithms such as the Chan-Vese model and also enables one to jointly estimate both the segmentation
membership functions and the spectral signatures of each segment. The results are illustrated on a simulated
Hubble Space Telescope hyperspectral dataset, a real urban hyperspectral dataset, and a real DD-CASSI image
in microscopy.
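A minimal sketch of the segment-based parameterization described above: the cube is modeled as a sum over segments of (spatial membership) × (spectral signature), so only the signatures need to be estimated once memberships are fixed, greatly reducing the number of unknowns. H, y, and all sizes are illustrative placeholders, not the actual CASSI forward model, and the joint estimation of memberships is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nband, nseg, nmeas = 64, 8, 3, 200

chi = rng.dirichlet(np.ones(nseg), size=npix)   # soft memberships per pixel
H = rng.standard_normal((nmeas, npix * nband))  # stand-in sensing operator

# cube(p, b) = sum_k chi(p, k) * s(k, b)  =>  y = H vec(cube) = A vec(s)
A = H @ np.kron(chi, np.eye(nband))             # measurements are linear in s

s_true = rng.random((nseg, nband))              # per-segment spectra
y = A @ s_true.ravel()

s_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # far fewer unknowns than voxels
print("signature error:", np.linalg.norm(s_hat - s_true.ravel()))
```

With nseg·nband unknowns instead of npix·nband, the inverse problem is overdetermined here rather than highly underdetermined.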
Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition
performance in the presence of non-ideal data. There are several advantages to extending the imaged area beyond the iris, yet there are also key issues that must be addressed, such as the size of the ocular region, factors affecting
performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify
some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is
performed where iris recognition methods are contrasted with texture and point operators on existing iris and
face datasets. The experimental results show a dramatic recognition performance gain when additional features
are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond
the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.
Non-negative matrix factorization (NMF) and its variants have recently been successfully used as dimensionality reduction techniques for identification of the materials present in hyperspectral images. We study a recently introduced variant of NMF called non-negative matrix underapproximation (NMU): it is based on the introduction of underapproximation constraints, which enables one to extract features in a recursive way, as principal component analysis does, while preserving non-negativity. We explain why these additional constraints make NMU particularly well suited to achieve a parts-based and sparse representation of the data, enabling it to recover the constitutive elements in hyperspectral images. Both ℓ2-norm and ℓ1-norm-based minimization of the energy functional are considered. We experimentally show the efficiency of this new strategy on hyperspectral images associated with space object material identification, and on HYDICE and related remote sensing images.
Nonnegative Matrix Factorization (NMF) and its variants have recently been successfully used as dimensionality
reduction techniques for identification of the materials present in hyperspectral images. In this paper, we present
a new variant of NMF called Nonnegative Matrix Underapproximation (NMU): it is based on the introduction
of underapproximation constraints, which enables one to extract features in a recursive way, like PCA, but preserving nonnegativity. Moreover, we explain why these additional constraints make NMU particularly well suited
to achieve a parts-based and sparse representation of the data, enabling it to recover the constitutive elements in hyperspectral data. We experimentally show the efficiency of this new strategy on hyperspectral images associated with space object material identification, and on HYDICE and related remote sensing images.
An important aspect of spectral image analysis is identification of materials present in the object or scene being
imaged. Enabling technologies include image enhancement, segmentation and spectral trace recovery. Since
multi-spectral or hyperspectral imagery is generally low resolution, it is possible for pixels in the image to
contain several materials. Also, noise and blur can present significant data analysis problems. In this paper,
we first describe a variational fuzzy segmentation model coupled with a denoising/deblurring model for material
identification. A statistical moving average method for segmentation is also described. These new approaches
are then tested and compared on hyperspectral images associated with space object material identification.
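As one concrete illustration of fuzzy membership functions for material segmentation, here is a standard fuzzy c-means sketch applied to pixel spectra. This is the generic FCM algorithm, not the variational model coupled with denoising/deblurring proposed above; cluster count, fuzzifier m, and the synthetic data are assumptions.

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=50, seed=3):
    """Fuzzy c-means on rows of X (pixels x bands)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])   # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted spectral means
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-12
        U = 1.0 / d2 ** (1.0 / (m - 1))               # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(mu, 0.1, (100, 5))          # three spectral groups
               for mu in (0.2, 0.5, 0.8)])
U, centers = fcm(X)
print("hard labels of first pixels:", U[:5].argmax(axis=1))
```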
An integrated array computational imaging system, dubbed PERIODIC, is presented that is capable of exploiting a diverse range of optical information, including sub-pixel displacements, phase, polarization, intensity, and wavelength. Several applications of this technology are presented, including digital superresolution, enhanced dynamic range, and multi-spectral imaging. Other applications include polarization-based dehazing, extended depth of field, and 3D imaging. The optical hardware system and software algorithms are described, and sample results are shown.
Digital super-resolution refers to computational techniques that exploit the generalized sampling theorem to
extend image resolution beyond the pixel spacing of the detector, but not beyond the optical limit (Nyquist
spatial frequency) of the lens. The approach to digital super-resolution taken by the PERIODIC multi-lenslet
camera project is to solve a forward model which describes the effects of sub-pixel shifts, optical blur, and
detector sampling as a product of matrix factors. The associated system matrix is often ill-conditioned, and
convergence of iterative methods to solve for the high-resolution image may be slow.
We investigate the use of pupil phase encoding in a multi-lenslet camera system as a means to physically
precondition and regularize the computational super-resolution problem. This is an integrated optical-digital
approach that has been previously demonstrated with cubic type and pseudo-random phase elements. Traditional
multi-frame phase diversity for imaging through atmospheric turbulence uses a known smooth phase perturbation
to help recover a time series of point spread functions corresponding to random phase errors. In the context of a
multi-lenslet camera system, a known pseudo-random or cubic phase error may be used to help recover an array
of unknown point spread functions corresponding to manufacturing and focus variations among the lenslets.
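A minimal 1-D sketch of the forward model described above, with detector sampling D, optical blur B, and a sub-pixel shift S written as explicit matrix factors. The kernels, sizes, and integer shift are illustrative stand-ins for the PERIODIC system's operators.

```python
import numpy as np
from scipy.linalg import circulant

n, factor = 64, 2
shift = np.zeros(n); shift[1] = 1                  # one-sample shift kernel
blur = np.zeros(n); blur[[0, 1, -1]] = [0.5, 0.25, 0.25]

S = circulant(shift)                               # (circular) shift operator
B = circulant(blur)                                # shift-invariant optical blur
D = np.kron(np.eye(n // factor), np.ones(factor) / factor)  # detector binning

A = D @ B @ S                                      # one low-res frame: y = A x
x = np.sin(np.linspace(0, 6 * np.pi, n))           # high-resolution scene
y = A @ x

# Stacking frames with different shifts yields the full system; its
# conditioning governs how fast iterative reconstruction converges,
# which motivates the physical preconditioning by pupil phase encoding.
print("cond(A A^T):", np.linalg.cond(A @ A.T))
```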
We investigate the use of a novel multi-lens imaging system in the context of biometric identification, and more
specifically, for iris recognition. Multi-lenslet cameras offer a number of significant advantages over standard
single-lens camera systems, including thin form-factor and wide angle of view. By using appropriate lenslet spacing
relative to the detector pixel pitch, the resulting ensemble of images implicitly contains subject information
at higher spatial frequencies than those present in a single image. Additionally, a multi-lenslet approach enables
the use of observational diversity, including phase, polarization, neutral density, and wavelength diversities. For
example, post-processing multiple observations taken with differing neutral density filters yields an image having
an extended dynamic range. Our research group has developed several multi-lens camera prototypes for the
investigation of such diversities.
In this paper, we present techniques for computing a high-resolution reconstructed image from an ensemble of
low-resolution images containing sub-pixel level displacements. The quality of a reconstructed image is measured
by computing the Hamming distance between the Daugman iris code of a conventional reference iris image,
and the iris code of a corresponding reconstructed image. We present numerical results concerning the effect of
noise and defocus blur in the reconstruction process using simulated data and report preliminary work on the
reconstruction of actual iris data obtained with our camera prototypes.
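A minimal sketch of the iris-code comparison used above to score reconstructions: the fractional Hamming distance between two binary codes, restricted to bits marked valid. The codes and masks here are random placeholders, not outputs of Daugman's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
code_ref = rng.integers(0, 2, 2048, dtype=np.uint8)   # reference iris code
code_rec = code_ref.copy()
code_rec[rng.random(2048) < 0.15] ^= 1                # ~15% disagreeing bits
valid = rng.random(2048) < 0.9                        # jointly unmasked bits

diff = (code_ref ^ code_rec).astype(bool)
hd = np.count_nonzero(diff & valid) / np.count_nonzero(valid)
print(f"fractional Hamming distance: {hd:.3f}")       # lower = closer match
```

Distances well below the ~0.32-0.35 decision region commonly cited for iris matching indicate a faithful reconstruction.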
Computational imaging systems are modern systems that consist of generalized aspheric optics and image processing capability. These systems can be optimized to greatly increase performance beyond that of systems consisting solely of traditional optics. Computational imaging technology can be used to advantage in iris recognition applications. A major difficulty in current iris recognition systems is a very shallow depth-of-field that limits system usability and increases system complexity. We first review some current iris recognition algorithms, and then describe computational imaging approaches to iris recognition using cubic phase wavefront encoding. These new approaches can greatly increase the depth-of-field over that possible with traditional optics while maintaining sufficient recognition accuracy. In these approaches, the combination of optics, detectors, and image processing all contribute to the iris recognition accuracy and efficiency. We describe different optimization methods for designing the optics and the image processing algorithms, and provide laboratory and simulation results, including restorations of the intermediate phase-encoded images using both direct Wiener filter and iterative conjugate gradient methods.
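A minimal FFT-based Wiener filter sketch, the direct restoration route mentioned above, for recovering a phase-encoded (blurred) image given the system PSF. The PSF and noise-to-signal ratio are illustrative stand-ins for a measured cubic-phase system response.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-2):
    """Direct Wiener deconvolution: H* / (|H|^2 + NSR) applied in frequency."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

rng = np.random.default_rng(6)
img = rng.random((64, 64))                          # placeholder scene
psf = np.zeros((64, 64)); psf[29:35, 29:35] = 1.0   # toy extended PSF
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred + 1e-3 * rng.standard_normal((64, 64)), psf)
```

An iterative conjugate gradient restoration (see the CGLS sketch later in this section) can substitute for the Wiener step when the blur is less well conditioned.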
The insertion of a suitably designed phase plate in the pupil of an imaging system makes it possible to encode the depth dimension of an extended three-dimensional scene by means of an approximately shift-invariant PSF. The so-encoded image can then be deblurred digitally by standard image recovery algorithms to recoup the depth-dependent detail of the original scene. A similar strategy can be adopted to compensate for certain monochromatic aberrations of the system. Here we consider two somewhat complementary approaches to optimizing the design of the phase plate: one is based on Fisher information and attempts to reduce the sensitivity of the phase-encoded image to misfocus; the other is based on a minimax formulation of the sum of singular values of the system blurring matrix and attempts to maximize the resolution in the final image. Comparisons of these two optimization approaches are discussed. Our preliminary demonstration of the use of such pupil-phase engineering to successfully control system aberrations, particularly spherical aberration, is also presented.
Automated iris recognition is a promising method for noninvasive verification of identity. Although it is noninvasive, the procedure requires considerable cooperation from the user. In typical acquisition systems, the subject must carefully position the head laterally to make sure that the captured iris falls within the field-of-view of the digital image acquisition system. Furthermore, the need for sufficient energy at the plane of the detector calls for a relatively fast optical system which results in a narrow depth-of-field. This latter issue requires the user to move the head back and forth until the iris is in good focus. In this paper, we address the depth-of-field problem by studying the effectiveness of specially designed aspheres that extend the depth-of-field of the image capture system. In this initial study, we concentrate on the cubic phase mask originally proposed by Dowski and Cathey. Laboratory experiments are used to produce representative captured irises with and without cubic asphere masks modifying the imaging system. The iris images are then presented to a well-known iris recognition algorithm proposed by Daugman. In some cases we present unrestored imagery and in other cases we attempt to restore the moderate blur introduced by the asphere. Our initial results show that the use of such aspheres does indeed relax the depth-of-field requirements even without restoration of the blurred images. Furthermore, we find that restorations that produce visually pleasing iris images often actually degrade the performance of the algorithm. Different restoration parameters are examined to determine their usefulness in relation to the recognition algorithm.
A novel and successful optical-digital approach for removing certain aberrations in imaging systems involves placing an optical mask between an image-recording device and an object to encode the wavefront phase before the image is recorded, followed by digital image deconvolution to decode the phase. We have observed that, when appropriately engineered, such an optical mask can also act as a form of preconditioner for certain deconvolution algorithms: it can boost information in the signal well above the noise level before the image is recorded, enabling digital restorations of very high quality. In this paper, we (1) examine the influence that a phase mask has on the incoming signal and how it subsequently affects the performance of restoration algorithms, and (2) explore the design of optical masks, a difficult nonlinear optimization problem with multiple design parameters, for removing certain aberrations and for maximizing restorability and information in recorded images.
By suitably phase-encoding optical images in the pupil plane and then digitally restoring them, one can greatly improve their quality. The use of a cubic phase mask originated by Dowski and Cathey to enhance the depth of focus in the images of 3-d scenes is a classic example of this powerful approach. By using the Strehl ratio as a measure of image quality, we propose tailoring the pupil phase profile by minimizing the sensitivity of the quality of the phase-encoded image of a point source to both its lateral and longitudinal coordinates. Our approach ensures that the encoded image will be formed under a nearly shift-invariant imaging condition, which can then be digitally restored to a high overall quality nearly free from the aberrations and limited depth of focus of a traditional imaging system. We also introduce an alternative measure of sensitivity that is based on the concept of Fisher information. In order to demonstrate the validity of our general approach, we present results of computer simulations that include the limitations imposed by detector noise.
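A minimal sketch of the sensitivity idea above: for a pupil phase consisting of a cubic mask plus defocus, the Strehl ratio can be approximated by |⟨exp(iφ)⟩|² over the pupil, and one can examine how it varies with the longitudinal (defocus) coordinate. The mask form and strength α are illustrative assumptions, not the optimized profiles of the paper.

```python
import numpy as np

def strehl(alpha, defocus, n=256):
    """Approximate Strehl ratio |mean exp(i*phi)|^2 over a unit pupil."""
    u = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(u, u)
    pupil = X**2 + Y**2 <= 1
    phi = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)  # cubic + defocus
    return np.abs(np.exp(1j * phi)[pupil].mean()) ** 2

# A stronger cubic mask lowers the peak Strehl ratio but flattens its
# variation with defocus, i.e., reduces sensitivity to the longitudinal
# coordinate, which is the property the design optimization targets.
for alpha in (0.0, 10.0):
    print(alpha, [round(strehl(alpha, d), 4) for d in (0.0, 1.0, 2.0)])
```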
Analytic determination of the 3D PSF of a tomosynthetic image volume
allows us to solve the ill-posed inverse problem of recovering an
image volume from a nonspecific orientation of projection views.
To restore these inherently blurred images from tuned-aperture
reconstructed volumes, we consider three approaches: direct inversion
via 3D Wiener filter restoration, regularized iterative 3D conjugate
gradient least squares, and regularized nonlinear iterative 3D
modified residual norm steepest descent with nonnegativity
constraints. From these tests we infer that all three methods
produce adequate restorations, while the nonlinear, nonnegatively constrained iterative algorithm appears to produce especially
good restorations for this problem in an efficient and stable way.
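A minimal CGLS sketch (conjugate gradients applied implicitly to the normal equations), the second of the three restoration approaches above, with early stopping playing the role of regularization. The operator A is a generic dense stand-in for the 3D blurring matrix.

```python
import numpy as np

def cgls(A, b, n_iter=30):
    """Conjugate gradient least squares for min ||b - A x||_2."""
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A x
    s = A.T @ r                  # gradient of the least squares functional
    p = s.copy()
    for _ in range(n_iter):
        q = A @ p
        alpha = (s @ s) / (q @ q)
        x += alpha * p
        r -= alpha * q
        s_new = A.T @ r
        beta = (s_new @ s_new) / (s @ s)
        p = s_new + beta * p
        s = s_new
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((120, 80))          # stand-in blurring operator
x_true = rng.random(80)
x_hat = cgls(A, A @ x_true + 0.01 * rng.standard_normal(120))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The nonnegatively constrained MRNSD variant additionally projects iterates onto the nonnegative orthant, which the study above found especially effective.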
Image restoration is a procedure characterized by ill-posedness, ill-conditioning, and non-uniqueness of the solution in the presence of noise. Iterative numerical methods have gained much attention for solving these inverse problems. Among such methods, minimal variance or least squares approaches are widely used and often generate good results at a reasonable computational cost via iterative optimization of the associated cost functional. In this paper, a new regularization method obtained by minimizing the autocorrelation function of the residuals is proposed. Several numerical tests using the BFGS nonlinear optimization method are reported, and comparisons to the classical Tikhonov regularization method are given. The results show that this method gives competitive restorations and is not sensitive to the regularization weighting parameter. Furthermore, a comprehensive procedure for image restoration is proposed by introducing a modified version of the Mumford-Shah model, which is often used in image segmentation. This approach shows promising improvement in restoration quality.
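A minimal sketch of the residual-autocorrelation idea: penalize nonzero off-lag autocorrelations of the residual r = b - Ax so that the residual resembles white noise, and minimize with BFGS. The weight, lag range, and problem sizes are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
A = rng.standard_normal((100, 60))            # stand-in blurring operator
x_true = rng.random(60)
b = A @ x_true + 0.05 * rng.standard_normal(100)

def objective(x, lam=1.0, max_lag=10):
    r = b - A @ x
    fit = r @ r                               # least squares data fit
    # off-lag autocorrelations of the residual (zero for white noise)
    ac = np.array([r[:-k] @ r[k:] for k in range(1, max_lag + 1)])
    return fit + lam * (ac @ ac)

res = minimize(objective, np.zeros(60), method="BFGS")
print("recovery error:", np.linalg.norm(res.x - x_true))
```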
A study is made of a non-smooth optimization problem arising in adaptive optics, which involves the real-time control of a deformable mirror designed to compensate for atmospheric turbulence and other dynamic image degradation factors. One formulation of this problem yields a functional $f(U) = \sum_{i=1}^{n} \max_j \,[(U^T M_j U)_{ii}]$ to be maximized over orthogonal matrices U for a fixed collection of n × n symmetric matrices M_j. We consider first the situation, which can arise in practical applications, where the matrices M_j are nearly pairwise commutative. Besides giving useful bounds, results for this case lead to a simple corollary providing a theoretical closed-form solution for globally maximizing f if the M_j are simultaneously diagonalizable. However, even here, conventional optimization methods for maximizing f are not practical in a real-time environment. The general optimization problem is quite difficult and is approached using a heuristic Jacobi-like algorithm. Numerical tests indicate that the algorithm provides an effective means to optimize performance for some important adaptive-optics systems.
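A minimal sketch evaluating the functional above and checking, consistent with the closed-form result mentioned, that a common eigenvector basis attains the maximum when the M_j are simultaneously diagonalizable. Sizes and matrices are synthetic.

```python
import numpy as np

def f(U, Ms):
    """f(U) = sum_i max_j (U^T M_j U)_{ii}."""
    diags = np.stack([np.diag(U.T @ M @ U) for M in Ms])  # shape: j x i
    return diags.max(axis=0).sum()

rng = np.random.default_rng(9)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # shared eigenvectors
Ms = [Q @ np.diag(rng.random(n)) @ Q.T for _ in range(3)]

U0, _ = np.linalg.qr(rng.standard_normal((n, n)))   # arbitrary orthogonal U
print("random U:         ", f(U0, Ms))
print("common eigenbasis:", f(Q, Ms))               # attains the maximum
```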
We consider two general procedures for constructing the nearest approximation of a given matrix by one with any lower rank and any linear structure. Nearness can be measured in any matrix norm. Structured low-rank matrices arise in various applications, including image enhancement and model reduction. In practice, the empirical data collected in the matrix often either do not maintain the specified structure or do not induce the desired rank. It is therefore an important task to search for the nearest structured low-rank approximation of a given matrix. The techniques developed in this paper can easily be implemented for numerical computation. In particular, it is shown that the computations can be approached using efficient optimization packages. The special case of Toeplitz structure with the Frobenius matrix norm is discussed in detail to illustrate the ideas, and numerical tests are reported. The procedures developed herein can be generalized to consider a much broader range of approximation problems.
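A minimal alternating-projections sketch (often attributed to Cadzow) for the Toeplitz/Frobenius special case: alternate between the nearest rank-k matrix (SVD truncation) and the nearest Toeplitz matrix (diagonal averaging). This is one simple approach to the problem, not necessarily the optimization procedures developed in the paper.

```python
import numpy as np

def nearest_rank_k(A, k):
    """Nearest rank-k matrix in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def nearest_toeplitz(A):
    """Nearest Toeplitz matrix in the Frobenius norm: average each diagonal."""
    n = A.shape[0]
    T = np.empty_like(A)
    for d in range(-n + 1, n):
        T[np.eye(n, k=d, dtype=bool)] = np.diagonal(A, d).mean()
    return T

rng = np.random.default_rng(10)
X = rng.random((8, 8))
for _ in range(100):                     # alternate between the two sets
    X = nearest_toeplitz(nearest_rank_k(X, 3))
print("rank of result:", np.linalg.matrix_rank(X, tol=1e-8))
```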
Real-time adaptive optics is a means for enhancing the resolution of ground-based optical telescopes beyond the limits previously imposed by the turbulent atmosphere. One approach for linear performance modeling of closed-loop adaptive-optics systems involves calculating very large covariance matrices whose components can be represented by sums of Hankel-transform-based integrals. In this paper we investigate approximate matrix factorizations of discretizations of such integrals. Two different approximate factorizations based upon representations of the underlying Bessel function are given: the first uses a series representation due to Ellerbroek, and the second an integral representation. The factorizations enable fast methods for both computing and applying the covariance matrices. For example, in the case of an equally spaced grid, it is shown that applying the approximated covariance matrix to a vector can be accomplished using the derived integral-based factorization involving a 2D fast cosine transform and a 2D separable fast multipole method. The total work is then O(N log N), where N is the dimension of the covariance matrix, in contrast to the usual O(N²) matrix-vector multiplication complexity. Error bounds exist for the matrix factorizations. We provide some simple computations to illustrate the ideas developed in the paper.
Phase diversity is a technique for obtaining estimates of both the object and the phase by exploiting the simultaneous collection of two short-exposure optical images, one of which has been formed by further blurring the conventional image in some known fashion. We describe a regularized variant of the Gauss-Newton optimization method for phase-diversity-based estimation when a Gaussian likelihood fit-to-data criterion is applied. Simulation studies are provided to demonstrate that the method is remarkably robust and numerically efficient.
This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Removing a linear, shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. Theoretical results are established to show that fast convergence for our iterative algorithm can be expected, and test results are reported for a ground-based astronomical imaging problem.
In this paper, we propose a method to generalize Strang's circulant preconditioner to arbitrary n-by-n matrices A_n. The [n/2]th column of our circulant preconditioner S_n is equal to the [n/2]th column of the given matrix A_n; thus, if A_n is a square Toeplitz matrix, then S_n is just the Strang circulant preconditioner. When S_n is not Hermitian, our circulant preconditioner can be defined as $(S_n^* S_n)^{1/2}$. This construction is similar to the forward-backward projection method used in constructing preconditioners for tomographic inversion problems in medical imaging. Comparisons of our preconditioner S_n with other circulant-based preconditioners are carried out for some 1D Toeplitz least squares problems $\min \|b - Ax\|_2$. Preliminary numerical results show that S_n performs quite well. Test results are also reported for a 2D deconvolution problem arising in ground-based atmospheric imaging.
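A minimal sketch of the construction above for the Toeplitz case: build a circulant S whose defining column comes from the middle column of A, and apply its inverse with FFTs. The Hermitian fix-up $(S_n^* S_n)^{1/2}$ and the embedding into a PCG solver are omitted; the test matrix is illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

n = 64
A = toeplitz(0.5 ** np.arange(n))       # symmetric test Toeplitz matrix

mid = n // 2
col = np.roll(A[:, mid], -mid)          # middle column, rotated to position 0:
                                        # gives the Strang choice t_k, |k| < n/2
eig = np.fft.fft(col)                   # eigenvalues of the circulant S

def apply_S_inv(v):
    """S^{-1} v via FFT diagonalization: S = F^{-1} diag(eig) F."""
    return np.real(np.fft.ifft(np.fft.fft(v) / eig))

rng = np.random.default_rng(11)
v = rng.standard_normal(n)
# If S approximates A well, A S^{-1} v stays close to v, so PCG converges fast.
print("preconditioned residual:", np.linalg.norm(A @ apply_S_inv(v) - v))
```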
The performance of a closed loop adaptive optics system may in principle be improved by selecting distinct and independently optimized control bandwidths for separate components, or modes, of the wave front distortion profile. In this paper we outline a method for synthesizing and optimizing a multi-bandwidth adaptive optics control system from performance estimates previously derived for single-bandwidth control systems operating over a range of bandwidths. Numerical results are presented for use of an atmospheric turbulence profile consisting of a single translating phase screen with Kolmogorov statistics, a Shack-Hartmann wave front sensor with 8 subapertures across the aperture of the telescope, and a continuous facesheet deformable mirror with actuators conjugate with the corners of the wave front sensor subapertures. The use of multiple control bandwidths significantly relaxes the wave front sensor noise level allowed for the adaptive optics system to operate near the performance limit imposed by fitting error. Nearly all of this reduction is already achieved through the use of a control system utilizing only two distinct bandwidths, one of which is the zero bandwidth.
Pure nuclear quadrupole resonance (NQR) of 14N nuclei is quite promising as a method for detecting explosives such as RDX and contraband narcotics such as cocaine and heroin in quantities of interest. Pure NQR is conducted without an external applied magnetic field, so potential concerns about damage to magnetically encoded data or exposure of personnel to large magnetic fields are not relevant. Because NQR frequencies of different compounds are quite distinct, we do not encounter false alarms from the NQR signals of other benign materials. We have constructed a proof-of-concept NQR explosives detector which interrogates a volume of 300 liters (10 ft3). With minimal modification to the existing explosives detector, we can detect operationally relevant quantities of (free base) cocaine within the 300-liter inspection volume in 6 seconds. We are presently extending this approach to the detection of heroin base and also examining 14N and 35,37Cl pure NQR for detection of the hydrochloride forms of both materials. An adaptation of this NQR approach may be suitable for scanning personnel for externally carried contraband and explosives. We first outline the basics of the NQR approach, highlighting strengths and weaknesses, and then present representative results for RDX and cocaine detection. We also present a partial compendium of relevant NQR parameters measured for some materials of interest.
Discretized 2-D deconvolution problems arising, e.g., in image restoration and seismic tomography, can be formulated as least squares computations, $\min_x \|b - Tx\|_2$, where T is often a large-scale rectangular Toeplitz-block matrix. We consider solving such block least squares problems by the preconditioned conjugate gradient algorithm using square nonsingular circulant-block and related preconditioners, constructed from the blocks of the rectangular matrix T. Preconditioning with such matrices allows efficient implementation using the 1-D or 2-D Fast Fourier Transform (FFT). It is well known that the resolution of ill-posed deconvolution problems can be substantially improved by regularization to compensate for their ill-posed nature. We show that regularization can easily be incorporated into our preconditioners, and we report on numerical experiments on a Cray Y-MP. The experiments illustrate good convergence properties of these FFT-based preconditioned iterations.
We study fast preconditioned conjugate gradient (PCG) methods for solving least squares problems $\min \|b - Tx\|_2$, where T is an m × n Toeplitz matrix of rank n. Two circulant preconditioners are suggested: one, denoted by P, is based on a block partitioning of T, and the other, denoted by N, is based on the displacement representation of $T^T T$. Each is obtained without forming $T^T T$. We prove formally that, for a wide class of problems, the PCG method with P converges in a small number of iterations independent of m and n, so that the computational cost of solving such Toeplitz least squares problems is O(m log n). Numerical experiments using both P and N are reported, indicating similar good convergence properties for each preconditioner.
Estimates for the condition number of a matrix are useful in many areas of scientific computing, including recursive least squares computations, optimization, eigenanalysis, and general nonlinear problems solved by linearization techniques where matrix modification techniques are used. The purpose of this paper is to propose an adaptive Lanczos estimator scheme, which we call ALE, for tracking the condition number of the modified matrix over time. Applications to recursive least squares (RLS) computations using the covariance method with sliding data windows are considered. ALE is fast for the relatively small n-parameter problems arising in RLS methods in control and signal processing, and it is adaptive over time, i.e., estimates at time t are used to produce estimates at time t + 1. Comparisons are made with other adaptive and non-adaptive condition estimators for recursive least squares problems. Numerical experiments are reported indicating that ALE yields a very accurate recursive condition estimator.
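A minimal sketch of adaptive extreme-singular-value tracking in the spirit described above: warm-start the iterations for each sliding window with the previous window's estimates. This generic power-iteration illustration stands in for the paper's Lanczos-based ALE scheme; window sizes and data are assumptions.

```python
import numpy as np

def track_cond(A, v_max, v_min, n_steps=5):
    """Refine extreme singular vector estimates of A, warm-started."""
    G = A.T @ A
    Ginv = np.linalg.inv(G)                  # n is small in RLS problems
    for _ in range(n_steps):
        v_max = G @ v_max;  v_max /= np.linalg.norm(v_max)
        v_min = Ginv @ v_min; v_min /= np.linalg.norm(v_min)
    smax = np.sqrt(v_max @ G @ v_max)        # ~ largest singular value
    smin = np.sqrt(v_min @ G @ v_min)        # ~ smallest singular value
    return smax / smin, v_max, v_min

rng = np.random.default_rng(12)
data = rng.standard_normal((200, 8))
v1 = v2 = np.ones(8)
for t in range(3):                           # windows of 100 rows, sliding by 50
    A = data[t * 50: t * 50 + 100]
    kappa, v1, v2 = track_cond(A, v1, v2)
    print(f"window {t}: estimate {kappa:.2f}, true {np.linalg.cond(A):.2f}")
```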