This paper addresses the fundamental performance limits of object reconstruction methods that use intensity interferometry measurements. It shows examples of reconstructed objects obtained with the FIIRE (Forward-model Interferometry Image Reconstruction Estimator) code developed by Boeing for AFRL, and considers various issues that arise when calculating the multidimensional Cramér-Rao lower bound (CRLB) when the Fisher information matrix (FIM) is singular. In particular, when comparing FIIRE performance, characterized as the root-mean-square difference between the estimated and pristine objects, with the CRLB, we found the unexpected result that FIIRE performance improved as the singularity became worse. For an invertible FIM, FIIRE yielded a lower root-mean-square error than the square root of the CRLB (by a factor as large as 100). This may be due to the various regularization constraints (positivity, support, sharpness, and smoothness) included in FIIRE, which render it a biased estimator, whereas the CRLB framework used assumes an unbiased estimator. Using the sieve technique to mitigate the false high-frequency content inherent in point-by-point object reconstruction methods, we also show further improved FIIRE performance on some generic objects. It is worth noting that since FIIRE is an iterative algorithm that searches for an object estimate consistent with the collected data and the various constraints, an initial object estimate is required. In our case, we used a completely random initial guess consisting of a 2-D array of uniformly distributed random numbers, sometimes multiplied by a 2-D Gaussian function.
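As a minimal sketch of the singular-FIM issue discussed above (the matrix, its values, and the tolerance are illustrative, not taken from the paper), the diagonal of a CRLB can still be computed from a rank-deficient Fisher information matrix by replacing the inverse with the Moore-Penrose pseudoinverse:

```python
import numpy as np

def crlb_diagonal(fim, rcond=1e-12):
    """Per-parameter variance bound from a possibly singular Fisher
    information matrix, via the Moore-Penrose pseudoinverse."""
    return np.diag(np.linalg.pinv(fim, rcond=rcond))

# Toy rank-deficient FIM: the first two parameters are confounded
# (their 2x2 block has rank 1); the third is independent with
# information 4, giving a variance bound of 0.25.
fim = np.array([[2.0, 1.0, 0.0],
                [1.0, 0.5, 0.0],
                [0.0, 0.0, 4.0]])
bounds = crlb_diagonal(fim)
```

The pseudoinverse bounds only the components of the parameter vector lying in the row space of the FIM; directions in its null space are unconstrained by the data, which is one way a "worse" singularity can make bound-vs-estimator comparisons counterintuitive.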
Many imaging techniques provide measurements proportional to the Fourier magnitudes of an object, from which one attempts to form an image. One such technique is intensity interferometry, which measures the squared Fourier modulus. Intensity interferometry is a synthetic-aperture approach known to obtain high-spatial-resolution information, and it is effectively insensitive to degradation from atmospheric turbulence. These benefits are offset by an intrinsically low signal-to-noise ratio (SNR). Forward models have been theoretically shown to give the best performance for many imaging approaches. Phase retrieval, on the other hand, is designed to reconstruct an image from Fourier-plane magnitudes and object-plane constraints. It is therefore natural to ask, "How well does phase retrieval perform compared to forward models in cases of interest?" Image reconstructions are presented for both techniques in the presence of significant noise, and preliminary conclusions are drawn for attainable resolution versus DC SNR.
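The phase-retrieval side of such a comparison can be illustrated with the classic error-reduction iteration (a generic textbook variant, not necessarily the specific algorithm used in this work), which alternately enforces the measured Fourier magnitudes and object-plane support/positivity constraints:

```python
import numpy as np

def error_reduction(measured_mag, support, n_iter=200, seed=0):
    """Fienup-style error-reduction phase retrieval: alternate between
    the Fourier-magnitude constraint and object-plane constraints."""
    rng = np.random.default_rng(seed)
    g = rng.random(measured_mag.shape) * support     # random initial guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = measured_mag * np.exp(1j * np.angle(G))  # impose measured magnitudes
        g = np.real(np.fft.ifft2(G))
        g = np.where(support & (g > 0), g, 0.0)      # support and positivity
    return g
```

Error reduction is known to stagnate on difficult data; variants such as hybrid input-output are typically used in practice, but the alternating-projection structure above is the common core.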
Many imaging modalities measure the magnitudes of the Fourier components of an object. Reconstructing an image from such data is especially challenging when the data are also noisy and sparse, as may occur in some forms of intensity interferometry, Fourier telescopy, and speckle imaging. In such measurements the Fourier magnitudes must be positive and, under the usual normalization in which the magnitude at zero spatial frequency in the (u,v) plane is unity, must also be less than one. The Cramér-Rao formalism is applied to single Fourier-magnitude measurements to ascertain whether a reduction in variance is possible given these constraints. An extension of the Cramér-Rao formalism is used to address the value of relatively general prior information. The impact of this knowledge is also shown for simulated image formation for a simple disk, with varying measurement SNR and sampling in the (u,v) plane.
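The effect of the 0 ≤ magnitude ≤ 1 constraint on estimator variance can be illustrated with a simple Monte Carlo sketch (the Gaussian noise model and the numbers below are purely illustrative): clipping a single magnitude measurement near the positivity boundary yields a sample variance below the unconstrained CRLB, at the cost of introducing bias.

```python
import numpy as np

rng = np.random.default_rng(1)
m_true, sigma = 0.05, 0.1       # true magnitude near the positivity boundary
y = m_true + sigma * rng.standard_normal(200_000)   # noisy measurements

crlb = sigma**2                           # unconstrained bound: 1 / Fisher info
var_unconstrained = np.var(y)             # raw estimator attains the bound
var_constrained = np.var(np.clip(y, 0.0, 1.0))  # enforce 0 <= m <= 1
```

Here `var_constrained` falls below the unconstrained CRLB precisely because the clipped estimator is biased; the unbiased-estimator bound no longer applies, which is the situation the extended formalism in the paper is designed to handle.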
Super-resolution image reconstruction enhances images in a video sequence beyond the original pixel resolution of the imager. Difficulty arises when foreground objects move differently than the background; a common example is a car in motion in a video. Given the common occurrence of such situations, super-resolution reconstruction becomes non-trivial. One method for dealing with this is to segment out foreground objects and quantify their pixel motion separately. First, we estimate local pixel motion using a standard block-motion algorithm common to MPEG encoding. This motion is then combined with the image itself in a five-dimensional mean-shift, kernel-density-estimation image segmentation that mixes motion and color image features. The result is a tight segmentation of objects in terms of both motion and visible image features. The next step is to combine segments into a single master object: statistically common motion and proximity are used to merge segments into master objects. To account for inconsistencies that can arise when tracking objects, we compute statistics over each object and fit them with a generalized linear model. The Kullback-Leibler divergence then provides a metric for the goodness of an object's track between frames.
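One way to realize a Kullback-Leibler track-goodness score of the kind described above, assuming (purely for illustration, in place of the paper's generalized-linear-model fit) that each frame's object statistics are summarized as a multivariate Gaussian, is the closed-form Gaussian KL divergence:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL divergence D(N0 || N1) between two multivariate Gaussians.
    Used here as a track-goodness score between object statistics in
    consecutive frames: smaller means a more consistent track."""
    k = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```

The divergence is zero only when the two fitted distributions coincide, so a per-frame threshold on it gives a natural test for declaring a track broken.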
An image reconstruction approach is developed that makes joint use of image sequences produced by a conventional imaging channel and a Shack-Hartmann (lenslet) channel. Iterative maximization techniques are used to determine the reconstructed object that is most consistent with both the conventional and Shack-Hartmann raw pixel-level data. The algorithm is analogous to phase diversity, but with the wavefront diversity provided by a lenslet array rather than a simple defocus. The log-likelihood cost function is matched to the Poisson statistics of the signal and the Gaussian statistics of the detector noise. Addition of a cost term that encourages the estimated object to agree with a priori knowledge of an ensemble-averaged power spectrum regularizes the reconstruction. Techniques for modeling focal plane array (FPA) sampling are developed that are convenient for performing both the forward simulation and the gradient calculations needed for the iterative maximization. The model is computationally efficient and accurately addresses all aspects of the Shack-Hartmann sensor, including subaperture cross-talk, FPA aliasing, and geometries in which the number of pixels across a subaperture is not an integer. The performance of this approach is compared with multiframe blind deconvolution and phase diversity using simulations of image sequences produced by the visible-band GEMINI sensor on the AMOS 1.6-meter telescope. It is demonstrated that the wavefront information provided by the second channel improves image reconstruction by avoiding the wavefront ambiguities associated with multiframe blind deconvolution and, to a lesser degree, phase diversity.
Image restoration algorithms compensate for blur-induced attenuation of the frequency components that correspond to fine-scale image features. However, for Fourier spatial-frequency components with low signal-to-noise ratio, noise amplification outweighs the benefit of compensation, and regularization methods are required. This paper investigates a generalization of the Wiener filter, developed as a maximum a posteriori (MAP) estimator based on statistical expectations of the object power spectrum. The estimate is also required to agree with physical properties of the system, specifically object positivity and Poisson noise statistics. These additional requirements preclude a closed-form expression; instead, the solution is determined by an iterative approach. Incorporation of the additional constraints results in significant improvement in the mean-square error and in visual interpretability. Equally important, it is shown that the performance has weak sensitivity to the weight of the prior over a large range of SNR values, blur strengths, and object morphologies, greatly facilitating practical use in an operational environment.
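For reference, the closed-form Wiener filter that the iterative, positivity-constrained estimator generalizes can be sketched as follows (function and variable names are illustrative; `obj_psd` and `noise_psd` are the assumed object and noise power spectra):

```python
import numpy as np

def wiener_restore(blurred, psf, obj_psd, noise_psd):
    """Closed-form Wiener restoration. `psf` is centered and the same
    size as `blurred`; `obj_psd`/`noise_psd` may be arrays or scalars."""
    H = np.fft.fft2(np.fft.ifftshift(psf))  # optical transfer function
    # Frequency-by-frequency gain: attenuates components where the
    # noise power dominates the blurred object power.
    W = np.conj(H) * obj_psd / (np.abs(H)**2 * obj_psd + noise_psd)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))
```

The iterative generalization described in the abstract replaces this one-shot frequency-domain division with an optimization that additionally enforces object positivity and the Poisson noise model, which cannot be expressed as a single linear filter.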
KEYWORDS: Staring arrays, Wavefronts, Sensors, Image processing, Point spread functions, Data modeling, Telescopes, Signal to noise ratio, Image acquisition, Convolution
Ideally, phase diversity determines the object and wavefront that are consistent with two images taken identically except that the wavefront of the diversity channel is perturbed by a known additive aberration. In practice, other differences may occur, such as image rotation, magnification, changes in detector response, and non-common image motion. This paper develops a mathematical forward model for addressing magnification changes and a corresponding maximum-likelihood implementation of phase diversity. Performance using this physically correct forward model is compared with the simpler approach of resampling the data of the diversity channel.
Conventional adaptive optics methods that control the phase control element using least-squares reconstructions of the measured residual phase error exhibit poor performance as scintillation becomes strong. This paper compares the performance of various closed-loop control methods for different phase sensor types (self-referencing, shearing, and Shack-Hartmann interferometers) and for both conventional and segmented piston-type deformable mirrors (DMs). Significant performance improvements are demonstrated using a weighted least-squares reconstructor that adaptively optimizes the weights at each frame based on the intensities associated with each phase-difference measurement and their sums around closed loops. Although the reconstructors considered do not explicitly place branch cuts in the reconstructed residual phase, branch-cut-like features can appear in both the single-frame reconstructions and the closed-loop actuator commands. It is also found that at higher Rytov numbers, segmented piston-type DMs outperform conventional deformable mirrors; we believe conventional DMs suffer a fitting error associated with branch cuts in the actuator commands, to which piston-type DMs are immune. Performance trends corresponding to self-referencing interferometers provide a useful benchmark since, unlike those of Shack-Hartmann and shearing interferometers, their phase measurements are not corrupted by scintillation effects.
A methodology for analyzing an imaging sensor's ability to assess target properties is developed. By applying Cramér-Rao covariance analysis to a statistical model relating the sensor measurements to the target, a bound on the accuracy with which target properties can be estimated can be calculated. Such calculations are important for understanding how a sensor's design affects its performance on a given assessment task, and for performing feasibility or trade studies between sensor designs and sensing modalities. A novel numerical model relating a sensor's measurements to a target's three-dimensional geometry is developed in order to overcome difficulties in accurately performing the required numerical computations. An example use of the approach is presented in which the influence of viewing perspective on orientation accuracy limits is analyzed. The example is also used to examine the potential for improving the accuracy bound by fusing multi-perspective data.
A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described that can provide image enhancement beyond the sampling resolution of the sensor. The method is demonstrated on simulated imagery and on adaptive-optics-compensated imagery, taken by the Starfire Optical Range 3.5-meter telescope, that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest-quality optical imagery of low-Earth-orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptively compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss that occurs when imaging in wide field-of-view modes.
Using the estimate-maximize (EM) technique for maximum-likelihood estimation, a multiframe generalization of the Richardson-Lucy algorithm is derived that encompasses additive Poisson noise sources in addition to source-dependent photon noise. This enables the estimation algorithm to properly treat situations in which the signal is similar in strength to noise sources such as background radiation and dark current. Simulations are used to investigate the level of restoration performance that may be expected at various noise-source strengths.
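A minimal sketch of a multiframe Richardson-Lucy update with a known additive background is shown below, assuming unit-sum PSFs and circular (FFT-based) convolution; the details of the derivation in the paper may differ:

```python
import numpy as np

def _conv(a, h):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(h)))

def _corr(a, h):
    """Circular correlation (the adjoint of _conv)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(h))))

def rl_multiframe(frames, psfs, background, n_iter=50):
    """Multiframe Richardson-Lucy with known additive background b:
    f <- f * (1/K) * sum_k H_k^T [ d_k / (H_k f + b) ]."""
    f = np.full(frames[0].shape, float(np.mean(frames)))  # flat initial guess
    for _ in range(n_iter):
        ratio = np.zeros_like(f)
        for d, h in zip(frames, psfs):
            ratio += _corr(d / (_conv(f, h) + background), h)
        f *= ratio / len(frames)
    return f
```

The additive `background` term in the denominator is what distinguishes this from the standard Richardson-Lucy update: when the background dominates, the multiplicative correction is damped rather than amplifying noise in faint regions.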
Reconstructed scene intensity distributions obtained with three different iterative reconstruction methods (phase diversity, deconvolution, and iterative blind deconvolution) are presented. For images degraded with as much as a quarter wavelength of aberration and a signal-to-noise ratio of 10, we show that the correlation between the 'truth' scene and the reconstructed scene is 0.9761 for deconvolution, 0.9680 for phase diversity, and 0.9169 for iterative blind deconvolution. The correlation coefficient becomes even higher as the noise level and the aberration strength decrease. In spite of the sometimes severe edge effects, we show that these algorithms, as adapted by our group, yield relatively good reconstructed objects as determined visually and by peak-correlation-coefficient comparison. The success of these adapted algorithms on extended scenes makes them potentially useful in imaging with degraded optical systems.
Two methods for restoration of images of coherently illuminated objects or coherent sources using multiple short-exposure images are proposed. The first method is based on the same concepts as the Knox-Thompson technique used for incoherent imaging. Sampling resolution requirements are calculated as a function of turbulence strength and lens size. The second method uses a probability maximization technique to form an estimate of the object based on the images taken and knowledge of the statistical nature of atmospheric turbulence. Both techniques require phase information of the electric field in the image plane. Methods of obtaining this information are also discussed.
Effects of weak turbulence on images of coherent sources or coherently illuminated objects, taken with exposure times much greater than the turbulence's time constant, are examined using the extended Fresnel principle. Two cases are considered: one in which the turbulent medium fills the region between the imaging system and the object, and one in which the turbulence occurs as a phase screen directly before the object. Assuming the Kolmogoroff spectrum for the index-of-refraction fluctuations and specifying a modulated Gaussian form for the object, a closed-form result is reached that illustrates the effect of the turbulence on the image in a conceptually simple manner. A loss of resolution is found, manifesting itself as an effective reduction of the lens size. A simple relation is derived between the effective lens size, the actual lens size, and the coherence length ρ0 of a spherical wave propagating through the turbulent medium. This relation agrees well with the empirically known fact that increasing the size of the primary lens of a telescope beyond approximately 10 cm does little to improve resolution. Turbulence is also found to cause a coherent object to appear incoherent to the viewer when the resolution spot size of the imaging system is larger than ρ0. The approximations upon which these results are based are supported by numerical calculations for particular objects. Finally, the implications of these findings are compared with papers that demonstrate superresolution for short- and long-exposure imaging.
We evaluate the time-averaged double-passage image of a coherently illuminated object that is obscured by a random phase screen. In particular, we study the effect of changing the location of the random screen on the average intensity spectrum of the image. We consider two cases, when the random screen is at an arbitrary location between the pupil plane of the imaging system and the object plane and when it is located right next to the object. In both cases we find that the average intensity spectrum of the image is diffraction-limited.