Using digital holographic (DH) sensors coupled with iterative computational algorithms, we can sense and correct the effects of distributed-volume turbulence in DH imagery. These iterative methods minimize a non-convex cost function with respect to the wavefront phase function, modeled as discrete arrays. This approach leads to high-dimensional optimization problems plagued by local minima, a problem that is amplified under challenging conditions (e.g., high noise, strong turbulence, insufficient data). We investigate using implicit neural representations (INRs) to model atmospheric phase errors in DH data. INRs offer a low-dimensional functional representation that simplifies the optimization problem, allowing us to produce high-quality wavefront estimates and focused images, even in deep-turbulence conditions.
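As an illustrative sketch (not the trained model from this work), the snippet below shows how an INR parameterizes a phase screen as a small coordinate network: the phase is a continuous function of pupil coordinates, so the number of optimization variables is set by the network size rather than the sampling grid. The layer sizes and random weights are assumptions for illustration.

```python
import numpy as np

def inr_phase(coords, weights):
    """Evaluate a tiny coordinate MLP: (x, y) -> phase (radians).

    coords:  (N, 2) array of pupil-plane coordinates in [-1, 1].
    weights: list of (W, b) tuples defining the network layers.
    """
    h = coords
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)           # smooth nonlinearity
    W, b = weights[-1]
    return (h @ W + b).ravel()           # one scalar phase value per coordinate

# A random, untrained network with layer sizes 2 -> 32 -> 32 -> 1.
rng = np.random.default_rng(0)
sizes = [2, 32, 32, 1]
weights = [(rng.normal(scale=1 / np.sqrt(m), size=(m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]

# Sample the continuous phase on a 64x64 grid; the grid resolution is
# independent of the (small) number of network parameters being optimized.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
phase = inr_phase(np.stack([x.ravel(), y.ravel()], axis=1), weights).reshape(64, 64)
```

In an INR-based solver, the optimizer would update `weights` (a few thousand parameters) rather than the full discrete phase array.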
In this paper, we explore the use of digital holography with a high-speed camera to sense and correct atmospheric turbulence. Deep atmospheric turbulence degrades the performance of both imaging and directed-energy systems. Characterizing and correcting atmospheric turbulence requires knowledge of the induced phase errors on the wavefront, and digital holography provides the capability to measure these phase errors in the most challenging atmospheric conditions. Previous laboratory experiments at the United States Air Force Academy have demonstrated both sensing and correction of simulated atmospheric turbulence at discrete planes using digital holography. In this work, we design and test a digital holography system capable of imaging in relevant atmospheric conditions. Specifically, we extend the optical design of the existing laboratory-based digital holography system with a high-speed camera, making it better suited to accurately measuring real-world turbulence. Our results detail the performance of the digital holography system in a controlled test environment and provide data on the feasibility of integrating digital holography into future fielded systems.
KEYWORDS: Expectation maximization algorithms, Simulations, Atmospheric turbulence, Distortion, Wavefront sensors, Directed energy weapons, Data modeling, Air Force
The estimation of phase errors due to atmospheric turbulence from digital-holography (DH) data with high throughput and low latency is critical for applications such as wavefront sensing and directed energy. Focusing outgoing directed energy is particularly difficult because the phase errors must be estimated with extremely low latency for use in closed-loop correction of the outgoing light, before the atmospheric parameters decorrelate. This low-latency requirement necessitates that the phase distortion be estimated from a single shot of DH data. The Dynamic DH-MBIR (DDH-MBIR) algorithm has been shown to accurately estimate isoplanatic phase errors using the expectation-maximization (EM) algorithm; however, DDH-MBIR was introduced using data that models only frozen flow of atmospheric turbulence. In this paper, we characterize the performance of the DDH-MBIR algorithm in more realistic settings. Specifically, we show that DDH-MBIR produces accurate phase estimates in the presence of moderate levels of atmospheric boiling.
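To illustrate the correction step that such isoplanatic phase estimates enable (a toy sketch, not the DDH-MBIR algorithm itself), the snippet below applies the conjugate of a known phase screen to an aberrated pupil field and compares a Strehl-like sharpness metric before and after; the grid size and screen statistics are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Smooth random phase screen: low-pass filtered white noise, scaled to ~2 rad rms.
noise_f = np.fft.fft2(rng.normal(size=(n, n)))
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f)
screen = np.real(np.fft.ifft2(noise_f * np.exp(-(FX**2 + FY**2) / (2 * 0.05**2))))
screen *= 2.0 / screen.std()

pupil = np.exp(1j * screen)                     # aberrated pupil field (unit amplitude)
psf_aberrated = np.abs(np.fft.fft2(pupil))**2   # distorted point-spread function
# Closed-loop correction: apply the conjugate of the estimated phase.
psf_corrected = np.abs(np.fft.fft2(pupil * np.exp(-1j * screen)))**2

strehl_aberrated = psf_aberrated.max() / n**4   # peak relative to the ideal PSF peak
strehl_corrected = psf_corrected.max() / n**4   # = 1 when the estimate is exact
```

In a real system, `screen` would be the DDH-MBIR phase estimate rather than the true screen, and the residual Strehl loss would reflect estimation error and latency.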
Imaging through deep turbulence is a hard and unsolved problem. There have been recent advances toward sensing and correcting moderate turbulence using digital holography (DH). With DH, we use optical heterodyne detection to sense the amplitude and phase of the light reflected from an object. This phase information allows us to digitally back-propagate the measured field to estimate and correct distributed-volume aberrations. Recently, we developed a model-based iterative reconstruction (MBIR) algorithm for sensing and correcting atmospheric turbulence using multi-shot DH data (i.e., multiple holographic measurements). Using simulation, we showed the ability to correct deep-turbulence effects, loosely characterized by Rytov numbers greater than 0.75 and isoplanatic angles near the diffraction-limited viewing angle. In this work, we demonstrate the validity of our method using laboratory measurements. Our experiments utilized a combination of multiple calibrated Kolmogorov phase screens along the propagation path to emulate distributed-volume turbulence. This controlled laboratory setup allowed us to demonstrate our algorithm’s performance in deep-turbulence conditions using real data.
Recently, we proposed a deep-learning (DL)-based method for solving coherent imaging inverse problems, known as coherent plug-and-play (CPnP). CPnP is a regularized inversion framework that works with coherent imaging data corrupted by phase errors. The algorithm jointly produces a focused and speckle-free image and an estimate of the phase errors. The algorithm combines physics-based propagation models with image models learned with DL and produces higher-quality estimates when compared to other non-DL methods. Previously, we were only able to demonstrate CPnP using simulated data. In this work, we design a coherent imaging test bed to validate CPnP using real data. We devise a method to obtain truth data for both the images and the phase errors. This allows us to quantify performance and compare different algorithms. Our results validate the improved performance of CPnP when compared to other existing methods.
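As a rough sketch of the plug-and-play idea (with a simple averaging filter standing in for the learned DL denoiser, and a toy inpainting forward model in place of the coherent-imaging model with phase errors), each iteration alternates a data-fit gradient step with a denoising step:

```python
import numpy as np

def denoise(x):
    """Placeholder prior: light local averaging (a learned denoiser in CPnP)."""
    nbrs = sum(np.roll(x, s, axis=a) for s in (-1, 1) for a in (0, 1))
    return 0.8 * x + 0.2 * (nbrs / 4.0)

def pnp(y, mask, step=0.5, iters=50):
    """Plug-and-play proximal gradient: gradient step on the data fit,
    then a denoiser applied in place of a proximal operator."""
    x = y.copy()
    for _ in range(iters):
        grad = mask * x - y              # gradient of 0.5 * ||mask*x - y||^2
        x = denoise(x - step * grad)
    return x

# Toy inpainting problem with a smooth ground truth.
rng = np.random.default_rng(1)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
truth = np.exp(-((xx - n / 2)**2 + (yy - n / 2)**2) / (2 * 10.0**2))
mask = (rng.random((n, n)) > 0.3).astype(float)  # ~30% of pixels unobserved
y = mask * truth
x_hat = pnp(y, mask)                             # fills in the missing pixels
```

The design point of plug-and-play methods is exactly this modularity: the physics-based forward model and the learned image prior are swapped independently without re-deriving the solver.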
Imaging through deep atmospheric turbulence is a challenging and unsolved problem. However, digital holography (DH) has recently demonstrated the potential for sensing and digitally correcting moderate turbulence. DH uses coherent illumination and coherent detection to sense the amplitude and phase of light reflected off an object. By obtaining the phase information, we can digitally propagate the measured field to points along an optical path in order to estimate and correct for the distributed-volume aberrations. This so-called multi-plane correction is critical for overcoming the limitations posed by moderate and deep atmospheric turbulence. Here, we loosely define deep-turbulence conditions to be those with Rytov numbers greater than 0.75 and isoplanatic angles near the diffraction-limited viewing angle. Furthermore, we define moderate-turbulence conditions to be those with Rytov numbers between 0.1 and 0.75 and with isoplanatic angles at least a few times larger than the diffraction-limited viewing angle. Recently, we developed a model-based iterative reconstruction (MBIR) algorithm for sensing and correcting atmospheric turbulence using single-shot DH data (i.e., a single holographic measurement). This approach uniquely demonstrated the ability to correct distributed-volume turbulence in the moderate-turbulence regime using only single-shot data. While the DH-MBIR algorithm pushed the performance limits for single-shot data, it fails in deep-turbulence conditions. In this work, we modify the DH-MBIR algorithm for use with multi-shot data and explore how increasing the number of measurements extends our capability to sense and correct imagery in deep-turbulence conditions.
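The digital propagation step can be sketched with the standard angular-spectrum method; the wavelength, pixel pitch, and propagation distance below are illustrative assumptions, not values from our system.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Angular-spectrum propagation of a sampled complex field over distance z
    (negative z back-propagates toward the source)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagation undoes forward propagation for the propagating components,
# which is what lets us digitally reach planes along the optical path.
rng = np.random.default_rng(2)
u0 = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
u1 = propagate(u0, wavelength=1.55e-6, dx=10e-6, z=0.1)   # forward 10 cm
u2 = propagate(u1, wavelength=1.55e-6, dx=10e-6, z=-0.1)  # digital back-propagation
```

Multi-plane correction amounts to interleaving such propagation steps with the application of estimated phase corrections at each plane.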
This paper makes use of digital-holographic detection in the off-axis image plane recording geometry to determine the Fried parameter of transmissive phase screens. Digital-holographic detection, in practice, provides us with an estimate of the complex-optical field (i.e., both the amplitude and wrapped phase); thus, we can use this estimate for determining the Fried parameter of transmissive phase screens, especially when the resulting aberrations follow Kolmogorov statistics. As such, this paper uses two experimental setups and Lexitek phase plates, which make use of Kolmogorov statistics to create aberrations with a prescribed Fried parameter. In both experimental setups, we place the Lexitek phase plates under test near the single-receiver lens of our digital-holographic system and assume isoplanatic conditions. In the first experimental setup, we uniformly illuminate a chrome-on-glass bar chart backed by Labsphere Spectralon®. We then use digital-holographic detection and an image-sharpening algorithm to indirectly measure the aberrations and determine the Fried parameter. In the second experimental setup, we send a collimated beam through the Lexitek phase plates. We then use digital-holographic detection to directly measure the aberrations and determine the Fried parameter. The results show that the first experimental setup overestimates the prescribed Fried parameter by 20%-60%, whereas the second experimental setup produces less variability, with estimates within ±20% of the prescribed Fried parameter.
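A minimal sketch of the off-axis demodulation underlying this complex-field estimate (simplified from the actual processing chain): the hologram is mixed down by the reference carrier and low-pass filtered to isolate the complex signal field. The carrier frequency and grid parameters below are assumptions for illustration.

```python
import numpy as np

def demodulate(hologram, carrier, dx):
    """Recover the complex signal field from an off-axis hologram.

    hologram: real-valued intensity |S + R|^2 recorded with a tilted reference R.
    carrier:  (fx, fy) reference tilt frequency in cycles per unit length.
    """
    n = hologram.shape[0]
    yy, xx = np.mgrid[0:n, 0:n] * dx
    # Mix the S*conj(R) cross term down to baseband, then low-pass filter it.
    baseband = hologram * np.exp(2j * np.pi * (carrier[0] * xx + carrier[1] * yy))
    F = np.fft.fft2(baseband)
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    F[np.hypot(FX, FY) >= 0.5 * np.hypot(*carrier)] = 0.0
    return np.fft.ifft2(F)

# Synthetic check: a smooth complex field plus an on-grid carrier.
n, dx = 128, 1.0
yy, xx = np.mgrid[0:n, 0:n] * dx
S = np.exp(-((xx - 64)**2 + (yy - 64)**2) / (2 * 10.0**2)) * np.exp(0.4j)
R = np.exp(2j * np.pi * 0.25 * (xx + yy))     # tilted reference beam
holo = np.abs(S + R)**2                       # recorded intensity
S_hat = demodulate(holo, (0.25, 0.25), dx)    # recovered amplitude and wrapped phase
```

The carrier must be large enough that the autocorrelation (DC) terms and the conjugate image stay outside the low-pass window, which is what the off-axis geometry guarantees.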
The US Air Force (USAF) conducts research involving the sensing and compensation of atmospheric turbulence, which acts to blur images and make laser-beam propagation more challenging. As such, USAF scientists and engineers (S&Es) often face the challenging task of explaining this research to audiences without relevant technical expertise. These audiences vary widely, from senior military leadership to K-12 students reached through science, technology, engineering, and mathematics (STEM) outreach activities. Previously, a team of USAF S&Es developed a table-top setup for the demonstration of digital-holography (DH) technology. This technology enables the measurement of the complex-optical field, which in turn enables a plethora of applications that involve imaging and wavefront sensing. Therefore, in this paper we extend this table-top setup to illustrate both the effects of atmospheric turbulence on the imaging and wavefront-sensing process and the digital-signal processing required to estimate and mitigate these effects. The enhanced demonstration provides a visual learning aid to help explain the complicated concepts associated with imaging through atmospheric turbulence. Specifically, we show that we can introduce aberrations into the DH system and use digital-image correction to refocus the resultant blurry images. This paper discusses the overall system design, improvements, and lessons learned.
Recently developed coherent-imaging algorithms using model-based iterative reconstruction (MBIR) are robust to noise, speckle, and phase errors. These MBIR algorithms produce useful images with less signal, which allows imaging distances to be extended. So far, MBIR algorithms have only incorporated simple image models; complex scenes, on the other hand, require more advanced image models. In this work, we develop an MBIR algorithm for image reconstruction in the presence of phase errors that incorporates advanced image models. The proposed algorithm enables optically coherent imaging of complex scenes at extended ranges.
Standard Synthetic Aperture LADAR (SAL) image processing techniques use fast Fourier transforms (FFTs) to render images. This approach amplifies noise and yields estimates of the complex-valued reflection coefficient that exhibit high variation, known as speckle. In this paper, we propose a model-based iterative reconstruction (MBIR) approach using a Bayesian framework to form SAL images. The resulting images are the maximum a posteriori (MAP) estimate of the object's real-valued surface reflectance. To overcome the complexity of the MAP cost function, we use the expectation-maximization (EM) algorithm to derive a surrogate function, which is then optimized. The proposed algorithm is tested on simulated data and compared against the Fourier-based approach.
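The speckle behavior of Fourier-based rendering can be illustrated numerically: for a flat surface viewed through a limited aperture, the rendered intensity has contrast (standard deviation over mean) near one, i.e., the per-pixel fluctuation is as large as the signal itself. This toy simulation assumes a unit-modulus rough-surface reflection coefficient and an idealized, noise-free measurement.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
reflectance = np.ones((n, n))                    # flat real-valued surface reflectance
# Rough-surface model: complex reflection coefficient with uniform random phase.
g = np.sqrt(reflectance) * np.exp(2j * np.pi * rng.random((n, n)))

# Fourier-based rendering through a limited 64x64 aperture (finite resolution).
spectrum = np.fft.fftshift(np.fft.fft2(g))
sub = spectrum[n // 2 - 32:n // 2 + 32, n // 2 - 32:n // 2 + 32]
image = np.abs(np.fft.ifft2(np.fft.ifftshift(sub)))**2

contrast = image.std() / image.mean()            # ~1 for fully developed speckle
```

A MAP estimate of the real-valued reflectance averages over this rough-surface phase rather than reporting one speckle realization, which is the motivation for the Bayesian formulation.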
A wave-optics model is developed which allows simulation of an Inverse Synthetic Aperture LADAR (ISAL) imaging system. This end-to-end tool models the complex interactions of Linear Frequency Modulated (LFM) chirped pulses, object/beam interactions including object articulation, speckle phenomenology, heterodyne detection with noise, atmospheric turbulence, and laser-guide star adaptive optics. Detected signal outputs are simulated and processed to explore system design trades and to test and compare image processing algorithms. Model verification results will be presented as well as reconstructed images.
KEYWORDS: Telescopes, Sensors, Interferometry, Receivers, Signal to noise ratio, Speckle, Data archive systems, Data processing, Data storage, Atmospheric propagation
Intensity interferometry is a form of imaging developed in the 1950s by Hanbury Brown and Twiss, which gave very early results for estimates of the diameters of stellar discs. It relies on the statistical properties of light to form an image by correlating the electronic signals measured independently and simultaneously at two or more separate collection telescopes. Its benefits are that it can provide very high resolution, can be very low in cost, does not require precision path matching, and is insensitive to atmospheric effects. Its main disadvantage is relatively poor SNR for larger telescope separations. An experiment was performed with three telescopes in Kihei, HI to investigate the potential for large-separation, high-resolution, multi-telescope operation. Simulations were performed to address key issues related to the experiment. Correlations were measured during lab checkouts and early field testing. A compression scheme was developed to archive the raw data. The compression process had the added advantage of eliminating spurious electronic interference signals.
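The core correlation measurement can be sketched as follows: for thermal light, the normalized zero-lag correlation of the intensity fluctuations at two telescopes estimates the squared degree of coherence |V|^2 between the two apertures (the Hanbury Brown-Twiss effect). The field model below, with one independent sample per coherence time and an assumed value of V, is a toy illustration rather than the experiment's processing pipeline.

```python
import numpy as np

def zero_lag_correlation(i1, i2):
    """Normalized zero-lag correlation of intensity fluctuations, g2(0) - 1."""
    return np.mean((i1 - i1.mean()) * (i2 - i2.mean())) / (i1.mean() * i2.mean())

rng = np.random.default_rng(4)
n = 200_000
V = 0.6                                   # assumed degree of coherence (baseline-dependent)

# Partially correlated circular-Gaussian (thermal) fields at the two telescopes.
a = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
b = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
E1 = a
E2 = V * a + np.sqrt(1 - V**2) * b

# Correlating the measured intensities estimates |V|**2.
g2m1 = zero_lag_correlation(np.abs(E1)**2, np.abs(E2)**2)
```

Because the correlation acts on detected intensities rather than fields, no optical path matching between telescopes is required, which is the practical advantage noted above.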