I will discuss the emerging trend in computational imaging of training deep neural networks (DNNs) for image formation. The DNNs are trained on examples consisting of pairs of known objects and their corresponding raw images. The objects are drawn from databases such as ImageNet, Faces-LFW, and MNIST, converted to complex amplitude maps, and displayed on a Spatial Light Modulator (SLM) to produce the raw intensity measurements. After training, the DNNs are capable of recovering unknown objects, i.e., objects not previously included in the training sets, from the raw images in several scenarios: (1) phase objects retrieved from intensity after lensless propagation; (2) phase objects retrieved from intensity after lensless propagation at extremely low photon counts; and (3) amplitude objects retrieved from in-focus intensity after propagation through a strong scatterer. Recovery is robust to disturbances in the optical system, such as additional defocus or various misalignments. This suggests that DNNs may form robust internal models of the physics of light propagation and detection and generalize priors from the training set. In the talk I will discuss in more detail various methods for incorporating the physics into DNN training, and how the DNN architecture and hyper-parameters (depth, number of units at each depth, presence or absence of skip connections, etc.) influence the quality of image recovery.
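To make scenario (1) concrete, the forward model that produces the raw image from a phase object can be sketched with the standard angular spectrum method for free-space propagation. The sketch below is illustrative only: the grid size, pixel pitch, wavelength, and propagation distance are assumptions, not values from the talk, and a random phase map stands in for an object drawn from a database.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # transfer function; evanescent components (arg < 0) are suppressed
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.abs(arg))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# illustrative parameters (not from the talk): 256x256 grid, 8 um pitch,
# HeNe wavelength, 5 cm lensless propagation
n, dx, wavelength, z = 256, 8e-6, 633e-9, 0.05
rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))    # stand-in "phase object"
obj = np.exp(1j * phase)                         # unit-amplitude phase object
raw = np.abs(angular_spectrum_propagate(obj, wavelength, dx, z)) ** 2
```

Pairs like `(phase, raw)` are what the DNN trains on; the network's task is to invert this nonlinear map, since the detector records only `raw` and the phase is lost.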
In a recent paper [Goy et al., Phys. Rev. Lett. 121, 243902, 2018], we showed that deep neural networks (DNNs) are very efficient solvers for phase retrieval problems, especially when the photon budget is limited. However, the performance of the DNN is strongly conditioned by a preprocessing step that consists of producing a proper initial guess. In this paper, we study the influence of the preprocessing in more detail, in particular the choice of the preprocessing operator. We also demonstrate empirically that, for a DenseNet architecture, the performance of the DNN increases with the number of layers up to a point beyond which it saturates.
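One simple, physics-based choice of preprocessing operator can be sketched as follows: back-propagate the square root of the measured intensity through the free-space propagation operator and keep the phase of the result as the initial guess fed to the DNN. This is an assumed, illustrative operator (as are all parameter values below), not necessarily the exact operator studied in the paper.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Angular spectrum propagation over distance z (negative z back-propagates)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.abs(arg))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def approximant(intensity, wavelength, dx, z):
    """Preprocessing operator (illustrative): back-propagate the measured
    amplitude and keep the phase of the result as the DNN's initial guess."""
    return np.angle(propagate(np.sqrt(intensity), wavelength, dx, -z))

# synthetic round trip with assumed parameters
n, dx, wl, z = 128, 8e-6, 633e-9, 0.05
rng = np.random.default_rng(1)
true_phase = rng.uniform(-0.5, 0.5, (n, n))      # weak phase object
meas = np.abs(propagate(np.exp(1j * true_phase), wl, dx, z)) ** 2
guess = approximant(meas, wl, dx, z)
```

The DNN then only has to refine `guess` rather than invert the raw measurement from scratch, which is what makes the choice of this operator so consequential at low photon counts.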