We report an all-optical object classification framework using a single-pixel diffractive network and spectrum encoding, classifying unknown objects through unknown random phase diffusers at the speed of light. Using this single-pixel diffractive network design, we numerically achieved a blind testing accuracy of 88.53%, classifying unknown handwritten digits through 80 unknown random diffusers that were never used during training. This framework presents a time- and energy-efficient all-optical solution for directly sensing through unknown random diffusers using a single pixel and will be of broad interest to various fields, such as security, biosensing and autonomous driving.
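As a rough illustration of the spectrum-encoded single-pixel readout described above, the sketch below propagates one object field through a random phase diffuser and a single stand-in phase layer at ten wavelengths, assigns one wavelength channel per digit class, and classifies by the channel carrying the most power at a small single-pixel aperture. The grid size, wavelengths, pixel pitch, distances, the `propagate` helper, and the use of an untrained random layer are all illustrative assumptions, not the authors' trained design.

```python
import torch

N = 64                                              # simulation grid size (assumed)
PITCH, Z = 8e-6, 0.03                               # pixel pitch [m], propagation distance [m] (assumed)
WAVELENGTHS = torch.linspace(700e-9, 790e-9, 10)    # one spectral channel per digit class (assumed)

def propagate(field, wavelength, z=Z, dx=PITCH):
    """Angular-spectrum free-space propagation of a complex field by distance z."""
    fx = torch.fft.fftfreq(N, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz = 2 * torch.pi * torch.sqrt(torch.clamp((1 / wavelength) ** 2 - FX**2 - FY**2, min=0.0))
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * z))

object_field = torch.rand(N, N).to(torch.complex64)          # stand-in for a handwritten digit
diffuser = torch.exp(1j * 2 * torch.pi * torch.rand(N, N))   # unknown random phase diffuser
layer_phase = 2 * torch.pi * torch.rand(N, N)                # stand-in for one diffractive layer

# Single-pixel readout: integrate intensity over a small central aperture, per wavelength.
signals = []
for wl in WAVELENGTHS:
    field = propagate(object_field * diffuser, wl)               # object/diffuser -> layer plane
    field = propagate(field * torch.exp(1j * layer_phase), wl)   # layer plane -> detector plane
    aperture = field[N // 2 - 2 : N // 2 + 2, N // 2 - 2 : N // 2 + 2]
    signals.append((aperture.abs() ** 2).sum())

# Predicted class = spectral channel with the strongest single-pixel signal.
predicted_class = int(torch.argmax(torch.stack(signals)))
print("predicted digit:", predicted_class)
```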
We report a computer-free method to image through random, new diffusers at the speed of light using passive diffractive optical networks composed of spatially engineered transmissive layers. These diffractive layers were designed using deep learning in a computer, with image pairs containing diffuser-distorted optical fields and the corresponding distortion-free images (ground truth). After this one-time training effort, the resulting diffractive layers were fabricated to form a physical network that all-optically reconstructs unknown objects through random, unknown diffusers, without requiring any power except for the illumination light. This diffractive computational imager might find applications in various fields, e.g., atmospheric sciences, biomedical imaging, and defense/security.
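For a concrete picture of this one-time training step, here is a minimal, hedged sketch in PyTorch: a few trainable phase-only layers separated by free-space (angular-spectrum) propagation are optimized so that the output intensity matches the distortion-free ground truth when the input is distorted by a random phase diffuser. The layer count, grid size, wavelength, distances, the `DiffractiveNet` class, and the single gradient step shown are illustrative assumptions; the paper's actual architecture, loss, and training data are not reproduced here.

```python
import torch
import torch.nn as nn

N = 128            # grid size in pixels (assumed)
PITCH = 8e-6       # pixel pitch [m] (assumed)
WAVELEN = 750e-9   # illumination wavelength [m] (assumed)
Z = 0.04           # axial distance between consecutive planes [m] (assumed)

def angular_spectrum(field, z, wavelength=WAVELEN, dx=PITCH):
    """Free-space propagation of a complex field by distance z (angular-spectrum method)."""
    fx = torch.fft.fftfreq(field.shape[-2], d=dx)
    fy = torch.fft.fftfreq(field.shape[-1], d=dx)
    FX, FY = torch.meshgrid(fx, fy, indexing="ij")
    kz = 2 * torch.pi * torch.sqrt(torch.clamp((1 / wavelength) ** 2 - FX**2 - FY**2, min=0.0))
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * z))

class DiffractiveNet(nn.Module):
    """A few trainable phase-only layers separated by free-space propagation."""
    def __init__(self, n_layers=5):
        super().__init__()
        self.phases = nn.ParameterList([nn.Parameter(torch.zeros(N, N)) for _ in range(n_layers)])

    def forward(self, field):
        for phase in self.phases:
            field = angular_spectrum(field, Z)
            field = field * torch.exp(1j * phase)   # phase-only modulation at this layer
        field = angular_spectrum(field, Z)
        return field.abs() ** 2                     # intensity at the output (sensor) plane

# One illustrative training step: the input is a ground-truth image distorted by a random
# phase diffuser; the loss penalizes mismatch with the distortion-free ground truth.
net = DiffractiveNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

ground_truth = torch.rand(N, N)                             # stand-in for one training image
diffuser = torch.exp(1j * 2 * torch.pi * torch.rand(N, N))  # random phase diffuser
distorted = angular_spectrum(ground_truth.to(torch.complex64) * diffuser, Z)

loss = torch.mean((net(distorted) - ground_truth) ** 2)
loss.backward()
opt.step()
```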
We report an all-optical computational imager that restores diffuser-distorted images at the speed of light, without a computer. To see through random, unknown diffusers, we trained diffractive networks consisting of successive transmissive layers. After training, the resulting diffractive layers are fabricated to form a passive optical network that is placed behind new, random diffusers to perform all-optical reconstruction of unknown images entirely covered by unknown diffusers. These all-optical diffractive reconstructions are completed at the speed of light propagation from the input to the output plane, do not require any power except for the illumination light, and might find applications in, e.g., atmospheric sciences, biomedical imaging, and defense/security.
We report a mobile device based on inline holography and deep learning to directly measure the volatility of particulate matter at high throughput. We applied this mobile device to characterize aerosols generated by electronic cigarettes (e-cigs). Our measurements revealed a negative correlation between the volatility of e-cig-generated particles and the vegetable glycerin concentration in the e-liquid. Furthermore, the addition of other chemicals, e.g., nicotine and flavoring compounds, reduced the overall volatility of e-cig-generated aerosols. The presented device can monitor the dynamic behavior of e-cig aerosols in a high-throughput manner, potentially providing important information for e-cig exposure assessment, e.g., via second-hand vaping.
KEYWORDS: Optical coherence tomography, Image restoration, Neural networks, 3D image reconstruction, Image quality, 3D image processing, Stereoscopy, Spectral resolution, Signal to noise ratio, Imaging systems
We report neural network-based rapid reconstruction of swept-source OCT (SS-OCT) images using undersampled spectral data. We trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using >3-fold undersampled spectral data per A-line, the trained neural network can blindly remove the spatial aliasing artifacts caused by spectral undersampling, presenting a very good match to the images reconstructed using the full spectral data. This method can be integrated with various swept-source or spectral-domain OCT systems to potentially improve the 3D imaging speed without sacrificing the resolution or signal-to-noise ratio of the reconstructed images.
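As a hedged illustration of the spectral-undersampling problem, the sketch below undersamples simulated spectral fringes 3x per A-line (zero-filling the skipped samples so the reconstruction grid stays the same and aliasing appears as ghost copies), reconstructs both the full and undersampled B-scans with an inverse FFT, and takes one training step of a small CNN that maps the aliased image toward the alias-free one. The simulated fringes, the zero-fill undersampling convention, and the three-layer `DeAlias` network are illustrative assumptions; the paper's network and data pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

def reconstruct(spectra):
    """Standard Fourier-domain OCT reconstruction: inverse FFT of the spectral fringes."""
    return torch.fft.ifft(spectra, dim=-1).abs()

class DeAlias(nn.Module):
    """Small CNN mapping an aliased B-scan toward an alias-free one (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Simulated spectral fringes for one B-scan: (A-lines, spectral samples per A-line).
full_spectra = torch.randn(128, 1024)

# 3x spectral undersampling per A-line: keep every third sample and zero-fill the rest.
undersampled = torch.zeros_like(full_spectra)
undersampled[:, ::3] = full_spectra[:, ::3]

target = reconstruct(full_spectra)[None, None]      # alias-free reference image (1, 1, H, W)
aliased = reconstruct(undersampled)[None, None]     # network input containing aliasing artifacts

# One illustrative training step of the de-aliasing network.
model = DeAlias()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = torch.mean((model(aliased) - target) ** 2)
loss.backward()
opt.step()
```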