Holographic optical elements (HOEs) are based on the principle of holography and can implement arbitrary optical functions, such as those of convex lenses and concave mirrors. The performance of HOEs is expected to be enhanced by the cooperative operation of multiple HOEs. However, such cooperative designs are difficult to achieve with conventional design methods such as ray-tracing software. We introduce a machine-learning-based design of cooperatively operating HOEs. In this work, we implemented a diffractive deep neural network (D2NN) to realize the cooperative operation of multiple HOEs at visible wavelengths. A D2NN is a type of optical neural network whose computation is expressed by light propagation and is implemented with multiple diffractive optical elements (DOEs), each of which can represent an arbitrary optical function. However, multilayer HOEs superimpose noise on the output wavefront because each HOE generates unwanted light such as direct (zeroth-order) light and higher-order diffracted light. Therefore, we implemented a D2NN consisting of two HOE layers in an off-axis configuration, which avoids this obstacle. The two HOE layers were trained to classify handwritten digits. The trained D2NN model with HOEs was evaluated in numerical simulation and achieved 87.1% classification accuracy. The proposed method enables the design of cooperatively operating multiple HOEs, allowing them to realize more complex and higher-performance functions.
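For readers unfamiliar with D2NNs, the following is a minimal sketch of the forward pass of a two-layer diffractive network, using angular-spectrum propagation between two phase-only layers. The grid size, pixel pitch, wavelength, and propagation distances are illustrative assumptions, not the parameters of the paper, and the off-axis carrier phase used to separate the signal from direct and higher-order light is omitted.

```python
# Minimal sketch of a two-layer diffractive network (D2NN) forward pass.
# All parameters (grid size, pitch, wavelength, distances) are illustrative
# assumptions, not the values used in the paper.
import numpy as np

N, pitch, wl = 256, 8e-6, 532e-9       # grid size, pixel pitch [m], wavelength [m]
z1, z2 = 0.05, 0.05                    # propagation distances between planes [m]

def angular_spectrum(u, z):
    """Propagate a complex field u by distance z with the angular-spectrum method."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wl * FX) ** 2 - (wl * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wl * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Two phase-only layers, standing in for the trained HOEs.
phi1 = np.random.uniform(0, 2 * np.pi, (N, N))
phi2 = np.random.uniform(0, 2 * np.pi, (N, N))

def forward(u_in):
    u = angular_spectrum(u_in * np.exp(1j * phi1), z1)   # layer 1 + propagation
    u = angular_spectrum(u * np.exp(1j * phi2), z2)      # layer 2 + propagation
    return np.abs(u) ** 2                                # detector-plane intensity

# A digit image padded to the grid would serve as the input field; class scores
# are then read out as the summed intensity inside ten predefined detector regions,
# and the phase layers are optimized against a classification loss.
```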
Phase-shifting interferometry that selectively extracts wavelength information, termed computational coherent superposition (CCS) of multiple wavelengths, has been developed since 2013. In this proceedings paper, we apply CCS to self-interference incoherent holography and construct single-path, mechanical-motion-free, wavelength-multiplexed, incoherent multicolor digital holographic microscopy systems. We also numerically investigate quantum fluctuation in phase-shifting interferometry for sensing weak light such as natural light and nonlinear light. We then briefly discuss the difference between digital holography systems based on CCS and phase-shifting interferometry with a Bayer color image sensor.
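As background, the sketch below shows conventional four-step phase-shifting extraction of a complex object wave from intensity frames. This is the single-wavelength building block that CCS generalizes to multiple wavelengths; the phase-shift convention is an assumption, and the CCS algorithm itself differs in how the shifts are assigned per wavelength.

```python
# Minimal sketch of conventional four-step phase-shifting interferometry, the
# single-wavelength building block that CCS generalizes to multiple wavelengths.
# Intensities are assumed to follow I_k = a + b*cos(phi - k*pi/2), k = 0..3.
import numpy as np

def extract_complex_wave(I0, I1, I2, I3):
    """Recover the object wave (up to a constant factor) from four phase-shifted frames."""
    re = I0 - I2          # proportional to b*cos(phi)
    im = I1 - I3          # proportional to b*sin(phi)
    return re + 1j * im   # complex amplitude; phase = np.angle(...)

# In CCS, additional wavelength-dependent phase shifts are introduced so that the
# object waves of several wavelengths can be separated from a single set of
# wavelength-multiplexed interferograms (the details differ from this sketch).
```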
Wavefront printing technology allows digitally designed information of three-dimensional (3D) objects or optical elements to be recorded on a holographic photosensitive material. However, the hologram data generated from this digitally designed information are enormous, and unnecessary bidirectional communication often occurs. To solve this problem, we studied a special-purpose computer for wavefront printing. The processing consists of generating light-ray information from the digitally designed 3D object data, converting the light-ray information into wavefront information, and generating the hologram data locally from the wavefront information. In this paper, we designed an emulator of the special-purpose computer for wavefront printing and determined the amount of information (the number of bits) required for the circuit by comparing the 3D images reconstructed from the holograms generated by the emulator. As a result, the bit depth of the wavefront information converted from the light-ray information had the greatest effect on the quality of the reconstructed 3D images, and we were able to design an emulator that reduces the noise component in those images. In the future, we will design the special-purpose computer for wavefront printing in a hardware description language and implement it on a programmable logic device such as a field-programmable gate array.
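The following is a minimal sketch of the kind of bit-depth study described above: the wavefront converted from light-ray data is quantized to a given number of bits and the reconstructions are compared. The test field, the use of PSNR, and the amplitude/phase split are illustrative assumptions, not the emulator's actual pipeline.

```python
# Minimal sketch of a bit-depth study: quantize a complex wavefront to a given
# number of bits and compare reconstructions. The stand-in field, the PSNR
# metric, and the FFT "reconstruction" are illustrative assumptions.
import numpy as np

def quantize(x, bits):
    """Uniformly quantize values in [min, max] to 2**bits levels."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

def quantize_wavefront(u, amp_bits, phase_bits):
    amp = quantize(np.abs(u), amp_bits)
    phs = quantize(np.angle(u), phase_bits)
    return amp * np.exp(1j * phs)

def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Example: sweep the phase bit depth and observe reconstruction quality.
rng = np.random.default_rng(0)
u = rng.random((256, 256)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (256, 256)))
ref = np.abs(np.fft.fft2(u)) ** 2                      # stand-in reconstruction
for bits in (2, 4, 6, 8):
    rec = np.abs(np.fft.fft2(quantize_wavefront(u, 8, bits))) ** 2
    print(bits, "bits:", round(psnr(ref, rec), 1), "dB")
```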
KEYWORDS: Holography, Wavefronts, Printing, 3D displays, 3D printing, Electromagnetism, Communication engineering, Communication and information technologies, Photonics, Electro-optical engineering
A hologram of a scene can be digitally created from a large set of images of that scene. Since capturing such a large set of images is infeasible, view synthesis approaches can be used to reduce the number of cameras and generate the missing views. We propose a view interpolation algorithm that creates views inside the scene based on a sparse set of camera images. This allows objects to pop out of the holographic display. We show that our approach outperforms existing view synthesis approaches and demonstrate its applicability to holographic stereograms.
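To make the idea of generating missing views concrete, the sketch below shows generic depth-image-based view interpolation between two rectified cameras. It is not the proposed algorithm, only the warping-and-blending step such methods commonly build on; the disparity convention, the lack of occlusion handling, and the trivial hole filling are assumptions for illustration.

```python
# Minimal sketch of depth-image-based view interpolation between two rectified
# cameras. Assumes float grayscale images and the convention
# x_left = x_right + disparity; no occlusion-aware ordering is performed.
import numpy as np

def warp_view(image, disparity, alpha):
    """Forward-warp a view toward a virtual camera at fraction alpha of the baseline."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        x_new = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, x_new] = image[y, xs]
        filled[y, x_new] = True
    # Trivial hole filling: propagate the previous valid pixel along each row.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out

def interpolate(left, right, disp_left, disp_right, alpha):
    """Blend warps from both neighbors; alpha=0 is the left view, alpha=1 the right."""
    a = warp_view(left, disp_left, alpha)
    b = warp_view(right, disp_right, alpha - 1.0)
    return (1 - alpha) * a + alpha * b
```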
In this paper, we introduce hologram printing technology. It comprises several component technologies: computer-generated holograms, a hologram printer, duplication, and application-dependent technologies. When applied to static holograms, the printed media can present static 3D objects more clearly than traditional 3D technologies such as lenticular lenses and integral photography (IP), because the approach is based on holography. When applied to holographic optical elements (HOEs), it is useful for many purposes, especially for large optical elements. For example, when an HOE is used as a screen, a visual system consisting of the screen and a projector can present dynamic 2D or 3D content. Since holograms and HOEs are digitally designed and manufactured by a wavefront printer, the technology is well suited to small-lot production. As a result, it is effective at the research stage of both 2D and 3D displays and, owing to a simple duplication method, also at the commercial stage.
Several wavefront printers have recently been proposed. Since these printers can record an arbitrary computer-generated wavefront, they are expected to be useful for fabricating complex mirror arrays used in front-projection 3-D screens without relying on real optics. In experiments, we prototyped two transparent reflective screens using our hologram printer. These screens compensate for the spherically distorted reference wave caused by a short projection distance so as to obtain an ideal reference wave. Owing to the wavefront-printed screen, the 3-D display consists simply of an ordinary 2-D projector and a screen, without extra optics. In our binocular system, the reflected light rays converged at the left and right eyes of the observer, and the crosstalk was less than 8%. In the light-field system, the reflected light rays formed a spatially sampled light field and reconstructed a virtual object in a depth range of ±30 mm with a ±13.5-deg viewing angle. As wavefront printing technology develops, complex optical arrays may be printed easily, even by people who are not optics-manufacturing professionals.
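A minimal sketch of the reference-wave compensation idea follows: the printed fringe is computed against the actual diverging wave from the nearby projector rather than an ideal plane wave, so that replaying the screen with the projector light reproduces the desired (e.g., eye-converging) wavefront. The aperture size, wavelength, and source/eye positions are illustrative assumptions.

```python
# Minimal sketch of compensating a spherically distorted reference wave: the
# fringe is computed between the desired object wave and the actual diverging
# wave from a nearby projector. All geometry values are assumptions.
import numpy as np

N, pitch, wl = 1024, 1e-6, 532e-9              # samples, pixel pitch [m], wavelength [m]
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

def spherical_wave(x0, y0, z0):
    """Diverging spherical wave from a point source at (x0, y0, z0)."""
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    return np.exp(1j * 2 * np.pi / wl * r) / r

obj = np.conj(spherical_wave(0.0, 0.05, 0.60))   # wave converging toward an assumed eye position
ref = spherical_wave(0.0, -0.20, 0.40)           # actual diverging wave from the nearby projector
fringe = np.abs(obj + ref) ** 2                  # fringe to print; replaying with ref reproduces obj
```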
Wavefront printing of digitally designed holograms has recently attracted attention. In this printing method, a spatial light modulator (SLM) displays the hologram data, and the wavefront is reproduced by illuminating the hologram with a reference light, in the same way as in electronic holography. However, the pixel count of current SLM devices is not sufficient to display the entire hologram data. To generate a practical digitally designed hologram, the entire hologram data are divided into a set of sub-holograms, and the wavefront reproduced by each sub-hologram is recorded sequentially in a tiling manner using an X-Y motorized stage. Owing to the limited positioning accuracy of the X-Y motorized stage and the temporally incoherent recording, the phase continuity of the recorded/reproduced wavefront is lost between neighboring sub-holograms. In this paper, we generate holograms with sub-holograms of different sizes, recorded with and without overlap, and verify the effect of sub-hologram size on the reconstructed images. The results show that the reconstructed images degrade as the sub-hologram size decreases, whereas wavefront printing with overlap causes little or no degradation in quality.
KEYWORDS: Wavefronts, Printing, 3D image reconstruction, Holograms, Holography, Diffraction, Spatial light modulators, 3D acquisition, 3D image processing, 3D printing
A hologram recording technique, generally called a "wavefront printer", has been proposed by several research groups for static three-dimensional (3D) image printing. Because the pixel count of current spatial light modulators (SLMs) is not sufficient to reconstruct the entire wavefront in the recording process, the hologram data are typically divided into a set of sub-holograms, and each wavefront is recorded sequentially as a small sub-hologram cell in a tiling manner using an X-Y motorized stage. However, since previous wavefront printers did not optimize the cell size, the reconstructed images were degraded either by obtrusive split lines when the cell size was large enough to be visible to the human eye, or by diffraction effects arising from phase discontinuities when the cell size was too small. In this paper, we introduce an overlapping recording approach for sub-holograms that satisfies both conditions: an apparent cell size small enough to make the cells invisible and a recording cell size large enough to suppress diffraction effects by maintaining the phase continuity of the reconstructed wavefront. By taking the observation conditions into account and optimizing the amount of overlap and the cell size, the proposed approach achieved higher-quality 3D image reconstruction in experiments, whereas the conventional approach suffered from visible split lines and cells.
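To illustrate the overlapped-cell idea numerically, the sketch below divides a hologram into overlapping cells and recombines them with a separable blending window so that the weights of neighboring cells sum to one in the overlap regions. The cell size, overlap, and linear-ramp window are illustrative assumptions and do not model the stage positioning errors themselves.

```python
# Minimal sketch of dividing a hologram into overlapping sub-hologram cells and
# summing them back with a blending window so neighboring cells join smoothly.
# Cell size, overlap, and the linear-ramp window are illustrative assumptions.
import numpy as np

def blend_window(cell, overlap):
    """Separable window: flat in the middle, linear ramps over the overlap region."""
    ramp = np.linspace(0.0, 1.0, overlap)
    w1d = np.concatenate([ramp, np.ones(cell - 2 * overlap), ramp[::-1]])
    return np.outer(w1d, w1d)

def tile_with_overlap(hologram, cell, overlap):
    """Re-synthesize the fringe from overlapped, windowed cells (interior weights sum to 1)."""
    step = cell - overlap
    out = np.zeros_like(hologram, dtype=float)
    win = blend_window(cell, overlap)
    for ty in range(0, hologram.shape[0] - cell + 1, step):
        for tx in range(0, hologram.shape[1] - cell + 1, step):
            out[ty:ty + cell, tx:tx + cell] += win * hologram[ty:ty + cell, tx:tx + cell]
    return out
```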
Developing a 3D display that can show true 3D images is important and necessary. Holography has great potential to achieve this objective because it can actually reconstruct the recorded object in space through wavefront reconstruction. Furthermore, computer-generated holograms (CGHs) can be used to overcome the major drawback of conventional holography, namely that the recording process is quite complicated and requires real objects. The reconstructed image, however, is blurred and accompanied by unwanted light when only one phase-only spatial light modulator (PSLM) is used. Although using two PSLMs with the dual-phase modulation method (DPMM) can modulate the phase and amplitude information simultaneously and enhance the quality of the reconstructed image, it is difficult to use in practical applications because of the extremely accurate calibration required between the two PSLMs. Therefore, the double-phase hologram (DPH) was proposed: it uses only one PSLM to modulate the phase and amplitude information simultaneously, making the reconstructed image sharper and suppressing the unwanted light.
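The core of double-phase encoding is the identity A·e^{iφ} = ½·e^{i(φ+θ)} + ½·e^{i(φ−θ)} with θ = arccos(A) for A normalized to [0, 1], so a complex field can be written as two pure phase values displayed on a single phase-only SLM. The sketch below implements this; the checkerboard interleaving is a common choice, not necessarily the scheme used in the paper.

```python
# Minimal sketch of double-phase encoding: a complex field A*exp(i*phi), with
# A in [0, 1], is decomposed into two phase-only terms,
#   A*exp(i*phi) = 0.5*exp(i*(phi + theta)) + 0.5*exp(i*(phi - theta)),
# theta = arccos(A), interleaved on one phase-only SLM in a checkerboard pattern.
import numpy as np

def double_phase(field):
    amp = np.abs(field)
    amp = amp / amp.max()                    # normalize amplitude to [0, 1]
    phi = np.angle(field)
    theta = np.arccos(np.clip(amp, 0.0, 1.0))
    p1, p2 = phi + theta, phi - theta        # the two phase solutions
    checker = (np.indices(field.shape).sum(axis=0) % 2).astype(bool)
    out = np.where(checker, p1, p2)          # interleave on a single phase-only SLM
    return np.mod(out, 2 * np.pi)
```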
In this paper, we overview a high-resolution three-dimensional (3D) holographic display that uses 2D images captured in an integral imaging system together with a dense ray-resampling technique. Holograms are generated from rays resampled from the 2D images. This method can improve the display resolution because each object is captured in focus, and the light-ray information is interpolated and resampled with high density on a ray-sampling plane located near the object. Numerical experimental results for different scenes show that the presented technique can reconstruct multiple objects at different depths with higher resolution than conventional integral-imaging-based holographic displays.
A holographic TV system based on multiview image and depth-map coding is proposed, together with an analysis of the effects of coding noise on the reconstructed images. A major problem for holographic TV systems is the huge amount of data that must be transmitted. It has been shown that this problem can be solved by capturing a three-dimensional scene with multiview cameras, deriving depth maps from the multiview images or capturing them directly, encoding and transmitting the multiview images and depth maps, and generating holograms at the receiver side. This method achieves the same subjective image quality as hologram data transmission at about 1/97000 of the data rate. Speckle noise, which masks coding noise unless the coded bit rate is extremely low, is shown to be the main determinant of reconstructed holographic image quality.
We have recently developed an electronic holography reconstruction system that tiles nine 4K×2K liquid crystal on silicon (LCOS) panels seamlessly. Magnifying optical systems eliminate the gaps between the LCOS panels by forming enlarged LCOS images on the system's output lenses. A reduction optical system then shrinks the tiled LCOS images back to their original size, restoring the original viewing-zone angle. Because this system illuminates each LCOS panel through polarizing beam splitters (PBSs) from different distances, viewing-zone-angle expansion was difficult, as it requires illuminating each LCOS panel from different angles. In this paper, we investigated viewing-zone-angle expansion of this system by integrating point light sources into the magnifying optical system. Three optical fibers illuminate an LCOS panel from different angles in time-sequential order, reconstructing three contiguous viewing zones. Full-color image reconstruction was realized by switching the laser source among the R, G, and B colors. We propose a fan-shaped optical fiber arrangement to compensate for the offset of the illumination beam center from the LCOS panel center. We also propose a solution to interference from higher-order diffracted light by inserting electronic shutter windows into the reduction optical system.
KEYWORDS: Computer generated holography, Integral imaging, 3D image processing, 3D displays, Digital holography, Image resolution, Remote sensing, Visualization, Holography, 3D acquisition
Various techniques for visualizing a 3-D object or scene have been proposed to date: stereoscopic displays, parallax barriers, lenticular approaches, integral imaging displays, and holographic displays. Application to a real, existing 3-D scene is one of the important issues. In this paper, the fundamental limitation of integral imaging displays for deep 3-D scenes is discussed first. Then, two main types of holographic display are overviewed, along with their fundamental advantages and disadvantages: the digital holography approach, which digitally captures an interference pattern, and the computer-generated hologram (CGH) approach, which generates holograms from a set of perspective images.
In this paper, we propose a new algorithm for calculating a computer-generated hologram (CGH) for 3D image display. The wavefront is calculated from light-ray information, which can be obtained from computer graphics or from image-based rendering using data captured by a camera array. View interpolation, hidden-surface removal, and gloss reproduction are easily implemented by utilizing image-based rendering or light-field rendering techniques. The method is similar to CGH based on the principle of the holographic stereogram (HS); however, in HS-based CGH, an image far from the hologram plane is blurred by light-ray sampling and by diffraction at the hologram surface, so it is not suitable for displaying deep scenes. We therefore propose the use of a virtual "ray-sampling (RS) plane" near the object: the wavefront on the RS plane is calculated from the light rays, the wavefront propagation is then simulated by Fresnel diffraction from the RS plane to the hologram, and the hologram pattern is obtained from the complex amplitude distribution on the hologram plane. Even if the RS plane is distant from the hologram, the resolution of the reconstructed image is not degraded, because the long-distance light propagation is calculated by diffraction theory. In the experiments, we obtained high-resolution, deep 3D images with gloss appearance using image data generated by commercial rendering software.
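The sketch below outlines the ray-sampling-plane idea: the angular ray distribution at each RS-plane sampling point is converted into a local wavefront segment by an FFT with a diffusive random phase, the segments tile the RS plane, and the field is then propagated to the hologram plane by scalar diffraction. The block size, pitch, wavelength, and the use of an angular-spectrum propagator in place of Fresnel diffraction are illustrative assumptions.

```python
# Minimal sketch of the ray-sampling (RS) plane method: directional ray images
# are converted into local wavefront segments by FFT, tiled on the RS plane,
# and propagated to the hologram plane by scalar diffraction.
import numpy as np

pitch, wl = 2e-6, 532e-9                       # sampling pitch [m], wavelength [m]

def rays_to_rs_field(ray_blocks):
    """ray_blocks: (Ny, Nx, B, B) array of directional intensity images per RS point."""
    Ny, Nx, B, _ = ray_blocks.shape
    field = np.zeros((Ny * B, Nx * B), dtype=complex)
    for j in range(Ny):
        for i in range(Nx):
            amp = np.sqrt(ray_blocks[j, i])
            amp = amp * np.exp(1j * np.random.uniform(0, 2 * np.pi, (B, B)))  # diffusive phase
            seg = np.fft.ifftshift(np.fft.ifft2(np.fft.fftshift(amp)))        # angle -> local wavefront
            field[j * B:(j + 1) * B, i * B:(i + 1) * B] = seg
    return field

def propagate(u, z):
    """Angular-spectrum propagation over distance z (stands in for Fresnel diffraction)."""
    fy = np.fft.fftfreq(u.shape[0], d=pitch)
    fx = np.fft.fftfreq(u.shape[1], d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 - (wl * FX) ** 2 - (wl * FY) ** 2, 0.0)
    H = np.exp(1j * 2 * np.pi / wl * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# The hologram fringe would then be obtained from the propagated complex amplitude
# and a reference wave (plane-wave reference assumed here), e.g.:
# hologram = np.abs(propagate(rs_field, 0.2) + reference) ** 2
```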