KEYWORDS: 3D acquisition, 3D displays, Photography, Optical resolution, 3D metrology, Image resolution, 3D image processing, Optical engineering, 3D vision, Visualization
We experimentally verified depth perception and the accommodation-convergence conflict in viewing integral photography. For comparison, the same measurements were performed with binocular stereoscopic images and real objects. First, depth perception in viewing an integral three-dimensional (3D) target was measured at three display resolutions: 153, 229, and 458 ppi. The results, evaluated with a statistical test at a significance level of 5%, showed that depth perception depended on the display resolution: the recognized depth ranges were 180, 240, and 330 mm at display resolutions of 153, 229, and 458 ppi, respectively. Analyzing the results in terms of image resolution suggested that depth perception occurred above 1.0 cycle per degree (cpd). The accommodation and convergence responses in viewing an integral 3D target displayed on a 458-ppi 3D display were then measured using a PowerRef 3. Evaluating the experimental results with a multiple comparison test, we found that 6 of the 10 observers showed no accommodation-convergence conflict when viewing the integral 3D target either inside or outside the depth of field. In conclusion, integral photography can provide a natural 3D image that looks like a real object.
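As a back-of-the-envelope check of the 1.0-cpd figure, the sketch below estimates the angular resolution of a reconstructed point from the pixel pitch, an assumed 3 mm lens gap, and an assumed 60 cm viewing distance (both taken from the companion accommodation experiment described later, not from this abstract); the depth z is taken as half of each reported depth range.

```python
import math

def cpd_at_depth(ppi, z_mm, gap_mm=3.0, view_dist_mm=600.0):
    """Rough geometric estimate of the maximum spatial frequency, in
    cycles per degree, of an integral 3D point reconstructed z_mm from
    the lens array.  A pixel of pitch p behind a lens with gap g emits
    a ray bundle of angular width ~p/g, so the point is blurred
    laterally by ~z*p/g and that blur is viewed from (view_dist - z)."""
    p_mm = 25.4 / ppi                                # pixel pitch
    blur_mm = z_mm * p_mm / gap_mm                   # lateral blur at depth z
    cycles_per_radian = (view_dist_mm - z_mm) / (2.0 * blur_mm)
    return cycles_per_radian * math.pi / 180.0       # -> cycles per degree

# Assumed: 3 mm lens gap and 60 cm viewing distance; z = half of each
# reported depth range.
for ppi, depth_range_mm in ((153, 180), (229, 240), (458, 330)):
    print(ppi, "ppi:", round(cpd_at_depth(ppi, depth_range_mm / 2), 2), "cpd")
```

Under these assumptions all three configurations land near 1.0 cpd, consistent with the threshold the abstract reports.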
Light field displays can provide a naturally viewable three-dimensional (3D) image without the need for special glasses. However, improving the resolution of 3D images is difficult because a considerable amount of image information is required. We therefore propose two new light field display methods that use multiple ultra-high-definition projectors to reproduce a high-resolution spatial image. The first method is based on integral imaging: multiple sets of elemental images are superimposed onto a lens array by projectors placed at optimal positions. An integral 3D image with enhanced resolution and viewing angle can be reproduced by projecting each set of elemental images as collimated light rays at different predetermined angles. We prototyped a display system with six projector units and realized a resolution of approximately 100,000 pixels and a viewing angle of approximately 30°. The second method, which aims at further resolution enhancement, is based on multi-view projection. By constructing a new display optical system that reproduces a full-parallax light field and by developing a special 3D screen with isotropic narrow diffusion characteristics of non-Gaussian shape, we could reconstruct optical 3D images that were difficult to achieve with conventional methods. We prototyped a display system comprising two projector units and realized a higher resolution of approximately 330,000 pixels compared with our previous full-parallax light field display systems.
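The resolution/viewing-angle trade-off can be illustrated with the standard single-unit geometry; the numbers below are hypothetical, chosen only to show how tiling the angular zones of several projector groups widens the total viewing angle.

```python
import math

def viewing_angle_deg(lens_pitch_mm, gap_mm):
    """Viewing angle of a single integral-imaging unit: the angle over
    which an elemental image of width equal to the lens pitch is seen
    through its lens across gap g."""
    return math.degrees(2 * math.atan(lens_pitch_mm / (2 * gap_mm)))

# Hypothetical numbers (not from the abstract): a 1 mm lens pitch and a
# 3.8 mm gap give ~15 deg per unit; tiling the zones of two angularly
# offset projector groups would then give roughly 30 deg in total.
single = viewing_angle_deg(1.0, 3.8)
print(f"per unit: {single:.1f} deg, two tiled zones: {2 * single:.1f} deg")
```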
We studied an integral three-dimensional (3D) TV based on integral photography to develop a new form of broadcasting that provides a strong sense of presence. The integral 3D TV can display natural 3D images that have motion parallax in the horizontal and vertical directions. However, a large number of pixels are required to obtain superior 3D images. To improve image quality, we applied ultra-high-definition video technologies to an integral 3D TV system. Furthermore, we are developing several methods for combining multiple cameras and display devices to improve the quality of integral 3D images.
KEYWORDS: Cameras, 3D modeling, 3D image processing, 3D displays, Robotics, Integral imaging, Image resolution, Image processing, Zoom lenses, Stereoscopy
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as flexible as that of current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
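The cooperative control step can be illustrated with a small geometric sketch; `narrowest_view_angle_deg` and the spherical object region are hypothetical stand-ins for the paper's regulation of the reproduced region, not its actual implementation.

```python
import numpy as np

def narrowest_view_angle_deg(cam_pos, region_center, region_radius):
    """Smallest full view angle that keeps a spherical object region
    entirely in frame for a camera aimed at the region's center; a
    hypothetical stand-in for the cooperative control that derives the
    reproduced region from the master camera's subject position."""
    d = np.linalg.norm(np.asarray(region_center, float) -
                       np.asarray(cam_pos, float))
    if d <= region_radius:
        raise ValueError("camera lies inside the object region")
    return np.degrees(2 * np.arcsin(region_radius / d))

# Example: a 1 m radius region around the subject, 5 m from a reference
# camera, needs at least ~23 deg to avoid cropping the object region.
print(f"{narrowest_view_angle_deg([0, 0, 0], [0, 0, 5.0], 1.0):.1f} deg")
```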
We propose a method that arranges multiple projectors in parallel and uses image processing to enlarge the viewing zone of an integral three-dimensional image display. We developed a technique that precisely corrects the projection distortion by combining projective and affine transformations. To combine the multiple viewing zones formed by the individual projectors continuously and smoothly, we also devised a technique that provides accurate adjustment by generating the elemental images of a computer graphics model at high speed. We constructed a prototype device using four projectors equivalent to 4K resolution and realized a viewing zone with measured viewing angles of 49.2 deg horizontally and 45.2 deg vertically. Compared with the use of only one projector, the prototype device expanded the viewing angles by approximately two times in both the horizontal and vertical directions.
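A minimal sketch of distortion pre-correction follows, assuming the positions where a projector's frame corners actually land have been measured in projector pixel coordinates via a camera feedback loop. The paper's method combines projective and affine transforms; a single homography per projector stands in for that pipeline here.

```python
import cv2
import numpy as np

def precorrect(frame, measured_corners, desired_corners):
    """Pre-warp `frame` so that, after the projection optics apply
    their distortion, the corners land on the desired positions."""
    # Homography taking where the corners land to where they should be;
    # warping the source with it pre-compensates the projection.
    h = cv2.getPerspectiveTransform(np.float32(measured_corners),
                                    np.float32(desired_corners))
    return cv2.warpPerspective(frame, h, (frame.shape[1], frame.shape[0]))
```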
A three-dimensional (3D) capture system based on integral imaging with an enhanced viewing zone using a camera array was developed. The viewing angle of the 3D image can be enlarged in proportion to the number of cameras composing the camera array. The 3D image was captured using seven high-definition cameras and converted for display on a 3D display system with a 4K LCD panel, and it was confirmed that the viewing angle of the 3D image was enlarged by a factor of 2.5 compared with that of a single camera.
The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than is possible with one LCD. A prototype of the display device was constructed using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD displays used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, an integral 3D image of higher quality can be displayed by increasing the number of LCDs. This integral 3D display structure also makes the whole device thinner than a projector-based display system, so it is expected to be applied to home television in the future.
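The reported figures are self-consistent: the 3D image pixel count of an integral display equals its elemental-lens count, as the short check below shows (the 2 × 2 panel tiling is an assumption, not stated in the abstract).

```python
# Each elemental lens reconstructs one spatial sample, so the 3D image
# pixel count equals the number of lenses.
lenses_h, lenses_v = 212, 119
print(lenses_h * lenses_v)            # 25228, matching the reported count

# Assuming a 2 x 2 tiling of HD panels (3840 x 2160 pixels in total),
# each lens is backed by roughly 3840 / 212 ~ 18 pixels horizontally,
# which sets the number of reproducible ray directions per lens.
```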
KEYWORDS: 3D displays, LCDs, 3D image processing, Stereoscopic displays, Eye, 3D metrology, Image resolution, Holography, Photography, 3D image reconstruction
We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under binocular and monocular viewing. The equipment comprised an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 × 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15, and 30 cm behind the display. Under the real object display condition, the target was displayed on the 3D display panel, and the display itself was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under both viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.
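Since accommodation is conventionally expressed in diopters (the reciprocal of the viewing distance in meters), the experiment's eight target positions map to the accommodation demands computed below; if IP drives accommodation naturally, the measured responses should track these values.

```python
# Accommodation demand in diopters (D = 1 / distance in meters) for the
# eight target depths: display at 60 cm, targets 15/10/5 cm in front,
# on the panel, and 5/10/15/30 cm behind it.
display_m = 0.60
offsets_m = (-0.15, -0.10, -0.05, 0.0, 0.05, 0.10, 0.15, 0.30)
for off in offsets_m:
    d = display_m + off                        # eye-to-target distance
    print(f"target at {d * 100:3.0f} cm -> {1 / d:.2f} D")
```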
KEYWORDS: 3D image processing, Imaging systems, Integral imaging, 3D displays, 3D image reconstruction, LCDs, 3D vision, Staring arrays, Multichannel imaging systems, Image quality
We developed a three-dimensional (3-D) imaging system with an enlarged horizontal viewing angle for integral imaging that uses our previously proposed method of controlling the ratio of the horizontal to vertical viewing angles by tilting the lens array used in a conventional integral imaging system. This ratio depends on the tilt angle of the lens array. We conducted an experiment to capture and display 3-D images and confirmed the validity of the proposed system.
A wide viewing-zone-angle full-color electronic holography reconstruction system was developed. Time-division multiplexing of RGB color light and space-division multiplexing of viewing-zone angles are adopted to keep the optical system compact. Undesirable light, such as illumination light, phase-conjugate light, and higher-order diffraction light, is eliminated by half-zone-plate hologram generation and single-sideband reconstruction. Color aberration and astigmatism caused by the reproduction optical system are analyzed and reduced. The developed system expands the viewing-zone angle of the full-color holographic image to three times that of the original while suppressing undesirable light, color aberration, and astigmatism.
We are studying electronic holography and have developed a real-time color holography system for live scenes that includes three functional blocks: a capture block, a processing block, and a display block. In this paper, we introduce the developed system after describing the basic idea of quickly calculating a hologram from an IP image. The first block, the capture block, uses integral photography (IP) technology to capture color 3-D objects under natural light in real time. The second block, the processing block, consists of four general-purpose personal computers that generate holograms from IP images in real time. Three half-zone-plate holograms for the red, green, and blue (RGB) channels are generated for every captured IP image by using the fast Fourier transform (FFT). The last block, the display block, mainly consists of three liquid crystal displays to display the holograms and three RGB laser sources to reconstruct the color 3-D objects. All blocks work in real time, i.e., at 30 color frames per second.
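The propagation core of such a processing block can be sketched with the angular-spectrum method; the wavelength, pixel pitch, propagation distance, and reference angle below are hypothetical, and the per-elemental-image half-zone-plate bookkeeping of the actual system is omitted.

```python
import numpy as np

def angular_spectrum(u0, wavelength, pitch, z):
    """Propagate a sampled complex field u0 by distance z (evanescent
    components are crudely clamped, which is fine for a sketch)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - f2))
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))

wl, pitch = 532e-9, 8e-6                     # hypothetical laser and pitch
u0 = np.zeros((512, 512), complex)
u0[256, 256] = 1.0                           # a single object point
u = angular_spectrum(u0, wl, pitch, 0.05)    # 5 cm to the hologram plane
x = np.arange(512) * pitch
ref = np.exp(1j * 2 * np.pi * x * np.sin(0.02) / wl)  # off-axis reference
hologram = np.abs(u + ref[None, :]) ** 2     # recordable interference fringe
```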
Holography is considered an ideal 3D display method. We generated a hologram under white light. The infrared depth camera we used captures depth information as well as color video of the scene with an accuracy of 20 mm at an object distance of 2 m. In this research, we developed a software converter that converts the HD-resolution depth map to a hologram. In this conversion method, each elemental diffraction pattern on the hologram plane was calculated beforehand according to the object distance and the maximum diffraction angle determined by the reconstruction SLM device (a high-resolution LCOS). The reconstructed 3D image was observed.
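The look-up-table idea can be sketched as follows, with hypothetical wavelength, pitch, and depth quantization; each table entry is a Fresnel zone pattern clipped to the SLM's diffraction cone.

```python
import numpy as np

wl, pitch = 532e-9, 4e-6                      # hypothetical wavelength/pitch
theta_max = np.arcsin(wl / (2 * pitch))       # SLM's maximum diffraction angle

def zone_pattern(z, radius_px):
    """Fresnel zone pattern of a point source at distance z."""
    ax = np.arange(-radius_px, radius_px + 1) * pitch
    r2 = ax[:, None] ** 2 + ax[None, :] ** 2
    return np.cos(np.pi * r2 / (wl * z))

depths = np.linspace(0.10, 0.30, 16)          # quantized depth levels (m)
tables = []
for z in depths:
    # Clip each pattern to the diffraction cone (capped here to keep the
    # sketch small); larger radii contribute nothing the SLM can show.
    radius = min(int(z * np.tan(theta_max) / pitch), 256)
    tables.append(zone_pattern(z, radius))
# For each depth-map pixel, the table entry for its depth is accumulated
# onto the hologram at that pixel's position (loop omitted).
```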
We are studying electronic holography and have already developed a real-time color holography system for live scenes that includes three functional blocks: a capture block, a processing block, and a display block. One issue with such systems is that half of the captured 3-D information is discarded by the half-zone-plate processing in the processing block, which means the resolution of the reconstructed 3-D objects is halved at that stage. This issue affects not only our system but all similar systems, because even now electronic display devices do not have sufficient resolution for holograms. In this paper, we propose using a semi-lens lens array (SLLA) in the capture block to solve this issue; in the SLLA, the optical axis of each elemental lens is not at the center of the lens but at its edge. In addition, we describe the processing block for the SLLA. We show basic experimental results indicating that the SLLA performs better than a general lens array.
KEYWORDS: Holograms, Digital signal processing, 3D image reconstruction, Lenses, Image processing, Photography, Field programmable gate arrays, Near field diffraction, Cameras, 3D image processing
Holography is a 3-D display method that fully satisfies the visual characteristics of the human eye. However, a hologram must normally be developed in a darkroom under laser illumination. We attempted hologram generation under white light by adopting an integral photography (IP) technique as the input. In this research, we developed a hardware converter that converts IP input (with 120 × 66 elemental images) to a hologram with high-definition television (HDTV) resolution (approximately 2 million pixels). This conversion can be carried out in real time. In this conversion method, each elemental image can be extracted and processed independently. Our hardware contains twenty 300-MHz floating-point digital signal processors (DSPs) operating in parallel. We verified real-time conversion operation with the implemented hardware.
We are studying electronic holography and have developed a real-time color holographic movie system that includes three functional blocks: a capture block, a processing block, and a display block. We introduce the system and its technology in this paper. The first block, the capture block, uses integral photography (IP) technology to capture color 3-D objects in real time. This block mainly consists of a lens array with approximately 120 (W) × 67 (H) convex lenses and a video camera with 1920 (W) × 1080 (H) pixels to capture IP images. In addition, an optical system that reduces the crosstalk between elemental images is mounted. The second block, the processing block, consists of two general-purpose personal computers that generate holograms from IP images in real time. Three half-zone-plate holograms for the red, green, and blue (RGB) channels are generated for each frame by using the fast Fourier transform. The last block, the display block, mainly consists of three liquid crystal displays for displaying the holograms and three RGB laser sources to reconstruct the color 3-D objects. This block is a single-sideband holography display, which cuts off conjugate and carrier images while passing primary images. All blocks work in real time, i.e., at 30 frames per second.
Single-sideband holography with half-zone-plate processing is a well-known method of displaying computer-generated holograms (CGHs) on electronic devices, such as liquid crystal displays (LCDs), whose pixel intervals are not sufficiently narrow. The half-zone plate permits only primary images to pass through a single-sideband spatial filter and cuts off conjugate and carrier images; however, this method has a problematic restriction: the objects being shot must be either entirely in front of or entirely behind the hologram. This paper describes a new approach that places objects on both sides of the hologram simultaneously, eliminating this restriction. The underlying idea is that when the half-zone plate permits the primary images in front of the hologram to pass through a single-sideband spatial filter, the corresponding conjugate images cannot pass through it. When we prepare a half-zone plate on the opposite side as well, the primary images on both sides of the hologram can pass through while the conjugate images cannot. This approach not only doubles the usable object region but also reduces computational time because objects can be placed close to the hologram. We implemented this approach, tested it, and confirmed its effectiveness.
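What the half-zone plate realizes optically corresponds to the following single-sideband filter in the hologram's frequency plane (a minimal numpy sketch, not the paper's implementation).

```python
import numpy as np

def single_sideband(hologram):
    """Keep one half of the spatial-frequency plane; the conjugate image
    occupies the other half and is cut off along with the carrier."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    n = spectrum.shape[0]
    spectrum[: n // 2 + 1, :] = 0      # conjugate sideband + carrier row
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```

The both-sides approach pairs this with a half-zone plate of the opposite sign, so primary images in front of and behind the hologram both survive while both sets of conjugates are blocked.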
KEYWORDS: Holograms, 3D image reconstruction, LCDs, Photography, Lenses, Holography, 3D image processing, IP cameras, Near field diffraction, Light sources
This paper describes a method of generating holograms by calculation from an image captured using the integral photography (IP) technique. In order to reduce the calculation load in hologram generation, a new algorithm that shifts the optical field along the exit plane of the microlenses in a lens array is proposed. We also explain the aliasing that occurs when a hologram is generated from IP, and we suggest an elemental image size and microlens focal length at which aliasing does not occur. Finally, we use the algorithm to calculate a hologram from an IP image of a real object captured with an IP camera, confirming by optical reconstruction that a three-dimensional image can be formed from the hologram.
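The aliasing condition can be checked with a rough geometric estimate: the steepest ray leaving a microlens must stay within the angle the hologram's pixel pitch can record. The numbers below are hypothetical, not the paper's recommended values.

```python
import math

wl, holo_pitch = 532e-9, 8e-6                    # hypothetical values
theta_max = math.asin(wl / (2 * holo_pitch))     # recordable angle limit

def aliasing_free(elemental_width_m, focal_length_m):
    """True if rays from the elemental image stay below the hologram's
    Nyquist angle: tan(theta) ~ w / (2 f) vs sin(theta_max) = wl / (2 p)."""
    return math.atan(elemental_width_m / (2 * focal_length_m)) <= theta_max

# Hypothetical: 1 mm elemental images behind 16 mm focal-length lenses.
print(aliasing_free(1e-3, 16e-3), math.degrees(theta_max))
```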
This paper describes a holographic display using liquid crystal panels from which holographic images can be perceived with both eyes. In the display, the hologram plane is composed of two high-resolution liquid crystal panels, each with a pixel pitch of 10 μm (both horizontally and vertically) and 3840 (horizontal) × 2048 (vertical) pixels. The horizontal viewing zone is doubled by applying the viewing-zone enlargement method, which uses higher-order diffraction beams, to the high-resolution liquid crystal panels. In addition, obstacles resulting from conjugate beams are eliminated using the modified single-sideband method. As a result, the viewing zone of the display is 6.5 cm, equivalent to the distance between the pupils, at a viewing distance of 90 cm. Thus, moving three-dimensional holographic images free of conjugate-beam obstacles could be perceived with both eyes.
We have developed a new system based on computer-generated holography using a hologram plane with a sampling structure like that of a liquid crystal display. This system can eliminate beams from conjugate images and enlarge the viewing zone, which are achieved by combining the following two methods. The first method enlarges the viewing zone by using higher-order diffraction beams generated because of the sampling structure of the hologram plane. If the angle between the object beam and the reference beam is larger than the angle determined by the sampling period on the hologram plane, aliasing occurs in the fringe patterns. In this method, the viewing zone is enlarged by using a spatial filter to extract the object beams from the higher-order diffraction beams generated from aliasing and then combining them. The second method is a modification of the single-sideband method, which is known to eliminate the conjugate beams but to restrict the viewing zone to a narrow range. The modified method relaxes this restriction by dividing the range of the object beams and reproducing each of them. This paper presents the developed system and the results of experiments that confirmed its effectiveness in enlarging the viewing zone.
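A rough estimate shows the scale of the effect; the pitch matches the 10-μm panels mentioned above, but the remaining figures are illustrative rather than the paper's measurements.

```python
import math

# A sampled hologram with pixel pitch p diffracts into orders spaced by
# about arcsin(wl / p); each order carries a replica of the object beam,
# so extracting and recombining an adjacent order widens the zone.
wl, p = 532e-9, 10e-6
half = math.degrees(math.asin(wl / (2 * p)))   # half-width of one order
print(f"single-order zone: +/-{half:.2f} deg")
print(f"adjacent order folded in: ~{4 * half:.2f} deg total span")
```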