A three-dimensional model was generated from an object captured by an RGB-D camera, and the generated three-dimensional model was placed in a computer. The model was imaged by a two-dimensional camera array with a fixation viewpoint set, producing a multi-view stereoscopic image. When setting the capture parameters for the multi-view stereoscopic image, the region of the two-dimensional camera array that minimizes spatial distortion was calculated beforehand. Pixel position conversion was carried out on the multi-view stereoscopic image captured with the calculated parameters, an elemental image was generated on the LCD, and the three-dimensional model of the external object could be displayed by integral photography.
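As an illustration of the pixel position conversion described above, here is a minimal sketch of how an N x N multi-view image set can be interleaved into elemental images for a lens array. The exact sub-pixel mapping depends on the display optics and is an assumption here, not the authors' implementation.

```python
import numpy as np

def multiview_to_elemental(views):
    """Rearrange an N x N multi-view image set into an elemental-image array.

    views: array of shape (N, N, H, W, 3); views[i, j] is the image taken by
    the camera at row i, column j of the two-dimensional camera array.
    Returns an array of shape (H*N, W*N, 3) in which each elemental image
    (one per lenslet) collects pixel (y, x) from every viewpoint.
    """
    N, _, H, W, C = views.shape
    elemental = np.zeros((H * N, W * N, C), dtype=views.dtype)
    for i in range(N):
        for j in range(N):
            # Behind each lenslet, viewpoint (i, j) occupies one sub-pixel
            # cell. The left-right flip (N-1-j) accounts for ray inversion
            # through each lenslet; the exact mapping depends on the optics.
            elemental[i::N, (N - 1 - j)::N] = views[i, j]
    return elemental
```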
We developed integral photography with parallax only in the horizontal direction, namely one-dimensional integral photography, using a 4K LCD and lenticular lenses as the display equipment. To reproduce a three-dimensional image, a fixation point was set for the camera array in the computer, a multi-view stereoscopic image was captured, pixel position conversion was performed, and an elemental image was generated on the LCD and displayed. The display characteristics of resolution and depth distance of the prototyped one-dimensional integral photography are described. Furthermore, based on subjective evaluation experiments, we describe the degree to which the reduction of vertical spatial frequency influences depth perception and how the toed-in capturing method influences the reproduction of depth distance.
Animals see the world through their eyes. Even though plants do not have organs of the visual system, plants are receptive to their visual environment. However, the exact mechanism of vision in plants has yet to be determined. For plants, vision is one of the important senses because they store energy from light. Light is not only the source of growth but also a vector of information for plants. Photosynthesis is one of the typical phenomena in which light induces a response from plants. Photosynthesis is the process that converts light energy into chemical energy and produces oxygen. In this study, we have emulated three-dimensional vision in plants by artificial photosynthesis. Instead of using real plant cells, we have exploited the artificial photosynthetic properties of a photoelectrochemical (PEC) cell. A silicon-based PEC cell sensitive to the red/far-red region (600-850 nm) was used as a single-pixel sensor, and a mechanical scanner was used to simulate a two-dimensional sensor array with the single-pixel sensor. We successfully obtained results by measuring the photocurrents generated by photosynthetic water splitting.
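The single-pixel scanning scheme described above can be emulated in a few lines. The function below is a hypothetical stand-in for the instrument interface, not the authors' code.

```python
import numpy as np

def scan_image(measure_photocurrent, rows, cols, step_mm=1.0):
    """Emulate a 2-D sensor array with a single-pixel PEC cell and a scanner.

    measure_photocurrent(x_mm, y_mm) is assumed to drive the mechanical
    stage to position (x, y) and return the photocurrent (A) from
    photosynthetic water splitting at that point; it stands in for the
    real instrument interface.
    """
    image = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            image[r, c] = measure_photocurrent(c * step_mm, r * step_mm)
    return image

# Hypothetical usage with a stand-in for the real instrument:
# image = scan_image(lambda x, y: stage_and_read(x, y), rows=64, cols=64)
```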
The accommodation and convergence responses in a light field display that can provide up to 8 images to each eye of viewers are investigated. The increase of the depth of field (DOF) with the increasing number of projected images is verified for both monocular and binocular viewing. Seven subjects with visual acuity better than 1.0 show that their responses can match their real-object responses as the number of images increases to 7 or more, though there are distinct differences between objects. Binocular matching performance is more stable than monocular performance when fewer than 6 images are projected, but the stability of the accommodation response increases once the number exceeds 7.
Multi-view stereoscopic images were produced via the pickup method for an object set in the computer, and integral photography was generated from these multi-view stereoscopic images. During pickup, the optical axis of each camera in the array was aligned with one point in front of the camera array. A calculation method was derived for the depth position and width of the object displayed by integral photography generated in this way. Based on the derived calculation method, the distortion in the reproduced depth position and width of the object in the prototyped integral photography was considered.
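The toed-in pickup geometry, in which each camera's optical axis passes through one point in front of the array, can be sketched as a simple look-at computation. The coordinate conventions below are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

def toed_in_directions(camera_xs, fixation, baseline_z=0.0):
    """Aim each camera on a horizontal baseline at a common fixation point.

    camera_xs: x positions of the cameras (the array lies on z = baseline_z).
    fixation:  (x, y, z) of the fixation point in front of the array.
    Returns unit optical-axis vectors, one per camera (the toed-in setup
    described above; a parallel setup would return identical vectors).
    """
    fixation = np.asarray(fixation, dtype=float)
    axes = []
    for x in camera_xs:
        d = fixation - np.array([x, 0.0, baseline_z])
        axes.append(d / np.linalg.norm(d))
    return np.array(axes)

# Example: five cameras, 65 mm apart, fixating 1 m in front of the array.
axes = toed_in_directions(np.arange(-2, 3) * 0.065, fixation=(0, 0, 1.0))
```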
A method of displaying a multi-view image set prepared by a computer on an electronic display device based on integral photography has been formulated. With this device, the accommodation response was measured, and it was found to change linearly both within and outside the range of the depth of focus. In contrast, when the accommodation response was measured using stereoscopic images, it showed a linear change within and outside the depth of focus only at spatial resolutions of three cycles per degree (cpd) or less. Therefore, measurements of the accommodation response for integral photography at a spatial resolution of 3 cpd or less cannot determine whether the response is influenced by the integral photography display method or by the reduction in spatial resolution. From these results, we found that a spatial resolution of 3 cpd or more is required outside the range of the depth of focus when measuring the accommodation response for integral photography.
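The 3 cpd threshold discussed above ties a pattern's on-screen period to the viewing distance. A small worked conversion follows; the numbers are illustrative, not taken from the paper.

```python
import math

def spatial_frequency_cpd(cycle_pitch_mm, viewing_distance_mm):
    """Spatial frequency in cycles per degree for a grating whose one cycle
    spans cycle_pitch_mm on the display, seen from viewing_distance_mm."""
    degrees_per_cycle = math.degrees(
        2 * math.atan(cycle_pitch_mm / (2 * viewing_distance_mm)))
    return 1.0 / degrees_per_cycle

# Illustrative numbers: a 2.9 mm cycle viewed at 500 mm is about 3 cpd,
# the threshold discussed above.
print(spatial_frequency_cpd(2.9, 500))  # ~3.0
```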
KEYWORDS: 3D displays, Stereoscopy, Diffusers, Glasses, 3D image processing, Optical engineering, Microlens array, Refractive index, Eye, RGB color model
Color moirés induced in contact-type multiview three-dimensional and light-field imaging are reviewed, slanted color moirés are introduced, and the reason why they become invisible as the slanting angle increases is explained. The color moirés are induced by the structural uniqueness of this type of imaging, i.e., viewing zone-forming optics (VZFO) on the display panel. The moirés behave differently from those produced by the beating effect: (1) they are basically chirped; (2) their fringe numbers and phases vary with the viewer's position and viewing angle at a given viewing distance; (3) the pattern period of the VZFO is at least several times that of the pixel pattern; and (4) they are colored. The color moirés can hardly be eliminated because they are induced structurally, but they can be minimized either by reducing the regularity of the pixel pattern using a diffuser between the panel and the VZFO or by aligning the VZFO's pattern at a certain slanting angle to the pixel pattern of the panel.
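For orientation, the textbook beat-moiré period between two slanted line gratings is sketched below. The chirped color moirés described above deviate from this simple model, but it shows the same trend: fringes become finer, and hence less visible, as the slant angle grows.

```python
import math

def moire_period(p1, p2, theta_deg):
    """Classical moiré period of two line gratings with periods p1 and p2
    slanted by theta degrees (textbook beat formula; an approximation for
    the structurally induced moirés discussed above)."""
    t = math.radians(theta_deg)
    return (p1 * p2) / math.sqrt(p1**2 + p2**2 - 2 * p1 * p2 * math.cos(t))

# Example (illustrative pitches): a 0.3 mm pixel pattern under a 0.31 mm
# VZFO pattern. The fringe period shrinks rapidly with the slant angle.
for theta in (0, 5, 10, 20):
    print(theta, round(moire_period(0.3, 0.31, theta), 3), "mm")
```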
A simulator that can test the visual perception response of light field displays is introduced. The simulator can provide up to 8 view images to each eye simultaneously to test the differences between different numbers of view images under the supermultiview condition. The images pass through a 4 mm wide window located at the pupil plane of each eye. Since each view image has its own slot in the window, each image enters the eye separately, without overlapping other images. The simulator shows that the vergence response of viewers' eyes to an image at a certain distance is closer to that for a real object at the same distance with 4 views than with 2 views. This indicates that the focusable depth range will increase as the number of different view images increases.
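The slot arithmetic behind the simulator is straightforward, assuming the views share the 4 mm window equally.

```python
# Each of the 8 view images gets its own slot in the 4 mm window at the
# pupil plane, so the per-view slot width is:
window_mm = 4.0
num_views = 8
slot_mm = window_mm / num_views   # 0.5 mm per view image
print(slot_mm)                    # 0.5
```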
The Kinect sensor is a device that can capture a real scene with a camera and a depth sensor. A virtual model of the scene can then be obtained as a point cloud representation, from which a complex hologram can be computed. However, complex data cannot be used directly because display devices cannot modulate amplitude and phase at the same time. Binary holograms are commonly used since they present several advantages. Among the methods proposed for converting holograms into binary format, direct-binary search (DBS) not only gives the best performance but also allows the display parameters of the binary hologram to be chosen differently from those of the original complex hologram. Since the wavelength and reconstruction distance can be modified, chromatic aberrations can be compensated. In this study, we examine the potential of DBS for RGB holographic display.
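A minimal sketch of direct-binary search follows, with a single FFT standing in for the actual propagation model; wavelength and reconstruction distance would enter through that model, which is what enables the re-targeting discussed above. This is a generic illustration, not the study's implementation.

```python
import numpy as np

def dbs_binarize(target_field, iterations=5, seed=None):
    """Direct-binary-search sketch: start from a random binary hologram and
    flip one pixel at a time, keeping a flip only if it lowers the error
    between the reconstruction and the target complex field.

    target_field: desired complex field in the reconstruction plane
    (an assumption for this sketch).
    """
    rng = np.random.default_rng(seed)
    h = rng.integers(0, 2, target_field.shape).astype(float)

    def error(hologram):
        recon = np.fft.fft2(hologram)           # stand-in for propagation
        return np.sum(np.abs(recon - target_field) ** 2)

    best = error(h)
    for _ in range(iterations):
        for idx in np.ndindex(h.shape):
            h[idx] = 1.0 - h[idx]               # trial flip
            e = error(h)
            if e < best:
                best = e                        # keep the flip
            else:
                h[idx] = 1.0 - h[idx]           # revert
    return h
```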
We developed a measurement tool for binocular eye movement and used it to examine the perception of depth distance in integral photography images, a type of three-dimensional image. Furthermore, we evaluated the perception of depth distance in integral photography images in a subjective test and considered the depth-distance perception results from these two experiments together. Additionally, we examined the perception of depth distance for real objects and compared the results for integral photography images with those for real objects.
KEYWORDS: 3D image processing, 3D displays, Holograms, Eye, Projection systems, Camera shutters, 3D image reconstruction, Mirrors, Stereo holograms, Computer simulations
A super-multiview condition simulator that can project up to four different view images to each eye is introduced. Using images having both disparity and perspective, this simulator shows that the depth of field (DOF) is extended beyond the default DOF values as the number of different view images projected simultaneously but separately to each eye increases. The DOF range can be extended to nearly 2 diopters with the four simultaneous view images. However, the DOF increments for images with both disparity and perspective are not as prominent as those for images with disparity only.
In the first part of this paper, the principle and development of an IP display using computer software are described. Next, the measurement results of the accommodation response for the developed IP display are described. The accommodation response changed linearly as the depth position of the visual target moved within and beyond the range of the depth of focus. In addition, the influence of image blur on the accommodation response was investigated experimentally using stereoscopic images. The results showed that the accommodation response coincided with the convergence point of stereoscopic images at spatial resolutions below 3 cpd. Based on these results, the measurement results of the accommodation response for the developed IP display are discussed, along with the requirements for the measurement conditions of the accommodation response for IP.
Despite an increased need for three-dimensional (3-D) functionality in curved displays, comparisons pertinent to human factors between curved and flat panel 3-D displays have rarely been tested. This study compared stereoscopic 3-D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures. Twenty-four participants took part in the experiments and viewed 3-D content with two different displays (flat and curved 3-D display) within a counterbalanced and within-subject design. For the 30-min viewing condition, a paired t-test showed significantly reduced P300 amplitudes, which were caused by engagement rather than cognitive fatigue, in the curved 3-D viewing condition compared to the flat 3-D viewing condition at P3 and P4. No significant differences in P300 amplitudes were observed for 60-min viewing. Subjective ratings of realness and engagement were also significantly higher in the curved 3-D viewing condition than in the flat 3-D viewing condition for 30-min viewing. Our findings support that curved 3-D displays can be effective for enhancing engagement among viewers based on specific viewing times and environments.
KEYWORDS: 3D vision, 3D displays, Visualization, Flat panel displays, 3D metrology, Eye, Electrodes, Electroencephalography, 3D image processing, 3D acquisition
As advanced display technology has developed, much attention has been given to flexible panels. On top of that, with the momentum of the 3D era, stereoscopic 3D techniques have been combined with curved displays. However, despite the increased need for 3D functionality in curved displays, comparisons between curved and flat panel displays with 3D views have rarely been tested. Most previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It is generally known that curved displays are more effective in enhancing involvement in specific content stories because the field of view and the distances from the viewer's eyes to both edges of the screen are more natural on curved displays than on flat panels. With flat panel displays, ocular torsion may occur when viewers move their eyes from the center to the edges of the screen to continuously track rapidly moving 3D objects. This is due in part to the difference between the viewing distance from the center of the screen to the viewer's eyes and that from the edges of the screen. Thus, this study compared S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating significant subjective and objective measures.
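The viewing-distance argument above can be made concrete with simple geometry. The numbers below are illustrative assumptions, not values from the study.

```python
import math

def edge_distance_flat(center_mm, half_width_mm):
    """Eye-to-edge distance for a flat panel viewed from its center line."""
    return math.hypot(center_mm, half_width_mm)

def edge_distance_curved(radius_mm):
    """For a curved panel whose radius equals the viewing distance, every
    point of the screen is (ideally) equidistant from the viewer."""
    return radius_mm

# Illustrative numbers: a 1200 mm wide screen viewed from 600 mm.
print(edge_distance_flat(600, 600))   # ~848.5 mm at the edges of a flat panel
print(edge_distance_curved(600))      # 600 mm everywhere on a matched curve
```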
KEYWORDS: Visualization, Photography, LCDs, 3D image processing, 3D displays, Spatial resolution, Image fusion, Eye, Human vision and color perception
In this paper, the accommodation responses to integral photography still images were measured. The experimental results showed that the accommodation response changed linearly with the depth position shown by the integral photography images, even when the images were located outside the depth of field. Furthermore, the discrimination of depth perception, which relates to the blur effect in integral photography images, was evaluated subjectively to examine its influence on the accommodation response. The range over which depth perception could be discriminated was narrow compared with the range of the rectilinear accommodation response. However, these results were consistent with the trend of statistical significance for the discrimination of depth perception outside the range of subjectively effective discrimination.
KEYWORDS: Visualization, Head, Image resolution, Photography, Motion models, Televisions, 3D image processing, 3D displays, 3D modeling
Depth perception caused by motion parallax from a horizontally moving pickup device was examined. Image sequences of a real scene were captured using a horizontally moving pickup device either alone or with a fixation point set. As a result, depth perception performance was relatively high when the horizontally moving pickup device was used with a fixation point. To examine this result, the displacement and differential displacement of the pickup device and the motion perception for the visual stimuli (the captured image sequences) were investigated.
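A small-angle pinhole approximation, an assumption here rather than the paper's model, shows why setting a fixation point changes the displacement pattern that carries the depth information.

```python
def image_shift_mm(f_mm, dx_mm, depth_mm, fixation_mm=None):
    """Approximate image displacement of a point at depth_mm for a camera
    translated by dx_mm (small-angle pinhole model, an assumption here).

    Without a fixation point the shift is f*dx/Z; with the camera rotated
    to keep a fixation point at distance D centered, the shift becomes
    f*dx*(1/Z - 1/D), so points at the fixation depth stop moving and the
    parallax directly encodes depth relative to D.
    """
    if fixation_mm is None:
        return f_mm * dx_mm / depth_mm
    return f_mm * dx_mm * (1.0 / depth_mm - 1.0 / fixation_mm)

# Example: f = 35 mm lens, 10 mm translation, point at 2 m, fixation at 1 m.
print(image_shift_mm(35.0, 10.0, 2000.0))          # parallel pickup
print(image_shift_mm(35.0, 10.0, 2000.0, 1000.0))  # with fixation point
```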
KEYWORDS: 3D vision, Visualization, 3D displays, 3D visualizations, Visual system, Cell phones, 3D image processing, Electroencephalography, Manufacturing, Mobile devices
With the advent of autostereoscopic display techniques and the increased demand for smartphones, there has been significant growth in the mobile TV market. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even if mobile 3D technology drives the current market growth, one important thing must be considered for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing an optimized viewing environment from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system when exposed to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results of this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and lead to gradual progress toward human-friendly mobile 3D viewing.
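A generic sketch of the kind of before/after brainwave comparison described above follows; the channels, frequency bands, and statistics are assumptions, not the study's protocol.

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Mean spectral power of one EEG channel in the [lo, hi] Hz band.

    A generic periodogram-based estimate for illustrating the before/after
    comparison; the study's actual analysis is not specified here.
    """
    eeg = np.asarray(eeg, dtype=float)
    psd = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2 / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# e.g. alpha-band (8-13 Hz) power before vs. after a viewing session:
# ratio = band_power(after, 256, 8, 13) / band_power(before, 256, 8, 13)
```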
We have already developed glasses-free three-dimensional (3-D) displays using multiple projectors and a special diffuser screen, resulting in a highly realistic communication system. The system can display 70-200 inch large-sized 3-D images with full high-definition video image quality. The displayed 3-D images were, however, only computer-generated graphics or still images of actual objects. In this work, we studied a 3-D video capturing method for our multi-projection 3-D display. We analyzed the optimal arrangement of cameras for the display and the image quality as influenced by calibration error. In the experiments, we developed a prototype multi-camera system using 30 high-definition video cameras. The captured images were corrected via image processing optimized for the display. We successfully captured and displayed, for the first time, 3-D video of actual moving objects in our glasses-free 3-D video system.
KEYWORDS: Cameras, 3D displays, Imaging systems, Image processing, 3D image processing, Stereoscopy, 3D visualizations, 3D modeling, Eye
The multi-view three-dimensional (3D) visualization provided by a 3D display requires reproduction of scene light fields. The complete light field of a scene could be reproduced from images of the scene taken, ideally, from infinite viewpoints. However, capturing images of a scene from infinite viewpoints is not feasible for practical applications. Therefore, in this work, we propose a sparse camera image capture system and an image-based virtual image generation method for 3D imaging applications. We show a virtual image produced by the proposed algorithm for generating an in-between view of two real images captured with our multi-camera image capture system.
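A minimal disparity-based sketch of in-between view generation follows. The proposed algorithm is not specified in the abstract, so this is a generic illustration only; real systems add occlusion handling and blending.

```python
import numpy as np

def in_between_view(left, right, disparity, alpha=0.5):
    """Sketch of disparity-based interpolation of a virtual in-between view.

    left, right: images of shape (H, W, 3) from two neighboring cameras;
    disparity:   per-pixel horizontal disparity (pixels) from left to right;
    alpha:       virtual camera position, 0 = left, 1 = right.
    Forward-warps the left image by alpha * disparity and fills the
    remaining holes from the right image.
    """
    H, W, _ = left.shape
    virtual = np.zeros_like(left)
    filled = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            xv = int(round(x - alpha * disparity[y, x]))
            if 0 <= xv < W:
                virtual[y, xv] = left[y, x]
                filled[y, xv] = True
    virtual[~filled] = right[~filled]  # crude hole filling
    return virtual
```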
A projector-array-based 70-inch screen display, our first prototype, has smooth horizontal parallax and a dense viewpoint interval narrower than half the interocular distance. Our final goal is to develop advanced autostereoscopy so that viewers are not compelled to wear 3D glasses and do not have to watch under insufficient resolution. We believe that larger screen size, higher image quality, and natural image characteristics such as motion parallax and multiple-viewer capability are priority targets for professional 3D display applications. By combining a proprietary screen and our developed projector array, we designed and implemented a kind of autostereoscopic projection display. Enough pixels to render true high definition are assigned to every viewpoint; the initial implementation has more than 100 million pixels. The observed horizontal motion parallax is smooth and reduces flipping. This feasibility study clarified two factors: the strong requirement for an array-friendly, feature-ready projector, and the existence of some image glitches. The appearance of moirés and ghost images is the most significant cause of visual fatigue in our implementation; some of these problems were tackled and suppressed. The projectors for the array must be prepared to manage color space, brightness, geometric image compensation, and accurate frame synchronization. Extracting and examining the practical problems of an autostereoscopic projection display is the first step of our feasibility study. Our goal is to establish an autostereoscopic display with natural and superior horizontal parallax.
We have developed several three-dimensional display systems matched to the characteristics of the human visual field. In this article, we describe the display systems we developed, which are matched to human communication in the close-range, medium-range, and distant-range categories.
We present the general concept of the proposed 3D imaging system, called the 3D-geometric camera (3D-gCam), which picks up pixel-wise 3D surface profile information along with color information. The 3D-gCam system includes two isotropic light sources placed at different geometric locations, an optical alignment system for aligning the light rays projected onto the scene, and a high-precision camera. To determine pixel-wise distance information, the system captures two images of the same scene, one during the strobing of each light source. The intensity at each pixel location in these two images, together with the displacement between the light sources, is then used to calculate the distance of the object points corresponding to the pixel locations, generating a dense 3D point cloud. The approach is suitable for capturing 3D and color information synchronously in a high-definition image format.
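One plausible reading of the two-source intensity ratio is sketched below, assuming inverse-square falloff with the sources displaced along the viewing direction; this is an illustrative assumption, not necessarily the paper's exact geometry or derivation.

```python
import math

def distance_from_two_flashes(i_near, i_far, displacement_mm):
    """Per-pixel range from two images, each lit by one of two isotropic
    sources displaced by displacement_mm along the viewing direction.

    Assuming inverse-square falloff and an unchanged surface between the
    two exposures, the albedo cancels in the intensity ratio:
        i_near / i_far = ((r + d) / r)**2  =>  r = d / (sqrt(ratio) - 1)
    """
    ratio = i_near / i_far
    return displacement_mm / (math.sqrt(ratio) - 1.0)

# Example: a pixel twice as bright under the near source, d = 100 mm:
print(distance_from_two_flashes(2.0, 1.0, 100.0))  # ~241 mm
```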
Visual fatigue caused by watching stereoscopic images is said to be due to the conflict between convergence eye movement and accommodation. We studied the degree of visual fatigue in subjects watching HDTV stereoscopic images. The HDTV stereoscopic images used as visual stimuli contained only absolute parallax, with no relative parallax. In the experiments, images were displayed behind or in front of the screen by a 120-Hz time-sequential method. Visual fatigue was assessed with a five-grade subjective evaluation test and by measuring the accommodation response after watching for one hour. We found that when stereoscopic HDTV images were displayed within the corresponding range of depth of focus and remained static in the depth direction, the degree of visual fatigue was almost the same as that induced by watching images displayed at the depth of the screen. However, when images were displayed outside the corresponding range of depth of focus, visual fatigue was clearly induced. Moreover, we found that even if images were displayed within the corresponding range of depth of focus, visual fatigue was induced if the images moved in depth according to a step pulse function.
The goal of the present study was to examine the size perception of objects depicted stereoscopically. Display size, target size, viewing distance, camera convergence distance, disparity information, and scene background were manipulated. Subjects estimated the perceived height of a set of stereoscopic targets projected from photographic slides. The stereoscopic slides were taken with the targets either in a studio with a black background or outdoors against a backdrop of natural vegetation, using three camera convergence distances. The stereo slides were presented on two screens of different sizes and were viewed at three viewing distances. Disparity information was removed in half the trials. For the conditions examined, display and target size had a significant effect on perceived size; disparity and scene background had a small effect; viewing distance and camera convergence distance had a negligible effect. Interestingly, the range of estimates of perceived height was reduced compared with estimates of the actual targets. Furthermore, on the small display, the smaller targets tended to be perceived as the same size as or bigger than the actual targets, and the large targets tended to be perceived as smaller than the actual targets.
A person has a feeling of 'being in' an image when watching a screen with a wide visual field, and this affects their somatic sensation and sense of direction. Making use of this effect, we investigated the sensation of reality in images by testing the sense of direction. We examined the relationship between information as perceived by the human visual and vestibular systems. In our experiment, we used images from horizontally rotating cameras as the visual information and displayed these images through a head-mounted display. We also applied angular acceleration using a turntable as information to be perceived by the vestibular system. The rotation of the cameras and of the turntable were controlled separately, and their directions were varied. We found that the human visual system dominates for stimuli that are small for the vestibular system, and overestimates for stimuli that are significant for the vestibular system. The results showed that the visual system is important for perceiving the sensation of reality, which is enhanced when the stimuli to the various sensory systems are in correspondence.
The motion induced by visual information is discussed for stable and moving images shown with or without disparities. Experiments were conducted to measure body sway, eye movement, and head movement under plane and depth visual stimuli to clarify the effect of depth information on induced motion. Several clear points emerged from the analysis of the experimental data. The minimal visual area needed to keep the body stable is about 45 degrees for the plane stimuli and about 22.5 degrees, half of that, for the depth stimuli. It was also found that there are large differences in body sway between the plane stimuli and the depth stimuli after the target moves. For moving visual stimuli without disparity, body sway was found to depend on the visual angle, but in the other case it was found to be independent of the visual angle. This is due to the difference in optokinetic nystagmus (OKN) values from the measurement of eye movement: the frequency of emergence and the amplitude of OKN under the depth stimuli were larger than under the plane stimuli. It was additionally observed that the low-frequency components of body sway were equal to those of head movement in these experiments.
KEYWORDS: 3D image processing, Visualization, Human vision and color perception, Image fusion, Televisions, 3D displays, Brain, Image processing, Visual analytics, Visual system
A new method for evaluating visual wide-field effects using human postural control analysis is proposed. In designing a television system for the future, it is very important to understand the dynamic response of human beings in order to evaluate the visual effects of displayed images objectively. Visual effects produced by 3-D wide-field images are studied. An observer's body sway produced by postural control is discussed using rotating 2-D and 3-D images, and comparisons between stationary and rotating images are also performed. A local peak appears in the power spectra of the body sway for the rotating images (3-D and 2-D), whereas no distinctive component appears in the power spectra for the stationary images. By extending the visual field, the cyclic component can be identified from the auto-correlation function of the body sway for the rotating images. These results suggest that the displayed images induce postural control. The total length of the body sway locus is also analyzed to evaluate postural control. The total length for the rotating images increases in proportion to the viewing angle and nearly saturates beyond 50 deg. Moreover, the total length for the rotating 3-D image is greater than for the rotating 2-D image.
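The sway analyses mentioned above (power spectrum, correlation function, and total locus length) can be sketched generically as follows; the sampling rate and signal conventions are assumptions, not the study's setup.

```python
import numpy as np

def sway_analysis(sway, fs):
    """Power spectrum, autocorrelation, and locus length of a body-sway trace.

    sway: complex array x + 1j*y of sway positions (or a real 1-D trace);
    fs:   sampling rate in Hz.
    """
    sway = np.asarray(sway)
    centered = sway - sway.mean()
    spectrum = np.abs(np.fft.rfft(centered.real)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fs)
    ac = np.correlate(centered.real, centered.real, "full")  # autocorrelation
    ac = ac[len(ac) // 2:] / ac[len(ac) // 2]                # normalized lags
    locus = np.abs(np.diff(sway)).sum()                      # total path length
    return freqs, spectrum, ac, locus
```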