We have used the eclipse method with a pair of liquid crystal (LC) projectors so that they produce a stereoscopic image with very little cross-talk, by taking advantage of the fact that liquid crystal projectors have long image lag. We operate an eclipse system whose rate is independent of the field rate of the projectors, since the liquid crystal image is sustained more or less continuously. In this way we turn a problem associated with these projectors into a method for projecting good stereoscopic images. We use a pair of LC shutters at the projector lenses, opening and closing out of phase with each other and eclipsing the images at a high enough rate to preclude flicker. The selection-device eyewear uses left and right LC shutters that run in synchrony with the projector shutters. A stereoscopic image is thus seen: the right image reaches the right eye when the right shutters of both projector and eyewear are open, while the left-eye image is blocked by both its eyewear and projector shutters, and vice versa, field after field. The fields have, in effect, been created by the projector shutters and are unrelated to the actual video field rate. An important aspect of this approach is that the resultant stereo image has very low cross-talk, because the process depends on the dynamic range of the LC shutters and eliminates any contribution of cross-talk from pixel hysteresis.
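To illustrate the low-crosstalk claim, here is a back-of-envelope sketch: the unwanted image must leak through a closed projector shutter and a closed eyewear shutter in series, so their extinction ratios multiply. The contrast figures below are assumed values, not measurements from the paper.

```python
# Back-of-envelope sketch (not from the paper): with the eclipse method the
# unwanted image must pass BOTH a projector shutter and an eyewear shutter,
# so the closed-state leakages multiply. Contrast ratios here are assumed.

projector_shutter_contrast = 1000   # assumed 1000:1 closed-state rejection
eyewear_shutter_contrast   = 1000   # assumed 1000:1

leak = (1 / projector_shutter_contrast) * (1 / eyewear_shutter_contrast)
print(f"crosstalk ~ {leak:.1e} of full brightness")  # ~1e-06, i.e. 0.0001%
```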
Polarized stereoscopic projection involves polarizing the two projected images in orthogonal directions, e.g. +/- 45 degrees for linear polarization, or clockwise and counterclockwise for circular polarization. When LCD projectors are used for polarized stereoscopic projection, it is important to take into account that the output of these projectors is already polarized, but not necessarily in the directions required for stereoscopic projection. The paper analyzes the light loss of various polarized stereoscopic projection configurations and recommends optimal configurations. The paper also points out that it is advantageous that the output of an LCD projector is already polarized. In general, there will be less light loss in the polarization process when an LCD projector is used for stereoscopic projection than when a projector that outputs unpolarized light is used (e.g. CRT and DMD/DLP).
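The light-budget argument follows directly from Malus's law. Below is a hedged, idealized sketch of the comparison (ideal polarizers and assumed angles; real polarizers absorb additional light):

```python
import math

# Idealized sketch of the light budget: an ideal polarizer passes 50% of
# unpolarized light, but passes cos^2(theta) of already-polarized light
# (Malus's law), where theta is the angle between polarization and analyzer.

def transmitted(polarized, angle_deg=0.0):
    if polarized:
        return math.cos(math.radians(angle_deg)) ** 2
    return 0.5  # unpolarized light through an ideal polarizer

# CRT/DLP-style unpolarized output into a stereo polarizer:
print(transmitted(polarized=False))               # 0.5 -> half the light lost

# LCD output already polarized; if first rotated (e.g. with a half-wave plate)
# onto the required +/-45 degree axis, the analyzer is aligned:
print(transmitted(polarized=True, angle_deg=0))   # 1.0 (ideal, no loss)

# Worst case: mounting the stereo polarizer at 45 degrees to the LCD's own
# polarization axis without rotating the light first:
print(transmitted(polarized=True, angle_deg=45))  # 0.5
```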
In recent years we have noted considerable disparity in ghosting among stereoscopic images encoded by polarization and projected onto silver screens. Potential causes include inefficiency of the polarizers used in projection, errors in orientation of linear polarizers placed over paired projection lenses, misalignment of linear polarizers in 3-D viewers, and leakage of non-polarized light through each of the polarizers. With polarizing images, the efficiency of the dyes that form the images must also be considered. In addition, ghosting may arise from depolarization by poor screen surfaces or by some projection equipment. Spectrophotometric measurements confirmed visual differences in blue leakage of crossed polarizing lenses from various suppliers. We also found significant differences in the transmittance curves of individual polarizers. We further examined the efficiencies of dye polarizers crossed with the polarizers used in 3-D viewers, simulating images formed of polarizing dyes in the StereoJet process. Although the antighosting measures described earlier help mitigate unwanted imagewise ghosting, it is important to minimize all potential for the appearance of ghosts. Finally, we reviewed the relative efficiencies of linear and circular polarizers for encoding the polarization of stereoscopic images, comparing the transmittance of the crossed pairs as functions of both wavelength and angular disparity.
According to the results of a previous simulated teleoperation experiment, the larger the ratio of the overlapping area of the stereoscopic images, the smaller the completion times and the number of errors. For this paper we carried out experiments using an actual stereoscopic video system, examining the performance of a teleoperated insertion task. In experiment 1, we set three fixed overlap rate conditions for the stereoscopic image pairs. High overlap rate condition: the convergence point of the two cameras was set at the goal point where a cylindrical object was inserted; when subjects fixated the goal point, the overlap rate of the images from the cameras was 95%. Middle overlap rate condition: the convergence point was set at the center of the working area; when subjects fixated the goal point, the overlap rate was 76.7%. Low overlap rate condition: the convergence point was set so that the overlap rate was 49% when subjects fixated the goal point. Completion times and the numbers of errors of the insertion task were measured, and both were smallest under the high overlap rate condition. In experiment 2, we compared performance between fixed and variable overlap rate conditions in a pick-and-insert task. The results suggested that the number of errors under the variable overlap rate condition was lower than under the fixed condition, although the completion time of the former was not shorter than that of the latter.
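The dependence of overlap rate on convergence distance can be sketched with simple geometry. This is my approximation, not the authors' derivation, and the rig parameters (camera separation, field of view, goal distance) are hypothetical:

```python
import math

# Approximate fraction of the image shared by two converged cameras when
# fixating a plane at distance d_goal. Cameras are separated by b and
# converged at d_conv; fov is the horizontal field of view.
# Small-angle / flat-plane approximation; parameters are assumed.

def overlap_rate(b, d_conv, d_goal, fov_deg):
    half_width = d_goal * math.tan(math.radians(fov_deg) / 2)   # footprint half-width
    center_shift = b * abs(d_goal / d_conv - 1)                 # axis separation at d_goal
    return max(0.0, 1 - center_shift / (2 * half_width))

# Hypothetical rig: 12 cm camera separation, 40 degree FOV, goal at 1.0 m.
for d_conv in (1.0, 0.6, 0.4):   # converge at goal, mid-workspace, near point
    print(f"converged at {d_conv} m: overlap ~ {overlap_rate(0.12, d_conv, 1.0, 40):.0%}")
```

Converging at the goal gives 100% overlap, and overlap falls as the convergence point moves away from the fixated plane, the same qualitative ordering as the high, middle, and low conditions above.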
This study examines potential advantages of using stereoscopic video for underwater inspection under turbid conditions, a task also affected by low illumination and camouflage. An overview is given of earlier research on the theoretical effects of these factors on visual perception. An experiment investigated performance with stereoscopic video in a simulated underwater inspection task involving detection of a camouflaged object, at four turbidity levels and three camera separation levels. In general, detection rate and sensitivity were consistently and significantly better with stereoscopic than with monoscopic video. Of equal importance was the discovery of an interaction between the camera and turbidity treatments, suggesting that stereoscopic video slows performance degradation as turbidity increases. It was also found that the diminished display brightness due to the shuttering glasses detracted very little from performance with monoscopic video, implying that the addition of stereo glasses should not offset the expected advantages of stereoscopic viewing. Finally, the magnitude of stereoscopic disparity had only a slight effect on detection performance, implying that a minimal level of disparity is sufficient to overcome turbidity significantly.
Overlaid stereo image pairs, viewed without stereo demultiplexing optics, are not always perceived as a ghosted image: if image generation and display parameters are adjusted so that disparities are small and limited to foreground and background regions, then the perception is more one of blurring than of doubling. Since this blurring seems natural, comparable to the blurring due to depth of focus, it is unobjectionable; the perception of ghosting, in contrast, seems always to be objectionable. Now consider the possibility that there is a perceptual regime in which disparity is small enough that crosstalk is perceived as blurring rather than ghosting, yet large enough to stimulate depth perception. If such a regime exists, it might be exploited to relax the strict 'crosstalk minimization' requirement normally imposed in the engineering of stereoscopic displays. This paper reports experiments indicating that such a perceptual regime does exist. We suggest a stereoscopic display engineering design concept that illustrates how this observation might be exploited to create a zoneless autostereoscopic display. By way of introduction and motivation, we begin from the observation that, just as color can be shouted in primary tones or whispered in soft pastel hues, so stereo can be shoved in your face or raised ever so gently off the screen plane. We review the problems with 'in your face' stereo, we demonstrate that 'just enough reality' is both gentle and effective in achieving stereoscopy's fundamental goal of resolving the front-back ambiguity inherent in 2D projections, and we show how this perspective leads naturally to relaxing the requirement that crosstalk reduction be the main engineering constraint on the design of stereoscopic display systems.
Stereoscopic images are hard to get right, and comfortable images are often produced only after repeated trial and error. The main difficulty is controlling the stereoscopic camera parameters so that the viewer does not experience eye strain or double images from excessive perceived depth. Additionally, for head-tracked displays, the perceived objects can distort as the viewer moves to look around the displayed scene. We describe a novel method for calculating stereoscopic camera parameters with the following contributions: (1) it provides the user intuitive controls related to easily measured physical values; (2) for head-tracked displays, it ensures that there is no depth distortion as the viewer moves; (3) it clearly separates the image-capture camera/scene space from the image-viewing viewer/display space; (4) it provides a transformation between these two spaces, allowing precise control of the mapping of scene depth to perceived display depth. The new method is implemented as an API extension for use with OpenGL, a plug-in for 3D Studio Max, and a control system for a stereoscopic digital camera. The result is stereoscopic images generated correctly at the first attempt, with precisely controlled perceived depth. A new analysis of the distortions introduced by different camera parameters was also undertaken.
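The viewer-side half of such a mapping is standard similar-triangle geometry. Here is a hedged sketch in my own notation (not the authors' API; eye separation, viewing distance, and comfort range are assumed values) that turns a chosen perceived-depth range into a screen-disparity budget:

```python
# For a viewer at distance v from the screen with eye separation e, a screen
# disparity d (positive = uncrossed) yields a perceived point at depth p
# behind (+) or in front of (-) the screen plane. Inverting gives the
# disparity budget that a chosen comfortable depth range allows.

def perceived_depth(d, e=0.065, v=0.75):
    """Signed distance from screen plane to the perceived point (metres)."""
    return v * d / (e - d)          # similar triangles; requires d < e for fusion

def disparity_for_depth(p, e=0.065, v=0.75):
    return e * p / (v + p)          # inverse of the mapping above

near, far = -0.05, 0.10             # assumed comfort range: 5 cm out, 10 cm in
d_min = disparity_for_depth(near)   # crossed (negative) disparity limit
d_max = disparity_for_depth(far)    # uncrossed (positive) disparity limit
print(f"screen disparity budget: {d_min*1000:.2f} mm to {d_max*1000:.2f} mm")
print(f"round-trip check: {perceived_depth(d_max)*100:.1f} cm behind screen")
```

The capture-side step of the method then chooses camera separation and convergence so that the scene's depth range maps exactly onto this disparity budget.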
The addition of stereo cues to perspective displays is generally expected to improve the perception of depth. However, the display's pixel array samples both perspective and stereo depth cues, introducing inaccuracies and inconsistencies into the representation of an object's depth. The position, size and disparity of an object will be inaccurately presented and size and disparity will be inconsistently presented across depth. These inconsistencies can cause the left and right edges of an object to appear at different stereo depths. This paper describes how these inconsistencies result in conflicts between stereo and perspective depth information. A relative depth judgement task was used to explore these conflicts. Subjects viewed two objects and reported which appeared closer. Three conflicts resulting from inconsistencies caused by sampling were examined: (1) Perspective size and location versus stereo disparity. (2) Perspective size versus perspective location and stereo disparity. (3) Left and right edge disparity versus perspective size and location. In the first two cases, subjects achieved near-perfect accuracy when perspective and disparity cues were complementary. When size and disparity were inconsistent and thus in conflict, stereo dominated perspective. Inconsistency between the disparities of the horizontal edges of an object confused the subjects, even when complementary perspective and stereo information was provided. Since stereo was the dominant cue and was ambiguous across the object, this led to significantly reduced accuracy. Edge inconsistencies also led to more complaints about visual fatigue and discomfort.
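To make the sampling problem concrete, here is a small hedged sketch (pixel pitch, viewing distance, and eye separation are assumed values, not the paper's apparatus) showing the depth step between consecutive whole-pixel disparities:

```python
# Screen disparity is rendered in whole pixels, so the depth step between
# adjacent representable disparities can be several millimetres, and an
# object's two edges can quantize to different disparities.

def depth_from_disparity(d_pixels, pixel_pitch=0.00025, e=0.065, v=0.75):
    d = d_pixels * pixel_pitch               # disparity in metres
    return v * d / (e - d)                   # perceived depth behind screen

for d_px in range(0, 4):
    print(f"{d_px} px -> {depth_from_disparity(d_px)*1000:6.1f} mm behind screen")
# The ~3 mm jumps between consecutive rows are the depth quantization step;
# an object whose left and right edges round to different pixel disparities
# appears tilted or split in stereo depth, the edge conflict studied above.
```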
A computer-controlled stereoscopic camera system that produces precise and rapid changes of camera orientation and lens parameters is described and assessed. The system consists of a pair of cameras, each attached to a lens with computer-controlled zoom, focus, and aperture. The cameras, at right angles to each other, are aimed through a half-silvered mirror to acquire the left and right images. Each camera is mounted on a motorized base that controls the camera separation and convergence angle. The computer controls camera separation from zero to 45 cm with an accuracy of 0.1 cm, and the convergence angle of each camera over +/- 15 degrees off-center with an accuracy of 0.02 degrees. There were three levels each of convergence angle, camera separation, and target intensity, creating 27 viewing conditions, all of which subjects viewed on a stereo monitor system. The subjects viewed all conditions and made depth judgments between four pairs of point-source lights. The depth judgment results indicate that direct and remote views are consistent, that subjects produce consistent judgments despite non-orthoscopic intervening camera configurations, and that judgments remain consistent as system parameters vary.
When shutter speeds approach a nanosecond, you set your experiment up with a tape measure. Light-in-flight imaging takes over when the length of the pulse and the shutter time correspond to a distance of two or three meters. This paper addresses the development of next-generation ultrahigh-speed digital imaging systems and their application to stereo photography of ballistic, penetration, fragmentation, and spray events. Applications of high-speed imaging from 1000 to 100 million frames per second are discussed, along with the software used to evaluate various experimental methods. Applications range from ultra-high-resolution still imaging using a laser strobe to laser-illuminated digital movies.
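The tape-measure remark is just the speed of light at work: the distance light travels during the shutter time (or pulse length) sets the spatial scale of the experiment. A quick check:

```python
# Distance light travels during the gate time; this is what makes metre-scale
# layout (a tape measure) the right tool at nanosecond shutter speeds.

c = 2.998e8                      # speed of light, m/s
for t_ns in (1, 5, 10):
    print(f"{t_ns:2d} ns  ->  {c * t_ns * 1e-9:.2f} m of light travel")
# 1 ns ~ 0.30 m; a 2-3 m pulse/gate length corresponds to roughly 7-10 ns.
```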
Most teleoperations require both wide vision for workspace recognition and fine vision for recognizing object structure. To achieve high operational efficiency in teleoperation, we developed the Q stereoscopic video system, which is constructed from four video camera sets and therefore requires four video channels to transmit its signals. However, four channels are not always available: presently, only six wireless channels are assigned for controlling machines at Japanese construction sites. To reduce the number of channels for video images, we tried sending the images from two cameras alternately over a single channel. It was not clear what video field-refresh rate was necessary for teleoperation, so we investigated the relationship between operational efficiency and the field-refresh rate. In the experiment, subjects were required to pick up a cylindrical object and insert it into a hole by operating a tele-robot under stereoscopic images at 60 Hz and 30 Hz field-refresh rates. The results showed no significant differences between the two conditions in completion time or number of errors. It is therefore possible to halve the field-refresh rate and run the Q stereoscopic video system on two channels without reducing operational efficiency.
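The channel-sharing scheme is plain time-division multiplexing. A minimal sketch (field labels and rates are illustrative):

```python
from itertools import islice

# Two 60 Hz camera field streams share one 60 Hz channel by alternating
# sources field-by-field, so each camera is effectively refreshed at 30 Hz,
# the reduced-rate condition tested in the experiment.

def multiplex(cam_a, cam_b):
    """One channel slot per field time; sources alternate, each at half rate."""
    for i, (field_a, field_b) in enumerate(zip(cam_a, cam_b)):
        yield field_a if i % 2 == 0 else field_b

cam_a = (f"A{n}" for n in range(60))     # one second of camera-A fields
cam_b = (f"B{n}" for n in range(60))     # one second of camera-B fields
print(list(islice(multiplex(cam_a, cam_b), 6)))   # ['A0', 'B1', 'A2', 'B3', 'A4', 'B5']
```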
This paper describes the development of an original stereoscopic sensor suitable for cylindrical and spherical 3D imaging. Its principle is based on the rotation of two linear CCD cameras with their optical centers on the rotation axis of a stepper motor. This architecture allows the sensor to obtain panoramic images with a cylindrical geometry. By adding a second rotation axis orthogonal to the first, passing through the midpoint between the two cameras, color stereoscopic spherical images are obtained. Owing to the geometrically precise acquisition of the color pixels and the faithful reproduction of the colors, high-quality color images are obtained. The concept of this sensor simplifies calibration and feature matching by reducing the classical 2D matching problem to 1D. This original sensor is dedicated to the acquisition of real scenes for multimedia and motion picture applications over 360 X 360 degrees.
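The acquisition loop is easy to picture: each stepper position contributes one column from each linear CCD, and a full revolution fills a pair of cylindrical panoramas. A hedged sketch with assumed sizes (not the sensor's actual specifications):

```python
import numpy as np

# Each stepper step yields one column per linear CCD; column index maps
# linearly to azimuth, so epipolar lines are image rows and stereo matching
# reduces from 2D to 1D, as the abstract notes. Sizes below are assumed.

steps_per_rev = 3600                # assumed stepper resolution (0.1 deg/step)
ccd_height = 2048                   # assumed linear CCD pixel count

left_pano  = np.zeros((ccd_height, steps_per_rev), dtype=np.uint8)
right_pano = np.zeros((ccd_height, steps_per_rev), dtype=np.uint8)

def read_line(camera, step):        # stand-in for the real CCD readout
    return np.full(ccd_height, (camera * 7 + step) % 256, dtype=np.uint8)

for step in range(steps_per_rev):
    left_pano[:, step]  = read_line(0, step)
    right_pano[:, step] = read_line(1, step)

print(left_pano.shape)              # (2048, 3600): one column per 0.1 degree
```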
Panoramic stereo pictures are created by stitching together frames taken from a single moving video camera. Stereo panoramas can be created up to a full 360 degrees. The mosaicing process is robust and fast, and can be performed in real time. Mosaicing starts by computing the motion between the video frames. The video frames, together with the motion computed in the previous step, are used to generate two panoramic pictures: one for the left eye and one for the right eye. Since the camera is moving, each object is viewed from different directions in different frames. Stitching together strips from the different video frames, selected to have the correct viewing directions for stereo perception, generates the panoramic stereo pictures. The stereo mosaicing process allows several features that were not available before: (1) the creation of stereo panoramic images over 360 degrees; (2) automatic disparity control, increasing stereo disparity for far-away objects and reducing it for close objects, to give optimal stereo viewing in all directions and at all distances; (3) the creation of multiple pictures from multiple views, not limited to two. This enables viewing the panoramic stereo pictures using lenticular technology.
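The core move is that strips taken to one side of each frame's center see the scene from one effective viewpoint, and strips taken to the other side from a second viewpoint. A hedged sketch of that selection (my simplification with a constant pan and assumed strip offsets; which side feeds which eye depends on the pan direction):

```python
import numpy as np

# Build a left-eye and a right-eye panorama from one moving camera by
# stitching strips taken at a fixed offset either side of each frame centre.
# Strip width equals the per-frame pan so strips tile without gaps.

def stereo_mosaics(frames, motions, offset=40):
    left_cols, right_cols = [], []
    for frame, m in zip(frames, motions):
        c = frame.shape[1] // 2
        left_cols.append(frame[:, c + offset : c + offset + m])
        right_cols.append(frame[:, c - offset - m : c - offset])
    return np.hstack(left_cols), np.hstack(right_cols)

frames = [np.random.randint(0, 255, (480, 640), dtype=np.uint8) for _ in range(90)]
motions = [7] * 90   # stand-in: 7 px/frame pan; the real system measures this
left_pano, right_pano = stereo_mosaics(frames, motions)
print(left_pano.shape, right_pano.shape)      # two panoramas, one per eye
```

Varying `offset` per distance is the automatic disparity control mentioned above: a larger offset between the strip pairs yields a larger stereo baseline.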
Computer-based Stereoscopic Imaging and Applications
In theory, a computer system could serve as an ideal platform for displaying stereoscopic images. It is commonplace for computers to have enough performance to render stereoscopic images. Of these computers, about 90% run a version of the Windows operating system, with versions of Windows 9x being the most commonly used. In addition, it is relatively easy and inexpensive to add stereoscopic equipment to these computers. However, a serious problem is encountered when one wants to use stereoscopic equipment with software running under Windows 9x: there are very few application programming interfaces (APIs) that allow a developer to take advantage of the stereoscopic equipment. Typically, a developer is forced to use an API specific to a video card or to a particular model of stereoscopic equipment. To overcome these obstacles to the development of stereoscopic applications, the Win3D company developed a stereoscopic API based on modular components. The primary module of the API communicates with the rendering library; DirectX is the initial library supported. Additional modules are developed for the supported video cards and for the supported stereoscopic equipment. This provides, for the first time, a widely available solution that lets developers independently select the video equipment and the stereoscopic equipment to be used, eliminating the limitations of APIs tied to specific video cards or specific stereoscopic equipment. An additional feature of the modular design is that various display methods are supported: software page-flipping, hardware page-flipping, interlaced, over/under, anaglyph, etc. The modular design also allows applications to automatically stay current with new video and stereoscopic equipment as it is developed, without requiring changes to the applications.
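The modular decomposition can be sketched abstractly. These are my own pseudo-interfaces, not Win3D's actual API; they only illustrate how the renderer, display method, and stereo device vary independently:

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces illustrating the modular split described above:
# one module talks to the rendering library, others wrap specific display
# methods and stereo devices, and the application composes them.

class RenderModule(ABC):            # e.g. a DirectX back end
    @abstractmethod
    def render_eye(self, scene, eye: str): ...

class DisplayMethod(ABC):           # page-flipping, interlaced, anaglyph, ...
    @abstractmethod
    def present(self, left_image, right_image): ...

class StereoDevice(ABC):            # shutter glasses and similar hardware
    @abstractmethod
    def sync(self): ...

class StereoAPI:
    def __init__(self, renderer, method, device):
        self.renderer, self.method, self.device = renderer, method, device

    def draw_frame(self, scene):
        left = self.renderer.render_eye(scene, "left")
        right = self.renderer.render_eye(scene, "right")
        self.method.present(left, right)    # display method is swappable
        self.device.sync()                  # device module is swappable too
```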
Head-up displays provide the pilot of an aircraft with a means to view real-world cues simultaneously with on-board flight information. In combination with precise navigation, head-up displays allow approaches and landings under degraded visual flight conditions under manual control by the pilots. While color and intensity coding is state of the art in modern glass cockpits, head-up displays are still monochrome and limited in brightness. Of the four normal human visual capabilities (contrast, color, motion, and stereo), only contrast and motion are used. Furthermore, the symbols have to be based solely on lines to maintain see-through capability. These limitations force symbology to be coded by form and font, resulting in cluttered formats that require considerable effort in the training of flight crews. Since current symbology is monochrome, collimated to a fixed distance, and monoscopic, it appears as one object in a single depth plane. With stereoscopy in the head-up display, several layers of information can be used to declutter the heavily loaded 2D symbology. A stereoscopic head-up display was developed on the basis of a modern civil head-up display and integrated into a fixed-base flight simulator with a collimated visual projection system. First tests indicate improved information perception in head-up displays through the addition of stereoscopy.
Strabismus and amblyopia are two main impairments of the visual system and are responsible for the loss of stereovision. A device has been developed for the diagnosis and treatment of strabismus and amblyopia, and for training and developing stereopsis. The device is composed of liquid crystal glasses (LCG), electronics for driving the LCG and synchronizing with an IBM PC, and special software. The software contains specially designed patterns and graphics for training and developing stereopsis, and for objective measurement of stereoscopic vision parameters such as horizontal and vertical phoria, fusion, fixation disparity, and stereoscopic visual threshold.
Multimedia Ambiance Communication is a means of achieving shared-space communication in an immersive environment consisting of an arch-type stereoscopic projection display. Our goal is to enable shared-space communication by creating a photo-realistic three-dimensional (3D) image space that users can feel a part of. The concept of a layered structure defined for painting, with long-range, mid-range, and short-range views, can be applied to a 3D image space. New techniques have been developed for building a photo-realistic 3D image space, such as two-plane expression, high-quality panoramic image generation, setting representation for image processing, and 3D image representation and generation. We also propose a life-like avatar within the 3D image space. To obtain the characteristics of a user's body, the human subject is scanned using a Cyberware(TM) whole-body scanner. The scanner's output, a range image, is a good starting point for modeling the avatar's geometric shape: a generic human surface model is fitted to the range image. The resulting models are topologically equivalent even when the method is applied to different subjects, so if a generic model with motion definitions is employed, common motion rules can be applied to all models derived from it.
Current 3-dimensional display systems that use only binocular disparity suffer from the conflict between eye convergence and accommodation. This paper introduces a 3-dimensional display system for one observer that can solve this conflict, and presents experimental evidence that the accommodation of one eye can be satisfied in the system. The system uses 2-dimensional images to generate a 3-dimensional image for one observer; 127 2-dimensional images are used for one 3-dimensional image. Horizontal motion parallax is possible, although the viewing area is only a little larger than the distance between the observer's eyes. The system is close to the observer and generates a horizontal-parallax-only 3-dimensional image. A vertical diffuser could therefore not be used, because it blurs the image in the vertical direction; a vertical cylindrical lens was used instead. This solved the image blurring, but the depth-direction freedom of the observer's eye position was restricted to a narrow range by the cylindrical lens. The system can offer a large 3-dimensional image in the same way that an HMD (head-mounted display) can display a large 2-dimensional image.
One recently reported approach to flat-panel autostereoscopic 3D displays under investigation at Sharp Laboratories of Europe Ltd. (SLE) uses a high-precision patterned optical half-wave retarder combined with a re-configurable output polarizer to 'develop' a parallax barrier structure attached to an LCD panel. Such a barrier is invisible without the polarizer, and thus a 2D/3D-configurable display can be formed. Cross talk and white-level variation in the 3D mode will be discussed with reference to Fresnel diffraction in the display. A model will be presented and justified in the light of the panel geometry, and compared with measured cross talk, window structure, and white-level variation in such a 2D/3D-configurable system. The implications of the shape of the transmitting profile for 3D-display cross talk will be discussed.
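To see why Fresnel diffraction enters at all, consider light passing a barrier slit over the thickness of the panel glass. The sketch below is my own illustrative slit calculation with assumed dimensions (not Sharp's model or geometry); it shows how diffraction softens the slit profile and sends light beyond the geometric shadow, which is one source of cross talk:

```python
import numpy as np
from scipy.special import fresnel

# Fresnel diffraction of a uniform beam by a single slit, evaluated a short
# propagation distance away. All dimensions are assumed for illustration.

lam = 550e-9          # wavelength, m (green)
z = 1.0e-3            # assumed slit-to-pixel propagation distance (1 mm glass)
slit = 60e-6          # assumed slit aperture width, m

x = np.linspace(-150e-6, 150e-6, 601)        # observation positions on the panel
scale = np.sqrt(2 / (lam * z))               # Fresnel scaling factor
s1, c1 = fresnel((-slit / 2 - x) * scale)    # Fresnel integrals at one slit edge
s2, c2 = fresnel(( slit / 2 - x) * scale)    # ... and at the other edge
intensity = ((c2 - c1) ** 2 + (s2 - s1) ** 2) / 2   # 1.0 = unobstructed beam

# Light well outside the geometric slit shadow is the diffractive leakage:
outside = np.abs(x) > slit
print(f"peak leakage one slit-width beyond the edges: {intensity[outside].max():.3f}")
```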
A desktop 20-inch autostereoscopic display system based on two 6.5-inch LCD projection panels and a single objective is described. The system employs vertical separation of the left and right views in the objective's entrance pupil to form the viewing zones. The vertically separated views are horizontally divided by a diaphragm positioned at the pupil. This diaphragm is composed of two horizontal, parallel LC stripe shutters, each made of 32 independently driven black-and-white LCD columns. A rear-projection screen made of a Fresnel lens with a vertical diffuser is used for image projection; the screen provides a best viewing distance of 70 - 80 cm. The system is equipped with a head-tracking device for 16-view image display. It is compatible with any dual-monitor SVGA video card with 800 X 600 resolution, or with any source of parallel stereoscopic video signal. The size of the system is comparable to that of a 20-inch CRT monitor.
In the design of an autostereoscopic display system we usually deal with trade-offs among resolution, number of viewing zones, and brightness. A viewer-tracking-based autostereoscopic display with a micro-retardation array for image splitting can achieve a reasonable balance among these properties simultaneously. However, the bulkiness due to the hard-to-decrease f-number of the field lens used to direct light to the viewer presents a problem. In this paper, we introduce a newly developed flat-panel autostereoscopic display system. Instead of a large-format field lens, a precise lenticular plate, with an LCD tracking panel on its back focal plane, is attached to the LCD image panel and acts as both the backlight and the tracking device. Under this configuration the total depth of the system is reduced to the order of several centimeters. Either a micro-retardation array or a micro-prism can be used as the image splitter of such a system. To avoid the crosstalk caused by optical leakage between adjacent columns, a vertically interlaced blazed grating should be used in place of the stripe-wise micro-prism fabricated by mechanical machining. A display with a 6-inch diagonal screen was constructed for a feasibility study, in which a micro-retardation array is used as the image splitter and a combination of a micro-retardation array and a polarizer is used to simulate the tracking LCD. A detailed performance evaluation of the system is presented.
In this paper, we present a preliminary system design of a super multi-view (SMV) 3-D display system based on the focused light array (FLA) concept, using a reflective vibrating scanner array (ViSA). Parallel beam scanning is performed by moving an array of curvature-compensated mirrors, or a diamond-ruled reflective grating attached to a vibrating membrane, left and right. The parallel laser beam scanner array can replace the polygon mirror scanner used in the SMV 3-D display system based on the FLA concept proposed by Kajiki at TAO (Telecommunications Advancement Organization). The proposed system has the great advantage of requiring neither huge imaging optics nor mechanical scanning parts. Mathematical analyses and fundamental limitations of the proposed system are presented. The proposed vibrating scanner array, after some modifications and refinements, may replace polygon-mirror-based scanners in the near future.
In this paper, we propose a CGIP (computer-generated integral photography) method and verify its feasibility. In CGIP, the elemental images of imaginary objects are computer-generated instead of being captured through a pickup process. Since the system is composed of only one lens array and conventional display devices, it is compact and cost-effective. Animated images can also be presented using time-varying elemental images. As a result, autostereoscopic images with full color and full parallax were observed in real time. Moreover, the method can be applied to a quasi-3D display system: if each camera picks up a scene that is part of the total view and the elemental images are generated so that each scene has a different depth, real objects captured by ordinary cameras can be displayed in quasi-3D. In addition, since it is easy to change the shape or size of the elemental images in this scheme, we can observe the effect of several viewing parameters, which helps us analyze the basic IP system. We perform experiments with different lens arrays and compare the results. The lateral and depth resolution of the integrated image is limited by factors such as the image position, object thickness, lens width, and the pixel size of the display panel.
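The computational pickup stage reduces to projecting the virtual scene through each lenslet's optical center onto that lenslet's patch of the display. The sketch below is my simplification (each lenslet treated as a pinhole, a single point object, all sizes assumed), not the authors' implementation:

```python
import numpy as np

# Computer-generated elemental images for a point object: each lenslet
# records the object from its own position, exactly what optical pickup
# would have done. Array size, pitch, gap, and object position are assumed.

n_lens, elem_px = 10, 32           # 10x10 lens array, 32x32 px per elemental image
pitch, gap = 1.0, 3.0              # lens pitch and lens-to-display gap (mm)
obj = np.array([1.5, -0.8, 40.0])  # point object in front of the array (mm)

display = np.zeros((n_lens * elem_px, n_lens * elem_px))
for j in range(n_lens):
    for i in range(n_lens):
        # lenslet optical centre in array coordinates (mm)
        cx = (i - (n_lens - 1) / 2) * pitch
        cy = (j - (n_lens - 1) / 2) * pitch
        # continue the object-to-centre ray onto the display plane at z = -gap
        u = cx + (cx - obj[0]) * gap / obj[2]
        v = cy + (cy - obj[1]) * gap / obj[2]
        px = int(round((u - cx) / pitch * elem_px + elem_px / 2))
        py = int(round((v - cy) / pitch * elem_px + elem_px / 2))
        if 0 <= px < elem_px and 0 <= py < elem_px:
            display[j * elem_px + py, i * elem_px + px] = 1.0   # lit pixel

print(display.sum(), "elemental pixels lit")   # one dot per lenslet, shifted per view
```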
The goal of this research is to develop a head-tracked, stereo virtual reality system utilizing plasma or LCD panels. This paper describes a head-tracked barrier auto-stereographic method that is optimized for real-time interactive virtual reality systems. In this method, a virtual barrier screen is created, simulating the physical barrier screen, and placed in the virtual world in front of the projection plane. An off-axis perspective projection of this barrier screen, combined with the rest of the virtual world, is projected from at least two viewpoints corresponding to the eye positions of the head-tracked viewer. During the rendering process, the simulated barrier screen effectively casts shadows on the projection plane. Since the different projection points cast shadows at different angles, the images for the different viewpoints are spatially separated on the projection plane. These spatially separated images are projected into the viewer's space at different angles by the physical barrier screen. The flexibility of this computational process allows more complicated barrier screens than the parallel opaque lines typically used in barrier-strip auto-stereography. In addition, the method supports focusing and steering the images for a user's given viewpoint, and allows very wide angles of view. This method can produce an effective panel-based auto-stereo virtual reality system.
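The shadow-casting idea can be reduced to a one-dimensional ray test. This is my own flatland sketch with assumed dimensions (not the paper's renderer): for each screen pixel, trace the ray to each tracked eye and keep the pixel for whichever eye sees it through a transparent slit.

```python
# For each screen x, does the ray to a given eye pass a barrier slit?
# Screen at z=0, barrier at z=gap, eyes at z=view_dist. Dimensions assumed.

pitch, aperture = 1.2, 0.4     # barrier line pitch and slit width (mm)
gap, view_dist = 6.0, 600.0    # barrier-to-screen gap and eye distance (mm)
eye_x = {"left": -32.5, "right": 32.5}   # tracked eye positions (~65 mm apart)

def visible(eye, x_screen):
    x_barrier = x_screen + (eye_x[eye] - x_screen) * gap / view_dist
    return (x_barrier % pitch) < aperture

for px in range(8):
    x = px * 0.3               # 0.3 mm screen pixels (assumed)
    eyes = [e for e in ("left", "right") if visible(e, x)]
    print(f"x={x:4.1f} mm -> draw {' & '.join(eyes) if eyes else 'nothing'}")
# Pixels seen by neither eye stay dark; with aperture < pitch/2 this guard
# band is what keeps the two eyes' image columns from leaking into each other.
```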
3D displays using super multi-view images are expected to reproduce natural stereoscopic views. Such a display ideally needs about 100 views, but only 10 - 20 camera views are actually available, so real-time view interpolation is needed. This research aims at a real-time view interpolation system. We have developed a prototype system that inputs four video images and generates several views. The system consists of two parts: an acquisition part and a data processing unit. The acquisition part has a multi-view camera comprising four video cameras on an X-theta mechanical stage; it captures four view images and multiplexes them into a digital signal stream. The data processing unit has four DSP chips and performs camera calibration and view interpolation, so that various processing algorithms can be examined. For view interpolation, we adopted an adaptive filtering technique on EPIs (epipolar plane images): multi-view images are interpolated on the EPIs adaptively, using the most suitable filters. To perform view interpolation with high accuracy, camera calibration is also studied. The details of the prototype system and the examination of view interpolation and camera calibration are described.
Adopting a pixel cell plate designed using geometrical optics, full-parallax imaging is realized. The pixel cell plate consists of a 2-dimensional array of pixel cells, each composed of the identically numbered pixel from every image of an N X N multiview image array.
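The interleaving can be stated compactly; the indexing convention below is mine, chosen to match the description (pixel (x, y) of view (i, j) lands in cell (x, y) at offset (i, j)):

```python
import numpy as np

# Scatter an N x N multiview image array into a pixel-cell plate: each
# N x N block of the plate is one pixel cell holding the same-numbered
# pixel from every view. View count and sizes are assumed.

N, H, W = 4, 60, 80                                  # 4x4 views of 60x80 px
views = np.random.randint(0, 255, (N, N, H, W), dtype=np.uint8)

plate = np.zeros((H * N, W * N), dtype=np.uint8)
for j in range(N):
    for i in range(N):
        plate[j::N, i::N] = views[j, i]              # one view per cell slot

print(plate.shape)   # (240, 320): each N x N block is one pixel cell
```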
An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
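The quoted figures are mutually consistent, as a quick sanity check shows:

```python
# Sanity-check arithmetic on the figures quoted above.
slice_rate_hz = 4000          # "approximately 4 kHz" slice projection rate
volume_rate_hz = 20           # volume update rate
slices = slice_rate_hz // volume_rate_hz
voxels = 768 * 768 * slices
print(slices, f"{voxels:,}")  # 200 slices -> 117,964,800 voxels, > 90 million
```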
My prototype 3D display shows that 3D objects can be projected in space, in color, without moving parts, by using a scalable high-resolution screen with 10-micrometer pixels that combines up to 100 views. The patented high-resolution screen uses coherent optical fiber bundles to combine the images of many spatial light modulators. The straightforward modular approach makes it easy to scale the screen up, adjust light output, and change 2D and 3D resolution. Depending on the computer used, refresh rates range from 0.1 Hz up to real time. The volume of the apparatus is comparable to that of an ordinary TV set. The paper includes concepts, theory, experiments, proof of concept, and configurations for different uses. The flexibility of the system allows curved 3D screens, life-size 3D screens, and even the possibility of reproducing the 'Princess Leia' hologram effect from Star Wars.
Recommendations and Standards for Stereoscopic Imaging
In stereoscopic perception of a three-dimensional world, binocular disparity might be thought of as the most important cue to 3D depth perception. Nevertheless, in reality many other factors contribute to the 'final' conscious and subconscious stereoscopic percept, such as luminance, contrast, orientation, color, motion, and figure-ground extraction (the pop-out phenomenon). In addition, more complex perceptual factors exist, such as attention and its duration (an equivalent of 'brain zooming') in relation to physiological central vision, as opposed to attention to peripheral vision, and the brain's 'top-down' information in relation to psychological factors such as memory of previous experiences and present emotions. The brain's internal mapping of a pure perceptual world may differ from its internal mapping of a visual-motor space, which represents an 'action-directed perceptual world.' Moreover, psychological factors (emotions and fine adjustments) are much more involved in a stereoscopic world than in a flat 2D world, and likewise in a world that engages peripheral vision (like VR displays using a curved perspective representation, as natural vision does) as opposed to one presenting only central vision (bi-macular stereoscopic vision), as in the majority of typical stereoscopic displays. Presented here is the most recent and precise information available about the psycho-neuro-physiological factors involved in the perception of a stereoscopic three-dimensional world, with an attempt to give practical, functional, and pertinent guidelines for building more 'natural' stereoscopic displays.
The display of stereoscopic images implies an attempt to create as realistic a presentation as can be attained with the particular image display system. Consideration must therefore be given to the method by which the human visual mechanism perceives and renders such images, both in the real world and as a result of viewing such displays. Although there are several similarities between the way images are perceived by the human visual system in real life and the way they are perceived when viewing stereoscopic displays, there are also several very important differences. The substance of this paper is directed toward the minimization or, where possible, the elimination of these differences. Both stereoscopic and autostereoscopic display systems are covered, and they are treated separately where necessary. From the standpoint of the relationship of human vision to the design of a particular display, in many (but not all) instances the requirements necessary to simulate normal human vision are similar for stereoscopic and autostereoscopic displays. The display characteristics described are compared with the corresponding functions of the human visual mechanism, emphasizing the strong correlation of the eye-brain model and the ways in which our perception is affected by the parameters of the particular display.
The purpose of stereo imagery is the creation of the perception (in the human viewer) of a 3-D world, for purposes such as entertainment, visualization, education, and allowing people to better learn about and control the world about them. Human perception of 3-D is based upon a combination of many cues from the senses, as well as internal mental templates and expectations. In a stereo presentation, if some of the 3-D cues are inconsistent with others, the perceptual system receives conflicting information, and seeks to find a consistent interpretation. In cases of severe conflict, 3-D perception of the scene may be totally disrupted or highly inaccurate. Even if the user is able to perceive a consistent 3-D view, the effort required to resolve conflicts may reduce the sense of realism and the enjoyment of viewing, and may contribute to fatigue, eyestrain, and headache. Many experienced users of stereo imagery have learned to ignore a certain degree of image inconsistency, and can derive considerable pleasure from viewing even significantly inconsistent stereo images. Unfortunately, novice users generally do not have this ability, and while they may not be able to verbally explain what's wrong with a stereo image, they may comment that it gives them a headache, or that something's not quite right about it, or that they have trouble seeing the depth. Worse, a novice user may conclude that the problems with this image are characteristic of all stereo images, and decide (and tell others) that stereo is not worthwhile. Since the field of display can greatly benefit from the development of a large user base (to provide money to support manufacturers and future technology development, and to draw the interest of content providers), it is important that every effort be made to make sure that novice users have a pleasant viewing experience, with stereo views that have a high degree of consistency in their 3-D cues. Even experienced users have varying degrees of tolerance for inconsistencies, so any improvement in stereo realism increases the number of people who can enjoy it. Despite the importance of providing consistent 3-D cues, no existing display system can do a perfect job of displaying any significant variety of stereo images, nor will such a perfect display be created in the next several decades. It is therefore important to look at the sources of 3-D cue inconsistencies in terms of the severity of impact on the viewing experience, and the effort required to minimize the effect of each inconsistency.
This paper explores the transmission of MPEG-2 compressed stereoscopic (3-D) video over broadband networks and digital television (DTV) broadcast channels. A system has been developed to perform 3-D (stereoscopic) MPEG-2 video encoding, transmission, and decoding over broadband networks in real time. Such a system can benefit applications where a depiction of the relative positions of objects in 3-dimensional space is critical, by providing visual cues along the sight axis. Applications such as tele-medicine, remote surveillance, tele-education, and entertainment could benefit from such a system, since it conveys an added viewing experience. For cost efficiency the system is kept as simple as possible while offering a degree of control over the encoding and decoding platforms. Data exchange uses TCP/IP for control between the server and client, and UDP/IP for the MPEG-2 transport streams delivered to the client. Parameters such as encoding rate can be set independently for the left and right viewing channels to satisfy network bandwidth restrictions while maintaining satisfactory quality. Using this system, transmission of stereoscopic MPEG-2 transport streams (video and audio) has been performed over a 155 Mbps ATM network shared with other video transactions between server and clients. Preliminary results have shown that the system is reasonably robust to network impairments, making it usable in relatively loaded networks. An innovative technique for broadcasting Standard Definition Television 3-D video using an ATSC-compatible encoding and broadcasting system is also presented. This technique requires a simple video multiplexer before the ATSC encoding process and a slight modification at the receiver after ATSC decoding.
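The TCP-for-control, UDP-for-media split is a standard pattern, sketched below in outline form. This is not the authors' code; the server address, port numbers, and the control-message syntax are placeholders, while the 188-byte transport-stream packet size is part of the MPEG-2 standard:

```python
import socket

SERVER = "192.0.2.10"                      # placeholder server address

# Control channel: TCP is reliable and ordered, which suits start/stop and
# rate commands. The command text below is a hypothetical protocol.
ctrl = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ctrl.connect((SERVER, 5000))
ctrl.sendall(b"PLAY left_rate=4000000 right_rate=4000000\n")

# Media channel: UDP adds no retransmission delay, so late packets are simply
# lost, which MPEG-2 decoding tolerates better than stalling the stream.
media = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
media.bind(("", 5002))
for _ in range(1000):                      # receive a bounded burst for the sketch
    packet, _ = media.recvfrom(7 * 188)    # 7 x 188-byte TS packets per datagram
    # feed `packet` into the MPEG-2 transport-stream demultiplexer here
```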
Asymmetrical coding has been shown to be a viable method for reducing the bandwidth required for stereoscopic video storage and transmission. In the basic version of asymmetric coding, high quality images are streamed to one eye, and lower quality images are streamed to the other eye. To remove this imbalance in image quality between the two eyes, we propose a modified version of asymmetrical coding where high-quality images are interleaved with reduced-quality images within each stream. The change between high-quality and reduced-quality images occurs in counter-phase for the two image streams, such that the levels of image quality are cross-switched between streams. Experimental evidence is provided to show that a cross-switch is best positioned at scene cuts where it is masked, otherwise it is visible as a 'jerky motion' in the stereoscopic picture. We conclude that a modified version of asymmetric coding with cross-switches occurring at scene-cuts is a useful method for balancing the image quality between eyes without introducing artifacts, while maintaining the feature of bandwidth reduction for stereoscopic video storage and transmission.
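The scheduling logic of the cross-switched scheme is simple to state. A hedged sketch in my own notation (frame counts and cut positions are illustrative):

```python
# Assign per-frame quality to the two streams, swapping which stream carries
# the high quality only at scene cuts, where the switch is masked.

def assign_quality(shot_boundaries, n_frames):
    """Return per-frame (left_quality, right_quality), swapping at each cut."""
    high_on_left = True
    plan, cuts = [], set(shot_boundaries)
    for f in range(n_frames):
        if f in cuts:
            high_on_left = not high_on_left      # cross-switch at the cut
        plan.append(("high", "low") if high_on_left else ("low", "high"))
    return plan

plan = assign_quality(shot_boundaries=[120, 300], n_frames=450)
print(plan[0], plan[119], plan[120], plan[300])
# ('high', 'low') up to frame 119, ('low', 'high') from the cut at frame 120,
# back to ('high', 'low') at frame 300. Averaged over shots, both eyes
# receive the same mean quality, which is the point of the modification.
```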
Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
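The intra-view versus inter-view predictor comparison can be demonstrated on synthetic data. This is my toy reproduction of the idea, not the paper's experiment: a smooth image stands in for real content, and a horizontal shift stands in for uncompensated disparity between adjacent views.

```python
import numpy as np

# Compare DPCM residuals: predict each pixel from (a) its left neighbour in
# the same view and (b) the co-sited pixel in the adjacent view. Smaller
# residual variance means cheaper entropy coding.

x = np.linspace(0, 8 * np.pi, 256)
base = (100 * np.sin(x)[None, :] * np.cos(x / 3)[:, None] + 120).astype(np.int16)
view0 = base
view1 = np.roll(base, 4, axis=1)          # crude stand-in for a 4-pixel disparity

intra = view0[:, 1:] - view0[:, :-1]      # within-view predictor residual
inter = view1 - view0                     # adjacent-view predictor residual
print("intra-view residual variance:", intra.var())
print("inter-view residual variance:", inter.var())
# The one-pixel within-view step beats the disparity-shifted inter-view
# difference, matching the paper's finding for prediction without disparity
# compensation; disparity estimation is what rescues the inter-view predictor.
```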
There are some interesting cases where image processing techniques applied independently to the left- and right-eye images can preserve some parallax within the accepted limits of vertical disparity. We investigate the application of simple geometric techniques to determine which of them alter stereo, and to what extent. We consider simple affine transformations such as translation, rotation, scale and shear, and show that interesting results can be obtained in some cases by applying a transformation to one eye's image and not the other. We show that any 2D transformation applied to one eye's view of a stereo pair that preserves lines/planes must be a composition of x-shears, x-translations and x-scales.
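That final claim can be written out explicitly. If "preserves stereo" is taken to mean that no vertical disparity is introduced, the transformation must leave the vertical coordinate unchanged, which in homogeneous coordinates forces the form

\[
\begin{pmatrix} x' \\ y' \end{pmatrix}
=
\begin{pmatrix} s & h & t \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
=
\begin{pmatrix} s\,x + h\,y + t \\ y \end{pmatrix},
\]

i.e. exactly a composition of an x-scale (s), an x-shear (h) and an x-translation (t).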
The Internet is proving to be a popular medium for the distribution of stereoscopic images. The availability of low-cost LCD shutter glasses and the high update rates of PC monitors are also factors driving this increased popularity. However, current methods of distributing stereoscopic media have a number of problems when compared to 2D media transmission, such as higher bandwidth requirements, larger file sizes, and duplication of media for 2D and 3D versions. An ideal format for stereoscopic media would minimize or eliminate these issues, would be compatible with standard browsers or media players, and would be independent of the stereo viewing method. A format that meets these criteria is described.
The goal of the present study was to examine size perception of objects depicted stereoscopically. Display size, target size, viewing distance, camera convergence distance, disparity information, and the scene background were manipulated. Subjects estimated the perceived height of a set of stereoscopic targets that were projected from photographic slides. The stereoscopic slides were taken with the targets either in a studio with a black background or outdoors with a backdrop of natural vegetation, using three camera convergence distances. The stereo slides were presented on two screens of different sizes and were viewed at three viewing distances. Disparity information was removed in half the trials. For the conditions examined, display and target size had a significant effect on perceived size; disparity and scene background had a small effect; viewing distance and camera convergence distance had a negligible effect. Interestingly, the range of perceived-height estimates was compressed relative to the actual target heights. Furthermore, on the small display, the smaller targets tended to be perceived as the same size as or larger than the actual targets, while the large targets tended to be perceived as smaller than the actual targets.
Binocular telepresence systems afford the opportunity of increasing the inter-camera distance (ICD) beyond the normal interocular distance (IOD), which magnifies the disparity information and improves performance in nulling and matching tasks. Here we examine whether telepresent observers can learn to use enhanced disparities to accurately perform tasks requiring the recovery of Euclidean geometry (a shape task). The design comprised three phases: pre-adaptation (ICD = 6.5 cm), adaptation (ICD = 3.25 or 13 cm) and post-adaptation (ICD = 6.5 cm). Telepresent observers were required to adjust the magnitude of a depth interval (specified by binocular disparity) so that it matched a 2D interval specified by two lights (set between 5 and 15 cm) in an otherwise blacked-out scene. In the adaptation phase, the ICD/IOD ratio was changed to 0.5 or 2, and observers adjusted the depth interval repeatedly until a performance criterion was reached. Two forms of feedback were given in the adaptation phase: direct, where another light was shown at the correct disparity; and symbolic, where a signed number indicated the magnitude and direction of the error. Observers were clearly affected by ICD/IOD changes but learned the new ratio rapidly under both feedback conditions.
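The effect of the ICD manipulation can be made explicit with the standard small-angle approximation (not taken from the paper): for a depth interval \(\Delta z\) at viewing distance \(z\) seen with camera separation \(b\), the relative disparity is approximately

\[
\eta \approx \frac{b\,\Delta z}{z^{2}},
\]

so replacing the normal IOD with an ICD of \(k \times\) IOD scales every disparity by \(k\). An observer who keeps interpreting disparities with the old calibration will misjudge depth intervals by roughly the factor \(1/k\), which is precisely the error that the feedback in the adaptation phase trains away.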
A field-sequential stereoscopic acquisition system based on off-the-shelf equipment and on in-house-developed software for interpolating fields to interlaced frames is described. The software relies on object-based image analysis, and the scheme is relatively robust for different types of scenes, including those with relatively fast motion and those with occluded and newly exposed areas. The off-the-shelf hardware consisted of a Sony DSR-PD1 with a Nu-View SX2000 adapter. The adapter is a lens attachment that allows two views to be recorded: a view through the lens and a view displaced from the lens. Thus, a left-eye view is recorded in the odd (even) field and the right-eye view is recorded in the even (odd) field. After processing, the stereoscopic images could be played back at a 120 Hz field rate and viewed without flicker and with smooth motion, using standard electronic shutter glasses synchronized with the display of the odd and even fields.
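The field-demultiplexing step implied by the adapter can be sketched as follows; the paper's object-based interpolation is not reproduced here, and simple line averaging stands in for it (the odd/even-to-eye assignment may be swapped, depending on adapter phase).

```python
# Split an interlaced frame into the two eye views, then upsample each
# half-height field back to full height. Line averaging is a placeholder
# for the paper's object-based interpolation.
import numpy as np

def split_fields(interlaced):
    left = interlaced[0::2]   # e.g. odd lines -> left eye
    right = interlaced[1::2]  # even lines -> right eye
    return left, right

def fill_missing_lines(field):
    """Upsample a half-height field to full height by line interpolation."""
    h, w = field.shape
    full = np.zeros((2 * h, w), dtype=field.dtype)
    full[0::2] = field                                      # recorded lines
    full[1:-1:2] = (field[:-1].astype(np.int32) + field[1:]) // 2
    full[-1] = field[-1]                                    # repeat last line
    return full
```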
IllusionHole is a stereoscopic display system designed for multiple users. It allows three or more people to simultaneously observe individual stereoscopic image pairs from their own viewpoints. The system tracks the head positions of all users and generates distortion-free images for each eye of each person. The system consists of a normal display and a display mask with a hole at its center, positioned over the display surface at a suitable distance. By controlling the position of the image drawing area for each user according to that user's viewpoint, each user can observe stereoscopic image pairs, shown in an individual area of the display, through shutter glasses. At the same time, each user cannot see the image drawing areas of the other users, because those areas are occluded by the display mask. Accordingly, the IllusionHole display system provides intelligible 3D stereoscopic images for three or more moving observers simultaneously, without flicker or distortion.
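The core geometry is simple enough to sketch. Assuming the display is the plane z = 0, the mask with a circular hole of radius hole_r sits at height mask_h, and a viewer's eye is at (ex, ey, ez), that viewer's drawing area is the central projection of the hole from the eye onto the display; the system's actual tracking and calibration are not described in the abstract.

```python
# Geometry sketch for the per-user drawing area (a simplification).
def drawing_area(eye, mask_h, hole_r):
    """Project the hole from the eye onto the display plane.

    Returns the centre and radius of the circular region on the display
    that this viewer sees through the hole; images drawn there are hidden
    from viewers standing elsewhere, because the mask occludes the region.
    """
    ex, ey, ez = eye
    s = ez / (ez - mask_h)  # scale factor of the central projection
    cx = ex * (1.0 - s)     # = -ex * mask_h / (ez - mask_h)
    cy = ey * (1.0 - s)
    return (cx, cy), hole_r * s

# Eye 0.8 m above the display, mask 0.2 m above it, 0.1 m hole radius.
centre, radius = drawing_area(eye=(0.3, 0.0, 0.8), mask_h=0.2, hole_r=0.1)
```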
In this paper, we describe a system that creates a virtual bird's-eye view from multi-camera images in real time, and its application to the HIR (Human-Oriented Information Restructuring) system for ITS that we have proposed. In recent years, studies on AHS (Advanced Highway Systems) have appeared in many fields. However, many problems remain before automated driving, the goal of AHS, can be realized. To overcome these problems, we have proposed the HIR system, which assists drivers by providing them with integrated and restructured images. The striking point of the proposed system, compared with conventional ones, is that it is the human driver, not the automated car, who recognizes the situation and controls the vehicle; to that end, we generate and display easy-to-understand images for the driver. The procedure is to integrate and restructure the numerous camera images from the driving environment, such as cars and roads, together with non-image information such as VICS (Vehicle Information and Communication Systems) data, then select the most important information according to the situation and present it in the form of an image. This paper proposes a bird's-eye view system as an example of HIR. We describe the algorithm and hardware used to create a bird's-eye view in real time. Experiments show that the bird's-eye view is useful as a driver-assisting image in the situation of a right turn at an intersection.
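The abstract does not give the view-synthesis algorithm, but a common way to obtain a bird's-eye view from a single calibrated camera, assuming a planar road, is inverse perspective mapping via a ground-plane homography; the OpenCV sketch below illustrates that generic idea, not the authors' real-time hardware.

```python
# Inverse perspective mapping: warp the image with the homography that
# maps four known ground-plane points to a top-down coordinate frame.
import cv2
import numpy as np

def birds_eye(image, ground_pts_px, ground_pts_m, px_per_m=50, size=(500, 500)):
    # ground_pts_px: four image points lying on the road plane
    # ground_pts_m:  the same four points in road coordinates (metres)
    dst = np.float32([[x * px_per_m, size[1] - y * px_per_m]
                      for x, y in ground_pts_m])
    H = cv2.getPerspectiveTransform(np.float32(ground_pts_px), dst)
    return cv2.warpPerspective(image, H, size)
```

A multi-camera version would warp each camera into the same road frame and composite the results.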
Coherent immersion of real objects in a virtual environment requires knowledge of their spatio-temporal behavior. In this paper we present a parameter-tracking method for moving objects that uses multiple video streams of the tracked objects as input. In our approach we assume that a parameterized model of the tracked subject is given, comprising a geometrical description and parameters corresponding to the subject's degrees of freedom. The method solves the tracking problem by finding the parameter vector that generates the synthetic views of the subject closest to the real images. This is done by introducing a matching measure between images based on dense comparisons, which provides reliability and robustness to occlusions. Tracking is achieved using an adapted simplex-based algorithm, which gives fast and reliable results. We also present model auto-refinement, which allows an imprecise model of the tracked object to be used and then refined to match reality. Automatic texture extraction recovers the real textures of objects, in order to improve model precision and realism, or for analysis purposes. Experiments on real video sequences are presented; they show the efficiency and robustness of the approach in various situations.
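The analysis-by-synthesis loop can be sketched as below, with scipy's Nelder-Mead standing in for the paper's adapted simplex method; render() and the dense difference measure are placeholders for the authors' components.

```python
# Find the parameter vector whose rendered views best match the cameras.
import numpy as np
from scipy.optimize import minimize

def track(params0, real_images, render):
    """render(params, cam_index) -> synthetic image for that camera."""
    def cost(params):
        # Dense comparison over all views; robust to single-view occlusion
        # because the other views still constrain the estimate.
        return sum(np.abs(render(params, i).astype(np.float32) - img).mean()
                   for i, img in enumerate(real_images))
    result = minimize(cost, params0, method="Nelder-Mead")
    return result.x  # tracked parameter vector for this frame

# Frame-to-frame tracking: use the previous frame's estimate as params0.
```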
Mobile augmented reality can be utilized in a number of different services, and it offers substantial added value over the interfaces used in mobile multimedia today. An intelligent service connectivity architecture is needed for the emerging commercial mobile augmented reality services, to guarantee mobility and interoperability on a global scale. Key responsibilities of this architecture are to find suitable service providers, to manage the connection with and utilization of such providers, and to allow smooth switching between them whenever the user moves out of the service area of the provider she is currently connected to. We have studied the potential support technologies for such architectures and propose a way to create an intelligent service connectivity architecture based on current and upcoming wireless networks, an Internet backbone, and mechanisms to manage service connectivity in the upper layers of the protocol stack. In this paper, we explain the key issues of service connectivity, describe the properties of our architecture, and analyze the functionality of an example system. On this basis, we argue that our proposal is a good solution for achieving global interoperability in mobile augmented reality services.
View synthesis has become a focus of attention in both the computer vision and computer graphics communities. It consists of creating novel images of a scene as it would appear from novel viewpoints. View synthesis can be used in a wide variety of applications such as video compression, graphics generation, virtual reality and entertainment. This paper addresses the following problem: given a dense disparity map between two reference images, synthesize a novel view of the same scene associated with a novel viewpoint. Most existing work relies on building a set of 3D meshes which are then projected onto the new image (the rendering is performed using texture mapping). The advantages of our view synthesis approach are as follows. First, the novel view is specified by a rotation and a translation, which are the most natural way to express the virtual location of the camera. Second, the approach is able to synthesize highly realistic images whose viewing position is significantly far from the reference viewpoints. Third, the approach handles the visibility problem during synthesis. Our framework has two main steps. The first (analysis) step consists of computing the homography at infinity, the epipoles, and thus the parallax field associated with the reference images. The second (synthesis) step consists of warping the reference image into a new one, based on the invariance of the computed parallax field. The analysis step works directly on the reference views and needs to be performed only once. Examples of synthesizing novel views using either feature correspondences or a dense disparity map demonstrate the feasibility of the proposed approach.
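The warping step can be summarized with the standard plane-plus-parallax relation, written here in the usual projective notation (the abstract does not give the authors' exact formulation): a pixel \(p\) in the reference view maps to

\[
p' \;\simeq\; H_{\infty}\, p \;+\; \gamma\, e',
\]

where \(H_{\infty}\) is the homography at infinity, \(e'\) is the epipole in the target view, and \(\gamma\) is the projective parallax of the pixel, computed from the disparity map. Because \(\gamma\) depends only on scene structure, the parallax field computed once in the analysis step can be reused for any novel viewpoint in the synthesis step.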
Retinal blurring resulting from the human eye's depth of focus has been shown to assist visual perception. Infinite focal depth within stereoscopically displayed virtual environments may cause undesirable effects; for instance, objects positioned at a distance in front of or behind the observer's fixation point will be perceived in sharp focus with large disparities, thereby causing diplopia. Although published research on the incorporation of synthetically generated Depth of Field (DoF) suggests that it might enhance perceived image quality, no quantitative reports of perceptual performance gains exist. This may be due to the difficulty of generating synthetic DoF dynamically, with focal distance actively linked to fixation distance. In this paper, such a system is described. A desktop stereographic display is used to project a virtual scene in which synthetically generated DoF is actively controlled from vergence-derived distance. A performance evaluation experiment was undertaken on this system, in which subjects carried out observations in a spatially complex virtual environment consisting of components interconnected by pipes on a distractive background. The subject was tasked with making an observation based on the connectivity of the components. The effects of focal depth variation in static and actively controlled focal distance conditions were investigated. The results and analysis presented show that performance gains may be achieved by the addition of synthetic DoF. The merits of applying synthetic DoF are discussed.
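The blur computation underlying synthetic DoF can be sketched with the standard thin-lens circle of confusion; this is ordinary optics, not the paper's rendering code, and the eye-like focal length and pupil diameter in the example are assumptions.

```python
# Blur an object in proportion to its circle of confusion, given the
# fixation (focus) distance derived from vergence. Units are metres.
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture_diam):
    """Diameter of the blur circle on the image plane for a point at
    obj_dist when the lens is focused at focus_dist."""
    # Image-side distances from the thin-lens equation 1/v = 1/f - 1/s.
    v_focus = 1.0 / (1.0 / focal_len - 1.0 / focus_dist)
    v_obj = 1.0 / (1.0 / focal_len - 1.0 / obj_dist)
    return aperture_diam * abs(v_obj - v_focus) / v_obj

# An object 0.4 m away while fixating at 1.0 m, with a 17 mm 'eye' and a
# 4 mm pupil, gets a much larger blur circle than one at 0.9 m:
print(circle_of_confusion(0.4, 1.0, 0.017, 0.004))
print(circle_of_confusion(0.9, 1.0, 0.017, 0.004))
```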
We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can guide the user while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model, as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field-based camera-data-generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced, describing the user's freedom to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest through high-level search criteria. We also present an informal user study evaluating this approach.
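A minimal potential-field sketch of the camera-data generation follows; the field weights and gradient step are illustrative assumptions, not the authors' tuning.

```python
# The camera descends a potential field: attracted to objects matching the
# user's current interest, repelled by obstacles. One gradient step:
import numpy as np

def camera_step(cam_pos, interests, obstacles, k_att=1.0, k_rep=0.5, lr=0.05):
    grad = np.zeros(3)
    for p in interests:  # attractive wells at objects of interest
        grad += k_att * (cam_pos - np.asarray(p))
    for p in obstacles:  # repulsive peaks at obstacles
        d = cam_pos - np.asarray(p)
        dist = np.linalg.norm(d) + 1e-6
        grad -= k_rep * d / dist**3
    return cam_pos - lr * grad  # descend the potential

pos = np.array([0.0, 1.7, 5.0])
pos = camera_step(pos, interests=[[2, 1, 0]], obstacles=[[1, 1, 2]])
```

Changing the user's search criteria simply moves the attractive wells, which is what lets the system react to rapidly changing interest without precalculated paths.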
This paper introduces VirtualExplorer, a customizable plugin-based virtual reality framework for immersive scientific data visualization, exploration and geometric modeling. The framework is layered on top of a run-time plugin system and a reconfigurable virtual user interface, and provides a variety of plugin components. The system provides access to scene-graph-based APIs, including Performer and OpenInventor, direct OpenGL support for visualization of time-critical data, as well as collision managers and generic device managers. Plugins can be loaded, disabled, enabled or unloaded at any time, triggered either through pre-defined events or through an external Python-based interface. The virtual user interface uses pre-defined geometric primitives that can be customized to meet application-specific needs. The entire widget set can be reconfigured dynamically on a per-widget basis or as a whole through a style manager. The system is being developed with a variety of application areas in mind, but its main emphasis is on user-guided data exploration and high-precision engineering design.
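The plugin life cycle described (load, enable, disable, unload at any time) might look like the following sketch; the class and method names are invented for illustration, and the abstract only tells us the real framework exposes such control through an external Python-based interface.

```python
# Hypothetical run-time plugin manager sketch.
class PluginManager:
    def __init__(self):
        self._plugins = {}          # name -> [module, enabled]

    def load(self, name):
        module = __import__(name)   # dynamic load at run time
        module.init_plugin()        # assumed plugin entry point
        self._plugins[name] = [module, True]

    def set_enabled(self, name, enabled):
        self._plugins[name][1] = enabled

    def unload(self, name):
        module, _ = self._plugins.pop(name)
        module.shutdown_plugin()    # assumed plugin exit point

    def update(self, dt):
        for module, enabled in self._plugins.values():
            if enabled:             # only enabled plugins get callbacks
                module.update(dt)
```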
In an on-going project we are developing methods and techniques for visualizing building services in our virtual room. First, we established a conversion and transmission path from a contractor's lighting modeling software to the virtual environment software. In the next phase we defined the conversion and transmission path for computational fluid dynamics data to the virtual room, and investigated suitable ways to explore and visualize it there. The goal of this sub-task is the visualization of air flow data in a photo-realistic room in such a way that a non-specialist can easily understand the behavior of the air flow. Another on-going effort is the evaluation of new navigation techniques. Our aim is to develop navigation techniques that allow an arbitrary visitor to explore the model without guidance. This requires the adaptation and testing of different interaction equipment and methods.
This paper proposes 'X'tal Head (Crystal Head),' which combines 'X'tal Vision (Crystal Vision)' with a fixed-screen-based visual system for 'telexistence.' X'tal Vision is a projection-based augmented-reality system composed of a projector with a small iris and a retroreflective screen. The fixed-screen-based telexistence visual system is based on a fixed-orientation link mechanism. We applied X'tal Head to telecommunication as a novel implementation of the traditional 'Talking Head' system. Because such systems use face-shaped screens, they face the difficulty of matching the projected face to the shape of the screen. Our solution, X'tal Head, takes another approach. On the remote end, a fixed-orientation camera captures the image and tracks the orientation of a person's head. At the receiving end, a user observes the remote person's head image projected onto a retroreflective screen. This screen is attached to a mechanism that is controlled to follow the remote person's head motion. If the remote person nods his or her head, the screen and the projected image nod together. Thus, the user can observe the stereoscopic head image with an improved sensation of existence.
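The screen-following behaviour can be sketched as a simple servo loop; the gains, update rate and actuator interface below are assumptions for illustration, not the authors' mechanism.

```python
# Servo the actuated screen's pan/tilt toward the remote person's tracked
# head orientation, so the projected face and the screen nod together.
import time

def follow_head(get_remote_head_angles, set_screen_angles, read_screen_angles,
                gain=4.0, dt=0.01):
    while True:
        target = get_remote_head_angles()   # (pan, tilt) from the tracker
        current = read_screen_angles()
        command = [c + gain * (t - c) * dt  # proportional step toward target
                   for t, c in zip(target, current)]
        set_screen_angles(command)
        time.sleep(dt)                      # control tick
```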
David F. McAllister, Bradley Edward Morris, Kris Matson, Richard Christopher Hogan, Donald H. Mershon, Celeste Marie Mayer, Ray Lim, Michael Holmes, Jay Tomlinson
A flight simulator was developed for studying the behavior of pilots in power-off aircraft landing situations. The simulation environment includes a 5-meter hemispherical dome in which the authors have installed a cockpit from a Cessna aircraft. The dome manufacturers provided their version of OpenGL 1.1. The graphics rendering software has undergone constant modification because of computer and projection hardware changes and a lack of knowledge and understanding of the manufacturer's undocumented version of OpenGL. The development team was led to believe that real-time rendering of photographic-quality images from 3D models was possible using the existing hardware and software. This was not true, even for very simple environments. Flat surfaces must undergo major tessellation to project correctly onto a hemispherical dome, which increases the number of polygons to be rendered by orders of magnitude. The tessellation also reverses some depth relationships, which causes parts of objects to disappear and reappear during the simulation. In addition, aliasing artifacts are severe because of the limited resolution and lack of antialiasing capabilities of the hardware. The authors document their experiences and their solutions to some of the rendering problems encountered.
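The tessellation problem can be made concrete with a generic angular (fisheye) dome mapping; the vendor's actual projection was undocumented, so the mapping and error threshold below are illustrative.

```python
# The mapping from a 3D direction to dome image coordinates is nonlinear,
# but the hardware interpolates linearly between projected vertices, so
# long edges must be subdivided until the linear approximation is close.
import math

def dome_project(x, y, z, fov=math.pi):
    """Map a view-space direction to normalized dome image coordinates."""
    r = math.sqrt(x * x + y * y)
    if r == 0.0:
        return 0.0, 0.0
    theta = math.atan2(r, z)       # angle off the dome axis
    k = (theta / (fov / 2.0)) / r  # radial fisheye distortion
    return x * k, y * k

def subdivide(p0, p1, max_err=0.002):
    """Recursively split an edge until linear interpolation between its
    projected endpoints stays close to the true (curved) projection."""
    mid = [(a + b) / 2.0 for a, b in zip(p0, p1)]
    lin = [(a + b) / 2.0 for a, b in zip(dome_project(*p0), dome_project(*p1))]
    true = dome_project(*mid)
    if math.hypot(true[0] - lin[0], true[1] - lin[1]) < max_err:
        return [p0, p1]
    return subdivide(p0, mid, max_err)[:-1] + subdivide(mid, p1, max_err)
```

Each halving of the error tolerance roughly doubles the vertex count per edge, which is why polygon counts grow by orders of magnitude.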
Spatially Immersive Displays ('virtual rooms') and other variations have become readily available in the past few years. However, building one yourself is also possible, and this is what was attempted at Helsinki University of Technology. Two immersive display systems were built when a new building was completed: one with a single wall and 7-channel audio, and the other a full four-wall system with 16-channel audio. The differences and similarities between these systems are discussed, as well as the requirements a virtual projection system places on the building infrastructure, such as power, cooling, and lighting. The local modifications made to the buildings to fulfill these requirements are described. A generic, simple virtual room recipe is presented, with details concerning frames and screens, which are the most complex parts. Special attention is paid to corners and edges, where disturbing artifacts may easily become visible. A technique used in the HUT virtual room, in which the visible edges are removed via a particular frame construction, is presented.
This paper presents the audio system built for the virtual room at Helsinki University of Technology. First we discuss the general problems for multichannel sound reproduction caused by the construction of, and the equipment in, virtual rooms. We also describe the acoustics of the room in question, and the effect of the back-projected screens and reverberation on the sound. Compensation for the spectral deficiencies is introduced, along with the problems of the large listening area and high-frequency attenuation. The hardware configuration used for sound reproduction is briefly described. We also report on the software applications and libraries built for sound signal processing and 3D sound reproduction.
While head-mounted display (HMD) technologies have undergone great development since the first HMD for 3D visualization, originated by Ivan Sutherland in the 1960s, they still suffer from tradeoffs between capabilities and limitations. The main issues of HMDs include the tradeoff between resolution and field of view (FOV), large distortion in wide-field-of-view designs, inaccurate eye-point representation, the conflict between accommodation and convergence, the occlusion contradiction between virtual and real objects, the challenge of highly precise registration, and often a brightness conflict with background illumination. Some of these issues critically affect the accuracy of visualization, user performance and safety. Among them, the neglect of eye movement is a typical example: most current HMD technology uses head pose to approximate the line of sight, which may cause a significant disparity between where the system assumes the user is looking and where the user actually looks, since the eyes can rotate by at least +/- 20 degrees. It is therefore highly desirable to integrate eye-tracking capability into HMDs for demanding applications. An HMD-eyetracker integrated system would display stereoscopic virtual images as a classical HMD does, and also track the gaze direction of the user. This paper provides a brief survey of eye-tracking technologies suitable for HMD integration, presents an approach to integrating eye tracking into optical see-through HMDs, and discusses the critical issues challenging the integration. Early engineering implementation and experimental results are also presented.
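The role of the eye tracker can be summarized in one composition: the true line of sight is the head orientation composed with the eye-in-head rotation. The sketch below uses scipy for the rotation algebra as an illustrative choice; the conventions are assumptions, not the paper's.

```python
# World-space gaze ray from head pose plus tracked eye angles.
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_direction(head_quat, eye_azimuth_deg, eye_elevation_deg):
    head = R.from_quat(head_quat)               # from the head tracker
    eye = R.from_euler("yx", [eye_azimuth_deg,  # from the eye tracker
                              eye_elevation_deg], degrees=True)
    forward = np.array([0.0, 0.0, -1.0])        # head-frame view axis
    return (head * eye).apply(forward)

# With the eye 20 degrees off-axis, head pose alone mispoints the ray:
print(gaze_direction([0, 0, 0, 1], 20.0, 0.0))
```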
The NYU Media Research Laboratory has developed a single-person, non-invasive, active autostereoscopic display with no mechanically moving parts that provides a realistic stereoscopic image over a large continuous viewing area and range of distance [Perlin]. We believe this to be the first such display in existence. The display uses eye tracking to determine the pitch and placement of a dynamic parallax barrier, but rather than using the even/odd interlace found in other parallax barrier systems, the NYU system uses wide vertical stripes both in the barrier structure and in the interlaced image. The system rapidly cycles through three different positional phases for every frame so that the stripes of the individual phases are not perceived by the user. By this combination of temporal and spatial multiplexing, we are able to deliver full screen resolution to each eye of an observer at any position within an angular volume of 20 degrees horizontally and vertically and over a distance range of 0.3 - 1.5 meters. We include a discussion of recent hardware and software improvements made in the second generation of the display. Hardware improvements have increased contrast, reduced flicker, improved eye tracking, and allowed the incorporation of OpenGL acceleration. Software improvements have increased frame rate, reduced latency and visual artifacts, and improved the robustness and accuracy of calibration. New directions for research are also discussed.
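The quantities the dynamic barrier must solve for on every head update can be sketched with textbook parallax-barrier geometry; these formulas are generic, not NYU's actual code.

```python
# Choose the stripe pitch so adjacent barrier stripes line up with
# alternate image columns at the viewer's distance, and shift the barrier
# phase as the head moves laterally.
def barrier_pitch(pixel_pitch, viewing_dist, gap):
    # Stripes slightly narrower than two image columns, so the pattern
    # converges at the viewer rather than at infinity.
    return 2.0 * pixel_pitch * viewing_dist / (viewing_dist + gap)

def barrier_shift(head_x, viewing_dist, gap):
    # Lateral head motion is compensated by shifting the barrier by the
    # parallax of the head position as seen across the barrier-panel gap.
    return head_x * gap / viewing_dist

# E.g. a 0.294 mm pixel pitch, 0.75 m viewing distance, 10 mm gap:
p = barrier_pitch(pixel_pitch=0.000294, viewing_dist=0.75, gap=0.01)
```

With wide stripes, the same relations hold per stripe period; cycling the three positional phases then hides the stripe boundaries while delivering full resolution to each eye.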
Computer-based Stereoscopic Imaging and Applications
A wide range of technology is now available for the projection of high-quality stereoscopic presentations. This paper reviews the software tools currently available for generating presentations that contain stereoscopic images, and the hardware available for projecting those presentations. It also discusses the need for more tools to take the difficulty out of preparing and projecting stereoscopic presentations.
DLP (Digital Light Processing) is about to invade stereo applications, one of the last bastions of CRT projection technology. This paper presents various methods for achieving stereo and their application to DLP projectors. The newly developed sequential-stereo-capable projectors are also introduced, and their performance characteristics and artifacts are discussed. Also presented are ways to employ these projectors to serve multiple simultaneous viewers.