KEYWORDS: Holograms, Holography, Integral imaging, Image sensors, 3D image reconstruction, Computer generated holography, Imaging systems, 3D image processing, Spatial light modulators, Signal to noise ratio
We demonstrate a depth measurement method for holographic images using integral imaging. The depth information of a holographic image can be obtained in a single capture with a conventional integral imaging pickup system composed of a micro-lens array (MLA) and an image sensor. To verify the feasibility of the proposed method, an elemental image set of holographic images formed by an MLA was generated by computer, and refocused images at different planes were then reconstructed numerically for depth measurement using the computational integral imaging reconstruction (CIIR) technique. Note that we set the distance between the MLA and the image sensor to the focal length of the micro lenses to obtain a large depth of focus. The numerical results show that the depth of holographic images can be measured successfully. However, refocused images from an optically captured elemental image set provide poor depth discrimination because of errors in the distance between the MLA and the image sensor: when the image sensor lies outside the MLA focal plane, only objects within a particular narrow depth range come into sharp focus. Simulated results under this condition matched the experimental results reasonably well.
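The CIIR step described above can be sketched as a shift-and-average of the elemental images, where the per-lens pixel shift depends on the reconstruction depth. The function and parameter names below are illustrative, not from the paper; occlusion handling and sub-pixel interpolation are omitted.

```python
import numpy as np

def ciir_refocus(elemental_images, pitch, gap, depth, pixel_size):
    """Computational integral imaging reconstruction (CIIR) sketch:
    project each elemental image onto the reconstruction plane with a
    depth-dependent shift, then average the overlapping projections.

    elemental_images: 4-D array (rows, cols, h, w) of elemental images
    pitch: lens pitch [mm]; gap: MLA-to-sensor distance [mm]
    depth: reconstruction plane distance from the MLA [mm]
    pixel_size: sensor pixel size [mm]
    """
    rows, cols, h, w = elemental_images.shape
    # Pixel disparity between neighboring elemental images at this depth
    shift = pitch * gap / (depth * pixel_size)
    H = int(h + shift * (rows - 1)) + 1
    W = int(w + shift * (cols - 1)) + 1
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for i in range(rows):
        for j in range(cols):
            y, x = int(round(i * shift)), int(round(j * shift))
            acc[y:y + h, x:x + w] += elemental_images[i, j]
            cnt[y:y + h, x:x + w] += 1
    # Overlap-normalized refocused image; objects at `depth` align sharply
    return acc / np.maximum(cnt, 1)
```

Sweeping `depth` and measuring sharpness of each refocused image yields the depth estimate the abstract describes.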
Recently, computer-generated holograms (CGHs) calculated from real existing objects have been actively investigated to support holographic video and TV applications. In this paper, we propose a method for generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing over a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed through numerical reconstruction that the 3-D scenes are faithfully reconstructed.
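A point-based CGH computation like the one referred to above sums a spherical wave from each 3-D point source over the hologram plane. This is a minimal sketch under assumed parameters (all names are illustrative); a practical implementation would add occlusion culling, random initial phases, and a proper reference-wave encoding.

```python
import numpy as np

def point_cgh(points, amplitudes, wavelength, pitch, nx, ny):
    """Point-based CGH sketch: superpose spherical waves from 3-D point
    sources on an nx-by-ny hologram plane with pixel pitch `pitch`.
    points: iterable of (px, py, pz) in meters, pz = distance to plane
    amplitudes: per-point real amplitudes
    """
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)                  # hologram-plane coordinates
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from the point
    # Real part approximates an amplitude hologram (on-axis plane reference)
    return np.real(field)
```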
In this paper, we propose a new way to overcome stereoscopic depth distortion, a common shortcoming of stereoscopy based on computer graphics (CG). The idea is to transform the object space into a pre-distorted space so that the perceived depth is correct, as if one were viewing a scaled object volume adjusted to the user's stereoscopic viewing conditions. All parameters related to the distortion, such as the focal length, the inter-camera distance, the angle between the camera axes, the display size, the viewing distance, and the eye separation, can be mapped to the corresponding amount of inverse distortion in the transformed object space through the linear relationship between the reconstructed image space and the object space. The depth distortion is thus removed once the image reconstruction process is applied to the distorted object space. We prepared stereo images with correctly scaled depths from -200 mm to +200 mm at 100 mm intervals relative to the display plane under a standard stereoscopic viewing condition and showed them to five subjects. All subjects recognized and indicated the designed depths.
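The inverse mapping between perceived depth and screen disparity that underlies such a pre-distortion can be sketched with a simple two-eye geometric model. The formulas and default values below are a standard textbook model with illustrative numbers, not the paper's exact parameter set.

```python
def perceived_depth(disparity_mm, eye_sep_mm=65.0, view_dist_mm=600.0):
    """Depth at which a point appears relative to the display plane,
    given its on-screen disparity (positive = uncrossed = behind the
    screen), from similar triangles between the two eyes and the screen."""
    return view_dist_mm * disparity_mm / (eye_sep_mm - disparity_mm)

def required_disparity(target_depth_mm, eye_sep_mm=65.0, view_dist_mm=600.0):
    """Inverse relation: the screen disparity that makes a point appear
    at the target depth. Rendering with this pre-distorted disparity
    removes the depth distortion, in the spirit of the abstract above."""
    return eye_sep_mm * target_depth_mm / (view_dist_mm + target_depth_mm)
```

For example, a target depth of +200 mm maps to a disparity, and feeding that disparity back through the forward model recovers exactly 200 mm, confirming the two relations are inverses.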
KEYWORDS: Computer programming, 3D modeling, Solid modeling, Visualization, Computer simulations, Iterated function systems, Surgery, Visual compression, Digital electronics, Data communications
In MPEG-4, 3-D mesh coding (3DMC) achieves a 40:1 to 50:1 compression ratio over 3-D meshes (in the VRML IndexedFaceSet representation) without noticeable visual degradation. This substantial gain does not come for free: 3DMC changes the vertex and face permutation order of the original 3-D mesh model. This change can cause serious problems for animation, editing operations, and special effects, where the original permutation order is critical not only to the mesh representation but also to the related tools. To fix this problem, the vertex and face permutation order must be transmitted additionally, which increases the bitstream size. In this paper, we propose a novel compression algorithm that addresses the vertex and face permutation order change caused by 3DMC encoding with minimal side information. The proposed coding method is based on an adaptive probability model, which allocates progressively shorter codewords to the vertex and face permutation orders within each distinguishable unit as encoding proceeds. In addition to the adaptive probability model, we further increase the coding efficiency by representing and encoding the vertex and face permutation order per connected component (CC). Simulation results demonstrate that the proposed algorithm encodes the vertex and face permutation order losslessly while saving up to 12% of the bits compared with the logarithmic representation based on a fixed probability model.
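The gain of the adaptive model over the fixed logarithmic representation can be illustrated with a back-of-the-envelope bit count: once an item's position is encoded, the pool of remaining candidates shrinks, so later codewords need fewer bits. This is only a sketch of the counting argument, not the paper's exact entropy coder.

```python
import math

def adaptive_permutation_bits(n):
    """Bits to encode a permutation of n items when each successive
    item is coded with ceil(log2(remaining candidates)) bits; the
    candidate count drops by one per symbol, shortening codewords as
    encoding proceeds (the idea behind the adaptive model)."""
    return sum(math.ceil(math.log2(r)) for r in range(2, n + 1))

def fixed_log_bits(n):
    """Fixed-probability baseline: every index costs ceil(log2(n)) bits."""
    return n * math.ceil(math.log2(n))
```

For n = 8, the adaptive count is 17 bits versus 24 bits for the fixed representation, a saving of roughly 29% on this toy case; real meshes, coded per connected component, show smaller but still significant savings.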
In order to transmit or store three-dimensional (3-D) mesh models efficiently, we need to simplify them. Although the quadric error metric (QEM) provides fast and accurate geometric simplification of 3-D mesh models, it cannot capture discontinuities faithfully. Recently, an enhanced QEM based on subdivided edge classification has been proposed to handle this problem. Although it captures discontinuities well, it suffers slight degradation in reconstruction quality. In this paper, we propose a novel mesh simplification algorithm that employs a normal variation error metric, instead of QEM, to resolve the quality degradation issue. We also modify the subdivided edge classification algorithm to cooperate with the normal variation error metric while preserving discontinuities. We have tested the proposed algorithm with various 3-D VRML models. Simulation results demonstrate that the proposed algorithm provides good approximations while maintaining discontinuities well.
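A normal variation error metric of the kind named above scores a candidate edge collapse by how much the adjacent face normals change. The following sketch uses one plausible formulation (summed one-minus-cosine between normals before and after the collapse); the paper's exact weighting may differ.

```python
import numpy as np

def face_normal(a, b, c):
    """Unit normal of the triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def normal_variation_cost(faces_before, faces_after):
    """Normal variation error for a candidate edge collapse: sum of
    1 - cos(angle) between each affected face's normal before and
    after the collapse. Faces are triples of 3-D vertex arrays.
    Unchanged normals cost 0; a flipped face costs the maximum, 2,
    so collapses that smear sharp features are penalized heavily."""
    cost = 0.0
    for fb, fa in zip(faces_before, faces_after):
        nb = face_normal(*fb)
        na = face_normal(*fa)
        cost += 1.0 - float(np.dot(nb, na))
    return cost
```

In a simplification loop, edges would be kept in a priority queue ordered by this cost and collapsed cheapest-first, analogously to QEM-driven simplification.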