Senior Research Scientist at Korea Photonics Technology Institute
SPIE Involvement:
Author
Area of Expertise:
Autostereoscopic super multi-view 3D display,
Human factors for 3D depth recognition,
Diffractive optics,
Pseudo-holographic 3D display,
Geometrical optics
In holographic space, a continuous object space can be divided into several discrete sub-spaces, each satisfying the same depth of field (DoF). In wearable holographic devices in particular, this concept applies to the macroscopic field, in contrast to the microscopic one: because the resolving power of the eye in the human visual system is far lower than that of microscopic optics, high depth resolution is not needed for the eye to distinguish clearly among objects in depth. The whole object space can therefore be represented by a continuous but discrete depth of field (DDoF), i.e., a number of planes sampling the space according to its DoF. Each DoF plane must account for occlusion among object regions so that the occlusion phenomena induced along the visual axis within the eye's field of view are reproduced; this keeps the scene perceptually natural even though the combined, discontinuous DoF regions replace the continuous object space. The DDoF approach thus saves computation time in both hologram generation and reconstruction. This work mainly addresses the factors required in a stereo-hologram HMD: the stereoscopic DoF as a function of convergence, the least number of DDoF planes needed under normal viewing conditions (up to 10,000 mm), and the time saved over the whole holographic process by our method compared with existing ones. Consequently, this approach can be applied directly to the stereo-hologram HMD field to realize real-time holographic imaging.
KEYWORDS: Holograms, Clouds, Holographic interferometry, Sensors, Digital holography, Data acquisition, Data modeling, 3D modeling, Wave propagation, Computer generated holography, Holography, RGB color model
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point-cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation reduces the number of elements needed to represent the object; even though the computation time for a single element is larger than for a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is suited to fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point-cloud method is more appropriate when high resolution is needed. In this study, since the Kinect provides both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
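The depth-layer approach relies on wavefront propagation between parallel planes, which the FFT handles efficiently. As an illustrative sketch (not the authors' implementation; wavelength, pixel pitch, and distance below are arbitrary example values), one angular-spectrum propagation step can be written as:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field between two parallel planes (angular spectrum method).

    field: 2-D complex array sampled at `pitch` (m); z: propagation distance (m).
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z / wavelength * np.sqrt(np.abs(arg))),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a small square aperture 10 mm at 633 nm with 10 um pixels.
u0 = np.zeros((128, 128), dtype=complex)
u0[56:72, 56:72] = 1.0
u1 = angular_spectrum_propagate(u0, 633e-9, 10e-6, 10e-3)
```

Since the transfer function has unit modulus for propagating components, the step conserves the total intensity, which is a convenient sanity check.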
In this paper, we introduce a highly efficient and practical disparity estimation method using hierarchical bilateral filtering for real-time view synthesis. The proposed method is based on hierarchical stereo matching with hardware-efficient bilateral filtering, which differs from the exact bilateral filter: its purpose is an edge-preserving filter that can be efficiently parallelized in hardware. The proposed disparity estimation is essentially a coarse-to-fine application of stereo matching with bilateral filtering. It works as follows: first, a hierarchical image pyramid is constructed; the multi-scale algorithm then starts by applying local stereo matching to the downsampled images at the coarsest level of the hierarchy. After local stereo matching, the estimated disparity map is refined with bilateral filtering. The refined disparity map is then adaptively upsampled to the next finer level, where it serves as a prior for the corresponding local stereo matching, is filtered again, and so on. The method we propose is thus a combination of hierarchical stereo matching and hardware-efficient bilateral filtering. Visual comparison on real-world stereoscopic video clips shows that the method gives better results than one of the state-of-the-art methods in terms of robustness and computation time.
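The coarse-to-fine loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the matching cost is a single-pixel absolute difference and the bilateral refinement step is omitted, so only the pyramid-and-prior mechanics are shown.

```python
import numpy as np

def local_match(left, right, prior, radius):
    """Brute-force matching that searches +/-radius around a per-pixel prior disparity."""
    h, w = left.shape
    disp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            best, best_cost = prior[y, x], np.inf
            for d in range(int(prior[y, x]) - radius, int(prior[y, x]) + radius + 1):
                if 0 <= x - d < w:
                    cost = abs(left[y, x] - right[y, x - d])
                    if cost < best_cost:
                        best_cost, best = cost, d
            disp[y, x] = best
    return disp

def hierarchical_disparity(left, right, levels=2, radius=2):
    """Coarse-to-fine disparity: match at the coarsest level, then upsample the
    result (doubling both size and disparity values) as the prior for the next
    finer level. The bilateral refinement of each level is omitted here."""
    pyramid = [(left, right)]
    for _ in range(levels - 1):
        l, r = pyramid[-1]
        pyramid.append((l[::2, ::2], r[::2, ::2]))
    prior = np.zeros(pyramid[-1][0].shape)
    for l, r in reversed(pyramid):
        if prior.shape != l.shape:
            h, w = l.shape
            prior = np.repeat(np.repeat(prior, 2, 0), 2, 1)[:h, :w] * 2
        prior = local_match(l, r, prior, radius)
    return prior

# Synthetic example: the right view is the left view shifted by 4 pixels.
base = np.random.default_rng(0).random((16, 40))
left_img, right_img = base[:, :32], base[:, 4:36]
disp = hierarchical_disparity(left_img, right_img, levels=2, radius=2)
```

Note how the per-level search radius stays small (here ±2) even though the true disparity exceeds it; the upsampled prior carries the coarse estimate down the pyramid.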
Generally, aspheric glass lenses are manufactured using a glass molding press (GMP) method and a tungsten carbide mold core. This study analyzes the thermal deformation that occurs during the GMP process, and the results are applied to compensate an aspheric glass lens. After the compensation process, the form accuracy of the aspheric glass lenses improved from ∼3.7 to ∼0.35 μm, and the compensated lens complied with the actual specifications.
To expand the usable stereoscopic viewing zone in the depth direction and to remove the crosstalk induced by the structure of existing slanted lenticular lens sheets, a segmented lenticular lens having varying optical power (SL-VOP) is proposed.
KEYWORDS: Cameras, 3D modeling, Sensors, 3D scanning, Time division multiplexing, Infrared sensors, 3D image processing, Chest, Imaging systems, Visualization
This article proposes a 3D reconstruction method using multiple depth cameras. Since a depth camera acquires depth information from a single viewpoint, it is inadequate for 3D reconstruction on its own. To solve this problem, we use multiple depth cameras and acquire depth information from different viewpoints. However, with multiple depth cameras it is difficult to acquire accurate depth information because of interference among the cameras. We therefore propose a time-division multiplexing method in which the depth information is acquired from the different cameras sequentially. After acquiring the depth images, we extract features using the Fast Point Feature Histogram (FPFH) descriptor and then perform 3D registration with Sample Consensus Initial Alignment (SAC-IA). We reconstructed 3D human bodies with our system and measured body sizes to evaluate the accuracy of the 3D reconstruction.
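FPFH feature extraction and SAC-IA are typically run through a library such as PCL. As a hedged sketch of the rigid-alignment core that such pipelines solve once correspondences between two depth-camera point clouds are hypothesized, the least-squares rotation and translation (the Kabsch/SVD method) can be computed as:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponded points. SAC-IA additionally
    hypothesizes the correspondences from FPFH features; that step is not shown.
    """
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Synthetic check: recover a known rotation about z and a translation.
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
src_pts = np.random.default_rng(0).random((20, 3))
R_est, t_est = rigid_transform(src_pts, src_pts @ R_true.T + t_true)
```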
Holography is one method of recording the information of a real scene, but it requires coherent illumination, and the limited resolution of the pick-up device may strongly restrict the size of the object recorded. It is also possible to generate a hologram of a real scene under incoherent illumination using techniques like integral imaging or multiple imaging, but the spatial resolution these methods provide is usually quite poor. A hologram can also be computed from a virtual scene, but the heavy computational load limits the size of the scene, and it is difficult to create precise models of complicated objects. In this paper, we analyze the different techniques used to pick up 3D data from a real object, such as holography and integral imaging. We then present the first results of a simulator developed to evaluate the key parameters of hologram data according to the pick-up system. These preliminary results make it possible to evaluate the performance of each method and to choose the optimal one according to the resolution, the depth of field, or the angle of view.
In this paper, an optical non-contact sugar-content prediction system based on a near-infrared light-emitting diode (NIR-LED) lamp is proposed. In the same sugar-content determination process, the NIR-LED lamp reduced energy consumption by 86% compared with a halogen lamp, and its sugar-content predictions are shown to be at nearly the same level as those of the halogen-lamp system.
Autostereoscopic multi-view 3D display systems have fewer degrees of freedom in the observation directions, both horizontal and perpendicular to the display plane, than glasses-on types. In this paper, we propose a method that expands the width of the viewing zone in the depth direction while keeping the number of views in the horizontal direction, using a triple segmented-slanted parallax barrier (TS-SPB) in a glasses-off 3D display. The validity of the proposal is verified by optical simulation in an environment close to an actual case. As benefits, the maximum number of views in the horizontal direction becomes 2n, and the width of the viewing zone in the depth direction increases up to 3.36 times compared with the existing one-layered parallax barrier system.
Autostereoscopy is a common method of providing 3D perception to viewers without glasses. Such displays produce 3D images with a wide perspective and achieve the effect of observing different images on the same plane from different points of view. In autostereoscopic displays, crosstalk occurs when the left and right images are incompletely isolated, so that one leaks into the other. This paper presents a light-intensity simulator that calculates crosstalk at variable viewing positions by automatically tracking viewers' heads. To do so, we use a head-tracking technique based on infrared laser sensors to detect the observers' viewing positions. Preliminary results show that the proposed system is suitable for designing autostereoscopic displays while ensuring human safety.
Glasses-free three-dimensional stereoscopic display systems should generally take human factors into account. These factors include crosstalk, motion parallax, display type, lighting, age, unknown aspects of human-factors issues, and user experience. Among them, crosstalk is especially important because it reduces the 3D effect and induces eye fatigue or dizziness. For this reason, we consider a method of reducing crosstalk in three-dimensional stereoscopic display systems using a lenticular lens. An optical ray from the projection optical system is converted into the viewing-zone shape by the convolution of two apertures. Under this condition, the beam width can be minimized and controlled through the optical properties of the lenticular lens (refractive index, pitch, thickness, radius of curvature) and of the projector (projection distance, optical features). In this process, a Gaussian-shaped intensity distribution is converted into a rectangular one. The reduced beam width in turn reduces crosstalk, which was verified using the lenticular lens.
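The viewing-zone formation described above, the convolution of two apertures, can be illustrated numerically. In this hedged sketch (all widths are arbitrary example values, not the paper's lens parameters), convolving a narrow Gaussian beam profile with a wide rectangular lens aperture yields the flattened, near-rectangular profile the abstract refers to:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)       # position across the viewing zone (mm, illustrative)
dx = x[1] - x[0]

beam = np.exp(-x**2 / (2 * 0.2**2))    # narrow Gaussian beam from the projector
beam /= beam.sum() * dx                # normalize to unit area

aperture = (np.abs(x) <= 2.0).astype(float)   # rectangular lenticular-lens aperture

# Viewing-zone intensity profile as the convolution of the two apertures.
zone = np.convolve(beam, aperture, mode="same") * dx
```

Because the Gaussian is much narrower than the rectangle, the result has a flat top over most of the aperture width and falls off only near the edges, i.e., the Gaussian shape is converted into an approximately rectangular one.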
In this paper, we suggest a way of constructing an object-space transformation, a deliberately distorted object space, so that the perceived scaled depth matches the natural depth of an actual scene. A hybrid camera system is adopted as the tool for acquiring multi-view real images; it consists of two cameras, a depth camera used to capture the actual object's depth information and a common camera used to map color information onto the depth image. In previous work, we showed the feasibility of this concept, transforming an object space to obtain a natural depth sense, for a CG object space. Advancing that work, we show that multi-view images with correctly scaled depth and no depth distortion can also be perceived from actual images adapted to a display of any size. Both systematic and observational stereoscopic constraints are applied to the distorted object space to produce a correctly scaled depth image in the reconstructed image space.
In this paper, we suggest a new way to overcome stereoscopic depth distortion, a shortcoming of common stereoscopy based on computer graphics (CG). The idea is to transform the object space into a distorted space such that the perceived depth is correct, as if one were viewing a scaled object volume adjusted to the user's stereoscopic circumstances. All parameters related to the distortion, such as the focal length, the inter-camera distance, the inner angle between the camera axes, the display size, the viewing distance, and the eye separation, can be altered by the amount of inverse distortion in the transformed object space, using the linear relationship between the reconstructed image space and the object space. The depth distortion is thus removed after the image-reconstruction process with the distorted object space. We prepared a stereo image with correctly scaled depths from -200 mm to +200 mm in 100 mm intervals about the display plane under standard stereoscopic viewing conditions and showed it to five subjects. All subjects recognized and indicated the designed depths.
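The perceived depth produced by a given screen disparity follows from the standard similar-triangles geometry of stereoscopic viewing. This is textbook geometry rather than the authors' specific distortion model, and the viewing distance and eye separation below are illustrative values only:

```python
def perceived_depth(disparity_mm, viewing_distance_mm=600.0, eye_separation_mm=65.0):
    """Perceived depth relative to the display plane (positive = behind it).

    From similar triangles between the eyes and the screen disparity d:
        Z = V * d / (e - d)
    where V is the viewing distance and e the eye separation. Crossed
    (negative) disparity yields depth in front of the display.
    """
    e, V, d = eye_separation_mm, viewing_distance_mm, disparity_mm
    return V * d / (e - d)
```

For example, an uncrossed disparity of half the eye separation places the point one full viewing distance behind the screen, while zero disparity lies exactly on the display plane.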
Mobile devices lack the space to configure cameras for either ortho- or hyper-stereoscopic conditions with a small display, so mobile stereoscopy cannot provide an observer with a strong sense of presence and depth. To solve this problem, we focus on a depth-sense control method with a switchable stereo-camera alignment. In the converging type, the fusible stereo area becomes wider than in the parallel type at the same focal length; the fusible area formed by the converging type thus equals that of a parallel type with a shorter focal length, producing a zoom-out effect in the reconstructed depth sense. In the diverging type, the fusible stereo area becomes narrower than in the parallel type; in the same way, the diverging type behaves like a parallel type with an increased focal length, producing a zoom-in effect. The stereoscopic zoom-in depth effect changes rapidly with increasing angle, whereas the zoom-out effect changes relatively slowly.
Metallic paint has been widely used to give various products a special visual appearance, so photorealistic rendering of this material has become an important issue in new-product development. We introduce a new approach that predicts the reflectance of metallic paint while considering manufacturing parameters. Our main idea is to simulate the appearance of metallic paints with different compositions of constituent materials by combining measured bidirectional reflectance distribution functions, and thereby to find a paint whose composition provides the optimal appearance for a product. We mainly focus on two paint parameters, the average size and the density of the aluminum flakes, because they significantly affect the appearance of metallic paint. We also present a compact representation that approximates a large volume of measured data with a few curve functions. We demonstrate the efficiency and usability of our reflectance estimation method with several examples.
This paper deals with a method to effectively compress the measured reflectance data of pearlescent paints. To simulate a coated surface realistically, the reflectance of pearlescent paints must be measured at multiple wavelengths, and such wavelength-based reflectance data require a large amount of storage. However, we can reduce the size of the measured BRDF while retaining its accuracy by using several factorization algorithms. In this paper, we analyze the decomposition of the measured BRDF of pearlescent paint and find the number of lobes, or basis functions, needed to retain the visual accuracy of the measured reflectance.
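Such a factorization can be illustrated with a truncated SVD, where the retained singular vectors play the role of the lobes or basis functions. The data below are a smooth synthetic stand-in for a measured spectral BRDF, not real reflectance; an actual pearlescent measurement would generally need more retained terms:

```python
import numpy as np

# Toy stand-in for measured data: rows = (view, light) direction samples,
# columns = wavelength bands. Smooth angular/spectral variation keeps it low-rank.
angles = np.linspace(0.0, np.pi / 2, 200)[:, None]
waves = np.linspace(400.0, 700.0, 31)[None, :]
brdf = np.cos(angles) * (waves / 700.0) + 0.3 * np.cos(3 * angles) * (waves / 700.0) ** 2

# Truncated SVD: keep k "lobes" (rank-1 terms) of the BRDF matrix.
U, s, Vt = np.linalg.svd(brdf, full_matrices=False)
k = 2
approx = (U[:, :k] * s[:k]) @ Vt[:k]

# Relative reconstruction error and compressed storage size.
err = np.linalg.norm(brdf - approx) / np.linalg.norm(brdf)
compressed_size = U[:, :k].size + s[:k].size + Vt[:k].size
```

The paper's question, how many lobes preserve visual accuracy, corresponds to sweeping `k` and checking where `err` drops below a perceptual threshold.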
Most recent bidirectional reflectance distribution function (BRDF) measurement systems are image-based and consist of a light source, a detector, and curved samples. They are useful for measuring the reflectance properties of a material, but they have two major drawbacks: the high cost of BRDF acquisition, and inaccurate results due to the limited use of spectral bands. In this paper, we propose a novel multispectral HDR imaging system and an efficient characterization method for it. It combines two promising technologies, high dynamic range (HDR) imaging and multispectral imaging, to measure BRDFs. We perform a full spectral recovery using camera response curves for each wavelength band, together with their analysis. For this, we use an HDR camera to capture HDR images and a liquid crystal tunable filter (LCTF) to generate multispectral images. Our method can provide accurate color reproduction of metameric objects as well as unsaturated images. Our multispectral HDR imaging system provides very fast data acquisition and a low setup cost compared with previous multispectral imaging systems and point-based commercial spectroradiometers. We verify the color accuracy of our multispectral HDR imaging system in terms of human vision and metamerism using colorimetric and spectral metrics.
In computer graphics applications, the standard diffusion approximation (SDA) is used to represent translucent materials. However, the SDA is not suitable for representing translucent materials with weakly scattering properties. We represent such materials using the P3 approximation, since it can represent weakly scattering translucent materials more accurately than the SDA. We also present an efficient and stable image-based bidirectional subsurface scattering reflectance distribution function measurement system that can measure the reflectance properties of homogeneous translucent materials. From the high-dynamic-range image of a translucent material acquired with the proposed image-based measurement system, the reduced scattering and absorption coefficients of the material are estimated to represent its translucency in computer graphics applications. We demonstrate the strength of the P3 approximation for representing translucent materials using various examples with realistic illumination based on an environment map.
A diverging-type stereo camera arrangement is introduced for hand-held mobile devices such as mobile phones, handheld PCs, and introscopes. By adjusting the diverging angle, the arrangement allows the inter-camera distance to be much smaller than in conventional stereo camera arrangements such as the parallel and radial types. Computer simulation shows that it can introduce more distortion than the parallel type but can enhance the depth sense.
The use of pearlescent paints has grown significantly for many industrial products, due to their special visual effect, which originates from optical interference between many small pearlescent platelets. This makes the visual appearance of pearlescent paints vary with the light and view direction. Since pearlescent paints are very sensitive to specific wavelength bands, multispectrum-based representation methods that use spectral distributions give us more accurate image synthesis of them than does the use of RGB-based ones. In this paper, we present a novel image-based goniospectrophotometer system and its characterization method to acquire the spectral bidirectional reflectance distribution functions (BRDFs) for realistic image synthesis of pearlescent paints by combining two promising technologies, namely, high-dynamic-range images and multispectral images. The capability of our system is demonstrated by generating rendering results from four different material samples and comparing them with the RGB-based results and fully measured BRDF data.
We present a novel high-dynamic-range (HDR) camera-based bidirectional reflectance distribution function (BRDF) measurement system that can measure the reflectance property of isotropic materials. Our developed system can measure the BRDF of highly specular materials much faster than previous systems. It measures highly dense BRDF samples for a wider reflection angle with less noise so that it provides accuracy that is necessary for computer graphics application. To estimate the reflectance of a given material, we perform an absolute photometric calibration for the HDR camera. Our system is verified by checking the Helmholtz reciprocity and comparing the performance with that of previous image-based systems. The capability and efficiency of the developed system is demonstrated by comparing the images generated by measure-and-fit and direct-rendering methods using the measured data of four different isotropic materials.
KEYWORDS: Bidirectional reflectance transmission function, High dynamic range imaging, Cameras, Light sources, Gold, Sensors, Data acquisition, Calibration, Metals, Imaging systems
We present a novel image-based BRDF (bidirectional reflectance distribution function) measurement system for materials with isotropic reflectance properties. Our proposed system is fast thanks to its simple setup and automated operation. It also provides wide angular coverage and noise-reduction capability, achieving the accuracy needed for computer graphics applications. We test the uniformity and constancy of the light source and the reciprocity of the measurement system. We perform a photometric calibration of the HDR (high dynamic range) camera to recover an accurate radiance map from each HDR image. We verify our proposed system by comparing it with a previous image-based BRDF measurement system, and we demonstrate its efficiency and accuracy by generating photorealistic images from the measured BRDF data, including glossy blue and green plastics, gold-coated metal, and gold metallic paint.
Recently, the mobile phone has become a necessity, and the mobile-phone camera a representative means of self-presentation and entertainment in daily life. A mobile phone with a stereo camera is an even more powerful tool for these purposes, as it can additionally present three-dimensional images to an observer. In this paper, we investigate the constraints for obtaining optimized stereovision when images are taken by a stereo camera on a mobile phone, constraints that make good stereo fusion possible. Theory and experiment address the permitted range of disparity, extracted from stereograms on the mobile display. The permitted horizontal and vertical disparities were up to +3.75 mm and +2.59 mm for a mobile phone with a 2.8" QVGA display, F/2.8 optics, a 54-degree field of view, and a 220 mm viewing distance. To examine suitability, the experiment was performed with ten subjects.
We present a depth map-based disparity estimation algorithm using a multi-view and depth camera system. When many objects are arranged in 3D space over a long depth range, the disparity search range must be large enough to find all correspondences. In this case, traditional disparity estimation algorithms that use a fixed disparity search range often produce mismatches if there are pixels with similar color distributions and similar textures along the epipolar line. To reduce the probability of mismatch and save computation time, we propose a novel depth map-based disparity estimation algorithm that uses a depth map captured by the depth camera to set the disparity search range adaptively, as well as to set the mid-point of that range. The proposed algorithm first converts the depth map into disparities for the stereo image pair to be matched, using calibrated camera parameters. Next, we set the disparity search range for each pixel based on the converted disparity. Finally, we estimate a disparity for each pixel between the stereo images. Simulation results with various test data sets demonstrated that the proposed algorithm outperforms the other algorithms in terms of smoothness, global quality, and computation time.
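The adaptive search-range idea can be sketched as follows, using the standard depth-to-disparity relation d = fB/Z. The focal length, baseline, and margin below are illustrative values, not the paper's calibration:

```python
import numpy as np

def disparity_prior(depth_mm, focal_px, baseline_mm):
    """Convert a depth-camera depth map (mm) to a disparity prior: d = f * B / Z."""
    return focal_px * baseline_mm / depth_mm

def adaptive_range(depth_mm, focal_px=500.0, baseline_mm=60.0, margin_px=3):
    """Per-pixel disparity search interval centered on the converted disparity,
    replacing a single fixed global search range."""
    d = disparity_prior(depth_mm, focal_px, baseline_mm)
    return d - margin_px, d + margin_px

# Example: a depth map with two pixels at 1 m and 2 m.
depth = np.array([[1000.0, 2000.0]])
lo, hi = adaptive_range(depth)
```

A stereo matcher would then search only within `[lo, hi]` at each pixel, which both shrinks the per-pixel search and rules out distant look-alike matches along the epipolar line.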
This paper presents a novel multi-depth-map fusion approach for 3D scene reconstruction. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusion and homogeneous areas. On the other hand, the depth map obtained from a depth camera is globally accurate but noisy and covers only a limited depth range. To combine the strengths of these two methods, we propose a fusion method that merges the multiple depth maps from stereo matching and from the depth camera. Using a 3-view camera system that includes a depth camera for the center view, we first obtain 3-view images and a depth map from the center-view depth camera. We then calculate camera parameters by camera calibration and use them to rectify the left- and right-view images with respect to the center-view image, satisfying the well-known epipolar constraint. Using the center-view image as a reference, we obtain two depth maps by stereo matching of the center-left and center-right image pairs. After preprocessing each depth map, we pick an appropriate depth value for each pixel from the processed depth maps based on depth reliability. Simulation results obtained with the proposed method showed improvements in some background regions.
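The per-pixel selection step can be sketched as follows. The selection rule here is a simplified stand-in for the paper's depth-reliability measure, and all numeric values are illustrative:

```python
import numpy as np

def fuse_depth(stereo_left, stereo_right, sensor, conf_left, conf_right,
               valid_range=(500.0, 5000.0)):
    """Pick one depth per pixel: the depth-camera value where it lies inside its
    valid range, otherwise the more confident of the two stereo estimates.

    All inputs are same-shape arrays (depths in mm, confidences in [0, 1]).
    """
    stereo = np.where(conf_left >= conf_right, stereo_left, stereo_right)
    valid = (sensor >= valid_range[0]) & (sensor <= valid_range[1])
    return np.where(valid, sensor, stereo)

# Example: the second pixel is outside the depth camera's range, so the more
# confident stereo estimate (center-right) wins there.
fused = fuse_depth(np.array([[900.0, 800.0]]), np.array([[950.0, 850.0]]),
                   np.array([[1000.0, 100.0]]),
                   np.array([[0.9, 0.2]]), np.array([[0.1, 0.8]]))
```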