We have already developed glasses-free three-dimensional (3-D) displays that use multiple projectors and a special diffuser screen, resulting in a highly realistic communication system. The system can display large 3-D images, 70 to 200 inches in size, with full high-definition video quality. The displayed 3-D images were, however, only computer-generated graphics or still images of actual objects. In this work, we studied a 3-D video capturing method for our multi-projection
3-D display. We analyzed the optimal arrangement of cameras for the display, and the image quality as
influenced by calibration error. In the experiments, we developed a prototype multi-camera system using 30 high-definition
video cameras. The captured images were corrected via image processing optimized for the display. We
successfully captured and displayed, for the first time, 3-D video of actual moving objects in our glasses-free 3-D video
system.
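The abstract does not detail the image-processing correction; as a hedged illustration only, the geometric part of such a correction can be as simple as a per-camera planar homography registering each captured frame to the display's reference plane (the function and calibration scheme below are assumptions, not the authors' pipeline):

```python
import cv2
import numpy as np

def correct_view(frame, src_pts, dst_pts, out_size):
    """Warp one camera's frame onto the display's reference plane.

    src_pts: calibration marker positions detected in the camera image (4+ points).
    dst_pts: the markers' known positions on the display plane.
    out_size: (width, height) of the corrected output image.
    """
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, out_size)

# Each of the 30 cameras would get its own homography, estimated once
# from calibration markers and then applied to every captured frame.
```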
KEYWORDS: Cameras, 3D displays, Imaging systems, Image processing, Associative arrays, 3D image processing, Stereoscopy, 3D visualizations, 3D modeling, Eye
Multi-view three-dimensional (3D) visualization on a 3D display requires reproduction of the scene light field. The complete light field of a scene could ideally be reproduced from images taken from infinitely many viewpoints; capturing such images, however, is not feasible in practice. Therefore, in this work, we propose a sparse-camera image capture system and an image-based virtual view generation method for 3D imaging applications. We show a virtual image produced by the proposed algorithm, which generates an in-between view from two real images captured with our multi-camera image capture system.
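The abstract leaves the in-between view algorithm unspecified; the Python sketch below is one minimal image-based stand-in, assuming a rectified 8-bit grayscale pair with horizontal-only parallax and using OpenCV block matching for correspondence. Holes left by the forward warp would need filling in a real system.

```python
import cv2
import numpy as np

def inbetween_view(left_gray, right_gray, alpha=0.5):
    """Synthesize a virtual view at fraction alpha between left (0) and right (1).

    left_gray, right_gray: rectified 8-bit single-channel images.
    """
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    h, w = left_gray.shape
    virtual = np.zeros_like(left_gray)
    ys, xs = np.indices((h, w))
    # Forward-warp left-view pixels part of the way toward the right view.
    xv = np.clip((xs - alpha * disp).astype(int), 0, w - 1)
    virtual[ys, xv] = left_gray[ys, xs]
    return virtual
```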
We present a general concept of the proposed 3D imaging system called 3D-geometric camera (3D-gCam) to pick
up pixel-wise 3D surface profile information along with color information. The 3D-gCam system includes two
isotropic light sources placed at different geometric locations, an optical alignment system for aligning the light
rays projected onto the scene, and a high-precision camera. To determine the pixel-wise distance information, the system captures two images of the same scene, strobing one light source for each exposure. The intensities at each pixel location in these two images, together with the displacement between the light sources, are then used to calculate the distance of the object point corresponding to each pixel, yielding a dense 3D point cloud. The approach is suitable for capturing 3D and color information synchronously in a high-definition image format.
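Neither this abstract nor the next states the ranging relation explicitly; the following is a plausible reconstruction under the inverse-square law, not the authors' published formula. If the point imaged at pixel $p$ lies at distances $r_1(p)$ and $r_2(p)$ from the two (equally powered, isotropic) sources and its reflectance is unchanged between the two exposures, then

$$\frac{I_1(p)}{I_2(p)} = \left(\frac{r_2(p)}{r_1(p)}\right)^2,$$

and if the second source sits a distance $d$ farther from the scene along the viewing axis, so that $r_2(p) = r_1(p) + d$,

$$r_1(p) = \frac{d}{\sqrt{I_1(p)/I_2(p)} - 1}.$$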
In this work, we present a novel concept for sensing the 3D surface profile of a scene along with its color image. The method includes two isotropic light sources placed at different geometric locations, an optical apparatus for aligning the light rays projected onto the scene, and a high-precision camera. To determine the pixel-wise distance information, the system captures two sequential images, strobing each light source in turn. The intensities at each pixel location in these two images are then used to calculate the distance of the object point corresponding to each pixel, generating a 3D surface profile of the scene. The approach is suitable for capturing color information and sensing 3D distance information synchronously in a high-definition format.
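A minimal numpy sketch of that per-pixel ranging step, under the same assumed inverse-square model and axial source displacement d sketched after the previous abstract (the function name and interface are hypothetical):

```python
import numpy as np

def depth_from_intensity_ratio(img1, img2, d, eps=1e-6):
    """Per-pixel distance to source 1, assuming inverse-square falloff.

    img1, img2: float images of the scene lit by source 1 only / source 2 only.
    d: how much farther source 2 sits from the scene along the viewing axis.
    """
    ratio = np.sqrt(img1 / np.maximum(img2, eps))   # sqrt(I1/I2) = r2/r1
    return d / np.maximum(ratio - 1.0, eps)         # r1 = d / (r2/r1 - 1)
```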
A general image analysis and segmentation method using fuzzy set classification and learning is described. The method uses a learned fuzzy representation of pixel region characteristics, based upon the conjunction and disjunction of extracted and derived fuzzy color and texture features. Both positive and negative exemplars of some visually apparent characteristic which forms the basis of the inspection, input by a human operator, are used together with a clustering algorithm to construct positive similarity membership functions and negative similarity membership functions. From these, composite fuzzified images, P and N, are produced using fuzzy union. Classification is accomplished via image defuzzification, whereby a linguistic meaning is assigned to each pixel in the fuzzy set using a fuzzy inference operation. The technique permits (1) strict color and texture discrimination, (2) machine learning of the color and texture characteristics of regions, and (3) judicious labeling of each pixel based upon the learned fuzzy representation and fuzzy classification. This approach appears ideal for applications involving visual inspection and allows the development of image-based inspection systems that may be trained and used by relatively unskilled workers. We show three different examples involving the visual inspection of mixed waste drums, lumber, and woven fabric.
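As a hedged sketch of this pipeline, the Python below uses k-means as the clustering algorithm (the abstract does not name one), Gaussian similarity around the learned cluster centers as the membership functions, max as the fuzzy union producing P and N, and a per-pixel P > N comparison as the defuzzification step; all names and parameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_memberships(features, k=3):
    """Cluster exemplar feature vectors; return centers and per-cluster spreads."""
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    centers = km.cluster_centers_
    spreads = np.array([features[km.labels_ == i].std() + 1e-6 for i in range(k)])
    return centers, spreads

def fuzzify(pixels, centers, spreads):
    """Composite membership image: fuzzy union (max) over Gaussian similarities."""
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / spreads) ** 2).max(axis=1)

def classify(pixels, pos_model, neg_model):
    """Defuzzify: label a pixel positive wherever P exceeds N."""
    P = fuzzify(pixels, *pos_model)  # from positive exemplars
    N = fuzzify(pixels, *neg_model)  # from negative exemplars
    return P > N
```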