With the first whole-body 3D scanner now available, the next adventure confronting the user is what to do with all of the data. While the system was built for anthropologists, it has created interest among users from a wide variety of fields. Users with applications in anthropology, costume design, garment design, entertainment, VR, and gaming need the data in formats unique to their fields. Data from the scanner is being converted to solid models for art and design and to NURBS for computer graphics applications. Motion capture has made scan data move and dance. The scanner has created a need for advanced application software, just as other scanners have in the past.
The Cyberware WB4 whole body scanner is one of the first scanning systems in the world that generates a high resolution data set of the outer surface of the human body. The Computerized Anthropometric Research and Design (CARD) Laboratory of Wright-Patterson AFB intends to use the scanner to enable quick and reliable acquisition of anthropometric data. For this purpose, a validation study was initiated to check the accuracy, reliability and errors of the system. A calibration object, consisting of two boxes and a cylinder, was scanned in several locations in the scanning space. The object dimensions in the resulting scans compared favorably to the actual dimensions of the calibration object.
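The comparison at the heart of such a validation can be sketched as follows; the dimension names and numbers below are illustrative stand-ins, not the study's actual data:

```python
# Given reference dimensions of a calibration object and dimensions
# measured in scans taken at several locations in the scanning space,
# report per-dimension error statistics (mean signed error, max |error|).

def dimension_errors(reference, measured_scans):
    """reference: dict name -> true length; measured_scans: list of dicts
    with the same keys, one per scan location."""
    stats = {}
    for name, true_val in reference.items():
        errs = [scan[name] - true_val for scan in measured_scans]
        mean_err = sum(errs) / len(errs)
        max_abs = max(abs(e) for e in errs)
        stats[name] = (mean_err, max_abs)
    return stats

# Hypothetical values in millimeters:
reference = {"box_width": 300.0, "cylinder_diameter": 150.0}
scans = [
    {"box_width": 300.4, "cylinder_diameter": 149.7},
    {"box_width": 299.8, "cylinder_diameter": 150.2},
]
print(dimension_errors(reference, scans))
```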
A non-contact body measurement system (BMS) is under development for use in making made-to-measure apparel and for other applications related to body measurement. The BMS design, which consists of six stationary structured-light projectors and six CCD cameras, is presented. The system acquires two-dimensional images of projected sinusoidal patterns using a phase-shifting technique similar to phase measurement profilometry. Given calibrated projector and camera geometrical parameters, the solution for calculating three-dimensional surface points of a human body from the camera images is developed. A statistical error analysis of the phase measurement and the three-dimensional point solution is presented in terms of system measurement errors. An operating developmental implementation of the BMS is described and pictured. Contour plots of test subjects taken with this system, showing digitized three-dimensional surface segments, are presented and discussed.
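The phase-shifting step can be illustrated with the standard four-step calculation (a minimal sketch under textbook assumptions; the actual BMS processing is more involved):

```python
import math

# Four images of a sinusoidal pattern, each shifted by 90 degrees, give
# the wrapped phase at a pixel via phi = atan2(I4 - I2, I1 - I3).

def wrapped_phase(i1, i2, i3, i4):
    return math.atan2(i4 - i2, i1 - i3)

# Simulated pixel: I_k = A + B*cos(phi + k*pi/2) with phi = 0.7
A, B, phi = 100.0, 50.0, 0.7
imgs = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(wrapped_phase(*imgs))  # ~0.7, recovering the simulated phase
```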
The extraction and location of feature points from range imaging is an important but difficult task in machine-vision-based measurement systems. Some feature points cannot be detected from purely geometric characteristics, particularly in measurement tasks related to the human body. The Loughborough Anthropometric Shadow Scanner (LASS) is a whole-body surface scanner based on a structured-light technique. Certain applications of LASS require accurate location of anthropometric landmarks in the scanned data. This is sometimes impossible from the existing raw data because some landmarks do not appear in it; identifying these landmarks has to resort to the surface texture of the scanned object. Modifications to LASS were made to allow gray-scale images to be captured before or after the object was scanned. The two-dimensional gray-scale image must then be mapped onto the scanned data to acquire the 3D coordinates of a landmark. The mapping method is based on the collinearity conditions and a ray-tracing method: if the camera center and the image coordinates are known, the corresponding object point must lie on the ray starting from the camera center and passing through the image point. Intersecting this ray with the scanned surface of the object yields the 3D coordinates of the point. Experimentation has demonstrated the feasibility of the method.
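The ray-casting step described above can be sketched with a standard ray-triangle intersection (Moller-Trumbore test); the mesh triangle and pixel ray below are hypothetical stand-ins for the scanned surface and a landmark's image coordinate:

```python
# Intersect the ray from the camera center through an image point with
# one triangle of the scanned surface mesh; the real mapping would test
# many triangles and keep the nearest hit.

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the 3D intersection point, or None if the ray misses."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:           # outside the triangle
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)               # distance along the ray
    return [origin[i] + t * direction[i] for i in range(3)] if t > eps else None

# Camera at the origin, pixel ray (0.2, 0.2, 1) hits a triangle in z = 1:
print(ray_triangle([0, 0, 0], [0.2, 0.2, 1.0],
                   [0, 0, 1], [1, 0, 1], [0, 1, 1]))  # [0.2, 0.2, 1.0]
```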
A system is proposed that simultaneously captures the three-dimensional shape of an object and its surface texture. The 3D acquisition system is based on an active technique but, in contrast to traditional active sensing, does not require scanning or the sequential projection of multiple patterns. The system projects a simple pattern of squares onto a scene and views it from a different angle. The underlying software automatically detects the projected pattern in the image and determines the shape. At the same time, the algorithm allows us to extract the texture from the same image, thereby avoiding shape/texture alignment problems. Furthermore, its one-shot operating principle enables the system to retrieve the shape of moving objects, such as talking heads. Experiments show that the algorithm is robust and provides accurate three-dimensional reconstructions. The experiments have been carried out on various industrial and other objects, faces, and other parts of the human body. The recovered shape also allows us to extract both textural and geometrical features that can be used for identification or authentication (of faces).
DCS has developed an improved topographical mapping system, the 3-D Areal Mapping System. This system operates by projecting a structured multiple-line laser light array onto a target surface, temporally modulating the array, sensing the reflected light with one or more off-axis video cameras, and triangulating along the center of each line in the array. Through the use of the structured light array, an entire area of the target surface may be mapped with no movement of target or mapping system. The system's temporal modulation scheme gives it a high degree of immunity to variations in background illumination, target surface reflectivity and texture, and target topography. A holographic optical element generates the structured-light array; this element significantly reduces the size, complexity, and cost of the laser projector as compared to preceding systems and also removes certain aberrations in the projected array.
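Triangulation along a projected line reduces to intersecting each pixel's viewing ray with a calibrated plane of laser light; a minimal sketch with hypothetical geometry (camera center at the origin, plane given as n . p = d):

```python
# Each surface point on a laser line lies both on the known laser plane
# and on the ray through the camera pixel that sees it; intersecting the
# two recovers its 3D position.

def laser_triangulate(ray_dir, plane_n, plane_d):
    denom = sum(n * r for n, r in zip(plane_n, ray_dir))
    t = plane_d / denom          # ray: p = t * ray_dir
    return [t * r for r in ray_dir]

# Laser plane z = 2 (n = (0,0,1), d = 2); pixel ray (0.1, 0.0, 1.0):
print(laser_triangulate([0.1, 0.0, 1.0], [0.0, 0.0, 1.0], 2.0))  # [0.2, 0.0, 2.0]
```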
Anthropometric surveys conducted by the military provide comprehensive human body measurement data that define human interface requirements for successful mission performance of weapon systems, including cockpits, protective equipment, and clothing. The application of human body dimensions to model humans and human-machine performance begins with engineering anthropometry. There are two critical elements to engineering anthropometry: data acquisition and data analysis. First, the human body is captured dimensionally, either with traditional anthropometric tools, such as calipers and tape measures, or with advanced image acquisition systems, such as a laser scanner. Next, numerous statistical analysis tools, such as multivariate modeling and feature envelopes, are used to effectively transition these data into the design and evaluation of equipment and work environments. Recently, Air Force technology transfer allowed researchers at the Computerized Anthropometric Research and Design (CARD) Laboratory at Wright-Patterson Air Force Base to work with the Dayton, Ohio area medical community in assessing the rate of wound healing and improving the fit of total contact burn masks. This paper describes the successful application of CARD Lab engineering anthropometry to two medically oriented human interface problems.
The applicability of a light-stripe-based obstacle detection system for an outdoor vehicle was studied, and a working prototype for a laboratory environment was implemented. The prototype uses a light-stripe projector and a smart photodiode matrix sensor. Special attention was paid to the ability of the algorithms to isolate the light stripes from the sensor image in an environment where many unwanted light sources are present. Knowledge of the light stripe's intensity distribution, together with the known optics and geometry, was used to differentiate between the light stripe and interference in the sensor image.
This talk summarizes the conclusions of a few of these laser scanning experiments on remote sites and the potential of the technology for imaging applications. Parameters to be considered for these types of activities relate to the design of a large-volume-of-view laser scanner, such as the depth of field, ambient light interference (especially outdoors), and the scanning strategies. The first case reviewed is an inspection application performed in a coal-burning power station located in Alberta, Canada. The second case is the digitizing of the ODS (Orbiter Docking System) at the Kennedy Space Center in Florida, and the third case is the digitizing of a large sculpture located outside the Canadian Museum of Civilisation in Ottawa-Hull, Canada.
The photonic mixer device (PMD) is a new electro-optical mixing semiconductor device. Integrated into a line or an array, it may contribute to a significant improvement in developing an extremely fast, flexible, robust, and low-cost 3D solid-state camera. Three-dimensional (3D) cameras are of dramatically increasing interest in industrial automation, especially for production-integrated quality control, in-house navigation, etc. The type of 3D camera under consideration here is based on the time-of-flight principle, i.e., the phase delay of surface-reflected echoes of rf-modulated light. In contrast to 3D laser radars, no scanner is required, since the whole 3D scene is illuminated simultaneously with intensity-modulated incoherent light, e.g., in the 10 to 1000 MHz range. The rf-modulated light reflected from the 3D scene carries the total depth information in the local delay of the backscattered phase front. If this incoming wavefront is again rf-modulated by a 2D mixer across the whole receiving aperture, a quasi-stationary rf-interference pattern, or rf-interferogram, results, which may be captured by a conventional CCD camera. This procedure is called rf-modulation interferometry (RFMI). According to first simulation results, the new PMD array is well suited to the RFMI procedure. Though it resembles a modified CCD array or CMOS photodetector array, it can perform both the pixelwise mixing process for phase-delay (depth) evaluation and the pixelwise light-intensity acquisition for gray-level or color evaluation. Further advantages are obtained with a four-quadrant (4Q) PMD array, which operates as a balanced in-phase/quadrature-phase (I/Q) mixer and will be able to capture the total 3D scene information of several 100,000 voxels within the microsecond-to-millisecond range.
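The depth evaluation behind this approach can be sketched from the phase delay of the rf modulation; the in-phase/quadrature (I/Q) demodulation below is a minimal illustration of the principle, not the PMD's actual circuitry:

```python
import math

# The mixer output gives in-phase (I) and quadrature (Q) components of
# the reflected rf-modulated light; phase delay and range follow as
#   phi = atan2(Q, I),   d = c * phi / (4 * pi * f_mod)
# (the factor 4*pi accounts for the round trip).
C = 299_792_458.0  # speed of light, m/s

def distance_from_iq(i, q, f_mod):
    phi = math.atan2(q, i) % (2 * math.pi)
    return C * phi / (4 * math.pi * f_mod)

# A target 1.5 m away under 20 MHz modulation delays the phase by
# phi = 4*pi*f*d/c; feeding cos/sin of that back in recovers the range.
f = 20e6
phi_true = 4 * math.pi * f * 1.5 / C
print(distance_from_iq(math.cos(phi_true), math.sin(phi_true), f))  # ~1.5
```

Note the unambiguous range is c / (2 f_mod), about 7.5 m at 20 MHz; larger distances wrap around, which is one reason multiple modulation frequencies are of interest.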
We present a projection moire method specially devised for the three-dimensional inspection of printed circuit boards. The method incorporates a phase-shifting technique in analyzing the moire fringes so as to achieve a fine resolution of 1 micron in height measurement. Further, a synchronous grating translation scheme enhances the lateral measuring resolution by inherently removing the original pattern of the reference grating from the resulting moire fringes. Finally, we discuss the advantages of the proposed method using several measurement results obtained on various types of solder paste silk-screened on printed circuit boards.
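The conversion from moire-fringe phase to height can be sketched as follows, assuming a grating pitch p and projection angle theta (both hypothetical values, not the paper's setup):

```python
import math

# In a simple projection moire geometry, one 2*pi fringe cycle
# corresponds to a height step of p / tan(theta), so a measured phase
# phi maps to height h = phi * p / (2 * pi * tan(theta)).

def height_from_phase(phi, pitch, theta):
    return phi * pitch / (2 * math.pi * math.tan(theta))

# Hypothetical 0.1 mm pitch, 30-degree projection, quarter-cycle phase:
print(height_from_phase(math.pi / 2, 0.1, math.radians(30)))  # mm
```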
In this paper, a 3D measurement procedure is presented which combines the gray-code light projection technique and the phase-shift method. The aim is to improve the performance of a 3D imaging system based on structured light projection. The measuring procedure based on gray-code light projection demonstrates the ability to measure objects presenting marked discontinuities of shape, such as steep slopes, grooves, and holes: the experimental tests performed show that the measurement accuracy is up to 0.42% and the precision 0.3% of the measuring range. The accuracy of the measurement is mainly limited by its resolution. To increase the system performance, the ability of the phase-shift method to achieve higher resolution has been exploited: the height values given by the two methods are combined in a procedure which shows increased resolution and accuracy over an extended measuring range. The basic aspects of the two techniques for 3D imaging and profiling are discussed here, the combined procedure which integrates them is detailed, and some relevant experimental results are reported.
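The combination of the two techniques can be sketched as follows: gray-code decoding assigns each pixel an absolute fringe order, and the phase-shift measurement supplies the fine wrapped phase (a minimal sketch, not the paper's exact procedure):

```python
import math

# The gray-code sequence gives each pixel an absolute fringe order k
# (robust at shape discontinuities); the phase-shift method gives a
# wrapped phase phi in [0, 2*pi); together they form the absolute,
# high-resolution fringe coordinate k * 2*pi + phi.

def gray_to_int(bits):
    """Decode a gray-code bit sequence (MSB first) into its integer value."""
    n = 0
    for b in bits:
        n = (n << 1) | (b ^ (n & 1))
    return n

def absolute_phase(gray_bits, wrapped_phi):
    return 2 * math.pi * gray_to_int(gray_bits) + wrapped_phi

print(gray_to_int([1, 1, 1]))  # gray code 111 decodes to fringe order 5
```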
Compression and decompression of volume data is gaining increasing importance, especially with wide use of volume visualization techniques in distributed environments. Hitherto researchers have used direct extensions of still-image and video compression techniques to volume data, but these are associated with limitations of scalar quantization. We present an orientation band (ORB) technique for volume compression that exploits orientation information, and consequently preserves structure within the data. The ORB scheme uses a hybrid of lossless (pyramidal) and lossy (vector quantized) techniques to compress within user-specified space and error bounds. The resulting compressed volumes are suited for progressive network transmission and for fast volume rendering.
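The lossy half of such a hybrid can be illustrated with a minimal vector quantizer (the codebook and vectors below are illustrative, not the ORB scheme itself):

```python
# Vector quantization replaces each data vector with the index of its
# nearest codebook entry, so only indices (plus the small codebook)
# need to be stored or transmitted.

def quantize(vectors, codebook):
    def d2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: d2(v, codebook[i]))
            for v in vectors]

codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
print(quantize([[0.1, 0.2], [0.9, 0.8], [0.2, 0.9]], codebook))  # [0, 1, 2]
```

Decompression is just a codebook lookup per index, which is part of why vector-quantized volumes render quickly.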
The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis, and manipulation of laser scan images. Specific examples presented are from ongoing efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual sets, 3D teleconferencing, and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints, and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labeled with the depth class numbers, using a coherence test on depth values that considers the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
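The multimodal thresholding step can be approximated with a simple largest-gap heuristic (a stand-in for the paper's method, which is not detailed here): a depth map with distinct objects shows clusters of depth values, and a class boundary placed at the widest gap separates two such modes.

```python
# Place one depth-class boundary at the largest gap in the sorted depth
# values; repeated recursively, this splits a depth map into classes.

def split_depth(depths):
    s = sorted(depths)
    gap, i = max((s[k + 1] - s[k], k) for k in range(len(s) - 1))
    return (s[i] + s[i + 1]) / 2  # boundary between the two classes

# Two depth clusters near 1.0 and 5.0 yield a boundary midway between them:
print(split_depth([0.9, 1.0, 1.1, 4.9, 5.0, 5.1]))
```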
Stereo analysis is similar in principle to the human visual system. Due to the way our eyes are positioned and controlled, our brains usually receive similar images of a scene taken from nearby points at the same horizontal level, so the relative position of an object's image differs between the two eyes. Our brains are capable of measuring this difference and thus estimating depth. Stereo analysis tries to imitate this principle: by locating corresponding positions in the two images, a stereo system can recover the geometrical relationships and thereby depth. Stereo computation is just one of the vision problems where the presence of outliers cannot be neglected. Most standard algorithms make unrealistic assumptions about noise distributions, which leads to erroneous results that cannot be corrected in subsequent processing stages. In this work the standard area-based correlation approach is modified so that it can tolerate a significant number of outliers. The approach exhibits robust behavior not only in the presence of mismatches but also in the case of depth discontinuities. Experimental results are given on synthetic images.
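The robustification idea can be sketched by truncating each pixel's contribution to the window cost (a minimal stand-in for the paper's estimator, with illustrative values):

```python
# Plain SAD correlation sums absolute differences over a window, so a
# single outlier pixel can dominate the score; capping each pixel's
# contribution bounds the influence of occluded or noisy pixels.

def truncated_sad(left_win, right_win, cap=20.0):
    return sum(min(abs(a - b), cap) for a, b in zip(left_win, right_win))

def best_disparity(left_win, right_candidates):
    """right_candidates[d] is the right-image window at disparity d."""
    costs = [truncated_sad(left_win, w) for w in right_candidates]
    return min(range(len(costs)), key=costs.__getitem__)

left = [10, 20, 30]
# Disparity 1 matches except for one gross outlier; truncation keeps it best:
print(best_disparity(left, [[60, 70, 80], [10, 20, 200]]))  # 1
```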
Developments in laser digitizing technology now make it possible to capture very accurate 3D images of the surface of the human body in less than 20 seconds. Applications for the images range from animation of movie characters to the design and visualization of clothing and individual equipment (CIE). In this paper we focus on modeling the user/equipment interface. Defining the relative geometry between user and equipment provides a better understanding of equipment performance and can make the design cycle more efficient. Computer-aided fit testing (CAFT) is the application of graphical and statistical techniques to visualize and quantify the human/equipment interface in virtual space. In short, CAFT seeks to measure the relative geometry between a user and his or her equipment. The design cycle changes with the introduction of CAFT: some evaluation may now be done in the CAD environment prior to prototyping. CAFT may be applied in two general ways: (1) to aid in the creation of new equipment designs and (2) to evaluate current designs for compliance with performance specifications. We demonstrate the application of CAFT with two examples. First, we show how a prototype helmet may be evaluated for fit, and second, we demonstrate how CAFT may be used to measure body armor coverage.
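One CAFT-style measurement, the stand-off distance between registered body and equipment surfaces, can be sketched as a nearest-point computation; the point sets below are hypothetical stand-ins for registered scan data:

```python
# For each body point, find the distance to the nearest equipment point;
# the resulting per-point clearances quantify the fit (zero or near-zero
# values indicate contact, large values indicate loose fit or gaps).

def clearances(body_pts, equip_pts):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [min(dist(p, q) for q in equip_pts) for p in body_pts]

head = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]       # scanned head points
liner = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.8)]      # helmet liner points
print(clearances(head, liner))                   # per-point stand-off
```

A production system would use a spatial index (e.g. a k-d tree) instead of this brute-force scan, but the measurement itself is the same.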
This article presents an investigation of stereo-based 3D surface reconstruction algorithms, providing an overview of the different approaches investigated in the stereo literature during the last decade. The study considers only two-view plain stereo algorithms and provides a classification of stereo approaches based on the features they use. In addition, the article provides full details of two different stereo algorithms that give an idea of how stereo works.
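Whatever features they match, all two-view stereo algorithms ultimately rest on the same triangulation geometry; for rectified cameras, depth follows from disparity as Z = f * B / d (focal length and baseline below are illustrative):

```python
# For rectified cameras with focal length f (in pixels) and baseline B
# (in meters), a pixel disparity d corresponds to depth Z = f * B / d.

def depth_from_disparity(d, f=700.0, baseline=0.12):
    return f * baseline / d

print(depth_from_disparity(42.0))  # 2.0 m (f * B / d = 84 / 42)
```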
A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.
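The image model that shape-from-shading techniques invert can be stated minimally under the Lambertian assumption (a sketch of the general principle, not the paper's full formulation):

```python
# Lambertian image model: a pixel's brightness depends on the angle
# between the surface normal n and the light direction l,
#   I = albedo * max(0, n . l),
# so observed intensities constrain the surface normals, from which
# shape-from-shading methods recover depth.

def shade(normal, light, albedo=1.0):
    ndotl = sum(a * b for a, b in zip(normal, light))
    return albedo * max(0.0, ndotl)

print(shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0: surface faces the light
```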