Ambient light in a scene can introduce errors into range data from most commercial three-dimensional range scanners, particularly scanners that are based on projected patterns and structured lighting. We study the effects of ambient light on a specific commercial scanner. We further present a method for characterizing the range accuracy as a function of ambient light distortions. After a brief review of related research, we first describe the capabilities of the scanner we used and define the experimental setup for our study. Then we present the results of the range characterization relative to ambient light. In these results, we note a systematic error source that appears to be an artifact due to a structured light pattern. We conclude with a discussion of this error and the physical meaning of the results overall.
We present our research efforts toward the deployment of 3-D sensing technology on an under-vehicle inspection robot. The 3-D sensing modality provides flexibility with respect to ambient lighting and illumination, in addition to ease of visualization, mobility, and increased confidence in inspection. We leverage laser-based range-imaging techniques to reconstruct the scene of interest and address various design challenges in the scene modeling pipeline. On these 3-D mesh models, we propose a curvature-based surface feature for interpreting the reconstructed 3-D geometry. The curvature variation measure (CVM), which we define as the entropic measure of curvature, quantifies surface complexity, indicative of the information present in the surface. We segment the digitized mesh models into smooth patches and represent the automotive scene as a graph network of patches, with the CVM at each node describing the corresponding surface patch. We demonstrate the descriptiveness of the CVM on manufacturer CAD and laser-scanned models.
KEYWORDS: 3D modeling, Data modeling, Systems modeling, Data acquisition, 3D acquisition, Motion models, 3D scanning, Data fusion, Error analysis, Sensors
3D models of real-world environments are becoming increasingly important for a variety of applications: vehicle simulators can be enhanced through accurate models of real-world terrain and objects; robotic security systems can benefit from as-built layouts of the facilities they patrol; and vehicle dynamics modeling and terrain-impact simulation can be improved through validation models generated by digitizing real tire/soil interactions. Recently, mobile scanning systems have been developed that allow 3D scanning systems to undergo the full range of motion necessary to acquire such real-world data in a fast, efficient manner. As with any digitization system, these mobile scanning systems have systematic errors that adversely affect the 3D models they are attempting to digitize. In addition to the errors introduced by the individual sensors, these systems also have uncertainties associated with the fusion of data from several instruments. Thus, one of the primary foci for 3D model building is to perform the data fusion and post-processing of the models in such a manner as to reconstruct the 3D geometry of the scanned surfaces as accurately as possible, while alleviating the uncertainties posed by the acquisition system. We have developed a modular scanning system that can be configured for a variety of application resolutions, as well as the algorithms necessary to fuse and process the acquired data. This paper presents the acquisition system and the tools used for constructing 3D models under uncertain real-world conditions, as well as experimental results on both synthetic and real 3D data.
The purpose of this research is to investigate imaging-based methods for reconstructing 3D CAD models of real-world objects. The methodology uses structured lighting technologies such as coded-pattern projection and laser-based triangulation to sample 3D points on the surfaces of objects and then to reconstruct these surfaces from the dense point samples. This reverse engineering (RE) research presents reconstruction results for a military tire that is important to tire-soil simulations. The limitation of this approach is the current level of accuracy that imaging-based systems offer relative to more traditional CMM modeling systems. The benefit, however, is the potential for denser point samples and increased scanning speeds, and with time, imaging technologies should continue to improve to compete with CMM accuracy. This approach to RE should lead to high-fidelity models of manufactured and prototyped components for comparison to the original CAD models and for simulation analysis. We focus this paper on the data collection and view registration problems within the RE pipeline.
State-of-the-art unmanned ground vehicles are capable of understanding and adapting to arbitrary road terrain for navigation. The robotic mobility platforms, mounted with sensors, detect and report security concerns for subsequent action. Often, information based on the localization of the unmanned vehicle alone is not sufficient for deploying army resources. In such a scenario, a three-dimensional (3D) map of the area that the ground vehicle has surveyed along its trajectory would provide a priori spatial knowledge for directing resources in an efficient manner. To that end, we propose a mobile, modular imaging system that incorporates multi-modal sensors for mapping unstructured, arbitrary terrain. Our proposed system leverages 3D laser-range sensors, video cameras, global positioning systems (GPS), and inertial measurement units (IMU) toward the generation of photo-realistic, geometrically accurate, geo-referenced 3D terrain models. Based on a summary of state-of-the-art systems, we address the need for, and hence several challenges in, the real-time deployment, integration, and visualization of data from multiple sensors. We document design issues concerning each of these sensors and present a simple temporal alignment method to integrate multi-sensor data into textured 3D models. These 3D models, in addition to serving as a priori maps for path planning, can also be used in simulators that study vehicle-terrain interaction. Furthermore, we show that our 3D models possess the accuracy required even for crack detection in road-surface inspection of airfields and highways.
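As a rough illustration of the kind of simple temporal alignment mentioned above, the sketch below associates each range-scan timestamp with the nearest GPS/IMU pose sample. The function name, nearest-neighbor policy, and data shapes are illustrative assumptions, not the authors' exact method; interpolation between pose samples would be a natural refinement.

```python
import numpy as np

def nearest_timestamp_alignment(scan_times, pose_times, poses):
    """Associate each range-scan line with the pose sample closest in time.

    Hypothetical sketch of a simple temporal alignment step: for every
    scan timestamp, pick the pose whose timestamp is nearest. Assumes
    pose_times is sorted in increasing order.
    """
    idx = np.searchsorted(pose_times, scan_times)
    idx = np.clip(idx, 1, len(pose_times) - 1)
    # Choose the left neighbor when it is strictly closer in time.
    left_closer = (scan_times - pose_times[idx - 1]) < (pose_times[idx] - scan_times)
    idx = idx - left_closer.astype(int)
    return poses[idx]

pose_times = np.array([0.0, 1.0, 2.0, 3.0])
poses = np.array([[0, 0], [1, 0], [2, 0], [3, 0]])   # toy 2-D pose samples
aligned = nearest_timestamp_alignment(np.array([0.1, 1.9, 2.6]), pose_times, poses)
assert (aligned == [[0, 0], [2, 0], [3, 0]]).all()
```

In practice each sensor stream would be timestamped against a common clock before this association is made.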
The Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at the University of Tennessee is currently developing a modular approach to unmanned systems to increase mission flexibility and aid system interoperability for security and surveillance applications. The main focus of the IRIS research is the development of sensor bricks, where the term brick denotes a self-contained system that consists of the sensor itself, a processing unit, wireless communications, and a power source. Prototypes of a variety of sensor bricks have been developed. These systems include a thermal imaging brick, a quad video brick, a 3D range brick, and a nuclear (gamma-ray and neutron) detection brick. These bricks have been integrated in a modular fashion into mobility platforms to form functional unmanned systems. Research avenues include sensor processing algorithms, system integration, communications architecture, multi-sensor fusion, sensor planning, sensor-based localization, and path planning. This research is focused toward security and surveillance applications such as under-vehicle inspection, wide-area perimeter surveillance, and high-value asset monitoring. This paper presents an overview of the IRIS research activities in modular robotics and includes results from prototype systems.
KEYWORDS: RGB color model, 3D scanning, Skin, Scanners, Cameras, Error analysis, Sensors, Commercial off the shelf technology, Manufacturing, Structured light
The characterization of commercial 3D scanners allows precise and useful data to be acquired. The accuracy of range and, more recently, color for 3D scanners is usually studied separately, but when a 3D scanner is based on structured light with a color-coded pattern, the influence of color on range accuracy should be investigated. The commercial product that we have tested has the particularity that it can acquire data under ambient light rather than in the controlled environment required by most available scanners. Therefore, based on related work in the literature and on experiments we have performed with a variety of standard illuminants, we have designed a setup to control illuminant interference. Basically, the setup consists of acquiring the well-known Macbeth ColorChecker under a controlled environment and also under ambient daylight. The results show variations with respect to color. We have performed several statistical studies to show how the range results evolve with respect to the RGB and HSV channels. In addition, a systematic noise error has been identified. This noise depends on the object's color: a subset of colors shows strong noise errors, while other colors have minimal or even no systematic error under the same illuminant.
Our research efforts focus on the deployment of 3D sensing capabilities on a multi-modal under-vehicle inspection robot. In this paper, we outline the various design challenges in automating the 3D scene modeling task. We employ laser-based range-imaging techniques to extract the geometry of a vehicle's undercarriage and present our results after range integration. We perform shape analysis on the digitized triangle mesh models by segmenting them into smooth surface patches based on the curvedness of the surface. Using a region-growing procedure, we then obtain the patch adjacency. On each of these patches, we apply our definition of the curvature variation measure (CVM) as a descriptor of surface shape complexity. We base the information-theoretic CVM on shape curvature, and extract shape information as the entropic measure of curvature to represent a component as a graph network of patches. The CVM at the nodes of the graph describes the surface patch. We then demonstrate our algorithm with results on automotive components. With a priori manufacturer information about the CAD models in the undercarriage, we approach the technical challenge of threat detection by applying our surface shape description algorithm to the laser-scanned geometry.
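To make the idea of an entropic curvature measure concrete, here is a minimal sketch: histogram the per-vertex curvature values of a patch, normalize to a probability distribution, and take the Shannon entropy. The bin count and histogram-based probability estimate are our assumptions for illustration, not the authors' exact CVM formulation.

```python
import numpy as np

def curvature_variation_measure(curvatures, bins=32):
    """Entropy of the curvature distribution over a surface patch.

    Illustrative sketch only: a flat or uniformly curved patch has all
    its curvature mass in one bin (entropy 0), while a patch whose
    curvature varies widely spreads over many bins (higher entropy).
    """
    hist, _ = np.histogram(curvatures, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

# A flat patch (constant curvature) carries no curvature information...
flat = np.zeros(1000)
# ...while a patch with widely varying curvature scores higher.
varied = np.random.default_rng(0).normal(size=1000)
assert curvature_variation_measure(flat) == 0.0
assert curvature_variation_measure(varied) > curvature_variation_measure(flat)
```

In a graph-of-patches representation, each node would carry this scalar as its shape-complexity descriptor.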
KEYWORDS: Inspection, Sensors, Computing systems, Data acquisition, Wireless communications, Data processing, Data modeling, Computer architecture, Ultraviolet radiation, Sensing systems
In this paper, a mobile scanning system for real-time under-vehicle inspection is presented, founded on a "brick" architecture. In this architecture, the inspection system is decomposed into bricks of three kinds: sensing, mobility, and computing. These bricks are physically and logically independent and communicate with each other wirelessly. Each brick is composed of five modules: data acquisition, data processing, data transmission, power, and self-management. These five modules can be further decomposed into submodules whose functions and interfaces are well defined. Based on this architecture, the system is built from four bricks: two sensing bricks consisting of a range scanner and a line CCD, one mobility brick, and one computing brick. The sensing bricks capture geometric and texture data of the under-vehicle scene, while the mobility brick provides positioning data along the motion path. Data of these three modalities are transmitted to the computing brick, where they are fused to reconstruct a 3D under-vehicle model for visualization and threat inspection. This system has been successfully used in several military applications and has proved to be an effective and safer method for national security.
The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGV). The targeted missions for these UGV systems include, but are not limited to, under-vehicle threat assessment, stand-off checkpoint inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this general overview, we focus on a specific problem in which we collect 3D range scans of under-vehicle carriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory behind these algorithms and present results from applying them to actual vehicle scans.
KEYWORDS: Image registration, 3D modeling, Cameras, Sensors, Optimization (mathematics), 3D image processing, Image processing, Medical imaging, Reflectivity, Data modeling
In this paper, we present a method for automatically registering a 3D range image and a 2D color image using the χ²-similarity metric. The goal of this registration is to allow the reconstruction of a scene using multi-sensor information. Traditional registration algorithms use invariant image features to drive the registration process. This approach limits the applicability to multi-modal data, since features of interest may not appear in each modality. The χ²-similarity metric, however, is an intensity-based approach with interesting multi-modal characteristics. We explore this metric as a mechanism to govern the registration search. Using range data from a Perceptron laser camera and color data from a Kodak digital camera, we present results using this automatic registration with the χ²-similarity metric.
Toward photo-realistic 3D scene reconstruction from range and color images, we present a statistical technique for multi-modal image registration. Statistical tools are employed to measure the dependence of two images, considered as random distributions of pixels, and to find the pose of one imaging system relative to the other. The similarity metrics used in our automatic registration algorithm are based on the chi-squared measure of dependence, which is presented as an alternative to the standard mutual information criterion. These two criteria belong to the class of information-theoretic similarity measures that quantify dependence in terms of the information provided by one image about the other. This approach requires a robust optimization scheme for the maximization of the similarity measure. To achieve accurate results, we investigated the use of heuristics such as genetic algorithms. The retrieved pose parameters are used to generate a texture map from the color image, and the occluded areas in this image are determined and labeled. Finally, the 3D scene is rendered as a triangular mesh with texture.
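The chi-squared measure of dependence can be sketched from joint and marginal intensity histograms: it compares the joint distribution of two images against the product of their marginals, so independent images score near zero while well-registered images score higher. The bin count and normalization below are illustrative assumptions, not the registration pipeline of the papers above.

```python
import numpy as np

def chi_squared_dependence(img_a, img_b, bins=32):
    """Chi-squared measure of statistical dependence between two images.

    Sketch of the information-theoretic idea: estimate the joint
    intensity distribution p(i, j) from a 2D histogram and measure how
    far it departs from the independence model p(i) * p(j).
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pij = joint / joint.sum()
    pi = pij.sum(axis=1, keepdims=True)   # marginal of image A
    pj = pij.sum(axis=0, keepdims=True)   # marginal of image B
    prod = pi * pj
    mask = prod > 0                       # avoid division by empty cells
    return float(np.sum((pij[mask] - prod[mask]) ** 2 / prod[mask]))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
# An image depends maximally on itself, minimally on unrelated noise.
assert chi_squared_dependence(a, a) > chi_squared_dependence(a, rng.random((64, 64)))
```

In a registration search, an optimizer (e.g., a genetic algorithm, as the abstract mentions) would adjust the pose parameters to maximize this score between the projected range image and the color image.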
This paper describes a method to reduce multi-texture 3D meshes using multi-resolution wavelet analysis. Large and dense multi-modal meshes require new methods for efficient display. We present a mesh simplification process that inherently deals with multi-dimensional data sets, controlled in a feature space composed of geometry, curvature, and the textures themselves. The result of the multi-resolution analysis (MRA), based on the 2D quincunx-wavelet transform, is treated as a texture map called the 'detail relevance'. Virtual range and texture images are captured from selected viewpoints located around the object. Detail extraction is achieved using a multi-resolution approach based on a wavelet cascade analysis. The MRA process extracts detail information at various resolutions and produces a texture image that encodes the relevance information attached to each vertex of the mesh. The user provides input to this process by selecting which resolutions are more relevant than others. This approach is useful for filtering noise, preserving discontinuities, mining for surface details, reducing data, and many other applications. We present simplification results for digital elevation maps and 3D objects.
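A simplified stand-in for such a multi-resolution detail map can be sketched with a plain 2×2 averaging pyramid: at each level, the detail is the residual the coarser level cannot represent, and squared detail energies are accumulated back at full resolution with user-chosen per-level weights. This averaging cascade and weighting scheme are our assumptions for illustration; the paper's actual MRA uses the quincunx-wavelet transform.

```python
import numpy as np

def detail_relevance(image, levels=3, weights=None):
    """Per-pixel 'detail relevance' map from a toy multi-resolution analysis.

    Illustrative sketch: smooth by 2x2 averaging at each level, take the
    residual as detail, and sum weighted squared detail energies back at
    the original resolution. User weights select which levels matter.
    """
    weights = weights or [1.0] * levels
    relevance = np.zeros_like(image, dtype=float)
    current = image.astype(float)
    for level in range(levels):
        h, w = current.shape
        # 2x2 block averaging produces the next coarser level.
        coarse = current[: h // 2 * 2, : w // 2 * 2].reshape(
            h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        detail = current[: up.shape[0], : up.shape[1]] - up
        # Accumulate squared detail energy at full resolution.
        scale = 2 ** level
        full = np.repeat(np.repeat(detail ** 2, scale, axis=0), scale, axis=1)
        relevance[: full.shape[0], : full.shape[1]] += weights[level] * full
        current = coarse
    return relevance

# A flat image contains no detail at any resolution.
assert detail_relevance(np.ones((16, 16))).max() == 0.0
```

In the simplification process described above, high-relevance pixels would protect the corresponding mesh vertices from decimation, while low-relevance regions could be aggressively reduced.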