3D modeling is an important topic in visual inspection for automatic quality control. Through visual inspection, it is possible to determine whether a product fulfills the required specifications or whether it contains surface or volume imperfections. Although some processes, such as color analysis, can be handled by 2D techniques, more challenging tasks, such as volume inspection of large and complex objects/scenes, may require accurate 3D registration techniques. 3D simultaneous localization and mapping has become a very important research topic, not only in the robotics field for solving problems such as robot navigation and mapping of 2D/3D scenarios, but also in the computer vision community for estimating the camera pose and registering large-scale objects. Although their techniques differ slightly depending on the application, both communities tend to solve similar problems by means of different approaches. We present a survey of the techniques used by the robotics and computer vision communities, pointing out the pros and cons and potential applications of each approach. Furthermore, the most representative techniques have been programmed and tested, obtaining experimental results that provide an accurate comparison of the methods in the presence of noise and outliers.
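As a hedged illustration of the kind of test conditions described (not the paper's actual benchmark), a synthetic point cloud can be corrupted with Gaussian noise plus uniform outliers and a candidate registration scored by RMS error; every name and parameter below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(points, sigma=0.01, outlier_ratio=0.1, spread=1.0):
    """Corrupt a 3xN cloud: Gaussian noise on every point, plus a random
    subset replaced by uniform outliers (illustrative test conditions)."""
    noisy = points + rng.normal(0.0, sigma, points.shape)
    n_out = int(outlier_ratio * points.shape[1])
    idx = rng.choice(points.shape[1], size=n_out, replace=False)
    noisy[:, idx] = rng.uniform(-spread, spread, (3, n_out))
    return noisy

def rms_error(registered, reference):
    """Root-mean-square distance between corresponding points."""
    return np.sqrt(((registered - reference) ** 2).sum(axis=0).mean())

# Example: a unit-cube cloud, corrupted and scored against the original
cloud = rng.uniform(0.0, 1.0, (3, 500))
print(rms_error(perturb(cloud), cloud))
```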
Augmented reality is used to improve color segmentation on the human body or on precious artifacts that cannot be touched. We propose a technique to project a synthesized texture onto a real object without any contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points or normals. We then project an adjusted texture onto the surface of the real object. We propose a global and automatic method to virtually texture a 3D real object.
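One plausible way to obtain such dense camera-projector correspondences is to project binary Gray-code stripe patterns and decode, per camera pixel, the on/off sequence observed across the captured images; this is a hedged sketch of that standard scheme, not necessarily the exact coding the method uses:

```python
import numpy as np

def decode_gray(bits):
    """bits: (n_patterns, H, W) stack of thresholded captures (0/1 ints),
    most significant pattern first. Returns, per camera pixel, the index
    of the projector stripe that illuminated it."""
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for k in range(1, bits.shape[0]):
        # Gray-to-binary conversion: b_k = b_{k-1} XOR g_k
        binary[k] = np.bitwise_xor(binary[k - 1], bits[k])
    weights = 2 ** np.arange(bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary, axes=1)

# Example: 3 patterns on a 2x2 image -> stripe indices in [0, 7]
bits = np.array([[[1, 0], [0, 1]],
                 [[1, 1], [0, 0]],
                 [[0, 1], [1, 0]]])
print(decode_gray(bits))
```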
Augmented reality is used to improve color segmentation on the human body or on precious artifacts that cannot be touched. We propose a technique based on structured light to project texture onto a real object without any contact with it. Such techniques can be applied to medical applications, archaeology, industrial inspection, and augmented prototyping. Coded structured light is an optical technique based on active stereovision that allows shape acquisition. By projecting a light pattern onto the surface of an object and capturing images with a camera, a large number of correspondences can be found and 3D points can be reconstructed by means of triangulation.
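A minimal sketch of that triangulation step, assuming a calibrated camera at the origin and each projected stripe modeled as a known 3D light plane; the intrinsic matrix `K` and the plane parameters below are illustrative assumptions, not calibration values from the paper:

```python
import numpy as np

def triangulate_stripe_point(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the light plane
    n . X = d; returns the 3D point in camera coordinates."""
    u, v = pixel
    # Back-project the pixel to a viewing ray: X(t) = t * K^-1 [u, v, 1]^T
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Solve n . (t * ray) = d for the ray parameter t
    t = plane_d / (plane_n @ ray)
    return t * ray

# Example with assumed calibration: one correspondence -> one 3D point
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.8, 0.0, 0.6])   # light-plane normal (assumed)
print(triangulate_stripe_point((350, 260), K, n, plane_d=1.2))
```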
3D modelling is becoming an important research topic for visual inspection in automatic quality control. Through visual inspection it is possible to determine whether a product fulfills the required specifications or whether it contains surface or volume imperfections. Although some processes, such as color analysis, can be achieved by 2D techniques, more challenging tasks, such as volume inspection of large and complex objects/scenes, may require the use of accurate 3D registration techniques. 3D Simultaneous Localization and Mapping has become a very important research topic, not only in the computer vision community for quality control applications but also in the robotics field for solving problems such as robot navigation and registration of large surfaces. Although their techniques differ slightly depending on the application, both communities tend to solve similar problems by means of different approaches. This paper presents a survey of the techniques used by the robotics and computer vision communities, in which every approach is compared, pointing out its pros and cons and potential applications.
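Among the fine registration techniques that both communities rely on, the Iterative Closest Point (ICP) algorithm is the most representative. Below is a minimal point-to-point sketch, assuming brute-force closest-point matching and an SVD (Kabsch) pose update; the iteration count and all names are illustrative, not taken from any specific surveyed method.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t such that R @ P + t ~= Q (both 3xN)."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cQ) @ (P - cP).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    return R, cQ - R @ cP

def icp(source, target, iters=30):
    """Register `source` (3xN) onto `target` (3xM); returns R, t."""
    R, t = np.eye(3), np.zeros((3, 1))
    for _ in range(iters):
        moved = R @ source + t
        # Closest-point correspondences (brute force, for clarity only)
        d2 = ((moved[:, :, None] - target[:, None, :]) ** 2).sum(axis=0)
        R, t = best_rigid_transform(source, target[:, d2.argmin(axis=1)])
    return R, t
```

Each iteration is a closed-form least-squares fit; robust variants reject or down-weight the worst matches, which is precisely what distinguishes many of the surveyed methods in the presence of noise and outliers.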
Nowadays, visual inspection is very important in quality control for many industrial applications. However, the complexity of most 3D objects constrains the registration of range images; a complete surface is required to compare the acquired surface to the model. Range finders are widely used to digitize free-form objects at high resolution. However, a single view is not enough to reconstruct the whole surface, due to occlusions, shadows, etc. In these situations, the motion between reconstructed partial views is required to integrate all surfaces into a single model. The use of positioning systems is not always available or adequate, mainly because of the size of the objects or the environmental conditions imposed on the precise mechanics, which suffer from the vibrations present in industry. To solve this problem, a 3D hand-held sensor has been developed to reconstruct 3D objects so that they can be compared with the original model.
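As a sketch of the integration step, assuming the pairwise rigid motions between consecutive views have already been estimated by the registration stage (the function and variable names are illustrative):

```python
import numpy as np

def to_global(views, motions):
    """views: list of 3xN clouds; motions[i] = (R, t) maps view i+1 into
    the frame of view i. Returns all clouds in view-0 coordinates."""
    merged = [views[0]]
    R_acc, t_acc = np.eye(3), np.zeros((3, 1))
    for cloud, (R, t) in zip(views[1:], motions):
        # Compose transforms: x_0 = R_acc @ (R @ x + t) + t_acc
        R_acc, t_acc = R_acc @ R, R_acc @ t + t_acc
        merged.append(R_acc @ cloud + t_acc)
    return merged
```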
In computer vision, many applications are based on 3D vision, for example object modeling for reverse engineering in manufacturing, map building, and industrial inspection. However, the surface acquired by most sensors represents only part of the object. To solve this problem, several images of the same object are acquired from different positions. All these views are then transformed into the same coordinate system. This process is known as range image registration, and it is the goal of this work.
This work surveys the most common registration methods, which are used to reconstruct complete 3D models of objects. Moreover, a classification of the registration methods is presented, based on the accuracy of their results. In this survey, the principal methods are classified and commented on. In order to compare them, experiments were performed using synthetic and real data. The quality of some results indicates that the output of the registration can be used to compare a real object with its 3D model, which is useful in manufacturing processes for inspecting the quality of the produced objects.
This paper presents a contribution to the navigation of autonomous mobile vehicles in structured indoor environments, where most objects have sides perpendicular to the floor. We propose a navigation algorithm intended to simplify the environment-recognition problem and, in this way, allow the mobile robot to readjust its path dynamically. We propose to use patterns made of a set of suitably oriented laser-beam planes. The light pattern, projected by the mobile robot onto the navigation environment, generates images that allow it to identify walls, doors, and corridors. Although we have only a 2D image, the offsets between the broken edges of the pattern allow us to recover depth. A variety of laser patterns have been analyzed and tested in a simulated environment of an automated store made of walls, doors, and corridors. The results have led us to a refined pattern that achieves a high level of reliability in autonomous indoor navigation. The robustness of the model has allowed us to move forward on the detection of unexpected obstacles, which generate deformations of the expected patterns. The system also permits the detection of slopes and columns located in its path.
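As an illustrative sketch of that idea (not the paper's exact procedure), once depths have been triangulated along one image row of the projected stripe, the broken edges can be located by thresholding depth jumps between neighboring samples; the threshold value is an assumption:

```python
import numpy as np

def find_breaks(depths, jump=0.25):
    """Return indices where consecutive depth samples differ by more than
    `jump` metres, i.e. candidate pattern discontinuities (door openings,
    columns, or unexpected obstacles)."""
    diffs = np.abs(np.diff(depths))
    return np.flatnonzero(diffs > jump) + 1

# Example: a corridor wall at ~2 m with a door recess at ~3 m
stripe = np.array([2.0, 2.01, 1.99, 3.0, 3.02, 2.98, 2.0, 2.0])
print(find_breaks(stripe))   # -> [3 6]
```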