Additively manufactured parts have complex geometries featuring high slope angles and occlusions that can be difficult or even impossible to measure; in this scenario, photogrammetry presents itself as an attractive, low-cost candidate technology for acquiring digital form data. In this paper, we propose a pipeline to optimise, automate and accelerate the photogrammetric measurement process. The first step is to determine the optimal camera positions, which maximise surface coverage and measurement quality while minimising the total number of images required. This is achieved through a global optimisation approach using a genetic algorithm. In parallel with the view optimisation, a convolutional neural network (CNN) is trained on rendered images of the CAD data of the part to predict the pose of the object relative to the camera from a single image. Once trained, the CNN can be used to find the initial alignment between object and camera, allowing full automation of the optimised measurement procedure. These techniques are verified on a sample part, showing good coverage of the object and accurate pose estimation. The procedure presented in this work simplifies the measurement process and represents a step towards a fully automated measurement and inspection pipeline.
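As a rough illustration of the view-planning step, the sketch below runs a minimal genetic algorithm over sets of candidate camera directions. The surface sampling, the 60° viewing-angle coverage test and the per-view cost weight are illustrative assumptions; the paper's actual fitness function and genome encoding are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

# Toy part: surface points with outward normals (here, points on a unit sphere).
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
nrm = pts.copy()

def camera_dirs(angles):
    # Convert (azimuth, elevation) pairs to unit viewing directions.
    az, el = angles[:, 0], angles[:, 1]
    return np.stack([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)], axis=1)

def fitness(genome):
    # Coverage fraction minus a small per-view cost (assumed trade-off weight).
    dirs = camera_dirs(genome.reshape(-1, 2))
    # A point counts as covered if some camera views it within 60 deg of its normal.
    seen = (nrm @ dirs.T > np.cos(np.radians(60))).any(axis=1)
    return seen.mean() - 0.02 * len(dirs)

n_views, pop_size, gens = 6, 40, 100
pop = rng.uniform([-np.pi, -np.pi / 2], [np.pi, np.pi / 2],
                  size=(pop_size, n_views, 2)).reshape(pop_size, -1)
for _ in range(gens):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-pop_size // 2:]]           # selection
    parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
    cut = rng.integers(1, pop.shape[1], size=pop_size)
    mask = np.arange(pop.shape[1]) < cut[:, None]
    pop = np.where(mask, parents[:, 0], parents[:, 1])          # one-point crossover
    pop += rng.normal(scale=0.05, size=pop.shape)               # mutation

best = pop[np.argmax([fitness(g) for g in pop])]
print("best coverage-based fitness:", fitness(best))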
Measurement of objects with complex geometry and many self-occlusions is increasingly important in many fields, including additive manufacturing. In a fringe projection system, the camera and the projector cannot move independently with respect to each other, which limits the ability of the system to overcome object self-occlusions. We demonstrate a fringe projection setup in which the camera can move independently of the projector, thus minimizing the effects of self-occlusion. The angular motion of the camera is tracked using an on-board inertial angular sensor, which allows the system to be recalibrated after each move and additionally enables automated point cloud registration.
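A minimal sketch of how an on-board angular reading could drive the automated registration: the single-axis rotation model and the frame convention below are assumptions for illustration, not the system's actual sensor model.

import numpy as np

def rot_z(theta):
    # Rotation about the z-axis; the yaw-only model is an assumption here.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# cloud_new: points measured after the camera has rotated by theta (IMU reading).
cloud_new = np.random.rand(1000, 3)
theta = np.radians(25.0)

# Rotate the new cloud back into the frame of the first view: a coarse
# automated registration that a refinement step such as ICP could then polish.
cloud_registered = cloud_new @ rot_z(theta).T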
Photogrammetry-based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information alongside the photogrammetric point cloud. Compared to a traditional camera, which only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to reduce the measurement uncertainty for a millimetre-scale 3D object compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and from triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
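The abstract does not specify the fusion method, so the sketch below shows one common choice: per-point inverse-variance weighting of corresponding points from the two sensors. The correspondence and the per-point variances are assumed inputs.

import numpy as np

def fuse(p_lf, var_lf, p_pg, var_pg):
    # Inverse-variance weighted mean of corresponding points from the
    # light-field depth map (p_lf) and the photogrammetric cloud (p_pg).
    w_lf, w_pg = 1.0 / var_lf, 1.0 / var_pg
    fused = (w_lf[:, None] * p_lf + w_pg[:, None] * p_pg) / (w_lf + w_pg)[:, None]
    var_fused = 1.0 / (w_lf + w_pg)   # never worse than either input variance
    return fused, var_fused

# Toy usage: 100 corresponding points with different per-sensor uncertainties.
p_pg = np.random.rand(100, 3)
p_lf = p_pg + np.random.normal(scale=0.01, size=p_pg.shape)
fused, var = fuse(p_lf, np.full(100, 0.02 ** 2), p_pg, np.full(100, 0.01 ** 2))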
In this paper we show that, by using a photogrammetry system with and without laser speckle projection, a large range of additive manufacturing (AM) parts with different geometries, materials and post-processing textures can be measured to high accuracy. AM test artefacts have been produced in three materials: polymer powder bed fusion (nylon-12), metal powder bed fusion (Ti-6Al-4V) and polymer material extrusion (ABS plastic). Each test artefact was then measured with the photogrammetry system in both normal and laser speckle projection modes, and the resulting point clouds were compared with the artefact CAD model. The results show that laser speckle projection can reduce the standard deviation of the point cloud from the CAD data by up to 101 μm. A complex relationship between surface texture, artefact geometry and laser speckle projection is also observed and discussed.
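The point-cloud-to-CAD comparison above can be sketched as a nearest-neighbour deviation statistic. The sketch below approximates point-to-surface distance by distance to a dense sampling of the CAD model, a simplification of a true point-to-mesh comparison, and uses synthetic data in place of real measurements.

import numpy as np
from scipy.spatial import cKDTree

# Stand-ins: points densely sampled on the CAD surface, and a noisy measured cloud.
cad_pts = np.random.rand(5000, 3)
measured = cad_pts + np.random.normal(scale=0.00005, size=cad_pts.shape)  # ~50 um noise

# Unsigned nearest-neighbour distance from each measured point to the CAD sampling.
dists, _ = cKDTree(cad_pts).query(measured)
print(f"standard deviation from CAD: {dists.std() * 1e6:.1f} um (toy units: metres)")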
In non-rigid fringe projection 3D measurement systems, where either the camera or the projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to calibrate the extrinsic and intrinsic parameters of the camera(s) using methods initially developed for photogrammetry. To calibrate the projector(s), an additional correspondence step between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming and involve measuring calibrated planar patterns before measurement of the actual object can resume after a camera or projector has been moved; they therefore do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By combining a priori information via inverse rendering, on-board sensors and deep learning, and by leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
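The render-and-compare idea can be sketched as a small optimisation over pose parameters. Everything below is illustrative: the "renderer" is a toy orthographic point splat blurred to smooth the cost surface, and Nelder-Mead stands in for the GPU-accelerated optimisation described above; only the structure (render, compare, update pose) mirrors the method.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

rng = np.random.default_rng(1)
model = rng.uniform(-1.0, 1.0, size=(200, 3))   # toy stand-in for the object model

def render(pose, res=64):
    # Toy orthographic "render": rotate the model points and splat them into
    # a blurred binary image; a real pipeline would rasterise the CAD model.
    ax, ay = pose
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    p = model @ (Ry @ Rx).T
    img = np.zeros((res, res))
    uv = np.clip(((p[:, :2] + 1.5) / 3.0 * res).astype(int), 0, res - 1)
    img[uv[:, 1], uv[:, 0]] = 1.0
    return gaussian_filter(img, sigma=2.0)   # blurring smooths the photometric cost

observed = render(np.array([0.4, -0.2]))     # simulated camera view at the true pose

def cost(pose):
    return np.sum((render(pose) - observed) ** 2)   # photometric difference

# A coarse initial pose (the a priori information) keeps the local search on track.
result = minimize(cost, x0=np.array([0.3, -0.1]), method="Nelder-Mead")
print("estimated pose (rad):", result.x)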