Point cloud registration, widely used in 3D modeling, reverse engineering, and other fields, aims to find the rotation and translation between two point clouds captured from different viewpoints so that the two clouds can be correctly aligned. The ICP algorithm is the most common registration method, but it requires a good initial value, a limited transformation between the two point clouds, and limited occlusion; otherwise the iteration fails to converge to a correct result. To address this problem, this paper proposes an ICP matching algorithm based on local point cloud features. The algorithm first constructs a robust and efficient three-dimensional local feature descriptor that combines the density, curvature, and normal information of the point clouds; it then uses this descriptor to establish correspondences between the clouds and compute an initial registration; finally, this initial registration is used as the starting value for ICP to refine the result. Experimental results on public data sets show that the proposed algorithm achieves good registration precision, strong robustness, and fast running speed.
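The abstract above does not give the algorithm itself, but the final ICP refinement stage it describes can be sketched in a few lines. The following is a minimal, hedged illustration of plain point-to-point ICP with an SVD-based (Kabsch) rigid transform estimate; it omits the paper's local feature descriptor and feature-based initialization entirely, uses brute-force nearest neighbours, and all function names are illustrative, not taken from the paper:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    """Plain point-to-point ICP; assumes a reasonable initial alignment,
    which is exactly what the paper's feature-based stage would supply."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(cur)), idx]).mean()
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    # recover the total transform mapping the original src onto dst
    return best_rigid_transform(src, cur)
```

Because `cur` is always an exact rigid transform of `src`, the final `best_rigid_transform` call recovers the composite rotation and translation exactly; a production implementation would replace the O(NM) neighbour search with a k-d tree.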
KEYWORDS: Clouds, Detection and tracking algorithms, 3D acquisition, Time of flight cameras, Cameras, Image resolution, Sensors, Feature extraction, Super resolution, Image enhancement
Time-of-Flight (ToF) cameras have developed rapidly in recent years, but their low resolution limits their applications. In this paper, we propose a new method to estimate an object's 3D pose using a single ToF camera. We use the Iterative Closest Point (ICP) algorithm based on the Normal Aligned Radial Feature (NARF) to estimate the pose of the target. Experimental results show that this method can accurately estimate the 3D pose of an object using a low-resolution ToF camera.
Depth cameras currently play an important role in many areas. However, most of them can only capture low-resolution (LR) depth images, while color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that takes an HR color image as a guide and an LR depth image as input. We fuse a guided filter with an edge-based joint bilateral filter to produce the HR depth image. Experimental results on the Middlebury 2005 datasets show that our method yields better-quality HR depth images both numerically and visually.
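The core building block shared by this abstract and the next is the joint bilateral filter, which smooths a depth map using edge weights taken from a color guide image. The sketch below is a minimal, assumption-laden illustration of that one component only (nearest-neighbour upsampling followed by a plain joint bilateral filter on a grayscale guide); it is not the fusion filter the paper proposes, and all names and parameter defaults are hypothetical:

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Filter `depth` with range weights computed from the grayscale `guide`,
    so depth edges are preserved wherever the guide has an intensity edge."""
    H, W = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(H):
        for x in range(W):
            dwin = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range weight: guide-intensity similarity to the center pixel
            rng_w = np.exp(-(gwin - guide[y, x])**2 / (2 * sigma_r**2))
            w = spatial * rng_w
            out[y, x] = (w * dwin).sum() / w.sum()
    return out

def upsample_depth(depth_lr, guide_hr, scale):
    """Nearest-neighbour upsample the LR depth, then refine it with the
    joint bilateral filter guided by the HR image."""
    depth_up = np.repeat(np.repeat(depth_lr, scale, axis=0), scale, axis=1)
    return joint_bilateral_filter(depth_up, guide_hr)
```

Because the range weight collapses to near zero across strong guide edges, depth values on one side of an object boundary barely mix with the other side, which is what suppresses the blur a plain Gaussian upsampling would introduce.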
In this paper, we propose a new method to improve the resolution of depth images. The method uses a high-resolution color image as a reference and a bilateral-filtered image as the guide image, and obtains the reconstructed image through a joint bilateral filter and a guided filter. Experimental results show that this method enhances the spatial resolution of the depth image and effectively eliminates ringing effects and artifacts.