A deep learning-based method is proposed to recover the absolute phase from a single fringe pattern. The proposed deep neural network architecture comprises two subnetworks, one for wrapped-phase calculation and one for phase unwrapping. The training set is generated from absolute phases obtained by combining phase shifting with Gray coding. In addition, a reference plane is adopted to provide fringe-period information for phase unwrapping. From the output of the well-trained network, a high-quality absolute phase is then obtained from only a single fringe pattern of the measured object. Experiments on the test set verify that high accuracy is achieved for objects with complex texture, indicating the method's potential for high-speed measurement.
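The reference-plane-based unwrapping step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' network: it assumes the reference plane's absolute phase lies within half a period of the object's absolute phase, so the fringe order can be recovered by rounding.

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_ref_abs):
    """Recover an absolute phase from a wrapped phase using the absolute
    phase of a reference plane: the fringe order k is the integer that
    brings the wrapped phase closest to the reference absolute phase."""
    k = np.round((phi_ref_abs - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k
```

In practice the reference absolute phase would come from a prior calibration of the plane; here it only needs to be accurate to within pi radians for the rounding to pick the correct fringe order.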
Point clouds have attracted great attention in 3D object classification, segmentation, and indoor scene semantic parsing. In face recognition, although image-based algorithms have become more accurate and faster, open-world face recognition still suffers from the influence of illumination, occlusion, pose, etc. 3D face recognition based on point clouds containing both shape and texture information can compensate for these shortcomings. However, training a network to extract discriminative 3D features is complex and time-inefficient due to the lack of large training datasets. To address these problems, we propose a novel 3D face recognition network (FPCNet) using a modified PointNet++ and a 3D augmentation technique. Face-based loss and multi-label loss are used to train FPCNet so that the learned features become more discriminative. Moreover, a 3D face data augmentation method is proposed to synthesize more identity-variant and expression-variant 3D faces from limited data. Our proposed method shows excellent recognition results on the CASIA-3D, Bosphorus, and FRGC2.0 datasets and generalizes well to other datasets.
A newly developed flexible calibration algorithm for fringe projection profilometry (FPP) systems is presented in this paper. Previous studies have exploited images of spheres to calibrate the camera. We show that this approach can be extended to the projector and ultimately achieve overall calibration of the FPP system. Treating the projector as a virtual camera, the image of the sphere contour on the projector's plane can also be obtained through the phase information, and the projector's intrinsic parameters are then derived and acquired in exactly the same way as the camera's. In our algorithm, at least three images of the sphere contour in both the camera and projector views are used to compute the homography between the two views. Then the images of the sphere and of its shadow on an induced plane placed behind the sphere are added to recover the epipolar geometry of the FPP system. Experimental results on real data are presented, demonstrating the feasibility and accuracy of the proposed algorithm.
This paper presents an approach to stereo-based 3D face reconstruction. Our approach estimates the depth of the face from a stereo image pair, requiring neither expensive devices nor generic face models. To improve the depth estimation, the approach incorporates face properties into the 3D reconstruction. It starts with a sparse disparity map built from 68 facial landmarks detected by the dlib toolkit. Exploiting facial symmetry and smoothness, we then complete the dense disparity estimation under the guidance of the sparse disparity map. Post-processing, including outlier removal and surface meshing, is then applied. Experimental results show that the algorithm is promising.
Targeting 3D point cloud data without any prior information, this paper presents a new point cloud simplification algorithm. Because of the typical way scenes are shot in daily life, point clouds often contain more detailed information in the x-y direction. Exploiting this property, the proposed algorithm first selects the x-y plane as the direction for division and computation and obtains the x-y boundary. Observing the normal vectors of a point cloud, it is easy to see that if the normals in a local region change gently, the region is relatively flat; conversely, if the normals change greatly, the region fluctuates strongly. Therefore, for each point we compute the arithmetic mean of the angles between its normal vector and the normal vectors of its k-nearest neighbors. This mean defines the feature value of the point, and key feature points are extracted on that basis. Finally, a gridding method is used to divide the scattered point cloud whose boundary and key points have been extracted, completing the simplification. Experimental results show the effectiveness of the proposed algorithm.
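The per-point feature described above can be sketched in NumPy as follows. This is a minimal illustration under assumed inputs (precomputed unit-length normals and a brute-force k-nearest-neighbor search), not the paper's full pipeline:

```python
import numpy as np

def normal_angle_feature(points, normals, k=8):
    """Per-point feature: mean angle (radians) between a point's normal
    and the normals of its k nearest neighbors. Larger values indicate
    stronger local curvature; near-zero values indicate flat regions."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Brute-force squared distances; a k-d tree would be used at scale.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # k nearest neighbors, excluding the point itself (column 0 after sort)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    cos = np.clip(np.einsum('id,ikd->ik', n, n[idx]), -1.0, 1.0)
    return np.arccos(cos).mean(axis=1)
```

Key feature points would then be selected by thresholding or ranking this feature, keeping points where the local normals vary most.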
Stereo matching is an important and active research topic in computer vision. To overcome the well-known streaking artifacts of dynamic programming and to reduce mismatches on edges and in discontinuous and textureless regions, we propose a cross-scale constrained dynamic programming algorithm for stereo matching. The algorithm combines an image pyramid with Gaussian scale space to perform coarse-to-fine dynamic programming on multi-scale cost volumes. To improve disparity accuracy in textureless regions, a cross-scale regularization constraint enforces cost consistency across scales, and the computational burden is reduced by using the disparity estimate from the coarser scale to seed the search on the larger image. Experiments on both synthetic and real scenes show that our algorithm effectively reduces mismatches in textureless regions.
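The coarse-to-fine seeding idea can be sketched on a single scanline as follows. For brevity this uses plain winner-take-all matching with an absolute-difference cost in place of the paper's regularized dynamic programming; the point illustrated is how a seed disparity from a coarser scale restricts the search window at the finer scale:

```python
import numpy as np

def match_scanline(left, right, d_max, seed=None, radius=2):
    """1-D winner-take-all matching on a pair of scanlines.
    If `seed` disparities are given (e.g. upsampled from a coarser
    scale), the search at each pixel is restricted to seed +/- radius,
    which shrinks the candidate set and hence the computation."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(n):
        if seed is None:
            cands = list(range(0, min(d_max, x) + 1))
        else:
            lo = max(0, int(seed[x]) - radius)
            hi = min(min(d_max, x), int(seed[x]) + radius)
            cands = list(range(lo, hi + 1))
        if not cands:
            continue
        costs = [abs(left[x] - right[x - d]) for d in cands]
        disp[x] = cands[int(np.argmin(costs))]
    return disp
```

In a full pyramid, `seed` would be the upsampled disparity of the next-coarser level, and the paper's cross-scale constraint would additionally penalize cost volumes that disagree across scales.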