The combination of a multi-layer Light Detection and Ranging (LiDAR) sensor and a camera is commonly used in autonomous perception systems. The complementary information from these sensors is instrumental for reliable perception of the surroundings. However, obtaining the extrinsic parameters between the LiDAR and the camera is difficult, yet these parameters must be known for some perception algorithms. In this study, we present a method that uses only three 3D-2D correspondences to compute the extrinsic parameters between a Velodyne VLP-16 LiDAR and a monocular camera. 3D and 2D features are extracted from the point cloud and the image of a custom calibration target, respectively, and the extrinsic parameters are then obtained from these features with the perspective-three-point (P3P) algorithm. Outliers with minimum energy at the geometric discontinuities of the target are used as control points for extracting the key features from the LiDAR point cloud. Moreover, a novel method is presented to distinguish the correct solution from the multiple P3P solutions; it relies on discrepancies of conic shapes in the spaces of the different solutions.
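As a rough illustration of this pipeline, the following Python sketch solves P3P from three LiDAR-to-image correspondences with OpenCV and then selects among the candidate poses. The selection here uses the reprojection error of a fourth validation correspondence, a common stand-in assumed purely for illustration; the paper's own disambiguation criterion is based on conic shape discrepancies.

import numpy as np
import cv2

def lidar_camera_extrinsics(pts_lidar, pts_img, pt_check_3d, pt_check_2d, K):
    """pts_lidar: (3,3) LiDAR points; pts_img: (3,2) pixel points; K: camera intrinsics."""
    dist = np.zeros(5)  # assume an undistorted (or pre-undistorted) image
    n_sol, rvecs, tvecs = cv2.solveP3P(
        pts_lidar.astype(np.float32), pts_img.astype(np.float32),
        K.astype(np.float32), dist, flags=cv2.SOLVEPNP_P3P)
    best_pose, best_err = None, np.inf
    for rvec, tvec in zip(rvecs, tvecs):
        # Reproject the validation point under each candidate pose and keep
        # the pose with the smallest reprojection error.
        proj, _ = cv2.projectPoints(
            pt_check_3d.reshape(1, 3).astype(np.float32), rvec, tvec, K, dist)
        err = np.linalg.norm(proj.ravel() - pt_check_2d)
        if err < best_err:
            best_err, best_pose = err, (rvec, tvec)
    R, _ = cv2.Rodrigues(best_pose[0])   # LiDAR-to-camera rotation
    t = best_pose[1].ravel()             # LiDAR-to-camera translation
    return R, t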
In this manuscript, an acquisition platform for depth images based on the Kinect V2 is designed; it can acquire depth images of a target model at any attitude angle (view angle 0-80°, azimuth angle 0-360°, and spin angle 0-360°). In addition, this manuscript implements a depth image recognition algorithm based on the integrated local surface patch (LSP). The algorithm first computes feature points in regions with large shape variations and then defines an LSP at each feature point, characterized by its surface type, patch centroid, and a 2D histogram. Next, potential corresponding patch pairs are found by matching the two sets of LSPs, and candidate models are obtained from the filtered corresponding patch pairs. Finally, the candidate models are verified with the iterative closest point (ICP) algorithm. Experiments are designed to validate the performance of the algorithm on multiple depth images, with different attitude angles and occlusion ranges, of eight military target models acquired by the platform. The results show that this depth image acquisition platform can provide rich data support for the design and verification of depth image recognition algorithms in the future.
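To make the descriptor step concrete, the Python sketch below builds a simplified LSP-style descriptor around one feature point of a depth-image point cloud. The histogram axes used here (angle to the centre normal versus distance to the feature point), as well as the radius and bin counts, are assumptions for illustration and need not match the paper's exact definition.

import numpy as np

def lsp_descriptor(points, normals, idx, radius=0.05, bins=(8, 8)):
    """points: (N,3) depth-image point cloud, normals: (N,3) unit normals,
    idx: index of the feature point."""
    p0, n0 = points[idx], normals[idx]
    d = np.linalg.norm(points - p0, axis=1)
    nbr = (d < radius) & (d > 0)                     # neighbours in the patch
    cos_a = np.clip(normals[nbr] @ n0, -1.0, 1.0)    # angle to the centre normal
    hist, _, _ = np.histogram2d(np.arccos(cos_a), d[nbr],
                                bins=bins, range=[[0, np.pi], [0, radius]])
    hist = hist / max(hist.sum(), 1)                 # normalised 2D histogram
    centroid = points[nbr].mean(axis=0)              # patch centroid
    return centroid, hist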
As a signal processing theory, compressive sensing (CS) breaks through the limitations of the traditional Nyquist sampling theorem and offers a way to address the high sampling rates, large data volumes, and real-time processing difficulties of traditional high-resolution radar. Based on the theory of the single-pixel camera, an array detection imaging system is built and its main structural parameters are analyzed. A simulation experiment on a simple target shows that the number of measurements can be reduced by operating an increased number of detectors in parallel. When the target changes, the sparsity of the scene is found to have a strong influence on the required number of measurements. Therefore, an improved method is proposed that exploits the structural flexibility of the fiber array and the detectors; it reduces the number of measurements while simultaneously decreasing the number of detectors, and is thus superior to the original method.
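The dependence of the measurement count on sparsity can be illustrated with a toy single-pixel-style simulation in Python. The random +/-1 patterns and the ISTA solver below are generic stand-ins assumed for illustration, not the system's actual patterns or reconstruction algorithm; increasing the sparsity K while holding M fixed degrades the recovery, which mirrors the effect reported above.

import numpy as np

rng = np.random.default_rng(0)
N, K, M = 256, 8, 80                       # signal length, sparsity, measurements
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # measurement matrix
y = Phi @ x                                 # M single-pixel measurements

# ISTA: x <- soft(x + step * Phi^T (y - Phi x), lam * step)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
lam, x_hat = 0.01, np.zeros(N)
for _ in range(500):
    r = x_hat + step * Phi.T @ (y - Phi @ x_hat)
    x_hat = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))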
Although streak tube imaging lidar (STIL) is widely applied in target recognition and imaging, combining compressive sensing (CS) theory with it has only just begun. To the best of our knowledge, most studies on this combination concern ultrafast imaging. We harness the advantages of the streak tube and CS to provide a novel approach to three-dimensional imaging. The imaging system model is built, and its main structures, such as the fiber array and the digital micromirror device (DMD), are introduced. Simulation experiments are carried out. In the process of reconstructing the intensity image and the range image of the target, the methods for extracting the measurement matrices required by the CS algorithm are given, respectively.
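A minimal sketch of how a CS measurement matrix could be assembled from DMD patterns in such a system is given below: each measurement uses one binary mirror pattern, and flattening that pattern gives one row of the matrix Phi. The pattern size and the +/-1 coding are assumptions for illustration, not the paper's actual extraction method.

import numpy as np

rng = np.random.default_rng(1)

def dmd_measurement_matrix(n_meas, dmd_shape=(32, 32)):
    """Return (Phi, patterns): Phi is (n_meas, H*W), patterns is (n_meas, H, W)."""
    patterns = rng.integers(0, 2, size=(n_meas,) + dmd_shape)  # binary mirror states
    Phi = (2.0 * patterns - 1.0).reshape(n_meas, -1)           # map {0,1} -> {-1,+1}
    return Phi, patterns

# Usage: y[m] is the detector reading when pattern m is applied to the scene x.
Phi, patterns = dmd_measurement_matrix(n_meas=200)
x = rng.random(32 * 32)                  # stand-in scene (flattened)
y = Phi @ x                              # simulated measurements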
Streak tube imaging lidar has been widely applied in target recognition and imaging because of its high accuracy and frame rate. The compressed ultrafast photography technique, which employs a digital micromirror device (DMD) and a streak camera, was developed to meet the requirements of imaging ultrafast processes. This architecture provides a new direction for three-dimensional (3-D) imaging. This paper studies a streak tube 3-D imaging system based on compressive sensing (CS) from the perspectives of imaging system construction and image reconstruction algorithms. The system model is built, and its main structures, such as the fiber array and the DMD, are introduced. Two simulation experiments are carried out. First, streak images of a simple target are obtained. In the process of reconstructing the intensity image and the range image, the methods for extracting the measurement matrices required by the CS algorithm are given, respectively. The resulting images and the variance curve show that the image quality increases with the number of measurements. The second experiment is carried out on a complex target. Two distance-interval settings are used to analyze the imaging performance in the simulation; the image resolution is found to be directly related to the choice of distance interval.
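As an illustration of how intensity and range images can be read out of reconstructed time-resolved returns, the Python sketch below integrates each per-pixel waveform for intensity and takes the peak time bin (an extremum detection) for range. The waveform array layout and bin width are assumed inputs and may differ from the paper's simulation.

import numpy as np

C = 3e8  # speed of light, m/s

def intensity_and_range(waveforms, t_bin):
    """waveforms: (H, W, T) reconstructed return signal per pixel,
    t_bin: width of one time bin in seconds."""
    intensity = waveforms.sum(axis=2)           # integrated echo energy per pixel
    t_peak = waveforms.argmax(axis=2) * t_bin    # time of the peak return
    range_img = C * t_peak / 2.0                 # round-trip time -> one-way range
    return intensity, range_img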
Three-dimensional imaging is becoming increasingly important in applications that observe and analyze real-world environments. Range sensors such as flash imaging LiDAR and time-of-flight cameras deliver accurate range images but are limited by low resolution. To overcome this limitation, this paper demonstrates the benefit of a multimodal sensor system that combines a low-resolution range sensor with a high-resolution optical sensor to provide a high-resolution, low-noise range image of the scene. First, an extrinsic calibration algorithm is used to align the range map with the optical image. Then, an image-guided algorithm is proposed to solve the super-resolution optimization problem. The algorithm is formulated in the Markov random field framework: it defines an energy function that combines a standard quadratic data term with a regularization term whose weighting factors relate optical image edges to range map edges. Experiments on synthetic and real data are provided and analyzed to validate the method, and the results confirm that the quality of the estimated high-resolution range map is improved. This work can be extended to video super-resolution by taking temporal coherence into account.
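A minimal sketch of such an edge-weighted MRF energy and a simple iterative minimizer is given below in Python. The energy is E(D) = sum_i (D_i - D0_i)^2 + lambda * sum over neighbouring pairs of w_ij (D_i - D_j)^2, with weights w_ij that shrink across strong optical-image edges; the specific weighting function, parameter values, and Jacobi-style solver are assumptions for illustration rather than the paper's exact formulation.

import numpy as np

def guided_mrf_upsample(d0, guide, lam=5.0, sigma=0.05, iters=200):
    """d0: (H,W) upsampled (e.g. bicubic) low-resolution range map,
    guide: (H,W) registered grayscale optical image in [0,1]."""
    H, W = d0.shape
    D = d0.copy()
    # Edge-aware weights on horizontal / vertical pixel pairs: the weight drops
    # where the optical image has a strong edge, so depth discontinuities are kept.
    w_r = np.zeros((H, W)); w_d = np.zeros((H, W))
    w_r[:, :-1] = np.exp(-np.abs(guide[:, 1:] - guide[:, :-1]) / sigma)
    w_d[:-1, :] = np.exp(-np.abs(guide[1:, :] - guide[:-1, :]) / sigma)
    for _ in range(iters):
        num = d0.copy(); den = np.ones_like(d0)
        # right / left neighbours
        num[:, :-1] += lam * w_r[:, :-1] * D[:, 1:];  den[:, :-1] += lam * w_r[:, :-1]
        num[:, 1:]  += lam * w_r[:, :-1] * D[:, :-1]; den[:, 1:]  += lam * w_r[:, :-1]
        # down / up neighbours
        num[:-1, :] += lam * w_d[:-1, :] * D[1:, :];  den[:-1, :] += lam * w_d[:-1, :]
        num[1:, :]  += lam * w_d[:-1, :] * D[:-1, :]; den[1:, :]  += lam * w_d[:-1, :]
        D = num / den            # Jacobi step minimising the quadratic MRF energy
    return D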
At present, there are two main approaches to three-dimensional non-scanning laser imaging detection: detection based on avalanche photodiodes (APDs) and detection based on the streak tube. However, APD-based detection suffers from a small number of pixels, a large pixel pitch, and complex supporting circuitry, while streak tube based detection suffers from large volume, poor reliability, and high cost. To address these problems, this paper proposes an improved three-dimensional non-scanning laser imaging system based on a digital micromirror device (DMD). In this system, accurate control of the laser beams and a compact imaging structure are achieved with several quarter-wave plates and a polarizing beam splitter. Remapping fiber optics sample the image plane of the receiving lens and transform the image into a line light source, which realizes the non-scanning imaging principle. The DMD converts the laser pulses from the temporal domain to the spatial domain, and a highly sensitive CCD detects the final reflected laser pulses. We also present an algorithm to simulate this improved laser imaging system, and the simulated imaging experiment demonstrates that the system can realize three-dimensional non-scanning laser imaging detection.
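The temporal-to-spatial conversion principle can be illustrated with a toy Python simulation in which the DMD routes each time slot of the return pulse to a different CCD column, so the arrival time is recovered from the column index; all pulse, timing, and range parameters below are illustrative assumptions.

import numpy as np

C = 3e8
n_slots, t_slot = 64, 1e-9                    # 64 time slots of 1 ns each
target_range = 4.5                            # metres (example value)
t_arrival = 2 * target_range / C              # round-trip time of flight

# Return signal sampled on the time-slot grid (a single-slot pulse here).
signal = np.zeros(n_slots)
signal[int(t_arrival / t_slot)] = 1.0

# Each time slot is routed to a distinct CCD column by the DMD pattern, so the
# accumulated CCD row is simply the time profile laid out in space.
ccd_row = np.zeros(n_slots)
for k in range(n_slots):                      # DMD column k is "on" during slot k
    ccd_row[k] += signal[k]

recovered_range = C * (ccd_row.argmax() * t_slot) / 2
print(recovered_range)                        # ~4.5 m, quantised to the slot grid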
Compared with traditional 3-D shape data, ladar range images exhibit strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for addressing this problem. We propose four improved algorithms for target recognition in ladar range images using the slice image. To improve the resolution invariance of the slice image, mean-value detection is applied instead of maximum-value detection in all four improved algorithms. To improve the rotation invariance of the slice image, three new feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of the four improved recognition systems is analyzed comprehensively in terms of the three invariances, the recognition rate, and the execution time. The final experimental results show that the improvements in all four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance, and that the four improved recognition systems perform differently under different conditions.
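A simplified Python sketch of generating slice-image style features with mean-value detection is given below: the ladar point cloud is cut into slices along the range axis and each slice is rasterised onto a 2D grid, with each cell taking the mean (instead of the maximum) of the point values falling into it. The grid size and the exact slicing scheme are illustrative assumptions rather than the paper's definition.

import numpy as np

def slice_images(points, n_slices=8, grid=(32, 32)):
    """points: (N,3) array of (x, y, range) ladar points."""
    x, y, r = points[:, 0], points[:, 1], points[:, 2]
    edges = np.linspace(r.min(), r.max() + 1e-9, n_slices + 1)
    slices = []
    for s in range(n_slices):
        m = (r >= edges[s]) & (r < edges[s + 1])
        acc = np.zeros(grid); cnt = np.zeros(grid)
        if m.any():
            ix = np.clip(((x[m] - x.min()) / (np.ptp(x) + 1e-9) * (grid[0] - 1)).astype(int), 0, grid[0] - 1)
            iy = np.clip(((y[m] - y.min()) / (np.ptp(y) + 1e-9) * (grid[1] - 1)).astype(int), 0, grid[1] - 1)
            np.add.at(acc, (ix, iy), r[m])
            np.add.at(cnt, (ix, iy), 1)
        slices.append(np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0))  # mean-value detection
    return np.stack(slices)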
To study the influence of a nonlinear sweep voltage on the range accuracy of streak tube imaging lidar, a nonlinear distance model of the streak tube is proposed. The parallel-plate deflection system is modeled, and the mathematical relation between the sweep voltage and the position of the image point on the screen is obtained from the motion of the photoelectrons. A mathematical model of the sweep voltage is then established from its operating principle. A streak image is simulated for a selected staircase target, and the range image of the target is reconstructed with the extremum method. By comparing the reconstruction with the actual target, the range error caused by the nonlinear sweep voltage is obtained, together with the curve of the error as a function of target range. The range accuracy of the system is further analyzed by varying the parameter related to the sweep time.
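The error mechanism can be sketched numerically in Python: the screen position follows the standard parallel-plate deflection relation y = l*L*U(t)/(2*d*Ua), the sweep voltage U(t) is given a mild cubic nonlinearity, and the range is read back assuming an ideal linear sweep. All numerical parameters and the cubic nonlinearity are illustrative assumptions, not the paper's model.

import numpy as np

C = 3e8
l, L, d, Ua = 0.03, 0.25, 0.004, 3000.0   # plate length, drift length, plate gap (m), accelerating voltage (V)
T, U0 = 20e-9, 500.0                      # sweep duration (s) and voltage amplitude (V)
k = l * L / (2 * d * Ua)                  # deflection sensitivity (m per volt)

def sweep_voltage(t, eps=0.15):
    """Nominally linear ramp from -U0 to +U0 with a cubic nonlinearity of size eps."""
    s = t / T
    return U0 * (2 * (s + eps * s**3) / (1 + eps) - 1)

true_range = np.array([1.0, 1.5, 2.0, 2.5])   # staircase target, metres
t_arr = 2 * true_range / C                    # round-trip arrival times
y = k * sweep_voltage(t_arr)                  # streak positions on the screen

# Read the positions back assuming an ideal *linear* sweep, as a calibrated
# system would, and compare with the true ranges to obtain the range error.
t_est = (y / (k * U0) + 1) * T / 2
range_err = C * t_est / 2 - true_range
print(range_err)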