Advanced sensor platforms often contain a wide array of sensors in order to collect and process a diverse range of environmental data. Proper calibration of these sensors is essential so that the collected data can be interpreted and fused into an accurate depiction of the environment. Traditionally, LiDAR-stereo camera calibration requires human assistance to manually extract point pairs between the LiDAR and the camera system. Here, we present a fully automated technique for calibrating a visible camera system with a 360° field-of-view LiDAR. This calibration is achieved by using the standard planar checkerboard calibration pattern to calculate the calibration parameters (intrinsic and extrinsic) for the stereo camera system. We then present a novel pipeline to determine an accurate rigid-body transformation between the LiDAR and stereo camera coordinate systems with no additional experimental setup or human assistance. Our innovation lies in exploiting the planarity of the checkerboard, whose surface coefficients can be estimated relative to the camera coordinates as well as the LiDAR sensor coordinates. We determine the rigid-body transformation between the two sets of coefficients of the same calibration surface through least-squares minimization. We then refine the estimate through iterative closest point (ICP) minimization between the 3D points on the checkerboard pattern viewed from the LiDAR and the camera system. Using measurements from multiple views, we increase the confidence in the transformation estimate. The proposed method is less cumbersome and time-consuming, unifying the stereo camera and LiDAR-camera calibration in a single step using only one calibration pattern.
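The plane-coefficient alignment step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes matched plane coefficients (n, d) satisfying n·x = d with unit normals in each frame, solves the rotation with an orthogonal Procrustes (Kabsch) step on the normals, and recovers the translation from the plane offsets by linear least squares. The function name and input layout are hypothetical.

```python
import numpy as np

def estimate_rigid_transform(lidar_planes, camera_planes):
    """Estimate (R, t) mapping LiDAR coordinates to camera coordinates
    from matched plane coefficients (n, d) with n . x = d, |n| = 1.

    Hypothetical sketch of the least-squares step described above;
    each argument is a list of (unit_normal, offset) pairs.
    """
    nL = np.array([n for n, _ in lidar_planes])   # N x 3 normals (LiDAR frame)
    nC = np.array([n for n, _ in camera_planes])  # N x 3 normals (camera frame)
    dL = np.array([d for _, d in lidar_planes])   # N plane offsets (LiDAR)
    dC = np.array([d for _, d in camera_planes])  # N plane offsets (camera)

    # Rotation: orthogonal Procrustes (Kabsch) aligning normals, nC ~ R @ nL
    H = nL.T @ nC
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det(R) = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T

    # Translation: under x_cam = R x_lidar + t, offsets satisfy
    # dC_i = dL_i + nC_i . t, so solve nC @ t = dC - dL in least squares
    t, *_ = np.linalg.lstsq(nC, dC - dL, rcond=None)
    return R, t
```

With noise-free, well-spread plane observations (at least three non-parallel normals), this recovers the exact transformation; in practice the estimate would serve as the initialization that the ICP stage then refines.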