1. Introduction

The stereoscopic system is a classic system in photogrammetric measurement. Accurate camera calibration is especially crucial because it largely determines the measurement accuracy of a stereoscopic system. General calibration of a stereoscopic system consists of estimating the internal and external parameters. The internal parameters determine the image coordinates of measured scene points with respect to the camera coordinate frame, and the external parameters represent the geometrical relationship between the camera and the scene, or between the different cameras. Existing techniques for camera calibration can be classified into two categories: parametric methods and general nonparametric methods.1

Parametric calibration methods are the standard approach to relating the three-dimensional (3-D) Euclidean world to the two-dimensional image space. However, each sensor type requires a different parametric representation. Since pinhole cameras are most often used in photogrammetric measurement, we mainly review work on regular pinhole cameras. Considering all the parameters simultaneously, direct nonlinear minimization applies an iterative algorithm to minimize the residual errors of a set of equations.2,3 Its great disadvantage is that the nonlinear optimization, with several types of parameters mixed in one search space, may end in a local solution. Some linear methods solve linear equations established from geometric constraints to compute a set of intermediate parameters.4,5 In most cases, however, lens distortions are not considered, so the accuracy of the final solution is relatively low. Other parametric methods compute some parameters first, followed by the others. Tsai6 derived a closed-form solution for the external parameters and the focal length and then used an iterative scheme to estimate the remaining parameters. Straight lines in space were used as constraints to find the correct parameters of the distortion model in Refs. 7–9. In Ref. 10, geometrical and epipolar constraints were imposed in a nonlinear minimization problem to first correct point locations in the images; the lens distortion and the fundamental matrix were then estimated separately. With a parameterized model of lens distortion, a serious discrepancy arises: the results obtained on calibration data are better than those on testing data.11 This discrepancy can be explained by the inadequacy of the parameterized distortion model.

An alternative idea, nonparametric camera calibration, was introduced by Grossberg and Nayar,12 who used a set of virtual sensing elements called raxels to describe a mapping from incoming scene rays to photosensitive elements on the image detector; a more general approach was developed in Refs. 13–15. In this generic method, several planar calibration objects are used to determine the optical ray corresponding to each pixel, and it is powerful because it applies equally well to arbitrary imaging systems. For close-range photogrammetry, however, calibration at pixel resolution does not achieve high-accuracy measurement. We have therefore designed a pure optical distortion-correction method for calibrating perspective imaging systems.
The proposed method inherits the idea of nonparametric modeling and uses a precision rotating platform and subpixel image processing to realize the mapping between incoming scene rays and photosensitive elements on the image detector (every photosensitive element could be divided into many parts to improve the accuracy if necessary). We then applied it to the calibration of a stereo vision system. In contrast to the standard parametric approach, it decouples the distortion estimation from the simultaneous calibration of the external parameters of the two cameras, thus avoiding any error compensation between them.16,17

2. Proposed Camera Correction Process

In the pinhole camera model, an object point in the scene and its image point obey a geometrical constraint: the object point P, the image point p, and the optical center O of the camera lie on one line. Because of distortion in real circumstances, the optical rays are refracted when passing through the optical center, so the observed image points deviate from the ideal ones. Parametric methods try to establish mathematical models that correctly relate the distorted image plane to the undistorted one. This is difficult, however, since the distortion caused by the lenses shows both regularity and irregularity. We therefore propose a pure optical correction method that records as many of the rays entering the lens as possible. Every ray entering the lens forms an image, and the direct method is to relate the coordinates of the image point to the angles determining the incident ray.

First, we establish a Cartesian coordinate system as shown in Fig. 1. Take the optical center as the origin O and the optical axis as the z-axis, and make the x-axis and y-axis parallel to the vertical and horizontal axes of the image plane, respectively. Then, rotate the camera around the x-axis and take a photograph of a fixed straight line in the scene at every certain interval. At every angle α that determines a plane of incident rays, we obtain the distorted image of the straight line on the image detector. When all the angles in the field of view have been recorded, a database of one-to-one correspondences between the angle α and its curved image line has been established. Then, rotating the camera by 90 deg around the z-axis and proceeding in the same way, we obtain the second angle β. Since, in practice, not all incident rays can be recorded, the image plane is divided into many square grids; if the interval of the rotating angle is set small enough, high-accuracy results can be obtained. Given an arbitrary point in the measuring field, as shown in Fig. 1, its correspondence on the image plane lies inside a small grid cell, and the fitted angle is obtained by bilinear interpolation18 from the four surrounding grid nodes as

α(u, v) = (1 − s)(1 − t) α_{i,j} + s(1 − t) α_{i+1,j} + (1 − s) t α_{i,j+1} + s t α_{i+1,j+1},   (1)

where (i, j) indexes the grid cell containing the image point, (s, t) are its fractional coordinates within the cell, and β is interpolated in the same way. So, from any measured point's image coordinates (u, v), we can calculate its corresponding angles (α, β). If the image point lies exactly on a recorded curved line, the corresponding angle can be obtained directly by searching the database.
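To make the lookup concrete, the following is a minimal sketch of the interpolation step of Eq. (1) in Python. It assumes the angle database has been resampled onto two lookup tables, alpha_tab and beta_tab, indexed by grid nodes spaced grid_step pixels apart; these names and the storage layout are illustrative assumptions, not part of the original system.

```python
import numpy as np

def interpolate_angles(u, v, alpha_tab, beta_tab, grid_step):
    """Bilinear interpolation (Eq. 1) of the incident-ray angles for an
    image point (u, v) from angle tables sampled at the grid nodes."""
    i, j = int(u // grid_step), int(v // grid_step)   # enclosing grid cell
    s, t = u / grid_step - i, v / grid_step - j       # fractional position

    def bilerp(tab):
        return ((1 - s) * (1 - t) * tab[i, j] + s * (1 - t) * tab[i + 1, j]
                + (1 - s) * t * tab[i, j + 1] + s * t * tab[i + 1, j + 1])

    return bilerp(alpha_tab), bilerp(beta_tab)
```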
3. Calibration of a Stereoscopic System

When the above procedures are completed, the stereoscopic system can be calibrated by placing a reference one-dimensional (1-D) target of fixed length arbitrarily in the field of view, a commonly used approach.3,19 As shown in Fig. 2, two feature points are fixed at the ends of the reference target, with the distance d between them known exactly in advance. The external parameters of a stereoscopic system comprise the rotation matrix R and the translation vector T, which can be represented by the essential matrix E = [T]_× R.

Let F be the fundamental matrix of the stereoscopic system; then we have

p̃_rᵀ F p̃_l = 0,   F = A_r^{−T} E A_l^{−1},   (2)

where A_r and A_l are the virtual intrinsic matrices, containing the virtual focal length f and the angle θ between the horizontal and vertical axes, which is usually close to π/2; p̃_r and p̃_l are the virtual image coordinates of a feature point on the virtual image planes of the right and left cameras, respectively. (Note that, from the image coordinates of a feature point, we can obtain its corresponding angles, which determine the half-ray along which the light travels through the feature point. We can then set a virtual image plane for each camera in front of the optical center.)

The fundamental matrix F can be computed with the eight-point algorithm proposed in Refs. 4 and 20. At least seven pairs of corresponding virtual image points and the distance d are needed to obtain R, T, and the 3-D coordinates of the reconstructed feature points, which are used as initial values for the following nonlinear minimization; see the sketches at the end of this section. We then establish a minimization function, based on the fixed length of the 1-D target and geometrical constraints, to obtain optimal values of the external parameters.

As shown in Fig. 3, P_{i1} and P_{i2} represent the two feature points of the 1-D target, where i = 1, …, n and n is the number of positions at which the 1-D target has been placed. The planes that P_{ij} should lie on are π_{ijk}, where k = 1, …, 4 indexes the planes, since a feature point is the intersection point of four planes; d(P, π) represents the distance between a point and a plane. Then, we have the error equation

e_1 = Σ_{i=1}^{n} Σ_{j=1}^{2} Σ_{k=1}^{4} d²(P_{ij}, π_{ijk}).   (3)

Denote by d and d̂_i the true and measured distances between the two feature points of the 1-D target, respectively; then we have

e_2 = Σ_{i=1}^{n} (d̂_i − d)²,   d̂_i = ‖P_{i1} − P_{i2}‖,   (4)

where r is the vector form of the rotation matrix R. With Eqs. (3) and (4), the minimization problem is established as

min_{r, T, P_{ij}} (e_1 + e_2).   (5)

The nonlinear minimization takes as inputs the angles from the optical centers to the feature points and the real length of the 1-D target; the external parameters and the feature points are adjusted to minimize Eq. (5). The algorithm is a Levenberg–Marquardt nonlinear minimization that starts from the initial values of r and T and ends with the optimized solution of the external parameters.
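For the initialization, the normalized eight-point algorithm of Refs. 4 and 20 and the standard SVD factorization of the essential matrix can be sketched as follows. This is a generic textbook implementation, not the authors' code; the cheirality test that selects among the four (R, t) candidates is omitted.

```python
import numpy as np

def fundamental_eight_point(pl, pr):
    """Normalized eight-point estimate of the fundamental matrix F from
    N >= 8 virtual image correspondences pl -> pr, each of shape (N, 2)."""
    def normalize(p):
        c = p.mean(axis=0)                        # centroid of the points
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.column_stack([p, np.ones(len(p))]) @ T.T, T

    xl, Tl = normalize(pl)
    xr, Tr = normalize(pr)
    # One row per correspondence from the epipolar constraint xr^T F xl = 0.
    A = np.column_stack([xr[:, :1] * xl, xr[:, 1:2] * xl, xl])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)     # least-squares null vector
    U, S, Vt = np.linalg.svd(F)                   # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return Tr.T @ F @ Tl                          # undo the normalization

def essential_to_rt(E):
    """One of the four (R, t) factorizations of E = [T]x R; the physically
    valid candidate is chosen by a cheirality test (omitted here)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    return U @ W @ Vt, U[:, 2]
```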
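The refinement of Eq. (5) can then be sketched with an off-the-shelf Levenberg–Marquardt solver. The sketch assumes a specific angle-to-plane convention (α selects the plane through the x-axis with unit normal (0, cos α, −sin α), β the plane through the y-axis with normal (cos β, 0, −sin β)) and that (R, T) map left-camera coordinates to right-camera coordinates; this convention, the parameter packing, and all names are assumptions, not the authors' stated formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def plane_normals(alpha, beta):
    """Unit normals of the two planes (through the optical center) on which
    a ray with angles (alpha, beta) must lie; an assumed sign convention."""
    return (np.array([0.0, np.cos(alpha), -np.sin(alpha)]),
            np.array([np.cos(beta), 0.0, -np.sin(beta)]))

def residuals(x, angles_l, angles_r, d_true):
    """Stacked residuals of Eqs. (3) and (4). x packs the rotation vector r,
    the translation T, and the feature points in the left-camera frame."""
    n = len(angles_l)                    # target placements, 2 points each
    r, T = x[:3], x[3:6]
    P = x[6:].reshape(n, 2, 3)
    R = Rotation.from_rotvec(r).as_matrix()
    res = []
    for i in range(n):
        for j in range(2):
            p_l = P[i, j]                # point in the left-camera frame
            p_r = R @ p_l + T            # same point, right-camera frame
            normals = (plane_normals(*angles_l[i][j])
                       + plane_normals(*angles_r[i][j]))
            for n_hat, p in zip(normals, (p_l, p_l, p_r, p_r)):
                res.append(n_hat @ p)    # Eq. (3): signed point-plane distance
        # Eq. (4): deviation of the measured target length from the true one.
        res.append(np.linalg.norm(P[i, 0] - P[i, 1]) - d_true)
    return np.asarray(res)

# Levenberg-Marquardt refinement from the eight-point initial packing x0:
# sol = least_squares(residuals, x0, args=(angles_l, angles_r, d_true),
#                     method="lm")
```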
4. Experiments

4.1. Optical Correction for the Camera

The pure optical method requires a high-accuracy straight line in the scene that can be captured by the camera. Note that subpixel detection of image dots gives more reliable results than cross detection.21 A reflective material seems like a good choice, but it is liable to be affected by lighting variation, and an extra light source is often needed to obtain a high contrast ratio. Since the repetitive positioning accuracy of a light-emitting diode (LED) image is better than 0.02 pixels,22–24 a set of near-infrared LEDs was used, adjusted into a straight line by a high-accuracy linear guide. The rotation of the camera around an axis was performed by a multitooth dividing table. Making the optical center coincident with the rotating axis of the multitooth dividing table is critical to the overall accuracy. The rotating axis was determined with a six-dimensional (6-D) high-accuracy adjustable platform, a physical axis, and a dial gauge, as shown in Fig. 4. The multitooth dividing table can rotate 360 deg; we adjusted the 6-D platform to keep the values measured by the dial gauge almost invariant while the table rotated. We used collimated semiconductor laser beams shaped by an aperture stop as narrow parallel beams; since at least two beams are needed to determine a point in space, we used three beams, as shown in Fig. 5, to align the optical center with the rotating axis.
To test the performance of the pure optical method, a corrected camera was placed at several locations to capture multiple images of a straight line composed of a set of feature points in front of the camera. From the image coordinates of the feature points, the corresponding horizontal and vertical angles were obtained by the proposed method, and all the feature points were reprojected onto a virtual image plane perpendicular to the optical axis. Their regression line was then computed, and the root mean square (RMS) distance from each feature point to the line was used as the error measure; a sketch of this evaluation is given after Table 1. Figure 6 shows the orientations of the lines on the virtual image plane corrected by our method, and Table 1 lists the RMS errors of the corrected and uncorrected lines, from which we can see that the RMS errors (in pixels) of the feature points to their regression lines were improved by our method.

Table 1. The root mean square error (RMSE, in pixels) of the feature points to their regression lines.
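The evaluation described above amounts to a total least-squares line fit followed by an RMS of the perpendicular residuals; a minimal sketch (function name and input layout are illustrative):

```python
import numpy as np

def line_rms(points):
    """RMS perpendicular distance of reprojected feature points (N, 2)
    from their total least-squares regression line."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    d = np.linalg.svd(pts - c)[2][0]      # principal direction of the points
    off = pts - c
    perp = off - np.outer(off @ d, d)     # residuals normal to the line
    return np.sqrt(np.mean(np.sum(perp ** 2, axis=1)))
```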
4.2. Spatial Measurement by the Stereoscopic System

Two FL2G-50S5M cameras, each equipped with a 23-mm lens, were used to set up a stereoscopic system. Its working distance was about 8000 mm, and the baseline between the two cameras was about 7000 mm. The 1-D target had two feature points separated by 1026.150 mm, and it could be placed at any orientation and viewpoint in the field of view. After the optimization of the external parameters and feature points described in Sec. 3, we obtained the final external parameters and the errors between the real and measured distances of the two feature points on the target, listed in Table 2. The 1-D target was then randomly placed another 10 times at different positions, including near the fringe of the field of view, and the measured distance between the two endpoints of the target was used to evaluate the measuring accuracy of the stereoscopic system; each endpoint is reconstructed as sketched after the table captions below. From the calibration and measurement data listed in Tables 2 and 3, we can see that the RMS errors are of the same order of magnitude.

Table 2. Calibration results.

Table 3. Measurement results.
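For reference, reconstructing an endpoint as the least-squares intersection of its four planes and then measuring the target length could look as follows; the plane convention is the same assumption as in the Sec. 3 sketch, with (R, T) mapping left-camera to right-camera coordinates.

```python
import numpy as np

def plane_normals(alpha, beta):
    # Same assumed angle-to-plane convention as in the Sec. 3 sketch.
    return (np.array([0.0, np.cos(alpha), -np.sin(alpha)]),
            np.array([np.cos(beta), 0.0, -np.sin(beta)]))

def triangulate(angles_l, angles_r, R, T):
    """Least-squares intersection of the four planes selected by the two
    measured angle pairs; the result is in the left-camera frame.
    Left planes: n . p = 0; right planes: n . (R p + T) = 0."""
    A, b = [], []
    for n in plane_normals(*angles_l):
        A.append(n)
        b.append(0.0)
    for n in plane_normals(*angles_r):
        A.append(n @ R)
        b.append(-float(n @ T))
    return np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)[0]

# Hypothetical usage for one target placement:
# P1 = triangulate(angles_l_1, angles_r_1, R, T)
# P2 = triangulate(angles_l_2, angles_r_2, R, T)
# d_meas = np.linalg.norm(P1 - P2)   # compare with the 1026.150-mm length
```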
5. Conclusion

A novel camera calibration method based on nonparametric models has been presented. First, a database is built by a pure optical adjustment method to remove the influence of lens distortion. Second, a stereoscopic system was established to test the performance of the proposed method, and the external parameters of the cameras can be accurately acquired with the 1-D target. The method is free of the constraints of camera distortion models and is applicable to a central camera equipped with any lens. Moreover, the coupling among the intrinsic and external parameters, which may otherwise lead to instability and mutual compensation, is avoided. On the other hand, as the subdivision number of the angles increases, the correction time increases too. However, since camera correction is an off-line process, spending more time for higher accuracy is acceptable.

Acknowledgments

This research was supported by the National Natural Science Funds for Distinguished Young Scholars of China (Grant No. 51225505) and the National High Technology Research and Development Program of China (863 Program, Grant No. 2012AA041205). The authors would like to express their sincere thanks for this support; comments from the reviewers and the editor are also very much appreciated.

References
1. A. K. Dunne, J. Mallon, and P. F. Whelan, "A comparison of new generic camera calibration with the standard parametric approach," (2007).
2. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718
3. J. Sun et al., "A calibration method for stereo vision sensor with large FOV based on 1D targets," Opt. Lasers Eng. 49(11), 1245–1250 (2011). http://dx.doi.org/10.1016/j.optlaseng.2011.06.011
4. H. C. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections," Nature 293, 133–135 (1981). http://dx.doi.org/10.1038/293133a0
5. X. Armangué and J. Salvi, "Overall view regarding fundamental matrix estimation," Image Vis. Comput. 21(2), 205–220 (2003). http://dx.doi.org/10.1016/S0262-8856(02)00154-3
6. R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Rob. Autom. 3(4), 323–344 (1987). http://dx.doi.org/10.1109/JRA.1987.1087109
7. B. Prescott and G. F. McLean, "Line-based correction of radial lens distortion," Graph. Models Image Process. 59(1), 39–47 (1997). http://dx.doi.org/10.1006/gmip.1996.0407
8. T. Pajdla and T. Werner, "Correcting radial lens distortion without knowledge of 3-D structure," (1997).
9. F. Devernay and O. Faugeras, "Straight lines have to be straight," Mach. Vis. Appl. 13(1), 14–24 (2001). http://dx.doi.org/10.1007/PL00013269
10. C. Ricolfe-Viala, A. J. Sanchez-Salmeron, and E. Martinez-Berti, "Calibration of a wide angle stereoscopic system," Opt. Lett. 36(16), 3064–3066 (2011). http://dx.doi.org/10.1364/OL.36.003064
11. C. Ricolfe-Viala, A. J. Sanchez-Salmeron, and E. Martinez-Berti, "Accurate calibration with highly distorted images," Appl. Opt. 51(1), 89–101 (2012). http://dx.doi.org/10.1364/AO.51.000089
12. M. D. Grossberg and S. K. Nayar, "A general imaging model and a method for finding its parameters," in Proc. 8th Int. Conf. on Computer Vision (ICCV 2001), pp. 108–115 (2001).
13. P. Sturm and S. Ramalingam, "A generic calibration concept: theory and algorithms," (2003).
14. P. Sturm and S. Ramalingam, "A generic concept for camera calibration," in Computer Vision-ECCV 2004, pp. 1–13 (2004).
15. S. Ramalingam, P. Sturm, and S. K. Lodha, "Towards complete generic camera calibration," in IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2005), pp. 1093–1098 (2005).
16. J. Weng, P. Cohen, and M. Herniou, "Camera calibration with distortion models and accuracy evaluation," IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). http://dx.doi.org/10.1109/34.159901
17. T. A. Clarke, X. Wang, and J. G. Fryer, "The principal point and CCD cameras," Photogramm. Rec. 16(92), 293–312 (1998). http://dx.doi.org/10.1111/phor.1998.16.issue-92
18. T. M. Lehmann, C. Gonner, and K. Spitzer, "Survey: interpolation methods in medical image processing," IEEE Trans. Med. Imag. 18(11), 1049–1075 (1999). http://dx.doi.org/10.1109/42.816070
19. H. Lei, Z. Wei, and G. Zhang, "A simple global calibration method based on 1D target for multi-binocular vision sensor," in Int. Symp. on Computer Science and Computational Technology (ISCSCT '08), pp. 290–294 (2008).
20. R. I. Hartley, "In defense of the eight-point algorithm," IEEE Trans. Pattern Anal. Mach. Intell. 19(6), 580–593 (1997). http://dx.doi.org/10.1109/34.601246
21. J. M. Lavest, M. Viala, and M. Dhome, "Do we really need an accurate calibration pattern to achieve a reliable camera calibration?," in Computer Vision-ECCV '98, pp. 158–174 (1998).
22. Z. Jian, Study on the Precision Amelioration of Optical Coordinate Measuring, Tianjin University, Tianjin (2009).
23. Z. Guangjun, Machine Vision, Science Press, Beijing (2008).
24. J. Ares and J. Arines, "Influence of thresholding on centroid statistics: full analytical description," Appl. Opt. 43(31), 5796–5805 (2004). http://dx.doi.org/10.1364/AO.43.005796
Biography

Wei Wang is a PhD candidate in precision measuring technology and instruments at Tianjin University, where he received his MS degree in precision measuring technology and instruments in 2011. His research interests include photoelectric precision measurement and photogrammetric measurement.

Ji-Gui Zhu received his BS and MS degrees from the National University of Defense Technology of China in 1991 and 1994, respectively, and his PhD degree from Tianjin University, China, in 1997. He is now a professor at the State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University. His research interests focus on laser and photoelectric measuring technology, such as industrial online measurement and large-scale precision metrology.