Abstract
Based on an analysis of the measurement model of a binocular vision sensor (BVS), we propose a new flexible calibration method for binocular vision sensors using a planar target with several parallel lines. It only requires the sensor to observe the planar target at a few (at least two) different orientations. Relying on vanishing-feature constraints and the spacing constraints of the parallel lines, a linear method and a nonlinear optimization are combined to estimate the structure parameters of the sensor. The linear method separates the rotation matrix from the translation vector, which reduces the computational complexity; the nonlinear optimization ensures that the calibration result is globally optimal. The factors that affect the calibration accuracy are examined by theoretical analysis and computer simulation, yielding qualitative analysis and quantitative results respectively. Real data shows that the accuracy of the proposed calibration method is about 0.040 mm at a working distance of 800 mm with a view field of 300 × 300 mm. Comparison with the Bouguet toolbox and a method based on a known length indicates that the proposed method is precise, efficient, and convenient thanks to its simple calculation and easy operation, especially for onsite calibration and self-calibration.
© 2015 Optical Society of America
1. Introduction
A basic binocular vision sensor (BVS) consists of two cameras. Based on optical triangulation and stereo parallax, a BVS performs three-dimensional measurement of feature points, feature lines, etc. Binocular stereo vision measurement is non-contact, fast, flexible, and highly precise, and is widely applied not only to 3D model reconstruction but also to the measurement of 2D profiles, 3D topography, and key geometric parameters of three-dimensional objects [1].
Calibrating the measurement model parameters is the key to the successful application of a BVS. This means solving completely for the intrinsic parameters of the cameras and the structure parameters of the binocular vision system. The intrinsic parameters of the cameras do not change with the structure of the BVS, so they are suitable for offline calibration. The structure parameters of the binocular system are different: they are vulnerable to the installation process and need to be calibrated onsite.
At present, the calibration methods for BVS mainly include: ① calibration based on a 3D target with known three-dimensional coordinates [2]; ② calibration of the BVS using a 2D round cavity target under unknown motion [3]; ③ calibration based on the unknown movement of a one-dimensional target [4–6]; ④ self-calibration of the BVS based on feature matching [7]. The method using a three-dimensional target can obtain calibration images of high quality only when the target is at specific locations, owing to the mutual effect of illumination on its different planes; moreover, a three-dimensional target is difficult to machine and expensive to manufacture. In the method based on the unknown motion of a 2D round cavity target, the calculation is an iterative process of solving nonlinear equations, with a large amount of data and complex computation. The method based on a one-dimensional target of unknown motion, although precise and easy to conduct, requires numerous matrix transforms and iterative root-finding for nonlinear equations, which generates high computational complexity and calculation error. For the self-calibration of a BVS based on feature matching, accurate extraction of feature points and exact matching are indispensable for precise calibration, which is very difficult to guarantee in an industrial site with a complex environment. In addition, there are other calibration methods. Li et al. [8] proposed a calibration method for binocular vision sensors based on a BP neural network, but did not report the calibration accuracy. Ma et al. [9] proposed a self-calibration method for binocular active vision, but it required a pure translation between the two cameras. R. Hartley and A. Zisserman [10] put forward a solution for the rotation matrix between two cameras using vanishing points, but did not address the computation of the translation vector. J. Bouguet [11] provides a calibration toolbox for Matlab based on planar chessboard targets, which is the most widely used.
In real life, parallel lines are ubiquitous: airport runways, guardrails, zebra crossings and so on. Motivated by previous work [12–14], a novel calibration method for BVS using parallel lines is proposed in this article. A planar target with more than three equally spaced parallel lines is used. In the measurement space, the target is placed freely n times and the images of the target are captured by the two cameras. The vanishing line of the target plane can be determined from the projections of the parallel lines. Combining the known intrinsic parameters of the cameras, the target plane’s orientation relative to each camera can be calculated. The two normal vectors of one target plane relative to the corresponding cameras are related by a rotation matrix, which is exactly the structure parameter R of the BVS. As the target is placed n times, n constraints are obtained to solve for the rotation. Since the distance D between adjacent parallel lines is known exactly, the translation vector T is then determined. Finally, an overall optimization is performed [15,16]. Because vanishing lines are required anyway when solving for the translation vector T, we choose vanishing lines instead of vanishing points to calculate the rotation matrix R.
The paper is organized as follows. Section 2 describes the measurement model of the BVS. Section 3 studies the calibration principle. Section 4 discusses the accuracy analysis and experiments: we first derive mathematical formulas leading to a qualitative analysis of the various factors’ impact on the calibration results; then quantitative results are displayed through computer simulation; finally, experiments with real data are reported. Section 5 gives the conclusion. In the Appendix, we provide the technique for estimating the vanishing line when the scene is a set of coplanar equally spaced parallel lines.
2. Measurement model
The mapping of an arbitrary spatial point into the image can be approximated by the usual pin-hole model. The measurement model of the BVS is illustrated in Fig. 1. The world coordinate frame (WCF) is defined to coincide with the left camera coordinate frame (LCCF); the right camera coordinate frame is denoted RCCF. Let p_l = (u_l, v_l) and p_r = (u_r, v_r) be the image coordinates of a spatial point P observed by the left and right cameras, respectively.
Denote the intrinsic matrices of the left and right cameras by K_l and K_r, and their projection matrices by M_l and M_r. The structure parameters of the BVS are the rotation matrix R and the translation vector T between the LCCF and the RCCF. Since the WCF coincides with the LCCF, the projection matrices can be expressed as M_l = K_l [I | 0] and M_r = K_r [R | T].
Consequently, a linear equation in the coordinates of P stands. The coordinates of P can be calculated by solving Eq. (3) through the linear least-squares method, reconstructing the three-dimensional position of the point.
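To make the reconstruction step concrete, the following is a minimal sketch of linear least-squares triangulation in Python with NumPy. The function name and the use of an SVD to solve the homogeneous system are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def triangulate(Ml, Mr, pl, pr):
    """Linear least-squares triangulation of one point from two views.

    Ml, Mr: 3x4 projection matrices; pl, pr: (u, v) image coordinates.
    Each view contributes two rows of the homogeneous system A X = 0;
    the solution is the right singular vector of A with the smallest
    singular value.
    """
    A = np.vstack([
        pl[0] * Ml[2] - Ml[0],
        pl[1] * Ml[2] - Ml[1],
        pr[0] * Mr[2] - Mr[0],
        pr[1] * Mr[2] - Mr[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3D point
```

With noisy image points the same routine returns the algebraic least-squares solution rather than an exact intersection.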
3. Principle
In this method, the calibration of the BVS is based on the vanishing features of parallel lines. The calibration model of the BVS is shown in Fig. 2. The calibration method is carried out in the following main steps:
- 1) The intrinsic parameters of the left and right cameras are obtained by Zhang’s calibration method [11,17]. The two cameras are positioned according to the measurement requirement, and their relative position must remain unchanged throughout the calibration.
- 2) Place the planar target in the cameras’ field of view. The images of the target are captured by the cameras calibrated in step 1). By moving the target to many different positions, enough images are obtained. In the process, the angle between the target plane and the camera image planes must not be too small, so that the vanishing line can be computed accurately.
- 3) According to the intrinsic parameters of the cameras, all images are rectified to compensate for lens distortion. The feature points on the parallel lines are extracted by Steger’s method [18], and the equations of the parallel lines are obtained from the extracted feature points using the least-squares method.
- 4) The vanishing line of the target plane is computed from the line equations obtained in step 3), following the method described in the Appendix. Combining the intrinsic matrix of each camera, the normal direction of the target plane relative to that camera is determined. The rotation matrix R of the BVS is then worked out from the normal vectors of the target plane under the LCCF and RCCF. Since the parallel lines on the target plane are coplanar and spaced at a known distance D, the translation vector T is determined. Finally, an overall optimization ensures that the calibration result is globally optimal.
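Steps 3) and 4) can be sketched as follows; this is an illustrative fragment with hypothetical helper names, not the authors' code. The normal computation uses the vanishing-line relation n ∝ K^T l_∞ of Property 2 in Sect. 3.1.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit of a homogeneous image line l = (a, b, c),
    with a*u + b*v + c = 0, to extracted feature points (step 3)."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    # the line is the right singular vector with the smallest singular value
    return np.linalg.svd(A)[2][-1]

def plane_normal(K, l_inf):
    """Normal direction of the target plane from its vanishing line
    (step 4): n is proportional to K^T l_inf, normalized to unit length."""
    n = K.T @ l_inf
    return n / np.linalg.norm(n)
```

The fitted lines from each image feed the vanishing-line estimation of the Appendix; the resulting l_∞ per camera then yields the plane normal used in Sect. 3.2.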
3.1. Related concepts and properties
The following are some concepts and properties connected with the proposed calibration method [10].
(Property 1) Points at infinity: In P^2, consider two parallel lines ax + by + c = 0 and ax + by + c' = 0. They are represented by the vectors l = (a, b, c)^T and l' = (a, b, c')^T, for which the first two coordinates are the same. Computing the intersection of these lines gives no difficulty: the intersection is l × l' = (c' − c)(b, −a, 0)^T, and ignoring the scale factor (c' − c), this is the point (b, −a, 0)^T. Points of the form (x, y, 0)^T are known as points at infinity. Now if we attempt to find the inhomogeneous representation of this point, we obtain (b/0, −a/0), which makes no sense, except to suggest that the point of intersection has infinitely large coordinates. This observation agrees with the usual idea that parallel lines meet at infinity, and a set of parallel lines meet at the same point at infinity.
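Property 1 can be checked numerically: the homogeneous intersection of two lines is simply their cross product.

```python
import numpy as np

# Two parallel image-plane lines x + 2y + 1 = 0 and x + 2y + 5 = 0,
# written in homogeneous form (a, b, c).
l1 = np.array([1.0, 2.0, 1.0])
l2 = np.array([1.0, 2.0, 5.0])

# Their intersection is the cross product of the line vectors.
p = np.cross(l1, l2)
# p = (8, -4, 0): the third coordinate is zero, so this is a point at
# infinity in the direction (b, -a) = (2, -1) up to scale.
```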
(Property 2) Vanishing lines: The vanishing line l_∞ is constructed, as illustrated in Fig. 3, by intersecting the image with a plane parallel to the scene plane through the camera center C. Clearly, a vanishing line depends only on the orientation of the scene plane, not on its position. Under the camera coordinate frame (CCF), the normal direction n of the scene plane and its vanishing line l_∞ are related, up to scale, by n = K^T l_∞, where K is the intrinsic matrix of the camera.
(Property 3) Back-projection plane: The set of points in space that map to a line in the image is a plane defined by the camera center and the image line, as shown in Fig. 4. This plane is known as the back-projection plane, and the corresponding 3-space line obviously lies on it. If a 3-space line is imaged by two cameras, we obtain two back-projection planes whose intersection is the 3-space line. Under the WCF, the back-projection planes of the image lines l_l and l_r are π_l = M_l^T l_l and π_r = M_r^T l_r, where M_l and M_r are the projection matrices of the left and right cameras. The 3-space line can then be represented as the dual Plücker matrix L* = π_l π_r^T − π_r π_l^T.
3.2. Acquisition of the rotation matrix
As illustrated in Fig. 5, l_li and l_ri are the projections of the lines L_i (i = 1, 2, 3, 4) on the target plane. In the target images, the central lines of the parallel stripes are extracted by Steger’s method [18], followed by line linking based on the edge connection algorithm proposed by Peter Kovesi [19]. The recognition and matching of lines use their position information. The homogeneous equations of l_li and l_ri are obtained by least-squares fitting of the extracted points on the straight lines. According to the Appendix, the vanishing lines l_l∞ and l_r∞ of the target plane under the LCCF and RCCF can then be determined from the lines l_li and l_ri (i = 1, 2, 3, 4).
Since the intrinsic matrices K_l and K_r of the left and right cameras are known, the normal vectors n_l and n_r of the target plane under the LCCF and RCCF satisfy, up to scale, n_l = K_l^T l_l∞ and n_r = K_r^T l_r∞.
Owing to the fact that n_l and n_r describe the same target plane, just in different coordinate frames, they are related by the rotation matrix: n_r = R n_l (Eq. (8)).
If the target is placed n times, n equations similar to Eq. (8) stand. As an orthogonal matrix, the rotation matrix has only three independent variables, so R is determined when n ≥ 2.
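One way to solve the stacked equations n_r,i = R n_l,i in the least-squares sense is an SVD-based orthogonal Procrustes (Kabsch) step. The paper does not specify its linear solver, so this is an assumed but standard choice:

```python
import numpy as np

def rotation_from_normals(n_l, n_r):
    """Estimate R such that n_r_i ~ R n_l_i for paired unit plane normals
    (given as the rows of n_l and n_r), via SVD of the correlation matrix.
    At least two non-parallel normal pairs (n >= 2 target poses) are needed
    for a unique solution."""
    H = np.asarray(n_l).T @ np.asarray(n_r)
    U, _, Vt = np.linalg.svd(H)
    # the sign correction keeps det(R) = +1 (a proper rotation)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```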
3.3. Acquisition of the translation vector
Let π_li and π_ri be the back-projection planes under the WCF defined by the lines l_li and l_ri on the image planes (i = 1, 2, 3, 4). According to Eq. (5), π_li = M_l^T l_li and π_ri = M_r^T l_ri.
The 3-space line L_i can then be represented as the dual Plücker matrix L*_i = π_li π_ri^T − π_ri π_li^T, according to Eq. (6).
Write π_li = (v_li^T, w_li)^T and π_ri = (v_ri^T, w_ri)^T, where v_li = K_l^T l_li and v_ri = R^T K_r^T l_ri (i = 1, 2, 3, 4). It is clear that v_li and v_ri are independent of T. The upper-left 3 × 3 block of L*_i is an antisymmetric matrix and therefore corresponds to a vector, namely v_ri × v_li: the element of the block in row m and column n is the corresponding entry of the cross-product matrix.
Since the lines L_i are parallel to each other, they share a common point at infinity V = (d^T, 0)^T under the WCF, where ||d|| = 1. As this point at infinity lies on every line L_i, and hence on every back-projection plane, the following equations stand:
v_li^T d = 0 and v_ri^T d = 0 (i = 1, 2, 3, 4). The unit vector d is determined by solving this homogeneous system in the least-squares sense; d represents the orientation of the lines L_i under the WCF.
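The common direction d, orthogonal to all back-projection plane normals, can be recovered as the right singular vector of the stacked normals with the smallest singular value. This is again a sketch with an illustrative function name:

```python
import numpy as np

def line_direction(plane_normals):
    """Common direction d of a pencil of parallel 3-space lines.

    Each back-projection plane contains the lines, so its normal v
    satisfies v^T d = 0; d is therefore the right singular vector of
    the stacked normals with the smallest singular value."""
    V = np.asarray(plane_normals, float)
    _, _, Vt = np.linalg.svd(V)
    return Vt[-1]  # unit vector, defined up to sign
```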
Let π_i⊥ be the plane passing through the 3-space line L_i and perpendicular to the target plane, as shown in Fig. 6. Here, n is the unit normal vector of the target plane under the WCF and has already been calculated in Sect. 3.2 when solving the rotation matrix R. The Plücker matrix representation L_i can be obtained from the dual representation L*_i by a simple rewrite rule that exchanges complementary index pairs (i = 1, 2, 3, 4). Since the line L_i lies on both the plane π_i⊥ and its back-projection planes, it satisfies the following equation.
From Eq. (12), we obtain Eq. (13). As the lines L_i are parallel to each other and spaced at a distance D, a spacing equation stands for each i = 1, 2, 3, 4. In Eq. (13) there are 5 unknown parameters and 4 constraints. Moving the target once adds 4 constraints but only 2 unknown parameters. If the target is placed n times independently, there are 4n constraints and 2n + 3 unknown parameters, so the translation vector T is determined when 4n ≥ 2n + 3, meaning n ≥ 2.
3.4. Optimization
Let π⊥ represent the plane through the origin perpendicular to the lines L_i, i.e. the plane (d^T, 0)^T. The intersection between the 3-space line L_i and the plane π⊥ is defined as P_i. As the parallel lines are spaced at a distance D, the equation ||P̃_i − P̃_j|| = |i − j| D stands, where P̃_i denotes the non-homogeneous coordinates of P_i (i, j = 1, 2, 3, 4, …).
Suppose the number of equally spaced coplanar parallel lines is m and the target is placed n times, the following optimization function is established:
Here the first term measures the distances between the reconstructed adjacent lines; n_l and n_r are the normal vectors of the target plane under the LCCF and RCCF; w1 and w2 are weight factors. Considering the principle of error distribution, w1 is taken as 0.1 and w2 as 10. For a numerically stable solution, the orthogonal rotation matrix R is converted to a Rodrigues vector [20], so the number of unknown parameters is six. Eq. (14) is solved by the Levenberg-Marquardt method. Owing to the quality of the initial values calculated in Sect. 3.2 and Sect. 3.3, the global optimum can be reached within a few iterations, which both ensures the accuracy of the results and improves the computing speed.
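The nonlinear refinement can be sketched with SciPy's Levenberg-Marquardt driver. Only the normal-consistency term (weight w2) is shown; the spacing term (weight w1) and the translation unknowns would be appended analogously. The function names and the use of SciPy are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def normal_residuals(rvec, n_l, n_r, w2=10.0):
    """Weighted residuals n_r_i - R(rvec) n_l_i, with R parametrized as a
    Rodrigues vector so the optimizer sees only 3 rotation unknowns."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    return w2 * (n_r - n_l @ R.T).ravel()

# usage: refine the Rodrigues vector starting from the linear estimate r0
# sol = least_squares(normal_residuals, r0, args=(n_l, n_r), method='lm')
```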
4. Analysis and experiment
In this section, the factors on which the calibration accuracy depends are discussed by means of mathematical formulations, followed by computer simulations analyzing the impact of these factors. Finally, experiments with real data, including comparison experiments and accuracy evaluation, are reported.
4.1 Accuracy analysis
Since the method uses vanishing lines to complete the calibration, the accuracy of the vanishing lines directly determines the accuracy of the calibration. Here, the factors that affect the calculation of the vanishing line are discussed. As described in the Appendix, the vanishing line is determined by the images of several equally spaced parallel lines. Let y = k1·x + b1 stand for the first line and y = k2·x + b2 stand for the (n + 1)-th line, where k1 and k2 are the slopes and b1 and b2 are the intercepts. The vanishing line is then related to k1, k2, b1 and b2 by Eq. (15).
According to Eq. (15), the partial derivatives with respect to k1, k2, b1 and b2 can be computed. Let δk1, δk2, δb1, δb2 be the errors caused by noise, and let δu, δv be the resulting variations of the vanishing line. The error propagation for δu and δv is given by [21] in Eq. (18) and (19).
From Eq. (18) and (19), it can be seen that the variations of the vanishing line are inversely proportional to the slope difference k2 − k1 and the intercept difference b2 − b1: the larger these differences, the smaller the variations of the vanishing line. In the following, two factors that affect the slope difference and the intercept difference of the lines on the image are discussed: the angle between the target plane and the image plane, and the spacing of the parallel lines.
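A quick numeric illustration of Eq. (18) and (19): with the intersection u = (b2 − b1)/(k1 − k2), the same intercept perturbation displaces the vanishing point far more when the slope difference is small (nearly parallel image lines). The helper below is illustrative only.

```python
def vanishing_point(k1, b1, k2, b2):
    """Intersection of the image lines y = k1*x + b1 and y = k2*x + b2."""
    u = (b2 - b1) / (k1 - k2)
    return u, k1 * u + b1

db = 0.1  # the same intercept error in both cases
# well-separated slopes (k1 - k2 = 0.5)
u0, _ = vanishing_point(1.0, 0.0, 0.5, 10.0)
u1, _ = vanishing_point(1.0, 0.0, 0.5, 10.0 + db)
# nearly parallel image lines (k1 - k2 = 0.05)
v0, _ = vanishing_point(1.0, 0.0, 0.95, 10.0)
v1, _ = vanishing_point(1.0, 0.0, 0.95, 10.0 + db)
shift_wide, shift_narrow = abs(u1 - u0), abs(v1 - v0)  # 0.2 vs 2.0
```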
Let P be an arbitrary point with homogeneous coordinates under the WCF, and let p be the homogeneous coordinates of its projection on the image plane. The relationship between the 3-space point and its image projection is given by Eq. (20), where s is a scale factor, K is the intrinsic matrix of the camera, and R, T are the rotation matrix and translation vector between the CCF and the WCF. Without loss of generality, we assume the target plane lies on the plane Z = 0 of the WCF, and denote the i-th column of the rotation matrix by r_i; the projection then reduces to a plane homography H = K [r1 r2 T].
For simplicity, the relationship between the CCF and the WCF is illustrated in Fig. 7: the camera coordinate frame is translated and then rotated by an angle θ in the clockwise direction to obtain the WCF. As the target plane lies on the plane Z = 0 of the world coordinate frame, θ is also the angle between the target plane and the image plane.
The projective transformation of points is given by the homography in Eq. (21), so lines on the target plane transform as l′ = H^(−T) l.
Let Y = 0 be the first line on the target plane and Y = nD be the (n + 1)-th line, where D is the spacing of the parallel lines. Their homogeneous coordinates in the image plane then follow from the line transformation.
Then Eq. (27) and (28) can be calculated. From Eq. (27) and (28), it is obvious that the slope difference and the intercept difference depend on both the angle θ between the target plane and the image plane and the spacing D of the parallel lines; furthermore, both differences grow with θ and D.
Based on Eq. (18), (19), (27) and (28), we know the factors affecting the localization of the vanishing line. The calibration accuracy can be improved by enlarging the spacing of the parallel lines or by increasing the angle between the target plane and the image plane.
4.2. Computer simulation
As discussed in the last subsection, the calibration accuracy of the BVS can be affected by various factors. In this subsection, the calibration accuracy with respect to four main factors is further analyzed through computer simulations: 1) the noise level of the images; 2) the number of parallel lines; 3) the spacing of the parallel lines on the target; 4) the angle between the target plane and the image plane.
The intrinsic parameters of the simulated cameras are shown in Table 1. The distortions of the cameras are ignored. The rotation vector and the translation vector between the coordinate frames of the two cameras are fixed, with the rotation expressed as a Rodrigues vector. The working distance of the cameras is 700 mm. A planar target with a set of equally spaced parallel lines is simulated as the model plane. Each line on the target is emulated by 100 points, and the target is placed 9 times randomly in front of the virtual cameras. The calibration accuracy is expressed by the simulation errors of R and T, both treated as vectors: the 2-norm of the difference between the true vector and the estimated one represents the absolute error, and dividing it by the true value gives the relative standard deviation, which is defined as the calibration accuracy.
4.2.1 Performance w.r.t. the noise level, the number and distance of parallel lines
Gaussian noise with zero mean and standard deviation σ is added to the image points. For each number of parallel lines (3, 5, 7) and each spacing (15 mm, 20 mm, 25 mm, 30 mm), 100 independent trials are performed to compute the relative standard deviations of R and T, and the average value is taken as the final result.
From Figs. 8, 9 and 10, it can be seen that the calibration errors of R and T increase with the noise level and decrease as the spacing of the parallel lines grows. As illustrated in Fig. 11, the calibration performance is also improved by increasing the number of parallel lines. Under favorable settings of the noise level, line number and spacing, the calibration accuracies of R and T reach 1.8‰ and 0.5‰.
4.2.2 Performance w.r.t. angle between the target plane and the image plane
Gaussian noise with zero mean and 0.2 standard deviation is added to the image points. The number of parallel lines is 7 and the spacing is 16.5 mm. The angle θ is varied over a range of values; for each value of θ, 100 trials are performed, and the average is taken as the final result.
From Fig. 12, it can be seen that the calibration error of one structure parameter first increases and then decreases as the angle θ grows, taking its smallest value at a particular angle, while the calibration error of the other decreases monotonically as θ increases. Considering the calibration errors of both R and T, an intermediate angle is the most suitable; there, the calibration accuracies of R and T are 1.5‰ and 0.8‰.
4.3 Real data
For the experiment with real data, the binocular vision system is composed of two AVT F-504B cameras. The working distance of the cameras is about 700 mm to 900 mm. The physical system is shown in Fig. 13.
4.3.1 Intrinsic parameter calibration
The intrinsic parameters of the left and right cameras are calibrated by the Bouguet toolbox [11], which is built on Zhang’s calibration method [17]. The planar target we use is a chessboard with evenly distributed corner points. The distance between adjacent points is 10 mm in both the horizontal and vertical directions, with an accuracy of 5 μm. In the experiment, the image pairs used for calibration are taken from 13 different orientations by the left and right cameras simultaneously. One pair of them is shown in Fig. 14, and the calibration results for the cameras’ intrinsic parameters are shown in Table 2.
4.3.2 Structure parameter calibration
A planar target comprising 10 equally spaced parallel lines with a spacing of 10 mm and an accuracy of 0.02 mm is used in our proposed calibration method (see Fig. 13). The target is placed 13 times independently, with the angle between the normal direction of the target plane and the optical axis of the cameras varied over a moderate range. Both the left and right cameras capture 13 images of the target; one pair of target images with the extracted lines is illustrated in Fig. 15.
As a comparison, the method using functions of the Bouguet toolbox [11] and the method based on a known length [4] are also carried out to obtain the structure parameters. In the Bouguet-toolbox method, image pairs of the chessboard target are captured from more than 3 different orientations. The initial intrinsic parameters and the relationship between the camera coordinate system and the target coordinate system are obtained first; the initial structure parameters are then acquired by coordinate transformation through the intermediate target coordinate system; and an overall optimization of the intrinsic and structure parameters based on the reprojection errors of the feature points is the final procedure. Obviously, the intrinsic parameters and structure parameters of the BVS cannot be calibrated separately in this scheme, so the structure parameters obtained by the Bouguet toolbox are calibrated together with the intrinsic parameters using the pairs of images shown in Fig. 14. In the method based on a known length, the structure parameters are calibrated from the same pairs of images as the Bouguet toolbox for uniformity; the known length chosen on the chessboard target is shown in Fig. 16.
Table 3 shows the comparative results for the structure parameters obtained by the different techniques. Since all methods use the same intrinsic parameters, the differences among these structure parameters come from the calibration techniques themselves.
4.3.3 Accuracy evaluation
For the accuracy evaluation of the calibration results, another 3 pairs of images of the chessboard target are captured. Every distance between adjacent corners on the chessboard in the horizontal and vertical directions is measured by the calibrated BVS, and the average value and root-mean-square (RMS) error are calculated to evaluate the calibration accuracy. To further investigate the validity of our method, the target is also measured using the other two groups of calibration results, as shown in Table 4. To give a concrete display of our proposed calibration method, all the reconstructed 3D points are shown in Fig. 17.
As shown in Table 4, the RMS error of our method is 0.041 mm. Compared with the other two methods, ours is clearly better than the method based on a known length but inferior to the Bouguet toolbox; the average RMS error difference between our method and the Bouguet toolbox is 0.013 mm. Although both methods rest on the same intrinsic parameters, the targets used when calibrating the structure parameters differ: the parallel-lines target used in our method has an accuracy of 0.02 mm, while the chessboard target used by the Bouguet toolbox has an accuracy of 5 μm. Taking this difference into account, it is reasonable to expect that our method would perform better and reach the same accuracy level as the Bouguet toolbox if the parallel-lines target were also manufactured to 5 μm accuracy. Analysis of the standard errors shows that our method and the Bouguet toolbox are the more stable. Meanwhile, parallel lines are much more ubiquitous than chessboards in real life, such as airport runways, guardrails and zebra crossings, so our method has a great advantage for self-calibration and onsite calibration.
5. Conclusions
In this article, a calibration method for BVS based on the vanishing features of parallel lines is proposed. A planar target with a set of parallel lines spaced at a known distance D is placed at least twice during the calibration. The calibration model achieves the separation of the rotation matrix R from the translation vector T, which reduces the complexity and difficulty of the calculation. The objective optimization function is based on the absolute distance between the parallel lines and the consistency of the target plane’s normal vector under the LCCF and RCCF; the initial value is determined by a linear method and refined by nonlinear optimization in the 3D measurement space, which ensures the global optimum. Regarding the factors that affect calibration accuracy, theoretical derivation gives a qualitative analysis and computer simulations give quantitative results, and specific measures to decrease the calibration errors caused by these factors are proposed. Real data shows that the measurement accuracy is about 0.040 mm. Compared with conventional calibration methods, our method is simple, effective and precise, especially for onsite calibration and self-calibration. However, the algorithm may fail in some cases. When the target plane is nearly parallel to the image plane, the vanishing line approaches infinity, leading to large calculation errors; when the angle between the target plane and the image plane is too large or the image noise is excessive, the extraction accuracy of the parallel lines cannot be guaranteed. Meanwhile, both a sufficient number and an adequate spacing of parallel lines are necessary, or the calibration result may be inaccurate. In practice, these degenerate configurations are easy to avoid. Finally, owing to the properties of lines, the proposed method is in theory also suitable for calibrating cameras without a common field of view.
We plan to incorporate the model and algorithms into global calibration of multi-sensor vision system.
Appendix
The vanishing line given the image of a set of coplanar equally spaced parallel lines.
A set of equally spaced lines on the scene plane may be represented as ax + by + c + n = 0, where n takes integer values. This set (a pencil) of lines may be written as l_n = l_0 + n l̄, where l̄ = (0, 0, 1)^T is the line at infinity on the scene plane. Under perspective imaging the point transformation is x′ = Hx, and the corresponding line map is l′ = H^(−T) l, so the imaged pencil satisfies l′_n ∝ l′_0 + n l′_∞, where l′_∞, the image of l̄, is the vanishing line of the plane. The imaged geometry is illustrated in Fig. 18.
The vanishing line may be determined from this set of imaged lines provided their indices n are identified, and a closed-form solution can be derived as follows.
If the lines are not parallel to the y-axis of the image plane (that degenerate case is ignored here), the following equation stands:
Here the additional unknowns are scale factors. From Eq. (29) we obtain Eq. (30), and then Eq. (31) stands. The vanishing line can be obtained by solving Eq. (31) with the linear least-squares method.
Acknowledgments
This work is supported by the Instrument Special National Natural Science Foundation of China (No. 61127009), the Natural Science Foundation of Beijing (No. 3142012), and the National Key Scientific Instrument and Equipment Development Project (No. 2012YQ140032). We appreciate the editor’s and reviewers’ valuable comments on our manuscript.
References and links
1. G. Zhang, Visual Measurement, 1st ed. (Beijing Sciences, 2008).
2. S. Ma and Z. Zhang, Computer Vision: Theory and Algorithms, 1st ed. (Beijing Sciences, 1998).
3. F. Zhou, J. Zhu, and X. Yang, “A field calibration technique for binocular vision sensor,” Yiqi Yibiao Xuebao 21(2), 142–145 (2000).
4. G. Zhang and F. Zhou, “The calibration method of stereo visual sensor structural parameters based on standard length,” in Proceedings of CSAA on Aviation Industry Measurement and Control Technology, ed. (Academic, 2001), pp. 259–263.
5. J. Sun, Z. Wu, Q. Liu, and G. Zhang, “Field calibration of stereo vision sensor with large FOV,” Opt. Precision Eng. 17(3), 633–640 (2009).
6. F. Zhou, G. Zhang, and Z. Wei, “Calibrating binocular vision sensor with one-dimensional target of unknown motion,” Chin. J. Mech. Ens-EN 42(6), 92–96 (2006).
7. R. Hartley, “Estimation of relative camera positions for uncalibrated cameras,” in Proceedings of European Conference on Computer Vision (Academic, 1992), pp. 579–587. [CrossRef]
8. M. Li, A. Zhang, and S. Hu, “On 3D measuring system of sheet metal surface based on computer vision,” Chin. Mech. Eng. 13(14), 1177–1180 (2002).
9. Y. Ma and W. Liu, “A linear self-calibration algorithm based on binocular active vision,” Robot 26(6), 486–490 (2004).
10. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003).
11. J. Bouguet, “Camera calibration toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/.
12. Z. Wei, C. Li, and B. Ding, “Line structured light vision sensor calibration using parallel straight lines features,” Optik (Stuttg.) 125(17), 4990–4997 (2014). [CrossRef]
13. Z. Wei, M. Shao, G. Zhang, and Y. Wang, “Parallel-based calibration method for line-structured light vision sensor,” Opt. Eng. 53(3), 033101 (2014). [CrossRef]
14. Z. Wei, M. Xie, and G. Zhang, “Calibration method for line structured light vision sensor based on vanish points and lines,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 794–797. [CrossRef]
15. J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in Numerical Analysis (Springer Berlin Heidelberg, 1978), pp. 105–116.
16. M. Lourakis, “Levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++,” (2004), http://www.ics.forth.gr/~lourakis/levmar.
17. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]
18. C. Steger, “Unbiased extraction of curvilinear structures from 2D and 3D images,” Ph.D. dissertation, Technische Universitaet Muenchen (1998).
19. P. D. Kovesi, “MATLAB and Octave functions for computer vision and image processing,” http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/#match.
20. O. Rodrigues, “Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace: et de la variation des cordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire,” (publisher not identified, 1840).
21. Y. Fei, Error Theory and Data Processing, 1st ed. (China Machine, 2010).