Optica Publishing Group

Vanishing feature constraints calibration method for binocular vision sensor

Open Access

Abstract

Based on an analysis of the measurement model of the binocular vision sensor (BVS), we propose a new flexible calibration method for BVS using a planar target with several parallel lines. It only requires the sensor to observe the planar target at a few (at least two) different orientations. Relying on the vanishing feature constraints and the spacing constraints of the parallel lines, a linear method and nonlinear optimization are combined to estimate the structure parameters of the BVS. The linear method separates the rotation matrix from the translation vector, which reduces the computational complexity; the nonlinear algorithm ensures that the calibration results are globally optimal. For the factors that affect calibration accuracy, theoretical analysis and computer simulation are carried out, yielding a qualitative analysis and quantitative results, respectively. Real data show that the accuracy of the proposed calibration method is about 0.040mm at a working distance of 800mm with a field of view of 300 × 300mm. Comparison with the Bouguet toolbox and a method based on known length indicates that the proposed calibration method is precise, efficient, and convenient thanks to its simple calculation and easy operation, especially for onsite calibration and self-calibration.

© 2015 Optical Society of America

1. Introduction

A basic binocular vision sensor (BVS) consists of two cameras. Relying on optical triangulation and stereo parallax, a BVS performs three-dimensional measurement of feature points, feature lines, etc. Binocular stereo vision measurement is non-contact, fast, flexible, and highly precise, and it is widely applied not only to 3D model reconstruction but also to the measurement of 2D profiles, 3D topography, and key geometric parameters of three-dimensional objects [1].

Calibrating the measurement model parameters is the key to the successful application of a BVS. This means completely solving for the intrinsic parameters of the cameras and the structure parameters of the binocular vision system. The intrinsic parameters of the cameras do not change with the structure of the BVS, so they are suitable for offline calibration. The structure parameters of the binocular system are different: they are vulnerable to the installation process and need to be calibrated onsite.

At present, the calibration methods for BVS mainly include: ① calibration based on a 3D target with known three-dimensional coordinates [2]; ② calibration using the unknown motion of a 2D round cavity target [3]; ③ calibration based on the unknown movement of a one-dimensional target [4–6]; ④ self-calibration of BVS based on feature matching [7]. The method using a three-dimensional target can obtain high-quality calibration images only when the target is at specific locations, because of the mutual effect of illumination on its different planes. Moreover, a three-dimensional target is difficult to machine and expensive to manufacture. In the method based on the unknown motion of a 2D round cavity target, the calculation is an iterative process of solving nonlinear equations, with a large amount of data and complex computation. The method based on a one-dimensional target of unknown motion, although precise and easy to conduct, requires numerous matrix transforms and iterative root-finding of nonlinear equations, which generates high computational complexity and calculation error. For the self-calibration of BVS based on feature matching, accurate extraction of feature points and exact matching are indispensable for precise calibration, which is very difficult to guarantee in industrial sites with complex environments. In addition, there are some other calibration methods. Li et al. [8] proposed a calibration method for binocular vision sensors based on a BP neural network, but did not report the calibration accuracy. Ma et al. [9] proposed a self-calibration method for binocular active vision, but it requires pure translation between the two cameras. R. Hartley and A. Zisserman [10] put forward a solution for the rotation matrix between two cameras using vanishing points, but did not address the computation of the translation vector. J. Bouguet [11] provides a calibration toolbox for Matlab based on planar chessboard targets, which is the most widely used.

In real life, parallel lines are ubiquitous: airport runways, guardrails, zebra crossings, and so on. Motivated by previous work [12–14], a novel calibration method for BVS using parallel lines is proposed in this article. A planar target with more than three equally spaced parallel lines is used. In the measurement space, the target is placed freely n times and the images of the target are captured by the two cameras. The vanishing line of the target plane can be determined from the projections of the parallel lines. Combined with the known intrinsic parameters of the cameras, the target plane's orientation relative to each camera can be calculated. The two normal vectors of one target plane relative to the corresponding cameras are related by a rotation matrix, which is exactly the rotation matrix R among the structure parameters of the BVS. As the target is placed n times, n constraints are obtained to solve for the rotation. Since the distance D between each two adjacent parallel lines is known exactly, the translation vector is then determined. Finally, an overall optimization is performed [15,16]. Because vanishing lines are needed in solving the translation vector T, we choose vanishing lines instead of vanishing points to calculate the rotation matrix R.

The paper is organized as follows. Section 2 describes the measurement model of BVS. Section 3 studies the calibration principle. Section 4 discusses the accuracy analysis and experiments: we first derive mathematical formulas yielding a qualitative analysis of the various factors' impact on the calibration results; then quantitative results are displayed through computer simulation; finally, real-data experiments are presented. Section 5 concludes. In the Appendix, we provide the technique for estimating the vanishing line when the scene is a set of coplanar equally spaced parallel lines.

2. Measurement model

An arbitrary spatial point's mapping into the image can be approximated by the usual pin-hole model. The measurement model of BVS is illustrated in Fig. 1. The world coordinate frame (WCF), defined as $OXYZ$, coincides with the left camera coordinate frame (LCCF). The right camera coordinate frame (RCCF) is $O_{cr}X_{cr}Y_{cr}Z_{cr}$. The image coordinate frames of the left and right cameras are $o_lu_lv_l$ and $o_ru_rv_r$. Let $\mathbf{m}_l=(u_l,v_l,1)^T$ and $\mathbf{m}_r=(u_r,v_r,1)^T$ be the image coordinates of the spatial point $M=(X,Y,Z,1)^T$ mapped by the left and right cameras, respectively.

Fig. 1 Measurement model of BVS.

Denote the intrinsic matrices of the left and right cameras by $A_l$ and $A_r$, and the projection matrices by $P_l$ and $P_r$. With the rotation matrix $R$ and translation vector $T$ between LCCF and RCCF, the projections are expressed as:

$$s_l\mathbf{m}_l=A_l\,[\,I\quad \mathbf{0}\,]\,M=P_lM,\tag{1}$$
$$s_r\mathbf{m}_r=A_r\,[\,R\quad T\,]\,M=P_rM.\tag{2}$$
Consequently, the following linear equation stands:

$$\begin{bmatrix}u_lp_{31}^{l}-p_{11}^{l}&u_lp_{32}^{l}-p_{12}^{l}&u_lp_{33}^{l}-p_{13}^{l}\\v_lp_{31}^{l}-p_{21}^{l}&v_lp_{32}^{l}-p_{22}^{l}&v_lp_{33}^{l}-p_{23}^{l}\\u_rp_{31}^{r}-p_{11}^{r}&u_rp_{32}^{r}-p_{12}^{r}&u_rp_{33}^{r}-p_{13}^{r}\\v_rp_{31}^{r}-p_{21}^{r}&v_rp_{32}^{r}-p_{22}^{r}&v_rp_{33}^{r}-p_{23}^{r}\end{bmatrix}\begin{bmatrix}X\\Y\\Z\end{bmatrix}=\begin{bmatrix}p_{14}^{l}-u_lp_{34}^{l}\\p_{24}^{l}-v_lp_{34}^{l}\\p_{14}^{r}-u_rp_{34}^{r}\\p_{24}^{r}-v_rp_{34}^{r}\end{bmatrix}.\tag{3}$$

The values of $X,Y,Z$ can be calculated by solving Eq. (3) through the linear least-squares method. Then we reconstruct the three-dimensional spatial coordinates of point M.
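The linear solve of Eq. (3) can be sketched per point pair with ordinary least squares. A minimal numpy sketch (the projection matrices and pixel values used below are hypothetical; the row layout follows Eq. (3)):

```python
import numpy as np

def triangulate(Pl, Pr, ml, mr):
    """Reconstruct the 3D point M from a stereo correspondence by solving
    the 4x3 linear system of Eq. (3) in the least-squares sense.
    Pl, Pr: 3x4 projection matrices; ml, mr: (u, v) pixel coordinates."""
    rows, rhs = [], []
    for P, (u, v) in ((Pl, ml), (Pr, mr)):
        # u*(row3 . M) = row1 . M  and  v*(row3 . M) = row2 . M
        rows.append(u * P[2, :3] - P[0, :3]); rhs.append(P[0, 3] - u * P[2, 3])
        rows.append(v * P[2, :3] - P[1, :3]); rhs.append(P[1, 3] - v * P[2, 3])
    M, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return M
```

With noise-free projections this recovers the point exactly; with noisy pixels it returns the least-squares estimate.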

3. Principle

In this method, the calibration of BVS is based on the vanishing features of parallel lines. The calibration model of BVS is shown in Fig. 2. The calibration method can be carried out in the following main steps:

Fig. 2 Calibration model of BVS.

  • 1) The intrinsic parameters of the left and right cameras are obtained by Zhang's calibration method [11,17]. The two cameras are positioned according to the measurement requirement, and their relative position must remain unchanged during the calibration process.
  • 2) Place the planar target in the cameras' field of view. The images of the target are captured by the cameras calibrated in step 1). By moving the target to many different positions, enough images are obtained. In the process, the angle between the target plane and the cameras' image planes cannot be too small, to ensure that the vanishing line can be calculated exactly.
  • 3) According to the intrinsic parameters of the cameras, all images are rectified to compensate for cameras’ distortion. The feature points on the parallel lines are determined by Steger’s method [18]. The linear equations of parallel lines can be obtained from the extracted feature points using the least squares method.
  • 4) The vanishing line of the target plane is solved from the linear equations obtained in step 3) according to the method described in the Appendix. Combined with the intrinsic matrix of the camera, the normal direction of the target plane relative to the camera is determined. The rotation matrix R of BVS can be worked out from the normal vectors of the target plane under LCCF and RCCF. Since the parallel lines on the target plane are coplanar and spaced at distance D, the translation vector T is determined. Finally, an overall optimization ensures that the calibration result is globally optimal.

3.1. Related concepts and properties

The following are some concepts and properties connected with the proposed calibration method [10].

(Property 1) Points at infinity: In $\mathbb{P}^2$, consider two parallel lines $ax+by+c=0$ and $ax+by+c'=0$. They are represented by vectors $\mathbf{l}=(a,b,c)^T$ and $\mathbf{l}'=(a,b,c')^T$, for which the first two coordinates are the same. Computing the intersection of these lines gives no difficulty. The intersection is $\mathbf{x}=\mathbf{l}\times\mathbf{l}'=(c'-c)(b,-a,0)^T$, and ignoring the scale factor $(c'-c)$, this is the point $(b,-a,0)^T$. Points of this form are known as points at infinity. Now if we attempt to find the inhomogeneous representation of this point, we obtain $(b/0,-a/0)^T$, which makes no sense, except to suggest that the point of intersection has infinitely large coordinates. This observation agrees with the usual idea that parallel lines meet at infinity, and a set of parallel lines meet at the same point at infinity.
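This property can be checked numerically; a small sketch with two made-up parallel lines:

```python
import numpy as np

# Two parallel lines ax + by + c = 0 sharing (a, b) = (2, 3)
l1 = np.array([2.0, 3.0, 1.0])   # 2x + 3y + 1 = 0
l2 = np.array([2.0, 3.0, 5.0])   # 2x + 3y + 5 = 0

# Homogeneous intersection: the cross product of the line vectors
x = np.cross(l1, l2)
# Proportional to (b, -a, 0) = (3, -2, 0): gives [12., -8., 0.]
print(x)
```

The third coordinate is zero, i.e., the intersection is a point at infinity.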

(Property 2) Vanishing lines: The vanishing line is constructed, as illustrated in Fig. 3, by intersecting the image with a plane parallel to the scene plane through the camera center C. Clearly, a vanishing line depends only on the orientation of the scene plane, not on its position. Under the camera coordinate frame (CCF), the relation between the normal direction $\mathbf{n}$ of the scene plane and the vanishing line $\mathbf{l}$ is given by:

$$\mathbf{n}=A^{T}\mathbf{l},\tag{4}$$
where $A$ is the $3\times3$ intrinsic matrix of the camera.

Fig. 3 Vanishing line formation.

(Property 3) Back-projection plane: The set of points in space that map to a line in the image is a plane defined by the camera center and the image line, as shown in Fig. 4. This plane is known as the back-projection plane. Obviously, the 3-space line that projects to the image line lies on the back-projection plane. If a 3-space line is imaged by two cameras, we obtain two back-projection planes whose intersection is the 3-space line. Under WCF, the following formulations stand:

$$\pi_l=P_l^{T}\mathbf{l}_l,\qquad \pi_r=P_r^{T}\mathbf{l}_r,\tag{5}$$
$$L^{*}=\pi_l\pi_r^{T}-\pi_r\pi_l^{T},\tag{6}$$
where $P_l$ and $P_r$ are the projection matrices of the left and right cameras, and the 3-space line $L$ is represented as a dual Plücker matrix $L^{*}$.

Fig. 4 Back-projection of lines.

3.2. Acquisition of the rotation matrix

As illustrated in Fig. 5, $l_1,l_2,l_3,l_4$ and $r_1,r_2,r_3,r_4$ are the projections of lines $L_1,L_2,L_3,L_4$ on the target plane. In the target images, the central lines of the parallel lines are picked up by Steger's method [18], followed by line linking based on the edge connection algorithm proposed by Peter Kovesi [19]. The recognition and matching of the lines use their position information. The equations of $l_1$–$l_4$ and $r_1$–$r_4$, expressed in homogeneous coordinates as $3\times1$ vectors $l_i$ and $r_i$, are obtained by least-squares fitting of the picked-up points on the straight lines. According to the Appendix, the vanishing lines $\mathbf{l}_l$, $\mathbf{l}_r$ of the target plane under LCCF and RCCF can be determined from the lines $l_i$, $r_i$ ($i$ = 1, 2, 3, 4).

Fig. 5 The diagram of target images.

Since the intrinsic matrices $A_l$, $A_r$ of the left and right cameras are known, the normal vectors $\mathbf{n}_l$ and $\mathbf{n}_r$ of the target plane under LCCF and RCCF satisfy:

$$\mathbf{n}_l=\frac{A_l^{T}\mathbf{l}_l}{\left\|A_l^{T}\mathbf{l}_l\right\|},\qquad \mathbf{n}_r=\frac{A_r^{T}\mathbf{l}_r}{\left\|A_r^{T}\mathbf{l}_r\right\|}.\tag{7}$$

Since $\mathbf{n}_l$ and $\mathbf{n}_r$ are normals of the same target plane, merely expressed in different coordinate frames, the following equation stands:

$$\mathbf{n}_r=R\,\mathbf{n}_l.\tag{8}$$

If the target is placed n times, n equations similar to Eq. (8) stand. As an orthogonal matrix, the rotation matrix R has only three independent variables, so R is determined when $n\geq2$.
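The text does not spell out how the n constraints of Eq. (8) are combined into one rotation; one standard choice (an assumption here, not stated by the authors) is the SVD-based orthogonal Procrustes/Kabsch solution over the stacked normal pairs:

```python
import numpy as np

def rotation_from_normals(Nl, Nr):
    """Least-squares rotation R with n_r ~ R n_l (Eq. (8)).
    Nl, Nr: (n, 3) arrays of unit plane normals under LCCF and RCCF."""
    H = Nl.T @ Nr                       # 3x3 covariance of the normal pairs
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    return Vt.T @ D @ U.T
```

At least two non-parallel normals ($n\geq2$) are needed, matching the paper's counting argument.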

3.3. Acquisition of the translation vector

Let $\pi_{li}$ and $\pi_{ri}$ be the back-projection planes under WCF defined by the lines $l_i$ and $r_i$ on the image planes ($i$ = 1, 2, 3, 4). According to Eq. (5), the following results stand:

$$\pi_{li}=P_l^{T}l_i,\qquad \pi_{ri}=P_r^{T}r_i.\tag{9}$$

The 3-space straight line $L_i$ can be represented as a dual Plücker matrix $L_i^{*}$ and, according to Eq. (6), satisfies:

$$L_i^{*}=\pi_{li}\pi_{ri}^{T}-\pi_{ri}\pi_{li}^{T}=\begin{bmatrix}\left(A_l^{T}l_ir_i^{T}A_rR-R^{T}A_r^{T}r_il_i^{T}A_l\right)_{3\times3}&\left(A_l^{T}l_ir_i^{T}A_rT\right)_{3\times1}\\-\left(T^{T}A_r^{T}r_il_i^{T}A_l\right)_{1\times3}&0\end{bmatrix}.\tag{10}$$

Let $L_i^{*}$ be written as $L_i^{*}=\begin{bmatrix}A_i&B_iT\\-T^{T}B_i^{T}&0\end{bmatrix}$, where $A_i=A_l^{T}l_ir_i^{T}A_rR-R^{T}A_r^{T}r_il_i^{T}A_l$ and $B_i=A_l^{T}l_ir_i^{T}A_r$ ($i$ = 1, 2, 3, 4). It is clear that $A_i$ and $B_i$ are independent of $T$. As an antisymmetric matrix, $A_i$ satisfies $A_i=-A_i^{T}$ and corresponds to a $3\times1$ vector $\mathbf{a}_i$ such that $A_i=[\mathbf{a}_i]_\times$, where $\mathbf{a}_i=\begin{bmatrix}-a_{23}&a_{13}&-a_{12}\end{bmatrix}^{T}$ and $a_{mn}$ is the element of $A_i$ in row $m$ and column $n$.

Since $L_1,L_2,L_3,L_4$ are parallel to each other, they have a common point at infinity, defined under WCF as $V=[\bar{v}^{T}\;0]^{T}$ with $\|\bar{v}\|=1$. As this point at infinity also lies on lines $L_1,L_2,L_3,L_4$, the following equation stands:

$$L_i^{*}V=0.\tag{11}$$
It means $\begin{bmatrix}A_i&B_iT\\-T^{T}B_i^{T}&0\end{bmatrix}V=0\;\Rightarrow\;\begin{cases}A_i\bar{v}=0\\T^{T}B_i^{T}\bar{v}=0\end{cases}$ ($i$ = 1, 2, 3, 4).

The unit vector $\bar{v}$ is determined by solving the equation $\begin{bmatrix}A_1^{T}&A_2^{T}&A_3^{T}&A_4^{T}\end{bmatrix}^{T}\bar{v}=0$. Furthermore, $\bar{v}$ represents the orientation of lines $L_1,L_2,L_3,L_4$ under WCF.
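In practice this homogeneous system is solved by taking the right singular vector associated with the smallest singular value of the stacked matrix; a small sketch (the stacked constraint matrices below are synthetic):

```python
import numpy as np

def direction_from_constraints(A_stack):
    """Unit least-squares solution of the homogeneous system
    A_stack @ v = 0: the right singular vector of the smallest
    singular value."""
    _, _, Vt = np.linalg.svd(A_stack)
    v = Vt[-1]                      # numpy returns rows of V^T sorted by
    return v / np.linalg.norm(v)    # decreasing singular value
```

The solution is defined up to sign, which is resolved by the geometry (the line direction and its negation describe the same pencil).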

Let $S_i$ be the plane passing through the 3-space line $L_i$ and perpendicular to the target plane $\Pi_t$, where $S_i=[\mathbf{n}_s^{T}\;d_i]^{T}$, $\Pi_t=[\mathbf{n}_t^{T}\;k]^{T}$, and $\mathbf{n}_s=\bar{v}\times\mathbf{n}_t$, as shown in Fig. 6. Here, $\mathbf{n}_t$ is the unit normal vector of the target plane under WCF, already calculated in Sect. 3.2 when solving the rotation matrix R. The Plücker matrix representation $L_i$ can be obtained from the dual Plücker matrix $L_i^{*}$ by a simple rewrite rule, giving $L_i=\begin{bmatrix}[-B_iT]_\times&\mathbf{a}_i\\-\mathbf{a}_i^{T}&0\end{bmatrix}$ ($i$ = 1, 2, 3, 4). Since line $L_i$ lies on both plane $S_i$ and plane $\Pi_t$, it satisfies the following equation:

$$\begin{cases}L_i\Pi_t=0\\L_iS_i=0\end{cases}\tag{12}$$
From Eq. (12), we have

Fig. 6 The target plane $\Pi_t$ and the planes $S_1,S_2,S_3,S_4$.

$$\begin{cases}\mathbf{n}_s\times(B_iT)-d_i\mathbf{a}_i=0\\\mathbf{n}_t\times(B_iT)-k\,\mathbf{a}_i=0\end{cases}\tag{13}$$

As $L_1,L_2,L_3,L_4$ are parallel to each other and spaced at distance D, the relation $d_i=d_1-(i-1)D$ holds for $i$ = 1, 2, 3, 4. In Eq. (13), there are 5 unknown parameters ($T_{3\times1}$, $d_1$, $k$) and 4 constraints. Each additional placement of the target adds 4 constraints and 2 unknown parameters (the new $d_1$ and $k$). If the target is placed n times independently, there are $P=4n$ constraints and $Q=2n+3$ unknown parameters. The translation vector $T_{3\times1}$ is determined when $P\geq Q$ (i.e., $n\geq2$).

3.4. Optimization

Let $\pi=[\bar{v}^{T}\;a]^{T}$ represent a plane perpendicular to lines $L_1,L_2,L_3,L_4$, where the constant $a$ is taken as 0. The intersection between the 3-space line $L_i$ and the plane $\pi$ is defined as $X_i$, i.e., $X_i=L_i\pi$. As the parallel lines are spaced at distance D, the equation $\|x_{i+1}-x_i\|=D$ stands, where $x_i$ is the non-homogeneous coordinate of $X_i$ ($i$ = 1, 2, 3, 4, …).

Suppose the number of equally spaced coplanar parallel lines is m and the target is placed n times; then the following optimization function is established:

$$\min F(R,T)=\rho_1\sum_{i=1}^{n}\sum_{j=1}^{m-1}\left|D-d_j\!\left(x_{j+1}^{i},x_{j}^{i}\right)\right|+\rho_2\sum_{i=1}^{n}\left\|\mathbf{n}_r^{i}-R\,\mathbf{n}_l^{i}\right\|,\tag{14}$$
where $d_j(x_{j+1}^{i},x_{j}^{i})$ is the distance between $x_{j+1}^{i}$ and $x_{j}^{i}$; $\mathbf{n}_l^{i}$ and $\mathbf{n}_r^{i}$ are the normal vectors of the target plane under LCCF and RCCF; $\rho_1$, $\rho_2$ are weight factors.

Considering the principle of error distribution, $\rho_1$ is set to 0.1 and $\rho_2$ to 10. For a stable numerical solution, the orthogonal rotation matrix R is converted into a Rodrigues vector $\mathbf{r}=(r_x,r_y,r_z)^T$ [20], so the number of unknown parameters is six. Eq. (14) is solved by the Levenberg-Marquardt method. Owing to the quality of the initial values calculated in Sect. 3.2 and Sect. 3.3, the global optimum is reached within a few iterations, which both ensures the accuracy of the results and improves the computing speed.
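The Rodrigues parameterization used above maps a 3-vector $\mathbf{r}$ (unit axis times rotation angle) to a rotation matrix via the axis-angle formula; a minimal sketch of the vector-to-matrix direction:

```python
import numpy as np

def rodrigues_to_matrix(r):
    """Rodrigues (axis-angle) vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)                           # no rotation
    k = r / theta                                  # unit rotation axis
    K = np.array([[0., -k[2], k[1]],
                  [k[2], 0., -k[0]],
                  [-k[1], k[0], 0.]])              # cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```

Optimizing over the three components of $\mathbf{r}$ keeps the iterate a valid rotation at every Levenberg-Marquardt step, which is why the paper switches to this parameterization.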

4. Analysis and experiment

In this section, the factors on which the calibration accuracy depends are discussed by means of mathematical formulations, followed by computer simulations analyzing the impact of these factors on the calibration accuracy. Finally, real-data experiments, including contrast experiments and an accuracy evaluation, are presented.

4.1 Accuracy analysis

Since the method uses vanishing lines to complete the calibration, the accuracy of the vanishing lines directly determines the accuracy of the calibration. Here, the factors that affect the calculation of the vanishing line are discussed. As described in the Appendix, the vanishing line is determined by the images of several equally spaced parallel lines. Let $l_0=(a_0,1,b_0)^T$ stand for the first line and $l_n=(a_n,1,b_n)^T$ for the (n + 1)-th line, where $a_0$ and $a_n$ are the slopes, and $b_0$ and $b_n$ are the intercepts of the lines. Then the vanishing line $\mathbf{l}=(\alpha,1,\beta)^T$ and $l_0$, $l_n$ are related by:

$$(b_0-b_n)\alpha-(a_0-a_n)\beta=a_nb_0-a_0b_n.\tag{15}$$
According to Eq. (15), each partial derivative is:

$$\frac{\partial\alpha}{\partial a_0}=\frac{\beta-b_n}{b_0-b_n},\quad \frac{\partial\alpha}{\partial b_0}=\frac{a_n-\alpha}{b_0-b_n},\quad \frac{\partial\alpha}{\partial a_n}=\frac{b_0-\beta}{b_0-b_n},\quad \frac{\partial\alpha}{\partial b_n}=\frac{\alpha-a_0}{b_0-b_n},\tag{16}$$
$$\frac{\partial\beta}{\partial a_0}=\frac{b_n-\beta}{a_0-a_n},\quad \frac{\partial\beta}{\partial b_0}=\frac{\alpha-a_n}{a_0-a_n},\quad \frac{\partial\beta}{\partial a_n}=\frac{\beta-b_0}{a_0-a_n},\quad \frac{\partial\beta}{\partial b_n}=\frac{a_0-\alpha}{a_0-a_n}.\tag{17}$$

Let Δa0, Δb0, Δan, Δbn be the errors caused by noise, and Δα, Δβ be variations of the vanishing line. The error propagations for α and β are given by [21]:

$$\Delta\alpha=\frac{\beta-b_n}{b_0-b_n}\Delta a_0+\frac{a_n-\alpha}{b_0-b_n}\Delta b_0+\frac{b_0-\beta}{b_0-b_n}\Delta a_n+\frac{\alpha-a_0}{b_0-b_n}\Delta b_n,\tag{18}$$
$$\Delta\beta=\frac{b_n-\beta}{a_0-a_n}\Delta a_0+\frac{\alpha-a_n}{a_0-a_n}\Delta b_0+\frac{\beta-b_0}{a_0-a_n}\Delta a_n+\frac{a_0-\alpha}{a_0-a_n}\Delta b_n.\tag{19}$$

From Eqs. (18) and (19), it can be seen that the variations of the vanishing line are inversely proportional to $|b_0-b_n|$ and $|a_0-a_n|$: the larger $|b_0-b_n|$ and $|a_0-a_n|$, the smaller the variations of the vanishing line. In the following, two factors that affect the slope difference and the intercept difference of the lines in the image are discussed: the angle θ between the target plane and the image plane, and the distance D between the parallel lines.

Let M be an arbitrary point with homogeneous coordinates $(x_w,y_w,z_w,1)^T$ under WCF, and let $P=(u_p,v_p,1)^T$ be the homogeneous coordinates of its projection on the image plane. The relationship between the 3-space point M and its image projection P is given by:

$$s\begin{bmatrix}u_p\\v_p\\1\end{bmatrix}=A\,[\,R_{wc}\quad T_{wc}\,]\begin{bmatrix}x_w\\y_w\\z_w\\1\end{bmatrix},\tag{20}$$

where s is a scale factor, A is the intrinsic matrix of the camera, and $R_{wc}$, $T_{wc}$ are the rotation matrix and translation vector between CCF and WCF.

Without loss of generality, we assume the target plane lies on $z_w=0$ of WCF. Denoting the i-th column of the rotation matrix $R_{wc}$ by $\mathbf{r}_{wci}$, Eq. (20) becomes:

$$s\,\mathbf{p}=H\tilde{M},\tag{21}$$

where $\tilde{M}=(x_w,y_w,1)^T$ and $H=A\,[\,\mathbf{r}_{wc1}\quad\mathbf{r}_{wc2}\quad T_{wc}\,]$.

For simplicity, the relationship between CCF and WCF is illustrated in Fig. 7. The camera coordinate frame is translated by $t_z$ along $z_c$ and rotated around $y_c$ by the angle θ in a clockwise direction, yielding the WCF. As the target plane lies on $z_w=0$ of the world coordinate frame, θ is also the angle between the target plane and the image plane.

Fig. 7 The position of the planar target.

$$H=A\,[\,\mathbf{r}_{wc1}\quad\mathbf{r}_{wc2}\quad T_{wc}\,]=\begin{bmatrix}f_x&0&u_0\\0&f_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}\cos\theta&0&0\\0&1&0\\\sin\theta&0&t_z\end{bmatrix},\tag{22}$$
$$H^{-T}=\begin{bmatrix}\dfrac{1}{f_x\cos\theta}&0&-\dfrac{\sin\theta}{t_zf_x\cos\theta}\\[1mm]0&\dfrac{1}{f_y}&0\\[1mm]-\dfrac{u_0}{f_x\cos\theta}&-\dfrac{v_0}{f_y}&\dfrac{u_0\sin\theta}{t_zf_x\cos\theta}+\dfrac{1}{t_z}\end{bmatrix}.\tag{23}$$

The projective transformation of points is given by Eq. (21), so the projective transformation of lines is:

$$\mathbf{l}=H^{-T}\mathbf{l}'.\tag{24}$$

Let $l_0'=(a,b,0)^T$ be the first line on the target plane and $l_n'=(a,b,nD)^T$ the (n + 1)-th line, where D is the distance between the parallel lines. Their homogeneous coordinates in the image plane are:

$$l_0=H^{-T}l_0'=\frac{b}{f_y}\begin{bmatrix}\dfrac{af_y}{bf_x\cos\theta}\\[1mm]1\\[1mm]-\dfrac{au_0f_y}{bf_x\cos\theta}-v_0\end{bmatrix},\tag{25}$$
$$l_n=H^{-T}l_n'=\frac{b}{f_y}\begin{bmatrix}\dfrac{af_y}{bf_x\cos\theta}-\dfrac{nDf_y\sin\theta}{bt_zf_x\cos\theta}\\[1mm]1\\[1mm]\dfrac{nD}{bt_z}\left(\dfrac{u_0f_y\sin\theta}{f_x\cos\theta}+f_y\right)-\dfrac{au_0f_y}{bf_x\cos\theta}-v_0\end{bmatrix}.\tag{26}$$
Then, we can calculate that:

$$b_0-b_n=-\frac{nD}{bt_z}\left(\frac{u_0f_y}{f_x}\tan\theta+f_y\right),\quad(0^\circ<\theta<90^\circ),\tag{27}$$
$$a_0-a_n=\frac{nDf_y}{bt_zf_x}\tan\theta,\quad(0^\circ<\theta<90^\circ).\tag{28}$$

From Eqs. (27) and (28), it is obvious that $|b_0-b_n|$ and $|a_0-a_n|$ depend not only on the angle between the target plane and the image plane but also on the distance between the parallel lines. Furthermore, $|b_0-b_n|$ and $|a_0-a_n|$ grow with both θ and D.

Based on Eqs. (18), (19), (27) and (28), we know the factors affecting the localization of the vanishing line. The calibration accuracy can be improved by enlarging the spacing of the parallel lines or by increasing the angle between the target plane and the image plane.

4.2. Computer simulation

As discussed in the last subsection, the calibration accuracy of BVS can be affected by various factors. In this subsection, the calibration accuracy w.r.t. four main factors is further analyzed through computer simulations: 1) the noise level σ of the images; 2) the number N of parallel lines; 3) the distance D between the parallel lines on the target; 4) the angle θ between the target plane and the image plane.

The intrinsic parameters of the simulated cameras are shown in Table 1. The distortions of the cameras are ignored. The rotation vector $R=(r_x,r_y,r_z)^T$ and translation vector $T=(t_1,t_2,t_3)^T$ between the coordinate frames of the two cameras are fixed as $R=(0.035,0.800,0.097)^T$ and $T=(350.0,20.0,150.0)^T$, respectively. Here, the rotation vector R is expressed as a $3\times1$ Rodrigues vector. The working distance of the cameras is 700mm with a field of view of 300×300mm. A planar target with a set of equally spaced parallel lines is simulated as the model plane. Each line on the target is emulated by 100 points, and the target is placed 9 times randomly in front of the virtual cameras. The calibration accuracy is expressed by the simulation errors of R and T, both in the form of $3\times1$ vectors. The 2-norm of the difference between the true vectors and the simulated results gives the absolute error; divided by the true values, it yields the relative standard deviation, which is defined as the calibration accuracy.


Table 1. Intrinsic parameters of cameras

4.2.1 Performance w.r.t. the noise level, the number and distance of parallel lines

Gaussian noise with 0 mean and standard deviation σ (0.05–0.6) is added to the image points. For each number of parallel lines N (3, 5, 7) and each distance D (15mm, 20mm, 25mm, 30mm), 100 independent trials are performed to compute the relative standard deviation of R and T. We take the average value as the final result.

From Figs. 8, 9 and 10, it can be seen that the calibration errors of R and T increase as the noise level rises and decrease as the spacing of the parallel lines grows. As illustrated in Fig. 11, the calibration performance is also ameliorated by increasing the number of parallel lines. When σ=0.2 and D=25mm, the calibration accuracies of R and T are 1.8‰ and 0.5‰.

Fig. 8 Relative standard deviation vs. the noise level when N=3 and D = 15mm, 20mm, 25mm, 30mm.

Fig. 9 Relative standard deviation vs. the noise level when N=5 and D = 15mm, 20mm, 25mm, 30mm.

Fig. 10 Relative standard deviation vs. the noise level when N=7 and D = 15mm, 20mm, 25mm, 30mm.

Fig. 11 Relative standard deviation vs. the space of parallel lines when σ=0.2 and N = 3, 5, 7.

4.2.2 Performance w.r.t. angle between the target plane and the image plane

Gaussian noise with 0 mean and 0.2 standard deviation is added to the image points. The number of parallel lines is 7 and the spacing is 16.5mm. The angle θ varies from 0° to 35°. For each value of θ, 100 trials are performed. We take the average as the final result.

From Fig. 12, it can be seen that the calibration error of R first decreases and then increases as the angle θ grows, taking its smallest value at θ=25°. The calibration error of T decreases monotonically as θ increases. Considering the calibration errors of both R and T, the most suitable angle θ is 30°, at which the calibration accuracies of R and T are 1.5‰ and 0.8‰.

Fig. 12 Relative standard deviation vs. the angle of the target plane w.r.t. the image plane.

4.3 Real data

For the experiment with real data, the binocular vision system is composed of two AVT-F504B cameras with a resolution of 1600×1200 pixels. The working distance of the cameras is about 700mm–900mm and the field of view is 300×300mm. The physical system is shown in Fig. 13.

Fig. 13 The physical system.

4.3.1 Intrinsic parameter calibration

The intrinsic parameters of the left and right cameras are calibrated by the Bouguet toolbox [11], which is based on Zhang's calibration method [17]. The planar target we use is a chessboard with 10×10 evenly distributed corner points. The distance between adjacent points is 10mm in the horizontal and vertical directions, with an accuracy of 5μm. In the experiment, the image pairs used for calibration are taken from 13 different orientations by the left and right cameras simultaneously. One pair is shown in Fig. 14. The calibration results of the cameras' intrinsic parameters are shown in Table 2.

Fig. 14 A sample of image pairs used for calibration: (a) left image; (b) right image.


Table 2. Calibration results of cameras’ intrinsic parameters

4.3.2 Structure parameter calibration

A planar target comprising 10 equally spaced parallel lines with a spacing of 10mm and an accuracy of 0.02mm is used in our proposed calibration method of BVS (see Fig. 13). The target is placed 13 times independently. The angle between the normal direction of the target plane and the optical axes of the cameras ranges from 15° to 45°. Both the left and right cameras capture 13 images of the target. One pair of target images and the extracted lines are illustrated in Fig. 15.

Fig. 15 (a) Target images; (b) Extracted lines.

As a comparison, the method using functions of the Bouguet toolbox [11] and the known-length-based method [4] are also carried out to obtain the structure parameters. In the Bouguet toolbox method, the image pairs of the chessboard target are captured from more than 3 different orientations. The initial intrinsic parameters and the relationship between the camera coordinate system and the target coordinate system are obtained first. The initial structure parameters are acquired by coordinate transformation through the intermediate target coordinate system. An overall optimization of the intrinsic and structure parameters based on the reprojection errors of feature points is the final procedure. Obviously, the intrinsic parameters and structure parameters of BVS cannot be calibrated separately in this scheme. Hence, the structure parameters obtained by the Bouguet toolbox are calibrated together with the intrinsic parameters using the pairs of images shown in Fig. 14. In the method based on known length, the structure parameters are calibrated with the same pairs of images as the Bouguet toolbox for uniformity. The known length we choose on the chessboard target is shown in Fig. 16.

Fig. 16 Known length for calibrating structure parameters.

Table 3 shows the comparative results of the structure parameters obtained by the different techniques. It is clear that the calibration technique influences these parameters even though all the methods use the same intrinsic parameters.


Table 3. Comparative result of the structure parameters

4.3.3 Accuracy evaluation

For the accuracy evaluation of the calibration results, another 3 pairs of images of the chessboard target are captured. Each distance between adjacent corners on the chessboard in the horizontal and vertical directions is measured by the calibrated BVS. The average value and root-mean-square (RMS) error are calculated to evaluate the calibration accuracy. To further investigate the validity of our method, the target is also measured using the other two groups of calibration results, as shown in Table 4. To give a concrete display of our proposed calibration method, all the reconstructed 3D points are shown in Fig. 17.


Table 4. Measurement results of chessboard corners

Fig. 17 The reconstruction of all feature points of testing data using our proposed calibration method.

As shown in Table 4, the RMS error of our method is 0.041mm with a field of view of 300×300mm. Compared with the other two methods, our method is clearly better than the method based on known length but inferior to the Bouguet toolbox; the average RMS error difference between our method and the Bouguet toolbox is 0.013mm. Although both our method and the Bouguet toolbox use the same intrinsic parameters, the targets used when calibrating the structure parameters are different: the parallel-lines target used in our method has an accuracy of 0.02mm, while the chessboard target used with the Bouguet toolbox has an accuracy of 5μm. Taking this difference into account, it is reasonable to expect that our method would reach the same accuracy level as the Bouguet toolbox if the parallel-lines target also had an accuracy of 5μm. Analysis of the standard error shows that our method and the Bouguet toolbox are the more stable ones. Meanwhile, parallel lines are much more ubiquitous than chessboards in real life, such as airport runways, guardrails, zebra crossings and so on. Our method therefore has a great advantage for self-calibration and onsite calibration.

5. Conclusions

In this article, a calibration method of BVS based on the vanishing features of parallel lines is proposed. A planar target with a set of parallel lines spaced at distance D is placed at least twice in the calibration process. The calibration model of BVS proposed in this method achieves the separation of the rotation matrix R and the translation vector T, which reduces the complexity and difficulty of the calculation. The objective optimization function is based on the absolute distance between the parallel lines and the consistency of the target plane's normal vector under LCCF and RCCF. The initial value is determined by the linear method and is followed by nonlinear optimization in the 3D measurement space, which ensures the global optimum. Regarding the factors that affect calibration accuracy, theoretical derivation gives a qualitative analysis and computer simulations give quantitative results. Specific measures to decrease the calibration error caused by these factors are also proposed. Real data show that the measurement accuracy is about 0.040mm with a field of view of 300×300mm. Compared with conventional calibration methods, our method is simple, effective and precise, especially for onsite calibration and self-calibration. However, we observe that the algorithm may fail in some cases. For example, when the target plane is nearly parallel to the image plane, the vanishing line approaches infinity, leading to great calculation error. If the angle between the target plane and the image plane is too large, or the image noise is too strong, the extraction accuracy of the parallel lines cannot be guaranteed. Meanwhile, both a sufficient number and an adequate spacing of the parallel lines are necessary, or the calibration result may be inaccurate. In practice, it is easy to avoid the degenerate configurations described above.
Finally, owing to the properties of lines, the proposed method is theoretically suitable for the calibration of cameras without a common field of view. We plan to incorporate the model and algorithms into the global calibration of multi-sensor vision systems.

Appendix

Determining the vanishing line given the image of a set of coplanar, equally spaced parallel lines.

A set of equally spaced lines on the scene plane may be represented as ax + by + λ = 0, where λ takes integer values. This set (a pencil) of lines may be written as l_n' = (a, b, n)^T = (a, b, 0)^T + n(0, 0, 1)^T, where (0, 0, 1)^T is the line at infinity on the scene plane. Under perspective imaging the point transformation is x' = Hx, and the corresponding line map is l_n = H^{-T} l_n' = l_0 + n l, where l, the image of (0, 0, 1)^T, is the vanishing line of the plane. The imaged geometry is illustrated in Fig. 18.

Fig. 18 Determining a plane’s vanishing line from imaged equally spaced parallel lines.
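The pencil structure above follows directly from the linearity of the line map. As a quick numerical check (a minimal sketch; the homography H and line coefficients a, b are arbitrary illustrative values, not taken from the paper), every imaged line equals l_0 plus n times the vanishing line:

```python
import numpy as np

# Hypothetical scene-to-image homography H (illustrative values only).
H = np.array([[800.0,   5.0, 320.0],
              [  3.0, 820.0, 240.0],
              [ 1e-3,  2e-3,   1.0]])
Hinv_T = np.linalg.inv(H).T          # lines map contravariantly: l = H^{-T} l'

a, b = 1.0, -0.5                     # arbitrary direction of the parallel pencil

def scene_line(n):
    # n-th line of the pencil on the scene plane: l_n' = (a, b, n)^T
    return np.array([a, b, float(n)])

l0 = Hinv_T @ scene_line(0)
l = Hinv_T @ np.array([0.0, 0.0, 1.0])   # image of the line at infinity: the vanishing line
for n in range(1, 5):
    # each imaged line is l_0 plus n times the vanishing line
    assert np.allclose(Hinv_T @ scene_line(n), l0 + n * l)
print("pencil structure verified")
```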

The vanishing line l may be determined from the imaged set provided the index (n) of each line is identified. The closed-form solution for the vanishing line l is derived as follows.

Write $l_n = (a_n, 1, b_n)^T$, $l_0 = (a_0, 1, b_0)^T$ and $l = (\alpha, 1, \beta)^T$ (here, the case of lines parallel to the y-axis of the image plane is ignored); then the following equation holds:

$$ l_n = l_0 + n l \;\Rightarrow\; \rho_n \begin{pmatrix} a_n \\ 1 \\ b_n \end{pmatrix} = \rho_0 \begin{pmatrix} a_0 \\ 1 \\ b_0 \end{pmatrix} + n \begin{pmatrix} \alpha \\ 1 \\ \beta \end{pmatrix}, \tag{29} $$
where $\rho_n$ and $\rho_0$ are scale factors. From Eq. (29), we have
$$ \begin{bmatrix} n & 0 & a_0 - a_n \\ 0 & n & b_0 - b_n \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \rho_0 \end{bmatrix} = n \begin{bmatrix} a_n \\ b_n \end{bmatrix}. \tag{30} $$
Stacking Eq. (30) over all lines then gives:
$$ \begin{bmatrix} 1 & 0 & a_0 - a_1 \\ 0 & 1 & b_0 - b_1 \\ 2 & 0 & a_0 - a_2 \\ 0 & 2 & b_0 - b_2 \\ \vdots & \vdots & \vdots \\ n & 0 & a_0 - a_n \\ 0 & n & b_0 - b_n \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \rho_0 \end{bmatrix} = \begin{bmatrix} a_1 \\ b_1 \\ 2 a_2 \\ 2 b_2 \\ \vdots \\ n a_n \\ n b_n \end{bmatrix}. \tag{31} $$
The vanishing line can be obtained by solving Eq. (31) with the linear least-squares method.
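The least-squares solution of Eq. (31) can be sketched as follows (a minimal illustration, assuming each imaged line has already been normalized to the form $(a_n, 1, b_n)^T$ and indexed by n; the numeric values are synthetic, not from the paper):

```python
import numpy as np

def vanishing_line(lines):
    """Estimate (alpha, beta, rho0) from imaged equally spaced parallel lines.

    lines[n] = (a_n, b_n): coefficients of the n-th imaged line normalized
    as (a_n, 1, b_n)^T, with lines[0] the reference line l_0.
    Stacks the two equations of Eq. (30) for n = 1..N and solves the
    resulting system, Eq. (31), by linear least squares.
    """
    a0, b0 = lines[0]
    rows, rhs = [], []
    for n, (an, bn) in enumerate(lines[1:], start=1):
        rows.append([n, 0.0, a0 - an]); rhs.append(n * an)   # n*alpha + rho0*(a0 - an) = n*an
        rows.append([0.0, n, b0 - bn]); rhs.append(n * bn)   # n*beta  + rho0*(b0 - bn) = n*bn
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    alpha, beta, rho0 = sol
    return alpha, beta, rho0

# Synthetic check: generate lines from known alpha, beta, rho0 and recover them.
alpha_t, beta_t, rho0_t = 0.01, -0.002, 2.0
a0_t, b0_t = 0.5, -120.0
lines = []
for n in range(5):
    rho_n = rho0_t + n          # middle components of Eq. (29) give rho_n = rho_0 + n
    lines.append(((rho0_t * a0_t + n * alpha_t) / rho_n,
                  (rho0_t * b0_t + n * beta_t) / rho_n))
alpha, beta, rho0 = vanishing_line(lines)
```

With noise-free synthetic data the system is consistent, so the least-squares solution recovers the generating parameters; with real extracted lines the overdetermined system averages out the extraction noise.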

Acknowledgments

This work is supported by the Instrument Special National Natural Science Foundation of China (No. 61127009), the Natural Science Foundation of Beijing (No. 3142012), and the National Key Scientific Instrument and Equipment Development Project (No. 2012YQ140032). We appreciate the editor's and reviewers' valuable comments on our manuscript.

References and links

1. G. Zhang, Visual Measurement, 1st ed. (Beijing Sciences, 2008).

2. S. Ma and Z. Zhang, Computer Vision: Theory and Algorithms, 1st ed. (Beijing Sciences, 1998).

3. F. Zhou, J. Zhu, and X. Yang, “A field calibration technique for binocular vision sensor,” Yiqi Yibiao Xuebao 21(2), 142–145 (2000).

4. G. Zhang and F. Zhou, “The calibration method of stereo visual sensor structural parameters based on standard length,” in Proceedings of CSAA on Aviation Industry Measurement and Control Technology, ed. (Academic, 2001), pp. 259–263.

5. J. Sun, Z. Wu, Q. Liu, and G. Zhang, “Field calibration of stereo vision sensor with large FOV,” Opt. Precision Eng. 17(3), 633–640 (2009).

6. F. Zhou, G. Zhang, and Z. Wei, “Calibrating binocular vision sensor with one-dimensional target of unknown motion,” Chin. J. Mech. Eng. 42(6), 92–96 (2006).

7. R. Hartley, “Estimation of relative camera positions for uncalibrated cameras,” in Proceedings of European Conference on Computer Vision (Academic, 1992), pp. 579–587. [CrossRef]

8. M. Li, A. Zhang, and S. Hu, “On 3D measuring system of sheet metal surface based on computer vision,” Chin. Mech. Eng. 13(14), 1177–1180 (2002).

9. Y. Ma and W. Liu, “A linear self-calibration algorithm based on binocular active vision,” Robot 26(6), 486–490 (2004).

10. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003).

11. J. Bouguet, “Camera calibration toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

12. Z. Wei, C. Li, and B. Ding, “Line structured light vision sensor calibration using parallel straight lines features,” Optik (Stuttg.) 125(17), 4990–4997 (2014). [CrossRef]  

13. Z. Wei, M. Shao, G. Zhang, and Y. Wang, “Parallel-based calibration method for line-structured light vision sensor,” Opt. Eng. 53(3), 033101 (2014). [CrossRef]  

14. Z. Wei, M. Xie, and G. Zhang, “Calibration method for line structured light vision sensor based on vanish points and lines,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 794–797. [CrossRef]

15. J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in Numerical Analysis (Springer Berlin Heidelberg, 1978), pp. 105–116.

16. M. Lourakis, “Levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++,” (Ics.forth, 2004), http://www.ics.forth.gr/~lourakis/levmar.

17. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

18. C. Steger, “Unbiased extraction of curvilinear structures from 2D and 3D image,” Ph.D. Dissertation, Technische Universitaet Muenchen (1998).

19. P. D. Kovesi, “MATLAB and Octave functions for computer vision and image processing,” http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/#match.

20. O. Rodrigues, “Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace: et de la variation des cordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire,” (Publisher not identified, 1840).

21. Y. Fei, Error Theory and Data Processing, 1st ed. (China Machine, 2010).




