Optica Publishing Group

Angular distance constraints calibration for outdoor zoom camera

Open Access

Abstract

Based on the 2-D protractor property of a camera, we propose a flexible calibration method for zoom cameras used outdoors. It requires the camera to observe control points only once for a given zoom setting, provided several control points at infinity with known angular distances are available. Under the constraints relating the image points, the angular distances between their back-projected rays, and the image of the absolute conic (IAC), nonlinear optimization is used to solve for the parameters of the IAC. The IAC can then be uniquely decomposed by the Cholesky factorization, and consequently the intrinsic parameters can be obtained. The factors that affect calibration accuracy are examined by theoretical analysis and computer simulation, yielding qualitative and quantitative results respectively. To address the inaccuracy of the principal point, the zooming center is selected to improve the calibration accuracy. Real data demonstrate the effectiveness of the technique.

© 2016 Optical Society of America

1. Introduction

Camera calibration is a necessary and fundamental step in computer vision. The internal parameters of a camera, such as the focal length and principal point, can be determined by calibration. Classical calibration is performed by observing a calibration object whose geometry in 3-D space is known with very good precision [1–5]. Zhang [6] proposed a calibration method using a planar pattern: take a few images of the model plane under different orientations, detect the feature points in the images, and then refine all parameters by nonlinear optimization. Wei et al. [7] proposed a 1D-target-based calibration method. These approaches require that the targets be placed at working distance and that their images occupy a significant proportion of the view.

There have been many works on camera self-calibration using constraints on the IAC, pure rotation or translation, vanishing points of parallel lines, etc. Self-calibration requires no precise reference objects and is widely used in 3D modeling and reconstruction, monitoring of extremely hazardous situations, automatic traffic monitoring, urban security surveillance, and so on [8–13].

When a zoom camera is mounted precisely on a two-axis turntable, it can be used for tracking, position measurement, and pose estimation of flying targets based on monocular vision technology [14–16]. The camera must then be calibrated. Since the system is installed outdoors and focused at long range (the flying target is far from the system), the field of view at working distance is very large, and classical methods are unfeasible for on-site calibration.

To a certain extent, the system is similar to a pan-tilt-zoom (PTZ) camera, and there are studies on this issue. Sinha and Pollefeys [17] proposed a calibration method for PTZ cameras against which we compare our algorithm in the simulations below. The camera is first calibrated at the lowest zoom level, and the intrinsic parameters are then computed for an increasing zoom sequence. Many small steps may be required to mitigate noise, which makes the calibration time-consuming.

Regarding calibration of zoom-lens cameras, Willson [18] pointed out that there are several species of principal point, such as the center of perspective projection, the center of expansion for zoom, and the center of sensor coordinates; they coincide in the ideal situation. Manufacturing tolerances can cause the image center to move as the parameters are changed; however, this motion is usually regular and repeatable and can be modeled and compensated. Wu and Radke [19] observed that the principal point is stable with respect to zoom scale and also consistent with the zooming center. Generally, it is reasonable to select the zooming center as the principal point [20,21]. Determining the focal length is therefore the foremost task in zoom camera calibration.

The paper is organized as follows: Section 2 describes the constraints relating the IAC, the image points, and the angular distances between their back-projected rays [22], and provides the calibration procedure: nonlinear optimization followed by Cholesky factorization. Section 3 analyzes the impact of input errors on accuracy. Section 4 provides the experimental results; both computer simulation and real data are used to validate the proposed technique. Section 5 discusses an additional constraint and advice for improving the optimization. In the Appendix, we provide a typical derivation from the accuracy analysis.

2. Calibration algorithm

When the coordinates of image points and the angular distances between their corresponding back-projected rays are known, the intrinsic parameters of a camera can be calibrated from a single view. The main procedure of this algorithm is as follows:

  • 1) Select control points, place the camera at the observation point and take an image of the selected control points, then extract the coordinates of the image points corresponding to these control points.
  • 2) Construct a system of equations relating the image point coordinates, the angles between the control points' back-projected rays, and the IAC.
  • 3) Estimate an initial guess and refine the parameters of the IAC by nonlinear optimization.
  • 4) Decompose the IAC into the inverse of the intrinsic matrix by the Cholesky factorization; the intrinsic parameters can then be solved.

2.1. Principle

The calibration model is illustrated in Fig. 1. The camera coordinate frame (CCF) is defined as O-xyz; its origin O coincides with the observation point OS. C1 and C2 are control points; the angular distance θ between OSC1 and OSC2 is known or can be measured. I1, I2 are the images of C1, C2, and their back-projected rays are d1, d2 respectively; the angular distance between d1 and d2 is also θ. The coordinates of I1, I2 change when the camera rotates around O or its zoom setting is changed, but θ is constant. If C1, C2 are control points at infinity, θ is approximately constant provided the distance between OS and O is limited.

Fig. 1 Calibration model.

Solid features can be adopted as control points: the pan and tilt angles of every control point can be measured by a theodolite fixed at the observation point, and the angular distances between them can then be computed. If stars are adopted as control points, the pan and tilt angles of every star can be determined from a precise ephemeris when the position of the observation point is known, and the angular distances between stars can likewise be determined.

If the skew factor is not considered, the intrinsic parameter matrix can be expressed as:

\[
K=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}.
\]

An image point I back-projects to a ray defined by I and the camera centre. K relates the image point to the ray's direction:

\[
d = K^{-1} I,
\]
where I = [u v 1]^T is the homogeneous coordinate vector of the image point, and d is the ray's direction (not a unit vector in general).
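As a concrete illustration, with zero skew the back-projection d = K⁻¹I has a simple closed form. The intrinsic values below are hypothetical, chosen only for this sketch:

```python
# Hypothetical intrinsics (fx, fy in pixels; (u0, v0) principal point).
fx, fy, u0, v0 = 4000.0, 4000.0, 800.0, 600.0

def back_project(u, v):
    """Direction d = K^{-1} I of the ray through image point (u, v).

    With zero skew, K^{-1} = [[1/fx, 0, -u0/fx], [0, 1/fy, -v0/fy], [0, 0, 1]],
    so the third component of d is always 1.
    """
    return ((u - u0) / fx, (v - v0) / fy, 1.0)

d = back_project(1000.0, 700.0)  # ray direction for a sample image point
```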

The angular distance between the rays with directions d1, d2, corresponding to image points I1, I2 respectively, can be obtained from the cosine formula for the angle between two vectors:

\[
\cos\theta=\frac{d_1^T d_2}{\sqrt{d_1^T d_1}\sqrt{d_2^T d_2}}
=\frac{(K^{-1}I_1)^T(K^{-1}I_2)}{\sqrt{(K^{-1}I_1)^T(K^{-1}I_1)}\sqrt{(K^{-1}I_2)^T(K^{-1}I_2)}}
=\frac{I_1^T(K^{-T}K^{-1})I_2}{\sqrt{I_1^T(K^{-T}K^{-1})I_1}\sqrt{I_2^T(K^{-T}K^{-1})I_2}}.
\]

The IAC is the conic ω = K^{-T}K^{-1} = (KK^T)^{-1}, which depends only on the internal parameters K; it does not depend on the camera orientation or position [22]. It follows from Eq. (3) that the angular distance between two rays is given by the simple expression

\[
\cos\theta=\frac{I_1^T\omega I_2}{\sqrt{I_1^T\omega I_1}\sqrt{I_2^T\omega I_2}}.
\]
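A minimal numerical sketch of Eq. (4): with zero skew, the quadratic form expands element-wise as I₁ᵀωI₂ = (u₁−u₀)(u₂−u₀)/fx² + (v₁−v₀)(v₂−v₀)/fy² + 1. The intrinsics below are hypothetical:

```python
import math

# Hypothetical intrinsics for the sketch.
fx, fy, u0, v0 = 4000.0, 4000.0, 800.0, 600.0

def omega_form(I1, I2):
    """I1^T ω I2 with ω = K^{-T} K^{-1} and zero skew, expanded element-wise."""
    (u1, v1), (u2, v2) = I1, I2
    return ((u1 - u0) * (u2 - u0) / fx**2
            + (v1 - v0) * (v2 - v0) / fy**2 + 1.0)

def cos_theta(I1, I2):
    """Eq. (4): cosθ = I1^T ω I2 / sqrt((I1^T ω I1)(I2^T ω I2))."""
    return omega_form(I1, I2) / math.sqrt(omega_form(I1, I1) * omega_form(I2, I2))
```

The same angle computed directly from the back-projected rays d = K⁻¹I agrees with this expression, which is exactly the identity behind Eq. (3).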

2.2. Calibration procedure

From the definition of ω and Eq. (1), we know that ω is a positive-definite symmetric matrix

\[
\omega=\begin{bmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & g \end{bmatrix}.
\]

Let W = [a b c d e g]^T. Then Eq. (4), written for the ith and jth control points with corresponding angular distance θij, can be expressed as

\[
\cos\theta_{ij}=\frac{\begin{bmatrix} u_iu_j & \frac{u_iv_j+u_jv_i}{2} & v_iv_j & \frac{u_i+u_j}{2} & \frac{v_i+v_j}{2} & 1 \end{bmatrix}W}
{\sqrt{\begin{bmatrix} u_i^2 & u_iv_i & v_i^2 & u_i & v_i & 1 \end{bmatrix}W}\;
 \sqrt{\begin{bmatrix} u_j^2 & u_jv_j & v_j^2 & u_j & v_j & 1 \end{bmatrix}W}}.
\]

Equation (6) is a quadratic equation in W with 6 unknowns, and n control points determine k = n × (n−1)/2 such equations, so W can be solved when k ≥ 6. The system of equations is overdetermined when n ≥ 4 and can be solved by Levenberg–Marquardt nonlinear least-squares optimization.
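The residual vector that such a least-squares solver minimizes can be sketched as follows. The choice of one cosine residual per point pair is an illustrative assumption (the paper does not prescribe a specific residual form), and the points are hypothetical:

```python
import math

def residuals(Wvec, points, cosines):
    """One residual per point pair: cosθ_ij − S1·W / sqrt((S2·W)(S3·W)), per Eq. (6).

    Wvec    -- (a, b, c, d, e, g), the 6 parameters of the IAC
    points  -- list of image points (u, v)
    cosines -- measured cos of the angular distances, in i<j pair order
    """
    a, b, c, d, e, g = Wvec
    res, k, n = [], 0, len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (ui, vi), (uj, vj) = points[i], points[j]
            s1 = a*ui*uj + b*(ui*vj + uj*vi)/2 + c*vi*vj + d*(ui+uj)/2 + e*(vi+vj)/2 + g
            s2 = a*ui*ui + b*ui*vi + c*vi*vi + d*ui + e*vi + g
            s3 = a*uj*uj + b*uj*vj + c*vj*vj + d*uj + e*vj + g
            res.append(cosines[k] - s1 / math.sqrt(s2 * s3))
            k += 1
    return res
```

A Levenberg–Marquardt routine drives these residuals toward zero; at the true W (built from the ground-truth intrinsics) they vanish up to rounding.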

After that, the symmetric positive-definite matrix ω can be uniquely decomposed by the Cholesky factorization into a product ω = K^{-T}K^{-1} of an upper-triangular matrix with positive diagonal entries and its transpose, where K^{-1} is the inverse of the matrix K [22,23].
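The decomposition step can be sketched with numpy, using hypothetical ground-truth intrinsics to fabricate ω. Note that `np.linalg.cholesky` returns the lower-triangular factor, so its transpose is the upper-triangular K⁻¹:

```python
import numpy as np

# Hypothetical ground-truth intrinsics, used only to fabricate ω for the sketch.
fx, fy, u0, v0 = 4000.0, 4200.0, 805.5, 600.3
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

K_inv = np.linalg.inv(K)
omega = K_inv.T @ K_inv          # ω = K^{-T} K^{-1}, symmetric positive-definite

# ω = U^T U with U upper-triangular and positive diagonal; np.linalg.cholesky
# returns the lower factor L with ω = L L^T, so U = L^T.
U = np.linalg.cholesky(omega).T
K_recovered = np.linalg.inv(U)   # U = K^{-1}, hence K = U^{-1}
K_recovered /= K_recovered[2, 2] # fix the scale so that K[2,2] = 1
```

Because the Cholesky factor with positive diagonal is unique and K⁻¹ is itself upper-triangular with positive diagonal, the recovered matrix matches K exactly (up to floating-point rounding).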

The initial guess can be obtained from Eqs. (1) and (5) using a rough estimate of the focal length (in pixels) and of u0, v0 (e.g., the center of the sensor plane).

3. Accuracy analysis

The inputs of this algorithm are the image point coordinates and the angular distance θ; the impact of their errors on the calibration results can be analyzed through the error transfer coefficients. The precision of the angular distance is affected by errors of the control points, collimation, angular measurement, etc.; their effects can therefore be unveiled by analyzing the error of the angular distance. From Eqs. (1) and (5) we have b = 0. For two control points with image coordinates (u1, v1) and (u2, v2), define:

\[
\begin{aligned}
W&=\begin{bmatrix} a & c & d & e & g \end{bmatrix}^T
=\begin{bmatrix} \dfrac{1}{f_x^2} & \dfrac{1}{f_y^2} & -\dfrac{2u_0}{f_x^2} & -\dfrac{2v_0}{f_y^2} & 1+\dfrac{u_0^2}{f_x^2}+\dfrac{v_0^2}{f_y^2} \end{bmatrix}^T,\\
S_1&=\begin{bmatrix} u_1u_2 & v_1v_2 & \dfrac{u_1+u_2}{2} & \dfrac{v_1+v_2}{2} & 1 \end{bmatrix},\quad
S_2=\begin{bmatrix} u_1^2 & v_1^2 & u_1 & v_1 & 1 \end{bmatrix},\quad
S_3=\begin{bmatrix} u_2^2 & v_2^2 & u_2 & v_2 & 1 \end{bmatrix}.
\end{aligned}
\]
Equation (6) then simplifies to

\[
F=(S_1W)^2 - S_2W\,S_3W\cos^2\theta = 0.
\]

The coordinates of the image points and the angular distance θ are obtained independently, so the partial derivatives of F are

\[
\begin{aligned}
\frac{\partial F}{\partial p}&=2\,S_1W\,S_1\frac{\partial W}{\partial p}
-\left(S_2\frac{\partial W}{\partial p}\,S_3W+S_2W\,S_3\frac{\partial W}{\partial p}\right)\cos^2\theta,
\quad p\in\{f_x,\,f_y,\,u_0,\,v_0\},\\
\frac{\partial F}{\partial q}&=2\,S_1W\,\frac{\partial S_1}{\partial q}W
-\left(\frac{\partial S_2}{\partial q}W\,S_3W+S_2W\,\frac{\partial S_3}{\partial q}W\right)\cos^2\theta,
\quad q\in\{u_1,\,u_2,\,v_1,\,v_2\},\\
\frac{\partial F}{\partial\theta}&=2\,S_2W\,S_3W\cos\theta\sin\theta,
\end{aligned}
\]
where the partial derivatives of W, S1, S2, S3 are

\[
\begin{aligned}
\frac{\partial W}{\partial f_x}&=\begin{bmatrix} -\dfrac{2}{f_x^3} & 0 & \dfrac{4u_0}{f_x^3} & 0 & -\dfrac{2u_0^2}{f_x^3} \end{bmatrix}^T,&
\frac{\partial W}{\partial f_y}&=\begin{bmatrix} 0 & -\dfrac{2}{f_y^3} & 0 & \dfrac{4v_0}{f_y^3} & -\dfrac{2v_0^2}{f_y^3} \end{bmatrix}^T,\\
\frac{\partial W}{\partial u_0}&=\begin{bmatrix} 0 & 0 & -\dfrac{2}{f_x^2} & 0 & \dfrac{2u_0}{f_x^2} \end{bmatrix}^T,&
\frac{\partial W}{\partial v_0}&=\begin{bmatrix} 0 & 0 & 0 & -\dfrac{2}{f_y^2} & \dfrac{2v_0}{f_y^2} \end{bmatrix}^T,\\
\frac{\partial S_1}{\partial u_1}&=\begin{bmatrix} u_2 & 0 & \tfrac12 & 0 & 0 \end{bmatrix},&
\frac{\partial S_1}{\partial u_2}&=\begin{bmatrix} u_1 & 0 & \tfrac12 & 0 & 0 \end{bmatrix},\\
\frac{\partial S_1}{\partial v_1}&=\begin{bmatrix} 0 & v_2 & 0 & \tfrac12 & 0 \end{bmatrix},&
\frac{\partial S_1}{\partial v_2}&=\begin{bmatrix} 0 & v_1 & 0 & \tfrac12 & 0 \end{bmatrix},\\
\frac{\partial S_2}{\partial u_1}&=\begin{bmatrix} 2u_1 & 0 & 1 & 0 & 0 \end{bmatrix},&
\frac{\partial S_2}{\partial v_1}&=\begin{bmatrix} 0 & 2v_1 & 0 & 1 & 0 \end{bmatrix},\\
\frac{\partial S_3}{\partial u_2}&=\begin{bmatrix} 2u_2 & 0 & 1 & 0 & 0 \end{bmatrix},&
\frac{\partial S_3}{\partial v_2}&=\begin{bmatrix} 0 & 2v_2 & 0 & 1 & 0 \end{bmatrix},
\end{aligned}
\]
and all remaining partial derivatives of S1, S2, S3 (including those with respect to θ) are zero.

Then the error of u0 can be computed [24] by

\[
\Delta u_0=\frac{\partial u_0}{\partial u_1}\Delta u_1+\frac{\partial u_0}{\partial u_2}\Delta u_2+\frac{\partial u_0}{\partial v_1}\Delta v_1+\frac{\partial u_0}{\partial v_2}\Delta v_2+\frac{\partial u_0}{\partial\theta}\Delta\theta
\approx\frac{\left(\dfrac{\partial F}{\partial u_1}+\dfrac{\partial F}{\partial u_2}+\dfrac{\partial F}{\partial v_1}+\dfrac{\partial F}{\partial v_2}\right)\Delta_{uv}+\dfrac{\partial F}{\partial\theta}\,\Delta\theta}{\dfrac{\partial F}{\partial u_0}}.
\]

We know that f·sinθ < s (s is the size of the CCD sensor plane) and f > s in general (f ≫ s for long-focal-length cameras), so θ is very small in radians. Assuming fx = fy = f and equal extraction errors of the image points, ∆u1 = ∆u2 = ∆v1 = ∆v2 = ∆uv, Eq. (9) can be simplified (the derivation is provided in the Appendix):

\[
\Delta u_0\approx\frac{\sin^2\theta\left[(u_2-u_0)+(u_1-u_0)+(v_2-v_0)+(v_1-v_0)\right]\Delta_{uv}+f^2\cos\theta\sin\theta\,\Delta\theta}{\sin^2\theta\,(u_1+u_2-2u_0)}
=\Delta_{uv}+\frac{v_1+v_2-2v_0}{u_1+u_2-2u_0}\,\Delta_{uv}+\frac{f^2\cos\theta}{\sin\theta\,(u_1+u_2-2u_0)}\,\Delta\theta.
\]

Similarly, the errors ∆v0, ∆fx and ∆fy are

\[
\Delta v_0\approx\Delta_{uv}+\frac{u_1+u_2-2u_0}{v_1+v_2-2v_0}\,\Delta_{uv}+\frac{f^2\cos\theta}{\sin\theta\,(v_1+v_2-2v_0)}\,\Delta\theta.
\]
\[
\Delta f_x\approx\frac{f^3\cos\theta\sin\theta}{(u_1-u_0)(u_2-u_0)-\cos^2\theta\left[(u_1-u_0)^2+(u_2-u_0)^2\right]}\,\Delta\theta
\approx\frac{f^3\cos\theta\sin\theta}{-(u_1-u_0)(u_2-u_0)-(u_1-u_2)^2}\,\Delta\theta.
\]
\[
\Delta f_y\approx\frac{f^3\cos\theta\sin\theta}{(v_1-v_0)(v_2-v_0)-\cos^2\theta\left[(v_1-v_0)^2+(v_2-v_0)^2\right]}\,\Delta\theta
\approx\frac{f^3\cos\theta\sin\theta}{-(v_1-v_0)(v_2-v_0)-(v_1-v_2)^2}\,\Delta\theta.
\]

Regarding the error of the principal point, ∆u0 and ∆v0, we learn from Eqs. (10) and (11) that the transfer coefficient of ∆uv is independent of the focal length and mainly comprises two parts: the error itself and the distribution of the image points. The latter is amplified significantly when u1 + u2 − 2u0 or v1 + v2 − 2v0 is small, especially near 0. Besides the distribution of the image points, the effect of the angular error depends mainly on f² and cotθ; it can be reduced by enlarging the angular distance (i.e., the distance between image points when f is fixed). The transfer coefficient of ∆θ grows markedly with increasing focal length and becomes the main factor affecting the precision of the principal point. Therefore, choosing the zooming center as the principal point is more reliable and precise [20,21].

We can conclude from Eqs. (12) and (13) that the precision of the focal length is mainly affected by the error of the angular measurement. The transfer coefficient depends on f, θ, and the distribution of the image points: 1) the farther the image points are from the principal point, the higher the precision of the focal length; 2) increasing f enlarges the error of the focal length quadratically, since f·sinθ may change little; 3) the calibrated focal length will be slightly shorter than the true value, especially for long-focal-length cameras, because Eqs. (12) and (13) are negative in general. Since ∆θ is very small in radians and the focal length is reported in mm, the error of the calibrated focal length in mm is smaller than that of the principal point. In any case, improving the precision of the angular parameters is preferred.

4. Experiments

4.1. Computer simulation

Besides the errors of the image points and the angular errors analyzed in Section 3, the calibration accuracy is also affected by the focal length and the number of control points. Assume the camera resolution is 1600 × 1200 pixels, the CCD size is 8.8 mm × 6.6 mm, ∆uv = 0.5 pixels, and ∆θ = 0.01°. Each simulation was repeated 150 times, and the average error and RMS error were observed as follows.

4.1.1. Effect of image points’ quantity

The number of equations is determined by the number of control points: the more constraints, the higher the calibration precision.

From Fig. 2 we conclude that the precision of the principal point and focal length improves as the number of control points rises. When the number exceeds 40, the RMS errors of the principal point and focal length are better than 5 pixels and 0.004 mm respectively. When there are fewer than 20 control points, the RMS error of the focal length is still less than 0.01 mm, but the effect on the principal point is more pronounced: its RMS error may exceed 10 pixels.

Fig. 2 Effect of quantity of control points when focal length is 25mm, (a) shows mean error and RMS error of focal length; (b) shows mean error and RMS error of principal point.

4.1.2. Effect of focal length

The field of view is determined by the focal length: the longer the focal length, the smaller the field of view. Assume the precisions of the angular distance and the image point coordinates are 0.01° and 0.5 pixels, with the focal length varying from 10 mm to 110 mm. The simulation results are shown in Fig. 3.

Fig. 3 Effect of focal length variation, (a) shows the mean error and RMS error of focal length, (b) shows the mean error and RMS error of principal point.

From Fig. 3, it can be seen that the precision of the focal length and principal point is ideal when the focal length is 10 mm. The errors rise markedly as the focal length increases, reaching RMS errors of 140 pixels for the principal point and 0.03 mm for the focal length; that is, increasing the focal length affects the principal point more strongly than the focal length itself. From (a), it can be seen that the mean error of the focal length tends negative as the focal length rises, which coincides with the analysis in Section 3.

4.1.3. Comparisons with alternate method

We call the alternate method [17] "discrete zoom calibration" (DZC). The comparison is carried out by simulation over varying focal length. The DZC method uses 20 simulated images at different settings, while the proposed method needs one shot; other settings are identical for both methods. The simulations are repeated 200 times at every focal length setting, and the RMS errors of the calibration results are illustrated in Fig. 4.

Fig. 4 Simulated calibration results of two methods. The error in each parameter is plotted as a function of zoom setting. (a) The error in the focal length. (b) The error in the principal point.

It can be seen from Fig. 4 that the errors of the calibration results rise with focal length, and the precision of the proposed method is better than that of the DZC method, especially for the focal length. The focal length obtained by the proposed method is stable over a wide range of zoom settings. While the proposed method needs only one shot, the DZC method needs several, even tens, of images, which is time-consuming. The disadvantage of the proposed method is that the distortion cannot be estimated simultaneously.

4.2. Real data experiments

For the real-data experiments, a zoom camera and a turntable compose a system for monocular-vision tracking and measurement. A JAI AB-200CL camera is mounted precisely on a turntable whose angular precision is 20″. The system is similar to a theodolite. The camera's parameters are as follows: resolution 1600 × 1200 pixels, cell size 5.5 μm × 5.5 μm, CCD size 8.8 mm × 6.6 mm. The custom-built lens can be controlled by computer, with focal length from 24 mm to 200 mm; the focal-length output is calibrated in the lab by optical methods with a collimator, a measuring microscope, etc. [25]. The distortion at the edge of the lens is less than 85 μm when the focal length is 24 mm, and less than 5 μm when the focal length is 200 mm. The system is focused at long range, and the aperture and focus parameters are fixed while the system is working.

The center of expansion for zooming is located at (805.5, 600.3) in pixels. Its determination is shown in Fig. 5, with the camera at fixed pose and position. The trajectories of the control points are illustrated by red crosses in the figures, and the black "*" is the zooming center (principal point). It can be seen from the figures that the zooming center is stable and repeatable for both directions of zooming.

Fig. 5 Determination of zooming center. The upper row illustrates views of camera when it zooms out, and the bottom row illustrates views of camera when it zooms in.

The system is used for tracking and measuring flying targets, whose working distance varies with the target; the dimension of the view at working distance is very large. The camera has been focused at infinity with fixed aperture and focus settings, so only the zoom must be taken into account in the calibration, as it is the only parameter that changes.

4.2.1. Obtain angular distances

We chose corners of buildings as control points in the range of 200 m to 5000 m, which coincides with the ranging coverage of the system. They can be regarded as at infinity because the distance between the camera and a control point is far greater than the focal length. The camera rotating with the turntable can then be regarded as rotating around its origin, and the angular distance θ is constant regardless of the rotation. The limit of the range is that the range is acceptable if the position error of the camera's center is negligible compared with the distance between the camera and the control points.

The proposed system is similar to a theodolite. When the system is collimated on a control point, its image coincides with the zooming center (principal point), and the azimuth and tilt angles are the corresponding angular parameters of the selected control point, as illustrated in Fig. 6. Assume the azimuth and tilt angles are (αi, βi) and (αj, βj) when the system is aimed at control points Ci, Cj. The unit vector in the turntable coordinate system is d = (cosβ cosα, cosβ sinα, sinβ), and θij, the angular distance between di and dj, can be computed by Eq. (14):

Fig. 6 The pan, tilt angle and angular distance.

\[
\cos\theta_{ij}=\frac{d_i^T d_j}{\sqrt{d_i^T d_i}\sqrt{d_j^T d_j}}=d_i^T d_j.
\]
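Equation (14) can be sketched in a few lines of Python (angles in radians; since the direction vectors are unit vectors, no normalization is needed, and the clamping guards acos against rounding):

```python
import math

def direction(alpha, beta):
    """Unit vector for azimuth alpha and tilt beta (radians) in the turntable frame."""
    return (math.cos(beta) * math.cos(alpha),
            math.cos(beta) * math.sin(alpha),
            math.sin(beta))

def angular_distance(ai, bi, aj, bj):
    """Angular distance θ_ij between two collimated directions, per Eq. (14)."""
    di, dj = direction(ai, bi), direction(aj, bj)
    dot = sum(x * y for x, y in zip(di, dj))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding error
```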

4.2.2. Image processing and calibration

Since the selected control points are corners of buildings: 1) locate the image coordinates of the control points with the Harris corner detector [26]; 2) record the azimuth and pitch angles corresponding to each image point; 3) perform the calibration when there are more than 4 control points. The experiments were carried out 3 times at every fixed nominal focal length setting; the results are shown in Table 1.

Table 1. Calibration Results

From Table 1 it can be seen that fx and fy, compared with the focal-length output, are steady and reasonable. The principal point, compared with the zooming center, is unstable; its error is obvious and rises with increasing focal length, which coincides with the accuracy analysis and the computer simulation results. Therefore, the zooming center is adopted as the principal point.

4.3. Calibration accuracy assessment

Classical measures of camera calibration accuracy, such as the radius of the ambiguity zone in ray tracing [3] and the RMS error of corner re-projection, cannot be used directly, as no extrinsic parameters are computed in the calibration process.

The projecting vectors in the CCF corresponding to the control points can be computed if the azimuth and pitch angles of the turntable are recorded when an image is taken. The coordinates of the re-projected image points can then be determined from the camera intrinsic parameters.

The distance between re-projected image points and real image points can be adopted as a measure of calibration accuracy. Figure 7 illustrates the image of the control points (red circles) and the re-projected image points (green crosses) when f = 24 mm. Re-projection errors at different focal lengths are illustrated in Fig. 8.
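This accuracy measure can be sketched as follows, assuming hypothetical calibrated intrinsics and direction vectors already expressed in the CCF:

```python
import math

# Hypothetical calibrated intrinsics for the sketch.
fx, fy, u0, v0 = 4000.0, 4000.0, 805.5, 600.3

def project(d):
    """Project a direction d = (x, y, z) in the CCF to pixel coordinates via K."""
    x, y, z = d
    return (fx * x / z + u0, fy * y / z + v0)

def rms_reprojection_error(directions, observed):
    """RMS distance (pixels) between re-projected rays and detected image points."""
    sq = 0.0
    for d, (u, v) in zip(directions, observed):
        up, vp = project(d)
        sq += (up - u) ** 2 + (vp - v) ** 2
    return math.sqrt(sq / len(directions))
```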

Fig. 7 Reprojection at f = 24mm, the red circles are images of control points, and green crosses are their re-projection correspondingly.

Fig. 8 Re-projecting error when f = 24mm, 50mm, 100mm and 200mm.

From Fig. 8, it can be seen that the re-projected points coincide approximately with the real image points. The re-projection errors are 1.10, 1.76, 2.63, and 5.58 pixels at focal lengths of 24 mm, 50 mm, 100 mm, and 200 mm respectively.

The re-projection error rises as the focal length increases: 1) the control points are features of buildings with a natural distribution, so their number in the field of view decreases as the focal length increases: there are about 100 control points at f = 24 mm and fewer than 20 at f = 200 mm; 2) the angle of view decreases as the focal length increases, so the re-projection error caused by the error of the turntable's angular parameters also increases: it is 0.43, 0.88, 1.76, and 3.53 pixels at 24 mm, 50 mm, 100 mm, and 200 mm respectively.

The precision of re-projection confirms that adopting the zooming center as the principal point is feasible.

5. Discussion

5.1. Additional constraint

As b = 0, there are 4 unknowns in K, which means the 5 unknowns in ω are not independent. We observe the following from Eqs. (1) and (5):

\[
\omega=K^{-T}K^{-1}=\begin{bmatrix}
\dfrac{1}{f_x^2} & 0 & -\dfrac{u_0}{f_x^2}\\
0 & \dfrac{1}{f_y^2} & -\dfrac{v_0}{f_y^2}\\
-\dfrac{u_0}{f_x^2} & -\dfrac{v_0}{f_y^2} & 1+\dfrac{u_0^2}{f_x^2}+\dfrac{v_0^2}{f_y^2}
\end{bmatrix}.
\]
Then ω33 can be expressed as follows:
\[
\omega_{33}=1+\frac{u_0^2}{f_x^2}+\frac{v_0^2}{f_y^2}
=1+\left(\frac{u_0}{f_x^2}\right)^2\Big/\frac{1}{f_x^2}+\left(\frac{v_0}{f_y^2}\right)^2\Big/\frac{1}{f_y^2},
\]
so
\[
g=1+\frac{d^2}{4a}+\frac{e^2}{4c}.
\]
Equation (16) can be used to improve the stability and precision of the optimization.
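A quick numerical verification of Eq. (16), with a, c, d, e formed from hypothetical intrinsics via the explicit ω of Eq. (15); any choice of fx, fy, u0, v0 should satisfy the identity:

```python
# Hypothetical intrinsics; the identity g = 1 + d^2/(4a) + e^2/(4c) is
# independent of the particular values chosen here.
fx, fy, u0, v0 = 3000.0, 3100.0, 790.0, 610.0

a = 1.0 / fx**2          # ω11
c = 1.0 / fy**2          # ω22
d = -2.0 * u0 / fx**2    # 2 * ω13
e = -2.0 * v0 / fy**2    # 2 * ω23

g_direct = 1.0 + u0**2 / fx**2 + v0**2 / fy**2        # ω33 from Eq. (15)
g_constraint = 1.0 + d**2 / (4 * a) + e**2 / (4 * c)  # Eq. (16)
```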

5.2. Advice of improvement in optimization

From the definition of ω, it can be seen that u0 and v0 are coupled with 1/fx² and 1/fy². When they are decoupled, the errors of u0/fx² and v0/fy² resulting from the optimization are amplified by fx² and fy² respectively: the longer the focal length, the worse the precision of u0, v0. On the other hand, since the elements of ω are very small, stricter requirements on the step size and residual can improve the precision of the optimization, especially that of the principal point.

6. Conclusion

In this article, we proposed a new flexible calibration method for zoom cameras used outdoors. Under constraints relating the image points, the angular distances between their back-projected rays, and the IAC, nonlinear optimization is used to solve for the parameters of the IAC. The IAC is then uniquely decomposed by the Cholesky factorization, and the intrinsic parameters are obtained. The calibration can thus be done from a single view when several control points at infinity with known angular distances in the CCF are available. The factors that affect accuracy are examined by theoretical analysis and computer simulation, and specific methods to decrease the calibration error are also proposed. Real data show re-projection RMS errors of 1.10, 1.76, 2.63, and 5.58 pixels at focal lengths of 24 mm, 50 mm, 100 mm, and 200 mm respectively. Experiments with both simulated and real data show that the proposed method is fast, accurate, and effective, and can be used for on-site calibration of outdoor zoom cameras.

However, we observe that the algorithm may fail in some cases. For example, we assumed that the principal point coincides with the zooming center, but this may not hold for some cameras [21]. Also, distortion is not considered, as it is negligible for our camera; in practice it cannot be ignored for some cameras, especially wide-angle ones. We plan to investigate these issues in order to make the method more accurate and comprehensive. Finally, we plan to incorporate the proposed algorithm into on-site calibration of tracking and measuring systems for flying targets.

Appendix

Writing
\[
P=\frac{(u_1-u_0)(u_2-u_0)}{f_x^2}+\frac{(v_1-v_0)(v_2-v_0)}{f_y^2}+1=S_1W,\qquad
Q_1=\frac{(u_1-u_0)^2}{f_x^2}+\frac{(v_1-v_0)^2}{f_y^2}+1=S_2W,\qquad
Q_2=\frac{(u_2-u_0)^2}{f_x^2}+\frac{(v_2-v_0)^2}{f_y^2}+1=S_3W,
\]
the error of u0 is
\[
\Delta u_0=\frac{\partial u_0}{\partial u_1}\Delta u_1+\frac{\partial u_0}{\partial u_2}\Delta u_2+\frac{\partial u_0}{\partial v_1}\Delta v_1+\frac{\partial u_0}{\partial v_2}\Delta v_2+\frac{\partial u_0}{\partial\theta}\Delta\theta
\approx\frac{\left(\dfrac{\partial F}{\partial u_1}+\dfrac{\partial F}{\partial u_2}+\dfrac{\partial F}{\partial v_1}+\dfrac{\partial F}{\partial v_2}\right)\Delta_{uv}+\dfrac{\partial F}{\partial\theta}\,\Delta\theta}{\dfrac{\partial F}{\partial u_0}},
\]
where, substituting the explicit forms of W, S1, S2 and S3,
\[
\begin{aligned}
\frac{\partial F}{\partial u_1}+\frac{\partial F}{\partial u_2}+\frac{\partial F}{\partial v_1}+\frac{\partial F}{\partial v_2}
&=2P\,\frac{(u_1-u_0)+(u_2-u_0)}{f_x^2}+2P\,\frac{(v_1-v_0)+(v_2-v_0)}{f_y^2}\\
&\quad-2\cos^2\theta\,\frac{(u_1-u_0)Q_2+(u_2-u_0)Q_1}{f_x^2}-2\cos^2\theta\,\frac{(v_1-v_0)Q_2+(v_2-v_0)Q_1}{f_y^2},\\
\frac{\partial F}{\partial\theta}&=2\cos\theta\sin\theta\,Q_1Q_2,\\
\frac{\partial F}{\partial u_0}&=2P\,\frac{2u_0-u_1-u_2}{f_x^2}+2\cos^2\theta\,\frac{(u_1-u_0)Q_2+(u_2-u_0)Q_1}{f_x^2}.
\end{aligned}
\]
Using the relation P = cosθ √(Q1Q2) from Eq. (6), and since θ is small, keeping the leading terms gives
\[
\Delta u_0\approx\frac{\sin^2\theta\left[(u_2-u_0)+(u_1-u_0)+(v_2-v_0)+(v_1-v_0)\right]\Delta_{uv}+f_x^2\cos\theta\sin\theta\,\Delta\theta}{\sin^2\theta\,(u_1+u_2-2u_0)}
=\Delta_{uv}+\frac{v_1+v_2-2v_0}{u_1+u_2-2u_0}\,\Delta_{uv}+\frac{f_x^2\cos\theta}{\sin\theta\,(u_1+u_2-2u_0)}\,\Delta\theta.
\]

References and links

1. S. Ma and Z. Zhang, Computer Vision: Theory and Algorithms (Beijing Sciences, 1998).

2. G. Zhang, Visual Measurement (Beijing Sciences, 2008).

3. R. Tsai, “An efficient and accurate camera calibration technique for 3D machine vision,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1986), pp. 364–374.

4. R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the- shelf TV camera and lenses,” IEEE Trans. Robot. Autom. 3(4), 323–344 (1987). [CrossRef]  

5. O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint (MIT Press, 1993).

6. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. 22(11), 1330–1334 (2000). [CrossRef]  

7. Z. Wei, L. Cao, and G. Zhang, “A novel 1D target-based calibration method with unknown orientation for structured light vision sensor,” Opt. Laser Technol. 42(4), 570–574 (2010). [CrossRef]  

8. B. He and Y. Li, “Camera calibration with lens distortion and from vanishing points,” Opt. Eng. 48(1), 013603 (2009). [CrossRef]  

9. F. Lv, T. Zhao, and R. Nevatia, “Camera calibration from video of a walking human,” IEEE Trans. Pattern Anal. Mach. Intell. 28(9), 1513–1518 (2006). [CrossRef]   [PubMed]  

10. B. Wu, H. Hu, Q. Zhu, and Y. Zhang, “A flexible method for zoom lens calibration and modeling using a planar checkerboard,” Photogram. Eng. Rem. Sens. 79(6), 555–571 (2013). [CrossRef]  

11. J. Jin and X. Li, “Efficient camera self-calibration method based on the absolute dual quadric,” J. Opt. Soc. Am. A 30(3), 287–292 (2013). [CrossRef]   [PubMed]  

12. H. Zhang, K. Y. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 499–502 (2007). [CrossRef]   [PubMed]  

13. J. Davis and X. Chen, “Calibrating pan-tilt cameras in wide-area surveillance networks,” in Proceedings of IEEE International Conference on Computer Vision (IEEE 2003), pp. 144–149. [CrossRef]  

14. Photo-Sonic, “Mobile multispectral TSPI system,” http://www.photosonics.com/mmts.htm.

15. J. Kelsey, J. Byrne, M. Cosgrove, S. Seereeram, and R. Mehra, “Vision-based relative pose estimation for autonomous rendezvous and docking,” in Proceedings of IEEE Conference on Aerospace (IEEE 2006), pp. 1–20. [CrossRef]  

16. V. Lepetit and P. Fua, “Monocular model-based 3D tracking of rigid objects: a survey,” Found. Trends Comput. Graph. Vis. 1(1), 1–89 (2005). [CrossRef]  

17. S. Sinha and M. Pollefeys, “Pan-tilt-zoom camera calibration and high-resolution mosaic generation,” Comput. Vis. Image Underst. 103(3), 170–183 (2006). [CrossRef]  

18. R. Willson, Modeling and Calibration of Automated Zoom Lenses, Ph.D. Dissertation (Carnegie Mellon University, 1994).

19. Z. Wu and R. J. Radke, “Keeping a pan-tilt-zoom camera calibrated,” IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1994–2007 (2013). [CrossRef]   [PubMed]  

20. R. Lenz and R. Tsai, “Techniques for calibration of the scale factor and image center for high accuracy 3D machine vision metrology,” IEEE Trans. Pattern Anal. 10(5), 713–720 (1988). [CrossRef]  

21. M. Li and J. Lavest, “Some aspects of zoom lens camera calibration,” IEEE Trans. Pattern Anal. 18(11), 1105–1110 (1996). [CrossRef]  

22. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003).

23. W. Press, S. Teukolsky, W. Vetterling, and B. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd ed. (Cambridge University 2007).

24. Y. Fei, Error Theory and Data Processing, 6th ed. (China Machine, 2010).

25. Z. He, Optical Measuring System (National Defense Industry, 2002).

26. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference (Academic, 1988), pp. 147–151.
