Optica Publishing Group

Analytical solution of uncertainty with the GUM method for a dynamic stereo vision measurement system

Open Access

Abstract

To address the strong correlation among input parameters and the long measurement chain, both of which complicate uncertainty analysis under the Guide to the Expression of Uncertainty in Measurement (GUM), a novel dynamic stereo vision measurement system based on quaternion theory is presented that relaxes the orthogonality requirements on shafting manufacturing and application. By applying quaternion theory to the kinematic model of the cameras and deriving an analytical solution of uncertainty with the GUM method, complete, detailed, and continuous uncertainty results over the full-scale measurement space can be obtained. First, one-dimensional turntables and rigid connections form the motion cores and automatic control carriers of the system. Second, the novel measurement model shortens the calibration and measurement chains. Once the system based on the novel measurement model is set up, the analytical solution of uncertainty is applied in the accuracy-evaluation process. During the analysis, the strong correlation among the extrinsic parameters is decoupled by introducing virtual circles together with the measurement strategy under the GUM method. By analyzing the relationships among the attitude angles, the major factors influencing the uncertainty along each axis and the final uncertainty are clarified. Moreover, continuous analytical uncertainty maps for the per-axis uncertainties, the combined standard uncertainty, and the expanded uncertainty are illustrated, and the uncertainty variation tendency is described. Finally, the proposed analytical solution of uncertainty with the GUM method predicts the uncertainty over the full-scale space and offers a new approach to uncertainty analysis for complicated combined measurement systems.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Uncertainty evaluation with the GUM method is a general and effective approach to accuracy evaluation of measurement systems. Uncertainty analyses are well established for systems whose measurement and propagation models are not very complicated, such as a single centralized measurement station [1–3] or systems made up of several identical measurement units [4–7]. Nevertheless, uncertainty analysis is difficult to apply to complicated combined measurement systems composed of different kinds of measurement units, for example, the dynamic stereo vision measurement system.

Dynamic stereo vision measurement systems usually consist of cameras and turntables. Such systems can track and measure moving objects in space through the motion provided by the turntables, which serve as the motion cores and automatic control carriers. Introducing turntables into a dynamic stereo vision measurement system increases the number of error sources, lengthens the calibration and measurement chains, and strengthens the coupling of the motion parameters of the measurement units, all of which sharply increase the difficulty of accuracy evaluation by uncertainty analysis with the GUM method. Owing to the numerous error sources, the complex measurement chain, and the strong correlation among the input measurement parameters of the conventional kinematic model for the measurement units, most dynamic stereo vision measurement systems adopt a variety of ad hoc accuracy analyses instead.

The dynamic stereo vision measurement systems reviewed below are all assembled with turntables to construct the conventional kinematic measurement model. Chai et al. [8] and Deng et al. [9] tested the precision of dynamic stereo vision measurement systems by using discrete three-dimensional points in the actual space. Yang et al. [10] reconstructed the reference length of a standard ruler, a kind of relative quantity, to complete the precision evaluation. Parallel stereo vision systems [11–14] installed on automated guided vehicles used the difference between the reconstructed trajectories and data from the Global Positioning System to evaluate measurement precision. Samper et al. [15] utilized a planar target with known, fixed reflective marks to obtain the precision report of the measurement system. Furthermore, Nakabo et al. [16] reconstructed the 3D (three-dimensional) shape of a close-range moving object in the common view of a system with a high-speed linear slider and two orthogonal turntables for each camera; with this higher-degree-of-freedom measurement structure, depth-map estimation was adopted to verify the system accuracy. Raden et al. [17], Mikko et al. [18], and Xu et al. [19] computed disparity maps to reconstruct 3D points and error maps, with evaluation only along the depth axis, by using parallel stereo vision systems. Reference [20] used an additional calibration board and a general-purpose laser distance meter as physical constraints to demonstrate a measurement model that precisely determines the accuracy of a stereo vision system along the X, Y, and Z axes. Di Leo et al. [21] presented a procedure for error propagation for a single camera through a calibration target with round mark points.

However, the accuracy results obtained with the error models described above are discrete, the evaluation is sparse, and the criteria for accuracy evaluation are inconsistent. To obtain complete accuracy results, uncertainty analysis is applied to clarify the accuracy distribution over the whole measurement space. Uncertainty analysis is a paramount step in the precision evaluation of a measurement system, because uncertainty represents the reasonable dispersion of the measured values and the correlation of the measurement results.

Recently, with the conventional kinematic measurement model, the method of [22] analyzed the uncertainty of the whole measurement space using measurement units with simplified models in which the correlations between all variables are simply set to 0, which does not suit actual situations with strong coupling between variables. If the correlation among all the variables is taken into consideration, the Monte Carlo method is a popular approach, sampling numerous data points (up to $10^6$) to cover 95% of the real uncertainty region [23,24]. To balance data complexity against time complexity, one paper [25] sampled only hundreds of data points to obtain the final uncertainty map, which is unrepresentative according to the standard [26].

Thus, an analytical solution of uncertainty with the GUM method, whose results are complete, detailed, and continuous, is the most suitable accuracy evaluation for the dynamic stereo vision measurement system.

To address the strong coupling between input variables and the complexity of the measurement chain, which hinder the GUM method under the conventional measurement model, we propose a novel measurement model based on quaternion theory together with an analytical GUM uncertainty analysis for the dynamic stereo vision measurement system, which consists of four one-dimensional turntables and two cameras. Compared with measurement models that use the translation matrix, the main difference is that there is no need to fit coordinate systems to the fitted axes or to calculate the relationships between those coordinate systems.

With the novel measurement model, the proposed system does not need to guarantee orthogonality between the axes of the turntables, which reduces the manufacturing cost and the difficulty of adjusting the orthogonality of the shafting. Because quaternion theory is used in the novel measurement model, there is also no need to calibrate the relationship between the turntables and the cameras, which shortens both the calibration chain and the measurement chain. In the analytical solution of uncertainty with the GUM method, we divide the system parameters into intrinsic and extrinsic parameters according to the calibration step and the measurement step. The intrinsic parameters of the cameras are constant, and the extrinsic parameters are separated into the attitude angles and the translation vector according to the active and passive forms of the motion. A virtual circle is introduced in the uncertainty analysis to decouple the strong correlation between the variables of the extrinsic parameters. The uncertainty maps computed for the measurement space with the actual structural parameters of the dynamic stereo vision measurement system elaborate the uncertainty distribution as functions of the intersection angle θ and the pitch angle α, and may also be appropriate for other systems based on the intersection measurement model. After thoroughly decoupling the correlation among all the variables, the final uncertainty results reveal a continuous accuracy variation trend in the full-scale measurement space, according to the uniform standards of the GUM method.

The paper is organized as follows. Section 2 introduces the architecture of the dynamic stereo vision measurement system and describes the parameter calibration method for the measurement units. In Section 3, the kinematic measurement model based on quaternion theory for the measurement units is presented. The analytical solution of uncertainty using the novel method with virtual circle constraints in the measurement space is given in Section 4. Section 5 summarizes the paper and draws conclusions.

2. Composition and calibration of the dynamic stereo vision measurement system

2.1 Architecture of the measurement unit

To reduce the manufacturing cost and the calibration workload of adjusting the orthogonality between shafts, and to increase the dynamic performance, a dynamic stereo vision measurement system architecture with a combined turntable consisting of two one-dimensional turntables is presented in this paper. In Fig. 1, two one-dimensional turntables are combined with a rigid connection, and a camera is installed at the bottom of the vertical turntable. The vertical axis and the horizontal axis are the rotating shafts of the horizontal turntable and the vertical turntable, respectively. The optical axis of the camera is almost perpendicular to both the vertical axis and the horizontal axis.

Fig. 1. Composition of the novel measurement unit.

Unlike the conventional orthogonal-shafting measurement unit, the angle between the two axes of the two turntables need not be exactly 90°, and the rotating shafts of the measurement unit need not intersect. As a result, the measurement unit can easily be set up from two independent turntables, a camera, and a rigid connection.

2.2 Composition of the whole measurement system

A system composed of two measurement units placed at different locations can measure the motion parameters of a moving object in the measurement space. The system, consisting of the two measurement units described in Section 2.1, is shown in Fig. 2. The two measurement units are in the initialized state, where the initial angles of the horizontal and vertical turntables are almost 0° and the optical axes of the two cameras are almost parallel.

Fig. 2. Two measurement units.

Because each camera needs to track the moving object and keep it at the center of its image plane, the final state of the two cameras after their movement is illustrated in Fig. 3. As in intersection measurement theory, the two cones show the fields of view (FOV) of the two cameras, and the red region denotes the common view of the two separated measurement units. After adjusting the attitudes of the two cameras so that the object lies in their common view, the coordinates of the corresponding points of the object on the two image planes can easily be processed by the stereo vision measurement model described in Section 3.2.

Fig. 3. Measurement structure.

2.3 Turntable calibration

The novel combined turntable is the motion core and automatic control carrier of the camera in each measurement unit, so the turntable plays an important role in providing accurate extrinsic parameters of the camera. Before the camera calibration is carried out, the parameters of the novel combined turntable in each measurement unit, which consist of its pose and position, need to be determined. As shown in Fig. 4, a Leica laser tracker is used in the turntable calibration process.

Fig. 4. Turntable calibration.

First, by fixing the vertical turntable and rotating the horizontal turntable in intervals of 2° from 0° to 90°, a cluster of points on the horizontal plane is obtained. After fitting this cluster of points to a circle, the center of gyration and the axis of rotation (red arrow in Fig. 4 and brown arrow in Fig. 5) of the horizontal turntable are found. Following the same steps, the parameters of the vertical turntable can also be obtained (green arrow in Fig. 4 and pink arrow in Fig. 5). For the left-side measurement unit, the fitted circles and parameters of the two one-dimensional turntables are shown in Fig. 5: the points in green and the fitted circle in blue lie on the horizontal plane, and the points in pink and the fitted circle in green lie on the vertical plane. The same steps apply to the right-side measurement unit.
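The circle fit described above can be sketched as a plane fit (SVD) followed by an algebraic 2D circle fit; the following is a minimal numpy sketch with function names of our own, assuming the tracker points are expressed in a common frame:

```python
import numpy as np

def fit_circle_3d(pts):
    """Fit a circle to 3D points: plane by SVD, then algebraic 2D circle fit.
    Returns (center, normal, radius), i.e. the center of gyration and axis of rotation."""
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(pts - centroid)
    normal = Vt[2]
    # 2D coordinates of the points within the fitted plane.
    u_ax, v_ax = Vt[0], Vt[1]
    xy = np.column_stack(((pts - centroid) @ u_ax, (pts - centroid) @ v_ax))
    # Algebraic circle fit: x^2 + y^2 = 2a x + 2b y + c.
    A = np.column_stack((2 * xy, np.ones(len(xy))))
    (a, b, c), *_ = np.linalg.lstsq(A, (xy ** 2).sum(axis=1), rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u_ax + b * v_ax
    return center, normal, radius
```

Even a 90° arc, as swept during the calibration, determines the circle; the normal recovered here plays the role of the rotation axis (red/green arrows in Fig. 4).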

Fig. 5. The parameters of the left turntable.

2.4 Camera calibration

The camera calibration results consist of the intrinsic parameters and the extrinsic parameters. Before the cameras and turntables are assembled, the intrinsic parameters are calibrated with the methods of [27–29], and the intrinsic calibration results include the uncertainties of all the parameters. The focal lengths of the cameras are adjusted in advance by bringing the object in the FOV into sharp focus, after which the intrinsic calibration is applied.

For the initial extrinsic parameters, the PnP (Perspective-n-Point) method is applied with a control field consisting of 10 control points. The control points are HUBBS R1.5S retro-reflective photogrammetry targets, which can be interchanged equivalently with the SMR (spherically mounted retroreflector), the receiver of the Leica Laser Tracker 901, shown in Fig. 6(a) and Fig. 6(b), respectively.

Fig. 6. Receivers for the photogrammetry system and laser tracker system.

The control field is set up with the structure shown in Fig. 7(a). The coordinates of the 10 control points are measured by the laser tracker in its own coordinate system, and an image of the control points with the HUBBS R1.5S targets captured by a camera is shown in Fig. 7(b).

Fig. 7. Control fields for calibrating the relationship between the turntable and camera, and the image with the control points. (a) The rigid structure with control points for the initial extrinsic parameters of the cameras. (b) The image of the control points on the rigid structure, captured by the cameras under an illuminant.

By using the intrinsic parameters, the control points’ coordinates, and the corresponding pixel coordinates on the image plane, the initial extrinsic parameters of the cameras in the system are obtained.

3. Measurement model based on quaternion theory

3.1 Kinematic model of cameras

To track moving objects in the measurement space, the cameras need to keep the object aligned with the center of the image plane. All camera movements are provided by the novel combined turntable on each side. The left-side architecture of the system, described in Section 2.1, is shown in Fig. 8. The ntlh (red arrow) and ntlv (green arrow) are the axes of rotation of the horizontal and vertical turntables, respectively, and Otlh and Otlv are the centers of rotation obtained in Section 2.3. The coordinate system O-XwYwZw is denoted as the world coordinate system, whose origin O coincides with Otlh.

Fig. 8. Left side motion architecture of the system.

First, after rotating by θl around the axis ntlh of the horizontal turntable, the parameters of the vertical turntable change to ntlv_dyn and Otlv_dyn. According to quaternion theory, the rotation quaternion of the left horizontal turntable is qtlh, shown in Eq. (1), and ntlv_dyn and Otlv_dyn are given by Eq. (2) and Eq. (3).

$${{\boldsymbol q}_{{\boldsymbol {tlh}}}} = \cos (\frac{{{\theta _l}}}{2}) + {{\boldsymbol n}_{{\boldsymbol {tlh}}}}\sin (\frac{{{\theta _l}}}{2})$$
$${{\boldsymbol n}_{{\boldsymbol {tlv}}}}{\boldsymbol \_dyn} = [{{\boldsymbol q}_{{\boldsymbol {tlh}}}}\cdot {[{{\boldsymbol n}_{{\boldsymbol {tlv}}}}]_q}\cdot {\overline {\boldsymbol q} _{{\boldsymbol {tlh}}}}]_q^{ - 1}$$
$${O_{tlv}}\_dyn = [{{\boldsymbol q}_{{\boldsymbol {tlh}}}}\cdot {[{O_{tlv}} - O]_q}\cdot {\overline {\boldsymbol q} _{{\boldsymbol {tlh}}}} + {[O]_q}]_q^{ - 1}$$
The symbol [•]q converts a space vector into a pure quaternion, and [•]q−1 is the inverse transformation of [•]q.
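As a concrete illustration of Eqs. (1)–(3), the rotation of a point about an arbitrary axis through a given center can be written in a few lines; the following is a minimal numpy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def quat_from_axis_angle(n, theta):
    """Rotation quaternion q = cos(theta/2) + n*sin(theta/2), cf. Eq. (1)."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * n))

def quat_mult(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, v1 = a[0], a[1:]
    w2, v2 = b[0], b[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def rotate_about(q, p, center):
    """[q . [p - center]_q . conj(q) + [center]_q]_q^{-1}, cf. Eq. (3)."""
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p_q = np.concatenate(([0.0], np.asarray(p, float) - center))  # pure quaternion
    return quat_mult(quat_mult(q, p_q), q_conj)[1:] + center
```

With center set to the world origin, the same helper applied to an axis direction reproduces Eq. (2).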

Furthermore, the extrinsic parameters of the left camera are illustrated in Fig. 9, where the camera is rigidly joined to the vertical turntable. After the rotation of the horizontal turntable, the axes of the left camera coordinate system are Xclh, Yclh, and Zclh, and the optical center moves from Ocl to Oclh.

Fig. 9. The pose for the left camera after the rotation of the left horizontal turntable.

Rclh, which consists of the Xclh, Yclh, and Zclh axes, is denoted as the attitude matrix of the left camera, and Oclh as its optical center. Oclh is given by Eq. (4), and the vectors of the three axes of the left camera coordinate system are given by Eq. (5).

$${O_{clh}} = [{{\boldsymbol q}_{{\boldsymbol {tlh}}}}\cdot {[{O_{cl}} - O]_q}\cdot {\overline {\boldsymbol q} _{{\boldsymbol {tlh}}}} + {[O]_q}]_q^{ - 1}$$
$$\left\{ {\begin{array}{{c}} {{{\boldsymbol X}_{{\boldsymbol {clh}}}} = [{{\boldsymbol q}_{{\boldsymbol {tlh}}}}\cdot {{[{{\boldsymbol x}_{\boldsymbol l}} - O]}_q}\cdot {{\overline {\boldsymbol q} }_{{\boldsymbol {tlh}}}} + {{[O]}_q}]_q^{ - 1}}\\ {{{\boldsymbol Y}_{{\boldsymbol {clh}}}} = [{{\boldsymbol q}_{{\boldsymbol {tlh}}}}\cdot {{[{{\boldsymbol y}_{\boldsymbol l}} - O]}_q}\cdot {{\overline {\boldsymbol q} }_{{\boldsymbol {tlh}}}} + {{[O]}_q}]_q^{ - 1}}\\ {{{\boldsymbol Z}_{{\boldsymbol {clh}}}} = [{{\boldsymbol q}_{{\boldsymbol {tlh}}}}\cdot {{[{{\boldsymbol z}_{\boldsymbol l}} - O]}_q}\cdot {{\overline {\boldsymbol q} }_{{\boldsymbol {tlh}}}} + {{[O]}_q}]_q^{ - 1}} \end{array}} \right.$$
where xl, yl, and zl are the columns of the original attitude matrix Rcl before the rotation of the horizontal turntable, shown in Eq. (6).
$${R_{cl}} = \left[ {\begin{array}{{ccc}} {{{\boldsymbol x}_{\boldsymbol l}}}&{{{\boldsymbol y}_{\boldsymbol l}}}&{{{\boldsymbol z}_{\boldsymbol l}}} \end{array}} \right]$$

The attitude matrix of the left camera after the rotation of the left horizontal turntable, expressed in the world coordinate system, is given in Eq. (7).

$${R_{clh}} = \left[ {\begin{array}{{ccc}} {{{\boldsymbol X}_{{\boldsymbol {clh}}}}}&{{{\boldsymbol Y}_{{\boldsymbol {clh}}}}}&{{{\boldsymbol Z}_{{\boldsymbol {clh}}}}} \end{array}} \right]$$

The same process is applied for the left vertical turntable: after rotating by αl around ntlv_dyn, as illustrated in Fig. 10, the parameters are given in Eqs. (8)–(11), where qtlv is the quaternion formed from the rotation axis ntlv_dyn and the rotation angle αl. Oclv and Rclv denote the optical center and the attitude matrix of the left camera after the rotation of the vertical turntable, respectively.

$${{\boldsymbol q}_{{\boldsymbol {tlv}}}} = \cos (\frac{{{\alpha _l}}}{2}) + {{\boldsymbol n}_{{\boldsymbol {tlv}}}}{\boldsymbol \_dyn}\sin (\frac{{{\alpha _l}}}{2})$$
$${O_{clv}} = [{{\boldsymbol q}_{{\boldsymbol {tlv}}}}\cdot {[{O_{clh}} - {O_{tlv}}\_dyn]_q}\cdot {\overline {\boldsymbol q} _{{\boldsymbol {tlv}}}} + {[{O_{tlv}}\_dyn]_q}]_q^{ - 1}$$
$$\left\{ {\begin{array}{{c}} {{{\boldsymbol X}_{{\boldsymbol {cl}}v}} = [{{\boldsymbol q}_{{\boldsymbol {tlv}}}}\cdot {{[{{\boldsymbol X}_{{\boldsymbol {clh}}}} - O]}_q}\cdot {{\overline {\boldsymbol q} }_{{\boldsymbol {tlv}}}} + {{[O]}_q}]_q^{ - 1}}\\ {{{\boldsymbol Y}_{{\boldsymbol {cl}}v}} = [{{\boldsymbol q}_{{\boldsymbol {tlv}}}}\cdot {{[{{\boldsymbol Y}_{{\boldsymbol {clh}}}} - O]}_q}\cdot {{\overline {\boldsymbol q} }_{{\boldsymbol {tlv}}}} + {{[O]}_q}]_q^{ - 1}}\\ {{{\boldsymbol Z}_{{\boldsymbol {cl}}v}} = [{{\boldsymbol q}_{{\boldsymbol {tlv}}}}\cdot {{[{{\boldsymbol Z}_{{\boldsymbol {clh}}}} - O]}_q}\cdot {{\overline {\boldsymbol q} }_{{\boldsymbol {tlv}}}} + {{[O]}_q}]_q^{ - 1}} \end{array}} \right.$$
$${R_{clv}} = \left[ {\begin{array}{{ccc}} {{{\boldsymbol X}_{{\boldsymbol {clv}}}}}&{{{\boldsymbol Y}_{{\boldsymbol {clv}}}}}&{{{\boldsymbol Z}_{{\boldsymbol {clv}}}}} \end{array}} \right]$$

Fig. 10. The pose for the left camera after the combined rotation of the horizontal and vertical turntables.

Finally, by the same kinematic model, we obtain the left camera matrix Mcl and the right camera matrix Mcr in the world coordinate system in Eqs. (12) and (13), where the matrix M is the transformation from the right turntable coordinate system to the left turntable coordinate system, determined by the laser tracker during the turntable calibration.

$${M_{cl}} = \left[ {\begin{array}{{cc}} {R_{clv}^T}&{ - R_{clv}^T{O_{clv}}}\\ {\boldsymbol 0}&1 \end{array}} \right]$$
$${M_{cr}} = \left[ {\begin{array}{{cc}} {R_{crv}^T}&{ - R_{crv}^T{O_{crv}}}\\ {\boldsymbol 0}&1 \end{array}} \right]M$$
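Assembling Eq. (12) from the rotated attitude matrix and optical center is mechanical; a minimal numpy sketch (with a function name of our own) is shown below, and the same helper applies to Eq. (13) after right-multiplying by M:

```python
import numpy as np

def world_to_camera(R_clv, O_clv):
    """Homogeneous world-to-camera transform [R^T, -R^T O; 0, 1] of Eq. (12)."""
    M = np.eye(4)
    M[:3, :3] = np.asarray(R_clv, float).T
    M[:3, 3] = -np.asarray(R_clv, float).T @ np.asarray(O_clv, float)
    return M
```

By construction, the optical center maps to the camera origin: M applied to [O_clv, 1] yields [0, 0, 0, 1].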

3.2 Stereo vision measurement model

As described in the model of [30] and in Fig. 11, we define Oc-XlYlZl and Oc-XrYrZr as the coordinate systems of the left and right cameras, O-xlyl and O-xryr as the image-plane coordinate systems, and Op-ulvl and Op-urvr as the pixel coordinate systems. P (X, Y, Z, 1)T is the homogeneous coordinate of an arbitrary point in the overlapped FOV of both cameras. pl (ul, vl, 1)T and pr (ur, vr, 1)T are the projection points on the left and right camera image planes, respectively.

Fig. 11. Stereo vision measurement model.

Mcl and Mcr are the transformation matrices of the two cameras defined in Section 3.1. Denoting the intrinsic matrices of the left and right cameras as Kl and Kr, the mathematical expression of the model is presented in Eq. (14).

$$\left\{ {\begin{array}{{c}} {{\lambda_l}{p_l} = {K_l}{M_{cl}}P}\\ {{\lambda_r}{p_r} = {K_r}{M_{cr}}P} \end{array}} \right.$$

Let mij denote the elements of the projection matrices KlMcl and KrMcr. If the distortion of the cameras is not taken into account, the world coordinates of point P are obtained by the linear least-squares method as in Eqs. (15)–(17).

$$\left[ {\begin{array}{{ccc}} {{u_l}m_{31}^l - m_{11}^l}&{{u_l}m_{32}^l - m_{12}^l}&{{u_l}m_{33}^l - m_{13}^l}\\ {{v_l}m_{31}^l - m_{21}^l}&{{v_l}m_{32}^l - m_{22}^l}&{{v_l}m_{33}^l - m_{23}^l}\\ {{u_r}m_{31}^r - m_{11}^r}&{{u_r}m_{32}^r - m_{12}^r}&{{u_r}m_{33}^r - m_{13}^r}\\ {{v_r}m_{31}^r - m_{21}^r}&{{v_r}m_{32}^r - m_{22}^r}&{{v_r}m_{33}^r - m_{23}^r} \end{array}} \right]\left[ {\begin{array}{{c}} X\\ Y\\ Z \end{array}} \right] = \left[ {\begin{array}{{c}} {m_{14}^l - {u_l}m_{34}^l}\\ {m_{24}^l - {v_l}m_{34}^l}\\ {m_{14}^r - {u_r}m_{34}^r}\\ {m_{24}^r - {v_r}m_{34}^r} \end{array}} \right]$$
$$A\left[ {\begin{array}{{c}} X\\ Y\\ Z \end{array}} \right] = b$$
$$\left[ {\begin{array}{{c}} X\\ Y\\ Z \end{array}} \right] = {({{A^T}A} )^{ - 1}}{A^T}b$$
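The normal-equation solve of Eqs. (15)–(17) can be written directly; the following is a minimal numpy sketch in which Pl and Pr stand for the 3×4 products KlMcl and KrMcr (the names are ours):

```python
import numpy as np

def triangulate(Pl, Pr, pl, pr):
    """Linear least-squares triangulation of Eqs. (15)-(17).
    Pl, Pr: 3x4 projection matrices; pl, pr: pixel coordinates (u, v)."""
    rows, rhs = [], []
    for P, (u, v) in ((Pl, pl), (Pr, pr)):
        # Row pairs of Eq. (15): u*m3 - m1 and v*m3 - m2.
        rows.append(u * P[2, :3] - P[0, :3]); rhs.append(P[0, 3] - u * P[2, 3])
        rows.append(v * P[2, :3] - P[1, :3]); rhs.append(P[1, 3] - v * P[2, 3])
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.solve(A.T @ A, A.T @ b)  # (A^T A)^{-1} A^T b, Eq. (17)
```

The 4×3 system is overdetermined by one equation, so the normal-equation solution averages the two rays in the least-squares sense.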

4. Analytical solution of uncertainty with the GUM method

In this section, the factors on which the measurement uncertainty depends are discussed through mathematical formulation, and the impact of each factor on the measurement uncertainty is analyzed. Finally, the real input uncertainties are applied to the dynamic stereo vision measurement system to evaluate the proposed uncertainty model.

To analyze the uncertainty over the whole measurement space, we first identify the input sources of uncertainty. From Eq. (17), the coordinates along the X, Y, and Z directions can be expressed as Eq. (18).

$$\left\{ {\begin{array}{{c}} {X = {F_x}({f_l},{u_{l0}},{v_{l0}},{f_r},{u_{r0}},{v_{r0}},{\theta_l},{\theta_r},{\alpha_l},{\alpha_r},{t_x},{t_y},{t_z})}\\ {Y = {F_y}({f_l},{u_{l0}},{v_{l0}},{f_r},{u_{r0}},{v_{r0}},{\theta_l},{\theta_r},{\alpha_l},{\alpha_r},{t_x},{t_y},{t_z})}\\ {Z = {F_z}({f_l},{u_{l0}},{v_{l0}},{f_r},{u_{r0}},{v_{r0}},{\theta_l},{\theta_r},{\alpha_l},{\alpha_r},{t_x},{t_y},{t_z})} \end{array}} \right.$$

In Eq. (18), [fl, ul0, vl0]T and [fr, ur0, vr0]T are the parameter vectors of the intrinsic matrices of the left and right cameras, respectively. [θl, θr, αl, αr] is the vector of angles given to the cameras by the turntables during the measurement process. [tx, ty, tz] are the elements of the translation vector of the initial orientation of the cameras, obtained before the measurement process with the PnP method. Fx(•), Fy(•), and Fz(•) are the mapping operators from the variable space to the coordinate space.

The calibration results and the uncertainties of the corresponding variables, after correcting the distortion of the images following the steps in Section 2.4, are shown in Table 1 and Table 2, respectively. Without loss of generality, the relationships between the elements of the intrinsic matrices are calculated by [27–29], so the covariance matrix of the intrinsic elements is expressed in Eq. (19).

$${\mathop{\rm cov}} (I )= \left[ {\begin{array}{{cccccc}} {{u^2}({f_l})}&{}&{}&{}&{}&{}\\ {}&{{u^2}({u_{l0}})}&{}&{}&{{\boldsymbol covariance}}&{}\\ {}&{}&{{u^2}({v_{l0}})}&{}&{}&{}\\ {}&{}&{}&{{u^2}({f_r})}&{}&{}\\ {}&{{\boldsymbol {covariance}}}&{}&{}&{{u^2}({u_{r0}})}&{}\\ {}&{}&{}&{}&{}&{{u^2}({v_{r0}})} \end{array}} \right]$$


Table 1. Calibration results for the intrinsic and extrinsic parameters.


Table 2. Intrinsic parameters uncertainties for two cameras.

Based on the centering measurement strategy and the intersection measurement theory, there are strong coupling relationships between the attitude angles provided by the turntables, so the elements of the covariance matrix of the attitude angles must be considered carefully. First, a plane with a fixed pitch angle is chosen to analyze the correlation between the yaw angles of the two cameras. For the intersection angle, the optical centers of the two cameras and the point to be measured are assumed to lie on a circle with a certain radius. As a result, the segment between the two cameras is a chord of this circle, and the inscribed angle (denoted as the intersection angle) is constant along the major arc. The relationship between the intersection angle and the yaw angles falls into three cases, shown in Figs. 12(a)–12(c).

Fig. 12. Three cases of the intersection situations.

For the three cases of intersection with a fixed pitch angle, the scalar relationships between the intersection angle and the yaw angles of the two cameras are given in Eq. (20), where the signs of the angles are defined by the coordinate system in Section 3.1.

$$\left\{ {\begin{array}{{cccc}} {case.(a)}&{{\theta_l} = {\theta_r} - \theta }&{{\theta_l} < 0}&{{\theta_r} < 0}\\ {case.(b)}&{{\theta_l} = \theta - {\theta_r}}&{{\theta_l} > 0}&{{\theta_r} < 0}\\ {case.(c)}&{{\theta_l} = {\theta_r} + \theta }&{{\theta_l} > 0}&{{\theta_r} > 0} \end{array}} \right.$$

If the yaw angles are adjusted to the same sign, the relationship between θl and θr is a complete positive correlation, so the correlation coefficient between the yaw angles of the two cameras is r = 1. Because the image of the moving object is always at the center of the camera image planes, the angle between the plane O-XwZw (defined in Fig. 8) and the plane formed by the measured object and the two optical centers is always the same. The pitch angles of the two cameras are also equal at any time, so the correlation coefficient between the pitch angles is likewise r = 1. Furthermore, the correlation coefficients between the pitch angles and the yaw angles are 0 because of the independence of the horizontal and vertical turntables. Accordingly, the covariance matrix of the variable vector [θl, θr, αl, αr] is shown in Eq. (21), where u(θl), u(θr), u(αl), and u(αr) are the input uncertainties of the yaw and pitch angles, determined by the hardware structure of the turntables.

$${\mathop{\rm cov}} (\theta \alpha ) = \left[ {\begin{array}{{cccc}} {{u^2}({\theta_l})}&{ru({\theta_l})u({\theta_r})}&0&0\\ {ru({\theta_r})u({\theta_l})}&{{u^2}({\theta_r})}&0&0\\ 0&0&{{u^2}({\alpha_l})}&{ru({\alpha_l})u({\alpha_r})}\\ 0&0&{ru({\alpha_r})u({\alpha_l})}&{{u^2}({\alpha_r})} \end{array}} \right]$$

As all the one-dimensional turntables are identical, the input uncertainties of the turntable parameters are equal: u(θl) = u(θr) = u(θ) and u(αl) = u(αr) = u(α). In light of the above, the final covariance matrix of the angle vector [θl, θr, αl, αr] is shown in Eq. (22).

$${\mathop{\rm cov}} (\theta \alpha ) = \left[ {\begin{array}{{cccc}} {{u^2}(\theta )}&{{u^2}(\theta )}&0&0\\ {{u^2}(\theta )}&{{u^2}(\theta )}&0&0\\ 0&0&{{u^2}(\alpha )}&{{u^2}(\alpha )}\\ 0&0&{{u^2}(\alpha )}&{{u^2}(\alpha )} \end{array}} \right]$$
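Under Eq. (22), the fully correlated blocks change how the angle uncertainties combine in the GUM law of propagation u_c² = JΣJᵀ; the following minimal numpy sketch (function names ours) makes this explicit:

```python
import numpy as np

def cov_theta_alpha(u_theta, u_alpha):
    """Covariance of [theta_l, theta_r, alpha_l, alpha_r] per Eq. (22), with r = 1."""
    C = np.zeros((4, 4))
    C[:2, :2] = u_theta ** 2   # fully correlated yaw block
    C[2:, 2:] = u_alpha ** 2   # fully correlated pitch block
    return C

def combined_variance(J, C):
    """GUM law of propagation of uncertainty: u_c^2 = J C J^T."""
    J = np.atleast_2d(np.asarray(J, float))
    return (J @ C @ J.T).item()
```

With r = 1 the sensitivities add coherently: for J = [c1, c2, 0, 0] the variance is (c1 + c2)² u(θ)² rather than (c1² + c2²) u(θ)², which is exactly why the coupling cannot simply be set to 0.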

For the initial translation vector [tx, ty, tz], we must consider how the vector is obtained with the PnP method. According to the PnP method, the equation for the translation vector is shown in Eq. (23).

$$\left[ {\begin{array}{{cccccc}} {P_1^T}&0&{ - {u_1}P_1^T}&1&0&{ - {u_1}}\\ 0&{P_1^T}&{ - {v_1}P_1^T}&0&1&{ - {v_1}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {P_n^T}&0&{ - {u_n}P_n^T}&1&0&{ - {u_n}}\\ 0&{P_n^T}&{ - {v_n}P_n^T}&0&1&{ - {v_n}} \end{array}} \right]\left[ {\begin{array}{{c}} {{{\boldsymbol r}_{\boldsymbol 1}}}\\ {{{\boldsymbol r}_2}}\\ {{{\boldsymbol r}_3}}\\ {{t_x}}\\ {{t_y}}\\ {{t_z}} \end{array}} \right] = 0$$

In Eq. (23), [P1T, …, PnT]T, [(u1, v1)T, …, (un, vn)T]T, and [r1, r2, r3, tx, ty, tz]T denote the control points in Fig. 7, the pixel coordinates of the control points, and the camera pose vector to be solved, respectively; the resulting solution mapping is shown in Eq. (24).

$$\left\{ {\begin{array}{{@{}c@{}}} {{t_x} = {T_x}(\left[ {\begin{array}{{@{}ccc@{}}} {{P_1}}& \cdots &{{P_n}} \end{array}} \right],\left[ {\begin{array}{{@{}ccc@{}}} {\left( {\begin{array}{{@{}cc@{}}} {{u_1}{P_1}}&{{v_1}{P_1}} \end{array}} \right)}& \cdots &{\left( {\begin{array}{{@{}cc@{}}} {{u_n}{P_n}}&{{v_n}{P_n}} \end{array}} \right)} \end{array}} \right],\left[ {\begin{array}{{@{}ccc@{}}} {\left( {\begin{array}{{@{}cc@{}}} {{u_1}}&{{v_1}} \end{array}} \right)}& \cdots &{\left( {\begin{array}{{@{}cc@{}}} {{u_n}}&{{v_n}} \end{array}} \right)} \end{array}} \right])}\\ {{t_y} = {T_y}(\left[ {\begin{array}{{@{}ccc@{}}} {{P_1}}& \cdots &{{P_n}} \end{array}} \right],\left[ {\begin{array}{{@{}ccc@{}}} {\left( {\begin{array}{{@{}cc@{}}} {{u_1}{P_1}}&{{v_1}{P_1}} \end{array}} \right)}& \cdots &{\left( {\begin{array}{{@{}cc@{}}} {{u_n}{P_n}}&{{v_n}{P_n}} \end{array}} \right)} \end{array}} \right],\left[ {\begin{array}{{@{}ccc@{}}} {\left( {\begin{array}{{@{}cc@{}}} {{u_1}}&{{v_1}} \end{array}} \right)}& \cdots &{\left( {\begin{array}{{@{}cc@{}}} {{u_n}}&{{v_n}} \end{array}} \right)} \end{array}} \right])}\\ {{t_z} = {T_z}(\left[ {\begin{array}{{@{}ccc@{}}} {{P_1}}& \cdots &{{P_n}} \end{array}} \right],\left[ {\begin{array}{{@{}ccc@{}}} {\left( {\begin{array}{{@{}cc@{}}} {{u_1}{P_1}}&{{v_1}{P_1}} \end{array}} \right)}& \cdots &{\left( {\begin{array}{{@{}cc@{}}} {{u_n}{P_n}}&{{v_n}{P_n}} \end{array}} \right)} \end{array}} \right],\left[ {\begin{array}{{@{}ccc@{}}} {\left( {\begin{array}{{@{}cc@{}}} {{u_1}}&{{v_1}} \end{array}} \right)}& \cdots &{\left( {\begin{array}{{@{}cc@{}}} {{u_n}}&{{v_n}} \end{array}} \right)} \end{array}} \right])} \end{array}} \right.$$
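Equation (23) is a homogeneous linear system whose null space (up to scale) yields the pose. The following is a minimal SVD sketch, assuming normalized image coordinates and at least six non-coplanar control points; the function name is ours, not the solver used in the paper:

```python
import numpy as np

def solve_pose_dlt(points, uv):
    """Build and solve the homogeneous system of Eq. (23).
    points: n x 3 control points; uv: n x 2 normalized image coordinates."""
    pts = np.asarray(points, float)
    rows = []
    for P, (u, v) in zip(pts, np.asarray(uv, float)):
        rows.append(np.r_[P, np.zeros(3), -u * P, 1.0, 0.0, -u])
        rows.append(np.r_[np.zeros(3), P, -v * P, 0.0, 1.0, -v])
    A = np.array(rows)
    x = np.linalg.svd(A)[2][-1]          # null-space vector, up to scale
    x /= np.linalg.norm(x[6:9])          # third rotation row must have unit norm
    if x[6:9] @ pts[0] + x[11] < 0:      # enforce positive depth for the scene
        x = -x
    return x[:9].reshape(3, 3), x[9:]    # R (rows r1, r2, r3) and [tx, ty, tz]
```

With exact data the smallest singular value is zero and the true pose is recovered; with noisy data the same solve gives the least-squares pose, which is the context of the uncertainty discussion that follows.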

Because Eq. (23) is a set of homogeneous linear equations, the final uncertainties of the elements in the initial translation vector are determined by the maximum of the input uncertainties. Tx(•), Ty(•), and Tz(•) are the mapping operators that transform the variable space into the translation-vector space. The control points are measured by a Leica laser tracker whose uncertainty is 15 µm + 6 ppm of the measured distance. Because the calibration distance ranges from 3 m to 5 m, the maximum uncertainty for the initial translation vector is 15 µm + 6 µm/m × 5 m = 45 µm. Since the input uncertainties in Eq. (24) are mutually uncorrelated, the uncertainties of the initial translation vector are given in Eq. (25) and the corresponding covariance matrix in Eq. (26).

$$u(T) = {\left[ {\begin{array}{{ccc}} {u({t_x})}&{u({t_y})}&{u({t_z})} \end{array}} \right]^T} = {\left[ {\begin{array}{{ccc}} {0.045}&{0.045}&{0.045} \end{array}} \right]^T}\;(\textrm{mm})$$
$${\mathop{\rm cov}} (T) = \left[ {\begin{array}{{ccc}} {{u^2}({t_x})}&{}&{\boldsymbol 0}\\ {}&{{u^2}({t_y})}&{}\\ {\boldsymbol 0}&{}&{{u^2}({t_z})} \end{array}} \right]$$
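The 45 µm figure can be checked directly from the tracker specification; a small sketch follows, taking the 5 m worst case from the upper end of the stated calibration range:

```python
import numpy as np

def tracker_uncertainty_mm(distance_m):
    """Leica laser tracker point uncertainty: 15 um fixed part plus
    6 ppm of the measured distance (i.e. 6 um per metre), in mm."""
    return (15.0 + 6.0 * distance_m) / 1000.0

u_t = tracker_uncertainty_mm(5.0)     # worst case over the 3-5 m range: 0.045 mm
cov_T = np.diag([u_t**2] * 3)         # uncorrelated inputs -> diagonal cov(T), Eq. (26)
```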

The attitude angles are correlated with the elements of the translation vector; the detailed relationship in the WCS (world coordinate system) is given in Eq. (9). According to Eq. (12) and Eq. (13), the translation elements are directly proportional to -RclvT and -RcrvT, respectively. Finally, the covariance matrix of the extrinsic parameters, which comprise the attitude angles and the translation vector, is expressed in Eq. (27).

$${\mathop{\rm cov}} (E) = \left[ {\begin{array}{{@{}ccccccc@{}}} {{u^2}(\theta )}&{{u^2}(\theta )}&0&0&{ - u(\theta )u({t_x})}&{ - u(\theta )u({t_y})}&{ - u(\theta )u({t_z})}\\ {{u^2}(\theta )}&{{u^2}(\theta )}&0&0&{ - u(\theta )u({t_x})}&{ - u(\theta )u({t_y})}&{ - u(\theta )u({t_z})}\\ 0&0&{{u^2}(\alpha )}&{{u^2}(\alpha )}&{ - u(\alpha )u({t_x})}&{ - u(\alpha )u({t_y})}&{ - u(\alpha )u({t_z})}\\ 0&0&{{u^2}(\alpha )}&{{u^2}(\alpha )}&{ - u(\alpha )u({t_x})}&{ - u(\alpha )u({t_y})}&{ - u(\alpha )u({t_z})}\\ { - u(\theta )u({t_x})}&{ - u(\theta )u({t_x})}&{ - u(\alpha )u({t_x})}&{ - u(\alpha )u({t_x})}&{{u^2}({t_x})}&0&0\\ { - u(\theta )u({t_y})}&{ - u(\theta )u({t_y})}&{ - u(\alpha )u({t_y})}&{ - u(\alpha )u({t_y})}&0&{{u^2}({t_y})}&0\\ { - u(\theta )u({t_z})}&{ - u(\theta )u({t_z})}&{ - u(\alpha )u({t_z})}&{ - u(\alpha )u({t_z})}&0&0&{{u^2}({t_z})} \end{array}} \right]$$
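The structure of Eq. (27) can be assembled programmatically. A sketch with hypothetical input values follows: the fully correlated angle blocks and the negative angle-translation cross terms are written as direct block assignments.

```python
import numpy as np

def cov_E(u_theta, u_alpha, u_t):
    """Covariance of the extrinsic parameters ordered as
    [theta_l, theta_r, alpha_l, alpha_r, tx, ty, tz],
    following the structure of Eq. (27).
    u_t is the 3-vector [u(tx), u(ty), u(tz)]."""
    u_t = np.asarray(u_t, dtype=float)
    C = np.zeros((7, 7))
    C[0:2, 0:2] = u_theta**2           # fully correlated yaw angles
    C[2:4, 2:4] = u_alpha**2           # fully correlated pitch angles
    C[4:7, 4:7] = np.diag(u_t**2)      # mutually uncorrelated translations
    C[0:2, 4:7] = -u_theta * u_t       # negative angle-translation coupling
    C[2:4, 4:7] = -u_alpha * u_t
    C[4:7, 0:2] = C[0:2, 4:7].T        # enforce symmetry
    C[4:7, 2:4] = C[2:4, 4:7].T
    return C
```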

According to the turntable manual, the uncertainty of the turntables is 0.006°, which conforms to a uniform distribution with coverage factor k = √3. Table 3 lists the input uncertainties of the extrinsic variables.
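The Type B evaluation above, a rectangular limit divided by √3, can be sketched as follows (here 0.006° is read as the half-width of the rectangular interval, per the manual):

```python
import math

def type_b_uniform(half_width):
    """Type B standard uncertainty for a rectangular (uniform)
    distribution: the quoted limit divided by sqrt(3)."""
    return half_width / math.sqrt(3)

u_angle_deg = type_b_uniform(0.006)        # turntable spec from the manual
u_angle_rad = math.radians(u_angle_deg)    # radians for the Jacobian propagation
```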


Table 3. Extrinsic parameter uncertainties.

Combining the covariances of the camera intrinsic parameters calculated by [27–29] with those of the attitude angles and the translation vector, the complete covariance matrix cov(para) for the uncertainty of the dynamic stereo vision system is given in Eq. (28), where cov(I) and cov(E) are described in Eq. (19) and Eq. (27).

$${\mathop{\rm cov}} (para) = \left[ {\begin{array}{{cc}} {{\mathop{\rm cov}} (I)}&{\boldsymbol 0}\\ {\boldsymbol 0}&{{\mathop{\rm cov}} (E)} \end{array}} \right]$$
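Since the intrinsic and extrinsic parameter groups are treated as mutually uncorrelated, cov(para) is block-diagonal and can be assembled as follows (placeholder matrices stand in for the actual cov(I) and cov(E)):

```python
import numpy as np

def assemble_cov_para(cov_I, cov_E):
    """Block-diagonal covariance of Eq. (28): zero cross-covariance
    between the intrinsic and extrinsic parameter groups."""
    nI, nE = cov_I.shape[0], cov_E.shape[0]
    C = np.zeros((nI + nE, nI + nE))
    C[:nI, :nI] = cov_I      # intrinsic block, Eq. (19)
    C[nI:, nI:] = cov_E      # extrinsic block, Eq. (27)
    return C

# Placeholder values for illustration only: 6 intrinsic + 7 extrinsic parameters
cov_para = assemble_cov_para(np.eye(6) * 1e-6, np.eye(7) * 1e-8)   # 13 x 13
```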

The Jacobian matrix J of all the variables is given in Eq. (29). The combined standard expressions for the uncertainties in each axis are given in Eq. (30), where “$\widetilde {diag}$” denotes the operation of extracting the diagonal elements of a matrix.

$$J = {\left[ {\begin{array}{{ccccccccccccc}} {\frac{{\partial {F_x}}}{{\partial {f_l}}}}&{\frac{{\partial {F_x}}}{{\partial {u_{l0}}}}}&{\frac{{\partial {F_x}}}{{\partial {v_{l0}}}}}&{\frac{{\partial {F_x}}}{{\partial {f_r}}}}&{\frac{{\partial {F_x}}}{{\partial {u_{r0}}}}}&{\frac{{\partial {F_x}}}{{\partial {v_{r0}}}}}&{\frac{{\partial {F_x}}}{{\partial {\theta_l}}}}&{\frac{{\partial {F_x}}}{{\partial {\theta_r}}}}&{\frac{{\partial {F_x}}}{{\partial {\alpha_l}}}}&{\frac{{\partial {F_x}}}{{\partial {\alpha_r}}}}&{\frac{{\partial {F_x}}}{{\partial {t_x}}}}&{\frac{{\partial {F_x}}}{{\partial {t_y}}}}&{\frac{{\partial {F_x}}}{{\partial {t_z}}}}\\ {\frac{{\partial {F_y}}}{{\partial {f_l}}}}&{\frac{{\partial {F_y}}}{{\partial {u_{l0}}}}}&{\frac{{\partial {F_y}}}{{\partial {v_{l0}}}}}&{\frac{{\partial {F_y}}}{{\partial {f_r}}}}&{\frac{{\partial {F_y}}}{{\partial {u_{r0}}}}}&{\frac{{\partial {F_y}}}{{\partial {v_{r0}}}}}&{\frac{{\partial {F_y}}}{{\partial {\theta_l}}}}&{\frac{{\partial {F_y}}}{{\partial {\theta_r}}}}&{\frac{{\partial {F_y}}}{{\partial {\alpha_l}}}}&{\frac{{\partial {F_y}}}{{\partial {\alpha_r}}}}&{\frac{{\partial {F_y}}}{{\partial {t_x}}}}&{\frac{{\partial {F_y}}}{{\partial {t_y}}}}&{\frac{{\partial {F_y}}}{{\partial {t_z}}}}\\ {\frac{{\partial {F_z}}}{{\partial {f_l}}}}&{\frac{{\partial {F_z}}}{{\partial {u_{l0}}}}}&{\frac{{\partial {F_z}}}{{\partial {v_{l0}}}}}&{\frac{{\partial {F_z}}}{{\partial {f_r}}}}&{\frac{{\partial {F_z}}}{{\partial {u_{r0}}}}}&{\frac{{\partial {F_z}}}{{\partial {v_{r0}}}}}&{\frac{{\partial {F_z}}}{{\partial {\theta_l}}}}&{\frac{{\partial {F_z}}}{{\partial {\theta_r}}}}&{\frac{{\partial {F_z}}}{{\partial {\alpha_l}}}}&{\frac{{\partial {F_z}}}{{\partial {\alpha_r}}}}&{\frac{{\partial {F_z}}}{{\partial {t_x}}}}&{\frac{{\partial {F_z}}}{{\partial {t_y}}}}&{\frac{{\partial {F_z}}}{{\partial {t_z}}}} \end{array}} \right]^T}$$
$$\textrm{diag}\left[ {\begin{array}{{ccc}} {{u^2}(x)}&{{u^2}(y)}&{{u^2}(z)} \end{array}} \right]\textrm{ = }\widetilde {diag}[{{J^T}{\mathop{\rm cov}} (para)J} ]$$
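Equation (30) is the standard GUM law of propagation. A generic numerical sketch follows, using a central-difference Jacobian in the same transposed (parameters × outputs) layout as Eq. (29); the placeholder model F stands in for the triangulation functions Fx, Fy, Fz:

```python
import numpy as np

def numerical_jacobian(F, p, eps=1e-6):
    """Central-difference Jacobian dF/dp with shape
    (len(p), len(F(p))), matching the transposed layout of Eq. (29)."""
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(F(p))
    J = np.zeros((p.size, f0.size))
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        J[i] = (np.asarray(F(p + dp)) - np.asarray(F(p - dp))) / (2 * eps)
    return J

def propagate(F, p, cov_para):
    """GUM law of propagation, Eq. (30): the per-axis variances are
    the diagonal of J^T cov(para) J."""
    J = numerical_jacobian(F, p)
    return np.diag(J.T @ cov_para @ J)
```

For the real system, F would map the 13 calibration parameters to the triangulated point (X, Y, Z).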
Representing the measurement uncertainty continuously over the whole space with a three-dimensional diagram requires two independent variables and one dependent variable. From the above analysis, there are three independent variables: θl, θr, and α. A dimension-reduction operation is therefore needed to express the uncertainty.

As Eq. (22) shows, the rank of cov(θα) is 2 after adopting the centering method and introducing the evaluation method with circle constraints. The eigenvalue decomposition of cov(θα) is shown in Eq. (31).

$$eigen({\mathop{\rm cov}} (\theta \alpha )) = \left\{ {\begin{array}{{c}} {\begin{array}{{cc}} {{\lambda_1} = {u^2}(\theta )}&{{v_1} = \left[ {\begin{array}{{c}} {\begin{array}{{c}} {{1 / {\sqrt 2 }}}\\ {{1 / {\sqrt 2 }}} \end{array}}\\ {\begin{array}{{c}} 0\\ 0 \end{array}} \end{array}} \right]} \end{array}}\\ {\begin{array}{{cc}} {{\lambda_2} = {u^2}(\alpha )}&{{v_2} = \left[ {\begin{array}{{c}} {\begin{array}{{c}} 0\\ 0 \end{array}}\\ {\begin{array}{{c}} {{1 / {\sqrt 2 }}}\\ {{1 / {\sqrt 2 }}} \end{array}} \end{array}} \right]} \end{array}} \end{array}} \right.$$
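The rank-2 structure can be verified numerically. This check uses hypothetical values for u²(θ) and u²(α); the computed eigen-directions match those of Eq. (31) up to sign, while the eigenvalue scale depends on the normalization convention.

```python
import numpy as np

u_th2, u_al2 = 1.0, 4.0                     # hypothetical u^2(theta), u^2(alpha)
cov_ta = np.array([[u_th2, u_th2, 0, 0],
                   [u_th2, u_th2, 0, 0],
                   [0, 0, u_al2, u_al2],
                   [0, 0, u_al2, u_al2]], dtype=float)

assert np.linalg.matrix_rank(cov_ta) == 2   # only two independent modes survive

w, V = np.linalg.eigh(cov_ta)               # ascending eigenvalues, two are zero
modes = V[:, w > 1e-9]                      # eigenvectors of the nonzero modes
```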

The combined standard uncertainty components are shown in Figs. 13(a)–13(c). The uncertainty components along the three axes exhibit different variation tendencies over the different measurement space scales. For the uncertainty along the X axis, outside the region where α ∈ [−50°, 50°] and θ ∈ [150°, 170°], the uncertainty decreases as the intersection angle θ increases, while the pitch angle α does not affect the variation tendency. Inside that region, there are two maxima around α = 0° and θ = 170°, because α = 0° is a singular point in the algebraic expression of u(x). The maximum uncertainty along the X axis is about 0.3 mm. Figure 13(b) shows how the uncertainty along the Y axis varies with the intersection angle and the pitch angle; it is essentially unchanged outside the region mentioned above. Around the singular point α = 0°, a maximum of about 0.04 mm appears. As shown in Fig. 13(c), the uncertainty along the Z axis reaches its maximum, about 3.5 mm, where the intersection angle θ is around 10° and the pitch angle α is near its boundary values. In the remaining region, the uncertainty decreases as the intersection angle increases and the pitch angle converges to 0°.

Fig. 13. Uncertainties along the X, Y, and Z axes.

The final combined standard uncertainty over the measurement space is expressed in Eq. (32).

$$u = \left\|{\left[ {\begin{array}{{c}} {u(x)}\\ {u(y)}\\ {u(z)} \end{array}} \right]} \right\|$$
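Eq. (32) simply combines the per-axis components as a Euclidean norm:

```python
import numpy as np

def combined_uncertainty(u_xyz):
    """Eq. (32): the combined standard uncertainty is the Euclidean
    norm of the per-axis standard uncertainties."""
    return float(np.linalg.norm(u_xyz))
```

For example, per-axis values of 0.3 mm, 0.04 mm, and 3.5 mm combine to about 3.51 mm, confirming that the depth component dominates in the large-scale space.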

On account of the linearly independent eigenvectors in Eq. (31), the intersection angle θ, which is defined by the two yaw angles as illustrated in Fig. 12, and the pitch angle α are the decisive variables for the final uncertainty of the proposed system. The combined standard uncertainty is therefore expressed as a function of the intersection angle θ and the pitch angle α, as shown in Fig. 14, where θ varies over the open interval (10°, 170°) and α over the open interval (−80°, 80°). The results show that the minimum uncertainty is obtained when the intersection angle is around 100° and the pitch angles are around 0°. The maximum uncertainty appears when the intersection angle approaches the lower bound of its interval and the pitch angles are at the boundary values of theirs. Moreover, when the pitch angles are near the boundary values, the measurement uncertainty varies mainly with the intersection angle, reaching its maxima at the smallest intersection angles. In the close-range measurement space, where the intersection angle lies in the open interval (140°, 170°) and the pitch angles are around 0°, another two maxima appear because of the singular values of the trigonometric functions in the attitude matrix mentioned above. Over these ranges of the intersection angle and the pitch angles, the combined standard uncertainty analysis of the proposed system over the measurement space is complete.

Fig. 14. The combined standard uncertainty as a function of the intersection angle and pitch angle.

Since all the components of uncertainty are normally distributed, the coverage factor of the expanded uncertainty is k = 3 (P = 99.73%). The map of the final expanded uncertainty is illustrated in Fig. 15. Because of the linear relationship between the combined standard uncertainty and the expanded uncertainty, the variation tendency in Fig. 15 is the same as that in Fig. 14, and the maximum and minimum values of the expanded uncertainty are three times those of the combined standard uncertainty. When the intersection angle is around 90° and the pitch angles are around 0°, the minimum expanded uncertainty of 0.062 mm appears, while the maximum expanded uncertainty of 9.55 mm appears with the intersection angle around 10° and the pitch angles around ±80°.
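Since the expanded uncertainty is a linear scaling of the combined value, the k = 3 map is obtained elementwise; a sketch with hypothetical grid values (not the paper's data):

```python
import numpy as np

K = 3.0                                    # coverage factor for P = 99.73%
u_comb = np.array([[0.0207, 0.5],
                   [1.2, 3.18]])           # hypothetical combined values (mm)
U_exp = K * u_comb                         # expanded-uncertainty map, same shape
i_min = np.unravel_index(np.argmin(U_exp), U_exp.shape)   # location of the minimum
```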

Fig. 15. The map of the final expanded uncertainty with coverage factor k = 3 (P = 99.73%).

5. Summary and conclusions

This paper first identifies the problems of the error model in the accuracy evaluation process: the evaluation results are discrete, the evaluation mode is sparse, and the evaluation criteria are complex. To obtain detailed, complete, and continuous evaluation results over the whole measurement space, an analytical solution of uncertainty with the GUM method for the dynamic stereo vision measurement system is proposed. To address the difficulties of applying the GUM method to the measurement system, such as the long measurement chain, the strong correlation among the input variables, and the trouble of adjusting the shafting orthogonality, a novel dynamic stereo vision measurement model is applied to the system.

Then, a measurement system based on the novel dynamic stereo vision measurement model, composed of one-dimensional turntables and cameras, is set up. By using quaternion theory in the kinematic model, no precise calibration between the cameras and the turntables is needed, which reduces the amount of calculation and shortens the chain in the measurement process.

In the uncertainty analysis process, all the variables in the intrinsic and extrinsic matrices, and the covariances among the variables in the extrinsic matrix, are clarified by introducing a virtual circle to decouple the strong correlation among the variables. Furthermore, by analyzing the factors that may influence the uncertainty components in each axis and the final uncertainty, the intersection angle θ and the pitch angles α are identified as the major factors for the uncertainties of the proposed measurement system. In the large-scale measurement space, the uncertainty along the Z axis (the depth direction) dominates the final uncertainty, whereas in the close-range measurement space the uncertainty along the X axis is more important.

Finally, the measurement system based on the proposed model is easy to set up without requiring strict orthogonality among the three axes. Meanwhile, with fully rotatable turntables, the measurement system expands the FOV and accomplishes full-scale space measurement. The introduction of a virtual circle refines the analytical model of uncertainty with the GUM method, which is vital for capturing the major factors influencing the uncertainties. The proposed method provides new ideas for the intersection measurement model, is appropriate for heterogeneous-network coordinate measurement represented by different intersection observables, and can predict the uncertainty in the full-scale space.

Funding

National Natural Science Foundation of China (51835007, 51775380, 52075382, 51721003).

Disclosures

The authors declare that they have no conflicts of interest related to this work.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Meid, “Verifying the accuracy of laser tracker measurements in a bundle adjustment,” Coordinate Measurement Systems Committee Conference, 1998.

2. B. Hughes, A. Forbes, A. Lewis, W. Sun, and K. Nasr, “Laser tracker error determination using a network measurement,” Meas. Sci. Technol. 22(4), 045103 (2011). [CrossRef]  

3. Association of German Engineers, VDI/VDE Guideline 2634 Part 1, Optical 3-D Measuring Systems and Imaging Systems with Point-by-Point Probing. 2002.

4. J. E. Muelaner, Z. Wang, J. Jamshidi, P. G. Maropoulos, A. R. Mileham, E. B. Hughes, and A. B. Forbes, “Study of the uncertainty of angle measurement for a rotary-laser automatic theodolite (r-lat),” Proc. Inst. Mech. Eng., Part B 223(3), 217–229 (2009). [CrossRef]  

5. W. Li, L. Xie, Z. P. Yin, and H. Ding, “The Research of Geometric Error Modeling of Robotic Machining: I Spatial Motion Chain and Error Transmission,” J. Mech. Eng. 57(5), 1–15 (2021).

6. J. Guillory, D. Truong, and J. P Wallerand, “Uncertainty assessment of a prototype of multilateration coordinate measurement system,” Precision Eng. 66, 496–506 (2020). [CrossRef]  

7. J. E. Muelaner, Z. Wang, P. S. Keogh, J. Brownell, and D Fisher, “Uncertainty of measurement for large product verification: evaluation of large aero gas turbine engine datums,” Meas. Sci. Technol. 27, 115003 (2016). [CrossRef]  

8. B. Chai, F. Liu, Z. Huang, K. Tan, and Z. Wei, “An outdoor accuracy evaluation method of aircraft flight attitude dynamic vision measure system,” International Symposium on Optoelectronic Technology and Application, Beijing, China.

9. H. Deng, F. Wang, J. Zhang, G. Hu, M. Ma, and X Zhong, “Vision measurement error analysis for nonlinear light refraction at high temperature,” Appl. Opt. 57, 5556–5565 (2018). [CrossRef]  

10. S. Yang, Y. Gao, Z. Liu, and G. Zhang, “A calibration method for binocular stereo vision sensor with short-baseline based on 3d flexible control field,” Optics and Lasers in Engineering 124, 105817 (2020). [CrossRef]  

11. M. Marrón-Romera, J. C. García, M. A. Sotelo, D. Pizarro, M. Mazo, J. M. Canas, C. Losada, and Á Marcos, “Stereo vision tracking of multiple objects in complex indoor environments,” Sensors 10(10), 8865–8887 (2010). [CrossRef]  

12. A. B. Krishnan and J. Kollipara, “Intelligent Indoor Mobile Robot Navigation Using Stereo Vision,” Signal Image Proc. 5, 5405 (2014). [CrossRef]

13. P. Perek, D. Makowski, A. Mielczarek, A. Napieralski, and P. Sztoch, Towards automatic calibration of stereoscopic video systems. 2015 MIXDES - 22nd International Conference, Mixed Design of Integrated Circuits & Systems, IEEE.

14. M. J. Abbaspour, M. Yazdi, and M. A. M. Shirazi, “Robust approach for people detection and tracking by stereo vision,” International Symposium on Telecommunications, IEEE, 2015.

15. D. Samper and J. Santolaria, “A stereo-vision system to automate the manufacture of a semitrailer chassis,” Int J Adv Manuf Technol 67(9-12), 2283–2292 (2013). [CrossRef]  

16. Y. Nakabo, T. Mukai, Y. Hattori, Y. Takeuchi, and N. Ohnishi, Variable Baseline Stereo Tracking Vision System Using High-Speed Linear Slider, IEEE International Conference on Robotics & Automation. IEEE (2006).

17. R. A. Setyawan, R. Soenoko, P. Mudjirahardjo, and M. A. Choiron, “Measurement Accuracy Analysis of Distance Between Cameras in Stereo Vision,” 2018 Electrical Power, Electronics, Communications, Controls and Informatics Seminar (EECCIS), Batu, East Java, Indonesia, 2018, pp. 169–172.

18. M. Kytö, M. Nuutinen, and P. Oittinen, “Method for measuring stereo camera depth accuracy based on stereoscopic vision,” Proc. SPIE 7864, 786401 (2011). [CrossRef]

19. Y. Xu, Y. Zhao, F. Wu, and K. Yang, “Error analysis of calibration parameters estimation for binocular stereo vision system,” 2013 IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, 2013, pp. 317–320.

20. K. Grabowski, D. Kacperski, W. Sankowski, and M. Włodarczyk, “Estimation of measurement uncertainty in stereo vision system,” Image Vision Comp. 61, 70–81 (2017). [CrossRef]

21. G. Di Leo and A. Paolillo, “Uncertainty evaluation of camera model parameters,” Conference Record - IEEE Instrumentation and Measurement Technology Conference (2011):1–6.

22. W. Bin, D. Wen, Y. Fengting, and X. Ting, “The error analysis of the Non-orthogonal Total Station Coordinate Measurement System,” Acta Metrologica Sinica 38(6), 661–666 (2017). [CrossRef]  

23. ISO/IEC Guide 98-3:2008/Suppl. 1, Uncertainty of measurement, Part 3: Guide to the expression of uncertainty in measurement (GUM:1995), Supplement 1: Propagation of distributions using a Monte Carlo method.

24. JJF 1059.2-2012, Evaluating the Uncertainty of Measurement by Using the Monte Carlo Method, 2012, pp. 44–45.

25. R. Liao, J. Zhu, L. Yang, J. Lin, B. Sun, and J. Yang, “Flexible calibration method for line-scan cameras using a stereo target with hollow stripes,” Optics and Lasers in Engineering 113, 6–13 (2019). [CrossRef]  

26. ISO/IEC Guide 98-3:2008, Uncertainty of measurement, Part 3: Guide to the expression of uncertainty in measurement (GUM:1995).

27. S. Robson, L. MacDonald, S. Kyle, J. Boehm, and M. Shortis, “Optimised multi-camera systems for dimensional control in factory environments,” Proc. Inst. Mech. Eng., Part B 232, 0954405416654936 (2016). [CrossRef]

28. S. Robson, L. Macdonald, S. Kyle, and M. R. Shortis, Close Range Calibration of Long Focal Length Lenses in a Changing Environment, 23rd Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) (2016).

29. M. R. Shortis, S. Robson, T. W. Jones, W. K. Goad, and C. B. Lunsford, “Photogrammetric tracking of aerodynamic surfaces and aerospace models at NASA Langley Research Center,” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016, pp. 27–34.

30. X. Chen, J. Lin, L. Yang, Y. Sun, and J. Zhu, “Flexible calibration method for visual measurement using an improved target with vanishing constraints,” J. Opt. Soc. Am. A 37(3), 435–443 (2020). [CrossRef]  



