
Measurement model of the omnidirectional structured-light vision and its calibration method

Open Access

Abstract

Omnidirectional structured-light vision measurement is important for inner-surface inspection. In existing systems, the camera and the projector are installed inside a glass tube, which inevitably causes refraction distortion. In this paper, we propose a measurement model for omnidirectional structured-light vision and a corresponding calibration method. The model corrects the refraction distortion and realizes omnidirectional measurement. An aluminum tube with an internal diameter of 288.50 mm was measured with the system; the repeatability and precision reach 0.05 mm and 0.23 mm, respectively. The experimental results show that the accuracy is improved by a factor of 7.9 compared with a model that ignores the distortion.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Omnidirectional structured-light vision is an important noncontact measurement method. It shares the advantages of conventional structured-light vision, such as a wide range and good system flexibility, and it is well suited to inspecting the inner surfaces of pipelines.

Although multiple line structured-light vision sensors can realize omnidirectional measurement [1], the many components of such a system lead to complex assembly and require global calibration. A circle structured-light vision sensor performs better: the projector emits a conical beam with a small cone angle, which is reflected by a conical mirror into a circle of structured light [2,3]. Compared with the multi-sensor approach, the omnidirectional sensor with circle structured light does not require global calibration.

In an omnidirectional structured-light vision measurement system with circle structured light, a transparent glass tube connects the projector to the camera [4]. Because the camera is fitted inside the glass tube, the light must pass through the tube before reaching the image plane, which causes refraction distortion and decreases the measurement accuracy. The light's propagation, which is responsible for this distortion, is illustrated in detail in Fig. 1 in Section 2. To correct the glass tube refraction distortion, Zhang et al. [5] constructed a mapping table between distorted image coordinates and the corresponding undistorted ones. The image coordinates were obtained with the glass tube included in and excluded from the system, respectively, so a mapping table was constructed rather than a geometric distortion model.

Fig. 1. The measurement model of the omnidirectional structured-light vision.

In fact, when the light stripe is projected through the glass tube onto the image plane, the ray is refracted at the two air-glass interfaces. In past research, the refraction distortion was corrected under specific assumptions [6], which made the measurement models less universal. Yoshizawa and Wakayama [7] proposed a measurement principle that requires the entire refraction process to lie on a single plane. However, the two refractions on the inner and outer surfaces of the glass tube are not generally coplanar, and such principles apply only when the camera optical axis coincides with the glass tube axis. This installation requires precise alignment and is impractical in real applications.

Most existing refraction distortion correction models address refractions that occur on a single plane or at flat interfaces. Gong et al. [8] constructed a distortion correction model for refraction at multiple planar glass ports. Similarly, Huang et al. [9] presented a plate refractive camera model. Feng et al. [10] and Huang et al. [11] established multi-camera calibration methods based on flat refractive geometry. Li et al. [12] developed a calibration method for an underwater camera and constructed the relation between underwater points and the corresponding image points. Likewise, Zhang et al. [13] put forward a model of underwater stereo vision in which the cameras were protected by a flat glass container that caused the refraction distortion. Fu and Liu [14] adopted a distortion correction method to reconstruct 3D bubble shapes by tracing rays refracted at planar interfaces. In all of these studies, the refractive surfaces were flat and parallel, and the distortion correction models were built on the plane determined by the incident ray and the emergent ray.

Although Morinaka et al. [15] proposed a 3D reconstruction method that handles distortion caused by arbitrary refractive media, the method only builds a nonlinear mapping table relating image points to 2D points on the calibration board. Consequently, refraction distortion on cylindrical surfaces remains to be analyzed carefully, and its correction is the key to omnidirectional structured-light vision measurement.

To overcome the disadvantages of existing omnidirectional structured-light vision measurement, we propose a measurement model of omnidirectional structured-light vision in this paper. The projector emits circle structured light, and the camera takes images through the glass tube. By tracing the ray propagation, we find that the two refractions at the air-glass interfaces are non-coplanar, so the existing correction models do not apply. Our model therefore handles non-coplanar refraction planes and corrects the corresponding refraction distortion. We also devise a calibration method for the omnidirectional structured-light vision model that accounts for the refraction distortion.

The rest of this paper is organized as follows. A measurement model of the omnidirectional structured-light vision is detailed in Section 2. The corresponding calibration method is presented in Section 3. Afterwards, the experimental results are provided in Section 4 and the conclusions are given in Section 5.

2. Measurement model of the omnidirectional structured-light vision

The omnidirectional structured-light vision measurement model is shown in Fig. 1. The projector emits circle structured light into its surrounding scene, and P denotes a spatial point on the structured light. The camera takes images through the glass tube, whose internal and external radii are denoted by d and D, respectively. When P is projected to its image point P', the ray is refracted at the outer and inner surfaces of the glass tube. We call the ray segments from P to the outer surface, from the outer surface to the inner surface, and from the inner surface to the camera optical center Oc the incident ray, the refracted ray, and the emergent ray, respectively.

The camera coordinate system and the image coordinate system are denoted Ocxcyczc and O'uv, respectively. Owing to the reversibility of the light path, we define ri, rr and re as the vectors pointing opposite to the incident ray, the refracted ray and the emergent ray, respectively. The refraction point Q1 is the intersection of re with the inner surface of the glass tube; likewise, the refraction point Q0 is the intersection of rr with the outer surface. Because the surfaces are cylindrical, the normals at Q1 and Q0 point in the radial directions and are denoted n1 and n0, respectively. n1 and the glass tube axis determine a plane α1; similarly, n0 and the axis determine a plane α0. Since there is an angle ɛ between α0 and α1, n1 and n0 are non-coplanar, and therefore the two refraction planes are also non-coplanar.

According to the camera perspective projection model, re can be determined from the image pixel coordinates of P'. Since the emergent ray passes through the camera optical center Oc, the origin of Ocxcyczc, it can be parameterized as x = t·re. Let x = (x, y, z)T be a point on the inner surface of the glass tube. Then x satisfies

$$\left\| ({\boldsymbol x} - {\boldsymbol M}) \times {\boldsymbol r}_{\textrm{axis}} \right\| = d$$
where M denotes the coordinates of a point M on the glass tube axis and raxis = (r1, r2, r3)T is the unit vector of the glass tube axis. The equation of the inner surface, denoted fi(x, y, z) = 0, is given in Eq. (2)
$${f_{\textrm{i}}}({x, y, z}) = {\tilde{\boldsymbol x}}^{\textrm{T}}\left[ \begin{array}{cc} {\boldsymbol C} & {\boldsymbol S}\\ {\boldsymbol S}^{\textrm{T}} & -d^{2} \end{array} \right]\tilde{\boldsymbol x}$$
where $\tilde{{\boldsymbol x}}$ is the homogeneous vector of x. C and S are presented in Eq. (3).
$$\boldsymbol{C} = \left[ \begin{array}{ccc} r_{2}^{2} + r_{3}^{2} & -r_{1}r_{2} & -r_{1}r_{3}\\ -r_{1}r_{2} & r_{1}^{2} + r_{3}^{2} & -r_{2}r_{3}\\ -r_{1}r_{3} & -r_{2}r_{3} & r_{1}^{2} + r_{2}^{2} \end{array} \right],\quad \boldsymbol{S} = -\boldsymbol{CM}.$$
Further, Q1 can be determined by substituting the equation of the emergent ray into Eq. (2). n1 is the radial direction at Q1 and is formulated in Eq. (4)
$${\boldsymbol n}_{1} = \left[ \begin{array}{cc} {\boldsymbol C} & {\boldsymbol S} \end{array} \right]{\tilde{\boldsymbol Q}}_{1}$$
where $\tilde{{\boldsymbol Q}}$1 denotes the homogeneous coordinates of Q1.
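To make this step concrete, here is a minimal NumPy sketch (not the authors' code; the geometry values are placeholders) that builds C and S, intersects the back-projected emergent ray with the inner surface via Eq. (2), and evaluates the normal of Eq. (4):

```python
import numpy as np

def cylinder_matrices(M, r_axis):
    """Build C and S of Eqs. (2)-(3) from an axis point M and the unit axis vector r_axis."""
    C = np.eye(3) - np.outer(r_axis, r_axis)  # equals Eq. (3) when r_axis is a unit vector
    S = -C @ M                                # S = -CM
    return C, S

def intersect_ray_cylinder(o, r, C, S, radius):
    """Intersect the ray x = o + t*r (t > 0) with the cylinder
    x^T C x + 2 S^T x - radius^2 = 0 and return the nearest intersection."""
    a = r @ C @ r
    b = 2.0 * (r @ C @ o + S @ r)
    c = o @ C @ o + 2.0 * S @ o - radius**2
    roots = np.roots([a, b, c])
    t = min(rt.real for rt in roots if abs(rt.imag) < 1e-12 and rt.real > 0)
    return o + t * r

# Toy configuration (hypothetical numbers, for illustration only).
M = np.array([0.0, 0.0, 0.0])        # point on the glass tube axis
r_axis = np.array([0.0, 0.0, 1.0])   # unit direction of the glass tube axis
d = 40.0                             # internal radius
C, S = cylinder_matrices(M, r_axis)

r_e = np.array([0.3, 0.9, 0.1])      # reversed emergent ray, back-projected from P'
Q1 = intersect_ray_cylinder(np.zeros(3), r_e, C, S, d)
n1 = C @ Q1 + S                      # Eq. (4): the radial normal at Q1
```

Q0 and n0 follow in the same way by intersecting the refracted ray with the outer surface of radius D (Eqs. (5)-(6)).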

Similarly, the equation of the outer surface of the glass tube, denoted fo(x, y, z) = 0, is given in Eq. (5).

$${f_{\textrm{o}}}({x, y, z}) = {\tilde{\boldsymbol x}}^{\textrm{T}}\left[ \begin{array}{cc} {\boldsymbol C} & {\boldsymbol S}\\ {\boldsymbol S}^{\textrm{T}} & -D^{2} \end{array} \right]\tilde{\boldsymbol x}$$
Q0 can be determined by substituting the equation of the refracted ray into Eq. (5). Let us set the homogeneous coordinates of Q0 as $\tilde{{\boldsymbol Q}}$0. Then n0 is presented in Eq. (6).
$${\boldsymbol n}_{0} = \left[ \begin{array}{cc} {\boldsymbol C} & {\boldsymbol S} \end{array} \right]{\tilde{\boldsymbol Q}}_{0}$$
The refraction plane at Q1 is shown in Fig. 2(a). The emergence angle, the refraction angle and the refractive indices of air and glass are denoted by θ3, θ2, nair and nglass, respectively. We define the refraction coordinate system Or1xr1yr1zr1 with the origin Or1 at Q1 and the axes Or1xr1 = n1×re, Or1yr1 = n1 and Or1zr1 = (n1×re)×n1. In Or1xr1yr1zr1, let rr = (0, 1, tanθ2)T up to scale. Hence, in Ocxcyczc, rr can be formulated as in Eq. (7).
$${\boldsymbol r}_{\textrm{r}} = \left[ \begin{array}{ccc} {\boldsymbol n}_{1} \times {\boldsymbol r}_{\textrm{e}} & {\boldsymbol n}_{1} & ({\boldsymbol n}_{1} \times {\boldsymbol r}_{\textrm{e}}) \times {\boldsymbol n}_{1} \end{array} \right]\left[ \begin{array}{ccc} 0 & 1 & \tan \theta_{2} \end{array} \right]^{\textrm{T}}$$
In the Or1yr1zr1 plane, Snell's law gives nglass·sinθ2 = nair·sinθ3. Then rr is given in Eq. (8).
$${\boldsymbol r}_{\textrm{r}} = \left[ \begin{array}{ccc} {\boldsymbol n}_{1} \times {\boldsymbol r}_{\textrm{e}} & {\boldsymbol n}_{1} & ({\boldsymbol n}_{1} \times {\boldsymbol r}_{\textrm{e}}) \times {\boldsymbol n}_{1} \end{array} \right]\left[ \begin{array}{ccc} 0 & 1 & \tan \left( \arcsin \left( \frac{n_{\textrm{air}}}{n_{\textrm{glass}}} \sin \theta_{3} \right) \right) \end{array} \right]^{\textrm{T}}$$

Fig. 2. The two non-coplanar refraction planes (a) at Q1; (b) at Q0.

Thus, the refracted ray passes through Q1 with direction rr and can be parameterized as x = Q1 + t·rr.
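Continuing the sketch above, the change of basis and Snell's law in Eqs. (7)-(8) can be written as a single routine; nair = 1.0 and nglass = 1.5 are placeholder indices, and the routine reflects our reading of the construction:

```python
def refract_at_surface(r_in, n, n_in, n_out):
    """Bend the reversed ray r_in at an interface with normal n (Eqs. (7)-(8)).
    n_in / n_out are the refractive indices on the incoming / outgoing sides."""
    n = n / np.linalg.norm(n)
    x_axis = np.cross(n, r_in)
    x_axis = x_axis / np.linalg.norm(x_axis)              # O_r x_r = n x r_in
    z_axis = np.cross(x_axis, n)                          # (n x r_in) x n, in the refraction plane
    sin_in = np.dot(r_in / np.linalg.norm(r_in), z_axis)  # sine of the angle to the normal
    sin_out = (n_in / n_out) * sin_in                     # Snell's law
    # The bent (reversed) ray is (0, 1, tan(theta_out)) in the refraction frame.
    return np.column_stack([x_axis, n, z_axis]) @ np.array([0.0, 1.0, np.tan(np.arcsin(sin_out))])

n_air, n_glass = 1.0, 1.5
r_r = refract_at_surface(r_e, n1, n_air, n_glass)   # Eq. (8): air -> glass at Q1
```

The same routine gives ri at Q0 with the indices swapped, which is exactly the inversion of the index ratio in Eq. (9).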

The refraction at Q0 is analogous to the one at Q1. As shown in Fig. 2(b), the refraction angle and the incidence angle are denoted by θ1 and θ0, respectively. We build a second refraction coordinate system Or0xr0yr0zr0 with the origin Or0 at Q0, where Or0xr0 = n0×rr, Or0yr0 = n0 and Or0zr0 = (n0×rr)×n0. In the Or0yr0zr0 plane, Snell's law gives nglass·sinθ1 = nair·sinθ0. Setting ri = (0, 1, tanθ0)T in Or0xr0yr0zr0, ri in Ocxcyczc is given in Eq. (9).

$${\boldsymbol r}_{\textrm{i}} = \left[ \begin{array}{ccc} {\boldsymbol n}_{0} \times {\boldsymbol r}_{\textrm{r}} & {\boldsymbol n}_{0} & ({\boldsymbol n}_{0} \times {\boldsymbol r}_{\textrm{r}}) \times {\boldsymbol n}_{0} \end{array} \right]\left[ \begin{array}{ccc} 0 & 1 & \tan \left( \arcsin \left( \frac{n_{\textrm{glass}}}{n_{\textrm{air}}} \sin \theta_{1} \right) \right) \end{array} \right]^{\textrm{T}}$$
Hence, the incident ray passes through Q0 with direction ri and can be parameterized as x = Q0 + t·ri. Moreover, the corrected image pixel coordinates can be obtained from the camera perspective projection model.

Since P is the intersection of the incident ray and the structured light, the mathematical model of the omnidirectional structured-light vision measurement can be formulated as in Eq. (10)

$$\left\{ \begin{array}{l} \boldsymbol{x} = {\boldsymbol Q}_{0} + t\,{\boldsymbol r}_{\textrm{i}}\\ {\boldsymbol{A}_{\alpha}}\tilde{\boldsymbol{x}} = 0 \end{array} \right.$$
where $\tilde{{\boldsymbol x}}$ is the homogeneous vector of x, Aαx̃ = 0 denotes the equation of the structured light (a plane in this work, cf. Eq. (14)) and Aα denotes its coefficient matrix.
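Putting the pieces together, one back-projected ray can be traced from P' to P. The sketch below reuses the helpers above and treats the structured light as the plane Aαx̃ = 0, consistent with the 1×4 coefficient row calibrated in Eq. (14):

```python
def intersect_ray_plane(o, r, A_alpha):
    """Solve Eq. (10): intersect x = o + t*r with the plane A_alpha @ (x, 1) = 0."""
    n, c = A_alpha[:3], A_alpha[3]
    return o + (-(n @ o + c) / (n @ r)) * r

def measure_point(r_e, M, r_axis, d, D, A_alpha, n_air=1.0, n_glass=1.5):
    """Trace one reversed ray through the glass tube (Fig. 1) and return P."""
    C, S = cylinder_matrices(M, r_axis)
    Q1 = intersect_ray_cylinder(np.zeros(3), r_e, C, S, d)     # inner surface, Eq. (2)
    r_r = refract_at_surface(r_e, C @ Q1 + S, n_air, n_glass)  # Eq. (8)
    Q0 = intersect_ray_cylinder(Q1, r_r, C, S, D)              # outer surface, Eq. (5)
    r_i = refract_at_surface(r_r, C @ Q0 + S, n_glass, n_air)  # Eq. (9)
    return intersect_ray_plane(Q0, r_i, A_alpha)               # Eq. (10)
```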

3. Calibration method of the omnidirectional structured-light vision model

In the omnidirectional structured-light vision model, the parameters to be calibrated are M, raxis, d, D and the equation of the structured light. The calibration of the structured-light equation relies on the other four parameters and on the calibration method of Sun et al. [16].

The refraction distortion parameters M, raxis, d and D are calibrated with an auxiliary camera and a calibration board. The vision sensor camera images the calibration board through the glass tube while the auxiliary camera captures the same board directly. With the mathematical model of the omnidirectional structured-light vision, the corrected 3D coordinates of the jth feature point, denoted xrj, can be expressed in terms of M, raxis, d and D; the auxiliary camera provides the corresponding undistorted coordinates xcj. From the corresponding pairs xrj and xcj, we build and minimize the objective function shown in Eq. (11) to determine M, raxis, d and D.

$$\sum\limits_{j = 1}^{n} \left\| \boldsymbol{x}_{\textrm{c}j} - \boldsymbol{x}_{\textrm{r}j}(\boldsymbol{M}, \boldsymbol{r}_{\textrm{axis}}, d, D) \right\|$$
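Eq. (11) is a standard nonlinear least-squares problem in the eight parameters (M, raxis, d, D). A hedged SciPy sketch follows; it reuses measure_point from Section 2 and assumes each xrj is obtained by intersecting the corrected incident ray with the board plane recovered by the auxiliary camera, a step the paper leaves implicit:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, rays_e, board_planes, x_c):
    """Stacked residuals x_cj - x_rj(M, r_axis, d, D) of Eq. (11)."""
    M, axis = params[:3], params[3:6] / np.linalg.norm(params[3:6])
    d, D = params[6], params[7]
    x_r = np.array([measure_point(r, M, axis, d, D, plane)
                    for r, plane in zip(rays_e, board_planes)])
    return (x_c - x_r).ravel()

# Initial guess from the nominal tube geometry (placeholder values), then refine:
# params0 = np.r_[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 40.0, 45.0]
# fit = least_squares(residuals, params0, args=(rays_e, board_planes, x_c))
```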
Afterwards, the equation of the structured light can be calibrated. The structured light is projected onto the calibration board, and the camera takes images of the board. With the mathematical model of the omnidirectional structured-light vision, the corrected image coordinates of a feature point, denoted $\tilde{{\boldsymbol p}}$, are obtained; the corresponding 2D coordinates in the calibration board plane coordinate system, denoted $\tilde{{\boldsymbol P}}$, are known. The relationship between the image plane and the calibration board plane is then given by Eq. (12)
$$s\boldsymbol{\tilde{p}} = \boldsymbol{H\tilde{P}}$$
where s is a scale factor and H = A(r1, r2, t). A denotes the camera intrinsic matrix; r1 and r2 are the first two columns of the rotation matrix R; and R and t denote the rotation matrix and translation vector between the calibration board coordinate system and the camera coordinate system, respectively. According to Eq. (12), H can be determined from a sufficient number of feature points on the calibration board.
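Eq. (12) is the usual planar homography, so H can be estimated with standard tools. A brief OpenCV sketch, with placeholder arrays p_img (corrected image points), P_board (board-plane points) and intrinsics A_intr, is:

```python
import numpy as np
import cv2

# p_img, P_board: Nx2 float arrays of corresponding points.
H, _ = cv2.findHomography(P_board, p_img)

# Recover (r1, r2, t) from H = A (r1, r2, t), up to the sign of the overall scale.
B = np.linalg.inv(A_intr) @ H
lam = 1.0 / np.linalg.norm(B[:, 0])        # scale fixed by the unit norm of r1
r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
R = np.column_stack([r1, r2, np.cross(r1, r2)])
```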

Since the corrected image coordinates of the structured light can be acquired, we can also obtain the corresponding 2D coordinates in the calibration board plane coordinate system. Let the 2D coordinates be (x, y)T; the homogeneous 3D coordinates in the calibration board coordinate system are then $\tilde{{\boldsymbol Q}}$i = (x, y, 0, 1)T. Further, the coordinates of the structured-light point in the camera coordinate system, denoted $\tilde{{\boldsymbol Q}}$c, can be determined by Eq. (13).

$${\boldsymbol{\tilde{Q}}_{\textrm{c}}} = \left( \begin{array}{cc} \boldsymbol{R} & \boldsymbol{t}\\ {\boldsymbol 0} & 1 \end{array} \right){\boldsymbol{\tilde{Q}}_{\textrm{i}}}$$
By fitting the coordinates obtained from Eq. (13), we can get the equation of the structured light.
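Since the calibrated Aα in Eq. (14) is a 1×4 row, this final fit amounts to a least-squares plane through the lifted stripe points (our reading of the fitting step). A sketch:

```python
import numpy as np

def light_plane_from_board(points_2d, R, t):
    """Lift 2D stripe points on the board via Eq. (13) and fit the plane A_alpha."""
    Q_i = np.column_stack([points_2d, np.zeros(len(points_2d))])  # z = 0 on the board
    Q_c = Q_i @ R.T + t                                           # Eq. (13), inhomogeneous form
    centroid = Q_c.mean(axis=0)
    _, _, Vt = np.linalg.svd(Q_c - centroid)
    n = Vt[-1]                               # plane normal: smallest singular vector
    return np.append(n, -n @ centroid)       # A_alpha such that A_alpha @ (x, 1) = 0
```

Points from several board poses can be stacked before the fit.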

4. Experiments and analysis

An omnidirectional structured-light vision measurement system was built in the laboratory, as shown in Fig. 3(a). The system consisted of an industrial DAHENG MER-504-10GM-P camera with a resolution of 2448×2048 pixels, a Schneider Cinegon 1.4/8 mm lens, a cylindrical glass tube, a HB365050X structured-light projector and some brackets. To evaluate the performance of the model, the system was used to measure the internal diameter of an aluminum tube. The omnidirectional structured light was projected onto the inner surface of the aluminum tube and formed a closed light stripe. Using the measurement model, we acquired the 3D coordinates of the light stripe points. To eliminate the influence of the position and orientation of the aluminum tube, an ellipse was fitted by the least-squares method; the minor axis of the ellipse gives the internal diameter of the aluminum tube.
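For reference, a standard direct least-squares conic fit can serve this step; the sketch assumes the 3D stripe points have been projected into their best-fit plane to give 2D coordinates, which is our reading of the procedure rather than the authors' stated implementation:

```python
import numpy as np

def minor_axis_length(pts):
    """Fit a conic a x^2 + b xy + c y^2 + d x + e y + f = 0 to Nx2 points
    and return the full minor-axis length (assumes the fit is an ellipse)."""
    x, y = pts[:, 0], pts[:, 1]
    Dmat = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(Dmat)          # homogeneous linear least squares
    a, b, c, d, e, f = Vt[-1]
    A = np.array([[a, b / 2, d / 2], [b / 2, c, e / 2], [d / 2, e / 2, f]])
    lam = np.linalg.eigvalsh(A[:2, :2])     # principal-axis eigenvalues
    k = -np.linalg.det(A) / np.linalg.det(A[:2, :2])
    return 2.0 * np.sqrt((k / lam).min())   # minor semi-axis, doubled
```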

Fig. 3. Experimental devices. (a) Vision measurement system. (b) Calibration devices.

4.1 Calibration results

The calibration devices are shown in Fig. 3(b). The auxiliary camera was identical to the vision sensor camera. The intrinsic parameters of the two cameras were calibrated with the glass tube removed from the system, using Zhang's method [17] and a planar glass calibration board with a 17×17 chessboard pattern at a 10 mm interval. The calibration results are shown in Table 1.
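Zhang's method [17] is implemented in OpenCV; a brief sketch follows, assuming the 17×17 chessboard yields 16×16 inner corners (a convention-dependent assumption):

```python
import numpy as np
import cv2

pattern = (16, 16)                   # inner corners of the 17x17 chessboard (assumed)
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = 10.0 * np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # 10 mm pitch

# img_pts: per-view corner lists from cv2.findChessboardCorners.
# rms, A_intr, dist, rvecs, tvecs = cv2.calibrateCamera(
#     [obj] * len(img_pts), img_pts, (2448, 2048), None, None)
```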

Table 1. Calibration results of the parameters of the camera model

To calibrate M, raxis, d and D, we placed the calibration board at 29 positions and orientations. The vision sensor camera and the auxiliary camera captured the calibration board simultaneously. The images are presented in Fig. 4, and the calibration results are shown in Table 2.

Fig. 4. Images captured by (a) the vision sensor camera; (b) the auxiliary camera.

Table 2. Calibration results of M, raxis, d and D

To calibrate the equation of the structured light, a planar ceramic calibration board was used. The equation of the structured light, Aαx̃ = 0, was determined by the proposed calibration method. The calibration result for Aα is given in Eq. (14).

$${\boldsymbol{A}_{\alpha}} = \left[ \begin{array}{cccc} -2.2580 \times 10^{-5} & -7.0373 \times 10^{-6} & -0.0016 & 633.8265 \end{array} \right]$$

4.2 Repeatability

The repeatability experiments were carried out by measuring the internal diameter of the aluminum tube ten times. The measured values are presented in Table 3. The repeatability of the measurement system reaches 0.05 mm, which demonstrates the good stability and reliability of the system.

Table 3. Results of the repeatability measurements (mm)

4.3 Measurement results

The measurement devices are shown in Fig. 5. The internal diameter of the aluminum tube, measured with a vernier caliper with a precision of 0.02 mm, is 288.50 mm. Table 4 shows the measured values from ten images. The root-mean-square error of the measurement system is 0.23 mm, which validates the accuracy of the proposed model.
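The reported statistics can be reproduced as follows; treating repeatability as the sample standard deviation of the ten values and accuracy as the RMS error against the caliper reference is our assumption, since the paper does not state the formulas:

```python
import numpy as np

def repeatability(values):
    """Sample standard deviation of the repeated diameters (Table 3)."""
    return np.std(values, ddof=1)

def rms_error(values, reference=288.50):
    """Root-mean-square error against the caliper value (Table 4)."""
    v = np.asarray(values, dtype=float)
    return np.sqrt(np.mean((v - reference) ** 2))
```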

Fig. 5. Measurement devices.

Table 4. Results of the measurement experiments (mm)

4.4 Comparison

Experiments were carried out to compare the performance of the proposed correction model with that of the model without correction. Figure 6 shows the light stripe points on the image plane and in space. On the image plane, the distorted image points were obtained by the light stripe center extraction method, while the corrected ones were obtained with the correction model. In space, the distorted 3D points were obtained from the camera perspective projection model applied to the distorted image points, while the corrected points were obtained with the omnidirectional structured-light vision model.

Fig. 6. The distorted points and the corrected ones (a) on the image plane; (b) in space.

Compared with the incident ray, the emergent ray is closer to the camera optical axis. Hence, the ellipse formed by the distorted points is smaller than the one formed by the corrected points, both on the image plane and in space. The measured values are presented in Table 5. The comparison results show that the refraction distortion has a considerable impact on the measurement: with the proposed correction model, the system accuracy is improved by a factor of 7.9 compared with the model without correction.

Table 5. Results of the comparison experiments (mm)

5. Conclusion

A measurement model of omnidirectional structured-light vision is constructed in this paper, together with a corresponding calibration method. The measurement model corrects the distortion caused by two non-coplanar refraction planes on cylindrical surfaces. The model is therefore general and effective for any position or orientation of the camera; it avoids strict alignment requirements and can be used readily in practical applications.

The repeatability and measurement experiments show that the model is reliable and accurate. Moreover, the comparison experiments demonstrate that the refraction distortion affects the measurement results and that the system accuracy with our correction model is improved by a factor of 7.9 compared with the model without correction. Although the proposed model focuses on cylindrical surfaces in this paper, it can also be extended to refraction distortion correction on other surfaces.

Funding

Aeronautical Science Foundation of China (2017ZE51062).

References

1. F. Zhou, B. Peng, Y. Cui, Y. Wang, and H. Tan, “A novel laser vision sensor for omnidirectional 3D measurement,” Opt. Laser Technol. 45(1), 1–12 (2013).

2. Y. Wang and R. Zhang, “In-pipe surface circular structured light 3D vision inspection system,” Infrared Laser Eng. 43(3), 891–896 (2014).

3. Y. Zhu, Y. Gu, Y. Jin, and C. Zhai, “Flexible calibration method for an inner surface detector based on circle structured light,” Appl. Opt. 55(5), 1034–1039 (2016).

4. T. Wu, S. Lu, and Y. Tang, “An in-pipe internal defects inspection system based on the active stereo omnidirectional vision sensor,” in 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD) (IEEE, 2015), pp. 2637–2641.

5. G. Zhang, J. He, and X. Li, “3D vision inspection for internal surface based on circle structured light,” Sens. Actuators A 122(1), 68–75 (2005).

6. P. Buschinelli, T. Pinto, F. Silva, J. Santos, and A. Albertazzi, “Laser triangulation profilometer for inner surface inspection of 100 millimeters (4") nominal diameter,” J. Phys. Conf. Ser. 648, 012010 (2015).

7. T. Yoshizawa and T. Wakayama, “Development of an inner profile measurement instrument using a ring beam device,” Proc. SPIE 7855, 78550B (2010).

8. Z. Gong, Z. Liu, and G. Zhang, “Flexible method of refraction correction in vision measurement systems with multiple glass ports,” Opt. Express 25(2), 831–847 (2017).

9. L. Huang, X. Zhao, S. Cai, and Y. Liu, “Plate refractive camera model and its applications,” J. Electron. Imaging 26(2), 023020 (2017).

10. M. Feng, S. Huang, J. Wang, B. Yang, and T. Zheng, “Accurate calibration of a multi-camera system based on flat refractive geometry,” Appl. Opt. 56(35), 9724–9734 (2017).

11. S. Huang, M. C. Feng, T. X. Zheng, F. Li, J. Q. Wang, and L. F. Xiao, “A novel multi-camera calibration method based on flat refractive geometry,” IOP Conf. Ser. Mater. Sci. Eng. 320(1), 012016 (2018).

12. S.-Q. Li, X.-P. Xie, and Y.-J. Zhuang, “Research on the calibration technology of an underwater camera based on equivalent focal length,” Measurement 122, 275–283 (2018).

13. C. Zhang, X. Zhang, Y. Zhu, J. Li, and D. Tu, “Model and calibration of underwater stereo vision based on the light field,” Meas. Sci. Technol. 29(10), 105402 (2018).

14. Y. Fu and Y. Liu, “3D bubble reconstruction using multiple cameras and space carving method,” Meas. Sci. Technol. 29(7), 075206 (2018).

15. S. Morinaka, F. Sakaue, J. Sato, K. Ishimaru, and N. Kawasaki, “3D reconstruction under light ray distortion from parametric focal cameras,” Pattern Recognit. Lett., to be published.

16. J. Sun, G. Zhang, Q. Liu, and Z. Yang, “Universal method for calibrating structured-light vision sensor on the spot,” J. Mech. Eng. 45(03), 174–177 (2009).

17. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
