Abstract
Omnidirectional structured-light vision measurement is significant for inner-surface inspection. In existing systems, the camera and the projector are installed inside a glass tube, which inevitably causes refraction distortion. In this paper, we propose a measurement model of omnidirectional structured-light vision and a corresponding calibration method. The model corrects the refraction distortion and realizes omnidirectional measurement. An aluminum tube with an internal diameter of 288.50 mm is measured by the system. The repeatability and precision reach 0.05 mm and 0.23 mm, respectively. The experimental results show that the accuracy is improved by a factor of 7.9 compared with a model that ignores the distortion.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Omnidirectional structured-light vision is an important noncontact method that realizes omnidirectional measurement. It retains the advantages of traditional structured-light vision, such as a wide measurement range and good system flexibility. Moreover, it is well suited to inspecting the inner surfaces of pipelines.
Although multiple line structured-light vision sensors can realize omnidirectional measurement [1], the many components of such a system lead to complex assembly and require global calibration. The circle structured-light vision sensor performs better: the projector emits a conical sheet of light with a small cone angle, which is reflected by a conical mirror and turned into a circle of structured light [2,3]. Compared with the multiple-sensor scheme, the omnidirectional structured-light vision sensor based on circle structured light is free from global calibration.
In an omnidirectional structured-light vision measurement system with circle structured light, a transparent glass tube is used to connect the projector with the camera [4]. The camera is fitted inside the glass tube, so the light cannot reach the camera image plane without passing through the tube, which causes refraction distortion and decreases the measurement accuracy. The light’s propagation, illustrated in detail in Fig. 1 in Section 2, is the cause of this distortion. To correct the glass tube refraction distortion, Zhang et al. [5] constructed a mapping table between the distorted image coordinates and the corresponding undistorted ones. The image coordinates were obtained with the glass tube included in and excluded from the system, respectively. A mapping table was thus constructed instead of a geometric distortion model.
In fact, when the light stripe is projected through the glass tube onto the image plane, the ray is refracted at the two air-glass interfaces. In past research, the refraction distortion was corrected under specific assumptions [6], which made the measurement models less general. Yoshizawa and Wakayama [7] proposed a measurement principle that required the entire refraction process to lie on a single plane. However, the two refractions at the inner and outer surfaces of the glass tube are not always coplanar, so the principle applies only when the camera optical axis coincides with the glass tube axis. Such an installation requires precise alignment and fails in practical applications.
Nowadays, most refraction distortion correction models address refractions that occur on a single plane or at flat interfaces. Gong et al. [8] constructed a distortion correction model for refraction at multiple planar glass ports. Similarly, Huang et al. [9] presented a plate refractive camera model. Feng et al. [10] and Huang et al. [11] established multi-camera calibration methods based on flat refractive geometry. Li et al. [12] developed a calibration method for an underwater camera and constructed the relation between underwater points and the corresponding image points. Likewise, Zhang et al. [13] put forward a model of underwater stereo vision in which the cameras were protected by a flat glass container that caused the refraction distortion. Fu and Liu [14] adopted a distortion correction method to reconstruct 3D bubble shapes by tracing rays refracted at planar interfaces. In all of these studies, the refractive surfaces were flat and parallel, and the distortion correction models were built on the plane determined by the incident ray and the emergent ray.
Although Morinaka et al. [15] proposed a 3D reconstruction method for distortion caused by arbitrary refractive media, the method only built a nonlinear mapping table relating the image points to 2D points on the calibration board. Consequently, refraction distortion on cylindrical surfaces remains to be analyzed carefully, and its correction is the key to omnidirectional structured-light vision measurement.
To overcome the disadvantages of existing omnidirectional structured-light vision measurement, we propose a measurement model of omnidirectional structured-light vision in this paper. The projector emits a circle of structured light and the camera takes images through the glass tube. By tracing the ray’s propagation, we find that the two refractions at the air-glass interfaces are non-coplanar, so the existing correction models cannot be applied in this case. Our model therefore focuses on the non-coplanar refraction planes and corrects the corresponding refraction distortion. We also devise a calibration method for the omnidirectional structured-light vision model that includes the refraction distortion.
The rest of this paper is organized as follows. A measurement model of the omnidirectional structured-light vision is detailed in Section 2. The corresponding calibration method is presented in Section 3. Afterwards, the experimental results are provided in Section 4 and the conclusions are given in Section 5.
2. Measurement model of the omnidirectional structured-light vision
The omnidirectional structured-light vision measurement model is shown in Fig. 1. The projector emits the circle structured light to its surrounding scene. P denotes a spatial point on the structured light. The camera takes images through the glass tube, whose internal and external radii are denoted by d and D, respectively. When P is projected to its image point P’, the ray is refracted at the outer and inner surfaces of the glass tube. Let us define the rays from P to the outer surface, from the outer surface to the inner surface, and from the inner surface to the camera optical center Oc as the incident ray, the refracted ray and the emergent ray, respectively.
The camera coordinate system and the image coordinate system are set as Ocxcyczc and O’uv, respectively. Owing to the reversibility of the light path, we define ri, rr and re as the vectors opposite in direction to the incident ray, the refracted ray and the emergent ray, respectively. The refraction point Q1 is the intersection of re with the inner surface of the glass tube; likewise, the refraction point Q0 is the intersection of rr with the outer surface. Because the surfaces are cylindrical, the normals at Q1 and Q0 lie along the radial directions, denoted by n1 and n0, respectively. n1 and the glass tube axis form a plane denoted by α1; similarly, n0 and the glass tube axis form a plane denoted by α0. Since there is, in general, a nonzero angle ɛ between α0 and α1, the two refraction planes are non-coplanar.
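The radial normals and the angle ɛ between the two axial planes can be checked numerically. The following is a minimal sketch under our own conventions (the helper names and the frame setup are illustrative, not the paper's implementation), assuming a cylinder axis through point M with unit direction vector, as in the model:

```python
import numpy as np

def cylinder_normal(point, M, axis):
    """Outward unit normal of a cylinder at `point`: the component of
    (point - M) perpendicular to the unit axis vector, i.e. the radial
    direction at that point."""
    v = np.asarray(point, float) - np.asarray(M, float)
    a = np.asarray(axis, float)
    n = v - np.dot(v, a) * a          # strip the axial component
    return n / np.linalg.norm(n)

def plane_angle(n0, n1):
    """Angle eps between the axial planes alpha0 and alpha1.  Both
    normals are perpendicular to the axis, so the dihedral angle about
    the axis equals the angle between the normals themselves."""
    c = np.clip(np.dot(n0, n1), -1.0, 1.0)
    return np.arccos(c)
```

For two refraction points at different azimuths around the tube, `plane_angle` returns a nonzero ɛ, confirming that the two refraction planes generally do not coincide.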
According to the camera perspective projection model, re can be determined from the image pixel coordinates of P’. The equation of the emergent ray can be expressed as reT x = 0. For a point on the inner surface of the glass tube with coordinates x = (x, y, z)T, x satisfies
where M denotes the coordinates of point M on the glass tube axis and raxis = (r1, r2, r3)T is the unit vector of the glass tube axis. The equation of the inner surface, denoted by fi(x, y, z) = 0, is shown in Eq. (2). Similarly, the equation of the outer surface of the glass tube, denoted by fo(x, y, z) = 0, can be expressed as Eq. (5).
Thus, the equation of the refracted ray can be expressed as rrT x = 0.
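Locating the refraction points requires intersecting a ray with a cylindrical surface of radius d or D. A minimal numerical sketch of this step (our own helper, not the paper's implementation), with the ray leaving the optical center Oc at the origin of the camera frame:

```python
import numpy as np

def ray_cylinder_intersection(ray_dir, M, axis, radius):
    """Intersect the ray x = t * ray_dir (t > 0, from the camera optical
    center at the origin) with an infinite cylinder of the given radius
    whose axis passes through M with unit direction `axis`.  Returns the
    nearest intersection point, or None if the ray misses."""
    r = np.asarray(ray_dir, float)
    a = np.asarray(axis, float)
    M = np.asarray(M, float)
    # Components of the ray direction and of (origin - M) perpendicular
    # to the cylinder axis; only these matter for the radial distance.
    r_perp = r - np.dot(r, a) * a
    w = -M
    w_perp = w - np.dot(w, a) * a
    # Quadratic A t^2 + B t + C = 0 in the ray parameter t.
    A = np.dot(r_perp, r_perp)
    B = 2.0 * np.dot(r_perp, w_perp)
    C = np.dot(w_perp, w_perp) - radius ** 2
    if A == 0.0:
        return None                      # ray parallel to the axis
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None                      # no real intersection
    t1 = (-B - np.sqrt(disc)) / (2.0 * A)
    t2 = (-B + np.sqrt(disc)) / (2.0 * A)
    candidates = [t for t in (t1, t2) if t > 0.0]
    return min(candidates) * r if candidates else None
```

Applying this with radius d to the emergent ray yields Q1; applying it with radius D to the refracted ray (restarted from Q1) yields Q0.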
The refraction at Q0 is similar to that at Q1. As shown in Fig. 2(b), the refraction angle and the incidence angle are denoted by θ1 and θ0, respectively. We build a new refraction coordinate system Or0xr0yr0zr0 with its origin Or0 at Q0, where the xr0-axis is along n0 × rr, the yr0-axis is along n0, and the zr0-axis is along (n0 × rr) × n0. In the plane Or0yr0zr0, Snell’s law gives nglass·sin θ1 = nair·sin θ0. We set ri = (0, 1, tan θ0)T in Or0xr0yr0zr0; then, in Ocxcyczc, ri is given by Eq. (9).
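The refraction step can also be written in vector form without constructing the intermediate coordinate system explicitly. The sketch below applies Snell's law directly to unit vectors; this is an equivalent standard formulation, not the paper's Eq. (9):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: refract the unit direction `d` at a
    surface with unit normal `n` (oriented against `d`), passing from a
    medium of refractive index n1 into a medium of index n2.  Returns
    the unit refracted direction, or None on total internal
    reflection."""
    d = np.asarray(d, float)
    n = np.asarray(n, float)
    eta = n1 / n2
    cos_i = -np.dot(d, n)                 # cosine of the incidence angle
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    t = eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
    return t / np.linalg.norm(t)
```

Chaining two such calls, with the radial normals at Q1 and Q0, traces the emergent ray back through the glass wall to recover the incident ray direction ri.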
Since P is the intersection of the incident ray and the structured light, the mathematical model of the omnidirectional structured-light vision measurement can be formulated as Eq. (10).
3. Calibration method of the omnidirectional structured-light vision model
In the omnidirectional structured-light vision model, the parameters to be calibrated are M, raxis, d, D and the equation of the structured light. Calibrating the equation of the structured light relies on the other four parameters and on the calibration method of Sun et al. [16].
As the parameters governing the refraction distortion, M, raxis, d and D are calibrated with an auxiliary camera and a calibration board. The vision sensor camera takes images of the calibration board through the glass tube, while the auxiliary camera captures the same board directly. With the mathematical model of the omnidirectional structured-light vision, the corrected 3D coordinates of the jth feature point, denoted by xrj, can be expressed in terms of M, raxis, d and D. At the same time, the auxiliary camera provides the undistorted coordinates, denoted by xcj. Based on the corresponding xrj and xcj, we build and optimize the objective function shown in Eq. (11) to determine M, raxis, d and D.
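The optimization of an Eq. (11)-style objective can be sketched with a generic nonlinear least-squares solver. Everything below is illustrative: `correct_point` stands in for the (unspecified) ray-tracing routine that maps an image point to its corrected 3D coordinates under candidate parameters, and the parameter packing is our own choice:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_refraction(correct_point, uv_points, x_ref, p0):
    """Estimate the refraction parameters by least squares.
    `correct_point(params, uv)` is assumed to trace one image point `uv`
    through the glass tube and return its corrected 3D coordinates x_rj;
    `x_ref` holds the reference coordinates x_cj from the auxiliary
    camera.  `p0` packs the initial guess as
    [Mx, My, Mz, r1, r2, r3, d, D]."""
    def residuals(params):
        res = []
        for uv, xc in zip(uv_points, x_ref):
            res.extend(correct_point(params, uv) - xc)
        return np.asarray(res)
    return least_squares(residuals, p0).x
```

The residual vector stacks the 3D differences xrj - xcj over all feature points, matching the form of a sum-of-squared-distances objective.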
Since the corrected image coordinates of the structured light can be acquired, we can also obtain the corresponding 2D coordinates in the calibration board plane coordinate system. Let us set the 2D coordinates as (x, y)T; then the 3D coordinates in the calibration board coordinate system can be expressed as Q̃i = (x, y, 0)T. Further, the coordinates of the structured light in the camera coordinate system, denoted by Q̃c, can be determined by Eq. (13).
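Assuming Eq. (13) takes the usual rigid-transform form Q̃c = R·Q̃i + t, with the board pose (R, t) obtained from extrinsic calibration (an assumption on our part, since the equation itself is not reproduced here), the lift from 2D board coordinates to the camera frame can be sketched as:

```python
import numpy as np

def board_to_camera(points_2d, R, t):
    """Lift 2D light-stripe points (x, y) on the calibration board to 3D
    points (x, y, 0) in the board frame, then map them into the camera
    frame via the board pose (R, t):  Q_c = R @ Q_i + t."""
    pts = np.asarray(points_2d, float)
    Qi = np.column_stack([pts, np.zeros(len(pts))])   # append z = 0
    return Qi @ np.asarray(R, float).T + np.asarray(t, float)
```

Collecting such camera-frame points over several board poses provides the 3D samples from which the structured-light surface equation is fitted.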
4. Experiments and analysis
An omnidirectional structured-light vision measurement system was built in the laboratory, as shown in Fig. 3(a). The system consisted of a DAHENG MER-504-10GM-P industrial camera with a resolution of 2448×2048 pixels, a Schneider Cinegon 1.4/8 mm lens, a cylindrical glass tube, an HB365050X structured-light projector and brackets. To evaluate the performance of the model, the system was used to measure the internal diameter of an aluminum tube. The omnidirectional structured light was emitted onto the inner surface of the aluminum tube and formed a closed light stripe. Using the measurement model, we acquired the 3D coordinates of the light stripe points. To remove the influence of the position and orientation of the aluminum tube, an ellipse was fitted to these points by the least-squares method, and its minor axis was taken as the internal diameter of the aluminum tube.
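The ellipse fit and minor-axis extraction can be sketched as follows. This is an algebraic least-squares conic fit of our own choosing; the paper does not specify its fitting procedure beyond "least squares":

```python
import numpy as np

def fit_ellipse_minor_axis(xs, ys):
    """Fit the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to the stripe points by algebraic least squares, then extract the
    semi-axis lengths; returns the minor axis (2 * smaller semi-axis),
    used here as the internal diameter."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    # Smallest right singular vector minimizes ||Dm p|| with ||p|| = 1.
    Dm = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    a, b, c, d, e, f = np.linalg.svd(Dm)[2][-1]
    # Matrix form of the conic; the semi-axes follow from its
    # determinant and the eigenvalues of the upper-left 2x2 block.
    Aq = np.array([[a, b / 2, d / 2], [b / 2, c, e / 2], [d / 2, e / 2, f]])
    A33 = Aq[:2, :2]
    lam = np.linalg.eigvalsh(A33)
    axes = np.sqrt(-np.linalg.det(Aq) / (np.linalg.det(A33) * lam))
    return 2.0 * min(axes)
```

Because the minor axis of the projected ellipse is invariant to a tilt of the fitting plane about the major axis, it is a natural estimate of the tube's true internal diameter.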
4.1 Calibration results
The calibration devices are shown in Fig. 3(b). The auxiliary camera was identical to the vision sensor camera. While the intrinsic parameters of the two cameras were being calibrated, the system was fitted without the glass tube. The parameters of the camera model were calibrated with Zhang’s method [17] using a planar glass calibration board bearing a 17×17 chessboard pattern with a 10 mm interval. The calibration results are shown in Table 1.
To calibrate M, raxis, d and D, we placed the calibration board in 29 different positions and orientations. The vision sensor camera and the auxiliary camera captured the calibration board simultaneously. The images are presented in Fig. 4 and the calibration results are shown in Table 2.
To calibrate the equation of the structured light, a planar ceramic calibration board was used. The equation of the structured light, denoted by Aα x = 0, was determined by the proposed calibration method. The calibration result for Aα is given in Eq. (14).
4.2 Repeatability
The repeatability experiments were carried out by measuring the internal diameter of the aluminum tube ten times. The measured values are presented in Table 3. The repeatability of the measurement system reaches 0.05 mm, which demonstrates the good stability and reliability of the system.
4.3 Measurement results
The measurement devices are shown in Fig. 5. The internal diameter of the aluminum tube, measured by a vernier caliper with a precision of 0.02 mm, is 288.50 mm. Table 4 shows the values measured from ten pictures. The root mean square error of the measurement system is 0.23 mm, which validates the accuracy of the proposed model.
4.4 Comparison
Experiments were carried out to compare the performance of the proposed correction model with that of the model without correction. Figure 6 shows the light stripe points on the image plane and in space. On the image plane, the distorted image points were obtained by the light stripe center extraction method, while the corrected ones were obtained by the correction model. In space, the distorted 3D points were obtained by applying the camera perspective projection model to the distorted image points, while the corrected ones were obtained by the omnidirectional structured-light vision model.
Compared with the incident ray, the emergent ray is closer to the camera optical axis. Hence, the ellipse formed by the distorted points is smaller than the one formed by the corrected points, both on the image plane and in space. The measured values are presented in Table 5. The comparison results show that the refraction distortion has a marked impact on the measurement: the system accuracy with the proposed correction model is improved by a factor of 7.9 compared with the model without correction.
5. Conclusion
A measurement model of omnidirectional structured-light vision is constructed in this paper, together with a corresponding calibration method. The measurement model corrects the distortion caused by two non-coplanar refraction planes on cylindrical surfaces. The model is therefore general and effective for any position or orientation of the camera; it avoids strict alignment and can readily be used in practical applications.
The repeatability and measurement experiments show that the model is reliable and accurate. Moreover, the comparison experiments demonstrate that the refraction distortion affects the measurement results and that the system accuracy with our correction model is improved by a factor of 7.9 compared with the model without correction. Although the proposed model focuses on cylindrical surfaces, it can also be extended to refraction distortion correction on other surfaces.
Funding
Aeronautical Science Foundation of China (2017ZE51062).
References
1. F. Zhou, B. Peng, Y. Cui, Y. Wang, and H. Tan, “A novel laser vision sensor for omnidirectional 3D measurement,” Opt. Laser Technol. 45(1), 1–12 (2013). [CrossRef]
2. Y. Wang and R. Zhang, “In-pipe surface circular structured light 3D vision inspection system,” Infrared Laser Eng. 43(3), 891–896 (2014).
3. Y. Zhu, Y. Gu, Y. Jin, and C. Zhai, “Flexible calibration method for an inner surface detector based on circle structured light,” Appl. Opt. 55(5), 1034–1039 (2016). [CrossRef]
4. T. Wu, S. Lu, and Y. Tang, “An in-pipe internal defects inspection system based on the active stereo omnidirectional vision sensor,” in 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD) (IEEE, 2015), pp. 2637–2641.
5. G. Zhang, J. He, and X. Li, “3D vision inspection for internal surface based on circle structured light,” Sens. Actuators A 122(1), 68–75 (2005). [CrossRef]
6. P. Buschinelli, T. Pinto, F. Silva, J. Santos, and A. Albertazzi, “Laser Triangulation Profilometer for Inner Surface Inspection of 100 millimeters (4") Nominal Diameter,” J. Phys. Conf. Ser. 648, 012010 (2015). [CrossRef]
7. T. Yoshizawa and T. Wakayama, “Development of an inner profile measurement instrument using a ring beam device,” Proc. SPIE 7855, 78550B (2010). [CrossRef]
8. Z. Gong, Z. Liu, and G. Zhang, “Flexible method of refraction correction in vision measurement systems with multiple glass ports,” Opt. Express 25(2), 831–847 (2017). [CrossRef]
9. L. Huang, X. Zhao, S. Cai, and Y. Liu, “Plate refractive camera model and its applications,” J. Electron. Imaging 26(2), 023020 (2017). [CrossRef]
10. M. Feng, S. Huang, J. Wang, B. Yang, and T. Zheng, “Accurate calibration of a multi-camera system based on flat refractive geometry,” Appl. Opt. 56(35), 9724–9734 (2017). [CrossRef]
11. S. Huang, M. C. Feng, T. X. Zheng, F. Li, J. Q. Wang, and L. F. Xiao, “A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry,” IOP Conf. Ser. Mater. Sci. Eng. 320(1), 012016 (2018). [CrossRef]
12. S.-Q. Li, X.-P. Xie, and Y.-J. Zhuang, “Research on the calibration technology of an underwater camera based on equivalent focal length,” Measurement 122, 275–283 (2018). [CrossRef]
13. C. Zhang, X. Zhang, Y. Zhu, J. Li, and D. Tu, “Model and calibration of underwater stereo vision based on the light field,” Meas. Sci. Technol. 29(10), 105402 (2018). [CrossRef]
14. Y. Fu and Y. Liu, “3D bubble reconstruction using multiple cameras and space carving method,” Meas. Sci. Technol. 29(7), 075206 (2018). [CrossRef]
15. S. Morinaka, F. Sakaue, J. Sato, K. Ishimaru, and N. Kawasaki, “3D reconstruction under light ray distortion from parametric focal cameras,” Pattern Recognit. Lett. to be published. [CrossRef]
16. J. Sun, G. Zhang, Q. Liu, and Z. Yang, “Universal Method for Calibrating Structured-light Vision Sensor on the Spot,” J. Mech. Eng. 45(03), 174–177 (2009). [CrossRef]
17. Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]