Optica Publishing Group

High-accuracy projector calibration method for fringe projection profilometry considering perspective transformation

Open Access

Abstract

The camera and the projector are the key components of structured light three-dimensional (3-D) measurement, and the Digital Light Processing (DLP) projector is widely used to project digital structured light patterns. Light projection by a projector can be modeled as the inverse of camera imaging, and its high-accuracy calibration remains a challenge. Therefore, this paper proposes a novel projector calibration method to improve the calibration accuracy of a DLP projector. By fixing the positions of the camera and the calibration board, the method essentially eliminates the perspective transformation error and effectively avoids distortion of the extracted marker points. The proposed calibration procedure is as follows. First, the optical axis of the camera is adjusted to be parallel to the normal of the hollow ring calibration board, and a texture image is captured by the camera. Second, horizontal and vertical fringe patterns are projected onto the calibration board from nine different projector positions and orientations, and nine sets of projected images are captured. Finally, a one-to-one correspondence between the camera and the projector is established through phase equivalence, and the projector is accurately calibrated. The experimental results show that the proposed projector calibration method is feasible and easy to operate, essentially eliminates the perspective transformation error, and ensures competitive accuracy.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Structured light three-dimensional (3-D) sensing technology based on fringe projection profilometry (FPP) has become the mainstream optical method due to its advantages of non-contact operation, high accuracy, full-field measurement, and efficient point cloud reconstruction [1–3]. It has been widely used in industrial inspection, reverse engineering, biomedicine, machine vision, etc. [4–6]. In FPP, a projector (mostly a Digital Light Processing projector) projects sinusoidal fringe patterns onto the surface of the measured object. The camera then captures the fringe patterns distorted by the depth modulation of the object surface. Finally, the depth information of the object surface is obtained by demodulating the phase information of the deformed fringe patterns. The relationship between the absolute phase and the 3-D data is established by system calibration, so the calibration accuracy of the camera and projector plays an important role in the reconstruction procedure. The projector, unlike a camera, cannot capture images, so the relative relationship between projector pixels and object points cannot be obtained directly [7]. To address this problem, the projector is usually regarded as a reverse camera [8], and its high-accuracy calibration remains a challenge.

In FPP, the achievable accuracy of projector calibration is limited if the effect of errors is ignored. To improve the accuracy of projector calibration, a number of methods have been proposed. Generally speaking, the main error sources of the projector can be roughly categorized into three types: a) nonlinear response error [9–18], b) defocus effect error [9,19–23] and c) lens distortion error [24–26]. The first category of improved projector calibration aims to eliminate the gamma nonlinearity of the projector. These methods include active pre-correction [9,10], the mathematical gamma model method [11], the generic exponential fringe model method [12], the nonlinear self-calibration method [13], the iterative method [14], the higher-order phase-shift technique [15], fringe pattern filtering with bandpass filters [16], and pre-computed look-up tables of stored phase errors [17,18]. The second category aims to eliminate the defocus effect of the projector. A more direct approach relies on improved hardware: for example, interference fringes generated by coherent light illumination are used as standard sinusoidal fringes [19,20], or binary defocusing projection techniques [21–23] form sinusoidal fringes in the measurement space.

The third category of improved projector calibration corrects the lens distortion of the projector. Some researchers have proposed finding a homography transformation between the calibration board and the projector image plane [24–26]. However, the homography transformation is a linear operator that cannot model the nonlinear deformation of the projector lens. The planar calibration board consisting of multiple hollow ring coded marker points has been widely used in computer vision fields such as system calibration and 3-D measurement because of its high positioning accuracy, easy identification, and good stability [27–29]. The accuracy of marker point positioning directly affects the calibration accuracy of the camera and projector as well as that of the overall measurement system. Locating the center of a hollow ring coded marker point is mainly performed by extracting the innermost ring of the marker. Many extraction methods have been studied, such as the ellipse fitting method based on image edges [30], the Hough transform method [31,32], and the grayscale center-of-mass method based on image grayscale [33]. However, lens distortion can deform the ring marker points into ellipses, leading to inaccurate extraction of the centers of the coded marker points.

While the three existing types of improved projector calibration methods work well, they involve complex processes or high computational cost. To address these issues, this paper proposes a simple and flexible method to accurately calibrate a DLP projector. The calibration procedure is as follows. First, the positions of the camera and the calibration board are fixed so that the optical axis of the camera is parallel to the normal of the calibration board. Then, the projector is moved to several different positions, and the camera captures the fringe patterns projected from the different perspectives. Finally, the correspondence between the camera and the projector is established, and the intrinsic and extrinsic parameters of the projector are calculated. In contrast to conventional methods, the proposed method improves the calibration accuracy of the projector by keeping the camera aligned with the calibration board at every position, which directly avoids the elliptical deformation of the rings caused by perspective. The validation experiments demonstrate that the proposed method is simple and effective.

The remainder of this paper is organized as follows. The principle of the proposed high-accuracy projector calibration method to avoid perspective projection of the projector is presented in Section 2. Then, experimental results to verify the perspective projection compensation method are provided in Section 3. Finally, the conclusions are drawn and final remarks are given in Section 4.

2. Principle

2.1 Phase calculation

The phase-shift method has been widely used in high-accuracy 3-D measurement. For the N-step phase-shift method, the projector successively projects N fringe patterns with phase shifts of 2π(n-1)/N onto the reference plane. The intensity distribution of the n-th phase-shifted fringe pattern acquired by the camera can be written as

$${I_n}({x,y} )= A + B\cos [{\varphi ({x,y} )+ {\delta_n}} ],$$
where A is the ambient light intensity, B is the modulated light intensity, φ(x, y) is the phase to be solved, and δn = 2π(n-1)/N for n = 1, 2, …, N is the phase shift. The phase distribution can be obtained by Eq. (2)
$$\varphi ({x,y} )= \arctan \left[ {\frac{{\sum\nolimits_{n = 1}^N {{I_n}} ({x,y} )\sin ({{\delta_n}} )}}{{\sum\nolimits_{n = 1}^N {{I_n}} ({x,y} )\cos ({{\delta_n}} )}}} \right],$$

The phase obtained from Eq. (2) is a wrapped phase distributed in [-π, π] with 2π discontinuities, which needs to be unwrapped to transform the discontinuous phase distribution into a monotonically continuous phase [34]. The continuous phase can be obtained as

$$\phi ({x,y} )= \varphi ({x,y} )+ 2k({x,y} )\pi ,$$
where ϕ(x, y) is the absolute phase after phase unwrapping and k(x, y) is the fringe level at a point.
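As an illustrative sketch of Eqs. (2) and (3), the wrapped phase can be computed and unwrapped in a few lines of NumPy. The synthetic data, variable names, and the use of `np.unwrap` as a stand-in for the temporal phase unwrapping of [34,37] are assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def wrapped_phase(images, deltas):
    """Eq. (2): wrapped phase from N phase-shifted fringe images."""
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(num, den)  # wrapped into (-pi, pi]

# Synthetic check with a four-step phase shift, delta_n = 2*pi*(n-1)/N.
N = 4
deltas = [2 * np.pi * n / N for n in range(N)]
true_phi = np.linspace(-3.0, 3.0, 200)           # assumed ground-truth phase
A, B = 0.5, 0.4                                  # ambient / modulation intensity
images = [A + B * np.cos(true_phi + d) for d in deltas]
phi = wrapped_phase(images, deltas)
# Note: with I_n = A + B*cos(phi + delta_n), Eq. (2) as printed recovers -phi;
# sign conventions for the phase shift vary across the phase-shifting literature.

# Eq. (3): phi_abs = phi_wrapped + 2*k*pi. np.unwrap picks k along one axis;
# the paper instead uses the optimum three-fringe number selection method.
ramp = np.linspace(0.0, 20.0, 500)               # a monotonic absolute phase
wrapped = np.angle(np.exp(1j * ramp))
unwrapped = np.unwrap(wrapped)
```

For real fringe data, spatial unwrapping such as `np.unwrap` fails at depth discontinuities, which is why temporal multi-frequency methods like that of [34] are preferred in FPP.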

2.2 High-accuracy calibration of the projector

The DLP projector is an important device for structured light-based 3-D imaging systems. Projector projection can be modeled as the inverse of camera imaging.

2.2.1 Generation of DMD images

The DLP projector can be used as a reverse camera: the digital micromirror device (DMD) of the projector emits light, whereas the CCD of the camera receives it. The DMD image is converted from the CCD image pixel by pixel and is called the “captured” image of the projector. Based on an ideal pinhole model of the projector, the mathematical transformation between the world coordinates (xw, yw, zw) and the pixel coordinates (u, v) is as follows [26]

$$\left[ \begin{array}{l} u\\ v\\ 1 \end{array} \right]\textrm{ = }s\left[ {\begin{array}{cccc} {f_u^p}&0&{u_0^p}&0\\ 0&{f_v^p}&{v_0^p}&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{cc} {{R^p}}&{{T^p}}\\ {0_3^T}&1 \end{array}} \right]\left[ {\begin{array}{c} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right] = s{I^p}{E^p}\left[ {\begin{array}{c} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right],$$
where s is an arbitrary scale factor. Ip represents the intrinsic parameter matrix of the projector, which consists of the focal lengths (fup, fvp) and the principal point coordinates (u0p, v0p). Ep represents the extrinsic parameter matrix of the projector, which consists of the 3 × 3 rotation matrix Rp and the 3 × 1 translation vector Tp between the world coordinate system and the projector coordinate system. The superscript T indicates the transpose of the matrix. As can be seen from Eq. (4), the projector can be calibrated from the correspondence between world coordinates and pixel coordinates.
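The projection of Eq. (4) can be evaluated directly. In the sketch below, all intrinsic and extrinsic values are arbitrary placeholders for illustration, not the calibrated parameters of the paper's system:

```python
import numpy as np

# Hypothetical projector parameters (placeholder values for illustration only).
fu, fv = 1200.0, 1200.0           # focal lengths (pixels)
u0, v0 = 640.0, 400.0             # principal point (pixels)
Ip = np.array([[fu, 0.0, u0, 0.0],
               [0.0, fv, v0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])       # 3x4 intrinsic matrix
Rp = np.eye(3)                    # rotation, world -> projector
Tp = np.array([0.0, 0.0, 500.0])  # translation (mm)
Ep = np.vstack([np.column_stack([Rp, Tp]), [0.0, 0.0, 0.0, 1.0]])

def project(xw, yw, zw):
    """Eq. (4): map a world point to projector pixel coordinates (u, v)."""
    uvs = Ip @ Ep @ np.array([xw, yw, zw, 1.0])
    return uvs[:2] / uvs[2]       # divide out the scale factor s

u, v = project(10.0, -5.0, 0.0)   # a marker at z = 0 on the board plane
```

With these placeholder values, u = 1200·10/500 + 640 and v = 1200·(-5)/500 + 400, illustrating how calibration reduces to recovering Ip and Ep from known point correspondences.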

The most important step in projector calibration is the conversion of pixel points from the camera pixel coordinate system to the projector pixel coordinate system. The correspondence between the two systems is established using a series of horizontal and vertical fringe patterns, which the projector casts onto the calibration board and the camera captures. Assuming that a point in the camera pixel coordinate system is p(uic, vic), and its absolute phase values in the horizontal and vertical directions are ϕh(uic, vic) and ϕv(uic, vic), respectively, the corresponding point p’(uip, vip) in the projector pixel coordinate system is calculated by the following equations [26]

$$\left\{ {\begin{array}{c} {u_i^p = \frac{{V{\phi_v}({u_i^c,v_i^c} )}}{{2\pi {N_v}}} + \frac{V}{2}}\\ {v_i^p = \frac{{H{\phi_h}({u_i^c,v_i^c} )}}{{2\pi {N_h}}} + \frac{H}{2}} \end{array}} \right.,$$
where Nh and Nv are the maximum number of fringes in the horizontal and vertical fringe patterns projected on the calibration board, V and H are the width and height of the fringe pattern, respectively. Through Eq. (5), the corresponding relationship between camera and projector imaging plane at each center of marker point can be established. The pixel coordinates of the center of all marker points under the projector pixel coordinate system are generated using the above conversion method, and then a set of DMD images is generated. Once the DMD images of the projector are obtained, the calibration procedure of the projector is similar to the calibration of the camera. The projector can be calibrated by substituting the pixel coordinates of the DMD image and the world coordinates of 3-D space points into Eq. (4), and then the intrinsic and extrinsic parameters of the projector can be obtained.
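A minimal sketch of the conversion in Eq. (5) follows; the pattern dimensions and fringe counts are assumed values for illustration, and in practice they are set by the projected patterns:

```python
import numpy as np

V, H = 912, 1140     # assumed width / height of the fringe pattern (pixels)
Nv, Nh = 100, 100    # assumed maximum fringe numbers (vertical / horizontal)

def cam_to_proj(phi_v, phi_h):
    """Eq. (5): projector pixel coordinates (u_p, v_p) from the absolute
    vertical and horizontal phases measured at one camera pixel."""
    u_p = V * phi_v / (2.0 * np.pi * Nv) + V / 2.0
    v_p = H * phi_h / (2.0 * np.pi * Nh) + H / 2.0
    return u_p, v_p

# Zero absolute phase maps to the pattern centre; phase pi*N maps to the edge.
centre = cam_to_proj(0.0, 0.0)
edge = cam_to_proj(np.pi * Nv, np.pi * Nh)
```

Applying this mapping to the absolute phases sampled at each extracted marker center yields the projector-plane point set that forms one DMD image.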

2.2.2 Elimination of perspective error of the projector

The use of ring object targets is very common in spatial object calibration and reconstruction. The traditional projector calibration method usually fixes the position of the camera and projector. A ceramic calibration board with hollow ring marks on the surface is used to calibrate the projector and to verify the accuracy, as shown in Fig. 1.


Fig. 1. Ceramic hollow ring calibration board. (a) One captured image of the manufactured board; (b) extracted ring center point image; (c) perspective projection transformation of space ring.


In the existing method, the spatial point pixel coordinates are obtained by moving the ring calibration board to different positions and capturing it with the camera. One of the captured calibration board images is shown in Fig. 1(a). The pixel coordinates of the center points of all imaged rings are extracted, as shown in Fig. 1(b). The projector is calibrated by establishing the conversion relationship between the pixel coordinates of the camera and the projector at each position. However, due to the perspective transformation of the camera, an object ring is elliptically deformed on the camera imaging plane if the calibration board is not parallel to that plane [35], resulting in a deviation of the extracted pixel coordinates of the marker centers, as shown in Fig. 1(c). The center of the ellipse does not coincide with the projection of the center of the ring: the center of the ring on plane π1 is the red cross A; after imaging, the ring becomes an ellipse on the camera imaging plane π2 with projection point A’, but it is the center B’ of the ellipse that is extracted as the marker point. This pixel deviation of the extracted marker points directly affects the accuracy of the camera calibration, and the error accumulates in the calibration of the projector.

A novel projector calibration method is proposed to solve the problem of inaccurate extraction of the hollow ring center points; it avoids the perspective error and does not require software correction of the elliptical distortion. The calibration configuration of the proposed method is illustrated in Fig. 2. It ensures that the camera faces the calibration board, that is, the optical axis of the camera is always parallel to the normal direction of the board, and the whole calibration procedure requires only moving the projector. To accommodate different working distances and orientations, the projector is placed at several different positions and orientations within the camera's field of view. The optical path of the camera is shown in Fig. 3(a): the captured ring image is free of perspective error and thus exhibits no elliptical distortion, so the extracted coordinates of the ring center points are more accurate. The optical path of the projector is shown in Fig. 3(b): a perspective projection arises from the angle between the optical axis of the projector and the normal of the calibration board, producing trapezoidal (keystone) distortion of the projected image. However, this trapezoidal distortion has little effect on the accuracy of the phase calculation and the projector calibration. The accuracy of the projector calibration is largely limited by the positioning accuracy of the ring centers, because the pixel accuracy of the extracted marker centers directly affects the pixel conversion relationship between the camera and the projector. Therefore, the accuracy of the projector calibration can be guaranteed by ensuring that the captured image is free of perspective distortion.


Fig. 2. The calibration configuration of the proposed method.


Fig. 3. The optical path diagram. (a) Camera; (b) projector.


To ensure that the captured image is free of perspective distortion, the camera must be adjusted to face the calibration board. The specific steps are as follows. The first step is a rough adjustment of the camera height: the shooting viewpoint, that is, the height of the camera position, is adjusted to the height of the object's midpoint. The second step is a fine adjustment of the relative position of the camera and the object, making the camera's imaging axis perpendicular to the surface of the calibration board, as shown in Fig. 4(a). A piece of background paper with a cross grid is pasted on the surface of the calibration board, and the four center points in the horizontal and vertical directions of the camera's field of view are aligned with the horizontal and vertical cross lines of the background paper. The window displayed on the computer after the adjustment is shown in Fig. 4(b). The background paper is then removed, leaving the imaging axis of the camera perpendicular to the calibration board.


Fig. 4. Schematic diagram of the camera adjustment. (a) Position of the camera and calibration board; (b) displayed window by the computer after adjustment.


2.2.3 Optimized projector calibration

To achieve better projector calibration results, all parameters are refined so that the re-projection error of the calibration reaches its minimum, where the objective function F is

$$F = \min \sum\limits_{i = 1}^N {\sum\limits_{j = 1}^M {{{\left\| {{p_{ij}} - {{\tilde{p}}_{ij}}({{I^p},R_j^p,T_j^p,{P_i}} )} \right\|}^2}} },$$
where N is the number of reference points, M is the number of images for each position of the projector, pij denotes the pixel coordinates of the i-th marker point on the j-th DMD image, ${\tilde{p}_{ij}}$ denotes the coordinates that are re-projected to the DMD image through the parameters to be optimized, Ip denotes the intrinsic parameters of the projector, Rjp and Tjp are the rotation matrix and translation vector of the j-th image, Pi is the input 3-D world coordinate. The objective function F is optimized by the nonlinear least square optimization algorithm [36], and the high-accuracy calibration results are obtained.
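The objective of Eq. (6) is usually handled by a Levenberg–Marquardt solver (e.g. SciPy's `least_squares`). As a dependency-light sketch, the reduced one-dimensional example below refines a hypothetical focal length f and distortion coefficient k1 by plain Gauss–Newton on synthetic, noiseless observations; the model and all numerical values are illustrative assumptions, not the paper's full projector model:

```python
import numpy as np

# Hypothetical ground truth for the reduced model u = u0 + f*x*(1 + k1*x^2).
f_true, k1_true, u0, tz = 1200.0, 0.1, 456.0, 500.0
xn = np.linspace(-50.0, 50.0, 21) / tz              # normalised marker coordinates
obs = u0 + f_true * xn * (1.0 + k1_true * xn**2)    # noiseless "observed" pixels

def residuals(params):
    """Re-projection residuals p_ij - p~_ij for the reduced model."""
    f, k1 = params
    return obs - (u0 + f * xn * (1.0 + k1 * xn**2))

def gauss_newton(params, n_iter=20):
    """Plain Gauss-Newton refinement of (f, k1)."""
    p = np.asarray(params, dtype=float)
    for _ in range(n_iter):
        f, k1 = p
        # Jacobian of the residual vector with respect to (f, k1)
        J = np.column_stack([-xn * (1.0 + k1 * xn**2), -f * xn**3])
        p += np.linalg.lstsq(J, -residuals(p), rcond=None)[0]
    return p

f_est, k1_est = gauss_newton([1000.0, 0.0])         # deliberately poor initial guess
```

In the full calibration, the parameter vector additionally contains the principal point and the per-image rotations and translations, but the structure of the update step is the same.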

Procedure of the proposed projector calibration method mainly includes the following eight steps. Figure 5 shows the flowchart of the proposed method.

  • 1. Fixing the position of the camera and the calibration board. In this position, ensuring that the optical axis of the camera is always parallel to the normal of the calibration board, and then taking a texture image of the hollow ring calibration board.
  • 2. Placing the projector within the camera's field of view. At an arbitrary position, 24 images of the projected fringe patterns (12 horizontal and 12 vertical) are captured by the camera.
  • 3. Calculating the absolute phase value. The horizontal and vertical absolute phase values are obtained using the four-step phase-shift algorithm and the optimum three-fringe number selection method [34,37].
  • 4. Changing the position and orientation of the projector. Steps 2 and 3 are repeated at nine different projecting positions and orientations.
  • 5. Saving the horizontal and vertical absolute phases obtained from the nine projector positions, and extracting all hollow ring center points from the single texture image of the ring calibration board.
  • 6. Establishing the correspondence. The one-to-one mapping relationship between the camera and projector is established, allowing the conversion of camera pixel coordinates to projector pixel coordinates and generating a set of DMD images.
  • 7. Calibrating the projector. The intrinsic and extrinsic parameters of the projector are obtained by substituting the known projector world coordinates and pixel coordinates into the camera calibration method.
  • 8. Optimizing the projector calibration parameters. A nonlinear least square optimization algorithm is used to optimize the objective function F and reduce the re-projection error.

3. Experiments

3.1 System setup

The hardware system mainly consists of a digital projector (DLP LightCrafter 4500, Texas Instruments, USA), a CCD camera (ECO445CVGE, SVS-VISTEK, Germany) and a planar ceramic calibration board with a hollow ring pattern. The brightness of the projector output is 150 lumens, the native resolution is 1140 × 912, the input image resolution is up to 1280 × 800, the refresh rate for 8-bit grayscale patterns is 120 Hz, and the refresh rate for binary patterns is up to 4225 Hz. It can project the R, G and B channels simultaneously. The resolution of the camera is 1296 × 964 pixels with a nominal lens focal length of 12-36 mm, and its pixel size is 3.75 × 3.75 µm. There are 12 × 12 discrete black hollow ring markers on the surface of the calibration board. The spacing between neighboring markers is 15 mm in both the X and Y directions, with an accuracy of 5 µm. The ring calibration board is placed 570 mm from the structured light system. The system setup is shown in Fig. 6.


Fig. 5. Flowchart of the proposed projector calibration method.


Fig. 6. System setup map.


3.2 Experimental results

Experiments have been carried out to demonstrate the validity and feasibility of the proposed method. First, the camera is calibrated using Zhang's method [38], which only requires the camera to observe the ring calibration board at a few (at least two) different orientations. Then, the projector is calibrated with the proposed method, and the projector lens distortion is corrected with the method in [26]. Only one captured image of the calibration board with the hollow ring pattern is needed in the projector calibration procedure, and in this capture position the camera faces the calibration board. The center points in the image of the calibration board are extracted as marker points; 144 marker points are extracted. At each projector position, 24 images of projected fringe patterns (12 horizontal and 12 vertical) are captured by the camera. The absolute phase maps of the horizontal and vertical fringe patterns are computed with the standard four-step phase-shifting algorithm and the optimum three-fringe number selection method with fringe numbers of 100, 99 and 90. In total, one image of the ring calibration board and nine sets of projected fringe patterns are captured for the projector calibration, and the above phase calculation is repeated for each set of images. As shown in Fig. 7, one extracted ring calibration point image and nine generated DMD images are obtained. The black dots indicate the pixel coordinates of the marker points on the camera imaging plane, and the colored crosses indicate the pixel coordinates of the marker points on the projector imaging plane after coordinate transformation; the marker points of both planes are displayed on the same image. After substituting the pixel coordinates and world coordinates of the projector into the camera calibration method, the intrinsic parameters of the projector are obtained, as shown in Table 1.
The second and third columns of Table 1 list the intrinsic parameters with and without the perspective projection transformation error, respectively. The nine positions and orientations of the projector used for calibration are shown in Fig. 8. The extrinsic parameters can be obtained at one position of the projector; for example, the extrinsic parameters [R T] are calculated as
$$\left[ {\begin{array}{cc} R&T \end{array}} \right] = \left[ {\begin{array}{cccc} {0.9370}&{0.0080}&{ - 0.3494}&{ - 90.5941}\\ {0.0723}&{ - 0.9825}&{0.1714}&{ - 57.0806}\\ { - 0.3420}&{ - 0.1859}&{ - 0.9211}&{524.2281} \end{array}} \right].$$


Fig. 7. Generation of the DMD images. (a)-(i) are the generated DMD images of the nine projector positions.


Fig. 8. Positions and orientations of the projector for calibration evaluation.


Table 1. Comparison of intrinsic parameters of the two methods.

3.3 Experimental analysis

From the calibration data, it can be seen that the horizontal principal point of the projector lies near its theoretical position. However, because of the elevation angle introduced during assembly and manufacturing, the projector normally projects obliquely upward, so the vertical principal point is larger than its theoretical value, which is consistent with the actual situation.

Table 2 and Fig. 9 show the re-projection error values and error maps of the projector. The average re-projection error of the proposed method is lower than both that of the method ignoring the perspective error and that of the existing method [39], and the re-projection errors lie within ± 0.28 pixels when the perspective projection transformation error is avoided. Moreover, the re-projection errors in Fig. 9(b) are more tightly clustered, which verifies that the proposed method effectively avoids elliptical deformation of the rings on the calibration board, eliminates perspective errors, and further improves the center extraction accuracy.


Fig. 9. Re-projection error images of the projector. (a) With and (b) without perspective projection transformation error.


Table 2. Re-projection error values of the projector.

To quantitatively evaluate the performance of the proposed projector calibration method, a white board is placed on a precision translation stage with an accuracy of 1 µm and moved to the positions of -2.5 mm, 0 mm, 1.25 mm and 7 mm. The 3-D shape of the white board at the -2.5 mm position and the 0 mm position (chosen as the reference) is reconstructed using the methods with and without the perspective projection transformation error. Figures 10(a) and (b) show the distance difference images of the white board for the two methods. The shape irregularity is significant when the perspective error is not eliminated and is greatly reduced by applying the proposed method. Figures 11(a), (b) and (c) show the measured depth profiles along the middle row for the three positions of -2.5 mm, 1.25 mm and 7 mm with the proposed method, where the x-axis is the pixel position along the row direction (1, 2, 3, …, 1296) and the y-axis is the reconstructed depth. The absolute errors (the absolute difference between the measured average distance and the stage moving distance) of the measured depth along the middle row are 0.021 mm, 0.019 mm and 0.016 mm at the three positions. The maximum absolute error along the depth direction is 0.087 mm for the existing method [34] and reduces to 0.021 mm once the perspective error is eliminated, i.e., the error is approximately 4.1 times smaller. The comparison experiment confirms the effectiveness of eliminating the perspective error to improve the measurement accuracy.


Fig. 10. Distance difference images of the reconstructed white board between - 2.5 mm position and the reference position. (a) With and (b) without perspective projection transformation error.


Fig. 11. Measured depth profile images along the middle row at the position of -2.5 mm, 1.25 mm and 7 mm in (a), (b) and (c).


4. Conclusion

A novel projector calibration method has been proposed to provide higher measurement accuracy than the existing methods. It essentially removes the influence of the geometric-optical perspective phenomenon on the calibration results by ensuring that the camera's optical axis is always parallel to the normal of the calibration board; only the projector needs to be moved. The comparison experiments verify that the accuracy of the proposed projector calibration method is higher than that of the method ignoring the perspective projection transformation error. The maximum absolute error of the measured depth along the middle row is 0.021 mm, approximately 4.1 times smaller than that of the existing method.

In comparison with the existing projector calibration methods for FPP based on structured light systems, the proposed method offers several advantages. (1) Simple operation: the camera only needs to capture one ring texture image and nine sets of projected fringe patterns to extract the center coordinates of the marker points and calculate the absolute phase at each marker point. (2) Simple equipment: no additional experimental equipment is required; the system only needs a camera, a projector and a ring calibration board. (3) High accuracy: the proposed projector calibration method essentially eliminates the perspective error, ensures the extraction accuracy of the ring marker points, and further improves the calibration accuracy of the projector.

Further research is needed to overcome some limitations. On the one hand, the acquisition time of the fringe images needs to be reduced. Orthogonal red/blue sinusoidal fringe images combining the horizontal and vertical directions could be projected; the two directions can then be separated by color-channel demodulation during the phase calculation, halving the number of acquired patterns. On the other hand, the perspective projection transformation error of the camera should also be considered. A phase target could be used to remove the perspective error during camera calibration, so as to further improve the accuracy of 3-D measurement.

Funding

National Key Research and Development Program of China (2017YFF0106404); National Natural Science Foundation of China (51675160, 52075147); Hebei Province Graduate Innovation Funding Project (CXZZBS2021032).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Xu, S. Liu, A. Wan, B. Gao, Q. Yi, D. Zhao, R. Luo, and K. Chen, “An absolute phase technique for 3D profile measurement using four-step structured light pattern,” Opt. Lasers Eng. 50(9), 1274–1280 (2012).

2. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithm for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

3. Z. Niu, N. Gao, Z. Zhang, F. Gao, and X. Jiang, “3D shape measurement of discontinuous specular objects based on advanced PMD with bi-telecentric lens,” Opt. Express 26(2), 1615 (2018).

4. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).

5. T. B. Moeslund and E. Granum, “A survey of computer vision-based human motion capture,” Comput. Vis. Image Underst. 81(3), 231–268 (2001).

6. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

7. H. Guo and S. Xing, “Iterative calibration method for measurement system having lens distortions in fringe projection profilometry,” Opt. Express 28(2), 1177 (2020).

8. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

9. C. R. Coggrave and J. M. Huntley, “High-speed surface profilometer based on a spatial light modulator and pipeline image processor,” Opt. Eng. 38(9), 1573–1581 (1999).

10. M. Dai, F. Yang, and X. He, “Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities,” Appl. Opt. 51(12), 2062–2069 (2012).

11. H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. 43(14), 2906–2914 (2004).

12. C. Chen, N. Gao, X. Wang, Z. Zhang, F. Gao, and X. Jiang, “Generic exponential fringe model for alleviating phase error in phase measuring profilometry,” Opt. Lasers Eng. 110, 179–185 (2018).

13. S. Xing and H. Guo, “Correction of projector nonlinearity in multi-frequency phase-shifting fringe projection profilometry,” Opt. Express 26(13), 16277–16291 (2018).

14. B. Pan, K. Qian, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. 34(4), 416–418 (2009).

15. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015).

16. S. Ma, R. Zhu, C. Quan, B. Li, C. J. Tay, and L. Chen, “Blind phase error suppression for color-encoded digital fringe projection profilometry,” Opt. Commun. 285(7), 1662–1668 (2012).

17. Y. Fu, Z. Wang, G. Jiang, and J. Yang, “A novel three-dimensional shape measurement method based on a look-up table,” Optik 125(6), 1804–1808 (2014).

18. S. Zhang and S. T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007).

19. T. Anna, S. K. Dubey, C. Shakher, A. Roy, and D. S. Mehta, “Sinusoidal fringe projection system based on compact and non-mechanical scanning low-coherence Michelson interferometer for three-dimensional shape measurement,” Opt. Commun. 282(7), 1237–1242 (2009).

20. N. Berberova, E. Stoykova, H. Kang, J. S. Park, and B. Ivanov, “SLM-based sinusoidal fringe projection under coherent illumination,” Opt. Commun. 304, 116–122 (2013). [CrossRef]  

21. G. A. Ayubi, M. D. Martino, J. R. Alonso, A. Fernández, C. D. Perciante, and J. A. Ferrari, “Three-dimensional profiling with binary fringes using phase-shifting interferometry algorithms,” Appl. Opt. 50(2), 147–154 (2011). [CrossRef]  

22. G. A. Ayubi, M. D. Martino, J. R. Alonso, A. Fernández, J. L. Flores, and J. A. Ferrari, “Color encoding of binary fringes for gamma correction in 3-D profiling,” Opt. Lett. 37(8), 1325–1327 (2012). [CrossRef]  

23. J. Zhu, Z. Pei, X. Su, and Z. You, “Accurate and fast 3D surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24(25), 28549–28560 (2016). [CrossRef]  

24. C. Chen and H. Chien, “An incremental target-adapted strategy for active geometric calibration of projector-camera systems,” Sensors 13(2), 2664–2681 (2013). [CrossRef]  

25. H. Anwar, I. Din, and K. Park, “Projector calibration for 3D scanning using virtual target images,” Int. J. Precis. Eng. Manuf. 13(1), 125–131 (2012). [CrossRef]  

26. J. Yu, N. Gao, Z. Zhang, and Z. Meng, “High sensitivity fringe projection profilometry combining optimal fringe frequency and optimal fringe direction,” Opt. Lasers Eng. 129, 106068 (2020). [CrossRef]  

27. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017). [CrossRef]  

28. R. Chen, J. Xu, S. Zhang, H. Chen, Y. Guan, and K. Chen, “A self-recalibration method based on scale-invariant registration for structured light measurement systems,” Opt. Lasers Eng. 88, 75–81 (2017). [CrossRef]  

29. Z. Huang, J. Xi, Y. Yu, and Q. Guo, “Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images,” Appl. Opt. 54(3), 347–356 (2015). [CrossRef]  

30. R. M. Haralick, “Digital step edges from zero crossing of second directional derivatives,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6(1), 58–68 (1984). [CrossRef]  

31. P. V. C. Hough, “Method and means for recognizing complex patterns,” U.S. patent 3,069,654 (18 December 1962).

32. H. G. Dantanarayana and J. M. Huntley, “Object recognition and localization from 3D point clouds by maximum-likelihood estimation,” R. Soc. Open Sci. 4(8), 160693 (2017). [CrossRef]  

33. C. Liu, R. Che, and Y. Gao, “High precision location algorithm for optical feature of vision measurement,” J. Phys.: Conf. Ser. 48(1), 474–478 (2006). [CrossRef]  

34. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013). [CrossRef]  

35. S. J. Ahn, H. J. Warnecke, and R. Kotowski, “Systematic geometric image measurement errors of circular object targets: mathematical formulation and correction,” Photogramm. Rec. 16(93), 485–502 (1999). [CrossRef]  

36. M. Lourakis, “A brief description of the Levenberg-Marquardt algorithm implemented by levmar,” Foundation of Research and Technology 4, 1–6 (2005).

37. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for simultaneous 3D shape and color using optimum 3-frequency selection,” Opt. Express 14(14), 6444–6455 (2006). [CrossRef]  

38. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

39. J. Yu, Z. Meng, N. Gao, and Z. Zhang, “A three-dimensional measurement system calibration method based on red/blue orthogonal fringe projection,” Opt. Lasers Eng. 139, 106506 (2021). [CrossRef]  

Figures (11)

Fig. 1. Ceramic hollow ring calibration board. (a) One captured image of the manufactured board; (b) extracted ring center point image; (c) perspective projection transformation of space ring.
Fig. 2. The calibration configuration of the proposed method.
Fig. 3. The optical path diagram. (a) Camera; (b) projector.
Fig. 4. Schematic diagram of the camera adjustment. (a) Position of the camera and calibration board; (b) window displayed by the computer after adjustment.
Fig. 5. Flowchart of the proposed projector calibration method.
Fig. 6. System setup map.
Fig. 7. Generation of the DMD images. (a)-(i) are the generated DMD images of the nine projector positions.
Fig. 8. Positions and orientations of the projector for calibration evaluation.
Fig. 9. Re-projection error images of the projector. (a) With and (b) without perspective projection transformation error.
Fig. 10. Distance difference images of the reconstructed white board between the −2.5 mm position and the reference position. (a) With and (b) without perspective projection transformation error.
Fig. 11. Measured depth profile images along the middle row at the positions of −2.5 mm, 1.25 mm and 7 mm in (a), (b) and (c).

Tables (2)

Table 1. Comparison of intrinsic parameters of the two methods.
Table 2. Re-projection error values of the projector.

Equations (6)

$$I_n(x, y) = A + B \cos\left[\varphi(x, y) + \delta_n\right],$$

$$\varphi(x, y) = \arctan\left[\frac{\sum_{n=1}^{N} I_n(x, y)\sin(\delta_n)}{\sum_{n=1}^{N} I_n(x, y)\cos(\delta_n)}\right],$$

$$\phi(x, y) = \varphi(x, y) + 2k(x, y)\pi,$$

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = s \begin{bmatrix} f_u^p & 0 & u_0^p & 0 \\ 0 & f_v^p & v_0^p & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{R}_p & \mathbf{T}_p \\ \mathbf{0}_3^{\mathrm{T}} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = s\, \mathbf{I}_p \mathbf{E}_p \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix},$$

$$\begin{cases} u_i^p = \dfrac{V \phi_v(u_i^c, v_i^c)}{2\pi N_v} + \dfrac{V}{2} \\[2mm] v_i^p = \dfrac{H \phi_h(u_i^c, v_i^c)}{2\pi N_h} + \dfrac{H}{2} \end{cases},$$

$$F = \min \sum_{i=1}^{N} \sum_{j=1}^{M} \left\| p_{ij} - \tilde{p}_{ij}\left(\mathbf{I}_p, \mathbf{R}_j^p, \mathbf{T}_j^p, P_i\right) \right\|^2,$$
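Equation (5) maps the absolute phases measured at a camera pixel to the corresponding projector pixel. A minimal sketch of that mapping, assuming a V×H DMD with N_v vertical and N_h horizontal fringe periods across the pattern and absolute phase equal to zero at the pattern center (the default resolution and period counts below are illustrative, not the authors' values):

```python
import numpy as np

def camera_to_projector(phi_v, phi_h, V=912, H=1140, N_v=16, N_h=20):
    """Eq. (5): convert absolute phases (phi_v, phi_h) measured at camera
    pixels into projector pixel coordinates (u_p, v_p). One fringe period
    spans V/N_v (resp. H/N_h) projector pixels, so phase scales linearly
    to pixel position, offset to the pattern center."""
    u_p = V * phi_v / (2 * np.pi * N_v) + V / 2
    v_p = H * phi_h / (2 * np.pi * N_h) + H / 2
    return u_p, v_p
```

Evaluating this at every extracted marker center in the camera image yields the one-to-one camera-projector correspondence used to calibrate the projector as a reverse camera.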