Optica Publishing Group

Single-shot 3D shape measurement of discontinuous objects based on a coaxial fringe projection system

Open Access

Abstract

Fringe projection profilometry has been widely used in high-speed three-dimensional (3D) shape measurement. To improve the speed without loss of accuracy, we present a novel single-shot 3D shape measuring system that utilizes a coaxial fringe projection system and a 2CCD camera. The coaxial fringe projection system, comprising a visible light (red, green, and blue) projector and an infrared (IR) light projector, can simultaneously project red, green, blue, and IR fringe patterns. The 2CCD camera, as the name suggests, has two CCD chips that can acquire visible and IR fringe patterns at the same time. Combining the two-step phase-shifting algorithm, Fourier transform profilometry, and the optimum three-frequency selection method, 3D shape measurement of complex surfaces such as large slopes or discontinuous objects can be obtained from single-shot acquisition. A virtual fringe projection measurement system has been established to generate pre-deformed fringe patterns to correct positional deviations of the coaxial fringe projection system. This method has been applied to simulations and experiments on static and dynamic objects with promising results.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

During recent years, high-speed three-dimensional (3D) shape measurement has been utilized in various applications, including industrial on-line detection, virtual reality, biometric identification, cultural heritage preservation, reverse engineering, and security systems [1]. Fringe projection profilometry (FPP) is one of the main techniques, with the ability to provide non-contact, full-field, accurate, and high-speed 3D shape measurement [2,3]. Based on the procedure of phase retrieval, traditional FPP can be classified into two categories: Fourier transform profilometry (FTP) [4,5] and phase-shifting profilometry (PSP) [6–8]. FTP utilizes a single fringe pattern to determine the unwrapped phase with high speed, although it is a challenge to overcome the band overlapping problem. PSP calculates the phase from the intensities of multiple phase-shifted fringe patterns with high accuracy and resolution, but it needs at least three patterns to retrieve the unwrapped phase. When complex surfaces are involved, several additional fringe patterns are needed to obtain the absolute phase with a temporal phase-unwrapping algorithm [9,10]. It therefore remains a challenge for FPP to use fewer fringe patterns, obtain a higher-quality phase, and retrieve the absolute phase. Many high-speed techniques have already been proposed, including a dual-frequency phase-shifting algorithm [11], a 2+1 phase-shifting algorithm [12], approaches utilizing the multicolor RGB channels of a projector [6,13,14], and color-encoded approaches [15,16]. Binary patterns have also been demonstrated for achieving high projection speed [17–19]. A secondary color camera and an infrared (IR) fringe projection system have also been proposed to simultaneously capture natural 2D color texture and 3D shape in real time [20,21].

In this paper, we propose a single-shot measurement approach that increases speed by using fewer fringe patterns. It can obtain a high-quality absolute phase and overcome the problem of retrieving the absolute phase for objects that are discontinuous or have large slopes. First, we present the coaxial projection measurement system, comprising an IR light projector (ILP), a visible light projector (VLP), a beam splitter (BS), and a dual-sensor charge-coupled-device (2CCD) camera. The coaxial fringe projection system projects two fringe patterns (one composite color fringe pattern and one IR fringe pattern) onto the object, and the 2CCD captures the two deformed fringe patterns synchronously. Then we propose an algorithm for phase extraction that combines the two-step phase-shifting (TPS) algorithm [22], FTP, and the optimum three-frequency selection method [23], allowing the 3D shape of objects to be recovered with a single-shot measurement. In addition, a virtual fringe projection system is established to obtain pre-deformed fringe patterns for the correction of positional deviations of the coaxial fringe projection measurement system. Finally, several complex test surfaces are used to validate the proposed method, including a mask, a set of steps, and a moving human hand. The remainder of this paper is organized as follows: Section 2 describes the principle of the proposed single-shot measurement; Section 3 presents the technique to correct deviations of the coaxial projection system; simulation and actual experiments are reported in Section 4; and Section 5 presents the conclusions.

2. PRINCIPLE

In this section, we describe the principle of FPP, the coaxial projection measurement system, and phase calculation of the proposed single-shot measurement.

A. Fringe Projection Profilometry

Figure 1 shows the geometry of FPP in which the optical axis P1P2 of a projector lens intersects the optical axis E1E2 of a camera lens at point O on a reference plane R, which serves as the reference from which object height h(x,y) is measured. Δφ(x,y) is the difference between the unwrapped phase on the measured object and the reference plane. θ is the angle between P1P2 and E1E2. E2 is located at a distance L0 from plane R, and d is the distance between P2 and E2. A, C are two points on the reference plane R, and D is a point on the object surface. P0 is the period of the projected fringe pattern on a virtual plane perpendicular to the projection axis.

Fig. 1. Geometry of FPP.

The relationship between unwrapped phase and height data is a complicated function of pixel coordinate positions of x and y, which can be represented by the following equation [24]:

$$h(x,y)=\dfrac{L_0}{\dfrac{2\pi L_0^2 d\cos\theta}{P_0\,\Delta\varphi(x,y)\left(L_0+x\cos\theta\sin\theta\right)^2}-\dfrac{d\cos\theta\sin\theta}{L_0+x\cos\theta\sin\theta}+1}.\tag{1}$$
In a practical system, it is difficult to determine the height using the parameters of the optical setup, since it is impossible to measure system parameters L0, d, θ, and P0 accurately. Here, we can use a polynomial-based calibration method to build up the relationship between unwrapped phase and height data:
$$h(x,y)=\sum_{n=0}^{N}a_n(x,y)\,\Delta\varphi(x,y)^{\,n},\tag{2}$$
where $a_n(x,y)$ is a coefficient set containing the systematic parameters.
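As a concrete illustration of the polynomial mapping above, the per-pixel coefficients can be recovered by least squares once the unwrapped phase and the true height are known at several calibration positions. The following is a minimal numpy sketch; the function names and the (K, H, W) array layout are illustrative assumptions, not from the paper:

```python
import numpy as np

def fit_phase_to_height(dphi, height, order=3):
    """Per-pixel least-squares fit of h = sum_n a_n(x,y) * dphi^n.

    dphi, height: (K, H, W) arrays from K calibration positions.
    Returns coefficients with shape (order + 1, H, W), lowest order first.
    """
    K, H, W = dphi.shape
    coeffs = np.empty((order + 1, H, W))
    for i in range(H):
        for j in range(W):
            # np.polyfit returns the highest order first; flip to lowest-first
            coeffs[:, i, j] = np.polyfit(dphi[:, i, j], height[:, i, j], order)[::-1]
    return coeffs

def phase_to_height(dphi, coeffs):
    """Evaluate the per-pixel polynomial for one phase-difference map."""
    powers = dphi[None, ...] ** np.arange(coeffs.shape[0])[:, None, None]
    return np.sum(coeffs * powers, axis=0)
```

In practice the calibration planes would be measured at known translations; a low polynomial order is usually sufficient.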

B. Coaxial Projection Measurement System

The schematic diagram of the coaxial projection system is shown in Fig. 2. It comprises an ILP, a VLP, a BS, and a 2CCD camera. The BS is a 50:50 beam splitter (50% transmittance, 50% reflectance). The visible light projected by the VLP is transmitted through the BS to the objects, while the IR light projected by the ILP is reflected from the BS. The virtual image of the ILP is overlapped with the VLP via the BS, so the two projectors occupy identical equivalent spatial positions and can project red, green, blue, and IR fringe patterns. The 2CCD camera has two chip sensors to acquire the visible color image and the IR image synchronously.

Fig. 2. Schematic diagram of coaxial projection measurement system.

Four sinusoidal fringe patterns are encoded into four color channels, comprising one composite color fringe pattern and one IR fringe pattern. The two fringe patterns are simultaneously projected onto the surface of the object by the coaxial projection system, and the 2CCD synchronously captures the deformed composite and IR fringe patterns. With the proposed algorithm for phase calculation, we can recover the absolute phase of objects with a single-shot measurement.

C. Phase Calculation

In the current approach, the composite color fringe pattern is encoded with two sinusoidal fringe patterns in its red and blue channels to establish TPS for wrapping the primary phase, while one sinusoidal fringe pattern in its green channel is encoded to accomplish FTP. Additionally, one IR sinusoidal fringe pattern is utilized to establish FTP. The four ideal independent patterns for the four channels are shown in Fig. 3. The intensities of the four channels can be described as

$$I_R(x,y)=I_R'(x,y)+I_R''(x,y)\cos[2\pi f_0x+\varphi(x,y)],\tag{3}$$
$$I_B(x,y)=I_B'(x,y)+I_B''(x,y)\cos\!\left[2\pi f_0x+\varphi(x,y)+\frac{\pi}{2}\right],\tag{4}$$
$$I_G(x,y)=I_G'(x,y)+I_G''(x,y)\cos[2\pi f_0x+\varphi(x,y)],\tag{5}$$
$$I_{IR}(x,y)=I_{IR}'(x,y)+I_{IR}''(x,y)\cos[2\pi f_0x+\varphi(x,y)],\tag{6}$$
where $I_i(x,y)$, $I_i'(x,y)$, and $I_i''(x,y)$, $i=R,B,G,IR$, are the intensity, background intensity, and intensity modulation of the red, blue, and green components of the composite color fringe pattern and of the IR pattern, $f_0$ denotes the fundamental frequency of the sinusoidal fringe, and $\varphi(x,y)$ is the phase to be solved.

Fig. 3. Four ideal independent patterns for four channels.

1. Phase Calculation by TPS

After removing the background intensity using the mean envelope method [25], two sinusoidal fringes in the red and blue components of the composite fringe pattern can be rewritten as

$$\frac{I_B(x,y)}{I_R(x,y)}=\frac{I_B''(x,y)\cos\!\left[2\pi f_0x+\varphi(x,y)+\frac{\pi}{2}\right]}{I_R''(x,y)\cos[2\pi f_0x+\varphi(x,y)]}=-\tan[2\pi f_0x+\varphi(x,y)].\tag{7}$$
The wrapped phase can be computed by
$$2\pi f_0x+\varphi(x,y)=\arctan\!\left[-\frac{I_B(x,y)}{I_R(x,y)}\right].\tag{8}$$
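The two-step relation above reduces, after background removal and under the assumption of equal red and blue modulations, to a four-quadrant arctangent. A minimal numpy sketch with a synthetic fringe pair (the test signal and parameter values are made up for illustration):

```python
import numpy as np

def tps_wrapped_phase(I_red, I_blue):
    """Two-step phase shifting: the red channel carries cos(a) and the
    blue channel cos(a + pi/2) = -sin(a), so with equal modulations
    arctan2(-I_blue, I_red) returns a = 2*pi*f0*x + phi wrapped to
    (-pi, pi]."""
    return np.arctan2(-I_blue, I_red)

# synthetic one-row check: 8 carrier fringes plus a smooth test phase
x = np.linspace(0.0, 1.0, 512)
phi = 1.5 * x**2
alpha = 2 * np.pi * 8 * x + phi
I_red = 0.7 * np.cos(alpha)                 # background already removed
I_blue = 0.7 * np.cos(alpha + np.pi / 2)
wrapped = tps_wrapped_phase(I_red, I_blue)
```

Using `arctan2` rather than `arctan` of the ratio gives the full four-quadrant wrapped phase and avoids division by near-zero red intensities.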

2. Phase Calculation by FTP

In the 2D FTP case, the green channel of the composite color fringe pattern and the IR fringe pattern can be rewritten as

$$I(x,y)=I'(x,y)+c(x,y)\exp\!\left(2\pi jf_{0x}x+2\pi jf_{0y}y\right)+c^{*}(x,y)\exp\!\left(-2\pi jf_{0x}x-2\pi jf_{0y}y\right),\tag{9}$$
where
$$c(x,y)=\frac{1}{2}I''(x,y)\exp[j\varphi(x,y)],\tag{10}$$
and the asterisk (*) denotes complex conjugation. After background intensity removal, Eq. (9) is Fourier transformed with respect to x and y to obtain
$$I(u,v)=C(u-f_{0x},v-f_{0y})+C^{*}(u+f_{0x},v+f_{0y}).\tag{11}$$
By the use of a filter function, we can obtain the fundamental component $C(u-f_{0x},v-f_{0y})$. The wrapped phase is then extracted from its inverse Fourier transform:
$$\varphi(x,y)=\arctan\frac{\operatorname{Im}[c(x,y)]}{\operatorname{Re}[c(x,y)]},\tag{12}$$
where Re and Im represent the real and imaginary parts of $c(x,y)$, respectively.

While utilizing FTP to calculate the wrapped phase, leakage effects are inevitable, especially with discontinuous surfaces. The phase errors in the neighborhood of discontinuities can lead to inaccurate reconstructions. However, they can be eliminated by setting an appropriate low modulation threshold to identify valid data [14].
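The FTP pipeline just described can be sketched in one dimension as follows, assuming the background has already been removed: transform, keep a band around the fundamental, inverse transform, take the angle. The rectangular band-pass and its width are simplifications of the paper's unspecified 2D filter function:

```python
import numpy as np

def ftp_wrapped_phase(fringe, f0, half_width):
    """1D FTP along the last axis: FFT, keep a rectangular band around
    the carrier f0 (in cycles per field), inverse FFT, and return the
    wrapped phase including the carrier term. Assumes f0 > half_width
    so the band stays on the positive-frequency side."""
    n = fringe.shape[-1]
    spectrum = np.fft.fft(fringe, axis=-1)
    mask = np.zeros(n)
    mask[f0 - half_width:f0 + half_width + 1] = 1.0  # keep C, drop C* and DC
    c = np.fft.ifft(spectrum * mask, axis=-1)
    return np.angle(c)
```

A smoother window (e.g., a Hanning band-pass) would reduce the leakage effects mentioned above at the cost of some spatial resolution.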

The measurement range of traditional FTP is [26]

$$\left|\frac{\partial h(x,y)}{\partial x}\right|<\frac{L_0}{3d}.\tag{13}$$
After removing the background intensity of the deformed fringe pattern, the zero component is subtracted from the Fourier spectrum of the deformed stripe pattern. Compared to traditional FTP, the measurement range can be extended by nearly three times:
$$\left|\frac{\partial h(x,y)}{\partial x}\right|<\frac{L_0}{d}.\tag{14}$$
Overall, three sets of wrapped phases $\varphi(x,y)$ are obtained. To obtain the unwrapped phase, we select the optimum three-frequency method with fringe numbers of $N$, $N-1$, and $N-\sqrt{N}$ to resolve the fringe order ambiguity, as the beat obtained between $N$ and $N-1$ is a single fringe over the full field of view [23]. It can be used to calculate the unwrapped phase of complex objects with large slopes and discontinuities point by point independently.
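The heterodyne step underlying this frequency selection can be sketched for one frequency pair: the beat between the two wrapped phases spans a single fringe and fixes the fringe order of the high-frequency map. This sketch assumes wrapped phases in the [0, 2π) convention and noiseless input; the published method [23] additionally uses the third map for noise robustness:

```python
import numpy as np

def unwrap_two_frequency(phi_hi, phi_lo, n_hi, n_lo):
    """One heterodyne step: wrapped phases (in [0, 2*pi)) for fringe
    numbers n_hi and n_lo whose beat n_hi - n_lo spans a single fringe
    over the field. The beat phase scaled by n_hi / (n_hi - n_lo)
    approximates the unwrapped high-frequency phase, from which the
    integer fringe order k is rounded out."""
    beat = np.mod(phi_hi - phi_lo, 2 * np.pi)   # single fringe over the field
    k = np.round((beat * n_hi / (n_hi - n_lo) - phi_hi) / (2 * np.pi))
    return phi_hi + 2 * np.pi * k
```

Because the fringe order is computed per pixel from the beat, no spatial path-following is needed, which is what makes the method robust to discontinuities.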

3. DEVIATION CORRECTION OF COAXIAL PROJECTION SYSTEM

The accuracy of the coaxial projection system cannot be guaranteed by hardware adjustments alone; hence, a robust correspondence needs to be established between the two projectors. A virtual fringe projection system based on a camera and projector model is proposed to quantify the impact of positional deviations on the phase calculation and to determine the acceptable position deviation range of the coaxial projection system. After calibrating the coaxial projectors synchronously, pre-deformed fringe patterns can be obtained with the virtual fringe projection system to correct the positional deviations.

A. Virtual Fringe Projection System

The virtual projection system based on the camera and projector model is shown in Fig. 4.

Fig. 4. Model of virtual fringe projection system.

1. Camera and Projector Model

In Fig. 4, OcXcYcZc, OpXpYpZp, OwXwYwZw represent the camera, projector, and world coordinate system, respectively, and ocucvc, opupvp represent the camera and projector image coordinate system, respectively. P(xw,yw,zw) is an arbitrary point on the object’s surface, while p(uc,vc) and p(up,vp) represent its corresponding imaging point on the imaging plane and projection point on the projection plane, respectively.

The relationship between a point P(xw,yw,zw) on the object and its imaging point p(uc,vc) can be described as follows based on the pinhole camera model [27]:

$$sp_c=A_c[R_c\;T_c]P_w=H_cP_w,\tag{15}$$
$$A_c=\begin{bmatrix}f_u^c&0&u_0^c&0\\0&f_v^c&v_0^c&0\\0&0&1&0\end{bmatrix},\tag{16}$$
where $p_c=[u_c,v_c,1]^T$ is the homogeneous coordinate of the imaging point p in $o_cu_cv_c$, $P_w=[x_w,y_w,z_w,1]^T$ is the homogeneous coordinate of the point P in $O_wX_wY_wZ_w$, and $A_c$ is the intrinsic parameter matrix of the camera; $(u_0^c,v_0^c)$ is the coordinate of the principal point in $o_cu_cv_c$, $f_u^c$ and $f_v^c$ are the equivalent focal lengths along the two orthogonal directions defined by the camera image plane, and s is a scale factor. The extrinsic parameter matrix $[R_c\;T_c]$ represents the relative rotation and translation between $O_wX_wY_wZ_w$ and $O_cX_cY_cZ_c$, and $[\;]^T$ denotes the transpose. $H_c=A_c[R_c\;T_c]$ is the projection matrix of the camera.

Actually, the pinhole camera model mentioned above is an ideal model that does not take into account the effects of camera lens distortion. In the actual measurement system, the camera lens usually exhibits nonlinear optical distortion. In practice, the lens distortion is modeled by

$$\begin{aligned}x_d^c&=\left(1+k_1r^2+k_2r^4\right)x_n^c+\left[2k_3x_n^cy_n^c+k_4\!\left(r^2+2(x_n^c)^2\right)\right],\\ y_d^c&=\left(1+k_1r^2+k_2r^4\right)y_n^c+\left[k_3\!\left(r^2+2(y_n^c)^2\right)+2k_4x_n^cy_n^c\right],\\ r^2&=(x_n^c)^2+(y_n^c)^2,\end{aligned}\tag{17}$$
where $(x_n^c,y_n^c)$ denotes the normalized location of a distortion-free point $(u_c,v_c)$, $(x_d^c,y_d^c)$ denotes the distorted point $(u_d^c,v_d^c)$ in the imaging plane, $k_1$ and $k_2$ are the radial distortion coefficients, and $k_3$ and $k_4$ are the tangential distortion coefficients. The relationship between the distorted point $[u_d^c,v_d^c,1]^T$ and the distortion-free point $[u_c,v_c,1]^T$ is
$$p_c=A_cp_d^c.$$
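The distortion model above translates directly into code. A small numpy sketch (the coefficient values used in the check are arbitrary illustrations):

```python
import numpy as np

def distort_normalized(xn, yn, k1, k2, k3, k4):
    """Apply radial (k1, k2) plus tangential (k3, k4) distortion to
    normalized, distortion-free coordinates (xn, yn), following the
    model in the text."""
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = radial * xn + (2 * k3 * xn * yn + k4 * (r2 + 2 * xn**2))
    yd = radial * yn + (k3 * (r2 + 2 * yn**2) + 2 * k4 * xn * yn)
    return xd, yd
```

Note the inverse mapping (undistortion) has no closed form and is usually computed iteratively.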
Similarly, the projector can be viewed as an inverse camera with similar lens distortion [28]. The relationship between a point P(xw,yw,zw) on the object and its projection point p(up,vp) in opupvp is established:
$$sp_p=A_p[R_p\;T_p]P_w=H_pP_w,\tag{18}$$
where $p_p=[u_p,v_p,1]^T$ is the homogeneous coordinate of the projection point p in $o_pu_pv_p$, and $A_p$ is the intrinsic parameter matrix of the projector, including the principal point $(u_0^p,v_0^p)$ and the focal lengths $f_u^p$ and $f_v^p$ along the u and v axes of the projection plane. The extrinsic parameter matrix $[R_p\;T_p]$ relates $O_wX_wY_wZ_w$ to $O_pX_pY_pZ_p$. The rotation matrix $R_p$ is a 3×3 matrix, $Om_p$ is the 3×1 rotation vector associated with $R_p$, and $T_p$ is a 3×1 translation vector. $H_p=A_p[R_p\;T_p]$ is the projection matrix of the projector.

2. Simulation of the Imaged Object in the Camera Coordinate System

The virtual projection system is set up in OcXcYcZc. The equation of measured objects can be expressed as follows:

$$F(x_c,y_c,z_c)=0.\tag{19}$$
For point P(xc,yc,zc), the corresponding pixel coordinates p(uc,vc) can be obtained by the camera model:
$$sp_c=A_cP_c,\tag{20}$$
where $P_c=[x_c,y_c,z_c,1]^T$ is the homogeneous coordinate of the point P in $O_cX_cY_cZ_c$.

With a known point $P(x_w,y_w,z_w)$ in $O_wX_wY_wZ_w$ and its imaging point $p(u_c,v_c)$ in $o_cu_cv_c$, the projection matrix of the camera $H_c$ from $O_wX_wY_wZ_w$ to $o_cu_cv_c$ can be calculated by Eq. (15). Multiplying Eq. (15) by $(H_c)^{-1}$, it can be rewritten as

$$(H_c)^{-1}p_c=P_w.\tag{21}$$
Similarly, the projection matrix of the projector $H_p$ from $O_wX_wY_wZ_w$ to $o_pu_pv_p$ can be calculated by Eq. (18). For the same point $P(x_w,y_w,z_w)$, we substitute Eq. (21) into Eq. (18), and the correspondence between the camera and the projector can be established:
$$sp_p=H_pP_w=H_p(H_c)^{-1}p_c=H_{pc}p_c,\tag{22}$$
where $H_{pc}=H_p(H_c)^{-1}$.

Substituting Eq. (20) into Eq. (22), Eq. (22) is reformulated as

$$sp_p=H_{pc}p_c=H_{pc}A_cP_c.\tag{23}$$
Camera intrinsic parameters such as f, dx, dy, $u_0^c$, and $v_0^c$ can be predefined, so for each point $P(x_c,y_c,z_c)$, the corresponding image coordinate $p(u_c,v_c)$ can be obtained. Similarly, the corresponding projector coordinate $p(u_p,v_p)$ can be obtained by Eq. (23). Hence, we have established the correspondence between the camera image coordinates of the measured object and the projector image coordinates. Using the principle of fringe projection, the corresponding deformed fringe patterns can be generated.
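This camera-projector correspondence can be illustrated by projecting the same 3D point through a virtual camera and a virtual projector sharing the pinhole model. In the sketch below both devices are given the same intrinsics and the projector is offset by a made-up baseline; all numbers are illustrative, not the paper's calibration:

```python
import numpy as np

def intrinsic(fu, fv, u0, v0):
    """3x4 intrinsic matrix in the form used in the text."""
    return np.array([[fu, 0, u0, 0],
                     [0, fv, v0, 0],
                     [0,  0,  1, 0]], dtype=float)

def extrinsic(R, T):
    """4x4 homogeneous extrinsic transform [R T; 0 1]."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

def project(H, P):
    """s p = H P; returns the dehomogenized pixel coordinate."""
    p = H @ np.append(P, 1.0)
    return p[:2] / p[2]

# illustrative numbers only: f / pixel pitch = 12 mm / 4.65 um ~ 2580.6 px,
# camera at the world origin, projector offset by a 100 mm baseline
A = intrinsic(2580.6, 2580.6, 512.0, 384.0)
H_c = A @ extrinsic(np.eye(3), np.zeros(3))
H_p = A @ extrinsic(np.eye(3), np.array([100.0, 0.0, 0.0]))
P = np.array([10.0, -5.0, 800.0])            # a point on the z = 800 mm plane
uc = project(H_c, P)                          # its camera pixel
up = project(H_p, P)                          # the corresponding projector pixel
```

Sampling a fringe function at `up` and depositing its value at `uc` (or vice versa) is what generates the deformed patterns in the virtual system.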

3. Simulation System Verification

To verify the validity of the proposed simulation system, a simulated experiment is conducted, as shown in Fig. 5.

Fig. 5. Simulation experiments.

The intrinsic parameters of the virtual camera and projector in the simulation experiment are identical: the resolution is 1024 × 768, the physical pixel size is dx × dy = 4.65 μm × 4.65 μm, and the focal length is f = 12 mm. The extrinsic parameters are set as follows: the distance from the reference plane to the center of the virtual projector or virtual camera is Lc = Lp = 800 mm. The reference plane is set at zc = 800 mm in the virtual camera coordinate system.

The virtual projector projects a fringe pattern with a fringe number of 36 onto the reference plane, and the virtual camera captures the deformed fringe pattern, as shown in Fig. 6. Figure 6 shows that the captured fringe pattern is identical to the fringe pattern projected by the virtual projector, which is consistent with the actual arrangement.

Fig. 6. Deformed fringe patterns captured by virtual camera.

4. Analyzing the Impact of Positional Deviations on Phase Calculation

To analyze the effect of position deviations of the two projectors on phase calculation, the system is simulated with realistic parameters. A 1024 × 768 CCD camera with a pixel size of 4.65 μm × 4.65 μm and a focal length of 12 mm is selected as the camera; two 912 × 1140 digital light processing (DLP) projectors with a micromirror pitch of 7.63 μm × 7.63 μm and a focal length of 12 mm are selected as the projectors. The distance from the center of the virtual camera to the reference plane is Lc = 800 mm, and the distance from the center of virtual projector P1 to the reference plane is Lp = 400 mm. It is assumed that projector P2 and projector P1 are not completely coaxial, with rotation vector Om_p12 = [π/180, π/180, π/180]^T and translation vector T_p12 = [0.5, 0.5, 0.5]^T, respectively.

Virtual projectors P1 and P2 project fringe patterns with optimum numbers onto a planar board. One set of fringe patterns is projected onto a planar board by single projector P1, and the other set of fringe patterns is partly projected by projector P1 and projector P2 onto the planar board. Two sets of unwrapped phases are shown in Figs. 7(a) and 7(b). It is clear that Fig. 7(b) is inaccurate.

Fig. 7. Unwrapped phase map: (a) by single projector P1; (b) partly by projector P1 and projector P2.

The deviations of the coaxial projectors consist of six parameters: rotation vector $Om_{p12}=[R_x,R_y,R_z]^T$ and translation vector $T_{p12}=[T_x,T_y,T_z]^T$. The rotation vector $Om_{p12}$ represents the angles of rotation around the x, y, and z axes; $Om_{p12}$ and $R_{p12}$ can be interconverted via the Rodrigues formula.

To obtain the allowable deviations of the coaxial projectors, we vary one parameter while fixing the others at zero. Repeating the process of projecting the two sets of fringe patterns onto a planar board, the phase error between the two sets is obtained. Each parameter is increased from 0 in reasonable steps; the maximum allowable deviation is the critical value beyond which the phase error is no longer negligible.

Repeating this process for the remaining parameters, we obtain their allowable ranges. The allowable deviations of the coaxial projectors are shown in Table 1.

Table 1. Allowable Deviations of Coaxial Projectors

B. Deviation Correction of Coaxial Projectors

Step 1: Calibrate the two projectors and obtain $[R_p\;T_p]$ for each.

To calibrate a projector like a camera, “captured images” need to be created for the projector. A calibration board with a 9 × 12 array of black circular rings printed on a flat white board is positioned at several different orientations. Using the phase mapping method, the captured images can be obtained [29]. The MATLAB calibration toolbox can then be used to calibrate the projector in essentially the same way as a camera.

For simplicity, we denote VLP as P1 and ILP as P2. Pp1=[xp1,yp1,zp1,1]T is the homogeneous coordinate of the projection point p in the Op1Xp1Yp1Zp1. Pp2=[xp2,yp2,zp2,1]T is the homogeneous coordinate of the projection point p in the Op2Xp2Yp2Zp2:

$$sP_{p1}=R_{p1}P_w+T_{p1},\tag{24}$$
$$sP_{p2}=R_{p2}P_w+T_{p2}.\tag{25}$$
Step 2: Calculate the position deviations of two projectors.

With the same world coordinates, the transformation matrix (position deviations) Rp12, Tp12 from P1 to P2 can be calculated as

$$sP_{p2}=R_{p2}(R_{p1})^{-1}(P_{p1}-T_{p1})+T_{p2}=R_{p12}P_{p1}+T_{p12},\tag{26}$$
$$R_{p12}=R_{p2}(R_{p1})^{-1},\qquad T_{p12}=T_{p2}-R_{p2}(R_{p1})^{-1}T_{p1}.\tag{27}$$
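In code, the relative pose follows from the two calibrated extrinsics by eliminating the world point. A small numpy sketch (the rotation angles and translations below are made-up values; a simple z-axis rotation stands in for the general Rodrigues conversion):

```python
import numpy as np

def relative_pose(R1, T1, R2, T2):
    """Deviation of projector P2 relative to P1 from their calibrated
    extrinsics, so that P2-coords = R12 @ P1-coords + T12."""
    R12 = R2 @ np.linalg.inv(R1)
    T12 = T2 - R12 @ T1
    return R12, T12

def rotz(a):
    """Rotation about the z axis (a stand-in for the Rodrigues step)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# made-up extrinsics for the check
R1, T1 = rotz(0.30), np.array([1.0, 2.0, 3.0])
R2, T2 = rotz(0.31), np.array([1.1, 2.1, 3.1])
R12, T12 = relative_pose(R1, T1, R2, T2)
Pw = np.array([5.0, -2.0, 7.0])
P1 = R1 @ Pw + T1                 # the point in projector-1 coordinates
P2 = R2 @ Pw + T2                 # the same point in projector-2 coordinates
```

For calibrated rotation matrices, `np.linalg.inv(R1)` can equivalently be replaced by `R1.T`.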
Step 3: Obtain a pre-deformed fringe pattern by the simulation system established in Section 3.A.

When the intrinsic and extrinsic parameters of the virtual camera are identical to those of the virtual projector, fringe patterns captured by the virtual camera on the reference plane are identical to the patterns projected. Hence, it can be assumed that the intrinsic and extrinsic parameters of the virtual camera are identical to P2. Simulated sinusoidal fringes are projected by P1 on the reference plane, and the deformed fringe patterns are captured by the virtual camera. This process is equivalent to the projection transformation from P1 to P2. A sinusoidal fringe pattern projected by P1 undergoes rotation and translation transformations [Rp12Tp12], and becomes the pre-deformed fringe pattern projected by P2. So the required pre-deformed fringe pattern projected by P2 is the deformed fringe patterns captured by the virtual camera.

C. Deviation Correction Experiment

To validate the proposed method, the coaxial projection measurement system is set up as shown in Fig. 8. In the current setup, the VLP is a TI LightCrafter 4500 with a resolution of 912 × 1140. The ILP is also a TI LightCrafter 4500, whose digital micromirror device and light source have been replaced by the corresponding IR components. The 2CCD camera is a JAI AD-080GE fitted with a 12 mm prime lens.

Fig. 8. Photograph of coaxial projection measurement system.

As aforementioned, a total of nine different poses were used for projector calibration. The captured images created for projector calibration are shown in Fig. 9. Both the VLP and ILP are calibrated. Their intrinsic parameters and lens distortion coefficients k1, k2, k3, k4 are shown in Table 2. The reprojection errors of the two projectors are shown in Fig. 10. The average reprojection errors of the VLP and ILP are 0.2690 pixels and 0.2210 pixels, respectively. The transformation [R_p12, T_p12] between the two projectors is calculated by Eq. (27).

Fig. 9. “Captured images” created for projector calibration.

Table 2. Calibrated Intrinsic Parameters and Coefficients of Lens Distortion of Projectors

Fig. 10. Reprojection error of two projectors. (a) VLP; (b) ILP.

According to the deviations [R_p12, T_p12] of the coaxial projectors, pre-deformed fringe patterns can be generated by the simulation system.

VLP and ILP are calibrated after deviation correction, and deviations of coaxial projectors are shown in Table 3. It can be observed that the position deviations are much smaller than allowable deviations obtained in Table 1.

Table 3. Deviations of Coaxial Projectors After Correction

To further verify the effect of deviation correction in actual measurements, a set of phase-shifted fringe patterns is projected onto a planar board using three methods: (1) single projector; (2) coaxial projectors without deviation correction; and (3) coaxial projectors after deviation correction. The profile along row 130 for three unwrapped phase maps of the planar board is calculated as shown in Fig. 11.

Fig. 11. Profile along row 130 for three unwrapped phase maps.

It can be clearly seen that the unwrapped phase obtained by coaxial projectors after deviation correction (dotted line) is nearly identical to the single projector case (solid line); the unwrapped phase obtained by coaxial projectors without deviation correction (dashed line) and the single projector case (solid line) differ greatly in phase value. The partially enlarged view is shown in the black rectangle.

By using the unwrapped phase of a single projector as the standard data, we calculate the phase errors of the coaxial projectors without deviation correction and after deviation correction. The profiles along row 130 for the phase errors are shown in Figs. 12(a) and 12(b), respectively. The root mean square (RMS) errors in Figs. 12(a) and 12(b) are 0.277 rad and 0.031 rad, respectively, indicating that the deviation correction method is effective.

Fig. 12. Profile along row 130 for phase errors. (a) Coaxial projectors without deviation correction; (b) coaxial projectors after deviation correction.

4. EXPERIMENTS

To demonstrate the validity of the proposed method, simulated and actual experiments have been carried out, as reported in Sections 4.A and 4.B.

A. Simulations and Results

To illustrate the performance of the proposed phase extraction methods, a simulated object whose height distribution is given by the MATLAB peaks function is used, as shown in Fig. 13(a). The peaks function is defined as

$$z=3(1-x)^2\exp\!\left[-x^2-(y+1)^2\right]-10\left(\frac{x}{5}-x^3-y^5\right)\exp\!\left(-x^2-y^2\right)-\frac{1}{3}\exp\!\left[-(x+1)^2-y^2\right].\tag{28}$$
In an actual situation, the reconstructed phase is affected by random noise in the captured fringe patterns, so 3% random noise is added to the gray scale of the original fringe patterns, and the deformed fringe pattern of the simulated object is shown in Fig. 13(b).
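For reference, the peaks surface and a noisy synthetic fringe in the style of Fig. 13(b) can be generated as follows; the carrier frequency and the phase-to-height gain used here are illustrative, not the paper's calibration:

```python
import numpy as np

def peaks(x, y):
    """MATLAB's peaks function."""
    return (3 * (1 - x)**2 * np.exp(-x**2 - (y + 1)**2)
            - 10 * (x / 5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - np.exp(-(x + 1)**2 - y**2) / 3)

n = 256
x, y = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
h = peaks(x, y)

# deformed fringe: 36 carrier fringes across the field, a made-up phase
# gain of 0.5 rad per height unit, and 3% additive noise as in the text
rng = np.random.default_rng(0)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 36 * (x + 3) / 6 + 0.5 * h)
fringe += 0.03 * rng.standard_normal(fringe.shape)
```

Feeding such a pattern through the phase extraction pipeline allows the reconstruction error to be evaluated against the known ground-truth height.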

Fig. 13. Simulated measured object. (a) Height distribution of simulated object; (b) deformed fringe pattern of the simulated object.

Taking three sets of fringe patterns with the optimum fringe numbers of 36, 35, and 30, the wrapped phase maps are obtained by traditional FTP, improved FTP, the proposed method (one wrapped phase map by TPS, two wrapped phase maps by the improved FTP), and a four-step phase-shifting algorithm. The unwrapped phase maps are then obtained by the optimum three-frequency selection method, as shown in Fig. 14. In Fig. 14(a), phase errors appear at the positions of the large slopes, as traditional FTP is limited by the gradient of the object. Figure 14(b) shows that the improved FTP can calculate the phase of the large slopes accurately. Using the four-step phase-shifting algorithm [Fig. 14(d)] as the criterion, the profiles along row 760 for the phase errors of the other three maps are calculated, as shown in Fig. 15. The RMS errors are 0.1316 rad, 0.0408 rad, and 0.0099 rad, respectively. The RMS data demonstrate that the proposed method has much higher accuracy than the other two methods.

Fig. 14. Unwrapped phase maps. (a) Traditional FTP; (b) improved FTP; (c) proposed method; (d) four-step phase-shifting algorithm.

Fig. 15. Profile along row 760 for phase errors.

B. Actual Experiments

To verify the performance of the proposed coaxial projection system after deviation correction, the 3D shape of a mask, a set of steps, and a human hand are measured. The crosstalk coefficient calibration method is used to calculate the crosstalk coefficient between the color channels and compensate for the energy leaked from other channels.

One composite color fringe pattern, with an optimum fringe number of 36 and an assigned phase-shifting value of π/2 between its red and blue components, along with an optimum fringe number of 35 in its green component, is projected by the VLP, as shown in Fig. 16(a). The pre-deformed fringe pattern with an optimum fringe number of 30 is encoded into the IR channel of the ILP, as shown in Fig. 16(b). The images captured by the 2CCD, representing the deformed composite RGB and IR fringe patterns of the mask, are shown in Figs. 16(c) and 16(d), respectively. TPS is applied to the red and blue components of the composite RGB fringe pattern to obtain the wrapped phase, as shown in Fig. 17(a). FTP is applied to the green component and the IR component to obtain two wrapped phase maps, as shown in Figs. 17(b) and 17(c). Using the optimum three-frequency method, the unwrapped phase of the mask is shown in Fig. 17(d). Finally, with the calibrated system, the 3D shape data of the mask are shown in Fig. 18.

Fig. 16. Fringe patterns of the mask. (a) Projected composite color fringe pattern; (b) projected IR fringe pattern; (c) captured deformed composite color fringe pattern; (d) captured deformed IR fringe pattern.

Fig. 17. Phase maps of the mask. (a) Wrapped phase in red and blue components; (b) wrapped phase in green component; (c) wrapped phase in IR component; (d) unwrapped phase.

Fig. 18. 3D shape data of the mask.

Following a similar methodology, a set of steps and a human hand are also measured. The captured deformed composite RGB and IR fringe patterns of the steps are shown in Figs. 19(a) and 19(b), the phase maps are shown in Fig. 20, and the 3D shape data are shown in Fig. 21. The 3D shape data of the moving human hand, taken from the reconstructed 3D movie, are shown in Fig. 22. These measurements show that the system developed in this paper can measure both discontinuous and dynamic objects.

Fig. 19. Captured patterns of a set of steps. (a) Deformed composite color fringe pattern; (b) deformed IR fringe pattern.

Fig. 20. Phase maps of a set of steps. (a) Wrapped phase in red and blue components; (b) wrapped phase in green component; (c) wrapped phase in IR component; (d) unwrapped phase.

Fig. 21. 3D shape data of a set of steps.

Fig. 22. 3D shape data of a human hand (see Visualization 1).

5. CONCLUSION

In this paper, a novel 3D shape measurement system is proposed based on a coaxial fringe projection system. It can simultaneously project red, green, blue, and IR fringe patterns onto the measured objects, and a 2CCD camera can simultaneously acquire the visible color image along with the IR image. In combination with TPS, FTP, and the optimum three-frequency selection method, 3D shape measurement of complex surfaces such as discontinuous objects can be performed with a single measurement. Additionally, a virtual fringe projection measurement system has been established in order to correct deviations of the coaxial fringe projection system. It can be implemented in a straightforward manner for a wide range of applications where correspondence needs to be established between multiple projection systems. The simulation results and experiments on static and dynamic objects demonstrate that the developed method is suitable for fast 3D shape measurement of complex objects.

Funding

National Key R&D Program of China (2017YFF0106404); National Natural Science Foundation of China (NSFC) (51675160); Talents Project Training Funds in Hebei Province (A201500503); Innovative and Entrepreneurial Talent Project Supported by Jiangsu Province (2016A377); Engineering and Physical Sciences Research Council (EPSRC) (EP/P006930/1).

REFERENCES

1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43, 2666–2680 (2010).

2. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48, 133–140 (2010).

3. F. Chen, G. M. Brown, and M. M. Song, “Overview of 3-D shape measurement using optical methods,” Opt. Eng. 39, 10–22 (2000).

4. M. Takeda, Q. Gu, M. Kinoshita, H. Takai, and Y. Takahashi, “Frequency-multiplex Fourier-transform profilometry: a single-shot three-dimensional shape measurement of objects with large height discontinuities and/or surface isolations,” Appl. Opt. 36, 5347–5354 (1997).

5. L. C. Chen, H. W. Ho, and X. L. Nguyen, “Fourier transform profilometry (FTP) using an innovative band-pass filter for accurate 3-D surface reconstruction,” Opt. Lasers Eng. 48, 182–190 (2010).

6. P. S. Huang, C. P. Zhang, and F. P. Chiang, “High-speed 3-D shape measurement based on digital fringe projection,” Opt. Eng. 42, 163–168 (2003).

7. X. Y. Su, G. V. Bally, and D. Vukicevic, “Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation,” Opt. Commun. 98, 141–150 (1993).

8. D. S. Mehta, S. K. Dubey, M. M. Hossain, and C. Shakher, “Simple multifrequency and phase-shifting fringe-projection system based on two-wavelength lateral shearing interferometry for three-dimensional profilometry,” Appl. Opt. 44, 7515–7521 (2005).

9. J. P. Zhu, P. Zhou, X. Y. Su, and Z. S. You, “Accurate and fast 3D surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24, 28549–28560 (2016).

10. C. E. Towers, D. P. Towers, and J. D. C. Jones, “Absolute fringe order calculation using optimized multi-frequency selection in full-field profilometry,” Opt. Lasers Eng. 43, 788–800 (2005).

11. K. Liu, Y. C. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18, 5229–5244 (2010).

12. S. Zhang and S. T. Yau, “High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm,” Opt. Eng. 46, 113603 (2007).

13. S. Zhang and P. S. Huang, “High-resolution, real-time three-dimensional shape measurement,” Opt. Eng. 45, 1269–1278 (2006).

14. Z. H. Zhang, D. P. Towers, and C. E. Towers, “Snapshot color fringe projection for absolute three-dimensional metrology of video sequences,” Appl. Opt. 49, 5947–5953 (2010).

15. W. H. Su, “Color-encoded fringe projection for 3D shape measurements,” Opt. Express 15, 13167–13181 (2007).

16. W. Y. Liu, Z. Q. Wang, G. G. Mu, and Z. L. Fang, “Color-coded projection grating method for shape measurement with a single exposure,” Appl. Opt. 39, 3504–3508 (2000).

17. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48, 149–158 (2010).

18. S. Zhang, D. V. D. Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18, 9684–9689 (2010).

19. J. S. Hyun, B. W. Li, and S. Zhang, “High-speed high-accuracy three-dimensional shape measurement using digital binary defocusing method versus sinusoidal method,” Opt. Eng. 56, 074102 (2017).

20. P. Ou, B. W. Li, Y. J. Wang, and S. Zhang, “Flexible real-time natural 2D color and 3D shape measurement,” Opt. Express 21, 16736–16741 (2013).

21. Y. J. Xu, C. Chen, S. J. Huang, and Z. H. Zhang, “Simultaneously measuring 3D shape and colour texture of moving objects using IR and colour fringe projection techniques,” Opt. Lasers Eng. 61, 1–7 (2014).

22. A. H. Phan, M. L. Piao, J. H. Park, and N. Kim, “Error analysis in parallel two-step phase-shifting method,” Appl. Opt. 52, 2385–2393 (2013).

23. Z. H. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection,” Opt. Express 14, 6444–6455 (2006).

24. Z. H. Zhang, S. J. Huang, S. S. Meng, F. Gao, and X. Q. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21, 12218–12227 (2013).

25. X. X. Zhang, Y. M. Wang, S. J. Huang, N. Gao, and Z. H. Zhang, “A two-step phase-shifting algorithm for phase calculation,” Acta Photon. Sinica 46, 0311005 (2017).

26. X. Y. Su and W. J. Chen, “Fourier transform profilometry: a review,” Opt. Lasers Eng. 35, 263–284 (2001).

27. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

28. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53, 3415–3426 (2014).

29. S. J. Huang, L. L. Xie, Z. Y. Wang, Z. H. Zhang, F. Gao, and X. Q. Jiang, “Accurate projector calibration method by using an optical coaxial camera,” Appl. Opt. 54, 789–795 (2015).

Supplementary Material (1)

Visualization 1: 3D shape data of a human hand



Figures (22)

Fig. 1. Geometry of FPP.
Fig. 2. Schematic diagram of coaxial projection measurement system.
Fig. 3. Four ideal independent patterns for four channels.
Fig. 4. Model of virtual fringe projection system.
Fig. 5. Simulation experiments.
Fig. 6. Deformed fringe patterns captured by virtual camera.
Fig. 7. Unwrapped phase map: (a) by single projector P1; (b) partly by projector P1 and projector P2.
Fig. 8. Photograph of coaxial projection measurement system.
Fig. 9. “Captured images” created for projector calibration.
Fig. 10. Reprojection error of two projectors. (a) VLP; (b) ILP.
Fig. 11. Profile along row 130 for three unwrapped phase maps.
Fig. 12. Profile along row 130 for phase errors. (a) Coaxial projectors without deviation correction; (b) coaxial projectors after deviation correction.
Fig. 13. Simulated measured object. (a) Height distribution of simulated object; (b) deformed fringe pattern of the simulated object.
Fig. 14. Unwrapped phase maps. (a) Traditional FTP; (b) improved FTP; (c) proposed method; (d) four-step phase-shifting algorithm.
Fig. 15. Profile along row 760 for phase errors.
Fig. 16. Fringe patterns of the mask. (a) Projected composite color fringe pattern; (b) projected IR fringe pattern; (c) captured deformed composite color fringe pattern; (d) captured deformed IR fringe pattern.
Fig. 17. Phase maps of the mask. (a) Wrapped phase in red and blue components; (b) wrapped phase in green component; (c) wrapped phase in IR component; (d) unwrapped phase.
Fig. 18. 3D shape data of the mask.
Fig. 19. Captured patterns of a set of steps. (a) Deformed composite color fringe pattern; (b) deformed IR fringe pattern.
Fig. 20. Phase maps of a set of steps. (a) Wrapped phase in red and blue components; (b) wrapped phase in green component; (c) wrapped phase in IR component; (d) unwrapped phase.
Fig. 21. 3D shape data of a set of steps.
Fig. 22. 3D shape data of a human hand (see Visualization 1).

Tables (3)

Table 1. Allowable Deviations of Coaxial Projectors
Table 2. Calibrated Intrinsic Parameters and Coefficients of Lens Distortion of Projectors
Table 3. Deviations of Coaxial Projectors After Correction

Equations (29)

Equations on this page are rendered with MathJax.

\[ h(x,y)=\frac{L_0}{\dfrac{2\pi L_0^2 d\cos\theta}{P_0\,\Delta\varphi(x,y)\left(L_0+x\cos\theta\sin\theta\right)^2}-\dfrac{d\cos\theta\sin\theta}{L_0+x\cos\theta\sin\theta}+1}.\tag{1} \]

\[ h(x,y)=\sum_{n=0}^{N}a_n(x,y)\,\Delta\varphi(x,y)^n,\quad n=0,1,\ldots,N,\tag{2} \]

\[ I_R(x,y)=I'_R(x,y)+I''_R(x,y)\cos\left[2\pi f_0x+\varphi(x,y)\right],\tag{3} \]

\[ I_B(x,y)=I'_B(x,y)+I''_B(x,y)\cos\left[2\pi f_0x+\varphi(x,y)+\frac{\pi}{2}\right],\tag{4} \]

\[ I_G(x,y)=I'_G(x,y)+I''_G(x,y)\cos\left[2\pi f_0x+\varphi(x,y)\right],\tag{5} \]

\[ I_{\mathrm{IR}}(x,y)=I'_{\mathrm{IR}}(x,y)+I''_{\mathrm{IR}}(x,y)\cos\left[2\pi f_0x+\varphi(x,y)\right],\tag{6} \]

\[ \frac{I_B(x,y)-I'_B(x,y)}{I_R(x,y)-I'_R(x,y)}=\frac{I''_B(x,y)\cos\left[2\pi f_0x+\varphi(x,y)+\frac{\pi}{2}\right]}{I''_R(x,y)\cos\left[2\pi f_0x+\varphi(x,y)\right]}=-\tan\left[2\pi f_0x+\varphi(x,y)\right].\tag{7} \]

\[ 2\pi f_0x+\varphi(x,y)=\arctan\left[-\frac{I_B(x,y)-I'_B(x,y)}{I_R(x,y)-I'_R(x,y)}\right].\tag{8} \]

\[ I(x,y)=I'(x,y)+c(x,y)\exp\left(2\pi jf_{0x}x+2\pi jf_{0y}y\right)+c^*(x,y)\exp\left(-2\pi jf_{0x}x-2\pi jf_{0y}y\right),\tag{9} \]

\[ c(x,y)=\frac{1}{2}I''(x,y)\exp\left[j\varphi(x,y)\right].\tag{10} \]

\[ I(u,v)=C\left(u-f_{0x},v-f_{0y}\right)+C^*\left(u+f_{0x},v+f_{0y}\right).\tag{11} \]

\[ \varphi(x,y)=\arctan\frac{\operatorname{Im}\left[c(x,y)\right]}{\operatorname{Re}\left[c(x,y)\right]},\tag{12} \]

\[ \left|\frac{\partial h(x,y)}{\partial x}\right|<\frac{L_0}{3d}.\tag{13} \]

\[ \left|\frac{\partial h(x,y)}{\partial x}\right|<\frac{L_0}{d}.\tag{14} \]

\[ s\,p^c=A^c\left[R^c\;T^c\right]P_w=H^cP_w,\tag{15} \]

\[ A^c=\begin{bmatrix}f_u^c&0&u_0^c&0\\0&f_v^c&v_0^c&0\\0&0&1&0\end{bmatrix},\tag{16} \]

\[ \begin{aligned} x_d^c&=\left(1+k_1r^2+k_2r^4\right)x_n^c+\left(2k_3x_n^cy_n^c+k_4\left(r^2+2(x_n^c)^2\right)\right),\\ y_d^c&=\left(1+k_1r^2+k_2r^4\right)y_n^c+\left(k_3\left(r^2+2(y_n^c)^2\right)+2k_4x_n^cy_n^c\right),\\ r^2&=(x_n^c)^2+(y_n^c)^2,\end{aligned}\tag{17} \]

\[ p^c=A^cp_d^c.\tag{18} \]

\[ s\,p^p=A^p\left[R^p\;T^p\right]P_w=H^pP_w,\tag{19} \]

\[ F(x^c,y^c,z^c)=0.\tag{20} \]

\[ s\,p^c=A^cP^c.\tag{21} \]

\[ (H^c)^{-1}p^c=P_w.\tag{22} \]

\[ s\,p^p=H^pP_w=H^p(H^c)^{-1}p^c=H^{pc}p^c,\tag{23} \]

\[ s\,p^p=H^{pc}p^c=H^{pc}A^cP^c.\tag{24} \]

\[ s\,P^{p1}=R^{p1}P_w+T^{p1},\tag{25} \]

\[ s\,P^{p2}=R^{p2}P_w+T^{p2}.\tag{26} \]

\[ s\,P^{p2}=R^{p2}(R^{p1})^{-1}\left(P^{p1}-T^{p1}\right)+T^{p2}=R^{p12}P^{p1}+T^{p12},\tag{27} \]

\[ R^{p12}=R^{p2}(R^{p1})^{-1},\quad T^{p12}=-R^{p2}(R^{p1})^{-1}T^{p1}+T^{p2}.\tag{28} \]

\[ z=3(1-x)^2\exp\left[-x^2-(y+1)^2\right]-10\left(\frac{x}{5}-x^3-y^5\right)\exp\left(-x^2-y^2\right)-\frac{1}{3}\exp\left[-(x+1)^2-y^2\right].\tag{29} \]
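The two phase-retrieval routes used in the single-shot scheme (the two-step phase-shifting ratio of the red/blue channels, and Fourier transform profilometry on a single pattern) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code: the function names are hypothetical, the background (DC) terms are assumed known, and the red/blue modulations are assumed equal.

```python
import numpy as np

def two_step_phase(I_R, I_B, dc_R, dc_B):
    """Wrapped phase 2*pi*f0*x + phi from two pi/2-shifted fringe images.
    The blue pattern leads the red one by pi/2, so the ratio of the AC
    parts equals -tan(2*pi*f0*x + phi); atan2 keeps the correct quadrant."""
    return np.arctan2(-(I_B - dc_B), I_R - dc_R)

def ftp_phase(I, f0):
    """Wrapped phase from a single fringe pattern (rows of a 2D image) by
    Fourier transform profilometry: band-pass the fundamental lobe around
    +f0, inverse-transform, and take the angle of the complex signal."""
    spec = np.fft.fft(I, axis=1)
    freqs = np.fft.fftfreq(I.shape[1])
    mask = (freqs > f0 / 2) & (freqs < 3 * f0 / 2)  # crude band-pass filter
    c = np.fft.ifft(spec * mask, axis=1)            # fundamental c(x, y)
    return np.angle(c)
```

In practice the band-pass window and the estimation of the DC terms are where the real design effort goes; the rectangular mask here is only the simplest possible choice.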