Optica Publishing Group

Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement

Open Access

Abstract

It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in the captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique that avoids image saturation and maintains a high signal-to-noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Previous high dynamic range 3-D scanning methods use many exposures and fringe pattern projections, which is time-consuming. In contrast, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For bright regions caused by high surface reflectivity and by strong illumination from ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for dark regions with low surface reflectivity to maintain a high SNR. Our experiments demonstrate that the proposed technique achieves higher 3-D measurement accuracy across a surface with a large range of reflectivity variation.

© 2016 Optical Society of America

1. Introduction

Three-dimensional (3-D) shape measurement has been investigated for several decades and has been used in a broad range of applications, including industrial design, reverse engineering and prototyping, quality control/inspection, and documentation of cultural artifacts [1, 2]. In general, 3-D shape measurement methods can be classified into two categories: surface-contact and noncontact. The coordinate measuring machine (CMM) is a typical system that uses surface-contact methods for 3-D shape measurement. Due to its surface-contact nature, it is not sensitive to the surface optical properties, but it has difficulty handling soft objects, since contact may deform the measured contour. Moreover, because it is a point-by-point measurement method, its measurement speed is very slow, which usually results in low efficiency.

In contrast, optical methods are high-speed and noncontact [1, 3]; they include stereo vision, photogrammetry, laser range scanning, structured light, etc. Among them, structured light is one of the most widely used 3-D shape measurement methods, benefiting from full-field inspection, high resolution, and high accuracy [2]. Typically, fringe patterns are projected onto an object surface, images of the fringe patterns deformed by the object surface are captured by a camera, and a phase map is calculated from the pixel intensities in the images [3]. The absolute phase map can be used to determine correspondences between cameras in a multi-camera stereo vision system [4], or between a camera and a projector in a structured light system [5, 6]. Finally, by means of a phase-to-height mapping based on a calibration performed earlier over the measurement volume of interest, the 3-D coordinates of the object surface can be calculated from the absolute phase map.

Although the structured light method has many advantages, it does not perform well if the surface does not have good optical properties. In practice, most optical methods assume that the object surface has a diffuse or near-diffuse reflectance and low reflectivity variation from point to point. It is therefore a challenge for any optical method to measure a surface that is specular, shiny, or has a large range of reflectivity variation. For example, Fig. 1(a) shows an image of a plastic block, with the corresponding captured fringe pattern image in Fig. 1(b). The surface of the plastic block is diffuse, so the camera can capture a clear image of the fringe pattern. Figure 1(c) shows an image of a metallic workpiece with a different surface material. As seen in Fig. 1(d), only regions with strong reflection toward the camera provide measurable fringe patterns, while light reaching the camera from other regions of the metallic workpiece is extremely weak. Thus, there is insufficient information to obtain the 3-D shape of the metallic workpiece. As the exposure time is increased, in Fig. 1(e), fringe patterns reflected from regions with low reflection toward the camera become visible, while regions with higher reflection toward the camera become very bright: the reflected light is so intense that it saturates the camera sensor. In the structured light method, this means that the fringe patterns in the highlight regions cannot be correctly decoded, so those regions are not measurable. In current industrial practice, this problem is addressed by spraying a thin layer of powder onto the object to make its surface diffuse prior to measurement. This supplementary step is troublesome and time-consuming because the object needs to be cleaned afterwards. In addition, the final accuracy often depends on the powder thickness and its homogeneity [7].


Fig. 1 The images of objects with different surface materials, and the corresponding captured fringe pattern images: (a) a plastic block, (b) a fringe pattern image of the plastic block, (c) a metallic workpiece, (d) reflected fringe pattern captured by the camera from the metallic workpiece with exposure time = 16.7 ms, and (e) reflected pattern of the same sample with exposure time = 100 ms.


For varying reflectivity across a surface, Zhang et al. [8] proposed a high dynamic range scanning technique that takes multiple exposures. A sequence of images captured at different exposures is combined into a single set of phase-shifting images by selecting the brightest unsaturated intensity at each pixel. Saturated pixels due to specular reflection at a higher exposure are replaced by the corresponding pixels from a lower exposure. Therefore, specular or dark regions can be properly measured without affecting the rest of the surface. Alternatively, the highest intensity modulation has been used to create composite phase-shifting images [9]. This technique reduces the influence of ambient illumination and improves the signal-to-noise ratio (SNR) and dynamic range of the measurement. In these multiple-exposure methods, although the aperture of the camera lens and the camera exposure time can be adjusted easily, the selection of exposures is not quantified. Moreover, a large number of exposures is required to obtain high-contrast phase-shifting images, which is time-consuming. To address this problem, Zhao et al. [10] designed a very fast, high dynamic range fringe pattern projector with a 700 Hz frame rate, reducing the routine projection time by 88%. Similarly, Ekstrand et al. [11] presented an auto-exposure technique in which the required exposure time is predicted automatically from the reflectivity of the measured object surface. This single predicted exposure time is chosen as a trade-off between overexposing the brightest regions of an object and losing the fringe patterns in the darkest, shadowed regions. This technique reduces human intervention and improves the intelligence of the 3-D shape measurement system, but a single exposure time does not always fit a surface with a large range of reflectivity variation. Ri et al. [12] proposed an intensity extension method using a digital micromirror device camera, which acquires high-contrast fringe patterns by adjusting the optimum exposure time for each CCD pixel.

An alternative way of avoiding image saturation is to reduce the projected intensity of the fringe patterns. Waddington and Kofman [13] developed a technique that adaptively adjusts the projected fringe pattern intensities, setting the maximum input gray level to accommodate ambient light and avoid saturation; a composite image is then fused from raw fringe pattern images captured at different illumination levels. However, like the multiple-exposure methods, this technique can be time-consuming. Recently, Babaie et al. [14] proposed a new technique to enhance the dynamic range of a fringe projection system for measuring the 3-D profile of objects with a wide range of optical reflectivity. High dynamic range fringe pattern images are acquired by recursively controlling the intensity of the projection pattern at the pixel level, based on feedback from the reflected images captured by the camera. A four-step phase-shifting algorithm is used to obtain the absolute phase map of the object from the acquired high dynamic range fringe pattern images. Differing from the phase-shifting methods described above, Zhang et al. [15] used monochromatic white and black stripe patterns to measure objects with varying reflectivity, which increases robustness. An adaptive intensity mask, derived from the point spread function and the camera response function, dynamically adjusts the pattern intensity to prevent overexposure in shiny regions. The point spread function in this method is based on the homography matrix from the camera image plane to the projector image plane, which can be calibrated in advance using a measurement table. During the measurement, however, it is difficult to ensure that the object occupies the same position as the checkerboard, so the coordinates mapped through the homography matrix may not be very accurate.

To handle specular reflections that appear in isolated regions of an image, other methods, such as multiple camera viewpoints [16], multiple cameras, color light sources, color filters [17], polarization filters [18, 19], and multiple light projection directions [20], have been used to construct saturation-free composite images from the different captured images. These methods require additional complexity in hardware, system setup, and in the image masking and registration needed to merge different surface regions.

In this paper, we present a new technique called adaptive digital fringe projection (ADFP), which avoids image saturation and maintains a high SNR in 3-D shape measurement where there is a large range of reflectivity variation across the object surface, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. The technique improves the dynamic range of scanning with no physical modification of the fringe projection system, and can be used to measure high-reflectivity surfaces of machined or polished parts such as turbine blades, shafts, or glossy plastic parts.

The remainder of this paper is organized as follows. Section 2 explains the principles of the proposed method. Section 3 presents experimental results for the proposed method. Section 4 discusses the advantages of the proposed method, and Section 5 summarizes the paper.

2. Principle

Fringe projection techniques offer the advantage of full-field surface measurement, and typically acquire the 3-D coordinates of a point on the surface at every image pixel. Among fringe projection techniques, Fourier-transform profilometry [21] and phase-shifting algorithms [22] have been widely used. Over the years, numerous phase-shifting algorithms have been developed, including three-step, four-step, double three-step, etc. In this research, we use a four-step phase-shifting algorithm to find the phase value.

2.1 Four-step phase-shifting algorithm

The four fringe image intensities, with a phase shift of π/2 between successive patterns, are written as

$$I_1(x,y)=I'(x,y)+I''(x,y)\cos[\phi(x,y)], \tag{1}$$
$$I_2(x,y)=I'(x,y)+I''(x,y)\cos[\phi(x,y)+\pi/2], \tag{2}$$
$$I_3(x,y)=I'(x,y)+I''(x,y)\cos[\phi(x,y)+\pi], \tag{3}$$
$$I_4(x,y)=I'(x,y)+I''(x,y)\cos[\phi(x,y)+3\pi/2], \tag{4}$$

where $(x,y)$ is the pixel coordinate on the image plane of the camera, $I'(x,y)$ is the average intensity, $I''(x,y)$ is the intensity modulation, and $\phi(x,y)$ is the phase to be calculated. From Eqs. (1)–(4), we can obtain the phase

$$\phi(x,y)=\arctan\left[\frac{I_4(x,y)-I_2(x,y)}{I_1(x,y)-I_3(x,y)}\right]. \tag{5}$$

Equation (5) provides the value of $\phi(x,y)$, ranging from $-\pi$ to $+\pi$ with $2\pi$ discontinuities, which is usually called the wrapped phase. Therefore, a phase-unwrapping algorithm is required to remove the $2\pi$ discontinuities and obtain a continuous phase map. The multi-frequency heterodyne principle [23] is used for full-field phase unwrapping, realized by projecting fringe patterns at three different frequencies. Once the continuous phase map, also called the absolute phase map, is obtained, the 3-D coordinates can be recovered from the phase if the system is calibrated [6].
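The four-step algorithm above maps directly to a few lines of array code. The following is a minimal sketch (using NumPy and synthetic fringe data, not the authors' implementation) of recovering the wrapped phase of Eq. (5) from the four patterns of Eqs. (1)–(4):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped phase from four pi/2-shifted fringe images, Eq. (5).

    Returns values in (-pi, pi]; a phase-unwrapping step (e.g. the
    multi-frequency heterodyne method) is still required afterwards.
    """
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: generate the four patterns for a known phase map
phi = np.linspace(-3, 3, 7)          # ground-truth phase (radians)
Ip, Ipp = 128.0, 100.0               # average intensity I' and modulation I''
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [Ip + Ipp * np.cos(phi + d) for d in shifts]
```

Using `arctan2` rather than a plain `arctan` of the quotient keeps the correct quadrant and avoids division by zero where $I_1 = I_3$.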

2.2 The proposed adaptive digital fringe projection (ADFP) method

The ADFP technique for avoiding image saturation and maintaining a high SNR in 3-D shape measurement, where there is a large range of reflectivity variation across the object surface, was developed using a phase-shifting fringe projection system (Fig. 2). The system consists of a digital projector that projects sinusoidal phase-shifting fringe patterns onto an object surface, a camera at an angle to the projector that captures images of the fringe patterns deformed by the object surface, and a computer that generates the fringe patterns, controls the system, and performs all processing.


Fig. 2 Schematic diagram of the phase-shifting fringe projection system for 3-D surface measurement.


Given an arbitrary object with unknown surface reflectivity, and under the influence of ambient light and complex surface structures (such as interreflections), it is very difficult to determine the optimal intensity of the projected fringe pattern at the pixel level. Therefore, the object's surface reflectivity and the intensities of the ambient light and surface interreflections must first be estimated. To this end, the camera response function and the coordinate mapping function from the camera to the projector need to be determined; these are the two most important tasks and are described in detail in the following sections. Here, 'adaptive' means that, based on the large range of reflectivity variation across the measured object surface and on the influence of ambient light and complex surface interreflections, the technique can predict a different optimal projected fringe intensity at each pixel for surface regions with different reflectivity, ambient light, etc.

A. Calculation of the optimal intensity of each pixel in fringe patterns

To determine the relationship between the intensity captured by the camera and that output by the projector, it is necessary to understand how the image of the object forms on the camera sensor. Typically, in a structured light system, the following factors influence the formation of the fringe pattern images: (1) the ambient light coming directly to the camera sensor, with an intensity of $L_a$; (2) the ambient light and the projected light arriving along multiple light paths from other surface patches and reflected by the object with surface reflectivity $r$, giving $rL_i$; (3) the projected light with intensity $L_p$ reflected by the corresponding surface patch with reflectivity $r$, giving $rL_p$; (4) the exposure $t$, determined by the aperture of the camera lens and the exposure time; (5) the camera sensitivity $k$; and (6) the sensor noise $I_n \sim N(0,\sigma^2)$. The influence of the different sources of illumination on the captured fringe pattern images is shown in Fig. 3 and described by the following equation:

$$I(x,y)=kt\{r(x,y)[L_p(x,y)+L_i(x,y)]+L_a(x,y)\}+I_n(x,y), \tag{6}$$

where $(x, y)$ denotes the pixel coordinate in the camera image plane. Further, Eq. (6) can be written as


Fig. 3 Light sources in capturing fringe pattern images.


$$I(x,y)=kt\,r(x,y)L_p(x,y)+kt[r(x,y)L_i(x,y)+L_a(x,y)]+I_n(x,y). \tag{7}$$

Letting $x_1=ktL_p(x,y)$, $x_2=kt$, $b_1=r(x,y)$, and $b_2=r(x,y)L_i(x,y)+L_a(x,y)$, Eq. (7) can be simplified as

$$I=b_1x_1+b_2x_2+I_n. \tag{8}$$

Assuming there exist $n$ data pairs $(x_{i1},x_{i2},I_i)$, $i=1,\dots,n$, where $x_{i1}$ and $x_{i2}$ are the independent variables and $I_i$ is the dependent variable, we can estimate the parameters $b_1$ and $b_2$. Defining $Q=\sum_{i=1}^{n}(I_i-b_1x_{i1}-b_2x_{i2})^2$, the two partial derivatives of $Q$ with respect to $b_1$ and $b_2$ must equal zero at the minimum:

$$\begin{cases}\dfrac{\partial Q}{\partial b_1}=-2\sum_{i=1}^{n}(I_i-b_1x_{i1}-b_2x_{i2})x_{i1}=0\\[2mm]\dfrac{\partial Q}{\partial b_2}=-2\sum_{i=1}^{n}(I_i-b_1x_{i1}-b_2x_{i2})x_{i2}=0.\end{cases} \tag{9}$$

These provide two linear equations in b1 and b2

$$\begin{cases}b_1\sum_{i=1}^{n}x_{i1}^2+b_2\sum_{i=1}^{n}x_{i1}x_{i2}=\sum_{i=1}^{n}x_{i1}I_i\\[2mm]b_1\sum_{i=1}^{n}x_{i1}x_{i2}+b_2\sum_{i=1}^{n}x_{i2}^2=\sum_{i=1}^{n}x_{i2}I_i.\end{cases} \tag{10}$$

Letting $X=\begin{bmatrix}x_{11}&x_{12}\\x_{21}&x_{22}\\\vdots&\vdots\\x_{n1}&x_{n2}\end{bmatrix}$, $B=\begin{bmatrix}b_1\\b_2\end{bmatrix}$, $I=\begin{bmatrix}I_1\\I_2\\\vdots\\I_n\end{bmatrix}$, Eq. (10) can be concisely expressed in matrix form as

$$X^TXB=X^TI. \tag{11}$$

If the matrix $X^TX$ is invertible, the unique solution to the linear system is given by

$$B=\begin{bmatrix}b_1\\b_2\end{bmatrix}=(X^TX)^{-1}X^TI. \tag{12}$$

Thus, the estimate of the per-pixel reflectivity $r(x,y)$ is $b_1$, the estimate of the intensity of the ambient light and surface interreflections $r(x,y)L_i(x,y)+L_a(x,y)$ is $b_2$, and the camera response function is

$$I(x,y)=kt[b_1L_p(x,y)+b_2]. \tag{13}$$

Equation (13) shows that, for a given object and measurement environment, the captured intensity $I(x,y)$ depends on the projected light intensity $L_p(x,y)$, the camera sensitivity $k$, and the exposure $t$. Thus, to ensure captured fringe pattern images of good quality, only these three parameters need to be set properly. For simplicity, both $k$ and $t$ are kept fixed. An optimal projected light intensity $L_p(x,y)$ can then ensure that the captured intensity $I(x,y)$ takes a suitable value without overexposure; it can be derived from the inverse camera response function. From Eq. (13), we deduce

$$L_p(x,y)=\frac{I(x,y)-kt\,b_2}{kt\,b_1}. \tag{14}$$

Let $L_{opt}(x,y)$ be the optimal intensity of the projected fringe pattern and $I_{ideal}$ the corresponding ideal captured intensity. Theoretically, $I_{ideal}$ is always chosen as 254 when 8 bits are used to represent light intensity: this value ensures fringe pattern images with a high SNR while avoiding image saturation. With $I_{ideal}$ determined, the optimal intensity of the projected fringe pattern $L_{opt}(x,y)$ follows from Eq. (14):

$$L_{opt}(x,y)=\frac{I_{ideal}-kt\,b_2}{kt\,b_1}. \tag{15}$$

For varying reflectivity across the surface, $r(x,y)$ varies from pixel to pixel, so $L_{opt}(x,y)$ must also vary from pixel to pixel to maintain a high SNR without causing image saturation. Because the phase-shifting algorithm calculates the phase point by point, by Eq. (5), the phase can be calculated pixel by pixel in the captured fringe pattern images and then converted into 3-D coordinates.
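As a concrete illustration, the per-pixel least-squares fit of Eqs. (8)–(12) and the optimal intensity of Eq. (15) can be sketched as follows (NumPy; $k = 1$, and the exposure, reflectivity, and ambient term are illustrative assumptions rather than measured values):

```python
import numpy as np

T = 1.0  # normalized exposure (k = 1, i.e. 0 dB); an assumed value for illustration
LEVELS = np.array([0, 42, 85, 127, 170, 212, 255], dtype=float)  # seven uniform gray levels

def estimate_b(captured, levels=LEVELS, t=T):
    """Least-squares fit of Eq. (8), I = b1*x1 + b2*x2, at one pixel.

    x1 = t*L (projected level times exposure), x2 = t.
    Returns (b1, b2): reflectivity and ambient/interreflection term, Eq. (12).
    """
    X = np.column_stack([t * levels, t * np.ones_like(levels)])
    B, *_ = np.linalg.lstsq(X, captured, rcond=None)  # solves X^T X B = X^T I
    return B

def optimal_intensity(b1, b2, i_ideal=254.0, t=T):
    """Optimal projected gray level via the inverse response, Eq. (15)."""
    return (i_ideal - t * b2) / (t * b1)

# Synthetic shiny pixel: effective reflectivity 1.5, ambient term 20 (assumed)
captured = T * (1.5 * LEVELS + 20.0)
b1, b2 = estimate_b(captured)
L_opt = optimal_intensity(b1, b2)   # ≈ (254 - 20) / 1.5 ≈ 156
```

In practice this fit runs independently at every camera pixel; for pixels where $L_{opt}$ exceeds 255 (dark regions), the projected level is simply clamped to the maximum of 255.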

B. Coordinates mapping

The camera response function only determines the magnitude of the intensity of the projected fringe pattern; it cannot answer where the optimal projected intensity should be located. In other words, we have to determine the coordinate mapping function in order to assign the optimal intensity to the correct location in the projector image plane.

Identification and clustering of saturated pixels is performed with the maximum intensity of the projected fringe pattern set to 255, the highest possible gray level input to the projector (for an 8-bit projector). This ensures high intensity modulation of the captured fringe patterns in dark image regions, regardless of the image saturation occurring in bright image regions. In addition, other measures, such as minimizing the aperture of the camera lens, using a shorter camera exposure time, and setting the camera sensitivity $k$ to 0 dB, reduce the influence of ambient light and surface interreflections as much as possible. To account for the noise of the camera sensor and reserve some gray-level headroom so that noise does not push the sensor into saturation, the threshold is set to 248. That is, for each camera pixel, if the captured intensity in any of the fringe pattern images (for any phase shift $i \in [1, 4]$) reaches 248, the pixel is identified as saturated. A camera mask image $M_c(x,y)$ is then generated by

$$M_c(x,y)=\begin{cases}0 & \exists\, i\in[1,4]:\ I_i(x,y)\ge 248\\ 255 & \text{otherwise}.\end{cases} \tag{16}$$

In the camera mask image, connected saturated pixels form saturated-pixel clusters. Contours just outside each saturated-pixel cluster are extracted by border following. Each contour is represented by a vector of points. These points are unsaturated and have valid absolute phase, so they can be mapped to the projector image plane to determine the matching contours there. For each contour point $(x, y)$ in the camera image coordinate system, the coordinate mapping function maps it to the matching contour point $(u, v)$ in the projector image coordinate system. The coordinate mapping between the camera image plane and the projector image plane is the homography

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=H\begin{bmatrix}x\\y\\1\end{bmatrix}, \tag{17}$$

where $H$ is the 3×3 homography matrix and $s$ is a scale factor. The homography matrix $H$ can be estimated from the points outside and surrounding the contour, which are unsaturated and whose absolute phase, i.e., $(u, v)$, is calculated from the captured fringe pattern images. Once the camera mask image is complete, a projector mask image $M_p(u,v)$ can be generated to include all matching contours, where points labeled 0 on and inside all matching contours form saturated-pixel clusters, and the remaining points are labeled 255 (unsaturated). The matching saturated-pixel clusters in the projector mask image thus indicate where the optimal projected intensity should be located in all fringe patterns, in order to properly illuminate both the surface regions that cause image saturation due to high reflectivity and surface interreflections, and the surface regions with low reflectivity.
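The mask of Eq. (16) and the homography mapping of Eq. (17) can be sketched in a few lines (NumPy only; contour extraction by border following and the estimation of $H$ from phase correspondences are omitted, and the translation-only $H$ below is an assumed toy value, not a calibrated matrix):

```python
import numpy as np

SAT = 248  # saturation threshold (gray levels), as in Eq. (16)

def camera_mask(frames):
    """Camera mask Mc, Eq. (16): 0 where any of the four phase-shifted
    frames reaches the threshold, 255 elsewhere."""
    saturated = np.any(np.stack(frames) >= SAT, axis=0)
    return np.where(saturated, 0, 255).astype(np.uint8)

def map_to_projector(points_xy, H):
    """Homography mapping of Eq. (17): s*[u, v, 1]^T = H*[x, y, 1]^T."""
    pts = np.column_stack([points_xy, np.ones(len(points_xy))])
    uvw = pts @ H.T
    return uvw[:, :2] / uvw[:, 2:3]    # divide out the scale factor s

# Toy example: one saturated cluster and an assumed translation-only homography
frames = [np.full((4, 4), 100.0) for _ in range(4)]
frames[0][1:3, 1:3] = 250.0            # saturated cluster in one frame
Mc = camera_mask(frames)
H = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
uv = map_to_projector(np.array([[1.0, 1.0], [2.0, 2.0]]), H)
```

In a real pipeline, the contour points around each cluster in `Mc` would be extracted by border following and their projector coordinates computed from the absolute phase, after which $H$ is fitted per cluster.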

C. ADFP procedure

As shown in Fig. 4, the ADFP processing flow can be decomposed into six steps:


Fig. 4 Flowchart of the ADFP method proposed.


  • 1. Project a series of uniform gray-level patterns $L_i$, $i = 1,2,\dots,7$, varying from 0 to 255, with the projector, and capture the images $I_i$, $i = 1,2,\dots,7$, with the camera. To simplify the calculation and minimize the influence of ambient light and surface interreflections, in this process, minimize the aperture of the camera lens, fix the camera exposure time $t$, and set the camera sensitivity $k$ to 0 dB, i.e., $k = 1$. The camera exposure time $t$ is preferably set to an integer multiple of $1/f_p$ seconds, where $f_p$ is the refresh frequency of the digital projector (generally 60 Hz); this allows the camera to synchronize better with the projector. The exposure time should not be too long, because a long exposure time dramatically increases the overall measurement time.
  • 2. Calculate the reflectivity of each pixel, $r(x,y)=b_1$, and the intensity of the ambient light and surface interreflections, $r(x,y)L_i(x,y)+L_a(x,y)=b_2$, by Eq. (12):
    $$B=\begin{bmatrix}b_1\\b_2\end{bmatrix}=(X^TX)^{-1}X^TI=\left(\begin{bmatrix}tL_1&tL_2&\cdots&tL_7\\t&t&\cdots&t\end{bmatrix}\begin{bmatrix}tL_1&t\\tL_2&t\\\vdots&\vdots\\tL_7&t\end{bmatrix}\right)^{-1}\begin{bmatrix}tL_1&tL_2&\cdots&tL_7\\t&t&\cdots&t\end{bmatrix}\begin{bmatrix}I_1(x,y)\\I_2(x,y)\\\vdots\\I_7(x,y)\end{bmatrix}. \tag{18}$$
  • 3. According to the camera response function, calculate the optimal intensity $L_{opt}(x,y)$ of each pixel in the fringe patterns from Eq. (15).
  • 4. Project fringe patterns at the maximum intensity of 255, capture a set of phase-shifting images, generate the camera mask image $M_c(x,y)$, and solve for the absolute phase map. For saturated pixels, the absolute phase is invalid, but the pixels on and outside the contours are unsaturated and can be used for coordinate mapping.
  • 5. For the optimal intensity $L_{opt}(x,y)$ of each pixel, map its coordinates to the projector image plane $(u, v)$ by Eq. (17). Then, for the four-step phase-shifting algorithm, as in Eq. (1), the intensity modulation $I''(u,v)$ of the fringe patterns is given by
    $$I''(u,v)=\frac{L_{opt}(u,v)-L_{min}(u,v)}{2}, \tag{19}$$

    where $L_{min}(u,v)$ is the minimum intensity of the fringe patterns (generally 0). The average intensity $I'(u,v)$ is given by

    $$I'(u,v)=\frac{L_{opt}(u,v)+L_{min}(u,v)}{2}. \tag{20}$$

  • 6. Use the intensity modulation $I''(u,v)$ and the average intensity $I'(u,v)$ to generate the adapted fringe patterns for scanning.

Fringe patterns generated by adaptively adjusting the intensity of each pixel in this way solve the problems of highlights and dark regions in the captured fringe pattern images. For bright regions caused by high surface reflectivity and strong illumination from ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for dark regions with low surface reflectivity to maintain a high SNR. High-quality fringe pattern images can therefore be obtained to recover the 3-D shape of the object. Instead of repeatedly capturing sets of fringe patterns at many camera exposure times, only one set of adapted fringe patterns is captured with the ADFP method, which locally adjusts the projected intensity according to the reflectivity of local surface regions and the illumination from ambient light and surface interreflections, after only two preliminary steps. The ADFP method also requires no additional optical or control hardware.
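Steps 5 and 6 above can be sketched as follows (NumPy; the fringe period, pattern size, and the dimmed highlight region are illustrative assumptions):

```python
import numpy as np

def adapted_patterns(L_opt, L_min=0.0, periods=8.0):
    """Generate the four adapted phase-shifted fringe patterns from the
    per-pixel optimal intensity map L_opt (projector image plane).

    Per-pixel modulation and average follow the step-5 equations:
        I'' = (L_opt - L_min) / 2,   I' = (L_opt + L_min) / 2,
    so each pattern stays within [L_min, L_opt] at every pixel.
    """
    h, w = L_opt.shape
    Ipp = (L_opt - L_min) / 2.0                 # intensity modulation I''
    Ip = (L_opt + L_min) / 2.0                  # average intensity I'
    phase = 2 * np.pi * periods * np.arange(w) / w   # vertical fringes
    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    return [np.clip(Ip + Ipp * np.cos(phase + d), 0, 255).round().astype(np.uint8)
            for d in shifts]

# Assumed example: full intensity everywhere except a dimmed highlight region
L_opt = np.full((64, 64), 255.0)
L_opt[20:40, 20:40] = 120.0
patterns = adapted_patterns(L_opt)
```

Because $L_{opt}$ caps the per-pixel peak, the dimmed region of every projected pattern never exceeds its locally safe level, while the rest of the pattern keeps the full 0–255 swing.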

3. Experiments

As shown in Fig. 5, we developed a 3-D shape measurement system based on the fringe projection technique, consisting of a monochrome CCD camera (PointGrey GS3-U3-28S4M-C) with a resolution of 1928 × 1448 pixels, a digital projector (Optoma HD26) with a resolution of 1920 × 1080 pixels, and a computer with a frame grabber (PointGrey ACC-01-1201). A workpiece of aluminum alloy with a large range of reflectivity variation across the surface was chosen as the test sample. The fringe patterns were projected onto the workpiece being measured by the digital projector controlled by the computer. The projected fringe patterns reflected from the workpiece surface were sensed and converted to digital images by the camera, and further transferred to the computer by the frame grabber. The computer processed these images to retrieve the phase using the phase-shifting algorithm. The phase was further converted to 3-D coordinates using the calibrated system parameters. The intrinsic and extrinsic parameters of the camera and the projector were calibrated using the method in reference [6]. The distance between the camera and the object was about 450 mm, and the valid field of view had a width of around 320 mm.


Fig. 5 System setup.


3.1 Measurement without the ADFP method

For the first test, using a traditional phase-shifting method, one of the vertical fringe patterns at the maximum intensity of 255 is shown in Fig. 6(a), with the corresponding captured fringe pattern image in Fig. 6(b). Large highlight areas can be seen in the middle of the workpiece. Figure 6(c) details the middle region of the workpiece under projection of the high-frequency fringe pattern. The convex ball and the concave ball, located in the upper-middle and lower-middle regions respectively, form highlights, resulting in errors seen as discontinuities in the absolute phase map in Fig. 6(d). Colors from red to blue indicate increasing phase values, while grey indicates that no valid phase was obtained.


Fig. 6 The problems in the workpiece measurement without ADFP: (a) a fringe pattern using the maximum intensity of 255, (b) captured image of the workpiece with projection of the fringe pattern in (a), (c) highlights under the projection of high frequency fringe pattern, (d) absolute phase map of the workpiece from (b) and (c).


Figure 7 shows the results of the workpiece measurement without ADFP. Figure 7(a) shows the 3-D point cloud converted from the absolute phase map in Fig. 6(d). Large holes appear in the middle of the workpiece due to the highlights. Since the highlights are directly related to the viewing angle, we rotated and scanned the workpiece 5 times, merged the point clouds after alignment, and then built a surface using the Poisson surface reconstruction algorithm. Even so, the merged point cloud is still relatively sparse, containing only 99735 faces and 51222 vertices; Fig. 7(b) shows the reconstructed result rendered in 3-D shaded mode. Figure 8(a) shows a color-coded surface comparison between the reconstructed result without ADFP and the CAD model with true 3-D coordinates, with most deviations in the range [-0.5, 0.5] mm. Figure 8(b) shows a cross section of the surface comparison and the deviations of 13 points on it, in the range [0.03, 0.33] mm. The mean deviation of the 13 points is 0.115 mm, and the root mean square (RMS) error is 0.023 mm. Clearly, the merged point cloud is too sparse, resulting in a large measurement error.


Fig. 7 Measurement result of the workpiece without the ADFP: (a) 3-D point cloud, (b) 3-D reconstructed result rendered in shaded mode.



Fig. 8 Surface comparison between the CAD model and the reconstructed result without the ADFP: (a) deviations of surface comparison in color, (b) deviations of 13 points on the cross section.


3.2 Measurement with the ADFP method

Results of the fringe pattern projection and captured images when measuring the workpiece with the ADFP method are shown in Fig. 9. First, Fig. 9(a) shows the optimal intensity of each pixel in the fringe pattern, calculated according to Eq. (15). At the locations of highlights across the surface, the projected intensity is reduced according to the camera response function: the brighter the highlight, the lower the projected intensity. From the captured fringe pattern images containing saturated-pixel clusters, the contours of the saturated-pixel clusters are extracted, as shown in Fig. 9(b). After coordinate mapping, one of the vertical adapted fringe patterns for the workpiece measurement is generated, shown in Fig. 9(c); as a result, highlights are avoided in the corresponding captured fringe pattern image, as seen in Fig. 9(d). Figure 9(e) details the middle region of the workpiece under projection of the high-frequency fringe pattern using the ADFP method. The fringe pattern is clearly visible, in contrast to the highlights seen in Fig. 6(c). The absolute phase map of the workpiece using the ADFP method, in Fig. 9(f), has no apparent discontinuity or visible error, an improvement over the discontinuities seen in Fig. 6(d). Results of the workpiece measurement using the ADFP method are shown in Fig. 10. Figure 10(a) shows the 3-D point cloud converted from the absolute phase map in Fig. 9(f). Compared to the point cloud in Fig. 7(a), it is much denser, especially in the middle region of the workpiece. As before, we rotated and scanned the workpiece 5 times, merged the point clouds after alignment, and then built a surface using the Poisson surface reconstruction algorithm. The reconstructed result contains 1337615 faces and 680740 vertices, shown in Fig. 10(b). Figure 11(a) shows a color-coded surface comparison between the reconstructed result using the ADFP method and the CAD model with true 3-D coordinates, with most deviations in the range [-0.2, 0.2] mm. Compared to the measurement without ADFP, a deviation reduction of 60% is achieved. Figure 11(b) shows a cross section of the surface comparison and the deviations of 13 points on it, in the range [-0.1, 0.07] mm. The mean deviation of the 13 points is −0.03 mm, and the RMS error is 0.012 mm, an error reduction of 48% compared to the measurement without ADFP.

Fig. 9 Results of the fringe pattern projection and captured images in measuring the workpiece using the ADFP method: (a) the optimal intensity of each pixel in the fringe pattern, (b) contours of saturated-pixel clusters, (c) a vertical adapted fringe pattern matching the contours in (b), (d) captured image of the workpiece with projection of the adapted fringe pattern in (c), (e) the details under the projection of high frequency adapted fringe pattern, (f) absolute phase map of the workpiece from (d) and (e).

Fig. 10 Measurement result of the workpiece using the ADFP method: (a) 3-D point cloud, (b) 3-D reconstructed result rendered in shaded mode.

Fig. 11 Surface comparison between the CAD model and the reconstructed result using the ADFP method: (a) deviations of surface comparison in color, (b) deviations of 13 points on the cross section.

4. Discussion

The above results demonstrate the effectiveness of the ADFP method in measuring a surface with a large range of reflectivity variation. An overall deviation reduction of 60% and a local error reduction of 48% were achieved compared to the measurement without the ADFP. From the experimental results shown previously, the advantages of the ADFP method are clear:

  • A. Low cost. The ADFP method only needs to adjust the projected intensity over the image pixels, so no additional hardware is required.
  • B. Higher efficiency. Methods based on multiple exposures, or on projecting fringe patterns at multiple intensities, must capture a large number of fringe pattern images, which is time consuming, especially when measuring an object with a large range of reflectivity variation across the surface. The ADFP method uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns for 3-D measurement. For example, the experiment in reference [8] needs 23 exposures to obtain high-quality 3-D data of a china vase; even with the three-step phase-shifting algorithm and the three-frequency heterodyne phase-unwrapping algorithm, a total of 3 × 3 × 23 = 207 images must be captured to fuse a single set of phase-shifting images. Similarly, the black-and-white checkerboard experiment in reference [24] projects fringe patterns at 13 maximum input gray levels, ranging from 255 to 60 with a constant step of 15 gray levels; the method therefore has to capture 4 × 3 × 13 = 156 images when the four-step phase-shifting algorithm and the three-frequency heterodyne phase-unwrapping algorithm are used. In contrast, the ADFP method proposed in this paper needs to capture only 7 + 4 × 3 + 4 × 3 = 31 images to obtain the 3-D data of the workpiece.
  • C. Precise quantification. Because a mathematical model is established to calculate the intensity of the ambient light and surface interreflections, the optimal intensity of each pixel in the fringe pattern is computed precisely. This is superior to methods that compose a fringe pattern pixel by pixel from the unsaturated pixel intensities of images captured under projected uniform lights of multiple intensities.
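The image-count comparison in point B above reduces to simple products of (phase steps) × (frequencies) × (exposures or gray levels), plus the ADFP method's 7 preliminary uniform projections and its vertical and horizontal phase-shifted sets; the counts quoted can be checked directly:

```python
# Images required by each approach, per the counts discussed above.
hdr_exposures = 3 * 3 * 23         # multi-exposure method of Ref. [8]
multi_gray    = 4 * 3 * 13         # multi-gray-level method of Ref. [24]
adfp          = 7 + 4 * 3 + 4 * 3  # ADFP: 7 uniform + two phase-shifted sets
print(hdr_exposures, multi_gray, adfp)  # 207 156 31
```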

The proposed ADFP method is limited by the digital projector adopted. Since an 8-bit digital projector, providing 256 gray levels from 0 to 255, was used to generate the fringe patterns, the range of intensity variation of the projected fringe patterns is limited. Therefore, the ADFP method may not be appropriate for measuring objects with an extremely large range of reflectivity variation across the surface, such as a parabolic metal mirror. On the other hand, fast 3-D measurement with the ADFP method demands a digital projector with a high refresh rate and a high dynamic range of intensity variation. Since the digital projector and the camera must be precisely synchronized, a low refresh rate of the projector limits the minimum exposure time of the camera and results in a longer measurement time. Further research will be carried out on the development of a new digital fringe pattern projector.
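The 8-bit limitation above can be made concrete with a short sketch (our own illustration, not the paper's code): the computed per-pixel optimal projection intensity must be rounded and clipped to the projector's gray-level range, and pixels whose optimum falls outside it, either extreme highlights needing less than one gray level or dark regions needing more than 255, cannot be fully compensated.

```python
import numpy as np

def clip_to_projector(optimal_intensity, bits=8):
    """Clip computed optimal projection intensities to the projector's
    gray-level range and flag pixels it cannot compensate."""
    max_level = 2 ** bits - 1               # 255 for an 8-bit projector
    opt = np.asarray(optimal_intensity, dtype=float)
    clipped = np.clip(np.rint(opt), 0, max_level).astype(int)
    # Extreme highlights whose optimum is below one gray level would
    # still saturate -- the stated limitation of the 8-bit projector.
    uncompensated = opt < 1
    return clipped, uncompensated

# Hypothetical per-pixel optima: too dark, in range, extreme highlight.
opt = np.array([300.0, 128.4, 0.2])
clipped, bad = clip_to_projector(opt)
```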

5. Conclusion

We have presented a new adaptive digital fringe projection (ADFP) method that avoids image saturation and maintains a high SNR in the 3-D shape measurement of surfaces with a large range of reflectivity variation, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For bright regions caused by high surface reflectivity or strong illumination from ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation; simultaneously, the maximum intensity of 255 is used for dark regions with low surface reflectivity to maintain a high SNR. Experiments verified that the ADFP method achieves higher 3-D measurement accuracy across a surface with a large range of reflectivity variation than measurement without it. The ADFP method uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, without the many exposures and fringe pattern projections used in previous high dynamic range 3-D scanning methods, thereby avoiding the additional complexity of multiple camera viewpoints, projection directions, and optical and control hardware.

Acknowledgments

This work was supported by the National Basic Research Program of China (973 Program No. 2011CB013104), the R&D Key Projects from Guangdong Province (No. 2015B010104008, No. 2015B010133005, and No. 2015A030312008), and the Young Talent Innovation Projects from Guangdong Educational Department (No. 2015KQNCX147).

References and links

1. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–22 (2000).

2. J. Salvi, J. Pagès, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognit. 37(4), 827–849 (2004).

3. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).

4. X. Han and P. Huang, “Combined stereovision and phase shifting method: a new approach for 3D shape measurement,” Proc. SPIE 7389, 73893C (2009).

5. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).

6. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

7. D. Palousek, M. Omasta, D. Koutny, J. Bednar, T. Koutecky, and F. Dokoupil, “Effect of matte coating on 3D optical measurement accuracy,” Opt. Mater. 40, 1–9 (2015).

8. S. Zhang and S.-T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48(3), 033604 (2009).

9. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces,” Opt. Lasers Eng. 50(10), 1484–1493 (2012).

10. H. Zhao, X. Liang, X. Diao, and H. Jiang, “Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector,” Opt. Lasers Eng. 54, 170–174 (2014).

11. L. Ekstrand and S. Zhang, “Autoexposure for three-dimensional shape measurement using a digital-light-processing projector,” Opt. Eng. 50(12), 123603 (2011).

12. S. Ri, M. Fujigaki, and Y. Morimoto, “Intensity range extension method for three-dimensional shape measurement in phase-measuring profilometry using a digital micromirror device camera,” Appl. Opt. 47(29), 5400–5407 (2008).

13. C. Waddington and J. Kofman, “Saturation avoidance by adaptive fringe projection in phase-shifting 3D surface-shape measurement,” in 2010 International Symposium on Optomechatronic Technologies (IEEE, 2010), pp. 1–4.

14. G. Babaie, M. Abolbashari, and F. Farahi, “Dynamics range enhancement in digital fringe projection technique,” Precision Eng. 39, 243–251 (2015).

15. C. Zhang, J. Xu, N. Xi, J. Zhao, and Q. Shi, “A robust surface coding method for optically challenging objects using structured light,” IEEE Trans. Autom. Sci. Eng. 11(3), 775–788 (2014).

16. G. H. Liu, X.-Y. Liu, and Q.-Y. Feng, “3D shape measurement of objects with high dynamic range of surface reflectivity,” Appl. Opt. 50(23), 4557–4565 (2011).

17. Q. Hu, K. G. Harding, X. Du, and D. Hamilton, “Shiny parts measurement using color separation,” Proc. SPIE 6000, 60000D (2005).

18. S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21(3), 163–186 (1997).

19. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22(8), 10064–10071 (2014).

20. R. Kowarschik, P. Kuhmstedt, and J. Gerber, “Adaptive optical three-dimensional measurement with structured light,” Opt. Eng. 39(1), 150–158 (2000).

21. X. Su and W. Chen, “Fourier transform profilometry: a review,” Opt. Lasers Eng. 35(5), 263–284 (2001).

22. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010).

23. C. Reich, R. Ritter, and J. Thesing, “White light heterodyne principle for 3D-measurement,” Proc. SPIE 3100, 236–244 (1997).

24. C. Waddington and J. Kofman, “Camera-independent saturation avoidance in measuring high-reflectivity-variation surfaces using pixel-wise composed images from projected patterns of different maximum gray level,” Opt. Commun. 333, 32–37 (2014).

Figures (11)

Fig. 1 The images of objects with different surface material, and the corresponding captured fringe pattern images: (a) a plastic block, (b) a fringe pattern image of the plastic block, (c) a metallic workpiece, (d) reflected fringe pattern captured by the camera from the metallic workpiece with exposure time = 16.7 ms, and (e) reflected pattern of the same sample with exposure time = 100 ms.

Fig. 2 Schematic diagram of the phase-shifting fringe projection system for 3-D surface measurement.

Fig. 3 Light sources in capturing fringe pattern images.

Fig. 4 Flowchart of the proposed ADFP method.

Fig. 5 System setup.

Fig. 6 The problems in the workpiece measurement without the ADFP: (a) a fringe pattern using the maximum intensity of 255, (b) captured image of the workpiece with projection of the fringe pattern in (a), (c) highlights under the projection of high frequency fringe pattern, (d) absolute phase map of the workpiece from (b) and (c).

Fig. 7 Measurement result of the workpiece without the ADFP: (a) 3-D point cloud, (b) 3-D reconstructed result rendered in shaded mode.

Fig. 8 Surface comparison between the CAD model and the reconstructed result without the ADFP: (a) deviations of surface comparison in color, (b) deviations of 13 points on the cross section.

Fig. 9 Results of the fringe pattern projection and captured images in measuring the workpiece using the ADFP method: (a) the optimal intensity of each pixel in the fringe pattern, (b) contours of saturated-pixel clusters, (c) a vertical adapted fringe pattern matching the contours in (b), (d) captured image of the workpiece with projection of the adapted fringe pattern in (c), (e) the details under the projection of high frequency adapted fringe pattern, (f) absolute phase map of the workpiece from (d) and (e).

Fig. 10 Measurement result of the workpiece using the ADFP method: (a) 3-D point cloud, (b) 3-D reconstructed result rendered in shaded mode.

Fig. 11 Surface comparison between the CAD model and the reconstructed result using the ADFP method: (a) deviations of surface comparison in color, (b) deviations of 13 points on the cross section.

Equations (20)


I_1(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y)].
I_2(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y) + \pi/2].
I_3(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y) + \pi].
I_4(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y) + 3\pi/2].
\phi(x,y) = \arctan\left[\dfrac{I_4(x,y) - I_2(x,y)}{I_1(x,y) - I_3(x,y)}\right].
I(x,y) = kt\,\{r(x,y)[L_p(x,y) + L_i(x,y)] + L_a(x,y)\} + I_n(x,y).
I(x,y) = kt\,r(x,y)L_p(x,y) + kt\,[r(x,y)L_i(x,y) + L_a(x,y)] + I_n(x,y).
I = b_1 x_1 + b_2 x_2 + I_n.
\begin{cases} \partial Q/\partial b_1 = -2\sum_{i=1}^{n}(I_i - b_1 x_{i1} - b_2 x_{i2})\,x_{i1} = 0 \\ \partial Q/\partial b_2 = -2\sum_{i=1}^{n}(I_i - b_1 x_{i1} - b_2 x_{i2})\,x_{i2} = 0 \end{cases}.
\begin{cases} b_1\sum_{i=1}^{n} x_{i1}^2 + b_2\sum_{i=1}^{n} x_{i1}x_{i2} = \sum_{i=1}^{n} x_{i1}I_i \\ b_1\sum_{i=1}^{n} x_{i1}x_{i2} + b_2\sum_{i=1}^{n} x_{i2}^2 = \sum_{i=1}^{n} x_{i2}I_i \end{cases}.
X^{\mathrm{T}}XB = X^{\mathrm{T}}I.
B = [b_1\ b_2]^{\mathrm{T}} = (X^{\mathrm{T}}X)^{-1}X^{\mathrm{T}}I.
I(x,y) = kt\,[b_1 L_p(x,y) + b_2].
L_p(x,y) = \dfrac{I(x,y) - b_2 kt}{b_1 kt}.
L_{opt}(x,y) = \dfrac{I_{ideal} - b_2 kt}{b_1 kt}.
M_c(x,y) = \begin{cases} 0, & I_i(x,y) \le 248,\ \forall i \in [1,4] \\ 255, & \text{otherwise} \end{cases}.
s\,[u\ v\ 1]^{\mathrm{T}} = H\,[x\ y\ 1]^{\mathrm{T}}.
B = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = (X^{\mathrm{T}}X)^{-1}X^{\mathrm{T}}I, \quad X = \begin{bmatrix} L_1 & t \\ L_2 & t \\ \vdots & \vdots \\ L_7 & t \end{bmatrix}, \quad I = \begin{bmatrix} I_1(x,y) \\ I_2(x,y) \\ \vdots \\ I_7(x,y) \end{bmatrix}.
I''(u,v) = \dfrac{L_{opt}(u,v) - L_{min}(u,v)}{2}.
I'(u,v) = \dfrac{L_{opt}(u,v) + L_{min}(u,v)}{2}.