
Application of orthogonal fringe patterns in uniaxial microscopic 3D profilometry

Open Access

Abstract

This research presents a novel uniaxial microscopic 3D profilometry method that applies orthogonal fringe patterns in a structured light system. Specifically, the projector alternately projects vertical and horizontal stripes, and the subtraction of two adjacently captured images eliminates the influence of the background information. The method requires only one-tenth of the data collection and processing volume of our previous method, while achieving nearly the same accuracy. We describe the principle of this uniaxial microscopic 3D profilometry and demonstrate the accuracy of the proposed measurement framework by comparing it with the ten-step phase-shifting method and the Fourier transform method. The proposed approach achieves a root-mean-square (RMS) error of approximately 5.22 µm over a depth range of 1100 µm.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical microscopic 3D measurement technologies, with the virtues of being non-destructive, non-contact, flexible, fast, high-resolution and highly accurate, have gained great popularity in numerous fields such as biochemistry, energy and security. With recent advancements in modern manufacturing technology, a multitude of non-contact optical approaches have been developed to achieve micro-level 3D metrology, including white-light interferometry [1–5], phase-shifting interferometry [6–10], wavelength scanning interferometry [11–15], focus variation [16,17], infinite focus microscopy [18,19], and confocal microscopy [20].

All of these approaches can be applied to measure objects with sharp surface variations such as grooves, step distributions and deep holes. White-light interferometry [3] uses fringe contrast to identify surface height. The philosophy behind phase-shifting interferometry [6] is to apply a time-varying phase shift between the reference and test wavefronts, generally using a piezoelectric ceramic transducer as the phase shifter. Wavelength scanning interferometry [14] is a volumetric imaging technique in which 2D image sequences are recorded as the wavenumber of the light source varies with time. Focus variation [21] combines the small depth of focus of an optical system with vertical scanning to provide topographical and colour information based on the variation of focus. The infinite focus microscope [19] reconstructs a 3D image from a series of 2D images captured between the lowest and highest focal planes; areas in focus are assembled to produce a reconstruction of the specimen surface. Chromatic confocal microscopy [22,23] measures depth based on axial chromatic aberration. The above vertical measurement techniques can be divided into two types according to whether mechanical moving parts are required. Those that do not require mechanical moving parts can achieve fast measurement speed and high measurement accuracy, but have a relatively small field of view (FOV); the others can achieve both high measurement accuracy and a large FOV, yet their measurement speed is slower. Although these state-of-the-art vertical measurement techniques have been successful in many fields of application, all of them require well-aligned optical components and a well-controlled environment to achieve high measurement accuracy.

Uniaxial microscopic 3D profilometry based on structured light does not require a mechanical moving device. This technique utilizes the phase-shifting method (PSM) [24] or the Fourier transform method (FTM) to extract the modulation distribution at each position, and then reconstructs the surface shape of the specimen from the relationship between modulation and depth. However, the PSM is a multi-frame fringe processing technique, which requires at least three fringe patterns at each position to achieve modulation retrieval; the FTM can be regarded as a single-frame fringe processing technique, in which only one fringe needs to be captured to obtain the modulation distribution. For the latter, when the background light field is uneven or the surface shape of the object is complex, the fundamental frequency component containing the height information of the object is affected by the zero-frequency component, which degrades the measurement accuracy.

This paper proposes a method of uniaxial microscopic 3D profilometry using orthogonal fringe patterns. Specifically, an orthogonal fringe pattern is generated by the subtraction of a horizontal stripe and a vertical stripe, which respectively record the height information of the object at different projection focal lengths. The subtraction eliminates the background information, so when the Fourier transform is used to extract the fundamental frequency information in the two orthogonal directions, the influence of the zero-frequency component on the fundamental frequency can be avoided and a more accurate modulation distribution can be obtained. In our experiments, 2N frames of fringes are projected onto the measured object and the corresponding images are captured by the camera, which involves the same collection and data processing volume as the FTM but achieves higher measurement accuracy. Compared with our previous work using the PSM, it requires only one-tenth of the data to be collected and processed, while reaching almost the same measurement accuracy.

Section 2 presents the principle of the uniaxial microscopic system and the algorithms for modulation retrieval with orthogonal fringes. Section 3 illustrates the calibration framework. Section 4 shows the experimental results. Finally, Section 5 summarizes this paper.

2. Principle

As shown in Fig. 1, assuming the light source is a point object, for a perfect optical system the image of this point on the focal plane (the red point on plane P$_4$) will be the same as the original point. On the planes before and after the focal plane, this point becomes a disc with a varying degree of blur (the yellow discs on the planes before or after plane P$_4$), and the further the distance from the focal plane, the larger the diameter of the blurred disc. For the image of a sinusoidal grating through this lens imaging system, the variation of depth is embodied in the blurriness of the image. When the image of the grating is projected onto the specimen surface, the clearest image appears on the focal plane; before or after the focal plane, the blur of the image becomes more severe as the distance from the focal plane increases. If one uses the modulation value to describe the level of blurriness and extracts the modulation value of the eponymous pixel on each image, a curve such as the one shown in Fig. 1 is produced. The extracted values on these planes are marked by blue dots on this curve. The maximum modulation value corresponds to the clearest image. As the distance from the focal plane increases, the modulation value gradually decreases due to the blurring. If more planes with a smaller interval between any two adjacent planes are used, a smooth curve can be acquired by capturing the images on these planes and extracting the modulation value of the eponymous pixel. In fact, this curve represents the relationship between the modulation value and the depth along the optical axis, which is the basis of the present framework.
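
To make this mapping concrete, here is a minimal Python sketch (not the authors' code; the array shapes and data are assumed for illustration) that converts a stack of per-plane modulation maps into a depth map by locating the peak of the modulation curve at every pixel:

```python
import numpy as np

def depth_from_modulation_stack(mod_stack, plane_depths):
    """Estimate a per-pixel depth map from a stack of modulation maps.

    mod_stack    : (K, H, W) array, modulation value of every pixel on
                   K planes along the optical axis.
    plane_depths : (K,) array, depth (in um) of each plane.
    """
    peak_idx = np.argmax(mod_stack, axis=0)   # plane index of maximum modulation
    return plane_depths[peak_idx]             # clearest plane -> depth value

# Toy usage: 12 planes spaced 100 um apart, random modulation values.
planes = np.arange(12) * 100.0
mods = np.random.rand(12, 4, 4)
print(depth_from_modulation_stack(mods, planes))
```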


Fig. 1. The principle of uniaxial microscopic 3D shape measurement technique.


2.1 Modulation retrieval with orthogonal fringes

As shown in Fig. 2, assume that the projector projects 2N fringes (N is even) and that any two adjacent stripes are orthogonal to each other. 2N planes with a fixed small interval are sequentially placed along the optical axis. Plane P$_{N-1}$ represents the focal plane, and the others are defocused planes. The vertical stripe on the focal plane can be mathematically formulated as

$$\begin{aligned} I_{N - 1}^{V}(x,y ) & = \frac{{R_0(x,y)B_0(x,y)}}{{{E^2}}} + \frac{{R_0(x,y)C_0(x,y)}}{{{E^2}}} \cos \left[ {2\pi {f_0}x + {\Phi _0}(x,y)} \right] \end{aligned}$$

where $I_{N - 1}^{V}(x,y)$ is the light intensity distribution of the fringe pattern on the focal plane; $R_0(x,y)$ represents the reflectivity of the object; $E$ is the magnification of the optical system; $B_0(x,y)$ and $C_0(x,y)$ are the background intensity and the fringe contrast, respectively; $f_0$ is the spatial frequency of the fringe; and $\Phi_0(x,y)$ denotes the initial phase.


Fig. 2. Modulation retrieval with orthogonal fringes.


Equation (1) describes the clearest image, which lies on the focal plane P$_{N-1}$. The images on the defocused planes can be described as the result of convolving the focused image with the point spread function $G(x,y;\delta)$:

$$I_{n - 1}^{V'}(x,y;\delta_{n - 1} ) = G(x,y;\delta_{n - 1} ) \otimes I_{N - 1}^{V}(x,y)$$
where
$$G(x,y;\delta_{n - 1} ) = {1 \over {2\pi \sigma_{n - 1}^2}}{e^{ - {{{x^2} + {y^2}} \over {2\sigma_{n - 1}^2}}}}$$

The operator $\otimes$ represents the convolution operation, and $n \in \{2, 4, \ldots, N, \ldots, 2N\}$. $\delta_{n - 1}$ is the distance from the defocused plane P$_{n-1}$ to the focal plane P$_{N-1}$. $\sigma_{n - 1}$, the standard deviation of the point spread function, is the spread parameter, whose value is proportional to the radius of the blur spot: $\sigma_{n - 1} = \alpha r_{n - 1}$. The value of $\alpha$ depends on the parameters of the optical system; generally, $\alpha$ takes an approximate value of $\sqrt{2}$.

Substituting Eq. (1) and Eq. (3) into Eq. (2), the out-of-focus image can be written as

$$\begin{aligned} I_{n - 1}^{V'}(x,y;\delta_{n - 1} ) = \frac{{R_0(x,y)B_0(x,y)}}{{{E^2}}}+ \frac{{R_0(x,y)C_0(x,y)}}{{{E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n - 1}^2}}}\cos \left[ {2\pi {f_0}x + {\Phi _0}(x,y)} \right] \end{aligned}$$
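
As an illustration of the defocus model of Eqs. (2)–(4), the following Python sketch (a simulation under assumed parameters, not the authors' code) blurs a focused vertical fringe with a Gaussian point spread function and shows how the fringe contrast decays with the spread parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simulate a focused vertical fringe (cf. Eq. (1)) and its defocused image
# obtained by convolution with the Gaussian PSF of Eq. (3).
H, W = 256, 256
f0 = 1 / 16.0                                  # assumed fringe frequency (cycles/pixel)
x = np.arange(W)
focused = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x)
focused = np.tile(focused, (H, 1))             # focal-plane image, contrast = 1

sigma = 3.0                                    # assumed spread parameter
defocused = gaussian_filter(focused, sigma=sigma)   # G(x, y; delta) convolved with I^V

# The contrast of the blurred fringe is attenuated by a Gaussian factor in
# sigma, as described by Eq. (4); with these values it drops to roughly half.
print(focused.max() - focused.min(), defocused.max() - defocused.min())
```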

Actually, in our experiment, any two adjacent stripes are orthogonal to each other. Equation (4) gives the light intensity distribution for the odd-numbered planes. For an even-numbered plane such as the subsequent plane P$_n$, since the reflectivity of the object $R_0(x,y)$, the background intensity $B_0(x,y)$ and the fringe contrast $C_0(x,y)$ change slowly, these parameters can, for simplicity, be taken as the same values for two adjacent fringes at the same point (x, y). Therefore, the corresponding light intensity distribution can be written as

$$\begin{aligned} I_n^{H'}(x,y;\delta_{n} ) = \frac{{R_0(x,y)B_0(x,y)}}{{{E^2}}} + \frac{{R_0(x,y)C_0(x,y)}}{{{E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n}^2}}}\cos \left[ {2\pi {f_0}y + {\Phi _0}(x,y)} \right] \end{aligned}$$
By subtracting Eq. (5) from Eq. (4), we can get
$$\begin{aligned} I_{\frac{n}{2}}^{O'}(x,y;\delta_{n-1};\delta_{n}) = I_{n - 1}^{V'}(x,y;\delta_{n - 1} )-I_n^{H'}(x,y;\delta_{n} ) ={\textrm{K}^V} - {\textrm{K}^H} \end{aligned}$$
where
$$\begin{aligned} \textrm{K}^V & = \frac{{R_0C_0}}{{{2E^2}}} {e^{ - \frac{1}{2}{f_0}^2{\sigma_{n - 1}^2}}}\cos \left[ {2\pi {f_0}x + {\Phi _0}} \right]\\ \textrm{K}^H & = \frac{{R_0C_0}}{{{2E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n}^2}}}\cos \left[ {2\pi {f_0}y + {\Phi _0}} \right] \end{aligned}$$

Employing the relationship between the cosine function and the complex exponential function, $\cos(x) = \frac{1}{2}e^{ix} + \frac{1}{2}e^{-ix}$, Eq. (6) can also be expressed as

$$\begin{aligned} I_{\frac{n}{2}}^{O'}(x,y;\delta_{n-1};\delta_{n}) = \textrm{K}_1^V + \textrm{K}_{ - 1}^V - \textrm{K}_1^H - \textrm{K}_{ - 1}^H \end{aligned}$$
where
$$\begin{aligned} \textrm{K}_1^V & = \frac{{R_0C_0}}{{{2E^2}}} {e^{ - \frac{1}{2}{f_0}^2{\sigma_{n - 1}^2}}} {e^{i\left[ {2\pi {f_0}x + {\Phi _0}} \right]}}\\ \textrm{K}_{ - 1}^V & = \frac{{R_0C_0}}{{{2E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n - 1}^2}}} {e^{-i\left[ {2\pi {f_0}x + {\Phi _0}} \right]}}\\ \textrm{K}_1^H & = \frac{{R_0C_0}}{{{2E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n}^2}}}{e^{i\left[ {2\pi {f_0}y + {\Phi _0}} \right]}}\\ \textrm{K}_{ - 1}^H & = \frac{{R_0C_0}}{{{2E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n}^2}}}{e^{-i\left[ {2\pi {f_0}y + {\Phi _0}} \right]}} \end{aligned}$$

Equation (8) shows that the subtraction of two adjacent stripes eliminates the constant term (the zero-frequency information in the frequency domain). A Fourier transform operation then follows: two adequate windows are used to separately filter the fundamental components in the two directions from the frequency domain, and an inverse Fourier transform is performed to obtain the two modulation distributions of the adjacent captured fringes.

$$M_{n - 1}^{V'}(x,y;\delta_{n-1}) = \frac{{R_0C_0}}{{{2E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n - 1}^2}}}= M_{f} {e^{ - \frac{1}{2}{f_0}^2{\sigma_{n - 1}^2}}}$$
$$M_{n}^{H'}(x,y;\delta_{n}) = \frac{{R_0C_0}}{{{2E^2}}}{e^{ - \frac{1}{2}{f_0}^2{\sigma_{n}^2}}}=M_{f} {e^{ - \frac{1}{2}{f_0}^2{\sigma_{n}^2}}}$$

$M_{n - 1}^{V'}(x,y;\delta _{n-1})$ is the modulation distribution for plane P$_{n-1}$, and $M_{n}^{H'}(x,y;\delta _{n})$ represents the modulation distribution for plane P$_n$. $M_{f}$ is the modulation distribution of the fringe pattern on the focal plane.
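
The complete modulation-retrieval step can be sketched as follows in Python (an illustrative implementation; the rectangular window size and the peak search along the frequency axes are assumptions rather than the authors' exact choices). It subtracts two adjacently captured orthogonal fringes, filters the two fundamental components in the frequency domain, and returns the two modulation maps of Eqs. (9) and (10):

```python
import numpy as np

def modulation_from_orthogonal_pair(img_vertical, img_horizontal, half_width=8):
    """Return the modulation maps of two adjacently captured orthogonal fringes.

    img_vertical / img_horizontal : images of the vertical and horizontal
    fringes on two adjacent planes; half_width is the half-size of the
    rectangular filter window in the frequency domain (assumed value).
    """
    diff = img_vertical.astype(float) - img_horizontal.astype(float)   # Eq. (6)
    spec = np.fft.fftshift(np.fft.fft2(diff))                          # centred spectrum
    H, W = diff.shape
    cy, cx = H // 2, W // 2

    # The zero-frequency term is removed by the subtraction, so the two
    # fundamental peaks can be located along the two frequency axes.
    fx = cx + 1 + np.argmax(np.abs(spec[cy, cx + 1:]))   # vertical-fringe peak
    fy = cy + 1 + np.argmax(np.abs(spec[cy + 1:, cx]))   # horizontal-fringe peak

    def filter_and_demodulate(px, py):
        win = np.zeros(spec.shape)
        win[py - half_width:py + half_width + 1,
            px - half_width:px + half_width + 1] = 1.0   # rectangular window
        band = np.fft.ifft2(np.fft.ifftshift(spec * win))
        return np.abs(band)                              # modulation magnitude

    m_vertical = filter_and_demodulate(fx, cy)           # M^V of Eq. (9)
    m_horizontal = filter_and_demodulate(cx, fy)         # M^H of Eq. (10)
    return m_vertical, m_horizontal
```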

3. System calibration

Before the measurement, system calibration should be completed in advance. Figure 3 shows the schematic diagram of the uniaxial microscopic 3D profilometry with orthogonal fringe patterns. The calibration process captures a series of fringe patterns on real planes separated by a fixed distance, uses these images to calculate the modulation distribution, and establishes a look-up table between height and the serial number of the maximum modulation value.


Fig. 3. Schematic diagram of the uniaxial microscopic 3D profilometry with orthogonal fringe patterns.


In our system, the calibration range is 1100 $\mu$m and the interval between any two adjacent calibration planes is 100 $\mu$m. A controller board is utilized to precisely synchronize a camera (Point Grey GS3-U3-23S6M), a projector (Light Crafter PRO6500) and an electrically tunable lens (ETL, model EL-16-40-TC). As shown in Fig. 3, the reference plane P$_1$ is first placed at Z(1) = 0 $\mu$m, and its height is assumed to be 0 $\mu$m. The digital stripe projector, built on micromirror projection units, projects a series of fringe patterns in which any two adjacent stripes are orthogonal to each other. The ETL is applied to set different focal distances: while the two orthogonal fringe patterns are alternately projected, the driving current of the ETL is increased from small to large in equal steps, that is, the amount of defocus changes gradually from small to large. Meanwhile, the images on the surface of the reference plane are synchronously captured by the digital CMOS camera, whose resolution is 1920 $\times$ 1200 pixels and which is connected to a telecentric lens. The total number of captured images for each plane is 316, and these images are numbered from 1 to 316 in order (the serial number). The modulation distribution is calculated for each image, which yields a curve like the one in Fig. 1 for any eponymous pixel (x, y). The greatest modulation value of this curve identifies an exact serial number $S(1,x,y)_{\max}$, so a relationship between the relative height Z(1, x, y) = 0 $\mu$m and the serial number $S(1,x,y)_{\max}$ can be established for this pixel (x, y).

A one-dimensional precision translation platform (Newport 462-X-M) is then used to move the plane to the second position Z(2, x, y) = 100 $\mu$m, another group of 316 images is collected, and the relationship between the relative height Z(2, x, y) = 100 $\mu$m and the serial number $S(2,x,y)_{\max}$ is established. The above operation is repeated until the relationship between the relative height Z(J, x, y) = (J-1) $\times$ 100 $\mu$m and the serial number $S(J,x,y)_{\max}$ has been established (J = 12 in our system). In this process, the same pixel (x, y) on each calibration plane corresponds to a relative height, and its maximum modulation value is extracted from the captured fringes with a unique serial number. Therefore, a relationship between the serial number of the maximum modulation value and the relative height is established at pixel (x, y). Subsequently, a look-up table between the relative height Z(j, x, y) (j = 1, 2, 3, $\cdots$, J) and the serial number $S(j,x,y)_{\max}$ can be established, which can be described as

$$\begin{aligned} Z(j,x,y) = a(x,y) + b(x,y)S{(j,x,y)_{\max }}+ c(x,y){S^2}{(j,x,y)_{\max }} \end{aligned}$$
where $Z(j,x,y)$ represents the relative height of point (x, y), and $a(x,y)$, $b(x,y)$ and $c(x,y)$ are the coefficients of the quadratic curve fit used to express this relationship. Each pixel in the images has its own formula with different values of the parameters $a(x,y)$, $b(x,y)$ and $c(x,y)$, because the calibration plane is not an ideal plane; that is, there are uneven areas on its surface. Owing to the quadratic curve fitting and interpolation, any relative height within the calibration range from 0 $\mu$m to 1100 $\mu$m can be matched to a corresponding serial number.
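
A minimal sketch of building this per-pixel look-up table is given below (assumed array shapes, not the authors' code): the serial number of the maximum modulation is found for each calibration plane, and the quadratic coefficients of Eq. (11) are fitted pixel by pixel:

```python
import numpy as np

def build_lookup_table(mod_stacks, plane_heights):
    """Fit the per-pixel quadratic of Eq. (11).

    mod_stacks    : (J, K, H, W) array, modulation maps of the K serial
                    numbers captured on each of the J calibration planes.
    plane_heights : (J,) array of relative heights, here 0, 100, ..., 1100 um.
    Returns a (3, H, W) array of coefficients a, b, c for every pixel.
    """
    s_max = 1 + np.argmax(mod_stacks, axis=1)        # serial number of max modulation
    J, H, W = s_max.shape
    coeffs = np.empty((3, H, W))
    for i in range(H):
        for j in range(W):
            c2, c1, c0 = np.polyfit(s_max[:, i, j], plane_heights, deg=2)
            coeffs[:, i, j] = (c0, c1, c2)           # store as a, b, c
    return coeffs

def height_from_serial(coeffs, s):
    """Convert a measured serial-number map s (H, W) into a height map."""
    a, b, c = coeffs
    return a + b * s + c * s ** 2
```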

To reduce the amount of computation in system calibration, the captured fringe patterns are cropped to a size of 820 $\times$ 1430 pixels. To reduce the impact of random noise, each fringe is captured 20 times and the averaged image is used for further analysis. Figure 4 shows the relationship between the relative height $Z(j)$ and the serial number $S(j)_{\max}$ at the point (410, 715).


Fig. 4. The relationship between the serial number for the maximum modulation value and the relative height value.


To evaluate the calibration accuracy, four flat planes (a coated mirror surface) were tested. For comparison, the PSM and the FTM with a single fringe pattern were also used to reconstruct the surface shape of the calibration plane. The measurement accuracy is evaluated by taking the difference between the measurement result and its fitted plane. The plane function utilized in this experiment is written as

$$Z(x,y) = \alpha + \beta x + \gamma y,$$
where $\alpha$, $\beta$, $\gamma$ are the coefficients of the plane fitting equation. Figures 5(a)–5(c) respectively show the error distributions of the first flat plane obtained with the PSM, the proposed method (PM) and the FTM, while Figs. 5(d)–5(l) depict the error distributions of the other three tested flat planes obtained with these three methods. Table 1 shows the coefficients of the plane fitting equation for the four tested planes obtained by the three methods. Table 2 lists the root-mean-square (RMS) error for each flat plane; the RMS values of the plane errors for the three methods are approximately 4.44 $\mu$m, 5.22 $\mu$m and 7.69 $\mu$m. Although the PM requires only one-tenth of the collection and data processing volume of the PSM, its measurement accuracy is very close to that of the PSM and much higher than that of the FTM.
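
The accuracy evaluation can be reproduced in a few lines; the sketch below (synthetic data, not the authors' code) fits the plane of Eq. (12) to a measured height map by least squares and reports the RMS of the residuals:

```python
import numpy as np

def plane_rms_error(z_map):
    """Fit Z = alpha + beta*x + gamma*y (Eq. (12)) and return the RMS residual."""
    H, W = z_map.shape
    yy, xx = np.mgrid[0:H, 0:W]
    A = np.column_stack([np.ones(H * W), xx.ravel(), yy.ravel()])
    (alpha, beta, gamma), *_ = np.linalg.lstsq(A, z_map.ravel(), rcond=None)
    residual = z_map - (alpha + beta * xx + gamma * yy)
    return (alpha, beta, gamma), np.sqrt(np.mean(residual ** 2))

# Example with a synthetic tilted plane plus noise (heights in um).
yy, xx = np.mgrid[0:100, 0:100]
z = 5.0 + 0.02 * xx - 0.01 * yy + np.random.normal(0.0, 5.0, (100, 100))
print(plane_rms_error(z))
```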


Fig. 5. Error maps of the four tested flat planes. The first row shows the error maps for the PSM, the second row shows the error maps for the PM, and the third row shows the error maps for the FTM. The first column shows the error maps for plane 1, the second column for plane 2, the third column for plane 3, and the fourth column for plane 4.



Table 1. Coefficients of the plane fitting equation for the tested planes.


Table 2. Root-mean-square errors of the four tested flat planes ($\mu$m)

Two points about the actual experiment deserve mention. 1) To compare the three approaches, eleven fringe patterns were projected and photographed for each current value. The first fringe is a horizontal stripe, and the other ten are ten-step phase-shifting patterns. For the PSM, the last ten images are used for system calibration and plane reconstruction; for the proposed method (PM) with two adjacent orthogonal fringes, the vertical stripes at odd serial numbers and the horizontal stripes at even serial numbers are used for system calibration and plane reconstruction; for the FTM with a single fringe, only the second image (a vertical fringe) at each current value is used for system calibration and plane reconstruction. That is, the three methods are compared under the same experimental conditions with their respective look-up tables. 2) The number of projected fringes can also be odd, as sketched below. For example, with 315 fringes, the first 314 fringes are subtracted in pairs to obtain their respective modulation information, while the last fringe is subtracted from the previous picture (the $314^{th}$ fringe) and only one filter window is used to extract its fundamental frequency and hence its modulation distribution.
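
Point 2) above amounts to a simple pairing rule for the captured frames; a hypothetical Python sketch (not the authors' code) is:

```python
def pair_frames(total):
    """Pair frame indices (1-based) for the subtraction of adjacent fringes."""
    pairs = [(k, k + 1) for k in range(1, total - total % 2, 2)]
    if total % 2:                     # odd count: the last frame reuses the previous one
        pairs.append((total, total - 1))
    return pairs

print(pair_frames(6))         # [(1, 2), (3, 4), (5, 6)]
print(pair_frames(315)[-1])   # (315, 314): only one filter window is used for this pair
```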

4. Experiment

A sequence of experiments was conducted to demonstrate the effectiveness of the PM with orthogonal fringes. For comparison, the PSM and the FTM with a single fringe were also applied to reconstruct the surface of the specimen. The range and the interval of the driving current of the Optotune electrically focus-tunable lens in this section are the same as in the system calibration.

Firstly, a specimen with protrusions forming the letters "DOWN" was measured to verify the PM. All the fringe patterns were cropped to a size of 820 $\times$ 1430 pixels to reduce the amount of calculation. Figure 6(a) shows the fringe (vertical stripe) when the serial number S(j) = 151. To show how the focal plane varies with the serial number S(j) controlled by the electrically tunable lens, three serial numbers were selected: S(j) = 101, S(j) = 151 and S(j) = 201. Figures 6(b)–6(d) show the local closeup images of the region marked in Fig. 6(a) with a red dotted rectangle (rows 320–760, columns 650–1080) for these serial numbers. Obviously, the focal plane of the projector moves from the top to the bottom of the measured object.


Fig. 6. Fringes for the specimen. (a) Fringe image when S(j) = 151; (b) A closeup image when S(j) = 101; (c) A closeup image when S(j) = 151; (d) A closeup image when S(j) = 201.


Figure 7(a) shows the fringe (horizontal stripe) when the serial number S(j) = 152, whose local closeup image is shown in Fig. 7(b). The spectrum of the subtraction of two adjacent orthogonal fringe patterns (Fig. 6(a) and Fig. 7(a)) is shown in Fig. 7(c), and the spectrum of a single fringe pattern (Fig. 6(a)) is shown in Fig. 7(d). Since the background information is subtracted, only the fundamental frequency information in the two orthogonal directions exists in Fig. 7(c), which avoids the influence of the zero-frequency component on the fundamental frequency component. Because 1) the intensity of the zero-frequency component is much higher than that of the fundamental frequency component, and 2) the measured object is complex and the background light intensity is uneven, the zero-frequency and fundamental frequency components spread towards each other to a large extent, and the effect of the zero-frequency component on the fundamental frequency component cannot be judged from Fig. 7(d). To show the spectrum information more clearly, part of a cross section (row 411, columns 720–1430) of Fig. 7(c) and of Fig. 7(d) is shown in Fig. 8(a) and Fig. 8(b), respectively. One can clearly see that the zero-frequency component and the fundamental frequency component are aliased together in Fig. 8(b). If the window size of the filter is small, the surface of the measured object loses many details, and even the edges and corners are directly smoothed; if the window size of the filter is large, the zero-frequency component and noise affect the extraction of the modulation information. In this experiment, the same appropriate filter size was selected for the PM and the FTM for comparison.


Fig. 7. Spectrum for the fringes. (a) Fringe image when S(j) = 152; (b) A closeup image of Fig. 7(a); (c) The spectrum of the subtraction of Fig. 6(a) and Fig. 7(a); (d) The spectrum of Fig. 6(a).



Fig. 8. Spectrum. (a) Part of a cross section for Fig. 7(c); (b) Part of a cross section for Fig. 7(d).


Figures 9(a)–9(c) respectively show the reconstruction results of the PSM, PM and FTM. Since the bottom of the letters is out of focus in the camera's field of view, the reconstruction results of the PSM and PM there are almost a plane. For the FTM, the fundamental frequency component and the zero-frequency component are aliased; the fundamental frequency component cannot be extracted accurately, which results in a coarse surface at the bottom of the reconstruction. For the areas that are clearly visible in the camera's field of view, detailed features were well captured by the PSM and PM, even for deep grooves, whereas the FTM with a single fringe fails to correctly reconstruct the surface shape of the specimen. To better visualize the reconstructed 3D shape, the 540$^{th}$ row from the 650$^{th}$ column to the 1080$^{th}$ column of the 3D data obtained by these methods, marked in Fig. 6(a) with a green line, is plotted in Fig. 9(d). Clearly, the results of the PM are more consistent with those of the PSM, while glitches occur on the surface of the FTM result.


Fig. 9. Experimental results. (a) 3D reconstruction of the specimen by PSM; (b) 3D reconstruction of the specimen by PM; (c) 3D reconstruction of the specimen by FTM; (d) Comparison of one partial cross section for these results.


Similarly, to further demonstrate the merits of the PM, the 3D profile of a screw was measured. Figure 10(a) shows the fringe (vertical stripe) when the serial number S(j) = 151. The local closeup images of the region marked in Fig. 10(a) with a red dotted rectangle for serial numbers S(j) = 101, S(j) = 151 and S(j) = 201 are respectively shown in Figs. 10(b)–10(d).


Fig. 10. Fringes for the screw. (a) Fringe image when S(j) = 151; (b) A closeup image when S(j) = 101; (c) A closeup image when S(j) = 151; (d) A closeup image when S(j) = 201.


Figures 11(a)–11(c) show the overall 3D shapes obtained by these three methods. The screw is clearly and well measured by the PSM and PM. However, many errors occur on the surface obtained by the FTM with a single fringe. Figure 11(d) depicts the partial cross section (marked in Fig. 10(a) with a green line) of these reconstruction results, and one can clearly distinguish the errors on the surface shape obtained by the FTM.


Fig. 11. Experimental results. (a) 3D reconstruction of the screw by PSM; (b) 3D reconstruction of the screw by PM; (c) 3D reconstruction of the screw by FTM; (d) Comparison of one partial cross section for these results.


5. Conclusion

This paper has presented an approach that eliminates the effect of the zero-frequency component on the fundamental frequency component by applying two adjacent orthogonal fringe patterns in microscopic 3D profilometry. Performing a Fourier transform on the subtraction of these two images separates the fundamental frequency components in the two orthogonal directions, and filtering the two components respectively achieves the extraction of the modulation information for the two fringes. A root-mean-square (RMS) error of approximately 5.22 $\mu$m can be achieved over a depth range of 1100 $\mu$m. With only one-tenth of the collection and data processing volume of the PSM, the accuracy of the PM can almost reach that of the PSM and is higher than the measurement accuracy of the FTM.

Funding

National Natural Science Foundation of China (61801057); Sichuan education department project (18ZB0124); Sichuan Science and Technology Program (2020YJ0431); College Students' innovation and entrepreneurship training programs (S201910621117, 202010621108).

Acknowledgments

The authors would like to acknowledge the support of the National Natural Science Foundation of China (NSFC) (61801057), Sichuan education department project (18ZB0124), Sichuan Science and Technology Program (2020YJ0431), College Students’ innovation and entrepreneurship training programs (S201910621117, 202010621108).

Disclosures

The authors declare no conflicts of interest.

References

1. Z. Lei, X. Liu, L. Chen, W. Lu, and S. Chang, “A novel surface recovery algorithm in white light interferometry,” Measurement 80, 1–11 (2016). [CrossRef]  

2. Q. Vo, F. Fang, X. Zhang, and H. Gao, “Surface recovery algorithm in white light interferometry based on combined white light phase shifting and fast fourier transform algorithms,” Appl. Opt. 56(29), 8174–8185 (2017). [CrossRef]  

3. P. Pavlicek and E. Mikeska, “White-light interferometer without mechanical scanning,” Opt. Lasers Eng. 124, 105800 (2020). [CrossRef]  

4. P. Zhu and K. Wang, “Single-shot two-dimensional surface measurement based on spectrally resolved white-light interferometry,” Appl. Opt. 51(21), 4971–4975 (2012). [CrossRef]  

5. P. Pavlicek and O. Hýbl, “White-light interferometry on rough surfaces–measurement uncertainty caused by noise,” Appl. Opt. 51(4), 465–473 (2012). [CrossRef]  

6. Z. Zhai, Z. Li, Y. Zhang, Z. Dong, X. Wang, and Q. Lv, “An accurate phase shift extraction algorithm for phase shifting interferometry,” Opt. Commun. 429, 144–151 (2018). [CrossRef]  

7. Q. Liu, Y. Wang, J. He, and F. Ji, “Modified three-step iterative algorithm for phase-shifting interferometry in the presence of vibration,” Appl. Opt. 54(18), 5833–5841 (2015). [CrossRef]  

8. Q. Liu, L. Li, H. Zhang, W. Huang, and X. Yue, “Simultaneous dual-wavelength phase-shifting interferometry for surface topography measurement,” Opt. Lasers Eng. 124, 105813 (2020). [CrossRef]  

9. Y. Li, Y. Zhang, Y. Yang, C. Wang, Y. Chen, and J. Bai, “Accurate phase retrieval algorithm based on linear correlation in self-calibration phase-shifting interferometry with blind phase shifts,” Opt. Commun. 466, 125612 (2020). [CrossRef]  

10. S. Chen and Y. Zhu, “Phase sensitivity evaluation and its application to phase shifting interferometry,” Methods 136, 50–59 (2018). [CrossRef]  

11. H. Muhamedsalih, F. Gao, and X. Jiang, “Comparison study of algorithms and accuracy in the wavelength scanning interferometry,” Appl. Opt. 51(36), 8854–8862 (2012). [CrossRef]  

12. X. Jiang, K. Wang, F. Gao, and H. Muhamedsalih, “Fast surface measurement using wavelength scanning interferometry with compensation of environmental noise,” Appl. Opt. 49(15), 2903–2909 (2010). [CrossRef]  

13. J. Tan, Y. Bai, B. Dong, and Z. He, “Phase noise reduction in wavelength scanning interferometry using a phase synthesis approach,” Opt. Commun. 475, 126295 (2020). [CrossRef]  

14. A. Davila, J. M. Huntley, C. Pallikarakis, P. D. Ruiz, and J. M. Coupland, “Wavelength scanning interferometry using a ti:sapphire laser with wide tuning range,” Opt. Lasers Eng. 50(8), 1089–1096 (2012). [CrossRef]  

15. A. Davila, “Wavelength scanning interferometry using multiple light sources,” Opt. Express 24(5), 5311–5322 (2016). [CrossRef]  

16. F. Helmli, R. Danzl, M. Prantl, and M. Grabner, "Ultra high speed 3D measurement with the focus variation method," pp. 617–622 (2014).

17. W. Kaplonek, K. Nadolny, and G. Krolczyk, “The use of focus-variation microscopy for the assessment of active surfaces of a new generation of coated abrasive tools,” Measurement Sci. Rev. 16(2), 42–53 (2016). [CrossRef]  

18. Y. Chen and L. Ju, “Method for fast quantification of pitting using 3d surface parameters generated with infinite focus microscope,” Corrosion 71(10), 1184–1196 (2015). [CrossRef]  

19. M. M. Mahat, A. H. M. Aris, U. S. Jais, M. F. Z. R. Yahya, and R. Ramli, "Infinite focus microscope (IFM): Microbiologically influenced corrosion (MIC) behavior on mild steel by Pseudomonas aeruginosa," in 2011 International Symposium on Humanities, Science and Engineering Research (2011), pp. 106–110.

20. C. M. St. Croix, S. H. Shand, and S. C. Watkins, “Confocal microscopy: comparisons, applications, and problems,” BioTechniques 39(6S), S2–S5 (2005). [CrossRef]  

21. R. Danzl, F. Helmli, and S. Scherer, “Focus variation–a robust technology for high resolution optical 3d surface metrology,” Strojniski Vestnik 2011(3), 245–256 (2011). [CrossRef]  

22. M. A. Browne, O. Akinyemi, and A. Boyde, “Confocal surface profiling utilizing chromatic aberration,” Scanning 14(3), 145–153 (1992). [CrossRef]  

23. H. J. Tiziani and H. Uhde, “Three-dimensional image sensing by chromatic confocal microscopy,” Appl. Opt. 33(10), 1838–1843 (1994). [CrossRef]  

24. M. Zhong, J. Cui, J. Hyun, L. Pan, P. Duan, and S. Zhang, “Uniaxial 3d phase-shifting profilometry using a dual-telecentric structured light system in micro-scale devices,” Meas. Sci. Technol. 31(8), 085003 (2020). [CrossRef]  
