
Quasi-pixelwise motion compensation for 4-step phase-shifting profilometry based on a phase error estimation

Open Access

Abstract

Phase-shifting profilometry (PSP) is widely used in 3D shape measurement due to its high accuracy. However, in dynamic scenarios, the motion of objects introduces phase-shifting errors and results in measurement errors. In this paper, a novel compensation method based on 4-step phase-shifting profilometry is proposed to reduce motion-induced errors when objects undergo uniform or uniformly accelerated motion. We utilize the periodic characteristic of fringe patterns to estimate the phase errors from only four phase-shifting patterns and realize pixel-wise error compensation. The method can also be applied to non-rigid deforming objects and helps restore high-quality texture. Both simulations and experiments demonstrate that the proposed method effectively improves measurement accuracy and reduces motion-induced surface ripples for a standard monocular structured-light system.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical non-contact 3D measurement has been widely used in industrial manufacturing, scientific research and education, cultural relic scanning, medicine and health, digital entertainment, and other fields [1]. With the development of low-cost, high-resolution digital projection systems based on LCD and DLP technology, structured-light techniques have become one of the mainstream approaches to 3D measurement [2]. Among them, fringe projection profilometry is one of the most popular methods due to its high accuracy and the high density of its point clouds. According to the phase retrieval method, it can mainly be divided into two categories: Fourier transform profilometry (FTP) [3] and phase-shifting profilometry (PSP) [4]. FTP requires only one sinusoidal fringe pattern to retrieve the phase map via the Fourier transform and band-pass filtering, and is therefore unaffected by object motion. However, its robustness and accuracy are limited because it is easily affected by ambient noise and the object’s texture [2,4]. In contrast, PSP avoids this problem by projecting multiple sinusoidal fringe patterns in the temporal domain rather than operating in the spatial domain as FTP does. Theoretically, PSP needs at least three sinusoidal fringes to calculate the phase value and is resistant to background light and reflectivity variations, leading to robust phase retrieval and high 3D-reconstruction accuracy. To address the problem of motion-induced errors in PSP systems, many methods have been proposed in the past few decades.

Some methods eliminate the motion-induced error by increasing the capturing and projecting speed of the hardware, e.g., by using binary defocusing techniques [5]. When the measurement speed is much faster than the motion, the object can be regarded as approximately static. Wang et al. [6] proposed a two-frequency binary phase-shifting technique to measure the 3D absolute shape of beating rabbit hearts at around 800 frames per second. Wu et al. [7] combined the phase-shifting algorithm with complementary Gray-code patterns to reach a frame rate of 357 fps. However, high-speed equipment leads to higher hardware costs.

Meanwhile, many motion-error compensation methods have been proposed. Lu et al. proposed tracking the motion of the object by placing markers around it [8] or by utilizing the scale-invariant feature transform (SIFT) [9], and then correcting the motion-induced error in the phase map; however, this is limited to rigid objects. Guo et al. [10] utilized the Lucas-Kanade optical flow method to estimate the object’s displacement and compensate the motion error, but it does not work for 3D non-uniform motion. Weise et al. [11] developed a closed-form expression for the motion error with a Taylor series under the assumption of local phase linearity and applied motion compensation to each pixel, which may not be effective when the object undergoes non-uniform motion. Feng et al. [12] addressed non-uniform motion artifacts by segmenting objects with the Sobel operator for edge detection, but this method cannot handle non-rigid objects.

Considering that the FTP method requires only one fringe pattern to retrieve the phase map and is thus unaffected by object motion, Qian et al. [13] proposed a Fourier-assisted PSP approach that estimates the phase error by differentiating the phases of two successive fringe images, realizing pixel-wise motion detection. However, such FTP-based methods are inherently limited by the quality of the obtained phase map, which is subject to noise and textural variations. Wang et al. [14] used the Hilbert transform for motion-error estimation, but it is hard to select the transform window length effectively, so the phase measurement accuracy is also limited.

Liu et al. [15] proposed calculating the phase error from eight images and three phase maps based on the 4-step phase-shifting technique. However, this method requires binocular phase unwrapping, which adds an extra camera. Guo et al. [16] did not estimate the actual phase error but instead reduced the motion error by averaging the phases from 4-step phase-shifting profilometry based on a π/2 phase shift between adjacent frames. This improves the 3D-measurement accuracy effectively, but the correct object texture is difficult to restore from the four captured fringe patterns because the phase deviation is not estimated directly.

In this paper, for a single-projector, single-camera measurement system, we propose a new error compensation method based on 4-step phase-shifting profilometry that estimates phase errors from only four sinusoidal fringe patterns. Because it performs pixel-wise error compensation, it effectively reduces motion-induced errors for both moving rigid objects and deforming objects, and for some non-uniform motions the errors can also be reduced to a certain extent. Our method can restore the object texture using the estimated phase errors. Moreover, the method involves no iteration and can therefore easily be parallelized.

This paper is organized as follows. Section 2 presents the principle of our proposed method. In Section 3, we conduct some simulation experiments to show the efficiency in reducing motion-induced errors. Section 4 gives real experimental results to validate our method. In Section 5, we conclude this paper.

2. Principle

2.1 Framework of the proposed system

The system framework is shown in Fig. 1. Our system estimates phase errors from four phase-shifting patterns based on the periodic characteristic of fringe patterns. Using the estimated phase errors, we can retrieve the phase map and restore the texture. After unwrapping the phase map, we can reconstruct the 3-D shape of the object with the calibration results. Unlike Liu’s method [15], which estimates the phase errors from the differences between three computed phase maps and requires a binocular system to unwrap the phase, our method estimates phase errors from only the four captured phase-shifting patterns, which makes it easier to combine with different temporal phase unwrapping methods in a standard monocular structured-light system. Guo’s method [16] reduces the motion errors by average phase compensation, but it only uses the spatial information of the object and thus cannot restore the object texture, which is important in some 3D vision applications. In contrast, our proposed method can use the estimated phase errors to obtain the correct texture.


Fig. 1. The proposed system framework for 3D reconstruction


2.2 Phase-shifting algorithm

The standard PSP technique employs a set of phase-shifting sinusoidal fringe patterns. After projecting the patterns sequentially onto the object surface, the intensity distribution of N fringe images captured by the camera can be described as

$${I_n}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )- {\delta_n}} ),$$
where $A({x,y} )$ and $B({x,y} )$ respectively represent the average intensity and the modulation amplitude, $\varphi ({x,y} )$ is the corresponding wrapped phase, and ${\delta _n}({n = 1,2,\ldots ,N} )$ is the phase shift of the nth frame. Since there are three unknowns in total, $A({x,y} )$, $B({x,y} )$ and $\varphi ({x,y} )$, at least three fringe patterns are required to solve the equations. For simplification, we can rewrite Eq. (1) as
$${I_n}({x,y} )= {a_0}({x,y} )+ {a_1}({x,y} )\sin ({{\delta_n}} )+ {a_2}({x,y} )\textrm{cos}({{\delta_n}} ).$$

Here ${a_0}({x,y} )$ is the object texture to be solved for. Using the least-squares method [17], the following equation can be obtained:

$$\left[ {\begin{array}{c} {{a_0}({x,y} )}\\ {{a_1}({x,y} )}\\ {{a_2}({x,y} )} \end{array}} \right] = {\left[ {\begin{array}{ccc} N&{\sum \cos {\delta_n}}&{\sum \sin {\delta_n}}\\ {\sum \cos {\delta_n}}&{\sum {{\cos }^2}{\delta_n}}&{\sum \cos {\delta_n}\sin {\delta_n}}\\ {\sum \sin {\delta_n}}&{\sum \cos {\delta_n}\sin {\delta_n}}&{\sum {{\sin }^2}{\delta_n}} \end{array}} \right]^{ - 1}}\left[ {\begin{array}{c} {\sum {I_n}}\\ {\sum {I_n}\cos {\delta_n}}\\ {\sum {I_n}\sin {\delta_n}} \end{array}} \right].$$

Simultaneously, the wrapped phase map can be computed as

$$\varphi ({x,y} )= {\tan ^{ - 1}}\left[ {\frac{{{a_1}({x,y} )}}{{{a_2}({x,y} )}}} \right].$$

Then, we need to determine the fringe order $k({x,y} )$ to obtain the continuous phase map $\mathrm{\Phi }({x,y} )$ from the wrapped phase $\varphi ({x,y} )$ with an appropriate phase unwrapping algorithm:

$$\mathrm{\Phi }({x,y} )= \varphi ({x,y} )+ 2\pi k({x,y} ).$$
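As a concrete illustration of Eqs. (2)–(4), the following NumPy sketch solves the least-squares system for a set of fringe images and shifts. It is one possible implementation; the function name and array conventions are illustrative, not prescribed by the method.

```python
import numpy as np

def retrieve_phase(images, deltas):
    """Least-squares phase retrieval, Eqs. (2)-(4).

    images: list of N (H, W) fringe images I_n
    deltas: length-N sequence of phase shifts delta_n in radians
    Returns (a0, phi): the texture term a0 and the wrapped phase.
    """
    I = np.stack(images).astype(float)          # (N, H, W)
    d = np.asarray(deltas, dtype=float)
    s, c = np.sin(d), np.cos(d)
    # 3x3 normal matrix of Eq. (3); it depends only on the shifts.
    M = np.array([[len(d),  c.sum(),      s.sum()],
                  [c.sum(), (c**2).sum(), (c*s).sum()],
                  [s.sum(), (c*s).sum(),  (s**2).sum()]])
    # Per-pixel right-hand side of Eq. (3).
    b = np.stack([I.sum(axis=0),
                  np.tensordot(c, I, axes=1),
                  np.tensordot(s, I, axes=1)])  # (3, H, W)
    a0, a1, a2 = np.tensordot(np.linalg.inv(M), b, axes=1)
    phi = np.arctan2(a1, a2)                    # Eq. (4), quadrant-aware
    return a0, phi
```

For the standard 4-step case, `deltas = [0, np.pi/2, np.pi, 3*np.pi/2]` reduces the result to the arctangent formula of Eq. (10) below.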

2.3 Motion-induced phase errors in phase-shifting profilometry

Every point in the camera’s image plane corresponds to a different point on the projector’s image plane when the object moves to a different location in 3D space. Thus, there is an additional unknown phase-shifting error between successive captured images when the measured object is moving. Since the position of the object changes during the measurement, we take the first captured phase-shifting image as a reference for representing the phase error in each image $I_n^{\prime}\; ({n = 1, \ldots ,N} )$. The intensity of the captured fringe images subject to object motion can be described by the following equations:

$$I_1^{\prime}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )} )$$
$$I_2^{\prime}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )- \pi /2 + {\varepsilon_1}({x,y} )} )$$
$$I_3^{\prime}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )- \pi + {\varepsilon_1}({x,y} )+ {\varepsilon_2}({x,y} )} )$$
$$I_4^{\prime}({x,y} )= A({x,y} )+ B({x,y} )\cos ({\varphi ({x,y} )- 3\pi /2 + {\varepsilon_1}({x,y} )+ {\varepsilon_2}({x,y} )+ {\varepsilon_3}({x,y} )} )$$
where ${\varepsilon _n}({n = 1,2,3} )$ is the motion-induced phase error and $\varphi ({x,y} )$ is the true phase of the object surface.

The phase map $\varphi ^{\prime}({x,y} )$ obtained from the standard 4-step phase-shifting method,

$$\varphi ^{\prime}({x,y} )= {\tan ^{ - 1}}\left[ {\frac{{I_2^{\prime}({x,y} )- I_4^{\prime}({x,y} )}}{{I_1^{\prime}({x,y} )- I_3^{\prime}({x,y} )}}} \right]$$
will contain the motion-induced phase error $\Delta \varphi ({x,y} )$. For brevity, we omit the pixel coordinates:
$$\Delta \varphi = \varphi ^{\prime} - \varphi = {\tan ^{ - 1}}\left[ {\frac{{\sin ({\varphi + {\varepsilon_1}} )+ \sin ({\varphi + {\varepsilon_1} + {\varepsilon_2} + {\varepsilon_3}} )}}{{\cos \varphi + \cos ({\varphi + {\varepsilon_1} + {\varepsilon_2}} )}}} \right] - {\tan ^{ - 1}}[{\tan \varphi } ].$$

With the trigonometry, $\Delta \varphi $ can then be expressed as

$$\Delta \varphi = {\tan ^{ - 1}}\left[ {\frac{{{c_1}\textrm{cos}2\varphi + {c_2}\textrm{sin}2\varphi + {c_3}}}{{{c_4}\textrm{cos}2\varphi + {c_5}\textrm{sin}2\varphi + {c_6}}}} \right]$$
where ${c_i}({i = 1,2,\ldots,6} )$ are constants determined by the phase errors ${\varepsilon _n}$.

Equation (12) shows that the phase error $\Delta \varphi $ depends on $2\varphi $. In other words, the frequency of the residual error is twice that of the projected fringes, which produces the surface ripples on the reconstructed object.
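This double-frequency behavior is easy to check numerically. The sketch below uses illustrative values (A, B and the ${\varepsilon _n}$ are assumptions) to generate the four corrupted images of Eqs. (6)–(9) over one fringe period, retrieve the phase with Eq. (10), and verify that the residual spectrum peaks at the second harmonic.

```python
import numpy as np

# Illustrative check of the 2*phi dependence in Eq. (12); all values assumed.
phi = np.linspace(-np.pi, np.pi, 2048, endpoint=False)  # one fringe period
A, B = 0.5, 0.4
e1, e2, e3 = 0.10, 0.15, 0.20                           # phase-shift errors (rad)
I1 = A + B*np.cos(phi)
I2 = A + B*np.cos(phi - np.pi/2 + e1)
I3 = A + B*np.cos(phi - np.pi + e1 + e2)
I4 = A + B*np.cos(phi - 3*np.pi/2 + e1 + e2 + e3)
# Eq. (10) minus the true phase, wrapped to (-pi, pi]
dphi = np.angle(np.exp(1j*(np.arctan2(I2 - I4, I1 - I3) - phi)))
spectrum = np.abs(np.fft.rfft(dphi - dphi.mean()))
print("dominant harmonic per fringe period:", spectrum.argmax())  # expect 2
```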

2.4 Proposed motion-induced phase error estimation method

For motion at constant speed or constant acceleration, as in thoracic respiratory movement, production-line quality inspection and similar scenarios, the per-frame displacement increments, and hence the phase-shift errors, form an arithmetic progression, so the three errors satisfy

$${\mathrm{\varepsilon }_2}({x,y} )= ({{\mathrm{\varepsilon }_1}({x,y} )+ {\mathrm{\varepsilon }_3}({x,y} )} )/2.$$

To solve for the phase-shifting errors ${\varepsilon _n}({x,y} )$, we transform the image differences as follows:

$$|{I_1^{\prime} - I_2^{\prime}} |= \left|{2Bsin\left( {\phi - \frac{\pi }{4} + \frac{{{\varepsilon_1}}}{2}} \right)} \right|\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_1}}}{2}} \right) = {t_1}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_1}}}{2}} \right)$$
$$|{I_3^{\prime} - I_4^{\prime}} |= \left|{2Bsin\left( {\phi - \frac{\pi }{4} + {\varepsilon_1} + {\varepsilon_2} + \frac{{{\varepsilon_3}}}{2}} \right)} \right|\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_3}}}{2}} \right) = {t_2}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_3}}}{2}} \right)$$
$$|{I_1^{\prime} - I_4^{\prime}} |= \left|{2Bsin\left( {\phi - \frac{{3\pi }}{4} + \frac{{{\varepsilon_1}}}{2} + \frac{{{\varepsilon_2}}}{2} + \frac{{{\varepsilon_3}}}{2}} \right)} \right|\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} + \frac{{3{\varepsilon_2}}}{2}} \right) = {t_3}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} + \frac{{3{\varepsilon_2}}}{2}} \right)$$
$$|{I_2^{\prime} - I_3^{\prime}} |= \left|{2Bsin\left( {\phi - \frac{{3\pi }}{4} + {\varepsilon_1} + \frac{{{\varepsilon_2}}}{2}} \right)} \right|\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_2}}}{2}} \right) = {t_4}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_2}}}{2}} \right).$$

For simplicity of derivation, we use ${t_1}$–${t_4}$ to denote the absolute-value terms. Because of the phase-shift errors, we cannot directly assume that ${t_1} \approx {t_2}$ and ${t_3} \approx {t_4}$. Instead, we average over the 1-D neighborhood pixels. The neighborhood window size is set to half the fringe period, $W = T/2$, for the following reason: the wrapped phase ranges from ${\phi _0} - \mathrm{\pi }/2$ to ${\phi _0} + \mathrm{\pi }/2$ within the neighborhood of the pixel, which can be formulated as

$${\left\langle {B(x,y)|{\sin ({\phi ({x,y} )} )} |} \right\rangle _D} = \frac{1}{\pi }\mathop \smallint \nolimits_{{\phi _0} - \frac{\mathrm{\pi }}{2}}^{{\phi _0} + \frac{\mathrm{\pi }}{2}} B(x,y)|{\sin ({\phi ({x,y} )} )} |\textrm{d}\phi.$$

Here ${\left\langle \cdot \right\rangle _D}$ denotes an averaging operator, where the average is computed over all valid pixels within a small 1-D region D around pixel $({x,y} )$.

In fact, the intensity modulation B is proportional to the surface reflectivity. In many practical applications, it is reasonable to assume that the surface reflectivity is continuous within a small neighborhood under normal lighting conditions. Therefore, according to the first mean value theorem for integrals, we obtain Eq. (19). For brevity, we omit the pixel coordinate $({x,y} )$.

$${\left\langle {B|{\sin (\phi )} |} \right\rangle _D} = \frac{1}{\pi }B(\mathrm{\varepsilon } )\mathop \smallint \nolimits_{{\phi _0} - \frac{\mathrm{\pi }}{2}}^{{\phi _0} + \frac{\mathrm{\pi }}{2}} |{\sin (\phi )} |\textrm{d}\phi = \frac{2}{\pi }B(\mathrm{\varepsilon } ).$$
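The constant $2/\pi $ follows from the $\mathrm{\pi }$-periodicity of $|{\sin \phi } |$: its integral over any window of width $\mathrm{\pi }$ equals the integral over $[{0,\mathrm{\pi }} ]$,

$$\mathop \smallint \nolimits_{{\phi _0} - \frac{\mathrm{\pi }}{2}}^{{\phi _0} + \frac{\mathrm{\pi }}{2}} |{\sin \phi } |\,\textrm{d}\phi = \mathop \smallint \nolimits_0^{\mathrm{\pi }} \sin \phi \,\textrm{d}\phi = [{ - \cos \phi } ]_0^{\mathrm{\pi }} = 2,$$

so the window average is $2/\pi $ regardless of the center phase ${\phi _0}$.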

Similarly, in the discrete domain of camera pixels, the following relationship approximately holds:

$${\left\langle {B|{\sin (\phi )} |} \right\rangle _D} = \frac{1}{W}B(\mathrm{\varepsilon } )\mathop \sum \limits_{n ={-} W/2 + 1}^{n = W/2} |{\sin ({\phi (n )} )} |= \frac{2}{\pi }B(\mathrm{\varepsilon } ).$$

Considering that the phases of ${t_1}$ and ${t_2}$ are similar, as are those of ${t_3}$ and ${t_4}$, and that there is a nearly $\mathrm{\pi }/2$ phase shift between ${t_1}$/${t_2}$ and ${t_3}$/${t_4}$, the following approximations are appropriate:

$${\left\langle {{t_1}} \right\rangle _\textrm{D}} \approx {\left\langle {{t_2}} \right\rangle _\textrm{D}},{\left\langle {{t_3}} \right\rangle _\textrm{D}} \approx {\left\langle {{t_4}} \right\rangle _\textrm{D}}.$$

Then we rewrite Eqs. (14)–(17) as

$$\left\langle {|{I_1^{\prime} - I_2^{\prime}} |} \right\rangle = {d_1}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_1}}}{2}} \right) = i$$
$$\left\langle {|{I_3^{\prime} - I_4^{\prime}} |} \right\rangle = {d_1}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_3}}}{2}} \right) = j$$
$$\left\langle {|{I_1^{\prime} - I_4^{\prime}} |} \right\rangle = {d_2}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} + \frac{{3{\varepsilon_2}}}{2}} \right) = k$$
$$\left\langle {|{I_2^{\prime} - I_3^{\prime}} |} \right\rangle = {d_2}\textrm{sin}\left( {\frac{\mathrm{\pi }}{4} - \frac{{{\varepsilon_2}}}{2}} \right) = m.$$

Using Eqs. (24), (25) and a Taylor-series approximation, we can derive an equation for the phase-shift error ${\mathrm{\varepsilon }_2}$. For simplicity, we substitute $\theta = {\varepsilon _2}/2$:

$$\frac{k}{{{d_2}}} = \frac{{\sqrt 2 }}{2}({\sin 3\theta + \cos 3\theta } )= \frac{{\sqrt 2 }}{2}({3\sin \theta - 3\cos \theta + 4 - 6{{\sin }^2}\theta + O({{{\sin }^3}\theta } )} )$$
$$\frac{m}{{{d_2}}} = \frac{{\sqrt 2 }}{2}({\cos \theta - \sin \theta } ).$$

More details are given in the appendix. Given that $\theta $ is small, it is reasonable to omit the higher-order term of Eq. (26). The problem then reduces to a univariate quadratic equation in $\sin \theta $. Denoting ${s_1} = k/m$, we obtain

$$({2{s_1}^2 + 12{s_1} - 30} ){\sin ^2}\theta + ({24 + 8{s_1}} )\sin \theta + ({7 - 6{s_1} - {s_1}^2} )= 0.$$

Since the value of ${s_1}$ is near 1, the correct solution of Eq. (28) must be the root of smaller magnitude (the other root exceeds 1 and is not a valid sine). Therefore, no comparison between the roots is needed, which reduces the computational complexity. Then we use Eqs. (22), (23) to solve for the phase-shift errors ${\varepsilon _1}$ and ${\varepsilon _3}$. Denoting ${s_2} = i/j$, we can express ${\varepsilon _1}$ as

$${\varepsilon _1} = sign(p )\times 2\arcsin \left( {\sqrt {\frac{1}{{1 + {p^2}}}} } \right)$$
where $sign(p )$ is the sign of p,
$$p = \frac{{1 + {s_2}({\cos 2\theta + \sin 2\theta } )}}{{1 - {s_2}({\cos 2\theta - \sin 2\theta } )}}\; .$$

At last, we obtain ${\varepsilon _3}$ from the relation ${\varepsilon _3} = 4\theta - {\varepsilon _1}$, since ${\varepsilon _1} + {\varepsilon _3} = 2{\varepsilon _2} = 4\theta $.
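Putting Eqs. (21)–(30) together, the whole estimator amounts to a few lines of array code. The following NumPy/SciPy sketch is a minimal illustration under stated assumptions (fringes varying along image columns, a known fringe period, no guards against division by zero); the names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def estimate_phase_shift_errors(I1, I2, I3, I4, period):
    """Pixel-wise estimate of (eps1, eps2, eps3) following Eqs. (21)-(30).

    Assumes fringes vary along axis 1 (columns) and constant-acceleration
    motion, i.e. eps2 = (eps1 + eps3)/2 as in Eq. (13)."""
    W = max(int(period) // 2, 1)               # 1-D window of half a period
    avg = lambda img: uniform_filter1d(np.abs(np.asarray(img, dtype=float)),
                                       size=W, axis=1, mode='nearest')
    i, j = avg(I1 - I2), avg(I3 - I4)          # Eqs. (22), (23)
    k, m = avg(I1 - I4), avg(I2 - I3)          # Eqs. (24), (25)
    # Quadratic in sin(theta), Eq. (28); keep the smaller-magnitude root
    # (the leading coefficient a is negative for s1 near 1).
    s1 = k / m
    a = 2*s1**2 + 12*s1 - 30
    b = 24 + 8*s1
    c = 7 - 6*s1 - s1**2
    disc = np.sqrt(np.maximum(b**2 - 4*a*c, 0.0))
    theta = np.arcsin(np.clip((-b + disc) / (2*a), -1.0, 1.0))
    # Eqs. (29)-(30) for eps1, then eps3 from eps1 + eps3 = 4*theta.
    s2 = i / j
    p = (1 + s2*(np.cos(2*theta) + np.sin(2*theta))) / \
        (1 - s2*(np.cos(2*theta) - np.sin(2*theta)))
    eps1 = np.sign(p) * 2*np.arcsin(np.sqrt(1.0/(1.0 + p**2)))
    return eps1, 2*theta, 4*theta - eps1
```

With the errors in hand, the true shifts of the four images are $0$, $\pi /2 - {\varepsilon _1}$, $\pi - {\varepsilon _1} - {\varepsilon _2}$ and $3\pi /2 - {\varepsilon _1} - {\varepsilon _2} - {\varepsilon _3}$, and the phase and texture follow from the general least-squares solution of Eq. (3).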

2.5 Phase unwrapping

Only a wrapped phase, ranging from $ - \mathrm{\pi }$ to $\mathrm{\pi }$, can be obtained from the 4-step phase-shifting fringe patterns. Therefore, phase unwrapping is required to resolve the phase ambiguity and obtain an absolute phase distribution. The Gray-code method, which eliminates phase ambiguity by projecting a series of binary Gray-code patterns, has been widely used in 3D shape measurement due to its robustness and noise immunity [18]. However, in dynamic scenarios, motion results in jump errors at the boundaries of the Gray-code words. Fortunately, these are easily corrected from neighboring pixels using monotonicity detection [19].

2.6 System calibration

The key to obtaining the 3D surface information of the object from the unwrapped phase map is system calibration. Over the past few years, various calibration methods for typical fringe structured-light systems that consist of a single camera and a single projector have been proposed. The triangular stereo model is one of the most widely used for calibration due to its large measurement range and high accuracy. This method requires the calibration of both the camera and projector using the pinhole model such that their perspective matrices are given by:

$${M_c} = {K_c}[{{R_c}\textrm{|}{T_c}} ]= \left[ {\begin{array}{cccc} {m_{11}^c}&{m_{12}^c}&{m_{13}^c}&{m_{14}^c}\\ {m_{21}^c}&{m_{22}^c}&{m_{23}^c}&{m_{24}^c}\\ {m_{31}^c}&{m_{32}^c}&{m_{33}^c}&{m_{34}^c} \end{array}} \right],{M_p} = {K_p}[{{R_p}\textrm{|}{T_p}} ]= \left[ {\begin{array}{cccc} {m_{11}^p}&{m_{12}^p}&{m_{13}^p}&{m_{14}^p}\\ {m_{21}^p}&{m_{22}^p}&{m_{23}^p}&{m_{24}^p}\\ {m_{31}^p}&{m_{32}^p}&{m_{33}^p}&{m_{34}^p} \end{array}} \right]$$
where ${K_c}$ and ${K_p}$ are the intrinsic matrices, ${R_c}$ and ${R_p}$ the rotation matrices, and ${T_c}$ and ${T_p}$ the translation vectors of the camera and the projector, respectively.

The camera calibration technique proposed by Zhang [20] is used to calibrate the camera in our system. The projector can be calibrated in the same way as the camera by projecting a set of horizontal and vertical fringe patterns that let the projector locate the feature points of the calibration board [21].

Then, the 3-D world coordinates can be calculated by

$$\left[ {\begin{array}{c} {{x_w}}\\ {{y_w}}\\ {{z_w}} \end{array}} \right] = {\left[ {\begin{array}{ccc} {m_{11}^c - m_{31}^c{u_c}}&{m_{12}^c - m_{32}^c{u_c}}&{m_{13}^c - m_{33}^c{u_c}}\\ {m_{21}^c - m_{31}^c{v_c}}&{m_{22}^c - m_{32}^c{v_c}}&{m_{23}^c - m_{33}^c{v_c}}\\ {m_{11}^p - m_{31}^p{u_p}}&{m_{12}^p - m_{32}^p{u_p}}&{m_{13}^p - m_{33}^p{u_p}} \end{array}} \right]^{ - 1}}\left[ {\begin{array}{c} {m_{34}^c{u_c} - m_{14}^c}\\ {m_{34}^c{v_c} - m_{24}^c}\\ {m_{34}^p{u_p} - m_{14}^p} \end{array}} \right]$$
where $({{u_c},{v_c}} )$ are the camera pixel coordinates and ${u_p}$ is the horizontal projector coordinate, obtained from the unwrapped phase.
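The per-pixel reconstruction of Eq. (32) is just a 3×3 linear solve. The sketch below is an illustrative implementation; `Mc` and `Mp` are the 3×4 matrices of Eq. (31), and `up` is assumed to be derived from the unwrapped phase (e.g. ${u_p} = \Phi \cdot T/2\pi $ for a fringe period of T projector pixels).

```python
import numpy as np

def triangulate(Mc, Mp, uc, vc, up):
    """World point from camera pixel (uc, vc) and matched projector
    column up, solving the 3x3 system of Eq. (32)."""
    A = np.array([
        [Mc[0,0]-Mc[2,0]*uc, Mc[0,1]-Mc[2,1]*uc, Mc[0,2]-Mc[2,2]*uc],
        [Mc[1,0]-Mc[2,0]*vc, Mc[1,1]-Mc[2,1]*vc, Mc[1,2]-Mc[2,2]*vc],
        [Mp[0,0]-Mp[2,0]*up, Mp[0,1]-Mp[2,1]*up, Mp[0,2]-Mp[2,2]*up],
    ])
    rhs = np.array([Mc[2,3]*uc - Mc[0,3],
                    Mc[2,3]*vc - Mc[1,3],
                    Mp[2,3]*up - Mp[0,3]])
    return np.linalg.solve(A, rhs)             # (xw, yw, zw)
```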

3. Simulation

In this section, we simulated two kinds of motion relative to the system: uniform motion and non-uniform (uniformly accelerated) motion. Here, uniform and non-uniform motion mean uniform and non-uniform phase change, respectively; in general, planar motion readily ensures that the phase change is consistent with the spatial motion [10]. In the simulation, the period of the phase-shifting patterns is set to 20 pixels.
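To illustrate how such a simulation can be set up, the sketch below generates the four motion-corrupted patterns of Eqs. (6)–(9) for a flat scene under uniformly accelerated motion and feeds them to the estimator sketched in Section 2.4. All scene parameters are assumed, and discretization makes the recovered values approximate.

```python
import numpy as np
# uses estimate_phase_shift_errors from the sketch in Section 2.4

T = 20                                       # fringe period in pixels
phi = np.tile(2*np.pi*np.arange(512)/T, (64, 1))  # flat scene, fringes along columns
A, B = 0.5, 0.4                              # assumed background and modulation
eps = 0.2                                    # first-interval phase-shift error (rad)
e_true = np.array([eps, 4*eps/3, 5*eps/3])   # uniformly accelerated motion
cum = np.concatenate(([0.0], np.cumsum(e_true)))
nominal = np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])
imgs = [A + B*np.cos(phi - nominal[n] + cum[n]) for n in range(4)]

e1, e2, e3 = estimate_phase_shift_errors(*imgs, period=T)
print(np.median(e1), np.median(e2), np.median(e3))  # roughly 0.20, 0.27, 0.33
```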

3.1 Rigid object under different motion

First, we tested the proposed method for phase-shift errors ranging from -0.3 rad to 0.3 rad under uniform motion. We also implemented Weise’s method [11] and Guo’s method [16] to compare their compensation effects with our proposed method. Figure 2 shows the results. For the original method, the RMS error between the obtained phase and the real phase increases with the phase-shift error $\mathrm{\varepsilon }$. All three compensation methods effectively suppress the effect of uniform motion. More specifically, Guo’s method compensates well when the object moves slowly, but its performance degrades noticeably as the moving speed increases. In contrast, our proposed method and Weise’s method perform better when the moving speed is relatively large. In particular, our method is superior to the others: the RMS phase error is reduced to 0.0002 rad when the phase-shift error is relatively small, an order of magnitude below the roughly 0.002 rad of Guo’s and Weise’s methods.


Fig. 2. The phase RMS error under different kinds of simulated motion. (a) Uniform motion; (b) non-uniform motion


Figure 2(b) shows the compensation performance of the different methods under non-uniform motion. The acceleration is set to one-third of the moving speed, so the phase-shift errors between adjacent frames of the four phase-shifting patterns are $\mathrm{\varepsilon }$, $4\mathrm{\varepsilon }/3$ and $5\mathrm{\varepsilon }/3$, respectively. Our approach is clearly less sensitive to non-uniform motion and outperforms the other methods. The phase error of Weise’s method increases under non-uniform motion because it assumes uniform motion; Guo’s method also shows some increase in phase error in this case.

3.2 Non-rigid object under different motion

Furthermore, to verify the algorithm on non-rigid objects, we designed a non-rigid object, a deforming elliptical cylinder, shown in Fig. 3(a). The cross-section of the generated object is a standard ellipse with a major radius ${R_a} = 250\; \textrm{mm}$ and a minor radius ${R_b} = 250\; \textrm{mm}$. We take the motion of this non-rigid object to be a change of its minor axis; Fig. 3(b) shows the change of the cross-section at different times. Because the true position of the reconstructed object cannot be known for Guo’s method, its accuracy cannot be computed, so we do not include Guo’s method in this section. We tested our method on the simulated non-rigid object. Figure 4(a) shows the results of the original 4-step phase-shifting method and of our proposed method. To show the compensation effect more clearly, we enlarge the region in the red box of Fig. 4(a), as shown in Fig. 4(b). The severe motion ripples are successfully removed after compensation with our proposed method.


Fig. 3. (a) The simulated object: a deforming elliptical cylinder; (b) the cross-section of the generated object and the motion under different times.



Fig. 4. (a) The reconstruction of the non-rigid object in motion using standard phase-shifting method and our proposed method; (b) the part in the red box of (a)


Similarly, we measured the non-rigid object under uniform and non-uniform motion. We take a uniform variation of the minor-axis length as the uniform motion of the simulated object, and a non-uniform variation, with the acceleration set to half the speed, as the non-uniform motion.

Table 1 and Table 2 show the results of the standard 4-step phase-shifting method and of our method. The mean absolute error (MAE) and root-mean-square error (RMSE) decline significantly after applying our proposed compensation compared with the standard phase-shifting method. This further confirms that our method is effective for non-rigid moving objects and indicates that it realizes pixel-wise error compensation.


Table 1. The measurement results of the simulated non-rigid object for standard PSP and our proposed method under different uniform motion


Table 2. The measurement results of the simulated non-rigid object for standard PSP and our proposed method under different non-uniform motion

4. Experiments

To evaluate the performance of our proposed method, we built a structured-light system composed of a DLP projector (Texas Instruments DLP LightCrafter 4500) and a CCD industrial camera (FLIR FL3-U3-13E4C-C), both of which have configurable I/O ports for synchronous triggering. The camera has a resolution of $1280 \times 1024$ and is fitted with a lens of 8 mm focal length. The DLP projector has a $912 \times 1140$ native resolution, with a pattern projection rate of up to 4225 Hz for 1-bit patterns or 120 Hz for 8-bit patterns. The projector is synchronized by a trigger signal from the camera, and the capturing rate can be set up to the maximum frame rate of the camera (60 fps). The working distance of the system from the object is approximately 700 mm to 1000 mm.

The fringe pattern period is set to 20 pixels, and six additional Gray-code patterns are projected for phase unwrapping. The object motion in the following comparisons is expressed in mm/step or mm/s.

4.1 Accuracy evaluation of the moving step-shaped surface

To quantitatively evaluate the performance of the new phase error compensation method, a designed step-shaped workpiece was measured, as shown in Fig. 5(a). The step height of the workpiece is designed to be $30({ \pm 0.05} )\; \textrm{mm}$. The 3D point cloud was reconstructed while the workpiece moved towards the system at approximately 1 mm/step (60 mm/s), using the standard phase-shifting method, Guo’s method and our proposed method, as shown in Fig. 5. The surface reconstructed by the new error compensation method shows fewer motion ripples than the standard PSP method. Figure 5(b) shows the z coordinates of the reconstructed points on the 450th row of the captured image, which further indicates that the motion artifact is largely eliminated after applying the error compensation method.


Fig. 5. (a) The designed step-shaped workpiece (b) the z coordinate of reconstructed points located on the 450th row of the captured image


Then, the flatness of the two measured planes of the workpiece and the height between them are evaluated. Table 3 shows the mean absolute errors (MAE) and root-mean-square errors (RMSE) of the reconstructed step-shaped surface in Fig. 6 for the standard phase-shifting method and our proposed method. Without compensation, the mean errors of the two planes are 0.5874 mm and 0.8158 mm, the RMS errors are 0.3067 mm and 0.4073 mm, and the estimated height of the workpiece is 29.1383 mm. With our proposed error compensation, the mean errors are reduced to 0.1990 mm and 0.2630 mm, similar to Guo’s method; the RMS errors are reduced to 0.1421 mm and 0.1868 mm, slightly better than Guo’s method; and the measured height is 29.7965 mm, closer to the actual height.


Fig. 6. The measurement results of step-shaped workpiece using different methods: (a) standard PSP method (b) Guo’s method (c) our proposed method



Table 3. The reconstructed step-shaped surface for the standard PSP method, Guo’s method and our proposed method

4.2 Accuracy evaluation of the moving spherical surface

Figure 7 shows the point clouds of a moving hemisphere with a diameter of 200 mm obtained by the standard PSP method and our proposed method. The motion ripples are distinctly reduced by the proposed method, as shown in Fig. 7(c). We fit a sphere to the point clouds to obtain quantitative evaluation indicators. For the standard PSP method, the mean absolute error of the sphere radius is 0.4134 mm and the RMS error is 0.4303 mm. After applying the new error compensation method, the mean error is reduced to 0.1756 mm and the RMS error to 0.1285 mm, slightly better than Guo’s method. The dynamic measurement of the spherical surface demonstrates the effectiveness of our proposed method in reducing motion-induced errors.
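The sphere fit used for this kind of evaluation can be implemented as an algebraic least-squares fit; the sketch below is one common approach (an illustrative choice, not necessarily the exact procedure used here), with the radius residuals yielding the MAE and RMSE.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: |p - c|^2 = r^2, linearized as
    2 p.c + (r^2 - |c|^2) = |p|^2. points: (N, 3) array of xyz."""
    A = np.c_[2*points, np.ones(len(points))]
    f = (points**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    residuals = np.linalg.norm(points - center, axis=1) - radius
    return center, radius, residuals

# e.g.: _, r, res = fit_sphere(point_cloud)
#       mae, rmse = np.abs(res).mean(), np.sqrt((res**2).mean())
```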


Fig. 7. The measurement results of the moving sphere surface: (a) the standard PSP method; (b) Guo’s method; (c) our proposed method


4.3 Qualitative evaluation: the moving plaster and deflating balloon

To further validate the robustness of the proposed method, we conducted additional experiments by measuring a moving complex object, a plaster statue, as shown in Fig. 8(a). The reconstructed point clouds obtained from the standard PSP method and our proposed method are shown in Fig. 8(b) and Fig. 8(c), respectively. Visually, most motion ripples are no longer apparent after applying our proposed method, and the 3D results obtained from the new error compensation method outperform those of standard 4-step phase-shifting profilometry.


Fig. 8. The measurement results of the moving plaster: (a) the white plaster; (b) the 3D point cloud using the standard PSP method; (c) the 3D point cloud using our proposed method


Lastly, a further measurement was performed on a non-rigid object, a deflating balloon, as shown in Fig. 9(a). Figure 9(b) displays the reconstructed 3D point clouds from the original fringe patterns, and Fig. 9(c) shows the results from our proposed method. Clearly, there are fewer motion ripples after the motion-induced error compensation than with the standard PSP method, which indicates that our proposed method is also effective for measuring non-rigid objects and deforming surfaces.


Fig. 9. The measurement results of the deflating balloon: (a) the balloon; (b) the 3D point cloud using the standard PSP method; (c) the 3D point cloud using our proposed method


4.4 Object texture restoration

Although not essential for optical metrology, high-quality texture aligned with the 3-D geometry is highly important for many applications such as face recognition, computer vision, and computer graphics. Theoretically, Guo’s method cannot restore the texture from these phase-shifting patterns because the correct phase shifts cannot be obtained. Figure 10 shows the textures of different moving objects obtained with our proposed method and with the original method. Our proposed method clearly restores the object texture correctly.
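Texture recovery follows directly from the estimated errors: the true shifts are ${\delta _1} = 0$, ${\delta _2} = \pi /2 - {\varepsilon _1}$, ${\delta _3} = \pi - {\varepsilon _1} - {\varepsilon _2}$, ${\delta _4} = 3\pi /2 - {\varepsilon _1} - {\varepsilon _2} - {\varepsilon _3}$, and ${a_0}$ is obtained from the least-squares system of Eq. (3) assembled pixel-wise. The sketch below is one possible vectorized implementation; names and shapes are illustrative.

```python
import numpy as np

def restore_texture(imgs, eps1, eps2, eps3):
    """Recover the texture a0 of Eq. (2): rebuild the true per-pixel shifts
    delta_n from the estimated errors and solve Eq. (3) at every pixel."""
    I = np.stack(imgs).astype(float)                 # (4, H, W)
    zero = np.zeros_like(eps1)
    delta = np.stack([zero,
                      np.pi/2 - eps1,
                      np.pi - eps1 - eps2,
                      3*np.pi/2 - eps1 - eps2 - eps3])  # (4, H, W)
    s, c = np.sin(delta), np.cos(delta)
    # Normal equations of Eq. (3), assembled per pixel.
    M = np.empty(I.shape[1:] + (3, 3))
    M[..., 0, 0] = 4.0
    M[..., 0, 1] = M[..., 1, 0] = c.sum(0)
    M[..., 0, 2] = M[..., 2, 0] = s.sum(0)
    M[..., 1, 1] = (c**2).sum(0)
    M[..., 1, 2] = M[..., 2, 1] = (c*s).sum(0)
    M[..., 2, 2] = (s**2).sum(0)
    b = np.stack([I.sum(0), (I*c).sum(0), (I*s).sum(0)], axis=-1)
    a = np.linalg.solve(M, b[..., None])[..., 0]     # (H, W, 3)
    return a[..., 0]                                 # a0: the texture
```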


Fig. 10. The obtained texture of different moving objects: (a)(c)(e) using original method; (b)(d)(f) using our proposed method


5. Conclusion

In this paper, we propose a pixel-wise error compensation method for 4-step phase-shifting profilometry to measure objects in dynamic scenarios. Compared with the standard PSP method, our compensation method improves measurement accuracy and drastically reduces the ripples caused by object motion. Simulations show that the proposed method is highly effective in reducing errors when the object undergoes uniform motion and some kinds of non-uniform motion. The experimental results also show that the measurement accuracy is significantly improved for real moving objects. Another major advantage of our method is that it can restore high-quality object texture, which is useful in many applications. Moreover, the proposed compensation algorithm involves essentially no iteration, and the phase-shift error can be calculated independently at each pixel, making it well suited to parallel computing for accelerating 3D reconstruction. Notably, our method also performs well on non-rigid deforming objects.

Appendix

Equation (26) is derived as follows:

$$\frac{k}{{{d_2}}} = \frac{{\sqrt 2 }}{2}({\sin 3\theta + \cos 3\theta } )= \frac{{\sqrt 2 }}{2}({3\sin \theta - 3\cos \theta + 4{{\cos }^3}\theta - 4{{\sin }^3}\theta } ).$$

We first substitute ${\cos ^3}\theta = {(1 - {\sin ^2}\theta )^{3/2}} = {(1 - x)^{3/2}}$ with $x = {\sin ^2}\theta $, and expand in a Taylor series at $x = 0$:

$${(1 - x)^{3/2}} = 1 - \frac{3}{2}x + \frac{3}{8}{x^2} + O({x^3}) = 1 - \frac{3}{2}{\sin ^2}\theta + O({\sin ^3}\theta ).$$

Equation (28) is derived as follows:

$${s_1} = \frac{{\frac{{\sqrt 2 }}{2}({3\sin \theta - 3\cos \theta + 4 - 6{{\sin }^2}\theta } )}}{{\frac{{\sqrt 2 }}{2}(\cos \theta - \sin \theta )}}.$$

Then we can get:

$$\cos \theta = \frac{{({3 + {s_1}} )\sin \theta + 4 - 6{{\sin }^2}\theta }}{{({{s_1} + 3} )}}.$$

Since ${\sin ^2}\theta + {\cos ^2}\theta = 1$ and the higher-order term $O({\sin ^3}\theta )$ can be omitted, we obtain Eq. (28).

Funding

Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20180508152019687, WDZC20200820160650001); Guangdong Province Science and Technology Program (2019B010143003).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Laser. Eng. 135, 106193 (2020). [CrossRef]  

2. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

3. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977 (1983). [CrossRef]  

4. C. Zuo, S. Feng, and L. Huang, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser. Eng. 109, 23–59 (2018). [CrossRef]  

5. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]  

6. Y. Wang, J. I. Laughner, I. R. Efimov, and Z. Song, “3D absolute shape measurement of live rabbit hearts with a superfast two-frequency phase-shifting technique,” Opt. Express 21(5), 5822 (2013). [CrossRef]  

7. Z. Wu, C. Zuo, and W. Guo, “High-speed three-dimensional shape measurement based on cyclic complementary Gray-code light,” Opt. Express 27(2), 1283 (2019). [CrossRef]  

8. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610 (2013). [CrossRef]  

9. L. Lu, Y. Ding, Y. Luan, Y. Yin, Q. Liu, and J. Xi, “Automated approach for the surface profile measurement of moving objects based on PSP,” Opt. Express 25(25), 32120 (2017). [CrossRef]  

10. Y. Guo, F. Da, and Y. Yu, “High-quality defocusing phase-shifting profilometry on dynamic objects,” Opt. Eng. 57(10), 105105 (2018). [CrossRef]  

11. T. Weise, B. Leibe, and L. Gool, “Fast 3D scanning with automatic motion compensation,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition (2007), pp. 1–8.

12. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, and Q. Chen, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Laser. Eng. 103, 127–138 (2018). [CrossRef]  

13. J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, “Motion-artifact-free dynamic 3D shape measurement with hybrid fourier-transform phase-shifting profilometry,” Opt. Express 27(3), 2713–2731 (2019). [CrossRef]  

14. Y. Wang, Z. Liu, C. Jiang, and S. Zhang, “Motion induced phase error reduction using a Hilbert transform,” Opt. Express 26(26), 34224 (2018). [CrossRef]  

15. X. Liu, T. Tao, Y. Wan, and J. Kofman, “Real-time motion-induced-error compensation in 3D surface-shape measurement,” Opt. Express 27(18), 25265–25279 (2019). [CrossRef]  

16. W. Guo, Z. Wu, Q. Zhang, and Y. Wang, “Real-time motion-induced error compensation for 4-step phase-shifting profilometry,” Opt. Express 29(15), 23822–23834 (2021). [CrossRef]  

17. J. E. Greivenkamp, “Generalized Data Reduction For Heterodyne Interferometry,” Opt. Eng. 23(4), 350–352 (1984). [CrossRef]  

18. C. Zuo, L. Huang, and M. Zhang, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser. Eng. 85, 84–103 (2016). [CrossRef]  

19. S. Zhang, “Flexible 3D shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]  

20. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

21. X. Chen, J. Xi, and J. Ye, “Accurate calibration for a camera–projector measurement system based on structured light projection,” Opt. Laser. Eng. 47(3-4), 310–319 (2009). [CrossRef]  
