
Simple, flexible, and accurate phase retrieval method for generalized phase-shifting interferometry

Open Access

Abstract

This paper presents a non-iterative method for retrieving phase from randomly phase-shifted fringe images. By combining the hyperaccurate least squares ellipse fitting method with the subspace method (usually called principal component analysis), a fast and accurate phase retrieval algorithm is realized. The proposed method is simple, flexible, and accurate. It can be coded easily without iteration, an initial guess, or tuning parameters. Its flexibility comes from the fact that totally random phase-shifting steps and any number of fringe images greater than two are acceptable without any specific treatment. Finally, it is accurate because the hyperaccurate least squares method and the modified subspace method enable phase retrieval with a small error, as shown by the simulations. A MATLAB code, which is used in the experimental section, is provided within the paper to demonstrate its simplicity and ease of use.

© 2016 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Phase-shifting interferometry (PSI) is a widely used measurement technique for acquiring the objective phase from a set of phase-shifted interferograms [1–7]. The objective phase, which contains the desired information, is retrieved from M fringe images using the structure imposed on them by the phase shifting. Although traditional phase retrieval algorithms require precise knowledge of the shifting steps, such knowledge is difficult to obtain in practice because of implementation errors in the measuring system. Therefore, many methods have been investigated for estimating the unknown phase-shifting steps and/or the objective phase from randomly shifted data [8–23].

Among the estimation techniques, the ellipse fitting method is one of the earliest ideas for determining and correcting errors in the shifting steps [24–26]. If the measured data are represented by two quadrature components, the data are ideally distributed on a circle in the two-dimensional (2D) space. Then, the arctan operation can reconstruct the objective phase. However, those components are not in quadrature when the shifting steps contain errors. In such a case, the data distribution forms an ellipse instead of a circle, which results in patterned noise. The ellipse fitting method tries to convert the ellipse back into the circle so that the phase extraction error is eliminated. This idea has recently attracted several authors to investigate it further because of its powerful ability to correct phase extraction errors [27–32].

Although the ellipse fitting method is simple, the preprocessing required to apply the method is not trivial. To apply the ellipse fitting method, M fringe images must be represented by two images corresponding to the two (hopefully in quadrature) components. However, in general, there are an infinite number of ways to project M-dimensional space into a 2D subspace. In other words, one has to choose how to construct the 2D representation of the data (frequently called the Lissajous figure). Performance of the ellipse fitting method depends on the construction method, and thus it should be carefully chosen.

Another relatively new family of phase retrieval algorithms is the subspace-based method, usually called principal component analysis (PCA) or quadrature component analysis (QCA) [33–38]. We call them the subspace method because their principle is to find the subspace, within an M-dimensional vector space, where noiseless fringe images lie (see Section 2). It can extract the objective phase from fringe images without any explicit formula or knowledge about their phase shifts. The advantages of the subspace method include its low computational cost, its non-iterative nature that does not require any initial guess, the ease of treating spatially varying background illumination and contrast, and its ease of implementation. Therefore, the subspace method is a promising approach for generalized PSI.

Although such advantages are quite attractive, there are situations in which the subspace method produces notable errors. Such a situation occurs when the spatial variation of the fringes is small within the fringe images [34–36]. This accuracy degradation is called the number of fringes limitation, and there have been several attempts to overcome it [35–37]. However, existing methods either solve the problem only partially or require considerably more computational time, which cancels out some of the advantages.

In this paper, a non-iterative algorithm to reconstruct the objective phase from randomly phase-shifted fringes is proposed. It realizes accurate reconstruction by combining the ellipse fitting method with the subspace method, and retaining the simplicity and flexibility of both methods. The main advantages of the proposed method are as follows:

  • Simplicity. The proposed method does not require any parameter tuning or initial guess because of its non-iterative and global nature. It can be coded easily, as shown in Fig. 3, and its computational time is quite short, as shown in Section 5.
  • Flexibility. Any number of fringe images greater than two is acceptable as input. In addition, the phase shifts of the fringes can be totally random. Therefore, the proposed method can be applied to a broad range of situations automatically.
  • Accuracy. For the ellipse fitting, the hyperaccurate least squares method is used so that the statistical bias up to second-order noise terms is eliminated. Moreover, a modification of the subspace method enables random noise reduction [39], which further improves the accuracy of the ellipse fitting. Therefore, this combination can achieve highly accurate phase retrieval.

This paper is organized in six sections. In Section 2, the subspace method and its modified version are briefly reviewed. In addition, the cause of the number of fringes limitation is discussed. The ellipse fitting method is then explained in Section 3. We provide an explicit formula for direct conversion from a point on an ellipse to its phase on the corresponding circle. Furthermore, the hyper least squares method is introduced in an implementation-friendly form. The proposed method is summarized in Section 4, and a MATLAB code is shown there to demonstrate its simplicity. The proposed method is evaluated in Section 5, and the paper is concluded in Section 6.

2. SUBSPACE METHOD FOR PHASE RETRIEVAL

Let the m-th image of M phase-shifted fringe images be expressed as

$$I_{i,j}[m] = B_{i,j} + A_{i,j}\cos(\varphi_{i,j} + \delta[m]),$$
where (i,j) is the 2D index of an image pixel, φ is the objective phase, B is background illumination, A is amplitude of the fringe, and δ[m] is an unknown phase shift. Then, the aim of phase retrieval is to reconstruct φ, which contains the desired information, from the shifted fringe images.

A. Data Vector Constructed from Fringes and Its Subspace

The subspace-based phase retrieval method, such as PCA and QCA, is based on the relation among phase-shifted fringes. For each pixel of the images, an M-dimensional column vector can be constructed as

$$d_{i,j} = [\,I_{i,j}[1],\ I_{i,j}[2],\ \ldots,\ I_{i,j}[M]\,]^T,$$
where $[z_1, z_2, \ldots, z_M]$ is the horizontal concatenation of $\{z_m\}$, and $z^T$ is the transpose of $z$.

Since Eq. (1) is equivalently written as

$$I_{i,j}[m] = B_{i,j} + A_{i,j}\left[\cos(\varphi_{i,j})\cos(\delta[m]) - \sin(\varphi_{i,j})\sin(\delta[m])\right],$$
every data vector di,j is the sum of the following three components,
$$d_{i,j} = B_{i,j}\,o + C_{i,j}\,c + S_{i,j}\,s,$$
where $o = [1, 1, \ldots, 1]^T$ is the M-dimensional column vector whose elements are all one; $B_{i,j}$, $C_{i,j}\,(= A_{i,j}\cos(\varphi_{i,j}))$, and $S_{i,j}\,(= -A_{i,j}\sin(\varphi_{i,j}))$ are scalars independent of the phase shift; and c and s are vectors defined only by the phase shifts {δ[m]} as
$$c = [\cos(\delta[1]),\ \cos(\delta[2]),\ \ldots,\ \cos(\delta[M])]^T,$$
$$s = [\sin(\delta[1]),\ \sin(\delta[2]),\ \ldots,\ \sin(\delta[M])]^T.$$
This equation shows that any ideal data vector di,j lies in the 3D subspace spanned by o, c, and s. Note that M ≥ 3 is implicitly assumed here (for the case M = 2, the ellipse fitting method explained in Section 3 can be directly applied without the subspace method).

By subtracting the mean value $\bar{d}_{i,j}$ of each data vector from $d_{i,j}$ as $d_{i,j} - \bar{d}_{i,j}\,o$, the component spanned by o is eliminated, because the mean subtraction is the orthogonal projection onto the orthogonal complement of that subspace:

$$d_{i,j} - \bar{d}_{i,j}\,o = d_{i,j} - \frac{o}{M}\sum_{m=1}^{M} I_{i,j}[m] = d_{i,j} - \frac{o\,o^T}{\|o\|^2}\,d_{i,j},$$
where $\|\cdot\|$ is the Euclidean norm and $\|o\|^2 = M$. Therefore, a mean-subtracted data vector lies in the 2D subspace spanned by c and s. This observation is the principle of the subspace method, which tries to identify the subspace of c and s to extract the objective phase φi,j from the coefficients Ci,j and Si,j as
$$\varphi_{i,j} = \operatorname{Arg}\left[\,C_{i,j} - \iota S_{i,j}\,\right] = \operatorname{Arg}\left[\,A_{i,j}\{\cos(\varphi_{i,j}) + \iota\sin(\varphi_{i,j})\}\,\right],$$
where $\iota = \sqrt{-1}$, and Arg[z] denotes the principal value of the complex argument of z.

B. Data Matrix and Its Singular Value Decomposition

For extracting the subspace in which data vectors lie, one can apply the singular value decomposition (SVD) to the corresponding data matrix.

A data matrix is constructed by concatenating di,j for every pixel horizontally,

$$D = [\,d_{1,1},\ d_{2,1},\ \ldots,\ d_{N_v,1},\ d_{1,2},\ d_{2,2},\ \ldots,\ d_{N_v,N_h}\,],$$
whose size is $M \times N$, where $N_v$ and $N_h$ are, respectively, the numbers of vertical and horizontal pixels, and $N\,(= N_v N_h)$ is the total number of pixels. Since the order of concatenation is not important, one may construct D in a different manner.

SVD of the matrix D can be written as

$$D = U\Sigma V^T,$$
where U and V are (column) orthonormal matrices whose sizes are, respectively, $M \times M$ and $N \times M$, and Σ is a diagonal matrix consisting of the singular values, $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_M)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_M$. Then, a basis of the subspace can be obtained as the column vectors of U. It can also be obtained by PCA, which is an eigendecomposition of $DD^T$, because
$$DD^T = U\Sigma V^T V\Sigma U^T = U\Sigma^2 U^T.$$

If the mean values of the data vectors are subtracted before constructing the data matrix, a basis of the subspace spanned by c and s is obtained as the two leading vectors of U that correspond to σ1 and σ2. In addition, if c and s are orthogonal and if the background illumination is correctly eliminated by the mean subtraction, then the objective phase can be reconstructed from the two leading vectors of V or ΣV via Eq. (8); the former is called PCA and the latter QCA in the literature of subspace methods. Figure 1 illustrates the procedure of the subspace method. The subspace methods are quite fast and easy to implement because only the matrix construction and its (reduced) SVD are needed for the computation.
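To make the procedure concrete, a minimal MATLAB sketch of this basic subspace method is given below. It assumes the M fringe images are stored in an Nv-by-Nh-by-M array I and relies on implicit expansion (MATLAB R2016b or later); the variable names are illustrative and are not taken from the code in Fig. 3.

```matlab
% Minimal sketch of the subspace method (PCA/QCA) described above.
% Assumes fringe images are stacked in an Nv-by-Nh-by-M array I.
[Nv, Nh, M] = size(I);
D = reshape(I, Nv*Nh, M).';               % M-by-N data matrix, Eq. (9)
D = D - mean(D, 1);                       % per-pixel mean subtraction, Eq. (7)
[U, Sig, V] = svd(D, 'econ');             % reduced SVD, D = U*Sig*V', Eq. (10)
Cc = V(:, 1);                             % coefficients along the two leading
Ss = V(:, 2);                             % directions of the subspace
phiPCA = reshape(angle(Cc - 1i*Ss), Nv, Nh);                    % PCA-type, Eq. (8)
phiQCA = reshape(angle(Sig(1,1)*Cc - 1i*Sig(2,2)*Ss), Nv, Nh);  % QCA uses Sigma*V
% The result is determined only up to a global sign/rotation, as usual for
% asynchronous phase retrieval methods.
```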


Fig. 1. Schematic illustration of the subspace method (M = 3). Data vectors are ideally distributed on a circle, as in (a) and as described by Eq. (4). By subtracting the mean vector as in (b), the data vectors lie in a linear subspace as in (c), which can be detected by SVD of the corresponding data matrix. The red and green arrows represent the two leading vectors of U. The 2D representation of the data vectors is obtained by extracting the subspace as in (d). In this example, the distribution of the extracted data in (d) is a circle because the two conditions in Section 2.D are satisfied.


C. Modified Subspace Method for Reducing Random Error

Recently, it was shown that the subspace method with slight modification can reduce random error [39]. It will be briefly reviewed here because such a noise reduction property can effectively improve the accuracy of the ellipse fitting described in Section 3.

A data vector di,j was defined as an M-dimensional vector in Eq. (2). It can be embedded into a higher dimensional vector space by considering adjacent pixels. For example, a 2M-dimensional extended data vector can be constructed by concatenating two data vectors vertically,

$$d_{i,j}^{(2)} = \left[\, d_{i,j}^T,\ d_{i+1,j}^T \,\right]^T,$$
where $d_{i+1,j}$ is the data vector corresponding to the pixel adjacent to that of $d_{i,j}$, and the superscript of $d_{i,j}^{(2)}$ indicates how many pixels are incorporated. In general, one can use L adjacent pixels to construct an LM-dimensional extended data vector $d_{i,j}^{(L)}$. That is, $d_{i,j}^{(1)} = d_{i,j}$ corresponds to the ordinary data vector in Eq. (2). In this paper, the following two cases, corresponding to L = 5 and L = 9, are considered in the experiments:
$$d_{i,j}^{(5)} = \left[\, d_{i,j}^T,\ d_{i+1,j}^T,\ d_{i-1,j}^T,\ d_{i,j+1}^T,\ d_{i,j-1}^T \,\right]^T,$$
$$d_{i,j}^{(9)} = \left[\, d_{i,j}^T,\ d_{i+1,j}^T,\ d_{i-1,j}^T,\ d_{i,j+1}^T,\ d_{i,j-1}^T,\ d_{i+1,j+1}^T,\ d_{i+1,j-1}^T,\ d_{i-1,j+1}^T,\ d_{i-1,j-1}^T \,\right]^T.$$

By concatenating the above extended data vectors di,j(L) horizontally as in Eq. (9), an extended data matrix, whose size is LM×N, can be constructed. Then, applying a subspace method to the extended data matrix reconstructs the objective phase with less random noise as shown in [39].
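As an illustration, one possible MATLAB construction of the extended data matrix for L = 5 is sketched below; it again assumes an Nv-by-Nh-by-M array I, excludes the image boundary for brevity, and uses variable names of our own choosing.

```matlab
% Minimal sketch of building the extended (L = 5) data matrix of Eq. (13).
[Nv, Nh, M] = size(I);
iv = 2:Nv-1;  ih = 2:Nh-1;                 % interior pixels only (boundary excluded)
shifts = [0 0; 1 0; -1 0; 0 1; 0 -1];      % center pixel and its 4 neighbors
Dext = [];
for k = 1:size(shifts, 1)
    blk  = I(iv + shifts(k,1), ih + shifts(k,2), :);  % shifted copy of the stack
    Dext = [Dext; reshape(blk, [], M).'];  % stack vertically: LM-by-N in the end
end
Dext = Dext - mean(Dext, 1);               % mean subtraction of each extended vector
% Applying the SVD-based procedure above to Dext instead of D reduces random noise.
```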

D. Conditions under Which the Subspace Method Successfully Works

As mentioned at the end of Section 2.B, there are two conditions that should be satisfied to successfully reconstruct the phase by the subspace method.

The first condition is on the background subtraction. Before performing SVD, the mean value of each data vector is subtracted, which can be regarded as the orthogonal projection in Eq. (7). However, such mean subtraction may not eliminate only the background illumination {Bi,j}. Let us explain the situation by a simple example. When a set of data is {B + cos(0), B + cos(π/2), B + cos(−π/2)} = {B + 1, B, B}, its mean is B + 1/3. Then, the mean subtraction yields {cos(0) − 1/3, cos(π/2) − 1/3, cos(−π/2) − 1/3}. Note that the data points are shifted by the mean of the true fringe values, 1/3, even though the background illumination B is perfectly removed. This shifting phenomenon causes a mismatch between the centroid of the data and the origin of the subspace. Since the phase is defined relative to the origin, an error in the origin of the estimated coordinate axes causes an error in the reconstructed phase (see Fig. 2).
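A quick numeric check of this example, with an assumed value B = 2, makes the shift visible:

```matlab
% Even with the background B removed exactly, mean subtraction shifts the
% samples by the mean of the true fringe values (1/3 in this example).
B = 2;
d = B + cos([0, pi/2, -pi/2]);   % {B+1, B, B}
d - mean(d)                      % returns [2/3, -1/3, -1/3] instead of [1, 0, 0]
```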


Fig. 2. Schematic illustration of the ellipse fitting method. When the assumptions of phase reconstruction are not satisfied, observed data form an elliptic distribution in the 2D space as in (a). By fitting an ellipse to the data (b), the elliptic distribution can be corrected as in (c) and (d). Since phase is the angle between the horizontal axis and a data vector that is depicted by the dashed lines, the deformation and translation of the data distribution results in an error of the phase.



Fig. 3. MATLAB code for steps 5–7 in the proposed method. The input is an M×N data matrix D introduced in Section 2, where M is the number of fringe images, and N is the total number of pixels to be considered. The extended LM×N data matrix, which incorporates L adjacent pixels in each data vector as explained in Section 2.C, is also accepted by the same code. In this example, the functions are divided only for demonstration purposes. Obviously, combining all functions into a single function can reduce the total number of lines.


The second condition is the orthogonality of c and s. Since SVD and PCA are performed with the constraint that U and V must be orthonormal matrices, subspace methods estimate c and s as orthonormal vectors. Nevertheless, c and s are not orthogonal in general because their angle, or inner product, depends on the set of phase shifts {δ[m]} as in Eqs. (5) and (6). This mismatch of the angle between basis elements of the subspace also causes errors in the obtained phase.
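This can be checked directly: for a generic set of shifts, the inner product of c and s does not vanish, as in the short MATLAB snippet below (the shift values are arbitrary).

```matlab
% For random phase shifts, c and s of Eqs. (5) and (6) are generally not orthogonal.
delta = 2*pi*rand(5, 1);       % five arbitrary phase shifts
c = cos(delta);
s = sin(delta);
c.' * s                        % nonzero in general; zero only for special shift sets
```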

These conditions are satisfied more easily when the number of interferograms increases, which is probably why some literature claims that the subspace method requires many interferograms. Although the modified subspace method in Section 2.C can effectively reduce random noise, the error related to the above conditions cannot be eliminated. To avoid such an error, estimation of the true origin is necessary. In addition, an oblique SVD should be performed to relax the orthogonality constraint. However, these methods usually require iterative procedures that may be time consuming and unreliable. Therefore, this paper does not try to improve the subspace method itself. Instead, the hyperaccurate ellipse fitting method is combined with the subspace method to eliminate the above-mentioned error without iteration.

3. ELLIPSE FITTING FOR PHASE RETRIEVAL AND HYPERACCURATE LEAST SQUARES METHOD

As discussed in the previous section, the phase obtained by the subspace method may not be accurate because of its dependence on the two conditions on the data. On the other hand, the ellipse fitting method, which is another branch of phase retrieval algorithms, can overcome the two mismatches of the assumptions above. However, its performance depends on how the so-called Lissajous figure, to which an ellipse fitting method is applied, is constructed. Moreover, the ordinary least squares method suffers from a large statistical bias, which limits the accuracy.

In this paper, the subspace method and hyperaccurate least squares ellipse fitting method are combined. This combination is able to overcome the above-mentioned weaknesses and achieve fast, easy, and accurate phase retrieval.

A. Ordinary Least Squares Ellipse Fitting

A general ellipse in the 2D space denoted by the (x,y)-coordinate is represented by

$$\alpha_1 x^2 + 2\alpha_2 xy + \alpha_3 y^2 + 2\beta(\alpha_4 x + \alpha_5 y) + \beta^2\alpha_6 = 0,$$
where $\alpha_1, \ldots, \alpha_6$ are real coefficients, and β is a real positive constant that scales the data within [−1, 1] to prevent accuracy loss caused by computation in finite precision arithmetic. This equation can be simply written as
$$\alpha^T \chi = 0,$$
where α and χ are the 6D vectors defined by
$$\alpha = [\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6]^T,$$
$$\chi = [x^2,\ 2xy,\ y^2,\ 2\beta x,\ 2\beta y,\ \beta^2]^T.$$
Suppose that a set of data points {(xn,yn)} is obtained from an unknown ellipse. Then, the aim of ellipse fitting is to find an ellipse that agrees with the data. Since α is the set of parameters determining an ellipse, fitting an ellipse to a data set is to estimate the best α.

Although ideal points on an ellipse obey Eq. (16), an observed set of points on an ellipse may not satisfy it because of unavoidable observation error. Therefore, some criterion is necessary to estimate the optimal parameters. Arguably the most popular criterion is the mean squared error

$$\frac{1}{N}\sum_{n=1}^{N}\left(\alpha^T \chi_n\right)^2,$$
where the column vector,
$$\chi_n = [x_n^2,\ 2x_n y_n,\ y_n^2,\ 2\beta x_n,\ 2\beta y_n,\ \beta^2]^T,$$
is constructed from $(x_n, y_n)$, the n-th point on the unknown ellipse, which is usually contaminated by noise. However, Eq. (19) can be minimized by a trivial solution: α = 0. Thus, a constraint to avoid the trivial solution is needed. For example, the unit norm constraint on the coefficient vector can be imposed to formulate the ellipse fitting problem as a problem of finding
$$\alpha \leftarrow \arg\min_{\alpha} \frac{1}{N}\sum_{n=1}^{N}\left(\alpha^T \chi_n\right)^2 \quad \text{s.t.} \quad \|\alpha\|^2 = 1,$$
which is equivalently written in the form of the Rayleigh quotient
$$\alpha \leftarrow \arg\min_{\alpha} \frac{\alpha^T X \alpha}{\alpha^T \alpha},$$
where $\alpha^T\alpha = \|\alpha\|^2$, and X is a 6×6 symmetric matrix defined by
$$X = \frac{1}{N}\sum_{n=1}^{N} \chi_n \chi_n^T,$$
which is obtained from the following relation,
$$\frac{1}{N}\sum_{n=1}^{N}\left(\alpha^T \chi_n\right)^2 = \frac{1}{N}\sum_{n=1}^{N} \alpha^T \chi_n \chi_n^T \alpha = \alpha^T\left[\frac{1}{N}\sum_{n=1}^{N} \chi_n \chi_n^T\right]\alpha.$$
Its solution can be found as the eigenvector for the smallest eigenvalue of the eigenvalue problem
$$X\alpha = \lambda\alpha,$$
where λ denotes the eigenvalue. This method is referred to as the ordinary least squares method in this paper.
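For concreteness, the ordinary least squares fit can be sketched in a few lines of MATLAB as follows, assuming the 2D points are given as column vectors x and y; choosing β as the maximum absolute coordinate is one reasonable option, not a prescription from this paper.

```matlab
% Minimal sketch of the ordinary least squares ellipse fit, Eqs. (19)-(25).
N    = numel(x);
beta = max(abs([x; y]));                  % one possible choice of the scale constant
Chi  = [x.^2, 2*x.*y, y.^2, 2*beta*x, 2*beta*y, beta^2*ones(N, 1)];  % Eq. (20)
X    = (Chi.' * Chi) / N;                 % 6-by-6 moment matrix, Eq. (23)
[Vec, Lam] = eig(X);                      % eigenvalue problem, Eq. (25)
[~, k]     = min(diag(Lam));
alphaLS    = Vec(:, k);                   % eigenvector of the smallest eigenvalue
```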

B. Ellipse Fitting Based Phase Retrieval

The ellipse fitting methods are used in several articles to estimate unknown parameters in PSI, such as, for example, the phase-shifting steps {δ[m]}. In this paper, ellipse fitting is applied to the result of the subspace method so that the estimation error caused by the mismatch of the two conditions in Section 2.D is eliminated.

As in Eqs. (4) and (8), the phase is reconstructed based on the assumption that the two components C and S (which are often written as the numerator and denominator inside the arctan operation in the literature) are obtained from two sinusoidal functions whose amplitudes are the same, and the phase difference is exactly π/2. Reconstruction error occurs when this assumption is violated, and thus a compensation of the assumption mismatch is necessary to increase the accuracy. After applying the subspace method, every data vector is projected on the 2D subspace, which is ideally spanned by c and s. In that subspace, distribution of the data vectors is a circle when the above assumption is satisfied. On the other hand, it becomes an ellipse when the assumption is violated. If the ellipse can be converted back into the circle, the reconstruction error of phase can be eliminated. Therefore, the ellipse fitting method tries to fit an ellipse to the data and convert it to the circle for error correction, which is illustrated in Fig. 2.

The most popular method to fit an ellipse to data is the ordinary least squares method described in the last subsection. After estimating the parameter α in Eq. (17), the conversion is calculated from it. Although one may convert the ellipse into the circle first and reconstruct the phase later, it requires additional computations. Therefore, a direct conversion of data into the phase is provided.

The corrected phase φ is obtained directly from a point (x,y) on the ellipse as

$$\varphi = \operatorname{Arg}\left[\sqrt{\alpha_1 + \alpha_3 + \kappa}\,(\tilde{y} + \tau\tilde{x}) + \iota\sqrt{\alpha_1 + \alpha_3 - \kappa}\,(\tilde{x} - \tau\tilde{y})\right],$$
where $(\tilde{x}, \tilde{y})$ is a translated version of $(x, y)$ defined by
$$\tilde{x} = x - \beta(\alpha_3\alpha_4 - \alpha_2\alpha_5)/(\alpha_2^2 - \alpha_1\alpha_3),$$
$$\tilde{y} = y - \beta(\alpha_1\alpha_5 - \alpha_2\alpha_4)/(\alpha_2^2 - \alpha_1\alpha_3),$$
and κ and τ are constants calculated by
$$\kappa = \sqrt{4\alpha_2^2 + (\alpha_1 - \alpha_3)^2},$$
$$\tau = (\alpha_1 - \alpha_3 + \kappa)/(2\alpha_2).$$
In the combination with the subspace method (for M ≥ 3), the point (x, y) corresponds to the elements of the two leading vectors of V in Eq. (10).

From the above equations, it is obvious that the accuracy of the estimated parameter α decides the accuracy of the reconstructed phase. However, the ordinary least squares method gives a statistically biased solution that limits the accuracy. Therefore, an accurate ellipse fitting method is necessary for a better reconstruction.
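As a reference, the direct conversion of Eqs. (26)–(30) could be coded as in the sketch below; the sign flip at the beginning is our own safeguard, and it is assumed that α has been estimated up to scale and that (x, y) are the projected coordinates (e.g., the two leading columns of V).

```matlab
% Minimal sketch of the direct ellipse-to-phase conversion, Eqs. (26)-(30).
if alpha(1) + alpha(3) < 0, alpha = -alpha; end   % fix the overall sign of alpha
a1 = alpha(1); a2 = alpha(2); a3 = alpha(3); a4 = alpha(4); a5 = alpha(5);
den = a2^2 - a1*a3;
xt  = x - beta*(a3*a4 - a2*a5)/den;               % Eq. (27)
yt  = y - beta*(a1*a5 - a2*a4)/den;               % Eq. (28)
kap = sqrt(4*a2^2 + (a1 - a3)^2);                 % Eq. (29)
tau = (a1 - a3 + kap)/(2*a2);                     % Eq. (30)
phi = angle(sqrt(a1 + a3 + kap)*(yt + tau*xt) ...
          + 1i*sqrt(a1 + a3 - kap)*(xt - tau*yt));    % Eq. (26), wrapped phase
```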

C. Hyper Least Squares Ellipse Fitting

In the ordinary least squares ellipse fitting, $\|\alpha\|^2 = 1$ is chosen as the constraint to avoid the trivial solution. On the other hand, several other constraints have been proposed to achieve a better estimation. Variations of the constraint can be written in the form of the weighted norm constraint $\|\alpha\|_W^2 = 1$, where W is a symmetric weighting matrix, and $\|\alpha\|_W^2 = \alpha^T W \alpha$. It is known that a proper choice of W can dramatically improve the estimation accuracy. In this paper, the hyper least squares ellipse fitting method proposed in [40,41] is used.

The estimation error of the ordinary least squares ellipse fitting results from the second-order bias [40–42]. The hyper least squares method is designed so that the statistical bias up to second-order noise terms is completely eliminated. Such accurate estimation is realized by the choice of the weighting matrix W, which is based on an error analysis of the least squares estimators. Although the error analysis is important for understanding how the hyper least squares method achieves high accuracy, the details are omitted in this paper because they would require several pages of explanation. Here, only the information necessary for the implementation is briefly introduced.

To cancel the second order bias, W is chosen according to the explicit form of the bias. Nevertheless, the original form of the hyper least squares method is exceedingly complicated, which may require much more computational time than the ordinary least squares method. Therefore, an approximated version, which is called the semi-hyper least squares method [40], is chosen as the weight in the present paper. The approximated weighting matrix is defined as

$$W = \begin{bmatrix}
6\overline{x^2} & 6\overline{xy} & \overline{x^2}+\overline{y^2} & 6\beta\bar{x} & 2\beta\bar{y} & \beta^2 \\
6\overline{xy} & 4(\overline{x^2}+\overline{y^2}) & 6\overline{xy} & 4\beta\bar{y} & 4\beta\bar{x} & 0 \\
\overline{x^2}+\overline{y^2} & 6\overline{xy} & 6\overline{y^2} & 2\beta\bar{x} & 6\beta\bar{y} & \beta^2 \\
6\beta\bar{x} & 4\beta\bar{y} & 2\beta\bar{x} & 4\beta^2 & 0 & 0 \\
2\beta\bar{y} & 4\beta\bar{x} & 6\beta\bar{y} & 0 & 4\beta^2 & 0 \\
\beta^2 & 0 & \beta^2 & 0 & 0 & 0
\end{bmatrix},$$
where $\bar{x} = (1/N)\sum_{n=1}^{N} x_n$ and $\bar{y} = (1/N)\sum_{n=1}^{N} y_n$ are the usual means, and the notations used for simplicity are
$$\overline{x^2} = \frac{1}{N}\sum_{n=1}^{N} x_n^2,$$
$$\overline{y^2} = \frac{1}{N}\sum_{n=1}^{N} y_n^2,$$
$$\overline{xy} = \frac{1}{N}\sum_{n=1}^{N} x_n y_n.$$
This approximated version of the weight becomes closer to the original weighting matrix as the number of pixels N increases (the order of the approximation error is O(1/N)). For PSI, this approximation makes little difference because N is usually large, for example, N = 10000 for an image whose size is 100×100. Moreover, the numerical experiment in [40] showed that the approximated version can achieve accuracy comparable to the original method even for small N (in [40], fitting results for N = 30 are shown).
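In MATLAB, this weighting matrix can be assembled directly from the moments of the projected points; the sketch below assumes the same x, y, and beta as in the ordinary least squares snippet above.

```matlab
% Minimal sketch of the semi-hyper weighting matrix of Eq. (31).
xm = mean(x);     ym = mean(y);        % \bar{x}, \bar{y}
x2 = mean(x.^2);  y2 = mean(y.^2);     % \overline{x^2}, \overline{y^2}
xy = mean(x.*y);                       % \overline{xy}
W  = [6*x2,       6*xy,       x2+y2,      6*beta*xm, 2*beta*ym, beta^2;
      6*xy,       4*(x2+y2),  6*xy,       4*beta*ym, 4*beta*xm, 0;
      x2+y2,      6*xy,       6*y2,       2*beta*xm, 6*beta*ym, beta^2;
      6*beta*xm,  4*beta*ym,  2*beta*xm,  4*beta^2,  0,         0;
      2*beta*ym,  4*beta*xm,  6*beta*ym,  0,         4*beta^2,  0;
      beta^2,     0,          beta^2,     0,         0,         0];
```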

Then, the semi-hyper least squares method is to find

$$\alpha \leftarrow \arg\min_{\alpha} \frac{\alpha^T X \alpha}{\alpha^T W \alpha},$$
where the solution α can be obtained as the eigenvector for the smallest eigenvalue of the generalized eigenvalue problem of the matrix pencil (X, W),
$$X\alpha = \lambda W\alpha.$$
Since both X and W are 6×6 matrices, its calculation is quite fast compared to some iterative methods.

Note that, for numerical computation, it is better to calculate the eigenvector for the eigenvalue with the largest absolute value of the generalized eigenvalue problem of (W, X),

$$W\alpha = \lambda X\alpha,$$
because some linear algebra libraries may not accept (X, W) owing to the structure of W, which is not positive definite.
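One possible MATLAB realization of this step, continuing the snippets above, is:

```matlab
% Minimal sketch of solving Eq. (37): the eigenvector whose eigenvalue has the
% largest absolute value corresponds to the smallest eigenvalue of Eq. (36).
[Vec, Lam]  = eig(W, X);               % generalized eigenvalue problem W*v = lambda*X*v
[~, k]      = max(abs(diag(Lam)));
alphaHyper  = Vec(:, k);
alphaHyper  = alphaHyper / norm(alphaHyper);   % scale of alpha is irrelevant
```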

4. PROPOSED METHOD: HEFS

Based on the above methods, we propose a simple, flexible, and accurate phase retrieval algorithm by combining the ellipse fitting method with the subspace method. We call it the hyper ellipse fitting in subspace (HEFS) method, which is defined as follows:

  1. Import M fringe images $\{I_{i,j}[1], I_{i,j}[2], \ldots, I_{i,j}[M]\}$.
  2. Concatenate the fringes for each pixel to construct the M-dimensional data vectors {di,j} as in Eq. (2).
  3. Create LM-dimensional column vectors $\{d_{i,j}^{(L)}\}$ by vertically concatenating the data vectors corresponding to the selected L adjacent pixels as in Eqs. (13) and (14). This step can be skipped if no adjacent pixel is considered (L = 1).
  4. Construct an LM×N data matrix D by horizontally concatenating the mean-subtracted data vectors $\{d_{i,j}^{(L)} - \bar{d}_{i,j}^{(L)}\}$ as in Eq. (9).
  5. Perform SVD to decompose the data matrix as $D = U\Sigma V^T$.
  6. Fit an ellipse to the projected data obtained as the two leading vectors of V. The optimal set of parameters α is obtained as the eigenvector for the eigenvalue with the largest absolute value of the generalized eigenvalue problem in Eq. (37).
  7. Reconstruct the objective wrapped phase φ from V and α by using Eq. (26).

For step 5, one may calculate V by multiplying $\Sigma^{-1}U^T$ to D from the left, where U and Σ are obtained as in Eq. (11). This strategy is useful when the images contain some untrusted region where no fringe pattern is available. In that case, U and Σ are obtained using only the data in the trusted region, and then V is calculated for the whole region [33,37]. Note that determining the global sign requires additional information, which is usual for asynchronous phase retrieval algorithms, as claimed in [37].

An example of the implementation of steps 5–7 in MATLAB is shown in Fig. 3, where the mean subtraction in step 4 is executed altogether at line 8, independently from the construction of the data matrix D. This code illustrates the simplicity and ease of use of the proposed method, even though the underlying theory may seem quite burdensome.
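For orientation, one possible way of organizing steps 5–7 as a single self-contained MATLAB function is sketched below; it simply chains the snippets of Sections 2 and 3 and is not a reproduction of the code in Fig. 3 (the function name and the choice of β are ours).

```matlab
function phi = hefs_steps5to7(D)
% HEFS steps 5-7: D is the (mean-subtracted) M-by-N or LM-by-N data matrix.
[~, ~, V] = svd(D, 'econ');                       % step 5
x = V(:, 1);  y = V(:, 2);                        % projected 2D data
beta = max(abs([x; y]));  N = numel(x);
Chi = [x.^2, 2*x.*y, y.^2, 2*beta*x, 2*beta*y, beta^2*ones(N, 1)];
X = (Chi.'*Chi)/N;                                % Eq. (23)
xm = mean(x); ym = mean(y);
x2 = mean(x.^2); y2 = mean(y.^2); xy = mean(x.*y);
W = [6*x2, 6*xy, x2+y2, 6*beta*xm, 2*beta*ym, beta^2;
     6*xy, 4*(x2+y2), 6*xy, 4*beta*ym, 4*beta*xm, 0;
     x2+y2, 6*xy, 6*y2, 2*beta*xm, 6*beta*ym, beta^2;
     6*beta*xm, 4*beta*ym, 2*beta*xm, 4*beta^2, 0, 0;
     2*beta*ym, 4*beta*xm, 6*beta*ym, 0, 4*beta^2, 0;
     beta^2, 0, beta^2, 0, 0, 0];                 % Eq. (31)
[Vec, Lam] = eig(W, X);                           % step 6, Eq. (37)
[~, k] = max(abs(diag(Lam)));
a = Vec(:, k);
if a(1) + a(3) < 0, a = -a; end                   % fix the overall sign of alpha
xt  = x - beta*(a(3)*a(4) - a(2)*a(5))/(a(2)^2 - a(1)*a(3));
yt  = y - beta*(a(1)*a(5) - a(2)*a(4))/(a(2)^2 - a(1)*a(3));
kap = sqrt(4*a(2)^2 + (a(1) - a(3))^2);
tau = (a(1) - a(3) + kap)/(2*a(2));
phi = angle(sqrt(a(1) + a(3) + kap)*(yt + tau*xt) ...
          + 1i*sqrt(a(1) + a(3) - kap)*(xt - tau*yt));   % step 7, Eq. (26)
end
```

The output is a column vector of wrapped phase values (one per pixel) that can be reshaped to the image size; as noted above, its global sign is undetermined.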

The proposed method can be regarded as a simple, flexible, and accurate method for several reasons. It is a simple, non-iterative algorithm that can be coded in several tens of lines of MATLAB, as shown in Fig. 3. The algorithm can handle any number of fringe images greater than two without any treatment specific to that number. Moreover, the phase shifts can be totally random since no assumption is made about them. Therefore, automatic execution is possible because there is no parameter that must be tuned by hand. Furthermore, the hyperaccurate ellipse fitting method eliminates the statistical bias of the parameter estimation up to second-order noise terms. Its non-iterative and global nature ensures reliable estimation because no initial guess is required. The modification of the data matrix of the subspace method also enables reduction of the random noise-related error.

5. NUMERICAL EXPERIMENTS

In this section, the effectiveness of the proposed method is examined numerically. Fringe images were simulated by

$$I[m](\mathbf{x}) = B(\mathbf{x}) + A(\mathbf{x})\cos(\varphi(\mathbf{x}) + \delta[m]) + \sigma\varepsilon(\mathbf{x}),$$
where ε is Gaussian random noise with a standard deviation of one, and σ ≥ 0 is the standard deviation of the added noise.
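For reference, test fringes of this form can be generated with a few lines of MATLAB; the parameter values below (B = 2, A = 1, a 300×300 grid, M = 3, σ = 0.01) mirror the illustrative example of Section 5.A and are otherwise arbitrary.

```matlab
% Minimal sketch of simulating randomly phase-shifted fringes per Eq. (38).
n = 300;                                     % image size n-by-n
[x1, x2] = meshgrid(linspace(-1, 1, n));
phiTrue = x1.^2 + x2.^2;                     % objective phase
B = 2;  A = 1;  sigma = 0.01;  M = 3;
delta = 2*pi*rand(M, 1);                     % totally random phase shifts
I = zeros(n, n, M);
for m = 1:M
    I(:, :, m) = B + A*cos(phiTrue + delta(m)) + sigma*randn(n, n);
end
```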

To show the importance of the hyper least squares method, the ordinary least squares method was also tested. The ordinary least squares version of the proposed method will be called EFS, omitting "hyper" from HEFS. In addition, the parameters were chosen so that the differences among the retrieval methods were more apparent, because they may perform similarly for well-conditioned data. The boundary of the simulated data was excluded from the retrieval process, as in [39], to avoid a complicated discussion that is not in the scope of this paper.

A. Illustrative Example

First, an example is shown to demonstrate how the proposed HEFS method works in principle. For the parameters in Eq. (38), B = 2, A = 1, and $\varphi = x_1^2 + x_2^2$ were chosen, where $(x_1, x_2) \in [-1, 1] \times [-1, 1]$, and the image size was 300×300. Three fringe images were generated with the randomly obtained shifting steps δ[m] ∈ {0.8817, 2.2198, 3.6285}. Here, a small amount of noise was added (σ = 0.01) to show the difference, because the two ellipse fitting methods provide the same result for noiseless data.

Figure 4 shows the data distributions in the 2D subspace. The root mean square error (RMSE) is also shown in the subcaptions. As a reference, the RMSE of the advanced iterative algorithm (AIA) [11] with the best initial guess was 0.0097 rad for this example. Since data were only available for a part of the circle, the resulting distribution of PCA was highly deformed, which is the cause of the so-called number of fringes limitation. Although QCA obtained a better result in this case, it may worsen the accuracy in some cases, as will be shown in Section 5.C. The ellipse fitting methods were applied in the subspace obtained by PCA, as shown in the bottom row. As in Fig. 4(e), the ordinary least squares method could not obtain a reasonable fit in this example because of the statistical bias. For such an erroneous estimate, the correction from ellipse to circle can deteriorate the accuracy, as shown in Fig. 4(f). In contrast, the hyper ellipse fitting method provided a quite natural fit, which resulted in an RMSE roughly a dozen times smaller (note that the RMSE cannot be zero because of the noise, whose standard deviation was σ = 0.01).


Fig. 4. Example of data distributions obtained by the phase retrieval algorithms (L = 1). The phase $\varphi = x_1^2 + x_2^2$ shown in (a) was observed as three fringe images as in (b). (c)–(h) show the distribution of the data vectors, where each blue dot represents a 3D data vector for the (i, j)-th pixel, $d_{i,j} = [I_{i,j}[1], I_{i,j}[2], I_{i,j}[3]]^T$, projected to the 2D subspace by SVD. (f) and (h) were obtained by converting the ellipses of (e) and (g) back into circles, respectively (Fig. 2). The black cross marks illustrate the origin of the coordinates, while the red ones show the centers of the fitted ellipses. In this case, only HEFS achieved an RMSE comparable to that of AIA with the best initial guess, whose RMSE was 0.0097 rad.


This example clearly illustrates that using the hyper least squares method instead of the ordinary one is quite important to accurately retrieve the phase. In this subsection, the data matrix was not extended (i.e., L=1). As shown in the subsequent simulations, the noise reduction property of the modified subspace method explained in Section 2.C can further improve the accuracy of the ellipse fitting and overall performance.

B. Equispaced Phase Shifts

To examine the performance of the proposed method, it was applied to equally phase-shifted fringe images, that is,

$$\delta[m] = 2\pi m / M,$$
where $m = 1, 2, \ldots, M$.

The reason to consider equispaced shifting steps here is that the standard M-step phase-shifting algorithm [1],

$$\varphi = \operatorname{Arg}\left[\left(\sum_{m=1}^{M} I[m]\cos(2\pi m/M)\right) - \iota\left(\sum_{m=1}^{M} I[m]\sin(2\pi m/M)\right)\right],$$
can be applied. The M-step algorithm is a good reference to measure the performance because it can be regarded as the best algorithm in the least squares sense when the phase shifts are equispaced and known. Moreover, the subspace methods can obtain better results for equispaced phase shifts than random shifts [34].

Figure 5 shows the RMSE of each method for different noise levels and numbers of fringe images. Here, the phase was set to $\varphi = 2(x_1^2 + x_2^2)$ because QCA could obtain an RMSE similar to that of the M-step algorithm in this case. Without modifying the subspace method (L = 1), QCA performed best, improving on PCA for small noise levels, say, σ = 0.1, 0.2. Although the proposed HEFS performed similarly to QCA for σ = 0.1, 0.2, it obtained quite inaccurate results for larger noise levels. This is because the ellipse fitting did not work properly in the presence of noise, which is more apparent for EFS. On the other hand, the accuracy was dramatically improved by the modification of the subspace method (L = 5, 9). The accuracy improvement was limited for PCA and EFS due to, respectively, the number of fringes limitation and the statistical bias of the ellipse fitting. HEFS, however, achieved accurate results without such limitations. Note that the RMSE of HEFS was less than half that of the M-step phase-shifting algorithm, which means one can benefit from using the proposed method (owing to its denoising and bias elimination properties) even when the phase shifts are exactly equispaced and known.


Fig. 5. RMSE of each method for the equally phase-shifted data with different noise levels. Each line corresponds to a different σ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}, as in the legend. The horizontal axis represents the number of fringe images M used for the phase extraction. The vertical axis denotes the RMSE, where every axis is drawn on the same scale.


C. Random Phase Shifts

The proposed method was also applied to randomly phase-shifted fringes, where the phase shifts {δ[m]} were obtained randomly from the uniform distribution within [0,2π]. The other parameters were the same as in the previous subsection. Since the M-step algorithm cannot be applied because of the randomness of the shifts, AIA was used as a reference.

Figure 6 shows the mean values of the RMSE calculated from 100 trials, because the fringe images were generated in a totally random manner. For easier interpretation, some outliers with RMSEs greater than one standard deviation from the mean RMSE were excluded. Although QCA was better than PCA in the previous subsection, this was not always the case in this situation. These results indicate that QCA is not a direct solution to the accuracy issue of PCA. In contrast, HEFS with the modified subspace method was able to improve the accuracy.


Fig. 6. Mean RMSE of each method for the randomly phase-shifted data with different noise levels. Each line corresponds to a different σ ∈ {0.1, 0.2, 0.3, 0.4, 0.5}, as in the legend. The horizontal axis represents the number of fringe images M used for the phase extraction. The vertical axis denotes the mean RMSE, where every axis is drawn on the same scale.


Table 1 shows the computational times of HEFS and AIA. They were measured with an Intel Core i7 3.0 GHz processor using five fringe images (M = 5) with an image size of 300×300, as in the other simulations in this section. The construction of each data matrix was also included in the measured time. Since HEFS is non-iterative, its computational time is reasonably short compared with that of the iterative algorithm, AIA.


Table 1. Computational Time of the Phase Extraction from Five Fringe Images (M=5) Whose Size Was 300×300, Where Constructions of the Data Matrices Were Included

In these simulations, B = 2 and A = 1 were chosen so that the assumptions of AIA were fulfilled, which allows an ideal comparison. However, such a situation is highly unrealistic because B and A vary spatially in practice. The variation of the illumination should affect the accuracy of the ellipse fitting, and thus its effect must be investigated. Moreover, the modification of step 5 of the proposed method to handle untrusted regions (mentioned in Section 4) must be considered for practical use. These points will be addressed in future works, along with a verification using real data.

6. CONCLUSIONS

In this paper, a phase retrieval algorithm named HEFS is proposed. It combines the ellipse fitting method with the subspace method to benefit from the advantages of both. Since each method is non-iterative, HEFS can be executed faster than an iterative algorithm. Its non-iterative nature also allows an easy implementation, as shown in Fig. 3. Moreover, the modification of the subspace method and the hyper least squares method enable accurate and stable phase retrieval. The numerical experiments indicated that the proposed method can retrieve the phase with less error. Therefore, the proposed method should obtain a similar accuracy from fewer fringe images, or a more accurate result from the same number of fringe images.

It is known that some iterative ellipse fitting methods achieve more accurate results than the hyper least squares method [42–44]. Although they were not chosen in this paper because of their iterative nature, they would be the method of choice when accuracy is more important than simplicity and computational time. Moreover, combining the proposed method with other methods, as in [45,46], can be the next step toward improving the overall accuracy.

Funding

Japan Society for the Promotion of Science (JSPS) (15J08043, 16J06772)

REFERENCES

1. D. Malacara, M. Servín, and Z. Malacara, Interferogram Analysis For Optical Testing, 2nd ed. (CRC, 2005).

2. H. Schreiber and J. H. Bruning, “Phase shifting interferometry,” in Optical Shop Testing (Wiley, 2006), pp. 547–666.

3. Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase-shifting interferometry,” Appl. Opt. 24, 804–807 (1985). [CrossRef]  

4. H. Medecki, E. Tejnil, K. A. Goldberg, and J. Bokor, “Phase-shifting point diffraction interferometer,” Opt. Lett. 21, 1526–1528 (1996). [CrossRef]  

5. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]  

6. Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. 85, 1069–1071 (2004). [CrossRef]  

7. K. Ishikawa, K. Yatabe, N. Chitanont, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “High-speed imaging of sound using parallel phase-shifting interferometry,” Opt. Express 24, 12922–12932 (2016). [CrossRef]  

8. G. Lai and T. Yatagai, “Generalized phase-shifting interferometry,” J. Opt. Soc. Am. A 8, 822–827 (1991). [CrossRef]  

9. K. Okada, A. Sato, and J. Tsujiuchi, “Simultaneous calculation of phase distribution and scanning phase shift in phase shifting interferometry,” Opt. Commun. 84, 118–124 (1991). [CrossRef]  

10. I.-B. Kong and S.-W. Kim, “General algorithm of phase-shifting interferometry by iterative least-squares fitting,” Opt. Eng. 34, 183–188 (1995). [CrossRef]  

11. Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. 29, 1671–1673 (2004). [CrossRef]  

12. A. Patil and P. Rastogi, “Approaches in generalized phase shifting interferometry,” Opt. Lasers Eng. 43, 475–490 (2005). [CrossRef]  

13. H. Guo, Y. Yu, and M. Chen, “Blind phase shift estimation in phase-shifting interferometry,” J. Opt. Soc. Am. A 24, 25–33 (2007). [CrossRef]  

14. X. F. Xu, L. Z. Cai, Y. R. Wang, X. F. Meng, W. J. Sun, H. Zhang, X. C. Cheng, G. Y. Dong, and X. X. Shen, “Simple direct extraction of unknown phase shift and wavefront reconstruction in generalized phase-shifting interferometry: algorithm and experiments,” Opt. Lett. 33, 776–778 (2008). [CrossRef]  

15. P. Gao, B. Yao, N. Lindlein, K. Mantel, I. Harder, and E. Geist, “Phase-shift extraction for generalized phase-shifting interferometry,” Opt. Lett. 34, 3553–3555 (2009). [CrossRef]  

16. J. Deng, H. Wang, D. Zhang, L. Zhong, J. Fan, and X. Lu, “Phase shift extraction algorithm based on Euclidean matrix norm,” Opt. Lett. 38, 1506–1508 (2013). [CrossRef]  

17. H. Guo and Z. Zhang, “Phase shift estimation from variances of fringe pattern differences,” Appl. Opt. 52, 6572–6578 (2013). [CrossRef]  

18. R. Juarez-Salazar, C. Robledo-Sánchez, C. Meneses-Fabian, F. Guerrero-Sánchez, and L. A. Aguilar, “Generalized phase-shifting interferometry by parameter estimation with the least squares method,” Opt. Lasers Eng. 51, 626–632 (2013). [CrossRef]  

19. J. Deng, L. Zhong, H. Wang, H. Wang, W. Zhang, F. Zhang, S. Ma, and X. Lu, “1-Norm character of phase shifting interferograms and its application in phase shift extraction,” Opt. Commun. 316, 156–160 (2014). [CrossRef]  

20. Q. Liu, Y. Wang, J. He, and F. Ji, “Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations,” J. Opt. 17, 025704 (2015). [CrossRef]  

21. J. Deng, D. Wu, K. Wang, and J. Vargas, “Precise phase retrieval under harsh conditions by constructing new connected interferograms,” Sci. Rep. 6, 24416 (2016). [CrossRef]  

22. Y. Xu, Y. Wang, Y. Ji, H. Han, and W. Jin, “Three-frame generalized phase-shifting interferometry by a Euclidean matrix norm algorithm,” Opt. Lasers Eng. 84, 89–95 (2016). [CrossRef]  

23. X. Xu, X. Lu, J. Tian, J. Shou, D. Zheng, and L. Zhong, “Random phase-shifting interferometry based on independent component analysis,” Opt. Commun. 370, 75–80 (2016). [CrossRef]  

24. K. Kinnstaetter, A. W. Lohmann, J. Schwider, and N. Streibl, “Accuracy of phase shifting interferometry,” Appl. Opt. 27, 5082–5089 (1988). [CrossRef]  

25. C. T. Farrell and M. A. Player, “Phase step measurement and variable step algorithms in phase-shifting interferometry,” Meas. Sci. Technol. 3, 953–958 (1992). [CrossRef]  

26. C. T. Farrell and M. A. Player, “Phase-step insensitive algorithms for phase-shifting interferometry,” Meas. Sci. Technol. 5, 648–654 (1994). [CrossRef]  

27. A. Albertazzi Jr., A. V. Fantin, D. P. Willemann, and M. E. Benedet, “Phase maps retrieval from sequences of phase shifted images with unknown phase steps using generalized N-dimensional Lissajous figures—principles and applications,” Int. J. Optomechatron. 8, 340–356 (2014). [CrossRef]  

28. C. Meneses-Fabian and F. A. Lara-Cortes, “Phase retrieval by Euclidean distance in self-calibrating generalized phase-shifting interferometry of three steps,” Opt. Express 23, 13589–13604 (2015). [CrossRef]  

29. F. Liu, Y. Wu, and F. Wu, “Correction of phase extraction error in phase-shifting interferometry based on Lissajous figure and ellipse fitting technology,” Opt. Express 23, 10794–10807 (2015). [CrossRef]  

30. F. Liu, Y. Wu, F. Wu, and W. Song, “Generalized phase shifting interferometry based on Lissajous calibration technology,” Opt. Lasers Eng. 83, 106–115 (2016). [CrossRef]  

31. F. Liu, J. Wang, Y. Wu, F. Wu, M. Trusiak, K. Patorski, Y. Wan, Q. Chen, and X. Hou, “Simultaneous extraction of phase and phase shift from two interferograms using Lissajous figure and ellipse fitting technology with Hilbert–Huang prefiltering,” J. Opt. 18, 105604 (2016). [CrossRef]  

32. A. V. Fantin, D. P. Willemann, M. E. Benedet, and A. G. Albertazzi, “Robust method to improve the quality of shearographic phase maps obtained in harsh environments,” Appl. Opt. 55, 1318–1323 (2016). [CrossRef]  

33. J. Vargas, J. A. Quiroga, and T. Belenguer, “Phase-shifting interferometry based on principal component analysis,” Opt. Lett. 36, 1326–1328 (2011). [CrossRef]  

34. J. Vargas, J. A. Quiroga, and T. Belenguer, “Analysis of the principal component algorithm in phase-shifting interferometry,” Opt. Lett. 36, 2215–2217 (2011). [CrossRef]  

35. J. Xu, W. Jin, L. Chai, and Q. Xu, “Phase extraction from randomly phase-shifted interferograms by combining principal component analysis and least squares method,” Opt. Express 19, 20483–20492 (2011). [CrossRef]  

36. J. Vargas, C. Sorzano, J. Estrada, and J. Carazo, “Generalization of the principal component analysis algorithm for interferometry,” Opt. Commun. 286, 130–134 (2013). [CrossRef]  

37. J. Vargas and C. Sorzano, “Quadrature component analysis for interferometry,” Opt. Lasers Eng. 51, 637–641 (2013). [CrossRef]  

38. J. Deng, K. Wang, D. Wu, X. Lv, C. Li, J. Hao, J. Qin, and W. Chen, “Advanced principal component analysis method for phase reconstruction,” Opt. Express 23, 12222–12231 (2015). [CrossRef]  

39. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Improving principal component analysis based phase extraction method for phase-shifting interferometry by integrating spatial information,” Opt. Express 24, 22881–22891 (2016). [CrossRef]  

40. K. Kanatani and P. Rangarajan, “Hyper least squares fitting of circles and ellipses,” Comput. Statist. Data Anal. 55, 2197–2208 (2011). [CrossRef]  

41. K. Kanatani, P. Rangarajan, Y. Sugaya, and H. Niitsuma, “HyperLS for parameter estimation in geometric fitting,” IPSJ Trans. Comput. Vis. Appl. 3, 80–94 (2011). [CrossRef]  

42. K. Kanatani, Y. Sugaya, and Y. Kanazawa, “Ellipse fitting for computer vision: implementation and applications,” in Synthesis Lectures on Computer Vision (Morgan & Claypool, 2016).

43. K. Kanatani, “Statistical optimization for geometric estimation: minimization vs. non-minimization,” in 22nd International Conference on Pattern Recognition (ICPR) (2014), pp. 1–8.

44. K. Kanatani, A. Al-Sharadqah, N. Chernov, and Y. Sugaya, “Hyper-renormalization: non-minimization approach for geometric estimation,” Inf. Media Technol. 10, 71–87 (2015). [CrossRef]  

45. K. Yatabe and Y. Oikawa, "Convex optimization based windowed Fourier filtering with multiple windows for wrapped phase denoising," Appl. Opt. 55, 4632–4641 (2016). [CrossRef]  

46. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Compensation of fringe distortion for phase-shifting three-dimensional shape measurement by inverse map estimation,” Appl. Opt. 55, 6017–6024 (2016). [CrossRef]  
