Optica Publishing Group

Cross-iteration multi-step optimization strategy for three-dimensional intensity position correction in phase diverse phase retrieval

Open Access

Abstract

Parameter mismatch between the real optical system and the phase retrieval model undermines wavefront reconstruction accuracy. The three-dimensional intensity position is corrected in phase retrieval, traditionally as separate lateral and axial position corrections. In this paper, we propose a three-dimensional intensity position correction method for phase diverse phase retrieval based on a cross-iteration nonlinear optimization strategy. The intensity position is first optimized with a coarse optimization method; it is then cross-optimized with an exact optimization method during the iterative wavefront reconstruction. The analytic gradients with respect to the three-dimensional intensity position are derived. The cross-iteration optimization strategy avoids interference between incomplete position correction and wavefront reconstruction during the iterative process. The accuracy and robustness of the proposed method are verified both numerically and experimentally. The proposed method achieves robust and accurate intensity position correction and wavefront reconstruction, making it suitable for wavefront measurement and phase imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Coherent diffraction imaging (CDI), which directly retrieves phase information from diffracted intensities using a simple experimental arrangement, is an attractive wavefront sensing and imaging method. The CDI technique has been applied to super-resolution [1–4], wavefront sensing [5–9], phase imaging [10–13], optical encryption [14–17], etc. The iterative phase retrieval (PR) method [18], as an implementation of CDI, offers good accuracy and robustness for complex-field reconstruction. The PR method estimates the desired wavefront by iteratively propagating between the desired plane and the collected intensity planes using a pair of Fourier transform operations. According to the number of collected intensities, PR methods can be classified into single-image and multiple-image methods [19]. Compared with the single-image PR method, the multiple-image PR method, called phase diverse phase retrieval (PDPR), accelerates the convergence of the iterative algorithm and achieves higher robustness by using several intensity images [20]. Owing to its high accuracy and good robustness, the PDPR method is an effective quantitative optical technique, suitable for image reconstruction [21,22] and wavefront sensing [23–27].

Accurate matching between the phase retrieval model and the real optical experimental system is an imperative procedure for the iterative phase retrieval method. The three-dimensional intensity position is the main systematic error, which degrades the quality of the wavefront reconstruction. In the previous literature, the three-dimensional position error, classified into lateral and axial position errors, is corrected separately. On the one hand, for lateral position error correction, researchers proposed cross-correlation calibration methods to estimate the oblique angle of the incident light and correct the lateral position errors induced by tilted illumination [28,29]. In addition, for wavefront sensing, small lateral shifts can be corrected by estimating Zernike coefficients, which is suitable for single-image or sub-aperture stitching phase retrieval algorithms [30]. On the other hand, the axial uncertainty of the intensity position includes the absolute distance error offsetting all intensities and the relative distance errors among the measured planes. A small absolute distance error results in a small focus term that is easily removed, while the relative distance errors damage the accuracy of the wavefront reconstruction [19]. Searching for the optimal axial position of the measured plane is an autofocusing procedure; imaging with a specially fabricated phase plate [31] or a double-pinhole interference pattern has been applied for autofocusing [30].

However, the three-dimensional intensity position error should be corrected simultaneously, since correcting only the axial or only the lateral error does not achieve accurate intensity position matching between the numerical model and the experimental system. Besides, position correction is traditionally carried out according to the estimated field. Considering that the initial estimate of the desired plane differs greatly from the ground truth, the intensity position estimated from this initial value is not very precise. An accurate three-dimensional intensity position correction phase retrieval method is therefore essential for accurate wavefront reconstruction.

In this paper, we develop a novel method to achieve stable three-dimensional intensity position correction and wavefront reconstruction based on a cross-iteration nonlinear optimization strategy. By optimizing a common error metric, intensity position correction and complex-field retrieval are implemented cross-iteratively, which alleviates the accuracy loss induced by interference between incomplete position correction and wavefront reconstruction. The analytic gradients with respect to the intensity position in different directions are also derived. In addition, the characteristics of the reconstruction error produced by intensity position error are analyzed. Furthermore, the proposed method can be applied to correct the intensity position error in single-image or sub-aperture stitching phase retrieval models.

The remainder of this paper is organized as follows. Section 2 introduces the PDPR model and the intensity position error problem. Section 3 describes the cross-iteration nonlinear optimization method for position correction and wavefront reconstruction. Section 4 verifies the accuracy and stability of the proposed method through numerical simulations. Section 5 presents the verification experiments. Section 6 discusses and concludes the paper.

2. Intensity position error problem for the PDPR model

Our goal is to recover the complex field of the sample and to optimize the three-dimensional intensity positions of a stack of intensity images. We first introduce the phase diverse phase retrieval model, which mathematically describes the forward physical process and establishes the error metric. Next, we analyze the three-dimensional intensity position error problem. Considering that previous researchers divided this problem into lateral position correction and axial position correction, the intensity position errors are also analyzed from these two aspects.

2.1 Optical model and error metric for the phase diverse phase retrieval

The PDPR model is shown in Fig. 1(a). The propagation of the wavefront from the desired plane to the measured plane can be calculated with

$${G_j}({u,v} )= {{\cal F}}\{{Z[{{g_s}({x,y} )\phi ({x,y,{z_j}} )} ]} \},$$
where ${G_j}({u,v} )$ is the field on the ${j^{th}}$ measured plane, $({u,v} )$ are the coordinates of the measured plane, ${g_s}({x,y} )$ is the desired field, $({x,y} )$ are the coordinates of the desired plane, ${i^2} ={-} 1$, ${\cal F}[\,]$ denotes the Fourier transform, $Z$ represents zero-padding to match the real optical system, and $\phi ({x,y,{z_j}} )$ is the phase diversity factor for the defocus length ${z_j}$:
$$\phi ({x,y,{z_j}} )= \exp \left[ {i\frac{{2\pi }}{\lambda }\frac{{{z_j}}}{{2{f^2}}}({{x^2} + {y^2}} )} \right].$$

The intensity on the jth measured plane is

$$I_{cal}^j({u,v} )= {G_j}({u,v} )G_j^ \ast ({u,v} ),$$
where the superscript * denotes the complex conjugate operator.


Fig. 1. (a) Experimental configuration for the phase diverse phase retrieval model. The collimated beam illuminates the sample and aperture stop. The lens is used to converge the beam. The charge-coupled device (CCD) collects diffracted intensities at different positions along the nominal optical axis. ${z_1}\sim {z_3}$ are the measured planes. (b) The three-dimensional position correction directions for the measured intensity. The x and y axes are the lateral directions, and the z axis denotes the axial direction.


The error metric of this desired plane reconstruction model is defined as

$$E_j^2 = \sum\limits_{u,v} {{W_j}(u,v){{\left[ {\sqrt {\widehat I_{cal}^j(u,v)} - \sqrt {\widehat I_{mea}^j(u,v)} } \right]}^2}} ,$$
where $\widehat I_{mea}^j$ and $\widehat I_{cal}^j$ are the normalized intensities of the measured intensity $I_{mea}$ and the calculated intensity $I_{cal}$, respectively, and ${W_j}(u,v)$ is a weighting function used to discard bad or saturated detector pixels and pixels with a poor signal-to-noise ratio. The concrete form of the weighting function will be discussed in a future paper. Intensity position error correction and wavefront reconstruction are both accomplished by optimizing this common error metric.
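To make the forward model and the metric concrete, a minimal numerical sketch of Eqs. (1)–(4) might look as follows. The function names, the centered-FFT convention, the symmetric zero-padding width `pad`, and the treatment of the phase diversity as a complex exponential of the Eq. (2) quadratic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def forward_intensity(g, z, wavelength, f, pad, pixel):
    # Apply the quadratic defocus phase for defocus length z, zero-pad the
    # field (operator Z), Fourier transform (Eq. 1), and return the
    # intensity |G_j|^2 on the measured plane (Eq. 3).
    n = g.shape[0]
    x = (np.arange(n) - n // 2) * pixel
    X, Y = np.meshgrid(x, x)
    defocus = np.exp(1j * (2 * np.pi / wavelength) * z / (2 * f**2) * (X**2 + Y**2))
    gz = np.pad(g * defocus, pad)
    G = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(gz)))
    return np.abs(G) ** 2

def error_metric(I_cal, I_mea, W=None):
    # Weighted amplitude-difference metric of Eq. (4) on normalized intensities.
    W = np.ones_like(I_cal) if W is None else W
    a = np.sqrt(I_cal / I_cal.sum())
    b = np.sqrt(I_mea / I_mea.sum())
    return np.sum(W * (a - b) ** 2)
```

Because the intensities are normalized before the square roots are taken, this metric is insensitive to a global intensity scale and vanishes only when the calculated and measured patterns agree.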

2.2 Feature of the lateral position error problem

The influence of the lateral position error of the intensity is analyzed first. During a real experiment, the acquisition of intensity images along the axis is usually not ideal, as shown in Fig. 2(a). Moving the CCD off axis shifts the diffraction pattern in amplitude and adds an extra distance-dependent modulation to the phase, as shown in Fig. 2(b). In fact, the CCD records only the intensity data, and the corresponding phase is lost. Thus, the only impact of moving the CCD off axis is a lateral shift of the diffraction pattern [29]. The position coordinates of the measured intensity pattern can be expressed as

$$\left\{ {\begin{array}{l} {{u_j} = {u_0} + \Delta {u_0} + \delta {u_j},}\\ {{v_j} = {v_0} + \Delta {v_0} + \delta {v_j},} \end{array}} \right.$$
where $(u_0,v_0)$ are the corrected coordinates, $(\Delta u_0,\Delta v_0)$ is the fixed offset between the CCD and the optical axis, and $(\delta u_j,\delta v_j)$ is the tilt offset of the CCD moving along the optical axis. In the PDPR model, the tilt offset can be corrected by optimizing tip and tilt terms when the detector shifts are small, whereas the fixed offset common to all intensities cannot be corrected this way. For convenience, in this paper, the fixed offset and the tilt offset are collectively called lateral errors. If the measured intensity has a relative lateral error with respect to the ideal intensity, a departure error relative to the true intensity arises. The departure error has a ripple-like structure, as shown in Fig. 3(c), and is directly reflected in the reconstructed wavefront. A peak appears in the high-frequency domain of the power spectral density (PSD) curve, as shown in Figs. 4(c) and 4(f). This phenomenon indicates that the lateral error of the intensity impairs the wavefront reconstruction in the high-frequency domain. Correcting the lateral position error is therefore vital for high-resolution and high-accuracy wavefront reconstruction.
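The claim above — that an off-axis detector move reduces to a pure translation of the recorded pattern, because the accompanying linear phase is lost when only intensity is detected — can be checked numerically with the Fourier shift theorem. The grid size and shift below are arbitrary illustration values, not the experimental parameters.

```python
import numpy as np

# A linear phase ramp on the field shifts its spectrum by k pixels
# (shift theorem); since only |G|^2 is recorded, the detector offset
# appears purely as a translation of the intensity pattern.
n, k = 64, 3                       # grid size and lateral shift in pixels
rng = np.random.default_rng(0)
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
I0 = np.abs(np.fft.fft2(g)) ** 2   # on-axis intensity pattern

ramp = np.exp(2j * np.pi * k * np.arange(n) / n)[None, :]
I_shift = np.abs(np.fft.fft2(g * ramp)) ** 2

assert np.allclose(I_shift, np.roll(I0, k, axis=1))
```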


Fig. 2. The model of CCD off-axis moving. (a) The schematic of diffractive intensities collected via off-axis moving CCD; (b) The oblique modality in the case of CCD.



Fig. 3. Schematic diagram of lateral position error for intensity. (a) The intensity with 4.4 µm lateral shift error; (b) The intensity without lateral shift error; (c) The difference value between intensities (a) and (b). All intensities are normalized.



Fig. 4. The influence of CCD off-axis moving on PDPR. (a) The retrieved phase; (b) the ground truth (actual object) for phase; (c) the PSD curve of the phase; (d) the retrieved amplitude; (e) the ground truth (actual object) for amplitude; (f) the PSD curve of the amplitude. The complex field is retrieved from three intensities at different positions. The diameter of the desired plane is 70 mm. In (c) and (f), the blue curve denotes the PSD of the ground truth, and the red curve denotes the PSD of the retrieved result.


2.3 Feature of the axial position error problem

Another concern of the intensity position in PDPR is determining the axial position, as shown in Fig. 5. The axial position error can be classified into the absolute position error of all measurement planes, shown in Fig. 5(b), and the relative position error among the measurement planes, shown in Fig. 5(c). For every measured plane, the defocus length ${z_j}$ can be expressed as

$${z_j} = {z_0} + \Delta {z_0} + \delta {z_j},$$
where ${z_0}$ is the corrected defocus length, $\Delta {z_0}$ is the absolute defocus error of all measured planes, and $\delta {z_j}$ is the relative error among the different planes. The influence of the absolute defocus error on PDPR is similar to that in the single-image phase retrieval model: a small absolute displacement of the planes can be compensated by a simple focus term, as shown in Fig. 5(b). The relative position error among the measurement planes, however, undermines the accuracy of the wavefront, as shown in Fig. 5(c). If the measured intensity has a relative axial error with respect to the ideal intensity, a residual ripple-like error arises, as shown in Fig. 6(c). This is directly reflected in the reconstructed wavefront, which exhibits a ripple-like error as shown in Figs. 7(a) and 7(d), and a peak appears in the high-frequency domain of the PSD curve, as shown in Figs. 7(c) and 7(f).


Fig. 5. The model of axial position error and the retrieved results for different conditions. ${z_1}$, ${z_2}$ and ${z_3}$ are the defocus lengths for the three measurement planes. The focus term is removed for all retrieved results. (a) The retrieved result without axial position errors; (b) the retrieved result with only the fixed axial position error $\Delta {z_0}$; (c) the retrieved result with random axial position errors including the fixed axial position error $\Delta {z_0}$ and the relative axial position error $\delta {z_j}$.



Fig. 6. Schematic diagram of axial position error for intensity. (a) The intensity collected in the plane with 0.64 mm axial error; (b) the intensity without axial shift error; (c) the difference value between intensities (a) and (b).



Fig. 7. The influence of axial error for PDPR. (a) The retrieved phase; (b) The ground truth (actual object) for phase; (c) The PSD curve of phase; (d) The retrieved amplitude; (e) The ground truth (actual object) for amplitude; (f) The PSD curve of amplitude.


3. Method

In this section, the cross-iteration intensity position correction phase retrieval algorithm is developed. For the gradient calculation of the error metric, Fienup et al. [30,32] proposed a simple analytic gradient expression based on the Fourier transform. Here, we further extend the analytic gradient calculation to intensity position optimization. Unlike the traditional approach of performing lateral and axial position corrections separately, the cross-iteration three-dimensional intensity position correction method for the phase diverse phase retrieval algorithm is established as follows: first, the position of every intensity is coarsely optimized according to the initial estimate; then the position of every intensity is exactly optimized according to the updated estimate in the iterative process.

3.1 Intensity position and wavefront reconstruction analytic gradient calculation

In the PDPR model, following the derivation in Ref. [30], the gradient of the error metric with respect to a real-valued parameter $\alpha $ can be written as

$$\frac{{\partial {E_j}}}{{\partial \alpha }} = \sum\limits_{x,y} {g{{^{\prime}}_j}^{\ast} ({x,y} )} \frac{\partial }{{\partial \alpha }}[{{g_j}({x,y} )} ]+ c.c.,$$
where ${g_j}({x,y} )$ is the estimated complex field and ${g_j}^{\prime}({x,y} )$, which is inversely propagated from the ${j^{th}}$ measured plane, is given by
$${g_j}^{\prime}({x,y} )= \overline Z \{{{{{\cal F}}^{ - 1}}[{G_j^w({u,v} )} ]\cdot \phi ({x,y, - {z_j}} )} \},$$
where ${{\cal F}^{ - 1}}[\,]$ denotes the inverse Fourier transform, $\bar{Z}$ represents retaining the effective values inside the desired plane, and
$$G_j^w({u,v} )= {W_j}({u,v} )[{|{{F_j}({u,v} )} |- |{{G_j}({u,v} )} |} ]\frac{{{G_j}({u,v} )}}{{|{{G_j}({u,v} )} |}},$$
where $|{{F_j}({\textrm{u,v}} )} |= \sqrt {\hat{I}_{\textrm{mea}}^j} $.

According to Eq. (7), the pixel-by-pixel gradient values for the desired field (including phase and amplitude) by using the ${j^{\textrm{th}}}$ collected intensity can be calculated by

$$\left\{ {\begin{array}{c} {\frac{{\partial {E_j}}}{{\partial \theta }} = 2{\mathop{\rm Im}\nolimits} [{{g_j}({x,y} ){g^{\prime}}_j^{\ast} ({x,y} )} ],}\\ {\frac{{\partial {E_j}}}{{\partial a}} ={-} 2{\rm{Re}} [{{g_j}({x,y} ){g^{\prime}}_j^{\ast} ({x,y} )} ],} \end{array}} \right.$$
where $\partial {E_j}/\partial \theta$ and $\partial {E_j}/\partial a$ represent the gradients of the ${j^{th}}$ error metric with respect to the phase and amplitude, respectively.
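These pixel-wise gradients can be sanity-checked against finite differences. The sketch below is a simplified illustration, not the authors' code: it drops the zero-padding and defocus phase, sets $W = 1$, and uses a single measured plane. The signs and scale factors follow numpy's unnormalized FFT convention, so they may differ from Eq. (10) by a sign.

```python
import numpy as np

def grads_phase_amp(g, F_amp):
    # Adjoint-based gradients of E = sum (|G| - |F|)^2 with G = fft2(g),
    # in the spirit of Eqs. (7)-(10); no padding/defocus, W = 1.
    G = np.fft.fft2(g)
    Gw = (np.abs(G) - F_amp) * G / np.abs(G)      # Eq. (9) analogue
    gp = g.size * np.fft.ifft2(Gw)                # back-propagated field
    dE_dtheta = -2 * np.imag(g * np.conj(gp))     # phase gradient
    dE_da = 2 * np.real(np.exp(1j * np.angle(g)) * np.conj(gp))  # amplitude
    return dE_dtheta, dE_da
```

A central finite difference on a single pixel of the phase or amplitude reproduces these values numerically, which is a useful check when adapting the gradients to other FFT conventions.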

Then we derive the gradient with respect to the effective intensity position $({{u_j},{v_j}} )$. According to Fourier optics theory, a tilt term in the phase shifts the center of the sample’s spectrum in the Fourier domain:

$${{\cal F}}\{{Z[{g({x,y} )\exp ({i2\pi ({{u_j}{T_x} + {v_j}{T_y}} )} )\phi ({x,y,{z_j}} )} ]} \}= G({u - {u_j},v - {v_j}} ),$$
where ${T_x}$ and ${T_y}$ denote the tilt matrices in the x and y directions, respectively. Similar to the gradient calculation for the coefficients of polynomial basis functions, according to Eq. (7), the gradients with respect to the shifts of the intensity pattern are calculated with
$$\left\{ {\begin{array}{c} {\frac{{\partial {E_j}}}{{\partial {u_j}}} = 4\pi {\mathop{\rm Im}\nolimits} \left[ {\sum\limits_{x,y} {{g_j}({x,y} ){T_x}({x,y} )g{^{\prime}_j}^\ast ({x,y} )} } \right],}\\ {\frac{{\partial {E_j}}}{{\partial {v_j}}} = 4\pi {\mathop{\rm Im}\nolimits} \left[ {\sum\limits_{x,y} {{g_j}({x,y} ){T_y}({x,y} )g{^{\prime}_j}^\ast ({x,y} )} } \right].} \end{array}} \right.$$

Finally, the gradient with respect to the defocus length is calculated. To make the dependence on ${z_j}$ explicit, Eq. (1) is rewritten as

$${G_j}({u,v} )= {{\cal F}}\left\{ {Z\left[ {g({x,y} )\exp \left( {i\frac{{2\pi }}{\lambda }\frac{{{z_j}}}{{2{f^2}}}({{x^2} + {y^2}} )} \right)} \right]} \right\}.$$

According to Eq. (7), the analytic gradient expression about the defocus length ${z_j}$ is calculated by

$$\frac{{\partial {E_j}}}{{\partial {z_j}}} = \frac{{2\pi }}{{\lambda {f^2}}}{\mathop{\rm Im}\nolimits} \left[ {\sum\limits_{x,y} {{g_j}({x,y} )({{x^2} + {y^2}} )g{{^{\prime}}_j}^{\ast} ({x,y} )} } \right].$$
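The defocus-length gradient can likewise be verified against a finite difference. The sketch below is an illustrative simplification (no zero-padding, $W = 1$, one plane); the signs and scale follow numpy's FFT convention and may differ from Eq. (14) by a sign.

```python
import numpy as np

def dE_dz(g, F_amp, z, wavelength, f, pixel):
    # Analytic defocus-length gradient of the Eq. (4) metric, in the
    # spirit of Eq. (14); simplified model, illustrative parameter names.
    n = g.shape[0]
    x = (np.arange(n) - n // 2) * pixel
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    c = np.pi / (wavelength * f**2)           # d(defocus phase)/dz = c * r2
    gz = g * np.exp(1j * c * z * r2)
    G = np.fft.fft2(gz)
    D = (np.abs(G) - F_amp) * G / np.abs(G)
    gp = g.size * np.fft.ifft2(D)             # adjoint-propagated field
    return -2 * c * np.sum(r2 * np.imag(gz * np.conj(gp)))
```

Note the $2\pi /(\lambda {f^2})$ prefactor emerges from differentiating the quadratic defocus phase, consistent with Eq. (14).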

3.2 Cross-iteration nonlinear position optimization phase retrieval method

The specific procedure for calculating the gradients for wavefront reconstruction and intensity position optimization is described in Section 3.1. Next, the cross-iteration nonlinear position optimization phase retrieval algorithm framework is described, as shown in Fig. 8. In the proposed framework, the position error of the intensity is corrected via two different optimization strategies. Owing to the imperfect wavefront of the desired plane, the first optimization cannot find the globally optimal position and stalls near it. The first position optimization step roughly locates the position of the diffractive intensity by using Algorithm 1. It is worth noting that each iteration advances only one pixel, to avoid crossing the globally optimal position during the first lateral position optimization. Then the wavefront is reconstructed. When the number of iterations reaches a certain threshold, the position of the intensity pattern is corrected by using Algorithm 2. For the second position correction, the gradient values of each iteration are accumulated with weights, and the lateral position of the intensity pattern is then optimized according to the geometric relationship between the desired plane and the measurement surface. The axial position error is corrected at the same time. The proposed algorithm corrects the position of each intensity pattern separately and successively, independently of the intensity patterns at other positions.


Fig. 8. Flow chart of the proposed algorithm.


The detailed algorithm has the following procedures:

  • (1) The iteration number K, the maximum position correction iteration number ${N_p}$, the iteration threshold M, the number of patterns J, and the step lengths for the amplitude ${h_{amp}}$, phase ${h_{phase}}$, lateral position ${h_c}$, and axial position are set, with $k = 0$. The reconstruction starts from a random normalized matrix ${g_0}({x,y} )$;
  • (2) For every intensity pattern, the intensity position is optimized by using Algorithm 1;
  • (3) The corresponding gradient functions, including $\partial {E_j}/\partial {a_k}$ for the amplitude and $\partial {E_j}/\partial {\theta _k}$ for the phase, are calculated based on Section 3.1, and the desired field is estimated with
    $$\left\{ {\begin{array}{l} {{\theta_k}(x,y) = {\theta_{k - 1}}(x,y) + {h_{phase}}\frac{{\partial {E_j}}}{{\partial {\theta_k}}},}\\ {|{{\textrm{g}_k}} |= |{{g_{k - 1}}} |+ {h_{amp}}\frac{{\partial {E_j}}}{{\partial {a_k}}},\textrm{ }} \end{array}} \right.$$
    where the pattern index $j$ cycles through the $J$ collected patterns as $j = \textrm{rem}(k,J)$, and the desired wavefront is calculated by using
    $${g_k} = |{{g_k}} |\exp [{i{\theta_k}(x,y)} ];$$
  • (4) $k = k + 1$; if $\textrm{rem}(k,M) = 1$, the intensity position is optimized by using Algorithm 2;
  • (5) Procedures (3) and (4) are repeated iteratively until $k \ge K$ is satisfied.
Algorithm 1: coarse position error correction algorithm
  • (1) The center point of the diffraction image is chosen as the initial center position of iterative optimization;
  • (2) The gradients $\partial {\textrm{E}_\textrm{j}}/\partial {u_j}$, $\partial {\textrm{E}_\textrm{j}}/\partial {v_j}$ and $\partial {\textrm{E}_\textrm{j}}/\partial {z_\textrm{j}}$ are calculated by using the initial desired field;
  • (3) The position of the used intensity pattern is updated via
    $$\left\{ {\begin{array}{c} {{u_j} = {u_j} - {h_u}\frac{{\partial {E_j}}}{{\partial {u_j}}},}\\ {{v_j} = {v_j} - {h_v}\frac{{\partial {E_j}}}{{\partial {v_j}}},} \end{array}} \right.$$
    with ${h_u} = {d_u}/|{\partial {E_j}/\partial {u_j}} |$ and ${h_v} = {d_v}/|{\partial {E_j}/\partial {v_j}} |$, where ${d_u} \times {d_v}$ is the pixel size of the CCD;
  • (4) The defocus length is updated by using
    $${z_j} = {z_j} + {h_z}\frac{{\partial {E_j}}}{{\partial {z_j}}};$$
  • (5) $n = n + 1$; if $n < {N_p}$, return to (2); otherwise, the cropped diffraction image and the defocus length are used for the iterative wavefront reconstruction calculation.
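The one-pixel-per-iteration behavior of Algorithm 1 can be illustrated with a toy sketch. This is not the authors' implementation: the analytic gradients are replaced by direct evaluation of the Eq. (4)-style metric at ±1 pixel, the desired field is taken as known, and only a one-dimensional lateral shift is recovered; all sizes are illustrative.

```python
import numpy as np

n, true_shift, sigma = 64, 5, 4.0
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
I_true = np.exp(-(X**2 + Y**2) / (2 * sigma**2))   # smooth focal spot
I_mea = np.roll(I_true, true_shift, axis=1)        # laterally offset detector

def E(shift):
    # Amplitude-difference metric after undoing a candidate lateral shift.
    return np.sum((np.sqrt(np.roll(I_mea, -shift, axis=1))
                   - np.sqrt(I_true)) ** 2)

shift = 0
for _ in range(20):                 # at most N_p coarse iterations
    # Move one pixel toward lower error, as in Algorithm 1.
    step = min((-1, 0, 1), key=lambda s: E(shift + s))
    if step == 0:
        break
    shift += step

assert shift == true_shift
```

Because the metric is smooth and unimodal for this pattern, the one-pixel steps cannot overshoot the optimum, which is the rationale the text gives for limiting the coarse stage to single-pixel moves.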
Algorithm 2: exact position error correction algorithm
  • (1) The estimated wavefront is filtered by a low-pass filter to obtain ${g^0}$, the coefficients are initialized as $c_{u,j}^0 = 0$ and $c_{v,j}^0 = 0$, and the step length ${h_c}$ is set. The low-pass filtering removes the mid-spatial-frequency error induced by the intensity position error;
  • (2) Calculate the gradients $\partial {\textrm{E}_\textrm{j}}/\partial {u_\textrm{j}}$, $\partial {\textrm{E}_\textrm{j}}/\partial {v_\textrm{j}}$ and $\partial {\textrm{E}_\textrm{j}}/\partial {z_\textrm{j}}$;
  • (3) Update coefficients with
    $$\left\{ {\begin{array}{c} {c_{u,j}^{n + 1} = c_{_{u,j}}^n + {h_c}\frac{{\partial {E_j}}}{{\partial {u_j}}},}\\ {c_{_{v,j}}^{n + 1} = c_{_{v,j}}^n + {h_c}\frac{{\partial {E_j}}}{{\partial {v_j}}};} \end{array}} \right.$$
  • (4) Update the wavefront with
    $${g^{n + 1}} = |{{g^0}} |\exp \{{i[{\arg ({{g^0}} )+ c_{u,j}^{n + 1}{T_x} + c_{v,j}^{n + 1}{T_y}} ]} \};$$
  • (5) Update the defocus length
    $${z_j} = {z_j} + {h_z}\frac{{\partial {E_j}}}{{\partial {z_j}}};$$
  • (6) $n = n + 1$; if $n < {N_p}$, return to (2); otherwise, update the center point position
    $$\left\{ {\begin{array}{c} {{u_j} = {u_j} - \frac{{\lambda ({f - {z_j}} )}}{{\pi D{d_u}}}{c_{u,j}},}\\ {{v_j} = {v_j} - \frac{{\lambda ({f - {z_j}} )}}{{\pi D{d_v}}}{c_{v,j}},} \end{array}} \right.$$
    where $D$ is the diameter of the desired plane;
  • (7) The new diffraction image is cropped and used for the iterative calculation as the output result. The optimized defocus length ${z_j}$ is also output.

4. Simulations

In Section 3, the cross-iteration nonlinear optimization phase retrieval method for position correction and wavefront reconstruction was proposed. Here, synthetic intensities with three-dimensional position errors are used to verify the effectiveness of the proposed algorithm. Wavefront sensing and image reconstruction using intensities with position errors are simulated, with the lateral and axial position errors jointly optimized. Then, the robustness of the proposed algorithm is tested with Monte Carlo simulations. Considering that the tilt offset and the absolute defocus error affect the reported intensity position values rather than the accuracy of the wavefront reconstruction, the three-dimensional position values output by the proposed algorithm do not by themselves indicate its effectiveness. Here, the root mean square error (RMSE) of the reconstructed wavefront is applied to quantitatively evaluate the quality of the position correction. When the accuracy of the wavefront reconstructed from intensities with position errors agrees well with the result reconstructed without position errors, the proposed method is proved effective.
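As a concrete form of this evaluation criterion, a minimal RMSE helper might look as follows; the function name and the optional pupil-mask argument are illustrative assumptions.

```python
import numpy as np

def rmse(recovered, truth, mask=None):
    # Root mean square error between two wavefront maps, optionally
    # restricted to a pupil mask; used to quantify correction quality.
    d = np.asarray(recovered) - np.asarray(truth)
    if mask is not None:
        d = d[mask]
    return np.sqrt(np.mean(d ** 2))
```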

For all numerical experiments, the working wavelength is 632.8 nm and the focal length is 1079.41 mm. The physical side length of the square test images is fixed at 70 mm. The true defocus distances are taken as 5 mm, 7.5 mm, and 10 mm sequentially, and the pixel size of the intensity grid is $4.4 \times 4.4\,\mathrm{\mu m}$. In this simulation, the threshold M for running the error correction algorithm is set to 200. To test the robustness of the proposed algorithm, the signal-to-noise ratio is set to 30 dB for every intensity pattern.

4.1 Position errors correction for wavefront sensing

First, the ability of the proposed algorithm to correct lateral and axial position errors simultaneously is verified. A random axial error $\delta z \in [{ - 1,1} ]\;\textrm{mm}$ and lateral errors $\Delta u,\Delta v \in [{ - 10,10} ]\;{\mu }\textrm{m}$ are added to every measurement plane. For every intensity, the position is first optimized by using Algorithm 1 and then cross-optimized with the cross-iteration strategy by using Algorithm 2. The retrieved results are shown in Fig. 9. After correcting the position errors (in both the lateral and axial directions), the retrieved results have accuracy comparable to the results reconstructed without position errors. This proves that the proposed method corrects position errors accurately for wavefront sensing.


Fig. 9. Numerical experiments to verify the accuracy of intensity position correction. (a) and (e) are the retrieved results without axial position correction; (b) and (f) are the retrieved results by using the proposed algorithm; (c) and (g) are the retrieved results without axial position errors; (d) and (h) are the ground truth of phase and amplitude, respectively.


4.2 Position errors correction for phase imaging

The PDPR algorithm is widely applied in wavefront sensing, but it has rarely been applied to reconstruct a complex image (including phase and amplitude). Here, the feasibility of the proposed algorithm for image reconstruction and intensity position error correction is verified. A random axial error $\delta z \in [{ - 1,1} ]\;\textrm{mm}$ and lateral errors $\Delta u,\Delta v \in [{ - 10,10} ]\;{\mu }\textrm{m}$ are added to every measurement plane. The reconstructed amplitude and phase match well with the original amplitude and phase, as shown in Fig. 10, indicating that the proposed nonlinear optimization phase retrieval method effectively reconstructs both the amplitude and phase of an object. When the intensity position error is not corrected, the phase image reconstruction fails. After running the position correction algorithm, the reconstructed wavefront agrees with the true value. This proves that the cross-iteration position correction is accurate for square image reconstruction.


Fig. 10. Numerical experiments to verify the accuracy of intensity position correction for phase imaging. (a) and (e) are the retrieved results without position correction; (b) and (f) are the retrieved results by using the proposed algorithm; (c) and (g) are the retrieved results without position errors; (d) and (h) are the ground truth of phase and amplitude, respectively.


Monte Carlo simulations are further performed to validate the effectiveness of the proposed method. To prove that the proposed algorithm can correct large position errors, a random axial error $\delta z \in [{ - 1,1} ]\;\textrm{mm}$ and lateral errors $\Delta u,\Delta v \in [{ - 100,100} ]\;{\mu }\textrm{m}$ are added to every measurement plane. The residual RMSE is shown in Fig. 11. The retrieved results show that the proposed algorithm maintains comparable correction accuracy for large position errors.


Fig. 11. Monte Carlo simulations for phase imaging with the proposed algorithm. (a) Residual RMSEs between the recovered phase and the original phase; (b) residual RMSEs between the recovered amplitude and the original amplitude. Compared with Figs. 10(c) and 10(f), the proposed strategy can be successfully applied to correct the position error. The blue line represents the RMSE of the reconstructed phase without position error; the red line represents the RMSE of the reconstructed amplitude without position error.


5. Experiments

Experiments were carried out to verify the feasibility and accuracy of the proposed method. A collimated beam with $\lambda = 632.8$ nm illuminates the plate, and a circular aperture stop of diameter 22.9 mm is placed in front of the sample. A lens with a focal length of 1079.41 mm is used to focus the beam. The camera, a beam profiler (BGP-USB-SP620U) with 12-bit depth and $4.4 \times 4.4\;{\mu }\textrm{m}$ pixel size, is used to collect the diffraction patterns. The motorized translation stage (M-LFS100PP) has a high moving precision of up to 0.5 µm. Although the axial position is accurate enough, it is also optimized by the proposed algorithm. In this experiment, the selected defocus distances are 5 mm, 10 mm, and 20 mm.

First, the plate with manufacturing error is inserted into the PDPR system to test the proposed method experimentally. The collected intensities, shown in Fig. 12, have a grid size of $\textrm{1200} \times 1600$; the patterns are clearly off-center in the whole image. Here, the proposed algorithm is applied to extract the effective intensity grid and retrieve the wavefront. The retrieved results are shown in Fig. 13. As a comparison, the centroid correction phase retrieval algorithm [33] is also applied to retrieve the wavefront. For all retrieved phases and the ZYGO data, the piston, tip-tilt, and power (PTP) terms are first removed from the reconstructed wavefront by using a least-squares fit. The result of the proposed algorithm agrees with the interferometric data to 0.1483 rad RMSE, whereas the phase retrieved by the centroid correction phase retrieval algorithm has an accuracy of 0.1702 rad RMSE. The amplitude retrieved by the proposed algorithm is smoother than that of the centroid correction algorithm, as shown in Figs. 13(g) and 13(h). This proves that the proposed algorithm is effective for correcting the position errors of the intensities.
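The least-squares removal of piston, tip-tilt, and power mentioned above can be sketched as an ordinary least-squares fit over the pupil (a hedged Python sketch; the basis normalization and the `mask` handling are our own assumptions, not necessarily the paper's exact implementation):

```python
import numpy as np

def remove_ptp(phase, mask):
    """Least-squares removal of piston, tip-tilt, and power (PTP) from a
    reconstructed wavefront before comparing it with interferometric data.
    `phase` is a 2D phase map (rad); `mask` selects the pupil pixels."""
    ny, nx = phase.shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    r2 = x**2 + y**2
    # Basis columns: piston, tip, tilt, power (defocus).
    A = np.stack([np.ones_like(x), x, y, r2], axis=-1)[mask]
    coeffs, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    fit = np.zeros_like(phase)
    fit[mask] = A @ coeffs
    residual = np.where(mask, phase - fit, 0.0)
    return residual, coeffs
```

The RMSE values quoted in the text would then be computed on such PTP-removed residual maps inside the pupil.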


Fig. 12. The intensities collected by the CCD. (a)–(c) The intensities with nominal defocus distances of 20 mm, 10 mm, and 5 mm, respectively.



Fig. 13. The retrieved wavefront results for different algorithms. (a) is the phase retrieved by the centroid correction algorithm; (b) is the phase retrieved by the proposed algorithm; (c) is the ZYGO result as ground truth; (d) and (e) are the residual errors between the reconstructed results and the ZYGO result; (g) and (h) are the retrieved amplitudes.


Second, a USAF resolution target is imaged to quantify the resolution improvement of the proposed algorithm. The focal length of the lens is 335.28 mm and the measured diameter is 22.9 mm. Here we introduce the two-step diffraction theory [34] to improve the resolution of the desired plane. The effective sampling of the desired plane is $\textrm{512} \times 512$. The collected intensities are shown in Fig. 14; the effective intensity clearly departs from the center position. Figure 15 presents the results retrieved by the different algorithms. Figures 15(a)–15(d) show the full USAF target, and Figs. 15(e)–15(h) show the corresponding magnified areas of interest. Compared with the results of the centroid correction phase retrieval algorithm, shown in Figs. 15(e) and 15(g), the proposed algorithm greatly reduces the fringe-like error in the retrieved results. The resolution of the reconstruction results obtained with the proposed algorithm is significantly higher than that of the comparison algorithm.
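The benefit of two-step propagation is that the sampling pitch of the desired plane is decoupled from the detector pitch. One possible way to split the propagation distance so that a chosen output pitch is obtained is sketched below (Python; the splitting rule and the names are our own illustration, not necessarily the construction used in [34]):

```python
def two_step_pitches(wavelength, n, d1, d2_target, z):
    """Split a Fresnel propagation over distance z into two single-FFT
    steps z1 + z2.  A single-FFT Fresnel step of length zi maps an input
    pitch d to wavelength * zi / (n * d), so choosing z2 / z1 equal to
    d2_target / d1 makes the final pitch equal to d2_target."""
    m = d2_target / d1                       # desired grid magnification
    z1 = z / (1.0 + m)
    z2 = z - z1
    d_mid = wavelength * z1 / (n * d1)       # pitch at intermediate plane
    d2 = wavelength * z2 / (n * d_mid)       # pitch at the final plane
    return z1, z2, d2
```

For example, with the experimental parameters ($\lambda = 632.8$ nm, a 4.4 µm detector pitch, and a 20 mm defocus), requesting an 8.8 µm output pitch yields the split directly.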


Fig. 14. The intensities collected by the CCD. (a)–(c) The intensities with nominal defocus distances of 20 mm, 15 mm, and 10 mm, respectively.



Fig. 15. The retrieved USAF results for different algorithms. (a) is the phase retrieved by the centroid correction algorithm; (b) is the phase retrieved by the proposed algorithm; (c) is the amplitude retrieved by the centroid correction algorithm; (d) is the amplitude retrieved by the proposed algorithm.


6. Discussion and conclusion

In this paper, we propose a three-dimensional intensity position correction method for multi-image phase retrieval using a cross-iteration nonlinear optimization strategy. The reconstruction error induced by three-dimensional position errors is analyzed, and the analytic gradients with respect to the three-dimensional intensity position are derived. The two-step correction strategy achieves exact, large-scale three-dimensional position correction, while the cross-iteration strategy avoids the interference between lateral position correction and wavefront reconstruction by optimizing a common error metric. Simulations and experiments verify the performance of the proposed algorithm. The proposed method is a practical and effective tool for correcting the intensity positions in the PDPR model.

Although we have discussed the proposed method extensively, both numerically and experimentally, some interesting issues remain for further study. In this paper, the nonlinear optimization algorithm is the steepest descent method, which is the simplest gradient optimization method. In the future, we will introduce an advanced gradient optimization algorithm to improve the convergence speed [11]. We emphasize that the position error correction is independent for each intensity pattern in the proposed algorithm, so the algorithm is also suitable for the single-plane phase retrieval model. It is applicable to popular fields including quantitative phase imaging [35–37], image encryption [14,15], ptychography [38–40], etc.
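As a reference point for this optimizer discussion, the steepest-descent update applied to the position parameters has the generic form below (a minimal, generic Python sketch; the step sizes play the role of the $h$ factors in the paper's update equations, but the function itself is purely illustrative):

```python
import numpy as np

def steepest_descent(grad, x0, step, n_iter):
    """Plain steepest descent: x <- x - step * grad(x).  This is the
    simplest gradient scheme; momentum or conjugate-gradient variants
    would converge faster, as noted in the text."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * grad(x)
    return x

# Toy usage: minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
x_opt = steepest_descent(lambda x: 2.0 * (x - 3.0), np.zeros(2), 0.1, 200)
```

Replacing this fixed-step loop with a line search or an accelerated scheme changes only the update rule, which is why swapping in a faster optimizer is straightforward.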

Funding

Science Challenge Project (TZ2016006-0502-02); National Natural Science Foundation of China (52075507, 61905241, 62175211); Laboratory of Precision Manufacturing Technology of CAEP (ZD18005).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. P. Gao, G. Pedrini, and W. Osten, “Phase retrieval with resolution enhancement by using structured illumination,” Opt. Lett. 38(24), 5204–5207 (2013). [CrossRef]  

3. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]  

4. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018). [CrossRef]  

5. A. Eguchi, J. Brewer, and T. D. Milster, “Optimization of random phase diversity for adaptive optics using an LCoS spatial light modulator,” Appl. Opt. 58(25), 6834–6840 (2019). [CrossRef]  

6. D.B. Moore and J.R. Fienup, “Subaperture translation estimation accuracy in transverse-translation diversity phase retrieval,” Appl. Opt. 55(10), 2526–2536 (2016). [CrossRef]  

7. W. Farriss, T. Malhotra, A. N. Vamivakas, and J. R. Fienup, “Phase retrieval in generalized optical interferometry systems,” Opt. Express 26(3), 2191–2202 (2018). [CrossRef]  

8. A. M. Michalko and J. R. Fienup, “Transverse translation diverse phase retrieval using soft-edged illumination,” Opt. Lett. 43(6), 1331–1334 (2018). [CrossRef]  

9. A. M. Michalko and J. R. Fienup, “Verification of transverse translation diverse phase retrieval for concave optical metrology,” Opt. Lett. 43(19), 4827–4830 (2018). [CrossRef]  

10. Y. Geng, J. Tan, C. Guo, C. Shen, W. Ding, S. Liu, and Z. Liu, “Computational coherent imaging by rotating a cylindrical lens,” Opt. Express 26(17), 22110–22122 (2018). [CrossRef]  

11. X. Dong, X. Pan, C. Liu, and J. Zhu, “Single shot multi-wavelength phase retrieval with coherent modulation imaging,” Opt. Lett. 43(8), 1762–1765 (2018). [CrossRef]  

12. J. Zhong, L. Tian, P. Varma, and L. Waller, “Nonlinear optimization algorithm for partially coherent phase retrieval and source recovery,” IEEE Trans. Comput. Imaging 2(3), 310–322 (2016). [CrossRef]  

13. J. Li, A. Matlock, Y. Li, Q. Chen, L. Tian, and C. Zuo, “Resolution-enhanced intensity diffraction tomography in high numerical aperture label-free microscopy,” Photon. Res. 8(12), 1818–1826 (2020). [CrossRef]  

14. X. He, H. Tao, Z. Jiang, Y. Kong, and C. Liu, “Single-shot optical multiple-image encryption by jointly using wavelength multiplexing and position multiplexing,” Appl. Opt. 59(1), 9–15 (2020). [CrossRef]  

15. A. Pan, K. Wen, and B. Yao, “Linear space-variant optical cryptosystem via Fourier ptychography,” Opt. Lett. 44(8), 2032–2035 (2019). [CrossRef]  

16. G. Situ and J. Zhang, “Double random-phase encoding in the Fresnel domain,” Opt. Lett. 29(14), 1584–1586 (2004). [CrossRef]  

17. Y. Shi, G. Situ, and J. Zhang, “Multiple-image hiding in the Fresnel domain,” Opt. Lett. 32(13), 1914–1916 (2007). [CrossRef]  

18. J. R. Fienup, “Phase Retrieval Algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

19. G. R. Brady, “Application of phase retrieval to the measurement of optical surfaces and wavefronts,” Ph.D. thesis (University of Rochester, 2008).

20. G. R. Brady and J. R. Fienup, “Nonlinear optimization algorithm for retrieving the full complex pupil function,” Opt. Express 14(2), 474–486 (2006). [CrossRef]  

21. C. Shen, J. Tan, C. Wei, and Z. Liu, “Coherent diffraction imaging by moving a lens,” Opt. Express 24(15), 16520–16529 (2016). [CrossRef]  

22. C. Shen, X. Bao, J. Tan, S. Liu, and Z. Liu, “Two noise-robust axial scanning multi-image phase retrieval algorithms based on Pauta criterion and smoothness constraint,” Opt. Express 25(14), 16235–16249 (2017). [CrossRef]  

23. G. Ju, X. Qi, H. Ma, and C. Yan, “Feature-based phase retrieval wavefront sensing approach using machine learning,” Opt. Express 26(24), 31767–31783 (2018). [CrossRef]  

24. Q. Xin, G. Ju, C. Zhang, and S. Xu, “Object-independent image-based wavefront sensing approach using phase diversity images and deep learning,” Opt. Express 27(18), 26102–26119 (2019). [CrossRef]  

25. L. Zhao, H. Yan, J. Bai, J. Hou, Y. He, X. Zhou, and K. Wang, “Simultaneous reconstruction of phase and amplitude for wavefront measurements based on nonlinear optimization algorithms,” Opt. Express 28(13), 19726 (2020). [CrossRef]  

26. P. G. Zhang, C. L. Yang, Z. H. Xu, Z. L. Cao, Q. Q. Mu, and L. Xuan, “Hybrid particle swarm global optimization algorithm for phase diversity phase retrieval,” Opt. Express 24(22), 25704–25717 (2016). [CrossRef]  

27. H. Mao and D. Zhao, “Alternative phase-diverse phase retrieval algorithm based on Levenberg-Marquardt nonlinear optimization,” Opt. Express 17(6), 4540–4552 (2009). [CrossRef]  

28. C. Guo, Q. Li, C. Wei, J. Tan, S. Liu, and Z. Liu, “Axial multi-image phase retrieval under tilt illumination,” Sci. Rep. 7(1), 7562 (2017). [CrossRef]  

29. C. Guo, Q. Li, J. Tan, S. Liu, and Z. Liu, “A method of solving tilt illumination for multiple distance phase retrieval,” Opt. Lasers Eng. 106, 17–23 (2018). [CrossRef]  

30. G. R. Brady, M. Guizar-Sicairos, and J. R. Fienup, “Optical wavefront measurement using phase retrieval with transverse translation diversity,” Opt. Express 17(2), 624–639 (2009). [CrossRef]  

31. C. Guo, Y. Zhao, J. Tan, S. Liu, and Z. Liu, “Adaptive lens-free computational coherent imaging using autofocusing quantification with speckle illumination,” Opt. Express 26(11), 14407–14420 (2018). [CrossRef]  

32. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737–1746 (1993). [CrossRef]  

33. H. Yan, Q. Shi, B. Ji, and S. Wen, “Error analysis and correction method of axial multi-intensity phase retrieval imaging,” Proc. SPIE 11617, International Conference on Optoelectronic and Microelectronic Technology and Application (2020).

34. C. Rydberg and J. Bengtsson, “Efficient numerical representation of the optical field for the propagation of partially coherent radiation with a specified spatial and temporal coherence function,” J. Opt. Soc. Am. A 23(7), 1616 (2006). [CrossRef]  

35. M. Li, L. Bian, and J. Zhang, “Coded coherent diffraction imaging with reduced binary modulations and low-dynamic-range detection,” Opt. Lett. 45(16), 4373–4376 (2020). [CrossRef]  

36. E. Malm, E. Fohtung, and A. Mikkelsen, “Multi-wavelength phase retrieval for coherent diffractive imaging,” Opt. Lett. 46(1), 13–16 (2021). [CrossRef]  

37. J. Hu, X. Xie, and Y. Shen, “Quantitative phase imaging based on wavefront correction of a digital micromirror device,” Opt. Lett. 45(18), 5036–5039 (2020). [CrossRef]  

38. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

39. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

40. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]  


