Abstract
Parameter mismatch between the real optical system and the phase retrieval model undermines wavefront reconstruction accuracy. Correction of the three-dimensional intensity position in phase retrieval has traditionally been separated into lateral position correction and axial position correction. In this paper, we propose a three-dimensional intensity position correction method for phase diverse phase retrieval based on a cross-iteration nonlinear optimization strategy. The intensity position is first optimized with a coarse optimization method; it is then cross-optimized with an exact optimization method during the iterative wavefront reconstruction process. The analytic gradients with respect to the three-dimensional intensity position are derived. The cross-iteration optimization strategy avoids interference between the incomplete position correction and the wavefront reconstruction during the iterative process. The accuracy and robustness of the proposed method are verified both numerically and experimentally. The proposed method achieves robust and accurate intensity position correction and wavefront reconstruction, making it suitable for wavefront measurement and phase imaging.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Coherent diffraction imaging (CDI), which retrieves phase information directly from diffracted intensities using a simple experimental arrangement, is an attractive wavefront sensing and imaging method. The CDI technique has been applied to super-resolution [1–4], wavefront sensing [5–9], phase imaging [10–13], optical encryption [14–17], etc. The iterative phase retrieval (PR) method [18], as an implementation of CDI, has good accuracy and robustness for complex field reconstruction. The PR method estimates the desired wavefront via iterative propagation calculations between the desired plane and the collected intensity planes, based on a pair of Fourier transform operations. According to the number of collected intensities, PR methods can be classified into single-image and multiple-image methods [19]. Compared with the single-image PR method, the multiple-image PR method, called phase diverse phase retrieval (PDPR), accelerates the convergence of the iterative algorithm and achieves higher robustness by using several intensity images [20]. Owing to its high accuracy and good robustness, the PDPR method is an effective quantitative optical technique, suitable for image reconstruction [21,22] and wavefront sensing [23–27].
Accurate matching between the phase retrieval model and the real optical experimental system is an imperative procedure for the iterative phase retrieval method. The three-dimensional intensity position is the main systematic error, and it degrades the quality of the wavefront reconstruction. In the previous literature, the three-dimensional position error, classified into lateral position error and axial position error, has been corrected separately. On the one hand, for lateral position error correction, cross-correlation calibration methods have been proposed to estimate the oblique angle of the incident light and correct the lateral position errors induced by tilted illumination [28,29]. In addition, for wavefront sensing, small lateral position errors can be corrected by estimating Zernike coefficients, which is suitable for single-image or sub-aperture stitching phase retrieval algorithms [30]. On the other hand, the axial uncertainty of the intensity position includes the absolute distance error, which offsets all intensities, and the relative distance error among the measured planes. A small absolute distance error results in a small focus term and is easily removed, whereas the relative distance errors damage the accuracy of the wavefront reconstruction [19]. Searching for the optimal axial position of the measured plane is an autofocusing procedure; images produced by a specially fabricated phase plate [31] or a double-pinhole interference pattern have been applied for autofocusing [30].
However, the three-dimensional intensity position error should be corrected simultaneously, since correcting only the axial error or only the lateral error does not achieve accurate intensity position matching between the numerical model and the experimental system. Besides, the position correction is traditionally carried out according to the estimated field. Because the initial estimate of the desired plane is very different from the ground truth, the intensity position estimated from this initial value is not very precise. An accurate three-dimensional intensity position correction phase retrieval method is therefore essential to achieve accurate wavefront reconstruction.
In this paper, we develop a novel method to achieve stable three-dimensional intensity position correction and wavefront reconstruction based on the cross-iteration nonlinear optimization strategy. By optimizing common error metrics, correcting intensity position error and retrieving complex field are cross-iteratively implemented, which would alleviate accuracy loss induced by the interference between the incomplete position correction and wavefront reconstruction. The analytic gradients for the intensity position in different directions are also derived. In addition, the characteristic of reconstruction error produced by intensity position error is analyzed. Furthermore, the proposed method can be applied to correct intensity position error for the single-image phase retrieval or sub-aperture stitching phase retrieval models.
The remainder of this paper is organized as follows. Section 2 introduces the PDPR model and the intensity position errors problem. Section 3 specifically describes the cross-iteration nonlinear optimization method for position correction and wavefront reconstruction. Section 4 verifies the accuracy and stability of the proposed method through numerical simulations. Section 5 presents some verified experiments. Section 6 discusses and concludes the paper.
2. Intensity position errors problem for the PDPR model
Our goal is to recover the complex field of the sample and optimize the three-dimensional intensity position for a stack of intensity images. We first introduce the phase diverse phase retrieval model, which mathematically describes the forward physical process and establishes the error metric. Next, we analyze the three-dimensional intensity position error problem. Because previous work divides this problem into lateral position correction and axial position correction, the intensity position errors are also analyzed from these two aspects.
2.1 Optical model and metric error for the phase diverse phase retrieval
The PDPR model is shown in Fig. 1(a). The wavefront propagating from the desired plane to the measured plane can be calculated with
$${{\rm G}_j}({u,v} )= {\cal F}\{{Z[{{{\rm g}_s}({x,y} )\exp [{i\phi ({x,y,{z_j}} )} ]} ]} \},$$where ${{\rm G}_j}({u,v} )$ is the field on the jth measured plane, $({u,v} )$ are the coordinates of the measured plane, ${{\rm g}_s}({x,y} )$ is the desired field, $({x,y} )$ are the coordinates of the desired plane, ${i^2} ={-} 1$, ${\cal F}[{\cdot} ]$ denotes the Fourier transform, Z represents zero-padding to match the real optical system, and $\phi ({x,y,{z_j}} )$ is the phase diversity for the defocus length ${z_j}$. The intensity on the jth measured plane is
$${I_j}({u,v} )= {{\rm G}_j}({u,v} ){\rm G}_j^\ast ({u,v} ),$$
where the superscript * denotes the complex conjugate operator. The error metric of this desired-plane reconstruction model is defined as
$${E_j} = \sum\limits_{u,v} {{{\left[ {\sqrt {I_j^c({u,v} )} - |{{{\rm G}_j}({u,v} )} |} \right]}^2}} ,$$
where $I_j^c({u,v} )$ is the collected intensity on the jth measured plane.
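To make the forward model concrete, the following Python sketch implements the propagation and intensity calculation above for one measured plane. The quadratic form of the defocus diversity, the grid sizes, and the amplitude-difference form of the error metric are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

n, pad = 64, 32                              # desired-plane grid and zero-padding width
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] / n
rng = np.random.default_rng(0)
g = ((xx**2 + yy**2) < 0.1) * np.exp(1j * rng.uniform(-1, 1, (n, n)))
phi = np.pi * 5.0 * (xx**2 + yy**2)          # assumed quadratic defocus diversity

def forward(g, phi, pad):
    """G_j = F{Z[g_s exp(i phi)]}: apply the phase diversity, zero-pad to the
    detector grid (operator Z), and Fourier transform."""
    padded = np.pad(g * np.exp(1j * phi), pad)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(padded)))

G = forward(g, phi, pad)
I = np.abs(G)**2                             # detected intensity I_j = G_j G_j*

def error_metric(G, I_meas):
    """Assumed amplitude-difference error metric E_j."""
    return np.sum((np.abs(G) - np.sqrt(I_meas))**2)
```

The unnormalized FFT satisfies Parseval's relation up to the total grid size, which gives a quick self-check that the zero-padded transform conserves energy.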
2.2 Feature of the lateral position error problem
The influence of the lateral position error of the intensity is analyzed first. During a real experiment, the intensity images are usually not acquired exactly on the optical axis, as shown in Fig. 2(a). Moving the CCD off axis shifts the diffraction pattern in amplitude and adds an extra distance-based matrix modulation to the phase, as shown in Fig. 2(b). In fact, the CCD only records the intensity data, and the corresponding phase is lost. Thus, the only impact of moving the CCD off axis is a lateral shift of the diffraction pattern [29]. The position coordinates of the measured intensity pattern can be expressed as
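The shift-only property stated above can be checked numerically with the Fourier shift theorem; the aperture field and the integer pixel offsets below are illustrative assumptions.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
rng = np.random.default_rng(0)
g = ((xx**2 + yy**2) < (n // 4)**2) * np.exp(1j * rng.uniform(-1, 1, (n, n)))

I = np.abs(np.fft.fft2(g))**2            # on-axis detected intensity

# A lateral detector offset of (du, dv) pixels is equivalent to multiplying
# the desired field by a linear phase ramp (Fourier shift theorem).
du, dv = 3, -5
fx = np.fft.fftfreq(n)                   # cycles per pixel
ramp = np.exp(2j * np.pi * (du * fx[:, None] + dv * fx[None, :]))
I_off = np.abs(np.fft.fft2(g * ramp))**2
```

Only the translation of the intensity survives on the detector; the accompanying linear phase is never recorded, which is why lateral errors can be corrected by locating the pattern rather than by re-phasing it.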
2.3 Feature of the axial position error problem
Another concern for the intensity position in PDPR is determining the axial position, as shown in Fig. 5. The axial position error can be classified into the absolute position error of all measurement planes, shown in Fig. 5(b), and the relative position error among the measurement planes, shown in Fig. 5(c). For every measured plane, the defocus length ${z_j}$ can be expressed as
$${z_j} = {z_0} + \Delta {z_0} + \delta {z_j},$$where ${z_0}$ is the corrected defocus length, $\Delta {z_0}$ is the absolute defocus error for all measured planes, and $\delta {z_j}$ is the relative error among the different planes. The influence of the absolute defocus error on the PDPR is similar to that on the single-image phase retrieval model: a small absolute displacement of the planes can be compensated by a simple focus term, as shown in Fig. 5(b). The relative position error among the measurement planes, however, undermines the accuracy of the wavefront, as shown in Fig. 5(c). If a measured intensity has a relative axial error with respect to the ideal intensity, a residual ripple-like error appears, as shown in Fig. 6(c). This error is directly reflected in the reconstructed wavefront, which exhibits a ripple-like error as shown in Figs. 7(a) and 7(d), and there is a peak in the high-frequency domain of the PSD curve, as shown in Figs. 7(c) and 7(f).

3. Method
In this Section, the cross-iteration intensity position correction phase retrieval algorithm is developed. For the gradient calculation of the error metric, Fienup et al. [30,32] proposed a simple analytic gradient expression based on the Fourier transform. Here, we further extend the analytic gradient calculation to intensity position optimization. In contrast to the traditional approach of correcting the lateral and axial positions separately, a cross-iteration three-dimensional intensity position correction method for the phase diverse phase retrieval algorithm is established: firstly, the position of every intensity is coarsely optimized according to the initial estimate; then the position of every intensity is exactly optimized according to the updated estimate during the iterative process.
3.1 Intensity position and wavefront reconstruction analytic gradient calculation
In the PDPR model, following the derivation in Ref. [30], the gradient of the error metric with respect to a real-valued parameter $\alpha $ can be written as
According to Eq. (7), the pixel-by-pixel gradient values for the desired field (including phase and amplitude) by using the ${j^{\textrm{th}}}$ collected intensity can be calculated by
Then we derive the gradient with respect to the effective intensity position $({{u_j},{v_j}} )$. According to Fourier optics theory, the tilt part of the phase induces a shift of the center of the sample’s spectrum in the Fourier domain
Finally, the gradient about defocus length is calculated. Equation (1) is rewritten as
According to Eq. (7), the analytic gradient expression about the defocus length ${z_j}$ is calculated by
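Since implementing analytic gradients is error-prone, a finite-difference probe of the error metric is a useful sanity check for any implementation of the gradients in this section. The quadratic defocus diversity and the amplitude-difference metric below are assumed stand-ins for the paper's exact expressions.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] / n
support = (xx**2 + yy**2) < 0.1
rng = np.random.default_rng(1)

def forward(g, z):
    """Propagate the desired field g to the measured plane at defocus z
    (assumed quadratic defocus diversity)."""
    return np.fft.fft2(g * np.exp(1j * np.pi * z * (xx**2 + yy**2)))

def metric(g, z, I_meas):
    """Assumed amplitude-difference error metric E_j."""
    return np.sum((np.abs(forward(g, z)) - np.sqrt(I_meas))**2)

g_true = support * np.exp(1j * rng.uniform(-1, 1, (n, n)))
z_true = 5.0
I_meas = np.abs(forward(g_true, z_true))**2

# Central-difference estimate of dE/dz at a defocus overestimate: the metric
# grows away from z_true, so the numerical gradient should be positive there.
z0, h = z_true + 0.5, 1e-4
dEdz = (metric(g_true, z0 + h, I_meas) - metric(g_true, z0 - h, I_meas)) / (2 * h)
```

An analytic implementation of $\partial {E_j}/\partial {z_j}$ should match such central differences to several digits at moderate step sizes.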
3.2 Cross-iteration nonlinear position optimization phase retrieval method
The specific procedure of the gradient calculation for wavefront reconstruction and intensity position optimization is described in Section 3.1. Next, the cross-iteration nonlinear position optimization phase retrieval algorithm framework is described, as shown in Fig. 8. In the proposed framework, the position error of the intensity is corrected via two different optimization strategies. Because of the imperfect wavefront of the desired plane, the first optimization cannot find the globally optimal position and stalls near it. The first optimization step roughly locates the position of the diffractive intensity by using Algorithm 1. It is worth noting that, during this first optimization process, each iteration advances only one pixel to avoid crossing the globally optimal lateral position. Then the wavefront is reconstructed. When the number of iterations reaches a certain threshold, the position of the intensity pattern is corrected by using Algorithm 2. In this second position correction, the gradient value of each iteration is accumulated with weights, and the lateral position of the intensity pattern is then optimized according to the geometric relationship between the desired plane and the measurement surface. The axial position error is corrected at the same time. The proposed algorithm corrects the position of each intensity pattern separately and successively, independently of the other intensity patterns.
The detailed algorithm has the following procedures:
- (1) The iteration number K, the maximum position correction iteration number ${N_p}$, the iteration threshold M, the number of patterns J, and the step lengths for the amplitude ${h_{amp}}$, phase ${h_{phase}}$, lateral position ${h_c}$, and axial position are set, with $k = 0$. The reconstruction is started with a random normalized matrix ${{\rm g}_0}({x,y} )$;
- (2) For every intensity pattern, the intensity position is optimized by using Algorithm 1;
- (3) The corresponding gradients, $\partial {E_j}/\partial {a_k}$ for the amplitude and $\partial {E_j}/\partial {\theta _k}$ for the phase, are calculated based on Section 3.1, and the desired field is estimated with$$\left\{ {\begin{array}{l} {{\theta_k}(x,y) = {\theta_{k - 1}}(x,y) + {h_{phase}}\frac{{\partial {E_j}}}{{\partial {\theta_k}}},}\\ {|{{g_k}} |= |{{g_{k - 1}}} |+ {h_{amp}}\frac{{\partial {E_j}}}{{\partial {a_k}}},} \end{array}} \right.$$where the pattern index cycles through the J patterns as $j = \textrm{rem}(k,J)$, and the desired wavefront is calculated by using
- (4) $k = k + 1$; if $\textrm{rem}(k,M) = 1$, the intensity position is optimized by using Algorithm 2;
- (5) Procedures (3) and (4) are repeated iteratively until $k \ge K$ is satisfied.
- (1) The center point of the diffraction image is chosen as the initial center position of iterative optimization;
- (2) The gradients $\partial {\textrm{E}_\textrm{j}}/\partial {u_j}$, $\partial {\textrm{E}_\textrm{j}}/\partial {v_j}$ and $\partial {\textrm{E}_\textrm{j}}/\partial {z_\textrm{j}}$ are calculated by using the initial desired field;
- (3) The position of the used intensity pattern is updated via$$\left\{ {\begin{array}{c} {{u_j} = {u_j} - {h_u}\frac{{\partial {E_j}}}{{\partial {u_j}}},}\\ {{v_j} = {v_j} - {h_v}\frac{{\partial {E_j}}}{{\partial {v_j}}},} \end{array}} \right.$$with ${h_u} = {d_u}/|{\partial {E_j}/\partial {u_j}} |$ and ${h_v} = {d_v}/|{\partial {E_j}/\partial {v_j}} |$, where ${d_u} \times {d_v}$ is the pixel size of the CCD;
- (5) $n = n + 1$; if $n < {N_p}$, return to (2); otherwise, the cropped diffraction image and the defocus length are used for the iterative wavefront reconstruction calculation.
- (1) The estimated wavefront is low-pass filtered to give ${g^0}$, the coefficients are initialized as $c_{u,j}^0 = 0$ and $c_{v,j}^0 = 0$, and the step length ${h_c}$ is set. The low-pass filtering removes the mid-spatial-frequency error induced by the intensity position error;
- (2) Calculate the gradients $\partial {\textrm{E}_\textrm{j}}/\partial {u_\textrm{j}}$, $\partial {\textrm{E}_\textrm{j}}/\partial {v_\textrm{j}}$ and $\partial {\textrm{E}_\textrm{j}}/\partial {z_\textrm{j}}$;
- (6) $n = n + 1$; if $n < {N_p}$, return to (2); otherwise, update the center point position$$\left\{ {\begin{array}{c} {{u_j} = {u_j} - \frac{{\lambda ({f - {z_j}} )}}{{\pi D{d_u}}}{c_{u,j}},}\\ {{v_j} = {v_j} - \frac{{\lambda ({f - {z_j}} )}}{{\pi D{d_u}}}{c_{v,j}},} \end{array}} \right.$$where $D$ is the diameter of the desired plane;
- (7) The new diffraction image is cropped and output for the iterative calculation. The optimized defocus length ${z_j}$ is also output.
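The cross-iteration framework described above can be illustrated end to end on a toy problem. The sketch below alternates a simple modulus-replacement wavefront update with an Algorithm-2-style axial refinement by grid search over the common error metric; the error-reduction update, quadratic defocus diversity, grid sizes, and search ranges are all illustrative assumptions rather than the paper's steepest-descent implementation.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] / n
support = (xx**2 + yy**2) < 0.1
rng = np.random.default_rng(0)

def forward(g, z):
    """Desired plane -> measured plane (assumed quadratic defocus diversity)."""
    return np.fft.fft2(g * np.exp(1j * np.pi * z * (xx**2 + yy**2)))

def backward(G, z):
    """Measured plane -> desired plane (inverse of forward)."""
    return np.fft.ifft2(G) * np.exp(-1j * np.pi * z * (xx**2 + yy**2))

def metric(g, z, I):
    """Common error metric shared by both optimization steps."""
    return np.sum((np.abs(forward(g, z)) - np.sqrt(I))**2)

def wavefront_step(g, z, I):
    """Modulus-replacement (error-reduction) wavefront update."""
    G = forward(g, z)
    g = backward(np.sqrt(I) * np.exp(1j * np.angle(G)), z)
    return support * np.exp(1j * np.angle(g))   # enforce known unit amplitude

def refine_z(g, z, I, half_width=1.0, steps=21):
    """Algorithm-2-style axial refinement: grid search of the common metric."""
    cand = z + np.linspace(-half_width, half_width, steps)
    return cand[np.argmin([metric(g, c, I) for c in cand])]

# Ground truth: unit-amplitude field with random phase inside the support.
g_true = support * np.exp(1j * rng.uniform(-1, 1, (n, n)))
z_true = 5.0
I_meas = np.abs(forward(g_true, z_true))**2

# Cross-iteration loop: wavefront updates every iteration, axial position
# refinement every M iterations, starting from a biased defocus estimate.
g, z, M = support.astype(complex), z_true + 0.4, 20
for k in range(100):
    g = wavefront_step(g, z, I_meas)
    if k % M == M - 1:
        z = refine_z(g, z, I_meas)
```

In this sketch, neither the wavefront update nor the axial grid search increases the common metric, which is the property the cross-iteration strategy relies on to avoid interference between the two corrections.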
4. Simulations
In Section 3, the cross-iteration nonlinear optimization phase retrieval method for position correction and wavefront reconstruction was proposed. Here, synthetic intensities with three-dimensional position errors are applied to verify the effectiveness of the proposed algorithm. Wavefront sensing and image reconstruction using intensities with position errors are simulated, with the lateral and axial position errors jointly optimized in both cases. Then, the robustness of the proposed algorithm is tested with Monte Carlo simulation. Considering that the tilt offset and the absolute defocus error change the reported intensity position values without affecting the accuracy of the wavefront reconstruction, the three-dimensional position values output by the proposed algorithm cannot directly indicate its effect. Instead, the root mean square error (RMSE) of the reconstructed wavefront is applied to quantitatively evaluate the quality of the position correction. When the accuracy of the wavefront reconstructed from intensities with position errors agrees well with that of the reconstruction without position errors, the proposed method is proved effective.
For all numerical experiments, the working wavelength is chosen as 632.8 nm and the focal length is 1079.41 mm. The physical side length of the square test images is fixed at 70 mm. The true defocus distances are taken as 5 mm, 7.5 mm, and 10 mm sequentially, and the pixel size of the intensity grid is $4.4 \times 4.4\;{\mu}\textrm{m}$. In this simulation, the threshold M that determines when the position correction algorithm runs is set to 200. To test the robustness of the proposed algorithm, the signal-to-noise ratio is 30 dB for every intensity pattern.
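The 30 dB measurement-noise condition can be reproduced with a small helper; the white-Gaussian noise model and the function name are assumptions for illustration.

```python
import numpy as np

def add_noise(I, snr_db, rng):
    """Add white Gaussian noise to an intensity pattern at a given SNR in dB.
    (In practice, negative values would additionally be clipped to zero.)"""
    noise_power = np.mean(I**2) / 10**(snr_db / 10)
    return I + rng.normal(0.0, np.sqrt(noise_power), I.shape)

rng = np.random.default_rng(0)
I = rng.uniform(0.0, 1.0, (256, 256))        # stand-in intensity pattern
I_noisy = add_noise(I, 30.0, rng)

# Empirical SNR of the noisy pattern, in dB:
snr = 10 * np.log10(np.mean(I**2) / np.mean((I_noisy - I)**2))
```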
4.1 Position errors correction for wavefront sensing
Firstly, the ability of the proposed algorithm to correct lateral and axial position errors simultaneously is verified. A random axial error $\delta z \in [{ - 1,1} ]\; \textrm{mm}$ and lateral errors $\Delta u,\Delta v \in [{ - 10,10} ]\; {\mu }\textrm{m}$ are added to every measurement plane. For every intensity, the position is optimized by using Algorithm 1 and then cross-iteratively optimized by using Algorithm 2. The retrieved results are shown in Fig. 9. After correcting the position errors (in both the lateral and axial directions), the retrieved results have accuracy comparable to the reconstructed results without position error. This proves that the proposed method corrects position errors accurately for wavefront sensing.
4.2 Position errors correction for phase imaging
The PDPR algorithm is popularly applied in wavefront sensing, but it has rarely been applied to reconstruct a complex image (including phase and amplitude). Here, the feasibility of the proposed algorithm for image reconstruction and intensity position error correction is verified. A random axial error $\delta z \in [{ - 1,1} ]\; \textrm{mm}$ and lateral errors $\Delta u,\Delta v \in [{ - 10,10} ]\; {\mu }\textrm{m}$ are added to every measurement plane. The reconstructed amplitude and phase match the original amplitude and phase well, as shown in Fig. 10, indicating that the proposed nonlinear optimization phase retrieval method is effective in reconstructing both the amplitude and phase of an object. When the intensity position error is not corrected, reconstruction of the phase image fails. After running the position correction algorithm, the reconstructed wavefront agrees with the truth value. This proves that the cross-iteration position correction is accurate for square image reconstruction.
Monte Carlo simulations are further performed to validate the effectiveness of the proposed method. To prove that the proposed algorithm can correct large position errors, a random axial error $\delta z \in [{ - 1,1} ]\; \textrm{mm}$ and large lateral errors $\Delta u,\Delta v \in [{ - 100,100} ]\; {\mu }\textrm{m}$ are added to every measurement plane. The residual RMSE is shown in Fig. 11. The retrieved results show that the proposed algorithm maintains comparable correction accuracy for large position errors.
5. Experiments
The experiments were carried out to verify the feasibility and accuracy of the proposed method. A collimated beam with $\lambda = 632.8$ nm illuminates the plate, and a circular aperture stop of diameter 22.9 mm is placed in front of the sample. A lens with a focal length of 1079.41 mm is used to focus the beam. The camera, a beam profiler (BGP-USB-SP620U) with 12-bit depth and $4.4 \times 4.4\;{\mu}\textrm{m}$ pixel size, is applied to collect the diffraction patterns. The motorized translation stage (M-LFS100PP) has a moving precision of up to 0.5 µm. Although the axial position is thus sufficiently accurate, it is still optimized via the proposed algorithm. In this experiment, the selected defocus distances are 5 mm, 10 mm, and 20 mm.
Firstly, a plate with manufacturing error is inserted into the PDPR system to test the proposed method experimentally. The collected intensities, shown in Fig. 12, have a grid size of $1200 \times 1600$. The patterns are obviously off-center within the whole image. Here, the proposed algorithm is applied to extract the effective intensity grid and retrieve the wavefront. The retrieved results are shown in Fig. 13. As a comparison, the centroid correction phase retrieval algorithm [33] is applied to retrieve the wavefront. For all retrieved phases and the ZYGO data, the piston, tip-tilt, and power (PTP) are first removed from the reconstructed wavefront by using a least-squares fit. The measured result of the proposed algorithm agrees with the interferometric data to 0.1483 rad RMSE. The accuracy of the phase retrieved by the centroid correction phase retrieval algorithm is 0.1702 rad RMSE. The amplitude retrieved by the proposed algorithm is smoother than the result of the centroid correction algorithm, as shown in Figs. 13(g) and 13(h). This proves that the proposed algorithm is effective for correcting the position errors of the intensities.
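The piston, tip-tilt, and power (PTP) removal used above amounts to a linear least-squares fit and subtraction; the monomial basis $\{1, x, y, x^2 + y^2\}$ and the grid normalization below are assumptions about the exact fit.

```python
import numpy as np

def remove_ptp(w, mask):
    """Remove piston, tip-tilt, and power from wavefront w by least squares
    over the valid pupil mask (basis: 1, x, y, x^2 + y^2)."""
    n = w.shape[0]
    yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] / (n / 2)
    A = np.stack([np.ones(n * n), xx.ravel(), yy.ravel(),
                  (xx**2 + yy**2).ravel()], axis=1)
    m = mask.ravel()
    coef, *_ = np.linalg.lstsq(A[m], w.ravel()[m], rcond=None)
    return (w.ravel() - A @ coef).reshape(n, n) * mask

n = 128
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] / (n / 2)
mask = (xx**2 + yy**2) < 1.0
# A wavefront made only of piston, tilt, and power is removed to numerical zero:
w = 0.3 + 0.1 * xx - 0.2 * yy + 0.5 * (xx**2 + yy**2)
rmse = np.sqrt(np.mean(remove_ptp(w, mask)[mask]**2))
```

RMSE figures such as the 0.1483 rad value above are computed on the residual that remains after such a removal.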
Second, the USAF resolution target is imaged to quantify the resolution improvement of the proposed algorithm. The focal length of the lens is 335.28 mm and the measured diameter is 22.9 mm. Here we introduce the two-step diffraction theory [34] to improve the resolution of the desired plane. The effective sampling of the desired plane is $512 \times 512$. The collected intensities are shown in Fig. 14. It is obvious that the effective intensity deviates from the center position. Figure 15 presents the results retrieved by the different algorithms. Figures 15(a)–15(d) show the full USAF target, and Figs. 15(e)–15(h) show the corresponding magnified areas of interest. Compared with the results of the centroid correction phase retrieval algorithm, shown in Figs. 15(e) and 15(g), the proposed algorithm greatly reduces the fringe-like error in the retrieved results. The resolution of the reconstruction obtained with the proposed algorithm is far higher than that of the comparison algorithm.
6. Discussion and conclusion
In this paper, we propose a three-dimensional intensity position correction method for multi-image phase retrieval using a cross-iteration nonlinear optimization strategy. The reconstruction error induced by the three-dimensional position error is analyzed, and the analytic gradients with respect to the three-dimensional intensity position are derived. The two-step correction strategy achieves exact, large-scale three-dimensional position correction. The cross-iteration strategy avoids interference between the position correction and the wavefront reconstruction by optimizing a common error metric. Simulations and experiments verify the performance of the proposed algorithm. The proposed method is a practical and effective tool for correcting the intensity position in the PDPR model.
Although the proposed method has been discussed extensively both numerically and experimentally, some interesting issues remain for further study. In this paper, the nonlinear optimization algorithm is the steepest descent method, the simplest gradient optimization method. In the future, we will introduce advanced gradient optimization algorithms to improve the convergence speed [11]. We emphasize that in our algorithm the position error correction is independent for each intensity pattern, so the proposed algorithm is also suitable for single-plane phase retrieval models. It is applicable to popular fields including quantitative phase imaging [35–37], image encryption [14,15], ptychography [38–40], etc.
Funding
Science Challenge Project (TZ2016006-0502-02); National Natural Science Foundation of China (52075507, 61905241, 62175211); Laboratory of Precision Manufacturing Technology of CAEP (ZD18005).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]
2. P. Gao, G. Pedrini, and W. Osten, “Phase retrieval with resolution enhancement by using structured illumination,” Opt. Lett. 38(24), 5204–5207 (2013). [CrossRef]
3. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]
4. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018). [CrossRef]
5. A. Eguchi, J. Brewer, and T. D. Milster, “Optimization of random phase diversity for adaptive optics using an LCoS spatial light modulator,” Appl. Opt. 58(25), 6834–6840 (2019). [CrossRef]
6. D. B. Moore and J. R. Fienup, “Subaperture translation estimation accuracy in transverse-translation diversity phase retrieval,” Appl. Opt. 55(10), 2526–2536 (2016). [CrossRef]
7. W. Farriss, T. Malhotra, A. N. Vamivakas, and J. R. Fienup, “Phase retrieval in generalized optical interferometry systems,” Opt. Express 26(3), 2191–2202 (2018). [CrossRef]
8. A. M. Michalko and J. R. Fienup, “Transverse translation diverse phase retrieval using soft-edged illumination,” Opt. Lett. 43(6), 1331–1334 (2018). [CrossRef]
9. A. M. Michalko and J. R. Fienup, “Verification of transverse translation diverse phase retrieval for concave optical metrology,” Opt. Lett. 43(19), 4827–4830 (2018). [CrossRef]
10. Y. Geng, J. Tan, C. Guo, C. Shen, W. Ding, S. Liu, and Z. Liu, “Computational coherent imaging by rotating a cylindrical lens,” Opt. Express 26(17), 22110–22122 (2018). [CrossRef]
11. X. Dong, X. Pan, C. Liu, and J. Zhu, “Single shot multi-wavelength phase retrieval with coherent modulation imaging,” Opt. Lett. 43(8), 1762–1765 (2018). [CrossRef]
12. J. Zhong, L. Tian, P. Varma, and L. Waller, “Nonlinear optimization algorithm for partially coherent phase retrieval and source recovery,” IEEE Trans. Comput. Imaging 2(3), 310–322 (2016). [CrossRef]
13. J. Li, A. Matlock, Y. Li, Q. Chen, L. Tian, and C. Zuo, “Resolution-enhanced intensity diffraction tomography in high numerical aperture label-free microscopy,” Photon. Res. 8(12), 1818–1826 (2020). [CrossRef]
14. X. He, H. Tao, Z. Jiang, Y. Kong, and C. Liu, “Single-shot optical multiple-image encryption by jointly using wavelength multiplexing and position multiplexing,” Appl. Opt. 59(1), 9–15 (2020). [CrossRef]
15. A. Pan, K. Wen, and B. Yao, “Linear space-variant optical cryptosystem via Fourier ptychography,” Opt. Lett. 44(8), 2032–2035 (2019). [CrossRef]
16. G. Situ and J. Zhang, “Double random-phase encoding in the Fresnel domain,” Opt. Lett. 29(14), 1584–1586 (2004). [CrossRef]
17. Y. Shi, G. Situ, and J. Zhang, “Multiple-image hiding in the Fresnel domain,” Opt. Lett. 32(13), 1914–1916 (2007). [CrossRef]
18. J. R. Fienup, “Phase Retrieval Algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]
19. G. R. Brady, “Application of phase retrieval to the measurement of optical surfaces and wavefronts,” Ph.D. thesis (University of Rochester, 2008).
20. G. R. Brady and J. R. Fienup, “Nonlinear optimization algorithm for retrieving the full complex pupil function,” Opt. Express 14(2), 474–486 (2006). [CrossRef]
21. C. Shen, J. Tan, C. Wei, and Z. Liu, “Coherent diffraction imaging by moving a lens,” Opt. Express 24(15), 16520–16529 (2016). [CrossRef]
22. C. Shen, X. Bao, J. Tan, S. Liu, and Z. Liu, “Two noise-robust axial scanning multi-image phase retrieval algorithms based on Pauta criterion and smoothness constraint,” Opt. Express 25(14), 16235–16249 (2017). [CrossRef]
23. G. Ju, X. Qi, H. Ma, and C. Yan, “Feature-based phase retrieval wavefront sensing approach using machine learning,” Opt. Express 26(24), 31767–31783 (2018). [CrossRef]
24. Q. Xin, G. Ju, C. Zhang, and S. Xu, “Object-independent image-based wavefront sensing approach using phase diversity images and deep learning,” Opt. Express 27(18), 26102–26119 (2019). [CrossRef]
25. L. Zhao, H. Yan, J. Bai, J. Hou, Y. He, X. Zhou, and K. Wang, “Simultaneous reconstruction of phase and amplitude for wavefront measurements based on nonlinear optimization algorithms,” Opt. Express 28(13), 19726 (2020). [CrossRef]
26. P. G. Zhang, C. L. Yang, Z. H. Xu, Z. L. Cao, Q. Q. Mu, and L. Xuan, “Hybrid particle swarm global optimization algorithm for phase diversity phase retrieval,” Opt. Express 24(22), 25704–25717 (2016). [CrossRef]
27. H. Mao and D. Zhao, “Alternative phase-diverse phase retrieval algorithm based on Levenberg-Marquardt nonlinear optimization,” Opt. Express 17(6), 4540–4552 (2009). [CrossRef]
28. C. Guo, Q. Li, C. Wei, J. Tan, S. Liu, and Z. Liu, “Axial multi-image phase retrieval under tilt illumination,” Sci. Rep. 7(1), 7562 (2017). [CrossRef]
29. C. Guo, Q. Li, J. Tan, S. Liu, and Z. Liu, “A method of solving tilt illumination for multiple distance phase retrieval,” Opt. Lasers Eng. 106, 17–23 (2018). [CrossRef]
30. G. R. Brady, M. Guizar-Sicairos, and J. R. Fienup, “Optical wavefront measurement using phase retrieval with transverse translation diversity,” Opt. Express 17(2), 624–639 (2009). [CrossRef]
31. C. Guo, Y. Zhao, J. Tan, S. Liu, and Z. Liu, “Adaptive lens-free computational coherent imaging using autofocusing quantification with speckle illumination,” Opt. Express 26(11), 14407–14420 (2018). [CrossRef]
32. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737–1746 (1993). [CrossRef]
33. H. Yan, Q. Shi, B. Ji, and S. Wen, “Error analysis and correction method of axial multi-intensity phase retrieval imaging,” in International Conference on Optoelectronic and Microelectronic Technology and Application, Proc. SPIE 11617 (2020).
34. C. Rydberg and J. Bengtsson, “Efficient numerical representation of the optical field for the propagation of partially coherent radiation with a specified spatial and temporal coherence function,” J. Opt. Soc. Am. A 23(7), 1616 (2006). [CrossRef]
35. M. Li, L. Bian, and J. Zhang, “Coded coherent diffraction imaging with reduced binary modulations and low-dynamic-range detection,” Opt. Lett. 45(16), 4373–4376 (2020). [CrossRef]
36. E. Malm, E. Fohtung, and A. Mikkelsen, “Multi-wavelength phase retrieval for coherent diffractive imaging,” Opt. Lett. 46(1), 13–16 (2021). [CrossRef]
37. J. Hu, X. Xie, and Y. Shen, “Quantitative phase imaging based on wavefront correction of a digital micromirror device,” Opt. Lett. 45(18), 5036–5039 (2020). [CrossRef]
38. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]
39. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]
40. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3(1), 9–14 (2016). [CrossRef]