Single-shot Fresnel incoherent correlation holography via deep learning based phase-shifting technology

Open Access

Abstract

Fresnel incoherent correlation holography (FINCH) realizes non-scanning three-dimensional (3D) imaging under spatially incoherent illumination, but it requires phase-shifting technology to remove the disturbance of the DC term and twin term that appear in the reconstruction field, which increases the complexity of the experiment and limits the real-time performance of FINCH. Here, we propose a single-shot Fresnel incoherent correlation holography via deep learning based phase-shifting (FINCH/DLPS) method to realize rapid and high-precision image reconstruction from only one collected interferogram. A phase-shifting network is designed to implement the phase-shifting operation of FINCH. The trained network can conveniently predict two interferograms with phase shifts of 2/3 π and 4/3 π from one input interferogram. Using the conventional three-step phase-shifting algorithm, we can then remove the DC term and twin term of the FINCH reconstruction and obtain a high-precision image through the back propagation algorithm. The Mixed National Institute of Standards and Technology (MNIST) dataset is used to verify the feasibility of the proposed method experimentally. In the test with the MNIST dataset, the reconstruction results demonstrate that, in addition to high-precision reconstruction, the proposed FINCH/DLPS method can also effectively retain the 3D information by calibrating the back propagation distance while reducing the complexity of the experiment, further indicating the feasibility and superiority of the proposed FINCH/DLPS method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fresnel incoherent correlation holography (FINCH) is an effective method for recording incoherent holograms based on the correlation between the sample information and a Fresnel zone plate. It records the three-dimensional (3D) spatial information of the sample without any moving or scanning device; the system structure is simple and the reconstruction method is flexible [1–3]. It uses a spatial light modulator (SLM) to load lens phases with different focal lengths, ingeniously constructing an autocorrelation optical path, and uses the phase-shifting algorithm to realize holographic reconstruction. FINCH has been widely studied and applied to fluorescence microscopic imaging and white light microscopic imaging [4,5]. In addition, by loading a lens phase with a specific focal length on the SLM, FINCH technology can break through the diffraction limit to achieve a twofold increase in resolution [6]. This feature has also made FINCH a research hotspot of super-resolution imaging [7].

FINCH is a coaxial autocorrelation holographic imaging method, which can reconstruct the sample information with incoherent illumination (including a fluorescent lamp) or with the light reflected by any sample. The light emitted from any point of the sample is split into two beams that are differentially phase-modulated and recombined in a common plane to produce interference fringes. The SLM is used to implement the phase-shifting operation by adding different phase differences between the modulated phases of the two optical paths. Then, the phase-shifting algorithm is used to remove the disturbance of the DC term and twin term that appear in the reconstruction field, and the back propagation algorithm is used for image reconstruction. Obviously, the acquisition of multiple phase-shifting interferograms limits the real-time performance of FINCH. Subsequently, several single-shot FINCH imaging technologies have been proposed to overcome this limitation [7–11], for example, off-axis FINCH imaging [10], and the technology of acquiring multiple phase-shifting interferograms with one camera by adding a grating phase through the SLM [11]. It is even possible to use a birefringent crystal lens to replace the SLM and combine it with a polarization camera to realize synchronous phase-shifting holographic reconstruction [7], which provides a simpler and more light-efficient method for single-shot microscopy. Although these methods can greatly simplify the FINCH system and improve real-time imaging, they sacrifice the space-bandwidth product of the camera.

In recent years, as an end-to-end image processing method, deep learning (DL) has been widely used for solving various inverse problems in the optical imaging field [12,13], such as lensless imaging [14], medical imaging [15], computational ghost imaging [16] and holographic reconstruction [17–21]. A trained convolutional neural network can remove the disturbance of the DC term and the twin term of the reconstruction field, and then realize single-shot holographic reconstruction with the back propagation algorithm.

This end-to-end DL technology has also been used for FINCH reconstruction. In the above-mentioned method, interferograms with different back propagation distances are used as the input, and the reconstruction results calculated by the phase-shifting algorithm on the focal plane are used as the label dataset. Although the input interferogram is not at the in-focus position, the network can still predict clear image information. This not only realizes single-shot FINCH imaging but also yields a large depth of field [22]. However, the reported DL-based large depth-of-field FINCH cannot retain the 3D information of the image well. Here, to realize single-shot FINCH without losing the space-bandwidth product, we propose a single-shot 3D FINCH imaging method based on deep learning phase-shifting technology (FINCH/DLPS), in which the convolutional neural network is not used for the end-to-end holographic reconstruction task; instead, the network is trained to predict the other two phase-shifting interferograms, and the image reconstruction is then implemented by combining the three-step phase-shifting and back propagation algorithms.

2. Principle and network analysis

As is well known, in conventional FINCH imaging the phase shift is implemented using a SLM, whereas the proposed FINCH/DLPS method realizes the phase shift using a network; image reconstruction is then implemented by combining the three-step phase-shifting and back propagation algorithms. The construction and training of the network is the core of the proposed method. We first built a phase-shifting network in accordance with the procedure of phase-shifting FINCH imaging, as illustrated in Fig. 1. The collected interferograms with a phase shift of 0 are used as the input, and the interferograms with phase shifts of 2/3 π and 4/3 π are used as the outputs of the network. The feature information of the input image is extracted through the convolution channel, and this feature information is then used to reconstruct the phase-shifting interferograms in the deconvolution layers: one channel produces the interferogram with a phase shift of 2/3 π, and the other channel generates the interferogram with a phase shift of 4/3 π. The designed network mainly refers to U-Net and Y-Net [23]. Feature information is extracted from the input images through multiple convolution layers, where the convolution kernel size is 3 × 3 and the stride is 2. Each convolution layer is followed by layer normalization and a leaky ReLU activation function, $f(x) = \max (\lambda x,x)$, where $\lambda = 0.2$. The extracted features are then fed into two channels composed of deconvolution layers to obtain the target phase-shifting interferograms, where the convolution kernel is 5 × 5 with a stride of 2, and the leaky ReLU activation precedes each deconvolution layer. The size of the input image is 1024 × 1024 pixels, and the number of convolution kernels in each layer of the network is shown in Fig. 1.
By building such a network, two phase-shifting interferograms can be generated with only one network instead of training two networks, which effectively reduces the number of network parameters and improves the efficiency of network training.
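To make the one-encoder/two-decoder idea concrete, the sketch below builds a miniature Y-Net-style phase-shifting network in PyTorch. This is a hedged illustration only: the paper does not name a framework, and the channel counts and depth here are placeholders rather than the exact values in Fig. 1; a 64 × 64 input stands in for the 1024 × 1024 interferograms. It does follow the stated design: 3 × 3 stride-2 convolutions each followed by a layer-norm-like step and leaky ReLU (λ = 0.2), and two branches of 5 × 5 stride-2 deconvolutions with the activation placed before each deconvolution layer.

```python
import torch
import torch.nn as nn

class PhaseShiftNet(nn.Module):
    """Y-Net-style sketch: one convolutional encoder, two deconvolutional
    decoders (one per predicted phase-shifting interferogram)."""
    def __init__(self, ch=16, depth=3):
        super().__init__()
        enc, c_in = [], 1
        for i in range(depth):
            c_out = ch * 2**i
            enc += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                    nn.GroupNorm(1, c_out),   # layer-norm-like normalization
                    nn.LeakyReLU(0.2)]        # f(x) = max(0.2x, x)
            c_in = c_out
        self.encoder = nn.Sequential(*enc)
        self.dec1 = self._decoder(c_in, depth)  # 2/3 pi branch
        self.dec2 = self._decoder(c_in, depth)  # 4/3 pi branch

    @staticmethod
    def _decoder(c_in, depth):
        dec = []
        for i in range(depth):
            c_out = 1 if i == depth - 1 else c_in // 2
            # activation precedes each deconvolution, as in the text
            dec += [nn.LeakyReLU(0.2),
                    nn.ConvTranspose2d(c_in, c_out, 5, stride=2,
                                       padding=2, output_padding=1)]
            c_in = c_out
        return nn.Sequential(*dec)

    def forward(self, x):
        feats = self.encoder(x)            # shared feature extraction
        return self.dec1(feats), self.dec2(feats)

net = PhaseShiftNet()
x = torch.randn(1, 1, 64, 64)              # paper uses 1024 x 1024 inputs
y1, y2 = net(x)
print(y1.shape, y2.shape)                   # both (1, 1, 64, 64)
```

Because the two decoders share one encoder, the two phase-shifted interferograms are predicted jointly, which is the parameter saving the paragraph above refers to.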

Fig. 1. Flow chart of single-shot Fresnel incoherent correlation holography via deep learning based phase-shifting (FINCH/DLPS) method.

The root mean square error (RMSE) is used as the loss function of network training, shown as follows:

$$Loss = \sqrt {\frac{{\sum\limits_{1024 \times 1024} {{{[{X({x,y})- {X_1}({x,y})}]}^2}} }}{{1024 \times 1024}}} + \sqrt {\frac{{\sum\limits_{1024 \times 1024} {{{[{Y({x,y})- {Y_1}({x,y})}]}^2}} }}{{1024 \times 1024}}}$$
where X and Y are the two outputs of the network, and X1 and Y1 are the corresponding label images.

Figure 2 shows the schematic of the FINCH system. An amplitude-type SLM (Holoeye LC 2002) is used to load the sample. Two polarizers are placed at the front and back ends of the SLM so that the modulated light passes through the lens at the back end, where the lens is placed near the focal position behind the second polarizer. Subsequently, a half-wave plate is used to rotate the polarization direction of the object light to an angle of 45° with the liquid crystal axis of the phase-type SLM (Holoeye PLUTO), which is used to load the phase modulation distribution of a lens with a focal length of 290 mm. In addition, a polarizer oriented at 45° to the SLM-modulated polarization direction is placed in front of the camera, so that the SLM-modulated light interferes with the SLM-unmodulated light. The camera is placed 580 mm away from the phase-type SLM. This scheme makes effective use of the SLM pixels to improve the imaging quality.
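The training loss of Eq. (1) is simply the sum of the two branch RMSEs. A minimal NumPy sketch (array names follow the equation; the shapes here are illustrative, not the paper's 1024 × 1024):

```python
import numpy as np

def finch_loss(X, X1, Y, Y1):
    """Sum of per-branch RMSEs, as in Eq. (1): one term per
    predicted phase-shifting interferogram against its label."""
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    return rmse(X, X1) + rmse(Y, Y1)

a = np.zeros((4, 4))
b = np.ones((4, 4))
print(finch_loss(a, b, a, a))   # 1.0: first branch RMSE is 1, second is 0
```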

Fig. 2. Schematic diagram of the optical path system for data collection.

3. Verification and analysis of experimental results

3.1 Experimental verification

The Mixed National Institute of Standards and Technology (MNIST) dataset [24] is selected to train and test the proposed FINCH/DLPS method. The images in 5000 sets of data are loaded onto the amplitude-type SLM, and three phase-shifting interferograms (0, 2/3 π and 4/3 π) are collected for each. 80% of the sets are used for training, 10% for testing, and 10% for validation. For analysis, the RMSE between the network output and the ground truth is calculated; the average RMSE is 0.0036 gray levels. Here, output 1 is the interferogram predicted by the network with a 2/3 π phase shift, and output 2 is the predicted interferogram with a 4/3 π phase shift. The corresponding ground truths are the interferograms with phase shifts of 2/3 π and 4/3 π collected in the experiment. Figure 3 shows one set of results from the validation data. The collected phase-shifting interferograms with phase shifts of 2/3 π and 4/3 π are shown in Figs. 3(a) and 3(d), and the corresponding interferograms output by the network are given in Figs. 3(b) and 3(e). For a clearer presentation, the data of the 500th column are plotted in Figs. 3(c) and 3(f). The results show that the network can generate the other two phase-shifting interferograms with high precision; there is little difference between the network outputs and the experimental results. To verify the feasibility of the FINCH/DLPS method, we use the three collected phase-shifting interferograms and the results obtained with the proposed network to implement image reconstruction, as shown in Fig. 4. Clearly, the reconstruction results obtained with the conventional phase-shifting method and the proposed method are essentially the same.
The three-step phase-shifting reconstruction algorithm can effectively remove the disturbance of the DC term and twin term. Quantitative analysis shows that the peak signal-to-noise ratio (PSNR) of the image reconstructed with the proposed method is 28.3 dB; the residual error arises from a small deviation between the phase-shifting interferograms generated by the network and those collected in the experiment. This further indicates the feasibility of the proposed FINCH/DLPS method.
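The reconstruction pipeline used here can be sketched in NumPy as follows. The three-step combination below is the standard one for phase shifts 0, 2/3 π and 4/3 π (it cancels the DC and twin terms); the propagator is a hedged choice, since the paper does not specify which back propagation kernel it uses, so an angular-spectrum propagator is assumed, and all parameter values are illustrative:

```python
import numpy as np

def complex_hologram(I1, I2, I3):
    """Standard three-step phase-shifting combination for phase shifts
    0, 2/3 pi, 4/3 pi; sign conventions vary between references."""
    th = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
    return (I1 * (np.exp(1j * th[2]) - np.exp(1j * th[1]))
          + I2 * (np.exp(1j * th[0]) - np.exp(1j * th[2]))
          + I3 * (np.exp(1j * th[1]) - np.exp(1j * th[0])))

def back_propagate(H, z, wavelength, dx):
    """Angular-spectrum back propagation of hologram H over distance z
    (same length units for z, wavelength and pixel pitch dx)."""
    n = H.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    return np.fft.ifft2(np.fft.fft2(H) * np.exp(1j * kz * z))

# sanity check: three identical (fringe-free) frames give a zero hologram,
# since exp(i0) + exp(i 2pi/3) + exp(i 4pi/3) = 0
I = np.ones((64, 64))
H = complex_hologram(I, I, I)
print(np.allclose(H, 0))   # True
```

In the proposed method, I1 is the collected interferogram while I2 and I3 are the two network predictions; `back_propagate` then plays the role of the back propagation algorithm in the text.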

Fig. 3. One set of results in the validation data. The phase-shifting interferograms with 2/3 π phase shift (a) collected in the experiment and (b) output by the network; the phase-shifting interferograms with 4/3 π phase shift (d) collected in the experiment and (e) output by the network; (c) and (f) the section curves of the 500th column in the interferograms of (a), (b) and (d), (e).

Fig. 4. Reconstruction results of handwritten digit ‘5’ in the MNIST dataset calculated with (a) 3-frame experimental phase-shifting interferograms; (b) the proposed method.

In addition, for some data in the validation set the performance of the proposed FINCH/DLPS method is found to be better than that obtained with the three collected interferograms, as shown in Fig. 5. The main reason is the error introduced by multiple experimental acquisitions, such as human operation, an unstable light source or an unstable light path. The proposed method requires only a single acquisition yet still effectively removes the DC term and twin term, which is difficult to achieve from a single interferogram in the conventional FINCH method, and thus obtains a high-precision reconstruction, further indicating the advantage of the proposed method in avoiding the deviation caused by multiple acquisitions.
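For reference, the PSNR figures quoted in this section follow the standard definition; the sketch below is generic, and since the paper does not state its normalization, a unit peak value is assumed:

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reconstruction and a
    reference image, both assumed normalized to the range [0, peak]."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# a uniform image offset by 0.1 from the reference: MSE = 0.01 -> 20 dB
print(psnr(np.full((8, 8), 0.9), np.ones((8, 8))))
```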

Fig. 5. Comparison of partial reconstruction results in the verification set.

3.2 3D imaging capability analysis

Similar to conventional 3D imaging, the proposed method also requires a back propagation (inversion reconstruction) step, so the 3D information of the image can be obtained by calibrating the reconstruction distance. The reconstruction results at different distances are shown in Fig. 6. Clear image information is obtained at a reconstruction distance of 320 mm, and blurred results appear when the reconstruction distance deviates from the in-focus distance. According to Ref. [8], the reconstruction distance and the actual distance of the sample in the optical path satisfy the following relationship:

$$z_r = \begin{cases} \pm ({f - {d_2}}), & \text{for } {z_0} = {f_0} \\[4pt] \pm \dfrac{({{f_1} + {d_2}})({{f_e} + {d_1} + {d_2}})}{{f_1} - {f_e} - {d_1}}, & \text{for } {z_0} \ne {f_0} \end{cases}, \qquad {f_1} = \frac{f({{f_e} + {d_2}})}{f - ({{f_e} + {d_2}})}, \quad {f_e} = \frac{{z_0}{f_0}}{{f_0} - {z_0}},$$
where zr is the reconstruction distance, f is the focal length of the lens phase loaded on the SLM (290 mm), d2 is the distance between the SLM and the camera (580 mm), d1 is the distance from the lens behind the sample to the SLM (650 mm), f0 is the focal length of that lens (150 mm), and z0 is the distance from the sample to the lens, which is also the axial coordinate of the sample. The actual coordinates of the sample can thus be resolved from the above formula. Different from the large depth-of-field DL technology, the proposed method effectively retains the 3D information of the sample, which is very beneficial for applying FINCH to 3D imaging.
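As a numerical illustration of Eq. (2), the sketch below maps a sample position z0 to a reconstruction distance using the setup values quoted above (f = 290 mm, d1 = 650 mm, d2 = 580 mm, f0 = 150 mm). It returns only one branch of the ± sign; which sign corresponds to the real image depends on the system convention, so this is a hedged sketch rather than a calibration routine:

```python
import math

def reconstruction_distance(z0, f0=150.0, f=290.0, d1=650.0, d2=580.0):
    """Eq. (2): reconstruction distance z_r (mm) for a sample at axial
    position z0 (mm); one branch of the +/- sign is returned."""
    if math.isclose(z0, f0):
        return f - d2                      # z0 = f0 special case
    fe = z0 * f0 / (f0 - z0)               # effective focal length
    f1 = f * (fe + d2) / (f - (fe + d2))
    return (f1 + d2) * (fe + d1 + d2) / (f1 - fe - d1)

print(reconstruction_distance(150.0))      # -290.0 (the z0 = f0 branch)
```

Inverting this relation numerically (e.g. by scanning z0) is what "calibrating the back propagation distance" amounts to in practice.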

Fig. 6. Reconstruction results of handwritten digit ‘0’ at different reconstruction distances in the inversion reconstruction process, and the normalized PSNR calculated using the out-of-focus and in-focus reconstruction results.

4. Conclusion

In order to reduce the complexity of the experiment and improve the real-time performance of FINCH, we propose a single-shot Fresnel incoherent correlation holography via deep learning based phase-shifting (FINCH/DLPS) method to realize rapid and high-precision image reconstruction from only one collected interferogram. A network similar to the Y-Net structure is designed to obtain two interferograms with phase shifts of 2/3 π and 4/3 π from one input interferogram. The three-step phase-shifting algorithm and the back propagation algorithm are then used for holographic reconstruction. The obtained results show that the proposed method not only realizes high-precision image reconstruction but also effectively retains the 3D information by calibrating the back propagation distance while reducing the complexity of the experiment, further indicating the feasibility and superiority of the proposed FINCH/DLPS method. As a deep learning (DL) method based on data training, the generalization capability of the proposed FINCH/DLPS method is still its limitation: the same set of network parameters cannot cope with all types of samples, which is a difficult problem for all data-driven DL imaging technologies at present. Among the existing remedies, training a network model with enough parameters on a large number of datasets, or using transfer learning to adapt the network parameters to different types of samples, are the main ways to improve network generalization. In addition, the use of two SLMs reduces the light budget to a certain extent; how to further improve the light efficiency and the generalization capability of the network are the core issues of our future work.

Funding

National Natural Science Foundation of China (62175041, 62205059, 61805086, 61875059); Guangdong Introducing Innovative and Entrepreneurial Teams of “The Pearl River Talent Recruitment Program” (2019ZT08X340); Guangdong Provincial Key Laboratory of Information Photonics Technology (2020B121201011).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007).

2. N. Siegel, J. Rosen, and G. Brooker, “Reconstruction of objects above and below the objective focal plane with dimensional fidelity by FINCH fluorescence microscopy,” Opt. Express 20(18), 19822 (2012).

3. B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM),” Opt. Express 20(8), 9109–9121 (2012).

4. T. Man, Y. Wan, W. Yan, X. H. Wang, E. J. G. Peterman, and D. Y. Wang, “Adaptive optics via self-interference digital holography for non-scanning three-dimensional imaging in biological samples,” Biomed. Opt. Express 9(6), 2614–2626 (2018).

5. W. Sheng, Y. Liu, Y. Shi, H. Jin, and J. Wang, “Phase-difference imaging based on FINCH,” Opt. Lett. 46(11), 2766–2769 (2021).

6. N. Siegel, V. Lupashin, B. Storrie, and G. Brooker, “High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers,” Nat. Photonics 10(12), 802–808 (2016).

7. N. Siegel and G. Brooker, “Single shot holographic super-resolution microscopy,” Opt. Express 29(11), 15953 (2021).

8. T. Tahara, Y. Kozawa, A. Ishii, K. Wakunami, Y. Ichihashi, and R. Oi, “Two-step phase-shifting interferometry for self-interference digital holography,” Opt. Lett. 46(3), 669–672 (2021).

9. D. Liang, Q. Zhang, J. Wang, and J. Liu, “Single-shot Fresnel incoherent digital holography based on geometric phase lens,” J. Mod. Opt. 67(2), 92–98 (2020).

10. X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. 42(3), 383 (2017).

11. S. Sakamaki, N. Yoneda, and T. Nomura, “Single-shot in-line Fresnel incoherent holography using a dual-focus checkerboard lens,” Appl. Opt. 59(22), 6612 (2020).

12. A. Qayyum, I. Ilahi, F. Shamshad, F. Boussaid, M. Bennamoun, and J. Qadir, “Untrained neural network priors for inverse imaging problems: A survey,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 1–20 (2022).

13. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020).

14. A. Sinha, G. Barbastathis, J. Lee, and S. Li, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017).

15. K. Suzuki, “Overview of deep learning in medical imaging,” Radiol. Phys. Technol. 10(3), 257–273 (2017).

16. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019).

17. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2017).

18. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).

19. G. Zhang, T. Guan, Z. Shen, X. Wang, T. Hu, D. Wang, Y. He, and N. Xie, “Fast phase retrieval in off-axis digital holographic microscopy through deep learning,” Opt. Express 26(15), 19388–19405 (2018).

20. H. Lawrence, D. A. Barmherzig, H. Li, M. Eickenberg, and M. Gabrié, “Phase retrieval with holography and untrained priors: Tackling the challenges of low-photon nanoscale imaging,” arXiv:2012.07386 (2020).

21. K. Wang, Q. Kemao, J. Di, and J. Zhao, “Y4-Net: a deep learning solution to one-shot dual-wavelength digital holographic reconstruction,” Opt. Lett. 45(15), 4220–4223 (2020).

22. P. Wu, D. Zhang, J. Yuan, S. Zeng, H. Gong, Q. Luo, and X. Yang, “Large depth-of-field fluorescence microscopy based on deep learning supported by Fresnel incoherent correlation holography,” Opt. Express 30(4), 5177 (2022).

23. K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, “Y-Net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44(19), 4765 (2019).

24. L. Deng, “The MNIST database of handwritten digit images for machine learning research [best of the web],” IEEE Signal Process. Mag. 29(6), 141–142 (2012).
