Abstract
Fresnel incoherent correlation holography (FINCH) realizes non-scanning three-dimensional (3D) imaging under spatially incoherent illumination, but it requires phase-shifting technology to remove the disturbance of the DC term and twin term that appear in the reconstruction field, which increases the complexity of the experiment and limits the real-time performance of FINCH. Here, we propose a single-shot Fresnel incoherent correlation holography via deep-learning-based phase-shifting (FINCH/DLPS) method to realize rapid and high-precision image reconstruction using only one collected interferogram. A phase-shifting network is designed to implement the phase-shifting operation of FINCH. The trained network conveniently predicts two interferograms with phase shifts of 2/3 π and 4/3 π from one input interferogram. Using the conventional three-step phase-shifting algorithm, we can then remove the DC term and twin term of the FINCH reconstruction and obtain a high-precision result through the back propagation algorithm. The Mixed National Institute of Standards and Technology (MNIST) dataset is used to verify the feasibility of the proposed method through experiments. In the test with the MNIST dataset, the reconstruction results demonstrate that, in addition to high-precision reconstruction, the proposed FINCH/DLPS method can also effectively retain the 3D information by calibrating the back propagation distance while reducing the complexity of the experiment, further indicating the feasibility and superiority of the proposed FINCH/DLPS method.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Fresnel incoherent correlation holography (FINCH) is an effective method for recording incoherent holograms based on the correlation between the sample information and a Fresnel zone plate. It records the three-dimensional (3D) spatial information of the sample without any moving or scanning device; the system structure is simple and the reconstruction method is flexible [1–3]. FINCH uses a spatial light modulator (SLM) to load lens phases with different focal lengths, ingeniously constructing an autocorrelation optical path, and applies the phase-shifting algorithm to realize holographic reconstruction; it has been widely studied and applied to fluorescence microscopic imaging and white-light microscopic imaging [4,5]. In addition, by loading a lens phase with a specific focal length on the SLM, FINCH can break through the diffraction limit and achieve a twofold increase in resolution [6]. This feature has also made FINCH a research hotspot in super-resolution imaging [7].
FINCH is a coaxial autocorrelation holographic imaging method that can reconstruct the sample information under incoherent illumination (including a fluorescent lamp) or from the light reflected by any sample. The light emitted from each point of the sample is split into two beams that are differently phase-modulated and recombined in a common plane to produce interference fringes. An SLM is used to implement the phase-shifting operation by adding different phase differences between the modulation phases of the two optical paths. Then, the phase-shifting algorithm is used to remove the disturbance of the DC term and twin term that appear in the reconstruction field, and the back propagation algorithm is used for image reconstruction. Obviously, the acquisition of multiple phase-shifting interferograms limits the real-time performance of FINCH. Subsequently, several single-shot FINCH imaging technologies have been proposed to overcome this limitation [7–11], for example, off-axis FINCH imaging [10] and the technology of acquiring multiple phase-shifting interferograms with one camera by adding a grating phase through the SLM [11]. It is even possible to use a birefringent crystal lens to replace the SLM and combine it with a polarization camera to realize synchronous phase-shifting holographic reconstruction [7], which provides a simpler and more light-efficient method for single-shot microscopy. Although these methods greatly simplify the FINCH system and improve real-time imaging, they sacrifice the space-bandwidth product of the camera.
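As a toy illustration of the self-interference principle above (not the authors' experimental configuration; the wavelength, pixel pitch, and beam curvatures below are arbitrary assumptions), a single point source split into two spherical waves of different curvature produces a Fresnel-zone-like fringe pattern whose phase is offset by the applied phase shift:

```python
import numpy as np

def finch_point_hologram(n=256, pitch=8e-6, wl=632.8e-9,
                         z1=0.25, z2=0.35, theta=0.0):
    """Toy FINCH self-interference pattern for one point source: the two
    SLM-modulated beams are modeled as quadratic-phase (spherical) waves
    with curvatures z1 and z2; theta is the phase shift added to beam 2."""
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    r2 = X ** 2 + Y ** 2
    u1 = np.exp(1j * np.pi * r2 / (wl * z1))             # beam 1
    u2 = np.exp(1j * (np.pi * r2 / (wl * z2) + theta))   # beam 2, shifted
    return np.abs(u1 + u2) ** 2                          # camera intensity
```

The resulting fringes form a Fresnel zone pattern; changing `theta` shifts the fringe phase, which is exactly what the phase-shifting step exploits.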
In recent years, as an end-to-end image processing method, deep learning (DL) has been widely used to solve various inverse problems in the field of optical imaging [12,13], such as lensless imaging [14], medical imaging [15], computational ghost imaging [16], and holographic reconstruction [17–21]. A trained convolutional neural network can remove the disturbance of the DC term and twin term in the reconstruction field and then realize single-shot holographic reconstruction with the back propagation algorithm.
This end-to-end DL technology has also been used for FINCH reconstruction. In the above-mentioned method, interferograms with different back propagation distances are used as the input, and the reconstruction results calculated by the phase-shifting algorithm on the focal plane are used as the label dataset. Although the input interferogram is not at the focus position, the network can still predict clear image information. This not only realizes single-shot FINCH imaging but also provides a large depth of field [22]. However, the reported DL-based large-depth-of-field FINCH cannot retain the 3D information of the image well. Here, to realize single-shot FINCH without losing the space-bandwidth product, we propose a single-shot 3D FINCH imaging method based on deep-learning phase-shifting technology (FINCH/DLPS), in which the convolutional neural network is not used for the end-to-end holographic reconstruction task; instead, the network is trained to predict the other two phase-shifting interferograms, and the image reconstruction is then implemented by combining the three-step phase-shifting and back propagation algorithms.
2. Principle and network analysis
In conventional FINCH imaging, the phase shift is implemented using an SLM, whereas the proposed FINCH/DLPS method realizes the phase shift using a network; the image reconstruction is then implemented by combining the three-step phase-shifting and back propagation algorithms. The construction and training of the network are the core of the proposed method. We first built a phase-shifting network in accordance with the procedure of phase-shifting FINCH imaging, as illustrated in Fig. 1. The collected interferograms with a phase shift of 0 are used as the input, and the interferograms with phase shifts of 2/3 π and 4/3 π are used as the output of the network. The feature information of the input image is extracted through the convolution channel, and this feature information is then used to reconstruct the phase-shifting interferograms in the deconvolution layers: one channel is used to obtain the interferogram with the phase shift of 2/3 π, and the other channel generates the interferogram with the phase shift of 4/3 π. The designed network mainly refers to U-Net and Y-Net [23] and extracts feature information from the input images through multiple convolution layers, where the size of the convolution kernel is 3 × 3 and the stride is 2. Each convolution layer is followed by layer normalization and an activation function (leaky ReLU), defined as $f(x) = \max (\lambda x, x)$ with $\lambda = 0.2$. The feature maps are then fed into two channels composed of deconvolution layers to obtain the target phase-shifting interferograms, where the convolution kernel is 5 × 5 with a stride of 2, and the activation function (leaky ReLU) precedes each deconvolution layer. The size of the input image is 1024 × 1024 pixels, and the number of convolution kernels in each layer of the network is shown in Fig. 1.
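The "Y" topology described above can be sketched in a minimal single-channel numpy forward pass. This is only a structural illustration, not the trained network: the real model has many channels per layer, layer normalization, skip connections, and learned 5 × 5 deconvolution kernels, whereas the kernels and branch weights here are arbitrary placeholders.

```python
import numpy as np

def leaky_relu(x, lam=0.2):
    # f(x) = max(lam * x, x), with lam = 0.2 as in the paper
    return np.maximum(lam * x, x)

def conv3x3_s2(x, k):
    """Zero-padded 3x3 convolution with stride 2 (single-channel toy)."""
    xp = np.pad(x, 1)
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            out[i, j] = np.sum(xp[2 * i:2 * i + 3, 2 * j:2 * j + 3] * k)
    return out

def upsample2(x):
    """Stand-in for a stride-2 deconvolution layer (nearest-neighbour)."""
    return np.kron(x, np.ones((2, 2)))

def y_net_forward(img, k_enc, w_a, w_b):
    """Shared encoder, then two decoder branches -- the 'Y' shape.
    Branch A stands for the 2/3 pi output, branch B for the 4/3 pi one."""
    feat = leaky_relu(conv3x3_s2(img, k_enc))   # shared feature extraction
    out_a = upsample2(leaky_relu(w_a * feat))   # decoder branch A
    out_b = upsample2(leaky_relu(w_b * feat))   # decoder branch B
    return out_a, out_b
```

The shared encoder followed by two branches is what lets one set of extracted features serve both phase-shifted outputs.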
By building such a network, the two phase-shifting interferograms can be generated with only one network instead of training two separate networks, which effectively reduces the number of network parameters and improves the training efficiency.
The root mean square error (RMSE) is used as the loss function of network training, shown as follows:
$$\mathrm{RMSE} = \sqrt {\frac{1}{N}\sum\limits_{i = 1}^N {{{({{\hat{y}}_i} - {y_i})}^2}} },$$
where $\hat{y}_i$ and $y_i$ denote the network output and the corresponding ground-truth interferogram value at pixel $i$, and $N$ is the number of pixels.
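A minimal sketch of this loss, under the assumption that the error is pooled jointly over both output branches (the exact aggregation across the two outputs is not spelled out in the text):

```python
import numpy as np

def rmse_loss(pred_a, pred_b, gt_a, gt_b):
    """RMSE over both network outputs against their ground-truth
    phase-shifted interferograms (one plausible pooling of the loss)."""
    err = np.concatenate([(pred_a - gt_a).ravel(),
                          (pred_b - gt_b).ravel()])
    return np.sqrt(np.mean(err ** 2))
```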
3. Verification and analysis of experimental results
3.1 Experimental verification
The Mixed National Institute of Standards and Technology (MNIST) dataset [24] is selected to train and test the proposed FINCH/DLPS method. The images in 5000 sets of data are loaded onto the amplitude-type SLM, and three phase-shifting interferograms (0, 2/3 π, and 4/3 π) are collected for each. 80% of the data are used for training, 10% for testing, and 10% for validation. For analysis, the RMSE between the network output and the ground truth is calculated; the average RMSE is 0.0036 gray levels. Here, Output 1 denotes the network-predicted interferogram with the phase shift of 2/3 π, and Output 2 the one with the phase shift of 4/3 π; the corresponding ground truths are the interferograms with phase shifts of 2/3 π and 4/3 π collected in the experiment. Figure 3 shows one set of results from the validation data. The collected phase-shifting interferograms with phase shifts of 2/3 π and 4/3 π are shown in Figs. 3(a) and 3(d), and the corresponding interferograms output by the network are given in Figs. 3(b) and 3(e). For clearer presentation, the data of the 500th column are plotted in Figs. 3(c) and 3(f). The results show that the network can generate the other two phase-shifting interferograms with high precision; there is little difference between the network outputs and the experimental results. To verify the feasibility of the FINCH/DLPS method, we use the three collected phase-shifting interferograms and the results obtained with the proposed network to implement image reconstruction, as shown in Fig. 4. Clearly, the reconstruction results obtained with the conventional phase-shifting method and the proposed method are basically the same.
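The reconstruction step can be sketched as follows: the three interferograms (two of them network-predicted in FINCH/DLPS) are combined by the standard three-step phase-shifting formula into a complex hologram free of the DC and twin terms, which is then numerically back-propagated. The wavelength and pixel pitch below are placeholder values, not the experimental ones.

```python
import numpy as np

def complex_hologram(i1, i2, i3, thetas=(0, 2 * np.pi / 3, 4 * np.pi / 3)):
    """Standard three-step phase-shifting combination; the coefficient sum
    is zero, so the DC term cancels, and the twin term cancels likewise."""
    t1, t2, t3 = thetas
    return (i1 * (np.exp(-1j * t3) - np.exp(-1j * t2))
            + i2 * (np.exp(-1j * t1) - np.exp(-1j * t3))
            + i3 * (np.exp(-1j * t2) - np.exp(-1j * t1)))

def fresnel_backprop(h, z, wl=632.8e-9, pitch=8e-6):
    """Fresnel back propagation over distance z via a unit-modulus
    quadratic-phase transfer function in the Fourier domain."""
    n = h.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(f, f)
    kernel = np.exp(-1j * np.pi * wl * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(h) * kernel)
```

For interferograms of the form $A + B\cos(\varphi + \theta_k)$, the combination returns a complex field whose phase is $\varphi$ up to a constant offset, which is what the back propagation step then refocuses.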
The three-step phase-shifting reconstruction algorithm can effectively remove the disturbance of the DC term and twin image. Quantitative analysis shows that the peak signal-to-noise ratio (PSNR) of the image reconstructed with the proposed method is 28.3 dB; the residual deviation arises because the phase-shifting interferograms generated by the network differ slightly from the interferograms collected in the experiment. This further indicates the feasibility of the proposed FINCH/DLPS method.
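The PSNR figure quoted above follows the usual definition; a minimal implementation, assuming images normalized to a unit peak value:

```python
import numpy as np

def psnr(recon, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reconstruction and a
    reference image, both assumed to lie in [0, peak]."""
    mse = np.mean((recon - ref) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```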
In addition, for some of the validation data, the proposed FINCH/DLPS method performs better than reconstruction from three collected interferograms, as shown in Fig. 5. The main reason is the error introduced by multiple experimental acquisitions, such as human operation, an unstable light source, or an unstable light path. Because only one interferogram needs to be collected, the proposed method avoids the deviation caused by multiple acquisitions, which is difficult to eliminate in the conventional FINCH method, while still effectively removing the DC term and twin term and thus obtaining high-precision reconstruction, further indicating its advantage.
3.2 3D imaging capability analysis
Similar to conventional 3D imaging, the proposed method also requires an inversion (back propagation) reconstruction procedure; thus, the 3D information of the image can be obtained by calibrating the inversion reconstruction distance. The inversion results at different distances are shown in Fig. 6. Clear image information is obtained at the reconstruction distance of 320 mm, and the reconstruction becomes blurred when the reconstruction distance deviates from the in-focus distance. According to Ref. [8], the reconstruction distance and the actual distance of the sample in the optical path need to satisfy the following relationship:
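One simple way to calibrate the in-focus distance (the text does not state how the 320 mm value was selected, so this is only a common autofocus approach, not necessarily the authors') is to scan candidate back propagation distances and keep the one maximizing a sharpness metric:

```python
import numpy as np

def focus_metric(img):
    """Gradient-energy sharpness metric (one common autofocus choice)."""
    gy, gx = np.gradient(img)
    return np.sum(gx ** 2 + gy ** 2)

def find_focus(hologram, backprop, distances):
    """Scan candidate back propagation distances and keep the one whose
    reconstruction is sharpest; repeating this per object plane recovers
    the depth (3D) information of the sample."""
    scores = [focus_metric(np.abs(backprop(hologram, z))) for z in distances]
    return distances[int(np.argmax(scores))]
```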
4. Conclusion
In order to reduce the complexity of the experiment and improve the real-time performance of FINCH, we propose a single-shot Fresnel incoherent correlation holography via deep-learning-based phase-shifting (FINCH/DLPS) method to realize rapid and high-precision image reconstruction using only one collected interferogram. A network similar to the Y-Net structure is designed to obtain two interferograms with phase shifts of 2/3 π and 4/3 π from one input interferogram, and the three-step phase-shifting and back propagation algorithms are used for holographic reconstruction. The obtained results show that the proposed method not only realizes high-precision image reconstruction but also effectively retains the 3D information by calibrating the back propagation distance while reducing the complexity of the experiment, further indicating the feasibility and superiority of the proposed FINCH/DLPS method. Currently, as a deep learning (DL) method based on data training, the generalization capability of the proposed FINCH/DLPS method is still its limitation: the same set of network parameters cannot cope with all types of samples, which is a difficult problem for all data-driven DL imaging technologies at present. Among the existing methods, training a network model with enough parameters on a large number of datasets, or using transfer learning to adapt the network parameters to different types of samples, are the main ideas for improving network generalization. In addition, the use of two SLMs reduces the light budget to a certain extent; how to further improve the light efficiency and the generalization capability of the network are the core issues of our following work.
Funding
National Natural Science Foundation of China (62175041, 62205059, 61805086, 61875059); Guangdong Introducing Innovative and Entrepreneurial Teams of “The Pearl River Talent Recruitment Program” (2019ZT08X340); Guangdong Provincial Key Laboratory of Information Photonics Technology (2020B121201011).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). [CrossRef]
2. N. Siegel, J. Rosen, and G. Brooker, “Reconstruction of objects above and below the objective focal plane with dimensional fidelity by FINCH fluorescence microscopy,” Opt. Express 20(18), 19822 (2012). [CrossRef]
3. B. Katz, J. Rosen, R. Kelner, and G. Brooker, “Enhanced resolution and throughput of Fresnel incoherent correlation holography (FINCH) using dual diffractive lenses on a spatial light modulator (SLM),” Opt. Express 20(8), 9109–9121 (2012). [CrossRef]
4. T. Man, Y. Wan, W. Yan, X. H. Wang, E. J. G. Peterman, and D. Y. Wang, “Adaptive optics via self-interference digital holography for non-scanning three-dimensional imaging in biological samples,” Biomed. Opt. Express 9(6), 2614–2626 (2018). [CrossRef]
5. W. Sheng, Y. Liu, Y. Shi, H. Jin, and J. Wang, “Phase-difference imaging based on FINCH,” Opt. Lett. 46(11), 2766–2769 (2021). [CrossRef]
6. N. Siegel, V. Lupashin, B. Storrie, and G. Brooker, “High-magnification super-resolution FINCH microscopy using birefringent crystal lens interferometers,” Nat. Photonics 10(12), 802–808 (2016). [CrossRef]
7. N. Siegel and G. Brooker, “Single shot holographic super-resolution microscopy,” Opt. Express 29(11), 15953 (2021). [CrossRef]
8. T. Tahara, Y. Kozawa, A. Ishii, K. Wakunami, Y. Ichihashi, and R. Oi, “Two-step phase-shifting interferometry for self-interference digital holography,” Opt. Lett. 46(3), 669–672 (2021). [CrossRef]
9. D. Liang, Q. Zhang, J. Wang, and J. Liu, “Single-shot Fresnel incoherent digital holography based on geometric phase lens,” J. Mod. Opt. 67(2), 92–98 (2020). [CrossRef]
10. X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. 42(3), 383 (2017). [CrossRef]
11. S. Sakamaki, N. Yoneda, and T. Nomura, “Single-shot in-line Fresnel incoherent holography using a dual-focus checkerboard lens,” Appl. Opt. 59(22), 6612 (2020). [CrossRef]
12. A. Qayyum, I. Ilahi, F. Shamshad, F. Boussaid, M. Bennamoun, and J. Qadir, “Untrained neural network priors for inverse imaging problems: A survey,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 1–20 (2022). [CrossRef]
13. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]
14. A. Sinha, G. Barbastathis, J. Lee, and S. Li, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]
15. K. Suzuki, “Overview of deep learning in medical imaging,” Radiol. Phys. Technol. 10(3), 257–273 (2017). [CrossRef]
16. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]
17. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2017). [CrossRef]
18. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018). [CrossRef]
19. G. Zhang, T. Guan, Z. Shen, X. Wang, T. Hu, D. Wang, Y. He, and N. Xie, “Fast phase retrieval in off-axis digital holographic microscopy through deep learning,” Opt. Express 26(15), 19388–19405 (2018). [CrossRef]
20. H. Lawrence, D. A. Barmherzig, H. Li, M. Eickenberg, and M. Gabrié, “Phase retrieval with holography and untrained priors: Tackling the challenges of low-photon nanoscale imaging,” arXiv:2012.07386 (2020). [CrossRef]
21. K. Wang, Q. Kemao, J. Di, and J. Zhao, “Y4-Net: a deep learning solution to one-shot dual-wavelength digital holographic reconstruction,” Opt. Lett. 45(15), 4220–4223 (2020). [CrossRef]
22. P. Wu, D. Zhang, J. Yuan, S. Zeng, H. Gong, Q. Luo, and X. Yang, “Large depth-of-field fluorescence microscopy based on deep learning supported by Fresnel incoherent correlation holography,” Opt. Express 30(4), 5177 (2022). [CrossRef]
23. K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, “Y-Net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44(19), 4765 (2019). [CrossRef]
24. L. Deng, “The MNIST database of handwritten digit images for machine learning research [best of the web],” IEEE Signal Process. Mag. 29(6), 141–142 (2012). [CrossRef]