
Untrained deep learning-based differential phase-contrast microscopy

Open Access

Abstract

Quantitative differential phase-contrast (DPC) microscopy produces phase images of transparent objects based on a number of intensity images. To reconstruct the phase, in DPC microscopy, a linearized model for weakly scattering objects is considered; this limits the range of objects to be imaged, and requires additional measurements and complicated algorithms to correct for system aberrations. Here, we present a self-calibrated DPC microscope using an untrained neural network (UNN), which incorporates the nonlinear image formation model. Our method alleviates the restrictions on the object to be imaged and simultaneously reconstructs the complex object information and aberrations, without any training dataset. We demonstrate the viability of UNN-DPC microscopy through both numerical simulations and LED microscope-based experiments.

© 2023 Optica Publishing Group

Quantitative phase imaging (QPI), which provides optical thickness information of transparent objects, has found utility in numerous applications [1,2]. The most common QPI methods are based on interferometry with coherent illumination, which makes them expensive and sensitive to misalignment, vibration, and speckle noise [3,4]. To overcome these limitations, noninterferometric or reference-free QPI methods using partially coherent illumination have been developed, including the transport-of-intensity equation [1], Fourier ptychographic microscopy [5], and differential phase-contrast (DPC) microscopy [6]. Among these methods, DPC microscopy recovers quantitative phase by acquiring several intensity images under asymmetric illumination and reconstructing the complex object information. However, DPC relies on a linear imaging model that is applicable only to weakly scattering objects with phase delays below ≈0.5 rad [7,8]. Furthermore, system aberrations can degrade the DPC reconstruction quality. A computational method for aberration-corrected DPC imaging has recently been demonstrated [8]; however, it is based on a linear model suitable only for thin phase objects and requires a complicated iterative algorithm.

Deep neural networks (DNNs) have emerged as powerful tools for phase retrieval, offering high-accuracy phase reconstruction on various platforms [9]. By leveraging large datasets and end-to-end training, DNNs can solve nonlinear inverse problems and have demonstrated great promise for data-driven quantitative phase microscopy [10–12]. However, the performance of these networks relies heavily on the consistency between the training and experimental settings, because they are sensitive to changes in object features, instrumentation, and acquisition parameters. In contrast, an untrained neural network (UNN), inspired by the deep image prior (DIP) [13], has the potential to address these issues by integrating a physical model with DNNs. Physics-informed UNNs have been proposed to solve various inverse problems (e.g., phase retrieval [14,15] and Fourier ptychography [16,17]). Motivated by the DIP and related studies, we propose a UNN-based DPC microscope, termed a UNN-DPC microscope, which jointly reconstructs the pupil and complex object by integrating a nonlinear image formation model, without any ground-truth data.

Figure 1 outlines our UNN-DPC framework. We employ a U-Net-based architecture to reconstruct the complex object information, incorporating multiscale and skip connections with a minimal number of trainable parameters. The network output is then numerically imaged using a nonlinear forward-imaging model that incorporates pupil aberrations. The aberration is generated by a fully connected network that computes a weighted sum of Zernike polynomials. We evaluate and minimize the difference between the measured and estimated intensity images to obtain a joint estimate of the complex object and pupil information. The optimization problem is formulated as

$$\begin{aligned} \mathcal{U} &= \mathop{\arg\min}\limits_{\boldsymbol{W}_{\mathrm{Net}},\,\boldsymbol{W}_{\mathrm{Z}}} \left( \mathcal{L}_{\mathrm{SSIM}}\!\left(I, \widehat{I}(\boldsymbol{W}_{\mathrm{Net}}, \boldsymbol{W}_{\mathrm{Z}})\right) + \alpha \cdot \mathcal{L}_{\mathrm{TV}}\!\left(\widehat{\phi}\right) \right),\\ \mathcal{L}_{\mathrm{SSIM}} &= \frac{1}{N}\sum_{k=1}^{N} \left( 1 - \mathrm{SSIM}\!\left(I_k, \widehat{I_k}\right) \right), \end{aligned}$$
where $\mathcal{U}$ is the optimal UNN-DPC model; $I_k$ represents the intensity measurement obtained using the $k$th illumination pattern; and $k = 1, 2, 3$ denotes the illumination index of the top-half circle, left-half circle, and center LED patterns, respectively ($N = 3$). We define $\mathcal{L}_{\mathrm{SSIM}}$ as the average of the structural similarity index measure (SSIM) loss between the measured ($I$) and estimated ($\hat{I}$) intensity images. We evaluated both SSIM loss and mean squared error as the fidelity term and found that UNN-DPC microscopy with SSIM loss led to more accurate object and pupil recovery (Fig. S1 in Supplement 1). $\mathcal{L}_{\mathrm{TV}}$ is the total variation (TV) of the estimated phase image $\hat{\phi}$, included to improve reconstruction quality while suppressing artifacts that might arise from noise in the imaging system and the convolution operations in the UNN [18,19]. The factor $\alpha$, which controls the relative weight of the two terms, was set to $1 \times 10^{-6}$, because this yielded accurate object recovery for various numerical phantoms (Fig. S2 in Supplement 1). A conventional supervised DNN requires extensive datasets and ground-truth information for training. In contrast, UNN-DPC microscopy does not rely on ground truth but instead estimates object and pupil information by incorporating a nonlinear image formation model. For the image formation model, we consider an object with complex transmission function $o(\boldsymbol{r}) = \exp\left(-\mu(\boldsymbol{r}) + i\phi(\boldsymbol{r})\right)$, characterized by absorption $\mu(\boldsymbol{r})$ and phase $\phi(\boldsymbol{r}) = \frac{2\pi}{\lambda}\,\Delta n(\boldsymbol{r})\, d(\boldsymbol{r})$, where $\boldsymbol{r} = (x, y)$ represents the 2D spatial coordinate, $\lambda$ is the illumination wavelength, $d(\boldsymbol{r})$ is the object thickness, and $\Delta n(\boldsymbol{r})$ is the refractive index difference. Under an LED illumination pattern, the intensity at the image plane is a nonlinear function of $o(\boldsymbol{r})$, given as
$$\widehat{I}(\boldsymbol{r}) = \sum_{m \in \mathbf{R}} \left| \mathcal{F}^{-1}\!\left\{ P(\boldsymbol{u}) \cdot e^{iA(\boldsymbol{u})} \cdot \mathcal{F}\{ s_m(\boldsymbol{r}) \cdot o(\boldsymbol{r}) \} \right\} \right|^{2},$$
where $\boldsymbol{u}$ is the 2D spatial frequency coordinate, $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier and inverse Fourier transforms, $P(\boldsymbol{u})$ is the ideal pupil, $A(\boldsymbol{u})$ is the aberration, and $\mathbf{R}$ is the illumination pattern, composed of the LEDs indexed by $m$. The illumination from each LED is approximated as a plane wave, $s_m(\boldsymbol{r}) = \exp\!\left(\frac{i 2\pi}{\lambda}\, \nu_m \boldsymbol{r}\right)$, with tilt angle $\nu_m$ determined by the position of each LED relative to the object plane. At each iteration, we obtain the estimated intensity images using the forward model given in Eq. (2) and calculate the loss function of Eq. (1) from the measured and estimated intensity images. The RMSprop optimizer, based on gradient descent, backpropagates the loss to update the weights $[\boldsymbol{W}_{\mathrm{Net}}, \boldsymbol{W}_{\mathrm{Z}}]$.
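For concreteness, the forward model of Eq. (2) and the loss of Eq. (1) can be sketched in a few lines of PyTorch. This is a minimal illustration under our own naming, not the authors' implementation: `forward_intensity`, `zernike_basis`, `tilt_phases`, and `global_ssim` are assumed names, and the SSIM here is a simplified, unwindowed variant standing in for a full windowed SSIM.

```python
import torch
import torch.fft as fft

def forward_intensity(obj, pupil_ideal, zernike_basis, z_weights, tilt_phases):
    """Nonlinear image formation of Eq. (2): incoherent sum over the LEDs of one
    illumination pattern, with the pupil aberration A(u) parameterized as a
    weighted sum of Zernike maps (the role of W_Z)."""
    aberration = torch.einsum('z,zhw->hw', z_weights, zernike_basis)   # A(u)
    pupil = pupil_ideal * torch.exp(1j * aberration)                   # P(u) e^{iA(u)}
    intensity = torch.zeros_like(obj.real)
    for tilt in tilt_phases:                     # s_m(r): one tilted plane wave per LED
        field = fft.ifft2(pupil * fft.fft2(tilt * obj))
        intensity = intensity + field.abs() ** 2
    return intensity

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Unwindowed SSIM between two intensity images (simplified stand-in)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def tv_loss(phi):
    """Anisotropic total variation of the estimated phase (regularizer in Eq. (1))."""
    return (phi[1:, :] - phi[:-1, :]).abs().mean() + (phi[:, 1:] - phi[:, :-1]).abs().mean()

def loss_fn(measured, estimated, phi_hat, alpha=1e-6):
    """SSIM fidelity averaged over the N = 3 patterns plus TV on the phase (Eq. (1))."""
    fidelity = torch.stack([1 - global_ssim(i, j) for i, j in zip(measured, estimated)]).mean()
    return fidelity + alpha * tv_loss(phi_hat)
```

Under this parameterization, the three measured intensity stacks drive both the U-Net weights $\boldsymbol{W}_{\mathrm{Net}}$ and the Zernike weights $\boldsymbol{W}_{\mathrm{Z}}$ through a single differentiable pipeline.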

Fig. 1. UNN-DPC imaging principle. Intensity measurements acquired with three different illumination patterns are fed into the UNN to generate complex object information. The physical model combined with system aberration then simulates the image formation process and generates estimated intensity measurement. The optimization problem is solved iteratively using a gradient-based procedure to minimize the difference between the measured $(I )$ and estimated $({\hat{I}} )$ intensity images.

We first performed numerical simulations to validate the feasibility of UNN-DPC microscopy. An LED-based microscope with the same system parameters as our experimental setup (10×/0.25 NA, $\lambda = 527$ nm) was considered, and an object composed of 3 × 3 circular step targets with various absorption and phase values was imaged (Fig. 2). We imposed a pupil aberration as a weighted sum of Zernike polynomials, with weights randomly assigned in the range of −1 to 1. The imposed pupil aberration and corresponding Zernike weights are shown in Fig. 2(a). Three DPC intensity measurements were simulated using the discrete illumination patterns in Fig. 1 and then fed into the UNN. The reconstructed absorption ($\mu$) and phase ($\phi$) images, along with the pupil aberration, are shown in Fig. 2(b). We used the learning rate and number of iterations determined in our ablation study, discussed next. Note the close correspondence of the reconstructed absorption, phase, and pupil aberration with the ground-truth information. For the 3 × 3 circular step targets, the mean absolute errors (MAEs) for the absorption, phase, and pupil aberration were $9.89 \times 10^{-4}$ rad, $1.02 \times 10^{-4}$ rad, and $3.70 \times 10^{-3}$ rad, respectively. The estimated Zernike coefficients yielded an average error of 1.48%, with a maximum pupil error of 0.16 rad. Specifically, for a target with $\mu = 0.32$ and $\phi = 1.96$ rad, UNN-DPC enabled accurate estimation of absorption and phase, with errors of 0.9% and 3.5%, respectively. We also performed UNN-DPC imaging of a lenslet array and biological tissue, which represented a slowly varying phase sample (Fig. S3 in Supplement 1) and a complicated phase sample (Fig. S4 in Supplement 1), respectively. UNN-DPC reconstructed the object and pupil aberration of these samples with high accuracy as well.
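As an illustration of this simulation setup, the ideal pupil and a random set of Zernike weights of the kind described above can be generated as follows. The sampling parameters are taken from the stated setup (10×/0.25 NA, $\lambda = 527$ nm, 6.5 µm camera pixels), whereas the grid size, the number of Zernike modes, and the precomputed `zernike_basis` stack are illustrative assumptions.

```python
import torch

def circular_pupil(n, na, wavelength, dx):
    """Ideal pupil P(u): unit transmission inside the objective passband |u| <= NA/lambda."""
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing='ij')
    return ((fxx**2 + fyy**2).sqrt() <= na / wavelength).float()

torch.manual_seed(0)
n_modes = 15                                      # number of Zernike modes (illustrative)
true_weights = 2 * torch.rand(n_modes) - 1        # random weights in [-1, 1], as in Fig. 2(a)

# Sample-plane pixel size: 6.5 um camera pixel / 10x magnification = 0.65 um (lengths in um).
pupil_ideal = circular_pupil(n=512, na=0.25, wavelength=0.527, dx=0.65)

# zernike_basis: an (n_modes, 512, 512) stack of Zernike maps over the pupil support
# (assumed precomputed); the imposed aberration is its weighted sum, and the three
# simulated measurements then follow from forward_intensity() in the sketch above.
```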

Fig. 2. UNN-DPC-based joint estimation of complex object information and aberration for 3 × 3 circular step target. (a), (b) Comparison of absorption and phase of phantom, pupil aberration, and weights of Zernike polynomials of (a) ground-truth and (b) UNN-DPC results. (c) Loss curves of UNN-DPC model during joint estimation process with RMSprop optimizer. lr, learning rate.

Figure 2(c) shows the loss curves as a function of the iteration number at various learning rates. With a learning rate of $1 \times 10^{-2}$, the model exhibited rapid initial convergence but suffered from instability and oscillation during the iterations. Conversely, a learning rate of $1 \times 10^{-4}$ resulted in slow convergence, requiring a significantly larger number of iterations to achieve comparable performance. Based on this observation, we used a learning rate of $1 \times 10^{-3}$ to balance convergence speed and accuracy. The optimization was stopped after 3000 iterations to mitigate the risk of overfitting, as suggested by Bostan et al. [14]. This took ≈5 min on a computer [Intel Xeon Gold 6226R CPU, NVIDIA RTX A6000 graphics processing unit (GPU)].
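A corresponding joint-optimization loop, again a hedged sketch rather than the authors' code, could look like the following. It reuses `forward_intensity` and `loss_fn` from the earlier sketch and assumes a `unet` mapping the stacked measurements to the absorption and phase estimates, a `zernike_head` holding the learnable Zernike weights $\boldsymbol{W}_{\mathrm{Z}}$, and per-pattern LED tilt lists (`tilts_top`, `tilts_left`, `tilts_center`); all of these names are ours.

```python
import torch

# Assumed to exist: unet, zernike_head, measurements (list of 3 intensity tensors),
# pupil_ideal, zernike_basis, and the per-pattern LED tilt lists.
params = list(unet.parameters()) + list(zernike_head.parameters())
optimizer = torch.optim.RMSprop(params, lr=1e-3)     # learning rate from the ablation in Fig. 2(c)

net_input = torch.stack(measurements).unsqueeze(0)   # three intensity images as one input stack

for it in range(3000):                                # fixed budget; stop early to limit overfitting [14]
    optimizer.zero_grad()
    mu_hat, phi_hat = [t.squeeze() for t in unet(net_input)]   # W_Net: absorption and phase
    obj_hat = torch.exp(-mu_hat + 1j * phi_hat)                # o(r) = exp(-mu + i*phi)
    z_hat = zernike_head(torch.ones(1))                        # W_Z: current Zernike estimates
    estimated = [forward_intensity(obj_hat, pupil_ideal, zernike_basis, z_hat, tilts)
                 for tilts in (tilts_top, tilts_left, tilts_center)]
    loss = loss_fn(measurements, estimated, phi_hat, alpha=1e-6)
    loss.backward()
    optimizer.step()
```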

We then experimentally validated UNN-DPC microscopy using an LED-based microscope (10×/0.25 NA objective, f = 400 mm tube lens). We employed a custom-built LED array with high-power LEDs (Shenzhen LED color, APA102-202 Super LED) as the light source, and images were acquired using a scientific CMOS (sCMOS) camera (PCO.edge, 4.2 MP format, 6.5 µm pixel size). Conventional DPC phase reconstruction was also performed using four half-circle illuminations (i.e., top, bottom, left, and right half-circles) and a Tikhonov deconvolution derived from the linearized imaging model. As noted elsewhere [7,8,20,21], this linear approximation is valid for most transparent objects. However, if the object under investigation exhibits a large phase delay, the first-order approximation no longer holds, leading to inaccurate phase reconstruction. To demonstrate the extended phase imaging range of UNN-DPC, we conducted a series of experiments using a phase target featuring Siemens stars with heights ranging from 50 nm to 350 nm (Benchmark Technologies). Figures 3(a) and 3(b) show representative UNN-DPC phase images of the phase target and profiles along the yellow dashed lines in Fig. 3(a), respectively. Phase profiles measured with a conventional DPC microscope and the expected phase delays from the target specification are also presented for comparison. UNN-DPC microscopy produced accurate phase images, whereas images from conventional DPC microscopy exhibited large errors, particularly for objects with large phase delays. Specifically, for a phase target of 1.92 rad, conventional DPC microscopy resulted in an error of 38.60%, whereas UNN-DPC microscopy enabled accurate phase estimation, with a significantly lower error of 1.72%. We performed experiments on Siemens stars with various phase values; the results are summarized in Fig. 3(c). At a phase delay of 2.17 rad, the reconstruction error with conventional DPC microscopy was 44.70%, whereas that with UNN-DPC was only 3.82%.
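For reference, the conventional linear DPC baseline follows the standard Tikhonov-regularized deconvolution in the spirit of refs. [6,8,20]. The sketch below shows its general form under the weak-object approximation; it assumes the phase transfer functions for the two half-circle pairs have already been computed, and the function names and regularization value are illustrative rather than taken from the paper.

```python
import torch
import torch.fft as fft

def dpc_tikhonov_phase(intensity_pairs, phase_tfs, beta=1e-1):
    """Linear DPC phase recovery: normalized DPC images (top-bottom, left-right) are
    deconvolved with their phase transfer functions under Tikhonov regularization."""
    num = torch.zeros_like(phase_tfs[0])              # complex accumulator
    den = torch.zeros_like(phase_tfs[0].real)
    for (i_a, i_b), h in zip(intensity_pairs, phase_tfs):
        i_dpc = (i_a - i_b) / (i_a + i_b)             # normalized DPC image
        num = num + torch.conj(h) * fft.fft2(i_dpc)
        den = den + h.abs() ** 2
    return fft.ifft2(num / (den + beta)).real         # valid only for weak (thin) phase objects
```

This first-order inversion is exactly what breaks down for the taller Siemens stars, which motivates the nonlinear UNN-DPC model above.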

Fig. 3. (a) UNN-DPC imaging results of Siemens phase target with various heights (scale bar, 10 µm). (b) UNN-DPC and conventional DPC phase profiles along the yellow dashed lines in (a). Black dashed lines indicate expected phase delays from target specification. (c) Reconstructed phase with respect to height of phase target.

To demonstrate the self-calibrating capability of the UNN-DPC platform, we then experimentally imposed a defocus aberration on the microscope and evaluated its auto-focusing performance. A Siemens star phase target was mounted on a motorized translation stage and scanned along the optical axis in steps of 5 µm over the range of −20 µm to 20 µm. After acquiring the intensity images at each axial location, the phase images were reconstructed using the conventional DPC and UNN-DPC methods. The reconstructed phase and pupil aberrations at various defocus distances are shown in Fig. 4(a). The phase images from conventional DPC microscopy degraded rapidly for a defocused phase target. In contrast, UNN-DPC enabled high-resolution phase imaging over much larger defocus distances by correcting for the aberration. The UNN-DPC images also degraded at large defocus distances (≥20 µm), but they maintained a high contrast compared with those from conventional DPC. As shown in the inset of Fig. 4(a), the contrast of a conventional DPC image at the focal plane was 0.66 but dropped to 0.57 at a defocus distance of 20 µm. In comparison, the UNN-DPC images maintained a contrast of >0.74 over the considered depth range (−20 µm to 20 µm), with an average contrast of 0.76. As expected, the pupil aberrations estimated by UNN-DPC microscopy were characterized by quadratic forms, with their curvatures increasing at larger defocus distances. We extracted the defocus parameters from the UNN-DPC-estimated Zernike coefficient (i.e., the defocus component $Z_2^0$) and compared them with the experimentally measured values [Fig. 4(b)]. A close correspondence was observed between the measured defocus parameters and those estimated from the Zernike coefficients ($R^2 = 0.992$). We further compared the imaging performance of UNN-DPC microscopy against the iterative pupil recovery algorithm from Zuo et al. [9]. For a phase object of 1.7 rad, the iterative algorithm was unable to correct for the aberration, and the images degraded rapidly for defocused phase targets (Fig. S5 in Supplement 1). We attribute this to the relatively large phase delay of the target (1.7 rad), which violates the weak-phase-object assumption. In contrast, UNN-DPC achieved high-accuracy phase imaging over much larger defocus distances by correcting for the aberration.
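The comparison in Fig. 4(b) requires mapping the recovered $Z_2^0$ coefficient to a physical defocus distance. One possible paraxial conversion is sketched below; the exact scale factor depends on the Zernike normalization convention, so this formula is our assumption for illustration rather than the authors' stated calibration.

```python
import math

def defocus_from_z20(c_rad, na=0.25, wavelength_um=0.527):
    """Paraxial defocus distance (um) from a defocus Zernike coefficient c (rad),
    assuming Z_2^0 = sqrt(3) * (2*rho**2 - 1) and a defocus pupil phase of
    (pi * dz * NA**2 / lambda) * rho**2; equating the rho**2 terms gives
    dz = 2*sqrt(3) * c * lambda / (pi * NA**2)."""
    return 2.0 * math.sqrt(3.0) * c_rad * wavelength_um / (math.pi * na ** 2)
```

Under these assumptions, a coefficient of roughly 2.2 rad corresponds to about 20 µm of defocus, which is on the order of the scanned range used here.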

Fig. 4. (a) Conventional DPC and UNN-DPC imaging results of Siemens phase target at various defocus distances. Recovered pupil aberrations are also presented. (b) Estimated defocus parameters compared against experimentally measured values (R2 = 0.992).

UNN-DPC microscopy represents a new form of learning-based DPC microscopy. Unlike conventional supervised learning, in UNN-DPC microscopy the network output (i.e., the complex object information) is propagated through a complete physical model that describes nonlinear image formation with aberration to produce estimated DPC images. The images acquired with the top- and left-half circle LED patterns provide DPC contrast along both the x and y directions, which aids absorption and phase image reconstruction via the image formation model. Conversely, the image recorded with the center LED was found to be effective for aberration recovery, as it corresponds to coherent illumination with a uniform frequency response inside the objective NA [9]. The network weights and biases are then optimized to minimize the difference between the estimated and measured DPC images, eventually yielding a feasible solution that satisfies the imposed physical constraints. UNN-DPC microscopy demonstrated high-accuracy imaging of phase objects with large phase delays, which cannot be achieved with conventional DPC microscopes. Although the method is iterative, its processing can be accelerated using high-speed GPUs, as with other DNN-based approaches.

The utility of the UNN in various phase imaging platforms has been explored previously [1417]. To the best of our knowledge, however, this study reports the first demonstration of UNN-based DPC microscopy, which is specifically aimed at addressing the limitations of conventional DPC microscopy; namely, (1) use of a linearized imaging model, which limits the range of objects for imaging, and (2) susceptibility to aberration. Our simulation and experimental results clearly validate UNN-DPC microscopy in overcoming such limitations.

The proposed scheme can be exploited in various functional derivatives of DPC microscopy. For instance, Hur et al. [21] presented a polarization-sensitive DPC (PS-DPC) microscope for birefringence imaging on the DPC platform; however, polarization-dependent aberrations, which might be present in the polarization optics, were not considered or corrected. Applying a UNN with a polarization-dependent forward-imaging model would yield polarization-dependent aberrations and phase images, together improving the measurement accuracy of birefringence information. On a final note, optimization of the illumination patterns in our UNN-DPC microscope was not rigorously examined in this study. Optimization of the illumination patterns, by incorporating well-characterized system physics and the nonlinear forward model, is under way to further improve imaging speed and measurement accuracy for a range of objects.

Funding

Ministry of Science and ICT, South Korea (1711179106) and Commercialization Promotion Agency for R&D Outcomes (COMPA); Samsung Research Funding & Incubation Center of Samsung Electronics (SRFC-IT2002-07).

Disclosures

The authors declare no conflicts of interest.

Data availability

All the data are available from the corresponding author upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. A. Barty, K. Nugent, D. Paganin, and A. Roberts, Opt. Lett. 23, 817 (1998).

2. G. Popescu, Quantitative Phase Imaging of Cells and Tissues (McGraw-Hill Education, 2011).

3. B. Kemper and G. von Bally, Appl. Opt. 47, A52 (2008).

4. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, Opt. Lett. 30, 468 (2005).

5. G. Zheng, R. Horstmeyer, and C. Yang, Nat. Photonics 7, 739 (2013).

6. S. B. Mehta and C. J. Sheppard, Opt. Lett. 34, 1924 (2009).

7. Y. Fan, J. Sun, Y. Shu, Z. Zhang, Q. Chen, and C. Zuo, Photonics Res. 11, 442 (2023).

8. M. Chen, Z. F. Phillips, and L. Waller, Opt. Express 26, 32888 (2018).

9. C. Zuo, J. Qian, S. Feng, W. Yin, Y. Li, P. Fan, J. Han, K. Qian, and Q. Chen, Light: Sci. Appl. 11, 39 (2022).

10. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, Opt. Express 26, 26470 (2018).

11. K. Wang, J. Di, Y. Li, Z. Ren, Q. Kemao, and J. Zhao, Opt. Lasers Eng. 134, 106233 (2020).

12. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, Light: Sci. Appl. 7, 17141 (2018).

13. D. Ulyanov, A. Vedaldi, and V. Lempitsky, Int. J. Comput. Vis. 128, 1867 (2020).

14. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, Optica 7, 559 (2020).

15. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, Light: Sci. Appl. 9, 77 (2020).

16. J. Zhang, T. Xu, J. Li, Y. Zhang, S. Jiang, Y. Chen, and J. Zhang, J. Biophotonics 15, e202100296 (2022).

17. Q. Chen, D. Huang, and R. Chen, Opt. Express 30, 39597 (2022).

18. J. Liu, Y. Sun, X. Xu, and U. S. Kamilov, in IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2019), pp. 7715–7719.

19. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, IEEE Trans. Image Process. 26, 4509 (2017).

20. L. Tian and L. Waller, Opt. Express 23, 11394 (2015).

21. S. Hur, S. Song, S. Kim, and C. Joo, Opt. Lett. 46, 392 (2021).
