
Untrained, physics-informed neural networks for structured illumination microscopy

Open Access

Abstract

Structured illumination microscopy (SIM) is a popular super-resolution imaging technique that can achieve resolution improvements of 2× and greater depending on the illumination patterns used. Traditionally, images are reconstructed using the linear SIM reconstruction algorithm. However, this algorithm has hand-tuned parameters which can often lead to artifacts, and it cannot be used with more complex illumination patterns. Recently, deep neural networks have been used for SIM reconstruction, yet they require training sets that are difficult to capture experimentally. We demonstrate that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data. The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction-limited sub-images and thus does not require any training set. We show, with simulated and experimental data, that this PINN can be applied to a wide variety of SIM illumination methods by simply changing the known illumination patterns used in the loss function and can achieve resolution improvements that match theoretical expectations.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In the past two decades, structured illumination microscopy (SIM) has become an increasingly popular super-resolution imaging method due to its relatively low illumination intensity levels and fast widefield imaging speeds [1–4]. SIM was first described by Gustafsson in his seminal paper and uses patterned illumination to shift high-frequency spatial information into the low-frequency passband of a microscope [5]. By using a series of patterns, the ill-posed problem of super-resolution can be conditioned and the high-resolution object recovered. The highest attainable spatial frequency with linear SIM reconstruction, $f_{SIM}$, can be described by:

$$f_{SIM} = f_{det} + f_{ill}$$
where $f_{det}$ is the maximum spatial frequency of the detection optics and $f_{ill}$ is the maximum spatial frequency of the illumination patterns. Therefore, the main drawback of traditional linear SIM is that it only yields around a 2x resolution improvement when the illumination patterns used are diffraction limited. Traditional SIM has thus been unable to compete with other super-resolution methods such as STED [6] and STORM [7] in terms of resolution. If we assume that the numerical aperture of the detection optics is fixed, then the only way to increase SIM resolution is to increase the spatial frequency of the illumination patterns. New advances in the ability to generate sub-diffraction structured illumination have opened the door to further resolution improvement.
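As a quick numerical check of Eq. (1), the following minimal sketch assumes the 530 nm emission wavelength and 1.49 NA used later in our simulations (Section 4) and the Abbe cutoff $f_{det} = 2\,NA/\lambda$:

```python
# Illustrative check of Eq. (1); the wavelength, NA, and Abbe cutoff
# convention are assumptions matching the simulations in Section 4.
wavelength = 530e-9               # emission wavelength (m)
na = 1.49                         # numerical aperture of detection optics

f_det = 2 * na / wavelength       # widefield (Abbe) cutoff frequency
f_ill = f_det                     # diffraction-limited illumination patterns
f_sim = f_det + f_ill             # Eq. (1)

print(f"Diffraction-limited resolution: {1e9 / f_det:.1f} nm")  # ~177.8 nm
print(f"Linear SIM resolution:          {1e9 / f_sim:.1f} nm")  # ~88.9 nm
```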

Sub-diffraction illumination patterns can be generated either by using a non-linear process or by near-field confinement via nanoscale structures. Non-linear SIM based on fluorophore saturation has demonstrated resolution improvements of 3x and greater [8,9]. Recently, it was established that a two-photon upconversion process can be used to generate similar non-linear patterns with much lower intensities [10]. In theory, if non-linear SIM includes enough higher-order harmonics and has sufficient signal-to-noise ratio, it can achieve nearly unlimited spatial resolution, although practical systems are limited to the first few harmonics [8]. Another strategy to create high-resolution illumination patterns is to use near-field illumination, which is not subject to the far-field diffraction limit and can achieve large K-vectors. High-refractive-index waveguide-based SIM has been demonstrated using silicon nitride [11] and gallium phosphide [12]. The resolution improvement of this approach is, however, still limited by the availability of high-index materials. Advances in plasmonics, metamaterials, and nanoscale fabrication have greatly increased the ability to design near-field light patterns in the past decade [13–16]. Plasmonic SIM, using either surface plasmon interference [17,18] or localized plasmonic resonator arrays [19–21], can boost the resolution improvement to around 3x. By combining traditional SIM and plasmonic SIM, this resolution improvement may be extended to 4x [22]. Recently, hyperbolic metamaterials and organic hyperbolic materials have been used to create extremely high K-vector illumination patterns and have shown resolution improvements far beyond 4x [23–25]. With these advances, linear SIM with resolution down to the 30 nm scale or better is now possible.

Traditionally, super-resolution images have been recovered using the linear SIM reconstruction algorithm [5,8]. However, this algorithm includes many hand-tuned parameters that affect the final reconstructed image and, if chosen improperly, can lead to artifacts [26–28]. It is often unclear to novice users how to choose these parameters in order to optimize reconstruction quality. Additionally, this algorithm presently exists only for traditional SIM and non-linear SIM with periodic illumination patterns and is not formulated for other types of SIM illumination. In light of this, many near-field and non-periodic SIM methods have relied on blind-SIM algorithms for image reconstruction, which do not require knowledge of the illumination patterns [29,30]. Yet these algorithms are fundamentally poorly conditioned; as a result, they produce worse reconstructions than when the illumination patterns are known and often require many more sub-frames. In terms of ease of use, open-source code packages are available only for traditional linear SIM [27,31]. Thus, it would be highly useful to the microscopy community if there were an open-source SIM reconstruction method that could be readily adapted to any type of SIM illumination with few parameters for the user to tune.

Recently, a rapidly growing approach for SIM image reconstruction has been the use of deep neural networks (DNNs). DNNs have demonstrated impressive SIM reconstruction results and have been shown to reduce the number of needed sub-frames, image in extremely noisy conditions, improve axial resolution, and reduce artifacts [32–38].

However, while DNNs have shown great promise, there are two main obstacles to their more practical and widespread use in the super-resolution community. The first is that all methods so far have employed a supervised learning strategy for training that requires paired low-resolution and high-resolution images. This is a challenge, as it is not easy to obtain experimental high-resolution ground truth images due to the diffraction limit. Thus, in order to get ground truth images, another super-resolution technique is first needed, which, as we explained earlier, is not always available. The second major challenge is that data-driven supervised training methods can be highly sensitive to the class of objects used. These types of DNNs tend to have much worse performance when used on objects that differ from the training set. Oftentimes they can hallucinate and project characteristic features of a training set onto unseen objects [39,40]. It is highly impractical for a user to collect a new training set every time they want to apply a DNN to a new type of object.

In this paper we propose a new reconstruction method that uses a DNN coupled with the forward model of the structured illumination process to produce super-resolution images without the use of training data or ground-truth images. This so-called "untrained" neural network is inspired by the deep image prior concept, which has shown that DNNs can serve as a type of general prior for natural images [41]. In recent years there has been growing interest in the use of untrained neural networks for various computational imaging problems [42–45]. However, to our knowledge, there has been no exploration of this concept as a method for the highly ill-posed problem of super-resolution microscopy, where it could have a great advantage over traditional supervised DNNs. We demonstrate that this physics-informed neural network (PINN) is robust to a wide class of objects, can be used for many types of SIM illumination, and produces resolution improvements that match theoretical limits.

2. Theory

The physical forward model of incoherent structured illumination imaging in a microscope can be described as follows:

$$H_i(f) = (I_i f \ast PSF) + N$$
where $H_i$ is the $i$-th diffraction-limited sub-frame, $I_i$ is the corresponding illumination pattern, $f$ is the fluorophore distribution/object, $PSF$ is the point spread function of the microscope, and $N$ is additive noise. The goal of a traditional reconstruction algorithm [46] is then to find the object $f^{\ast}$ such that:
$$f^{\ast} = \arg \min_{f} \left\{ \sum_{i=1}^{n} \|H_i(f) - g_{i} \|_{d} + \alpha \phi(f) \right\}$$
where $g_i$ is the $i$-th experimentally collected sub-frame, $\| \cdot \|_{d}$ is a distance metric, $\alpha$ is a constant weighting term, and $\phi$ is a regularizer or prior. This regularization term can either be engineered by hand (e.g., a sparsity penalty or total variation) or, in the case of a data-driven DNN, be statistically learned during training. Note that this statistical prior is why data-driven DNNs can achieve such impressive results but also why they tend to perform worse on data that varies from the training set.
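For concreteness, a minimal NumPy sketch of the forward process of Eq. (2) follows; the function and argument names are our own, and the additive Gaussian noise model matches the simulation procedure described in Section 3.1:

```python
import numpy as np
from scipy.signal import fftconvolve

def sim_sub_frames(obj, patterns, psf, noise_std=0.0, rng=None):
    """Sketch of the SIM forward process of Eq. (2): each sub-frame is the
    object multiplied by one illumination pattern, blurred by the PSF, plus
    additive Gaussian white noise."""
    rng = rng or np.random.default_rng()
    frames = np.stack([fftconvolve(I * obj, psf, mode="same")
                       for I in patterns])
    return frames + noise_std * rng.standard_normal(frames.shape)
```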

For our untrained PINN we aim to instead learn the inverse mapping function M with trainable parameters (kernels) $\theta$ such that:

$$M_{\theta} = \arg \min_{\theta} \left\{ \sum_{i=1}^{n} \|H_i(M_{\theta}(g)) - g_{i} \|_{d} \right\}$$

The optimization process is shown visually in Fig. 1. During optimization, the neural network is fed the set of diffraction-limited sub-images that are modulated via the SIM illumination patterns. The network then outputs an image, which is passed through the SIM forward process to generate a new series of diffraction-limited sub-images. These images are then compared to the input frames, and the loss is backpropagated through the network to update the kernels. In this way, the network is optimized without ever "seeing" a ground truth image. This process is repeated until the loss function plateaus, and the network then outputs the super-resolution image.
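A minimal TensorFlow sketch of this optimization loop is given below. Here `unet`, `g` (the stack of measured sub-frames), `patterns`, and `psf` are placeholders, images are assumed normalized to [0, 1], and the details of our released code (Ref. [51]) may differ:

```python
import tensorflow as tf

def sim_forward(img, patterns, psf):
    """Differentiable SIM forward model (Eq. (2)): modulate the image
    estimate by each known illumination pattern, then blur with the PSF
    (conv2d is cross-correlation; equivalent for a symmetric PSF)."""
    modulated = img[tf.newaxis] * patterns           # (n_frames, H, W)
    modulated = modulated[..., tf.newaxis]           # (n_frames, H, W, 1)
    kernel = psf[..., tf.newaxis, tf.newaxis]        # (kh, kw, 1, 1)
    return tf.nn.conv2d(modulated, kernel, strides=1, padding="SAME")

def train_step(unet, optimizer, g, patterns, psf):
    """One optimization step: SSIM loss between the re-simulated and the
    measured sub-frames. No ground-truth image is ever used."""
    with tf.GradientTape() as tape:
        net_in = tf.transpose(g, [1, 2, 0])[tf.newaxis]  # (1, H, W, n_frames)
        f_hat = unet(net_in)[0, ..., 0]                  # (H, W) estimate
        g_hat = sim_forward(f_hat, patterns, psf)        # (n_frames, H, W, 1)
        loss = 1.0 - tf.reduce_mean(
            tf.image.ssim(g_hat, g[..., tf.newaxis], max_val=1.0))
    grads = tape.gradient(loss, unet.trainable_variables)
    optimizer.apply_gradients(zip(grads, unet.trainable_variables))
    return loss
```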


Fig. 1. Concept of PINN for SIM. (a) Flowchart for the PINN optimization process on a single set of sub-images. (b–d) Examples of SIM illumination patterns: (b) traditional linear SIM, (c) non-linear SIM (NL-SIM), (d) localized plasmonic SIM (LPSIM). Scale bars: 177.8 nm.


3. Methods

3.1 Simulated image generation

Our simulated images are all created using data taken from the BioSR dataset (publicly available online) [47]. To generate the ground truth images, crops are taken from high-SNR SIM-reconstructed results. The images are then convolved with a small PSF to smooth out any sharp artifacts. Widefield diffraction-limited sub-images are generated by multiplying the ground truth images with a series of illumination patterns. The product of the two is then convolved with the theoretical microscope PSF (Airy disk). The sub-frames are then downsampled by 2x or 4x to mimic camera pixelation, and additive Gaussian white noise is added. Illumination patterns for non-linear SIM are simulated assuming 3 harmonics with 5 angles and 5 phases. Illumination patterns for LPSIM are based on full-wave simulations of nanopillar arrays and have 12 polar angles and 2 azimuthal angles.
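As an illustrative sketch of the pixelation and noise steps, the following assumes block-averaging for downsampling and a mean-signal SNR convention; both are our own assumptions rather than a specification of the exact procedure:

```python
import numpy as np

def pixelate_and_add_noise(frame, factor=2, snr=20, rng=None):
    """Mimic camera pixelation by block-averaging by `factor`, then add
    Gaussian white noise at a target SNR (here defined as mean signal
    divided by noise standard deviation; an illustrative convention)."""
    rng = rng or np.random.default_rng()
    h, w = frame.shape
    cropped = frame[:h - h % factor, :w - w % factor]
    small = cropped.reshape(cropped.shape[0] // factor, factor,
                            cropped.shape[1] // factor, factor).mean(axis=(1, 3))
    noise_std = small.mean() / snr
    return small + noise_std * rng.standard_normal(small.shape)
```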

3.2 Experimental data

The experimental SIM data was taken from publicly available data from the ML-SIM paper [37,48]. The data can be accessed at https://ml-sim.github.io/ and details of the experimental parameters are given in the listed reference. Image resolution estimation was performed using an ImageJ plugin implementing the decorrelation analysis algorithm [49], which is available at https://github.com/Ades91/ImDecorr.

3.3 Neural network reconstruction

The PINN is built using TensorFlow version 2.4.0 and Python version 3.8.5. The network uses a U-Net style architecture [50] with 3 downsampling and 3 upsampling layers and skip connections between them. We wrote a custom physics-informed loss function based on the SIM forward process and use the SSIM loss between the generated and input sub-frames to drive optimization. We use the Adam optimizer with a learning rate between 0.001 and 0.0001. The learning rate is decayed exponentially with a decay rate of 0.9 every 50 epochs. In most cases we run the network optimization process for 1000 epochs, which is more than enough for the loss to plateau. Running on a computer with a GTX 1080 Ti GPU, reconstruction takes on the order of minutes to tens of minutes depending on the size of the image, the size of the PSF kernel, and the number of sub-frames used. Code and demonstration notebooks are publicly available for open-source use on GitHub (Code 1, Ref. [51]).
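A sketch of the corresponding optimizer setup in TensorFlow/Keras follows; `steps_per_epoch = 1` is our assumption that one optimization step processes the whole set of sub-frames (one step per epoch):

```python
import tensorflow as tf

# Adam with an initial learning rate of 1e-3, decayed by 0.9 every
# 50 epochs, as described above.
steps_per_epoch = 1
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=50 * steps_per_epoch,
    decay_rate=0.9,
    staircase=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```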

4. Results

To test the performance of our PINN method, we first conducted various tests using simulated data in order to have ground-truth images for proper evaluation. We assessed both the resolution improvement capabilities and the versatility of the PINN across different SIM modalities and object types. We took experimental images of various biological objects from the BioSR dataset and used them as ground truth images [47] (Supp. 2). To simulate the diffraction-limited sub-frames, the images are multiplied by illumination patterns and then convolved with a known PSF, which in our case is an Airy disk. The images are then downsampled to mimic pixelation, and additive background noise is added. For each case the SIM illumination patterns are used in the physics-informed loss function and optimization is run until the loss plateaus. We use 9 sub-frames (3 phases, 3 angles) for linear SIM, 25 sub-frames (5 phases, 5 angles) for NL-SIM, and 24 sub-frames (12 polar angles, 2 azimuthal angles) for LPSIM. For all simulations we assume the fluorophores emit at 530 nm and that the objective used has an NA of 1.49.

4.1 PINN image reconstruction across illumination modality

We first assess the reconstruction ability of the PINN across a range of SIM illumination modalities. The results are shown in Fig. 2. For all three SIM imaging modalities (see Fig. 1(b)–(d)) there is clear resolution improvement and good agreement with the ground truth images. The structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and root-mean-square error (RMSE) for all three modalities indicate close alignment between the PINN reconstructions and ground truth images (Supp. 3). To evaluate resolution, we compared line profiles across the images to determine at what distance sub-diffraction features can be resolved. In the linear SIM case, we are able to distinguish two microtubules 89 nm apart (2x the diffraction limit). This confirms our expectation for linear SIM, where the illumination patterns are also diffraction limited. Furthermore, with non-linear SIM and LPSIM we are able to resolve microtubules 59 nm apart (3x the diffraction limit), demonstrating that the PINN can be readily adapted to many types of SIM and that the resolution improvement of this method is fundamentally limited by the resolution of the illumination patterns used and by how well-posed the reconstruction inverse process is. Additionally, the fact that the PINN resolution matches well with the resolution predicted for linear SIM by Eq. (1) suggests that the network is truly learning the inverse problem rather than simply relying on object priors or performing deconvolution.
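As a sketch of how such a two-line resolvability check can be scripted, the following applies a Rayleigh-style dip criterion; the criterion and the 0.8 threshold are illustrative assumptions, not necessarily what was used for the figures:

```python
import numpy as np

def two_lines_resolved(profile, dip_ratio=0.8):
    """Return True if a 1-D intensity profile across two lines shows a dip:
    the minimum between the two strongest local peaks must fall below
    `dip_ratio` times the lower of the two peaks."""
    profile = np.asarray(profile, dtype=float)
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] >= profile[i - 1] and profile[i] >= profile[i + 1]]
    if len(peaks) < 2:
        return False
    p1, p2 = sorted(sorted(peaks, key=lambda i: profile[i])[-2:])
    valley = profile[p1:p2 + 1].min()
    return valley < dip_ratio * min(profile[p1], profile[p2])
```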


Fig. 2. Demonstration of resolution improvement with multiple SIM modalities. Traditional linear SIM is able to distinguish features at 2x the diffraction limit and NL-SIM and LPSIM can accurately distinguish features at 3x the diffraction limit. Line Profiles: blue line = diffraction limited, orange line = PINN result, red dotted line = ground truth. SNR is 20 for all input frames.


4.2 PINN image reconstruction across biological objects

Next, we evaluated the ability of the PINN to reconstruct super-resolution images across a variety of biological objects. One of the drawbacks of trained neural networks is that they learn statistical priors about the objects in their training set, which tends to limit their performance when tested on new types of objects. Super-resolution imaging is a tool of discovery, and thus it is important that a user has high confidence that the PINN can produce accurate images for all types of objects, even those that have not been previously observed.

We test the PINN again on a series of simulated images that are generated from the BioSR dataset. The four object types tested are F-actin, clathrin-coated pits (CCP), endoplasmic reticulum, and microtubules, all of which have very different structures. If the PINN is simply biased to certain high-resolution structures (such as thin lines) rather than truly solving the inverse problem, then we would expect to observe hallucinations or artifacts. For our test we use non-linear SIM which presents a more challenging inverse problem as we aim to recover object features at more than 2x the diffraction limit.

Results are shown in Fig. 3. Across all objects we observe that the PINN produces images that match well with the ground truth images in terms of quality metrics (Supp. 3). The insets clearly show sub-diffraction features, and the PINN is able to recover them across a variety of objects. The frequency spectra of the diffraction-limited and PINN-reconstructed images are compared, with circles indicating the range of frequency support. The inner circle indicates the diffraction limit in frequency space and the enlarged circle indicates a 3x enlargement. In all four object types the PINN result increases the available frequency support to 3x the diffraction limit (59 nm).
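One way to visualize such a frequency-support comparison is sketched below with placeholder images; in practice the circles at the diffraction limit and at 3x would be overlaid using the known pixel size:

```python
import numpy as np
import matplotlib.pyplot as plt

def log_spectrum(img):
    """DC-centered log-magnitude spectrum for visual support comparison."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

# Placeholder images; in practice these are the diffraction-limited (LR)
# and PINN-reconstructed images.
rng = np.random.default_rng(0)
lr_img, pinn_img = rng.random((256, 256)), rng.random((256, 256))

fig, axes = plt.subplots(1, 2)
for ax, img, title in zip(axes, (lr_img, pinn_img), ("LR", "PINN")):
    ax.imshow(log_spectrum(img), cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.show()
```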


Fig. 3. Demonstration of PINN based nonlinear SIM resolution improvement on multiple object types. (Top left) F-Actin, (top right) clathrin-coated pits, (bottom left) endoplasmic reticulum, (bottom right) microtubules. LR: low resolution (diffraction limited), PINN: physics-informed neural network, GT: ground truth. SNR is 20 for all input images. Widefield scale bars: 600 nm. Inset scale bars: 100 nm.


4.3 PINN noise performance

We further test the PINN by evaluating its performance at various signal-to-noise ratios (SNRs). In practical microscopy, the SNR is limited by factors such as the exposure time and background noise; therefore, it is important that our PINN method works at reasonable SNRs. In Fig. 4 we evaluate the performance of the PINN on both linear and nonlinear SIM at SNRs ranging from 20 down to 5. We see that two-line features at 89 nm and 59 nm, respectively, can be distinguished down to an SNR of about 7.5, indicating that the PINN is able to recover super-resolution image features even at relatively low SNRs. Image quality metrics are included in Supplementary Fig. 4. For linear SIM, the quality metrics remain high down to an SNR of 12 and then drop off at 10. From Fig. 4 it is evident that dot-like noise artifacts begin to appear at an SNR of 10, corresponding to this drop in image quality. However, for nonlinear SIM the quality metrics remain stable, with only a very slight decrease down to an SNR of 5, indicating that it appears more resistant to noise-related artifacts, perhaps due to the larger number of input sub-frames used.


Fig. 4. Assessment of PINN reconstruction performance at various signal-to-noise ratios. (Top section) SIM, (bottom section) non-linear SIM. For each section: (top row) individual raw frame at the given SNR, (middle row) PINN reconstruction result, (bottom row) line profile of two closely spaced microtubules (blue dotted: ground truth, orange: PINN result). SIM scale bar: 400 nm. NL-SIM scale bar: 200 nm.


4.4 PINN experimental demonstration on traditional SIM

Lastly, we test the PINN on experimental linear SIM data to confirm that the results match theory and our simulations. We use publicly available SIM data taken on samples of endoplasmic reticulum [37]. Since we do not have ground truth images for the experimental data, we used a decorrelation analysis tool to estimate resolution. The decorrelation analysis algorithm uses partial phase autocorrelation to measure resolution from a single image and has become a widely used tool for resolution estimation in microscopy [49]. Our results are shown in Fig. 5. The PINN reconstruction shows clear resolution improvement, as shown by the zoomed-in insets. Additionally, the decorrelation analysis software package estimates a resolution of 105 nm for the PINN image, which corresponds to a resolution improvement of 1.78x over the diffraction limit. This matches well with both the theoretically expected resolution improvement (~1.7x) from Eq. (1) and the estimate from the FairSIM ImageJ module (Supp. 6).


Fig. 5. Experimental assessment of PINN for linear SIM on endoplasmic reticulum. (Left) Diffraction-limited image, (right) PINN result, (insets) zoomed-in view of the dashed green regions showing sub-diffraction features. Widefield scale bars: 1,000 nm. Inset scale bars: 250 nm.


5. Discussion and conclusion

We demonstrate that an untrained physics-informed neural network can be used for the reconstruction of super-resolution images with structured illumination microscopy. The method does not require the collection of any training data and can be used for multiple types of SIM imaging modalities. The PINN is generalizable and shows high-fidelity reconstruction across many types of biological objects and noise levels. Furthermore, the only hand-tuned parameter is the initial learning rate, which typically produces good results when between $10^{-3}$ and $10^{-4}$.

As the PINN does not rely on a statistically learned prior, the main limiting factor of this method is how ill-posed the inverse reconstruction problem is. If the SNR is too low, the modulation depth is poor, or the illumination patterns are not well chosen, the reconstructed image will not be optimal.

Additionally, the current PINN method described here assumes that both the illumination patterns and their location relative to the field of view are known. As demonstrated in Fig. 5, this is experimentally achievable; however, it can become challenging in cases where the illumination patterns are below the diffraction limit. Pattern location can be experimentally determined using markers, as in the case of LPSIM. In future work, we aim to modify our method to account for unknown pattern locations in order to make reconstruction possible in cases where markers are difficult or impossible to fabricate.

We envision our PINN method as a flexible, open-source, and nearly hyperparameter-free method for general structured illumination imaging. We believe it will serve as an important tool for microscopists who need an easily modifiable reconstruction method that does not require any training data.

Funding

Gordon and Betty Moore Foundation (5722); National Science Foundation (DGE-2038238).

Acknowledgments

This work was supported by grants from the National Science Foundation Graduate Research Fellowship Program (to Z. Burns) and by the Gordon and Betty Moore Foundation (to Z. Liu). Z.B. implemented software, ran experiments, and wrote the manuscript. Z.L. supervised the project and helped write the manuscript. The authors thank Junxiang Zhao for his simulation of LPSIM illumination patterns.

Disclosures

The authors declare no conflicts of interest.

Data availability

Code and simulated data from the paper are publicly available through GitHub [51]. The experimental endoplasmic reticulum data, taken from the ML-SIM dataset, is available in Ref. [48].

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. Wu and H. Shroff, “Faster, sharper, and deeper: structured illumination microscopy for biological imaging,” Nat. Methods 15(12), 1011–1019 (2018). [CrossRef]  

2. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photonics 7(2), 241–275 (2015). [CrossRef]  

3. F. Ströhl and C. F. Kaminski, “Frontiers in structured illumination microscopy,” Optica 3(6), 667–677 (2016). [CrossRef]  

4. X. Zheng, J. Zhou, L. Wang, M. Wang, W. Wu, J. Chen, J. Qu, B. Z. Gao, and Y. Shao, “Current challenges and solutions of super-resolution structured illumination microscopy,” APL Photonics 6(2), 020901 (2021). [CrossRef]  

5. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

6. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]  

7. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

8. M. G. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. 102(37), 13081–13086 (2005). [CrossRef]  

9. E. H. Rego, L. Shao, J. J. Macklin, L. Winoto, G. A. Johansson, N. Kamps-Hughes, M. W. Davidson, and M. G. Gustafsson, “Nonlinear structured-illumination microscopy with a photoswitchable protein reveals cellular structures at 50-nm resolution,” Proc. Natl. Acad. Sci. 109(3), E135–E143 (2012). [CrossRef]  

10. B. Liu, C. Chen, X. Di, J. Liao, S. Wen, Q. P. Su, X. Shan, Z.-Q. Xu, L. A. Ju, C. Mi, F. Wang, and D. Jin, “Upconversion nonlinear structured illumination microscopy,” Nano Lett. 20(7), 4775–4781 (2020). [CrossRef]  

11. M. Tang, Y. Han, D. Ye, Q. Zhang, C. Pang, X. Liu, W. Shen, Y. Ma, C. F. Kaminski, X. Liu, and Q. Yang, “High-refractive-index chip with periodically fine-tuning gratings for tunable virtual-wavevector spatial frequency shift universal super-resolution imaging,” Adv. Sci. 9(9), 2103835 (2022). [CrossRef]  

12. Ø. I. Helle, F. T. Dullo, M. Lahrberg, J.-C. Tinguely, O. G. Hellesø, and B. S. Ahluwalia, “Structured illumination microscopy using a photonic chip,” Nat. Photonics 14(7), 431–438 (2020). [CrossRef]  

13. E. Ozbay, “Plasmonics: merging photonics and electronics at nanoscale dimensions,” Science 311(5758), 189–193 (2006). [CrossRef]  

14. D. K. Gramotnev and S. I. Bozhevolnyi, “Plasmonics beyond the diffraction limit,” Nat. Photonics 4(2), 83–91 (2010). [CrossRef]  

15. A. Poddubny, I. Iorsh, P. Belov, and Y. Kivshar, “Hyperbolic metamaterials,” Nat. Photonics 7(12), 948–957 (2013). [CrossRef]  

16. L. Ferrari, C. Wu, D. Lepage, X. Zhang, and Z. Liu, “Hyperbolic metamaterials and their applications,” Prog. Quantum Electron. 40, 1–40 (2015). [CrossRef]  

17. F. Wei and Z. Liu, “Plasmonic structured illumination microscopy,” Nano Lett. 10(7), 2531–2536 (2010). [CrossRef]  

18. F. Wei, D. Lu, H. Shen, W. Wan, J. L. Ponsetto, E. Huang, and Z. Liu, “Wide field super-resolution surface imaging through plasmonic structured illumination microscopy,” Nano Lett. 14(8), 4634–4639 (2014). [CrossRef]  

19. J. L. Ponsetto, F. Wei, and Z. Liu, “Localized plasmon assisted structured illumination microscopy for wide-field high-speed dispersion-independent super resolution imaging,” Nanoscale 6(11), 5807–5812 (2014). [CrossRef]  

20. J. L. Ponsetto, A. Bezryadina, F. Wei, K. Onishi, H. Shen, E. Huang, L. Ferrari, Q. Ma, Y. Zou, and Z. Liu, “Experimental demonstration of localized plasmonic structured illumination microscopy,” ACS Nano 11(6), 5344–5350 (2017). [CrossRef]  

21. A. Bezryadina, J. Zhao, Y. Xia, X. Zhang, and Z. Liu, “High spatiotemporal resolution imaging with localized plasmonic structured illumination microscopy,” ACS Nano 12(8), 8248–8254 (2018). [CrossRef]  

22. A. I. Fernández-Domínguez, Z. Liu, and J. B. Pendry, “Coherent four-fold super-resolution imaging with composite photonic–plasmonic structured illumination,” ACS Photonics 2(3), 341–348 (2015). [CrossRef]  

23. Y. U. Lee, J. Zhao, Q. Ma, L. K. Khorashad, C. Posner, G. Li, G. B. M. Wisna, Z. Burns, J. Zhang, and Z. Liu, “Metamaterial assisted illumination nanoscopy via random super-resolution speckles,” Nat. Commun. 12(1), 1–8 (2021). [CrossRef]  

24. Y. U. Lee, Z. Nie, S. Li, C.-H. Lambert, J. Zhao, F. Yang, G. B. M. Wisna, S. Yang, X. Zhang, and Z. Liu, “Ultrathin layered hyperbolic metamaterial-assisted illumination nanoscopy,” Nano Lett. 22(14), 5916–5921 (2022). [CrossRef]  

25. Y. U. Lee, C. Posner, Z. Nie, J. Zhao, S. Li, S. E. Bopp, G. B. M. Wisna, J. Ha, C. Song, J. Zhang, S. Yang, and Z. Liu, “Organic hyperbolic material assisted illumination nanoscopy,” Adv. Sci. 8(22), 2102230 (2021). [CrossRef]  

26. C. Karras, M. Smedh, R. Förster, H. Deschout, J. Fernandez-Rodriguez, and R. Heintzmann, “Successful optimization of reconstruction parameters in structured illumination microscopy–a practical guide,” Opt. Commun. 436, 69–75 (2019). [CrossRef]  

27. A. Lal, C. Shan, and P. Xi, “Structured illumination microscopy image reconstruction algorithm,” IEEE J. Sel. Top. Quantum Electron. 22(4), 50–63 (2016). [CrossRef]  

28. C. S. Smith, J. A. Slotman, L. Schermelleh, N. Chakrova, S. Hari, Y. Vos, C. W. Hagen, M. Müller, W. van Cappellen, A. B. Houtsmuller, J. P. Hoogenboom, and S. Stallinga, “Structured illumination microscopy with noise-controlled image reconstructions,” Nat. Methods 18(7), 821–828 (2021). [CrossRef]  

29. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012). [CrossRef]  

30. L.-H. Yeh, L. Tian, and L. Waller, “Structured illumination microscopy with unknown patterns and a statistical prior,” Biomed. Opt. Express 8(2), 695–711 (2017). [CrossRef]  

31. M. Müller, V. Mönkemöller, S. Hennig, W. Hübner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7(1), 10980 (2016). [CrossRef]

32. L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020). [CrossRef]  

33. C. Ling, C. Zhang, M. Wang, F. Meng, L. Du, and X. Yuan, “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350–1359 (2020). [CrossRef]  

34. M. A. Boland, E. A. Cohen, S. R. Flaxman, and M. A. Neil, “Improving axial resolution in structured illumination microscopy using deep learning,” Phil. Trans. R. Soc. A 379(2199), 20200298 (2021). [CrossRef]  

35. Z. H. Shah, M. Müller, T.-C. Wang, P. M. Scheidig, A. Schneider, M. Schüttpelz, T. Huser, and W. Schenck, “Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images,” Photonics Res. 9(5), B168–B181 (2021). [CrossRef]  

36. C. Qiao, D. Li, Y. Guo, C. Liu, T. Jiang, Q. Dai, and D. Li, “Evaluation and development of deep neural networks for image super-resolution in optical microscopy,” Nat. Methods 18(2), 194–202 (2021). [CrossRef]  

37. C. N. Christensen, E. N. Ward, M. Lu, P. Lio, and C. F. Kaminski, “ML-SIM: universal reconstruction of structured illumination microscopy images using transfer learning,” Biomed. Opt. Express 12(5), 2720–2733 (2021). [CrossRef]

38. Z. Burns, J. Zhao, Y. U. Lee, and Z. Liu, “Deep learning based metamaterial assisted illumination nanoscopy,” in Emerging Topics in Artificial Intelligence (ETAI) 2021, vol. 11804 (SPIE, 2021), p. 118040Z.

39. D. P. Hoffman, I. Slavitt, and C. A. Fitzpatrick, “The promise and peril of deep learning in microscopy,” Nat. Methods 18(2), 131–132 (2021). [CrossRef]  

40. C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019). [CrossRef]  

41. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 9446–9454.

42. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

43. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020). [CrossRef]  

44. K. Monakhova, V. Tran, G. Kuo, and L. Waller, “Untrained networks for compressive lensless photography,” Opt. Express 29(13), 20913–20929 (2021). [CrossRef]  

45. M. Qiao, X. Liu, and X. Yuan, “Snapshot temporal compressive microscopy using an iterative algorithm with untrained neural networks,” Opt. Lett. 46(8), 1888–1891 (2021). [CrossRef]  

46. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

47. C. Qiao and D. Li, “BioSR: a biological image dataset for super-resolution microscopy,” figshare, (2020), https://doi.org/10.6084/m9.figshare.13264793.v7 .

48. C. Christensen, “ML-SIM,” Github, 2020, https://github.com/charlesnchr/ML-SIM.

49. A. Descloux, K. S. Grußmayer, and A. Radenovic, “Parameter-free image resolution estimation based on decorrelation analysis,” Nat. Methods 16(9), 918–924 (2019). [CrossRef]  

50. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 234–241.

51. Z. Burns, “Untrained, physics-informed neural networks for structured illumination microscopy,” Github, 2023, https://github.com/Zach-T-Burns/Untrained-PINN-for-SIM.

Supplementary Material (2)

Code 1: Link to code housed on GitHub
Supplement 1: Supplemental information document
