Optica Publishing Group

Differentiable model-based adaptive optics for two-photon microscopy

Open Access

Abstract

Aberrations limit scanning fluorescence microscopy when imaging in scattering materials such as biological tissue. Model-based approaches for adaptive optics take advantage of a computational model of the optical setup. Such models can be combined with the optimization techniques of machine learning frameworks to find aberration corrections, as was demonstrated for focusing a laser beam through aberrations onto a camera [Opt. Express 28, 26436 (2020) [CrossRef]  ]. Here, we extend this approach to two-photon scanning microscopy. The developed sensorless technique finds corrections for aberrations in scattering samples and will be useful for a range of imaging applications, for example in brain tissue.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Using adaptive optics for imaging in aberrating samples requires finding appropriate corrections, which can be determined using wavefront sensors [1–4]. Alternatively, a variety of so-called sensorless approaches, which do not require a wavefront sensor, have been developed. One class of algorithms takes advantage of optimization. For example, iteratively modulating and updating an excitation wavefront depending on the resulting fluorescence intensity can be used for finding aberration corrections [3,5].

Other approaches for adaptive optics additionally take advantage of prior information about the optical system by including system information in a computational model [6–12]. In this case, optimization can be used for finding aberrations, which enter the model as undetermined parameters. Different from the above-mentioned optimization approaches, all data is already provided at the beginning of the optimization process and not recorded iteratively [11].

More recently, the development of machine learning frameworks such as TensorFlow has enabled optimizing computationally demanding models in many areas of physics and engineering (for example [13–20]) and also in optical imaging [17,21,22].

This approach can be applied for adaptive optics [21]. Using a differentiable model of the optical system, focusing in transmission through a single aberrating layer as well as in a reflection, epidetection configuration through two aberrating layers was achieved [21]. Here, we extend this differentiable model-based adaptive optics method to scanning two-photon microscopy with a high numerical aperture objective as used for in vivo imaging in brain tissue.

For this, a fluorescent guide star in an aberrated sample is probed with a small number of excitation modulations. The resulting dataset of pairs of excitation modulations and guide star images is sufficient for constraining model optimization in Tensorflow. This approach allows including detailed setup information in the optimization process and finds corrections independent of prior assumptions about the statistics of aberrations. We show experimentally that aberrations in a sample can be determined and corrected.

2. Differentiable model-based approach for adaptive optics

Modern machine learning frameworks efficiently implement model training by gradient-based minimization of a loss function describing the mismatch between the output of a model and a target. They provide gradient-based optimizers and an automatic differentiation framework [23], leaving for the user only the task of implementing the desired model and loss function as a composition of differentiable functions. Here, we implement a model that simulates two-photon image formation in a scanning microscope depending on a set of free parameters which correspond to the sample aberration and are found through optimization (Fig. 1). A further advantage of this approach is the efficient implementation of optimization and model evaluation on a GPU, for example in TensorFlow, which was used here.


Fig. 1. Left: Schematic of the two-photon microscope and corresponding model optimization. A laser beam is scanned with scanning mirrors and focused through an objective. Fluorescence light is collected with a dichroic mirror onto a photomultiplier tube. A computational model is set up to describe the microscope in the absence of aberrations. After introducing aberrations, the computational model receives phase modulations displayed on the SLM as input. The model output is a set of point spread function (PSF) images that are compared with experimentally recorded PSF images through the loss function. The optimizer (running in TensorFlow on a GPU) finds a sample aberration (and corresponding correction) by matching the model output with experimental PSF images. Right: The computational model is formulated in expression (5). The beam cross-section, represented as a complex amplitude, is multiplied by phase objects representing SLM probing modulation, objective, and aberration, respectively, and is propagated to the focal plane of the objective using the angular spectrum method to calculate the resulting PSF. The two-photon image is the convolution of the squared PSF intensity and the object function, resulting in just the point spread function squared for a guide star. The beam profile $U_0$ and objective parameters are adjusted to describe the experimental setup. The SLM modulation is the model input, the PSF is the model output, the aberration is found through optimization.


This approach is schematically illustrated in Fig. 1. First, a computational model is fitted to the two-photon microscope. This model accurately describes propagation of the laser beam through the microscope, including the spatial light modulator and the high numerical aperture objective, as well as image formation with the detected fluorescence. Then, aberrations are found by optimizing free model parameters corresponding to the undetermined sample aberrations (see Fig. 1 for details).

3. Setup and image preprocessing

The setup is schematically shown in Fig. 1. A custom-built two-photon scanning microscope was used, with components similar to the one described in [24]. A laser beam (920 nm, Chameleon Discovery, Coherent) was expanded and reflected off a spatial light modulator (Meadowlark, HSP1920-1064-HSP8). The spatial light modulator was imaged onto the resonant scanning mirror with a pair of lenses. The scanning mirrors were imaged into the back focal plane of the objective, and fluorescence was detected with a dichroic mirror and a lens focusing the detected light onto a photomultiplier tube (Hamamatsu, H7422PA-40) using photon counting. The same excitation and fluorescence detection arrangement was used as described in [24], however with a different objective (Nikon CFI Apo 60X W NIR, N.A. 1.0, 2.8 mm W.D.) and correspondingly adjusted tube and scan lenses (all lenses used in the setup were achromatic doublets from Thorlabs). The microscope objective was underfilled, resulting in a point spread function (measured with 1 $\mu$m diameter beads) with a full width at half maximum (FWHM) of $0.9 \mu$m laterally and $3.7 \mu$m axially. The microscope was controlled with Scanimage [25], which was integrated with custom software written in Python for SLM control and for synchronizing probing modulations with image acquisition using Scanimage API callbacks.

Fluorescence images were recorded using photon counting at $512\times 512$ pixels resolution at 30 Hz. This resulted in sparse images with low photon counts per pixel, with many discontinuities (gaps) in intensities hindering the correct estimation of the similarity with the model output. Therefore, all fluorescence images were preprocessed with a low-pass filter: a discrete Fourier transform was applied to the images and frequencies exceeding $0.15$ of the pattern resolution were discarded before inverse transformation. Examples of images before and after preprocessing are shown in Fig. 2 (columns two and three, respectively).
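This preprocessing step can be sketched as follows. The exact filter geometry is not specified in the text, so interpreting the cutoff as a fraction of the Nyquist frequency is an assumption, and `lowpass_filter` is a hypothetical helper name.

```python
import numpy as np

def lowpass_filter(image, cutoff=0.15):
    # Low-pass filter a sparse photon-counting image: transform to the
    # Fourier domain, zero all components whose radial frequency exceeds
    # `cutoff` of the Nyquist frequency (one interpretation of the paper's
    # criterion), and transform back, keeping the real part.
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]   # cycles/pixel, Nyquist = 0.5
    fx = np.fft.fftfreq(nx)[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= cutoff * 0.5
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

# Example: a single-photon "spike" becomes a smooth blob after filtering,
# while the total intensity (the DC component) is preserved.
img = np.zeros((64, 64))
img[32, 32] = 1.0
smooth = lowpass_filter(img)
```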


Fig. 2. Correspondence between computationally generated and experimentally measured images of fluorescent beads. Three representative examples (rows), recorded in an aberrating sample: probing modulations displayed on the SLM (first column), and corresponding PSF images as measured with photon counting (second column), preprocessed for model fitting (third column), and simulated with the optimized model (fourth column). Field of view size is $40\times 40 \mu$m.


4. Computational modeling and optimization

4.1 Computational model of the optical setup

To implement differentiable model-based adaptive optics for two-photon microscopy, first a differentiable model of the setup needs to be established. The main elements of the setup model are illustrated in Fig. 1, right side. The model consists of, first, a phase modulation describing the spatial light modulator (SLM), second, a phase function describing the focusing objective, and, third, an (unknown) sample phase aberration. The optimization process is schematically illustrated in Fig. 1 (see figure legend for details).

The computational model is based on Fourier optics. Light propagation along the optical axis ($z$-axis; x, y, z are spatial coordinates) is represented as a complex amplitude $U(x, y, z)$. The wavefront propagates in free space and through a sequence of planar phase objects along the optical axis. The interaction of the wavefront $U(x, y, d)$ with a phase object $\phi (x, y, d)$ at plane $d$ is described as a multiplication:

$$U(x, y, d)\cdot\exp\left[{i\phi(x,y,d)}\right].$$

Free space propagation of the wavefront over a distance $d$ is calculated using the angular spectrum method with the following operator [26]:

$$\begin{aligned}U(x, y, z+d) & = P_d(U(x, y, z)) = \iint A(f_X, f_Y; z)\,\textrm{circ}\left(\sqrt{(\lambda f_X)^2+(\lambda f_Y)^2}\right)\\ & \times H(f_X, f_Y)\exp\left[i2\pi(f_Xx+f_Yy)\right] \,\textrm{d}f_X\,\textrm{d}f_Y. \end{aligned}$$

Here, $A(f_X, f_Y; z)$ is the Fourier transform of $U(x, y, z)$, $f_X$ and $f_Y$ are spatial frequencies, the circ function is 1 inside the circle with the radius in the argument and 0 outside [26], and $H(f_X, f_Y) = \exp \left [i2\pi \frac {d}{\lambda }\sqrt {1-(\lambda f_X)^2-(\lambda f_Y)^2}\right ]$ is the optical transfer function. Light intensity as measured at the sensor is given by

$$I(x, y, z) = \left|U(x, y, z)\right|^2.$$
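The angular spectrum propagation of Eq. (2) can be sketched directly from these definitions; the function name and the grid parameters in the example below are illustrative, not taken from the paper.

```python
import numpy as np

def angular_spectrum_propagate(U, d, wavelength, dx):
    # Propagate the complex field U over distance d using the angular
    # spectrum method of Eq. (2): Fourier transform, discard evanescent
    # components (the circ function), multiply by the optical transfer
    # function H, and transform back.
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, dx)[None, :]        # spatial frequencies f_X
    fy = np.fft.fftfreq(ny, dx)[:, None]        # spatial frequencies f_Y
    rho2 = (wavelength * fx)**2 + (wavelength * fy)**2
    circ = rho2 < 1.0                           # propagating-wave band
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.where(circ, 1.0 - rho2, 0.0))
    H = np.exp(1j * kz * d) * circ              # optical transfer function
    return np.fft.ifft2(np.fft.fft2(U) * H)

# Example: a Gaussian beam propagated over 50 um.  Within the propagating
# band the operator is unitary, so the total energy is conserved.
n, dx, lam = 128, 1e-6, 0.5e-6
coords = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(coords, coords)
U0 = np.exp(-(X**2 + Y**2) / (2 * (8e-6)**2)).astype(complex)
Ud = angular_spectrum_propagate(U0, 50e-6, lam, dx)
```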

For two-photon imaging [27] the induced fluorescence intensity is proportional to the square of the excitation intensity:

$$I_{\textrm{f}} \propto I^2 = \left|U\right|^4.$$

Using these equations, the image of a fluorescent bead in the microscope (or the PSF) is simulated with the following function, taking into account that the propagating wavefront is modulated by the SLM and phase aberration of the sample:

$$\begin{aligned}I_{\textrm{f}} = S(\phi_{\textrm{SLM}}, \phi_{\textrm{aberration}}) & = \left| P_{f_{\textrm{MO}}}(U_{0}\cdot\exp\left[i(\phi_{\textrm{SLM}}+\phi_{\textrm{MO}}+\phi_{\textrm{aberration}}) \right])\right|^4, \end{aligned}$$
where $\phi _{\textrm {SLM}}$ is the SLM phase modulation, $\phi _{\textrm {aberration}}$ is the phase surface of the aberration, $U_{0}$ is the (complex) beam cross-section amplitude, and $f_{\textrm {MO}}$ and $\phi _{\textrm {MO}}$ are the focal length and the corresponding phase modulation of the microscope objective, respectively. Since the SLM is imaged onto the back focal plane of the microscope objective, its modulation is modelled directly in the same z-plane as the objective, which omits nonessential computations. The aberration is also modelled in this plane, saving computations and simplifying the correction: since the phase function of the aberration is found at the SLM plane, its inverse can be directly applied to the SLM without any additional computations of propagation between different planes.
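A minimal sketch of the forward model $S$ in Eq. (5): the Gaussian pupil field $U_0$ is multiplied by the SLM and aberration phases, and focusing through the ideal objective is modelled here by a single Fourier transform (the pupil-to-focal-plane relation), a simplified stand-in for the paper's angular spectrum propagation over $f_{\textrm{MO}}$. The function name `two_photon_psf` and the beam width `sigma_px` are hypothetical.

```python
import numpy as np

def two_photon_psf(phi_slm, phi_ab, sigma_px=40.0):
    # Forward model sketch of Eq. (5): pupil field U0 (Gaussian amplitude,
    # flat phase) times exp(i(phi_SLM + phi_aberration)), focused by a
    # single FFT; the two-photon signal is the intensity squared, |U|^4.
    n = phi_slm.shape[0]
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x)
    U0 = np.exp(-(X**2 + Y**2) / (2.0 * sigma_px**2))
    pupil = U0 * np.exp(1j * (phi_slm + phi_ab))
    U_focus = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    intensity = np.abs(U_focus)**2
    return intensity**2          # two-photon fluorescence, Eq. (4)

# Example: with flat SLM and no aberration the PSF peaks at the center.
flat = np.zeros((128, 128))
psf0 = two_photon_psf(flat, flat)
```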

4.2 Fitting the computational model to the experimental setup

The goal of the optimization process is matching the computational model to experimental observations with correlations as a similarity measure. The model images therefore have to match in each pixel with the images observed in the microscope in the absence of sample aberrations. When responses to probing SLM modulations are measured under these aberration-free conditions, model optimization is expected to yield a flat aberration. Optimization without sample aberrations therefore can be used for validating the correspondence between the model and the microscope. The model Eq. (5) contains fixed parameters $U_{0}$ (Gaussian amplitude profile with flat phase, $\sigma =2.85$ mm), $\lambda =920$ nm, $f_{\textrm {MO}}=1700$ mm, and corresponding $\phi _{\textrm {MO}}$ which were matched to the experimental setup by taking into account pupil size $d=4.0$ mm, field of view size ($60 \times 40 \mu$m at zoom level 10), and the resolution of imaging and simulation ($512\times 512$ pixels). Significant elongation of $f_{\textrm {MO}}$ allows approximate lateral upscaling of simulated images for matching the microscope zoom level without resampling of the angular spectrum. Additionally, minor rotational misalignments of the SLM in the image plane were observed and adjusted for by corresponding counter-rotations of the recorded fluorescence images, since such rotations could not be corrected by adjusting the SLM phase. A fixed zoom level and field of view were used for all experiments and the model was adjusted for these conditions.

The parameters were tuned in the following procedure: first, a sparse aberration-free fluorescent guide star (1 $\mu$m diameter fluorescent beads embedded in 3 $\%$ agarose) was imaged under various modulations (Zernike modes Z1 through Z10 displayed one-by-one with alternating magnitudes). The parameters were then manually tuned to closely match computed images to actual images. For this, zoom level and SLM orientation were varied while displaying a set of different SLM phase modulations. This procedure only needed to be done once and was therefore performed manually, but a computational optimization would also be possible. The resulting model was validated by running the model optimization for finding corrections in the absence of introduced sample aberrations. After successful parameter optimization, the residual aberrations of the system were close to zero, indicating that the computational model accurately described the experimental microscope setup.

As shown in Fig. 2, after tuning of the model parameters, good correspondence between experimentally observed and computationally generated patterns was achieved (shown here after optimization in an aberrating sample, see below).

4.3 Model optimization and loss function

Sample aberrations introduce an unknown phase function into the optical setup. To mirror this situation computationally, a phase surface is added as a set of free parameters to the model [21]. This unknown phase surface needs to be adjusted through optimization in such a way that it describes the introduced sample aberration. After successful optimization, the aberration phase surface is known, as verified by the model again matching the optical setup (Fig. 2), and can therefore be corrected.

We found that optimization with a single image of the fluorescent bead and a corresponding single SLM phase modulation often did not yield satisfactory results, since multiple possible phase modulations can generate similar planar PSF images. To constrain the optimization process, we therefore probed the guide star with a set of different excitation modulations as in [21], by displaying randomly generated phase patterns on the SLM and imaging the resulting changes in the aberrated guide star.

In total, 20 such pairs of SLM phase modulations and corresponding two-photon images served as the input to the computational model. Images were recorded using photon counting at a frame rate of 30 Hz. (The SLM has a maximum frame rate of more than 500 Hz, but the control loop implemented here ran only at 2 Hz since SLM and image acquisition were not closely integrated.) Probing phase modulations were generated by summing Zernike modes Z1 through Z10 with coefficients drawn from a uniform random distribution in the range of $-1$ to $1$ and were displayed on the SLM while corresponding fluorescence images $I_{\mathrm {f}}$ were recorded.
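Generating such probing modulations can be sketched as follows. The modes below are a hand-written subset of low-order Zernike polynomials standing in for the paper's Z1 through Z10, since indexing and normalization conventions vary between sources.

```python
import numpy as np

def zernike_modes(n):
    # A subset of low-order Zernike modes on the unit disk (tip, tilt,
    # defocus, astigmatism, coma), evaluated on an n x n grid.
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r2 = x**2 + y**2
    r = np.sqrt(r2)
    th = np.arctan2(y, x)
    disk = (r <= 1.0).astype(float)
    modes = [x, y,                                     # tip, tilt
             2.0 * r2 - 1.0,                           # defocus
             r2 * np.cos(2 * th), r2 * np.sin(2 * th), # astigmatism
             (3.0 * r2 - 2.0) * r * np.cos(th),        # coma (x)
             (3.0 * r2 - 2.0) * r * np.sin(th)]        # coma (y)
    return [m * disk for m in modes]

# Sum the modes with coefficients drawn uniformly from [-1, 1], as in the
# text, to produce 20 random probing phase patterns for the SLM.
modes = zernike_modes(128)
rng = np.random.default_rng(0)
probes = [sum(c * m for c, m in zip(rng.uniform(-1, 1, len(modes)), modes))
          for _ in range(20)]
```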

The unknown aberration phase function was then found in an optimization process with the goal of matching measured and simulated images:

$$\mathop{\arg\,\min}\limits_{\phi_{\textrm{aberration}}} \sum_{j=1}^{N}\textrm{loss}(S(\phi_{\textrm{SLM}_{j}}, \phi_{\textrm{aberration}}), I_{\textrm{f}_{j}}).$$

Here, $\phi _{\textrm{SLM}_{j}}$ and $I_{\textrm{f}_{j}}$ are pairs of SLM probing modulations and corresponding fluorescence images recorded in the two-photon microscope. $S$ is the microscope model as specified in (5). The loss function is defined to reflect the similarity between simulated and measured fluorescence images:

$$\textrm{loss}(\textrm{prediction}, \textrm{target}) = -r\left[\textrm{prediction}, \textrm{target}\right]\cdot\ln{\left(\sum^{\textrm{pixels}}|\textrm{prediction}|\right)},$$
where $r\left [\mathrm {prediction}, \mathrm {target}\right ]$ is Pearson’s correlation coefficient and $\ln {(\sum ^{\mathrm {pixels}}|\mathrm {prediction}|)}$ is an additional cost factor for light intensity conservation. Since Pearson’s correlation coefficient is not sensitive to the magnitude of the prediction, the optimizer can converge to a solution that provides high correlation but discards some of the light by redirecting it out of the field of view; such solutions usually do not result in good corrections. Additionally introducing the sum of the total intensity therefore promotes solutions that do not discard excitation intensity. The logarithm ensures that the slope of this regularization factor is always smaller than that of the correlation coefficient, making it a secondary optimization goal.
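The loss of Eq. (7) translates directly into code; a minimal sketch:

```python
import numpy as np

def loss(prediction, target):
    # Eq. (7): negative Pearson correlation between predicted and measured
    # images, scaled by the log of the total predicted intensity, which
    # penalizes solutions that redirect light out of the field of view.
    p = prediction.ravel()
    t = target.ravel()
    r = np.corrcoef(p, t)[0, 1]
    return -r * np.log(np.sum(np.abs(p)))
```

For a fixed prediction, the intensity factor is constant, so a higher correlation always gives a lower (better) loss; for a perfect match ($r = 1$), the loss reduces to the negative log of the total intensity.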

The model (5) was implemented using TensorFlow 2.4 [28], adapting the angular spectrum method from [29] and using the Adam optimization algorithm with a learning rate of $0.01$. According to expression (5), the phase $\phi _{\mathrm {aberration}}$ is represented as a real-valued tensor, which is a requirement for optimization variables in TensorFlow. All modulations and sample responses were packed into a single batch and used all at once in each of the optimizer’s iterations. The optimization was terminated when the correlation coefficient between the computational model and the experimental data plateaued; 1000 optimization steps were typically sufficient for reaching a correlation coefficient of $>0.9$ between model and observations. The optimization took between 1 and 2 minutes on a workstation with four Nvidia Titan RTX GPUs used in data parallel mode. The resulting aberration phase function was negated (multiplied by $-1$) for display on the SLM, which in this way corrects the aberration.
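The optimization principle can be illustrated with a toy one-parameter example. The paper optimizes the full phase surface with TensorFlow's automatic differentiation and Adam; the sketch below instead recovers a single defocus coefficient by plain gradient descent with a finite-difference gradient, and all grid sizes, coefficients, and the learning rate are chosen purely for illustration.

```python
import numpy as np

n = 64
y, x = np.mgrid[-4:4:1j * n, -4:4:1j * n]
r2 = x**2 + y**2
disk = r2 <= 1.0                        # pupil aperture
defocus = (2.0 * r2 - 1.0) * disk       # Zernike defocus mode

def psf(c):
    # Forward model: two-photon PSF = |FT(pupil)|^4, cf. Eq. (5).
    pupil = disk * np.exp(1j * c * defocus)
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**4

true_coeff, probe = 1.5, 0.8            # unknown aberration, known probing modulation
targets = [psf(true_coeff), psf(true_coeff + probe)]   # "measured" images

def neg_corr(a, b):                     # negative Pearson correlation
    return -np.corrcoef(a.ravel(), b.ravel())[0, 1]

def total_loss(c):                      # summed over probing modulations, Eq. (6)
    return neg_corr(psf(c), targets[0]) + neg_corr(psf(c + probe), targets[1])

c, lr, eps = 0.0, 0.2, 1e-3
for _ in range(400):                    # gradient descent drives c toward true_coeff
    g = (total_loss(c + eps) - total_loss(c - eps)) / (2.0 * eps)
    c -= lr * g
```

Note that without the probing modulation the loss would be symmetric in the sign of the defocus coefficient (a symmetric pupil gives identical PSFs for $\pm c$), leaving the optimizer stuck at $c = 0$; this mirrors the observation above that a single image underconstrains the optimization.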

5. Two-photon imaging through aberrations

To test the approach experimentally, a sample aberration was introduced by covering the fluorescent beads with a layer of vacuum grease on a microscope cover slide. For correcting the induced aberrations, images were taken with a single guide star at the center of a field of view with dimensions of $60\times 40 \mu$m.

Two representative results are shown in Fig. 3. Three orthogonal maximum intensity projections of a fluorescent bead, before and after correction, recorded in a volume with an axial step size of 0.25 $\mu$m between planes are displayed. The third and sixth rows show the improved intensity profiles along representative cross-sections through the fluorescent bead in lateral and axial directions, respectively. Corrections resulted in an increase in intensity and an improved focus, with a lateral full width at half maximum (FWHM) after correction of $0.9 \mu$m for both samples, and an axial FWHM of $14 \mu$m and $5.2 \mu$m, respectively (measured with a $1 \mu$m diameter bead). The intensity profiles before correction could not be fitted with Gaussians. The slices at the focal plane (at maximum intensity) were averaged over four frames (axially spaced by 0.25 $\mu$m). Axial profiles were obtained by averaging over $3\times 3$ pixels (the lateral dimensions) around the center of the focus (maximum intensity).
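Extracting a FWHM from a fitted Gaussian profile can be sketched as follows. The paper does not specify its fitting routine, so the log-parabola fit and the helper name `gaussian_fwhm` are assumptions for illustration.

```python
import numpy as np

def gaussian_fwhm(x, profile):
    # Estimate the FWHM of an intensity cross-section via a Gaussian fit:
    # the log of a Gaussian is a parabola, so a quadratic polynomial fit
    # near the peak yields sigma, and FWHM = 2*sqrt(2*ln 2)*sigma.
    # Assumes a clean, background-subtracted profile.
    w = profile > 0.2 * profile.max()       # restrict the fit to the peak
    a, b, c = np.polyfit(x[w], np.log(profile[w]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * a))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# Example: a noise-free profile with sigma = 0.9 (in micrometers)
x = np.linspace(-5.0, 5.0, 201)
profile = np.exp(-x**2 / (2 * 0.9**2))
fwhm = gaussian_fwhm(x, profile)            # ≈ 2.12
```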


Fig. 3. Two representative examples of two-photon volume images of a $1 \mu$m diameter fluorescent bead embedded in agarose before and after correction. Sample 1, top row, left: Maximum intensity projection of volume image of a fluorescent bead in axial (z) direction (see bottom row for color bar). Projections along x- and y-axes are shown to the right (first row, left center) and below (second row, left). Second row, left center: slice at maximum intensity, indicated by white lines in x- and y-projections averaged over 4 frames recorded with $0.25 \mu$m axial spacing. Image is zoomed-in as indicated by white box in top left image, size is $10\times 10 \mu$m. Top row, center right: same as left side after correction, again with corresponding x- and y- projections. Second row, right: slice at maximum intensity, indicated by white lines in x- and y-projections averaged over 4 frames recorded with $0.25 \mu$m axial spacing after correction. Third row, left: lateral cross-sections before and after correction as indicated by correspondingly colored arrows in row two, including Gaussian fit of corrected profile (black dashed line). Third row, center: axial cross sections along the correspondingly colored arrows in the second row including Gaussian fit of corrected image. Third row, right: aberration correction. Sample 2: as described for sample 1.


6. Discussion and conclusions

We developed a differentiable model-based approach to adaptive optics for two-photon scanning microscopy. The method combines a differentiable model of the known microscope and the unknown sample with the optimization techniques implemented in machine learning frameworks [21]. We show that, with an appropriate cost function (Eq. (7)), a small number of probing modulations (recorded at high frame rates and with low photon count rates, as often observed when imaging in vivo) is sufficient for finding sample aberrations through model optimization and for correcting them.

Different from optimization approaches that directly aim at maximizing focus intensity, the primary optimization objective here was matching the focus shape of the model to the one observed in the microscope. However, we added a secondary term to the cost function to also optimize image intensity directly.

A limitation of the current implementation for dynamic samples is the required correction time. Here, we used 20 probing modulations recorded at a frame rate of 30 Hz, thus requiring a minimum of 0.66 seconds for data acquisition. Model optimization took between 1 and 2 minutes, which is ultimately limited by the computational speed of the GPUs and could be accelerated with faster or additional GPUs.

As seen in Fig. 3, the focus after optimization did not reach the diffraction limit, in particular in the axial direction (as measured with 1 $\mu$m diameter fluorescent beads). While such axially extended point spread functions are often used for in vivo imaging (see for example [30] for a recent discussion), the quality of the corrections could be improved in several ways. A limitation of the current data, in particular for correcting higher order aberrations, is their limited dynamic range. The dynamic range could, for example, be extended by combining multiple fluorescence images recorded with different integration times [31]. Additional improvements in performance are expected from using a model approximation that is more accurate for high numerical aperture objectives than the one used here [32]. Together, such improvements will more accurately reflect the physical setup and aberrations and therefore improve optimization results. As an alternative to using multiple modulations recorded in the same focal plane, axial slices or an entire volume could also be used for optimization.

Compared with other optimization approaches that rely on iterative methods, including prior information about the experimental setup additionally constrains the optimization process [11]. Compared to neural network approaches [24,33–41], differentiable model-based approaches have the advantage that they do not rely on a predetermined model of sample aberrations.

Overall, the presented approach, similar to other iterative approaches, can be used for correcting aberrations that are approximately stationary, and is under these conditions compatible with imaging in biological samples.

Funding

Max-Planck-Gesellschaft; Center of Advanced European Studies and Research.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available from the authors upon request.

References

1. J. N. Kerr and W. Denk, “Imaging in vivo: watching the brain in action,” Nat. Rev. Neurosci. 9(3), 195–205 (2008). [CrossRef]  

2. C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018). [CrossRef]  

3. S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89(1), 015005 (2017). [CrossRef]  

4. S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2(3), 141–158 (2020). [CrossRef]  

5. I. M. Vellekoop, “Feedback-based wavefront shaping,” Opt. Express 23(9), 12189–12206 (2015). [CrossRef]  

6. R. A. Gonsalves, “Perspectives on phase retrieval and phase diversity in astronomy,” in Adaptive Optics Systems IV, vol. 9148 (International Society for Optics and Photonics, 2014), pp. 91482P1–10.

7. S. M. Jefferies, M. Lloyd-Hart, E. K. Hege, and J. Georges, “Sensing wave-front amplitude and phase with phase diversity,” Appl. Opt. 41(11), 2095–2102 (2002). [CrossRef]  

8. B. M. Hanser, M. G. Gustafsson, D. Agard, and J. W. Sedat, “Phase-retrieved pupil functions in wide-field fluorescence microscopy,” J. Microsc. 216(1), 32–48 (2004). [CrossRef]  

9. H. Song, R. Fraanje, G. Schitter, H. Kroese, G. Vdovin, and M. Verhaegen, “Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system,” Opt. Express 18(23), 24070–24084 (2010). [CrossRef]  

10. H. Linhai and C. Rao, “Wavefront sensorless adaptive optics: a general model-based approach,” Opt. Express 19(1), 371–379 (2011). [CrossRef]  

11. H. Yang, O. Soloviev, and M. Verhaegen, “Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects,” Opt. Express 23(19), 24587–24601 (2015). [CrossRef]  

12. J. Antonello and M. Verhaegen, “Modal-based phase retrieval for adaptive optics,” J. Opt. Soc. Am. A 32(6), 1160–1170 (2015). [CrossRef]  

13. M. M. Loper and M. J. Black, “Opendr: An approximate differentiable renderer,” in European Conference on Computer Vision, (Springer, 2014), pp. 154–169.

14. M. Giftthaler, M. Neunert, M. Stäuble, M. Frigerio, C. Semini, and J. Buchli, “Automatic differentiation of rigid body dynamics for optimal control and estimation,” Adv. Robotics 31(22), 1225–1237 (2017). [CrossRef]  

15. F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter, “End-to-end differentiable physics for learning and control,” in Advances in Neural Information Processing Systems, (2018), pp. 7178–7189.

16. C. Schenck and D. Fox, “Spnets: Differentiable fluid dynamics for deep neural networks,” arXiv preprint arXiv:1806.06094 (2018).

17. M. Kellman, E. Bostan, M. Chen, and L. Waller, “Data-driven design for fourier ptychographic microscopy,” in 2019 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2019), pp. 1–8.

18. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

19. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020). [CrossRef]  

20. K. C. Zhou and R. Horstmeyer, “Diffraction tomography with a deep image prior,” Opt. Express 28(9), 12872–12896 (2020). [CrossRef]  

21. I. Vishniakou and J. D. Seelig, “Differentiable model-based adaptive optics with transmitted and reflected light,” Opt. Express 28(18), 26436–26446 (2020). [CrossRef]  

22. G. Ongie, A. Jalal, C. A. Metzler, R. G. Baraniuk, A. G. Dimakis, and R. Willett, “Deep learning techniques for inverse problems in imaging,” IEEE J. Sel. Areas Inf. Theory 1(1), 39–56 (2020). [CrossRef]  

23. A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, “Automatic differentiation in machine learning: a survey,” The Journal of Machine Learning Research 18, 5595–5637 (2017).

24. I. Vishniakou and J. D. Seelig, “Wavefront correction for adaptive optics with reflected light and deep neural networks,” Opt. Express 28(10), 15459–15471 (2020). [CrossRef]  

25. T. A. Pologruto, B. L. Sabatini, and K. Svoboda, “Scanimage: flexible software for operating laser scanning microscopes,” BioMed. Eng. OnLine 2(1), 13 (2003). [CrossRef]  

26. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

27. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]  

28. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” (2015). Software available from tensorflow.org.

29. L. M. Sanchez Brea, “Diffractio, python module for diffraction and interference optics,” https://pypi.org/project/diffractio/ (2019).

30. A. Song, J. L. Gauthier, J. W. Pillow, D. W. Tank, and A. S. Charles, “Neural anatomy and optical microscopy (naomi) simulation for evaluating calcium imaging methods,” J. Neurosci. Methods 358, 109173 (2021). [CrossRef]  

31. C. Vinegoni, P. F. Feruglio, and R. Weissleder, “High dynamic range fluorescence imaging,” IEEE J. Sel. Top. Quantum Electron. 25(1), 1–7 (2019). [CrossRef]  

32. N. H. Thao, O. Soloviev, and M. Verhaegen, “Phase retrieval based on the vectorial model of point spread function,” J. Opt. Soc. Am. A 37(1), 16–26 (2020). [CrossRef]  

33. J. R. P. Angel, P. Wizinowich, M. Lloyd-Hart, and D. Sandler, “Adaptive optics for array telescopes using neural-network techniques,” Nature 348(6298), 221–224 (1990). [CrossRef]  

34. S. W. Paine and J. R. Fienup, “Machine learning for improved image-based wavefront sensing,” Opt. Lett. 43(6), 1235–1238 (2018). [CrossRef]  

35. R. Swanson, M. Lamb, C. Correia, S. Sivanandam, and K. Kutulakos, “Wavefront reconstruction and prediction with convolutional neural networks,” in Adaptive Optics Systems VI, vol. 10703 (International Society for Optics and Photonics, 2018), pp. 107031F1–10.

36. T. Andersen, M. Owner-Petersen, and A. Enmark, “Neural networks for image-based wavefront sensing for astronomy,” Opt. Lett. 44(18), 4618–4621 (2019). [CrossRef]  

37. T. Andersen, M. Owner-Petersen, and A. Enmark, “Image-based wavefront sensing for astronomy using neural networks,” J. Astron. Telesc. Instrum. Syst. 6(03), 1 (2020). [CrossRef]  

38. Y. Jin, Y. Zhang, L. Hu, H. Huang, Q. Xu, X. Zhu, L. Huang, Y. Zheng, H.-L. Shen, W. Gong, and K. Si, “Machine learning guided rapid focusing with sensor-less aberration corrections,” Opt. Express 26(23), 30162–30171 (2018). [CrossRef]  

39. L. Hu, S. Hu, W. Gong, and K. Si, “Learning-based Shack-Hartmann wavefront sensor for high-order aberration detection,” Opt. Express 27(23), 33504–33517 (2019). [CrossRef]  

40. S. Cheng, H. Li, Y. Luo, Y. Zheng, and P. Lai, “Artificial intelligence-assisted light control and computational imaging through scattering media,” J. Innovative Opt. Health Sci. 12(04), 1930006 (2019). [CrossRef]  

41. D. Saha, U. Schmidt, Q. Zhang, A. Barbotin, Q. Hu, N. Ji, M. J. Booth, M. Weigert, and E. W. Myers, “Practical sensorless aberration estimation for 3d microscopy with deep learning,” Opt. Express 28(20), 29044–29053 (2020). [CrossRef]  

Data availability

Data underlying the results presented in this paper are available from the authors upon request.



Figures (3)

Fig. 1.
Fig. 1. Left: Schematic of the two-photon microscope and corresponding model optimization. A laser beam is scanned with scanning mirrors and focused through an objective. Fluorescence is separated with a dichroic mirror and detected with a photomultiplier tube. A computational model is set up to describe the microscope in the absence of aberrations. After aberrations are introduced, the computational model receives the phase modulations displayed on the SLM as input. The model output is a set of point spread function (PSF) images that are compared with experimentally recorded PSF images through the loss function. The optimizer (running in TensorFlow on a GPU) finds the sample aberration (and corresponding correction) by matching the model output to the experimental PSF images. Right: The computational model is formulated in expression (5). The beam cross-section, represented as a complex amplitude, is multiplied by phase objects representing the SLM probing modulation, the objective, and the aberration, respectively, and is propagated to the focal plane of the objective using the angular spectrum method to calculate the resulting PSF. The two-photon image is the convolution of the squared PSF intensity with the object function, which for a guide star reduces to the squared PSF itself. The beam profile $U_0$ and the objective parameters are adjusted to describe the experimental setup. The SLM modulation is the model input, the PSF is the model output, and the aberration is found through optimization.
Fig. 2.
Fig. 2. Correspondence between computationally generated and experimentally measured images of fluorescent beads. Three representative examples (rows), recorded in an aberrating sample: probing modulations displayed on the SLM (first column), and corresponding PSF images as measured with photon counting (second column), preprocessed for model fitting (third column), and simulated with the optimized model (fourth column). Field of view is $40\times 40\,\mu$m.
Fig. 3.
Fig. 3. Two representative examples of two-photon volume images of a $1\,\mu$m diameter fluorescent bead embedded in agarose before and after correction. Sample 1, top row, left: maximum intensity projection of the volume image of a fluorescent bead along the axial (z) direction (see bottom row for color bar). Projections along the x- and y-axes are shown to the right (first row, left center) and below (second row, left). Second row, left center: slice at maximum intensity, indicated by white lines in the x- and y-projections, averaged over 4 frames recorded with $0.25\,\mu$m axial spacing. The image is zoomed in as indicated by the white box in the top left image; size is $10\times 10\,\mu$m. Top row, center right: same as left side after correction, again with corresponding x- and y-projections. Second row, right: slice at maximum intensity, indicated by white lines in the x- and y-projections, averaged over 4 frames recorded with $0.25\,\mu$m axial spacing, after correction. Third row, left: lateral cross-sections before and after correction, as indicated by correspondingly colored arrows in row two, including a Gaussian fit of the corrected profile (black dashed line). Third row, center: axial cross-sections along the correspondingly colored arrows in the second row, including a Gaussian fit of the corrected image. Third row, right: aberration correction. Sample 2: as described for sample 1.

Equations (7)


$$U(x,y,d)\exp\left[i\phi(x,y,d)\right].\tag{1}$$
$$U(x,y,z+d)=\mathcal{P}_d\big(U(x,y,z)\big)=\iint A(f_X,f_Y;z)\,\mathrm{circ}\!\left(\sqrt{(\lambda f_X)^2+(\lambda f_Y)^2}\right)\times H\exp\left[i2\pi(f_X x+f_Y y)\right]\mathrm{d}f_X\,\mathrm{d}f_Y.\tag{2}$$
$$I(x,y,z)=\left|U(x,y,z)\right|^2.\tag{3}$$
$$I_f \propto I^2 = \left|U\right|^4.\tag{4}$$
$$I_f = S(\phi_{\mathrm{SLM}},\phi_{\mathrm{aberration}}) = \left|\mathcal{P}_{f_{\mathrm{MO}}}\big(U_0 \exp\left[i(\phi_{\mathrm{SLM}}+\phi_{\mathrm{MO}}+\phi_{\mathrm{aberration}})\right]\big)\right|^4,\tag{5}$$
$$\underset{\phi_{\mathrm{aberration}}}{\arg\min}\;\sum_{j=1}^{N}\mathrm{loss}\big(S(\phi_{\mathrm{SLM}}^{\,j},\phi_{\mathrm{aberration}}),\,I_f^{\,j}\big).\tag{6}$$
$$\mathrm{loss}(\mathrm{prediction},\mathrm{target}) = -\,r\left[\mathrm{prediction},\mathrm{target}\right]\,\ln\Big(\sum_{\mathrm{pixels}}\left|\mathrm{prediction}\right|\Big),\tag{7}$$
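The forward model of expressions (2)–(5) and the loss of expression (7) can be sketched in plain NumPy; in the paper these are implemented in TensorFlow so that $\phi_{\mathrm{aberration}}$ can be found by gradient-based optimization over the recorded probing-modulation/PSF pairs (expression (6)). All function names and grid parameters below are illustrative, and the exact sign convention of the loss is an assumption.

```python
import numpy as np

def angular_spectrum_propagate(U, d, wavelength, dx):
    """Propagate complex field U over distance d (expression (2)).

    dx is the sampling pitch; the circ() factor band-limits the
    angular spectrum, suppressing evanescent components.
    """
    n = U.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    A = np.fft.fft2(U)                                        # angular spectrum A(f_X, f_Y; z)
    band = (wavelength * FX) ** 2 + (wavelength * FY) ** 2 < 1.0   # circ(...)
    arg = np.clip(1.0 / wavelength ** 2 - FX ** 2 - FY ** 2, 0.0, None)
    H = np.exp(1j * 2 * np.pi * d * np.sqrt(arg)) * band      # transfer function H
    return np.fft.ifft2(A * H)

def two_photon_psf(U0, phi_slm, phi_mo, phi_ab, f_mo, wavelength, dx):
    """Forward model S of expression (5): phase-modulate the pupil
    field and propagate to the focal plane; the two-photon signal
    scales as |U|^4 (expression (4))."""
    U = U0 * np.exp(1j * (phi_slm + phi_mo + phi_ab))
    Uf = angular_spectrum_propagate(U, f_mo, wavelength, dx)
    return np.abs(Uf) ** 4

def loss(prediction, target):
    """Loss of expression (7): Pearson correlation r between predicted
    and measured PSF images, weighted by the log of total predicted
    intensity (sign convention assumed here)."""
    p, t = prediction.ravel(), target.ravel()
    r = np.corrcoef(p, t)[0, 1]
    return -r * np.log(np.sum(np.abs(p)))
```

In the actual method, $\phi_{\mathrm{aberration}}$ is a trainable variable (e.g. Zernike coefficients or a pixel-wise phase map) and expression (6) is minimized over the $N$ recorded pairs with a TensorFlow optimizer, which is why the whole forward model must be differentiable.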