
Three-dimensional bi-functional refractive index and fluorescence microscopy (BRIEF)

Open Access

Abstract

Fluorescence microscopy is a powerful tool for imaging biological samples with molecular specificity. In contrast, phase microscopy provides label-free measurement of the sample’s refractive index (RI), which is an intrinsic optical property that quantitatively relates to cell morphology, mass, and stiffness. Conventional imaging techniques measure either the labeled fluorescence (functional) information or the label-free RI (structural) information, though it may be valuable to have both. For example, biological tissues have heterogeneous RI distributions, causing sample-induced scattering that degrades the fluorescence image quality. When both fluorescence and 3D RI are measured, one can use the RI information to digitally correct multiple-scattering effects in the fluorescence image. Here, we develop a new computational multi-modal imaging method based on epi-mode microscopy that reconstructs both 3D fluorescence and 3D RI from a single dataset. We acquire dozens of fluorescence images, each ‘illuminated’ by a single fluorophore, then solve an inverse problem with a multiple-scattering forward model. We experimentally demonstrate our method for epi-mode 3D RI imaging and digital correction of multiple-scattering effects in fluorescence images.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fluorescence microscopy and phase microscopy are two distinct imaging techniques that leverage different contrast mechanisms: fluorescence microscopy images specific structures that are labeled by fluorescent tags in a biological sample; phase microscopy, on the other hand, images the refractive index (RI) of a sample and can be used for visualizing label-free structures, while lacking molecular specificity. Multimodal microscopy methods that can image both fluorescence-labeled and label-free structures enable correlating the two. The phase images reconstruct the RI of the sample, which provides structural information and can also be used to computationally correct sample-induced scattering effects in the fluorescence images.

Previous work either focused on reconstructing fluorescence signals through tissue scattering by wavefront shaping [1–3], ultrasound-assisted optical imaging [4–7], measurement of the transmission [8,9] or reflection matrix [10,11], and computational optimization [12–16]; or focused on measuring RI with optical diffraction tomography [17–19], computational phase retrieval [20–24], or optical coherence refraction tomography [25]. A few existing methods recover both fluorescence and phase information [26–29], but they require sequential experiments or independent measurements, which makes it difficult to register the images spatially. No previous method reconstructs both RI and fluorescence from a single dataset captured by one camera. One of the reasons is that fluorescence microscopy is usually performed in epi-mode, while phase microscopy is usually performed in transmission mode, which limits its applicability to in vivo imaging. Here, we introduce an epi-mode multimodal microscopy system that merges the functions of fluorescence microscopy and epi-mode optical diffraction tomography (ODT), which could become a powerful tool for bioimaging.

We provide a proof-of-principle demonstration for 3D bi-functional refractive index and fluorescence microscopy (BRIEF), which reconstructs both 3D fluorescence and 3D RI from fluorescence images captured in epi-mode. We focus the microscope at the top of the sample and collect dozens of images, each having different fluorophores within the volume ‘on’. The fluorophores deeper in the sample illuminate different parts of the phase objects near the surface from different angles. We can estimate the fluorophore 3D position from the fluorescence images and also reconstruct the 3D RI, as long as the set of captured images contains diverse illumination angles for each lateral position. To recover the 3D information, we solve an inverse problem with a physics-based multi-slice model that accounts for multiple scattering. Because both modalities are reconstructed from the same dataset captured by a single camera, the fluorescence and RI signals are strictly registered in space and time.

2. Results

The experimental setup of BRIEF is shown in Fig. 1. Our test samples are 3D phase objects (beads or cells) seeded with fluorophores that are sufficiently deep in the sample (hundreds of microns) to illuminate the phase objects from below. In general, BRIEF could be used to image fluorescence-labeled biological tissue that either has sparsely blinking fluorophores in scattering tissue (e.g., the dyes used in super-resolution localization microscopy [30,31]), or whose fluorophores can be selectively illuminated by a photo-stimulation setup. For the latter case, one can use a widefield fluorescence stack to identify the fluorophores’ locations first, followed by stimulating them sequentially; or one can scan an illumination ‘point source’ through the sample volume and capture a 2D image at each scanning location, then only use the images in which a fluorophore is emitting. We choose the latter of these approaches for convenience. We modulate collimated laser light at 473 nm wavelength with a digital micromirror device (DMD) located at the relayed image plane (Fig. 1(a)). The pixels on the DMD are binned into $20 \times 20$ pixel patches (corresponding to $8 \times 8\,\mu\textrm{m}^2$ at the sample), termed “super-pixels,” and each super-pixel is turned on one-by-one to scan the volume laterally. Given weak scattering and a sparse fluorophore distribution, either a single fluorophore or no fluorophore is excited at each scanning location; we select the images containing fluorophores as our measurements (Fig. 1(c)). The fluorophores act like point light sources inside the sample, creating a circular bright area in the measurement, inside which we see fine structures due to the phase objects (live CHO cells). The size of the illuminated area is proportional to the depth of the fluorophore: deeper fluorophores give larger illuminated areas. Fluorophores located at different positions illuminate different parts of the phase object from different angles.
If the phase object is near or beneath a given fluorophore, it is not measured by that fluorophore, but rather by other fluorophores whose illuminated areas cover it. Fluorophores on top of the sample, above the phase object, are not used as light sources. We collect dozens of fluorescence images, each illuminated by a different fluorophore, as raw measurements, such that each part of the phase object sees a diversity of illumination angles over the set of captured images.
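The scan-and-select step described above can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name and the MAD-based 5-sigma threshold are our assumptions for deciding whether a scan frame contains an emitting fluorophore.

```python
import numpy as np

def select_fluorophore_frames(frames, sigma_factor=5.0):
    """Keep only scan frames in which a fluorophore was actually excited.

    frames: (n_scan, H, W) stack, one frame per DMD super-pixel position.
    A frame is kept when its peak signal rises well above the background,
    estimated robustly from the frame's median and MAD. The 5-sigma cut
    is an illustrative choice, not a value from the paper.
    """
    selected = []
    for i, f in enumerate(frames):
        bg = np.median(f)
        mad = np.median(np.abs(f - bg)) + 1e-12       # robust noise scale
        if f.max() > bg + sigma_factor * 1.4826 * mad:  # 1.4826: MAD -> std
            selected.append(i)
    return selected
```

In practice one frame per DMD super-pixel is captured, and only the selected subset (those with a clearly illuminated circular area) enters the reconstruction.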


Fig. 1. 3D bi-functional refractive index and fluorescence microscopy (BRIEF). (a) In order to excite different fluorophores at different times, a collimated illumination beam is modulated by a digital micromirror device (DMD) to selectively excite each fluorophore sequentially. Emitted fluorescence light scatters through the label-free tissue above and is imaged onto a camera sensor, which is focused at the top of the sample. (b) The sample here consists of fluorescence-labeled structures on the bottom and non-labeled cells on the top. A multi-slice scattering algorithm models how fluorescence light propagates through each depth slice of the sample, scattering according to its 3D refractive index (RI). (c) Two examples of raw images captured by the camera, corresponding to fluorophore #1 and #2 in (b) being turned ‘on’. The size of the circular region is determined by the depth of the fluorophore and the numerical aperture (NA) of the system; fluorophore #1 is located deeper than fluorophore #2, so it illuminates a larger volume. Fine structures inside of the circular area carry phase information about the cells and can be considered as intensity images taken with different defocus and/or illumination angles.


The 3D information in BRIEF comes from illuminating the phase objects from different angles via the spherical waves generated by the fluorophores at various depths. This is somewhat analogous to a fan-beam version of ODT, which illuminates phase objects from different angles in transmission mode. Therefore, BRIEF can incorporate forward models similar to those of ODT for 3D RI reconstruction. Here, we use a multi-slice scattering model [21,23,32] and treat the fluorescence sources inside the tissue, denoted by ${o_f}(\boldsymbol{r} )$, as spherical emitters, where each fluorophore’s light is spatially coherent (with itself) but incoherent with other fluorophores. The multi-slice model approximates the bulk tissue as a series of thin layers, where the RI of the $k$th layer is denoted by ${n_k}(\boldsymbol{r} )$. Light propagation through the tissue is modeled via sequential layer-to-layer propagation of the electric field. The intensity of the exit electric field at the image plane $z$, denoted $I_l(\boldsymbol{r};z)$, which accounts for the accumulation of diffraction and multiple scattering, is recorded by the camera as the $l$th measurement. Both the 3D fluorescence distribution, ${o_f}(\boldsymbol{r} )$, and the 3D RI, $n(\boldsymbol{r} )$, are unknown in the model. However, the approximate positions of the fluorescence sources can be estimated from the measurements if the scattering is weak and isotropic, such that each intensity image has a clearly illuminated circular area. In the lateral direction, the fluorophore position is approximately the center of gravity of the circle, which provides the initial value of ${o_f}(\boldsymbol{r} )$. In the axial direction, the fluorophore position is estimated by fitting the point-spread function (PSF) at each axial depth.
Therefore, we first estimate the 3D fluorescence distribution ${o_f}(\boldsymbol{r} )$ from the multiple 2D fluorescence images, and then use the estimated fluorophore positions to estimate the 3D RI, $n(\boldsymbol{r} )$, by solving an optimization problem with Tikhonov regularization, denoted by $R$:

$$\mathop {\textrm{argmin}}\limits_{n(\boldsymbol{r})} \sum\limits_l \left\| {I_l}({\boldsymbol{r};z}) - {\hat{I}_l}({\boldsymbol{r};z}) \right\|_2^2 + R[{n(\boldsymbol{r})}].$$

We solve this optimization problem with the fast iterative shrinkage-thresholding algorithm (FISTA), an accelerated first-order proximal gradient method. Details about the forward model and reconstruction process are in the supplementary materials.
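The multi-slice forward model described above can be sketched in a few lines: each thin slice imprints a phase delay proportional to its RI contrast, and the field then propagates to the next slice by the angular spectrum method. This is a minimal sketch under our own assumptions (function and variable names, square grid, evanescent components dropped); the authors' actual implementation, detailed in their supplement, additionally models the imaging pupil and the incoherent sum over fluorophores.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Free-space propagation of a 2D complex field over distance dz
    via the angular spectrum method (evanescent components dropped)."""
    n_pix = field.shape[0]
    fx = np.fft.fftfreq(n_pix, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)
    kernel = np.exp(2j * np.pi * dz * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def multislice_forward(src_field, ri_slices, n_medium, dz, wavelength, dx):
    """Multi-slice forward model: thin-slice phase modulation alternated
    with propagation; the camera records the exit-plane intensity."""
    k0 = 2.0 * np.pi / wavelength                  # vacuum wavenumber
    field = src_field.astype(complex)
    for n_k in ri_slices:                          # n_k: 2D RI map of slice k
        field = field * np.exp(1j * k0 * (n_k - n_medium) * dz)
        field = angular_spectrum_propagate(field, dz, wavelength / n_medium, dx)
    return np.abs(field) ** 2                      # intensity I_l(r; z)
```

Because every step is differentiable, the data-fidelity gradient needed by FISTA can be obtained by back-propagating through this chain (or by automatic differentiation).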

We first demonstrate that our method can reconstruct 3D RI from fluorescence images by using a calibration sample with ground-truth information about its 3D RI. The calibration sample consists of two layers: the bottom layer of PDMS (RI 1.43) is about 150 µm thick, with 0.71 µm red fluorescent beads inside as the light sources (positions unknown); the top layer of PDMS is about 100 µm thick, with glass beads of known size and RI (RI 1.50, size 5–50 µm) inside as weakly scattering phase objects (Fig. 2(a)). We first cure the bottom layer of PDMS with fluorescent beads, then paste the mixture of PDMS and glass beads on top of the bottom layer and cure the entire sample. There is no restriction on the thickness of the two layers because we can easily adjust the number of slices and the separation between slices in the model to fit the sample. To find the positions of the glass beads for our ground truth, we capture transmission widefield images at several axial planes with infrared LED illumination (Fig. 2(e-f)). For our BRIEF reconstruction, we capture 21 fluorescence measurements with different fluorophores on. The images are captured at 10 frames per second. A representative measurement is shown in Fig. 2(b), where we can see a defocused fluorescence light source illuminating some of the glass beads. After processing with our algorithm, the reconstructed 3D RI clearly distinguishes each glass bead in a cluster with good optical sectioning (Fig. 2(c-d)), and the 3D positions of the glass beads match the ground-truth widefield focus stack. The reconstructed RI of the glass beads is 1.43–1.50, as shown in Fig. 2(c-d), matching well with the ground-truth value. This experimental result demonstrates our ability to reconstruct 3D RI from fluorescence images.


Fig. 2. BRIEF for reconstructing 3D RI of glass beads from a single dataset of experimentally measured fluorescence images taken with different fluorescent beads emitting. (a) The sample consists of fluorescent beads in the bottom layer of the PDMS to act as light sources and glass beads in the top layer of the PDMS to act as non-fluorescent phase objects to be reconstructed. (b) One of the 21 raw measurements used for reconstruction, with the defocused fluorescence signal scattered by the glass beads. (c, d) Reconstructed RI of glass beads at $\Delta z = 80\,\mu\textrm{m}$ and $\Delta z = 56\,\mu\textrm{m}$ below the image plane, respectively. The 3D view of the reconstructed RI shows that our technique achieves good z-sectioning. (e, f) Widefield images of the sample under transmitted illumination to show the positions of the glass beads as a ‘ground truth’ to compare with the reconstruction results.


Next, we demonstrate BRIEF for reconstructing both fluorescence and RI of live biological cells from the same raw measurements. The test sample in this case is made in two steps: first, we fixed red fluorescent beads in PDMS on top of a coverslip; next, we coated the top surface of the PDMS with poly-lysine and cultured a thin layer of non-labeled CHO cells on it, in order to obtain a realistic biological phase object. During the experiment, we placed the sample in phosphate-buffered saline solution (RI 1.33) and collected 23 fluorescence images. An example fluorescence image for one excited fluorescent bead is shown in Fig. 3(b). We treat the previous glass-bead experiment as a calibration reference and use the same parameters in the optimization algorithm for reconstruction. The reconstructed RI of the CHO cells at $\Delta z = 40\,\mu\textrm{m}$ below the image plane is shown in Fig. 3(c), and Fig. 3(d) shows the positions of the cells in a transmission widefield image with infrared LED illumination (note that these are different contrast mechanisms, so the images should not be compared directly).


Fig. 3. Experimental results with a sample consisting of fluorescent beads beneath a thin layer of live CHO cells. (a) 3D view of fluorescent signals from fluorescent beads (magenta) and RI from CHO cells (green). (b) A representative raw measurement with one fluorescent bead ‘on’. (c) Reconstructed 3D RI of CHO cells at $\Delta z = 40\,\mu\textrm{m}$ below the image plane from 23 forward measurements. (d) Widefield intensity image of the sample under transmitted infrared illumination to show the ground truth of the cells’ lateral positions. (e) Overlap of the maximum intensity projection (MIP) of the widefield image stack of fluorescent beads excited by a 473 nm laser (ground truth, red) and the reconstructed 3D distribution of the fluorescent beads’ locations (green) from 148 forward measurements like (b). Note that this 3D fluorescence image stack is not used in the reconstruction of the fluorescence distribution.


In addition, our method reconstructs the 3D fluorophore locations in the sample from the same raw data. In our case, each fluorescence image contains only one fluorophore, and the 23 images selected for RI reconstruction contain fluorophores that are fairly bright and located relatively deep in the sample. These raw data carry more information about the phase objects because they have higher SNR and their fluorophores illuminate a larger volume than raw data containing dim fluorophores. To balance reconstruction quality against computational burden, we use only these raw data for RI reconstruction. For the fluorescence reconstruction, to also recover dim fluorophores and fluorophores near the image plane, we select another 116 images (139 in total, together with the 23 images used in the RI reconstruction) from the raw data collected in the same scanning process described above. The reconstruction results (Fig. 3(e)) indicate the locations of the fluorophores, with intensity proportional to the summed fluorescence intensity in the measurement. For validation, we also take a 3D axial-scanned image stack of the fluorophores under widefield illumination with a 473 nm laser (Fig. 3(e), red). This image stack is used as ground truth for the fluorophore positions and is not used in the reconstruction of the fluorophore localization (Fig. 3(e), green). Given that the scattering from the CHO cells is weak, the image stack is close to the ground truth of the fluorophore positions. Compared to localization from the 3D image stack, our method reconstructs the locations of most bright fluorophores accurately and with high spatial precision (SSIM = 0.9786). Since the RI and fluorescence are collected in the same experiment with a single camera, they are automatically registered in the 3D volume (Fig. 3(a)), unlike previous methods [29] that collect fluorescence and RI information with two different cameras.
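The lateral localization step used above (the fluorophore position taken as the center of gravity of the illuminated circle, as described earlier in this section) can be sketched as follows; the function name and the threshold fraction are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lateral_centroid(image, thresh_frac=0.1):
    """Estimate a fluorophore's lateral position as the intensity-weighted
    center of gravity of the illuminated circular region.

    thresh_frac is an illustrative cut-off (fraction of the peak) used to
    isolate the bright circle from the background.
    """
    mask = image > thresh_frac * image.max()
    ys, xs = np.nonzero(mask)
    w = image[ys, xs]
    cx = np.sum(w * xs) / np.sum(w)   # x-position in pixels
    cy = np.sum(w * ys) / np.sum(w)   # y-position in pixels
    return cx, cy
```

The axial position is then refined separately, e.g. by comparing the size of the illuminated circle against the defocused PSF at candidate depths.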

Knowing both the fluorescence and the 3D RI information not only provides a multimodal reconstruction of the sample’s structural and functional maps, but can also be used to digitally correct multiple-scattering effects in the fluorescence images. To demonstrate, we use a negative fluorescent USAF target as the fluorescence object (instead of fluorescent beads), placed below a visually opaque glass-bead sample acting as the scattering phantom, with immersion oil (RI = 1.515) in between (Fig. 4(a)). We first focus on the USAF target through the scattering glass-bead sample and take an in-focus widefield fluorescence image (Fig. 4(d)). Then we move the focal plane above the glass-bead sample and collect scattered fluorescence images as the measurements for RI reconstruction, as in the previous experiments. Even though the fluorescent USAF target is not a point source, the excited area selected by the “super-pixel” of the DMD is so small that it can be treated as one. We reconstruct the 3D RI of the glass-bead sample from 10 measurements (Fig. 4(b)). One representative plane of the reconstructed 3D RI is shown in Fig. 4(c). This result also demonstrates that the technique works for continuous fluorescent structures, not just spherical fluorescent beads.


Fig. 4. Multi-modal microscopy for digital correction of multiple scattering in fluorescence images. (a) A negative fluorescent USAF target is used as the fluorescent sample, and a highly scattering glass-bead phantom on top of it acts as the scattering medium. (b) A representative image of the raw measurements. The glass beads are illuminated by fluorescence from the USAF target instead of by fluorescent beads as in the previous experiments. (c) Orthogonal slice views of the reconstructed 3D RI. (d) Raw fluorescence image of the USAF target under widefield blue laser illumination. This image is taken in focus, whereas (b) is captured with the system focused at the top surface of the sample. (e) Zoom-in view of the area in the blue box in (d), containing the line pairs of element 6, group 7. (f) Reconstructed image after correcting multiple scattering with the multi-slice model. (g) Zoom-in view of the area in the red box in (f). (h) Normalized intensity profiles along the horizontal direction (H) and the vertical direction (V) in (e, blue) and (g, red).


To digitally correct multiple-scattering effects in the fluorescence image (Fig. 4(d-e)), we take two steps: we first calculate the scattered PSF by convolving a 2D delta function with the reconstructed 3D RI of the scattering phantom (Fig. 4(c)); we then perform Richardson-Lucy deconvolution on the scattered image (Fig. 4(d)) with the scattered PSF. The reconstructed image clearly distinguishes the finest line pairs (Element 6 of Group 7, 228.1 line pairs/mm) on the USAF target (Fig. 4(f-g)), whereas the scattered image cannot. Figure 4(h) quantitatively compares the normalized intensity profiles of the line pairs in the scattered image (Fig. 4(e)) and in the reconstructed image (Fig. 4(g)) in both horizontal (H) and vertical (V) directions. Hence, we have shown that the reconstructed RI can be used to digitally correct multiple-scattering effects and improve the SNR of fluorescence imaging. Our advantage over previous work [1–16] on imaging through scattering is that we can potentially correct scattering effects for fluorescent objects at any 3D location inside the scattering tissue without additional experimental measurements, since we have already reconstructed the 3D RI of the whole volume.
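The second step, Richardson-Lucy deconvolution with the scattered PSF, can be sketched with a textbook implementation; this is not the authors' exact code, and the iteration count is an illustrative choice.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(scattered, psf, n_iter=30):
    """Textbook Richardson-Lucy deconvolution of a scattered fluorescence
    image with a known (here: model-computed) scattered PSF."""
    psf = psf / psf.sum()                      # normalize flux
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(scattered, scattered.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        estimate = estimate * fftconvolve(scattered / reblurred,
                                          psf_mirror, mode="same")
    return estimate
```

The multiplicative update keeps the estimate non-negative, which suits intensity images; the key difference from ordinary deconvolution is that here the PSF is not measured but computed from the reconstructed 3D RI, so any fluorescent object at any depth can in principle be corrected.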

3. Discussion

In this study, we demonstrated and validated BRIEF, a new imaging method that operates in epi-mode and reconstructs 3D fluorescence and 3D RI from only fluorescence images by solving the inverse problem of multiple scattering based on a multi-slice model. We experimentally demonstrated 3D reconstructed RI of glass beads and live CHO cells, registered with fluorescent beads in the same sample. We also demonstrated an application of BRIEF by using the reconstructed RI to digitally correct multiple-scattering effects and improve the SNR of fluorescence images taken through a visually opaque phantom.

Our technique works not only for sparse fluorophores but also for dense fluorophores. There are two strategies to use BRIEF with dense fluorophores. First, BRIEF can leverage multiphoton absorption to excite only a diffraction-limited small region in a densely labeled fluorescent sample. For example, state-of-the-art multiphoton fluorescence microscopy has demonstrated fluorescence imaging at 1 mm depth in the mouse brain [33]. Multiphoton excitation has better z-sectioning ability than one-photon excitation, avoiding accidental excitation of untargeted fluorophores in the superficial layers. Second, in the above experiments we excited only one fluorophore per measurement, given that the fluorophores are sparsely distributed. For dense fluorophores, we likely cannot excite a single fluorophore at a time, but would instead excite multiple fluorophores (say, N fluorophores) simultaneously in each measurement (Fig. S1). In this case, since the measured fluorescence image is the sum of the intensities of the scattered light illuminated by each fluorophore, each fluorophore needs to be measured at least N times in different combinations with other fluorophores to acquire mutually independent measurements. As an example, we perform a simulation that contains 10 fluorophores illuminating a 3D phantom (ΔRI = 0.01) from below (Fig. S1a). The first result is reconstructed from 10 measurements, each illuminated by a single fluorophore (Fig. S1b); the second result is also reconstructed from 10 measurements, but each is illuminated by 5 fluorophores simultaneously (Fig. S1c). The result from single-fluorophore illumination is better than the result from multiple-fluorophore illumination given the same number of measurements (SSIMsingle = 0.9999, SSIMmultiple = 0.9998). Therefore, we chose to illuminate a single fluorophore in each measurement in all the experiments above to achieve better reconstruction results, but our technique also works when multiple fluorophores are illuminated in each measurement.
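The requirement of at least N independent excitation combinations for N simultaneous fluorophores can be seen in a toy numpy illustration: incoherent intensities add linearly, so with a known, invertible excitation (mixing) matrix the single-fluorophore images can be demultiplexed. The upper-triangular excitation pattern and the array sizes below are arbitrary illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fluo, n_pix = 5, 64

# Hypothetical per-fluorophore intensity images (flattened);
# these are the unknowns in a real experiment.
single = rng.random((n_fluo, n_pix))

# Excitation pattern matrix: row i marks which fluorophores are 'on' in
# measurement i (upper-triangular: an arbitrary invertible binary choice).
mixing = np.triu(np.ones((n_fluo, n_fluo)))

# Fluorophores are mutually incoherent, so intensities add linearly.
measurements = mixing @ single

# With N linearly independent combinations, the per-fluorophore images
# are recovered by solving the linear system.
recovered = np.linalg.solve(mixing, measurements)
```

If fewer than N independent combinations are captured, the mixing matrix is rank-deficient and the per-fluorophore images cannot be uniquely separated, consistent with the degraded multi-fluorophore reconstruction in the simulation above.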

The resolution of our method is ultimately limited by the finite numerical aperture, so the axial resolution is poorer than the lateral resolution. As in ODT, the more measurements we take, the higher the axial resolution we can achieve. However, more measurements and a finer axial grid in the multi-slice model increase the computational burden. The current reconstruction takes over 1 hour to reconstruct the RI of $1200 \times 1200 \times 60$ pixels on a GPU (GeForce GTX 1080 Ti, NVIDIA). With prior knowledge of the sparse phase objects, we are able to resolve individual glass beads and cells from dozens of intensity measurements. We could potentially achieve higher axial resolution with more measurements and more slices in z.

When imaging through highly scattering tissue, our current method may perform poorly due to model mismatch. We assume weak and isotropic scattering in the current model, so that we can accurately estimate the 3D positions of the fluorophores and then use this information to reconstruct the 3D RI. Under strong scattering, however, 3D localization of fluorophores from the raw measurements will be challenging. Highly scattering tissue also contains more heterogeneous structures; to reconstruct RI at high enough resolution to distinguish them, our method would require a finer grid in the multi-slice model and more raw measurements, adding to the computational burden discussed above. This burden could be alleviated with more powerful graphics cards. In addition, as the imaging depth increases, the SNR of the raw fluorescence measurements drops, which is common in fluorescence microscopy. We can push the depth limit by leveraging high-sensitivity cameras, fluorophores with large cross-sections, and high excitation power.

After the arXiv version of our manuscript appeared [34], a similar idea was published [35], but it differs in three ways: (1) it targets a different application, using the individual emission of fluorophores in localization microscopy; (2) it requires measurements from two defocused image planes, whereas we use only one; (3) it is simulation-only, whereas we provide an experimental demonstration.

In conclusion, we provide a proof-of-concept demonstration of BRIEF, reconstructing RI information from fluorescence images, which goes beyond the conventional applications of fluorescence microscopy. We experimentally demonstrated the feasibility of the new method with in vitro samples, paving the way for future in vivo experiments. BRIEF is a versatile technique that is compatible with both one-photon and multiphoton microscopy, which will facilitate a wide range of applications in biology.

Funding

Weill Neurohub (Fellowship); Gordon and Betty Moore Foundation (GBMF4562); Chan Zuckerberg Initiative (Deep tissue imaging).

Acknowledgments

We thank Dr. Savitha Sridharan in Prof. Hillel Adesnik’s group for helping us culture the CHO cells. We also thank Prof. Hillel Adesnik for providing the lab space.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Custom code used to reconstruct RI from fluorescence images is programmed in Python and can be found in Ref. [34].

Supplemental document

See Supplement 1 for supporting content.

References

1. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef]  

2. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6(5), 283–292 (2012). [CrossRef]  

3. S. Gigan, “Optical microscopy aims deep,” Nat. Photonics 11(1), 14–16 (2017). [CrossRef]  

4. M. Kobayashi, T. Mizumoto, Y. Shibuya, M. Enomoto, and M. Takeda, “Fluorescence tomography in turbid media based on acousto-optic modulation imaging,” Appl. Phys. Lett. 89(18), 181102 (2006). [CrossRef]  

5. K. Si, R. Fiolka, and M. Cui, “Fluorescence imaging beyond the ballistic regime by ultrasound pulse guided digital phase conjugation,” Nat. Photonics 6(10), 657–661 (2012). [CrossRef]  

6. Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. 3(1), 928 (2012). [CrossRef]  

7. H. Ruan, Y. Liu, J. Xu, Y. Huang, and C. Yang, “Fluorescence imaging through dynamic scattering media with speckle-encoded ultrasound-modulated light correlation,” Nat. Photonics 14(8), 511–516 (2020). [CrossRef]  

8. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1(1), 81 (2010). [CrossRef]  

9. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

10. S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J.-S. Lee, Y.-S. Lim, Q-H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics 9(4), 253–258 (2015). [CrossRef]  

11. S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2(3), 141–158 (2020). [CrossRef]  

12. C. Moretti and S. Gigan, “Readout of fluorescence functional signals through highly scattering tissue,” Nat. Photonics 14(6), 361–364 (2020). [CrossRef]  

13. Y. Xue, K. P. Berry, J. R. Boivin, D. Wadduwage, E. Nedivi, and P. T. C. So, “Scattering reduction by structured light illumination in line-scanning temporal focusing microscopy,” Biomed. Opt. Express 9(11), 5654 (2018). [CrossRef]  

14. D. N. Wadduwage, J. K. Park, J. R. Boivin, Y. Xue, and P. T. C. So, “De-scattering with excitation patterning (DEEP) enables rapid wide-field imaging through scattering media,” arXiv [physics.optics] (2019).

15. Z. Wei, J. R. Boivin, Y. Xue, X. Chen, P. T. C. So, E. Nedivi, and D. N. Wadduwage, “3D deep learning enables fast imaging of spines through scattering media by temporal focusing microscopy,” arXiv [eess.IV] (2019), http://arxiv.org/abs/2001.00520.

16. A. Escobet-Montalbán, R. Spesyvtsev, M. Chen, W. A. Saber, M. Andrews, C. Simon Herrington, M. Mazilu, and K. Dholakia, “Wide-Field Multiphoton Imaging through Scattering Media without Correction,” Sci. Adv. 4(10), eaau1338 (2018). [CrossRef]  

17. Y. Park, M. Diez-Silva, G. Popescu, G. Lykotrafitis, W. Choi, M. S. Feld, and S. Suresh, “Refractive index maps and membrane dynamics of human red blood cells parasitized by plasmodium falciparum,” Proc. Natl. Acad. Sci. U. S. A. 105(37), 13730–13735 (2008). [CrossRef]  

18. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17(1), 266 (2009). [CrossRef]  

19. T. Kim, R. Zhou, M. Mir, S. D. Babacan, P. S. Carney, L. L. Goddard, and G. Popescu, “White-light diffraction tomography of unlabelled live cells,” Nat. Photonics 8(3), 256–263 (2014). [CrossRef]  

20. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope.,” Optica 2(2), 104–111 (2015). [CrossRef]  

21. S. Chowdhury, M. Chen, R. Eckert, D. Ren, F. Wu, N. Repina, and L. Waller, “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211 (2019). [CrossRef]  

22. J. Li, A. C. Matlock, Y. Li, Q. Chen, C. Zuo, and L. Tian, “High-speed in vitro intensity diffraction tomography,” Adv. Photon. 1(06), 1 (2019). [CrossRef]  

23. M. Chen, D. Ren, H.-Y. Liu, S. Chowdhury, and L. Waller, “Multi-layer Born multiple-scattering model for 3D phase microscopy.,” Optica 7(5), 394–403 (2020). [CrossRef]  

24. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23(11), 14461 (2015). [CrossRef]  

25. K. C. Zhou, R. Qian, S. Degan, S. Farsiu, and J. A. Izatt, “Optical coherence refraction tomography,” Nat. Photonics 13(11), 794–802 (2019). [CrossRef]  

26. K. Kim, W. S. Park, S. Na, S. Kim, T. Kim, W. D. Heo, and Y. Park, “Correlative three-dimensional fluorescence and refractive index tomography: bridging the gap between molecular specificity and quantitative bioimaging,” Biomed. Opt. Express 8(12), 5688 (2017). [CrossRef]  

27. S. Shin, D. Kim, K. Kim, and Y. Park, “Super-resolution three-dimensional fluorescence and optical diffraction tomography of live cells using structured illumination generated by a digital micromirror device,” Sci. Rep. 8(1), 9183 (2018). [CrossRef]  

28. J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express 7(2), 352–368 (2016). [CrossRef]  

29. L.-H. Yeh, S. Chowdhury, and L. Waller, “Computational structured illumination for high-content fluorescence and phase microscopy,” Biomed. Opt. Express 10(4), 1978–1998 (2019). [CrossRef]  

30. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

31. Eric Betzig, George H. Patterson, Rachid Sougrat, O. Wolf Lindwasser, Scott Olenych, Juan S. Bonifacino, Michael W. Davidson, Jennifer Lippincott-Schwartz, and Harald F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642 (2006). [CrossRef]  

32. D. Ren, C. Ophus, M. Chen, and L. Waller, “A multiple scattering algorithm for three dimensional phase contrast atomic electron tomography,” Ultramicroscopy 208, 112860 (2020). [CrossRef]  

33. D. G. Ouzounov, T. Wang, M. Wang, D. D. Feng, N. G. Horton, J. C. Cruz-Hernández, Y.-T. Cheng, J. Reimer, A. S. Tolias, N. Nishimura, and C. Xu, “In vivo three-photon imaging of activity of GCaMP6-labeled neurons deep in intact mouse brain,” Nat. Methods 14(4), 388–390 (2017). [CrossRef]  

34. Y. Xue, D. Ren, and L. Waller, “Three-dimensional bi-functional refractive index and fluorescence microscopy (BRIEF): code,” github, 2022. https://github.com/Waller-Lab/BRIEF.

35. T.-A. Pham, E. Soubies, F. Soulez, and M. Unser, “Optical diffraction tomography from single-molecule localization microscopy,” Opt. Commun. 499, 127290 (2021). [CrossRef]  

Supplementary Material (1)

Supplement 1: Supplemental document

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. The custom code used to reconstruct RI from fluorescence images is written in Python and is available in Ref. [34].

Figures (4)

Fig. 1. 3D bi-functional refractive index and fluorescence microscopy (BRIEF). (a) To excite different fluorophores at different times, a collimated illumination beam is modulated by a digital micromirror device (DMD) that selectively excites each fluorophore sequentially. Emitted fluorescence scatters through the label-free tissue above and is imaged onto a camera sensor, which is focused at the top of the sample. (b) The sample here consists of fluorescence-labeled structures on the bottom and non-labeled cells on the top. A multi-slice scattering algorithm models how the fluorescence propagates through each depth slice of the sample, scattering according to its 3D refractive index (RI). (c) Two examples of raw images captured by the camera, corresponding to fluorophore #1 and #2 in (b) being turned ‘on’. The size of the circular region is determined by the depth of the fluorophore and the numerical aperture (NA) of the system; fluorophore #1 is located deeper than fluorophore #2, so it illuminates a larger volume. Fine structures inside the circular area carry phase information about the cells and can be considered as intensity images taken with different defocus and/or illumination angles.
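The multi-slice model described in Fig. 1(b) alternates thin-slice phase modulation with free-space propagation between slices. A minimal numerical sketch of this idea, using an angular-spectrum propagator (function names, grid, and parameter values here are illustrative, not taken from the paper's released code):

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a 2D complex field by distance dz via the angular spectrum method."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Clip at zero so evanescent components are suppressed rather than amplified.
    arg = np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2)
    kernel = np.exp(2j * np.pi * dz * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def multislice_forward(field, ri_slices, dz, wavelength, dx, n_medium=1.33):
    """Multi-slice scattering: each RI slice acts as a thin phase screen,
    followed by propagation to the next slice through the background medium."""
    k0 = 2 * np.pi / wavelength
    for ri in ri_slices:
        field = field * np.exp(1j * k0 * (ri - n_medium) * dz)   # phase screen
        field = angular_spectrum_propagate(field, dz, wavelength / n_medium, dx)
    return field
```

A homogeneous volume (every slice equal to the background index) should leave the field amplitude unchanged, which makes a quick sanity check for the propagator.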
Fig. 2. BRIEF reconstruction of the 3D RI of glass beads from a single dataset of experimentally measured fluorescence images, each taken with a different fluorescent bead emitting. (a) The sample consists of fluorescent beads on the bottom layer of the PDMS, acting as light sources, and glass beads on the top layer of the PDMS, acting as non-fluorescent phase objects to be reconstructed. (b) One of the 21 raw measurements used for reconstruction, showing defocused fluorescence scattered by the glass beads. (c, d) Reconstructed RI of the glass beads at $\Delta z = 80\mu m$ and $\Delta z = 56\mu m$ below the image plane, respectively. The 3D view of the reconstructed RI shows that our technique achieves good z-sectioning. (e, f) Widefield images of the sample under transmitted illumination, showing the positions of the glass beads as a ‘ground truth’ for comparison with the reconstruction results.
Fig. 3. Experimental results with a sample consisting of fluorescent beads beneath a thin layer of live CHO cells. (a) 3D view of the fluorescent signal from the beads (magenta) and the RI of the CHO cells (green). (b) A representative raw measurement with one fluorescent bead ‘on’. (c) Reconstructed 3D RI of the CHO cells at $\Delta z = 40\mu m$ below the image plane from 23 forward measurements. (d) Widefield intensity image of the sample under transmitted infrared illumination, showing the ground truth of the cells’ lateral positions. (e) Overlay of the maximum intensity projection (MIP) of the widefield image stack of the fluorescent beads excited by a 473 nm laser (ground truth, red) and the reconstructed 3D distribution of the beads’ locations (green) from 148 forward measurements like (b). Note that this widefield 3D fluorescence image stack is not used in the reconstruction of the fluorescence distribution.
Fig. 4. Multi-modal microscopy for digital correction of multiple scattering in fluorescence images. (a) A negative fluorescence USAF target is used as the fluorescent sample, and a highly-scattering glass-bead phantom on top of it acts as the scattering medium. (b) A representative image from the raw measurements. The glass beads are illuminated by fluorescence from the USAF target rather than by fluorescent beads as in the previous experiments. (c) Orthogonal slice views of the reconstructed 3D RI. (d) Raw fluorescence image of the USAF target under wide-field blue laser illumination. This image is taken in focus, whereas (b) is captured with the system focused at the top surface of the sample. (e) Zoomed-in view of the area in the blue box in (d), containing the line pairs of group 7, element 6. (f) Reconstructed image after correcting multiple scattering with the multi-slice model. (g) Zoomed-in view of the area in the red box in (f). (h) Normalized intensity profiles along the horizontal direction (H) and the vertical direction (V) in (e, blue) and (g, red).
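The digital correction in Fig. 4(f) uses the reconstructed RI to undo the scattering imposed by the multi-slice model. As a simplified, coherent-field illustration only (the paper's actual fluorescence correction handles incoherent emission, and all names and values below are illustrative), one can step backward through the RI slices, removing each propagation step and conjugating each phase screen:

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular spectrum propagation by dz (negative dz propagates backward)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * dz * np.sqrt(arg)))

def undo_multislice(field_at_top, ri_slices, dz, wavelength, dx, n_medium=1.33):
    """Unwind a multi-slice pass: traverse the reconstructed RI slices in
    reverse, back-propagating and applying the conjugate of each phase screen."""
    k0 = 2 * np.pi / wavelength
    for ri in reversed(ri_slices):
        field_at_top = propagate(field_at_top, -dz, wavelength / n_medium, dx)
        field_at_top = field_at_top * np.exp(-1j * k0 * (ri - n_medium) * dz)
    return field_at_top
```

For band-limited fields, applying the forward multi-slice pass and then this backward pass should return the original field, which is the property the digital correction relies on.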

Equations (1)


$$\mathop{\mathrm{arg\,min}}_{n(\mathbf{r})} \; \sum_l \left\| I_l(\mathbf{r}; z) - \hat{I}_l(\mathbf{r}; z) \right\|_2^2 + R[n(\mathbf{r})].$$
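Here $I_l$ is the $l$-th measured image, $\hat{I}_l$ is the corresponding image predicted by the forward model from the current RI estimate $n(\mathbf{r})$, and $R$ is a regularizer. As a toy illustration of how such a regularized least-squares objective is minimized by gradient descent, here is a sketch with a *linear* surrogate forward model $A_l$ and an L2 regularizer (the paper's actual model is the nonlinear multi-slice model; all names below are illustrative):

```python
import numpy as np

def reconstruct_ri(measurements, forward_ops, tau=1e-3, lr=0.05, n_iter=2000):
    """Gradient descent on sum_l ||I_l - A_l n||_2^2 + tau * ||n||_2^2,
    a linear stand-in for the BRIEF objective above."""
    num_l = len(forward_ops)
    n = np.zeros(forward_ops[0].shape[1])  # flattened RI estimate, zero init
    for _ in range(n_iter):
        grad = tau * n  # gradient of the L2 regularizer (up to a factor of 2)
        for I_l, A_l in zip(measurements, forward_ops):
            grad += A_l.T @ (A_l @ n - I_l) / num_l  # data-fidelity gradient
        n -= lr * grad
    return n
```

With the real multi-slice model, the data-fidelity gradient is obtained by back-propagating the residual through the forward model (e.g., via automatic differentiation) rather than by an explicit matrix transpose, but the iteration structure is the same.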