
Deep-learning super-resolution light-sheet add-on microscopy (Deep-SLAM) for easy isotropic volumetric imaging of large biological specimens

Open Access

Abstract

Isotropic 3D histological imaging of large biological specimens is highly desired but remains challenging for current fluorescence microscopy techniques. Here we present a new method, termed deep-learning super-resolution light-sheet add-on microscopy (Deep-SLAM), that enables fast, isotropic light-sheet fluorescence imaging on a conventional wide-field microscope. After integrating a compact add-on device that transforms an inverted microscope into a 3D light-sheet microscope, we further integrate a deep neural network (DNN) procedure to quickly restore the ambiguous z-reconstructed planes that suffer from the still-insufficient axial resolution of the light-sheet illumination, thereby achieving isotropic 3D imaging of thick biological specimens at single-cell resolution. We apply this simple and cost-effective Deep-SLAM approach to the anatomical imaging of single neurons in a meso-scale mouse brain, demonstrating its potential for readily converting commonly used commercial 2D microscopes to high-throughput 3D imaging, a capability previously exclusive to high-end microscopy implementations.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light-sheet fluorescence microscopy (LSFM) allows three-dimensional imaging of biological samples with high speed and low photo-bleaching, and has recently emerged as an important alternative to conventional epifluorescence imaging approaches [1-7]. Toward the ultimate goal of fast, accurate, noninvasive spatiotemporal imaging, a variety of LSFM implementations have evolved from the classic SPIM to provide superior-quality imaging of samples ranging from single cells to entire organs [8-19]. Aside from these advanced LSFM modalities, simple LSFM approaches, e.g., the established OpenSPIM, have also been invented with simplified structures and reduced cost, enabling more widespread LSFM applications under ordinary conditions [20,21]. Easier routes have also been reported that enhance an existing conventional microscope with simple retrofits containing plane-illumination and sample-scanning additions, thus providing a compact and cost-effective way to implement LSFM imaging on epifluorescence microscopes. Given the large number of such conventional microscopes in service, these techniques offer compelling solutions for researchers to readily access advanced LSFM imaging [22-25]. However, owing to the simple schemes used for light-sheet generation, the axial resolution of the system, which is compromised by the range of the laser-sheet illumination, remains largely insufficient, being limited to ambiguous cellular resolution (∼10-20 μm) when imaging large histological specimens. Multi-frame resolution-enhancement techniques, such as Fourier ptychography, structured illumination, and voxel super-resolution [26-28], can computationally address this issue by reconstructing a super-resolved 3D image from a number of low-resolution measurements, but at the expense of increased acquisition time and low processing throughput. Unlike these multi-frame methods, deep-learning-enabled image restoration has recently become a promising tool for various light-microscopy techniques [29-31], with a trained neural network capable of directly deducing a higher-quality image from a single low-quality measurement. Deep-learning-based restoration has also been applied to the resolution enhancement and denoising of LSFM images, for which the acquisition of high-quality LSFM data for network training is relatively difficult [32,33]. Building on these state-of-the-art developments, we herein report deep-learning-enabled super-resolution light-sheet add-on microscopy (Deep-SLAM), which combines a simple add-on device with an efficient deep neural network (DNN) and allows a conventional 2D microscope to perform isotropic 3D imaging of large biological specimens. In addition to the hardware add-on that endows the wide-field microscope with 3D optical-sectioning capability, the newly integrated DNN procedure further improves the axial performance of the system computationally, from 15-µm cellular resolution to isotropic 3-µm single-cell resolution. We demonstrate this Deep-SLAM approach by imaging GFP-tagged single neurons in a meso-scale mouse brain. Compared with the very poor performance of the original microscope, or the still-fuzzy 3D reconstruction obtained by the add-on device alone, the simple and efficient Deep-SLAM hybrid strategy readily achieves over 10-fold enhanced axial resolution and much higher signal contrast, allowing otherwise indistinguishable nerve endings to be super-resolved across a large 3D brain volume.
Furthermore, we demonstrate accurate counting of cell populations and segmentation of single neurons as a result of the significantly improved image quality.

2. Experiments and results

2.1 Dataset preparation and experimental settings

The Deep-SLAM procedure comprises SLAM imaging on an inverted microscope (Olympus IX73) [22], which improves the image contrast and axial resolution, followed by isotropic enhancement using the CARE deep-learning model [29]. The built-in epi-fluorescence mode of the conventional microscope illuminates the entire thick sample without blocking out-of-focus excitation, yielding a completely blurred z reconstruction (Fig. 1(a)). SLAM imaging was then enabled through a compact add-on device, which provides horizontal light-sheet illumination to the sample (Fig. 1(b)). While SLAM enhances the axial resolution and signal contrast by introducing a plane-illumination mode, a relatively thick laser sheet (e.g., ∼15 μm) is necessary to cover a large field of view (FOV) (e.g., ∼3 mm), resulting in an ambiguous axial resolution insufficient for discerning single cells. Following the iso-CARE procedure, we generated synthetic low-resolution axial slices by applying a degradation model to the better-resolved lateral slices (Fig. 8). These raw lateral slices (ground truths) were then paired with their degraded versions, which simulate the low-resolution axial slices, to form the training dataset for the CARE network. Finally, the acquired SLAM image stack was resliced into a stack of axial slices and restored by the trained model to generate an output stack with improved, isotropic resolution (Fig. 1(c)).
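
To make the reslice-and-restore step concrete, the following minimal sketch (our illustration, not the authors' released code) reslices an anisotropic SLAM stack into x-z planes, upsamples each plane to the lateral pixel pitch, and restores it with a trained network; `model.predict` here stands in for whatever inference call the trained CARE model actually exposes.

```python
import numpy as np

def isotropic_restore(stack, model, z_factor=5):
    """Reslice a (z, y, x) SLAM stack into x-z planes, restore each plane
    with a trained network, and reassemble a near-isotropic volume.

    `model` is assumed to expose predict(plane) -> plane (hypothetical);
    z_factor is the ratio of the axial step to the lateral pixel size.
    """
    nz, ny, nx = stack.shape
    restored = np.empty((nz * z_factor, ny, nx), dtype=np.float32)
    for j in range(ny):                        # one x-z plane per lateral row
        plane = stack[:, j, :]                 # shape (nz, nx), anisotropic
        up = np.repeat(plane, z_factor, axis=0)  # naive z-upsampling to lateral pitch
        restored[:, j, :] = model.predict(up)   # network sharpens the z axis
    return restored
```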


Fig. 1. Deep-SLAM procedure. (a) Regular epi-fluorescence microscopy with low contrast and completely blurred axial planes. (b) SLAM mode with an add-on device attached to the conventional microscope. The SLAM add-on provides additional light-sheet illumination in the vicinity of the focal plane, and thus improves the image contrast and resolution. (c) Deep-SLAM mode with further DNN-based isotropic restoration of the SLAM image. Blurred, down-sampled lateral slices of the SLAM data with added noise were generated to simulate the axial slices of the SLAM image, which still show ambiguous axial resolution incapable of resolving single cells (Step 1). The degraded lateral slices (synthetic low-resolution data) paired with the raw lateral slices (high-resolution ground truth) were used to train a CARE network model (Step 2). Finally, the trained network directly infers an isotropic 3D image stack from the raw anisotropic SLAM input (Step 3).


2.2 Characterization of Deep-SLAM

We imaged sub-diffraction fluorescent beads (0.5-μm diameter, Lumisphere, BaseLine Chromtech) using the original inverted microscope (4×/0.16 objective), the thick SLAM mode (0.02 illumination NA), the thin SLAM mode (0.06 illumination NA) and the Deep-SLAM mode (0.02 illumination NA + iso-CARE), to compare their point spread functions (PSFs, Fig. 2(a)-(d)). Compared with the extremely poor axial performance (∼42 µm) of the epi-illumination mode over a wide FOV (∼3 mm), the raw SLAM result showed much improved axial resolution (∼15 µm along the z axis), although the elongated PSF indicated that it was still anisotropic (Fig. 2(a), (b), (f)). Meanwhile, images obtained with the thin SLAM mode had a finer axial resolution of ∼5 µm but a sharply reduced FOV (∼280 µm), owing to the intrinsic trade-off of Gaussian-beam illumination. In contrast, Deep-SLAM showed near-isotropic resolution (∼3 µm PSFs), close to that of the thin SLAM mode, while maintaining the 3-mm wide FOV (Fig. 2(d), (f)).
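
The trade-off quantified above follows the standard Gaussian-beam relations: the sheet thickness (2w0) scales as 1/NA, while the confocal range (2zR) scales as 1/NA². Below is a back-of-the-envelope check (our illustration; the usable FOV reported in the text depends on how much sheet broadening is tolerated, so only the scaling, not the exact 3-mm/280-µm figures, should be read from it).

```python
import math

def sheet_params(wavelength_um, na):
    """Gaussian-sheet waist and confocal range for a given illumination NA."""
    w0 = wavelength_um / (math.pi * na)       # beam waist (1/e^2 radius), um
    zr = math.pi * w0 ** 2 / wavelength_um    # Rayleigh range, um
    return 2 * w0, 2 * zr                     # sheet thickness, confocal range

for na in (0.02, 0.06):
    t, c = sheet_params(0.473, na)            # 473-nm illumination
    print(f"NA {na}: thickness ~{t:.1f} um, confocal range ~{c:.0f} um")
# NA 0.02: thickness ~15.1 um, confocal range ~753 um
# NA 0.06: thickness ~5.0 um,  confocal range ~84 um
```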


Fig. 2. Wide-FOV, isotropic Deep-SLAM imaging. (a-d) x-z maximum intensity projections (MIPs) of sub-diffraction fluorescent beads imaged by the epi-illumination mode (a, 4×/0.16 objective), thick SLAM mode (b, 0.02 illumination NA), thin SLAM mode (c, 0.06 illumination NA) and Deep-SLAM mode (d, 0.02 illumination NA + iso-CARE). Scale bars: 50 µm; 20 µm for insets. The magnified views of the PSFs indicate the resolving power of each mode. (e) Corresponding 3D visualization of the beads resolved by the four modes. (f) Peak signal-to-noise ratio (PSNR) values quantitatively showing the fidelity of the wide-field, thick SLAM and Deep-SLAM images, compared with the HR ground-truth images (taken from the confocal range of the thin SLAM mode). (g) Axial intensity plots of beads along the x axis, comparing both the axial resolution and the FOV of the imaging modes shown in (a-d).


2.3 Network validation on cleared mouse brain

We further demonstrated the performance of Deep-SLAM by imaging fluorescence-labelled neurons in a transgenic mouse brain (Thy1-GFP-M). A large clarified brain tissue block (PEGASOS method [34]) was imaged and then reconstructed using the abovementioned Deep-SLAM approach. Single somata in the original whole-tissue-scale thick SLAM images remained ambiguous (Fig. 3(b), (c) left; Fig. 4(a), (d)), whereas the deep-learning restoration was capable of super-resolving these fine structures across the entire 3-mm FOV (Fig. 3(b), (c) middle; Fig. 4(b), (e)), which in turn enabled follow-up biological analyses such as accurate neuron segmentation and soma counting (Fig. 6(a), (b)). We also compared the Deep-SLAM results with the thin SLAM mode (Fig. 3(b), (c) right; Fig. 4(c), (f)), which achieves single-cell resolution in only a few ROIs owing to its small illumination FOV. The normalized root mean square error (NRMSE) also validated the sufficient accuracy of the network restoration in Deep-SLAM.
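
For reference, the NRMSE metric can be computed as below; the paper does not state its normalization convention, so normalization by the reference intensity range is our assumption.

```python
import numpy as np

def nrmse(restored, reference):
    """Root-mean-square error normalized by the reference intensity range."""
    restored = restored.astype(np.float64)
    reference = reference.astype(np.float64)
    rmse = np.sqrt(np.mean((restored - reference) ** 2))
    return rmse / (reference.max() - reference.min())
```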


Fig. 3. Restoration of neurons/nuclei in a Thy1-GFP-M mouse brain imaged by the thick SLAM mode. (a) 3D rendering of a large tissue block (∼2.5 mm × 2 mm × 1.5 mm) from a transgenic Thy1-GFP-M mouse brain. (b, c) Maximum intensity projections (MIPs) of the x-z planes in two selected ROIs (highlighted boxes in (a)), by the thick SLAM (1st column), Deep-SLAM (2nd column) and thin SLAM (3rd column) modes. The resolved somata and neuron fibers, together with their line intensity profiles, confirm the improvement of axial resolution after isotropic network restoration. The normalized root mean square error (NRMSE) between the Deep-SLAM and thin SLAM results further validates the high restoration accuracy of Deep-SLAM (4th column). Scale bar, 100 µm.



Fig. 4. Large-scale imaging of a Thy1-GFP-M mouse brain using the three modes. (a-c) Maximum intensity projections (MIPs) of x-z planes in a 3-mm FOV region by the thick SLAM, Deep-SLAM and thin SLAM modes. The red box indicates the confocal range of the thin SLAM mode. (d-e) Magnified views of the selected ROIs indicated in (a-c) (blue, green, orange and yellow boxes, with the blue boxes lying within the Rayleigh range of the thin SLAM mode). Scale bar, 50 µm.


2.4 High-throughput isotropic imaging of half mouse brain using Deep-SLAM

We demonstrate that Deep-SLAM can achieve high-throughput, isotropic 3D imaging of a half mouse brain at single-cell resolution, showing how a capability previously exclusive to high-end LSFM implementations can now be realized on a conventional 2D microscope. We used SLAM to quickly image GFP-tagged neurons in a large half mouse brain (Tg: Thy1-GFP-M, ∼10 × 3 × 5 mm^3) in merely ∼5 minutes (Fig. 5(a)), equivalent to an acquisition throughput of 1.7 × 10^8 voxels per second (1 × 1 × 5 μm voxel size). At such a large scale, compared with the thin-and-narrow SLAM mode, the wide-FOV Deep-SLAM reduced the number of stitched tiles (∼5 vs ∼40) and showed lower photo-bleaching as well. Then, by including only a small amount of SLAM data for training, the DNN model could restore the various types of neurons in the half brain. Diverse neuronal structures distributed in five different brain sub-regions were chosen to demonstrate their successful 3D visualization at isotropic single-cell resolution (Fig. 5(b)-(f)). Neurons distributed at the edge of the FOV were also compared, to validate the performance of Deep-SLAM across the whole FOV (Fig. 5(g), (h)).
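
As an order-of-magnitude check of the quoted throughput (our arithmetic, ignoring tile overlap and the exact camera frame geometry, which is why it lands somewhat below the reported 1.7 × 10^8 voxels/s):

```python
# Half-brain extent and voxel size taken from the text (illustrative only).
extent_mm = (10, 3, 5)        # x, y, z extent of the half brain, mm
voxel_um = (1, 1, 5)          # voxel size, um

voxels = 1
for ext, vox in zip(extent_mm, voxel_um):
    voxels *= ext * 1000 // vox           # mm -> um, then voxels per axis

acq_s = 5 * 60                            # ~5-minute acquisition
print(f"{voxels:.2e} voxels, ~{voxels / acq_s:.1e} voxels/s")
# -> 3.00e+10 voxels, ~1.0e+08 voxels/s (same order as the reported value)
```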


Fig. 5. High-throughput acquisition of a Thy1-GFP-M half mouse brain with Deep-SLAM. (a) 3D visualization of a Thy1-GFP-M half mouse brain (200 mm^3), which was imaged in less than 10 minutes with 5 stitched tiles using the thick light sheet (4×, NA 0.16, Olympus, FOV 2.9 mm). (b-f) Five magnified volumes from the cortex (b), hindbrain (c), hippocampus (d), midbrain (e) and cerebellum (f), showing the resolution improvement by Deep-SLAM, which achieves isotropic 3D single-cell resolution across the entire half brain. (g) The MIP of the region indicated in (a) with dotted lines shows the resolution enhancement of Deep-SLAM across the whole FOV, even near the edge. Scale bar, 100 µm.


2.5 Improved cell counting and neuron tracing based on Deep-SLAM results

Owing to the isotropic brain imaging by Deep-SLAM, significantly more neuronal details densely packed in a cortex region could be segmented and traced using Imaris. The total length of segmented filaments is 759 μm in the thick SLAM result, 1239 μm in the Deep-SLAM result, and 1248 μm in the thin SLAM result, as shown in Fig. 6(a). Meanwhile, the counts of dense cell bodies in different brain sub-regions reconstructed by Deep-SLAM were also more accurate than those from the raw thick SLAM data, using the high-resolution thin SLAM results as references (Fig. 6(b), (c); 546 by thick SLAM vs 640 by Deep-SLAM vs 636 by thin SLAM in the cortex, and 1595 by thick SLAM vs 1959 by Deep-SLAM vs 1985 by thin SLAM in the hippocampus).

3. Methods

3.1 SLAM imaging of large clarified mouse brain

We performed SLAM imaging of the mouse brain on an Olympus IX73 inverted microscope [22] (Fig. 7(b), (c)). The brain of an 8-week-old transgenic adult mouse (Thy1-GFP, line M, Jackson Laboratory) was clarified using an organic-solvent-based clearing method (PEGASOS [34]). Two set screws on the sample holder clamped the cleared and hardened half mouse brain (Fig. 7(d)), which was dipped into the glass chamber of the SLAM add-on, filled with a refractive-index-matched solvent (n = 1.54). A 473-nm laser beam was shaped into a light sheet by the cylindrical lens (CL, Thorlabs, LJ1810L2-A) of the add-on device, to selectively illuminate the brain sample at the focal plane of a 4×/0.16 objective. An adjustable slit (AS, Thorlabs, VA100C/M) was used to switch between the thick and thin SLAM modes (wide open for thin SLAM, 1-mm width for thick SLAM). Three compact translation stages aligned the light sheet with the sample in three directions (Fig. 7(c)): the confocal range of the light sheet could be adjusted along the propagation direction (x axis) by stage 3, and the sample could be adjusted along the y direction by stage 1. The motorized actuator (on stage 2) scanned the sample through the light sheet (z direction) while the camera simultaneously recorded the consecutive planes at a rate of 20 frames per second.


Fig. 6. Accuracy-improved brain analysis by Deep-SLAM. (a) Comparative image-based neuron segmentation/tracing using Imaris. Pyramidal neurons in the same cerebral-cortex region resolved by the thick SLAM (1st column), Deep-SLAM (2nd column) and thin SLAM (3rd column) modes were segmented and traced. The total filament length in the image of each mode is compared in the right column. (b, c) Comparative neuron cell-body identification and counting in two 400 × 400 × 400 μm^3 ROIs in the hippocampus and cortex. The 3D visualizations as well as the quantitative results validate that the axial improvement by Deep-SLAM is substantially beneficial to accurate neuron analysis, which is otherwise challenging for the regular thick SLAM mode owing to the axial blurring. The number of neurons identified in the image of each mode is also compared (inset). (d, e) 3D visualization of six 50 × 50 × 50 μm^3 ROIs indicated by the blue boxes in (a, b). The magnified views clearly show the accuracy-improved cell counting by Deep-SLAM. Scale bar, 50 μm.



Fig. 7. SLAM add-on device. (a) Schematic of the SLAM device. (b) Photograph of the whole SLAM device. (c) Magnified view of the SLAM device in operation. (d) Photograph of the sample holder, which fixes the cleared mouse brain with two set screws. Scale bar, 1 cm.


3.2 Validation of the image degradation model

Ambiguous axial resolution is a common problem in 3D microscopy (Fig. 8(a)). Such anisotropy is caused by the inherent axial elongation of the optical PSF and by the low axial sampling rate commonly used in volumetric acquisitions for fast imaging. To obtain isotropic resolution, we used the well-resolved lateral slices as ground truths and degraded them to obtain the corresponding low-resolution training data, which simulate the anisotropic axial slices. In this way, we generated low/high-resolution data pairs to train the deep-learning model, which could finally restore the axial slices to near-isotropic resolution. We generated the low-axial-resolution semi-synthetic data using the following image degradation operations (Fig. 8(b)):

  • (1) Anisotropic transform. We first convolved the high-resolution lateral slices with a synthetic PSF (simulating the axial elongation of the measured PSF), to obtain blurred images similar to the axial slices.
  • (2) Down-sampling. We down-sampled the blurred images 5× along the x axis (from 1 μm to 5 μm) using a re-slicing method, to simulate the coarser z-scan steps.
  • (3) Noise addition. We adjusted both the mean and the variance of randomly generated noise to produce a series of synthetic images, which we compared with the experimentally measured LR images. When the semi-synthetic images show an SNR and a Fourier-domain distribution similar to those of the real low-resolution axial slices, the degradation process is considered valid (Fig. 9). These operations transformed high-resolution lateral slices (xy) into synthetic lower-resolution images similar to the measured axial slices (xz or yz) for network training (Fig. 10); a minimal code sketch of the three steps follows.
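
The sketch below is our illustration of the three-step scheme above; the blur width and noise statistics are free parameters that, as described, would be tuned until the synthetic slices match the SNR and Fourier spectra of measured axial slices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_lateral_slice(hr_xy, psf_sigma_px=6.0, z_factor=5,
                          noise_mean=0.0, noise_std=0.02, rng=None):
    """Turn a high-resolution lateral (x-y) slice into a semi-synthetic
    low-resolution 'axial' slice.

    psf_sigma_px, noise_mean and noise_std are assumed free parameters,
    to be matched against real axial slices.
    """
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter1d(hr_xy.astype(np.float32),
                                sigma=psf_sigma_px, axis=0)   # (1) anisotropic blur
    low = blurred[::z_factor, :]                              # (2) 5x downsampling
    low += rng.normal(noise_mean, noise_std, low.shape)       # (3) noise addition
    return low
```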


Fig. 8. Degradation model for generating semi-synthetic axial slices. (a) The insufficient axial resolution of a SLAM image, compared with its lateral resolution. (b) Schematic of the image degradation steps: 1. a Gaussian blur applied along the x axis of the lateral image slices of SLAM; 2. 5× downsampling followed by noise addition to generate the final low-resolution data (lower right), similar to the axial slices of SLAM (upper right).



Fig. 9. Verification of the image degradation model. Four 150 × 150 × 150 μm^3 regions in the cortex of a mouse brain were visualized to verify the accuracy of our degradation model. We first compare the synthetic LR data degraded from the thin light-sheet GT measurement with the real thick light-sheet experimental LR measurements. Then the outputs recovered from both the synthetic LR and experimental LR measurements are also compared. (a-d) Comparison of MIPs in the x-z planes of the cortex region. Scale bars, 50 µm. The insets show the corresponding Fourier spectra, from which the achieved resolutions of the images are also calculated. (e) The PSNR, SSIM and NRMSE indicate the high fidelity and low error of the Deep-SLAM reconstructions. The visually and quantitatively high similarity between the two low-resolution datasets, their corresponding reconstructions, and the high-resolution GT verifies the sufficient accuracy of our image degradation model.



Fig. 10. U-Net architecture in Deep-SLAM. The CARE network, based on a 2D U-Net framework, was applied in our Deep-SLAM. The U-Net logically contains an encoder-decoder architecture with skip connections. Instead of predicting a single value per output pixel, it predicts a per-pixel distribution (parameterized by mean and scale); the mean is taken as the final single-channel prediction. The numbers to the left of and above the layers represent the size and number of the feature maps, respectively. "Conv" denotes a convolutional layer and "Concat" a concatenation operation. The 5×5 Conv layers use the rectified linear unit (ReLU) as the activation function. We used Adam as the optimizer with a learning rate of 0.0004.


Through the abovementioned degradation operations, we generated 4000 training pairs of fluorescent-microsphere images (128 × 128 pixels each) from four image blocks (∼2 × 2 × 1 mm^3 each) acquired using the thick SLAM mode. For the network training on mouse-brain data, we generated 6000 training pairs (128 × 128 pixels each) from six image blocks (∼2 × 2 × 0.25 mm^3 each) acquired using the thick SLAM mode.
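
A sketch of how matched 128 × 128 training pairs could be cropped from the co-registered low/high-resolution stacks (our illustration; the paper specifies only the patch size and counts, not the sampling strategy, and we assume the degraded data have been resampled back onto the ground-truth grid):

```python
import numpy as np

def sample_patch_pairs(lr_stack, hr_stack, n_pairs, patch=128, rng=None):
    """Randomly crop matched (low-res, high-res) patch pairs from two
    co-registered (z, y, x) stacks of identical shape."""
    rng = rng or np.random.default_rng(0)
    nz, ny, nx = hr_stack.shape
    pairs = []
    for _ in range(n_pairs):
        k = int(rng.integers(nz))               # random slice
        i = int(rng.integers(ny - patch + 1))   # random top-left corner
        j = int(rng.integers(nx - patch + 1))
        pairs.append((lr_stack[k, i:i + patch, j:j + patch],
                      hr_stack[k, i:i + patch, j:j + patch]))
    return pairs
```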

3.3 Deep network structure

We adopted the CARE model to obtain the nonlinear mapping function between the low-resolution input and the high-resolution output [29]. A U-Net framework was used, as it has achieved good performance in various biomedical applications. Owing to elastic-deformation data augmentation, the U-Net requires only a small number of labeled images and a relatively short training time. The network generates intermediate outputs from the input low-resolution data and quantitatively compares them with the high-resolution label data; the resulting loss is then used to iteratively optimize the model until convergence. The CARE network was implemented in Python using TensorFlow and TensorLayer. Training (4000 pairs of size 128 × 128) took ∼3 hours on a single Nvidia 2080Ti GPU.
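
To ground the description above, here is a condensed two-level U-Net sketch in Keras with the stated 5×5 convolutions, ReLU activations, a skip connection, a per-pixel (mean, scale) output and Adam at a learning rate of 0.0004. It is a simplified stand-in, not the released CSBDeep/CARE implementation (which is deeper and defines its own likelihood loss); the Laplace negative log-likelihood below is our assumption of a CARE-style probabilistic loss.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(shape=(128, 128, 1), base=32):
    """Two-level U-Net sketch: 5x5 convs, ReLU, one skip connection,
    and a 2-channel (mean, scale) output; the mean is the restored image."""
    inp = layers.Input(shape)
    c1 = layers.Conv2D(base, 5, padding="same", activation="relu")(inp)
    p1 = layers.MaxPool2D()(c1)
    c2 = layers.Conv2D(base * 2, 5, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)
    m = layers.Concatenate()([u1, c1])              # skip connection
    c3 = layers.Conv2D(base, 5, padding="same", activation="relu")(m)
    out = layers.Conv2D(2, 1, padding="same")(c3)   # channels: mean, scale
    return Model(inp, out)

def laplace_nll(y_true, y_pred):
    """Per-pixel Laplace negative log-likelihood (assumed CARE-style loss)."""
    mean = y_pred[..., :1]
    scale = tf.nn.softplus(y_pred[..., 1:]) + 1e-6  # keep scale positive
    return tf.reduce_mean(tf.abs(y_true - mean) / scale
                          + tf.math.log(2.0 * scale))

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(4e-4), loss=laplace_nll)
# model.fit(lr_patches, hr_patches, batch_size=16, epochs=100)
```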

3.4 Neuron tracing and cell body counting

The neurons in the whole-brain data were segmented semi-automatically using the commercial Imaris software. The Autopath mode of the Filament module was applied to trace the neurons: we first assigned one point on a neuron to initiate the tracing; Imaris then automatically calculated the pathway according to the image data, reconstructed the 3D morphology, and linked it with the previously traced part. This procedure was repeated until the whole neuron, as could also be recognized by eye, was segmented. The traced neurons are shown in Fig. 6(a). We used the Spots module of Imaris to count the cell nuclei, choosing two blocks in the hippocampus and cortex regions where cells are densely distributed. Automatic creation in the Spots module was then applied to count the cell number for each channel, each representing a brain region. Comparing the counts from the raw thick SLAM and Deep-SLAM data with those from the high-resolution thin SLAM mode shows that the Deep-SLAM results lead to markedly more accurate counting of these dense cell bodies.

3.5 Qualitative comparison of Deep-SLAM with other imaging modalities

We compared Deep-SLAM with conventional epi-illumination microscopy, the thin SLAM mode, the thick SLAM mode, and Bessel-beam light-sheet fluorescence microscopy, which has emerged in recent years. Compared with the conventional epi-illumination microscope, both the thin and thick SLAM modes improve the image contrast and axial resolution, but owing to the inherent trade-off between axial resolution and FOV, the FOV must be sacrificed for thinner optical sectioning. Deep-SLAM resolves this conflict by combining the large FOV and high throughput of the thick SLAM mode with the high axial resolution of the thin SLAM mode, without requiring any further hardware retrofits to the existing system. Table 1 qualitatively summarizes the performance of the original epifluorescence microscope and the different SLAM imaging modes.


Table 1. Comparison of the performance of different imaging modalities

4. Conclusion

We have demonstrated an efficient and cost-effective imaging approach that combines a light-sheet imaging add-on (hardware) with DNN restoration (software) to allow an ordinary inverted microscope to realize minute-scale histological imaging of a large mouse brain (∼10 × 3 × 5 mm^3) at isotropic single-cell resolution, previously a capability unique to high-end, complicated LSFM implementations. Benefiting from the high quality of the Deep-SLAM imaging results, we also successfully implemented image-based neuron tracing and cell counting with high accuracy across a large scale. Since exploring the distribution of neurons across brain regions is key to understanding complex biological questions about brain function, our approach could be a valuable tool for readily enabling image-based segmentation and tracing of neurons at the system level and in three dimensions. Beyond this demonstration on brain imaging, we believe this simple and cost-effective method could be equally efficient for the large-scale histological imaging of other whole organs.

Funding

Innovation Fund of WNLO; National Key Research and Development Program of China (2017YFA0700501); National Natural Science Foundation of China (21874052, 61860206009).

Acknowledgements

The authors acknowledge the selfless sharing of the CARE source code [29]. We thank Hao Zhang and Le Xiao for their assistance with the experimental design and code implementation.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305(5686), 1007–1009 (2004). [CrossRef]  

2. P. J. Keller, A. D. Schmidt, J. Wittbrodt, and E. H. K. Stelzer, “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322(5904), 1065–1069 (2008). [CrossRef]  

3. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011). [CrossRef]  

4. E. G. Reynaud, U. Kržič, K. Greger, and E. H. Stelzer, “Light sheet-based fluorescence microscopy: more dimensions, more photons, and less photodamage,” HFSP J. 2(5), 266–275 (2008). [CrossRef]  

5. M. Mickoleit, B. Schmid, M. Weber, F. O. Fahrbach, S. Hombach, S. Reischauer, and J. Huisken, “High-resolution reconstruction of the beating zebrafish heart,” Nat. Methods 11(9), 919–922 (2014). [CrossRef]  

6. Y. Wu, P. Wawrzusin, J. Senseney, R. S. Fischer, R. Christensen, A. Santella, A. G. York, P. W. Winter, C. M. Waterman, and Z. Bao, “Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy,” Nat. Biotechnol. 31(11), 1032–1038 (2013). [CrossRef]  

7. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013). [CrossRef]  

8. T. Breuninger, K. Greger, and E. H. K. Stelzer, “Lateral modulation boosts image quality in single plane illumination fluorescence microscopy,” Opt. Lett. 32(13), 1938–1940 (2007). [CrossRef]  

9. B. C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, J. A. Hammer, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Böhme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, “Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution,” Science 346(6208), 1257998 (2014). [CrossRef]  

10. C. J. Engelbrecht, K. Greger, E. G. Reynaud, U. Kržic, J. Colombelli, and E. H. K. Stelzer, “Three-dimensional laser microsurgery in light-sheet based microscopy (SPIM),” Opt. Express 15(10), 6420–6430 (2007). [CrossRef]  

11. C. J. Engelbrecht and E. H. Stelzer, “Resolution enhancement in a light-sheet-based microscope (SPIM),” Opt. Lett. 31(10), 1477–1479 (2006). [CrossRef]  

12. J. Huisken and D. Y. R. Stainier, “Selective plane illumination microscopy techniques in developmental biology,” Development 136(12), 1963–1975 (2009). [CrossRef]  

13. B. Schmid, G. Shah, N. Scherf, M. Weber, K. Thierbach, C. P. Campos, I. Roeder, P. Aanstad, and J. Huisken, “High-speed panoramic light-sheet microscopy reveals global endodermal cell dynamics,” Nat. Commun. 4(1), 2207 (2013). [CrossRef]  

14. T. F. Holekamp, D. Turaga, and T. E. Holy, “Fast three-dimensional fluorescence imaging of activity in neural populations by objective-coupled planar illumination microscopy,” Neuron 57(5), 661–672 (2008). [CrossRef]  

15. J. Huisken and D. Y. R. Stainier, “Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM),” Opt. Lett. 32(17), 2608–2610 (2007). [CrossRef]  

16. J. G. Ritter, R. Veith, J. Siebrasse, and U. Kubitscheck, “High-contrast single-particle tracking by selective focal plane illumination microscopy,” Opt. Express 16(10), 7142–7152 (2008). [CrossRef]  

17. J. Swoger, P. Verveer, K. Greger, J. Huisken, and E. H. K. Stelzer, “Multi-view image fusion improves resolution in three-dimensional microscopy,” Opt. Express 15(13), 8029–8042 (2007). [CrossRef]  

18. D. Turaga and T. E. Holy, “Miniaturization and defocus correction for objective-coupled planar illumination microscopy,” Opt. Lett. 33(20), 2302–2304 (2008). [CrossRef]  

19. P. J. Verveer, J. Swoger, F. Pampaloni, K. Greger, M. Marcello, and E. H. K. Stelzer, “High-resolution three-dimensional imaging of large specimens with light sheet–based microscopy,” Nat. Methods 4(4), 311–313 (2007). [CrossRef]  

20. P. G. Pitrone, J. Schindelin, L. Stuyvenberg, S. Preibisch, M. Weber, K. W. Eliceiri, J. Huisken, and P. Tomancak, “OpenSPIM: an open-access light-sheet microscopy platform,” Nat. Methods 10(7), 598–599 (2013). [CrossRef]  

21. C. Chardès, P. Mélénec, V. Bertrand, and P.-F. Lenne, “Setting up a simple light sheet microscope for in toto imaging of C. elegans development,” J. Visualized Exp. 87, e51342 (2014). [CrossRef]

22. F. Zhao, Y. Yang, Y. Li, H. Jiang, X. Xie, T. Yu, X. Wang, Q. Liu, H. Zhang, and H. Jia, “Efficient and cost-effective 3D cellular imaging by sub-voxel-resolving light-sheet add-on microscopy,” J. Biophotonics 13(6), e201960243 (2020). [CrossRef]

23. P. Paiè, F. Bragheri, A. Bassi, and R. Osellame, “Selective plane illumination microscopy on a chip,” Lab Chip 16(9), 1556–1560 (2016). [CrossRef]  

24. J. Wu and R. K. Y. Chan, “A fast fluorescence imaging flow cytometer for phytoplankton analysis,” Opt. Express 21, 23921–23926 (2013). [CrossRef]  

25. T. C. Fadero, T. M. Gerbich, K. Rana, A. Suzuki, M. DiSalvo, K. N. Schaefer, J. K. Heppert, T. C. Boothby, B. Goldstein, and M. Peifer, “LITE microscopy: Tilted light-sheet excitation of model organisms offers high resolution and low photobleaching,” J. Cell Biol. 217(5), 1869–1882 (2018). [CrossRef]

26. P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, Z. Yu, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Sub-voxel light-sheet microscopy for high-resolution, high-throughput volumetric imaging of large biomedical specimens,” Adv. Photonics 1(01), 1 (2019). [CrossRef]  

27. J. Nie, S. Liu, T. Yu, Y. Li, J. Ping, P. Wan, F. Zhao, Y. Huang, W. Mei, S. Zeng, D. Zhu, and P. Fei, “Fast, 3D isotropic imaging of whole mouse brain using multiangle-resolved subvoxel SPIM,” Adv. Sci. 7, 1901891 (2019). [CrossRef]

28. M. Elad and Y. Hel-Or, “A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur,” IEEE Trans. on Image Process. 10(8), 1187–1193 (2001). [CrossRef]  

29. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

30. H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019). [CrossRef]  

31. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

32. L. Xiao, C. Fang, L. Zhu, Y. Wang, T. Yu, Y. Zhao, D. Zhu, and P. Fei, “Deep learning-enabled efficient image restoration for 3D microscopy of turbid biological specimens,” Opt. Express 28(20), 30234–30247 (2020). [CrossRef]  

33. H. Zhang, Y. Zhao, C. Fang, M. Zhang, Y. Zhang, and P. Fei, “Dual-stage-processed 3D network super-resolution for volumetric fluorescence microscopy far beyond throughput limit,” bioRxiv, https://doi.org/10.1101/435040.

34. D. Jing, S. Zhang, W. Luo, X. Gao, Y. Men, C. Ma, X. Liu, Y. Yi, A. Bugde, and B. O Zhou, “Tissue clearing of both hard and soft tissue organs with the PEGASOS method,” Cell Res. 28(8), 803–818 (2018). [CrossRef]  



