
Deep learning based image quality improvement of a light-field microscope integrated with an epi-fluorescence microscope


Abstract

Light-field three-dimensional (3D) fluorescence microscopes can acquire 3D fluorescence images in a single shot, and subsequent numerical reconstruction enables cross-sectional imaging at an arbitrary depth. The typical configuration, which uses a lens array and a single image sensor, suffers from a trade-off between depth-information acquisition and the spatial resolution of each cross-sectional image: the spatial resolution of the reconstructed image degrades as the amount of depth information increases. In this paper, we use U-net as a deep-learning model to improve the quality of the reconstructed images. We constructed an optical system that integrates a light-field microscope and an epi-fluorescence microscope, which acquire the light-field data and high-resolution two-dimensional images, respectively. The high-resolution images from the epi-fluorescence microscope are used as ground-truth images for the training dataset for deep learning. Experimental results using fluorescent beads with a size of 10 µm and cultured tobacco cells showed significant improvement of the reconstructed images. Furthermore, time-lapse measurements were demonstrated in tobacco cells to observe the cell division process.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In life science and biology, we can create a population of transgenic individuals or cells and observe the cells of interest within them non-destructively and in their living state [1–3]. For example, in calcium imaging, the rise and decay times of calcium-ion indicators are tens of milliseconds (ms) and hundreds of ms, respectively. To capture the temporal behavior of cell activity in calcium imaging, a recording speed of tens of ms is therefore required. The development of high-speed 3D fluorescence imaging techniques is essential to elucidate biological activities that occur in 3D space, such as interactions between many neurons or multiple brain regions. Recently, research on incoherent digital holography [4–8], the transport of intensity equation [9,10], light fields [11–15], and light sheets [16,17] has attracted much attention for 3D imaging. To realize high-speed 3D imaging, it is necessary to record the whole 3D information in a single acquisition.

The light field is a four-dimensional function that describes the radiance along a light ray as a function of position and direction in space [11]. In a typical light-field microscope, a microlens array is inserted into the imaging plane of a conventional optical imaging system. Such a microscope can record both the positional and the angular information of the rays through each lens, thus enabling single-shot 3D measurement. The perspective and observation plane can be chosen arbitrarily on a computer by using geometric optics to retrace the rays of light. Compared to other 3D fluorescence imaging techniques, light-field microscopy has the advantage of non-invasive and fast 3D measurement, but there is a trade-off between depth information and the resolution of the reconstructed image. In other words, to obtain a high-resolution image, the depth information must be reduced because the angular information of the rays is reduced. Here, we note that light-field and integral imaging are synonymous [18–21]. Several fruitful studies have aimed to improve the reconstructed image quality in integral imaging: both physical techniques and machine learning, including deep learning, have been proposed to improve image quality and recognition capability [22–35].

In this paper, we present the improvement of the quality of reconstructed images acquired by light-field microscopy by using U-net, a deep-learning model. To obtain the training dataset, we built an optical setup that simultaneously captures the light-field image and the corresponding high-resolution fluorescence image from an epi-fluorescence microscope, which is used as the ground-truth image for deep learning. After the learning process is finished, the epi-fluorescence microscope is no longer used. The feasibility of the proposed method for improving the reconstructed image quality is demonstrated experimentally with fluorescent beads and living plant cells. In the first experiment, the improvement of the reconstructed images of fluorescent beads is presented. Then, we performed time-lapse imaging of tobacco suspension culture cells expressing mEGFP-β-tubulin and show the resulting image quality improvement.

2. Light-field imaging with training data acquisition for deep learning

A light-field microscope can simultaneously record the positional and angular information of light rays by placing a microlens array (MLA) in front of an image sensor, as shown in Fig. 1. In Fig. 1, there are three point sources (green, blue, and red) on the object plane. From each of these point sources, rays propagate at three different angles. The three rays from each point source are detected independently by the image sensor after passing through a lens and the MLA. The image sensor is therefore placed in the focal plane of the MLA, which corresponds to the spatial-frequency plane of the object. The image recorded behind each lens of the MLA is called an elemental image. Each elemental image contains the angular information of the rays from the corresponding part of the object. The size of an elemental image equals the number of image-sensor pixels covered by a single lens of the MLA, and the angular information of the rays is what provides the depth information of a 3D object. Let N × N be the number of pixels in the image sensor, M × M the number of microlenses in the MLA, and L × L the number of pixels in an elemental image; then

$$N = M \times L. \tag{1}$$

M × M is the number of pixels in the reconstructed image, and L × L is the number of ray angles. From Eq. (1), there is a trade-off between the number of pixels in the reconstructed image and the number of ray angles. If the number of pixels in the reconstructed image is increased, the number of ray angles decreases, resulting in less depth information. Conversely, increasing the number of ray angles increases the depth information but decreases the number of pixels in the reconstructed image. This trade-off is a critical issue because both depth information and spatial resolution are required for 3D reconstruction.
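As a minimal numerical sketch of the relation in Eq. (1), the following Python snippet rearranges a raw sensor frame into an (M, M, L, L) stack of elemental images; the array sizes and the helper name are illustrative and not taken from the authors' processing code.

```python
# Minimal sketch of Eq. (1): a square N x N sensor frame is split into
# M x M elemental images of L x L pixels each. Values below are illustrative.
import numpy as np

def split_into_elemental_images(raw, lens_pitch_um, pixel_pitch_um):
    """Rearrange an N x N sensor frame into an (M, M, L, L) stack."""
    L = int(round(lens_pitch_um / pixel_pitch_um))  # pixels behind one microlens
    M = raw.shape[0] // L                           # microlenses fully covered
    raw = raw[:M * L, :M * L]                       # crop partially covered borders
    # axes 0,1 index the microlens (spatial sample); axes 2,3 index the ray angle
    return raw.reshape(M, L, M, L).swapaxes(1, 2)

# With the 150 um lens pitch and 4.54 um pixel pitch quoted later in Section 2,
# L is about 33, so a 1460-pixel sensor side yields only M = 44 spatial samples.
elemental = split_into_elemental_images(np.zeros((1460, 1460)), 150.0, 4.54)
print(elemental.shape)  # (44, 44, 33, 33)
```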

Fig. 1. Typical optical setup of a light-field microscope. In this figure, a single lens is used to form the imaging optics.

Next, we briefly explain the principle of cross-sectional image reconstruction in light-field microscopy on a computer, considering the case of reproducing a point object in a plane, as shown in Fig. 2. The light rays emitted from the point object located on the object plane pass through a lens and the MLA and are detected by the image sensor. By considering the geometrical path of each ray, we can determine the positions of the elemental images. Once the reconstruction distance from the lens is determined, ray tracing reveals where each ray arrives on the image sensor; based on this information, the reconstructed image at an arbitrary distance from the lens can be calculated. The reconstruction calculation in light-field imaging and integral imaging is described in detail in Refs. [26] and [30]. Figures 3(a) and 3(b) show an example of the elemental images and the corresponding reconstructed image obtained by light-field microscopy using fluorescent beads with a diameter of 10 µm. Here we can see that the quality of the reconstructed image of the fluorescent beads is degraded. The experimental conditions of the light-field microscope are described in the following subsection.
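As a hedged illustration of this reconstruction step, the sketch below applies a simple shift-and-sum refocusing to the elemental-image stack from the previous snippet; the per-view shift `slope` stands in for the geometric factor determined by the actual lens distances in Refs. [26,30], and integer `np.roll` shifts replace the sub-pixel interpolation normally used.

```python
# Shift-and-sum refocusing sketch: each perspective (sub-aperture) view is
# shifted in proportion to its angular index and a depth-dependent slope,
# then all views are averaged to form the M x M cross-sectional image.
import numpy as np

def refocus(elemental, slope):
    """elemental: (M, M, L, L) stack; slope: shift (in pixels) per unit of
    angular index, which selects the reconstructed depth plane."""
    M, _, L, _ = elemental.shape
    c = (L - 1) / 2.0
    out = np.zeros((M, M), dtype=float)
    for u in range(L):
        for v in range(L):
            view = elemental[:, :, u, v]               # one perspective view
            dy = int(round(slope * (u - c)))           # depth-dependent shift
            dx = int(round(slope * (v - c)))
            out += np.roll(np.roll(view, dy, axis=0), dx, axis=1)
    return out / (L * L)                               # average over ray angles

# refocused = refocus(elemental, slope=0.5)            # one depth plane
```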

Fig. 2. Numerical reconstruction process from the elemental images.

Fig. 3. (a) Example of elemental images obtained by light-field microscope and (b) its reconstructed image.

Deep learning requires the acquisition of input images and ideal images as the training dataset, and the preparation of the ideal images is the key. Therefore, in this paper, we built an optical system that can simultaneously acquire elemental images of the target object with a light-field microscope and ideal high-resolution images with an epi-fluorescence microscope. The experimental system is shown in Fig. 4. Light emitted from an LED (Thorlabs M470L4) with a central wavelength of 470 nm is reflected by a dichroic mirror and passes through a microscope objective lens (Olympus ULWD MSPlan 20, NA 0.4) to illuminate the object with collimated light. The objects under study are fluorescent beads (diameter ∼10 µm, central emission wavelength ∼570 nm) and plant cells into which a fluorescent protein (mEGFP-β-tubulin, central emission wavelength ∼535 nm) has been introduced. The fluorescence emitted from the object passes through the microscope objective lens, is converted to a nearly collimated beam, passes through the dichroic mirror, and is magnified by a tube lens (TL). The fluorescence is then split into two beams by the half mirror (HM). The optical system on the left in Fig. 4(a) is the light-field microscope (a photograph is shown in Fig. 4(b)). A magnified image is formed on the MLA (Thorlabs MLA150-7AR, focal length 5.6 mm, lens pitch 150 µm) by a 4f optical system (L1–L2). The image at the focal plane of the MLA is then relayed onto the image sensor by another 4f optical system (L3–L4). The image sensor (CoolSNAP KINO, 1940 × 1460 pixels, pixel pitch 4.54 µm) records the image. The focal lengths of TL, L1, L2, L3, and L4 are 200 mm, 150 mm, 150 mm, 100 mm, and 100 mm, respectively.

The optical system on the upper right is an epi-fluorescence microscope, in which the magnified image is observed by the image sensor through a 4f optical system (L5–L6). The focal lengths of L5 and L6 are 150 mm. The image sensor (Hamamatsu ORCA-Flash4.0) has 2048 × 2048 pixels with a pixel pitch of 6.5 µm. A photograph of the epi-fluorescence microscope is shown in Fig. 4(c). This epi-fluorescence microscope captures the high-resolution images used as the ideal images for the deep-learning training process. To record high-resolution images at an arbitrary observation depth under the microscope objective lens without changing the image magnification, we placed a variable focal length lens (VFL; Optotune EL-16-40-TC, diopter range of 20 m⁻¹) in the focal plane of L5. The difference in pixel pitch and the misalignment between the two image sensors were corrected using an affine transformation. For the U-net dataset, we reduced the image size to 256 × 256 pixels.
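A minimal sketch of this registration and resizing step is shown below, assuming the affine transform is estimated from a few manually picked corresponding points with OpenCV; the point coordinates and file names are placeholders, not the authors' calibration data.

```python
# Illustrative registration of the epi-fluorescence frame onto the light-field
# reconstruction grid using an affine transform, followed by resizing both
# images to 256 x 256 for the U-net dataset. Points and file names are made up.
import cv2
import numpy as np

# Corresponding points in the epi-fluorescence image (src) and the light-field
# reconstruction (dst); at least three non-collinear pairs are required.
src_pts = np.float32([[512, 420], [1330, 455], [700, 1600]])
dst_pts = np.float32([[118, 101], [305, 110], [160, 372]])

A, _ = cv2.estimateAffine2D(src_pts, dst_pts)   # 2x3 matrix (scale, rotation, shift)

epi = cv2.imread("epi_fluorescence.tif", cv2.IMREAD_UNCHANGED)
lf = cv2.imread("lightfield_reconstruction.tif", cv2.IMREAD_UNCHANGED)

# Warp the ground-truth frame onto the light-field grid, then downsample both.
epi_aligned = cv2.warpAffine(epi, A, (lf.shape[1], lf.shape[0]))
lf_small = cv2.resize(lf, (256, 256), interpolation=cv2.INTER_AREA)
epi_small = cv2.resize(epi_aligned, (256, 256), interpolation=cv2.INTER_AREA)
```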

Fig. 4. Schematic of the light-field microscope with an epi-fluorescence microscope for obtaining the training dataset for deep learning. (a) optical configuration, (b) a photo of the light-field microscope, and (c) a photo of the epi-fluorescence microscope. L’s: lenses; M: mirror; MLA: microlens array; DM: dichroic mirror; MO: microscope objective lens; HM: half mirror; VFL: variable focal length lens.

In the following, we describe the image quality improvement method using deep learning with a three-layer U-net [36,37]. U-net is a powerful convolutional neural network widely used for image-to-image conversion and image segmentation. We evaluated the number of layers to optimize the structure and found that a three-layer model, shown in Fig. 5, is appropriate for our datasets. The fluorescent beads and the intracellular structures used in this paper are about 10 µm in size. The kernel size for the convolutions is 11 × 11. Table 1 lists the parameters used in the U-net.
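A minimal PyTorch sketch of such a three-level U-net with 11 × 11 kernels is given below; the channel counts, the pooling and upsampling choices, and the single-channel input and output are assumptions made for illustration, while the actual hyperparameters are those listed in Table 1.

```python
# Sketch of a three-level U-net with 11 x 11 convolution kernels.
# Channel counts and up/downsampling operators are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, k=11):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=k // 2), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, k, padding=k // 2), nn.ReLU(inplace=True),
    )

class UNet3(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = conv_block(1, ch)            # level 1 (256 x 256)
        self.enc2 = conv_block(ch, 2 * ch)       # level 2 (128 x 128)
        self.bott = conv_block(2 * ch, 4 * ch)   # level 3 / bottleneck (64 x 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(4 * ch, 2 * ch, 2, stride=2)
        self.dec2 = conv_block(4 * ch, 2 * ch)
        self.up1 = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = conv_block(2 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# y = UNet3()(torch.rand(1, 1, 256, 256))   # output shape: (1, 1, 256, 256)
```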

Fig. 5. Three-layer U-net model.

Table 1. Parameters used in U-net

3. Experimental results

First, we show the results of the experiment using fluorescent beads. Nile red beads with an average diameter of 10.4 µm were used. A set of 45 images was used as training data and 5 images were used as test data. Figure 6 shows the SSIM (structural similarity index measure) loss as a function of the number of iterations in the learning process. The SSIM loss function, used for evaluation, is defined as

$$\mathrm{SSIM}_{\mathrm{loss}} = 1 - \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \tag{2}$$

In Eq. (2), x and y denote the two images being compared; µx and µy are their mean fluorescence intensities, σx and σy are their standard deviations, σxy is their covariance, and C1 and C2 are constants for numerical stabilization. C1 and C2 are set to (0.01L)², where L is the maximum value of the corresponding images. SSIMloss becomes smaller as the two images become more similar. After 10,000 training iterations, the SSIM loss was 9.48 × 10−2 and 1.63 × 10−1 for the training and validation data, respectively. Figure 7 shows the output images for the test data: (a) the ideal image obtained with the epi-fluorescence microscope, (b) the reconstructed image acquired by the light-field microscope, and (c) the output image from the U-net. We can see that the image quality is dramatically improved, although weak noise remains in the peripheral region of the beads. This is because the training data contain various patterns, such as a single bead or two or more connected beads, so noise-like afterimages of beads appear in the periphery. However, since this noise is small enough, the constructed U-net can be used to improve the quality of the light-field images.
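For reference, a small sketch of this loss, computed globally over an image pair exactly as in Eq. (2), is given below; windowed SSIM implementations instead average this quantity over local patches, and the function name is ours.

```python
# Global SSIM loss following Eq. (2), with C1 = C2 = (0.01 * L)^2 as in the text.
import numpy as np

def ssim_loss(x, y):
    L = max(x.max(), y.max())                  # maximum value of the corresponding images
    C1 = C2 = (0.01 * L) ** 2                  # stabilization constants
    mu_x, mu_y = x.mean(), y.mean()
    sig_x, sig_y = x.std(), y.std()
    sig_xy = ((x - mu_x) * (y - mu_y)).mean()  # covariance of the two images
    ssim = ((2 * mu_x * mu_y + C1) * (2 * sig_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (sig_x ** 2 + sig_y ** 2 + C2))
    return 1.0 - ssim                          # smaller when the images are more similar
```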

Fig. 6. SSIM loss as a function of iteration. Orange and blue curves indicate SSIM loss for learning data and validation data, respectively.

Fig. 7. Validation test by using fluorescent beads: (a) ideal output images obtained by the epi-fluorescence microscope, (b) reconstructed images obtained by the light-field microscope, and (c) output images of U-net. The scale bar is 10 µm.

Next, we show the results using plant cells. Tobacco suspension culture cells expressing mEGFP-β-tubulin were used as the object under test. Examples of the training dataset used in this experiment are shown in Fig. 8, where Fig. 8(a) shows the reconstructed images captured by the light-field microscope and Fig. 8(b) shows the corresponding images captured by the epi-fluorescence microscope, which are used as the ideal images in the U-net. Figure 9 shows the SSIM loss for the training and validation datasets. We captured the cell division process by recording images at 10-minute intervals, as shown in Visualization 1 and Fig. 10. In the region marked by the red circles, the nuclear changes caused by cell division are reconstructed well. Figures 10(a), (b), and (c) show the ideal images captured by the epi-fluorescence microscope, the processed images from the U-net, and the reconstructed images from the light-field microscope, respectively. It can be seen that small regions of relatively intense fluorescence are improved, especially in the mitotic apparatus. This is because the kernel size of 11 × 11 is appropriate for fluorescent objects with a diameter of about 10 µm.

Fig. 8. Fluorescence images of plant cells for the training set. (a) reconstructed images as input images and (b) ideal images as output images. The scale bar is 20 µm.

Fig. 9. SSIM loss as a function of the number of iterations in tobacco suspension culture cells expressing mEGFP-β-tubulin. Orange and blue curves indicate SSIM loss for training data and validation data, respectively.

Fig. 10. Time-lapse imaging of tobacco suspension culture cells expressing mEGFP-β-tubulin; (a) ideal images by the epi-fluorescence microscope, (b) output images of U-net, (c) reconstructed images by light-field microscope. The scale bar is 20 µm.

4. Conclusion

We have presented the improvement of reconstructed fluorescence images in light-field microscopy by using U-net as the deep-learning model. To obtain the training dataset, we built an integrated optical system consisting of the light-field microscope and the epi-fluorescence microscope, which simultaneously captures a light-field image and a high-resolution image used as the ideal image. In the epi-fluorescence microscope, which records the ideal high-resolution images, a variable focal length lens is used to change the observation depth plane while keeping the image size constant. The experimental results using fluorescent beads and cultured tobacco cells showed significant improvement for fluorescent beads 10 µm in diameter and for cell nuclei. In tobacco suspension culture cells, we were able to improve the observation of the cell division process. In future work, the structure of the U-net should be optimized to accommodate structures of various sizes other than cell nuclei, since cells contain structures at various scales.

Funding

Japan Society for the Promotion of Science (20H05886).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D.J. Stephens and V.J. Allan, “Light microscopy techniques for live cell imaging,” Science 300(5616), 82–86 (2003). [CrossRef]  

2. K. Ohki, S. Chung, Y.H. Ch’ng, P. Kara, and R.C. Reid, “Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex,” Nature 433(7026), 597–603 (2005). [CrossRef]  

3. K. Ota, Y. Oisi, T. Suzuki, et al., “Fast, cell-resolution, contiguous-wide two-photon imaging to reveal functional network architectures across multi-modal cortical areas,” Neuron 109(11), 1810–1824.e9 (2021). [CrossRef]  

4. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photonics 2(3), 190–195 (2008). [CrossRef]  

5. X. Quan, O. Matoba, and Y. Awatsuji, “Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings,” Opt. Lett. 42(3), 383 (2017). [CrossRef]  

6. T. Tahara, X. Quan, R. Otani, Y. Takaki, and O. Matoba, “Digital holography and its multidimensional imaging applications: a review,” Microscopy 67(2), 55–67 (2018). [CrossRef]  

7. M. Kumar, X. Quan, Y. Awatsuji, C. Cheng, M. Hasebe, Y. Tamada, and O. Matoba, “Common-path multimodal three-dimensional fluorescence and phase imaging system,” J. Biomed. Opt. 25(03), 032010 (2020). [CrossRef]  

8. X. Quan, M. Kumar, S. K. Rajput, Y. Awatsuji, Y. Tamada, and O. Matoba, “Multimodal microscopy: fast 3D acquisition of quantitative phase imaging and cross-sectional fluorescence imaging,” IEEE J. Select. Topics Quantum Electron. 27(4), 6800911 (2021). [CrossRef]  

9. S.K. Rajput, M. Kumar, X. Quan, M. Morita, T. Furuyashiki, Y. Awatsuji, E. Tajahuerce, and O. Matoba, “Three-dimensional fluorescence imaging using the transport of intensity equation,” J. Biomed. Opt. 25(03), 032004 (2020). [CrossRef]  

10. S. K. Rajput, O. Matoba, M. Kumar, X. Quan, Y. Awatsuji, Y. Tamada, and E. Tajahuerce, “Multi-physical parameter cross-sectional imaging of quantitative phase and fluorescence by integrated multimodal microscopy,” IEEE J. Select. Topics Quantum Electron. 27(4), 6801809 (2021). [CrossRef]  

11. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924 (2006). [CrossRef]  

12. M. Levoy, Z. Zhang, and I. McDowell, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009). [CrossRef]  

13. E. Sánchez-Ortiga, A. Llavador, G. Saavedra, J. García-Sucerquia, and M. Martínez-Corral, “Optical sectioning with a Wiener-like filter in Fourier integral imaging microscopy,” Appl. Phys. Lett. 113(21), 214101 (2018). [CrossRef]  

14. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7(1), 821 (1908). [CrossRef]  

15. J.-S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29(11), 1230 (2004). [CrossRef]  

16. B.-C. Chen, W.R. Legant, K. Wang, et al., “Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution,” Science 346(6208), 1257998 (2014). [CrossRef]  

17. P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, and J. Wittbrodt, “Fast, high-contrast imaging of animal development with scanned light sheet–based structured-illumination microscopy,” Nat. Methods 7(8), 637–642 (2010). [CrossRef]  

18. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598 (1997). [CrossRef]  

19. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52(4), 546 (2013). [CrossRef]  

20. A. Stern and B. Javidi, “3D image sensing and reconstruction with time-division multiplexed computational integral imaging (CII),” Appl. Opt. 42(35), 7036 (2003). [CrossRef]  

21. B. Javidi, A. Carnicer, J. Arai, T. Fujii, H. Hua, H. Liao, M. Martínez-Corral, F. Pla, A. Stern, L. Waller, Q.-H. Wang, G. Wetzstein, M. Yamaguchi, and H. Yamamoto, “Roadmap on 3D integral imaging: sensing, processing, and display,” Opt. Express 28(22), 32266 (2020). [CrossRef]  

22. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324 (2002). [CrossRef]  

23. J.-S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27(13), 1144 (2002). [CrossRef]  

24. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Appl. Opt. 43(31), 5806 (2004). [CrossRef]  

25. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24(18), 20792 (2016). [CrossRef]  

26. M. Martinez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: A tutorial on integral imaging, Lightfield, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512 (2018). [CrossRef]  

27. A. Ivan, Williem, and I. K. Park, “Joint light field spatial and angular super-resolution from a single image,” IEEE Access 8, 112562 (2020). [CrossRef]  

28. R. A. Farrugia, C. Galea, and C. Guillemot, “Super resolution of light field images using linear subspace projection of patch-volumes,” IEEE J. Sel. Top. Signal Process. 11(7), 1058–1071 (2017). [CrossRef]  

29. M. S. K. Gul and B.K. Gunturk, “Spatial and angular resolution enhancement of light fields using convolutional neural networks,” IEEE Trans. on Image Process. 27(5), 2146–2159 (2018). [CrossRef]  

30. K.-C. Kwon, K. H. Kwon, M.-U. Erdenebat, Y.-L. Piao, Y.-T. Lim, M.Y. Kim, and N. Kim, “Resolution-enhancement for an integral imaging microscopy using deep learning,” IEEE Photonics J. 11(1), 6900512 (2019). [CrossRef]  

31. R.A. Farrugia and C. Guillemot, “Light field super-resolution using a low-rank prior and deep convolutional neural networks,” IEEE Trans. Pattern Anal. Mach. Intell. 42, 1162 (2020). [CrossRef]  

32. M. S. Alam, K.-C. Kwon, M.-U. Erdenebat, M. Y. Abbass, M. A. Alam, and N. Kim, “Super-resolution enhancement method based on generative adversarial network for integral imaging microscopy,” Sensors 21(6), 2164 (2021). [CrossRef]  

33. J. Wu, Y. Guo, C. Deng, A. Zhang, H. Qiao, Z. Lu, J. Xie, L. Fang, and Q. Dai, “An integrated imaging sensor for aberration-corrected 3D photography,” Nature 612(7938), 62–71 (2022). [CrossRef]  

34. M. Guo, J. Hou, J. Jin, J. Chen, and L. -P. Chau, “Deep spatial-angular regularization for light field imaging, denoising, and super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. 44, 6094 (2022). [CrossRef]  

35. G. Krishnan, R. Joshi, T. O’Connor, F. Pla, and B. Javidi, “Human gesture recognition under degraded environments using 3D-integral imaging and deep learning,” Opt. Express 28(13), 19711 (2020). [CrossRef]  

36. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science 9351, 234–241 (2015).

37. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/

Supplementary Material (1)

Visualization 1: Time-lapse imaging of tobacco suspension culture cells expressing mEGFP-β-tubulin.
