Design of a high-resolution light field miniscope for volumetric imaging in scattering tissue

Open Access

Abstract

Integrating light field microscopy techniques with existing miniscope architectures has allowed for volumetric imaging of targeted brain regions in freely moving animals. However, the current design of light field miniscopes is limited by non-uniform resolution and long imaging path length. In an effort to overcome these limitations, this paper proposes an optimized Galilean-mode light field miniscope (Gali-MiniLFM), which achieves a more consistent resolution and a significantly shorter imaging path than its conventional counterparts. In addition, this paper provides a novel framework that incorporates the anticipated aberrations of the proposed Gali-MiniLFM into the point spread function (PSF) modeling. This more accurate PSF model can then be used in 3D reconstruction algorithms to further improve the resolution of the platform. Volumetric imaging in the brain necessitates the consideration of the effects of scattering. We conduct Monte Carlo simulations to demonstrate the robustness of the proposed Gali-MiniLFM for volumetric imaging in scattering tissue.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Miniaturized head-mounted fluorescence microscopes, i.e. miniscopes [1], are an emerging technology for visualizing neural activity within targeted brain regions, since they enable the interrogation of in vivo, fluorescently labeled neurons in freely behaving animals. Major allures of miniscope technology are its open-source nature and low cost [1], which allow for a wide degree of customizability in design as well as the ability to easily monitor several animals in parallel. As a result, major efforts incorporate off-the-shelf components [2,3] and 3D printing [4] to encourage rapid prototyping.

Most miniscopes utilize the one-photon wide-field epifluorescence geometry, in which the critical component is an easily miniaturizable GRadient INdex (GRIN) objective lens used to satisfy the size and weight restrictions implicit in animal experiments (Fig. 1(a)). The effectiveness of this design was originally demonstrated in the pioneering work done by the Schnitzer laboratory [2] and the open-source UCLA Miniscope [5]. The FinchScope [4] further improved the design by incorporating a 3D printed body for improved experimental flexibility, and the wireless miniScope [6,7] removed the influence of tethering on active animals. While these initial projects improved the functionality of the epifluorescent miniscope, other designs chose to maximize the amount of data collected during experiments. For example, the NINscope [8] further miniaturized the mechanical design to allow the simultaneous recording of two brain regions using two scopes on a single mouse. The cScope [9] instead employed a bulkier and more optically complex design to provide a mm-scale Field of View (FOV) for widefield imaging of the rat neocortex. The multi-contrast miniscope [10] allowed multiple imaging modalities, including fluorescence, intrinsic optical signals, and laser speckle contrast, to be captured with a single platform. As a result, these developments have permitted the study of memory encoding, functional imaging, spatial coding, neural synchronization, and complex motor tasks to a greater extent than before [4,5,10–13].


Fig. 1. Comparison of the optical imaging path of (a) Miniscope [5], (b) LFM [19] / MiniLFM [20], (c) HR-LFM [21], and (d) our Gali-MiniLFM. NIP: native image plane; NOP: native object plane; OL: objective lens; TL: tube lens; MLA: microlens array.


Similar to their standard microscopy counterparts, wide-field miniscopes are intrinsically susceptible to out-of-focus fluorescence and a limited imaging volume [14,15]. One possible solution to overcome these issues is two-photon microscopy, owing to its improved optical sectioning capability [16]. Recently, two-photon miniscopes have been developed to achieve volumetric imaging by combining low-dispersion, hollow-core fibers with MEMS scanning mirrors [17]. The axial focusing capability can be further enhanced with an electro-wetting lens [18]. Despite the higher spatial resolution and larger imaging volume, the utility of two-photon miniscopes is limited by their requirement for specialized scanning optics and an external laser source, as well as by their slow acquisition speed [1].

Another attractive approach is to augment existing wide-field miniscopes with computational imaging techniques to partially mitigate the aforementioned limitations. Closely related to the present work, single-shot volumetric reconstruction has been demonstrated by implementing light field microscopy (LFM) [19] on a miniscope platform, i.e. the miniaturized light field microscope (MiniLFM) [20]. This is done by adding a microlens array (MLA) to the existing miniscope platform, paired with a light field deconvolution algorithm [22,23]. As in the original LFM design [19], the setup extends the optical path by placing the MLA at the native image plane (NIP) of the original miniscope and displacing the sensor to the back focal plane of the MLA, as shown in Fig. 1(b). The utility of both the table-top LFM and the MiniLFM has been demonstrated for localizing neuronal activity in 3D without any moving parts [24,25]. However, this design fundamentally suffers from non-uniform lateral resolution across multiple depths [22]. In particular, the lateral resolution close to the native object plane (NOP), i.e. the focal plane of the objective lens, is significantly lower than that of the normal scope [22].

Several solutions have been proposed to maintain or enhance the resolution by trading off other imaging attributes. The first approach restricted the imaging depths to a single side (e.g. above or below) of the NOP [23,24], hence sacrificing the imaging volume. The second approach introduced an engineered phase mask at the pupil plane of the microscope [26], which further complicates the setup. The third approach combined LFM with structured illumination [27,28], which required a specialized illumination unit for light patterning. Recently, a new approach, termed high-resolution light field microscopy (HR-LFM) [21], and its variants [29,30] were demonstrated by simply displacing the MLA some distance behind the NIP, as illustrated in Fig. 1(c). The key insight is that the finite conjugate imaging geometry allows for improved lateral resolution by optimizing the trade-off between spatial and angular sampling in the measurement [31]. However, HR-LFM suffers from a much longer optical path than the original LFM, which is not desirable for miniaturization.

In this paper, we propose a new light field miniscope design, termed the Galilean-mode light field miniscope (Gali-MiniLFM), which provides high resolution across an extended volume with a reduced optical path compared to both the MiniLFM and standard miniscopes, as shown in Fig. 1(d). This, combined with the design's easy integration into a standard miniscope architecture, as illustrated in Fig. 2(a), makes it highly attractive for wide-field volumetric fluorescence imaging. We validate the effectiveness of this design while additionally addressing two confounding factors present in traditional wide-field miniscope imaging. First, the GRIN lens introduces more severe optical aberrations [32] than a standard objective lens, which can degrade the resolution. To compensate for this effect, we propose a new computational framework that models the aberrated PSF across the imaging volume and later uses it to perform a more accurate 3D reconstruction, as illustrated in Fig. 2(b). Second, high-resolution volumetric imaging inside brain tissue is complicated by light scattering. Therefore, we conduct a series of Monte Carlo simulations of the Gali-MiniLFM measurements under different scattering conditions and evaluate the robustness of our design by performing 3D reconstruction, as illustrated in Fig. 2(c).


Fig. 2. Overview of the design and evaluation procedure of the proposed high-resolution light field miniscope. (a) Optical design of Gali-MiniLFM; (b) We model the light field PSF in 3D and incorporate aberrations of the miniscope optics. (c) We evaluate the robustness of Gali-MiniLFM by modeling the measurements under different scattering conditions and performing volumetric reconstruction via deconvolution.


2. Optical design of Gali-MiniLFM

The miniscope setup in our study follows the open-source FinchScope [4]. The imaging path is illustrated in Fig. 1(a). The main optical components consist of a GRIN objective lens (Edmund Optics $\#$64-538: 1.8mm diameter, 0.25 pitch, effective focal length (EFL) 1.71mm) and an achromatic doublet serving as the tube lens (Edmund Optics $\#$45-207: 5mm diameter, EFL 15mm). Together they form an imaging system with magnification $M$ when focused at the NIP. Due to the need for miniaturization, the system is designed to only approximate an ideal 4F system, with its magnification varying across axial ($z$) planes. Although removing the tube lens and imaging with the GRIN lens alone could also meet the need for miniaturization, commercially available GRIN lenses cannot simultaneously gather the excitation light, collimate the emission beam, and provide sufficient sampling on the image plane.

Here, we show that the parameters of the MLA must be chosen alongside its placement to optimize the design of Gali-MiniLFM. First, we discuss the choice of the MLA. Similar to the standard LFM and HR-LFM, we design the MLA's numerical aperture (NA) NA$_{\mathrm {mla}}$ to approximately match the image-side NA of the miniscope, NA$_0/M$, where NA$_0$ is the NA of the GRIN lens (i.e. the object-side NA) [19,21]. This ensures that the sub-images formed by the MLA have minimal overlap while maximizing pixel sampling. Considering further a single point source at the NOP, as illustrated in Fig. 1(d), the converging beam after the tube lens covers $N$ microlenses on the MLA and results in $N$ sub-images, which quantifies the amount of angular information captured by the MLA. The number of sub-images $N$ is determined by

$$2|a|\frac{\textrm{NA}_0}{M} = Nd_\textrm{mla} ,$$
where $d_\textrm {mla}$ is the diameter of the microlens, and $a$ is the distance from the principal plane of the MLA to the NIP, which is bounded by the back focal length of the tube lens (i.e. the distance between the tube lens and the NIP). In general, LFM trades spatial sampling for angular information. To maximize the spatial sampling for higher lateral resolution, one should choose a small $N$ (hence a small amount of angular information) [21], which in turn sets the optimal choice of $d_\textrm {mla}$ for the MLA. Taking the miniscope parameters, the sampling trade-off, and the length of the imaging path into consideration, we initially chose an off-the-shelf MLA (RPC Photonics: MLA-S-250-f10) in our design.
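For concreteness, Eq. (1) can be evaluated directly once candidate parameters are chosen. The short sketch below does so for an assumed set of values; the object-side NA and the MLA-to-NIP distance are placeholders for illustration only and are not the optimized parameters of Table 1 (only the 250$\mu$m pitch of the chosen MLA-S-250-f10 is taken from the text).

```python
# Evaluate Eq. (1): 2|a| NA_0 / M = N d_mla, for an assumed set of parameters.
# These values are illustrative placeholders, NOT the optimized Table 1 design.
NA0 = 0.5        # assumed object-side NA of the GRIN objective
M = 15.0 / 1.71  # nominal magnification from the stated focal lengths (~8.8x)
a = -8.5e-3      # assumed MLA-to-NIP distance (m); negative since the MLA sits before the NIP
d_mla = 250e-6   # microlens pitch of the MLA-S-250-f10 (m)

N = 2 * abs(a) * NA0 / (M * d_mla)
print(f"Sub-images covered by the converging beam: N = {N:.1f}")
```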

Next, we optimize the locations of MLA and image sensor. The key idea of Gali-MiniLFM is to place the MLA at a proper distance such that the NIP approximately forms a virtual image on the sensor plane, as shown in Fig. 1(d). This means that $a$ and $b$ should simultaneously satisfy

$$\begin{aligned} 1/a+1/b>1/f_\textrm{mla} , \end{aligned}$$
$$\begin{aligned}\frac{1}{2}f_\textrm{mla} < b < f_\textrm{mla} , \end{aligned}$$
where the slight defocus implied by Eq. (2) is inspired by the results in [21], $f_{\mathrm {mla}}$ is the focal length of the microlens, and $b$ is the distance from the principal plane of MLA to the sensor. We also quantify the magnification from the MLA plane $M_\textrm {mla}$ by
$$M_\textrm{mla} \approx \frac{2b}{d_\textrm{mla}}\textrm{NA}_0 ,$$
where the approximation in Eq. (4) is due to the slight defocus introduced in $b$.
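The placement constraints of Eqs. (2)-(3) and the magnification of Eq. (4) can be checked numerically in the same way. In the sketch below, the focal length of the MLA-S-250-f10 is assumed to be 2.5mm (250$\mu$m pitch at f/10), and the values of $a$, $b$, and NA$_0$ are illustrative placeholders rather than the Table 1 design.

```python
# Check the Galilean placement conditions of Eqs. (2)-(3) and evaluate Eq. (4).
f_mla = 2.5e-3   # assumed microlens focal length (m)
d_mla = 250e-6   # microlens pitch (m)
NA0 = 0.5        # assumed object-side NA of the GRIN objective
a = -8.5e-3      # MLA placed before the NIP, so the distance to the NIP is negative
b = 1.8e-3       # assumed MLA-to-sensor distance (m)

satisfies_eq2 = (1 / a + 1 / b) > 1 / f_mla   # virtual-image condition, Eq. (2)
satisfies_eq3 = 0.5 * f_mla < b < f_mla       # slight-defocus condition, Eq. (3)
M_mla = 2 * b * NA0 / d_mla                   # MLA-plane magnification, Eq. (4)
print(satisfies_eq2, satisfies_eq3, f"M_mla ~ {M_mla:.1f}")
```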

Denote by $c$ the distance from the exit plane of the tube lens to the MLA plane. We next discuss the choice of $a$, $b$ and $c$, guided by the results in [22]. It was shown that the lateral resolution is worst when the object is located at the plane conjugate to the MLA, marked as the “virtual NOP” in Fig. 3(a). Additionally, the lateral resolution decreases when the imaging depth is too far from the virtual NOP, since this results in poor spatial sampling. The optimal location of the MLA provides a balance between the spatial and angular sampling within a target imaging depth range. In Gali-MiniLFM, by placing the MLA before the NIP, the virtual NOP is displaced to a deeper depth within the sample, as shown in Fig. 3(a). The imaging depth range is targeted to be a shallow region around the actual NOP of the miniscope.


Fig. 3. Lateral resolution analysis for the proposed Gali-MiniLFM. (a) By placing the MLA before the NIP, its conjugate object plane, marked as the virtual NOP, is far from the actual NOP of the miniscope. (b) The PSF images simulated from the proposed Gali-MiniLFM. (c) The deconvolved PSF images using the light field deconvolution algorithm. (d) The MTF varies across different depths. The optimal imaging depth range is determined by that having the largest bandwidth in the MTFs. The optimized MTF indicates that our design can provide better than 5$\mu$m resolution across an approximately 100$\mu$m depth range.


To find the optimal values of $a$ and $b$, we next analyze the system's effective modulation transfer function (MTF) at different imaging depths by adapting the procedure in [22]. First, we simulate a series of light field point spread function (PSF) images captured by our system from a point source placed at different depths, using the procedure detailed in Section 3. Some examples of simulated PSF images are shown in Fig. 3(b). Second, we perform light field deconvolution by adapting the algorithm in [22]; the result quantifies the best achievable lateral resolution at the corresponding depth. The deconvolved PSF images from the corresponding measurements in Fig. 3(b) are shown in Fig. 3(c). Finally, the MTF at each depth is computed as the 2D spectrum of the corresponding deconvolved PSF. Since each MTF is approximately circularly symmetric, we take a radial line from each MTF to capture the main characteristics at that depth, and then repeat the same procedure for all depths. This allows us to inspect the MTFs across different depths simultaneously, as illustrated in Fig. 3(d). As predicted by our analysis, the MTFs exhibit wide bandwidths within a specific depth range, indicating the optimal imaging range under this configuration. The final optimized parameters for Gali-MiniLFM, obtained by repeating these simulation procedures, are summarized in Table 1. The final optimized MTFs are shown in Fig. 3(d), demonstrating that we can achieve better than 5$\mu$m lateral resolution across an approximately 100$\mu$m imaging depth range.
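A minimal sketch of the MTF extraction step is given below: take the 2D Fourier magnitude of a deconvolved PSF, normalize it, and read out a radial line; stacking such lines over depth yields a map like Fig. 3(d). The Gaussian spot standing in for a deconvolved PSF and the 0.5$\mu$m object-space pixel size are assumptions for illustration only.

```python
import numpy as np

def mtf_radial_line(psf_img, px_size):
    """Return spatial frequencies and a radial MTF line from a 2D (deconvolved) PSF."""
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf_img)))
    mtf /= mtf.max()                          # normalize DC to 1
    cy, cx = np.array(mtf.shape) // 2
    line = mtf[cy, cx:]                       # radial cut along +fx (circular symmetry assumed)
    freqs = np.fft.rfftfreq(psf_img.shape[1], d=px_size)[: line.size]
    return freqs, line

# Illustrative stand-in for one deconvolved PSF: a 2-um-sigma Gaussian spot.
n, px = 256, 0.5e-6                           # 256 x 256 grid, 0.5-um object-space pixels
x = (np.arange(n) - n // 2) * px
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * (2e-6) ** 2))

f, mtf_line = mtf_radial_line(psf, px)
cutoff = f[np.argmax(mtf_line < 0.1)]         # crude bandwidth estimate (10% threshold)
print(f"Approximate cutoff frequency: {cutoff * 1e-6:.2f} cycles/um")
```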


Table 1. The optimized parameters of the proposed Gali-MiniLFM.

According to the above MTF analysis, the optimal imaging depth range is from -80$\mu$m to 20$\mu$m, as marked in Fig. 3(d). To further quantify the axial resolution within this optimal imaging depth range, we characterize the full-width at half maximum (FWHM) of the $x$-$z$ cross-sections of the deconvolved volumes. In Fig. 4, we show how the axial resolution changes across depths in our proposed Gali-MiniLFM design, along with the corresponding lateral resolution. Generally, the axial resolution improves as the imaging depth moves closer to the Gali-MiniLFM: $N$ becomes larger, i.e. the angular sampling becomes denser, and denser angular sampling leads to better depth discrimination. The average axial resolution within the optimal imaging depth range is about 21$\mu$m, which is worse than the lateral resolution. As shown in Fig. 4, while the lateral resolution is optimized within the depth range, the axial resolution can be further improved. In practice, the final optimal imaging depth range should be chosen by considering this trade-off between lateral and axial resolution.
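The FWHM measurement used here is straightforward to reproduce; the sketch below estimates it from a 1D axial intensity profile. The Gaussian test profile with a 21$\mu$m FWHM and 2$\mu$m axial sampling is an assumed example, not data from our simulations.

```python
import numpy as np

def fwhm(profile, dz):
    """Full width at half maximum of a 1D axial intensity profile sampled every dz."""
    prof = np.array(profile, dtype=float)     # copy so the caller's array is untouched
    prof -= prof.min()
    half = prof.max() / 2.0
    above = np.where(prof >= half)[0]
    return (above[-1] - above[0]) * dz        # coarse estimate, no sub-pixel interpolation

# Illustrative axial profile: a Gaussian with 21-um FWHM sampled every 2 um.
z = np.arange(-60, 62, 2.0)                   # microns
sigma = 21.0 / 2.355                          # FWHM = 2.355 * sigma for a Gaussian
profile = np.exp(-z**2 / (2 * sigma**2))
print(f"Estimated axial FWHM: {fwhm(profile, 2.0):.1f} um")
```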


Fig. 4. Axial resolution analysis for the proposed Gali-MiniLFM where the dashed green rectangle marks the optimal imaging depth range in Fig. 3(d).


In order to accurately characterize the lateral resolution of the proposed Gali-MiniLFM under more realistic imaging conditions, we further conduct simulations of two closely spaced point sources. The two point sources are placed at the same depth with a lateral separation ranging from 5$\mu$m to 35$\mu$m in the $x$ direction. After performing light field deconvolution, we define the lateral resolution by the Rayleigh criterion [33] applied to the deconvolved image. Based on this procedure, the lateral resolution at each depth is shown in Fig. 5(a). The lateral size of a deconvolved single point source is also presented for comparison. Some examples of two deconvolved point sources are shown in Fig. 5(b), and their intensity profiles along the central dashed line are shown in Fig. 5(c). As demonstrated in Fig. 5(a), the lateral resolution is worse than the width obtained in the single point source case, especially at deeper depths. The discrepancy is due to the fact that the deconvolution algorithm tends to reconstruct isolated objects. In summary, we conclude that our design achieves better than 7$\mu$m lateral resolution across an approximately 70$\mu$m imaging depth range.
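One way to operationalize the Rayleigh criterion on the deconvolved two-bead images is a dip test on a line profile, as sketched below. The 73.5% dip threshold is the classical Rayleigh value for incoherent Airy patterns, and the Gaussian test profile (7$\mu$m separation, 2$\mu$m sigma, 0.5$\mu$m sampling) is an illustrative assumption rather than our simulated data.

```python
import numpy as np
from scipy.signal import find_peaks

def two_points_resolved(profile):
    """Dip-based Rayleigh-type test on a 1D line profile through two deconvolved beads:
    resolved if two peaks exist and the valley between them falls below ~73.5% of the
    weaker peak (the classical Rayleigh dip for incoherent Airy patterns)."""
    peaks, _ = find_peaks(profile)
    if len(peaks) < 2:
        return False
    p1, p2 = peaks[np.argsort(profile[peaks])[-2:]]   # two strongest peaks
    lo, hi = sorted((p1, p2))
    dip = profile[lo:hi + 1].min()
    return dip < 0.735 * min(profile[p1], profile[p2])

# Illustrative profile: two Gaussian spots 7 um apart (sigma = 2 um), sampled every 0.5 um.
x = np.arange(-20.0, 20.0, 0.5)
profile = np.exp(-(x - 3.5)**2 / (2 * 2.0**2)) + np.exp(-(x + 3.5)**2 / (2 * 2.0**2))
print("Resolved:", two_points_resolved(profile))
```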


Fig. 5. Lateral resolution study for the proposed Gali-MiniLFM. (a) Comparison of the resolved size obtained from a single bead and from two closely spaced beads. (b) Examples of two deconvolved beads. (c) Corresponding line profiles along the colored dashed lines in (b).


Overall, our design allows Gali-MiniLFM to capture high-resolution spatial information while minimizing the overall length of the imaging path. The total length of the imaging path is shortened by approximately $|a|-b = 6.67$mm compared to the original miniscope platform. Compared to the alternative HR-LFM design, the reduction of the imaging path length is approximately $2|a|= 16.948$mm, as shown in Fig. 1, which is highly desirable for miniaturization.

3. PSF modeling

As a computational imaging technique, image formation in LFM proceeds in two steps [22]. First, one establishes a forward model that relates the object to the measurement. Next, after an image is captured, the light field deconvolution algorithm is applied to invert the model and recover the underlying object. A crucial step for high-quality reconstruction is to build an accurate forward model, since any model mismatch can result in reconstruction artifacts [22]. Previous works establish the forward model by assuming an ideal optical system without considering aberrations [20–22,24]. The PSFs are then calculated based on the scalar Debye theory [34], with the open source code shared in [24]. Here, we show that the use of a GRIN lens necessitates the modeling of aberrations for building an accurate model of Gali-MiniLFM.

Our computational framework models optical aberrations, diffraction effects from the MLA, and the shift variance of the system response in three steps. In the first step, we simulate the aberrated wavefront at the focal plane (FP) from an on-axis point object using Zemax. Here, the FP is defined as the plane where the point object forms the best focus and is optimized by Zemax, as illustrated in Fig. 6(a1). The same simulation is repeated by moving the point object axially and recording the aberrated wavefronts at the corresponding FPs. The distance between each FP and the exit plane of the achromat is denoted by $d_i$, and the aberrated wavefront is denoted by $U_i$, where $i$ indexes the axial location. This procedure allows us to model realistic aberrations from all the off-the-shelf optical components in the miniscope, including the GRIN lens and the achromat, since their optical parameters are directly importable into Zemax. Modeling the on-axis PSFs accounts for the primary aberration of the GRIN lens, namely spherical aberration [32], as shown in Fig. 6(a2). The spherical aberration is approximately invariant across lateral positions at the same depth but changes across depths. In Fig. 6(a3), we present the Seidel coefficients for off-axis point sources in the $x$ direction at the two extreme depth planes, i.e. -140$\mu$m and 60$\mu$m. The optical aberrations are quantified in units of the product of the Seidel coefficients and the wavelength used. As shown in Fig. 6(a3), the Seidel coefficient of the spherical aberration is much larger than those of the off-axis Seidel aberrations, e.g. coma and astigmatism, around the central FOV and at least 5 times larger at the peripheral FOV. Therefore, these off-axis Seidel aberration components are neglected for simplicity in the rest of the study. This significantly reduces the computational complexity of both modeling the shift-variant forward model and devising the 3D reconstruction algorithm. We extract the wavefronts at the FPs rather than directly at the fixed MLA plane also to improve computational efficiency: at the FPs, the Zemax simulation requires the fewest samples to fulfill the sampling requirement.
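In our pipeline the aberrated wavefronts $U_i$ are exported from Zemax. As a stand-in for readers without access to Zemax, the sketch below constructs a simplified aberrated field dominated by primary spherical aberration on a circular aperture; the aperture radius and the 1.5-wave aberration coefficient are illustrative assumptions, not values extracted from the FinchScope optics.

```python
import numpy as np

# Stand-in for a Zemax wavefront export: a circular aperture carrying primary spherical
# aberration (the term found to dominate for the GRIN lens). Aperture radius and the
# 1.5-wave coefficient are illustrative assumptions.
n = 512
lam = 532e-9                       # fluorescence emission wavelength (m)
R = 1.0e-3                         # assumed aperture radius (m)
x = np.linspace(-1.2 * R, 1.2 * R, n)
X, Y = np.meshgrid(x, x)
rho = np.sqrt(X**2 + Y**2) / R     # normalized radial coordinate
aperture = (rho <= 1.0).astype(float)

W040 = 1.5 * lam                   # assumed spherical-aberration coefficient (1.5 waves)
phase = 2 * np.pi / lam * W040 * rho**4
U_i = aperture * np.exp(1j * phase)    # simplified aberrated field to be propagated onward

rms_waves = np.sqrt(np.mean((phase[aperture > 0] / (2 * np.pi))**2))
print(f"RMS of the aberration term inside the aperture (piston included): {rms_waves:.2f} waves")
```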


Fig. 6. The proposed 3D PSF modeling framework incorporating both aberration and diffraction effects in Gali-MiniLFM. (a) Aberration extraction in Zemax. (a1) The position of the FP is optimized in Zemax for each point source. (a2) The Seidel diagrams for the aberrations included in the wavefront at the FP from a point source placed off-axis by 0.15mm in the $x$-direction. (a3) The Seidel coefficients for off-axis point sources in the $x$ direction at two extreme depths, i.e. -140$\mu$m and 60$\mu$m. (b) Diffraction modeling in Matlab. Each aberrated wave field is propagated through the MLA to calculate the light field PSF. (c) The computational framework assumes periodicity across the MLA (c1-c2) and a shift-variant PSF within each microlens (c3) in order to model the off-axis light field PSFs.


In the second step, each aberrated wavefront is imported into Matlab to simulate the light field PSFs by propagating it through the MLA while incorporating diffraction effects. First, we resample the aberrated wavefronts to account for non-uniform sampling. Next, $U_i$ is back-propagated to the MLA plane via Fresnel propagation to calculate the field immediately before the MLA plane, $\hat {U}_i$:

$$\hat{U_i}(\textbf{r}_2) = \frac{e^{jk(c-d_i)}}{j\lambda (c-d_i)} \iint_{-\infty}^{\infty}{U_i}(\textbf{r}_1) \exp\left\{\frac{jk}{2(c-d_i)} |\textbf{r}_1-\textbf{r}_2|^2\right\}d\textbf{r}^2_1,$$
where $k = 2\pi /{\lambda }$ is the wave-number, $\lambda$ is the wavelength, $\textbf {r}_1 = (x_1,y_1)$ and $\textbf {r}_2 = (x_2,y_2)$ denote the spatial coordinates of the FP and MLA planes, respectively. Next, the field, $\hat {U_i}$, is propagated through the MLA and further to the sensor plane. The incoherent PSF $h_i(\textbf {r}_{\textbf{3}})$ for the $i^\textrm {th}$ location of the overall system is
$$h_i(\textbf{r}_{\textbf{3}})= \left\vert\iint_{-\infty}^{\infty}\frac{e^{jkb}}{j\lambda b}\hat{U_i}(\textbf{r}_{\textbf{2}})P(\textbf{r}_{\textbf{2}}) \exp\left\{\frac{jk}{2b}|\textbf{r}_3-\textbf{r}_2|^2\right\}d\textbf{r}^2_2\right\vert^2,$$
where $\textbf {r}_3 = (x_3,y_3)$ is the spatial coordinates on the sensor plane and $P(\textbf {r}_2)$ is the transmission function of MLA, which is given by
$$P(\textbf{r}_2) = \mathop{\sum\sum}_{\textbf{s}\in \{S_1, S_2\}}p(\textbf{r}_2-\textbf{s}d_\textrm{mla}) \exp\left\{-\frac{jk}{2f_\textrm{mla}}|\textbf{r}_2-\textbf{s}d_\textrm{mla}|^2\right\},$$
where $\textbf {s} = (s_1,s_2)$ indexes the microlenses in the MLA along the $x$ and $y$ dimensions, $S_1$ and $S_2$ are the total numbers of microlenses in the two dimensions, and $p(\textbf {r}_2)$ is the pupil function of each microlens. This procedure is summarized in Fig. 6(b).
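Eqs. (5)-(7) map directly onto an FFT-based Fresnel propagation routine. The sketch below implements this chain (propagate to the MLA plane, apply the periodic thin-lens phase of Eq. (7), propagate by $b$ to the sensor, take the intensity) with a transfer-function Fresnel propagator. All numerical values in the usage example (grid, aperture, aberration, distances) are illustrative assumptions rather than the Table 1 design, and the simple MLA model assumes a 100% fill factor (the pupil function $p$ is omitted).

```python
import numpy as np

def fresnel_propagate(u, dx, lam, z):
    """Fresnel propagation over a (possibly negative) distance z, transfer-function form."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi * z / lam) * np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def mla_phase(n, dx, d_mla, f_mla, lam):
    """Periodic thin-lens phase of Eq. (7), assuming a 100% fill factor (pupil p omitted)."""
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    xl = (X + d_mla / 2) % d_mla - d_mla / 2   # local coordinate within each lenslet
    yl = (Y + d_mla / 2) % d_mla - d_mla / 2
    return np.exp(-1j * np.pi / (lam * f_mla) * (xl**2 + yl**2))

def lightfield_psf(U_i, dx, lam, z_to_mla, b, d_mla, f_mla):
    """Eqs. (5)-(6): propagate U_i to the MLA, apply the MLA phase, propagate by b, square."""
    u_mla = fresnel_propagate(U_i, dx, lam, z_to_mla)   # Eq. (5), z_to_mla plays the role of c - d_i
    u_after = u_mla * mla_phase(U_i.shape[0], dx, d_mla, f_mla, lam)
    u_sensor = fresnel_propagate(u_after, dx, lam, b)   # Eq. (6)
    return np.abs(u_sensor)**2

# Illustrative usage with assumed parameters: a 0.5-mm-radius aperture with 1.5 waves of
# spherical aberration, back-propagated 1 mm to the MLA plane.
n, dx, lam = 1024, 2e-6, 532e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
rho = np.sqrt(X**2 + Y**2) / 0.5e-3
U_i = (rho <= 1.0) * np.exp(1j * 2 * np.pi * 1.5 * rho**4)
psf = lightfield_psf(U_i, dx, lam, z_to_mla=-1.0e-3, b=1.8e-3, d_mla=250e-6, f_mla=2.5e-3)
```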

The third step aims to establish the 3D shift-variant model of Gali-MiniLFM. Intuitively, one would need to repeat the above two steps for all point locations within the volume of interest. In practice, this incurs a prohibitive computational cost. We resolve this issue by approximating the PSFs as periodic due to the MLA. This approximation is valid as long as the off-axis aberrations from the miniscope optics are not severe, as justified previously. Building on this approximation, we devise an efficient algorithm to compute the shift-variant forward model, as summarized in Fig. 6(c). First, the field at the MLA plane from an off-axis point is obtained by laterally shifting the field from the on-axis point. The lateral shift $\triangle d_q$ for the $q^\textrm {th}$ light field PSF is determined by

$$\triangle d_q = q \frac{d_\textrm{mla}}{N_{n}},$$
where $q\in \left [\frac {-N_{n}}{2},\frac {N_{n}}{2}\right ]$, and $N_{n}$ is set by the number of pixels under each microlens and assumed to be even (without loss of generality). Next, the shifted field is propagated through the MLA and further to the sensor (Eq. (6)) to obtain the off-axis light field PSF.
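Eq. (8) amounts to a sub-pixel lateral shift of the on-axis MLA-plane field, which can be implemented without interpolation via a Fourier-domain phase ramp, as sketched below. The toy field, the pitch, and the assumption of $N_n = 12$ pixels per microlens are illustrative only.

```python
import numpy as np

def shift_mla_field(u_mla, dx, q, d_mla, N_n):
    """Eq. (8): approximate the MLA-plane field of the q-th off-axis point by shifting the
    on-axis field laterally by q*d_mla/N_n along x, using a Fourier-domain phase ramp
    (an exact sub-pixel shift for a band-limited field)."""
    shift = q * d_mla / N_n                        # physical shift (m)
    fx = np.fft.fftfreq(u_mla.shape[1], d=dx)
    ramp = np.exp(-1j * 2 * np.pi * fx * shift)    # this ramp maps u(x) to u(x - shift)
    return np.fft.ifft(np.fft.fft(u_mla, axis=1) * ramp, axis=1)

# Minimal usage on a toy Gaussian field; N_n = 12 pixels per microlens is assumed.
n, dx = 512, 2e-6
x = (np.arange(n) - n // 2) * dx
u = np.exp(-(x[None, :]**2 + x[:, None]**2) / (2 * (50e-6)**2)).astype(complex)
u_q = shift_mla_field(u, dx, q=3, d_mla=250e-6, N_n=12)
```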

4. Numerical simulations and analysis

4.1 Validation of the PSF modeling method

In order to verify the accuracy of our forward model, we first simulate the PSF directly using the ray tracing model in Zemax. The layout of the simulation is shown in Fig. 7(a). The fluorescence emission wavelength is assumed to be 532nm. The refractive index of the background medium is 1.35, matching typical brain tissues [35]. The light field PSFs of Gali-MiniLFM are simulated at different axial positions, with some example PSFs shown in Fig. 7(b1). To demonstrate the need for modeling aberrations in order to generate accurate light field PSFs, we next simulate the PSFs using the wave-optic model assuming an ideal system by adapting the code published in [24], whose results are shown in Fig. 7(b2). By comparing the PSFs in Fig. 7(b1) and (b2), it is evident that the aberration results in distortions of the intensity distribution of the PSFs. Finally, we simulate the PSFs using our method, as shown in Fig. 7(b3). The general intensity distributions of the PSFs match well with the ray tracing results, with the additional benefit of including the diffraction effects from the MLA. Overall, this simulation validates the accuracy of our PSF modeling framework.


Fig. 7. Validation of our light field PSF simulation method. (a) The ray tracing layout used in Zemax. (b) The comparison between PSFs simulated using (b1) direct ray tracing, (b2) the wave model without considering aberration (adapted from the code shared in [24]), and (b3) our method. The PSFs simulated with our method closely match the ray tracing results, with diffraction effects included.


Compared to the ray tracing method, our method has the additional benefit of being highly computationally efficient. As described previously, a major challenge in modeling Gali-MiniLFM is shift variance. If implemented in Zemax, even with the same periodicity assumption across the MLA, one would still need to repeat the same ray tracing procedure $N^2_{n}$ times at each depth, which is very time consuming. Simulating the single on-axis PSF shown in Fig. 7(b1), which used 5 million traced rays, took about 3.2min, and the tracing time is the same for each off-axis PSF. In contrast, our method, which uses a highly efficient Fast Fourier Transform in Zemax followed by efficient wave-optical calculations in Matlab, took only 12s to obtain the on-axis PSF shown in Fig. 7(b3), and only an additional 2s to obtain all off-axis PSFs.

4.2 Validation of volumetric reconstruction capability

Next, we demonstrate the volumetric reconstruction capability of the proposed Gali-MiniLFM in simulation. The 3D object consists of 50 spherical particles with a diameter of 10$\mu$m randomly distributed within a $300\times 300\times 100{\mu }$m$^{3}$ volume, as shown in Fig. 8(a). We first simulated the light field measurement in Zemax, as shown in Fig. 8(b). The 3D reconstruction from the light field measurement involves solving a 3D shift-variant deconvolution problem. Here, we carry out the 3D reconstruction by adapting the algorithm developed in [22,24] and modifying the forward model matrix using our simulated PSFs. The impact of the aberration of the miniscope is highlighted in the results shown in Fig. 8(c) and (d). Without considering aberration, although volumetric reconstruction is still possible, the result suffers from degraded lateral resolution, inaccurate depth estimation, and axial elongation of each reconstructed particle, primarily due to the spherical aberration. By incorporating the spherical aberration in the model, the 3D reconstruction quality is substantially improved, as shown in Fig. 8(d). Still, the particles located at the outer regions of the FOV suffer from artifacts, most likely due to the unaccounted-for aberration terms. Although our framework allows modeling these aberrations in the forward model, an inversion algorithm for solving such a large-scale shift-variant problem while accounting for the aberrations across the entire FOV is currently lacking due to the computational cost. Efficient light field deconvolution algorithms to overcome this issue will be investigated in our future work.
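For reference, the core of the light field deconvolution adapted from [22,24] is a Richardson-Lucy-type multiplicative update. The sketch below shows a deliberately simplified version in which each depth is treated as laterally shift-invariant; the full algorithm used in this work instead applies the shift-variant PSFs described in Section 3, so this is an illustration of the update structure only, with toy PSFs and a toy two-bead object.

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def rl_lightfield_deconv(meas, psfs, n_iter=30, eps=1e-9):
    """Simplified Richardson-Lucy deconvolution of a light field measurement against a
    (Z, H, W) stack of depth-dependent PSFs; each depth is treated as shift-invariant."""
    psfs = psfs / psfs.sum(axis=(1, 2), keepdims=True)   # normalize so the backprojection of 1 is 1
    otfs = fft2(ifftshift(psfs, axes=(1, 2)), axes=(1, 2))
    Z, H, W = psfs.shape
    vol = np.full((Z, H, W), meas.mean() / Z)            # flat non-negative initialization
    for _ in range(n_iter):
        est = np.real(ifft2(fft2(vol, axes=(1, 2)) * otfs, axes=(1, 2))).sum(axis=0)
        ratio = meas / (est + eps)                       # data-fidelity ratio
        back = np.real(ifft2(fft2(ratio)[None] * np.conj(otfs), axes=(1, 2)))
        vol = np.maximum(vol * back, 0.0)                # multiplicative RL update, kept non-negative
    return vol

# Toy usage: two depths with Gaussian PSFs of different widths and one bead per depth.
H = W = 64
yy, xx = np.mgrid[:H, :W] - H // 2
psfs = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in (1.5, 3.0)])
truth = np.zeros((2, H, W)); truth[0, 20, 20] = 1.0; truth[1, 44, 40] = 1.0
otfs = fft2(ifftshift(psfs / psfs.sum(axis=(1, 2), keepdims=True), axes=(1, 2)), axes=(1, 2))
meas = np.real(ifft2(fft2(truth, axes=(1, 2)) * otfs, axes=(1, 2))).sum(axis=0)
rec = rl_lightfield_deconv(meas, psfs, n_iter=50)
```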


Fig. 8. Light field measurement for imaging volumetric objects without scattering and 3D reconstruction results. (a) Ground truth objects used in the simulation. The colorbar indicates the axial location of each sphere. (b) The simulated light field measurement from Zemax. The 3D reconstruction results using (c) the wave-optic model without considering aberration and (d) our method. Comparison of the projection images shows that the results without considering aberration suffer from worse lateral resolution, incorrect depth information, and axial elongation.


4.3 3D imaging in scattering tissue

Finally, we perform numerical simulations to evaluate the performance of Gali-MiniLFM for volumetric imaging under tissue scattering. The setup used in the simulation is shown in Fig. 9(a). To simulate the effects of volumetric scattering inside the tissue, we embed spherical fluorescent particles of two different sizes. The imaging targets are the same as those in Fig. 8, containing volumetrically distributed fluorescent beads of 10$\mu$m in diameter. The background fluorescence is simulated by embedding 1$\mu$m-sized fluorescent beads throughout an extended $500\times 500\times 300{\mu }$m$^{3}$ volume. We set the concentrations of the imaging targets and the background fluorescence following the parameters used in [23] and [36], respectively. The anisotropic volumetric scattering of the tissue is simulated using the Henyey-Greenstein model [35]. The scattering parameters are assumed to match values commonly reported for brain tissue, including the scattering mean free path $l_s = 100\mu$m, the anisotropy factor $g = 0.9$, and the absorption coefficient $\mu _a = 0$. The measurements are simulated using the built-in Monte Carlo ray tracing algorithm in Zemax.
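The Monte Carlo ray tracing itself is performed inside Zemax, but the Henyey-Greenstein sampling that underlies it is standard and easy to reproduce. The sketch below draws scattering-angle cosines for $g = 0.9$ and exponentially distributed free paths with mean $l_s = 100\mu$m; it is an independent illustration of the scattering model, not the Zemax implementation.

```python
import numpy as np

def sample_hg_costheta(g, rng, n):
    """Draw n scattering-angle cosines from the Henyey-Greenstein phase function with
    anisotropy factor g (standard inverse-CDF sampling used in tissue Monte Carlo)."""
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                       # isotropic limit
    tmp = (1.0 - g**2) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g**2 - tmp**2) / (2.0 * g)

# Illustrative check with the brain-like parameters used above: g = 0.9, l_s = 100 um.
rng = np.random.default_rng(0)
cos_t = sample_hg_costheta(0.9, rng, 200_000)
path = -100e-6 * np.log(rng.random(200_000))        # free path lengths with mean l_s = 100 um
print(f"<cos(theta)> ~ {cos_t.mean():.3f} (should approach g); mean free path ~ {path.mean() * 1e6:.1f} um")
```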


Fig. 9. Results of volumetric imaging and 3D reconstruction under tissue scattering. (a) The simulation considers anisotropic volumetric scattering by embedding both the imaging targets and the background fluorescent beads inside the tissue volume. (b) Volumetric reconstruction results under increasing imaging depth. The depths of the top surface of the tissue are (b1) 180$\mu$m, (b2) 90$\mu$m, and (b3) 0$\mu$m.


To investigate the effect of tissue scattering, we perform a series of simulations corresponding to different imaging depths. The location of the imaging volume relative to the GRIN lens is kept the same in all simulations, namely between 180$\mu$m and 280$\mu$m underneath the GRIN lens, corresponding to between $50\mu$m and $-50\mu$m in our coordinate system (Fig. 3(a)). The space between the bottom surface of the GRIN lens and the top surface of the tissue is assumed to be filled with a non-scattering index-matching fluid (e.g. water). The scattering is varied by changing the size of this non-scattering volume $D$. In Fig. 9(b), we show three representative examples from our study, corresponding to $D = 180\mu$m, $90\mu$m, and $0\mu$m, respectively. The resulting light field measurements are shown in the first row of Fig. 9(b). As expected, the scattering effects become more severe as the imaging depth increases (i.e. as the thickness of the non-scattering volume $D$ decreases). We quantify the effect of scattering on the raw light field measurement by comparing the scattering-free image $I_0$ (Fig. 8(b)) with each scattered image $I_s$. The strength of the scattering is quantified by the signal-to-background ratio (SBR), defined as

$$\textrm{SBR} = \|I_0\| / \|I_s - I_0\|,$$
where $\|\cdot \|$ calculates the norm. The SBRs for the light field measurements are shown in Fig. 10(a).
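Eq. (9) is computed directly from the image pairs; a minimal sketch is given below, where the $\ell_2$ (Frobenius) norm is assumed for $\|\cdot\|$ and the two test images are synthetic.

```python
import numpy as np

def sbr(I0, Is):
    """Signal-to-background ratio of Eq. (9): ||I0|| / ||Is - I0|| (L2/Frobenius norm assumed)."""
    I0, Is = np.asarray(I0, float), np.asarray(Is, float)
    return np.linalg.norm(I0) / np.linalg.norm(Is - I0)

# Toy example: a scattering-free image vs. the same image with 10% additive background.
rng = np.random.default_rng(1)
I0 = rng.random((128, 128))
Is = I0 + 0.1 * rng.random((128, 128))
print(f"SBR = {sbr(I0, Is):.1f}")
```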


Fig. 10. SBR for quantifying the scattering effect. SBRs of (a) the raw light field measurements and (b) the 3D reconstructions. Note that a smaller $D$ causes more severe scattering and hence a lower SBR.


Next, we perform 3D reconstruction using the same algorithm as in the scattering-free case. The results are shown as depth-coded projections in the second row of Fig. 9(b). We also zoom in on the central region of each reconstruction in the third row of Fig. 9(b). Overall, the effect of scattering manifests as periodic background patterns whose strength increases with imaging depth. In Fig. 9(b1), all the imaging targets lie within a single scattering mean free path, and the degradation due to scattering is mild. In Fig. 9(b2), most of the imaging targets lie between one and two scattering mean free paths, and the light field deconvolution algorithm can still suppress most of the scattering background. Even at the deepest depth under study (Fig. 9(b3)), in which the imaging targets mostly lie between two and three scattering mean free paths, it is still possible to discern the imaging targets from the background, albeit with more severe background artifacts. We further quantify the degradation of the reconstruction due to scattering by comparing the scattering-free reconstruction $R_0$ (Fig. 8(d)) with each reconstruction under scattering $R_s$. The effect of scattering is again measured by the signal-to-background ratio (SBR), defined similarly to Eq. (9), and is shown in Fig. 10(b). As expected, the SBR deteriorates at greater depths.

Finally, in order to investigate the performance limit of the proposed Gali-MiniLFM, the same procedures are conducted with an increased density of 10$\mu$m fluorescent particles. We first increase the number of fluorescent particles from 50 to 55 and then 65 with $D=0\mu$m, and perform the Monte Carlo ray tracing as well as the 3D reconstruction. The results are shown in the left-most and second columns of Fig. 11. The majority of imaging targets can be discerned from the severe background artifacts in both cases. We plot the intensity profiles of the depth-coded 3D reconstruction results and consider a bead resolved if it can be distinguished from its neighbors and from the background; the percentage of resolved beads is then defined as the number of resolved beads over the total number of beads. Comparing the number of resolved fluorescent particles in these two cases, we find that the percentage of resolved fluorescent particles does not decrease as the density increases, i.e. from 55 to 65 particles. Next, we image the same 65 fluorescent particles but with reduced scattering, i.e. $D = 90\mu$m. The results are shown in the third column of Fig. 11. As expected, the background artifacts become weaker and the number of resolved fluorescent particles remains the same, which indicates the robustness of the proposed Gali-MiniLFM. We further increase the number of particles from 65 to 75 with $D = 90\mu$m and $D = 180\mu$m; the results are shown in the fourth and right-most columns of Fig. 11, respectively. The conclusion still holds: the percentage of resolved fluorescent particles does not decrease as the density increases.


Fig. 11. Performance limit analysis for the proposed Gali-MiniLFM under scattering. (a) Distributions of the fluorescent particles used in the simulation, (b) The raw light field measurements, and (c) 3D reconstruction results. The left-most column corresponds to 55 particles and $D = 0\mu$m. The second and the third columns correspond to 65 particles and $D=0\mu$m and $D=90\mu$m, respectively. The fourth and the right-most columns correspond to 75 particles and $D=90\mu$m and $D=180\mu$m, respectively.


We additionally investigate the influence of the background fluorescence, i.e. the 1$\mu$m-sized fluorescent beads, on the performance limit of the proposed Gali-MiniLFM; the results are shown in Fig. 12. First, we image the same 50 fluorescent particles (shown in Fig. 8) with $D=0\mu$m but increase the density of the 1$\mu$m-sized fluorescent beads by a factor of up to 1.3. The corresponding results are displayed in the second column of Fig. 12. For comparison, we show the original results for 50 fluorescent particles (the right-most column of Fig. 9(b)) in the left-most column of Fig. 12, and the results of increasing the density of the fluorescent particles by a factor of 1.3 (the second column of Fig. 11) in the third column of Fig. 12. As can be seen, increasing either the background fluorescence or the number of fluorescent particles leads to stronger background noise in the light field measurements and slightly reduces the contrast of the 3D reconstruction, but barely affects the percentage of resolved fluorescent particles. Next, we image the same 65 fluorescent particles with reduced scattering, i.e. $D = 90\mu$m, but increase the density of the 1$\mu$m-sized fluorescent beads by factors of two and three; the results are shown in the fourth and right-most columns of Fig. 12, respectively. We find that increasing the background fluorescence also reduces the contrast of the 3D reconstruction even with reduced scattering. However, the percentage of resolved fluorescent particles is still unaffected. Overall, these studies demonstrate the robustness of the proposed Gali-MiniLFM under volumetric tissue scattering for imaging 10$\mu$m-scale fluorescent objects.


Fig. 12. Influence analysis of background fluorescence on the performance limit of the proposed Gali-MiniLFM under scattering. (a) Distributions of the fluorescent particles used in the simulation, (b) The raw light field measurements, and (c) 3D reconstruction results. The left-most column corresponds to 50 particles and $D = 0\mu$m. The second and the third columns correspond to increasing the density of background fluorescence and imaging targets up to 1.3 times (65 particles) when $D = 0\mu$m, respectively. The fourth and the right-most columns correspond to 65 particles with reduced scattering, i.e. $D=90\mu$m but with increased density of background fluorescence up to two and three times, respectively.


5. Conclusion

In this paper, we developed a Galilean-mode light field miniscope (Gali-MiniLFM) that achieves high, uniform resolution and a compact design for volumetric imaging of neural activity. In Gali-MiniLFM, the MLA is placed at a position that forms a virtual imaging relationship between the NIP and the image sensor, which mitigates the discrepancy between spatial and angular sampling and shortens the fluorescence imaging path. To improve the 3D reconstruction, we proposed a novel framework that incorporates optical aberrations into the light field PSF model. Numerical simulation results demonstrate the robustness of Gali-MiniLFM for 3D imaging under volumetric tissue scattering.

Funding

National Natural Science Foundation of China (61771275); Shenzhen Project (JCYJ20170817162658573); Boston University Dean's Catalyst Award; Tip-top Scientific and Technical Innovative Youth Talents of Guangdong Special Support Program (2016TQ03X998).

Acknowledgments

The authors would like to thank Ian Davison, Daniel Leman, and William Yen for their help with the original FinchScope Zemax design. L. Tian acknowledges the support from the Boston University Dean's Catalyst Award for this work. Y. Chen acknowledges the financial support from the China Scholarship Council for a one-year study at Boston University. X. Jin acknowledges the support from the National Natural Science Foundation of China (NSFC) (61771275), the Shenzhen Project, China (JCYJ20170817162658573), and the Tip-top Scientific and Technical Innovative Youth Talents of Guangdong Special Support Program (2016TQ03X998).

Disclosures

The authors declare no conflicts of interest.

References

1. D. Aharoni, B. S. Khakh, A. J. Silva, and P. Golshani, “All the light that we can see: a new era in miniaturized microscopy,” Nat. Methods 16(1), 11–13 (2019). [CrossRef]  

2. K. K. Ghosh, L. D. Burns, E. D. Cocker, A. Nimmerjahn, Y. Ziv, A. El Gamal, and M. J. Schnitzer, “Miniaturized integration of a fluorescence microscope,” Nat. Methods 8(10), 871–878 (2011). [CrossRef]  

3. J. H. Park, J. Platisa, J. V. Verhagen, S. H. Gautam, A. Osman, D. Kim, V. A. Pieribone, and E. Culurciello, “Head-mountable high speed camera for optical neural recording,” J. Neurosci. Methods 201(2), 290–295 (2011). [CrossRef]  

4. W. A. Liberti III, L. N. Perkins, D. P. Leman, and T. J. Gardner, “An open source, wireless capable miniature microscope system,” J. Neural Eng. 14(4), 045001 (2017). [CrossRef]  

5. D. J. Cai, D. Aharoni, T. Shuman, J. Shobe, J. Biane, W. Song, B. Wei, M. Veshkini, M. La-Vu, J. Lou, S. E. Flores, I. Kim, Y. Sano, M. Zhou, K. Baumgaertel, A. Lavi, M. Kamata, M. Tuszynski, M. Mayford, P. Golshani, and A. J. Silva, “A shared neural ensemble links distinct contextual memories encoded close in time,” Nature 534(7605), 115–118 (2016). [CrossRef]  

6. G. Barbera, B. Liang, L. Zhang, Y. Li, and D.-T. Lin, “A wireless miniScope for deep brain imaging in freely moving mice,” J. Neurosci. Methods 323, 56–60 (2019). [CrossRef]  

7. T. Shuman, D. Aharoni, D. J. Cai, C. R. Lee, S. Chavlis, L. Page-Harley, L. M. Vetere, Y. Feng, C. Y. Yang, and I. Mollinedo-Gajate, “Breakdown of spatial coding and interneuron synchronization in epileptic mice,” Nat. Neurosci. 23(2), 229–238 (2020). [CrossRef]  

8. A. de Groot, B. J. van den Boom, R. M. van Genderen, J. Coppens, J. van Veldhuijzen, J. Bos, H. Hoedemaker, M. Negrello, I. Willuhn, C. I. De Zeeuw, and T. M. Hoogland, “Ninscope: a versatile miniscope for multi-region circuit investigations,” eLife 9, 685909 (2020). [CrossRef]  

9. B. B. Scott, S. Y. Thiberge, C. Guo, D. G. R. Tervo, C. D. Brody, A. Y. Karpova, and D. W. Tank, “Imaging cortical dynamics in GCaMP transgenic rats with a head-mounted widefield macroscope,” Neuron 100(5), 1045–1058.e5 (2018). [CrossRef]  

10. J. Senarathna, H. Yu, C. Deng, A. L. Zou, J. B. Issa, D. H. Hadjiabadi, S. Gil, Q. Wang, B. M. Tyler, N. V. Thakor, and A. P. Pathak, “A miniature multi-contrast microscope for functional imaging in freely behaving animals,” Nat. Commun. 10(1), 99 (2019). [CrossRef]  

11. T. Shuman, D. Aharoni, D. J. Cai, C. R. Lee, S. Chavlis, J. Taxidis, S. E. Flores, K. Cheng, M. Javaherian, C. C. Kaba, M. Shtrahman, K. I. Bakhurin, S. Masmanidis, B. S. Khakh, P. Poirazi, A. J. Silva, and P. Golshani, “Breakdown of spatial coding and neural synchronization in epilepsy,” bioRxiv p. 358580 (2018).

12. X. Wang, Y. Liu, X. Li, Z. Zhang, H. Yang, Y. Zhang, P. R. Williams, N. S. A. Alwahab, K. Kapur, B. Yu, Y. Zhang, M. Chen, H. Ding, C. R. Gerfen, K. H. Wang, and Z. He, “Deconstruction of corticospinal circuits for goal-directed motor skills,” Cell 171(2), 440–455.e14 (2017). [CrossRef]  

13. M. Murugan, H. J. Jang, M. Park, E. M. Miller, J. Cox, J. P. Taliaferro, N. F. Parker, V. Bhave, H. Hur, and Y. Liang et al., “Combined social and spatial coding in a descending projection from the prefrontal cortex,” Cell 171(7), 1663–1677.e16 (2017). [CrossRef]  

14. A. Glas, M. Hübener, T. Bonhoeffer, and P. M. Goltstein, “Benchmarking miniaturized microscopy against two-photon calcium imaging using single-cell orientation tuning in mouse visual cortex,” PLoS One 14(4), e0214954 (2019). [CrossRef]  

15. J. Mertz, “Strategies for volumetric imaging with a fluorescence microscope,” Optica 6(10), 1261 (2019). [CrossRef]  

16. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef]  

17. W. Zong, R. Wu, M. Li, Y. Hu, Y. Li, J. Li, H. Rong, H. Wu, Y. Xu, Y. Lu, H. Jia, M. Fan, Z. Zhou, Y. Zhang, A. Wang, L. Chen, and H. Cheng, “Fast high-resolution miniature two-photon microscopy for brain imaging in freely behaving mice,” Nat. Methods 14(7), 713–719 (2017). [CrossRef]  

18. B. N. Ozbay, G. L. Futia, M. Ma, V. M. Bright, J. T. Gopinath, E. G. Hughes, D. Restrepo, and E. A. Gibson, “Three dimensional two-photon brain imaging in freely moving mice using a miniature fiber coupled microscope with active axial-scanning,” Sci. Rep. 8(1), 8108 (2018). [CrossRef]  

19. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

20. O. Skocek, T. Nöbauer, L. Weilguny, F. Martínez Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, D. D. Cox, P. Golshani, and A. Vaziri, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018). [CrossRef]  

21. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019). [CrossRef]  

22. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

23. T. Nöbauer, O. Skocek, A. J. Pernia-Andrade, L. Weilguny, F. M. Traub, M. I. Molodtsov, and A. Vaziri, “Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy,” Nat. Methods 14(8), 811–818 (2017). [CrossRef]

24. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

25. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016). [CrossRef]  

26. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22(20), 24817–24839 (2014). [CrossRef]  

27. M. A. Taylor, T. Nöbauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica 5(4), 345–353 (2018). [CrossRef]  

28. T. V. Truong, D. B. Holland, S. Madaan, A. Andreev, J. V. Troll, D. E. S. Koo, K. Keomanee-Dizon, M. J. McFall-Ngai, and S. E. Fraser, “Selective volume illumination microscopy offers synchronous volumetric imaging with high contrast,” bioRxiv (2018).

29. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

30. C. Guo, W. Liu, X. Hua, H. Li, and S. Jia, “Fourier light-field microscopy,” Opt. Express 27(18), 25573 (2019). [CrossRef]  

31. S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Appl. Opt. 57(1), A1 (2018). [CrossRef]  

32. F. Bociort and J. Kross, “Seidel aberration coefficients for radial gradient-index lenses,” J. Opt. Soc. Am. A 11(10), 2647 (1994). [CrossRef]  

33. L. Rayleigh, “Investigations in optics, with special reference to the spectroscope,” Mon. Not. R. Astron. Soc. 9(53), 40–55 (1880). [CrossRef]  

34. M. Gu, Advanced Optical Imaging Theory, vol. 75 (Springer Science & Business Media, 2000).

35. S. L. Jacques, “Optical properties of biological tissues: a review,” Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef]  

36. P. Theer and W. Denk, “On the fundamental imaging-depth limit in two-photon microscopy,” J. Opt. Soc. Am. A 23(12), 3139–3149 (2006). [CrossRef]  
