
Axial localization with modulated-illumination extended-depth-of-field microscopy

Open Access

Abstract

High-speed volumetric imaging represents a challenge in microscopy applications. We demonstrate a technique for acquiring volumetric images based on extended depth-of-field microscopy with a fast focal scan and modulated illumination. By combining two frames with different illumination ramps, we can perform local depth ranging of the sample at speeds of up to half the camera frame rate. Our technique is light efficient, provides diffraction-limited resolution, enables axial localization that is largely independent of sample size, and can be operated as a simple add-on with any standard widefield microscope based on fluorescence or darkfield contrast. We demonstrate the accuracy of axial localization and applications of the technique to various dynamic extended samples, including the in-vivo mouse brain.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

There is a need for microscopes to provide high-speed volumetric imaging. Such a microscope could be used to characterize the fast dynamics of samples distributed in three dimensions, including bacteria in natural environments [1–3], tracer molecules [4,5], or neuronal activity in brains [6–8]. Standard techniques for volumetric imaging are generally limited in speed, requiring the acquisition of image stacks, as in laser scanning [9] or light sheet microscopy [10,11]. Other techniques such as light-field [12] or tomographic microscopes offer limited resolution, or require iterative inversion algorithms [13]. These techniques can also impose illumination and detection geometries that are sometimes inconvenient or sacrifice light efficiency.

A simple technique for fast volumetric microscopy is based on the use of extended depth-of-field (EDOF) imaging [14–16]. Part of the attractiveness of this technique is that it can often be implemented as an add-on to a conventional microscope, either scanning [17, 18] or widefield [19–21]. But EDOF microscopy only offers a partial solution to the problem of 3D imaging. While it can provide diffraction-limited lateral resolution, it offers no axial resolution. We describe an EDOF-based technique that maintains the attractiveness of simple implementation with a standard widefield microscope, while also providing depth information in the form of axial localization. Our technique is light efficient and versatile, able to provide user-defined depths-of-field (DOF) up to a few hundred microns, and can operate with fluorescent or non-fluorescent samples.

The core of our technique involves sweeping the focal plane of a microscope at an axial position z_s over a scan range D. There are many ways of implementing a focal sweep system [15]; while any such system would be acceptable, our method is to insert a deformable mirror (DM) into the back focal plane, or pupil plane, of an otherwise standard epi-fluorescence microscope (described in [22]). The curvature of the DM can be swept from positive to negative, causing the focal plane of the microscope to sweep over a scan range from z_s = −D/2 to z_s = +D/2 during a single camera exposure. The key benefits of using a DM are light efficiency, achromaticity, speed (we use a DM that provides a 20 kHz update rate), and scan range (we have shown that even with a modest DM stroke of 2.5−3 μm, we can extend the standard depth of field of the microscope by ≈ 70× [22]). In existing focal-sweep systems, the EDOF image is a single-shot projection of the sample along the axial direction. But, as noted above, a single EDOF image does not contain axial information: it is impossible to determine the depth of objects from the EDOF intensity alone. Moreover, if two objects lie along the same z axis, it is impossible to determine which is above the other, or indeed, whether there are two objects at all.

As we demonstrate, all that is required to recover axial information is control of the illumination power during the focal sweep. This is experimentally straightforward since our illumination source is an LED, and control can be achieved with a simple function generator (see Fig. 1) synchronized to the DM modulation rate. While the technique itself is straightforward, recovering axial localization information from our images is somewhat less so. In this paper, we describe a fast (open-loop) deconvolution algorithm that simultaneously recovers diffraction-limited lateral resolution and axial localization information, largely independent of the sample geometry. We present the theory behind our technique, and example applications involving both non-fluorescence and fluorescence imaging, including in-vivo imaging of mouse neuronal activity.

Fig. 1 Schematic for the modulated-illumination EDOF system based on focal scanning with a deformable mirror. The illumination power is modulated with positive and negative ramps on alternating camera frames.

2. Theory

The basic principle behind our technique is that modulating the illumination power during the focal sweep convolves the modulation with the sample structure. Our method uses a simple modulation in the form of a linear ramp. We acquire two EDOF images in rapid succession, one with the ramp increasing with focal depth (image I+) and the other decreasing with focal depth (image I−). Now consider the function Z_est = (I+ − I−)/(I+ + I−), applied to each pixel. If a point-like object is situated at the midpoint of the scan range, then Z_est = 0. If it is deeper than the midpoint, then Z_est > 0, and if it is shallower, Z_est < 0. In fact, the value of Z_est provides an excellent estimate of the object depth (scaled by the scan range D). However, there is a major caveat associated with this strategy: it only works with point-like objects. If the object is large, for example a uniform plane occupying the entire field of view (FOV), then Z_est = 0 regardless of the axial location of the plane. In effect, the scaling factor relating Z_est to object depth is not a constant, but depends on the lateral extent of the sample. To ensure that our strategy works independent of the sample geometry, we must understand this size-dependent scaling factor and correct for it.
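To make the estimator concrete, here is a minimal numerical sketch for a point-like object; all values are invented for illustration, and the D/2 prefactor simply converts the normalized contrast into a depth in microns:

```python
import numpy as np

# Algebra behind the per-pixel estimator: with linear ramps, the two frame
# intensities of a point object are proportional to 1/2 +- z0/D, so the
# normalized difference recovers the depth. Numbers are illustrative.
D = 60.0                      # focal scan range (microns)
z0 = 12.0                     # true depth of the point object (mid-scan = 0)
b = 1000.0                    # object brightness (arbitrary units)

I_plus = b * (0.5 + z0 / D)   # frame acquired with the increasing ramp
I_minus = b * (0.5 - z0 / D)  # frame acquired with the decreasing ramp

Z_est = (D / 2) * (I_plus - I_minus) / (I_plus + I_minus)
print(round(Z_est, 6))  # 12.0: depth recovered from the normalized contrast

# Caveat: a laterally uniform plane produces the same I_plus and I_minus at
# every pixel regardless of its depth, so Z_est = 0 for such an object.
```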

To begin, we denote as M±(z_s) the normalized illumination intensities used to acquire the two EDOF images, one increasing linearly with the focal scan depth z_s and the other decreasing. That is, we write

M_\pm(z_s) = \frac{1}{2} \pm \frac{z_s}{D} \qquad (1)

Denoting as I±(ρ) the corresponding EDOF image intensities recorded at the camera position ρ, we then have

\tilde{I}_\pm(\kappa) = \frac{1}{D}\int_{-D/2}^{D/2} dz_s \int dz_0\, M_\pm(z_s)\,\mathrm{OTF}(\kappa; z_s - z_0)\,\tilde{O}(\kappa; z_0) \qquad (2)
where Ĩ± (κ) is the 2D Fourier transform of I± (ρ), Õ (κ; z0) is the 2D Fourier transform of the 3D object distribution O(ρ0, z0), and OTF (κ; z) is the z-dependent optical transfer function associated with the microscope. For simplicity, we assume unit magnification throughout.
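To see how this forward model plays out for a point object, the following sketch numerically sweeps the focus while applying the two ramps and then forms the normalized contrast. The Lorentzian on-axis defocus profile (the axial intensity of a Gaussian-beam focus) and all numerical values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Toy single-pixel simulation of the modulated focal sweep: a point emitter at
# depth z0 is swept through focus; its on-axis intensity is modeled as a
# Lorentzian in defocus. All numbers are illustrative.
D = 60.0          # focal scan range (microns)
z0 = 10.0         # true emitter depth relative to mid-scan
zR = 2.0          # axial half-width of the focal spot (~conventional DOF)

zs = np.linspace(-D / 2, D / 2, 10001)       # focal-plane sweep positions
peak = 1.0 / (1.0 + ((zs - z0) / zR) ** 2)   # on-axis defocus profile

I_plus = np.mean((0.5 + zs / D) * peak)      # frame with increasing ramp
I_minus = np.mean((0.5 - zs / D) * peak)     # frame with decreasing ramp

Z_est = (D / 2) * (I_plus - I_minus) / (I_plus + I_minus)
print(Z_est)  # near 9.5: close to z0 = 10, with a small bias from the tails
```

The small residual bias comes from the slowly decaying tails of the assumed defocus profile, a first hint of the sample-dependent scaling that the deconvolution below corrects.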

We can further define the sum and difference EDOF images to be

\tilde{I}_\Sigma(\kappa) = \tilde{I}_+(\kappa) + \tilde{I}_-(\kappa), \qquad \tilde{I}_\Delta(\kappa) = \tilde{I}_+(\kappa) - \tilde{I}_-(\kappa) \qquad (3)
leading to
\tilde{I}_\Sigma(\kappa) = \frac{1}{D}\int_{-D/2}^{D/2} dz_s \int dz_0\, \mathrm{OTF}(\kappa; z_s - z_0)\,\tilde{O}(\kappa; z_0) \qquad (4)
and
\tilde{I}_\Delta(\kappa) = \frac{2}{D^2}\int_{-D/2}^{D/2} dz_s \int dz_0\, z_s\,\mathrm{OTF}(\kappa; z_s - z_0)\,\tilde{O}(\kappa; z_0) \qquad (5)

Let us first evaluate ĨΣ (Eq. (4)). If we assume that the object of interest is located well within the focal scan range (i.e., |z_0| ≪ D), we can make the approximation

\tilde{I}_\Sigma(\kappa) \approx \mathrm{EOTF}(\kappa; D)\int dz_0\, \tilde{O}(\kappa; z_0) \qquad (6)
where EOTF (κ; D) is the representative extended OTF [23] associated with the scan range D, defined by
\mathrm{EOTF}(\kappa; D) = \frac{1}{D}\int_{-D/2}^{D/2} dz\, \mathrm{OTF}(\kappa; z) \qquad (7)

This extended OTF is discussed in detail in Ref. [22], where it is expressed in exact and approximate forms. We note that the right-hand side of Eq. (6) is the projection of the object along the axial direction. In other words, the sum image ĨΣ(κ) is the same as a conventional EDOF image acquired with constant illumination power.

From Eq. (6) we recover an expression for the extended object stripped of axial information, given by

\int dz_0\, O(\rho, z_0) = \mathrm{FT}^{-1}\!\left\{\frac{\tilde{I}_\Sigma(\kappa)}{\mathrm{EOTF}(\kappa; D)}\right\} \qquad (8)

The evaluation of ĨΔ (Eq. (5)) can be pursued in a similar manner. Following a change of variable, z = z_s − z_0, this may be recast as

\tilde{I}_\Delta(\kappa) = \frac{2}{D^2}\int dz_0 \int_{-D/2 - z_0}^{D/2 - z_0} dz\,(z_0 + z)\,\mathrm{OTF}(\kappa; z)\,\tilde{O}(\kappa; z_0) \qquad (9)

We now make use of the fact that OTF(κ; z) is symmetric in z (and hence z OTF(κ; z) is anti-symmetric), which leads to

\int_{-D/2 - z_0}^{D/2 - z_0} dz\, z\,\mathrm{OTF}(\kappa; z) = \left(\int_{-D/2 - z_0}^{-D/2} - \int_{D/2 - z_0}^{D/2}\right) dz\, z\,\mathrm{OTF}(\kappa; z) \approx -D z_0\,\mathrm{OTF}\!\left(\kappa; \tfrac{D}{2}\right) \qquad (10)
we obtain
\tilde{I}_\Delta(\kappa) = \frac{2}{D}\,\mathrm{MOTF}(\kappa; D)\int dz_0\, z_0\,\tilde{O}(\kappa; z_0) \qquad (11)
where we have introduced an effective modulated OTF defined by
\mathrm{MOTF}(\kappa; D) = \mathrm{EOTF}(\kappa; D) - \mathrm{OTF}\!\left(\kappa; \tfrac{D}{2}\right) \qquad (12)

Using this modulated OTF, we can obtain an expression that recovers axial localization information:

\int dz_0\, z_0\, O(\rho, z_0) = \frac{D}{2}\,\mathrm{FT}^{-1}\!\left\{\frac{\tilde{I}_\Delta(\kappa)}{\mathrm{MOTF}(\kappa; D)}\right\} \qquad (13)

Equations (8) and (13) are the main results of this paper: they allow us to evaluate the weighted-average axial location of the object, defined by

Z(\rho) = \frac{\int dz_0\, z_0\, O(\rho, z_0)}{\int dz_0\, O(\rho, z_0)} \qquad (14)

We recall that ρ corresponds to a pixel coordinate on the camera. In other words, Eq. (14) provides an estimate of the object depth at every pixel.

Plots of EOTF (κ; D) and MOTF (κ; D) are shown in Fig. 2, where the lateral frequency κ is normalized to the diffraction-limited bandwidth of the microscope defined by 2NA/λ, NA being the microscope numerical aperture and λ being the optical wavelength in vacuum.
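The qualitative behavior of these two curves can be reproduced with a toy model. The sketch below builds EOTF and MOTF from an assumed defocused OTF (a triangular in-focus OTF with a Gaussian defocus roll-off, an illustrative stand-in rather than the microscope's true transfer function) and confirms that both functions vanish at the diffraction limit, while MOTF also vanishes as κ → 0.

```python
import numpy as np

def otf(k, z, zR=2.0):
    # Toy defocused OTF: triangular in-focus OTF times a Gaussian defocus
    # roll-off; k is normalized to the diffraction-limited bandwidth 2NA/lambda.
    return np.clip(1.0 - k, 0.0, None) * np.exp(-0.5 * (np.pi * k * z / zR) ** 2)

D = 60.0                                    # focal scan range, as in Fig. 2
k = np.linspace(0.0, 1.0, 501)
z = np.linspace(-D / 2, D / 2, 2001)

eotf = otf(k[None, :], z[:, None]).mean(axis=0)   # z-average over the scan
motf = eotf - otf(k, D / 2)                       # EOTF minus OTF at scan edge

print(eotf[0], motf[0], eotf[-1], motf[-1])  # 1.0 0.0 0.0 0.0
```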

 figure: Fig. 2

Fig. 2 EOTF (red) and MOTF (black) curves for a focal scan range of 60μm, as a function of spatial frequency normalized to the diffraction limit. Example regularization parameters (blue, purple) are shown for reference.

Download Full Size | PDF

Two important features of EOTF (κ; D) and MOTF (κ; D) are apparent. First, both transfer functions decay to zero at the diffraction limit. This is expected, since no spatial frequencies beyond this limit can be transferred from the object to the camera. However, it poses a difficulty when implementing Eqs. (8) and (13), since any high frequency noise in the EDOF images becomes detrimentally amplified. The standard solution to this problem [24] is to introduce a regularization parameter δ2 in the denominator of both Eqs. (8) and (13), and rewrite these as

\int dz_0\, O(\rho, z_0) = \mathrm{FT}^{-1}\!\left\{\frac{\tilde{I}_\Sigma(\kappa)\,\mathrm{EOTF}^{*}(\kappa; D)}{|\mathrm{EOTF}(\kappa; D)|^{2} + \delta^{2}}\right\} \qquad (15)
and
\int dz_0\, z_0\, O(\rho, z_0) = \frac{D}{2}\,\mathrm{FT}^{-1}\!\left\{\frac{\tilde{I}_\Delta(\kappa)\,\mathrm{MOTF}^{*}(\kappa; D)}{|\mathrm{MOTF}(\kappa; D)|^{2} + \delta^{2}}\right\} \qquad (16)

This regularization scheme works well at high frequency, but we are faced with the difficulty that another zero occurs in MOTF (κ; D) in the low-frequency limit κ → 0. The origin of this zero is just as fundamental. Consider a laterally uniform object such that O(ρ0, z0) is constant, independent of ρ0. In this case, the image intensity recorded at the camera is also uniform for both increasing and decreasing ramps, with a value independent of z0. This reflects the principle that a standard widefield microscope fundamentally cannot resolve the depth of a laterally uniform object. As a result, another regularization parameter is introduced in Eq. (13) to prevent a vanishing of the denominator at low frequencies. The optimal value of this parameter depends on the sample in question: the larger the value, the more immune the reconstruction is to noise, but the less accurate the depth ranging becomes for objects of large lateral extent (i.e. small κ). In practice, the low (δ0) and high (δ1) frequency regularization parameters need not be the same, and we can make use of a frequency-dependent regularization parameter defined by

\delta(\kappa) = \begin{cases} \delta_1 & (\kappa > \kappa_c) \\ \delta_0 & (\kappa < \kappa_c) \end{cases} \qquad (17)
where κc is a cutoff frequency separating the two regimes (typically chosen where EOTF and MOTF intersect). In general, we found that δ0 could be chosen much smaller than δ1, possibly because the low-frequency region where MOTF is small covers much less κ-space area than the high-frequency region where both MOTF and EOTF are small, making noise in the low-frequency region less deleterious. For all demonstrations shown below, the values of δ0 and δ1 were chosen by eye.
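The full recovery pipeline of Eqs. (15) and (16), with a two-parameter regularization of this kind, can be sketched in one lateral dimension. Everything below (the toy defocused OTF, the Gaussian "raft" object, and the regularization values) is an illustrative assumption; the point is that the uncorrected estimator of Eq. (18) underestimates the depth of a laterally extended object, while the deconvolved estimate of Eq. (14) recovers it much more accurately.

```python
import numpy as np

def otf(k, z, zR=2.0):
    # Toy defocused OTF (illustrative assumption, not the microscope's true OTF)
    return np.clip(1.0 - np.abs(k), 0.0, None) * np.exp(-0.5 * (np.pi * k * z / zR) ** 2)

D, z0 = 60.0, 6.0                                 # scan range; true raft depth
x = np.linspace(-51.2, 51.2, 1024, endpoint=False)
k = np.fft.fftfreq(x.size, d=x[1] - x[0])         # normalized spatial frequency
zs = np.linspace(-D / 2, D / 2, 1001)

obj = np.exp(-x**2 / (2 * 10.0**2))               # wide raft: low-k dominated
O_k = np.fft.fft(obj)

# Forward model (Eqs. (4)-(5)) for an object confined to a single depth z0
otf_scan = otf(k[None, :], zs[:, None] - z0)
I_sum_k = otf_scan.mean(axis=0) * O_k
I_dif_k = (2.0 / D) * (zs[:, None] * otf_scan).mean(axis=0) * O_k

# Transfer functions: EOTF (scan average) and MOTF (EOTF minus scan-edge OTF)
eotf = otf(k[None, :], zs[:, None]).mean(axis=0)
motf = eotf - otf(k, D / 2)

# Two-parameter regularization: gentle at low k, stronger at high k
delta = np.where(np.abs(k) < 0.1, 0.005, 0.05)

den = np.fft.ifft(I_sum_k * eotf / (eotf**2 + delta**2)).real          # Eq. (15)
num = (D / 2) * np.fft.ifft(I_dif_k * motf / (motf**2 + delta**2)).real  # Eq. (16)

c = x.size // 2                                   # raft center (x = 0)
Z_corr = num[c] / den[c]                          # deconvolved depth, Eq. (14)
Z_u = (D / 2) * np.fft.ifft(I_dif_k).real[c] / np.fft.ifft(I_sum_k).real[c]  # Eq. (18)
print(Z_u, Z_corr)  # uncorrected underestimates z0 = 6; corrected is far closer
```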

3. Experimental validation

To begin, we experimentally evaluate the accuracy of our modulated-illumination EDOF (MI-EDOF) axial localization strategy. In particular, we evaluate the effect of the deconvolution corrections used in Eqs. (15) and (16). For this comparison, we introduce the uncorrected depth estimator, given simply by

Z_u(\rho) = \frac{D}{2}\,\frac{I_\Delta(\rho)}{I_\Sigma(\rho)} \qquad (18)
where IΔ (ρ) and IΣ (ρ) are the inverse Fourier transforms of ĨΔ (κ) and ĨΣ (κ) respectively.

We measured the 3D position of 1μm diameter fluorescent beads while axially translating them with a calibrated mechanical stage. A Thorlabs M470L3-C blue LED was calibrated to produce linearly modulated intensity and used to excite fluorescence at about 500nm, and an Olympus BX51 microscope with a 20×, 0.5NA objective was used to collect the light, providing a conventional depth of field of about 2μm. The camera was a PCO Edge 4.2LT, with a full-frame rate of up to 40Hz. A 140-actuator MultiDM from Boston Micromachines Corporation (BMC) with up to 3.5μm stroke was used to achieve a total focal-scan range of about D = 60μm.

For isolated beads (Fig. 3), both Z(ρ) (Eq. (14)) and Zu(ρ) (Eq. (18)) provide accurate axial localization over the focal-scan range. Outside this range, the accuracy of the axial localization decreases dramatically, as expected. However, for larger lateral structures such as groups or rafts of beads (Fig. 4), axial localization values are only accurately recoverable when deconvolution is applied. In other words, while deconvolution is not required for sparse, point-like objects, it becomes critical for laterally extended objects.

Fig. 3 Verification of axial localization with an isolated 1μm bead (inset). Scale-bar is 5μm. Intensity of the bead as a function of stage position before (solid red) and after (solid black) deconvolution illustrates the range of the extended DOF of about 60μm. Axial localization of the bead before deconvolution (red) shows good agreement with the nominal stage position (dotted black) over the extended DOF (slope ≈ 0.8, r2 = 0.996). Axial localization with a single (blue) and double (black) parameter deconvolution shows improved axial accuracy (slope ≈ 0.9, r2 = 0.997).

Fig. 4 Verification of axial localization with an extended cluster of 1μm beads (inset). Scale-bar is 5μm. Axial localization of the cluster before (red) deconvolution shows linearity but poor accuracy in estimating the axial position within the DOF, systematically underestimating deviations from Z = 0 (slope ≈ 0.4, r2 = 0.983). After applying deconvolution (blue) with a single regularization parameter the accuracy improves significantly (slope ≈ 0.8, r2 = 0.998). Two-parameter deconvolution (black) provides even higher accuracy (slope ≈ 0.9, r2 = 0.999) which extends even beyond the focal scan range of 60μm.

As pointed out in the introduction, our MI-EDOF technique also works with non-fluorescence imaging, provided the sample is illuminated with spatially incoherent light, in which case the microscope imaging properties are similar to those obtained with fluorescence. To demonstrate this, we imaged 4μm fluorescent beads suspended in PDMS with both fluorescence and darkfield contrast. A Thorlabs M625L3 red LED (calibrated for linearity) provided darkfield illumination from below the sample using a Cassegrain condenser with a specially cut optical block, allowing a straightforward comparison between the two imaging modes (Fig. 5). Fluorescence and darkfield images yield identical relative axial localization for each of the 4μm bead samples, with an offset between the two imaging modalities of about 3.2μm (Fig. 5). This offset, which is slightly larger than the microscope's conventional depth of field, resulted from an apparent shift in the nominal focal plane, possibly caused by the change in imaging wavelength from 500nm to 625nm.

Fig. 5 MI-EDOF images of 4μm beads embedded in PDMS acquired with a) fluorescence and b) darkfield imaging modes; scale bar is 50μm. Axial displacement from nominal focus is represented by the color axis (in units of microns). c) Comparison of the axial positions obtained in fluorescence and darkfield modes yields a linear fit of slope 1.02, and offset 3.2μm.

Figures 3, 4, and 5 illustrate the capacity of our MI-EDOF technique to perform axial localization of both fluorescent and non-fluorescent objects. A crucial requirement for this localization, however, is that the objects do not overlap one another in the axial direction. When such overlap occurs, our technique returns an overall intensity-weighted average of the depth. For example, if two punctate objects of equal brightness lie at the same lateral position but at different depths z1 and z2, our technique returns an image of only a single object located at an apparent depth (z1 + z2)/2. This intensity-weighted axial localization is apparent in Fig. 6, which shows a darkfield image of Cylindrospermum algae (Carolina Biological Supply) suspended in water. The strands are generally sparse enough that the depth profile of each can be identified individually; where the strands overlap, the depth is identified as an intensity-weighted average.
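The intensity-weighted averaging of axially overlapping objects follows directly from the ramp algebra: the two frames add the ramp-weighted intensities of each object, so the normalized contrast returns the brightness-weighted mean depth. A minimal sketch with invented intensities:

```python
import numpy as np

# Two point objects at the same lateral position: depths z1, z2 and
# brightnesses a1, a2 (all values invented for illustration).
D = 60.0
z1, a1 = -10.0, 300.0
z2, a2 = 6.0, 100.0

# Each frame sums the ramp-weighted contributions of both objects
I_plus = a1 * (0.5 + z1 / D) + a2 * (0.5 + z2 / D)
I_minus = a1 * (0.5 - z1 / D) + a2 * (0.5 - z2 / D)

Z = (D / 2) * (I_plus - I_minus) / (I_plus + I_minus)
expected = (a1 * z1 + a2 * z2) / (a1 + a2)   # intensity-weighted mean depth
print(Z, expected)  # both -6.0
```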

Fig. 6 a) Cylindrospermum algae acquired with MI-EDOF in darkfield mode using a 20× 0.5NA Olympus objective. From the modulated-illumination image, two algae strands appear to overlap at two distinct points (A and B). b) Plots of the recorded axial location of each strand near point B reveal sharp variations that converge to a common axial location, indicative of an incorrect apparent co-localization of the strands. The convergence at point A occurs much more slowly, suggesting the two strands are in fact co-localized at that point. c) The axial geometry is verified by an x–z projection obtained from an image stack, which confirms that the strands are axially co-located at point A but axially separated at point B. The different behaviors of the axial plots about these points suggest that, with prior information about the sample (such as continuity constraints), correct axial information can be inferred even in non-co-localized cases.

4. Applications

An application of our MI-EDOF technique is 3D tracking of dynamic objects within large volumes. For example, we imaged E. coli swimming in water using a 40× 0.8NA Olympus objective over an extended volume (Fig. 7). The E. coli were fluorescently labeled with GFP. The axial range was chosen to be about 30μm, and the frame rate was 8Hz. This frame rate was limited not by the camera, but by the weak fluorescence intensity of the E. coli, which imposed an SNR constraint on the exposure time. Video images of the E. coli (Fig. 7) show two bacteria trajectories that intersect laterally. Without the axial information supplied by our technique, it would be difficult to determine with certainty whether the E. coli are close enough to interact. From the modulated-intensity video (see Visualization 1), we find that the two trajectories are in fact at different depths, separated by about 20μm (see Fig. 7). In other words, the added depth information provided by our modulated illumination technique facilitates the disambiguation of the trajectories. We emphasize that the 3D trajectories of every E. coli bacterium in our FOV can be monitored in this manner in parallel, and that our technique can be applied to a large number of bacteria over large FOVs.
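The disambiguation step itself is simple once each track carries a depth coordinate: two paths that cross in (x, y) may never approach each other in 3D. A sketch with invented trajectories (units of microns, depths differing by 20 μm as in the experiment):

```python
import numpy as np

# Two tracked bacteria whose lateral paths cross but whose MI-EDOF depths
# differ; all coordinates are invented for illustration.
t = np.linspace(0.0, 1.0, 51)
traj_a = np.stack([10 * t, 5 * t, np.full_like(t, -8.0)], axis=1)      # z = -8
traj_b = np.stack([10 * t, 5 - 5 * t, np.full_like(t, 12.0)], axis=1)  # z = +12

d2 = np.linalg.norm(traj_a[:, :2] - traj_b[:, :2], axis=1)  # lateral distance
d3 = np.linalg.norm(traj_a - traj_b, axis=1)                # full 3D distance

print(d2.min(), d3.min())  # lateral distance reaches 0; 3D separation stays ~20
```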

Fig. 7 a–c) Monitoring of two E. coli trajectories (obtained from a much larger FOV), with axial position represented by color. The bacteria are shown crossing the same lateral position but separated in height. d–e) 3D representations of the trajectories confirm the height separation.

Careful observation reveals a slight apparent depth gradient across the bacterial body, particularly for the dark-blue colored E. coli in Fig. 7(c). We attribute this to lateral translation, which can lead to a motion-induced edge artifact. Specifically, because our technique is based on frame-by-frame subtraction, any motion occurring on timescales faster than the camera frame rate can appear as a change in depth. While this nominally restricts the use of MI-EDOF to sample dynamics slower than the camera frame rate, knowledge of the sample trajectory can potentially be used to correct, or at least identify, this artifact.

Another important application of our MI-EDOF technique is in functional brain imaging. For example, when performing widefield epi-fluorescence imaging of neurons labeled with a calcium indicator, it is not uncommon that neurons situated one on top of another are difficult to distinguish. In such cases, intensity variations of the indicators that are signatures of neuronal activity can be difficult to associate with specific neurons, leading to erroneous signal interpretation. Our technique of axial localization provides an added dimension into which signal is encoded, thus increasing signal diversity and facilitating the identification of signal origin. As an example of this, we imaged GCaMP-labeled mouse striatum neurons in vivo. We used a 20× 0.4NA Mitutoyo objective, and acquired videos at a frame rate of 22Hz, easily fast enough to capture the GCaMP fluorescence dynamics. To correct for motion artifacts, we registered the brain images prior to deconvolution. Axial localization was performed at each frame, followed by binning and temporal filtering to reduce noise. This resulted in spatio-temporally filtered axial-localization videos that could be superposed onto the conventional intensity maps of brain activity.
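The per-frame post-processing described above (spatial binning followed by temporal filtering of the depth-map video) can be sketched as follows. The bin size and filter length are illustrative choices, not the parameters used in the experiment.

```python
import numpy as np

def bin2x2(frame):
    """Average 2x2 pixel blocks (frame dimensions assumed even)."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def smooth_in_time(video, n=5):
    """Moving average over n frames along axis 0; edge padding keeps the length."""
    pad = np.pad(video, ((n // 2, n - 1 - n // 2), (0, 0), (0, 0)), mode="edge")
    kernel = np.ones(n) / n
    return np.apply_along_axis(lambda t: np.convolve(t, kernel, mode="valid"), 0, pad)

# Stand-in for one second of noisy depth maps at 22 Hz (synthetic data)
video = np.random.default_rng(0).normal(size=(22, 64, 64))
binned = np.stack([bin2x2(f) for f in video])
filtered = smooth_in_time(binned, n=5)
print(filtered.shape)  # (22, 32, 32): same frame count, reduced noise
```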

Figure 8 shows neuronal activity of two distinct overlapping neurons taken from an MI-EDOF video (see Visualization 2). Intensity plots of the overlapping (purple) and non-overlapping (green, blue) regions show that the overlap region exhibits calcium transients associated with either neuron (Fig. 8(a)–(b)). However, the overlap intensity alone would not be sufficient to associate a particular transient with a particular neuron without recourse to statistical correlations over non-overlapping regions [25,26]. Using our axial localization technique, analysis of the axial positioning data (Fig. 8(c)) indicates that when the green neuron is active, the apparent depth of the overlap increases (Fig. 8(d1)), whereas when the blue neuron is active, the apparent depth decreases (Fig. 8(d3)). When both neurons are simultaneously active, the depth appears unchanged (Fig. 8(d2)), since our technique provides the intensity-averaged axial position, as indicated in Fig. 6. In other words, the association of calcium transients with specific neurons can be achieved locally using information obtained from a single image point, rather than requiring delocalized cross-correlations obtained from spatially separated image points.

Fig. 8 a) In-vivo MI-EDOF data from GCaMP-labeled neurons in a mouse striatum (three frames are shown from a video; see Visualization 2). Distinct neurons are observed (blue, green) in the indicated ROI that laterally overlap (purple); scalebar is 50μm. b,c) Intensity and depth variations are monitored simultaneously, facilitating the discrimination of neuronal activity. d1–3) MI-EDOF video frames show neurons firing either individually or together; scalebars are 20μm. e1–2) Intensity and depth of neurons firing almost simultaneously. f1–3) MI-EDOF video frames show near-simultaneous firing of neurons; scalebars are 20μm.

This advantage becomes particularly evident when there is a slight time delay in the activity between the two neurons. Since the fluorescence intensities from the neurons add, only a single transient is apparent in the overlap region (Fig. 8(e1)). That is, using only the intensity obtained from the overlap region, it would appear that both neurons were active simultaneously. From the axial localization trace, however, there is a clear time delay between the activity of the two neurons: first the apparent depth drops, indicating the blue neuron is active, and then it rises, indicating the green neuron becomes active (Fig. 8(e2)). This sequence of neuronal activity is verified by the intensity traces obtained from the non-overlapping regions of each neuron (Fig. 8(f1)–(f3)), confirming a time delay between the two neurons of about 0.5s.

5. Conclusion

We have presented a modulated-illumination technique that provides axial localization in volumetric samples at video-rate acquisition times. The technique works in combination with EDOF imaging, which can be implemented as a simple add-on with a standard widefield microscope, operating with fluorescence or darkfield contrast. The signature advantage of our technique is speed. EDOF by itself provides quasi-volumetric imaging at kilohertz rates, while sacrificing axial resolution. Our modulated-illumination technique recovers axial information while only moderately slowing our EDOF acquisition.

We emphasize that our technique does not provide axial resolution per se. Rather it provides axial localization information, in the form of an intensity-weighted average along the depth axis at each pixel. While alternative strategies to obtain such axial localization have been described [27], they do not benefit from the extended range provided by EDOF. Axial information beyond a simple intensity-weighted average could of course be obtained with more sophisticated modulated illumination strategies involving the acquisition of more than two image frames, but this would undermine our speed advantage. As it is, the addition of axial localization information alone is generally sufficient to resolve ambiguous signals, such as those obtained when tracking overlapping particles or filaments, or when monitoring overlapping neuronal signals.

An important component of our technique is the implementation of a deconvolution algorithm that renders our axial localization accuracy largely independent of object lateral extent (provided the object remains smaller than the microscope FOV). This algorithm is not restricted to focal scanning with a DM, and can be applied more generally to any focal-scanning strategy for obtaining EDOF, such as stage scanning [15] or scanning with a tunable acoustic gradient lens [5]. As with any deconvolution strategy, detection noise can lead to erroneous results, which are somewhat exacerbated in our case since we rely on image subtraction. We used a simple regularization strategy to mitigate the effects of this noise, though our strategy remains largely subjective as implemented. Significant detection noise can also call for a somewhat different implementation of modulated illumination, in which the illumination ramps do not taper all the way to zero but rather to a finite value (the depth-ranging algorithm must be modified accordingly, but the modification is straightforward). Moreover, issues can arise when the illumination control is not linear, in which case a compensating lookup table may be required to recover linearity in the illumination ramps.

In either case, whether depth ranging accuracy is sought, or whether only signal diversity is sought to facilitate signal disambiguation, our strategy of modulated illumination remains easy to implement, making it attractive as a general tool for widefield microscopy.

Funding

National Science Foundation Industry/University Cooperative Research Center for Biophotonic Sensors and Systems (IIP-1068070); National Institutes of Health (R21EY027549).

Acknowledgments

We thank Lei Tian and Anne Sentenac for helpful discussions. E. Coli were supplied by the Mo Khalil laboratory. Mice were supplied by the Xue Han laboratory.

Disclosures

T. Bifano acknowledges a financial interest in Boston Micromachines Corporation.

References and links

1. Z. Frentz, S. Kuehn, D. Hekstra, and S. Leibler, “Microbial population dynamics by digital in-line holographic microscopy,” Rev. Sci. Instrum. 81, 084301 (2010). [CrossRef]   [PubMed]  

2. W. Bishara, U. Sikora, O. Mudanyali, T.-W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011). [CrossRef]   [PubMed]  

3. A. Wang, R. F. Garmann, and V. N. Manoharan, “Tracking E. coli runs and tumbles with scattering solutions and digital holographic microscopy,” Opt. Express 24, 23719–23725 (2016). [CrossRef]   [PubMed]  

4. P. Memmolo, L. Miccio, M. Paturzo, G. DiCaprio, G. Coppola, P. A. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photon. 7, 713–755 (2015). [CrossRef]  

5. T. H. Chen, J. T. Ault, H. A. Stone, and C. B. Arnold, “High-speed axial-scanning wide-field microscopy for volumetric particle tracking velocimetry,” Exp. Fluids 58, 1–7 (2017). [CrossRef]  

6. Y. Gong, C. Huang, J. Z. Z. Li, B. F. Grewe, Y. Zhang, S. Eismann, and M. J. Schnitzer, “High-speed recording of neural spikes in awake mice and flies with a fluorescent voltage sensor,” Science 350, 1361–1366 (2015). [CrossRef]   [PubMed]  

7. N. Ji, J. Freeman, and S. L. Smith, “Technologies for imaging neural activity in large volumes,” Nat. Neurosci. 19, 1154–1164 (2016). [CrossRef]  

8. W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Meth. 14, 349–359 (2017). [CrossRef]  

9. J. E. Pawley, Handbook of Biological Confocal Microscopy, 3rd. Ed. (Springer, 2006). [CrossRef]  

10. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical Sectioning Deep Inside Live Embryos by Selective Plane Illumination Microscopy,” Science 305, 1007–1009 (2004). [CrossRef]   [PubMed]  

11. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Meth. 10, 413–420 (2013). [CrossRef]  

12. M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006). [CrossRef]  

13. P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015).

14. W. T. Welford, “Use of Annular Apertures to Increase Focal Depth,” J. Opt. Soc. Am. 50, 749–753 (1960).

15. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6, 38–42 (1972).

16. G. Indebetouw and H. Bai, “Imaging with Fresnel zone pupil masks: extended depth of field,” Appl. Opt. 23, 4299–4302 (1984).

17. P. Dufour, M. Piché, Y. De Koninck, and N. McCarthy, “Two-photon excitation fluorescence microscopy with a high depth of field using an axicon,” Appl. Opt. 45, 9246–9252 (2006).

18. R. Lu, W. Sun, Y. Liang, A. Kerlin, J. Bierfeld, J. D. Seelig, D. E. Wilson, B. Scholl, B. Mohar, M. Tanimoto, M. Koyama, D. Fitzpatrick, M. B. Orger, and N. Ji, “Video-rate volumetric functional imaging of the brain at synaptic resolution,” Nat. Neurosci. 20, 620–628 (2017).

19. S. Abrahamsson, S. Usawa, and M. Gustafsson, “A new approach to extended focus for high-speed, high-resolution biological microscopy,” Proc. SPIE 6090, 60900N (2006).

20. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).

21. B. F. Grewe, F. F. Voigt, M. van ’t Hoff, and F. Helmchen, “Fast two-layer two-photon imaging of neuronal cell populations using an electrically tunable lens,” Biomed. Opt. Express 2, 2035–2046 (2011).

22. W. J. Shain, N. A. Vickers, B. B. Goldberg, T. Bifano, and J. Mertz, “Extended depth-of-field microscopy with a high-speed deformable mirror,” Opt. Lett. 42, 995–998 (2017).

23. S.-H. Lu and H. Hua, “Imaging properties of extended depth of field microscopy through single-shot focus scanning,” Opt. Express 23, 10714–10731 (2015).

24. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (IOP Publishing, 1998).

25. A. Inglis, L. Cruz, D. L. Roe, H. E. Stanley, D. L. Rosene, and B. Urbanc, “Automated identification of neurons and their locations,” J. Microsc. 230, 339–352 (2008).

26. L. Theis, P. Berens, E. Froudarakis, J. Reimer, M. R. Roson, T. Baden, T. Euler, A. S. Tolias, and M. Bethge, “Benchmarking Spike Rate Inference in Population Calcium Imaging,” Neuron 90, 471–482 (2016).

27. M. Watanabe and S. K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comp. Vision 27, 203–225 (1998).

Supplementary Material (2)

Visualization 1: Fluorescently labeled E. coli in water observed with a 40× 0.8 NA objective. Color corresponds to axial depth obtained by MI-EDOF.
Visualization 2: Spontaneous activity of GCaMP-labeled neurons in mouse striatum. Two neurons overlap one another; overlapping (middle) and non-overlapping (left/right) regions are highlighted. Scale bar is 20 μm. Frame rate is 2× real time.


Figures (8)

Fig. 1 Schematic of the modulated-illumination EDOF system based on focal scanning with a deformable mirror. The illumination power is modulated with positive and negative ramps on alternating camera frames.
Fig. 2 EOTF (red) and MOTF (black) curves for a focal scan range of 60μm, as a function of spatial frequency normalized to the diffraction limit. Example regularization parameters (blue, purple) are shown for reference.
Fig. 3 Verification of axial localization with an isolated 1 μm bead (inset; scale bar is 5 μm). Intensity of the bead as a function of stage position before (solid red) and after (solid black) deconvolution illustrates the extended DOF range of about 60 μm. Axial localization of the bead before deconvolution (red) shows good agreement with the nominal stage position (dotted black) over the extended DOF (slope ≈ 0.8, r² = 0.996). Axial localization with single- (blue) and double-parameter (black) deconvolution shows improved axial accuracy (slope ≈ 0.9, r² = 0.997).
Fig. 4 Verification of axial localization with an extended cluster of 1 μm beads (inset; scale bar is 5 μm). Axial localization of the cluster before deconvolution (red) shows linearity but poor accuracy in estimating the axial position within the DOF, systematically underestimating deviations from Z = 0 (slope ≈ 0.4, r² = 0.983). After applying deconvolution with a single regularization parameter (blue), the accuracy improves significantly (slope ≈ 0.8, r² = 0.998). Two-parameter deconvolution (black) provides even higher accuracy (slope ≈ 0.9, r² = 0.999), which extends even beyond the 60 μm focal scan range.
Fig. 5 MI-EDOF images of 4 μm beads embedded in PDMS acquired with a) fluorescence and b) darkfield imaging modes; scale bar is 50 μm. Axial displacement from nominal focus is represented by the color axis (in microns). c) Comparison of the axial positions obtained in fluorescence and darkfield modes yields a linear fit with slope 1.02 and offset 3.2 μm.
Fig. 6 a) Cylindrospermum algae acquired with MI-EDOF in darkfield mode using a 20× 0.5 NA Olympus objective. In the modulated-illumination image, two algae strands appear to overlap at two distinct points (A and B). b) Plots of the recorded axial location of each strand near point B reveal sharp variations that converge to a common axial location, indicative of an incorrect apparent co-localization of the strands. The convergence at point A occurs much more slowly, suggesting the two strands are in fact co-localized at that point. c) The axial geometry is verified by an x–z projection obtained from an image stack, confirming that the strands are axially co-located at point A but axially separated at point B. The different behaviors of the axial plots about these points suggest that, with prior information about the sample (such as continuity constraints), correct axial information can be inferred even in non-co-localized cases.
Fig. 7 a–c) Monitoring of two E. coli bacteria trajectories (obtained from a much larger FOV), with axial position represented by color. The bacteria are shown crossing the same lateral position while separated in height. d–e) 3D representations of the trajectories confirm the height separation.
Fig. 8 a) In-vivo MI-EDOF data from GCaMP-labeled neurons in a mouse striatum (three frames from a video are shown – see Visualization 2). Distinct neurons (blue, green) are observed in the indicated ROI that laterally overlap (purple); scale bar is 50 μm. b,c) Intensity and depth variations are monitored simultaneously, facilitating the discrimination of neuronal activity. d1–3) MI-EDOF video frames show neurons firing either individually or together; scale bars are 20 μm. e1–2) Intensity and depth of neurons firing almost simultaneously. f1–3) MI-EDOF video frames show near-simultaneous firing of neurons; scale bars are 20 μm.

Equations (18)


$$ M_\pm(z_s) = \frac{1}{2} \pm \frac{z_s}{D} $$

$$ \tilde{I}_\pm(\kappa) = \frac{1}{D} \int_{-D/2}^{D/2} dz_s \int dz_0\, M_\pm(z_s)\, \mathrm{OTF}(\kappa;\, z_s - z_0)\, \tilde{O}(\kappa;\, z_0) $$

$$ \tilde{I}_\Sigma(\kappa) = \tilde{I}_+(\kappa) + \tilde{I}_-(\kappa), \qquad \tilde{I}_\Delta(\kappa) = \tilde{I}_+(\kappa) - \tilde{I}_-(\kappa) $$

$$ \tilde{I}_\Sigma(\kappa) = \frac{1}{D} \int_{-D/2}^{D/2} dz_s \int dz_0\, \mathrm{OTF}(\kappa;\, z_s - z_0)\, \tilde{O}(\kappa;\, z_0) $$

$$ \tilde{I}_\Delta(\kappa) = \frac{2}{D^2} \int_{-D/2}^{D/2} dz_s \int dz_0\, z_s\, \mathrm{OTF}(\kappa;\, z_s - z_0)\, \tilde{O}(\kappa;\, z_0) $$

$$ \tilde{I}_\Sigma(\kappa) \approx \mathrm{EOTF}(\kappa;\, D) \int dz_0\, \tilde{O}(\kappa;\, z_0) $$

$$ \mathrm{EOTF}(\kappa;\, D) = \frac{1}{D} \int_{-D/2}^{D/2} dz\, \mathrm{OTF}(\kappa;\, z) $$

$$ \int dz_0\, O(\rho, z_0) = \mathrm{FT}^{-1}\!\left\{ \frac{\tilde{I}_\Sigma(\kappa)}{\mathrm{EOTF}(\kappa;\, D)} \right\} $$
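The ramp modulation M±(z_s) and the scan-averaged EOTF lend themselves to a short numerical sketch. The Gaussian defocus MTF model below, and the NA, wavelength, and scan-range values, are illustrative assumptions for this sketch only, not the system's measured transfer function:

```python
import numpy as np

# Illumination ramps over the focal scan: each camera frame integrates
# over one full scan of range D with a positive or negative ramp.
def ramp(z_s, D, sign=+1):
    return 0.5 + sign * z_s / D

# Stand-in defocus OTF: a Gaussian falloff whose blur width grows linearly
# with defocus |z| (a paraxial idealization, assumed for illustration).
def otf_defocus(kappa, z, NA=0.8, wavelength=0.5):
    blur = NA * np.abs(z) / wavelength          # microns
    return np.exp(-2.0 * (np.pi * kappa * blur) ** 2)

# EOTF(kappa; D): average of the defocused OTF over the scan range,
# approximated by a mean over uniformly spaced focal positions.
def eotf(kappa, D, n=501):
    z = np.linspace(-D / 2, D / 2, n)
    return otf_defocus(kappa, z).mean()

D = 60.0  # focal scan range in microns, matching the text
# The two ramps sum to unity, so I_Sigma is an ordinary EDOF image.
z = np.linspace(-D / 2, D / 2, 11)
assert np.allclose(ramp(z, D, +1) + ramp(z, D, -1), 1.0)

print(eotf(0.0, D))                    # -> 1.0 (DC is preserved)
print(eotf(0.2, D) < eotf(0.05, D))    # -> True (EOTF rolls off with frequency)
```

The mean over sampled focal positions plays the role of the (1/D)∫dz average; any realistic OTF model can be substituted for the Gaussian stand-in.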
$$ \tilde{I}_\Delta(\kappa) = \frac{2}{D^2} \int dz_0 \int_{-D/2 - z_0}^{D/2 - z_0} dz\, (z_0 + z)\, \mathrm{OTF}(\kappa;\, z)\, \tilde{O}(\kappa;\, z_0) $$

$$ \int_{-D/2 - z_0}^{D/2 - z_0} dz\, z\, \mathrm{OTF}(\kappa;\, z) = \left( \int_{-D/2}^{D/2} - \int_{D/2 - z_0}^{D/2} + \int_{-D/2 - z_0}^{-D/2} \right) dz\, z\, \mathrm{OTF}(\kappa;\, z) \approx -D z_0\, \mathrm{OTF}\!\left(\kappa;\, \tfrac{D}{2}\right) $$

$$ \tilde{I}_\Delta(\kappa) \approx \frac{2}{D}\, \mathrm{MOTF}(\kappa;\, D) \int dz_0\, z_0\, \tilde{O}(\kappa;\, z_0) $$

$$ \mathrm{MOTF}(\kappa;\, D) = \mathrm{EOTF}(\kappa;\, D) - \mathrm{OTF}\!\left(\kappa;\, \tfrac{D}{2}\right) $$

$$ \int dz_0\, z_0\, O(\rho, z_0) = \frac{D}{2}\, \mathrm{FT}^{-1}\!\left\{ \frac{\tilde{I}_\Delta(\kappa)}{\mathrm{MOTF}(\kappa;\, D)} \right\} $$

$$ Z(\rho) = \frac{\int dz_0\, z_0\, O(\rho, z_0)}{\int dz_0\, O(\rho, z_0)} $$

$$ \int dz_0\, O(\rho, z_0) = \mathrm{FT}^{-1}\!\left\{ \frac{\tilde{I}_\Sigma(\kappa)\, \mathrm{EOTF}^*(\kappa;\, D)}{|\mathrm{EOTF}(\kappa;\, D)|^2 + \delta^2} \right\} $$

$$ \int dz_0\, z_0\, O(\rho, z_0) = \frac{D}{2}\, \mathrm{FT}^{-1}\!\left\{ \frac{\tilde{I}_\Delta(\kappa)\, \mathrm{MOTF}^*(\kappa;\, D)}{|\mathrm{MOTF}(\kappa;\, D)|^2 + \delta^2} \right\} $$

$$ \delta(\kappa) = \begin{cases} \delta_1 & (\kappa > \kappa_c) \\ \delta_0 & (\kappa < \kappa_c) \end{cases} $$

$$ Z_u(\rho) = \frac{D}{2}\, \frac{I_\Delta(\rho)}{I_\Sigma(\rho)} $$
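Putting the pieces together, the depth-recovery pipeline (sum and difference frames, Wiener-type inversion with a two-parameter δ(κ), and the ratio Z(ρ)) can be sketched as follows. The helper names (`wiener`, `two_param_delta`, `depth_map`) and the trivial all-ones transfer functions used in the sanity check are assumptions of this sketch, not the paper's calibrated EOTF/MOTF:

```python
import numpy as np

def wiener(img, otf2d, delta):
    """Regularized inversion: FT^-1{ I~ OTF* / (|OTF|^2 + delta^2) }."""
    I = np.fft.fft2(img)
    return np.real(np.fft.ifft2(I * np.conj(otf2d) / (np.abs(otf2d) ** 2 + delta ** 2)))

def two_param_delta(shape, pixel_um, kappa_c, delta0, delta1):
    """Piecewise regularization delta(kappa): delta0 below the cutoff
    kappa_c, delta1 above it (the two-parameter scheme)."""
    ky = np.fft.fftfreq(shape[0], d=pixel_um)
    kx = np.fft.fftfreq(shape[1], d=pixel_um)
    kappa = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    return np.where(kappa > kappa_c, delta1, delta0)

def depth_map(I_plus, I_minus, D, eotf2d, motf2d, delta):
    """Local axial position Z(rho) from one ramp-up and one ramp-down frame."""
    I_sum = I_plus + I_minus                       # ordinary EDOF image
    I_dif = I_plus - I_minus                       # depth-weighted image
    num = (D / 2) * wiener(I_dif, motf2d, delta)   # ~ integral of z0 * O
    den = wiener(I_sum, eotf2d, delta)             # ~ integral of O
    return num / np.maximum(den, 1e-12)

# Sanity check with trivial transfer functions (OTF = 1, an idealization):
# a thin object at depth z0 produces frames weighted by M±(z0) = 1/2 ± z0/D,
# so the recovered depth map should equal z0 everywhere.
D, z0 = 60.0, 12.0
obj = np.ones((32, 32))
I_p = (0.5 + z0 / D) * obj
I_m = (0.5 - z0 / D) * obj
ones = np.ones_like(obj)
Z = depth_map(I_p, I_m, D, ones, ones, delta=0.0)
print(Z.mean())  # -> close to 12.0

# Unregularized shortcut Z_u(rho) = (D/2) * I_Delta / I_Sigma
Z_u = (D / 2) * (I_p - I_m) / (I_p + I_m)
```

In practice `eotf2d` and `motf2d` would be the measured or modeled EOTF and MOTF sampled on the camera's 2D frequency grid, and `delta` would come from `two_param_delta`.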