Axial motion estimation and correction for simultaneous multi-plane two-photon calcium imaging

Open Access

Abstract

Two-photon imaging in behaving animals is typically accompanied by brain motion. For functional imaging experiments, for example with genetically encoded calcium indicators, such brain motion induces changes in fluorescence intensity. These motion-related intensity changes, or motion artifacts, typically cannot be separated from neural activity-induced signals. While lateral motion, within the focal plane, can be corrected by computationally aligning images, axial motion, out of the focal plane, cannot easily be corrected. Here, we developed an algorithm for axial motion correction for non-ratiometric calcium indicators that takes advantage of simultaneous multi-plane imaging. Using temporally multiplexed beams, recording simultaneously from at least two focal planes at different z positions, and recording a z-stack for each beam as a calibration step, the algorithm separates motion-related and neural activity-induced changes in fluorescence intensity. The algorithm is based on a maximum likelihood optimization approach; it assumes (as a first order approximation) that no distortion of the sample occurs during axial motion and that neural activity increases fluorescence uniformly along the optical axis in each region of interest. The developed motion correction approach allows axial motion estimation and correction at high frame rates for isolated structures in the imaging volume in vivo, such as sparse expression patterns in the fruit fly brain.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Using optical microscopy for investigating neural circuits in vivo, and particularly in behaving animals, is complicated by movement of the brain with respect to the focal plane [1–5]. In many preparations, such as mice or fruit flies, movement is reduced by mechanically stabilizing the brain, which, however, often still leaves residual motion [2] and additionally can be invasive [6,7].

In in vivo preparations, the brain moves in all three spatial dimensions, laterally, within the focal plane, as well as axially, out of the focal plane. Since the lateral resolution of the microscope is higher than the axial resolution, lateral motion is easier to detect [5]. Lateral motion can be corrected by post-processing with a range of different methods, resulting in sequences of images aligned to a common field of view [1–3,5,8,9].

Motion in the axial direction, however, usually cannot be corrected using post-processing. Axial motion leads to variations in the recorded fluorescence intensity due to changes in sample excitation. Generally, such changes cannot be disentangled from intensity fluctuations of the indicator resulting from neural activity [4]. Additionally, sample movement, which for example in mice lies in the range of a few micrometers [2,4], can exceed the focal range of the two-photon microscope [10] and therefore lead to loss of information [11].

Algorithms similar to those used for lateral motion correction have also been applied to time series of volume images (z-stacks). For small axial displacements, two-color labeling can be used for motion correction (see [12] for a recent example of this approach). Recording time series of z-stacks with sufficient axial resolution, however, limits the frame rate in each focal plane [13,14]. Faster axial correction speeds can be achieved by actively tracking moving brain samples. Taking advantage of a reference signal, for example a reflected beam [15], behavioral parameters [16], or fluorescent fiducials such as cell bodies [4,17] or fluorescent beads [4], the focal plane or additionally the region of interest (ROI) can be updated in a closed-loop configuration. Repositioning of the ROI at high frame rates and in fast control cycles, however, requires specialized hardware allowing random access scanning [4]. Additionally, introducing fiducials, typically fluorescent beads, is often impractical and invasive, for example when imaging in the brain of Drosophila. Similarly, naturally occurring fiducials such as cell bodies are often not present in the imaging volume. Further, often only a single fluorescent dye, for instance a non-ratiometric calcium indicator, is used.

As an alternative to recording volume images as z-stacks, simultaneous multi-plane imaging allows recording a limited number of focal planes at the same time [18–24]. Similar to imaging in a single focal plane, multi-plane recordings are still subject to axial motion artifacts. However, the correlated intensity changes that sample motion induces across simultaneously recorded focal planes allow inferring motion information. This has been used for tracking fluorescent beads using temporally multiplexed and axially offset beams [25–27].

Here, we develop an approach for axial motion estimation and correction in the brain for two-photon functional imaging with non-ratiometric calcium indicators. The approach does not require fiducials and can be applied to isolated neural structures inside the imaging volume. The technique relies on an optimization algorithm that estimates axial brain motion and disentangles motion-related and neural activity related fluorescence changes based on simultaneous imaging in at least two focal planes. The algorithm is demonstrated using simulations with two and four simultaneously recorded focal planes, as well as validated using in vivo imaging in behaving fruit flies.

2. Results

2.1 Outline of motion correction approach

In simultaneous multi-plane imaging, motion along the axial direction displaces the sample at the same time in all focal planes, which leads to correlated intensity changes in the different planes. For functional imaging with calcium indicators, intensity changes due to sample motion are additionally confounded with neural activity related intensity changes. For disentangling these two contributions we developed an optimization based correction approach.

The experimental setup is shown in Fig. 1(B) and was previously described [24]. Simultaneous two-plane imaging was implemented with two beams, temporally offset by 6 ns and recorded independently using temporal multiplexing [18]. The axial profiles of the two beams are shown in Fig. 1(C).


Fig. 1. Approach and setup for motion estimation and correction. A Outline of method: 1) Calibration step: two stacks of the sample are recorded simultaneously in two different, axially offset focal planes. 2) Fluorescence intensity of the sample is recorded simultaneously in two planes while the sample moves in axial direction ($z$-axis). 3) Changes in intensities over time in each plane have two contributions: axial motion as well as neural activity. 4) The algorithm uses the recorded stacks to estimate axial motion of the sample from the intensities recorded in the two planes. 5) Changes in fluorescence, $\Delta F/F$, are corrected to remove the contribution of axial motion, yielding motion corrected, neural activity related fluorescence changes. B Optical setup: two Gaussian beams are temporally offset by 6 ns, allowing simultaneous imaging in two different planes using temporal multiplexing. C Normalized profiles of the two beams along the $z$-axis fitted with Gaussian functions.


The different steps of the correction approach are outlined in Fig. 1(A). First, a calibration stack of the sample is recorded with the two temporally multiplexed beams (Fig. 1(A), (1)). This stack is averaged over 10 recordings (see Methods for details) so that intensity variations due to motion are averaged out. For subsequent combined functional imaging and motion estimation, the sample is then approximately centered between the two beams such that it is always visible in both focal planes (Fig. 1(A), (2)). After recording a time series (Fig. 1(A), (3)) and computationally removing lateral motion, the correction algorithm extracts axial sample motion (Fig. 1(A), (4)). Using the estimated axial motion and the z-stacks, the intensity recorded by each beam at each focal plane is finally corrected for motion (Fig. 1(A), (5)).

The algorithm does not require a second activity-independent label for motion estimation, such as a fluorescent bead of a different color. It relies, however, on the assumption that neural activity induced fluorescence changes occur with a constant gain factor from baseline in the axial direction (see Supplement 1 for an illustrative example). Additionally, the method requires that the structure of interest is always visible in both focal planes and isolated, meaning that no other structures move in and out of the imaging volume. Depending on the sample and its typical motion, the separation between the focal planes as well as the axial width of the beams therefore need to be adjusted.
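To make this assumption concrete, consider a minimal model of the recorded intensities (the notation here is ours, introduced only for illustration; see Methods for the full derivation). If ROI $i$ has baseline fluorophore density $\rho_i(z)$, beam $b$ has axial excitation profile $P_b(z)$, and neural activity scales fluorescence by a gain $g_i(t)$ that is constant along $z$, then

$$I_{b,i}(t) = g_i(t)\int \rho_i\big(z - z_s(t)\big)\, P_b(z)\, dz = g_i(t)\, I_{b,i}^{(stack)}\big(z_s(t)\big), \qquad b=1,2,$$

where $z_s(t)$ is the axial position of the sample. The ratio $I_{1,i}(t)/I_{2,i}(t) = I_{1,i}^{(stack)}(z_s(t))/I_{2,i}^{(stack)}(z_s(t))$ is then independent of the activity gain and depends only on the axial position, which is what allows separating motion-related from activity-related intensity changes.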

The algorithm finds and corrects axial motion by optimizing a cost function. The cost function is derived and explained in detail in Methods in several steps. First, an analytical approximation for a one-dimensional sample is developed to illustrate the workings of the algorithm before introducing the general case with multiple ROIs. Both the one-dimensional and the full algorithm are demonstrated in simulations. We then test the algorithm experimentally. First, we estimate controlled focal plane motion in a sample labeled with green fluorescent protein. Then, in addition to motion, we also estimate controlled variations in fluorescence intensity. Next, we correct neural activity recorded with a genetically expressed calcium indicator for controlled motion. Finally, the algorithm is applied for measuring and correcting actual brain motion.

2.2 Motion estimation and correction algorithm

The motion estimation and correction algorithm is shown schematically in Fig. 1(A) and proceeds along the following steps:

  • 1) First, we recorded a total of $10$ $z$-stacks of the sample with each beam, using the fast-$z$ mode in ScanImage, which records $z$-stacks by continuously acquiring frames while a piezoelectric actuator moves the objective linearly along the $z$-axis. The $z$-stacks were obtained over $100 \mu m$ along the $z$-axis in steps of $0.25 \mu m$, at a resolution of $256 \times 256$ pixels and a $60$ Hz frame rate. Stacks were recorded simultaneously with the two beams. The time to record the 10 stacks was approximately $67$ seconds.
  • 2) We recorded time series of the sample simultaneously in two focal planes at $60$ Hz, with the same resolution and zoom as the stacks, providing a total of $T$ frames for each beam. Lateral motion correction (i.e., along the $x$- and $y$-axis) was performed using the phase correlation algorithm from Skimage [28] in Python (see the registration sketch after this list). The phase correlation algorithm computes the relative lateral offset between two images from the phase shift between their Fourier transforms [29]. We defined a template as the average of the first $10$ frames of the two-plane time series recordings and computed the offset of each subsequent frame in the time series with respect to this template [9,24]. Additionally, all frames in the stacks recorded in step 1 were similarly aligned with respect to their respective templates, and the $10$ aligned stacks for each beam were finally averaged.
  • 3) We defined $N=32$ regions of interest (ROIs) corresponding to the structural arrangement of wedge neurons [24]. The intensity of each ROI $i$ was computed for each beam, $I_{1,i}(t)$ and $I_{2,i}(t)$, at every recorded frame, $t=1,\ldots, T$, by summing over all pixels within the ROI. We also computed the intensity of each ROI $i$ in each averaged $z$-stack from step 2, yielding the profile of each ROI $i$ along the $z$-axis in $400$ slices for each beam, $I_{1,i}^{(stack)}(z)$ and $I_{2,i}^{(stack)}(z)$.
  • 4) For each frame $t$, we estimated the axial motion of the sample. First, we computed the log-likelihood of the sample being at any $z$-position in the stacks, described by Eq. (1) and derived from Eq. (35). Then we filtered the log-likelihood over time with a Gaussian kernel $f(t, \sigma_t)$ (Eq. (20)) and estimated the axial motion of the sample, $\hat{z}(t)$, as the $z$ position that maximizes Eq. (2). An estimation error, $E(t)$, was additionally computed using Eq. (3).
    $$\begin{array}{c} L(z,t) =\\ -\frac{N}{2}\log(2\pi) - \frac{1}{2}\sum\limits_{i=1}^N\log\Big( 2 \frac{(I_{1,i}^{(stack)}(z))^2}{(I_{2,i}^{(stack)}(z))^3} \Big) - \sum\limits_{i=1}^{N} \frac{(I_{2,i}^{(stack)}(z))^3}{(I_{1,i}^{(stack)}(z))^2} \Big( \frac{I_{1,i}(t)}{I_{2,i}(t)} - \frac{I_{1,i}^{(stack)}(z)}{I_{2,i}^{(stack)}(z)} \Big)^2. \end{array}$$
    $$L_f(z, t) = \sum\limits_{t'=0}^T f(t' - t, \sigma_t)L(z, t'),$$
    $$E(t) = \frac{1}{N}\sum\limits_{i=1}^N\Big( \frac{I_{1,i}(t)}{I_{2,i}(t)} - \frac{I_{1,i}^{(stack)}(\hat{z}(t))}{I_{2,i}^{(stack)}(\hat{z}(t))} \Big)^2.$$
  • 5) The motion estimation is used to remove axial motion-related changes in the measured intensities $I_{1,i}(t)$ and $I_{2,i}(t)$, using Eq. (4). Finally, the corrected change in fluorescence in each ROI $i$, $\Delta F/F_i$, is computed according to Eq. (5).
    $$\begin{cases} I_{1,i}^{cor}(t) = \frac{I_{1,i}(t)}{I_{1,i}^{(stack)}(\hat{z}(t))}\\ \\ I_{2,i}^{cor}(t) = \frac{I_{2,i}(t)}{I_{2,i}^{(stack)}(\hat{z}(t))}. \\ \end{cases}$$
    $$\textrm{corrected} \quad \Delta F/F_i (t) = \frac{1}{2}\Big( \frac{I_{1,i}^{cor}(t) - F^0_{1,i}}{F^0_{1,i}} + \frac{I_{2,i}^{cor}(t) - F^0_{2,i}}{F^0_{2,i}} \Big) \quad \textrm{for}\; i=1,\ldots,N.$$
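A minimal sketch of the lateral registration in step 2), using the phase correlation implementation from scikit-image cited above (the scikit-image and SciPy calls are real; the function and variable names around them are our own, chosen for illustration):

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_to_template(frames, n_template=10):
    # frames: (T, H, W) image time series from one beam.
    frames = np.asarray(frames, dtype=float)
    # Template: average of the first n_template frames, as described in step 2).
    template = frames[:n_template].mean(axis=0)
    aligned = np.empty_like(frames)
    offsets = np.zeros((frames.shape[0], 2))
    for i, frame in enumerate(frames):
        # Lateral offset between template and frame from the phase shift
        # of their Fourier transforms (phase correlation).
        (dy, dx), _, _ = phase_cross_correlation(template, frame)
        offsets[i] = (dy, dx)
        aligned[i] = nd_shift(frame, (dy, dx))  # move frame onto the template
    return aligned, offsets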

The previous steps are also summarized in Algorithm 1.


Algorithm 1. Motion estimation and correction algorithm
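To illustrate steps 4) and 5), the following is a minimal NumPy sketch of Eqs. (1)–(5). Array layouts, function names, and the baseline definition (mean of the lowest 10% of values, following the baseline used for Fig. 4) are our own assumptions for illustration; the published implementation is available in the repository linked in Section 2.9.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_and_correct(I1, I2, S1, S2, sigma_t=3.0):
    # I1, I2: (N, T) ROI intensities measured by beams 1 and 2.
    # S1, S2: (N, Z) ROI intensities in the averaged z-stacks of each beam.
    N, T = I1.shape
    r_meas = I1 / I2                                 # measured ratios, (N, T)
    r_stack = S1 / S2                                # calibration ratios, (N, Z)
    log_var = np.log(2.0 * S1**2 / S2**3)            # log-variance term of Eq. (1)
    w = S2**3 / S1**2                                # quadratic weights of Eq. (1)

    # Log-likelihood L(z, t) of every stack slice z for every frame t, Eq. (1)
    diff = r_meas[:, None, :] - r_stack[:, :, None]  # (N, Z, T)
    L = (-0.5 * N * np.log(2.0 * np.pi)
         - 0.5 * log_var.sum(axis=0)[:, None]
         - (w[:, :, None] * diff**2).sum(axis=0))    # (Z, T)

    # Temporal smoothing, approximating the kernel sum of Eq. (2)
    L_f = gaussian_filter1d(L, sigma=sigma_t, axis=1)

    z_hat = np.argmax(L_f, axis=0)                   # motion estimate, (T,)
    E = ((r_meas - r_stack[:, z_hat])**2).mean(axis=0)  # error, Eq. (3)

    # Motion correction of each beam, Eq. (4)
    I1c = I1 / S1[:, z_hat]
    I2c = I2 / S2[:, z_hat]

    # Corrected dF/F, Eq. (5), with baseline F0 from the lowest 10% of values
    k = max(1, T // 10)
    F0_1 = np.sort(I1c, axis=1)[:, :k].mean(axis=1, keepdims=True)
    F0_2 = np.sort(I2c, axis=1)[:, :k].mean(axis=1, keepdims=True)
    dFF = 0.5 * ((I1c - F0_1) / F0_1 + (I2c - F0_2) / F0_2)
    return z_hat, E, dFF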

2.3 Motion correction in simulated data for a single voxel

The motion correction algorithm was first verified using simulated data (Fig. 2). For this purpose, we defined an arbitrary one-dimensional sample extended along the optical axis (z-axis, Fig. 2(A), top). We additionally defined two Gaussian beams with different beam profiles and intensities, similar to the ones used in the experiment (Fig. 2(A), top). The profiles of the beams along the $z$-axis must cover the axial range of sample motion, keeping the sample always visible with both beams. Note that the use of different beam profiles is not a requirement for the motion correction algorithm; the two profiles can be identical as long as they are offset along the $z$-axis.


Fig. 2. Motion correction in a simulated single ROI. A Top: axial intensity profiles of the two simulated beams and the simulated sample (ROI) at time $t = 0$. Bottom: two stacks recorded from the sample at time $t = 0$ with the two beams. B Example of axial motion and activity of the ROI. Sample profile along the $z$-axis at time $t=0$ is shown in green. At time $t>0$, the offset of the ROI along the $z$-axis changes while its activity increases. C Simulation of ROI activity and motion along the $z$-axis over time. Top row, left side: beam profiles. Right side: sample moves along z-axis over time. Color indicates sample neural activity. Second row: resulting intensities measured by each beam have two different contributions, motion and activity. Third row: comparison of actual and estimated axial motion. Fourth row: cost function error of the motion estimation algorithm (see Methods for details). Bottom row: actual and estimated, corrected changes in neural activity induced $\Delta F/F$.


At the beginning of the simulated experiment, a $z$-stack is computed for each beam (Fig. 2(A), bottom, see Methods for details). The sample is assumed to move along the z-axis as well as change its intensity due to neural activity (Fig. 2(B)). We assume that activity changes occur homogeneously along the optical axis, that is, with a multiplicative factor with respect to the fluorescence baseline (see Supplement 1).

A simulated time series of sinusoidal axial sample motion combined with stochastic changes in neural activity is shown in Fig. 2(C), top row. The resulting intensity time series, recorded simultaneously in two focal planes, are shown in Fig. 2(C), second row. Based on the recorded intensity time series, the optimization algorithm accurately estimates both sample motion and the motion-corrected, activity-related changes in fluorescence intensity (Fig. 2(C), third and bottom rows).
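A minimal sketch of such a single-ROI simulation (all parameter values here are illustrative choices of ours; the parameters actually used in the simulations are given in Methods):

import numpy as np

rng = np.random.default_rng(0)
z = np.arange(0.0, 100.0, 0.25)          # axial axis in microns, 400 slices

# Gaussian axial excitation profiles of the two beams, offset along z
beam1 = 1.0 * np.exp(-0.5 * ((z - 45.0) / 6.0) ** 2)
beam2 = 0.8 * np.exp(-0.5 * ((z - 55.0) / 8.0) ** 2)

# One-dimensional sample (ROI) density centered between the two beams
rho = np.exp(-0.5 * ((z - 50.0) / 4.0) ** 2)

# z-stacks: beam profile swept across the sample density
S1 = np.convolve(rho, beam1, mode="same")
S2 = np.convolve(rho, beam2, mode="same")

# Sinusoidal axial motion combined with a stochastic activity gain
t = np.arange(2000)
z_motion = 5.0 * np.sin(2.0 * np.pi * t / 500.0)            # microns
gain = 1.0 + np.clip(np.cumsum(rng.normal(0.0, 0.02, t.size)), 0.0, 2.0)

# Measured intensities: by the model the algorithm assumes, an axial
# displacement corresponds to evaluating the calibration stack at a
# shifted slice index, scaled by the activity gain, plus noise
idx = np.clip(np.round((50.0 + z_motion) / 0.25).astype(int), 0, z.size - 1)
I1 = gain * S1[idx] + rng.normal(0.0, 0.01 * S1.max(), t.size)
I2 = gain * S2[idx] + rng.normal(0.0, 0.01 * S2.max(), t.size)

Feeding I1 and I2 (reshaped to single-ROI arrays of shape (1, T)) together with S1 and S2 into the estimation sketch above should recover z_motion and gain up to noise and the stack's index convention.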

2.4 Motion correction in simulated data for multiple voxels

The algorithm was further validated with simulated data with multiple one-dimensional voxels, representing regions of interest (ROIs) defined on a three-dimensional sample (see next section and Methods for details). For this, neural activity in a ring attractor was simulated, similar to experimentally observed data [24,30]. A random distribution of the axial positions of the different ROIs was assumed, reflecting structural variability (Fig. 3(A), top row). Neural activity in this kind of network is concentrated in a bump, and we here assumed that the bump of activity moves at constant velocity around the ring attractor. To simulate brain motion, the sample was oscillated with respect to the two focal planes along the axial direction (Fig. 3(A), bottom). As for the case of a single voxel, a $z$-stack is computed using both beams. Intensity measurements from both beams are subject to noise, similar to experimental recordings (see Methods for details). The algorithm assumes that all ROIs undergo the same axial motion; therefore, all ROIs contribute to estimating motion (Fig. 3(C), first row).


Fig. 3. Motion correction in a simulated ring attractor with $32$ ROIs. A Top: normalized intensity profiles of the ROIs, which are simulated with varying lengths and offsets along the $z$-axis. A Bottom: ROI 1 moves along the $z$-axis over time while its activity ($\Delta F/F$) changes (see C for all ROIs). B Top: stacks of the ROIs obtained with the first beam. Bottom: same for the second beam. C Simulation of ROIs with axial motion and activity changes over time. All ROIs move together, while activity changes independently in each ROI. First row: comparison of actual and estimated axial motion of ROIs. Second row: cost function error of the motion estimation algorithm. Third row: measured changes in fluorescence with combined activity changes and axial motion. Fourth row: changes in fluorescence after motion correction. Fifth row: actual changes in fluorescence due to activity. Bottom row: comparison of the averaged measured, corrected and actual bump amplitudes in the ROIs (see Methods for details on all steps).


The bump of activity traveling at constant velocity across the different ROIs is shown before correction in Fig. 3(C), third row, after correction in Fig. 3(C), fourth row, and compared with the actual bump activity in Fig. 3(C), fifth row. As also seen in the average over all ROIs (Fig. 3(C), bottom row), the algorithm accurately corrects motion artifacts in the estimation of fluorescence changes.
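The bump activity itself can be generated, for example, with a rotating von Mises profile (a sketch; the exact bump model and noise parameters used in the paper are given in Methods):

import numpy as np

N, T = 32, 2000
angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # ROI positions on the ring

def bump(t, omega=2.0 * np.pi / 1000.0, kappa=4.0, amp=1.5):
    # dF/F of a von Mises shaped bump rotating at constant angular velocity
    return amp * np.exp(kappa * (np.cos(angles - omega * t) - 1.0))

activity = np.stack([bump(t) for t in range(T)], axis=1)   # (N, T) dF/F traces
gain = 1.0 + activity     # multiplicative gain applied to each ROI's baseline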

2.5 Estimation and correction of controlled sample motion in vivo

The motion correction algorithm was experimentally tested by controlled variation of the focal plane in neurons labeled with green fluorescent protein (GFP, Figs. 4 and 5). For this, GFP was expressed in wedge neurons of the head direction system of the fruit fly [30] (R60D05-GAL4, Fig. 4(A), left side). The fly’s proboscis was fixed with wax to prevent brain motion [7]. To simulate sample motion, the axial position of the microscope objective was modulated sinusoidally by up to $10 \mu m$ using a piezoelectric objective actuator (P-725.4CD, Physik Instrumente) at different frequencies ($1$ Hz, $0.5$ Hz, $0.2$ Hz, $0.1$ Hz; see Methods for details).


Fig. 4. Axial motion estimation and correction in GFP labeled neurons. A Left side: average over all frames of the experiment. Right side: $32$ ROIs are defined. B Stacks recorded from each ROI with first (top) and second (bottom) beam, respectively. C Top row: actual and estimated axial motion. Second row: cost function error of motion estimation (see Methods for details). Third row: measured fluorescence changes in each ROI. Fourth row: corrected fluorescence changes in each ROI. Fifth row: difference between corrected and measured fluorescence changes for each ROI. Bottom row: measured, corrected, and expected changes in fluorescence for ROI 1 (representative for all ROIs).



Fig. 5. Axial motion estimation and correction in GFP labeled neurons at different laser powers, simulating changes in neural activity. The definition of ROIs and the recorded stacks are the same as in Fig. 4(A) and (B), respectively. Top row: comparison between actual and estimated axial motion. Second row: cost function error of motion correction algorithm. Third row: measured changes in fluorescence in each ROI. Fourth row: corrected changes in fluorescence in each ROI. Fifth row: difference between corrected and measured changes in fluorescence for each ROI. Bottom row: comparison between measured, corrected and expected changes in fluorescence for ROI 1 (see Methods for details).


The motion correction algorithm was applied for estimating axial motion from data simultaneously recorded in two focal planes as follows. At the beginning of the experiment, a total of $10$ $z$-stacks was recorded with both beams. Each $z$-stack contained a total of $400$ slices covering $100 \mu m$ along the $z$-axis in steps of $0.25 \mu m$. The resolution of each slice was $256 \times 256$ pixels and images were recorded at $60$ Hz. Even though there was no lateral brain motion during these experiments, the piezoelectric objective actuator produced lateral displacements at different $z$-positions. Therefore, we defined a template, consisting of the average of the first $10$ images acquired during the experiment, and all images, including those of the z-stacks, were aligned with respect to this template using a phase correlation algorithm (as implemented in Skimage [28], see Methods). After alignment, the $10$ $z$-stacks were averaged and a total of $N=32$ ROIs was defined along the dendritic toroidal structure as described previously [24] (Fig. 4(A), right side). The intensities recorded in the z-stacks were further filtered using a median filter with a window size of $20$ slices to remove noise. The filtered intensities for all ROIs are shown in Fig. 4(B) for both beams.

Axial motion in the time series data was estimated by computing the value of the log-likelihood function (Eq. (2)) at all $400$ slices of the z-stacks, with a temporal smoothing window of $\sigma_t = 3$ frames. Then, the slice $\hat{z}(t)$ that maximized the log-likelihood function was determined. The axial motion estimate was obtained in steps of $0.25 \mu m$, the step size of the recorded stacks. The resulting resolution of the axial motion estimate, however, depends on the profile of the $z$-stack, which in turn depends on the ROI densities and beam profiles. For example, Supplement 1 shows a simulation for a single voxel defined with a very wide rectangular ROI density along the $z$-axis, producing $z$-stacks without axial structure. In this case, the motion correction algorithm cannot estimate the precise axial position of the sample around the center of the ROI (see Supplement 1, third row), because the ratio between the stacks, $I_1^{(stack)}(z) / I_2^{(stack)}(z)$, is constant there. The recorded stack intensities themselves, however, are also constant in this range, so the activity correction (Supplement 1, fourth row) is not affected by the inaccuracy of the estimated axial motion.

As seen in Fig. 4(C), top row, the algorithm accurately estimates motion. The cost function error (Fig. 4(C), second row, calculated using Eq. (3)) indicates that the accuracy of the estimate is similar across all positions.

The fluorescence intensities were corrected using Eq. (23), and the corrected $\Delta F/F$ was computed using Eq. (5) (Fig. 4(C), fourth row). This was compared to the measured $\Delta F/F$, obtained from Eq. (47) (Fig. 4(C), third row). The strong changes in measured fluorescence intensity, or motion artifacts, across ROIs in Fig. 4(C), third row, are removed after correction, and all ROIs show constant fluorescence profiles, as expected for GFP labeled neurons (Fig. 4(C), fourth row). This is also seen in the average intensities for a representative ROI (Fig. 4(C), bottom row). The offset (from $0$) in the corrected $\Delta F/F$ values is due to the arbitrary definition of the baseline in terms of the lowest $10\%$ of intensities (see Supplement 1 for details).

2.6 Estimation and correction of controlled sample motion and intensity variations

To estimate motion-related and activity-induced intensity changes at the same time, we varied the focal plane (as described above) together with the laser power, resulting in different levels of fluorescence intensity in a GFP labeled sample (Fig. 5). Four epochs of sinusoidal focal plane modulation at a frequency of $1$ Hz were combined with step-wise increases in laser power, simulating different neural activity levels. Recording of time series and z-stacks, image preprocessing, as well as motion estimation and correction were performed as described above.

Motion was again estimated accurately (Fig. 5, top row). Additionally, the algorithm correctly estimated the different motion-corrected intensity levels (Fig. 5, third to fifth rows). This is also seen in the average changes across all regions of interest (Fig. 5, bottom row). As expected, the cost function error (Fig. 5, second row) decreased with the increased signal-to-noise ratio resulting from the stronger fluorescence signal. Motion estimation always occurs on a single-frame basis (smoothed with a Gaussian kernel with a standard deviation of $3$ frames, see Methods), so that faster variations in intensity would result in similar estimates.

2.7 Estimation and correction of controlled motion for in vivo two-photon calcium imaging

We next corrected controlled motion of the focal plane in a sample expressing the calcium indicator jGCaMP8f [31] in the head direction system of the fly [30]. The fly walked in a virtual reality setup on an air supported ball [24] and the proboscis of the fly was fixed to prevent uncontrolled sample motion. Controlled focal plane motion was again generated by actuating the objective by up to $10 \mu m$ at different frequencies ($1$ Hz, $0.5$ Hz, $0.2$ Hz, $0.1$ Hz). Images (z-stacks and time series) were recorded, corrected for lateral motion, and filtered as described above, and motion correction proceeded along the same steps (Eqs. (2), (23), (5), and (47)).

Again, $N=32$ ROIs were defined (Fig. 6(A), right side) and Fig. 6(B) shows the intensity profiles of all ROIs for both beams.


Fig. 6. Motion estimation and correction in wedge neurons labeled with jGCaMP8f during controlled axial motion. A Left side: average over all frames. Right side: definition of $32$ ROIs. B Z-stacks recorded by the first beam (top) and second beam (bottom). C First to fifth row: same as Fig. 4(C). Bottom row: measured and corrected fluorescence signals (bump amplitude, see Methods for details).


The estimated axial motion is compared with the actual motion of the piezoelectric objective actuator in Fig. 6(C), first row. The measured $\Delta F/F$ as well as the corrected $\Delta F/F$ values are shown in Fig. 6(C), third and fourth rows. The difference between the corrected and measured $\Delta F/F$ is shown in Fig. 6(C), fifth row. The estimation error (Eq. (3)) indicates that motion was tracked with similar quality across all axial positions. The last row in Fig. 6(C) compares corrected and measured bump amplitudes (see Methods), which were smoothed using a Gaussian filter with a kernel standard deviation of $5$ frames. Motion correction was able to remove motion artifacts, resulting in a smooth movement of the bump across the ellipsoid body, as expected [30].

2.8 Estimation and correction of brain motion for in vivo two-photon calcium imaging in behaving animals

Finally, motion was corrected in a naturally moving brain during in vivo calcium imaging in a fly walking in a virtual reality setup [7,24], without fixing the fly’s proboscis or removing muscles in the head. Motion estimation and correction proceeded as described in the previous section.

We measured brain motion of up to $10 \mu m$ (Fig. 7(C), top row), and corrections induced changes in the values of $\Delta F/F$ of up to $40\%$, as seen in the difference (Fig. 7(C), sixth row) between uncorrected and corrected $\Delta F/F$ (Fig. 7(C), fourth and fifth rows, respectively). The bump amplitudes of the corrected and measured $\Delta F/F$ are shown in Fig. 7, bottom row; they were computed using Eq. (7) and smoothed using a Gaussian filter with a standard deviation of $5$ frames.
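This smoothing corresponds to a one-dimensional Gaussian filter over frames, for example (a sketch; the amplitude measure below is only a placeholder, since the actual bump amplitude is defined by Eq. (7) in Methods):

import numpy as np
from scipy.ndimage import gaussian_filter1d

dff = np.random.rand(32, 3600)        # placeholder for corrected dF/F, (N, T)
bump_amplitude = dff.max(axis=0) - dff.min(axis=0)   # placeholder amplitude
bump_amplitude_smooth = gaussian_filter1d(bump_amplitude, sigma=5.0)  # sigma in frames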


Fig. 7. Motion estimation and correction in wedge neurons labeled with jGCaMP8f. A Left side: average over all frames. Right side: definition of $32$ ROIs. B Stacks recorded by the first beam (top) and second beam (bottom). C First to sixth row: same as Fig. 4(C), additionally including lateral motion estimated from computationally aligning frames (second row). Bottom row: measured and corrected fluorescence signals (bump amplitude, see Methods for details).


2.9 Algorithm for imaging with four simultaneously recorded focal planes

The developed technique can be extended to more than two focal planes (for example, four beams as in [19]). Supplement 1 shows a simulation with four axially separated beams. A single ROI was simulated for simplicity (Supplement 1, top) and the resulting $z$-stacks for all four beams are shown in Supplement 1, bottom. Using more than two beams allows, for example, increasing the axial resolution or the axial range.

The code of the axial motion correction algorithm, together with the simulations, can be found in the following git repository: https://gitlab.com/anflores/axial_motion_correction

3. Discussion

We developed a method for correcting motion artifacts in two-photon calcium imaging that result from axial sample movement. The method relies on simultaneous multi-plane imaging with at least two axially offset focal planes [18]. As verified with simulations (Figs. 2 and 3 and Supplement 1), by controlled motion of the focal plane (Figs. 4–6), as well as by controlled changes in fluorescence intensity (Fig. 5), the developed optimization algorithm can estimate and correct motion with non-ratiometric indicators in isolated structures in the imaging volume.

The method was applied for correcting motion artifacts that are observed when imaging in vivo in the brain of fruit flies. Since motion can be corrected without additional mechanical stabilization (Fig. 7), such as removing muscles or immobilizing the proboscis [7], the approach simplifies imaging in flies and improves the viability of the preparation. Since brain motion in the fly as observed here (about $10 \mu m$, see Fig. 7) lies in a similar range to that observed in other species [4], the technique could similarly be applied in other animals.

Compared with motion correction that relies on recording time series of z-stacks [13,14], the approach developed here benefits from high frame rates (for example, $60$ Hz at $256 \times 256$ pixels in each plane). Compared with closed-loop correction [4,15–17], the microscope does not require custom modifications and is compatible with multi-beam scanning as implemented in commercially available scanning software (ScanImage [14]). Further, the developed method does not use an independent tracking fiducial, such as a fluorescent bead or cell body.

Estimating motion directly from the sample labeled with the calcium indicator, however, leads to the following two conditions. First, changes in fluorescence due to calcium activity need to occur with a constant gain factor axially along the entire ROI (see Supplement 1 for an illustration). Given that controlled motion is accurately corrected in a sample expressing jGCaMP8f (Fig. 6), this is a valid approximation for the sample used here. Second, the structure of interest needs to be isolated in the imaging volume at all times, meaning that no (partial) structures can move in and out of the imaging volume. Since the algorithm relies on measuring intensity changes along the optical axis, such appearing or disappearing features would lead to erroneous corrections.

Further, the algorithm requires that each ROI is visible at all times in both focal planes. This condition can be satisfied by adjusting the separation and axial extension of the beams such that the moving object always stays within their axial range. Here, beam parameters were optimized for imaging with commonly used GAL4 fly lines labeling subsets of neurons in the central brain. We used two axially extended beams, since we imaged a relatively large structure; however, the technique could equally be applied to smaller structures with diffraction limited beams. Beam separation and parameters could additionally be adjusted on a sample-by-sample basis using, for example, electrically tunable lenses or spatial light modulators. The method can similarly be extended to more simultaneously recorded focal planes (see [32] for an overview of different multi-plane imaging approaches), as shown in simulations (Supplement 1), which could be used to extend the axial range or resolution. Instead of using simultaneously recorded planes, the method could also be applied with sequentially recorded planes [33,34], as long as the sample moves slowly compared with the time it takes to switch between focal planes. Additionally, the algorithm requires that lateral motion is corrected first using standard motion correction algorithms; reliable image alignment therefore requires sufficient intensity in each channel. Similarly, the ROIs must be defined over a sufficient number of pixels so that the intensity distribution in each ROI is approximately normal [35].

Overall, the developed method allows correction of lateral and axial brain motion in two-photon calcium imaging experiments from isolated structures in the imaging volume, such as sparse expression patterns in the brain of Drosophila, with non-ratiometric indicators.

Funding

Max-Planck-Gesellschaft; Max Planck Institute for Neurobiology of Behavior - caesar (MPINB).

Acknowledgements

We would like to thank Giacomo Bassetto for helpful discussions and comments on the algorithm and Ivan Vishniakou for comments on the manuscript.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. D. S. Greenberg and J. N. Kerr, “Automated correction of fast motion artifacts for two-photon imaging of awake animals,” J. Neurosci. Methods 176(1), 1–15 (2009).

2. D. A. Dombeck, A. N. Khabbaz, F. Collman, T. L. Adelman, and D. W. Tank, “Imaging large-scale neural activity with cellular resolution in awake, mobile mice,” Neuron 56(1), 43–57 (2007).

3. E. A. Pnevmatikakis and A. Giovannucci, “NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data,” J. Neurosci. Methods 291, 83–94 (2017).

4. V. A. Griffiths, A. M. Valera, J. Y. Lau, H. Roš, T. J. Younts, B. Marin, C. Baragli, D. Coyle, G. J. Evans, G. Konstantinou, T. Koimtzis, K. M. N. S. Nadella, S. A. Punde, P. A. Kirkby, I. H. Bianco, and R. A. Silver, “Real-time 3D movement correction for two-photon imaging in behaving animals,” Nat. Methods 17(7), 741–748 (2020).

5. C. Stringer and M. Pachitariu, “Computational processing of neural recordings from calcium imaging data,” Curr. Opin. Neurobiol. 55, 22–31 (2019).

6. V. Jayaraman and G. Laurent, “Evaluating a genetically encoded optical sensor of neural activity using electrophysiology in intact adult fruit flies,” Front. Neural Circuits 1, 3 (2007).

7. J. D. Seelig, M. E. Chiappe, G. K. Lott, A. Dutta, J. E. Osborne, M. B. Reiser, and V. Jayaraman, “Two-photon calcium imaging from head-fixed Drosophila during optomotor walking behavior,” Nat. Methods 7(7), 535–540 (2010).

8. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008).

9. A. Miri, K. Daie, R. D. Burdine, E. Aksay, and D. W. Tank, “Regression-based identification of behavior-encoding neurons during large-scale optical imaging of neural activity at cellular resolution,” J. Neurophysiol. 105(2), 964–980 (2011).

10. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990).

11. D. Soulet, A. Paré, J. Coste, and S. Lacroix, “Automated filtering of intrinsic movement artifacts during two-photon intravital microscopy,” PLoS One 8(1), e53942 (2013).

12. T. M. Ryan, A. J. Hinojosa, R. Vroman, C. Papasavvas, and L. Lagnado, “Correction of z-motion artefacts to allow population imaging of synaptic activity in behaving mice,” J. Physiol. 598(10), 1809–1827 (2020).

13. S. Aghayee, D. E. Winkowski, Z. Bowen, E. E. Marshall, M. J. Harrington, P. O. Kanold, and W. Losert, “Particle tracking facilitates real time capable motion correction in 2D or 3D two-photon imaging of neuronal activity,” Front. Neural Circuits 11, 56 (2017).

14. T. A. Pologruto, B. L. Sabatini, and K. Svoboda, “ScanImage: flexible software for operating laser scanning microscopes,” BioMed. Eng. Online 2(1), 13 (2003).

15. S. Laffray, S. Pagès, H. Dufour, P. De Koninck, Y. De Koninck, and D. Côté, “Adaptive movement compensation for in vivo imaging of fast cellular dynamics within a moving tissue,” PLoS One 6(5), e19928 (2011).

16. J. L. Chen, O. A. Pfäffli, F. F. Voigt, D. J. Margolis, and F. Helmchen, “Online correction of licking-induced brain motion during two-photon imaging with a tunable lens,” J. Physiol. 591(19), 4689–4698 (2013).

17. D. Karagyozov, M. M. Skanata, A. Lesar, and M. Gershow, “Recording neural activity in unrestrained animals with three-dimensional tracking two-photon microscopy,” Cell Rep. 25(5), 1371–1383.e10 (2018).

18. W. Amir, R. Carriles, E. E. Hoover, T. A. Planchon, C. G. Durfee, and J. A. Squier, “Simultaneous imaging of multiple focal planes using a two-photon scanning microscope,” Opt. Lett. 32(12), 1731–1733 (2007).

19. A. Cheng, J. T. Gonçalves, P. Golshani, K. Arisaka, and C. Portera-Cailliau, “Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing,” Nat. Methods 8(2), 139–142 (2011).

20. S. Weisenburger, F. Tejera, J. Demas, B. Chen, J. Manley, F. T. Sparks, F. M. Traub, T. Daigle, H. Zeng, A. Losonczy, and A. Vaziri, “Volumetric Ca2+ imaging in the mouse brain using hybrid multiplexed sculpted light microscopy,” Cell 177(4), 1050–1066.e14 (2019).

21. D. R. Beaulieu, I. G. Davison, K. Kılıç, T. G. Bifano, and J. Mertz, “Simultaneous multiplane imaging with reverberation two-photon microscopy,” Nat. Methods 17(3), 283–286 (2020).

22. J. Wu, Y. Liang, S. Chen, C.-L. Hsu, M. Chavarha, S. W. Evans, D. Shi, M. Z. Lin, K. K. Tsia, and N. Ji, “Kilohertz two-photon fluorescence microscopy imaging of neural activity in vivo,” Nat. Methods 17(3), 287–290 (2020).

23. A. Flores-Valle and J. D. Seelig, “Two-photon Bessel beam tomography for fast volume imaging,” Opt. Express 27(9), 12147–12162 (2019).

24. A. Flores-Valle, R. Honnef, and J. D. Seelig, “Automated long-term two-photon imaging in head-fixed walking Drosophila,” J. Neurosci. Methods 368, 109432 (2022).

25. E. P. Perillo, Y.-L. Liu, K. Huynh, C. Liu, C.-K. Chou, M.-C. Hung, H.-C. Yeh, and A. K. Dunn, “Deep and high-resolution three-dimensional tracking of single particles using nonlinear and multiplexed illumination,” Nat. Commun. 6(1), 7874 (2015).

26. C. Liu, Y.-L. Liu, E. Perillo, N. Jiang, A. Dunn, and H.-C. Yeh, “Improving z-tracking accuracy in the two-photon single-particle tracking microscope,” Appl. Phys. Lett. 107(15), 153701 (2015).

27. S. Hou, C. Johnson, and K. Welsher, “Real-time 3D single particle tracking: towards active feedback single molecule spectroscopy in live cells,” Molecules 24(15), 2826 (2019).

28. S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).

29. C. D. Kuglin, “The phase correlation image alignment method,” in Proc. Int. Conference Cybernetics Society (1975), pp. 163–165.

30. J. D. Seelig and V. Jayaraman, “Neural dynamics for landmark orientation and angular path integration,” Nature 521(7551), 186–191 (2015).

31. Y. Zhang, M. Rózsa, D. Bushey, J. Zheng, D. Reep, Y. Liang, G. J. Broussard, A. Tsang, G. Tsegaye, R. Patel, S. Narayan, J. X. Lim, R. Zhang, M. B. Ahrens, G. C. Turner, S. S.-H. Wang, K. Svoboda, W. Korff, E. R. Schreiter, J. P. Hasseman, I. Kolb, and L. L. Looger, “jGCaMP8 fast genetically encoded calcium indicators,” Janelia Research Campus (2020).

32. J. Wu, N. Ji, and K. K. Tsia, “Speed scaling in multiphoton fluorescence microscopy,” Nat. Photonics 15(11), 800–812 (2021).

33. R. Liu, N. Ball, J. Brockill, L. Kuan, D. Millman, C. White, A. Leon, D. Williams, S. Nishiwaki, S. de Vries, J. Larkin, D. Sullivan, C. Slaughterbeck, C. Farrell, and P. Saggau, “Aberration-free multi-plane imaging of neural activity from the mammalian brain using a fast-switching liquid crystal spatial light modulator,” Biomed. Opt. Express 10(10), 5059 (2019).

34. B. F. Grewe, F. F. Voigt, M. van’t Hoff, and F. Helmchen, “Fast two-layer two-photon imaging of neuronal cell populations using an electrically tunable lens,” Biomed. Opt. Express 2(7), 2035–2046 (2011).

35. E. Díaz-Francés and F. J. Rubio, “On the existence of a normal approximation to the distribution of the ratio of two independent normal random variables,” Stat. Papers 54(2), 309–323 (2013).
