Abstract
Studying dynamic biological processes, such as heart development and function in zebrafish embryos, often relies on multi-channel fluorescence labeling to distinguish multiple anatomical features, yet also demands high frame rates to capture rapid cell motions. A recently proposed method for imaging dynamic samples in transmission or reflection allows convenient switching between color imaging and a boosted frame rate by use of spectrally encoded, temporally modulated illumination sequences and a hue-encoded shutter (hue-encoded shutter method, HESM); however, the technique is not directly applicable in fluorescence microscopy, where the emitted light spectrum is mostly independent of the excitation wavelength. In this paper, we extend HESM to samples labeled with multiple fluorophores, whose emission signal can either be used to distinguish multiple anatomical features when imaged in multi-channel mode or, if the fluorophores are co-localized in a dynamic tissue, to increase the frame rate via HESM. We detail the steps necessary to implement this method in a two-color light-sheet microscope to image the beating heart of a zebrafish embryo. Specifically, we propose an adapted laser modulation scheme for illumination, identify caveats in choosing a suitable multi-color fluorophore labeling strategy, and derive an ℓ1-regularized reconstruction technique that is sufficiently robust to handle the low signal-to-noise ratio and labeling inhomogeneities in the fluorescence images at hand. Using the case of a beating heart in a zebrafish embryo, we experimentally show an increase in the frame rate by a factor of two while preserving the ability to image static features labeled in distinct channels, thereby demonstrating the applicability of HESM to fluorescence. With a suitable illumination setup and fluorescent labeling, the method could generalize to other applications where flexibility between multi-channel and high-speed fluorescence imaging is desirable.
For fluorophores that are not co-localized, the imaging system behaves like a conventional light-sheet microscope.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Fluorescence microscopy is a tool used in many biological studies. Fluorescent labels make it possible to have light emitted from specific biological features such as nuclei, membranes, or selected molecules [1]. In addition to the need for distinctly labeling anatomical, cellular, or subcellular features, many live biological processes are highly dynamic, and a high temporal resolution is required to study them [2]. Although a recently proposed method for imaging dynamic samples in transmission or reflection allows convenient switching between color imaging and a boosted frame rate by use of a spectrally encoded illumination and a hue-encoded shutter method (HESM), the technique is not directly applicable in fluorescence microscopy, where the emitted light spectrum is mostly independent of the excitation wavelength.
In this paper, we propose to extend the hue-encoded shutter method presented in [3] to fluorescence microscopy and to apply it to light-sheet microscopy [4]. The method in [3] increases the flexibility of microscopes by allowing rapid switching between color imaging and fast imaging, or both simultaneously within regions of interest. Briefly, that method uses a multi-color LED illumination source synchronized to a color camera. The scene (which is assumed to be monochromatic) is illuminated with a temporally varying hue at a sub-frame-rate frequency. The individual color frames are then processed to convert the color channels into multiple (monochrome) time frames. The monochrome requirement of [3] cannot be translated directly to fluorescence microscopy, as the spectral signature of the illumination is lost in the excitation-emission process at the heart of fluorescence. To overcome this limitation, we propose to use multiple co-localized fluorophore species with different emission spectra. An overview of the proposed method is presented in Fig. 1.
Improving temporal or spatial resolution of microscopes has been tackled by many different approaches. We briefly review some that are of particular interest or use related concepts.
Several temporal super-resolution methods directly capitalize on strong prior knowledge of the signal itself, for example that sample motion is repetitive [5,6] or that the signal is sparse in some basis [7–9]. Other approaches do not require prior knowledge of the signal. Bub et al. [10] used a modified camera with individually addressable pixels to stagger exposure times. This method is well suited to fluorescence microscopy, but requires a modified camera, which may not be widely available. Wagner et al. [11] presented a method to instantaneously acquire full volumes at isotropic resolution using two light-field imagers. While limited to small fields of view and offering a lower spatial resolution than light-sheet microscopy, their method shows great potential, as whole volumes can be acquired at the camera frame rate, despite requiring a dedicated setup.
Our method takes advantage of temporally patterned illumination. With illumination long known to be an essential factor in a microscope’s performance (as shown by Köhler [12]), the field of structured illumination microscopy has demonstrated the possibility of improving spatial resolution beyond the limits of the acquisition device [13–18]. These methods mostly target spatial resolution improvements, often at the cost of temporal resolution, since they require multiple modulated images to be merged into a single higher-resolution image. Methods that specifically seek to improve temporal resolution by taking advantage of rapidly modulated illumination sources include the coded illumination method of Gorthi et al. [19], who perform motion deblurring for fluorescence microscopy using the fluttered shutter principle [20]; this approach is limited to linear motion deblurring. Yet other approaches, such as that of Dillavou et al. [21], assume that the imaged sample is binary (either black or white) and use the dynamic range of the camera to drastically increase its frame rate. This strong assumption on the signal might, however, limit applicability to fluorescence microscopy.
Our method enables either color imaging or fast imaging in fluorescence microscopy without the limitations of the aforementioned methods: it does not require multiple or custom-modified cameras, nor does it require the signal to be sparse, repetitive, binary, or to follow any particular (linear) motion.
The main contributions of the present paper are (i) the adaptation of the method in [3] to fluorescence microscopy; (ii) an improved temporal super-resolution reconstruction algorithm to compensate for slight model mismatches; (iii) demonstration of the applicability to light-sheet microscopy for biological applications.
The paper is organized as follows. In Section 2, we present the signal and imaging models, detail the assumptions on the image acquisition and the signal, and derive the algorithm to perform temporal super-resolution. In Section 3, we demonstrate our method in practice by performing frame-rate doubling in light-sheet microscopy of the beating heart of a double-labeled zebrafish embryo at 2 days post-fertilization. We finally discuss our experiments and conclude in Section 4.
2. Methods
Before delving into the details of our proposed method, we briefly recall the assumptions of the HESM method developed in [3] and the limitations that prevent it from being directly applicable to fluorescence microscopy. In [3], the light captured by the camera (Fig. 2(A, B)) was assumed to be either reflected by the imaged scene or transmitted through a sample. This implied that the light spectrum measured by the camera was that of the illumination source modulated by the spectral sample response. To follow a monochromatic object with higher temporal resolution, temporal variations could be encoded spectrally in the illumination pattern by using light sources of different hues. In the case of fluorescence microscopy with a single type of fluorophore (Fig. 2(C, D)), while the emitted light (which results from the fluorophores’ relaxation following their excitation by the incoming light) has an overall intensity that depends on the excitation light via the molecules’ spectral absorption properties, its spectrum depends only on the emission properties of the fluorophore, not on the spectrum of the excitation. Hence, the spectrum captured by the camera no longer depends on the light source used, which precludes using HESM in the same way as in the brightfield case. Our proposed way to extend HESM to fluorescence involves the use of multiple co-localized fluorophores (Fig. 2(E, F)). Although each fluorescent species’ individual emission spectrum is independent of the illumination spectrum, the fluorophores’ combined emission spectrum depends on the spectral overlap of the illumination with the absorption curve of each fluorophore. An added challenge in the fluorescence case is that fluorophores typically have broad and overlapping emission spectra, which makes it more difficult to isolate their contributions in distinct RGB channels, as could be done with narrow-band illuminations in transmission HESM.
This overlap, therefore, requires an unmixing step.
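As a stand-alone illustration of such spectral unmixing (which our reconstruction performs implicitly), the sketch below recovers per-fluorophore intensities from a single RGB pixel by least squares. The mixing matrix `M_mix` and its values are assumptions for illustration, not calibrated data:

```python
import numpy as np

# Hypothetical 3x2 mixing matrix: column 0 (resp. 1) is the RGB footprint
# of the green (resp. red) fluorophore's emission; values are illustrative.
M_mix = np.array([[0.15, 0.80],
                  [0.80, 0.15],
                  [0.05, 0.05]])

def unmix(rgb):
    """Recover the two fluorophore intensities from one RGB pixel."""
    coeffs, *_ = np.linalg.lstsq(M_mix, np.asarray(rgb), rcond=None)
    return coeffs

# A pixel containing both fluorophores in equal amounts:
print(unmix(M_mix @ np.array([1.0, 1.0])))  # approximately [1. 1.]
```

Because the emission footprints overlap, the inversion is well-posed only as long as the columns of the mixing matrix remain sufficiently distinct.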
2.1 Imaging model
The fluorescence imaging setup we consider is made of a color camera with $C$ channels and $L$ lasers of different wavelengths (Fig. 1). We further consider a dynamic sample $\boldsymbol{x}(t)$ labeled with $F$ different, co-localized fluorophore species. We write the intensity measured in the $c^{\textrm {th}}$ channel of a single pixel, during the $k^{\textrm {th}}$ acquisition at time $t_k$, as:
As a concrete example (and the case we consider in our experiments), we set $C=3$, $L=2$, and $Q=2$. We choose the illumination temporal functions of the $L=2$ light sources as $s_1 = [1, 0]$ and $s_2 = [0, 1]$; that is, each light source is switched on for half of the exposure time, consecutively. In this setting, the matrix $\boldsymbol{S}$ is written as $\boldsymbol{S} = \begin {pmatrix} \boldsymbol{S}^{1} & \boldsymbol{S}^{2} \end {pmatrix}$, where $\boldsymbol{S}^{1}$ is the matrix expressing the first light source’s temporal activation pattern and $\boldsymbol{S}^{2}$ that of the second (see [3] for further details). We recall that $\boldsymbol{S}^{1}, \boldsymbol{S}^{2} \in \mathbb {R}^{C \times C Q} = \mathbb {R}^{3 \times 6}$ and $\boldsymbol{S} \in \mathbb {R}^{C\times C Q L} = \mathbb {R}^{3\times 12}$. Our example further leads to $\boldsymbol{\Gamma } = \begin {pmatrix} (\boldsymbol{\Gamma }^{1})^{\top } & (\boldsymbol{\Gamma }^{2})^{\top } \end {pmatrix}^{\top }$, with $\boldsymbol{\Gamma }^{\ell }$, $\ell =1,2$, the spectral mixing matrix for the $\ell ^{\textrm {th}}$ light source.
2.2 Temporal super-resolution from hue-encoded signal
In [3], we performed temporal super-resolution by solving a least-squares problem without regularization.
To make the method in [3] more robust to model mismatches, we propose to add a regularization term over time, which introduces a dependency between consecutive frames. Given the model presented in Eq. (4), the vector $\boldsymbol{d}_0$ and matrix $\boldsymbol{\Gamma }$ obtained through calibration [3], and a sequence of $M$ acquired color pixels $\boldsymbol{y} = \left [ \boldsymbol{y}_0^{\top }, \boldsymbol{y}_1^{\top }, \cdots , \boldsymbol{y}_{M-1}^{\top } \right ]^{\top } \in \mathbb {R}^{M C\times 1}$, we want to reconstruct the temporal high-resolution signal $\boldsymbol{x} = \left [ \boldsymbol{x}_0^{\top }, \boldsymbol{x}_1^{\top }, \cdots , \boldsymbol{x}_{M-1}^{\top } \right ]^{\top } \in \mathbb {R}^{M Q\times 1}$ with a temporal super-resolution factor of $Q$. We propose to solve an ℓ1-regularized problem with a total-variation penalty over time, which we optimize with the alternating direction method of multipliers (ADMM) [23].
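Such a temporally regularized reconstruction can be sketched as a minimal ADMM solver for a least-squares data term with an ℓ1 penalty on temporal first differences (total variation). The forward model, regularization weight, and signal below are illustrative assumptions, not our calibrated setup:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv(A, y, lam=0.01, rho=1.0, n_iter=300):
    """Minimize 0.5*||A x - y||^2 + lam*||D x||_1 via ADMM (scaled form),
    where D is the temporal first-difference operator."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                  # (n-1) x n difference matrix
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    P = np.linalg.inv(A.T @ A + rho * D.T @ D)      # precompute x-update solve
    Aty = A.T @ y
    for _ in range(n_iter):
        x = P @ (Aty + rho * D.T @ (z - u))         # quadratic x-update
        Dx = D @ x
        z = soft_threshold(Dx + u, lam / rho)       # l1 proximal z-update
        u = u + Dx - z                              # dual ascent
    return x

# Toy demo: per-frame model A0 (C=3 channels, Q=2 sub-frames), M=6 frames.
A0 = np.array([[0.05, 0.70], [0.80, 0.10], [0.02, 0.02]])
M_frames = 6
A_full = np.kron(np.eye(M_frames), A0)              # block-diagonal forward model
x_true = np.repeat([1.0, 2.0], M_frames)            # piecewise-constant signal
x_rec = admm_tv(A_full, A_full @ x_true)
```

In the noiseless, well-conditioned toy problem above, the TV penalty leaves the piecewise-constant signal nearly unchanged; with noisy measurements it suppresses the oscillations that a plain least-squares inversion would exhibit.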
3. Experiments
We used our implementation of a light-sheet microscope (based on OpenSPIM [31]), with two lasers (Stradus, Vortran Laser Technology) at wavelengths of 488 nm and 561 nm to generate the excitation light sheet. A four-dimensional stage (USB 4D-STAGE, Picard) held the sample. The detection axis consisted of a 20$\times$/0.5 water-immersion objective (UMPLFLN 20$\times$W, Olympus) coupled to a 180 mm tube lens (U-TLU-1-2, Olympus), terminated by a 1.2-megapixel CMOS color camera equipped with an RGGB Bayer filter (DCC3240C, Thorlabs Instruments). The lasers and camera were electronically synchronized using a micro-controller (Arduino Due, Arduino, Italy), which monitored the flash output of the camera and controlled the lasers through their ON/OFF electrical input, similarly to the electronics scheme provided in [32].
We conducted experiments on several fish samples, whose fluorescent protein expression levels varied. To obtain acceptable SNR in the acquired images, we adjusted the exposure time of the camera for each sample, which explains the different frame rates in the experiments below.
3.1 Temporal super-resolution light-sheet imaging of the beating heart of a zebrafish
To illustrate the potential of our method, we tested whether it was applicable to imaging the beating heart of a zebrafish embryo. To that end, we chose zebrafish that co-express ubiquitous cytoplasmic green fluorescent protein, EGFP, and red fluorescent protein, mCherry [33] (two co-localized fluorophores, $F=2$). We embedded the embryos in 1% low-melting-point agarose in a glass-bottom dish and acquired 30 frames ($M=30$) with our RGB camera ($C=3$); setting the super-resolution factor to $Q=2$, the two lasers ($L=2$) were each switched on for half of the exposure time (consecutively), so that $s_1[i] = [1, 0]$ and $s_2[i] = [0, 1]$, $i \in \{0, 1\}$. Figure 4 shows two consecutively acquired frames (Fig. 4(a),(e)), all color channels of the acquisitions (Fig. 4(b)-(d), (f)-(h)), and the corresponding four reconstructed frames (Fig. 4(i)-(l)). Note that, consistent with the calibration in Fig. 3, the blue channel (Fig. 4(d),(h)) contains very little energy. Our method was therefore capable of doubling the frame rate. Here, $F=2$ limited the frame rate increase to $Q=2$, but with additional fluorophore species and acquisition channels, higher $Q$ could be considered.
The matrix $\boldsymbol{\Gamma }$ and the vector $\boldsymbol{d}$ were obtained with the calibration procedure in [3], using the code available online [32]. The area used for calibration is shown in Fig. 5(d). Figure 3 illustrates how the matrix $\boldsymbol{A} = \boldsymbol{S\Gamma }$ is built. We observe that the green laser has a high impact (first value, 0.7) on the red channel of the camera: the green laser mostly excites the red fluorophores, whose emission is mostly captured in the red channel. Similarly, the blue laser has a high impact on the green channel of the camera (value of 0.8), as it mostly excites the green fluorophores. Finally, note that neither laser substantially impacts the blue channel, since there are no blue fluorophores. The values displayed in Fig. 3 are the ones obtained after calibration and were used for the following experiments.
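The construction of $\boldsymbol{A} = \boldsymbol{S\Gamma}$ for the $C=3$, $L=2$, $Q=2$ case can be sketched as follows. The Kronecker block structure ($\boldsymbol{S}^{\ell} = \boldsymbol{I}_C \otimes s_{\ell}$, $\boldsymbol{\Gamma}^{\ell} = \gamma^{\ell} \otimes \boldsymbol{I}_Q$) and the spectral values in `gamma` are assumptions chosen to mirror the calibrated values quoted above (0.7, 0.8), not the exact matrices of [3]:

```python
import numpy as np

C, Q, L = 3, 2, 2  # color channels, super-resolution factor, lasers

# Temporal activation: each laser is on during one half of the exposure.
s = np.array([[1.0, 0.0],   # s_1
              [0.0, 1.0]])  # s_2

# Assumed spectral responses (illustrative): column l gives the RGB response
# to the fluorescence excited by laser l.
gamma = np.array([[0.05, 0.70],   # red channel   (561 nm laser -> mCherry)
                  [0.80, 0.10],   # green channel (488 nm laser -> EGFP)
                  [0.02, 0.02]])  # blue channel  (no blue fluorophore)

# Assumed block structure: S^l = I_C (x) s_l, Gamma^l = gamma_l (x) I_Q.
S = np.hstack([np.kron(np.eye(C), s[l:l + 1]) for l in range(L)])             # 3 x 12
Gamma = np.vstack([np.kron(gamma[:, l:l + 1], np.eye(Q)) for l in range(L)])  # 12 x 2
A = S @ Gamma   # 3 x 2: maps the Q sub-frame intensities to one RGB pixel
```

With these non-overlapping activation patterns, $\boldsymbol{A}$ reduces to the spectral matrix itself, matching the structure displayed in Fig. 3.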
3.2 Homogeneity of the fluorescent co-expression ratio
Our method requires that the imaged sample shows consistent fluorescent co-expression. We verified that the (static) calibration area and the (dynamic) heart area indeed have homogeneous and matching green and red fluorescent protein expression levels. To that end, we computed the red-over-green ratio (R/G) for each pixel in the images acquired for calibration, displayed in Fig. 5(b), and then computed R/G histograms over various areas to compare them (Fig. 5(c’), (d’) and (e’)).
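The per-pixel ratio and its histogram can be computed as sketched below; the function and parameter names are ours (not from the published code), and `eps` guards against division by zero in dark pixels:

```python
import numpy as np

def rg_ratio_histogram(rgb, mask=None, bins=50, eps=1e-6):
    """Per-pixel red/green ratio and its histogram over a region of interest.
    rgb: (H, W, 3) float image; mask: optional boolean (H, W) region
    (e.g. the heart or the calibration area)."""
    ratio = rgb[..., 0] / (rgb[..., 1] + eps)   # R/G per pixel
    if mask is not None:
        ratio = ratio[mask]
    hist, edges = np.histogram(ratio, bins=bins, range=(0.0, 3.0))
    return ratio, hist, edges
```

Comparing the histograms of the calibration area and of the heart region then amounts to calling this function once per mask and overlaying the results.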
Figure 5 shows an acquired color image where the lasers were temporally controlled as explained in Fig. 3. The ventricle (V), atrium (At) and pericardium (p) are outlined. In the pericardium in Fig. 5(a), some patches appear green, while the heart appears orange. This is better seen in Fig. 5(b), by comparing (c), (d) and (e) and their corresponding histograms in (c’), (d’) and (e’).
Inhomogeneities in the co-expression of red and green fluorescent proteins are visible over the sample (for example, Fig. 5(e) and (e’) compared to Fig. 5(d) and (d’)). Given this expression variability (which is common from individual to individual or within a sample), it is important to choose a motionless calibration area (Fig. 5(d) and (d’)) that exhibits an R/G ratio similar to that of the dynamic phenomenon of interest, such as the beating heart (Fig. 5(c) and (c’)). Even with a careful choice, slight differences in the histograms remain (compare (c’) and (d’)). This justifies the use of an algorithm that is more robust to mismatches than a plain least-squares method without regularization. We investigate this aspect further in Section 3.4.
3.3 Combined fast and color imaging on the beating heart of a zebrafish
We applied Algorithm 1 only in the region comprising the beating heart. In this way, we could maintain the color information in static areas and do fast imaging where needed. Since we require the imaged sample (in this case, the heart) to be homogeneously labeled for temporal super-resolution, after Algorithm 1 we can virtually assign the (single) hue back to the reconstructed images. Figure 6 shows two frames where we have temporal super-resolution reconstructed frames only in a region of interest (Fig. 6(c)) and where the acquired hue is preserved in static areas (Fig. 6(a) and (b)).
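The compositing step described above (reconstructed monochrome pixels re-tinted inside the region of interest, acquired color preserved elsewhere) can be sketched as follows; the names are hypothetical, and `tint` stands for the single hue assigned back to the reconstructed pixels:

```python
import numpy as np

def composite(color_frame, recon_mono, roi, tint):
    """Paste temporally super-resolved (monochrome) pixels back into an
    acquired color frame within a region of interest, re-tinted with the
    sample's hue. color_frame: (H, W, 3); recon_mono: (H, W);
    roi: boolean (H, W); tint: length-3 RGB hue."""
    out = color_frame.copy()
    out[roi] = recon_mono[roi][:, None] * np.asarray(tint)  # tint ROI pixels
    return out
```

Outside the region of interest the acquired hue is untouched, so static multi-channel information is preserved exactly as in Fig. 6(a) and (b).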
3.4 ADMM compared to least-squares without regularization
As observed in Section 3.2 and Fig. 5, there can be inhomogeneities in the co-expression ratios of the fluorescent proteins over the sample. We investigated whether the algorithm introduced in Section 2 (ADMM) is more robust to these variations than the least-squares method without regularization (LSTSQ) of [3]. To that end, we compared reconstructions obtained using either ADMM or LSTSQ on the same input data. Figure 6(d) shows a comparison of both reconstructions at the same location, where the heart beats. The cardiac period is visible in both plots, yet the ADMM reconstruction is smoother while maintaining sharp transitions, unlike the LSTSQ reconstruction. This behavior is to be expected with total-variation regularization. In Fig. 6(e), we compare the ADMM and LSTSQ reconstructions at a location where the imaged sample is mostly static. There, the reconstruction is expected to be nearly constant, which is indeed the case for ADMM, while the LSTSQ reconstruction oscillates, most likely because of a slight image model mismatch.
For the ADMM reconstruction, the processing time for 45 time-points of a single pixel (i.e., 90 reconstructed time-points) was $\approx 1.7$ milliseconds on a MacBook Pro 2018 (2.9 GHz 6-core Intel Core i9, 32 GB RAM). The code is written in standard Python 3. We believe that with GPU programming, which would allow each pixel to be processed in parallel, and some further optimization, the code could run in real time.
3.5 Impact on the reconstruction of 3D volumes of the beating heart
Light-sheet microscopy is a well-established tool for developmental studies thanks to its high spatio-temporal resolution and its low level of induced optical damage to the sample [34]. Furthermore, it has good sectioning capabilities, which we took advantage of to produce a dynamic 3D volume series (XYZ + time). We acquired movies of optical sections at different depths within the sample, applied our temporal super-resolution method to each individual movie, then temporally registered movies at consecutive depths so that the heartbeat is synchronous in all slices. For the temporal registration, we used a dynamic programming method [35], which finds the warping function of one temporal series onto another that maximizes the sum of normalized mutual information between corresponding frames of both time series; this approach has previously been shown to be well suited for cardiac images [36–38]. Visualization 4 shows a 3D+time rendering that highlights both the raw data, where motion is encoded within the color channels, and the doubled-frame-rate reconstruction.
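A generic dynamic-programming alignment of two frame sequences can be sketched as below. This is a simplified stand-in for the published registration method [35]: it takes a precomputed pairwise similarity matrix (which, in the paper, would hold normalized mutual information values) and returns the warping path maximizing the total similarity:

```python
import numpy as np

def align_sequences(sim):
    """Dynamic-programming temporal alignment of two frame sequences.
    sim[i, j]: similarity between frame i of one slice's movie and frame j
    of the next. Returns the monotone warping path of maximal total similarity."""
    n, m = sim.shape
    acc = np.full((n, m), -np.inf)
    acc[0, 0] = sim[0, 0]
    for i in range(n):                      # accumulate best path scores
        for j in range(m):
            if i == j == 0:
                continue
            prev = max(acc[i - 1, j] if i else -np.inf,
                       acc[i, j - 1] if j else -np.inf,
                       acc[i - 1, j - 1] if i and j else -np.inf)
            acc[i, j] = sim[i, j] + prev
    path, (i, j) = [(n - 1, m - 1)], (n - 1, m - 1)
    while (i, j) != (0, 0):                 # backtrack the optimal path
        candidates = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = max((c for c in candidates if c[0] >= 0 and c[1] >= 0),
                   key=lambda c: acc[c])
        path.append((i, j))
    return path[::-1]
```

For two already-synchronous slices the optimal path is the diagonal, i.e. the identity warping.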
3.6 Roadmap to higher super-resolution factors
In this paper, we presented a method to double the imaging frame rate in fluorescence microscopy. According to our model, the achievable temporal super-resolution factor is determined by the rank and condition number of the matrix $\boldsymbol{A}$ in Eq. (9). The rank is upper-bounded by each of $C$, $F$, and $L$ (respectively, the number of color channels, co-localized fluorophores, and lasers at our disposal), so a super-resolution factor of, say, $Q=3$ would require at least three color channels, three co-localized fluorophores, and three lasers.
Also, the condition number of the matrix $\boldsymbol{A}$ provides a means to predict the quality of the reconstructions, as we illustrated in [3]. If the chosen fluorophores have emission spectra that are too similar, the condition number increases, lowering the quality of the reconstruction. In [1], Shaner et al. provide insights on how to choose fluorescent proteins and optical filters for maximal spectral separation, which improves the conditioning of our system. A setup recommended in [1] would have so little spectral cross-talk that we could reach $Q=3$ with a near-perfect condition number, hence good-quality reconstructions at three times the frame rate. This setup uses three bandpass detection filters centered at 480 nm (bandwidth 40 nm), 575 nm (bandwidth 25 nm) and 675 nm (bandwidth 130 nm) and the following three co-localized fluorophores: Cerulean (emission peak 475 nm), mOrange (emission peak 562 nm) and mPlum (emission peak 649 nm).
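The effect of spectral separation on the condition number can be checked directly; the two matrices below are illustrative assumptions (not measured values), one mimicking strongly overlapping emission spectra and one mimicking the well-separated case of Fig. 3:

```python
import numpy as np

# Rows = camera channels (R, G, B); columns = laser sub-frames.
A_overlap = np.array([[0.70, 0.55],    # strongly overlapping emission spectra
                      [0.60, 0.65],
                      [0.05, 0.05]])
A_separated = np.array([[0.05, 0.70],  # well-separated spectra
                        [0.80, 0.10],
                        [0.02, 0.02]])

print(np.linalg.cond(A_overlap))    # large: poor reconstructions expected
print(np.linalg.cond(A_separated))  # close to 1: well-posed system
```

Such a quick check can guide the choice of fluorophore pairs and detection filters before any acquisition is made.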
4. Discussion and conclusion
In this paper, we investigated whether a hue-encoded shutter method (HESM) to improve the temporal resolution [3] could be applied to fluorescence microscopy. We showed that despite the physical differences between fluorescence (where the illumination spectrum cannot be inferred from the emission spectrum) and light absorption or reflection (which directly modulate the illumination spectrum to produce the transmitted or reflected light), HESM can still be used in fluorescence microscopy, provided one can use temporally modulated illumination, a color camera, and samples labeled with co-localized fluorophores in the dynamic regions where a frame rate boost is desired. We showed imaging of the beating heart of a zebrafish labeled with co-localized red and green fluorescent proteins at twice the acquisition frame rate on a light-sheet microscope, with the ability to preserve the hue information in static areas. Although we experimentally showed an increase of the frame rate by a factor of 2, our method is general enough that it should allow for higher factors if additional co-localized fluorescent species, lasers, and color channels are available, as discussed in Section 3.6.
While our use of a low-SNR RGB camera keeps the performance of the current implementation below the highest frame rates achievable with dedicated high-sensitivity cameras (100–200 fps) built into some light-sheet microscope implementations, we foresee that the use of alternative imaging components (light splitting and multiple high-speed cameras) could lead to even higher frame rates. Also, alternating between the excitation of multiple populations of fluorophores might be beneficial, as it allows for additional recovery time between excitations or lower excitation intensities at each wavelength. However, since this effect will likely depend on the specificities of each use case (in vivo imaging, labeling procedure, choice and proximity of fluorophores), further studies will be required to fully characterize the benefits of using multiple populations of fluorophores over simply increasing the concentration of a single one (when such a concentration increase is biologically feasible).
A key element that we introduced in this paper to make the HESM approach sufficiently robust in practice is the temporal regularization, which can cope with the sample labeling inhomogeneities and low signal quality inherent to fluorescence imaging, as shown in Fig. 6(d). We further observed that two aspects are particularly important when selecting a fluorescent sample for HESM: (1) the co-localized fluorescent labels should be as homogeneous as possible in the dynamic regions, and (2) a static area with a labeling ratio sufficiently similar to the dynamic region is required to calibrate the system. This means that a sample in which only moving cells are labeled would not be suitable for our method, as the calibration could not be performed.
In Section 3.3, we illustrated the versatility afforded by our method (either slow color imaging or fast monochrome imaging), by applying it to a region of interest surrounding the beating heart of a zebrafish, while preserving the acquired color images outside of the region. Furthermore, the added possibility of using fluorescence opens the use of HESM to sectioning methods that rely on fluorescence (such as light-sheet microscopy). We illustrated the benefits of a doubled frame rate in the reconstruction of the beating heart in four dimensions (3D + t).
Our method is generic enough that it could potentially speed-up other fluorescence microscopy methods, provided samples with co-localized fluorophores can be generated and multiple lasers can be modulated in time at high-speed, in synchrony with the color camera.
Funding
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (200020_179217, 200021_159227, 206021_164022, 310030E-164245).
Disclosures
The method [3] used in this paper is the subject of a European patent application (EP19154253) which also covers the method presented in this paper.
References
1. N. C. Shaner, P. A. Steinbach, and R. Y. Tsien, “A guide to choosing fluorescent proteins,” Nat. Methods 2(12), 905–909 (2005). [CrossRef]
2. J. Vermot, S. E. Fraser, and M. Liebling, “Fast fluorescence microscopy for imaging the dynamics of embryonic development,” HFSP J. 2(3), 143–155 (2008). [CrossRef]
3. C. Jaques, E. Pignat, S. Calinon, and M. Liebling, “Temporal Super-Resolution Microscopy Using a Hue-Encoded Shutter,” Biomed. Opt. Express 10(9), 4727–4741 (2019). [CrossRef]
4. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305(5686), 1007–1009 (2004). [CrossRef]
5. K. G. Chan, S. J. Streichan, L. A. Trinh, and M. Liebling, “Simultaneous temporal superresolution and denoising for cardiac fluorescence microscopy,” IEEE Trans. Comput. Imaging 2(3), 348–358 (2016). [CrossRef]
6. A. Veeraraghavan, D. Reddy, and R. Raskar, “Coded strobing photography: Compressive sensing of high speed periodic videos,” IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 671–686 (2011). [CrossRef]
7. T. H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40(17), 4054–4057 (2015). [CrossRef]
8. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992 (2015). [CrossRef]
9. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526 (2013). [CrossRef]
10. G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging.,” Nat. Methods 7(3), 209–211 (2010). [CrossRef]
11. N. Wagner, N. Norlin, J. Gierten, G. de Medeiros, B. Balázs, J. Wittbrodt, L. Hufnagel, and R. Prevedel, “Instantaneous isotropic volumetric imaging of fast biological processes,” Nat. Methods 16(6), 497–500 (2019). [CrossRef]
12. A. Koehler, “Ein neues Beleuchtungsverfahren für mikrophotographische Zwecke,” Zeitschrift für wissenschaftliche Mikroskopie und für Mikroskopische Technik 10, 433–440 (1893).
13. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit,” J. Opt. Soc. Am. 56(11), 1463–1471 (1966). [CrossRef]
14. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57(7), 932–941 (1967). [CrossRef]
15. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. Van Vliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (pam),” J. Microsc. 189(3), 192–198 (1998). [CrossRef]
16. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]
17. R. Heintzmann, T. M. Jovin, and C. Cremer, “Saturated patterned excitation microscopy—a concept for optical resolution improvement,” J. Opt. Soc. Am. A 19(8), 1599–1609 (2002). [CrossRef]
18. M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. 102(37), 13081–13086 (2005). [CrossRef]
19. S. S. Gorthi, D. Schaak, and E. Schonbrun, “Fluorescence imaging of flowing cells using a temporally coded excitation,” Opt. Express 21(4), 5164–5170 (2013). [CrossRef]
20. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: Motion deblurring using fluttered shutter,” ACM Trans. Graph. 25(3), 795–804 (2006). [CrossRef]
21. S. Dillavou, S. M. Rubinstein, and J. M. Kolinski, “The virtual frame technique: ultrafast imaging with any camera,” Opt. Express 27(6), 8112–8120 (2019). [CrossRef]
22. J. Marguier, N. Bhatti, H. Baker, M. Harville, and S. Süsstrunk, “Color correction of uncalibrated images for the classification of human skin color,” Proceedings of the 15th IS&T/SID Color Imaging Conference pp. 331–335 (2007).
23. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3(1), 1–122 (2010). [CrossRef]
24. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60(1-4), 259–268 (1992). [CrossRef]
25. R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Royal Stat. Soc. Ser. B 58(1), 267–288 (1996). [CrossRef]
26. D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods (Academic Press, 1982).
27. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge Press University, 2004).
28. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009). [CrossRef]
29. P. C. Hansen, “Analysis of discrete ill-posed problems by means of the l-curve,” SIAM Rev. 34(4), 561–580 (1992). [CrossRef]
30. P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, vol. 7 (SIAM, 2010).
31. P. G. Pitrone, J. Schindelin, L. Stuyvenberg, S. Preibisch, M. Weber, K. W. Eliceiri, J. Huisken, and P. Tomancak, “OpenSPIM: an open-access light-sheet microscopy platform,” Nat. Methods 10(7), 598–599 (2013). [CrossRef]
32. C. Jaques, “Hue-encoded shutter method code,” https://github.com/idiap/hesm_distrib (2019).
33. Y. A. Pan, T. Freundlich, T. A. Weissman, D. Schoppik, X. C. Wang, S. Zimmerman, B. Ciruna, J. R. Sanes, J. W. Lichtman, and A. F. Schier, “Zebrabow: multispectral cell labeling for cell tracing and lineage analysis in zebrafish,” Development 140(13), 2835–2846 (2013). [CrossRef]
34. Y. Wan, K. McDole, and P. J. Keller, “Light-sheet microscopy and its potential for understanding developmental processes,” Annu. Rev. Cell Dev. Biol. 35(1), 655–681 (2019). PMID: 31299171. [CrossRef]
35. M. Liebling, J. Vermot, A. S. Forouhar, M. Gharib, M. E. Dickinson, and S. E. Fraser, “Nonuniform temporal alignment of slice sequences for four-dimensional imaging of cyclically deforming embryonic structures,” Proc. IEEE Int. Symp. Biomed. Imag. pp. 1156–1159 (2006).
36. M. Liebling and H. Ranganathan, “Wavelet domain mutual information synchronization of multimodal cardiac microscopy image sequences,” Proc. SPIE 7446, 744602 (2009). [CrossRef]
37. J. Ohn, J. Yang, S. E. Fraser, R. Lansford, and M. Liebling, “High-speed multicolor microscopy of repeating dynamic processes,” Genesis 49(7), 514–521 (2011). [CrossRef]
38. C. Jaques, L. Bapst-Wicht, D. F. Schorderet, and M. Liebling, “Multi-spectral widefield microscopy of the beating heart through post-acquisition synchronization and unmixing,” in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), (2019), pp. 1382–1385.