Abstract
Limited time-resolution in microscopy is an obstacle to many biological studies. Despite recent advances in hardware, digital cameras have limited operation modes that constrain frame-rate, integration time, and color sensing patterns. In this paper, we propose an approach to extend the temporal resolution of a conventional digital color camera by leveraging a multi-color illumination source. Our method allows for the imaging of single-hue objects at an increased frame-rate by trading spectral for temporal information (while retaining the ability to measure base hue). It also allows rapid switching to standard RGB acquisition. We evaluated the feasibility and performance of our method via experiments with mobile resolution targets. We observed a time-resolution increase by a factor of 2.8 with a three-fold increase in temporal sampling rate. We further illustrate the use of our method to image the beating heart of a zebrafish larva, allowing the display of color or fast grayscale images. Our method is particularly well-suited to extend the capabilities of imaging systems where the flexibility of rapidly switching between high frame-rate and color imaging is necessary.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Many biological processes are highly dynamic and a low time-resolution in microscopy seriously limits their study [1]. Several recent developments in both illumination and detection technology allow pushing towards higher frame rates. Light emitting diodes (LEDs), which are bright yet emit little heat, are both cost-effective and reliable [2]. Several open-source projects have made building custom microscopes increasingly accessible [2–4] and facilitated hardware control outside of standard operating modes [5,6]. Despite these developments, many imaging setups remain constrained by the achievable frame-rate, integration time, and color sensing patterns, because digital cameras have limited operation modes.
In this paper, we propose to extend the temporal resolution of a conventional digital color camera (whose frames can be externally triggered) by leveraging a multi-color LED illumination source and computational post-processing. Our method assumes that the observed object is of a single hue (such as obtained by use of a single stain or dye) and embeds time information into each acquired frame by spectrally encoding temporal light patterns that are then collected by a color camera. Following acquisition, the images undergo an unmixing procedure that increases the frame-rate and effective temporal resolution. Our approach is related to, and combines, approaches used in other imaging methods, which we briefly review below.
To increase the (temporal and spatial) resolution beyond what cameras can offer directly, several computational approaches have been proposed, often relying on multiple simultaneous observations of a signal, which are then fused to reconstruct a high-resolution version of the signal of interest [7–10]. Despite the resolution gains, these methods require multiple cameras, which can be hard to integrate in a standard microscopy setup or may not be compatible with low-photon-count conditions. Other methods make assumptions on the signal structure itself, for instance assuming the signal has a sparse representation in a function basis [11–13] or relying on the repeatable nature of the imaged motion [14,15]. Alternative approaches, which require no prior assumptions, include a method by Bub et al. [16], who proposed using a modified camera whose pixels have staggered exposure times, allowing a flexible tradeoff between time and spatial resolution. This method offers great possibilities for microscopy, with the drawback that it requires a modified camera and low-level hardware control.
Controlled illumination is a core aspect of any optical microscope’s performance, as demonstrated by Köhler over a century ago [17]. In particular, structured illumination has been proposed as a way to access high-frequency components of the object via multiple modulations [18–22]. The modulated signals are combined computationally, and numerical methods have focused on aspects such as taking into account experimental artefacts [23], performing structured illumination without precise knowledge of the projected pattern [24], or lowering the number of required images [25]. Our proposed method leverages ideas from structured illumination, albeit in the temporal domain. Our method also relies on unmixing spectrally encoded signals, which bears similarities with multi-spectral unmixing in fluorescence microscopy [26].
In order to improve temporal resolution and reduce motion blur, several methods have been proposed that take advantage of the availability of controllable illumination sources driven by rapid controllers that work in synchrony with the camera, followed by computational post-processing. For example, Staudt et al. [27] used short light pulses (stroboscopy) to reduce motion blur (while remaining limited by the camera frame-rate) when imaging the beating heart. Gorthi et al. [28] proposed a method for linear motion deblurring for fluorescence microscopy based on the fluttered shutter principle [29], using a pseudo-random temporal illumination sequence and reducing motion blur by a factor of 50. This improvement is, however, only possible in the case of linear motions, which are common in cytometry, yet might not be applicable to more general biological motions.
Other active illumination methods have also been proposed in fields other than microscopy. Shiba et al. [30] used active illumination to project six dense dot patterns during a single image acquisition and computationally recovered both the depth and the speed of elements in the imaged scene. Rangarajan et al. [31] presented active computational imaging methods for spatial super-resolution as well as depth estimation, via the projection of space-varying illumination patterns.
The main contributions of the present paper are:
- 1. a procedure to encode temporal details by illuminating each frame with colored light patterns;
- 2. a reconstruction method to achieve temporal superresolution based on the encoded measurements;
- 3. the characterization of our method’s performance and its robustness both on synthetic and experimental data.
2. Imaging model
We consider an imaging system consisting of $L$ co-located illumination light sources and a color camera with $C$ color channels. We assume that the camera has a global shutter, meaning that each pixel collects light over the same, fixed interval of time. We further consider that each light source has a fixed spectrum while the overall intensity can be varied over the duration of the camera shutter time. The timing of the illumination is linked to the camera. The imaged scene is assumed to be of a single hue and the optical parameters are assumed to be constant over the field of view. Figure 1 schematically depicts an example arrangement for three illumination sources and a color camera with a Bayer pattern.
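As a minimal sketch of this per-pixel measurement model (with placeholder values; the shape conventions, the composition order `Gamma @ S`, and all numbers are assumptions of this illustration, not the paper's calibrated quantities):

```python
import numpy as np

rng = np.random.default_rng(0)
C, L, Q = 3, 3, 3  # color channels, light sources, sub-frame intervals

# Placeholder system quantities (assumptions for illustration only):
Gamma = np.eye(C) + 0.1 * rng.random((C, L))  # spectral mixing: channel x light
S = np.eye(L, Q)                              # temporal patterns: light x interval
D = 0.02 * np.ones(C)                         # per-channel electronic offset

x = rng.random(Q)  # unknown sub-frame intensities at one scene point

# Each color channel sums the contributions of every light over every
# sub-interval, weighted by the spectral mixing, plus a fixed offset.
Y = Gamma @ S @ x + D
```

Since the mapping from the sub-frame intensities to the measured pixel is affine and known after calibration, it can be inverted whenever the combined matrix has rank $Q$.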
Our method operates on each pixel and each frame independently. We can therefore proceed with the derivation of our method by considering a single color pixel, denoted by the vector $\boldsymbol{Y} = \left (Y_1, \ldots , Y_C \right )^{\top }$, whose $C$ color components can be modeled as:
3. Methods
3.1 Temporal super-resolution
The super-resolution problem is equivalent to retrieving the signal $\boldsymbol{x}$ from a single color pixel $\boldsymbol{Y}$ by solving Eq. (7). When the number of channels $C$ is at least equal to the super-resolution factor $Q$, we propose to obtain approximate solutions in the least-squares sense by solving the minimization problem (under the assumption that the data is corrupted by additive white noise)
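A sketch of this least-squares recovery for a single color pixel (all matrices and values are illustrative assumptions; the system matrix, written $\boldsymbol{S}_Q \mathbf{\Gamma }_Q$ in the text, is composed here as `Gamma @ S` for concreteness):

```python
import numpy as np

rng = np.random.default_rng(1)
C, L, Q = 3, 3, 3  # color channels, light sources, super-resolution factor

# Quantities assumed known after calibration (placeholder values):
Gamma = np.eye(C) + 0.1 * rng.random((C, L))  # spectral mixing coefficients
S = np.eye(L, Q)                              # user-chosen temporal patterns
D = 0.02 * np.ones(C)                         # electronic offsets
A = Gamma @ S                                 # system matrix, must have rank Q

x_true = rng.random(Q)                        # ground-truth sub-frame intensities
Y = A @ x_true + D + 1e-4 * rng.standard_normal(C)  # noisy color-pixel reading

# Least-squares estimate of the Q sub-frame samples from one measured pixel:
x_star, *_ = np.linalg.lstsq(A, Y - D, rcond=None)
```

In practice, the pseudo-inverse of the system matrix can be precomputed once and then applied to every pixel of every frame.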
3.2 Determination of the system spectral mixing coefficients and electronics offsets
In order to retrieve $\boldsymbol{x}^{\star}$ in Eq. (12), given the measured color pixel $\boldsymbol{Y}$ and the user-controlled illumination pattern $\boldsymbol{S}_Q$, the coefficients in matrix $\mathbf{\Gamma }_Q$ and in the bias vector $\boldsymbol{D}$ must be known beforehand. We propose to determine these coefficients via a calibration procedure in which we image a static scene under $P$ fixed intensity combinations of the LEDs, each combining contributions from one or several LEDs. These patterns are fully specified by the operator, who can choose which lights to turn on or off and who can manually select an area, comprising $M$ pixels, on which to calibrate the system. We first consider a single pixel with a given illumination pattern and set $Q=1$ in Eq. (7) to obtain:
Given these definitions and measurements, we solve for $\mathbf{\Gamma }_1$ and $\boldsymbol{D}$ in Eq. (14) to minimize the $\ell _1$-norm cost:
Although a similar approach could be used for minimizing Eq. (12) in order to retrieve the data, we found that for our applications the least-squares approach, which is direct rather than iterative, is sufficient. The $\ell _1$ norm is more robust to mismatches between the affine response model and the actual measurements. Mismatches may be due to, for example, low photon count (in dark regions), saturated pixels, or a nonlinear detector response curve. Since good calibration has a strong influence on the reconstruction quality and can be carried out offline, we favored the $\ell _1$-norm in Eq. (16) over least-squares, despite it being slower to minimize. We note that other robust norms, for which efficient algorithms exist, could be used.
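A sketch of such a robust affine fit, using iteratively reweighted least squares in the spirit of [33–35] (the synthetic data, the outlier standing in for a saturated pixel, and all numerical values are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
C, L, P = 3, 3, 30  # color channels, LEDs, calibration patterns

S_cal = rng.integers(0, 2, size=(P, L)).astype(float)  # on/off LED patterns
Gamma_true = np.eye(C) + 0.1 * rng.random((C, L))      # unknown spectral mixing
D_true = 0.02 * np.ones(C)                             # unknown offsets
Y_cal = S_cal @ Gamma_true.T + D_true  # P x C calibration measurements
Y_cal[0] += 0.5                        # artificial outlier (e.g. saturated pixel)

# Augment the patterns with a constant column so the offset is fitted jointly.
X = np.hstack([S_cal, np.ones((P, 1))])  # P x (L + 1) design matrix

def irls_l1(X, y, n_iter=50, eps=1e-6):
    """Approximate the l1-norm regression fit by iteratively reweighting
    a least-squares solve (weights ~ 1/|residual|)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
    return beta

# Fit each color channel independently; the outlier is largely ignored.
coef = np.column_stack([irls_l1(X, Y_cal[:, c]) for c in range(C)])
Gamma_est, D_est = coef[:L].T, coef[L]
```

The same data fitted by plain least squares would be biased by the outlier, which illustrates why the robust norm is preferred for calibration.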
3.3 Base-hue recovery and hue-dependent model-selection for non-gray samples
Although our method trades spectral information to gain temporal resolution, we can leverage our use of a color camera to collect the hue of the imaged sample during the calibration procedure of Section 3.2 (using white illumination). The measured and normalized RGB triplet is then used to build a color pixel $x^{\star }[i] (R \quad G \quad B)^{\top }$ from the monochromatic, temporally super-resolved reconstruction $\boldsymbol{x}^{\star }$ obtained with our method described in Section 3.1.
Furthermore, if the scene to be imaged is made of moving objects of any one hue among $N$ possible hues, we can recover super-resolved images as follows. We first calibrate the system according to Sec. 3.2 for each one of the possible hues (indexed by $n=0,\ldots , N-1$), hence obtaining $N$ parameter sets $\left(\mathbf{\Gamma }_Q^{(n)}, \boldsymbol{D}^{(n)}\right)$ and base hue triplets $(R^{(n)},\;G^{(n)},\;B^{(n)})$. After acquiring images of a moving object (whose type or hue index $n$ is unknown), we apply our temporal super-resolution method using each model $\left(\mathbf{\Gamma }_Q^{(n)}, \boldsymbol{D}^{(n)}\right)$ in turn (e.g. on a manually selected region of interest (ROI)). We then evaluate the quality of the reconstructions $\boldsymbol{x}^{\star ,(n)}=\left [ x^{(n)}[1] \quad \cdots \quad x^{(n)}[Q] \right ]^{\top }$, $n=0,\ldots ,\;N-1$ obtained with the corresponding models $\left(\mathbf{\Gamma }_Q^{(n)}, \boldsymbol{D}^{(n)}\right)$ by computing
as a measure of smoothness. The rationale behind this criterion is that only correct model parameters will reduce flicker in regions and time-intervals where the scene is static (which we assume are present in the scene), hence decreasing $R^{(n)}$.

4. Experiments
4.1 Hardware and parameters setup
We implemented the illumination with commonly available and cost-effective hardware. We assembled a light source using a 6-LED chip (SLS Lighting RGBWA+UV, Aliexpress, China). The LEDs have hues red ($\lambda \approx 620$nm), green ($\lambda \approx 525$nm), blue ($\lambda \approx 465$nm), amber ($\lambda \approx 595$nm), white (broad spectrum via fluorescence), and ultra-violet ($\lambda \approx 395$nm). We drove the LEDs via a micro-controller (Arduino Uno, Arduino, Italy), which we programmed to generate the illumination time-pattern shown in Fig. 1(b), individually controlling each color. For LED-camera synchronization, the micro-controller monitored the flash trigger output of the camera: whenever the trigger signal transitioned from the low to the high state, the micro-controller started the time-sequence of the LEDs for the frame about to be recorded. The LEDs were directly powered by the controller’s outputs, without additional power amplification of the signal.
We used a CMOS color camera (Thorlabs DCC3240C, Thorlabs, Germany) with 1280 $\times$ 1024 pixels, each with a standard RGGB-Bayer filter pattern ($C=3$). We used this camera both for imaging macroscopic objects, in which case we used a 12mm focal length camera objective (Navitar NMV-12M1, HR F1.4/12mm), and for microscopic samples, in which case we attached the camera to the camera port of a custom-built wide-field transmission microscope consisting of a 20$\times$ Olympus water dipping lens (Olympus Plan Fluorite UMPLFLN 20xW) combined with a 180mm tube lens (Olympus U-TLU-1-2).
We either used the LED source as-is, when illuminating macroscopic scenes, or placed it into the illumination port of the microscope, which we adjusted for Köhler illumination (transmission).
In all experiments presented here, we adjusted the exposure of our camera to $E = 60$ milliseconds, set the target over-sampling factor to $Q=3$, and used three LEDs per experiment, hence $L=3$.
For the validation experiment in Section 4.2 and the beating heart data acquisition of Section 5.2, we used the red, green, and blue LEDs. For the robustness characterization experiment in Section 4.3, we alternately used all available LEDs, in sets of three. In all of these experiments, the illumination code sequences, $S_\ell [i], i \in \{0,1,2\}$, corresponding to Eq. (6) were:
In Section 4.4, we investigate various illumination sequences that are specified in Table 2.
For all experiments, to calibrate $\boldsymbol{D}$ and $\mathbf{\Gamma }_1$, we acquired $P = 30$ images ($\approx$ 3 calibration images per channel and per LED) of a static binary patterned sample, each with one of $P$ different combinations of LEDs that were turned on or off (see Section 3.2).
4.2 Resolution improvement characterization
To quantitate the resolution improvement achievable by our method, we moved a test target (USAF resolution pattern) printed on a white cardboard paper and imaged it either: (i) with steady white light illumination; (ii) with strobed white light (one 20 ms pulse per frame); and (iii) with our proposed HESM method, followed by reconstruction.
In order to replicate the same motion in each case and thereby to allow for direct comparison, we used a robotic arm (Baxter, RethinkRobotics, Boston, MA, USA) to carry out the motion.
Under constant white light illumination (Fig. 2(a)), the resolution bars of the test target are blurred since the shutter remains open while the test target moves. With a single white light pulse per frame (Fig. 2(b)) the bars are sharp but only one image per camera frame is available. Using our method (Fig. 2(c)) we observe both sharp bars (comparable to what can be obtained with the strobed white light) and an increase in the frame-rate by a factor of three. We determined the finest resolvable resolution bar triplet in both the images obtained under white light illumination (0.25 line pairs/mm) and with our proposed HESM method (0.707 line pairs/mm). This corresponds to a 2.8-fold improvement in lateral resolution, which directly results from the improvement in temporal resolution of the same factor, given that the motion of the resolution target was uniform.
4.3 Characterization of robustness with choice of illumination hues
Our method allows, in principle, for an arbitrary choice of wavelength spectra for the different LEDs. In practice, however, selecting appropriate wavelengths for the LEDs given the type of imaged sample is essential to ensure the stability of the reconstruction. To illustrate this point, we explored different combinations of colors, chosen among the 6 individually addressable LEDs in our illumination head: red (R), green (G), blue (B), amber (A), white (W), and ultra-violet (UV). Specifically, we repeated the experiment of the moving target using our proposed imaging method with the following color combinations: R-G-B, A-G-UV, B-UV-W, or R-UV-B (each turned on in sequence). In order to characterize the robustness of the imaging system in each case, we calibrated the system and then calculated the condition number $\kappa (\boldsymbol{A})$ (see Chapter 4.4 in [38], p.82) of the obtained system matrix $\boldsymbol{A}=\boldsymbol{S}_Q \mathbf{\Gamma }_Q$:
Table 1 gives the condition number $\kappa$ for the 4 combinations of LEDs that we tested. See Visualization 2 for the corresponding videos. We observed that whenever the system matrix was poorly conditioned, the reconstruction was noisy and flickering; the likely cause is crosstalk between lights with overlapping spectra in a given combination (e.g. the blue and UV LEDs in the B-UV-W combination), which translates into a poorly conditioned system matrix. We observed sharp reconstructions with little noise for the color combinations R-G-B and A-G-UV. The reconstructions with the two other color combinations, B-UV-W and R-UV-B, flickered and showed amplified noise.
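This check can be sketched as follows (the mixing matrices are made-up stand-ins for calibrated spectra: one with well-separated columns mimicking R-G-B, one with two nearly identical columns mimicking strongly overlapping spectra such as blue and UV):

```python
import numpy as np

S_Q = np.eye(3)  # one LED per sub-interval, as in the experiments

# Hypothetical spectral mixing matrices (rows: channels, columns: lights):
Gamma_separated = np.array([[0.90, 0.10, 0.00],   # each spectrum lands mostly
                            [0.10, 0.80, 0.10],   # in a distinct channel
                            [0.00, 0.10, 0.90]])
Gamma_overlap = np.array([[0.05, 0.10, 0.06],     # columns 1 and 3 nearly equal,
                          [0.10, 0.80, 0.11],     # mimicking strongly overlapping
                          [0.90, 0.10, 0.88]])    # spectra (e.g. blue and UV)

kappa_good = np.linalg.cond(S_Q @ Gamma_separated)
kappa_bad = np.linalg.cond(S_Q @ Gamma_overlap)
# A large condition number flags an unstable, noise-amplifying reconstruction.
```

Consistent with the observations in Section 4.4, a condition number above roughly 10 signals that the reconstruction will amplify noise.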
4.4 The condition number of the system matrix depends on the illumination functions
We next investigated the influence of the illumination functions on the quality of the reconstructions. To that end, we performed an experiment similar to that in Section 4.3 but keeping a single set of LEDs (R-G-B) to image the same repeating motion, while varying the illumination functions. We then compared the condition number of the system matrix corresponding to each illumination pattern with the quality of the reconstruction. Table 2 shows the illumination intensities of the LEDs in the sub-frame time intervals for the four different cases with the corresponding condition number of the system matrix. We observed good reconstructions when the system was well-conditioned. A comparative video is provided (Visualization 2).
5. Applications
5.1 Model selection applied to two samples
To demonstrate our method’s ability to recover both the hue of an object and a temporally super-resolved sequence, we imaged two paper cards, one whose hue was white and the other off-white. In both cases, our method could retrieve a temporally super-resolved image sequence as well as assign RGB values for the base color, directly from the raw images (Fig. 3(b, c)).
5.2 Fast imaging of the beating heart
To illustrate the applicability of our HESM method to biological microscopy, we imaged the beating heart of a live zebrafish larva at 4 days post fertilization (dpf), mounted in agarose gel, with a wide-field microscope under transmitted illumination.
Zebrafish (wild-type AB strain, Zebrafish International Resource Center) were raised under standard laboratory conditions (14/10 hour light/dark cycle, system fish water (ZEBTEC Techniplast Aquatic Solution) at 26.5$^{\circ }$C temperature, 500 $\mu$S conductivity, and pH 7.3) in a facility approved by the Veterinary Service of the State of Valais (Switzerland). Fertilized eggs were collected and the embryos raised at 29$^{\circ }$C in standard E3 medium in an incubator (Termaks B8054), supplemented with 0.003% 1-phenyl 2-thiourea (PTU) from 24 hours post fertilization (hpf) to prevent pigmentation. For imaging, we embedded 4 dpf larvae, anesthetized with 0.1% tricaine (ethyl 3-aminobenzoate methanesulfonate salt, Sigma), in low melting agarose.
Following raw image acquisition (see hardware and parameter setup, above), we selected an ROI over which we applied our method, keeping the rest of the images in color. Figure 4 shows a single frame of Visualization 5, where our reconstruction allows clearly visualizing cells within the heart wall of both the atrium and ventricle, which are blurred in the color images. Visualization 5 first shows only color imaging of the beating heart, then reconstructions from our method within an ROI, and finally a side-by-side comparison of standard color imaging and our method on the same ROI. Our method therefore offers the flexibility of either using RGB or fast monochrome imaging.
6. Discussion
The hardware implementation of our method has only a few hard design requirements. In particular, frame acquisition and illumination must be in synchrony, which is straightforward to implement provided the camera has a trigger output. Given variability in hardware clocks and data transfer, independently running the illumination and acquisition systems results in rapid asynchrony and departure from the acquisition model (making it difficult to invert the system matrix). Beyond synchronization, given the frame rate of our camera (60 frames per second), neither the clock-time of our micro-controller (16 MHz) nor the rise time of our LEDs (20 ns) appeared to limit our method.
In order to calibrate our system as described in Section 3.2, it is necessary to manually select an ROI. Our method requires a static sample (or scene) for calibration, and we observed that the best results were obtained when calibration ROIs covered a wide range of intensities. Since acquiring calibration images is fast, many images of identical regions can be acquired fairly rapidly, which limits the influence of noise, in particular in darker regions. We empirically found that acquiring $P = 30$ images gave good calibration results and used that number for all experiments in Section 4.
The results in Section 4.2 show a time-resolution improvement of a factor $\sqrt {8} \approx 2.8$ with a temporal sampling improvement by a factor 3. This factor depends on the number of channels available, and by using additional channels (e.g. through wavelength splitting and use of multiple cameras) higher resolution factors might be achievable. Shortening the illumination pulses further reduces motion blur, yet without improving the frame-rate, at the cost of a higher peak intensity (which is sometimes undesirable in microscopy) and at the risk of producing aliasing. Longer-duration pulses are less prone to aliasing and might help preserve live samples, as their peak intensity can be lower for a given camera integration time.
Although the model presented in Section 2 is non-specific regarding the precise illumination functions, the discrete formulation in Eq. (7) reveals that our model becomes ill-posed should the matrix $\boldsymbol{S}_Q \mathbf{\Gamma }_Q$ not be full rank, i.e. if $ \textrm{rank}(\boldsymbol{S}_Q \mathbf{\Gamma }_Q) \lt Q$. This provides us with a tool for verifying that a proposed illumination function does not lead to an ill-posed system. For example, should data from a particular channel be missing, $Q$ should be lowered so that $Q \leq C$ and the matrices $\boldsymbol{S}_Q$ adapted accordingly. Similarly, the matrix $\mathbf{\Gamma }_Q$ should be well-conditioned, which depends on the sample itself, the color-filters of the camera, and the spectrum of the lighting. The spectrum absorbed and reflected by the sample and then acquired by the camera should form a matrix of rank $Q$ when calibrating with the procedure in Section 3.2. With the condition number $\kappa$ (Section 4.3), we have a means of predicting the quality of reconstructions with our method given the sample, the LEDs, and the chosen camera. The results in Section 4.4 suggest that a condition number above 10 should be avoided in order to guarantee a good reconstruction. For a given matrix $\mathbf{\Gamma }_Q$ (specified by the sample, LED, and detection spectra), the matrix $\boldsymbol{S}_Q$ containing the temporal patterns could be optimized for $\boldsymbol{S}_Q\mathbf{\Gamma }_Q$ to have maximal rank, an NP-hard problem which we have not pursued here.
We investigated the impact of the illumination hues in Section 4.3. Since the camera captures transmitted or reflected light, the same conclusions apply to changing the hue of the sample, rather than the illumination.
Simply using more lights would not necessarily produce a more stable system matrix; choosing the proper combination of lights does. A simple choice for the number of LEDs, channels, and super-resolution factor is to set $L = C = Q$ (one must still ensure that the choice of lights and illumination functions produces well-conditioned matrices, as discussed above).
In practice, temporal flickering may remain in the reconstructed videos, depending on how well the system matrix $\boldsymbol{S}_Q \mathbf{\Gamma }_Q$ matches the dynamic imaging conditions. Calibration and experimental conditions may differ, for example, because calibration is carried out in regions other than those imaged. Furthermore, LED rise and fall times during calibration and imaging may differ, as the LEDs’ electronic current drivers can be frequency-dependent.
For single-hue objects, the possibility of recovering the base hue simultaneously with reconstructing temporally super-resolved image sequences (as shown in Section 3.3) is particularly appealing, as this advantage comes without requiring an increase in the bandwidth of the system. This capability is preserved even in the case of multiple objects with different hues, via the model selection scheme we proposed in Section 3.3. Furthermore, for applications in microscopy, where often only a single camera can be mounted, making rapid switching to a different camera or view-port unfeasible, our method brings clear practical advantages: (i) the same camera can be used both for color and fast imaging, as demonstrated in Section 5.2, and (ii) motion blur can be reduced when acquiring stacks in continuous scanning mode. While the former may be particularly attractive for building versatile imaging systems, the latter may be particularly relevant for applications that require fast inspection, such as screening or flow cytometry. We also foresee that the improved frame-rates and temporal resolution could be beneficial for object tracking applications, a point that we may investigate in the future.
Since the wavelength of the photons emitted by a fluorophore is independent of the excitation wavelength, our method is not applicable to samples labeled with a single fluorophore. However, it could be applied for imaging structures simultaneously co-labeled with two or more fluorophores: while the shape of each individual emission spectrum would remain unchanged (except for scaling), the combined emission intensity (and therefore the resulting combined spectrum) will vary with the relative excitation intensities of the illumination sources, so our method could, in principle, provide similar benefits for fluorescence imaging.
We provide the code, data, and instructions to reproduce results in this paper [37].
7. Conclusion
We introduced a general computational imaging method to carry out temporal super-resolution with a color camera and a set of multi-spectral active illumination sources. Each frame includes multiple copies of the signal at various times, encoded in the hue of the image. The computational procedure retrieves a high time-resolution signal, along with the base hue, under the assumption that the imaged sample has a single color. We presented a direct method to characterize the robustness of the approach, depending on the sensing and illumination spectra, as well as the base hue of the imaged sample. We experimentally showed a temporal resolution improvement of a factor of 2.8 combined with a three-fold increase of the frame-rate. We illustrated our method with an application exhibiting both color imaging and fast grayscale imaging (on a chosen ROI) of the beating heart, showing its applicability to bio-microscopy.
Funding
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (200020_179217, 200021_159227, 206021_164022).
Acknowledgement
The authors would like to thank Linda Bapst-Wicht from IRO (Institut de Recherche en Ophtalmologie, Sion, Switzerland) for providing the zebrafish used in Section 5.2.
Disclosures
The method presented in this paper is the subject of a European patent application (EP19154253).
References
1. J. Vermot, S. E. Fraser, and M. Liebling, “Fast fluorescence microscopy for imaging the dynamics of embryonic development,” HFSP J. 2(3), 143–155 (2008). [CrossRef]
2. J. B. Bosse, N. S. Tanneti, I. B. Hogue, and L. W. Enquist, “Open led illuminator: A simple and inexpensive LED illuminator for fast multicolor particle tracking in neurons,” PLoS One 10(11), e0143547 (2015). [CrossRef]
3. P. G. Pitrone, J. Schindelin, L. Stuyvenberg, S. Preibisch, M. Weber, K. W. Eliceiri, J. Huisken, and P. Tomancak, “OpenSPIM: an open-access light-sheet microscopy platform,” Nat. Methods 10(7), 598–599 (2013). [CrossRef]
4. E. J. Gualda, T. Vale, P. Almada, J. A. Feijó, G. G. Martins, and N. Moreno, “OpenSpin Microscopy: an open-source integrated microscopy platform,” Nat. Methods 10(7), 599–600 (2013). [CrossRef]
5. A. Edelstein, N. Amodaj, K. Hoover, R. Vale, and N. Stuurman, “Computer control of microscopes using $\mu$Manager,” Curr. Protoc. Mol. Biol. 92(1), 1–17 (2010). [CrossRef]
6. A. Edelstein, M. Tsuchida, N. Amodaj, H. Pinkard, R. Vale, and N. Stuurman, “Advanced methods of microscope control using $\mu$Manager software,” J. Biol. Methods 1(2), 10 (2014). [CrossRef]
7. E. Shechtman, Y. Caspi, and M. Irani, “Space-time super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. 27(4), 531–545 (2005). [CrossRef]
8. T. Li, X. He, Q. Teng, Z. Wang, and C. Ren, “Space-time super-resolution with patch group cuts prior,” Signal Process. 30, 147–165 (2015). [CrossRef]
9. A. Agrawal, M. Gupta, A. Veeraraghavan, and S. G. Narasimhan, “Optimal coded sampling for temporal super-resolution,” in CVPR, (2010), pp. 599–606.
10. R. Pournaghi and X. Wu, “Coded Acquisition of High Frame Rate Video,” IEEE Trans. Image Process. 23(12), 5670–5682 (2014). [CrossRef]
11. T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40(17), 4054–4057 (2015). [CrossRef]
12. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992 (2015). [CrossRef]
13. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–10545 (2013). [CrossRef]
14. K. G. Chan, S. J. Streichan, L. A. Trinh, and M. Liebling, “Simultaneous temporal superresolution and denoising for cardiac fluorescence microscopy,” IEEE Trans. Comput. Imaging 2(3), 348–358 (2016). [CrossRef]
15. A. Veeraraghavan, D. Reddy, and R. Raskar, “Coded strobing photography: Compressive sensing of high speed periodic videos,” IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 671–686 (2011). [CrossRef]
16. G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nat. Methods 7(3), 209–211 (2010). [CrossRef]
17. A. Koehler, “Ein neues Beleuchtungsverfahren für mikrophotographische Zwecke,” Zeitschrift für wissenschaftliche Mikroskopie und für Mikroskopische Technik 10, 433–440 (1893).
18. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit,” J. Opt. Soc. Am. 56(11), 1463–1471 (1966). [CrossRef]
19. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57(7), 932–941 (1967). [CrossRef]
20. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. Van Vliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (pam),” J. Microsc. 189(3), 192–198 (1998). [CrossRef]
21. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]
22. R. Heintzmann, T. M. Jovin, and C. Cremer, “Saturated patterned excitation microscopy—a concept for optical resolution improvement,” J. Opt. Soc. Am. A 19(8), 1599–1609 (2002). [CrossRef]
23. L. H. Schaefer, D. Schuster, and J. Schaffer, “Structured illumination microscopy: artefact analysis and reduction utilizing a parameter optimization approach,” J. Microsc. 216(2), 165–174 (2004). [CrossRef]
24. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012). [CrossRef]
25. F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21(2), 601–614 (2012). [CrossRef]
26. T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS Lett. 546(1), 87–92 (2003). [CrossRef]
27. D. W. Staudt, J. Liu, K. S. Thorn, N. Stuurman, M. Liebling, and D. Y. R. Stainier, “High-resolution imaging of cardiomyocyte behavior reveals two distinct steps in ventricular trabeculation,” Development 141(3), 585–593 (2014). [CrossRef]
28. S. S. Gorthi, D. Schaak, and E. Schonbrun, “Fluorescence imaging of flowing cells using a temporally coded excitation,” Opt. Express 21(4), 5164–5170 (2013). [CrossRef]
29. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: Motion deblurring using fluttered shutter,” ACM Trans. Graph. 25(3), 795–804 (2006). [CrossRef]
30. Y. Shiba, S. Ono, R. Furukawa, S. Hiura, and H. Kawasaki, “Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light,” ICCV (2017).
31. P. Rangarajan, I. Sinharoy, P. Milojkovic, and M. P. Christensen, “Active computational imaging for circumventing resolution limits at macroscopic scales,” Appl. Opt. 56(9), D84–D107 (2017). [CrossRef]
32. G. H. Golub and C. F. Van Loan, Matrix Computations (The Johns Hopkins University Press, 1996), 3rd ed.
33. R. E. Welsch, “Robust regression using iteratively reweighted least-squares,” Commun. Stat. Theory. 6(9), 813–827 (1977). [CrossRef]
34. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in ICASSP, (2008), pp. 3869–3872.
35. I. Daubechies, R. Devore, M. Fornasier, and C. S. Gunturk, “Iteratively reweighted least squares minimization for sparse recovery,” Comm. Pure Appl. Math. 63(1), 1–38 (2010). [CrossRef]
36. ILOG-CPLEX, “High-performance software for mathematical programming and optimization,” http://www.ilog.com/products/cplex, (2005).
37. https://github.com/idiap/hesm_distrib.
38. M. Bertero and P. Boccacci, Introduction to inverse problems in imaging (IOP Publishing, Bristol, UK, 1998).