
Temporal super-resolution microscopy using a hue-encoded shutter

Open Access

Abstract

Limited time-resolution in microscopy is an obstacle to many biological studies. Despite recent advances in hardware, digital cameras have limited operation modes that constrain frame-rate, integration time, and color sensing patterns. In this paper, we propose an approach to extend the temporal resolution of a conventional digital color camera by leveraging a multi-color illumination source. Our method allows for the imaging of single-hue objects at an increased frame-rate by trading spectral for temporal information (while retaining the ability to measure base hue). It also allows rapid switching to standard RGB acquisition. We evaluated the feasibility and performance of our method via experiments with mobile resolution targets. We observed a time-resolution increase by a factor of 2.8 with a three-fold increase in temporal sampling rate. We further illustrate the use of our method to image the beating heart of a zebrafish larva, allowing the display of color or fast grayscale images. Our method is particularly well-suited to extend the capabilities of imaging systems where the flexibility to rapidly switch between high frame-rate and color imaging is necessary.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Many biological processes are highly dynamic, and a low time-resolution in microscopy seriously limits their study [1]. Several recent developments in both illumination and detection technology allow pushing towards higher frame rates. Light emitting diodes (LEDs), which are bright yet emit little heat, are both cost-effective and reliable [2]. Several open-source projects have made building custom microscopes increasingly accessible [2–4] and facilitated hardware control outside of standard operating modes [5,6]. Despite these developments, many imaging setups remain constrained by the achievable frame-rate, integration time, and color sensing patterns, because digital cameras have limited operation modes.

In this paper, we propose to extend the temporal resolution of a conventional digital color camera (whose frames can be externally triggered) by leveraging a multi-color LED illumination source and computational post-processing. Our method assumes that the observed object is of a single hue (such as obtained by use of a single stain or dye) and embeds time information into each acquired frame by spectrally encoding temporal light patterns that are then collected by a color camera. Following acquisition, the images undergo an unmixing procedure that increases the frame-rate and effective temporal resolution. Our approach is related to, or combines, approaches leveraged in other imaging methods, which we briefly review below.

To increase the (temporal and spatial) resolution beyond what cameras can offer directly, several computational approaches have been proposed, often relying on multiple simultaneous observations of a signal, which are then fused to reconstruct a high-resolution version of the signal of interest [7–10]. Despite the resolution gains, these methods require multiple cameras, which can be hard to integrate into a standard microscopy setup or may not be suitable in low photon-count situations. Other methods make assumptions on the signal structure itself, for instance assuming the signal has a sparse representation in a function basis [11–13] or relying on the repeatable nature of the imaged motion [14,15]. Alternative approaches, which require no prior assumptions, include a method by Bub et al. [16], who proposed using a modified camera whose pixels have staggered exposure times, allowing for a flexible tradeoff between time and spatial resolution. This method offers great possibilities for microscopy, yet has the drawback of requiring a modified camera and low-level hardware control.

Controlled illumination is a core aspect of any optical microscope’s performance, as demonstrated by Köhler over a century ago [17]. In particular, structured illumination has been proposed as a way to access high-frequency components of the object via multiple modulations [18–22]. The modulated signals are combined computationally, and numerical methods have focused on aspects such as taking into account experimental artefacts [23], performing structured illumination without precise knowledge of the projected pattern [24], or lowering the number of required images [25]. Our proposed method leverages ideas from structured illumination, albeit in the temporal domain. Our method also relies on unmixing spectrally encoded signals, which bears similarities with multi-spectral unmixing in fluorescence microscopy [26].

In order to improve temporal resolution and reduce motion blur, several methods have been proposed that take advantage of the availability of controllable illumination sources driven by rapid controllers that work in synchrony with the camera, followed by computational post-processing. For example, Staudt et al. [27] used short light pulses (stroboscopy) to reduce motion blur (while remaining limited by the camera frame-rate) when imaging the beating heart. Gorthi et al. [28] proposed a method for linear motion deblurring for fluorescence microscopy based on the fluttered shutter principle [29] by using a pseudo-random temporal illumination sequence, reducing motion blur by a factor of 50. This improvement is, however, only possible in the case of linear motions, which are common in cytometry, yet might not be applicable to more general biological motions.

Other active illumination methods have also been proposed in fields other than microscopy. Shiba et al. [30] used active illumination to project six dense dot patterns within a single image acquisition and recovered both the depth and the speed of elements in the imaged scene through computation. Rangarajan et al. [31] presented active computational imaging methods to perform spatial super-resolution as well as depth estimation via the projection of space-varying illumination patterns.

The main contributions of the present paper are:

  • 1. a procedure to encode temporal details by illuminating each frame with colored light patterns;
  • 2. a reconstruction method to achieve temporal super-resolution based on the encoded measurements;
  • 3. the characterization of our method’s performance and its robustness both on synthetic and experimental data.
The paper is organized as follows: in Section 2 we present the signal and imaging models and detail the assumptions on the image acquisition and the signal. In Section 3, we derive our super-resolution method and present a color calibration procedure to adjust the free parameters of our method and retrieve base hues. In Section 4, we characterize our method in terms of resolution gain and robustness, and demonstrate its applicability for imaging biological samples in microscopy in Section 5. We discuss these results in Section 6 before concluding in Section 7.

2. Imaging model

We consider an imaging system consisting of $L$ co-located illumination light sources and a color camera with $C$ color channels. We assume that the camera has a global shutter, meaning that each pixel collects light over the same, fixed interval of time. We further consider that each light source has a fixed spectrum while the overall intensity can be varied over the duration of the camera shutter time. The timing of the illumination is linked to the camera. The imaged scene is assumed to be of a single hue and the optical parameters are assumed to be constant over the field of view. Figure 1 schematically depicts an example arrangement for three illumination sources and a color camera with a Bayer pattern.

 figure: Fig. 1.

Fig. 1. (a) Acquisition setup. The moving sample is imaged with three active light sources $s_i(t)$. The projection of the scene onto the camera is denoted $x(t)$. The color (Bayer) filter makes each pixel sensitive to a specific spectrum that is independent of the light sources. Each light source has its own time function, capturing the sample at different times and encoding this information in different spectra, which are then captured by the color sensor in the hue domain. (b) Example of possible temporal functions for the three light sources. (c) Real example of data acquired with the system depicted in (a). (d)-(e) Close-ups of the acquired data, where the Bayer filter is visible. (f) Reconstruction of three grayscale frames from the acquisition shown in (c).


Our method operates on each pixel and each frame independently. We can therefore proceed with the derivation of our method by considering a single color pixel, denoted by the vector $\boldsymbol{Y} = \left (Y_1, \ldots , Y_C \right )^{\top }$, whose $C$ color components can be modeled as:

$$ Y_{c} = \int_0^{E}\left(\sum_{\ell=1}^{L} \gamma_{\ell,c}x(t)s_{\ell}(t)\right) \textrm{d}t + D_c $$
$$= D_c + \sum_{\ell=1}^{L} \gamma_{\ell,c} \int_0^{E} x(t)s_{\ell}(t) \textrm{d}t, $$
where $x(t), t \in [0,\;E)$ is the imaged time signal (which we wish to recover) at the location in the scene corresponding to the pixel, $E$ is the exposure duration, $s_\ell (t) \in \mathbb {R}^{+}_0$ is the intensity function of the $\ell ^{\textrm {th}}$ active light over time, $D_c \in \mathbb {R}^{+}_0$ is an electronic bias for channel $c$ and $\gamma _{\ell,c} \in \mathbb {R}_0^{+}$ is the spectral impact of the $\ell ^{\textrm {th}}$ light source on channel $c$. Within the duration of one movie frame, we model the imaged signal $x(t)$ as a piecewise constant signal:
$$x(t) = \sum_{i=1}^{Q} x[i] \beta^{0}(Q\cdot(t-i)),$$
with $Q$ an integer number of steps over the exposure time, $x[i], i=1, \ldots , Q$, the values of $x(t)$ at each step, and
$$\beta^{0}(t) = \begin{cases} 1 & \textrm{if } 0 \leq t \lt \frac{E}{Q} \\ 0 & \textrm{otherwise,} \end{cases}$$
the causal B-spline of degree 0 (box function). Given this model for the signal $x(t)$, Eq. (1) can be rewritten as:
$$\begin{aligned} Y_c &= D_c + \sum_{\ell=1}^{L} \gamma_{\ell,c} \int_0^{E} \sum_{i=1}^{Q}x[i] \beta^{0}(Q\cdot(t-i)) s_{\ell}(t)\textrm{d}t \nonumber\\ &= D_c + \sum_{\ell=1}^{L} \gamma_{\ell,c} \sum_{i=1}^{Q}x[i] \int_0^{E} \beta^{0}(Q\cdot(t-i)) s_{\ell}(t)\textrm{d}t \nonumber\\ &= D_c + \sum_{\ell=1}^{L} \gamma_{\ell,c} \sum_{i=1}^{Q} x[i] S_{\ell}[i], \end{aligned}$$
with the average light intensity $S_{\ell }[i]$ in the $i^{\textrm {th}}$ sub-frame interval defined as:
$$\begin{aligned} S_\ell[i] &= \int_0^{E} \beta^{0}(Q\cdot(t-i)) s_{\ell}(t)\textrm{d}t \nonumber\\ &= \int_{(i-1)\cdot E/Q}^{i\cdot E/Q} s_\ell (t) \textrm{d}t. \end{aligned}$$
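To make the discretization concrete, the sub-frame intensities $S_\ell[i]$ of Eq. (6) can be obtained by numerically integrating each illumination function over the $Q$ sub-frame intervals. The Python sketch below illustrates this for a rectangular pulse confined to the first sub-frame; the function and variable names are illustrative and not taken from the released implementation [37].

```python
import numpy as np

def subframe_intensities(s, E, Q, num_samples=1000):
    """Numerically integrate an illumination function s(t) over the Q
    sub-frame intervals of one exposure, as in Eq. (6).
    s: callable returning the LED intensity at time t (illustrative).
    E: exposure duration; Q: number of sub-frame steps."""
    S = np.zeros(Q)
    for i in range(Q):
        # i-th sub-frame interval (0-based here): [i*E/Q, (i+1)*E/Q)
        t = np.linspace(i * E / Q, (i + 1) * E / Q, num_samples)
        S[i] = np.trapz(s(t), t)
    return S

# Example: a rectangular pulse confined to the first sub-frame of a 60 ms exposure
E, Q = 0.06, 3
s1 = lambda t: (t < E / Q).astype(float)
print(subframe_intensities(s1, E, Q))  # approximately [0.02, 0.0, 0.0]
```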
With these notations, we can rewrite Eq. (5) in matrix form:
$$\boldsymbol{Y} = \boldsymbol{S}_Q\mathbf{\Gamma}_Q \boldsymbol{x} + \boldsymbol{D},$$
where $\boldsymbol{x} = \begin {pmatrix} x[1] & \cdots & x[Q] \end {pmatrix}^{\top }$ is the vector of signal samples, $\boldsymbol{D}=\begin {pmatrix} D_1 & \cdots & D_C \end {pmatrix}^{\top }$ is a bias vector, and $\boldsymbol{S}_Q$ contains the time coefficients of the $L$ lights:
$$\boldsymbol{S}_Q = \begin{pmatrix} \boldsymbol{S}^{1}_Q & \ldots & \boldsymbol{S}^{\ell}_Q & \ldots & \boldsymbol{S}^{L}_Q \end{pmatrix}_{C\times CQL},$$
with:
$$\boldsymbol{S}^{\ell}_Q = \begin{pmatrix} (S_\ell[1], \ldots, S_{\ell}[Q]) & {{\mathbb{0}}}_{1\times Q} & \ldots & {{\mathbb{0}}}_{1\times Q} \\ {{\mathbb{0}}}_{1\times Q} & \ddots & \ldots & \vdots \\ \vdots & \ddots & \ddots & {{\mathbb{0}}}_{1\times Q} \\ {{\mathbb{0}}}_{1\times Q} & \ldots & {{\mathbb{0}}}_{1\times Q} & (S_\ell[1], \ldots, S_{\ell}[Q]) \end{pmatrix}_{C\times CQ}.$$
The matrix $\mathbf{\Gamma }$ is built as:
$$\mathbf{\Gamma}_Q = \left[ \begin{pmatrix} \mathbf{\Gamma}^{1}_Q & \ldots & \mathbf{\Gamma}^{\ell}_Q & \ldots & \mathbf{\Gamma}^{L}_Q \end{pmatrix}^{\top} \right]_{CQL \times Q},$$
with:
$$\mathbf{\Gamma}^{\ell}_Q= \left[ \begin{pmatrix} \gamma_{1,\ell}\boldsymbol{I}_{Q} & \ldots & \gamma_{c,\ell}\boldsymbol{I}_{Q} & \ldots & \gamma_{C,\ell}\boldsymbol{I}_{Q} \end{pmatrix}^{\top}\right]_{CQ \times Q},$$
where $\boldsymbol{I}_Q$ is the identity matrix of size $Q \times Q$ and ${{\mathbb{0}}}_{m\times n}$ a matrix with $m$ rows and $n$ columns of zeros (for clarity, we have indicated the dimensions of certain matrices as subscripts in a similar fashion).
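To illustrate how the block matrices of Eqs. (8)-(11) can be assembled in practice, the following Python sketch builds $\boldsymbol{S}_Q$, $\mathbf{\Gamma}_Q$, and the resulting system matrix $\boldsymbol{S}_Q\mathbf{\Gamma}_Q$ for $L=C=Q=3$. The spectral coefficients shown are arbitrary placeholder values (in practice they are obtained by the calibration of Section 3.2), and the code is a minimal illustration rather than the released implementation [37].

```python
import numpy as np

def build_S_Q(S, C):
    """S: (L, Q) array of sub-frame intensities S_l[i]. Returns the C x CQL
    block matrix of Eq. (8), with one C x CQ block per light source (Eq. (9))."""
    L, Q = S.shape
    # Block-diagonal placement of the row (S_l[1], ..., S_l[Q]) for each channel
    blocks = [np.kron(np.eye(C), S[l][np.newaxis, :]) for l in range(L)]
    return np.hstack(blocks)                       # shape (C, C*Q*L)

def build_Gamma_Q(gamma, Q):
    """gamma: (L, C) array of spectral coefficients gamma_{l,c}. Returns the
    CQL x Q stacked matrix of Eqs. (10)-(11)."""
    L, C = gamma.shape
    blocks = [np.kron(gamma[l][:, np.newaxis], np.eye(Q)) for l in range(L)]
    return np.vstack(blocks)                       # shape (C*Q*L, Q)

# Illustrative dimensions: L = C = Q = 3, one LED on per sub-frame (Section 4.1)
S = np.eye(3)                                      # sub-frame codes, up to a scale factor
gamma = np.array([[0.8, 0.1, 0.0],
                  [0.1, 0.7, 0.1],
                  [0.0, 0.1, 0.9]])                # assumed values; calibrated in practice
A = build_S_Q(S, C=3) @ build_Gamma_Q(gamma, Q=3)  # C x Q system matrix of Eq. (7)
```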

3. Methods

3.1 Temporal super-resolution

The super-resolution problem is equivalent to retrieving the signal $\boldsymbol{x}$ from a single color pixel $\boldsymbol{Y}$ by solving Eq. (7). When the number of channels $C$ is at least equal to the super-resolution factor $Q$, we propose to obtain approximate solutions in the least-squares sense by solving the minimization problem (under the assumption that the data is corrupted by additive white noise)

$$\boldsymbol{x}^{{\star}} = {\mathop{\textrm{argmin}}\limits_{\boldsymbol{x}}} \left\lVert{\boldsymbol{Y} - \boldsymbol{D} - \boldsymbol{S}_Q \mathbf{\Gamma}_Q \boldsymbol{x}}\right\rVert^{2}_2.$$
When $Q \leq C$ and $Q \leq L$, this minimization problem can be solved efficiently with a number of numerical methods (e.g. see Chapter 5.3 in [32], p. 236).
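A minimal per-pixel least-squares reconstruction following Eq. (12) can then be written with an off-the-shelf solver. The Python sketch below assumes the system matrix $\boldsymbol{A}=\boldsymbol{S}_Q\mathbf{\Gamma}_Q$ and the bias $\boldsymbol{D}$ have already been calibrated; the frame layout (demosaiced, $H\times W\times C$) and function names are illustrative.

```python
import numpy as np

def superresolve_pixel(Y, A, D):
    """Least-squares estimate of the Q sub-frame intensities for one color
    pixel Y (length C), given the calibrated system matrix A = S_Q Gamma_Q
    (C x Q) and bias vector D (length C), cf. Eq. (12)."""
    x_star, *_ = np.linalg.lstsq(A, Y - D, rcond=None)
    return x_star

def superresolve_frame(frame, A, D):
    """Apply the same solve to every pixel of a demosaiced frame (H x W x C),
    yielding Q grayscale sub-frames (H x W x Q). Layout is illustrative."""
    H, W, C = frame.shape
    Q = A.shape[1]
    b = (frame.reshape(-1, C) - D).T          # shape (C, H*W): one column per pixel
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X.T.reshape(H, W, Q)
```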

3.2 Determination of the system spectral mixing coefficients and electronics offsets

In order to retrieve $\boldsymbol{x}^{\star}$ in Eq. (12), given the measured color pixel $\boldsymbol{Y}$ and the user-controlled illumination pattern $\boldsymbol{S}_Q$, the coefficients in matrix $\mathbf{\Gamma }_Q$ and in the bias vector $\boldsymbol{D}$ must be known beforehand. We propose to determine these coefficients via a calibration procedure in which we image a static scene with a series of fixed illumination patterns that combine contributions from one or several LEDs. The static scene is illuminated with $P$ static intensity combinations of the LEDs. These patterns are fully specified by the operator, who can choose which lights to turn on or off and who can manually select an area, comprising $M$ pixels, on which to calibrate the system. We first consider a single pixel with a given illumination pattern and set $Q=1$ in Eq. (7) to obtain:

$$\Big[\boldsymbol{Y}\Big]_{C\times 1} =\Big[\boldsymbol{S}_1\Big]_{C\times CL}\Big[\mathbf{\Gamma}_1\Big]_{CL\times 1}\Big[\boldsymbol{x}\Big]_{1\times 1} + \Big[\boldsymbol{D}\Big]_{C\times 1},$$
which we rearrange as:
$$\boldsymbol{Y} = \begin{pmatrix} \boldsymbol{xS}_1 & \boldsymbol{I}_{C} \end{pmatrix} \begin{pmatrix} \mathbf{\Gamma}_1 \\ \boldsymbol{D} \end{pmatrix},$$
where $\boldsymbol{I}_{C}$ is the identity matrix of size $C\times C$. Then we combine similar equations for $M$ pixels and $P$ illumination patterns to form the full calibration matrix:
$$\underbrace{\begin{pmatrix} \boldsymbol{Y}^{1,1 } \\ \boldsymbol{Y}^{1,2} \\ \vdots \\ \boldsymbol{Y}^{1,P} \\ \boldsymbol{Y}^{2,1} \\ \vdots \\ \boldsymbol{Y}^{M,P} \end{pmatrix}}_{\boldsymbol{Y}_{\textrm{cal}}} = \underbrace{\begin{pmatrix} x^{1}\boldsymbol{S}_1^{1} & \boldsymbol{I}_{C}\\ x^{1}\boldsymbol{S}_1^{2} & \boldsymbol{I}_{C}\\ \vdots & \vdots \\ x^{1}\boldsymbol{S}_1^{P} & \boldsymbol{I}_{C}\\ x^{2}\boldsymbol{S}_1^{1} & \boldsymbol{I}_{C}\\ \vdots & \vdots \\ x^{M}\boldsymbol{S}_1^{P} & \boldsymbol{I}_{C}\\ \end{pmatrix}}_{\boldsymbol{A}_{\textrm{cal}}} \begin{pmatrix} \mathbf{\Gamma}_1 \\ \boldsymbol{D} \end{pmatrix},$$
where the $x^{m}$ are the intensity of the $m^{\textrm {th}}$ pixel of all $M$ static calibration pixels, $\boldsymbol{S}_\textbf{1}^{p}$ is the $p^{\textrm {th}}$ calibration illumination pattern and $\boldsymbol{Y}^{m,p}$ is the measurement vector on pixel $m$ for illumination pattern $p$. With this setup, all involved quantities in $\boldsymbol{Y}_{\textrm {cal}}$ and $\boldsymbol{A}_{\textrm {cal}}$ are known, either measured or user-imposed. Note that the expression in Eq. (14) involves $\mathbf{\Gamma }_1$, rather than $\mathbf{\Gamma }_Q$, yet even if the dimensions and structure of $\mathbf{\Gamma }_Q$ depend on $Q$, its free parameters, the $\gamma _{c,\ell }$, are independent of $Q$, which allows their inference from $\mathbf{\Gamma }_1$.
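For illustration, the calibration system above can be assembled as in the following Python sketch (array shapes and names are illustrative; the released code [37] may organize the data differently):

```python
import numpy as np

def build_calibration_system(Y, x, S_patterns):
    """Assemble Y_cal and A_cal for the calibration fit.
    Y: (M, P, C) measurements for M calibration pixels and P illumination patterns.
    x: (M,) static pixel intensities.
    S_patterns: (P, L) static LED intensities, one row per calibration pattern."""
    M, P, C = Y.shape
    L = S_patterns.shape[1]
    Y_cal, A_rows = [], []
    for m in range(M):
        for p in range(P):
            Y_cal.append(Y[m, p])
            # S_1 for Q = 1 is the C x CL matrix [S_1[1] I_C | ... | S_L[1] I_C]
            S1 = np.hstack([S_patterns[p, l] * np.eye(C) for l in range(L)])
            # One block row [x S_1  I_C] of A_cal, as in Eq. (14)
            A_rows.append(np.hstack([x[m] * S1, np.eye(C)]))
    return np.concatenate(Y_cal), np.vstack(A_rows)
```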

Given these definitions and measurements, we solve for $\mathbf{\Gamma }_1$ and $\boldsymbol{D}$ in Eq. (14) to minimize the $\ell _1$-norm cost:

$$e(\mathbf{\Gamma}_{1}, \boldsymbol{D}) = \left\lVert{\boldsymbol{Y}_{\textrm{cal}} - \boldsymbol{A}_{\textrm{cal}} \begin{pmatrix} \mathbf{\Gamma}_1 \\ \boldsymbol{D} \end{pmatrix}}\right\rVert_1.$$
We find the solution to this cost minimization problem by using an Iteratively Reweighted Least-Squares (IRLS) method [33]. IRLS proceeds by solving, at each iteration, a weighted least-square problem:
$$\boldsymbol{u}^{(t+1)} = {\mathop{\textrm{argmin}}\limits_{\boldsymbol{u}}} \left\lVert{\boldsymbol{W}^{(t)}\boldsymbol{Y}_{\textrm{cal}} - \boldsymbol{W}^{(t)}\boldsymbol{A}_{\textrm{cal}} \boldsymbol{u}}\right\rVert_2^{2},$$
where $\boldsymbol{W}^{(t)}=\textrm {diag}(w_1^{(t)},\ldots ,\;w_{MP}^{(t)})$ is a diagonal weighting matrix, whose entries $w_{k}^{(t+1)}$ are updated at each iteration $t+1$ [34,35]:
$$w_{k}^{(t+1)} = \left(\left(\boldsymbol{Y}_{\textrm{cal},k} - \left(\boldsymbol{A}_{\textrm{cal}} \boldsymbol{u}^{(t)}\right)_k\right)^{2} + \epsilon^{(t)}\right)^{{-}1/2}.$$
The weights are initialized with $w_k^{(0)} = 1$ and $\epsilon ^{(0)} = 1$. We follow an acceleration method similar to that proposed by Chartrand and Yin [34], where the variable damping factor $\epsilon ^{(t)}$ is divided by 10 each time the relative change of the $\ell _1$-norm of the residual is smaller than $\sqrt {\epsilon ^{(t)}}/100$, until the residual converges or $\epsilon ^{(t)}$ reaches a set minimum value ($10^{-6}$). Once convergence is attained, we retrieve the values of $\mathbf{\Gamma }_1$ and $\boldsymbol{D}$ from $\boldsymbol{u}^{\left (t_{\textrm {final}}\right )}$. In practice, solutions we obtained with this approach were identical to those obtained by use of an exact linear programming method (CPLEX [36]). We favored our implementation for its simplicity and the possibility to make it available [37].
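The following Python sketch summarizes the IRLS procedure with the weight update and damping schedule described above; it is a minimal illustration of the calibration solver rather than the exact released implementation [37].

```python
import numpy as np

def irls_l1(A, y, eps_min=1e-6, max_iter=200):
    """IRLS for the l1-norm fit of the calibration system, with the damping
    schedule described in the text (epsilon divided by 10 when the relative
    change of the l1 residual falls below sqrt(epsilon)/100)."""
    u = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary LS start (all w_k = 1)
    eps, prev_l1 = 1.0, np.sum(np.abs(y - A @ u))
    for _ in range(max_iter):
        r = y - A @ u
        w = (r**2 + eps) ** -0.5                   # weight update
        Aw = A * w[:, np.newaxis]                  # weighted least-squares step
        u = np.linalg.lstsq(Aw, w * y, rcond=None)[0]
        l1 = np.sum(np.abs(y - A @ u))
        if abs(prev_l1 - l1) < np.sqrt(eps) / 100 * prev_l1:
            if eps <= eps_min:
                break                              # residual converged
            eps /= 10.0                            # damping acceleration [34]
        prev_l1 = l1
    return u
```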

Although a similar approach could be used for minimizing Eq. (12) in order to retrieve the data, we found that for our applications the least-squares approach, which is direct rather than iterative, is sufficient. The $\ell _1$-norm is more robust to mismatches between the affine response model and the actual measurements. Mismatches may be due to, for example, low photon count (in dark regions), saturated pixels, or a nonlinear detector response curve. Since a good calibration has a strong influence on the reconstruction quality and can be carried out offline, we favored the $\ell _1$-norm in Eq. (16) over least-squares, despite it being slower to minimize. We note that other robust norms, for which efficient algorithms exist, could be used.

3.3 Base-hue recovery and hue-dependent model-selection for non-gray samples

Although our method trades spectral information to gain temporal resolution, we can leverage our use of a color camera to measure the hue of the imaged sample during the calibration procedure of Section 3.2 (using white illumination). The measured and normalized RGB triplet can then be assigned to each sample of the monochromatic, temporally super-resolved reconstruction $\boldsymbol{x}^{\star }$ obtained with the method described in Section 3.1, building color pixels $x^{\star }[i] (R \quad G \quad B)^{\top }$.

Furthermore, if the scene to be imaged is made of moving objects of any one hue among $N$ possible hues, we can recover super-resolved images as follows. We first calibrate the system according to Sec. 3.2 for each one of the possible hues (indexed by $n=0,\ldots , N-1$), hence obtaining $N$ parameter sets $\left(\mathbf{\Gamma }_Q^{(n)}, \boldsymbol{D}^{(n)}\right)$ and base hue triplets $(R^{(n)},\;G^{(n)},\;B^{(n)})$. After acquiring images of a moving object (whose type or hue index $n$ is unknown), we apply our temporal super-resolution method using each model $\left(\mathbf{\Gamma }_Q^{(n)}, \boldsymbol{D}^{(n)}\right)$ in turn (e.g. on a manually selected region of interest (ROI)). We then evaluate the quality of the reconstructions $\boldsymbol{x}^{\star ,(n)}=\left [ x^{(n)}[1] \quad \cdots \quad x^{(n)}[Q] \right ]^{\top }$, $n=0,\ldots ,\;N-1$ obtained with the corresponding models $\left(\mathbf{\Gamma }_Q^{(n)}, \boldsymbol{D}^{(n)}\right)$ by computing

$$R^{(n)} = \sum_{i=1}^{Q-1} \left| x^{(n)} [i]-x^{(n)} [i+1] \right|$$
as a measure of smoothness. The rationale behind this criterion is that only correct model parameters will reduce flicker in regions and time-intervals where the scene is static (which we assume are present in the scene), hence decreasing $R^{(n)}$.
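A possible implementation of this model-selection step, combined with the base-hue coloring of Section 3.3, is sketched below in Python. The `models` structure, the use of a single (e.g. ROI-averaged) pixel rather than a full ROI, and all names are illustrative simplifications.

```python
import numpy as np

def select_model_and_colorize(Y, models):
    """Reconstruct one (ROI-averaged) color pixel Y with each calibrated model
    and keep the one with the smallest temporal roughness R^(n); then scale the
    corresponding base-hue triplet by the reconstructed samples.
    `models` is an illustrative list of (A_n, D_n, rgb_n) tuples with
    A_n = S_Q Gamma_Q^(n), D_n the bias, and rgb_n the normalized base hue."""
    best = None
    for n, (A_n, D_n, rgb_n) in enumerate(models):
        x_n, *_ = np.linalg.lstsq(A_n, Y - D_n, rcond=None)
        R_n = np.sum(np.abs(np.diff(x_n)))   # roughness criterion R^(n)
        if best is None or R_n < best[1]:
            best = (n, R_n, x_n, rgb_n)
    n, _, x_star, rgb = best
    colored = np.outer(x_star, rgb)          # Q x 3: x*[i] (R G B)^T, Section 3.3
    return n, x_star, colored
```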

4. Experiments

4.1 Hardware and parameters setup

We implemented the illumination with commonly available and cost-effective hardware. We assembled a light source using a 6-LED chip (SLS Lighting RGBWA+UV, Aliexpress, China). The LEDs have hues red ($\lambda \approx 620$ nm), green ($\lambda \approx 525$ nm), blue ($\lambda \approx 465$ nm), amber ($\lambda \approx 595$ nm), white (broad spectrum via fluorescence), and ultra-violet ($\lambda \approx 395$ nm). We drove the LEDs via a micro-controller (Arduino Uno, Arduino, Italy), which we programmed to generate the illumination time-pattern shown in Fig. 1(b), individually controlling each color. For LED-camera synchronization, the micro-controller monitored the flash trigger output of the camera: whenever the trigger signal transitioned from the low to the high state, the micro-controller started the time-sequence of the LEDs for the frame about to be recorded. The LEDs were directly powered by the controller’s outputs, without additional power amplification of the signal.

We used a CMOS color camera (Thorlabs DCC3240C, Thorlabs, Germany) with 1280 $\times$ 1024 pixels and a standard RGGB Bayer filter pattern ($C=3$). We used this camera both for imaging macroscopic objects, in which case we used a 12 mm focal length camera objective (Navitar NMV-12M1, HR F1.4/12mm), and for microscopic samples, in which case we attached the camera to the camera port of a custom-built wide-field transmission microscope consisting of a 20$\times$ Olympus water dipping lens (Olympus Plan Fluorite UMPLFLN 20xW) combined with a 180 mm tube lens (Olympus U-TLU-1-2).

We either used the LED source as-is, when illuminating macroscopic scenes, or placed it into the illumination port of the microscope, which we adjusted for Köhler illumination (transmission).

In all experiments presented here, we set the exposure of our camera to $E = 60$ milliseconds and the target over-sampling factor to $Q=3$, and used three LEDs per experiment, hence $L=3$.

For the validation experiment in Section 4.2 and the beating heart data acquisition of Section 5.2, we used the red, green, and blue LEDs. For the robustness characterization experiment in Section 4.3, we used the available LEDs in alternative sets of three. In all of these experiments, the illumination code sequences $S_\ell [i], i \in \{1,2,3\}$, corresponding to Eq. (6), were:

$$\begin{aligned} S_1[i] &= [1, 0, 0] \nonumber\\ S_2[i] &= [0, 1, 0] \\ S_3[i] &= [0, 0, 1] , \nonumber \end{aligned}$$
with $s_1(t)$ the time-function of the first, $s_2(t)$ the second, and $s_3(t)$ the third LED, respectively.

In Section 4.4, we investigate various illumination sequences that are specified in Table 2.

For all experiments, to calibrate $\boldsymbol{D}$ and $\mathbf{\Gamma }_1$, we acquired $P = 30$ images ($\approx$ 3 calibration images per channel and per LED) of a static binary patterned sample, each with one of $P$ different combinations of LEDs that were turned on or off (see Section 3.2).

4.2 Resolution improvement characterization

To quantitate the resolution improvement achievable by our method, we moved a test target (USAF resolution pattern) printed on a white cardboard paper and imaged it either: (i) with steady white light illumination; (ii) with strobed white light (one 20 ms pulse per frame); and (iii) with our proposed HESM method, followed by reconstruction.

In order to replicate the same motion in each case and thereby to allow for direct comparison, we used a robotic arm (Baxter, RethinkRobotics, Boston, MA, USA) to carry out the motion.

Under constant white light illumination (Fig. 2(a)), the resolution bars of the test target are blurred since the shutter remains open while the test target moves. With a single white light pulse per frame (Fig. 2(b)) the bars are sharp but only one image per camera frame is available. Using our method (Fig. 2(c)) we observe both sharp bars (comparable to what can be obtained with the strobed white light) and an increase in the frame-rate by a factor of three. We determined the finest resolvable resolution bar triplet in both the images obtained under white light illumination (0.25 line pairs/mm) and with our proposed HESM method (0.707 line pairs/mm). This corresponds to a 2.8-fold improvement in lateral resolution, which directly results from the improvement in temporal resolution of the same factor, given that the motion of the resolution target was uniform.
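To make the arithmetic explicit, the ratio of the finest resolvable spatial frequencies quoted above gives the improvement factor directly:
$$\frac{0.707\ \textrm{line pairs/mm}}{0.25\ \textrm{line pairs/mm}} = 2\sqrt{2} = \sqrt{8} \approx 2.8,$$
consistent with the factor discussed in Section 6.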

 figure: Fig. 2.

Fig. 2. Imaging a moving sample with (a, f) a constant white light, (b, g) a 20ms white pulse and (c, d, e, h, i, j) our proposed method (see Visualization 1). The zoom on the element 1 of the group −2 of the USAF-grid (close-up in a, b, c) shows that all three methods can resolve it. It is the limit for the constant illumination. This element is 0.625 mm wide. The detailed views on the whole group −1 (f, g, h) show that the stroboscopic illumination and our method (g, h) are able to resolve up to element 4. This corresponds to a resolution improvement factor of 2.8. Moreover, with our method operating at the same frame-rate, we have six reconstructed frames (c, d, e, h, i, j) while with the two other methods we have two acquired frames (a, b, f, g), thus we improved the frame-rate by a factor of 3.


4.3 Characterization of robustness with choice of illumination hues

Our method allows, in principle, an arbitrary choice of wavelength spectra for the different LEDs. In practice, however, selecting appropriate wavelengths for the LEDs given the type of imaged sample is essential to ensure the stability of the reconstruction. To illustrate this point, we explored different combinations of colors, chosen among the 6 individually addressable LEDs in our illumination head: red (R), green (G), blue (B), amber (A), white (W), and ultra-violet (UV). Specifically, we repeated the experiment of the moving target using our proposed imaging method with the following color combinations: R-G-B, A-G-UV, B-UV-W, or R-UV-B (each turned on in sequence). In order to characterize the robustness of the imaging system in each case, we calibrated the system and then calculated the condition number $\kappa (\boldsymbol{A})$ (see Chapter 4.4 in [38], p. 82) of the obtained system matrix $\boldsymbol{A}=\boldsymbol{S}_Q \mathbf{\Gamma }_Q$:

$$\kappa(\boldsymbol{A}) = \frac{ \sigma_\textrm{max}(\boldsymbol{A})}{ \sigma_\textrm{min}(\boldsymbol{A})},$$
where $\sigma _\textrm {max}(\boldsymbol{A})$ and $\sigma _\textrm {min}(\boldsymbol{A})$ are the largest and smallest singular values of the matrix $\boldsymbol{A}$.
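In practice, this quantity can be computed directly from the singular values of the calibrated system matrix, as in the short Python sketch below (equivalently, `np.linalg.cond` can be used):

```python
import numpy as np

def condition_number(S_Q, Gamma_Q):
    """Condition number of the system matrix A = S_Q Gamma_Q, computed from
    its singular values. In our experiments (Sections 4.3-4.4), values well
    above ~10 coincided with noisy, flickering reconstructions."""
    sigma = np.linalg.svd(S_Q @ Gamma_Q, compute_uv=False)
    return sigma.max() / sigma.min()
```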

Table 1 gives the condition number $\kappa$ for the 4 combinations of LEDs that we tested. See Visualization 2 for the corresponding videos. We observed that whenever the system matrix was poorly conditioned, the reconstruction was noisy and flickered; we attribute this to crosstalk between the contributions of lights with overlapping spectra (e.g. the blue and UV LEDs in the B-UV-W combination), which translates into a poorly conditioned system matrix. We observed sharp reconstructions with little noise for the color combinations R-G-B and A-G-UV, whereas the reconstructions with the two other color combinations, B-UV-W and R-UV-B, flickered and showed amplified noise.


Table 1. Condition number $\kappa$ depending on the LEDs used (see Visualization 2).

4.4 The condition number of the system matrix depends on the illumination functions

We next investigated the influence of the illumination functions on the quality of the reconstructions. To that end, we performed an experiment similar to that in Section 4.3 but kept a single set of LEDs (R-G-B) to image the same repeating motion, while varying the illumination functions. We then compared the condition number of the system matrix corresponding to each illumination pattern with the quality of the reconstruction. Table 2 shows the illumination intensities of the LEDs in the sub-frame time intervals for the four different cases, with the corresponding condition number of the system matrix. We observed good reconstructions when the system was well-conditioned. A comparative video is provided (Visualization 3).


Table 2. Condition number $\kappa$ with various time functions. $R_1, R_2, R_3$ are the intensities of the red LED at the first, second, and third time-steps of the exposure time, respectively (see Visualization 3).

5. Applications

5.1 Model selection applied to two samples

To demonstrate our method’s ability to recover both the hue of an object and a temporally super-resolved sequence, we imaged two paper cards, one whose hue was white and the other off-white. In both cases, our method could both retrieve temporally super-resolved image sequences and assign RGB values for the base color, directly from the raw images (Fig. 3(b, c)).

 figure: Fig. 3.

Fig. 3. When any of several objects with different, but known, hues enters the field of view, the system matrix adapted to the object can be automatically selected. (a) Color image of a static scene, with two kinds of paper illuminated by a white light. The gray areas show the calibration ROIs. (b) Each sample has a corresponding calibrated set of parameters $\mathbf{\Gamma }$ and $\boldsymbol{D}$ as well as the sample hue. (c) Data acquisition of a dynamic scene with the active illumination. (d) Reconstruction with model selection as explained in Section 3.3. (e, f) Two reconstructions with our method after model selection, using RGB LEDs and reconstructing the hue of the samples from the raw data acquired with our method (see Visualization 4). Scalebar: 5 cm.


5.2 Fast imaging of the beating heart

To illustrate the applicability of our HESM method for biological microscopy, we imaged the beating heart of a live zebrafish larva, 4 days post fertilization (dpf), mounted in agarose gel, with a wide-field microscope under transmitted illumination.

Zebrafish (wild-type AB strain, Zebrafish International Resource Center) were raised under standard laboratory conditions (14/10 hour light/dark cycle, fish water of the system (ZEBTEC Techniplast Aquatic Solution) at 26.5$^{\circ }$C temperature, 500 $\mu$S conductivity, and pH 7.3) in a facility approved by the Veterinary Service of the State of Valais (Switzerland). Fertilized eggs were collected and the embryos raised at 29$^{\circ }$C in standard E3 medium in an incubator (Termaks B8054), supplemented with 0.003% 1-phenyl 2-thiourea (PTU) from 24 hours post fertilization (hpf) to prevent pigmentation. For imaging, we embedded 4 dpf larvae, anesthetized with 0.1% tricaine (ethyl 3-aminobenzoate methanesulfonate salt, Sigma), in low melting agarose.

Following raw image acquisition (see hardware and parameter setup, above), we selected an ROI over which we applied our method, keeping the rest of the images in color. Figure 4 shows a single frame of Visualization 5, where the reconstructed grayscale sequence allows clearly visualizing cells within the heart wall of both the atrium and the ventricle, which are blurred in the color images. Visualization 5 first shows only color imaging of the beating heart, then shows reconstructions from our method within an ROI, and finally a side-by-side comparison of standard color imaging and our method on the same ROI. Our method therefore offers the flexibility of using either RGB or fast monochrome imaging.

 figure: Fig. 4.

Fig. 4. Flexible color and fast grayscale imaging of the beating heart in a 4 days post fertilization zebrafish larva. (a) Single frame of an RGB color movie, with (b) ROI with reconstructed grayscale (no hue was measured beforehand, gray reconstruction) movie at threefold increased frame-rate. See Visualization 5 for the full movie. Anatomical features visible include the ventricle (v), the atrium (at), the bulbus arteriosus (BA) and the pericardium (p). Orientation is indicated as V: ventral, D: dorsal, A: anterior, P: posterior. Scalebar: $100 \mu$m.


6. Discussion

The hardware implementation of our method has only a few hard design requirements. In particular, frame acquisition and illumination must be in synchrony, which is straightforward to implement provided the camera has a trigger output. Given variability in hardware clocks and data transfer, independently running the illumination and acquisition systems results in rapid asynchrony and departure from the acquisition model (making it difficult to invert the system matrix). Beyond synchronization, given the frame rate of our camera (60 frames per second), neither the clock rate of our micro-controller (16 MHz) nor the rise time of our LEDs (20 ns) appeared to limit our method.

In order to calibrate our system as described in Section 3.2, it is necessary to manually select an ROI. Our method requires a static sample (or scene) for calibration, and we observed that the best results were obtained when the calibration ROIs covered a wide range of intensities. Since acquiring calibration images is fast, many images of identical regions can be acquired fairly rapidly, which helps limit the influence of noise, in particular in darker regions. We empirically found that acquiring $P = 30$ images gave good calibration results and used that number for all experiments in Section 4.

The results in Section 4.2 show a time-resolution improvement by a factor $\sqrt {8} \approx 2.8$ with a temporal sampling improvement by a factor of 3. This factor depends on the number of available channels; using additional channels (e.g. through wavelength splitting and the use of multiple cameras), higher resolution factors might be achievable. Motion blur could be further reduced by shortening the illumination pulses, yet without improving the frame-rate, at the cost of a higher peak intensity (which is sometimes undesirable in microscopy) and at the risk of producing aliasing. Longer-duration pulses are less prone to aliasing and might help preserve live samples, as their peak intensity can be lower for a given camera integration time.

Although the model presented in Section 2 is non-specific regarding the precise illumination functions, the discrete formulation in Eq. (7) reveals that our model becomes ill-posed should the matrix $\boldsymbol{S}_Q \mathbf{\Gamma }_Q$ not be full rank, i.e. if $ \textrm{rank}(\boldsymbol{S}_Q \mathbf{\Gamma }_Q) \lt Q$. This provides us with a tool for verifying that a proposed illumination function does not lead to an ill-posed system. For example, should data from a particular channel be missing, $Q$ should be lowered such as to have $Q \leq C$ and the matrices $\boldsymbol{S}_Q$ adapted accordingly. Similarly, the matrix $\mathbf{\Gamma }_Q$ should be well-conditioned, which depends on the sample itself, the color filters of the camera, and the spectrum of the lighting. The spectrum absorbed and reflected by the sample and then acquired by the camera should form a matrix of rank $Q$ when calibrating with the procedure in Section 3.2. With the condition number $\kappa$ (Section 4.3), we have a means of predicting the quality of reconstructions with our method given the sample, the LEDs, and the chosen camera. The results in Section 4.4 suggest that a condition number above 10 should be avoided in order to guarantee a good reconstruction. For a given matrix $\mathbf{\Gamma }_Q$ (specified by the sample, LED, and detection spectra), the matrix $\boldsymbol{S}_Q$ containing the temporal patterns could be optimized for $\boldsymbol{S}_Q\mathbf{\Gamma }_Q$ to have maximal rank, an NP-hard problem which we have not pursued here.

We investigated the impact of the illumination hues in Section 4.3. Since the camera captures transmitted or reflected light, the same conclusions apply to changing the hue of the sample, rather than the illumination.

Simply using more lights would not necessarily produce a more stable system matrix; rather, choosing an appropriate combination of lights does. A simple case for choosing the number of LEDs, channels, and the super-resolution factor is to set $L = C = Q$ (of course, one must still ensure that the choice of lights and illumination functions produces well-conditioned matrices, as discussed above).

In practice, temporal flickering may remain in the reconstructed videos, depending on how well the system matrix $\boldsymbol{S}_Q \mathbf{\Gamma }_Q$ matches dynamic conditions. Calibration and experimental conditions may differ, for example, because calibration is carried out in regions different from those imaged. Furthermore, LED rise and fall times during calibration and imaging may differ, as the LEDs’ electronic current drivers can be frequency-dependent.

For single-hue objects, the possibility of recovering the base hue simultaneously with reconstructing temporally super-resolved image sequences (as shown in Section 3.3) is particularly appealing, as this advantage comes without requiring an increase in the bandwidth of the system. This capability is preserved even in the case of multiple objects with different hues, via the model selection scheme we proposed in Section 3.3. Furthermore, for applications in microscopy, where often only a single camera can be mounted, making rapid switching to a different camera or view-port unfeasible, our method brings clear practical advantages: (i) the same camera can be used both for color and fast imaging, as demonstrated in Section 5.2, and (ii) the motion blur can be reduced when acquiring stacks in continuous scanning mode. While the former may be particularly attractive for building versatile imaging systems, the latter may be particularly relevant for applications that require fast inspection, such as screening or flow cytometry. We also foresee that the improved frame-rates and temporal resolution could be beneficial for object tracking applications, a point that we may investigate in the future.

Since the wavelength of the photons emitted by a fluorophore is independent of the excitation wavelength, our method would not be applicable to samples labeled with a single fluorophore. However, our method could be applied for imaging structures simultaneously co-labeled with two or more fluorophores: while the shapes of the individual emission spectra would remain unchanged (except for scaling), their combined emission intensity (and therefore the resulting combined spectrum) will vary with the relative excitation intensities of the illumination sources, so our method could, in principle, provide similar benefits for fluorescence imaging.

We provide the code, data, and instructions to reproduce results in this paper [37].

7. Conclusion

We introduced a general computational imaging method to carry out temporal super-resolution with a color camera and a set of multi-spectral active illumination sources. Each frame includes multiple copies of the signal at various times, encoded in the hue of the image. The computational procedure retrieves a high time-resolution signal, along with the base-hue, under the assumption that the imaged sample has a single color. We showed a direct method to characterize the robustness of the method, depending on the sensing and illumination spectra, as well as the base-hue of the imaged sample. We experimentally showed a temporal resolution improvement of a factor 2.8 combined with a three-fold increase of the frame-rate. We illustrated our method with an application exhibiting both color imaging and fast grayscale (on a chosen ROI) of the beating heart, showing its applicability to bio-microscopy.

Funding

Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (200020_179217, 200021_159227, 206021_164022).

Acknowledgement

The authors would like to thank Linda Bapst-Wicht from IRO (Institut de Recherche en Opthalmologie, Sion, Switzerland) for providing the zebrafish used in Section 5.2.

Disclosures

The method presented in this paper is the subject of a European patent application (EP19154253).

References

1. J. Vermot, S. E. Fraser, and M. Liebling, “Fast fluorescence microscopy for imaging the dynamics of embryonic development,” HFSP J. 2(3), 143–155 (2008). [CrossRef]  

2. J. B. Bosse, N. S. Tanneti, I. B. Hogue, and L. W. Enquist, “Open led illuminator: A simple and inexpensive LED illuminator for fast multicolor particle tracking in neurons,” PLoS One 10(11), e0143547 (2015). [CrossRef]  

3. P. G. Pitrone, J. Schindelin, L. Stuyvenberg, S. Preibisch, M. Weber, K. W. Eliceiri, J. Huisken, and P. Tomancak, “OpenSPIM: an open-access light-sheet microscopy platform,” Nat. Methods 10(7), 598–599 (2013). [CrossRef]  

4. E. J. Gualda, T. Vale, P. Almada, J. A. Feijó, G. G. Martins, and N. Moreno, “OpenSpin Microscopy: an open-source integrated microscopy platform,” Nat. Methods 10(7), 599–600 (2013). [CrossRef]  

5. A. Edelstein, N. Amodaj, K. Hoover, R. Vale, and N. Stuurman, “Computer control of microscopes using $\mu$Manager,” Curr. Protoc. Mol. Biol. 92(1), 1–17 (2010). [CrossRef]  

6. A. Edelstein, M. Tsuchida, N. Amodaj, H. Pinkard, R. Vale, and N. Stuurman, “Advanced methods of microscope control using $\mu$Manager software,” J. Biol. Methods 1(2), 10 (2014). [CrossRef]  

7. E. Shechtman, Y. Caspi, and M. Irani, “Space-time super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. 27(4), 531–545 (2005). [CrossRef]  

8. T. Li, X. He, Q. Teng, Z. Wang, and C. Ren, “Space-time super-resolution with patch group cuts prior,” Signal Process. 30, 147–165 (2015). [CrossRef]  

9. A. Agrawal, M. Gupta, A. Veeraraghavan, and S. G. Narasimhan, “Optimal coded sampling for temporal super-resolution,” in CVPR, (2010), pp. 599–606.

10. R. Pournaghi and X. Wu, “Coded Acquisition of High Frame Rate Video,” IEEE Trans. Image Process. 23(12), 5670–5682 (2014). [CrossRef]  

11. T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40(17), 4054–4057 (2015). [CrossRef]  

12. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992 (2015). [CrossRef]  

13. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–1306 (2013). [CrossRef]  

14. K. G. Chan, S. J. Streichan, L. A. Trinh, and M. Liebling, “Simultaneous temporal superresolution and denoising for cardiac fluorescence microscopy,” IEEE Trans. Comput. Imaging 2(3), 348–358 (2016). [CrossRef]  

15. A. Veeraraghavan, D. Reddy, and R. Raskar, “Coded strobing photography: Compressive sensing of high speed periodic videos,” IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 671–686 (2011). [CrossRef]  

16. G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nat. Methods 7(3), 209–211 (2010). [CrossRef]  

17. A. Koehler, “Ein neues Beleuchtungsverfahren für mikrophotographische Zwecke,” Zeitschrift für wissenschaftliche Mikroskopie und für Mikroskopische Technik 10, 433–440 (1893).

18. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit,” J. Opt. Soc. Am. 56(11), 1463–1471 (1966). [CrossRef]  

19. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57(7), 932–941 (1967). [CrossRef]  

20. P. J. Verveer, Q. S. Hanley, P. W. Verbeek, L. J. Van Vliet, and T. M. Jovin, “Theory of confocal fluorescence imaging in the programmable array microscope (pam),” J. Microsc. 189(3), 192–198 (1998). [CrossRef]  

21. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

22. R. Heintzmann, T. M. Jovin, and C. Cremer, “Saturated patterned excitation microscopy—a concept for optical resolution improvement,” J. Opt. Soc. Am. A 19(8), 1599–1609 (2002). [CrossRef]  

23. L. H. Schaefer, D. Schuster, and J. Schaffer, “Structured illumination microscopy: artefact analysis and reduction utilizing a parameter optimization approach,” J. Microsc. 216(2), 165–174 (2004). [CrossRef]  

24. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012). [CrossRef]  

25. F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21(2), 601–614 (2012). [CrossRef]  

26. T. Zimmermann, J. Rietdorf, and R. Pepperkok, “Spectral imaging and its applications in live cell microscopy,” FEBS Lett. 546(1), 87–92 (2003). [CrossRef]  

27. D. W. Staudt, J. Liu, K. S. Thorn, N. Stuurman, M. Liebling, and D. Y. R. Stainier, “High-resolution imaging of cardiomyocyte behavior reveals two distinct steps in ventricular trabeculation,” Development 141(3), 585–593 (2014). [CrossRef]  

28. S. S. Gorthi, D. Schaak, and E. Schonbrun, “Fluorescence imaging of flowing cells using a temporally coded excitation,” Opt. Express 21(4), 5164–5170 (2013). [CrossRef]  

29. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: Motion deblurring using fluttered shutter,” ACM Trans. Graph. 25(3), 795–804 (2006). [CrossRef]  

30. Y. Shiba, S. Ono, R. Furukawa, S. Hiura, and H. Kawasaki, “Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light,” ICCV (2017).

31. P. Rangarajan, I. Sinharoy, P. Milojkovic, and M. P. Christensen, “Active computational imaging for circumventing resolution limits at macroscopic scales,” Appl. Opt. 56(9), D84–D107 (2017). [CrossRef]  

32. G. H. Golub and C. F. Van Loan, Matrix Computations (The John Hopkins University Press, 1996), 3rd ed.

33. R. E. Welsch, “Robust regression using iteratively reweighted least-squares,” Commun. Stat. Theory. 6(9), 813–827 (1977). [CrossRef]  

34. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in ICASSP, (2008), pp. 3869–3872.

35. I. Daubechies, R. Devore, M. Fornasier, and C. S. Gunturk, “Iteratively reweighted least squares minimization for sparse recovery,” Comm. Pure Appl. Math. 63(1), 1–38 (2010). [CrossRef]  

36. ILOG-CPLEX, “High-performance software for mathematical programming and optimization,” http://www.ilog.com/products/cplex, (2005).

37. https://github.com/idiap/hesm_distrib.

38. M. Bertero and P. Boccacci, Introduction to inverse problems in imaging (IOP Publishing, Bristol, UK, 1998).

Supplementary Material (5)

Name / Description
Visualization 1       This video shows the experiment of Figure 2, where we characterize the temporal resolution improvement achieved with our method.
Visualization 2       This video shows the experiment presented in Section 4.3, where the same sample is imaged with various LED combinations. The video shows reconstructions with each one of the LED combinations. The condition number of the system matrix is given.
Visualization 3       This video shows the experiment presented in Section 4.4, where the same sample is imaged with various illumination temporal sequences (and the same LEDs each time). For each reconstruction, the condition number of the system matrix is given.
Visualization 4       This video shows the experiment presented in Section 5.1 and Figure 3. Our method allows recovering which model is correct and retrieving the base hue of the sample, which was calibrated beforehand.
Visualization 5       This video presents an application of our method to the beating heart of a live zebrafish larva. It highlights the possibilities offered by our method, making it easy to switch between standard color imaging and fast grayscale imaging.
