
Motion-resolved quantitative phase imaging


Abstract

The temporal resolution of quantitative phase imaging with Differential Phase Contrast (DPC) is limited by the requirement for multiple illumination-encoded measurements. This inhibits imaging of fast-moving samples. We present a computational approach to model and correct for non-rigid sample motion during the DPC acquisition in order to improve temporal resolution to that of a single-shot method and enable imaging of motion dynamics at the framerate of the sensor. Our method relies on the addition of a simultaneously-acquired color-multiplexed reference signal to enable non-rigid registration of measurements prior to phase retrieval. We show experimental results where we reduce motion blur from fast-moving live biological samples.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Quantitative phase imaging (QPI) [1–7] enables stain-free and label-free imaging of transparent biological samples in vitro [8,9]. Unlike non-quantitative phase contrast techniques (e.g. Zernike Phase Contrast [10], Differential Interference Contrast (DIC) [11]), QPI methods are able to separate out the effects of phase and absorption. However, this generally comes at a cost of lost temporal or spatial resolution due to the need for multiple measurements. Here, we implement QPI without sacrificing speed or resolution, for the specific case of coded-illumination QPI.

Quantitative Differential Phase Contrast (DPC) [3,4,12,13] recovers the complex transmittance function of a sample from several coded-illumination measurements and a phase retrieval optimization. DPC achieves spatial resolution corresponding to twice the coherent diffraction limit and is practically implemented with an LED array based coded-illumination source on a commercial microscope (Fig. 1(a)) [3, 14]. Traditional DPC measurements consist of 4 intensity images, each captured with a half-circle illumination pattern at a different rotation angle (Fig. 1(b)). The time-multiplexed nature of the measurements requires the implicit assumption that the sample is not moving during the acquisition. Of course, live biological samples may be non-stationary (defined as moving more than one pixel during the acquisition time). When only a single measurement is required (single-shot), the exposure time of the sensor can be scaled to guarantee approximately stationary behavior; however, for multi-shot methods, the acquisition time is limited by the sensor readout time. While each individual measurement may have an appropriate exposure time to guarantee the stationary assumption, motion occurring between measurements during the multi-shot DPC acquisition will cause errors in the reconstructed complex-field.


Fig. 1 Motion-resolved quantitative Differential Phase Contrast (mrDPC). (a) Coded-illumination microscope with RGB LED array as the illumination source. (b) Traditional four-image DPC acquisition with rotating half-circle sources. Because the polystyrene bead is moving, the reconstructed phase suffers from motion blur artifacts. (c) Our method, mrDPC, uses traditional DPC source patterns in the green color channel and an additional constant navigator source pattern (half-circle) in the red channel. The motion-resolved phase reconstruction corrects the effects of the sample’s non-rigid motion.


Interferometry-based QPI techniques such as digital holographic microscopy [5] and white light diffraction phase microscopy [6] can be single-shot, but are limited in spatial resolution by the coherent diffraction limit and are sensitive to system imperfections that cause speckle. Transport of intensity equation based QPI techniques [7] can be single-shot if the chromatic aberrations of the system are great enough [15] or with additional camera hardware [16]. DIC-based QPI techniques [17] can be single-shot with the addition of specialized hardware. Other methods rely on simultaneously acquiring multiple measurements via color multiplexing [18–20], polarization multiplexing [21,22], or spatial multiplexing [23]. In the case of color multiplexing, the implicit assumption is made that the sample has no chromatic dispersion and is colorless; and in the case of polarization multiplexing the implicit assumption is made that the sample is not birefringent, both of which may be difficult to guarantee when imaging biological samples. Finally, in the case of spatial multiplexing, the space-bandwidth product of the reconstructed phase will be limited by the division of the sensor into smaller non-overlapping segments.

Here, we demonstrate that non-rigid sample motion occurring between the frames of a multi-shot DPC acquisition can be estimated and corrected. Techniques for rigid and non-rigid motion estimation and correction have been comprehensively applied in other fields (e.g. magnetic resonance imaging [24,25], multi-frame image enhancement [26], remote sensing [27], computer vision [28, 29]), but not in QPI microscopy. It is not straightforward to apply these existing methods to DPC because they make an assumption that the spatial frequency content between any two images being registered is similar [30]. This assumption is violated when estimating the motion between DPC measurements, since each coded-illumination measurement has a unique spatial frequency contrast of the sample’s optical phase. Thus, estimation of motion between raw DPC measurements will fail when using traditional registration techniques.

In order to perform motion estimation for DPC images, we introduce a new method, termed motion-resolved DPC (mrDPC), that uses an additional simultaneously-acquired color-multiplexed measurement with a constant coded-illumination pattern (Fig. 1(c)). This navigator measurement uses one color channel of the source LEDs to display a constant illumination pattern (a half-circle), thus maintaining constant spatial frequency contrast. A color camera then separates the navigator and DPC measurements (without any assumptions regarding the dispersion or color of the sample). Non-rigid motion can then be estimated from the navigator measurements and corrected in the DPC measurements prior to phase retrieval. In this way, quantitative phase images can be recovered for each time point of the captured data, resulting in temporal resolution equivalent to single-shot methods. We demonstrate proof of principle experimental results in which blurring due to live sample motion (Amoeba proteus and Caenorhabditis elegans) is reduced.

2. Methods

Our proposed method, mrDPC, captures each DPC measurement sequentially, while simultaneously capturing a color-multiplexed navigator, using an LED array microscope. We program the green color channel of the LED array with rotating half-circle patterns (for DPC) and the red color channel with a constant half-circle pattern (for the navigator), illustrated in Fig. 2(a). The signal is measured on a color camera and separated via demosaicking with a precalibrated spectral sensitivity matrix (see Appendix: color multiplexing) into DPC measurements and navigator measurements (Fig. 2(a)).


Fig. 2 Motion-resolved DPC uses traditional DPC (rotating half-circle) illumination patterns in the green color channel and a constant half-circle navigator pattern in the red color channel. (a) Simulations of the captured DPC images (green box) and the navigator images (red box) for a sample comprised of three polystyrene beads: one stationary, one moving up, and one moving down. (b) Motion is estimated between each time point and the reference time point (T = t2). Estimates are used to correct for motion in the DPC measurements, then (c) a DPC phase reconstruction is performed, eliminating the motion artifacts.


To achieve motion correction between the four DPC measurements, we need to register each image to the others. Because the different DPC illumination patterns result in different contrast, they cannot be directly registered to each other. The navigator measurements circumvent this problem; they can be registered to each other and the resulting motion estimates can then be applied to the DPC measurements. Specifically, we estimate the motion between three pairs of measurements (between t0 and t2, t1 and t2, and t3 and t2) as outlined in Sec. 2.1. The reference time point can be any of the four measurement time points; we chose T = t2. The motion estimates are plotted as vector fields in Fig. 2(b), where the arrows’ magnitude corresponds to the amount of the sample’s local displacement and the arrows’ orientation corresponds to the direction of the sample’s local displacement. The three motion estimates are applied to the raw DPC measurements at times t0, t1, t3, respectively, to register them to the reference measurement. The registration is performed by resampling with linear interpolation. Using the physics model in Sec. 2.2, we then linearly deconvolve the motion-corrected DPC measurements (Eq. 7) to recover the sample’s absorption and optical phase (Fig. 2(c)).
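For concreteness, the correction step can be sketched as follows, assuming the motion estimates are available as dense per-frame displacement fields. The array layout and helper name `warp_to_reference`, as well as the time-indexed containers `dpc` and `disp`, are our illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to_reference(image, disp):
    """Resample `image` onto the reference grid via linear interpolation.

    `disp` is a dense displacement field of shape (2, H, W) whose row/column
    offsets map each reference-grid pixel to its location in `image`.
    """
    H, W = image.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([rows + disp[0], cols + disp[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

# Register the t0, t1, t3 DPC frames to the reference at T = t2;
# dpc[t] and disp[t] are hypothetical time-indexed arrays.
registered = {t: warp_to_reference(dpc[t], disp[t]) for t in (0, 1, 3)}
registered[2] = dpc[2]  # the reference frame needs no correction
```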

2.1. Motion estimation

The task of removing motion artifacts can be formulated as a blind deconvolution problem [31,32] where the unblurred image and the blur kernel are jointly estimated; however, this does not account for non-rigid motion. In this work, we use our navigator measurements to correct for the sample’s non-rigid motion via image registration, enabling a wider array of biological applications.

To model non-rigid sample motion, our proposed method estimates a deformable mapping between pairs of images, for which there exist many algorithms [28,29,33–35]. We chose the Symmetric Normalization (SyN) method [33,34] for its state-of-the-art performance [36] and open-source availability [37]. The method is called symmetric because it is commutative with respect to the ordering of the two input images and therefore does not over-fit the deformable mapping estimate to either image. This is particularly important to us, so that we can arbitrarily choose the reference time point without biasing our results.

The SyN algorithm solves an optimization problem (Eq. 1) to estimate the deformable mapping between two images, I0(r) and I1(r), such that a similarity metric, 𝒮(I0, I1), is maximized and the deformable mapping is spatially smooth. The deformable mapping, g(r, t), is a function of space and time, where r denotes 2D spatial coordinates and t ∈ [0, 1] denotes a dimensionless time coordinate. At time t = 0, g(r, 0) maps I0(r) to itself, while at time t = 1, g(r, 1) maps I0(r) to I1(r). The SyN method achieves symmetry by jointly estimating a forward deformable mapping g0(r, t) between I0 and I1 and a backwards deformable mapping, g1(r, t), between I1 and I0. Mathematically, the algorithm can be written as:

$$\max_{g_0(r,t),\,g_1(r,t)} \;\; \mathcal{S}\big(I_0(g_0(r,0.5)),\; I_1(g_1(r,0.5))\big) \;-\; \mathcal{R}(v_0(r,t)) \;-\; \mathcal{R}(v_1(r,t)) \tag{1}$$
$$\text{subject to} \quad \frac{\partial g_i(r,t)}{\partial t} = v_i(g_i(r,t),\,t) \quad \text{for each } i \in \{0,1\} \tag{2}$$
$$g_i(r,0) = r \quad \text{for each } i \in \{0,1\}, \tag{3}$$
where our similarity metric is the normalized cross-correlation, defined as $\mathcal{S}(I_a, I_b) = \frac{\langle I_a, I_b \rangle}{\|I_a\|\,\|I_b\|}$. The mappings’ spatial smoothness is achieved by penalizing the term $\mathcal{R}(v(r,t)) = \int_{t=0}^{0.5} \|L\,v(r,t)\|^2 \, dt$ for each map. Here, $L = \nabla^2 + I$ is the linear differential operator, where $\nabla$ is the first-order difference, $I$ is the identity operator, and $v(r,t)$ is the velocity field corresponding to $g(r,t)$. This correspondence is enforced with the Lagrangian–Euler constraint [38] in Eq. (2). The SyN optimization is solved via gradient descent [33,34] and implemented in Dipy [37].
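Since the registration is implemented with Dipy [37], the estimation step can be sketched with its SyN interface. The multi-resolution iteration schedule and variable names below are illustrative assumptions, not the authors' exact settings:

```python
from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import CCMetric

# 2D normalized cross-correlation similarity, matching Eq. (1)
metric = CCMetric(2)

# coarse-to-fine gradient-descent schedule (iteration counts assumed)
sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[100, 50, 25])

# nav_ref: navigator image at the reference time point (T = t2)
# nav_mov: navigator image at t0, t1, or t3
mapping = sdr.optimize(nav_ref, nav_mov)  # returns a DiffeomorphicMap

# warp the simultaneously acquired DPC frame onto the reference grid
dpc_registered = mapping.transform(dpc_mov, interpolation="linear")
```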

2.2. Phase retrieval

After using the navigator measurements to estimate the motion, we correct for motion in the DPC measurements, which can then be used as input to a phase retrieval algorithm. Generally, the relationship between an object’s 2D complex transmittance function and measured intensity is non-linear, so recovery of phase requires non-linear optimization and an iterative solver. For in vitro biological samples, the “scatter-scatter” term is small and so we can make a weak object approximation, thus enabling phase recovery by simple linear deconvolution with weak object transfer functions (WOTFs) [3,4,7,39]. This linearization decouples the contributions of absorption and phase and allows us to express intensity in terms of linear contributions from: background, absorption and phase contrast. In the Fourier domain,

$$\tilde{y}(u) = B\,\delta(u) + H_\mu(u)\,\tilde{\mu}(u) + H_\phi(u)\,\tilde{\phi}(u), \tag{4}$$
where y is the intensity measurement and B is the DC term. Here, a tilde ($\tilde{\cdot}$) denotes the Fourier transform and u denotes 2D spatial frequency coordinates. $H_\mu(u)$ is the WOTF for the sample’s absorption and $H_\phi(u)$ is the WOTF for the sample’s phase. These terms are derived in [3]:
$$H_\mu(u) = S(u) \star P(u) + P(u) \star S(u) \tag{5}$$
$$H_\phi(u) = i\,\big(S(u) \star P(u) - P(u) \star S(u)\big), \tag{6}$$
where $P(u)$ is the complex pupil function, $S(u)$ is the illumination source distribution, and $\star$ denotes cross-correlation. In traditional DPC, the illumination sources are four rotating half-circles with radius $\mathrm{NA}_{\mathrm{obj}}$ oriented right, bottom, left, and top (see Fig. 1(b)).
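As an illustration, the WOTFs of Eqs. (5)-(6) can be evaluated numerically for a given source and pupil. This is a minimal sketch, assuming both are sampled on a common centered spatial-frequency grid; it omits the normalization (e.g., by total source brightness) that a quantitative implementation would include:

```python
import numpy as np
from scipy.signal import correlate

def wotfs(S, P):
    """Absorption and phase WOTFs (Eqs. (5)-(6)) for a source S(u) and a
    complex pupil P(u) sampled on the same centered frequency grid."""
    SP = correlate(P, S, mode="same", method="fft")  # (S * P)(u), cross-correlation
    PS = correlate(S, P, mode="same", method="fft")  # (P * S)(u), cross-correlation
    return SP + PS, 1j * (SP - PS)                   # H_mu(u), H_phi(u)

def half_circle_source(n, angle_deg):
    """Half-circle source of unit radius on an NA_obj-normalized grid."""
    fy, fx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                         indexing="ij")
    inside = fx**2 + fy**2 <= 1.0
    theta = np.deg2rad(angle_deg)
    return (inside & (np.cos(theta) * fx + np.sin(theta) * fy >= 0)).astype(float)

# circular, aberration-free pupil and the four rotating half-circle sources
n = 256
fy, fx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
P = (fx**2 + fy**2 <= 1.0).astype(complex)
H_mu, H_phi = zip(*[wotfs(half_circle_source(n, a), P)
                    for a in (0, 90, 180, 270)])
```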

We recover the quantitative phase and absorption by linearly deconvolving the four motion-corrected measurements, $\tilde{y}^{(j)}$ (enumerated by their coded-illumination patterns: top (0), right (1), bottom (2), left (3)), with their respective WOTFs:

$$\min_{\tilde{\mu},\,\tilde{\phi}} \; \sum_{j=0}^{3} \left\| \tilde{y}^{(j)} - H_\mu^{(j)}\tilde{\mu} - H_\phi^{(j)}\tilde{\phi} \right\|_2^2 + \lambda_\mu \|\tilde{\mu}\|_2^2 + \lambda_\phi \|\tilde{\phi}\|_2^2. \tag{7}$$
Here, λμ and λϕ are regularization parameters, which are set to trade off data consistency against penalties on the low frequencies in ϕ and the high frequencies in μ. The necessity of the regularization comes from the phase WOTF’s low sensitivity to low frequencies and the absorption WOTF’s low sensitivity to high frequencies. The optimization in Eq. 7 can be reformulated as a least-squares problem,
$$\begin{bmatrix} \sum_{j=0}^{3} \bar{H}_\mu^{(j)} H_\mu^{(j)} + \lambda_\mu I & \sum_{j=0}^{3} \bar{H}_\mu^{(j)} H_\phi^{(j)} \\[4pt] \sum_{j=0}^{3} \bar{H}_\phi^{(j)} H_\mu^{(j)} & \sum_{j=0}^{3} \bar{H}_\phi^{(j)} H_\phi^{(j)} + \lambda_\phi I \end{bmatrix} \begin{bmatrix} \tilde{\mu} \\ \tilde{\phi} \end{bmatrix} = \begin{bmatrix} \sum_{j=0}^{3} \bar{H}_\mu^{(j)} \tilde{y}^{(j)} \\[4pt] \sum_{j=0}^{3} \bar{H}_\phi^{(j)} \tilde{y}^{(j)} \end{bmatrix}, \tag{8}$$
to yield a closed-form solution for the motion-resolved absorption, $\tilde{\mu}^{*}$, and motion-resolved quantitative phase, $\tilde{\phi}^{*}$ (a bar, $\bar{\cdot}$, denotes complex conjugation):
$$\begin{bmatrix} \tilde{\mu}^{*} \\ \tilde{\phi}^{*} \end{bmatrix} = \begin{bmatrix} \sum_{j=0}^{3} \bar{H}_\mu^{(j)} H_\mu^{(j)} + \lambda_\mu I & \sum_{j=0}^{3} \bar{H}_\mu^{(j)} H_\phi^{(j)} \\[4pt] \sum_{j=0}^{3} \bar{H}_\phi^{(j)} H_\mu^{(j)} & \sum_{j=0}^{3} \bar{H}_\phi^{(j)} H_\phi^{(j)} + \lambda_\phi I \end{bmatrix}^{-1} \begin{bmatrix} \sum_{j=0}^{3} \bar{H}_\mu^{(j)} \tilde{y}^{(j)} \\[4pt] \sum_{j=0}^{3} \bar{H}_\phi^{(j)} \tilde{y}^{(j)} \end{bmatrix}. \tag{9}$$
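A minimal sketch of this closed-form inversion, applying the 2 × 2 inverse of Eq. (9) elementwise per spatial frequency. The regularization values are illustrative, and the sketch assumes the WOTFs share the frequency-sample layout of np.fft.fft2 (centered WOTFs would first need np.fft.ifftshift) and that the measurements are background-normalized:

```python
import numpy as np

def dpc_deconvolve(images, H_mu, H_phi, lam_mu=1e-1, lam_phi=1e-3):
    """Recover absorption and phase from four motion-corrected DPC images
    by solving Eq. (9); `H_mu`/`H_phi` are matching lists of WOTF arrays."""
    Y = [np.fft.fft2(im) for im in images]

    # entries of the 2x2 normal-equation matrix of Eq. (8), per frequency
    A11 = sum(np.abs(h) ** 2 for h in H_mu) + lam_mu
    A22 = sum(np.abs(h) ** 2 for h in H_phi) + lam_phi
    A12 = sum(np.conj(hm) * hp for hm, hp in zip(H_mu, H_phi))
    A21 = np.conj(A12)  # = sum of conj(H_phi) * H_mu

    # right-hand side of Eq. (8)
    b1 = sum(np.conj(h) * y for h, y in zip(H_mu, Y))
    b2 = sum(np.conj(h) * y for h, y in zip(H_phi, Y))

    # elementwise 2x2 inverse of [[A11, A12], [A21, A22]]
    det = A11 * A22 - A12 * A21
    mu_tilde = (A22 * b1 - A12 * b2) / det
    phi_tilde = (A11 * b2 - A21 * b1) / det

    return np.real(np.fft.ifft2(mu_tilde)), np.real(np.fft.ifft2(phi_tilde))
```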

3. Experimental results

To acquire coded-illumination measurements experimentally, we use a commercial Nikon TE300 microscope with a custom quasi-dome [14] illumination system (581 RGB LEDs, λR = 625nm, λG = 532nm, λB = 450nm). A QImaging optiMOS sCMOS color camera (1080 × 1920 pixels, 4.54μm pixel pitch) acquires 16-bit images with an exposure time of 25ms (∼30 fps).

Validating our method’s motion correction ability is challenging because of the difficulty of obtaining ground-truth phase for comparison. To address this, we start by capturing full-framerate videos (50×, NA=0.55) of a slow-moving sample, Amoeba proteus (Carolina Biological Supply), and take the traditional DPC reconstruction as ground truth, since we expect negligible motion between frames. We then decimate the dataset in time by a factor of 8 to emulate faster motion, reconstruct with both our method and traditional DPC, and compare the results to the ground-truth DPC result. As can be seen in Fig. 3(a), the bright water vacuoles and well-defined wall edges in the amoeba’s nucleus and contractile vacuole (gold arrows in Fig. 3) appear blurred in the time-decimated traditional DPC reconstruction, but not in our method’s reconstruction.


Fig. 3 Experimental Validation. Recovered quantitative phase images of live Amoeba proteus reconstructed (a) without motion (ground truth), (b) corrupted by sample motion as outlined in Sec. 3, and (c) motion resolved with mrDPC (Sec. 2). Insets highlight blurring of water vacuoles (bright spots) due to sample motion and its correction with mrDPC.


We next demonstrate our method with an even faster sample, live C. elegans (12.5×, NA=0.25), which generates significant non-rigid motion between the raw measurements (Fig. 4(a)). From these measurements, we reconstruct and compare traditional DPC with our mrDPC method (Fig. 4(b)). Insets highlight mrDPC’s correction of distortion around the head region (pink insets in Fig. 4(c)) and blurring of internal features (blue insets in Fig. 4(c)).


Fig. 4 Experimental results for motion-resolved DPC with fast-moving live C. elegans. (a) Raw uncorrected DPC intensity images. Insets highlight significant non-rigid sample motion of the head (pink) and body (blue) during the four-image acquisition. (b) Quantitative absorption and phase reconstructions without (left) and with (right) motion correction. (c) Insets highlight spatial distortion and blurring artifacts due to head (pink) and body (blue) motion. Gold arrows indicate correction of head motion in the phase reconstructions. Red arrows indicate correction of internal body feature motion in the absorption reconstructions.


By capturing a continuous video at the full framerate of the sensor, we can reveal biological motion dynamics of the C. elegans. Reconstructions are performed on a sliding window of measurements such that each window contains a full set of four DPC measurements and successive windows are offset by one measurement. The motion-resolved absorption and quantitative phase video reconstructions are compared with traditional DPC in Supplementary Material Visualization 1.
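A sketch of this sliding-window scheme, reusing the dpc_deconvolve sketch above; the helper register_window stands in for the registration of Sec. 2.1, and the pattern bookkeeping is our illustrative assumption:

```python
# frames[t] / navigators[t]: time-ordered DPC and navigator images;
# the illumination pattern cycles, so frame t was captured with pattern t % 4.
reconstructions = []
for t in range(len(frames) - 3):
    window = [frames[t + j] for j in range(4)]
    registered = register_window(window, navigators[t : t + 4])  # Sec. 2.1

    # reorder the WOTFs to match the pattern shown during each frame
    idx = [(t + j) % 4 for j in range(4)]
    mu, phi = dpc_deconvolve(registered,
                             [H_mu[i] for i in idx],
                             [H_phi[i] for i in idx])
    reconstructions.append((mu, phi))
```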

Finally, we compare our method to color-multiplexed DPC [18] (colorDPC), which is a single-shot QPI method (Fig. 5). ColorDPC encodes the information required for reconstruction into a single measurement using color multiplexing of the RGB LEDs and a color camera, under the assumption that the sample is non-dispersive (an assumption not required for mrDPC). Since colorDPC is a single-shot method, it does not suffer inter-frame motion blur, but it uses fewer measurements than mrDPC and traditional DPC for each reconstruction and thus has lower reconstruction SNR. In addition, the three color-encoded measurements for colorDPC have different bandwidths, each set by its respective encoding wavelength. As a result, the final reconstruction has orientation-varying high-frequency contrast, while traditional DPC and mrDPC do not (highlighted in Fig. 5).


Fig. 5 ColorDPC avoids motion blur by using a single image capture, but suffers a loss of quality. Experimental comparison of phase reconstructions (12.5×, NA=0.25) for a stationary phase target (max 0.8 radians) using (a) traditional DPC, (b) motion-resolved DPC, and (c) colorDPC. (d) Radial cross sections of the insets highlight the improved resolution and contrast achieved with traditional DPC (green) and mrDPC (purple) over colorDPC (blue).


4. Discussion

Our proposed method, mrDPC, can correct artifacts due to sample motion that is fast enough to cause motion blur across the four captured DPC images, but not fast enough to cause motion blur within each measurement. In the case of Amoeba proteus, the sample motion is slow enough over the duration of the multi-shot DPC acquisition that the sample can be assumed stationary and no motion correction is necessary. In the case of C. elegans, the stationarity assumption is violated, but each individual captured image is unblurred, so mrDPC resolves the motion between frames. In the most general case, the sample is non-stationary even within each single measurement, so motion-induced blur is present within frames. To address this, strobed illumination could be used to effectively shorten the capture time of each measurement and ensure stationarity. This strategy is analogous to our time-decimated validation in Sec. 3 (measurements acquired with delays between them).

Design of the navigator pattern also affects performance. We assume that the structural motion between the navigator measurements is the same motion that occurs between DPC measurements. This should hold, since each navigator measurement is acquired simultaneously with its corresponding DPC measurement and its illumination is similar to the DPC illumination pattern in terms of bandwidth. This ensures that the motion of the highest-resolution features in the DPC measurements will also be captured in the navigator measurements. Further, the navigator measurement must have sufficient SNR and gradient information [40] to perform motion estimation. While SNR is not rigorously characterized here, the power and number of LEDs in the navigator pattern are equal to those of the DPC pattern, so that after spectral unmixing neither contributes much additional noise to the other’s measurements.

Phillips et al. [18] note that only three half-circle coded-illumination measurements are required to perform the quantitative DPC reconstruction. We could incorporate this by performing our method on three rather than four measurements; however, performance might degrade, similar to the result shown in Fig. 5(c). Our method is not limited to a fixed number of measurements; if more measurements improve reconstruction results (e.g., increased SNR), we can incorporate them simply by registering the additional measurements to the reference.

One limitation of the present method is its reliance on the weak object approximation, which only applies to samples with relatively weak phase and absorption. Since the motion correction is independent of the phase retrieval method, non-linear methods can be used when the weak object approximation is violated. In that case, the linear reconstruction can serve as a good initialization for a non-linear phase retrieval optimization [4].

5. Conclusion

We present a computational method, motion-resolved DPC, that achieves reconstruction quality similar to that of traditional DPC’s quantitative phase images, while correcting the blurring caused by sample motion during the four-image acquisition. We validate our method’s navigator-based non-rigid motion estimation and correction on live Amoeba proteus sample motion. Furthermore, we motion-resolve even faster live C. elegans and reveal motion dynamics at the frame rate of the camera with video reconstructions.

Appendix: color multiplexing

Color sensors with a traditional Bayer filter [41] spatially multiplex color filters to capture spatial-spectral information. However, these color filters are not perfectly selective to a single spectral band; rather, their sensitivities overlap in spectrum. This cross-talk makes it necessary to calibrate the pixels’ sensitivity relative to the spectrum of the illumination source, so that the desired spectral response can be demixed from the acquired measurements.

For our method, we estimate the spectral sensitivity of our sensor’s red and green pixels to our illumination source’s red (625nm) and green (532nm) LEDs. This is accomplished by spatially averaging each color channel’s intensity response to red-only and green-only illumination [18]. The results form the entries of a matrix, C, which can then be used by applying its pseudo-inverse, $C^{\dagger}$, to unmix future color-multiplexed measurements:

$$\begin{bmatrix} I_{\mathrm{nav}} \\ I_{\mathrm{dpc}} \end{bmatrix} = C^{\dagger} \begin{bmatrix} I_r \\ I_{g_1} \\ I_{g_2} \end{bmatrix} \tag{10}$$

Here, we use the green pixels (Ig1, Ig2) and green LEDs to encode the DPC signal, Idpc, and we use the red pixels (Ir) and red LEDs to encode the navigator signal, Inav.
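A minimal sketch of this calibration-and-unmixing step. The sensitivity values in C are placeholders; a real calibration fills them from the measured channel responses [18]:

```python
import numpy as np

# Rows: sensor channels (r, g1, g2); columns: LED colors (red -> navigator,
# green -> DPC). Entries are spatially averaged channel responses to
# red-only and green-only illumination -- placeholder values shown.
C = np.array([[0.90, 0.05],
              [0.10, 0.85],
              [0.10, 0.85]])

C_pinv = np.linalg.pinv(C)  # the 2 x 3 pseudo-inverse C-dagger of Eq. (10)

# I_r, I_g1, I_g2: demosaicked channel images of shape (H, W)
channels = np.stack([I_r.ravel(), I_g1.ravel(), I_g2.ravel()])
I_nav, I_dpc = (C_pinv @ channels).reshape(2, *I_r.shape)
```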

Funding

STROBE: A National Science Foundation Science & Technology Center under Grant No. DMR 1548924 and by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through Grant GBMF4562 to Laura Waller (UC Berkeley). Laura Waller is a Chan Zuckerberg Biohub investigator. Michael R. Kellman is supported by the National Science Foundation’s Graduate Research Fellowship under Grant No. DGE 1106400.

Acknowledgments

Special thanks to Emrah Bostan and Li-Hao Yeh for advance reading, comments, and discussion, as well as to Ben Larson for experimental advice.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. A. Barty, K. A. Nugent, D. Paganin, and A. Roberts, “Quantitative optical phase microscopy,” Opt. Lett. 23, 817–819 (1998).

2. M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, Quantitative Phase Imaging, vol. 57 (Elsevier, Amsterdam, The Netherlands, 2012).

3. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23, 11394–11403 (2015).

4. R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Opt. Express 23, 26672–26682 (2015).

5. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38, 6994–7001 (1999).

6. B. Bhaduri, H. Pham, M. Mir, and G. Popescu, “Diffraction phase microscopy with white light,” Opt. Lett. 37, 1094–1096 (2012).

7. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984).

8. G. Popescu, Quantitative Phase Imaging of Cells and Tissues, McGraw-Hill Biophotonics (McGraw-Hill Education, 2011).

9. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015).

10. F. Zernike, “Phase contrast, a new method for the microscopic observation of transparent objects part II,” Physica 9, 974–986 (1942).

11. G. Nomarski, “Nouveau dispositif pour l’observation en contraste de phase differentiel,” J. de Physique et le Radium 16, S88 (1955).

12. D. Hamilton and C. Sheppard, “Differential phase contrast in scanning optical microscopy,” J. Microsc. 133, 27–39 (1984).

13. S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34, 1924–1926 (2009).

14. Z. F. Phillips, R. Eckert, and L. Waller, “Quasi-dome: a self-calibrated high-NA LED illuminator for Fourier ptychography,” in Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP) (Optical Society of America, 2017), p. IW4E.5.

15. L. Waller, S. S. Kou, C. J. R. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18, 22817–22825 (2010).

16. B. E. Allman, K. Nugent, and C. Porter, “An optical system for producing differently focused images,” Tech. Rep. (2010).

17. D. Fu, S. Oh, W. Choi, T. Yamauchi, A. Dorn, Z. Yaqoob, R. R. Dasari, and M. S. Feld, “Quantitative DIC microscopy using an off-axis self-interference approach,” Opt. Lett. 35, 2370–2372 (2010).

18. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLOS ONE 12, 1–14 (2017).

19. W. Lee, D. Jung, S. Ryu, and C. Joo, “Single-exposure quantitative phase imaging in color-coded LED microscopy,” Opt. Express 25, 8398–8411 (2017).

20. T. Tahara, T. Kakue, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, “Parallel phase-shifting color digital holographic microscopy,” 3D Res. 1, 5 (2010).

21. T. Tahara, T. Kanno, Y. Arai, and T. Ozawa, “Single-shot phase-shifting incoherent digital holography,” J. Opt. 19, 065705 (2017).

22. N. Brock, C. Crandall, and J. Millerd, “Snap-shot imaging polarimeter: performance and applications,” in Proceedings of SPIE - The International Society for Optical Engineering, vol. 9099 (2014), pp. 9099–10003.

23. P. Sidorenko and O. Cohen, “Single-shot ptychography,” Optica 3, 9–14 (2016).

24. J. A. Maintz and M. A. Viergever, “A survey of medical image registration,” Med. Image Anal. 2, 1–36 (1998).

25. G. Hermosillo, C. Chef d’Hotel, K.-H. Herrmann, G. Bousquet, L. Bogoni, K. Chaudhuri, D. R. Fischer, C. Geppert, R. Janka, A. Krishnan, et al., “Image registration in medical imaging: applications, methods, and clinical evaluation,” in Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies (Springer, 2011), pp. 263–313.

26. M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP: Graph. Models Image Process. 53, 231–239 (1991).

27. Y. Bentoutou, N. Taleb, K. Kpalma, and J. Ronsin, “An automatic image registration for applications in remote sensing,” IEEE Trans. Geosci. Remote Sens. 43, 2127–2137 (2005).

28. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence (1981), pp. 674–679.

29. B. K. Horn and B. G. Schunck, “Determining optical flow,” Artif. Intell. 17, 185–203 (1981).

30. L. G. Brown, “A survey of image registration techniques,” ACM Comput. Surv. 24, 325–376 (1992).

31. D. A. Fish, A. M. Brinicombe, E. R. Pike, and J. G. Walker, “Blind deconvolution by means of the Richardson–Lucy algorithm,” J. Opt. Soc. Am. A 12, 58–65 (1995).

32. J.-F. Cai, H. Ji, C. Liu, and Z. Shen, “Blind motion deblurring from a single image using sparse approximation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 104–111.

33. B. Avants, C. Epstein, and J. C. Gee, “Geodesic image normalization in the space of diffeomorphisms,” in 1st MICCAI Workshop on Mathematical Foundations of Computational Anatomy: Geometrical, Statistical and Registration Methods for Modeling Biological Shape Variability (2006), pp. 125–135.

34. B. Avants, C. L. Epstein, M. Grossman, and J. C. Gee, “Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain,” Med. Image Anal. 12, 26–41 (2008).

35. J.-P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell’s demons,” Med. Image Anal. 2, 243–260 (1998).

36. A. Klein, J. Andersson, B. A. Ardekani, J. Ashburner, B. Avants, M.-C. Chiang, G. E. Christensen, D. L. Collins, J. Gee, P. Hellier, et al., “Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration,” NeuroImage 46, 786–802 (2009).

37. E. Garyfallidis, M. Brett, B. Amirbekian, A. Rokem, S. Van Der Walt, M. Descoteaux, and I. Nimmo-Smith, “Dipy, a library for the analysis of diffusion MRI data,” Front. Neuroinform. 8, 8 (2014).

38. M. I. Miller, A. Trouvé, and L. Younes, “On the metrics and Euler–Lagrange equations of computational anatomy,” Annu. Rev. Biomed. Eng. 4, 375–405 (2002).

39. D. Hamilton, C. Sheppard, and T. Wilson, “Improved imaging of phase gradients in scanning optical microscopy,” J. Microsc. 135, 275–286 (1984).

40. D. Robinson and P. Milanfar, “Fundamental performance limits in image registration,” IEEE Trans. Image Process. 13, 1185–1199 (2004).

41. B. E. Bayer, “Color imaging array,” US Patent 3,971,065 (1976).

Supplementary Material (1)

Visualization 1: The motion-resolved absorption and quantitative phase video reconstructions are compared side-by-side with traditional DPC’s absorption and quantitative phase video reconstructions.
