
Multispectral total-variation reconstruction applied to lens-free microscopy

Open Access

Abstract

Lens-free microscopy multispectral acquisitions are processed with an inverse problem approach: a multispectral total variation criterion is defined and minimized with the conjugate gradient method. Reconstruction results show that the method efficiently recovers the phase image of densely packed cells.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Lens-free microscopy [1] is a minimalist microscopy technique, which consists in acquiring diffraction patterns of a thin sample and computing (reconstructing) the optical properties of an object whose diffraction pattern would match the measurements. Owing to its simple design, the system is cheap, robust and incubator-proof. It can easily provide videos with a large field of view. Moreover, biological samples can be imaged without staining [2].

Lens-free acquisition is, however, problematic. While a complete description of light requires complex-valued fields, only their absolute values (intensities) are measured with a lens-free setup featuring a basic sensor. This lack of information creates artifacts in reconstructed images known as twin images [3].

A first strategy to reduce such artifacts consists in using priors on the sample, such as non-negativity [4], or priors related to sparsity [3, 5]. Another strategy consists in enriching the measurement set by acquiring diffraction images at multiple object-to-detector distances [6, 7] or at multiple wavelengths [2, 8].

This article relies on both strategies, i.e. the use of an enriched measurement set (multispectral acquisitions) combined with physically valid priors on the unknown sample. To this end, we developed an algorithm that minimizes a multispectral total variation (MS-TV) criterion. This criterion is designed to reconstruct phase images that show shape similarities across the different acquisition wavelengths.

The paper is organized in three parts. The method part describes the models used (i.e. the optical field, its propagation and its measurement), the multispectral total variation criterion, the computation of its gradient and an overview of the reconstruction process. The material part describes the tri-chromatic RGB acquisition system, which sequentially acquires three diffraction images of biological samples, and the results part shows an example of a reconstruction obtained from a culture of A549 mammalian cells. The reconstructed time-lapse movie of this cell culture (field of view of 29.4 mm², 3.7 days) is provided as supplementary material.

2. Method

Light is described as a complex scalar field and its propagation from one plane to another is obtained using a convolution kernel $h$. Therefore, if $A_0$ is the complex field just after the sample, the field $A_Z$ in the detector plane at distance $Z$ from the object is:

$$A_Z = A_0 * h_Z \tag{1}$$
where $*$ is the convolution operator. Here, we used a kernel $h_Z$ derived from Fresnel theory, but other propagation models such as the angular spectrum approach [9] could be employed without difficulty. In the following, $Z$ is assumed to be known. In practice, the focusing distance has to be estimated with a precision of approximately 10 μm; it can be evaluated by inspecting a Z-stack of back-propagated measurements or with an automated method as described in [10].

The propagation kernel is $h_Z^\lambda(\vec{r}) = \frac{1}{i\lambda Z}\exp\!\left(\frac{i\pi r^2}{\lambda Z}\right)$, $i$ being the imaginary unit. Note that the quantities $A_Z$, $A_0$, $h_Z$, as well as the measured intensity $I_Z$, the phase in the detector plane $\varphi_Z$ and the gradient of the objective function $\nabla\mathcal{E}$ (introduced in the following), are functions of the wavelength $\lambda$ and of the spatial position $\vec{r}$ in a plane. Hence, for example, the convolution of Eq. (1) can be written explicitly as:

$$A_Z^\lambda(\vec{r}\,') = \int d\vec{r}\; A_0^\lambda(\vec{r}) \cdot h_Z^\lambda(\vec{r}\,' - \vec{r}) \tag{2}$$
where $\vec{r}$ is the coordinate in the sample plane and $\vec{r}\,'$ is the one in the detector plane. Comments about the spatial discretization of Eq. (1) and Eq. (2) and their computation using Fourier transforms can be found in Appendix A.
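For illustration, the convolution of Eq. (1) can be implemented as a pointwise multiplication in Fourier space (see Appendix A). The following minimal sketch, written in Python/NumPy rather than the Matlab used for the paper's results, propagates a sampled complex field over a distance $Z$ using the transfer function of Eq. (6); the function name and the periodic boundary handling are our own choices, not the authors' code.

```python
import numpy as np

def propagate(field, wavelength, z, pitch):
    """Propagate a 2D complex field over a distance z (in meters) by multiplying
    its FFT with the Fresnel transfer function of Eq. (6); a negative z
    back-propagates (convolution with h_{-Z})."""
    ny, nx = field.shape
    # Spatial-frequency grids (cycles per meter) matching the detector sampling
    mu_x = np.fft.fftfreq(nx, d=pitch)
    mu_y = np.fft.fftfreq(ny, d=pitch)
    mu2 = mu_x[None, :]**2 + mu_y[:, None]**2
    # Fresnel kernel in Fourier space, cf. Eq. (6) of Appendix A
    H = np.exp(-1j * np.pi * wavelength * z * mu2)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```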

The quantity measured by a standard detector is the intensity of the light field: $I_Z = |A_Z|^2$. At this stage, we see that the modulus of $A_Z$ is known but its phase is undetermined. Whatever the phase map $\varphi_Z$, the field $A_Z = \sqrt{I_Z}\cdot\exp(i\varphi_Z)$ and the corresponding field at the sample plane, $A_0 = (\sqrt{I_Z}\cdot\exp(i\varphi_Z)) * h_{-Z}$, would perfectly match the data. The approach presented here consists in finding the phase field $\varphi_Z$ such that $A_0(\varphi_Z)$ minimizes the MS-TV criterion $\mathcal{E}(\varphi_Z)$:

$$\mathcal{E}(\varphi_Z) = \int d\vec{r}\; \sqrt{\varepsilon + \sum_\lambda \left( \left|\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\right|^2 + \left|\frac{\partial A_0^\lambda(\vec{r})}{\partial y}\right|^2 \right)} \tag{3}$$
where $\varepsilon$ is a small coefficient ($10^{-4}$) used to make Eq. (3) differentiable and to prevent a division by zero in the gradient derivation presented afterwards.

MS-TV is an L1-norm applied to the $2N_\lambda$ fields $\partial A_0^\lambda/\partial x$ and $\partial A_0^\lambda/\partial y$, where $N_\lambda$ is the number of wavelengths used (here $N_\lambda = 3$). A comment about the discretization of the operators $\partial/\partial x$ and $\partial/\partial y$ can be found in Appendix A. As illustrated in Fig. 1, the MS-TV criterion is lower when high values of these $2N_\lambda$ fields are colocalized. Combined with the fact that an L1-norm tends to promote sparsity, this shows that minimizing the MS-TV criterion leads to results with similar sharp edges at all wavelengths. Contrary to [2, 8], which present another way to process multispectral acquisitions, our MS-TV criterion is not designed to produce sample fields $A_0$ with the same values at different wavelengths (see Appendix D) but favors the colocalization of edges across wavelengths.
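As a concrete illustration of Eq. (3), the sketch below evaluates the MS-TV criterion for a stack of sample fields, one per wavelength. The discrete derivatives follow Appendix A; the function name and array layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ms_tv(A0, eps=1e-4):
    """Evaluate the MS-TV criterion of Eq. (3) for a stack A0 of complex sample
    fields of shape (N_lambda, ny, nx), one plane per wavelength."""
    # Discrete d/dx and d/dy with the stencil of Eq. (5): f[i,j] - f[i-1,j]
    dx = A0 - np.roll(A0, 1, axis=2)
    dy = A0 - np.roll(A0, 1, axis=1)
    # Pointwise norm over the 2*N_lambda derivative fields; eps keeps it differentiable
    e = np.sqrt(eps + np.sum(np.abs(dx)**2 + np.abs(dy)**2, axis=0))
    # Spatial L1-norm: the criterion is the sum (discrete integral) of e over the plane
    return e.sum()
```

The square root over the sum across wavelengths is what couples the channels: since $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$, edges that appear at the same position in all channels cost less than edges scattered across positions, which is the colocalization effect illustrated in Fig. 1.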

Fig. 1 Illustration showing that the multispectral L1-norm is lower when high values of the fields are colocalized. In case (a), the high values are at various spatial positions, whereas in (b) the fields are colocalized. The L1-norm is lower in the second case, as designed.

The optimization is performed using the non-linear conjugate gradients iterative method as described by the following pseudocode:

  • 1/ initialization: $k \leftarrow 0$, $D^{(0)} \leftarrow 0$, $\varphi_Z^{(0)} \leftarrow 0$
  • 2/ $k \leftarrow k + 1$
  • 3/ computation of the gradient $\nabla\mathcal{E}^{(k)}$, of the advancement direction $D^{(k)}$ and of the advancement step $\sigma^{(k)}$
  • 4/ phase update: $\varphi_Z^{(k)} \leftarrow \varphi_Z^{(k-1)} + \sigma^{(k)} \cdot D^{(k)}$
  • 5/ convergence test; if not reached, go back to 2/

For step 3/, the gradient $\nabla\mathcal{E}^{(k)}$ is computed analytically as detailed in Appendix C. The advancement direction $D^{(k)}$ is obtained by conjugating the gradient $\nabla\mathcal{E}^{(k)}$ with the previous advancement direction $D^{(k-1)}$ using the Hestenes-Stiefel formula [11]. The advancement step $\sigma^{(k)}$ (a scalar) is obtained by expanding the MS-TV criterion to second order along the advancement direction (a Hessian computation) and minimizing the resulting second-order polynomial in $\sigma^{(k)}$. For the test of step 5/, we simply stop the algorithm when the number of iterations reaches 100.
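The loop below sketches this procedure in Python/NumPy under stated assumptions: `grad_ms_tv` stands for the analytic gradient of Appendix C and `step_size` for the second-order line search, both passed in as callables and left abstract here; neither name comes from the paper.

```python
import numpy as np

def reconstruct(I_Z, grad_ms_tv, step_size, n_iter=100):
    """Non-linear conjugate gradients on the detector phase phi_Z
    (shape (N_lambda, ny, nx)), with Hestenes-Stiefel conjugation."""
    phi = np.zeros_like(I_Z)               # 1/ initialization: phi_Z^(0) = 0
    D = np.zeros_like(I_Z)
    g_prev = None
    for k in range(n_iter):                # 5/ fixed budget of 100 iterations
        g = grad_ms_tv(phi)                # 3/ analytic gradient, Eq. (18)
        if g_prev is None:
            D = -g                         # first iteration: steepest descent
        else:
            y = g - g_prev                 # Hestenes-Stiefel coefficient [11]
            beta = np.vdot(g, y).real / (np.vdot(D, y).real + 1e-30)
            D = -g + beta * D              # conjugation with previous direction
        sigma = step_size(phi, D)          # 3/ second-order expansion along D
        phi = phi + sigma * D              # 4/ phase update
        g_prev = g
    return phi
```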

3. Material

Measurements were carried out with a Cytonote (Iprasense) lens-free video microscope (Fig. 2(b)), whose geometry is illustrated in Fig. 2(a). Workable diffraction measurements must exhibit fringes; the optical system is therefore designed to provide partial coherence. The source is a tri-chromatic RGB LED, spatially filtered by a pinhole, which illuminates a thin sample on a microscope slide placed 50 mm from the source. A CMOS detector of 6.4 × 4.6 mm² with 3840 × 2748 pixels (pixel pitch of 1.67 μm) measures the diffraction patterns at a distance of ≈ 1 mm from the sample. In this configuration, the optical magnification is close to 1.

Fig. 2 (a) Geometry of the system. (b) Picture of the Cytonote lens-free video microscope.

To observe living cells over a long period of time without significant disturbance, 1 frame per 5 minutes is the maximum sampling rate achievable on this setup. Indeed, owing to the short cell-to-sensor distance, the heat generated by the detector during a frame acquisition is partially transferred to the biological sample; a faster acquisition rate would overheat the sample and could cause cell death. The spatial resolution limitations of the system are discussed in Appendix B, where we show that the limiting physical factor is the temporal coherence of the source.

4. Results

To illustrate the capabilities of the system, a video of the diffraction of a biological sample (A549 cells observed during 3.7 days, 1065 frames at 1 frame per 5 minutes) was acquired. A typical diffraction measurement is presented in Fig. 3(d) and Fig. 3(g). Since such samples are transparent (non-absorbing), most of the contrast is expected to be due to phase variations.

Fig. 3 (a) Example of a full 3840 × 2748 pixel reconstruction (phase, i.e. mean angle of $A_0$ over its 3 color channels) of an A549 cell culture acquired on the system of Fig. 2(b). Zooms of the reconstructed phase from RGB-channel measurements ((b) and (e)), from red-channel measurements only ((c) and (f)), and red-channel diffraction measurements ((d) and (g)). Images (b), (c) and (d) are 1000 × 1000 pixel zooms and images (e), (f) and (g) are 235 × 235 pixel zooms. This example is frame #465 of the entire reconstructed movie, which can be found in Visualization 1 of the supplementary material. The phase colorbar applies to the phase subplots ((a), (b), (c), (e) and (f)).

An optimal distance $Z$ (1340 μm) was first determined by visual inspection of back-propagations ($A_0 = \sqrt{I_Z} * h_{-Z}$) of the measurements for various $Z$. This distance was used for the entire set of 1065 frames. We then performed the reconstruction with the proposed method (100 iterations) using either the tri-chromatic acquisition or a single acquisition channel (red). An example of reconstruction obtained with the proposed method from a tri-chromatic acquisition is presented in Fig. 3(a), Fig. 3(b) and Fig. 3(e). The reconstruction time of one frame is about 40 seconds with a script running on Matlab R2017b and a desktop PC equipped with a Nvidia Titan Xp graphics card. The reconstruction results for all 1065 frames of the acquisition set are presented as a movie in the supplementary material (see Visualization 1). For comparison, the reconstruction obtained with the proposed method using only the red-channel acquisition is presented in Fig. 3(c) and Fig. 3(f) (18 seconds runtime).
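As an illustration of this focusing step, the sketch below generates a stack of back-propagations over candidate distances for visual inspection (or for scoring with an autofocus metric such as the edge-sparsity criterion of [10]). The file name, the red-LED wavelength and the scan range are hypothetical values, and `propagate` is the FFT sketch from Section 2.

```python
import numpy as np

# Focus sweep: back-propagate the measured amplitude, A_0 = sqrt(I_Z) * h_{-Z},
# for a range of candidate distances around the expected focus.
I_Z = np.load("red_channel_intensity.npy")          # hypothetical measurement file
wavelength, pitch = 0.630e-6, 1.67e-6               # assumed red-LED wavelength; detector pitch
z_candidates = np.arange(1200e-6, 1501e-6, 10e-6)   # 10 um steps, matching the ~10 um precision
stack = np.stack([propagate(np.sqrt(I_Z), wavelength, -z, pitch)
                  for z in z_candidates])           # one back-propagation per candidate Z
```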

We note that the algorithm successfully retrieves sharp results with the lens-free setup even when the fringes of various cells are intermixed (Fig. 3(g)). Fig. 3(c) and Fig. 3(f) show that the proposed algorithm used with only one wavelength is efficient in recovering a valid phase image of the sample even in the difficult case of densely packed biological cells. Furthermore, the detailed image of the reconstructed phase (Fig. 3(e)) shows the benefit of wavelength diversity, since results from tri-chromatic acquisitions are less impaired by twin-image residues (see the arrows in Fig. 3(e) and (f), as well as the flatter cell-free area).

In Appendix D, reconstructions performed from the same set of acquisition data but using the standard multispectral method described in [8] are presented for comparison purposes. We show that the two methods provide results with the same spatial resolution; however, the MS-TV algorithm reduces twin-image residuals and artifacts in this example.

5. Conclusion

In this work, we developed an algorithm for processing multispectral lens-free microscopy acquisitions. Since it relies on acquisitions of diffraction images, this technique is well adapted to visualizing transparent samples such as unstained cell cultures. The algorithm is based on the definition of a novel multispectral total variation criterion. Its minimization is performed with a standard non-linear conjugate gradient method, which uses the analytical computation of the criterion gradient, also presented in this paper.

Acquisitions from a densely packed culture of A549 mammalian cells are presented. In such a case, algorithms based on the determination of the support of the object would have failed, since the latter occupies most of the field of view. In contrast, the proposed algorithm is able to reconstruct valid phase images of the sample over a wide field of view. Detailed inspection of the reconstructions shows that processing multispectral acquisitions is advantageous compared to single-channel acquisitions, since it provides results with fewer artifacts.

Appendix A: Comments on discretization and the Fourier transform

For the sake of simplifying the text, fields are written as functions of the continuous spatial variable $\vec{r}$, but, since data acquisitions are performed on discrete grids corresponding to the detector sampling (in the example, a grid of size 3840 × 2748 with a pixel pitch of 1.67 μm), the actual computations are discrete operations.

Integrals such as the one of Eq. (2) are to be read as discrete sums; for example, the integral of a field $f$ is:

$$\int d\vec{r}\; f(\vec{r}) \approx \sum_{ij} f_{ij} \tag{4}$$
$i$ and $j$ being the indexes of the matrix $f$.

For the same reason, the partial derivatives appearing in Eq. (3) are discretized. For example, the partial derivative with respect to $x$ of a field $f$ is:

$$\frac{\partial f}{\partial x}(\vec{r}_{ij}) \approx \sum_k f_{i-k,j} \cdot S_x^k \tag{5}$$
with $S_x^0 = 1$ and $S_x^1 = -1$.
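With this stencil, the discrete derivative is a backward difference, as the short check below illustrates (the array orientation, rows indexed along $x$, is a convention of ours):

```python
import numpy as np

f = np.arange(12.0).reshape(4, 3)   # toy field sampled on a 4 x 3 grid
# Eq. (5) with S = (1, -1): df/dx at (i, j) is f[i, j] - f[i-1, j]
dfdx = f - np.roll(f, 1, axis=0)    # periodic boundary handling
```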

For computational performance reasons, operations involving a convolution with the Fresnel kernel, as in Eq. (1), are performed in Fourier space (using the Fast Fourier Transform) as described in Eq. (5) of reference [12]. We use the fact that the Fourier transform $\tilde{h}_Z$ of $h_Z$ is:

$$\tilde{h}_Z^\lambda(\vec{\mu}) = \exp\!\left(-i\pi\lambda Z \mu^2\right) \tag{6}$$
where the Fourier transform definition used is $\tilde{f}(\vec{\mu}) = \int d\vec{r}\, f(\vec{r})\, \exp(-2i\pi\,\vec{\mu}\cdot\vec{r})$. This way of computing Eq. (1) is valid as long as the Nyquist criterion is not violated (see Eq. 51 of [13]); this is the case under our experimental conditions since $Z < N\Delta_x^2/\lambda$, where $Z$ is the sample-to-detector distance (1340 μm), $\Delta_x$ the detector pitch (1.67 μm), $N$ the number of pixels in the smallest dimension of the detector ($N = 2748$) and $\lambda$ is around 0.5 μm.
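The condition is easy to check numerically with the values quoted above; the short computation below is only a restatement of the inequality, not code from the paper.

```python
# Nyquist condition Z < N * dx^2 / lambda for the FFT-based convolution
Z = 1340e-6      # sample-to-detector distance (m)
dx = 1.67e-6     # detector pixel pitch (m)
N = 2748         # pixels along the smallest detector dimension
lam = 0.5e-6     # wavelength (m)
z_max = N * dx**2 / lam      # ~15.3 mm
assert Z < z_max             # 1.34 mm < 15.3 mm: the condition is satisfied
```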

Appendix B: Comments on system resolution due to partial coherence

A coherent-wave formulation is used throughout this article, although the light source of the system is an RGB LED, which is only partially coherent. This lack of coherence decreases the spatial resolution of the system and may produce specific artifacts due to modeling errors.

The three main factors that limit the spatial resolution are the detector pixel pitch and the temporal and spatial coherences of the source, as discussed in [14]. Using Eq. (4), Eq. (5), Eq. (6) and Eq. (7) of this reference, we compute the half-pitch resolution $\Delta_x$ associated with each of these three limiting factors; we get $\Delta_x$ = 1.67 μm due to the detector pixel pitch, $\Delta_x$ = 1.12 μm due to the source spatial coherence and $\Delta_x$ = 2.40 μm due to the source temporal coherence (results obtained with the values: λ = 0.5 μm, source-sample distance = 50 mm, sample-detector distance = 1340 μm, source size = 50 μm, source spectral width = 20 nm). Therefore, the limiting factor of our system is the temporal coherence of the source and we expect a complete loss of contrast for lines with a width around 2.40 μm on phase test targets.

Such a loss of contrast is observed on a standard USAF 1951 phase test target (SiO₂ material, 320 nm-high pillars) as shown in Fig. 4. The algorithm described in the present article was applied to a tri-chromatic acquisition of this test target; group 6 is perfectly well contrasted, whereas the lines of group 7 are gradually less visible, with a complete loss of contrast for element 4 of group 7. As experimentally evaluated on this phase test target, the system half-pitch resolution is around 2.8 μm, in good agreement with the theoretical value.

Fig. 4 Reconstruction of a standard USAF 1951 phase target by the MS-TV algorithm from a tri-chromatic acquisition. A gradual loss of contrast can be observed in group 7.

The main artifact of the reconstruction is the inhomogeneity of the recovered phase values visible on large structures (such as the biggest square in Fig. 4), which is caused by the fact that intensity measurements are not sensitive to the low spatial frequencies of phase samples.

Appendix C: Derivation of the gradient of the problem

The goal of this appendix is to give the mathematical details of how the gradient of the objective function is calculated. As explained in Section 2, for any detector phase $\varphi_Z$, a multispectral TV criterion $\mathcal{E}(\varphi_Z)$ of the reconstructed object field $A_0$ can be computed. The chain of operations linking $\varphi_Z$ and $\mathcal{E}$ is the following (Eqs. (7)–(10)):

$$\varphi_Z \quad \text{(unknown detector phase)} \tag{7}$$
$$A_Z = \sqrt{I_Z}\cdot\exp(i\varphi_Z) \quad \text{(detector light field)} \tag{8}$$
$$A_0 = A_Z * h_{-Z} \quad \text{(sample light field)} \tag{9}$$
$$\mathcal{E} = \int d\vec{r}\; \sqrt{\varepsilon + \sum_\lambda \left( \left|\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\right|^2 + \left|\frac{\partial A_0^\lambda(\vec{r})}{\partial y}\right|^2 \right)} \quad \text{(criterion)} \tag{10}$$

To obtain the gradient of $\mathcal{E}$, we use the notion of a differential. If we have a differentiable function $f : A \to B$, for every $a$ and $h$ of $A$, there exists a linear function $D_a f$, called the differential of $f$ around $a$, such that:

$$f(a+h) = f(a) + D_a f(h) + o(h) \tag{11}$$
From now on, we adopt the usual notation of physics for the differential; we write the above formula in the following syntax, where $a$ is implied and where the "progress" $h$ of the variable (noted $\delta a$) and the differential are both noted with the symbol $\delta$:
$$f(a+\delta a) - f(a) = \delta f(\delta a) + o(\delta a) \tag{12}$$
Note that this syntax can be used for a scalar $a$ or for a field such as $\varphi_Z$. Since $\mathcal{E}$ is a scalar which depends on $\varphi_Z$, a field indexed by $\lambda$ and the spatial coordinates $\vec{r}$, the differential of $\mathcal{E}$ is a linear form that can be written as:
$$\delta\mathcal{E} = \sum_\lambda \int d\vec{r}\; \nabla\mathcal{E}^\lambda(\vec{r}) \cdot \delta\varphi^\lambda(\vec{r}) \tag{13}$$
$\nabla\mathcal{E}$ is called the gradient and has the same dimension as $\varphi_Z$, the variable of $\mathcal{E}$. To derive its analytical formula, let us differentiate Eqs. (7)–(10) to obtain an equation of the shape of Eq. (13). Knowing that $\delta\sqrt{\varepsilon+a} = \frac{1}{2}\,\delta a/\sqrt{\varepsilon+a}$, that $\delta|a|^2 = 2\,\mathrm{Re}(a\cdot\delta a^*)$ (Re being the real-part operator) and that $\delta\,\partial f/\partial x = \partial\,\delta f/\partial x$, Eq. (13) becomes:
$$\delta\mathcal{E} = \mathrm{Re}\int d\vec{r}\, \sum_\lambda \frac{1}{e(\vec{r})}\left(\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\,\frac{\partial\,\delta A_0^\lambda(\vec{r})^*}{\partial x} + \frac{\partial A_0^\lambda(\vec{r})}{\partial y}\,\frac{\partial\,\delta A_0^\lambda(\vec{r})^*}{\partial y}\right) \tag{14}$$
where the term $e(\vec{r})$ is defined as $e(\vec{r}) = \sqrt{\varepsilon + \sum_\lambda \left( \left|\partial A_0^\lambda(\vec{r})/\partial x\right|^2 + \left|\partial A_0^\lambda(\vec{r})/\partial y\right|^2 \right)}$. Integrating Eq. (14) by parts, one obtains:
$$\delta\mathcal{E} = -\mathrm{Re}\int d\vec{r}\, \sum_\lambda \left(\frac{\partial}{\partial x}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial y}\right)\right) \cdot \delta A_0^\lambda(\vec{r})^* \tag{15}$$
By differentiating Eq. (8) and Eq. (9), we have $\delta A_0 = \delta A_Z * h_{-Z}$ and $\delta A_Z = A_Z \cdot i\,\delta\varphi_Z$. Hence:
$$\delta\mathcal{E} = +\mathrm{Re}\; i\int d\vec{r}\, \sum_\lambda \left(\frac{\partial}{\partial x}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial y}\right)\right) \cdot \left((A_Z^\lambda\cdot\delta\varphi_Z) * h_{-Z}^\lambda\right)\!(\vec{r})^* \tag{16}$$
Using Parseval's theorem, i.e. for any fields $f$ and $g$, $\int d\vec{r}\, f\cdot g^* = \int d\vec{\mu}\, \tilde{f}\cdot\tilde{g}^*$, and taking $g = (A_Z^\lambda\cdot\delta\varphi_Z^\lambda) * h_{-Z}$, we get $\int d\vec{r}\, f\cdot\left((A_Z^\lambda\,\delta\varphi_Z^\lambda) * h_{-Z}\right)^* = \int d\vec{\mu}\, \tilde{f}\cdot\left(\widetilde{A_Z^\lambda\,\delta\varphi_Z^\lambda}\cdot\tilde{h}_{-Z}\right)^* = \int d\vec{\mu}\, \tilde{f}\cdot\tilde{h}_{-Z}^*\cdot\left(\widetilde{A_Z^\lambda\,\delta\varphi_Z^\lambda}\right)^* = \int d\vec{r}\, (f * h_Z)\cdot(A_Z^\lambda\,\delta\varphi_Z^\lambda)^*$, where we used the fact that $\tilde{h}_{-Z}^* = \tilde{h}_Z$ (see Eq. (6)). Considering also the fact that $\mathrm{Re}(i f) = -\mathrm{Im}(f)$, Im being the imaginary-part operator, Eq. (16) becomes:
$$\delta\mathcal{E} = -\mathrm{Im}\int d\vec{r}\, \sum_\lambda \left(\left(\frac{\partial}{\partial x}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial y}\right)\right) * h_Z^\lambda\right) \cdot A_Z^\lambda(\vec{r})^* \cdot \delta\varphi_Z(\vec{r}) \tag{17}$$
Finally, by comparing Eq. (13) and Eq. (17), we identify the gradient of the criterion as:
$$\nabla\mathcal{E}^\lambda(\vec{r}) = -\mathrm{Im}\left(\left(\left(\frac{\partial}{\partial x}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\frac{1}{e(\vec{r})}\frac{\partial A_0^\lambda(\vec{r})}{\partial y}\right)\right) * h_Z^\lambda\right) \cdot A_Z^\lambda(\vec{r})^*\right) \tag{18}$$
This formula is computed using the discretization rules given in Appendix A.
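A minimal NumPy sketch of Eq. (18) follows, assuming the backward differences of Eq. (5) (whose adjoint, after the integration by parts, is a negative forward difference) and the `propagate` helper sketched in Section 2; the function signature is ours, not the authors'.

```python
import numpy as np

def grad_ms_tv(phi_Z, I_Z, wavelengths, z, pitch, eps=1e-4):
    """Analytic MS-TV gradient of Eq. (18) for phases/intensities of shape
    (N_lambda, ny, nx); `propagate` is the FFT sketch from Section 2."""
    n_lam = len(wavelengths)
    A_Z = np.sqrt(I_Z) * np.exp(1j * phi_Z)                        # Eq. (8)
    A_0 = np.stack([propagate(A_Z[l], wavelengths[l], -z, pitch)   # Eq. (9): A_Z * h_{-Z}
                    for l in range(n_lam)])
    dx = A_0 - np.roll(A_0, 1, axis=2)                             # Eq. (5) derivatives
    dy = A_0 - np.roll(A_0, 1, axis=1)
    e = np.sqrt(eps + np.sum(np.abs(dx)**2 + np.abs(dy)**2, axis=0))
    # Divergence term of Eq. (18), discretized with the adjoint differences
    F = (np.roll(dx / e, -1, axis=2) - dx / e) + (np.roll(dy / e, -1, axis=1) - dy / e)
    # Convolve with h_Z, multiply by conj(A_Z), take -Im: Eq. (18)
    return np.stack([-np.imag(propagate(F[l], wavelengths[l], z, pitch) * np.conj(A_Z[l]))
                     for l in range(n_lam)])
```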

Appendix D: Standard multispectral algorithm

A standard algorithm [8] used to deal with multi-wavelength lens-free acquisitions consists in (a code sketch is given after the list):

  • 1/ back-propagating the field from the detector plane to the object plane,
  • 2/ averaging over wavelengths in the object plane,
  • 3/ propagating the result to the detector plane,
  • 4/ replacing the modulus of the field at the detector plane by the modulus acquired,
  • 5/ going back to 1/ until convergence.
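A minimal sketch of this loop, under the same assumptions as the earlier snippets (`propagate` from Section 2, per-channel arrays of shape (N_lambda, ny, nx)), could read:

```python
import numpy as np

def standard_multispectral(I_Z, wavelengths, z, pitch, n_iter=100):
    """Sketch of the five-step algorithm of [8] summarized above."""
    A_Z = np.sqrt(I_Z).astype(complex)          # start from zero detector phase
    n_lam = len(wavelengths)
    for _ in range(n_iter):                     # 5/ fixed iteration budget
        # 1/ back-propagate each channel to the object plane (h_{-Z})
        A_0 = np.stack([propagate(A_Z[l], wavelengths[l], -z, pitch)
                        for l in range(n_lam)])
        A_0_mean = A_0.mean(axis=0)             # 2/ average over wavelengths
        # 3/ propagate the mean field back to the detector plane
        A_Z = np.stack([propagate(A_0_mean, wavelengths[l], z, pitch)
                        for l in range(n_lam)])
        # 4/ keep the phase, restore the measured modulus (cf. Eq. (22))
        A_Z = np.sqrt(I_Z) * np.exp(1j * np.angle(A_Z))
    return A_0_mean                             # reconstructed sample field
```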

We show that such an algorithm can be understood with the tools developed in this article. Let us consider the following criterion:

$$\mathcal{E} = \int d\vec{r}\; \frac{1}{N_\lambda} \sum_\lambda \left| A_0^\lambda(\vec{r}) - \overline{A_0}(\vec{r}) \right|^2 \tag{19}$$
$\overline{A_0}$ being the mean of the field $A_0$ over the $N_\lambda$ wavelengths. Eq. (19) is the integral over the object plane of the variance of the light amplitude values at the different wavelengths. Such a criterion implements a preference for solutions with similar values at each wavelength. Using the same steps as in Appendix C, we get the gradient of the problem:
$$\nabla\mathcal{E}^\lambda(\vec{r}) = 2\,\mathrm{Im}\left(\left(\overline{A_0} * h_Z^\lambda\right) \cdot A_Z^\lambda(\vec{r})^*\right) \tag{20}$$
The minimum of the criterion is reached when $\nabla\mathcal{E}^\lambda(\vec{r}) = 0$, i.e. when $(\overline{A_0} * h_Z^\lambda)\cdot A_Z^\lambda(\vec{r})^*$ is real. Therefore:
$$\varphi\!\left(\overline{A_0} * h_Z^\lambda\right) = \varphi\!\left(A_Z^\lambda(\vec{r})\right) = \varphi_Z^\lambda(\vec{r}) \tag{21}$$
where $\varphi(\cdot)$ is the phase function. By adopting a fixed-point strategy to reach the minimum, we see that a possible update of $\varphi_Z^{\lambda,(k)}(\vec{r})$ at iteration $k$ is:
$$\varphi_Z^{\lambda,(k)}(\vec{r}) = \varphi\!\left(\overline{A_0^{(k-1)}} * h_Z^\lambda\right) \tag{22}$$
where $A_0^{(k-1)}$ is the sample field obtained from the detector phase $\varphi_Z^{(k-1)}$. This formula corresponds to the five-step algorithm described above.

For the sake of comparison, the tri-chromatic acquisitions of the biological sample shown in Section 4 and of the phase test target shown in Appendix B have also been processed with the standard multispectral algorithm [8] described in this appendix (33 s runtime). The results for these two cases are presented in Fig. 5. The figure shows that the resolution limit of the standard method is equivalent to that of the MS-TV algorithm (complete loss of contrast for element 4 of group 7). The main differences lie in the nature of the specific artifacts: the standard method tends to produce dark halos surrounding phase elements, and the twin-image residuals are more intense than in the output of our MS-TV method.

Fig. 5 Reconstructions performed with the standard multispectral algorithm applied to the biological sample shown in Section 4 and to the phase test target shown in Appendix B.

Acknowledgements

We thank Pierre Joly (CEA-LETI) for the design of the phase test target and Jean-Pierre Alcaraz and Yves Usson (TIMC-IMAG, Univ. Grenoble Alpes) for the AFM measurements performed on it.

Disclosures

L. Hervé, O. Cioni and C. Allier are co-inventors of a patent devoted to the holographic reconstruction of lens-free microscopy acquisitions as described in this manuscript.

References

1. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889 (2012).

2. C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, et al., “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytom. Part A 91, 433–442 (2017).

3. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

4. T. Latychevskaia and H.-W. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. 98, 233901 (2007).

5. Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, 37862 (2016).

6. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199, 65–75 (2001).

7. A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20, 3129–3143 (2012).

8. M. Sanz, J. A. Picazo-Bueno, J. García, and V. Micó, “Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm,” Opt. Express 23, 21352–21365 (2015).

9. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

10. Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017).

11. M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, vol. 49 (NBS, Washington, DC, 1952).

12. S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini, and R. Meucci, “Whole optical wavefields reconstruction by digital holography,” Opt. Express 9, 294–302 (2001).

13. D. Claus, D. Iliescu, and P. Bryanston-Cross, “Quantitative space-bandwidth product analysis in digital holography,” Appl. Opt. 50, H116–H127 (2011).

14. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18, 77–102 (2016).

Supplementary Material (1)

Visualization 1: Phase-image time-lapse of densely packed mammalian A549 cells. The original acquisition set consists of 1065 frames (1 frame per 5 minutes) of 3840 × 2748 images (pitch 1.67 µm) acquired with the Cytonote lensless tri-chromatic (RGB) microscope.
