Efficient illumination angle self-calibration in Fourier ptychography

Abstract

Fourier ptychography captures intensity images with varying source patterns (illumination angles) in order to computationally reconstruct large space-bandwidth-product images. Accurate knowledge of the illumination angles is necessary for good image quality; hence, calibration methods are crucial, despite often being impractical or slow. Here, we propose a fast, robust, and accurate self-calibration algorithm that uses only experimentally collected data and general knowledge of the illumination setup. First, our algorithm makes a fast direct estimate of the brightfield illumination angles based on image processing. Then, a more computationally intensive spectral correlation method is used inside the iterative solver to further refine the angle estimates of both brightfield and darkfield images. We demonstrate our method for correcting large and small misalignment artifacts in 2D and 3D Fourier ptychography with different source types: an LED array, a galvo-steered laser, and a high-NA quasi-dome LED illuminator.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Computational imaging leverages the power of optical hardware and computational algorithms to reconstruct images from indirect measurements. In optical microscopy, programmable sources have been used for computational illumination techniques, including multicontrast [1,2], quantitative phase [3–6], and super-resolution [3,7–10]. Implementation is simple, requiring only an inexpensive source attachment for a commercial microscope. However, these methods are also sensitive to experimental misalignment errors and can suffer severe artifacts due to model mismatch. Extensive system calibration is needed to ensure that the inverse algorithm is consistent with the experimental setup, which can be time- and labor-intensive. This often requires significant user expertise, making the setup less accessible to reproduction by nonexperts and undermining the simplicity of the scheme. Further, precalibration methods are not robust to changes in the system (e.g., bumping the setup, changing objectives, sample-induced aberrations) and require precise knowledge of a ground-truth test object.

Algorithmic self-calibration methods [11–25] eliminate the need for precalibration and precise test objects by making calibration part of the inverse problem. These methods jointly solve two inverse problems: one for the reconstructed image of the object and the other for the calibration parameters. By recovering system calibration information directly from captured data, the method becomes robust to dynamic changes in the system.

Here, we focus on illumination angle self-calibration for Fourier ptychographic microscopy (FPM) [3]. FPM is a coherent computational imaging method that reconstructs high-resolution amplitude and phase across a wide field-of-view (FoV) from intensity images captured with a low-resolution objective lens and a dynamically coded illumination source. Images captured with different illumination angles are combined computationally in an iterative phase retrieval algorithm that constrains the measured intensity in the image domain and pupil support in the Fourier domain. This algorithm can be described as stitching together different sections of Fourier space (synthetic aperture imaging [26,27]) coupled with iterative phase retrieval. FPM has enabled fast in vitro capture via multiplexing [9,10], fluorescence imaging [25], and 3D microscopy [28,29]. It requires significant redundancy (pupil overlap) in the data set [8,30], making it suitable for joint estimation self-calibration.

Self-calibration routines have previously been developed to solve for pupil aberrations [11–13], illumination angles [14–18], LED intensity [19], sample motion [20], and autofocusing [21] in FPM. The state-of-the-art self-calibration method for illumination angles is simulated annealing [14,15], a joint estimation solution that (under proper initialization) removes LED misalignment artifacts that usually manifest as low-frequency noise. Unfortunately, because the simulated annealing procedure operates inside the FPM algorithm iterative loop, it slows the runtime of the solver by an order of magnitude or more. For 3D FPM (which is particularly sensitive to angle calibration [28]), the computational costs become infeasible.

Moreover, most self-calibration algorithms require a relatively close initial guess for the calibration parameters. This is especially true when the problem is nonconvex or if multiple calibration variables are to be solved for (e.g., object, pupil, and angles of illumination). Of the relevant calibration variables for FPM, illumination angles are the most prone to error, due to shifts or rotations of the LED array [31], source instabilities [22,32], nonplanar illuminator arrangements [33–36], or sample-induced aberrations [37,38]. Sample-induced aberrations can also change the effective illumination angles dynamically, such as when the sample is in a moving aqueous solution.

We propose here a two-pronged angle self-calibration method that uses both preprocessing (brightfield calibration) and iterative joint estimation (spectral correlation calibration) and that is quicker and more robust to system changes than state-of-the-art calibration methods. A circle-finding step prior to the FPM solver accurately identifies the angles of illumination in the brightfield (BF) region. A transformation between the expected and BF calibrated angles extrapolates the correction to illuminations in the darkfield (DF) region. Then, a local grid-search-based algorithm inside the FPM solver further refines the angle estimates, with an optional prior based on the illuminator geometry (Fig. 1). Our method is object-independent, robust to coherent noise, and time-efficient, adding only seconds to the processing time. We demonstrate on-line angle calibration for 2D and 3D FPM with three different source types: an LED array, a galvanometer-steered laser, and a high-NA (max NA_illum = 0.98) quasi-dome illuminator [36].

Fig. 1. Illumination angles are calibrated by analyzing Fourier spectra. (a) A cheek cell is illuminated at angle α and imaged with NA_obj. (b) Brightfield images contain overlapping circles in their Fourier spectra; darkfield images do not. (c) We perform a fast and efficient brightfield calibration in preprocessing, then extrapolate the correction to darkfield images, and, finally, iteratively calibrate angles inside the FPM algorithm using a spectral correlation calibration.

2. METHODS

The image formation process for a thin sample under off-axis spatially coherent plane wave illumination can be described by

$$I_i(\mathbf{r}) = \left| O(\mathbf{r})\, e^{i 2\pi \mathbf{k}_i \cdot \mathbf{r}} * P(\mathbf{r}) \right|^2 = \left| \mathcal{F}^{-1}\!\left\{ \tilde{O}(\mathbf{k} - \mathbf{k}_i)\, \tilde{P}(\mathbf{k}) \right\} \right|^2,$$
where k_i is the spatial frequency of the incident light, P̃(k) is the system pupil function, Õ(k) is the object Fourier spectrum, and F is the 2D Fourier transform operator, valid for shift-invariant systems. Intensity images are captured at the sensor plane, corresponding to autocorrelation in the Fourier domain:
$$\tilde{I}_i(\mathbf{k}) = \mathcal{F}\!\left\{ \left| O(\mathbf{r})\, e^{i 2\pi \mathbf{k}_i \cdot \mathbf{r}} * P(\mathbf{r}) \right|^2 \right\} = \left[ \tilde{O}(\mathbf{k} - \mathbf{k}_i)\, \tilde{P}(\mathbf{k}) \right] \star \left[ \tilde{O}(\mathbf{k} - \mathbf{k}_i)\, \tilde{P}(\mathbf{k}) \right],$$
where * denotes convolution and ⋆ denotes autocorrelation. Õ(k − k_i)P̃(k) corresponds to the shifted spectrum of the object within the circle |k| ≤ NA_obj/λ and 0 everywhere else. The autocorrelation operation essentially scans two copies of Õ(k − k_i)P̃(k) across each other, coherently summing at each shift to give Ĩ_i(k). Typically, the object spectrum has a large zero-order (DC) term that decays sharply toward higher frequencies. In the brightfield region, when the DC term at k_i is within the pupil's passband, the autocorrelation effectively scans this DC term across the conjugate spectrum, giving high values where the DC term overlaps with the conjugate pupil and negligible signal elsewhere. This interference between the DC term and pupil in the autocorrelation creates two distinct circles centered at k_i and −k_i in the intensity spectrum amplitude (Fig. 1). Hence, we can calibrate the illumination angle by finding these circle centers. For darkfield images, the DC term is outside NA_obj/λ, and so we do not observe clearly defined circles in |Ĩ_i| [Fig. 1(b)], making calibration more complicated.
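To make the circle structure of Eq. (2) concrete, the following minimal NumPy sketch simulates one brightfield measurement and inspects its Fourier amplitude. The grid size, NA, wavelength, pixel size, and random object are illustrative assumptions, not values from our experiments.

```python
import numpy as np

# Minimal simulation of Eqs. (1)-(2): one brightfield image and its Fourier
# amplitude, which shows two circles of radius NA_obj/lambda at +/- k_i.
M, ps, mag = 256, 6.5e-6, 20                  # patch size, sensor pixel (m), magnification
lam, na_obj = 532e-9, 0.25
dx = ps / mag                                 # effective pixel size at the sample
k = np.fft.fftshift(np.fft.fftfreq(M, d=dx))  # spatial frequency axis (1/m)
kx, ky = np.meshgrid(k, k)
pupil = (np.sqrt(kx**2 + ky**2) <= na_obj / lam).astype(float)

rng = np.random.default_rng(0)
obj = 1 + 0.1 * rng.standard_normal((M, M))   # weak amplitude object
ki = np.array([0.1, 0.05]) / lam              # brightfield illumination angle
x = np.arange(M) * dx
xx, yy = np.meshgrid(x, x)
field = obj * np.exp(1j * 2 * np.pi * (ki[0] * xx + ki[1] * yy))
spec_shifted = np.fft.fftshift(np.fft.fft2(field))                       # O~(k - k_i)
img = np.abs(np.fft.ifft2(np.fft.ifftshift(spec_shifted * pupil))) ** 2  # Eq. (1)
I_spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))                       # |I~_i(k)|, Eq. (2)
```

Displaying `I_spec` on a log scale reveals the two circles whose centers the brightfield calibration locates.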

Our algorithm relies on analysis of the raw intensity Fourier transform to recover illumination angles. Fourier domain analysis of intensity images has been used previously to deduce aberrations [39] and determine the center of diffraction patterns [40,41] for system calibration. We show here that the individual Fourier spectra can be used to accurately determine illumination angles in the brightfield and darkfield regimes.

A. Brightfield Calibration

Locating the center of the circles in the amplitude of a Fourier spectrum is an image processing problem. Previous work in finding circles in images uses the Hough transform, which relies on an accurate edge detector as an initial step [42,43]. In practice, however, we find that edge detectors do not function well on our data sets due to speckle noise, making the Hough transform an unreliable tool for our purpose. Therefore, we propose a new method that we call circular edge detection.

Intuitively, circular edge detection can be understood as performing edge detection (i.e., calculating image gradients) along a circular arc around a candidate center point in k-space (the Fourier domain). To first approximation, we assume |Ĩ_i| is a binary function that is 1 inside the two circles and 0 everywhere else. Our goal is to find the strong binary edge in order to locate the circle center. We need only consider one of the circles because the intensity image is real-valued, and so its Fourier transform is symmetric. Based on knowledge of the setup, we expect the illumination spatial frequency (and circle center) for spectrum Ĩ_i to be at k_{i,0} = (k_{x,i,0}, k_{y,i,0}) (polar coordinates (d_{i,0}, θ_{i,0})) [Fig. 2(a)]. If this is the correct center k_i, we expect there to be a sharp drop in |Ĩ_i| at radius R along any radial line f(r, ϕ_n) out from k_i [Fig. 2(b)]. This amplitude edge will appear as a peak at r = R in the first derivative of each radial line with respect to r, f′(r, ϕ_n) [Fig. 2(d)]. Here, (r, ϕ_n) are the polar coordinates of the radial line with respect to the center k_i, considering the nth of N radial lines.

Fig. 2. Circular edge detection on brightfield images finds circle centers, giving illumination angle calibration. (a), (b) Comparison of uncalibrated (red) and calibrated (black) illumination k_i. The blue box in (b) indicates the search range for k_i. (c), (d) |Ĩ_i| along radial lines f(r, ϕ_n), and derivatives with respect to r. (e), (f) E_1 and E_2, sums of the derivatives at known radii R and R + σ, peak near the correct center. Boxes show uncalibrated (red) and calibrated (black) k_i centers.

We identify the correct k_i by evaluating the summation of the first derivative around the circular arc at r = R for several candidate k_i = (d_i, θ_i) positions:

$$E_1(R, d_i, \theta_i) = \sum_{n=1}^{N} f'(r = R, \phi_n, d_i, \theta_i).$$
When k_i is incorrect, the edges do not align, and the derivative peaks do not add constructively at R [Fig. 2(c)]. The derivatives at R are all maximized only at the correct center k_i [Fig. 2(d)], creating a peak in E_1 [Fig. 2(e)]. This is analogous to applying a classic edge filter in the radial direction from a candidate center and accumulating the gradient values at radius R.

In order to bring our data closer to our binary image approximation, we divide out the average spectrum mean_i(|Ĩ_i|) across all i spectra. Because the object remains constant across images while the angles of illumination change, the average spectrum is similar in form to the object's autocorrelation spectrum, with a sharp central peak decaying toward higher frequencies. The resulting normalized spectra contain near-constant circles on top of background from higher-order terms. We then convolve with a Gaussian blur kernel with standard deviation σ to remove speckle noise (Algorithm 1.1-2). Experimentally, we choose σ = 2 pixels, which balances blurring speckle noise and maintaining the circular edge. Under this model, the radial line f(r, ϕ_n) from our correct center k_i can be modeled near the circular edge as a binary step function convolved with a Gaussian:

$$f(r, \phi_n, d_i, \theta_i) = \mathrm{rect}\!\left(\frac{r}{2R}\right) * \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{r^2}{2\sigma^2}}.$$
By differentiating f(r, ϕ_n) and setting the appropriate derivative equal to zero, we find that the peak of f′(r, ϕ_n) still occurs at r = R. Additionally, we find that the second derivative f″(r, ϕ_n) has a maximum at r = R + σ. Experimentally, we have found that considering both the first and second derivatives increases our accuracy and robustness to noise across a wide variety of data sets. We therefore calculate a second derivative metric,
$$E_2(R + \sigma, d_i, \theta_i) = \sum_{n=1}^{N} f''(r = R + \sigma, \phi_n, d_i, \theta_i),$$
which is jointly considered with Eq. (3). We identify candidate centers k_i that occur near the peaks of both E_1 and E_2 [Figs. 2(e) and 2(f)], then use a least-squares error metric to determine the final calibrated k_i (Algorithm 1.5-8). In practice, we also consider only the nonoverlapping portion of the circle's edge, bounding ϕ.
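A minimal sketch of the edge metrics of Eqs. (3) and (5) is shown below, assuming the spectrum has already been normalized by the mean spectrum and Gaussian-blurred as described above. The function name, sampling density, and use of derivative magnitudes are choices of this sketch, not of our released code, and the restriction of ϕ to the nonoverlapping arc is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def edge_metrics(spec, center, R, sigma=2.0, N=90):
    """E_1 and E_2 of Eqs. (3) and (5) for one candidate circle center.
    spec:   normalized, Gaussian-blurred spectrum amplitude (2D array)
    center: candidate k_i in pixel coordinates (row, col)
    R:      expected pupil radius in pixels
    """
    phis = np.linspace(0, 2 * np.pi, N, endpoint=False)
    r = np.arange(spec.shape[0] // 2, dtype=float)          # radial samples
    rows = center[0] + r[None, :] * np.sin(phis)[:, None]
    cols = center[1] + r[None, :] * np.cos(phis)[:, None]
    radial = map_coordinates(spec, [rows, cols], order=1)   # f(r, phi_n)
    d1 = np.gradient(radial, axis=1)                        # f'(r, phi_n)
    d2 = np.gradient(d1, axis=1)                            # f''(r, phi_n)
    E1 = np.abs(d1[:, int(round(R))]).sum()                 # Eq. (3)
    E2 = np.abs(d2[:, int(round(R + sigma))]).sum()         # Eq. (5)
    return E1, E2

# Evaluate edge_metrics over a small grid of candidate centers around k_{i,0}
# and keep the center where E_1 and E_2 jointly peak (Algorithm 1.5-8).
```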

Until now, we have assumed that the precise radius R of the pupil is known. However, in pixel units, R depends on the sensor pixel size p_s and the system magnification mag,

$$R = \frac{\mathrm{NA}_{\mathrm{obj}}}{\lambda} \cdot \frac{p_s\, M}{\mathrm{mag}},$$
as well as on NA_obj and λ, where Ĩ_i is of dimension M × M. Given that mag and NA_obj are often imprecisely known but are unchanged across all images, we calibrate the radius by finding the R that gives the maximum gradient peak E_1 across multiple images before calibrating k_i (Algorithm 1.3). A random subset of images may be used to decrease computation time.
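As a numeric illustration of Eq. (6), with assumed (not experimental) parameters:

```python
# Numeric illustration of Eq. (6) with assumed parameters.
na_obj, lam = 0.25, 532e-9           # objective NA, wavelength (m)
ps, mag, M = 6.5e-6, 20, 256         # sensor pixel (m), magnification, patch size
R = (na_obj / lam) * (ps * M / mag)  # pupil radius in Fourier pixels, ~39 px
# The calibration then searches for the R near this value that maximizes E_1.
```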

Algorithm 1. Brightfield Calibration

Finally, once all images are calibrated, we want to remove outliers and extrapolate the correction to the darkfield images. Outliers occur due to (1) little high-frequency image content and therefore no defined circular edge, (2) strong background, or (3) shifts such that the conjugate circle center −k_i is identified as k_i. In these cases, we cannot recover the correct center based on a single image and must rely on the overall calibrated change in the illuminator's position. We find outliers based on an illuminator-specific transformation A (e.g., rigid motion) between the expected initial guess of circle centers k_{i,0} (e.g., the LED array map) and the calibrated centers k_i using a RANSAC-based method [44]. This transformation is used to correct outliers and darkfield images (Algorithm 1.9-12), serving as an initialization for our spectral correlation (SC) method.
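The sketch below illustrates the idea with a simple least-squares similarity fit and a single naive outlier-rejection pass; it is a simplified stand-in for the RANSAC-based absolute orientation fit of [44], and all names are hypothetical.

```python
import numpy as np

def fit_similarity(k0, k):
    """Least-squares similarity transform (scale, rotation, shift) mapping
    expected centers k0 -> calibrated centers k; both are (n, 2) arrays.
    Reflection handling is omitted for brevity."""
    mu0, mu = k0.mean(0), k.mean(0)
    A, B = k0 - mu0, k - mu
    U, S, Vt = np.linalg.svd(A.T @ B)
    rot = Vt.T @ U.T                       # 2x2 rotation (Procrustes)
    s = S.sum() / (A ** 2).sum()           # isotropic scale
    t = mu - s * rot @ mu0
    return s, rot, t

def correct_angles(k0_bf, k_bf, k0_all, tol=3.0):
    """Fit on brightfield pairs, reject outliers once, refit, then apply the
    transform to every expected center, including the darkfield ones."""
    s, rot, t = fit_similarity(k0_bf, k_bf)
    resid = np.linalg.norm(k_bf - (s * k0_bf @ rot.T + t), axis=1)
    inliers = resid < tol * (np.median(resid) + 1e-12)
    s, rot, t = fit_similarity(k0_bf[inliers], k_bf[inliers])
    return s * k0_all @ rot.T + t          # corrected k_i for all images
```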

B. Spectral Correlation Calibration

While the brightfield (BF) calibration method localizes illumination angles using intrinsic contrast from each measurement, this contrast is not present in high-angle (darkfield) measurements [Fig. 1(b)]. Therefore, we additionally solve a more general joint estimation problem to refine the initialization provided by BF calibration, in which the object O(r), pupil P(k), and illumination angles k_i are optimized within the FPM algorithm. At each inner iteration, we estimate the ith illumination angle by minimizing the FPM objective function with respect to illumination angle [Fig. 3(a)]. This step finds the k-space location of the current spectrum Ĩ_i relative to the overall object spectrum, providing an estimate k_i^(m) that is consistent with the other illumination angles k_j^(m), j ≠ i. We call this the spectral correlation method because this optimization implicitly finds the k_i^(m) that best aligns the ith spectrum with the estimated object spectrum Õ^(m)(k).

Fig. 3. BF calibration uses a fast preprocessing step to estimate illumination angles; then SC calibration iteratively refines them within the FPM solver. (a) Algorithm block diagram. (b) Uncalibrated (red) and BF + SC calibrated (green) illumination angle map. Insets are example search spaces, showing local convexity. (c) FPM convergence plot for different methods.

Unlike previous methods [14,15], we constrain k_i to exist on the k-space grid defined by our image sampling. Our k-space resolution is band-limited by the size of the image patch, s = (s_x, s_y), across which the illumination can be assumed coherent. This coherent area size is determined by the van Cittert–Zernike theorem, which can be simplified [45] to show that the coherence length l_c of illumination with mean source wavelength λ̄ produced by a source of size ρ at a distance R is l_c = 0.61Rλ̄/ρ. For example, a 300 μm wide LED placed 50 mm above the sample with λ̄ = 530 nm gives l_c = 53.8 μm, which provides an upper bound on the size of image patch used in the FPM reconstruction, (s_x, s_y) ≤ l_c. This limitation imposes a minimum resolvable discretization of illumination angles Δk = 2/s due to the Nyquist criterion. Because we cannot resolve finer angle changes, we need only perform a local grid search over integer multiples of Δk, which makes our joint estimation SC method much faster than previous methods.
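Plugging in the example numbers (a quick check, with the patch size s taken at its upper bound l_c):

```python
# Quick check of the coherence-length example and the resulting k-space step.
R_src, lam_bar, rho = 50e-3, 530e-9, 300e-6  # LED distance, wavelength, source size (m)
l_c = 0.61 * R_src * lam_bar / rho           # ~53.9e-6 m (53.8 um quoted above)
s = l_c                                      # patch size at its upper bound
dk = 2 / s                                   # minimum resolvable angle step (1/m)
print(l_c * 1e6, dk * lam_bar)               # ~53.9 um; ~0.02 step in NA units
```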

SC calibration is cast as an iterative optimization of discrete perturbations of the estimated angle using a local grid search. At each FPM iteration, we solve for the optimal perturbation of illumination angle k_i^(m) over integer multiples n = (n_x, n_y) of the k-space resolution-limited step Δk, such that the updated illumination position k_i^(m+1) = k_i^(m) + n·Δk minimizes the ℓ2 distance between the measured intensity and the intensity predicted from the current object, pupil, and angle estimates:

$$\operatorname*{arg\,min}_{\mathbf{n}} \left\lVert I_i - \left| O^{(m+1)}\, e^{i 2\pi (\mathbf{k}_i^{(m)} + \mathbf{n}\Delta k) \cdot \mathbf{r}} * P^{(m+1)} \right|^2 \right\rVert_2^2 \quad \text{subject to} \quad \mathbf{n} = (n_x, n_y),\; n_x, n_y \in \{-1, 0, 1\}.$$
This grid search is performed within each sequential iteration of the FPM reconstruction until k_i converges, giving a lower reconstruction cost than BF calibration alone (Fig. 3).

The range of n = (n_x, n_y) to search can be tuned to match the problem. In most experimental cases, we find that a search of the immediate locality of the current estimate (n_x, n_y ∈ {−1, 0, 1}) gives a good balance between speed and performance when paired with the close initialization from our BF calibration. A larger search range (e.g., n_x, n_y ∈ {−2, −1, 0, 1, 2}) may be required in the presence of noise or without a close initialization, but the number of points searched grows with the square of the search range, slowing the algorithm considerably.
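A sketch of one SC update for a single image might look as follows, where `forward` stands in for the FPM forward model built from the current object and pupil estimates (a hypothetical callable, not our released API):

```python
import numpy as np
from itertools import product

def sc_update(ki, I_meas, forward, dk, span=1):
    """One spectral correlation update of Eq. (7) for a single image.
    ki:      current illumination estimate (kx, ky)
    I_meas:  measured intensity image
    forward: k -> predicted intensity from current object/pupil estimates
    dk:      k-space grid step; span=1 searches n_x, n_y in {-1, 0, 1}
    """
    best_k, best_cost = ki, np.inf
    for nx, ny in product(range(-span, span + 1), repeat=2):
        k_try = ki + dk * np.array([nx, ny])
        cost = np.sum((I_meas - forward(k_try)) ** 2)   # l2 data mismatch
        if cost < best_cost:
            best_k, best_cost = k_try, cost
    return best_k
```

With span = 1 this evaluates 9 candidate positions per image per iteration; span = 2 evaluates 25, reflecting the quadratic growth noted above.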

Including prior information about the design of the illumination source can make our calibration problem more well-posed. For example, we can include knowledge that an LED array is a rigid, planar illuminator in our initial guess of the illumination angle map, k_{i,0}. By forcing the current estimates k_i^(m) to fit a transformation of this initial angle map at the end of each FPM subiteration, we can use this knowledge to regularize our optimization [Fig. 3(a)]. The transformation model used depends on the specific illuminator. For example, our quasi-dome LED array is composed of five circuit boards with precise LED positioning within each board but variable board position relative to each other. Thus, imposing an affine transformation from the angle map of each board to the current estimates k_i^(m) significantly reduces the problem dimensionality and mitigates noise across LEDs, making the reconstruction more stable.
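A minimal sketch of this per-board regularization, assuming each board's designed LED layout is known (names hypothetical):

```python
import numpy as np

def affine_project(k0_board, k_board):
    """Project one board's current LED angle estimates onto the best affine
    transform of its designed layout (per-board regularization step).
    k0_board, k_board: (n, 2) designed and current k-space positions."""
    X = np.hstack([k0_board, np.ones((len(k0_board), 1))])  # [kx0, ky0, 1]
    A, *_ = np.linalg.lstsq(X, k_board, rcond=None)         # 3x2 affine fit
    return X @ A                                            # regularized k_i

# Applied at the end of each FPM subiteration, once per quasi-dome board.
```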

3. RESULTS

A. Planar LED Array

We first show experimental results from a conventional LED array illumination system with 10×, 0.25 NA and 4×, 0.1 NA objective lenses at λ = 514 nm and NA_illum ≤ 0.455 (Fig. 4). We compare reconstructions with simulated annealing, our BF preprocessing alone, and our combined BF + SC calibration method. All methods were run in conjunction with EPRY pupil reconstruction [12]. We include results with and without the SC calibration to illustrate that the BF calibration is sufficient to correct for most misalignment of the LED array, because we can accurately extrapolate LED positions to the darkfield region when the LEDs fall on a planar grid. However, when using a low-NA objective (NA_obj = 0.1), as in Fig. 4(d), the SC method becomes necessary because the BF calibration is only able to use nine images (compared with 69 brightfield images with a 10×, 0.25 NA objective, as in Figs. 4(a)–4(c)).

Fig. 4. Experimental results with an LED array microscope, comparing reconstructions with no calibration (average reconstruction time 132 s), simulated annealing (3453 s), our BF calibration (156 s), and our BF + SC calibration (295 s). (a) Amplitude reconstructions of a USAF target in a well-aligned system. (b) Amplitude reconstructions of the same USAF target with a drop of oil placed on top of the sample to simulate sample-induced aberrations. (c) Phase reconstructions of a human cheek cell with computationally misaligned illumination. (d) A Siemens star phase target with experimentally misaligned illumination.

Our method is object-independent and can be used for phase and amplitude targets as well as biological samples. All methods reconstruct similar quality results for the well-aligned LED array with the USAF resolution target [Fig. 4(a)]. To simulate an aqueous sample, we place a drop of oil on top of the resolution target. The drop causes uneven changes in the illumination, giving low-frequency artifacts in the uncalibrated and simulated annealing cases, which are corrected by our method [Fig. 4(b)]. Our method is also able to recover a computationally imposed misalignment (5° rotation, 0.02 NA shift, and 1.1× scaling) applied to well-aligned LED array data for a cheek cell [Fig. 4(c)], and it gives a good reconstruction from an experimentally misaligned LED array for a phase Siemens star (Benchmark Technologies, Inc.) [Fig. 4(d)]. In contrast with simulated annealing, which on average takes 26× as long to process as FPM without calibration, our brightfield calibration adds only 24 s of processing time, and the combined calibration takes only roughly 2.25× as long as no calibration.

B. Steered Laser

Laser illumination can be used instead of LED arrays to increase the coherence and light efficiency of FPM [32,33]. In practice, laser systems are typically less rigidly aligned than LED arrays, making them more difficult to calibrate. We constructed a laser-based FPM system using a dual-axis galvanometer to steer a 532 nm, 5 mW laser, which is focused on the sample by large condenser lenses [Fig. 5(a)]. This laser illumination system allows finer, more agile illumination control than an LED array as well as higher light throughput. However, the laser illumination angle varies from the expected value due to offsets in the dual-axis galvanometer mirrors, relay lens aberrations, and mirror position misestimations when run at high speeds. Our method can correct for these problems in a fraction of the time of the previous methods [Fig. 5(b)].

Fig. 5. Experimental angle calibration in laser and high-NA quasi-dome illumination systems. (a) Laser illumination is steered by a dual-axis galvanometer. The angled beam is relayed to the sample by 4 in., 80 mm focal length lenses. (b) Our calibration method removes low-frequency reconstruction artifacts. (c) The quasi-dome illuminator enables up to 0.98 NA_illum using programmable LEDs. (d) Our 1.23 NA reconstruction provides isotropic 425 nm resolution with BF + SC calibration.

C. Quasi-Dome

Because the FPM resolution limit is set by NA_obj + NA_illum, high-NA illuminators are needed for large space-bandwidth-product imaging [36,46]. To achieve high-angle illumination with sufficient signal-to-noise ratio in the darkfield region, the illuminators should be more dome-like rather than planar [34]. We previously developed a novel programmable quasi-dome array made of five separate planar LED arrays that can illuminate up to 0.98 NA [36] with discrete control of the RGB LEDs (λ̄ = 475, 530, 630 nm). It can be easily attached to most commercial inverted microscopes [Fig. 5(c)].

As with conventional LED arrays, we assume that the LEDs on each board are rigidly placed as designed. However, each circuit board may have some relative shift, tilt, or rotation because the final mating of the five boards is performed by hand. LEDs with high-angle incidence are more difficult to calibrate and more likely to suffer from misestimation due to the dome geometry, so the theoretical reconstruction NA would be nearly impossible to reach without self-calibration. Using our method, we obtain the theoretical resolution limit available to the quasi-dome [Fig. 5(d)]. The SC calibration is especially important in the quasi-dome case because it usually has many darkfield LEDs.

D. 3D FPM

Calibration is particularly important for 3D FPM. Even small changes in angle become large when they are propagated to large defocus depths, leading to reduced resolution and reconstruction artifacts [22,28]. For example, using a well-aligned LED array, the authors of [28] were unable to reconstruct high-resolution features of a resolution target defocused beyond 30 μm due to angle misestimation; using the same data set, our method allows us to reconstruct high-resolution features of the target even when it is defocused by 70 μm (Fig. 6).

Fig. 6. Even small calibration errors degrade 3D FPM resolution severely when defocus distances are large. (a) Experiment schematic for a USAF target placed at varying defocus distances. (b) Measured reconstruction resolution degrades with defocus distance; our calibration algorithm reduces this error significantly. (c) Amplitude reconstructions for selected experimental defocus distances, with and without calibration of the illumination angles.

Because iterative angle estimation (including our SC calibration) increases the computational complexity of 3D FPM infeasibly, we use BF calibration only. While we do not attain the theoretical resolution limits at all depths, we offer significant reconstruction improvement. Our calibration only slightly changes the angles of illumination [Fig. 6(c)], highlighting that small angular changes have a large effect on 3D reconstructions. Experimental resolution was determined by the resolvable bars on the USAF resolution target in Fig. 6(c), where we declare a feature "resolved" when there is a >20% dip between I_max and I_min.
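As an illustration, the dip criterion might be implemented as below; treating the global extrema of a line cut across one bar group as I_max and I_min is a simplifying assumption of this sketch.

```python
import numpy as np

def is_resolved(profile, threshold=0.2):
    """>20% dip criterion on a 1D intensity cut across one USAF bar group.
    Simplification: uses the global max/min of the cut as Imax/Imin."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / i_max > threshold
```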

4. DISCUSSION

Our calibration method offers significant gains in speed and robustness as compared with previous methods. BF calibration enables these capabilities by obtaining a good calibration that needs to be computed only once, in preprocessing, reducing computation. Because an estimate of a global shift in the illuminator based only on the brightfield images provides such a close initialization for the rest of the angles, we can use a quicker, simpler joint estimation in our SC calibration than would otherwise be possible. Together, these two methods yield fast and accurate reconstructions.

Iterative calibration methods slow 3D FPM algorithms by an untenable amount because they require the complicated 3D forward model to be computed multiple times during each iteration. Combined with 3D FPM's reliance on precise illumination angles for a good reconstruction, this has previously made it difficult to obtain accurate reconstructions of large volumes with 3D FPM. However, because BF calibration occurs outside the 3D FPM algorithm, we can now correct for the angle misestimations that have degraded these reconstructions in the past, allowing 3D FPM to be applied to larger volumes.

We analyze the robustness of our method to illumination changes by simulating an object illuminated by a grid of LEDs with NA_illum < 0.41, with LEDs spaced at 0.041 NA intervals. We define the system to have λ = 532 nm, with a 10×, 0.25 NA objective, a 2× system magnification, and a camera with 6.5 μm pixels. While the actual illumination angles in the simulated data remain fixed, we perturb the expected angle of illumination in typical misalignment patterns for LED arrays: rotation, shift, and scale (analogous to LED array distance from sample). We then calibrate the unperturbed data with the perturbed expected angles of illumination as our initial guess.

Our method recovers the actual illumination angles with errors less than 0.005 NA for rotations of −45° to 45° [Fig. 7(a)]; shifts of −0.1 to 0.1 NA, or approximately a displacement of ±2 LEDs [Fig. 7(b)]; and scalings of 0.5× to 1.75× (or an LED array height between 40 and 140 cm if the actual LED array height is 70 cm) [Fig. 7(c)]. In these ranges, the average error is 0.0024 NA, less than the k-space resolution of 0.0032 NA. Hence, our calibrated angles are close to the actual angles even when the input expected angles are extremely far off. This result demonstrates that our method is robust to most misalignments in the illumination scheme.
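For reference, the perturbations in this test can be generated as in the following sketch (LED positions in NA units are assumed; the grid and mask mirror the simulation parameters above):

```python
import numpy as np

def perturb_leds(k_na, rot_deg=0.0, shift_na=(0.0, 0.0), scale=1.0):
    """Misalign LED k-space positions (NA units) by the three typical
    LED-array errors: rotation, shift, and scale."""
    th = np.deg2rad(rot_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return scale * (k_na @ rot.T) + np.asarray(shift_na)

# LED grid spaced at 0.041 NA, kept within NA_illum < 0.41, as in the text.
g = np.arange(-10, 11) * 0.041
k_true = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
k_true = k_true[np.linalg.norm(k_true, axis=1) < 0.41]
# Example: the largest perturbations recovered in Fig. 7.
k_guess = perturb_leds(k_true, rot_deg=45, shift_na=(0.1, 0.0), scale=1.75)
```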

Fig. 7. Our calibration methods are robust to large mismatches between estimated and actual LED array position. Simulation of misaligned illumination by (a) rotation, (b) shift, and (c) scale. Our calibration recovers the illumination with <0.005 NA error for rotations of −45° to 45°, shifts of −0.1 to 0.1 NA, and scalings of 0.5× to 1.75× before diverging.

5. CONCLUSION

We have presented a novel two-part calibration method for recovering the illumination angles of a computational illumination system for Fourier ptychography. We have demonstrated how this self-calibrating method makes Fourier ptychographic microscopes more robust to system changes and sample-induced aberrations. The method also makes it possible to use high-angle illuminators, such as the quasi-dome, and nonrigid illuminators, such as laser-based systems, to their full potential. Our preprocessing brightfield calibration further enables 3D multislice Fourier ptychography to reconstruct high-resolution features across larger volumes than previously possible. These gains were all made with minimal additional computation, especially when compared with current state-of-the-art methods. Efficient self-calibrating methods such as these are important to make computational imaging methods more robust and available for broad use. Open source code is available at www.laurawaller.com/opensource.

Funding

National Science Foundation (NSF) (DGE 1106400); Gordon and Betty Moore Foundation (GBMF4562); David and Lucile Packard Foundation; Office of Naval Research (ONR) (N00014-17-1-2401).

REFERENCES

1. G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Opt. Lett. 36, 3987–3989 (2011). [CrossRef]  

2. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” J. Biomed. Opt. 19, 106002 (2014). [CrossRef]  

3. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013). [CrossRef]  

4. L. Tian, J. Wang, and L. Waller, “3D differential phase-contrast microscopy with computational illumination using an LED array,” Opt. Lett. 39, 1326–1329 (2014). [CrossRef]  

5. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23, 11394–11403 (2015). [CrossRef]  

6. M. Chen, L. Tian, and L. Waller, “3D differential phase contrast microscopy,” Biomed. Opt. Express 7, 3940–3950 (2016). [CrossRef]  

7. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38, 4845–4848 (2013). [CrossRef]  

8. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22, 5455–5464 (2014). [CrossRef]  

9. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). [CrossRef]  

10. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015). [CrossRef]  

11. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109, 338–343 (2009). [CrossRef]  

12. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). [CrossRef]  

13. R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, “Overlapped Fourier coding for optical aberration removal,” Opt. Express 22, 24062–24080 (2014). [CrossRef]  

14. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015). [CrossRef]  

15. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7, 1336–1350 (2016). [CrossRef]  

16. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25, 28053–28067 (2017). [CrossRef]  

17. A. Maiden, M. Humphry, M. Sarahan, B. Kraus, and J. Rodenburg, “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy 120, 64–72 (2012). [CrossRef]  

18. F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, and J. M. Rodenburg, “Translation position determination in ptychographic coherent diffraction imaging,” Opt. Express 21, 13592–13606 (2013). [CrossRef]  

19. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21, 32400–32410 (2013). [CrossRef]  

20. L. Bian, G. Zheng, K. Guo, J. Suo, C. Yang, F. Chen, and Q. Dai, “Motion-corrected Fourier ptychography,” Biomed. Opt. Express 7, 4543–4553 (2016). [CrossRef]  

21. J. Dou, Z. Gao, J. Ma, C. Yuan, Z. Yang, and L. Wang, “Iterative autofocusing strategy for axial distance error correction in ptychography,” Opt. Lasers Eng. 98, 56–61 (2017). [CrossRef]  

22. R. Eckert, L. Tian, and L. Waller, “Algorithmic self-calibration of illumination angles in Fourier ptychographic microscopy,” in Imaging and Applied Optics (Optical Society of America, 2016), paper CT2D.3.

23. G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016). [CrossRef]  

24. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22, 096005 (2017). [CrossRef]  

25. J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express 7, 352–368 (2016). [CrossRef]  

26. T. M. Turpin, L. H. Gesell, J. Lapides, and C. H. Price, “Theory of the synthetic aperture microscope,” Proc. SPIE 2566, 1–11 (1995).

27. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47, 5654–5659 (2008). [CrossRef]  

28. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). [CrossRef]  

29. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). [CrossRef]  

30. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24, 15765–15781 (2016). [CrossRef]  

31. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23, 6171–6180 (2015). [CrossRef]  

32. C. Kuang, Y. Ma, R. Zhou, J. Lee, G. Barbastathis, R. R. Dasari, Z. Yaqoob, and P. T. C. So, “Digital micromirror device-based laser-illumination Fourier ptychographic microscopy,” Opt. Express 23, 26999–27010 (2015). [CrossRef]  

33. J. Chung, H. Lu, X. Ou, H. Zhou, and C. Yang, “Wide-field Fourier ptychographic microscopy using laser illumination source,” Biomed. Opt. Express 7, 4787–4802 (2016). [CrossRef]  

34. Z. F. Phillips, M. V. D’Ambrosio, L. Tian, J. J. Rulison, H. S. Patel, N. Sadras, A. V. Gande, N. A. Switz, D. A. Fletcher, and L. Waller, “Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array,” PLoS ONE 10, e0124938 (2015). [CrossRef]  

35. S. Sen, I. Ahmed, B. Aljubran, A. A. Bernussi, and L. G. de Peralta, “Fourier ptychographic microscopy using an infrared-emitting hemispherical digital condenser,” Appl. Opt. 55, 6421–6427 (2016). [CrossRef]  

36. Z. Phillips, R. Eckert, and L. Waller, “Quasi-dome: A self-calibrated high-NA LED illuminator for Fourier ptychography,” in Imaging and Applied Optics (Optical Society of America, 2017), paper IW4E.5.

37. S. Hell, G. Reiner, C. Cremer, and E. H. K. Stelzer, “Aberrations in confocal fluorescence microscopy induced by mismatches in refractive index,” J. Microsc. 169, 391–405 (1993). [CrossRef]  

38. S. Kang, P. Kang, S. Jeong, Y. Kwon, T. D. Yang, J. H. Hong, M. Kim, K.-D. Song, J. H. Park, J. H. Lee, M. J. Kim, K. H. Kim, and W. Choi, “High-resolution adaptive optical imaging within thick scattering media using closed-loop accumulation of single scattering,” Nat. Commun. 8, 2157 (2017). [CrossRef]  

39. A. Shanker, A. Wojdyla, G. Gunjala, J. Dong, M. Benk, A. Neureuther, K. Goldberg, and L. Waller, “Off-axis aberration estimation in an EUV microscope using natural speckle,” in Imaging and Applied Optics (Optical Society of America, 2016), paper ITh1F.2.

40. C. Dammer, P. Leleux, D. Villers, and M. Dosire, “Use of the Hough transform to determine the center of digitized x-ray diffraction patterns,” Nucl. Instrum. Methods Phys. Res. Sect. B 132, 214–220 (1997). [CrossRef]  

41. J. Cauchie, V. Fiolet, and D. Villers, “Optimization of an Hough transform algorithm for the search of a center,” Pattern Recognit. 41, 567–574 (2008). [CrossRef]  

42. H. K. Yuen, J. Princen, J. Illingworth, and J. Kittler, “A comparative study of Hough transform methods for circle finding,” in Proceedings of the 5th Alvey Vision Conference, Reading, August 31, 1989, pp. 169–174.

43. E. Davies, Machine Vision: Theory, Algorithms and Practicalities, 3rd ed. (Morgan Kauffmann, 2004).

44. M. Jacobson, “Absolute orientation MATLAB package,” in MATLAB Central File Exchange (2015).

45. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. (Cambridge University, 1999).

46. J. Sun, C. Zuo, L. Zhang, and Q. Chen, “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7, 1187 (2017). [CrossRef]  
