Optica Publishing Group

Robust Fourier ptychographic microscopy via a physics-based defocusing strategy for calibrating angle-varied LED illumination

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique for wide-field, high-resolution microscopy with a high space-bandwidth product. It integrates the concepts of synthetic aperture and phase retrieval to surpass the resolution limit imposed by the employed objective lens. In the FPM framework, the position of each sub-spectrum needs to be accurately known to ensure the success of the phase retrieval process. Different from conventional methods based on mechanical adjustment or data-driven optimization strategies, here we report a physics-based defocusing strategy for correcting large-scale positional deviation of the LED illumination in FPM. Based on a subpixel image registration process with a defocused object, we can directly infer the illumination parameters, including the lateral offsets of the light source, the in-plane rotation angle of the LED array, and the distance between the sample and the LED board. The feasibility and effectiveness of our method are validated with both simulations and experiments. We show that the reported strategy can obtain high-quality reconstructions of both the complex object and the pupil function even when the LED array is randomly placed under the sample with unknown lateral offsets and rotations. As such, it enables the development of robust FPM systems by reducing the requirements on fine mechanical adjustment and data-driven correction in the construction process.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychography (FP) [1–3] is a recently developed computational imaging technique that addresses the inherent trade-off between large field-of-view (FOV) and high spatial resolution in conventional optical systems. Based on the correspondence between the illumination direction and the position of the spatial spectrum, FP obtains an expanded spectrum by providing varied illumination angles, and achieves quantitative phase imaging with a gigapixel space-bandwidth product (SBP) [4] by integrating phase retrieval [5] and synthetic aperture [6] algorithms.

Since its first demonstration in 2013, FP has evolved from a microscopic imaging tool into a versatile imaging technique, embodied in different implementations including reflective FP [7,8], aperture-scanning FP [9], X-ray FP [10], and multi-camera FP [11], among others. Meanwhile, FP has found various applications, including quantitative 3D imaging [12–14], digital pathology [15,16], aberration metrology [17], and extensions to incoherent imaging [18].

In a typical FPM system, an LED array is used to provide angle-varied illumination for the sample. During the data acquisition process, LEDs are turned on sequentially to provide different angular illuminations, and a camera records the corresponding low-resolution (LR) images, whose spatial frequencies are determined by the numerical aperture (NA) of the objective lens and the illumination wavevectors. In the original FPM model, each LED element is assumed to be a point source that emits quasi-monochromatic light, and the LED board should be aligned precisely. The former can be achieved by choosing LEDs with a narrower spectral bandwidth and a smaller light-emitting area. Nevertheless, positional deviation of the LED board is unavoidable when constructing or modifying FPM systems, and it degrades the quality of the reconstructed high-resolution (HR) images by making the sub-spectrum positions used in the phase retrieval algorithm inconsistent with the wavevectors of the actual illumination directions. One conventional way to address the problem is to pre-calibrate the LED array with mechanical adjustment stages [19], but aligning the LED array accurately is a time-consuming task and requires precise multi-degree-of-freedom stages, which are usually bulky and expensive. An intuitive and effective method [20] that utilizes brightfield-to-darkfield features to determine the location and orientation of the LED array is adjustment-free, requiring no additional hardware or operations. Nevertheless, it may fail when the expected bright-field overlapping zone does not exist for certain system parameters, such as the adjacent LED spacing, the objective NA, and the distance between the sample and the LED array. Many data-driven optimization approaches making use of the data redundancy of the LR images are also useful solutions, such as pcFPM [21], SC-FPM [22], QG-FPM [23], and tcFPM [24], among others.
However, all these methods use the intensity distribution characteristics of the LR images to seek the optimal positional parameters, whereas the LR images’ intensity distributions are also affected by illumination intensity fluctuations, objective aberrations, random noise, and other kinds of systematic errors. The influences of all these errors are mixed, so extracting the positional deviation from the LR-image constraint becomes an ill-posed problem. It is thus difficult to combine these methods with the optimization and correction of other non-ideal parameters. In addition, the data-driven optimization approaches are time-consuming and prone to falling into local traps.

Aiming to reduce the influence of other systematic errors and accurately obtain the positional deviation of the LED board, we propose a correction method with an explicit physical imaging model, termed pdcFPM. Based on the constraint that the LED elements are arranged in a regular array, and on the relationship between the lateral offset of the defocused image and the illumination direction [25–27], the positional parameters of the LED board can be calculated precisely with the aid of subpixel image registration and non-linear regression algorithms. Since pdcFPM calculates the offsets of binarized images to obtain the positional parameters, and these offsets rely mainly on the geometric information rather than the intensity distribution of the LR images, it avoids the influence of other systematic errors. Simulations and experiments have been conducted to demonstrate the feasibility and effectiveness of pdcFPM for large-scale positional deviations. The proposed method can significantly improve the quality of the reconstructed complex amplitude images, obtain the objective pupil function free of the influence of positional deviations, and therefore evidently reduce the accuracy requirements on the LED board when building FPM platforms.

The remainder of this paper is organized in the following manner. In Section 2.1, we analyze the effect of LED positional deviation through the reconstructed results; in Section 2.2, we establish a mathematical model between LED positional parameters and LR image offset; and in Section 2.3, we give the flow process of the method. The effectiveness of pdcFPM is confirmed with simulations and experiments in Sections 3 and 4, respectively, and the summary and further discussion are given in Section 5.

2. Principle

2.1 Positional deviation in the FPM system

A typical FPM system consists of an LED array, a low-NA objective lens, and a monochromatic camera. LEDs are turned on sequentially to provide angle-varied illuminations, and the camera is used to acquire the corresponding LR intensity images as the raw data set. For each LEDm,n (row m, column n) and its illumination wavevector $({u_{m,n}},{v_{m,n}})$, the LR intensity image $I_{m,n}^c$ can be described as

$$I_{m,n}^c(x^{\prime},y^{\prime}) = |{\mathrm{{\cal F}}^{ - 1}}\{ \mathrm{{\cal F}}[o(x,y){e^{j(x{u_{m,n}} + y{v_{m,n}})}}]P(u,v)\} {|^2},$$
where $\mathrm{{\cal F}}$ and ${\mathrm{{\cal F}}^{ - 1}}$ represent the Fourier and inverse Fourier transform operators respectively, o(x, y) is the sample’s complex transmission function, (x, y) denotes the two-dimensional (2D) Cartesian coordinates in the sample plane, j is the imaginary unit, $P(u,v)$ is the pupil function, (u, v) is the wavevector in the pupil plane, and $(x^{\prime},y^{\prime})$ is the 2D Cartesian coordinates in the image plane. The incident wavevector $({u_{m,n}},{v_{m,n}})$ can be expressed as
$$\begin{array}{c} {u_{m,n}} ={-} \frac{{2\pi }}{\lambda }\frac{{{x_c} - {x_{m,n}}}}{{\sqrt {{{({x_c} - {x_{m,n}})}^2} + {{({y_c} - {y_{m,n}})}^2} + {h^2}} }},\\ {v_{m,n}} ={-} \frac{{2\pi }}{\lambda }\frac{{{y_c} - {y_{m,n}}}}{{\sqrt {{{({x_c} - {x_{m,n}})}^2} + {{({y_c} - {y_{m,n}})}^2} + {h^2}} }}, \end{array}$$
where (xc, yc) is the central position of each small segment of the sample, (xm,n, ym,n) represents the position of LEDm,n, λ is the central wavelength, and h is the distance between the sample and the LED array. Subsequently, these raw LR images are processed by the conventional FP reconstruction algorithm to obtain the HR complex amplitude images. The reconstruction process is composed of five steps. Firstly, initialize the guesses of the HR sample spectrum ${O_0}(u,v)$ and pupil function ${P_0}(u,v)$ to start the algorithm. Generally, the initial guess of the pupil function is set as a circular low-pass filter with ones inside the passband, zeros outside the passband, and uniform zero phase; the initial sample spectrum guess is set as the Fourier transform of the up-sampled central LR image. Secondly, use the guesses of the pupil function and sample spectrum to generate the LR image in the (m, n)-th image plane as
$$o_{_{m,n}}^e(x^{\prime},y^{\prime}) = {\mathrm{{\cal F}}^{ - 1}}\{ {O_0}(u - {u_{m,n}},v - {v_{m,n}}){P_0}(u,v)\} .$$
Thirdly, replace the amplitude of the simulated LR image with the square-root of the actual measurement, and keep the phase unchanged to update the LR image as
$$o_{_{0,m,n}}^u(x^{\prime},y^{\prime}) = \sqrt {I_{m,n}^c(x^{\prime},y^{\prime})} \frac{{o_{m,n}^e(x^{\prime},y^{\prime})}}{{|o_{m,n}^e(x^{\prime},y^{\prime})|}}.$$
Next, update the corresponding sub-spectrum of the HR sample spectrum using the updated LR image, which is given by [28]:
$$\begin{array}{c} O_i^{\prime}(u - {u_{m,n}},v - {v_{m,n}}) = {O_i}(u - {u_{m,n}},v - {v_{m,n}}) + \alpha \frac{{P_i^\ast (u,v)}}{{|{P_i}(u,v)|_{\max }^2}}\Delta {O_{i,m,n}},\\ P_i^{\prime}(u,v) = {P_i}(u,v) + \beta \frac{{O_i^\ast (u - {u_{m,n}},v - {v_{m,n}})}}{{|{O_i}(u - {u_{m,n}},v - {v_{m,n}})|_{\max }^2}}\Delta {O_{i,m,n}}, \end{array}$$
where α and β are the iterative step sizes, usually set to one, i denotes the iteration number, and $\Delta {O_{i,m,n}}$ is the auxiliary gradient function used for updating: $\Delta {O_{i,m,n}} = \mathrm{{\cal F}}\{ o_{i,m,n}^u(x^{\prime},y^{\prime})\} - \mathrm{{\cal F}}\{ o_{i,m,n}^e(x^{\prime},y^{\prime})\}$. The last step is to repeat the above four steps until all the LR images are used, and the whole iterative process is repeated until the solution converges. Finally, the HR sample spectrum is inverse Fourier transformed to the spatial domain to recover the HR intensity and phase distributions.
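The five steps above can be sketched in a few lines of NumPy. This is a minimal single-sub-spectrum sketch of the update rules in Eqs. (3)–(5), not the authors' implementation; the indexing convention (`uc`, `vc` as pixel coordinates of the sub-spectrum corner) and the FFT-shift placement are illustrative assumptions.

```python
import numpy as np

def epry_update(O, P, I_meas, uc, vc, alpha=1.0, beta=1.0):
    """One EPRY sub-iteration (Eqs. (3)-(5)) for a single LED: simulate
    the LR image, enforce the measured amplitude, and jointly update the
    sample sub-spectrum and the pupil. `uc, vc` are the (assumed) pixel
    coordinates of the sub-spectrum's corner in the HR spectrum O."""
    n = P.shape[0]
    sub = O[uc:uc + n, vc:vc + n]                       # selected sub-spectrum
    o_e = np.fft.ifft2(np.fft.ifftshift(sub * P))       # Eq. (3): simulated LR field
    o_u = np.sqrt(I_meas) * np.exp(1j * np.angle(o_e))  # Eq. (4): amplitude replacement
    dO = np.fft.fftshift(np.fft.fft2(o_u)) - sub * P    # auxiliary gradient
    O_new = O.copy()                                    # Eq. (5): joint update
    O_new[uc:uc + n, vc:vc + n] = sub + alpha * np.conj(P) / (np.abs(P).max() ** 2) * dO
    P_new = P + beta * np.conj(sub) / (np.abs(sub).max() ** 2) * dO
    return O_new, P_new
```

A convenient sanity check is that a measurement generated from the current guesses is a fixed point of the update: the auxiliary gradient vanishes and both the spectrum and the pupil are left unchanged.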

It is worth noting that the accurate position of each sub-spectrum needs to be known for high recovery quality when updating the HR spectrum in the fourth step. In other words, the incident wavevector, determined by the position of each LED element and the distance between the LED board and the sample, needs to be precisely known, which puts high requirements on constructing and modifying FPM systems. Fortunately, some improved recovery strategies have been proposed to relax these demands. The EPRY algorithm [28] obtains a better solution by updating the HR spectrum and the pupil function simultaneously. If the deviation between the ideal and actual positions of the sub-spectrum is small, it can be corrected by the EPRY algorithm. However, as the deviation becomes larger, the performance of the EPRY algorithm decreases, and wrinkle artifacts arise in the reconstructed amplitude and phase images.

Here, we demonstrate the influence of LED array positional deviation on the quality of the reconstructed HR images by simulations. The parameters of the simulations are chosen based on an actual system, and they are used in all the simulations throughout this article. A 21×21 programmable LED array (2.5mm spacing) with monochromatic wavelength (λ=470nm) is placed 92mm beneath the sample to provide angle-varied illuminations. The NA and magnification of the objective lens are 0.1 and 4 respectively, and the pixel size of the camera is 2.4µm. For simplicity and without loss of generality, we assume the LED array is only moved along the x-axis, and use the symbol Δx to represent the value of the positional deviation. The EPRY algorithm is utilized to recover the intensity and phase when Δx varies from 0 to 2000µm at 200µm intervals, and the corresponding results are shown in Fig. 1. Figures 1(a1) and (b1) show the intensity and phase ground truths of the sample. Figures 1(a2) and (b2) show the reconstructed intensity and phase respectively when Δx equals 200µm. Although the images are somewhat blurred and the center of the recovered phase image becomes too dark, the change is not disastrous, and the distribution characteristics of both the intensity and phase are still distinguishable. The reconstructed intensity and phase corresponding to the case when Δx increases to 800µm are shown in Figs. 1(a3) and (b3), respectively. Although most of the intensity information is resolvable, evident wrinkle artifacts appear in the intensity and phase images simultaneously, and the phase information is seriously distorted. As shown in Figs. 1(a4) and (b4), when Δx equals 2000µm, which results in the wrong choice of the central LED, obvious dark shadow artifacts and unexpected phase information arise in the intensity image, and the whole phase image is submerged and indistinguishable.


Fig. 1. Quality of reconstructed results versus the positional deviation of LED board. (a1) and (b1) are the amplitude and phase profiles of the sample; (a2)-(a4) are the reconstructed intensity images when Δx equals to 200µm, 800µm, and 2000µm, respectively; (b2)-(b4) are the corresponding reconstructed phase images; (c1) and (c2) are the RMSE between the reconstructed and true intensity and phase versus Δx.


By visually comparing the reconstructed intensity and phase images with the same Δx, we can infer that the degradation of the reconstructed phase quality is more serious than that of the intensity, because the transversal bias of the sub-spectrum in the Fourier domain is more relevant to the phase information in the spatial domain. Figures 1(c1) and (c2) show the relationship between the root mean square error (RMSE) of the reconstructed images and Δx, where the RMSE of the phase is greater than that of the intensity for the same Δx. In addition, it is noteworthy that the RMSE is not linearly related to Δx, but remains stable when Δx changes within an interval (e.g., the RMSE of intensity is about 0.2 when Δx varies from 1400µm to 1800µm), and the RMSE increases dramatically when Δx exceeds a certain threshold (e.g., 800µm). This stepwise behavior is caused by the discretization of the Fourier-domain coordinates when reconstructing HR images. The quality of the reconstructed images is stable, and the RMSE does not change significantly, if the deviation between the real and ideal positions of the sub-spectrum is no more than one pixel in the Fourier domain. However, the recovered quality drops sharply when the deviation exceeds one pixel.
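The one-Fourier-pixel threshold can be estimated from the system parameters with a small-angle approximation: the Fourier-pixel size is 2π/FOV, and the wavevector changes by roughly (2π/λ)/h per unit of LED shift. The sketch below uses the simulation parameters above; the reconstructed segment size (128 LR pixels) is our assumption for illustration, so the resulting number only indicates the order of magnitude of the threshold.

```python
import numpy as np

# Small-angle estimate of the LED shift that moves a sub-spectrum by one
# Fourier-domain pixel (the stability threshold discussed above).
wavelength = 470e-9        # central wavelength, m
h = 92e-3                  # sample-to-LED-array distance, m
cam_pixel = 2.4e-6         # camera pixel size, m
mag = 4.0                  # objective magnification
n_px = 128                 # assumed LR segment size in pixels (illustrative)

fov = n_px * cam_pixel / mag              # object-side field of view of the segment
du = 2 * np.pi / fov                      # Fourier-pixel size, rad/m
du_per_dx = (2 * np.pi / wavelength) / h  # wavevector change per metre of LED shift
dx_one_pixel = du / du_per_dx             # = wavelength * h / fov
print(f"one-pixel stability threshold ~ {dx_one_pixel * 1e6:.0f} um")
```

For these parameters the threshold lands in the hundreds of micrometres, consistent with the plateau-and-jump behavior of the RMSE curves in Fig. 1(c).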

Moreover, in an actual FPM system, the LED array also has other positional deviations besides the x-axis shift, such as the y-axis shift and the rotation around the optical axis. The recovered quality degrades even more under the combined effect of these deviations, so correcting the positional deviation of the LED array is a significant task.

2.2 Image offset model

A conventional FPM platform and the corresponding LED array with positional deviation are shown in Fig. 2. The blue LED element in Fig. 2(c) is considered as the central LED, but point O is the actual center, and the orientation of the LED array (the black dashed line) deviates from the x-axis. In fact, the high manufacturing precision of existing LED boards allows us to assume that the LED elements are arranged in a regular grid, and an FPM system is always placed on a horizontal table. We can therefore assume the LED array shown in Fig. 2(b) is perpendicular to the optical axis z to simplify the LED-based illumination model. Then, the positional coordinates of each LED element can be expressed as

$$\begin{array}{c} {x_{m,n}} = [m\cos (\theta ) + n\sin (\theta )]{d_{LED}} + \Delta x,\\ {y_{m,n}} = [ - m\sin (\theta ) + n\cos (\theta )]{d_{LED}} + \Delta y, \end{array}$$
where $\theta $ denotes the rotation angle around the optical axis z, ${d_{LED}}$ is the distance between adjacent LED elements, $\Delta x$ and $\Delta y$ represent the shifts along the x-axis and y-axis, respectively.
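Eqs. (6) and (2) together map the array parameters (Δx, Δy, θ, h) to an illumination wavevector for every LED. A minimal NumPy sketch (units in metres, θ in radians; the default spacing and wavelength match the system described in this paper, and the function name is ours):

```python
import numpy as np

def led_wavevectors(m, n, dx, dy, theta, h, d_led=2.5e-3,
                    wavelength=470e-9, xc=0.0, yc=0.0):
    """Illumination wavevector of LED(m,n) under the positional-deviation
    model: Eq. (6) for the LED position, then Eq. (2) for (u, v)."""
    # Eq. (6): rotated and shifted LED position in the array plane.
    x = (m * np.cos(theta) + n * np.sin(theta)) * d_led + dx
    y = (-m * np.sin(theta) + n * np.cos(theta)) * d_led + dy
    # Eq. (2): incident wavevector for a segment centred at (xc, yc).
    r = np.sqrt((xc - x) ** 2 + (yc - y) ** 2 + h ** 2)
    u = -(2 * np.pi / wavelength) * (xc - x) / r
    v = -(2 * np.pi / wavelength) * (yc - y) / r
    return u, v
```

With no deviation, the central LED gives (u, v) = (0, 0), and the first off-axis LED illuminates at an NA of roughly sin(atan(2.5/92)) ≈ 0.027.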


Fig. 2. FPM system and LED array model. (a) is an FPM system; (b) is the corresponding LED array model with positional deviation; (c) is the local magnification of (b).


If the sample is out of focus, the captured LR intensity image is obtained by replacing Eq. (1) with

$$I_{m,n,d}^c(x^{\prime},y^{\prime}) = |{\mathrm{{\cal F}}^{ - 1}}\{ O(u - {u_{m,n}},v - {v_{m,n}})P(u,v)H(u,v,{z_d})\} {|^2},$$
where ${z_d}$ is the defocus distance, $H(u,v,{z_d})$ is the defocus phase factor in the pupil plane, and it can be expressed as
$$H(u,v,{z_d}) = \exp [jA{z_d}\sqrt {{{(\frac{{2\pi }}{\lambda })}^2} - {{(u - {u_{m,n}})}^2} - {{(v - {v_{m,n}})}^2}} ],$$
where A is the magnification of the objective lens. By performing a binomial expansion of the square-root term in Eq. (8) and keeping the first two terms, we can get
$$H(u,v,{z_d}) = \exp [jA{z_d}(u\frac{{{u_{m,n}}}}{{{w_{m,n}}}} + v\frac{{{v_{m,n}}}}{{{w_{m,n}}}})]\exp (jA{z_d}{w_{m,n}})\exp ( - jA{z_d}\frac{{{u^2} + {v^2}}}{{2{w_{m,n}}}}),$$
where ${w_{m,n}}$ denotes the wave vector along the z-axis and can be written as
$${w_{m,n}} = \sqrt {{{(\frac{{2\pi }}{\lambda })}^2} - u_{m,n}^2 - v_{m,n}^2} . $$
The first term of Eq. (9) results in the lateral offset of the defocused image, which is used to calculate the LED deviation in the proposed method. The second and third terms degrade the quality of the LR image but contribute little to the offset, so we neglect them and derive the relationship between the defocused and focused images as
$$\begin{array}{c} I_{m,n,d}^c(x^{\prime},y^{\prime}) = I_{m,n}^c(x^{\prime} + \Delta x{^{\prime}_{m,n}},y^{\prime} + \Delta y{^{\prime}_{m,n}}),\\ \Delta x{^{\prime}_{m,n}} = A{z_d}\frac{{{u_{m,n}}}}{{{w_{m,n}}}},\\ \Delta y{^{\prime}_{m,n}} = A{z_d}\frac{{{v_{m,n}}}}{{{w_{m,n}}}}, \end{array}$$
where $\Delta x{^{\prime}_{m,n}}$ and $\Delta y{^{\prime}_{m,n}}$ are the offsets along the $x^{\prime}$ and $y^{\prime}$ directions, respectively. Since both A and zd are constants in a fixed system, the offset relies entirely on ${u_{m,n}}$ and ${v_{m,n}}$, which means the illumination direction of each LED element can be obtained by calculating the offset.
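A direct transcription of Eqs. (10) and (11) gives the predicted image-plane offset for a given illumination wavevector. This is a sketch; the default magnification and wavelength mirror the system parameters used later in the paper.

```python
import numpy as np

def defocus_offset(u, v, z_d, A=4.0, wavelength=470e-9):
    """Predicted lateral offset (Eq. (11)) of the defocused image for the
    illumination wavevector (u, v); z_d is the defocus distance in metres."""
    k = 2 * np.pi / wavelength
    w = np.sqrt(k ** 2 - u ** 2 - v ** 2)   # axial wavevector w_mn, Eq. (10)
    return A * z_d * u / w, A * z_d * v / w
```

Since u/w is the tangent of the illumination angle, the offset is simply A·z_d·tanθ along each axis: on-axis illumination produces no shift, and larger illumination angles shift the defocused image further.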

2.3 Correction strategy

The process of pdcFPM is shown in Fig. 3. Firstly, we adjust the sample to the focus position with the focusing knob of the microscope platform shown in Fig. 2(a), then turn on LED0,0 and capture the corresponding LR image as the reference. The in-focus condition in an FPM system can easily be checked by turning on two symmetrically positioned LEDs and examining whether a feature moves laterally between the two images. If the sample is in the focus position, the feature will not shift, because the corresponding zd in Eq. (9) is zero.


Fig. 3. Block diagram of pdcFPM strategy.


Secondly, we adjust the sample to a defocus position, then turn on the LEDs within the bright field sequentially, and capture the LR images $I_{m,n,d}^c(x^{\prime},y^{\prime}),\;m,n ={-} 2, - 1,0,1,2$. In this step, the defocus distance is a crucial parameter. The calculated offset will have a low signal-to-noise ratio when the defocus distance is not large enough, because the offset is too small. Conversely, when the defocus amount is too large, the image will be seriously distorted by the defocus aberration corresponding to the last two terms in Eq. (9). In this paper, we choose the defocus distance to be 200µm, so that the image offset is obvious and the defocus aberration is still acceptable for subpixel image registration.

Next, we calculate the offsets between the defocused images and the reference image. Considering that calculating the image offset over the whole FOV is unnecessary and time-consuming, we choose to calculate the offset of a small image segment, which can be a stripe on the resolution target, a small hole, or a cell, among others. Furthermore, because the background grayscale affects the precision of the calculated offset, we binarize the selected small segment and then calculate the offset of its centroid with a subpixel image registration algorithm as

$$\begin{array}{l} \Delta x_{m,n}^{\prime} = \frac{{\sum\limits_{x^{\prime}} {[x^{\prime}\sum\limits_{y^{\prime}} {{g_{m,n}}(x^{\prime},y^{\prime})} ]} }}{{\sum\limits_{x^{\prime},y^{\prime}} {{g_{m,n}}(x^{\prime},y^{\prime})} }} - \frac{{\sum\limits_{x^{\prime}} {[x^{\prime}\sum\limits_{y^{\prime}} {{g_{0,0}}(x^{\prime},y^{\prime})} ]} }}{{\sum\limits_{x^{\prime},y^{\prime}} {{g_{0,0}}(x^{\prime},y^{\prime})} }},\\ \Delta y_{m,n}^{\prime} = \frac{{\sum\limits_{y^{\prime}} {[y^{\prime}\sum\limits_{x^{\prime}} {{g_{m,n}}(x^{\prime},y^{\prime})} ]} }}{{\sum\limits_{x^{\prime},y^{\prime}} {{g_{m,n}}(x^{\prime},y^{\prime})} }} - \frac{{\sum\limits_{y^{\prime}} {[y^{\prime}\sum\limits_{x^{\prime}} {{g_{0,0}}(x^{\prime},y^{\prime})} ]} }}{{\sum\limits_{x^{\prime},y^{\prime}} {{g_{0,0}}(x^{\prime},y^{\prime})} }}, \end{array}$$
where ${g_{m,n}}(x^{\prime},y^{\prime})$ is the grayscale distribution of binarized image corresponding to LEDm,n.
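The centroid difference of Eq. (12) is straightforward to implement; a sketch for binarized image arrays follows (function and variable names are ours, not from the paper):

```python
import numpy as np

def centroid_shift(g, g_ref):
    """Centroid-based subpixel offset (Eq. (12)) between a binarized
    defocused image g and the binarized in-focus reference g_ref."""
    def centroid(img):
        ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
        s = img.sum()
        return (xs * img).sum() / s, (ys * img).sum() / s  # (cx, cy)
    cx, cy = centroid(g)
    rx, ry = centroid(g_ref)
    return cx - rx, cy - ry
```

Because the centroid is an intensity-weighted average over many pixels, the computed shift is naturally subpixel even though the input images are binary.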

Once we obtain the offsets, we utilize non-linear regression algorithm to obtain the positional parameters of the LED array. Mathematically, the non-linear regression process can be described as

$$\begin{array}{c} E(\Delta x,\Delta y,\theta ,h) = \sum\limits_{m,n ={-} 2}^2 {\{ {{[\Delta x{^{\prime}_{m,n}} - \Delta x{^{\prime}_{m,n,e}}(\Delta x,\Delta y,\theta ,h)]}^2} + {{[\Delta y{^{\prime}_{m,n}} - \Delta y{^{\prime}_{m,n,e}}(\Delta x,\Delta y,\theta ,h)]}^2}} \} ,\\ {(\Delta x,\Delta y,\theta ,h)^u} = argmin[E(\Delta x,\Delta y,\theta ,h)], \end{array}$$
where $E(\Delta x,\Delta y,\theta ,h)$ is the defined non-linear regression function to be minimized, $[\Delta x{^{\prime}_{m,n,e}}(\Delta x,\Delta y,\theta ,h),\Delta y{^{\prime}_{m,n,e}}(\Delta x,\Delta y,\theta ,h)]$ denotes the image offset estimate calculated with Eqs. (2), (6), and (11), and ${(\Delta x,\Delta y,\theta ,h)^u}$ is the optimal estimate of the positional parameters of the LED array.

At last, we use ${(\Delta x,\Delta y,\theta ,h)^u}$ to correct the position of each sub-spectrum when updating the HR sample spectrum in the conventional FPM reconstruction process. Nevertheless, obtaining the correct LED deviation is not enough to guarantee an optimal HR result. When the LED array has a considerable deviation (larger than half the LED spacing), the original optimal central LED may be replaced by another LED near it, and the final reconstructed HR complex image will be significantly degraded if the sub-spectrum recovery order keeps the original spiral order. In the process of reconstructing the HR image with pdcFPM, the central LED and the recovery path are reselected according to the calculated parameters to ensure fast and optimal convergence for arbitrary positional deviations.
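The reordering step can be sketched as follows: after correction, each LED's true distance from the optical axis is recomputed from Eq. (6), the nearest LED becomes the new starting (central) element, and the update sequence proceeds outward in order of increasing distance. This is our paraphrase of the idea, not the paper's code.

```python
import numpy as np

def update_order(dx, dy, theta, d_led=2.5e-3, half=10):
    """Re-select the central LED and re-sort the sub-spectrum update order
    by the corrected distance of each LED from the optical axis, so the
    recovery still starts from the lowest illumination angle after a
    large positional deviation (illustrative sketch)."""
    order = []
    for m in range(-half, half + 1):
        for n in range(-half, half + 1):
            # Eq. (6): corrected LED position.
            x = (m * np.cos(theta) + n * np.sin(theta)) * d_led + dx
            y = (-m * np.sin(theta) + n * np.cos(theta)) * d_led + dy
            order.append((np.hypot(x, y), (m, n)))
    order.sort(key=lambda t: t[0])
    return [mn for _, mn in order]
```

For a shift larger than the LED spacing, the first element of the returned order is no longer LED(0,0) but the neighbouring LED that actually sits closest to the optical axis.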

3. Simulation

Before applying the pdcFPM strategy to correct the positional deviations in actual FPM systems, we first validate its effectiveness with simulations. The platform is a laptop with a CPU (i7-7700HQ), and no parallel computing framework is utilized in the simulations. As introduced in Section 2.3, we use the offset of a small image segment to represent that of a defocused image and choose the defocus distance zd to be 200µm. Considering that there are many rectangular stripes on the resolution target, we employ a square with a side length of 50µm as the sample, as shown in Fig. 4(a). The positional deviations are introduced artificially and taken in the range of $\Delta x \in ( - 2.5mm,2.5mm)$, $\Delta y \in ( - 2.5mm,2.5mm)$, $h \in (87mm,97mm)$, $\theta \in ( - 5^\circ ,5^\circ )$. In an actual FPM system, the specific positional deviations used for verification experiments can be realized with mechanical adjustment devices. Even for a microscope system without any mechanical adjustment devices, we can introduce a relative LED array deviation by choosing an appropriate LED element as the central one according to the characteristics of the spots acquired with the camera, in which case the transversal deviation is smaller than the introduced maximum value.


Fig. 4. Simulated results of defocus image offsets. (a) is the square sample; (b) is the focus LR image illuminated with LED0,0; (c) and (d) are two defocus LR images illuminated with LED0,0 and LED2,2 respectively; (e) shows the offsets distribution of 25 defocus LR images.


Figure 4 shows the simulated results of the defocused image offsets with deviation parameters of $(\Delta x = 1mm,\Delta y = 1mm,\theta = 5^\circ ,h = 92mm)$. Figure 4(a) is the simulated square sample, whose in-focus LR image illuminated with LED0,0 is shown in Fig. 4(b), and its defocused LR images illuminated with LED0,0 and LED2,2 are shown in Figs. 4(c) and (d), respectively. The offset distribution of the 25 defocused LR images is illustrated in Fig. 4(e). After processing the image offsets with the proposed method, the recovered positional parameters $(\Delta x = 0.997mm,\Delta y = 1.007mm,\theta = 5.243^\circ ,h = 92.15mm)$, obtained within 0.020s, are in good agreement with the set parameters.

To further verify the performance of pdcFPM, we continue to perform a set of simulations with different positional parameters. At the same time, we use another data-driven position deviation correction approach termed SC-FPM as a comparative experiment, and both the recovered positional parameters and the processing time achieved with pdcFPM and SC-FPM are shown in Table 1. The corrected parameters with two methods are all in good agreement with the actual parameters, but the processing time of pdcFPM is three orders of magnitude smaller than that of SC-FPM.


Table 1. Recovered positional parameters and processing time of pdcFPM and SC-FPM

Next, we utilize the recovered positional parameters to calibrate the position of each sub-spectrum in the Fourier domain during the process of reconstructing HR images, and compare the performance of pdcFPM with that of EPRY-FPM and SC-FPM. Figures 5(a1) and (a2) show the ideal HR intensity and phase profiles. Figures 5(b1) and (c1) show the recovered HR intensity and phase images with EPRY-FPM when the introduced parameters are $(\Delta x = 1mm,\Delta y = 1mm,\theta = 5^\circ ,h = 92mm)$. The corresponding HR intensity and phase images recovered with SC-FPM and pdcFPM are shown in Figs. 5(d1)-(e1) and 5(f1)-(g1), respectively, where all artifacts have been eliminated and the recovered results are about the same. However, as reported in Ref. [22], SC-FPM is an iterative data-driven method based on the simulated annealing algorithm and requires at least 15 iterations (about 14s for an LR image of 128×128 pixels) to guarantee a stable solution, while only 0.020s is needed by pdcFPM. Figures 5(b2)-(b4) and 5(c2)-(c4) show the recovered results with EPRY-FPM under different positional deviations, where obvious and regular wrinkle artifacts disturb the observation of the recovered information. After correcting the positional deviations with pdcFPM, the details of both the intensity and phase images become resolvable, as shown in Figs. 5(f2)-(f4) and 5(g2)-(g4), which illustrates the effectiveness of pdcFPM for calibrating arbitrary positional deviations.


Fig. 5. Recovered HR images of EPRY-FPM, SC-FPM and pdcFPM. (a1) and (a2) are the ideal HR intensity and phase profiles; (b1)-(b4) and (c1)-(c4) are the recovered HR intensity and phase images with EPRY-FPM; (d1)-(d4) and (e1)-(e4) are the recovered HR intensity and phase images with SC-FPM; (f1)-(f4) and (g1)-(g4) are the recovered HR intensity and phase images with pdcFPM.


4. Experiment

To evaluate the effectiveness of pdcFPM experimentally, we use a USA-1951 resolution target as the sample to compare the reconstructed intensity distribution of one image segment (256×256 pixels) and pupil function with EPRY-FPM, SC-FPM, and pdcFPM, respectively. A 21×21 LED array (2.5mm spacing, central wavelength 470nm with 20nm bandwidth), an objective lens (NA = 0.1, A = 4) and a camera (FLIR, BFS-U3-200S6M-C, sensor size 1”, dynamic range 71.89 dB, pixel size 2.4µm) are used to build the FPM system, as shown in Fig. 2(a).

Figure 6 shows the experimental results with different LED array positional deviations. Figure 6(a) is the full-FOV LR image, and the small rectangle shown in Fig. 6(b1) is used to calculate the image offset, simplifying the calculation and accelerating the subpixel registration algorithm. Figure 6(b2) is an LR image segment of Fig. 6(a), which is blurry owing to the restriction of the low-NA objective lens. Figures 6(c1) and (c2) show the reconstructed HR intensity image and pupil function without positional deviation. The HR intensity image has excellent imaging quality, and the reconstructed pupil has good symmetry, which is consistent with prior knowledge. Next, we translate the LED array by 2mm along the y-axis with a mechanical adjustment stage; the corresponding reconstructed intensity image and pupil function with EPRY-FPM are shown in Figs. 6(d1) and (d2), where wrinkle artifacts appear. The degradation of the imaging quality is not severe, since EPRY-FPM identifies the positional deviation of the LED array and treats it as a kind of aberration. As a result, EPRY-FPM couples the positional deviation into the reconstructed pupil function, and compared with Fig. 6(c2), the pupil in Fig. 6(d2) is distorted. Similarly, when we use the mechanical adjustment stage to move the LED array by 2mm along the x-axis and y-axis simultaneously, the reconstructed HR intensity image and pupil function with EPRY-FPM, shown in Figs. 6(d3) and (d4), are severely degraded, and the details of the resolution line pairs are irresolvable.


Fig. 6. Experimental performance comparison of EPRY-FPM, SC-FPM, and pdcFPM. (a) is a full FOV LR image; (b1) is the small segment used to calculate image offset; (b2) is a blurry LR image of 256×256 pixels in (a); (c1) and (c2) are the reconstructed intensity image and pupil function with EPRY-FPM when there is no positional deviation; (d1)-(d2), (e1)-(e2), and (f1)-(f2) are the reconstructed intensity images and pupil functions of EPRY-FPM, SC-FPM, and pdcFPM when Δy = 2mm, respectively; (d3)-(d4), (e3)-(e4), and (f3)-(f4) are the reconstructed intensity images and pupil functions of EPRY-FPM, SC-FPM, and pdcFPM when Δx = 2mm and Δy = 2mm, respectively.


Groups (e) and (f) show the reconstructed HR intensity images and pupil functions of SC-FPM and pdcFPM, respectively. When the LED array is moved by 2mm along the y-axis, the reconstructed results of SC-FPM after 35 iterations are shown in Figs. 6(e1) and (e2), where the wrinkles have been removed with the final parameters of $\Delta x ={-} 0.175mm$, $\Delta y = 1.419mm$, $\theta ={-} 0.111^\circ $, $h = 91.83mm$. However, the recovered parameters are not consistent with the introduced ones, it consumes 321.02s to obtain an intensity image with low contrast, and compared with Fig. 6(d2) the pupil function is even more distorted. Figures 6(f1) and (f2) present the reconstructed results of pdcFPM with the corrected positional parameters of $\Delta x ={-} 0.279mm$, $\Delta y = 1.980mm$, $\theta ={-} 0.393^\circ $, $h = 93.39mm$, which match the actual positional deviations well; only 0.02s and 26.52s are needed to obtain the accurate parameters and the high-quality reconstructed images, respectively. When the LED array is shifted by 2mm along both the x-axis and y-axis, the reconstructed results of SC-FPM after 30 iterations with the final parameters of $\Delta x = 1.028mm$, $\Delta y = 1.297mm$, $\theta ={-} 1.395^\circ $, $h = 94.74mm$ are shown in Figs. 6(e3) and (e4), where the parameters differ considerably from the truth and the imaging quality is unacceptable owing to the low contrast and the distortion of the pupil function. Figures 6(f3) and (f4) show the reconstructed results of pdcFPM with the corrected positional parameters of $\Delta x = 1.949mm$, $\Delta y = 1.997mm$, $\theta ={-} 0.327^\circ $, $h = 93.04mm$, where the reconstructed intensity image has a uniform background and higher contrast, and each line pair can be resolved.

In addition, to verify the generalizability of our method, we use a paramecium slice as the sample and randomly move and rotate the LED array to an arbitrary position. Figure 7(a) shows the full FOV LR image, and Fig. 7(b) is the enlargement of the small segment in the red box in Fig. 7(a). The reconstructed intensity, phase, and pupil function with EPRY-FPM are shown in Figs. 7(c1)-(c3), respectively. Many wrinkle artifacts appear in both the reconstructed intensity and phase distributions, severely degrading the reconstruction quality and image contrast. The reconstructed intensity, phase, and pupil function with pdcFPM, using the recovered parameters $\Delta x = 1.680mm, \Delta y = 0.089mm, \theta = -5.135^\circ, h = 93.15mm$, are shown in Figs. 7(d1)-(d3), respectively. Compared with Figs. 7(c1)-(c3), the artifacts caused by the positional deviations have vanished and the imaging quality is greatly improved.


Fig. 7. Reconstructed results of pdcFPM for a random positional deviation. (a) is the full FOV LR image of the paramecium slice; (b) is the enlargement of one small segment in (a); (c1)-(c3) are the reconstructed intensity, phase and pupil with EPRY-FPM; (d1)-(d3) are the reconstructed intensity, phase and pupil with pdcFPM.


5. Discussion

In this paper, we propose a positional correction method, termed pdcFPM, that yields high-quality reconstructions of intensity, phase, and pupil function. Its feasibility and effectiveness are verified by both simulations and experiments. Unlike the existing data-driven optimization strategies, pdcFPM builds a clear physical model that effectively decouples the positional deviations from the many other coupled systematic errors. We exploit the relationship between the offset of the defocused image and the illumination direction to first obtain the four key positional parameters of the LED array, and then precisely calibrate the position of each sub-spectrum during phase retrieval and aperture synthesis. The comparative experiments with large-scale positional deviations demonstrate the excellent performance of pdcFPM. Moreover, even when the positional deviations are so large (exceeding the interval between adjacent LEDs) that the central LED is wrongly identified, our method still achieves good recoveries by rearranging the updating order. Thus, our method can handle arbitrary positional deviations.
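The parameter-recovery step summarized above can be sketched as a small least-squares fit. The sketch below is illustrative only, not the authors' code: it simulates the measured defocus-image offsets of a 5×5 LED block from a hidden ground truth and recovers $(\Delta x, \Delta y, \theta, h)$ by minimizing a squared-error metric over the predicted offsets. The wavelength, LED pitch, and combined defocus factor A·z_d are assumed demo values, and the offsets are kept absolute rather than referenced to the central LED for simplicity.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed demo constants (not taken from the paper's setup description):
WAVELEN = 0.632e-3          # illumination wavelength [mm]
D_LED = 4.0                 # LED pitch d_LED [mm]
A_ZD = 5.0                  # combined defocus factor A*z_d [mm]
K = 2 * np.pi / WAVELEN     # free-space wavenumber

def predicted_offsets(params, m, n):
    """Defocus-image offsets predicted by the geometric LED-array model."""
    dx, dy, theta, h = params
    t = np.deg2rad(theta)
    # LED positions with in-plane rotation theta and lateral shift (dx, dy)
    x = (m * np.cos(t) + n * np.sin(t)) * D_LED + dx
    y = (-m * np.sin(t) + n * np.cos(t)) * D_LED + dy
    r = np.sqrt(x**2 + y**2 + h**2)
    u, v = K * x / r, K * y / r          # transverse wavevector components
    w = np.sqrt(K**2 - u**2 - v**2)      # axial wavevector component
    return A_ZD * u / w, A_ZD * v / w    # lateral image shifts [mm]

# "Measured" offsets simulated from a hidden ground truth
m, n = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))
true = np.array([1.95, 2.0, -0.33, 93.0])   # dx [mm], dy [mm], theta [deg], h [mm]
meas_x, meas_y = predicted_offsets(true, m, n)

def cost(p):
    """Sum of squared residuals between measured and predicted offsets."""
    ex, ey = predicted_offsets(p, m, n)
    return np.sum((meas_x - ex) ** 2 + (meas_y - ey) ** 2)

fit = minimize(cost, x0=[0.5, 0.5, 0.5, 90.0], method="Powell",
               options={"maxiter": 10000, "xtol": 1e-10, "ftol": 1e-14})
print(np.round(fit.x, 3))
```

Because the cost is smooth and the four parameters act on the offset grid in distinct ways (uniform shift, rotation, and a radial scaling through h), a generic derivative-free optimizer recovers them reliably in this noise-free demo.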

Although pdcFPM can correct large-scale positional deviations and reduce the complexity of system construction, it places high demands on the accuracy of the focusing knob used for defocus adjustment, because the image offset is as small as tens of microns and is easily disturbed. Even with a focusing device of limited accuracy, the method remains feasible, since the image offset caused by the mechanical adjustment can be calibrated before the experiment. To further reduce the hardware requirements and eliminate the defocus-adjustment step, we are exploring LED-array positional correction based on the characteristics of the bright-field-to-dark-field transition boundary.
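The sub-pixel registration that this sensitivity argument depends on can be sketched with phase correlation plus parabolic peak interpolation. This is an illustrative alternative to the paper's centroid-based measure, not the authors' implementation; the function name and the synthetic test image are assumptions for the demo.

```python
import numpy as np

def subpixel_shift(ref, mov):
    """Estimate the (row, col) translation of `mov` relative to `ref`,
    in pixels, via phase correlation and 3-point parabolic peak fitting."""
    H, W = ref.shape
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    # Normalized cross-power spectrum -> sharp correlation peak at the shift
    corr = np.abs(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def vertex(cm, c0, cp):
        # Vertex of the parabola through three neighbouring samples
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d

    dy = py + vertex(corr[(py - 1) % H, px], corr[py, px], corr[(py + 1) % H, px])
    dx = px + vertex(corr[py, (px - 1) % W], corr[py, px], corr[py, (px + 1) % W])
    # Wrap circular peak coordinates to signed shifts; flip sign so a positive
    # result means the content of `mov` moved in the positive axis direction.
    if dy > H / 2: dy -= H
    if dx > W / 2: dx -= W
    return -dy, -dx

# Demo: a random image rolled by (+3, +5) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
dy, dx = subpixel_shift(img, np.roll(img, (3, 5), axis=(0, 1)))
```

Converting the recovered pixel shift to a physical offset then only requires dividing by the system magnification and multiplying by the camera pixel pitch, which is where the tens-of-microns scale quoted above comes from.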

Funding

National Natural Science Foundation of China (61735003, 61805011).

Acknowledgements

The authors acknowledge Guoan Zheng for the valuable discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]  

3. A. Pan, C. Zuo, and B. Yao, “High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine,” Rep. Prog. Phys. 83(9), 096101 (2020). [CrossRef]  

4. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, C. Ferreira, and Z. Zalevsky, “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470 (1996). [CrossRef]  

5. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

6. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture Fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006). [CrossRef]  

7. S. Pacheco, G. Zheng, and R. Liang, “Reflective Fourier ptychography,” J. Biomed. Opt. 21(2), 026010 (2016). [CrossRef]  

8. H. Lee, B. H. Chon, and H. K. Ahn, “Reflective Fourier ptychographic microscopy using a parabolic mirror,” Opt. Express 27(23), 34382 (2019). [CrossRef]  

9. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, and G. Zheng, “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586 (2014). [CrossRef]  

10. K. Wakonig, A. Diaz, A. Bonnin, M. Stampanoni, A. Bergamaschi, J. Ihli, M. Guizar-Sicairos, and A. Menzel, “X-ray Fourier ptychography,” Sci. Adv. 5(2), eaav0282 (2019). [CrossRef]  

11. A. C. S. Chan, J. Kim, A. Pan, H. Xu, D. Nojima, C. Hale, S. Wang, and C. Yang, “Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes),” Sci. Rep. 9(1), 11114 (2019). [CrossRef]  

12. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104 (2015). [CrossRef]  

13. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

14. S. Chowdhury, W. J. Eldridge, A. Wax, and J. Izatt, “Refractive index tomography with structured illumination,” Optica 4(5), 537 (2017). [CrossRef]  

15. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). [CrossRef]

16. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). [CrossRef]  

17. P. Song, S. Jiang, H. Zhang, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). [CrossRef]  

18. S. Dong, P. Nanda, K. Guo, J. Liao, and G. Zheng, “Incoherent Fourier ptychographic photography using structured light,” Photonics Res. 3(1), 19 (2015). [CrossRef]  

19. S. Zhang, G. Zhou, Y. Wang, Y. Hu, and Q. Hao, “A simply equipped fourier ptychography platform based on an industrial camera and telecentric objective,” Sensors 19(22), 4913 (2019). [CrossRef]  

20. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

21. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336 (2016). [CrossRef]  

22. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(9), 1–11 (2017). [CrossRef]

23. J. Zhang, X. Tao, P. Sun, and Z. Zheng, “A positional misalignment correction method for Fourier ptychographic microscopy based on the quasi-Newton method with a global optimization,” Opt. Commun. 452, 296–305 (2019). [CrossRef]  

24. H. Wei, J. Du, L. Liu, Y. He, Y. Yang, S. Hu, and Y. Tang, “Accurate and stable two-step LED position calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 26(10), 106502 (2021). [CrossRef]  

25. S. Zhang, G. Zhou, C. Zheng, T. Li, Y. Hu, and Q. Hao, “Fast digital refocusing and depth of field extended Fourier ptychography microscopy,” Biomed. Opt. Express 12(9), 5544 (2021). [CrossRef]  

26. G. Zhou, S. Zhang, Y. Zhai, Y. Hu, and Q. Hao, “Single-shot through-focus image acquisition and phase retrieval from chromatic aberration and multi-angle illumination,” Front. Phys. 9, 648827 (2021). [CrossRef]  

27. S. Jiang, Z. Bian, X. Huang, P. Song, H. Zhang, Y. Zhang, and G. Zheng, “Rapid and robust whole slide imaging based on LED-array illumination and color-multiplexed single-shot autofocusing,” Quant. Imaging Med. Surg. 9(5), 823–831 (2019). [CrossRef]

28. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960 (2014). [CrossRef]  




Figures (7)

Fig. 1.
Fig. 1. Quality of reconstructed results versus the positional deviation of the LED board. (a1) and (b1) are the amplitude and phase profiles of the sample; (a2)-(a4) are the reconstructed intensity images when Δx equals 200µm, 800µm, and 2000µm, respectively; (b2)-(b4) are the corresponding reconstructed phase images; (c1) and (c2) are the RMSE between the reconstructed and true intensity and phase versus Δx.
Fig. 2.
Fig. 2. FPM system and LED array model. (a) is an FPM system; (b) is the corresponding LED array model with positional deviation; (c) is the local magnification of (b).
Fig. 3.
Fig. 3. Block diagram of pdcFPM strategy.
Fig. 4.
Fig. 4. Simulated results of defocus image offsets. (a) is the square sample; (b) is the in-focus LR image illuminated with LED0,0; (c) and (d) are two defocused LR images illuminated with LED0,0 and LED2,2, respectively; (e) shows the offset distribution of the 25 defocused LR images.
Fig. 5.
Fig. 5. Recovered HR images of EPRY-FPM, SC-FPM and pdcFPM. (a1) and (a2) are the ideal HR intensity and phase profiles; (b1)-(b4) and (c1)-(c4) are the recovered HR intensity and phase images with EPRY-FPM; (d1)-(d4) and (e1)-(e4) are the recovered HR intensity and phase images with SC-FPM; (f1)-(f4) and (g1)-(g4) are the recovered HR intensity and phase images with pdcFPM.

Tables (1)


Table 1. Recovered positional parameters and processing time of pdcFPM and SC-FPM

Equations (13)


$$I_{m,n}^{c}(x,y)=\left|\mathcal{F}^{-1}\left\{\mathcal{F}\left[o(x,y)\,e^{j(xu_{m,n}+yv_{m,n})}\right]P(u,v)\right\}\right|^{2},\tag{1}$$

$$u_{m,n}=\frac{2\pi}{\lambda}\frac{x_{c}-x_{m,n}}{\sqrt{(x_{c}-x_{m,n})^{2}+(y_{c}-y_{m,n})^{2}+h^{2}}},\quad v_{m,n}=\frac{2\pi}{\lambda}\frac{y_{c}-y_{m,n}}{\sqrt{(x_{c}-x_{m,n})^{2}+(y_{c}-y_{m,n})^{2}+h^{2}}},\tag{2}$$

$$o_{m,n}^{e}(x,y)=\mathcal{F}^{-1}\left\{O_{0}(u-u_{m,n},v-v_{m,n})P_{0}(u,v)\right\}.\tag{3}$$

$$o_{0,m,n}^{u}(x,y)=\sqrt{I_{m,n}^{c}(x,y)}\,\frac{o_{m,n}^{e}(x,y)}{\left|o_{m,n}^{e}(x,y)\right|}.\tag{4}$$

$$O_{i+1}(u-u_{m,n},v-v_{m,n})=O_{i}(u-u_{m,n},v-v_{m,n})+\alpha\frac{P_{i}^{*}(u,v)}{\left|P_{i}(u,v)\right|_{\max}^{2}}\Delta O_{i,m,n},\quad P_{i+1}(u,v)=P_{i}(u,v)+\beta\frac{O_{i}^{*}(u-u_{m,n},v-v_{m,n})}{\left|O_{i}(u-u_{m,n},v-v_{m,n})\right|_{\max}^{2}}\Delta O_{i,m,n},\tag{5}$$

$$x_{m,n}=\left[m\cos(\theta)+n\sin(\theta)\right]d_{LED}+\Delta x,\quad y_{m,n}=\left[-m\sin(\theta)+n\cos(\theta)\right]d_{LED}+\Delta y,\tag{6}$$

$$I_{m,n,d}^{c}(x,y)=\left|\mathcal{F}^{-1}\left\{O(u-u_{m,n},v-v_{m,n})P(u,v)H(u,v,z_{d})\right\}\right|^{2},\tag{7}$$

$$H(u,v,z_{d})=\exp\!\left[jAz_{d}\sqrt{\left(\frac{2\pi}{\lambda}\right)^{2}-(u-u_{m,n})^{2}-(v-v_{m,n})^{2}}\right],\tag{8}$$

$$H(u,v,z_{d})\approx\exp\!\left[jAz_{d}\left(\frac{uu_{m,n}}{w_{m,n}}+\frac{vv_{m,n}}{w_{m,n}}\right)\right]\exp\!\left(jAz_{d}w_{m,n}\right)\exp\!\left(-jAz_{d}\frac{u^{2}+v^{2}}{2w_{m,n}}\right),\tag{9}$$

$$w_{m,n}=\sqrt{\left(\frac{2\pi}{\lambda}\right)^{2}-u_{m,n}^{2}-v_{m,n}^{2}}.\tag{10}$$

$$I_{m,n,d}^{c}(x,y)=I_{m,n}^{c}(x+\Delta x_{m,n},\,y+\Delta y_{m,n}),\quad \Delta x_{m,n}=Az_{d}\frac{u_{m,n}}{w_{m,n}},\quad \Delta y_{m,n}=Az_{d}\frac{v_{m,n}}{w_{m,n}},\tag{11}$$

$$\Delta x_{m,n}=\frac{\sum_{x}x\sum_{y}g_{m,n}(x,y)}{\sum_{x,y}g_{m,n}(x,y)}-\frac{\sum_{x}x\sum_{y}g_{0,0}(x,y)}{\sum_{x,y}g_{0,0}(x,y)},\quad \Delta y_{m,n}=\frac{\sum_{y}y\sum_{x}g_{m,n}(x,y)}{\sum_{x,y}g_{m,n}(x,y)}-\frac{\sum_{y}y\sum_{x}g_{0,0}(x,y)}{\sum_{x,y}g_{0,0}(x,y)},\tag{12}$$

$$E(\Delta x,\Delta y,\theta,h)=\sum_{m,n=-2}^{2}\left\{\left[\Delta x_{m,n}-\Delta x_{m,n,e}(\Delta x,\Delta y,\theta,h)\right]^{2}+\left[\Delta y_{m,n}-\Delta y_{m,n,e}(\Delta x,\Delta y,\theta,h)\right]^{2}\right\},\quad (\Delta x,\Delta y,\theta,h)_{u}=\arg\min\left[E(\Delta x,\Delta y,\theta,h)\right],\tag{13}$$
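The centroid-difference offset measure in the equations above amounts to a few lines of numpy. The sketch below is illustrative: the demo image is a synthetic Gaussian blob (an assumption, not the paper's data), shifted so the recovered offset is known in advance.

```python
import numpy as np

def centroid(g):
    """Intensity-weighted centroid (cx, cy) of a non-negative image g."""
    y, x = np.indices(g.shape)
    s = g.sum()
    return (x * g).sum() / s, (y * g).sum() / s

def centroid_offset(g_mn, g_00):
    """Offset (dx_mn, dy_mn), in pixels: centroid of the defocused image
    g_mn minus the centroid of the central-LED image g_00."""
    cx1, cy1 = centroid(g_mn)
    cx0, cy0 = centroid(g_00)
    return cx1 - cx0, cy1 - cy0

# Demo with a synthetic Gaussian blob displaced by (+4, -2) pixels
yy, xx = np.indices((64, 64))
blob = lambda cx, cy: np.exp(-((xx - cx)**2 + (yy - cy)**2) / 20.0)
dx, dy = centroid_offset(blob(36.0, 30.0), blob(32.0, 32.0))
```

For well-centered, low-noise images the centroid difference recovers the shift to far better than a pixel, which is why the paper can resolve the small defocus-induced offsets this way.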