Optica Publishing Group

LED-based temporal variant noise model for Fourier ptychographic microscopy

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a technique that reconstructs a high-resolution image from a set of low-resolution images captured under different illumination angles. It is susceptible to ambient noise, system noise, and weak drive currents when acquiring large-angle images, especially dark-field images. To address this noise problem effectively, we propose an adaptive denoising algorithm based on an LED-based temporal variant noise model. Taking images of blank slide samples as the reference value of the noise and analyzing its distribution, we establish a statistical model of temporal variant noise that describes the relationship between temporal noise and LED spatial location. Based on this model, Gaussian denoising parameters are selected to adaptively denoise images from different LED locations, from which high-resolution images can be reconstructed. Experimental results show that, compared with other methods, the proposed method effectively suppresses noise, recovers more image details, increases image contrast, and yields better visual quality. Better objective evaluation scores likewise confirm the advantages of the proposed algorithm.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM) is a technology that combines wide-field and high-resolution imaging [1,2] and has been widely applied in medical imaging [3,4], tomography [5-7], and pathology assessment [8-10]. When capturing dark-field images of the sample, the incidence angle of the LED light source is large and the amount of incoming light is low, so the captured images are susceptible to temporal variant noise and have a low signal-to-noise ratio (SNR) [11,12]. The dark-field images correspond to the high-frequency information of the high-resolution image; when this high-frequency information is corrupted by noise, the reconstructed image suffers from blurred details, lower sharpness, and reduced contrast.

To solve the reconstruction errors caused by the low SNR of the dark-field images, some researchers have optimized the FPM system so that it can capture images of higher quality. Ou et al. [13] proposed using an objective lens with a larger numerical aperture (NA) to enhance the resolution of the captured images; this increases the resolution but reduces the field of view (FOV). Sun et al. proposed the REFPM platform [14] with an oil-immersed condenser and a dense LED array, which captured images of up to 98.5 megapixels. However, due to the sampling requirements [15], a low-magnification objective is not applicable and the FOV loss of this method is still significant. Thus, to optimize the resolution of the acquired images while preserving the FOV, optimizing the light source is an excellent choice. Bian et al. [11] added an optical correction parameter to the FP reconstruction process to correct the luminance of the dark-field images, but this method amplified the noise while improving the information content. In addition, some methods arrange the light sources so that the illumination intensity at each angle becomes consistent. Pan et al. used hemispherical digital condensers [16-18] to increase the amount of light entering the dark-field images, which effectively solves the problem of inconsistent illumination intensity across angles. However, the LED sub-sources of this system are in a fixed arrangement, the illumination lacks flexibility, and it is not easy to match objectives of different numerical apertures. Since noise is difficult to eliminate by optimizing the system or increasing the exposure, research on FPM denoising algorithms is necessary.

Moreover, some researchers focus on the design of reconstruction optimization algorithms. Traditional FPM uses the Alternating Projection (AP) [19] spectrum-processing algorithm, which alternates between applying constraints in the spatial and Fourier domains and stitches together the LR sub-spectra. This algorithm is susceptible to dark-field images with low SNR, resulting in degraded image quality. To reduce the effect of noise, Bian et al. proposed truncated Poisson Wirtinger Fourier ptychographic reconstruction (TPWFP) [20] to deal with measurement noise and pupil position errors. Zhou et al. then proposed the FPM misalignment correction (mcFPM) [21] algorithm, which reduces the effect of noise by applying aberration correction to images at offset sampling points and is robust to misalignment in each region. In addition, after acquiring images with a multiplexing strategy, Tian et al. [22] implemented a background estimation and regularization procedure during image processing to improve noise performance, robustness, and reconstruction efficiency. The alternating direction method of multipliers (ADMM) [23], a global optimization algorithm, decomposes the reconstruction into multiple sub-problems and achieves more stable and robust phase recovery under noisy conditions. Recently, Bianco et al. proposed a deep-learning-based phase error correction FPM algorithm [24,25], which effectively avoids artefacts during acquisition and yields results with a higher signal-to-noise ratio. Although the above methods provide a variety of options for FP reconstruction, they optimize only the reconstruction algorithm itself and do not analyze the noise of the acquired images to reduce the noise entering the reconstruction process.

We aim to optimize the reconstruction results by optimizing the quality of the captured images, even though many other systematic biases in the FPM system also affect reconstruction. We establish a statistical model of temporal variant noise that describes the relationship between temporal noise and the LEDs' spatial locations. Inspired by the embedded pupil function recovery (EPRY) [26] algorithm, the proposed denoising algorithm builds on our noise model, selecting Gaussian parameters adaptively for different LED locations.

The main contributions are listed as follows:

  • 1) The absolute deviation and standard deviation across multiple sets of captured images are used to estimate spatial and temporal noise, predict the noise in sampled images, and distinguish image information from noise. This makes it possible to remove noise while preserving image information.
  • 2) By analyzing the distributions of the different noise sources, a statistical model of temporal variant noise is developed to describe the relationship between temporal variant noise and the spatial location of the LEDs. Denoising each noise level separately retains more image detail.
  • 3) During image preprocessing, the parameters of the Gaussian denoising algorithm are selected from the noise model to adaptively remove noise at different locations, which improves image contrast and overall image quality.

To demonstrate its effectiveness, we evaluate the proposed method against the above algorithms on both simulated and real data. Both simulations and real experiments show that the proposed method outperforms other state-of-the-art algorithms in imaging scenarios with strong noise: it effectively suppresses noise, improves image contrast, recovers more image details, and yields better visual quality.

2. Method

2.1 FPM

The FPM imaging system is shown in Fig. 1; the illumination module of the microscope is replaced with an LED array whose elements illuminate the sample one at a time. We light each LED in turn, capture the intensity of the LR image, and reconstruct the target's amplitude and phase.

Fig. 1. Schematic of a traditional FPM experimental platform.

During image acquisition, individual LEDs in the matrix are switched on sequentially to illuminate the 2D thin samples from different angles. The correspondence between the spatial and frequency domain of different LEDs is:

$$\begin{array}{*{20}{c}} {{u_i} = \frac{{{x_i} - {x_0}}}{{\lambda \sqrt {{{({{x_i} - {x_0}} )}^2} + {{({{y_i} - {y_0}} )}^2} + {h^2}} }}}\\ {{v_i} = \frac{{{y_i} - {y_0}}}{{\lambda \sqrt {{{({{x_i} - {x_0}} )}^2} + {{({{y_i} - {y_0}} )}^2} + {h^2}} }}} \end{array}$$

As shown in Fig. 1, $({{u_i},{v_i}} )$ are the Fourier-domain coordinates corresponding to the LED at position $({{x_i},{y_i}} )$ being lit. The LED directly below the sample is the centre LED, ${x_0}$ and ${y_0}$ are its coordinates, and $h$ is the distance from the LED matrix to the sample. When an LED in the matrix is lit for imaging, the spectrum of the outgoing light field is $T({u - {u_i},v - {v_i}} )$.
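The mapping of Eq. (1) is straightforward to compute. A minimal sketch follows, assuming the geometry used later in the paper (4 mm LED pitch, LED plane 100 mm below the sample in simulation, λ = 635 nm); the function name is ours:

```python
import numpy as np

def led_freq_coords(xi, yi, x0=0.0, y0=0.0, h=100.0, wavelength=635e-6):
    """Spatial-frequency coordinates (u_i, v_i) of Eq. (1) for the LED
    at (xi, yi). Lengths in mm, wavelength in mm, so u, v are cycles/mm."""
    r = np.sqrt((xi - x0) ** 2 + (yi - y0) ** 2 + h ** 2)
    return (xi - x0) / (wavelength * r), (yi - y0) / (wavelength * r)

# the centre LED illuminates on-axis: zero frequency shift
u0, v0 = led_freq_coords(0.0, 0.0)
# a neighbouring LED (4 mm pitch) tilts the illumination along u
u1, v1 = led_freq_coords(4.0, 0.0)
```

The frequency shift grows with the LED's lateral offset, which is exactly the behaviour the bright-field/dark-field classification below relies on.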

After the outgoing light passes through the entrance pupil of the objective lens, the spectral distribution on the back focal plane of the objective is $T({u - {u_i},v - {v_i}} )CTF({u,v} ).$ Here CTF is the coherent transfer function (i.e., the pupil function of the objective lens), a circle whose value is 1 inside and 0 outside, with cutoff frequency $\frac{{NA}}{\lambda }$. The light carrying the sample information is imaged on the camera sensor through the Fourier transform performed by the tube lens. Since the camera records only the intensity of the light field and loses the phase information, the captured image can be expressed as:

$$I_i\left( {x,y} \right) = {\left| {{F^{ - 1}}\left\{ {CTF\left( {u,v} \right)T\left( {u - u_i,v - v_i} \right)} \right\}} \right|^2}$$

Noise is inevitably introduced in the forward imaging process of FPM; in particular, for images captured by LEDs far from the optical axis, the intensity ${I_i}({x,y} )$ is small and easily corrupted by noise. The proposed method therefore analyzes the noise in the FPM forward imaging process.
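The forward model of Eq. (2) can be simulated at the pixel level. The sketch below crops a circular sub-aperture of the shifted sample spectrum and returns the low-resolution intensity; the helper names and the integer-pixel spectrum shift are our simplifications:

```python
import numpy as np

def fpm_capture(obj, ctf, ui_px, vi_px):
    """Simulated low-res intensity for one LED (Eq. (2)).

    obj          : complex HR sample field
    ctf          : binary pupil mask (low-res patch size)
    ui_px, vi_px : illumination-induced spectrum shift, in pixels
    """
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    # centre the illuminated sub-aperture of T(u - u_i, v - v_i)
    spectrum = np.roll(spectrum, (-vi_px, -ui_px), axis=(0, 1))
    h, w = ctf.shape
    H, W = spectrum.shape
    patch = spectrum[(H - h) // 2:(H + h) // 2, (W - w) // 2:(W + w) // 2] * ctf
    return np.abs(np.fft.ifft2(np.fft.ifftshift(patch))) ** 2
```

With a large shift the DC term leaves the pupil entirely, which is why a featureless sample yields an almost-dark capture at steep illumination angles: the dark-field signal comes only from high-frequency sample structure.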

2.2 Noise analysis

During image acquisition, the images corresponding to LEDs in the edge area are easily affected by noise, resulting in a low signal-to-noise ratio. To distinguish which images contain less information, the concepts of bright and dark field are introduced. Following the literature [27], bright-field and dark-field imaging can be distinguished by comparing the spatial frequency $({{u_i},{v_i}} )$ with the cutoff frequency of the objective lens $\frac{{N{A_{obj}}}}{\lambda }$, where $N{A_{obj}}$ is the numerical aperture of the objective and $\lambda $ is the wavelength of the light source:

$$\left\{ {\begin{array}{{c}} {\sqrt {{u_i}^2 + {v_i}^2} \le \frac{{N{A_{obj}}}}{\lambda }\; ,\; \; \textrm{bright}\; \textrm{field}}\\ {\sqrt {{u_i}^2 + {v_i}^2} > \frac{{N{A_{obj}}}}{\lambda }\; ,\; \; \textrm{dark}\; \textrm{field}} \end{array}} \right.$$

This equation implies that the subaperture spectrum of bright-field illumination contains zero-frequency components, while dark-field illumination shifts the low-frequency components out of the captured subaperture spectrum. Bright-field images record low-frequency information about the sample under transmitted illumination, while dark-field images record high-resolution information under large-angle illumination. Thus, the raw acquisition images present sample information at different spatial frequencies.
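Eq. (3) is a one-line test per LED. A sketch, with the paper's NA = 0.1 and λ = 635 nm (expressed in mm) as default values:

```python
import numpy as np

def is_bright_field(u, v, na_obj=0.1, wavelength=635e-6):
    """Eq. (3): bright field iff the illumination frequency lies inside
    the objective passband NA_obj / lambda (here ~157 cycles/mm)."""
    return np.hypot(u, v) <= na_obj / wavelength
```

For example, the centre LED (`u = v = 0`) is classified as bright field, while a steeply inclined LED with frequency shift well beyond the passband is classified as dark field.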

Both spatial and temporal noise are introduced when capturing bright-field and dark-field images, and their distributions differ between the two cases. To analyze the noise of the captured images in these different situations, we follow the literature [28] and analyze the spatial and temporal noise of bright-field and dark-field acquisitions of a blank slide.

The spatial noise in the captured images is caused by fluctuations in the light field due to environmental factors and by errors in the experimental equipment. We therefore illuminated blank glass slides under non-darkroom conditions using a 15 × 15 LED matrix and captured ten sets of images. Each set contained 225 images, with an exposure time of 500 ms per image. To minimize the effect of natural light variations on the light field, the acquisition time was set to 225 seconds per group, with a 30-second interval between groups. We define the intensity of the ${i^{th}}$ image as ${I_i}({x,y} )$. The spatial noise $S{N_i}$ is quantified by the standard deviation of the captured image:

$$S{N_i} = \sqrt {\frac{{\mathop \sum \nolimits_x \mathop \sum \nolimits_y {{({{I_i}({x,y} )- {{\bar I}_i}} )}^2}}}{{H \times L}}} $$
where ${\bar I_i} = \frac{1}{{H \times L}}\mathop \sum \nolimits_x \mathop \sum \nolimits_y {I_i}({x,y} )$ is the mean intensity of the ${i^{th}}$ image, and H and L are its length and width, respectively.

Temporal noise, on the other hand, is noise due to the optical system itself, which is affected by current fluctuations during acquisition. We therefore repeated, ten times, the process of capturing 225 original low-resolution images of the blank slide illuminated by the LEDs at different angles. We define the intensity of the ${i^{th}}$ image in the ${n^{th}}$ group as ${I_{n - i}}({x,y} )$, and take the average of the ten groups as the ${i^{th}}$ standard image ${I_{std - i}}({x,y} )$:

$${I_{std - i}}({x,y} )= \frac{{\mathop \sum \nolimits_{n = 1}^{10} \; {I_{n - i}}({x,y} )}}{{10}}$$

We then calculate the mean absolute deviation of each image from its standard image as the temporal noise $T{N_i}$:

$$T{N_i} = \frac{{\mathop \sum \nolimits_{n = 1}^{10} \mathop \sum \nolimits_x \mathop \sum \nolimits_y |{\; {I_{n - i}}({x,y} )- {I_{std - i}}({x,y} )} |}}{{10}}$$
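Eqs. (5) and (6) can be evaluated directly on a stack of repeated captures of the same LED. A sketch (the array layout is our choice):

```python
import numpy as np

def temporal_noise(stack):
    """TN_i of Eq. (6) from repeated captures of the same LED.

    stack : (n_groups, H, W) array holding the repeated images I_{n-i}
            (ten groups in the paper's experiment).
    """
    std_img = stack.mean(axis=0)                    # Eq. (5): standard image
    abs_dev = np.abs(stack - std_img).sum(axis=(1, 2))
    return abs_dev.mean()                           # Eq. (6): average over groups
```

For instance, two 2 × 2 captures with values 0 and 2 everywhere have a standard image of 1, a summed absolute deviation of 4 per group, and therefore a temporal noise of 4.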

After calculating the noise, we selected one typical bright-field position and one typical dark-field position for noise analysis; the results are shown in Fig. 2.

Fig. 2. Noise analysis of FPM acquired images.

From Fig. 2 it can be seen that bright-field images contain more spatial noise than dark-field images. Spatial noise is positively correlated with the intensity of the light field, so replacing the blank slide with a real sample changes the light-field distribution; the spatial noise distribution changes with it, making its pattern difficult to analyze. By contrast, the temporal noise is randomly distributed in both bright-field and dark-field images, and its probability distribution function follows a Gaussian distribution, because temporal noise depends on the system and is little affected by the light field.

To observe the distribution pattern of different noise levels in the bright and dark fields more clearly and globally, we take the mean intensity of the spatial noise map ${I_{i - SN}}$ and the temporal noise map ${I_{i - TN}}$ as the noise measure. Figure 3 shows the spatial and temporal noise means of the captured images versus the spatial position of the LED.

Fig. 3. Relationship between the noise mean and the spatial position of the LED: (a) spatial noise distribution; (b) temporal noise distribution.

Figure 3 shows that the noise is high in the central region and low at the edges. To distinguish regions with different noise behaviour, we further divide the LEDs according to whether their sub-apertures still contain bright-field components:

$$\left\{ {\begin{array}{{c}} {\sqrt {{u_i}^2 + {v_i}^2} \le \frac{{2N{A_{obj}}}}{\lambda }\; ,\; \; \textrm{existing bright field}}\\ {\sqrt {{u_i}^2 + {v_i}^2} > \frac{{2N{A_{obj}}}}{\lambda }\; ,\; \; \textrm{no bright field}} \end{array}} \right.$$

When the distance of the sub-aperture centre from the coordinate origin is less than $\frac{{2N{A_{obj}}}}{\lambda }$, low-frequency information in the bright-field range is captured. When it is greater than $\frac{{2N{A_{obj}}}}{\lambda }$, only the high-frequency information present in the dark field is captured.

Figure 3 likewise shows that bright-field images contain more spatial noise than dark-field images. The spatial noise varies markedly and is sensitive to changes in the light field: replacing the blank slide with real samples changes the light-field distribution and, consequently, the spatial noise distribution, making its pattern difficult to analyze. The temporal noise, by contrast, is randomly distributed and little affected by the light field in both dark-field and bright-field images. After collecting a large amount of temporal noise statistics, the two typical examples in Fig. 2 show that this random noise, which depends on the system itself, follows a Gaussian distribution. Based on this property, we model the temporal variant noise and generate noise templates for denoising.

2.3 Noise model

For bright-field images the SNR is high, so the impact of temporal variant noise is relatively small. For dark-field images, however, the SNR is low and the impact of dark-field noise on the final imaging is significant. To quantitatively measure the effect of temporal variant noise on the captured images, we therefore compute a noise-to-information ratio for each image:

$${N_i} = \frac{{\mathop \sum \nolimits_x \mathop \sum \nolimits_y T{N_i}({x,y} )}}{{\mathop \sum \nolimits_x \mathop \sum \nolimits_y \; {I_i}({x,y} )}}$$

In the above equation, ${N_i}$ denotes the proportion of noise in the image lit by the ${i^{th}}$ LED. The relationship between ${N_i}$ and the LED spatial position is shown in Fig. 4(a): dark-field images have a higher noise-to-information ratio and bright-field images a lower one. We therefore assume that images from LEDs farther from the centre have a higher noise-to-information ratio, i.e., the noise is positively correlated with the distance to the centre. Accordingly, we plot ${N_i}$ against the distance from the centre and fit the resulting points with several candidate functions; a fourth-degree polynomial was finally chosen for the noise model. The fitted proportion of temporal variant noise ${N_i}$ obtained from this model is shown in Fig. 4(b).
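The fit described above reduces to an ordinary least-squares polynomial fit. A sketch on synthetic data shaped like Fig. 4(a) (the data here are ours, not the paper's measurements):

```python
import numpy as np

def fit_noise_model(distances, ratios, degree=4):
    """Fit the noise-to-information ratio N_i versus LED distance from
    the centre with a fourth-degree polynomial, as in Section 2.3."""
    return np.poly1d(np.polyfit(distances, ratios, degree))

# synthetic stand-in for the measured ratios: noise grows with distance
d = np.linspace(0.0, 40.0, 50)          # LED distance from centre (mm)
n_meas = 1e-6 * d ** 4 + 0.01
model = fit_noise_model(d, n_meas)
n_fit = model(d)                        # smoothed N_i used for denoising
```

The fitted polynomial, rather than the raw noisy measurements, then supplies the per-LED noise ratio used to set the denoising strength.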

Fig. 4. Relationship between the temporal noise ${N_i}$ distribution and the spatial position of the LED: (a) measured temporal noise ${N_i}$ distribution; (b) fitted temporal noise ${N_i}$ distribution.

At this point, we have established a statistical model of temporal variant noise describing its relationship to LED spatial location: the farther the sampled LED is from the centre LED, the higher the noise ratio of the collected image.

2.4 Gaussian denoising

Given that the temporal variant noise follows a Gaussian distribution and that dark-field images have a high noise-to-information ratio, we perform Gaussian denoising on the dark-field images. Because the noise-to-information ratio differs from one dark-field image to another, the denoising parameters must be chosen according to the noise concentration of each image. Gaussian filter denoising convolves the entire image with a filter template: the value of each pixel is a weighted average of its own value and those of its neighbours, with weights given by the two-dimensional Gaussian template:

$$\textrm{G}({\textrm{x},\textrm{y}} )= \frac{1}{{2\mathrm{\pi }{\mathrm{\sigma }^2}}}{\textrm{e}^{ - \frac{{{\textrm{x}^2} + {\textrm{y}^2}}}{{2{\mathrm{\sigma }^2}}}}}$$

In this formula, $({\textrm{x},\textrm{y}} )$ is the offset from the template centre and $\mathrm{\sigma }$ is the standard deviation of the Gaussian distribution. If $\mathrm{\sigma }$ is small, the centre coefficient of the generated template is large and the surrounding coefficients are small, giving little smoothing. Conversely, when $\mathrm{\sigma }$ is large, the template coefficients differ little, resembling a mean template, and the image is smoothed strongly. Therefore, when an image contains a high proportion of noise, a correspondingly large Gaussian parameter $\mathrm{\sigma }$ should be used. We assume that the temporal noise-to-information ratio ${N_i}$ is proportional to $\mathrm{\sigma }$:

$${\mathrm{\sigma }_i} = k{N_i}$$

The experimental analysis in Section 4 shows that when $k$ is taken as 0.5, the Gaussian template denoises images with a high proportion of Gaussian noise well. Substituting Eq. (10) into Eq. (9), the denoised image ${I_{i - denoise}}({\textrm{x},\textrm{y}} )$ is obtained by convolving the captured image with the resulting template:

$${I_{i - denoise}}({\textrm{x},\textrm{y}} )= {I_i}({\textrm{x},\textrm{y}} )\ast \frac{1}{{2\mathrm{\pi }{{({0.5{N_i}} )}^2}}}{e^{ - \frac{{{\textrm{x}^2} + {\textrm{y}^2}}}{{2{{({0.5{N_i}} )}^2}}}}}$$
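Putting Eqs. (9)-(11) together: build the Gaussian template with ${\sigma _i} = 0.5{N_i}$ and convolve it with the capture. A NumPy-only sketch; the separable convolution with edge padding is our implementation choice, not prescribed by the paper:

```python
import numpy as np

def gaussian_denoise(img, sigma):
    """Convolve img with a normalised Gaussian template (Eq. (9))."""
    if sigma <= 0:
        return img.copy()
    r = max(1, int(np.ceil(3 * sigma)))        # truncate template at 3 sigma
    ax = np.arange(-r, r + 1)
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g /= g.sum()                               # weights sum to 1
    padded = np.pad(img, r, mode="edge")
    # separable convolution: filter rows, then columns
    rows = np.apply_along_axis(np.convolve, 1, padded, g, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, g, mode="valid")

def adaptive_denoise(images, noise_ratios, k=0.5):
    """sigma_i = k * N_i for each capture (Eq. (10)); k = 0.5 per Section 4."""
    return [gaussian_denoise(im, k * n) for im, n in zip(images, noise_ratios)]
```

Bright-field images with a near-zero noise ratio are effectively left untouched, while the noisiest dark-field captures receive the widest template.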

2.5 Image reconstruction

After obtaining the denoised images ${I_{i - denoise}}({\textrm{x},\textrm{y}} )$, they can be stitched and fused in the frequency domain to obtain high-frequency information beyond the passband of the original objective lens, thus generating a high-resolution sample image beyond the objective's diffraction limit. However, since the camera does not capture the phase of the light waves during acquisition, an optimization approach is needed to find the optimal high-resolution complex amplitude of the sample, iteratively retrieving the lost phase in the process. According to Eq. (2), the optimization objective can be written as:

$$\mathop {\min }\limits_{T({u,v} )} \mathop \sum \limits_i \mathop \sum \limits_{({x,y} )} {\left|{\sqrt {{I_{i - denoise}}({\textrm{x},\textrm{y}} )} - |{{F^{ - 1}}\{{CTF({u,v} )T({u - {u_i},v - {v_i}} )} \}} |} \right|^2}$$
where the ${i^{th}}$ denoised low-resolution image ${I_{i - denoise}}({\textrm{x},\textrm{y}} )$ serves as the amplitude constraint in the spatial domain, and the coherent transfer function $CTF$ serves as the support constraint in the frequency domain. The sample spectrum T is solved to minimize the difference between the acquired and estimated images. In a real system, however, the pupil function does not match the ideal $CTF$, so we use the pupil correction algorithm of EPRY [26], replacing the $CTF$ with the corrected pupil function ${P_n}$:
$$\left\{ \begin{matrix}{\emptyset _n = P_n\left( {u,v} \right)T_n\left( {u - u_i,v - v_i} \right)} \\ {P_{n + 1}\left( {u,v} \right) = P_n\left( {u,v} \right) + {{T_n^{\ast}\left( {u - u_i,v - v_i} \right)} \over {\left| {T_n\left( {u - u_i,v - v_i} \right)} \right|_{max}^2 }}\left[ {\emptyset _n^{\prime} - \emptyset _n} \right]} \end{matrix}\right.$$

In the ${n^{th}}$ loop, the exit wave at the pupil plane is ${\emptyset _n}$, and $\emptyset _n^{\prime}$ is the exit wave after the amplitude constraint is applied. This update has been shown experimentally to recover the unknown errors of the system effectively. The overall processing flow of the FPM denoising optimization algorithm proposed in this paper is shown in Fig. 5.
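A single pupil refinement step of Eq. (13) can be sketched as follows; the conjugate and the max-normalisation follow the EPRY formulation [26], and the unit step size is our assumption:

```python
import numpy as np

def epry_pupil_update(pupil, sub_spec, exit_prev, exit_new, alpha=1.0):
    """One EPRY-style pupil refinement step (Eq. (13)).

    sub_spec  : shifted sample sub-spectrum T_n(u - u_i, v - v_i)
    exit_prev : pupil-plane exit wave before the amplitude constraint
    exit_new  : exit wave after the measured amplitude is imposed
    """
    step = np.conj(sub_spec) / (np.abs(sub_spec) ** 2).max()
    return pupil + alpha * step * (exit_new - exit_prev)
```

When the amplitude constraint leaves the exit wave unchanged (i.e., the estimate already matches the measurement), the pupil is left untouched, which is the fixed-point behaviour the iteration converges toward.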

Fig. 5. Flowchart of the algorithm in this paper.

3. Experiment

3.1 Simulation experiments

3.1.1 Single image analysis

To verify the effectiveness of the proposed method, the AP [19], EPRY [26], TPWFP [20], and ADMM [23] algorithms were used to process the same images under the same parameters. We simulate the FPM setup with the following hardware parameters: the objective lens NA is 0.1 and the corresponding pupil function is an ideal binary function (one inside the NA circle and zero outside); the height of the LED plane above the sample plane is 100 mm; the distance between neighbouring LEDs is 4 mm; and the incident wavelength is 635 nm with a 15 × 15 LED matrix source. After simulating a 15 × 15 FPM raw data set, Gaussian noise with the same standard deviation is added to each image [20]. Since the proposed algorithm requires prior noise information, the same Gaussian noise is first added to a blank image to simulate sampling a blank slide, and the noise parameters of the FPM captured images are then calculated with the proposed method. The simulated images are then denoised according to the calculated parameters. To reduce random error, each of the following simulations is repeated twenty times and the results averaged.

The “Monkeyface” and “Westconcordorthophoto” images serve as the initial input HR intensity and phase, respectively. Figures 6 and 7 show the reconstruction results after the LR images are disturbed by Gaussian noise with a standard deviation of 0.006. Figures 6(a) and 7(a) show the HR input intensity and phase images, which simulate the ground truth of the complex sample. The AP algorithm is sensitive to noise: its results, shown in Figs. 6(b) and 7(b), contain a large amount of noise, the details cannot be observed, and the visual quality is poor. The TPWFP algorithm suppresses part of the noise, as shown in Figs. 6(d) and 7(d); its visual quality is good, but the reconstructed images still retain considerable noise. The ADMM algorithm effectively suppresses noise of different levels, as shown in Figs. 6(e) and 7(e), but sacrifices detail information, and the result is blurred. The EPRY results in Figs. 6(c) and 7(c) correct the unknown systematic distortions, but the image contrast is low. The results in Figs. 6(f) and 7(f) show that the proposed algorithm removes the noise while preserving image details, achieving the best visual quality.

Fig. 6. Comparison of simulated intensity reconstruction results of different algorithms: (a) input HR intensity; (b) AP [19]; (c) EPRY [26]; (d) TPWFP [20]; (e) ADMM [23]; (f) proposed method.

Fig. 7. Comparison of simulated phase reconstruction results of different algorithms: (a) input HR phase; (b) AP [19]; (c) EPRY [26]; (d) TPWFP [20]; (e) ADMM [23]; (f) proposed method.

3.1.2 Analysis of the impact of noise at different levels

To further analyze the imaging results of the algorithms under different noise scenarios, we added Gaussian noise with different standard deviations to the images and introduced the structural similarity index (SSIM) [29] to assess image similarity in the spatial domain and the relative error (RE) [30] to assess frequency-domain deviation. Figure 8(a) shows the SSIM of the resulting images versus the Gaussian noise standard deviation. The image quality of all reconstruction algorithms decreases as the noise level increases, but the AP and TPWFP algorithms are affected the most: their SSIM values drop sharply, indicating a poor ability to handle low-SNR images. EPRY, ADMM, and our proposed algorithm remain stable and are less affected by noise, indicating strong noise suppression. However, the black line (EPRY) and the pink line (ADMM) lie overall below the results of our algorithm, showing that our algorithm achieves reconstructions closest to the original image while removing the noise. Figure 8(b) shows the RE of the reconstructed phase versus the Gaussian noise standard deviation. At low noise, the AP and TPWFP algorithms achieve lower RE values, i.e., smaller phase deviation; however, being susceptible to heavy noise, their RE values grow quickly as the noise increases. EPRY, ADMM, and our algorithm remain stable, with our algorithm maintaining the lowest RE and the least phase deviation. This experiment shows that our algorithm is robust to different levels of Gaussian noise and maintains high reconstruction quality.
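The two evaluation metrics can be computed as below. `relative_error` is our reading of the frequency-domain RE [30] (Frobenius norm of the spectral difference, normalised by the reference spectrum), and `global_ssim` is a simplified single-window SSIM; library implementations such as scikit-image use local sliding windows instead:

```python
import numpy as np

def relative_error(ref, est):
    """Frequency-domain relative error between reference and estimate."""
    F_ref, F_est = np.fft.fft2(ref), np.fft.fft2(est)
    return np.linalg.norm(F_est - F_ref) / np.linalg.norm(F_ref)

def global_ssim(ref, est, dynamic_range=1.0):
    """Single-window SSIM with the usual stabilising constants c1, c2."""
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = ref.mean(), est.mean()
    vx, vy = ref.var(), est.var()
    cov = ((ref - mx) * (est - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give RE = 0 and SSIM = 1; added noise drives RE up and SSIM down, which is the behaviour plotted in Fig. 8.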

Fig. 8. Objective evaluation indicators of the algorithms under different levels of temporal variant noise: (a) SSIM; (b) RE.

3.2 Real image experiments

To validate the imaging effectiveness of our proposed adaptive denoising algorithm, we imaged a USAF resolution target, biological samples of fly wings, and sliced samples of human colon, verifying the feasibility of the proposed denoising reconstruction method for high-SNR image reconstruction and comparing it with state-of-the-art FPM denoising methods. Four algorithms are compared in this experiment: EPRY [26], ADMM [23], TPWFP [20], and mcFPM [21]. Because the captured images are noisy, we tuned the experimental parameters of each method as far as possible to obtain its best results on our equipment. The iteration number of TPWFP is 80, that of mcFPM is 5, and that of the proposed algorithm and EPRY is 8; the Gaussian denoising parameter of the proposed algorithm is $0.5{N_i}$.

In practice, all experiments were performed on a platform equipped with a 4×/0.1 NA objective lens and an E3ISPM20000KPA camera with a pixel size of 2.74 µm, a central illumination wavelength of 635 nm, and an LED array with 4 mm spacing located 113 mm below the sample. The central 15 × 15 LED unit illuminated the sample sequentially, and the spectrum was filled using the 225 LR images. Since the analysis showed that our algorithm performs better under heavy noise, we did not acquire the images in a darkroom: the laboratory lights were kept on, and no enhancement was applied to the acquired images.

3.2.1 Analysis of results for USAF-1951

The USAF-1951 target is a commonly used sample for reconstruction quality analysis, as it facilitates assessing how well an algorithm recovers fine details; its low-resolution image is shown in Fig. 9(a). Figure 9(b) shows the reconstruction of the EPRY algorithm, which corrects unknown systematic errors but cannot avoid the effect of noise; in the zoomed-in local area, the details show poor contrast. Figure 9(c) shows that mcFPM reduces the effect of noise by correcting the LED sampling positions, but it cannot correctly separate noise from image information when reconstructing a low signal-to-noise image, so the noise is reconstructed as well and the background is blurred. The TPWFP algorithm shown in Fig. 9(d) can remove noise, but it is weak in compensating for the inherent errors of the system and insufficient for reconstructing low-quality images acquired under non-darkroom conditions. Figure 9(e) shows that the ADMM algorithm removes the noise and corrects the system error with a better visual effect, but the local zoomed-in area reveals that it blurs the details of the reconstructed image. Figure 9(f) shows that our algorithm corrects the systematic error, suppresses the imaging noise, and retains more image details. To further quantify the performance of the different methods on image details, we zoom in on the local regions of the five algorithms, mark the locations of interest, and plot their normalized phase contour curves in Fig. 10. The blue curve of the TPWFP algorithm, which fails to avoid the systematic error, fluctuates little at the details and fails to recover the image detail features.
The green curve of the mcFPM algorithm fluctuates strongly at the left and right ends, indicating clear contrast between information and background, but its fluctuation at the details is small, so the detail features are not preserved. The orange and pink lines of the EPRY and ADMM algorithms fluctuate visibly and recover the image details, but their fluctuations are smaller than those of the red curve of our algorithm. This shows that our algorithm suppresses noise while retaining more detail information and high image contrast.

Fig. 9. USAF-1951 resolution target experiment (a) Raw image; (b) EPRY [26]; (c) mcFPM [21]; (d) TPWFP [20]; (e) ADMM [23]; (f) Proposed method.

Fig. 10. Pixel normalized intensity curve.

As can be seen from Fig. 9, the reconstruction results of the mcFPM, TPWFP and ADMM algorithms have low contrast and have reached the limit of their capability, while EPRY and the proposed method give better reconstructions. To further compare EPRY with the proposed method, we add a comparison experiment on Group 8 of the USAF resolution chart, shown in Fig. 11. Figure 11(a) is the raw image and Fig. 11(b) is a zoom-in on its smallest features. Figures 11(c) and 11(e) are the amplitude and phase reconstructions of EPRY, and Figs. 11(d) and 11(f) are those of the proposed method. After zooming in on Group 8, Element 1, we can observe that the proposed method recovers the detail information of the image and increases its contrast, performing better than the EPRY algorithm. Meanwhile, the phase images in Figs. 11(e) and 11(f) show that the proposed algorithm suppresses the ambient noise in the blank area and recovers the result more accurately.

Fig. 11. USAF-1951 resolution target experiment (a) Raw image; (b) A zoom-in on the smallest features; (c) Amplitude reconstructions from EPRY; (d) Amplitude reconstructions from proposed method; (e) Phase reconstructions from EPRY; (f) Phase reconstructions from proposed method.

3.2.2 Analysis of results for biological sample

We performed actual FPM recovery on a 500 × 500 pixel fly-wing thin-slice sample to verify the feasibility of the method. Figure 12(a) shows the LR image taken at the center field of view. Figure 12(b) and Fig. 13(a) show the amplitude and phase restorations of the EPRY algorithm, which corrects some of the systematic errors but cannot avoid the noise that blurs the tiny burrs on the fly wing. Figure 12(c) and Fig. 13(b) show the restorations of the mcFPM algorithm, which reduces the effect of noise by correcting the phase error but cannot distinguish noise from information, so the noise mixes with the background and prevents the recovery of more effective information. Figure 12(d) and Fig. 13(c) show the restorations of the TPWFP algorithm, which suppresses some weaker noise but neither suppresses the strong background noise nor avoids the systematic error, so its reconstruction is poor. Figure 12(e) and Fig. 13(d) show the restorations of the ADMM algorithm, which suppresses noise and improves the quality of the low-resolution images to obtain a better visual effect. However, inspecting the red square area reveals that Fig. 12(f) has higher contrast and better detail restoration. In this experiment the phase image of ADMM is good, but it also retains more optical noise; the phase reconstruction of our algorithm, Fig. 13(e), retains more detailed information in the presence of strong noise. Our proposed algorithm obtains the best visual effect.

Fig. 12. Comparison of amplitude reconstruction results of fly’s wing biospecimens (a) Raw image; (b) EPRY [26]; (c) mcFPM [21]; (d) TPWFP [20]; (e) ADMM [23]; (f) Proposed method.

Fig. 13. Comparison of phase reconstruction results of fly’s wing biospecimens (a) EPRY [26]; (b) mcFPM [21]; (c) TPWFP [20]; (d) ADMM [23]; (e) Proposed method.

As shown in Fig. 14, we also performed real image restoration on 500 × 500 pixel transverse cells of the human stomach wall using the same scheme, to verify the feasibility of the method on real cells. Figure 14(a) shows the LR image taken at the center FOV. Figure 14(c) and Fig. 15(b) show the amplitude and phase restorations of mcFPM, and Fig. 14(d) and Fig. 15(c) those of TPWFP. Both algorithms recover part of the image information, but neither can effectively suppress the strong background noise, and noise effects are clearly visible in their results. Figure 14(e) and Fig. 15(d) show the restorations of ADMM; this algorithm effectively suppresses the noise and recovers the image, but inspecting the right side of the image shows that it blurs the detail information of the denser regions and fails to preserve the details. Figure 14(b) and Fig. 15(a) show the restorations of EPRY, which retains more detail information. Figure 14(f) and Fig. 15(e) show the restorations of our proposed algorithm, which achieves higher contrast while preserving the detail information, making the reconstruction clearer and easier to observe.

Fig. 14. Comparison of reconstruction results of human gastric wall biospecimens (a) Raw image; (b) EPRY [26]; (c) mcFPM [21]; (d) TPWFP [20]; (e) ADMM [23]; (f) Proposed method.

Fig. 15. Comparison of reconstruction results of human gastric wall biospecimens (a) EPRY [26]; (b) mcFPM [21]; (c) TPWFP [20]; (d) ADMM [23]; (e) Proposed method.

3.2.3 Repeatability experiment

To further validate the effectiveness of our algorithm, we captured thirty-two sets of FPM low-resolution images at different locations on twelve different slice samples. The images were reconstructed using EPRY, mcFPM, TPWFP, ADMM and our algorithm, and the results were analyzed objectively using the Brenner and Tenengrad sharpness metrics. The results are shown in Fig. 16, where Fig. 16(a) gives the average Brenner value and Fig. 16(b) the average Tenengrad value of the images reconstructed by the different algorithms. The two sharpness metrics are consistent: our algorithm produces the sharpest images, EPRY and mcFPM are slightly lower, and TPWFP and ADMM are lower still. The relatively high sharpness score of mcFPM arises because its result mixes image information with noise, producing large intensity gradients. The result of TPWFP is not mixed with the background, but the algorithm fails to remove the noise and its sharpness is low. ADMM removes the image noise but also blurs many image details, giving low sharpness. EPRY removes noise and remains stable across the batch experiments. Among all the denoising algorithms, however, ours is the most effective: it retains the most information while enhancing the image contrast and achieves the highest sharpness.
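
For reference, the Brenner and Tenengrad sharpness measures used in this comparison have simple gradient-based forms. The sketch below follows the common textbook definitions, since the paper does not state its exact normalization:

```python
import numpy as np

def brenner(img: np.ndarray) -> float:
    """Brenner focus measure: sum of squared two-pixel horizontal differences."""
    d = img[:, 2:].astype(float) - img[:, :-2].astype(float)
    return float((d ** 2).sum())

def tenengrad(img: np.ndarray) -> float:
    """Tenengrad focus measure: sum of squared Sobel gradient magnitudes."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    # Valid 3x3 cross-correlation implemented with slicing (no SciPy dependency).
    gx = sum(kx[i, j] * img[i:img.shape[0] - 2 + i, j:img.shape[1] - 2 + j]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * img[i:img.shape[0] - 2 + i, j:img.shape[1] - 2 + j]
             for i in range(3) for j in range(3))
    return float((gx ** 2 + gy ** 2).sum())

# A step edge scores higher than a flat image on both metrics.
flat = np.full((32, 32), 128.0)
sharp = np.zeros((32, 32))
sharp[:, 16:] = 255.0
```

Both measures grow with image gradient energy, which is why a noisy-but-textured result (such as mcFPM's) can score deceptively high, as noted above.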

Fig. 16. Image quality of different algorithms (a) Brenner; (b) Tenengrad.

4. Discussion

4.1 Discussion of noise modelling parameters

After obtaining the relationship between the real temporal variant noise and the LED position, we use different functions to fit the distribution and model the noise. The analysis in Section 2.3 shows that images obtained from LEDs farther from the central LED have a higher noise-to-information ratio, i.e., ${N_i}$ is positively correlated with the distance from the central LED. We therefore plotted the noise against the distance from the central LED and fitted the resulting coordinate points with a function. After trying several fitting methods, polynomial fitting was found to be the most appropriate. Figure 17 shows the results of fitting the noise with polynomials of different degrees:

Fig. 17. Fitting ${N_i}$ plots using different functions.
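
The degree-by-degree polynomial fit in Fig. 17 can be reproduced with `numpy.polyfit`; the data below are synthetic stand-ins, not the paper's measured ${N_i}$ values:

```python
import numpy as np

# Synthetic stand-in for the measured noise-to-information ratio N_i versus
# LED distance d from the central LED (the real values come from blank-slide data).
rng = np.random.default_rng(0)
d = np.linspace(0.0, 40.0, 50)                      # distance in mm
true = 0.02 + 1e-4 * d ** 2 + 2e-8 * d ** 4         # grows with distance
n_i = true + rng.normal(0.0, 5e-4, d.size)          # measurement scatter

# Fit polynomials of increasing degree and compare residuals, as in Figs. 17/18.
for deg in (1, 2, 3, 4, 5):
    coeffs = np.polyfit(d, n_i, deg)
    sse = float(np.sum((np.polyval(coeffs, d) - n_i) ** 2))
    print(deg, sse)

coeffs4 = np.polyfit(d, n_i, 4)                     # the chosen degree-4 model
```

Because the models are nested, the SSE can only decrease as the degree rises; the question answered in Section 4.1 is where the decrease levels off.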

To compare the effectiveness of each function, the sum of squared errors (SSE) [31,32] is introduced as an objective metric of fitting quality: the closer the SSE is to zero, the better the model fits the original data and the better its predictions. The coefficient of determination (R-square) [33] evaluates how well the fitted function matches the coordinate points; its maximum value is one, and the closer it is to one, the better the regression curve fits the observations. The closer the mean square error (MSE) [34] and the root mean square error (RMSE) [35] are to zero, the more similar the curves are. The evaluation results are shown in Table 1:

Table 1. Fitting function evaluation metrics
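
The four metrics in Table 1 are standard; a compact implementation (variable and function names are ours):

```python
import numpy as np

def fit_metrics(y: np.ndarray, y_hat: np.ndarray) -> dict:
    """SSE, R-square, MSE and RMSE of fitted values y_hat against data y."""
    resid = y - y_hat
    sse = float(np.sum(resid ** 2))                  # sum of squared errors
    sst = float(np.sum((y - y.mean()) ** 2))         # total sum of squares
    return {
        "SSE": sse,
        "R-square": 1.0 - sse / sst,                 # 1.0 means a perfect fit
        "MSE": sse / y.size,
        "RMSE": (sse / y.size) ** 0.5,
    }

m = fit_metrics(np.array([1.0, 2.0, 3.0, 4.0]),
                np.array([1.1, 1.9, 3.2, 3.8]))
```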

To make the results clearer, we plotted them as line charts.

From Fig. 18, it can be seen that the higher the polynomial degree, the better the fit. Beyond the fourth degree, further increasing the degree has little effect on the fit, so a fourth-degree polynomial is chosen as the fitting function.

Fig. 18. Evaluation metrics for the fitted function (a) SSE; (b) R-square; (c) MSE; (d) RMSE.

4.2 Discussion of denoising parameters

In this paper, after obtaining the relationship between the noise-to-information ratio ${N_i}$ and the LED position, we use a Gaussian denoising algorithm to denoise the dark-field images. The effect of Gaussian denoising is determined by the parameter $\sigma$, so we assume that $\sigma$ is proportional to the noise-to-information ratio ${N_i}$, i.e., ${\mathrm{\sigma }_i} = k{N_i}$. We designed the following experiment to determine the value of k: we added different proportions of Gaussian noise to a grayscale image and then denoised it using Gaussian templates with different ${\sigma _i}$. The denoised images are evaluated using the SSIM and PSNR metrics, and the resulting evaluation values as a function of the noise proportion are shown in Fig. 19:

Fig. 19. Discussion of denoising parameters (a) Variation of SSIM values with temporal noise ${N_i}$ at different values of k; (b) Variation of PSNR values with temporal noise ${N_i}$ at different values of $k$.

As shown in Fig. 19(a), when k is 0.5, 0.6 or 0.7, the SSIM values are higher at a low noise percentage, indicating that in these cases the Gaussian template denoises images with a low proportion of Gaussian noise well. As the noise percentage increases, the SSIM of the denoised image is highest when k is 0.5, indicating that this choice also denoises images with a higher proportion of Gaussian noise well. In Fig. 19(b), when k is 0.4, 0.5 or 0.6, the PSNR values are higher, again indicating good denoising at low noise percentages. Combining the SSIM and PSNR results, k should be taken as 0.5.
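
The adaptive rule ${\sigma _i} = k{N_i}$ with k = 0.5 fixes the Gaussian template for each image. A minimal numpy sketch of how the normalized template G(x, y) could be built (the kernel radius and function name are our own choices, not the paper's):

```python
import numpy as np

def gaussian_kernel(n_i: float, k: float = 0.5, radius: int = 3) -> np.ndarray:
    """Adaptive Gaussian kernel with sigma_i = k * N_i, normalized to unit sum."""
    sigma = k * n_i                                   # sigma grows with the noise ratio
    x = np.arange(-radius, radius + 1, dtype=float)
    xx, yy = np.meshgrid(x, x)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()                                # normalize so mean brightness is kept

# A noisier dark-field image (larger N_i) gets a wider, flatter kernel.
g_low, g_high = gaussian_kernel(1.0), gaussian_kernel(3.0)
```

Convolving each dark-field image with its own kernel is what makes the filtering adaptive: images from distant LEDs, which carry more temporal noise, are smoothed more strongly.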

4.3 Discussion of denoising method

In this paper, we present an LED-based temporal variant noise model for Fourier ptychographic microscopy. In the image denoising process, we use an adaptive Gaussian filter (AGF) denoising algorithm based on the noise model, in which the Gaussian denoising parameter $\sigma$ of the AGF varies with the temporal noise-to-information ratio ${N_i}$. To further justify our choice of denoising algorithm, we compare it with the sparse 3-D transform-domain collaborative filtering (BM3D) denoising algorithm [36], the non-local means denoising algorithm [37], and the bilateral filtering denoising algorithm [38].

Reconstruction is performed with the EPRY [26] algorithm after pre-processing the acquired images with the various denoising algorithms. We simulate the FPM setup with the following hardware parameters: the objective NA is 0.1 and the corresponding pupil function is an ideal binary function (one inside the NA circle and zero outside); the distance from the LED plane to the sample plane is 100 mm; the spacing between neighboring LEDs is 4 mm; the incident wavelength is 635 nm; and a 15 × 15 LED matrix is used as the light source. After simulating a set of 15 × 15 FPM raw data, Gaussian noise with a standard deviation of 0.002 is added to each image [20]. The results of the simulation experiments are shown in Fig. 20 and Fig. 21.
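
The noise-injection step of this simulation protocol can be sketched as follows (the image size, stack contents and RNG seed are placeholders, not the paper's actual data):

```python
import numpy as np

# Following the simulation protocol: add zero-mean Gaussian noise with a
# standard deviation of 0.002 to each normalized low-resolution intensity image.
rng = np.random.default_rng(1)
stack = rng.uniform(0.0, 1.0, size=(225, 64, 64))     # stand-in for 15 x 15 LR images
noisy = np.clip(stack + rng.normal(0.0, 0.002, stack.shape), 0.0, 1.0)
```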

Fig. 20. Comparison of simulation intensity results of different denoising methods with EPRY (a) Input HR intensity; (b) BM3D [36]; (c) non-local [37]; (d) bilateral filtering [38]; (e) AGF (proposed).

Fig. 21. Comparison of simulation phase reconstruction results of different denoising methods with EPRY.

The “Cameraman” and “Westconcordorthophoto” images serve as the initial HR input intensity and phase, respectively. Figure 20(a) and Fig. 21(a) show the HR input intensity and phase images, which simulate the true values of the complex sample. The images pre-processed with the spatial-domain denoising algorithms BM3D, non-local means and bilateral filtering yield clearer results, as shown in Fig. 20(b), Fig. 20(c) and Fig. 20(d). Compared with AGF, however, their phase reconstructions are more strongly affected and retain more noise. Our algorithm performs best in the amplitude reconstruction and effectively suppresses the noise.

To further compare the image results, we analyzed them using the objective evaluation metrics SSIM and PSNR. The results, shown in Table 2, confirm that the reconstructions of the proposed AGF method are better than those of the other algorithms in both amplitude and phase.

Table 2. Objective evaluation of reconstructed image quality

From our experimental analysis, the other algorithms can produce better single-image denoising results once their parameters are carefully tuned. However, FPM involves a large number of images whose signal-to-noise ratios differ, so the denoising parameters must be selected adaptively for each image; as a result, our algorithm achieves the best reconstruction results.

5. Conclusion

To solve the noise problem, this paper explores the characteristics of spatial noise and temporal noise in FPM, using blank slide samples as the reference for system noise. Analyzing the distributions of the measured spatial and temporal noise, we conclude that the temporal noise is uniformly distributed across the field of view and is little affected by the light field. Based on this conclusion, we established a temporal noise model relating the noise ratio to the spatial position of the LED matrix. The relationship between the noise proportion and the Gaussian denoising parameter was analyzed, yielding a matrix of Gaussian denoising parameters with which the acquired images at different locations are adaptively denoised before reconstruction. The resulting images have low noise, high contrast and rich detail. The results show that our proposed adaptive Gaussian denoising FPM reconstruction algorithm, based on the noise model, eliminates noise more effectively, retains more of the useful signal and obtains higher image contrast. The reported denoising method not only improves the quality and robustness of FPM results, but also relaxes the imaging performance requirements for achieving high-quality FPM in noise-affected environments.

Our algorithm can serve as an effective FPM image pre-processing step in combination with other optimized FPM algorithms to obtain better experimental results. Because the pre-processing stage analyzes the sampling system to suppress noise in the acquired images, it can be combined with other reconstruction algorithms to enhance the quality of their reconstructions; reconstruction algorithms that are not themselves optimized for image noise benefit most. The algorithm is therefore portable across a variety of reconstruction pipelines.

The temporal variant noise model is strongly influenced by the environment and the acquisition system; when these conditions change, repeating the noise analysis on the actual system and constructing a new noise model takes considerable time. For high-quality images acquired in a darkroom, where the dark field is little affected by the system and environment and the acquired images contain a low proportion of time-varying noise, the improvement from our algorithm is not obvious.

We subsequently plan to use the model to analyze the image noise at different locations, to exclude the dark-field images that negatively affect the reconstruction, and thereby to improve reconstruction efficiency. In addition, the model can be used to analyze the signal-to-noise ratio of the acquired images, which can guide the reconstruction sequence and speed up the convergence of the Fourier spectrum recovery.

Funding

Natural Science Foundation of Zhejiang Province (No.LY22F050002); Graduate Scientific Research Foundation of Hangzhou Dianzi University (No. CXJJ2023056).

Acknowledgment

This work is supported by Zhejiang Provincial Key Lab of Equipment Electronics.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. G. Zheng, “Fourier ptychographic imaging,” in 2015 IEEE Photonics Conference (IPC) (IEEE, 2015), pp. 20–21.

2. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

3. Y. Fan, J. Sun, Y. Shu, et al., “Efficient synthetic aperture for phaseless Fourier ptychographic microscopy with hybrid coherent and incoherent illumination,” Laser Photonics Rev. 17(3), 2200201 (2023). [CrossRef]  

4. Y. Fan, J. Li, L. Lu, et al., “Smart computational light microscopes (SCLMs) of smart computational imaging laboratory (SCILab),” PhotoniX 2, 1–64 (2021). [CrossRef]

5. S. Zhou, J. Li, J. Sun, et al., “Accelerated Fourier ptychographic diffraction tomography with sparse annular LED illuminations,” J. Biophotonics 15(3), e202100272 (2022). [CrossRef]  

6. C. Jacobsen, “Relaxation of the Crowther criterion in multislice tomography,” Opt. Lett. 43(19), 4811–4814 (2018). [CrossRef]  

7. C. Zuo, J. Sun, J. Li, et al., “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Optics and Lasers in Engineering 128(106003), 106003 (2020). [CrossRef]  

8. J. Li, J. Hao, X. Wang, et al., “Fourier ptychographic microscopic reconstruction method based on residual hybrid attention network,” Sensors 23(16), 7301 (2023). [CrossRef]  

9. Y. Gao, J. Chen, A. Wang, et al., “High-throughput fast full-color digital pathology based on Fourier ptychographic microscopy via color transfer,” Sci. China Phys. Mech. Astron. 64(11), 114211 (2021). [CrossRef]

10. R. Horstmeyer, X. Ou, G. Zheng, et al., “Digital pathology with Fourier ptychography,” Computerized Medical Imaging and Graphics 42, 38–43 (2015). [CrossRef]  

11. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  

12. G. Zheng, X. Ou, R. Horstmeyer, et al., “Fourier ptychographic microscopy: A gigapixel superscope for biomedicine,” Opt. Photonics News 25(4), 26–33 (2014). [CrossRef]  

13. X. Ou, R. Horstmeyer, G. Zheng, et al., “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]  

14. J. Sun, C. Zuo, L. Zhang, et al., “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7(1), 1187 (2017). [CrossRef]  

15. J. Sun, Q. Chen, Y. Zhang, et al., “Sampling criteria for Fourier ptychographic microscopy in object space and frequency space,” Opt. Express 24(14), 15765–15781 (2016). [CrossRef]  

16. D. Dominguez, L. Molina, D. B. Desai, et al., “Hemispherical digital optical condensers with no lenses, mirrors, or moving parts,” Opt. Express 22(6), 6948–6957 (2014). [CrossRef]

17. M. Alsubaie, S. Sen, D. Desai, et al., “Fourier ptychographic microscopy using a computer-controlled hemispherical digital condenser,” Appl. Opt. 55(23), 6421 (2016). [CrossRef]  

18. Z. F. Phillips, M. V. D’Ambrosio, L. Tian, et al., “Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array,” PLoS One 10(5), e0124938 (2015). [CrossRef]

19. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118–123 (1987). [CrossRef]

20. L. Bian, J. Suo, J. Chung, et al., “Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient,” Sci. Rep. 6(1), 27384 (2016). [CrossRef]  

21. A. Zhou, N. Chen, H. Wang, et al., “Analysis of Fourier ptychographic microscopy with half of the captured images,” J. Opt. 20(9), 095701 (2018). [CrossRef]  

22. L. Tian, X. Li, K. Ramchandran, et al., “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

23. A. Wang, Z. Zhang, S. Wang, et al., “Fourier ptychographic microscopy via alternating direction method of multipliers,” Cells 11(9), 1512 (2022). [CrossRef]  

24. V. Bianco, B. Mandracchia, J. Běhal, et al., “Miscalibration-tolerant Fourier ptychography,” IEEE J. Select. Topics Quantum Electron. 27(4), 1–17 (2021). [CrossRef]  

25. V. Bianco, D. Priscoli M, D. Pirone, et al., “Deep learning-based, misalignment resilient, real-time Fourier Ptychographic Microscopy reconstruction of biological tissue slides,” IEEE J. Select. Topics Quantum Electron. 28(4), 1–10 (2022). [CrossRef]  

26. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

27. H. K. Ahn and B. H. Chon, “Reflective Fourier ptychographic microscopy using segmented mirrors and a mask,” Curr. Opt. Photonics 5(1), 40–44 (2021). [CrossRef]

28. C. Lee, Y. Baek, H. Hugonnet, et al., “Single-shot wide-field topography measurement using spectrally multiplexed reflection intensity holography via space-domain Kramers–Kronig relations,” Opt. Lett. 47(5), 1025–1028 (2022). [CrossRef]  

29. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]

30. Y. Chen and E. Candes, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” Advances in Neural Information Processing Systems 28 (2015).

31. H. Späth, Cluster Analysis Algorithms for Data Reduction and Classification of Objects (Ellis Horwood, 1980).

32. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression (Springer Science & Business Media, 2012).

33. A. Y. J. Akossou and R. Palm, “Impact of data structure on the estimators R-square and adjusted R-square in linear regression,” Int. J. Math. Comput. 20(3), 84–93 (2013).

34. J. Nevitt and G. R. Hancock, “Improving the root mean square error of approximation for nonnormal conditions in structural equation modeling,” Journal of Experimental Education 68(3), 251–268 (2000). [CrossRef]  

35. G. R. Hancock and M. J. Freeman, “Power and sample size for the root mean square error of approximation test of not close fit in structural equation modeling,” Educ. Psychol. Meas. 61(5), 741–758 (2001). [CrossRef]  

36. K. Dabov, A. Foi, V. Katkovnik, et al., “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. on Image Process. 16(8), 2080–2095 (2007). [CrossRef]  

37. A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (IEEE, 2005), Vol. 2, pp. 60–65.

38. B. K. S. Kumar, “Image denoising based on gaussian/bilateral filter and its method noise thresholding,” Signal, Image and Video Processing 7(6), 1159–1172 (2013). [CrossRef]


Figures (21)

Fig. 1. Schematic of a traditional FPM experimental platform.
Fig. 2. Noise analysis of FPM acquired images.
Fig. 3. Relationship between mean of noise and the spatial position of the LED (a) Spatial noise distribution; (b) Temporal noise distribution.
Fig. 4. Relationship between temporal noise ${N_i}$ distribution and the spatial position of the LED (a) Realistic temporal noise ${N_i}$ distribution data; (b) Fitted temporal noise ${N_i}$ distribution data.
Fig. 5. Flowchart of the algorithm in this paper.
Fig. 6. Comparison of simulation intensity reconstruction results of different algorithms (a) Input HR intensity; (b) AP [19]; (c) EPRY [26]; (d) TPWFP [20]; (e) ADMM [23]; (f) Proposed method.
Fig. 7. Comparison of simulation phase reconstruction results of different algorithms (a) Input HR phase; (b) AP [19]; (c) EPRY [26]; (d) TPWFP [20]; (e) ADMM [23]; (f) Proposed method.


Equations (13)

$$u_i = \frac{x_i - x_0}{\lambda \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2 + h^2}}, \qquad v_i = \frac{y_i - y_0}{\lambda \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2 + h^2}} \tag{1}$$

$$I_i(x, y) = \left| \mathcal{F}^{-1} \{ CTF(u, v)\, T(u - u_i, v - v_i) \} \right|^2 \tag{2}$$

$$\begin{cases} \sqrt{u_i^2 + v_i^2} \le \dfrac{NA_{obj}}{\lambda}, & \text{bright field} \\[4pt] \sqrt{u_i^2 + v_i^2} > \dfrac{NA_{obj}}{\lambda}, & \text{dark field} \end{cases} \tag{3}$$

$$SN_i = \frac{\left( \sum_x \sum_y I_i(x, y) \right)^2}{H \times L} \tag{4}$$

$$I_{std}^i(x, y) = \frac{\sum_{n=1}^{10} I_n^i(x, y)}{10} \tag{5}$$

$$TN_i = \frac{\sum_{n=1}^{10} \sum_x \sum_y \left| I_n^i(x, y) - I_{std}^i(x, y) \right|}{10} \tag{6}$$

$$\begin{cases} \sqrt{u_i^2 + v_i^2} \le \dfrac{2NA_{obj}}{\lambda}, & \text{existing bright field} \\[4pt] \sqrt{u_i^2 + v_i^2} > \dfrac{2NA_{obj}}{\lambda}, & \text{no bright field} \end{cases} \tag{7}$$

$$N_i = \frac{\sum_x \sum_y TN_i(x, y)}{\sum_x \sum_y I_i(x, y)} \tag{8}$$

$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \tag{9}$$

$$\sigma_i = k N_i \tag{10}$$

$$I_i^{denoise}(x, y) = \frac{1}{2\pi (0.5 N_i)^2}\, e^{-\frac{x^2 + y^2}{2 (0.5 N_i)^2}} \tag{11}$$

$$\min_{T(u, v)} \sum_i \sum_{x, y} \left| \sqrt{I_i^{denoise}(x, y)} - \left| \mathcal{F}^{-1} \{ CTF(u, v)\, T(u - u_i, v - v_i) \} \right| \right|^2 \tag{12}$$

$$\begin{cases} \phi_n = P_n(u, v)\, T_n(u - u_i, v - v_i) \\[4pt] P_{n+1}(u, v) = P_n(u, v) + \dfrac{T_n^*(u - u_i, v - v_i)}{\left| T_n(u - u_i, v - v_i) \right|_{max}^2} \left[ \phi_n' - \phi_n \right] \end{cases} \tag{13}$$