Optica Publishing Group

Accurate estimation of the illumination pattern’s orientation and wavelength in sinusoidal structured illumination microscopy

Open Access

Abstract

Structured illumination microscopy is able to improve the spatial resolution of wide-field fluorescence imaging by applying a sinusoidal stripe pattern illumination to the sample. The corresponding computational image reconstruction requires precise knowledge of the pattern’s parameters, which are its phase (ϕ) and wave vector (p). Here, a computationally inexpensive method for estimating p from the raw data is proposed and illustrated with simulations. The method estimates p through a selective discrete Fourier transform at tunable subpixel precision. This results in an accurate estimation of p for all the illumination patterns and subsequently improves the superresolution image recovery by a factor of 10 around sharp edges as compared to an integer-pixel approach. The technique as presented here is of major interest to the large variety of custom-built systems in use. The feasibility of the presented method is proven in comparison with published data.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Wide-field fluorescence microscopy is limited in its maximum spatial resolution by what is commonly known as the Abbe diffraction limit [1]. The diffraction limit is a result of the finite range of spatial frequencies that a microscope’s objective can capture depending on its aperture. The 2D image formation under plain illumination can be formulated as a convolution (⊗), giving D(r) = [S(r) · I] ⊗ h(r), with the acquired image D, the sample S, the plain illumination intensity I, the point spread function (PSF) h, and the spatial coordinate r. In the Fourier transform of this expression, the convolution is replaced by a multiplication and vice versa, D̃(k) = [Ĩ(k) ⊗ S̃(k)] · h̃(k), where the tilde (˜) indicates the Fourier transform, k is the Fourier space coordinate or spatial frequency, and h̃ is called the optical transfer function (OTF), which has the property of limiting the highest spatial frequency of the sample represented in the image to a cutoff frequency, k ≤ 2k_max, and thus the maximum resolution of the image. Introducing nonuniform illumination of the sample will mix high spatial frequency information lying beyond the OTF’s support into its passband [2]. In structured illumination microscopy (SIM), this idea can be used by implementing a sinusoidal stripe pattern illumination. It is described by a wave vector p = p_x k̂_x + p_y k̂_y, with the unit vectors k̂_x and k̂_y, that determines a pattern angle

γ = arctan(p_y / p_x),   (1)
a fringe spacing of
L = (p_x² + p_y²)^(−1/2),   (2)
and a phase ϕ yielding the pattern’s shift along the wave vector. Computational image reconstruction upon raw data acquisition may now separate spatial frequency components shifted from their original position into the passband and those naturally lying within the passband [3]. Relocating shifted components to their original position eventually results in a larger information content in the Fourier domain and equivalently in a higher resolution of the reconstructed image. For a single SIM reconstruction, multiple raw images of the sample with different illumination patterns are needed. In the 2D case, a typical configuration of three different illumination wave vectors p (with constant fringe period and three different orientations) and three different phases ϕ each, yielding a total of nine images, can be used to reconstruct one image. Image reconstruction in SIM for the described stripe pattern illumination is well known and can be derived from the more general 3D case [4]. Software implementations for SIM reconstruction are readily available [5,6]. Given a sinusoidal illumination pattern in the object plane, its wave vector p and phase ϕ need to be known for each of the raw data images to perform the well-established direct reconstruction. Estimation of p and ϕ is a critical task in SIM and is addressed here because a wrong parameter estimation for one of the nine raw data images is sufficient to corrupt the overall reconstruction. Common methods used to estimate the phase rely on precise knowledge of the wave vector [7–10] or require the pattern phases to be equally distributed [4,6]. This means that an incorrect estimation of p will inevitably lead to a wrong value for ϕ if no further assumptions on the phase distribution can be made and thus prevent correct image reconstruction. Furthermore, some of the methods for parameter estimation involve iterative steps that come with an increased computational cost [6,7].
This problem is usually addressed by gaining experimental control over the illumination pattern, using expensive hardware like spatial light modulators, vibration-free suspension systems, and piezo actuators to introduce predefined phase shifts. The idea of the proposed method is to determine the wave vector p based on the raw data images in a noniterative way at a precision that is sufficient to apply a direct solution for the phase estimation as presented by Wicker [8].

As can be seen in Fig. 1, the sinusoidal illumination pattern results in three peaks in the Fourier spectrum of each input image, and p is estimated by localizing these peaks. Because the input images are real valued, one peak will be located at the center (zero frequency), and the other two peaks will be distributed symmetrically around the center along the vector p in the Fourier space. In essence, the algorithm presented here localizes one of the outer peaks with subpixel precision in the absolute value map of the Fourier transform. First a conventional Fourier transform as given by the fast Fourier transform (FFT) is applied to the raw image. To rule out contributions of the low frequencies, a mask is applied: frequencies smaller than 95% of the actual pattern frequency are masked. This corresponds to a rough estimate in an actual setup. The maximum in the masked FFT image is identified with the integer pixel precision offered by the FFT routine. Then the Fourier transform is calculated in a small region around the peak position at a higher sampling, where the peak can be localized with improved precision. As opposed to padding the raw image with zeros prior to applying the FFT, which increases the precision of the peak localization in the same way albeit at a high computational cost, the proposed method calculates the Fourier transform in a significantly smaller area at a specific sampling rate around the pre-estimated peak location. This reduces the computational cost to a fraction of an approach based on zero padding [11,12].
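The coarse, integer-pixel step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the image size, pattern frequencies, and mask radius are arbitrary test values:

```python
import numpy as np

def coarse_peak(img, kmin):
    """Integer-pixel localization of an illumination peak in the FFT
    magnitude, after masking frequencies below kmin (cycles/pixel)."""
    N = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))
    v, u = np.indices(F.shape)
    ky = (v - N // 2) / N            # frequency coordinates in cycles/pixel
    kx = (u - N // 2) / N
    mag = np.abs(F)
    mag[np.hypot(kx, ky) < kmin] = 0.0   # suppress the central (zero-order) peak
    v0, u0 = np.unravel_index(np.argmax(mag), mag.shape)
    return kx[v0, u0], ky[v0, u0]

# demo: stripe-illuminated flat sample with a known wave vector
N = 256
yy, xx = np.indices((N, N))
px, py = 0.11, 0.07                  # true pattern frequencies (cycles/pixel)
img = 1.0 + np.cos(2 * np.pi * (px * xx + py * yy))
kx0, ky0 = coarse_peak(img, 0.95 * np.hypot(px, py))
```

Because the image is real valued, the two outer peaks are conjugate-symmetric, so either one determines p up to sign; the localization error of this step is bounded by the FFT bin spacing 1/N.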


Fig. 1. Illustration of the subpixel peak localization in Fourier space. Panel (a) shows a raw data image of the sample pirate under stripe pattern illumination as described in the text. The edges of the image have been dampened to avoid discontinuities. In (b), the Fourier spectrum of (a) calculated by the FFT routine is shown. The pattern wave vector p is indicated in green. Estimating p based on the integer pixel location of the maximum in the Fourier spectrum will lead to a peak location mismatch as shown in the cropped Fourier spectrum (c). The red cross is the location of the maximum, the green cross the actual peak location based on the numerical value of the fringe period and orientation used. Panel (d) shows the result of the proposed selective Fourier transform of (a). The center has been selected to be the location of the maximum found in (c). The upsampling factor is α=10, and the size of the area is the same as in (c). Now the maximum (red) is much closer to the actual peak position (green).


In Section 2, the theoretical background for sinusoidal SIM, including current methods of parameter estimation and the proposed subpixel peak detection for SIM, is described. In Section 3, the presented method is applied to simulated data, and the results are compared to published data. Finally, Section 4 provides concluding remarks on where the presented method integrates within published methods for parameter estimation.

2. THEORY

The theoretical background is described largely using the notation of Wicker et al. [7,8]. First, the general framework for SIM using a sinusoidal illumination pattern is presented. Then an overview of current parameter estimation methods is outlined, and the proposed approach for computationally inexpensive peak localization at subpixel precision is presented.

A. Sinusoidal SIM

Given a sinusoidal illumination pattern, the image formation can be described as

D(r) = [S(r) · I(r)] ⊗ h(r),   (3)
where
I(r) = Σ_{m=−1}^{+1} a_m exp[im(2πp · r + ϕ)]   (4)
denotes the illumination pattern with the modulation depth am. The Fourier transform of D(r) can be written as
D̃(k) = Σ_{m=−1}^{+1} exp(imϕ) a_m S̃(k − mp) h̃(k) ≡ Σ_{m=−1}^{+1} C̃_m(k),   (5)
where each band contribution is C̃_m(k) = exp(imϕ) a_m S̃(k − mp) h̃(k).

It can be seen that D̃(k) contains frequency components or bands S̃_m(k) = S̃(k − mp) that are shifted by mp with respect to their original position in frequency space prior to multiplication with the OTF. In this way, high spatial frequency components of the sample that are not observable under plain illumination are transferred into the image, although the bands are shifted and summed up. By taking N images of the sample with different phases ϕ_n, n = 1, …, N, the resulting Fourier transforms of the images D̃_n(k) can be written as

D̃(k) = M C̃(k),   (6)
with the acquired images in the vector D̃ = [D̃_1, …, D̃_N]ᵀ, the matrix elements M_nm = exp(imϕ_n), and the vector C̃(k) = [C̃_{−1}, C̃_0, C̃_{+1}]ᵀ. If the inverse M⁻¹ of M exists, the different components can be separated by
C̃(k) = M⁻¹ D̃(k).   (7)
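The band separation above amounts to solving a small linear system at every spatial frequency. A toy numeric check (with arbitrary, non-equidistant phases and made-up band values) can be written as:

```python
import numpy as np

# Mix three bands with M_nm = exp(i*m*phi_n), then separate with the
# matrix inverse, per Eqs. (6)-(7). All numeric values are illustrative.
phis = np.array([0.0, 2.1, 4.0])                   # three non-equidistant phases
m = np.array([-1, 0, 1])
M = np.exp(1j * np.outer(phis, m))                 # 3x3 mixing matrix

C_true = np.array([1.0 - 2.0j, 3.0, 0.5 + 1.0j])   # C_-1, C_0, C_+1 at one k
D_vec = M @ C_true                                 # the three acquired spectra at k
C_sep = np.linalg.inv(M) @ D_vec                   # Eq. (7): band separation
```

In practice M⁻¹ is applied to every frequency sample of the N acquired spectra at once; M is invertible as long as the phases are distinct modulo 2π.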

The final image is the inverse Fourier transform of S̃^(k), the final estimate in the Fourier domain. It is obtained by shifting each band to its original position and recombining them using a generalized Wiener filter:

S̃^(k) = { Σ_{m,d} a_m C̃_{m,d}(k + mp_d) h̃*(k + mp_d) } / { Σ_{m,d} |a_m h̃(k + mp_d)|² + w } · A(k).   (8)

The Wiener filter reduces the degrading influence of the OTF and weights the bands in regions where they overlap according to their expected SNR. The Wiener parameter w is determined empirically, A(k) is an apodization function decreasing linearly from unity at the center to zero near the end of the extended OTF support, shaping the overall spectrum to prevent ringing artifacts in the final image, and the asterisk (*) indicates the complex conjugate. Because the resolution improvement only takes place in the direction of p, the process of image acquisition and band separation is repeated for different orientations d to obtain isotropic resolution enhancement. As can be seen from Eqs. (7) and (8), the phases ϕ_n and the wave vectors p need to be known to perform the reconstruction.

B. Conventional Parameter Estimation Techniques

Following Gustafsson et al. [4] and Müller et al. [6], if an equidistant phase distribution of the ϕ_n can be assumed, retrieval of p is straightforward. Then Eq. (7) can be applied by using these relative phases in M, which differ from the true phases only by a constant offset ϕ_Δ. Given a large enough OTF support with respect to p, and the OTF-corrected components C̃′_m(k) = C̃_m(k) h̃*(k)/|h̃(k)|², C̃′_{±1} will have an overlapping region in k with C̃′_0, where they will differ only by a complex factor a_m for the correct p (see Fig. 2). Thus, by maximizing the cross correlation (⋆) for m ≠ 0,

[C̃′_0 ⋆ C̃′_m](p) = Σ_k C̃′_0*(k) C̃′_m(k + mp),   (9)
the vector p can be determined, and the complex factor am is calculated as
a_m = Σ_k C̃′_0*(k) C̃′_m(k + mp) / Σ_k |C̃′_0(k)|².   (10)
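Eq. (10) reduces to a normalized inner product over the overlap region. A toy check with synthetic overlap samples (all values illustrative; in a real pipeline the samples would come from the separated, OTF-corrected bands):

```python
import numpy as np

# If the shifted band equals C'_0 times a complex factor in the overlap
# region, the normalized correlation sum recovers that factor exactly.
rng = np.random.default_rng(2)
C0 = rng.normal(size=64) + 1j * rng.normal(size=64)  # samples of C'_0 in the overlap
a_true = 0.6 * np.exp(1j * 0.9)                      # modulation depth and phase offset
C1 = a_true * C0                                     # C'_1 after shifting back by +p
a_est = np.sum(np.conj(C0) * C1) / np.sum(np.abs(C0) ** 2)
```

The modulation depth is then |a_est| and the phase offset arg(a_est), matching the roles of |a_m| and ϕ_Δ in the text.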


Fig. 2. Spectral overlap. The separated components C̃_0 and C̃_1 in Fourier space (k_x, k_y). If C̃_1 is shifted by p (red arrow) to its correct position in Fourier space, both bands will have a region of overlap in k in which they differ only by a complex factor [see Eq. (10)].


Furthermore, ϕ_Δ = arg(a_m), with arg(·) giving the angle of a complex number, and a_m = |a_m| for a_0 = 1. The workflow described so far is applied when equidistant phase steps within one pattern orientation are provided in the experimental setup.

However, if the assumptions about the phase distribution cannot be made but p is known, the phase of the illumination pattern can be determined based on the phase of the peak at p in D̃_n as

ϕ_n = arg[D̃_n(p)],   (11)
as described by Shroff et al. [9,10]. Alternatively, the spectrum’s autocorrelation can be evaluated at p, and the phase can be determined as
ϕ_n = arg[(D̃′_n ⋆ D̃′_n)(p)],   (12)
as described by Wicker, where D̃_n is filtered by the complex conjugated OTF, yielding D̃′_n = D̃_n h̃*, to reduce the influence of noise and asymmetries in the PSF [8]. Here ⋆ represents the autocorrelation. Due to its noniterative and flexible nature, it is the method of choice for the work presented here. It is applied after p is estimated as described in Section 2.C.
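As a toy check of the peak-phase idea, the simpler Eq. (11) variant can be exercised on synthetic data. Two simplifying assumptions are made here: the pattern frequency is placed on exact FFT bins, and the PSF is omitted; the sample and all numbers are illustrative:

```python
import numpy as np

# With a strong sample DC component, the phase of the data spectrum read
# out at the pattern frequency p approximates the pattern phase (Eq. 11).
N = 128
yy, xx = np.indices((N, N))
u_p, v_p = 8, 4                       # pattern frequency on exact FFT bins
phi_true = 1.1
illum = 1.0 + np.cos(2 * np.pi * (u_p * xx + v_p * yy) / N + phi_true)
sample = np.random.default_rng(1).random((N, N))   # nonnegative sample, large DC
D_img = sample * illum                # PSF omitted for brevity
phi_est = np.angle(np.fft.fft2(D_img)[v_p, u_p])
```

The small residual error comes from the m = 0 and m = −1 bands leaking into the readout position, which is exactly the effect the OTF-filtered autocorrelation of Eq. (12) is designed to suppress.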

A related method is mentioned for the sake of completeness. It determines the phase values by an iterative optimization. The idea is that two separated bands C̃_i(k) and C̃_j(k − lp) (l being an integer) should not have common information in the region where they overlap for i ≠ j + l if they are separated correctly. They will have similar components if M in Eq. (7) contains the wrong phase values. The phases in the separation matrix M are then found by minimizing a cost function that evaluates the cross correlation in the overlapping regions [7].

C. Subpixel Peak Detection

Because it may not be possible to base the parameter estimation on prior knowledge such as the phase distribution or the pattern wave vector, these parameters need to be extracted from the acquired raw data [13]. We address the situation where no assumption can be made about the phase distribution, and the values of p are unknown to begin with. We propose to reconstruct p using the approach described as follows and then use the reconstructed value of p with Eq. (12) for reconstructing the phase value ϕ. This way the precision of the peak localization not only governs the accuracy of the determined value of p but also the fidelity of the estimated ϕ. One approach is to localize the peaks in D̃(k) that come from the sinusoidal illumination pattern. For a discretized image consisting of N_x × N_y pixels (N_x, N_y being even integers) with indices x = 1, …, N_x and y = 1, …, N_y and spatial coordinates from −N_{x,y}/2 to N_{x,y}/2 − 1 px in steps of 1 px in the x̂ and ŷ directions, the conventional FFT returns an array of the same size with indices u = x and v = y, which correspond to frequency coordinates from −(1/2) px⁻¹ to (1/2 − 1/N_{x,y}) px⁻¹ in steps of (1/N_{x,y}) px⁻¹ in the k̂_x and k̂_y directions, k̂_x and k̂_y being the frequency domain analogs of x̂ and ŷ. The localization precision of a maximum in the FFT thus depends on the step size 1/N_{x,y}. This can be improved by padding the image with zeros prior to the Fourier transform. Depending on the padding size, this approach may become computationally expensive.
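The index-to-frequency mapping stated above matches the convention of common FFT libraries and can be verified directly, for example with numpy:

```python
import numpy as np

# For even N, the (shifted) FFT bin frequencies run from -1/2 to
# 1/2 - 1/N cycles per pixel in steps of 1/N.
N = 8
freqs = np.fft.fftshift(np.fft.fftfreq(N))   # cycles per pixel
step = freqs[1] - freqs[0]
```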

Our approach to this problem is to first localize the peaks in D̃(k) without any padding, giving the indices (u_0, v_0) with a localization precision equal to the step size 1/N_{x,y} in the frequency domain. In the second step, the Fourier transform of the input image is calculated in a selected region in the frequency domain around the position of the peak with a chosen oversampling [11,12] using a twofold matrix multiplication:

D̃(û, v̂) = exp(−(2πi/N_y) v̂ yᵀ) · D(x, y) · exp(−(2πi/N_x) x ûᵀ),   (13)
where exp(Z) is the element-wise exponential of the array Z. The indices in û and v̂ can now be chosen as fractional numbers around u_0 and v_0. An adequate choice is to select an upsampling factor α and an area of 1.5 px in each direction around the initial pixel location, such that û, v̂ = [−1.5/2, −1.5/2 + 1/α, …, 1.5/2 − 2/α, 1.5/2 − 1/α] + u_0, v_0 represent subpixel indices around the original position of the peak. The corresponding frequency space coordinates range from {−1/2 + [1/N_{x,y} · (u_0,v_0 − 1.5/2 − 1)]} to {−1/2 + [1/N_{x,y} · (u_0,v_0 + 1.5/2 − 1)] − 1/(αN_{x,y})} px⁻¹ in steps of 1/(αN_{x,y}) px⁻¹. The position of the maximum localized in D̃(û, v̂) will then be as precise as if it had been localized in the FFT of the original image after increasing its size by a factor of α in both directions using zero padding, at a negligible computational cost. Performing the peak localization for a set of nine raw input images, the computation times are 0.03 s versus 0.43 s for a fivefold oversampling and 0.04 s versus 1.80 s for a tenfold oversampling, for the subpixel approach and the zero padding approach, respectively, on an Intel Core i7 at 2.10 GHz using MATLAB. With respect to the computational complexity for a raw SIM image of dimension M × M, a fivefold zero padded FFT in both row and column directions has a computational complexity of O(25M² log(25M²)), whereas the proposed method has a complexity of O(5M²). For a tenfold oversampling, these numbers are O(100M² log(100M²)) versus O(10M²). The timing numbers provided can be seen in this context. When the oversampling factor is much smaller than M, as in this case, this is a significant computational gain for achieving the same level of accuracy.
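A minimal numpy sketch of the selective Fourier transform of Eq. (13) follows. It is an illustration under arbitrary test values, not the authors' MATLAB code; the coarse estimate (u_0, v_0) is supplied by hand here, standing in for the plain-FFT step:

```python
import numpy as np

def selective_dft(img, u0, v0, alpha, half=0.75):
    """Eq. (13) as two matrix multiplications: evaluate the DFT of img on a
    fine grid of fractional frequency indices around the coarse peak (u0, v0)."""
    Ny, Nx = img.shape
    d = np.arange(-half, half, 1.0 / alpha)          # fractional index offsets
    u_hat = u0 + d
    v_hat = v0 + d
    y = np.arange(Ny)
    x = np.arange(Nx)
    Ey = np.exp(-2j * np.pi * np.outer(v_hat, y) / Ny)   # (len(d), Ny)
    Ex = np.exp(-2j * np.pi * np.outer(x, u_hat) / Nx)   # (Nx, len(d))
    return Ey @ img @ Ex, u_hat, v_hat

# demo on a stripe pattern whose frequency falls between FFT bins
N = 256
yy, xx = np.indices((N, N))
fu, fv = 28.3, 17.6                    # true frequencies in FFT-bin units
img = 1.0 + np.cos(2 * np.pi * (fu * xx + fv * yy) / N)
u0, v0 = 28, 18                        # integer-bin estimate from a plain FFT
Z, u_hat, v_hat = selective_dft(img, u0, v0, alpha=10)
j, i = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)
u_fine, v_fine = u_hat[i], v_hat[j]
```

The cost is that of two small matrix products over a grid of roughly 1.5α points per axis, instead of an FFT of the α-fold zero-padded image, matching the complexity argument above.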

3. SIMULATIONS

To test the subpixel precision estimation of the illumination pattern, raw data images for two different samples (Siemens star and pirate; see Fig. 3) have been simulated, similar to what was done by Wicker et al. [7,8]. In the first step, the samples (pixel size of 65 nm) have been scaled such that their brightest pixel would have a photon count of 5×10¹, 5×10², 5×10³, and 5×10⁴ (four data sets) and the darkest a photon count of zero, in order to add Poisson-distributed noise in the last step. No further offset or noise was added. The illumination patterns of 200 nm fringe spacing are generated based on 20 randomly distributed orientation angles γ. For each orientation, a set of three phases (0°, 120°, and 240°, with a random variation of a Gaussian distribution at a standard deviation of 10° for each phase step) is generated. To maintain the maximum possible photon count in each raw data image, the sinusoidal illumination patterns are scaled such that they only vary in a range from zero to one. This way 60 raw data images are simulated per set. Each image is convolved with a point spread function, simulated using a 2D distribution based on the Bessel function of the first kind and first order [14], given a numerical aperture of the imaging objective of NA = 1.4. The emission wavelength was set to λ_em = 515 nm. Finally, noise is simulated based on the Poisson distribution. For each raw data image, the parameter estimation was performed as described previously for upsampling factors of α = 1–10. First the peaks in the Fourier domain that come from the sinusoidal illumination pattern in real space were localized applying Eq. (13). From those peak positions, the orientation and fringe period of the illumination pattern in each raw data image are calculated using Eqs. (1) and (2).


Fig. 3. Samples (a) pirate and (b) Siemens star as they have been used in the simulations. They are represented at a size of 256 × 256 px.


The orientation γ as calculated from the peak location in Fourier space deviates from the actual orientation γ̂ by Δγ = |γ̂ − γ|. Although only 20 different values for γ were used in the simulations, the deviation Δγ was calculated for each individual raw data image because the pattern phase and noise may change the result of the peak localization. Thus, each raw data image yields a value for γ and Δγ. In Fig. 4(a), the mean (solid line) and standard deviation (shaded area) of the 60 values for Δγ in a data set are presented for parameter estimation based on peak localization at different oversampling factors α and for the two different samples, for data sets simulated with a maximum photon count of 50. This result does not show significant variations for the other selected photon counts and is thus exemplary.


Fig. 4. (a) Deviation of the pattern orientation and (b) fringe spacing calculated from the detected peak position from the actual value for oversampling factors of 1–10. The calculations are performed for a maximum expected photon count of 5×10¹ in the brightest pixel. The solid line represents the mean deviation for all 60 samples, and the shaded area shows the standard deviation. This is done for two different samples [Siemens star (red) and pirate (blue)].


Similar to the orientation, the fringe spacing L as calculated per Eq. (2) deviates from the initially set 200 nm. This deviation is calculated as ΔL = |L − 200 nm| / 200 nm to free the measure from scaling. The evaluation in Fig. 4(b) shows the mean (solid line) and standard deviation (shaded area) of the 60 values for ΔL in a data set. This evaluation is presented for parameter estimation based on peak localization at different oversampling factors α and for two different samples. The underlying data sets are simulated using a maximum photon count of 50. As for the orientation deviation, this result is exemplary for all selected photon counts. The pattern phase for each raw data image is calculated with Eq. (12) and the determined value for p. The values for the phases are evaluated following [7,8]. For each orientation γ, the deviation of the three phases was calculated as δϕ = ϕ − ϕ̂. The relative error E_γ^r for one orientation is then calculated as the standard deviation of all three δϕ_γ in a subset. As described in Refs. [7,8], this way a global phase offset is rejected. The mean (solid line) and standard deviation (shaded area) of E_γ^r are shown in Fig. 5 as a function of α.


Fig. 5. Relative phase error at oversampling of 1–10 for four different photon levels (a) 5×10¹, (b) 5×10², (c) 5×10³, and (d) 5×10⁴, and two different samples [Siemens star (red) and pirate (blue)]. The solid lines represent the mean value and the shaded areas the standard deviation of the phase error.


A second evaluation of the estimated phases is shown in Fig. 6. Instead of the relative error E_γ^r, the absolute error E_γ^a is presented. It is calculated as the mean of the absolute values of all three δϕ_γ in a subset. This way a global phase offset is not rejected in the evaluation.


Fig. 6. Absolute phase error at oversampling of 1–10 for four different photon levels (a) 5×10¹, (b) 5×10², (c) 5×10³, and (d) 5×10⁴, and two different samples [Siemens star (red) and pirate (blue)]. The solid lines represent the mean value and the shaded areas the standard deviation of the phase error.


Finally, Fig. 7 shows the influence of the proposed parameter estimation at subpixel precision. A raw data set was simulated as described earlier with three pattern angles of 11°, 71°, and 131° (similar to Shroff et al. [9]) and three phases each at a maximum photon count of 5×10⁴. The data sets used to generate actual image reconstructions only contain nine raw images. Although the pattern angles are selected such that an isotropic resolution improvement can be achieved, the selected phases are still displaced from an equidistant distribution as described previously. In addition to the SIM raw data, a wide-field image under plain illumination was simulated, and the result of a Wiener deconvolution is shown in Fig. 7(a). The SIM reconstruction based on the actual input parameters yields the optimal result in Fig. 7(b) with the expected resolution enhancement compared to Fig. 7(a). Figures 7(c) and 7(d) show the reconstruction results based on parameter estimation for oversampling factors of 1 and 10. Figures 7(e) and 7(f) show the deviations of the results in Figs. 7(c) and 7(d) from Fig. 7(b), respectively, i.e., the absolute value of the pixelwise difference. It can be seen that a reconstruction based on subpixel precision parameter estimation at an oversampling factor of 10 yields a result whose deviation from the optimal reconstruction is reduced by almost 1 order of magnitude around sharp edges in the image. For the presented data, an oversampling larger than 10 does not improve the result of the reconstruction in terms of the shown deviation. The same evaluation has been done for the Siemens star as presented in Fig. 8. Here the absolute deviations of the reconstructions for one- and tenfold oversampling are shown in Figs. 8(a) and 8(b), similar to Figs. 7(e) and 7(f).


Fig. 7. Reconstruction of simulated SIM data. For a raw data set of nine images (three orientations with three phases each), parameter estimation and reconstruction were performed for oversampling factors of 1 and 10. Panel (a) shows the result of a conventional wide-field deconvolution under plain illumination. The SIM reconstruction with known pattern parameters in (b) shows the expected resolution enhancement. In (c) and (d), the image reconstruction based on the proposed parameter estimation for oversampling of 1 and 10, respectively, is shown. In (e) and (f), the absolute deviation of the reconstruction as shown in (c) and (d) from the reconstruction with known pattern parameters is presented.



Fig. 8. Similar to Figs. 7(e) and 7(f), the absolute deviation of the reconstruction from the reconstruction with known pattern parameters is presented for oversampling factors of (a) 1 and (b) 10.


4. RESULTS AND DISCUSSION

The results show that the proposed subpixel precision localization of a maximum in the Fourier domain is suitable to determine the wave vector of a sinusoidal pattern imposed on a structure in real space (Fig. 4). The method achieves the same improvement in precision as a method based on zero padding, but at a fraction of the computational cost (time and memory), especially at increased oversampling factors. The benefit of this approach can be directly shown when performing phase estimation on 2D SIM data using Eq. (12).

In the introduction of the noniterative phase estimation [8], its feasibility was demonstrated in comparison with an iterative approach [7] that determines the relative phases in a data set, and a reference to [2] and [15] is given for the estimation of the global phase offset needed to get the absolute phases. Figure 5 shows that the quality of the phase estimation does not change significantly when increasing the precision of the wave vector estimation in the tested range, and the results, at least for the pirate sample, correspond well to the published results in Figs. 2(a) and 2(b) of Ref. [8] for the single-step approach at maximum expected photon numbers of 5×10¹, 5×10², 5×10³, and 5×10⁴. Disregarding a global phase offset, localizing the pattern wave vector to conventional pixel precision is sufficient, as the error drops below 1° for realistic SNRs.

However, the absolute phase error as shown in Fig. 6 suggests that if Eq. (12) is to be applied for phase estimation, the presented method is of major importance. The high-precision calculation of the wave vector based on subpixel peak localization enables phase estimation without an additional step to find a global phase offset. A difference in the behavior of the two samples is visible. The Siemens star sample enables good phase detection even at lower oversampling; phase estimation performs well except for the lowest simulated SNR. The more general sample pirate shows significant improvement with increased oversampling.

Applying subpixel precision peak localization in the Fourier domain to determine the orientation and fringe spacing of sinusoidal patterns in real space has been presented, and its feasibility demonstrated on simulated data. In particular, applying the results within an established method for phase estimation shows that this is a viable route to parameter estimation in sinusoidal structured illumination microscopy for comparably fine (i.e., 200 nm) illumination patterns.

The benefits are twofold because the whole workflow of determining the necessary parameters in SIM reconstruction does not require prior knowledge of the phase distribution or the pattern wave vector, nor does it depend on iterative processes. Furthermore, it was demonstrated that the deviation from the optimal reconstruction can be reduced by about 1 order of magnitude, thus producing a more accurate result using the subpixel peak detection method. In addition, the savings in computational time for SIM reconstruction provided by the proposed subpixel method will be beneficial for real-time reconstruction of SIM images if the pattern parameters need to be estimated prior to reconstruction.

Funding

H2020 European Research Council (ERC) (336716); Norwegian Centre for International Cooperation in Education, SIU Norway (INCP-2014/10024); University Grants Commission (UGC), India.

Acknowledgment

The authors would like to thank Krishna Agarwal for scientific review of the manuscript. BSA acknowledges ERC and the Norwegian Centre for International Cooperation in Education, SIU-Norway. KK acknowledges University Grants Commission (UGC), India, funding.

REFERENCES

1. E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Archiv für mikroskopische Anatomie 9, 413–418 (1873). [CrossRef]  

2. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000). [CrossRef]  

3. R. Heintzmann and C. G. Cremer, “Laterally modulated excitation microscopy: Improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). [CrossRef]  

4. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008). [CrossRef]  

5. P. Křížek, T. Lukeš, M. Ovesný, K. Fliegel, and G. M. Hagen, “SIMToolbox: A MATLAB toolbox for structured illumination fluorescence microscopy,” Bioinformatics 32, 318–320 (2015).

6. M. Müller, V. Mönkemöller, S. Hennig, W. Hübner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7, 10980 (2016). [CrossRef]

7. K. Wicker, O. Mandula, G. Best, R. Fiolka, and R. Heintzmann, “Phase optimisation for structured illumination microscopy,” Opt. Express 21, 2032–2049 (2013). [CrossRef]  

8. K. Wicker, “Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space,” Opt. Express 21, 24692–24701 (2013). [CrossRef]  

9. S. A. Shroff, J. R. Fienup, and D. R. Williams, “Phase-shift estimation in sinusoidally illuminated images for lateral superresolution,” J. Opt. Soc. Am. A 26, 413–424 (2009). [CrossRef]  

10. S. A. Shroff, J. R. Fienup, and D. R. Williams, “Lateral superresolution using a posteriori phase shift estimation for a moving object: Experimental results,” J. Opt. Soc. Am. A 27, 1770–1782 (2010). [CrossRef]  

11. R. Soummer, L. Pueyo, A. Sivaramakrishnan, and R. J. Vanderbei, “Fast computation of Lyot-style coronagraph propagation,” Astrophys. J. Suppl. Ser. 167, 81–99 (2006). [CrossRef]

12. M. Singh and K. Khare, “Accurate efficient carrier estimation for single-shot digital holographic imaging,” Opt. Lett. 41, 4871–4874 (2016). [CrossRef]  

13. L. Condat, J. Boulanger, N. Pustelnik, S. Sahnoun, and L. Sengmanivong, “A 2-D spectral analysis method to estimate the modulation parameters in structured illumination microscopy,” in IEEE 11th International Symposium on Biomedical Imaging (ISBI), 2014, vol. 11, pp. 604–607.

14. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6, 312–315 (2012). [CrossRef]  

15. J. T. Frohn, “Super-resolution fluorescence microscopy by structured light illumination” Ph.D. dissertation (Swiss Federal Institute of Technology, 2000).



Figures (8)

Fig. 1. Illustration of the subpixel peak localization in Fourier space. Panel (a) shows a raw data image of the sample pirate under stripe pattern illumination as described in the text. The edges of the image have been dampened to avoid discontinuities. In (b), the Fourier spectrum of (a) calculated by the FFT routine is shown. The pattern wave vector $\mathbf{p}$ is indicated in green. Estimating $\mathbf{p}$ based on the integer pixel location of the maximum in the Fourier spectrum will lead to a peak location mismatch, as shown in the cropped Fourier spectrum (c). The red cross is the location of the maximum, the green cross the actual peak location based on the numerical values of the fringe period and orientation used. Panel (d) shows the result of the proposed selective Fourier transform of (a). The center has been selected to be the location of the maximum found in (c). The upsampling factor is $\alpha = 10$, and the size of the area is the same as in (c). Now the maximum (red) is much closer to the actual peak position (green).
Fig. 2. Spectral overlap. The separated components $\tilde{C}_0$ and $\tilde{C}_1$ in Fourier space $(k_x, k_y)$. If $\tilde{C}_1$ is shifted by $\mathbf{p}$ (red arrow) to its correct position in Fourier space, both bands have a region of overlap in $\mathbf{k}$ in which they differ only by a complex factor [see Eq. (10)].
Fig. 3. Samples (a) pirate and (b) Siemens star as used in the simulations, each represented at a size of $256 \times 256$ px.
Fig. 4. Deviation of (a) the pattern orientation and (b) the fringe spacing, calculated from the detected peak position, from the actual values for oversampling factors of 1–10. The calculations are performed for a maximum expected photon count of $5 \times 10^{1}$ in the brightest pixel. The solid line represents the mean deviation for all 60 samples, and the shaded area shows the standard deviation. This is done for two different samples [Siemens star (red) and pirate (blue)].
Fig. 5. Relative phase error at oversampling of 1–10 for four different photon levels, (a) $5 \times 10^{1}$, (b) $5 \times 10^{2}$, (c) $5 \times 10^{3}$, and (d) $5 \times 10^{4}$, and two different samples [Siemens star (red) and pirate (blue)]. The solid lines represent the mean value and the shaded areas the standard deviation of the phase error.
Fig. 6. Absolute phase error at oversampling of 1–10 for four different photon levels, (a) $5 \times 10^{1}$, (b) $5 \times 10^{2}$, (c) $5 \times 10^{3}$, and (d) $5 \times 10^{4}$, and two different samples [Siemens star (red) and pirate (blue)]. The solid lines represent the mean value and the shaded areas the standard deviation of the phase error.
Fig. 7. Reconstruction of simulated SIM data. For a raw data set of nine images (three orientations with three phases each), parameter estimation and reconstruction were performed for oversampling factors of 1 and 10. Panel (a) shows the result of a conventional wide-field deconvolution under plain illumination. The SIM reconstruction with known pattern parameters in (b) shows the expected resolution enhancement. In (c) and (d), the image reconstruction based on the proposed parameter estimation for oversampling of 1 and 10, respectively, is shown. In (e) and (f), the absolute deviation of the reconstructions in (c) and (d) from the reconstruction with known pattern parameters is presented.
Fig. 8. Similar to Figs. 7(e) and 7(f), the absolute deviation of the reconstruction from the reconstruction with known pattern parameters is presented for oversampling factors of (a) 1 and (b) 10.

Equations (13)


$\gamma = \arctan(p_y/p_x),$  (1)
$L = (p_x^2 + p_y^2)^{-1/2},$  (2)
$D(\mathbf{r}) = [S(\mathbf{r})\, I(\mathbf{r})] \otimes h(\mathbf{r}),$  (3)
$I(\mathbf{r}) = \sum_{m=-1}^{1} a_m \exp[i m (2\pi \mathbf{p}\cdot\mathbf{r} + \phi)]$  (4)
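For real coefficients with $a_{-1} = a_{1}$, the three-order sum above reduces to $I(\mathbf{r}) = a_0 + 2a_1\cos(2\pi\mathbf{p}\cdot\mathbf{r} + \phi)$. A minimal numpy sketch of such a pattern (the function name and coefficient values are illustrative choices, not taken from the paper):

```python
import numpy as np

def stripe_pattern(shape, p, phi, a0=0.5, a1=0.25):
    """Sinusoidal SIM illumination I(r) = sum_m a_m exp[i m (2*pi*p.r + phi)]
    for m = -1, 0, 1 with real a_{-1} = a_1, i.e.
    I(r) = a0 + 2*a1*cos(2*pi*p.r + phi)."""
    y, x = np.indices(shape)
    arg = 2 * np.pi * (p[0] * x + p[1] * y) + phi
    return a0 + 2 * a1 * np.cos(arg)

I = stripe_pattern((256, 256), p=(0.15, 0.05), phi=0.0)
# with a0 = 2*a1 the pattern stays non-negative, as an intensity must
```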
$\tilde{D}(\mathbf{k}) = \sum_{m=-1}^{1} \exp(i m \phi)\, \underbrace{a_m \tilde{S}(\mathbf{k} - m\mathbf{p})\, \tilde{h}(\mathbf{k})}_{\tilde{C}_m(\mathbf{k})}.$  (5)
$\tilde{\mathbf{D}}(\mathbf{k}) = M\, \tilde{\mathbf{C}}(\mathbf{k}),$  (6)
$\tilde{\mathbf{C}}(\mathbf{k}) = M^{-1}\, \tilde{\mathbf{D}}(\mathbf{k}).$  (7)
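The separation step in the two equations above can be sketched with numpy for the common case of three raw frames with equidistant pattern phases (the phase values and array sizes here are illustrative assumptions):

```python
import numpy as np

# For three raw frames D_n recorded with pattern phases phi_n, the mixing
# matrix has entries M[n, m] = exp(i*m*phi_n) for orders m = -1, 0, 1,
# and the separated bands follow from C = M^{-1} D.
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])   # equidistant phases
orders = np.array([-1, 0, 1])
M = np.exp(1j * np.outer(phases, orders))                # shape (3, 3)

# Placeholder spectra standing in for the Fourier transforms of the frames:
D = np.random.rand(3, 64, 64) + 1j * np.random.rand(3, 64, 64)
C = np.tensordot(np.linalg.inv(M), D, axes=1)            # C_{-1}, C_0, C_1

# Consistency check: re-mixing the separated bands reproduces the spectra.
assert np.allclose(np.tensordot(M, C, axes=1), D)
```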
$\hat{\tilde{S}}(\mathbf{k}) = \frac{\sum_{m,d} a_m \tilde{C}_{m,d}(\mathbf{k} + m\mathbf{p}_d)\, \tilde{h}^{*}(\mathbf{k} + m\mathbf{p}_d)}{\sum_{m,d} |a_m \tilde{h}(\mathbf{k} + m\mathbf{p}_d)|^{2} + w}\, A(\mathbf{k}).$  (8)
$[\tilde{C}_0 \star \tilde{C}_m](\mathbf{p}) = \sum_{\mathbf{k}} \tilde{C}_0^{*}(\mathbf{k})\, \tilde{C}_m(\mathbf{k} + m\mathbf{p}),$  (9)
$a_m = \frac{\sum_{\mathbf{k}} \tilde{C}_0^{*}(\mathbf{k})\, \tilde{C}_m(\mathbf{k} + m\mathbf{p})}{\sum_{\mathbf{k}} |\tilde{C}_0(\mathbf{k})|^{2}}.$  (10)
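As a toy check of the modulation-depth formula above, one can build a band $\tilde{C}_1$ that is a shifted, scaled copy of $\tilde{C}_0$ and recover the complex factor (the array contents and the integer-pixel shift are illustrative simplifications, not the paper's data):

```python
import numpy as np

# Toy data: C1 is a shifted, scaled copy of C0, emulating the overlap of
# the zeroth and first band; an integer-pixel shift stands in for m*p.
rng = np.random.default_rng(0)
C0 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
a_true = 0.4 * np.exp(1j * 0.3)          # complex modulation factor
shift = (5, 3)
C1 = a_true * np.roll(C0, shift, axis=(0, 1))

# a_m = sum_k C0*(k) C_m(k + m*p) / sum_k |C0(k)|^2
C1_back = np.roll(C1, (-shift[0], -shift[1]), axis=(0, 1))
a_est = np.sum(np.conj(C0) * C1_back) / np.sum(np.abs(C0) ** 2)
# a_est recovers a_true up to floating-point rounding
```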
$\phi_n = \arg[\tilde{D}_n(\mathbf{p})],$  (11)
$\phi_n = \arg[(\tilde{D}_n \star \tilde{D}_n)(\mathbf{p})],$  (12)
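The first of the two phase estimators above reads the phase off the spectrum at the pattern frequency. A toy numpy version, with a hypothetical helper phase_at that evaluates the DFT at an arbitrary (possibly non-integer) frequency:

```python
import numpy as np

def phase_at(D, p):
    """Phase of the spectrum of image D at frequency p (cycles per pixel),
    evaluated by an explicit Fourier sum so p need not be an integer bin."""
    y, x = np.indices(D.shape)
    peak = np.sum(D * np.exp(-2j * np.pi * (p[0] * x + p[1] * y)))
    return np.angle(peak)

ny = nx = 128
yy, xx = np.indices((ny, nx))
p, phi = (10 / nx, 4 / ny), 0.7
D = 1 + np.cos(2 * np.pi * (p[0] * xx + p[1] * yy) + phi)
# phase_at(D, p) returns phi (= 0.7) for this noise-free pattern
```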
$\tilde{D}(\hat{u}, \hat{v}) = \exp\!\left(-\frac{2\pi i}{N_y}\,\hat{v}\,y\right) D(x, y) \exp\!\left(-\frac{2\pi i}{N_x}\,x\,\hat{u}\right),$  (13)
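The matrix form above evaluates the DFT only on a small, upsampled frequency window around a chosen center. A sketch of the resulting subpixel peak localization (function name, window size, and the test pattern are illustrative assumptions, not the authors' code):

```python
import numpy as np

def selective_dft(D, center, alpha=10, size=16):
    """Evaluate the DFT of D on a size x size frequency window around
    `center` (row, col in FFT pixel units), upsampled by `alpha`,
    via two matrix products (the selective DFT of Eq. (13))."""
    ny, nx = D.shape
    v = center[0] + (np.arange(size) - size // 2) / alpha    # row frequencies
    u = center[1] + (np.arange(size) - size // 2) / alpha    # column frequencies
    Ey = np.exp(-2j * np.pi * np.outer(v, np.arange(ny)) / ny)
    Ex = np.exp(-2j * np.pi * np.outer(np.arange(nx), u) / nx)
    return Ey @ D @ Ex

# Toy pattern with a non-integer spectral peak at (4.6, 10.3) FFT pixels:
ny = nx = 128
yy, xx = np.indices((ny, nx))
D = 1 + np.cos(2 * np.pi * (10.3 * xx / nx + 4.6 * yy / ny))

F = np.abs(np.fft.fft2(D))[: ny // 2, : nx // 2]   # one quadrant suffices
F[0, 0] = 0                                        # suppress the DC peak
cy, cx = np.unravel_index(np.argmax(F), F.shape)   # integer-pixel estimate

W = np.abs(selective_dft(D, (cy, cx), alpha=10, size=21))
iy, ix = np.unravel_index(np.argmax(W), W.shape)
py = cy + (iy - 21 // 2) / 10                      # refined to 0.1 px
px = cx + (ix - 21 // 2) / 10
```

The two matrix products cost far less than a zero-padded full-frame FFT when only a small neighborhood of the coarse maximum is needed.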