Optica Publishing Group

Resolution enhancement in laser scanning microscopy with deconvolution switching laser modes (D-SLAM)

Open Access

Abstract

Laser scanning microscopy is limited in lateral resolution by the diffraction of light. Superresolution methods have been developed since the 1990s to overcome this limitation. However, superresolution is generally achieved at the expense of greater complexity (high power lasers, very long acquisition times, specific fluorophores) and of limitations on the observable samples. In this paper we propose a method to improve the resolution of confocal microscopy by combining different laser modes and deconvolution. Two images of the same field are acquired with the confocal microscope using different laser modes and used as inputs to a deconvolution algorithm. The two laser modes have different Point Spread Functions and thus provide complementary information, leading to an image with enhanced resolution compared to using a single confocal image as input to the same deconvolution algorithm. By changing the laser modes to Bessel-Gauss beams we were able to further improve the efficiency of the deconvolution algorithm and obtain images with a residual Point Spread Function having a width of 0.14 λ (72 nm at a wavelength of 532 nm). This method only requires a laser scanning microscope and does not depend on specific properties of fluorescent proteins. The proposed method requires only a few add-ons to classical confocal or two-photon microscopes and can easily be retrofitted into an existing commercial laser scanning microscope.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In laser scanning microscopy (such as confocal microscopy or two-photon excited fluorescence microscopy) resolution is limited by the diffraction of light. Here we consider the resolution to be defined according to the Rayleigh criterion, namely the minimum distance at which two point-like objects can be distinguished. This distance is defined as d_Rayleigh = 1.22λ / (2NA), where λ is the wavelength and NA the numerical aperture of the objective. This limit restricts the maximal resolution of the microscope to around half the wavelength of the light source. To observe biological samples with fluorescent probes, the wavelengths used are mainly in the visible or near infrared regions of the electromagnetic spectrum (roughly 400-800 nm, extending up to 1200 nm). Given these wavelengths the best achievable resolution is around 200 nm. Optical microscopy is often used to observe biological tissues because it is compatible with live cell imaging and is non-destructive. Moreover, high performance lasers (narrow spectral band, low divergence) and new fluorescent probes make it easier to observe numerous specific biological targets with high contrast.
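As a quick numerical check, the Rayleigh distance above can be evaluated directly; the NA value used here is illustrative (it matches the water immersion objective used in the experimental section of this paper):

```python
# Rayleigh criterion: d = 1.22 * lambda / (2 * NA)
def rayleigh_resolution(wavelength_nm, numerical_aperture):
    return 1.22 * wavelength_nm / (2 * numerical_aperture)

# Example: 532 nm excitation with an NA 1.2 objective gives ~270 nm,
# consistent with the ~200 nm best case quoted for high-NA objectives.
d = rayleigh_resolution(532, 1.2)
```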

Yet the diffraction limit restricts the minimal size of observable structures with a confocal microscope. Indeed synaptic vesicles (40-60 nm), microfilaments (≈6 nm), microtubules (≈25 nm) and several proteins (e.g., membrane receptors such as ion channels ≈ 10 nm) are much smaller than the diffraction limited resolution of conventional optical microscopes. It is then necessary to implement super-resolution techniques to observe such structures with optical microscopes. Several methods of super-resolution have already been developed: STimulated Emission Depletion microscopy (STED [1]), Photo Activated Localization Microscopy (PALM [2]), STochastic Optical Reconstruction Microscopy (STORM [3]), Structured Illumination Microscopy (SIM [4]), Image Scanning Microscopy (ISM [5,6]), Switching LAser Modes microscopy (SLAM [7]), to name just a few, each having its own advantages and drawbacks.

In this document we briefly present one of them, SLAM, and focus on an improved version of it using deconvolution. We first present the hypothesis behind this improvement with simulations and then provide experimental results obtained with different types of samples and laser beams, obtaining a residual full width at half maximum (FWHM) of 0.14λ ≈ 74 nm at λ = 532 nm.

2. Enhanced resolution with SLAM microscopy (switching laser modes)

SLAM microscopy was introduced in [7] as a simple and effective method that can improve the resolution of laser scanning microscopy by a factor of ≈ 2 (it is also referred to as Fluorescence Emission Difference microscopy (FED) in [8] or Intensity Weighted Subtraction in [9]). It has also been used in CARS microscopy [10], showing the flexibility of the method, which can be used even in modalities that do not rely on fluorescent probes. In SLAM two images with different laser modes are needed: a bright beam such as a classical Gaussian beam (TEM00) and a dark donut beam such as the transverse electric TE01 mode (which is azimuthally polarized). Typical intensity distributions of bright and dark beams are presented in Fig. 1.


Fig. 1 Intensity distribution of the two laser modes. (a) The TEM00 vertically polarized Gaussian beam. (b) The TE01 azimuthally polarized Gaussian beam (donut mode). Pixel size: 10 nm. Scale bar: 100 nm.


The SLAM image is obtained by subtracting the image taken with the donut beam from the classical confocal image taken with the bright beam, with a weighting factor g that allows some adjustment of the processing. Most of the time the value of g lies between 0.5 and 1. The SLAM image subtraction is represented by the following relationship:

Img_SLAM = Img_bright − g · Img_donut

Due to the noise of confocal microscopy the images are filtered with a low-pass Gaussian filter before subtraction. The SLAM method is based on laser scanning microscopy; hence it can be added to a commercial system with only minor modifications involving the addition of a module switching the laser beam from the bright mode to a dark mode.
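The processing described above can be sketched in a few lines; the filter width sigma and the value of g below are illustrative assumptions, not the values used in the original implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def slam_image(img_bright, img_donut, g=0.7, sigma=1.0):
    """Sketch of the SLAM subtraction: low-pass filter both images,
    subtract the donut image weighted by g, clip negatives to zero."""
    bright = gaussian_filter(img_bright.astype(float), sigma)
    donut = gaussian_filter(img_donut.astype(float), sigma)
    return np.clip(bright - g * donut, 0.0, None)
```

Clipping to zero mirrors the convention mentioned in the next paragraph: negative values produced by the subtraction have no physical meaning.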

The major drawback of SLAM comes from its processing of the acquired images, namely the subtraction. This processing can add artefacts to the image, and information can also be lost. The subtraction also creates negative values that have no physical meaning and are set to zero for convenience. The weighting factor g is used to adjust the strength of the subtraction and minimize artefacts. However this factor is global and might not be suitable for some specific areas, thus creating inhomogeneities in the resulting image. A pixel-adjusted contrast factor has been proposed in [9] but the overall resolution gain is then lower. Due to these problems we herein propose to use deconvolution to process the images.

In the following sections we will show that using two images with different PSFs (Point Spread Functions) for the deconvolution leads to improved results compared to just using the deconvolution with the classical confocal image obtained with a scanning Gaussian beam. We will then present a deconvolution technique that allows the processing of two images obtained from the laser modes used in SLAM to display the improvement in resolution.

3. Deconvolution method: theoretical background

Deconvolution is an image processing method aiming to recover images degraded by imperfections or defects of the acquisition system. Deconvolution is a vast domain where methods vary from simple filtering to complex iterative algorithms. The idea here is to use deconvolution combined with beam shaping (the use of several laser modes) to overcome the diffraction limit.

Deconvolution is widely used as a post-processing technique to compensate for aberrations added during image acquisition. In general these aberrations consist of various types of blur due to defocus, motion, etc., but can also correspond to defects of the instrument (spherical or coma aberrations). From the deconvolution point of view the diffraction limit of optical microscopy is a blur degradation with a PSF that is the diffraction limited focal spot of the scanning laser beam.

Deconvolution has long been used in astronomy to correct the blur introduced by atmospheric perturbations [11–14]. Deconvolution is also widely used in widefield microscopy to correct out-of-focus blur [15] or in laser scanning microscopy to improve resolution limited by diffraction and aberrations [14,16].

Deconvolution is a mathematical process arising from a more general domain: inverse problems. The main principle of all the methods derived from the inverse problem theory is to find some unknown signal that is the source of some degraded data signal. In deconvolution the original object is extracted given an acquisition that is the convolution of this object with the PSF of the observing instrument.

In deconvolution theory, the starting point is the following equation for image formation:

y(k)=h(k)*x(k)+n(k),
where x is the intensity distribution of the object, y is the observed intensity (acquired image), h is the PSF of the instrument, n the noise, k the pixel position and * denotes the convolution product. This equation shows that the image taken with an instrument is the convolution product of the true image and of the PSF of the instrument. The deconvolution method has the objective of reversing this convolution product and recovering the true image.
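The image formation equation above can be simulated numerically; this hypothetical snippet assumes FFT-based circular convolution and, for simplicity, a purely Gaussian noise term n:

```python
import numpy as np

def form_image(x, h, gaussian_sigma=0.05, rng=None):
    """Simulate y = h * x + n: blur the object x with the PSF h
    (FFT-based circular convolution) and add Gaussian noise."""
    rng = np.random.default_rng(rng)
    h = h / h.sum()  # normalize the PSF to unit energy
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(h))))
    return y + rng.normal(0.0, gaussian_sigma, x.shape)
```

The `ifftshift` moves the PSF center to the array origin so that the convolution does not translate the image.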

3.1. Impact of the PSF on the deconvolution process

Our objective is to use different laser modes, with a different spatial frequency content, to enhance the deconvolution process. A PSF with a higher frequency content will provide the deconvolution with more information on how to recover the higher frequencies in the image. Moreover using two different PSFs coming from two laser modes that have complementary frequency content will further facilitate the deconvolution process. The two laser modes that we will use here are the two used for the SLAM method.

In this section we will show, using a simple deconvolution filter, that an MTF (Modulation Transfer Function) with higher spatial frequency content, equivalently laser modes with finer structures, helps recover the higher spatial frequency components of the image (the MTF is the modulus of the Fourier transform of the PSF). A donut laser mode has a larger content of high spatial frequencies than the classical Gaussian beam but has a deficient content of intermediate spatial frequencies, see Fig. 2. This is why we propose to keep the two images for the deconvolution. In this manuscript the term frequency will be used as shorthand for spatial frequency for easier reading.


Fig. 2 (a) MTF of the TEM00 beam. (b) MTF of the TE01 beam. (c) Both TEM00 and TE01 MTFs are displayed on the same image; the TEM00 MTF is colored in red and the TE01 MTF in green. The yellow color indicates that the two MTFs overlap. The center of each image corresponds to the zero frequency and frequency increases radially. λ = 532 nm. The image brightness was adjusted for better visualisation. (d) and (e) are the horizontal and vertical profiles of the MTFs displayed in (c).


Figure 2 shows the frequency component difference between the MTF of the vertically polarized Gaussian TEM00 beam and the MTF of the azimuthally polarized TE01 beam. Panels (a) and (b) display the TEM00 and TE01 MTFs while panel (c) shows an overlap of the two for a better visualisation. Panels (d) and (e) display horizontal and vertical profile traces for a better comparison. We can see from this figure that the MTF of the TE01 beam has a higher frequency content; since high frequencies provide the details of the image, the reconstruction from this image will display finer structures. Having an MTF with a larger content of high frequencies can help the deconvolution algorithm recover high frequencies in the image that would normally be dominated by noise. We can see however that the TE01 MTF has a gap in its intermediate frequency range, which is why we want to keep the two images taken with the two PSFs; some intermediate/low frequency components are only present in the TEM00 MTF.

Figure 3 shows the two images that we will use to demonstrate the basics of the method. The two PSFs used for the simulation have been shown in Fig. 1. For the simulation we placed four series of four point-like objects arranged in a rectangular pattern on an image. The vertical spacing between points is 20 pixels (200 nm) in the first row and 30 pixels (300 nm) in the second row. The horizontal spacing between points is 20 pixels (200 nm) in the first column and 30 pixels (300 nm) in the second column. This image is then convolved with each of the corresponding PSFs and a composite of Gaussian and Poisson noise is added.


Fig. 3 Simulated confocal images obtained by the convolution of samples of point-like objects (1 pixel in size) with the corresponding PSFs. The objects are described in the text. (a) Image obtained with the Gaussian vertically polarized TEM00 PSF. (b) Image obtained with the TE01 azimuthally polarized PSF. Pixel size: 10 nm. Scale bar: 500 nm.


We now want to test the effect of the MTF frequency content using a simple deconvolution technique, namely the Wiener filter [17]. The Wiener filter f_Wiener is defined as follows,

f_Wiener = argmin_f E{‖x_Wiener − x_True‖²} = argmin_f E{‖f * y − x_True‖²},

with E{} the mathematical expectation, * the convolution product, x_True representing the true perfect image and x_Wiener the Wiener filtered image (x_Wiener = f * y). The term ‖x_Wiener − x_True‖² represents the quadratic error that this filter is designed to minimize. The superscript hat (^) is used to represent the Fourier transform of the term it is placed on (f̂ is the Fourier transform of f). The notation

argmin_f φ(f)

is a standard notation that refers to the value of the element f that minimizes the expression φ(f).

The Wiener filter (Eq. (3)) is a linear filter designed to minimize the quadratic error of the reconstruction. The filtered image is obtained by multiplying the Wiener filter, expressed in Fourier space, by the Fourier transform of the acquired image (ŷ_u): x̂_u^Wiener = f̂_u^Wiener · ŷ_u, and then applying an inverse Fourier transform to the resulting product x̂_u^Wiener.

In that expression f is an intermediate notation for the Wiener filter f_Wiener. For an uncorrelated centered noise, and considering that natural images have a spectrum following a power law (∝ u^−β), the filter expression in Fourier (frequency) space is:

f̂_u^Wiener = ĥ*_u / (|ĥ_u|² + α u^β);
h^ is the MTF of the corresponding acquisition (TEM00 or TE01), u the frequency components, α and β are parameters to be adjusted to obtain the best filtered image. β will be set to 2 to have a quadratic power law, which is the type of law most natural images will follow. α will still need to be hand tuned.
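A minimal sketch of this Wiener filter, assuming a normalized PSF and the u^β power-law term described above (α and β as hand-tuned parameters):

```python
import numpy as np

def wiener_deconvolve(y, h, alpha=1e-3, beta=2.0):
    """Wiener filtering in Fourier space: F = H* / (|H|^2 + alpha * u^beta),
    where u is the radial spatial frequency. alpha is hand-tuned."""
    H = np.fft.fft2(np.fft.ifftshift(h / h.sum()))
    # radial frequency |u| on the FFT grid (cycles per pixel)
    fy = np.fft.fftfreq(y.shape[0])[:, None]
    fx = np.fft.fftfreq(y.shape[1])[None, :]
    u = np.sqrt(fx**2 + fy**2)
    F = np.conj(H) / (np.abs(H)**2 + alpha * u**beta)
    return np.real(np.fft.ifft2(F * np.fft.fft2(y)))
```

At zero frequency the regularizing term vanishes and the filter reduces to the inverse of the (normalized) MTF, so the total flux is approximately preserved.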

Figures 4(a) and 4(b) show the result of the Wiener deconvolution on images acquired with the two PSFs: the TEM00 for image (a) and the TE01 for image (b). By comparing the images in Fig. 4(a) and Fig. 4(b) we conclude that the Wiener filter can produce a higher resolution image if the image is acquired with a TE01 mode than if it is acquired with a TEM00 mode. The rippling effect is due to well known artefacts of the Wiener filter, which are more visible in the image acquired with the TE01 mode because the MTF of the TE01 beam lacks some intermediate frequencies compared to that of the TEM00 beam. To reduce this effect we suggest using the two images and MTFs for the deconvolution. We will then have the high frequency content given by the PSF of the TE01 beam, while the PSF of the TEM00 beam will compensate in the medium frequency range. However this cannot simply be done with the Wiener filter; we will use a better adapted deconvolution algorithm in section 3.3 to demonstrate this effect.


Fig. 4 Results of the Wiener filtering deconvolution on the images taken with the TEM00 mode (a) and with the TE01 mode (b). In (c) one finds the profiles of the coloured traces on the images in (a) (trace in blue) and (b) (trace in red). λ = 532 nm. Scale bar: 500 nm.


Figure 5(a) shows in green the frequency components of the Wiener filtered image acquired with the TEM00 beam. Figure 5(a) highlights in red the frequency components that are only present in the Wiener filtered image acquired with the TE01 beam but are not present in the Wiener filtered image acquired with the TEM00 beam. These components have higher frequencies than those in green and thus contain more information with higher resolution. Figure 5(b) shows the total frequency content of the filtered images obtained after Wiener filtering, in red for the TE01 image and in green for the TEM00 image; when the two overlap perfectly the color will turn to yellow. We also see from the overlap in Fig. 5(b) a purely green band at intermediate frequencies. This demonstrates the lack of intermediate frequencies in the Wiener filtered image acquired with the TE01 mode and the need to use the second image to compensate this shortage.


Fig. 5 (a) In red are the frequency components present in the Wiener filtered image taken with the TE01 beam that are not present in the Wiener filtered image taken with the TEM00 beam, which is represented in green. (b) total frequency content of the Wiener filtered image taken with the TE01 mode in red and with the TEM00 mode in green. The yellow color indicates that the two MTFs overlap. The center of each image corresponds to the zero frequency and frequency increases radially. The contrast has been modified for a better visibility.


To conclude this section, we have demonstrated that by selecting the beam shape in a laser scanning microscope (by changing the laser mode), we can help the deconvolution algorithm recover spectral content at higher frequencies. Using a PSF with a richer high frequency content allows the deconvolution to recover higher frequency components of the image than when using a PSF with a weaker high frequency content. Moreover, depending on the laser mode being used, the images might lack some frequency content. To solve this issue we use two images taken with two different laser modes; this results in an image with better resolution than if we applied the same deconvolution to an image taken with a single classical beam (TEM00). The next section will further demonstrate this effect with a deconvolution algorithm that does not create as many ripple artefacts.

3.2. Maximum a posteriori expression (MAP)

We chose the MAP deconvolution framework because the generalization to multi-image and multi-PSF deconvolution is straightforward [15], and we will in the end use it for a two-image deconvolution scheme.

The method used here is derived from the maximum likelihood solution [14,18]. The maximum likelihood (ML) solution arises from maximizing the data likelihood. This solution needs to be modified to deal with the noise present in the signal: a regularization (a priori) term is added to the likelihood term. Adding the regularization term and considering all terms as probability densities changes the solution to a so-called maximum a posteriori solution. The probability densities are assumed to follow stationary Gaussian statistics. Here we present an application of this method to deconvolution. For a complete derivation of the method see [14,18,19].

In our study we consider the PSF to be invariant in our total field of view and we restrict the images to two-dimensional images (the confocal pinhole gives a very small depth of field).

Following this framework we find that the solution of the single-image deconvolution is the optimal image x that minimizes the following cost function:

ϕMAP(x)=ϕML(x)+ϕprio(x),
with
ϕML(x) = (Hx − y)^T W (Hx − y)
being the maximum likelihood term, H the PSF convolution operator, x the solution (deconvolved image) and y the observed image. T stands for the transpose operator and W is a weighting matrix that represents the inverse of the noise covariance matrix. This matrix is also referred to as the variance-covariance matrix: diagonal elements are variances and the i,j elements are covariances between positions i and j. For an uncorrelated Gaussian noise the matrix W becomes diagonal (only the variance is needed). The matrix W is also used to add other a priori information and constraints on the measured image; for example, a dead pixel on the detector is represented by an infinite noise variance and thus a zero value in the matrix W. In our case we will use it for edge handling, with the values of W dropping to 0 at the edges.

This method leaves some freedom regarding the choice of the regularization term ϕprio(x). This regularization term is here to introduce some “a priori” knowledge on the deconvolution. Usually “a priori” terms are designed to counter the noise amplification effect of the deconvolution and sometimes to enforce positivity on the resulting image (since images are acquired with photon counting detectors, negative values are thus not allowed). Here we present two of those regularization terms that deal with noise amplification.

Tikhonov regularization (quadratic regularization)

The Tikhonov regularization is a classical smoothing regularization that allows fast deconvolution and is written for two-dimensional images:

ϕprio(x) = µ‖Dx‖² = µ Σ_{i,j} [x_{i+1,j} − x_{i,j}]² + µ Σ_{i,j} [x_{i,j+1} − x_{i,j}]²,
where D stands for the finite difference operator, i and j are pixel indices and µ is the regularization parameter. This parameter gives a weight to the regularization and determines the strength of the smoothing used to counter noise amplification. If the value of µ is too high, smoothing prevents deconvolution and if too small, deconvolution amplifies the noise.

To find the unique minimum (for a given µ) of such quadratic function one only needs to solve:

∇ϕMAP(x) = 0,
(H^T·W·H + µ·D^T·D)·x = H^T·W·y.

Because of the weighting matrix W there is no analytical solution; an iterative minimization algorithm is needed to find the minimum. We chose to use the conjugate gradient algorithm (for more details see [20]). The conjugate gradient algorithm solves equations of the form A·x = b (where x is the unknown). In our case A = H^T·W·H + µ·D^T·D and b = H^T·W·y.
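A minimal sketch of this conjugate gradient solve, assuming W is the identity, periodic boundary conditions and FFT-based convolution (all simplifications relative to the method described above):

```python
import numpy as np

def tikhonov_cg(y, h, mu=1e-2, n_iter=50):
    """Solve (H^T H + mu * D^T D) x = H^T y with linear conjugate gradients.
    W is taken as the identity for simplicity."""
    H = np.fft.fft2(np.fft.ifftshift(h / h.sum()))
    conv = lambda F, img: np.real(np.fft.ifft2(F * np.fft.fft2(img)))

    def DtD(img):  # D^T D for forward differences = discrete Laplacian
        return (4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                - np.roll(img, 1, 1) - np.roll(img, -1, 1))

    A = lambda img: conv(np.conj(H), conv(H, img)) + mu * DtD(img)
    b = conv(np.conj(H), y)

    x = np.zeros_like(y); r = b - A(x); p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = A(p)
        a = rs / np.vdot(p, Ap).real
        x += a * p
        r -= a * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```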

Quadratic regularizations, like the Tikhonov regularization, are easily differentiable but are not very robust to noisy images and tend to excessively smooth images containing sharp transitions (similar to the Wiener filter).

Edge preserving regularization (non-quadratic regularization)

Edge preserving regularizations do a better job of preserving sharp transitions and are more robust to noisy images. One of the most widely used edge preserving regularizations is the L1-L2 (linear-quadratic) regularization. It behaves as a quadratic regularization for small gradients but linearly for large gradients. This specific behaviour allows it to correctly smooth the noise while keeping the sharp, high frequency information.

The regularization we chose to use here was introduced in [12] and [15] :

ϕprio(x) = λ0 Σ_r [ ‖∇x(r)‖/θ_r − ln(1 + ‖∇x(r)‖/θ_r) ],
with r the pixel position, ∇x(r) the spatial gradient of the object and θr the parameter that determines the linear-to-quadratic turning point of the regularization. The value of θr can be pixel specific; however we will use here a single value of θr over the whole image (as in [12]). λ0 is the regularization weighting parameter; it has the same role as µ in the Tikhonov regularization.

Because of the non-quadraticity of the regularization the conjugate gradient method cannot be used directly. We used a modified version of the conjugate gradient method designed for nonlinear problems to correctly minimize ϕMAP(x). Another modification of the algorithm is the addition of boundary constraints. To do so we used the method presented in [21], called “gradient projections” or “active set of parameters”, to enforce a positivity constraint on the resulting image in the nonlinear conjugate gradient algorithm (images in optical microscopy come from cameras or photon detectors and therefore cannot have negative values).

The stopping criterion for the nonlinear conjugate gradient method is based on the gradient of the cost function ϕMAP(x), which should tend to zero as the number of iterations increases [20]. In practice, due to rounding errors and time limitations, and since after a certain number of iterations the change in this gradient between successive iterations becomes negligible [20], we instead monitor the difference between the norms of the gradient of ϕMAP(x) at the current and previous iterations; when this difference falls below 10−7 the algorithm stops.

3.3. Multiple image-PSF deconvolution

We adapted deconvolution to a multiple-image algorithm, two in this case, by adding a likelihood term per image (as in [15]). Our method will use two images with two different PSFs to get the resulting deconvolved image. In two-image deconvolution the cost function to minimize is then:

ϕMAP(x)=ϕML1(x)+ϕML2(x)+ϕprio(x),
with
ϕML1(x) = (H1·x − y1)^T W (H1·x − y1),
ϕML2(x) = (H2·x − y2)^T W (H2·x − y2),
and where H1 and H2 are the two PSF convolution operators, y1 and y2 the two corresponding images and ϕprio(x) is the regularization term (quadratic or edge preserving).

The optimization method (minimization of the cost function) is the same as before, involving either the linear conjugate gradient method when using the Tikhonov regularization or the nonlinear conjugate gradient method when using the L1-L2 edge preserving regularization.
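The two-image scheme can be sketched by adding the second likelihood term to the normal equations; this hypothetical snippet assumes W = identity, periodic boundaries and, for simplicity, the Tikhonov regularization (the paper's main results use the edge preserving regularization instead):

```python
import numpy as np

def two_image_cg(y1, h1, y2, h2, mu=1e-2, n_iter=50):
    """Minimize |H1 x - y1|^2 + |H2 x - y2|^2 + mu |D x|^2 by solving
    (H1^T H1 + H2^T H2 + mu D^T D) x = H1^T y1 + H2^T y2
    with linear conjugate gradients."""
    F1 = np.fft.fft2(np.fft.ifftshift(h1 / h1.sum()))
    F2 = np.fft.fft2(np.fft.ifftshift(h2 / h2.sum()))
    conv = lambda F, img: np.real(np.fft.ifft2(F * np.fft.fft2(img)))

    def DtD(img):  # D^T D for forward differences = discrete Laplacian
        return (4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                - np.roll(img, 1, 1) - np.roll(img, -1, 1))

    A = lambda img: (conv(np.conj(F1), conv(F1, img))
                     + conv(np.conj(F2), conv(F2, img)) + mu * DtD(img))
    b = conv(np.conj(F1), y1) + conv(np.conj(F2), y2)

    x = np.zeros_like(b); r = b - A(x); p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = A(p)
        a = rs / np.vdot(p, Ap).real
        x += a * p
        r -= a * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

The only change from the single-image case is that the operator A and the right-hand side b each gain a second term, one per acquired image.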

4. Characteristics of the deconvolution algorithm

The purpose of this section is to characterize the effect of the different regularizations on this algorithm and to compare the classical single-image deconvolution to the two-image deconvolution using specific PSFs, which we call D-SLAM in this optical microscopy context. The chosen PSFs are the two used in the SLAM method (see section 2).

4.1. Data and simulated PSFs

Simulated PSFs

For the two-image deconvolution we used the two PSFs from the SLAM method: a linearly polarized Gaussian beam, the TEM00 mode, and the azimuthally polarized beam, the TE01 mode. For the single-image deconvolution we used the linearly polarized Gaussian TEM00 mode. For all simulations the pixel size is 10 nm. We used the Richards-Wolf vectorial diffraction theory to numerically calculate the PSF of the laser scanning microscope [22,23].

Simulation dataset

For the simulation we placed four point-like objects disposed at the corners of a square. The vertical and horizontal spacing between each point is 30 pixels (300 nm). This image is then convolved with each of the corresponding PSFs.

After normalizing the data (from 0 to 1) we add noise (using the Matlab “imnoise” function). The noise sources can be modelled as Gaussian noise (electronic noise) or Poisson noise (shot noise). To be closer to the reality of optical microscopy the noise is a composite of Gaussian and Poisson noise. The Gaussian noise has a variance of ≈ 0.01 and an average of ≈ 0.05. The Poisson noise was added after a scaling by a factor of 50 to obtain a noise close to that of the experimental confocal images. It corresponds to a noise with an amplitude of ≈ 0.05 and a variance estimated at ≈ 0.0001 in empty areas, and an amplitude of ≈ 0.4 and a variance estimated at ≈ 0.004 in high intensity areas. The parameters of each type of noise were selected to mimic the noise of the confocal microscope by comparison with a set of images acquired on our confocal microscope. The measured noise has an average of ≈ 0.03 and an estimated variance of ≈ 0.001 in empty areas. In high intensity areas the average of the noise is ≈ 0.4 and the variance is ≈ 0.03.
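A sketch of this composite noise model; the parameter values follow the description above, and the function name is illustrative (not the Matlab code actually used):

```python
import numpy as np

def add_composite_noise(img, gauss_mean=0.05, gauss_var=0.01, scale=50, rng=None):
    """Composite noise model: Poisson (shot) noise applied after scaling
    the normalized image by a factor of 50, plus Gaussian (electronic)
    noise with the stated mean and variance."""
    rng = np.random.default_rng(rng)
    noisy = rng.poisson(img * scale) / scale                         # shot noise
    noisy = noisy + rng.normal(gauss_mean, np.sqrt(gauss_var), img.shape)  # electronic noise
    return noisy
```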

4.2. Results of deconvolution on simulated data

Figures 6 and 7 show several deconvolution results and compare them with the confocal and SLAM images. The profiles of Fig. 7 are traced along the red lines in Fig. 6.


Fig. 6 Deconvolution results compared to those obtained with single image confocal microscopy and SLAM for four point-like objects separated by 30 pixels (300 nm). The four points of the original (undistorted) image are too small to be visible on an image (each point is only of one pixel size). Quad. deconv. refers to deconvolution with quadratic (Tikhonov) regularization. Non-quad. deconv. refers to deconvolution with non-quadratic regularization (edge preserving). TEM00 refers to the PSF of the vertically polarized Gaussian beam and TE01 to the PSF of the azimuthally polarized TE01 beam. TEM00 & TE01 specifies that the two-image deconvolution method is used. Scale bar: 200 nm.



Fig. 7 Profiles corresponding to the red lines in Fig. 6. The original points are presented in grey. Quad. deconv. refers to deconvolution with quadratic (Tikhonov) regularization. Non-quad. deconv. refers to deconvolution with non-quadratic regularization (edge preserving). TEM00 & TE01 specifies that the two-image deconvolution method is used.


As shown in Figs. 6 and 7 the use of two PSFs having different complementary frequency contents increases the quality of the deconvolution and decreases the residual FWHM of the point-like objects.

Figures 6 and 7 also demonstrate the importance of the choice of the regularization for deconvolution algorithms. Indeed the deconvolution using an edge preserving (non-quadratic) regularization gives better results than the Tikhonov (quadratic) regularization (compare images 3 and 5 in Fig. 6). The same holds when two images with two PSFs are used in the deconvolution (compare images 4 and 6 in Fig. 6). From the profiles in Fig. 7 we can see that the SLAM method and the single-image deconvolution tend to have a maximum slightly shifted from the real position of the point-like object. This shift is corrected when using the two-image deconvolution.

Residual FWHM

To characterize the gain in resolution from each method we measured the FWHM of the residual PSF of the point-like object after deconvolution of each image (Table 1). ±X represents the uncertainty of the measurements related to the pixel size.


Table 1. Residual full width at half maximum (FWHM) of the point like object for each tested method in both directions (vertical and horizontal). Numerical values are extracted from the datasets presented in Figs. 6 and 7. All computations are for a wavelength of 532 nm.

4.3. Comparing with SLAM artefacts

For this section we used another set of simulated confocal images. Here we created a set of randomly arranged lines with varying intensities as a simple representation of microtubules. The simulation pixel size was 25 nm and each line has a width of one pixel (since microtubules are approximately 25 nm in diameter).

Figure 8 compares results of the two-image deconvolution using the non-quadratic regularization with the SLAM method. The negative values of the SLAM method were left on the traces in Fig. 8(b) for better visualisation of the SLAM method artefacts. The SLAM method sometimes fails to recover structures and can make them completely disappear in the negative values, which is not the case for the deconvolution (even if some structures are too close to be resolved).


Fig. 8 Comparison of the proposed deconvolution method with SLAM on randomly simulated lines having a width of one pixel. (a) Comparison of confocal, SLAM and deconvolution with the original simulated image. Scale bar: 1 µm. (b) Profiles of the two traces drawn in (a) compare the methods.

5. Experimental results

In this section we apply the deconvolution method to images obtained with a home-made confocal microscope that uses a water objective with NA = 1.2. The excitation wavelength is 532 nm. The complete schematic diagram of the confocal microscope is presented in Fig. 9.

Fig. 9 Schematic representation of a home-made confocal system. In the laser module we used two beam splitters creating two separate optical paths to easily switch between the TEM00 and the TE01 beams. The azimuthally polarized TE01 beam is created by means of an Arcoptix radial/azimuthal polarization converter [24]. ND stands for neutral density.

The excitation laser we used is a Coherent Inc. DPSS (Diode Pumped Solid State) Compass 215M Laser at 532 nm. We used a 60x UPlanSApo water immersion Olympus objective with a numerical aperture of 1.2. The PMT (Photomultiplier tube) was a Hamamatsu R3896 PMT mounted on the Hamamatsu C7950 PMT DAP-type socket. The mirrors, beamsplitters and ND filters are from Thorlabs: P01 mirrors, BS010 50:50 non-polarizing beamsplitter cube (400–700 nm, 10 mm) and NEK01 absorptive ND filter kits. The dichroics and excitation filters are from Semrock: model Di01-R405/488/532/635-25×36 for the separation of the excitation and the emission and model BLP01-532R-25 as emission filters. To change the polarization from linear to azimuthal and create the donut shape we used an Arcoptix radial/azimuthal polarization converter [24] with its electrical LC driver. The Arcoptix polarization converter is placed along a separate optical path so we can easily change from vertical to azimuthal polarization without realignment. We used the 50:50 non-polarizing beamsplitter cubes to split and recombine the two different paths for vertical and azimuthal polarizations. The arrows of the laser module in Fig. 9 indicate the polarization state of the beam: vertical or azimuthal.

The PSF used for the deconvolution algorithm is measured and averaged over several nano-spheres in the same sample. Since the laser beam has a FWHM of roughly 250 nm, a 100-nm nano-sphere does not broaden the measured PSF significantly (the difference is smaller than the pixel size) and provides a good estimate of the PSF.
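
This claim can be checked with a simplified 1D estimate, convolving a 250 nm FWHM Gaussian with the projected chord-length profile of a 100 nm sphere (a scalar toy model of the real 3D imaging geometry, not the calibration procedure of the paper):

```python
import numpy as np

dx = 1.0                              # grid step in nm
x = np.arange(-600, 600, dx)

fwhm_beam = 250.0                     # beam FWHM in nm
sigma = fwhm_beam / (2 * np.sqrt(2 * np.log(2)))
psf = np.exp(-x**2 / (2 * sigma**2))  # Gaussian beam profile

R = 50.0                              # sphere radius in nm
# projected fluorescence of a uniformly filled sphere ~ chord length
sphere = np.where(np.abs(x) < R, np.sqrt(R**2 - x**2), 0.0)

# profile measured on the sphere: beam profile convolved with the sphere
measured = np.convolve(psf, sphere, mode="same")

def fwhm_of(p):
    """Crude FWHM by thresholding at half maximum (1 nm grid)."""
    half = p.max() / 2.0
    idx = np.where(p >= half)[0]
    return (idx[-1] - idx[0]) * dx
```

Running this gives a broadening of well under 15 nm over the 250 nm beam, smaller than a typical pixel, which supports using the nano-sphere images directly as a PSF estimate.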

5.1. Deconvolution on nano-sphere samples

To compare the resolution achieved with two-image deconvolution to that obtained with SLAM microscopy and single-beam confocal microscopy, we observed fluorescent nano-spheres of 100-nm diameter. The nano-spheres have an absorption peak at 540 nm and an emission peak at 560 nm. We placed the nano-spheres on a slide with a mounting medium of refractive index ≈ 1.4 (Fluorescent mounting medium, Dako). The two images obtained with the TEM00 and TE01 laser modes were taken successively, and the two images of the same scene needed to be aligned for the methods (both SLAM and deconvolution) to work properly. The pinhole diameter was 50 µm ≈ 4 Airy Units (AU); since the nano-spheres were on a single plane, this allowed us to have a better signal than with a 1 AU pinhole.

We see from Fig. 10 that deconvolution allowed us to resolve two nano-spheres that were unresolved in classical confocal microscopy. Looking at the profiles in Fig. 11 we clearly see the improvement coming from the deconvolution. The noise disappears nearly completely and the resolution is greatly enhanced compared to that of the initial image, see Fig. 11(a). Comparing SLAM and deconvolution, we see that deconvolution yields a better resolving power and leaves a smaller residual PSF than SLAM, see Fig. 11(a) and Fig. 11(b). The residual FWHM in SLAM is not as good as that predicted from theory because of a small asymmetry in the shape of the TE01 mode achieved experimentally. To work properly, SLAM requires the donut mode (TE01) to be a nearly perfect symmetrical donut, which is not needed in the deconvolution algorithm.

Fig. 10 Experimental comparison of the methods using imaging of nano-spheres. (a) Classical confocal. (b) SLAM. (c) Two-image deconvolution with edge preserving regularization. Scale bar: 1µm.

Fig. 11 Profiles obtained from the images shown in Fig. 10. Red: confocal. Green: SLAM. Blue: two-image deconvolution with edge preserving regularization.

5.2. Deconvolution applied to biologically relevant samples

Here we describe the resolution enhancement with fluorescently stained microtubule samples. The cells used are cultured hippocampal neurons (from rat) fixed in a 4% PFA solution. The microtubules were then stained by immunocytochemistry (monoclonal anti-acetylated α-tubulin primary antibody, revealed by a donkey anti-mouse Rhodamine RedX secondary antibody). The same mounting medium (Dako) as in section 5.1 was used to mount the cells on slides. The images were taken with the same setup as shown in Fig. 9, using a water-immersion objective with NA = 1.2 and an excitation laser at 532 nm. The pinhole diameter was set at 10 µm ≈ 1 AU, which is the optimal pinhole value for a confocal microscope.

Figure 12 compares the resolution enhancement obtained with SLAM and with deconvolution using one and two laser beams. The confocal image was filtered with a Gaussian filter having a FWHM of two pixels to remove the noise for the comparison. Each method clearly shows an improvement in image resolution and contrast compared to the confocal image. Looking at the profiles in Fig. 12(b), the two-image deconvolution stands out and resolves many structures that are barely visible, or not visible at all, with single-image deconvolution or SLAM.
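
A Gaussian filter specified by its FWHM must first be converted to the σ parameterization used by common libraries, σ = FWHM / (2√(2 ln 2)); a small sketch using scipy (illustrative, not the exact pre-processing code of the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(img, fwhm_px):
    """Gaussian low-pass filter specified by its FWHM in pixels;
    scipy parameterizes the kernel by sigma, so convert first."""
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(img, sigma)
```

For a two-pixel FWHM, the impulse response one pixel away from the center is exactly half the central value, as expected from the definition of the FWHM.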

Fig. 12 (a) Comparison of confocal, SLAM and deconvolution methods on images of the same field. Scale bar : 1 µm. (b) Integrated profiles along the 10-pixel wide lines in (a). Black: confocal. Green: SLAM. Blue: single-image deconvolution. Red: two-image deconvolution. The deconvolution method used the edge preserving regularization.

We then measured the resolution enhancement by observing the Gephyrin protein immunodetected with a monoclonal mAb7a antibody coupled to Oyster 550, which has an excitation maximum at 551 nm. The images were taken with the system shown in Fig. 9, with a water objective of NA = 1.2 and an excitation laser at 532 nm. The pinhole diameter was 10 µm ≈ 1 AU.

Here again, Fig. 13 compares the enhancement in resolution obtained with SLAM and with deconvolution using one and two laser beams. The confocal image was again filtered with a Gaussian filter having a two-pixel FWHM. The same type of improvement as in Fig. 12 is visible here, and the plots of Fig. 13(b) reveal structures that are only resolved with the two-image deconvolution.

Fig. 13 (a) Comparison of confocal, SLAM and deconvolution methods on images of the same field. (b) Profiles of the traces in (a). Black: confocal. Green: SLAM. Blue: single-image deconvolution. Red: two-image deconvolution. The deconvolution method used is the one based on edge preserving regularization.

In the next section we present how using two other types of laser beams can further improve resolution because of their higher spatial frequency content. These beams are Bessel-Gauss beams of order 0 and 1.

6. Resolution enhancement using Bessel-Gauss beams (DB-SLAM)

Bessel-Gauss beams are a family of beams that have been used for extended depth of field [25–28] and light sheet microscopy [29] in two-photon excitation mode because of their non-diffracting properties [30, 31]. Here we used the Bessel-Gauss beams in confocal microscopy [32–36], keeping the pinhole of the confocal microscope to maintain the sectioning and suppress the side lobes of the Bessel-Gauss beam [32,37]. With a pinhole of around one Airy Unit in diameter the side lobes are no longer visible; it is this configuration we used for imaging. The idea behind using this pair of Bessel-Gauss beams is that, since they both have a smaller focal spot than the TEM00 and TE01 beams, the deconvolution results will be further improved.

6.1. Bessel-Gauss beams

Since the Bessel-Gauss beam of order zero can have a smaller central lobe than the Gaussian beam, its use decreases the size of the diffraction-limited PSF and provides better resolution [32,34–37]. Here we used the vertically polarized zero-order Bessel-Gauss beam J0 and the azimuthally polarized first-order Bessel-Gauss beam J1, which were used for the B-SLAM method in [32], to replace the vertically polarized Gaussian mode TEM00 and the azimuthally polarized mode TE01.

To create the Bessel-Gauss beam we used an axicon coupled with a lens that creates a thin ring of light (as introduced in [27]). The ring of light is then relayed to the back aperture of the objective using telescopes. The objective then transforms the ring of light into a Bessel-Gauss beam: the field in the focal plane is the Fourier transform of the field at the back aperture, and a thin annulus and a Bessel-Gauss profile form a Fourier-transform pair.
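
This Fourier-pair relation can be illustrated with a scalar 2D FFT toy model (the real system is vectorial and follows the Richards-Wolf theory; grid sizes and pupil radii below are arbitrary choices):

```python
import numpy as np

N = 512
x = np.arange(N) - N // 2
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)

R = 30                                           # pupil radius in grid units (arbitrary)
full = (r <= R).astype(float)                    # filled circular pupil -> Airy spot
ring = ((r <= R) & (r >= R - 3)).astype(float)   # thin annulus -> Bessel-like spot

def focal_intensity(pupil):
    """Scalar focal-plane intensity: squared modulus of the pupil's 2D Fourier transform."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    return np.abs(field) ** 2

def central_lobe_hwhm(inten):
    """Half width at half maximum of the central lobe, in pixels, along one axis."""
    p = inten[N // 2, N // 2:]
    p = p / p[0]
    i = 0
    while p[i] > 0.5:
        i += 1
    return i

airy = focal_intensity(full)     # pattern from the filled pupil
bessel = focal_intensity(ring)   # pattern from the annulus (J0^2-like profile)
```

The annular pupil yields a visibly narrower central lobe than the filled pupil of the same outer radius, at the cost of side lobes that the confocal pinhole later rejects.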

Figure 14 compares the focal spots of the Gaussian, Laguerre-Gauss TE01 and Bessel-Gauss beams for both vertical and azimuthal polarizations. It shows that both Bessel-Gauss beams have a smaller focal spot than their Gaussian or TE01 counterparts.

Fig. 14 100 nm fluorescent nano-spheres (Fluosphere carboxylate-modified microspheres, orange fluorescence 540/560) observed on a confocal microscope with a pinhole of 15 µm. Excitation wavelength: 532 nm. Scale bar: 300 nm. The nano-spheres are observed with Gaussian and Bessel-Gauss beams with vertical or azimuthal polarization, respectively.

6.2. Deconvolution with Bessel-Gauss beams using simulated data

Here we test the deconvolution on images acquired with Bessel-Gauss beams with the same simulation tests as in sections 4.1 and 4.3. The Bessel-Gauss beam PSFs are numerically calculated using the Richards-Wolf vectorial diffraction theory [22,23,32].

Figures 15 and 16 show the resolution improvement of the deconvolution algorithm obtained by using Bessel-Gauss beams for image acquisition compared to using TEM00 and TE01 beams. The use of Bessel-Gauss beams for confocal image acquisition further enhances the deconvolution results and gives an even smaller resulting FWHM (below 100 nm).

Fig. 15 Deconvolution results compared to confocal and SLAM for four points arranged in a square and separated by 30 pixels (300 nm). Original images are not presented because the four points are too small to be visible on an image (each point is only one pixel in size). Non-quad. deconv. refers to deconvolution with non-quadratic regularization (edge preserving). Scale bar: 200 nm.

Fig. 16 Profile traces corresponding to the red lines in Fig. 15 comparing the results obtained with the two deconvolution methods. The original points are presented in grey.

Table 2 compares the residual FWHM after deconvolution for the two-image deconvolution with non-quadratic regularization (edge preserving) using the TEM00 and TE01 beams or the J0 and J1 beams.

Table 2. Residual full width at half maximum (FWHM) of the point like objects for each tested method in both directions (vertical and horizontal). Calculations were made with the datasets presented in Figs. 15 and 16. All calculations are made assuming a wavelength of 532 nm.

6.3. Deconvolution on nano-sphere samples

We compared the two-image deconvolution using the Gaussian TEM00 and Laguerre-Gauss TE01 beams with the deconvolution using the Bessel-Gauss beams of order 0 and 1 on the nano-sphere samples. All the confocal images were acquired on the same home-made confocal system as the one shown in Fig. 9, using a water objective with NA = 1.2 and an excitation laser at 532 nm.

Figure 17 shows the results of two-image deconvolution using the TEM00 and TE01 images and compares them with the deconvolution using the images obtained with the Bessel-Gauss modes of order 0 and 1. Figure 18 compares the FWHM of one nano-sphere along the vertical and the horizontal axes. Using the Bessel-Gauss beams instead of the TEM00 and TE01 beams gives an improvement of ≈ 25% in the residual FWHM after deconvolution. The residual FWHMs are ≈ 97 nm ≈ 0.18 λ in the vertical direction and ≈ 72 nm ≈ 0.14 λ in the horizontal direction. The observed nano-spheres have a diameter of ≈ 100 nm and are filled with fluorescent molecules; hence the maximum width of the nano-spheres is ≈ 100 nm. Since the nano-spheres are spherical objects and fluorescence imaging integrates all the fluorescence signal, acting as a summation projection, the perfect fluorescent profile of a nano-sphere should then show a FWHM of approximately 50 nm.

Fig. 17 Comparison of images of the same objects obtained with confocal microscopy and deconvolution methods. Left: classical confocal with the TEM00 beam. Middle: two-image deconvolution with the TEM00 and TE01 beams. Right: two-image deconvolution with the Bessel-Gauss beams of order 0 and 1. Scale bar: 340 nm.

Fig. 18 Profile trace of the image shown in Fig. 17. Black: confocal. Red: two-image deconvolution with Gaussian and Laguerre-Gauss beams. Purple: two-image deconvolution with Bessel-Gauss beams of order 0 and 1.

6.4. Deconvolution in biologically relevant samples

In this section we test the improvement in resolution that can be achieved with the deconvolution of confocal images taken with the Bessel-Gauss beams. The sample we used was an immunostaining of microtubules similar to the one used in section 5.2, mounted in a polyvinyl alcohol mounting medium with DABCO from Fluka Analytical. The images were obtained on the same confocal microscope as before. We used the same water objective with NA = 1.2 and an excitation laser at 532 nm. The pinhole diameter was 10 µm ≈ 1 AU.

Figure 19 displays images and profile traces revealing that the deconvolution can resolve several bundles of microtubules, which was not the case in the confocal image taken with the Bessel-Gauss beam. In the case shown here, the smallest detected feature has a FWHM of ≈ 100 nm ≈ 0.19 λ.

Fig. 19 (a) Confocal image of microtubules with the Bessel-Gauss beam of order 0 (top) and two-image deconvolution with Bessel-Gauss beams of order 0 and 1 (bottom). (b) Profile traces of the images in (a). Scale bar: 1 µm.

7. Discussion

The basic idea behind the method described here is that a structured PSF has a larger content of high spatial frequencies than an unstructured PSF, which helps deconvolution algorithms to extract the high-frequency content of the image. A higher frequency content in the PSF prevents deconvolution algorithms from treating high frequencies as noise, thus recovering the finer structures of the object that would otherwise have been hidden below the noise level and discarded.
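
This argument can be checked numerically with a scalar toy model of the two PSFs, a Gaussian spot and an r²-weighted donut (the experimental TE01 PSF is vectorial, so this sketch is only illustrative):

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2
s = 6.0                                    # Gaussian width in pixels (arbitrary)

psf_gauss = np.exp(-r2 / (2 * s**2))       # TEM00-like spot
psf_donut = r2 * np.exp(-r2 / (2 * s**2))  # TE01-like donut
psf_gauss /= psf_gauss.sum()
psf_donut /= psf_donut.sum()

def mtf(psf):
    """Modulus of the OTF, normalized to 1 at zero frequency."""
    m = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
    return m / m.max()

m_gauss, m_donut = mtf(psf_gauss), mtf(psf_donut)
# radial cuts from the zero-frequency pixel outward
cut_g = m_gauss[n // 2, n // 2:]
cut_d = m_donut[n // 2, n // 2:]
```

Beyond the zero crossing of the donut MTF, the donut transfers relatively more contrast than the Gaussian, while the Gaussian dominates at low frequencies; this is the complementary frequency coverage that the two-image deconvolution exploits.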

It should be noted that, in deconvolution, the number of pixels in the PSF matters. The larger the number of pixels the harder it is for a deconvolution algorithm to correctly process the image. In our case we used a larger PSF than usual, 20-30 pixels in diameter, because we needed to have enough pixels to properly sample the donut beams (TE01 and J1) and we wanted to be able to measure the residual FWHM with good precision. Because we had PSFs with many pixels we could not use the automated parameter estimation proposed in [15], as the latter was designed to work with smaller PSFs (≈ 10 pixels) and we had to manually tune the deconvolution parameters.

The deconvolution typically requires a few hundred to a few thousand iterations. The deconvolution codes were written in MATLAB and run on a computer with an i7-4770 processor clocked at 3.4 GHz. Under these conditions typical deconvolution times were around 10-30 min for a 1024×1024 pixel image.

The Bessel-Gauss beams of order 0 and 1 produce the smallest possible bright or dark features when focussed by a simple microscope objective. If we consider the back aperture of the objective as a Fourier plane of the objective, a thin ring of light, which is transformed into a Bessel-Gauss beam in the focal plane, represents the highest frequencies the microscope objective is able to transmit. Bessel-Gauss beams thus contain the highest frequency content possible for a given microscope objective. Moreover, the fact that we use the confocal pinhole to filter out the fluorescence of the Bessel-Gauss beam side-lobes adds to the frequency content of the MTF of the Bessel-Gauss beams. All these considerations make the Bessel-Gauss beams the perfect candidate for the proposed deconvolution method.

Even though the proposed method cannot reach a resolution as high as that demonstrated by methods such as STED, PALM, or STORM, its experimental design is much simpler than that of STED, and D-SLAM does not rely on specific fluorescent properties as PALM and STORM do.

The main limitation of the classical STED method is the high laser power required for the depletion beam, which leads to high photobleaching and phototoxicity. The method can nonetheless achieve residual FWHMs of 20 nm [38]. This aspect has been extensively investigated and several methods have been proposed to solve this high laser power issue. RESOLFT [39], protected STED [40] and cw-STED [41] are amongst the most popular of those methods, and the reported residual FWHMs can hardly go below 50 nm. By specifically engineering fluorescent probes the RESOLFT method can generate residual FWHMs as low as 40 nm [46], but at the expense of a more acute dependence on specific fluorescent proteins. Another difficulty of STED-like microscopy is the multi-wavelength source requirement, which becomes even more stringent in multicolour STED. Some have achieved multi-colour STED imaging with up to 4 colours, but at the expense of the resolution, with a residual FWHM of 80 nm [44,45].

Methods like PALM and STORM are often referred to as single molecule localization microscopy (SMLM) and can both achieve residual FWHMs on the order of 10 nm [38]. The major limitation of those methods lies in the fact that they rely solely on the fluorescent probe properties, namely photo-activation for PALM and photo-switching and blinking for STORM. Moreover, 3D imaging is particularly difficult, and the time resolution of those methods, often on the order of minutes, is usually not suited to live imaging. Some algorithms bring the temporal resolution down to a few seconds, but with a larger FWHM (60 nm) [42].

Lastly, SIM microscopy offers a resolution on the order of 100 nm with fewer limitations than other methods, but its temporal resolution is still limited by the time necessary to apply the different illumination patterns. Nonlinear SIM, or saturated SIM, can reach a residual FWHM of 40 nm [43] (or 0.08 λ at 488 nm) by generating high-frequency harmonics of the illumination pattern, but this usually relies on fluorophore saturation.

Some authors have compared the resolution, strengths and weaknesses of each of these methods and have measured the residual FWHM on the same biological sample [47]. They reported a FWHM of 50 nm for STED, 50 nm for SMLM and 100 nm for SIM. In their comparison they highlighted the main drawbacks of the previously mentioned methods: SIM can generate reconstruction artefacts (negative values), STED is costly and complex and can hardly achieve multicolour imaging, and SMLM is poorly suited for 3D imaging.

ISM also achieves resolution improvement (by a factor of ≈ 1.6) by scanning and often uses a deconvolution algorithm. However, ISM requires dozens of images for its processing compared to only two as needed here. Moreover, the PSFs used for ISM are all Gaussian or very similar to Gaussian, while the two PSFs used here were selected to contain complementary spectral information. Thanks to this complementarity, we were able to achieve a larger resolution improvement than ISM, without modification of the detection part of the confocal microscope (having a good camera is critical for ISM [48]).

The method proposed in this document does not rely on specific properties of fluorescent probes; it only needs the generation of two laser modes in the optical path before the laser scanning microscope. Deconvolution is a well-known and well-studied post-processing analysis with many reliable algorithms that can correctly process microscopy images. The only two limitations of the method are the acquisition of two images, which reduces the temporal resolution by a factor of two compared to classical confocal imaging, and the post-processing time of the deconvolution. The method preserves all the experimental possibilities offered by classical confocal microscopy, like multicolour capability and live acquisitions, with an added post-processing step.

We have demonstrated here that the use of different laser modes forming different structured PSFs plays an important role in the deconvolution process. With more structured PSFs, deconvolution leads to a better resolution than with less structured ones. We used this principle to improve the deconvolution results by controlling the PSF through beam shaping.

8. Conclusion

In this manuscript we demonstrated the efficiency of deconvolution-based confocal microscopy by improving the resolution of an existing method (SLAM) based on laser mode switching. The use of two complementary laser modes was theoretically analysed and several experimental results were provided. We were able to significantly improve the resolution by advantageously using two different laser beams as excitation beams for laser scanning confocal microscopy. This method can easily be retrofitted into a commercial laser scanning microscope by adding a module that controls the laser mode. The deconvolution process can be performed by any deconvolution algorithm suited to multi-image deconvolution.

Funding

Natural Sciences and Engineering Research Council of Canada (NSERC)(371078-2010, 171034-05, 04753-2015); Canadian Institutes of Health Research (CIHR)(STP-53908); NSERC-CIHR Collaborative Health Research Projects (CGP-140190); Canada Research Chair in Chronic Pain and Related Brain Disorders (950-230938).

Acknowledgments

The authors would like to thank Charleen Salesse for providing cell cultures, Annie Castonguay, Martin Cottet, Louis E. Lorenzo, Cleophace Akitegetse, Alicja Gasecka, Simon Labrecque and Flavie Lavoie-Cardinal for helpful discussions.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. S.W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]   [PubMed]  

2. E. Betzig, G.H. Patterson, R. Sougrat, O.W. Lindwasser, S. Olenych, J.S. Bonifacino, M.W. Davidson, J. Lippincott-Schwartz, and H.F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]   [PubMed]

3. M.J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–795 (2006). [CrossRef]   [PubMed]  

4. M.G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]   [PubMed]  

5. C.B. Müller and J. Enderlein, “Image scanning microscopy,” Phys. Rev. Lett. 104(19), 198101 (2010). [CrossRef]   [PubMed]  

6. C. Roider, R. Piestun, and A. Jesacher, “3D image scanning microscopy with engineered excitation and detection,” Optica 4(11), 1373–1381 (2017). [CrossRef]  

7. H. Dehez, M. Piché, and Y. De Koninck, “Resolution and contrast enhancement in laser scanning microscopy using dark beam imaging,” Opt. Express 21(13), 15912–15925 (2013). [CrossRef]   [PubMed]  

8. C. Kuang, S. Li, W. Liu, X. Hao, Z. Gu, Y. Wang, J. Ge, H. Li, and X. Liu, “Breaking the diffraction barrier using fluorescence emission difference microscopy,” Sci. Rep. 3, 1441 (2013). [CrossRef]   [PubMed]  

9. K. Korobchevskaya, C. Peres, Z. Li, A. Antipov, C.J.R. Sheppard, A. Diaspro, and P. Bianchini, “Intensity weighted subtraction microscopy approach for image contrast and resolution enhancement,” Sci. Rep. 6, 25816 (2016). [CrossRef]   [PubMed]  

10. A. Gasecka, A. Daradich, H. Dehez, M. Piché, and D. Côté, “Resolution and contrast enhancement in coherent anti-Stokes Raman-scattering microscopy,” Opt. Lett. 38(21), 4510–4513 (2013). [CrossRef]   [PubMed]  

11. J. L. Starck, E. Pantin, and F. Murtagh, “Deconvolution in astronomy: a review,” Publ. Astron. Soc. Pac. 114(800), 1051–1069 (2002). [CrossRef]

12. L.M. Mugnier, C. Robert, J.M. Conan, V. Michau, and S. Salem, “Myopic deconvolution from wave-front sensing,” J. Opt. Soc. Am. A 18(4), 862–872 (2001). [CrossRef]  

13. L.M. Mugnier, T. Fusco, and J. Conan, “Mistral: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images,” J. Opt. Soc. Am. A 21(10), 1841–1854 (2004). [CrossRef]  

14. F. Soulez, “Une approche problèmes inverses pour la reconstruction de données multi-dimensionnelles par méthodes d’optimisation,” PhD thesis, Université Jean Monnet - Saint-Etienne (2008).

15. E.F.Y. Hom, F. Marchis, T.K. Lee, S. Haase, D.A. Agard, and J.W. Sedat, “Aida: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data,” J. Opt. Soc. Am. A 24(6), 1580–1600 (2007). [CrossRef]  

16. F. Soulez, L. Denis, Y. Tourneur, and E. Thiébaut, “Blind deconvolution of 3D data in wide field fluorescence microscopy,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2012), 1735–1738.

17. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series (J. Wiley and Sons, 1949).

18. E. Thiébaut, “Introduction to image reconstruction and inverse problems,” in Optics in Astrophysics, R. Foy and F. Foy, eds. (Springer, NATO Science Series II: Mathematics, Physics and Chemistry , vol 198), 397–423 (2005). [CrossRef]  

19. F. Orieux, “Inversion bayésienne myope et non-supervisée pour l’imagerie sur-résolue. Application à l’instrument SPIRE de l’observatoire spatial Herschel,” PhD thesis, Université Paris-sud 11 - Paris (2009).

20. J. Nocedal and S.J. Wright, “Conjugate Gradient Methods,” in Numerical Optimization (Springer Series in Operations Research, 1999), pp. 100–133. [CrossRef]

21. E. Thiebaut, “Optimization issues in blind deconvolution algorithms,” Proc. SPIE 4847, 174–183 (2002). [CrossRef]  

22. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems. 2. structure of the image field in an aplanatic system,” Proc. R. Soc. Lond. A 253(1274), 358–379 (1959). [CrossRef]  

23. L. Novotny and B. Hecht, “Propagation and focusing of optical fields,” in Principles of Nano-Optics, (Cambridge University, 2006), pp. 45–88. [CrossRef]  

24. Arcoptix Switzerland, “Radial/azimuthal polarization converter,” http://www.arcoptix.com/radial_polarization_converter.htm.

25. B.E.A. Saleh and M.C. Teich, “Beam Optics,” in Fundamentals of Photonics, (Wiley Series in Pure and Applied Optics. Wiley Online, 2001), pp. 80–107.

26. P. Dufour, M. Piché, Y. De Koninck, and N. McCarthy, “Two-photon excitation fluorescence microscopy with a high depth of field using an axicon,” Appl. Opt. 45(36), 9246–9252 (2006). [CrossRef]   [PubMed]  

27. G. Thériault, Y. De Koninck, and N. McCarthy, “Extended depth of field microscopy for rapid volumetric two-photon imaging,” Opt. Express 21(8), 10095–10104 (2013). [CrossRef]   [PubMed]  

28. S. Ipponjima, T. Hibi, Y. Kozawa, H. Horanai, H. Yokoyama, S. Sato, and T. Nemoto, “Improvement of lateral resolution and extension of depth of field in two-photon microscopy by a higher-order radially polarized beam,” Microscopy 63(1), 23–32 (2014). [CrossRef]  

29. T.A. Planchon, L. Gao, D.E. Milkie, M.W. Davidson, J.A. Galbraith, C.G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–426 (2011). [CrossRef]   [PubMed]  

30. K.B. Rajesh, N. Veerabagu Suresh, P.M. Anbarasan, K. Gokulakrishnan, and G. Mahadevan, “Tight focusing of double ring shaped radially polarized beam with high NA lens axicon,” Opt. Laser Technol. 43(7), 1037–1040 (2011). [CrossRef]

31. H. Dehez, A. April, and M. Piché, “Needles of longitudinally polarized light: guidelines for minimum spot size and tunable axial extent,” Opt. Express 20(14), 14891–14905 (2012). [CrossRef]   [PubMed]  

32. L. Thibon, L. E. Lorenzo, M. Piché, and Y. De Koninck, “Resolution enhancement in confocal microscopy using Bessel-Gauss beams,” Opt. Express 25(3), 2162–2177 (2017). [CrossRef]  

33. Z. S. Hegedus and V. Sarafis, “Superresolving filters in confocally scanned imaging systems,” J. Opt. Soc. Am. A 3(11), 2162–2177 (1986). [CrossRef]  

34. Y. Kozawa and S. Sato, “Sharper focal spot formed by higher-order radially polarized laser beams,” J. Opt. Soc. Am. A 24(6), 1793–1798 (2007). [CrossRef]  

35. Y. Kozawa, T. Hibi, A. Sato, H. Horanai, M. Kurihara, N. Hashimoto, H. Yokoyama, T. Nemoto, and S. Sato, “Lateral resolution enhancement of laser scanning microscopy by a higher-order radially polarized mode beam,” Opt. Express 19(17), 15947–15954 (2011). [CrossRef]

36. J. Kim, D. Kim, and S. Back, “Demonstration of high lateral resolution in laser confocal microscopy using annular and radially polarized light,” Microsc. Res. Tech. 72(6), 441–446 (2009).

37. Y. Kozawa and S. Sato, “Numerical analysis of resolution enhancement in laser scanning microscopy using a radially polarized beam,” Opt. Express 23(3), 2076–2084 (2015). [CrossRef]   [PubMed]  

38. E. Sezgin, “Super-resolution optical microscopy for studying membrane structure and dynamics,” J. Phys.: Condens. Matter 29(27), 273001 (2017).

39. M. Hofmann, C. Eggeling, S. Jakobs, and S.W. Hell, “Breaking the diffraction barrier in fluorescence microscopy at low light intensities by using reversibly photoswitchable proteins,” Proc. Natl. Acad. Sci. USA 102(49), 17565–17569 (2005). [CrossRef]   [PubMed]  

40. J.G. Danzl, S.C. Sidenstein, C. Gregor, N.T. Urban, P. Ilgen, S. Jakobs, and S.W. Hell, “Coordinate-targeted fluorescence nanoscopy with multiple off states,” Nat. Photonics 10, 122–128 (2016). [CrossRef]  

41. G. Vicidomini, G. Moneron, K.Y. Han, V. Westphal, H. Ta, M. Reuss, J. Engelhardt, C. Eggeling, and S.W. Hell, “Sharper low-power STED nanoscopy by time gating,” Nat. Methods 8, 571–573 (2011). [CrossRef]   [PubMed]  

42. J. Min, C. Vonesch, H. Kirshner, L. Carlini, N. Olivier, S. Holden, S. Manley, J.C. Ye, and M. Unser, “FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data,” Sci. Rep. 4, 4577 (2014).

43. E.H. Rego, L. Shao, J.J. Macklin, L. Winoto, G.A. Johansson, N. Kamps-Hughes, M.W Davidson, and M.G. Gustafsson, “Nonlinear structured-illumination microscopy with a photoswitchable protein reveals cellular structures at 50-nm resolution,” Proc. Natl. Acad. Sci. USA 109(3), E135–E143 (2012). [CrossRef]

44. P.A. Pellett, X. Sun, T.J. Gould, J.E. Rothman, M.Q. Xu, I.R. Corrêa, and J. Bewersdorf, “Two-color STED microscopy in living cells,” Biomed. Opt. Express 2(8), 2364–2371 (2011). [CrossRef]   [PubMed]  

45. F.R. Winter, M. Loidolt, V. Westphal, A.N. Butkevich, C. Gregor, S.J. Sahl, and S.W. Hell, “Multicolour nanoscopy of fixed and living cells with a single STED beam and hyperspectral detection,” Sci. Rep. 7, 46492 (2017). [CrossRef]   [PubMed]  

46. T. Grotjohann, I. Testa, M. Leutenegger, H. Bock, N.T. Urban, F. Lavoie-Cardinal, K.I. Willig, C. Eggeling, S. Jakobs, and S.W. Hell, “Diffraction-unlimited all-optical imaging and writing with a photochromic GFP,” Nature 478, 204–208 (2011). [CrossRef]   [PubMed]  

47. E. Wegel, A. Göhler, B.C. Lagerholm, A. Wainman, S. Uphoff, R. Kaufmann, and I.M. Dobbie, “Imaging cellular structures in super-resolution with SIM, STED and localisation microscopy: a practical comparison,” Sci. Rep. 6, 27290 (2016). [CrossRef]   [PubMed]  

48. O. Schulz, C. Pieper, M. Clever, J. Pfaff, A. Ruhlandt, R.H. Kehlenbach, F.S. Wouters, J. Großhans, G. Bunt, and J. Enderlein, “Resolution doubling in fluorescence microscopy with confocal spinning-disk image scanning microscopy,” Proc. Natl. Acad. Sci. USA. 110(52), 21000–21005 (2013). [CrossRef]   [PubMed]  



Figures (19)

Fig. 1 Intensity distribution of the two laser modes. (a) The TEM00 vertically polarized Gaussian beam. (b) The TE01 azimuthally polarized Gaussian beam (donut mode). Pixel size: 10 nm. Scale bar: 100 nm.

Fig. 2 (a) MTF of the TEM00 beam. (b) MTF of the TE01 beam. (c) Both TEM00 and TE01 MTFs displayed in the same image; the TEM00 MTF is colored in red and the TE01 MTF in green. The yellow color indicates where the two MTFs overlap. The center of each image corresponds to the zero frequency and frequency increases radially. λ = 532 nm. The image brightness was adjusted for better visualisation. (d) and (e) are the horizontal and vertical profiles of the MTFs displayed in (c).

Fig. 3 Simulated confocal images obtained by convolving samples of point-like objects (1 pixel in size) with the corresponding PSFs. The objects are described in the text. (a) Image obtained with the Gaussian vertically polarized TEM00 PSF. (b) Image obtained with the TE01 azimuthally polarized PSF. Pixel size: 10 nm. Scale bar: 500 nm.

Fig. 4 Results of the Wiener filtering deconvolution on the images taken with the TEM00 mode (a) and with the TE01 mode (b). (c) Profiles of the coloured traces on the images in (a) (blue trace) and (b) (red trace). λ = 532 nm. Scale bar: 500 nm.

Fig. 5 (a) In red, the frequency components present in the Wiener filtered image taken with the TE01 beam that are not present in the Wiener filtered image taken with the TEM00 beam, shown in green. (b) Total frequency content of the Wiener filtered image taken with the TE01 mode (red) and with the TEM00 mode (green). The yellow color indicates where the two MTFs overlap. The center of each image corresponds to the zero frequency and frequency increases radially. The contrast has been modified for better visibility.

Fig. 6 Deconvolution results compared to those obtained with single-image confocal microscopy and SLAM for four point-like objects separated by 30 pixels (300 nm). The four points of the original (undistorted) image are too small to be visible (each point is only one pixel in size). Quad. deconv. refers to deconvolution with quadratic (Tikhonov) regularization. Non-quad. deconv. refers to deconvolution with non-quadratic (edge-preserving) regularization. TEM00 refers to the PSF of the vertically polarized Gaussian beam and TE01 to the PSF of the azimuthally polarized TE01 beam. TEM00 & TE01 indicates that the two-image deconvolution method is used. Scale bar: 200 nm.

Fig. 7 Profiles corresponding to the red lines in Fig. 6. The original points are shown in grey. Quad. deconv. refers to deconvolution with quadratic (Tikhonov) regularization. Non-quad. deconv. refers to deconvolution with non-quadratic (edge-preserving) regularization. TEM00 & TE01 indicates that the two-image deconvolution method is used.

Fig. 8 Comparison of the proposed deconvolution method with SLAM on randomly simulated lines having a width of one pixel. (a) Comparison of confocal, SLAM and deconvolution with the original simulated image. Scale bar: 1 µm. (b) Profiles of the two traces drawn in (a) comparing the methods.

Fig. 9 Schematic representation of the home-made confocal system. In the laser module we used two beam splitters creating two separate optical paths to easily switch between the TEM00 and the TE01 beams. The azimuthally polarized TE01 beam is created by means of an Arcoptix radial/azimuthal polarization converter [24]. ND filters: neutral density filters.

Fig. 10 Experimental comparison of the methods using imaging of nano-spheres. (a) Classical confocal. (b) SLAM. (c) Two-image deconvolution with edge-preserving regularization. Scale bar: 1 µm.

Fig. 11 Profiles obtained from the images shown in Fig. 10. Red: confocal. Green: SLAM. Blue: two-image deconvolution with edge-preserving regularization.

Fig. 12 (a) Comparison of confocal, SLAM and deconvolution methods on images of the same field. Scale bar: 1 µm. (b) Integrated profiles along the 10-pixel wide lines in (a). Black: confocal. Green: SLAM. Blue: single-image deconvolution. Red: two-image deconvolution. The deconvolution method uses edge-preserving regularization.

Fig. 13 (a) Comparison of confocal, SLAM and deconvolution methods on images of the same field. (b) Profiles of the traces in (a). Black: confocal. Green: SLAM. Blue: single-image deconvolution. Red: two-image deconvolution. The deconvolution method uses edge-preserving regularization.

Fig. 14 100 nm fluorescent nano-spheres (Fluosphere carboxylate-modified microspheres, orange fluorescence 540/560) observed on a confocal microscope with a 15 µm pinhole. Excitation wavelength: 532 nm. Scale bar: 300 nm. The nano-spheres are observed with Gaussian and Bessel-Gauss beams with vertical or azimuthal polarization, respectively.

Fig. 15 Deconvolution results compared to confocal and SLAM for four points disposed on a square and separated by 30 pixels (300 nm). Original images are not presented because the four points are too small to be visible (each point is only one pixel in size). Non-quad. deconv. refers to deconvolution with non-quadratic (edge-preserving) regularization. Scale bar: 200 nm.

Fig. 16 Profile traces corresponding to the red lines in Fig. 15 comparing the results obtained with the two deconvolution methods. The original points are shown in grey.

Fig. 17 Comparison of images of the same objects obtained with confocal microscopy and deconvolution methods. Left: classical confocal with the TEM00 beam. Middle: two-image deconvolution with the TEM00 and TE01 beams. Right: two-image deconvolution with the Bessel-Gauss beams of order 0 and 1. Scale bar: 340 nm.

Fig. 18 Profile trace of the image shown in Fig. 17. Black: confocal. Red: two-image deconvolution with Gaussian and Laguerre-Gauss beams. Purple: two-image deconvolution with Bessel-Gauss beams of order 0 and 1.

Fig. 19 (a) Confocal image of microtubules with the Bessel-Gauss beam of order 0 (top) and two-image deconvolution with Bessel-Gauss beams of order 0 and 1 (bottom). (b) Profile traces of the images in (a). Scale bar: 1 µm.

Tables (2)


Table 1 Residual full width at half maximum (FWHM) of the point-like object for each tested method in both directions (vertical and horizontal). Numerical values are extracted from the datasets presented in Figs. 6 and 7. All computations are for a wavelength of 532 nm.


Table 2 Residual full width at half maximum (FWHM) of the point-like objects for each tested method in both directions (vertical and horizontal). Calculations were made with the datasets presented in Figs. 15 and 16. All calculations assume a wavelength of 532 nm.

Equations (16)


$$\mathrm{Img}_{SLAM} = \mathrm{Img}_{bright} - g \cdot \mathrm{Img}_{donut}$$
$$y(k) = h(k) * x(k) + n(k),$$
$$f_{Wiener} = \arg\min_f E\left\{ \left\| x_{Wiener} - x_{True} \right\|^2 \right\},$$
$$f_{Wiener} = \arg\min_f E\left\{ \left\| f * y - x_{True} \right\|^2 \right\},$$
$$\arg\min_f \varphi(f)$$
$$\hat{f}_u^{\,Wiener} = \frac{\hat{h}_u^*}{|\hat{h}_u|^2 + \alpha u^{\beta}};$$
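The Wiener filter above — the conjugate OTF divided by its squared modulus plus a power-law regularization term αu^β — can be sketched with NumPy FFTs. This is an illustrative implementation, not the authors' code: `alpha` and `beta` are hand-tuned parameters, and the PSF is assumed to be centered and the same size as the image.

```python
import numpy as np

def wiener_deconvolve(y, h, alpha=1e-4, beta=1.0):
    """Frequency-domain Wiener-type filter: conj(H) / (|H|^2 + alpha * |u|^beta).

    y: blurred image; h: PSF, centered, same shape as y.
    alpha, beta: illustrative regularization parameters (hypothetical values).
    """
    H = np.fft.fft2(np.fft.ifftshift(h))       # OTF of the centered PSF
    fy = np.fft.fftfreq(y.shape[0])[:, None]   # vertical spatial frequencies
    fx = np.fft.fftfreq(y.shape[1])[None, :]   # horizontal spatial frequencies
    u = np.hypot(fy, fx)                       # radial frequency magnitude |u|
    F = np.conj(H) / (np.abs(H) ** 2 + alpha * u ** beta)
    return np.real(np.fft.ifft2(F * np.fft.fft2(y)))
```

Applied separately to the TEM00 and TE01 images, this reproduces the single-image Wiener step; the αu^β term plays the role of a noise-to-signal power ratio that grows with frequency.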
$$\phi_{MAP}(x) = \phi_{ML}(x) + \phi_{prio}(x),$$
$$\phi_{ML}(x) = (Hx - y)^T W (Hx - y)$$
$$\phi_{prio}(x) = \mu \left\| Dx \right\|^2,$$
$$\phi_{prio}(x) = \mu \sum_{i,j} \left[ x_{i+1,j} - x_{i,j} \right]^2 + \mu \sum_{i,j} \left[ x_{i,j+1} - x_{i,j} \right]^2,$$
$$\nabla \phi_{MAP}(x) = 0,$$
$$H^T W H x + \mu D^T D x = H^T W y.$$
$$\phi_{prio}(x) = \lambda_0 \sum_r \left[ \hat{x}(r) - \theta_r \ln\left( 1 + \frac{\hat{x}(r)}{\theta_r} \right) \right],$$
$$\phi_{MAP}(x) = \phi_{ML1}(x) + \phi_{ML2}(x) + \phi_{prio}(x),$$
$$\phi_{ML1}(x) = (H_1 x - y_1)^T W (H_1 x - y_1),$$
$$\phi_{ML2}(x) = (H_2 x - y_2)^T W (H_2 x - y_2),$$
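With W taken as the identity and circulant H and D (periodic boundaries), the normal equations of the two-image quadratic criterion, (H1ᵀH1 + H2ᵀH2 + μDᵀD) x = H1ᵀy1 + H2ᵀy2, diagonalize in the Fourier domain and admit a closed-form solution. The sketch below illustrates this under those assumptions; it is not the authors' implementation, and `mu` is a hypothetical regularization weight.

```python
import numpy as np

def two_image_quadratic_deconvolve(y1, h1, y2, h2, mu=1e-2):
    """Closed-form Fourier solution of the two-image quadratic (Tikhonov) MAP
    problem with W = identity:
        (H1^T H1 + H2^T H2 + mu * D^T D) x = H1^T y1 + H2^T y2.

    h1, h2: centered PSFs, same shape as the images; mu: illustrative weight.
    """
    H1 = np.fft.fft2(np.fft.ifftshift(h1))
    H2 = np.fft.fft2(np.fft.ifftshift(h2))
    fy = np.fft.fftfreq(y1.shape[0])[:, None]
    fx = np.fft.fftfreq(y1.shape[1])[None, :]
    # Eigenvalues of D^T D for periodic first differences: 4 sin^2(pi f) per axis
    D2 = 4 * np.sin(np.pi * fy) ** 2 + 4 * np.sin(np.pi * fx) ** 2
    num = np.conj(H1) * np.fft.fft2(y1) + np.conj(H2) * np.fft.fft2(y2)
    den = np.abs(H1) ** 2 + np.abs(H2) ** 2 + mu * D2
    return np.real(np.fft.ifft2(num / den))
```

Because the two OTFs have complementary frequency supports (cf. Fig. 5), at frequencies where |H1| vanishes |H2| can still keep the denominator well away from the regularizer alone, which is what makes the two-image data term more informative than either image by itself.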