Optica Publishing Group

Effect of depth dependent spherical aberrations in 3D structured illumination microscopy

Open Access

Abstract

We model the effect of depth dependent spherical aberration caused by a refractive index mismatch between the mounting and immersion media in a 3D structured illumination microscope (SIM). We first derive a forward model that takes into account the effect of the depth varying aberrations on both the illumination and the detection processes. From the model, we demonstrate that depth dependent spherical aberration leads to loss of signal only through its effect on the detection response of the system, while its effect on the illumination leads to phase shifts between orders that can be handled computationally in the reconstruction process. Further, using the model, we provide guidelines for optical corrections of aberrations of different complexities, and explain how the proposed corrections simplify the forward model. Finally, we show that it is possible to correct both illumination and detection aberrations using a deformable mirror only on the detection path of the microscope.

© 2012 Optical Society of America

1. Introduction

Structured illumination microscopy (SIM) is a widefield super-resolution technique that achieves twice the classical optical resolution by combining only a small number of raw images [1–3]. SIM involves illuminating the sample with a diffraction-limited sinusoidal pattern and acquiring images at different phase shifts and orientations of the illumination pattern. The acquired raw images are transformed so that they represent the output of several optical transfer functions (OTFs) with frequency support beyond the classical diffraction limit; these extended OTFs are simply shifted versions of the original diffraction-limited OTF, together covering an isotropically extended volume in Fourier space. Typically the transformed data are combined via Wiener filtering to obtain a properly weighted super-resolved image. The nonlinear extension of SIM, known as saturated SIM, achieves a resolution more than twice the classical limit by exploiting the nonlinear response of the fluorophores to laser irradiation [4]. Applications of 3D SIM that have led to new biological findings include studies of the nuclear envelope [5], spindle structure [6], chromosome structure during cell division [7], and plasmodesmata [8].

One of the major factors leading to loss of resolution in any widefield-based technique is the depth dependent spherical aberration caused by a mismatch between the mean refractive index of the imaging specimen and the refractive index of the immersion medium. As with any other widefield system, 3D SIM suffers loss of resolution due to spherical aberration when imaging thick samples. In conventional widefield microscopy, the effect of depth dependent spherical aberration has been well studied [9,10], and optical [11,12] and computational [13–15] methods to correct these aberrations have been proposed. In SIM, the effects of depth dependent spherical aberrations have so far only been considered for systems that use 2D illumination patterns for z-sectioning [16], but never for the systems that achieve three-dimensional super-resolution.

To treat the effect of depth dependent spherical aberration on three-dimensional SIM systems, we derive a compact model of object-to-image formation that accurately accounts for the effect of spherical aberration on both illumination and detection. From the model, we demonstrate that the aberration affects the illumination and detection differently, because the detection operates on the incoherent fluorescence signal, whereas the illumination pattern is generated using the mutual coherence of the illuminating wavefronts. Specifically, we show that the aberration effect on detection causes a depth dependent attenuation, leading to a deviation of the actual transfer function from the standard convolution model. On the other hand, aberration of the illumination pattern causes only a deviation from the standard imaging model, with no increased attenuation at high frequencies. Using the new 3D model, we provide guidelines for optical corrections of aberrations of different complexities, and explain how the proposed corrections simplify the forward model. In particular, we show that it is possible to correct the aberration for both illumination and detection using a deformable mirror only on the detection path of the microscope.

2. Derivation of the depth-variant model

In a widefield imaging system, there are two quantities representing axial distances: (i) the distance of the focal plane of objective lens from the coverslip, which is the user-controlled sectioning variable, z, and (ii) the distance of a given plane in the imaging sample from the coverslip, which is the object depth variable z′. Assuming a depth independent imaging response is equivalent to supposing that the transfer function depends only on the difference between these quantities; this leads to a 3D representation of the imaging transfer function. In reality, the transfer function depends on both the axial variables, and the term “depth variant response” signifies the dependence of the imaging response on the depth variable z′.

In the following, we describe a schematic of a SIM system emphasizing the dependence of the transfer function on the above-mentioned axial variables. We then describe the 4D amplitude transfer function that includes z and z′ as independent variables and that is responsible for making the SIM response depth-variant. From this amplitude transfer function, we formulate the depth dependent imaging transfer function for detection. We then derive the expression for the illumination intensity pattern in terms of z and z′, and finally derive the overall imaging model of a SIM system. In this paper, only the forward model, which represents the raw images in terms of the imaging specimen, is developed. We do not consider the reconstruction problem.

2.1. Description of the imaging set-up

Figure 1(a) depicts the imaging set-up of a 3D SIM. The light exiting the fiber is collimated by the lens CL and directed to a linear transmission phase grating located at the plane P′IM, which diffracts the beam into several orders. A beam block in an intermediate pupil plane (not shown) discards all diffraction orders except orders 0 and ±1. The three beams are refocused by the tube lens TL′ so that each forms an image of the fiber end-face in the back focal plane (PBF) of the objective lens. The beams produced as diffraction orders +1 and −1 are focused near opposing edges of the PBF aperture, and order 0 at its center. The line connecting the fiber images at PBF is orthogonal to the grating lines. The objective lens recollimates the beams and makes them intersect in the objective lens's focal plane, where they interfere to form an illumination intensity pattern with both axial and lateral structure. The overall intensity variation is essentially two-dimensional, with the principal directions of variation being the z-axis and the direction orthogonal to the lines of the diffraction grating. In particular, the 2D intensity pattern at the focal plane of the objective lens (PF) is a demagnified and lowpass-filtered image of the grating.


Fig. 1 3D structured illumination microscope. (a) optical set-up; (b) simplified schematic.


The fluorescent object to be studied is mounted under a glass coverslip and placed below the objective lens. The space between the objective lens and the coverslip is filled with the immersion medium (oil), whose refractive index matches that of glass. Hence the coverslip, even though its thickness is non-negligible, can be represented by a single plane, PI. The refractive index of the object differs from that of the coverslip and oil, and is typically assumed to be uniform. Hence, the entire medium below the objective lens is composed of two regions of differing refractive indices separated at the plane PI. Fluorescence light emitted by the sample in response to the above-mentioned illumination pattern is gathered by the same objective lens and deflected by a dichroic mirror to the tube lens, TL, which focuses the light onto the image sensor located at the image plane PIM.

Figure 1(b) gives an alternative schematic of the SIM system that facilitates the derivation of the depth dependent model in terms of the depth variables z and z′ defined at the beginning of this section. A 3D image acquisition involves a sequence of 2D acquisitions with different values of z, such that PF samples the entire object depth with an appropriate step-size. For each value of z, the acquired 2D image is the sum of contributions from all planes in the object corresponding to different values of z′. It is important to note that the image plane PIM and the grating plane P′IM are optically identical with respect to the sample space, i.e., the transfer functions from these planes to the sample space are identical. It should also be emphasized that this transfer function is mainly determined by the transfer function from PBF to the sample space, since the light propagation from the planes PIM and P′IM to PBF is a simple Fourier transformation, and aberrations therein are negligible. This transfer function is a 4D function, which we denote by T; it is composed of a set of 2D functions representing the transformation from a given sample plane PS to PBF for all relevant values of z and z′. In Section 2.2, we specify this amplitude transfer function T. From T, in Section 2.3, we express the 4D intensity transfer function h that relates the measured image to the emitted fluorescence intensity. The transfer function h has been well characterized [9,10,13–15]; we nevertheless re-derive it in Section 2.3 in the present notation for readability. Then, in Section 2.4, we determine the effect of T on the illumination intensity. Finally, we derive the complete SIM transfer function.

2.2. The transfer function from a sample plane to the back-focal plane

In order to express the transfer function from PS to PBF in Fourier space (see Fig. 1), we first consider the transfer function for z = 0 and z′ = 0. For an ideal objective lens (an objective lens with perfect immersion correction), this is an aperture function in the 2D Fourier space whose value is unity inside a circular region and zero outside. In practice, however, this function contains phase variations inside the circular region that represent the phase aberrations of the objective lens. Denoting the lateral Fourier frequencies by X and Y (note the capitalization), let $P(X,Y)=A(X,Y)\exp(j2\pi Q(X,Y))$ be this transfer function, where Q is the function representing the aberrations and A(X,Y) is the aperture function. The function A(X,Y) is given by

$$A(X,Y)=\begin{cases}1, & \sqrt{X^2+Y^2}<\mathrm{NA}/\lambda,\\ 0, & \text{otherwise},\end{cases}$$
where NA is the numerical aperture of the objective lens. To express the transfer function from PS to PBF for nonzero values of z and z′, first note that a positive value of z causes a reduction in the optical path length through the immersion medium, whereas a positive value of z′ causes an increase in the optical path length through the mounting medium. Hence, the transfer function in the Fourier domain can be written as
$$T(X,Y,z,z')=A(X,Y)\exp\bigl(j2\pi Q(X,Y)\bigr)\exp\Bigl[j2\pi\bigl(z'\,n_{obj}\,N(X,Y,n_{obj})-z\,n_{imm}\,N(X,Y,n_{imm})\bigr)\Bigr],\tag{1}$$
where N(·,·,n) is the phase dispersion function describing the transmission of light over unit distance through a medium of refractive index n. Here, nobj and nimm are the refractive indices of the object and the immersion medium, respectively. The function N is given by [17]
$$N(X,Y,n)=\frac{1}{\lambda}\sqrt{1-(\lambda X/n)^2-(\lambda Y/n)^2},$$
where λ is the wavelength.
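For illustration, the aperture A, the dispersion N, and the transfer function T of Eq. (1) can be evaluated numerically. The following Python sketch is ours, not the authors'; the parameter values (wavelength, NA, refractive indices, grid) are hypothetical, and lateral frequencies beyond the critical angle of the sample medium are clamped to zero (evanescent components):

```python
import numpy as np

# Hypothetical parameters (not from the paper): oil immersion, aqueous sample.
wavelength = 0.52e-6   # emission wavelength lambda [m]
NA = 1.4               # numerical aperture
n_imm = 1.515          # immersion (oil) refractive index
n_obj = 1.33           # mean refractive index of the mounting medium

def N_disp(X, Y, n, lam=wavelength):
    """Phase dispersion N(X,Y,n): axial spatial frequency for propagation
    through a medium of index n (evanescent region clamped to 0)."""
    arg = 1.0 - (lam * X / n) ** 2 - (lam * Y / n) ** 2
    return np.sqrt(np.maximum(arg, 0.0)) / lam

def aperture(X, Y):
    """A(X,Y): unit disc of radius NA/lambda in the lateral frequency plane."""
    return (np.sqrt(X**2 + Y**2) < NA / wavelength).astype(float)

def T(X, Y, z, zp, Q=None):
    """Amplitude transfer function T(X,Y,z,z') of Eq. (1);
    Q(X,Y) is the residual objective aberration (zero if None)."""
    phase = zp * n_obj * N_disp(X, Y, n_obj) - z * n_imm * N_disp(X, Y, n_imm)
    if Q is not None:
        phase = phase + Q(X, Y)
    return aperture(X, Y) * np.exp(2j * np.pi * phase)

# One evaluation: focal plane 2 um below the coverslip, object plane at 3 um.
f = np.linspace(-3e6, 3e6, 256)
X, Y = np.meshgrid(f, f)
pupil = T(X, Y, z=2e-6, zp=3e-6)
```

Note that the modulus of the resulting pupil equals A(X,Y) everywhere; the index mismatch enters only through the phase.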

T(X,Y,z,z′) represents the wavefront at PBF when a point source is placed at PS. By reciprocity, it is also the Fourier transform of the wavefront at PS if the image plane PIM is illuminated with a point source. Equivalently, it is the Fourier transform of the wavefront at PS if the PBF is illuminated by a plane wave. T(X,Y, z, z′) forms the basis for the derivation of the imaging model with or without structured illumination. Note that T is in a mixed representation, where lateral variables are in the Fourier space, and the axial variables are in real space.

2.3. The detection transfer function

Let Sa(x,y,z′) be the emitted fluorescence amplitude distribution and let $\hat{S}_a(X,Y,z')$ be its section-wise 2D Fourier transform. For a given value of z, the wavefront amplitude distribution at PBF is given by

$$W_{BF}(X,Y,z)=\int_{z'}\hat{S}_a(X,Y,z')\,T(X,Y,z,z')\,dz'.$$
Let R(x,y,z) be the corresponding intensity image at PIM; this is the squared modulus of the inverse Fourier transform of WBF(X,Y,z). Denoting the section-wise inverse Fourier transform in the xy plane by $\mathcal{F}^{-1}_{xy}$, R(x,y,z) can be represented in terms of the object intensity as
$$R(x,y,z)=\bigl|\mathcal{F}^{-1}_{xy}[W_{BF}(X,Y,z)]\bigr|^2=\int_{z'}g(x,y,z,z')\oplus_{xy}S(x,y,z')\,dz',\tag{2}$$
where
$$g(x,y,z,z')=\bigl|\mathcal{F}^{-1}_{xy}[T(X,Y,z,z')]\bigr|^2,\qquad S(x,y,z')=|S_a(x,y,z')|^2,$$
with ⊕xy representing section-wise 2D convolutions. Equation (2) is the depth-variant convolution model, and g(x,y,z,z′) is the depth variant PSF (DV-PSF).

In this notation, when the system is assumed to be depth independent, for example when nimm = nobj, the emission response is a function of only the difference z − z′. In other words, depth invariance corresponds to the following relation:

$$g(x,y,z,z')=g(x,y,z-z',0),$$
and Eq. (2) becomes a simple 3D convolution given by
$$R(x,y,z)=g_0(x,y,z)\oplus S(x,y,z),\tag{3}$$
where g0(x,y,z) is the standard depth independent PSF given by g0(x,y,z) = g(x,y,z, 0).

In order to derive the overall depth dependent response in a form comparable with the conventional depth invariant model, we first transform Eq. (2) so that it most closely resembles a standard convolution. To this end, we apply a z′-dependent shift on z, such that the shifted variable satisfies the following condition: z = 0 represents the position of the focal plane PF for which the 2D image of the object plane at z′ reaches the image plane PIM with minimum distortion. This shift is known as the focus shift [18]. It is known to be proportional to z′, with a proportionality factor that depends on the refractive indices involved. Let the transformed DV-PSF be given by

$$h(x,y,z,z')=g(x,y,z+az',z'),$$
where a is the focus shift factor (designated as the “detection focus shift”). The forward model in the new representation becomes
$$R(x,y,z)=\int_{z'}h(x,y,z-az',z')\oplus_{xy}S(x,y,z')\,dz'.\tag{4}$$
The above equation is in a form suitable for incorporating the illumination pattern in order to derive the overall imaging model. In the following sections, we will use this expression to derive the final imaging model combining the effects of aberration on the illumination and the detection. We denote the operation in the above equation by R(x,y,z) = DVC(S(x,y,z), h(x,y,z,z′)), where DVC stands for depth variant convolution.
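For concreteness, Eq. (4) can be implemented numerically as a sum of slice-wise 2D FFT convolutions over the object depths. The following sketch is a hypothetical discretization (the shared grid spacing for z and z′, the rounding of the focus shift to grid indices, and the circular boundary handling are our assumptions, not the paper's):

```python
import numpy as np

def dvc(S, h, a, dz):
    """Depth variant convolution of Eq. (4):
    R(x,y,z) = integral over z' of h(x,y, z - a*z', z') (*)_{xy} S(x,y,z') dz'.

    S : (Nz, Ny, Nx) object intensity; slice index = z' grid index
    h : (Nzp, Nu, Ny, Nx) DV-PSF; h[izp, u] is the 2D PSF for object depth
        z' = izp*dz at axial offset (z - a*z') = u*dz
    a : detection focus shift
    """
    Nz, Ny, Nx = S.shape
    R = np.zeros((Nz, Ny, Nx))
    for iz in range(Nz):                    # output focal positions z
        for izp in range(Nz):               # object depths z'
            u = iz - int(round(a * izp))    # axial-offset index of z - a*z'
            if 0 <= u < h.shape[1]:
                # slice-wise 2D convolution via FFT (circular boundary)
                conv = np.fft.ifft2(np.fft.fft2(h[izp, u]) *
                                    np.fft.fft2(S[izp])).real
                R[iz] += conv * dz
    return R
```

With a delta-function PSF concentrated at zero axial offset and a = 1, the operator reduces to the identity, which is a convenient correctness check.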

2.4. The effect of depth dependent spherical aberration on the illumination pattern

With reference to Fig. 1(b), let dk be the vector in the lateral plane representing the direction orthogonal to the grating lines. The complex amplitudes of the selected plane waves exiting the grating (P′IM) can be represented by $\{u_k^0\exp(j(2\pi v_0^T\mathbf{x}+\phi_k^0)),\;u_k^+\exp(j(2\pi (v_k^+)^T\mathbf{x}+\phi_k^++\phi_s)),\;u_k^-\exp(j(2\pi (v_k^-)^T\mathbf{x}+\phi_k^--\phi_s))\}$, where $\mathbf{x}=[x\;y\;z]^T$ represents the 3D spatial position. Here, $\{v_0,v_k^+,v_k^-\}$ are the wave vectors. The vector $v_0$ is parallel to the z-axis and is of the form $v_0=[0\;\;0\;\;1/\lambda]^T$, where λ is the wavelength of the illumination. The remaining vectors are of the form

$$v_k^+=[X_k\;\;Y_k\;\;Z_k]^T,\qquad v_k^-=[-X_k\;\;-Y_k\;\;Z_k]^T,$$
such that $X_k^2+Y_k^2+Z_k^2=(1/\lambda)^2$. The angle of the sub-vector $[X_k\;\;Y_k]^T$ coincides with that of the vector dk. The phases $\{\phi_k^0,\phi_k^+,\phi_k^-\}$ are the instrumentation-dependent (unknown) phase shifts that have to be estimated from the data, and $\phi_s$ is the phase shift introduced for each 3D stack acquisition by controlling the position of the grating. Finally, the vectors $\{u_k^0,u_k^+,u_k^-\}$ represent the polarizations of the plane waves. The tube lens TL′ performs a Fourier transform of the image of the grating at P′IM and projects it onto the back-focal plane PBF of the objective lens. Assuming that the fiber end-face can be approximated by a delta function, and ignoring the aperture effect on the plane waves, the illumination wavefront at the back-focal plane PBF can be expressed as
$$E_{BF,k}(X,Y)=u_k^+\,\delta(X-X_k,Y-Y_k)\,e^{j(\phi_k^++\phi_s)}+u_k^-\,\delta(X+X_k,Y+Y_k)\,e^{j(\phi_k^--\phi_s)}+u_k^0\,e^{j\phi_k^0}.\tag{5}$$

The objective lens performs an inverse Fourier transform of the wavefront at PBF and projects it into the sample space as a 3D intensity pattern that depends on both z′ and z. As mentioned before, the principal directions of intensity variation are the z-axis and the direction orthogonal to the lines of the diffraction grating. For each direction k, five 3D stacks are acquired with different values of the phase ϕs, and this process is repeated for directions k = 0,1,2 such that the directional vectors {(Xk,Yk), k = 0,1,2} are 120° apart in the xy plane. Let Ek(X,Y,z,z′) represent the section-wise 2D Fourier transform of the illumination of the object plane located at z′ with the focal plane positioned at z. This function is determined by the transfer function T(X,Y,z,z′) and is given by Ek(X,Y,z,z′) = EBF,k(X,Y) T(X,Y,z,z′).

Finding the effect of depth dependent spherical aberration is equivalent to finding the illumination intensity as a function of the four variables x, y, z, and z′; in contrast, as we will show, ignoring the depth dependent aberration is equivalent to assuming the illumination intensity to be a function of three variables only, i.e., a function of x, y, and z − z′. Let L′(x,y,z,z′) be the illumination intensity function, and let $\hat{L}'(X,Y,z,z')$ be its partial Fourier transform taken along the x and y directions. It is given by

$$\hat{L}'(X,Y,z,z')=E_k(X,Y,z,z')\oplus_{X,Y}\bar{E}_k(X,Y,z,z'),\tag{6}$$
where Ēk is the complex conjugate of Ek, and ⊕X,Y represents the convolution only in the variables (X,Y). Here Ek(X,Y, z, z′) = T(X,Y,z,z′)EBF,k(X,Y) with T(X,Y, z, z′) and EBF,k(X,Y) being specified by Eqs. (1) and (5).

In Appendix A, we show that for a sinusoidal grating (when only the 0th and ±1st orders are projected onto the back focal plane), L′(x,y,z,z′) can be expressed as

$$L'(x,y,z,z')=3+4m_{0k}\cos\bigl(2\pi(X_kx+Y_ky)+\varphi_k-\phi_s\bigr)\cos\bigl(2\pi f'_z(z-a_fz')+\varphi_{kz}\bigr)+2m_{1k}\cos\bigl(2\pi(2X_kx+2Y_ky)+2\varphi_k-2\phi_s\bigr),\tag{7}$$
where
$$a_f=f_z/f'_z,$$
$$f_z=\frac{n_{obj}}{\lambda}\Bigl[1-\sqrt{1-(\lambda X_k/n_{obj})^2-(\lambda Y_k/n_{obj})^2}\Bigr],\tag{8}$$
$$f'_z=\frac{n_{imm}}{\lambda}\Bigl[1-\sqrt{1-(\lambda X_k/n_{imm})^2-(\lambda Y_k/n_{imm})^2}\Bigr],\tag{9}$$
$$m_{0k}=\langle u_k^0,u_k^+\rangle=\langle u_k^0,u_k^-\rangle;\qquad m_{1k}=\langle u_k^-,u_k^+\rangle,$$
$$\varphi_k=\tfrac{1}{2}\bigl(\phi_k^--\phi_k^++2\pi Q(-X_k,-Y_k)-2\pi Q(X_k,Y_k)\bigr),$$
$$\varphi_{kz}=\tfrac{1}{2}\bigl(\phi_k^-+\phi_k^++2\pi Q(-X_k,-Y_k)+2\pi Q(X_k,Y_k)\bigr)-\phi_k^0-2\pi Q(0,0).$$
The factor af can be considered as the illumination focus shift. In the absence of depth dependent aberration, for example when nimm = nobj, af becomes unity, and the axial component of the illumination pattern depends only on the difference z − z′, as assumed in the standard methods of processing SIM data.
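The axial frequencies and the illumination focus shift of Eqs. (8) and (9) are easy to evaluate numerically. The values below (wavelength, NA, indices, and the pupil position of the ±1st-order beams) are illustrative assumptions of ours, chosen for an oil-immersion objective and an aqueous mounting medium:

```python
import numpy as np

# Hypothetical values (not from the paper).
lam = 488e-9                     # illumination wavelength [m]
n_obj, n_imm = 1.33, 1.515       # mounting and immersion indices
NA = 1.4
Xk, Yk = 0.9 * NA / lam, 0.0     # +1st-order beam position near the pupil edge

def axial_freq(n):
    """Axial frequency of Eqs. (8)/(9) for a medium of refractive index n."""
    return (n / lam) * (1.0 - np.sqrt(1.0 - (lam * Xk / n) ** 2
                                          - (lam * Yk / n) ** 2))

f_z  = axial_freq(n_obj)     # object-side axial frequency f_z   (Eq. (8))
f_zp = axial_freq(n_imm)     # immersion-side axial frequency f'_z (Eq. (9))
a_f  = f_z / f_zp            # illumination focus shift

print(f"f_z = {f_z:.3e} 1/m, f'_z = {f_zp:.3e} 1/m, a_f = {a_f:.3f}")
```

For these illustrative indices (n_obj < n_imm), a_f comes out noticeably larger than 1, i.e., the axial period of the illumination pattern in the sample differs appreciably from the matched-index case.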

Now we consider the effect of the finite size of the fiber end-face. First, note that light from the laser source typically passes through a phase scrambler before reaching the diffraction grating, and hence each pair of points on the fiber end-face is mutually incoherent. However, each photon exiting the fiber is split by the grating into the three orders, providing the mutual coherence required to create the interference pattern of Eq. (7). Further, for all such triplets, the relative distances and the phase differences among the three points are the same. Consequently, the illumination patterns generated by each point of the fiber end-face are identical, and hence the present analysis equivalently accommodates the finite size of the fiber.

2.5. The complete imaging model

Under structured illumination, the emitted fluorescence intensity is the product of the object dye structure and the illumination intensity. As discussed above, due to the depth dependent spherical aberration, the illumination intensity depends on z and z′ separately, not only on their difference. The fluorescence intensity is given by

$$FI(x,y,z,z')=L'(x,y,z,z')\,S(x,y,z'),$$
where S(x,y,z′) is the actual fluorescence dye structure. To obtain the 3D imaging model, FI(x,y,z, z′) has to be substituted in place of S(x,y,z′) in Eq. (4). Before expressing the model, we need to define the following:
$$P_z(z)=2\pi f'_z(a-a_f)z.$$
With this definition, we show in the appendix B that the imaging model can be expressed as
$$\begin{aligned}R(x,y,z)=\;&3\,\mathrm{DVC}\bigl(S(x,y,z),\,h(x,y,z,z')\bigr)\\&+4m_{0k}\,\mathrm{DVC}\bigl(S_k(x,y,z)\cos(P_z(z)),\,h(x,y,z,z')\cos(2\pi f'_zz-\varphi_{kz})\bigr)\\&-4m_{0k}\,\mathrm{DVC}\bigl(S_k(x,y,z)\sin(P_z(z)),\,h(x,y,z,z')\sin(2\pi f'_zz-\varphi_{kz})\bigr)\\&+2m_{1k}\,\mathrm{DVC}\bigl(S_{k2}(x,y,z),\,h(x,y,z,z')\bigr),\end{aligned}\tag{10}$$
where
$$S_k(x,y,z)=\cos\bigl(2\pi(X_kx+Y_ky)+\varphi_k-\phi_s\bigr)S(x,y,z),\qquad S_{k2}(x,y,z)=\cos\bigl(2\pi(2X_kx+2Y_ky)+2\varphi_k-2\phi_s\bigr)S(x,y,z),$$
and DVC(·,·) represents the depth dependent detection model defined in Eq. (4).

Figure 2 gives the flow chart for the complete depth variant SIM model of Eq. (10). The first term in Eq. (10) is represented by branch A in the flow chart; it expresses the widefield imaging operation with depth dependent aberration as developed in [15]. The fourth term is represented by branch B; it is the widefield detection of the laterally modulated signal $S_{k2}(x,y,z)=S(x,y,z)\cos(2P(x,y))$ with $P(x,y)=2\pi(X_kx+Y_ky)+\varphi_k-\phi_s$. Note that the modulation frequency here is twice that of the wavefront at the back-focal plane PBF. This term is responsible for the doubling of the lateral resolution. The second and third terms correspond to the axial resolution extension. They are represented by branches C.1 and C.2 and express the depth variant convolutions by the functions $h(x,y,z,z')\cos(2\pi f'_zz-\varphi_{kz})$ and $h(x,y,z,z')\sin(2\pi f'_zz-\varphi_{kz})$, where the multiplications by $\cos(2\pi f'_zz-\varphi_{kz})$ and $\sin(2\pi f'_zz-\varphi_{kz})$ provide the axial resolution extension. The inputs for these two terms are derived from $S_k(x,y,z)=S(x,y,z)\cos(P(x,y))$ by multiplying with the axial functions $\cos(P_z(z))$ and $\sin(P_z(z))$ that represent the effect of aberration on the illumination, where $P_z(z)=2\pi f'_z(a-a_f)z$. Note that the function $S_k(x,y,z)$ is obtained by laterally modulating the original signal with the frequency of the illumination amplitude at the back-focal plane. Further, recall that a is the detection focus shift, and $a_f$ is the illumination focus shift given by $a_f=f_z/f'_z$, with the frequencies $f_z$ and $f'_z$ given by Eqs. (8) and (9).
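The cos/sin pair feeding branches C.1 and C.2 arises from an elementary splitting of the axial phase of the illumination pattern of Eq. (7): for an object plane at depth z′, writing $2\pi f'_z(z-a_fz')=2\pi f'_z(z-az')+P_z(z')$ and applying the angle-sum identity gives

$$\cos\bigl(2\pi f'_z(z-a_fz')+\varphi_{kz}\bigr)=\cos\bigl(2\pi f'_z(z-az')+\varphi_{kz}\bigr)\cos\bigl(P_z(z')\bigr)-\sin\bigl(2\pi f'_z(z-az')+\varphi_{kz}\bigr)\sin\bigl(P_z(z')\bigr),$$

so the depth dependent part of the illumination factors out as the weights $\cos(P_z(z'))$ and $\sin(P_z(z'))$ on the two axially modulated detection kernels, with the remaining factors absorbed into the kernels themselves.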


Fig. 2 Flow diagram of SIM imaging model with depth dependent spherical aberration.


When the refractive indices match, i.e., when nobj = nimm, the DV-PSF h(x,y,z, z′) in Eq. (10) becomes independent of z′ and hence the depth variant convolution DVC becomes a standard convolution. Further, the detection focus shift becomes a = 1, and, with reference to Eqs. (8) and (9), the illumination focus shift also becomes af = 1. As a result, Pz(z) becomes zero and hence the branch C.2 disappears; further, the multiplicative factor in the branch C.1 becomes equal to 1. The equivalent imaging equation is given by

$$R(x,y,z)=3\,h(x,y,z)\oplus S(x,y,z)+4m_{0k}\,h_c(x,y,z)\oplus S_k(x,y,z)+2m_{1k}\,h(x,y,z)\oplus S_{k2}(x,y,z),\tag{11}$$
where h(x,y,z) = h(x,y,z,0) and $h_c(x,y,z)=h(x,y,z)\cos(2\pi f'_zz-\varphi_{kz})$. This is indeed the model assumed in the original SIM paper [3].

We will now qualitatively compare the effects of depth dependent aberration on the detection and on the illumination. To this end, we compare Eq. (10) with the depth invariant imaging model of Eq. (11). The main difference is that the detection response is handled by a simple convolution with h(x,y,z) in the depth-invariant model, and by the depth-variant convolution with h(x,y,z,z′) in the complete model. It is well known that this difference amounts to more than a difference in complexity. For z′ > 10 μm, the Fourier magnitude of the 3D transfer function falls off much more steeply, meaning that the signal strength of the high frequency components is much lower than in the case without spherical aberration [9]. Hence, spherical aberration will lead to loss of signal even if the exact model is used in the reconstruction. Next, to analyze the effect of aberration on the illumination alone, we replace the DVC by the convolution with h(x,y,z). We get

$$\begin{aligned}R(x,y,z)=\;&3\,h(x,y,z)\oplus S(x,y,z)+4m_{0k}\,h_c(x,y,z)\oplus\bigl[\cos(P_z(z))S_k(x,y,z)\bigr]\\&-4m_{0k}\,h_s(x,y,z)\oplus\bigl[\sin(P_z(z))S_k(x,y,z)\bigr]+2m_{1k}\,h(x,y,z)\oplus S_{k2}(x,y,z),\end{aligned}\tag{12}$$
where hs(x,y,z) = h(x,y,z) sin(2πfzz – φkz). Comparing Eq. (12) with Eq. (11), we observe that the differing terms are given by
$$M_1(\cdot)=h_c(x,y,z)\oplus(\cdot),$$
$$M_2(\cdot)=h_c(x,y,z)\oplus\bigl[\cos(P_z(z))\,(\cdot)\bigr]-h_s(x,y,z)\oplus\bigl[\sin(P_z(z))\,(\cdot)\bigr],$$
where M1(•) is the term that acts on Sk(x,y,z) in the model of Eq. (11), and M2(•) is the term that acts on the same signal in the model of Eq. (12). It is straightforward to verify that the impulse responses of M1(•) and M2(•) at any depth have identical Fourier magnitudes (shown in Appendix C). The only difference is that M2 has a higher computational complexity. This shows that spherical aberration of the illumination causes only an increase in the complexity of the imaging model, without any additional attenuation of the high frequencies. Of course, illumination aberration will still lead to a distorted reconstruction if the standard depth-invariant model is incorrectly assumed.
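The equal-magnitude property of M1 and M2 can be checked numerically in a 1D axial toy model: for an impulse at a given depth, the M2 response equals the M1 response with the carrier phase shifted by Pz evaluated at that depth, which leaves the Fourier magnitude essentially unchanged. All quantities below (the Gaussian envelope standing in for h, the carrier frequency, the phases) are illustrative assumptions of ours:

```python
import numpy as np

# 1D axial demonstration that the M1 and M2 impulse responses share the
# same Fourier magnitude. All parameter values are hypothetical.
z = np.linspace(-5, 5, 1024)
f_zp, phi_kz = 1.3, 0.4          # carrier frequency f'_z and phase (arbitrary)
Pz0 = 0.9                        # P_z evaluated at the impulse depth

h  = np.exp(-z**2)               # stand-in detection envelope h(z)
hc = h * np.cos(2*np.pi*f_zp*z - phi_kz)
hs = h * np.sin(2*np.pi*f_zp*z - phi_kz)

# Impulse at depth z0: M1 -> hc(z-z0);
# M2 -> cos(Pz0)*hc(z-z0) - sin(Pz0)*hs(z-z0) = h(z-z0)*cos(carrier + Pz0).
m1 = hc
m2 = np.cos(Pz0) * hc - np.sin(Pz0) * hs

mag1 = np.abs(np.fft.fft(m1))
mag2 = np.abs(np.fft.fft(m2))
print(np.max(np.abs(mag1 - mag2)))   # equal up to numerical error
```

The residual difference comes only from the (tiny) overlap of the two carrier sidebands, so the two magnitudes agree to high numerical accuracy here.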

3. Imaging with adaptive optics

In this section, we analyze the effect of the adaptive optics schemes proposed in [11,12] on the imaging model, and show how the model simplifies under these schemes. An electronically controlled deformable mirror is placed in a complementary plane that is optically identical to PBF, and the mirror shape is controlled in a z-dependent way such that the spherical aberration is compensated. There are two types of schemes for compensating depth dependent spherical aberration, as explained below.

In the first scheme, the physical position of PF is kept fixed at PI (see Fig. 1), and z-sectioning is performed only by controlling the shape of the mirror. For each value of z, the mirror shape is controlled to match the function $DM_1(X,Y,z)=-z\,n_{obj}N(X,Y,n_{obj})$ [11]. The resultant transfer function is the product of T(X,Y,0,z′) and exp(j2πDM1(X,Y,z)), which is given by

$$T_{ao}(X,Y,z,z')=A(X,Y)\exp\bigl(j2\pi Q(X,Y)\bigr)\exp\bigl(j2\pi N(X,Y,n_{obj})\,n_{obj}(z'-z)\bigr).\tag{15}$$
In the second scheme, PF is physically positioned at distance z from PI, and the z-dependent function for the deformable mirror is set to [11]
$$DM_2(X,Y,z)=z\bigl(n_{imm}N(X,Y,n_{imm})-n_{obj}N(X,Y,n_{obj})\bigr).$$
We generalize the above expression by adding a new parameter as follows:
$$DM_2(X,Y,z)=z\Bigl(n_{imm}N(X,Y,n_{imm})-\frac{1}{\tilde{a}}\,n_{obj}N(X,Y,n_{obj})\Bigr),$$
where ã is a design parameter. The resultant transfer function is now the product of the original transfer function T(X,Y,z,z′) and exp(j2πDM2(X,Y,z)), which is given by
$$T_{ao}(X,Y,z,z')=A(X,Y)\exp\bigl(j2\pi Q(X,Y)\bigr)\exp\Bigl(j2\pi N(X,Y,n_{obj})\,n_{obj}\bigl(z'-\tfrac{1}{\tilde{a}}z\bigr)\Bigr).\tag{16}$$
Here, the factor ã is a free parameter whose significance will be explained in the next paragraph. Note that the transfer function in Eq. (15) can be considered as a special case of the one in Eq. (16), and hence it is sufficient to analyze the effect of the latter alone on the imaging model. Our goal is now to find the resulting modifications in the model of Eq. (10) when the above adaptive optics scheme is used. To this end, we need to find the effect of replacing the amplitude transfer function T(X,Y,z,z′) with the new transfer function Tao(X,Y, z, z′) in the derivation of Eq. (10).

To derive the effect of adaptive optics on the detection, we first write the detection equation based on the new coherent transfer function Tao(X,Y, z, z′):

$$R(x,y,z)=\int_{z'}\bigl|\mathcal{F}^{-1}_{xy}[T_{ao}(X,Y,z,z')]\bigr|^2\oplus_{xy}S(x,y,z')\,dz'.$$
By using Eq. (16), the above equation can be written as
$$R(x,y,z)=\int_{z'}h_{ao}\bigl(x,y,\tfrac{1}{\tilde{a}}z-z'\bigr)\oplus_{xy}S(x,y,z')\,dz',$$
where
$$h_{ao}(x,y,z)=\Bigl|\mathcal{F}^{-1}_{xy}\bigl[A(X,Y)\exp\bigl(j2\pi Q(X,Y)\bigr)\exp\bigl(j2\pi N(X,Y,n_{obj})\,n_{obj}z\bigr)\bigr]\Bigr|^2.$$
To find the implication of this modified detection equation for the imaging model, we compare it with the detection equation used in the derivation of the original imaging model, Eq. (4). The comparison implies that incorporating the effect of adaptive optics amounts to replacing the DVC with a convolution by hao, and replacing the detection focus shift a by the parameter ã. The parameter ã hence allows the user to choose the detection focus shift. The imaging model in Eq. (10) now becomes
$$\begin{aligned}R(x,y,z)=\;&3\,h_{ao}(x,y,z)\oplus S(x,y,z)+4m_{0k}\,h_{ao,c}(x,y,z)\oplus\bigl[\cos(P_z(z))S_k(x,y,z)\bigr]\\&-4m_{0k}\,h_{ao,s}(x,y,z)\oplus\bigl[\sin(P_z(z))S_k(x,y,z)\bigr]+2m_{1k}\,h_{ao}(x,y,z)\oplus S_{k2}(x,y,z),\end{aligned}\tag{17}$$
where $P_z(z)=2\pi f'_z(\tilde{a}-a_f)z$, $h_{ao,c}(x,y,z)=\cos(2\pi f'_zz-\varphi_{kz})\,h_{ao}(x,y,z)$, and $h_{ao,s}(x,y,z)=\sin(2\pi f'_zz-\varphi_{kz})\,h_{ao}(x,y,z)$.

Contrary to general belief, it is possible to make the imaging model entirely free of depth dependent aberration by applying adaptive optics only on the detection path. To show this, we consider the model resulting from applying adaptive optics on the detection, given in Eq. (17). If we set the user-defined focus shift ã equal to the illumination focus shift af, then Pz(z) becomes zero, and Eq. (17) becomes

$$R(x,y,z)=3\,h_{ao}(x,y,z)\oplus S(x,y,z)+4m_{0k}\,h_{ao,c}(x,y,z)\oplus S_k(x,y,z)+2m_{1k}\,h_{ao}(x,y,z)\oplus S_{k2}(x,y,z),$$
which is clearly depth invariant. This shows that it is possible to compensate for the effect of depth dependent aberration on both the illumination and the detection by applying adaptive optics only on the detection path of the microscope.

Interestingly, it is also possible to correct the aberration for the illumination alone by using a single adaptive element. To this end, we construct a complementary back-focal plane for the illumination and apply the following z-dependent function as an adaptive mirror:

$$DM_i(X,Y,z)=\delta(X,Y)\,\hat{f}_zz.$$
Note that this is equivalent to applying a single adaptive element that is approximately a point at the center of the back-focal plane. Applying this function on the back-focal plane is equivalent to applying a z-dependent phase factor exp(j2πf̂zz) to the zeroth order illuminating plane wave, where f̂z is the design parameter. The resultant coherent transfer function is given by
$$T_{ill}(X,Y,z,z')=A(X,Y)\exp\bigl(j2\pi\hat{f}_z\delta(X,Y)z\bigr)\exp\bigl(j2\pi Q(X,Y)\bigr)\exp\Bigl[j2\pi\bigl(z'\,n_{obj}N(X,Y,n_{obj})-z\,n_{imm}N(X,Y,n_{imm})\bigr)\Bigr],$$
where the subscript ill indicates that this transfer function is applied only on the illumination.

By following the same steps as in the derivation of Eq. (7), it can be shown that the resultant illumination pattern is obtained by replacing f′z with f′z + f̂z in Eq. (7). As a result, Pz(z) in Eq. (10), which represents the aberration in the illumination pattern, becomes

$$P_z(z)=2\pi(f'_z+\hat{f}_z)\Bigl(a-\frac{f_z}{f'_z+\hat{f}_z}\Bigr)z.$$
This means that the illumination aberration can be eliminated by setting $\hat{f}_z=f_z/a-f'_z$. Consequently, the imaging model becomes
$$\begin{aligned}R(x,y,z)=\;&3\,\mathrm{DVC}\bigl(S(x,y,z),\,h(x,y,z,z')\bigr)\\&+4m_{0k}\,\mathrm{DVC}\bigl(S_k(x,y,z),\,h(x,y,z,z')\cos\bigl(2\pi(f_z/a)z-\varphi_{kz}\bigr)\bigr)\\&+2m_{1k}\,\mathrm{DVC}\bigl(S_{k2}(x,y,z),\,h(x,y,z,z')\bigr),\end{aligned}$$
which reveals that the aberration in the illumination has been eliminated. If the images obtained with this type of aberration correction are processed assuming the standard depth-invariant model, the resolution loss at larger depths is incurred only through the aberration effect on the detection; there are no other forms of distortion of the kind normally encountered in systems without any aberration correction. To handle the finite size of the optical fiber, it suffices to add a phase equal to f̂zz to all points of the central image of the fiber end-face at the back-focal plane (see the last paragraph of Section 2.4). Hence the adaptive element should be flat, with a size equal to or greater than that of the fiber image at the back focal plane.
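As a quick numerical sanity check of this cancellation, with illustrative (hypothetical) values of fz, f′z, and the detection focus shift a, the choice f̂z = fz/a − f′z drives the residual illumination phase of the corrected system to zero at every depth:

```python
import numpy as np

# Hypothetical values (not from the paper), for illustration only.
f_z, f_zp = 1.85e6, 1.38e6    # axial frequencies f_z and f'_z [1/m]
a = 1.18                      # detection focus shift

f_hat = f_z / a - f_zp        # design value for the adaptive element

def Pz(zp):
    """Residual phase 2*pi*(f'_z+f_hat)*(a - f_z/(f'_z+f_hat))*z'
    of the corrected system."""
    f_tot = f_zp + f_hat
    return 2 * np.pi * f_tot * (a - f_z / f_tot) * zp

print(Pz(10e-6))   # ~0 (up to floating point) at any depth
```

Since f′z + f̂z = fz/a by construction, the bracketed factor vanishes identically, independent of depth.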

4. Conclusions

We have developed an imaging model for structured illumination microscopes that takes into account the full effect of depth dependent spherical aberration caused by the refractive index mismatch between the immersion and mounting media. The model explicitly and distinctly reveals the effects of the aberration on the detection response and on the illumination pattern, and allows their contributions to signal loss to be compared by implementing computational corrections independently for each effect. We demonstrated that depth dependent spherical aberration leads to loss of signal only through its effect on the detection response of the system, while its effect on the illumination merely increases the computational complexity of the forward model. Thus, signal loss due to illumination aberration occurs only if an incorrect model is used in the reconstruction process. This contrasts with detection aberrations, which lead to loss of signal even when the exact depth variant model is used during data processing.

Appendix A

Substituting Eqs. (5) and (1) in E_k(X,Y,z,z′) = E_BF,k(X,Y)T(X,Y,z,z′) yields

$$E_k(X,Y,z,z') = u_{k+}\,\delta(X-X_k,\,Y-Y_k)\exp\!\Big(j\tfrac{2\pi}{\lambda}S_k n_{\mathrm{obj}} z\Big)\exp\!\Big(-j\tfrac{2\pi}{\lambda}S'_k n_{\mathrm{imm}} z'\Big)\,a + u_{0k}\,\delta(X,Y)\exp\!\Big(j\tfrac{2\pi}{\lambda} n_{\mathrm{obj}} z\Big)\exp\!\Big(-j\tfrac{2\pi}{\lambda} n_{\mathrm{imm}} z'\Big)\,b + u_{k-}\,\delta(X+X_k,\,Y+Y_k)\exp\!\Big(j\tfrac{2\pi}{\lambda}S_k n_{\mathrm{obj}} z\Big)\exp\!\Big(-j\tfrac{2\pi}{\lambda}S'_k n_{\mathrm{imm}} z'\Big)\,c$$
where
$$S_k = \sqrt{1-(\lambda X_k/n_{\mathrm{obj}})^2-(\lambda Y_k/n_{\mathrm{obj}})^2}$$
$$S'_k = \sqrt{1-(\lambda X_k/n_{\mathrm{imm}})^2-(\lambda Y_k/n_{\mathrm{imm}})^2}$$
$$a = \exp\big(j\phi_s + j\phi_{k+} + j2\pi Q(X_k,Y_k)\big)$$
$$b = \exp\big(j\phi_{k0} + j2\pi Q(0,0)\big)$$
$$c = \exp\big(-j\phi_s + j\phi_{k-} + j2\pi Q(-X_k,-Y_k)\big)$$

To compute L(x,y,z,z′) explicitly, we first rewrite Eq. (18) as follows:

$$E_k(X,Y,z,z') = u_{k+}\,\delta(X-X_k,\,Y-Y_k)\,p(z,z')\,a + u_{0k}\,\delta(X,Y)\,p_0(z,z')\,b + u_{k-}\,\delta(X+X_k,\,Y+Y_k)\,p(z,z')\,c$$
where
$$p(z,z') = \exp\!\Big(j\tfrac{2\pi}{\lambda}S_k n_{\mathrm{obj}} z\Big)\exp\!\Big(-j\tfrac{2\pi}{\lambda}S'_k n_{\mathrm{imm}} z'\Big), \qquad p_0(z,z') = \exp\!\Big(j\tfrac{2\pi}{\lambda} n_{\mathrm{obj}} z\Big)\exp\!\Big(-j\tfrac{2\pi}{\lambda} n_{\mathrm{imm}} z'\Big).$$
Substituting Eq. (19) in Eq. (6) gives
$$\hat{L}(X,Y,z,z') = 3 + \delta(X-X_k,\,Y-Y_k)\big[m_{0k}\,\bar{a}b\,\bar{p}(z,z')\,p_0(z,z') + m_{0k}\,\bar{b}c\,p(z,z')\,\bar{p}_0(z,z')\big] + \delta(X+X_k,\,Y+Y_k)\big[m_{0k}\,a\bar{b}\,p(z,z')\,\bar{p}_0(z,z') + m_{0k}\,b\bar{c}\,\bar{p}(z,z')\,p_0(z,z')\big] + \delta(X-2X_k,\,Y-2Y_k)\,m_{1k}\,\bar{a}c + \delta(X+2X_k,\,Y+2Y_k)\,m_{1k}\,a\bar{c},$$
where
$$m_{0k} = \langle u_{0k}, u_{k+}\rangle = \langle u_{0k}, u_{k-}\rangle; \qquad m_{1k} = \langle u_{k-}, u_{k+}\rangle$$
The above equation can be re-written as
$$\hat{L}(X,Y,z,z') = 3 + \delta(X-X_k,\,Y-Y_k)\big[m_{0k}\,\bar{a}b\,e^{j2\pi(f_z z - f'_z z')} + m_{0k}\,\bar{b}c\,e^{-j2\pi(f_z z - f'_z z')}\big] + \delta(X+X_k,\,Y+Y_k)\big[m_{0k}\,a\bar{b}\,e^{-j2\pi(f_z z - f'_z z')} + m_{0k}\,b\bar{c}\,e^{j2\pi(f_z z - f'_z z')}\big] + \delta(X-2X_k,\,Y-2Y_k)\,m_{1k}\,\bar{a}c + \delta(X+2X_k,\,Y+2Y_k)\,m_{1k}\,a\bar{c},$$
where
$$f_z = \frac{n_{\mathrm{obj}}}{\lambda}\big[1 - S_k\big] = \frac{n_{\mathrm{obj}}}{\lambda}\Big[1 - \sqrt{1-(\lambda X_k/n_{\mathrm{obj}})^2-(\lambda Y_k/n_{\mathrm{obj}})^2}\Big]$$
$$f'_z = \frac{n_{\mathrm{imm}}}{\lambda}\big[1 - S'_k\big] = \frac{n_{\mathrm{imm}}}{\lambda}\Big[1 - \sqrt{1-(\lambda X_k/n_{\mathrm{imm}})^2-(\lambda Y_k/n_{\mathrm{imm}})^2}\Big]$$
The inverse Fourier transform of Eq. (20) becomes
$$L(x,y,z,z') = 3 + 2m_{0k}\cos\big(2\pi(X_k x + Y_k y + f_z z - f'_z z') - \psi_a + \psi_b\big) + 2m_{0k}\cos\big(2\pi(X_k x + Y_k y - f_z z + f'_z z') + \psi_c - \psi_b\big) + 2m_{1k}\cos\big(2\pi(2X_k x + 2Y_k y) - \psi_a + \psi_c\big),$$
where ψa = Angle(a), ψb = Angle(b), and ψc = Angle(c). Equation (21) can be further simplified into the following expression:
$$L(x,y,z,z') = 3 + 4m_{0k}\cos\big(2\pi(X_k x + Y_k y) + (\psi_c - \psi_a)/2\big)\times\cos\big(2\pi(f_z z - f'_z z') + \psi_b - (\psi_a + \psi_c)/2\big) + 2m_{1k}\cos\big(2\pi(2X_k x + 2Y_k y) + \psi_c - \psi_a\big),$$
The above equation can further be reformatted as follows:
$$L(x,y,z,z') = 3 + 4m_{0k}\cos\big(2\pi(X_k x + Y_k y) + \varphi_k - \phi_s\big)\cos\big(2\pi(f_z z - f'_z z') - \varphi_{kz}\big) + 2m_{1k}\cos\big(2\pi(2X_k x + 2Y_k y) + 2\varphi_k - 2\phi_s\big),$$
where
$$\varphi_k = \tfrac{1}{2}\big(\phi_{k-} - \phi_{k+} + 2\pi Q(-X_k,-Y_k) - 2\pi Q(X_k,Y_k)\big)$$
$$\varphi_{kz} = \tfrac{1}{2}\big(\phi_{k-} + \phi_{k+} + 2\pi Q(X_k,Y_k) + 2\pi Q(-X_k,-Y_k)\big) - \phi_{k0} - 2\pi Q(0,0).$$
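The passage from the three-cosine form of L to the factored form, together with the definitions of φ_k and φ_kz, can be verified numerically. The sketch below uses random hypothetical phases and frequencies (none taken from the paper) and confirms that the two expressions agree under the sign conventions used here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values for the beam phases, the grating phase phi_s, and the
# pupil phases 2*pi*Q(...) -- illustrative numbers only.
phi_s, phi_kp, phi_km, phi_k0 = rng.uniform(-np.pi, np.pi, 4)
Qp, Qm, Q0 = rng.uniform(-np.pi, np.pi, 3)  # 2*pi*Q at (Xk,Yk), (-Xk,-Yk), (0,0)
m0, m1 = 0.8, 0.5
Xk, Yk, f_z, f_zp = 1.1, 0.4, 1.7, 1.3

# Composite phases of the multipliers a, b, c.
psi_a = phi_s + phi_kp + Qp
psi_b = phi_k0 + Q0
psi_c = -phi_s + phi_km + Qm

x, y, zz, zzp = rng.uniform(-1.0, 1.0, (4, 200))
P = 2*np.pi*(Xk*x + Yk*y)          # lateral carrier phase
D = 2*np.pi*(f_z*zz - f_zp*zzp)    # axial phase term

# Three-cosine form of L.
L1 = (3 + 2*m0*np.cos(P + D - psi_a + psi_b)
        + 2*m0*np.cos(P - D + psi_c - psi_b)
        + 2*m1*np.cos(2*P - psi_a + psi_c))

# Factored form with phi_k and phi_kz as defined above.
phi_k = 0.5*(phi_km - phi_kp + Qm - Qp)
phi_kz = 0.5*(phi_km + phi_kp + Qp + Qm) - phi_k0 - Q0
L2 = (3 + 4*m0*np.cos(P + phi_k - phi_s)*np.cos(D - phi_kz)
        + 2*m1*np.cos(2*P + 2*phi_k - 2*phi_s))
```

The agreement is exact (to machine precision), since the factored form is just the sum-to-product identity applied to the two m_0k terms.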

Appendix B

Substituting F_I(x,y,z,z′) = L(x,y,z,z′)S(x,y,z′) in the place of S(x,y,z′) in Eq. (4), and then substituting Eq. (22), gives

$$\begin{aligned} R(x,y,z) &= \int_{z'} h(x,y,z-az',z') \circledast_{xy} F_I(x,y,z,z')\,dz'\\ &= \int_{z'} h(x,y,z-az',z') \circledast_{xy} \big[L(x,y,z,z')\,S(x,y,z')\big]\,dz'\\ &= 3\int_{z'} h(x,y,z-az',z') \circledast_{xy} S(x,y,z')\,dz'\\ &\quad + 4m_{0k}\underbrace{\int_{z'} h(x,y,z-az',z') \circledast_{xy} \big[\cos\big(2\pi(f_z z - f'_z z') - \varphi_{kz}\big)\,C_k(x,y)\,S(x,y,z')\big]\,dz'}_{R_2(x,y,z)}\\ &\quad + 2m_{1k}\int_{z'} h(x,y,z-az',z') \circledast_{xy} \big[C_{k2}(x,y)\,S(x,y,z')\big]\,dz', \end{aligned}$$
where
$$C_k(x,y) = \cos\big(2\pi(X_k x + Y_k y) + \varphi_k - \phi_s\big); \qquad C_{k2}(x,y) = \cos\big(2\pi(2X_k x + 2Y_k y) + 2\varphi_k - 2\phi_s\big)$$
Next, we rewrite the first cosine term in the expression for R2(x,y,z) in the above equation as follows:
$$\cos\big(2\pi(f_z z - f'_z z') - \varphi_{kz}\big) = \cos\big(2\pi f_z(z - a_f z') - \varphi_{kz}\big) = \cos\big(2\pi f_z(z - a z') - \varphi_{kz} + 2\pi f_z(a - a_f)z'\big) = \cos\big(2\pi f_z(z - a z') - \varphi_{kz}\big)\cos\big(2\pi f_z(a - a_f)z'\big) - \sin\big(2\pi f_z(z - a z') - \varphi_{kz}\big)\sin\big(2\pi f_z(a - a_f)z'\big),$$
where a_f = f′_z/f_z. Substituting the above in the expression for R_2(x,y,z) in Eq. (23) gives
$$R_2(x,y,z) = \int_{z'} \Big( h_c(x,y,z-az',z') \circledast_{xy} \big[\cos\big(2\pi f_z(a-a_f)z'\big)\,C_k(x,y)\,S(x,y,z')\big] - h_s(x,y,z-az',z') \circledast_{xy} \big[\sin\big(2\pi f_z(a-a_f)z'\big)\,C_k(x,y)\,S(x,y,z')\big]\Big)\,dz',$$
where
$$h_c(x,y,z-az',z') = h(x,y,z-az',z')\cos\big(2\pi f_z(z-az') - \varphi_{kz}\big), \qquad h_s(x,y,z-az',z') = h(x,y,z-az',z')\sin\big(2\pi f_z(z-az') - \varphi_{kz}\big).$$

Equation (23) can now be written as

$$G(x,y,z) = \int_{z'} h(x,y,z-az',z') \circledast_{xy} S(x,y,z')\,dz' + \int_{z'} h_c(x,y,z-az',z') \circledast_{xy} \big[\cos(P_z(z'))\cos(P(x,y))\,S(x,y,z')\big]\,dz' - \int_{z'} h_s(x,y,z-az',z') \circledast_{xy} \big[\sin(P_z(z'))\cos(P(x,y))\,S(x,y,z')\big]\,dz' + \int_{z'} h(x,y,z-az',z') \circledast_{xy} \big[\cos(2P(x,y))\,S(x,y,z')\big]\,dz',$$
where P_z(z′) = 2πf_z(a − a_f)z′ and P(x,y) = 2π(X_k x + Y_k y) + φ_k − ϕ_s. The above equation is clearly equivalent to Eq. (10).
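The h_c/h_s split above rests on the angle-sum identity applied to the axial cosine. A quick numerical sanity check, with hypothetical values for f_z, f′_z, a, and φ_kz:

```python
import numpy as np

# Illustrative values only; a is the axial scaling factor and a_f = f'_z / f_z.
f_z, f_zp, a, phi_kz = 1.7, 1.3, 0.88, 0.6
a_f = f_zp / f_z

z = np.linspace(-2.0, 2.0, 101)            # image-side axial coordinate z
zp = np.linspace(-2.0, 2.0, 101)[:, None]  # object depth z'

# Left side: the cosine factor appearing in R2.
lhs = np.cos(2*np.pi*f_z*(z - a_f*zp) - phi_kz)

# Right side: a (z - a*z')-dependent modulation (absorbed into h_c and h_s)
# times the z'-only factors cos(P_z(z')) and sin(P_z(z')).
Pz = 2*np.pi*f_z*(a - a_f)*zp
rhs = (np.cos(2*np.pi*f_z*(z - a*zp) - phi_kz)*np.cos(Pz)
       - np.sin(2*np.pi*f_z*(z - a*zp) - phi_kz)*np.sin(Pz))
```

Because the second factors depend only on z′, they can be moved onto the object (source) side of each convolution, which is precisely what yields the DVC form of the model.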

Appendix C

The goal is to compare the results of applying M_1(•) and M_2(•) to a delta function located at (0,0,z_0), given by δ(x,y,z − z_0). Substituting δ(x,y,z − z_0) in the place of (•) in Eqs. (13) and (14), we get

$$\bar{M}_{1,z_0}(x,y,z) = M_1\big(\delta(x,y,z-z_0)\big) = h_c(x,y,z), \qquad \bar{M}_{2,z_0}(x,y,z) = M_2\big(\delta(x,y,z-z_0)\big) = \cos\big(P_z(z_0)\big)\,h_c(x,y,z) - \sin\big(P_z(z_0)\big)\,h_s(x,y,z)$$
Note that h_c(x,y,z) = h(x,y,z) cos(2πf_z z − φ_kz) and h_s(x,y,z) = h(x,y,z) sin(2πf_z z − φ_kz), where h(x,y,z) is the widefield PSF. Hence M̄_{2,z_0}(x,y,z) can be written as
$$\bar{M}_{2,z_0}(x,y,z) = h(x,y,z)\cos\big(2\pi f_z z - \varphi_{kz} + P_z(z_0)\big).$$
This shows that M̄_{1,z_0}(x,y,z) and M̄_{2,z_0}(x,y,z) differ only in the phase of the cosine modulation, and hence their Fourier transforms have equal magnitude.
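This equal-magnitude claim can be checked numerically. The sketch below stands in for the widefield PSF h with a hypothetical 1D Gaussian and uses arbitrary values for f_z, φ_kz, and P_z(z_0); the Fourier magnitudes of M̄_1 and M̄_2 then agree to numerical precision (provided, as here, the modulation frequency separates the two spectral copies of h):

```python
import numpy as np

# Hypothetical 1D stand-in for the widefield PSF h(z): a Gaussian.
z = np.linspace(-20, 20, 4096, endpoint=False)
h = np.exp(-z**2 / 2)

f_z, phi_kz, P0 = 2.0, 0.7, 1.1   # arbitrary axial frequency and phases

h_c = h * np.cos(2*np.pi*f_z*z - phi_kz)
h_s = h * np.sin(2*np.pi*f_z*z - phi_kz)

M1 = h_c                              # response of M1 to a delta at z0
M2 = np.cos(P0)*h_c - np.sin(P0)*h_s  # response of M2, with P0 = P_z(z0)

# Angle-sum identity: M2 is h with the same modulation, phase-shifted by P0.
M2_direct = h * np.cos(2*np.pi*f_z*z - phi_kz + P0)

# The constant phase P0 drops out of the Fourier magnitude here, since the
# two spectral copies of h at +/- f_z have negligible overlap.
mag1 = np.abs(np.fft.fft(M1))
mag2 = np.abs(np.fft.fft(M2))
```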

Acknowledgments

This research was supported in part by the Human Frontier Science Program Organization (www.hfsp.org) under grant LT00460/2007-C.

References and links

1. M. Gustafsson, D. Agard, and J. Sedat, "Doubling the lateral resolution of wide-field fluorescence microscopy using structured illumination," Proc. SPIE 3919, 141–150 (2000).

2. P. Kner, B. Chhun, E. Griffis, L. Winoto, and M. Gustafsson, "Super-resolution video microscopy of live cells by structured illumination," Nat. Methods 6, 339–342 (2009).

3. M. Gustafsson, L. Shao, P. Carlton, C. Wang, I. Golubovskaya, W. Cande, D. Agard, and J. Sedat, "Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination," Biophys. J. 94, 4957–4970 (2008).

4. M. Gustafsson, "Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution," Proc. Natl. Acad. Sci. U.S.A. 102, 13081–13086 (2005).

5. L. Schermelleh, P. Carlton, S. Haase, L. Shao, L. Winoto, P. Kner, B. Burke, M. Cardoso, D. Agard, M. Gustafsson, H. Leonhardt, and J. Sedat, "Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy," Science 320, 1332–1336 (2008).

6. M. Trammell, N. Mahoney, D. Agard, and R. Vale, "Mob4 plays a role in spindle focusing in Drosophila S2 cells," J. Cell Sci. 121, 1284–1292 (2008).

7. C. Wang, P. Carlton, I. Golubovskaya, and W. Cande, "Interlock formation and coiling of meiotic chromosome axes during synapsis," Genetics 183, 905–915 (2009).

8. J. Fitzgibbon, K. Bell, E. King, and K. Oparka, "Super-resolution imaging of plasmodesmata using three-dimensional structured illumination microscopy," Plant Physiol. 153, 1453–1463 (2010).

9. S. Gibson and F. Lanni, "Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy," J. Opt. Soc. Am. A 8, 1601–1613 (1991).

10. B. Hanser, M. Gustafsson, D. Agard, and J. Sedat, "Phase-retrieved pupil functions in wide-field fluorescent microscopy," J. Microsc. 216, 32–48 (2004).

11. Z. Kam, P. Kner, D. Agard, and J. Sedat, "Modelling the application of adaptive optics to wide-field microscope live imaging," J. Microsc. 226, 33–42 (2007).

12. P. Kner, J. Sedat, and D. Agard, "Applying adaptive optics to three-dimensional wide-field microscopy," Proc. SPIE 6888, 68880S (2008).

13. C. Preza and J. Conchello, "Image estimation accounting for point-spread function depth variation in three-dimensional fluorescence microscopy," Proc. SPIE 4964, 1–8 (2003).

14. C. Preza and J. Conchello, "Depth-variant maximum likelihood restoration for three-dimensional fluorescence microscopy," J. Opt. Soc. Am. A 21, 1593–1601 (2004).

15. M. Arigovindan, J. Shaevitz, J. McGowan, J. Sedat, and D. Agard, "A parallel product-convolution approach for representing the depth varying point spread functions in 3D widefield microscopy based on principal component analysis," Opt. Express 18, 6461–6476 (2010).

16. D. Débarre, E. Botcherby, T. Watanabe, S. Srinivas, M. Booth, and T. Wilson, "Image-based adaptive optics for two-photon microscopy," Opt. Lett. 34, 2495–2497 (2009).

17. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company Publishers, 2004).

18. S. Wiersma, P. Torok, T. Visser, and P. Varga, "Comparison of different theories for focusing through a plane interface," J. Opt. Soc. Am. A 14, 1482–1490 (1997).

Figures (2)

Fig. 1. 3D structured illumination microscope. (a) Optical set-up; (b) simplified schematic.

Fig. 2. Flow diagram of the SIM imaging model with depth dependent spherical aberration.
