
Subaperture correlation based digital adaptive optics for full field optical coherence tomography


Abstract

This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics hardware, spatial light modulators (SLM) or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full field swept source OCT (FF SS OCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate proof of principle.

©2013 Optical Society of America

1. Introduction

The use of Shack-Hartmann sensor based adaptive optics for wavefront aberration correction is well established in astronomy and microscopy for point-like objects to achieve diffraction limited imaging [1–3]. It is currently an active field of research in optical coherence tomography/microscopy (OCT/OCM) [4, 5]. Recently, adaptive optics via pupil segmentation using an SLM was demonstrated in two-photon microscopy [7]. These results showed that the optical aberrations introduced by the sample, due to refractive index changes with depth in the sample, can be reduced to recover diffraction-limited resolution, which can improve the imaging depth in tissue. Such a segmented pupil approach has also been shown with scene-based adaptive optics [8]. Lately, Tippie and Fienup demonstrated subaperture correlation based phase correction as a post-processing technique in the case of synthetic aperture digital off-axis holographic imaging of extended objects [9].

Especially for coherent imaging techniques, the advantage lies in the availability of phase information. This information has been successfully exploited for implementing digital refocusing techniques in OCT by measuring the full complex field backscattered from the sample. Current methods, however, rely on two assumptions: first, that the sample exhibits an isotropic and homogeneous structure with respect to its optical properties, and second, that the aberrations, if present, are well defined or accessible. Whereas the first limitation has not been addressed so far, the second issue can be solved either by assuming simple defocus and applying spherical wavefront corrections, or by iteratively optimizing the complex wavefront with a merit function that uses image sharpness as its metric [10,11].

In this paper, we investigate subaperture correlation based wavefront error detection and correction as a post-processing technique to achieve near diffraction limited resolution in the presence of aberrations. This method has the advantage of directly providing the local wavefront gradient for each subaperture in a single step; as such, it operates as a digital equivalent of a Shack-Hartmann sensor. In section 2 we present the theory behind the algorithm, which is applicable to any coherent interferometric imaging system. We show that the algorithm does not require knowledge of any system parameters and, furthermore, does not rely on the assumption of sample homogeneity with respect to its refractive index. In section 3 we present simulation results demonstrating the practical implementation and performance of the algorithm. We consider a full field swept source OCT system, a type of interference microscopy system, for our simulation. We also show how this method allows for a fast and simple way to correct for defocus aberration. Finally, in section 4 we present experimental proof of principle.

2. Theory

For the theoretical analysis we consider a system based on a Michelson interferometer as shown in Fig. 1, but the theory presented here is valid for any interferometric setup. The interference of the light reflected from the object and reference mirror is adequately sampled using a 2-D camera placed at the image plane of the object. For simplicity, we assume that we have a 4-f telecentric imaging system, and the object field is bandlimited by a square aperture at the pupil plane. In the 2-D interferometric imaging setup, the recorded signal Id at point ξ of the detection plane and at the wavenumber k=2π/λ of the laser illumination is given by

$$I_d(\xi;k)=|E_s(\xi;k)|^2+|E_r(\xi;k)|^2+E_s^*(\xi;k)E_r(\xi;k)+E_s(\xi;k)E_r^*(\xi;k).\tag{1}$$
$E_s$ and $E_r$ are the fields from the object and the reference arm, respectively, at the detector, expressed as
$$E_s(\xi;k)=\exp(i4kf)\int E_o(u;k)\,P(\xi-u;k)\,d^2u=\exp(i4kf)\,E_s'(\xi;k)\tag{2}$$
and
$$E_r(\xi;k)=R(\xi;k)\exp[ik(4f+\Delta z)],\tag{3}$$
where the signal Es'(ξ;k), which contains the object field information, is given by the convolution of Eo, the geometrical image of the object field, with the point spread function (PSF) P of the system, and u is a point in the object plane. R is the local reference field amplitude, Δz denotes the optical path length difference between the sample and the reference arm, and λ is the wavelength of light. The phase factors exp(i4kf) and exp[ik(4f+Δz)] in Eqs. (2) and (3) represent the accumulated phase due to propagation of the field from the sample and the reference mirror, respectively, to the detector plane. In the case of monochromatic illumination, with an effectively single k, the complex signal Is = EsEr* = Es'R (up to a constant phase factor), which contains the object field information we are interested in, can be retrieved via phase shifting methods in which Δz is varied [12, 13]. Phase shifting methods are also used to detect the complex wavefront with broadband light sources, which provide coherence gating [14]. The complex signal Is can also be detected via frequency or wavelength diversity, as in the case of swept source OCT. In this case, the object is placed at a distance longer than that of the reference mirror position with respect to a common reference point, and the interference signal is recorded with varying k. A Fourier transformation along the k dimension yields the tomographic signal separated from the autocorrelation, constant intensity offset (DC) and complex conjugate terms [15]. We assume that the aberrations present in the system primarily originate from the sample and the sample optics L2, and can be represented by an effective phase error at the pupil plane.
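To make the frequency-diversity retrieval concrete, here is a minimal sketch (our illustration, not code from the paper; the stack contents and the layer index are placeholders) of how an FFT along the k axis of a swept-source interferogram stack yields the complex en-face field of one depth layer:

```python
import numpy as np

# Hypothetical measurement: Nk interferograms of size M x M recorded at Nk
# evenly spaced wavenumbers k (placeholder data stands in for camera frames).
Nk, M = 256, 512
frames = np.random.rand(Nk, M, M)

# The FFT along k separates the object term from the DC, autocorrelation
# and complex-conjugate terms along the depth axis.
volume = np.fft.fft(frames, axis=0)

# Complex en-face field I_s of one depth layer (index chosen for illustration).
I_s = volume[40]
```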

Fig. 1 Schematic diagram for the interferometric setup. Lens L1 and L2 form a 4-f telecentric imaging system. Dotted rays show the imaging path of a point on the object in focus. Camera is at the focal plane of lens L1.

In the case of FF SS OCT, the complex signal Is(ξ,z), obtained after the k→z Fourier transformation and containing the field information about the object’s zth layer, can be written in discrete form as

$$I_s(\xi,z)=\Delta\xi^2\,I(m\Delta\xi,n\Delta\xi,z)=I_{m,n,z},\tag{4}$$
where Δξ is the detector pixel pitch, assumed to be the same in both directions, and (m,n) determine the pixel location in the image plane. Let the size of the sample be M×M pixels. After embedding the 2-D signal into an array of zeros twice its size (2M×2M), we calculate the 2-D DFT of the signal Im,n,z on the computer to propagate it to the Fourier/pupil plane:
$$D_{x,y}=\mathrm{DFT}[I_{m,n,z}]=\sum_{m=0}^{2M-1}\sum_{n=0}^{2M-1}I_{m,n,z}\exp\left[-i2\pi\left(\frac{mx}{2M}+\frac{ny}{2M}\right)\right],\tag{5}$$
where (x,y) determine the pixel location in the Fourier plane. In Eq. (5) we have neglected the constant complex factor due to propagation, as it does not affect the reconstruction of the intensity images we are finally interested in. The extent of the spatial frequency content is limited by the square pupil aperture. Let the extent of the spatial frequencies in the Fourier plane be N×N pixels (2M>N); then, according to the paraxial approximation,
$$L=N\Delta x\approx\frac{N\lambda_0 f}{2M\Delta\xi},\tag{6}$$
where L is the length of the side of the square aperture, λ0 is the central wavelength, f is the focal length of the lens, and Δx is the pixel pitch in the pupil plane. We window out the N×N pixel data D̃x,y at the Fourier plane, embed it into the middle of an array of zeros of size 2M×2M, and segment it into K×K subapertures of size ⌊N/K⌋×⌊N/K⌋ pixels, where ⌊·⌋ is the floor function. K is usually odd so that the central subaperture can be selected as a reference, as shown in Fig. 2. In the absence of aberrations the wavefront has the same slope everywhere across the pupil plane, and hence all the subaperture images will be formed at the same position in the image plane. In the presence of aberrations, however, the slope of the wavefront varies across the pupil, which results in relative shifts between the subaperture images. Since we have a circular centro-symmetric optical system, we choose the location of the image formed by the central subaperture as the reference position where all subaperture images should be formed after aberration correction. We can also use aperture overlap, as shown in Fig. 2, to increase the number of sampling points for the local slope data.
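The following sketch illustrates these steps under the stated assumptions (square aperture, non-overlapping subapertures; array sizes are examples, and this is our illustration rather than the authors' implementation). Note that the intensity image of a subaperture is unchanged by where the subaperture sits in the pupil, so each piece can be imaged in place:

```python
import numpy as np

M, N, K = 512, 512, 3          # sample half-size, aperture size, grid number
I_s = np.random.randn(M, M) + 1j * np.random.randn(M, M)  # placeholder layer

# Embed the M x M layer in a 2M x 2M zero array and go to the pupil plane.
padded = np.zeros((2 * M, 2 * M), dtype=complex)
padded[M // 2: M // 2 + M, M // 2: M // 2 + M] = I_s
D = np.fft.fftshift(np.fft.fft2(padded))      # DC at the array center

# Window out the N x N band-limited region, keeping it zero-embedded.
c = M                                          # center index
D_win = np.zeros_like(D)
D_win[c - N // 2: c + N // 2, c - N // 2: c + N // 2] = \
    D[c - N // 2: c + N // 2, c - N // 2: c + N // 2]

# Cut K x K non-overlapping subapertures of floor(N/K) pixels each.
w = N // K
subapertures = []
for p in range(K):
    for q in range(K):
        sub = np.zeros_like(D_win)
        r0 = c - N // 2 + p * w
        c0 = c - N // 2 + q * w
        sub[r0:r0 + w, c0:c0 + w] = D_win[r0:r0 + w, c0:c0 + w]
        subapertures.append(sub)
```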

Fig. 2 Segmentation of Fourier data into K×K subapertures with K = 3. Green square dots represent sampling points of the average local slope data due to non-overlapping subapertures. Red dotted boxes represent subapertures with about 50% overlap in both directions. Blue circular dots represent sampling points of the slope data due to overlapping subapertures. With overlapping subapertures we can increase the sampling points to reduce the error in phase estimation.

We assume that in each subaperture the local phase error can be approximated by a first order Taylor expansion. We measure the translation between the intensity images formed by the different subapertures with respect to the intensity image formed by the central subaperture, after normalizing them to the same level [16]. We use this information to calculate the relative local wavefront slope in each subaperture, given by

$$s_x^{p_o,q_o}=\frac{\partial\phi_p}{\partial x}-\frac{\partial\phi_q}{\partial x}=\frac{\Delta m\,\pi}{M}\quad\text{and}\quad s_y^{p_o,q_o}=\frac{\partial\phi_p}{\partial y}-\frac{\partial\phi_q}{\partial y}=\frac{\Delta n\,\pi}{M},\tag{7}$$
where Δm and Δn are the shifts in pixels in the horizontal and vertical directions in the image plane, and po and qo are the center points of the pth and qth subapertures to which the local slope values are assigned. We take the qth subaperture to be the central reference subaperture. Note that the slope data obtained are independent of any system parameters. We use this slope information to reconstruct the estimated wavefront error, using Taylor monomials as basis functions for modelling the phase error. A detailed description of the method is given in Appendix A.
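Continuing the sketch above, the shift of one subaperture image relative to the central reference can be measured with the subpixel registration algorithm of Ref. [16]; using its scikit-image implementation is our choice here, and the conversion to slopes follows Eq. (7):

```python
from skimage.registration import phase_cross_correlation

# Intensity images of the p-th subaperture and the central reference
# (index 4 is the center of the 3 x 3 grid built above).
Ip = np.abs(np.fft.ifft2(np.fft.ifftshift(subapertures[0]))) ** 2
Iq = np.abs(np.fft.ifft2(np.fft.ifftshift(subapertures[4]))) ** 2
Ip /= Ip.sum()                 # normalize both images to the same level
Iq /= Iq.sum()

# Subpixel shift (rows = vertical Δn, columns = horizontal Δm).
shift, _, _ = phase_cross_correlation(Iq, Ip, upsample_factor=10)
dn, dm = shift

# Relative local wavefront slopes, Eq. (7).
sx = dm * np.pi / M
sy = dn * np.pi / M
```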

After we have estimated the phase error ϕe, we apply a phase correction exp(−iϕe) to the Fourier data D̃ and then perform a 2-D IDFT to obtain the phase corrected image Ĩm,n,z given by

$$\tilde I_{m,n,z}=\frac{1}{4M^2}\sum_{x=0}^{2M-1}\sum_{y=0}^{2M-1}\tilde D_{x,y}\exp[-i\phi_e(x,y)]\exp\left[i2\pi\left(\frac{mx}{2M}+\frac{ny}{2M}\right)\right].\tag{8}$$
This process is repeated for all layers in the case of a 3-D volume image to obtain an aberration-free volume image. We assumed a 4-f system with the pupil and the Fourier plane coinciding. Since the method is independent of any system parameters, it can also be applied to the Fourier transform of the image even if the pupil is not at the Fourier plane. One should, however, ensure that the image is band limited; otherwise, aliasing of higher spatial frequencies might disturb proper reconstruction when performing the 2-D DFT on the image. Figure 3 shows the schematic diagram for the estimation of the phase error based on the split aperture method.
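In the conventions of the sketches above, applying the correction reduces to one element-wise multiplication at the pupil plane followed by an inverse transform (phi_e is assumed to be the estimated 2M×2M phase error map):

```python
# Conjugate phase correction at the pupil plane, then back to the image
# plane (Eq. (8)); repeated for every layer of a 3-D volume.
corrected_field = np.fft.ifft2(np.fft.ifftshift(D_win * np.exp(-1j * phi_e)))
corrected_image = np.abs(corrected_field) ** 2
```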

Fig. 3 Schematic of the subaperture correlation based phase correction method.

3. Computer simulation

For the computer simulation we consider the optical setup shown in Fig. 1 with focal lengths of lenses L1 and L2 of f=50 mm. We consider a sweeping laser source with center wavelength λ0=850 nm and bandwidth Δλ=80 nm. Figure 4(a) shows the normalized power spectrum with respect to wavenumber k. We assume that the sweep is linear in k. For simplicity, just for the demonstration of the working principle of the method, we consider only a 2-D object which gives diffuse reflection on illumination. Our object is a USAF bar target of size 512×512 pixels, multiplied with circular complex Gaussian random numbers to simulate speckle noise. We zero-pad the object to a size of 1024×1024 pixels and apply the fast Fourier transform (FFT) to get to the pupil plane. We multiply the result with a square aperture, an array of unity values of size 512×512 pixels zero padded to 1024×1024 pixels. To apply a phase error we multiply the result with a factor exp(iϕe), using Eq. (16), and then compute the IFFT to get to the image plane. In the last step we compute the IFFT instead of the FFT simply to avoid inverted images, without affecting the phase correction method. We multiply the resulting field at the image plane with a phase factor exp(i4kf) to account for the propagation distance, then add the delayed on-axis reference plane wave with phase factor exp[ik(4f+Δz)], with Δz=120 μm, and compute the squared modulus to obtain the simulated interference signal. This is done for each k in the sweep to create a stack of 2-D interference signals, with k varying from 7.0598×10⁶ m⁻¹ to 7.757×10⁶ m⁻¹ in 256 equal steps. The reference wave amplitude was 100 times the object field amplitude. Figure 4(b) shows the spectral interferogram at one lateral pixel.
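The forward model for a single wavenumber of the sweep can be sketched as follows; the target amplitude, numeric values and array conventions are placeholders of ours, not the original simulation code:

```python
import numpy as np

M = 512
rng = np.random.default_rng(1)

# Diffusely reflecting object: a target amplitude multiplied with circular
# complex Gaussian random numbers to simulate speckle.
target = rng.random((M, M))                    # stands in for the USAF target
speckle = (rng.standard_normal((M, M)) +
           1j * rng.standard_normal((M, M))) / np.sqrt(2)
obj = target * speckle

# Zero pad to 2M x 2M and go to the pupil plane.
padded = np.zeros((2 * M, 2 * M), dtype=complex)
padded[M // 2: M // 2 + M, M // 2: M // 2 + M] = obj
pupil = np.fft.fftshift(np.fft.fft2(padded))

# Square aperture (M x M of ones, zero padded) and pupil phase error.
aperture = np.zeros((2 * M, 2 * M))
aperture[M // 2: M // 2 + M, M // 2: M // 2 + M] = 1.0
phi_e = np.zeros((2 * M, 2 * M))               # phase screen; see next sketch
pupil *= aperture * np.exp(1j * phi_e)

# Back to the image plane, add propagation phase and the delayed reference.
k, f, dz = 7.0598e6, 50e-3, 120e-6             # wavenumber (1/m), focal length, delay
field = np.fft.ifft2(np.fft.ifftshift(pupil)) * np.exp(1j * 4 * k * f)
E_r = 100 * np.abs(obj).mean() * np.exp(1j * k * (4 * f + dz))
frame = np.abs(field + E_r) ** 2               # one frame of the k stack
```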

Fig. 4 (a) Normalized power spectrum, (b) amplitude of spectral interferogram at a lateral pixel, (c) A-scan, (d) image slice at the peak location in red dotted box in (c), (e) phase error in radians applied at the pupil plane, (f) aberrated image due to phase error in (e).

For the reconstruction, we first calculate the FFT along k (256 spectral pixels, zero padded to 512) for each lateral pixel of the detection plane (1024×1024 pixels). The resulting sample depth profile (A-scan), plotted in Fig. 4(c), shows the desired object term (red dashed rectangle) separated from the DC, autocorrelation and complex conjugate terms. Figure 4(d) shows the image slice from the reconstructed image volume at the position of the peak in Fig. 4(c) without any phase error. The simulated phase error consisted of Taylor monomials up to 6th order, with coefficients drawn from a Gaussian random distribution such that the peak to valley phase error across the aperture is 58.03 radians. Figure 4(e) shows the simulated phase error in radians, and Fig. 4(f) is the corresponding aberrated image obtained after applying it. We have assumed that the phase error is independent of the light frequency and that the system is compensated for any dispersion effects.
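The aberration itself can be synthesized, continuing the sketch above, as a random Taylor-monomial phase screen (cf. Eq. (16)); the normalized pupil coordinates and the rescaling to the quoted peak-to-valley value are our illustration choices:

```python
# Random phase screen from Taylor monomials of orders 2..6, scaled to a
# peak-to-valley of 58.03 rad across the array.
x = (np.arange(2 * M) - M) / M                 # normalized coordinate
X, Y = np.meshgrid(x, x)
phi_e = np.zeros_like(X)
for J in range(2, 7):
    for j in range(J + 1):
        phi_e += rng.standard_normal() * X ** j * Y ** (J - j)
phi_e *= 58.03 / (phi_e.max() - phi_e.min())
```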

Figure 5 shows the results for various cases of aperture segmentation. In Fig. 5(a), with K=3, we have 9 sampling points and hence could fit only the first 9 Taylor monomials according to Eq. (16). Comparing the degraded image in Fig. 4(f) to the corrected image in Fig. 5(a), we immediately see the improvement in visual image quality. Except for the image obtained for K=3 with 50 percent overlap, shown in Fig. 5(h), all the images obtained for the different cases are of similar visual quality. We found in our simulation that, in order to see any significant smearing or blurring in the images, the peak to valley (P-V) aberration should be more than 15 radians; otherwise the image is close to the diffraction limited image in visual quality. After phase corrections are applied under the different pupil splitting conditions, the P-V residual phase errors are below 10 radians in all cases under consideration except for K=3 with 50 percent overlap, shown in Fig. 5(k). Hence, we judge the performance of the different pupil splitting conditions based on the residual phase error maps. In Fig. 5(b), with K=5, we have 25 sampling points and hence could fit Taylor monomials up to 6th order with 25 coefficients. The residual phase error is further reduced as compared to the K=3 case in Fig. 5(d). In Figs. 5(c) and 5(g) we have K=7 and K=9 respectively, but we fit only Taylor monomials up to 6th order to see the effect of increasing the number of sampling points. The residual error decreases for K=7 but increases for K=9. This is because with increasing K the subapertures become smaller, and the resolution of the images corresponding to these subapertures is reduced, leading to registration errors in the shift calculation. This results in false slope calculations and an erroneous phase error estimate. Increasing the number of apertures in principle allows for higher order phase correction, but the smaller number of pixels per subaperture leads to increasing phase errors. A possible solution is to use overlapping apertures as shown in Fig. 2.

Fig. 5 (a), (b), (c) and (g) show the phase corrected images for non-overlapping subapertures with values of K equal to 3, 5, 7 and 9 respectively, and (d), (e), (f) and (j) are the respective residual phase errors in radians. (h) and (i) are the images for subapertures with 50 percent overlap for K equal to 3 and 5, and (k) and (l) are the respective residual phase errors in radians.

We used overlapping subapertures with 50 percent overlap in order to increase the number of sampling points with uniform spacing while maintaining the number of pixels per subaperture. A 50 percent overlap ensures the maximum number of sampling points without over-redundancy of the subaperture data. In the case of overlapping apertures, K no longer stands for the number of subapertures but defines the size of each subaperture as ⌊N/K⌋×⌊N/K⌋ pixels. Nevertheless, for K=3 we have a higher residual phase error compared to the non-overlapping case, also visible in the image in Fig. 5(h) as compared to Fig. 5(a). This is because we have assumed in our algorithm that the phase error in each subaperture can be described by a 1st order Taylor series. In the presence of higher order aberrations this assumption no longer holds if the subaperture is large enough to capture a large variation of the phase error. This can result in an error in the phase estimation, and overlapping large subapertures of the same size can further accumulate these errors. Betzig et al. refer to errors induced by overlapping subapertures as “residual coupling between the subregions” [7]. This is what we see in the case of K=3 with 50 percent overlap: for K=3 the subaperture size is not optimal for the 1st order Taylor approximation of the local phase error. However, the residual error decreases rapidly for the overlapping case when K=5. For K=5 the subaperture size is small enough for the approximation to be valid, and we see a significant reduction in the residual phase error in Fig. 5(l).

Figure 6 shows the rms residual phase error for different values of K for both non-overlapping and overlapping subapertures with different percentages of overlap. With overlapping subapertures of appropriate size with respect to the overall aberrations, we obtain in general a lower residual phase error. Only if the subaperture size is chosen too large to cover a significant amount of aberration, which then cannot be approximated by a 1st order Taylor series, does the overlapping aperture approach perform worse. A smaller subaperture size ultimately leads to loss of resolution and increased registration error, which in consequence leads to an increase in residual phase error. For larger subapertures the wavefront is sampled at fewer points, and hence one can find coefficients only for the lower order Taylor monomials. The optimal number, size and percentage of overlap of the subapertures depend of course on the amount of aberration present. However, from the plots in Fig. 6 we can see that the overlapping subapertures do not yield significantly different results from the non-overlapping case for K = 5, 7, 9 and 11. Also, increasing the overlap between subapertures beyond 50 percent does not significantly reduce the residual phase error, owing to the redundancy of data caused by the overlap. For K = 3 the residual rms errors are lower for overlaps of more than 50 percent than for the 50 percent overlap case, but still higher than in the non-overlapping case. On the other hand, for K = 13 the residual rms errors for overlaps of more than 50 percent are higher than both the 50 percent overlap and the non-overlapping case. Since both K = 3 and K = 13 fall in regimes where the approximations and assumptions of the presented algorithm are no longer valid, it is difficult to explain the exact reasons behind the observed differences. Nevertheless, Fig. 6 shows that the condition K=5 with 50 percent overlap performed best, with a residual rms of 0.3412 radians, well within the Maréchal criterion of 0.449 radians rms for diffraction limited performance.

Fig. 6 RMS residual phase error in radians for different values of K and different percentages of overlap between subapertures. For the same value of K the sizes of the subapertures in the non-overlapping and overlapping cases are the same.

In the case of a symmetric quadratic phase error across the aperture, which results in defocus aberration, we can estimate the phase error with just two subapertures. The aperture is split into two equal halves, vertically or horizontally. The phase error term in each half has a linear slope of equal magnitude but opposite direction, causing the images formed by the two halves to shift in opposite directions relative to each other. Using the relative shift obtained by cross correlating the images formed by the two half apertures, one can derive an estimate of the coefficient of the quadratic phase as

$$a=\frac{\pi\,\Delta m}{MN},\tag{9}$$
where Δm is the shift in pixels in the x direction, 2M is the total size of the data array in pixels, and N is the size of the aperture in pixels. From Eq. (9) we can estimate the quadratic phase error. This method is simple and fast compared to defocus correction based on sharpness optimization, which requires many iterations.
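Reusing the conventions of the sketches in section 2, the two subaperture defocus estimate might look as follows; the vertical split and the rotationally symmetric quadratic term are our illustration choices:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Split the windowed pupil D_win (center index c) into left/right halves
# and image each half.
left = np.zeros_like(D_win)
right = np.zeros_like(D_win)
left[:, :c] = D_win[:, :c]
right[:, c:] = D_win[:, c:]
Il = np.abs(np.fft.ifft2(np.fft.ifftshift(left))) ** 2
Ir = np.abs(np.fft.ifft2(np.fft.ifftshift(right))) ** 2

# Relative horizontal shift of the two half-aperture images -> Eq. (9).
shift, _, _ = phase_cross_correlation(Il / Il.sum(), Ir / Ir.sum(),
                                      upsample_factor=10)
a = np.pi * shift[1] / (M * N)

# Rotationally symmetric quadratic phase built from the estimated coefficient.
x = np.arange(2 * M) - M
X, Y = np.meshgrid(x, x)
phi_defocus = a * (X ** 2 + Y ** 2)
```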

4. Experimental results

The schematic of the FF SS OCT setup for the proof of principle study is shown in Fig. 7(a). The system is a simple Michelson interferometer in Linnik configuration with the same type of microscope objective (5× Mitutoyo Plan Apo NIR infinity-corrected objective, NA = 0.14, focal length f = 40 mm) in both the sample and the reference arm. The sample consisted of a layer of plastic, a film of dried milk and a USAF resolution test target (RTT). The thin film of dried milk was used to produce scattering and diffuse reflection. The plastic layer, of non-uniform thickness and structure to create random aberrations, was made by heating plastic from a compact disc (CD) case. The output beam from the sweeping laser source (Superlum BroadSweeper 840-M) incident on lens L1 has a diameter of 10 mm, and the field of view on the sample is ~2 mm. The power incident on the sample is 5 mW. The RTT surface was in focus while imaging. The image of the sample formed by L2 is transferred to the camera plane using a telescope formed by lenses L3 and L4 with an effective magnification of 2.5×. A circular pupil P of diameter 4.8 mm is placed at the common focal plane of lenses L3 and L4 to spatially band limit the signal. Figures 7(c) and 7(e) show that the image obtained is speckled, due to scattering at the milk film, and aberrated by the plastic layer.

Fig. 7 (a) Schematic of the experimental setup: M is the mirror, B.S. is the beam splitter, MO is the 5× NIR microscope objective, L1 and L2 are lenses with focal length f = 200 mm, L3 and L4 are lenses with f = 150 mm and 75 mm, and P is the pupil placed at the common focal plane of L3 and L4; (b) sample consisting of a layer of plastic sheet, a film of dried milk and the RTT; (c) image of the RTT surface obtained with the non-uniform plastic sheet; (d) Fourier transform of the image, showing that it is band limited; and (e) zoomed-in image of (c) showing 390×390 pixels. Focus was placed on the RTT.

For imaging, the laser is swept from λ=831.4 nm to λ=873.6 nm with a step width of Δλ=0.0824 nm, and the frames at each wavelength are acquired using a CMOS camera (Photon Focus MV1-D1312I-160-CL-12) at a frame rate of 108 fps, synchronized with the laser. A total of 512 frames are acquired. After zero padding and λ-to-k mapping of the spectral pixels, a 1-D FFT is performed along the k dimension for each lateral pixel (the standard FDOCT procedure), which provides the depth locations of the different layers of the sample. The layer corresponding to the RTT is picked for aberration correction, and the corresponding enface intensity image is displayed in Fig. 7(c). Figure 7(e) shows a zoomed-in view of Fig. 7(c) consisting of 390×390 pixels. The numbers 4 and 5 indicate the locations of the 4th and 5th group elements. We can barely resolve RTT element (4, 3). The horizontal bars are visually more affected by the aberration due to the anisotropic nature of the distorted plastic layer. The FFT of the zero-padded image shows the spatial frequency information confined to a circular area. For further processing we filtered out a square area (600×600 pixels) from the spatial frequency distribution, with side length chosen approximately equal to the diameter of the circular pupil (dotted square in Fig. 7(d)).
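A small sketch of the λ-to-k remapping step: the sweep is linear in wavelength, so each pixel's spectrum is resampled onto a uniform wavenumber grid before the depth FFT (the spectrum array below is a placeholder for one lateral pixel):

```python
import numpy as np

lam = np.linspace(831.4e-9, 873.6e-9, 512)    # sweep range of the experiment
spectrum = np.random.rand(512)                # placeholder: one lateral pixel

k = 2 * np.pi / lam                           # decreasing, non-uniform in k
k_uniform = np.linspace(k[-1], k[0], 512)     # uniform wavenumber grid

# np.interp needs increasing sample positions, hence the reversals.
spectrum_k = np.interp(k_uniform, k[::-1], spectrum[::-1])
depth_profile = np.fft.fft(spectrum_k, n=1024)  # zero-padded depth FFT
```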

Figure 8 shows the phase correction results after subaperture processing. Taylor monomials up to 5th order are fitted to determine the phase error. Figure 8(a) shows the result obtained using 9 non-overlapping subapertures with K=3. In this case we could fit only the first 9 monomial terms given by Eq. (16), and hence the monomial fitting was limited to the 4th order. We also tried non-overlapping subapertures with K=5 and overlapping subapertures with 50% overlap for K=5. Note that Figs. 8(d)-8(f) show only the estimate of the phase error that may be present. Unlike the simulation, where we calculated the residual phase error knowing the reference phase error, here we look for the improvement in image quality to judge the best pupil splitting condition. We can clearly appreciate the improvement in resolution, as the bars are more evident after phase correction. We can resolve horizontal bars up to element (5, 1) in Fig. 8(a) and up to (5, 4) in Figs. 8(b) and 8(c). This corresponds to an improvement in resolution over the aberrated image, whose highest resolvable element is (4, 3), by a factor of 1.6 for Fig. 8(a) and a factor of 2 for Figs. 8(b) and 8(c). The improvement in resolution is calculated using the relation $2^{\Delta m/6}$, where Δm is the improvement in the number of elements. The vertical bars in the corrected images also appear much sharper after phase correction. The theoretically calculated diffraction limited resolution of our experimental setup is 6.5 μm; note that in our experiment the resolution is limited by the size of the pupil P. The best resolution obtained in the cases of Figs. 8(b) and 8(c), corresponding to element (5, 4), is about 11 μm, which is still far from the theoretical limit. This is primarily because of the strong local variations of the wavefront across the pupil plane caused by the highly distorted plastic layer, leading to anisotropic imaging conditions. This may also be the reason why the phase maps shown in Figs. 8(d)-8(f) appear different; however, they have the same general profile.

Fig. 8 Phase corrected images of the one in Fig. 7, obtained using (a) non-overlapping subapertures with K = 3, (b) non-overlapping subapertures with K = 5 and (c) overlapping subapertures with K = 5. (d), (e) and (f) are the detected phase errors across the aperture in radians for the cases (a), (b) and (c) respectively.

We also investigated the effect of refractive index change within the sample. We replaced the non-uniform plastic layer of the sample with a uniform plastic layer of thickness ~1 mm. Without the plastic layer, the RTT was placed at the focal plane of the microscope objective while imaging. The presence of the plastic layer causes the image to be defocused, as seen in Fig. 9(a). Figure 9(b) shows the result using non-overlapping subapertures with K=3. The corresponding estimated phase error in Fig. 9(d) shows the presence of quadratic and 4th order terms, which is expected due to the defocus and spherical aberration caused by the change in refractive index within the sample. Since spherical aberration can be balanced with defocus, we applied the simple half aperture method to find the effective defocus error according to Eq. (9), as shown in Fig. 9(e). Successful refocusing is clearly evident in these images, as elements up to (6, 1) in Fig. 9(b) and (5, 6) in Fig. 9(c) can be resolved, corresponding to resolutions of 7.8 μm and 8.7 μm, respectively. These values are close to the calculated theoretical diffraction limited resolution of 6.5 μm. In the subaperture processing we filtered out a square aperture, whereas the actual pupil was circular, so the subapertures at the edges were not completely filled with spatial frequency data. This may have contributed to the residual phase error.

Fig. 9 (a) Aberrated image obtained using the uniform plastic sheet, (b) phase corrected image using non-overlapping subapertures with K = 3, (c) image obtained by defocus correction only, using two non-overlapping subapertures as in Fig. 7, (d) phase error in radians estimated for case (b), showing strong quadratic and fourth order terms, (e) quadratic phase error detected for case (c). Here defocus balances the spherical aberration.

To demonstrate the method on a biological sample, we applied the defocus correction using two non-overlapping subapertures to a 3-D volume image of a grape. The digital refocusing method effectively achieves an extended depth of focus. The first layer of the grape was at the focal plane of the microscope objective. Since the theoretical depth of field is only 130 µm in the sample (assuming a refractive index of 1.5), the deeper, out-of-focus layers are blurred. Figure 10(a) shows a tomogram of the grape sample with an arrow indicating a layer at a depth of 424.8 μm from the focal plane. Figure 10(c) shows an enface view of that layer, and Fig. 10(d) shows the defocus corrected image. We can appreciate the improvement in lateral resolution, as the cell boundaries are now clearly visible.

Fig. 10 (a) A tomogram of the grape sample, (b) 3-D image volume with dotted line showing the location of the tomogram shown in (a), (c) enface image obtained at the depth of 424.8 µm in the grape sample indicated by the arrow in (a), (d) digitally focused image of (c).

In Fig. 11 the blue dotted plot shows the coefficients of the defocus error calculated using the two subaperture method, given by Eq. (9), for each layer of the grape sample, whereas the solid red line shows the theoretically calculated coefficients using the knowledge of the optical distance of each layer of the grape sample from the focal plane, given by

$$a_T^z=\frac{\pi\,\lambda_0\,\Delta z\,N_z}{4M^2\,\Delta\xi^2},\tag{10}$$
where ΔzNz denotes the optical distance of the zth layer from the focal plane, with Δz being the pixel pitch in depth and Nz the location of the layer in pixels. The green dotted line indicates the theoretically calculated depth of focus at z=130 μm. Here the number of pixels in the image is M=500 and the pixel pitch of the camera is Δξ=8 μm. The pixel pitch in depth, calculated assuming a uniform refractive index of n=1.5 in the sample, is given by
$$\Delta z=\frac{\lambda_0^2}{2n\,\Delta\lambda},\tag{11}$$
with λ0=852.5 nm and Δλ=42.2 nm. The effect of applying the theoretically calculated defocus correction is essentially the same as Fresnel propagating the field of the out-of-focus layers to the focal plane. We can see from the plot that beyond a depth of about 380 μm there is an increasing deviation of the result obtained by the two subaperture method from the theoretical one calculated using Eq. (10). This may give the impression that the subaperture method is inaccurate in estimating the defocus error at greater depths. We therefore investigated the effect of the theoretically calculated defocus correction on the visual quality of the images and compared it with the defocus corrected images obtained using the two subaperture method. We found that the images obtained using both methods appear similar in visual quality up to a depth of about 380 μm. At greater depths, however, the images obtained using the theoretically calculated defocus corrections appear blurry, whereas the images obtained with the two subaperture method remain in sharp focus. This shows that for deeper layers the theoretically calculated defocus correction requires further optimization or processing to achieve the level of performance demonstrated by the single step subaperture method. Media 1 shows a fly-through of the depth of the original and digitally focused 3-D volume images of the grape sample, where we can clearly see that the same lateral resolution is maintained throughout the depth after defocus correction.
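For reference, the theoretical quantities of Eqs. (10) and (11) evaluate as follows for the parameters given above; the layer index Nz is a hypothetical example of ours:

```python
import numpy as np

lam0, dlam, n = 852.5e-9, 42.2e-9, 1.5
dz = lam0 ** 2 / (2 * n * dlam)       # depth pixel pitch, Eq. (11): ~5.7 um

M, dxi = 500, 8e-6                    # image pixels and camera pixel pitch
Nz = 75                               # hypothetical layer index in pixels
aT = np.pi * lam0 * dz * Nz / (4 * M ** 2 * dxi ** 2)   # Eq. (10)
print(dz, aT)                         # ~5.7e-6 m and the defocus coefficient
```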

Fig. 11 Comparison of the coefficients of defocus error with depth z obtained theoretically and using the two subaperture method for a grape sample.

Fig. 12: (a) Original grape sample layer at z=285 μm, (b) image obtained by applying the theoretically calculated phase error correction to (a), (c) theoretically estimated phase error in radians for (b), (d) defocus corrected image of (a) obtained using the two subaperture method, and (e) estimated phase error in radians for case (d). (f) Original grape sample layer at z=511 μm, (g) image obtained by applying the theoretically calculated phase error correction to (f), (h) theoretically estimated phase error in radians for (g), (i) defocus corrected image of (f) obtained using the two subaperture method, and (j) estimated phase error in radians for case (i) (see Media 1).

The results in Fig. 12 show that at the smaller depth of z = 285 μm the image obtained by applying the theoretically calculated defocus correction is visually sharp (Fig. 12(b)), whereas at z = 511 μm it appears blurry (Fig. 12(g)). The images obtained by the subaperture method, on the other hand, appear in sharp focus at both z = 285 μm and z = 511 μm, as shown in Figs. 12(d) and 12(i), respectively. This shows that the higher P-V defocus errors estimated by the two subaperture method at greater depths do in fact yield better focusing performance. The difference between the theoretically calculated defocus errors and those estimated using the two subaperture method may stem from the fact that for the theoretical calculation we assumed the sample to have a uniform refractive index. The sample may, however, have random local refractive index fluctuations which cause spherical aberration, and with increasing depth this effect can accumulate. In our experimental result in Fig. 9 we demonstrated that the two subaperture method is capable of balancing spherical aberration to achieve a focused image; the better results obtained for the deeper layers of the grape sample using the subaperture method further support this. The quantification of the induced spherical aberration requires accurate knowledge of the refractive index fluctuations within the sample, as well as theoretical modelling of the sample and the system, which is beyond the scope of the present paper and will be investigated in the future. It is also difficult to pinpoint the exact reason for the abrupt strong defocus induced in the images beyond a certain depth, which is noticeable in Media 1 as well; this is currently under investigation. Nevertheless, we have clearly demonstrated that the two subaperture method can correct for the defocus and spherical aberrations commonly encountered when imaging biological samples.

5. Discussion and conclusion

In this paper we investigated subaperture correlation based phase correction as a post-processing technique for interferometric imaging where the phase information can be accessed. In the theoretical analysis we showed that this method is independent of any system parameters such as propagation distance or physical detector size. We can correct for the aberrations present in the images without knowing the system details, provided the image is band limited and not severely aliased. The method works very well when the spatial frequency content is spread uniformly across the pupil, which is the case when imaging a diffusely scattering object with laser light. We further divided the pupil plane into overlapping subapertures to obtain more sampling points and thereby reduce the error in the estimation of the phase error. We used Taylor monomials, which are simple and easy to use, for fitting the phase estimate, and our simulations showed that the least squares method for finding the coefficients of the monomials is reliable. We also conducted simple experiments to show the proof of principle using a FF SS OCT system. We were able to reduce the aberrations introduced in the sample using subaperture processing, and the images obtained after processing clearly show an improvement in resolution. This method does not require the long iterations of the optimization methods used for phase correction [10,11], but provides the phase error estimation in a single step. It can thus be seen as a digital equivalent of a Shack-Hartmann wavefront sensor.

We have demonstrated the method using Taylor monomials as basis functions, which are not orthogonal over the square aperture. Hence, in the presence of higher order aberrations there might be coupling of errors between the coefficients of the Taylor monomials, which could make the accurate quantification of the contributions of the various aberration components, such as defocus and spherical aberration, difficult. Nevertheless, we have demonstrated that by using Taylor monomials we can reduce the overall aberration to achieve performance close to the diffraction limit. In the future it would be interesting to use Zernike-like polynomials that are orthogonal over the sampled square aperture, which would allow an accurate quantification of the various higher order aberration contributions. A potential application would be the characterization and quantification of the aberrations present in the human eye while imaging the retina.

For the results presented in this paper, the cross correlations of the subapertures were done sequentially in MATLAB, with a processing time of around 2 to 4 seconds per frame. The speed can be greatly improved with parallel processing on a graphics processing unit (GPU). In the case of 3-D imaging, the phase error correction can be applied throughout the isoplanatic volume, as done for example in [18], where the aberration is uniform and the sample has a uniform refractive index, using the phase error estimation from a single layer. Furthermore, subaperture processing opens the possibility of region-of-interest based phase correction if the aberrations and the sample are not uniform across the volume.

6. Appendix A

Let D̃p and D̃q represent any two square subapertures of the discretized, filtered and segmented pupil data D̃, given by

$$\tilde D_p=\tilde S_p\exp(i\phi_p)\approx\tilde S_p\exp\left\{i\left[\phi_{po}+(x-x_{po})\frac{\partial\phi_p}{\partial x}+(y-y_{po})\frac{\partial\phi_p}{\partial y}\right]\right\}\tag{12}$$
$$\tilde D_q=\tilde S_q\exp(i\phi_q)\approx\tilde S_q\exp\left\{i\left[\phi_{qo}+(x-x_{qo})\frac{\partial\phi_q}{\partial x}+(y-y_{qo})\frac{\partial\phi_q}{\partial y}\right]\right\}\tag{13}$$
where S̃p and S̃q represent the ideal aberration-free data with phase errors ϕp and ϕq, (∂ϕp/∂x, ∂ϕp/∂y) and (∂ϕq/∂x, ∂ϕq/∂y) are the average slopes, and (xpo, ypo) and (xqo, yqo) are the center pixels of the pth and qth subapertures, respectively. We assume that any given subaperture is small enough that the phase error within it can be approximated by a first order Taylor series. We calculate the 2-D IDFT of D̃p and D̃q:
$$\mathrm{IDFT}[\tilde D_p]=\frac{1}{4M^2}\sum_{x=0}^{2M-1}\sum_{y=0}^{2M-1}\tilde S_{p\,x,y}\exp(i\phi_p)\exp\left[i2\pi\left(\frac{mx}{2M}+\frac{ny}{2M}\right)\right]\propto I_p\left(m-\frac{M}{\pi}\frac{\partial\phi_p}{\partial x},\,n-\frac{M}{\pi}\frac{\partial\phi_p}{\partial y}\right)\tag{14}$$
$$\mathrm{IDFT}[\tilde D_q]=\frac{1}{4M^2}\sum_{x=0}^{2M-1}\sum_{y=0}^{2M-1}\tilde S_{q\,x,y}\exp(i\phi_q)\exp\left[i2\pi\left(\frac{mx}{2M}+\frac{ny}{2M}\right)\right]\propto I_q\left(m-\frac{M}{\pi}\frac{\partial\phi_q}{\partial x},\,n-\frac{M}{\pi}\frac{\partial\phi_q}{\partial y}\right)\tag{15}$$
We arrive at Eqs. (14) and (15) by using the Fourier shift theorem. Ip and Iq are low resolution versions of the image Is, and we assume that both have the same intensity. This is a valid assumption if the back-scattered light from the sample is diffuse, which is generally the case when imaging tissue with coherent illumination. However, the Gaussian profile of the illumination and the presence of specular reflections from the sample can cause the intensities to vary between subapertures; therefore we normalize the intensities of the images to the same level. We subsequently cross correlate the intensities of Ip and Iq to determine the relative shift between the two images, from which we derive the relative slope information as given in Eq. (7).

We model the phase error ϕe as a linear combination of Taylor monomials TJ given by

$$\phi_e=\sum_{J=2}^{Z}T_J(X,Y)=\sum_{J=2}^{Z}\sum_{j=0}^{J}a_{Jj}X^jY^{J-j},\tag{16}$$
$$X=x-M,\quad x=0,\ldots,2M-1;\qquad Y=y-M,\quad y=0,\ldots,2M-1,\tag{17}$$
where X and Y represent the pixel location in Cartesian coordinates with the center pixel of the whole data array as the origin, Z is the highest order of the monomials, and aJj are the coefficients we wish to determine. We have neglected the constant and linear terms in ϕe as they do not cause blurring of the image. The gradient of the phase error is then given by
$$\nabla\phi_e=\sum_{J=2}^{Z}\sum_{j=0}^{J}a_{Jj}\,\nabla\left(X^jY^{J-j}\right).\tag{18}$$
By comparing the slope data $S=(s_{x,1},\ldots,s_{x,N_s},s_{y,1},\ldots,s_{y,N_s})^T$ with the gradient of the phase error in Eq. (18) at the locations of the central pixels of each subaperture, we obtain a system of equations in matrix form,
$$GA=S,\tag{19}$$
where G is the gradient matrix and $A=(a_{20},\ldots,a_{ZZ})^T$ is the coefficient vector we need to determine, Ns is the number of subapertures, and the number of coefficients is $N_g=\{[(Z+1)(Z+2)/2]-3\}\le N_s$. We find the least squares solution to Eq. (19) as
$$A=(G^TG)^{-1}G^TS.\tag{20}$$
Once we have estimated the coefficients, we can reconstruct the phase error using Eq. (16). We work with Taylor monomials as they are simple to use. Zernike polynomials can also be used, but one has to take care that they are orthogonal over the sampled aperture, and the solutions differ for apertures of different shapes [19]. Our simulations show that Taylor monomials work well enough for the determination of the phase error.
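A compact sketch of the fit in Eqs. (16)-(20) (our illustration; the subaperture centers and slope measurements are assumed given) builds the gradient matrix from the monomial derivatives and solves Eq. (19) in the least squares sense:

```python
import numpy as np

def fit_taylor_wavefront(centers, slopes, Z):
    """Least squares Taylor-monomial wavefront fit, Eqs. (16)-(20).

    centers: (Ns, 2) array of subaperture-center coordinates (X, Y),
             relative to the center pixel of the data array.
    slopes:  (2*Ns,) vector S = (s_x,1..s_x,Ns, s_y,1..s_y,Ns).
    Z:       highest monomial order; constant and linear terms are excluded.
    """
    X, Y = centers[:, 0], centers[:, 1]
    terms = [(J, j) for J in range(2, Z + 1) for j in range(J + 1)]
    Ns = len(centers)
    G = np.zeros((2 * Ns, len(terms)))
    for t, (J, j) in enumerate(terms):
        # Gradient of X^j Y^(J-j); max() guards 0 * X**(-1) at the origin.
        G[:Ns, t] = j * X ** max(j - 1, 0) * Y ** (J - j)
        G[Ns:, t] = (J - j) * X ** j * Y ** max(J - j - 1, 0)
    A, *_ = np.linalg.lstsq(G, slopes, rcond=None)   # Eq. (20)
    return terms, A
```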

Acknowledgments

This research was supported by Carl Zeiss Meditec Inc. Dublin, USA, Medical University Vienna, European project FAMOS (FP7 ICT, contract no. 317744), FUN OCT (FP7 HEALTH, contract no. 201880) and the Christian Doppler Society (Christian Doppler Laboratory “Laser development and their application in medicine”).

References and links

1. B. C. Platt and R. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17(5), S573–S577 (2001). [PubMed]  

2. J. L. Beverage, R. V. Shack, and M. R. Descour, “Measurement of the three-dimensional microscope point spread function using a Shack-Hartmann wavefront sensor,” J. Microsc. 205(1), 61–75 (2002). [CrossRef]   [PubMed]  

3. M. Rueckel, J. A. Mack-Bucher, and W. Denk, “Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing,” Proc. Natl. Acad. Sci. U.S.A. 103(46), 17137–17142 (2006). [CrossRef]   [PubMed]  

4. M. Pircher and R. J. Zawadzki, “Combining adaptive optics with optical coherence tomography: Unveiling the cellular structure of the human retina in vivo,” Expert Rev. Ophthalmol. 2(6), 1019–1035 (2007). [CrossRef]  

5. K. Sasaki, K. Kurokawa, S. Makita, and Y. Yasuno, “Extended depth of focus adaptive optics spectral domain optical coherence tomography,” Biomed. Opt. Express 3(10), 2353–2370 (2012). [CrossRef]   [PubMed]  

6. L. A. Poyneer, “Scene-based Shack-Hartmann wave-front sensing: analysis and simulation,” Appl. Opt. 42(29), 5807–5815 (2003). [CrossRef]   [PubMed]  

7. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010). [CrossRef]   [PubMed]  

8. T. Haist, J. Hafner, M. Warber, and W. Osten, “Scene-based wavefront correction with spatial light modulators,” Proc. SPIE 7064, 70640M (2008). [CrossRef]

9. A. E. Tippie and J. R. Fienup, “Sub-Aperture Techniques Applied to Phase-Error Correction in Digital Holography,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD) (Optical Society of America, 2011), paper DMA4. http://www.opticsinfobase.org/abstract.cfm?URI=DH-2011-DMA4 [CrossRef]

10. S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A 25(4), 983–994 (2008). [CrossRef]   [PubMed]  

11. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. U.S.A. 109(19), 7175–7180 (2012). [CrossRef]   [PubMed]  

12. P. Hariharan, Optical Interferometry (Academic, 2003).

13. D. Malacara, Optical Shop Testing (Wiley, 1992).

14. M. Rueckel and W. Denk, “Properties of coherence-gated wavefront sensing,” J. Opt. Soc. Am. A 24(11), 3517–3529 (2007). [CrossRef]   [PubMed]  

15. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer, 2008).

16. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]   [PubMed]  

17. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011). [CrossRef]   [PubMed]  

18. D. Hillmann, G. Franke, C. Lührs, P. Koch, and G. Hüttmann, “Efficient holoscopy image reconstruction,” Opt. Express 20(19), 21247–21263 (2012). [CrossRef]   [PubMed]  

19. V. N. Mahajan and G. M. Dai, “Orthonormal polynomials in wavefront analysis: analytical solution,” J. Opt. Soc. Am. A 24(9), 2994–3016 (2007). [CrossRef]   [PubMed]  

Supplementary Material (1)

Media 1: AVI (7786 KB)     



