
Digital focusing of OCT images based on scalar diffraction theory and information entropy

Open Access

Abstract

This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatterers to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. Using the proposed approach, we demonstrate that a lateral resolution close to that at the focal plane can be recovered, with minimal loss of resolution, from imaging planes outside the depth-of-field region. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method.

©2012 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) [1,2] is a relatively novel imaging modality that allows non-invasive, cross-sectional imaging of turbid biological tissues with micrometer resolution and an imaging depth of up to 2 mm below the surface. In addition, OCT is capable of providing useful information about physiological processes within biological tissue, for example functional microcirculation [3,4]. When an OCT system scans a tissue, the probe beam is typically focused into the sample by an objective lens. Because an optical lens is used to focus the probe beam, only the portion of the OCT image that falls within the depth of field (DOF) exhibits the desired lateral resolution, whereas the portion that falls outside the DOF region is blurred laterally. Although the problem is somewhat mitigated by focusing the probe beam with a relatively low numerical aperture (NA), this unfortunately reduces the attainable lateral resolution at the focal plane. In general, high lateral resolution and a long DOF are two of the most desirable parameters for most OCT imaging applications; however, they are reciprocally coupled. The relationship between the lateral resolution and the DOF is schematically illustrated in Fig. 1(a) for a low NA and Fig. 1(b) for a high NA.
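The coupling can be made quantitative with the commonly quoted Gaussian-beam estimates below. These are not derived in this paper and are given only as a sketch; d denotes the beam diameter at the objective lens and f its focal length. Because the DOF scales with the square of the focal spot size, halving the lateral resolution element shortens the DOF by a factor of four.

```latex
% Standard Gaussian-beam estimates (an assumption of this note, not taken from
% the paper): d = beam diameter at the objective lens, f = focal length,
% \lambda = center wavelength of the probe light.
\begin{align}
  \Delta x &\approx \frac{4\lambda f}{\pi d}, &
  \mathrm{DOF} &\approx \frac{\pi\,\Delta x^{2}}{2\lambda}.
\end{align}
```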


Fig. 1 The effect of numerical aperture on the desirable lateral resolution and the DOF in OCT imaging, for the cases of (a) low NA, resulting in a relatively long DOF but low lateral resolution; and (b) high NA, leading to a relatively shorter DOF but much higher lateral resolution.


Considerable effort has been devoted in the community to overcoming the coupling between the lateral resolution and the DOF. Ding et al. [5] incorporated an axicon lens with a top angle of 160° into the sample arm of an interferometer and maintained a 10 μm lateral resolution over a focusing depth of 6 mm. Xie et al. [6] developed a probe based on a gradient-index (GRIN) lens rod for endoscopic spectral-domain OCT with a capability of dynamically tracking the beam focus, for which a dynamic focusing range from 0 to 7.5 mm was demonstrated without moving the probe itself. Divetia et al. [7] used a 1 mm liquid-filled polymer lens for endoscopic OCT applications that dynamically provided a scanning depth of field by changing the hydraulic pressure within the lens, enabling dynamic control of the focal depth without the need for articulated parts; this configuration was shown to have a resolving power of 5 μm over an object depth of 2.5 mm. Merino et al. [8] presented a combined en face OCT/SLO system equipped with an adaptive optics closed loop for high-resolution imaging of the in vivo human retina; the correction of aberrations produced by the adaptive optics closed-loop system increased the signal-to-noise ratio of the images, and a slight improvement in the lateral resolution was also obtained. Dynamic focusing, or focus tracking [9,10], has also been used to maintain high lateral resolution over a large imaging depth. Holmes et al. [11] presented a multi-beam OCT system that overcomes the problem of limited lateral resolution inherent in single-beam Fourier-domain OCT and reduces speckle noise. However, the above-mentioned hardware-based methods all require special hardware configurations in the system design to achieve their objectives, which inevitably limits the flexibility of the OCT system by sacrificing scanning speed and increasing system cost.

There also exist several digital methods in the literature to compensate for image degradation outside the DOF. Several research groups, including Yasuno et al. [12], Kulkarni et al. [13], Ralston et al. [14], Wang et al. [15], and Liu et al. [16], have proposed system models or point spread functions (PSFs) to describe coherent light-tissue interactions in OCT images, upon which deconvolution algorithms were applied to reduce transverse blurring and thereby improve the transverse resolution. Rolland et al. [17] developed a Gabor-based fusion technique, based on the concept of the inverse local Fourier transform and Gabor's signal expansion, to produce a high lateral resolution image throughout the imaging depth. The digital methods above [12–17] require prior knowledge of the optical parameters or the PSF of the optical system. A vector-field model of optical coherence microscopy [18] and interferometric synthetic aperture microscopy (ISAM) [19] were proposed to solve the inverse scattering problem for interference microscopy; they can achieve reconstructed volumes with a resolution in all planes that is equivalent to the resolution achieved only at the focal plane in conventional high-resolution microscopy. Yu et al. [20] adopted an angular spectrum diffraction model to simulate the wave propagation from out-of-focus scatterers, with which high-resolution details could be recovered from outside the depth-of-field region. The vector-field model [18], ISAM [19], and the angular spectrum diffraction method [20] do not require prior knowledge of the optical parameters; however, they do require the position of the beam focus to be known before image reconstruction. Although it is not ideal, the PSF may be measured by the use of a specialized phantom in a well-controlled separate experiment. Based on a Gaussian-function OCT model, Liu et al. [21] proposed an automatic PSF estimation method that searches for the discontinuity of the information entropy across a series of recovered images. However, the adopted OCT model is not sufficiently accurate because the phase information of the OCT signal is neglected; in addition, the PSF estimation method is only applicable to the Gaussian-function OCT model.

In this paper, we describe an alternative, software-based approach that uses a scalar diffraction model to simulate the wave propagation from out-of-focus scatterers to the focal plane. We use an information entropy method to automatically determine the propagation distance between the out-of-focus plane and the focal plane. We then demonstrate the effectiveness of the proposed method using fresh onion samples and fat tissues excised from mice.

2. Principle

2.1. Scalar diffraction model

Scalar diffraction theory is accurate provided that the diffracting structures are large compared with the wavelength of light. For a monochromatic wave, the scalar field may be written as

$$ u(P_0, t) = A(P_0)\cos\!\left[2\pi\nu t + \varphi(P_0)\right] \tag{1} $$

where $A(P_0)$ and $\varphi(P_0)$ are the amplitude and phase of the wave at position $P_0$, and ν is the optical frequency. The scalar field may also be described as a complex function (Eq. (2)) that obeys the time-independent Helmholtz equation (Eq. (3)):

$$ U(P_0) = A(P_0)\exp\!\left[j\varphi(P_0)\right] \tag{2} $$
$$ \left(\nabla^2 + k^2\right)U = 0 \tag{3} $$

where $\nabla^2$ is the Laplacian operator and $k$ is the wave number, given by

$$ k = \frac{2\pi n\nu}{c} = \frac{2\pi}{\lambda} \tag{4} $$

and λ is the wavelength in the medium. According to the Rayleigh-Sommerfeld diffraction theory [22], the wave disturbance at an observation point P on the focal plane Σ is the superposition of waves emanating from all points on the de-focal plane Σ0 (Fig. 2). Thus, the optical field over the focal plane Σ can be represented as


Fig. 2 Geometry of diffraction. Σ0, de-focal plane; Σ, focal plane.


$$ U(P) = \frac{1}{j\lambda}\iint_{\Sigma_0} U(P_0)\,\frac{\exp(jkr)}{r}\cos(\mathbf{n},\mathbf{r})\,\mathrm{d}\Sigma_0 \tag{5} $$

Based on the coordinate system displayed in Fig. 2, we get

$$
\begin{aligned}
U(x,y) &= \frac{1}{j\lambda}\iint_{\Sigma_0} U(x_0,y_0)\,
\frac{\exp\!\left(jk\sqrt{(x-x_0)^2+(y-y_0)^2+z^2}\right)}{\sqrt{(x-x_0)^2+(y-y_0)^2+z^2}}\,
\frac{z}{\sqrt{(x-x_0)^2+(y-y_0)^2+z^2}}\,\mathrm{d}x_0\,\mathrm{d}y_0 \\
&= \frac{z}{j\lambda}\iint_{\Sigma_0} U(x_0,y_0)\,
\frac{\exp\!\left(jk\sqrt{(x-x_0)^2+(y-y_0)^2+z^2}\right)}{(x-x_0)^2+(y-y_0)^2+z^2}\,\mathrm{d}x_0\,\mathrm{d}y_0
\end{aligned} \tag{6}
$$

Under the paraxial approximation, this can be rewritten as

$$ U(x,y) = \frac{1}{j\lambda z}\iint_{\Sigma_0} U(x_0,y_0)\exp\!\left\{jkz + \frac{jk}{2z}\left[(x-x_0)^2+(y-y_0)^2\right]\right\}\mathrm{d}x_0\,\mathrm{d}y_0 \tag{7} $$
$$ U(x,y) = \frac{1}{j\lambda z}\exp(jkz)\left\{U(x_0,y_0)\otimes\exp\!\left[\frac{jk}{2z}\left(x^2+y^2\right)\right]\right\} \tag{8} $$
$$ U(x,y) = \frac{1}{2\pi}\exp(jkz)\,F^{-1}\!\left\{F\!\left[U(x_0,y_0)\right]\times\exp\!\left[-\frac{jz}{2k}\left(k_x^2+k_y^2\right)\right]\right\} \tag{9} $$
$$ \left|U(x,y)\right| = \frac{1}{2\pi}\left|F^{-1}\!\left\{F\!\left[U(x_0,y_0)\right]\times\exp\!\left[-\frac{jz}{2k}\left(k_x^2+k_y^2\right)\right]\right\}\right| \tag{10} $$

Here, ⊗ in Eq. (8) denotes two-dimensional convolution, $k_x$ and $k_y$ are spatial frequencies, and $F[U(x_0,y_0)]$ is the Fourier transform of $U(x_0,y_0)$ with respect to $(k_x, k_y)$. The distance z between each de-focal plane and the focal plane is determined automatically by searching for the clearest recovered image $|U(x,y)|$ via an image-definition-evaluation criterion based on information entropy theory (see Subsection 2.2).

There are a number of digital diffraction methods that may be used for focusing OCT en face images. For the scalar diffraction integral method [Eqs. (6) and (7)], the pixel resolution of the reconstructed images can vary with the reconstruction distance because the method is based on the propagation of spherical wavefronts. In contrast, the image resolution does not change for the Fresnel-approximation convolution method [Eq. (8)] or the Fresnel-approximation FFT method [Eqs. (9) and (10)] [23,24]. The numerical implementation of the scalar diffraction integral [Eqs. (6) and (7)] requires a diffraction distance large enough to avoid aliasing, and the Fresnel-approximation condition in Eqs. (7)–(10) also requires a diffraction distance large enough to guarantee a precise reconstruction. The integral method [Eqs. (6) and (7)] and the convolution method [Eq. (8)] therefore do not work if the reconstruction plane is close to the initial plane. However, the Fresnel-approximation FFT method [Eqs. (9) and (10)] and the angular spectrum method [20,23,24] do not have any distance limitations because they are based on the propagation of plane waves. Although the angular spectrum method does not assume the paraxial approximation, in this paper the Fresnel-approximation FFT method was used to focus the OCT en face images, since the diffraction distance of the defocused images is large enough to satisfy the Fresnel-approximation requirement and the method has the advantages of simplicity and computational efficiency.
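To illustrate how the Fresnel-approximation FFT method of Eqs. (9) and (10) can be implemented numerically, the following Python sketch (an assumption of this note, not code from the paper; the function name fresnel_fft_propagate and the sampling intervals dx, dy are hypothetical) propagates a complex en face field over a distance z using NumPy FFTs. The 1/(2π) prefactor of Eq. (9) stems from the continuous-transform convention and is not needed with NumPy's mutually inverse fft2/ifft2 pair.

```python
import numpy as np

def fresnel_fft_propagate(field, z, wavelength, dx, dy):
    """Propagate a complex en face field over a distance z with the
    Fresnel-approximation transfer-function (FFT) method, cf. Eq. (9)."""
    ny, nx = field.shape
    k = 2.0 * np.pi / wavelength                    # wave number in the medium
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)     # angular spatial frequencies
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)                    # shape (ny, nx), matches field
    # Fresnel transfer function: constant phase times a quadratic phase in (kx, ky)
    H = np.exp(1j * k * z) * np.exp(-1j * z * (KX**2 + KY**2) / (2.0 * k))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Positive and negative z values move the reconstruction plane in opposite directions along the optical axis, so the entropy search described in Subsection 2.2 can scan both sides of the acquisition plane.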

2.2. Image-definition-evaluation criterion based on information entropy

In 1948, Shannon used probability theory to model information sources, i.e., the data produced by a source are treated as a random variable. The information content (entropy) of a discrete random variable X that has a probability distribution $P_X = (p_1, p_2, \ldots, p_n)$ is then defined as

$$ P(X) = P(p_i) = \sum_{i=1}^{n} p_i \log\!\left(1/p_i\right) \tag{11} $$

where the term $\log(1/p_i)$ indicates the amount of uncertainty (information) associated with the corresponding outcome. Thus, entropy is a statistical average of the uncertainty in a random event, or of the information obtained by observing a data source, and it measures the dispersion of a probability distribution. Based on this definition, we can analyze OCT images as realizations of random variables, and the information entropy of the grayscale image O′(x, y) can be expressed as

$$ E(I) = -\sum_{i=1}^{n} p(I_i)\log\!\left(p(I_i)\right) \tag{12} $$

where $p(I_i)$ is the marginal probability of the image O′, i.e., the probability that the intensity value of the image equals $I_i$ (the possible values of I are $I_1, I_2, \ldots, I_n$):

$$ p(I_i) = \frac{\text{number of pixels with intensity value equal to } I_i}{\text{total number of pixels in the image}} \tag{13} $$

The Shannon entropy is a measure of the dispersion of a probability distribution. A focused image corresponds to a low entropy value because its intensity distribution is concentrated around sharp structural features, whereas a defocused image yields a high entropy value because its structural distribution is dispersed. Therefore, information entropy can be used as the image-definition-evaluation (IDE) criterion in our evaluation: the lower the information entropy of image O′(x, y), the clearer the image O′(x, y) becomes. It follows that the diffraction distance z for each en face image in Eq. (10) can be determined by searching for the minimum of the information entropy of the recovered images:

$$ \text{Find}\ \min_{z}\ E(I_z) \quad \text{s.t.}\quad z_{\min} < z < z_{\max} \tag{14} $$

where $z_{\min}$ and $z_{\max}$ are the minimal and maximal distances, respectively, between the de-focal plane and the focal plane. Theoretically, the recovered image is clearest when the variable z in the diffraction Eq. (10) equals the actual distance between the de-focal plane and the focal plane. The focused complex OCT signal can then be calculated via Eq. (9).
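A minimal sketch of the entropy criterion of Eqs. (12) and (14) is given below, under the same assumptions as the previous code block (it reuses the hypothetical fresnel_fft_propagate function, and the 256-bin histogram is an assumed implementation detail). The entropy is computed from the gray-scale histogram of |U(x, y)|, and the candidate distance with the smallest entropy is returned.

```python
import numpy as np

def image_entropy(image, bins=256):
    """Shannon entropy of the gray-scale histogram of |U(x, y)|, cf. Eq. (12)."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                  # convention: 0 * log(0) = 0
    return float(-np.sum(p * np.log(p)))

def find_focus_distance(field, z_candidates, wavelength, dx, dy):
    """Search z_min < z < z_max for the minimum-entropy reconstruction, cf. Eq. (14)."""
    entropies = [
        image_entropy(np.abs(fresnel_fft_propagate(field, z, wavelength, dx, dy)))
        for z in z_candidates
    ]
    best = int(np.argmin(entropies))
    return z_candidates[best], np.asarray(entropies)
```

For example, z_candidates = np.arange(z_min, z_max, 10e-6) would scan the search interval in 10 μm steps, mirroring the entropy-versus-z curve shown later in Fig. 5.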

2.3. Flow diagram of OCT 3D volume recovering

The flow diagram of the proposed method for recovering defocused FD-OCT images is given in Fig. 3.


Fig. 3 Flow diagram of OCT 3D volume recovery.


The original 3D data set (spectrograms) of a sample is acquired by two-dimensional scanning of the probe beam through a pair of X-Y galvanometer mirrors. Each non-linear spectral interferogram, representing one A-scan, is rescaled into linear k-space and then fast Fourier transformed (FFT) to yield the complex A-line OCT signal (phase and amplitude) along the z axis. The 3D volume data are then resampled along the depth (z direction) to obtain a sequence of en face (x-y) frames at different depth locations. The complex signal of each en face (x-y) frame is treated as the optical source in our model and is digitally focused onto the focal plane using diffraction Eq. (9), where the distance z between the de-focal plane and the focal plane is determined by searching for the minimal information entropy of the recovered images. This process is performed one-by-one for the en face OCT images at all depths within the 3D volume.
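The pipeline of Fig. 3 could be sketched as follows, reusing the two hypothetical functions from Section 2. The array shapes, the variable names spectrograms, k_nonlinear, and k_linear, and the choice of np.interp for the k-space rescaling are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def recover_volume(spectrograms, k_nonlinear, k_linear,
                   z_candidates, wavelength, dx, dy):
    """Sketch of the volume-recovery pipeline in Fig. 3 (assumed shapes:
    spectrograms is (n_y, n_x, n_k), one spectral interferogram per A-scan;
    k_nonlinear is monotonically increasing and k_linear has n_k points)."""
    n_y, n_x, n_k = spectrograms.shape

    # Step 1: rescale each non-linear spectrum to linear k-space, then FFT
    # along k to obtain the complex A-line OCT signal versus depth.
    linear = np.empty_like(spectrograms, dtype=float)
    for iy in range(n_y):
        for ix in range(n_x):
            linear[iy, ix] = np.interp(k_linear, k_nonlinear, spectrograms[iy, ix])
    volume = np.fft.fft(linear, axis=-1)   # complex signal; for a real spectrum only
                                           # the positive-depth half is physically used

    # Step 2: treat every en face (x-y) slice as the source field, find its
    # minimum-entropy propagation distance, and focus it digitally (Eq. (9)).
    focused = np.empty_like(volume)
    for iz in range(n_k):
        en_face = volume[:, :, iz]
        z_best, _ = find_focus_distance(en_face, z_candidates, wavelength, dx, dy)
        focused[:, :, iz] = fresnel_fft_propagate(en_face, z_best, wavelength, dx, dy)
    return focused
```

In practice the search range z_candidates would be adapted to each depth plane, since zmin and zmax differ between en face slices.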

3. Experimental results

The OCT system used to acquire the 3D volumetric data set from a tissue sample is shown in Fig. 4; it is similar to the one previously reported [21]. The system used a superluminescent diode as the illumination light source, with a central wavelength of 1300 nm and a bandwidth of 80 nm, providing a ~10 µm axial resolution in air. The light was split into two paths in a fiber-based Michelson interferometer. One beam was coupled onto a stationary reference mirror, and the second beam was delivered to the sample by a collimating lens and a focusing lens that gave a theoretical lateral resolution of ~16 μm and a depth of field of ~350 μm. The spectral interferogram between the reference light from the reference mirror and the light backscattered from the sample was sent to a home-built spectrometer via an optical circulator. The spectrometer consisted of a collimating lens, a transmission grating, a camera lens with a focal length of 100 mm, and a 1024-element linear-array InGaAs detector. A pilot laser was also coupled into the interferometer to guide the scanning beam. The focused beam was scanned across the sample using a pair of galvanometer mirrors mounted in the sample arm.


Fig. 4 Schematic of the OCT system used in this study.


To show the performance of the proposed method, experiments were performed on fresh onion tissues and on fat tissues excised from a euthanized mouse. In the experiments, 3D OCT images were obtained with 512 A-scans in each B-scan and 512 B-scans in each 3D scan. We first acquired one such 3D data set with the focal position of the sample beam placed 200 μm below the tissue surface. Because of the limited OCT imaging depth, this 3D data set is treated as a reference with the desired imaging resolution and minimal blurring, since most of the imaged content lies within the system DOF region. We then acquired another 3D data set with the focal position 0.3 mm above the tissue surface. In this case, the imaged content is outside the DOF region, leading to progressively increasing lateral blurring of the OCT images over the imaging depth.

Figure 5 illustrates the recovered images at different stages of the recovery process using the information entropy theory described in the last section; the entropy curve as a function of the diffraction distance z over the entire process is shown in the middle of the figure. This example used the OCT images captured from a fresh onion tissue. The original (defocused) en face OCT image is given in Fig. 5C. In this case, the entropy reached its minimal value of 1.8 when the recovered image (Fig. 5D) was obtained with a diffraction distance of z = +240 μm. This distance can be considered the diffraction distance between this de-focal image plane and the actual focal plane. Note that the originally blurred image (Fig. 5C) in the out-of-focus region is now focused automatically through digital means using the proposed algorithm. The computational time required to compute the entropy of one recovered image (512 × 512 pixels) at a selected depth z was 0.33 s using an HP dv6 laptop computer (Intel i3 processor, 2.4 GHz; 6 GB RAM). The total processing time for one en face image is therefore (0.33 × Nz) s, where Nz is the number of z values selected between zmin and zmax.


Fig. 5 Change of the information entropy of recovered images A-F as a function of the diffraction distance z. Image C is the original de-focal image (z = 0) and image D is the clearest image with the lowest entropy, deemed as the recovered image at this plane.


Figure 6(a) shows one typical defocused en face image of a fresh onion tissue from the 3D data set acquired when the focal position of the probe beam was 0.3 mm above the sample surface. Figure 6(b) shows the image automatically recovered from Fig. 6(a) by using the diffraction focusing method and the entropy-based IDE function. Figure 6(c) is the en face image of the onion tissue acquired when the focal position was placed about 200 μm below the tissue surface, i.e., when the lateral blurring is minimal. Comparing Fig. 6(b) and Fig. 6(c), we find that the image recovered from the de-focal region has a quality similar to that of the image acquired when the sample was placed in the focal region. The entropy values for the images in Figs. 6(a), 6(b), and 6(c) are 2.6776, 1.7971, and 2.1282, respectively. We also plot in Fig. 7 the signal profiles along selected locations for comparison, where it is clear that the lateral resolution of the OCT image is significantly improved (Fig. 7(b)) after the defocused image (Fig. 7(a)) was recovered by the proposed method. The recovered image (Fig. 7(b)) has a lateral resolution similar to that captured when the sample was at the focal plane (Fig. 7(c)). Note that Fig. 7(a) and Fig. 7(c) should ideally be from the same sample locations; however, the images in Fig. 6(a) and Fig. 6(c) were captured under two different conditions, one with the sample defocused and the other with the sample focused by translating it by 450 μm. Thus, it is difficult to find a perfect match between the locations for comparison.


Fig. 6 (a) Typical defocused en face image of an onion when the sample was placed outside the DOF region. (b) Recovered image from (a). (c) Typical en face image of the onion when the sample was placed within the DOF region.



Fig. 7 Plots of OCT signal strength across the line located at the marked position in Fig. 6. The x-axis indicates the pixel numbers, and the y-axis gives the grey-scale values corresponding to the OCT image.


To further scrutinize the effectiveness of the proposed method in detail, we provide an en face movie (Media 1) that is played along the imaging depth. In this movie, the image on the left is the defocused image acquired when the sample was outside the DOF, the middle image is the one recovered from the left defocused image using the proposed approach, and the image on the right is the physically focused image acquired when the sample was placed within the DOF. Note that the images on the left and right are raw OCT images without any post-processing.

The effectiveness of the proposed method in de-blurring OCT images is further substantiated by the experimental results from real tissue samples, i.e., fat tissues excised from euthanized mice. Typical en face results are displayed in Fig. 8. The entropy values for the images in Figs. 8(a), 8(b), and 8(c) are 4.3137, 3.4148, and 2.6509, respectively. Again, a 3D movie (Media 2) is provided for detailed comparison, where the convention for placing each image in the movie is the same as in the previous one.


Fig. 8 (a) Defocused en face image of the fat tissue when the sample was placed outside the DOF region. (b) Recovered image from (a). (c) Typical en face image of the fat tissue when the sample was placed within the DOF region.


However, we notice that although the recovered image meets the desired expectations, structures such as the onion cells are not as smooth as those captured exactly at the focal plane. This phenomenon is most likely due to light attenuation within the sample, which causes the light energy impinging onto the same tissue plane to differ slightly between the images acquired under defocused and focused conditions. Our next development step is to test the proposed algorithm in in vivo imaging applications, where the obvious challenge is the inevitable subject movement that introduces deleterious motion artifacts into the OCT images.

4. Conclusion

In this paper, we have demonstrated a scalar diffraction model that simulates the wave propagation from out-of-focus scatterers to the focal plane and thereby provides the ability to digitally focus defocused OCT images without the need for prior knowledge of the system PSF. We used information entropy theory as the basis of the IDE criterion to automatically find the diffraction distance between the out-of-focus plane and the focal plane, which is required to digitally propagate the defocused image to the focal plane. We have shown that structural details can be recovered with minimal loss of resolution from blurred images acquired outside the DOF region. This method can be used for the automatic recovery of defocused OCT images when the system parameters or the refractive index of the sample are unknown. Although a spectrometer-based OCT system is considered in this paper, the proposed method is also applicable to swept-source OCT systems, full-field OCT systems, and other confocal optical systems.

Acknowledgments

This work was supported in part by research grants from the National Institutes of Health (NIH) (R01HL093140 and R01HL093140S).

References and links

1. A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography—principles and applications,” Rep. Prog. Phys. 66(2), 239–303 (2003). [CrossRef]  

2. P. H. Tomlins and R. K. Wang, “Theory, developments and applications of optical coherence tomography,” J. Phys. D Appl. Phys. 38(15), 2519–2535 (2005). [CrossRef]  

3. R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three dimensional optical angiography,” Opt. Express 15(7), 4083–4097 (2007). [CrossRef]   [PubMed]  

4. R. K. Wang and Z. Ma, “Real-time flow imaging by removing texture pattern artifacts in spectral-domain optical Doppler tomography,” Opt. Lett. 31(20), 3001–3003 (2006). [CrossRef]   [PubMed]  

5. Z. H. Ding, H. W. Ren, Y. H. Zhao, J. S. Nelson, and Z. P. Chen, “High-resolution optical coherence tomography over a large depth range with an axicon lens,” Opt. Lett. 27(4), 243–245 (2002). [CrossRef]   [PubMed]  

6. T. Xie, S. Guo, Z. Chen, D. Mukai, and M. Brenner, “GRIN lens rod based probe for endoscopic spectral domain optical coherence tomography with fast dynamic focus tracking,” Opt. Express 14(8), 3238–3246 (2006). [CrossRef]   [PubMed]  

7. A. Divetia, T. H. Hsieh, J. Zhang, Z. Chen, M. Bachman, and G. P. Li, “Dynamically focused optical coherence tomography for endoscopic applications,” Appl. Phys. Lett. 86(10), 103902 (2005). [CrossRef]  

8. D. Merino, Ch. Dainty, A. Bradu, and A. G. Podoleanu, “Adaptive optics enhanced simultaneous en-face optical coherence tomography and scanning laser ophthalmoscopy,” Opt. Express 14(8), 3345–3353 (2006). [CrossRef]   [PubMed]  

9. M. J. Cobb, X. Liu, and X. Li, “Continuous focus tracking for real-time optical coherence tomography,” Opt. Lett. 30(13), 1680–1682 (2005). [CrossRef]   [PubMed]  

10. B. Qi, A. P. Himmer, L. M. Gordon, X. D. V. Yang, L. D. Dickensheets, and I. A. Vitkin, “Dynamic focus control in high-speed optical coherence tomography based on a micro-electromechanical mirror,” Opt. Commun. 232(1-6), 123–128 (2004). [CrossRef]  

11. J. Holmes and S. Hattersley, “Image blending and speckle noise reduction in multi-beam OCT,” Proc. SPIE 7168, 71681N (2009). [CrossRef]  

12. Y. Yasuno, J. I. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, “Non-iterative numerical method for laterally superresolving Fourier domain optical coherence tomography,” Opt. Express 14(3), 1006–1020 (2006). [CrossRef]   [PubMed]  

13. M. D. Kulkarni, C. W. Thomas, and J. A. Izatt, “Image enhancement in optical coherence tomography using deconvolution,” Electron. Lett. 33(16), 1365–1367 (1997). [CrossRef]  

14. T. S. Ralston, D. L. Marks, F. Kamalabadi, and S. A. Boppart, “Deconvolution methods for mitigation of transverse blurring in optical coherence tomography,” IEEE Trans. Image Process. 14(9), 1254–1264 (2005). [CrossRef]   [PubMed]  

15. R. K. Wang, “Resolution improved optical coherence-gating tomography for imaging biological tissue,” J. Mod. Opt. 46, 1905–1913 (1999).

16. Y. Liu, Y. Liang, G. Mu, and X. Zhu, “Deconvolution methods for image deblurring in optical coherence tomography,” J. Opt. Soc. Am. A 26(1), 72–77 (2009). [CrossRef]   [PubMed]  

17. J. P. Rolland, P. Meemon, S. Murali, K. P. Thompson, and K. S. Lee, “Gabor-based fusion technique for optical coherence microscopy,” Opt. Express 18(4), 3632–3642 (2010). [CrossRef]   [PubMed]  

18. B. J. Davis, S. C. Schlachter, D. L. Marks, T. S. Ralston, S. A. Boppart, and P. S. Carney, “Nonparaxial vector-field modeling of optical coherence tomography and interferometric synthetic aperture microscopy,” J. Opt. Soc. Am. A 24(9), 2527–2542 (2007). [CrossRef]   [PubMed]  

19. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]  

20. L. Yu, B. Rao, J. Zhang, J. Su, Q. Wang, S. Guo, and Z. Chen, “Improved lateral resolution in optical coherence tomography by digital focusing using two-dimensional numerical diffraction method,” Opt. Express 15(12), 7634–7641 (2007). [CrossRef]   [PubMed]  

21. G. Liu, S. Yousefi, Z. Zhi, and R. K. Wang, “Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach,” Opt. Express 19(19), 18135–18148 (2011). [CrossRef]   [PubMed]  

22. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw Hill, Boston, 1996).

23. L. Yu and M. K. Kim, “Wavelength-scanning digital interference holography for tomographic three-dimensional imaging by use of the angular spectrum method,” Opt. Lett. 30(16), 2092–2094 (2005). [CrossRef]   [PubMed]  

24. L. Yu and M. K. Kim, “Pixel resolution control in numerical reconstruction of digital holography,” Opt. Lett. 31(7), 897–899 (2006). [CrossRef]   [PubMed]  

Supplementary Material (2)

Media 1: AVI (10466 KB)     
Media 2: AVI (6515 KB)     
