
Bulk-phase-error correction for phase-sensitive signal processing of optical coherence tomography


Abstract

We present a numerical phase stabilization method for phase-sensitive signal processing of optical coherence tomography (OCT). This method removes the bulk phase error caused by axial bulk motion of the sample and environmental perturbations during volumetric acquisition. In this method, the partial derivatives of the phase error are computed along both the fast and slow scanning directions, giving the vectorial gradient field of the phase error. The phase error is then estimated from this vectorial gradient field by a newly developed line integration method: the smart integration path method. The performance of this method was evaluated by analyzing the spatial frequency spectra of en face OCT images, which objectively demonstrated the method's significant phase-error-correction ability. The performance was also evaluated by observing computationally refocused en face images of ex vivo tissue samples, and the image quality was found to be improved by the phase-error correction.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is an established three-dimensional (3D) optical imaging modality [1]. Because of its noninvasive nature, high resolution, and high penetration, OCT is used for sub-millimeter-scale clinical tomographic imaging such as in ophthalmology and cardiology [2,3]. The axial resolution of OCT has been improved dramatically [4]. In conjunction with its noninvasiveness and high penetration, this high axial resolution motivated researchers to use OCT in microscopic imaging, which is called OCT microscopy (OCM) [5].

Although OCM can provide high axial and lateral resolutions, the latter is subject to a trade-off with the depth-of-focus (DOF) [6]. This trade-off prevents simultaneous high lateral resolution and high-penetration imaging. Several signal processing methods have been proposed to improve the lateral resolution and thus overcome this trade-off. The first group of these methods is based on a nonlinear deconvolution referred to as the CLEAN algorithm [7]. The method was transferred from radio astronomy to OCT imaging by Schmitt et al. [8] and was applied to a two-dimensional (2D) cross-sectional OCT image to improve both its axial and lateral resolutions. Another group of approaches to overcome the lateral resolution versus DOF trade-off is based on complex numerical manipulation performed in the frequency space. This group of methods includes interferometric synthetic aperture microscopy (ISAM) [9–11], forward-light-propagation-model based computational refocusing [12], and digital adaptive optics (DAO) [13–16].

However, these processing methods are phase-sensitive. Specifically, the CLEAN algorithm uses the carrier phase of the OCT signal amplitude [8], while ISAM, forward-light-propagation-model based computational refocusing, and DAO rely on the 2D and/or 3D Fourier transforms of the complex OCT signal. The OCT phase, however, is not always stable. Because OCT is a scanning modality, environmental fluctuations, such as temperature fluctuations, and sample bulk motion during scanning result in fluctuating phase offsets among the A-scans. This is denoted as the bulk phase error (BPE) in this paper.

In some fortunate cases, the BPE is small, so the signal processing methods mentioned above will work to some degree [12]. However, in other cases, the sample motion and environmental fluctuations cause a non-negligible BPE, which limits the complex signal processing performance significantly [17]. In addition, larger aberration and larger defocus require higher phase stability (see Section 5.3 for details).

One of the most straightforward phase stabilization methods involves increasing the scanning speed. This can be achieved by using a high-speed scanning light source for swept source OCT [18], a high-speed line camera for spectral domain (SD-) OCT [19], or a line-field OCT with a high-speed camera [20]. Because the acquisition time becomes short, the sample bulk motion and the environmental fluctuations that occur during scanning remain small. However, this approach requires a high-performance OCT setup. In addition, excessively high scanning speeds cause instability in the lateral optical scanners, such as the galvanometric mirror; this approach can thus also cause the BPE. Full-field (FF-) OCT offers another solution, as it does not use a galvanometric mirror and measures an en face OCT image at a given depth in a single-shot acquisition. However, FF-OCT is not well suited as a basis for some OCT extensions, such as certain types of polarization-sensitive OCT, which depend heavily on optical fiber implementations [21]. Ahmad et al. demonstrated an OCT scanning head with a cantilever mount [22]. Although this improved the phase stability, it requires adding a mechanical extension to the OCT scanner.

A numerical post-processing approach for phase stabilization was demonstrated by Shemonski et al. [23]. This method computes the phase difference between adjacent A-scans along the slow scan direction, and then estimates and corrects the phase by integrating this difference along the scan direction. However, this method assumes that the BPE along the fast scan is small enough to be removed using a mean filter. Although this method works reasonably well, the above assumption is not strictly accurate, as we will show in the following sections. Therefore, a new numerical phase stabilization method that avoids this assumption might improve the complex signal processing performance for OCT.

In this paper, we present a new phase stabilization method, i.e., a BPE-correction method, that does not rely on the assumption mentioned above. The proposed method first computes the en face 2D phase differentiation of the complex OCT signal, i.e., the 2D vectorial phase gradient field, and this vectorial field is then integrated to estimate the 2D BPE. A simple integration method that integrates the gradient field along the fast or slow scan directions necessarily results in an accumulation of the phase measurement error in one of these directions. To avoid this error accumulation, we demonstrate an integration method with a dynamically generated integration path, which we call the smart integration path (SIP) method.

The SIP method is evaluated quantitatively based on the en face spatial frequency spectra of the phase-corrected complex OCT signals and is also evaluated qualitatively based on the computational refocusing performance. The simple integration method is also evaluated using the same methods and is compared with the SIP method.

2. Bulk-phase-error estimation and correction

The purpose of this study is to establish a BPE correction method. In this section, the principle of the method is described as follows. (1) An appropriate mathematical model of an OCT signal with a BPE is defined (Section 2.1). (2) Using this OCT signal model, the BPE estimation method is then presented. This step is further subdivided into two steps: estimation of the en face vectorial gradient field of the BPE (Section 2.2), and estimation of the BPE from the gradient field (Section 2.3). (3) Finally, the BPE is removed from the OCT signal (Section 2.4).

2.1 Model of the OCT signal with the bulk phase error

The BPE mainly originates from bulk motion occurring during the measurement period. This bulk motion can be classified into lateral and axial motions. We assume here that the former is smaller than the lateral optical resolution of OCT, thus meaning that it can be safely ignored. Although this assumption may not be true for some clinical measurements, e.g., retinal OCT, it is reasonable for most microscopic OCT imaging and anesthetized animal imaging applications.

The axial motion is also assumed to be smaller than the depth resolution of OCT. However, the point spread function (PSF) of OCT is a complex function that is based on both the amplitude and the phase. Because the phase changes more rapidly than the amplitude along the depth direction, a small axial motion may change the phase of the OCT signal. Therefore, we only consider the phase error caused by the bulk motion here.

In addition to the axial motion, air turbulence and temperature drift during the measurement can also cause the BPE. These perturbations alter the refractive indices of both the air and the optical fiber, i.e., they change the optical path length (OPL) difference between the sample arm and the reference arm of the interference system, which ultimately results in the BPE.

By taking the BPE into account, the OCT signal can be modeled as

$$S(x, y, z) = A(x, y, z) \exp i\left[\phi_s(x, y, z) + \phi_b(x, y)\right],$$
where $x$ and $y$ are the lateral positions along the fast and slow scan directions, respectively, and $z$ is the depth position. $A(x,y,z)$ is the OCT signal amplitude, $\phi _s(x, y, z)$ is the phase caused by the sample structure (sample phase), and $\phi _b(x, y)$ is the BPE.

It should be noted that the BPE $\phi _b(x, y)$ is not a function of $z$ because the axial motion occurs simultaneously at all depth positions. In contrast, the sample phase $\phi _s(x,y,z)$ is a function of $x$, $y$, and $z$ because the sample has a 3D structure. We use this difference in terms of the depth dependency to estimate the BPE, as will be described in the following sections.
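
As an illustration of this model, the following minimal NumPy sketch (using the Python environment of our implementation, Section 3.1) generates a synthetic complex volume according to Eq. (1). The array shapes, the Rayleigh-amplitude speckle model, and the sinusoidal drift assumed for $\phi_b$ are illustrative assumptions, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 64, 64, 32  # fast scan, slow scan, and depth sample counts

# Sample phase: uniformly random along depth (fully developed speckle).
phi_s = rng.uniform(-np.pi, np.pi, size=(nx, ny, nz))
A = rng.rayleigh(size=(nx, ny, nz))  # speckle-like amplitude

# BPE: depth independent; modeled here as a slow drift over acquisition
# time, with A-lines acquired along x (fast scan) and then y (slow scan).
t = np.arange(nx * ny).reshape(ny, nx).T  # acquisition-time index, (nx, ny)
phi_b = 0.5 * np.sin(2.0 * np.pi * t / 3000.0)

# Eq. (1): the same phi_b(x, y) is added at every depth of an A-line.
S = A * np.exp(1j * (phi_s + phi_b[:, :, None]))
```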

2.2 Estimation of vectorial gradient field of the bulk phase error

As will be discussed in Section 5.1, it is difficult to estimate the BPE directly from the OCT signal. Therefore, we initially estimate the en face gradient of the BPE, which is a vectorial field with $x$- and $y$-components, and then estimate the BPE from this vectorial gradient field. In this section, we explain the process required to compute the vectorial gradient field by using the mathematical model that was presented in the previous section [Eq. (1)].

The gradient at each en face position is computed from the adjacent A-lines. Two adjacent A-lines oriented along the $x$-direction are written as

$$S(x, y, z) = A(x, y, z) \exp \left[i\phi_s(x, y, z) + i\phi_b(x, y) \right],$$
and
$$S(x + \Delta x, y, z) = A(x + \Delta x, y, z) \exp \left[i\phi_s(x + \Delta x, y, z) + i\phi_b(x + \Delta x, y)\right],$$
where $\Delta x$ represents the separation of the A-lines. The phase difference is then computed by multiplying $S(x+\Delta x,y,z)$ by the complex conjugate of $S(x,y,z)$, as follows
$$S(x + \Delta x, y, z) S^{*}(x, y, z) = \mathcal{A}(x,y,z) \exp \left[ i\Delta \phi_s(x, y, z) + i\Delta_x \phi_b\;(x, y)\right],$$
where $\Delta \phi _s(x, y, z) = \phi _s(x + \Delta x, y, z) - \phi _s(x, y, z)$ is the sample phase difference and $\Delta _x \phi _b\; (x, y) = \phi _b(x + \Delta x, y) - \phi _b(x, y)$ is the $x$-gradient of the BPE. $\mathcal {A}(x,y,z)$ is $A(x + \Delta x, y, z) A(x, y, z)$. Note that $A^{*}(x, y, z) = A(x, y, z)$ because it is a real function. This equation can be rewritten using Euler’s formula as
$$\begin{aligned} &S(x + \Delta x, y, z) S^*(x, y, z) =\\ & \mathcal{A}\left[\cos{\Delta \phi_s} \cos{\Delta_x\phi_b} - \sin{\Delta \phi_s} \sin{\Delta_x\phi_b} + i\left(\sin{\Delta \phi_s} \cos{\Delta_x\phi_b} + \cos{\Delta \phi_s} \sin{\Delta_x\phi_b}\right)\right].\end{aligned}$$

The $x$-gradient of the BPE, denoted by $\Delta _x \phi _b\; (x, y)$, is obtained by averaging $S(x + \Delta x, y, z) S^{*}(x, y, z)$ along the depth direction as follows. The depth averaging of Eq. (5) becomes

$$\begin{aligned} & \left\langle{S(x + \Delta x, y, z) S^*(x, y, z)}\right\rangle_{z} =\\& \left\langle{\mathcal{A}}\right\rangle_{z}\left\langle{\cos{\Delta \phi_s}}\right\rangle_{z}\cos{\Delta_x\phi_b} - \left\langle{\mathcal{A}}\right\rangle_{z}\left\langle{\sin{\Delta \phi_s}}\right\rangle_{z} \sin{\Delta_x\phi_b} \\ &+ i \left\langle{\mathcal{A}}\right\rangle_{z}\left\langle{\sin{\Delta \phi_s}}\right\rangle_{z} \cos{\Delta_x\phi_b} + i \left\langle{\mathcal{A}}\right\rangle_{z}\left\langle{\cos{\Delta \phi_s}}\right\rangle_{z} \sin{\Delta_x\phi_b},\end{aligned}$$
where $\langle\rangle_{z}$ represents the averaging along the depth direction.

Since the A-scan spacing $\Delta x$ is smaller than the lateral optical resolution, we can safely assume that $\phi _s(x, y, z) \simeq \phi _s(x + \Delta x, y, z)$. In addition, the sample phase $\phi _s(x,y,z)$ is distributed randomly along the depth direction. Therefore, $\Delta \phi _s(x, y, z)$ is also distributed randomly and is centered at zero. In addition, the distribution is narrow enough not to be affected by phase wrapping. This suggests that $\left\langle\sin \Delta \phi_{s}\right\rangle_{z}$ approaches zero asymptotically via depth averaging. On the other hand, the BPE is not truly a function of $x$ or $y$, but is in fact a function of time. This means that $\Delta _x \phi _b\; (x,y)$ is not necessarily close to zero.

By substituting $\left\langle\sin \Delta \phi_{s}\right\rangle_{z} \rightarrow 0$ into Eq. (6), we obtain

$$\left\langle S(x+\Delta x, y, z) S^{*}(x, y, z)\right\rangle_{z}=\langle \mathcal{A}\rangle_{z}\left\langle\cos \Delta \phi_{s}\right\rangle_{z} \exp \left[i \Delta_{x} \phi_{b}(x, y)\right].$$
Since $\langle\mathcal{A}\rangle_{z}$ and $\left\langle\cos \Delta \phi_{s}\right\rangle_{z}$ are real functions, the $x$-gradient of the BPE can be computed by taking the phase angle of $\left\langle S(x+\Delta x, y, z) S^{*}(x, y, z)\right\rangle_{z}$ to be
$$\Delta_x \phi_b\;(x,y) = \angle{\langle S(x + \Delta x, y, z) S^{*}(x, y, z)\rangle}_{z}$$
Note that $\left\langle S(x+\Delta x, y, z) S^{*}(x, y, z)\right\rangle_{z}$ is obtained via the OCT measurement. This is therefore the equation that provides the $x$-gradient of the BPE from the measured data.

By applying the same process in the $y$-direction, we can obtain the $y$-gradient field $\Delta _y \phi _b\; (x,y)$ as follows

$$\Delta_y \phi_b\;(x,y) = \angle\langle{S(x, y+\Delta y, z) S^{*}(x, y, z)}\rangle_{z},$$
where $\Delta y$ represents the separation of the A-lines along the $y$-direction. Therefore, the en face vectorial gradient field $\Delta \phi _b(x,y) = \left ( \Delta _x \phi _b\; (x,y), \Delta _y \phi _b\; (x,y)\right )$ is determined.
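
A minimal sketch of this gradient computation is shown below, assuming a complex OCT volume stored as a NumPy array of shape (nx, ny, nz) with axes ordered as fast scan, slow scan, and depth; the function name and the zero-padding of the last row and column are our illustrative choices.

```python
import numpy as np

def bpe_gradient(S):
    """En face vectorial gradient of the BPE [Eqs. (8) and (9)].

    S: complex OCT volume, shape (nx, ny, nz).
    Returns d_x, d_y where d_x[i, j] ~ phi_b[i+1, j] - phi_b[i, j] and
    d_y[i, j] ~ phi_b[i, j+1] - phi_b[i, j]; the last entries stay zero.
    """
    d_x = np.zeros(S.shape[:2])
    d_y = np.zeros(S.shape[:2])
    # Adjacent-A-line product averaged over depth; its angle is the gradient.
    d_x[:-1, :] = np.angle((S[1:, :, :] * np.conj(S[:-1, :, :])).mean(axis=2))
    d_y[:, :-1] = np.angle((S[:, 1:, :] * np.conj(S[:, :-1, :])).mean(axis=2))
    return d_x, d_y
```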

2.3 Smart-integration-path (SIP) method for estimation of the bulk phase error from the vectorial gradient field

2.3.1 Problem description

The next step in BPE correction is to estimate the BPE itself from the vectorial gradient field obtained in Eqs. (8) and (9). This process can be achieved by applying line integrations to the gradient field. Here, we first clarify the difficulty of this line integration.

The simplest line integration approach is to perform line integrations on each horizontal ($x$) line along the $x$-direction and to perform a line integration along the $y$-direction on only one of the vertical ($y$) lines, e.g., the leftmost line in the en face field. The latter step is performed to achieve consistency among all the horizontal lines. This method is equivalent to performing a line integration with the paths shown in the example in Fig. 1. Here, the $x$-gradient values computed in the previous section are assigned to the boundaries of the horizontally adjacent pixels and the $y$-gradient values are assigned to the vertical boundaries. The integration begins from the bottom-left corner, proceeds upward, and then turns to the right as indicated by the blue arrows.

Fig. 1. Schematic of the simplest integration path to compute the BPE from its vectorial gradient field. The $x$-gradient values of the BPE are assigned to the boundaries between the horizontally adjacent pixels, while the $y$-gradients are assigned to the vertical boundaries. The blue arrows indicate the integration paths and the red line indicates the path distance between pixels-A and B.

Although this method is correct in principle, there are two problems with it. First, although the integration path distances between the horizontally adjacent pixels are very short, the corresponding distance for vertically adjacent pixels can be long, particularly at the right side of Fig. 1. For example, pixels-A and B are only related by the long path indicated by the red dashed line. OCT measurements suffer from noise, and this results in an estimation error for the phase gradient. This error then accumulates along the integration path. As a result, the mutual consistency of the estimated BPEs between pixels related by these long integration paths is low, although they are neighboring pixels.

Second, most of the estimated $y$-gradient values are not used in this line integration. The information throughput of this method is thus not optimal.

In the subsequent sections, we propose a new line integration method to overcome the two problems above. Later in this paper, the method described in this section, referred to as the "simple integration method," is used as a reference standard to evaluate the performance of the new method.
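
For reference, the simple integration method can be sketched as follows, using the gradient-array conventions of the sketch in Section 2.2; the final phase wrapping is our implementation choice.

```python
import numpy as np

def simple_integration(d_x, d_y):
    """Simple integration method of Section 2.3.1 (reference standard).

    One vertical integration along the leftmost column ties the rows
    together; each row is then integrated along the fast scan direction.
    """
    phi = np.zeros_like(d_x)
    phi[0, 1:] = np.cumsum(d_y[0, :-1])                      # leftmost y-line
    phi[1:, :] = phi[0, :] + np.cumsum(d_x[:-1, :], axis=0)  # each x-line
    return np.angle(np.exp(1j * phi))  # wrap back into (-pi, pi]
```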

2.3.2 Estimating the bulk phase error from surrounding pixels

In this and the subsequent sections, we present a new line integration method denoted as the SIP method. This method consists of three steps: (1) estimating the BPE of a pixel, denoted as the target pixel, from that of a neighboring pixel, defined as the source pixel; (2) improving the estimation accuracy through the use of multiple source pixels; and (3) estimating the BPE of the whole en face field. Steps-1 and 2 are described in this section and Step-3 is described in the next section (Section 2.3.3).

Figure 2 depicts Step-1, where the two fundamental patterns of the integration paths are shown. The center (orange) pixel is the target pixel and the blue pixel represents a neighboring source pixel. In principle, an enormous number of possible paths from the source to the target pixel can be considered. However, by restricting the path length to three pixel boundaries or fewer, the numbers of paths become two and three for Figs. 2(a) and (b), respectively, as indicated by the arrows.

Fig. 2. Two fundamental integration-path patterns that estimate the BPE of the target pixel (orange pixel) from the BPE of the source pixel (blue).

For each integration path, the BPE of the target pixel is then estimated using the line integration. Since each basic pattern contains multiple integration paths, multiple estimates are obtained. These phase estimates are averaged in complex form to obtain the final estimate.

This step (Step-1) is summarized by the following equations. For Fig. 2(a), the BPE of the target pixel is estimated as

$$\begin{aligned} \phi_b(x_0, y_0) &\equiv \angle \left[\exp i\left\{\phi_b^{(s)} + \Delta_y\phi_b(x_0-\Delta x, y_0-\Delta y) + \Delta_x\phi_b(x_0-\Delta x, y_0)\right\} \right. \\ & \quad \left. + \exp i \left\{\phi_b^{(s)} + \Delta_x\phi_b(x_0-\Delta x, y_0-\Delta y) + \Delta_y\phi_b(x_0, y_0-\Delta y)\right\} \right],\end{aligned}$$
where ($x_0, y_0$) is the target pixel position and $\phi _b^{(s)}$ is the previously estimated BPE of the source pixel. The first and second terms correspond to paths-(i) and (ii) of Fig. 2(a), respectively. Similarly, for Fig. 2(b),
$$\begin{aligned} \phi_b(x_0, y_0) & \equiv \angle \left[\exp i\left\{ \phi_b^{(s)} + \Delta_x\phi_b(x_0 - \Delta x, y_0) \right\}\right. \\ & \quad + \exp i\left\{\phi_b^{(s)} + \Delta_y\phi_b(x_0 - \Delta x, y_0) + \Delta_x\phi_b(x_0-\Delta x, y_0+\Delta y) - \Delta_y\phi_b(x_0, y_0+\Delta y)\right\}\\ & \quad \left. + \exp i\left\{\phi_b^{(s)} - \Delta_y\phi_b(x_0 - \Delta x, y_0-\Delta y) + \Delta_x\phi_b(x_0-\Delta x, y_0-\Delta y) + \Delta_y\phi_b(x_0, y_0-\Delta y)\right\}\right],\end{aligned}$$
where the first to third terms correspond to paths-(ii), (i) and (iii), respectively.
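
For concreteness, a sketch of Eq. (10) [the pattern of Fig. 2(a), with the source pixel diagonally below-left of the target] is given below in pixel units; the gradient-array indexing convention follows the sketches above and is our assumption.

```python
import numpy as np

def estimate_from_diagonal_source(phi_src, d_x, d_y, x0, y0):
    """Eq. (10): source at (x0-1, y0-1), target at (x0, y0), pixel units."""
    path_i = phi_src + d_y[x0 - 1, y0 - 1] + d_x[x0 - 1, y0]   # up, then right
    path_ii = phi_src + d_x[x0 - 1, y0 - 1] + d_y[x0, y0 - 1]  # right, then up
    # The two path estimates are combined by complex averaging.
    return np.angle(np.exp(1j * path_i) + np.exp(1j * path_ii))
```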

It is also noteworthy that although Fig. 2 shows only two patterns, they are exhaustive for all combinations of the source and target pixels. Namely, by placing the target pixel at the center of the 3 $\times$ 3 pixel field, any pattern with a single neighboring source pixel can be converted into one of the two presented patterns by rotation. Note that the positive/negative signs of each gradient in Eqs. (10) and (11) should be selected appropriately for these rotated patterns according to the direction of the path.

Step-2 is the estimation of the BPE of the target pixel from multiple neighboring source pixels. In this procedure, multiple estimates are first computed from each of the source pixels by the method of Step-1 [exemplified in (i) of Fig. 3]. Then, as depicted in (ii) of Fig. 3, the final estimate of the target pixel, $\phi _b^{(t)}$, is computed by complex averaging of these multiple estimates as

$$\phi_b^{(t)}(x_0, y_0) \equiv \angle \sum_i \exp i \phi_b^{(i)}(x_0, y_0),$$
where $\phi _b^{(i)}$ is the estimate of $\phi _b$ from the $i$-th source pixel.
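
The complex averaging of Eq. (12) can be written compactly as below; the toy input values are only meant to show why unit phasors, rather than raw phase values, are averaged.

```python
import numpy as np

def complex_average(estimates):
    """Complex averaging of multiple BPE estimates [Eq. (12)]."""
    return np.angle(np.sum(np.exp(1j * np.asarray(estimates))))

# Estimates straddling the +/-pi boundary: the complex average gives ~pi,
# whereas a naive arithmetic mean would give the meaningless value 0.
print(complex_average([3.1, -3.1]))
```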

Fig. 3. Example of BPE estimation from multiple neighboring source pixels (blue). Multiple BPE estimations of the target pixel (orange) are obtained independently from each of the source pixels (i), and the estimated BPEs are averaged in complex form (ii).

2.3.3 Sequential estimation of the entire en face bulk phase error

The BPE of the complete en face field is finally estimated by performing Steps-1 and 2 sequentially. In this sequential process, we first select the initial source pixel, which is typically the center pixel of the en face field. Although the BPE of the initial source pixel is unknown, we can safely define it as zero. Because this initial phase only affects the constant offset of the BPE estimate, this arbitrary selection does not harm the generality. After the initial source pixel is selected, the estimation is performed sequentially as depicted in Fig. 4. The first estimation is performed for the target pixel (orange) immediately above the initial source pixel (blue) [Fig. 4(a)], where the estimation is performed in the 3 $\times$ 3-pixel region centered at the target pixel (red box). The second target pixel is located to the right of the first one and is again estimated in the 3 $\times$ 3-pixel region (red box) [Fig. 4(b)]. The estimation procedure is performed sequentially along the spiral trajectory depicted in the figure by the dashed arrow. Note that as the sequential estimation process progresses, the number of available source pixels increases. For example, four source pixels are available for the estimation in Fig. 4(c), where the estimation is performed in the red box.

Fig. 4. Schematic of sequential estimation of the BPE. In the first step (a), the BPE of the center pixel in the entire en face field (blue pixel) is set to be zero. The BPE of the pixel above the center pixel (orange pixel) is estimated by applying the integral pattern from Fig. 2(b) to the red-boxed region. In the second step (b), the BPE of the target pixel (orange pixel) is estimated from two source pixels (blue pixels) by the integral patterns from Figs. 2(a) and (b). This estimation is sequentially performed by following the spiral route [dashed spiral arrows in (a)-(c)]. (c) is another example, where four source pixels (blue pixels) are used to estimate the BPE of the target pixel (orange pixel). The light blue pixels are pixels whose BPEs have already been estimated.

In this paper, we call the estimation process described in these sections the SIP method. The SIP method is free from the problems associated with the simple integration method described in Section 2.3.1. Namely, each pixel is directly connected to all of its neighboring pixels as a target or source pixel. In addition, all of the gradient values are fully used in the estimation.
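
A possible generator for the spiral scan order of Section 2.3.3 is sketched below; the start direction (upward, then to the right) follows Fig. 4, while the boundary handling and termination condition are our assumptions.

```python
def spiral_indices(nx, ny):
    """Yield (x, y) pixel indices along an outward spiral from the center."""
    x, y = nx // 2, ny // 2
    yield x, y
    step, dx, dy = 1, 0, 1  # first move upward (+y), as in Fig. 4(a)
    while step < 2 * max(nx, ny):
        for _ in range(2):  # two legs per step length, then grow the step
            for _ in range(step):
                x, y = x + dx, y + dy
                if 0 <= x < nx and 0 <= y < ny:  # skip out-of-field pixels
                    yield x, y
            dx, dy = dy, -dx  # rotate the direction by 90 degrees
        step += 1
```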

2.4 Bulk phase error removal from OCT volume

After the BPE $\phi _b(x,y)$ is estimated, it is removed from the original volumetric OCT data by complex conjugate multiplication as

$$S'(x,y,z) \equiv S(x,y,z) \exp\left[-i\phi_b^{(t)}(x,y)\right],$$
where $S'(x,y,z)$ is the BPE-corrected OCT signal.
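
In NumPy, with a volume S of shape (nx, ny, nz) and an estimated BPE map phi_b_est of shape (nx, ny) (names assumed for illustration, following the sketches above), Eq. (13) is a one-line broadcast:

```python
# Eq. (13): remove the estimated BPE from every depth pixel of each A-line.
S_corrected = S * np.exp(-1j * phi_b_est[:, :, None])
```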

3. Performance evaluation method

3.1 System setup and samples

To validate the SIP method, we used an SD-OCT system based on a fiber Michelson interferometer. The center wavelength of the probe beam was 820 nm and the bandwidth was 70 nm (M-D840-HP-i, Superlum, Ireland). The probe arm consisted of a fiber tip collimator (F280APC-850, Thorlabs Inc.), which collimated the beam to a 4.0 mm diameter, a 2D galvanometric scanner (GVS102, Thorlabs), and an OCT objective with an effective focal length of 18 mm (LSM02-BB, Thorlabs). These probe-arm specifications gave an in-focus lateral resolution of 4.8 µm and a depth-of-focus of 110 µm. The axial resolution was measured to be 4.2 µm in air. The spectral interference signal was measured using a spectrometer that was specifically designed for SD-OCT (a prototype device, Horiba, Kyoto, Japan). The spectrometer was equipped with a line CMOS camera (spL4096-140km, Basler AG, Germany) that digitized the interference signal through a Camera Link frame grabber (PCIe-1433, National Instruments, TX) at a line rate of 50,000 lines/s. Although the camera has 4096 pixels, only the central 2048 pixels were used. The axial pixel separation in an OCT image was 3.2 µm in air. Note that zero-padding was used to measure the axial resolution but not for OCT image generation. The signal sensitivity was 90 dB.

The spectral interference signal was rescaled to the $k$-linear domain and numerical dispersion compensation was applied. The average spectrum was then subtracted from each spectrum to remove fixed pattern noise. Finally, the complex OCT signal was obtained using a fast Fourier transform (FFT).
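
A minimal sketch of this reconstruction pipeline is given below; the wavelength-per-pixel input and the omission of the numerical dispersion compensation step (a pixel-wise phase multiplication in $k$) are simplifying assumptions.

```python
import numpy as np

def reconstruct_ascans(spectra, lam):
    """Rescale to k-linear, remove fixed-pattern noise, and FFT.

    spectra: real fringes, shape (n_alines, n_pix); lam: wavelength of
    each spectrometer pixel [m]. Dispersion compensation is omitted.
    """
    k = 2.0 * np.pi / lam                  # wavenumber of each pixel
    order = np.argsort(k)                  # np.interp needs ascending x
    k_lin = np.linspace(k.min(), k.max(), k.size)
    rescaled = np.array([np.interp(k_lin, k[order], s[order]) for s in spectra])
    rescaled -= rescaled.mean(axis=0)      # average-spectrum subtraction
    return np.fft.fft(rescaled, axis=1)    # complex A-scans vs. depth
```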

Volumetric acquisition was performed over a 1 mm $\times$ 1 mm lateral region with 512 $\times$ 512 A-lines. These scanning parameters resulted in a lateral pixel spacing of 1.9 µm, which is 0.39 times the lateral resolution. The OCT volume was acquired in 6.6 s with an 80% acquisition duty cycle.

The OCT system was controlled by custom-made software written in LabVIEW 2018 (National Instruments, TX). The BPE correction methods were implemented in Python 3.7 with the NumPy library (ver. 1.16.5).

For the validation study, two chicken breast muscle tissues and two porcine heart tissues were used as samples. These samples were dissected to a sample size of approximately 2.5 mm $\times$ 2.5 mm. The samples were set so that the tissue surface (and not the cleavage surface) faced the objective. Physiological saline was applied to the surface to prevent the sample from drying out, but no refractive index matching gel was applied.

3.2 Objective evaluation by en face spatial frequency spectrum

The performances of the BPE-correction methods were evaluated objectively by en face spatial frequency spectrum analysis, which is described in this section, and also by observation of computationally refocused images, as will be described in Section 3.3.

The en face spatial frequency spectrum is the 2D Fourier transform of an en face slice of a complex OCT volume. If there is no BPE, the width of the frequency spectrum is defined by the lateral optical resolution and the spectrum is centered at the zero frequency. However, if a BPE that is random or nonlinear with respect to the lateral space exists, the spatial frequency spectrum becomes broad. If the BPE is linear with respect to space, the spatial frequency spectrum may be off-center. Accordingly, if the BPE correction works correctly, the spatial frequency spectrum will become narrow and will be centered at the zero frequency.

Using these properties of the spatial frequency spectrum as a basis, the BPE correction methods are evaluated as follows. First, we extract an en face slice from a complex OCT volume. Then, the BPE correction based on either the simple integration method or the SIP method is applied, and the en face spatial frequency spectrum is computed using a 2D-FFT. This computation was performed for all depth slices from 10 to 70 pixels in depth below a reference slice. The reference slice was selected as the shallowest slice that does not contain the sample surface. The thickness of this depth region was 192 µm in air.

For the quantitative analysis, we computed the first and second moments, i.e., the mean and the variance, for the fast ($x$) and slow ($y$) scanning directions from the absolute frequency spectrum. These moments were computed at each depth. The depth average of the mean was then computed to evaluate the off-centering behavior of the spectrum, while the standard deviation (STD) was computed as the square root of the depth-averaged variance and was then used to evaluate the broadening of the spectrum.
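
A sketch of the per-slice moment computation is shown below; it returns frequencies in cycles per pixel (dividing by the 1.9 µm pixel pitch would convert them to mm$^{-1}$), and the marginalization of the 2D spectrum is our implementation choice.

```python
import numpy as np

def spectrum_moments(en_face):
    """Mean and variance of |2D spectrum| along f_x and f_y (cycles/pixel)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(en_face)))
    fx = np.fft.fftshift(np.fft.fftfreq(en_face.shape[0]))
    fy = np.fft.fftshift(np.fft.fftfreq(en_face.shape[1]))
    w = F / F.sum()                        # normalized spectral weight
    wx, wy = w.sum(axis=1), w.sum(axis=0)  # marginals along f_x and f_y
    mean_x, mean_y = np.sum(wx * fx), np.sum(wy * fy)
    var_x = np.sum(wx * (fx - mean_x) ** 2)
    var_y = np.sum(wy * (fy - mean_y) ** 2)
    return (mean_x, mean_y), (var_x, var_y)
```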

3.3 Subjective evaluation based on computational refocusing performance

To assess the impact of the BPE correction on phase-sensitive OCT processing, we performed computational refocusing of the en face OCT images. The BPEs of the complex OCT volumes were corrected using the SIP method, and a computational refocusing operation based on the forward light propagation model [12,24,25] was applied. For this refocusing operation, the defocus amounts were first selected to minimize the information entropy of the en face images at each depth [26]. These defocus values were then fitted with a third-degree polynomial of depth using a nonlinear least-squares fitting algorithm. The final refocused images were computed by correcting the fitted defocus. For comparison, computationally refocused images were also computed without BPE correction. The image sharpness was then subjectively evaluated for each of the refocused images.

The details of the computational refocusing process are summarized in Appendix A.
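
A sketch of the defocus selection is given below; the entropy definition, the grid search, and the use of an ordinary polynomial fit (the actual implementation uses a nonlinear least-squares algorithm) are simplifying assumptions, and refocus() refers to the filter sketched in Appendix A.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized intensity; sharper is lower [26]."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def best_defocus(en_face, z0_candidates, refocus):
    """Grid search for the defocus that minimizes the image entropy."""
    entropies = [image_entropy(refocus(en_face, z0)) for z0 in z0_candidates]
    return z0_candidates[int(np.argmin(entropies))]

# The per-depth estimates are then smoothed with a cubic polynomial, e.g.:
#   coeffs = np.polyfit(depths, defocus_estimates, deg=3)
```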

4. Results

4.1 En face spatial frequency spectrum analysis

Figure 5(a) shows an example of an en face OCT image of a chicken breast muscle, while Figs. 5(b)-(d) show the spatial frequency spectra of this image: Fig. 5(b) without BPE correction, Fig. 5(c) with BPE correction by the simple integration method, and Fig. 5(d) with BPE correction by the SIP method.

The non-phase-error-corrected spectrum [Fig. 5(b)] is broadened in the slow scanning frequency direction ($f_y$) and is off-centered along the fast scanning frequency ($f_x$) direction. The spectral broadening in the slow scanning direction would be caused by temporal phase fluctuations and drift during volume acquisition, which is likely to have a greater impact in the slow scanning direction than in the fast scanning direction. The observed off-centering would be caused by temporal linear phase drift and also by the alignment error of the pivotal point of the galvanometric scanner with respect to the back focal plane of the objective (see Section 5.2 for details). Note that a dark vertical line can be seen at the zero-frequency in the fast scanning direction ($f_x$). This line is caused by the average spectrum subtraction for fixed pattern noise removal.

Fig. 5. Example of en face OCT (a), its spatial frequency spectra with no phase correction (b), with the phase corrected by the simple integration path (c), and with the phase corrected using the SIP method (d).

The off-centering can be corrected using the simple integration path method, as shown in Fig. 5(c). However, the spectrum is broader in $f_y$ than the non-phase-error-corrected spectrum. This is because the simple integration method enhances phase inconsistencies among the pixels that are adjacent along the slow scanning direction, as discussed in Section 2.3.1.

The spectrum obtained via the SIP method [Fig. 5(d)] shows the good performance of the correction method. The spectrum is centered in both directions and no spectral broadening is observed.

The properties of the spectra can be evaluated more quantitatively using their moments, as shown in Fig. 6, where each point represents a sample. The orange and blue dots represent the chicken breast muscle samples, while the yellow and gray dots represent the porcine heart tissues. The means of $f_x$ were corrected to be close to the zero frequency by both the simple integration method and the SIP method, as shown in Fig. 6(a). For $f_y$, both the simple integration method and the SIP method showed improvements, as the means were closer to zero than those of the non-phase-corrected data, as shown in Fig. 6(b). Although the SIP method showed a slightly inferior performance compared with that of the simple integration method, the differences are only around a few mm$^{-1}$, which corresponds to only a few sampling points in the numerically obtained frequency spectra, so these differences are not significant.

Fig. 6. Averaged moment values along the depth direction of each sample. (a) and (b) show the mean values of the spatial frequency spectrum along the fast and slow scan directions, respectively. (c) and (d) show the STDs along the fast and slow scan directions, respectively. The orange and blue dots represent the chicken breast muscle, while the yellow and gray dots represent the porcine heart tissues. The black bars represent the averaged value of each moment.

For the STD of $f_x$ [Fig. 6(c)], the simple integration method again shows a slightly better performance than the SIP method because it gives smaller STDs. However, the simple method significantly increased (i.e., worsened) the STD in $f_y$ (the frequency along the slow scanning direction), while the SIP method improved the STD, as shown in Fig. 6(d). Therefore, the overall performance of the SIP method surpasses that of the simple method. It is noteworthy that the SIP method slightly worsened the STD in $f_x$ [Fig. 6(c)], although it improved the STD in $f_y$ [Fig. 6(d)]. This small worsening is caused by the fact that the SIP method corrects the BPE along the $x$ and $y$ directions simultaneously. The STDs of the original $f_x$, SIP's $f_x$, and SIP's $f_y$ all show similar values, which indicates that the BPE along $x$ was small even before the correction. The simultaneous nature of the SIP method corrected the BPE along $y$ while avoiding overcorrection, but some residual errors were redistributed into a residual BPE along $x$.

In summary, both the simple integration and SIP methods can remove the off-centering of the spectra. However, the simple integration method broadened (i.e., worsened) the spatial frequency spectra along the slow scanning direction, while the SIP method narrowed (i.e., improved) these spectra. The SIP method would thus be a more preferable option than the simple integration method.

4.2 Computational refocusing

Computational refocusing was performed with and without BPE correction, as shown in Fig. 7, where the sample was a chicken breast muscle. Figure 7(a) shows the original non-refocused image, while Fig. 7(b) shows a refocused image without BPE correction, and Fig. 7(c) shows a computationally refocused image with BPE correction performed by the SIP method. The corrected defocus here was 480 µm. Figures 7(d) and 7(e) show magnified images of the boxed areas shown in Figs. 7(b) and 7(c), respectively. Both the computationally refocused images in Figs. 7(b) and 7(c) show significantly better resolution than the original non-refocused image. The magnified images show that the resolution of the phase-error-corrected refocused image in Fig. 7(e) is higher than that of the non-phase-error-corrected refocused image in Fig. 7(d); for example, the dark line structures appear more clearly in Fig. 7(e) than in Fig. 7(d) (indicated by the arrows).

Figure 8 shows another example of computational refocusing of a porcine heart tissue sample. At depths of 160 µm and 320 µm, the en face images were extracted at around the in-focus depth, so the non-refocused images show good resolution [Figs. 8(d) and 8(g)], and the computationally refocused images both without [Figs. 8(e) and 8(h)] and with [Figs. 8(f) and 8(i)] phase error correction show similarly good resolution. However, the images without BPE correction [Figs. 8(e) and 8(h)] exhibit vertical line artifacts that may be caused by an interaction between the BPE and the frequency filter used for computational refocusing [Eq. (14)]. This artifact cannot be seen in the refocused images with BPE correction. At the depth of 480 µm, the refocused images [Figs. 8(b) and 8(c)] both show fine fibrous structures that cannot be seen in the non-refocused image in Fig. 8(a).

5. Discussion

5.1 Why the BPE was computed via its gradient

In general, the sample phase $\phi _s(x,y,z)$ in Eq. (1) is distributed randomly along the depth direction. In other words, the sample phase follows a uniform distribution. This fact may suggest the erroneous idea that the BPE, represented by $\phi _b(x,y)$, can be computed by averaging the phase of Eq. (1) along the depth direction as $\left\langle\phi_{s}(x, y, z)+\phi_{b}(x, y)\right\rangle_{z}=\left\langle\phi_{s}(x, y, z)\right\rangle_{z}+\phi_{b}(x, y) \rightarrow \phi_{b}(x, y)$. However, this phase averaging procedure cannot give an accurate estimate of $\phi _b$ for the following reasons.

Fig. 7. Examples of computational refocusing of en face chicken breast muscle OCT. (a) Original, non-computationally refocused OCT, (b) refocused image without phase error correction, (c) refocused image with SIP-method based phase error correction, and (d) and (e) show magnified images of (b) and (c), respectively.

Fig. 8. En face images of porcine heart tissue at three depth positions. The left column shows the unprocessed images, the center column shows the refocused images without BPE correction and the right column shows the refocused images with BPE correction. The lateral field of the images is 1 mm $\times$ 1 mm. The boxed images at the left bottom of each image are the magnified images of the boxed regions at the right.

Because $\phi _s$ is distributed randomly along the depth direction, $\phi _s + \phi _b$ is also distributed randomly. Note that the phase is a cyclic quantity and that its numerical representation ranges over $2\pi$. Therefore, the randomly distributed phase, i.e., $\phi _s + \phi _b$, is distributed uniformly over the $2\pi$-range. Additionally, the depth average of $\phi _s + \phi _b$ converges toward the center of the range. Since the phase representation range can be selected arbitrarily, the depth average does not give an estimation of $\phi _b$ but, in fact, gives the center value of the arbitrarily selected phase representation range.
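
This behavior is easy to demonstrate numerically; the following toy example, with an arbitrarily assumed true BPE of 2 rad and the representation range $(-\pi, \pi]$, shows that the depth average lands at the center of the range rather than at the BPE.

```python
import numpy as np

rng = np.random.default_rng(1)
phi_b = 2.0                                   # assumed true BPE [rad]
phi_s = rng.uniform(-np.pi, np.pi, 100000)    # random sample phase vs. depth
phi = np.angle(np.exp(1j * (phi_s + phi_b)))  # numerical phase in (-pi, pi]

# The depth average converges to ~0, the center of the representation
# range, regardless of phi_b; direct averaging cannot recover the BPE.
print(np.mean(phi))  # ~0.0, not 2.0
```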

5.2 Origins of BPE

Although the main sources of the BPE are the fluctuations of both the sample and the environment, static system properties can also cause the BPE. Our OCT signal model [Eq. (1)] indicates that any phase component that is independent of the depth can be treated as the BPE. This includes, for example, the lateral-position-dependent phase offset introduced by the scanning optics [27]. The BPE correction methods also correct such static phase offsets.

5.3 BPE correction is more important for larger defocus correction

The impact of BPE correction on computational refocusing varies with the amount of defocus. Figure 9 shows a comparison of computational refocusing without (the second column) and with (the third column) BPE correction at several depth positions, where the BPE correction was performed using the SIP method. The first column shows the depth slices without computational refocusing.

Around the focal plane (+0 µm), all three images show similar appearances. When defocus exists, the refocused images show better resolution than the non-refocused image. More specifically, if the defocus is moderate, e.g., +160 µm and +320 µm, then the refocused images without and with BPE correction show similar image qualities. In contrast, for large defocus, e.g., +480 µm and +640 µm, the refocused image with BPE correction reveals a finer structure than the image without correction, as shown in the magnified insets.

Fig. 9. En face images of the chicken breast muscle at five depth positions. The first to third columns show the unprocessed images, the refocused images without BPE correction, and the refocused images with BPE correction, respectively. The fourth and fifth columns show magnified images of the boxed regions in the second and third columns, respectively.

This larger difference between the images with and without BPE correction can be accounted for by the properties of the spatial frequency filter used to perform the computational refocusing [Eq. (14) in Appendix A]. As shown in the equation, this phase filter is a quadratic phase function, and thus has a finer structure at higher spatial frequencies ($f_r$). Since the spatial frequency domain can be interpreted as the pupil plane, it can be said that the phase filter has a finer structure at the periphery of the aperture. This fine structure becomes even finer as the defocus ($z_0$) increases. The BPE causes noise in the spatial frequency spectrum, and the finer the structure of the phase filter, the more easily it is affected by this noise. Therefore, BPE correction is more important for cases with larger defocus. This point can also be understood as follows: as the defocus increases, the size of the PSF also increases. The phase of the OCT signal must be consistent over the PSF, so a larger defocus requires higher phase consistency, i.e., a smaller BPE.

5.4 Computation time

In our implementation, the computation time of the SIP method was 123 s for a volume with 512 $\times$ 512 A-lines, while that of the simple integration method was 110 s. As mentioned in Section 3.1, the algorithms were implemented in Python 3.7 and run on a computer with an Intel Core i7-8750H CPU (6 cores) and 16 GB of memory. The two methods have similar computation times, which are not prohibitively long for real applications. However, it should be noted that neither implementation is well optimized, and both use only a single core of the CPU, so the computation times could potentially be reduced. For the same reason, the relative speed of the two methods may change after future optimization.

5.5 Further development

In the proposed method, sequential estimation of the BPE was performed along a spiral trajectory, as shown in Section 2.3.3. However, this method cannot work if there is a no-signal region, e.g., a region where vignetting occurs, in the path. This problem could be solved by using a more sophisticated sequential estimation trajectory that diverts around the no-signal region. The design of such a path is a task for future development.

6. Conclusion

We have established a new BPE correction method, called the SIP method, for phase-sensitive signal processing of OCT. Its performance, superior to that of a simple correction method (the simple integration method), was demonstrated by spatial frequency analysis. The computational refocusing performance was also improved by the BPE correction method. The SIP method would also enhance the performance of other types of phase-sensitive OCT processing techniques, including DAO, ISAM, and computational directional imaging methods [28–30].

Appendix

A. 2D forward-light-propagation-model based computational refocusing

The computational refocusing method that we used is based on a forward-light-propagation model and is implemented as a phase filter in the spatial frequency domain. To refocus the OCT data, the complex spatial frequency spectrum of an en face complex OCT slice is computed by a 2D discrete Fourier transform (DFT). The spatial frequency spectrum is then multiplied by the following phase filter.

$$H^{-1}\left(f_x, f_y \right) = \exp{\left\lbrace -i\pi \frac{\lambda_c z_0}{2} \left(f_{x}^{2} + f_{y}^{2}\right)\right\rbrace } = \exp{\left\lbrace -i\pi \frac{\lambda_c z_0}{2} f_{r}^{2} \right\rbrace },$$
where $f_x$ and $f_y$ are the two lateral spatial frequencies, and $f_r$ is $\sqrt {f_{x}^{2} + f_{y}^{2}}$. $\lambda _c$ is the center wavelength of the probe beam and $z_0$ is the amount of defocus. The refocused OCT signal is then obtained by performing an inverse DFT of the filtered spectrum.
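
A minimal sketch of this filtering operation is given below; the function and parameter names are ours, and the per-axis lateral pixel pitches are passed in explicitly.

```python
import numpy as np

def refocus(en_face, z0, lambda_c, dx, dy):
    """Apply the inverse defocus filter of Eq. (14) to one en face slice.

    en_face: complex slice (nx, ny); z0: defocus [m]; lambda_c: center
    wavelength [m]; dx, dy: lateral pixel pitches [m].
    """
    nx, ny = en_face.shape
    fx = np.fft.fftfreq(nx, d=dx)[:, None]  # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=dy)[None, :]
    H_inv = np.exp(-1j * np.pi * (lambda_c * z0 / 2.0) * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(en_face) * H_inv)
```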

Funding

Japan Society for the Promotion of Science (15K13371, 18H01893, 18J13841); Japan Science and Technology Agency (JPMJMI18G8).

Acknowledgment

The authors are grateful for the loan of the prototype spectrometer used in the study, which was borrowed free of charge from Horiba.

The present study relates to a joint research project between Yokogawa Electric Corp. and the University of Tsukuba. Fruitful technical discussions with Hiroyuki Sangu (Yokogawa), Atsushi Kubota and Renzo Ikeda (Skytechnology), Akihiro Shito and Yuichi Inoue (OptoSigma), Masato Takaya (Tatsuta), and Naoki Fukutake (Nikon) are highly appreciated. The authors also acknowledge a fruitful suggestion from one of the reviewers, which became the basis of the discussion in the last paragraph of Section 5.3.

Disclosures

KO, DO: Yokogawa Electric Corp. (F), Nikon (F), Kao Corp. (F), Topcon (F), Tomey Corp (F). SM, YY: Yokogawa Electric Corp. (F), Nikon (F), Kao Corp. (F), Topcon (F), Tomey Corp (F, P).

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254(5035), 1178–1181 (1991). [CrossRef]

2. L. M. Sakata, J. DeLeon-Ortega, V. Sakata, and C. A. Girkin, “Optical coherence tomography of the retina and optic nerve – a review,” Clin. Exp. Ophthalmol. 37(1), 90–99 (2009). [CrossRef]  

3. G. J. Tearney, S. Waxman, M. Shishkov, B. J. Vakoc, M. J. Suter, M. I. Freilich, A. E. Desjardins, W.-Y. Oh, L. A. Bartlett, M. Rosenberg, and B. E. Bouma, “Three-Dimensional Coronary Artery Microscopy by Intracoronary Optical Frequency Domain Imaging,” JACC Cardiovasc. Imaging 1(6), 752–761 (2008). [CrossRef]  

4. B. Povazay, K. Bizheva, A. Unterhuber, B. Hermann, H. Sattmann, A. F. Fercher, W. Drexler, A. Apolonski, W. J. Wadsworth, J. C. Knight, P. S. J. Russell, M. Vetterlein, and E. Scherzer, “Submicrometer axial resolution optical coherence tomography,” Opt. Lett. 27(20), 1800–1802 (2002). [CrossRef]  

5. J. Izatt, M. Kulkarni, H.-W. Wang, K. Kobayashi, and M. Sivak, “Optical coherence tomography and microscopy in gastrointestinal tissues,” IEEE J. Sel. Topics Quantum Electron. 2(4), 1017–1028 (1996). [CrossRef]  

6. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Inverse scattering for optical coherence tomography,” J. Opt. Soc. Am. A 23(5), 1027–1037 (2006). [CrossRef]  

7. J. A. Högbom, “Aperture synthesis with a non-regular distribution of interferometer baselines,” Astron. Astrophys. Suppl. S. 15, 417 (1974).

8. J. M. Schmitt, “Restoration of Optical Coherence Images of Living Tissue Using the CLEAN Algorithm,” J. Biomed. Opt. 3(1), 66 (1998). [CrossRef]  

9. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]  

10. Y. Xu, X. K. B. Chng, S. G. Adie, S. A. Boppart, and P. S. Carney, “Multifocal interferometric synthetic aperture microscopy,” Opt. Express 22(13), 16606 (2014). [CrossRef]  

11. Y. Xu, Y.-Z. Liu, S. A. Boppart, and P. Scott Carney, “Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography,” Appl. Opt. 55(8), 2034 (2016). [CrossRef]  

12. Y. Yasuno, J. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, “Non-iterative numerical method for laterally superresolving Fourier domain optical coherence tomography,” Opt. Express 14(3), 1006–1020 (2006). [CrossRef]  

13. A. Kumar, W. Drexler, and R. A. Leitgeb, “Subaperture correlation based digital adaptive optics for full field optical coherence tomography,” Opt. Express 21(9), 10850–10866 (2013). [CrossRef]  

14. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. 109(19), 7175–7180 (2012). [CrossRef]  

15. A. Kumar, T. Kamali, R. Platzer, A. Unterhuber, W. Drexler, and R. A. Leitgeb, “Anisotropic aberration correction using region of interest based digital adaptive optics in Fourier domain OCT,” Biomed. Opt. Express 6(4), 1124 (2015). [CrossRef]  

16. M. Wu, D. M. Small, N. Nishimura, and S. G. Adie, “Computed optical coherence microscopy of mouse brain ex vivo,” J. Biomed. Opt. 24(11), 116002 (2019). [CrossRef]  

17. N. D. Shemonski, S. G. Adie, Y.-Z. Liu, F. A. South, P. S. Carney, and S. A. Boppart, “Stability in computed optical interferometric tomography (Part I): Stability requirements,” Opt. Express 22(16), 19183–19197 (2014). [CrossRef]  

18. W. Wieser, W. Draxinger, T. Klein, S. Karpf, T. Pfeiffer, and R. Huber, “High definition live 3D-OCT in vivo: design and evaluation of a 4D OCT engine with 1 GVoxel/s,” Biomed. Opt. Express 5(9), 2963 (2014). [CrossRef]  

19. B. Tan, Z. Hosseinaee, L. Han, O. Kralj, L. Sorbara, and K. Bizheva, “250 kHz, 15 µm resolution SD-OCT for in-vivo cellular imaging of the human cornea,” Biomed. Opt. Express 9(12), 6569 (2018). [CrossRef]  

20. L. Ginner, A. Kumar, D. Fechtig, L. M. Wurster, M. Salas, M. Pircher, and R. A. Leitgeb, “Noniterative digital aberration correction for cellular resolution retinal optical coherence tomography in vivo,” Optica 4(8), 924 (2017). [CrossRef]  

21. B. Baumann, W. Choi, B. Potsaid, D. Huang, J. S. Duker, and J. G. Fujimoto, “Swept source / Fourier domain polarization sensitive optical coherence tomography with a passive polarization delay unit,” Opt. Express 20(9), 10229 (2012). [CrossRef]  

22. N. D. Shemonski, A. Ahmad, S. G. Adie, Y.-Z. Liu, F. A. South, P. S. Carney, and S. A. Boppart, “Stability in computed optical interferometric tomography (Part II): in vivo stability assessment,” Opt. Express 22(16), 19314–19326 (2014). [CrossRef]  

23. N. D. Shemonski, S. S. Ahn, Y.-Z. Liu, F. A. South, P. S. Carney, and S. A. Boppart, “Three-dimensional motion correction using speckle and phase for in vivo computed optical interferometric tomography,” Biomed. Opt. Express 5(12), 4131–4143 (2014). [CrossRef]  

24. Y. Nakamura, J. Sugisaka, Y. Sando, T. Endo, M. Itoh, T. Yatagai, and Y. Yasuno, “Complex Numerical Processing for In-Focus Line-Field Spectral-Domain Optical Coherence Tomography,” Jpn. J. Appl. Phys. 46(4A), 1774–1778 (2007). [CrossRef]  

25. A. Kumar, W. Drexler, and R. A. Leitgeb, “Numerical focusing methods for full field OCT: a comparison based on a common signal model,” Opt. Express 22(13), 16061 (2014). [CrossRef]  

26. B. C. Flores, “Robust method for the motion compensation of ISAR imagery,” in Intelligent Robots and Computer Vision X: Algorithms and Techniques, D. P. Casasent, ed. (Boston, MA, 1992), pp. 512–517.

27. A. G. Podoleanu, G. M. Dobre, D. J. Webb, and D. A. Jackson, “Coherence imaging by use of a newton rings sampling function,” Opt. Lett. 21(21), 1789–1791 (1996). [CrossRef]  

28. D. Oida, K. Oikawa, T.-A. Wang, M.-T. Tsai, S. Makita, and Y. Yasuno, “Virtual multi-directional optical coherence tomography,” Proc. SPIE 11228, 112281G (2020). [CrossRef]  

29. H. Spahr, C. Pfäffle, P. Koch, H. Sudkamp, G. Hüttmann, and D. Hillmann, “Interferometric detection of 3D motion using computational subapertures in optical coherence tomography,” Opt. Express 26(15), 18803 (2018). [CrossRef]  

30. L. Ginner, A. Wartak, M. Salas, M. Augustin, M. Niederleithner, L. M. Wurster, and R. A. Leitgeb, “Synthetic subaperture-based angle-independent Doppler flow measurements using single-beam line field optical coherence tomography in vivo,” Opt. Lett. 44(4), 967–970 (2019). [CrossRef]  
