
Iterative re-weighted approach to high-resolution optical coherence tomography with narrow-band sources

Open Access

Abstract

Optical coherence tomography (OCT) is a non-invasive optical imaging modality capable of high-resolution imaging of internal tissue structures. It is widely believed that high axial resolution in OCT systems requires a wide-bandwidth light source. As a result, the potential advantages of narrow-bandwidth sources (in terms of cost and/or imaging speed) are often understood to come at the cost of a significant reduction in imaging resolution. In this paper, we argue that this trade-off between resolution and speed is a shortcoming imposed by the state-of-the-art A-scan reconstruction algorithm, the fast Fourier transform (FFT), and can be circumvented through the use of alternative processing methods. In particular, we investigate the shortcomings of the FFT as well as previously proposed alternatives, and we demonstrate the first application of an iterative regularized re-weighted l2 norm method to improve the axial resolution of fast-scan-rate OCT systems under narrow-bandwidth imaging conditions. We validate our claims via experimental results generated from a home-built OCT system used to image a layered phantom and in vivo data. Our results rely on new, sophisticated signal processing algorithms to generate higher precision (i.e., higher resolution) OCT images at correspondingly fast scan rates. In other words, our work demonstrates the feasibility of medical imaging systems that are simultaneously more reliable and more comfortable for patients, by reducing the overall scan time without sacrificing image quality.

© 2016 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) [1, 2] is a well-known, non-invasive medical imaging modality capable of capturing micrometer-resolution, three-dimensional images from within optically scattering media (e.g., biological tissue). As with all imaging systems, resolution characterizes the smallest feature that can be discerned, or equivalently the smallest distance between two features such that they can be reliably distinguished. While the lateral resolution in OCT is diffraction-limited, the axial resolution is inversely proportional to the bandwidth of the light source [3, 4]. Hence, to achieve high-resolution imaging with OCT, most systems employ wide-bandwidth sources. However, adequately sampling the large bandwidth of these sources imposes a limit on the imaging speed. Some variants of OCT, such as depth-resolved spectrally encoded endoscopy (SEE) [5, 6] and interleaved optical coherence tomography (iOCT) [7], achieve faster imaging by multiplexing A-scans onto the bandwidth of the source. However, these strategies for faster imaging come at the cost of axial resolution in the case of SEE and imaging depth in the case of iOCT. Our goal is to implement a technique that allows us to maintain high resolution while using narrow bandwidth sources; hence to achieve faster imaging while maintaining the high resolution and imaging depth associated with traditional OCT.

Recent advances in microscopy have witnessed the emergence of several techniques that show super-resolution imaging capabilities (e.g., STORM, PALM and FPALM [8–10]). Common to many of these methods is the idea that the physical resolution limits of the system (e.g., diffraction) can be evaded by clever redesign of the classic imaging system, imaging protocol, or analysis algorithm. In OCT, attempts at surpassing the bandwidth-limited axial resolution constraints have come in various forms. Several groups have attempted to use de-convolution in post-processing to manipulate the point spread function through knowledge of the source shape [11, 12]. For example, Liu et al. [11] proposed to invoke the Wiener filter and the Lucy-Richardson algorithm, and Bousi et al. [12] later proposed an alternative technique that is based on modulation and deconvolution of the interferometric signal. Unfortunately, as we will show, these solutions fail to provide an axial resolution that is substantially higher than the inverse of the source bandwidth.

Other groups have taken an approach that better mimics the strategies of super-resolution microscopy: that is, recognizing that the true goal of OCT image reconstruction is to determine the location of sub-surface layers, these groups focus on strategies that provide increased localization precision even with a low-bandwidth source (i.e., poor axial resolution) through model-based approaches. For instance, Sekhar et al. [13] and Seelamantula et al. [14] introduced Prony’s model-based harmonic retrieval method [15] as one potential solution to overcome the axial resolution limit imposed by the finite bandwidth of the source during standard OCT processing (i.e., when using the FFT). They showed that for a purely sinusoidal signal model, Prony recovers the depth-resolved location of various layers in the sample with fewer measurements and with higher precision (i.e., resolution) than FFT. Because the number of acquired samples for a given spectral sampling rate is often a direct function of the bandwidth, the ability to recover the same data with fewer samples suggests that Prony is a potential candidate algorithm for high resolution, narrow-band OCT imaging. However, a clear downside of Prony is that it is highly prone to estimation error when the underlying signal is something other than a pure sum of sinusoids: for example, when there is noise or the signal is far from a sum of sinusoids, as in soft tissue [16]. Unfortunately, these conditions are inevitable in almost all biomedical applications and measurement systems, making Prony a poor method to achieve high resolution for such samples. By invoking model-based techniques [18], Liu et al. [17] recently proposed to use an autoregressive model to improve the axial resolution. However, as the authors showed, soft tissue needs a relatively high-order autoregressive model to provide sufficient detail, and that requires a high number of wavelength samples: with the sampling rates used in most OCT systems, this still translates into a wide-bandwidth source, and the authors' own use of a 105 nm source to collect data indicates this need.

Various groups have also introduced compressive sensing (CS) through l1-regularization [19–21] as a means to achieve high-resolution images while acquiring fewer samples, thereby accelerating the imaging rate. However, the requirement of sparse random sampling of the spectral bandwidth (as shown in Fig. 1(a)) is a major downside to l1-regularized CS: as a result, high resolution using CS still requires a wide-bandwidth source, since the measurement matrix in the CS reconstruction suffers from low resolution when the source is too narrow [22]. In contrast to random sampling, a CS-like algorithm based on uniform, non-sparse sampling would require fewer wavelength samples and is compatible with true narrow-band imaging (e.g., Fig. 1(b)). Such a scheme could make efficient use of a truly narrow-band source (which is generally cheaper) or allow a wide-band source to enable fast 3D imaging at high axial resolution using a spatial-spectral encoding scheme similar to SEE, wherein each lateral point is illuminated by a small portion of the entire source bandwidth.


Fig. 1 The Gaussian shape shows the source S(k) versus the wavenumber, k. Arrows with O and X (in red and blue respectively) show samples of the source, collected for two interferograms (i.e., two lateral positions). As can be noted from the figures, in the case of random sampling, samples are collected from the whole bandwidth, whereas in the case of consecutive sampling, the collected samples occupy a sub-band of the source. Therefore, in the latter scheme, data from multiple lateral positions can be collected in only one source scan.


In this paper, we present an alternative algorithm to permit high-fidelity localization of sub-surface features using adjacent spectral samples from a narrow-bandwidth source. The technique we use applies an iterative, regularized re-weighted l2 solution [23] to the problem of reconstructing OCT images for the first time. We compare the re-weighted l2 solution with the FFT, Prony, and Lucy-Richardson algorithms and show that it outperforms these algorithms in recovering the reflectivity profile of the object, even when the data are collected with source bandwidths as narrow as 5–10 nm. In short, we demonstrate OCT imaging at resolutions that surpass the traditional axial resolution limit governed by the FFT algorithm. This study thus opens a door towards new options to design efficient OCT systems using potentially cheaper sources with narrow bandwidths to achieve faster imaging speeds.

2. Problem statement

Let us consider a standard Fourier domain (FD) OCT system for which the model of the interferometric signal obtained by the detector is well understood (e.g., [4]). Let the sampling in the detector be modeled by an ideal sampler (i.e., a train of impulses). As most FD-OCT systems do not sample evenly in wavenumber, a resampling step is often carried out to obtain samples evenly spaced in wavenumber. After this step, the interferogram may be represented as follows:

I(n) = S(k) \times \left| r_R\, e^{j2kz_R} + \int_0^{z_{\max}} r_S(z_S)\, e^{j2kz_S}\, dz_S \right|^2 + \varepsilon(k) \;\Bigg|_{\,k = k_0 - K/2 + nK/N} \qquad (1)
where n ∈ {1, 2, ..., N} is the index for each spectral channel of the N channels (or pixels) in the raw OCT data, S(k) is the source spectral density as a function of wavenumber k (the source spectral envelope is usually known or measurable), r_R is the reflectivity of the reference mirror, z_R is the optical pathlength of the reference arm, z_S is the optical pathlength of a given subsurface layer, z_max is the optical pathlength of the light-penetration-limited imaging depth of the object, whose surface is located at z_S = 0, r_S(z_S) is the depth-dependent reflectivity of the object, k_0 is the center wavenumber, K/N is the spectral sampling rate, and ε(k) accounts for additive measurement noise. Note that the factor of two in the exponents associated with the positions of the object and reference layers comes from the double-pass nature of the reflectance measurement. In the noise-free case (i.e., ε(k) = 0), one can generally compensate for the source shape by dividing I(k) by S(k) (note that under the ideal Gaussian source model, S(k) ≠ 0). In practice, however, where the existence of noise is inevitable, the source shape can be approximately compensated for in alternative ways (see Data collection and processing).
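To make the forward model concrete, the following minimal sketch (not the authors' code) simulates Eq. (1) for an ideal Gaussian source and two hypothetical reflectors; all numerical values (swept range, layer depths, reflectivities, noise level) are illustrative assumptions.

```python
import numpy as np

# Hypothetical acquisition parameters (illustrative only).
N = 1024                                  # spectral channels
k0 = 2 * np.pi / 1.31e-6                  # center wavenumber of a 1310-nm source [rad/m]
K = 0.05 * k0                             # total swept wavenumber range (assumed)
k = k0 - K / 2 + np.arange(1, N + 1) * K / N   # sampling grid from Eq. (1)

S = np.exp(-0.5 * ((k - k0) / (K / 4)) ** 2)   # Gaussian source envelope S(k)
r_R, z_R = 1.0, 0.0                            # reference reflectivity and pathlength
layers = [(0.05, 150e-6), (0.02, 300e-6)]      # hypothetical (r_S, z_S) pairs [m]

# Build the interferometric field and detect its intensity, as in Eq. (1).
field = r_R * np.exp(1j * 2 * k * z_R)
for r_S, z_S in layers:
    field += r_S * np.exp(1j * 2 * k * z_S)
I = S * np.abs(field) ** 2 + 1e-3 * np.random.randn(N)

# Idealized source-shape compensation by division; the experiments in this
# paper use a high-pass filter instead (see Data collection and processing).
I_comp = I / np.maximum(S, 1e-6)
```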

In practice, the reflectivity of the reference surface is a system design parameter and is often chosen such that r_R ≫ r_S(z_S) for all 0 ≤ z_S < z_max [4]. Therefore, the cross-terms resulting from the power of 2 in Eq. (1) become small and negligible with respect to the other terms, and the reflectivity profile can be easily estimated by taking the Fourier transform of I(n). Since our computations take place in the discrete domain, let r_S(z_S) be approximated by a vector of length M, i.e., let r_S(z_S) be a sum of M discrete layers. Note that M is a user-defined parameter, and the approximation to the continuous reflectivity profile of a biological object improves as M grows:

r_S(z_S) \approx \sum_{m=0}^{M-1} r(m)\, \delta\!\left(z_S - \frac{2\pi N m}{K M}\right). \qquad (2)
In other words, r_S(z_S) in the interval 0 ≤ z_S ≤ z_max is approximated by r(m), for m = 0, ..., M − 1, which we call the discrete reflectivity profile representation of r_S(z_S). In the discrete domain, given that the goal is to take the Fourier transform of the sampled data, the discrete Fourier transform (DFT) relationship between the samples and the vector of the reflectivity profile of the object is as follows:
I(n) = \sum_{m=0}^{M-1} r(m)\, e^{j 2\pi m n / M}, \qquad n = 1, \ldots, N. \qquad (3)
The above can also be formulated in matrix-vector notation. Let A be a matrix of size N × M with entries a_nm = e^{j2πnm/M}, n ∈ {1, ..., N} and m ∈ {0, ..., M − 1}, and let I = [I(1), ..., I(N)]^T be the vector of observations from the camera; then the OCT A-scan reconstruction is represented by the following system of equations:
\mathbf{A}\,\mathbf{r} = \mathbf{I}. \qquad (4)

It is known that K, the swept wavenumber range, determines the traditional (FFT-limited) axial resolution of the OCT system. For a given spectral sampling rate, decreasing the source bandwidth is equivalent to reducing N, the number of samples in the interferogram and A-scan. For biological samples, reducing N significantly may result in M > N, and our A-scan reconstruction problem becomes an under-determined system of linear equations. It is well known that under-determined systems of equations do not have unique solutions for the reflectivity profile. Notably, since most biological images acquired with OCT can be considered sparse in some domain (spatial, wavelet, etc.), i.e., more than half of the entries of the reflection vector r are zero, sparse recovery methods may provide an efficient mechanism to extract a unique reflectivity profile. The additional assumption of sparsity, moreover, can enable localization with a precision better than the axial resolution dictated by the source bandwidth.
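As a quick illustration of the under-determined system in Eq. (4), the sketch below (our own, with arbitrary dimensions) builds the N × M matrix A for a case with far fewer spectral samples than depth bins and a sparse hypothetical reflectivity vector.

```python
import numpy as np

N, M = 64, 512                                   # few spectral samples, fine depth grid (assumed)
n = np.arange(1, N + 1)[:, None]                 # n = 1, ..., N
m = np.arange(M)[None, :]                        # m = 0, ..., M - 1
A = np.exp(1j * 2 * np.pi * n * m / M)           # a_nm = exp(j 2*pi*n*m / M), as in Eq. (3)

r_true = np.zeros(M, dtype=complex)
r_true[[40, 140, 300]] = [1.0, 0.4, 0.2]         # sparse discrete reflectivity profile
I = A @ r_true                                   # measurement vector, Eq. (4)

print(A.shape)   # (64, 512): more unknowns than equations, so Ar = I alone
                 # does not determine r uniquely without a sparsity assumption
```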

3. Proposed solution: Regularized re-weighted l2

In contrast to CS methods, which sample few wavelengths but require randomized selection (Fig. 1), we propose to apply an iterative re-weighted l2 least squares solution to OCT image reconstruction that allows for contiguous spectral sampling over a small bandwidth (Fig. 1), as can be achieved by diffraction with a grating (similar to SEE). The algorithm, first proposed by Chartrand and Yin for sparse signal recovery [23], is also more robust to noise than Prony and requires no prior knowledge of the specific number of layers at each lateral position. The general idea is to start with an initial guess of the discrete reflectivity profile of the object (e.g., the low-resolution FFT solution itself, or other prior assumptions about the position of the object structure). At each iteration, the reflectivity profile is refined by suppressing points with lower reflection intensity, which are given smaller weights in the algorithm. A description of this algorithm as applied to OCT A-scan reconstruction is as follows [Alg. 1]:


Algorithm 1: Regularized re-weighted l2 algorithm.

In the algorithm notation, diag[(r_q^2 + β)^(1−p/2)] is a diagonal matrix with the elements (r_q^2 + β)^(1−p/2) on its main diagonal, where the index q keeps track of the number of elapsed iterations. The parameter p should be chosen from the real numbers in the interval 0 ≤ p < 2 so that the algorithm promotes sparsity [23]. Here, A^H is the Hermitian transpose of the matrix A in Eq. (4), and Δ is a user-defined parameter indicating the error tolerance for the inner loop. The parameter β is user-defined and serves to tune the weights of the reflectivity profile in each iteration. This parameter increases the robustness of the matrix inversion step in practice, such that measurement errors in the interferogram do not dramatically affect the solution. Following the guidelines in [23], β is initialized to 1 and is decreased by a factor of 10 each time the difference between two consecutive iterations falls below β/10. Note that one could use a factor higher or lower than 10: lower factors result in slower convergence but are more robust to noise and artifacts in the data.

In our experiments, we let the initial guess for the algorithm, r_0, be the low-resolution FFT solution, which is a generic initial condition available for any dataset. Note that a more sophisticated initial point is permitted; for instance, one that takes into account information about the bulk location of the object, or an approximate location of the internal layers. In our experiments, we found that a choice of 0.5 < p < 1 is ideal for the OCT data we used. Note that lower values of p lead to faster convergence, albeit with greater susceptibility to inaccuracy. Since our computations are performed offline, we opted for accuracy over computational speed and set p = 0.9 in all our results.
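Because Algorithm 1 appears only as an image in the published version, the following is a minimal sketch, in our own Python, of the regularized re-weighted l2 iteration as we read it from the text and from Chartrand and Yin [23]; the small diagonal loading term `tik` used to stabilize the matrix inversion is our own assumption, and the parameter defaults simply mirror the choices reported in this paper.

```python
import numpy as np

def reweighted_l2(A, I, p=0.9, delta=0.05, max_iter=50, tik=1e-8):
    """Sparse recovery of the reflectivity profile r from I = A r
    via iteratively re-weighted l2 minimization (after [23])."""
    N, M = A.shape
    r = A.conj().T @ I / N          # initial guess: low-resolution FFT-like solution
    beta = 1.0
    for _ in range(max_iter):
        # Weights keep currently strong reflectors and suppress weak ones.
        w = (np.abs(r) ** 2 + beta) ** (1.0 - p / 2.0)
        AW = A * w[None, :]                         # A @ diag(w)
        G = AW @ A.conj().T + tik * np.eye(N)       # N x N system to invert
        r_new = w * (A.conj().T @ np.linalg.solve(G, I))
        change = np.linalg.norm(r_new - r) / (np.linalg.norm(r) + 1e-12)
        r = r_new
        if change < beta / 10.0:
            beta /= 10.0                            # beta schedule from [23]
        if change < delta:
            break                                   # 5% stopping rule used in this paper
    return r
```

For example, with A and I built as in the earlier sketch, r_hat = reweighted_l2(A, I) returns an M-point estimate of the discrete reflectivity profile; each A-scan is processed independently.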

4. Data collection and processing

To test the effectiveness of our proposed method, we constructed a home-built, swept-source (SS) OCT system comprising a 200-kHz laser (VCSEL, Thorlabs) with a center wavelength of 1310 nm, a total bandwidth of 100 nm, a coherence length > 100 mm, and a duty cycle of 60%. Samples were digitized (ATS9360, Alazartech) at a rate of 1.2 GSa/s with 4096 data points per sweep. We chose this high sampling rate because data were readily available from this system; however, in our processing step we downsampled each interferogram by a factor of 4, resulting in 1024 samples per interferogram at an effective sampling rate of 300 MSa/s. To simulate results from OCT systems with smaller source bandwidths, we sampled sub-regions of the interferogram. In each case, the samples were selected in a contiguous fashion, from the center wavenumber of the source outwards. This choice keeps our comparisons consistent; however, since the source shape is compensated for, the results are generally independent of which region of the spectrum is selected. The selected region was zero-padded to 1024 points prior to further processing. We considered bandwidths ranging from 1 to 50 nm.
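The sub-band selection can be summarized by a short sketch (ours, not the original Matlab code); the symmetric-about-center placement and the 100-nm full bandwidth follow the description above, while the function name and interface are our own.

```python
import numpy as np

def select_subband(interferogram, bandwidth_nm, full_bw_nm=100.0):
    """Keep a contiguous block of samples centered on the source's center
    wavenumber and zero-pad back to the original length (1024 here)."""
    n_total = len(interferogram)
    n_keep = int(round(n_total * bandwidth_nm / full_bw_nm))
    start = (n_total - n_keep) // 2
    out = np.zeros_like(interferogram)
    out[start:start + n_keep] = interferogram[start:start + n_keep]
    return out
```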

To validate our results on objects with a layered structure, we imaged a tissue-mimicking scattering phantom and an in vivo fingertip. Fabrication of the phantom is described elsewhere [24]: in brief, each phantom comprised clear layers of polydimethylsiloxane (PDMS) alternating with scattering layers made of a PDMS/titanium dioxide (TiO2) mix (0.3% w/w). The layered phantom and fingertip B-scans had a lateral resolution of 16 μm and comprised 750 and 550 lateral positions, respectively. The lateral range for both scans was set to 5 mm.

Data were processed in Matlab. Background subtraction of the source spectrum was performed by applying a high-pass spatial filter to the raw interferogram. The cut-on frequency was selected such that it did not interfere with the portion of the reflectivity profile relating to the structure of the object. We implemented the Prony and regularized re-weighted l2 algorithms; for LR deconvolution we used the built-in Matlab function deconvlucy. For quantitative comparison of the images, we considered the normalized mean squared error (MSE) and the peak signal-to-noise ratio (PSNR). In the tables that follow, normalized MSE is reported in percent and PSNR in dB. All images were scaled to the range 0–255 prior to quantitative comparison.
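For reference, a minimal sketch of the two metrics as we compute them is given below; the exact normalization used for the MSE is not spelled out beyond "normalized ... in percent", so the per-pixel energy normalization here is our assumption.

```python
import numpy as np

def scale_0_255(img):
    """Linearly scale an image to the range 0-255, as done before comparison."""
    img = np.asarray(img, dtype=float)
    return 255.0 * (img - img.min()) / (img.max() - img.min() + 1e-12)

def nmse_psnr(test, truth):
    """Normalized MSE (percent) and PSNR (dB) against the 100-nm FFT ground truth."""
    test, truth = scale_0_255(test), scale_0_255(truth)
    mse = np.mean((test - truth) ** 2)
    nmse_percent = 100.0 * mse / np.mean(truth ** 2)   # normalization is an assumption
    psnr_db = 10.0 * np.log10(255.0 ** 2 / mse)
    return nmse_percent, psnr_db
```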

As mentioned earlier, the Prony algorithm is the only algorithm among those implemented in our work that requires the number of layers in the object as an input parameter. For Prony-based reconstructions, we chose the number of layers (the Prony model order) to be half the number of samples for each source bandwidth (i.e., more than the expected number of layers), as this produced the best results. The LR algorithm was stopped when the difference between the results of iterations n and n + 1 was less than five percent of the result of iteration n. For the regularized re-weighted l2, the parameter Δ was set to 5% (i.e., a stopping rule similar to that of the LR algorithm); however, for convenience, the number of iterations for the re-weighted l2 algorithm was limited to 50 (i.e., the algorithm stopped when two consecutive outputs differed by less than 5% or when the number of iterations reached 50). The latter condition keeps the computation time per A-scan low.

5. Results and discussion

Tables 1 and 2 report the MSE and PSNR for each B-scan obtained via FFT, Prony, LR de-convolution, and re-weighted l2. The image corresponding to each column was generated using a different reconstruction algorithm applied to data comprising only a subset of the total source bandwidth, as described by each row. As the basis of comparison, we generated the "ground truth" from the traditional FFT reconstruction obtained with the maximum possible source bandwidth (100 nm). Each number in the tables shows how much the B-scan produced by an algorithm with a specific source bandwidth deviates from the ground truth. For each row, we emphasize in bold the algorithm with the best performance in each experiment (i.e., the smallest MSE and largest PSNR). Note that each A-scan was reconstructed independently. The unit for PSNR is dB, and MSE has units of squared image intensities; as mentioned earlier, the images were pre-scaled to the range 0–255, and the MSE is averaged over all A-scans in each B-scan. It is clear from these results that the re-weighted regularized l2 solution is significantly better than FFT, Prony, and LR deconvolution for all bandwidths, especially in the case of the fingertip data. We hypothesize that the fingertip data perform better than the multi-layered phantom because the original fingertip image is sparser.


Table 1. Quantitative comparison of layered phantom data for various source bandwidths, and different algorithms (FFT, Prony, LR deconvolution, and re-weighted l2). In each row, the number in bold indicates the largest PSNR and smallest MSE, which indicate the best performing algorithm for each source bandwidth.


Table 2. Quantitative comparison of finger data for various source bandwidths, and different algorithms (FFT, Prony, LR deconvolution, and re-weighted l2). In each row, the number in bold indicates the largest PSNR and smallest MSE, which indicate the best performing algorithm for each source bandwidth.

To confirm our results qualitatively, Fig. 2 and Fig. 3 show B-scan images of the layered phantom and fingertip data, respectively. The image on the first row in each figure is the aforementioned ground truth along with a zoomed-in display of a boxed portion of the image. A few layers (or features in the case of the fingertip) of interest are specified on the ground truth. The remaining rows present results for various algorithms and various source bandwidths, where each image shows a zoomed-in version of the same boxed portion as in the ground truth.


Fig. 2 Qualitative comparison of algorithms for various source bandwidths on the phantom data. Note that the ground truth is reconstructed with the maximum available source bandwidth of 100 nm and is presented in the first row. The red boxed portion is zoomed in at the bottom left corner of each image for easier visual comparison. Note the smearing effect in the FFT images, which LR deconvolution is not able to compensate for well when the source bandwidth is as narrow as 5–10 nm. Our proposed method performs significantly better with narrow-bandwidth sources and has comparable performance to the other algorithms as the source bandwidth increases to 20–35 nm.



Fig. 3 Qualitative comparison of algorithms with various source bandwidths and finger data. Note that the ground truth (on top) is reconstructed with the maximum available source bandwidth of 100 nm. The region inside the red box appears in zoomed form at the top right corner of each image for easier visual comparison. At 1 nm, feature a is well reconstructed for both re-weighted l2 and Prony; however, the Prony result is less informative for features b and c. With the 10-nm source, the re-weighted l2 solution results in an image very similar to the ground truth. Beyond that, all algorithms show the features of interest very well, except for FFT, which still shows a smearing effect that is particularly strong at feature a.


In Fig. 2, B-scans derived from a 1-nm source show no clear features of the phantom structure. As the source bandwidth increases to 5 nm, layer 1 starts to be revealed. A more careful look at the reconstructed images shows that Prony and re-weighted l2 provide a better estimate of the first layer (i.e., the layer is thinner) than the FFT and LR de-convolution algorithms. As the source bandwidth increases to 10 nm, the performance of all algorithms improves to some extent; however, the internal structure of the layered phantom is visually better in the re-weighted l2 result, as it reveals layer 2 to an acceptable extent. FFT demonstrates a smearing effect that visually shows layers to be thicker than they really are, and neither LR deconvolution nor Prony is able to reveal layer 2 reliably at this bandwidth. As the source bandwidth increases to 15 nm, the re-weighted l2 solution reconstructs layers 1 to 8 with more accurate width and location estimates than the other algorithms, and by 20 nm the result from re-weighted l2 visually looks very similar to the ground truth. Meanwhile, the layers in the FFT reconstruction, more specifically layers 1 and 2, are smeared and still look thicker than they really are; LR deconvolution does not present layer 7 reliably; and Prony shows all layers but with incorrect widths. At larger bandwidths, all algorithms show the layers of interest very well, although the smearing effect from FFT still remains, especially for the stronger layers 1 and 2.

Similarly, in Fig. 3, with a 1-nm source, the surface layer of the finger (feature a) is roughly visible in almost all B-scan results; however, re-weighted l2 provides the best estimate of the surface layer thickness and other structures. As the source bandwidth increases to 5 nm, feature a remains highly smeared in the FFT and LR solutions, i.e., its apparent width is greater than its actual width. The highly speckled appearance of the Prony result renders features b and c hardly visible (they are clearest in the LR result), although both Prony and re-weighted l2 perform acceptably at localizing feature a. Increasing the source bandwidth to 10 nm, FFT still exhibits smearing, especially on feature a. Prony performs well on feature a, but b and c remain indistinguishable. LR and re-weighted l2, on the other hand, perform well in localizing all three features. Beyond 10 nm, all algorithms show the features of interest very well, except for FFT, which shows strong smearing of the highly reflective feature a.

Note that as the bandwidth increases, FFT and LR perform sufficiently well, especially when the source bandwidth is 35 nm or wider. This means that from a practical point of view, one might have to weigh the added computational expense of the regularized re-weighted l2 against its diminishing benefit when the source bandwidth is wide (35 nm or 50 nm in our tables). To quantify this benefit, we used the mirror dataset to estimate the super-resolution factor for each source bandwidth. The super-resolution factor is defined as the ratio of the full-width-at-half-maximum of the bandwidth-limited axial resolution to the axial resolution obtained with the proposed method. Results are presented in Fig. 4 and Table 3. Similarly, when the structure of the object is not sufficiently sparse, such as for the layered phantom, a wider bandwidth is needed to recover the underlying layer information. Therefore, future work should investigate options for localization in the case of biological tissue that is not sparse in the spatial domain, for instance by exploring other domains in which the data can be sparse (e.g., the wavelet domain).
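A minimal sketch of how such a super-resolution factor can be estimated from mirror A-scans is given below; the coarse above-half-maximum width estimate (rather than sub-sample interpolation) is our own simplification.

```python
import numpy as np

def fwhm(ascan, dz=1.0):
    """Coarse full-width-at-half-maximum of a single peaked A-scan (in units of dz)."""
    profile = np.abs(np.asarray(ascan))
    above = np.where(profile >= profile.max() / 2.0)[0]
    return (above[-1] - above[0] + 1) * dz

def super_resolution_factor(fft_ascan, proposed_ascan, dz=1.0):
    """Ratio of the FFT-limited peak width to the proposed method's peak width."""
    return fwhm(fft_ascan, dz) / fwhm(proposed_ascan, dz)
```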


Fig. 4 A-scan comparison of mirror data for various source bandwidths for FFT and the proposed algorithm. Ground truth data correspond to the FFT-generated result with a 100-nm source bandwidth. The improvement in resolution obtained with the current algorithm (i.e., the super-resolution factor) is greatest for narrow bandwidths.



Table 3. Comparison of resolution between FFT and the proposed method for various bandwidths. As can be noted, resolution improves as the source bandwidth increases for both the re-weighted l2 and FFT algorithms. However, the proposed method offers significantly improved resolution in the narrow-bandwidth cases. The theoretical axial resolution obtained with FFT processing is included for reference.

Each A-line is solved for independently, and no relationship between lateral A-lines is assumed. The starting solution for the algorithm (i.e., the initialization condition) is the FFT solution, and the iterations of the proposed method serve to suppress the effect of the narrow-bandwidth source. Because the A-scan calculations are independent, we envision that parallel computation can be used to improve speed. Another practical point is the computation time of the aforementioned algorithms. The current implementation of our algorithm takes significantly longer than FFT, LR, and Prony (Table 4). This was not a major issue in our experiments, as the algorithms were run in post-processing. The increased computation time is due to the inefficient implementation of the matrix inversion step, which takes O(n^3) operations to invert an n × n matrix. However, since the matrix to be inverted is Toeplitz, the inversion step can be implemented much more efficiently, for example using the algorithm proposed in [25], which takes O(n log^3 n).
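As an illustration of how the Toeplitz structure can be exploited, the sketch below (our own suggestion, not the paper's implementation) forms only the first column of the N × N matrix arising at each iteration and solves the system with SciPy's Levinson-recursion solver, which already reduces the cost from O(n^3) to O(n^2); the superfast O(n log^3 n) solver of [25] would reduce it further. The diagonal loading `tik` is the same stabilizing assumption as in the earlier sketch.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def solve_iteration_toeplitz(A, w, I, tik=1e-8):
    """One re-weighted l2 update exploiting the Toeplitz structure of
    G = A diag(w) A^H, whose entries depend only on the index difference."""
    N, M = A.shape
    d = np.arange(N)[:, None]                 # index differences n - n' = 0, ..., N-1
    m = np.arange(M)[None, :]
    col = (w[None, :] * np.exp(1j * 2 * np.pi * d * m / M)).sum(axis=1)
    col[0] += tik                             # diagonal loading (assumption)
    x = solve_toeplitz((col, col.conj()), I)  # solve G x = I in O(N^2)
    return w * (A.conj().T @ x)               # updated reflectivity estimate
```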


Table 4. Computation time comparison of the proposed algorithm with FFT, Prony and LR.

6. Conclusion and future work

In this work, we proposed for the first time the use of a regularized re-weighted least squares (l2) solution to generate high-fidelity reconstructions of OCT images using a simulated narrow-bandwidth source. The proposed algorithm is robust to noise and to non-idealities in the model, and it produces better quality images, with more discernible structure in biological data, than FFT, LR deconvolution, and Prony, even with only 5–10 nm of bandwidth. Using data collected from a variety of samples, we showed that this algorithm outperforms other techniques previously reported in the literature, including the Prony algorithm and Lucy-Richardson deconvolution. Compared to compressed sensing and FFT techniques, this strategy has the strong advantage of being implementable with very narrow-bandwidth sources that are sampled contiguously. Hence, this algorithm can be implemented with minimal change to traditional OCT system hardware. In addition to the cost benefit of utilizing narrow-bandwidth sources, the re-weighted l2 algorithm can be used to increase imaging speed if applied to a system with a broader source bandwidth. For instance, such a system could be implemented similarly to SEE, wherein a grating is used to disperse the light over a line of data and various lateral points (A-scans) are sampled with different spectral regions.

Areas of future work include exploring new super-resolution and localization techniques more suitable for non-sparse biological tissue, which exploit priors related to the signal domain or consider the signal in other domains where it is potentially sparse. Moreover, we are interested in further investigating the relationship between adjacent A-scans and in using this information to potentially improve the lateral resolution of OCT B-scan images. A strong advantage of this work is its potential to yield high-quality images at lower cost while minimizing patient scan time using a practical system design.

Acknowledgments

This work has been supported by National Science Foundation Grants CNS-1329819 and AST-1247995. Lian Duan was supported by a Stanford Bio-X seed grant. Tara Javidi would like to thank Dr. M. Khajavikhan for the original discussions motivating this work. The authors would like to thank Genna Smith, Heeyoon Lee and Tahereh Marvdashti for their generous help in set-up and accessing OCT data and Kristen Lurie for her help with setting up the initial code for simulations.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

2. J. M. Schmitt, “Optical coherence tomography (OCT): a review,” IEEE J. Sel. Top. Quant. 5(4), 1205–1215 (1999). [CrossRef]  

3. A. F. Fercher, “Optical coherence tomography,” J. Biomed. Opt. 1(2), 157–173 (1996). [CrossRef]  

4. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography, Technology and Applications (Springer, 2008). [CrossRef]  

5. G. J. Tearney, M. Shishkov, and B. E. Bouma, “Spectrally encoded miniature endoscopy,” Opt. Lett. 27(6), 412–414 (2002). [CrossRef]  

6. D. Yelin, W. White, J. Motz, S. Yun, B. Bouma, and G. Tearney, “Spectral-domain spectrally-encoded endoscopy,” Opt. Express 15(5), 2432–2444 (2007). [CrossRef]   [PubMed]  

7. H. Y. Lee, H. Sudkamp, T. Marvdashti, and A. K. Ellerbee, “Interleaved optical coherence tomography,” Opt. Express 21(22), 26542–26556 (2013). [CrossRef]   [PubMed]  

8. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]   [PubMed]  

9. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]   [PubMed]  

10. S. T. Hess, T. P. K. Giririjan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef]   [PubMed]  

11. Y. Liu, Y. Liang, G. Mu, and X. Zhu, “Deconvolution methods for image deblurring in optical coherence tomography,” J. Opt. Soc. Am. A 26(1), 72–77 (2009). [CrossRef]  

12. E. Bousi and C. Pitris, “Axial resolution improvement by modulated deconvolution in fourier domain optical coherence tomography,” J. Biomed. Opt. 17(7), 071307 (2012). [CrossRef]   [PubMed]  

13. S. C. Sekhar, H. Nazkani, T. Blu, and M. Unser, “A new technique for high-resolution frequency domain optical coherence tomography,” in Proceedings of IEEE conference on Acoustics, Speech and Signal Processing (IEEE, 2007), pp. I-425–I-428.

14. C. S. Seelamantula and S. Mulleti, “Super-resolution reconstruction in frequency domain optical-coherence tomography using the finite-rate-of-innovation principle,” IEEE Trans. on Sig. Proc. 62(19), 5020–5029 (2014). [CrossRef]  

15. P. Stoica and R. Moses, Introduction to Spectral Analysis (Prentice Hall, 2000).

16. B. D. Rao and K. S. Arun, “Model based processing of signals: a state space approach,” Proc. IEEE 80(2), 283–309 (1992). [CrossRef]  

17. X. Liu, S. Chen, D. Cui, X. Yu, and L. Liu, “Spectral estimation optical coherence tomography for axial super-resolution,” Opt. Express 23(20), 26521–26532 (2015). [CrossRef]   [PubMed]  

18. S. L. Marple, Digital Spectral Analysis: With Applications, (Prentice Hall, 1986).

19. N. Mohan, I. Stojanovic, W. C. Karl, B. E. A. Saleh, and M. C. Teich, “Compressed sensing in optical coherence tomography,” Proc. SPIE 7570, 75700L (2010). [CrossRef]  

20. X. Liu and J. U. Kang, “Compressive SD-OCT: the application of compressed sensing in spectral domain optical coherence tomography,” Opt. Express 18(21), 22010–22019 (2010). [CrossRef]   [PubMed]  

21. N. Zhang, T. Huo, C. Wang, T. Chen, J. Zheng, and P. Xue, “Compressed sensing with linear-in-wavenumber sampling in spectral-domain optical coherence tomography,” Opt. Lett. 37(15), 3075–3077 (2012). [CrossRef]   [PubMed]  

22. E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE T. Inform. Theory 52(2), 489–509 (2006). [CrossRef]  

23. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, (IEEE, 2008), pp. 3869–3872.

24. K. L. Lurie, G. T. Smith, S. A. Khan, J. C. Liao, and A. K. Ellerbee, “Three-dimensional, distendable bladder phantom for optical coherence tomography and white light cystoscopy,” J. Biomed. Opt. 19(3), 036009 (2014). [CrossRef]  

25. M. Stewart, “A Superfast toeplitz solver with improved numerical stability,” SIAM. J. Matrix Anal. A. 25(3), 669–693 (2003). [CrossRef]  
