Optica Publishing Group

Spectral fusing Gabor domain optical coherence microscopy based on FPGA processing

Open Access

Abstract

High-resolution imaging using high numerical aperture imaging optics is commonly known to cause a narrow depth of focus, which limits the depth of field in optical coherence tomography (OCT). To achieve semi-invariant high resolution in all directions, Gabor domain optical coherence microscopy (GD-OCM) combines the in-focus regions of multiple cross-sectional images that are acquired while shifting the focal plane of the objective lens. As a result, GD-OCM requires additional processes for in-focus extraction and fusion, leading to longer processing times, as compared with conventional frequency domain OCT (FD-OCT). We previously proposed a method of spectral domain Gabor fusion that has been proven to improve the processing speed of GD-OCM. To investigate the full potential of the spectral domain Gabor fusion technique, we present the implementation of the spectral domain Gabor fusion algorithm using field programmable gate arrays (FPGAs) in a spectral acquisition hardware device. All filtering processes are now performed in an acquisition device as opposed to the post-processing of the original GD-OCM, which reduces the amount of data transfer between the image acquisition device and the processing host. To clearly demonstrate the imaging performance of the implemented system, we performed GD-OCM imaging of a stack of polymeric tapes. GD-OCM imaging was performed over seven focus zones. The results showed that the processing time for linear wavenumber calibration and spectral Gabor filtering can be improved with FPGA implementation. The total processing time was improved by about 35%.

© 2021 Optical Society of America

1. INTRODUCTION

Optical coherence tomography (OCT) is an optical imaging technology that is capable of in vivo microscopic depth cross-sectional imaging of biological tissues [1]. Particularly, one outstanding feature of OCT is its ability to provide digital depth sections of the sample at high resolution and high sensitivity. Besides the high speed and functional imaging capabilities, another key parameter that will open the path for optical diagnostics using OCT technology is high-resolution imaging, i.e., in a regime of a few microns or sub-microns, particularly in three dimensions. Furthermore, recent advances in frequency domain OCT (FD-OCT) allow for depth-resolved imaging at high speed and high sensitivity, which is attractive for in vivo three-dimensional (3D) and four-dimensional (i.e., 3D imaging over time) imaging of biological tissues [2–5]. On the basis of the coherence theory in the FD, FD-OCT performs a Fourier transform on the spectral interference signal to obtain a depth-resolved reflectivity profile along the incident beam path. The main advantage of FD-OCT over its time domain counterpart is that it can obtain the entire depth profile at once without scanning the optical path length of the reference beam, which improves its overall imaging speed [6].

The interference signal acquired by OCT is equivalent to an optical sampling of the sample reflectivity along the depth by using the low coherence of a broadband light source as a sampling gate [7]. Hence, the temporal coherence shape governs an axial point spread function (PSF) (i.e., axial resolution), which is inversely proportional to the power spectrum bandwidth of the light source [7,8]. Separately, the lateral resolution is governed by the lateral PSF of the imaging optics in the sample arm. For a conventional imaging lens, its lateral resolution is inversely proportional to its numerical aperture (NA). The lateral resolution, therefore, can be improved by increasing the NA of the objective lens without affecting the axial resolution. The use of a high-NA objective in OCT imaging is commonly called optical coherence microscopy (OCM) [9]. However, the increase in lateral resolution in OCM by opening the NA of the imaging optics leads to a decrease in the depth of focus (DOF), which varies inversely and quadratically with the NA [10]. This trade-off limits the usable in-focus portion of a depth cross-sectional image in FD-OCM, i.e., the high lateral resolution only maintains a depth range of a few hundred micrometers around the focal plane of the imaging lens.
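The NA scaling at the heart of this trade-off can be sketched numerically. The prefactors below (0.61 for a Rayleigh-type lateral resolution, 1.0 for the DOF) are illustrative conventions only, not the paper's definitions; the system values quoted later depend on the exact resolution and defocus criteria chosen.

```python
def lateral_resolution(wavelength_um, na):
    """Approximate lateral resolution (Rayleigh-type criterion).
    The 0.61 prefactor is one common convention; exact constants vary."""
    return 0.61 * wavelength_um / na

def depth_of_focus(wavelength_um, na):
    """Approximate DOF, showing the inverse-quadratic dependence on NA.
    The unit prefactor is illustrative; it depends on the defocus tolerance."""
    return wavelength_um / na**2

wl = 0.84  # central wavelength in micrometers (840 nm source)
for na in (0.1, 0.2, 0.4):
    print(f"NA={na}: dx ~ {lateral_resolution(wl, na):.2f} um, "
          f"DOF ~ {depth_of_focus(wl, na):.1f} um")
```

Doubling the NA halves the lateral spot size but cuts the DOF by a factor of four, which is the trade-off GD-OCM is designed to circumvent.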

To obtain a high resolution across a larger depth range in FD-OCM, researchers have proposed both hardware-based and software-based solutions. For the hardware-based solution, the sample needs to be translated along the axial direction [11]. Alternatively, several techniques utilizing tunable focus [12–14] and multi-focus [15] imaging optics in FD-OCM have been investigated and reported. Gabor domain OCM (GD-OCM) is one such solution, which combines FD-OCM with a dynamic focus acquisition scheme to maintain a high lateral resolution over a large imaging depth range (i.e., up to 2 mm from the skin surface) [16]. The capability of GD-OCM for in vivo cellular imaging of biological tissues has been demonstrated [17–19]. In addition, Gabor fusion to extend the imaging DOF in swept-source-based OCM using the master/slave technique has been recently demonstrated [20]. For examples of software-based solutions, interferometric synthetic aperture microscopy and computational adaptive optics apply a numerical computation algorithm to the post-processing to remove the defocus effect from the acquired images. However, the numerical techniques are normally sensitive to motion noise and require manual adjustment of the coefficients for each sample to reduce aberration. This is inconvenient for analyzing a wide variety of samples in real time [21–23]. In addition, when imaging with extremely high NA imaging optics, the signal-to-noise ratio in the out-of-focus portion of the image remains low. To remedy these issues, one may combine the dynamic focus of the hardware with the numerical correction.

Despite their potential use for high-resolution imaging over a large 3D volume, these techniques require the acquisition of multiple cross-sectional images at different focal planes of the imaging optics and post-processing power to extract the in-focus portion of each acquired image and to combine them to obtain an extended DOF cross-sectional image. For GD-OCM, its original implementation of Gabor-based acquisition and fusion involves a large amount of acquired and processed data (i.e., 5–10 times more than that involved in conventional FD-OCT processing). For each single frame of the GD-OCM image, it acquires three or more cross-sectional images, corresponding to different focus positions, and fuses them together to obtain a single image that contains only in-focus information. These additional processes cause difficulty in pushing the GD-OCM technology for real-time imaging and display, which are essential for clinical use.

One solution that has been widely investigated is the use of a graphics processing unit (GPU) to boost the processing speed of the fast Fourier transformation [13,24,25]. Nevertheless, the remaining challenge is the limited speed of data transfer between the central processing unit (CPU) and the GPU memories [26,27]. Most recently, we proposed an alternative approach to speed up the processing of GD-OCM datasets by introducing Gabor fusion into the spectral domain, the so-called spectral domain GD-fusion [28]. Spectral domain GD-fusion has been proven to improve the processing speed of GD-OCM up to twice that of the conventional method and, hence, is promising for fast imaging and diagnostics.

In this work, we have further improved the robustness of the new method of spectral domain GD-fusion by implementing it in a field programmable gate array (FPGA) acquisition device as part of a custom-built spectrometer. The main idea is to push all signal processing of GD-OCM into the FPGA acquisition device. For demonstration purposes, GD-OCM imaging of seven focus zones, i.e., seven sets of raw spectra acquired while the focal plane of the imaging lens was shifted at seven different depth locations, was chosen. Each focus zone covered about 100 µm of DOF, which provided approximately 0.7 mm of total extended in-focus depth range in the final fused image.

2. EXPERIMENTAL SETUP OF THE FPGA-BASED SF-GD-OCM

A schematic diagram of the setup of the FPGA-based spectral fusing GD-OCM (SF-GD-OCM) system is shown in Fig. 1(a). The light source was a broadband super luminescent diode (EXS210090, Exalos AG, Switzerland) with 840 nm of central wavelength and about 70 nm of spectral width. The output from the source was coupled to a fiber and delivered to an input collimator (CL1) of a free-space Michelson interferometer. A collimated light beam was split by a nonpolarizing beam splitter cube (CCM1-BS014, Thorlabs, Inc., USA) into two beam paths, i.e., a reference beam path and a sample beam path. The light beam in the reference beam path was reflected back to the beam splitter by using a retroreflector (PS975M-B, Thorlabs, Inc.). A sample light beam was passed through a liquid lens (A-39N0 Corning Varioptic, Edmund Optics, Singapore), double reflected on a two-axis galvanometer mirror (JD2203, Sino-Galvo Technology, China), and focused onto a sample under test by an objective lens (RMS10X-PF, Thorlabs, Inc.). The objective lens was a microscope objective with an effective NA of 0.2 as determined by the collimated beam size, which provided approximately 2 µm of resolution and 100 µm of DOF. A liquid lens was placed on a collimated beam path in front of the two-dimensional galvanometer mirror to allow for the tuning of the focal plane of the objective. Light reflection and scattering from the sample were collected back by the objective lens and delivered back to the beam splitter. The light beam from the reference and sample paths was coupled to another single-mode fiber by a second collimator (CL2) at the exit of the interferometer and delivered to a custom-built spectrometer, as shown in Fig. 1(b).


Fig. 1. Schematic diagram of (a) the setup and (b) a photograph of the FPGA-based SF-GD-OCM system; MO, microscope objective; GV, galvanometer mirror; LL, liquid lens; BS, beam splitter; CL, collimator; RTF, retro reflector; OAP, off-axis parabolic mirror.


The setup of the spectrometer is similar to that detailed in [29]. As shown in Fig. 1(b), a 90° off-axis parabolic mirror was used to collimate the diverging beam from the fiber tip at the input of the spectrometer. The collimated beam was incident on a reflective diffraction grating to disperse the light wavelength. A 45° off-axis parabolic mirror was used to focus the dispersed light onto a line-array sensor. A complementary metal oxide semiconductor (CMOS) line sensor (raL6144-80km, Basler AG, Germany) was used in the implemented spectrometer to record the spectral interference signals from an interferometer. The FPGA device (PCIe-1473, National Instruments, USA) was used to grab spectrum data from the CMOS camera and perform several signal processes before transferring the data to the host computer for display.


Fig. 2. Example of the flow processes of the seven focus zones used in the FPGA-based SF-GD-OCM. BPF, bandpass filter.


3. FPGA IMPLEMENTATION OF THE SPECTRAL DOMAIN GD-FUSION

As illustrated in Fig. 2, four signal processing algorithms were performed on the acquired signal in the FPGA device, i.e., the interpolation from linear-in-wavelength space to linear-in-wavenumber space as specified by the blue dashed box, the so-called “linear $ k $ interpolation”; the digital bandpass filtering as specified by the magenta dashed box to keep the interference only within the DOF of each zone; the fusing process as specified by the green dashed box; and the fast Fourier transform (FFT) of the fused spectra as shown in the red dashed box. To clearly demonstrate the effect of the DOF in FD-OCM, we used a stack of 10 layers of transparent masking tapes as a sample. An example of a fused cross-sectional image of the sample obtained by the system is shown in the red dashed box in Fig. 2.

In spectrometer-based FD-OCT, the spectral interference signal as captured by the spectrometer is commonly linear-in-wavelength ($ \lambda $) space but slightly nonlinear-in-wavenumber ($ k $) space. This nonlinearity causes broadening in the axial PSF after the Fourier transformation. Therefore, the acquired spectrum needs to be interpolated to make the sampling linear-in-wavenumber space before performing the Fourier transform to obtain an optimum depth profile. In this work, we have implemented, in the FPGA device, a linear interpolation using the following relation:

$$S\!\left({k_i^{\prime}} \right) = \left[{\frac{{\left({k_i^{\prime} - {k_n}} \right)}}{{\left({{k_{n + 1}} - {k_n}} \right)}} \times \left({S\!\left({{k_{n + 1}}} \right) - S\!\left({k_n} \right)} \right)} \right] + S\!\left({k_n} \right)\!,$$
where $n \le i \le n + 1$, $n$ is the pixel number on the acquired spectrum, ${k_n} = 2\pi /{\lambda _n}$ is the wavenumber at pixel number $n$, $S({k_n})$ is the amplitude of the spectrum at pixel number $n$, and $S({k_i^{\prime}})$ is the interpolated amplitude of the spectrum at a new position $i$ of a precomputed linear wavenumber space $k_i^{\prime}$. From the relation in Eq. (1), the value of $({k_i^{\prime} - {k_n}})/({{k_{n + 1}} - {k_n}})$ can be precalculated. Therefore, we utilized a lookup table (LUT) and a memory block on the FPGA for the linear $ k $ interpolation.
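This LUT scheme can be sketched on the host as follows. The function and variable names are ours, not the FPGA implementation; the key point, as in Eq. (1), is that the neighbor index $n$ and the fractional weight $({k_i^{\prime} - k_n})/({k_{n+1} - k_n})$ are computed once per calibration, so each incoming spectrum costs only one multiply and one add per output sample.

```python
import numpy as np

def precompute_lut(wavelengths, n_out=None):
    """Precompute the linear-k resampling LUT once per calibration.

    Returns (order, idx, weight): 'order' sorts samples into ascending
    wavenumber, 'idx' is the left-neighbor pixel n for each target k'_i,
    and 'weight' is the precalculated (k'_i - k_n)/(k_{n+1} - k_n) term."""
    k = 2 * np.pi / np.asarray(wavelengths, float)  # nonlinear-in-k samples
    order = np.argsort(k)                           # ascending wavenumber
    k = k[order]
    n_out = n_out or k.size
    k_lin = np.linspace(k[0], k[-1], n_out)         # uniform linear-in-k grid
    idx = np.clip(np.searchsorted(k, k_lin) - 1, 0, k.size - 2)
    weight = (k_lin - k[idx]) / (k[idx + 1] - k[idx])
    return order, idx, weight

def linear_k_interp(spectrum, order, idx, weight):
    """Per-spectrum work, matching Eq. (1):
    S(k'_i) = weight * (S(k_{n+1}) - S(k_n)) + S(k_n)."""
    s = np.asarray(spectrum, float)[order]
    return weight * (s[idx + 1] - s[idx]) + s[idx]
```

Because the interpolation is linear, a spectrum that happens to be a linear function of wavenumber is resampled exactly; for real interference spectra the residual error is set by the sampling density.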

In GD-OCM acquisition, each set of GD-OCM data consists of multiple cross-sectional images acquired over the same cross-sectional field of view (FOV) but at different focus positions of the imaging optics, as shown in Fig. 3(a). The translation of the DOF of the objective lens along the axial direction in each cross-sectional image can be considered as a sliding transmission window in the Gabor transformation, as shown in Fig. 3(a). Nevertheless, the effect of the DOF alone may not be sufficient to suppress the out-of-focus portion of the cross-sectional image, as shown in Fig. 3(a), leading to fusing artifacts or ghost images. To improve the quality of the fused image, we further applied a sliding bandpass filter to the acquired spectra. For spectral domain GD-fusion in FPGA devices, these sliding windows can be implemented as multiple digital bandpass filters with different centers and bandwidths, corresponding to the center locations and widths of the DOFs, respectively. Therefore, after each interference spectrum became linear-in-wavenumber, an associated digital bandpass filter was applied to each spectrum, as shown in the magenta dashed box in Fig. 2. The remaining interference frequencies were only those corresponding to the scattering signals from within the DOF of the objective lens, as demonstrated by the filtered images in Fig. 3(b).
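The effect of one such sliding window can be illustrated with a brick-wall FFT-domain bandpass. Note this is a deliberately simplified stand-in for illustration only: the FPGA implementation in this work uses Chebyshev IIR filters (Section 4), and the function and parameter names below are ours.

```python
import numpy as np

def zone_bandpass(spectra, f_lo, f_hi, fs=1.0):
    """Idealized bandpass for one focus zone: keep only the interference
    frequencies (i.e., depths) inside [f_lo, f_hi] and zero everything
    else, including the DC spectral shape. A brick-wall illustration of
    the sliding Gabor window, not the Chebyshev IIR filter of the paper."""
    spectra = np.atleast_2d(np.asarray(spectra, float))
    freqs = np.fft.rfftfreq(spectra.shape[-1], d=1.0 / fs)
    F = np.fft.rfft(spectra, axis=-1)
    F[..., (freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(F, n=spectra.shape[-1], axis=-1)
```

A spectrum containing two interference frequencies, only one of which lies inside the zone's passband, comes out with the out-of-zone component removed, which is exactly the suppression of out-of-focus scattering shown in Fig. 3(b).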


Fig. 3. Depth cross-sectional images acquired at different focus positions of the imaging optics (a) before and (b) after applying the bandpass filter.


For all the results presented in this paper, the Chebyshev infinite impulse response (IIR) filter was implemented, which provided a sufficiently smooth and fast transition at the edge of the passband. This smooth transition, i.e., with minimum ripples or sidelobes, was necessary for minimizing the artifacts in the fused image. Figures 4(a) and 4(b) show an overlaid plot of each average depth profile of the seven focus zones used in the GD-OCM imaging without and with the application of the digital bandpass filter, respectively, which demonstrates the performance of the FPGA-based Gabor window in the spatial domain for each focusing zone. The shape of the bandpass filter can be pre-optimized to fit the actual DOF in each focusing zone of GD-OCM. As shown in the magenta dashed box in Fig. 2, the DC spectral shape of the raw spectra was removed in the process, and the filtered signals became pure interference spectra. For the fusion process, all filtered spectra were combined by linear summation to obtain a single set of fused spectra, as shown in the green dashed box in Fig. 2. Fourier transformation of the fused spectra yielded a fused image that maintained high-resolution information across the imaging depth range, as shown in the red dashed box in Fig. 2. Figures 4(c) and 4(d) show the average depth profiles after FFT was performed on the fused spectrum without and with the application of the digital bandpass filters, respectively. Figure 4(f) clearly demonstrates that the application of the digital bandpass filters to the acquired spectra further improves the sharpness of the fused depth cross-sectional image as compared with that obtained without applying the filter, as shown in Fig. 4(e).


Fig. 4. (a) Overlaid plots of the logarithmic-scale average intensity profile along the depth of the seven focus zones used in the GD-OCM imaging, demonstrating the performance without Gabor filtering and (b) the filtering performance of each Gabor window. Logarithmic-scale intensity profile obtained by FFT of the fused spectrum (c) without and (d) after performing Gabor filtering. (e) and (f) are cross-sectional images as constructed from the depth profiles in (c) and (d), respectively.



Fig. 5. Volumetric rendering of the 3D fused data acquired by the SF-GD-OCM: (a) without and (b) with the application of the digital bandpass filter. (a1)–(a3) and (b1)–(b3) are en face images reconstructed at different depth locations as designated on the 3D images in (a) and (b), respectively. (c) Volumetric rendering of the 3D fused data acquired by the spatial fusing method originally presented in [30]. (c1)–(c3) show its en face reconstruction at the depth locations as designated on the 3D image in (c).



Fig. 6. (a)–(d) Depth cross-sectional images of an in vivo nail fold acquired at four different focus positions and (e) a fused cross-sectional image of the nail fold.



Table 1. Summary of the Usage of FPGA Resources in Each Process of the FPGA-Based GD-OCM

4. RESULTS AND DISCUSSION

Figures 5(a) and 5(b) show a comparison of the volumetric rendering of the 3D fused data without and with the application of the digital bandpass filter, respectively. Figures 5(a1)–(a3) and 5(b1)–(b3) are en face images reconstructed at different depths of the 3D volumes in Figs. 5(a) and 5(b), respectively. The ghost images caused by imperfect out-of-focus suppression by the DOF effect alone can be clearly observed, as indicated by the white arrows in Figs. 5(a1)–(a3), when compared with Figs. 5(b1)–(b3). Figure 5(c) is a volumetric rendering of the 3D fused data obtained by using the original spatial fusing technique presented in [30]. Figures 5(c1)–(c3) are en face reconstructions at approximately the same depth locations as those in Figs. 5(b1)–(b3). Similar features can be observed, which implies that the fusing result from the FPGA-based spectral fusing is similar to that from the spatial fusing method. Moreover, to verify the capability of the developed system for imaging biological samples, Figs. 6(a)–6(d) show images of an in vivo fingernail fold acquired at different focus positions. The effect of the narrow DOF is clearly observed in each figure. Figure 6(e) shows the fused image of the nail fold obtained by the developed SF-GD-OCM system.

Table 1 summarizes and compares the amount of FPGA resources used in each step of the FPGA-based SF-GD-OCM, i.e., linear $ k $ interpolation, spectral filtering, spectral fusing, and FFT. FPGA devices commonly consist of three main resources: configurable logic blocks (CLBs), prebuilt logic blocks of digital signal processing (DSP) algorithms, and blocks of random-access memory (block RAM). Each CLB usually consists of flip-flops and LUTs. Flip-flops are used to manage the logic states in CLBs, and LUTs are commonly used to store the truth tables for the predefined combinatorial logic in CLBs. We implemented the linear $ k $ interpolation by mainly utilizing LUT memory. Therefore, the linear $ k $ interpolation process consumes more CLBs but fewer DSP blocks than those of the other processes. DSP slices are prebuilt blocks of complex signal processing algorithms. Hence, these resources are extremely useful for implementing complex algorithms, such as digital bandpass filtering and FFT, in FPGA, as shown in Table 1.

Block RAMs are used to store data, constants, and variables within the FPGA. They are useful for passing values across parallel tasks in FPGA as well as for storing datasets, such as the acquired frame of spectral interference in our case. Therefore, block RAMs were utilized in all processes. In particular, the spectral fusing process was simply the cumulative summation of the filtered spectra and, hence, used a large number of block RAMs but fewer CLBs and no DSP slice, as shown in Table 1. In terms of processing speed performance, the spectral fusing had the highest throughput of 510 Msamples/s. The linear $ k $ interpolation had a throughput of about 160 Msamples/s, which is equivalent to about 50 frames per second (fps) for 3000 samples per spectrum and 1000 spectra per frame. The process with the lowest throughput was the spectral filtering, which was about 80 Msamples/s and equivalent to approximately 27 fps.
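The conversion from per-process throughput to an equivalent frame rate follows directly from the frame size stated above (3000 samples per spectrum, 1000 spectra per frame); a quick check of the quoted numbers:

```python
def fps_from_throughput(msamples_per_s, samples_per_spectrum=3000,
                        spectra_per_frame=1000):
    """Convert a per-process throughput (Msamples/s) into an equivalent
    frame rate for the paper's frame size of 3000 x 1000 samples."""
    samples_per_frame = samples_per_spectrum * spectra_per_frame
    return msamples_per_s * 1e6 / samples_per_frame

print(round(fps_from_throughput(160)))  # linear-k interpolation: ~53 fps
print(round(fps_from_throughput(80)))   # spectral filtering: ~27 fps
```

The 80 Msamples/s spectral filtering stage is thus the rate-limiting process in the FPGA pipeline.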

Furthermore, the performance of the developed FPGA-based SF-GD-OCM was compared with that of the CPU-based SF-GD-OCM. The host computer had an Intel i7 CPU with a speed of 4 GHz and 24 GB of RAM, running a 64-bit Windows 7 operating system. The CPU-based SF-GD-OCM processing followed the procedure proposed in [28]. The GD-OCM dataset consisted of seven cross-sectional images acquired at different focal planes of the objective lens. The comparison of the processing times between the CPU-based SF-GD-OCM and the FPGA-based SF-GD-OCM is shown in Table 2. The results showed that, in this case, the processing time of the FPGA-based SF-GD-OCM was better than that of the CPU-based SF-GD-OCM in most processes except the FFT. In particular, the linear $ k $ interpolation and bandpass filtering in the FPGA were approximately twice as fast as the conventional processing in the host computer.


Table 2. Comparison of the Processing Times Between the FPGA-Based GD-OCM and the CPU-Based GD-OCM in Each Process


Table 3. Comparison of the Processing Times per Spectrum Between the CPU-Based SF-GD-OCM and the FPGA-Based SF-GD-OCM When Implemented in the Host Computer with Different Resources

The total processing time for the CPU-based SF-GD-OCM and FPGA-based SF-GD-OCM was 164.1 µs and 107.0 µs, respectively. The FPGA-based SF-GD-OCM was about 35% faster than the CPU-based SF-GD-OCM. The FPGA processed the data according to the number of clock cycles, and, hence, it showed no variation in the processing time for each task. On the other hand, the computer always had background tasks, and, hence, it showed a variation in the processing time as measured by the standard deviation in Table 2. The background tasks were kept at a minimum in this experiment.
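The quoted 35% improvement follows directly from the two totals:

```python
cpu_us, fpga_us = 164.1, 107.0        # total processing time per frame set
speedup = (cpu_us - fpga_us) / cpu_us # fractional reduction vs. CPU baseline
print(f"{speedup:.1%}")               # 34.8%, i.e., about 35% faster
```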

Nevertheless, it should be noted that the processing speed of the CPU-based SF-GD-OCM may vary depending on the processing power of the host PC components, e.g., CPU, RAM, and GPU. Therefore, the purpose of the speed comparison experiment was only to verify that, under the same host PC, it is possible to perform several spectral processing algorithms in an FPGA-enabled frame grabber and achieve better processing speed performance. The processing speed of the FPGA-based spectral fusing is determined by the number of clock cycles and, therefore, is governed by the internal clock speed of the FPGA device, which is independent of the processing power of the host computer. For verification, a virtual machine was used to simulate a low-resource computer, i.e., the number of CPU cores and the amount of RAM could be varied, as shown in Table 3. We found that the processing speed of the FPGA-based spectral fusing does not depend on the host performance. In contrast, the processing speed of the host-based spectral fusing varied with the number of CPU cores on the host. In addition to the head-to-head speed improvement, the spectral processing in the FPGA greatly reduced the amount of data transferred between the frame grabber and the host PC and reduced the amount of data that needed to be held by the PC. This capability will allow the implementation of SF-GD-OCM with a lower-specification, lower-cost host PC.

5. CONCLUSION

We have developed and experimentally implemented the spectral fusing algorithm of GD-OCM in an FPGA device. The fusion image quality is approximately the same as that obtained by the CPU-based SF-GD-OCM, as previously reported in [28]. The algorithm for the interpolation of the spectral interference signal from linear-in-wavelength to linear-in-wavenumber using a LUT technique was successfully developed. Linear wavenumber interpolation is known to be mandatory for any implementation of FD-OCT and OCM to obtain the optimum depth resolution of the system. In the FPGA-based SF-GD-OCM, linear wavenumber interpolation is mandatory prior to performing bandpass filtering.

Even though, in principle, the shifting of the DOF of the objective lens can serve as a sliding transmission window in SF-GD-OCM, the suppression of the out-of-focus detail by the DOF effect alone may not be sufficient when displaying the final image on a logarithmic scale. This could lead to fusing artifacts, e.g., ghost images and nonuniform intensity at the transition. To address this issue, we applied a bandpass filter to each acquired spectrum to further suppress the out-of-focus portions of the acquired data. For the implementation of the Chebyshev IIR filter in the FPGA, we found that a sixth-order filter is the lowest order that can provide an adequate passband for the 100 µm DOF of the objective lens. However, a 10th-order filter provides better fusion image quality and, hence, was implemented in the presented results. A higher order will also work, at the cost of longer processing times. The center and width of the passband of each zone were manually determined so that the transition edges of two consecutive windows combine into a flattop response to prevent fusing artifacts.

The resource usage of the FPGA was measured and reported, as shown in Table 1. The proposed technique of FPGA-based spectral fusing shows an improved speed over that of the CPU-based spectral fusing. Furthermore, the hardware implementation of the processing algorithm using the FPGA-enabled frame grabber reduced the amount of data transfer to the host PC.

In comparison, the GPU is well suited to post-processing, such as that of spatial fusing GD-OCM. GPU-based processing of SF-GD-OCM is possible but not efficient, since all raw spectra must be transferred to GPU memory, which does not address the bottleneck of data transfer speed between the CPU and GPU. The true potential of SF-GD-OCM is the ability to perform focus extraction and fusing on raw spectra, which reduces the amount of spectrum data that needs to be transferred to the host computer and reduces the number of FFTs as compared with the spatial fusing GD-OCM that can be implemented on a GPU.

In this work, the liquid lens was chosen to shift the focal plane of the objective, as shown in Fig. 1(a). It required about 200 ms to stabilize at each focal shift position. Combined with the data capture time per frame of the sensor, which was about 25 ms, the total time for capturing one frame of data at each focus position was approximately 225 ms. From the results in Table 2, the processing times of both the CPU-based and the FPGA-based spectral fusing were faster than the data capture period of the implemented system. Therefore, to benefit from the speed improvement, a GD-OCM acquisition system with a faster focus shifting technique needs to be implemented.

Furthermore, it should be pointed out that the implementation of the FPGA-based spectral fusing on an off-the-shelf FPGA-enabled frame grabber (the PCIe-1473), as presented in this study, serves only as an early demonstration of the concept. For a further optimized design, e.g., either a faster or a lower-cost solution, a custom-developed FPGA frame grabber with more resources and a faster clock speed should be implemented. In fact, the FPGA frame grabber is just one possible hardware solution for SF-GD-OCM. SF-GD-OCM can also be implemented in swept-source-based GD-OCM by using an FPGA-enabled analog-to-digital device.

In addition, the processing speed of the FPGA-based SF-GD-OCM can be further improved by implementing it on a system that acquires spectra directly in the linear wavenumber domain, e.g., a swept source with linear-in-wavenumber output [31,32] or a linear-in-wavenumber spectrometer design [33–35]. This will remove the linear wavenumber interpolation step and free up some FPGA resources. Therefore, the FPGA-based SF-GD-OCM is promising for high-speed GD-OCM.

Funding

Thailand Science Research and Innovation (MRG5980254).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]  

2. A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography-principles and applications,” Rep. Prog. Phys. 66, 239–260 (2003).

3. M. Choma, M. Sarunic, C. Yang, and J. Izatt, “Sensitivity advantage of swept source and Fourier domain optical coherence tomography,” Opt. Express 11, 2183–2189 (2003).

4. W. Drexler, M. Liu, A. Kumar, T. Kamali, A. Unterhuber, and R. A. Leitgeb, “Optical coherence tomography today: speed, contrast, and multimodality,” J. Biomed. Opt. 19, 071412 (2014).

5. J. F. de Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Opt. Lett. 28, 2067–2069 (2003).

6. L. An, P. Li, T. T. Shen, and R. Wang, “High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A-lines per second,” Biomed. Opt. Express 2, 2770–2783 (2011).

7. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. (Cambridge University, 1999).

8. B. E. Bouma and G. J. Tearney, Handbook of Optical Coherence Tomography (Marcel Dekker, 2002).

9. J. A. Izatt, M. R. Hee, G. M. Owen, E. A. Swanson, and J. G. Fujimoto, “Optical coherence microscopy in scattering media,” Opt. Lett. 19, 590–592 (1994).

10. A. Aguirre, P. Hsiung, T. Ko, I. Hartl, and J. Fujimoto, “High-resolution optical coherence microscopy for high-speed, in vivo cellular imaging,” Opt. Lett. 28, 2064–2066 (2003).

11. R. Huber, M. Wojtkowski, J. G. Fujimoto, J. Jiang, and A. Cable, “Three-dimensional and C-mode OCT imaging with a compact, frequency swept laser source at 1300 nm,” Opt. Express 13, 10523–10538 (2005).

12. S. Murali, K. S. Lee, and J. P. Rolland, “Invariant resolution dynamic focus OCM based on liquid crystal lens,” Opt. Express 15, 15854–15862 (2007).

13. M. Cua, S. Lee, D. Miao, M. J. Ju, P. J. Mackenzie, Y. Jian, and M. V. Sarunic, “Retinal optical coherence tomography at 1 µm with dynamic focus control and axial motion tracking,” J. Biomed. Opt. 21, 026007 (2016).

14. B. Qi, A. P. Himmer, L. M. Gordon, X. V. Yang, L. D. Dickensheets, and I. A. Vitkin, “Dynamic focus control in high-speed optical coherence tomography based on a microelectromechanical mirror,” Opt. Commun. 232, 123–128 (2004).

15. J. Holmes, “Theory and applications of multi-beam OCT,” Proc. SPIE 7139, 713908 (2008).

16. S. Murali, K. P. Thompson, and J. P. Rolland, “Three-dimensional adaptive microscopy using embedded liquid lens,” Opt. Lett. 34, 145–147 (2009).

17. K.-S. Lee, K. P. Thompson, P. Meemon, and J. P. Rolland, “Cellular resolution optical coherence microscopy with high acquisition speed for in-vivo human skin volumetric imaging,” Opt. Lett. 36, 2221–2223 (2011).

18. C. Canavesi, A. Cogliati, A. Mietus, Y. Qi, J. Schallek, J. P. Rolland, and H. B. Hindman, “In vivo imaging of corneal nerves and cellular structures in mice with Gabor-domain optical coherence microscopy,” Biomed. Opt. Express 11, 711–724 (2020).

19. K.-S. Lee, H. Zhao, S. F. Ibrahim, N. Meemon, L. Khoudeir, and J. P. Rolland, “Three-dimensional imaging of normal skin and nonmelanoma skin cancer with cellular resolution using Gabor domain optical coherence microscopy,” J. Biomed. Opt. 17, 126006 (2012).

20. R. Cernat, A. Bradu, N. M. Israelsen, O. Bang, S. Rivet, P. A. Keane, D.-G. Heath, R. Rajendram, and A. Podoleanu, “Gabor fusion master slave optical coherence tomography,” Biomed. Opt. Express 8, 813–827 (2017).

21. Y.-Z. Liu, N. D. Shemonski, S. G. Adie, A. Ahmad, A. J. Bower, P. S. Carney, and S. A. Boppart, “Computed optical interferometric tomography for high-speed volumetric cellular imaging,” Biomed. Opt. Express 5, 2988–3000 (2014).

22. M. Wu, D. M. Small, N. Nishimura, and S. G. Adie, “Computed optical coherence microscopy of mouse brain ex vivo,” J. Biomed. Opt. 24, 116002 (2019).

23. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. USA 109, 7175–7180 (2012).

24. P. Tankam, Z. He, Y.-J. Chu, J. Won, C. Canavesi, T. Lepine, H. B. Hindman, D. J. Topham, P. Gain, and G. Thuret, “Assessing microstructures of the cornea with Gabor-domain optical coherence microscopy: pathway for corneal physiology and diseases,” Opt. Lett. 40, 1113–1116 (2015).

25. A. Cogliati, C. Canavesi, A. Hayes, P. Tankam, V.-F. Duma, A. Santhanam, K. P. Thompson, and J. P. Rolland, “MEMS-based handheld scanning probe with pre-shaped input signals for distortion-free images in Gabor-domain optical coherence microscopy,” Opt. Express 24, 13365–13374 (2016).

26. J. Li, P. Bloch, J. Xu, M. V. Sarunic, and L. Shannon, “Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units,” Appl. Opt. 50, 1832–1838 (2011).

27. H. Jeong, N. H. Cho, U. Jung, C. Lee, J.-Y. Kim, and J. Kim, “Ultra-fast displaying spectral domain optical Doppler tomography system using a graphics processing unit,” Sensors 12, 6920–6929 (2012).

28. P. Meemon, J. Widjaja, and J. P. Rolland, “Spectral fusing Gabor domain optical coherence microscopy,” Opt. Lett. 41, 508–511 (2016).

29. P. Pongchalee, P. Meemon, and J. Widjaja, “Design of spectrometer-based frequency-domain optical coherence tomography at 1300 nm wavelength for skin diagnostics,” in 10th Biomedical Engineering International Conference (BMEiCON) (IEEE, 2017), pp. 1–5.

30. J. P. Rolland, P. Meemon, S. Murali, K. P. Thompson, and K. S. Lee, “Gabor-based fusion technique for optical coherence microscopy,” Opt. Express 18, 3632–3642 (2010).

31. H. Lee, G. H. Kim, M. Villiger, H. Jang, B. E. Bouma, and C.-S. Kim, “Linear-in-wavenumber actively-mode-locked wavelength-swept laser,” Opt. Lett. 45, 5327–5330 (2020).

32. J. Xi, L. Huo, J. Li, and X. Li, “Generic real-time uniform K-space sampling method for high-speed swept-source optical coherence tomography,” Opt. Express 18, 9511–9517 (2010).

33. C. Yoon, A. Bauer, D. Xu, C. Dorrer, and J. P. Rolland, “Absolute linear-in-k spectrometer designs enabled by freeform optics,” Opt. Express 27, 34593–34602 (2019).

34. Z. Hu and A. M. Rollins, “Fourier domain optical coherence tomography with a linear-in-wavenumber spectrometer,” Opt. Lett. 32, 3525–3527 (2007).

35. G. Lan and G. Li, “Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography,” Sci. Rep. 7, 42353 (2017).

Figures (6)

Fig. 1. Schematic diagram of (a) the setup and (b) a photograph of the FPGA-based SF-GD-OCM system; MO, microscope objective; GV, galvanometer mirror; LL, liquid lens; BS, beam splitter; CL, collimator; RTF, retroreflector; OAP, off-axis parabolic mirror.

Fig. 2. Example of the flow processes of the seven focus zones used in the FPGA-based SF-GD-OCM. BPF, bandpass filter.

Fig. 3. Depth cross-sectional images acquired at different focus positions of the imaging optics (a) before and (b) after applying the bandpass filter.

Fig. 4. (a) Overlaid plots of the logarithmic-scale average intensity profile along the depth of the seven focus zones used in the GD-OCM imaging, demonstrating the performance without Gabor filtering, and (b) the filtering performance of each Gabor window. Logarithmic-scale intensity profile obtained by FFT of the fused spectrum (c) without and (d) after performing Gabor filtering. (e) and (f) are cross-sectional images as constructed from the depth profiles in (c) and (d), respectively.

Fig. 5. Volumetric rendering of the 3D fused data acquired by the SF-GD-OCM: (a) without and (b) with the application of the digital bandpass filter. (a1)–(a3) and (b1)–(b3) are en face images reconstructed at different depth locations as designated on the 3D images in (a) and (b), respectively. (c) Volumetric rendering of the 3D fused data acquired by the spatial fusing method originally presented in [30]. (c1)–(c3) show its en face reconstruction at the depth locations as designated on the 3D image in (c).

Fig. 6. (a)–(d) Depth cross-sectional images of an in vivo nail fold acquired at four different focus positions and (e) a fused cross-sectional image of the nail fold.
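The spectral-domain Gabor fusion depicted in Figs. 4 and 5 can be illustrated with a minimal NumPy sketch (not the authors' FPGA implementation): each focus zone's spectral interferogram is bandpass-filtered so that only the fringe frequencies mapping to that zone's in-focus depth survive, the filtered spectra are summed, and a single FFT of the fused spectrum yields the fused depth profile. The sample count, number of zones, pass-band centers, and window width below are assumptions for illustration only.

```python
import numpy as np

def gabor_bandpass(spectrum, center_px, width_px):
    """Bandpass-filter one spectral interferogram: FFT to the depth
    domain, apply a Gaussian (Gabor) window around the in-focus depth
    bin for this zone, and transform back to the spectral domain."""
    depth = np.fft.fft(spectrum)
    idx = np.arange(spectrum.size)
    window = np.exp(-0.5 * ((idx - center_px) / width_px) ** 2)
    return np.fft.ifft(depth * window).real

n_samples, n_zones = 1024, 7           # assumed sizes; 7 zones as in the paper
rng = np.random.default_rng(0)
spectra = rng.standard_normal((n_zones, n_samples))  # stand-in spectra

# Space the assumed pass bands across the usable depth range.
centers = np.linspace(60, 460, n_zones)
fused_spectrum = sum(
    gabor_bandpass(s, c, width_px=40.0) for s, c in zip(spectra, centers)
)

# One FFT of the fused spectrum reconstructs the fused depth profile,
# avoiding a separate FFT-and-stitch step per focus zone.
fused_profile = np.abs(np.fft.fft(fused_spectrum))[: n_samples // 2]
```

Because the fusion happens before the Fourier transform, only one fused spectrum per A-line needs to be transferred and transformed, which is the data-reduction benefit the paper attributes to performing the filtering on the acquisition device.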

Tables (3)

Table 1. Summary of the Usage of FPGA Resources in Each Process of the FPGA-Based GD-OCM

Table 2. Comparison of the Processing Times Between the FPGA-Based GD-OCM and the CPU-Based GD-OCM in Each Process

Table 3. Comparison of the Processing Times per Spectrum Between the CPU-Based SF-GD-OCM and the FPGA-Based SF-GD-OCM When Implemented in the Host Computer with Different Resources

Equations (1)

$$S(k_i) = \frac{k_i - k_n}{k_{n+1} - k_n}\left[S(k_{n+1}) - S(k_n)\right] + S(k_n)$$
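This equation is the standard two-point linear interpolation used for linear-wavenumber calibration: the spectrometer samples uniformly in wavelength, so the spectrum at each target wavenumber $k_i$ is interpolated from its two measured neighbors $k_n$ and $k_{n+1}$. A minimal sketch of that resampling step, with an assumed 1300 nm band and synthetic fringe data (not the paper's parameters):

```python
import numpy as np

n = 1024
wavelength = np.linspace(1230e-9, 1370e-9, n)  # assumed source band (m)
k_measured = 2 * np.pi / wavelength            # nonuniform, descending in k
spectrum = np.cos(5e-4 * k_measured)           # synthetic spectral fringes

# Uniform wavenumber grid spanning the same measured range.
k_uniform = np.linspace(k_measured.min(), k_measured.max(), n)

# np.interp applies exactly the two-point linear rule of the equation;
# it requires ascending abscissae, hence the [::-1] reversals.
S_uniform = np.interp(k_uniform, k_measured[::-1], spectrum[::-1])
```

After this resampling, the FFT of `S_uniform` maps fringe frequency linearly to depth, which is the prerequisite for both the depth reconstruction and the spectral Gabor filtering.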