
GPU-accelerated image registration algorithm in ophthalmic optical coherence tomography

Open Access

Abstract

Limited by the power of the light source, the signal-to-noise ratio (SNR) of reconstructed images in ophthalmic optical coherence tomography (OCT) is usually lower than that of OCT used in other fields, so the SNR needs to be improved. The traditional method is to average several images acquired at the same lateral position. However, registering and averaging the images costs too much time, which limits real-time imaging. To address this problem, graphics processing unit (GPU) kernel functions are applied in this paper to accelerate the reconstruction of the OCT signals. The SNR of images reconstructed from different numbers of A-scans and B-scans was compared. The results demonstrate that: 1) axial registration does not need to be performed for every A-scan; a window of ∼25 A-scans is suitable when the A-line rate is ∼12.5 kHz; 2) while preserving the quality of the reconstructed images, the GPU achieves a 43× speedup over the CPU.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) was first proposed by Huang et al. from MIT in 1991 [1]. After Fourier-domain OCT was proposed by Fercher et al. in 1995 [2], OCT became a research hotspot in imaging. Because OCT is non-contact and non-destructive and offers real-time operation and ultra-high resolution, it has become a new biomedical imaging technology alongside X-ray imaging [3], computed tomography (CT) [4], magnetic resonance imaging (MRI) [5], ultrasound [6], confocal microscopy [7], and other detection technologies [8,9]. OCT has overcome the early design problems of small imaging depth, slow scanning speed, and low resolution. It has developed rapidly in recent years and is widely used in ophthalmology, brain imaging, developmental biology, tissue engineering, endoscopic medicine, etc. [10–14]. Based on the backscattered light of the object, OCT performs tomographic imaging of structures such as the optic nerve head of the retina and coronary arteries, with a resolution of 1–15 µm [15,16]. Accurate layer segmentation and lesion identification in OCT retinal images are an important basis for the diagnosis of ophthalmic diseases. However, OCT images are strongly affected by speckle noise and have low contrast, which makes accurate detection of lesions difficult. Several noise-reduction methods for ophthalmic OCT images have been studied [17–22], but high-speed and robust noise reduction is still worth further study.

Ophthalmic OCT images suffer from low contrast, high noise, and a decrease of the reflected signal with imaging depth. In particular, the low signal-to-noise ratio makes it difficult to recognize the individual retinal layers. Speckle noise dominates the noise in OCT images; it is produced by the interference of scattered light waves with random phases and is multiplicative. The optical properties of the target, the size of the light source, the temporal coherence, and the aperture of the detector all affect the speckle noise [23], which can be suppressed in either the spatial or the frequency domain. Spatial-domain processing often uses mean filtering [24] to remove speckle noise, but since mean filtering is isotropic diffusion, it blurs boundaries while denoising; therefore anisotropic filtering [25,26], guided filtering [27,28], and wavelet filtering [29,30] have become the main noise-reduction methods in spatial-domain processing. Since the anisotropic diffusion equation can suppress noise near edges, edge-preserving anisotropic diffusion and nonlinear diffusion filtering combined with the Schrödinger equation are widely used for OCT despeckling; extending the linear scale space to the complex domain also plays an important role in extracting the important features of the image. R. Bernardes et al. proposed an adaptive nonlinear diffusion filtering method in 2010 [31], which reduces diffusion in high-intensity regions such as the retina and enhances diffusion in background regions, yielding better noise removal and layer-edge preservation in OCT retinal images. Wavelet filtering transforms sub-images in the wavelet domain according to their resolution difference and then determines nonlinear threshold coefficients in the horizontal, vertical, and diagonal directions to remove high-frequency speckle noise.

In recent years, with the continuous development of GPU technology for accelerating numerical computation, researchers have begun to reconstruct OCT images on the GPU platform [32–34]. These studies highlight the efficiency gains enabled by GPU computing power. Unlike traditional single-core or multi-core CPU architectures, which contain complex logic control units, GPU architectures provide thousands of computational units that can be used to accelerate numerical computation, while their logic control units are comparatively simple. High-performance GPU computation therefore relies on how well the algorithms and their implementations are adapted to the hardware.

Therefore, taking full account of the characteristics of the GPU architecture, this paper introduces GPU acceleration into the reconstruction of ophthalmic OCT images. The work focuses on the design of GPU parallel algorithms, including interpolation in the k domain, the FFT, and correlation-coefficient-based horizontal and axial registration, to improve the SNR. To evaluate the method, the SNR of images averaged over different numbers of A-scans and B-scans was compared and analyzed. The results demonstrate that the method improves the SNR and the speed simultaneously, which means real-time imaging can be realized.

2. Method

2.1 OCT equipment

The OCT signals were collected with a BV1000 from the Suzhou Big Vision company. The central wavelength is ∼840 nm with a bandwidth of ∼40 nm, which gives an axial resolution of ∼7.7 µm. The CCD has 2048 pixels; after axial calibration, two adjacent pixels represent ∼4.5 µm. Single-line mode, one of the three scan patterns, was used to collect the signal, and the A-line rate was set to ∼12.5 kHz and ∼80 kHz. Each B-scan contained 1000 A-scans. The scanning range is 12 mm, so adjacent lateral points are ∼12 µm apart; the lateral resolution is ∼20 µm. Twenty consecutive B-scans were repeated at the same lateral position.
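As a consistency check, the quoted axial resolution follows from the standard coherence-length formula for a Gaussian source (the exact prefactor depends on the spectral shape):

$$\delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^2}{\Delta\lambda} = \frac{2\ln 2}{\pi}\times\frac{(840\ \textrm{nm})^2}{40\ \textrm{nm}} \approx 7.8\ \mu\textrm{m},$$

in agreement with the ∼7.7 µm stated above.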

2.2 Image averaged method

The flowchart of the image-averaging method is shown in Fig. 1. The DC term was first removed from the collected OCT signals, and interpolation in the k domain was performed. A Fourier transform was then applied to each A-scan to obtain the depth information and reconstruct the B-scan images. The first B-scan image was selected as the template, and horizontal and axial registration were performed on the remaining 19 B-scan images. Finally, all 20 images were averaged and the image was enhanced.
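For orientation, the following is a minimal host-side sketch of these reconstruction stages in CUDA C++. Only the cuFFT calls are real API; the buffer name d_spec and the placement of the other stages are illustrative assumptions, not the paper's implementation.

#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

// Sizes taken from the text: 2048 spectral samples per A-scan,
// 1000 A-scans per B-scan.
#define N_SAMPLES 2048
#define N_ASCANS 1000

int main()
{
    cufftComplex* d_spec;
    cudaMalloc(&d_spec, sizeof(cufftComplex) * N_SAMPLES * N_ASCANS);
    cudaMemset(d_spec, 0, sizeof(cufftComplex) * N_SAMPLES * N_ASCANS);

    // DC removal, k-domain resampling, and dispersion compensation kernels
    // would run here, one thread (or block) per A-scan, as in Fig. 4.

    // Batched 1D FFT: all 1000 A-scans of one B-scan in a single call.
    cufftHandle plan;
    cufftPlan1d(&plan, N_SAMPLES, CUFFT_C2C, N_ASCANS);
    cufftExecC2C(plan, d_spec, d_spec, CUFFT_FORWARD);
    cudaDeviceSynchronize();

    // The magnitude of the positive-frequency half of each transformed
    // A-scan forms one column of the B-scan; registration and averaging
    // follow (Sections 2.2.1 and 2.2.2).

    cufftDestroy(plan);
    cudaFree(d_spec);
    printf("batched FFT done\n");
    return 0;
}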

Fig. 1. Flowchart of the algorithm.

2.2.1 Horizontal registration

Considering the special structure of the central fovea of the macula, this region was selected to calculate the horizontal shift of each B-scan. The first B-scan image was used as the template, and a smaller region was chosen, shown as the red box in Fig. 2. The region for each of the remaining 19 B-scans, labeled with the green box in Fig. 2, includes the region of the red box. To match the horizontal positions of the first and the remaining 19 B-scans, the red box was moved one lateral step at a time and the corresponding correlation coefficient was calculated; the position of the largest correlation coefficient gives the horizontal shift. The algorithm is sketched below in MATLAB, where I1(k1:k2,k3:k4) is the selected section including the fovea centralis of the first B-scan image and In(k1-N/2+i:k2-N/2+i, k3-M/2+j:k4-M/2+j) is a larger section of the n-th B-scan that includes the section corresponding to I1(k1:k2,k3:k4). The corresponding GPU kernel is shown in detail in the appendix.

for i = 1:N
    for j = 1:M
        x = corrcoef(I1(k1:k2,k3:k4), ...
            In(k1-N/2+i:k2-N/2+i, k3-M/2+j:k4-M/2+j));
        xx(i,j) = x(1,2);
    end
end
C = max(xx,[],1);            % best coefficient for each offset j
[~,p] = max(C);              % offset with the largest coefficient
horizontal_shift = p - N/2;

Fig. 2. Horizontal shift calculation algorithm.

2.2.2 Axial registration

Horizontal registration was performed first. To correct the axial displacement induced by eye movement, the correlation coefficients of consecutive A-scans were calculated. Since the A-line rate was ∼12.5 kHz, the acquisition time of a small group of A-scans is so short that the eye will probably not move within it, so we chose a window of 25 A-scans to calculate the axial displacement between adjacent windows. The correlation coefficient matrix was computed while the green box was searched within the red box, as shown in Fig. 3, and the axial shift between adjacent windows was then determined from the position of the maximum of the correlation coefficient matrix. To achieve a high-quality B-scan image, only the lateral positions common to all 20 B-scan images were retained, so each B-scan contains fewer than 1000 A-scans after horizontal registration.
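A minimal host-side sketch of this per-window axial search is given below (plain C++; the function names corrcoef and axial_shift and the column-major layout are illustrative assumptions — the actual implementation parallelizes the shift loop on the GPU, Section 2.3).

#include <cmath>
#include <cstdio>

// Pearson correlation coefficient of two equal-length vectors.
static float corrcoef(const float* a, const float* b, int n)
{
    float ma = 0.0f, mb = 0.0f;
    for (int i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    float sab = 0.0f, saa = 0.0f, sbb = 0.0f;
    for (int i = 0; i < n; ++i) {
        sab += (a[i] - ma) * (b[i] - mb);
        saa += (a[i] - ma) * (a[i] - ma);
        sbb += (b[i] - mb) * (b[i] - mb);
    }
    return sab / (std::sqrt(saa) * std::sqrt(sbb));
}

// Axial shift of a window of A-scans relative to the template B-scan.
// Images are column-major (rows = depth); the window starts at column
// col0 and spans win A-scans; correlation is evaluated over the depth
// range [d0, d0+depth) for shifts s in [-range, range]. The caller must
// keep d0-range and d0+depth+range inside the image.
static int axial_shift(const float* tmpl, const float* img, int rows,
                       int col0, int win, int d0, int depth, int range)
{
    int best = 0;
    float bestC = -2.0f;
    for (int s = -range; s <= range; ++s) {
        float c = 0.0f;
        for (int a = 0; a < win; ++a)   // average over the window's A-scans
            c += corrcoef(tmpl + (col0 + a) * rows + d0,
                          img  + (col0 + a) * rows + d0 + s, depth);
        c /= win;
        if (c > bestC) { bestC = c; best = s; }
    }
    return best;
}

int main()
{
    // Tiny synthetic check: img equals tmpl shifted down by 3 pixels,
    // so the estimated shift should be 3.
    const int rows = 64, cols = 8;
    static float tmpl[rows * cols], img[rows * cols];
    for (int c = 0; c < cols; ++c)
        for (int r = 0; r < rows; ++r) {
            tmpl[c * rows + r] = std::sin(0.3f * r);
            img[c * rows + r]  = std::sin(0.3f * (r - 3));
        }
    printf("estimated axial shift: %d\n",
           axial_shift(tmpl, img, rows, 0, cols, 16, 32, 5));
    return 0;
}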

Fig. 3. Axial shift calculation algorithm.

2.3 GPU realization

The procedure for reconstructing one B-scan OCT image on the GPU is shown in Fig. 4. Since one B-scan image includes 1000 A-scans, 1000 threads were used for DC-term removal, resampling, dispersion compensation, and the FFT. For horizontal registration, the horizontal threads (64) search in the horizontal direction, while the axial threads (32) search in the axial direction (Fig. 4). For axial registration, the horizontal threads (200) search in the axial direction, while the axial threads (5) index the windows of A-scans handled by each block.
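As an illustration of the per-A-scan stages, here is a minimal DC-removal kernel with one thread per A-scan (a sketch; the paper's kernels for these stages are not shown, and remove_dc is an invented name).

#include <cuda_runtime.h>

#define N_SAMPLES 2048   // spectral samples per A-scan
#define N_ASCANS 1000    // A-scans per B-scan

// One thread per A-scan: subtract the spectrum's mean (the DC term).
__global__ void remove_dc(float* spec)
{
    int a = blockIdx.x * blockDim.x + threadIdx.x;
    if (a >= N_ASCANS) return;
    float* s = spec + a * N_SAMPLES;
    float mean = 0.0f;
    for (int k = 0; k < N_SAMPLES; ++k) mean += s[k];
    mean /= N_SAMPLES;
    for (int k = 0; k < N_SAMPLES; ++k) s[k] -= mean;
}

int main()
{
    float* d_spec;
    cudaMalloc(&d_spec, sizeof(float) * N_ASCANS * N_SAMPLES);
    cudaMemset(d_spec, 0, sizeof(float) * N_ASCANS * N_SAMPLES);
    remove_dc<<<(N_ASCANS + 255) / 256, 256>>>(d_spec);
    cudaDeviceSynchronize();
    cudaFree(d_spec);
    return 0;
}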

Fig. 4. The flowchart of the reconstruction of OCT images with GPU.

In this work, an NVIDIA GeForce RTX 3060 was used to realize GPU-accelerated image reconstruction, and the algorithm was programmed with CUDA 11.4 and Visual Studio 2017. For horizontal registration, since there were 20 B-scan images and the first was used as the template, 19 blocks were needed to calculate the horizontal shifts by correlation; each block has M × N threads, where N is the horizontal search range and M is the axial search range. The detailed structure is shown in Fig. 5. After all the correlation coefficients were obtained, the position of the maximal correlation coefficient was computed and used to correct the images. For axial registration, since one B-scan contains 1000 A-scans and each window contains 25 A-scans, one block with M × 39 threads was needed to calculate the axial shifts, where M is the axial search range and 39 is the number of adjacent windows.

Fig. 5. The grid, block, and thread distribution for horizontal registration.

3. Results

The 20 reconstructed B-scan images repeated at the same position are shown in Fig. 6 and are almost identical. However, if these 20 images are averaged without any processing, the averaged image is blurred even when the A-line rate is set to ∼80 kHz (Fig. 7(a)); although the averaging improves the SNR, especially the speckle noise, the layer structure of the retina cannot be distinguished. Figure 8 shows the average of the same B-scans after horizontal and axial registration: compared with Fig. 7, the speckle noise is clearly reduced, and compared with Fig. 6, the layer structure is preserved as in the original B-scans. The time spent by the GPU and the CPU on each procedure is listed in Table 2. On the CPU, horizontal and axial registration together took 48489 ms, dominating the total time, whereas on the GPU these two procedures took only 12 ms. On the GPU, the Fourier transform, realized with cuFFT (the FFT library of CUDA), dominated the time. The total times for the CPU and the GPU were 55703 ms and 1278 ms, so the GPU achieves a 43× speedup.
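GPU timings of this kind are typically measured with CUDA events; a minimal sketch follows (the kernel being timed is a placeholder, not one of the paper's kernels).

#include <cstdio>
#include <cuda_runtime.h>

__global__ void placeholder_kernel() { }  // stands in for any pipeline stage

int main()
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    placeholder_kernel<<<1000, 256>>>();   // e.g. one block per A-scan
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);            // wait for the kernel to finish

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("elapsed: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}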

Fig. 6. The reconstructed 20 consecutive B-scan images.

Fig. 7. The averaged image without any processing. (a) The A-line speed was set as ∼80 kHz; (b) the A-line speed was set as ∼12.5 kHz.

Fig. 8. The averaged image with horizontal and axial registration. (a) The A-line speed was set as ∼80 kHz; (b) the A-line speed was set as ∼12.5 kHz.

Table 1. The average correlation coefficient when different numbers of A-scans were used

To test the performance of the proposed method as the number of B-scan images increases, the SNR, defined as $SNR = {\mu _b}/{\sigma _b}$, was used as the metric, where ${\mu _b}$ and ${\sigma _b}$ are the mean and the standard deviation of the intensity in the background region. A set of 20 B-scan images was used to calculate the SNR, as shown in Fig. 9. The SNR of the OCT images can be improved by averaging multiple images of the same position, and it increases with the number of images, although the rate of increase diminishes.
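A minimal sketch of this SNR metric in C++ (the background rectangle coordinates and the column-major layout are assumptions for illustration).

#include <cmath>
#include <cstdio>

// SNR = mu_b / sigma_b over a background rectangle of a B-scan stored
// column-major (rows = depth), following the definition in the text.
static float snr(const float* img, int rows,
                 int r0, int r1, int c0, int c1)
{
    double sum = 0.0, sum2 = 0.0;
    const int n = (r1 - r0) * (c1 - c0);
    for (int c = c0; c < c1; ++c)
        for (int r = r0; r < r1; ++r) {
            double v = img[c * rows + r];
            sum  += v;
            sum2 += v * v;
        }
    double mu    = sum / n;
    double sigma = std::sqrt(sum2 / n - mu * mu);
    return (float)(mu / sigma);
}

int main()
{
    // Synthetic 100 x 100 image with a mildly varying background.
    static float img[100 * 100];
    for (int i = 0; i < 100 * 100; ++i) img[i] = 10.0f + (i % 7) * 0.1f;
    printf("SNR = %.2f\n", snr(img, 100, 0, 50, 0, 50));
    return 0;
}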

Fig. 9. The SNR of the averaged image as a function of the number of images.

To find the best window for axial registration, averaged images were computed with windows of different numbers of A-scans, as shown in Fig. 10. In Fig. 10(a), the layer in the green box is discontinuous, which means the axial registration performed poorly. Figure 10(b) is the averaged image with a window of 4 A-scans; the discontinuity is greatly reduced, but the registration still needs improvement, especially in the section labeled with the red box. Figure 10(c) is the averaged image with a window of 25 A-scans, and the discontinuities seen in Figs. 10(a) and 10(b) disappear, which means the axial registration is satisfactory. The average correlation coefficients for windows of different numbers of A-scans are listed in Table 1. The results indicate that ∼25 A-scans is the best window for axial registration.

Fig. 10. The averaged images with a window of different numbers of A-scans. (a) a window of one A-scan; (b) a window of 4 A-scans; (c) a window of 25 A-scans.

The GPU version of the solver was implemented on an NVIDIA GeForce RTX 3060 (3584 CUDA cores, 12 GB RAM), and all solvers implement the same mathematical formulas and numerical algorithms. For horizontal registration, the horizontal search range N of Section 2.2.1 was set to 32 and the axial search range M to 64, so 64 × 32 threads in total handle one B-scan image; since 19 B-scan images require horizontal registration, the block was set to 64 × 16 and the grid to 2 × 19. For axial registration, the axial search range M of Section 2.2.2 was set to 200; each window contained 20 A-scans, so 50 windows were needed, and the block was set to 200 × 5 with a grid of 10 × 19. To validate the advantage of using the GPU to improve the SNR of OCT images with horizontal and axial registration, the absolute solution times on the CPU and the GPU are compared in Table 2, together with the speedup.
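A hedged sketch of these launch configurations follows; the kernel body is a stub standing in for the appendix kernel, and the buffer name d_coeffs is an assumption.

#include <cuda_runtime.h>

// Stub standing in for the appendix kernel: each thread would compute one
// correlation coefficient for one (axial, lateral) offset of one B-scan.
__global__ void kernel_corrcoef111(float* images, float* coeffs)
{
    int idx = (blockIdx.y * gridDim.x + blockIdx.x) * blockDim.x * blockDim.y
            + threadIdx.y * blockDim.x + threadIdx.x;
    coeffs[idx] = 0.0f;  // correlation computation elided
}

int main()
{
    float* d_coeffs;
    cudaMalloc(&d_coeffs, sizeof(float) * 64 * 16 * 2 * 19);

    // Horizontal registration: 64 axial x 32 lateral offsets per B-scan,
    // split as block(64,16) x grid(2,19) for the 19 non-template B-scans.
    dim3 blockH(64, 16), gridH(2, 19);
    kernel_corrcoef111<<<gridH, blockH>>>(nullptr, d_coeffs);
    cudaDeviceSynchronize();

    // Axial registration is launched analogously with block(200,5) and
    // grid(10,19): 200 axial offsets per window, 5 x 10 = 50 windows per
    // B-scan, 19 B-scans.

    cudaFree(d_coeffs);
    return 0;
}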

Table 2. The comparison between GPU and CPU for each procedure

4. Conclusion

OCT has been widely used in retinal imaging due to its high resolution and real-time performance. However, limited by the power of the light source and the requirement of real-time imaging, the SNR of B-scan images in the ophthalmic field is lower than that of OCT images in other fields. Taking full account of the characteristics of the GPU architecture, an image registration method based on the GPU was designed to improve the SNR of the images. Up to 190000 threads were used, with a maximal speedup ratio of 5518. The total time for averaging 20 images on the GPU was 1278 ms, a 43× speedup over the CPU. This work demonstrates the great potential of GPU-based high-performance solvers for OCT reconstruction. Since the FFT procedure dominates the GPU time, future work should optimize the FFT to reduce it further.

Appendix

The kernel function used to calculate the correlation coefficients, corresponding to the MATLAB code in Section 2.2.1, is shown below:

__global__ void kernel_corrcoef111(float* array, float* ck)
{
    // Each thread computes the Pearson correlation coefficient between the
    // template patch of the first B-scan and one shifted patch of another
    // B-scan. (tx, ty) select the offset within a block; (bx, by) select
    // the offset group and the B-scan, matching the layout of Fig. 5.
    int bx = blockIdx.x;
    int by = blockIdx.y;
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    float sum1 = 0, sum2 = 0, sum3 = 0, sum4 = 0, sum5 = 0;
    float npix = (VER_END - VER_BEGIN) * (HOR_END - HOR_BEGIN) + 0.0f;

    // First pass: means of the shifted patch (sum1) and the template (sum2).
    for (int i = 0; i < HOR_END - HOR_BEGIN; i++) {
        for (int j = 0; j < VER_END - VER_BEGIN; j++) {
            // Element of this thread's shifted patch.
            int n = ROW_IMG * COL_IMG + (VER_BEGIN - DEEP_MOVE_1 + 1) + j
                  + ROW_IMG * i + (HOR_BEGIN - HOR_MOVE + 1) * ROW_IMG
                  + tx + ROW_IMG * ty + blockDim.y * ROW_IMG * bx
                  + ROW_IMG * COL_IMG * by;
            // Corresponding element of the template patch (first B-scan).
            int t = VER_BEGIN + j + HOR_BEGIN * ROW_IMG + ROW_IMG * i;
            sum1 += array[n];
            sum2 += array[t];
        }
    }
    float mean1 = sum1 / npix;
    float mean2 = sum2 / npix;

    // Second pass: covariance (sum3) and the two variances (sum4, sum5).
    for (int i = 0; i < HOR_END - HOR_BEGIN; i++) {
        for (int j = 0; j < VER_END - VER_BEGIN; j++) {
            int n = ROW_IMG * COL_IMG + (VER_BEGIN - DEEP_MOVE_1 + 1) + j
                  + ROW_IMG * i + (HOR_BEGIN - HOR_MOVE + 1) * ROW_IMG
                  + tx + ROW_IMG * ty + blockDim.y * ROW_IMG * bx
                  + ROW_IMG * COL_IMG * by;
            int t = VER_BEGIN + j + HOR_BEGIN * ROW_IMG + ROW_IMG * i;
            float dn = array[n] - mean1;
            float dt = array[t] - mean2;
            sum3 += dn * dt;
            sum4 += dn * dn;
            sum5 += dt * dt;
        }
    }

    // Store this thread's correlation coefficient at its slot in ck.
    ck[blockDim.x * blockDim.y * (bx + gridDim.x * by) + tx + blockDim.x * ty]
        = sum3 / (sqrtf(sum4) * sqrtf(sum5));
}

The kernel function used to find the maximal correlation coefficient, corresponding to the MATLAB code in Section 2.2.1, is shown below:

__global__ void kernel_shift(float* array1, int* xweizhi, int* tichu)
{
    // One thread per B-scan: find the offset with the largest correlation
    // coefficient among the 2*DEEP_MOVE_1 x 2*HOR_MOVE candidates.
    int tx = threadIdx.x;
    const int n = 2 * DEEP_MOVE_1 * 2 * HOR_MOVE;  // candidates per B-scan
    float* p = array1 + n * tx;                    // this B-scan's coefficients

    int ps = 0;
    float tem = p[0];
    for (int i = 1; i < n; i++) {
        if (tem < p[i]) {
            tem = p[i];
            ps = i;
        }
    }

    // Flag B-scans whose best coefficient falls below the threshold.
    if (p[ps] < THRESHOLD) {
        tichu[tx + 1] = 1;
    }

    // Decode the flat index into the horizontal shift
    // (64 = 2*DEEP_MOVE_1 axial offsets per horizontal offset).
    xweizhi[tx] = ps / 64;
}

Funding

National Key Research and Development Program of China (2018YFA0701700); National Natural Science Foundation of China-Liaoning Joint Fund (U20A20170); National Natural Science Foundation of China (62205120); the Project of State Key Laboratory of Radiation Medicine and Protection, Soochow University (GZK1202217).

Acknowledgments

We thank the Suzhou Big Vision company for the use of the BV1000 OCT equipment.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).

2. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117(1-2), 43–48 (1995).

3. X. Ou, X. Qin, B. Huang, J. Zan, Q. Wu, Z. Hong, L. Xie, H. Bian, Z. Yi, X. Chen, Y. Wu, X. Song, J. Li, Q. Chen, H. Yang, and X. Liu, “High-resolution X-ray luminescence extension imaging,” Nature 590(7846), 410–415 (2021).

4. S. Na, J. J. Russin, L. Lin, X. Yuan, P. Hu, K. B. Jann, L. Yan, K. Maslov, J. Shi, D. J. Wang, C. Y. Liu, and L. V. Wang, “Massively parallel functional photoacoustic computed tomography of the human brain,” Nat. Biomed. Eng. 6(5), 584–592 (2021).

5. C. Z. Cooley, P. C. McDaniel, J. P. Stockmann, S. A. Srinivas, S. F. Cauley, M. Śliwiak, C. R. Sappo, C. F. Vaughn, B. Guerin, M. S. Rosen, M. H. Lev, and L. L. Wald, “A portable scanner for magnetic resonance imaging of the brain,” Nat. Biomed. Eng. 5(3), 229–239 (2020).

6. N. S. Awad, V. Paul, N. M. AlSawaftah, G. Ter Haar, T. M. Allen, W. G. Pitt, and G. A. Husseini, “Ultrasound-responsive nanocarriers in cancer treatment: A review,” ACS Pharmacol. Transl. Sci. 4(2), 589–612 (2021).

7. I. N. Petropoulos, G. Ponirakis, M. Ferdousi, S. Azmi, A. Kalteniece, A. Khan, H. Gad, B. Bashir, A. Marshall, A. J. M. Boulton, H. Soran, and R. A. Malik, “Corneal confocal microscopy: a biomarker for diabetic peripheral neuropathy,” Clin. Ther. 43(9), 1457–1475 (2021).

8. A. B. Jani, E. Schreibmann, S. Goyal, R. Halkar, B. Hershatter, P. J. Rossi, J. W. Shelton, P. R. Patel, K. M. Xu, M. Goodman, V. A. Master, S. S. Joshi, O. Kucuk, B. C. Carthon, M. A. Bilen, O. A. Abiodun-Ojo, A. A. Akintayo, V. R. Dhere, and D. M. Schuster, “18F-fluciclovine-PET/CT imaging versus conventional imaging alone to guide postprostatectomy salvage radiotherapy for prostate cancer (EMPIRE-1): a single centre, open-label, phase 2/3 randomised controlled trial,” Lancet 397(10288), 1895–1904 (2021).

9. H. Arabi, A. AkhavanAllaf, A. Sanaat, I. Shiri, and H. Zaidi, “The promise of artificial intelligence and deep learning in PET and SPECT imaging,” Phys. Medica 83, 122–137 (2021).

10. Y. Li, J. Jing, J. Yu, B. Zhang, T. Huo, Q. Yang, and Z. Chen, “Multimodality endoscopic optical coherence tomography and fluorescence imaging technology for visualization of layered architecture and subsurface microvasculature,” Opt. Lett. 43(9), 2074–2077 (2018).

11. C. C. Sahyoun, H. M. Subhash, D. Peru, R. P. Ellwood, and M. C. Pierce, “An experimental review of optical coherence tomography systems for noninvasive assessment of hard dental tissues,” Caries Res. 54(1), 43–54 (2020).

12. J. Men, Y. Huang, J. Solanki, X. Zeng, A. Alex, J. Jerwick, Z. Zhang, R. E. Tanzi, A. Li, and C. Zhou, “Optical coherence tomography for brain imaging and developmental biology,” IEEE J. Select. Topics Quantum Electron. 22(4), 1–13 (2016).

13. A. Levine, K. Wang, and O. Markowitz, “Optical coherence tomography in the diagnosis of skin cancer,” Dermatol. Clin. 35(4), 465–488 (2017).

14. I. Chatziralli, I. Milionis, A. Christodoulou, P. Theodossiadis, and G. Kitsos, “The Role of Vessel Density as Measured by Optical Coherence Tomography Angiography in the Evaluation of Pseudoexfoliative Glaucoma: A Review of the Literature,” Ophthalmol. Ther. 11(2), 533–545 (2022).

15. H. Cheong, S. K. Devalla, T. Chuangsuwanich, T. A. Tun, X. Wang, T. Aung, L. Schmetterer, M. L. Buist, C. Boote, A. H. Thiery, and M. J. Girard, “OCT-GAN: single step shadow and noise removal from optical coherence tomography images of the human optic nerve head,” Biomed. Opt. Express 12(3), 1482–1498 (2021).

16. F. Gao, Z. J. Wang, X. T. Ma, H. Shen, L. X. Yang, and Y. J. Zhou, “Effect of alirocumab on coronary plaque in patients with coronary artery disease assessed by optical coherence tomography,” Lipids Health Dis. 20(1), 106 (2021).

17. A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24(7), 1901–1910 (2007).

18. P. Puvanathasan and K. Bizheva, “Speckle noise reduction algorithm for optical coherence tomography based on interval type II fuzzy set,” Opt. Express 15(24), 15747–15758 (2007).

19. B. Qiu, Z. Huang, X. Liu, X. Meng, Y. You, G. Liu, K. Yang, A. Maier, Q. Ren, and Y. Lu, “Noise reduction in optical coherence tomography images using a deep neural network with perceptually-sensitive loss function,” Biomed. Opt. Express 11(2), 817–830 (2020).

20. Y. Ma, X. Chen, W. Zhu, X. Cheng, D. Xiang, and F. Shi, “Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN,” Biomed. Opt. Express 9(11), 5129–5146 (2018).

21. M. Xu, C. Tang, F. Hao, M. Chen, and Z. Lei, “Texture preservation and speckle reduction in poor optical coherence tomography using the convolutional neural network,” Med. Image Anal. 64, 101727 (2020).

22. M. Liu, X. Chen, and B. Wang, “Axial and horizontal registration guided speckle suppression in single-line HD mode for retinal optical coherence tomography images,” Opt. Commun. 487, 126807 (2021).

23. J. M. Schmitt, S. H. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–105 (1999).

24. H. Yu, J. Gao, and A. Li, “Probability-based non-local means filter for speckle noise suppression in optical coherence tomography images,” Opt. Lett. 41(5), 994–997 (2016).

25. P. Puvanathasan and K. Bizheva, “Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images,” Opt. Express 17(2), 733–746 (2009).

26. A. Usha, N. Shajil, and M. Sasikala, “Automatic anisotropic diffusion filtering and graph-search segmentation of macular spectral-domain optical coherence tomographic (SD-OCT) images,” Curr. Med. Imaging 15(3), 308–318 (2019).

27. Q. Zhou, J. Guo, M. Ding, and X. Zhang, “Guided filtering-based nonlocal means despeckling of optical coherence tomography images,” Opt. Lett. 45(19), 5600–5603 (2020).

28. C. Gyger, R. Cattin, P. W. Hasler, and P. Maloca, “Three-dimensional speckle reduction in optical coherence tomography through structural guided filtering,” Opt. Eng. 53(7), 073105 (2014).

29. D. C. Adler, T. H. Ko, and J. G. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. 29(24), 2878–2880 (2004).

30. S. Chitchian, M. A. Fiddy, and N. M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009).

31. R. Bernardes, C. Maduro, P. Serranho, A. Araújo, S. Barbeiro, and J. Cunha-Vaz, “Improved adaptive complex diffusion despeckling filter,” Opt. Express 18(23), 24048–24059 (2010).

32. C. Chen, W. Shi, and V. X. Yang, “Real-time en-face Gabor optical coherence tomographic angiography on human skin using CUDA GPU,” Biomed. Opt. Express 11(5), 2794–2805 (2020).

33. M. Zhang, L. Ma, and P. Yu, “Three-dimensional full-range dual-band Fourier domain optical coherence tomography accelerated by graphic processing unit,” IEEE J. Select. Topics Quantum Electron. 25(1), 1–6 (2019).

34. Y. Wang, C. M. Oh, M. C. Oliveira, M. S. Islam, A. Ortega, and B. H. Park, “GPU accelerated real-time multi-functional spectral-domain optical coherence tomography system at 1300nm,” Opt. Express 20(14), 14797–14813 (2012).
