Optica Publishing Group

Reconstruction of high-resolution 6×6-mm OCT angiograms using deep learning

Open Access

Abstract

Typical optical coherence tomographic angiography (OCTA) acquisition areas on commercial devices are 3×3- or 6×6-mm. Compared to 3×3-mm angiograms with proper sampling density, 6×6-mm angiograms have significantly lower scan quality, with reduced signal-to-noise ratio and worse shadow artifacts due to undersampling. Here, we propose a deep-learning-based high-resolution angiogram reconstruction network (HARNet) to generate enhanced 6×6-mm superficial vascular complex (SVC) angiograms. The network was trained on data from 3×3-mm and 6×6-mm angiograms from the same eyes. The reconstructed 6×6-mm angiograms have significantly lower noise intensity, stronger contrast and better vascular connectivity than the original images. The algorithm did not generate false flow signal at the noise levels present in the original angiograms. The image enhancement produced by our algorithm may improve biomarker measurements and qualitative clinical assessment of 6×6-mm OCTA.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomographic angiography (OCTA) is a non-invasive imaging technology that can capture retinal and choroidal microvasculature in vivo [1]. Clinicians are rapidly adopting OCTA for the evaluation of various diseases, including diabetic retinopathy (DR) [2,3], age-related macular degeneration (AMD) [4,5], glaucoma [6,7], and retinal vessel occlusion (RVO) [8,9]. High-resolution, large-field-of-view OCTA improves clinical observations, provides useful biomarkers, and enhances our understanding of retinal and choroidal microvascular circulation [10–13]. Many enhancement techniques have been applied to improve OCTA image quality, including a regression-based algorithm for bulk motion subtraction in OCTA [14], multiple en face image averaging [15,16], enhancement of morphological and vascular features using a modified Bayesian residual transform [17], and quality improvement with elliptical directional filtering [18]. These approaches can improve vessel continuity and suppress background noise on angiograms with proper sampling density (i.e., sampling density that meets the Nyquist criterion). However, while commercial systems offer a range of fields of view, only 3×3-mm angiograms are adequately sampled for capillary resolution, because the OCTA system's scanning speed limits the number of A-lines included in each cross-sectional B-scan. Conventional image enhancement techniques like those mentioned above are not effective on under-sampled 6×6-mm angiograms. This is unfortunate, since the larger scans, with their reduced resolution, are most in need of enhancement. The difficulty of enlarging the field without sacrificing resolution is a significant issue for the development of OCTA technology, as its field of view remains significantly smaller than that of modalities such as fluorescein angiography (FA).

Recently, deep learning has achieved dramatic breakthroughs, and researchers have proposed a number of convolutional neural networks (CNNs) for OCTA image processing [19–26]. As an important branch of image processing, super-resolution image reconstruction and enhancement have also benefited from deep-learning-based methods [27–31]. Here, we propose a high-resolution angiogram reconstruction network (HARNet) to reconstruct high-resolution angiograms of the superficial vascular complex (SVC). We evaluated the reconstructed high-resolution OCTA for noise level in the foveal avascular zone (FAZ), contrast, vascular connectivity, and false flow signal. We also demonstrate that HARNet can improve not just under-sampled 6×6-mm angiograms, but 3×3-mm angiograms as well.

2. Methods

2.1 Data acquisition

The 6×6- and 3×3-mm OCTA scans of the macula used in this study were acquired with 304×304 A-lines using a 70-kHz commercial OCTA system (RTVue-XR; Optovue, Inc.). Two repeated B-scans were taken at each of the 304 raster positions, and each B-scan consisted of 304 A-lines. The split-spectrum amplitude-decorrelation angiography (SSADA) algorithm was used to generate the OCTA data [32]. The reflectance values on structural OCT and flow values on OCTA were normalized and converted to unitless values in the range [0, 255]. A guided bidirectional graph search algorithm was employed to segment the retinal layer boundaries [33] [Figs. 1(A1), 1(B1)]. 3×3- and 6×6-mm angiograms of the SVC [Figs. 1(A2), 1(B2)] were generated by maximum projection of the OCTA signal in a slab including the nerve fiber layer (NFL) and ganglion cell layer (GCL).


Fig. 1. Data acquisition for HARNet. (A1) Cross-sectional structural OCT of a 3×3-mm scan volume, with overlaid boundaries showing the top (red) and bottom (green) of the SVC slab. (A2) 3×3-mm angiogram of the superficial vascular complex (SVC) generated by maximum projection of the OCTA signal in the slab delineated in (A1). The yellow line shows the location of the B-scan in (A1). (B1) and (B2) Equivalent images for 6×6-mm angiograms from the same eye capture more peripheral features, but are of lower quality.


2.2 Network architecture

Our network structure is composed of a low-level feature extraction layer, high-level feature extraction layers, and a residual layer (Fig. 2). Input to the network consists of SVC angiograms. The network first extracts shallow features from the input image through one convolutional layer with 128 channels. The high-level features are then extracted through four convolutional blocks, each composed of 20 convolutional layers (C1–C20) with 64 channels. The kernel size in all convolutional layers is 3×3 pixels. Skip connections concatenate the output and input of each convolutional block to form the input to the next convolutional block. The output and input of the last convolutional block are concatenated and fed to the residual layer, a single-channel convolutional layer that produces the residual image. The residual image and input image are summed to produce the final reconstructed output image. For the most part, low-resolution and high-resolution images share the same low-frequency information, so the output consists of the original input plus the residual high-frequency components predicted by HARNet. By learning only these high-frequency components, we were able to improve the convergence rate of HARNet [27]. After each convolutional layer, excluding the residual layer, we added a rectified linear unit (ReLU) [34] to accelerate the convergence of HARNet.
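The layer inventory described above can be summarized programmatically. The sketch below (with hypothetical names; it builds a specification list rather than the authors' actual Keras model) encodes one 128-channel low-level convolution, four blocks of 20 convolutions at 64 channels each, and a single-channel residual convolution, all with 3×3 kernels:

```python
def harnet_layer_spec():
    """Structural sketch of HARNet as described in the text: one low-level
    feature extraction conv (128 channels), four blocks of 20 convs
    (C1-C20, 64 channels each), and a single-channel residual conv.
    All kernels are 3x3. Returns a list of (name, channels, kernel) tuples."""
    spec = [("low_level", 128, 3)]
    for block in range(4):
        for layer in range(20):
            spec.append((f"block{block + 1}_C{layer + 1}", 64, 3))
    spec.append(("residual", 1, 3))
    return spec
```

Counting the entries confirms the network contains 82 convolutional layers in total (1 + 4×20 + 1), with ReLU activations after all but the residual layer.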


Fig. 2. Algorithm flowchart. The network is comprised of three parts: a low-level feature extraction layer, high-level feature extraction layers, and a residual layer. The kernel size in all the convolutional layers is 3×3. The number of channels in the green, blue, and yellow convolutional layer are 128, 64, and 1, respectively. Red layers are concatenation layers that concatenate the output of the convolution block with its input via skip connections. (A) Example input and (B) output 6×6-mm angiogram.


2.3 Training

2.3.1 Training data preprocessing

We trained HARNet by reconstructing 6×6-mm angiograms from their densely-sampled 3×3-mm equivalents. To do so, we first used bi-cubic interpolation to scale the size of the 6×6-mm SVC angiograms [Fig. 3(A)] by a factor of 2, so that they would be on the same scale as a 3×3-mm scan. Then we used intensity-based automatic image registration [35] [Fig. 3(D)] to register the scaled 6×6-mm angiograms [Fig. 3(B)] with the 3×3-mm angiograms [Fig. 3(C)]. The registration algorithm can produce a transform matrix, which contains translation, rotation, and scaling operations. Finally, we cropped the overlapping region from each by taking the maximum inscribed rectangle to construct the input for HARNet and the ground truth [Figs. 3(E) and 3(F)].
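As a rough illustration of this preprocessing, the sketch below up-samples a 6×6-mm angiogram by a factor of 2 and crops a central region. This is a simplification under stated assumptions: pixel replication stands in for the bicubic interpolation used in the paper, and a fixed central crop stands in for the registration-based maximum-inscribed-rectangle crop; the function names and array sizes are illustrative only.

```python
import numpy as np

def upscale_2x(angio):
    # Stand-in for the paper's bicubic 2x up-sampling: simple pixel
    # replication via a Kronecker product (bicubic would be used in practice).
    return np.kron(angio, np.ones((2, 2)))

def central_crop(angio, size):
    # Crop the central `size` x `size` region -- a simplification of the
    # registration-based maximum-inscribed-rectangle crop in the paper.
    h, w = angio.shape
    y0, x0 = (h - size) // 2, (w - size) // 2
    return angio[y0:y0 + size, x0:x0 + size]

# Illustrative use: a 304x304 6x6-mm angiogram becomes 608x608 after
# up-sampling, matching the pixel scale of a 3x3-mm scan, and its
# central 304x304 region corresponds roughly to the 3x3-mm field.
angio_6mm = np.arange(304 * 304, dtype=float).reshape(304, 304)
upsampled = upscale_2x(angio_6mm)
center = central_crop(upsampled, 304)
```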


Fig. 3. Data preprocessing flow chart. (A) The original 6×6-mm superficial vascular complex (SVC) angiogram. (B) Up-sampled 6×6-mm SVC angiogram. (C) Original 3×3-mm SVC angiogram. (D) Registered image combining both angiograms. The yellow box is the largest inscribed rectangle. (E) Cropped central 3×3-mm section from the 6×6-mm angiogram. (F) Cropped original 3×3-mm angiogram.


2.3.2 Loss function

We trained the network on a ground truth composed of the original 3×3-mm angiograms filtered with a bilateral filter. To minimize the difference between the output of the network and the ground truth, the loss function used in the learning stage was a linear combination of the mean square error [MSE; Eq. (1)] and the structural similarity [SSIM; Eq. (2)] index [36,37]. MSE measures the pixel-wise difference, while SSIM is based on three comparison measurements: luminance, contrast, and structure:

$$\textrm{MSE} = \frac{1}{{w \times h}}\mathop \sum \nolimits_{i = 1}^w \mathop \sum \nolimits_{j = 1}^h {(X({i,j} )- Y({i,j} ))^2}$$
$$\textrm{SSIM} = \frac{{2{\mathrm{\mu} _X}{\mathrm{\mu} _Y} + {C_1}}}{{\mathrm{\mu} _X^2 + \mathrm{\mu} _Y^2 + {C_1}}} \cdot \frac{{2{\sigma _{XY}} + {C_2}}}{{\sigma _X^2 + \sigma _Y^2 + {C_2}}}$$
$$\textrm{Loss} = \textrm{MSE} + ({1 - \textrm{SSIM}} )$$
where $w$ and $h$ refer to the width and height of the image; $X$ and $Y$ refer to the output of HARNet and the ground truth, respectively; ${\mathrm{\mu} _X}$ and ${\mathrm{\mu} _Y}$ are their mean pixel values; ${\sigma _X}$ and ${\sigma _Y}$ are their standard deviations; and ${\sigma _{XY}}$ is their covariance. The values of the constants ${C_1} = 0.01$ and ${C_2} = 0.03$ were taken from the literature [37]. The total loss [Eq. (3)] is the sum of the MSE and the SSIM dissimilarity term, $1 - \textrm{SSIM}$.
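Under these definitions, the loss can be sketched in NumPy as follows. This is a global (whole-image) evaluation of Eqs. (1)–(3) with the stated constants, offered as an illustration rather than the authors' Keras implementation (which operates on training batches):

```python
import numpy as np

def mse(x, y):
    # Eq. (1): pixel-wise mean squared error over the whole image.
    return np.mean((x - y) ** 2)

def ssim_global(x, y, c1=0.01, c2=0.03):
    # Eq. (2): global SSIM using image-wide means, standard deviations,
    # and covariance, with the constants quoted in the paper.
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)) * \
           ((2 * sxy + c2) / (sx ** 2 + sy ** 2 + c2))

def harnet_loss(x, y):
    # Eq. (3): MSE plus the SSIM dissimilarity term (1 - SSIM).
    return mse(x, y) + (1.0 - ssim_global(x, y))
```

For identical images the loss is zero (MSE = 0 and SSIM = 1), and it grows as the reconstruction deviates from the ground truth in either pixel values or structure.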

2.3.3 Subjects and training parameters

The data set used in this study consisted of 298 eyes from 196 participants. Each eye was scanned with both a 3×3-mm and a 6×6-mm scan pattern. Ten healthy eyes from 10 participants were intentionally defocused and used in the defocusing experiments. Of the remaining 288 paired scans, 210 (randomly selected) were used for training, and the rest were reserved for testing (N=78). The training data included eyes with DR (N=195) and healthy eyes (N=15). The performance of the network on the testing data was evaluated separately on eyes with diabetic retinopathy (N=53) and healthy controls (N=25). Finally, the false-flow generation experiments also used 10 cases from the healthy-eye test set. We used several data augmentation methods to expand the training dataset: horizontal flipping, vertical flipping, transposition, and 90-degree rotation. For training, considering hardware capability and computation cost, we used 38×38-pixel sub-images. To avoid the exploding gradient problem, we normalized the pixel value range to 0–1 using Eq. (4),

$$S^{\prime}({i,j} )= \frac{{S({i,j} )- \min (S )}}{{\max (S )- \min (S )}}$$
where $S({i,j} )$ is the pixel value (in the range 0–255) at position $({i,j} )$ of the angiogram, $S^{\prime}({i,j} )$ is the normalized pixel value at the same location, and $\textrm{min}({\cdot} )$ and $\max ({\cdot} )$ are the minimum and maximum pixel values of the overall image, respectively. The 1,050 images in the augmented training dataset thus decompose into 176,405 sub-images, extracted from the cropped SVC angiograms with a stride of 19. Since HARNet is a fully convolutional neural network, it can be applied to images of arbitrary size. We therefore input the entire image to the model for testing, as the entire image is the clinically relevant data.
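Equation (4) and the sub-image extraction can be sketched as below. The 304×304 input size in the usage example is illustrative only; the actual cropped angiograms vary in size, which is why the reported sub-image count (176,405) differs from what a fixed size would give:

```python
import numpy as np

def min_max_normalize(angio):
    # Eq. (4): rescale pixel values from [0, 255] to [0, 1] using the
    # image's own minimum and maximum.
    lo, hi = angio.min(), angio.max()
    return (angio - lo) / (hi - lo)

def extract_patches(angio, size=38, stride=19):
    # Tile the angiogram into 38x38 training sub-images with a stride
    # of 19 pixels, as described in the text.
    h, w = angio.shape
    return [angio[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

# Illustrative use on a hypothetical 304x304 angiogram.
img = np.arange(304 * 304, dtype=float).reshape(304, 304)
norm = min_max_normalize(img)
patches = extract_patches(norm)
```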

An Adam optimizer [38] with an initial learning rate of 0.01 was used to train HARNet by minimizing the loss. We used a global learning-rate decay strategy to reduce the learning rate during training: the learning rate was reduced by 90% when the loss showed no decline over 3 epochs, provided the rate remained greater than 1 × 10−6. Training ceased when the loss changed by no more than 1 × 10−5 over 5 epochs. The training batch size was 128.
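The decay and stopping rules just described can be mimicked in plain Python. `run_schedule` is a hypothetical helper, not the authors' code: it replays a per-epoch loss history, applies the 90% plateau decay, and stops when the loss is flat within the stated tolerance:

```python
def run_schedule(losses, lr=0.01, patience=3, min_lr=1e-6,
                 stop_patience=5, stop_delta=1e-5):
    """Replay a per-epoch loss history with the schedule from the text:
    cut the learning rate by 90% when the loss has not declined for
    `patience` epochs (while lr > `min_lr`), and stop once the loss has
    changed by no more than `stop_delta` over `stop_patience` epochs.
    Returns (final_lr, stop_epoch)."""
    best = float("inf")
    stall = 0
    for epoch, loss in enumerate(losses):
        # Stopping rule: loss flat over the last stop_patience epochs.
        window = losses[max(0, epoch - stop_patience + 1):epoch + 1]
        if len(window) == stop_patience and max(window) - min(window) <= stop_delta:
            return lr, epoch
        if loss < best:
            best = loss
            stall = 0
        else:
            stall += 1
            if stall >= patience and lr > min_lr:
                lr *= 0.1  # reduce the learning rate by 90%
                stall = 0
    return lr, len(losses) - 1
```

This mirrors the behavior of plateau-based schedulers (e.g., Keras' ReduceLROnPlateau combined with early stopping), which is presumably how the rule was realized in the authors' Keras setup.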

We implemented HARNet in Python 3.6 with Keras (TensorFlow backend) on a PC with 16 GB of RAM, an Intel i7 CPU, and two NVIDIA GeForce GTX 1080 Ti graphics cards.

3. Results

To validate the performance of our algorithm, we used a test dataset composed of 78 paired original 3×3- and 6×6-mm angiograms and evaluated the reconstructed 3×3-mm and 6×6-mm angiograms using three metrics: noise intensity in the FAZ, global contrast, and vascular connectivity. In addition, we performed experiments on defocused SVC angiograms, angiograms with different simulated noise intensities, and DR angiograms.

3.1 Evaluation metrics

3.1.1 Noise intensity

In healthy eyes, the FAZ is avascular, so to estimate the noise intensity ${I_{\textrm{Noise}}}$ we consider the pixel values in a 0.3-mm diameter circle R centered in the FAZ:

$${I_{\textrm{Noise}}} = \frac{1}{R} \times \mathop \sum \nolimits_{({i,j} )\in R} S{({i,j} )^2}$$
where $S({i,j} )$ is the pixel value at position $({i,j} )$.
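A minimal NumPy rendering of Eq. (5) is given below, reading the $1/R$ factor as an average over the pixels inside the circle. The circle's center and pixel diameter are plain inputs here; in practice the 0.3-mm diameter would be converted to pixels from the scan's sampling density:

```python
import numpy as np

def noise_intensity(angio, center, diameter_px):
    # Eq. (5): mean squared pixel value inside a circle centered in the
    # FAZ. `center` is (row, col); `diameter_px` is the 0.3-mm circle
    # diameter expressed in pixels.
    h, w = angio.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (diameter_px / 2) ** 2
    return np.mean(angio[mask] ** 2)
```

On a perfectly noise-free FAZ (all zeros) the metric is 0; any residual background signal raises it quadratically.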

3.1.2 Image contrast

The global contrast of the SVC angiograms produced by the network was measured by the root-mean-square (RMS) contrast [39],

$${C_{\textrm{RMS}}} = \sqrt {\frac{1}{A} \times \mathop \sum \nolimits_{({i,j} )\in A} {{(S({i,j} )- \mathrm{\mu})}^2}} $$
where $S({i,j} )$ is the pixel value at position $({i,j} )$, $A$ is the total area of the SVC angiogram, and $\mathrm{\mu}$ is its mean value.
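Equation (6) is simply the standard deviation of the pixel values over the angiogram, as the short NumPy sketch below makes explicit:

```python
import numpy as np

def rms_contrast(angio):
    # Eq. (6): root-mean-square deviation of pixel values from the
    # image mean (equivalent to the standard deviation).
    return np.sqrt(np.mean((angio - angio.mean()) ** 2))
```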

3.1.3 Vascular connectivity

We also assessed vascular connectivity. To do so, we first binarized the angiograms [Figs. 4(A2)–4(D2)] using a global adaptive threshold method [40], then skeletonized the binary map to obtain the vessel skeleton map [Figs. 4(A3)–4(D3)]. Connected flow pixels were defined as any contiguous flow region with a length of at least 5 pixels (including diagonal connections), and vascular connectivity was defined as the ratio of the number of connected flow pixels to the total number of pixels on the skeleton map [32].
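Given a skeleton map, the connectivity ratio can be sketched with a flood fill as below. Binarization and skeletonization, which the paper performs with an adaptive threshold and thinning, are assumed to have been done upstream; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def vascular_connectivity(skeleton, min_len=5):
    """Fraction of skeleton pixels lying in 8-connected components
    (diagonals included) of at least `min_len` pixels."""
    skeleton = skeleton.astype(bool)
    visited = np.zeros_like(skeleton)
    h, w = skeleton.shape
    total = skeleton.sum()
    if total == 0:
        return 0.0
    connected = 0
    for sy in range(h):
        for sx in range(w):
            if skeleton[sy, sx] and not visited[sy, sx]:
                # Flood-fill one connected component.
                stack, comp_size = [(sy, sx)], 0
                visited[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    comp_size += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and \
                               skeleton[ny, nx] and not visited[ny, nx]:
                                visited[ny, nx] = True
                                stack.append((ny, nx))
                # Only components long enough to count as connected flow.
                if comp_size >= min_len:
                    connected += comp_size
    return connected / total
```

Short, isolated skeleton fragments (artifactual vessel breaks) therefore lower the metric, which is why improved connectivity indicates better vessel continuity.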


Fig. 4. The performance of HARNet. Row 1: (A1) Original 3×3-mm superficial vascular complex (SVC) angiogram and (B1) HARNet output from (A1). (C1) Original 6×6-mm angiogram, and (D1) HARNet output from (C1). Row 2: adaptive threshold binarization of the corresponding images in row 1. Row 3: skeletonization of the corresponding images in row 2. HARNet outputs show enhanced connectivity relative to the original images.


3.2 Performance on defocused angiograms

To further verify that our algorithm can improve the image quality of low-quality scans, we also evaluated its performance on defocused angiograms. To obtain defocused scans, we first used autofocus to optimize the focal length and acquire optimal scans, and then manually adjusted the focal length to obtain angiograms defocused by 3 diopters, yielding 10 defocused 3×3-mm angiograms and 10 defocused 6×6-mm angiograms. Defocused angiograms have lower signal-to-noise ratios than correctly focused angiograms, and vessels also appear dilated. The results show that angiograms reconstructed from defocused 3×3- and 6×6-mm angiograms had lower noise intensity and better connectivity than scans acquired under optimal focusing conditions (Fig. 5; Table 1). Our algorithm is therefore also applicable to defocused angiograms and improves the quality of such scans. Since defocus leads to a general reduction in scan quality, this result also implies that our algorithm could be applicable to low-quality scans more broadly.


Fig. 5. Qualitative demonstration of image quality improvement by the proposed reconstruction method. (A1) 3 diopter defocused 3×3-mm superficial vascular complex (SVC) angiogram. (B1) Reconstruction of (A1). (C1) 3×3-mm OCTA acquired under optimal conditions. (A2) 3 diopter defocused 6×6-mm SVC angiogram. (B2) Reconstruction of (A2). (C2) 6×6-mm angiogram acquired under optimal conditions. (A3) Central 3×3-mm section from the defocused 6×6-mm SVC angiogram. (B3) Reconstruction of (A3). (C3) Central 3×3-mm section from the 6×6-mm angiogram acquired under optimal focusing conditions. The green box is the central 3×3-mm section in the 6×6-mm SVC angiograms.


Table 1. Noise intensity, contrast and vascular connectivity (mean ± std.) of reconstructed defocused SVC angiograms, and angiograms captured under optimal conditions.

3.3 Assessment of the false flow signal

One concern in OCTA reconstruction is the generation of false flow signal. Because OCTA reconstruction methods are designed to enhance vascular detail, they are susceptible to mistakenly enhancing background that may randomly share some features with true vessels. To evaluate whether HARNet produces such artifacts, we selected 10 good-quality 3×3-mm angiograms from 10 healthy eyes and produced denoised angiograms by applying simple Gabor and median filters to the originals [Fig. 6(A1)]. We then added Gaussian noise to the denoised angiograms using different parameters ($\mathrm{\mu} ,\sigma $) [Figs. 6(B1)–6(E1)]. We varied $\mathrm{\mu} $ and $\sigma $ separately in increments of 0.005, from 0.001 to 0.1 and from 0.001 to 0.05, respectively, to obtain 2000 noisy 3×3-mm SVC angiograms with different noise intensities (0–2100). Next, we input the denoised and noisy angiograms into the network to obtain reconstructed angiograms from each [Figs. 6(A2)–6(E2)]. The false flow signal intensity was defined as

$${I_{\textrm{False flow signal}}} = \frac{1}{R} \times \mathop \sum \nolimits_{({i,j} )\in R} S{({i,j} )^2}$$
where ${I_{\textrm{False flow signal}}}$ is the false flow signal intensity, $S({i,j} )$ is the pixel value at position $({i,j} )$, and R corresponds to the same physiologically flow-free 0.3-mm diameter circle within the FAZ used previously. We found that our algorithm did not generate false flow signal when the noise intensity was under 500, which is far above the noise intensity measured in the original 3×3-mm (146.77 ± 145.87) and 6×6-mm (93.10 ± 159.05) angiograms (Fig. 7).
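The noise injection step can be sketched as below for angiograms normalized to [0, 1]; the false flow intensity itself is then computed with the same pixel-averaged form as Eq. (5) over the FAZ circle. The helper name and seed are illustrative, not from the paper:

```python
import numpy as np

def add_gaussian_noise(angio, mu, sigma, seed=0):
    # Add Gaussian noise with mean `mu` and standard deviation `sigma`
    # to a denoised angiogram normalized to [0, 1], clipping the result
    # back into range, as in the false-flow experiment.
    rng = np.random.default_rng(seed)
    noisy = angio + rng.normal(mu, sigma, angio.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Sweeping `mu` and `sigma` over grids like those in the text generates the family of noisy angiograms whose reconstructions are then checked for spurious flow inside the FAZ.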


Fig. 6. 3×3-mm superficial vascular complex (SVC) angiograms with different noise intensities. (A1) In 3×3-mm angiograms denoised with Gabor and median filtering, the noise intensity is 0. (B1-E1) 3×3-mm SVC angiograms with different noise intensities. (A2-E2) 3×3-mm angiograms reconstructed from the corresponding angiograms in row 1. When the noise intensity is less than 500, there is no false flow signal.



Fig. 7. (A) The relationship between noise intensity and false flow signal intensity. Each point represents one of 2000 noise-enhanced scans. The red line indicates the measured cutoff value (${I_{\textrm{Noise}}}$ = 500) for producing false flow signal. (B) Box plots of the noise intensity of 3×3- and 6×6-mm superficial vascular complex (SVC), non-defocused angiograms in the data set (N=298). The noise intensities measured in the original 3×3-mm and 6×6-mm angiograms are far below the cutoff value for false flow generation, except for outlier images corrupted by apex reflection or by true flow signal within the 0.3-mm diameter circle centered in the FAZ.


3.4 Performance on DR angiograms

Many diseases present outside of the central macula. Enhancing the resolution and image quality of larger field-of-view angiograms may improve measurements of disease biomarkers such as non-perfusion area and vessel density, further helping ophthalmologists diagnose such diseases. However, since features in diseased eyes may differ from those in healthy eyes, image reconstruction algorithms could suffer reduced performance on such images. To investigate, we examined reconstructed 6×6-mm angiograms (Fig. 8) of eyes with DR, a leading cause of blindness [41]. Although 6×6-mm angiograms of eyes with DR have higher noise intensity than those of healthy eyes, the reconstructed DR angiograms show improvements in noise intensity, contrast, and connectivity comparable to those of healthy controls (Table 2). Because abnormal vessel morphology plays a very important role in diagnosis, it is essential to retain abnormal vascular morphology when processing images. The DR angiograms reconstructed by our algorithm preserve pathological vascular abnormalities such as intraretinal microvascular abnormalities (IRMA), early neovascularization, and microaneurysms [Fig. 8(A2)].


Fig. 8. HARNet performance on eyes with DR. Top row: original 6×6-mm superficial vascular complex (SVC) angiograms from an eye with active proliferative diabetic retinopathy (PDR) (A1), an eye with mild non-proliferative diabetic retinopathy (NPDR) (B1), a diabetic eye without retinopathy (C1), and a healthy control (D1). Bottom row: (A2-D2) HARNet output for (A1-D1). A microaneurysm (green arrow) and intraretinal microvascular abnormalities (IRMA) (blue arrows) appear the same in the reconstructed and original angiograms, demonstrating that HARNet preserves vascular pathologies.


Table 2. Noise intensity, contrast, and vascular connectivity (mean ± std.) of reconstructed 6×6-mm SVC angiograms in eyes with diabetic retinopathy and healthy controls.

3.5 Performance of different methods

We also compared our algorithm with commonly used image enhancement methods, including Gabor and Frangi filters. Compared to the original angiograms, our method significantly reduces noise and improves vascular connectivity without producing false flow signal on all scan sizes. There is no significant improvement in image contrast on 3×3-mm scans [Fig. 9(D1)], while the contrast shows significant improvement on 6×6-mm scans [Fig. 9(D2); Table 3]. The Gabor filter reduces noise intensity and improves vascular connectivity, but greatly reduces contrast [Figs. 9(B1), 9(B2); Table 3]. The Frangi filter significantly enhances contrast and improves vascular connectivity, but significantly increases noise intensity and may produce false flow signal [Figs. 9(C1), 9(C2); Table 3].


Fig. 9. Performance of different methods on image enhancement. Top row: original 3×3-mm superficial vascular complex (SVC) angiograms from a healthy eye; (A1) original data, (B1) after applying a Gabor filter, (C1) after applying a Frangi filter, and (D1) reconstructed using the proposed method. Bottom row: equivalent from a 6×6-mm scan.


Table 3. Comparison of noise intensity, contrast, and vascular connectivity (mean ± std.) between original angiograms and angiograms processed by different methods. N is the number of eyes.

4. Discussion

Image analysis of low-quality or under-sampled OCTA is challenging in several respects. Noise affects the visibility of small blood vessels, especially capillaries, leading to artifactual vessel fragmentation. Motion and shadow artifacts are common, and are amplified by under-sampling. OCTA quality can therefore have a significant impact on the judgment of ophthalmologists and researchers. To help mitigate this concern, several noise reduction and image enhancement procedures have been proposed. To reduce noise and enhance vascular connectivity, datasets are sometimes obtained by acquiring multiple images of the same location over time, making it possible to apply various averaging techniques [15,16,42,43]. However, acquiring ever larger amounts of data lengthens the total acquisition time, increasing the probability of image artifacts caused by eye motion and introducing additional difficulty for clinical imaging. Filtering is also often applied to OCTA images to improve image quality [18,44], but typical problems with filtering are reduced image resolution and loss of capillary signal. Other noise reduction strategies suffer similar issues. For instance, a regression-based algorithm [14] that can remove decorrelation noise due to bulk motion in OCTA has been reported; although it improves image contrast, it worsens vessel continuity and also loses capillaries with weak signal.

In this study, our proposed method can not only reduce noise and enhance connectivity, but also improve the capability to resolve capillaries in large-field-of-view scans. The two most common scan patterns used in research and the clinic are 3×3-mm and 6×6-mm [45,46]. While the smaller 3×3-mm OCTA can achieve higher image quality due to its denser scanning pattern, its small field of view is a major limitation. Our algorithm's ability to enhance 6×6-mm OCTA is a step toward compensating for this limitation. We achieved this enhancement by training a network to reconstruct images by learning features from the high-definition 3×3-mm images. This means that we did not need to manually segment vasculature to generate the ground truth, or generate high-definition scans using a new scanning protocol on a prototype [19]. Our approach is therefore a practical method for enhancing 6×6-mm images using an acquired 3×3-mm image, and could in principle be extended to even larger fields of view with sparser sampling. Such enhancement via intelligent software could prove to be a superior route to high-quality, large-field scans, since hardware solutions (for example, increasing sampling density or incorporating adaptive optics) quickly lead to prohibitive cost and imaging times. Improving image quality and resolution may in turn enable better measurements of disease biomarkers such as non-perfusion area and vessel density; by extending improved image quality to a larger field of view, we also increase the chance of detecting pathology, since disease can manifest outside of the central macular region usually imaged with OCTA [13,47].

We investigated the quality of our algorithm's output by evaluating reconstructed angiograms with three metrics: noise intensity in the FAZ, global contrast, and vessel connectivity. The 6×6-mm angiograms obtained by our algorithm have almost no noise in the FAZ (0.16 ± 0.26), and vascular connectivity was likewise increased in the HARNet-processed images. In addition to these quantitative improvements, we consider the HARNet output images to appear qualitatively cleaner than the unprocessed input. We also performed experiments on defocused SVC angiograms, and the results show that the algorithm can improve such scans, which is an indication of robustness and broad utility. To demonstrate that the restored flow signal in the reconstructed angiograms is real, we tested whether false flow signal is generated using angiograms with different simulated noise intensities. The results show that our algorithm did not generate false flow signal when the noise intensity was under 500, a value that far exceeds the noise intensity in the clinically realistic OCTA angiograms examined in this study. Because the noise intensity in the FAZ and the inter-capillary space is similar, we also expect that artifactual vessels should not be generated outside of the FAZ.

HARNet improved the quality of both 3×3- and 6×6-mm OCTA angiograms according to the metrics examined in this study. Specifically, HARNet enhanced the quality of under-sampled 6×6-mm OCTA, on which other enhancement algorithms perform poorly [48,49]. Interestingly, while HARNet was trained to reconstruct high-resolution 6×6-mm angiograms from sparsely sampled scans, the network also improved 3×3-mm images. In particular, the angiograms reconstructed from defocused scans compared favorably to equivalent images acquired at optimal focus for both scanning patterns. This implies that HARNet is effective as a general OCTA image enhancement tool, outside of the specific context of 6×6-mm angiogram reconstruction. Additionally, the image improvement provided by HARNet is more than just cosmetic, as demonstrated by the improvement in vessel connectivity. Although beyond the scope of this study, we speculate that other OCTA metrics (e.g., non-perfusion area or vessel density) may also prove to be more accurately measured on HARNet-reconstructed images.

Deep-learning-based algorithms are “black boxes” compared to conventional image processing algorithms, and the interpretability of deep learning is an important field of research in machine learning. Zeiler et al. [50] sought to understand CNNs using a kernel visualization technique, and researchers have since proposed many methods to explain how CNNs work [51–53]. For a specific CNN, kernel visualization techniques or heat maps can reveal what features the network uses to make decisions [54]. In future work, we could use the same visualization techniques to understand why HARNet is so effective at reconstructing angiograms, and employ an ablation study to gain a deeper understanding of its structure. A major advantage of deep-learning-based methods is their strong generalizability, meaning that CNNs can make reliable predictions on unseen data. Furthermore, transfer learning can carry knowledge learned from one dataset to a new dataset using a small number of samples. OCTA data from different retinal pathologies share a similar feature space; thus, with this generalizability and transfer learning, HARNet should be able to handle OCTA data from different retinal pathologies (e.g., age-related macular degeneration and glaucoma).

There are some limitations to this study. Since we trained HARNet using optimally sampled, centrally located 3×3-mm angiograms, features specific to the periphery, e.g., the grating-like vascular structure of the radial peripapillary capillaries [Fig. 10(C1)], could not be learned during training. HARNet may therefore introduce features that are physiologically specific to the central macula into more peripheral regions [Figs. 10(B2), 10(C2)]. Likewise, HARNet may remove features specific to peripheral regions, particularly disease-specific features that are more prevalent in the periphery than in the macula, such as neovascularization elsewhere, which tends to occur along the major vessels away from the central macula. Unfortunately, due to the lack of a high-resolution ground truth for the region outside the central macula, we can only speculate on this issue. HARNet also currently works on only one vascular complex (the superficial), but the intermediate and deep capillary plexuses, as well as the choriocapillaris, are important in several diseases [55–60]. Reconstruction of these vascular layers would also be beneficial; however, issues such as shadowing that preferentially affect low-density scanning patterns are only exacerbated in these deeper layers, making image reconstruction there significantly more challenging. Finally, to completely characterize HARNet, it will also be important to assess its performance on pathological scans. While our data indicate that HARNet performs well on DR angiograms, many other diseases could be examined for a more thorough assessment. A complete investigation of HARNet's performance on these diseases would also include extracting relevant biomarkers to determine whether they are more or less accurately measured on reconstructed images.
Due to eye motion, OCTA produces bright strip artifacts that are also passed to reconstructed angiograms (Fig. 11). Our algorithm makes no attempt to correct this disturbance, since commercial systems can remove most motion artifacts by tracking at the scan-acquisition level, and such artifacts can also frequently be removed by other software means [14,61].


Fig. 10. (A1) Original 6×6-mm superficial vascular complex (SVC) angiograms. (B1) The centrally located 3×3-mm angiograms. (C1) The region outside the centrally located 3×3-mm angiograms. (A2-C2) HARNet output for (A1-C1). Since the ground truth used in training did not include the specific vascular patterns present in the green square, the reconstruction here may not be ideal (C2).



Fig. 11. Top row: (A1-C1) Original 6×6-mm superficial vascular complex (SVC) angiograms with motion artifacts. Bottom row: (A2-C2) HARNet output for (A1-C1). Blue arrows indicate the position of motion artifacts.


5. Conclusions

We proposed an end-to-end image reconstruction technique that generates high-resolution 6×6-mm SVC angiograms from networks trained on high-resolution 3×3-mm angiograms. The high-resolution 6×6-mm angiograms produced by our network had lower noise intensity and better vascular connectivity than the original 6×6-mm SVC angiograms, and the algorithm did not generate false flow signal at realistic noise intensities. The enhanced 6×6-mm angiograms may improve the measurement of disease biomarkers such as non-perfusion area and vessel density.

Funding

National Institutes of Health (P30 EY010572, R01 EY024544, R01 EY027833); Research to Prevent Blindness (Unrestricted departmental funding grant, William & Mary Greve Special Scholar Award).

Disclosures

Oregon Health & Science University (OHSU) and Yali Jia have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.

References

1. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. 112(18), E2395–E2402 (2015). [CrossRef]  

2. T. S. Hwang, Y. Jia, S. S. Gao, S. T. Bailey, A. K. Lauer, C. J. Flaxel, D. J. Wilson, and D. Huang, “Optical coherence tomography angiography features of diabetic retinopathy,” Retina 35(11), 2371–2376 (2015). [CrossRef]  

3. R. B. Rosen, J. S. Andrade Romo, B. D. Krawitz, S. Mo, A. A. Fawzi, R. E. Linderman, J. Carroll, A. Pinhas, and T. Y. P. Chui, “Earliest evidence of preclinical diabetic retinopathy revealed using optical coherence tomography angiography perfused capillary density,” Am. J. Ophthalmol. 203, 103–115 (2019). [CrossRef]  

4. Y. Jia, S. T. Bailey, D. J. Wilson, O. Tan, M. L. Klein, C. J. Flaxel, B. Potsaid, J. J. Liu, C. D. Lu, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration,” Ophthalmology 121(7), 1435–1444 (2014). [CrossRef]  

5. L. Roisman, Q. Zhang, R. K. Wang, G. Gregori, A. Zhang, C. L. Chen, M. K. Durbin, L. An, P. F. Stetson, G. Robbins, A. Miller, F. Zheng, and P. J. Rosenfeld, “Optical coherence tomography angiography of asymptomatic neovascularization in intermediate age-related macular degeneration,” Ophthalmology 123(6), 1309–1319 (2016). [CrossRef]  

6. H. L. Takusagawa, L. Liu, K. N. Ma, Y. Jia, S. S. Gao, M. Zhang, B. Edmunds, M. Parikh, S. Tehrani, J. C. Morrison, and D. Huang, “Projection-resolved optical coherence tomography angiography of macular retinal circulation in glaucoma,” Ophthalmology 124(11), 1589–1599 (2017). [CrossRef]  

7. H. L. Rao, Z. S. Pradhan, R. N. Weinreb, H. B. Reddy, M. Riyazuddin, S. Dasari, M. Palakurthy, N. K. Puttaiah, D. A. S. Rao, and C. A. B. Webers, “Regional comparisons of optical coherence tomography angiography vessel density in primary open-angle glaucoma,” Am. J. Ophthalmol. 171, 75–83 (2016). [CrossRef]  

8. R. C. Patel, J. Wang, T. S. Hwang, M. Zhang, S. S. Gao, M. E. Pennesi, S. T. Bailey, B. J. Lujan, X. Wang, D. J. Wilson, D. Huang, and Y. Jia, “Plexus-specific detection of retinal vascular pathologic conditions with projection-resolved OCT angiography,” Ophthalmol. Retin. 2(8), 816–826 (2018). [CrossRef]  

9. K. Tsuboi, H. Sasajima, and M. Kamei, “Collateral vessels in branch retinal vein occlusion: anatomic and functional analyses by OCT angiography,” Ophthalmol. Retin. 3(9), 767–776 (2019). [CrossRef]  

10. T. E. de Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (OCTA),” Int. J. Retin. Vitr. 1(1), 5–15 (2015). [CrossRef]  

11. Y. Jia, J. M. Simonett, J. Wang, X. Hua, L. Liu, T. S. Hwang, and D. Huang, “Wide-field OCT angiography investigation of the relationship between radial peripapillary capillary plexus density and nerve fiber layer thickness,” Invest. Ophthalmol. Visual Sci. 58(12), 5188–5194 (2017). [CrossRef]  

12. A. Ishibazawa, L. R. de Pretto, A. Yasin Alibhai, E. M. Moult, M. Arya, O. Sorour, N. Mehta, C. R. Baumal, A. J. Witkin, A. Yoshida, J. S. Duker, J. G. Fujimoto, and N. K. Waheed, “Retinal nonperfusion relationship to arteries or veins observed on widefield optical coherence tomography angiography in diabetic retinopathy,” Invest. Ophthalmol. Visual Sci. 60(13), 4310–4318 (2019). [CrossRef]  

13. Q. S. You, Y. Guo, J. Wang, X. Wei, A. Camino, P. Zang, C. J. Flaxel, S. T. Bailey, D. Huang, Y. Jia, and T. S. Hwang, “Detection of clinically unsuspected retinal neovascularization with wide-field optical coherence tomography angiography,” Retina 40(5), 891–897 (2020). [CrossRef]

14. A. Camino, Y. Jia, G. Liu, J. Wang, and D. Huang, “Regression-based algorithm for bulk motion subtraction in optical coherence tomography angiography,” Biomed. Opt. Express 8(6), 3053–3066 (2017). [CrossRef]  

15. A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. V. R. Sadda, “Multiple enface image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmol. 96(7), e820–e827 (2018). [CrossRef]  

16. A. Camino, M. Zhang, C. Dongye, A. D. Pechauer, T. S. Hwang, S. T. Bailey, B. Lujan, D. J. Wilson, D. Huang, and Y. Jia, “Automated registration and enhanced processing of clinical optical coherence tomography angiography,” Quant. Imaging Med. Surg. 6(4), 391–401 (2016). [CrossRef]  

17. B. Tan, A. Wong, and K. Bizheva, “Enhancement of morphological and vascular features in OCT images using a modified Bayesian residual transform,” Biomed. Opt. Express 9(5), 2394–2406 (2018). [CrossRef]  

18. M. Chlebiej, I. Gorczynska, A. Rutkowski, J. Kluczewski, T. Grzona, E. Pijewska, B. L. Sikorski, A. Szkulmowska, and M. Szkulmowski, “Quality improvement of OCT angiograms with elliptical directional filtering,” Biomed. Opt. Express 10(2), 1013–1031 (2019). [CrossRef]  

19. P. Prentašic, M. Heisler, Z. Mammo, S. Lee, A. Merkur, E. Navajas, M. F. Beg, M. Šarunic, and S. Loncaric, “Segmentation of the foveal microvasculature using deep learning networks,” J. Biomed. Opt. 21(7), 075008 (2016). [CrossRef]  

20. Y. Guo, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “MEDnet, a neural network for automated detection of avascular area in OCT angiography,” Biomed. Opt. Express 9(11), 5147–5158 (2018). [CrossRef]  

21. D. Nagasato, H. Tabuchi, H. Masumoto, H. Enno, N. Ishitobi, M. Kameoka, M. Niki, and Y. Mitamura, “Automated detection of a nonperfusion area caused by retinal vein occlusion in optical coherence tomography angiography images using deep learning,” PLoS One 14(11), e0223965 (2019). [CrossRef]

22. M. Guo, M. Zhao, A. M. Y. Cheong, H. Dai, A. K. C. Lam, and Y. Zhou, “Automatic quantification of superficial foveal avascular zone in optical coherence tomography angiography implemented with deep learning,” Vis. Comput. Ind. Biomed. Art 2(1), 1–9 (2019). [CrossRef]  

23. Y. Guo, T. T. Hormel, H. Xiong, B. Wang, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography,” Biomed. Opt. Express 10(7), 3257–3268 (2019). [CrossRef]  

24. J. L. Lauermann, M. Treder, M. Alnawaiseh, C. R. Clemens, N. Eter, and F. Alten, “Automated OCT angiography image quality assessment using a deep learning algorithm,” Graefe’s Arch. Clin. Exp. Ophthalmol. 257(8), 1641–1648 (2019). [CrossRef]  

25. J. Wang, T. T. Hormel, L. Gao, P. Zang, Y. Guo, X. Wang, S. T. Bailey, and Y. Jia, “Automated diagnosis and segmentation of choroidal neovascularization in OCT angiography using deep learning,” Biomed. Opt. Express 11(2), 927–944 (2020). [CrossRef]  

26. J. Wang, T. T. Hormel, Q. You, Y. Guo, X. Wang, L. Chen, T. S. Hwang, and Y. Jia, “Robust non-perfusion area detection in three retinal plexuses using convolutional neural network in OCT angiography,” Biomed. Opt. Express 11(1), 330–345 (2020). [CrossRef]  

27. J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.

28. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 (2017), pp. 105–114.

29. T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 4799–4807.

30. J. Xu, Y. Chae, B. Stenger, and A. Datta, “Dense bynet: Residual dense network for image super resolution,” in Proceedings - International Conference on Image Processing, ICIP (2018), pp. 71–75.

31. K. Zhang, W. Zuo, and L. Zhang, “Deep plug-and-play super-resolution for arbitrary blur kernels,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019), pp. 1671–1681.

32. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]  

33. Y. Guo, A. Camino, M. Zhang, J. Wang, D. Huang, T. Hwang, and Y. Jia, “Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography,” Biomed. Opt. Express 9(9), 4429–4442 (2018). [CrossRef]  

34. V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10) (2010), pp. 807–814.

35. S. Klein, M. Staring, K. Murphy, M. A. Viergever, and J. P. W. Pluim, “Elastix: A toolbox for intensity-based medical image registration,” IEEE Trans. Med. Imaging 29(1), 196–205 (2010). [CrossRef]  

36. A. Horé and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proceedings - International Conference on Pattern Recognition (2010), pp. 2366–2369.

37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

38. D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings (2015), pp. 1–15.

39. E. Peli, “Contrast in complex images,” J. Opt. Soc. Am. A 7(10), 2032–2040 (1990). [CrossRef]  

40. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst., Man, Cybern. 9(1), 62–66 (1979). [CrossRef]

41. K. Ogurtsova, J. D. da Rocha Fernandes, Y. Huang, U. Linnenkamp, L. Guariguata, N. H. Cho, D. Cavan, J. E. Shaw, and L. E. Makaroff, “IDF Diabetes Atlas: Global estimates for the prevalence of diabetes for 2015 and 2040,” Diabetes Res. Clin. Pract. 128, 40–50 (2017). [CrossRef]  

42. S. Mo, E. Phillips, B. D. Krawitz, R. Garg, S. Salim, L. S. Geyman, E. Efstathiadis, J. Carroll, R. B. Rosen, and T. Y. P. Chui, “Visualization of radial peripapillary capillaries using optical coherence tomography angiography: The effect of image averaging,” PLoS One 12(1), e0169385 (2017). [CrossRef]  

43. P. M. Maloca, R. F. Spaide, S. Rothenbuehler, H. P. N. Scholl, T. Heeren, J. E. R. de Carvalho, M. Okada, P. W. Hasler, C. Egan, and A. Tufail, “Enhanced resolution and speckle-free three-dimensional printing of macular optical coherence tomography angiography,” Acta Ophthalmol. 97(2), e317–e319 (2019). [CrossRef]  

44. H. C. Hendargo, R. Estrada, S. J. Chiu, C. Tomasi, S. Farsiu, and J. A. Izatt, “Automated non-rigid registration and mosaicing for robust imaging of distinct retinal capillary beds using speckle variance optical coherence tomography,” Biomed. Opt. Express 4(6), 803–821 (2013). [CrossRef]  

45. A. H. Kashani, C. L. Chen, J. K. Gahm, F. Zheng, G. M. Richter, P. J. Rosenfeld, Y. Shi, and R. K. Wang, “Optical coherence tomography angiography: A comprehensive review of current methods and clinical applications,” Prog. Retinal Eye Res. 60, 66–100 (2017). [CrossRef]  

46. R. F. Spaide, J. G. Fujimoto, N. K. Waheed, S. R. Sadda, and G. Staurenghi, “Optical coherence tomography angiography,” Prog. Retinal Eye Res. 64, 1–55 (2018). [CrossRef]  

47. J. F. Russell, H. W. Flynn, J. Sridhar, J. H. Townsend, Y. Shi, K. C. Fan, N. L. Scott, J. W. Hinkle, C. Lyu, G. Gregori, S. R. Russell, and P. J. Rosenfeld, “Distribution of diabetic neovascularization on ultra-widefield fluorescein angiography and on simulated widefield OCT angiography,” Am. J. Ophthalmol. 207, 110–120 (2019). [CrossRef]  

48. P. Li, Z. Huang, S. Yang, X. Liu, Q. Ren, and P. Li, “Adaptive classifier allows enhanced flow contrast in OCT angiography using a histogram-based motion threshold and 3D Hessian analysis-based shape filtering,” Opt. Lett. 42(23), 4816–4819 (2017). [CrossRef]  

49. D. S. W. Ting, L. R. Pasquale, L. Peng, J. P. Campbell, A. Y. Lee, R. Raman, G. S. W. Tan, L. Schmetterer, P. A. Keane, and T. Y. Wong, “Artificial intelligence and deep learning in ophthalmology,” Br. J. Ophthalmol. 103(2), 167–175 (2019). [CrossRef]  

50. M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision – ECCV 2014, Lecture Notes in Computer Science 8689 (Springer, 2014), pp. 818–833.

51. Q. Zhang, Y. N. Wu, and S.-C. Zhu, “Interpretable convolutional neural networks,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2017), pp. 8827–8836.

52. W. Samek, T. Wiegand, and K.-R. Müller, “Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models,” arXiv preprint arXiv:1708.08296 (2017).

53. Q.-S. Zhang and S.-C. Zhu, “Visual interpretability for deep learning: a survey,” Front. Inf. Technol. Electron. Eng. 19(1), 27–39 (2018). [CrossRef]

54. L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” in 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings (2017).

55. L. Toto, E. Borrelli, L. Di Antonio, P. Carpineto, and R. Mastropasqua, “Retinal vascular plexuses’ changes in dry age-related macular degeneration, evaluated by means of optical coherence tomography angiography,” Retina 36(8), 1566–1572 (2016). [CrossRef]  

56. Y. T. Chi, C. H. Yang, and C. K. Cheng, “Optical coherence tomography angiography for assessment of the 3-dimensional structures of polypoidal choroidal vasculopathy,” JAMA Ophthalmol. 135(12), 1310–1316 (2017). [CrossRef]  

57. A. C. Onishi, P. L. Nesper, P. K. Roberts, G. A. Moharram, H. Chai, L. Liu, L. M. Jampol, and A. A. Fawzi, “Importance of considering the middle capillary plexus on OCT angiography in diabetic retinopathy,” Invest. Ophthalmol. Visual Sci. 59(5), 2167–2176 (2018). [CrossRef]  

58. T. S. Hwang, A. M. Hagag, J. Wang, M. Zhang, A. Smith, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of nonperfusion areas in 3 vascular plexuses with optical coherence tomography angiography in eyes of patients with diabetes,” JAMA Ophthalmol. 136(8), 929–936 (2018). [CrossRef]  

59. A. Camino, Y. Guo, Q. You, J. Wang, D. Huang, S. T. Bailey, and Y. Jia, “Detecting and measuring areas of choriocapillaris low perfusion in intermediate, non-neovascular age-related macular degeneration,” Neurophotonics 6(04), 1 (2019). [CrossRef]  

60. L. Liu, B. Edmunds, H. L. Takusagawa, S. Tehrani, L. H. Lombardi, J. C. Morrison, Y. Jia, and D. Huang, “Projection-resolved optical coherence tomography angiography of the peripapillary retina in glaucoma,” Am. J. Ophthalmol. 207, 99–109 (2019). [CrossRef]  

61. P. Zang, G. Liu, M. Zhang, C. Dongye, J. Wang, A. D. Pechauer, T. S. Hwang, D. J. Wilson, D. Huang, D. Li, and Y. Jia, “Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram,” Biomed. Opt. Express 7(7), 2823 (2016). [CrossRef]  




Tables (3)

Table 1. Noise intensity, contrast, and vascular connectivity (mean ± std.) of reconstructed defocused SVC angiograms and angiograms captured under optimal conditions.

Table 2. Noise intensity, contrast, and vascular connectivity (mean ± std.) of reconstructed 6×6-mm SVC angiograms in eyes with diabetic retinopathy and healthy controls.

Table 3. Comparison of noise intensity, contrast, and vascular connectivity (mean ± std.) between original angiograms and angiograms processed by different methods. N is the number of eyes.

Equations (7)

$$\mathrm{MSE} = \frac{1}{w \times h}\sum_{i=1}^{w}\sum_{j=1}^{h}\left( X(i,j) - Y(i,j) \right)^{2}$$

$$\mathrm{SSIM} = \frac{(2\mu_{X}\mu_{Y} + C_{1})(2\sigma_{XY} + C_{2})}{(\mu_{X}^{2} + \mu_{Y}^{2} + C_{1})(\sigma_{X}^{2} + \sigma_{Y}^{2} + C_{2})}$$

$$\mathrm{Loss} = \mathrm{MSE} + (1 - \mathrm{SSIM})$$

$$S'(i,j) = \frac{S(i,j) - \min(S)}{\max(S) - \min(S)}$$

$$I_{\mathrm{Noise}} = \frac{1}{|R|}\sum_{(i,j)\in R} S(i,j)^{2}$$

$$C_{\mathrm{RMS}} = \sqrt{\frac{1}{|A|}\sum_{(i,j)\in A}\left( S(i,j) - \mu \right)^{2}}$$

$$I_{\mathrm{False\ flow\ signal}} = \frac{1}{|R|}\sum_{(i,j)\in R} S(i,j)^{2}$$
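The quantities above can be sketched in code. The following is a minimal NumPy illustration of the min-max normalization, noise-intensity, RMS-contrast, and MSE + (1 − SSIM) loss computations; the function names (`normalize`, `noise_intensity`, `rms_contrast`, `harnet_loss`), the boolean avascular-region mask, and the use of global whole-image SSIM statistics are our assumptions for brevity, not the authors' implementation (which would typically compute SSIM over local windows).

```python
import numpy as np


def normalize(S):
    """Min-max normalize an angiogram to [0, 1] (Eq. 4)."""
    S = S.astype(np.float64)
    return (S - S.min()) / (S.max() - S.min())


def noise_intensity(S, mask):
    """Mean squared intensity inside a nominally avascular region R (Eq. 5).

    `mask` is a boolean array selecting R, e.g. the foveal avascular zone.
    """
    region = S[mask].astype(np.float64)
    return np.mean(region ** 2)


def rms_contrast(S):
    """RMS contrast over the image area A (Eq. 6), following Peli [39]."""
    S = S.astype(np.float64)
    return np.sqrt(np.mean((S - S.mean()) ** 2))


def harnet_loss(X, Y):
    """Training loss MSE + (1 - SSIM), Eqs. (1)-(3), on normalized images.

    Uses whole-image SSIM statistics; C1 and C2 are the conventional
    stabilizing constants for a dynamic range of 1.
    """
    X, Y = normalize(X), normalize(Y)
    mse = np.mean((X - Y) ** 2)
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = X.mean(), Y.mean()
    var_x, var_y = X.var(), Y.var()
    cov = np.mean((X - mu_x) * (Y - mu_y))
    ssim = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    )
    return mse + (1 - ssim)
```

For identical input and target images the loss is zero (MSE = 0, SSIM = 1), and it grows as the reconstruction diverges from the 3×3-mm ground truth in either pixel intensity or structure.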