
An end-to-end network for segmenting the vasculature of three retinal capillary plexuses from OCT angiographic volumes


Abstract

The segmentation of en face retinal capillary angiograms from volumetric optical coherence tomographic angiography (OCTA) usually relies on retinal layer segmentation, which is time-consuming and error-prone. In this study, we developed a deep-learning-based method to segment vessels in the superficial vascular plexus (SVP), intermediate capillary plexus (ICP), and deep capillary plexus (DCP) directly from volumetric OCTA data. The method contains a three-dimensional convolutional neural network (CNN) for extracting distinct retinal layers, a custom projection module to generate three vascular plexuses from OCTA data, and three parallel CNNs to segment vasculature. Experimental results on OCTA data from rat eyes demonstrated the feasibility of the proposed method. This end-to-end network has the potential to simplify OCTA data processing on retinal vasculature segmentation. The main contribution of this study is that we propose a custom projection module to connect retinal layer segmentation and vasculature segmentation modules and automatically convert data from three to two dimensions, thus establishing an end-to-end method to segment three retinal capillary plexuses from volumetric OCTA without any human intervention.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) can non-invasively provide three-dimensional (3D) images of tissue microstructure at micrometer resolution and has been widely used in ophthalmology for research and diagnosing ocular diseases [1]. OCT angiography (OCTA) is a novel imaging modality based on structural OCT. By measuring the OCT signal variation between consecutive B-scans, the intrinsic blood flow signal, down to the capillary level, can be detected and used to generate 3D images of the retinal microvasculature [2,3]. Because of the 3D, high-resolution nature of OCTA imaging, it is uniquely capable of elucidating the retinal circulatory structure in both humans [4] and model organisms [5–8] in vivo.

Many of the most important OCTA metrics, such as vessel area, skeleton density, and vessel morphological features (for example, caliber or tortuosity), rely on accurate vessel segmentation. A few studies have explored methods to reliably extract the OCTA-generated vasculature from en face images [9–11], but most such approaches have limitations. Many studies focused only on segmentation of the superficial complex [11–13]. However, different diseases can affect separate plexuses differently, and the organization of the different plexuses is important for a full understanding of retinal function [14]. Algorithms that can effectively characterize the deep as well as the superficial plexuses are therefore also needed. Furthermore, vessel segmentation is just one step in the entire process of deriving a binarized two-dimensional (2D) vasculature map from volumetric OCTA data. Generation of en face images (angiograms) is also crucial. This requires (1) accurate retinal anatomic layer segmentation, and (2) data projection within the segmented slab.

Many OCTA algorithms therefore need additional software support. Retinal layer segmentation in particular is non-trivial, and mis-segmentation can map flow to regions where it could be misinterpreted as pathological. Researchers have proposed a number of layer segmentation algorithms based on both conventional image processing methods [15–17] and deep learning approaches [18–21]. These methods, including deep learning approaches, are sometimes context-dependent with respect to location [13,22,23], disease [24–26], or species [24]. For example, an algorithm designed to segment a healthy human retina may not perform well on an eye with advanced disorganization of retinal layers, or on a rat eye. Following retinal slab segmentation, a projection strategy is used within the segmented layer to produce a 2D en face image. Within the retina, maximum projection performs better than average projection on OCTA data [27]. All told, the design choices and accuracy of layer segmentation and projection can greatly influence vessel segmentation. Differences between vessel density studies may therefore be attributable not just to the vessel segmentation algorithm itself, but also to the layer segmentation and projection methods.

Recently, deep-learning-based methods have demonstrated tremendous success in image processing. In OCTA especially, deep learning shows great potential for accounting for artifacts [28,29], enhancing retinal angiograms [30], detecting vascular biomarkers [28,29,31–33], and classifying or staging retinopathy [34–38]. In this study, we aim to use deep learning to achieve an end-to-end algorithm for segmenting the vasculature of three retinal capillary plexuses from OCT angiographic volumes in a rodent model. With an end-to-end strategy, a completely automatic process can be achieved from OCTA scans (input end) to segmented capillary vasculatures (output end) without any manual intervention in the intermediate steps. We explored a rodent model in this work for two reasons. First, compared to the human retina, the rodent retina has sparser vascular coverage. This can significantly improve the generation of ground truth, since isolated vessels are easier to differentiate from the background than the closely packed capillaries found in human retinas. In turn, higher confidence in the ground truth enables better quantification of the network's performance. Second, while many investigators have used OCTA to study the retinal vasculature in animal models of ocular diseases [39–41], very few have investigated the robustness of their vessel segmentation algorithms. This study will provide a valuable automatic processing tool for OCTA analysis in animal imaging.

2. Methods

2.1 Data acquisition

A total of 88 OCT data volumes were acquired from one or both eyes of 10 Brown Norway rats using a prototype 50-kHz visible-light OCT (vis-OCT) system with a full-width half-maximum bandwidth of 90 nm spanning 510 to 610 nm [42]. Two to five scans were acquired from each eye. The OCT volumetric scans were collected over a 2.2 × 2.2 mm² field of view. Each volume scan consists of 512 sampling positions along the slow axis (Y). At each Y position, three consecutive B-scans were captured, each containing 512 A-lines. The OCTA data were calculated simultaneously during scanning using the split-spectrum amplitude-decorrelation angiography (SSADA) algorithm [43].
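For readers reproducing a similar pipeline, the following is a minimal NumPy sketch of the inter-B-scan amplitude decorrelation at the core of SSADA [43]; the split-spectrum step (computing decorrelation on several spectral bands and averaging) is omitted for brevity, and the array shapes and function name are illustrative rather than taken from the authors' code.

```python
import numpy as np

def amplitude_decorrelation(bscans: np.ndarray) -> np.ndarray:
    """Amplitude decorrelation across N repeated B-scans at one Y position.

    bscans: (N, Z, X) array of OCT amplitude images (here N = 3, as in
    this study's scan protocol). Returns a (Z, X) decorrelation map in
    which higher values indicate flow.
    """
    a, b = bscans[:-1], bscans[1:]                 # consecutive B-scan pairs
    eps = 1e-12                                    # guard against division by zero
    pairwise = 1.0 - (a * b) / (0.5 * (a**2 + b**2) + eps)
    return pairwise.mean(axis=0)                   # average over the N-1 pairs

# Example: three repeated B-scans of 360 depth pixels x 512 A-lines.
rng = np.random.default_rng(0)
octa_cross_section = amplitude_decorrelation(rng.random((3, 360, 512)))
```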

2.2 Convolutional neural network architecture

In this study, the key challenges for accurate vessel segmentation from the OCT/OCTA volume (Fig. 1 A & B) are identifying the boundaries of the retinal layers and projecting the corresponding volumetric slabs to 2D en face images within a single CNN architecture. To overcome these challenges, we designed a new convolutional neural network (Fig. 1) that contains a 3D convolutional module (Fig. 1 C), a custom projection module (Fig. 1 D), and three 2D convolutional modules (Fig. 1 E). The 3D convolutional module takes volumetric structural OCT data (Fig. 1 A) as input to identify the boundaries of each retinal layer. This module adopts a U-net-like [44] fully convolutional architecture, composed of a down-sampling encoder and an up-sampling decoder. Between the encoder and decoder, three skip connections link the corresponding convolutional layers. The custom projection module was designed to project volumetric slabs to 2D en face images of the SVP, ICP, and DCP. This module takes the retinal layer segmentation result from the 3D convolutional module and the volumetric OCTA data as input. The last module comprises three parallel 2D convolutional networks that perform vasculature segmentation and output the three segmented retinal capillary plexuses (Fig. 1 F-H). All of these subnetworks share the same architecture, DenseNet [45], and each takes one output of the custom projection module as input to perform vasculature segmentation.
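As a concrete illustration of the layer segmentation module's shape, below is a hedged Keras sketch of a U-net-like 3D encoder-decoder with three skip connections. The filter counts and input shape are assumptions (the input is chosen divisible by 8 for clean pooling; the 84×360×84 crops used in training would require padding or a shallower pooling depth), not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv3d_block(x, filters):
    # Two 3x3x3 convolutions with ReLU: a generic U-Net building block.
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv3D(filters, 3, padding="same", activation="relu")(x)

def layer_segmentation_module(input_shape=(80, 360, 80, 1), n_classes=6):
    """U-net-like 3D module: down-sampling encoder, up-sampling decoder,
    and three skip connections, outputting per-voxel layer probabilities."""
    inputs = tf.keras.Input(shape=input_shape)
    skips, x = [], inputs
    for filters in (8, 16, 32):                       # encoder
        x = conv3d_block(x, filters)
        skips.append(x)                               # saved for skip connection
        x = layers.MaxPool3D(2)(x)
    x = conv3d_block(x, 64)                           # bottleneck
    for filters, skip in zip((32, 16, 8), reversed(skips)):   # decoder
        x = layers.UpSampling3D(2)(x)
        x = layers.Concatenate()([x, skip])           # skip connection
        x = conv3d_block(x, filters)
    # Six output channels: background plus five retinal layer classes.
    outputs = layers.Conv3D(n_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```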


Fig. 1. The architecture of the proposed method. (A) Structural OCT data volume. (B) OCTA data volume. (C) Three-dimensional convolutional module, with skip connections shown as arrows connecting the different layers. (D) The custom projection module. (E1-E3) The two-dimensional DenseNet convolutional modules. (F) Segmentation result for the superficial vascular plexus, (G) intermediate capillary plexus, and (H) deep capillary plexus.


The custom projection module receives the output of the preceding 3D convolutional module, i.e., the retinal layer segmentation result (Fig. 2 A), together with the volumetric OCTA data (Fig. 2 D). It learns a weight vector of shape 6×1 for each vascular plexus (Fig. 2 B). Each weight vector contains six scalars, representing the background and five retinal layers (nerve fiber layer (NFL) + ganglion cell layer (GCL) + inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), and retinal pigment epithelium (RPE)). The weights are then propagated to the whole volume (Fig. 2 C) according to the layer segmentation results, and the volumetric OCTA data are weighted (Fig. 2 E) by multiplication with the propagated volumetric weights. A maximum value projection is used to generate the 2D en face image (Fig. 2 F) from the weighted volumetric OCTA data.
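The following is a minimal sketch of how such a projection module could be written as a trainable Keras layer. It assumes the 3D module emits soft per-voxel class probabilities (which keeps the weight propagation differentiable); the axis ordering, tensor names, and initialization are illustrative assumptions, not the authors' implementation.

```python
import tensorflow as tf

class ProjectionModule(tf.keras.layers.Layer):
    """Learns one 6x1 weight vector per plexus, propagates it through the
    volume via the layer segmentation, and max-projects along depth."""

    def __init__(self, n_plexuses=3, n_classes=6, **kwargs):
        super().__init__(**kwargs)
        # One learnable weight vector (background + 5 layers) per plexus.
        self.w = self.add_weight(name="plexus_weights",
                                 shape=(n_plexuses, n_classes),
                                 initializer="ones", trainable=True)

    def call(self, layer_probs, octa):
        # layer_probs: (B, X, Z, Y, 6) per-voxel class probabilities (Fig. 2A).
        # octa:        (B, X, Z, Y) OCTA volume (Fig. 2D), with Z the depth axis.
        en_face = []
        for k in range(self.w.shape[0]):
            # Propagate the six scalars to every voxel (Fig. 2C).
            vol_weight = tf.tensordot(layer_probs, self.w[k], axes=[[-1], [0]])
            weighted = octa * vol_weight              # weighted volume (Fig. 2E)
            # Maximum projection along depth -> 2D en face image (Fig. 2F).
            en_face.append(tf.reduce_max(weighted, axis=2))
        return en_face                                # [SVP, ICP, DCP] images
```

Using soft probabilities rather than a hard segmentation map is one way to keep the 6×1 vectors trainable end to end by backpropagation.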


Fig. 2. Structure of the custom projection module. (A) The retinal layer segmentation result from the three-dimensional convolution module. (B) The learnable weight vector with shape 6×1. (C) Volumetric weight data. (D) Volumetric OCTA data. (E) Weighted OCTA volumetric data. (F) Projected 2D en face image (the superficial vascular plexus is shown as an example). NFL: nerve fiber layer. GCL: ganglion cell layer. IPL: inner plexiform layer. INL: inner nuclear layer. OPL: outer plexiform layer. ONL: outer nuclear layer. RPE: retinal pigment epithelium.


2.3 Training

2.3.1 Dataset preparation

The dataset used in this study comprises 88 samples, each containing a structural OCT data volume (Fig. 3 A), an OCTA data volume (Fig. 3 B), a volumetric ground truth map for retinal layer segmentation (Fig. 3 C), and three ground truth maps for the retinal vascular plexus en face images (Fig. 3 D-F). To generate the volumetric ground truth map for retinal layer segmentation, we applied an automated retinal layer segmentation algorithm [16] to segment five retinal layer boundaries; two certified graders (P.S., M.G.) then corrected the segmentation errors. Correction took a grader about 20 minutes per case. A third grader (Y.G.) then reviewed all corrections and ensured that there were no obvious errors in the segmentation. After the retinal layer segmentation, we used maximum value projection [27] to produce an en face OCT angiogram for each plexus. Three experts independently delineated the retinal vasculature in each en face OCT angiogram, and the final ground truth map was generated from the three manual gradings using a pixel-wise voting method.
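The text does not spell out the voting rule beyond "pixel-wise voting"; a simple majority (2 of 3 graders) is the natural reading, sketched below with illustrative array names.

```python
import numpy as np

def majority_vote(masks):
    """Combine binary vessel masks from several graders pixel by pixel.

    masks: list of 2D arrays with 1 = vessel. A pixel is labeled vessel
    when more than half of the graders marked it (2 of 3 here)."""
    stack = np.stack(masks, axis=0)
    return (stack.sum(axis=0) > len(masks) / 2).astype(np.uint8)

# Example with three hypothetical grader masks of one en face angiogram:
g1, g2, g3 = (np.random.randint(0, 2, (512, 512), dtype=np.uint8)
              for _ in range(3))
ground_truth = majority_vote([g1, g2, g3])
```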


Fig. 3. Representative dataset. (A) OCT data volume. (B) OCTA data volume. (C) Volumetric ground truth with six categories, background (black), NFL, GCL and IPL (cyan), INL (yellow), OPL (green), ONL (blue), and RPE (magenta). (D) Ground truth maps of the SVP, (E) ICP, and (F) DCP. NFL: nerve fiber layer. GCL: ganglion cell layer. IPL: inner plexiform layer. INL: inner nuclear layer. OPL: outer plexiform layer. ONL: outer nuclear layer. RPE: retinal pigment epithelium.


2.3.2 Training settings

In the proposed network, the 3D and 2D convolutional modules perform different tasks. The 3D convolutional module performs multi-class segmentation and outputs the locations of the retinal layers. The 2D convolutional modules perform binary segmentation and output retinal vessel segmentation results for each separate plexus. The training loss in both types of module was calculated by weighted cross-entropy:

$$L = -\sum_{i = 1}^{C} y_i \cdot \log(p_i) \cdot \omega_i$$
where $C$ is the number of classes, $y_i$ is the ground truth, $p_i$ is the class probability predicted by the network, and $\omega_i$ is the class weight. For the 3D module, the weight was set to 1 for the background and 2 for all retinal layers. For the 2D modules, the weight was set to 1 for the background and 2 for vessels.
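A minimal TensorFlow sketch of Eq. (1) with the class weights stated above; the function name and the assumption of one-hot labels with softmax outputs are ours.

```python
import tensorflow as tf

def weighted_cross_entropy(class_weights):
    """Weighted cross-entropy per Eq. (1). class_weights holds one scalar
    per class, e.g. [1, 2, 2, 2, 2, 2] for the 3D module (background vs.
    five retinal layers) or [1, 2] for the 2D modules (background vs. vessel)."""
    w = tf.constant(class_weights, dtype=tf.float32)
    def loss(y_true, y_pred):
        # y_true: one-hot ground truth; y_pred: softmax class probabilities.
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)   # numerical stability
        per_pixel = -tf.reduce_sum(y_true * tf.math.log(y_pred) * w, axis=-1)
        return tf.reduce_mean(per_pixel)
    return loss
```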

We used the Adam algorithm [46] to reduce the loss during the training phase. The initial learning rate was set to 0.001. The batch size was set to 2 as a compromise due to hardware limitations. The maximum number of training epochs was set to 1000. A global learning-rate decay strategy was used during training: the learning rate was reduced by 90% when the validation loss showed no decrease (a difference between two epochs of less than 0.0001) for 5 consecutive epochs. An early-stopping strategy was employed to stop training when the loss showed no decrease (again, less than 0.0001 between two epochs) for 10 consecutive epochs. The dataset was split into 66 (75%) cases for training, 10 (11%) for validation, and 12 (14%) for testing. Due to hardware limitations, the samples were randomly cropped to 84×360×84 pixels (width × height × depth) before being fed to the network.
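How these schedules were coded is not stated; Keras's built-in callbacks mirror the described behavior, as in this assumed sketch (the model and dataset objects are placeholders).

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

callbacks = [
    # Cut the learning rate by 90% when validation loss improves by less
    # than 0.0001 for 5 consecutive epochs.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                                         patience=5, min_delta=1e-4),
    # Stop training after 10 consecutive epochs without such improvement.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     min_delta=1e-4),
]

# model.compile(optimizer=optimizer, loss=weighted_cross_entropy([1, 2]))
# model.fit(train_ds, validation_data=val_ds, epochs=1000, callbacks=callbacks)
```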

We implemented our network in Python 3.7 with TensorFlow on a PC with an Intel i7 CPU, an Nvidia TITAN RTX graphics card, and 64 GB of RAM.

3. Results

3.1 Weight vectors in the custom projection layer

To verify that the custom projection layer worked as expected, we plotted its learned weight vectors after training (Fig. 4). The weight vector for the SVP (Fig. 4, red line) shows a peak at NFL + GCL + IPL, indicating that OCTA data in the NFL + GCL + IPL slab contributed the most flow signal to the SVP. Similarly, the INL slab contributed the most to the ICP, and the OPL slab contributed the most to the DCP.


Fig. 4. The learned weight vectors in the custom projection layer. SVP: superficial vascular plexus, ICP: intermediate capillary plexus, DCP: deep capillary plexus. NFL: nerve fiber layer. GCL: ganglion cell layer. IPL: inner plexiform layer. INL: inner nuclear layer. OPL: outer plexiform layer. ONL: outer nuclear layer. RPE: retinal pigment epithelium.


3.2 Performance validation metrics

We used five-fold cross-validation to evaluate the performance of our network on the entire dataset. To quantify performance, we calculated specificity, sensitivity, and F1-score (Eq. (2)) on the results of retinal layer segmentation and retinal capillary segmentation:

$$\begin{aligned} Specificity &= \frac{TN}{TN + FP} \\ Sensitivity &= \frac{TP}{TP + FN} \\ F1\text{-}score &= \frac{2 \times TP}{2 \times TP + FP + FN}, \end{aligned}$$
where TP is true positives (correctly predicted target pixels), TN is true negatives (correctly predicted non-target pixels), FP is false positives (non-target pixels wrongly predicted as target), and FN is false negatives (target pixels wrongly predicted as non-target).
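For binary masks, Eq. (2) reduces to a few array operations, as in this hedged NumPy sketch (the function name is ours).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Specificity, sensitivity, and F1-score per Eq. (2) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # correctly predicted target pixels
    tn = np.sum(~pred & ~truth)     # correctly predicted non-target pixels
    fp = np.sum(pred & ~truth)      # non-target wrongly predicted as target
    fn = np.sum(~pred & truth)      # target wrongly predicted as non-target
    return {"specificity": tn / (tn + fp),
            "sensitivity": tp / (tp + fn),
            "f1_score": 2 * tp / (2 * tp + fp + fn)}
```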

3.3 Performance on retinal layer segmentation

Table 1 reports the performance of the retinal layer segmentation. Specificity was high for all retinal layers, while sensitivity was lower for the INL and OPL. This may be because the INL and OPL occupy relatively smaller area ratios than the other layers, which makes them more vulnerable to segmentation errors. As the F1-score considers both specificity and sensitivity, it is likely a better indicator of overall network performance.


Table 1. Agreement (in pixels) between network output and ground truth for retinal layer segmentation (mean ± standard deviation)

Large vessel shadows cannot be ignored in retinal layer segmentation. Because the 3D convolutional module in our network can extract context from the 3D volumetric data, the network was robust in areas with strong shadow artifacts cast by large vessels (Fig. 5 A). With accurate segmentation from the 3D convolutional module, the custom projection module can generate high-quality 2D en face images of the retinal capillary plexuses (Fig. 5 B-D).


Fig. 5. Retinal layer segmentation outputs from the 3D convolutional module and the output of the custom projection module. (A) The B-scan segmentation results. (B) Superficial vascular plexus. The solid red line indicates the position of the B-scan in A. (C) Intermediate capillary plexus. (D) Deep capillary plexus. ILM: inner limiting membrane. IPL: inner plexiform layer. INL: inner nuclear layer. OPL: outer plexiform layer. ONL: outer nuclear layer. EZ: ellipsoid zone. RPE: retinal pigment epithelium. BM: Bruch’s membrane.


3.4 Performance in segmentation of three retinal capillary plexuses

Specificity approached 1 in each of the three retinal capillary plexuses, indicating high performance in distinguishing noise and background from the flow signal (Table 2). Sensitivity deteriorated in the ICP and DCP, which may be due to the relatively low layer segmentation accuracy in the INL and OPL. Compared to the SVP, the vasculature in the ICP and DCP has lower contrast and higher noise (Fig. 6 A1-C1). Moreover, as the junction between the SVP and DCP, the vasculature in the ICP appears discontinuous (Fig. 6 B), which may contribute to the low sensitivity of the proposed method there.


Fig. 6. Vasculature segmentation results in all three retinal capillary plexuses. Row A: Superficial vascular plexus; Row B: intermediate capillary plexus; Row C: deep capillary plexus. First column: maximum projection en face angiograms. Second column: the vessel segmentation results from the proposed method. Third column: ground truth map. Last column: overlay of the ground truth and automated segmentation result. White indicates regions where the ground truth and automated segmentation results overlap.



Table 2. Vessel segmentation performance in all three retinal capillary plexuses (mean ± standard deviation)

3.5 Vessel density quantification

We quantified vessel density from the output of our network and compared it with the ground truth map on the test set. To increase the sample number for quantification, we split each en face plexus image into four equal parts and calculated the mean vessel density in each part. Because our method performs well on the SVP, vessel density measurements there show very high consistency (Fig. 7 A). Although our method shows relatively lower accuracy for the ICP and DCP (Table 2), the vessel densities in the ICP and DCP were also very consistent (Fig. 7 B, C).
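A sketch of this quantification under two stated assumptions: the four equal parts are taken as quadrants, and vessel density is the fraction of pixels labeled vessel (the text does not spell out either detail).

```python
import numpy as np

def quadrant_vessel_densities(mask):
    """Mean vessel density in each of four equal parts of a binary
    en face vessel mask."""
    h2, w2 = mask.shape[0] // 2, mask.shape[1] // 2
    quadrants = [mask[:h2, :w2], mask[:h2, w2:],
                 mask[h2:, :w2], mask[h2:, w2:]]
    return [float(q.mean()) for q in quadrants]    # fraction of vessel pixels

# Densities from network output and ground truth feed the Bland-Altman plots.
```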


Fig. 7. Bland-Altman plot showing the agreement of vessel density between ground truth and the output of the proposed method. (A) Results for the SVP, (B) ICP, and (C) DCP. Though sensitivity for vessel segmentation in the ICP was low, this did not adversely affect vessel density quantification. SVP: superficial vascular plexus. ICP: intermediate capillary plexus. DCP: deep capillary plexus.


4. Discussion

We used a deep convolutional network to design an end-to-end method to automatically segment all three retinal plexuses (SVP, ICP, and DCP) in visible-light OCT/OCTA data from rat eyes. Prior to this, several automated retinal layer segmentation methods [15,17–21] and vessel segmentation algorithms [47–49] had been developed separately. However, those vessel segmentation algorithms relied on a device-specific layer segmentation algorithm to generate the 2D en face angiograms. Notably, Li et al. [50] proposed a deep-learning-based method to perform segmentation from 3D OCTA data to produce a 2D en face image, but they segmented only the large vessels of the SVP and not the capillaries. To the best of our knowledge, this is the first end-to-end method to segment all three retinal capillary plexuses from volumetric OCTA data. Our method contains three modules: a 3D convolutional module, a custom projection module, and a 2D convolutional module. The 3D convolutional module adopted a U-net-like architecture. With skip connections between the encoder and decoder, the network can reuse lower-level features to help generate high-definition segmentation results and suppress vanishing gradients during training [44]. The custom projection module, which bridges the 3D convolutional module and the 2D convolutional module, is key to this method because it removes human intervention and integrates the whole process. Comparison of the weight vectors learned by the custom projection module (Fig. 4) with vessel density by depth in the rat eye [42] shows that it worked as expected. The 2D convolutional module contains three parallel subnetworks that allow separate segmentation of the three retinal capillary plexuses.

Our results indicate that this network has good performance (F1-score > 90%) for vessel segmentation in the SVP. However, in the ICP and DCP, the performance deteriorated. With increasing depth, confounding factors introduced by low OCT signal strength become more prevalent, which interferes with network performance. Sensitivity in the ICP was particularly low. We believe additional factors may also contribute to this performance deterioration. First, as a junction between the SVP and DCP, the ICP vasculature in rats mainly comprises vertical vessels that connect the SVP (dominated by arteries and arterioles) and the DCP (dominated by capillaries). These inter-plexus vessels appear in en face images as single dots, so the ICP appears very sparse and discontinuous (Fig. 6 B), rather than a complete blood vessel network like the SVP or DCP [42]. The resulting disconnected vascular morphologies increase the difficulty of the segmentation task for the network and make manual delineation more challenging for human graders. Additionally, because of the ICP's deeper position, flow signal attenuation is exacerbated, which may affect the performance of both the network and manual grading, and could also decrease agreement with the ground truth.

We have also demonstrated that a deep-learning-based method can be used to build an end-to-end pipeline to segment the three retinal plexuses from OCTA volumes without manual assistance, which can greatly reduce the time required for data processing when applied to animal models. The method presented here is a useful research tool in its own right, since animal models continue to play an important role in ophthalmic research and will almost certainly continue to do so for the foreseeable future.

We also believe that the strategy outlined here will eventually be applicable to clinical (human) data sets. Human data share a similar structure to rat data in both anatomical layers and vasculature organization, so a model trained to perform a similar function for humans is eminently feasible. However, some challenges must be solved before this application can become a reality. The human retinal vasculature is denser than the rat's, which may make it more difficult to generate a ground truth data set with an accuracy similar to that achieved for the rat eye. Furthermore, to achieve clinical relevance, any network to be used on human patients will have to demonstrate robust performance across a wide variety of diseases. Finally, clinical data sets often contain data with poor image quality, and even the highest-quality clinical images are unlikely to be as clear as the images of rat retinas used in this study, since human subjects are not anesthetized. Fortunately, clinical data are readily abundant. This will allow training with larger datasets, which may offset the impact of data diversity on network performance.

It is worth noting that, while the overlap between the ground truth and the network output was imperfect, our measurements of vessel density made from both indicate that the network's segmentation did not adversely affect our ability to quantify vessel density. While the overlap between output and ground truth is clearly important, small shifts in vessel location may reflect the difficulty of establishing the true location of a vessel in a pixelated image as much as network performance. Additionally, few diagnostic criteria are concerned with the precise pixel-scale location of a vessel. Instead, summary statistics like vessel density are used for retinopathy characterization [51–54]. By avoiding manual segmentation, our results should be unambiguously transferable between measurements in different contexts. Finally, given the scalability of convolutional networks, we can improve accuracy by increasing the dataset size and optimizing the network architecture. This approach may help improve segmentation in difficult layers in the future.

5. Conclusions

In summary, we proposed a deep learning method for vessel segmentation in all three retinal capillary plexuses of the rat eye from visible light OCT/OCTA. The network could segment five retinal layers using a 3D convolutional module, project 3D OCTA data to a 2D OCTA en face image using a custom projection layer, and segment three retinal capillary plexuses using a 2D convolutional module. By using these three modules, our network can achieve an end-to-end workflow for vessel segmentation of retinal plexuses. The high performance shown here indicates that this approach can replace complex data processing procedures and reduce errors caused by manual processing.

Funding

National Institutes of Health (P30 EY010572, R01 EY024544, R01 EY027833, R01EY031394, T32 EY023211); Unrestricted Departmental Funding Grant; Research to Prevent Blindness (William & Mary Greve Special Scholar Award); Bright Focus Foundation (G2020168).

Disclosures

Oregon Health & Science University (OHSU) and Yali Jia have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. I. Kramoreva and Y. I. Rozhko, “Optical coherence tomography (Review),” J. Appl. Spectrosc. 77(4), 449–467 (2010). [CrossRef]  

2. M. Ang, A. C. S. Tan, C. M. G. Cheung, P. A. Keane, R. Dolz-Marco, C. C. A. Sng, and L. Schmetterer, “Optical coherence tomography angiography: a review of current and future clinical applications,” Graefe’s Arch. Clin. Exp. Ophthalmol. 256(2), 237–245 (2018). [CrossRef]  

3. S. S. Gao, Y. Jia, M. Zhang, J. P. Su, G. Liu, T. S. Hwang, S. T. Bailey, and D. Huang, “Optical coherence tomography angiography,” Invest. Ophthalmol. Vis. Sci. 57(9), OCT27 (2016). [CrossRef]  

4. J. P. Campbell, M. Zhang, T. S. Hwang, S. T. Bailey, D. J. Wilson, Y. Jia, and D. Huang, “Detailed vascular anatomy of the human retina by projection-resolved optical coherence tomography angiography,” Sci. Rep. 7, 42201 (2017). [CrossRef]

5. J. R. Park, W. Choi, H. K. Hong, Y. Kim, S. J. Park, Y. Hwang, P. Kim, S. Woo, K. H. Park, and W. Y. Oh, “Imaging laser-induced choroidal neovascularization in the rodent retina using optical coherence tomography angiography,” Invest. Ophthalmol. Vis. Sci. 57(9), OCT331 (2016). [CrossRef]  

6. B. Tan, B. Maclellan, E. Mason, and K. Bizheva, “Structural, functional and blood perfusion changes in the rat retina associated with elevated intraocular pressure, measured simultaneously with a combined OCT + ERG system,” PLoS One 13(3), e0193592 (2018). [CrossRef]  

7. S. Pi, A. Camino, X. Wei, J. Simonett, W. Cepurna, D. Huang, J. C. Morrison, and Y. Jia, “Rodent retinal circulation organization and oxygen metabolism revealed by visible-light optical coherence tomography,” Biomed. Opt. Express 9(11), 5851 (2018). [CrossRef]  

8. S. Pi, T. T. Hormel, X. Wei, W. Cepurna, B. Wang, J. C. Morrison, and Y. Jia, “Retinal capillary oximetry with visible light optical coherence tomography,” Proc. Natl. Acad. Sci. U. S. A. 117(21), 11658–11666 (2020). [CrossRef]  

9. A. Rabiolo, F. Gelormini, R. Sacconi, M. V. Cicinelli, G. Triolo, P. Bettin, K. Nouri-Mahdavi, F. Bandello, and G. Querques, “Comparison of methods to quantify macular and peripapillary vessel density in optical coherence tomography angiography,” PLoS One 13(10), e0205773 (2018). [CrossRef]  

10. X. Liu, L. Bi, Y. Xu, D. Feng, J. Kim, and X. Xu, “Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images,” Biomed. Opt. Express 10(4), 1601 (2019). [CrossRef]  

11. M. Heisler, F. Chan, Z. Mammo, C. Balaratnasingam, P. Prentasic, G. Docherty, M. Ju, S. Rajapakse, S. Lee, A. Merkur, A. Kirker, D. Albiani, D. Maberley, K. B. Freund, M. F. Beg, S. Loncaric, M. V. Sarunic, and E. V. Navajas, “Deep learning vessel segmentation and quantification of the foveal avascular zone using commercial and prototype OCT-A platforms,” arXiv (2019).

12. G. Triolo, A. Rabiolo, N. D. Shemonski, A. Fard, F. Di Matteo, R. Sacconi, P. Bettin, S. Magazzeni, G. Querques, L. E. Vazquez, P. Barboni, and F. Bandello, “Optical coherence tomography angiography macular and peripapillary vessel perfusion density in healthy subjects, glaucoma suspects, and glaucoma patients,” Invest. Ophthalmol. Vis. Sci. 58(13), 5713–5722 (2017). [CrossRef]  

13. Q. Zhang, J. B. Jonas, Q. Wang, S. Y. Chan, L. Xu, W. Bin Wei, and Y. X. Wang, “Optical coherence tomography angiography vessel density changes after acute intraocular pressure elevation,” Sci. Rep. 8(1), 6024 (2018). [CrossRef]  

14. T. T. Hormel, Y. Jia, Y. Jian, T. S. Hwang, S. T. Bailey, M. E. Pennesi, D. J. Wilson, J. C. Morrison, and D. Huang, “Plexus-specific retinal vascular anatomy and pathologies as seen by projection-resolved optical coherence tomographic angiography,” Prog. Retin. Eye Res. 80, 100878 (2021). [CrossRef]  

15. D. Xiang, H. Tian, X. Yang, F. Shi, W. Zhu, H. Chen, and X. Chen, “Automatic segmentation of retinal layer in OCT images with choroidal neovascularization,” IEEE Trans. Image Process. 27(12), 5880–5891 (2018). [CrossRef]  

16. Y. Guo, A. Camino, M. Zhang, J. Wang, D. Huang, T. Hwang, and Y. Jia, “Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography,” Biomed. Opt. Express 9(9), 4429 (2018). [CrossRef]  

17. Q. Dai and Y. Sun, “Automated layer segmentation of optical coherence tomography images,” in Proceedings - 2011 4th International Conference on Biomedical Engineering and Informatics, BMEI 2011 (2011), 1, pp. 142–146.

18. S. Apostolopoulos, S. De Zanet, C. Ciller, S. Wolf, and R. Sznitman, “Pathological OCT retinal layer segmentation using branch residual U-shape networks,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2017), 10435 LNCS, pp. 294–301.

19. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732 (2017). [CrossRef]  

20. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLaynet: Retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627 (2017). [CrossRef]  

21. P. Zang, J. Wang, T. T. Hormel, L. Liu, D. Huang, and Y. Jia, “Automated segmentation of peripapillary retinal boundaries in OCT combining a convolutional neural network and a multi-weights graph search,” Biomed. Opt. Express 10(8), 4340–4352 (2019). [CrossRef]  

22. M. S. Miri, M. D. Abràmoff, K. Lee, M. Niemeijer, J. K. Wang, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from SD-OCT and color fundus photographs using a machine-learning graph-based approach,” IEEE Trans. Med. Imaging 34(9), 1854–1866 (2015). [CrossRef]  

23. S. K. Devalla, P. K. Renukanand, B. K. Sreedhar, G. Subramanian, L. Zhang, S. Perera, J.-M. Mari, K. S. Chin, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, “DRUNET: a dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images,” Biomed. Opt. Express 9(7), 3244 (2018). [CrossRef]  

24. E. Silverstein, S. Freedman, G. P. Zéhil, K. Jiramongkolchai, and M. El-Dairi, “The macula in pediatric glaucoma: quantifying the inner and outer layers via optical coherence tomography automatic segmentation,” J. AAPOS 20(4), 332–336 (2016). [CrossRef]  

25. C. S. Lee, A. J. Tyring, N. P. Deruyter, Y. Wu, A. Rokem, and A. Y. Lee, “Deep-learning based, automated segmentation of macular edema in optical coherence tomography,” Biomed. Opt. Express 8(7), 3440 (2017). [CrossRef]  

26. F. Bai, M. J. Marques, and S. J. Gibson, “Cystoid macular edema segmentation of optical coherence tomography images using fully convolutional neural networks and fully connected CRFs,” arXiv Prepr. (2017).

27. T. T. Hormel, J. Wang, S. T. Bailey, T. S. Hwang, D. Huang, and Y. Jia, “Maximum value projection produces better en face OCT angiograms than mean value projection,” Biomed. Opt. Express 9(12), 6412 (2018). [CrossRef]  

28. Y. Guo, T. T. Hormel, H. Xiong, B. Wang, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography,” Biomed. Opt. Express 10(7), 3257–3268 (2019). [CrossRef]  

29. J. Wang, T. T. Hormel, L. Gao, P. Zang, Y. Guo, X. Wang, S. T. Bailey, and Y. Jia, “Automated diagnosis and segmentation of choroidal neovascularization in OCT angiography using deep learning,” Biomed. Opt. Express 11(2), 927–944 (2020). [CrossRef]  

30. M. Gao, Y. Guo, T. T. Hormel, J. Sun, T. S. Hwang, and Y. Jia, “Reconstruction of high-resolution 6×6-mm OCT angiograms using deep learning,” Biomed. Opt. Express 11(7), 3585–3600 (2020). [CrossRef]  

31. A. Y. Alibhai, L. R. P. De, E. M. Moult, C. Or, M. Arya, M. McGowan, O. Carrasco-Zevallos, B. Lee, S. Chen, and C. R. Baumal, “Quantification of retinal capillary nonperfusion in diabetics using wide-field optical coherence tomography angiography,” Retina 40, 412 (2018). [CrossRef]  

32. Y. Guo, T. T. Hormel, H. Xiong, J. Wang, T. S. Hwang, and Y. Jia, “Automated segmentation of retinal fluid volumes from structural and angiographic optical coherence tomography using deep learning,” Translational Vision Science and Technology 9, 54 (2020). [CrossRef]  

33. J. Wang, T. Hormel, L. Gao, P. Zang, Y. Guo, X. Wang, S. T. Bailey, and Y. Jia, “Fully automated choroidal neovascularization diagnosis and segmentation using deep learning in projection-resolved OCT angiography,” Invest. Ophthalmol. Vis. Sci. 61(7), 1656 (2020).

34. P. Zang, L. Gao, T. T. Hormel, J. Wang, Q. You, T. S. Hwang, and Y. Jia, “DcardNet: diabetic retinopathy classification at multiple levels based on structural and angiographic optical coherence tomography,” IEEE Trans. Biomed. Eng., accepted for publication (2020).

35. D. Le, M. Alam, C. K. Yao, J. I. Lim, Y. T. Hsieh, R. V. P. Chan, D. Toslak, and X. Yao, “Transfer learning for automated octa detection of diabetic retinopathy,” Trans. Vis. Sci. Tech. 9(2), 1–9 (2020). [CrossRef]  

36. D. B. Russakoff, A. Lamin, J. D. Oakley, A. M. Dubis, and S. Sivaprasad, “Deep learning for prediction of AMD progression: A pilot study,” Invest. Ophthalmol. Vis. Sci. 60(2), 712–722 (2019). [CrossRef]  

37. J. Yim, R. Chopra, T. Spitz, J. Winkens, A. Obika, C. Kelly, H. Askham, M. Lukic, J. Huemer, K. Fasler, G. Moraes, C. Meyer, M. Wilson, J. Dixon, C. Hughes, G. Rees, P. T. Khaw, A. Karthikesalingam, D. King, D. Hassabis, M. Suleyman, T. Back, J. R. Ledsam, P. A. Keane, and J. De Fauw, “Predicting conversion to wet age-related macular degeneration using deep learning,” Nat. Med. 26(6), 892–899 (2020). [CrossRef]  

38. M. Heisler, S. Karst, J. Lo, Z. Mammo, T. Yu, S. Warner, D. Maberley, M. F. Beg, E. V. Navajas, and M. V. Sarunic, “Ensemble deep learning for diabetic retinopathy detection using optical coherence tomography angiography,” Trans. Vis. Sci. Tech. 9(2), 20 (2020). [CrossRef]  

39. K. Devarajan, W. Di Lee, H. S. Ong, N. C. Lwin, J. Chua, L. Schmetterer, J. S. Mehta, and M. Ang, “Vessel density and En-face segmentation of optical coherence tomography angiography to analyse corneal vascularisation in an animal model,” Eye Vis. 6(1), 2 (2019). [CrossRef]  

40. T. H. Kim, T. Son, D. Le, and X. Yao, “Longitudinal OCT and OCTA monitoring reveals accelerated regression of hyaloid vessels in retinal degeneration 10 (rd10) mice,” Sci. Rep. 9(1), 16685 (2019). [CrossRef]  

41. Y. Kim, J. R. Park, H. K. Hong, M. Han, J. Lee, P. Kim, S. J. Woo, K. H. Park, and W. Y. Oh, “In vivo imaging of the hyaloid vascular regression and retinal and choroidal vascular development in rat eyes using optical coherence tomography angiography,” Sci. Rep. 10(1), 1–11 (2020). [CrossRef]  

42. S. Pi, A. Camino, M. Zhang, W. Cepurna, G. Liu, D. Huang, J. Morrison, and Y. Jia, “Angiographic and structural imaging using high axial resolution fiber-based visible-light OCT,” Biomed. Opt. Express 8(10), 4595–4608 (2017). [CrossRef]  

43. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710 (2012). [CrossRef]  

44. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.

45. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 (Institute of Electrical and Electronics Engineers Inc., 2017), 2017-Janua, pp. 2261–2269.

46. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014). [CrossRef]

47. P. Prentašic, M. Heisler, Z. Mammo, S. Lee, A. Merkur, E. Navajas, M. F. Beg, M. Šarunic, and S. Loncaric, “Segmentation of the foveal microvasculature using deep learning networks,” J. Biomed. Opt. 21(7), 075008 (2016). [CrossRef]  

48. N. Eladawi, M. Elmogy, O. Helmy, A. Aboelfetouh, A. Riad, H. Sandhu, S. Schaal, and A. El-Baz, “Automatic blood vessels segmentation based on different retinal maps from OCTA scans,” Comput. Biol. Med. 89, 150–161 (2017). [CrossRef]  

49. T. Pissas, E. Bloch, M. J. Cardoso, B. Flores, O. Georgiadis, S. Jalali, C. Ravasio, D. Stoyanov, L. Da Cruz, and C. Bergeles, “Deep iterative vessel segmentation in OCT angiography,” Biomed. Opt. Express 11(5), 2490 (2020). [CrossRef]  

50. M. Li, Y. Chen, Z. Ji, K. Xie, S. Yuan, Q. Chen, and S. Li, “Image Projection Network: 3D to 2D image segmentation in OCTA images,” IEEE Trans. Med. Imaging 39(11), 3343–3354 (2020). [CrossRef]  

51. J. Gołębiewska, K. Biała-Gosek, A. Czeszyk, and W. Hautz, “Optical coherence tomography angiography of superficial retinal vessel density and foveal avascular zone in myopic children,” PLoS One 14(7), e0219785 (2019). [CrossRef]  

52. S. U. Baek, Y. K. Kim, A. Ha, Y. W. Kim, J. Lee, J. S. Kim, J. W. Jeoung, and K. H. Park, “Diurnal change of retinal vessel density and mean ocular perfusion pressure in patients with open-angle glaucoma,” PLoS One 14(4), e0215684 (2019). [CrossRef]  

53. H. Akil, A. S. Huang, B. A. Francis, S. R. Sadda, and V. Chopra, “Retinal vessel density from optical coherence tomography angiography to differentiate early glaucoma, pre-perimetric glaucoma and normal eyes,” PLoS One 12(2), e0170476 (2017). [CrossRef]  

54. Q. S. You, J. Wang, Y. Guo, C. J. Flaxel, T. S. Hwang, D. Huang, Y. Jia, and S. T. Bailey, “Detection of reduced retinal vessel density in eyes with geographic atrophy secondary to age-related macular degeneration using projection-resolved optical coherence tomography angiography,” Am. J. Ophthalmol. 209, 206–212 (2020). [CrossRef]  
