
Automated segmentation of peripapillary retinal boundaries in OCT combining a convolutional neural network and a multi-weights graph search


Abstract

Quantitative analysis of the peripapillary retinal layers and capillary plexuses from optical coherence tomography (OCT) and OCT angiography images depends on two segmentation tasks – delineating the boundary of the optic disc and delineating the boundaries between retinal layers. Here, we present a method combining a neural network and graph search to perform these two tasks. A comparison of this novel method's segmentation of the disc boundary showed good agreement with the ground truth, achieving an overall Dice similarity coefficient of 0.91 ± 0.04 in healthy and glaucomatous eyes. The absolute error of retinal layer boundary segmentation in the same cases was 4.10 ± 1.25 µm.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) provides noninvasive, structural images of eye fundus tissue based on interferometric analysis of low-coherence light [1]. By considering blood-flow-induced temporal variation in the OCT signal, vasculature can be distinguished from static tissue. There are many versions of this technique; collectively they are termed OCT angiography (OCTA) [2–8]. Measurement of retinal layer thickness from structural OCT and analysis of capillary plexuses from OCTA can both aid clinical diagnosis and early detection of glaucoma, the leading cause of irreversible blindness globally [9–13]. But the clinical utility of such measurements requires accuracy and precision, both of which depend critically on the segmentation of both the optic disc boundary and the peripapillary retinal boundaries. Segmentation of these anatomical regions is, then, a critically important task.

Since manual segmentation is time-consuming, several methods to segment the optic disc and peripapillary retinal boundaries have been proposed [14–22]. For peripapillary retinal boundary segmentation, graph search algorithms based on intensity differences between anatomical slabs in structural OCT have been used frequently and show good results. Antony et al. proposed a 3D graph search method for the segmentation of both the optic disc boundary and the peripapillary retinal boundaries [16]. Zang et al. proposed a method that detected the optic disc boundary and segmented the peripapillary retinal boundaries separately using a dynamic-programming-based graph search algorithm [20]. Gao et al. proposed a method that combined the active appearance model and graph search to segment the peripapillary retinal boundaries [21]. Yu et al. proposed a shared-hole graph search method that first segments the optic disc boundary and then segments the peripapillary retinal boundaries [22]. However, speckle noise and vessel shadows both seriously degrade the accuracy of segmentation based on graph search alone.

Deep learning now plays an important role in medical image processing, and several learning-based methods exist for segmentation of OCT data [23–28]. Devalla et al. proposed a dilated-residual U-Net to segment optic nerve head tissues such as the lamina cribrosa, choroid, and sclera [25], but the peripapillary retinal boundaries were not segmented in that study. Kugelman et al. proposed a retinal boundary segmentation method for macular OCT based on a combination of recurrent neural networks and graph search [26]. However, the anatomical disruption caused by the optic disc makes peripapillary retinal boundary segmentation much more challenging than in the macular region; networks trained on macular OCT scans therefore may not generalize well to the peripapillary region.

In this study, we propose an automated segmentation method for optic disc boundary detection and peripapillary retinal layer segmentation. We designed two neural networks with the same architecture and trained them separately, one to segment the optic disc boundary and one to segment the peripapillary retinal layers. The final peripapillary retinal boundaries were calculated from the prediction and gradient maps using a multi-weights graph search algorithm.

2. Methods

2.1 Patient recruitment and data acquisition

In this study, 46 healthy participants and 63 participants with glaucoma were recruited and tested at the Casey Eye Institute, Oregon Health & Science University. The diagnosis of each participant was made by expert clinical examination. The participants were enrolled after informed consent in accordance with an Institutional Review Board approved protocol. The study was conducted in compliance with the Declaration of Helsinki.

The peripapillary retinal area was scanned using a commercial 70-kHz spectral-domain OCT system (Avanti RTVue-XR, Optovue Inc.) with 840-nm central wavelength. Each scan covered a 4.5 × 4.5 mm region, 1.6 mm in depth (304 × 304 × 640 pixels), centered on the optic disc. Two repeated B-frames were captured at each line-scan location. The blood flow at each line-scan location was detected using the split-spectrum amplitude-decorrelation angiography (SSADA) algorithm, based on the speckle variation between the two repeated B-frames [2,29]. The OCT structural images were obtained by averaging the two repeated B-frames. For each data set, two volumetric raster scans (one x-fast scan and one y-fast scan) were registered and merged through an orthogonal registration algorithm to reduce motion artifacts [30].
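As orientation only, the core inter-frame computation behind this step can be sketched as below. This is a minimal amplitude-decorrelation sketch, not the full SSADA implementation (which additionally splits the spectrum into bands and averages the decorrelation across them); the function names are hypothetical.

```python
import numpy as np

def decorrelation(b1, b2, eps=1e-6):
    """Inter-B-frame amplitude decorrelation at each pixel.

    b1, b2: repeated OCT amplitude B-frames from the same line-scan
    location. Static tissue gives values near 0; flow decorrelates
    the speckle and gives higher values. SSADA averages this
    quantity over several spectral splits, omitted here.
    """
    return 1.0 - (b1 * b2) / (0.5 * (b1 ** 2 + b2 ** 2) + eps)

def structural_average(b1, b2):
    """Structural B-frame obtained by averaging the repeated frames."""
    return 0.5 * (b1 + b2)
```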

In each OCT data set, the following layers or boundaries are anatomically important: inner limiting membrane (ILM), nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), ellipsoid zone (EZ), retinal pigment epithelium (RPE), and Bruch’s membrane (BM). In this study, seven boundaries (Vitreous/ILM, NFL/GCL, IPL/INL, INL/OPL, OPL/ONL, ONL/EZ, and RPE/BM) were manually segmented by a human grader.

2.2 Neural network design

The neural network used in this study was designed based on the architecture of the classic U-Net [31,32] (Fig. 1). Three max-pooling and (de)convolution layers were used in the down-sampling and up-sampling towers, respectively. Because each peripapillary retinal layer cannot be identified from its upper and lower boundaries alone, its global position within the whole retina is also an important feature. In order to capture both the relative and absolute location of each peripapillary retinal layer, a 3 × 3 normal convolution layer and a 3 × 3 atrous convolution layer [33,34] were cascaded together in each level of the down-sampling and up-sampling towers. In addition, a global block was designed to capture local and global information before the final classification layer. Batch normalization [35] and the exponential linear unit (ELU) activation [36] were applied after each convolution layer (except the output layer) to improve the stability of the final classification.
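As a sketch of what one such cascaded block could look like, here is a minimal PyTorch version. The paper does not state its framework, channel counts, or dilation rate, so those choices are assumptions:

```python
import torch.nn as nn

class CascadedAtrousBlock(nn.Module):
    """One level of the down/up-sampling towers: a 3x3 normal
    convolution cascaded with a 3x3 atrous (dilated) convolution,
    each followed by batch normalization and ELU. The dilation
    rate of 2 is an illustrative choice."""

    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ELU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ELU(),
        )

    def forward(self, x):
        return self.block(x)
```

Cascading a normal and a dilated convolution in each block lets the same level see both a tight local neighborhood and a wider context window, which is the stated motivation for capturing relative and absolute layer position.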

Fig. 1 The architecture of the designed neural network.

The Dice similarity coefficient (DSC) for each channel of the output map was used in the loss function:

$$\mathrm{Loss} = 1 - \frac{1}{N_c}\sum_{n=1}^{N_c}\frac{2\sum\big(Out_n \cdot Lab_n\big) + eps}{\sum Out_n + \sum Lab_n + eps} \tag{1}$$
where Nc is the number of final classes, eps is set to 1 × 10−5 to avoid division by zero, and Outn and Labn are the nth channels of the output map and of the corresponding label manually segmented by a certified grader. Stochastic gradient descent with Nesterov momentum (momentum = 0.9) was used to optimize the network variables and find the minimum of the loss function [37]. The learning rate (starting from an initial value of 0.1) was halved whenever the loss increased over three consecutive training steps. The same network architecture was used for the segmentation of both the optic disc and the peripapillary retinal boundaries.
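A minimal sketch of this loss and optimizer setup, assuming PyTorch and the standard soft-Dice form (the factor of 2 in the numerator is inferred from the DSC definition, since the published equation renders ambiguously in the source):

```python
import torch

def dice_loss(out, lab, eps=1e-5):
    """Soft Dice loss averaged over the Nc output channels (Eq. (1)).

    out, lab: (batch, Nc, H, W) tensors holding the network's
    per-channel probability maps and the one-hot manual labels.
    """
    dims = (0, 2, 3)                          # sum over batch and space
    inter = (out * lab).sum(dim=dims)
    denom = out.sum(dim=dims) + lab.sum(dim=dims)
    dsc = (2 * inter + eps) / (denom + eps)   # per-channel soft DSC
    return 1.0 - dsc.mean()

# SGD with Nesterov momentum as described above; the 1x1 convolution
# is only a stand-in for the network of Fig. 1.
model = torch.nn.Conv2d(5, 7, kernel_size=1)
opt = torch.optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, nesterov=True)
```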

The designed neural network was trained and tested in Python 3.6, and other image processing was performed in MATLAB 2018b. The workstation used in this study had an Intel Core i7-8700K CPU @ 3.70 GHz, 64.0 GB RAM, and an NVIDIA RTX 2080 GPU.

2.3 Optic disc boundary segmentation

The major challenge of peripapillary retinal boundary segmentation is the special structure of the optic disc, which is totally different from the surrounding retina and varies significantly between eyes. Because the en face shape of the optic disc is usually approximately circular, 180 diametral B-frames were generated based on the detected disc center, ensuring that the images used to train the optic disc segmentation network were structurally similar.

2.3.1 Optic disc center detection

The optic disc center is needed for sampling the 180 diametral B-frames. However, the optic disc is not always aligned at the exact center of the OCT data volume, and can be far from the image center (Fig. 2(A)). Therefore, we designed an automated localization algorithm for the optic disc that leverages the lack of anatomical layers in the disc region to determine its center. The internal, hierarchical structure of anatomic layers manifests clearly in OCT images after proper image manipulation. To elucidate these features within our data volumes, we designed a convolution kernel khie to generate a gradient map Ghie which demarcates the three strongest retinal layer gradients (Fig. 2(C) and 2(F)):

$$G_{hie} = \mathrm{Conv}\big(B_{normal},\, k_{hie}\big) \tag{2}$$
where Conv(·) is the convolution, Bnormal is each normal B-frame, and khie is a 5 × 5 kernel with −1/10 in the first two rows and 1/15 in the last three rows. The binary image of each gradient map was then generated by extracting the layers with intensity above an empirically determined threshold (Fig. 2(D) and 2(G)). Because the optic disc lacks internal hierarchical structure, only one layer was detected inside it (Fig. 2(G)). After all the volumetric binary images were generated, we constructed an en face accumulation image by summing the separate binary images (Fig. 2(H)). This leaves the region of the optic disc darker, since it retains only one layer after binarization (instead of three) and so obtains lower values in the accumulation image. A binary en face image Ib was then generated based on the center region Ic (red box in Fig. 2(H)) of the accumulation image to improve the detection stability. This binarization process was defined as:
$$I_b(x,y) = \begin{cases} 1, & 1 - I_c(x,y) > 1.3 \times \mathrm{mean}\big(1 - I_c\big) \\ 0, & \text{otherwise} \end{cases} \tag{3}$$
The optic disc center was then calculated as the geometric center of this binary image. Though some large vessels might still be visible in the binary image due to vessel shadows, the calculation of the optic disc center is unaffected owing to their approximately rotationally symmetric distribution around the disc.
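A compact sketch of this detection pipeline, assuming NumPy/SciPy, normalized accumulation values, and treating the gradient threshold (unspecified in the paper) as a parameter:

```python
import numpy as np
from scipy.ndimage import convolve

def disc_center(volume, thresh):
    """Estimate the optic disc center from a volumetric OCT scan.

    volume: array of shape (n_bscans, depth, width).
    thresh: empirically chosen gradient threshold (a parameter here,
    since the paper does not state its value).
    """
    # 5x5 hierarchical-gradient kernel k_hie: -1/10 in the first two
    # rows, 1/15 in the last three rows (zero-sum edge detector).
    k_hie = np.vstack([np.full((2, 5), -1 / 10.0),
                       np.full((3, 5), 1 / 15.0)])

    # En face accumulation: sum of binarized gradient layers per A-line.
    accum = np.zeros((volume.shape[0], volume.shape[2]))
    for i, bframe in enumerate(volume):
        g = convolve(bframe.astype(float), k_hie)
        accum[i] = (g > thresh).sum(axis=0)

    # Central two-thirds region, inverted binarization (Eq. (3)):
    h, w = accum.shape
    rc, cc = h // 6, w // 6
    ic = accum[rc:h - rc, cc:w - cc]
    inv = 1.0 - ic / ic.max()               # disc region is darker
    ib = inv > 1.3 * inv.mean()

    ys, xs = np.nonzero(ib)                  # geometric center
    return ys.mean() + rc, xs.mean() + cc
```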

Fig. 2 Diagram of the optic disc center detection. (A) En face average projection of the volumetric OCT. The detected optic disc region is covered by green. The red point is the detected disc center. (B) The normal B-frame corresponding to the position of the left blue line, which is outside of the disc. (C) The gradient map of the B-frame in (B). (D) The binary image of the layers with highest gradient intensity in (C). (E) The normal B-frame corresponding to the position of the right blue line, which is inside the disc. (F) The gradient map of the B-frame in (E). (G) The binary image of the layers with highest gradient intensity in (F). Note the single band of pixels in the disc region. (H) En face accumulation projection based on the volumetric gradient map. The center region with two thirds of the image length is indicated by the red box.

2.3.2 Diametral B-frames generation and disc boundary segmentation

The 180 diametral B-frames and corresponding labels were then generated from 1° to 180° based on the detected optic disc center and resized to 416 × 416 (416 being the pixel length of the image diagonal). After this we cropped the images for network training (Fig. 3). Because the optic disc boundary is defined as the Bruch's membrane opening (BMO) [38], the area of EZ + RPE (cyan region in Fig. 3(C)) and the remaining B-frame area constituted the input labels for the disc boundary network. The initial en face optic disc binary image was then obtained from the 180 prediction maps of the trained network through a coordinate transformation. Because the region so obtained was rough, we performed an edge smoothing process on the initial boundary consisting of two steps. First, the bumpy artifacts were removed through a morphological opening process. After that, the convex hull of the disc region was calculated to ensure the final disc region was convex (Fig. 4).
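The sketch below illustrates both steps of this subsection under stated assumptions: diametral resampling via linear interpolation, and the two-step smoothing. The 5 × 5 opening structuring element is an assumption, as the paper does not give one.

```python
import numpy as np
from scipy.ndimage import map_coordinates, binary_opening
from skimage.morphology import convex_hull_image

def diametral_bframe(volume, center, angle_deg, n_samples=416):
    """Resample one diametral B-frame through the detected disc
    center at a given en face angle.

    volume: (n_bscans, depth, width); center: (row, col) in the
    en face plane."""
    t = np.arange(n_samples) - n_samples / 2.0
    theta = np.deg2rad(angle_deg)
    rows = center[0] + t * np.sin(theta)     # en face row (B-scan index)
    cols = center[1] + t * np.cos(theta)     # en face column
    depth = volume.shape[1]
    # Sample every depth position along the diametral line.
    rr = np.repeat(rows[None, :], depth, axis=0)
    cc = np.repeat(cols[None, :], depth, axis=0)
    zz = np.repeat(np.arange(depth)[:, None], n_samples, axis=1)
    return map_coordinates(volume, [rr, zz, cc], order=1)

def smooth_disc_region(mask):
    """Two-step smoothing of the initial en face disc mask:
    morphological opening to remove bumpy artifacts, then the
    convex hull to enforce a convex disc region."""
    opened = binary_opening(mask, structure=np.ones((5, 5)))
    return convex_hull_image(opened)
```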

Fig. 3 Generation of diametral B-frames. (A) En face average projection of a volumetric OCT scan from a glaucoma patient. The green point is the automatically detected optic disc center. The two red lines with angle and arrows indicate planes along which the diametral B-frames are generated. (B) The diametral B-frame corresponding to the red line at 1°. The green line corresponds to the optic disc center (green point) in (A). The region between the two blue lines is the optic disc. The peripapillary retina is to the left and right of the blue lines. (C) The generated diametral B-frame corresponding to the red line at 45° in (A). The manually segmented EZ + RPE are colored in cyan.

Fig. 4 Smoothing process of the initial optic disc boundary. (A) Volumetric prediction maps of EZ + RPE. (B) Initial optic disc region based on the en face projection of (A). (C) The bump artifacts were removed using morphological opening. (D) The final optic disc region after the convex hull calculation.

2.4 Peripapillary retinal layer segmentation

The training data set for the peripapillary retinal boundary segmentation network was built from the manually delineated boundaries between retinal layers. In order to provide extra features for learning and to help mitigate errors due to layer distortion and vessel shadows near the disc, we organized each input as a combination of several adjacent B-frames. Each input image in the training data set therefore had size 416 × 304 × 5, combining five adjacent B-frames as channels (Fig. 5(A)). Each input label was calculated from the manually segmented retinal layers of the middle (i.e., third, marked by the red arrow in Fig. 5(A)) B-frame of the corresponding image. The size of each input label was 416 × 304 × 7, with the first channel corresponding to the area outside the retina and the other channels to the regions of the six main retinal layers (Fig. 5(C)).
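A minimal sketch of this five-channel input construction; edge B-frames, which lack two neighbors on one side, are simply skipped here, as the paper does not say how it handles them:

```python
import numpy as np

def five_frame_inputs(volume, labels):
    """Yield (image, label) training pairs.

    volume: (n_bscans, 416, 304) structural B-frames.
    labels: (n_bscans, 416, 304, 7) one-hot manual layer maps.
    Each image stacks five adjacent B-frames as channels; the label
    is the 7-channel map of the middle (third) frame.
    """
    for i in range(2, volume.shape[0] - 2):
        image = np.stack([volume[i + k] for k in range(-2, 3)],
                         axis=-1)            # shape (416, 304, 5)
        yield image, labels[i]               # shape (416, 304, 7)
```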

Fig. 5 The image and corresponding label in the training data set for the designed neural network for peripapillary retinal boundaries segmentation. (A) Image constructed from five adjacent B-frames. (B) Colormap of the peripapillary retinal layers based on the manually delineated boundaries of the B-frame marked by red arrow in (A). Six major layers are shown: NFL (red), IPL (green), INL (yellow), OPL (blue), ONL (purple), and EZ + RPE (cyan). (C) The seven channel labels based on the manual delineation of the third channel of (A).

After the trained neural network produced initial boundaries Binitial from the prediction maps of each B-frame in the volumetric OCT, the final eight boundaries were obtained by refining the initial boundaries using a multi-weights graph search (Fig. 6). (The EZ/RPE boundary was not segmented by the neural network; we added it at this step in order to obtain a complete segmentation.) To improve the accuracy and stability of this graph search, weights were calculated not just from the search direction but also from the vertical distance to the initial boundaries. The multi-weights graph search was defined as

$$P(x,z) = \min_{i=1,\dots,n}\Big( P\big(x-1,\, z+d(i)\big) + G(x,z)\times\big(w(i) + \left|z+d(i)-B_{initial}(x-1)\right|\times 0.1\big) \Big) \tag{4}$$

with, away from the optic disc,

$$n = 7,\quad d = [-3,\, -2,\, -1,\, 0,\, 1,\, 2,\, 3],\quad w = [1.4,\, 1.2,\, 1.0,\, 1.0,\, 1.0,\, 1.2,\, 1.4]$$

where P(x,z) is the cost of the shortest path from the first column to the coordinate (x,z) in the xth column, G(x,z) is the pixel value in the corresponding gradient map (examples in Fig. 2(C) and 2(F)), z+d(i) is the row of one of the n neighboring pixels in the (x−1)th column, and w(i) is the empirically determined weight assigned to each search direction.

Fig. 6 The initial boundaries were refined by a multi-weights graph search. (A) The prediction map generated from the trained neural network. (B) The initial boundaries based on the prediction map in (A). The optic disc region, as automatically determined by the algorithm, is indicated by the solid light blue vertical lines. The region between these lines and the orange dotted lines is where refined weights in the graph search are used to ensure convergence to the BMO. This region covers one quarter of the distance between the edge of the image and the optic disc. (C) The final boundaries after the multi-weights graph search and smoothing.

Near the optic disc, there is large variation in the Vitreous/ILM boundary location. Furthermore, in this region we require that the boundaries converge to the BMO. To achieve these goals, we modified the search weights in this region (between the orange and blue lines in Fig. 6(B)) according to Eqs. (5) and (6):

For Vitreous/ILM:

$$n = 21,\quad d = [-10,\, -9,\, \dots,\, 0,\, \dots,\, 9,\, 10],$$
$$w = [1.8,\, 1.8,\, \dots,\, 1.8,\, 1.4,\, 1.2,\, 1.0,\, 1.0,\, 1.0,\, 1.2,\, 1.4,\, 1.8,\, \dots,\, 1.8,\, 1.8] \tag{5}$$
For the NFL/GCL, IPL/INL, INL/OPL, OPL/ONL, and ONL/EZ:
$$n = 17,\quad d = [-8,\, -7,\, \dots,\, 0,\, \dots,\, 7,\, 8],$$
$$w = [1.4,\, 1.4,\, 1.2,\, 1.2,\, 1.1,\, 1.1,\, 1.0,\, 1.0,\, 1.2,\, 1.4,\, 1.6,\, 1.8,\, 2.0,\, 2.2,\, 2.4,\, 2.6,\, 2.8] \tag{6}$$
The search order of the eight boundaries was RPE/BM → Vitreous/ILM → NFL/GCL → ONL/EZ → IPL/INL → OPL/ONL → INL/OPL → EZ/RPE, and the search region comprised the initial estimate plus the six pixels above and below it. For a boundary without an initial value, the search area was changed to [Bpre − 6, Bpre + 6], in which Bpre was the corresponding boundary segmented in the previous B-frame. In addition, each boundary was constrained to lie within the upper and lower limits of its associated slab. For the region inside the optic disc, just the top and bottom boundaries were segmented, based on the binary image of the whole retina. These weights and parameters were chosen empirically using only the training data set and are applied unchanged to future data. After the boundary segmentation, each boundary was smoothed by a mean filter of size 5 × 5. A sketch of the dynamic program follows.
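The sketch below implements the dynamic program of Eq. (4) for a single boundary on one B-frame. It assumes the gradient map is scaled so that boundary pixels carry low cost (so that minimizing the path cost tracks the boundary) and integer initial boundary positions; near the disc, one would pass the wider stencils of Eqs. (5) and (6) instead of the default ones.

```python
import numpy as np

def multi_weight_graph_search(cost, b_init, d, w, band=6):
    """Dynamic-programming refinement of one boundary (Eq. (4)).

    cost:   (depth, width) per-pixel cost map (the gradient map G,
            assumed scaled so boundary pixels are cheap).
    b_init: integer initial boundary row per column (from the network).
    d, w:   transition offsets and direction weights, e.g. Eq. (4).
    band:   search restricted to b_init +/- band pixels (6 in paper).
    """
    depth, width = cost.shape
    P = np.full((width, depth), np.inf)
    back = np.zeros((width, depth), dtype=int)

    lo, hi = max(0, b_init[0] - band), min(depth, b_init[0] + band + 1)
    P[0, lo:hi] = cost[lo:hi, 0]

    for x in range(1, width):
        lo = max(0, b_init[x] - band)
        hi = min(depth, b_init[x] + band + 1)
        for z in range(lo, hi):
            best, arg = np.inf, z
            for di, wi in zip(d, w):
                zp = z + di                 # predecessor row, column x-1
                if 0 <= zp < depth and P[x - 1, zp] < np.inf:
                    # Distance penalty grows with vertical deviation
                    # from the initial boundary (the |.| x 0.1 term).
                    c = P[x - 1, zp] + cost[z, x] * (
                        wi + abs(zp - b_init[x - 1]) * 0.1)
                    if c < best:
                        best, arg = c, zp
            P[x, z], back[x, z] = best, arg

    # Backtrack from the cheapest node in the last column.
    boundary = np.zeros(width, dtype=int)
    boundary[-1] = int(np.argmin(P[-1]))
    for x in range(width - 1, 0, -1):
        boundary[x - 1] = back[x, boundary[x]]
    return boundary
```

Running this once per boundary in the order given above, each pass confined to the ±6-pixel band around its initialization, mirrors the refinement stage described in this subsection.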

3. Results

In this study, 78 eyes from 46 healthy individuals and 104 eyes from 63 glaucoma patients were scanned. From this data set, 30 scan volumes each from different healthy participants and glaucoma patients were chosen for the training data set (10800 inputs for optic disc boundary segmentation and 18000 inputs for peripapillary retinal boundary segmentation). The training batch size was set to 4: two inputs were randomly chosen from the glaucoma training data and two from the normal training data. The trained model was obtained after 18000 training steps. The rest of the data set was used to test the performance of this segmentation method; there was no overlap between the cases used in the training and testing data sets.
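A minimal sketch of this balanced batch sampling (the identifiers are hypothetical):

```python
import random

def balanced_batch(glaucoma_ids, normal_ids, batch_size=4):
    """Draw a training batch with equal numbers of glaucoma and
    normal inputs (two of each for the paper's batch size of 4)."""
    half = batch_size // 2
    return (random.sample(glaucoma_ids, half)
            + random.sample(normal_ids, half))
```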

3.1 Qualitative analysis

In Fig. 7, the segmented optic disc is shown in green. The region corresponds to the area expected from visual inspection.

Fig. 7 The segmentation results of the optic disc boundary. In each part, the optic disc or its boundary is shown in green. (A) The en face average projection of the volumetric OCT scanned from a healthy participant. (B) The bottom-to-top 3D view of the volumetric OCT of (A). (C) The en face average projection of the volumetric OCT scanned from a glaucoma patient. (D) The bottom-to-top 3D view of the volumetric OCT of (C).

The segmented peripapillary retinal boundaries from a healthy participant are shown in Fig. 8. In addition, the anatomical structures outside and inside the optic disc are clearly shown in Figs. 8(B) and 8(C). The superficial vascular complex (SVC), defined as the inner 80% of the ganglion cell complex (GCC), includes all structures between the ILM and the IPL/INL border [13,39]. An en face SVC angiogram was generated by projecting the maximum decorrelation within this slab [40–42]. In addition, the segmentation results for OCT data from a glaucoma patient are shown in Fig. 9. The angiogram of the NFL slab, which is critically important to the detection and diagnosis of glaucoma, was defined as the radial peripapillary capillary plexus (RPCP). Notably, the glaucomatous wedge-shaped defect can be visualized on both the RPCP angiogram (Fig. 9(B)) and the NFL thickness map (Fig. 9(C)) [9–13]. The superotemporal area with capillary loss can also be seen clearly in the RPCP (marked by a green line in Fig. 9(B)).
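A sketch of the maximum-projection step used to form such en face angiograms, assuming a decorrelation volume and two segmented boundary surfaces as inputs:

```python
import numpy as np

def en_face_max_projection(angio, top, bottom):
    """En face angiogram by maximum decorrelation projection within
    a slab (e.g., Vitreous/ILM to IPL/INL for the SVC).

    angio:       (n_bscans, depth, width) decorrelation volume.
    top, bottom: (n_bscans, width) boundary row indices.
    Returns an (n_bscans, width) en face image.
    """
    depth = angio.shape[1]
    zz = np.arange(depth)[None, :, None]     # broadcast over depth
    in_slab = (zz >= top[:, None, :]) & (zz < bottom[:, None, :])
    return np.where(in_slab, angio, 0).max(axis=1)
```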

Fig. 8 Segmentation results of the left eye of a healthy participant. (A) The en face average projection, with the segmented optic disc region overlaid in green. (B) The 3D anatomical map of the entire volumetric OCT based on the segmented peripapillary retinal layers. (C) Cutaway from (B) at the blue line location in (A), clearly showing the anatomic structure inside the disc. (D) En face SVC angiogram based on the segmented boundaries. (E) B-frame corresponding to the red line in (A) with segmented peripapillary retinal boundaries. (F) Corresponding image for the blue line in (A). The slab boundaries are, from top to bottom, the Vitreous/ILM (red), NFL/GCL (green), IPL/INL (yellow), INL/OPL (blue), OPL/ONL (magenta), ONL/EZ (cyan), EZ/RPE (red) and RPE/BM (blue).

Fig. 9 Segmentation results for the right eye of a glaucoma patient. (A) En face average projection image, with the segmented optic disc region overlaid in green. (B) En face RPCP angiogram based on the segmented boundaries. Capillary loss in the superotemporal area is marked with a green line. (C) NFL thickness map based on the segmented peripapillary retinal boundaries. (D) B-frame corresponding to the red line in (A) with segmented peripapillary retinal boundaries. (E) Corresponding image for the blue line in (A). (F) The 3D anatomical map of whole volumetric OCT based on the segmented peripapillary retinal layers. (G) Cutaway from (F) at the blue line location in (A), clearly showing anatomic structure inside the optic disc. The slab boundaries are, from top to bottom, the Vitreous/ILM (red), NFL/GCL (green), IPL/INL (yellow), INL/OPL (blue), OPL/ONL (magenta), ONL/EZ (cyan), EZ/RPE (red) and RPE/BM (blue).

3.2 Quantitative analysis

We tested 21960 diametral B-frames generated from 122 volumetric OCT scans to assess the performance of the neural network used in optic disc boundary detection. The mean ± standard deviation of the testing loss (Eq. (1)) between the prediction maps and ground truth labels was 0.033 ± 0.028. We also calculated the DSC between the predicted final disc boundaries and the corresponding manual delineations. The DSC was 0.92 ± 0.03 in normal and 0.91 ± 0.05 in glaucomatous eyes.

To assess the performance of peripapillary retinal boundary segmentation, we calculated the absolute errors (in µm, based on 3.125 µm/pixel) of the peripapillary retinal boundaries between our method and manual delineation (Table 1). The overall absolute errors were similar for healthy and glaucomatous eyes. Because NFL thickness is a critical feature for the detection and diagnosis of glaucoma, the NFL thickness based on our method was calculated and compared with the gold standard based on manual delineation. The mean ± standard deviation of the NFL thickness differences (manual minus automated) was 2.14 ± 1.45 µm in glaucomatous and 1.67 ± 1.83 µm in normal eyes.
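As a worked illustration of how an NFL thickness map follows from the segmented boundaries and the 3.125 µm axial pixel size:

```python
import numpy as np

def nfl_thickness_map(ilm, nfl_gcl, um_per_pixel=3.125):
    """NFL thickness map in micrometers.

    ilm, nfl_gcl: (n_bscans, width) arrays of row indices for the
    Vitreous/ILM and NFL/GCL boundary surfaces."""
    return (nfl_gcl - ilm) * um_per_pixel
```

The reported differences are then simply the manual thickness map minus the automated one, averaged over the peripapillary region.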

Table 1. Segmentation accuracy of our method

As another test of the algorithm presented here, we also compared our results to those obtained with our previous method, which was based exclusively on the graph search algorithm [20]. The comparison of the segmentation accuracy of the peripapillary retinal boundaries is shown in Table 2.

Table 2. Comparison of the peripapillary retinal boundaries segmentation

Table 2 shows clearly that segmentation accuracy and stability were both improved by combining the neural network with the classic graph search.

3.3 Neural network analysis

Inside the neural network, the addition of the atrous convolution layer in each atrous block and of the global block greatly improved performance. To further analyze the network design, we compared the validation accuracy (based on DSC) of peripapillary retinal layer segmentation between four architectures: the original U-Net, U-Net + global block, U-Net + cascaded atrous block, and U-Net + global block + cascaded atrous block (proposed) (Table 3). Adding the cascaded atrous convolution layers in the down- and up-sampling towers and the global block at the end of the network clearly improved the convergence of the neural network. In addition, the validation accuracies on the healthy and glaucoma data using only one input channel (the middle one), instead of the five used in our algorithm, were 84.11% and 83.53%, respectively. These accuracies were about 2% lower than those shown in the last column of Table 3, indicating that the five-channel input design was effective.

Table 3. Comparison of the validation accuracy between different architectures

Figure 10 shows example feature maps learned by the network in the normal convolution layers of the global block. In each map the network has learned to highlight specific retinal layers or combinations thereof; together, these maps yield a complete segmentation.

Fig. 10 The sixteen feature maps of normal layers in the Global block.

4. Discussion

The structure inside the optic disc, layer distortion near the optic disc, and vessel shadows constitute three major difficulties for peripapillary retinal boundary segmentation. First, the optic disc needs to be segmented before the peripapillary retinal boundaries because of its unique anatomical structure. We solved this challenge by utilizing a geometric reorientation (diametral B-frames) and training a neural network on this more amenable geometry. The generated diametral B-frames have a high degree of structural consistency, which greatly increased the segmentation accuracy and stability of the optic disc boundary. In addition, the smoothing method, which conformed to the anatomical features of the optic disc, also guaranteed the fidelity of the boundary. In the peripapillary retinal boundary segmentation stage, we did not use diametral B-frames because they run in the same directions as the large vessels. The large vessel shadows hardly influence the segmentation of the single EZ + RPE layer, but they would degrade the segmentation accuracy of six adjacent layers.

For the network architecture, the atrous-convolution layers and global block in the neural network could capture both local and global information at each pixel. The combination of the input data and neural network used in the design guaranteed that the peripapillary retinal boundaries segmentation would not be influenced by either disc distortion or vessel shadows.

Though the segmentation accuracy was greatly improved by using the neural network, limitations remain. The performance of this method was limited by the depth and breadth of the training data set. To use this method on other OCT devices with different scan patterns, or on data from patients with different eye diseases, the training data set would need to be expanded. However, the complexity of the network architecture should be sufficient to learn either new pathologies or new instruments, since even in these situations the OCT scans have nearly the same overall structure. In a future study, this method will be applied to an expanded training data set to broaden its capabilities.

5. Conclusions

We combined a neural network with the traditional graph search method to segment both the optic disc and the peripapillary retinal boundaries in optic-disc-centered volumetric OCT scans. The addition of the neural network greatly improved both segmentation accuracy and stability. The quantified tissue information, especially the NFL thickness and the analysis of capillary plexuses, has the potential to significantly improve the diagnosis and early detection of glaucoma.

Funding

National Institutes of Health (R01 EY023285, R01 EY027833, R01EY024544, P30 EY010572); Research to Prevent Blindness (New York, NY) (unrestricted departmental funding grant and William & Mary Greve Special Scholar Award).

Disclosures

Oregon Health & Science University (OHSU), David Huang and Yali Jia, have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).

2. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012).

3. R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three dimensional optical angiography,” Opt. Express 15(7), 4083–4097 (2007).

4. S. Yousefi, Z. Zhi, and R. K. Wang, “Eigendecomposition-based clutter filtering technique for optical micro-angiography,” IEEE Trans. Biomed. Eng. 58(8), 2316–2323 (2011).

5. S. Makita, Y. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno, “Optical coherence angiography,” Opt. Express 14(17), 7821–7840 (2006).

6. A. Mariampillai, B. A. Standish, E. H. Moriyama, M. Khurana, N. R. Munce, M. K. Leung, J. Jiang, A. Cable, B. C. Wilson, I. A. Vitkin, and V. X. Yang, “Speckle variance detection of microvasculature using swept-source optical coherence tomography,” Opt. Lett. 33(13), 1530–1532 (2008).

7. A. S. Nam, I. Chico-Calero, and B. J. Vakoc, “Complex differential variance algorithm for optical coherence tomography angiography,” Biomed. Opt. Express 5(11), 3822–3832 (2014).

8. J. Enfield, E. Jonathan, and M. Leahy, “In vivo imaging of the microcirculation of the volar forearm using correlation mapping optical coherence tomography (cmOCT),” Biomed. Opt. Express 2(5), 1184–1193 (2011).

9. Y. C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, and C. Y. Cheng, “Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis,” Ophthalmology 121(11), 2081–2090 (2014).

10. R. N. Weinreb, T. Aung, and F. A. Medeiros, “The pathophysiology and treatment of glaucoma: a review,” JAMA 311(18), 1901–1911 (2014).

11. Y. Jia, E. Wei, X. Wang, X. Zhang, J. C. Morrison, M. Parikh, L. H. Lombardi, D. M. Gattey, R. L. Armour, B. Edmunds, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Optical coherence tomography angiography of optic disc perfusion in glaucoma,” Ophthalmology 121(7), 1322–1332 (2014).

12. L. Liu, Y. Jia, H. L. Takusagawa, A. D. Pechauer, B. Edmunds, L. Lombardi, E. Davis, J. C. Morrison, and D. Huang, “Optical coherence tomography angiography of the peripapillary retina in glaucoma,” JAMA Ophthalmol. 133(9), 1045–1052 (2015).

13. J. P. Campbell, M. Zhang, T. S. Hwang, S. T. Bailey, D. J. Wilson, Y. Jia, and D. Huang, “Detailed vascular anatomy of the human retina by projection-resolved optical coherence tomography angiography,” Sci. Rep. 7(1), 42201 (2017).

14. M. K. Garvin, M. D. Abramoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008).

15. Z. Hu, M. Niemeijer, K. Lee, M. D. Abramoff, M. Sonka, and M. K. Garvin, “Automated segmentation of the optic disc margin in 3-D optical coherence tomography images using a graph-theoretic approach,” Proc. SPIE 7262, 72620U (2009).

16. B. J. Antony, M. D. Abràmoff, K. Lee, P. Sonkova, P. Gupta, Y. Kwon, M. Niemeijer, Z. Hu, and M. K. Garvin, “Automated 3D segmentation of intraretinal layers from optic nerve head optical coherence tomography images,” Proc. SPIE 7626, 76260U (2010).

17. K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abramoff, “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imaging 29(1), 159–168 (2010).

18. M. S. Miri, M. D. Abràmoff, K. Lee, M. Niemeijer, J. K. Wang, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from SD-OCT and color fundus photographs using a machine-learning graph-based approach,” IEEE Trans. Med. Imaging 34(9), 1854–1866 (2015).

19. Z. Hu, C. A. Girkin, A. Hariri, and S. R. Sadda, “Three-dimensional choroidal segmentation in spectral OCT volumes using optic disc prior information,” Proc. SPIE 9697, 96971S (2016).

20. P. Zang, S. S. Gao, T. S. Hwang, C. J. Flaxel, D. J. Wilson, J. C. Morrison, D. Huang, D. Li, and Y. Jia, “Automated boundary detection of the optic disc and layer segmentation of the peripapillary retina in volumetric structural and angiographic optical coherence tomography,” Biomed. Opt. Express 8(3), 1306–1318 (2017).

21. E. Gao, F. Shi, W. Zhu, C. Jin, M. Sun, H. Chen, and X. Chen, “Graph search–active appearance model based automated segmentation of retinal layers for optic nerve head centered OCT images,” in SPIE Medical Imaging (SPIE, 2017), paper 101331Q.

22. K. Yu, F. Shi, E. Gao, W. Zhu, H. Chen, and X. Chen, “Shared-hole graph search with adaptive constraints for 3D optic nerve head optical coherence tomography image segmentation,” Biomed. Opt. Express 9(3), 962–983 (2018).

23. S. Apostolopoulos, S. De Zanet, C. Ciller, S. Wolf, and R. Sznitman, “Pathological OCT retinal layer segmentation using branch residual U-shape networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2017), pp. 294–301.

24. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017).

25. S. K. Devalla, P. K. Renukanand, B. K. Sreedhar, G. Subramanian, L. Zhang, S. Perera, J. M. Mari, K. S. Chin, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, “DRUNET: a dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images,” Biomed. Opt. Express 9(7), 3244–3265 (2018).

26. J. Kugelman, D. Alonso-Caneiro, S. A. Read, S. J. Vincent, and M. J. Collins, “Automatic segmentation of OCT retinal boundaries using recurrent neural networks and graph search,” Biomed. Opt. Express 9(11), 5759–5777 (2018).

27. A. Camino, Z. Wang, J. Wang, M. E. Pennesi, P. Yang, D. Huang, D. Li, and Y. Jia, “Deep learning for the segmentation of preserved photoreceptors on en face optical coherence tomography in two inherited retinal diseases,” Biomed. Opt. Express 9(7), 3092–3105 (2018).

28. Y. Guo, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “MEDnet, a neural network for automated detection of avascular area in OCT angiography,” Biomed. Opt. Express 9(11), 5147–5158 (2018).

29. S. S. Gao, G. Liu, D. Huang, and Y. Jia, “Optimization of the split-spectrum amplitude-decorrelation angiography algorithm on a spectral optical coherence tomography system,” Opt. Lett. 40(10), 2305–2308 (2015).

30. M. F. Kraus, J. J. Liu, J. Schottenhamml, C. L. Chen, A. Budai, L. Branchini, T. Ko, H. Ishikawa, G. Wollstein, J. Schuman, J. S. Duker, J. G. Fujimoto, and J. Hornegger, “Quantitative 3D-OCT motion correction with tilt and illumination correction, robust similarity measure and regularization,” Biomed. Opt. Express 5(8), 2591–2613 (2014).

31. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.

32. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, 2015), pp. 234–241.

33. F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv:1511.07122 [cs.CV] (2016).

34. Q. Zhang, Z. Cui, X. Niu, S. Geng, and Y. Qiao, “Image segmentation with pyramid dilated convolution based on ResNet and U-Net,” in Neural Information Processing (Springer International Publishing, 2017), pp. 364–372.

35. S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proceedings of the 32nd International Conference on Machine Learning (JMLR.org, 2015), pp. 448–456.

36. D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” arXiv:1511.07289 [cs.LG] (2015).

37. S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv:1609.04747 [cs.LG] (2016).

38. A. S. Reis, N. O’Leary, H. Yang, G. P. Sharpe, M. T. Nicolela, C. F. Burgoyne, and B. C. Chauhan, “Influence of clinically invisible, but optical coherence tomography detected, optic disc margin anatomy on neuroretinal rim evaluation,” Invest. Ophthalmol. Vis. Sci. 53(4), 1852–1860 (2012).

39. T. S. Hwang, M. Zhang, K. Bhavsar, X. Zhang, J. P. Campbell, P. Lin, S. T. Bailey, C. J. Flaxel, A. K. Lauer, D. J. Wilson, D. Huang, and Y. Jia, “Visualization of 3 distinct retinal plexuses by projection-resolved optical coherence tomography angiography in diabetic retinopathy,” JAMA Ophthalmol. 134(12), 1411–1419 (2016).

40. M. Zhang, T. S. Hwang, J. P. Campbell, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Projection-resolved optical coherence tomographic angiography,” Biomed. Opt. Express 7(3), 816–828 (2016).

41. J. Wang, M. Zhang, T. S. Hwang, S. T. Bailey, D. Huang, D. J. Wilson, and Y. Jia, “Reflectance-based projection-resolved optical coherence tomography,” Biomed. Opt. Express 8(3), 1536–1548 (2017).

42. T. T. Hormel, J. Wang, S. T. Bailey, T. S. Hwang, D. Huang, and Y. Jia, “Maximum value projection produces better en face OCT angiograms than mean value projection,” Biomed. Opt. Express 9(12), 6412–6424 (2018).
