Optica Publishing Group

OCT-OCTA segmentation: combining structural and blood flow information to segment Bruch’s membrane

Open Access

Abstract

In this paper we present a fully automated graph-based segmentation algorithm that jointly uses optical coherence tomography (OCT) and OCT angiography (OCTA) data to segment Bruch’s membrane (BM). This is especially valuable in cases where the spatial correlation between BM, which is usually not visible on OCT scans, and the retinal pigment epithelium (RPE), which is often used as a surrogate for segmenting BM, is distorted by pathology. We validated the performance of our proposed algorithm against manual segmentation in a total of 18 eyes from healthy controls and patients with diabetic retinopathy (DR), non-exudative age-related macular degeneration (AMD) (early/intermediate AMD, nascent geographic atrophy (nGA), drusen-associated geographic atrophy (DAGA), and geographic atrophy (GA)), and choroidal neovascularization (CNV), obtaining a mean absolute error of ∼0.91 pixels (∼4.1 μm). This paper suggests that OCT-OCTA segmentation may be a useful framework to complement the growing usage of OCTA in ophthalmic research and clinical communities.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is a well-established imaging modality in ophthalmology. With its recent commercial spread, OCT angiography (OCTA), an extension of OCT that enables non-invasive visualization of ocular vasculature at micron-scale resolution, has generated considerable interest in the ophthalmic research and clinical communities [1–9]. While OCTA provides clinicians and researchers with a new tool to study ocular pathologies, it has also introduced potential artifacts [10]. Because the posterior eye vasculature is largely organized along en face plexuses, OCTA data are most commonly visualized via en face projections, which are typically taken between contours separating different retinal layers. Because of the eye’s curvature, and because any given retinal layer is curved itself, simple “re-slicing” of the volume along the plane orthogonal to the optical axis cannot guarantee proper visualization of the ocular vascular network. In particular, such re-slicing will result in an en face image that contains different retinal layers as a function of lateral position. For this reason, segmentation of the different ocular layers is a necessary prerequisite for many types of OCTA interpretation and analysis. Manual tracing remains the gold standard for segmentation, but the large data sizes of OCT(A) volumes make it impractically laborious in most settings, necessitating the use of automatic algorithms.

Perhaps nowhere is the requirement for accurate segmentation more stringent than in OCTA analysis of the choriocapillaris (CC), the ∼10 μm thick capillary network of the choroid. Analysis of the CC is of great interest in studying a number of diseases, including age-related macular degeneration (AMD), where the CC is thought to play an important role [11–13]. En face OCTA analysis of the CC typically requires segmentation of Bruch’s membrane (BM), a 2-4 μm thick acellular matrix situated between the retinal pigment epithelium (RPE) and the CC [14]. The en face OCTA CC image is then formed by projecting the OCTA volume between the BM segmentation and an offset of the segmentation contour. Because of limited segmentation accuracy, this projection is sometimes taken over alternate ranges not exactly corresponding to the CC (e.g., over a larger range, or a range slightly posterior to the CC). However, for the purposes of this paper we assume that the OCTA volume is only projected through the depths spanned by the CC. The CC’s axial extent is such that CC segmentation requires accuracy relatively close to the axial resolution of the imaging system; commercial OCT instruments typically have axial resolutions of 5-10 μm, full-width-at-half-maximum, in tissue.
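
As an illustration of the projection step described above, the following minimal numpy sketch forms an en face CC image by averaging the OCTA signal over a fixed band below the BM contour (the function name, array layout, and `offset_px` parameter are our assumptions, not from the paper):

```python
import numpy as np

def enface_cc(octa, bm, offset_px):
    """Mean OCTA signal between the BM contour and BM + offset_px.

    octa: (Z, X, Y) OCTA volume; bm: (X, Y) axial BM pixel index per A-scan;
    offset_px: assumed band thickness spanning the CC, in pixels.
    """
    z = np.arange(octa.shape[0])[:, None, None]
    band = (z >= bm[None]) & (z < bm[None] + offset_px)  # CC band per A-scan
    return (octa * band).sum(axis=0) / band.sum(axis=0)
```

A contour placed a few pixels too anterior averages over the avascular RPE (low signal), while one placed too posterior includes Sattler’s/Haller’s vessels (high signal), which is why the accuracy requirement is so stringent.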

Failure to accurately segment BM can substantially alter both qualitative and quantitative analysis of the CC. For example, OCTA analysis is often performed under the assumption that low or no OCTA signal in the CC indicates a reduction or loss of CC blood flow. Such regions, termed flow deficits (also “flow voids” and “signal voids”), constitute a major entity of interest in current OCTA-based studies of the CC [15]. However, the validity of this assumption depends on the accuracy of segmentation. In particular, if the segmentation contour is anterior to BM, the OCTA signal will be low, irrespective of the CC blood flow status (in the absence of pathology, the RPE is avascular). Conversely, if the segmentation line is posterior to BM, the OCTA signal will be high, irrespective of the CC blood flow status (Sattler’s and/or Haller’s layer will be included).

Other researchers have recognized the importance of segmenting BM (or closely related structures), and there have been a number of previous publications addressing this topic. There exist mainly two approaches: indirect and direct. The indirect approaches [16–27] consist of segmenting the RPE as a substitute for BM and then using methods such as polynomial fitting to estimate the position of BM in locations where the RPE-BM complex is distorted by pathology. The drawback of this approach is that the result is governed only by prior assumptions about the curvature of BM instead of being based on an actual measured signal. The direct approaches [28–33] determine the location of BM directly using machine learning methods; it is unclear how these methods determine BM, since it is not visible on normal OCT scans. However, to our knowledge, all previous studies have exclusively used OCT data alone. We hypothesize that OCTA data can be used synergistically with OCT data for the segmentation of BM, and other ocular structures, because the spatial relationship between BM and the CC, which is only visible on OCTA, is more stable than the RPE-BM complex in some pathologies. Therefore, we present in this paper an OCT-OCTA segmentation algorithm based on graph theory and a convolutional neural network for segmenting BM in the presence of pathology.

2. Developing a physiological basis for OCT-OCTA segmentation

To develop a rationale for OCT-OCTA segmentation of BM, it is necessary to develop an understanding of the structural and angiographic features of the RPE-BM-CC complex in healthy and pathologic eyes, as well as the appearance of these features on OCT and OCTA imaging. The following five subsections establish this understanding, and its linkage to OCT-OCTA segmentation.

2.1 Normal anatomy of the RPE, BM, and the CC

In the absence of pathology, the RPE, BM, and CC, share a close spatial relationship, with the anterior-most layer of BM being the basal lamina of the RPE, and the posterior-most layer of BM being the basal lamina of the CC. A brief description of the RPE, BM, and CC is given below, with a focus on their geometries.

  • RPE: The RPE is a single cell layer that is located immediately posterior to the neurosensory retina, which it nourishes. The thickness of the RPE-BM complex varies with a variety of factors, including age, disease, and retinal location, but is ∼25 μm in the central subfield [34,35].
  • BM: BM is a penta-layered acellular matrix, ∼2-4 μm in thickness, situated between the RPE and CC [14]. For the purposes of this paper we will use “BM” to refer to the middle three layers of this structure, excluding the basal lamina of the RPE and basal lamina of the CC [14,36–38].
  • CC: The CC, the capillary network of the choroid, is a monolayer meshwork of capillaries that is situated directly below BM. CC vessels have diameters of ∼7-10 μm when measured in the plane perpendicular to BM (i.e., axially) [39] and ∼16-20 μm when measured in the plane parallel to BM (i.e., transversely) [40].

2.2 Pathological features disrupting the RPE-BM-CC complex

Many retinal pathologies disrupt the normal RPE-BM-CC complex described in the previous section. Below we briefly discuss pathology-related features of particular significance to BM segmentation.

  • Drusen: A hallmark of AMD, drusen are lipid accumulations that form between BM and the RPE. Due to their location, drusen create a separation between BM and the RPE [36,41–43].
  • Choroidal neovascularization (CNV): Also termed macular neovascularization, CNV often involves the growth of vasculature through the BM into the sub-RPE (type I) or sub-retinal (type II) space. This aberrant vasculature can directly, or via fluid leakage or hemorrhage, create a spatial separation between BM and the RPE.
  • RPE Atrophy: RPE atrophy is the defining feature of nascent geographic atrophy (nGA), drusen-associated GA (DAGA), and GA.
  • CC Impairment: CC impairment, which can include atrophy or a reduction of blood flow, has been shown to occur during the natural aging process, as well as in all stages of AMD, with increasing impairment associated with increasing AMD severity [44–51]. CC impairment also occurs in diabetic retinopathy (DR) [52].

2.3 OCT of the RPE, BM, and CC in health and disease

  • RPE: In the absence of pathology, the RPE appears as a relatively thick, bright band on OCT (it is highly back-scattering; Fig. 1(a)). In AMD eyes with RPE atrophy, RPE absence or impairment results in OCT hypertransmission (Fig. 1(g) and Fig. 1(i)).
  • BM: When BM and the RPE retain their normal spatial proximity, it is typically difficult to resolve these two structures with standard resolution OCT. Thus, the position of BM is often inferred as the posterior-most aspect of the RPE (this is sometimes referred to as the “RPE fit”). When pathological features, such as drusen or CNV, separate the RPE and BM, BM is sometimes visible as a thin bright line (Fig. 1(e), Fig. 1(k)). BM is also visible in eyes with RPE atrophy, as in nGA, DAGA, and GA (Fig. 1(g), Fig. 1(i)).
  • CC: Unlike the larger choroidal vasculature of Sattler’s and Haller’s layer, the CC is not well resolved in OCT scans.


Fig. 1. Examples of B-scans demonstrating the appearance of the structural and angiographic features of the RPE-BM-CC complex in healthy and pathologic eyes on OCT and OCTA imaging. The columns show the signals for OCT (left) and OCTA (right), and each row corresponds to a different pathology. The scale bar in Fig. 1(a) applies to all subfigures.


2.4 OCTA of the RPE, BM, and CC in health and disease

  • RPE: In the absence of pathology, the RPE is avascular, and therefore appears as a dark band on OCTA (Fig. 1(b)). However, in eyes with type II CNV, neovasculature can produce an OCTA signal within the RPE. Additionally, projection artifacts can cause larger retinal vasculature to appear in the position of the RPE (Fig. 1(b)).
  • BM: In the absence of pathology, BM is avascular, and therefore appears as a dark band on OCTA (Fig. 1(b)). However, in eyes with type I and type II CNV, neovascularization can produce an OCTA signal crossing through BM (Fig. 1(l)). As with the RPE, projection artifacts can cause larger retinal vasculature to appear in the position of BM.
  • CC: In the absence of pathology, the CC appears on OCTA as a high-signal layer immediately below BM. In the macula, the small intercapillary spacing of the CC makes it challenging to resolve individual capillaries using standard OCTA systems, though capillaries are resolvable with adaptive optics (AO)-OCTA [53] as well as through high-speed OCTA with averaging [54]. In AMD, the CC OCTA signal becomes progressively more patchy and attenuated, resulting in an increase in flow deficits (Fig. 1(f), Fig. 1(h), Fig. 1(j), Fig. 1(l)). In GA, the CC OCTA signal is substantially attenuated throughout much of the atrophic region (Fig. 1(j)); attenuated CC OCTA signals are also notable under regions of RPE atrophy in nGA and DAGA (Fig. 1(h)). Patchy and attenuated OCTA signals are also present, though to a lesser extent, in DR eyes (Fig. 1(d)).

2.5 Putting it together: motivating OCT-OCTA segmentation

The preceding four sections establish that structural (OCT) and blood flow (OCTA) features both provide information that can be used to determine the position of BM. In some situations, this information is largely redundant: for example, in healthy eyes, the position of BM may be reliably estimated as the posterior aspect of the OCT signal from the RPE-BM complex. Alternatively, the position of BM could be taken as the anterior-most aspect of the OCTA signal from the CC. In other situations, one source may be more informative than the other: for example, in areas of drusen where BM is not resolved, the OCT signal provides only indirect information about the position of BM, whereas the CC OCTA signal provides direct information. As a complementary example, in regions of GA, the CC OCTA signal is typically very attenuated, whereas BM is easily visible on OCT.

3. Methods

This section begins by describing the patient selection and image acquisition process, followed by an explanation of the proposed algorithm for coping with these different appearances when segmenting BM. It concludes with a description of the evaluation methodology employed.

3.1 Patient and data selection

The study was approved by the institutional review boards at the Massachusetts Institute of Technology and Tufts Medical Center. All participants were imaged in the ophthalmology clinic at the New England Eye Center at Tufts Medical Center. Written informed consent was obtained from all subjects prior to imaging. The research adhered to the Declaration of Helsinki and the Health Insurance Portability and Accountability Act. All subjects underwent a complete ophthalmic examination, including a detailed history, refraction, intraocular pressure measurement, anterior segment examination, and a dilated fundus examination by a general ophthalmologist or a retinal specialist at the New England Eye Center. Patients underwent color fundus photography, near-infrared reflectance (NIR) imaging, and fundus autofluorescence (FAF). Retrospective swept source OCT (SS-OCT) and swept source OCTA (SS-OCTA) data, acquired from the system described in Section 3.2, were then examined for the identified patients.

In total, data from 30 eyes of 24 subjects were used in this study; the exact numbers per pathology can be found in Table 1. For the training and evaluation of the algorithm, the volumes were split into a training set, a validation set, and a test set. Of the five eyes per group, one was randomly assigned to the training set, one to the validation set, and three to the test set, so that the test set contained the majority of the scans. This distribution was chosen to make the evaluation more meaningful.


Table 1. Number of subjects and eyes per pathology used in this study.

3.2 Image acquisition and pre-segmentation processing

OCTA data were acquired with a 400 kHz SS-OCT system, having an axial resolution of ∼8-9 μm and a transverse resolution of ∼20 μm, full-width-at-half-maximum, in tissue [55]. Fringes were sampled using an optically clocked analog-to-digital card (Alazar Technologies, Quebec, Canada) with a maximum clock frequency of ∼1.1 GHz; a fiber Bragg grating was used to stabilize fringe phase [8]. Incident power on the cornea was ∼1.8 mW. 6 mm × 6 mm areas centered on the fovea were imaged using a protocol that acquired 500 A-scans per B-scan, with 5 repeated B-scans per retinal location, at 500 distinct retinal locations. Accounting for the galvanometer duty cycle, the interscan time between repeated B-scans was ∼1.5 ms. To minimize the effects of patient motion, orthogonal x-fast and y-fast OCT volumes were acquired and subsequently registered using a previously published algorithm [56–58]. OCTA images were computed using an intensity-based (i.e., amplitude decorrelation) scheme; OCTA data were not thresholded. The digital pixel spacings were ∼4.5 μm in the axial ($z$) coordinate, and ∼12 μm in the transverse ($x$ and $y$) coordinates.
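
As a quick consistency check on these protocol numbers (using only values stated above; the ≈83% duty cycle is our inference, not a reported figure):

```python
# Stated acquisition parameters.
a_scan_rate_hz = 400e3          # 400 kHz swept-source A-scan rate
a_scans_per_bscan = 500

# Time spent actively acquiring the A-scans of one B-scan.
active_bscan_time_s = a_scans_per_bscan / a_scan_rate_hz  # 1.25 ms

# The reported interscan time additionally includes galvanometer flyback.
interscan_time_s = 1.5e-3
duty_cycle = active_bscan_time_s / interscan_time_s       # ≈ 0.83
```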

3.3 OCT-OCTA segmentation algorithm

Our proposed algorithm uses the graph-cut framework, which has been widely used for OCT B-scan segmentation [59]. In the graph-cut framework, OCT B-scans are treated as weighted, directed graphs, and layers are segmented by computing shortest paths. In this approach, each pixel is regarded as a vertex and is connected to its upper, right, and lower neighboring pixels. To segment a layer, two additional columns with minimal weights are added to the left and right of the graph. The starting point for Dijkstra’s algorithm is then set to the top left corner and the ending point to the bottom right. Consequently, the algorithm can move without additional cost within these added columns and find the correct starting and ending points of the layer. The additional columns are removed after the segmentation. In standard graph-cut approaches, edge weights are computed using axial OCT gradients, which capture edge features. However, in this study we seek a more general mapping of both OCT and OCTA data that reflects the varying utility that these features have in determining BM (see Section 2.5). In particular, the mapping from OCT and OCTA data to graph weights should vary as the RPE-BM-CC complex varies (e.g., from healthy to drusen).
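
The shortest-path formulation can be sketched as follows. For brevity, this sketch simplifies the edge weights: the cost of traversing an edge is taken as a per-pixel cost at the destination pixel, rather than the pairwise weights of Chiu et al. [59] or the learned edge weights used in this work; `segment_layer` and `pad_weight` are hypothetical names:

```python
import heapq
import numpy as np

def segment_layer(cost, pad_weight=1e-6):
    """Shortest-path layer segmentation over a (Z, X) per-pixel cost image.

    Each pixel connects to its upper, right, and lower neighbours; two
    low-weight padding columns let the path enter and exit freely, and are
    stripped again at the end. Returns one axial index per original column.
    """
    z, x = cost.shape
    padded = np.full((z, x + 2), pad_weight)
    padded[:, 1:-1] = cost
    start, goal = (0, 0), (z - 1, x + 1)   # top-left to bottom-right
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                        # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c + 1)):
            if 0 <= nr < z and 0 <= nc < x + 2:
                nd = d + padded[nr, nc]     # cost of entering the neighbour
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Trace the path back; keep one row per interior (non-padding) column.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    contour = {}
    for r, c in reversed(path):
        if 1 <= c <= x:
            contour[c] = r
    return np.array([contour[c] for c in range(1, x + 1)])
```

In the proposed algorithm, these per-pixel costs would be replaced by the horizontal and vertical edge weights produced by the network.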

In this study, we opted to construct this mapping using convolutional neural networks (CNNs) based on the U-Net architecture. The U-Net architecture was selected because of its proven utility in segmentation tasks [60], and because it is fully convolutional, in contrast to network architectures with fully connected layers, which are restricted to a fixed B-scan image size. Our network outputs a mask with two entries per pixel (except for the last row and column of the B-scan): one for the horizontal edge weight that connects the pixel to its right neighbor, and another for the vertical edge weight that connects the pixel to its bottom neighbor. The vertical edge is bidirectional, and can thus also be used as an upward connection from the lower pixel with the same weight.

We optimized the architecture and parameters of our network by training over 20 networks with different architectures on our training set and selecting the network with the best performance on our validation set. The selected network architecture is shown in Fig. 2. On the encoder side, each block consists of five convolutional layers with a kernel size of $15\times 15$, followed by a pooling layer. In the first two blocks, 16 feature maps are created per convolution, followed by 32 feature maps in blocks three and four, and 64 feature maps in blocks five and six. The decoder mirrors the encoder, except that instead of a pooling layer there is an up-convolution layer, followed by a ReLU layer to create the edge weight mask.
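
To convey the scale implied by these choices, the following sketch counts encoder parameters (illustrative only: we assume a two-channel OCT+OCTA input and one bias per output channel, neither of which is stated explicitly; the helper names are ours):

```python
def conv_params(k, c_in, c_out):
    # Weights plus one bias per output channel for a k x k convolution.
    return k * k * c_in * c_out + c_out

def block_params(k, c_in, c_out, n_layers=5):
    # First layer maps c_in -> c_out; the remaining layers keep c_out.
    return conv_params(k, c_in, c_out) + (n_layers - 1) * conv_params(k, c_out, c_out)

# Channel progression described in the text: 16, 16, 32, 32, 64.
channels = [16, 16, 32, 32, 64]
c_in = 2  # assumed: OCT and OCTA B-scans stacked as two input channels
encoder = []
for c_out in channels:
    encoder.append(block_params(15, c_in, c_out))
    c_in = c_out
```

This also makes concrete why the relatively large $15\times 15$ kernels are parameter-hungry, a point revisited in the limitations.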


Fig. 2. Architecture of the proposed network. The encoder and decoder sides consist of five convolutional blocks each. Each block consists of five convolutional layers with a kernel size of 15. On the encoder side each block is followed by a pooling layer, whereas on the decoder side each block is preceded by an up-convolution layer. The number of channels per convolution is denoted at the bottom of each block.


As the training target, the edge weight mask was set to 0 for edges that were part of the manual BM segmentation contour and to 2 for all other edges. These numbers were chosen to match the range of the traditional axial gradient-based approach of Chiu et al. [59]. As augmentation, all B-scans were flipped horizontally. The network weights were initialized using He initialization [61]. We used the ReLU activation function with batch normalization. The training loss was defined as the mean squared error between the edge weight masks generated from the manual segmentation and the ones generated by the network. The network was trained for 100 epochs with a batch size of three using the Adam optimizer with an initial learning rate of 0.00005.
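
The construction of the target masks can be sketched as follows (a numpy sketch; for diagonal contour transitions we arbitrarily place the zero-weight vertical edges in the left-hand column, which is our assumption, and the horizontal-flip augmentation is indicated in a comment):

```python
import numpy as np

def target_edge_masks(contour, z, x):
    """Training targets from a manual BM contour (one row index per column).

    Edges on the contour get weight 0, all other edges weight 2, matching
    the weight range of the gradient-based approach of Chiu et al.
    Returns (horizontal, vertical) masks of shape (z, x-1) and (z-1, x).
    """
    horiz = np.full((z, x - 1), 2.0)
    vert = np.full((z - 1, x), 2.0)
    for c in range(x - 1):
        r, r_next = contour[c], contour[c + 1]
        if r == r_next:
            horiz[r, c] = 0.0           # step right along the contour
        else:
            lo, hi = sorted((r, r_next))
            vert[lo:hi, c] = 0.0        # vertical steps (assumed: left column)
            horiz[r_next, c] = 0.0      # then step right at the new row
    return horiz, vert

# Horizontal-flip augmentation: flip the B-scan and its contour together,
# e.g. flipped_bscan = bscan[:, ::-1] and flipped_contour = contour[::-1].
```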

The final algorithm workflow is shown in Fig. 3 and consists of first constructing a graph of the scans and computing a reference contour that lies approximately in the middle of the retina. This first step is performed to limit the search space for the subsequent BM segmentation. The reference surface is determined using the following steps: (1) Computation of the axial gradient of the OCT B-scan to compute the edge weights of the graph (filtering with a Gaussian filter kernel of $\sigma$ = 20 μm in the axial and $\sigma$ = 100 μm in the transverse direction first, followed by the axial gradient calculation using a window size of four pixels for more robustness). (2) Determination of the brightest contour as the shortest path across the graph using Dijkstra’s algorithm [62]. (3) Computation of the shortest paths above and below the first result. (4) Definition of the reference surface as the average of the axial positions of the two brightest of the three contours. (5) Noise reduction in the reference contour by median filtering, followed by fitting a second order polynomial to it.
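
Step (5) can be sketched as follows (the median kernel size and function names are our choices, not values reported in the paper):

```python
import numpy as np

def smooth_reference(contour, kernel=15):
    """Step (5): median-filter the contour, then fit a 2nd-order polynomial."""
    pad = kernel // 2
    padded = np.pad(contour.astype(float), pad, mode="edge")
    med = np.array([np.median(padded[i:i + kernel])
                    for i in range(len(contour))])
    cols = np.arange(len(contour))
    # The polynomial fit enforces the smooth, low-curvature reference shape.
    return np.polyval(np.polyfit(cols, med, deg=2), cols)
```

The median filter suppresses isolated outliers before the global polynomial fit, so a single gross error in the raw contour barely perturbs the reference surface.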


Fig. 3. Depiction of the algorithm workflow used in this work.


Afterwards, the OCT and OCTA B-scans are denoised using a bilateral filter [63] ($\sigma _d$ = 4.5 μm in the axial and transverse directions and $\sigma _r$ = 0.5). For the BM segmentation, the bilaterally filtered B-scan is fed into the neural network to compute the edge weight mask. A weighted graph is then constructed in which the edge weights between the pixels are taken from the horizontal and vertical entries in the edge mask. The search space of the graph is limited to an area of 300 μm below the reference contour, in which the BM is detected by again using Dijkstra’s algorithm. An optional post-processing step is to filter the segmentation results. All parameters of the algorithm were determined on the validation set described in Section 3.1.
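
The restriction of the search space can be sketched as follows (assuming the ∼4.5 μm axial pixel spacing from Section 3.2, so that 300 μm corresponds to roughly 67 pixels; the mask layout and function name are our assumptions):

```python
import numpy as np

AXIAL_UM_PER_PX = 4.5  # axial pixel spacing reported in Section 3.2

def restrict_search_band(horiz, vert, reference, band_um=300.0):
    """Disallow graph edges outside [reference, reference + band_um].

    horiz, vert: edge-weight masks of shape (Z, X-1) and (Z-1, X).
    reference:   per-column axial index of the mid-retina reference contour.
    Edges touching any pixel outside the band get infinite weight.
    """
    band_px = int(round(band_um / AXIAL_UM_PER_PX))
    z = vert.shape[0] + 1
    rows = np.arange(z)[:, None]
    outside = (rows < reference[None, :]) | (rows > reference[None, :] + band_px)
    h, v = horiz.copy(), vert.copy()
    h[outside[:, :-1] | outside[:, 1:]] = np.inf   # right-neighbour edges
    v[outside[:-1, :] | outside[1:, :]] = np.inf   # down-neighbour edges
    return h, v
```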

3.4 Evaluation

We evaluated our proposed algorithm, which we refer to as ConvOA (Conv for convolution; O for OCT; and A for OCTA), using an absolute, pixel-wise error relative to manual segmentations. We also compared the performance of our algorithm against a standard graph-cut algorithm with edge weights computed using gradient OCT information (GradO; Grad for gradient) or gradient OCTA information (GradA). Finally, to assess whether the performance differences of ConvOA were solely due to the deep learning based weight setting, rather than the combination of structure and flow information, we trained two additional networks, one with only OCT data as input (ConvO) and one with only OCTA data as input (ConvA). These two networks had the same architecture as the one used in the ConvOA case, to make them comparable. No post-processing was applied, so as not to bias the segmentation results.
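
The error metric itself is straightforward; a sketch, using the ∼4.5 μm axial pixel spacing from Section 3.2 for the μm conversion:

```python
import numpy as np

AXIAL_UM_PER_PX = 4.5  # axial pixel spacing reported in Section 3.2

def abs_pixel_error(auto, manual):
    """Mean absolute, pixel-wise error between two contours, in px and um."""
    err_px = np.mean(np.abs(np.asarray(auto, float) - np.asarray(manual, float)))
    return err_px, err_px * AXIAL_UM_PER_PX
```

At this spacing, the ∼0.91 pixel mean error reported in the abstract corresponds to ∼4.1 μm.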

To demonstrate that our results hold for different training and test splits, we also performed a five-fold cross validation in which, for each fold, one volume per pathology was randomly assigned to the test set, one to the validation set, and three to the training set. The cross validation was performed with the ConvOA network using ten epochs and a learning rate of 0.0005. For each fold, the network that performed best on the validation set was evaluated on the test set.

4. Results

For each pathology there were three volumes in the test set, for a total of 1500 B-scans per pathology and 9000 B-scans across all cases. The mean and standard deviation of the absolute, pixel-wise errors, for each pathology separately and in total, are summarized in Table 2, and boxplots are displayed in Fig. 4. Qualitative examples of the segmentation results are provided in Fig. 5. Histograms of the large absolute errors (errors larger than two pixels) of the different algorithms are shown in Fig. 6, where errors greater than or equal to ten are accumulated. For the five-fold cross validation, one volume per pathology was in the test set, leading to a total of 500 B-scans per pathology and 3000 B-scans across all cases. The results for the five-fold cross validation (mean and standard deviation of the absolute, pixel-wise errors for each pathology separately and in total, for all folds and the mean over all folds) are shown in Table 3.
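
The accumulation of large errors into the right-most histogram bin can be sketched as follows (assuming integer pixel errors; the bin layout is our reading of Fig. 6):

```python
import numpy as np

def large_error_histogram(errors, lo=3, hi=10):
    """Bin counts for absolute errors larger than two pixels.

    Returns counts for error values lo, lo+1, ..., hi-1, plus a final
    bin that accumulates every error greater than or equal to hi.
    """
    e = np.clip(np.asarray(errors), None, hi)   # fold >= hi into one bin
    return np.array([int((e == b).sum()) for b in range(lo, hi + 1)])
```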


Table 2. Mean and standard deviation (in brackets) for the absolute segmentation errors in pixels for the five evaluated algorithms for each pathology separately and in total. The lowest value per pathology is highlighted in bold print.


Table 3. Mean and standard deviation (in brackets) for the absolute segmentation errors in pixels for the three algorithms in the cross validation for each pathology separately and in total for the five folds of the cross validation and the mean over all five folds. The lowest value per pathology and experiment is highlighted in bold print.


Fig. 4. Boxplots of the segmentation results for the five evaluated algorithms, for each pathology separately and in total. Red bars indicate median error, red crosses indicate mean error, blue boxes indicate 25$\%$-75$\%$ quantiles, and whiskers indicate 5$\%$-95$\%$ quantiles. The dotted red horizontal lines correspond to the lowest median error across the tested algorithms.



Fig. 5. Qualitative examples of segmentation results in various pathologies (teal = manual segmentation, green = GradO, purple = ConvOA; the other algorithms are omitted for the sake of clarity). The scale bar in Fig. 5(a) applies to all subfigures.



Fig. 6. Histograms of absolute errors larger than two pixels for the different algorithms and pathologies. Errors greater than or equal to ten are accumulated in the right-most bar.


5. Discussion

In this paper we presented an OCT-OCTA segmentation approach for segmenting BM in the presence of pathology. The motivation to use both structure (OCT) and blood flow (OCTA) information arose from observations about how pathology alters the RPE-BM-CC complex, and how these alterations are reflected in OCT and OCTA imaging. We were particularly motivated by the close spatial relationship between BM and the CC, which is maintained in certain situations in which the RPE-BM relationship is altered, such as drusen. The varying importance of OCT and OCTA data in determining the position of BM led us to adopt a deep learning approach. Comparison of our ConvOA algorithm with the other tested algorithms yields several notable findings.

First, in the absence of pathology, the standard OCT gradient based approach has the best performance. This is consistent with the observation that the normal RPE-BM-CC complex allows the position of BM to be reliably inferred from the OCT volume as the bottom of the RPE. We would also expect the GradO algorithm to outperform the GradA algorithm in normal eyes because (1) the RPE and CC have similar spatial associations to BM, and (2) OCTA data have a lower signal-to-noise ratio than OCT data. Nevertheless, the errors made by the ConvOA algorithm are still very low and close to those of the GradO algorithm. For the training of the network only one volume per pathology was available, which provides only limited insight into how pathologies manifest in the retina. Consequently, we expect that more training data, and thus higher variability, would further decrease the segmentation errors of the ConvOA algorithm.

As discussed in Section 2.3, DR does not alter the RPE-BM complex. Therefore the results yield the same findings as in the non-pathologic case, where the GradO algorithm performs best in terms of mean and median absolute error. Notably, the boxplots (Fig. 4) and error histograms (Fig. 6) show that the GradO algorithm makes more large errors than the ConvOA algorithm. One reason might be that the contrast between the RPE and CC is lowered by vessel shadow artifacts, leading to less prominent axial gradients in this region, as shown in the example of Fig. 5(d). In DR there are, in addition to the normal blood vessels, structures like microaneurysms that cast extra shadows, creating larger regions that are susceptible to errors when using only the axial gradient.

Second, in all other pathologies, where the RPE-BM complex is altered, the results show that the ConvOA algorithm outperforms the other four approaches. The error distributions in Fig. 6 show that the presented algorithm makes far fewer large segmentation errors. Exemplary cases that explain this difference are shown in Fig. 5: the GradO algorithm either follows the altered RPE-BM complex (Fig. 5(e), 5(f), 5(g), 5(h), 5(k), 5(l)) or cuts across regions where the axial gradient is not a meaningful measure for distinguishing between the RPE and CC (Fig. 5(i), 5(j)), leading to large segmentation errors. In contrast, the ConvOA algorithm, with its combined information, is able to follow the position of BM relatively closely.

Overall, one can conclude from the results that the gradient method on OCT achieves the best score in the ideal case where there is no RPE alteration. However, when it comes to robustness in the presence of pathology, the neural networks give much more stable results across all pathologies presented in this paper. Moreover, the combination of OCT and OCTA information as input reduces the segmentation errors in all cases compared to using only OCT or only OCTA as input. This substantiates our initial hypothesis that the additional OCTA information improves performance.

It is important to emphasize that the presented algorithm segments BM, not the CC (although, in some situations, these two entities may be indistinguishable from one another under standard OCT imaging). Three points support this claim. First, BM is continuous, like our segmented contour, whereas the CC is a vascular network whose inter-capillary spacings present topological partitions not present in our segmented contour. Second, since the posterior-most aspect of BM is formed from the basal lamina of the CC, a smooth contour fitted along the anterior surface of the CC corresponds to the position of BM. Third, in regions with CC alteration, such as beneath areas of nGA and DAGA, our estimated contour remains continuous, unlike the CC. For these reasons we feel it is most appropriate to say that we are segmenting BM, not the CC, although we do use the latter to estimate the position of the former.

There are several limitations to this study. First, the number of eyes per pathology on which the developed algorithm was tested is small, meaning that the presented results may not generalize to larger data sets. The primary reason for the small number of included eyes is that manual segmentation is very time-consuming. While the number of analyzed eyes is a limitation, it is somewhat mitigated by the relatively large number of analyzed B-scans (500 B-scans per volume). Moreover, the number of volumes in the training set was even lower, such that the algorithms based on convolutional neural networks might be at a disadvantage compared to the axial gradient methods, which do not need training data. Consequently, these algorithms might perform better when trained on a larger cohort with more variability in the training data. This dependence on the quality, variability, and availability of training data is also a drawback of the proposed algorithm, as it is of all deep learning-based segmentation approaches. The presented algorithm is also limited in its applicability as it requires, by design, OCTA data. However, the rapidly increasing availability of OCTA in commercial systems makes the advantages shown and the theoretical considerations provided in this paper widely applicable. Finally, the network architecture could be further optimized; for example, the number of parameters could be reduced by stacking multiple small convolution kernels instead of using relatively large $15\times 15$ kernels [64].

6. Conclusion

In this paper we motivated and proposed a general framework for OCT-OCTA segmentation. Additionally, we presented an OCT-OCTA segmentation algorithm, which uses both OCT and OCTA data, for segmenting BM in the presence of pathology. We validated the presented algorithm against manual segmentation data, showed close correspondence and robustness in the presence of pathology, and demonstrated that it performs better than approaches using OCT or OCTA data alone. Overall, we believe that OCT-OCTA segmentation naturally complements the continued adoption of OCTA in ophthalmology. In future work, the algorithm could be trained with more data, and the concept of OCT-OCTA segmentation could be transferred to other ocular structures.

Funding

Deutsche Forschungsgemeinschaft (MA 4898/12-1); National Institutes of Health (5-R01-EY011289-31); Air Force Office of Scientific Research (FA9550-15-1-0473); Macula Vision Research Foundation; Champalimaud Vision Award; Beckman-Argyros Award in Vision Research.

Acknowledgments

We are grateful to the NVIDIA corporation for supporting our research with the donation of a Quadro P6000 GPU.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. S. Makita, Y. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno, “Optical coherence angiography,” Opt. Express 14(17), 7821–7840 (2006). [CrossRef]  

2. J. Fingler, D. Schwartz, C. Yang, and S. E. Fraser, “Mobility and transverse flow visualization using phase variance contrast with spectral domain optical coherence tomography,” Opt. Express 15(20), 12636–12653 (2007). [CrossRef]  

3. Y. K. Tao, A. M. Davis, and J. A. Izatt, “Single-pass volumetric bidirectional blood flow imaging spectral domain optical coherence tomography using a modified Hilbert transform,” Opt. Express 16(16), 12350–12361 (2008). [CrossRef]  

4. A. Mariampillai, B. A. Standish, E. H. Moriyama, M. Khurana, N. R. Munce, M. K. K. Leung, J. Jiang, A. Cable, B. C. Wilson, A. Vitkin, and V. X. D. Yang, “Speckle variance detection of microvasculature using swept-source optical coherence tomography,” Opt. Lett. 33(13), 1530–1532 (2008). [CrossRef]  

5. E. Jonathan, J. Enfield, and M. J. Leahy, “Correlation mapping: rapid method for retrieving microcirculation morphology from optical coherence tomography intensity images,” Proc. SPIE 7898, 78980M (2011). [CrossRef]  

6. C. Blatter, T. Klein, B. Grajciar, T. Schmoll, W. Wieser, R. Andre, R. Huber, and R. A. Leitgeb, “Ultrahigh-speed non-invasive widefield angiography,” J. Biomed. Opt. 17(7), 0705051 (2012). [CrossRef]  

7. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]  

8. W. Choi, K. J. Mohler, B. Potsaid, C. D. Lu, J. J. Liu, V. Jayaraman, A. E. Cable, J. S. Duker, R. Huber, and J. G. Fujimoto, “Choriocapillaris and choroidal microvasculature imaging with ultrahigh speed OCT angiography,” PLoS One 8(12), e81499 (2013). [CrossRef]  

9. D. M. Schwartz, J. Fingler, D. Y. Kim, R. j. Zawadzki, L. S. Morse, S. S. Park, S. E. Fraser, and J. S. Werner, “Phase-variance optical coherence tomography: a technique for noninvasive angiography,” Ophthalmology 121(1), 180–187 (2014). [CrossRef]  

10. R. F. Spaide, J. G. Fujimoto, N. K. Waheed, S. R. Sadda, and G. Staurenghi, “Optical coherence tomography angiography,” Prog. Retinal Eye Res. 64, 1–55 (2018). [CrossRef]  

11. G. Lutty, J. Grunwald, A. B. Majji, M. Uyama, and S. Yoneya, “Changes in choriocapillaris and retinal pigment epithelium in age-related macular degeneration,” Mol. Vis. 5 (1999).

12. D. S. McLeod, R. Grebe, I. Bhutto, C. Merges, T. Baba, and G. A. Lutty, “Relationship between RPE and choriocapillaris in age-related macular degeneration,” Invest. Ophthalmol. Visual Sci. 50(10), 4982–4991 (2009). [CrossRef]  

13. I. Bhutto and G. Lutty, “Understanding age-related macular degeneration (AMD): Relationships between the photoreceptor/retinal pigment epithelium/Bruch’s membrane/choriocapillaris complex,” Mol. Aspects Med. 33(4), 295–317 (2012). [CrossRef]  

14. C. A. Curcio and M. Johnson, “Structure, function, and pathology of Bruch’s membrane,” in Retina, S. J. Ryan, ed. (Saunders, Philadelphia, 2012), chap. 20, pp. 465–481.

15. R. F. Spaide, “Choriocapillaris flow features follow a power law distribution: implications for characterization and mechanisms of disease progression,” Am. J. Ophthalmol. 170, 58–67 (2016). [CrossRef]  

16. Q. Yang, C. A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A. S. Raza, D. C. Hood, and K. Chan, “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Opt. Express 18(20), 21293–21307 (2010). [CrossRef]  

17. S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, “Fast detection and segmentation of drusen in retinal optical coherence tomography images,” in Proceedings of SPIE BiOS 6844, (SPIE, 2008).

18. K. Yi, M. Mujat, B. H. Park, W. Sun, J. W. Miller, J. M. Seddon, L. H. Young, J. F. de Boer, and T. C. Chen, “Spectral domain optical coherence tomography for quantitative evaluation of drusen and associated structural changes in non-neovascular age-related macular degeneration,” Br. J. Ophthalmol. 93(2), 176–181 (2009). [CrossRef]  

19. B. Baumann, E. Götzinger, M. Pircher, H. Sattmann, C. Schütze, F. Schlanitz, C. Ahlers, U. Schmidt-Erfurth, and C. K. Hitzenberger, “Segmentation and quantification of retinal lesions in age-related macular degeneration using polarization-sensitive optical coherence tomography,” J. Biomed. Opt. 15(6), 061704 (2010). [CrossRef]  

20. L. Zhang, K. Lee, M. Niemeijer, R. F. Mullins, M. Sonka, and M. D. Abramoff, “Automated segmentation of the choroid from clinical SD-OCT,” Invest. Ophthalmol. Visual Sci. 53(12), 7510–7519 (2012). [CrossRef]  

21. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef]  

22. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Visual Sci. 53(1), 53–61 (2012). [CrossRef]  

23. G. Gregori, F. Wang, P. J. Rosenfeld, Z. Yehoshua, N. Z. Gregori, B. J. Lujan, C. A. Puliafito, and W. J. Feuer, “Spectral domain optical coherence tomography imaging of drusen in nonexudative age-related macular degeneration,” Ophthalmology 118(7), 1373–1379 (2011). [CrossRef]  

24. F. G. Schlanitz, B. Baumann, T. Spalek, C. Schutze, C. Ahlers, M. Pircher, E. Gotzinger, C. K. Hitzenberger, and U. Schmidt-Erfurth, “Performance of automated drusen detection by polarization-sensitive optical coherence tomography,” Invest. Ophthalmol. Visual Sci. 52(7), 4571–4579 (2011). [CrossRef]  

25. Q. Yang, C. A. Reisman, K. Chan, R. Ramachandran, A. Raza, and D. C. Hood, “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express 2(9), 2493–2503 (2011). [CrossRef]  

26. Q. Chen, W. Fan, S. Niu, J. Shi, H. Shen, and S. Yuan, “Automated choroid segmentation based on gradual intensity distance in HD-OCT images,” Opt. Express 23(7), 8974–8994 (2015). [CrossRef]  

27. Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, and D. L. Rubin, “Automated drusen segmentation and quantification in SD-OCT images,” Med. Image Anal. 17(8), 1058–1072 (2013). [CrossRef]  

28. A. Lang, A. Carass, E. Sotirchos, P. Calabresi, and J. L. Prince, “Segmentation of retinal OCT images using a random forest classifier,” in Proceedings of SPIE 8669, (SPIE, 2013).

29. H. Danesh, R. Kafieh, H. Rabbani, and F. Hajizadeh, “Segmentation of choroidal boundary in enhanced depth imaging OCTs using a multiresolution texture based modeling in graph cuts,” Comput. Math. Methods Medicine 2014, 1–9 (2014). [CrossRef]  

30. V. Kajic, M. Esmaeelpour, B. Povazay, D. Marshall, P. L. Rosin, and W. Drexler, “Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model,” Biomed. Opt. Express 3(1), 86–103 (2012). [CrossRef]  

31. X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017). [CrossRef]  

32. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef]  

33. S. G. Zadeh, M. W. Wintergerst, V. Wiens, S. Thiele, F. G. Holz, R. P. Finger, and T. Schultz, “CNNs enable accurate and fast segmentation of drusen in optical coherence tomography,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (Springer, 2017), pp. 65–73.

34. M. Boulton and P. Dayhaw-Barker, “The role of the retinal pigment epithelium: topographical variation and ageing changes,” Eye 15(3), 384–389 (2001). [CrossRef]  

35. F. Ko, P. J. Foster, N. G. Strouthidis, Y. Shweikh, Q. Yang, C. A. Reisman, Z. A. Muthy, U. Chakravarthy, A. J. Lotery, and P. A. Keane, “Associations with retinal pigment epithelium thickness measures in a large cohort: results from the uk biobank,” Ophthalmology 124(1), 105–117 (2017). [CrossRef]  

36. C. A. Curcio, “Soft drusen in age-related macular degeneration: biology and targeting via the oil spill strategies,” Invest. Ophthalmol. Visual Sci. 59(4), AMD160–AMD181 (2018). [CrossRef]  

37. J. D. M. Gass, Stereoscopic Atlas of Macular Diseases: Diagnosis and Treatment, vol. 1 (Mosby St. Louis, 1997).

38. A. P. Schachat, C. P. Wilkinson, D. R. Hinton, P. Wiedemann, K. B. Freund, and D. Sarraf, Ryan’s Retina E-Book (Elsevier Health Sciences, 2017).

39. R. S. Ramrattan, T. L. van der Schaft, C. M. Mooy, W. De Bruijn, P. Mulder, and P. De Jong, “Morphometric analysis of Bruch’s membrane, the choriocapillaris, and the choroid in aging,” Invest. Ophthalmol. Visual Sci. 35(6), 2857–2864 (1994).

40. J. Olver, “Functional anatomy of the choroidal circulation: methyl methacrylate casting of human choroid,” Eye 4(2), 262–272 (1990). [CrossRef]  

41. S. H. Sarks, “Drusen and their relationship to senile macular degeneration,” Aust. J. Opthalmology 8(2), 117–130 (1980). [CrossRef]  

42. J. Sarks, S. Sarks, and M. Killingsworth, “Evolution of soft drusen in age-related macular degeneration,” Eye 8(3), 269–283 (1994). [CrossRef]  

43. C. A. Curcio and C. L. Millican, “Basal linear deposit and large drusen are specific for early age-related maculopathy,” Arch. Ophthalmol. 117(3), 329–339 (1999). [CrossRef]  

44. E. M. Moult, N. K. Waheed, E. A. Novais, W. Choi, B. Lee, S. B. Ploner, E. D. Cole, R. N. Louzada, C. D. Lu, P. J. Rosenfeld, J. S. Duker, and J. G. Fujimoto, “Swept-source optical coherence tomography angiography reveals choriocapillaris alterations in eyes with nascent geographic atrophy and drusen-associated geographic atrophy,” Retina 36, S2–S11 (2016). [CrossRef]  

45. E. Borrelli, Y. Shi, A. Uji, S. Balasubramanian, M. Nassisi, D. Sarraf, and S. R. Sadda, “Topographic analysis of the choriocapillaris in intermediate age-related macular degeneration,” Am. J. Ophthalmol. 196, 34–43 (2018). [CrossRef]  

46. W. Choi, E. M. Moult, N. K. Waheed, M. Adhi, B. Lee, C. D. Lu, T. E. de Carlo, V. Jayaraman, P. J. Rosenfeld, J. S. Duker, and J. G. Fujimoto, “Ultrahigh-speed, swept-source optical coherence tomography angiography in nonexudative age-related macular degeneration with geographic atrophy,” Ophthalmology 122(12), 2532–2544 (2015). [CrossRef]  

47. M. Thulliez, Q. Zhang, Y. Shi, H. Zhou, Z. Chu, L. de Sisternes, M. K. Durbin, W. Feuer, G. Gregori, and R. K. Wang, “Correlations between choriocapillaris flow deficits around geographic atrophy and enlargement rates based on swept-source OCT imaging,” Ophthalmol. Retin. 3(6), 478–488 (2019). [CrossRef]  

48. R. S. Ramrattan, T. L. van der Schaft, C. M. Mooy, W. De Bruijn, P. Mulder, and P. De Jong, “Morphometric analysis of Bruch’s membrane, the choriocapillaris, and the choroid in aging,” Invest. Ophthalmol. Visual Sci. 35(6), 2857–2864 (1994).

49. A. Biesemeier, T. Taubitz, S. Julien, E. Yoeruek, and U. Schraermeyer, “Choriocapillaris breakdown precedes retinal degeneration in age-related macular degeneration,” Neurobiol. Aging 35(11), 2562–2573 (2014). [CrossRef]  

50. D. S. McLeod, R. Grebe, I. Bhutto, C. Merges, T. Baba, and G. A. Lutty, “Relationship between RPE and choriocapillaris in age-related macular degeneration,” Invest. Ophthalmol. Visual Sci. 50(10), 4982–4991 (2009). [CrossRef]  

51. F. Zheng, Q. Zhang, Y. Shi, J. F. Russell, E. H. Motulsky, J. T. Banta, Z. Chu, H. Zhou, N. A. Patel, and L. de Sisternes, “Age-dependent changes in the macular choriocapillaris of normal eyes imaged with swept-source optical coherence tomography angiography,” Am. J. Ophthalmol. 200, 110–122 (2019). [CrossRef]  

52. W. Choi, N. K. Waheed, E. M. Moult, M. Adhi, B. Lee, T. E. de Carlo, V. Jayaraman, C. R. Baumal, J. S. Duker, and J. G. Fujimoto, “Ultrahigh speed swept source optical coherence tomography angiography of retinal and choriocapillaris alterations in diabetic patients with and without retinopathy,” Retina 37(1), 11–21 (2017). [CrossRef]  

53. K. Kurokawa, Z. Liu, and D. T. Miller, “Adaptive optics optical coherence tomography angiography for morphometric analysis of choriocapillaris,” Biomed. Opt. Express 8(3), 1803–1822 (2017). [CrossRef]  

54. J. V. Migacz, I. Gorczynska, M. Azimipour, R. Jonnal, R. J. Zawadzki, and J. S. Werner, “Megahertz-rate optical coherence tomography angiography improves the contrast of the choriocapillaris and choroid in human retinal imaging,” Biomed. Opt. Express 10(1), 50–65 (2019). [CrossRef]  

55. W. Choi, B. Potsaid, V. Jayaraman, B. Baumann, I. Grulkowski, J. J. Liu, C. D. Lu, A. E. Cable, D. Huang, J. S. Duker, and J. G. Fujimoto, “Phase-sensitive swept-source optical coherence tomography imaging of the human retina with a vertical cavity surface-emitting laser light source,” Opt. Lett. 38(3), 338–340 (2013). [CrossRef]  

56. M. Kraus, B. Potsaid, M. Mayer, R. Bock, B. Baumann, J. Liu, J. Hornegger, and J. Fujimoto, “Motion correction in optical coherence tomography volumes on a per a-scan basis using orthogonal scan patterns,” Biomed. Opt. Express 3(6), 1182–1199 (2012). [CrossRef]  

57. M. F. Kraus, J. J. Liu, J. Schottenhamml, C. L. Chen, A. Budai, L. Branchini, T. Ko, H. Ishikawa, G. Wollstein, J. Schuman, J. S. Duker, J. G. Fujimoto, and J. Hornegger, “Quantitative 3D-OCT motion correction with tilt and illumination correction, robust similarity measure and regularization,” Biomed. Opt. Express 5(8), 2591–2613 (2014). [CrossRef]  

58. S. B. Ploner, M. F. Kraus, E. M. Moult, L. Husvogt, J. Schottenhamml, A. Y. Alibhai, N. K. Waheed, J. S. Duker, J. G. Fujimoto, and A. K. Maier, “Efficient and high accuracy 3-D OCT angiography motion correction in pathology,” (2020). arXiv:2010.06931.

59. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]  

60. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

61. K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE international conference on computer vision, (2015), pp. 1026–1034.

62. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1(1), 269–271 (1959). [CrossRef]  

63. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (1998), pp. 839–846.

64. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2015), pp. 1–9.



Figures (6)

Fig. 1. Examples of B-scans demonstrating the appearance of the structural and angiographic features of the RPE-BM-CC complex in healthy and pathologic eyes on OCT and OCTA imaging. The columns show the signals for OCT (left) and OCTA (right), and each row corresponds to a different pathology. The scale bar in Fig. 1(a) applies to all subfigures.
Fig. 2. Architecture of the proposed network. The encoder and decoder sides consist of five convolutional blocks each, and each block consists of five convolutional layers with a kernel size of 15. On the encoder side each block is followed by a pooling layer, whereas on the decoder side each block is preceded by an up-convolution layer. The number of channels per convolution is denoted at the bottom of each block.
Fig. 3. Depiction of the algorithm workflow used in this work.
Fig. 4. Boxplots of the segmentation results for the five evaluated algorithms, for each pathology separately and in total. Red bars indicate median error, red crosses indicate mean error, blue boxes indicate 25%-75% quantiles, and whiskers indicate 5%-95% quantiles. The dotted red horizontal lines correspond to the lowest median error across the tested algorithms.
Fig. 5. Qualitative examples of segmentation results in various pathologies (teal = manual segmentation, green = GradO, purple = ConvOA; other algorithms omitted for the sake of clarity). The scale bar in Fig. 5(a) applies to all subfigures.
Fig. 6. Histograms of absolute errors larger than two pixels for the different algorithms and pathologies. Errors greater than or equal to ten pixels are accumulated in the right-most bar.

Tables (3)

Table 1. Number of subjects and eyes per pathology used in this study.

Table 2. Mean and standard deviation (in brackets) for the absolute segmentation errors in pixels for the five evaluated algorithms for each pathology separately and in total. The lowest value per pathology is highlighted in bold print.

Table 3. Mean and standard deviation (in brackets) of the absolute segmentation errors, in pixels, for the three cross-validated algorithms, for each pathology separately and in total, reported for each of the five folds and as the mean over all folds. The lowest value per pathology and experiment is highlighted in bold print.
