Optica Publishing Group

Active contour method for ILM segmentation in ONH volume scans in retinal OCT

Open Access

Abstract

The optic nerve head (ONH) is affected by many neurodegenerative and autoimmune inflammatory conditions. Optical coherence tomography can acquire high-resolution 3D ONH scans. However, the ONH’s complex anatomy and pathology make image segmentation challenging. This paper proposes a robust approach to segmenting the inner limiting membrane (ILM) in ONH volume scans based on an active contour method of Chan-Vese type, which can cope with challenging topological structures. A local intensity fitting energy is added in order to handle very inhomogeneous image intensities. A suitable boundary potential is introduced to prevent structures belonging to the outer retinal layers from being detected as part of the segmentation. The average intensities in the inner and outer region are then rescaled locally to account for the different brightness values occurring around the ONH center. Appropriate values for the parameters of the resulting computational model are found by an optimization based on the differential evolution algorithm. The evaluation showed that the proposed framework significantly improves segmentation results compared to the commercial solution.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The eye’s retina is part of the central nervous system (CNS) and as such has a cellular composition similar to that of the brain [1]. Accordingly, many chronic brain conditions lead to retinal changes. For example, retinal alterations have been described in clinically isolated syndrome [2], multiple sclerosis [3], neuromyelitis optica spectrum disorders [4–6], Susac’s syndrome [7,8], Parkinson’s disease [9], and Alzheimer’s dementia [10].

The retina is the only part of the CNS that is readily accessible by optical imaging, which gives its imaging great potential in the context of these disorders. Currently, spectral domain optical coherence tomography (SD-OCT) is the method of choice for acquiring retinal 3D images at µm resolution [11,12].

The human retina demonstrates two macroscopic landmarks, whose analysis is especially promising in the context of neurological disorders. The macula around the fovea is the visual field’s center and contains the highest density of retinal ganglion cells. Macular SD-OCT images can be analysed quantitatively with intra-retinal segmentation [13]. The derived thickness or volume changes can then be used to quantify neuro-axonal damage in neurological disorders, e.g. in multiple sclerosis [14–16].

The second landmark is the optic nerve head (ONH), where all axons from retinal ganglion cells leave the eye towards the brain, thereby forming the optic nerve.

Two membranes define the ONH region and delimit the ONH towards the inner and outer eye: the inner limiting membrane (ILM) and Bruch’s membrane (BM) (Fig. 1). Segmenting both structures provides an important starting point for calculating imaging biomarkers of the ONH. Yet, the development of suitable segmentation approaches is still an active research topic, since ONH segmentation presents several difficult challenges for image analysis. An example where the commercial device fails is shown in Fig. 1. At the ONH, ILM segmentation is particularly challenging because of the broad range of ILM topologies: ONHs ranging from highly swollen to shallow, ONHs with deep cupping, and in some cases even ILM overhangs. Segmentation is further complicated by a dense vasculature with often loose connective tissue, which can cause ILM surfaces with vastly irregular shapes, see Fig. 10.

 figure: Fig. 1

Fig. 1 (Left) Volume scan (C-scan) of the optic nerve head (ONH). (Right) The volume scan consists of slices (B-scans), where each column is an axial depth scan (A-scan). The ONH typically has a cup in its center (1). The blue line marks the inner limiting membrane (ILM) separating the vitreous body (2) from retinal tissue (3). The magenta line represents Bruch’s membrane (BM). Note how the blue line fails to correctly detect the ILM, missing the vessel at the top left as well as the deep cupping.


 figure: Fig. 2

Fig. 2 a) B-scan presenting several challenges for ILM segmentation using the original CV model: the yellow arrow shows the ONH region with no retinal layer information except some tissue remnants from nerve fibers and the ILM; the white arrows show shadows caused by the presence of large blood vessels. b) B-scan showing segmentation results using the original CV model. c) Boundary potential V on a sample B-scan. In the red area, V(x) = ρ, whereas in the green area V(x) = 0.


 figure: Fig. 3

Fig. 3 a) Example of an ONH region with very low contrast compared to the vitreous (yellow arrow). This leads to the contour leaking into the tissue. b) The same scan with correctly detected ILM after scaling the values for c1 and c2 according to equation 7; c) Example of an ONH region with low intensity values at the inner layers (yellow arrow) compared to the dark vitreous as well as to the hyperintense outer layers (white arrow). This causes the contour to falsely detect parts of the lower layers as vitreous. d) The same scan with correctly detected ILM by rescaling values c1 and c2 after the interface has almost reached the desired ILM contour.


 figure: Fig. 4

Fig. 4 Three processing steps to obtain the initial segmentation: a) filtering with morphological and Gaussian filters, b) thresholding and neglecting small connected components, c) the initial contour.


 figure: Fig. 5

Fig. 5 Error distribution over 20 scans represented as box plots for the 10 optimization runs. The + symbol and the ◦ symbol denote outliers with an error deviating more than 1.5 × STD and more than 3.0 × STD from the mean, respectively.


 figure: Fig. 6

Fig. 6 Errors on the complete data set (40 scans): segmentation of the device (blue), proposed algorithm with the segmentation of the device as initial contour (green), and proposed algorithm with its own initial contour (red). The lower bars show the error in the inner 1.5 mm circle of the ONH. The device error of scan 26 is very high (110 508 voxels) because the segmentation was completely wrong in some B-scans.


 figure: Fig. 7

Fig. 7 Normalized distribution of local Euclidean distances between manual and automatic segmentation.


 figure: Fig. 8

Fig. 8 Quality assessment of a volume scan: (a) B-scan with low SNR region in orange, (b) corresponding SLO image with quality assessment map, color coded according to the criteria shown at the right hand side of the map; green rectangle delimits the scan area; blue arrow shows the B-scan position.


 figure: Fig. 9

Fig. 9 Distribution of errors per B-scan: (a) for group40, obtained by comparison with manual segmentations; (b) for group100, obtained from the corrections performed by an experienced grader.


 figure: Fig. 10

Fig. 10 Segmentation of three neighboring B-scans obtained from the 3D segmentation of the volume scan. The green line depicts the manual segmentation, the red line our proposed method, and the blue line the segmentation as given by the device.


A simplified ONH parameter is, e.g., the peripapillary retinal nerve fiber layer thickness (pRNFL), which is measured in a circular scan with 12° diameter around the ONH, thereby avoiding direct ONH analysis. The 3D structure of the ONH itself, however, potentially contains highly relevant diagnostic information as well, as has been shown, e.g., in glaucoma [17–20]. In applications concerning neurological disorders we need not only the minimum distance from the BMO points to the ILM (the so-called BMO minimum rim width parameter, BMO-MRW, already established in glaucoma research [17–20]), but also several other parameters that incorporate various regions, volumes and areas. These will be able to capture the changes the ONH undergoes in the form of swelling, as for example in idiopathic intracranial hypertension [21, 22], as well as in the form of atrophy after optic neuritis [23, 24] or other optic neuropathies, for example in the context of multiple sclerosis [25] or neuromyelitis optica spectrum disorders [5].

1.1. State of the art and proposed solution

One of the most frequently applied methods for retinal layer and ILM segmentation is based on graph-cuts [13], which works well in conditions with smooth surfaces, e.g. at the macula. Despite these important advances, the main drawback of graph-cut based segmentation is that finding the shortest path through the graph, often implemented via Dijkstra’s algorithm, yields a result in the form of the graph of a function, which can only have one coordinate per axial depth scan (A-scan). This makes it impossible to correctly segment structures passing one A-scan more than once, e.g. ILM overhangs at the ONH. Another disadvantage is that the constraints defined in most graph-cut constructions can result in underestimating extreme shapes, which regularly occur at the ONH, e.g. deep cups.

An attempt to overcome this problem was presented by [26] by incorporating a convex function in the graph-cut approach, which allowed detection of steep transitions, while keeping the ILM surface smooth. Furthermore, in order to allow the segmentation result to have several coordinates per A-scan, [27] employed a gradient vector flow field in the construction of the columns used in the graph-theoretic approach.

However, these recent advances still do not consider features of the ONH shape like overhangs. As a consequence, tedious and time-consuming manual correction of the ILM segmentation is necessary before ONH shape parameters can be derived in scans with deep ONH cupping or steep forms [28].

Recently, Keller et al. [29] addressed this issue as well by introducing a new length-adaptive graph search metric. However, this approach was employed to delineate the retina/vitreous boundary in a different application and region of the retina, namely in patients with full-thickness macular holes.

Novosel et al. [30] proposed a 3D approach to jointly segment retinal layers and lesions in eyes with topology-disrupting retinal diseases that can handle local intensity variations as well as the presence or absence of pathological structures in the retina. This method extends and generalizes the existing framework of loosely coupled level sets, previously developed for the simultaneous segmentation of interfaces between retinal layers in macular and peripapillary scans of healthy and glaucoma subjects [31], drusen segmentation in subjects with dry age-related macular degeneration [32], and fluid segmentation in subjects with central serous retinopathy [33]. Some of these methods are still 2D based [32, 33], and all mainly focus on the macula. Carass et al. [34] proposed a 3D level set approach based on a multiple-object geometric deformable model that overcomes the voxel precision restrictions present in the aforementioned methods. This method also focused on macula scans and included only images without strong pathological changes.

More recently, deep learning has emerged as a possible alternative to established graph-cut methods, promising a flexible framework for retina analysis. Fang et al. [35] used a convolutional neural network to predict the central pixel label of a given image patch and subsequently used a graph-based approach to finalize the segmentation, while Roy et al. [36] designed a fully convolutional network (FCN) to segment retinal layers and fluid-filled cavities in OCT images, partly based on [37] and [38]. These methods are all based on pixel-wise labelling without using topological relationships between layers or layer shape, which can lead to segmentation errors. He et al. [39] proposed a framework to correct topological defects by cascading two FCNs. This method was also applied and tested only on macula scans.

In order to address this issue, we here propose a modified Chan-Vese (CV) based segmentation method with sub-pixel accuracy that is fast, robust and able to correctly detect 3D ILM surfaces regardless of ONH shape complexity or overhangs.

Key features of the developed framework are:

  • This approach addresses the issue of ILM A-scan overhangs, which are frequently seen in eyes from patients with neurologic or autoimmune neuroinflammatory diseases.
  • We include an additional lower boundary constraint in the Chan-Vese method in order to avoid the level set evolution in tissue regions with low contrast.
  • The parameters c1 and c2 from the level set equation [40] were obtained as a result of an optimization process and further modified after several iteration steps by a scaling factor that incorporates the data locally. This greatly increased the segmentation accuracy.
  • We adapted the local intensity fitting energy introduced by [41] to the narrow-band context.
  • One of the most important aspects of our work is the thorough investigation and automatic setting of parameters involved in the level set equation through an optimization process. Thus, we optimized and validated our approach using 3D ONH scans from a heterogeneous data set of 40 eyes of healthy controls and patients with autoimmune neuroinflammatory diseases.
  • We further validated our segmentation and investigated the influence of image quality on the results using 100 randomly chosen scans from a large database that included 416 ONH volumes of healthy controls and patients with autoimmune neuroinflammatory diseases.

In the following we describe the algorithmic approach as well as its optimization for segmentation performance and computation time in detail.

2. Methods

2.1. Region based active contour methods

Active contours were introduced by [42] as a powerful method for image segmentation. A contour C (also called a snake) is moved over the image domain Ω, where the dynamics are defined by the gradient flow of a suitable energy functional F = F[C]. Here, F[C] depends on the intensity function I = I(x) ∈ [0, 1] of the given gray-scale image. Thus, the final segmentation is obtained by finding the energy-minimizing contour C for the given image I. In region-based methods, the two regions defined by the contour C are used to model the energy function F, as proposed in a general setting by [43].

Let Ω denote the image domain. In the context of OCT volumes, Ω is 3D, and we are looking for a surface C that divides the retinal tissue from the vitreous body, i.e. the final contour should be the segmented ILM, satisfying:

$$\mathrm{inside}(C) = \Omega_1 \quad \text{(the retinal tissue)}$$
$$\mathrm{outside}(C) = \Omega_2 \quad \text{(the vitreous body)}$$

The classical CV model [40] approximates the intensity function I(x) by a piecewise constant function with values c1 and c2 in Ω1 and Ω2, respectively, where x = (x, y, z) denotes a voxel in the image I. The energy Fcv is defined as the weighted sum of a region-based global intensity fitting (gif) energy Fgif, penalizing deviations of I(x) from the corresponding value ci, i ∈ {1, 2}, a surface energy Fsurf given by the surface area of C, and a volume energy Fvol given by the volume of Ω1:

$$F_{cv}[C] = F_{gif} + \mu\, F_{surf} + \nu\, F_{vol},$$
$$F_{gif} = \lambda_1 \int_{\Omega_1} |I(\mathbf{x}) - c_1|^2 \, d\mathbf{x} + \lambda_2 \int_{\Omega_2} |I(\mathbf{x}) - c_2|^2 \, d\mathbf{x},$$
$$F_{surf} = \int_C ds, \quad \text{where } ds \text{ is the area element},$$
$$F_{vol} = \int_{\Omega_1} 1 \, d\mathbf{x}.$$

Note that by minimizing Fcv, the surface energy Fsurf leads to a smooth contour surface C, whereas the volume energy Fvol yields a balloon force. Moreover, minimizing Fcv with respect to the values of c1 and c2 results in choosing c1 and c2 as the average intensities in the regions Ω1 and Ω2, respectively. The positive weighting parameters λ1, λ2, µ, ν control the influence of the corresponding energy term. Finding good values for these parameters is crucial for obtaining the desired results. Usually the parameters λ1 and λ2 are taken to be equal and may therefore be set to λ1 = λ2 = 1. In section 3.5 it will be explained how values for several parameters of our final model were established.
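For illustration, the energies above can be evaluated directly for a given binary partition. The following sketch (not the authors' implementation; parameter values are arbitrary) computes a discrete version of Fcv, with c1 and c2 chosen as the region averages, the surface term approximated by counting boundary faces between the two regions, and the volume term by the voxel count of Ω1:

```python
import numpy as np

def chan_vese_energy(I, inside, mu=1.0, nu=0.0, lam1=1.0, lam2=1.0):
    """Evaluate a discrete Chan-Vese energy for a given binary partition.

    I      : 2D/3D intensity array with values in [0, 1]
    inside : boolean mask of Omega_1 (tissue); ~inside is Omega_2 (vitreous)
    Returns the energy and the energy-minimizing region averages c1, c2.
    """
    # Minimizing F_gif over c1, c2 yields the region means.
    c1 = I[inside].mean() if inside.any() else 0.0
    c2 = I[~inside].mean() if (~inside).any() else 0.0
    f_gif = lam1 * ((I[inside] - c1) ** 2).sum() \
          + lam2 * ((I[~inside] - c2) ** 2).sum()
    # Surface term approximated by the number of faces between the regions.
    surf = 0
    for ax in range(I.ndim):
        surf += np.count_nonzero(np.diff(inside.astype(np.int8), axis=ax))
    f_vol = inside.sum()  # volume (balloon) term
    return f_gif + mu * surf + nu * f_vol, c1, c2

# Toy image: bright 4x4 block on a dark background.
img = np.zeros((8, 8)); img[2:6, 2:6] = 0.9
mask = img > 0.5
energy, c1, c2 = chan_vese_energy(img, mask)
```

For this ideal partition the fitting energy vanishes and only the surface term remains, illustrating why the weights µ and ν balance smoothness against data fidelity.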

2.2. Challenges in OCT images

The classical CV model [40] is robust in segmenting noisy images without clearly defined boundaries. Moreover, complicated shapes such as overhangs, multiple connected components, and even topological changes are handled naturally. However, applying the original formulation of CV to OCT scans does not yield good results. As already discussed in the literature, see e.g. [41], CV fails to provide a good segmentation if the two delineated regions, in our case Ω1 and Ω2, are strongly non-uniform in their gray values. Performance gets even worse in the presence of very dark regions inside the tissue, see Figs. 2(a) and 2(b), where a slice (B-scan) of a typical OCT volume scan is depicted, or in regions with extremely high intensities inside the tissue or the vitreous. As a consequence, using global region averages, as proposed in the classical CV model, cannot provide a satisfactory segmentation.

In the following, we address these problems by adapting the global fitting energy to a local one in the narrow band setting [41] and by using a boundary potential to prevent the contour from moving into certain regions. These modifications result in a very stable segmentation algorithm for OCT scans, provided that the coefficients weighting the energy terms are chosen suitably.

2.2.1. Global fitting energy

We adapt the global values c1 and c2, obtained from the optimization (see subsection 3.5), column-wise (per A-scan) in order to obtain correct segmentation results at the ONH, where the tissue has very low intensity and little contrast to the vitreous, see Fig. 3(a). We scale the values in each column using the formula:

$$\hat{c}_i = \frac{c_i}{2}\left(1 + \max_{1 \le k \le M} I(x_{kl})\right), \quad i \in \{1,2\},\ l \in \{1,\dots,N\},$$
where M × N is the size of a B-scan. This significantly improves the segmentation results, see Fig. 3(b). Additionally, in order to prevent the contour from penetrating the retina in regions with dark upper but hyperintense lower layers, see Fig. 3(c), we rescale these values again after the interface has almost reached the desired ILM contour. The factor used is computed with the same formula as in Eq. (7), but the maximum intensity considers only voxels from the top of the volume to 77 µm (20 px) below the current interface position. Segmentation results obtained after this scaling step are shown in Fig. 3(d).
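The column-wise rescaling of Eq. (7) can be sketched as follows (a re-implementation for illustration, not the original code): the brightest voxel of each A-scan determines the local scaling factor, so dark columns at the ONH center reduce both region values.

```python
import numpy as np

def rescale_region_values(bscan, c1, c2):
    """Column-wise rescaling of the global region averages c1, c2 (cf. Eq. (7)).

    bscan : 2D array (rows = axial depth, columns = A-scans), values in [0, 1]
    Returns two 1D arrays with one rescaled value per A-scan.
    """
    col_max = bscan.max(axis=0)      # brightest voxel of each A-scan
    scale = 0.5 * (1.0 + col_max)    # in [0.5, 1]: dark columns shrink c_i
    return c1 * scale, c2 * scale

# Hypothetical B-scan: a dark column (e.g. inside the ONH cup) next to a bright one.
b = np.zeros((4, 2)); b[:, 1] = 1.0
c1_hat, c2_hat = rescale_region_values(b, c1=0.24, c2=0.95)
```

In the fully dark column both values are halved, lowering the intensity level at which voxels are attributed to the tissue region.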

We choose to rescale the values of c1 and c2 instead of rescaling the column intensities, since the latter would also influence the local intensity fitting (lif) energy presented in section 2.2.3.

2.2.2. Boundary potential

To prevent the contour C from evolving into the dark areas at the ONH caused by the absence of retinal layers and the presence of large vessels characteristic of this region (see Fig. 2(a)), we introduce a local boundary potential V(x):

$$F_{bound} = \int_{\Omega_1} V(\mathbf{x}) \, d\mathbf{x}.$$

This potential is set to a very high value ρ in these dark regions, which are detected as follows: in each column, going from bottom to top, V(x) is set to ρ until the first gray value I(x) larger than 45 % of the maximum gray value in that column is reached. All other voxels are set to 0, see Fig. 2(c) for an example.
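The bottom-to-top scan just described can be sketched per B-scan as follows (an illustrative re-implementation; ρ and the 45 % fraction follow the text):

```python
import numpy as np

def boundary_potential(bscan, rho=1e3, frac=0.45):
    """Boundary potential V per B-scan: scanning each A-scan from bottom to
    top, V = rho until the first voxel brighter than `frac` of the column
    maximum is reached; all voxels above it keep V = 0 (cf. section 2.2.2)."""
    rows, cols = bscan.shape
    V = np.zeros_like(bscan, dtype=float)
    for l in range(cols):
        thresh = frac * bscan[:, l].max()
        for k in range(rows - 1, -1, -1):  # bottom to top
            if bscan[k, l] > thresh:
                break                      # first sufficiently bright voxel
            V[k, l] = rho
    return V
```

Adding V to the energy inside Ω1 heavily penalizes any tissue region that extends into these dark areas, which keeps the contour from leaking below the retina at the ONH.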

2.2.3. Local intensity fitting energy

In images with large intensity inhomogeneities, e.g. caused by varying illumination, it is not sufficient to minimize only a global intensity fitting energy. In order to achieve better segmentation results, [41] introduced two fitting functions f1(x) and f2(x), which are supposed to locally approximate the intensity I(x) in Ω1 and Ω2, respectively. The local intensity fitting energy functional is defined as:

$$F_{lif} = \lambda_1 \int \left[ \int_{\Omega_1} K_\sigma(\mathbf{x}-\mathbf{y}) \, |I(\mathbf{y}) - f_1(\mathbf{x})|^2 \, d\mathbf{y} \right] d\mathbf{x} + \lambda_2 \int \left[ \int_{\Omega_2} K_\sigma(\mathbf{x}-\mathbf{y}) \, |I(\mathbf{y}) - f_2(\mathbf{x})|^2 \, d\mathbf{y} \right] d\mathbf{x}.$$

Similar to the global constants c1, c2, explicit formulas for the functions f1 and f2 are obtained by energy minimization [41]. Since in our approach all calculations are restricted to a narrow band along the contour, we used compact-support kernels for Kσ based on a binomial distribution, with σ representing the kernel size. The adapted formula for calculating f1 and f2 is presented in Appendix 7.2. Note that these modifications also considerably reduce the computation time compared to a global convolution.
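One common way to generate such compact-support binomial kernels (an assumption for illustration, since the exact construction is given in the appendix) is repeated convolution of the averaging mask [0.5, 0.5], which approximates a Gaussian while keeping a finite footprint:

```python
import numpy as np

def binomial_kernel(size):
    """1D compact-support binomial kernel of the given size (sums to 1);
    repeated convolution of [0.5, 0.5] approximates a Gaussian."""
    k = np.array([1.0])
    for _ in range(size - 1):
        k = np.convolve(k, [0.5, 0.5])
    return k

# Separable 2D/3D kernels follow from outer products of the 1D kernel.
k5 = binomial_kernel(5)  # binomial coefficients [1, 4, 6, 4, 1] / 16
```

Unlike a truncated Gaussian, the binomial kernel is exactly zero outside its support, so the convolutions in the local fitting term never reach outside the narrow band.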

2.3. Our energy model

In a final step, the local and global intensity fitting energies are combined, similar to the work presented in [41], by introducing another weight parameter ω, arriving at our modified CV functional F:

$$F = \omega F_{gif} + (1-\omega) F_{lif} + \mu F_{surf} + \nu F_{vol} + F_{bound}.$$

2.4. Level set formulation

To minimize the above energy functional, the level set method [40] is used, which replaces the unknown surface C by a level set function ϕ(x) with ϕ(x) > 0 if the point x is inside C, ϕ(x) < 0 if x is outside C, and ϕ(x) = 0 if x is on C. Energy minimization is then achieved by solving the following gradient descent flow equation for ϕ to steady state:

$$\frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi) \left[ (1-\omega) F_{local}(\mathbf{x}) + \omega F_{global}(\mathbf{x}) + \mu \, \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) - \nu - V(\mathbf{x}) \right]$$
$$\text{with} \quad F_{global}(\mathbf{x}) = -\big(I(\mathbf{x}) - c_1(\mathbf{x})\big)^2 + \big(I(\mathbf{x}) - c_2(\mathbf{x})\big)^2.$$

Flocal(x), adapted to our narrow-band approach, is given in detail in appendix 7.2. In the above formula, δε denotes the smoothed Dirac delta function with compact support on the narrow band of width ε, see appendix 7.1, and κ = div(∇ϕ/|∇ϕ|) is the mean curvature of the corresponding level set surface.
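A minimal 2D sketch of one explicit update step of this gradient flow is given below. For brevity the local fitting term Flocal and the narrow-band bookkeeping are omitted, and the parameter values are illustrative only; this is not the authors' implementation.

```python
import numpy as np

def curvature(phi):
    """Mean curvature div(grad phi / |grad phi|) via central differences."""
    py, px = np.gradient(phi)
    norm = np.sqrt(px ** 2 + py ** 2) + 1e-8   # avoid division by zero
    ny, nx = py / norm, px / norm
    return np.gradient(ny, axis=0) + np.gradient(nx, axis=1)

def dirac_eps(phi, eps=5.0):
    """Smoothed Dirac delta with compact support of width eps."""
    d = np.zeros_like(phi)
    band = np.abs(phi) <= eps
    d[band] = (1.0 + np.cos(np.pi * phi[band] / eps)) / (2.0 * eps)
    return d

def level_set_step(phi, I, c1, c2, V, mu=3.5, nu=0.03, omega=0.16, dt=0.5):
    """One explicit time step of the flow, using only the global data term
    F_global = -(I - c1)^2 + (I - c2)^2 plus curvature, balloon force, and
    boundary potential V."""
    F_global = -(I - c1) ** 2 + (I - c2) ** 2
    dphi = dirac_eps(phi) * (omega * F_global + mu * curvature(phi) - nu - V)
    return phi + dt * dphi
```

Because δε vanishes outside the narrow band, voxels far from the interface are untouched by the update, which is what makes the narrow-band restriction in section 3.2 possible.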

3. Implementation

3.1. Initial contour

For initialization we developed a basic 2D segmentation algorithm to create a start contour (start segmentation). In a first step, morphological filters (erosion and subsequent dilation with a 15 × 7 ellipse structuring element) and a smoothing filter (Gaussian blur with kernel size 15 × 7 and variance σx = 6, σz = 3) are used to reduce speckle noise. In the next step, we set each pixel with at least 35 % of the maximum column intensity to white. The remaining pixels are set to black. To keep only the tissue connected to the retina, enhanced in the previous step, we set the pixel values of all connected components consisting of fewer than 2,400 pixels, corresponding to 0.11 mm² in the used OCT data sets, to gray. Finally, in each column the contour is set at the first white pixel from top to bottom. If no white pixel exists, the first gray pixel is taken instead. These processing steps are illustrated in Fig. 4.
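The three steps can be sketched with SciPy as follows (a re-implementation for illustration, not the authors' code; a rectangular footprint stands in for the elliptic structuring element, and the gray-pixel fallback for small components is omitted for brevity):

```python
import numpy as np
from scipy import ndimage

def initial_contour(bscan, thresh_frac=0.35, min_pixels=2400):
    """Sketch of the initialization of section 3.1: denoising, column-wise
    thresholding, and removal of small connected components.
    Returns, per A-scan, the row index of the first retained foreground
    pixel (or -1 if the column contains none)."""
    # 1) Morphological opening (erosion + dilation) and Gaussian smoothing.
    footprint = np.ones((15, 7), dtype=bool)   # stand-in for a 15x7 ellipse
    opened = ndimage.grey_dilation(
        ndimage.grey_erosion(bscan, footprint=footprint), footprint=footprint)
    smooth = ndimage.gaussian_filter(opened, sigma=(6, 3))
    # 2) Column-wise thresholding at 35 % of the column maximum.
    fg = smooth >= thresh_frac * smooth.max(axis=0, keepdims=True)
    # 3) Drop connected components smaller than min_pixels.
    labels, n = ndimage.label(fg)
    sizes = ndimage.sum(fg, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    # Contour: first kept pixel from top to bottom in each column.
    return np.where(keep.any(axis=0), keep.argmax(axis=0), -1)
```

Applied per B-scan, the returned row indices define the 2D start contour from which the 3D level set function is initialized.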

3.2. Narrow band and reinitialization

Starting with the initial contour, the signed distance function and the reinitialization of the 3D level set function are computed using fast marching [44, 45, p. 86]. We use the L1 norm to achieve faster calculation times. For the same reason, the reinitialization is performed only every 10th iteration step, while f1 and f2 are recomputed every 5th iteration step. All computations are done on a narrow band of width ε + max(σx, σy, σz) around the zero level set, where σ is the kernel size of Kσ, see Eq. (9), and ε represents the regularization value for the smoothed Heaviside function Hε and Dirac delta function δε. We use ε = 5.

3.3. Removal of non-connected volume parts

A bad initial contour can produce non-connected volumes in the tissue and in the vitreous. To improve the segmentation, we remove these volumes from the final result and ignore a small area around the image border, where unnatural connectivities might occur due to low contrast or missing image information.
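A simplified sketch of this clean-up step (illustrative only; the border handling is omitted) keeps the largest connected tissue component and fills vitreous islands enclosed in it:

```python
import numpy as np
from scipy import ndimage

def keep_main_component(tissue_mask):
    """Remove non-connected volume parts: keep only the largest connected
    component of the tissue mask and fill vitreous 'islands' enclosed in it."""
    lab, n = ndimage.label(tissue_mask)
    if n == 0:
        return tissue_mask
    sizes = ndimage.sum(tissue_mask, lab, index=np.arange(1, n + 1))
    main = lab == (1 + np.argmax(sizes))      # largest component only
    return ndimage.binary_fill_holes(main)    # remove enclosed vitreous parts
```

This guarantees that the final ILM surface encloses a single connected tissue region, as expected anatomically.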

3.4. Numerical implementation and computation time

To solve the gradient flow equation, an explicit first-order time discretization is used; for the spatial finite difference discretization we followed [40]. Note that the volume scan has different resolutions in the two lateral directions, see section 3.5.1 below. We accounted for this by choosing different spatial mesh sizes hx = hz = 1 and hy = 3. The higher value in the y direction reduces the influence of differences between B-scans, especially near the cup, and reduces computation time. Also, the curvature weight parameters, µx = µz = 3.5 µy, are direction dependent in order to create a smooth contour. The lower resolution in the y direction forces the segmentation to include even small pieces of tissue still connected to the retina. For the chosen parameters see Table 4.


Table 1. Total error and parameter values as obtained in 10 optimization runs, sorted in ascending order of the error. Total error represents the accumulated error of the 20 scans in the optimization set.


Table 2. Mean error, and 10 %, 50 % and 90 % quantiles of the error distribution computed as the Euclidean distance between automatic (proposed, device) and manual segmentation.


Table 3. Overview of scan quality (with corresponding labels) and segmentation corrections, as well as the voxel error of the corrections made for 100 volume scans randomly selected from a large database.


Table 4. Parameter values; x, y, z denote coordinate directions horizontal, axial and lateral (between B-scans).

The computation time depends on the initial contour and on the shape of the ONH, and takes on average 14.2 s (minimum time is 12.8 s, maximum 15.9 s) on a standard PC (Intel Core i7-5600U (2 × 2.6 GHz) with Debian 9 and gcc 6.3.0). For the ONH scan shown in Fig. 12(a) 15.5 s were needed (0.2 s for creating the initial contour and 15.3 s for segmentation using the proposed CV method). The OCT image size was 384 × 496 × 145 voxels.

3.5. Parameter optimization

We used an automatic parameter optimization procedure to find values for the parameters ν, ω, c1 and c2 for our computational model (11).

3.5.1. OCT image data

Image data consisted of 3D ONH scans obtained with a spectral-domain OCT device (Heidelberg Spectralis SD-OCT, Heidelberg Engineering, Germany) using a custom ONH scan protocol with 145 B-scans centered on the ONH, a scanning angle of 15° × 15°, and a resolution of 384 A-scans per B-scan. The spatial resolution in the x direction is ≈ 12.6 µm, in the axial direction ≈ 3.9 µm, and the distance between two B-scans is ≈ 33.5 µm. Our database consists of 416 ONH volume scans that capture a wide spectrum of ONH topological changes specific to neuroinflammatory disorders (71 healthy control eyes, 31 eyes affected by idiopathic intracranial hypertension, 60 eyes from neuromyelitis optica spectrum disorder patients, and 252 eyes of multiple sclerosis patients). We randomly chose 140 scans with different characteristics from this database, from good-quality scans to noisy ones, and from healthy eyes as well as eyes of patients with different neurological disorders, in order to cover a broad range of shapes. 40 ONH scans were manually segmented and used as ground truth for the optimization process; these will be called group40. The remaining 100 scans, in the following referred to as group100, were used for validating the segmentation results and assessing the influence of image quality on them.

Incomplete volume scans as well as those with retinas damaged by other pathologies were not included.

3.5.2. Error measurement

All 40 scans of group40 were manually segmented and checked by an experienced grader. From this dataset, 20 images were used for optimization and the other 20 for validating the results. For one optimization run, 10 files were randomly chosen from the optimization set, and the cost to be minimized over the parameters ν, ω, c1 and c2 was the accumulated segmentation error of these scans. An error metric similar to the one described in [46] was employed, where the error is defined as the number of wrongly assigned voxels, i.e. the sum of the numbers of false positives and false negatives. Note that this metric does not depend on the position of the retina. In order to compare different optimization results, the accumulated error over all 20 scans of the optimization set was used.
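The voxel error metric just described amounts to a symmetric difference of the two label masks; a minimal sketch:

```python
import numpy as np

def voxel_error(auto_mask, manual_mask):
    """Segmentation error as in section 3.5.2: number of wrongly assigned
    voxels, i.e. false positives plus false negatives."""
    fp = np.count_nonzero(auto_mask & ~manual_mask)   # tissue only in auto
    fn = np.count_nonzero(~auto_mask & manual_mask)   # tissue only in manual
    return fp + fn
```

Because only mislabeled voxels are counted, translating both masks by the same offset leaves the error unchanged, which is why the metric does not depend on the retina's position.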

3.5.3. Optimization algorithm

The method chosen is the multi-space optimization differential evolution algorithm as provided by GNU Octave [47].

This algorithm creates a starting population of 40 individuals with random values. In our case, an individual represents a parameter set for ν, ω, c1 and c2 together with the accumulated segmentation error over the 10 selected volume scans. During optimization, the algorithm crosses and mutates individuals to create new ones, and drops newly created or old ones depending on which exhibits the larger error. We allowed at most 2000 iterations and set box constraints for all four parameters. Note that for each newly created individual, the cost function (error) has to be evaluated by first performing 10 segmentations of the randomly chosen OCT scans and then calculating the error by comparison with the manual segmentations. Thus, each iteration step is computationally demanding. The differential evolution algorithm was chosen since it is derivative-free and supports setting specific bounds for the parameters. Moreover, we observed a high reproducibility of the finally obtained optimal parameter set. To perform the optimization, we used the Docker Swarm on OpenStack infrastructure from [48], which allowed parallel computations on a PC cluster.
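The paper uses the differential evolution implementation provided by GNU Octave; the same optimization pattern can be sketched with SciPy's `differential_evolution`. The cost function below is a cheap quadratic surrogate for illustration only — in the actual setup each evaluation runs 10 full segmentations and accumulates their voxel errors — and note that SciPy's `popsize` is a per-parameter multiplier rather than an absolute population size of 40.

```python
import numpy as np
from scipy.optimize import differential_evolution

def accumulated_error(params):
    """Surrogate cost for illustration: in the paper, each evaluation
    segments 10 randomly chosen volume scans and sums their voxel errors
    against the manual ground truth."""
    nu, omega, c1, c2 = params
    return ((nu - 0.03) ** 2 + (omega - 0.16) ** 2
            + (c1 - 0.24) ** 2 + (c2 - 0.95) ** 2)

# Box constraints for (nu, omega, c1, c2), mirroring the paper's setup.
bounds = [(0.0, 0.5), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
result = differential_evolution(accumulated_error, bounds,
                                seed=1, maxiter=200, tol=1e-10)
```

Differential evolution fits this problem well: it needs no gradients of the (non-differentiable, simulation-based) cost and respects the box constraints directly.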

4. Results

4.1. Optimization results

In Table 1, the results of all 10 optimization runs, sorted in increasing order of the total error, are presented. The parameters given in the first line, namely ν = 0.03248, ω = 0.15915, c1 = 0.24206, c2 = 0.94945, were chosen for all subsequent calculations.

Note that the four parameters ν, ω, c1, c2 are not independent of each other, as can be seen from the definition of the energy functional. The parameter showing the largest variation is the balloon force weight ν. This variation is highly influenced by the presence or absence of one specific volume scan in the randomly chosen optimization set (subset of 10 out of 20), which appears as the outlier with the highest error in all 10 error distributions shown in Fig. 5. This occurs because the parameters adapt to this particular scan whenever it is contained in the optimization set.

4.2. Evaluation of segmentation performance

Our results are summarized in Fig. 6. For each scan, numbered from 1-40, the first bar (blue) represents the segmentation error given by the ILM segmentation implemented in the device, the second bar (green) is the error of our segmentation method starting with an initial contour given by the device’s segmentation, and the last bar (red) represents the error of our segmentation method starting with an initial contour calculated as described in subsection 3.1. The lower bars represent the amount of error in the ONH region.

For evaluation and analysis of our results, we used GNU Octave version 4.0.3 [49] and R version 3.3.2 [50]. Our segmentation method outperforms the segmentation available from the device (Wilcoxon signed-rank test, p = 1.837 × 10−10 at the 0.05 significance level).

To check the influence of the initial contour, we compared the results of our segmentation method obtained starting either with our own initial contour (mean / STD = 21 707 / 5365, where STD denotes the standard deviation) or with the segmentation given by the device (mean / STD = 21 715 / 5152; Wilcoxon signed-rank test, p = 0.3007 at the 0.05 significance level). Our model again outperforms the segmentation of the device (device segmentation error mean / STD = 34 884 / 14 062; Wilcoxon signed-rank test, p = 1.819 × 10−12 at the 0.05 significance level). Moreover, starting from two different initializations, the results of our method are very close to each other (bars 2 and 3), showing that our method is independent of a given rough estimate of the ILM.

We then looked at the error contribution of the central region around the ONH, consisting of all A-scans within a radius of 1.5 mm. Around 50 % of the total error stems from this ONH-centered region, which usually presents the strongest topological challenges. When making the same comparison with the ILM segmentation computed by the device, our algorithm again performed considerably better (error inside the center region with our algorithm mean / STD = 11 092 / 3306, compared to the device error mean / STD = 18 744 / 6328; Wilcoxon signed-rank test, p = 1.819 × 10−12 at the 0.05 significance level).

Lastly, we evaluated the local error size distribution, i.e. the local Euclidean distance between the manual and our automatic segmentation on each B-scan of all 40 scans (see Fig. 7), as well as the mean error and the 10 %, 50 % and 90 % quantiles of the error distribution (see Table 2). Here we calculated the distance of each line segment of the computed segmentation line to the ground-truth segmentation line in each OCT slice. Note that the manual segmentation is pixel based, whereas our proposed method has subpixel accuracy. We can see that a large majority of these local errors are less than 2 µm, with most of the remaining ones in the range from 2 µm to 4 µm.
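As a simplified sketch of this per-B-scan evaluation, the snippet below uses a column-wise vertical distance between the two boundary lines (a rough stand-in for the segment-to-line Euclidean distance used in the paper); boundary heights are hypothetical values in µm:

```python
def local_errors(auto_line, manual_line):
    """Column-wise absolute distance (µm) between the automatic and manual
    ILM boundary heights on one B-scan."""
    return [abs(a - m) for a, m in zip(auto_line, manual_line)]

def fraction_below(errs, threshold):
    """Fraction of local errors strictly below `threshold`."""
    return sum(1 for e in errs if e < threshold) / len(errs)

auto = [10.0, 11.5, 13.0, 20.0]    # hypothetical automatic boundary (µm)
manual = [10.5, 11.0, 13.0, 17.0]  # hypothetical manual ground truth (µm)
errs = local_errors(auto, manual)
print(errs)                       # -> [0.5, 0.5, 0.0, 3.0]
print(fraction_below(errs, 2.0))  # -> 0.75
```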

4.3. Segmentation validation

To evaluate the performance of our segmentation method in a clinical setting, we analyzed the 100 OCT scans described in section 3.5.1, denoted as group100. An experienced grader manually checked all scans and corrected the segmentation where necessary. Besides segmentation results, the influence of scan quality on the segmentation was assessed. To this end, another experienced grader labeled regions in all scans (the initial group40, see section 3.5.2, and the aforementioned group100) according to the quality criteria listed in Table 5.

Figure 8 shows a B-scan with the labeled A-scans that suffer from low SNR, and the corresponding SLO image with a projected map of all A-scans labeled according to these quality criteria.

Table 3 shows all A-scans of group100 that were assessed according to the quality criteria, as well as the number of their corresponding corrected A-scans. It can be seen that less than 0.2 % of all A-scans had to be corrected. Note that scan quality does influence the segmentation results, yet only a small percentage of bad-quality regions also presented an erroneous ILM: 13.3 % of the A-scans with low SNR and 6.2 % of those with low illumination. In the concrete clinical application context, a user would be interested in the workload as the number of B-scans that must be corrected. The modified A-scans in Table 3 correspond to 724 B-scans, a rather low fraction (5.0 %) of the 14 500 evaluated in total.

Furthermore, Fig. 9 shows the error distribution in µm³ for all B-scans of group40 and group100, respectively. Although the two groups underwent different segmentation correction processes, the results presented in these two figures are similar. Most of the scans present small errors, with a few larger outliers: 77.15 % of group40 and 99.3 % of group100 are below 0.0003 mm³. To understand the magnitude of these outliers, note that the average ONH volume of a healthy control is (1.80 ± 0.49) mm³ [21]. The maximum error per volume for group40, 0.06546 mm³, represents 3.6 % of an average volume, while 90 % of all volumes have an error less than or equal to 0.0454 mm³, i.e. 2.52 % of the average volume. For group100 the maximum error per volume is 0.01259 mm³, 0.69 % of an average volume, while the error for 90 % of all volumes is less than or equal to 0.0025 mm³, which represents 0.138 %.
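The percentages quoted above follow directly from dividing each error volume by the average ONH volume; a quick arithmetic check, assuming the 1.80 mm³ healthy-control mean from [21]:

```python
AVG_VOLUME_MM3 = 1.80  # mean ONH volume of healthy controls, from [21]

def percent_of_average(error_mm3):
    """Express a per-volume segmentation error as a percentage of the
    average ONH volume."""
    return 100.0 * error_mm3 / AVG_VOLUME_MM3

print(round(percent_of_average(0.06546), 1))  # -> 3.6  (group40 maximum)
print(round(percent_of_average(0.0454), 2))   # -> 2.52 (group40 90% quantile)
```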

We emphasize again a very important aspect of ONH OCT scans. Due to the anatomy of the ONH and its blood vessels, the contrast between tissue and vitreous is often insufficient to precisely delimit the retinal tissue, even for an experienced grader. To investigate the segmentation quality in these ambiguous parts, a second experienced grader labeled them in all scans of group100. The results showed that, of all the corrected voxels, approximately 25 % were contained in these labeled areas.

4.4. ILM segmentation

We present several challenging examples selected from the larger database presented in subsection 4.3. Figure 10 shows a case where a hole is formed in 3D. Our segmentation (red line) is able to correctly detect the ILM, whereas the method implemented in the device fails. Figure 11 likewise shows several cases where our method manages to detect all visible tissue connected to the ILM. To emphasize the 3D nature of our method, we present in Fig. 12 rendered surfaces obtained from the segmentation result. It can be clearly seen that overhangs in the cup create special topologies that were not properly handled by previous methods.


Fig. 11 Segmentation results, showing B-scans that present particular challenges: deep cups - (a), (b), and (e); overhangs - (a), (d), (e), and (f); tissue still connected to ILM: (c). The red line depicts our proposed method, and the blue line the segmentation given by the device.



Fig. 12 3D view of the ILM segmentation results showing particularly interesting topologies at the ONH.


5. Discussion

We developed and implemented a method for segmenting the ILM surface from ONH centered 3D OCT volume scans utilizing a modified CV model. We optimized, tested and validated the method on scans presenting a variety of ONH shapes found in healthy persons and patients with autoimmune neuroinflammatory disorders.

Our method provides an ILM segmentation that handles not only steep cups but also challenging topological structures such as overhangs. To achieve this, we made several modifications to the original CV approach. By introducing a local fitting energy in addition to the global one, we account for low-contrast regions, especially at the boundary between vitreous and retina, and for strong intensity inhomogeneities both within a single B-scan and across the entire volume. Wang et al. [41] introduced an energy functional with a local intensity fitting term and an auxiliary global intensity fitting term, and showed the advantages of such an approach in terms of accuracy and robustness in inhomogeneous regions. Our method extends this approach by incorporating this energy formulation into a computationally efficient narrow-band method. By locally rescaling the two parameters c1 and c2 in the global energy definition (see subsections 2.1 and 2.2.1), we further stop the interface from entering retinal tissue. Furthermore, the introduction of a boundary potential successfully accounts for shadows cast by large blood vessels at the ONH. Not least, we carefully chose the key parameters ω, ν, c1 and c2 through an extensive optimization process.

As a result, the proposed multi-energy segmentation framework significantly improved segmentation results compared to the method integrated in the commercial device. Furthermore, our approach’s results were independent of the initialization contour.

We did not compare our method with other retinal segmentation methods from the literature, since each was evaluated on different types of retinal images and with different accuracy metrics. However, to our knowledge, none of the existing methods, e.g. those based on graph cuts, tackles the challenge of segmenting overhangs, which occur especially in the ONH region [27,51–57].

Of note, although the length-adaptive method introduced by Keller et al. [29], evaluated on pathologic eyes in the macula region, performed better than the shortest-path algorithm, it was not able to successfully segment every feature of the presented pathology; weak gradients in particular caused segmentation challenges. Again, a direct comparison with this method is difficult, since it was applied to the macula region. It is also unclear how this approach would handle overhangs, which can appear as several holes in one B-scan.

Several level-set methods have also been employed for retinal layer segmentation, most of them related to the macula region and/or specific pathologies in this region [30–34]. The methods in [32,33] use 2D segmentation, which limits their ability to capture challenging 3D structures such as those seen at the ONH. Furthermore, most of these methods focus on the macula, except [31], which also provides data on ONH scans, although, as the authors themselves stated, this algorithm would need further modifications to account for difficult topologies. To overcome intensity variation and disruptive pathological changes caused by several diseases, [30] proposed a 3D approach steered by local differences in the signal between adjacent retinal layers. It is not clear how this approach would perform at the ONH, especially with low contrast between vitreous and interface, since this might be the reason why drusen, especially small ones, were not detected very accurately. Another open question is to what extent the segmentation is affected by noise, since it depends strongly on the contrast between layers in the attenuation coefficient values.

Deep-learning-based methods are an important advancement and might lead to powerful segmentation approaches for retinal tissue [35,36,39]. Their performance is highly dependent on the training dataset [58], and thus their applicability to a broad clinical spectrum of retinal changes remains to be shown. It will be interesting to see first applications of this promising approach to segmenting ONH scans.

A limitation of our method lies in the detection of the initial contour. This failed in one of the 40 scans used in the optimization process, caused by weak image quality and grossly uneven image contrast. For practical use, a minimum image quality requirement might prevent such errors.

We also tested the performance of our segmentation on a larger database containing 100 volume scans of healthy controls and patients with several different neurological disorders, and investigated the influence of image quality on the quality of the segmentation. The results have shown that our proposed segmentation is robust against poor scan quality (such as noise and uneven illumination), with only a small percentage of the total segmentation having to be corrected in these cases. Additionally, the overall amount of error was small, with few outliers, which represented a small fraction of the total volume of a complete scan. Therefore, our segmentation represents a considerable improvement over the current commercial solution and can be employed in clinical routine.

Our method may present a key component in future approaches to analyzing overall ONH structure and volume in health and disease. For this, an accurate detection of the tissue between ILM and BM is necessary. Further, detection of the ONH center and the ocular axis is required to establish the coordinate origin and scan rotation. We previously developed a 2D-based segmentation approach for ONH volume computation [59], which was able to robustly detect the BM as the lower boundary in healthy people as well as in patients suffering from various neurological disorders [21,22,60]. However, in this method the other key surface for volume computation, the ILM, was taken from the device itself, which led to erroneous boundary definitions in many cases. Thus, with the proposed modified CV-based method, ILM detection for ONH volume computation may be further optimized.

6. Conclusion

It is crucial to have a robust ILM segmentation for the analysis of ONH structure in both neurologic and ophthalmologic conditions. A better insight into morphometric changes of the ONH could potentially improve the diagnosis and understanding of diseases like multiple sclerosis, neuromyelitis optica spectrum disorders, Parkinson’s disease and Alzheimer’s dementia.

Our approach provides a flexible and accurate computational method to segment the ILM in 3D OCT volume scans. We showed the superior performance of the proposed CV model compared to a standard commercial software method and its robustness against the large variations in ONH topology. Furthermore, our method is fast and allows for flexible initialization. Thus, it shows great potential for clinical use, as it is capable of robustly handling ONH data of both healthy controls and patients suffering from different neurological disorders. We hope that similar concepts, especially the optimization framework, will be applied in other applications as well.

7. Formula

7.1. Regularized Heaviside and delta function

For numerical calculation we use the approximated Heaviside function Hε

$$H_\varepsilon(z)=\begin{cases}1, & z>\varepsilon\\[4pt] 0, & z<-\varepsilon\\[4pt] \dfrac{1}{2}\left[1+\dfrac{z}{\varepsilon}+\dfrac{1}{\pi}\sin\!\left(\dfrac{\pi z}{\varepsilon}\right)\right], & |z|\le\varepsilon\end{cases}$$
with corresponding smoothed Dirac delta function δε
$$\delta_\varepsilon(z)=\begin{cases}0, & |z|>\varepsilon\\[4pt] \dfrac{1}{2\varepsilon}\left[1+\cos\!\left(\dfrac{\pi z}{\varepsilon}\right)\right], & |z|\le\varepsilon\end{cases}$$

We made this choice so that δε has compact support, namely on the narrow band of width 2ε.
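The two regularized functions can be implemented directly from the definitions above; a minimal sketch (function names are our own):

```python
import math

def heaviside_eps(z, eps):
    """Regularized Heaviside H_eps: 0 below -eps, 1 above eps, and a smooth
    sinusoidal transition on the narrow band |z| <= eps."""
    if z > eps:
        return 1.0
    if z < -eps:
        return 0.0
    return 0.5 * (1.0 + z / eps + math.sin(math.pi * z / eps) / math.pi)

def delta_eps(z, eps):
    """Smoothed Dirac delta (derivative of heaviside_eps) with compact
    support on the narrow band |z| <= eps."""
    if abs(z) > eps:
        return 0.0
    return (1.0 + math.cos(math.pi * z / eps)) / (2.0 * eps)

print(heaviside_eps(0.0, 1.0))  # -> 0.5 (midpoint of the transition)
print(delta_eps(2.0, 1.0))      # -> 0.0 (outside the narrow band)
```

Note that delta_eps is exactly the derivative of the transition branch of heaviside_eps, so the pair stays consistent under the narrow-band level-set evolution.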

7.2. Convolution with compact support kernel

The calculation of the local intensity fitting force involves convolutions, which have been adapted so that they can be used consistently within the narrow band. There, not all values are well defined, and the convolution formula therefore has to be modified.

The definitions of the local intensity averages are modified as follows (nan means not a number):

$$f_1(x)=\begin{cases}\dfrac{K_\sigma(x)*\left[H_\varepsilon(\phi(x))\,I(x)\right]}{K_\sigma(x)*H_\varepsilon(\phi(x))}, & K_\sigma(x)*H_\varepsilon(\phi(x))>0\\[8pt] \mathrm{nan}, & \text{otherwise}\end{cases}$$
$$f_2(x)=\begin{cases}\dfrac{K_\sigma(x)*\left[(1-H_\varepsilon(\phi(x)))\,I(x)\right]}{K_\sigma(x)*\left[1-H_\varepsilon(\phi(x))\right]}, & K_\sigma(x)*\left[1-H_\varepsilon(\phi(x))\right]>0\\[8pt] \mathrm{nan}, & \text{otherwise}\end{cases}$$
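A 1D sketch of this masked (normalized) convolution for f1, assuming a discrete kernel K and a binary inside-indicator H; in the actual method, H would be H_eps(ϕ) and K a Gaussian K_σ applied on the 3D narrow band:

```python
def local_inside_average(I, H, K):
    """Normalized convolution f1: kernel-weighted average of I over the
    inside region; returns nan where the kernel support contains no
    inside pixels (denominator zero)."""
    n, r = len(I), len(K) // 2
    out = []
    for x in range(n):
        num = den = 0.0
        for j, k in enumerate(K):
            y = x + j - r
            if 0 <= y < n:  # stay within the image
                num += k * H[y] * I[y]
                den += k * H[y]
        out.append(num / den if den > 0 else float('nan'))
    return out

I = [10.0, 10.0, 20.0, 20.0]  # hypothetical image intensities
H = [1, 1, 0, 0]              # inside indicator: first two pixels "inside"
f1 = local_inside_average(I, H, [1.0, 1.0, 1.0])
print(f1)  # first entries 10.0; last entry nan (no inside pixel in reach)
```

f2 follows the same pattern with 1 − H in place of H; the nan entries are exactly the points excluded from the local fitting force below.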

Now the local fitting forces are calculated as follows:

If $\int_{f_1(y)\neq\mathrm{nan}} K_\sigma(y-x)\,dy>0$ and $\int_{f_2(y)\neq\mathrm{nan}} K_\sigma(y-x)\,dy>0$, then
$$F_{\mathrm{local}}(x)=\left[-\lambda_1\,\frac{\displaystyle\int_{f_1(y)\neq\mathrm{nan}}K_\sigma(y-x)\,\lvert I(x)-f_1(y)\rvert^2\,dy}{\displaystyle\int_{f_1(y)\neq\mathrm{nan}}K_\sigma(y-x)\,dy}+\lambda_1\,\frac{\displaystyle\int_{f_2(y)\neq\mathrm{nan}}K_\sigma(y-x)\,\lvert I(x)-f_2(y)\rvert^2\,dy}{\displaystyle\int_{f_2(y)\neq\mathrm{nan}}K_\sigma(y-x)\,dy}\right];$$
otherwise, set
$$F_{\mathrm{local}}(x)=0.$$

7.3. Parameters

All parameter values of our computational model are shown in Table 4.

Funding

German Federal Ministry of Economics and Technology (ZIM-Projekt KF 2381714BZ4, BMWi Exist 03EUEBE079); DFG Excellence Cluster NeuroCure Grant (DFG 257).

Acknowledgments

The authors would like to thank Inge Beckers, Josef Kauer, Henning Nobmann and Timm Oberwahrenbrock for helpful discussions, and Hanna Zimmermann and Charlotte Bereuter for their valuable input on the OCT data and their help with the quality assessment of the scans (Hanna Zimmermann) and with the segmentation correction and uncertainty region labeling (Charlotte Bereuter). We also thank Christoph Jansen and the HTW Berlin for providing the infrastructure that helped speed up computation for the optimization process. We acknowledge support from the Open Access Publication Fund of Charité - Universitätsmedizin Berlin.

Disclosures

A patent application has been submitted for the method described in this paper. KG: (P); FH: (P); FP: (P); AUB: (P), Nocturne UG (I); EMK (P), Nocturne UG (I,E). F. Paul serves on the scientific advisory board for the Novartis OCTIMS study; received speaker honoraria and travel funding from Bayer, Novartis, Biogen Idec, Teva, Sanofi-Aventis/Genzyme, Merck Serono, Alexion, Chugai, MedImmune, and Shire; is an academic editor for PLoS ONE; is an associate editor for Neurology Neuroimmunology and Neuroinflammation; consulted for Sanofi Genzyme, Biogen Idec, MedImmune, Shire, and Alexion; and received research support from Bayer, Novartis, Biogen Idec, Teva, Sanofi-Aventis/Genzyme, Alexion and Merck Serono. A.U. Brandt served on the scientific advisory board for the Biogen Vision study; received travel funding and/or speaker honoraria from Novartis Pharma, Biogen, Bayer and Teva; has consulted for Biogen, Nexus, Teva and Motognosis, Nocturne.

References and links

1. A. London, I. Benhar, and M. Schwartz, “The retina as a window to the brain—from eye research to CNS disorders,” Nat. Rev. Neurol. 9, 44–53 (2012). [CrossRef]  

2. T. Oberwahrenbrock, M. Ringelstein, S. Jentschke, K. Deuschle, K. Klumbies, J. Bellmann-Strobl, J. Harmel, K. Ruprecht, S. Schippling, H.-P. Hartung, O. Aktas, A. U. Brandt, and F. Paul, “Retinal ganglion cell and inner plexiform layer thinning in clinically isolated syndrome,” Multiple Scler. J. 19, 1887–1895 (2013). [CrossRef]  

3. T. Oberwahrenbrock, S. Schippling, M. Ringelstein, F. Kaufhold, H. Zimmermann, N. Keser, K. L. Young, J. Harmel, H.-P. Hartung, R. Martin, F. Paul, O. Aktas, and A. U. Brandt, “Retinal damage in multiple sclerosis disease subtypes measured by high-resolution optical coherence tomography,” Multiple Scler. Int. 2012, 530305 (2012).

4. E. Schneider, H. Zimmermann, T. Oberwahrenbrock, F. Kaufhold, E. M. Kadas, A. Petzold, F. Bilger, N. Borisow, S. Jarius, B. Wildemann, K. Ruprecht, A. U. Brandt, and F. Paul, “Optical coherence tomography reveals distinct patterns of retinal damage in neuromyelitis optica and multiple sclerosis,” PLOS ONE 8, 1–10 (2013). [CrossRef]  

5. F. C. Oertel, J. Kuchling, H. Zimmermann, C. Chien, F. Schmidt, B. Knier, J. Bellmann-Strobl, T. Korn, M. Scheel, A. Klistorner, K. Ruprecht, F. Paul, and A. U. Brandt, “Microstructural visual system changes in AQP4-antibody–seropositive NMOSD,” Neurol. - Neuroimmunol. Neuroinflammation 4, e334 (2017). [CrossRef]  

6. F. C. Oertel, H. Zimmermann, J. Mikolajczak, M. Weinhold, E. M. Kadas, T. Oberwahrenbrock, F. Pache, J. Bellmann-Strobl, K. Ruprecht, F. Paul, and A. U. Brandt, “Contribution of blood vessels to retinal nerve fiber layer thickness in NMOSD,” Neurol. - Neuroimmunol. Neuroinflammation 4, e338 (2017). [CrossRef]  

7. M. Ringelstein, P. Albrecht, I. Kleffner, B. Bühn, J. Harmel, A.-K. Müller, D. Finis, R. Guthoff, R. Bergholz, T. Duning, M. Krämer, F. Paul, A. Brandt, T. Oberwahrenbrock, J. Mikolajczak, B. Wildemann, S. Jarius, H.-P. Hartung, O. Aktas, and J. Dörr, “Retinal pathology in susac syndrome detected by spectral-domain optical coherence tomography,” Neurology 85, 610–618 (2015). [CrossRef]   [PubMed]  

8. A. U. Brandt, T. Oberwahrenbrock, F. Costello, M. Fielden, K. Gertz, I. Kleffner, F. Paul, R. Bergholz, and J. Dörr, “Retinal lesion evolution in Susac syndrome,” Retina 36, 366–374 (2016). [CrossRef]  

9. N. M. Roth, S. Saidha, H. Zimmermann, A. U. Brandt, J. Isensee, A. Benkhellouf-Rutkowska, M. Dornauer, A. A. Kühn, T. Müller, P. A. Calabresi, and F. Paul, “Photoreceptor layer thinning in idiopathic Parkinson’s disease,” Mov. Disord. 29, 1163–1170 (2014). [CrossRef]   [PubMed]  

10. C. Y. Lui Cheung, M. K. Ikram, C. Chen, and T. Y. Wong, “Imaging retina to study dementia and stroke,” Prog. Retin. Eye Res. 57, 89–107 (2017). [CrossRef]  

11. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]   [PubMed]  

12. M. Bock, A. U. Brandt, J. Dörr, C. F. Pfueller, S. Ohlraun, F. Zipp, and F. Paul, “Time domain and spectral domain optical coherence tomography in multiple sclerosis: a comparative cross-sectional study,” Multiple Scler. J. 16, 893–896 (2010). [CrossRef]  

13. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4, 1133 (2013). [CrossRef]   [PubMed]  

14. A. Petzold, L. J. Balcer, P. A. Calabresi, F. Costello, T. C. Frohman, E. M. Frohman, E. H. Martinez-Lapiscina, A. J. Green, R. Kardon, O. Outteryck, F. Paul, S. Schippling, P. Vermersch, P. Villoslada, L. J. Balk, O. Aktas, P. Albrecht, J. Ashworth, N. Asgari, G. Black, D. Boehringer, R. Behbehani, L. Benson, R. Bermel, J. Bernard, A. Brandt, J. Burton, J. Calkwood, C. Cordano, A. Courtney, A. Cruz-Herranz, R. Diem, A. Daly, H. Dollfus, C. Fasser, C. Finke, J. Frederiksen, E. Garcia-Martin, I. G. Suárez, G. Pihl-Jensen, J. Graves, J. Havla, B. Hemmer, S.-C. Huang, J. Imitola, H. Jiang, D. Keegan, E. Kildebeck, A. Klistorner, B. Knier, S. Kolbe, T. Korn, B. LeRoy, L. Leocani, D. Leroux, N. Levin, P. Liskova, B. Lorenz, J. L. Preiningerova, J. Mikolajczak, X. Montalban, M. Morrow, R. Nolan, T. Oberwahrenbrock, F. C. Oertel, C. Oreja-Guevara, B. Osborne, A. Papadopoulou, M. Ringelstein, S. Saidha, B. Sanchez-Dalmau, J. Sastre-Garriga, R. Shin, N. Shuey, K. Soelberg, A. Toosy, R. Torres, A. Vidal-Jordana, A. Waldman, O. White, A. Yeh, S. Wong, and H. Zimmermann, “Retinal layer segmentation in multiple sclerosis: a systematic review and meta-analysis,” The Lancet Neurol. 16, 797–812 (2017). [CrossRef]   [PubMed]  

15. E. H. Martinez-Lapiscina, S. Arnow, J. A. Wilson, S. Saidha, J. L. Preiningerova, T. Oberwahrenbrock, A. U. Brandt, L. E. Pablo, S. Guerrieri, I. Gonzalez, O. Outteryck, A.-K. Mueller, P. Albrecht, W. Chan, S. Lukas, L. J. Balk, C. Fraser, J. L. Frederiksen, J. Resto, T. Frohman, C. Cordano, I. Zubizarreta, M. Andorra, B. Sanchez-Dalmau, A. Saiz, R. Bermel, A. Klistorner, A. Petzold, S. Schippling, F. Costello, O. Aktas, P. Vermersch, C. Oreja-Guevara, G. Comi, L. Leocani, E. Garcia-Martin, F. Paul, E. Havrdova, E. Frohman, L. J. Balcer, A. J. Green, P. A. Calabresi, and P. Villoslada, “Retinal thickness measured with optical coherence tomography and risk of disability worsening in multiple sclerosis: a cohort study,” The Lancet Neurol. 15, 574–584 (2016). [CrossRef]   [PubMed]  

16. A. Cruz-Herranz, L. J. Balk, T. Oberwahrenbrock, S. Saidha, E. H. Martinez-Lapiscina, W. A. Lagreze, J. S. Schuman, P. Villoslada, P. Calabresi, L. Balcer, A. Petzold, A. J. Green, F. Paul, A. U. Brandt, and P. Albrecht, “The APOSTEL recommendations for reporting quantitative optical coherence tomography studies,” Neurology. 86, 2303–2309 (2016). [CrossRef]   [PubMed]  

17. T. C. Chen, “Spectral domain optical coherence tomography in glaucoma: Qualitative and quantitative analysis of the optic nerve head and retinal nerve fiber layer (an AOS thesis),” Transactions Am. Ophthalmol. Soc. 107, 254–281 (2009).

18. F. Pollet-Villard, C. Chiquet, J.-P. Romanet, C. Noel, and F. Aptel, “Structure-function relationships with spectral-domain optical coherence tomography retinal nerve fiber layer and optic nerve head measurements structure-function relationships with SD-OCT,” Investig. Ophthalmol. Vis. Sci . 55, 2953 (2014). [CrossRef]  

19. B. C. Chauhan, V. M. Danthurebandara, G. P. Sharpe, S. Demirel, C. A. Girkin, C. Y. Mardin, A. F. Scheuerle, and C. F. Burgoyne, “Bruch’s membrane opening minimum rim width and retinal nerve fiber layer thickness in a normal white population,” Ophthalmology. 122, 1786–1794 (2017). [CrossRef]  

20. P. Enders, W. Adler, F. Schaub, M. M. Hermann, T. Dietlein, C. Cursiefen, and L. M. Heindl, “Novel Bruch’s membrane opening minimum rim area equalizes disc size dependency and offers high diagnostic power for glaucoma,” Investig. Ophthalmol. Vis. Sci. 57, 6596 (2016).

21. F. Kaufhold, E. M. Kadas, C. Schmidt, H. Kunte, J. Hoffmann, H. Zimmermann, T. Oberwahrenbrock, L. Harms, K. Polthier, A. U. Brandt, and F. Paul, “Optic nerve head quantification in idiopathic intracranial hypertension by spectral domain oct,” PLOS ONE 7, 1–6 (2012). [CrossRef]  

22. P. Albrecht, C. Blasberg, M. Ringelstein, A.-K. Müller, D. Finis, R. Guthoff, E.-M. Kadas, W. Lagreze, O. Aktas, H.-P. Hartung, F. Paul, A. U. Brandt, and A. Methner, “Optical coherence tomography for the diagnosis and monitoring of idiopathic intracranial hypertension,” J. Neurol. 264, 1370–1380 (2017). [CrossRef]   [PubMed]  

23. S. L. Galetta, P. Villoslada, N. Levin, K. Shindler, H. Ishikawa, E. Parr, D. Cadavid, and L. J. Balcer, “Acute optic neuritis,” Neurol. - Neuroimmunol. Neuroinflammation 2, e135 (2015). [CrossRef]  

24. A. Petzold, M. P. Wattjes, F. Costello, J. Flores-Rivera, C. L. Fraser, K. Fujihara, J. Leavitt, R. Marignier, F. Paul, S. Schippling, C. Sindic, P. Villoslada, B. Weinshenker, and G. T. Plant, “The investigation of acute optic neuritis: a review and proposed protocol,” Nat. Rev. Neurol. 10, 447–458 (2014). [CrossRef]   [PubMed]  

25. S. B. Syc, C. V. Warner, S. Saidha, S. K. Farrell, A. Conger, E. R. Bisker, J. Wilson, T. C. Frohman, E. M. Frohman, L. J. Balcer, and P. A. Calabresi, “Cup to disc ratio by optical coherence tomography is abnormal in multiple sclerosis,” J. Neurolog. Sci. 302, 19–24 (2011). [CrossRef]  

26. A. Shah, J. K. Wang, M. K. Garvin, M. Sonka, and X. Wu, “Automated surface segmentation of internal limiting membrane in spectral-domain optical coherence tomography volumes with a deep cup using a 3D range expansion approach,” in 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI) (2014), pp. 1405–1408.

27. M. S. Miri, V. A. Robles, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Incorporation of gradient vector flow field in a multimodal graph-theoretic approach for segmenting the internal limiting membrane from glaucomatous optic nerve head-centered SD-OCT volumes,” Comput. Med. Imaging Graph. 55, 87–94 (2017). [CrossRef]  

28. F. A. Almobarak, N. O’Leary, A. S. C. Reis, G. P. Sharpe, D. M. Hutchison, M. T. Nicolela, and B. C. Chauhan, “Automated segmentation of optic nerve head structures with optical coherence tomography segmentation of optic nerve head structures,” Investig. Ophthalmol. Vis. Sci . 55, 1161 (2014). [CrossRef]  

29. B. Keller, D. Cunefare, D. S. Grewal, T. H. Mahmoud, J. A. Izatt, and S. Farsiu, “Length-adaptive graph search for automatic segmentation of pathological features in optical coherence tomography images,” J. Biomed. Opt . 21, 076015 (2016). [CrossRef]  

30. J. Novosel, K. A. Vermeer, J. H. de Jong, Z. Wang, and L. J. van Vliet, “Joint segmentation of retinal layers and focal lesions in 3-D OCT data of topologically disrupted retinas,” IEEE Transactions on Med. Imaging 36, 1276–1286 (2017). [CrossRef]  

31. J. Novosel, G. Thepass, H. G. Lemij, J. F. de Boer, K. A. Vermeer, and L. J. van Vliet, “Loosely coupled level sets for simultaneous 3D retinal layer segmentation in optical coherence tomography,” Med. Image Analysis 26, 146–158 (2015). [CrossRef]  

32. J. Novosel, Z. Wang, H. de Jong, K. A. Vermeer, and L. J. van Vliet, “Loosely coupled level sets for retinal layers and drusen segmentation in subjects with dry age-related macular degeneration,” in Medical Imaging 2016: Image Processing, M. A. Styner and E. D. Angelini, eds. (SPIE, 2016).

33. J. Novosel, Z. Wang, H. de Jong, M. van Velthoven, K. A. Vermeer, and L. J. van Vliet, “Locally-adaptive loosely-coupled level sets for retinal layer and fluid segmentation in subjects with central serous retinopathy,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI) (IEEE, 2016).

34. A. Carass, A. Lang, M. Hauser, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Multiple-object geometric deformable model for segmentation of macular OCT,” Biomed. Opt. Express 5, 1062 (2014). [CrossRef]   [PubMed]  

35. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8, 2732 (2017). [CrossRef]   [PubMed]  

36. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8, 3627 (2017). [CrossRef]   [PubMed]  

37. H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in 2015 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2015).

38. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Lecture Notes in Computer Science (Springer International Publishing, 2015), pp. 234–241. [CrossRef]  

39. Y. He, A. Carass, Y. Yun, C. Zhao, B. M. Jedynak, S. D. Solomon, S. Saidha, P. A. Calabresi, and J. L. Prince, “Towards topological correct segmentation of macular OCT from cascaded FCNs,” in Fetal, Infant and Ophthalmic Medical Image Analysis (Springer International Publishing, 2017), pp. 202–209. [CrossRef]  

40. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process. 10, 266–277 (2001). [CrossRef]  

41. L. Wang, C. Li, Q.-S. Sun, D.-S. Xia, and C.-Y. Kao, “Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation,” Comp. Med. Imag. Graph. 33, 520–531 (2009). [CrossRef]  

42. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision 1, 321–331 (1988). [CrossRef]  

43. D. Mumford and J. Shah, “Optimal approximations by piecewise smooth functions and associated variational problems,” Commun. Pure Appl. Math. 42, 577–685 (1989). [CrossRef]  

44. J. Sethian, “Level Set Methods: An Act of Violence,” American Scientist (1997).

45. J. Sethian, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science, Cambridge Monographs on Applied and Computational Mathematics (Cambridge University Press, 1999).

46. A. A. Taha and A. Hanbury, “Metrics for evaluating 3d medical image segmentation: analysis, selection, and tool,” BMC Med. Imaging 15, 29 (2015). [CrossRef]   [PubMed]  

47. S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, “Differential Evolution Using a Neighborhood-Based Mutation Operator,” IEEE Trans. Evol. Comput. 13, 526–553 (2009). [CrossRef]  

48. C. Jansen, M. Witt, and D. Krefting, Employing Docker Swarm on OpenStack for Biomedical Analysis (Springer International Publishing, 2016), pp. 303–318.

49. J. W. Eaton, D. Bateman, S. Hauberg, and R. Wehbring, GNU Octave version 4.0.0 manual: a high-level interactive language for numerical computations (2015).

50. R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2017).

51. M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Transactions on Med. Imaging 28, 1436–1447 (2009). [CrossRef]  

52. Z. Hu, M. D. Abràmoff, Y. H. Kwon, K. Lee, and M. K. Garvin, “Automated segmentation of neural canal opening and optic cup in 3d spectral optical coherence tomography volumes of the optic nerve head,” Investig. Ophthalmol. Vis. Sci. 51, 5708 (2010). [CrossRef]  

53. K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abràmoff, “Segmentation of the optic disc in 3D OCT scans of the optic nerve head,” IEEE Trans Med Imaging 29, 159–168 (2010). [CrossRef]  

54. B. J. Anthony, M. D. Abràmoff, K. Lee, P. Sonkova, P. Gupta, Y. Kwon, M. Neimeijer, Z. Hu, and M. K. Garvin, “Automated 3D segmentation of intraretinal layers from optic nerve head optical coherence tomography images,” Proc. SPIE 7626, 76260 (2010). [CrossRef]  

55. B. J. Antony, M. S. Miri, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Automated 3d segmentation of multiple surfaces with a shared hole: Segmentation of the neural canal opening in SD-OCT volumes,” Med Image Comput. Comput. Assist. Interv 17, 739–746 (2014). [PubMed]  

56. M. S. Miri, K. Lee, M. Niemeijer, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images,” Proc. SPIE 8669, 86690 (2013). [CrossRef]  

57. M. S. Miri, M. D. Abràmoff, K. Lee, M. Niemeijer, J. K. Wang, Y. H. Kwon, and M. K. Garvin, “Multimodal segmentation of optic disc and cup from sd-oct and color fundus photographs using a machine-learning graph-based approach,” IEEE Transactions on Med. Imaging 34, 1854–1866 (2015). [CrossRef]  

58. B. J. Antony, M. D. Abràmoff, M. M. Harper, W. Jeong, E. H. Sohn, Y. H. Kwon, R. Kardon, and M. K. Garvin, “A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes,” Biomed. Opt. Express 4, 2712 (2013). [CrossRef]  

59. E. M. Kadas, F. Kaufhold, C. Schulz, F. Paul, K. Polthier, and A. U. Brandt, 3D Optic Nerve Head Segmentation in Idiopathic Intracranial Hypertension (Springer Berlin Heidelberg, 2012), pp. 262–267.

60. P. Albrecht, C. Blasberg, S. Lukas, M. Ringelstein, A.-K. Müller, J. Harmel, E.-M. Kadas, D. Finis, R. Guthoff, O. Aktas, H.-P. Hartung, F. Paul, A. U. Brandt, P. Berlit, A. Methner, and M. Kraemer, “Retinal pathology in idiopathic moyamoya angiopathy detected by optical coherence tomography,” Neurology 85, 521–527 (2015). [CrossRef]   [PubMed]  

Figures (12)

Fig. 1 (Left) Volume scan (C-scan) of the optic nerve head (ONH). (Right) The volume scan consists of slices (B-scans), where each column is an axial depth scan (A-scan). The ONH typically has a cup in its center (1). The blue line marks the inner limiting membrane (ILM), separating the vitreous body (2) from the retinal tissue (3). The magenta line represents Bruch's membrane (BM). Note how the blue line fails to correctly detect the ILM, missing both the vessel at the top left and the deep cupping.
Fig. 2 a) B-scan presenting several challenges for ILM segmentation using the original CV model: the yellow arrow marks the ONH region with no retinal layer information except the ILM and some remaining nerve fiber tissue; the white arrows mark shadows caused by large blood vessels. b) B-scan showing segmentation results using the original CV model. c) Boundary potential V on a sample B-scan: in the red area V(x) = ρ, whereas in the green area V(x) = 0.
Fig. 3 a) Example of an ONH region with very low contrast compared to the vitreous (yellow arrow), which leads to the contour leaking into the tissue. b) The same scan with the ILM correctly detected after scaling the values c1 and c2 according to Eq. (7). c) Example of an ONH region with low intensity values in the inner layers (yellow arrow) compared to the dark vitreous as well as to the hyperintense outer layers (white arrow), causing the contour to falsely detect parts of the lower layers as vitreous. d) The same scan with the ILM correctly detected by rescaling the values c1 and c2 after the interface has almost reached the desired ILM contour.
Fig. 4 Three processing steps to obtain the initial segmentation: a) filtering with morphological and Gaussian filters; b) thresholding and discarding small connected components; c) the initial contour.
Fig. 5 Error distribution over 20 scans represented as box plots for the 10 optimization runs. The + symbol and the ◦ symbol denote outliers with an error deviating more than 1.5 × STD and more than 3.0 × STD from the mean, respectively.
Fig. 6 Errors over the complete data set (40 scans): segmentation of the device (blue), proposed algorithm with the device's segmentation as initial contour (green), and proposed algorithm with its own initial contour (red). The lower bars show the error in the inner 1.5 mm circle of the ONH. The device error of scan 26 is very high (110 508 voxels) because the segmentation was completely wrong on some B-scans.
Fig. 7 Normalized distribution of local Euclidean distances between manual and automatic segmentation.
Fig. 8 Quality assessment of a volume scan: (a) B-scan with a low-SNR region in orange; (b) corresponding SLO image with the quality assessment map, color-coded according to the criteria shown on the right-hand side of the map; the green rectangle delimits the scan area, and the blue arrow marks the B-scan position.
Fig. 9 Distribution of errors per B-scan: (a) group40, obtained by comparison with manual segmentations; (b) group100, obtained from the corrections performed by an experienced grader.
Fig. 10 Segmentation of three neighboring B-scans obtained from the 3D segmentation of the volume scan. The green line depicts the manual segmentation, the red line our proposed method, and the blue line the segmentation given by the device.
Fig. 11 Segmentation results showing B-scans that present particular challenges: deep cups in (a), (b), and (e); overhangs in (a), (d), (e), and (f); tissue still connected to the ILM in (c). The red line depicts our proposed method, and the blue line the segmentation given by the device.
Fig. 12 3D view of the ILM segmentation results showing particularly interesting topologies at the ONH.

Tables (5)

Table 1 Total error and parameter values as obtained in 10 optimization runs, sorted in ascending order of the error. Total error represents the accumulated error of the 20 scans in the optimization set.

Table 2 Mean error and the 10%, 50%, and 90% quantiles of the error distribution, computed as the Euclidean distance between automatic (proposed, device) and manual segmentation.

Table 3 Overview of scan quality (with corresponding labels) and segmentation corrections, as well as the voxel error of the corrections, for 100 volume scans randomly selected from a large database.

Table 4 Parameter values; x, y, z denote the horizontal, axial, and lateral (between B-scans) coordinate directions.

Equations (18)


$$\text{inside}(C) = \Omega_1 \quad \text{(the retinal tissue)}$$
$$\text{outside}(C) = \Omega_2 \quad \text{(the vitreous body)}$$
$$F_{\text{cv}}[C] = F_{\text{gif}} + \mu F_{\text{surf}} + \nu F_{\text{vol}},$$
$$F_{\text{gif}} = \lambda_1 \int_{\Omega_1} |I(x) - c_1|^2 \, dx + \lambda_2 \int_{\Omega_2} |I(x) - c_2|^2 \, dx,$$
$$F_{\text{surf}} = \int_C ds, \qquad ds \text{ being the area element},$$
$$F_{\text{vol}} = \int_{\Omega_1} 1 \, dx.$$
$$\hat{c}_i = \frac{c_i}{2} \left( 1 + \max_{k \in M} I(x_{kl}) \right), \qquad i \in \{1, 2\},\ l \in N,$$
$$F_{\text{bound}} = \int_{\Omega_1} V(x) \, dx.$$
$$F_{\text{lif}} = \lambda_1 \int \int_{\Omega_1} K_\sigma(x - y) \, |I(y) - f_1(x)|^2 \, dy \, dx + \lambda_2 \int \int_{\Omega_2} K_\sigma(x - y) \, |I(y) - f_2(x)|^2 \, dy \, dx.$$
$$F = \omega F_{\text{gif}} + (1 - \omega) F_{\text{lif}} + \mu F_{\text{surf}} + \nu F_{\text{vol}} + F_{\text{bound}}.$$
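The global fitting energy can be evaluated directly on a discretized level-set representation. The following is a minimal NumPy sketch, not the authors' implementation: it assumes a 2D image `I` and a signed level-set function `phi` with `phi > 0` inside Ω₁ (retinal tissue), approximates c₁ and c₂ by the region means, the surface term by the perimeter of the zero level set, and the volume term by the pixel count of Ω₁.

```python
import numpy as np

def chan_vese_energy(I, phi, lam1=1.0, lam2=1.0, mu=0.1, nu=0.01):
    """Discrete sketch of F_cv = F_gif + mu * F_surf + nu * F_vol.

    I   : 2D image as a float array
    phi : level-set function, phi > 0 inside Omega_1 (retinal tissue)
    """
    inside = phi > 0
    outside = ~inside
    # Region means c1 (tissue) and c2 (vitreous).
    c1 = I[inside].mean() if inside.any() else 0.0
    c2 = I[outside].mean() if outside.any() else 0.0
    # Global intensity fitting term F_gif.
    f_gif = lam1 * np.sum((I[inside] - c1) ** 2) \
          + lam2 * np.sum((I[outside] - c2) ** 2)
    # Surface term: perimeter of the zero level set, approximated by the
    # total variation of the sharp Heaviside of phi.
    gy, gx = np.gradient(np.heaviside(phi, 0.5))
    f_surf = np.sum(np.hypot(gx, gy))
    # Volume term: size of Omega_1 in pixels.
    f_vol = float(np.sum(inside))
    return f_gif + mu * f_surf + nu * f_vol, (c1, c2)
```

With μ = ν = 0 and a contour that exactly separates two constant regions, F_gif vanishes, which is the configuration toward which the curve evolution drives the interface.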
$$\frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi) \left[ -(1 - \omega) F_{\text{local}}(x) - \omega F_{\text{global}}(x) + \mu \, \operatorname{div} \left( \frac{\nabla \phi}{|\nabla \phi|} \right) - \nu - V(x) \right]$$
$$\text{with } F_{\text{global}}(x) = (I(x) - c_1(x))^2 - (I(x) - c_2(x))^2.$$
$$H_\varepsilon(z) = \begin{cases} 1 & z > \varepsilon \\ 0 & z < -\varepsilon \\ \frac{1}{2} \left[ 1 + \frac{z}{\varepsilon} + \frac{1}{\pi} \sin\left( \frac{\pi z}{\varepsilon} \right) \right] & |z| \le \varepsilon \end{cases}$$
$$\delta_\varepsilon(z) = \begin{cases} 0 & |z| > \varepsilon \\ \frac{1}{2\varepsilon} \left[ 1 + \cos\left( \frac{\pi z}{\varepsilon} \right) \right] & |z| \le \varepsilon \end{cases}$$
$$f_1(x) = \begin{cases} \dfrac{K_\sigma(x) * [H_\varepsilon(\phi(x)) \, I(x)]}{K_\sigma(x) * H_\varepsilon(\phi(x))} & K_\sigma(x) * H_\varepsilon(\phi(x)) > 0 \\ \text{NaN} & \text{otherwise} \end{cases}$$
$$f_2(x) = \begin{cases} \dfrac{K_\sigma(x) * [(1 - H_\varepsilon(\phi(x))) \, I(x)]}{K_\sigma(x) * [1 - H_\varepsilon(\phi(x))]} & K_\sigma(x) * [1 - H_\varepsilon(\phi(x))] > 0 \\ \text{NaN} & \text{otherwise} \end{cases}$$
$$F_{\text{local}}(x) = \lambda_1 \frac{\int_{f_1(y) \ne \text{NaN}} K_\sigma(y - x) \, |I(x) - f_1(y)|^2 \, dy}{\int_{f_1(y) \ne \text{NaN}} K_\sigma(y - x) \, dy} - \lambda_2 \frac{\int_{f_2(y) \ne \text{NaN}} K_\sigma(y - x) \, |I(x) - f_2(y)|^2 \, dy}{\int_{f_2(y) \ne \text{NaN}} K_\sigma(y - x) \, dy}$$
$$\text{where both } f_1 \text{ and } f_2 \text{ are defined, and } F_{\text{local}}(x) = 0 \text{ otherwise}.$$
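The regularized Heaviside/delta pair and the local fitting functions f₁, f₂ are straightforward to implement with Gaussian smoothing. Below is a sketch (not the paper's code; `sigma` and `eps` are illustrative defaults): `heaviside_eps` and `delta_eps` follow the piecewise definitions above, the convolutions with K_σ are realized via `scipy.ndimage.gaussian_filter`, and pixels where the smoothed region indicator vanishes are marked NaN, matching the NaN branches in the definitions of f₁ and f₂.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heaviside_eps(z, eps=1.0):
    """Regularized Heaviside H_eps (piecewise sine-based form)."""
    h = 0.5 * (1.0 + z / eps + np.sin(np.pi * z / eps) / np.pi)
    return np.where(z > eps, 1.0, np.where(z < -eps, 0.0, h))

def delta_eps(z, eps=1.0):
    """Regularized delta: derivative of H_eps, zero outside |z| <= eps."""
    d = (1.0 + np.cos(np.pi * z / eps)) / (2.0 * eps)
    return np.where(np.abs(z) > eps, 0.0, d)

def local_fit_means(I, phi, sigma=3.0, eps=1.0):
    """Local fitting functions f1, f2: Gaussian-weighted region means.

    Returns NaN where the smoothed region indicator vanishes, i.e. far
    from any pixel of the corresponding region.
    """
    h = heaviside_eps(phi, eps)
    num1 = gaussian_filter(h * I, sigma)
    den1 = gaussian_filter(h, sigma)
    num2 = gaussian_filter((1.0 - h) * I, sigma)
    den2 = gaussian_filter(1.0 - h, sigma)
    # Guard the division so zero denominators become NaN, not warnings.
    f1 = np.where(den1 > 0, num1 / np.where(den1 > 0, den1, 1.0), np.nan)
    f2 = np.where(den2 > 0, num2 / np.where(den2 > 0, den2, 1.0), np.nan)
    return f1, f2
```

In a full evolution loop, F_local would then be assembled from NaN-masked weighted averages of |I(x) − f_i(y)|², and the update of φ multiplied by `delta_eps(phi)` so that only a narrow band around the interface moves.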