
Automated intraretinal segmentation of SD-OCT images in normal and age-related macular degeneration eyes

Open Access

Abstract

This work introduces and evaluates an automated intra-retinal segmentation method for spectral-domain optical coherence tomography (SD-OCT) retinal images. While quantitative assessment of retinal features in SD-OCT data is important, manual segmentation is extremely time-consuming and subjective. We address challenges that have hindered prior automated methods, including poor performance with diseased retinas relative to healthy retinas, and data smoothing that obscures image features such as small retinal drusen. Our novel segmentation approach is based on the iterative adaptation of a weighted median process, wherein a three-dimensional weighting function is defined according to image intensity and gradient properties, and a set of smoothness constraints and pre-defined rules is considered. We compared the segmentation results for 9 outlines associated with intra-retinal boundaries to those drawn by hand by two retinal specialists and to those produced by an independent state-of-the-art automated software tool in a set of 42 clinical images (from 14 patients). These images were obtained with a Zeiss Cirrus SD-OCT system and included healthy, early or intermediate AMD, and advanced AMD eyes. As a qualitative evaluation of accuracy, a highly experienced third independent reader blindly rated the quality of the outlines produced by each method. The accuracy and image detail of our method were superior in healthy and early or intermediate AMD eyes (98.15% and 97.78% of results not needing substantial editing) to those of the automated method we compared against. While the performance was not as good in advanced AMD (68.89%), it was still better than the manual outlines or the comparison method (which failed in such cases). We also tested our method’s performance on images acquired with a different SD-OCT manufacturer, collected from a large publicly available data set (114 healthy and 255 AMD eyes), and compared the data quantitatively to reference standard markings of the internal limiting membrane and inner boundary of the retinal pigment epithelium, producing a mean unsigned positioning error of 6.04 ± 7.83 µm (mean under 2 pixels). Our automated method should be applicable to data from different OCT manufacturers and offers detailed layer segmentations in healthy and AMD eyes.

© 2017 Optical Society of America

1. Introduction

Spectral-Domain Optical Coherence Tomography (SD-OCT) is extremely useful for imaging retinal structures and widely used in clinical practice. Until its relatively recent advent [1], the retina could be assessed only by observation of en face images, such as color fundus photographs, limited by superposition and masking of different retinal structures in the depth axis. The ability of SD-OCT to rapidly acquire three-dimensional (3D) retinal data, resolving structures in the depth axis, allows direct visualization of the retina's layered structure. Many new quantitative features extracted from SD-OCT data, such as the thickness of individual intra-retinal layers [2], size, shape, and distribution of drusen (extracellular material accumulations that typically appear between the retinal pigment epithelium (RPE) and Bruch’s membrane) [3,4], cysts and fluid-filled regions [5], and macular holes [6], are currently used or investigated as biomarkers in retinal disease diagnosis [7–9].

The accurate and reliable segmentation of retinal layers in SD-OCT scans is a fundamental problem for the identification of new quantitative features that could be useful as disease biomarkers. Manual segmentation of structures in SD-OCT cubes is laborious and challenging, since each three-dimensional scan consists of a large collection of planar B-scans (typically hundreds of images); hence, there is a need for an automated segmentation algorithm.

Much work has been done to date on developing segmentation algorithms for the retina's multi-layered structure in SD-OCT images. A variety of them are built into commercial systems where details of their designs remain undisclosed [10], making replication in independent studies nearly impossible. Some of these commercial methods also require human interaction to initiate or refine the segmentation results [11–13], which can be time-consuming. In addition, recent publications have reported substantial errors in their results [14–17]. A number of experimental algorithms have also been proposed by research groups, either operating on individual two-dimensional scans [18,19] or taking advantage of the three-dimensional nature of the acquired data, considering a priori determined smoothness, continuity and layer interaction constraints. Much current research in intra-retinal structure quantification employs segmentation methods based on a graph-theory approach. A general graph-based approach for the segmentation of surfaces was proposed in [20], with applications to volumetric OCT data in [21–24] (an implementation for segmentation of retinal boundaries is available for download in reference [25]). Other current work takes advantage of machine learning techniques trained on manually labelled examples [26], possibly including graph-based refinements [27].

Although these and other automated or semi-automated segmentation methods have been applied to SD-OCT images [28–35], they have a number of limitations. First, most segmentation algorithms fail to produce acceptable results in complicated cases, such as severe retinal deformities [15–17,36] or in lower quality scans that are routinely found in clinical practice. Second, most methods were designed for use with SD-OCT images acquired by experimental systems or from a particular vendor. Third, available methods often produce outlines that smooth out important retinal pathologies. Such outlines are undesirable in conditions where accurate characterization can provide important disease biomarkers, such as for discerning individual small drusen and their volumetric and reflectivity properties in early AMD patients [3].

In this work we present a novel segmentation method with application to SD-OCT data, based on the iterative adaptation of a weighted median process, wherein a three-dimensional weighting function is defined considering image intensity and gradient properties. The framework also considers smoothness constraints and prior knowledge related to the appearance of retinal layers in SD-OCT data. Our method seeks to address the aforementioned challenges of prior algorithms, producing accurate delineations of the intra-retinal layered structure in scans acquired during clinical practice and maintaining a stable behavior in disease cases, while also producing accurate outlines that follow the particular pathologies of the disease without smoothing out important details. This method was used previously for the segmentation of the retina in hydroxychloroquine toxicity [15] and post-surgery in macular hole cases [6]. In these applications, our method's results were successful and, by visual inspection, superior to current commercial software, but details about the method and its evaluation have not yet been published. In this manuscript we provide a detailed description and implementation of the segmentation method and its quantitative and qualitative evaluation in healthy and AMD eyes of several severity grades. We compared its results on individual layer boundaries to manual segmentations drawn by two independent retinal specialists and to the Iowa Reference Algorithm OCT-Explorer (version 3.5) [25] in a sample set acquired during routine clinical practice of healthy, early, and advanced AMD eyes. We also evaluated its quantitative differences with a manually-corrected reference standard in a larger publicly available independent data set of healthy and AMD eyes, acquired in a separate institution and with an imaging system from a different vendor [37,38].

2. Material and methods

We propose the automated three-dimensional segmentation of 10 retinal boundaries in SD-OCT exams, with coordinate nomenclature, acronyms and example locations of the segmented boundaries defined in Fig. 1. Since our method aims to identify a retinal boundary in the form of a surface, that is, the separation between two retinal layers, our nomenclature could not exactly follow the consensus established by Staurenghi et al. [39] in the form of zones. The nomenclature for these boundaries has been established in the following manner throughout the text: each boundary is indicated by the common acronym used to identify one of its adjacent retinal layers, preceded by “i-” or “o-”, depending on whether the boundary indicates the innermost or the outermost limit of the layer, respectively. Our algorithm aimed to identify the following boundaries: Internal Limiting Membrane (ILM), inner boundary of the Retinal Nerve Fiber Layer (i-RNFL), outer boundary of the Retinal Nerve Fiber Layer (o-RNFL), outer boundary of the Inner Plexiform Layer (o-IPL), outer boundary of the Inner Nuclear Layer (o-INL), outer boundary of the Outer Plexiform Layer (o-OPL), inner boundary of the Ellipsoid Zone (i-EZ), outer boundary of the Ellipsoid Zone (o-EZ), inner boundary of the Retinal Pigment Epithelium (i-RPE), and outer boundary of the Retinal Pigment Epithelium (o-RPE). The differentiation between ILM and i-RNFL was defined to resolve those cases with vitreous detachment, but for most cases, such as the one in Fig. 1, both boundaries remain in the same location. A correspondence between this nomenclature and the one defined by Staurenghi et al. [39] is also indicated in Fig. 1, where each of the boundaries in this work is associated with the separation between two adjacent zones.
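For reference, the ordered set of segmented boundaries can be represented as a simple list; the sketch below (in Python, for illustration only) encodes the naming convention described above and the innermost-to-outermost ordering later used as an anatomical constraint.

```python
# Illustrative only: the ten boundaries segmented in this work, listed from
# innermost to outermost. "i-" marks the inner limit and "o-" the outer limit
# of the adjacent layer, following the nomenclature defined above.
BOUNDARIES = [
    "ILM",     # internal limiting membrane
    "i-RNFL",  # inner boundary of the retinal nerve fiber layer
    "o-RNFL",  # outer boundary of the retinal nerve fiber layer
    "o-IPL",   # outer boundary of the inner plexiform layer
    "o-INL",   # outer boundary of the inner nuclear layer
    "o-OPL",   # outer boundary of the outer plexiform layer
    "i-EZ",    # inner boundary of the ellipsoid zone
    "o-EZ",    # outer boundary of the ellipsoid zone
    "i-RPE",   # inner boundary of the retinal pigment epithelium
    "o-RPE",   # outer boundary of the retinal pigment epithelium
]
```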


Fig. 1 a) Example SD-OCT cube, comprising 3-D data. b) Example B-scan (planar images at each vertical location) within the cube, formed by a series of line-scans at each horizontal location (A-scans). c) Detail showing the location of each retinal boundary segmented by our proposed algorithm. The figure also indicates the correspondence between the nomenclature used in this work and the interface between two zones that each boundary represents, as defined in [39].


The key elements of the method are shown in Fig. 2. We considered a pre-processing de-noising step of non-local means (NL-means) filtering [40] (labeled as 1 in Fig. 2, radius of the search windows t = (14 µm axially, 82 µm horizontally), f = (6 µm axially, 35 µm horizontally) and filtering degree h = 0.008). While the most prominent noise in OCT images is multiplicative speckle noise, the raw images were first transformed using a logarithmic function, so that an additive noise model, as assumed by NL-means, is acceptable [41,42]. This filter was chosen to obtain reasonable noise reduction while maintaining sufficient structural information and layer edges, and to limit the computational cost. The window radii were smaller in the axial direction because of the higher axial resolution (more pixels are contained in a small window) and because the retinal layers are expected to extend in the horizontal direction.
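As an illustration of this pre-processing step, the following Python sketch applies NL-means to a single log-transformed B-scan. It assumes Cirrus-like pixel spacings (about 2 µm axially and 12 µm horizontally), for which the anisotropic window radii above reduce to roughly 7 and 3 pixels; scikit-image uses isotropic pixel windows, so this is only an approximation of the filtering used in our implementation, and the value of h is not guaranteed to be on the same scale as the h = 0.008 reported above.

```python
# A minimal sketch of the pre-processing step, assuming Cirrus-like pixel
# spacings (~2 um axial, ~12 um horizontal), for which the anisotropic window
# radii above (t = 14/82 um, f = 6/35 um) reduce to roughly 7 and 3 pixels.
import numpy as np
from skimage.restoration import denoise_nl_means

def denoise_bscan(bscan_raw, search_radius=7, patch_radius=3, h=0.008):
    """Log-transform a raw B-scan (multiplicative speckle -> additive noise),
    normalize to [0, 1], and apply NL-means filtering."""
    log_img = np.log1p(bscan_raw.astype(np.float64))
    log_img = (log_img - log_img.min()) / (np.ptp(log_img) + 1e-12)
    return denoise_nl_means(
        log_img,
        patch_size=2 * patch_radius + 1,  # similarity window (f)
        patch_distance=search_radius,     # search window radius (t)
        h=h,
        fast_mode=True,
    )
```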


Fig. 2 Steps in segmentation: Input data: SD-OCT cube. De-noising: (1) Non-local (NL) means filtering. Segmentation algorithm: (2) Initial ILM estimation, (3) ILM iterative segmentation, (4) differentiation of RNFL and RPE complexes, (5) initial estimation of remaining retinal layers, (6) iterative segmentation. Output: Segmented SD-OCT cube.


The algorithm is based on an iterative process that updates the segmentation to closely follow the actual location of each boundary while maintaining a smooth behavior (steps labeled 3 and 6 in Fig. 2). The core operation in the iterative process uses a 2D weighted median (WM) filter in a 3D context, adapted for the segmentation of the 3D SD-OCT data, together with a set of given parameters and constraints based on two anatomical facts about the retinal layers: (1) the known order of appearance of the layers in the axial direction, and (2) the constraint that the layers do not cross each other.

Initial estimations of the ILM boundary (labeled 2 in Fig. 2), the differentiation of the RNFL-complex and the RPE-complex (4), and the remaining nine boundaries (5) are determined to act as seeds for the iterative process. While more accurate initial estimations result in faster convergence of the algorithm, we found that an erroneous (within certain limits) initialization would also converge to an acceptable solution. For simplicity, in this section we describe the core of the segmentation algorithm (iterative segmentation), while the details of our approach to determining the initial seed estimations are included in the appendices. Figure 3 shows an example three-dimensional rendering of the initial estimation of the retinal layer boundaries and the result after the iterative segmentation step, as well as the final segmentation results in a B-scan and a transverse cut of the cube in the axial-vertical plane (called a “transverse scan” here). An additional example for the ILM is shown in Fig. 10 (Appendix A). We can observe how the core iterative process refines regions of initial depth overestimation (Fig. 10(c)) as well as regions of initial depth underestimation (Figs. 10(b) and 10(d)-(e)).


Fig. 3 a) Rendering of the initial estimation of the 10 retinal boundaries. From top to bottom: Red: ILM. Green: i-RNFL. Dark blue: o-RNFL. Magenta: o-IPL. White: o-INL. Cyan: o-OPL. Green: i-EZ. Red: o-EZ. White: i-RPE. Yellow: o-RPE. b) Rendering of the retinal boundaries after the refinement step. Axial differences between the layers are exaggerated for display purposes. c) On the left, example B-scan taken across the center of the fovea (top), and transverse scan across the center of the fovea (bottom). On the right, the same images are shown with the segmentation of the retinal boundaries after the refinement step, as indicated by the legend displayed on the far right. Note that the location of the segmented ILM boundary is not clearly visible in the B-scan and transverse scan since it has the same location as the i-RNFL boundary.


2.1. Iterative boundary segmentation based on weighted median filtering

The WM filter is a nonlinear filter that was introduced as an improved version of the median filter [43], adding flexibility and reducing edge jitter and streaking effects [44] by assigning a nonnegative weight, which controls the filtering behavior, to each position in the filter window. Here, it has been adapted for the iterative segmentation of the 3D SD-OCT data. Each retinal layer boundary is defined in the form of a surface B(x,y), describing the z coordinate location (axial depth) associated with the (x,y) coordinates (in the horizontal and vertical axes, respectively). The boundary at iteration k, B(x,y)^k, is updated given the result from the previous iteration, B(X,Y)^{k-1} (with (X,Y) indicating the subspace defined by the cube horizontal-vertical extent), and a set of filtering parameters:

\[ B(x,y)^k = \arg\min_{\theta} \sum_{n=-L}^{L} \sum_{m=-L}^{L} W\big(x, y, z^{k-1}(x,y;n,m;\beta)\big)^k \, \big| z^{k-1}(x,y;n,m;\beta) - \theta \big|, \]
\[ z^{k-1}(x,y;n,m;\beta) = B(x+n, y+m)^{k-1} - S(x+n, y+m;\beta)^{k-1} + S(x,y;\beta)^{k-1}, \]
with B(X,Y)^0 described by an initial estimation (see appendices). W(x,y,z)^k is an iteration-dependent weighting function and L indicates the window length of the WM filter. S(x,y;β)^{k-1} is a smoothing function determined from the result at the k-1 iteration with a defined smoothing factor β, acting as an offset for the k iteration. The median is calculated on the differences between the boundary location at iteration k-1 and S(x,y;β)^{k-1} to avoid the interference of overall retina inclination and curvature. In this way, the axial index z^{k-1} used for the weighting function and for the magnitude is considered as a distance with respect to this offset at the neighboring positions and then translated to the offset at the position of interest (x,y). The algorithmic solution used in this work to implement the WM filter as an optimization is reported in reference [44]. The iterative process was chosen to end when the boundary is updated by less than 2 µm in the axial direction at all locations in the horizontal-vertical plane (about the typical minimum axial pixel dimension in SD-OCT images).
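A brute-force sketch of this update step is given below, assuming B_prev and S_prev are 2-D arrays holding the boundary and its smoothed offset (in axial pixels) from the previous iteration, W is the 3-D weighting volume for the boundary at the current iteration, and L is the half-window length in pixels. The published implementation uses the optimized WM algorithm of [44] and operates in physical units; this version only illustrates the operation.

```python
# Brute-force sketch of one weighted-median update of a boundary surface.
import numpy as np

def weighted_median(values, weights):
    """Value minimizing sum(w * |v - theta|): sort values and return the one
    at which the cumulative weight first reaches half of the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    if cum[-1] <= 0:
        return np.median(values)  # no informative weights: plain median
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def wm_update(B_prev, S_prev, W, L):
    X, Y = B_prev.shape
    Z = W.shape[2]
    B_new = np.copy(B_prev)
    for x in range(X):
        for y in range(Y):
            vals, wts = [], []
            for n in range(-L, L + 1):
                for m in range(-L, L + 1):
                    xn, ym = x + n, y + m
                    if 0 <= xn < X and 0 <= ym < Y:
                        # candidate depth, referenced to the local smoothed offset
                        z = B_prev[xn, ym] - S_prev[xn, ym] + S_prev[x, y]
                        zi = int(round(float(np.clip(z, 0, Z - 1))))
                        vals.append(z)
                        wts.append(W[x, y, zi])
            B_new[x, y] = weighted_median(np.array(vals), np.array(wts))
    return B_new
```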

2.2. Weighting function definition

The iteration-dependent weighting function W(x,y,z)^k is defined in the following manner:

\[ W(x,y,z)^k = r\!\left( \alpha \, \left| \nabla I(x,y,z) \right| \, \mathrm{sgn}\!\left( I_z(x,y,z) \right) \right) M(x,y,z)^k, \]
where I(x,y,z) refers to the reflectivity recorded at a discrete voxel position. M(x,y,z)^k represents a masking volume in the 3D subspace determined by the cube extent, defined to indicate the anatomical constraints within the retinal boundaries. This masking volume is set independently for each boundary and iteration, with a value of 1 for the voxels within defined upper and lower boundary limits in the axial direction and 0 otherwise (boundary-based upper and lower limits are described in Table 1). The parameter α is set to indicate whether the boundary should present an increase in reflectivity (dark-to-light transition) in the axial direction, assigned with α = +1, or a decrease in reflectivity (light-to-dark transition), assigned with α = -1. ∇ indicates a three-dimensional gradient operator, computed here with a 3D Sobel operator kernel of 25 µm extent. The axial derivative I_z is computed using the same Sobel kernel, sgn(t) is the signum function, and r(v) is the ramp function, defined as:
\[ r(v) = \begin{cases} v, & \text{if } v \ge 0 \\ 0, & \text{if } v < 0, \end{cases} \]
The reasoning behind this choice of weighting function is that we want the filtering result to follow the points where the gradient is most prominent, within the limits allowed by the defined anatomical constraints. The larger the magnitude of the gradient, the larger the weight and the stronger the tendency towards that particular depth. The signum and ramp functions, together with the parameter α, allow selecting only the positive or negative gradients in the axial direction, as each layer boundary presents a particular derivative sign.
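A compact sketch of this weighting function is shown below, assuming I is the de-noised volume indexed as I[x, y, z] with z the axial (depth) index, M is the 0/1 masking volume derived from Table 1, and a standard Sobel kernel standing in for the 25 µm gradient operator.

```python
# Sketch of the weighting function above; alpha = +1 for a dark-to-light
# transition or -1 for light-to-dark.
import numpy as np
from scipy import ndimage

def weighting_volume(I, M, alpha):
    gx = ndimage.sobel(I, axis=0)
    gy = ndimage.sobel(I, axis=1)
    gz = ndimage.sobel(I, axis=2)              # axial derivative I_z
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)  # |grad I|
    signed = alpha * grad_mag * np.sign(gz)    # keep the desired edge polarity
    return np.maximum(signed, 0.0) * M         # ramp r(.) and anatomical mask
```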


Table 1. Segmentation parameters chosen. Boundaries are indicated with the same acronyms as throughout the text. Coordinates (x,y) for each boundary are not indicated for simplicity.

2.3. Smoothing function definition

The smoothing S(x,y;β)^k is introduced to reduce the staircase effect [45] that appears when the WM filter is repeated over a large number of iterations (due to the nature of the median operation), and it greatly improved the quality of the segmentation results. Its definition is based on an idea similar to the “flattening” usually done in SD-OCT segmentation algorithms [19,22,35,46], in which depth distances are considered with respect to estimations of the ILM or RPE layers, and not with respect to the axial axis origin at the top of the complete cube. A “flattening” step relative to the ILM layer usually helps the segmentation of inner retinal layers in healthy eyes, since they tend to follow a similar curvature. However, in eyes with severe abnormalities, the retinal layers can take curvatures very different from the ILM due to, for example, drusen, fluid accumulations, or RPE detachment. In this work, we considered S(x,y;β)^k a smoothed version of the boundary B(x,y)^k, computed using a diffusion regularization fitting operation with a smoothing factor of β. The fitting tools employed in this characterization are available online for download [47].
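As a simple stand-in for this smoothing (the actual implementation uses the diffusion-regularization fitting tools of [47]), a Gaussian smoothing of the boundary surface can play the same role for illustration; the width parameter below is hypothetical and not numerically equivalent to β.

```python
# Stand-in for S(x, y; beta): Gaussian smoothing of the boundary surface.
from scipy.ndimage import gaussian_filter

def smooth_boundary(B, sigma_px=(20, 5)):
    """Smoothed surface used as the per-iteration offset."""
    return gaussian_filter(B, sigma=sigma_px, mode="nearest")
```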

2.4. Choice of parameters and masking volumes

The defined boundary dependencies and parameter values are specified in Table 1. These were set partly considering the boundary order of appearance in SD-OCT images, common to every eye, and partly heuristically through experimentation on a separate data set of eyes from those evaluated in this work. A mask indicating the approximate en face extent (location in the horizontal-vertical plane) of the foveal pit was also generated to resolve the possible intersection of inner retina boundaries at such locations and was considered in the masking volumes M(X,Y,Z)^k. The process we followed to obtain this approximation is described in Appendix D, although other approaches are possible. In Table 1, OFOV and IFOV indicate the regions outside and inside the estimated foveal pit mask, respectively. The window length in the filtering was L = 0.1 mm for all the segmented layers in both horizontal and vertical directions. The smoothing factor was set to β = 6 mm for the inner retina boundaries (ILM, i-RNFL, o-RNFL, o-IPL, o-INL, o-OPL) and β = 2 mm for the outer retina boundaries (i-EZ, o-EZ, i-RPE, o-RPE), as we observed larger variations in the axial direction for the latter due to the presence of numerous abnormalities in eyes diagnosed with AMD, such as drusen and geographic atrophy (GA). As displayed in Fig. 2 and by the dependencies indicated in Table 1, the ILM boundary was determined independently from the rest of the boundaries, without cross-boundary dependencies, while the remaining 9 boundaries were updated in parallel in the iterative process and presented cross-boundary dependencies.
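For illustration, the masking volume for one boundary and iteration can be built from two limiting surfaces as in the sketch below; upper and lower are assumed to be 2-D arrays of axial pixel indices derived from the boundary dependencies and offsets of Table 1.

```python
# Sketch of building the masking volume for one boundary and iteration:
# voxels between the two axial limits receive weight 1, all others 0.
import numpy as np

def masking_volume(upper, lower, n_axial):
    z = np.arange(n_axial)[None, None, :]  # axial index grid
    return ((z >= upper[..., None]) & (z <= lower[..., None])).astype(np.float64)
```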

3. Experiments

3.1. Data collection

The research was approved by the institutional Human Subjects Committee and followed the tenets of the Declaration of Helsinki. We used two different SD-OCT data sets to evaluate our segmentation method, both quantitatively and qualitatively, as indicated in Table 2: the first data set consisted of scans from eyes of different AMD severity and potential segmentation difficulty, and a second, larger data set was collected from independent previous work [38] in which reference standard markings were available.


Table 2. Distribution of exams and outlines collected in the evaluation data sets. “Automated” refers to the method introduced here, “Iowa” refers to the Iowa Reference Algorithm software.

For the first data set (Data set 1), we evaluated our method on 42 representative images collected from a set of 14 macular SD-OCT cubes (three B-scans per cube, chosen as the one located directly at the fovea center and the two at ± 0.5 mm from the fovea center), acquired using a Cirrus HD-OCT instrument (Carl Zeiss Meditec, Inc., Dublin, CA). The images were organized in three different groups, as indicated in Table 2: twelve images presented clinically without known abnormalities (Group I), fifteen images presented with drusen and were diagnosed with early to intermediate AMD (Group II), and the remaining fifteen images presented areas of intraretinal fluid accumulation and RPE detachment and were diagnosed with advanced exudative AMD (Group III). Diagnosis and classification into these groups were performed by a fellowship-trained vitreoretinal specialist with more than 9 years of clinical experience according to clinical practice standards. The SD-OCT data was randomly chosen from scans acquired during clinical practice from eyes in each of the three different groups (from patients under author TL's care). We segmented the 3D SD-OCT cubes using our proposed method and obtained hand-drawn outlines made by two independent retinal specialists (a fourth-year resident and a retina fellow) for each of the ten boundaries segmented by our method in the 42 images (a total of 420 outlines by each reader, 378 of them evaluated), drawn directly in each B-scan using image processing software (Photoshop, Adobe Systems Inc.; Snagit 11, TechSmith Corporation). We also collected, for comparison, the segmentation results produced by the Iowa Reference Algorithm OCT-Explorer (version 3.5, Retinal Image Analysis Lab, Iowa Institute for Biomedical Imaging, Iowa City, IA), a common research software tool available for download [25].

As a larger second data set (Data set 2), we used the SD-OCT scans collected by Farsiu et al. [38], which are available online for download [37]. This collection consists of a total of 384 SD-OCT macular scans along with manually corrected reference standards for three retinal layer boundaries, semi-automatically drawn by their proposed segmentation algorithm [46] and later corrected manually by experienced graders (certified by the Duke Advanced Research in Spectral Domain OCT Imaging laboratory): the inner limiting membrane, the inner boundary of the retinal pigment epithelium (the site where most AMD related conditions are visible in SD-OCT), and the boundary of Bruch’s membrane, all outlined within a 5 mm diameter circle around the center of the fovea. 115 of the scans were collected from healthy elderly patients, and the remaining 269 scans were collected from patients presenting with non-exudative AMD at different severity stages. Upon review of the collected cases, we excluded 15 of these scans (1 from healthy eyes, 14 from AMD eyes) due to low signal strength, which made the visualization of the layer boundaries nearly impossible. No other exclusion criterion was considered.

The Cirrus HD-OCT instrument employed to acquire the scans in Data set 1 produces image volumes with dimensions of 6 (horizontal) × 6 (vertical) × 2 (axial) mm. The voxel dimensions in the horizontal, vertical, and axial directions were approximately 12, 47, and 2 µm, respectively (512, 128, and 1024 voxels in each direction). The raw data produced by the SD-OCT instrument was imported into the vendor’s proprietary software for analysis and reconstruction (Cirrus Research Browser, version 6.2.0.3, Carl Zeiss Meditec, Inc.) and later exported to a format accessible by Matlab (The MathWorks Inc., Natick, MA), where subsequent data processing was implemented. The SD-OCT scans in Data set 2 were acquired using Bioptigen, Inc. (Research Triangle Park, NC) imaging systems at four different clinical sites [38]. Each SD-OCT cube in this second data set was formed of 1000 (horizontal) × 100 (vertical) × 512 (axial) voxels, covering dimensions of approximately 6.7 (horizontal) × 6.7 (vertical) × 1.6 (axial) mm. The voxel dimensions in the horizontal, vertical, and axial directions in this second data set were approximately 6.7, 67, and 3.1 µm, respectively.

3.2. Quantitative comparison of boundary positioning between automated methods, manual markings and the semi-automated reference standard

For Data set 1, we compared quantitatively the boundary positioning differences produced by our proposed automated method (Aut.), the Iowa Reference Algorithm (Iowa), and the manual outlines drawn by the two readers (Reader1 and Reader2) for nine retinal boundaries: i-RNFL, o-RNFL, o-IPL, o-INL, o-OPL, i-EZ, o-EZ, i-RPE and o-RPE (a total of 378 boundary outlines for each method in 42 images). The ILM boundary was excluded since its location coincided with the i-RNFL boundary for the cases evaluated, as explained later in the results section. Six comparisons were made for each boundary evaluated: Reader1-Reader2 (inter-reader agreement), Aut.-Reader1, Aut.-Reader2, Iowa-Reader1, Iowa-Reader2, and Aut.-Iowa.

For Data set 2, the results of our automated method were compared to the outlines available in the manually corrected reference standard that corresponded to retinal boundaries outlined by our method: the ILM and the i-RPE, outlined in a total of 36900 images [37,38]. The Bruch’s membrane boundary, marked semi-automatically in the second data set, was not evaluated since it was not outlined by the automated algorithm. One comparison was made for each boundary evaluated: Aut.-Reference Standard.

The difference in the segmentation of a boundary by two different methods was quantified using the mean unsigned positioning error (MUE) of their axial locations. For a particular boundary B(x,y), outlined by two different methods or readers as B_1(x,y) and B_2(x,y), the MUE was defined by:

\[ \mathrm{MUE}(B_1, B_2) = \frac{\sum_{x=1}^{I} \sum_{y=1}^{J} \left| B_1(x,y) - B_2(x,y) \right|}{IJ}, \]
where I is the total number of evaluated A-scans (discrete horizontal locations) and J is the number of evaluated B-scans (discrete vertical locations). For each of the 3 groups in Data set 1, the MUE values collected for each of the 6 possible comparisons between the 4 segmentation methods evaluated were compared using analysis of variance (ANOVA) to test for statistically significant differences among the comparisons overall and pairwise between all possible comparison pairs (multiple comparison). Differences were considered statistically significant if the resulting p-value was <0.01.
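The MUE itself reduces to a mean absolute difference of two surfaces; a direct implementation is sketched below, assuming B1 and B2 are 2-D arrays of axial boundary positions on the same A-scan/B-scan grid, given in pixels and converted to microns with the axial pixel size.

```python
# Direct implementation of the MUE defined above.
import numpy as np

def mean_unsigned_error(B1, B2, axial_um_per_px=1.0):
    return float(np.mean(np.abs(B1 - B2))) * axial_um_per_px
```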

3.3. Qualitative accuracy evaluation of automated methods and manual markings

The quantitative comparison of automated methods and manual markings highlights measurable differences between them, but it may not be a measure of a method’s accuracy, due to the difficulty of acquiring a perfectly accurate gold standard. Thus, an independent reader qualitatively evaluated the segmentation results produced by our automated method, the Iowa Reference Standard, and the two readers in Data set 1. This reader (a vitreoretinal specialist with more than 9 years of clinical experience) was masked and shown 378 sets of four images. The four images in each set displayed, in random blinded order, the same B-scan with the overlaid outline of a particular retinal boundary as segmented by our automated method, the Iowa Reference Standard, and each of the two readers, respectively. An unedited version of the B-scan (without markings) and the acronym of the retinal boundary that the 4 outlines aimed to identify were also displayed in each set (the acronym was included since only one boundary was shown at a time for each of the 4 segmentation versions). The masked reader assigned a score from 1 to 4 to each of the presented images within each set, rating their accuracy from most accurate to least accurate (failure to produce any result), with score meanings shown in Table 3. The accuracy scores collected for each of the 4 segmentation methods were compared using analysis of variance (ANOVA) to test for statistically significant differences among the groups overall and pairwise between all possible group pairs (multiple comparison). Differences were considered statistically significant if the resulting p-value was <0.01.
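A minimal sketch of this statistical comparison is shown below; it assumes accuracy scores grouped by method and uses Tukey's HSD as one common choice for the pairwise multiple comparison (the specific post-hoc procedure is not prescribed here).

```python
# Hedged sketch: one-way ANOVA across methods, then a pairwise comparison.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_methods(scores_by_method, alpha=0.01):
    """scores_by_method: dict mapping method name -> array of accuracy scores."""
    groups = list(scores_by_method.values())
    _, p_overall = f_oneway(*groups)
    labels = np.concatenate([[name] * len(s) for name, s in scores_by_method.items()])
    values = np.concatenate(groups)
    pairwise = pairwise_tukeyhsd(values, labels, alpha=alpha)
    return p_overall, pairwise
```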


Table 3. Meaning of accuracy rating scores assigned in qualitative evaluation, from 1 (most accurate) to 4 (failure to produce any result).

4. Results

Upon review of the segmentation results, the automated segmentation method proposed here produced satisfactory results in all of the evaluated cases (Fig. 4, Fig. 5, Fig. 6, and Fig. 7 and Table 4 and Table 5). The Iowa Reference Standard software produced satisfactory results in the healthy cases where it was evaluated (Group I in the first data set evaluated), satisfactory although somewhat smoother results in the early to intermediate AMD cases (Group II), and it failed to produce results in any of the severe AMD cases (Group III), where the segmentation software crashed. As an example, Fig. 4 displays the results in a sample image from each group in Data set 1. An original B-scan without markings and the outlines produced by each method are shown. Since patients with vitreous detachment were not included in the analysis, the i-RNFL boundary location coincided with the ILM boundary location for the cases evaluated. Our method produced nearly identical locations for these two boundaries (as we can see in the B-scans shown in Fig. 3 and Fig. 4, where the location of the ILM boundary is practically completely masked by that of the i-RNFL boundary), while the manual readers drew the ILM boundary with a stable small offset inner to the i-RNFL boundary (as observed in Fig. 4). We therefore decided to not consider the ILM boundary in the quantitative and qualitative evaluation of the first data set, since the comparison would only represent a bias related to a reader’s ability to draw two layers in the same location.


Fig. 4 Example results in a sample SD-OCT B-scan from each group in Data set 1. The first, second and third rows show the results for a scan in group I (healthy), group II (early AMD), and group III (advanced AMD), respectively. First, second, third, fourth, and fifth columns: Original B-scan without outlines, with outlines resulting from our automated segmentation method, with outlines marked by the first and second reader, and with outlines resulting from the Iowa Reference Standard software, respectively (results for group III are not displayed since the Iowa software failed to produce any result). The labeling of each of the indicated boundaries is shown in the legend in the top right.



Fig. 5 Example results in an eye diagnosed with early AMD from Data set 2. First column: Original B-scan (top) and transverse scan (bottom) across the center of the fovea. Second column: Results produced by our automated method in the B-scan (top) and transverse scan (bottom). The labeling of the segmented boundaries is indicated by the legend on the right, from top to bottom. Note that the location of the ILM boundary (top red line) is not clearly visible because it has the same location as the i-RNFL boundary (top green line). Third column: Independent semi-automated outline locations of the ILM and i-RPE boundaries in the B-scan (top) and transverse scan (bottom).



Fig. 6 Example results in an eye diagnosed with advanced AMD from Data set 2. First column: Original B-scan (top) and transverse scan (bottom) across the center of the fovea. The yellow circles in the bottom image indicate regions where substantial differences between the automated method and reference standard were observed. Second column: Results produced by our automated method in the B-scan (top) and transverse scan (bottom). The labeling of the segmented boundaries is indicated by the legend on the right, from top to bottom. Note that the location of the ILM boundary (top red line) is not clearly visible because it has the same location as the i-RNFL boundary (top green line). Third column: Independent semi-automated reference standard of the ILM and i-RPE boundaries in the B-scan (top) and transverse scan (bottom).



Fig. 7 Results per group in quantitative comparisons among methods. Color codes are indicated in the legend and the error bars indicate standard deviation. Comparisons in group III of Data set 1 involving the Iowa Reference Standard software are missing since the software failed to produce any results. a) Comparison of axial MUE mean and std per group among the methods evaluated in Data set 1 (Cirrus scans). b) Axial MUE mean and std per group between our automated method and manually corrected reference standard in the Data set 2 (Bioptigen scans).



Table 4. Pairwise segmentation differences in terms of mean unsigned axial positioning error for the methods evaluated in Data set 1. The mean and std (in parentheses) values are shown in microns. *Comparisons involving the Iowa Reference Standard software were limited to scans in groups I and II since the software failed to produce any results for scans in group III.


Table 5. Segmentation differences in terms of axial MUE between our automated method and the independent semi-automated reference standard (Aut.-Reference Standard) in Data set 2. The mean and std (in parentheses) values are shown in microns.

Figures 5 and 6 show example results from Data set 2 in an eye presenting with early AMD and an eye presenting with advanced AMD, respectively. Both figures display the automated segmentation and the independent semi-automatically drawn reference markings collected from Farsiu et al. [37,38].

4.1. Quantitative comparison of boundary positioning between automated methods, manual markings and the semi-automated reference standard

The axial MUE measurements between the boundary outlines evaluated in Data set 1 are summarized in Table 4. Comparisons involving the Iowa Reference Standard software were limited to scans in groups I and II since the software failed to produce any results for scans in group III. A comparison of axial MUE mean and std per group, considering all boundaries, is also shown in Fig. 7(a). Overall, the 6 comparisons had statistically significant differences for Groups I and II (ANOVA p-value <0.01). The MUE values showed lower overall differences between our method and each of the two readers (about 10 and 7 pixels within the span of 1024 pixels, respectively, shown in microns in Table 4 and Fig. 7(a)) than between the two readers (about 11 pixels), and differences were even smaller when comparing our method to the Iowa software (about 4 pixels, considering only healthy and early or intermediate AMD cases). In a pairwise MUE comparison for groups I and II, the MUE between our method and reader 2 was significantly different from that between the two readers and from that between our method and reader 1, while the MUE between our method and reader 1 was not significantly different from that between the two readers. The MUE between our method and the Iowa software was significantly smaller than in the rest of the comparisons. Differences between the Iowa software and each independent reader were overall smaller but very comparable to our method considering the measured standard deviation. It is important to note that the overall MUE values reported in Table 4 for comparisons with the Iowa software do not include the scans in group III, where higher mean and standard deviation MUE were observed for the rest of the comparisons. Significance was not found when comparing the MUE between our method and a reader to the MUE between the Iowa software and the same reader for Groups I and II. Similar results hold independently for each group (healthy, intermediate, and advanced disease), with the exception of scans in Group III (advanced disease), where the comparison between the automated method and the first reader yielded slightly higher differences, but still well within the inter-reader difference ranges, and where the Iowa software failed to produce results. The 3 comparisons included in Group III did not have statistically significant differences (ANOVA p-value = 0.38). Evaluating each boundary independently, we can observe that higher agreement between the two readers corresponded to lower differences between the automated method and both readers. Although these results do not guarantee accuracy in all boundaries outlined, it seems that the differences between the automated method and at least one of the manual readers are accentuated in cases of reader disagreement (where at least one reader marking deviates from the actual location).

The axial MUE values between our segmentation method and the collected semi-automated reference standard in Data set 2 are summarized in Table 5. The table shows the mean and std (in microns) values for the two boundaries where the reference standard is available, within healthy and AMD eyes. The average axial MUE values for both boundaries within each group (healthy, AMD, or considering all the scans) are also displayed in Fig. 7(b). The observed axial MUE had a mean of 6.04 microns across boundaries and scans evaluated.

4.2. Qualitative accuracy evaluation of automated methods and manual markings

The qualitative accuracy scores assigned by an expert third reader to the manually drawn outlines, those produced by our automated method, and those produced by the Iowa Reference Standard software are shown in Fig. 8(a). The figure displays the mean and standard deviation values across boundaries per group and overall across groups (on a 1-4 range, with lower values indicating higher accuracy). Figure 8(b) displays the percentage of correct outlines for each method and group, averaged across boundaries. We considered an outline to be correct if the assigned visual score was 1 or 2, i.e., not needing any substantial editing in the opinion of the third reader. The percentage of correct outlines for each particular boundary, group and method is also given in Table 6. Note that the Iowa Reference Standard software values for Group III are not shown since the software failed to produce results for that group (all 0%). Overall, the accuracy scores collected for the four methods had statistically significant differences (ANOVA p-value <0.01). In a pairwise comparison, our automated method (all groups in Fig. 8(a), score of 1.44 ± 0.7) provided significantly better accuracy than the two readers (2.32 ± 0.77 and 2.25 ± 0.74, for the first and second readers, respectively) and the Iowa software (2.69 ± 1.15). The differences between the two manual readers were not statistically significant, and the Iowa software had statistically significantly worse scores than the other three methods (mainly caused by the errors reported in Group III). Our automated method also produced an overall rate of 87.57% visually correct outlines, higher than the manual markings or the independent software. The percentage of correct outlines was also higher for our method in each individual group evaluated.


Fig. 8 Comparison of visual accuracy between outlines determined by the first reader (dark blue), the second reader (cyan), our proposed automated method (yellow), and the Iowa Reference Standard software (red). a) Mean visual accuracy scores per group and overall across groups. The error bars indicate standard deviation. b) Percentage of correct outlines, indicating that no substantial editing was needed (i.e. a score of 1 or 2) in the opinion of the third independent reader, per group and averaged across groups.



Table 6. Percentage of correct outlines (those receiving accuracy scores of 1 or 2 from an independent reader) from the first manual reader (R1), second manual reader (R2), our automated method (Aut.), and the Iowa Reference Standard software (Iowa) for each group and boundary. Values for the Iowa Reference Standard software in Group III are not shown since it failed to produce results in this group (all 0%).

5. Discussion and conclusions

In this work we describe a new segmentation algorithm for intraretinal layers in SD-OCT images, which automatically outlines the location of ten retinal boundaries. The method is based on an iterative process and could potentially be applied to other segmentation tasks. Our approach is novel in terms of technical innovations and in terms of functional utility. In terms of technical innovation, our method uses three-dimensional gradient information to generate a weighting function defined in an iterative median filtering operation. This weighting function is recalculated at each iteration and for each boundary, following the particular boundary evolution. The iterative median operation and choice of weighting function seek a solution with continuity throughout regions of the SD-OCT scan, favoring neighboring regions with high gradient values, while also allowing certain abrupt variations that resolve possible retinal abnormalities or scan misalignment. The 3D nature of the algorithm also helps resolve the location of boundary regions not easily seen in a single B-scan. A set of pre-defined rules based on anatomical knowledge of the retinal layers is also considered to guarantee a stable solution.

In terms of functional utility, our method produced accurate retinal layer outlines in the clinical images of healthy patients and patients diagnosed with early or intermediate AMD, and it preserved retinal details, tracing retinal abnormalities such as small drusen while guaranteeing stability in the segmentation. This is an advantage over other known methods, which typically produce results that miss subtle changes related to disease stage (such as accurate delineation of the RPE in regions containing drusen) in order to favor a stable solution or processing speed. A direct comparison to a widely used state-of-the-art research software package (Iowa Reference Algorithm OCT-Explorer version 3.5) revealed that our method produced results of better quality. The outlines produced by our method in the advanced disease cases were of good quality (although substantially lower than in healthy and early AMD patients), especially considering the difficulty of the task for these particular images, while the comparison software failed to produce any results (although this may be a limitation of the software version used in this work). Our method also shows promising results in its application to images acquired with different OCT manufacturers' systems, judging by the results on a large publicly available independent data set of 114 healthy and 255 AMD eyes acquired with a different manufacturer's system.

Overall, the axial location of nine boundaries in Data set 1 yielded smaller differences between our method and each of the two readers than observed between the two readers (Table 4 and Fig. 7(a)), and differences were even smaller when comparing our method to the Iowa software (considering only healthy and early or intermediate AMD cases). The high inter-reader differences highlight the segmentation difficulty of the images tested in Data set 1 and also show the stability advantages of using an automated segmentation method. Generating multiple manual markings of retinal boundaries in SD-OCT scans is a difficult and time-consuming process wherein a reader’s judgment and skill play an important role. While the two readers involved here had sufficient anatomical knowledge and experience, and there were no time constraints to draw the boundaries, each reader had a substantial amount of work to carry out (drawing a total of 420 outlines in 42 images from clinical studies of varying image quality), which could certainly explain the inter-reader variation we observed. Since there are no absolute gold standard markings, a quantitative comparison to the manual outlines drawn in Data set 1 is not a reliable measure of accuracy (see the possible errors produced in the delineations by the two readers in Fig. 4 and the low number of correct outlines in Table 6). Therefore, we evaluated the accuracy of segmentations subjectively by using a third highly experienced independent reader who scored the quality of the segmentations. The boundaries produced by our automated method provided significantly better accuracy (p<0.01), as judged by this third reader, than those produced by the two readers and the Iowa software (Fig. 8(a)). The Iowa software produced results that seemed acceptable in the early and intermediate AMD eyes, but it was limited in its ability to accurately follow slight differences, especially within the OS boundary. Moreover, the Iowa software failed to produce any result in any of the scans in the third group (severe AMD); it terminated in an error, while our method produced acceptable segmentations (with average ratings between “good” and “optimal”) considering the difficulty of the task. It should be noted that the Iowa software uses a limited set of images for training purposes, which may not be in agreement with the images used in this study, therefore compromising its results.

The percentage of correct outlines segmented by our method was also stable across different boundaries (Table 6), in contrast to the results produced by the other options tested. Our method also produced a higher percentage of correct outlines in each independently tested group. However, although the percentage of healthy and early to intermediate AMD cases showing accurate results was close to 100%, this number decreased substantially in the advanced disease cases. This indicates that such cases should be reviewed for correctness before deriving any conclusion from the results of their automated processing. It is important to note that the accuracy of our method in the early to intermediate AMD cases was superior across all boundaries to the automated method we compared against, especially in the outer retina boundaries, which may be a result of the limitations of the comparison software as mentioned above. When evaluating only the outlines in which both readers produced visually correct scores (110 outlines), the mean absolute differences between our method and the manual readers were 11.22 microns (11.73 std.) and 10.79 microns (10.13 std.), about 6 pixels, for readers 1 and 2, respectively, and the mean difference between the two readers was 8.96 microns (5.79 std.), about 5 pixels. These numbers show that, for those outlines where both readers made accurate markings (representing the inherent variability of what is judged as an accurate marking by an expert), the differences between our method and the readers were within the ranges observed between the two readers.

We believe that our segmentation method can be generalized to other data sets and other SD-OCT acquisition systems, as evaluated in the second larger independent data set (Data set 2), collected from a separate institution [37]. The location of the automated segmentation seems accurate and resembles that of the reference standard (Figs. 5-6), with the only substantial differences observed in small regions of the ILM boundary (as shown in the transverse scan in Fig. 6, indicated by yellow circles). These differences were mostly due to the high misalignment between B-scans observed in Data set 2. Although our method automatically corrected this misalignment, a small number of scans still presented slight difficulties.

The reference standard markings in Data set 2 were generated by the method proposed by Chiu et al. [19] followed by manual corrections made by an expert reader. Although that method can outline seven retinal boundaries, reference standard markings for only three of these boundaries were available for download. Therefore, the comparisons presented here were limited to the two boundaries produced by our method that were also outlined in the downloaded data (the ILM and i-RPE). When evaluated against manual markings, Chiu's method has been reported to produce its highest differences in the o-OPL boundary, with a 5.3 micron mean difference, while the rest of the boundaries had mean differences of less than 5 microns. Importantly, in their evaluation, only normal eyes with high imaging quality were considered, and the bias found between their automated method and the manual outlines was corrected before calculating their differences. Although our results are not directly comparable to the evaluation results of Chiu et al., the differences displayed in Table 5 (average differences of 6.04 microns, with a standard deviation of 7.83 microns, and with the highest differences observed for the i-RPE boundary in AMD patients, 7.07 microns) are within the ranges reported in their study. It is also important to note that the larger differences observed in the first data set may also be an effect of the higher axial resolution of the Cirrus HD-OCT system over the Bioptigen system, as larger pixel dimensions may help two different readers or segmentation techniques agree on the same pixel location.

There are several limitations to our work. Segmentation accuracy is limited in extensive regions where a particular boundary is not visible due to a lack of layer thickness, as can be observed for the o-RNFL boundary in the far-left region of the B-scan shown in Fig. 3(c). The boundary location in such regions is estimated by interpolation using neighboring locations where gradients are most prominent. This interpolation produces adequate results in small isolated regions (such as obscured regions caused by blood vessels), but in more extensive regions the results may be sub-optimal. For SD-OCT scans of normal and AMD eyes, however, this behavior is only observed for part of the outer temporal region of the o-RNFL boundary, where there is no retinal nerve fiber layer thickness, which also explains the lower accuracy observed for this particular boundary. Nevertheless, we found that the pre-defined set of smoothing parameters affecting this interpolation produced satisfactory, though not perfect, results.

Although the location of the ILM and the inner boundary of the RPE (the site where the most common abnormalities related to AMD can be observed in SD-OCT) was evaluated in a large number of eyes in Data set 2 (369 3D cubes), the rest of the boundaries were only evaluated in Data set 1 (14 3D cubes). However, the evaluation in Data set 1 considered 42 images and 378 outlines for each evaluated method. Each of the two manual readers had to draw 378 continuous outlines, while a third independent reader assigned independent scores to each of the 1512 outlines generated by the manual readers and the two independent methods tested in this work. Obtaining and judging manual outlines in SD-OCT 3D data is a difficult and time-consuming process, and it is also noted as a limitation in previous work, where a similar or lower number of cubes was evaluated [22]. Comparison of the ILM and inner boundary of the RPE could be expanded by including the results obtained for 20 cubes using the method proposed by Chiu et al. [46], which are available for download alongside manual markings made by two readers [48]. Further work can also include comparison to a number of other retinal layer segmentation methods that are available for download [49–51].

Another limitation of this work is the high computer memory and time requirements of the iterative weighted median filtering process: the segmentation of 10 boundaries in a full-resolution SD-OCT cube takes approximately 2 hours (performed on a personal computer with a 3.40 GHz processor and 8 GB RAM). While this translates to approximately 6 seconds per boundary per image (128 B-scans in a cube), the estimation of each boundary in each B-scan depends on the other boundaries and on the boundary locations in adjacent B-scans, which requires processing the whole cube. The long processing time may be reduced by implementing our method in C/C++, taking advantage of graphics processing unit computing, and not processing the SD-OCT data at full resolution. Previous segmentation methods have reported faster processing speeds by using optimized software and by using a more drastic internal smoothing function or processing lower resolution versions of the SD-OCT scans, whose results can later be interpolated to the original resolution. A version of the method presented here operating on lower resolution versions of the collected scans (implemented in Matlab) was able to resolve the 10 retinal boundaries in approximately 10 minutes. While our interest in the work presented herein is to resolve the location of the boundaries on a pixel-by-pixel basis at the actual resolution of the collected data, consideration of such speed-enhancing techniques may provide a viable solution in settings where faster results are needed. Whether the smoother results maintain the necessary accuracy would need to be investigated. The automated segmentation of a large number of SD-OCT scans can also be parallelized, enabling the processing of a large number of scans in a shorter time than serial execution, as sketched below. Our group is currently working on optimizing the processing speed of the presented method, and we will provide online access to a segmentation tool for the research community.
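As a minimal illustration of such parallelization, assuming a hypothetical segment_cube entry point that runs the full 10-boundary segmentation on one SD-OCT cube file:

```python
# Illustrative only: processing many cubes in parallel with a process pool.
from concurrent.futures import ProcessPoolExecutor

def segment_all(cube_paths, segment_cube, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(segment_cube, cube_paths))
```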

While the WM adaptation and its parameters were set partly based on common knowledge and partly heuristically through experimentation on a separate data set of eyes from those evaluated in this work, instead of resulting from an automated optimization or machine-learning method, the results of our approach were satisfactory. Future work includes the adaptation of such parameters in a machine-learning approach where a set of images will be separated for training purposes. We hypothesize that using a machine learning approach to learn the behavior of retinal data and optimize such parameters could benefit the segmentation of complicated cases where our current approach had the most difficulties. However, machine learning methods often require large amounts of training data to be representative of retinal pathology heterogeneity, and considering the difficulties of generating large annotated data sets, there is no guarantee that a machine learning approach could be easily adopted to optimize all parameters and produce better results than the ones presented here.

Our group has been investigating the correlation of quantitative SD-OCT measurements with prediction of progression in early and intermediate AMD patients [3]. A reliable automated segmentation approach, such as that presented herein, is crucial to enabling such work. Our work has also shown results superior to other state-of-the-art clinical software in eyes affected by hydroxychloroquine toxicity [15] and post-surgical macular hole eyes [6]. We plan to expand our segmentation method to SD-OCT scans acquired in the region surrounding the optic disc as well as to eyes presenting with other retinal pathologies, such as glaucoma and retinitis pigmentosa. Retinal segmentation of severely affected eyes from these and other retinal diseases can be more challenging, since some of the boundaries may be difficult to visualize, such as the o-INL, o-OPL, i-EZ and o-EZ boundaries in severe retinitis pigmentosa. This may require establishing certain rules for regions in which a boundary disappears, as well as revising the rules that guide our automated segmentation (Table 1), but we expect the core of our algorithm to remain otherwise similar.

6 Appendix A. Initial ILM estimation

An example of the process for the initial estimation of the ILM boundary is illustrated in Fig. 9. The cube voxel values were normalized to take values between 0 and 1 (example B-scan shown in Fig. 9(a)). We then constructed a histogram of the voxel values throughout the 3D cube (blue line in Fig. 9(c)), considering only those with values over 0.1 in order to eliminate blacked-out voxels. The first peak of the histogram represents a large number of similarly low values, mainly corresponding to the vitreous and part of the choroid. We estimated the value distribution within this background region by mirroring the lower histogram values about the first peak (red line in Fig. 9(c)); voxels with values over a threshold set at 2 standard deviations above the mean of this distribution (vertical green line in Fig. 9(c)) were considered foreground. This foreground region was processed with a morphological opening by an axial line of 40 µm to further remove spurious voxels in the vitreous, since we expect the retinal structure to be at least this thick. Fig. 9(d) displays the resulting foreground region in the example B-scan, and the histogram of the foreground values throughout the cube is shown in Fig. 9(f). The initial estimation of the ILM boundary is then formed A-scan by A-scan as the innermost positive gradient in the axial direction that belongs to this foreground region (masked gradient shown in Fig. 9(e)). Gradients were computed using a 3D Sobel operator kernel of 25 µm extent (Fig. 9(b)). The location of the ILM in those A-scans without a positive gradient of these characteristics was determined by cubic interpolation using the estimates from nearby locations. The resulting initial ILM estimation for our example is shown in Figs. 10(a)-10(e). This initial estimation, although quite successful in most SD-OCT cubes, presents localized errors (as observed in Fig. 10), which are corrected in the iterative WM process that forms the core of the segmentation method.
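To make the sequence of operations concrete, a minimal NumPy/SciPy sketch of this initial ILM estimation follows. It is an illustration of the described steps under assumed axis conventions and sampling, not the authors' Matlab implementation: a simple axial difference stands in for the 3D Sobel kernel, and the tallest histogram bin stands in for the first peak.

```python
# Minimal sketch of the initial ILM estimation (Appendix A), assuming a normalized
# cube `cube` with axes (z axial, x, y) and an axial sampling `dz_um` in microns.
import numpy as np
from scipy import ndimage

def initial_ilm_estimate(cube, dz_um=2.0):
    vals = cube[cube > 0.1]                        # ignore blacked-out voxels
    hist, edges = np.histogram(vals, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peak = centers[np.argmax(hist)]                # background peak (tallest bin here)
    low = vals[vals <= peak]
    bg = np.concatenate([low, 2.0 * peak - low])   # mirror values below the peak
    thresh = bg.mean() + 2.0 * bg.std()            # 2 std above the background mean
    fg = cube > thresh
    # Morphological opening with an axial line of ~40 um removes thin vitreous responses.
    line = np.ones((max(1, int(round(40.0 / dz_um))), 1, 1), dtype=bool)
    fg = ndimage.binary_opening(fg, structure=line)
    grad = np.gradient(cube, axis=0)               # axial gradient (3D Sobel in the paper)
    grad[~fg] = 0.0
    pos = grad > 0
    # Innermost (smallest z) positive gradient inside the foreground, per A-scan;
    # A-scans with no such gradient stay NaN and would be interpolated afterwards.
    ilm = np.where(pos.any(axis=0), pos.argmax(axis=0).astype(float), np.nan)
    return ilm
```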


Fig. 9 a) Example of de-noised B-scan in cube. b) Axial gradient in the B-scan. c) Histogram from values in SD-OCT cube (blue line), estimated distribution of background values (red line) and selected threshold (green line). d) Segmented foreground region. e) Masked gradient. f) Histogram of values in foreground region.



Fig. 10 a) Rendering of example ILM initial estimation. b), c), d) B-scans and ILM initial estimation corresponding to the red, green, and blue lines indicated in the rendering shown in a), respectively. e) Transverse scan (transverse cut of the SD-OCT cube in the axial-vertical plane) across the center of the fovea, corresponding to the yellow line in the rendering shown in a). f) Rendering of example with refined ILM. g), h), i) B-scans and refined ILM corresponding to the red, green, and blue lines indicated in the rendering shown in f), respectively. j) Transverse scan across the center of the fovea, corresponding to the yellow line in the rendering shown in f).


7 Appendix B. Differentiation between RNFL-complex and RPE-complex

We estimate the extent of the RNFL complex and the RPE complex in order to facilitate the initial estimation of the remaining nine boundaries. We considered the highest 20% of voxel values at depths greater than the estimated ILM to generate a foreground mask in which the vitreous and choroid values are mostly excluded. Lower-intensity voxels in the darker region between the RNFL and RPE complexes (outer to the OPL but inner to the IS), as well as possible fluid accumulations and detachments in the retina, are also mostly excluded and can be roughly identified as the isolated background regions (red regions displayed in Fig. 11(a)) within the foreground mask. The identified locations are averaged axially, weighted by their inverted intensity values (so that the average is drawn toward the darker areas), forming a rough initial estimate of the separation between the two complexes (blue line in Fig. 11(a)). This initial estimate then acts as the seed of the iterative weighted median process, which yields a surface that tends to follow the i-EZ location (blue line in Fig. 11(b)); for the layers considered in this work, this surface separates the layers belonging to the RNFL complex (inner to the surface) from those belonging to the RPE complex (outer to the surface).
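The following sketch (same assumed axis conventions as the previous one; illustrative only) shows one way to compute this seed surface. The per-A-scan gap detection is a simplified stand-in for identifying the isolated background regions of Fig. 11(a), and all names are placeholders.

```python
# Sketch of the seed surface separating the RNFL and RPE complexes (Appendix B).
import numpy as np

def complex_separation_seed(cube, ilm):
    nz = cube.shape[0]
    z = np.arange(nz)[:, None, None]
    below_ilm = z > ilm[None, :, :]                 # voxels deeper than the ILM
    thresh = np.percentile(cube[below_ilm], 80)     # keep the highest 20% of values
    fg = below_ilm & (cube >= thresh)
    # Per A-scan, treat non-foreground voxels lying between the first and last
    # foreground voxel as the dark gaps (complex separation, fluid, detachments).
    has_fg = fg.any(axis=0)
    first = fg.argmax(axis=0)
    last = nz - 1 - fg[::-1].argmax(axis=0)
    gaps = (z >= first[None]) & (z <= last[None]) & has_fg[None] & ~fg
    # Axial average of the gap voxels weighted by inverted intensity,
    # so the estimate is drawn toward the darkest voxels.
    w = np.where(gaps, 1.0 - cube, 0.0)
    den = w.sum(axis=0)
    seed = np.where(den > 0, (w * z).sum(axis=0) / np.maximum(den, 1e-12), np.nan)
    return seed   # NaNs would be interpolated from neighboring A-scans
```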


Fig. 11 a) Example B-scan with voxels in the darker regions between the RNFL and RPE complexes identified in red. The blue line indicates the weighted mean axial position of these regions. b) Example B-scan with blue line indicating the separation of layers belonging to the RNFL and RPE complexes as identified in this work.


8 Appendix C. Initial estimation of intraretinal layer boundaries

The regions belonging to the RNFL and RPE complexes were processed independently to obtain initial depth estimates of the i-RNFL, o-RNFL, o-IPL, o-INL, and o-OPL boundaries in the case of the RNFL complex, and of the i-EZ, o-EZ, i-RPE, and o-RPE boundaries in the case of the RPE complex. As a pre-processing step, the gradient profile of each A-scan (computed using a 3D Sobel operator, as for the ILM initial estimation) was first smoothed using a third-degree Savitzky-Golay filter with a window size equal to one fourth of the estimated axial thickness of the complex, so as to limit the profile to a small number of salient peaks and valleys (at most four) in each complex. An example of this process is shown in Fig. 12.
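A minimal sketch of this smoothing and extremum-detection step follows (illustrative only; `grad_profile` is assumed to be the 1-D axial gradient of one A-scan restricted to one complex, with the profile longer than the filter window).

```python
# Per-A-scan gradient smoothing and extremum detection for the initial boundary
# estimates (Appendix C).
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def salient_extrema(grad_profile, complex_thickness_px, n_max=4):
    # Window ~ one quarter of the complex thickness; must be odd and > polyorder (3).
    win = max(5, int(round(complex_thickness_px / 4)) | 1)
    smooth = savgol_filter(grad_profile, window_length=win, polyorder=3)
    # Valleys = peaks of the negated profile; keep the n_max deepest (lowest-value) ones.
    v_idx, _ = find_peaks(-smooth)
    valleys = sorted(v_idx, key=lambda i: smooth[i])[:n_max]
    p_idx, _ = find_peaks(smooth)
    return smooth, np.sort(np.asarray(valleys, dtype=int)), p_idx
```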


Fig. 12 a) B-scan with initial estimation of retinal layer boundaries. From top to bottom: Red: ILM. Green: i-RNFL. Dark blue: o-RNFL. Magenta: o-IPL. White: o-INL. Cyan: o-OPL. Green: i-EZ. Red: o-EZ. White: i-RPE. Yellow: o-RPE. The red vertical line indicates the region of the A-scan for which the gradient profile is shown in (b). (c) shows the filtered gradient profile with the initial identification of each boundary.


The initial estimates of the five boundaries in the first complex group were obtained by considering a maximum of four valleys in each A-scan's smoothed gradient profile, selecting those with the lowest values (shown in green, dark blue, magenta, and cyan, respectively, in Figs. 12(a) and 12(c)). The locations of the last, penultimate, and antepenultimate valleys were assigned to the o-OPL, o-IPL, and o-RNFL, respectively. When only two valleys were present, the o-IPL and o-RNFL were assigned the first valley location and the o-OPL the second; all three boundaries were assigned the same value when only one valley was present. The i-RNFL was given the location of the last peak found inner to the depth assigned to the o-RNFL boundary; if no such peak could be found, the i-RNFL location was set to that of the refined ILM. The o-INL location was selected as the highest peak between the o-IPL and o-OPL locations. The boundary locations were filled in by bilinear interpolation for those A-scans where no valleys were found.

The initial estimate of the i-EZ boundary (shown in green in Figs. 12(a) and 12(c)) was taken as the innermost location of the RPE complex resulting from the complex differentiation (Appendix B). The initial estimates of the remaining three boundaries in the RPE complex (o-EZ, i-RPE, o-RPE) were obtained by considering a maximum of two valleys in each A-scan's smoothed gradient profile, selecting those with the lowest values (shown in red, white, and yellow in Figs. 12(a) and 12(c), respectively). The locations of the last and penultimate valleys were assigned to the o-RPE and o-EZ, respectively. When only one valley was found, the o-EZ was assigned the same value as the initial i-EZ estimate, and both the o-RPE and o-EZ boundaries were assigned the i-EZ location when no valleys were found. The i-RPE was given the location of the first (innermost) peak found between the values assigned to the o-EZ and o-RPE; if no such peak was found, the i-RPE was set to the i-EZ initial estimate.
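As a compact illustration of these rules (not the authors' implementation), the sketch below assigns the sorted valley and peak indices returned by `salient_extrema` above to boundary depths for one A-scan; the fallbacks marked as assumptions are not specified in the text.

```python
# Rule-based assignment of smoothed-gradient extrema to boundary depths (Appendix C).
def assign_rnfl_complex(smooth, valleys, peaks, ilm_pos):
    v, b = list(valleys), {}
    if len(v) >= 3:
        b["o-OPL"], b["o-IPL"], b["o-RNFL"] = v[-1], v[-2], v[-3]
    elif len(v) == 2:
        b["o-RNFL"] = b["o-IPL"] = v[0]
        b["o-OPL"] = v[1]
    elif len(v) == 1:
        b["o-RNFL"] = b["o-IPL"] = b["o-OPL"] = v[0]
    else:
        return None                                     # interpolated from neighbors later
    inner = [p for p in peaks if p < b["o-RNFL"]]
    b["i-RNFL"] = inner[-1] if inner else ilm_pos       # last peak inner to o-RNFL
    mid = [p for p in peaks if b["o-IPL"] < p < b["o-OPL"]]
    # Highest peak (largest smoothed-gradient value) between o-IPL and o-OPL;
    # falling back to the o-IPL depth is an assumption.
    b["o-INL"] = max(mid, key=lambda i: smooth[i]) if mid else b["o-IPL"]
    return b

def assign_rpe_complex(valleys, peaks, i_ez_pos):
    v, b = list(valleys), {"i-EZ": i_ez_pos}
    if len(v) >= 2:
        b["o-RPE"], b["o-EZ"] = v[-1], v[-2]
    elif len(v) == 1:
        b["o-RPE"], b["o-EZ"] = v[0], i_ez_pos          # single valley taken as o-RPE (assumed)
    else:
        b["o-RPE"] = b["o-EZ"] = i_ez_pos
    between = [p for p in peaks if b["o-EZ"] < p < b["o-RPE"]]
    b["i-RPE"] = min(between) if between else i_ez_pos  # first (innermost) peak
    return b
```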

Figure 12(a) shows a B-scan taken across the fovea center with the results obtained from this initial boundary estimation. The figure shows that the initial estimation is far from perfect, but it proved sufficient to serve as the seed of the iterative weighted median process that forms the core of the method. A 3D rendering of this initial estimation in the whole SD-OCT cube is shown in Fig. 3(a).

9 Appendix D. Topographic estimation of the foveal pit location within the SD-OCT cube

The topographic location of the foveal pit is estimated through a series of thresholding and morphological operations. We first applied a k-means classification algorithm [52] to the set of en face thickness measurements between the ILM and o-IPL boundaries, dividing the values into two groups. Values under the threshold identified by the k-means algorithm and within 1 mm of the center of the image were selected as an initial mask. We then applied a morphological opening to this mask, followed by a morphological dilation with a kernel consisting of a disk of 0.25 mm radius. An example of the results of this foveal pit identification process is displayed in Fig. 13.
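A minimal sketch of this foveal-pit localization follows (illustrative only; the thickness map, the pixel pitches, the opening kernel, and the use of the mid-point between the two k-means centroids as the threshold are assumptions).

```python
# Sketch of the foveal-pit localization (Appendix D): two-class k-means on the en face
# ILM-to-o-IPL thickness map, restriction to the image center, then morphological cleanup.
import numpy as np
from scipy import ndimage
from scipy.cluster.vq import kmeans2

def foveal_pit_mask(thickness, px_mm_x, px_mm_y):
    # Two-class k-means on the thickness values; threshold taken between the centroids.
    centroids, _ = kmeans2(thickness.reshape(-1, 1).astype(float), 2, minit="++")
    thresh = float(centroids.mean())
    thin = thickness < thresh
    # Keep only candidate pixels within 1 mm of the en face image center.
    ny, nx = thickness.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    dist_mm = np.hypot((xx - nx / 2) * px_mm_x, (yy - ny / 2) * px_mm_y)
    mask = thin & (dist_mm <= 1.0)
    # Morphological opening, then dilation with a ~0.25 mm-radius disk.
    r = max(1, int(round(0.25 / px_mm_x)))
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (dx ** 2 + dy ** 2) <= r ** 2
    mask = ndimage.binary_opening(mask, structure=disk)
    return ndimage.binary_dilation(mask, structure=disk)
```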


Fig. 13 Example result in the estimation of the foveal pit location (indicated in red). a) Rendering of the segmented ILM layer, with foveal pit location. b) B-scan across the center of the fovea (location indicated by the blue line in a), with estimated foveal pit location indicated in red. c) Transverse scan across the center of the fovea (location indicated by the green line in a), with estimated foveal pit location indicated in red.


Funding

This work was supported by a grant from the Bio-X Interdisciplinary Initiatives Program of Stanford University, and a Spectrum-SPADA innovation grant of Stanford University, part of the Clinical and Translational Science Award (CTSA) program funded by the National Center for Advancing Translational Sciences (Grant: UL1 TR001085) at the National Institutes of Health (NIH).

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

2. J. R. Lee, J. W. Jeoung, J. Choi, J. Y. Choi, K. H. Park, and Y. D. Kim, “Structure-function relationships in normal and glaucomatous eyes determined by time- and spectral-domain optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 51(12), 6424–6430 (2010). [CrossRef]   [PubMed]  

3. L. de Sisternes, N. Simon, R. Tibshirani, T. Leng, and D. L. Rubin, “Quantitative SD-OCT imaging biomarkers as indicators of age-related macular degeneration progression,” Invest. Ophthalmol. Vis. Sci. 55(11), 7093–7103 (2014). [CrossRef]   [PubMed]  

4. K. Yi, M. Mujat, B. H. Park, W. Sun, J. W. Miller, J. M. Seddon, L. H. Young, J. F. de Boer, and T. C. Chen, “Spectral domain optical coherence tomography for quantitative evaluation of drusen and associated structural changes in non-neovascular age-related macular degeneration,” Br. J. Ophthalmol. 93(2), 176–181 (2009). [CrossRef]   [PubMed]  

5. G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abràmoff, and M. Sonka, “Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010). [CrossRef]   [PubMed]  

6. L. de Sisternes, J. Hu, D. L. Rubin, and T. Leng, “Visual prognosis of eyes recovering from macular hole surgery through automated quantitative analysis of spectral-domain optical coherence tomography (SD-OCT) scans,” Invest. Ophthalmol. Vis. Sci. 56(8), 4631–4643 (2015). [CrossRef]   [PubMed]  

7. J. H. Acton, R. T. Smith, D. C. Hood, and V. C. Greenstein, “Relationship between retinal layer thickness and the visual field in early age-related macular degeneration,” Invest. Ophthalmol. Vis. Sci. 53(12), 7618–7624 (2012). [CrossRef]   [PubMed]  

8. F. L. Ferris, M. D. Davis, T. E. Clemons, L. Y. Lee, E. Y. Chew, A. S. Lindblad, R. C. Milton, S. B. Bressler, R. Klein, and Age-Related Eye Disease Study (AREDS) Research Group, “A simplified severity scale for age-related macular degeneration: AREDS Report No. 18,” Arch. Ophthalmol. 123(11), 1570–1574 (2005). [CrossRef]   [PubMed]  

9. Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, and D. L. Rubin, “Automated drusen segmentation and quantification in SD-OCT images,” Med. Image Anal. 17(8), 1058–1072 (2013). [CrossRef]   [PubMed]  

10. S. Y. Lee, P. F. Stetson, H. Ruiz-Garcia, F. M. Heussen, and S. R. Sadda, “Automated characterization of pigment epithelial detachment by optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 53(1), 164–170 (2012). [CrossRef]   [PubMed]  

11. P. A. Keane, P. S. Mand, S. Liakopoulos, A. C. Walsh, and S. R. Sadda, “Accuracy of retinal thickness measurements obtained with Cirrus optical coherence tomography,” Br. J. Ophthalmol. 93(11), 1461–1467 (2009). [CrossRef]   [PubMed]  

12. G. Matt, S. Sacu, W. Buehl, C. Ahlers, R. Dunavoelgyi, C. Pruente, and U. Schmidt-Erfurth, “Comparison of retinal thickness values and segmentation performance of different OCT devices in acute branch retinal vein occlusion,” Eye (Lond.) 25(4), 511–518 (2011). [CrossRef]   [PubMed]  

13. F. G. Schlanitz, C. Ahlers, S. Sacu, C. Schütze, M. Rodriguez, S. Schriefl, I. Golbaz, T. Spalek, G. Stock, and U. Schmidt-Erfurth, “Performance of drusen detection by spectral-domain optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 51(12), 6715–6721 (2010). [CrossRef]   [PubMed]  

14. C. Ahlers, C. Simader, W. Geitzenauer, G. Stock, P. Stetson, S. Dastmalchi, and U. Schmidt-Erfurth, “Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography,” Br. J. Ophthalmol. 92(2), 197–203 (2008). [CrossRef]   [PubMed]  

15. L. de Sisternes, J. Hu, D. L. Rubin, and M. F. Marmor, “Localization of damage in progressive hydroxychloroquine retinopathy on and off the drug: inner versus outer retina, parafovea versus peripheral fovea,” Invest. Ophthalmol. Vis. Sci. 56(5), 3415–3426 (2015). [CrossRef]   [PubMed]  

16. H. J. Lee, M. S. Kim, Y. J. Jo, and J. Y. Kim, “Thickness of the macula, retinal nerve fiber layer, and ganglion cell layer in the epiretinal membrane: The repeatability study of optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 56(8), 4554–4559 (2015). [CrossRef]   [PubMed]  

17. H. J. Lee, M. S. Kim, Y. J. Jo, and J. Y. Kim, “Ganglion cell–inner plexiform layer thickness in retinal diseases: Repeatability study of spectral-domain optical coherence tomography,” Am. J. Ophthalmol. 160(2), 283–289 (2015). [CrossRef]   [PubMed]  

18. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef]   [PubMed]  

19. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

20. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal surface segmentation in volumetric images--a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006). [CrossRef]   [PubMed]  

21. M. K. Garvin, M. D. Abramoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef]   [PubMed]  

22. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009). [CrossRef]   [PubMed]  

23. P. A. Dufour, L. Ceklic, H. Abdillahi, S. Schröder, S. De Dzanet, U. Wolf-Schnurrbusch, and J. Kowal, “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging 32(3), 531–543 (2013). [CrossRef]   [PubMed]  

24. S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express 6(4), 1172–1194 (2015). [CrossRef]   [PubMed]  

25. K. Lee, M. D. Abramoff, M. Garvin, and M. Sonka, “The Iowa Reference Algorithms. (Retinal Image Analysis Lab, Iowa Institute for Biomedical Imaging),” (accessed 12 Jan 2015, Updated 29 Sep 2014). https://www.iibi.uiowa.edu/content/iowa-reference-algorithms-human-and-murine-oct-retinal-layer-analysis-and-display

26. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2(6), 1743–1756 (2011). [CrossRef]   [PubMed]  

27. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). [CrossRef]   [PubMed]  

28. A. Ehnes, Y. Wenner, C. Friedburg, M. N. Preising, W. Bowl, W. Sekundo, E. M. Zu Bexten, K. Stieger, and B. Lorenz, “Optical coherence tomography (OCT) device independent intraretinal layer segmentation,” Transl. Vis. Sci. Technol. 3(1), 1–16 (2014). [CrossRef]   [PubMed]  

29. H. Ishikawa, S. Piette, J. M. Liebmann, and R. Ritch, “Detecting the inner and outer borders of the retinal nerve fiber layer using optical coherence tomography,” Graefes Arch. Clin. Exp. Ophthalmol. 240(5), 362–371 (2002). [CrossRef]   [PubMed]  

30. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 46(6), 2012–2017 (2005). [CrossRef]   [PubMed]  

31. H. Ishikawa, J. Kim, T. R. Friberg, G. Wollstein, L. Kagemann, M. L. Gabriele, K. A. Townsend, K. R. Sung, J. S. Duker, J. G. Fujimoto, and J. S. Schuman, “Three-dimensional optical coherence tomography (3D-OCT) image enhancement with segmentation-free contour modeling C-mode,” Invest. Ophthalmol. Vis. Sci. 50(3), 1344–1349 (2009). [CrossRef]   [PubMed]  

32. N. Jain, S. Farsiu, A. A. Khanifar, S. Bearelly, R. T. Smith, J. A. Izatt, and C. A. Toth, “Quantitative comparison of drusen segmented on SD-OCT versus drusen delineated on color fundus photographs,” Invest. Ophthalmol. Vis. Sci. 51(10), 4875–4883 (2010). [CrossRef]   [PubMed]  

33. R. Kafieh, H. Rabbani, M. D. Abramoff, and M. Sonka, “Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map,” Med. Image Anal. 17(8), 907–928 (2013). [CrossRef]   [PubMed]  

34. M. A. Mayer, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express 1(5), 1358–1383 (2010). [CrossRef]   [PubMed]  

35. M. Mujat, R. Chan, B. Cense, B. Park, C. Joo, T. Akkin, T. Chen, and J. de Boer, “Retinal nerve fiber layer thickness map determined from optical coherence tomography images,” Opt. Express 13(23), 9480–9491 (2005). [CrossRef]   [PubMed]  

36. D. Cabrera Fernández, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005). [CrossRef]   [PubMed]  

37. S. Farsiu, S. J. Chiu, R. V. O’Connell, F. A. Folgar, E. Yuan, J. A. Izatt, and C. A. Toth, “World’s largest annotated SD-OCT dataset for intermediate AMD and control subjects used in our recent paper” (accessed July 2013). http://people.duke.edu/~sf59/RPEDC_Ophth_2013_dataset.htm.

38. S. Farsiu, S. J. Chiu, R. V. O’Connell, F. A. Folgar, E. Yuan, J. A. Izatt, C. A. Toth, and Age-Related Eye Disease Study 2 Ancillary Spectral Domain Optical Coherence Tomography Study Group, “Quantitative classification of eyes with and without intermediate age-related macular degeneration using optical coherence tomography,” Ophthalmology 121(1), 162–172 (2014). [CrossRef]   [PubMed]  

39. G. Staurenghi, S. Sadda, U. Chakravarthy, R. F. Spaide, and International Nomenclature for Optical Coherence Tomography (IN•OCT) Panel, “Proposed lexicon for anatomic landmarks in normal posterior segment spectral-domain optical coherence tomography: the IN•OCT consensus,” Ophthalmology 121(8), 1572–1578 (2014). [CrossRef]   [PubMed]  

40. A. Buades, B. Coll, and J. M. Morel, “A non-local algorithm for image denoising,” Comput. Vis. Patt. Recog. 2(7), 60–65 (2005).

41. S. Parrilli, M. Poderico, C. V. Angelino, G. Scarpa, and L. Verdoliva, “A non local approach for SAR image denoising,” IEEE Geoscience and Remote Sensing Symposium (IGARSS) (2010).

42. L. Fang, S. Li, R. P. McNabb, Q. Nie, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Fast acquisition and reconstruction of optical coherence tomography images via sparse representation,” IEEE Trans. Med. Imaging 32(11), 2034–2049 (2013). [CrossRef]   [PubMed]  

43. D. R. K. Brownrigg, “The weighted median filter,” Commun. Assoc. Comput. Machinery 27(8), 807–818 (1984).

44. L. Yin, R. K. Yang, M. Gabbouj, and Y. Neuvo, “Weighted median filters: A tutorial,” IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 43(3), 157–192 (1996). [CrossRef]  

45. A. Buades, B. Coll, and J. M. Morel, “The staircasing effect in neighborhood filters and its solution,” IEEE Trans. Image Process. 15(6), 1499–1505 (2006). [CrossRef]   [PubMed]  

46. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated Automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). [CrossRef]   [PubMed]  

47. J. D’Errico, “Surface Fitting using gridfit” (MATLAB CENTRAL File Exchange, 12 Jul 2013, Updated 29 Jul 2010). http://www.mathworks.com/matlabcentral/fileexchange/8998-surface-fitting-using-gridfit.

48. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Dataset for SD-OCT Retinal Image Analysis” (2012). http://people.duke.edu/~sf59/Chiu_IOVS_2011_dataset.htm.

49. P.-y. Teng, “Caserel - An Open Source Software for Computer-aided Segmentation of Retinal Layers in Optical Coherence Tomography Images” (2013). http://pangyuteng.github.io/caserel/

50. M. Mayer, “Optical coherence tomography segmentation and evaluation GUI” (2016). https://www5.cs.fau.de/research/software/octseg/

51. P. A. Dufour, “OCT segmentation application” (2012). http://pascaldufour.net/Research/software_data.html

52. J. A. Hartigan and M. A. Wong, “Algorithm AS 136: A K-means clustering algorithm,” J. R. Stat. Soc. Series C 28(1), 100–108 (1979).
