
Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology

Open Access

Abstract

Accurate quantification of retinal layer thicknesses in mice as seen on optical coherence tomography (OCT) is crucial for the study of numerous ocular and neurological diseases. However, manual segmentation is time-consuming and subjective. Previous attempts to automate this process were limited to high-quality scans from mice with no missing layers or visible pathology. This paper presents an automatic approach for segmenting retinal layers in spectral domain OCT images using sparsity based denoising, support vector machines, graph theory, and dynamic programming (S-GTDP). Results show that this method accurately segments all present retinal layer boundaries, which can range from seven to ten, in wild-type and rhodopsin knockout mice as compared to manual segmentation, and that it outperforms the commercial automated Diver segmentation software.

© 2014 Optical Society of America

1. Introduction

Accurate quantification of retinal layer thicknesses in spectral domain optical coherence tomography (SD-OCT) images of mouse eyes is crucial for the study and initial treatment evaluation of many ophthalmic and neurologic diseases in humans [1, 2]. However, segmenting these layers manually [1, 3] is time-consuming, limiting its practicality for use in large-scale studies. Furthermore, layer thicknesses calculated from manual segmentations are inherently subjective due to variability between graders.

While many automated algorithms for segmenting retinal layers in human eyes have been developed [4–13], few have addressed the segmentation of murine eyes. These include a 3D segmentation algorithm by Ruggeri and colleagues that segments two retinal layer boundaries [14], and a two-algorithm method by Molnár and colleagues that segments three retinal layer boundaries by first calculating borders using row projections in a sliding window and then refining these borders iteratively [15]. The method by Yazdanpanah and colleagues utilizes active contours to segment retinal layers in SD-OCT images of rat eyes [16]. However, that work was limited in application, as the test images were preselected based on three criteria: 1) The test images were chosen from wild-type (WT) or diseased eyes in which no retinal layer was completely missing. 2) The test images were limited to the central slices of the volumes where the retinal layers were clearly visible, eliminating images from the periphery of the retinal volumes where retinal layers had lower quality and images of the optic nerve where several layers disappear. 3) The algorithm only segmented six retinal layer boundaries, ignoring the NFL-GCL, IIS-OS, OIS-OS, and RPE-Choroid. Finally, while this paper was under review, a new work by Antony and colleagues was published which addressed a graph-based method for the automated segmentation of 10 retinal layer boundaries in normal mice, excluding the optic nerve head (ONH) region [17].

Here, we present a novel segmentation methodology that accurately segments the retinal layers in images from all sections of the retina, including the periphery and the ONH of WT and rhodopsin knockout (Rho(−/−)) mice with missing layers and significant pathology, captured with a commercial Bioptigen Inc. (Research Triangle Park, NC) SD-OCT system. We previously developed a framework for segmenting retinal layers in human eyes based on graph theory and dynamic programming (GTDP) [4]. Section 2 briefly reviews the layers of the murine retina and the GTDP and support vector machine (SVM) techniques in the context of the present problem. Section 3 introduces our layer segmentation technique for the mouse retina. The new algorithm, which we name S-GTDP, combines the GTDP framework with an SVM algorithm to detect pathological eyes. Section 4 demonstrates the accuracy of our algorithm by quantitatively comparing our automated results against manual segmentation and the commercially available Diver software (Bioptigen Inc.), and Section 5 outlines conclusions and future directions.

2. Review

In this section, we briefly review the layers of the murine retina, the GTDP framework originally developed for human retinal [4, 5, 18] and corneal [19] layer segmentation, as well as the basics of SVM classification [20]. While the general GTDP framework is similar for different applications, in the following we have modified and extended the core formula in the context of murine retinal layer boundary segmentation.

2.1 Murine retina

Figure 1 shows example SD-OCT images of WT and Rho(−/−) mice retinas.


Fig. 1 (a) Ten targeted retinal layer boundaries in a WT mouse SD-OCT B-scan (Group A). (b) Morphological cross-section from an age-matched WT mouse retina stained with toluidine blue. Bar: 50 μm. (c) Eight targeted retinal layer boundaries in a Rho(−/−) mouse SD-OCT B-scan (Group B). (d) Morphological cross-section from an age-matched Rho(−/−) mouse retina stained with toluidine blue. Bar: 50 μm.

Vitreous-NFL: Vitreous-Nerve Fiber Layer
NFL-GCL: NFL-Ganglion Cell Layer
IPL-INL: Inner Plexiform Layer-Inner Nuclear Layer
INL-OPL: INL-Outer Plexiform Layer
OPL-ONL: OPL-Outer Nuclear Layer
ELM: External Limiting Membrane
IIS-OS: Inner Boundary of Inner Segment-Outer Segment
OIS-OS: Outer Boundary of IS-OS
OS-RPE: Outer Segments-Retinal Pigment Epithelium
RPE-Choroid: RPE-Choroid


2.2 GTDP layer segmentation

The GTDP framework represents an SD-OCT B-scan as a graph consisting of nodes (i.e. image pixels) and edges that connect adjacent pixels. Weights are assigned to each of the edges based on a priori information about the layer boundaries. A cut is defined as a path that traverses edges from the leftmost column of the image to the rightmost column of the image. The desired cut is the path with the minimum summed weight of traversed edges.

In this paper, we modify and extend the weighting scheme in [4] as follows:

$$w_{ab} = \left(2 - (g_a + g_b)\right) + \lambda_s \left| i_a - i_b \right| + w_v + w_{\min}, \tag{1}$$
where:

  • w_ab is the weight of the edge connecting nodes a and b,
  • g_j is the normalized vertical gradient of the image at node j ∈ {a, b},
  • λ_s is the “similarity factor” weight,
  • i_j is the normalized intensity of node j ∈ {a, b},
  • w_v is the “vertical penalty” term that adds extra weight to edges going up, down, or diagonally,
  • w_min is the minimum weight term (1 × 10⁻⁵) added for numerical stability.

Due to the first “gradient” term on the right side of Eq. (1), layer boundaries prefer to pass through pixels with large vertical gradients. The second “similarity” term prefers boundaries that pass through pixels of similar or smoothly changing intensity. Here, normalization refers to linearly projecting pixel values to the range between zero and one. The third “vertical penalty” term prevents the segmentation from “hopping” between boundaries. For efficient and simplified computation, we only create edges from any node (pixel) to its upper-right neighbor, its right neighbor, and its lower-right neighbor. The only exceptions are the ONH and vessel regions (with steep boundaries), where we allow vertical edges as discussed in Section 3.7.

We use an iterative method to segment all retinal layers in each SD-OCT B-scan using GTDP. As detailed in our previous publication [4], once a new layer boundary is segmented, it is used to limit the search space for the subsequent layer boundaries. We use Dijkstra’s algorithm, initialized by the zero-weight endpoint selection method of [4], to find the lowest summed weight path across the image.
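
To make the weighting and cut-finding concrete, the following is a minimal Python sketch of this scheme (the published implementation is in MATLAB and is not reproduced here). It builds a directed graph over one gradient/intensity image pair with edges to the right, upper-right, and lower-right neighbors, weights them with Eq. (1), and finds the minimum-weight cut with Dijkstra's algorithm via zero-cost virtual endpoint nodes. The values of λ_s and w_v shown are placeholders, not the parameters of Tables 1 and 2.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def segment_boundary(grad, intensity, lambda_s=0.5, w_v=0.1, w_min=1e-5):
    """Return the minimum-weight cut (list of (row, col)) across the image."""
    rows, cols = grad.shape
    n = rows * cols
    node = lambda r, c: r * cols + c               # node index of pixel (r, c)
    G = lil_matrix((n + 2, n + 2))                 # two extra virtual endpoint nodes
    for r in range(rows):
        for c in range(cols - 1):
            for dr in (-1, 0, 1):                  # upper-right, right, lower-right neighbors
                rr = r + dr
                if 0 <= rr < rows:
                    w = (2.0 - (grad[r, c] + grad[rr, c + 1])) \
                        + lambda_s * abs(intensity[r, c] - intensity[rr, c + 1]) \
                        + (w_v if dr != 0 else 0.0) + w_min
                    G[node(r, c), node(rr, c + 1)] = w
    for r in range(rows):                          # endpoint initialization with negligible-cost edges
        G[n, node(r, 0)] = w_min                   # virtual start -> every leftmost pixel
        G[node(r, cols - 1), n + 1] = w_min        # every rightmost pixel -> virtual end
    dist, pred = dijkstra(G.tocsr(), directed=True,
                          indices=n, return_predecessors=True)
    cut, v = [], n + 1                             # backtrack from the virtual end node
    while pred[v] >= 0 and pred[v] != n:
        v = pred[v]
        cut.append(divmod(v, cols))                # (row, col) of each pixel on the cut
    return cut[::-1]
```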

2.3 SVM

In brief, an SVM is a supervised classification algorithm that takes as input a set of training examples, each consisting of a feature vector and a binary label. The algorithm maps each training example into the space of the provided features and finds the maximum-margin hyperplane that separates the positive (labeled one) and negative (labeled zero) training examples. While SVMs in their basic form are binary linear classifiers, a simple kernel trick [20] turns them into nonlinear classifiers and thus extends their application.

SVMs and other machine learning classifiers have previously been used for various classification problems, including automatic and semiautomatic retinal layer segmentation by classifying pixels as belonging to different layers [21, 22], glaucoma detection [23–27], and segmentation of the ONH [28]. In this work, we exploit a novel utilization of SVMs to detect the images of diseased eyes in which some retinal layers may be missing.

3. Methods

This section introduces our method for segmenting up to ten layer boundaries in SD-OCT images of the mouse retina. The algorithm is an extension of our previously presented technique for segmenting layered structures via GTDP in normal human eyes [4]. The core steps are outlined in Fig. 2 and described in detail in the following subsections.


Fig. 2 Overview of the algorithm for classifying and segmenting murine SD-OCT volumes.


3.1 Volume denoising

SD-OCT images are corrupted by speckle noise, so denoising them before segmentation is beneficial. Using B-scan averaging or other special scanning patterns [29, 30] reduces noise but decreases the image acquisition speed. Thus, to improve the quality of our captured images, we denoise individual B-scans in the SD-OCT volume using two different sparsity based denoising methods, which are freely available online. That is, we create two sets of denoised images for each mouse and utilize them to calculate appropriate graph weights for each layer boundary. The first of these denoising techniques is called sparsity based simultaneous denoising and interpolation (SBSDI) [31]. Based on our empirical experiments on a training data set, we found that SBSDI provides the most accurate segmentation results for the Vitreous-NFL, IPL-INL, INL-OPL, OPL-ONL, OIS-OS, and OS-RPE layer boundaries. This training data set was composed of 400 B-scans from four WT mice and 400 B-scans from four Rho(−/−) mice with advanced retinal degeneration. We also denoise each B-scan with the block-matching and 3D filtering (BM3D) algorithm [32], which we have found to be most appropriate for the segmentation of the other layer boundaries. Note that it is possible to use either of these methods alone in our segmentation framework, which reduces the overall computation time; however, the resulting segmentation will be less accurate than with the proposed method that utilizes both denoising algorithms. After denoising, the gray-level values of all images are normalized to values between zero and one.
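
As an illustration of this per-B-scan denoising and normalization step, the sketch below processes a volume B-scan by B-scan. SBSDI and BM3D are external packages and are not reproduced here; a simple median filter stands in for them so that the example is self-contained, and the 3-pixel window is an arbitrary placeholder.

```python
import numpy as np
from scipy.ndimage import median_filter

def normalize(img):
    """Linearly project gray levels to [0, 1]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def denoise_volume(volume, denoiser=lambda b: median_filter(b, size=3)):
    """Denoise each B-scan of a (n_bscans, rows, cols) volume and normalize it."""
    return np.stack([normalize(denoiser(b)) for b in volume])

# In practice two denoised copies of the volume are produced, one per
# denoising method (SBSDI and BM3D), and used for different layer boundaries.
```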

3.2 SVM based volume classification

We trained an SVM to classify each B-scan in a volume as belonging either to the group with all 10 retinal layer boundaries (Group A) or the group with eight or fewer boundaries (Group B). In our experiments, the latter group consisted solely of Rho(−/−) mice while the former group consisted of WT mice without advanced retinal degeneration. To train an SVM for classifying B-scans as belonging to either Group A or B, we used the training data set described in Section 3.1.

To calculate each feature vector, we first need to find the top left corner of the visible retina within each B-scan (Fig. 3). We detect the left boundary of the retina within the B-scan by thresholding the image at the intensity value of 0.6, removing all connected components with fewer than 50 pixels, and then finding the first column with non-zero values. Next, we compute a pilot estimate of the Vitreous-NFL boundary, as detailed below in Section 3.4. Starting from 120 µm to the right of the leftmost column of the visible retina, we extract a 320 µm deep and 480 µm wide (corresponding to 200 pixels and 400 pixels, respectively, in our experiments) rectangle from the image. Then, we average each row of the rectangle, resulting in a [200 × 1] vector that we use as the feature vector for each training example. The feature vector values are linearly projected to values between zero and one. Figure 3 shows an example rectangle extracted from an SD-OCT B-scan from a WT mouse and the corresponding feature vector.
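
A minimal sketch of this feature extraction follows, assuming a denoised, normalized B-scan `bscan` and a pilot Vitreous-NFL estimate `vnfl` (one row index per column). The 100-pixel lateral offset follows from 120 µm at the 1.2 µm lateral sampling of our experiments; the function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import label

def feature_vector(bscan, vnfl, depth_px=200, width_px=400, offset_px=100):
    """Row-averaged [depth_px x 1] feature vector from a rectangle anchored on the retina."""
    mask = bscan > 0.6                                   # threshold at intensity 0.6
    labels, _ = label(mask)
    sizes = np.bincount(labels.ravel())
    keep = np.nonzero(sizes >= 50)[0]
    keep = keep[keep != 0]                               # drop background and components < 50 px
    mask = np.isin(labels, keep)
    left_col = int(np.nonzero(mask.any(axis=0))[0][0])   # leftmost column of the visible retina
    c0 = left_col + offset_px                            # start 120 µm to the right of it
    r0 = int(vnfl[c0])                                   # rectangle top sits on the Vitreous-NFL
    rect = bscan[r0:r0 + depth_px, c0:c0 + width_px]
    feat = rect.mean(axis=1)                             # average each row
    return (feat - feat.min()) / (feat.max() - feat.min() + 1e-12)  # project to [0, 1]
```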


Fig. 3 Example rectangular region-of-interest isolated from an SD-OCT B-scan from a WT mouse, and the corresponding feature vector used for classifying SD-OCT volumes.


Each training example was assigned a label: zero if it came from a WT mouse and one if it had missing layers. Our proposed SVM utilized a Gaussian kernel function with a sigma value of 10 to enable non-linear decision capability. We used the MATLAB (The MathWorks Inc., Natick, MA) functions svmtrain and svmclassify to implement the proposed SVM. We use the trained SVM to classify all B-scans in each volume. Finally, we use the mode of all B-scan classifications in a volume to decide whether the mouse belongs to Group A or B.
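
The sketch below reproduces this classification step with scikit-learn's SVC in place of MATLAB's svmtrain/svmclassify; for an RBF (Gaussian) kernel, a sigma of 10 corresponds to gamma = 1/(2·sigma²). Array shapes and variable names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

SIGMA = 10.0

def train_classifier(train_features, train_labels):
    """train_features: (n_examples, 200) row-averaged feature vectors; labels: 0 = Group A, 1 = Group B."""
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * SIGMA ** 2))
    clf.fit(train_features, train_labels)
    return clf

def classify_volume(clf, volume_features):
    """Classify every B-scan in a volume, then take the mode as the volume label."""
    bscan_labels = clf.predict(volume_features).astype(int)
    return int(np.bincount(bscan_labels).argmax())
```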

3.3 Gradient image creation

As discussed in detail in our previous publication, it is beneficial to construct two sets of vertical gradients, also known as dark-to-light and light-to-dark, to better separate neighboring retinal layers (Fig. 4) [4]. To calculate these gradients, we convolve each denoised image with either [1;-1] (MATLAB notation) for the dark-to-light gradient or [-1;1] for the light-to-dark gradient, set all negative values to zero, and normalize the image to values between zero and one.
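
A minimal sketch of this gradient construction is given below. The kernel orientation assumes that the row index increases with depth, so the sign convention may need flipping for a different image orientation.

```python
import numpy as np
from scipy.ndimage import convolve

def gradient_images(bscan):
    """Return normalized dark-to-light and light-to-dark vertical gradient images."""
    k = np.array([[1.0], [-1.0]])                    # vertical derivative kernel ([1;-1])
    dark_to_light = np.clip(convolve(bscan, k), 0, None)
    light_to_dark = np.clip(convolve(bscan, -k), 0, None)
    norm = lambda g: g / g.max() if g.max() > 0 else g   # project to [0, 1]
    return norm(dark_to_light), norm(light_to_dark)
```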


Fig. 4 Example gradient images of the SD-OCT image in Fig. 1(a), where retinal layer boundaries are deinterlaced. (a) Dark-to-light gradient image. (b) Light-to-dark gradient image.


3.4 Pilot layer segmentation using GTDP

Mouse retinal OCT scans are commonly centered at the ONH. Given the significant deformation of retinal layers near the ONH, our method utilizes the location of the ONH to improve the accuracy of retinal layer segmentation.

After determining the number of layers to segment in each volume (Section 3.2), we preliminarily segment all present layers in each scan of the volume so that we can use this pilot segmentation to determine the location of vessels and the ONH.

When segmenting each SD-OCT B-scan, we begin by selectively segmenting the Vitreous-NFL boundary. Starting with the dark-to-light gradient image (Fig. 4(a)), we iterate through each column and set every pixel in the column below the innermost intensity peak to zero, based on the assumption that the innermost dark-to-light boundary corresponds to the Vitreous-NFL. The edge weights are then calculated and the Vitreous-NFL boundary is segmented (see Tables 1 and 2 for implementation details and parameters used for all layer boundary segmentations outlined in this section).
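
The following sketch illustrates the column-wise preprocessing just described: in each column of the dark-to-light gradient image, all pixels below the innermost (topmost) peak are zeroed before the graph weights are computed. The peak-height threshold here is an illustrative placeholder, not a value from Tables 1 and 2.

```python
import numpy as np
from scipy.signal import find_peaks

def isolate_innermost_peak(d2l_gradient, min_height=0.05):
    """Zero everything below the first (innermost) peak of each column."""
    g = d2l_gradient.copy()
    for c in range(g.shape[1]):
        peaks, _ = find_peaks(g[:, c], height=min_height)
        if peaks.size:
            g[peaks[0] + 1:, c] = 0.0        # suppress all pixels below the innermost peak
    return g
```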


Table 1. Segmentation parameters for Group A.


Table 2. Segmentation parameters for Group B.

Next, we selectively segment the most prominent dark-to-light boundary under the NFL. In most cases of Group A, this is the IIS-OS boundary; in Group B, this is the OS-RPE. In some cases, this assumption does not hold throughout a B-scan, and the ELM gradient may appear more prominently than the IIS-OS or OS-RPE gradients. To address this issue, we note that the ELM is a very thin dark-to-light layer boundary above our target boundary and below a thick low intensity region (ONL). Thus, we threshold the dark-to-light gradient image at the value of 0.1 and set any nonzero pixel above the second intensity peak (corresponding to an estimate of the IIS-OS or OS-RPE boundary) that is bordered above and below by zero-intensity pixels to zero. The edge weights are then calculated and the IIS-OS or OS-RPE boundary is segmented.

We then segment the INL-OPL boundary in a straightforward fashion. To achieve an accurate INL-OPL boundary segmentation, we utilize the dark-to-light gradient based weighting, and limit our search region between our estimated Vitreous-NFL and IIS-OS or OS-RPE boundaries (for Groups A and B, respectively).

Next, we segment the ELM boundary. For Group A, where the ELM is assumed to be always present, we limit the search region in the dark-to-light gradient image, calculate the edge weights, and segment the ELM. However, this process is different for Group B because the ELM is only intermittently present. To address this, we test regions of the pilot ELM segmentation to determine where the segmentation is valid and where the ELM does not exist. First, we record the maximum and minimum dark-to-light gradient values 6.4 µm below and above the pilot ELM segmentation, respectively. Then, we divide the pilot ELM segmentation into 24-µm wide segments and calculate the mean of the noted maximum and minimum dark-to-light gradient values within each segment. If the mean of the maximum gradient values is greater than or equal to 0.09 and the mean of the minimum gradient values is less than or equal to 0.02, that segment of the ELM segmentation is declared valid. Otherwise, we assume that the ELM in that 24-µm wide segment does not exist.
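
A sketch of this validity test is shown below, assuming 1.6 µm axial and 1.2 µm lateral sampling so that 6.4 µm corresponds to 4 pixels and the 24-µm segments to 20 columns; `elm_rows` holds the pilot ELM row index for each column, and the names are illustrative.

```python
import numpy as np

def elm_valid_segments(d2l, elm_rows, seg_cols=20, band_px=4,
                       max_thresh=0.09, min_thresh=0.02):
    """Boolean mask per column: True where the ELM is declared present."""
    rows, cols = d2l.shape
    below_max = np.empty(cols)
    above_min = np.empty(cols)
    for c in range(cols):
        r = int(elm_rows[c])
        lo = min(r + 1, rows - 1)
        below_max[c] = d2l[lo:lo + band_px, c].max()          # max gradient within 6.4 µm below
        hi = max(r, 1)
        above_min[c] = d2l[max(hi - band_px, 0):hi, c].min()  # min gradient within 6.4 µm above
    valid = np.zeros(cols, dtype=bool)
    for c0 in range(0, cols, seg_cols):                       # 24-µm wide segments
        sl = slice(c0, min(c0 + seg_cols, cols))
        if below_max[sl].mean() >= max_thresh and above_min[sl].mean() <= min_thresh:
            valid[sl] = True
    return valid
```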

In Group A, after segmenting the ELM, we segment the OS-RPE. To do this, we utilize the dark-to-light gradient and the parameters of Tables 1 and 2 in a straightforward fashion.

Next, we segment the boundaries that are most prominent in the light-to-dark gradient image (Fig. 4(b)). These include the NFL-GCL, IPL-INL, OPL-ONL, and OIS-OS, which can be segmented in a straightforward fashion by isolating the search region and using the parameters in Tables 1 and 2. Note that we segment the OIS-OS boundary only in Group A (the IS-OS is absent in Group B).

Finally, we segment the RPE-Choroid boundary in a similar straightforward fashion. For this boundary, we use different gradients for Group A and Group B because utilizing the light-to-dark boundary for Group A results in more accurate segmentation, while the opposite is true for Group B.

3.5 ONH segmentation

Since internal retinal layers do not exist within the ONH, it is important to know where the nerve head is located in our B-scans so that we do not segment internal layers in those regions. To segment the ONH, we make use of our pilot layer segmentation. The pilot Vitreous-NFL is designed to cut through the hyper-reflective peak of the ONH center due to the lack of vertical edges and the heavy weights assigned to diagonal edges. Thus, we obtain the ONH center as the location of the maximum value in the summed voxel projection (SVP) of the SD-OCT volume created by averaging intensities above the pilot Vitreous-NFL boundary (Fig. 5(a)).
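
The ONH-center estimate can be written compactly as in the sketch below, assuming `volume` is a (B-scans × rows × columns) array and `vnfl` holds the pilot Vitreous-NFL row index for every A-scan; this is an illustrative re-implementation rather than the production code.

```python
import numpy as np

def onh_center(volume, vnfl):
    """Return (B-scan index, A-scan index) of the brightest pixel of the vitreous SVP."""
    n_b, rows, cols = volume.shape
    svp = np.zeros((n_b, cols))
    for b in range(n_b):
        for c in range(cols):
            top = int(vnfl[b, c])
            svp[b, c] = volume[b, :max(top, 1), c].mean()   # mean intensity above the boundary
    return np.unravel_index(np.argmax(svp), svp.shape)      # location of the maximum SVP value
```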


Fig. 5 ONH segmentation. (a) SVP for ONH center estimation. (b) SVP for ONH segmentation. (c) The corresponding fitted ONH ellipse.


After estimating the ONH center, we search for the boundaries of the entire ONH area. We create another SVP by averaging the intensities between the pilot OPL-ONL and RPE-Choroid boundaries in Group A, or between the pilot INL-OPL and RPE-Choroid boundaries in Group B (Fig. 5(b)). The quality of the SVP is enhanced using the BM3D algorithm. We segment this area by fitting an ellipse to this SVP, which is centered at the previously calculated center of the ONH and oriented along the x and y axes (Fig. 5(c)).

3.6 Vessel segmentation

The presence of large blood vessels appearing as hyper-reflective bulges in the NFL makes accurate segmentation difficult. As shown in Fig. 6, if the appearance of vessels is not carefully considered in algorithm design, inconsistent segmentation of the NFL-GCL boundary will occur. This is important because small changes in the NFL thickness due to inconsistent segmentation may be erroneously associated with glaucoma and other neurological diseases.


Fig. 6 (a) Automatic segmentation of an SD-OCT B-scan from a WT mouse by Bioptigen Inc. Diver 2.0 software with inconsistent NFL-GCL segmentation in the presence of vessels. (b) The corresponding automatic segmentation by our S-GTDP method.


To consider the vessels in our segmentation, we must first determine the locations of these retinal vessels, and we do this by using our pilot layer segmentation. We first create two SVPs of our SD-OCT volume. The first SVP is created by averaging the intensities between the pilot OPL-ONL and RPE-Choroid for volumes in Group A, or between the pilot INL-OPL and RPE-Choroid in Group B. The second SVP is created by averaging the intensities between the pilot Vitreous-NFL and IPL-INL. In the first SVP the vessels have dark shadows while in the second SVP they appear bright (Fig. 7(a) and 7(b)).


Fig. 7 Vessel segmentation. (a) Low intensity vessels SVP. (b) High intensity vessels SVP. (c) Gabor-filtered combined SVP for vessel segmentation.


Next, to enhance the contrast between vessel and non-vessel pixels in both SVPs, we employ a multi-scale approach using Laplacian-of-Gaussian (LoG) filters and Gabor wavelets, as detailed in [33, 34]. Briefly, this method first convolves the SVP with a bank of LoG filters of various standard deviations and keeps the maximum response at each pixel. Next, the LoG filtered image is convolved with a bank of Gabor wavelets of varying wavelengths, scales, and orientations, and the maximum response at each pixel is kept. We sum the two SVPs to combine their information, enhance the quality of the combined SVP (Fig. 7(c)) with the BM3D algorithm, and convert the combined SVP into a mask of vessel locations by thresholding each pixel at the intensity of 0.6. Since the vessel bulge is larger than its observed shadow, we horizontally dilate the segmented vessels by convolving the vessel mask with a [1 × 7] averaging filter with uniform weights and setting all non-zero values to one.
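
A sketch of this final masking step is given below; the LoG/Gabor enhancement and BM3D cleanup of the two SVPs are assumed to have been applied already, and only the summing, thresholding, and horizontal dilation are shown. Names and the renormalization are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def vessel_mask(svp_dark, svp_bright, threshold=0.6):
    """Binary vessel mask from the two contrast-enhanced SVPs."""
    combined = svp_dark + svp_bright
    combined = (combined - combined.min()) / (combined.max() - combined.min() + 1e-12)
    mask = combined > threshold                      # threshold each pixel at 0.6
    kernel = np.ones((1, 7)) / 7.0                   # [1 x 7] averaging filter
    return convolve(mask.astype(float), kernel) > 0  # horizontal dilation (any nonzero response)
```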

3.7 Second pass segmentation

We segment each B-scan in the SD-OCT volume a second time to incorporate the new information from the segmented ONH and vessels. To deal with the missing layers in the ONH, each B-scan that crosses the ONH area is separated into three sections: the image to the left of the ONH, the ONH, and the image to the right of the ONH (Fig. 8). The left and right images are treated as independent images, and all internal retinal layer boundaries are segmented in both.


Fig. 8 ONH scan separated into regions, with the hyper-reflective peak (consisting of the nerve fibers and the hyaloid artery) and flecks labeled. The right region is segmented by S-GTDP.


Since we segment the Vitreous-NFL and RPE-Choroid boundaries in the ONH, the entire B-scan is used for segmenting these two boundaries. The Vitreous-NFL boundary is preliminarily segmented using the method detailed in Section 3.4 with a slight variation in the graph connectivity scheme. When segmenting the Vitreous-NFL boundary in the presence of a hyper-reflective peak above the ONH (consisting of the nerve fibers and the hyaloid artery without a clear border), we also create edges to vertically connected neighbors in our graph representation of the image so that the path can make the vertical leap necessary to segment the hyper-reflective peak.

Next, we remove hyper-reflective flecks (Fig. 8) within the NFL from the gradient image by setting to zero those pixels that have high intensity pixels both above and below them in the denoised image (specifically, pixels for which the ratio of the mean intensity within 16 µm below the target pixel to the mean intensity within 16 µm above the target pixel is less than 1.45). Next, we test whether our preliminary Vitreous-NFL segmentation cuts through a hyper-reflective peak in the NFL by measuring the mean pixel intensity in each column above the preliminary Vitreous-NFL segmentation. For every column in which this value is greater than 0.043, we set all pixels below the innermost intensity peak to zero before segmenting the Vitreous-NFL boundary again.

To segment the RPE-Choroid boundary in scans within the ONH region, we use the same method as detailed above in Section 3.4, except that we segment the light-to-dark gradient image instead of the dark-to-light gradient image, because the dark-to-light RPE-Choroid boundary disappears in the center of the ONH region.

In our second pass segmentation, we also take the segmented vessel locations into consideration. Our algorithm segments the NFL-GCL boundary under the vessels’ hyper-reflective bulges by adjusting the search region in columns where vessels are detected. In these columns, the search region for the NFL-GCL boundary is decreased from the inner side by 16 µm, so that we detect the light-to-dark gradient corresponding to the bottom of the hyper-reflective bulge instead of any gradient within the bulge. However, if there is no prominent light-to-dark gradient within the top 35.2 µm of this adjusted search region column, we assume that we have missed the bottom of the bulge, so we revert to the original NFL-GCL search region. In vessel regions, we also create edges to vertically connected neighbors in our graph representation of the image so that the path can make the vertical leaps necessary to segment the hyper-reflective bulges. The vessel locations are not considered in B-scans of the ONH area because the vessels extend radially from the optic nerve.

3.8 Parameters

This section lists all algorithmic parameters needed for reproducing the results reported in this paper. Note that all parameters in this paper were determined empirically based on our training data set (described in Section 3.1), and that we used the same parameters for all experiments. As explained in Section 3.2, our SVM based algorithm automatically determines if a particular image benefits most from using the parameters for Group A (Table 1) or Group B (Table 2).

3.9 Animal models and SD-OCT imaging hardware, software, and protocol

All imaging was performed in vivo utilizing a Bioptigen Inc. Envisu R2200 Ultra-high-resolution SD-Ophthalmic Imaging System (SD-OIS), which is commercially available for small animal imaging (840 nm SD-OCT with customized 180 nm Superlum Broadlighter source, providing 1.6 µm digital axial resolution and 2 µm optical axial resolution). For comparison with our segmentation results, we employed the commercially available automated mouse retina layer segmentation software (Diver 2.0) also developed by Bioptigen Inc.

The retina of each animal was imaged using 100 unaveraged, equally spaced B-scans, each with 1000 A-scans, spanning a 1.2 mm × 1.2 mm region centered at the optic nerve. In our first set of experiments, which was used for validating the accuracy of our algorithm, Groups A and B consisted of the eyes of 10 WT (6 to 9 weeks old) and 10 Rho(−/−) [35] mice (4-8 weeks old), respectively. In WT mice, no significant change in the retinal layer thicknesses takes place after the fourth week (e.g., [36]); thus, mice from the age range we used can be considered appropriate controls in the context of this study. Rho(−/−) mice fail to develop rod photoreceptor outer segments, which causes progressive photoreceptor degeneration. By 4-5 weeks of age, the number of rod nuclei decreases by 10-15 percent, and by 12 weeks the retinas lose over 90% of the rod photoreceptors [35]. There was no overlap between these 20 mice and those used for training the SVM and segmentation algorithms.

Finally, as an anecdotal demonstration of the algorithm’s applicability to other types of pathologies, we imaged one 25-day-old mouse with the retinal degeneration 10 (Rd10) pathology [37] and one mouse displaying abnormal morphology in the OPL of unknown origin (Fig. 9). We used the exact same algorithmic parameters for all experiments in this paper.


Fig. 9 Original SD-OCT images and the same images with retinal layer boundaries automatically segmented by S-GTDP. (a) WT retina. (b) WT retina including ONH. (c) WT retinal periphery. (d) Rho(−/−) retina. (e) Rho(−/−) retina including ONH. (f) Rho(−/−) retinal periphery. (g) Retina displaying abnormal morphology in the OPL of unknown origin. (h) Retina from an Rd10 mutant mouse.


All images used for validation were also manually segmented by two graders using custom software (DOCTRAP version 19.6) previously developed at Duke University [5]. Both manual graders started from scratch and were blinded to the automatic segmentation and to the other grader’s marking. We chose the more experienced grader as the gold standard against which all other methods (S-GTDP, Bioptigen’s Diver software, and the second grader) were compared.

Our automated algorithm was implemented in MATLAB and was incorporated as an extension of the DOCTRAP software package. For all experiments, our software was executed fully automatically.

4. Results

This section shows the retinal layer segmentation results obtained using the procedures described in Section 3. Section 4.1 examines the results of our SVM classification method, and Section 4.2 compares our S-GTDP fully automatic segmentation results to the gold standard manual segmentation.

4.1 SVM classification performance

To validate our SVM classification algorithm, we used the full data set consisting of 2000 un-averaged B-scans from 10 WT mice and 10 Rho(−/−) mice. We counted the fraction of images that were correctly classified in each of Groups A and B, as summarized in Table 3. We distinguished WT retinas from Rho(−/−) retinas with 100% accuracy by using the mode of the B-scan classifications as the classification of the entire retina.


Table 3. Fraction of images in each data set correctly classified by the SVM algorithm.

4.2 Segmentation validation

For quantitative comparison, a subset of 200 images, 10 linearly spaced images per volume, was selected from the 2000 un-averaged B-scans and manually segmented independently by two graders and automatically by S-GTDP and Bioptigen’s Diver software.

While performing these experiments, we noted that Bioptigen Inc.’s Diver software does not segment any layers in certain B-scans. Even in B-scans that the Diver software segments, selected sections of layer boundaries are reported as zero whenever the segmentation algorithm fails. Out of the 200 B-scans we used for comparison, 192 (99 for Group A and 93 for Group B) were segmented by the Diver software, and all of these had sections where layer boundaries were reported as zero, signifying invalid segmentation results. Furthermore, the Diver software only segments 8 layer boundaries, skipping the ELM and OIS-OS boundaries.

To provide a fair comparison between our algorithm and the Diver software, we set up two separate experiments. The first compared segmentation results in only the A-scans where the Diver software both segmented the B-scan and did not set segmentation values to zero. That is, we chose the very best possible results that could be obtained using Bioptigen’s commercial software. In this subset, we compared both S-GTDP and Bioptigen to the same manual grader and also compared the two manual graders to estimate inter-grader variability. These comparison results can be seen in Table 4 for Group A and Table 5 for Group B. Next, we demonstrated the accuracy of S-GTDP in all cases, including those where Bioptigen’s algorithm failed, by comparing S-GTDP to a manual grader and also comparing the two manual graders to estimate inter-grader variability for the full set of 200 images. These comparison results can be seen in Table 6 for Group A and Table 7 for Group B.


Table 4. Comparison of segmentation results for the limited number of A-scans from 99 B-scans in Group A for which Bioptigen’s software provided valid results. (STD = Standard Deviation)


Table 5. Comparison of segmentation results for the limited number of A-scans from 93 B-scans in Group B for which Bioptigen’s software provided valid results. (STD = Standard Deviation)


Table 6. Comparison of all A-scans from the 100 B-scans in Group A. (STD = Standard Deviation)


Table 7. Comparison of all A-scans from the 100 B-scans in Group B. (STD = Standard Deviation)

For each B-scan, we calculated the absolute value of the average pixel difference between the locations of the automatic and manual segmentations for every retinal layer boundary. Then, we computed the average, standard deviation, and median of these absolute pixel differences for the entire data sets of Group A and Group B. These pixel values were converted to µm by multiplying the pixel values by our system’s axial resolution of 1.6 µm per pixel.
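
As a worked example of this metric, the sketch below computes the per-B-scan error for one boundary and the summary statistics, using the 1.6 µm/pixel axial conversion; variable names are illustrative.

```python
import numpy as np

AXIAL_UM_PER_PX = 1.6   # system axial sampling

def bscan_error_um(auto_rows, manual_rows):
    """Absolute value of the average pixel difference for one boundary in one B-scan, in µm."""
    diff = np.asarray(auto_rows, float) - np.asarray(manual_rows, float)
    return abs(diff.mean()) * AXIAL_UM_PER_PX

def summarize(per_bscan_errors_um):
    """Average, standard deviation, and median over the data set."""
    e = np.asarray(per_bscan_errors_um, float)
    return e.mean(), e.std(), np.median(e)
```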

To account for the fact that the retina is not present throughout the width of each B-scan, we automatically calculated the left and right sides of the retina within each B-scan and only compared the segmentation results within those boundaries. Also, to be fair to Bioptigen’s Diver software, we did not compare segmentation results within the ONH, even though our algorithm is designed to segment the hyper-reflective peak. This is because the definition of the NFL boundary in the presence of the hyaloid artery is not globally accepted.

Additionally, each manual grader exhibited a bias when tracing layer boundaries by consistently tracing above or below the boundary by a constant distance. To determine this bias, we used a training set of 20 images, 10 from each of Groups A and B, and minimized the sum of the absolute pixel difference values in each comparison scenario. To account for the bias, we shifted each layer boundary in the automatic segmentation by S-GTDP down by bias values of 0.9, 0.8, 0, 0.2, −0.1, 0.7, 0.7, 0.6, 0.4, and 0.5 pixels, respectively. We also shifted each layer boundary in the automatic segmentation by Bioptigen down by bias values of 0.5, 5.4, 11, 1.8, 8.1, 5.4, 0.5, and −1.6 pixels, respectively. Note that there are only 8 bias values for Bioptigen because it does not segment the ELM and OIS-OS boundaries. We did not correct for the bias between the two manual graders because their difference reflects the inherent subjectivity of manual markings.

The results in Tables 4 through 7 show that our automatic S-GTDP algorithm segmented retinal layer boundaries on average more closely to the more experienced manual grader as compared to a second manual grader. In the limited results for Group A (Table 4), the two manual graders differed in their layer boundary segmentation by an average of 2.19 µm, while our fully automatic algorithm differed from the more experienced manual grader by 2.15 µm. Similarly, in the limited results for Group B (Table 5), the two manual graders differed by an average of 2.60 µm, while our automatic algorithm differed from the more experienced manual grader by only 1.90 µm. In the entire results for Group A (Table 6), the two manual graders differed by an average of 2.19 µm, while our automatic algorithm differed from the more experienced manual grader by 2.17 µm. Finally, in the entire results for Group B (Table 7), the two manual graders differed by an average of 2.66 µm, while the automatic algorithm differed from the more experienced manual grader by only 1.96 µm.

5. Conclusion

The qualitative results in Fig. 9 demonstrate the robustness of our S-GTDP framework to accurately segment retinal layers of murine eyes in SD-OCT images, even in the presence of various pathological features. Quantitative results in Tables 4 through 7 show that our automatic segmentation closely matches the gold standard of manual segmentation and outperforms Bioptigen’s commercially available Diver software.

These results further demonstrate that our algorithm has an overall consistent performance, as its accuracy was not diminished in the images that categorically failed to be meaningfully segmented by the Diver software. We also show that results obtained by our algorithm match those obtained by the more experienced manual grader on average more closely than results from the second manual grader, as detailed in Tables 4 through 7, which attests to the ability of our algorithm to reduce the subjectivity inherent in manual segmentation. This is highly encouraging for reducing the time and manpower required to segment such features in preclinical studies.

The SVM algorithm in Section 3.2 utilized a linear horizontal projection to yield the desired feature vectors. This is because the mouse retina appears relatively flat in the Bioptigen system. If a non-flat retina is encountered (e.g., when utilizing other imaging systems or alternative animal species), we can easily modify the algorithm to remove its reliance on a flat retina. As described and implemented in our previous publications [4, 5], we can attain a rough estimate of the retinal curvature by fitting a convex hull to a pilot estimate of the OS-RPE. Then, the image’s A-scans are re-arranged according to this pilot estimate to flatten the retina, for which a horizontal projection is then valid.

The accuracy of our proposed layer segmentation framework has yet to be quantified for other scenarios, including the segmentation of mouse SD-OCT images with other pathologies. This can be easily achieved using approaches such as one-against-all classification, in which a separate SVM would be trained for each pathology type to classify a B-scan as belonging to that pathology class or belonging to the class of “everything else.”

The design of a fully automated layer segmentation algorithm capable of dealing with different manifestations of retinal pathologies from different diseases is a challenging problem. While many algorithms in the past few years have been developed to deal with a specific type of pathology, the users of these algorithms often must know the specific pathology in their data set and select the appropriate algorithm. Our proposed two-step approach is the simplest case of a general framework for the automated segmentation of retinal boundaries from eyes with different anatomic and pathologic features. In this framework, the first SVM based step detects the specific pathology and selects the appropriate algorithm for the data set in hand. The second step can utilize any of the layer segmentation algorithms developed in the past few years. Our future work will address the more challenging case of segmenting retinal layers in human eyes with multiple types of pathology from different diseases such as diabetic retinopathy, macular hole, and age-related macular degeneration.

Acknowledgments

This research was supported by the U.S. Army Medical Research Acquisition Activity Contract W81XWH-12-1-0397, North Carolina Biotechnology Center IDG 2012-1015, NIH P30 EY005722, EY12859, and Duke University Pratt Undergraduate Fellowship Program. The authors would like to thank Ying Hao for her excellent help in processing the retina for microscopy. We would like to thank Dr. Tatiana Rebrik and Dr. Wai T. Wong for providing the mice in Fig. 9(g) and Fig. 9(h), respectively. Dr. Izatt is a co-founder and Chief Science Advisor for Bioptigen Inc. and has corporate, equity, and intellectual property interests (including royalties) in this company. Duke University also has equity and intellectual property interests (including royalties) in Bioptigen, Inc.

References and links

1. L. R. Ferguson, J. M. Dominguez II, S. Balaiya, S. Grover, and K. V. Chalam, “Retinal Thickness Normative Data in Wild-Type Mice Using Customized Miniature SD-OCT,” PLoS ONE 8(6), e67265 (2013). [CrossRef]   [PubMed]  

2. V. J. Srinivasan, T. H. Ko, M. Wojtkowski, M. Carvalho, A. Clermont, S.-E. Bursell, Q. H. Song, J. Lem, J. S. Duker, J. S. Schuman, and J. G. Fujimoto, “Noninvasive Volumetric Imaging and Morphometry of the Rodent Retina with High-Speed, Ultrahigh-Resolution Optical Coherence Tomography,” Invest. Ophthalmol. Vis. Sci. 47(12), 5522–5528 (2006). [CrossRef]   [PubMed]  

3. O. P. Kocaoglu, S. R. Uhlhorn, E. Hernandez, R. A. Juarez, R. Will, J.-M. Parel, and F. Manns, “Simultaneous Fundus Imaging and Optical Coherence Tomography of the Mouse Retina,” Invest. Ophthalmol. Vis. Sci. 48(3), 1283–1289 (2007). [CrossRef]   [PubMed]  

4. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

5. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated Automatic Segmentation of AMD Pathology Including Drusen and Geographic Atrophy in SD-OCT Images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). [CrossRef]   [PubMed]  

6. D. C. DeBuc, “A review of algorithms for segmentation of retinal image data using optical coherence tomography,” Image Segmentation, P.-G. Ho, ed. (2011), pp. 15–54.

7. D. C. DeBuc, G. M. Somfai, S. Ranganathan, E. Tátrai, M. Ferencz, and C. A. Puliafito, “Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software,” J. Biomed. Opt. 14(6), 064023 (2009). [CrossRef]   [PubMed]  

8. G. Gregori, F. Wang, P. J. Rosenfeld, Z. Yehoshua, N. Z. Gregori, B. J. Lujan, C. A. Puliafito, and W. J. Feuer, “Spectral domain optical coherence tomography imaging of drusen in nonexudative age-related macular degeneration,” Ophthalmology 118(7), 1373–1379 (2011). [PubMed]  

9. V. Kajić, M. Esmaeelpour, C. Glittenberg, M. F. Kraus, J. Honegger, R. Othara, S. Binder, J. G. Fujimoto, and W. Drexler, “Automated three-dimensional choroidal vessel segmentation of 3D 1060 nm OCT retinal data,” Biomed. Opt. Express 4(1), 134–150 (2013). [CrossRef]   [PubMed]  

10. R. J. Zawadzki, A. R. Fuller, D. F. Wiley, B. Hamann, S. S. Choi, and J. S. Werner, “Adaptation of a support vector machine algorithm for segmentation and visualization of retinal structures in volumetric optical coherence tomography data sets,” J. Biomed. Opt. 12(4), 041206 (2007). [CrossRef]   [PubMed]  

11. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, “Macular Segmentation with Optical Coherence Tomography,” Invest. Ophthalmol. Vis. Sci. 46(6), 2012–2017 (2005). [CrossRef]   [PubMed]  

12. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). [CrossRef]   [PubMed]  

13. J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, “Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images,” Biomed. Opt. Express 4(3), 397–411 (2013). [CrossRef]   [PubMed]  

14. M. Ruggeri, H. Wehbe, S. Jiao, G. Gregori, M. E. Jockovich, A. Hackam, Y. Duan, and C. A. Puliafito, “In Vivo Three-Dimensional High-Resolution Imaging of Rodent Retina with Spectral-Domain Optical Coherence Tomography,” Invest. Ophthalmol. Vis. Sci. 48(4), 1808–1814 (2007). [CrossRef]   [PubMed]  

15. J. Molnár, D. Chetverikov, D. Cabrera DeBuc, W. Gao, and G. Somfai, “Layer extraction in rodent retinal images acquired by optical coherence tomography,” Mach. Vis. Appl. 23(6), 1129–1139 (2012). [CrossRef]  

16. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of Intra-Retinal Layers From Optical Coherence Tomography Images Using an Active Contour Approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef]   [PubMed]  

17. B. J. Antony, M. D. Abràmoff, M. M. Harper, W. Jeong, E. H. Sohn, Y. H. Kwon, R. Kardon, and M. K. Garvin, “A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes,” Biomed. Opt. Express 4(12), 2712–2728 (2013). [CrossRef]  

18. J. Y. Lee, S. J. Chiu, P. Srinivasan, J. A. Izatt, C. A. Toth, S. Farsiu, and G. J. Jaffe, “Fully Automatic Software for Quantification of Retinal Thickness and Volume in Eyes with Diabetic Macular Edema from Images Acquired by Cirrus and Spectralis Spectral Domain Optical Coherence Tomography Machines,” Invest. Ophthalmol. Vis. Sci. 54, 7595–7602 (2013). [CrossRef]   [PubMed]  

19. F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, “Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming,” Biomed. Opt. Express 2(6), 1524–1538 (2011). [CrossRef]   [PubMed]  

20. M. A. Hearst, S. T. Dumais, E. Osman, J. Platt, and B. Scholkopf, “Support vector machines,” IEEE Intell. Syst. Appl. 13, 18–28 (1998).

21. A. R. Fuller, R. J. Zawadzki, S. Choi, D. F. Wiley, J. S. Werner, and B. Hamann, “Segmentation of Three-dimensional Retinal Image Data,” IEEE Trans. Vis. Comput. Graph. 13(6), 1719–1726 (2007). [CrossRef]   [PubMed]  

22. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2(6), 1743–1756 (2011). [CrossRef]   [PubMed]  

23. Z. Burgansky-Eliash, G. Wollstein, T. Chu, J. D. Ramsey, C. Glymour, R. J. Noecker, H. Ishikawa, and J. S. Schuman, “Optical Coherence Tomography Machine Learning Classifiers for Glaucoma Detection: A Preliminary Study,” Invest. Ophthalmol. Vis. Sci. 46(11), 4147–4152 (2005). [CrossRef]   [PubMed]  

24. C. Bowd, J. Hao, I. M. Tavares, F. A. Medeiros, L. M. Zangwill, T.-W. Lee, P. A. Sample, R. N. Weinreb, and M. H. Goldbaum, “Bayesian Machine Learning Classifiers for Combining Structural and Functional Measurements to Classify Healthy and Glaucomatous Eyes,” Invest. Ophthalmol. Vis. Sci. 49(3), 945–953 (2008). [CrossRef]   [PubMed]  

25. D. Bizios, A. Heijl, and B. Bengtsson, “Trained Artificial Neural Network for Glaucoma Diagnosis Using Visual Field Data: A Comparison With Conventional Algorithms,” J. Glaucoma 16(1), 20–28 (2007). [CrossRef]   [PubMed]  

26. E. Galilea, G. Santos-García, and I. Suárez-Bárcena, “Identification of Glaucoma Stages with Artificial Neural Networks Using Retinal Nerve Fibre Layer Analysis and Visual Field Parameters,” Innovations in Hybrid Intelligent Systems, E. Corchado, J. Corchado, and A. Abraham, eds. (Springer Berlin Heidelberg, 2007), pp. 418–424.

27. D. Bizios, A. Heijl, J. L. Hougaard, and B. Bengtsson, “Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by Stratus OCT,” Acta Ophthalmol. (Copenh.) 88(1), 44–52 (2010). [CrossRef]   [PubMed]  

28. M. D. Abramoff, M. K. Garvin, and M. Sonka, “Retinal Imaging and Image Analysis,” IEEE Rev. Biomed. Eng. 3, 169–208 (2010). [CrossRef]  

29. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]   [PubMed]  

30. M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3(3), 572–589 (2012). [CrossRef]   [PubMed]  

31. L. Fang, S. Li, R. P. McNabb, Q. Nie, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation,” IEEE Trans. Med. Imaging 32(11), 2034–2049 (2013). [CrossRef]   [PubMed]  

32. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,” IEEE Trans. Image Process. 16(8), 2080–2095 (2007). [CrossRef]   [PubMed]  

33. R. Estrada, C. Tomasi, M. T. Cabrera, D. K. Wallace, S. F. Freedman, and S. Farsiu, “Exploratory Dijkstra forest based automatic vessel segmentation: applications in video indirect ophthalmoscopy (VIO),” Biomed. Opt. Express 3(2), 327–339 (2012). [CrossRef]   [PubMed]  

34. R. Estrada, C. Tomasi, M. T. Cabrera, D. K. Wallace, S. F. Freedman, and S. Farsiu, “Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing,” Biomed. Opt. Express 2(10), 2871–2887 (2011). [CrossRef]   [PubMed]  

35. J. Lem, N. V. Krasnoperova, P. D. Calvert, B. Kosaras, D. A. Cameron, M. Nicolò, C. L. Makino, and R. L. Sidman, “Morphological, physiological, and biochemical changes in rhodopsin knockout mice,” Proc. Natl. Acad. Sci. U.S.A. 96(2), 736–741 (1999). [CrossRef]   [PubMed]  

36. E. S. Lobanova, S. Finkelstein, N. P. Skiba, and V. Y. Arshavsky, “Proteasome overload is a common stress factor in multiple forms of inherited retinal degeneration,” Proc. Natl. Acad. Sci. U.S.A. 110(24), 9986–9991 (2013). [CrossRef]   [PubMed]  

37. R. Barhoum, G. Martínez-Navarrete, S. Corrochano, F. Germain, L. Fernandez-Sanchez, E. J. de la Rosa, P. de la Villa, and N. Cuenca, “Functional and structural modifications during retinal degeneration in the rd10 mouse,” Neuroscience 155(3), 698–713 (2008). [CrossRef]   [PubMed]  
