
Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

Open Access

Abstract

This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.

©2012 Optical Society of America

1. Introduction

Fast, accurate, and objective detection and quantification of imaging biomarkers are crucial for the study and diagnosis of ophthalmic diseases. Due to the time-consuming and subjective nature of manual segmentation, considerable work has been done in recent years to automate the segmentation of ocular structures. For example, many layer segmentation algorithms have been developed to delineate the retinal [1–6], choroid-scleral [7,8], and corneal layers [9,10] on optical coherence tomography (OCT) images [11].

However, in addition to layered structures, there is also a need to automatically segment and quantify closed-contour features in ophthalmic images. Examples of closed-contour anatomical and pathological structures include retinal pigmented epithelial (RPE) cells in confocal fluorescence microscopy images of flat-mounted retina [12], intra-retinal cysts in OCT images of patients with diabetic macular edema (DME) [13,14], and photoreceptors in adaptive optics scanning ophthalmoscope (AOSO) images of the retina [15–18]. Previous methods have been presented to segment closed-contour features, including fluid or detachments in OCT images of the retina [19–21], drusen in fundus photographs [22,23] or in OCT images [24], geographic atrophy in fundus auto-fluorescence images [25], vessel lumens in intravascular OCT images [26], corneal cells [27–31], and photoreceptors in AOSO images [32–37]. However, these segmentation algorithms were often developed for specific ophthalmic applications, rather than as a general framework applicable to both layer and closed-contour features. Furthermore, in some cases such as photoreceptor or corneal cell identification, cell counts or densities were achieved but the actual structure size and shape were not necessarily determined [27,29,31–35,37].

To segment circular biological structures, transformation of images into the polar domain has proven to be an effective tool within the medical community [38–41]. This is due to the observation that circles in the Cartesian domain are represented as lines in the polar domain. Using this transform, the mathematical operations and manipulations required to segment the image are greatly simplified since these operations no longer need to be performed radially. However, for non-circular, closed-contour structures, the polar transform is no longer adequate since oblong or convoluted features deviate from a flat path.

In this paper, we present a general framework to segment arbitrarily shaped, closed-contour features. We previously developed a framework for layer segmentation based on graph theory and dynamic programming (GTDP) [6] to segment retinal and corneal layers on spectral domain optical coherence tomography (SDOCT) images [5,6,10]. Section 2 of this paper extends the application of our layer segmentation technique to include closed-contour segmentation. The algorithm is based on the projection of closed-contour features in the Cartesian domain into lines in a virtual domain, which we term the quasi-polar domain. In the quasi-polar domain, features of interest are then segmented as layers using our GTDP technique. In Section 3, we apply the framework to segment RPE cells in confocal microscopy images of an age-related macular degeneration (AMD) mouse model, and in Section 4.1 we evaluate the algorithm’s effectiveness in detail. Lastly, to highlight the flexibility of the presented framework, Section 4.2 illustrates the segmentation of closed-contour features in other ophthalmic applications, including cysts in SDOCT images and cone photoreceptors in AOSO images.

2. A generalized closed-contour segmentation framework using GTDP

This section introduces a generalized framework for closed-contour feature segmentation. The method is an extension of our previously presented technique for layer segmentation via GTDP [6]. The core steps are outlined in Fig. 1 and discussed in detail in the following subsections. Section 3 then provides the specific algorithmic steps required to implement this framework for RPE cell segmentation in confocal fluorescence microscopy images.

Fig. 1 Schematic for segmenting closed-contour features via GTDP.

2.1. Pilot structure estimation

The first step is to attain pilot estimates of all structures to segment. These estimates should contain information about both the location and shape of the structures. Structure localization prevents the algorithm from confusing the target structure with similarly-shaped extraneous features, while shape estimation results in a more accurate quasi-polar transform, as will be discussed in Section 2.2.

For a grayscale image $I_c^{\mathrm{all}} \in \mathbb{R}^{F \times G}$ in the Cartesian domain, the pilot estimates can be represented as a binary image $B_c^{\mathrm{all}}$, where

$$B_c^{\mathrm{all}}(z) = \begin{cases} 1, & \text{if } z \in \text{pilot estimate} \\ 0, & \text{otherwise,} \end{cases} \quad \forall z \qquad (1)$$

and $z = (i, j)$ denotes the pixel at the $i$th row and $j$th column of the image. Many basic techniques such as thresholding, contours, local minima, or the multiscale generalized likelihood ratio test [42] can be used to attain this estimate, and Section 3.2 describes one such method for pilot structure estimation. To isolate a single structure, $I_c^{\mathrm{all}}$ and $B_c^{\mathrm{all}}$ can be cropped to smaller image blocks ($I_c$ and $B_c$), both of size $M \times N$, where $M < F$ and $N < G$.
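As a minimal sketch of this step (assuming MATLAB with the Image Processing Toolbox, a hypothetical input file, and simple Otsu thresholding as the pilot estimator; the RPE-specific estimator used in this paper appears in Section 3.2), the pilot-estimate image and a cropped block around one structure could be obtained as follows:

% Sketch of Section 2.1. Assumption: Otsu thresholding as the pilot estimator;
% the paper's RPE-specific estimator is described in Section 3.2.
Icall = mat2gray(imread('cells.tif'));         % hypothetical grayscale image, values in [0,1]
Bcall = imbinarize(Icall, graythresh(Icall));  % binary pilot estimates, Eq. (1)

% Crop M-by-N blocks (Ic, Bc) centered on one structure's centroid
% (boundary checks near the image border are omitted for brevity).
M = 201; N = 201;
stats = regionprops(Bcall, 'Centroid');
zref  = round(stats(1).Centroid);              % [j i], i.e. (column, row)
rows  = zref(2)-(M-1)/2 : zref(2)+(M-1)/2;
cols  = zref(1)-(N-1)/2 : zref(1)+(N-1)/2;
Ic = Icall(rows, cols);
Bc = Bcall(rows, cols);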

2.2. Image transformation into the quasi-polar domain

This section explains the quasi-polar transform in depth. In its simplest form, the quasi-polar transform can be described by a polar transform followed by a flattening step. Figure 2 shows an illustration, where a circle, an oval, and an arbitrary shape in the Cartesian domain (Figs. 2(a-c)) are first transformed into the polar domain (Figs. 2(d-f)). Information about the shape of the pilot estimate is then used to flatten the structure into a line in the quasi-polar domain (Fig. 2(g)). Once transformed into the quasi-polar domain, the structure can be segmented as if it were a layer using the GTDP technique.

Fig. 2 Schematic of the quasi-polar transform. (a-c) A circle, oval, and arbitrary shape in the Cartesian domain. (d-f) Transformation of (a-c) into the polar domain based on the centroid (yellow asterisks in a-c). The centroid in (a-c) becomes a line (yellow) in the polar domain. (g) Flattening of (d-f) into a line in the quasi-polar domain. The result is a transformation of any closed-contour shape in the Cartesian domain into a line in the quasi-polar domain.

It can also be observed from Fig. 2 that a simple polar transform is sufficient to map closed-contour shapes (Figs. 2(a-c)) into layered structures (Figs. 2(d-f)). Motivation for a further transformation into the quasi-polar domain (Fig. 2(g)) stems from the tendency for shorter geometric paths to be found when using the GTDP technique, especially in low signal-to-noise ratio (SNR) imaging scenarios. This is due to the observation that a cut (segmentation) with fewer traversed nodes often yields a lower total weight. As a result, oblong features are disadvantaged since their paths in the polar domain are longer than the shortest geometric path (Figs. 2(e, f)). By performing an extra flattening step, the path of the oblong structure now reflects the shortest geometric distance.

The quasi-polar transform is implemented using the following three steps:

  • 1. Map pixels from the Cartesian domain into the polar domain based on their distance and angle from a reference pixel and axis, respectively. The reference pixel $z_{\mathrm{ref}}$ can be any single pixel where $B_c(z_{\mathrm{ref}}) = 1$; ideally, however, $z_{\mathrm{ref}}$ lies at the center of the closed-contour feature to facilitate its flatness in the polar domain. The centroid of the pilot estimate is therefore a good choice for $z_{\mathrm{ref}}$, where

    $$z_{\mathrm{ref}} = (i_{\mathrm{ref}}, j_{\mathrm{ref}}) = \frac{1}{K} \sum_{k=1}^{K} z_k \qquad (2)$$

    for the set of $K$ pixels $\{z_k\}_{k=1}^{K}$ where $B_c(z_k) = 1$. Example binary images ($B_c$) are shown in Figs. 2(a-c) with $z_{\mathrm{ref}}$ marked as a yellow asterisk and the reference axis defined as the $j$-axis. Using the Cartesian binary image $B_c$ and Eq. (3), generate the polar-transformed binary image $B_p$ (Figs. 2(d-f)), where $B_p(r, \theta)$ denotes the pixel in $B_p$ with radius $r$ and angle $\theta$ from the reference pixel and axis, respectively. Then, assuming a unit step size, $B_p \in \mathbb{R}^{R \times \Theta}$, where $R = \max(r)$ and $\Theta = 361$,

    $$B_p(r, \theta) = B_c(i, j), \quad 0 < r \le \max\!\left(\sqrt{(i - i_{\mathrm{ref}})^2 + (j - j_{\mathrm{ref}})^2}\right) \;\forall i, j, \quad 0° < \theta \le 360°, \qquad (3)$$
    $$i = r \sin(\theta) + i_{\mathrm{ref}}, \qquad j = r \cos(\theta) + j_{\mathrm{ref}}.$$

  • 2. Find a function $r = f(\theta)$ that best estimates the boundary between the background and the pilot estimate in $B_p$. This can be achieved by taking the vertical gradient of the image, or by using the GTDP layer segmentation technique described in Section 2.3 with edge weights determined by the vertical gradient of $B_p$.
  • 3. Generate the quasi-polar binary image $B_q \in \mathbb{R}^{R \times \Theta}$ (Fig. 2(g)), where for all columns $\{b_{pk}\}_{k=1}^{\Theta}$ in $B_p$, the $k$th column $b_{qk}$ in $B_q$ is determined by

    $$b_{qk} = C\, b_{pk}, \quad \text{where } C \in \mathbb{R}^{\Theta \times \Theta} \text{ and } C(i, j) = \begin{cases} 1, & \text{if } i = \left(\dfrac{1}{\Theta} \displaystyle\sum_{j=1}^{\Theta} f(j)\right) - f(j) + j \\ 0, & \text{otherwise.} \end{cases} \qquad (4)$$

    Use the exact transformation mapping $B_c \to B_q$ (Steps 1–2 and Eq. (4)) to then transform the grayscale image $I_c$ into its quasi-polar equivalent, $I_q$.

Since Step 3 flattens the pilot estimate rather than the structure itself, the structure is in general not perfectly flat in $I_q$. This also implies that a pilot estimate whose shape is closer to that of the actual structure results in a flatter structure in the quasi-polar domain.
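The following sketch illustrates Steps 1–3 under simplifying assumptions: nearest-neighbor sampling at unit radial steps and 1° angular steps, and a boundary function f(θ) estimated directly from the polar pilot estimate rather than from a GTDP cut. Variable names follow the notation above; the image block Ic and pilot estimate Bc are assumed to come from Section 2.1.

% Minimal sketch of the quasi-polar transform (Section 2.2). Assumptions:
% nearest-neighbor sampling; f(theta) taken from the polar pilot estimate
% instead of a GTDP cut; Ic, Bc are the image block and its pilot estimate.
[rowIdx, colIdx] = find(Bc);
iref = mean(rowIdx);  jref = mean(colIdx);              % centroid, Eq. (2)

thetaDeg = [1:360 1];                                   % Theta = 361, including wrap-around
R = ceil(hypot(size(Ic,1), size(Ic,2)));
[TH, RR] = meshgrid(deg2rad(thetaDeg), 1:R);
Isamp = RR.*sin(TH) + iref;                             % Eq. (3): i = r*sin(theta) + iref
Jsamp = RR.*cos(TH) + jref;                             %          j = r*cos(theta) + jref
Ip = interp2(double(Ic), Jsamp, Isamp, 'nearest', 0);   % polar-transformed image
Bp = interp2(double(Bc), Jsamp, Isamp, 'nearest', 0) > 0.5;

% Step 2: approximate boundary r = f(theta) of the pilot estimate, per column.
f = sum(Bp, 1);

% Step 3: flatten each column so the pilot boundary moves to its mean radius (Eq. (4)).
Iq = zeros(size(Ip));
for k = 1:size(Ip, 2)
    Iq(:, k) = circshift(Ip(:, k), round(mean(f)) - round(f(k)));
end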

2.3. Layer segmentation using GTDP

Once the image is transformed into the quasi-polar domain, it can be segmented using our GTDP-based automatic layer segmentation algorithm [6]. In summary, this technique represents the image as a graph of nodes (i.e., image pixels) connected by edges between adjacent pixels. We assign weights to each of the edges based on a priori information about the structure so that paths through the structure boundaries are preferred. Our GTDP technique includes automatic endpoint initialization, in which an additional column with minimal vertical edge weights is added to either side of the image. As we detailed in [6], the start and end nodes can then be arbitrarily defined anywhere within these two columns. Finally, any optimization method that finds the minimum weighted path from the start node to the end node (e.g., Dijkstra’s method [43]) can be used to segment the structure.
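A compact sketch of this layer cut is shown below. It assumes 8-connected rightward and vertical edges, intensity-based weights that favor bright boundaries (the layer-specific gradient weights of [6] would be substituted in practice), and MATLAB's digraph/shortestpath in place of a hand-written Dijkstra implementation:

% Minimal GTDP layer-cut sketch (Section 2.3). Assumptions: Iq has values in [0,1]
% with a strictly positive interior; weights favor bright boundaries;
% digraph/shortestpath stands in for Dijkstra [43]; the zero-valued padded
% columns emulate the automatic endpoint initialization.
wmin = 1e-5;
Ipad = [zeros(size(Iq,1),1), Iq, zeros(size(Iq,1),1)];
[m, n] = size(Ipad);
[ii, jj] = ndgrid(1:m, 1:n);

src = []; dst = []; w = [];
for d = [1 0; -1 0; 0 1; 1 1; -1 1]'               % down, up, right, two diagonals
    ok = ii+d(1) >= 1 & ii+d(1) <= m & jj+d(2) >= 1 & jj+d(2) <= n;
    a  = sub2ind([m n], ii(ok), jj(ok));
    b  = sub2ind([m n], ii(ok)+d(1), jj(ok)+d(2));
    src = [src; a];  dst = [dst; b];
    w   = [w; 2 - Ipad(a) - Ipad(b) + wmin];        % low cost where both nodes are bright
end
w(Ipad(src) == 0 & Ipad(dst) == 0) = wmin;          % minimal weights inside the padded columns

G = digraph(src, dst, w);
startNode = sub2ind([m n], 1, 1);                   % arbitrary node in the left padded column
endNode   = sub2ind([m n], m, n);                   % arbitrary node in the right padded column
cutNodes  = shortestpath(G, startNode, endNode);
[cutRow, cutCol] = ind2sub([m n], cutNodes);        % the layer cut (including padded columns)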

2.4. Transformation back into the Cartesian domain

Once the structure is segmented in the quasi-polar domain, the segmentation information is transformed back into the Cartesian domain by applying the inverse of Steps 1 and 3 from Section 2.2.

2.5. Segmentation of subsequent structures

After the first structure is segmented, Steps 1-3 in Section 2.2 are repeated for each subsequent structure (e.g. for each individual cell or cyst). We can achieve this either by sequential segmentation of individual features, or by parallelizing the segmentation to make this process more efficient.
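As a brief sketch of the parallel option (assuming a hypothetical helper segmentOneCell that wraps Sections 2.2–2.4 for a single pilot estimate, and an available Parallel Computing Toolbox pool):

% Sketch of parallel per-structure segmentation (Section 2.5). Assumptions:
% segmentOneCell is a hypothetical wrapper around Sections 2.2-2.4; sequential
% refinements that depend on neighboring cells (Section 3.6) are omitted here.
labels   = bwlabel(Bcall);                  % one label per pilot estimate
numCells = max(labels(:));
contours = cell(numCells, 1);
parfor k = 1:numCells
    contours{k} = segmentOneCell(Icall, labels == k);   % closed contour in Cartesian coordinates
end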

3. Implementation for RPE cell segmentation

AMD is the leading cause of irreversible blindness in Americans older than 60 years [44]. In a recent study, Ding et al. showed that anti-amyloid therapy protects against RPE damage and vision loss in an APOE4 mouse model of AMD [12]. In [12], the average size and density of the cells were used as biomarkers for disease progression. To attain these quantitative metrics, 22,495 cells were analyzed using customized semi-automatic segmentation software. To achieve a drastically faster segmentation rate, this section describes an implementation of the generalized closed-contour segmentation algorithm from Section 2 to fully automatically segment RPE cells in confocal microscopy images of flat-mounted retina of normal and AMD mouse models.

3.1. Data set

In the following, we describe our fully automatic segmentation method as tailored to the data set from Ding et al.’s report published in PNAS [12]. However, the adaptation of our method to any other data set is straightforward. The data set in [12] included 23 confocal fluorescence microscopy images $\{(I_c^{\mathrm{all}})_k\}_{k=1}^{23} \in \mathbb{R}^{1024 \times 1024}$ (0.64 × 0.64 mm²) of central RPE flat mounts from 17 mice, with approximately 1000 cells per image. RPE cells were stained with a rabbit antibody against ZO-1 (40–2200, 1:100; Invitrogen) and imaged on a Nikon Eclipse C1 microscope.

3.2. Pilot estimation of cell morphology

We first normalized the intensities of the image ($I_c^{\mathrm{all}}$) from [12] such that $0 \le I_c^{\mathrm{all}} \le 1$. We then smoothed $I_c^{\mathrm{all}}$ using an 11 × 11 pixel Gaussian filter with standard deviation $\sigma = 25$ pixels. Next, we computed the extended-minima transform [45], which found all minima and suppressed those with a depth less than 0.05. To generate the pilot estimate $B_c^{\mathrm{all}}$ for all cells, we set $B_c^{\mathrm{all}}(z) = 1$ for all points $z$ lying within the convex hull [46] of each minimum. To localize the desired cell and its corresponding pilot estimate, we first cropped $I_c^{\mathrm{all}}$ and $B_c^{\mathrm{all}}$ to 201 × 201 pixel image blocks corresponding to the largest conceivable cell size in our model. Figure 3(a) shows $I_c$, the cropped cell image, and Fig. 3(b) shows $B_c$, the cropped pilot estimate image. Both images were centered at the $z_{\mathrm{ref}}$ computed for the desired pilot estimate. Since $B_c$ contained other cell estimates, we then set all pixels outside the desired estimate in $B_c$ to zero.
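A sketch of this pilot estimation step, assuming the MATLAB Image Processing Toolbox and using bwconvhull with the 'objects' option to stand in for the per-minimum convex hull:

% Sketch of the RPE pilot estimation (Section 3.2). Assumptions: Image Processing
% Toolbox; bwconvhull('objects') implements the per-minimum convex hull [46];
% crop-boundary checks near the image border are omitted.
Icall   = mat2gray(Icall);                          % normalize intensities to [0,1]
h       = fspecial('gaussian', [11 11], 25);        % 11x11 Gaussian, sigma = 25 pixels
Ismooth = imfilter(Icall, h, 'replicate');
minima  = imextendedmin(Ismooth, 0.05);             % extended-minima transform [45]
Bcall   = bwconvhull(minima, 'objects');            % convex hull of each minimum

% Crop 201x201 blocks (Ic, Bc) centered on the desired estimate's centroid,
% then keep only that estimate in Bc.
stats = regionprops(minima, 'Centroid');
c     = round(stats(1).Centroid);                   % [j i] of the first estimate
rows  = c(2)-100 : c(2)+100;
cols  = c(1)-100 : c(1)+100;
Ic = Icall(rows, cols);
Bc = bwselect(Bcall(rows, cols), 101, 101);         % estimate containing the block center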

Fig. 3 RPE cell segmentation using the quasi-polar transform. (a, b) Image containing the cell to segment (a), and the pilot estimate (b) with its centroid (blue asterisk). (c) Polar-transformation of (b) segmented using GTDP to generate r=f(θ) (green). (d) Polar-transformation of (a) using the centroid from (b) as the reference pixel. The black regions are invalid points that lie outside the image in the Cartesian domain. (e, f) Images (c) and (d) flattened based on r=f(θ), respectively. (g) Transformation of r'=f(θ') from (f) back into the Cartesian domain.

3.3. Image transformation into the quasi-polar domain

We next transformed $I_c \to I_p \to I_q$ (Fig. 3(a, d, f)) by following Steps 1-3 in Section 2.2 with the following implementation details:

  • ▪ After generating $B_p$ in Step 1, we found the logical OR combination $b_{\mathrm{OR}} = b_{p1} \,|\, b_{p\Theta}$ and set $b_{p1} = b_{p\Theta} = b_{\mathrm{OR}}$, since $\theta_1 = \theta_\Theta = 0° = 360°$.
  • ▪ For Step 2, we first segmented the boundary in $B_p$ using the GTDP layer segmentation technique. We then smoothed the segmentation using a moving average filter with a span of 1%. This smoothed cut was set as the function $r = f(\theta)$ (Fig. 3(c)), which provided the shape information necessary to flatten the image.
  • ▪ To generate the edge weights in Section 3.4, we created a threshold image $T_c$, where

    $$T_c(z) = \begin{cases} S_c(z), & \text{if } S_c(z) > \mathrm{mean}(S_c) \\ 0, & \text{otherwise,} \end{cases} \quad \forall z \qquad (5)$$

    and $S_c = \mathrm{wiener2}(I_c)$ (MATLAB notation; The MathWorks, Natick, MA) is the image denoised with a Wiener filter. We then set all connected components in $T_c$ smaller than 20 pixels to zero, and for all $z$ where $B_c^{\mathrm{all}}(z) = 1$ and $B_c(z) = 0$, we set $T_c(z) = 1$. Finally, we transformed $T_c$ into its quasi-polar equivalent, $T_q$.

While Fig. 3(f) shows the resulting quasi-polar transformed cell image Iq, Fig. 3(e) shows the quasi-polar transformed pilot estimate. As can be seen, the transform yielded a fully flattened pilot estimate and an approximately flattened cell structure.
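A sketch of the threshold-image construction described in the last bullet above, assuming wiener2's default 3 × 3 window (the window size is not stated here) and a hypothetical BcallCrop holding the cropped all-cell pilot estimate:

% Sketch of the threshold image Tc (Section 3.3, Eq. (5)). Assumptions: wiener2
% with its default 3x3 window; BcallCrop is the 201x201 crop of Bcall, and Bc
% holds only the desired cell's estimate, as in Section 3.2.
Sc = wiener2(Ic);                            % Wiener-denoised cell block
Tc = Sc .* (Sc > mean(Sc(:)));               % Eq. (5)
Tc = Tc .* bwareaopen(Tc > 0, 20);           % zero connected components smaller than 20 pixels
Tc(BcallCrop == 1 & Bc == 0) = 1;            % mark other cells' pilot estimates
% Tc is then mapped through the same quasi-polar transform as Ic, yielding Tq.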

3.4. Cell segmentation using GTDP

Once the quasi-polar transformed image Iq was generated, we used our GTDP layer segmentation framework to segment the cell boundary. To do so, we defined the following three cost functions:

  • (1) Penalize bright nodes further from the centroid to avoid segmenting multiple cells at once (Tq1 in Eq. (7)).
  • (2) Penalize nodes that include the pilot estimates of other cells to also avoid segmenting multiple cells at once (Tq2 in Eq. (7)).
  • (3) Penalize nodes that fall below the threshold criteria for Tq (Tq3 in Eq. (7)).

We implemented these three cost functions using the following MATLAB (The MathWorks, Natick, MA) notation, where tq and tqx are corresponding columns in Tq and Tqx, respectively, such that

tq1 = bwlabel(tq > 0),   tq2 = bwlabel(~tq | tq == 1),   tq3 = 2*bwlabel(tq ~= 1) - 2.    (6)

We then combined these cost functions such that

Tq1(Tq1 == 0) = Tq2(Tq2 > 0) + 1.5,   Tq3(Tq3 < 0) = max(Tq3) + 2,   Tq = Tq1 + Tq3.    (7)
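A per-column sketch of Eqs. (6) and (7), assuming Tq is the quasi-polar threshold image from Section 3.3 and that each column is labeled independently with bwlabel; the combined cost is stored in a separate variable TqCost here to avoid reusing the symbol Tq:

% Sketch of the per-column cost functions (Section 3.4, Eqs. (6)-(7)). Assumptions:
% Tq is the quasi-polar threshold image; the masked assignment of Eq. (7) is
% reproduced as written and assumes the two index sets have matching sizes.
Tq1 = zeros(size(Tq));  Tq2 = Tq1;  Tq3 = Tq1;
for k = 1:size(Tq, 2)
    tq = Tq(:, k);
    Tq1(:, k) = bwlabel(tq > 0);              % bright runs, numbered outward from the centroid
    Tq2(:, k) = bwlabel(~tq | tq == 1);       % background and other cells' pilot estimates
    Tq3(:, k) = 2*bwlabel(tq ~= 1) - 2;       % penalty for sub-threshold nodes
end
Tq1(Tq1 == 0) = Tq2(Tq2 > 0) + 1.5;           % Eq. (7)
Tq3(Tq3 < 0)  = max(Tq3(:)) + 2;
TqCost = Tq1 + Tq3;                           % combined node cost used in Eq. (8)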

Finally, we calculated the edge weights using

$$w_{ab} = \mathrm{normalize}\!\left(-I_q(z_a) - I_q(z_b),\, 0,\, 0.25\right) + \mathrm{normalize}\!\left(T_q(z_a) + T_q(z_b),\, 1,\, 2\right) + w_{\min}, \qquad (8)$$

where $w_{ab}$ is the weight assigned to the edge connecting pixels $z_a$ and $z_b$, and $w_{\min} = 0.00001$. The notation $\mathrm{normalize}(x, y, z)$ indicates a normalization of the values $x$ to range from $y$ to $z$.
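A sketch of the weight assembly, with a hypothetical helper normalizeRange playing the role of normalize(x, y, z) above, linear-index vectors za and zb listing the two endpoints of every edge, and the intensity term written so that brighter node pairs (the cell border) receive lower weights:

% Sketch of the edge-weight computation (Section 3.4, Eq. (8)). Assumptions:
% za, zb are linear indices of the nodes joined by each edge; TqCost is the
% combined cost from Eq. (7); normalizeRange is a hypothetical helper.
normalizeRange = @(x, lo, hi) lo + (hi - lo) .* (x - min(x)) ./ (max(x) - min(x));
wmin = 0.00001;
wab  = normalizeRange(-Iq(za) - Iq(zb), 0, 0.25) ...
     + normalizeRange(TqCost(za) + TqCost(zb), 1, 2) + wmin;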

Prior to finding the shortest path, we discarded all edges from the graph containing nodes in previously segmented cells to avoid segmenting them again. We also excluded the edges that allowed those cells to be included within a newly segmented cell. We then found the minimum-weighted path $p$ using Dijkstra’s method, where $r' = p(\theta')$, and $r'$ and $\theta'$ represent the row and column axes of $I_q$, respectively. Since $\theta'_1 = \theta'_\Theta$ in the quasi-polar domain, $r'_1$ must equal $r'_\Theta$; in practice, however, this is not guaranteed when using automatic endpoint initialization (Section 2.3). Therefore, if $r'_1 \ne r'_\Theta$, we set both endpoints equal to $(r'_1, \theta'_1)$ and found the corresponding minimum-weighted path $p_1$. We then set both endpoints equal to $(r'_\Theta, \theta'_\Theta)$ and found the path $p_\Theta$. The ultimately selected path $p$ (Fig. 3(f), magenta) was the one with the greater overall brightness, where

$$p = \begin{cases} p_1, & \text{if } \displaystyle\sum_{i=1}^{\Theta} I_q\!\left(p_1(\theta'_i), \theta'_i\right) \ge \sum_{i=1}^{\Theta} I_q\!\left(p_\Theta(\theta'_i), \theta'_i\right) \\ p_\Theta, & \text{otherwise.} \end{cases} \qquad (9)$$
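A sketch of this endpoint-consistency check, assuming cutP1 and cutPTheta are hypothetical vectors holding the two candidate cuts (segmented row index per column of Iq) obtained with the endpoints pinned as described above:

% Sketch of Eq. (9) (Section 3.4). Assumptions: cutP1 and cutPTheta are
% hypothetical row-per-column vectors for the two candidate paths p1 and pTheta.
cols = 1:size(Iq, 2);
sum1 = sum(Iq(sub2ind(size(Iq), cutP1(:).',     cols)));
sum2 = sum(Iq(sub2ind(size(Iq), cutPTheta(:).', cols)));
if sum1 >= sum2
    cut = cutP1;          % the path with the greater overall brightness is kept
else
    cut = cutPTheta;
end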

3.5. Cell transformation back into the Cartesian domain

We transformed the segmentation $r' = p(\theta')$ back into the Cartesian domain (Fig. 3(g)) by following the method in Section 2.4. However, since the cell was segmented using only a cropped section of the image, we shifted the coordinates of the segmented layer back to their appropriate location in the original 1024 × 1024 image. The result was a binary edge image $E_c \in \mathbb{R}^{1024 \times 1024}$, where all pixels $\{z_m\}_{m=1}^{\Theta}$ coinciding with the segmentation were set such that $E_c(z_m) = 1$. For all pixels $z_m$ and $z_{m+1}$ that were unconnected, we used linear interpolation to connect them and create a closed contour.
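A sketch of this back-transform and contour rasterization, assuming cut(k) is the segmented radius at angle θ(k) after undoing the flattening shift of Eq. (4), and that (iref, jref) plus the crop offsets rowOff and colOff (hypothetical names) are carried over from Sections 3.2–3.3; index clamping at the image border is omitted:

% Sketch of the back-transform and rasterization (Section 3.5). Assumptions:
% cut is the segmented radius per angle after undoing the flattening of Eq. (4);
% rowOff/colOff shift the cropped block back into the 1024x1024 image.
theta = deg2rad(1:361);
iPts  = round(cut(:).' .* sin(theta) + iref + rowOff);   % Cartesian rows
jPts  = round(cut(:).' .* cos(theta) + jref + colOff);   % Cartesian columns

Ec = false(1024, 1024);
for m = 1:numel(iPts)-1
    nSteps = max(abs(iPts(m+1)-iPts(m)), abs(jPts(m+1)-jPts(m))) + 1;
    ii = round(linspace(iPts(m), iPts(m+1), nSteps));     % linear interpolation
    jj = round(linspace(jPts(m), jPts(m+1), nSteps));     % between unconnected pixels
    Ec(sub2ind(size(Ec), ii, jj)) = true;
end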

3.6. Iteration for all cells

To segment all other cells, we iterated the steps outlined in Sections 3.3 through 3.5 for all pilot estimates of the cells in the image. To prevent gaps from occurring between adjacent cells, we created a preference for already-segmented cell borders by brightening their intensity values, setting $I_c(z) = \max(I_c(z),\, \mathrm{mean}(I_c) + 1.5\,\mathrm{std}(I_c))$ and $T_c(z) = 1$ for all $z$ where $E_c(z) = 1$.

3.7. Refinement

The gold standard segmentation from [12] used in our validation study (Section 4.1) separated cells by drawing a line through the middle of the cell border with a single pixel thickness. Since the presented algorithm did not necessarily yield a single-pixel cell border, this section describes the post-processing steps required to match our automatic segmentation with the gold standard protocol, and to remove any erroneous cell segmentations.

First, we removed all cells smaller than 50 pixels by setting $E_c(z) = 1$ for all $z$ lying within the cell. We then generated the skeleton [47] of $E_c$ and removed all spurs [47]. Next, we examined individual cell edges by setting the branch points in $E_c$ equal to zero. We defined the minimum edge threshold to be $t = \mathrm{mean}(I_c(Z_E)) - 1.5\,\mathrm{std}(I_c(Z_E))$, where $Z_E$ was the set of all pixels where $E_c(z) = 1$. If all pixels for a given cell edge fell below $t$, or if 80% of the pixels fell below $t$ and there were at least 15 such pixels, then the cell edge was removed from $E_c$. Once we removed all invalid cell edges, we restored all branch points on $E_c$ and removed any newly created spurs or pixels no longer connected to a cell edge.
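A sketch of these refinement steps, assuming MATLAB's bwmorph operations for skeletonization, spur removal, and branch-point detection, and bwareaopen to identify cells smaller than 50 pixels:

% Sketch of the refinement step (Section 3.7). Assumptions: Ec is the binary edge
% image from Section 3.5 and Ic the normalized original image; bwmorph/bwareaopen
% stand in for the morphological operations referenced above [47].
Ec = Ec | ~bwareaopen(~Ec, 50);                 % fill cells smaller than 50 pixels
Ec = bwmorph(Ec, 'skel', Inf);                  % single-pixel skeleton
Ec = bwmorph(Ec, 'spur', Inf);                  % remove spurs

branchPts  = bwmorph(Ec, 'branchpoints');
edgeLabels = bwlabel(Ec & ~branchPts);          % individual cell edges
t = mean(Ic(Ec)) - 1.5*std(Ic(Ec));             % minimum edge threshold
for k = 1:max(edgeLabels(:))
    vals = Ic(edgeLabels == k);
    nLow = nnz(vals < t);
    if nLow == numel(vals) || (nLow >= 15 && nLow >= 0.8*numel(vals))
        Ec(edgeLabels == k) = false;            % remove invalid cell edge
    end
end
Ec = Ec | branchPts;                            % restore branch points
Ec = bwmorph(Ec, 'spur', Inf);                  % clean newly created spurs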

4. Experimental results

This section shows the closed-contour segmentation results obtained using the procedures described in Section 3. Section 4.1 evaluates in detail the RPE cell segmentation algorithm described in Section 3, while Section 4.2 briefly illustrates the segmentation of closed-contour features for other ophthalmic applications.

4.1. RPE cell segmentation results

To validate our fully automatic segmentation results against the gold standard, we compared the cell count and mean cell size for each of the 23 images in [12]. To avoid biasing our results, these images were not used during the algorithm development stage. Instead, a single confocal fluorescence microscopy image of an AMD mouse model not included in the validation data set was used to train the algorithm.

We did not alter the data from Ding’s study [12] in any way, with one slight exception. In [12], closed-contour features smaller than 100 pixels were absorbed into the neighboring cells, since such regions were likely a result of incorrect manual segmentation. In our replication of the results from [12], we instead discarded these regions from our quantitative comparison since such regions were negligible. In [12], regions of the image considered invalid due to imaging errors, flat-mount preparation errors, or cells being cut off by image borders were marked separately and discarded from the analysis. In this study, we used the markings from [12] to ignore exactly the same invalid regions in our analysis. Note that the above two types of outliers occupied a very small percentage of the total segmented area.

Quantitative results for the RPE cell segmentation algorithm described in Section 3 are shown in Table 1. The average error between automatic segmentation and the gold standard was 1.49 ± 1.44% for the cell count and 1.32 ± 1.53% for the mean cell area. The median error was found to be 0.97% and 0.71% for the detected number of cells and mean cell area, respectively. Qualitative results for this validation study are shown in Fig. 4, where Fig. 4(a) (corresponding to Image 16 in Table 1) is the automatically segmented confocal fluorescence image of an AMD flat-mounted mouse retina, and Figs. 4(b-g) are zoomed-in cell segmentation results. Figure 5 shows two confocal images of RPE cells (Figs. 5(a, d)) along with their gold standard (Figs. 5(b, e)) and automatic (Figs. 5(c, f)) segmentation results. The image shown in Figs. 5(a-c) corresponds to Image 9 in Table 1 with approximately median error, and Figs. 5(d-f) show Image 10 from Table 1 with maximum error. Lastly, Fig. 6 shows zoomed-in views of Images 6 and 9 from Table 1 comparing the gold standard (Figs. 6(a-e)) to automatic segmentation (Figs. 6(f-j)). The images in Figs. 6(a-c) are examples of erroneous gold standard segmentation, while Fig. 6(j) is an example of erroneous automatic segmentation. Their corresponding images (Figs. 6(e-h)) are examples of accurate segmentation. Finally, Figs. 6(d, i) show an undetermined case where it is unclear which segmentation is correct.


Table 1. Comparison of the RPE cell count and average cell area obtained for each confocal fluorescence microscopy image via fully automatic segmentation versus the gold standard

Fig. 4 Examples of RPE cell segmentation. (a) Automatically segmented confocal fluorescence image of a flat-mounted mouse retina (Image 16 in Table 1). The cell borders could still be segmented in cases when (b) the pilot estimate (yellow) was off-center and not a close estimate of the cell, (c) image artifacts or extraneous features were present, (d) the reflectivity of the cell border varied, (e) the cell had a low signal-to-noise ratio, (f) the cell was of abnormal shape, and (g) cell sizes were large or small. Each colored box in (a) corresponds to the zoomed-in image with the same colored box in (b-g).

Fig. 5 Comparison of fully automatic segmentation versus the gold standard. (a, d) Confocal fluorescence microscopy images of flat-mounted mouse retina. (b, e) Gold standard segmentation of RPE cells (magenta) obtained semi-automatically using an independent technique. (c, f) Fully automatic segmentation (magenta) using the presented closed-contour GTDP technique. For the gold standard, cells bordering the image and invalid regions due to folding of tissue during preparation and imaging artifacts were ignored. These regions were therefore discarded (a-f, black borders) for the comparison study shown in Table 1.

Fig. 6 Zoomed-in comparison of fully automatic segmentation versus the gold standard. (a) Erroneous gold standard segmentation: The small middle cell was merged with adjacent cells. (b) Erroneous gold standard segmentation that did not significantly alter quantitative comparison: Although the middle cell was incorrectly shifted, the cell count remained correct. (c) Erroneous gold standard segmentation: An enlarged diseased cell was incorrectly separated into two cells. We emphasize that such errors (a-c) were very infrequent in the gold standard data set consisting of thousands of semi-automatically segmented cells. However, existence of even a handful of such errors shows the limitation of subjective segmentation techniques relative to the automatic segmentation (f-h). (d, i) An undetermined case: The gold standard (d) delineated two separate cells, while the automatic segmentation (i) grouped them as a single cell. Since these are diseased RPE cells, it is unclear whether cells with a partially disappeared cell border should be considered individually or as a unit. (j) Erroneous automatic segmentation: Borders were incorrectly drawn through the cells due to an artifact. (e-h) Accurate gold standard (e) and automatic (f-h) segmentation.

All 23 images used in the study and their corresponding manual and automatic segmentation data are available at http://www.duke.edu/~sf59/Chiu_BOE_2012_dataset.htm. The “single click” algorithm was coded in MATLAB (The MathWorks, Natick, MA) and resulted in an average computation time of 2.95 minutes per image (with an average of 964 cells per image) on a desktop computer with a 64-bit operating system, Core i7 2600K CPU (Intel, Mountain View, CA), solid state hard drive, and 16 GB of RAM. This time included the overhead required for reading and writing operations.

4.2. Segmentation results for other applications

Section 2 presents a generalized segmentation framework for closed-contour structures, and Sections 3 and 4.1 discuss the implementation and results for RPE cell segmentation. We have additionally implemented the algorithm to segment other ophthalmic features. These include intra-retinal cysts seen in SDOCT images (Spectralis; Heidelberg Engineering, Heidelberg, Germany) of patients with DME (Fig. 7). We have also included automatic segmentation of cone photoreceptors in AOSO images [18] of the retina (Fig. 8). To segment the cysts and photoreceptors, the same basic framework as described in Section 3 was utilized. Changes required to adapt the algorithm to different image types included modifying the image filters and pilot estimation techniques, changing the edge weights, and using different refinement steps to exclude erroneous segmentations (e.g., removing segmentations of hypo-reflective blood vessels in SDOCT images).

Fig. 7 (a) SDOCT retinal image of a patient with DME. (b) Fully automatically segmented retinal layers (inner limiting membrane – blue; RPE – magenta; Bruch’s membrane – cyan) via layer GTDP, and segmented intra-retinal cysts (magenta) via closed-contour GTDP.

Fig. 8 (a) AOSO image of cone photoreceptors provided by the authors of [18]. (b) Fully automatically segmented cones using the closed-contour GTDP segmentation technique.

5. Discussion

To the best of our knowledge, no other technique has been reported for fully automatically segmenting confocal fluorescence images of RPE cells. The qualitative results shown in Fig. 4 demonstrate accurate automatic cell segmentation despite the presence of image artifacts, low signal-to-noise ratio, or inaccurate estimations. Furthermore, the cell shape and size were not constraining factors for the presented automatic algorithm.

Quantitative results show that our fully automatic algorithm accurately segmented RPE cells in confocal images of flat-mounted mouse retina with AMD, yielding cell count and mean cell area measurements with an average error of less than 1.5%. Automatic segmentation errors similar to Fig. 6(j) occurred due to the presence of bright imaging artifacts, which erroneously altered values in the first cost function of Eq. (6). However, as shown in Figs. 6(a-c), even the gold standard segmentation reported in PNAS contained errors. Such errors will naturally occur in most manual or semi-automatic segmentation tasks required for large-scale studies, where multiple human experts subjectively grade images on different dates. Furthermore, since manual correction is extremely time consuming, cells with inconsequential errors (Fig. 6(b)) may not have been a priority to correct. As a result, a 0% error relative to the gold standard does not imply perfect segmentation.

To further reduce the effect of pilot estimation errors and achieve more accurate segmentation, one may employ an iterative approach, in which the segmentation result of one iteration is used as the pilot estimate for the next. In addition, application of modern adaptive denoising methods to the raw images, which remove imaging artifacts without altering the anatomic structures, may further improve the automated segmentation performance [48,49].

Our motivation for using cell count and area as validation metrics stems from the need to quantify the extent of cell damage present in different groups of diseased mice [12]. While the error in the absolute position of each automatically segmented cell may have been a more direct validation metric, the presence of errors in the gold standard made this infeasible. Furthermore, the cell boundary is several pixels thick, making it difficult to assign a “true” boundary position. While the gold standard often divided cells through the middle of the cell border, this was not always the case. As a result, we deemed cell count and area to be more meaningful and viable validation metrics.

Finally, we showed preliminary results demonstrating the segmentation of cysts seen in SDOCT images of eyes with DME (Fig. 7) and cone photoreceptors seen in AOSO images of the retina (Fig. 8). Prior studies on retinal fluid segmentation have focused on the detection of larger regions of intra-retinal fluid [19] or fluid constrained by retinal layer segmentation [21]. Our algorithm, in contrast, is sensitive to small changes in intra-retinal fluid buildup. With regard to cone photoreceptor segmentation, many previous works assumed a given photoreceptor size, shape, or packing [32–35,37]. Our method relaxes these constraints, allowing for a future version of the algorithm capable of segmenting both rod and cone photoreceptors in a single image. Lastly, while we plan to validate these algorithms in future studies, we believe these preliminary results are encouraging, as they show the flexibility of our algorithm to be extended to different structures, diseases, and imaging modalities.

6. Conclusion

In summary, we demonstrated the extension of our automatic GTDP framework to segment not only layered, but also closed-contour structures. Implementation of the algorithm for RPE cell segmentation in confocal fluorescence images of flat-mounted AMD mouse retina resulted in accurate extraction of cell count and average cell size. We believe that such a tool will be extremely useful for future studies that use cell morphology as a biomarker for the onset and progression of disease. While validation has yet to be shown for other ophthalmic applications, we have demonstrated preliminary results for intra-retinal cyst segmentation in an SDOCT image as well as cone photoreceptor segmentation in an AOSO image. This is highly encouraging for reducing the time and manpower required for segmenting closed-contour features in ophthalmic studies.

Acknowledgments

We would like to thank Dr. Jindong Ding for his supervision on the semi-automatic segmentation of RPE cells, Dr. Priyatham Mettu for his expertise on DME cyst segmentation, Prof. Glenn Jaffe for providing the image in Fig. 7(a), and Linkun (Sam) Xi for manual correction of the RPE cells. This work was supported in part by the American Health Assistance Foundation, NIH (1R21EY021321-01A1, NIH P30 EY-005722), Research to Prevent Blindness (Duke’s 2011 Unrestricted Grant Award), the North Carolina Biotechnology Center, and the John T. Chambers Scholarship.

Publisher’s note: This paper was corrected on June 28, 2012, to add a funding source.

References and links

1. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef]   [PubMed]  

2. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2(6), 1743–1756 (2011). [CrossRef]   [PubMed]  

3. Y. Y. Liu, M. Chen, H. Ishikawa, G. Wollstein, J. S. Schuman, and J. M. Rehg, “Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding,” Med. Image Anal. 15(5), 748–759 (2011). [CrossRef]   [PubMed]  

4. Q. Yang, C. A. Reisman, K. Chan, R. Ramachandran, A. Raza, and D. C. Hood, “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express 2(9), 2493–2503 (2011). [CrossRef]   [PubMed]  

5. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). [CrossRef]   [PubMed]  

6. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

7. V. Kajić, M. Esmaeelpour, B. Považay, D. Marshall, P. L. Rosin, and W. Drexler, “Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model,” Biomed. Opt. Express 3(1), 86–103 (2012). [CrossRef]   [PubMed]  

8. L. Duan, M. Yamanari, and Y. Yasuno, “Automated phase retardation oriented segmentation of chorio-scleral interface by polarization sensitive optical coherence tomography,” Opt. Express 20(3), 3353–3366 (2012). [CrossRef]   [PubMed]  

9. J. Eichel, A. Mishra, P. Fieguth, D. Clausi, and K. Bizheva, “A novel algorithm for extraction of the layers of the cornea,” in Canadian Conference on Computer and Robot Vision, 2009. CRV '09 (IEEE, 2009), pp. 313–320.

10. F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, “Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming,” Biomed. Opt. Express 2(6), 1524–1538 (2011). [CrossRef]   [PubMed]  

11. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

12. J. D. Ding, L. V. Johnson, R. Herrmann, S. Farsiu, S. G. Smith, M. Groelle, B. E. Mace, P. Sullivan, J. A. Jamison, U. Kelly, O. Harrabi, S. S. Bollini, J. Dilley, D. Kobayashi, B. Kuang, W. Li, J. Pons, J. C. Lin, and C. B. Rickman, “Anti-amyloid therapy protects against retinal pigmented epithelium damage and vision loss in a model of age-related macular degeneration,” Proc. Natl. Acad. Sci. U.S.A. 108(28), E279–E287 (2011). [CrossRef]   [PubMed]  

13. M. R. Hee, C. A. Puliafito, J. S. Duker, E. Reichel, J. G. Coker, J. R. Wilkins, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto, “Topography of diabetic macular edema with optical coherence tomography,” Ophthalmology 105(2), 360–370 (1998). [CrossRef]   [PubMed]  

14. T. Otani, S. Kishi, and Y. Maruyama, “Patterns of diabetic macular edema with optical coherence tomography,” Am. J. Ophthalmol. 127(6), 688–693 (1999). [CrossRef]   [PubMed]  

15. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef]   [PubMed]  

16. H. Hofer, L. Chen, G. Y. Yoon, B. Singer, Y. Yamauchi, and D. R. Williams, “Improvement in retinal image quality with dynamic correction of the eye’s aberrations,” Opt. Express 8(11), 631–643 (2001). [CrossRef]   [PubMed]  

17. A. Roorda, F. Romero-Borja, W. Donnelly Iii, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002). [PubMed]  

18. R. F. Cooper, A. M. Dubis, A. Pavaskar, J. Rha, A. Dubra, and J. Carroll, “Spatial and temporal variation of rod photoreceptor reflectance in the human retina,” Biomed. Opt. Express 2(9), 2577–2589 (2011). [CrossRef]   [PubMed]  

19. D. C. Fernández, “Delineating fluid-filled region boundaries in optical coherence tomography images of the retina,” IEEE Trans. Med. Imaging 24(8), 929–945 (2005). [CrossRef]   [PubMed]  

20. C. Ahlers, C. Simader, W. Geitzenauer, G. Stock, P. Stetson, S. Dastmalchi, and U. Schmidt-Erfurth, “Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography,” Br. J. Ophthalmol. 92(2), 197–203 (2008). [CrossRef]   [PubMed]  

21. G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abramoff, and M. Sonka, “Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010). [CrossRef]   [PubMed]  

22. R. T. Smith, J. K. Chan, T. Nagasaki, U. F. Ahmad, I. Barbazetto, J. Sparrow, M. Figueroa, and J. Merriam, “Automated detection of macular drusen using geometric background leveling and threshold selection,” Arch. Ophthalmol. 123(2), 200–206 (2005). [CrossRef]   [PubMed]  

23. A. D. Mora, P. M. Vieira, A. Manivannan, and J. M. Fonseca, “Automated drusen detection in retinal images using analytical modelling algorithms,” Biomed. Eng. Online 10(1), 59 (2011). [CrossRef]   [PubMed]  

24. S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, “Fast detection and segmentation of drusen in retinal optical coherence tomography images,” Proc. SPIE 6844, 68440D, 68440D-12 (2008). [CrossRef]  

25. N. Lee, A. F. Laine, and R. T. Smith, “A hybrid segmentation approach for geographic atrophy in fundus auto-fluorescence images for diagnosis of age-related macular degeneration,” in 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007. EMBS 2007 (IEEE, 2007), pp. 4965–4968.

26. S. Tsantis, G. C. Kagadis, K. Katsanos, D. Karnabatidis, G. Bourantas, and G. C. Nikiforidis, “Automatic vessel lumen segmentation and stent strut detection in intravascular optical coherence tomography,” Med. Phys. 39(1), 503–513 (2012). [CrossRef]   [PubMed]  

27. S. V. Patel, J. W. McLaren, J. J. Camp, L. R. Nelson, and W. M. Bourne, “Automated quantification of keratocyte density by using confocal microscopy in vivo,” Invest. Ophthalmol. Vis. Sci. 40(2), 320–326 (1999). [PubMed]  

28. F. J. Sanchez-Marin, “Automatic segmentation of contours of corneal cells,” Comput. Biol. Med. 29(4), 243–258 (1999). [CrossRef]   [PubMed]  

29. A. Ruggeri, E. Grisan, and J. Jaroszewski, “A new system for the automatic estimation of endothelial cell density in donor corneas,” Br. J. Ophthalmol. 89(3), 306–311 (2005). [CrossRef]   [PubMed]  

30. M. E. Díaz, G. Ayala, R. Sebastian, and L. Martínez-Costa, “Granulometric analysis of corneal endothelium specular images by using a germ-grain model,” Comput. Biol. Med. 37(3), 364–375 (2007). [CrossRef]   [PubMed]  

31. A. H. Karimi, A. Wong, and K. Bizheva, “Automated detection and cell density assessment of keratocytes in the human corneal stroma from ultrahigh resolution optical coherence tomograms,” Biomed. Opt. Express 2(10), 2905–2916 (2011). [CrossRef]   [PubMed]  

32. B. Xue, S. S. Choi, N. Doble, and J. S. Werner, “Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera,” J. Opt. Soc. Am. A 24(5), 1364–1372 (2007). [CrossRef]   [PubMed]  

33. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” J. Opt. Soc. Am. A 24(5), 1358–1363 (2007). [CrossRef]   [PubMed]  

34. M. Pircher, J. S. Kroisamer, F. Felberer, H. Sattmann, E. Götzinger, and C. K. Hitzenberger, “Temporal changes of human cone photoreceptors observed in vivo with SLO/OCT,” Biomed. Opt. Express 2(1), 100–112 (2011). [CrossRef]   [PubMed]  

35. M. Mujat, R. D. Ferguson, A. H. Patel, N. Iftimia, N. Lue, and D. X. Hammer, “High resolution multimodal clinical ophthalmic imaging system,” Opt. Express 18(11), 11607–11621 (2010). [CrossRef]   [PubMed]  

36. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012). [CrossRef]   [PubMed]  

37. K. Loquin, I. Bloch, K. Nakashima, F. Rossant, and M. Paques, “Photoreceptor detection in in-vivo adaptive optics images of the retina: towards a simple interactive tool for the physicians,” in 2011 IEEE International Symposium on Biomedical Imaging: from Nano to Macro (IEEE 2011), pp. 191–194.

38. C. A. Glasbey and M. J. Young, “Maximum a posteriori estimation of image boundaries by dynamic programming,” J. R. Stat. Soc. Ser. C Appl. Stat. 51(2), 209–221 (2002). [CrossRef]  

39. S. Timp and N. Karssemeijer, “A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography,” Med. Phys. 31(5), 958–971 (2004). [CrossRef]   [PubMed]  

40. Z. Yan, B. J. Matuszewski, S. Lik-Kwan, and C. J. Moore, “A novel medical image segmentation method using dynamic programming,” in International Conference on Medical Information Visualisation—BioMedical Visualisation, 2007 (MediVis 2007) (IEEE, 2007), pp. 69–74.

41. S. Lu, “Accurate and efficient optic disc detection and segmentation by a circular transformation,” IEEE Trans. Med. Imaging 30(12), 2126–2133 (2011). [CrossRef]   [PubMed]  

42. S. Farsiu, J. Christofferson, B. Eriksson, P. Milanfar, B. Friedlander, A. Shakouri, and R. Nowak, “Statistical detection and imaging of objects hidden in turbid media using ballistic photons,” Appl. Opt. 46(23), 5805–5822 (2007). [CrossRef]   [PubMed]  

43. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik 1(1), 269–271 (1959). [CrossRef]  

44. N. M. Bressler, “Age-related macular degeneration is the leading cause of blindness,” JAMA 291(15), 1900–1901 (2004). [CrossRef]   [PubMed]  

45. P. Soille, Morphological Image Analysis: Principles and Applications (Springer, 1999).

46. T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms (The MIT Press, 2001).

47. R. Gonzalez and R. Woods, Digital Image Processing, 3rd ed. (Prentice Hall, 2007).

48. H. Takeda, S. Farsiu, and P. Milanfar, “Robust kernel regression for restoration and reconstruction of images from sparse noisy data,” in 2006 IEEE International Conference on Image Processing (IEEE, 2006), pp. 1257–1260.

49. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity-based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]  
