
Feasibility of level-set analysis of enface OCT retinal images in diabetic retinopathy

Open Access

Abstract

Pathology segmentation in retinal images of patients with diabetic retinopathy is important to help better understand disease processes. We propose an automated level-set method with Fourier descriptor-based shape priors. A cost function measures the difference between the current and expected output. We applied our method to enface images generated for seven retinal layers and determined correspondence of pathologies between retinal layers. We compared our method to a distance-regularized level-set method and showed the advantages of using well-defined shape priors. Results obtained allow us to observe pathologies across multiple layers and to obtain metrics that measure the co-localization of pathologies in different layers.

© 2015 Optical Society of America

1. Introduction

Diabetic retinopathy (DR) is an eye disease caused by retinal vascular abnormalities that may lead to blindness. Clinical methods, such as fundus photography and fluorescein angiography, are available for visualizing vascular and tissue pathologies due to DR. Spectral domain optical coherence tomography (SD-OCT) can image cross-sections of the retina, thus facilitating the visualization and automated identification of pathological changes that occur across the retinal depth [1–4]. Enface retinal imaging derived from SD-OCT is an emerging technique that can produce coronal (frontal) images of the retina at different depths [3]. Enface SD-OCT imaging complements standard retinal OCT by providing an easy-to-understand global overview of the retinal surface.

The photoreceptor inner segment ellipsoid (ISe) junction is a highly reflective band just above the retinal pigment epithelium (RPE) [5, 6]. The ISe junction layer plays a fundamental role in vision, and disruption of this layer has been correlated with poor visual acuity in DR [7]. Enface ISe images are reconstructed from a single interface within the retinal tissue from the SD-OCT images, and therefore have high contrast and offer a precise representation of the spatial extent of disruptions due to retinal pathologies [6]. Many image processing techniques focus on the separation of abnormal areas from normal areas in retinal images to detect and monitor changes due to disease. Manual analysis of large volumes of data is neither feasible nor highly accurate, as it is affected by many external factors, including variations in inter-observer capabilities. Therefore, there is a need for automated analysis of pathologies in retinal images [8–11]. Level-set techniques may provide an effective method for automated identification of abnormalities in OCT images. In fact, the use of level-set methods for segmentation of geographic atrophy in enface OCT images has been previously demonstrated [11, 12]. Metrics obtained from the observed data can provide insight into the progression of disease, thereby allowing clinicians to follow an appropriate treatment plan.

In this paper, we demonstrate the feasibility of a level-set method to segment DR pathologies in enface OCT images of both inner and outer retinal layers. The model uses shape priors, derived from Fourier descriptors, to describe the typical form of pathologies present in the image. The segmentation allowed visualization and measurement of areas of pathology across multiple retinal enface layers. Application of this method has the potential to determine whether disruptions in the ISe layer are related to pathologies in the inner retinal layers.

2. Level-set segmentation using shape priors

The popularity of level-set methods as a general framework for image segmentation has been growing [11–13]. The level-set method is well suited to image segmentation because it naturally follows the evolution of interfaces, such as regions that break apart or merge together. The basic idea of level-set segmentation is to begin with an initial curve C and to evolve the curve so that it rests on the object's boundaries. The curve evolves under constraints derived from the image [14]. Edge-based level-set models have been used to extract geographic atrophy in SD-OCT images [11, 12]. One class of region-based models, known as variational level-set models, provides optimal segmentation by minimizing an energy functional. This energy functional usually depends on the image data as well as the characteristic features used to identify the objects to be segmented. One of the earliest and most widely used variational level-set models was developed by Chan and Vese [15]. Their method seeks the desired segmentation as the best piecewise constant approximation to a given image [16]. Chan and Vese proposed to minimize the following energy functional:

E(c_1, c_2, \phi) = \mu L(\phi) + E_{\text{out}}(c_1, c_2, \phi) = \mu \int \delta(\phi)\,|\nabla\phi|\,dx\,dy + \int (I - c_1)^2 H(\phi)\,dx\,dy + \int (I - c_2)^2 \left(1 - H(\phi)\right) dx\,dy
Here, μ ≥ 0 is a fixed parameter, ϕ is the level-set function, L(ϕ) is the length of the curve, c1 and c2 are the respective mean intensities inside and outside the contour, H(ϕ) and δ(ϕ) are the one-dimensional Heaviside and Dirac delta functions, respectively, and I is the original image to be segmented. μL(ϕ) denotes the internal energy, which controls the smoothness of the curve, and Eout(c1, c2, ϕ) is the external energy, which is driven by image features and forces the contour towards object boundaries. The Chan-Vese model assumes intensity homogeneity and seeks to partition an image into regions of constant mean intensity. This often leads to poor segmentation results in complicated images. Additionally, the Chan-Vese model depends on the placement of the initial contour, yielding different results for different initial locations of the contour [16]. The segmentation of medical images generally faces challenges including noise introduced during the acquisition process, missing or broken boundaries, and complex biological structures. In such cases, the introduction of prior information, such as an approximation of the shape, intensity, and other features of the tissue of interest, can help the segmentation algorithm perform better. Recently, level-set approaches that integrate shape priors have been proposed using different shape models. These approaches either use specific shapes known a priori, or obtain the shape parameters from available training data [17–22]. Fourier descriptors are a highly effective and compact means of representing object shape and thus make an ideal candidate for use as a shape prior in the level-set formulation [23–26]. In this work, we focus our study on the description of shape priors using Fourier descriptors to model pathologies occurring in the human retina. The shape signatures are derived using the centroid distance function. We include a cost function that serves as a weighting function to determine the similarity between the target curve and the derived curve. We obtain the automated segmentation of pathologies visible in seven enface retinal images generated at different retinal depths. After segmentation, we measure the co-localization of pathologies in the inner retina and ISe layers to determine the effect of inner retinal DR pathologies on the integrity of the photoreceptor cells.

3. Enface OCT images

Enface OCT images were generated by performing high density SD-OCT imaging over a 15° × 15° retinal area using a commercial instrument (Spectralis, Heidelberg Engineering), as previously described in [6]. The volume scan consisted of 145 raster horizontal B-scans, each with 768 A-scans and a depth resolution of 3.9 μm. The instrument's eye tracker allowed 9 B-scans to be averaged at each location. The spacing between B-scans was 30 μm. The methodology for generating enface OCT images from B-scans is shown in Fig. 1. After correcting for the curvature of the retinal pigment epithelium (RPE), an enface image of a retinal layer was generated by extracting horizontal slices at a prescribed depth location within the retinal tissue from each of the 145 B-scans. The depth locations of retinal cell layers were established in healthy control subjects by manually measuring the distance from the RPE to the corresponding retinal layer. Intensity data within individual slices of all B-scans were averaged to create rows of the corresponding enface image. This process was repeated at different retinal depth locations to generate enface images of 7 retinal layers. The time required for generation of the enface images was approximately 1 minute. The depth locations of these slices were chosen to correspond approximately with the normal anatomical locations of the nerve fiber layer (NFL), ganglion cell and inner plexiform layers (GCL+IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), inner segment ellipsoid (ISe) layer, and RPE. The horizontal slice thickness for averaging was 30 μm for the ISe and RPE layers and 20 μm for all other retinal layers. Although enface images generated by this method will contain features of adjacent retinal cell layers due to normal topographic alterations in retinal layers, an alternative segmentation method based on retinal layer boundary identification would be challenging in diabetic retinopathy subjects due to pathologically irregular retinal structure and layer organization.
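For illustration, a minimal sketch of this slab-averaging step is given below, assuming an RPE-flattened OCT volume stored as a NumPy array of shape (B-scans, depth, A-scans); the function and argument names are hypothetical, not taken from the original software.

```python
import numpy as np

def enface_slice(volume, depth_px, half_thickness_px):
    """Average an RPE-flattened OCT volume (n_bscans, depth, n_ascans)
    over a horizontal slab centered at depth_px; each averaged slab of
    one B-scan becomes one row of the resulting enface image."""
    top = depth_px - half_thickness_px
    bottom = depth_px + half_thickness_px + 1
    return volume[:, top:bottom, :].mean(axis=1)  # shape: (n_bscans, n_ascans)
```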

Fig. 1 Methodology for segmentation of B-scans to generate enface images of retinal layers. Depth locations of horizontal slices are indicated by red lines on an example B-scan and the corresponding retinal layers are indicated.

Examples of enface images of 7 retinal layers in a healthy human eye are shown in Fig. 2. Normal vascular patterns and relatively uniform reflectivity were observed in most retinal layers. Shadows of the retinal vasculature were observed on the ISe and RPE layers. Retinal NFL and GCL+IPL enface images displayed a central circular dark region that corresponded to the normal foveal depression.

Fig. 2 Enface OCT images obtained in a healthy subject. The layers are as follows: Top row, left to right: NFL, GCL+IPL, INL, OPL. Bottom row, left to right: ONL, ISe, and RPE.

4. Method

Since enface images are generated by storing data in consecutive rows, unwanted artifacts often appear in the form of horizontal lines across the image due to differences in B-scan intensity. The first step in our program is the removal of these artifacts prior to segmentation in order to avoid any interference that they may cause. The next step is to define shape priors to quantitatively describe the shape of typical pathologies observed in the enface images. These shape priors are embedded into the level-set formulation to help segment objects of similar shape. Lastly, morphological post-processing is applied to remove small objects and spurious pixels. Once the regions of interest have been segmented, metrics are acquired to describe the size of the pathologies and the positions of the pathologies' centroids. The centroid is used to determine the co-localization of the pathologies in the GCL+IPL and ISe layers. If the centroids in the two layers are within a small distance of each other or overlapping, then the reduced reflectivity in the ISe layer is attributed to abnormalities present in the overlying GCL+IPL layer. We compared our results with results generated from the well-known distance-regularized level-set evolution method by Li et al. [27] to show the difference in results when using a manually defined contour as opposed to embedding shape priors into the contour.
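A minimal sketch of this centroid-based co-localization test is shown below, assuming binary pathology masks for the two layers; the distance threshold is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary pathology mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def co_localized(mask_gcl_ipl, mask_ise, max_dist_px=10.0):
    """True if the pathology centroids in the GCL+IPL and ISe masks
    lie within max_dist_px pixels of each other."""
    (r1, c1), (r2, c2) = centroid(mask_gcl_ipl), centroid(mask_ise)
    return np.hypot(r1 - r2, c1 - c2) <= max_dist_px
```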

4.1. Preprocessing

In order to preserve features of interest whilst removing artifacts generated as a result of B-scan intensity differences, we process the image in the frequency domain. Figure 3 (left) shows an enface image of the ISe layer in a healthy human eye with horizontal lines running across the image. The vertical lines visible in the Fourier transformed image in Fig. 3 (center) correspond to the horizontal lines in the spatial domain image. By removing these lines in the Fourier domain we obtain an image with reduced vertical intensity variation whilst retaining the necessary spatial information, as can be seen in Fig. 3 (right).
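The paper does not list the exact filter used; the sketch below shows one plausible implementation, assuming the artifacts map onto the vertical axis of the centered 2D FFT and sparing a small band around DC (the keep radius is an assumed tuning parameter).

```python
import numpy as np

def remove_horizontal_lines(img, keep=3):
    """Suppress horizontal line artifacts by zeroing the vertical
    frequency axis of the centered 2D FFT, except near DC."""
    F = np.fft.fftshift(np.fft.fft2(img))
    n_rows, n_cols = F.shape
    cr, cc = n_rows // 2, n_cols // 2
    # Horizontal lines in the image concentrate energy along the
    # vertical line through the spectrum center; zero it outside DC.
    F[:cr - keep, cc] = 0.0
    F[cr + keep + 1:, cc] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```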

Fig. 3 Enface ISe image obtained in a healthy subject. Left: Before preprocessing; Center: Fourier transform of original image; Right: After preprocessing with horizontal lines removed.

4.2. Fourier descriptors for shape modeling

Two-dimensional (2D) object features such as edges, lines, and shapes are used as low-level features in many image processing and computer vision applications. The use of shape priors facilitates the segmentation process if one knows an estimated outline of the object of interest. Shape features can be well-represented by Fourier descriptors, which can be easily derived and compactly characterized. Fourier descriptors describe the shape of an object in terms of its boundaries. In our model, we define the Fourier descriptor coefficients as those that represent contours for the expected shape of the pathology. An advantage of doing so is that the coefficients are invariant to rotation, scale, and translation. This is important as pathologies can occur at various orientations and be of varying sizes. There are many methods, such as statistical analysis given a large number of samples, to define a target shape that compactly represents objects belonging to the same class. The target shape signature was obtained by manual segmentation of pathology regions in 35 enface images from 5 DR subjects (7 enface images per subject). A total of 91 pathology regions were extracted from these enface images to comprise the training set. Data obtained in a different group of DR subjects were used for testing the segmentation method to avoid potential bias. All DR subjects had either non-proliferative or proliferative diabetic retinopathy with clinically significant macular edema. The target shape signature was obtained by averaging over the discrete Fourier transforms of the identified pathologies. The spatial-domain averaged pathology target image was then obtained by applying the inverse Fourier transform. The one-dimensional (1D) Fourier descriptors of this 2D target image were obtained by using the centroid-distance function. The absolute values of the Fourier descriptors represent the characteristic features of the pathology and are utilized as the shape prior to guide curve evolution in our level-set equation.
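As a rough sketch of how such a target prior could be assembled, assuming each training pathology has already been reduced to a fixed-length centroid-distance signature as described in Sec. 4.3 (the low-order cut-off n_low is an assumed parameter):

```python
import numpy as np

def normalized_fd(signature):
    """Normalized Fourier descriptor magnitudes b_n = |a_n / a_0| of a
    centroid-distance signature; invariant to rotation, scale,
    translation, and starting point."""
    a = np.fft.fft(signature) / len(signature)
    return np.abs(a / a[0])

def target_shape_prior(training_signatures, n_low=10):
    """Average the low-order descriptor magnitudes over all manually
    segmented training pathologies to form the target shape prior."""
    fds = np.array([normalized_fd(s) for s in training_signatures])
    return fds[:, :n_low].mean(axis=0)
```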

4.3. Implementation of level-set segmentation using shape priors

In this section we discuss the implementation details of obtaining the Fourier descriptors for a polygonal curve. Suppose the discrete boundary of an object S is defined by a closed curve C plotted on the XY plane. If we traverse the boundary of the object starting at an arbitrary point (x0, y0), the coordinate pairs encountered as we trace the boundary path are (x1, y1), (x2, y2),..., (xN−1, yN−1). We can express these coordinate pairs as x(k) = xk and y(k) = yk. The shape signature of the boundary itself can therefore be represented as the sequence of coordinates s(k) = [x(k), y(k)], for k = 0, 1, 2,..., N − 1. An obvious advantage of such a representation is that it reduces a two-dimensional (2D) problem to a one-dimensional (1D) problem. Before applying the discrete Fourier transform (DFT) to the shape signature, the target shape must be sampled to a fixed number of points. By varying the number of sample points, the accuracy of the shape representation can be adjusted: the larger the number of sample points, the more accurately the shape is represented. We sample the shape to a fixed number of points using the equal points sampling method, in which 64 candidate points, equally spaced along the shape boundary, are selected (a power-of-two count facilitates efficient computation of the DFT). There are two advantages to using a lower number of sample points. Firstly, this number of points gives a smoothed representation of the shape signature of interest and reduces the computational power required. Secondly, Fourier descriptors give equal weight to all harmonics, which emphasizes differences in the higher order harmonics that are more sensitive to irregularities [28]. By using a lower number of harmonics we can avoid this drawback.
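A sketch of the equal points sampling step, assuming the boundary is available as an ordered (N, 2) array of (x, y) coordinates (e.g., from a contour-tracing routine):

```python
import numpy as np

def resample_boundary(boundary, n_points=64):
    """Select n_points samples equally spaced by arc length along a
    closed boundary given as an ordered (N, 2) array of (x, y)."""
    closed = np.vstack([boundary, boundary[:1]])           # close the curve
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)  # edge lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative length
    targets = np.linspace(0.0, arc[-1], n_points, endpoint=False)
    x = np.interp(targets, arc, closed[:, 0])
    y = np.interp(targets, arc, closed[:, 1])
    return np.column_stack([x, y])
```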

The discrete Fourier transform (DFT) of s(k) is

a(u) = \frac{1}{N} \sum_{k=0}^{N-1} s(k)\, e^{-j 2\pi u k / N}
for u = 0, 1, 2,..., N − 1. The complex coefficients a(u) are called the Fourier descriptors of the boundary, a frequency-based description of the boundary of the image. The inverse Fourier transform of a(u) restores s(k), i.e.,
s(k) = \sum_{u=0}^{N-1} a(u)\, e^{j 2\pi u k / N}
for k = 0, 1, 2,...,N − 1. We use the centroid distance function to calculate the shape signature, as it has been shown that shape representation using the centroid distance function is significantly better than using other techniques, such as complex coordinates and curvature signature [29]. The centroid distance function is given by the distance of the boundary points from the centroid (xc, yc) of the shape:
r(k) = \left( [x(k) - x_c]^2 + [y(k) - y_c]^2 \right)^{1/2}, \quad k = 0, 1, \ldots, N-1, \qquad x_c = \frac{1}{N} \sum_{k=0}^{N-1} x(k), \quad y_c = \frac{1}{N} \sum_{k=0}^{N-1} y(k)
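A direct implementation of this signature, assuming the 64-point resampled boundary from the previous sketch:

```python
import numpy as np

def centroid_distance(boundary):
    """Centroid-distance signature r(k): distance of each boundary
    point (x(k), y(k)) from the shape centroid (x_c, y_c)."""
    xc, yc = boundary.mean(axis=0)
    return np.hypot(boundary[:, 0] - xc, boundary[:, 1] - yc)
```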

In order to extract the FD of the desired shape prior, the 1D Fourier transform must be applied to the centroid distance function, r(k) as follows:

a_n = \frac{1}{N} \sum_{k=0}^{N-1} r(k)\, e^{-j 2\pi n k / N}, \quad n = 0, 1, \ldots, N-1
All Fourier transform coefficients are normalized by the first coefficient a0. An advantage of using FDs for boundary representation is that the FDs are rotation, scale, and translation invariant. The phase information is ignored and the coefficient magnitudes are retained. To denote the Fourier coefficients that have been normalized and whose phase information is ignored, we define b_n = |a_n / a_0|, where b_n is invariant to rotation, translation, scaling, and change of starting point [30]. Translation has no effect on the descriptors except at the position k = 0, which carries the impulse function δ(k); the inverse DFT of this descriptor is equivalent to the centroid of the object. Translation invariance is easily achieved by setting a(0) = 0, i.e., translating the origin of the coordinate system to the center of mass of the pattern [31]. Rotation invariance can be achieved by ignoring the phase information of a(u) and using only its magnitude |a(u)| for each descriptor. Scale invariance is achieved by dividing |a(u)| by the DC component. In order to measure the similarity between a target shape T and a query Q, the Euclidean distance between the Fourier descriptor representations of the two shapes is measured [32]. Therefore, we define our level set model as:
E_{\text{img}} + E_{\text{shape}} + E_{\text{region}} + E_{\text{cost}}
or more specifically:
E_{\text{img}}(\phi, c_1, c_2) = \int \left[ \delta_\varepsilon(\phi)\, \mu\, \nabla \cdot \left( \frac{\nabla\phi}{|\nabla\phi|} \right) + H(\phi) \right] dx\,dy + \lambda_1 \int (I - c_1)^2 H(\phi)\, dx\,dy + \lambda_2 \int (I - c_2)^2 \left(1 - H(\phi)\right) dx\,dy + \left\| \tilde{S}_1 - \tilde{S}_2 \right\|

An explanation of the parameters is given as follows: δε(ϕ) is the regularized Dirac delta function, the derivative of the Heaviside function. It can be implemented using a discretized and regularized version of the function, given by:

\delta_\varepsilon(\phi) = \frac{1}{\pi} \cdot \frac{\varepsilon}{\varepsilon^2 + \phi^2}
where ε is a small regularization constant that avoids division by zero. ϕ is a signed distance function obtained using a distance transform, such that the values inside the contour are negative and the values outside are positive. ∇·(∇ϕ/|∇ϕ|) is the curvature, the divergence of the gradient of the signed distance function normalized by its magnitude. It can be implemented using a finite difference scheme as follows:
\frac{\phi_{xx}\,\phi_y^2 - 2\,\phi_{xy}\,\phi_x\,\phi_y + \phi_{yy}\,\phi_x^2}{\left(\phi_x^2 + \phi_y^2\right)^{3/2}}
where ϕx, ϕy, and ϕxy are the derivatives of ϕ in the x, y, and xy directions, respectively. H(ϕ) is the unit-step Heaviside function such that H(ϕ) = 1 if ϕ ≥ 0 and H(ϕ) = 0 if ϕ < 0. For a smooth approximation of the unit step function, we implement the Heaviside function by the following equation [15]:
H(\phi) = \frac{1}{2}\left(1 + \frac{2}{\pi} \arctan\!\left(\frac{\phi}{\varepsilon}\right)\right)
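Both regularized functions are straightforward to implement; a minimal sketch follows, where the width ε is a tuning choice assumed to be on the order of one pixel:

```python
import numpy as np

def heaviside(phi, eps=1.5):
    """Smoothed Heaviside H(phi), Eq. (9)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.5):
    """Regularized Dirac delta, Eq. (7): the derivative of the
    smoothed Heaviside with respect to phi."""
    return (eps / np.pi) / (eps**2 + phi**2)
```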

The integral is implemented as a sum over all the image pixels such that the value of this functional is the integral of the image I over all the pixels where ϕ is positive. λ1, λ2, and μ in the level-set equation are the only parameters to be tuned. In order to maintain consistency in the application of the algorithm, we set the values of these parameters to 1. c1 and c2 are the respective average intensities inside and outside the contour ϕ. They are computed as follows:

c_1(\phi) = \frac{\int I(x,y)\, H(\phi(x,y))\, dx\,dy}{\int H(\phi(x,y))\, dx\,dy}, \qquad c_2(\phi) = \frac{\int I(x,y) \left(1 - H(\phi(x,y))\right) dx\,dy}{\int \left(1 - H(\phi(x,y))\right) dx\,dy}
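In discrete form the integrals become sums over pixels; a sketch, assuming the convention that ϕ > 0 inside the contour (flip the sign of ϕ for the opposite convention):

```python
import numpy as np

def region_means(I, phi, eps=1.5):
    """Mean intensities c1 (inside) and c2 (outside) per Eq. (10),
    with integrals replaced by sums over all pixels."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    c1 = (I * H).sum() / H.sum()
    c2 = (I * (1.0 - H)).sum() / (1.0 - H).sum()
    return c1, c2
```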
S̃1 and S̃2 are the low-order Fourier descriptor coefficients of the target shape and the candidate pathology, respectively; minimizing this term drives the segmented shape towards the target. The principal steps of the algorithm are given as follows:
  1. Initialize the signed distance function, ϕ.
  2. Compute the values of c1 and c2 using Equation 10.
  3. Calculate the cost function between Fourier descriptors of target and test objects.
  4. Solve Equation 6 using the discretized Equations 7, 9, and 10.
  5. Iterate until convergence.
The algorithm will converge as the functional reaches a minimum when the contour reaches the boundaries of the object. After segmentation, the images are post-processed to remove small objects and spurious pixels.
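A condensed sketch of steps 1–5 is given below for the Chan-Vese terms of the model; the Fourier-descriptor shape-cost term is omitted for brevity (in the full method it additionally weights the update), and the time step and iteration count are assumed values.

```python
import numpy as np

def curvature(phi):
    """div(grad phi / |grad phi|) via central differences (Eq. 8)."""
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-8   # avoid division by zero
    return np.gradient(gy / mag, axis=0) + np.gradient(gx / mag, axis=1)

def evolve(I, phi, n_iter=200, dt=0.5, mu=1.0, lam1=1.0, lam2=1.0, eps=1.5):
    """Gradient-descent evolution of the region and smoothness terms;
    phi is the initial signed distance function (step 1)."""
    for _ in range(n_iter):
        H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
        delta = (eps / np.pi) / (eps**2 + phi**2)
        c1 = (I * H).sum() / H.sum()                        # step 2
        c2 = (I * (1.0 - H)).sum() / (1.0 - H).sum()
        force = (mu * curvature(phi)
                 - lam1 * (I - c1)**2 + lam2 * (I - c2)**2)
        phi = phi + dt * delta * force                      # steps 4-5
    return phi
```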

4.4. Validation of level-set segmentation

To validate the method, manual segmentation of pathologies was performed by a masked observer (AF) and compared to the automated level-set segmentation results. Manual segmentation was performed by outlining and demarcating regions of pathology on enface retinal images in three DR subjects using ImageJ software. To compare level-set and manual segmentations, metrics of sensitivity, specificity, precision, and accuracy were calculated after categorizing every pixel of the automated segmentation regions as true-positive, true-negative, false-positive, or false-negative based on their correspondence with manually segmented regions. These calculations were performed on binary masks of the manual and automated segmentations.
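These pixel-wise metrics reduce to simple counts over the two binary masks; a minimal sketch:

```python
import numpy as np

def validation_metrics(auto_mask, manual_mask):
    """Sensitivity, specificity, precision, and accuracy of an
    automated binary segmentation against a manual reference."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    tp = np.sum(auto & manual)    # true positives
    tn = np.sum(~auto & ~manual)  # true negatives
    fp = np.sum(auto & ~manual)   # false positives
    fn = np.sum(~auto & manual)   # false negatives
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "precision":   tp / (tp + fp),
            "accuracy":    (tp + tn) / (tp + tn + fp + fn)}
```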

5. Results

We present the results of the application of our segmentation method in three DR subjects and demonstrate co-localization of pathologies in the GCL+IPL and ISe layers in two of the three subjects. Figures 4 and 5 show the segmentation results of our level-set method applied to enface retinal images generated in two DR subjects. The removal of the horizontal lines in the frequency domain aids the segmentation process and allows for a much cleaner representation of irregularities in the image. As shown in the figures, both light and dark irregularly shaped pathologies were detected and outlined, regardless of size, shape, and orientation. Results for DR subject 1 (Fig. 4) show segmentation of predominantly bright regions in the GCL+IPL layer, which represent hard exudates, and dark regions in the ISe layer. Compiled data from 21 enface layers in three DR subjects yielded an average sensitivity and specificity of 0.48 and 0.93 for level-set segmentation, respectively. The average precision (positive predictive value) and accuracy of the level-set segmentation method were 0.52 and 0.91, respectively.

Fig. 4 Enface images of DR subject 1 with and without pathologies segmented. Top row, left to right: NFL, GCL+IPL, INL, and OPL. Bottom row, left to right: ONL, ISe, and RPE. Arrows point to the location of a sample B-scan which is shown below the enface images to indicate the retinal depth (red line) of the enface images and correspondence of visualized pathologies.

Fig. 5 Enface images of DR subject 2 with and without pathologies segmented. Top row, left to right: NFL, GCL+IPL, INL, and OPL. Bottom row, left to right: ONL, ISe, and RPE. Arrows point to the location of a sample B-scan which is shown below the enface images to indicate the retinal depth (red line) of the enface images and correspondence of visualized pathologies.

For DR subject 2 (Fig. 5), the results show both bright and dark segmented regions in the GCL+IPL layer and smaller dark regions in the ISe layer. Sample B-scans, indicating the retinal depth at which enface images were generated, are shown to verify the correspondence between pathologies detectable on enface retinal images and B-scan images. From enface images in DR subject 1, the centroids and areas of the three largest objects in the GCL+IPL and ISe layers were calculated in order to determine the extent of co-localization of pathologies present in these two layers. Bright objects in the enface image of the GCL+IPL layer appeared as dark objects in the ISe layer. Figure 6 shows the respective centroids marked on the GCL+IPL (blue dots) and ISe (yellow dots) enface images for DR subjects 1 and 3. Shown in Table 1 are the estimated areas and centroids of the detected pathologies in the GCL+IPL and ISe enface images. The three segmented pathologies in the top row of Fig. 6 (DR subject 1) are referred to as the left, center, and right pathologies in Table 1. The high correspondence in the locations of the centroids of the pathologies between the GCL+IPL and ISe images is depicted in Table 1 for both DR subjects 1 and 3. The high co-localization of centroids in DR subject 1 indicates shadowing of the pathology in the GCL+IPL layer as the probable source of reduced reflectance of light in the ISe layer.

Fig. 6 Top row: Centroids of pathologies marked on the GCL+IPL and ISe layers of DR subject 1. Bottom row: Centroids of pathologies marked on enface GCL+IPL and ISe images of DR subject 3.

Table 1. Pathology areas and centroids estimated for GCL+IPL and ISe layers in DR subjects 1 and 3. The three segmented pathologies shown in Fig. 6 are referred to here as the left, center, and right pathologies.

We compared our results with results generated from the distance-regularized level-set evolution method of Li et al. [27] in order to emphasize the complications which may arise when using a manually defined contour as opposed to embedding shape priors into the contour. Figure 7 shows the results of segmentation of pathologies in the GCL+IPL and ISe layers using our method (top row) and the distance-regularized method proposed in [27] (bottom row). In the method of [27], the contour encloses two objects as one rather than splitting to detect three larger pathologies and several smaller ones. Similarly, the results shown in the ISe layer depict a single pathology being segmented rather than the multiple pathologies which are clearly visible. As we have chosen the number of samples in the Fourier descriptor to be an integer power of two, the DFT can be efficiently calculated, with a time complexity of O(M log₂ M), where M is the number of sample points [33]. The time complexity of solving the level-set equation for a single image is O(M²), where M is the number of pixels in the image.

Fig. 7 Segmentation results on GCL+IPL and ISe images. Top row: our method. Bottom row: distance-regularized level-set evolution method [27].

6. Conclusion and discussion

Monitoring of pathological changes in retinal layers is important for the management of patients with DR. Segmentation and quantification of regions of pathology in enface OCT images offers a promising approach. We have proposed a level-set based method which incorporates shape priors, and metrics derived from those priors, to propagate the curve towards the boundaries of objects of interest. The parameters of the level-set equation are replaced with parameters which can more appropriately handle variations in intensity within objects. In the current study, the shape priors were defined using Fourier descriptors obtained from 91 pathology samples taken from 35 enface retinal images. In future studies, a larger training set may be required to improve performance and to better evaluate the level-set segmentation method.

We have provided results showing how our method segments pathologies across enface images of retinal layers. Additionally, the centroids and areas of pathologies in the GCL+IPL and ISe images were measured to determine whether the pathologies visible in the GCL+IPL layer affected the visibility of the ISe layer. Confirmation of the detected pathologies by thorough evaluation of small structures on the large number of B-scans was not practical. However, the high sampling density of B-scans and the correspondence between automated and manual segmentation provided support for method validation. The sensitivity of the method may have been adversely affected by several factors, including image smoothing from the Fourier transform step that eliminated horizontal lines, boundary smoothing that prevented jagged edges, and a post-processing step that removed small segmented areas. The sensitivity may be improved by incorporating geometric measures of the pathologies (such as average size, circularity, eccentricity, and jaggedness) as cost terms in the level-set equation. One limitation of the study was the small sample size; thus it is unclear whether this approach can be successfully applied to data sets with more variable image quality. Further studies on larger data sets will be needed to establish the validity of the method and determine its utility for clinical trials.

Overall, use of Fourier descriptors as shape priors provides a compact and robust way of describing objects in terms of their boundaries and can greatly improve the accuracy of the segmentation. Application of the proposed technique can allow clinicians to observe the morphological changes brought about by disease and can also provide insight into factors that affect the integrity of the ISe layer.

Acknowledgments

NIH grant EY001792, Department of VA, Research to Prevent Blindness.

References and links

1. Y. Chen, L. N. Vuong, J. Liu, J. Ho, V. Srinivasan, I. Gorczynska, A. J. Witkin, J. Duker, J. S. Schuman, and J. Fujimoto, "Three-dimensional ultrahigh resolution optical coherence tomography imaging of age-related macular degeneration," Opt. Express 17, 4046–4060 (2009).

2. U. Nair, S. Ganekal, M. Soman, and K. Nair, "Correlation of spectral domain optical coherence tomography findings in acute central serous chorioretinopathy with visual acuity," Clin. Ophthalmol. 6, 1949–1954 (2008).

3. B. Wolff, A. Matet, V. Vasseur, J.-A. Sahel, and M. Mauget-Fasse, "En face OCT imaging for the diagnosis of outer retinal tubulations in age-related macular degeneration," Am. J. Ophthalmol. 2012, 3 (2012).

4. M. zu Bexten E, "Automated segmentation of pathological cavities in optical coherence tomography scans," Invest. Ophthalmol. Vis. Sci. 54, 4385–4393 (2013).

5. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, "Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics," Biomed. Opt. Express 3, 104–124 (2012).

6. J. Wanek, R. Zelkha, J. Lim, and M. Shahidi, "Feasibility of a method for en face imaging of photoreceptor cell integrity," Am. J. Ophthalmol. 152, 807–814 (2011).

7. A. Maheshwary, S. F. Oster, R. M. S. Yuson, L. Cheng, F. Mojana, and W. Freeman, "The association between percent disruption of the photoreceptor inner segment–outer segment junction and visual acuity in diabetic macular edema," Am. J. Ophthalmol. 150, 63–67 (2010).

8. M. U. Akram, A. Tariq, M. A. Anjum, and M. Y. Javed, "Automated detection of exudates in colored retinal images for diagnosis of diabetic retinopathy," Appl. Opt. 51, 4858–4866 (2012).

9. H. Jelinek and M. Cree, Automated Image Detection of Retinal Pathology (CRC Press, 2009).

10. A. Gelas, K. Mosaliganti, A. Gouaillard, L. Souhait, R. Noche, N. Obholzer, and S. G. Megason, "Variational level-set with Gaussian shape model for cell segmentation," in Proc. Int. Conf. Image Processing (IEEE, 2009), pp. 1089–1092.

11. Z. Hu, G. Medioni, M. Hernandez, A. Hariri, X. Wu, and S. Sadda, "Segmentation of the geographic atrophy in spectral-domain optical coherence tomography and fundus autofluorescence images," Invest. Ophthalmol. Vis. Sci. 54, 8375–8383 (2013).

12. Q. Chen, L. de Sisternes, T. Leng, L. Zheng, L. Kutzscher, and D. L. Rubin, "Semi-automatic geographic atrophy segmentation for SD-OCT images," Biomed. Opt. Express 4, 2729–2750 (2013).

13. D. Cremers, M. Rousson, and R. Deriche, "A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape," Int. J. Comput. Vis. 72, 195–215 (2007).

14. M. Airouche, L. Bentabet, and M. Zelmat, "Image segmentation using active contour model and level set method applied to detect oil spills," in Proc. World Congress on Engineering (WCE, 2009).

15. T. Chan and L. Vese, "Active contours without edges," IEEE Trans. Image Process. 10, 266–277 (2001).

16. Y. Yuan and C. He, "Variational level-set methods for image segmentation based on both L2 and Sobolev gradients," Nonlinear Anal. Real World Appl. 13, 959–966 (2012).

17. X. Liu, D. L. Lange, M. A. Haider, T. H. van der Kwast, A. J. Evans, M. N. Wernick, and I. S. Yetik, "Unsupervised segmentation of the prostate using MR images based on level set with a shape prior," in Conf. Proc. IEEE Eng. Med. Biol. Soc. (IEEE, 2009), pp. 3613–3616.

18. D. Cremers, N. Sochen, and C. Schnörr, "Towards recognition-based variational segmentation using shape priors and dynamic labeling," in 4th Intl. Conf. on Scale Space Theories in Computer Vision (Springer, 2003), pp. 388–400.

19. T. Chan and W. Zhu, "Level set based shape prior segmentation," in Computer Vision and Pattern Recognition (IEEE, 2005).

20. S. Chen and R. Radke, "Level-set segmentation with both shape and intensity priors," in Intl. Conf. on Computer Vision (IEEE, 2009), pp. 763–770.

21. G. Liu, X. Sun, K. Fu, and H. Wang, "Aircraft recognition in high-resolution satellite images using coarse-to-fine shape prior," IEEE Geosci. Remote Sens. Lett. (2013).

22. K. Cheng, L. Gu, and J. Xu, "A novel shape prior based level set method for liver segmentation from MR images," in Proc. 5th Intl. Conf. on Information Technology and Applications in Biomedicine (IEEE, 2008).

23. Z. Yang, Y. Kong, and Y. Fu, "Decomposed contour prior for shape recognition," in Intl. Conf. on Pattern Recognition (IEEE, 2012), pp. 760–770.

24. V. A. Prisacariu and I. Reid, "Nonlinear shape manifolds as shape priors in level set segmentation and tracking," in Computer Vision and Pattern Recognition (CVPR), IEEE Conference on (IEEE, 2011).

25. M. A. Charmi, S. Derrode, and F. Ghorbel, "Using Fourier-based shape alignment to add geometric prior to snakes," in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (IEEE, 2009), pp. 1209–1212.

26. M. A. Charmi, M. A. Mezghich, S. M'Hiri, S. Derrode, and F. Ghorbel, "Geometric shape prior to region-based active contours using Fourier-based shape alignment," in Imaging Systems and Techniques (IST), 2010 IEEE Intl. Conf. on (IEEE, 2010), pp. 478–481.

27. C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Trans. Image Process. 19, 3243–3254 (2010).

28. F. J. Rohlf and J. Archie, "A comparison of Fourier methods for the description of wing shape in mosquitoes (Diptera: Culicidae)," Systematic Zoology 33, 302–317 (1984).

29. D. Zhang and G. Lu, "A comparative study on shape retrieval using Fourier descriptors with different shape signatures," in Proc. of Intl. Conf. on Intelligent Multimedia and Distance Education (2001), pp. 1–9.

30. G. Zhang, Z. Ma, Q. Tong, Y. He, and T. Zhao, "Shape feature extraction using Fourier descriptors with brightness in content-based medical image retrieval," in Proc. of the 2008 Intl. Conf. on Intelligent Information Hiding and Multimedia Signal Processing (IEEE Computer Society, 2008), pp. 71–74.

31. F. J. van Rensburg, J. Treurnicht, and C. Fourie, "The use of Fourier descriptors for object recognition in robotic assembly," in Proc. 5th CIRP Intl. Seminar on Intelligent Computation in Manufacturing Engineering (2009).

32. D. Zhang and G. Lu, "A comparative study of Fourier descriptors for shape representation and retrieval," in Proc. of 5th Asian Conference on Computer Vision (ACCV) (2002), pp. 646–651.

33. M. Mandal and A. Asif, Continuous and Discrete Time Signals and Systems (Cambridge University Press, 2006).
