
Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images


Abstract

Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice’s coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice’s coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.

© 2016 Optical Society of America

1. Introduction

A multitude of retinal diseases result in the death or degeneration of photoreceptor cells. As such, the ability to visualize and quantify changes in the photoreceptor mosaic could be useful for the diagnosis and prognosis of these diseases, or for studying treatment efficacy. The use of adaptive optics (AO) to correct for the monochromatic aberrations of the eye has given multiple ophthalmic imaging systems the ability to visualize photoreceptors in vivo [1–8], with the AO scanning light ophthalmoscope (AOSLO) [2] being the most common of these technologies. AO ophthalmoscopes have been utilized to quantify and characterize various aspects of the normal photoreceptor mosaic [1, 9–15], and also in the analysis of mosaics affected by various retinal diseases [16–22].

The resolution advantage of confocal AOSLO enables the visualization of the smallest photoreceptors in the retina: rods and foveal cones [12]. However, confocal AOSLO photoreceptor imaging, which is thought to rely on intact outer segment structure to propagate the waveguided signal, can be ambiguous in identifying retinal structures [23]. For example, in the confocal AOSLO image of the perifoveal retina shown in Fig. 1(a), it is challenging to clearly distinguish cones from rods. To address this issue, and inspired by earlier work on enhanced visualization of retinal vasculature using multiply scattered light [24], non-confocal split detector AOSLO has been developed [25]. Split detector AOSLO reveals the cone inner segment mosaic, even in conditions where the waveguided signal in the accompanying confocal image is diminished [25]. Moreover, as shown in Fig. 1(b), split detector AOSLO images allow unambiguous identification of cones in the normal perifoveal retina.

Fig. 1 Comparison of peripheral rod and cone visualization in confocal and split detector AOSLO imaging in a normal subject. (a) Confocal AOSLO image at 10° from the fovea shows rod and cone structures, but it is challenging to confidently differentiate a cluster of rods from a single cone with a complex waveguide signal. (b) Split detector image from the same location better visualizes cones but is incapable of visualizing rods. The circle indicates a cone that is clearly visible in split detector AOSLO but appears as a collection of rod-like structures in confocal AOSLO. Scale bar: 20 μm.

Identification of individual photoreceptors is often a required step in quantitative analysis of the photoreceptor mosaic. Since manual grading is subjective (with low repeatability for split detector and confocal AOSLO images [26]) and is often too costly and time-consuming for clinical applications, multiple automated algorithms have been developed for detecting cones in AO images [27–37]. However, given the distinctive appearance of cones in split detector AOSLO images, one cannot simply transfer an existing algorithm from other AO modalities (e.g., confocal AOSLO, AO flood illumination, or AO optical coherence tomography) to split detector AOSLO without modification. Thus, quantitative analysis of photoreceptor mosaics visualized on split detector AOSLO currently requires manual grading.

To address this problem, we present a fully-automated algorithm for the detection of cones in non-confocal split detector based AOSLO images. We validate this algorithm against the gold standard of manual marking at a variety of locations within the human retina.

2. Methods

We denote our proposed automated cone segmentation algorithm the adaptive filtering and local detection (AFLD) method. This two-step algorithm first estimates the radius of Yellott’s ring [38, 39], an annular structure that appears in the Fourier transform of cone photoreceptor images and whose radius coarsely corresponds to the modal cone spatial frequency [40]. The estimated frequency is then used to create a Fourier domain adaptive filter that removes information not pertinent to the cones. In the second step, a priori information about the appearance of cones in split detector AOSLO images is used to aid in their detection. The schematic in Fig. 2 outlines the core steps of our algorithm, which are discussed in the following subsections. We first describe the image acquisition and pre-processing procedures in section 2.1. The two steps of the AFLD algorithm are explained in sections 2.2 and 2.3, respectively. Finally, section 2.4 describes the validation process used for evaluating the performance of the algorithm.

Fig. 2 Schematic for AFLD split detector AOSLO cone detection algorithm.

2.1 Image acquisition and pre-processing

This research followed the tenets of the Declaration of Helsinki, and was approved by the institutional review boards at the Medical College of Wisconsin (MCW) and Duke University. Image sets from 14 subjects with normal vision were obtained from the Advanced Ocular Imaging Program image bank (MCW, Milwaukee, Wisconsin). In addition, images from 2 subjects with congenital achromatopsia and 2 subjects with oculocutaneous albinism were obtained. All images were acquired using a previously described split detector AOSLO [7, 25], with a 1.0° field of view.

Axial length measurements were obtained from all subjects using an IOL Master (Carl Zeiss Meditec, Dublin, CA). To convert from image pixels to retinal distance (μm), images of a Ronchi ruling with a known spacing were used to determine the conversion between image pixels and degrees. An adjusted axial length method [41] was then used to approximate the retinal magnification factor (in µm/degree) and calculate the µm/pixel of each image.
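For concreteness, the arithmetic of this conversion is sketched below in MATLAB. The values are illustrative only (not study data), and the constant 0.01306 mm/deg per mm of adjusted axial length is the commonly cited form of Bennett's method [41]; the measured values for each subject would be substituted in practice.

```matlab
% Illustrative pixel-to-micron conversion (example values, not study data).
axialLength = 24.0;                        % IOL Master axial length, mm
degPerPixel = 1.0/450;                     % from Ronchi ruling calibration (example)
rmf = 0.01306*(axialLength - 1.82)*1000;   % retinal magnification factor, um/deg [41]
umPerPixel = rmf*degPerPixel;              % image scale, ~0.64 um/pixel here
```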

We extracted eight split detector images of the photoreceptor mosaic from each subject’s data set within a single randomly chosen meridian (superior, temporal, inferior, nasal) at multiple eccentricities (range: 500-2,800 µm). A region of interest (ROI) containing approximately 100 photoreceptors was extracted from each image, and intensity values were normalized to stretch between 0 and 255.
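A minimal sketch of the ROI extraction and contrast stretch, with a hypothetical file name and ROI corner:

```matlab
I = double(imread('splitDetector.tif'));   % hypothetical split detector image
roi = I(101:300, 101:300);                 % ROI containing ~100 photoreceptors
roi = 255*(roi - min(roi(:)))/(max(roi(:)) - min(roi(:)));  % stretch to [0, 255]
```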

2.2 Fourier domain adaptive filtering

Cone mosaics can be well approximated by the band pass component of their Fourier domain representation [38, 39] (Fig. 3), so removing the low and high frequency components of split detector AOSLO cone images reduces noise and improves image contrast. Thus, in the first step of the AFLD algorithm, we estimated the modal spatial frequency of the cones within the split detector AOSLO images in order to set the cutoff frequencies of a preprocessing band pass filter. This is similar to the method proposed in [42], where the modal frequency is estimated to directly calculate cone density in confocal AOSLO images. Here, we devised an alternate method for estimating the modal frequency to account for the differences in the Fourier spectra between split detector and confocal AOSLO.

Fig. 3 Adaptive filtering of split detector AOSLO cone images. (a) Original image. (b) Discrete Fourier transform of (a), log10 compressed. Red arrow points to Yellott’s ring. (c) Average radial cross section of (b) over 9 sections after filtering, in black; fitted curve in red. (d) Result of subtracting the fitted red curve from the black curve in (c); the peak corresponding to cones is shown in blue. (e) Upper fourth of the peak from (d). (f) Band pass filter in the Fourier domain. (g) Filtered original image. (h) Information removed from (a) by filtering.

In this step, we transformed the image into the frequency domain using the discrete Fourier transform. Next, we applied a log10 transformation to the absolute value of the Fourier transformed image, resulting in a frequency domain image with a roughly circular band corresponding to the spatial frequency of the cones in the original image (Fig. 3(b)). We then applied a 7 × 7 pixel uniform averaging filter to this image to reduce noise.
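In MATLAB, this step amounts to three lines (continuing from the roi variable in the sketch above):

```matlab
F = fftshift(fft2(roi));                   % centered discrete Fourier transform
L = log10(abs(F) + eps);                   % log10 magnitude spectrum (Fig. 3(b))
Ls = conv2(L, ones(7)/49, 'same');         % 7 x 7 uniform averaging filter
```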

In the next step, we estimated the 1-dimensional modal spatial frequency of the cones, corresponding to the radius of Yellott’s ring within the Fourier image. We averaged nine equally spaced slices through the center of the frequency space to get a 1-dimensional curve (black line in Fig. 3(c)) with a prominent peak corresponding to the modal cone spatial frequency. Note that due to the split detector orientation, the bulk of the spectral power for the cone component of the image is near the horizontal axis. Thus, the selected nine slices were spaced angularly from −20 to 20 degrees around the horizontal axis. As can be seen in Fig. 3(c), the resulting curve consists of a peak corresponding to cone information, along with a gradually decreasing distribution. We found that removing this underlying distribution prior to estimating the modal spatial frequency improved the reliability of locating the peak corresponding to the cone information and the final accuracy of the algorithm, even though removing the distribution shifts the peak to slightly higher frequencies. We estimated the underlying distribution using a least-squares fit of a sum of two exponentials to the data (red curve in Fig. 3(c)), and then subtracted the fit from the curve (Fig. 3(d)). Next, we used MATLAB’s findpeaks function to find the peak corresponding to the modal cone spatial frequency. We chose the first prominent peak within the frequency range of 0.04 to 0.16 pixels⁻¹. This range corresponds on average to cone densities between 5,400 and 88,000 cones/mm², and was chosen to encapsulate the range of cone densities seen in healthy eyes at the eccentricities examined [43]. To find the final estimate of the modal cone spatial frequency ($f_C$), we found the center of mass of the upper fourth of the peak (Fig. 3(e)).
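The sketch below illustrates this estimate, continuing from Ls above. It simplifies the peak search by taking the largest peak in the admissible band rather than the first prominent one, and requires the Curve Fitting Toolbox for fit with the 'exp2' model.

```matlab
% Sketch of the modal cone frequency estimate (continues from Ls and roi).
[h, w] = size(Ls);
cx = floor(w/2) + 1;  cy = floor(h/2) + 1;       % spectrum center after fftshift
r = (0:floor(min(h, w)/2) - 1)';                 % radii sampled along each slice
prof = zeros(size(r));
for a = -20:5:20                                 % nine slices about the horizontal axis
    prof = prof + interp2(Ls, cx + r*cosd(a), cy + r*sind(a), 'linear');
end
prof = prof/9;                                   % black curve in Fig. 3(c)

f = r/w;                                         % radius -> spatial frequency, pixels^-1
bg = fit(f, prof, 'exp2');                       % sum-of-two-exponentials background
res = prof - bg(f);                              % background-subtracted curve (Fig. 3(d))

band = f >= 0.04 & f <= 0.16;                    % admissible cone-frequency range
fB = f(band);  rB = res(band);
pk = max(rB);                                    % peak height within the band (simplified)
top = rB >= 0.75*pk;                             % upper fourth of the peak (Fig. 3(e))
fC = sum(fB(top).*rB(top))/sum(rB(top));         % center of mass -> modal frequency
```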

We used the estimate of the cone spatial frequency to create a binary annular band pass filter (Fig. 3(f)), with upper cutoff frequency $f_C + 0.04$ and lower cutoff frequency $\max(f_C - 0.04,\, 0.025)$ pixels⁻¹; the hard lower bound was used to ensure adequate low frequency information was removed. We then multiplied this filter by the Fourier transformed cone photoreceptor image, and inverse Fourier transformed the resulting filtered image back to the space domain (Fig. 3(g)), which has much of the non-cone low and high frequency fluctuations (Fig. 3(h)) removed. We used no additional windowing beyond what was described for the forward and inverse Fourier transforms.
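Continuing the sketch, the annular mask and the return to the space domain (assuming a square image when normalizing frequencies):

```matlab
% Binary annular band pass filter and inverse transform (continues from F, fC).
[h, w] = size(F);
[X, Y] = meshgrid(1:w, 1:h);
fr = sqrt((X - (floor(w/2)+1)).^2 + (Y - (floor(h/2)+1)).^2)/w;  % radial frequency
lo = max(fC - 0.04, 0.025);                      % hard lower bound on the cutoff
hi = fC + 0.04;
mask = fr >= lo & fr <= hi;                      % binary annulus (Fig. 3(f))
Ifil = real(ifft2(ifftshift(mask.*F)));          % band pass filtered image (Fig. 3(g))
```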

2.3 Cone detection

The second step in the AFLD method is to find the location of cones within the filtered image. In split detector images, individual cones appear as pairs of horizontally separated dark and bright regions (Fig. 4(a)). The relative shift of these two regions is constant throughout an image and depends on the orientation of the split detection aperture (all images in this study had dark regions to the left of light regions). Thus, to improve detection accuracy, we exploited this a priori information on the paired local minima and maxima manifestation of cones in split detector AOSLO images.

Fig. 4 Detection of cones in the band pass filtered image from Fig. 3. (a) Original image. (b) Filtered image, same as Fig. 3(g), with local minima marked in red and maxima in green. (c) Matched pairs of minima and maxima. (d) Final detection results on the original image from (a). Cones found using matched pairs are shown in green and the cone found using a maximum value only is shown in red.

First, we found the locations and intensity values of all local minima and maxima in the filtered image (Fig. 4(b)). Next, we paired the minima and maxima together (Fig. 4(c)) following the constraints that 1) a maximum must be to the right of its minimum, 2) the points must be within a weighted distance of $1.5 f_C^{-1}$ of each other, 3) for each maximum only the pair with the smallest weighted distance may be used, and 4) each minimum is included in only one pair. The weighted distance is defined as $d_w = \sqrt{(X_2 - X_1)^2 + (2(Y_2 - Y_1))^2}$, which weights the vertical component more heavily in order to prioritize horizontal matches. We used the inverse of the modal frequency to limit the search regions, as it is related to the size and spacing of cones. To find these opposing extrema pairs, we used the convex hull function [44] to find the closest (using $d_w$) minimum for each maximum under the given constraints. To make sure the matches are one-to-one, we checked each paired minimum to see if it was paired with multiple maxima, and if so, we kept only the pair with the smallest distance between points. In order to satisfy the above constraints, some maxima may have no corresponding minimum; we recorded these unpaired maxima to be analyzed as well.
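The sketch below substitutes a brute-force greedy nearest-neighbor search for the convex hull based search of [44]; the constraints and the weighted distance are as described above. It continues from Ifil and fC, and uses imregionalmax/imregionalmin from the Image Processing Toolbox.

```matlab
% Greedy stand-in for the extrema pairing (continues from Ifil and fC).
[yMx, xMx] = find(imregionalmax(Ifil));          % local maxima (bright lobes)
[yMn, xMn] = find(imregionalmin(Ifil));          % local minima (dark lobes)

% Weighted distance d_w doubles the vertical term to prioritize horizontal pairs.
dw = sqrt((xMx - xMn').^2 + (2*(yMx - yMn')).^2);   % maxima-by-minima matrix
dw(xMx <= xMn') = Inf;                           % maxima must lie right of minima
dw(dw > 1.5/fC) = Inf;                           % pairs must be within 1.5/fC

pairs = zeros(0, 2);                             % rows of [max index, min index]
usedMx = false(numel(xMx), 1);  usedMn = false(numel(xMn), 1);
[dSrt, ord] = sort(dw(:));                       % closest candidate pairs first
for k = 1:numel(dSrt)
    if isinf(dSrt(k)), break; end
    [i, j] = ind2sub(size(dw), ord(k));
    if ~usedMx(i) && ~usedMn(j)                  % enforce one-to-one matching
        pairs(end+1, :) = [i, j];                %#ok<AGROW>
        usedMx(i) = true;  usedMn(j) = true;
    end
end
loneMx = find(~usedMx);                          % unpaired maxima, kept for later
```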

The final step is to use the matched extrema pairs to determine the locations of cones. We calculated the average intensity magnitude for each opposing extrema pair as $I_p = (I_{\text{maxima}} - I_{\text{minima}})/2$. We then used thresholding to determine whether an extrema pair corresponded to a cone in the image. The threshold $T$ varied for different modal cone frequencies and for each image was defined as:

$$T = \begin{cases} 0.875\sigma, & \text{if } f_C < 0.065 \text{ pixels}^{-1}\\ 0.7\sigma, & \text{if } 0.065 \le f_C < 0.075 \text{ pixels}^{-1}\\ 0.525\sigma, & \text{if } f_C \ge 0.075 \text{ pixels}^{-1}, \end{cases}\tag{1}$$
where σ is the standard deviation of the band pass filtered image. We accepted all extrema pairs with $I_p > T$ as corresponding to cones, and the location of each cone was defined as the average of the minimum's and maximum's locations (Fig. 4(d)). We chose the threshold values in Eq. (1) empirically through training, based on the observation that there was a tendency to overestimate the number of cones at low modal frequencies and underestimate it at high frequencies. As such, the threshold values are generally higher at higher eccentricities and lower at eccentricities closer to the fovea. These thresholds were set using a training data set from subjects that were not included in the testing set.
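Continuing the sketch, the thresholding of Eq. (1) and the final cone localization (including the unpaired maxima described next) might look as follows:

```matlab
% Adaptive threshold and final localization (continues from Ifil, fC, pairs,
% loneMx, and the extrema coordinates of the previous sketch).
sigma = std(Ifil(:));                            % std. dev. of the filtered image
if fC < 0.065
    T = 0.875*sigma;  Tlone = 2.1*sigma;         % Eq. (1), plus the unpaired-maxima
elseif fC < 0.075                                % thresholds described in the text
    T = 0.7*sigma;    Tlone = 1.75*sigma;
else
    T = 0.525*sigma;  Tlone = 1.4*sigma;
end

iMx = sub2ind(size(Ifil), yMx, xMx);             % linear indices of the extrema
iMn = sub2ind(size(Ifil), yMn, xMn);
Ip = (Ifil(iMx(pairs(:,1))) - Ifil(iMn(pairs(:,2))))/2;  % pair intensity magnitude
keep = Ip > T;                                   % accept pairs above threshold
coneXY = [(xMx(pairs(keep,1)) + xMn(pairs(keep,2)))/2, ...
          (yMx(pairs(keep,1)) + yMn(pairs(keep,2)))/2];  % cone = pair midpoint

lone = loneMx(Ifil(iMx(loneMx)) > Tlone);        % unpaired maxima vs. higher bar
coneXY = [coneXY; xMx(lone), yMx(lone)];         % located at the maximum itself
```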

Additionally, we evaluated the unpaired maxima in the same way using $I_{\text{maxima}}$ instead of $I_p$, with the thresholds for the same frequency bounds set to 2.1σ, 1.75σ, and 1.4σ, respectively. Naturally, cones found using a maximum alone have their location set to the maximum's position. Only three percent of the cones found across the validation data set came from maxima alone.

2.4 Measures of performance and validation

Subjects with normal vision were divided into groups for training and validation. Subjects from the training and validation data sets did not overlap. In order to form the training group, one subject from each of the four meridians was randomly selected. All 8 images from each subject were used, totaling 32 images for algorithm training. All parameters used in implementing the algorithm were set based on this training. An expert grader performed the manual evaluations used in this process. We then used the 80 images from the remaining 10 subjects for performance evaluation of the algorithm.

We computed standard measures of performance for the AFLD algorithm with respect to the gold standard of manual marking. Here, two expert manual graders independently evaluated the validation set; the first was the expert used in training the algorithm, and a second grader was also engaged. Thus, for each eye there were data from the AFLD algorithm and from each of the graders. Two approaches were taken. First, we focused on the sensitivity and precision of the AFLD method. We analyzed the spatial distributions within each image of the cones identified by AFLD and by the manual assessments, and, using a nearest neighbor analysis, identified the overlapping cones (found by both methods) and the sources (AFLD or manual) of the non-overlapping cones. This enabled determination of sensitivity (true positive rate) and precision (1 − false discovery rate). Second, we considered the total number of cones detected per image, expressed as cone density, and contrasted it across the AFLD and manual results from both graders on a per-image basis for the validation data set.

For the sensitivity/precision analysis, we focused on identifying the cones in each image that both AFLD and the manual grading detected, those that AFLD failed to detect, and those that it falsely detected. We found one-to-one pairs between the automatic and manual locations with the following constraints: 1) there are no restrictions on the orientation of the two locations with respect to each other, 2) the two locations must be within $0.75\,d_{\text{med}}$ of each other (where $d_{\text{med}}$ is the median cone spacing from the manual marking of the image), 3) for each manual marking only the pair with the smallest distance is used, and 4) each automatic marking is included in only one pair. We estimated $d_{\text{med}}$ for an image by finding the distance from each manually marked cone to its nearest neighbor in pixels and then taking the median of these measurements. To remove border artifacts, we did not consider cones located within 7 pixels of the edge of the image; this border value corresponds to half the average value of $d_{\text{med}}$ across our data set.

We denote the number of cones detected by both AFLD and manual as $N_{\text{True Positive}}$, by manual with no corresponding automatic as $N_{\text{False Negative}}$, and by automatic with no corresponding manual as $N_{\text{False Positive}}$. The numbers of cones from both the automatic and manual markings are then expressed as:

$$N_{\text{Automatic Cones}} = N_{\text{True Positive}} + N_{\text{False Positive}},\tag{2}$$
$$N_{\text{Manual Cones}} = N_{\text{True Positive}} + N_{\text{False Negative}}.\tag{3}$$
In order to compare results of the automatic and manual approaches, we calculated the true positive rate, false discovery rate, and Dice’s coefficient [45, 46] as:
$$\text{true positive rate} = N_{\text{True Positive}} / N_{\text{Manual Cones}},\tag{4}$$
$$\text{false discovery rate} = N_{\text{False Positive}} / N_{\text{Automatic Cones}},\tag{5}$$
$$\text{Dice's coefficient} = 2N_{\text{True Positive}} / \left(N_{\text{Manual Cones}} + N_{\text{Automatic Cones}}\right).\tag{6}$$
Dice’s coefficient is a common metric for describing the overall similarity between two sets of observations, and is affected by both the true and false positives. Additionally, we calculated the same metrics comparing the two graders to one another. We used paired two-sided Wilcoxon signed rank tests to analyze the significance of the differences in each of these three summary metrics.
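A simplified version of the matching described above, together with the metrics of Eqs. (4)-(6), is sketched below. Here autoXY and manXY are hypothetical N × 2 [x, y] marking arrays, a greedy closest-first pass stands in for the exact matching constraints, and the 7-pixel border exclusion is omitted for brevity; pdist2 requires the Statistics and Machine Learning Toolbox.

```matlab
% Sketch of the one-to-one validation matching and summary metrics.
nn = pdist2(manXY, manXY, 'euclidean', 'Smallest', 2);  % row 2: nearest neighbor
dMed = median(nn(2, :));                         % median manual cone spacing, pixels

D = pdist2(manXY, autoXY);                       % manual-to-automatic distances
D(D > 0.75*dMed) = Inf;                          % matches must lie within 0.75*d_med

usedM = false(size(manXY,1), 1);  usedA = false(size(autoXY,1), 1);
[dSrt, ord] = sort(D(:));                        % greedy one-to-one, closest first
for k = 1:numel(dSrt)
    if isinf(dSrt(k)), break; end
    [i, j] = ind2sub(size(D), ord(k));
    if ~usedM(i) && ~usedA(j)
        usedM(i) = true;  usedA(j) = true;
    end
end
nTP = nnz(usedM);  nFN = nnz(~usedM);  nFP = nnz(~usedA);

tpr = nTP/(nTP + nFN);                           % true positive rate, Eq. (4)
fdr = nFP/(nTP + nFP);                           % false discovery rate, Eq. (5)
dice = 2*nTP/(2*nTP + nFN + nFP);                % Dice's coefficient, Eq. (6)
```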

In order to compare the total numbers of cones found in images without considering their spatial locations, we calculated the cone density, defined as the ratio of the number of cones marked in an image to the total area of the image. This was compared across AFLD and both graders: grader #1 vs. AFLD; grader #2 vs. AFLD; grader #1 and grader #2 average (per image) vs. AFLD; and grader #1 vs grader #2. Two complementary approaches were taken. First, linear regression and correlation analyses were conducted. Then, Bland-Altman analyses were performed [47].
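Both analyses reduce to a few lines. In the sketch below, densityA and densityB are hypothetical per-image cone density vectors (cones/mm²) for the two methods being contrasted; corr requires the Statistics and Machine Learning Toolbox.

```matlab
% Sketch of the density comparison: regression plus Bland-Altman statistics.
p = polyfit(densityA, densityB, 1);              % regression slope p(1), intercept p(2)
r2 = corr(densityA, densityB)^2;                 % coefficient of determination

d = densityA - densityB;                         % per-image density differences
bias = mean(d);                                  % mean difference (solid line, Fig. 8)
loa = bias + 1.96*std(d)*[-1 1];                 % 95% limits of agreement (dotted lines)
```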

3. Results

3.1 Cone detection in healthy eyes

We coded the fully automated AFLD algorithm in MATLAB (The MathWorks, Natick, MA). The algorithm was run on a desktop computer with an i7-5930K CPU at 3.5 GHz and 64 GB of RAM, with parallelization across six cores with hyper-threading. The mean run time for the AFLD algorithm was 0.03 seconds per image across the validation data set (average image size of 206.5 × 206.5 pixels), including the time for reading the images and saving the results.

Across the validation data set, the AFLD algorithm found over 10,500 cones in total. Figure 5 shows representative segmentation results for three images from different subjects, acquired at different eccentricities. Note the differences in cone spacing and image quality.

Fig. 5 AFLD cone detection in images of different cone spacing and image quality. Original images are shown in the top row and images with automatically detected cones marked in green are shown in the bottom row. Images have increasing cone density from left to right. The cone density is 8,947 cones/mm² in (a), 14,206 cones/mm² in (b), and 34,063 cones/mm² in (c), as determined by the first manual marking. Dice’s coefficients are 0.9843 for (a), 0.9774 for (b), and 0.9243 for (c).

Figure 6 illustrates results for the AFLD and first manual marking for three images. In the set of marked images, matched pairs between AFLD and manual are shown in green for automatic and yellow for manual (true positives). Cones missed by AFLD are shown in cyan (false negatives), and locations marked by AFLD but not by the manual grader are shown in red (false positives).

Fig. 6 Comparison of AFLD marking to the first manual marking on images with varying quality and cone contrast. Original images are shown in the top row. AFLD and manual markings are shown in the bottom row as follows: green (automatic) and yellow (manual) denote a match; cyan denotes a false negative; and red denotes a false positive. Dice’s coefficients are 0.9957 for (a), 0.9461 for (b), and 0.9123 for (c), approximately one standard deviation above, at, and one standard deviation below the mean value for the validation set, respectively.

Tables 1 and 2 summarize results of the sensitivity/precision analysis of the AFLD algorithm. Quantitatively, the true positive rates and Dice’s coefficients are relatively high. The false discovery rates are relatively low (albeit with notably greater variability across all the images for all methods). Table 1 summarizes the performance of the AFLD algorithm and grader #2, while using grader #1 as the gold standard. No significant differences were found between AFLD and the second manual grader for the true positive rates (p = 0.96), false discovery rates (p = 0.18), and Dice’s coefficients (p = 0.06). Table 2 shows analogous contrasts, now using grader #2 as the gold standard and evaluating performance of grader #1 as well as AFLD. Here, there were statistically significant differences in the true positive rates (p = 0.048), false discovery rates (p<0.001), and Dice’s coefficients (p<0.001), even though their absolute differences per image were relatively small.

Table 1. Performance of AFLD and the second manual marking with respect to the first manual marking across the validation data set (standard deviations shown in parentheses).

Table 2. Performance of AFLD and the first manual marking with respect to the second manual marking across the validation data set (standard deviations shown in parentheses); *: p<0.05, ***: p<0.001.

Figure 7 gives a set of scatter plots of results for the cone density per image. These plots contrast manual grader #1 vs. AFLD (Fig. 7(a)), manual grader #2 vs. AFLD (Fig. 7(b)), the average per image of the two graders vs. AFLD (Fig. 7(c)), and manual grader #1 vs. manual grader #2 (Fig. 7(d)). All plots could be modeled as linear (p<0.001), with relatively small confidence intervals about the regression lines. The slopes of all regression lines in Fig. 7 were significantly different from unity (p<0.001), and the intercepts of all lines were significantly different from zero (p<0.001).

Fig. 7 Scatter plots of cone density measures for (a) grader #1 vs. AFLD, (b) grader #2 vs. AFLD, (c) average of both graders vs. AFLD, and (d) grader #1 vs. grader #2. In each plot, the solid black line shows the regression line, with the corresponding equation, coefficient of determination (R²), and p value shown in the upper left corner. Dotted lines are 95% confidence intervals.

Figure 8 gives Bland-Altman plots for the same contrasts as Fig. 7. The solid line is the average difference per method and the dotted lines are 95% confidence limits. Notably, for cone densities below about 2.25 × 10⁴ cones/mm², differences are within the confidence limit intervals. Above that density, there is more scatter between the two manual graders as well as in the comparisons with AFLD. This is consistent with the scatter in Fig. 7.

Fig. 8 Bland-Altman plots of cone density for (a) grader #1 - AFLD, (b) grader #2 - AFLD, (c) average of both graders - AFLD, and (d) grader #1 - grader #2. The solid black line shows the mean difference, and the dotted lines show the 95% confidence limits of agreement.

3.2 Preliminary cone detection in pathologic eyes

Section 3.1 described our detailed quantitative analysis of AFLD performance for detecting cones in split detector AOSLO images of normal eyes, which is the main goal of this paper. As a demonstrative example, we also tested our algorithm on four subjects with diseases that affect photoreceptors. Figure 9 shows the segmentation results for two subjects with albinism (Fig. 9(a)-9(b)) and two subjects with achromatopsia (Fig. 9(c)-9(d)). In these experiments, we applied the AFLD algorithm as-is, without any modification. The extension and quantitative validation of our algorithm for diseased eyes are beyond the scope of this paper, and will be fully addressed in upcoming publications.

Fig. 9 AFLD cone detection in split detector images of four subjects with photoreceptor pathology. Original images are shown in the left column, and images with automatically detected cones marked in green are shown in the right column. Subject pathologies are (a-b) oculocutaneous albinism and (c-d) achromatopsia.

4. Discussion

We developed a fully automated algorithm for localizing cones in non-confocal split detector based AOSLO images of healthy eyes. The method combines an adaptive filter with a priori information about the imaging modality to aid in detection. The algorithm was validated (without any manual adjustment of parameters, which were estimated from a separate training data set) against the current gold standard of manual segmentation. Our fast algorithm performed with a high degree of sensitivity, precision, and overall similarity, as defined by Dice’s coefficient, when compared to manual grading on a large data set of images differing greatly in appearance and cone density.

There were slight differences in overall similarity (Dice’s coefficient) between the AFLD algorithm and the two manual graders (Tables 1, 2). When comparing the automated marking and the second grader’s manual marking to the first grader’s marking, the AFLD results were closer to the gold standard (without statistical significance). Conversely, when comparing the automated marking and the first grader’s manual marking to the second grader’s marking, the first grader’s results were closer to the gold standard (with statistical significance). This may be partly because the algorithmic parameters were determined from the training set marking of the first grader (the more experienced of the two). That is, our algorithm tends to mimic grader #1’s marking; thus it is reasonable that the automatic results would more closely match the first grader’s marking for the validation set. However, it should be emphasized that regardless of statistical significance, the Dice’s coefficient difference between the automated and manual methods in each of the above two cases was on the order of 0.01, a negligible amount in practice.

The AFLD method also shows good performance in determining per-image summary measures of cone density. The linear regression and correlation analyses of the cone density scatter plots in Fig. 7 demonstrate the high degree of linear correspondence between the automatic and manual methods. Correlation is higher for AFLD vs. grader #1, consistent with the discussion above. Notably, the scatter plots show less (albeit still high) correlation between the two graders. Given the large sample size (n = 80), it is not surprising that, in a formal statistical sense, the slopes and intercepts of the regression lines were found to be statistically different from one and zero, respectively. However, the magnitude of these differences was sufficiently small to demonstrate the fidelity of the automatic approach. The differences between the automated and manual results are also small, as seen in the Bland-Altman plots of Fig. 8. Again, these differences are greater when AFLD is compared to grader #2 than to grader #1, and, as for the scatter plots, the differences for grader #1 vs. grader #2 are also relatively larger. The discrepancies tend to occur at higher cone densities, where the AFLD method tends to underestimate the manually determined number of cones. The smaller, more tightly packed cones at these higher densities are more difficult to detect even for manual graders, as illustrated in Fig. 7(d) and Fig. 8(d).

Interpretation and quantification of retinal anatomy and pathology based on a single ophthalmic imaging modality are at times unreliable. Of course, it is not surprising that a higher resolution system such as confocal AOSLO can visualize pathology not identifiable on clinical imaging systems such as optical coherence tomography (OCT) [22]. Moreover, as shown in Fig. 1, perifoveal cones and rods can often be better identified on split detector AOSLO and confocal AOSLO, respectively. Thus, optimal quantification of rod and cone photoreceptor structures requires analysis of both confocal and split detector AOSLO images from the same subject. On another front, OCT’s superior axial resolution with respect to AOSLO provides important complementary 3D information about retinal structures. As such, a recent study recommends employing a multi-modality imaging approach (including OCT and AOSLO systems) to provide the additional evidence needed for confident identification of photoreceptors [23]. Development of such multi-modality image segmentation algorithms is part of our ongoing work.

A limitation of this study is that we only trained and quantitatively tested our algorithm on split detector AOSLO images of healthy eyes. Our data set does, however, contain images with significant variability in appearance (e.g., Fig. 6). Additionally, the algorithm relies on being able to locate Yellott’s ring, which may be difficult for the irregular mosaics present in some disease cases. As a future approach, we are developing a kernel regression [48] based machine learning algorithm trained on a large data set from diseased eyes in order to locate cones without reliance on Yellott’s ring. However, the qualitative results on the pathologic images in Fig. 9 show promise for the adaptability of our algorithm to diseased cases. It is encouraging that the AFLD algorithm is able to correctly locate cones and ignore vascular and pathologic features that were not seen in training. We note that direct application of an algorithm developed for normal eyes is not expected to be robust for all types of pathology, as was the case in the development of automatic segmentation algorithms for retinal OCT. However, development of automated OCT segmentation algorithms for normal eyes (e.g., [49]) was the crucial first step toward future algorithms robust to pathology (e.g., [48, 50]). Indeed, as in the case of segmenting pathologic features in OCT images, there is a long road ahead in developing a robust, comprehensive, fully automatic AOSLO segmentation algorithm applicable to a large set of ophthalmic diseases.

Acknowledgments

Research reported in this publication was supported by the National Eye Institute and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award numbers R01EY017607, R01EY024969, P30EY001931, and T32EB001040. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References and links

1. A. Roorda and D. R. Williams, “The arrangement of the three cone classes in the living human eye,” Nature 397(6719), 520–522 (1999).

2. A. Roorda, F. Romero-Borja, W. Donnelly III, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002).

3. R. J. Zawadzki, S. M. Jones, S. S. Olivier, M. Zhao, B. A. Bower, J. A. Izatt, S. Choi, S. Laut, and J. S. Werner, “Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging,” Opt. Express 13(21), 8532–8546 (2005).

4. D. Merino, C. Dainty, A. Bradu, and A. G. Podoleanu, “Adaptive optics enhanced simultaneous en-face optical coherence tomography and scanning laser ophthalmoscopy,” Opt. Express 14(8), 3345–3353 (2006).

5. C. Torti, B. Považay, B. Hofer, A. Unterhuber, J. Carroll, P. K. Ahnelt, and W. Drexler, “Adaptive optics optical coherence tomography at 120,000 depth scans/s for non-invasive cellular phenotyping of the living human retina,” Opt. Express 17(22), 19382–19400 (2009).

6. R. D. Ferguson, Z. Zhong, D. X. Hammer, M. Mujat, A. H. Patel, C. Deng, W. Zou, and S. A. Burns, “Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking,” J. Opt. Soc. Am. A 27(11), A265–A277 (2010).

7. A. Dubra and Y. Sulai, “Reflective afocal broadband adaptive optics scanning ophthalmoscope,” Biomed. Opt. Express 2(6), 1757–1768 (2011).

8. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012).

9. A. Roorda and D. R. Williams, “Optical fiber properties of individual human cones,” J. Vis. 2(5), 404–412 (2002).

10. Y. Kitaguchi, K. Bessho, T. Yamaguchi, N. Nakazawa, T. Mihashi, and T. Fujikado, “In vivo measurements of cone photoreceptor spacing in myopic eyes from images obtained by an adaptive optics fundus camera,” Jpn. J. Ophthalmol. 51(6), 456–461 (2007).

11. T. Y. Chui, H. Song, and S. A. Burns, “Adaptive-optics imaging of human cone photoreceptor distribution,” J. Opt. Soc. Am. A 25(12), 3021–3029 (2008).

12. A. Dubra, Y. Sulai, J. L. Norris, R. F. Cooper, A. M. Dubis, D. R. Williams, and J. Carroll, “Noninvasive imaging of the human rod photoreceptor mosaic using a confocal adaptive optics scanning ophthalmoscope,” Biomed. Opt. Express 2(7), 1864–1876 (2011).

13. M. Pircher, J. S. Kroisamer, F. Felberer, H. Sattmann, E. Götzinger, and C. K. Hitzenberger, “Temporal changes of human cone photoreceptors observed in vivo with SLO/OCT,” Biomed. Opt. Express 2(1), 100–112 (2011).

14. O. P. Kocaoglu, S. Lee, R. S. Jonnal, Q. Wang, A. E. Herde, J. C. Derby, W. Gao, and D. T. Miller, “Imaging cone photoreceptors in three dimensions and in time using ultrahigh resolution optical coherence tomography with adaptive optics,” Biomed. Opt. Express 2(4), 748–763 (2011).

15. M. Lombardo, S. Serrao, and G. Lombardo, “Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images,” PLoS One 9(9), e107402 (2014).

16. S. S. Choi, N. Doble, J. L. Hardy, S. M. Jones, J. L. Keltner, S. S. Olivier, and J. S. Werner, “In vivo imaging of the photoreceptor mosaic in retinal dystrophies and correlations with visual function,” Invest. Ophthalmol. Vis. Sci. 47(5), 2080–2092 (2006).

17. J. L. Duncan, Y. Zhang, J. Gandhi, C. Nakanishi, M. Othman, K. E. H. Branham, A. Swaroop, and A. Roorda, “High-resolution imaging with adaptive optics in patients with inherited retinal degeneration,” Invest. Ophthalmol. Vis. Sci. 48(7), 3283–3291 (2007).

18. S. S. Choi, R. J. Zawadzki, M. A. Greiner, J. S. Werner, and J. L. Keltner, “Fourier-domain optical coherence tomography and adaptive optics reveal nerve fiber layer loss and photoreceptor changes in a patient with optic nerve drusen,” J. Neuroophthalmol. 28(2), 120–125 (2008).

19. S. Ooto, M. Hangai, A. Sakamoto, A. Tsujikawa, K. Yamashiro, Y. Ojima, Y. Yamada, H. Mukai, S. Oshima, T. Inoue, and N. Yoshimura, “High-resolution imaging of resolved central serous chorioretinopathy using adaptive optics scanning laser ophthalmoscopy,” Ophthalmology 117(9), 1800–1809 (2010).

20. D. Merino, J. L. Duncan, P. Tiruveedhula, and A. Roorda, “Observation of cone and rod photoreceptors in normal subjects and patients using a new generation adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 2(8), 2189–2201 (2011).

21. Y. Kitaguchi, S. Kusaka, T. Yamaguchi, T. Mihashi, and T. Fujikado, “Detection of photoreceptor disruption by adaptive optics fundus imaging and Fourier-domain optical coherence tomography in eyes with occult macular dystrophy,” Clin. Ophthalmol. 5, 345–351 (2011).

22. K. E. Stepien, W. M. Martinez, A. M. Dubis, R. F. Cooper, A. Dubra, and J. Carroll, “Subclinical photoreceptor disruption in response to severe head trauma,” Arch. Ophthalmol. 130(3), 400–402 (2012).

23. A. Roorda and J. L. Duncan, “Adaptive optics ophthalmoscopy,” Annu. Rev. Vis. Sci. 1(1), 19–50 (2015).

24. T. Y. P. Chui, D. A. Vannasdale, and S. A. Burns, “The use of forward scatter to improve retinal vascular imaging with an adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 3(10), 2537–2549 (2012).

25. D. Scoles, Y. N. Sulai, C. S. Langlo, G. A. Fishman, C. A. Curcio, J. Carroll, and A. Dubra, “In vivo imaging of human cone photoreceptor inner segments,” Invest. Ophthalmol. Vis. Sci. 55(7), 4244–4251 (2014).

26. M. A. Abozaid, C. S. Langlo, A. M. Dubis, M. Michaelides, S. Tarima, and J. Carroll, “Reliability and repeatability of cone density measurements in patients with congenital achromatopsia,” Adv. Exp. Med. Biol. 854, 277–283 (2016).

27. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” J. Opt. Soc. Am. A 24(5), 1358–1363 (2007).

28. B. Xue, S. S. Choi, N. Doble, and J. S. Werner, “Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera,” J. Opt. Soc. Am. A 24(5), 1364–1372 (2007).

29. D. H. Wojtas, B. Wu, P. K. Ahnelt, P. J. Bones, and R. P. Millane, “Automated analysis of differential interference contrast microscopy images of the foveal cone mosaic,” J. Opt. Soc. Am. A 25(5), 1181–1189 (2008).

30. A. Turpin, P. Morrow, B. Scotney, R. Anderson, and C. Wolsley, “Automated identification of photoreceptor cones using multi-scale modelling and normalized cross-correlation,” in Image Analysis and Processing – ICIAP 2011, G. Maino and G. Foresti, eds. (Springer Berlin Heidelberg, 2011), pp. 494–503.

31. K. Loquin, I. Bloch, K. Nakashima, F. Rossant, P.-Y. Boelle, and M. Paques, “Automatic photoreceptor detection in in-vivo adaptive optics retinal images: statistical validation,” in Image Analysis and Recognition, A. Campilho and M. Kamel, eds. (Springer Berlin Heidelberg, 2012), pp. 408–415.

32. X. Liu, Y. Zhang, and D. Yun, “An automated algorithm for photoreceptors counting in adaptive optics retinal images,” Proc. SPIE 8419, 84191Z (2012).

33. R. Garrioch, C. Langlo, A. M. Dubis, R. F. Cooper, A. Dubra, and J. Carroll, “Repeatability of in vivo parafoveal cone density and spacing measurements,” Optom. Vis. Sci. 89(5), 632–643 (2012).

34. F. Mohammad, R. Ansari, J. Wanek, and M. Shahidi, “Frequency-based local content adaptive filtering algorithm for automated photoreceptor cell density quantification,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2012), pp. 2325–2328.

35. S. J. Chiu, Y. Lokhnygina, A. M. Dubis, A. Dubra, J. Carroll, J. A. Izatt, and S. Farsiu, “Automatic cone photoreceptor segmentation using graph theory and dynamic programming,” Biomed. Opt. Express 4(6), 924–937 (2013).

36. L. Mariotti and N. Devaney, “Performance analysis of cone detection algorithms,” J. Opt. Soc. Am. A 32(4), 497–506 (2015).

37. D. M. Bukowska, A. L. Chew, E. Huynh, I. Kashani, S. L. Wan, P. M. Wan, and F. K. Chen, “Semi-automated identification of cones in the human retina using circle Hough transform,” Biomed. Opt. Express 6(12), 4676–4693 (2015).

38. J. I. Yellott Jr., “Spectral analysis of spatial sampling by photoreceptors: Topological disorder prevents aliasing,” Vision Res. 22(9), 1205–1210 (1982).

39. J. I. Yellott Jr., “Spectral consequences of photoreceptor sampling in the rhesus retina,” Science 221(4608), 382–385 (1983).

40. N. J. Coletta and D. R. Williams, “Psychophysical estimate of extrafoveal cone spacing,” J. Opt. Soc. Am. A 4(8), 1503–1513 (1987).

41. A. G. Bennett, A. R. Rudnicka, and D. F. Edgar, “Improvements on Littmann’s method of determining the size of retinal features by fundus photography,” Graefes Arch. Clin. Exp. Ophthalmol. 232(6), 361–367 (1994).

42. R. F. Cooper, C. S. Langlo, A. Dubra, and J. Carroll, “Automatic detection of modal spacing (Yellott’s ring) in adaptive optics scanning light ophthalmoscope images,” Ophthalmic Physiol. Opt. 33(4), 540–549 (2013).

43. C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson, “Human photoreceptor topography,” J. Comp. Neurol. 292(4), 497–523 (1990).

44. C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, “The quickhull algorithm for convex hulls,” ACM Trans. Math. Softw. 22(4), 469–483 (1996).

45. L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology 26(3), 297–302 (1945).

46. T. Sørensen, “A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons,” Biol. Skr. 5, 1–34 (1948).

47. J. M. Bland and D. G. Altman, “Statistical methods for assessing agreement between two methods of clinical measurement,” Lancet 327(8476), 307–310 (1986).

48. S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express 6(4), 1172–1194 (2015).

49. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010).

50. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012).
