Optica Publishing Group

Intraretinal fluid identification via enhanced maps using optical coherence tomography images

Open Access

Abstract

Age-related macular degeneration (AMD) and diabetic macular edema (DME) are nowadays among the main causes of blindness in developed countries. Both diseases present, as a common symptom, the appearance of cystoid fluid regions inside the retinal layers. Optical coherence tomography (OCT) has become one of the main medical imaging techniques for the early diagnosis and monitoring of AMD and DME via the detection and characterization of this intraretinal fluid. We present a novel methodology to identify these fluid accumulations by generating binary maps (offering a direct representation of these areas) and heat maps (containing the region confidence). To achieve this, a set of 312 intensity and texture-based features was studied. The most relevant features were selected using the sequential forward selection (SFS) strategy and tested with three archetypal classifiers: LDC, SVM and the Parzen window. Finally, the most proficient classifier is used to create the proposed maps. All of the tested classifiers returned satisfactory results, the best achieving a mean test accuracy higher than 94% in all of the experiments. The suitability of the maps was evaluated in the context of a screening scenario with three different datasets obtained with two different devices, testing the capability of the system to work independently of the OCT device used. The map-creation experiments were performed using 323 OCT images. Using only the binary maps, 91.33% of the images were correctly classified. With only the heat maps, the proposed methodology correctly separated 93.50% of the images.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Thanks to advances in medical imaging, its use has spread across many clinical specialties in combination with classical medical procedures. This significantly facilitates the detection and diagnosis of a large variety of pathologies. Additionally, many medical imaging modalities have made it possible to discover relationships between conditions that could never have been revealed without this new point of view. A representative example can be observed in the field of ophthalmic imaging: the study of eye structures can return valuable information regarding not only specific afflictions of the eye but also others of general impact. In that sense, alterations in the retinal layer morphology and the presence of abnormal structures can be a sign of heart diseases [1, 2], diabetes [3, 4] or even pathologies related to the nervous system [5, 6].

One of the current dominant ophthalmological imaging modalities is Optical Coherence Tomography (OCT). It allows the expert to observe, in a non-invasive way, a cross-sectional visualization of the eye fundus and the structures that conform it [7, 8], which are still the subject of investigations focused on identifying the retinal morphology under pathological scenarios [9, 10]. This type of medical screening technique has already rendered classical visualization methods obsolete, allowing new advances in the analysis of relevant structures (such as the retinal vasculature [11]) or the identification of pathological conditions (such as the epiretinal membrane [12]).

The scope of this work is related to the intraretinal fluid that may be present inside the retinal layers, with the proposal of a new paradigm for its identification and intuitive visualization. This fluid is produced by relevant diseases such as Age-related Macular Degeneration (AMD) or Diabetic Macular Edema (DME), which are among the main causes of blindness in developed countries. These two pathologies have in common the presence of fluid bodies inside the retinal layers. The progressive fluid accumulation and consequent deformation of the retinal architecture increasingly diminish the sight quality of the patient. If no treatment is applied in time, the accumulated damage will end up rendering the patient completely blind. That is why the detection of the indicated fluid has become a pressing matter, with the inspection of OCT imaging being one of the most effective ways to address it.

Both semiautomatic and automatic approaches have been proposed to segment these fluid regions. For example, Wang et al. [13] proposed a semiautomatic method for the 3D retinal fluid segmentation problem. Their proposal uses an interactive graph cuts algorithm for the segmentation of the first OCT slice. Then, the labeling information is propagated through the slices via motion estimation.

Nonetheless, most of the recent approaches are automatic. De Moura et al. [14] proposed a methodology based on several texture descriptors to classify OCT image samples depending on whether they contain fluid regions or not. Wilkins et al. [15] proposed the segmentation of these fluid regions via a preprocessing of the OCT image and a posterior thresholding, followed by a false positive (FP) filtering step based on the segmented area size and empirically set intensity constraints. Roychowdhury et al. [16] further improved the Wilkins et al. [15] approach by categorizing the candidate regions by shape and size constraints into large, broken large and small cyst categories. Finally, the candidates are filtered with different rules depending on the assigned category.

This scheme of preprocessing, candidate region finding and final filtering is also followed by González et al. [17] but, instead of a classical thresholding, they use a watershed algorithm to generate the candidate set and a classifier trained with texture features to filter the detected FPs. Girish et al. [18] proposed an unsupervised automatic methodology based on the watershed transform, as González et al. [17], but using k-means clustering to obtain the initial seeds for the watershed algorithm and thereby reduce the workload of the posterior FP filtering step.

Chen et al. [19] proposed a two-step methodology, also with a preprocessing phase. In this work, the voxels of the image are first classified to find pathological points using a supervised classification approach. Then, combining graph-search and graph-cut methods, they segment both intraretinal and subretinal fluid regions in 3D space. Xu et al. [20] followed a similar strategy to Chen et al. [19], using the same 52 texture, Hessian matrix eigenvalue and distance descriptors to perform a voxel classification. This work adds layer-dependent information and sample balancing between the three considered strata to improve the detection sensitivity (especially in smaller fluid regions). Montuoro et al. [10] proposed a methodology also based on voxel classification, but they do not define a set of image features. Instead, they generate convolution kernels using principal component analysis in cubic patches around the training set voxels. Taking into account the layer structural relationship, they use a graph theory based algorithm to segment the results, using the classification probabilities as region costs. Finally, they use an auto-context loop (an iterative approach that includes spatial context from previous classifications) to refine the results.

Wang et al. [21] took full advantage of the volumetric information by using two orientations of B-scans and a C-scan. The fluid regions were segmented using a fuzzy C-means algorithm for the initial fluid region clustering, with the boundaries detected by a level-set method. These segmentations are combined to generate the 3D volumetric segmentation. Finally, as in the other presented methods, a FP filtering step is used to remove undesired detected artifacts. Moreover, this proposal uses OCT angiographies taken from the same scan to help in this final filtering step, removing vascular shadowing artifacts. Esmaeili et al. [22], on the other hand, took advantage of this 3D data to further improve the denoising step. In their approach, they use a curvelet-based technique to transform the image and K-SVD dictionary learning to modify the curvelet coefficients, reducing the original image noise when reconstructed. Finally, they use an empirically set thresholding and a posterior candidate filtering step to obtain the final segmentation.

Chiu et al. [23] estimate the approximate position of the fluid and retinal layers with a kernel regression based classification and then use their graph theory and dynamic programming framework to obtain a precise segmentation. As features, they use pixel intensity, gradient and location descriptors, as well as Laws' texture energy measures.

Rashno et al. [24] used a novel approach to the segmentation problem based on transforming the image into the neutrosophic domain. This transformation maps the OCT image gray levels into three sets: T, I and F. The T (true) set is assigned to white pixels, I (indeterminate) to noise pixels and F (false) to black pixels. A probability of belonging to each of the three considered sets is assigned to each pixel. A correction is applied to pixels with a high level of indeterminacy to reduce the image noise. For the segmentation inside the region of interest, an unsupervised clustering method is applied where the number of clusters is automatically determined. As in other works, a final candidate filtering step removes false positives based on the region size and the layer in which the candidate is positioned. Rashno et al. [25] also proposed an alternative to this methodology, again using neutrosophic sets but segmenting the fluid regions with a graph-cut approach instead of the aforementioned clustering method. Sahoo et al. [26] proposed the Retinal Fluid Automatic Detection (RFAD) algorithm. This proposal first performs a k-means clustering and then applies a complex set of decision rules to further adjust the segmentation.

Other works, like the one proposed by Wu et al. [27], focus on a particular case of fluid accumulation. Wu et al. focused on segmenting the fluid accumulations related to Neurosensory Retinal Detachment. They use a k-means clustering algorithm to classify the pixels into three categories depending on their thickness and a graph-cut segmentation method in the enface fundus image. Finally, using the regions found in the enface OCT scan, they limit the region of interest for the B-scan segmentation. For this secondary segmentation they follow a fuzzy C-means and posterior level set method, similar to the proposal of Wang et al. [21].

Recently, deep learning-based techniques have been successfully introduced to the fluid segmentation problem. Lee et al. [28], for reference, proposed an automatic segmentation methodology using convolutional neural networks (CNN). Schlegl et al. [29] used a neural network comprising two processing components, an encoder to obtain the abstract information of the image and a decoder to map that information into a final segmentation. Gopinath and Sivaswamy [30] proposed a method also using a CNN implementation for the segmentation of cystoid macular edemas, followed by a post-processing step using clustering to refine the previously identified cystoid regions. Roy et al. [31] proposed a new fully convolutional deep architecture named ReLayNet, formed by a series of encoder blocks relaying the intermediate feature representations to their matched decoder blocks through concatenation layers. Venhuizen et al. [32] proposed a fully convolutional neural network (FCNN) where every pixel in the volume is analyzed and given a probability of belonging to a fluid region. It is composed of a cascade of two FCNNs with complementary tasks: the first extracts the region of interest, whereas the second actually segments the fluid regions. Both architectures are based on the U-Net, proposed by Ronneberger et al. [33] specifically for biomedical image segmentation. Similarly, Tennakoon et al. [34] used a deep neural network also inspired by the U-Net architecture, but adding a batch normalization layer and an adversarial network to encode higher order relationships. This approach also applied a preprocessing step to the dataset and a median filter to reduce the speckle noise. Finally, Girish et al. [35] also recently proposed an approach based on the U-Net fully convolutional network to automatically capture both micro and macro-level features for the characterization of the fluid structures.

In Table 1, the reader can see a summary of the main works presented above from the state of the art. The table specifies the type of learning followed (supervised if labeled samples were used to train the method, unsupervised if the methodology is capable of separating the classes without labeled examples), the type of algorithm (semiautomatic if it needs the intervention of the user to generate the result, automatic if no further input is needed), the type of pathology the system was tested with (if specified by the authors) and the knowledge domain that the methodology analyzes to generate the final result (2D if only features from one scan are used at a time and 3D if features from multiple consecutive OCT scans are considered).

Table 1. Comparative taxonomy of the state of the art. NS = not specified.

As the reader can see, the state of the art currently follows a classical segmentation paradigm, obtaining satisfactory results (as shown, for example, in the benchmarking test by Girish et al. [36]) even with recent deep learning approaches. Nonetheless, an accurate segmentation is not always attainable in the case of retinal fluid accumulations. Some fluid regions, like the ones presented in Fig. 1, do not have an entirely defined border that can be segmented, or appear in nearby groups that interfere with their individual segmentation. Moreover, most of the proposed works require a preprocessing step to filter the noise typically present in OCT images and a posterior phase of FP removal. Finally, in these particular cases where there is no clear segmentation, different experts may create different ground truths. This makes it more difficult to train and/or evaluate an automated procedure based on the classical segmentation paradigm.

Fig. 1 OCT image portions with fluid areas that are hard to segment.

In this work, we face the issue of intraretinal cystoid fluid identification with a new and alternative paradigm. Instead of segmenting candidates, we perform a regional analysis that takes advantage of the texture differences between the fluid regions and the healthy retinal tissue, without any kind of preprocessing. This alternative paradigm of regional analysis, in contrast to classical specific segmentations, is able to offer robust results despite the lack of defined borders, as it identifies the pathological regions rather than their precise contours. Moreover, as we study regional properties, the system can be trained using representative samples instead of an accurately segmented ground truth, reducing the dependency on and influence of the clinicians when training the models. Additionally, this paradigm and texture analysis is resilient to the variability of the OCT image conditions, being able to work without the preprocessing steps or posterior filtering of FPs that are very common in the state of the art, which is also relevant for clinical usage.

These regions, given their fluid nature, present more homogeneous patterns than samples coming from healthy (or otherwise pathological) zones. To take advantage of this, a representative variety of characteristics was used, involving both intensity and texture-based features. The defined set of features was analyzed with a feature selector to identify those with the highest discriminative power. These selected features, derived from representative sets of fluid and non-fluid regions, were employed to train and test the representative classifiers. This way, we construct models that are able to classify different samples into the two possible categories (cystoid fluid region if they have fluid-like structures inside them and non-fluid region if they are clear of them), evaluating their viability and robustness through several tests, models and a feature selection technique, without the need for any kind of preprocessing step.

Next, these trained models are exploited to create the proposed maps, representing the information contained in the entire OCT images. In particular, two different complementary map types are constructed indicating the cystoid fluid presence over the entire OCT scan. The creation of these explicit maps will reduce the workload of the clinicians, increasing their productivity and improving the diagnostic quality for pathologies related to these fluid bodies (as well as allowing their early detection), pathologies as relevant as the previously indicated AMD and DME, among others.

The present work is organized into three further sections: Section 2, Proposed methodology, offers a detailed explanation of the steps that were followed to train the models and create the maps, as well as the design decisions that were made. Section 3, Results and discussion, presents and comments on the experiments, the tests and their outcomes resulting from the method implementation and validation. Finally, Section 4, Conclusions, expresses a resolution on the final results and possible future lines of work.

2. Proposed methodology

The main stages of the proposed methodology are depicted in Fig. 2. First, the system finds the region of interest (ROI), which corresponds to the retinal area where the fluid regions may appear. Then, a model is trained with representative square samples that were extracted from the identified ROI of a set of OCT images. Finally, using this trained model, we create two different complementary maps to graphically represent the fluid regions in OCT images and test their possible usefulness for the analysis of the expert clinician. In the following sections, each step is explained in further detail.

Fig. 2 Stages of the proposed methodology and respective steps.

2.1. Retinal layer extraction

Given that the fluid is produced inside the retinal layers, the method first identifies this region to restrict the ROI for the rest of the process. By removing the choroid and the vitreous humor we exclude areas whose feature qualities are shared with the cystoid fluid regions, helping the model to better discern fluid and non-fluid samples. Also, regarding the map generation step, by analyzing only the retinal area (where the fluid leakages occur), we considerably reduce the total computational workload and time spent obtaining the desired cystoid fluid maps.

As seen in Fig. 3, the ROI is delimited by the Inner Limiting Membrane (ILM) and the Retinal Pigment Epithelium (RPE). To extract these layers, we based our approach on the work of Chiu et al. [37]. This method represents the image as a graph, where the nodes correspond to the image pixels and the edge weights to their gradients. Then, using Dijkstra's algorithm [38], the minimum-weight paths between both sides of the OCT image are found. These paths correspond to each of the retinal layers. This way, we obtain both limiting layers, the ILM and the RPE.
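
For illustration, the following is a minimal sketch of this graph formulation with SciPy, assuming a grayscale B-scan normalized to [0, 1]; the function name, the virtual source/sink construction and the parameter values are ours, not the original implementation of Chiu et al. [37]:

```python
# A minimal sketch (not the original code) of gradient-weighted shortest-path
# layer extraction in the spirit of Chiu et al. [37].
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def layer_boundary(img, w_min=1e-5):
    """img: 2-D float B-scan in [0, 1]; returns one boundary as (row, col) pairs."""
    rows, cols = img.shape
    g = np.gradient(img, axis=0)                  # vertical gradient
    g = (g - g.min()) / (np.ptp(g) + 1e-12)       # normalize to [0, 1]
    node = lambda r, c: r * cols + c              # pixel -> graph node id
    n = rows * cols
    SRC, SNK = n, n + 1                           # virtual endpoints
    src, dst, wgt = [], [], []
    for r in range(rows):
        # near-zero-cost entry and exit at the image sides
        src += [SRC, node(r, cols - 1)]
        dst += [node(r, 0), SNK]
        wgt += [w_min, w_min]
        for c in range(cols - 1):
            for dr in (-1, 0, 1):                 # right and diagonal moves
                r2 = r + dr
                if 0 <= r2 < rows:
                    src.append(node(r, c))
                    dst.append(node(r2, c + 1))
                    # cheap where both endpoints show strong gradients
                    wgt.append(2.0 - g[r, c] - g[r2, c + 1] + w_min)
    graph = csr_matrix((wgt, (src, dst)), shape=(n + 2, n + 2))
    _, pred = dijkstra(graph, indices=SRC, return_predecessors=True)
    path, cur = [], pred[SNK]
    while cur != SRC and cur >= 0:                # backtrack the cheapest path
        path.append(divmod(cur, cols))
        cur = pred[cur]
    return path[::-1]
```

Running the search twice (after masking out the already-found boundary) would yield the two limiting layers, ILM and RPE.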

Fig. 3 ILM and RPE retinal layers in an OCT scan.

2.2. Trained model creation

This stage of the methodology [14, 39], as illustrated in Fig. 2, is divided into three main steps. These steps produce the model and the set of features that are used afterwards as the core of the fluid map generation stage. To find the most proficient configuration for this task, a suitable feature selection, a posterior model training and a comparison between the trained classifiers were performed.

2.2.1. Extraction of representative samples

First, several representative square samples of a defined size are extracted from the identified ROI. Each sample is afterwards characterized by a vector of 312 features relevant to medical imaging issues, including statistics that describe the frequency distribution of the gray levels and texture-based descriptors. As seen in the complete list of features in Table 2, the descriptors were chosen considering different characteristic aspects of the possible patterns of fluid and normal tissue that the system may face in the OCT images. Additionally, if a chosen sample falls partially outside the ROI, the features are extracted only from the maximum rectangular subsample that exclusively contains ROI. This way, we ensure that the spatial pixel distribution is maintained for the texture descriptors to analyze and that only relevant patterns are considered by the machine learning algorithms.
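
As a hedged illustration of this characterization step, the sketch below extracts a handful of descriptors of the kinds listed in Table 2 (gray-level statistics, GLCM properties and an LBP histogram) from one square sample using SciPy and scikit-image; it does not reproduce the exact 312-feature vector of the paper, and the function name is ours:

```python
# Illustrative extraction of a few intensity and texture descriptors;
# the paper's full vector has 312 features, this sketch covers only a subset.
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def sample_features(patch):
    """patch: 2-D uint8 array, e.g. a 61x61 window clipped to the ROI."""
    vals = patch.ravel().astype(float)
    feats = [vals.mean(), vals.std(), skew(vals), kurtosis(vals)]
    # gray-level co-occurrence statistics at two distances / two angles
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.extend(graycoprops(glcm, prop).ravel())
    # uniform LBP histogram (10 bins for P=8 neighbors)
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    feats.extend(hist)
    return np.asarray(feats, dtype=float)
```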

Table 2. Brief descriptions of the defined feature categories.

2.2.2. Selection of relevant features

Given that the set of 312 features is considerable, we can assume that it contains redundant and useless information. To filter these less relevant markers and improve the model performance we use a dimensionality reduction technique: Sequential Forward Selection (SFS). This algorithm finds the best subset of features able to separate the two considered classes. SFS, as a forward-oriented selection method, begins from an empty set. Then, incrementally, the selector adds the features that best satisfy the specified criterion. In our case, this criterion consists in maximizing the between-class scatter of the subset while minimizing the within-class one (inter-intra distance).
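
A minimal sketch of SFS under this criterion could look as follows, assuming a feature matrix X (samples x features) and binary labels y; the trace-based scatter ratio is a simple stand-in for the exact inter-intra implementation:

```python
# Sequential forward selection with an inter/intra scatter criterion (sketch).
import numpy as np

def inter_intra(X, y):
    """Between-class scatter over within-class scatter (trace form)."""
    mu = X.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        sb += len(Xc) * np.sum((Xc.mean(axis=0) - mu) ** 2)
        sw += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return sb / (sw + 1e-12)

def sfs(X, y, k):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        # greedily add the feature that most improves the criterion
        best = max(remaining, key=lambda j: inter_intra(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected  # feature ranking, best first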

2.2.3. Classifier training

Finally, using the reduced feature vector matrices, a collection of representative classifiers is trained and tested to further study the system behavior over different classifying strategies. The chosen models for these tests were the Linear Discriminant Classifier (LDC), the Support Vector Machine (SVM) and the Parzen window method. The LDC finds the direction along which the two classes are best separated. In this work, we assume the same covariance matrices between classes; thus, we are able to approximate Fisher's criterion using minimum squared-error procedures. SVMs, on the other hand, approximate the best hyperplane that separates the two most proximal samples and use it to discriminate future observations. For this work, an exponential kernel function was used with θ = 1. Finally, the Parzen window classifier uses the density estimation of each class at a given point to infer the final classification. The Parzen window smoothing parameter was calculated using Lissack & Fu's leave-one-out estimate [47].
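
The sketch below shows scikit-learn stand-ins for the three classifier families; note that the RBF kernel and the fixed KDE bandwidth are substitutions (the paper's exponential kernel with θ = 1 and the Lissack & Fu bandwidth estimate are not available off the shelf):

```python
# Stand-ins for the three classifier families (sketch, not the original setup).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KernelDensity

ldc = LinearDiscriminantAnalysis()          # linear boundary, shared covariance
svm = SVC(kernel="rbf", gamma=1.0)          # RBF as an exponential-type kernel

class ParzenClassifier:
    """Classify by the largest class-conditional Parzen density estimate."""
    def __init__(self, bandwidth=1.0):      # fixed bandwidth, an assumption
        self.bandwidth = bandwidth
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.kdes_ = [KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
                      for c in self.classes_]
        self.priors_ = [np.mean(y == c) for c in self.classes_]
        return self
    def predict(self, X):
        # log class-conditional density plus log prior, argmax over classes
        log_post = np.column_stack([kde.score_samples(X) + np.log(p)
                                    for kde, p in zip(self.kdes_, self.priors_)])
        return self.classes_[np.argmax(log_post, axis=1)]
```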

2.3. Fluid maps generation

To create the cystoid fluid maps, the OCT image is divided into overlapping windows. Then, for each sample, a feature vector is extracted and classified using the model that was trained in the previous stage. To reduce the algorithm workload, the samples are obtained only from the minimum rectangular area that contains the ROI. Fig. 4 presents an example of this rectangular region.

Fig. 4 Original image and a representation of the minimum rectangular area that contains the ROI, represented as green (retinal ROI) and blue (non-ROI area contained in the sampling area).

Depending on the retinal morphology, some samples may partially or completely contain regions outside the retinal layers. Samples that do not contain ROI at all should be discarded; those that partially contain ROI should be filtered when they do not have enough information to return an accurate result in the posterior classification step. Consequently, we verify for each sample that its center pixel falls inside the ROI (represented in Fig. 4 in green). If this condition is not met, the sample is discarded. This way, we ensure a minimal amount of ROI inside each sample to be analyzed by the system, preventing misclassifications due to scarcity of information.

As this solution still allows a certain area from outside the ROI in the accepted samples, the texture features are extracted only from the maximum rectangular section inside each sample that contains valid ROI pixels. This way, we diminish the information loss at the retinal borders compared to directly discarding these samples for being partially outside the defined ROI.
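
A minimal sketch of this sampling and filtering logic follows, assuming a boolean ROI mask aligned with the image; the fully-valid row/column trim is a simple approximation of the maximum rectangular subsample, and all names are illustrative:

```python
# Sliding-window sampling over the ROI bounding box (sketch).
import numpy as np

def sample_windows(img, roi_mask, win=61, overlap=52):
    step = win - overlap                          # window stride
    rmin, rmax = np.where(roi_mask.any(axis=1))[0][[0, -1]]
    cmin, cmax = np.where(roi_mask.any(axis=0))[0][[0, -1]]
    for r in range(rmin, rmax - win + 2, step):
        for c in range(cmin, cmax - win + 2, step):
            cy, cx = r + win // 2, c + win // 2
            if not roi_mask[cy, cx]:
                continue                          # center outside ROI: discard
            patch = img[r:r + win, c:c + win]
            valid = roi_mask[r:r + win, c:c + win]
            keep_r = valid.all(axis=1)            # keep fully-valid rows/cols
            keep_c = valid.all(axis=0)
            if keep_r.any() and keep_c.any():
                yield (cy, cx), patch[np.ix_(keep_r, keep_c)]
```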

2.3.1. Construction of the binary maps

Binary maps offer a simple and direct representation of the identified cystoid fluid presence using the classification results. This visualization method is also robust to variations of the sample overlap used in the image sampling step. Each positive sample marks its central position and its closest pixels, which are assumed to be part of the same category, as fluid presence. This process is depicted in Fig. 5. The higher the chosen overlap, the smaller the area that is assigned to the same category as its closest classified sample. This is also what gives the binary maps the mentioned robustness to overlap changes, helping to offer similar results with different sample overlap configurations. Fig. 6 shows a representative example of the resulting binary map compared to the original OCT image.
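
The binary-map assembly can be sketched as follows, assuming the window centers and predicted labels from the classification step; painting a stride-sized block around each positive center follows the assignment just described:

```python
# Binary fluid map from window classifications (sketch).
import numpy as np

def binary_map(shape, centers, labels, win=61, overlap=52):
    step = win - overlap
    half = step // 2                      # pixels closest to each center
    out = np.zeros(shape, dtype=bool)
    for (cy, cx), lab in zip(centers, labels):
        if lab == 1:                      # positive (fluid) detection
            out[max(cy - half, 0):cy + half + 1,
                max(cx - half, 0):cx + half + 1] = True
    return out
```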

Fig. 5 Binary map creation steps. With the classification results (a), we identify their original positions (b) in the OCT image and assign the surrounding pixels to their category (c).

Fig. 6 Original retinal image ROI (a) and the resulting binary map (b), generated with a sample overlap of 52px.

2.3.2. Construction of the heat maps

This approach offers a complementary representation of the cystoid fluid information, intended to be used along with the binary maps. Heat maps represent the confidence of the model that the overlapped area belongs to a fluid region. To create these maps, following a voting strategy, each sample that overlaps a certain pixel acts as a ballot: for each pixel, the voting is performed by accumulating the number of superimposed samples that considered the pixel as part of a fluid pattern. An example of this process and the corresponding final result is illustrated in Fig. 7.

Fig. 7 Voting process steps. First, the classification results (a) are projected onto the original image (b). Then, each window votes for its overlapping pixels (c). The resulting image of this voting process can be seen in (d).

This way of sampling presents, however, a biased result (that is, some pixels are overlapped by more windows than others). This creates the lattice pattern shown in the final result of Fig. 7 and the dimness present in detections close to the borders. To balance the results, the number of positive votes for each pixel is divided by the total number of windows that voted in that position. Hence, each pixel in the confidence map contains the proportion of overlapping windows that the trained model considered pathological. Moreover, as these maps are destined to ease the workload of an expert clinician, a complementary color mapping is applied after the normalization for a better visualization of the map values. This heat map is more intuitive and easier to revise by the expert clinician than a grayscale visualization. The proposed color scale (compared in Fig. 8 with the normalized grayscale map) offers sharper and more distinctive gradients between confidence levels. Consequently, a human expert can easily understand the displayed classification results. Note the detection on the right border that, without normalization [Fig. 7(d)], was almost impossible to perceive. It now spans a higher range of intensity values despite the smaller number of overlapping window voters. Also, the lattice pattern, a product of rounding error, disappears completely thanks to this process.
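
Putting the voting and the normalization together, a minimal sketch could be the following (matplotlib's jet colormap is an assumption; the paper only specifies a complementary color scale):

```python
# Heat map: per-pixel fluid votes normalized by window coverage (sketch).
import numpy as np
import matplotlib.pyplot as plt

def heat_map(shape, centers, labels, win=61):
    votes = np.zeros(shape, dtype=float)
    cover = np.zeros(shape, dtype=float)
    h = win // 2
    for (cy, cx), lab in zip(centers, labels):
        rs, re = max(cy - h, 0), min(cy + h + 1, shape[0])
        cs, ce = max(cx - h, 0), min(cx + h + 1, shape[1])
        cover[rs:re, cs:ce] += 1.0        # every window covering the pixel
        if lab == 1:
            votes[rs:re, cs:ce] += 1.0    # this window votes "fluid"
    # proportion of covering windows that voted fluid (0 where uncovered)
    conf = np.divide(votes, cover, out=np.zeros_like(votes), where=cover > 0)
    return plt.get_cmap("jet")(conf)      # RGBA image of confidences
```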

Fig. 8 Comparison between the grayscale normalized map and the complementary color scale proposed (heat map).

In Fig. 9, a representative result of this entire process overlapped with the original image is presented. A color scale showing the relation between the color levels and the confidence values is also added, helping the specialist to better assess the presented map. Here we can see one of the advantages of this method over the binary approach: the false positives, a product of retinal zones with complex patterns, receive fewer votes than areas that truly belong to fluid regions. In the binary approach, despite having a more adjusted map, the information is given as received, only indicating that those areas were marked as pathological by the model. Heat maps, in contrast, show that although the model considered some samples pathological, the majority of the windows overlapping that zone considered it healthy. This approach offers more robustness against model errors and a more adjusted idea of the confidence of the model in the pathological detection.

Fig. 9 Final heat map, overlapped with the original OCT image. The color scale and its relationship with the resulting confidence values are also presented.

On the other hand, binary maps are less dependent on the used overlap. In both cases, the quantity of extracted samples (adjusted via the number of pixels the windows overlap with each other) determines the roughness of the map borders. However, the resolution of the color scale of the heat maps also depends on this number of samples. The higher the overlap, the more discrete values are available to approximate a continuous confidence function. Binary maps always offer a reasonably consistent result, but heat maps suffer more in quality as the number of samples diminishes. As seen in Fig. 10, higher overlap values result in smoother map edges and color gradients, while lower values create block-like maps with more abrupt color level transitions.

Fig. 10 Heat maps generated with a different sample overlap: 32px (a) and 56px (b).

3. Results and discussion

The methodology is organized in two main parts: the definition and training of the models, and their use in the construction of the proposed cystoid fluid maps. In that sense, specific validation processes were organized for each of these parts. The validation of the system was done using 100 OCT images captured by a CIRRUS HD-OCT (Carl Zeiss Meditec) confocal scanning laser ophthalmoscope and 223 OCT images from a Spectralis OCT confocal scanning laser ophthalmoscope (Heidelberg Engineering), making a total of 323 OCT images provided by the ophthalmologic services of the Complejo Hospitalario Universitario de Santiago (CHUS) and the Complejo Hospitalario Universitario de Ferrol (CHUF) in Galicia (Spain). Both capture devices are among the most widely used in healthcare services. All these OCT images were taken centered on the macula, from different patients and from both left and right eyes. The OCT images range in resolution from 924 × 279 to 1680 × 1050 pixels. A subset of these images from both devices was used to train and test the candidate models (the entire set is used afterwards in the fluid map validation process).

The image dataset includes a significant variability of intensity and contrast configurations. The images were used directly to test the system, without any preprocessing stage, in order to preserve the original characteristics of the retinal tissue as observed in the images. Additionally, the images were labeled by an expert clinician, identifying the presence of cystoid fluid. This ground truth served as reference for the entire validation process. The dataset images contained healthy tissue and fluid regions, as well as other pathological structures. Only the fluid regions were considered for this work, but samples of these non-fluid pathological structures were also included among the possible patterns as non-fluid regions, so the system could differentiate them in case both were present in the analyzed OCT images.

All the experiments were repeated with three different configurations of the used datasets to test the capabilities of the methodology with the different tested devices: one considering only images from the Cirrus device, another containing only images from the Spectralis device and, finally, a third experiment branch conducted with all the images from both. Therefore, at the end of the first stage, we obtained three trained models: one for each device and a third combined one, trained with images from both capture devices. This way, the potential and capabilities of the proposed methodology were measured individually for each capture device but also for the simultaneous learning of both devices.

3.1. Validation of the construction and training of the models

From each of the considered datasets, a series of 61 × 61 representative samples was obtained for the two studied classes (fluid and non-fluid regions), and the total roster of 312 features was extracted for each case. Then, a ranking of the 100 most representative features was created with the SFS algorithm (optimal selections were identified at lower sizes). With the three tested datasets, this selector mostly chose HOG, Gabor and LBP features. Regarding their relevance, different HOG markers and the skewness of the gray level distribution in the samples were always amongst the five highest ranked markers in the three tested configurations.

Finally, for each increasing subset of this ordered feature ranking, 50 randomly chosen half/half divisions of the dataset into training and testing were used to find the most suitable model. Each training partition was trained and evaluated using a 10-fold cross-validation with all three classifiers (LDC, SVM and Parzen window). The resulting models were evaluated with the corresponding test subset. For each classifier type and studied image dataset (Cirrus, Spectralis and combined), the trained model with the subset of features that obtained the lowest mean test error over the 50 iterations was chosen.
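
A sketch of this selection protocol follows, with make_classifier standing in for any of the three (scikit-learn compatible) models; the 10-fold cross-validation on each training half mirrors the protocol above, while the subset size is chosen by the mean held-out test error:

```python
# Feature-subset selection over 50 random half/half splits (sketch).
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score

def best_subset_size(X, y, ranking, make_classifier, n_iter=50):
    mean_err = []
    for k in range(1, len(ranking) + 1):          # growing SFS prefixes
        Xk, errs = X[:, ranking[:k]], []
        for it in range(n_iter):                  # 50 random half/half splits
            Xtr, Xte, ytr, yte = train_test_split(
                Xk, y, test_size=0.5, random_state=it, stratify=y)
            clf = make_classifier()
            cv = cross_val_score(clf, Xtr, ytr, cv=10)  # 10-fold CV scores
            clf.fit(Xtr, ytr)
            errs.append(np.mean(clf.predict(Xte) != yte))
        mean_err.append(np.mean(errs))
    return int(np.argmin(mean_err)) + 1           # size with lowest mean error
```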

As mentioned, a representative subset of images from each capture device was used to train and test the three models. From the Cirrus device, 83 OCT histological images were considered, from which 1613 samples were extracted: 806 from fluid regions and 807 from non-fluid regions. The Spectralis subset for the testing and training of the models consisted of 73 representative histological images. From this image subset, 1634 samples were extracted: 778 containing fluid regions and 856 from the opposite class.

In Figs. 11, 12 and 13, the mean test error for each feature subset is displayed, with a vertical bar indicating the number of features that achieved the minimum test error for each classifier. The three experiments returned satisfactory results, achieving a maximum mean test success rate of 94.18% with the Cirrus images, 95.59% with the Spectralis images and 94.01% with the combination of both previous subsets (all these results reached with the LDC classifier).

Fig. 11 Mean test error per feature subset of 50 iterations for the dataset trained with the Cirrus images. Vertical lines indicate the number of features that achieved the minimum mean test error value for each classifier.

Fig. 12 Mean test error per feature subset of 50 iterations for the dataset trained with the Spectralis images. Vertical lines indicate the number of features that achieved the minimum mean test error value for each classifier.

Fig. 13 Mean test error per feature subset of 50 iterations for the dataset trained with images coming from both capture devices. Vertical lines indicate the number of features that achieved the minimum mean test error value for each classifier.

3.2. Validation of the cystoid fluid maps

As indicated, the trained models are used to generate the cystoid fluid maps. As the proposed methodology is a novel paradigm and offers an alternative to classical segmentation, metrics related to segmentation (like the Dice coefficient) cannot be applied. Thus, as these representations are intended to ease the workload of an expert clinician by facilitating the early diagnosis, we measured their utility in a real clinical screening scenario that was set up in collaboration with the ophthalmic services of two different public hospitals.

In these experiments, all the images from each category (Cirrus, Spectralis and, consequently, the combined dataset) were classified into two categories based on the expert criteria: images which did not contain clinically relevant fluid regions and images with a significant amount of fluid presence in between the retinal layers; that is, a clinically relevant / non-relevant pathological scenario. This distinction takes into account the distribution of the fluid regions in addition to their amount, as groups of microcysts represent more severe clinical scenarios than other cases of fluid bodies distributed along the retina.

After the classification of all the images into these two categories (clinically relevant fluid presence / healthy or non-relevant fluid presence), the binary and heat maps were created for each analyzed image with the corresponding trained model. In particular, we generated two pairs of binary / heat maps: one using the trained model of the specific capture device used to create the OCT image and another using the trained model of the combined image dataset.

For the binary maps, we consider an image significantly pathological when the area of the biggest identified fluid region exceeds a minimum established size.

To evaluate the heat maps, we studied the two variables that define them: the confidence values (reflected in the map as the different colors) and the pathological area at each confidence level that could be considered clinically relevant. All the maps were generated using an overlap between samples of 52px.
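
The decision rules of this screening experiment can be sketched as follows; the connected-component analysis uses SciPy, and both thresholds are the swept parameters of the experiments below, not fixed values from the paper:

```python
# Screening decisions from the generated maps (sketch).
import numpy as np
from scipy.ndimage import label

def relevant_by_binary(bmap, min_area):
    """Flag the image when the largest connected fluid region is big enough."""
    lbl, n = label(bmap)                      # connected fluid regions
    if n == 0:
        return False
    areas = np.bincount(lbl.ravel())[1:]      # skip background label 0
    return areas.max() >= min_area

def relevant_by_heat(conf, level, min_area):
    """Flag the image when enough area exceeds a given confidence level."""
    return np.count_nonzero(conf >= level) >= min_area
```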

Figs. 14 and 15 present the results of this screening scenario in the determination of the minimum significant pathological size. In this case, we show the results using only the models trained exclusively with each device (and tested with images from the same device). As we can see, the use of both maps obtained satisfactory results. Cirrus binary maps correctly separated 80% of the images with their optimal configuration, while the Spectralis maps reached higher rates with 92.83% accuracy. Regarding heat maps, with their best configurations, Cirrus heat maps correctly classified 94% of the images, while the Spectralis heat maps reached a success rate of 95.96%. These tests showed that the methodology is valid and capable of solving this screening problem separately for each capture device.

Fig. 14 Accuracy achieved using (a) the binary fluid maps and (b) the heat maps for the Cirrus image dataset. The color scale in the heat map test represents the percentage of correctly classified maps.

Fig. 15 Accuracy achieved using (a) the binary fluid maps and (b) the heat maps for the Spectralis image dataset. The color scale in the heat map test represents the percentage of correctly classified maps.

Additionally, Fig. 16 presents the results of this experiment using the maps that were created with the combined image dataset. In this case, the binary maps reached an accuracy of 91.33%, significantly higher than the original Cirrus maps and slightly lower than the Spectralis individual test. Heat maps also successfully separated 93.50% of the cases with their optimal configurations. Note how the approach presented in this work is able to simultaneously learn the fluid and retinal patterns independently of the capture device, with their respective noise patterns and eye fundus visualization characteristics.

Fig. 16 Accuracy achieved using (a) the binary fluid maps and (b) the heat maps for the combined image dataset. The color scale in the heat map test represents the percentage of correctly classified maps.

3.3. Discussion

Regarding the first stage (model training), Fig. 17 shows different representative image sections with windows that were correctly and incorrectly classified by each of the models. The colored square represents the 61 × 61 sample from which the feature set is extracted to classify the sample into a fluid (green) or non-fluid (red) region. As seen in Figs. 17(c) and 17(d), the system is able to correctly identify as non-fluid even other pathological structures that present gray levels and patterns that could be confused with fluid regions. The model errors are usually related to windows matching particularly homogeneous dark areas with round edges [Fig. 17(e)], but the proposed visualization methods are robust to these misclassifications, as they both also examine the surrounding regions to generate the final map. Moreover, the combined approach successfully learned the patterns from both devices, being able to identify the fluid and retinal tissue patterns independently of the varying characteristics of the capture device (such as the retinal tissue representation, noise pattern, image resolution or possible irregularities in the capture process).

Fig. 17 Examples of true positive (green) classified samples (a) and (b), true negative (red) classified samples (c) and (d) and misclassified samples (e) for each of the considered trained models.

Regarding the map tests, as predicted, the results were susceptible to the model coverage. Depending on the focus of the samples used to train the models, the maps show different confidence values and identification extents. For this reason, the quality test of the maps evaluated the existence of a configuration that could separate the two considered classes, rather than setting a predetermined one to test. The three tested models used to generate the maps presented a zone where both categories were separated successfully. The Cirrus-trained model obtained the lowest values of the three, with broader and softer accuracy gradients. If we compare the Spectralis and the Cirrus binary map tests, we can see that the Spectralis maps achieved a narrower peak shape. This means, for a human expert analyzing the results, that the maps created with the Spectralis model are easier to inspect (as the area threshold is more differentiated). Nonetheless, the hybrid dataset was able to improve the Cirrus results thanks to the patterns also present in the Spectralis samples, obtaining the desired peak shape. A similar scenario happened with the heat maps. While the three models achieved satisfactory accuracy measures (95.96% for the Spectralis maps, 94% for the Cirrus ones and 93.50% for the combined dataset), the wider spread in the accuracies of the Cirrus test configurations shows that the generated Spectralis maps hold a better understanding of the fluid and retinal patterns. However, the hybrid model was able to compensate for this difference in coverage, represented in the graphs by the similar narrow pattern in the heat map tests and a higher minimum accuracy.

Apart from the presented statistics, examples of the maps obtained using each of the generated models can be seen in Figs. 18, 19 and 20. Each figure presents a small representative subset of images from the variety of cases that were used to train and evaluate the models and maps used in this work.

Fig. 18 Representative map results of different complexities with images from the Spectralis capture device and the specific trained model.

Fig. 19 Representative map results of different complexities with images from the Cirrus capture device and the specific trained model.

Fig. 20 Representative map results of different complexities with images from both capture devices and the combined trained model.

All these maps are easy and intuitive for the human expert to revise, greatly facilitating the analysis and diagnosis of the fluid presence, even in complex scenarios [Figs. 18(c6), 18(s3), 18(s6)], [Figs. 19(c6), 19(s3), 19(s6)] and [Figs. 20(c6), 20(s3), 20(s6)]. Moreover, these identifications can serve as input to posterior automatic procedures, as they constitute an abstraction of the image texture and regional information. Also, binary maps serve as a good indicator of where the models get confused and help to improve their weaknesses. In the Cirrus results, in Fig. 19(c4), we can see how the shadows show a slight response despite not being fluid bodies; likewise in Fig. 18(s4), where dark areas of exudates also present small responses. Moreover, the maps show how the combination of both datasets helped in some cases. For reference, Fig. 20(c4) has fewer FPs than Fig. 19(c4). Also, Fig. 20(c6) shows a map better adjusted to the fluid pattern than Fig. 19(c6). On the other hand, this combined dataset also slightly altered the already well defined maps as a result of the learning process with images from both capture devices. As an example, Fig. 20(c3) presents an extended pattern in the darker foveal area that is nonexistent in the Cirrus maps. Also, Fig. 20(s4) shows a larger dark area surrounding the exudates being marked as cystoid fluid.

Despite this, heat maps still present the relevant fluid areas as high-confidence values in all three cases, with negligible variations between the single-device trained models and the hybrid one. As seen in these examples, binary and heat maps offer robust results independently of the capture device used for training and show how both map approaches are complementary, helping to rapidly assess the presence and complexity of these pathological fluid structures.

4. Conclusions

The detection of fluid bodies is critical for the early diagnosis of pathologies like macular edema or age-related macular degeneration, among the main causes of blindness in developed countries. To date, most of the proposed methodologies that try to identify the pathological fluid accumulations within the retinal layers follow a classical segmentation approach.

While the state of the art has obtained satisfactory results in the segmentation of the fluid regions, we propose an alternative paradigm that we consider also adequate for clinical practice. As fluid accumulations may appear mixed with other pathologies, structures and shadows projected by the capturing technique itself, a perfect segmentation is not always possible, or at least extremely complicated. The borders in these cases appear merged, diffuse or even nonexistent.

The novel paradigm proposed herein, instead of segmenting cystoid fluid, trains models to identify the presence or absence of intraretinal fluid through the analysis of square regions. Then, using the trained models, the method automatically generates complementary binary and heat maps that offer a clean, direct and intuitive idea of the fluid accumulation in the eye fundus, using the OCT scans as the source of information.

Moreover, the proposed paradigm can be adapted to other pathologies and medical imaging modalities without extensive effort from the expert clinicians, as there is no need for a perfectly segmented ground truth. The models can be trained with a reduced set of representative samples from both healthy and pathological regions.

The methodology was validated in two stages. First, the training and testing of the models were performed with a representative subset of 156 OCT images from both devices, from which we selected 3247 square samples from both fluid and non-fluid regions. From these samples, 312 features were extracted and filtered using the Sequential Forward Selection (SFS) algorithm to identify the most discriminative ones. Then, an individual model for each analyzed capture device and a third one with both were trained using the features selected with the SFS. Three representative classifier types were tested for each dataset: an LDC, an SVM and a Parzen window. All three configurations achieved satisfactory results with all the considered datasets, the combined dataset models reaching a satisfactory 94.01% mean test accuracy.

Secondly, we performed the map evaluation using the complete dataset of 323 OCT images. For this purpose, a real medical screening scenario was established to test the suitability of the maps for distinguishing clinically relevant fluid presence from non-relevant cases, using only the generated maps.

With the models that achieved the best results for each used dataset, both binary and heat maps were created for each tested image. The optimal configuration of binary maps using the combined image dataset successfully differentiated 91.33% of the images, whereas the optimal configuration of heat maps using images from both devices achieved 93.50% accuracy.

As these results show, in a medical screening scenario conducted in collaboration with two of the main public ophthalmologic services of Galicia (Spain) and tested with two of the main OCT devices on the market, our proposal is suitable for the clinical domain and returns satisfactory results even with contourless fluid regions.

Moreover, this proposal was trained without needing a precise ground truth, only with representative samples from a selection of the images. This means the system could be adapted to other pathologies or medical imaging domains with a reduced workload for the expert clinicians.

Given the demonstrated ability of these maps to detect fluid regions and their usefulness for clinical practice, as future work we plan to study the possibility of extending the domain of the methodology to multiple pathological structures also present in the retina (like serous retinal detachment or diffuse retinal thickening). Moreover, to increase the sensitivity of the system to the presence of microcysts, a specific window size and classifiers can be studied to create a two-step map creation process. Additionally, we plan to use the cystoid fluid maps as input for the calculation of representative biomarkers for use in the specific assessment of pathological severity. Finally, to allow further comparison with alternative approaches, we plan to reassess the medical screening scenario testing with publicly available benchmarking datasets.

Funding

Instituto de Salud Carlos III (ISCIII) (PI14/02161, DTS15/00153); Ministerio de Economía, Industria y Competitividad, Gobierno de España (DPI2015-69948-R); Consellería de Cultura, Educación e Ordenación Universitaria, Xunta de Galicia (ED431C 2016-047, ED431G/01).

Acknowledgments

The authors take this opportunity to gratefully acknowledge the assistance and contributions of the ophthalmologic services of the Complejo Hospitalario Universitario de Santiago (CHUS) and the Complejo Hospitalario Universitario de Ferrol (CHUF).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. A. Pose-Reino, F. Gómez-Ulla, B. Hayik, M. Rodríguez-Fernández, M. J. Carreira-Nouche, A. Mosquera-González, M. González Penedo, and F. Gude, “Computerized measurement of retinal blood vessel calibre: description, validation and use to determine the influence of ageing and hypertension,” J. Hypertens. 23, 843–850 (2005). [CrossRef]   [PubMed]  

2. T. Y. Wong, R. Klein, A. R. Sharrett, B. B. Duncan, D. J. Couper, J. M. Tielsch, B. E. Klein, and L. D. Hubbard, “Retinal arteriolar narrowing and risk of coronary heart disease in men and women: the atherosclerosis risk in communities study,” JAMA 287, 1153–1159 (2002). [CrossRef]   [PubMed]  

3. T. T. Nguyen, J. J. Wang, A. R. Sharrett, F. A. Islam, R. Klein, B. E. Klein, M. F. Cotch, and T. Y. Wong, “Relationship of retinal vascular caliber with diabetes and retinopathy,” Diabetes Care 31, 544–549 (2008). [CrossRef]  

4. H. Sánchez-Tocino, A. Álvarez-Vidal, M. J. Maldonado, J. Moreno-Montañés, and A. García-Layana, “Retinal thickness study with optical coherence tomography in patients with diabetes,” Invest. Ophthal. Vis. Sci. 43, 1588–1594 (2002). [PubMed]  

5. E. Gordon-Lipkin, B. Chodkowski, D. Reich, S. Smith, M. Pulicken, L. Balcer, E. Frohman, G. Cutter, and P. Calabresi, “Retinal nerve fiber layer is associated with brain atrophy in multiple sclerosis,” Neurology 69, 1603–1609 (2007). [CrossRef]   [PubMed]  

6. P. Jindahra, T. R. Hedges, C. E. Mendoza-Santiesteban, and G. T. Plant, “Optical coherence tomography of the retina: applications in neurology,” Curr. Opin. Neurol. 23, 16–23 (2010). [CrossRef]  

7. M. R. Hee, J. A. Izatt, E. A. Swanson, D. Huang, J. S. Schuman, C. P. Lin, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography of the human retina,” Arch. Ophthalmol. 113, 325–332 (1995). [CrossRef]   [PubMed]  

8. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178 (1991). [CrossRef]   [PubMed]  

9. L. de Sisternes, G. Jonna, J. Moss, M. F. Marmor, T. Leng, and D. L. Rubin, “Automated intraretinal segmentation of SD-OCT images in normal and age-related macular degeneration eyes,” Biomed. Opt. Express 8, 1926–1949 (2017). [CrossRef]   [PubMed]  

10. A. Montuoro, S. M. Waldstein, B. S. Gerendas, U. Schmidt-Erfurth, and H. Bogunović, “Joint retinal layer and fluid segmentation in OCT scans of eyes with severe macular edema using unsupervised representation and auto-context,” Biomed. Opt. Express 8, 1874–1888 (2017). [CrossRef]   [PubMed]  

11. J. Moura, J. Novo, M. Ortega, and P. Charlón, “3D retinal vessel tree segmentation and reconstruction with OCT images,” in Lecture Notes in Computer Science: Image Analysis and Recognition, ICIAR’16, vol. 9730 (2016), pp. 807–816.

12. S. Baamonde, J. Moura, J. Novo, and M. Ortega, “Automatic detection of epiretinal membrane in OCT images by means of local luminosity patterns,” in International Work-Conference on Artificial Neural Networks - IWANN’17, (2017), pp. 222–235.

13. T. Wang, Z. Ji, Q. Sun, Q. Chen, S. Yu, W. Fan, S. Yuan, and Q. Liu, “Label propagation and higher-order constraint-based segmentation of fluid-associated regions in retinal SD-OCT images,” Inf. Sci. 358, 92–111 (2016). [CrossRef]  

14. J. Moura, P. L. Vidal, J. Novo, J. Rouco, and M. Ortega, “Feature definition, analysis and selection for cystoid region characterization in optical coherence tomography,” in Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 21st International Conference KES-2017, Marseille, France, 6–8 September 2017., (2017), pp. 1369–1377.

15. G. Wilkins, O. Houghton, and A. Oldenburg, “Automated segmentation of intraretinal cystoid fluid in optical coherence tomography,” IEEE Transactions on Biomed. Eng. 59, 1109–1114 (2012). [CrossRef]  

16. S. Roychowdhury, D. D. Koozekanani, S. Radwan, and K. K. Parhi, “Automated localization of cysts in diabetic macular edema using optical coherence tomography images,” in Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE, (IEEE, 2013), pp. 1426–1429.

17. A. González, B. Remeseiro, M. Ortega, M. Penedo, and P. Charlón, “Automatic cyst detection in OCT retinal images combining region flooding and texture analysis,” IEEE Int. Symp. on Comput. Med. Syst.397–400 (2013).

18. G. Girish, A. R. Kothari, and J. Rajan, “Automated segmentation of intra-retinal cysts from optical coherence tomography scans using marker controlled watershed transform,” in Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the, (IEEE, 2016), pp. 1292–1295.

19. X. Chen, M. Niemeijer, L. Zhang, K. Lee, M. D. Abramoff, and M. Sonka, “Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: Probability constrained graph-search-graph-cut,” IEEE Transactions on Med. Imaging 31, 1521–1531 (2012). [CrossRef]  

20. X. Xu, K. Lee, L. Zhang, M. Sonka, and M. D. Abràmoff, “Stratified sampling voxel classification for segmentation of intraretinal and subretinal fluid in longitudinal clinical OCT data,” IEEE transactions on medical imaging 34, 1616–1623 (2015). [CrossRef]  

21. J. Wang, M. Zhang, A. D. Pechauer, L. Liu, T. S. Hwang, D. J. Wilson, D. Li, and Y. Jia, “Automated volumetric segmentation of retinal fluid on optical coherence tomography,” Biomed. Opt. Express 7, 1577–1589 (2016).

22. M. Esmaeili, A. Dehnavi, H. Rabbani, and F. Hajizadeh, “Three-dimensional segmentation of retinal cysts from spectral-domain optical coherence tomography images by the use of three-dimensional curvelet based K-SVD,” J. Med. Signals Sensors 6, 166–171 (2016).

23. S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express 6, 1172–1194 (2015).

24. A. Rashno, D. Koozekanani, P. Drayna, B. Nazari, S. Sadri, H. Rabbani, and K. Parhi, “Fully-automated segmentation of fluid/cyst regions in optical coherence tomography images with diabetic macular edema using neutrosophic sets and graph algorithms,” IEEE Trans. Biomed. Eng. 65, 989–1001 (2017).

25. A. Rashno, B. Nazari, D. Koozekanani, P. Drayna, S. Sadri, H. Rabbani, and K. Parhi, “Fully-automated segmentation of fluid regions in exudative age-related macular degeneration subjects: Kernel graph cut in neutrosophic domain,” PLoS One 12, e0186949 (2017).

26. M. Sahoo, S. Pal, and M. Mitra, “Automatic segmentation of accumulated fluid inside the retinal layers from optical coherence tomography images,” Measurement 101, 138–144 (2017).

27. M. Wu, Q. Chen, X. He, P. Li, W. Fan, S. Yuan, and H. Park, “Automatic subretinal fluid segmentation of retinal SD-OCT images with neurosensory retinal detachment guided by enface fundus imaging,” IEEE Trans. Biomed. Eng. 65, 87–95 (2018).

28. C. S. Lee, A. J. Tyring, N. P. Deruyter, Y. Wu, A. Rokem, and A. Y. Lee, “Deep-learning based, automated segmentation of macular edema in optical coherence tomography,” Biomed. Opt. Express 8, 3440–3448 (2017).

29. T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully automated detection and quantification of macular fluid in OCT using deep learning,” Ophthalmology 125, 549–558 (2018).

30. K. Gopinath and J. Sivaswamy, “Segmentation of retinal cysts from optical coherence tomography volumes via selective enhancement,” IEEE J. Biomed. Health Inform. (2018).

31. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: Retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional network,” CoRR abs/1704.02161 (2017).

32. F. G. Venhuizen, B. van Ginneken, B. Liefers, F. van Asten, V. Schreur, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography,” Biomed. Opt. Express 9, 1545–1569 (2018).

33. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

34. R. Tennakoon, A. K. Gostar, R. Hoseinnezhad, and A. Bab-Hadiashar, “Retinal fluid segmentation in OCT images using adversarial loss based convolutional neural networks,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), (2018), pp. 1436–1440.

35. G. N. Girish, B. Thakur, S. Roychowdhury, A. Kothari, and J. Rajan, “Segmentation of intra-retinal cysts from optical coherence tomography images using a fully convolutional neural network model,” IEEE J. Biomed. Health Inform. (2018).

36. G. Girish, V. Anima, A. R. Kothari, P. Sudeep, S. Roychowdhury, and J. Rajan, “A benchmark study of automated intra-retinal cyst segmentation algorithms using optical coherence tomography b-scans,” Comput. Methods Programs Biomed. 153, 105–114 (2018).

37. S. Chiu, X. Li, P. Nicholas, C. Toth, J. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SD-OCT images congruent with expert manual segmentation,” Opt. Express 18, 19413–19428 (2010).

38. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1, 269–271 (1959).

39. J. Moura, J. Novo, J. Rouco, M. Penedo, and M. Ortega, “Automatic identification of intraretinal cystoid regions in optical coherence tomography,” in Conference on Artificial Intelligence in Medicine in Europe - AIME’17, (2017), pp. 305–315.

40. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, CVPR’05, (2005), pp. 886–893.

41. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. Mach. Intell. 24, 971–987 (2002).

42. D. Gabor, “Theory of communication,” J. Inst. Electr. Eng. 93, 429–457 (1946).

43. M. Haghighat, S. Zonouz, and M. Abdel-Mottaleb, “CloudID: Trustworthy cloud-based and cross-enterprise biometric identification,” Expert Syst. Appl. 42, 7905–7916 (2015).

44. R. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst. Man Cybern. SMC-3, 610–621 (1973).

45. S. Buczkowski, S. Kyriacos, F. Nekka, and L. Cartilier, “The modified box-counting method: Analysis of some characteristic parameters,” Pattern Recognit. 31, 411–418 (1998).

46. O. Al-Kadi and D. Watson, “Texture analysis of aggressive and nonaggressive lung tumor CE CT images,” IEEE Trans. Biomed. Eng. 55, 1822–1830 (2008).

47. T. Lissack and K. Fu, “Error estimation in pattern recognition via l-distance between posterior density functions,” IEEE Trans. Inf. Theory 22, 34–45 (1976).

Figures (20)

Fig. 1 OCT image portions with fluid regions that are difficult to segment.
Fig. 2 Stages of the proposed methodology and their respective steps.
Fig. 3 ILM and RPE retinal layers in an OCT scan.
Fig. 4 Original image and a representation of the minimum rectangular area that contains the ROI, shown in green (retinal ROI) and blue (non-ROI area contained in the sampling area).
Fig. 5 Binary map creation steps. With the classification results (a), we identify their original positions (b) in the OCT image and assign the surrounding pixels to their category (c).
Fig. 6 Original retinal image ROI (a) and the resulting binary map (b), generated with a sample overlap of 52 px.
Fig. 7 Voting process steps. First, the classification results (a) are projected onto the original image (b). Then, each window votes for its overlapping pixels (c). The resulting image of this voting process is shown in (d).
Fig. 8 Comparison between the grayscale normalized map and the proposed complementary color scale (heat map).
Fig. 9 Final heat map, overlaid on the original OCT image. The color scale and its relationship with the resulting confidence values are also presented.
Fig. 10 Heat maps generated with different sample overlaps: 32 px (a) and 56 px (b).
Fig. 11 Mean test error per feature subset, averaged over 50 iterations, for the model trained with the Cirrus images. Vertical lines indicate the number of features that achieved the minimum mean test error for each classifier.
Fig. 12 Mean test error per feature subset, averaged over 50 iterations, for the model trained with the Spectralis images. Vertical lines indicate the number of features that achieved the minimum mean test error for each classifier.
Fig. 13 Mean test error per feature subset, averaged over 50 iterations, for the model trained with images from both capture devices. Vertical lines indicate the number of features that achieved the minimum mean test error for each classifier.
Fig. 14 Accuracy achieved using (a) the binary fluid maps and (b) the heat maps for the Cirrus image dataset. The color scale in the heat map test represents the percentage of correctly classified maps.
Fig. 15 Accuracy achieved using (a) the binary fluid maps and (b) the heat maps for the Spectralis image dataset. The color scale in the heat map test represents the percentage of correctly classified maps.
Fig. 16 Accuracy achieved using (a) the binary fluid maps and (b) the heat maps for the combined image dataset. The color scale in the heat map test represents the percentage of correctly classified maps.
Fig. 17 Examples of true positive (green) classified samples (a) and (b), true negative (red) classified samples (c) and (d), and misclassified samples (e) for each of the considered trained models.
Fig. 18 Representative map results of different complexities with images from the Spectralis capture device and the specific trained model.
Fig. 19 Representative map results of different complexities with images from the Cirrus capture device and the specific trained model.
Fig. 20 Representative map results of different complexities with images from both capture devices and the combined trained model.

Tables (2)

Table 1 Comparative taxonomy of the state of the art. NS = not specified.

Table 2 Brief descriptions of the defined feature categories.
