Optica Publishing Group

Robust layer segmentation of esophageal OCT images based on graph search using edge-enhanced weights

Open Access

Abstract

Automatic segmentation of esophageal layers in OCT images is crucial for studying esophageal diseases and computer-assisted diagnosis. This work aims to improve the current techniques to increase the accuracy and robustness for esophageal OCT image segmentation. A two-step edge-enhanced graph search (EEGS) framework is proposed in this study. Firstly, a preprocessing scheme is applied to suppress speckle noise and remove the disturbance in the esophageal structure. Secondly, the image is formulated into a graph and layer boundaries are located by a graph search. In this process, we propose an edge-enhanced weight matrix for the graph by combining the vertical gradients with a Canny edge map. Experiments on esophageal OCT images from guinea pigs demonstrate that the EEGS framework is more robust and more accurate than the current segmentation method. It can be potentially useful for the early detection of esophageal diseases.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT), first demonstrated by the MIT group in 1991 [1], is a powerful medical imaging technique that generates high-resolution, non-invasive, 3D images of biological tissues in real time. Initial applications of OCT were mainly in ophthalmology, where the microstructures revealed by OCT facilitated retinal disease diagnosis [2–4]. Endoscopic OCT is an important and rapidly growing branch of OCT technology [5]. Combined with flexible fiber-optic endoscopes, OCT can image internal luminal organs of the human body with minimal invasiveness. It has been shown that gastrointestinal endoscopic OCT can visualize multiple esophageal tissue layers and pathological changes in a variety of esophageal diseases, such as eosinophilic esophagitis (EoE), Barrett’s esophagus (BE) and even esophageal cancer [6–8]. Recently, the development of ultrahigh-resolution gastrointestinal endoscopic OCT has enabled imaging of the esophagus with much finer detail and improved contrast [9, 10]. Many esophageal diseases are manifested by changes in tissue microstructure, such as changes in esophageal layer thickness or disruption of the layers. Accurate quantification of the esophageal layered structure from gastrointestinal endoscopic OCT images can therefore be very valuable for objective diagnosis, assessment of disease severity, and the exploration of potential structure-based biomarkers of disease progression [5, 11]. For instance, the OCT image of BE shows an irregular mucosal surface and may present an absence of the layered architecture [12], while the OCT image of EoE features increased basal zone thickness in the esophagus [11]. These diseased features can be readily detected provided that the esophageal OCT images are accurately segmented.

Traditional manual segmentation is time-consuming and subjective; computer-aided automatic layer segmentation is therefore urgently needed. In the past few years, research on OCT image segmentation has mostly targeted retinal OCT images, and various algorithms have been published [13–16]. Representative methods fall into four categories: A-scan based methods [2, 3], active contour based methods [4, 17–19], machine learning based methods [20, 21] and graph based methods [22]. Among these, the graph based approach is the most widely used for layer segmentation and has proven quite successful [13, 22, 23]. Representative frameworks are graph theory and dynamic programming (GTDP) [13] and 3-D graph based segmentation [22]. It is worth mentioning that newly developed deep learning algorithms have also been applied to retinal layer segmentation with great success [24–27]. Studies on the segmentation of endoscopic OCT images are not as extensive as the retinal ones. Representative work can be found in the processing of cardiovascular [28–30] and esophageal OCT images [31–36]. As reported, the graph based method is also effective in segmenting cardiovascular [30] and esophageal tissue layers [36].

Segmentation of normal esophageal OCT images aims to detect the layered tissue structure. Taking the guinea pig as an example, the layered structure includes the epithelium stratum corneum (SC), epithelium (EP), lamina propria (LP), muscularis mucosae (MM) and submucosa (SM), as illustrated in Fig. 1, which shows the result of our proposed segmentation method. These tissues have a layered architecture similar to the retina. Consequently, automatic segmentation of esophageal OCT images has to address some challenges common to OCT image processing, such as speckle noise and motion artifacts [13, 36]. Moreover, esophageal OCT images pose some unique challenges resulting from the in vivo environment or the endoscopic setup, including disturbance from the plastic sheath and mucus, discontinuous boundaries due to non-uniform scanning speed, and irregular bending caused by sheath distortion.


Fig. 1 Demonstration of a segmented esophageal OCT image from guinea pig.


Solutions to these common problems, such as speckle noise and irregular bending, have been reported in the literature. Representative speckle suppression algorithms include the median filter [3, 37], wavelet shrinkage [38], curvelet denoising [39] and the non-linear anisotropic diffusion filter [4, 22]. Among these, the median filter is not the best performer, but its easy parameter setting, simple implementation and robust noise suppression make it popular in OCT image denoising, and it was adopted in our framework. More advanced denoising methods exist, such as the sparse representation based framework proposed by Fang [40, 41]; since they are harder to implement and may take more computation time than the simple median filter, they were not adopted in this work. The negative effect of irregular tissue bending can be reduced by image flattening [20], realized by cross-correlation [20] or baseline search [42]. Generally, the baseline-based method performs better, but robust baseline extraction is difficult in esophageal OCT images because of the disturbance from the plastic sheath and mucus. To improve image quality and remove such disturbance, we designed a comprehensive preprocessing scheme tailored to the specific problems of esophageal OCT images, creating favorable conditions for the subsequent segmentation.

Considering the aforementioned problems, this study proposes an edge-enhanced graph search (EEGS) framework to automatically segment esophageal tissue layers. The main contributions lie in two aspects. Firstly, a specifically designed preprocessing scheme addresses the challenges in esophageal OCT images (e.g., speckle noise, plastic sheath and mucus disturbances, and boundary distortion). Secondly, an edge-enhanced weight matrix that combines a modified Canny map [43, 44] with vertical gradients is employed for graph search, so that local features are preserved while missing boundaries in shadow regions are interpolated. Different from Yang’s work [44], the Canny edge detector used in this study was modified to focus on horizontal features, consistent with the esophageal tissue orientation, making it more suitable for esophageal layer boundary detection.

The paper is organized as follows. Section 2 introduces the detailed process of the proposed EEGS framework. Section 3 illustrates the advantages of the EEGS framework by segmentation experiments on esophageal OCT images of guinea pigs. Comparisons with the GTDP framework and the clinical potential of EEGS are also included in this section. Discussions and conclusions are presented in Sections 4 and 5, respectively.

2. Framework for robust esophageal layer segmentation using EEGS

The proposed EEGS method is composed of two major steps: 1) preprocessing and 2) graph search using a weight matrix based on Canny edge detection. The flowchart of the proposed EEGS framework is illustrated in Fig. 2.


Fig. 2 Flowchart of the proposed EEGS segmentation scheme.


2.1. Preprocessing

In order to calculate reliable weights that accurately indicate layer boundaries and improve the segmentation performance, we designed a novel preprocessing scheme to deal with the disturbance in esophageal OCT images.

2.1.1. Denoising

In this study, we chose the simple median filter to suppress speckle noise; its effectiveness in OCT image denoising has been proven by numerous studies [3, 37]. Moreover, the median filter offers high efficiency and easy parameter setting compared with other popular OCT denoising methods, such as wavelet and diffusion filters. A representative original esophageal image and the image denoised by a 7 × 7 median filter are presented in Fig. 3.
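As a minimal illustration of this denoising step (the paper's processing was done in MATLAB; here `scipy.ndimage.median_filter` is our choice of implementation, and the random array is only a stand-in for a real B-scan):

```python
import numpy as np
from scipy.ndimage import median_filter

# Stand-in B-scan: random speckle-like values (not real OCT data).
rng = np.random.default_rng(0)
oct_image = rng.random((256, 256)).astype(np.float32)

# 7 x 7 median filtering, the window size used in the paper.
denoised = median_filter(oct_image, size=7)
```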


Fig. 3 Demonstration of (a) a representative original esophageal OCT image and (b) the image denoised by a 7 × 7 median filter.


2.1.2. Removing plastic sheath

During endoscopic OCT imaging, the probe is protected from biofluid by a plastic sheath. The sheath boundary is so prominent that it causes strong disturbance in the search for esophageal tissue layers. To remove the plastic sheath from the OCT image, its upper bound Pr1 and lower bound Pr2 must be determined first.

In this study, the GTDP algorithm [13] was adopted to identify the boundaries of the plastic sheath. GTDP represents image I as a graph G(V, E), where V denotes the graph nodes corresponding to image pixels and E denotes the edges connecting adjacent nodes. The weight of the edge connecting adjacent pixels a and b was set as

w_ab = 2 − (g_a + g_b) + w_min,    (1)

where g_a and g_b are the vertical intensity gradients at pixels a and b, normalized to [0, 1], and w_min is the minimum possible weight in the graph. The gradients are calculated by convolving the image with a mask k [36], which is defined by

k = (1/13) [  0   1   2   1   0
              1   2   3   2   1
              0   0   0   0   0
             −1  −2  −3  −2  −1
              0  −1  −2  −1   0 ].    (2)
The path with the minimum total weight corresponds to a candidate layer boundary; it is found with Dijkstra’s algorithm [45].
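A compact dynamic-programming equivalent of this minimum-weight path search can be sketched as follows. This is a simplification of GTDP, not the authors' implementation: 8-connected left-to-right moves, free entry and exit columns, and the Eq. (1) weights; on this directed column-by-column graph the dynamic program should find the same minimum-weight path as Dijkstra's algorithm.

```python
import numpy as np

def min_weight_path(grad, wmin=1e-5):
    """Minimum-weight left-to-right path through a vertical-gradient map.

    `grad` holds vertical gradients normalized to [0, 1]; the weight of the
    edge between adjacent pixels follows Eq. (1): w_ab = 2 - (g_a + g_b) + wmin.
    Returns the row index of the path in every column.
    """
    h, w = grad.shape
    cost = np.full((h, w), np.inf)
    back = np.zeros((h, w), dtype=int)
    cost[:, 0] = 0.0                      # free entry anywhere in column 0
    for c in range(1, w):
        for r in range(h):
            for dr in (-1, 0, 1):         # 8-connected moves between columns
                r0 = r + dr
                if 0 <= r0 < h:
                    wab = 2 - (grad[r0, c - 1] + grad[r, c]) + wmin
                    if cost[r0, c - 1] + wab < cost[r, c]:
                        cost[r, c] = cost[r0, c - 1] + wab
                        back[r, c] = r0
    # Backtrack from the cheapest node in the last column.
    path = [int(np.argmin(cost[:, -1]))]
    for c in range(w - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```

On a synthetic gradient map with one bright row, the returned path follows that row, which is the behavior the graph search relies on to trace a layer boundary.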

Pr1 is the boundary separating the plastic sheath from the background and possesses the highest intensity contrast; it therefore produces the largest gradient and is easily located by GTDP. Pr2 can then be determined by GTDP with the search region limited to between Pr1 and 10 pixels below Pr1, where 10 pixels is the approximate sheath thickness in this study. The plastic sheath is then removed by shifting out the pixels between Pr1 and Pr2, and the vacated pixels are filled with a mirror image. The result is illustrated in Fig. 4.


Fig. 4 Plastic sheath removal: (a) position of Pr1 and Pr2; (b) image with the plastic sheath removed.


2.1.3. Lumen segmentation

The outer boundary of the esophageal lumen is defined as the baseline (Fig. 1). The baseline is important in this study because it is the foundation of the subsequent image flattening and also affects the search for the other layer boundaries.

Baseline extraction using GTDP should be straightforward, since the baseline is the most prominent layer boundary once the plastic sheath is removed, as illustrated in Fig. 4(b). Nevertheless, mucus may introduce large errors into GTDP, as displayed in Fig. 5(a). Note that the SC layer has the highest intensity in the image, which can be used to correct the mucus-influenced baseline.


Fig. 5 Demonstration of: (a) the initial baseline BA1; (b) the erroneous part of BA1; (c) the corrected baseline and (d) the flattened image.


The detailed process is summarized below:

  1. Extract a preliminary baseline BA1 by GTDP, as shown in Fig. 5(a).
  2. In each column, find the topmost pixel whose intensity exceeds a predefined threshold.
  3. Determine whether BA1 runs above the obtained points over some successive region. If BA1 is consistent with the obtained points, mark it as the valid baseline; otherwise, mark the differing part as the erroneous region (Fig. 5(b)) and continue with the following steps.
  4. Limit the graph search region for GTDP. As illustrated in Fig. 5(c), in the valid part of BA1 the search region is defined around BA1, while in the erroneous part the search is conducted beneath BA1, eliminating the negative effect of the mucus.
  5. Perform a graph search in the redefined region to obtain the final baseline (Fig. 5(c)).
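Steps 2 and 3 above can be sketched as a per-column check; this is our own simplified reading of the procedure, and the threshold `thresh` and tolerance `tol` are illustrative parameters, not values from the paper:

```python
import numpy as np

def find_erroneous_region(image, ba1, thresh, tol=5):
    """Flag columns where the preliminary baseline BA1 runs above the
    topmost bright (SC-layer) pixel by more than `tol` rows; those
    columns form the erroneous region to be re-searched beneath BA1.
    """
    w = image.shape[1]
    erroneous = np.zeros(w, dtype=bool)
    for c in range(w):
        bright = np.flatnonzero(image[:, c] > thresh)
        if bright.size and ba1[c] < bright[0] - tol:
            erroneous[c] = True   # BA1 sits above the SC layer: mucus error
    return erroneous
```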

2.1.4. Flattening

Based on graph search theory, a layer boundary is identified by searching for the minimum-weight path across the graph. When the weights are set uniformly, the graph search tends to find the shortest geometric path. However, in vivo esophageal OCT images are often accompanied by steep slopes and irregular bending due to tissue movement and sheath distortion, which cause the boundaries of interest to follow complex curves. Flattening is an effective solution to this problem.

The flattened image is created based on the baseline obtained in the previous section: each column is shifted up or down so that the baseline becomes a horizontal line, and empty pixels resulting from the shift are filled with a mirror image. The final image, shown in Fig. 5(d), benefits the subsequent segmentation.
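The column-shift-and-mirror operation can be sketched as below; `flatten_to_baseline` is our own name for it, the target row is taken as the median baseline row for illustration, and the mirror fill assumes each shift is smaller than half the image height:

```python
import numpy as np

def flatten_to_baseline(image, baseline):
    """Shift each column so that `baseline` maps to a common reference row;
    pixels vacated by the shift are filled with a mirror image.
    """
    h, w = image.shape
    ref = int(np.round(np.median(baseline)))   # common target row
    out = np.empty_like(image)
    for c in range(w):
        shift = ref - int(baseline[c])
        col = np.roll(image[:, c], shift)
        if shift > 0:                          # vacated rows at the top
            col[:shift] = col[shift:2 * shift][::-1]
        elif shift < 0:                        # vacated rows at the bottom
            col[shift:] = col[2 * shift:shift][::-1]
        out[:, c] = col
    return out
```

After the shift, the pixel that sat on the baseline in every column lies on the same reference row, which is what makes the subsequent horizontal graph search effective.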

2.2. Esophageal layer segmentation by EEGS

EEGS consists of the following steps. Firstly, a modified Canny edge detector creates a map of the main local edges. Secondly, a gradient map in the axial direction is generated using a convolution mask. An edge-enhanced graph combining the gradient and Canny maps is then constructed, and the layer boundaries are extracted by dynamic programming. The detailed realization is described as follows.

2.2.1. Modified Canny edge detection

The Canny edge detector [43] was modified to create an edge-enhanced weight matrix for the subsequent graph search. The process can be summarized by the following steps:

  1. Apply a Gaussian filter to smooth the image.
  2. Calculate the intensity gradients of the smoothed image. The gradient magnitude G and direction α are determined by

    G = |G_y|,   α = arctan(G_y / G_x),    (3)

    where G_x and G_y are the first derivatives in the horizontal and vertical directions, respectively. The gradient magnitude is calculated along the vertical direction only, since the flattened esophageal tissue layers are distributed horizontally.
  3. Apply non-maximum suppression to remove spurious edge responses. The edge matrix is

    I_e = { 0  if G ≤ p
            1  if (G > p) and (G > I_i) and (G > I_j)    (4)
            0  if (G > p) and [(G ≤ I_i) or (G ≤ I_j)],

    where p is a predefined threshold, and I_i and I_j denote the gradient magnitudes of the neighboring pixels in the positive and negative gradient directions, respectively.

Consequently, a binary matrix I_e indicating image edges is generated. An example of the edge map I_e overlaid on the original image is shown in Fig. 6(b). By removing vertical edges, the modified Canny detector better describes the esophageal tissue layers, creating an edge map more suitable for layer segmentation.
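The three steps above can be sketched as follows. This is a simplified illustration, not the authors' code: the threshold `p`, the smoothing `sigma`, and the use of `scipy.ndimage.sobel` for the vertical derivative are our own choices, and hysteresis thresholding from the original Canny detector is omitted, as in the steps listed above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def modified_canny(image, p=0.1, sigma=2.0):
    """Vertical-only Canny sketch: Gaussian smoothing, G = |Gy|, and
    non-maximum suppression along the vertical direction, so that only
    horizontal edges (matching the flattened tissue layers) survive.
    """
    smooth = gaussian_filter(image, sigma=sigma)
    g = np.abs(sobel(smooth, axis=0))      # vertical derivative only
    edges = np.zeros(g.shape, dtype=np.uint8)
    for r in range(1, g.shape[0] - 1):
        # Keep a pixel only if it beats the threshold and both of its
        # vertical neighbours (Eq. (4) with I_i, I_j the rows above/below).
        edges[r] = (g[r] > p) & (g[r] > g[r - 1]) & (g[r] > g[r + 1])
    return edges
```

On an image containing a single bright horizontal layer, the detected edge rows cluster around that layer, while vertical structure produces no response in G = |G_y|.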


Fig. 6 Edge maps overlaid on the original image: (a) the traditional Canny edge map and (b) the modified Canny edge map.


2.2.2. Construction of edge-enhanced gradient map

In this study, the edge-enhanced gradient map M is defined as

M = Gr + w × I_e,    (5)

where Gr denotes the vertical intensity gradient calculated with mask k (Eq. (2)), I_e represents the modified Canny edge map and w is a weighting parameter. Combining Gr and I_e has the following advantages: by using neighboring information, Gr provides complementary search guidance where the Canny detector loses efficacy, while I_e compensates for the loss of local precision in Gr caused by the smoothing effect of k. As a result, M preserves local details while interpolating information into shadow regions.
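Given the two maps, constructing M is a single weighted sum; the value of w below is purely illustrative, since the paper does not specify it:

```python
import numpy as np

def edge_enhanced_map(gr, ie, w=0.5):
    """Eq. (5): combine the vertical gradient map Gr with the binary
    Canny edge map Ie using the weighting parameter w."""
    return gr + w * ie.astype(float)
```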

2.2.3. Segmentation by EEGS

The EEGS framework uses GTDP for layer boundary identification. Instead of setting the weight by Eq. (1), the edge weight in EEGS is defined as

w_ab = 2 − (M_a^n + M_b^n) + w_min,    (6)

where M_a^n and M_b^n are the normalized edge-enhanced map values of adjacent nodes a and b, calculated from Eq. (5).

Each boundary is extracted by performing EEGS iteratively within a limited search area. The area is defined using the previously identified boundary and prior knowledge of the tissue layer thickness with a ±20% tolerance [44, 46], so that each search region ideally contains one boundary. The prior knowledge was obtained by manual segmentation. In this way, all six boundaries are acquired automatically.
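The per-boundary region limiting can be sketched as follows; as an illustration we assume the prior thickness is measured downward from the previously found boundary, and the helper name and return convention are our own:

```python
import numpy as np

def limit_search_region(prev_boundary, prior_thickness, tol=0.2):
    """Per-column row limits for the next EEGS search, placed one layer
    below the previously identified boundary with a +/-20% tolerance."""
    lo = prev_boundary + prior_thickness * (1 - tol)
    hi = prev_boundary + prior_thickness * (1 + tol)
    return np.floor(lo).astype(int), np.ceil(hi).astype(int)
```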

3. Experiments

3.1. Experimental data

The proposed EEGS segmentation framework was tested on esophageal OCT images of guinea pigs acquired by an 800-nm ultrahigh-resolution gastrointestinal endoscopic OCT system [9, 10, 47]. A typical image is illustrated in Fig. 3(a). Some layer boundaries, such as those of the SC, EP and LP, can be observed visually, while the MM and SM boundaries have low contrast and are difficult to identify. In addition, disturbances such as speckle noise, the plastic sheath and mucus are clearly present in the image.

3.2. EEGS performance on OCT images with different challenges

In vivo esophageal OCT images present unique segmentation difficulties resulting from motion artifacts and intrinsic disturbance from the endoscopic equipment itself (such as the plastic sheath). Fig. 7 illustrates several typical challenging images. Specifically, Fig. 7(a) shows an image with irregular bending caused by sheath distortion; Fig. 7(c) has quite weak boundaries in some regions of the MM and SM layers; Fig. 7(e) presents discontinuous boundaries, possibly caused by the non-uniform rotation speed of the endoscope; in Fig. 7(g), mucus separates the probe from the tissue surface. All of these problems are addressed in our EEGS scheme through procedures such as flattening, baseline correction and the Canny-based edge-enhancement strategy. The corresponding segmentation results are demonstrated in Figs. 7(b), 7(d), 7(f) and 7(h). The results show that EEGS accurately identifies all the esophageal layers, confirming the robustness of the proposed method.


Fig. 7 Representative esophageal OCT images with (a) irregular bending; (c) weak boundary; (e) discontinuous boundary and (g) mucus. (b), (d), (f) and (h) are the corresponding segmentation results with the proposed EEGS method.


3.3. Segmentation result analysis of the EEGS framework

To further confirm the effectiveness of the EEGS framework, we compared the proposed method with manual segmentation by three experienced observers. These observers have segmented numerous OCT images of different organs, such as the retina, esophagus and airway, using a freeform (drawing) method implemented in the open-source software ITK-SNAP [48]. In addition, a comparison between EEGS and GTDP [13, 36] was carried out to show the advantages of the proposed method. The experimental data comprise 100 esophageal OCT images, each with 2048 × 2048 pixels, acquired from one healthy guinea pig. For quantitative evaluation, we calculated the thickness of the five esophageal layers.

An intuitive comparison of the segmentations from EEGS, GTDP and one of the observers (Obs. 1) is shown in Fig. 8. Both EEGS and GTDP are consistent with the manual segmentation in the right portion of the image, where the tissue layers are smooth and little disturbance exists. In the left portion, where distortion occurs, differences between automatic and manual segmentation are visible; here the EEGS result is closer to Obs. 1 than GTDP, because the modified Canny map in EEGS enhances the edge details and thereby compensates for the loss of precision of the vertical gradients used by GTDP. The unsigned border position differences between the automatic and manual segmentations are listed in Table 1, where borders BD1 to BD6 denote the layer boundaries from the top of the SC layer to the bottom of the SM layer, and the data are presented as mean ± standard deviation in micrometers. The EEGS result is closer to the manual segmentation in all cases, demonstrating its better accuracy in layer boundary identification.


Fig. 8 Comparison of segmentation results of EEGS, GTDP and Obs.1.



Table 1. Unsigned border position differences of the automatic segmentation methods and manual segmentation.

The average layer thickness over the 100 esophageal OCT images and the corresponding standard deviation are listed in Table 2. Using each manual segmentation separately as a reference, the differences in layer thickness between the automatic segmentations and the reference are listed in Table 3, where bold data indicate the automatic result closer to the manual segmentation. The EEGS results are closer to the manual reference values than GTDP in all cases, indicating that the proposed EEGS segments the five esophageal layers more accurately than GTDP.


Table 2. Layer thickness obtained by different methods for 100 esophageal OCT images of guinea pig.


Table 3. Comparisons of esophageal layer thickness measurements between EEGS and GTDP using manual segmentation as references.

Fig. 9 shows scatter plots indicating the reliability of the thickness measurements using GTDP and EEGS in comparison with the reference annotations from Obs. 1, together with the corresponding Bland-Altman plots. In Fig. 9, n is the number of points, r denotes the correlation coefficient and LOA represents the limits of agreement with the 95% confidence interval. The EEGS method yields a larger r and a smaller LOA, indicating that its results are closer to the reference annotations.


Fig. 9 Correlation analysis and the corresponding Bland-Altman plot of (a) the GTDP method and (b) the EEGS framework, compared to the manual segmentation of Obs.1.


3.4. Clinical potential of EEGS

To demonstrate its clinical potential, the EEGS framework was employed to segment three sets of 30 guinea pig esophagus images, comprising two normal cases and one EoE model [49]. EoE is an esophageal disorder characterized by eosinophil-predominant allergic inflammation of the esophagus [11]. Representative OCT images of guinea pig esophagus segmented by EEGS are presented in Figs. 10(a)–10(c), and the corresponding thicknesses of the five tissue layers are shown in Fig. 10(d). The SC layer of the EoE model is significantly thicker than in the normal cases, indicating that our EEGS framework could potentially aid clinical diagnosis [11, 50].


Fig. 10 Representative segmentation results of EEGS for (a) normal guinea pig 1; (b) normal guinea pig 2; (c) guinea pig EoE model and (d) comparison of the measured layer thicknesses.


4. Discussions

Our image analyses were performed on a personal computer with an Intel Core i7 2.20 GHz CPU and 16 GB RAM. Using MATLAB, EEGS takes about 12 seconds to preprocess and segment a 2048 × 2048 pixel esophageal OCT image, which is less efficient than GTDP (about 8 seconds) due to the additional Canny edge detection. This efficiency is suboptimal for real-time processing; to reduce the segmentation time, more efficient GPU-based programming in C will be adopted in the future.

Since the esophageal OCT images were collected successively by the endoscope, the overlapping information in adjacent frames could be used to correct outliers and further improve segmentation accuracy. In addition, the current algorithm requires some a priori knowledge, such as the number of layers to be segmented and the approximate layer thickness. Future work will explore adaptive parameter setting to perform automatic segmentation with less or no user input.

The esophageal layer segmentation experiments on normal and EoE guinea pig models demonstrate the clinical potential of the EEGS framework. In the future, esophageal OCT images from humans will be collected and studied so that criteria for diagnosing different esophageal diseases can be determined, with the goal of developing an automatic diagnosis system for esophageal diseases.

5. Conclusions

The main contribution of this paper is the EEGS scheme for accurately segmenting esophageal layers in OCT images. With appropriate preprocessing before segmentation, the negative effects of the OCT imaging system and in vivo motion artifacts are minimized. By introducing Canny edge detection into the construction of the edge-enhanced weight matrix, local edge information is preserved while the missing information in shadow regions is interpolated. Notably, the Canny detector used in this study focuses on boundaries along the horizontal direction, matching the esophageal layers better. Experiments showed that the proposed EEGS method achieves better accuracy and stability in esophageal layer segmentation than GTDP, and it has the potential to be used for diagnosing esophageal diseases.

Funding

Key Program for International S&T Cooperation Projects of China (2016YFE0107700); National Institutes of Health (NIH) (HL121788 and CA153023); Wallace H. Coulter Foundation.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]   [PubMed]  

2. M. R. Hee, J. A. Izatt, E. A. Swanson, D. Huang, J. S. Schuman, C. P. Lin, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography of the human retina,” Arch. Ophthalmol. 113, 325–332 (1995). [CrossRef]   [PubMed]  

3. D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Transactions on Med. Imaging 20, 900–916 (2001). [CrossRef]  

4. D. C. Fernandez, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13, 10200–10216 (2005). [CrossRef]  

5. M. J. Gora, M. J. Suter, G. J. Tearney, and X. D. Li, “Endoscopic optical coherence tomography: technologies and clinical applications [invited],” Biomed. Opt. Express 8, 2405–2444 (2017). [CrossRef]   [PubMed]  

6. X. D. Li, S. A. Boppart, J. Van Dam, H. Mashimo, M. Mutinga, W. Drexler, M. Klein, C. Pitris, M. L. Krinsky, M. E. Brezinski, and J. G. Fujimoto, “Optical coherence tomography: Advanced technology for the endoscopic imaging of Barrett’s esophagus,” Endoscopy 32, 921–930 (2000). [CrossRef]  

7. W. Hatta, K. Uno, T. Koike, S. Yokosawa, K. Iijima, A. Imatani, and T. Shimosegawa, “Optical coherence tomography for the staging of tumor infiltration in superficial esophageal squamous cell carcinoma,” Gastrointest. Endosc. 71, 899–906 (2010). [CrossRef]   [PubMed]  

8. M. J. Gora, J. S. Sauk, R. W. Carruth, K. A. Gallagher, M. J. Suter, N. S. Nishioka, L. E. Kava, M. Rosenberg, B. E. Bouma, and G. J. Tearney, “Tethered capsule endomicroscopy enables less invasive imaging of gastrointestinal tract microstructure,” Nat. Medicine 19, 238–240 (2013). [CrossRef]  

9. J. F. Xi, A. Q. Zhang, Z. Y. Liu, W. X. Liang, L. Y. Lin, S. Y. Yu, and X. D. Li, “Diffractive catheter for ultrahigh-resolution spectral-domain volumetric OCT imaging,” Opt. Lett. 39, 2016–2019 (2014). [CrossRef]   [PubMed]  

10. W. Yuan, R. Brown, W. Mitzner, L. Yarmus, and X. D. Li, “Super-achromatic monolithic microprobe for ultrahigh-resolution endoscopic optical coherence tomography at 800 nm,” Nat. Commun. 8, 1531 (2017). [CrossRef]  

11. Z. Y. Liu, J. F. Xi, M. Tse, A. C. Myers, X. D. Li, P. J. Pasricha, and S. Y. Yu, “Allergic inflammation-induced structural and functional changes in esophageal epithelium in a guinea pig model of eosinophilic esophagitis,” Gastroenterology 146, S92 (2014). [CrossRef]  

12. J. M. Poneros, S. Brand, B. E. Bouma, G. J. Tearney, C. C. Compton, and N. S. Nishioka, “Diagnosis of specialized intestinal metaplasia by optical coherence tomography,” Gastroenterology 120, 7–12 (2001). [CrossRef]   [PubMed]  

13. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18, 19413–19428 (2010). [CrossRef]   [PubMed]  

14. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2, 1743–1756 (2011). [CrossRef]   [PubMed]  

15. M. L. Wu, Q. Chen, X. J. He, P. Li, W. Fan, S. T. Yuan, and H. J. Park, “Automatic subretinal fluid segmentation of retinal SD-OCT images with neurosensory retinal detachment guided by enface fundus imaging,” IEEE Transactions on Biomed. Eng. 65, 87–95 (2018). [CrossRef]  

16. A. Lang, A. Carass, B. M. Jedynak, S. D. Solomon, P. A. Calabresi, and J. L. Prince, “Intensity inhomogeneity correction of SD-OCT data using macular flatspace,” Med. Image Anal. 43, 85–97 (2018).

17. A. Yazdanpanah, G. Hamarneh, B. Smith, and M. Sarunic, “Intra-retinal layer segmentation in optical coherence tomography using an active contour approach,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009, Part II, Lecture Notes in Computer Science 5762, 649 (2009).

18. M. A. Mayer, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express 1, 1358–1383 (2010).

19. I. Ghorbel, F. Rossant, I. Bloch, S. Tick, and M. Paques, “Automated segmentation of macular layers in OCT images and quantitative evaluation of performances,” Pattern Recognit. 44, 1590–1603 (2011).

20. A. R. Fuller, R. J. Zawadzki, S. Choi, D. F. Wiley, J. S. Werner, and B. Hamann, “Segmentation of three-dimensional retinal image data,” IEEE Trans. Vis. Comput. Graph. 13, 1719–1726 (2007).

21. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4, 1133–1152 (2013).

22. M. K. Garvin, M. D. Abramoff, R. Kardon, S. R. Russell, X. D. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27, 1495–1505 (2008).

23. R. Kafieh, H. Rabbani, M. D. Abramoff, and M. Sonka, “Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map,” Med. Image Anal. 17, 907–928 (2013).

24. F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. J. P. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sanchez, “Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks,” Biomed. Opt. Express 8, 3292–3316 (2017).

25. C. S. Lee, A. J. Tyring, N. P. Deruyter, Y. Wu, A. Rokem, and A. Y. Lee, “Deep-learning based, automated segmentation of macular edema in optical coherence tomography,” Biomed. Opt. Express 8, 3440–3448 (2017).

26. L. Y. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. T. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8, 2732–2744 (2017).

27. J. Loo, L. Y. Fang, D. Cunefare, G. J. Jaffe, and S. Farsiu, “Deep longitudinal transfer learning-based automatic segmentation of photoreceptor ellipsoid zone defects on optical coherence tomography images of macular telangiectasia type 2,” Biomed. Opt. Express 9, 2681–2698 (2018).

28. G. J. Ughi, T. Adriaenssens, M. Larsson, C. Dubois, P. R. Sinnaeve, M. Coosemans, W. Desmet, and J. D’hooge, “Automatic three-dimensional registration of intravascular optical coherence tomography images,” J. Biomed. Opt. 17, 049803 (2012).

29. Y. Gan, D. Tsay, S. B. Amir, C. C. Marboe, and C. P. Hendon, “Automated classification of optical coherence tomography images of human atrial tissue,” J. Biomed. Opt. 21, 101407 (2016).

30. G. Zahnd, A. Hoogendoorn, N. Combaret, A. Karanasos, E. Pery, L. Sarry, P. Motreff, W. Niessen, E. Regar, G. van Soest, F. Gijsen, and T. van Walsum, “Contour segmentation of the intima, media, and adventitia layers in intracoronary OCT images: application to fully automatic detection of healthy wall regions,” Int. J. Comput. Assist. Radiol. Surg. 12, 1923–1936 (2017).

31. X. Qi, M. V. Sivak, G. Isenberg, J. E. Willis, and A. M. Rollins, “Computer-aided diagnosis of dysplasia in Barrett’s esophagus using endoscopic optical coherence tomography,” J. Biomed. Opt. 11, 044010 (2006).

32. X. Qi, Y. S. Pan, M. V. Sivak, J. E. Willis, G. Isenberg, and A. M. Rollins, “Image analysis for classification of dysplasia in Barrett’s esophagus using endoscopic optical coherence tomography,” Biomed. Opt. Express 1, 825–847 (2010).

33. P. B. Garcia-Allende, I. Amygdalos, H. Dhanapala, R. D. Goldin, G. B. Hanna, and D. S. Elson, “Morphological analysis of optical coherence tomography images for automated classification of gastrointestinal tissues,” Biomed. Opt. Express 2, 2821–2836 (2011).

34. G. J. Ughi, M. J. Gora, A. F. Swager, A. Soomro, C. Grant, A. Tiernan, M. Rosenberg, J. S. Sauk, N. S. Nishioka, and G. J. Tearney, “Automated segmentation and characterization of esophageal wall in vivo by tethered capsule optical coherence tomography endomicroscopy,” Biomed. Opt. Express 7, 409–419 (2016).

35. M. Kassinopoulos, J. Dong, G. J. Tearney, and C. Pitris, “Automated detection of esophageal dysplasia in in vivo optical coherence tomography images of the human esophagus,” Proc. SPIE 10483, 104830R (2018).

36. J. L. Zhang, W. Yuan, W. X. Liang, S. Y. Yu, Y. M. Liang, Z. Y. Xu, Y. X. Wei, and X. D. Li, “Automatic and robust segmentation of endoscopic OCT images and optical staining,” Biomed. Opt. Express 8, 2697–2708 (2017).

37. V. J. Srinivasan, B. K. Monson, M. Wojtkowski, R. A. Bilonick, I. Gorczynska, R. Chen, J. S. Duker, J. S. Schuman, and J. G. Fujimoto, “Characterization of outer retinal morphology with high-speed, ultrahigh-resolution optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 49, 1571–1579 (2008).

38. G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abramoff, and M. Sonka, “Three-dimensional analysis of retinal layer texture: Identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging 29, 1321–1330 (2010).

39. Z. P. Jian, L. F. Yu, B. Rao, B. J. Tromberg, and Z. P. Chen, “Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform,” Opt. Express 18, 1024–1032 (2010).

40. L. Y. Fang, S. T. Li, R. P. McNabb, Q. Nie, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Fast acquisition and reconstruction of optical coherence tomography images via sparse representation,” IEEE Trans. Med. Imaging 32, 2034–2049 (2013).

41. L. Y. Fang, S. T. Li, D. Cunefare, and S. Farsiu, “Segmentation based sparse reconstruction of optical coherence tomography images,” IEEE Trans. Med. Imaging 36, 407–421 (2017).

42. M. K. Garvin, M. D. Abramoff, X. D. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28, 1436–1447 (2009).

43. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8, 679–698 (1986).

44. Q. Yang, C. A. Reisman, Z. G. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A. S. Raza, D. C. Hood, and K. P. Chan, “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Opt. Express 18, 21293–21307 (2010).

45. E. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1, 269–271 (1959).

46. Q. Yang, C. A. Reisman, K. P. Chan, R. Ramachandran, A. Raza, and D. C. Hood, “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express 2, 2493–2503 (2011).

47. W. Yuan, J. Mavadia-Shukla, J. F. Xi, W. X. Liang, X. Y. Yu, S. Y. Yu, and X. D. Li, “Optimal operational conditions for supercontinuum-based ultrahigh-resolution endoscopic OCT imaging,” Opt. Lett. 41, 250–253 (2016).

48. P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig, “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability,” Neuroimage 31, 1116–1128 (2006).

49. Z. Y. Liu, Y. T. Hu, X. Y. Yu, J. F. Xi, X. M. Fan, C. M. Tse, A. C. Myers, P. J. Pasricha, X. D. Li, and S. Y. Yu, “Allergen challenge sensitizes TRPA1 in vagal sensory neurons and afferent C-fiber subtypes in guinea pig esophagus,” Am. J. Physiol. Gastrointest. Liver Physiol. 308, G482–G488 (2015).

50. M. Baroni, P. Fortunato, and A. La Torre, “Towards quantitative analysis of retinal features in optical coherence tomography,” Med. Eng. Phys. 29, 432–441 (2007).

Figures (10)

Fig. 1 Demonstration of a segmented esophageal OCT image from a guinea pig.

Fig. 2 Flowchart of the proposed EEGS segmentation scheme.

Fig. 3 Demonstration of (a) a representative original esophageal OCT image and (b) the image denoised by a 7 × 7 median filter.

Fig. 4 Plastic sheath removal: (a) position of Pr1 and Pr2; (b) image with the plastic sheath removed.

Fig. 5 Demonstration of (a) the initial baseline BA1; (b) the erroneous part of BA1; (c) the corrected baseline and (d) the flattened image.

Fig. 6 Edge maps overlying the original image: (a) the traditional Canny edge map and (b) the modified Canny edge map.

Fig. 7 Representative esophageal OCT images with (a) irregular bending; (c) a weak boundary; (e) a discontinuous boundary and (g) mucus. (b), (d), (f) and (h) are the corresponding segmentation results with the proposed EEGS method.

Fig. 8 Comparison of the segmentation results of EEGS, GTDP and Obs.1.

Fig. 9 Correlation analysis and the corresponding Bland-Altman plots of (a) the GTDP method and (b) the EEGS framework, compared to the manual segmentation of Obs.1.

Fig. 10 Representative segmentation results of EEGS for (a) normal guinea pig 1; (b) normal guinea pig 2; (c) the guinea pig EoE model and (d) comparison of the measured layer thicknesses.

Tables (3)

Table 1 Unsigned border position differences of the automatic segmentation methods and manual segmentation.

Table 2 Layer thickness obtained by different methods for 100 esophageal OCT images of guinea pig.

Table 3 Comparisons of esophageal layer thickness measurements between EEGS and GTDP using manual segmentation as references.

Equations (6)


$$w_{ab} = 2 - (g_a + g_b) + w_{\min}, \tag{1}$$

$$k = \frac{1}{13}\begin{bmatrix} 0 & 1 & 2 & 1 & 0 \\ 1 & 2 & 3 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ -1 & -2 & -3 & -2 & -1 \\ 0 & -1 & -2 & -1 & 0 \end{bmatrix}. \tag{2}$$

$$G = |G_y|, \qquad \alpha = \operatorname{atan}\!\left(\frac{G_y}{G_x}\right) \tag{3}$$

$$I_e = \begin{cases} 0 & \text{if } G \le p \\ 1 & \text{if } (G > p) \wedge (G > I_i) \wedge (G > I_j) \\ 0 & \text{if } (G > p) \wedge \left[(G \le I_i) \vee (G \le I_j)\right] \end{cases} \tag{4}$$

$$M = Gr + w \times I_e \tag{5}$$

$$w_{ab} = 2 - (M_a^{\,n} + M_b^{\,n}) + w_{\min} \tag{6}$$
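The pipeline behind these equations can be sketched in a few lines of NumPy. The following is a minimal illustration, not the paper's implementation: it convolves an image with the vertical gradient kernel of Eq. (2), builds a binary edge map (a simple threshold stands in for the full Canny hysteresis of Eqs. (3)–(4)), combines the two into the edge-enhanced map $M$ of Eq. (5), and forms the node-pair weights of Eq. (6). All function and variable names here are hypothetical.

```python
import numpy as np

# 5x5 vertical gradient kernel of Eq. (2), normalized by 1/13
K = np.array([
    [ 0,  1,  2,  1,  0],
    [ 1,  2,  3,  2,  1],
    [ 0,  0,  0,  0,  0],
    [-1, -2, -3, -2, -1],
    [ 0, -1, -2, -1,  0],
]) / 13.0

def vertical_gradient(img):
    """Correlate the image with K using edge-replicated padding."""
    h, w = img.shape
    pad = np.pad(img, 2, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(pad[y:y + 5, x:x + 5] * K)
    return out

def edge_enhanced_weights(grad, edge_map, w_factor=1.0, w_min=1e-5):
    """Combine gradient and edge maps into M (Eq. 5) and return a
    weight function over pixel pairs (Eq. 6, with M normalized to [0, 1])."""
    M = grad + w_factor * edge_map
    Mn = (M - M.min()) / (M.max() - M.min() + 1e-12)
    def weight(a, b):
        # boundary-like pixels (Mn near 1) get near-minimal weights
        return 2.0 - (Mn[a] + Mn[b]) + w_min
    return weight

# Toy image with one horizontal layer boundary at row 5
img = np.zeros((10, 10))
img[5:, :] = 1.0
g = vertical_gradient(img)
edges = (np.abs(g) > 0.5).astype(float)  # stand-in for the Canny map
w = edge_enhanced_weights(np.abs(g), edges)

print(round(w((0, 4), (0, 5)), 5))  # flat-region pair: prints 2.00001
print(w((5, 4), (5, 5)) < w((0, 4), (0, 5)))  # boundary pair is cheaper: True
```

With the boundary pixels carrying near-zero weights, a shortest-path search (e.g. Dijkstra's algorithm, Ref. 45) through this weight matrix is drawn along the layer boundary, which is the core idea of the graph-search step.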