Advanced image processing for optical coherence tomographic angiography of macular diseases

Abstract

This article provides an overview of advanced image processing for three-dimensional (3D) optical coherence tomographic (OCT) angiography of macular diseases, including age-related macular degeneration (AMD) and diabetic retinopathy (DR). A fast automated retinal layer segmentation algorithm using directional graph search is introduced to separate 3D flow data into different layers in the presence of pathologies. Intelligent manual correction methods are also systematically addressed; corrections can be made rapidly on a single frame and then automatically propagated to the full 3D volume with an accuracy better than 1 pixel. Methods to visualize and analyze abnormalities, including retinal and choroidal neovascularization, retinal ischemia, and macular edema, are presented to facilitate the clinical use of OCT angiography.

© 2015 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) provides cross-sectional and three-dimensional (3D) imaging of biological tissues, and is now a part of the standard of care in ophthalmology [1, 2]. Conventional OCT, however, is only sensitive to backscattered light intensity and is unable to directly detect blood flow and vascular abnormalities such as capillary dropout or pathologic vessel growth (neovascularization), which are the major vascular abnormalities associated with two of the leading causes of blindness, age-related macular degeneration (AMD) and proliferative diabetic retinopathy (PDR) [3]. Current techniques that visualize these abnormalities require an intravenous dye-based contrast such as fluorescein angiography (FA) or indocyanine green (ICG) angiography.

OCT angiography uses the motion of red blood cells against static tissue as intrinsic contrast. This approach eliminates the risk and reduces the time associated with dye injections [4, 5], making it more accessible for clinical use than FA or ICG. A novel 3D OCT angiography technique called split-spectrum amplitude-decorrelation angiography (SSADA) can detect motion-related amplitude-decorrelation on commercially available OCT machines. Using this algorithm, the contrast between static and non-static tissue enables visualization of blood flow, providing high resolution maps of microvascular networks in addition to the conventional structural OCT images [6, 7]. En face projection of the maximum decorrelation within anatomic layers (slabs) can produce angiograms analogous to traditional FA and ICG angiography [5, 8].

Applying SSADA-based OCT angiography, we and others have quantified vessel density and flow index [9–12] and choroidal neovascularization (CNV) area [4, 13], and have detected retinal neovascularization (RNV) [12, 14] and macular ischemia. Accurate segmentation is necessary for interpretation and quantification of 3D angiograms. However, in the diseased eye, pathologies such as drusen, cystoid macular edema, subretinal fluid, or pigment epithelial detachment distort the normal tissue boundaries. Such distortion increases the difficulty of automated slab boundary segmentation. Although researchers have been working on improving automated segmentation in the pathological retina [15–17], there is still no fully automated method that guarantees success in all clinical cases, and hence manual segmentation or correction is often required. Previously reported manual correction of segmentation is tedious and inefficient [18, 19]. In this manuscript, we provide an overview of our advanced image processing for SSADA-based OCT angiography, introduce an automated layer segmentation algorithm with expert correction that can efficiently handle all clinical cases, and show results of our technique applied to OCT angiograms of diseased eyes.

2. Methods

2.1 OCT angiography data acquisition

The OCT angiography data was acquired using a commercial spectral domain OCT instrument (RTVue-XR; Optovue). It has a center wavelength of 840 nm with a full-width half-maximum bandwidth of 45 nm and an axial scan rate of 70 kHz. Volumetric macular scans consisted of a 3 × 3 mm or 6 × 6 mm area with a 1.6 mm depth (304 × 304 × 512 pixels). In the fast transverse scanning direction, 304 A-scans were sampled. Two repeated B-scans were captured at a fixed position before proceeding to the next location. A total of 304 locations along a 3 mm or 6 mm distance in the slow transverse direction were sampled to form a 3D data cube. The SSADA algorithm split the spectrum into 11 sub-spectra and detected blood flow by calculating the signal amplitude-decorrelation between two consecutive B-scans of the same location. All 608 B-scans in each data cube were acquired in 2.9 seconds. Two volumetric raster scans, including one x-fast scan and one y-fast scan, were obtained and registered [20].
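For readers who wish to experiment with the flow computation, the following sketch shows the core amplitude-decorrelation step between two repeated B-scans. It assumes the split-spectrum amplitudes have already been computed and follows the general form of the decorrelation described in Ref. [6]; the array shapes and the synthetic example are illustrative assumptions, not the instrument's implementation.

```python
import numpy as np

def ssada_decorrelation(amp1, amp2):
    """Amplitude decorrelation between two repeated B-scans.

    amp1, amp2 : arrays of shape (n_subspectra, n_z, n_x) holding the OCT
                 signal amplitude of each split-spectrum band at the same
                 scan position.
    Returns an (n_z, n_x) decorrelation image in [0, 1]; high values indicate
    motion (blood flow), low values indicate static tissue.
    """
    eps = 1e-12  # avoid division by zero in signal-free regions
    num = amp1 * amp2
    den = 0.5 * (amp1 ** 2 + amp2 ** 2) + eps
    # Average the normalized amplitude product over the sub-spectra,
    # then convert correlation to decorrelation.
    return 1.0 - np.mean(num / den, axis=0)

# Synthetic example: 11 sub-spectra, 512 x 304 pixel B-scans.
rng = np.random.default_rng(0)
a1 = rng.rayleigh(1.0, size=(11, 512, 304))
a2 = np.abs(a1 + 0.3 * rng.standard_normal((11, 512, 304)))  # decorrelated speckle
flow = ssada_decorrelation(a1, a2)
print(flow.shape, float(flow.min()), float(flow.max()))
```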

2.2 Overview of advanced image processing

Segmentation of OCT angiography 3D flow data allows visualization and analysis of isolated vascular beds. OCT structural images provide reference boundaries for the segmentation of 3D OCT angiograms. Useful reference boundaries (Fig. 1(A)) include, but are not limited to, the inner limiting membrane (ILM), outer boundary of the inner plexiform layer (IPL), inner nuclear layer (INL), outer boundary of the outer plexiform layer (OPL), outer nuclear layer (ONL), retinal pigment epithelium (RPE), and Bruch’s membrane (BM). Vascular layers or “slabs” are identified by two relevant tissue boundaries. For example, retinal circulation is between the boundaries Vitreous/ILM and OPL/ONL. B-scan images can be automatically segmented by a graph search technique [16, 21]. We used a conceptually simple directional graph search technique and simplified the complexity of the graph to reduce the computation time. When pathology severely disrupts normal tissue anatomy, manual correction is required. Manual correction of B-scan segmentation was propagated forward and backward across multiple B-scans, expediting image processing and reducing manpower cost. In evaluation, our approach shows good efficiency and accuracy in clinical cases.

Fig. 1 Overview of OCT angiography image processing of a healthy macula. (A) The 3D OCT data (3 × 3 × 0.9 mm), after motion correction with structural information overlaid on angiography data. OCT angiogram is computed using the SSADA algorithm. (B-I) After segmentation of the retinal layers, 3D slabs are compressed to 2D and presented as en face maximum projection angiograms. (B) The vitreous angiogram shows the absence of flow. (C) The superficial inner retinal angiogram shows healthy retinal circulation with a small foveal avascular zone. (D) The deep inner retina angiogram shows the deep retinal plexus which is a network of fine vessels. (E) Inner retinal angiogram. (F) The healthy outer retinal slab should be absent of flow, but shows flow projection artifacts from the inner retina. (G) The outer retinal angiogram after projection removal (F minus E). (H) The choriocapillaris angiogram. (I) Retinal thickness map segmented from vitreous/ILM to RPE/BM, the color bar range is 0 to 600 μm. (J) Composite structural and angiogram B-scan images generated after removal of shadowgraphic projection. (K) Composite C-scan images generated by the flattening of OCT structural and angiogram data volume using RPE/BM.

We created composite cross-sectional OCT images by combining color-coded angiogram B-scans (flow information) superimposed on gray-scale structural B-scans (Fig. 1(A)), presenting both blood flow and retinal structure together. This provided detailed information on the depth of the microvasculature network.

OCT angiograms are generated by summarizing the maximum decorrelation within a slab encompassed by relevant anatomic layers [6]. The 3D angiogram slabs are then compressed and presented as 2D en face images so they can be more easily interpreted in a manner similar to traditional angiography techniques. Using the segmentation of the vitreous/ILM, IPL/INL, OPL/ONL, and RPE/BM, five slabs can be visualized as shown in Figs. 1(B)-1(D), 1(G) and 1(H).
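The en face projection step can be summarized with the short sketch below; the flow volume and the boundary depth maps are hypothetical inputs standing in for the segmented SSADA data described above.

```python
import numpy as np

def enface_max_projection(flow, top, bottom):
    """Maximum-decorrelation projection of one slab.

    flow   : (n_z, n_x, n_y) decorrelation volume.
    top    : (n_x, n_y) depth indices of the upper slab boundary (e.g. vitreous/ILM).
    bottom : (n_x, n_y) depth indices of the lower slab boundary (e.g. OPL/ONL).
    Returns an (n_x, n_y) en face angiogram of the slab.
    """
    n_z = flow.shape[0]
    z = np.arange(n_z)[:, None, None]
    inside = (z >= top[None]) & (z <= bottom[None])  # voxels within the slab
    return np.where(inside, flow, 0.0).max(axis=0)

# Example: inner retinal slab between hypothetical ILM and OPL/ONL surfaces.
flow = np.random.default_rng(1).random((512, 304, 304))
ilm = np.full((304, 304), 100)
opl_onl = np.full((304, 304), 220)
inner_retina = enface_max_projection(flow, ilm, opl_onl)
print(inner_retina.shape)
```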

2.2.1 Advanced image processing: healthy eye

In a healthy eye, the vitreous is avascular, and there is no flow above the vitreous/ILM boundary. Therefore, the en face image appears black (Fig. 1(B)). The superficial inner retinal angiogram (between vitreous/ILM and IPL/INL) shows healthy retinal circulation with a small foveal avascular zone (Fig. 1(C)). The deep inner retinal angiogram (between IPL/INL and OPL/ONL) shows the deep retinal plexus, a network of fine vessels (Fig. 1(D)). The inner retinal angiogram (Fig. 1(E)) is a combination of the superficial and deep inner retinal slabs (Figs. 1(C) and 1(D)).

Blood flow from larger inner retinal vessels casts a fluctuating shadow, inducing signal variation in deeper layers. This variation is detected as decorrelation and results in a shadowgraphic flow projection artifact. Signal characteristics alone cannot distinguish this shadowgraphic flow projection from true deep-tissue blood flow, but it can be recognized by its vertical shadow in the cross-sectional OCT angiogram (white arrows in Fig. 1(A)). A comparison of Figs. 1(F) and 1(E) reveals projection and replication of the vascular patterns from superficial slabs in the deeper layers. This is particularly evident in the outer retinal slab, where the retinal pigment epithelium (RPE) is the dominant projection surface (Fig. 1(F)). Subtracting the angiogram of the inner retina from that of the outer retina can remove this artifact, producing an outer retinal angiogram devoid of flow, as would be expected in a healthy retina (Fig. 1(G)). Flow detected in the outer retinal angiogram after removal of projection artifact is pathologic [13, 22]. The choriocapillaris angiogram (RPE/BM to 15 µm below) shows nearly confluent flow (Fig. 1(H)). Figure 1(I) shows an en face thickness map of the retina, segmented from vitreous/ILM to RPE/BM.
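The subtraction itself is simple and can be sketched as below; clipping negative residuals to zero is an assumption added so the corrected outer retinal angiogram stays non-negative.

```python
import numpy as np

def remove_flow_projection(outer_enface, inner_enface):
    """Subtract inner retinal flow projected onto the outer retinal slab.

    Both inputs are en face maximum-projection angiograms of the same scan,
    corresponding to Figs. 1(F) and 1(E).  Negative residuals are clipped to
    zero (an assumption for this sketch), giving the corrected outer retinal
    angiogram of Fig. 1(G).
    """
    return np.clip(outer_enface - inner_enface, 0.0, None)
```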

After removal of flow projection, the outer retinal en face angiogram (Fig. 1(G)) is then used as the reference for removing shadowgraphic flow projection on cross-sectional images. This produces composite B-scan images with color-coded flow corresponding to various slabs, without vertical shadowgraphic artifacts in the outer retina (Fig. 1(J) compared to Fig. 1(A)). Similarly, we can generate composite C-scan images. Because of the curved nature of the retina, the volume data is flattened using RPE/BM to produce flat C-scan images (Fig. 1(K)).

2.3. Layer segmentation

2.3.1 Directional graph search

Graph search is a common technique for image segmentation [23–25]. We designed a directional graph search technique for retinal layer segmentation. Since retinal layers are primarily horizontal structures on B-scan structural images, we first defined an intensity gradient in depth along the A-line, with each pixel assigned a value G_{x,z}, where

G_{x,z} = I_{x,z} - I_{x,z-1}

and I_{x,z} is the intensity of the pixel and I_{x,z-1} is the intensity of the previous pixel within the A-line. From this, we established a gradient image by normalizing each G_{x,z} value with the function

C_{x,z}^{(1)} = \frac{G_{x,z} - \min(G)}{\max(G) - \min(G)}

where C^{(1)} is a normalized value between 0 and 1, and min(G) and max(G) are the minimum and maximum G, respectively, over the entire B-scan structural image containing W columns and H rows. An example of a gradient image is displayed in Fig. 2(B); light-to-dark intensity transitions take low C^{(1)} values, seen as the dark lines at the NFL/GCL, IPL/INL, OPL/ONL, and RPE/BM tissue boundaries.

Fig. 2 (A) Composite OCT B-scan images with color-coded angiography. Angiography data are overlaid onto the structure images to help graders better visualize the OCT angiography images. Angiography data in the inner retina (between Vitreous/ILM and OPL/ONL) is overlaid as purple, outer retina (between OPL/ONL and IS/OS) as yellow, and choroid (below RPE) as red. (B) Gradient image showing light-to-dark intensity transitions. (C) Inverse gradient image showing dark-to-light intensity transitions.

Because retinal tissue boundaries displayed on structural OCT B-scans (Fig. 2(A)) have two types of intensity transitions (i.e. light-to-dark and dark-to-light [21]), an inverse gradient image was also generated using the function

C_{x,z}^{(2)} = 1 - C_{x,z}^{(1)}

thereby assigning dark-to-light intensity transitions a low C^{(2)} value, demonstrated by the horizontal black lines in Fig. 2(C) at the vitreous/ILM, IS/OS, and INL/OPL boundaries.
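As a concrete illustration of these definitions, the short sketch below computes both cost images from a structural B-scan array; the small constant added to the denominator is an assumption to guard against a constant-intensity image.

```python
import numpy as np

def cost_images(bscan):
    """Normalized gradient cost images for one structural B-scan.

    bscan : (H, W) intensity image with depth (z) along axis 0 and the
            B-scan direction (x) along axis 1.
    Returns C1 (low at light-to-dark boundaries such as IPL/INL or OPL/ONL)
    and C2 = 1 - C1 (low at dark-to-light boundaries such as vitreous/ILM).
    """
    bscan = bscan.astype(float)
    g = np.zeros_like(bscan)
    g[1:, :] = bscan[1:, :] - bscan[:-1, :]           # G(x,z) = I(x,z) - I(x,z-1)
    c1 = (g - g.min()) / (g.max() - g.min() + 1e-12)  # normalized gradient C^(1)
    c2 = 1.0 - c1                                     # inverse gradient C^(2)
    return c1, c2
```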

Graph search segments an image by connecting C values with the lowest overall cost. Typically, a graph search algorithm considers all 8 surrounding neighbors when determining the next optimal connection (Fig. 3(A)). Our directional graph search algorithm considers only 5 directional neighbors, as illustrated by the 5 dashed lines of Fig. 3(B). Because retinal layers are nearly flat, it can be assumed that tissue boundaries extend continuously across structural OCT B-scans and are unlikely to reverse direction. Since we perform the graph search directionally, starting from the left and extending to the right, the neighbor of C_{x,z} that is likely to have the lowest connection cost will be on the right side. Therefore, we exclude from the search the left-side neighbors and the upward C_{x,z-1} and downward C_{x,z+1} neighbors. We then include the C_{x+1,z-2} and C_{x+1,z+2} positions to make the directional graph search sensitive to stark boundary changes. We assign a weight of 1 to C_{x+1,z-1}, C_{x+1,z}, and C_{x+1,z+1} and a weight of 1.4 to C_{x+1,z-2} and C_{x+1,z+2}, thus giving extra cost to curvy paths.

Fig. 3 (A) Common graph search. (B) Directional graph search. The solid line represents a completed move and the dashed lines represent possible moves. C is the normalized gradient or normalized inverse gradient. x is the B-scan direction, between 1 and W, while z is the A-scan direction, between 1 and H.

In order to automatically detect the start point of a retinal layer boundary, the directional graph search starts from a virtual start point located outside the graph, such that all adjacent neighbors are in the first column (Fig. 3(B)). The lowest cost of connecting C values then ends at the rightmost column. This directional graph search method reduces computation complexity since fewer neighbors are considered, and therefore improves segmentation efficiency.
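Because every allowed move advances exactly one column to the right, the minimum-cost boundary can be found with a simple dynamic program over columns. The sketch below illustrates this; the virtual start node is realized by initializing the first column directly, and the way the 1.4 weight enters each step cost (here it scales the gradient cost of the destination pixel) is an assumption made for this example.

```python
import numpy as np

# Allowed moves from column x to column x+1 (Fig. 3(B)):
# row offsets -2..+2, weight 1 for |dz| <= 1 and 1.4 for |dz| = 2.
OFFSETS = [(-2, 1.4), (-1, 1.0), (0, 1.0), (1, 1.0), (2, 1.4)]

def directional_graph_search(cost):
    """Minimum-cost boundary through a cost image (C^(1) or C^(2)).

    cost : (H, W) array.  Returns the boundary as one row index per column.
    """
    H, W = cost.shape
    acc = np.full((H, W), np.inf)   # accumulated cost of the best path to each node
    prev = np.zeros((H, W), dtype=int)
    acc[:, 0] = cost[:, 0]          # virtual start node connects to every row of column 0
    for x in range(1, W):
        for z in range(H):
            for dz, w in OFFSETS:
                zp = z - dz         # candidate predecessor row in column x-1
                if 0 <= zp < H:
                    c = acc[zp, x - 1] + w * cost[z, x]
                    if c < acc[z, x]:
                        acc[z, x] = c
                        prev[z, x] = zp
    # The path ends at the cheapest node in the rightmost column; backtrack.
    path = np.empty(W, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for x in range(W - 1, 0, -1):
        path[x - 1] = prev[path[x], x]
    return path
```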

Automated image segmentation of retinal layers using graph search is a common practice with image processing and has been described at length in the literature [16, 21, 26–28]. Similar to previous demonstrations [18, 21], our directional graph search detects seven boundaries of interest one by one on a B-scan image (Fig. 2(A)).

The processing time for segmenting the 7 boundaries on a 304 × 512 pixel image is 330 ms (Intel(R) Xeon(R) E3-1226 @ 3.30GHz, Matlab environment).

2.3.2 Propagated 2D automated segmentation

In the clinic, OCT images often contain pathological abnormalities such as cysts, exudates, drusen, and/or layer separation. These abnormalities are difficult to handle with conventional 2D and 3D segmentation algorithms [16, 28]. Figure 4(A1) shows an example of layer segmentation attracted to strong reflectors, in this case exudates. Our propagated 2D automated segmentation takes into consideration the segmentation result of the previous B-scan, assuming that boundaries do not change much between adjacent B-scans. Specifically, it first segments a B-scan frame with relatively few pathological structures, which is chosen by the user. To segment the remaining B-scans using directional graph search, we further confine each boundary to be within a range 15 µm above and below the same boundary in the previous B-scan frame. The segmentation is propagated to the rest of the volume frame by frame. Figure 4(A2) shows that the propagated automated segmentation remains accurate even in tissue disrupted by exudates.
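A minimal sketch of this frame-to-frame propagation is given below; it reuses the directional_graph_search function from the previous sketch and enforces the 15 µm band as a hard constraint by assigning infinite cost outside the band, which is one plausible realization of the confinement described above (the axial pixel size of 3.1 µm is taken from Section 3.2.2).

```python
import numpy as np

PIXEL_UM = 3.1    # axial pixel size (Section 3.2.2)
BAND_UM = 15.0    # allowed excursion above/below the previous frame's boundary

def propagate_segmentation(cost_volume, seed_frame, seed_boundary):
    """Propagate one boundary from a seed B-scan through the whole volume.

    cost_volume   : (n_frames, H, W) cost images, one per B-scan.
    seed_frame    : index of the user-chosen, mostly healthy B-scan.
    seed_boundary : (W,) boundary found on that frame by directional graph search.
    Returns a dict mapping frame index -> (W,) boundary.
    """
    band = int(round(BAND_UM / PIXEL_UM))
    n_frames, H, W = cost_volume.shape
    boundaries = {seed_frame: np.asarray(seed_boundary)}

    def step(frame, reference):
        # Forbid rows farther than `band` pixels from the previous frame's boundary.
        confined = cost_volume[frame].astype(float).copy()
        dist = np.abs(np.arange(H)[:, None] - reference[None, :])
        confined[dist > band] = np.inf
        return directional_graph_search(confined)  # defined in the earlier sketch

    for f in range(seed_frame + 1, n_frames):       # propagate forward
        boundaries[f] = step(f, boundaries[f - 1])
    for f in range(seed_frame - 1, -1, -1):         # propagate backward
        boundaries[f] = step(f, boundaries[f + 1])
    return boundaries
```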

Fig. 4 Comparison of performance on pathologic tissue using 2D automated segmentation (A1, B1) and propagated 2D automated segmentation (A2, B2). En face images B1 and B2 map the position of the OPL/ONL boundary in A1 and A2. Red arrows in A1 and A2 point to the segmentation differences. The color bar of B1 and B2 is the same as in Fig. 1(I).

The en face images in Figs. 4(B1) and 4(B2) map the distance between the segmented OPL/ONL position and the bottom of the image, with each horizontal line corresponding to a B-scan frame. The conventional 2D algorithm (Fig. 4(B1)) shows segmentation errors, while the propagated 2D algorithm generates an accurate, continuous map (Fig. 4(B2)). This map facilitates monitoring and identification of possible segmentation errors.

2.3.3 Propagated 2D automated segmentation with intelligent manual correction

When propagated 2D automated segmentation fails, expert manual correction is required. In the manual correction mode, the user pinpoints several landmark positions with red crosses (Fig. 5(B), 4 red crosses). An optimal path through these landmarks is automatically determined using directional graph search. After manual corrections are made on B-scans within a volume, the corrected boundary curve is propagated to adjacent frames. For example, in Fig. 5, only frame n was manually corrected, and the manual correction successfully propagated to frame n + 30, as shown in Fig. 5 (propagated correction), identified by the red arrow.
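One way to force the optimal path through the user-clicked landmarks is to run the directional graph search independently between consecutive landmarks and concatenate the pieces; the sketch below (again reusing directional_graph_search) illustrates this idea, with the anchoring of the frame edges to the rows of the outermost landmarks added as an assumption. The corrected boundary can then be propagated to adjacent frames with the propagation routine sketched earlier.

```python
import numpy as np

def segment_through_landmarks(cost, landmarks):
    """Directional graph search constrained to pass through clicked landmarks.

    cost      : (H, W) cost image of the B-scan being corrected.
    landmarks : list of (column, row) points (the red crosses in Fig. 5),
                assumed to lie on the true boundary.
    Returns a (W,) boundary passing through every landmark.
    """
    H, W = cost.shape
    landmarks = sorted(landmarks)
    boundary = np.zeros(W, dtype=int)
    # Anchor the left and right edges of the frame to the outermost landmark rows.
    pts = [(0, landmarks[0][1])] + list(landmarks) + [(W - 1, landmarks[-1][1])]
    for (x0, z0), (x1, z1) in zip(pts[:-1], pts[1:]):
        sub = cost[:, x0:x1 + 1].astype(float).copy()
        sub[:, 0] = np.inf;  sub[z0, 0] = 0.0    # pin the segment's start landmark
        sub[:, -1] = np.inf; sub[z1, -1] = 0.0   # pin the segment's end landmark
        boundary[x0:x1 + 1] = directional_graph_search(sub)  # earlier sketch
    return boundary
```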

Fig. 5 Illustration of 2D automated segmentation with and without intelligent manual correction. Manual correction (middle image, red crosses) was performed on frame n, and the correction propagated to frame n + 30. Red arrows identify the segmentation differences.

2.3.4 Semi-automatic segmentation

For cases where the retina is highly deformed and the automated segmentation completely fails, we devised a semi-automatic segmentation method. Similar to intelligent scissors [23], as the user moves the cursor along the boundary path, directional graph search is applied and the result is displayed locally in real time (Fig. 6(A)).

Fig. 6 (A) Interactive manual segmentation with intelligent scissors, showing live segmentation of the OPL/ONL boundary as the mouse is clicked at the red cross (setting the start point) and moved to the green cross. (B) En face depth map with segmentation performed every 20 frames. (C) En face depth map after interpolation of (B).

2.3.5 Interpolation mode

Automated/manual segmentation can also be applied at regular intervals (Fig. 6(B)), followed by interpolation across the entire volume. This greatly reduces the segmentation workload while maintaining reasonable accuracy. The frame interval for manual segmentation is determined according to the variation among B-scan frames, usually 10 to 20 for 3 × 3 mm scans and 5 to 10 for 6 × 6 mm scans. This provides a balance between segmentation accuracy and required manual segmentation workload.
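The interpolation itself can be sketched as below, assuming simple linear interpolation of each boundary along the slow-scan direction; the interpolation scheme actually used in our software may be more sophisticated.

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_boundary(segmented, n_frames):
    """Fill in a boundary surface from frames segmented at regular intervals.

    segmented : dict mapping frame index -> (W,) boundary (manual or automated),
                e.g. every 20th frame of a 3 x 3 mm scan.
    n_frames  : total number of B-scan frames in the volume.
    Returns an (n_frames, W) boundary surface, linearly interpolated between
    the key frames and held constant beyond the first/last key frame.
    """
    frames = np.array(sorted(segmented))
    stack = np.stack([segmented[f] for f in frames])  # (n_keyframes, W)
    f = interp1d(frames, stack, axis=0, kind="linear",
                 bounds_error=False, fill_value=(stack[0], stack[-1]))
    return f(np.arange(n_frames))
```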

2.3.6 Volume flattening

On 6 × 6 mm images, the automated segmentation using directional graph search may fail due to significant tissue curvature, as we will show in the results section (Fig. 9(A), inside the yellow box). A flattening procedure was utilized to solve this problem (Fig. 7). We first found the intensity-weighted center of mass of each A-scan, represented as blue dots in Figs. 7(B1) and 7(C1). We then fitted a polynomial plane to these centers of mass. A shift along the depth (z) direction was applied to transform the volume so that the curved plane became a flat plane (Fig. 7(B2)), flattening the curved volume (compare Figs. 7(A1) and 7(A2)). By using the center of mass instead of an anatomic tissue plane, the volume flattening procedure is not subject to boundary distortion caused by pathology. Note that this volume flattening procedure is only used to aid segmentation. For visualization, the volume is flattened using the RPE/BM boundary after segmentation.
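A sketch of the flattening procedure follows; the second-order polynomial surface and the circular depth shift (np.roll) are simplifying assumptions made to keep the example short.

```python
import numpy as np

def flatten_volume(volume, poly_order=2):
    """Flatten an OCT volume using the per-A-scan center of mass.

    volume : (n_z, n_x, n_y) structural intensity volume.
    A polynomial surface is fitted to the intensity-weighted center of mass of
    each A-scan, and every A-scan is shifted in depth so that the fitted
    surface becomes flat.  Returns the flattened volume and the shift map,
    which can be used later to restore the original curvature.
    """
    n_z, n_x, n_y = volume.shape
    z = np.arange(n_z)[:, None, None]
    com = (volume * z).sum(axis=0) / (volume.sum(axis=0) + 1e-12)  # (n_x, n_y)

    # Least-squares fit of a low-order polynomial surface com ~ f(x, y).
    xx, yy = np.meshgrid(np.arange(n_x), np.arange(n_y), indexing="ij")
    terms = [xx ** i * yy ** j for i in range(poly_order + 1)
             for j in range(poly_order + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
    coef, *_ = np.linalg.lstsq(A, com.ravel(), rcond=None)
    fitted = (A @ coef).reshape(n_x, n_y)

    shifts = np.round(fitted - fitted.mean()).astype(int)  # per-A-scan depth shift
    flat = np.empty_like(volume)
    for i in range(n_x):
        for j in range(n_y):
            flat[:, i, j] = np.roll(volume[:, i, j], -shifts[i, j])
    return flat, shifts
```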

Fig. 7 Rendering of the 6 × 6 × 1.6 mm OCT retinal volume data, (A1) original, and (A2) flattened. In (B1) (B2) (C1) (C2), each blue dot represents the A-scan center of mass. The colored curved plane in (B1) shows the fitted center of mass plane, which can be thought of as an estimate of the retinal shape. In (B2), the curved plane is flattened. (C1) and (C2) are B-scan frames with the A-scan center of mass overlaid.

2.3.7 Standard procedure

As a first step, volume flattening is performed only on 6 × 6 mm scans (2.3.6). The user then chooses a frame with few pathological structures, runs 2D automated segmentation (2.3.1), and corrects any segmentation errors by providing several key points (2.3.3) or using semi-automatic segmentation (2.3.4). Propagated 2D automated segmentation (2.3.2) then segments the rest of the frames. If the user observes a segmentation error, he or she performs manual correction and then reruns the propagation. In rare cases when propagation fails to correct errors, the user can manually segment selected frames and perform interpolation (2.3.5). It should be noted that the user often only needs to use interpolation for one or two of the boundaries, and automatic segmentation is able to work out the rest. Segmentation is always performed under the supervision of the user to minimize errors.

3. Results and discussion

3.1 Study population

We systematically tested our segmentation technique in eyes with DR and AMD. In the DR study, 5 normal eyes, 10 eyes with non-proliferative DR (NPDR), and 10 eyes with proliferative DR (PDR) were studied. The layers of interest for segmentation included the vitreous/ILM, IPL/INL, OPL/ONL, and RPE/BM boundaries. In the AMD study, 4 normal, 4 dry AMD, and 4 wet AMD eyes were examined. The layers of interest for segmentation included the vitreous/ILM, OPL/ONL, IS/OS, and RPE/BM boundaries. Table 1 summarizes the average number of layers corrected and the processing time.

Table 1. Average time for processing different clinical cases

3.2 Layer segmentation performance

3.2.1 Automated segmentation of pathology

The 2D automated algorithm correctly segmented the tissue boundaries despite disruption caused by RNV (Fig. 8(A)), small exudates (Fig. 8(B)), small intraretinal cysts (Fig. 8(C)), or drusen with strong boundary intensity (Fig. 8(D)). However, the algorithm failed in some severe pathological cases. During segmentation of epiretinal membrane (Fig. 8(E)), the NFL/GCL boundary was assigned to the epiretinal membrane, causing an upshift in the search region and therefore incorrect segmentation of the IPL/INL, INL/OPL, and OPL/ONL. Large exudates (Fig. 8(F)) distorted the OPL/ONL boundary and caused incorrect segmentation of the NFL/GCL, IPL/INL, and INL/OPL. Large exudates were also shown to cast a shadow artifact extending past the IS/OS boundary. Subretinal fluid and large intraretinal cysts (Fig. 8(G)) disrupted the IS/OS and OPL/ONL boundaries, and as a result, the NFL/GCL, IPL/INL, and INL/OPL were also segmented incorrectly. Drusen with weak boundary intensity (Fig. 8(H)) caused the segmentation of the IS/OS and RPE to not accurately follow the more elevated drusen, and as a consequence, the NFL/GCL, IPL/INL, and INL/OPL were also segmented incorrectly. In these cases, propagated 2D automated segmentation with intelligent manual correction was applied. In rare cases, interpolation mode was used (e.g. IPL/INL and OPL/ONL in Fig. 11, PDR with edema).

Fig. 8 Pathological cases where automated segmentation was accurate (A-D), and severe pathology cases where the automated segmentation contained errors (E-H).

3.2.2 Segmentation processing time and accuracy

During segmentation, we recorded the number of manual corrections made on each type of boundary for both DR and AMD. The average number of manual corrections is given in Table 1. The automated segmentation of the vitreous/ILM was highly accurate in DR and AMD cases. In severe DR cases, edema sometimes caused tissue boundaries to be located outside of the search regions and therefore required manual correction of both the IPL/INL and OPL/ONL boundaries. Similarly, AMD with large drusen caused segmentation failure of the OPL/ONL and IS/OS. The RPE/BM needed to be manually corrected in AMD cases where the boundary became unclear. In general, increased severity of either DR or AMD required a longer average processing time. Compared to a purely manual segmentation approach, which typically takes 3–4 hours to complete [19], our intelligent manual correction method efficiently segmented tissue boundaries in eyes with DR and AMD, requiring no more than 15 minutes even for the most difficult case.

To evaluate segmentation accuracy, we compared the results of manual segmentation (using intelligent scissors) with those from our propagated automated segmentation with manual correction. For each case, 2 subjects were randomly chosen and 20 B-scans were randomly selected for evaluation. Three graders independently performed manual segmentation of each tissue boundary with the help of intelligent scissors. The manually segmented boundaries were averaged among the three graders and taken as the gold standard. The absolute errors of our propagated automated segmentation with manual correction were determined (mean ± standard deviation, in pixels). The results are given in Table 2. In more than 62% of images, the segmentation error is less than 1 pixel (3.1 µm).

Table 2. Segmentation accuracy of different clinical cases

3.2.3. Volume flattening

The aforementioned volume flattening procedure was able to solve stark curvature segmentation errors. The yellow box in Fig. 9(A) demonstrates segmentation failure at multiple tissue boundaries in an area of stark curvature. By flattening the volumetric data, our automated segmentation algorithm was able to accurately segment all seven tissue boundaries inside the yellow box as shown in Fig. 9(B). When the image was restored to its original curvature, the corrected segmentation remained (Fig. 9(C)). This automated volume flattening allows for efficient image processing of large area OCT scans (e.g. 6 × 6 mm).

Fig. 9 (A) Segmentation failure in a 6 × 6 mm image with stark curvature. Note the segmentation error inside the yellow box; a magnified view is provided on the right. (B) Corrected segmentation performed on the flattened image. (C) Recovered image and segmentation from (B).

3.3 Advanced image processing: clinical applications

3.3.1 Age related macular degeneration (3 × 3 mm scans)

CNV, which is the pathologic feature of wet AMD, occurs when abnormal vessels grow from the choriocapillaris and penetrate Bruch’s membrane into the outer retinal space [13]. Detection of CNV depends on the segmentation of three reference planes (Vitreous/ILM, OPL/ONL, and RPE/BM) used to generate three slabs: inner retina, outer retina, and choriocapillaris.

Figure 10 shows representative images of dry and wet AMD cases. Structural information from the OCT angiography scans was used to create retinal thickness (Figs. 10(B1) and 10(B2)) and RPE-drusen complex (RPEDC) maps (Figs. 10(D1) and 10(D2), distance between IS/OS and RPE/BM). The retinal thickness map is clinically useful in determining atrophy and exudation. The RPEDC map, representing the size and the volume of drusen, has been correlated with risk of clinical progression [29].

Fig. 10 Representative images of AMD cases. The scan size is 3 × 3 mm. (A) Composite B-scans. (C) Composite en face angiograms of the inner retina (purple) and outer retina (yellow); in C2, CNV can be seen as yellow vessels, with a CNV area of 0.88 mm2. (B) Retinal thickness maps. (D) RPEDC thickness maps (distance between IS/OS and RPE/BM).

Because of shadowgraphic flow projection, true CNV is difficult to identify in both the composite B-scan and the en face angiogram. We used a previously published automated CNV detection algorithm to remove projection artifacts from the outer retinal slab [22]. A composite en face angiogram displaying the two retinal slabs in different colors shows the CNV in relation to the retinal angiogram (Figs. 10(C2) and 12(A1)). An overlay of this composite angiogram on the cross-sectional angiogram shows the depth of the CNV in relation to retinal structures (Fig. 10(A2)). The size of the CNV can be quantified by calculating the area of the vessels in the outer retinal slab.
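As an illustration of the area measurement, the sketch below counts suprathreshold pixels in the projection-corrected outer retinal angiogram and converts the count to mm2; the fixed decorrelation threshold is an assumption for this example, whereas the published algorithm [22] uses a more elaborate detection scheme.

```python
import numpy as np

def cnv_area_mm2(outer_retina_angio, scan_width_mm=3.0, threshold=0.1):
    """Estimate CNV area from a projection-corrected outer retinal angiogram.

    outer_retina_angio : (n_x, n_y) en face angiogram after projection removal.
    scan_width_mm      : lateral extent of the scan (3 mm here).
    threshold          : decorrelation value above which a pixel is counted as
                         flow; this fixed value is an assumption for the sketch.
    """
    n_x, n_y = outer_retina_angio.shape
    pixel_area = (scan_width_mm / n_x) * (scan_width_mm / n_y)  # mm^2 per pixel
    return np.count_nonzero(outer_retina_angio > threshold) * pixel_area
```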

3.3.2 Diabetic retinopathy (3 × 3 mm scans)

RNV, or growth of new vessels above the ILM, is the hallmark of proliferative diabetic retinopathy (PDR). The presence of RNV is associated with high risk of vision loss and is an indication for treatment with panretinal photocoagulation, which reduces the risk of vision loss [30].

Segmenting along the vitreous/ILM border reveals the RNV in the vitreous slab and distinguishes it from intraretinal microvascular abnormalities (IRMA), which can be difficult to differentiate from early RNV clinically (Fig. 11, case 3, PDR without edema, and Fig. 12(A2)). By quantifying the RNV area, one can assess the extent and activity of PDR.

Fig. 11 Representative results of DR cases. The scan size is 3 × 3 mm. (Row A) Edema, cysts, exudates, RNV, and blood flow in different layers can be visualized on the composite B-scan images. (Row B) The composite en face angiogram of the superficial inner retina and vitreous, where the RNV can be easily seen as pink vessels. The yellow line in row B marks the position of the B-scan slices in row A. (Row C) The angiogram of the deep inner retina. The vascular network is different from the superficial inner retina, although there are projection artifacts from the superficial inner retina. (Row D) The angiogram of the inner retina with nonperfusion areas marked in light blue. The nonperfusion areas are 0.72 mm2, 0.52 mm2, 0.60 mm2, and 0.72 mm2, respectively. (Row E) The retinal thickness, i.e. the distance from vitreous/ILM to RPE/BM. The color map is the same as in Fig. 1(I).

Fig. 12 A1, B1, results of 6 × 6 mm scan for the wet AMD case in Fig. 10. A2, B2, results of 6 × 6 mm scan for the PDR without edema case in Fig. 11. The blue square marks the 3 × 3 mm range corresponding to Fig. 10 and Fig. 11.

Figure 11 shows representative images of DR. The first row shows color-coded B-scans without flow projection artifacts and the boundaries for en face projection. Presenting the structural and flow information simultaneously clarifies the anatomic relationship between vessels and tissue planes. En face composite angiograms of the superficial and deep plexus (second and third rows, respectively) disclose vascular abnormalities including RNV, IRMA, thickening/narrowing of vessels, and capillary dropout as with typical dye-based angiography.

Capillary nonperfusion is a major feature of DR that is associated with vision loss and progression of disease [31, 32]. Using an automated algorithm, we identified and quantified capillary nonperfusion [4, 13] and created a nonperfusion map (Fig. 11(D)) showing blue areas with flow signal lower than 1.2 standard deviations above the mean decorrelation signal in the foveal avascular zone. Assessing the distance from vitreous/ILM to RPE/BM across the volume scan created the retinal thickness map (Fig. 11(E)). This allows the clinician to assess the central macula for edema, atrophy, and distortion of contour.
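The nonperfusion thresholding can be sketched as follows; the foveal avascular zone mask is assumed to be supplied separately (e.g. from a manual or automated outline), and the 1.2 standard-deviation criterion follows the description above.

```python
import numpy as np

def nonperfusion_mask(inner_retina_angio, faz_mask, k=1.2):
    """Flag capillary nonperfusion on the inner retinal en face angiogram.

    inner_retina_angio : (n_x, n_y) en face angiogram of the inner retina.
    faz_mask           : boolean mask of the foveal avascular zone, used as the
                         reference region for the background decorrelation level.
    k                  : number of standard deviations above the FAZ mean used
                         as the flow threshold (1.2 in the text).
    Returns a boolean map of pixels whose flow signal falls below the threshold.
    """
    ref = inner_retina_angio[faz_mask]
    threshold = ref.mean() + k * ref.std()
    return inner_retina_angio < threshold
```

The nonperfusion area then follows from the pixel count, exactly as in the CNV area sketch above.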

3.3.3 Clinical evaluation of 6 × 6 mm scans

The pathology in AMD and DR can extend beyond the central macular area. While OCT angiography cannot match the field of view of the current dye-based widefield techniques [33, 34], 6 × 6 mm OCT angiography scans cover a wider area and can reveal pathology not shown in 3 × 3 mm scans. Figures 12(A1) and 12(A2) show examples of 6 × 6 mm scans of the wet AMD case in Fig. 10 and PDR without edema case seen in Fig. 11, respectively. Although these scans are of lower resolution, 6 × 6 mm scans captured areas of capillary nonperfusion not present in the 3 × 3 mm scan area (black areas outside of the blue square).

4. Conclusion

We have described in detail advanced image processing for OCT angiography quantification and visualization. Our proposed segmentation method shows good accuracy and efficiency in clinical applications. In the current phase of development, segmentation still requires manual correction in a minority of cases, but its frequency and associated workload have been greatly reduced by techniques such as semi-automatic segmentation, propagated manual corrections, and interpolation, compared to previous reports [18, 19]. We also showed innovative ways of visualizing OCT angiography data, including composite B-scan images and composite en face angiograms. Integration of these methods into commercial OCT angiography instruments can potentially improve the utility and diagnostic accuracy of OCT angiography.

Acknowledgments

This work was supported by NIH grants DP3 DK104397, R01 EY024544, R01 EY023285, P30-EY010572, T32 EY23211; CTSA grant UL1TR000128; and an unrestricted grant from Research to Prevent Blindness. Financial interests: Yali Jia and David Huang have a significant financial interest in Optovue. David Huang also has a financial interest in Carl Zeiss Meditec. These potential conflicts of interest have been reviewed and managed by Oregon Health & Science University.

References and links

1. D. Huang, Y. Jia, and S. S. Gao, “Principles of Optical Coherence Tomography Angiography,” in OCT Angiography Atlas, H. D. Lumbros, B. Rosenfield, P. Chen, C. Rispoli, and M. Romano, eds. (Jaypee Brothers Medical Publishers, New Delhi, 2015).

2. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

3. N. Congdon, B. O’Colmain, C. C. Klaver, R. Klein, B. Muñoz, D. S. Friedman, J. Kempen, H. R. Taylor, P. Mitchell, and Eye Diseases Prevalence Research Group, “Causes and prevalence of visual impairment among adults in the United States,” Arch. Ophthalmol. 122(4), 477–485 (2004). [CrossRef]   [PubMed]  

4. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. U.S.A. 112(18), E2395–E2402 (2015). [CrossRef]   [PubMed]  

5. M. P. López-Sáez, E. Ordoqui, P. Tornero, A. Baeza, T. Sainza, J. M. Zubeldia, and M. L. Baeza, “Fluorescein-induced allergic reaction,” Ann. Allergy Asthma Immunol. 81(5), 428–430 (1998). [CrossRef]   [PubMed]  

6. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]   [PubMed]  

7. S. S. Gao, G. Liu, D. Huang, and Y. Jia, “Optimization of the split-spectrum amplitude-decorrelation angiography algorithm on a spectral optical coherence tomography system,” Opt. Lett. 40(10), 2305–2308 (2015). [CrossRef]   [PubMed]  

8. R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three dimensional optical angiography,” Opt. Express 15(7), 4083–4097 (2007). [CrossRef]   [PubMed]  

9. L. Liu, Y. Jia, H. L. Takusagawa, A. D. Pechauer, B. Edmunds, L. Lombardi, E. Davis, J. C. Morrison, and D. Huang, “Optical coherence tomography angiography of the peripapillary retina in glaucoma,” JAMA Ophthalmol. 133(9), 1045–1052 (2015). [CrossRef]   [PubMed]  

10. Y. Jia, E. Wei, X. Wang, X. Zhang, J. C. Morrison, M. Parikh, L. H. Lombardi, D. M. Gattey, R. L. Armour, B. Edmunds, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Optical Coherence Tomography Angiography of Optic Disc Perfusion in Glaucoma,” Ophthalmology 121(7), 1322–1332 (2014). [CrossRef]   [PubMed]  

11. A. D. Pechauer, Y. Jia, L. Liu, S. S. Gao, C. Jiang, and D. Huang, “Optical Coherence Tomography Angiography of Peripapillary Retinal Blood Flow Response to Hyperoxia,” Invest. Ophthalmol. Vis. Sci. 56(5), 3287–3291 (2015). [CrossRef]   [PubMed]  

12. A. Ishibazawa, T. Nagaoka, A. Takahashi, T. Omae, T. Tani, K. Sogawa, H. Yokota, and A. Yoshida, “Optical Coherence Tomography Angiography in Diabetic Retinopathy: A Prospective Pilot Study,” Am. J. Ophthalmol. 160(1), 35–44 (2015). [CrossRef]   [PubMed]  

13. Y. Jia, S. T. Bailey, D. J. Wilson, O. Tan, M. L. Klein, C. J. Flaxel, B. Potsaid, J. J. Liu, C. D. Lu, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration,” Ophthalmology 121(7), 1435–1444 (2014). [CrossRef]   [PubMed]  

14. T. S. Hwang, Y. Jia, S. S. Gao, S. T. Bailey, A. K. Lauer, C. J. Flaxel, D. J. Wilson, and D. Huang, “Optical Coherence Tomography Angiography Features of Diabetic Retinopathy,” Retina 35(11), 2371–2376 (2015). [CrossRef]   [PubMed]  

15. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated Automatic Segmentation of AMD Pathology Including Drusen and Geographic Atrophy in SD-OCT Images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). [CrossRef]   [PubMed]  

16. P. P. Srinivasan, S. J. Heflin, J. A. Izatt, V. Y. Arshavsky, and S. Farsiu, “Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology,” Biomed. Opt. Express 5(2), 348–365 (2014). [CrossRef]   [PubMed]  

17. S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express 6(4), 1172–1194 (2015). [CrossRef]   [PubMed]  

18. P. Teng, “Caserel - An Open Source Software for Computer-aided Segmentation of Retinal Layers in Optical Coherence Tomography Images,” (2013).

19. X. Yin, J. R. Chao, and R. K. Wang, “User-guided segmentation for volumetric retinal optical coherence tomography images,” J. Biomed. Opt. 19(8), 086020 (2014). [CrossRef]   [PubMed]  

20. M. F. Kraus, B. Potsaid, M. A. Mayer, R. Bock, B. Baumann, J. J. Liu, J. Hornegger, and J. G. Fujimoto, “Motion correction in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns,” Biomed. Opt. Express 3(6), 1182–1199 (2012). [CrossRef]   [PubMed]  

21. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

22. L. Liu, S. S. Gao, S. T. Bailey, D. Huang, D. Li, and Y. Jia, “Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography,” Biomed. Opt. Express 6(9), 3564–3576 (2015). [CrossRef]   [PubMed]  

23. E. N. Mortensen and W. A. Barrett, “Intelligent scissors for image composition,” in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, (ACM, 1995), pp. 191–198.

24. D. Pope, D. Parker, D. Gustafson, and P. Clayton, “Dynamic search algorithms in left ventricular border recognition and analysis of coronary arteries,” in Proceedings of Computers in Cardiology (IEEE, 1984), pp. 71–75.

25. X. Liu, D. Z. Chen, M. H. Tawhai, X. Wu, E. A. Hoffman, and M. Sonka, “Optimal Graph Search Based Segmentation of Airway Tree Double Surfaces Across Bifurcations,” IEEE Trans. Med. Imaging 32(3), 493–510 (2013). [CrossRef]   [PubMed]  

26. M. B. Merickel, Jr., M. D. Abràmoff, M. Sonka, and X. Wu, “Segmentation of the optic nerve head combining pixel classification and graph search,” in Medical Imaging (International Society for Optics and Photonics, 2007), p. 651215.

27. M. K. Garvin, M. D. Abràmoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef]   [PubMed]  

28. X. Chen, M. Niemeijer, L. Zhang, K. Lee, M. D. Abràmoff, and M. Sonka, “Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut,” IEEE Trans. Med. Imaging 31(8), 1521–1531 (2012). [CrossRef]   [PubMed]  

29. S. Farsiu, S. J. Chiu, R. V. O’Connell, F. A. Folgar, E. Yuan, J. A. Izatt, C. A. Toth, and Age-Related Eye Disease Study 2 Ancillary Spectral Domain Optical Coherence Tomography Study Group, “Quantitative Classification of Eyes with and without Intermediate Age-Related Macular Degeneration Using Optical Coherence Tomography,” Ophthalmology 121(1), 162–172 (2014). [CrossRef]   [PubMed]  

30. The Diabetic Retinopathy Study Research Group, “Photocoagulation treatment of proliferative diabetic retinopathy. Clinical application of Diabetic Retinopathy Study (DRS) findings, DRS Report Number 8,” Ophthalmology 88(7), 583–600 (1981). [PubMed]  

31. Early Treatment Diabetic Retinopathy Study Research Group, “Early Treatment Diabetic Retinopathy Study design and baseline patient characteristics. ETDRS report number 7,” Ophthalmology 98(5 Suppl), 741–756 (1991).

32. M. S. Ip, A. Domalpally, J. K. Sun, and J. S. Ehrlich, “Long-term effects of therapy with ranibizumab on diabetic retinopathy severity and baseline risk factors for worsening retinopathy,” Ophthalmology 122(2), 367–374 (2015). [CrossRef]   [PubMed]  

33. S. Kiss and T. L. Berenberg, “Ultra Widefield Fundus Imaging for Diabetic Retinopathy,” Curr. Diab. Rep. 14(8), 514 (2014). [CrossRef]   [PubMed]  

34. P. S. Silva, J. D. Cavallerano, N. M. N. Haddad, H. Kwak, K. H. Dyer, A. F. Omar, H. Shikari, L. M. Aiello, J. K. Sun, and L. P. Aiello, “Peripheral lesions identified on ultrawide field imaging predict increased risk of diabetic retinopathy progression over 4 years,” Ophthalmology 122(5), 949–956 (2015). [CrossRef]   [PubMed]  
