Automatic retinal nerve fiber bundle tracing based on large field of view polarization sensitive OCT data

Abstract

A technique to accurately estimate trajectories of retinal nerve fiber bundles (RNFB) in a large field of view (FOV) image covering 45° is described. The method utilizes stitched projections of polarization-sensitive optical coherence tomography (PS-OCT) data, as well as a mathematical model of average RNFB trajectories as a prior. The fully automatic process was applied to data recorded in healthy subjects and glaucoma patients, and the automatically detected individual RNFB trajectories are compared to manual traces.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Glaucoma is a chronic optic neuropathy that severely damages the optic nerve head (ONH), causing a loss of the retinal nerve fiber layer (RNFL) and retinal ganglion cell (RGC) layer, which consequently leads to visual field (VF) impairment [1–4]. The RNFL is composed of the RGC axons converging on the ONH, where they leave the eye to travel towards the brain. The detection of RNFL damage by comparing its thickness profile with normative values is one of the mainstays of glaucoma diagnosis [5]. The measurement of RNFL thickness around the ONH is performed using imaging techniques such as optical coherence tomography (OCT) [6–11]. However, RNFL thickness varies considerably among the healthy population, depending, among other factors, on age, sex, ethnicity and refractive error, as well as on the location of the major retinal vessels [12–19]. This interindividual variation of RNFL thickness is a major obstacle for early and accurate glaucoma diagnosis.

When measured around the ONH, the detection of sectorial RNFL damage provides information on RGC loss in a retinal sector along the trajectory of the affected retinal nerve fiber bundle (RNFB). There may be substantial RGC loss before a VF defect is detected [3,20]. Most knowledge about the trajectories of the RNFBs is derived from manual tracing of the nerve fibers in fundus photographs, which poses several problems. First, manual measurement is too time-consuming for routine diagnostics; it has thus mainly been used for generating mathematical models of average RNFB trajectories. Second, the reproducibility of manual RNFB tracing is limited, and manual tracing is not suitable for generating maps with high spatial resolution, as this severely increases the overall grading time.

Polarization sensitive OCT (PS-OCT), an extension of OCT, has previously been reported for the investigation of RNFB trajectories [21]. PS-OCT retrieves additional contrast channels over a 3D volume [22]. The polarization state of the light may be affected by different light-tissue interactions, such as depolarization and birefringence. In the retina, fibrous tissues, such as the RNFL and Henle's fiber layer, are known to be birefringent [23]. The axons of the RGCs contain, among other intracellular structures, microtubules (MTs). These MTs, oriented in parallel within the nerve fibers, are the main contributors to the form birefringence of the RNFL [24,25]. In glaucoma, the RNFL birefringence may vary as a consequence of an alteration of the MT organization or a reduction of their number, as investigated in a primate model of glaucoma [26].

Assessing RNFB birefringence is therefore of great interest for early disease diagnosis and monitoring. In order to evaluate thickness, retardation, and birefringence along individual RNFBs, which may allow for a more accurate localization of glaucoma damage as well as a more direct comparison between structure and function of the tissue, their orientation and trajectory need to be retrieved.

Early detection of glaucoma is similarly impeded by the interindividual variability of RNFB trajectories and their measurement. Thus, any method that reduces the variability of RNFB tracing will improve diagnostic performance.

In this work we propose a method for fully automatic reconstruction of individual retinal nerve fiber bundle traces from PS-OCT based RNFL measurements in healthy subjects and glaucoma patients, quantify the reproducibility of these traces, and compare them to manual tracing.

2. Methods

2.1 Instrumentation set-up

The imaging system used was a custom-built prototype PS-OCT operating at 860 nm with an A-scan rate of 70 kHz and a FOV of 28° $\times$ 21° (1024 A-scans $\times$ 250 B-scans). An integrated line scanning laser ophthalmoscope (LSLO), operating at 790 nm with a refresh rate of 60 Hz, is used for tracking purposes [27,28]. The axial and lateral resolutions of the system are 4 µm (in tissue) and 17 µm ($\frac {1}{e^2}$ intensity full width), respectively [29]. The lateral pixel spacing (assuming that 1° of scanning angle corresponds to 300 µm on the retina) is 8.2 µm (x) and 25.2 µm (y); the axial pixel spacing is 1.49 µm in tissue. The system is controlled via custom-built LabView software that allows arbitrary positioning and movement of the OCT scanning beam along the X and Y axes [30]. Additionally, large FOV fundus photos (45° $\times$ 45°) were acquired using a separate Nonmyd WX-3D fundus camera (Kowa Company, Japan).

2.2 Data preparation

Volumetric PS-OCT raster scans, each covering a FOV of 28° $\times$ 21°, are recorded at 7 different retinal areas, with 3 consecutive acquisitions per area to increase the signal to noise ratio (SNR). These volumes are preprocessed, flattened and placed in the large FOV area defined by the fundus photograph using the methods presented in [28]. The next step is a retinal layer segmentation, which is needed for subsequent steps. In particular we are interested in the inner limiting membrane (ILM), the RNFL/ganglion cell layer (GCL) boundary, as well as the outer plexiform layer (OPL) / outer nuclear layer (ONL) boundary. Detecting retinal layers can be cast as an optimization problem, which we tackle using a graph-based shortest path solution on a per B-scan basis, similar to the implementations described in [28,31,32].
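To illustrate the shortest-path idea, a minimal dynamic-programming sketch is given below. The cost map, the jump constraint and the function name are our own assumptions for illustration, not the implementation of [28,31,32].

```python
import numpy as np

def trace_boundary(cost, max_jump=2):
    """Per-B-scan shortest path through a (depth x width) cost map.

    Low cost marks a likely boundary (e.g. an inverted vertical intensity
    gradient). The path runs left to right, moving at most `max_jump`
    pixels in depth per A-scan; dynamic programming finds the globally
    cheapest path, a simple stand-in for the cited graph-based methods."""
    depth, width = cost.shape
    acc = np.full((depth, width), np.inf)   # accumulated path cost
    prev = np.zeros((depth, width), dtype=int)
    acc[:, 0] = cost[:, 0]
    for x in range(1, width):
        for z in range(depth):
            lo, hi = max(0, z - max_jump), min(depth, z + max_jump + 1)
            best = lo + int(np.argmin(acc[lo:hi, x - 1]))
            acc[z, x] = cost[z, x] + acc[best, x - 1]
            prev[z, x] = best
    path = [int(np.argmin(acc[:, -1]))]     # cheapest end node
    for x in range(width - 1, 0, -1):       # backtrack to the first A-scan
        path.append(prev[path[-1], x])
    return np.array(path[::-1])             # boundary z-position per A-scan
```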

2.3 Subject selection

Data from 123 healthy subjects and 66 early glaucoma patients (visual field mean deviation better than -6.0 dB) were acquired as part of an ongoing larger study. A subset of 7 healthy subjects, as well as an additional 5 healthy subjects and 6 glaucoma patients who underwent 3 scanning sessions, is used for evaluating the methods proposed in this paper. These methods will later be applied to the entire data set. The study was approved by the university’s ethics committee and is in agreement with the tenets of the Declaration of Helsinki.

2.4 Estimation of nerve fiber orientation

With the data prepared, the orientation of nerve fibers can be derived. There are various methods available for estimating nerve fiber orientation, each with unique benefits and drawbacks. Since these methods tend to perform better in some areas and worse in others, we use multiple methods instead of only one: the mathematical model developed by Jansonius et al. [33,34], a polarization data based approach as proposed by Sugita et al. [21], and a method considering pure intensity variations of the RNFL [35,36]. The orientation maps derived from these methods are denoted $\phi _M$, $\phi _P$ and $\phi _F$ for the mathematical model, the polarization data and the fingerprint orientation estimation method, respectively. A detailed explanation follows below.

2.4.1 Mathematical model

In [33] and [34], Jansonius et al. introduced a mathematical model of the average trajectory of retinal nerve fibers. The parameters of the model were estimated based on manually assessed traces of nerve fiber bundles on fundus photos taken from 27 healthy subjects. A total of 1660 such traces (comprising 16816 sampling points) were used to estimate the model parameters. The model describes the trajectory of individual nerve fiber bundles using a function of the form

$$\varphi(\varphi_0,r)=\varphi_0+b(\varphi_0)\times(r-r_0)^{c(\varphi_0)}$$
with $\varphi _0=\varphi (r=r_0)$ being the angular position of the trajectory at its starting point on a circle with radius $r_0$ around the center of the ONH. The model further defines the constants to be
$$c=1.9+1.4 \times \tanh{\left(\frac{\varphi_0-121}{14}\right)}$$
for the superior hemifield and
$$c=1.0+0.5 \times \tanh{\left(\frac{-\varphi_0-90}{25}\right)}$$
for the inferior hemifield, as well as
$$\ln{\left(b\right)}={-}1.9+3.9 \times \tanh{\left(-\frac{\varphi_0-121}{14}\right)}$$
for the superior hemifield and
$$\ln{\left({-}b\right)}=0.7+1.5 \times \tanh{\left(-\frac{-\varphi_0-90}{25}\right)}$$
for the inferior hemifield. Though these values are not individualized, they give a good overview of the rough trajectory of the retinal nerve fiber bundles. Using these parameters we reimplemented the model (Fig. 1). The next step is to fit the given model to our data. Prior to that, however, the polar coordinate system is modified such that the position of the fovea is located at (0,0), as explained in detail in appendix A of [33].
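For concreteness, a minimal sketch of the reimplemented model of Eqs. (1)-(5) might look as follows. The hemifield convention (positive $\varphi_0$ superior, negative inferior) follows [33]; the function name and the default $r_0$ are illustrative assumptions.

```python
import numpy as np

def jansonius_angle(phi0_deg, r, r0=4.0):
    """Trajectory angle (degrees) at radius r >= r0 for a bundle starting
    at angular position phi0_deg on the circle of radius r0 around the
    ONH, per Eqs. (1)-(5)."""
    if phi0_deg >= 0:   # superior hemifield
        b = np.exp(-1.9 + 3.9 * np.tanh(-(phi0_deg - 121) / 14))
        c = 1.9 + 1.4 * np.tanh((phi0_deg - 121) / 14)
    else:               # inferior hemifield, where ln(-b) is modeled
        b = -np.exp(0.7 + 1.5 * np.tanh(-(-phi0_deg - 90) / 25))
        c = 1.0 + 0.5 * np.tanh((-phi0_deg - 90) / 25)
    return phi0_deg + b * (r - r0) ** c
```

Sampling $\varphi_0$ in 1° steps and evaluating this function over increasing $r$ reproduces traces such as those shown in Fig. 1.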

Fig. 1. Polar plot of the RNFB trajectories as given by the mathematical model described in [33,34]. The fovea is located at position (0,0) in the center. The ONH, 15° to the right, is the source of all traces.

For this we first create traces at a resolution of one trace per degree around the optic nerve head (see Fig. 1) and then interpolate these results on a grid with the dimensions of our large field of view projections. In order to fit the model, in which the fovea is located exactly in the center of the image, the fovea position is adjusted by translation for each individual subject. The center of the ONH is then fitted by rotation and scaling (keeping the fovea position fixed), completing the orientation map of the mathematical model $\phi _M$. Keep in mind that when rotating an orientation map $\phi$ by $\alpha$ degrees, we have to compensate by also adding $\alpha$ to the orientation values contained in $\phi$ (a sketch of this is given below). The fitted orientation map is shown in Fig. 2 for a representative case.
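The angle compensation can be sketched as below. Rotating the doubled-angle sine/cosine images, rather than the raw angles, is an assumption we make here to sidestep the $\pm\pi/2$ wraparound during interpolation; rotation about the image center (the fovea) is also assumed.

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_orientation_map(phi, alpha_deg):
    """Rotate an orientation map (radians) by alpha degrees around its
    center and compensate the contained orientation values by the same
    angle, as described in the text."""
    s = rotate(np.sin(2 * phi), alpha_deg, reshape=False, order=1)
    c = rotate(np.cos(2 * phi), alpha_deg, reshape=False, order=1)
    return 0.5 * np.arctan2(s, c) + np.deg2rad(alpha_deg)
```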

Fig. 2. Interpolated orientation map derived from the mathematical model (colormap in radians). Black bars are drawn at regular intervals to further indicate the orientation vector.

2.4.2 Polarization data

As previously mentioned, the RNFL is known to be birefringent, whereas the subsequent layers, from the GCL to the OPL, do not affect the polarization state of the light. The polarization parameters within the RNFL, optic axis orientation $\theta$ and phase retardation $\delta$, can thus be retrieved [37] by analyzing the OCT signal obtained from these polarization maintaining layers, after compensation of the corneal birefringence [23]. Since neighboring RNFBs are mainly oriented parallel to each other at a given position (x,y), the axis orientation is expected to be constant in depth. For a given retinal 3D volume, the RNFL retardation and axis orientation maps can be retrieved using Stokes vector averaging with re-normalization, as described in [21]. After segmentation of the RNFL/GCL boundary (at depth locations $B_{NG}(x,y)$) and the OPL/ONL boundary (at depth locations $B_{OO}(x,y)$), the averaged normalized Stokes vectors ($\langle \tilde {Q}\rangle$, $\langle \tilde {U}\rangle$, $\langle \tilde {V}\rangle$) are obtained as:

$$\begin{bmatrix} \langle\tilde{Q}(x,y)\rangle \\\langle\tilde{U}(x,y)\rangle \\\langle\tilde{V}(x,y)\rangle \end{bmatrix} = \frac{1}{B_{OO}(x,y)-B_{NG}(x,y)} \times \sum_{i=B_{NG}(x,y)}^{B_{OO}(x,y)} \begin{bmatrix} \cos{(2 \delta(x,y,i))} \\ \sin{(2 \delta(x,y,i))} \times \cos{(\pi - 2 \theta(x,y,i))}\\ \sin{(2 \delta(x,y,i))} \times \sin{(\pi - 2 \theta(x,y,i))} \end{bmatrix}.$$

The RNFL $\delta _P$ and $\phi _P$ components can then be obtained from:

$$\begin{bmatrix} \delta_P(x,y)\\\phi_P(x,y) \end{bmatrix} = \frac{1}{2} \times \begin{bmatrix} \arccos{(\langle\tilde{Q}(x,y)\rangle_{R})}\\ \arctan(-\frac{\langle\tilde{V}(x,y)\rangle_{R}}{\langle\tilde{U}(x,y)\rangle_{R}}) \end{bmatrix},$$
with $(\langle \tilde {Q}\rangle _{R}, \langle \tilde {U}\rangle _{R}, \langle \tilde {V}\rangle _{R})$ the re-normalized Stokes vectors defined as below:
$$\begin{bmatrix} \langle\tilde{Q}(x,y)\rangle_{R}\\\langle\tilde{U}(x,y)\rangle_{R}\\\langle\tilde{V}(x,y)\rangle_{R} \end{bmatrix} = \frac{1}{nrm(x,y)} \times \begin{bmatrix} \langle\tilde{Q}(x,y)\rangle\\ \langle\tilde{U}(x,y)\rangle\\ \langle\tilde{V}(x,y)\rangle\\ \end{bmatrix}$$
and
$$nrm(x,y) = \sqrt{{\langle\tilde{Q}(x,y)}\rangle^2+{\langle\tilde{U}(x,y)}\rangle^2+{\langle\tilde{V}(x,y)}\rangle^2}.$$
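A minimal sketch of Eqs. (6)-(9) is given below, assuming $(x,y,z)$-ordered volumes of cumulative retardation $\delta$ and axis orientation $\theta$ and integer boundary maps; the function name and array layout are illustrative, not the authors' implementation.

```python
import numpy as np

def rnfl_polarization(delta, theta, b_ng, b_oo):
    """Averaged-Stokes RNFL retardation and axis orientation, Eqs. (6)-(9).

    delta, theta: (X, Y, Z) volumes of cumulative retardation and axis
    orientation (radians); b_ng, b_oo: (X, Y) integer z-indices of the
    RNFL/GCL and OPL/ONL boundaries."""
    Z = delta.shape[2]
    z = np.arange(Z)[None, None, :]
    mask = (z >= b_ng[..., None]) & (z <= b_oo[..., None])
    n = np.maximum(mask.sum(axis=2), 1)            # samples per (x, y)
    Q = np.where(mask, np.cos(2 * delta), 0).sum(axis=2) / n
    U = np.where(mask, np.sin(2 * delta) * np.cos(np.pi - 2 * theta), 0).sum(axis=2) / n
    V = np.where(mask, np.sin(2 * delta) * np.sin(np.pi - 2 * theta), 0).sum(axis=2) / n
    nrm = np.sqrt(Q**2 + U**2 + V**2)              # re-normalization, Eq. (9)
    delta_p = 0.5 * np.arccos(Q / nrm)             # RNFL retardation
    phi_p = 0.5 * np.arctan2(-(V / nrm), U / nrm)  # relative axis orientation
    return delta_p, phi_p
```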

The resulting RNFL retardation and axis orientation maps of a 3D volume scan area from a healthy subject, including the ONH, are given in Fig. 3. As the axis orientation is a relative value, derived from the difference between the two OCT channels [37], further compensation of a variable angle offset is required (see section 2.5).

Fig. 3. Averaged polarization data of the RNFL of one 3D volume scan area of a healthy subject, showing the retardation (a) and axis orientation (b).

2.4.3 Nerve fiber layer intensity variation

Since the retinal nerve fiber layer is made up of the nerve fiber bundles, there are two ways to use the intensity channel of the volumetric raster scan OCT data to obtain information about the position and path of those bundles. One method is to use retinal nerve fiber layer thickness data and evaluate the thickness variation, as done in [21]. This is, however, highly dependent on the quality of the retinal layer segmentation and prone to failure in noisier data (as often occurs in older and glaucoma subjects), and is thus not our method of choice. Another way is to directly evaluate the intensity of the RNFL near its boundary to the ganglion cell layer (GCL). Since the RNFBs have a higher reflectivity in the OCT scan, they can be assessed by averaging the intensity near that boundary. Given an intensity OCT volume $V(x,y,z)$, a segmentation of the RNFL/GCL boundary $B_{NG}(x,y)$ and a segmentation of the ILM $B_{ILM}(x,y)$ - both describing the segmented layers by z-positions for all (x,y)-positions within the given volume V - we create a nerve fiber projection image $NF_{MaxLen}(x,y)$ using

$$NF_{MaxLen}(x,y) = \frac{1}{BE_{MaxLen}(x,y)} \times \sum_{i=1}^{BE_{MaxLen}(x,y)}{V(x,y,B_{NG}(x,y)-i)},$$
with $(x,y)$ indicating the lateral positions of V and $BE_{MaxLen}(x,y)$ indicating the extent of the band at position $(x,y)$ in the z-dimension over which to average. To ensure that these extents do not exceed the depth of the RNFL (to avoid including unwanted areas in $NF_{MaxLen}(x,y)$), $BE_{MaxLen}(x,y)$ is limited in size by $MaxLen$ as follows:
$$BE_{MaxLen}(x,y) = min\left(B_{NG}\left(x,y\right)-B_{ILM}\left(x,y\right),MaxLen\right),$$
with $MaxLen$ being our target band height, which for our data delivered good contrast at a value of 15. As can be seen in Fig. 4(A), this projection image clearly shows the pathways of the RNFBs, best visible close to the raphe. The paths of RNFBs are for the most part smooth and (within a small window) fairly parallel, and appear in the image as a set of ridges and grooves. A minimal sketch of this band-averaging projection is given below.
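The sketch assumes an $(x,y,z)$-ordered volume and integer boundary maps; the function name is illustrative.

```python
import numpy as np

def nf_projection(V, b_ng, b_ilm, max_len=15):
    """Nerve fiber projection by averaging intensity over a band of up to
    max_len pixels above the RNFL/GCL boundary, Eqs. (10)-(11). V is an
    (X, Y, Z) intensity volume; b_ng, b_ilm hold integer z-indices."""
    X, Y, _ = V.shape
    be = np.clip(np.minimum(b_ng - b_ilm, max_len), 1, None)  # band extent
    nf = np.zeros((X, Y))
    for x in range(X):
        for y in range(Y):
            z0 = b_ng[x, y]
            nf[x, y] = V[x, y, z0 - be[x, y]:z0].mean()
    return nf
```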
Fingerprint analysis is a widely studied field that deals with images of very similar characteristics to our RNFB projection images, since they also contain ridges and grooves that are mostly oriented similarly within a small window. Since several algorithms have already been proposed for extracting local orientation data from fingerprint images [38–43], we opted to use the underlying methods presented there, as they are applicable to our image data with minimal modification. Our process for orientation estimation consists of the following steps. First, we normalize the image by subtracting the background in a 145 $\times$ 145 pixel window and rescaling the intensity values to the range $0.0$ to $1.0$. Next, based on the methods described in [38], we calculate local gradients $G_x$ and $G_y$ in the X- and Y-direction, respectively. A first orientation estimate $\phi _{FI}$ (index I for initial) can be derived via
$$\phi_{FI}(x,y) = \arctan{(\frac{2\times G_{xy}}{(G_{xx}-G_{yy})+\sqrt{(G_{xx}-G_{yy})^2+(2\times G_{xy})^2}})},$$
with
$$\begin{bmatrix} G_{xx}(x,y)\\G_{yy}(x,y)\\G_{xy}(x,y) \end{bmatrix} = \frac{1}{(2 \times N + 1)^2} \times \sum_{i={-}N}^{N} \sum_{j={-}N}^{N} \begin{bmatrix} G_x(x+i,y+j)^2\\ G_y(x+i,y+j)^2\\ G_x(x+i,y+j)\times G_y(x+i,y+j) \end{bmatrix},$$
with $N$ being the half-size of the sliding window, which in our case was set to 7. Note that retinal vessel locations are excluded from this calculation in order not to incorporate their gradient direction into the map. As seen in Fig. 4(B), this first estimate is very noisy and in some areas contains wrong orientation estimates. Since we expect the Jansonius mathematical model to be roughly accurate, but not precise, we compare our first orientation estimate with the model to get a measure of the difference $DST_m$ via
$$DST_m(x,y) = \sqrt{(\sin{(2\times \phi_{FI})}-\sin{(2\times \phi_M)})^2+(\cos{(2\times \phi_{FI})}-\cos{(2\times \phi_M)})^2}.$$
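A minimal sketch of the initial estimate of Eqs. (12)-(13) follows; the Sobel gradient and box filter are stand-ins for whichever gradient and window operators [38] uses, and the small epsilon guarding the division is our addition.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def initial_orientation(img, N=7):
    """Gradient-based first orientation estimate phi_FI, Eqs. (12)-(13).
    The squared gradients are averaged over a (2N+1)^2 sliding window."""
    gx = sobel(img, axis=1)                      # gradient in x
    gy = sobel(img, axis=0)                      # gradient in y
    size = 2 * N + 1
    gxx = uniform_filter(gx * gx, size)
    gyy = uniform_filter(gy * gy, size)
    gxy = uniform_filter(gx * gy, size)
    denom = (gxx - gyy) + np.sqrt((gxx - gyy) ** 2 + (2 * gxy) ** 2) + 1e-12
    return np.arctan(2 * gxy / denom)
```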

Additionally we calculate a measure of the difference $DST_n$ to the neighbourhood, checking how well the orientation at any point $(x,y)$ fits its neighbouring points within a certain window via

$$DST_n(x,y) = \sqrt{s_{fit}^2+c_{fit}^2},$$
with
$$\begin{bmatrix} s_{fit}(x,y)\\c_{fit}(x,y) \end{bmatrix} = \begin{bmatrix} \sin{(2\times \phi_{FI}(x,y))}^2\\ \cos{(2\times \phi_{FI}(x,y))}^2 \end{bmatrix} \times \left(\frac{1}{(2\times N+1)^2} \times \sum_{i,j={-}N}^{N} \begin{bmatrix} \sin{(2\times \phi_{FI}(x+i,y+j))}\\ \cos{(2\times \phi_{FI}(x+i,y+j))} \end{bmatrix}\right)^2,$$
with $N$ being the half-size of the window. These two difference measures $DST_m$ and $DST_n$ are then combined and averaged to serve as weights for a dynamically sized weighted local average filter of the originally derived orientation $\phi _{FI}$ via
$$W(x,y) = 1-\frac{1}{2}\times \sqrt{\overline{DST_m}(x,y)^2+\overline{DST_n}(x,y)^2},$$
with $\overline {DST_m}$ and $\overline {DST_n}$ being sliding window average filtered versions of $DST_m$ and $DST_n$, respectively. Finally, the actual orientation estimate $\phi _F$ of the RNFB projection is obtained using a large weighted window average of the $\sin$ and $\cos$ components of the initial estimate $\phi _{FI}$:
$$\phi_F(x,y) = \frac{1}{2}\times \arctan{\left(\frac{s_\phi(x,y)}{c_\phi(x,y)}\right)},$$
with
$$\begin{bmatrix} s_\phi(x,y)\\c_\phi(x,y) \end{bmatrix} = \sum_{i,j={-}N}^{N} W(x+i,y+j) \times \begin{bmatrix} \sin{(2\times \phi_{FI}(x+i,y+j))}\\ \cos{(2\times \phi_{FI}(x+i,y+j))} \end{bmatrix}.$$

Note that the window size $N$ is not the same for the entire image, but dynamically becomes smaller for positions $(x,y)$ near and at the fovea. Since nerve fiber bundles converge on the fovea from various directions, a large averaging window there would wash out these high orientation differences. The dynamic reduction of the window size $N$ is implemented by checking, for every position $(x,y)$, whether the window of size $N$ centered around it would cover the position of the fovea. If so, $N$ is reduced for this position such that the window no longer includes the fovea. The final orientation map can be seen in Fig. 4(C).
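The weighted doubled-angle averaging of Eqs. (18)-(19) can be sketched as below; for brevity the window size is fixed here, whereas the method shrinks $N$ near the fovea as just described.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_orientation(phi_fi, W, N):
    """Weighted sliding-window average of an orientation map on the
    doubled angle, Eqs. (18)-(19). Orientations are pi-periodic, so the
    sin/cos of 2*phi are averaged and the angle is halved back."""
    size = 2 * N + 1
    s = uniform_filter(W * np.sin(2 * phi_fi), size)
    c = uniform_filter(W * np.cos(2 * phi_fi), size)
    return 0.5 * np.arctan2(s, c)
```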

Fig. 4. (A) Projection of the RNFL, averaged near the RNFL/GCL boundary; the location of the raphe is marked with a red ellipse and some RNFBs are clearly visible. (B) Noisy orientation map derived via fingerprint orientation estimation techniques; black bars are drawn in to visualize direction. (C) Further averaged and compensated orientation map.

2.5 Combining orientation data

After obtaining the orientation estimates $\phi _M$, $\phi _P$ and $\phi _F$ from the mathematical model, the polarization data and the RNFB projection image, respectively, a global orientation map can be created. Except for $\phi _M$, all calculations were made on a per retinal scanning region basis, meaning that we first need to create a single large orientation map each for the polarization and RNFB projection data, which is done using the methods described in our previous paper [28]. Before the 3 large FOV orientation maps can be combined, however, the axis offset $ofs$ of $\phi _P$ mentioned in section 2.4.2 has to be corrected. For this we take a broad circular band around the ONH and compare $\phi _F$ with $\phi _P + ofs$. To be precise, we calculate the sum of squared differences of the doubled-angle $\sin ()$ and $\cos ()$ components of both orientation maps, offsetting $\phi _P$ by angles from -90° to +90° in increments of 1°. The offset $ofs$ that yields the lowest squared difference is then applied to $\phi _P$.
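This grid search translates directly into code; the precomputed `band_mask` selecting the circular band around the ONH is an assumed input.

```python
import numpy as np

def find_axis_offset(phi_f, phi_p, band_mask):
    """Grid search for the global axis offset of phi_P (section 2.5).
    Offsets from -90 to +90 degrees in 1-degree steps; the offset with
    the smallest squared doubled-angle sin/cos difference inside the
    band around the ONH wins."""
    best_ofs, best_err = 0.0, np.inf
    for ofs_deg in range(-90, 91):
        ofs = np.deg2rad(ofs_deg)
        d = ((np.sin(2 * (phi_p + ofs)) - np.sin(2 * phi_f)) ** 2 +
             (np.cos(2 * (phi_p + ofs)) - np.cos(2 * phi_f)) ** 2)
        err = d[band_mask].sum()
        if err < best_err:
            best_ofs, best_err = ofs, err
    return best_ofs   # radians; add to phi_P before fusion
```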

Finally we can join the two large field maps $\phi _F$ and $\phi _P$ into the orientation map $OM$. To determine the contribution of each map at a given position $(x,y)$, we once again compare them to the mathematical model, using

$$W(x,y) = 1-\frac{1}{\sqrt{2}}\sqrt{(\sin{(2 \times \phi)}-\sin{(2 \times \phi_M)})^2+(\cos{(2 \times \phi)}-\cos{(2 \times \phi_M)})^2},$$
which is calculated once for $\phi = \phi _P$ giving us $W = W_P$ and once for $\phi = \phi _F$ giving $W = W_F$. After smoothing the weight contributions $W_P$ and $W_F$ with a sliding average filter, they are joined via
$$OM(x,y) = \frac{1}{2} \times \arctan{\left(\frac{s_{joined}}{c_{joined}}\right)}$$
with
$$\begin{bmatrix} s_{joined}(x,y)\\c_{joined}(x,y) \end{bmatrix} = \frac{1}{W_P+W_F} \times \begin{bmatrix} W_P \times \sin{(2 \times \phi_P)} + W_F \times \sin{(2 \times \phi_F)}\\ W_P \times \cos{(2 \times \phi_P)} + W_F \times \cos{(2 \times \phi_F)} \end{bmatrix}.$$
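The fusion of Eqs. (21)-(22) reduces to a few lines; the weights are assumed to be the smoothed $W_P$ and $W_F$ of Eq. (20).

```python
import numpy as np

def join_orientation_maps(phi_p, phi_f, W_P, W_F):
    """Weighted fusion of the polarization- and fingerprint-based maps
    into OM, Eqs. (21)-(22)."""
    s = (W_P * np.sin(2 * phi_p) + W_F * np.sin(2 * phi_f)) / (W_P + W_F)
    c = (W_P * np.cos(2 * phi_p) + W_F * np.cos(2 * phi_f)) / (W_P + W_F)
    return 0.5 * np.arctan2(s, c)
```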

2.6 RNFB tracing

Given the combined large field orientation map $OM(x,y)$, the RNFB trajectories can be traced. Trajectories are calculated by line integration: taking the angle at a position $(x,y)$ and moving a set amount in that direction yields a new position $(x',y')$, from which the process is repeated until a termination point is reached. The equation is simply

$$\begin{bmatrix} x'\\y' \end{bmatrix} = \begin{bmatrix} x\\y \end{bmatrix} \pm r \times \begin{bmatrix} \cos{(OM(x,y))}\\ \sin{(OM(x,y))} \end{bmatrix},$$
with $r$ being the step size in pixels. Note that we calculate 2 potential positions for $(x',y')$, since we cannot know whether, e.g., an angle of 0° points left or right. The problem of choosing the correct direction is linked to the problem of choosing the best starting point. Intuitively one might want to trace RNFBs starting from a set of positions located on a circle around the ONH, in which case we would always choose the $(x',y')$ that is further away from the center of the ONH. Since the paths of RNFBs diverge outward from the ONH, small inaccuracies of our orientation map will also diverge, increasing the error over the course of a trace. The more robust method is to trace inwards, from the periphery towards the ONH, in which case we always choose the $(x',y')$ with the minimum distance to the center of the ONH.

In practice, however, the most probable use-case is that a specific position of interest (e.g. a visual field test coordinate) is selected in the image and the trace passing through that point has to be determined. In this case we simply calculate 2 sub-traces, one following the trajectory towards the ONH and one away from it, and then combine them.
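A minimal sketch of the inward tracing loop is given below; the image indexing as `om[y, x]`, the stopping radius and the default step size are assumptions for illustration.

```python
import numpy as np

def trace_inward(om, start, onh_center, r=2.0, stop_dist=15.0, max_steps=2000):
    """Line-integration tracing towards the ONH (section 2.6). Of the two
    candidate steps +-r*(cos, sin) along the local orientation, the one
    moving closer to the ONH center is taken."""
    pos = np.asarray(start, dtype=float)         # (x, y) in pixels
    onh = np.asarray(onh_center, dtype=float)
    path = [pos.copy()]
    for _ in range(max_steps):
        xi, yi = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= xi < om.shape[1] and 0 <= yi < om.shape[0]):
            break                                # left the image
        ang = om[yi, xi]
        step = r * np.array([np.cos(ang), np.sin(ang)])
        pos = min(pos + step, pos - step,
                  key=lambda p: np.linalg.norm(p - onh))
        path.append(pos.copy())
        if np.linalg.norm(pos - onh) < stop_dist:
            break                                # reached the ONH region
    return np.array(path)
```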

3. Results

The accuracy of our RNFB tracing method was evaluated as follows. A subset of 7 out of the 123 healthy subjects was selected for manual RNFB tracing, which was performed by 3 trained clinical experts using a custom-built manual tracing tool. Similar to the evaluation in [44], we matched the coordinates of the 24-2 Humphrey visual field test to the retinal fundus image (using translation, rotation and scaling) such that the fovea and ONH-center locations of the two were superimposed. The manual graders were shown one such visual field coordinate at a time and were tasked with tracing the entire path from that point up to the ONH. Graders were free to zoom in and to toggle between the retinal fundus image and the stitched projection image of RNFL intensity near the RNFL/GCL boundary at any time during the process (Fig. 5).

Fig. 5. Custom tool for manual RNFB tracing. Individual visual field coordinates can be selected on the left; a sample trace is drawn in red. The top left shows the intensity variation projection, the bottom right the retinal fundus image; the grader can toggle between the two at any time.

In order to get a measure of how well the different graders agree in their respective tracings, we perform an analysis similar to the one in [44]. For this we calculate the entry angle (EA) to the ONH (defined as the angle at which the trace enters an area within a radius of 4° around the center of the ONH) and then compare these EAs between graders. We calculate an EA range over the 3 graders (simply the maximum difference between any two graders, defined as $max(EA)-min(EA)$ and compensated for the 360° wraparound), as well as an average EA of the 3 graders for each trace, to obtain the offset between the EA of every grader and the average EA. In addition, we calculate the intra-class correlation coefficient (ICC) of the EA for the 3 graders using the "A-k" scheme (again compensated for the 360° wraparound) to confirm the reliability of the obtained results. Furthermore, we analyze the variability between graders along the entire trace. We first transformed all traces to a polar coordinate system centered on the center of the ONH and interpolated the trace points using linear interpolation. Next, an average trace over all 3 graders was calculated by averaging the 3 interpolated angles for every radius at fixed intervals. Given an averaged trace $T_{A}$, we can calculate the root mean square error (RMSE) between it and the trace of any grader $T(G)$ (with $G\in \{1,2,3\}$). To do this we take every radial position $r$ of the traces, denoted $T_{A}(r)$ and $T(G,r)$ for the averaged trace and each grader's trace respectively, and calculate the squared distance between each grader's trace and the average trace to get an overall RMSE via:

$$RMSE = \frac{1}{3} \times \sum_{G=1}^{3} \sqrt{\frac{1}{length(T_A)} \times \sum_{r=4}^{length(T_A)} \left( \min \left( {\left( T_{A}\left(r\right)-T\left(G,r\right) \right) }^2 \right) \right)}.$$
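For illustration, Eq. (24) might be implemented as below; the wraparound handling is our assumption, consistent with the EA analysis above, and the conversion of the angular error to micrometers (as reported in Tables 1-2) is omitted.

```python
import numpy as np

def trace_rmse(T_avg, T_graders, r_start=4):
    """RMSE between the averaged trace and each grader's trace, Eq. (24).
    Traces are angular positions (degrees) sampled at fixed radii around
    the ONH center; differences are wrapped to [-180, 180)."""
    errs = []
    for T in T_graders:
        d = T_avg[r_start:] - T[r_start:]
        d = (d + 180.0) % 360.0 - 180.0          # 360-degree wraparound
        errs.append(np.sqrt(np.mean(d ** 2)))
    return float(np.mean(errs))
```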

For the analysis of the results we grouped the traces into 6 groups depending on their individual VF starting points, based on the Garway-Heath scheme [45]. Table 1 shows the detailed results of that analysis.

Table 1. Expert grader comparison of EA and trace overlap including standard deviation (SD), separated into sectors temporal (T), superior temporal (ST), inferior temporal (IT), superior nasal (SN), inferior nasal (IN), nasal (N) and overall

Using the graders’ traces as ground truth, we can assess the accuracy of our automatic method, as well as of our implementation of the mathematical model by Jansonius et al. For this we take the same visual field coordinates as used for manual tracing and perform automatic tracing in the direction of the ONH, once using the mathematical model and once using our proposed method. The RMSE is calculated as before, by comparing each point along the trace $T(r)$ to the corresponding point along the averaged grader trace $T_A(r)$, except that no averaging over multiple graders is needed. Thus Eq. (24) can be modified to

$$RMSE = \sqrt{\frac{1}{length(T_A)} \times \sum_{r=4}^{length(T_A)} \left( \min \left( {\left( T_{A}\left(r\right)-T\left(r\right) \right) }^2 \right) \right)},$$

Table 2 shows the results of this analysis, again separated into groups. The complete automatic tracing using our proposed method is shown in Fig. 6 for a healthy subject.

Fig. 6. Projection of the RNFL, averaged near the RNFL/GCL boundary, with tracings derived via our proposed method starting from 24-2 Humphrey VF points drawn in.

Table 2. Comparison of automatic tracing with the mathematical model

Finally, we tested the reproducibility of our automatic tracing method. For a subset of our study consisting of 5 healthy subjects and 6 glaucoma patients, we acquired 3 complete sets of wide field OCT data on different days (the average time difference in the sample was 19 days). These datasets are processed completely independently of one another, except for the retinal fundus image, which is the same for all 3 sets and is used for registration and stitching [28]. The question of the reproducibility of one algorithm on 3 different datasets of the same eye is equivalent to the question of the variability of 3 different manual graders on the same dataset of one eye. Thus we used the exact same methods as for the analysis in Table 1; the results are presented in Table 3.

In the case of the repeated measurements of Table 3, there were a total of 3 cases (1 healthy, 2 glaucoma) where one VF starting point was located so close to the raphe that it was ambiguous whether to start tracing upwards or downwards (as seen in Fig. 7). In all 3 cases this VF point was located in the "IT" group, and it caused one repetition to trace in the opposite direction to the other 2 repetitions, severely increasing the trace RMSE, as well as the EA range and offset. We decided to repeat these calculations with the 3 affected traces excluded to get a clearer picture of the algorithm's performance. Table 4 shows the results after excluding the traces for "IT" and "Overall"; all other regions remain unaffected by this change.

Fig. 7. Automatic traces of repeated scans of a subject. One VF starting point was mistraced in the wrong direction (inside blue circle).

Table 3. Repeatability comparison of the automatic tracing with the averaged manual trace

Table 4. Repeatability comparison after faulty VF trace exclusion

4. Discussion

We presented a new method for fully automatic tracing of the trajectories of retinal nerve fiber bundles on stitched large FOV PS-OCT data. The algorithm itself works on any kind and size of PS-OCT volume; accurate stitching, however, allows even longer nerve fiber bundle trajectories to be traced.

Our orientation maps are generated using polarization data and intensity data of the RNFL, as well as a mathematical model to improve stability. Given that we need to access information from specific retinal layers, it is important to provide an accurate and stable retinal layer segmentation, which can prove difficult in low SNR data. The method of visualizing nerve fiber bundles using intensity data depends on the nerve fiber bundles having a significantly higher average intensity than the background. This effect is greatly reduced at lower SNR and in the case of glaucoma [46], increasing the difficulty of extracting information.

A major point of weakness in our proposed method could be the use of fingerprint orientation estimation techniques. As discussed earlier, our intensity variation projection images bear similar features to fingerprint images, which encouraged us to use these methods. One difference from the fingerprint case is the presence of additional structural information in the form of retinal vessels. For orientation estimation they are segmented and excluded; this process, unfortunately, is not perfect and sometimes leaves false gradient information that can negatively influence the estimation. The bigger difference, however, is the lack of uniformity in the spacing between RNFBs compared to fingerprint ridges. In typical fingerprint images the spacing between ridges and valleys remains largely constant over the entire image, whereas RNFBs tend to spread outwards the further they are from the ONH. This means that near the ONH they are squeezed so tightly that neighboring bundles may occupy the same pixel in the image, making it difficult to extract a gradient between them. It also means that further out, and this was especially noticeable near the raphe, they are spread very far apart, leaving large areas between bundles with no usable gradient information, which causes problems when creating an averaged orientation map. Another problem is the already mentioned lowered SNR of the OCT volume in cases of glaucoma, which negatively affects the reflectivity of the RNFBs, making it harder to extract orientation information using standard fingerprint orientation estimation techniques. Thus other methods, or additional modifications to the fingerprint orientation estimation techniques, may need to be implemented to improve the accuracy and robustness of our method.

In our work we use the mathematical model proposed by Jansonius et al. to fill in the gaps in the estimated RNFB orientation map. If, for a certain retinal region, neither the axis orientation data from the PS-OCT nor the orientation estimate from the intensity variation projection is usable, we blend in the mathematical model via interpolation. This also applies to regions where a RNFB trajectory briefly goes out of bounds of our stitched image (mainly the inner corners of our "cross"-shaped scan pattern). When generating our fingerprint-based orientation map we also rely on the mathematical model. This may introduce a slight bias of our generated orientation map towards the model. Judging from our results, however, it is clear that our method outperforms the pure mathematical model. The mathematical model is presently still based on manual tracing and is consequently inherently less accurate than our automatic tracing, as our data show. For this reason it is expected that in more severe glaucoma cases, where a larger part of the tracing has to rely on the mathematical model, this will affect the results. For the future, however, we plan to redesign the mathematical model based on automatic traces in a larger number of healthy subjects; this redesign will also include a personalization based on confounders such as the positions of the major retinal blood vessels and the axial eye length. Furthermore, the mathematical model is based on healthy eyes only, and it cannot be excluded that the trajectories might differ in glaucoma compared to healthy eyes, although this seems rather unlikely to us.

The 3 trained graders were very consistent in their tracing results, deviating on average only 6.2° from their respective average entry angle to the ONH, as can be seen in Table 1. The worst concordance was obtained in the ST and IT Garway-Heath sectors with 9.1° and 6.4° respectively, which is not surprising considering that these two sectors cover the RNFBs with the longest pathways in our FOV. Additionally, the dominant retinal vessel arch reportedly increased the tracing difficulty significantly. The same pattern can also be seen when analyzing the RMSE along the traces, where again the ST and IT sectors are above the average of 112 $\mu m$ with 141 $\mu m$ each. Interestingly, the EA ICC is still quite high even in those regions, with values of 0.910 and 0.915 respectively, and is instead lower for SN and IN with only 0.828 and 0.858 respectively. Overall, however, the EA ICC also shows a high consistency of the graders’ tracing results with a value of 0.989.

Comparing the mathematical model and our method to the reference results of the manual grading, the overall average EA offset of our method (8.6°) is slightly lower than that of the mathematical model (9.8°); the SD of those offsets, however, is less than half (7.5° versus 16.6°), indicating considerably less variation between subjects. Similarly, the overall RMSE along the trace is markedly lower for our method than for the mathematical model, at 188 $\mu m$ (SD = 145) versus 210 $\mu m$ (SD = 183), as presented in Table 2.

When testing for repeatability, Tables 3 and 4 show that the average EA offset is reduced to almost a third for healthy subjects (2.2°) and more than halved for glaucoma patients (2.9°) compared to manual grading (6.2°). The EA ICC likewise shows higher concordance of the repeated measurements than the manual grading, with an overall value of 0.999 for healthy subjects as well as glaucoma patients. One has to keep in mind, however, that manual grading was only performed on healthy subjects, so the repeatability on glaucoma patients cannot be compared directly. The EA offset, as well as the trace RMSE, is slightly higher on the glaucoma set than on the healthy set, which is to be expected, since patient scans often have lower signal quality than scans of healthy subjects.

As mentioned earlier, a total of 3 repeated traces had to be excluded because they ran in opposite directions, as shown in Fig. 7. This only affected starting points very close to the raphe, where the true trajectory can sometimes be ambiguous. This problem may be reduced by improving the orientation estimation of the intensity variation projection images, as discussed previously. It should be mentioned, however, that even the clinical expert graders, on initial inspection of the training data, did not agree whether the trajectory goes up or down for some starting points close to the raphe. This shows that the raphe can be a particularly difficult region for tracing.

We acknowledge the mentioned limitations of our study, in particular our very limited set of data. This applies both to our limited set of manually graded subjects and to the set of repeated measurements. As a proof of concept, however, we show that stitched large FOV PS-OCT data can be used for fully automatic individualized RNFB tracing in both healthy subjects and glaucoma patients. More work has to be done, especially in fundus areas near the raphe and in the presence of large vessels, to further improve the method.

5. Conclusion

PS-OCT provides additional information on RNFB structures, giving the possibility to reconstruct trajectories of RNFBs more accurately. In contrast to mathematical models or manual grading, we integrate the information from 3 different sources to construct the individual traces from measurements. Our preliminary results indicate that our method is superior to the mathematical model, which is limited in its subject-specific parametrization. Furthermore, the results of our method are free from grader variability and highly reproducible. This will also open the way to analyzing clinical parameters along the trajectory of arbitrary individual RNFBs without time consuming manual grading.

Funding

Austrian Science Fund (KLI854, P30378).

Acknowledgments

We would like to thank Gerhard Garhöfer and Doreen Schmidl for their support.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time, but may be obtained from the authors upon reasonable request.

References

1. R. S. Harwerth, L. Carter-Dawson, F. Shen, E. L. Smith, and M. Crawford, “Ganglion cell losses underlying visual field defects from experimental glaucoma,” Invest. Ophthalmol. Vis. Sci. 40, 2242–2250 (1999).

2. H. A. Quigley and E. M. Addicks, “Quantitative studies of retinal nerve fiber layer defects,” Arch. Ophthalmol. 100(5), 807–814 (1982). [CrossRef]  

3. L. A. Kerrigan-Baumrind, H. A. Quigley, M. E. Pease, D. F. Kerrigan, and R. S. Mitchell, “Number of ganglion cells in glaucoma eyes compared with threshold visual field tests in the same persons,” Invest. Ophthalmol. Vis. Sci. 41, 741–748 (2000).

4. R. S. Harwerth and H. A. Quigley, “Visual field defects and retinal ganglion cell losses in patients with glaucoma,” Arch. Ophthalmol. 124(6), 853–859 (2006). [CrossRef]  

5. S. Frenkel, J. E. Morgan, and E. Blumenthal, “Histological measurement of retinal nerve fibre layer thickness,” Eye 19(5), 491–498 (2005). [CrossRef]  

6. J. S. Schuman, M. R. Hee, C. A. Puliafito, C. Wong, T. Pedut-Kloizman, C. P. Lin, E. Hertzmark, J. A. Izatt, E. A. Swanson, and J. G. Fujimoto, “Quantification of nerve fiber layer thickness in normal and glaucomatous eyes using optical coherence tomography: a pilot study,” Arch. Ophthalmol. 113(5), 586–596 (1995). [CrossRef]  

7. C. Bowd, L. M. Zangwill, C. C. Berry, E. Z. Blumenthal, C. Vasile, C. Sanchez-Galeana, C. F. Bosworth, P. A. Sample, and R. N. Weinreb, “Detecting early glaucoma by assessment of retinal nerve fiber layer thickness and visual function,” Invest. Ophthalmol. Vis. Sci. 42, 1993–2003 (2001).

8. F. A. Medeiros, L. M. Zangwill, C. Bowd, and R. N. Weinreb, “Comparison of the gdx vcc scanning laser polarimeter, hrt ii confocal scanning laser ophthalmoscope, and stratus oct optical coherence tomograph for the detection of glaucoma,” Arch. Ophthalmol. 122(6), 827–837 (2004). [CrossRef]  

9. A. Kanamori, A. Nagai-Kusuhara, M. F. Escaño, H. Maeda, M. Nakamura, and A. Negi, “Comparison of confocal scanning laser ophthalmoscopy, scanning laser polarimetry and optical coherence tomography to discriminate ocular hypertension and glaucoma at an early stage,” Graefe’s Arch. Clin. Exp. Ophthalmol. 244(1), 58–68 (2006). [CrossRef]  

10. P. Sharma, P. A. Sample, L. M. Zangwill, and J. S. Schuman, “Diagnostic tools for glaucoma detection and management,” Surv. Ophthalmol. 53(6), S17–S32 (2008). [CrossRef]  

11. K. A. Townsend, G. Wollstein, and J. S. Schuman, “Imaging of the retinal nerve fibre layer for glaucoma,” Br. J. Ophthalmol. 93(2), 139–143 (2009). [CrossRef]  

12. Q. M. Chi, G. Tomita, K. Inazumi, T. Hayakawa, T. Ido, and Y. Kitazawa, “Evaluation of the effect of aging on the retinal nerve fiber layer thickness using scanning laser polarimetry,” J. Glaucoma 4(6), 406–413 (1995).

13. D. Poinoosawmy, L. Fontana, J. Wu, F. Fitzke, and R. Hitchings, “Variation of nerve fibre layer thickness measurements with age and ethnicity by scanning laser polarimetry,” Br. J. Ophthalmol. 81(5), 350–354 (1997). [CrossRef]  

14. S. Funaki, M. Shirakashi, and H. Abe, “Relation between size of optic disc and thickness of retinal nerve fibre layer in normal subjects,” Br. J. Ophthalmol. 82(11), 1242–1245 (1998). [CrossRef]  

15. C. Bowd, L. M. Zangwill, E. Z. Blumenthal, C. Vasile, A. G. Boehm, P. A. Gokhale, K. Mohammadi, P. Amini, T. M. Sankary, and R. N. Weinreb, “Imaging of the optic disc and retinal nerve fiber layer: the effects of age, optic disc area, refractive error, and gender,” J. Opt. Soc. Am. A 19(1), 197–207 (2002). [CrossRef]  

16. J. L. Hougaard, C. Ostenfeld, A. Heijl, and B. Bengtsson, “Modelling the normal retinal nerve fibre layer thickness as measured by stratus optical coherence tomography,” Graefe’s Arch. Clin. Exp. Ophthalmol. 244(12), 1607–1614 (2006). [CrossRef]  

17. Q. Ghadiali, D. C. Hood, C. Lee, J. Manns, A. Llinas, L. K. Grover, V. C. Greenstein, J. M. Liebmann, J. G. Odel, and R. Ritch, “An analysis of normal variations in retinal nerve fiber layer thickness profiles measured with optical coherence tomography,” J. Glaucoma 17(5), 333–340 (2008). [CrossRef]  

18. G. Savini, M. Zanini, V. Carelli, A. Sadun, F. Ross-Cisneros, and P. Barboni, “Correlation between retinal nerve fibre layer thickness and optic nerve head size: an optical coherence tomography study,” Br. J. Ophthalmol. 89(4), 489–492 (2005). [CrossRef]  

19. I. Pereira, H. Resch, F. Schwarzhans, J. Wu, S. Holzer, B. Kiss, F. Frommlet, G. Fischer, and C. Vass, “Multivariate model of the intersubject variability of the retinal nerve fiber layer thickness in healthy subjects,” Invest. Ophthalmol. Vis. Sci. 56(9), 5290–5298 (2015). [CrossRef]  

20. H. A. Quigley, G. R. Dunkelberger, and W. R. Green, “Retinal ganglion cell atrophy correlated with automated perimetry in human eyes with glaucoma,” Am. J. Ophthalmol. 107(5), 453–464 (1989). [CrossRef]  

21. M. Sugita, M. Pircher, S. Zotter, B. Baumann, P. Roberts, T. Makihira, N. Tomatsu, M. Sato, C. Vass, and C. K. Hitzenberger, “Retinal nerve fiber bundle tracing and analysis in human eye by polarization sensitive oct,” Biomed. Opt. Express 6(3), 1030–1054 (2015). [CrossRef]  

22. J. F. De Boer, C. K. Hitzenberger, and Y. Yasuno, “Polarization sensitive optical coherence tomography–a review,” Biomed. Opt. Express 8(3), 1838–1873 (2017). [CrossRef]  

23. M. Pircher, C. K. Hitzenberger, and U. Schmidt-Erfurth, “Polarization sensitive optical coherence tomography in the human eye,” Prog. Retinal Eye Res. 30(6), 431–451 (2011). [CrossRef]  

24. G. M. Pocock, R. G. Aranibar, N. J. Kemp, C. S. Specht, M. K. Markey, and H. G. Rylander, “The relationship between retinal ganglion cell axon constituents and retinal nerve fiber layer birefringence in the primate,” Invest. Ophthalmol. Vis. Sci. 50(11), 5238–5246 (2009). [CrossRef]  

25. X.-R. Huang and R. W. Knighton, “Microtubules contribute to the birefringence of the retinal nerve fiber layer,” Invest. Ophthalmol. Vis. Sci. 46(12), 4588–4593 (2005). [CrossRef]  

26. B. Fortune, C. F. Burgoyne, G. Cull, J. Reynaud, and L. Wang, “Onset and progression of peripapillary retinal nerve fiber layer (rnfl) retardance changes occur earlier than rnfl thickness changes in experimental glaucoma,” Invest. Ophthalmol. Vis. Sci. 54(8), 5653–5661 (2013). [CrossRef]  

27. M. Sugita, S. Zotter, M. Pircher, T. Makihira, K. Saito, N. Tomatsu, M. Sato, P. Roberts, U. Schmidt-Erfurth, and C. K. Hitzenberger, “Motion artifact and speckle noise reduction in polarization sensitive optical coherence tomography by retinal tracking,” Biomed. Opt. Express 5(1), 106–122 (2014). [CrossRef]  

28. F. Schwarzhans, S. Desissaire, S. Steiner, M. Pircher, C. K. Hitzenberger, H. Resch, C. Vass, and G. Fischer, “Generating large field of view en-face projection images from intra-acquisition motion compensated volumetric optical coherence tomography data,” Biomed. Opt. Express 11(12), 6881–6904 (2020). [CrossRef]  

29. M. Sugita, M. Pircher, S. Zotter, B. Baumann, K. Saito, T. Makihira, N. Tomatsu, M. Sato, and C. K. Hitzenberger, “Analysis of optimum conditions of depolarization imaging by polarization-sensitive optical coherence tomography in the human retina,” J. Biomed. Opt. 20(1), 016011 (2015). [CrossRef]  

30. S. Desissaire, F. Schwarzhans, M. Salas, A. Wartak, G. Fischer, C. Vass, M. Pircher, and C. K. Hitzenberger, “Analysis of longitudinal sections of retinal vessels using doppler oct,” Biomed. Opt. Express 11(4), 1772–1789 (2020). [CrossRef]  

31. F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, “Robust automatic segmentation of corneal layer boundaries in sdoct images using graph theory and dynamic programming,” Biomed. Opt. Express 2(6), 1524–1538 (2011). [CrossRef]  

32. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef]  

33. N. M. Jansonius, J. Nevalainen, B. Selig, L. Zangwill, P. Sample, W. Budde, J. Jonas, W. A. Lagrèze, P. Airaksinen, and R. Vonthein, “A mathematical description of nerve fiber bundle trajectories and their variability in the human retina,” Vision Res. 49(17), 2157–2163 (2009). [CrossRef]  

34. N. M. Jansonius, J. Schiefer, J. Nevalainen, J. Paetzold, and U. Schiefer, “A mathematical model for describing the retinal nerve fiber bundle trajectories in the human eye: average course, variability, and influence of refraction, optic disc size and optic disc position,” Exp. Eye Res. 105, 70–78 (2012). [CrossRef]  

35. K. Takayama, S. Ooto, M. Hangai, N. Ueda-Arakawa, S. Yoshida, T. Akagi, H. O. Ikeda, A. Nonaka, M. Hanebuchi, and T. Inoue, “High-resolution imaging of retinal nerve fiber bundles in glaucoma using adaptive optics scanning laser ophthalmoscopy,” Am. J. Ophthalmol. 155(5), 870–881.e3 (2013). [CrossRef]  

36. R. Cheloni and J. Denniss, “Depth-resolved variations in visibility of retinal nerve fibre bundles across the retina in enface oct images of healthy eyes,” Ophthalmic Physiol. Opt. 41(1), 179–191 (2021). [CrossRef]  

37. C. K. Hitzenberger, E. Götzinger, M. Sticker, M. Pircher, and A. F. Fercher, “Measurement and imaging of birefringence and optic axis orientation by phase resolved polarization sensitive optical coherence tomography,” Opt. Express 9(13), 780–790 (2001). [CrossRef]  

38. Y. Wang, J. Hu, and F. Han, “Enhanced gradient-based algorithm for the estimation of fingerprint orientation fields,” Appl. Math. Comput. 185(2), 823–833 (2007). [CrossRef]  

39. E. Zhu, J. Yin, C. Hu, and G. Zhang, “A systematic method for fingerprint ridge orientation estimation and image segmentation,” Pattern Recognit. 39(8), 1452–1472 (2006). [CrossRef]  

40. Z. Hou, W.-Y. Yau, and Y. Wang, “A review on fingerprint orientation estimation,” Secur. Commun. Networks 4(5), 591–599 (2011). [CrossRef]  

41. L. Liu and T.-S. Dai, “A reliable fingerprint orientation estimation algorithm,” J. Inform. Sci. Eng. 27, 353–368 (2011).

42. S. Liu, M. Liu, and Z. Yang, “Sparse coding based orientation estimation for latent fingerprints,” Pattern Recognit. 67, 164–176 (2017). [CrossRef]  

43. L. Ji and Z. Yi, “Fingerprint orientation field estimation using ridge projection,” Pattern Recognit. 41(5), 1491–1503 (2008). [CrossRef]  

44. J. Denniss, A. Turpin, F. Tanabe, C. Matsumoto, and A. M. McKendrick, “Structure–function mapping: variability and conviction in tracing retinal nerve fiber bundles and comparison to a computational model,” Invest. Ophthalmol. Vis. Sci. 55(2), 728–736 (2014). [CrossRef]  

45. D. F. Garway-Heath, D. Poinoosawmy, F. W. Fitzke, and R. A. Hitchings, “Mapping the visual field to the optic disc in normal tension glaucoma eyes,” Ophthalmology 107(10), 1809–1815 (2000). [CrossRef]  

46. J. Van Der Schoot, K. A. Vermeer, J. F. de Boer, and H. G. Lemij, “The effect of glaucoma on the optical attenuation coefficient of the retinal nerve fiber layer in spectral domain optical coherence tomography images,” Invest. Ophthalmol. Vis. Sci. 53(4), 2424–2430 (2012). [CrossRef]  
