Eye tracking-based estimation and compensation of chromatic offsets for multi-wavelength retinal microstimulation with foveal cone precision


Abstract

Multi-wavelength ophthalmic imaging and stimulation of photoreceptor cells require consideration of chromatic dispersion of the eye, manifesting in longitudinal and transverse chromatic aberrations. Contemporary image-based techniques to measure and correct transverse chromatic aberration (TCA) and the resulting transverse chromatic offset (TCO) in an adaptive optics retinal imaging system are precise but lack compensation of small but significant shifts in eye position occurring during in vivo testing. Here, we present a method that requires only a single measurement of TCO during controlled movements of the eye to map retinal chromatic image shifts to the image space of a pupil camera. After such calibration, TCO can be compensated by continuously monitoring eye position during experimentation and by interpolating correction vectors from a linear fit to the calibration data. The average change rate of TCO per head shift and the correlation between Kappa and the individual foveal TCA are close to the expectations based on a chromatic eye model. Our solution enables continuous compensation of TCO with high spatial precision and avoids high light intensities required for re-measuring TCO after eye position changes, which is necessary for foveal cone-targeted psychophysical experimentation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Adaptive optics scanning laser ophthalmoscopy (AOSLO) coupled with microstimulation techniques enables imaging and simultaneous functional testing of targeted human photoreceptors in vivo. This approach was recently employed to study single cone photoreceptor function [1,2], retinal circuitry [3,4], color vision [5,6], and sensitivity changes during retinal disease [7,8].

The specific techniques employed in these studies, using two (or more) beams of light of different wavelengths for imaging and stimulation, are faced with a particular challenge arising from the chromatic dispersion characteristics of the human eye. Due to dispersion, light of different wavelengths will be focused axially displaced (termed longitudinal chromatic aberration, LCA) [9], and if incident at an oblique angle, focus points will also be laterally displaced (termed transverse chromatic aberration, TCA) [10].

The magnitude of LCA is relatively consistent across the population [9,11] and across the field of view [12,13]; thus it can be corrected sufficiently for most eyes by adjusting the relative vergence between beams of different wavelengths. In an AOSLO system, this is currently achieved by an axial displacement of the light sources’ entry points.

With the light sources set at different axial distances, a transverse chromatic offset (TCO) is induced in the eye. These lateral offsets of the focus positions on the retina have two origins. The first is an imperfect coaxial alignment of the two (or more) beams. The second is of ocular origin and closely related to TCA: when the eye is moved laterally in front of the displaced beams, their focus locations move laterally on the retina as a linear function of eye position, akin to a chromatic parallax. The magnitude of this offset is identical to TCA [14].

In an AOSLO, TCO can be directly measured by spatially registering simultaneously recorded retinal images of the two (or more) wavelengths. This method allows a determination of TCO on the order of arcseconds but requires high light intensities that massively bleach the photopigment and lead to strong fluctuations in retinal adaptation [14]. Therefore, continuous or repeated measurements of TCO during psychophysical sessions are unfeasible.

In earlier studies, TCO was thus statically compensated by assuming no lateral eye motion between measurements before and after an experimental run. Residual head and eye movement were thought to displace the stimulus within the cone’s inner segment diameter at the targeted eccentricities (about 1.2 arcmin at 2 degree eccentricity) [1,3,5,6]. With our eye tracking setup, we observed residual head motion during experiments that would change TCO in the range of ± 0.65 arcmin up to ± 1.27 arcmin. AOSLO-based microperimetry would therefore require an eye position monitoring system that limits stimulus delivery to a certain range of eye positions. For stimulation of the smallest cones at the foveal center, or of rods, however, such a static approach is not practical, because the range of allowed eye positions would be too small.

A chromatic eye model by Thibos et al. [15–17] predicts a simple linear correlation between TCO and eye position offsets in front of the AOSLO beams, depending on the wavelengths, described by:

$$\mathrm{TCA} \approx \mathrm{EyeDisplacement} \times \mathrm{LCA} \tag{1}$$
This relationship was experimentally confirmed in previous studies [14,18] and applied to a method to infer TCO from pupil position, yet without focusing on the precision necessary to target single cones [19]. Here, we employ high-resolution eye tracking to demonstrate that transverse chromatic offsets can be compensated continuously and in real time to ensure cell-sized precision during single-cell stimulation of cones of the central fovea or rods.
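
To illustrate the magnitude predicted by Eq. (1), the following minimal Python sketch (not part of the original study) converts an assumed LCA of 1.00 diopter between 543 nm and 840 nm into the expected TCO change per millimeter of lateral eye displacement:

```python
import math

def expected_tco_slope_arcmin_per_mm(lca_diopters):
    """Eq. (1): a lateral eye displacement d (in meters) in front of the
    beams changes TCA by roughly d * LCA (in radians), so the slope in
    radians per meter equals the LCA in diopters (1/m)."""
    rad_per_mm = lca_diopters * 1e-3          # radians of TCA per mm of shift
    return math.degrees(rad_per_mm) * 60.0    # convert radians to arcmin

# Assumed LCA of 1.00 D between 543 nm and 840 nm (illustrative value)
print(expected_tco_slope_arcmin_per_mm(1.00))  # ~3.44 arcmin/mm
```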

2. Materials and methods

To map lateral eye position changes to transverse chromatic offset changes, we employed an adaptive optics scanning laser ophthalmoscope (AOSLO) to measure image-based retinal TCO [14] during controlled movements of the head and simultaneous eye tracking. Following such calibration, eye tracking-based TCO estimates were validated in a psychophysical experiment. TCO calibrations were performed in fourteen participants (9 female, 5 male) with no known vision abnormalities. Three participants (1 female, 2 male) took part in the subjective experiments. Mydriasis and cycloplegia were induced by instilling one drop of 1% Tropicamide 15 min before the beginning of a session. For each participant, a holder fixed to a custom dental impression (bite bar) was used to immobilize and control the position of the head during imaging. Written informed consent was obtained from each participant and all experimental procedures adhered to the tenets of the Declaration of Helsinki, in accordance with the guidelines of the independent ethics committee of the medical faculty at the Rheinische Friedrich-Wilhelms-Universität Bonn.

2.1 Adaptive optics scanning laser ophthalmoscope (AOSLO)

A detailed description of our multi-wavelength AOSLO system is given in [20] with relevant details specified below.

The adaptive optics component consists of a Shack-Hartmann wavefront sensor (SHSCam AR-S-150-GE, Optocraft GmbH, Erlangen, Germany) and a deformable mirror (DM97, ALPAO, Montbonnot-Saint-Martin, France) and was operated in closed-loop mode. The two wavelengths used (840 ± 12 nm and 543 ± 22 nm) were filtered from the spectrum of a supercontinuum light source (SuperK EXTREME, NKT Photonics, Birkerød, Denmark). The 840 nm light was used for wavefront sensing and imaging; the 543 nm (green) light was used for TCO determination and stimulation. Both channels were raster-scanned onto the retina with a beam size of about 7 mm at the pupil and generated a square field of 0.85 x 0.85 degrees of visual angle (corresponding to a sampling resolution of 0.10 arcmin per pixel). Backscattered light from the retina was detected in a confocal setup (pinhole sizes were 20 µm at 840 nm and 30 µm at 543 nm) with a photomultiplier tube (Photosensor module H7422, Hamamatsu Photonics, Hamamatsu, Japan).

2.2 Eye tracking camera

In order to accurately track the position of a participant’s eye in front of the AOSLO beam, we mounted a video camera coaxially to the AOSLO imaging and stimulation beams by means of a cold mirror (DMLP900L, Thorlabs, Munich, Germany) (Fig. 1). The camera was a 752 x 480 pixel CMOS sensor (DMK 23UV024, The Imaging Source, Bremen, Germany) with a 50 mm, f/1.26 objective lens (TCL 5026 5MP, The Imaging Source). A central 640 x 480 pixel subfield of the sensor was used in our custom eye tracking software. Camera focus was set to a fixed working distance of 350 mm. By judging the sharpness of the pupil image within the eye tracking user interface, eyes could be positioned at an approximate distance coinciding with a conjugate pupil plane of the AOSLO beam. Image magnification of the eye tracker was 0.030 mm per image pixel in the pupil plane. As a result of back-scattered light from the imaging beam (7.2 mm diameter), the camera captured an image of the retro-illuminated pupil on an otherwise dark background (Fig. 1(C, D)). The imaging beam also produced a bright visible reflection on the front surface of the cornea, the first Purkinje image, referred to as Purkinje image in the following. The bright image pixels of the Purkinje image were used to track the eye's lateral location. Because of the coaxiality between camera and illumination, we used the horizontal offset between the pupil’s center and the Purkinje image to calculate the horizontal component of angle Kappa, κ (the angular subtense between the visual and pupillary axes), as follows:

$$\kappa_x = \arctan\!\left(\frac{PI_x - PC_x}{\overline{PN}}\right) \tag{2}$$
where PIx is the horizontal Purkinje image coordinate, PCx the horizontal pupil center coordinate, and PN the distance from the entrance pupil to the nodal point, assumed to be 4 mm [21]. The vertical component of Kappa was calculated in the same way. Additionally, we calculated the Hirschberg ratio (the degrees of eye rotation per millimeter of displacement of the Purkinje image relative to the pupil center) in four participants during controlled gaze shifts. The average Hirschberg ratio across the four participants was 13.3 ± 0.7 °/mm, which corresponds to an average PN of 4.2 mm. The individual values of PN were used to calculate each participant’s Kappa more precisely and to estimate TCO based on LCA and Kappa.
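
The following is a minimal Python sketch of Eq. (2), using the camera scale of 0.030 mm per pixel stated above and the assumed nodal point distance of 4 mm; the coordinate values in the usage example are hypothetical:

```python
import math

MM_PER_PIXEL = 0.030   # eye tracker magnification in the pupil plane
PN_MM = 4.0            # assumed entrance-pupil-to-nodal-point distance [21]

def kappa_deg(purkinje_px, pupil_center_px, mm_per_pixel=MM_PER_PIXEL, pn_mm=PN_MM):
    """Horizontal (or vertical) component of angle Kappa, Eq. (2), from the
    offset between Purkinje image and pupil center in camera pixels."""
    offset_mm = (purkinje_px - pupil_center_px) * mm_per_pixel
    return math.degrees(math.atan2(offset_mm, pn_mm))

# Hypothetical coordinates: Purkinje image 10 pixels off the pupil center
print(kappa_deg(330.0, 320.0))  # ~4.3 degrees for a 0.3 mm offset
```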


Fig. 1 On-axis eye tracker with AOSLO as light source. (A) Photograph of the physical implementation from a top-oblique angle. The pink AOSLO beam is drawn for illustrative purposes and was not visible. (B) Side-view, to scale. The participant’s head could be moved via XYZ-microdrives attached to a bite bar. (C) Using the 840 nm beam of the AOSLO as light source, retinally back-scattered light illuminates the pupil from within the eyeball. The first Purkinje image was used to track the eye’s position relative to the AOSLO beam. “N” marks the first Nodal point of the eye and defines the intersection between pupillary and visual axes. Note that the camera is aligned with the visual axis. (D) Pupil and Purkinje image could be tracked with high precision (1 image pixel equaled 30 μm in the pupil plane). During operation, digital overlays could be displayed to aid positioning during calibration.


2.3 Eye tracking software

A custom C++ program based on previously described algorithms [22,23] was written to display, track, and measure image features visualized by the eye tracking camera (Fig. 1(D)). Each of the following metrics was computed for every new frame at a refresh rate of 29.87 Hz. The pupil was fitted by a circle to determine its center and diameter. The Purkinje image coordinate was specified by the center of a second circle fit to the brightest pixels of the corneal reflection. Pixel detection thresholds for the circle fits were adjusted manually at the beginning of each session to ensure the best fits at the current image quality. To assist eye alignment, an AOSLO beam indicator marking the perimeter of the imaging beam was added. The AOSLO beam position was captured at the beginning of every session from the camera image of a white paper placed at the pupil position and illuminated by the imaging beam. While running a TCO calibration sequence, all data were written to a log file containing frame number, time stamp, pupil center coordinates, Purkinje center coordinates, and pupil diameter.
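
The actual tracking software is the custom C++ program with circle fits described above [22,23]; as a rough illustration of the per-frame metrics, the following Python sketch substitutes simple centroid estimates and an equivalent-diameter calculation, with purely illustrative thresholds:

```python
import numpy as np

def track_frame(frame, pupil_thresh=40, purkinje_thresh=220):
    """Per-frame tracking metrics on a grayscale camera image (2D array):
    centroid and equivalent diameter of the retro-illuminated pupil (bright
    blob on a dark background) and centroid of the brightest pixels as the
    Purkinje image position. Thresholds are illustrative only and would be
    set manually at the start of each session."""
    pupil_mask = frame > pupil_thresh
    purkinje_mask = frame > purkinje_thresh

    ys, xs = np.nonzero(pupil_mask)
    pupil_center = (xs.mean(), ys.mean())
    # Diameter of a circle with the same area as the pupil mask (a stand-in
    # for the circle fit used in the actual software)
    pupil_diameter = 2.0 * np.sqrt(pupil_mask.sum() / np.pi)

    ys, xs = np.nonzero(purkinje_mask)
    purkinje_center = (xs.mean(), ys.mean())
    return pupil_center, pupil_diameter, purkinje_center
```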

2.4 TCO measurement

A detailed description of the method to objectively measure TCO with an AOSLO is given in [14]. In short, a video is recorded containing interleaved information of both imaging and stimulation light channels (here, 840 nm and 543 nm) in a subfield of the imaging raster (192 x 128 pixels). Images recorded in both channels were spatially cross-correlated frame by frame to compute the positional shift between the imaging light and stimulation light on the retina. Because this method is image-based, it requires video footage with resolved retinal structure, which can be a limiting factor at the central fovea. As a side note, we observed that image structure moved when the confocal pinhole in front of the light detector was moved. As a consequence, TCO readings change although nothing changed on the retina, so an uncorrected, displaced pinhole leads to an over- or underestimation of TCO. To control for this additional system-related chromatic offset, we centered the confocal pinhole position on the beam’s point spread function by optimizing image brightness and contrast ahead of each session.
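
As a minimal Python sketch of the underlying idea, the frame-by-frame shift between the two channels can be estimated from the peak of an FFT-based cross-correlation; this recovers integer-pixel shifts only, whereas the published image-based method [14] achieves subpixel precision:

```python
import numpy as np

def channel_shift(img_ir, img_green):
    """Estimate the (dy, dx) shift of the 543 nm frame relative to the 840 nm
    frame from the peak of their FFT-based cross-correlation. Integer-pixel
    precision only; 1 pixel corresponds to about 0.1 arcmin here."""
    a = img_ir.astype(float) - img_ir.mean()
    b = img_green.astype(float) - img_green.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap peak coordinates into signed shifts (the correlation is circular)
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, xcorr.shape))
```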

2.5 Calibration procedure

To correlate image-based measurements of TCO with eye position changes, a 20 sec dynamic calibration sequence was performed. During recording, the head (and therefore the eye) was moved in a somewhat random pattern with an extent of about ± 0.2 mm in each direction by manually turning the knobs of an XY micrometer stage attached to the bite bar. Meanwhile, the participant fixated a target, ensuring video acquisition of the same retinal location, which was at ~0.4 degree eccentricity. Reminiscent of a clapperboard in the film industry, we modulated the AOSLO beam through three quick full on and off cycles to flash the imaging beam at the beginning and at the end of the recording, a signal which could be accurately assigned to a single frame in both data streams. TCO video data and eye tracking data were processed offline with custom Matlab (Mathworks, Inc., Natick, MA, USA) software in three steps (Fig. 2):

  • (1) Synchronization and interpolation: the two independent data streams were synchronized based on the flash sequence flagging “start” and “end”. Because of slightly differing frame rates (TCO: 30.00 Hz, eye tracker: 29.87 Hz), TCO data were downsampled via linear interpolation to assign an individual TCO data sample to each Purkinje image position (Fig. 2(A)).
  • (2) Image noise removal: changes between consecutive TCO samples larger than 2 pixels (threshold determined empirically) were regarded as image noise and removed from further analysis (Fig. 2(B)). Single below-threshold samples lying immediately adjacent to blocks of samples flagged as noise were also removed.
  • (3) TCO-tracking correlation: the linear regression between TCO and eye tracking data was computed for both the horizontal and vertical components and then used to estimate TCO from eye position. The linear regression (for horizontal eye shifts) was of the form
    $$TCO_x(PI_x) = m \, PI_x + b \tag{3}$$
where PIx is the Purkinje image x-coordinate, m and b are the slope and constant offset of the linear regression, and TCOx is the resulting retinal image offset (Fig. 2(C)). The estimate for the vertical component was computed accordingly (a condensed processing sketch is given below).
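
A condensed Python sketch of these three steps (the original analysis was performed in custom Matlab code; array inputs are assumed and the block-neighborhood rule of step (2) is omitted for brevity):

```python
import numpy as np

def calibrate(tco_t, tco_px, eye_t, purkinje_px, noise_thresh_px=2.0):
    """(1) Resample the TCO trace onto the eye tracker time base, (2) drop
    samples whose frame-to-frame change exceeds the noise threshold, and
    (3) fit TCO(PI) = m * PI + b for one spatial component.
    All inputs are 1D numpy arrays spanning one calibration sequence."""
    # (1) Synchronization/interpolation: one TCO sample per Purkinje sample
    tco = np.interp(eye_t, tco_t, tco_px)

    # (2) Noise removal: flag frames with excessive TCO jumps
    jump = np.abs(np.diff(tco, prepend=tco[0]))
    clean = jump <= noise_thresh_px

    # (3) Linear regression of TCO against Purkinje image position (Eq. (3))
    m, b = np.polyfit(purkinje_px[clean], tco[clean], 1)
    return m, b, clean.mean()  # slope, offset, fraction of usable samples
```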


Fig. 2 Calibration processing steps. TCO and eye tracking data were recorded simultaneously while the operator moves the participant’s eye in front of the system. (A) For synchronization, three quick full on and off cycles flashed the imaging beam (before and after vertical red lines). Due to slight differences in sampling rate, TCO data was down-sampled via linear interpolation. (B) Larger TCO data excursions (grey) were removed by thresholding for frame by frame TCO sample changes, ∆ TCO (bottom graph, red line marks 2-pixel-threshold). (C) Computation of the correlation between TCO and Purkinje image position by least-squares linear fit. The resulting function (shown as inset) was used to continuously estimate TCO based on eye tracking data.


2.6 Experimental validation

For an objective validation of our eye tracking-based TCO estimation, we consecutively recorded 20 calibration sequences in one eye. We then took the linear regression function found in the first sequence and, for all following sequences, compared the measured TCO at each eye position with the TCO estimated by entering the eye position data into the calibration function. This was repeated for all 20 runs. Precision was estimated by using the x and y components of each frame’s error as a coordinate in a two-dimensional displacement histogram. Repeatability of the procedure was assessed by evaluating Eq. (3) at a single (average) eye position for each calibration and comparing the resulting TCO coordinates across all 20 calibrations.
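
The following Python sketch outlines both analyses under an assumed data layout (one dictionary per calibration holding per-frame eye positions, measured TCO, and the fitted slope and offset per axis); it is an illustration, not the original analysis code:

```python
import numpy as np
from itertools import combinations

def pairwise_precision(calibs, q=0.95):
    """Cross-validate calibrations: TCO of run j is estimated with the fit
    of run i and compared to its measured TCO; the q-quantile of the
    absolute error per axis approximates the precision bound."""
    errs = {"x": [], "y": []}
    for i, j in combinations(range(len(calibs)), 2):
        for axis in ("x", "y"):
            m, b = calibs[i]["fit"][axis]
            est = m * calibs[j]["eye"][axis] + b
            errs[axis].extend(np.abs(est - calibs[j]["tco"][axis]))
    return np.quantile(errs["x"], q), np.quantile(errs["y"], q)

def repeatability(calibs, mean_eye_xy):
    """Evaluate each calibration function (Eq. (3)) at one average eye
    position and return the standard deviation of the TCO estimates."""
    tco = np.array([[c["fit"]["x"][0] * mean_eye_xy[0] + c["fit"]["x"][1],
                     c["fit"]["y"][0] * mean_eye_xy[1] + c["fit"]["y"][1]]
                    for c in calibs])
    return tco.std(axis=0)  # (std_x, std_y)
```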

We validated TCO estimation in a psychophysical experiment with three participants. The participant’s task was to manually align their head in front of the AOSLO beam such that the features of a two-color centroiding stimulus produced in the AOSLO imaging raster were aligned. The stimulus feature offsets (a 2.3 arcmin dot of 543 nm light shown within a black target within the 840 nm imaging raster) were randomly chosen from sixteen offsets in a 4-by-4 grid with 0.4 arcmin spacing. Each offset was presented 6 times, for a total of 96 trials of subjective alignments. With the push of a button, the participant confirmed correct alignment; the current Purkinje image location was stored and a new stimulus offset was presented. The eye position predicted from the TCO calibration function and the applied chromatic stimulus offset was compared with the actual eye position at each trial.

3. Results

We performed TCO eye tracking calibrations in a total of 14 eyes, with multiple (3-20x) repetitions in all eyes, producing a total of 62 calibration sequences. We found that in order to collect sufficient data points (of both image-based TCO and eye tracking data) during the 20 sec calibration, the most efficient head movement pattern resembled a square or circle. Figure 3 shows four example calibration sequences from three eyes with the produced raw data and subsequent regression analyses. To review calibration success, three different metrics were displayed after each calibration sequence, which also guided repeat decisions if a sequence had failed (e.g. due to insufficient image quality):


Fig. 3 Calibration sequences recorded in three participants (P1, P2, P3). Top row: Captured Purkinje image positions during calibration. 2nd row: TCO data of the same calibration sequence, based on video data recorded at 0.4 degree eccentricity. 3rd and 4th row: Linear regression for horizontal and vertical eye shifts, respectively. Percent samples used (U), range of data points (Q80) and goodness of fit (R2) are metrics chosen to support the operator’s decision whether a calibration has to be repeated (an example is shown in the 4th column). Black data points are the usable samples after interpolation and removing data noise, grey data points represent the data samples flagged as noise (see Methods for details).


  • (1) Percentage of clean data samples,
  • (2) Eye movement extent in both horizontal and vertical direction given as distance of the center 80% quantiles of all clean location samples (Q80),
  • (3) Coefficient of determination of the linear regression to the data (R2).

When image quality decreased, the TCO computation algorithm produced an increasing number of noisy data samples (Fig. 3, 4th column). We found that in order to ensure a reliable calibration, the percentage of used samples should exceed 33%. The second metric, Q80, should exceed 0.3 mm in both vertical and horizontal directions for a successful calibration (see an example of this metric taking effect in Fig. 3, last column). The third metric, R2, was mostly greater than 0.96, and calibration sequences with R2 > 0.90 were considered sufficiently accurate.
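
As an illustration, the three metrics could be computed as follows; this Python sketch assumes array inputs and interprets Q80 as the distance between the 10% and 90% location quantiles:

```python
import numpy as np

def calibration_metrics(purkinje_mm, tco_arcmin, clean_mask, m, b):
    """Per-sequence quality metrics for one spatial component: fraction of
    usable samples (U), movement extent between the 10% and 90% location
    quantiles (Q80), and the coefficient of determination (R^2) of the fit."""
    used = clean_mask.mean()                            # U, target > 0.33
    p = purkinje_mm[clean_mask]
    q80 = np.quantile(p, 0.9) - np.quantile(p, 0.1)     # Q80, target > 0.3 mm
    y = tco_arcmin[clean_mask]
    y_hat = m * p + b
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # target > 0.90
    return used, q80, r2
```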

Based on a chromatic model eye describing the relationship between LCA, TCA and eye position [17], we expected the correlation between lateral eye position and TCO to be linear (Eq. (1)). We found that the slope, m, of the linear regression was essentially constant across eyes but differed between horizontal and vertical shifts. Considering the median slope of three consecutively recorded calibrations in each eye, the mean slope across subjects was 3.55 ± 0.08 arcmin/mm for horizontal eye shifts and 3.43 ± 0.12 arcmin/mm for vertical eye shifts (Fig. 4(A)). When pairing horizontal and vertical data for each eye, this difference was significant (p = 0.02, Wilcoxon signed rank test). The absolute offset, b, of the linear regression varied across participants, reflecting an idiosyncratic component of TCA (Fig. 4(B)). Because we also tracked the pupil’s center during calibration, we could compute TCO values for when the pupil was computationally centered on the beam (Fig. 4(C)).


Fig. 4 TCO data across 14 eyes of 14 participants. (A) Boxplot of the median calibration slope for each eye (3 repeats each). The average horizontal correlation slope across eyes was 3.55 ± 0.08 arcmin/mm, the average vertical correlation slope 3.43 ± 0.12 arcmin/mm. This difference between horizontal and vertical slope was significant (p = 0.02, Wilcoxon signed rank test). (B) Calculated TCO for each run with a Purkinje position centered on the AOSLO beam. (C) Calculated TCO for a centered pupil position. The one left eye included in this data set shows an absolute TCO with inverted sign (green diamonds). (D) Angle Kappa for all eyes. (E) Correlation of horizontal Kappa and horizontal TCO of the centered pupil. (F) Correlation of vertical Kappa and vertical TCO of the centered pupil. For plots (B-F), vertical and horizontal bars mark the 0.5 quantile of the calibration function (TCO) and the standard deviation (Kappa). Different symbols mark different eyes. If no error bar is visible, the error is smaller than the symbol.


For a centered Purkinje position, individual TCOs were positioned close to each other, with an extent of 0.8 arcmin from lowest to highest value for both dimensions. The range of individual TCO values broadened when calculated for a centered pupil: TCO values for the right eyes spread within a range of 1.5 arcmin. TCO for the one left eye tested had the opposite sign, as expected. Finally, we calculated angle Kappa for each participant (Fig. 4(D)) and found a strong correlation between the TCO calculated for the centered pupil position and Kappa (Fig. 4(E, F)). The linear regression of horizontal TCO with Kappa was defined by a slope of m = −0.22 arcmin/deg (R2 = 0.98) and m = 0.18 arcmin/deg (R2 = 0.75) for the vertical dimension.

To estimate the measurement error of our eye tracking based method, we compared 20 consecutively recorded calibration sequences of one eye with each other (Fig. 5). The 0.9 quantile of the framewise difference between estimated and measured TCO was 1.04 pixels (0.10 arcmin) on the horizontal and 0.66 pixels (0.07 arcmin) on the vertical axis (Fig. 5(A, B)). Exceptions were single events, presumably due to eye blinks or bad image quality, which could be as high as 2 pixels (0.2 arcmin) (Fig. 5(B)). Comparing each data set with every other in all possible non-redundant pairwise combinations yielded 190 error sets containing a total of 88,266 error frames (Fig. 5(C)). As an estimate of precision we calculated the area containing 95% of all data points in a plot of the positional error between estimated and measured TCO in both spatial dimensions. The extent of this area was ± 0.15 arcmin along the horizontal and ± 0.12 arcmin along the vertical axis. The area containing 50% of all positional errors had a width of ± 0.07 arcmin and a height of ± 0.05 arcmin. To analyze the repeatability of the measurement, we first determined the average eye position across all 20 calibrations. This value was then entered into each calibration function to calculate the TCO estimate for this position (Fig. 5(D)). The standard deviation of these 20 TCO estimates was 0.022 arcmin on the horizontal and 0.024 arcmin on the vertical axis. Across the three repeated calibrations in 14 participants we observed a median standard deviation for absolute TCO estimates with a centered Purkinje image of 0.029 arcmin (min = 0.012; max = 0.062) on the horizontal and 0.038 arcmin (min = 0.013; max = 0.070) on the vertical axis (Fig. 4(B)).


Fig. 5 Error estimation. (A) Exemplary comparison between measured TCO (black) and estimated TCO (red) following a single calibration sequence. Grey areas mark examples where image quality did not allow TCO measurement. (B) Error of TCO estimation over time. Average ( ± std) estimation error was 0.05 ± 0.04 arcmin. Single events may exceed an estimation error of 1 pixel (1 pixel = 0.1 arcmin). (C) To determine TCO estimation precision, 20 consecutive calibration sequences were validated against each other. The framewise displacement error was plotted in both spatial dimensions with one-tenth pixel resolution (1 block = 0.01 arcmin). 95% of all displacement errors were within ± 0.15 arcmin (horizontal) and ± 0.12 arcmin (vertical). (D) Repeatability was tested by using the average eye position in the 20 calibration functions (open circles). The standard deviation across all data is visualized via the extent of the horizontal and vertical bars (STDx = 0.022 arcmin/ STDy = 0.024 arcmin).


Finally, a subjective validation experiment addressed the psychophysical component of the TCO estimation. Three participants were asked to adjust their head position in front of the AOSLO beams to align a centroiding stimulus containing controlled chromatic offsets (Fig. 6(A)). TCO estimation quality was assessed by comparing the applied chromatic stimulus offset with the TCO estimated from the participant’s self-adjusted eye position (Fig. 6(B)). The average difference between the estimated TCO and the actually applied offset was 2.49, 2.96, and 2.95 pixels (0.25, 0.30 and 0.30 arcmin) for P1, P2, and P3, respectively. Each participant showed an individual alignment strategy, resulting in clustering of certain displacement offsets. When pooling data across participants, no systematic direction of displacement errors was evident.


Fig. 6 Subjective validation of our eye tracking based TCO estimation approach. (A) AOSLO raster with stimulus to scale. The task was to center the green circle within the red notches. (B) Results from all 3 participants in the same scale as the zoomed stimulus in (A) (white scale bar = 2 arcmin). The mean difference between predicted position from estimation and subjective alignment was 2.49, 2.96, and 2.95 pixels (0.25, 0.30 and 0.30 arcmin) for P1, P2, and P3, respectively. There was no systematic direction for displacement errors. The enlargement below each panel in (B) displays the pixelwise TCO correction errors for each participant. The white cross marks the zero difference position.


4. Discussion

Here, we present a video-based eye tracking approach to estimate transverse chromatic offsets (TCO) in an adaptive optics scanning laser ophthalmoscope (AOSLO) retinal microstimulator. The ultimate goal of this work was to compensate system- and eye-inherent chromatic offsets with a precision enabling targeting and stimulation of the smallest retinal photoreceptors, foveal cones and rods, without the need for direct and continuous TCO measurement, while at the same time allowing small head (and eye) movements to occur. By using a corneal reflection of the AOSLO beam as an eye position beacon, we achieved TCO estimation with low noise, high repeatability and high spatial precision. We here discuss our principle of Purkinje image based positional tracking, the relationship between transverse chromatic aberration (TCA), TCO, and the angle Kappa of the eye, and finally its usability and application for single photoreceptor stimulation.

Many eye trackers use pupil coordinates such as its center to report absolute eye position, and the pupil center has been shown to correlate with objective measures of TCA [14,18,19]. For precise estimates of TCA, however, the location of the TCA-inducing beams relative to the eye’s optics, and not its pupil, is more relevant [24]. In general, correlations of pupil position with chromatic offsets were accurate enough to allow for TCO compensation. A direct comparison between pupil and Purkinje based TCO estimation over time (2 h) revealed an average estimation error of up to 0.33 arcmin and 0.15 arcmin for pupil and Purkinje tracking, respectively (data not shown). Such an estimation error would be sufficient when targeting single cones at an eccentricity of 2 degrees or further on the human retina, but too high for foveal photoreceptor stimulation (cone aperture size = 0.25 arcmin). We assume that the observed increase of estimation error is due to the fact that the axis defined by the pupil center can shift with respect to the visual axis, e.g. during pupil constriction and dilation [25]. Typical pupil trackers determine the pupil’s center by applying a circle fit, which is sensitive to errors because the circularity of the pupil changes with pupil size. Even during drug induced mydriasis, pupil size changes asymmetrically [26], and this is usually not controlled for during microstimulation experiments. To overcome these deficiencies, our approach uses the first Purkinje image, the reflection of the AOSLO beam on the front surface of the cornea, as the tracking signal for eye translation changes. The location of this reflection depends on both the absolute position of the eye relative to the light source and camera as well as the rotational state of the eye. In combination with the positional changes of the pupil center during eye rotation, it can therefore also be used as a gaze signal [27].

In our setup, with a static camera coaxially aligned to the light source, lateral head and eye shifts as well as rotations of the eyeball will move the Purkinje image. Purkinje image movement caused by gaze shifts could induce an error in our TCO estimation, because TCO changes with retinal eccentricity [28]. Eyeball rotation occurs in two different cases, and in both we used the Hirschberg ratio to estimate the magnitude of rotation-induced errors. The individual Hirschberg ratios of four eyes measured with our eye tracker were 12.2 degree/mm, 13.5 degree/mm, 13.6 degree/mm, and 13.7 degree/mm (average 13.3 degree/mm), in good accordance with values found in the literature [22,29]. In the first case, gaze changes occur due to the lateral head motion during calibration while the participant maintains visual fixation of a static target. Shifting the head by 0.4 mm in one direction will induce a rotation of the eyeball of 0.05 degree. This angle will move the Purkinje image by about 0.0038 mm, or 1/10 of a pixel in the camera image, and is thus negligible. In the second case, fixational eye movements during calibration or later compensation will shift the Purkinje image at a ratio of about 2.5 pixels per degree of gaze shift. Hence, to displace the Purkinje image by a full pixel the participant would have to rotate the eyeball by 0.4 degree. Typical fixational eye movements such as tremor and drift have amplitudes of about 0.01 degree and 0.1 degree, respectively [30]. Thus, only microsaccades could move the Purkinje image with detectable amplitudes. Because the AOSLO cannot correct stimulus locations for the fast changes in retinal location during a microsaccade, however, TCO compensation would not be needed in that situation.
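
These error estimates can be reproduced from the numbers given above (a back-of-the-envelope Python sketch):

```python
# Rotation-induced tracking errors, using the average Hirschberg ratio and
# camera scale reported in the text.
HIRSCHBERG_DEG_PER_MM = 13.3   # degrees of eye rotation per mm of Purkinje shift
MM_PER_PIXEL = 0.030           # eye tracker scale in the pupil plane

mm_per_deg = 1.0 / HIRSCHBERG_DEG_PER_MM       # ~0.075 mm Purkinje motion per degree
px_per_deg = mm_per_deg / MM_PER_PIXEL         # ~2.5 pixels per degree of gaze shift

# Case 1: a 0.4 mm head shift during calibration induces ~0.05 degree of rotation
print(0.05 * px_per_deg)       # ~0.13 pixel, i.e. negligible

# Case 2: rotation needed to displace the Purkinje image by one full pixel
print(1.0 / px_per_deg)        # ~0.4 degree, i.e. only microsaccades matter
```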

As hypothesized by a theoretical chromatic eye model, the relationship between ocular TCA and small displacements of eye position (< 2 mm) ought to be linear and dependent on wavelength [15–17]. We found an average slope of 3.55 arcmin of horizontal TCO change per mm of eye shift, and 3.43 arcmin/mm for vertical TCO. This finding is in accordance with earlier reports (3 arcmin/mm, 3.5 arcmin/mm, and 3.86 arcmin/mm [14,18,19]), and matches the chromatic eye model, which predicts a slope of 3.44 arcmin/mm for an LCA of 1.00 diopter when 840 nm and 543 nm light is used [15–17]. The residual variations we observed might be caused by individual differences in LCA. In our group of 14 participants, the observed difference between slopes in the horizontal and vertical directions was significant, an observation also made in a similar study [19]. Following Eq. (1), such a difference could be caused by differences in optical dispersion along the horizontal and vertical dimensions of the eye. Although this kind of dispersion anisotropy has not been reported before, it could be hypothesized that the dispersive characteristics of the eye differ between horizontal and vertical meridians, because the curvature and therefore the refractive power of the cornea differ between those meridians [31,32].

We determined absolute TCO values in 14 participants with a centered Purkinje image (owing to the fact that our light source, visual axes and camera are all coaxially aligned), and found values ranging within 0.6 arcmin along the horizontal and 0.8 arcmin along the vertical dimension. These ranges are clearly smaller than the ranges of absolute foveal TCA reported by previous studies: about 1 arcmin [24,28], 2 arcmin [15,33], 4 arcmin [17] and 8 arcmin [34]. In all these studies, TCA was determined with a centered pupil. The range of absolute TCO for a centered pupil position in our study spanned 1.4 arcmin and 1.6 arcmin in the horizontal and vertical dimensions, including only right eyes. The narrower range of absolute TCO for a centered Purkinje position, including the single left eye, can be explained by the fact that the foveal achromatic axis and the visual axis are closely related [21], and that our eye tracking camera was aligned with the visual axis. The remaining scatter could be caused by individual differences of the eye’s optics. That the TCO values with a centered Purkinje image do not vary around zero may be due to a minimal alignment difference between the imaging and stimulation light in the light delivery path, or, since the TCO measurement is image based, to the IR and green detector positions.

To study the relationship between the visual, pupillary and achromatic axes in more detail, we also measured individual Kappa angles in all eyes. The measured TCO values for a centered pupil were linearly correlated with Kappa (R2 = 0.98 for horizontal, R2 = 0.75 for vertical Kappa). This finding supports the chromatic eye model, where TCA is correlated with the angle Psi, ψ (defined, like Kappa here, by pupil displacement and the nodal point distance from the entrance pupil) [15,17]. This experimental confirmation has important implications for all ophthalmic applications in which TCA needs to be measured or corrected. First, we confirmed that the slope, m, of TCA per pupil displacement can be directly derived from individual LCA values of the eye. Second, if beams are centered in the pupil, a precise measurement of Kappa can serve as a direct measure of foveal TCA, once their relationship has been established in the system used, determining the offset b (Eq. (3)). We tested this in four participants by recalculating Kappa with the individual nodal point distance obtained from the participant’s Hirschberg ratio. In that situation, with the TCO estimation based solely on Purkinje image and pupil center data, we would have made an average error of 0.13 ± 0.04 arcmin for horizontal TCO and 0.14 ± 0.07 arcmin for vertical TCO. Thus, a direct calibration as presented here is required only if spatial precision on the order of single cones of the central fovea or rods is desired. Other applications, such as functional testing of pathologic areas of the retina in patients where a direct TCO measurement is unfeasible or unsuitable due to the high light intensities required, could utilize real-time TCO compensation simply by determining Kappa [7,8].

One of the important goals of the eye tracking based estimation of TCO was to increase estimation precision to the single foveal cone level. The cones of the central fovea are the smallest photoreceptor cells of the retina, with an inner segment diameter of about 30 arcsec [35], comparable to the diameter of rods. With larger eccentricity, cones increase in diameter, so positional errors caused by a participant’s residual head movements become less critical (Fig. 7). Typically, participants show individual amounts of residual head shifts. We determined in a few observers that, during experiments with a static TCO compensation method, where TCO is measured before and after an experiment without any ongoing monitoring, stimuli can be displaced by up to 2 arcmin (Fig. 7, top row). To visualize the impact of our proposed TCO compensation method, the precision of ± 0.15 arcmin determined here for our system was used to exemplarily shift the target position on a retinal image, simulating the noise of the ongoing compensation. For an experiment targeting individual cones of the central retina or rods, a maximal positional error of a single pixel, or 6 arcsec, is tolerable, a requirement met by about 75% of the TCO compensated positions. At 0.4 degree eccentricity, positional errors up to 12 arcsec would be acceptable. This criterion was met by all (100%) estimated TCO values in our study. Additionally, variability across three consecutively recorded calibrations in the 14 tested participants fell well within 0.1 arcmin, which was lower than the observed single frame measurement noise (Fig. 4(B) and 5).


Fig. 7 Demonstration of TCO compensation error at different retinal eccentricities. Top: Enlarged view of a typical stimulation site in AOSLO based experiments with current static TCO compensation. Overlay shows actual light delivery based on minimal head shifts in front of the system recorded by our eye tracking system during experiments in three participants (P1-3). Middle: AOSLO image montage encompassing the central fovea, asterisk marks the preferred retinal location of fixation, rings denote eccentricity. Cone size increases rapidly with increasing eccentricity. Bottom: Enlarged view of the smallest cones in the central fovea, 0.4 degree and 0.9 degree eccentricity. Overlays display theoretical stimulus delivery with ongoing TCO compensation as presented in this study. Our typical cell-sized square stimulus spans 3 pixels (diffraction limited FWHM, 0.3 arcmin).


In the subjective validation experiment, participants made an average error of 0.30 arcmin, corresponding to twice the objectively determined precision, and smaller than the expected Nyquist limit of 0.43 arcmin for a foveal cone density of 200,000 cells/mm2 [36], but slightly higher than the typically found average Vernier acuity of 0.19 arcmin [37] and 0.25 arcmin [38]. Because we did not observe any systematic displacements across the three participants, we conclude that the subjective validation confirmed the current calibration process. However, comparable to other eye tracker setups, we plan to include a brief subjective validation routine of the calibration before running single cell stimulation experiments to detect systematic mismatches between subjective and estimated TCO. These small chromatic offsets could occur, for example, due to alignment changes in the AOSLO setup.

As a practical note, for compensation of TCO during microstimulation experiments we currently implement a per-trial approach. The stimulus is shifted according to the current TCO by converting the offset into a temporal signal, switching the light on and off at the corresponding scanning raster positions (see [39] for more details). Because our eye tracking software is a standalone solution, its data has to be transmitted to our stimulation software, which runs in a Matlab environment. Purkinje image coordinates are sent from the eye tracker software to Matlab on request, triggered by the participant’s keystroke that also triggers stimulus presentation. The delay of this communication was measured by means of timestamps and did not exceed 30 msec. Based on the calculations shown above, we can disregard gaze shifts within this period of time. The average head shift within a 30 msec window across all our calibration sequences was 0.006 mm, resulting in a TCO glitch of 0.02 arcmin due to the transmission delay introduced by the software interface.
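
A minimal Python sketch of this per-trial logic, with hypothetical calibration fits and a hypothetical tracker query function; the actual pipeline uses the standalone C++ tracker and a Matlab-based stimulation environment [39]:

```python
ARCMIN_PER_RASTER_PIXEL = 0.10   # AOSLO raster sampling (see Section 2.1)

def per_trial_shift(purkinje_xy, calib_x, calib_y):
    """Estimate the current TCO from the latest Purkinje image coordinates
    via the calibration fits (Eq. (3), one per axis, assumed to map camera
    coordinates to arcmin of TCO) and convert it into a stimulus shift in
    raster pixels for the upcoming trial."""
    mx, bx = calib_x
    my, by = calib_y
    tco_x = mx * purkinje_xy[0] + bx   # arcmin
    tco_y = my * purkinje_xy[1] + by   # arcmin
    return (round(tco_x / ARCMIN_PER_RASTER_PIXEL),
            round(tco_y / ARCMIN_PER_RASTER_PIXEL))

# Hypothetical per-trial usage: query the tracker on the participant's
# keystroke, then shift the stimulus within the scan raster before delivery.
# shift_px = per_trial_shift(query_purkinje_position(), calib_x, calib_y)
```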

In summary, the demonstrated eye tracking method appears to be a viable solution to estimate and compensate chromatic offsets in real time to within a fraction of an arcmin, as necessary for adaptive optics microstimulation of the smallest photoreceptors of the retina. Furthermore, we experimentally confirmed the fundamental relationship between angle Kappa and TCA, which could conveniently be exploited in applications where TCA needs to be compensated but cannot be measured directly.

Funding

Emmy Noether Program of the German Research Foundation (DFG) (Ha5323-5/1).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. W. M. Harmening, W. S. Tuten, A. Roorda, and L. C. Sincich, “Mapping the perceptual grain of the human retina,” J. Neurosci. 34(16), 5667–5677 (2014). [CrossRef]   [PubMed]  

2. K. S. Bruce, W. M. Harmening, B. R. Langston, W. S. Tuten, A. Roorda, and L. C. Sincich, “Normal Perceptual Sensitivity Arising From Weakly Reflective Cone Photoreceptors,” Invest. Ophthalmol. Vis. Sci. 56(8), 4431–4438 (2015). [CrossRef]   [PubMed]  

3. W. S. Tuten, W. M. Harmening, R. Sabesan, A. Roorda, and L. C. Sincich, “Spatiochromatic Interactions between Individual Cone Photoreceptors in the Human Retina,” J. Neurosci. 37(39), 9498–9509 (2017). [CrossRef]   [PubMed]  

4. W. S. Tuten, R. F. Cooper, P. Tiruveedhula, A. Dubra, A. Roorda, N. P. Cottaris, D. H. Brainard, and J. I. W. Morgan, “Spatial summation in the human fovea: Do normal optical aberrations and fixational eye movements have an effect?” J. Vis. 18(8), 6 (2018). [CrossRef]   [PubMed]  

5. R. Sabesan, B. P. Schmidt, W. S. Tuten, and A. Roorda, “The elementary representation of spatial and color vision in the human retina,” Sci. Adv. 2(9), e1600797 (2016). [CrossRef]   [PubMed]  

6. B. P. Schmidt, A. E. Boehm, K. G. Foote, and A. Roorda, “The spectral identity of foveal cones is preserved in hue perception,” J. Vis. 18(11), 19 (2018). [CrossRef]   [PubMed]  

7. J. H. Tu, K. G. Foote, B. J. Lujan, K. Ratnam, J. Qin, M. B. Gorin, E. T. Cunningham Jr., W. S. Tuten, J. L. Duncan, and A. Roorda, “Dysflective cones: Visual function and cone reflectivity in long-term follow-up of acute bilateral foveolitis,” Am. J. Ophthalmol. Case Rep. 7, 14–19 (2017). [CrossRef]   [PubMed]  

8. Q. Wang, W. S. Tuten, B. J. Lujan, J. Holland, P. S. Bernstein, S. D. Schwartz, J. L. Duncan, and A. Roorda, “Adaptive optics microperimetry and OCT images show preserved function and recovery of cone visibility in macular telangiectasia type 2 retinal lesions,” Invest. Ophthalmol. Vis. Sci. 56(2), 778–786 (2015). [CrossRef]   [PubMed]  

9. D. A. Atchison and G. Smith, “Chromatic dispersions of the ocular media of human eyes,” J. Opt. Soc. Am. A 22(1), 29–37 (2005). [CrossRef]   [PubMed]  

10. P. A. Howarth, “The lateral chromatic aberration of the eye,” Ophthalmic Physiol. Opt. 4(3), 223–226 (1984). [CrossRef]   [PubMed]  

11. M. Vinas, C. Dorronsoro, D. Cortes, D. Pascual, and S. Marcos, “Longitudinal chromatic aberration of the human eye in the visible and near infrared from wavefront sensing, double-pass and psychophysics,” Biomed. Opt. Express 6(3), 948–962 (2015). [CrossRef]   [PubMed]  

12. M. C. Rynders, R. Navarro, and M. A. Losada, “Objective measurement of the off-axis longitudinal chromatic aberration in the human eye,” Vision Res. 38(4), 513–522 (1998). [CrossRef]   [PubMed]  

13. B. Jaeken, L. Lundström, and P. Artal, “Peripheral aberrations in the human eye for different wavelengths: off-axis chromatic aberration,” J. Opt. Soc. Am. A 28(9), 1871 (2011). [CrossRef]  

14. W. M. Harmening, P. Tiruveedhula, A. Roorda, and L. C. Sincich, “Measurement and correction of transverse chromatic offsets for multi-wavelength retinal microscopy in the living eye,” Biomed. Opt. Express 3(9), 2066–2077 (2012). [CrossRef]   [PubMed]  

15. L. N. Thibos, A. Bradley, D. L. Still, X. Zhang, and P. A. Howarth, “Theory and measurement of ocular chromatic aberration,” Vision Res. 30(1), 33–49 (1990). [CrossRef]   [PubMed]  

16. L. N. Thibos, M. Ye, X. Zhang, and A. Bradley, “The chromatic eye: a new reduced-eye model of ocular chromatic aberration in humans,” Appl. Opt. 31(19), 3594–3600 (1992). [CrossRef]   [PubMed]  

17. M. Rynders, B. Lidkea, W. Chisholm, and L. N. Thibos, “Statistical distribution of foveal transverse chromatic aberration, pupil centration, and angle psi in a population of young adult eyes,” J. Opt. Soc. Am. A 12(10), 2348–2357 (1995). [CrossRef]   [PubMed]  

18. C. M. Privitera, R. Sabesan, S. Winter, P. Tiruveedhula, and A. Roorda, “Eye-tracking technology for real-time monitoring of transverse chromatic aberration,” Opt. Lett. 41(8), 1728–1731 (2016). [CrossRef]   [PubMed]  

19. A. E. Boehm, C. M. Privitera, B. P. Schmidt, and A. Roorda, “Transverse chromatic offsets with pupil displacements in the human eye: sources of variability and methods for real-time correction,” Biomed. Opt. Express 10(4), 1691–1706 (2019). [CrossRef]   [PubMed]  

20. N. Domdei, L. Domdei, J. L. Reiniger, M. Linden, F. G. Holz, A. Roorda, and W. M. Harmening, “Ultra-high contrast retinal display system for single photoreceptor psychophysics,” Biomed. Opt. Express 9(1), 157–172 (2017). [CrossRef]   [PubMed]  

21. D. A. Atchison and P. Artal, Handbook of Visual Optics (Taylor & Francis Group, 2017).

22. F. Schaeffel, “Kappa and Hirschberg ratio measured with an automated video gaze tracker,” Optom. Vis. Sci. 79(5), 329–334 (2002). [CrossRef]   [PubMed]  

23. D. Ivanchenko, Z. M. Hafed, and F. Schaeffel, “How correlated are drifts in both eyes during fixational eye movements?” Invest. Ophthalmol. Vis. Sci. 59, 5792 (2018).

24. P. Simonet and M. C. Campbell, “The optical transverse chromatic aberration on the fovea of the human eye,” Vision Res. 30(2), 187–206 (1990). [CrossRef]   [PubMed]  

25. U. Wildenmann and F. Schaeffel, “Variations of pupil centration and their effects on video eye tracking,” Ophthalmic Physiol. Opt. 33(6), 634–641 (2013). [CrossRef]   [PubMed]  

26. T. A. Hoang, J. E. Macdonnell, M. C. Mangan, C. S. Monsour, B. L. Polwattage, S. F. Wilson, M. Suheimat, and D. A. Atchison, “Time Course of Pupil Center Location after Ocular Drug Application,” Optom. Vis. Sci. 93(6), 594–599 (2016). [CrossRef]   [PubMed]  

27. C. H. Morimoto and M. R. M. Mimica, “Eye gaze tracking techniques for interactive applications,” Comput. Vis. Image Underst. 98(1), 4–24 (2005). [CrossRef]  

28. S. Winter, R. Sabesan, P. Tiruveedhula, C. Privitera, P. Unsbo, L. Lundström, and A. Roorda, “Transverse chromatic aberration across the visual field of the human eye,” J. Vis. 16(14), 9 (2016). [CrossRef]   [PubMed]  

29. D. Model, M. Eizenman, and V. Sturm, “Fixation-free assessment of the Hirschberg ratio,” Invest. Ophthalmol. Vis. Sci. 51(8), 4035–4039 (2010). [CrossRef]   [PubMed]  

30. S. Martinez-Conde, S. L. Macknik, and D. H. Hubel, “The role of fixational eye movements in visual perception,” Nat. Rev. Neurosci. 5(3), 229–240 (2004). [CrossRef]   [PubMed]  

31. P. M. Kiely, G. Smith, and L. G. Carney, “The Mean Shape of the Human Cornea,” Opt. Acta (Lond.) 29(8), 1027–1040 (1982). [CrossRef]  

32. E. Iyamu and E. Osuobeni, “Age, gender, corneal diameter, corneal curvature and central corneal thickness in Nigerians with normal intra ocular pressure,” J. Optom. 5(2), 87–97 (2012). [CrossRef]  

33. Y. U. Ogboso and H. E. Bedell, “Magnitude of lateral chromatic aberration across the retina of the human eye,” J. Opt. Soc. Am. A 4(8), 1666–1672 (1987). [CrossRef]   [PubMed]  

34. S. Marcos, S. A. Burns, P. M. Prieto, R. Navarro, and B. Baraibar, “Investigating sources of variability of monochromatic and transverse chromatic aberrations across eyes,” Vision Res. 41(28), 3861–3871 (2001). [CrossRef]   [PubMed]  

35. C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson, “Human photoreceptor topography,” J. Comp. Neurol. 292(4), 497–523 (1990). [CrossRef]   [PubMed]  

36. D. R. Williams and N. J. Coletta, “Cone spacing and the visual resolution limit,” J. Opt. Soc. Am. A 4(8), 1514–1523 (1987). [CrossRef]   [PubMed]  

37. J. L. Reiniger, A. C. Lobecke, R. Sabesan, M. Bach, F. Verbakel, J. de Brabander, F. G. Holz, T. T. J. M. Berendschot, and W. M. Harmening, “Habitual higher order aberrations affect Landolt but not Vernier acuity,” J. Vis. 19(5), 11 (2019). [CrossRef]   [PubMed]  

38. C. M. M. Abbud and A. A. V. Cruz, “Variability of Vernier acuity measurements in untrained subjects of different ages,” Braz. J. Med. Biol. Res. 35(2), 223–227 (2002). [CrossRef]   [PubMed]  

39. S. Poonja, S. Patel, L. Henry, and A. Roorda, “Dynamic visual stimulus presentation in an adaptive optics scanning laser ophthalmoscope,” J. Refract. Surg. 21(5), S575–S580 (2005). [PubMed]  
