
Differential detection of retinal directionality


Abstract

An adaptive optics fundus camera has been developed that uses simultaneous capture of multiple images via adjacent pupil sectors to provide directional sensitivity. In the chosen realization, a shallow refractive pyramid prism is used to subdivide backscattered light from the retina into four solid angles. Parafoveal fundus images have been captured for the eyes of three healthy subjects and directional scattering has been determined using horizontal and vertical differentials. The results for the photoreceptor cones, blood vessels, and the optic disc are discussed. In the case of cones, the observations are compared with numerical simulations based on a simplistic light-scattering model. Ultimately, the method may have diagnostic potential for diseases that perturb the microscopic structure of the retina.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Vision is sensitive to the angle of incidence of light on the retina as evidenced by the psychophysical Stiles-Crawford effect of the first kind (SCE-I) [1–5]. Likewise, light which is backscattered by the retina is nonuniform in the pupil in what is commonly referred to as the optical Stiles-Crawford effect (OSCE) [6–12]. For vision, only light reaching the outer photoreceptor segments matters, whereas for imaging, scattering from refractive-index inhomogeneities at any location within the eye may contribute to images (although confocal detection and/or optical coherence tomography can significantly enhance the depth selectivity). Whereas the SCE-I relies on subjective and time-consuming measurements, the OSCE allows for fast and objective determination of photoreceptor directionality. Thus, exploring the relationship between the SCE-I and the OSCE has diagnostic potential [3,12].

The causes of scattering within photoreceptors remain a topic of debate with likely contributions from inner/outer segment junctions and outer segment terminations [13–16]. One such contribution is the refractive-index contrast between densely packed high-index mitochondria and the surrounding cellular matrix in the ellipsoid [17], but gaps, or a lack of membrane invaginations (defects), have also been suggested to cause reflections [18]. Backscattered light from the photoreceptors may be guided [2,3] or directional due to the spatial arrangement of the dominant scattering sources and the local variation in refractive index [4]. The OSCE distribution peaks at approximately the same position as the SCE-I in healthy eyes, which suggests that cones point predominantly toward a common point [19]. Objective methods to analyze photoreceptor pointing include OSCE pupil imaging [8–11], high-resolution adaptive optics (AO) retinal imaging with fundus cameras using multiple pupil entrance points [20–22] or axial sweeping [23], scanning laser ophthalmoscopy (SLO) [24,25], and optical coherence tomography (OCT) [22,26,27]. These methods allow determination of an effective directionality parameter for light scattering that is similar, though not identical, to the directionality found by psychophysical means [2,4,10].

Annular beams have been used in AO-SLO to reduce the size of the scanning spot [28], and offset-pinhole methodology has facilitated imaging of weakly scattering structures by enhancing the contrast of photoreceptors, ganglion cells, and blood flow [29–32]. Here, we report on a new directionally sensitive method that uses a structured pupil in the detection path. This approach differs from the offset-pinhole method by operating in the pupil rather than the image plane. In this way it directly probes the angular distribution of the backscattered imaging light rather than multiply-scattered (indirect imaging) light. Earlier OSCE studies have altered the direction of the incident light by shifting the pupil entrance point while capturing individual images of backscattered light in the pupil or retinal image plane [20–22]. Here the light enters near the pupil center (which matters most for vision) and directionality is determined using intensity gradients from simultaneously captured images at different backscattering angles.

Different structured pupils can be envisioned. In this first realization, we have chosen to use a shallow refractive pyramid prism to subdivide backscattered light from the retina into four adjacent quadrants, resulting in four simultaneous retinal images. Pyramid prisms have previously been used to capture multiple images for wavefront sensing in AO [33–35] and to acquire depth information in microscopy [36]. Differential images can provide information about the tilt of the scattering sources in a similar way to the quadrant detection used for sensing of cantilever deformations in atomic force microscopy [37]. Here, differential imaging is used to determine effective pointing directions of retinal structures in the horizontal and vertical directions, respectively.

The paper is organized as follows: Section 2 describes the experimental setup and realization. The theoretical model and simulation results are presented in Section 3. Section 4 shows the experimental findings that are discussed in more detail in Section 5. Finally, Section 6 contains the conclusions.

2. Experimental setup and realization

The quadrant pupil has been implemented in an AO fundus camera as shown in Fig. 1. The imaging channel uses a fiber-coupled LED at 660 nm wavelength with 25 nm bandwidth (Thorlabs) for flood illumination of a 2° retinal patch in Maxwellian view.


Fig. 1 Schematic layout of the flood-illumination fundus camera equipped with the refractive pyramid for quadrant detection of retinal directionality and AO wavefront correction of the eye and system. All lenses are AR-coated achromatic doublets (F1 to F11) and irises are used to set the beam width and screen against unwanted reflections in the optical path.


Light which is backscattered by the retina is projected from the eye pupil via two 4f telescopes onto the glass pyramid (Eksma) with 170° apex angle mounted in a conjugate plane. Refractive errors induced by the pyramid prism are small due to its large apex angle. It could potentially also operate in reflection (if metal coated), which would eliminate chromatic and refractive errors. The resulting four images are captured simultaneously with a scientific CMOS camera (Neo 5.5 sCMOS, Andor) at 33 frames per second (fps) with 11-bit resolution. This camera has 6.5 µm pixel size on a 16.6 mm × 14.0 mm sensor and quantum efficiency ∼55% at the imaging wavelength. The image magnification is ×9 from retina to camera (i.e., 775 pixels across the 2° field). In parallel, a Hartmann-Shack (HS) wavefront sensor (Thorlabs) is used in closed loop with a deformable membrane mirror (DM; Mirao-52e, Imagine Eyes) and an 850 nm laser diode (Edmund Optics) for real-time aberration correction.

Beam splitters are used to combine and separate the imaging channel from the wavefront sensing. This could potentially be further improved with dichroic mirrors for increased light efficiency. The incident power is 70 µW (imaging channel) and 240 µW (wavefront sensing channel). These values are within the ANSI Z136.1 safety standard for continuous light exposure of the eye for both the extended source (imaging) and the focused beam (wavefront sensing), together amounting to 37% of the maximum permissible exposure. Bandpass filters are used to block unwanted wavelengths of light from entering the HS wavefront sensor and the sCMOS camera. The AO system operates continuously throughout the entire imaging session (to keep the root-mean-square wavefront error smaller than ∼0.07 µm) at a variable speed of 33–43 Hz and momentarily holds the DM shape during eye blinks to avoid errors that could otherwise confound the wavefront correction.

Short video sequences (∼3 seconds) were captured for the nasal retina in the right eye of three healthy subjects aged 31–48 years (the authors: SQ, DV, and BV). The study complies with the Declaration of Helsinki for research involving human subjects. AO correction was activated first for all Zernike polynomials up to and including the 5th radial order for a 6.0 mm pupil to ensure diffraction-limited performance before the video sequence was initiated. The pupil was dilated, and accommodation partially paralyzed, by administering 2 drops of 1% tropicamide before image acquisition. A distant green spot of light was used for gaze fixation to reduce unwanted eye motion.

3. Differential detection method and numerical scattering model

The four-faceted pyramid is placed in a conjugate pupil plane to provide simultaneous capture of four retinal intensity images (I1,I2,I3,I4) as shown in Fig. 1. When centered, symmetrical backscattering of light through four adjacent pupil sectors produces identical brightness of corresponding structures in all images. Obliqueness in the scattering is revealed by brightness differences between images. Thus, for photoreceptors, this will identify obliqueness with respect to the pyramid apex. For other retinal structures, including blood vessels and the optic disc, it will show preferential scattering directions related to the local topography being imaged.

3.1. Differential detection method

The pointing of individual scattering structures can be determined from the simultaneously-acquired images by calculating a difference vector (Δxm,n,Δym,n) at corresponding image pixels m,n. The vector components can be written as

$$\Delta x_{m,n}=\frac{I_{2,m,n}-I_{1,m,n}}{I_{2,m,n}+I_{1,m,n}}\,L\,;\qquad \Delta y_{m,n}=\frac{I_{3,m,n}-I_{4,m,n}}{I_{3,m,n}+I_{4,m,n}}\,L\qquad(1)$$
where L is a scaling factor that sets the length of all the vectors and the intensity values I_{1,m,n} to I_{4,m,n} refer to the detected brightness of corresponding pixels in the four images. It must be stressed that the value of L can be chosen at will for the best visualization of local intensity differences as coordinate vectors. If chosen too short, vectors will appear as dots, and if chosen too large, vectors will point outside of the image frame. All vectors have components in the range −L ≤ Δx_{m,n} ≤ L and −L ≤ Δy_{m,n} ≤ L, with the extreme values reached when a pixel equals zero in one of the images.
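
As an illustration, a minimal Python/NumPy sketch of Eq. (1) is given below. The analysis in the paper is not distributed as code, so the function name, the quadrant ordering of I1–I4, and the small regularization constant eps are assumptions of this sketch:

```python
import numpy as np

def obliqueness_vectors(I1, I2, I3, I4, L=1.0, eps=1e-12):
    """Per-pixel difference-vector components (dx, dy) of Eq. (1)."""
    I1, I2, I3, I4 = (np.asarray(I, dtype=float) for I in (I1, I2, I3, I4))
    dx = (I2 - I1) / (I2 + I1 + eps) * L   # horizontal component
    dy = (I3 - I4) / (I3 + I4 + eps) * L   # vertical component
    return dx, dy
```

Applied to four co-registered quadrant images, the two returned arrays give the per-pixel vector components used for the vector overlays in Secs. 3 and 4.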

The implications of Eq. (1) can be appreciated from the vectors in Fig. 2 for a simulated cone mosaic and the intensity differentials used to calculate the obliqueness of each cone. There is no relative tilt of an imaged pixel when both vector components are equal to zero. The absolute tilt is more challenging to determine as it depends on the axial eye length and the diffraction of light from the retina to the pupil plane. Small structures will backscatter light into a wide angular distribution whereas wide structures will scatter into a narrower angular distribution. This is the approach taken in the numerical model based on a schematic eye (Sec. 3.2).


Fig. 2 Simulated parafoveal cone mosaic imaging and analysis assuming circular reflective scattering layers in the ellipsoid of each photoreceptor. Each reflector is given a random tip/tilt with uniform amplitude of up to ±3°. The resulting intensity image with a full 6.0 mm circular pupil is shown in (a) for a mosaic of 19 cones across a 40 × 40 μm2 area. In (b) red arrows show the random tilts of each simulated cone (vectors overlaying the intensity image). In (c) quadrant retinal images are shown together with the algebraic image sum. In (d) difference images reveal photoreceptor obliqueness mapped onto a grayscale between maximum +3° (positive, white) and minimum −3° (negative, black) angles. The same grayscale is used both horizontally (tilt) and vertically (tip). Finally, in (e) the reconstructed obliqueness of each simulated cone is shown (blue arrows) as determined from Eq. (1) using only intensity differences at the central pixels of each reflector.


It is of value to introduce a metric, σ, that can be used locally, and globally, to quantify the relative amount of obliqueness or disarray in captured images. This can be done by averaging over the pixels of an N×N square image as

$$\sigma=\frac{1}{2N^{2}}\sum_{n=1}^{N}\sum_{m=1}^{N}\left[\left(\Delta x_{m,n}/L\right)^{2}+\left(\Delta y_{m,n}/L\right)^{2}\right].\qquad(2)$$
This metric increases when the obliqueness is large and equals zero if there is no obliqueness with respect to the chosen pupil reference point. The normalization with respect to L removes the dependence on the chosen scaling factor used in Eq. (1). Likewise, when only a chosen subset of pixels is used, the obliqueness can be normalized accordingly. This metric falls in the range 0 ≤ σ ≤ 1. Thus, if there is no local obliqueness all corresponding pixels carry equal intensity whereby σ = 0.

Equation (1) and Eq. (2) can be modified to compensate for global tilt by subtracting the average vector (⟨Δx_{m,n}⟩, ⟨Δy_{m,n}⟩) from the local values, i.e., using (Δx_{m,n} − ⟨Δx_{m,n}⟩, Δy_{m,n} − ⟨Δy_{m,n}⟩), and thereby obtain a modified metric σ˜ that does not include global tilt. This is the procedure followed for the experimental results shown in Sec. 4. Image normalization of all four retinal images is used before calculation of the differentials. This reduces global obliqueness whereby the two metrics σ and σ˜ become almost identical.
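
A corresponding sketch of the disarray metric, following Eq. (2) as written above and the global-tilt subtraction that yields σ˜, could look as follows (Python/NumPy; function and argument names are illustrative, not taken from the paper):

```python
import numpy as np

def disarray_metric(dx, dy, L=1.0, remove_global_tilt=True):
    """Disarray metric of Eq. (2); with remove_global_tilt=True the average
    (global) tilt is subtracted first, giving the tilde-corrected metric."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    if remove_global_tilt:
        dx = dx - dx.mean()   # subtract the image-average horizontal component
        dy = dy - dy.mean()   # subtract the image-average vertical component
    return np.sum((dx / L) ** 2 + (dy / L) ** 2) / (2.0 * dx.size)
```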

It must be stressed that pupil apodization probes the directional light scattering directly by angularly resolving the scattered light at the pupil, albeit at the cost of spatial resolution. Thus, the method complements but does not replace high-resolution imaging techniques where the unrestricted pupil is used. Pupil apodization differs fundamentally from related techniques, such as the offset-pinhole method used with SLO [29–31], that sample the non-imaging (scattered) light distribution in the image plane at discrete off-axis (and thus non-confocal) points. Pupil apodization targets the directionality of the backscattered imaging light itself. This can also be appreciated from a theoretical description as follows. Differential retinal imaging with pupil apodization using two sectors G_I and G_II, which may be quadrants or adjacent half-pupils, can be expressed as the differential intensity image

$$I_{\mathrm{differential}}=\left|\mathrm{FT}\{G_{\mathrm{I}}\}\ast\bar{O}\right|^{2}-\left|\mathrm{FT}\{G_{\mathrm{II}}\}\ast\bar{O}\right|^{2}\qquad(3)$$
where Ō is the spatial distribution of light scattered by the retina, FT is the 2-D Fourier transform, and ∗ denotes convolution. When the scattering is symmetric the differential image determined from Eq. (3) equals zero. Multi-offset and indirect imaging modalities can be used to further enhance the visibility of multiply-scattered light in fundus images [31,32] but without assessing the directionality of the direct imaging light itself, as done in the present study and exemplified by Eq. (3).
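
By the convolution theorem, Eq. (3) can be evaluated numerically by Fourier transforming the scattered field, masking it with each pupil sector, and transforming back. A hedged Python/NumPy sketch under simplifying assumptions (idealized, aberration-free optics and an illustrative half-pupil geometry) is:

```python
import numpy as np

def sector_image(O_bar, G):
    """Image intensity through one pupil sector G, i.e. |FT{G} * O_bar|^2,
    evaluated via the convolution theorem (mask the pupil field, transform back)."""
    pupil_field = np.fft.fftshift(np.fft.fft2(O_bar))              # scattered field at the pupil
    image_field = np.fft.ifft2(np.fft.ifftshift(G * pupil_field))  # truncate by G and reimage
    return np.abs(image_field) ** 2

def half_pupil_masks(n, radius_frac=0.45):
    """Two adjacent half-pupil masks (left/right) on an n x n centred pupil grid."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = x ** 2 + y ** 2 <= (radius_frac * n) ** 2
    return pupil & (x < 0), pupil & (x >= 0)

# Differential image of Eq. (3) for a complex scattered field O_bar:
# G_I, G_II = half_pupil_masks(O_bar.shape[0])
# I_diff = sector_image(O_bar, G_I) - sector_image(O_bar, G_II)
```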

3.2. Modeled cone mosaic imaged with quadrant pupil detection

A photoreceptor imaging model has been developed to demonstrate the method for a quadrant pupil. An array of parafoveal photoreceptors with 5 μm diameter has been modeled (Matlab) using a circular mirror to represent the dominant scattering layer within each photoreceptor ellipsoid [4,17]. Random variations of scattering microstructures would, on average, be well represented by diffractive micromirrors, which is the approach taken here. A similar approach can be used for more complex scattering domains using dipolar [4] or electromagnetic wave propagation [38,39]. Random tilts of each cone within a uniform distribution of up to ±3° were assumed. The simulated retina represents a 40 × 40 μm2 area with 20,000 cones/mm2 for a schematic eye with axial length f_eye = 22.2 mm and refractive index n_eye = 1.33. The simulated array of micromirrors was assumed to be illuminated by an axially incident plane wave of light, and the back-diffracted light, truncated by the eye pupil, has been reimaged using two Fourier transforms. Figure 2(a) shows the resulting intensity image for the cone mosaic with inclinations as shown in Fig. 2(b), corresponding to the vectors expressed by Eq. (1). Some brightness variations that might resemble mode structures can be observed; these originate in the diffraction from the oblique micromirrors [4].
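
To make the model concrete, a short Python/NumPy sketch of the tilted-micromirror construction is given below (the authors used Matlab; the grid size, cone pitch, and simulated area are illustrative placeholders rather than the exact simulation parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, px = 512, 0.1                         # grid points and pixel size in micrometres (placeholders)
wavelength = 0.66                        # imaging wavelength in micrometres (660 nm)
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * px

cone_radius = 2.5                        # 5 micrometre diameter reflector per cone
pitch = 7.5                              # ~20,000 cones/mm^2 gives roughly 7 micrometre spacing
field = np.zeros((n, n), dtype=complex)  # reflected field; zero outside the micromirrors
for cx in np.arange(-15.0, 16.0, pitch):
    for cy in np.arange(-15.0, 16.0, pitch):
        tilt_x, tilt_y = np.deg2rad(rng.uniform(-3, 3, size=2))  # random tip/tilt up to +/-3 degrees
        mask = (x - cx) ** 2 + (y - cy) ** 2 <= cone_radius ** 2
        # a mirror tilted by theta deflects the reflected wavefront by 2*theta (linear phase ramp)
        phase = 2 * (2 * np.pi / wavelength) * (tilt_x * (x - cx) + tilt_y * (y - cy))
        field[mask] = np.exp(1j * phase[mask])

# 'field' can now be propagated to the pupil plane with an FFT, truncated by the full
# pupil or by a quadrant sector (see the sketch above), and transformed again to image it.
```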

Images obtained through the four adjacent pupil sectors are shown in Fig. 2(c) with the algebraic sum of intensities I = I1 + I2 + I3 + I4 shown in the middle. The sum resembles the full-pupil image in Fig. 2(a) but is not identical since intensities rather than complex fields are added, resulting in a lower resolution. Bright dots in the individual images correspond to cones that point mostly into the corresponding quadrant whereas dark areas show cones that point away. The preferential scattering direction, and thus the pointing, is more apparent from differential imaging, as seen in Fig. 2(d), which shows pointing left, right, up, down, or along the diagonals with each sub-image scaled to the maximum contrast. Uniform differential images correspond to no obliqueness whereas brightness differences correspond to oblique light scattering. These images directly show the relative inclination of each modeled cone, which cannot be deciphered from Fig. 2(a) alone.

To further validate the method, intensity differences at the central pixel of each simulated cone in the quadrant retinal images (Fig. 2(d)) were used with Eq. (1) to calculate the measured obliqueness of each cone. The resulting vectors are shown in Fig. 2(e). The reconstruction resembles the original although some differences can be seen. These are in part caused by the low central brightness of some of the simulated cones. This difference could potentially be reduced by using the average brightness across the imaged cones, rather than their central brightness only. Evaluation of the obliqueness metric for the central pixels of the simulated cones in Fig. 2 results in σ˜ = 0.620. In turn, if all cone tilts equal zero (not shown) the reconstruction yields a small value of σ˜ = 0.022.

Differential detection can be applied to photoreceptor cones, as modeled above, or indeed to other retinal structures to reveal preferential directional scattering caused by topography, swelling, or refractive-index inhomogeneities. For the experimental results described in Sec. 4, obliqueness vectors were determined at regular intervals, which is faster and better suited to determining a global obliqueness metric.

4. Experimental results

Three examples of relevance have been chosen for analysis: direct imaging of parafoveal cones (Sec. 4.1), retinal blood vessels (Sec. 4.2), and the optic nerve (Sec. 4.3).

4.1. Cone photoreceptors

An example of parafoveal cone mosaic imaging at ∼5° in the upper-nasal region for subject DV is shown in Fig. 3. It shows the four simultaneously captured images, their sum, and the corresponding difference images. The accompanying playback visualizations have been reduced to 15 fps for ease of viewing.


Fig. 3 Directional light scattering captured for the parafoveal region of subject DV showing (a) four simultaneous images and the sum of the images in the middle (Visualization 1) and (b) corresponding difference images (Visualization 2). Each image covers 2 visual degrees. The dual scalebar in (b) refers to horizontal tilt (left-to-right) and vertical tilt, or tip (down-to-up).


Collinear registration of the four images is vital to identify directional scattering, and minor translations of the individual image positions are needed. Repositioning was achieved before the analysis by manually identifying a corresponding bright feature in all four images, equivalent to a local cross-correlation. Bright dots in Fig. 3(b) show cone photoreceptor scattering into the corresponding quadrant whereas dark areas correspond to a lack of scattering into that same quadrant. The range of grayscales shows tips and tilts between these limits, although unwanted glare light, caused by the lack of confocal detection, complicates the analysis. The obliqueness of scattering for individual photoreceptor cones can be estimated by identifying the same cone in more than one image and calculating its intensity-weighted center of mass. This determines the obliqueness as a fraction of the maximum obliqueness in the set of images according to the grayscale, in the same way as in Fig. 2(d).
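
One plausible implementation of this per-cone estimate (a sketch only; the paper does not publish its processing code) is to integrate the brightness of the identified cone in each registered quadrant image and apply Eq. (1) to the integrated values. Here cone_mask is a hypothetical boolean mask covering the cone's pixels:

```python
import numpy as np

def cone_pointing(I1, I2, I3, I4, cone_mask, L=1.0):
    """Pointing of one identified cone: integrate its brightness in each
    registered quadrant image and apply Eq. (1) to the integrated values."""
    b1, b2, b3, b4 = (float(np.sum(I[cone_mask])) for I in (I1, I2, I3, I4))
    dx = (b2 - b1) / (b2 + b1) * L
    dy = (b3 - b4) / (b3 + b4) * L
    return dx, dy
```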

Color coding can be used to aid visualization. This is shown in Fig. 4 for the same data by overlapping the registered images. Downward scattering, i.e., the lower image in Fig. 3(a), has been excluded (to use a tricolor mapping) and the following coding has been used: scattering to the left (blue), scattering upwards (green), and scattering to the right (red). Colors may also be used separately for the horizontal and vertical directions (not shown). A triangular pyramid prism could potentially be used to capture just 3 images that could be projected directly onto the RGB colormap. The enlarged view in Fig. 4(b) shows that the cones scatter light in different directions as coded by the color and brightness. To visualize the local inclinations, Fig. 4(c) shows the same image but with obliqueness vectors determined from Eq. (1) superimposed (corrected for the global average tilt). The vectors are plotted at regular intervals, not always coinciding with the cones. Still, careful comparison of Fig. 4(b) and Fig. 4(c) gives added insight into the local preferential scattering directions that is not apparent in a conventional high-resolution AO fundus image.
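
A minimal sketch of this tricolor mapping is shown below, assuming three co-registered quadrant images and a simple min-max normalization (choices of this sketch rather than details given in the paper):

```python
import numpy as np

def rgb_composite(i_left, i_up, i_right):
    """Tricolour overlay as in Fig. 4: left -> blue, upper -> green, right -> red."""
    def norm(img):
        img = np.asarray(img, dtype=float)
        return (img - img.min()) / (img.max() - img.min() + 1e-12)
    return np.dstack([norm(i_right), norm(i_up), norm(i_left)])  # channels in (R, G, B) order
```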


Fig. 4 Directional light scattering captured for the parafoveal region of subject DV showing (a) three superimposed simultaneous images left (blue), upper (green) and right (red) and (b) enlarged view of the central region along with the RGB color identification of relative cone pointing directions as determined from three of the four quadrant images. In (c) the inclination vectors have been overlaid to show the local pointing obliqueness calculated using Eq. (1) with all four images thereby removing the ambiguity in the vertical direction. For the chosen section in (c) the disarray parameter equals σ˜ = 0.091 and it varies in the range of 0.078 - 0.102 for the duration of the video sequence.


4.2. Blood vessels

An example of blood vessel imaging for subject BV is shown in Fig. 5. The visibility of the vessel walls in difference images is highest in the direction perpendicular to the vessels and the outcome resembles that of phase contrast imaging and offset pinhole SLO [30,31].


Fig. 5 Directional light scattering captured for the parafoveal region of subject BV showing the (a) four simultaneous images with the sum of the images in the middle and (b) corresponding difference images. Each image covers 2 visual degrees. In (c) cross-sectional cuts across the central part of the differential images (as indicated by arrows in (b)) are shown.


The oblique light scattering is evident in Fig. 5(b), suggesting a mirror-like reflection off the vessel walls that enhances them in the orthogonal direction. This can be appreciated in the cross-sectional cuts shown in Fig. 5(c). The Michelson contrast of the vessel with respect to its immediate neighborhood equals approximately 8% (upper), 7% (lower), 16% (left) and 22% (right). Color coding can again be used to enhance the visibility of obliqueness. In Fig. 6, the left (blue), upper (green), and right (red) images have been superimposed. A blue-green-red color gradient can be noted across the vertical vessel suggesting a mirror-like reflection. As a result, the overlaid obliqueness vectors in Fig. 6(c) are largely perpendicular to the vertical vessel.
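
For reference, the Michelson contrast quoted above can be computed from a cross-sectional intensity cut as follows (a sketch assuming a positive-valued profile spanning the vessel and its immediate neighborhood):

```python
import numpy as np

def michelson_contrast(profile):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of a 1-D intensity cut."""
    profile = np.asarray(profile, dtype=float)
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)
```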


Fig. 6 Blood vessel visualization with color coding of scattering to the left (blue), upwards (green) and right (red) producing the (a) RGB color composite image. The magnified region (dashed white square) shows the color gradient across the vessel indicating the changing slope. In (c) the vector pointing has been overlaid to show the local inclinations calculated using Eq. (1). For the chosen section in (c) the disarray parameter equals σ˜ = 0.080 and it varies in the range of 0.064 - 0.086 for the duration of the video sequence.


4.3. Optic nerve

Finally, an example of imaging near the optic nerve for subject SQ is shown in Fig. 7. The system magnification was reduced by a factor of 2.7 to capture these larger 5.4° images. Differential detection allows visualization of slopes that may be useful to monitor optic disc or cup changes. The corresponding color-coded image is shown in Fig. 8, where a color gradient can be appreciated in the transition zone at the optic disc as well as across the large vessels, although an exact image overlay is complicated by the larger depth variation across the region. The vector plot shows a preferential tilt towards the right in the region of the disc.


Fig. 7 Example of imaging near the optic nerve (seen in the lower right corner of the images) for subject SQ showing (a) four simultaneous images with the sum of the images in the middle and (b) corresponding difference images. To capture these images the system optics were changed to capture a 5.4° retinal view, rather than the 2° view used in the other examples.



Fig. 8 Visualization with color coding of scattering to the left (blue), upwards (green) and right (red) producing an RGB color composite image near the optic disc (seen in the lower right part of the image). In (b) the pointing has been overlaid to show the local vectors calculated using Eq. (1). The disarray parameter across the image equals σ˜ = 0.090 and it varies in the range of 0.089 - 0.103 for the duration of the video sequence.


The examples demonstrate that a quadrant pupil can be a useful addition for retinal imaging at both high and low magnifications by providing insight into directional scattering. Other bi- or multifaceted configurations can be envisioned that can highlight, possibly dynamically, any direction of relevance. The method may be implemented in other retinal imaging technologies (including SLO, where the differential imaging resembles offset-pinhole technology [29–31], and OCT [16,26,27]), which would better discriminate against straylight, as required for accurate quantification of the obliqueness.

5. Discussion

We have presented a method that allows for obliqueness analysis in fundus images acquired through different pupil sectors. This method resolves the angular distribution of light directly in the pupil plane and thus gives additional information not readily available by other means.

Three different scenarios have been reported: the imaging of parafoveal cones, blood vessels, and the optic nerve, all for healthy eyes. The results have been discussed in terms of local obliqueness vectors that map out variations in dominant scattering directions across simultaneously captured images. The determined disarray parameter σ˜ is robust across the recorded video sessions, varying by less than 20%. The frame-to-frame variation, where eye motion matters less, is smaller than 3%. In all the cases analyzed here only healthy subjects have been imaged and the obliqueness is small (σ˜ < 0.103). The simulation of a small cone mosaic with obliqueness of up to 3° resulted in a larger disarray parameter, but cone disarray in healthy eyes is believed to be small [19,20]. If comparing images obtained at different scales it may become relevant to introduce the disarray per unit retinal area or per unit visual angle. In terms of patient imaging, the range of disarray parameters relevant for different retinal diseases remains to be determined, but larger values would be expected when imaging affected regions of similarly-sized structures.

The directional sensitivity is achieved at the cost of a lower resolution. Thus, it would be relevant to capture images first through the full pupil and subsequently, or simultaneously, capture the directional images in a separate imaging channel. The chosen division of the pupil has an impact on the achievable resolution and causes a lack of symmetry in the point-spread function. Moreover, the refraction by the pyramid, though shallow, induces errors that degrade the images. A better approach, though beyond this study, would be to use full circular apodization pupils, centered and off-centered respectively. This would retain the symmetry of the imaging system while still allowing calculation of differentials. It could be realized with a spatial light modulator or a digital micromirror device that could sample the pupil at will [40]. This would give increased flexibility without additional image degradation beyond the reduction in sampled pupil size. Ultimately, such methods could be implemented into SLO and OCT systems that offer the best in terms of resolution and discrimination against unwanted components of light.

With further refinement, the method will be used for clinically relevant imaging of retinal disease. The information acquired may potentially be beneficial for the registration of drusen in macular degeneration or in cone dystrophy, where the normal arrangement and pointing of photoreceptors may be perturbed via alterations of the retinal topography, which would increase the introduced disarray metric. The approach may also prove valuable in the analysis of diabetic retinopathy if combined with polarimetry [41].

6. Conclusions

In this study, we have reported on a new directionally sensitive fundus camera that uses a refractive pyramid in the conjugate pupil plane to allow simultaneous capture of four images corresponding to light scattered by the retina into adjacent solid angles in the horizontal and vertical directions. Differential detection reveals differences in the directional scattering. This may possibly be used to monitor progression of diseases that alter the retinal topography [42–47], observe light-induced changes [48,49], or monitor regeneration of photoreceptor outer segments [50]. With further development, a different approach could be implemented for pupil apodization to avoid the refractive errors induced by the refractive pyramid. Options currently being explored in our laboratory include spatial light modulators and digital micromirror devices for adaptive modification of the effective pupil.

The strategy proposed here could, with little alteration, be implemented in SLO and OCT systems by either sequentially or simultaneously capturing light through different parts of the pupil, from which the differential images can be derived. These scanning systems would benefit significantly from their confocal design, which eliminates straylight that may otherwise confound the determination of oblique light scattering.

Funding

King Abdullah Scholarship Program; Science Foundation Ireland (08/IN.1/B2053).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. W. S. Stiles and B. H. Crawford, “The luminous efficiency of rays entering the eye pupil at different points,” Proc. R. Soc. Lond., B 112(778), 428–450 (1933). [CrossRef]  

2. B. Vohnsen, I. Iglesias, and P. Artal, “Guided light and diffraction model of human-eye photoreceptors,” J. Opt. Soc. Am. A 22(11), 2318–2328 (2005). [CrossRef]   [PubMed]  

3. G. Westheimer, “Directional sensitivity of the retina: 75 years of Stiles-Crawford effect,” Proc. Biol. Sci. 275(1653), 2777–2786 (2008). [CrossRef]   [PubMed]  

4. B. Vohnsen, “Directional sensitivity of the retina: A layered scattering model of outer-segment photoreceptor pigments,” Biomed. Opt. Express 5(5), 1569–1587 (2014). [CrossRef]   [PubMed]  

5. B. Vohnsen, A. Carmichael, N. Sharmin, S. Qaysi, and D. Valente, “Volumetric integration model of the Stiles-Crawford effect of the first kind and its experimental verification,” J. Vis. 17(12), 18 (2017). [CrossRef]   [PubMed]  

6. G. T. di Francia and L. Ronchi, “Directional scattering of light by the human retina,” J. Opt. Soc. Am. 42(10), 782–783 (1952). [CrossRef]   [PubMed]  

7. G. J. Van Blokland, “Directionality and alignment of the foveal receptors, assessed with light scattered from the human fundus in vivo,” Vision Res. 26(3), 495–500 (1986). [CrossRef]   [PubMed]  

8. J. M. Gorrand and F. Delori, “A reflectometric technique for assessing photoreceptor alignment,” Vision Res. 35(7), 999–1010 (1995). [CrossRef]   [PubMed]  

9. S. A. Burns, S. Wu, F. Delori, and A. E. Elsner, “Direct measurement of human-cone-photoreceptor alignment,” J. Opt. Soc. Am. A 12(10), 2329–2338 (1995). [CrossRef]   [PubMed]  

10. S. Marcos, S. A. Burns, and J. C. He, “Model for cone directionality reflectometric measurements based on scattering,” J. Opt. Soc. Am. A 15(8), 2012–2022 (1998). [CrossRef]   [PubMed]  

11. J.-M. Gorrand and M. Doly, “Alignment parameters of foveal cones,” J. Opt. Soc. Am. A 26(5), 1260–1267 (2009). [CrossRef]   [PubMed]  

12. P. J. DeLint, T. T. J. M. Berendschot, and D. van Norren, “A comparison of the Optical Stiles-Crawford effect and retinal densitometry in a clinical setting,” Invest. Ophthalmol. Vis. Sci. 39(8), 1519–1523 (1998). [PubMed]  

13. Y. Zhang, B. Cense, J. Rha, R. S. Jonnal, W. Gao, R. J. Zawadzki, J. S. Werner, S. Jones, S. Olivier, and D. T. Miller, “High-speed volumetric imaging of cone photoreceptors with adaptive optics spectral-domain optical coherence tomography,” Opt. Express 14(10), 4380–4394 (2006). [CrossRef]   [PubMed]  

14. R. F. Spaide and C. A. Curcio, “Anatomical correlates to the bands seen in the outer retina by optical coherence tomography: literature review and model,” Retina 31(8), 1609–1619 (2011). [CrossRef]   [PubMed]  

15. R. S. Jonnal, O. P. Kocaoglu, R. J. Zawadzki, S.-H. Lee, J. S. Werner, and D. T. Miller, “The cellular origins of the outer retinal bands in optical coherence tomography images,” Invest. Ophthalmol. Vis. Sci. 55(12), 7904–7918 (2014). [CrossRef]   [PubMed]  

16. R. S. Jonnal, I. Gorczynska, J. V. Migacz, M. Azimipour, R. J. Zawadzki, and J. S. Werner, “The properties of outer retinal band three investigated with adaptive-optics optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 58(11), 4559–4568 (2017). [CrossRef]   [PubMed]  

17. Q. V. Hoang, R. A. Linsenmeier, C. K. Chung, and C. A. Curcio, “Photoreceptor inner segments in monkey and human retina: mitochondrial density, optics, and regional variation,” Vis. Neurosci. 19(04), 395–407 (2002). [CrossRef]   [PubMed]  

18. M. Pircher, J. S. Kroisamer, F. Felberer, H. Sattmann, E. Götzinger, and C. K. Hitzenberger, “Temporal changes of human cone photoreceptors observed in vivo with SLO/OCT,” Biomed. Opt. Express 2(1), 100–112 (2011). [CrossRef]   [PubMed]  

19. A. M. Laties and J. M. Enoch, “An analysis of retinal receptor orientation. I. Angular relationship of neighboring photoreceptors,” Invest. Ophthalmol. 10(1), 69–77 (1971). [PubMed]  

20. A. Roorda and D. R. Williams, “Optical fiber properties of individual human cones,” J. Vis. 2(5), 404–412 (2002). [CrossRef]   [PubMed]  

21. H. J. Morris, L. Blanco, J. L. Codona, S. L. Li, S. S. Choi, and N. Doble, “Directionality of individual cone photoreceptors in the parafoveal region,” Vision Res. 117, 67–80 (2015). [CrossRef]   [PubMed]  

22. C. Miloudi, F. Rossant, I. Bloch, C. Chaumette, A. Leseigneur, J. A. Sahel, S. Meimon, S. Mrejen, and M. Paques, “The negative cone mosaic: A new manifestation of the optical Stiles-Crawford effect in normal eyes,” Invest. Ophthalmol. Vis. Sci. 56(12), 7043–7050 (2015). [CrossRef]   [PubMed]  

23. H. J. Morris, J. L. Codona, L. Blanco, and N. Doble, “Rapid measurement of individual cone photoreceptor pointing using focus diversity,” Opt. Lett. 40(17), 3982–3985 (2015). [CrossRef]   [PubMed]  

24. P. J. Delint, T. T. J. M. Berendschot, and D. van Norren, “Local photoreceptor alignment measured with a scanning laser ophthalmoscope,” Vision Res. 37(2), 243–248 (1997). [CrossRef]   [PubMed]  

25. D. Rativa and B. Vohnsen, “Analysis of individual cone-photoreceptor directionality using scanning laser ophthalmoscopy,” Biomed. Opt. Express 2(6), 1423–1431 (2011). [CrossRef]   [PubMed]  

26. W. Gao, B. Cense, Y. Zhang, R. S. Jonnal, and D. T. Miller, “Measuring retinal contributions to the optical Stiles-Crawford effect with optical coherence tomography,” Opt. Express 16(9), 6486–6501 (2008). [CrossRef]   [PubMed]  

27. A. Wartak, M. Augustin, R. Haindl, F. Beer, M. Salas, M. Laslandes, B. Baumann, M. Pircher, and C. K. Hitzenberger, “Multi-directional optical coherence tomography for retinal imaging,” Biomed. Opt. Express 8(12), 5560–5578 (2017). [CrossRef]   [PubMed]  

28. B. Vohnsen and D. Rativa, “Ultrasmall spot size scanning laser ophthalmoscopy,” Biomed. Opt. Express 2(6), 1597–1609 (2011). [CrossRef]   [PubMed]  

29. D. Scoles, Y. N. Sulai, C. S. Langlo, G. A. Fishman, C. A. Curcio, J. Carroll, and A. Dubra, “In vivo imaging of human cone photoreceptor inner segments,” Invest. Ophthalmol. Vis. Sci. 55(7), 4244–4251 (2014). [CrossRef]   [PubMed]  

30. T. Y. P. Chui, M. Dubow, A. Pinhas, N. Shah, A. Gan, R. Weitz, Y. N. Sulai, A. Dubra, and R. B. Rosen, “Comparison of adaptive optics scanning light ophthalmoscopic fluorescein angiography and offset pinhole imaging,” Biomed. Opt. Express 5(4), 1173–1189 (2014). [CrossRef]   [PubMed]  

31. E. A. Rossi, C. E. Granger, R. Sharma, Q. Yang, K. Saito, C. Schwarz, S. Walters, K. Nozato, J. Zhang, T. Kawakami, W. Fischer, L. R. Latchney, J. J. Hunter, M. M. Chung, and D. R. Williams, “Imaging individual neurons in the retinal ganglion cell layer of the living eye,” Proc. Natl. Acad. Sci. U.S.A. 114(3), 586–591 (2017). [CrossRef]   [PubMed]  

32. A. E. Elsner, S. A. Burns, J. J. Weiter, and F. C. Delori, “Infrared imaging of sub-retinal structures in the human ocular fundus,” Vision Res. 36(1), 191–205 (1996). [CrossRef]   [PubMed]  

33. R. Ragazzoni, “Pupil plane wavefront sensing with an oscillating prism,” J. Mod. Opt. 43(2), 289–293 (1996). [CrossRef]  

34. I. Iglesias, R. Ragazzoni, Y. Julien, and P. Artal, “Extended source pyramid wave-front sensor for the human eye,” Opt. Express 10(9), 419–428 (2002). [CrossRef]   [PubMed]  

35. V. Akondi, S. Castillo, and B. Vohnsen, “Digital pyramid wavefront sensor with tunable modulation,” Opt. Express 21(15), 18261–18272 (2013). [CrossRef]   [PubMed]  

36. J.-C. Baritaux, C. R. Chan, J. Li, and J. Mertz, “View synthesis with a partitioned-aperture microscope,” Opt. Lett. 39(3), 685–688 (2014). [CrossRef]   [PubMed]  

37. O. Marti, A. Ruf, M. Hipp, H. Bielefeldt, J. Colchero, and J. Mlynek, “Mechanical and thermal effects of laser irradiation on force microscope cantilevers,” Ultramicroscopy 42–44, 345–350 (1992). [CrossRef]  

38. M. Kreysing, L. Boyde, J. Guck, and K. J. Chalut, “Physical insight into light scattering by photoreceptor cell nuclei,” Opt. Lett. 35(15), 2639–2641 (2010). [CrossRef]   [PubMed]  

39. D. Valente and B. Vohnsen, “Retina-simulating phantom produced by photolithography,” Opt. Lett. 42(22), 4623–4626 (2017). [CrossRef]   [PubMed]  

40. B. Vohnsen, A. Carmichael Martins, S. Qaysi, and N. Sharmin, “Hartmann-Shack wavefront sensing without a lenslet array using a digital micromirror device,” Appl. Opt. 57(22), E199–E204 (2018). [CrossRef]   [PubMed]  

41. J. M. Bueno and B. Vohnsen, “Polarimetric high-resolution confocal scanning laser ophthalmoscope,” Vision Res. 45(28), 3526–3534 (2005). [CrossRef]   [PubMed]  

42. V. C. Smith, J. Pokorny, and K. R. Diddie, “Color matching and Stiles-Crawford effect in central serous choroidopathy,” Mod. Probl. Ophthalmol. 19, 284–295 (1978). [PubMed]  

43. D. G. Birch, M. A. Sandberg, and E. L. Berson, “The Stiles-Crawford effect in retinitis pigmentosa,” Invest. Ophthalmol. Vis. Sci. 22(2), 157–164 (1982). [PubMed]  

44. J. E. E. Keunen, V. C. Smith, J. Pokorny, and M. B. Mets, “Stiles-Crawford effect and color matching in Stargardt’s disease,” Am. J. Ophthalmol. 112(2), 216–217 (1991). [CrossRef]   [PubMed]  

45. M. J. Kanis, R. P. L. Wisse, T. T. J. M. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone-photoreceptor integrity in aging macula disorder,” Invest. Ophthalmol. Vis. Sci. 49(5), 2077–2081 (2008). [CrossRef]   [PubMed]  

46. A. Meadway, X. Wang, C. A. Curcio, and Y. Zhang, “Microstructure of subretinal drusenoid deposits revealed by adaptive optics imaging,” Biomed. Opt. Express 5(3), 713–727 (2014). [CrossRef]   [PubMed]  

47. N. S. Abdelfattah, H. Zhang, D. S. Boyer, P. J. Rosenfeld, W. J. Feuer, G. Gregori, and S. R. Sadda, “Drusen volume as a predictor of disease progression in patients with late age-related macular degeneration in the fellow eye,” Invest. Ophthalmol. Vis. Sci. 57(4), 1839–1846 (2016). [CrossRef]   [PubMed]  

48. B. Wang, Q. Zhang, R. Lu, Y. Zhi, and X. Yao, “Functional optical coherence tomography reveals transient phototropic change of photoreceptor outer segments,” Opt. Lett. 39(24), 6923–6926 (2014). [CrossRef]   [PubMed]  

49. Y. Lu, B. Wang, D. R. Pepperberg, and X. Yao, “Stimulus-evoked outer segment changes occur before the hyperpolarization of retinal photoreceptors,” Biomed. Opt. Express 8(1), 38–47 (2017). [CrossRef]   [PubMed]  

50. J. C. Horton, A. B. Parker, J. V. Botelho, and J. L. Duncan, “Spontaneous regeneration of human photoreceptor outer segments,” Sci. Rep. 5(1), 12364 (2015). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: Directional light scattering captured for the parafoveal region of subject DV showing four simultaneous images and the sum of the images in the middle (cone photoreceptors).
Visualization 2: The corresponding difference images of Visualization 1.
