
High-speed adaptive optics line-scan OCT for cellular-resolution optoretinography

Open Access

Abstract

Optoretinography–the non-invasive, optical imaging of light-induced functional activity in the retina–stands to provide a critical biomarker for testing the safety and efficacy of new therapies as well as their rapid translation to the clinic. Optical phase change in response to light, as readily accessible in phase-resolved OCT, offers a path towards all-optical imaging of retinal function. However, typical human eye motion adversely affects phase stability. In addition, recording fast light-induced retinal events necessitates high-speed acquisition. Here, we introduce a high-speed line-scan spectral domain OCT with adaptive optics (AO), aimed at volumetric imaging and phase-resolved acquisition of retinal responses to light. By virtue of parallel acquisition of an entire retinal cross-section (B-scan) in a single high-speed camera frame, depth-resolved tomograms at speeds up to 16 kHz were achieved with high sensitivity and phase stability. To optimize spectral and spatial resolution, an anamorphic detection paradigm was introduced, enabling improved light collection efficiency and signal roll-off compared to traditional methods. The benefits in speed, resolution and sensitivity were exemplified in imaging nanometer-millisecond scale light-induced optical path length changes in cone photoreceptor outer segments. With 660 nm stimuli, individual cone responses readily segregated into three clusters, corresponding to long, middle, and short-wavelength cones. Recording such optoretinograms on spatial scales ranging from individual cones, to 100 µm-wide retinal patches offers a robust and sensitive biomarker for cone function in health and disease.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optoretinography, defined generally as the non-invasive, optical imaging of light-induced functional activity in the retina, has the potential to serve as an effective biomarker for retinal health and provide new insight into basic visual processes. Despite its accessibility, the retina presents unique challenges for assessing its physiology in vivo in humans. The retinal reflectivity is low, and the numerical aperture of the eye’s optics further limits the light collection efficiency and lateral resolution of imaging. These challenges are severely compounded by fixational eye movements, which occur on spatial scales far greater than the size of retinal neurons. The lack of a non-invasive and sensitive paradigm for the assessment of retinal function poses one of the most fundamental hurdles for developing new therapies and testing their safety and efficacy in humans. Electroretinography (ERG) is the current gold standard, despite the invasiveness of corneal electrodes and severe limits on the spatial resolution and signal specificity of the elicited electrical activity [1,2]. Established electrophysiological tools such as patch clamp [3] and microelectrode arrays [4] are not yet feasible for the study of retinal physiology in humans. Calcium and voltage-sensitive fluorescent dyes, while extremely powerful as tools for probing cellular physiology in rodent [5] and non-human primate retina [6], face regulatory barriers due to potential toxicity. Further, the bright visible illumination required for fluorescence excitation inadvertently stimulates photoreceptors.

Given how optical coherence tomography (OCT), scanning laser ophthalmoscopes (SLO) and fundus cameras are ubiquitous in ophthalmology, incorporating retinal function assessment within these existing imaging platforms would enable their widespread accessibility. Recently, advanced techniques incorporated into such platforms have been used to visualize spontaneous and light-induced retinal function in vivo in humans [7–14]. Changes in light scattering or intensity [7,12,14–16], absorption [17–19], polarization [20–22] and optical path length (OPL) [10,23–25] have been used as all-optical surrogates for function. In addition, variants of motion contrast incorporated into an SLO and OCT serve as an effective biomarker for visualizing blood flow and perfusion [26–30].

Light-induced functional activity in cone photoreceptors has been especially accessible, given their higher relative reflectance and amenability to high-resolution en face imaging with adaptive optics (AO). Further, the extensive characterization of cone photopigments using genetics, densitometry, psychophysics and electrophysiology [3,19,31,32] has allowed these intrinsic optical signals to be related to the underlying mechanisms. For instance, Cooper et al. [12] recently established that changes in near-infrared image intensity following visible stimuli, such as those observed in AO-SLO [7] and AO fundus camera [8] imaging, are mediated by the photoisomerization of pigment chromophores.

Direct measurements of depth-resolved optical phase changes in response to light are possible in OCT and have provided a sensitive probe for assessment of light-induced function. As examples, full-field swept-source OCT [23], point-scan swept-source OCT [24], and point-scan spectral domain OCT [10,25] have recently been used for phase-sensitive imaging of light-induced OPL change in the cone outer segments. Each modality must balance speed, sensitivity, and resolution in order to maximize performance. Full-field OCT loses confocal gating across both lateral dimensions, which degrades the x-y spatial resolution and contrast. On the other hand, phase-stable acquisition at high volume rates (hundreds of Hz) with no moving parts is possible, which allows computational aberration correction [33,34]. The tunable laser illumination source has to be swept in wavelength at relatively slow rates to accommodate the commercially available 2D CMOS sensor acquisition speeds, restricting the overall time taken per volume to greater than 6 ms [23]. Point-scan OCT in swept-source and spectral domain configurations acquires volumes while maintaining confocal gating in both the x- and y-dimensions, and with AO provides cellular resolution and excellent contrast [35–37]. The speed, however, is limited by the frequency of the swept laser source, the scanners and the spectrometer to volume rates of tens of Hz, and entails precise motion correction to reveal cellular structures [38,39]. To reach a favorable trade-off in speed and resolution between point-scan and full-field OCT, line-scan spectral-domain OCT [40–42] and line-scan swept-source OCT [43] have been demonstrated. The line-scan geometry maintains confocal gating along one dimension, while achieving high speeds and phase stability due to the parallel acquisition of multiple effective A-lines. Specifically, in line-scan spectral domain operation, one B-scan is captured per 2D camera frame. Ginner et al. [42] demonstrated its application for computational aberration correction and Zhang et al. [41] incorporated AO to improve resolution in retinal cross-sections, with maximum B-scan rates of 2500 Hz and 500 Hz respectively. In Zhang et al., AO-corrected OCT volumes and en face images were not obtained. In addition, Ginner et al. quantified the median axial velocity across 36 subjects and concluded that a B-scan rate greater than 3200 Hz was necessary to avoid greater than $\pm\pi$ radian phase change for interferometric applications.

Here we present the first volumetric, high-speed line-scan spectral domain OCT with hardware adaptive optics, aimed at phase-resolved acquisition of light-induced retinal activity. We detail the design, development and characterization of the multimodal imaging system. The feasibility of imaging light-induced OPL changes in cone outer segments at high temporal resolution, both with and without AO, is demonstrated. Measurements of such light-induced optical changes in the retina are referred to as an optoretinogram (ORG), in analogy to the classical ERG. This terminology was first proposed by Mulligan et al. [44,45] and adopted recently by our group and others to refer to stimulus-induced optical imaging of retinal responses using OCT phase and intensity signatures [46–48]. Looking ahead, we suggest that the ORG terminology be used independent of the platform (OCT, SLO or fundus camera), type of optical change (intensity or phase) or spatial scale (individual cells to collections of neurons) to refer to light-induced changes in the retina in general.

2. Methods

2.1 Multimodal line-scan imager

A multimodal, line-scan retinal imager consisting of a line-scan spectral domain OCT and line-scan ophthalmoscope (LSO) was designed and constructed in free space as shown in Fig. 1. Below, we outline the system layout, specifications, and characterization.


Fig. 1. Optical layout of the multimodal line-scan retinal camera, consisting of a line-scan spectral domain OCT, a line-scan ophthalmoscope (LSO) and a visual stimulation channel. The top inset shows the measured axial PSF, the FWHM and the best fit. The horizontal perspective (top view) beam path is shown and the different channels (OCT, wavefront sensing, and visual stimulation) are denoted with different colors. SLD: superluminescent diode, RC: reflective collimator, CL: achromatic cylindrical doublet lenses, BS: beam splitter, L: achromatic doublet lenses, M: mirror, GS: galvo scanner, DG: diffraction grating, ND: neutral density, DM: deformable mirror, LPF: long pass filter, SPF: short pass filter, LED: light emitting diode, WS: wavefront sensor, P: pupil plane, I: imaging plane. The bottom left inset shows the anamorphic telescope that replaces lens L11 and is described in detail in Fig. 3. Focal lengths of the lenses used are L1=L2 = 200 mm, L3=L6 = 250 mm, L4=L5 = 500 mm, L7=L8 = 400 mm, L9 = 300 mm, L10 = 200 mm, L12 = 150 mm, L13 = 200 mm, L14 = 100 mm, L15 = 175 mm, L16 = 125 mm, CL1=CL2=CL4 = 100 mm, CL3 = 250 mm.


2.1.1 System layout and specifications

Three illumination and detection channels were incorporated. A superluminescent diode (M-S-840-B-I-20, Superlum, Ireland; λo = 840 nm, Δλ = 50 nm, axial resolution = 7.7 µm in air) served as the illumination source for OCT and LSO; a 980 ± 10 nm superluminescent diode (IPSDD0906, Inphenix, USA) served as the illumination source for wavefront sensing; and a 660 ± 10 nm LED in Maxwellian view was used for retinal stimulation. The three detection channels were dedicated to OCT, LSO and a Shack-Hartmann wavefront sensor.

2.1.2 Line-scan spectral domain OCT

The basic principle behind the operation of an OCT in line-scan spectral domain configuration involves illuminating the sample with a line field. In detection, the backscattered linear spatial profile is diffracted into its spectral constituents with a linear diffraction grating, yielding a two-dimensional image on an areal camera, corresponding to the spatial and spectral components along the two axes. Traditional OCT processing provides a cross-sectional B-scan in one areal camera frame, the rate of which is limited by the maximum camera frame rate. For volumetric imaging, the line field is scanned using a 1-dimensional (1-D) scanner. The detailed system layout is as follows.
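To make the per-frame reconstruction concrete, the following is a minimal sketch in Python/NumPy of how one 2D camera frame (spectrum × line) could be converted into a B-scan; the array shapes, wavelength axis and background handling are illustrative assumptions, not the authors' processing code.

```python
# Minimal sketch of line-scan SD-OCT reconstruction: one B-scan per 2D camera frame.
# The frame shape, wavelength axis and background handling here are assumptions for
# illustration only.
import numpy as np

def reconstruct_bscan(frame, background, wavelengths_nm):
    """frame, background: (n_lambda, n_x) arrays; wavelengths_nm: (n_lambda,) axis."""
    interferogram = frame - background                    # remove reference/DC background
    k = 2 * np.pi / wavelengths_nm                        # non-uniform wavenumber axis
    k_uniform = np.linspace(k.min(), k.max(), k.size)     # uniform k grid for the FFT
    resampled = np.empty_like(interferogram, dtype=float)
    for x in range(interferogram.shape[1]):               # resample each spatial column
        resampled[:, x] = np.interp(k_uniform, k[::-1], interferogram[::-1, x])
    depth = np.fft.fft(resampled, axis=0)[: k.size // 2]  # FFT along the spectral axis
    return np.abs(depth)                                  # (n_z, n_x) B-scan amplitude

# Example with the paper's region of interest: 768 spectral x 512 spatial pixels
wavelengths = np.linspace(814.0, 866.0, 768)              # ~52 nm around 840 nm (assumed)
frame = np.random.rand(768, 512)
bscan = reconstruct_bscan(frame, np.zeros_like(frame), wavelengths)
```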

A reflective collimator (RC) was used to provide an effective beam diameter of 4 mm for the OCT channel. A plate beamsplitter divided the OCT illumination ∼30%/70% into the sample and reference arms respectively. A cylindrical lens CL1 (effective focal length = 100 mm) generated a 4 mm line field at the entrance pupil (P4) of the system by focusing the collimated 4 mm spherical beam from the reflective collimator along one dimension. An adjustable pupil iris was placed at the entrance pupil to control the beam size.

A 1-D galvo scanner (GS), a deformable mirror (DM) and the eye’s pupil (P1) were optically conjugated using achromatic lens-based afocal telescopes (L1/L2, L3/L4, L5/L6). The beam size at the eye’s pupil was 4 mm and generated a 2 deg line field on the retina. The power was 3.2 mW at the eye’s pupil, well below the maximum permissible exposure for extended retinal illumination. The reference arm consisted of a second cylindrical lens CL2 (effective focal length = 100 mm) that re-collimated the OCT beam. The collimated beam was then propagated to the final reference arm mirror via three afocal telescopes. L8 and L9 were single achromatic lenses, but the beam propagated through them twice after reflection from the mirrors. The lenses in the reference arm served two purposes: first, compensating the dispersion generated in the sample arm by the afocal lens-based telescopes, and second, reducing the diffraction of the beam generated by long-distance propagation (∼4 m total double pass from beamsplitter to eye and back). The aberrations introduced by the off-axis use of lenses L8 and L9, as shown in Fig. 1, are cancelled upon reflection from the flat mirror in the return path.

The final mirror of the reference arm was placed on a manual translation stage to match the path lengths of the sample and reference arms and create an interferogram. The artificial pupil iris at P4 was set to 4 mm to reduce the effect of the eye’s aberrations in non-AO imaging. For AO imaging, the iris was opened fully, such that the deformable mirror aperture formed the limiting pupil, corresponding to 6.75 mm at the eye.

The sample and reference arm beams were combined in the detection arm to generate interference. The beams propagated via an afocal telescope (L11 and L12), a 1200 lp/mm transmissive grating and a focusing lens L13. An adjustable slit was placed at the image plane I5 to avoid spurious back reflections from the lenses and to provide partial confocality for improved image contrast. The slit width was adjusted empirically based on visual examination of the LSO image contrast.

A fast 2D CMOS camera (Photron, FASTCAM Mini Ax200, pixel size = 20 µm) placed at the focus of lens L13 captured the interferogram. The grating was optically conjugated to the ocular pupil plane. The camera was set to record a region-of-interest of 768 (spectrum dimension, λ) × 512 (line dimension, x) pixels. The spectral resolution corresponded to an axial range of 2.6 mm. The maximum speed of the camera can be set to 16.2 kHz at this region-of-interest, defining the maximum B-scan or M-scan rate achievable. The lens L11 was replaced with an anamorphic telescope (Fig. 1, bottom left inset) consisting of two cylindrical lenses in order to optimize spatial and spectral resolution simultaneously (section 2.1.6). The top inset in Fig. 1 shows the measured full width at half maximum (FWHM) of the axial PSF of the system, equal to 7.6 µm at an axial depth of 480 µm. This is similar in magnitude to the FWHM of the coherence function computed theoretically (7.7 µm) for the SLD source spectrum (λo = 840 nm, Δλ = 50 nm, double-humped Gaussian spectrum).

Note that the beams shown in Fig. 1 represent the horizontal perspective (top view), and the detection path assumes a mirror placed at the retina. In Fig. 2, we illustrate how the beam profiles differ between the horizontal (top view) and vertical (side view) perspectives. The backscattered light from the retina propagated via the same path as the illumination, but loses the cylindrical phase due to the scattering properties of the retina. The bottom panel of Fig. 2 shows the beam propagation cross-sections at the different pupil and imaging planes in illumination and detection. Note that due to the propagation of the cylindrical wavefront through the system, the beam profile is linear at all the pupil and imaging planes in illumination, but with the orientation at the pupil planes rotated by 90 degrees relative to that at the imaging planes. The line field illuminating the retina effectively creates a set of beamlets emanating near-spherical wavefronts when backscattered, i.e. the cylindrical phase front in illumination is lost in the retinal backscatter [49]. Therefore, in detection, the series of linear beamlets creates a circular beam profile at each pupil plane and a linear profile at each imaging plane. The resulting detection path for the on-axis field point is shown in Fig. 2. The backscattered beam overfilled the eye’s pupil, leading to a larger effective beam diameter at the pupil plane in detection.


Fig. 2. The horizontal (top view) and vertical (side view) perspectives of the optical path are shown. The illumination and detection paths are shown with filled red and solid red respectively. The detection path is shown for the on-axis field point. Note the different beam profiles at the pupil and imaging planes in detection compared to illumination, shown as a cross-sectional view in the bottom panels. Note the beam has a linear and circular profile respectively in illumination and detection, at the pupil plane P1-P4. The beam profiles are linear at all imaging planes in illumination and detection except at the 2D camera. Also note that the beam is scanning in the image plane (I1-I3), which is not shown in the cross sectional view for simplicity. DG: diffraction grating. L: lens, CL: cylindrical lens, I: imaging plane, P: pupil plane.


2.1.3 Wavefront sensing and correction

A custom-made Shack-Hartmann wavefront sensor and a DM (DM97-15, Alpao, France) were used for measuring and correcting ocular aberrations, respectively. The lenslet array had a pitch of 150 µm and a focal length of 6.7 mm. The Shack-Hartmann spot pattern was captured on a CCD camera with a pixel size of 6.45 µm. A short-pass filter (SPF) with cut-on at 903 nm was used to combine the wavefront sensing (980 ± 10 nm) and OCT beams (840 ± 25 nm). The wavefront sensing illumination followed the same optical path as the OCT sample arm. A smaller beam diameter of 1.5 mm at the eye’s pupil was used and spatially offset from the corneal apex to avoid spurious reflections. In detection, an afocal telescope L15-L16 (Fig. 1) optically conjugated the entrance pupil (P4) and demagnified the beam to a 4.8 mm diameter at the lenslet array. An adjustable iris was placed at the focus of L15 to further avoid spurious lens and corneal apex backreflections. Custom software allowed closed-loop operation of the wavefront sensor and deformable mirror. The RMS wavefront error was below 0.05 µm for AO correction over a 6.5 mm pupil.
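As a rough illustration of the wavefront-sensing geometry implied by these numbers, the following back-of-envelope calculation estimates the number of subapertures across the pupil and the subaperture size projected to the eye; the derived values are our own and are not stated in the text.

```python
# Back-of-envelope wavefront-sensor sampling implied by the stated parameters.
# The derived quantities are illustrative, not values reported in the paper.
lenslet_pitch_mm = 0.150       # lenslet pitch
beam_at_lenslets_mm = 4.8      # demagnified pupil diameter at the lenslet array
pupil_at_eye_mm = 6.75         # AO-limited pupil diameter at the eye

lenslets_across = beam_at_lenslets_mm / lenslet_pitch_mm                           # ~32 subapertures
subaperture_at_eye_mm = lenslet_pitch_mm * pupil_at_eye_mm / beam_at_lenslets_mm   # ~0.21 mm
print(f"{lenslets_across:.0f} lenslets across the pupil, "
      f"{subaperture_at_eye_mm:.2f} mm subaperture at the eye")
```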

2.1.4 Line-scan ophthalmoscope

The LSO used the same sample and detection arm path as the OCT up to the grating. The zeroth-order beam of the grating, which is typically discarded in spectral-domain OCT systems, was used to build an LSO by placing a line-scan camera (Basler Sprint spL2048-70km, pixel size 10 µm) behind a focusing lens L14. For LSO operation, the intense backreflection from the reference arm was manually blocked. The LSO and OCT cameras were optically conjugated using a model eye. The LSO en face images allowed precise adjustment of the image focus.

2.1.5 Visual stimulation

The visible stimulus channel was combined with the remaining beams using a long-pass filter (LPF) (cutoff: 685 nm) and introduced into the eye after the last telescope (L5-L6). A 660 ± 10 nm LED was focused at the eye’s pupil with an achromatic lens and provided a homogeneous visual field area of ∼37.5 deg². A custom LED driver board was used to control the intensity of the LED and to interface timing with the OCT system. The driver board implemented a pulse frequency modulation (PFM) control paradigm and the LED was mounted on aluminum heat sinks, which together ensured that the spectral output of the LED did not change as a function of intensity. An FPGA running custom VHDL code with an internal clock of 133 MHz generated the PFM and monitored a synchronized TTL pulse from the main OCT data acquisition board to determine when to trigger the stimulus. The maximum delay between an OCT TTL signal state change and the LED output change was two clock pulses (∼15 ns). The pulse width in the PFM paradigm is 2 µs, which far exceeds the Nyquist criterion for the shortest duration stimulus used in the experiments (5 ms).

2.1.6 Anamorphic detection layout and specification

In traditional point-scan spectral domain OCT, the spectrometer elements – the grating, focusing lens and line-scan camera – are chosen to optimize spectral resolution or axial range. Of these parameters, the line-scan camera pixel size and grating choices are restricted to those available from commercial suppliers. Therefore, the focal length of the focusing lens is selected to obtain the desired spectral resolution for a given source spectral bandwidth.

In addition to spectral resolution, the detection optics in line-scan spectral domain OCT need to achieve optimal spatial sampling along the line/spatial dimension in order to resolve fine retinal structures such as photoreceptors. The spatial resolution depends on the overall system magnification and thus differs in its governing parameters from those limiting spectral resolution. Achieving an optimal trade-off between spectral and spatial resolution requires independent control over the system magnification in the two lateral dimensions, such that the square pixel sampling of typical areal cameras is sufficient. Conventional spherical lenses do not offer sufficient degrees of freedom to optimize these factors independently. An anamorphic configuration consisting of two cylindrical lenses offers a solution to optimize magnification asymmetrically. In an LSO, Lu et al. [50] demonstrated that such an anamorphic telescope allowed adequate spatial sampling and improved signal collection efficiency. For line-scan spectral domain OCT, we posited that such a configuration would reduce cross-talk between adjacent pixels in the spectral dimension, significantly improving signal roll-off as a result.

Figure 3 shows the anamorphic detection layout and performance, assuming the light is backscattered from the retina (Fig. 2, solid red line). In Fig. 3(a), the image plane Icirc (I4 in Fig. 1) prior to spherical achromat LA (L1 in Fig. 1) contains spherically symmetric point-spread-functions (PSFs) at each field point. Once collimated, the beam profile at the system’s exit pupil plane Pcirc (P4 in Fig. 1), diameter 6.75 mm, is also spherically symmetric. Next, a pair of positive cylindrical lens achromats CLx (EFL = 250 mm, CL3 in Fig. 1) and CLy (EFL = 100 mm, CL4 in Fig. 1) were placed, with their power axes oriented orthogonal to each other. The distance between the two lenses was set to the difference of their focal lengths. At the imaging plane, Ianamorphic, (I5 in Fig. 1) an elliptical PSF is obtained, whose major and minor axes sizes correspond to the ratio of the cylindrical lens focal lengths, equal to 2.5. Once the light from this imaging plane is collimated with a spherical lens (LB (L12 in Fig. 1)), the pupil plane Panamorphic (P5 in Fig. 1) containing the grating is similarly elliptical in shape, with the same ratio of its major and minor axes. Effectively, this configuration provides 2.5× higher sampling in the horizontal (spectral) compared to the vertical (spatial) dimension. In principle, the focal lengths of CLx and CLy can be chosen according to the camera, grating and source bandwidth in consideration.


Fig. 3. Anamorphic detection to optimize spatial and spectral resolution. (a) Projection view (trimetric) of the anamorphic detection scheme for line-scan ophthalmoscopy. Note that the figure shows singlet lenses for simplicity; however, all lenses were achromatic doublets. The pupil plane (Pcirc and Panamorphic) and imaging plane dimensions (Icirc and Ianamorphic) scale along their axes depending on the cylindrical lens (CLx and CLy) focal lengths. (b) The circular and anamorphic PSFs and their 1-dimensional cross-sections are shown along the line and spectral dimensions, overlaid on the areal detector pixels, illustrated to scale. The number of camera pixels contained within one Airy disk diameter (ADD) along the line and spectrum dimensions is indicated for both circular and anamorphic PSFs. Along the line dimension, the spatial sampling is identical between both, as evident in the 1-dimensional cross-section; however, the anamorphic PSF offers increased light collection per pixel. Along the spectrum dimension, the resolution is increased by 2.5× with the anamorphic PSF in comparison to the circular PSF, as evidenced by the greater separation in the 1-dimensional cross-section. (c) Signal roll-off versus depth is shown to demonstrate the effect of the spherical and anamorphic PSFs. Also shown is the measured signal roll-off for our anamorphic detection spectrometer, which shows good correspondence with the theoretical computation and demonstrates the substantial improvement over circular PSFs. (d) The theoretical imaging performance of anamorphic detection represented as spot diagrams for different field points and wavelengths, along with the diffraction-limited Airy ellipses for reference. The spot diagrams at three imaging planes are shown, corresponding to (i) the imaging plane after the cylindrical lenses (Ianamorphic) prior to diffraction via the grating, (ii) the detector plane, with an ideal paraxial lens as the spectrometer focusing element, to demonstrate the effect of the cylindrical lens telescope plus diffraction on imaging performance, and (iii) the detector plane with the commercial achromatic doublet used as the focusing optic, to demonstrate the effect of all optics on imaging, as realized practically in the system.


The spectrometer was designed to capture a 52 nm spectral range on 768 pixels of the camera, providing a spectral resolution of 0.07 nm/pixel. This spectral resolution was achieved via a 1200 line-pairs/mm grating, a 200 mm spectrometer lens and an elliptical beam on the grating, with major and minor axes equal to 10 mm and 4 mm respectively. The anamorphic ratios were chosen to provide 5 camera pixels within one diffraction-limited Airy disk diameter. The Airy disk diameter was computed assuming the 4 mm minor axis size for the pupil, a center wavelength of 840 nm and a spectrometer lens focal length of 200 mm. The spatial resolution was calculated by accounting for the camera pixel size (20 microns) and the magnification between the eye’s retinal plane and the detector plane, equal to 16.67. Each detector pixel thus subtended 1.2 microns on the retina, providing a Nyquist-limited sampling resolution of 2.4 microns.
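The quoted sampling figures can be checked with standard spectral domain OCT relations; the short calculation below is our own consistency check and not part of the design procedure described above.

```python
# Consistency check of the spectrometer and sampling numbers quoted above, using
# standard SD-OCT relations. Illustrative only.
lam0_nm = 840.0                           # center wavelength
span_nm, n_pix = 52.0, 768                # spectral range captured on the camera
dlam_nm = span_nm / n_pix                 # ~0.07 nm/pixel spectral sampling
z_max_mm = lam0_nm**2 / (4 * dlam_nm) * 1e-6    # single-sided axial range, ~2.6 mm

pixel_um, magnification = 20.0, 16.67     # camera pixel size, retina-to-camera magnification
retinal_sampling_um = pixel_um / magnification  # ~1.2 um per pixel on the retina
nyquist_um = 2 * retinal_sampling_um            # ~2.4 um Nyquist-limited sampling resolution

anamorphic_ratio = 250.0 / 100.0          # CLx/CLy focal lengths -> 2.5x finer spectral sampling
print(dlam_nm, z_max_mm, retinal_sampling_um, nyquist_um, anamorphic_ratio)
```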

In Fig. 3(b), consider the difference between a spherical and an anamorphic PSF in optimizing resolution, signal collection and crosstalk in both dimensions. For the line/spatial dimension oriented along the major axis of the elliptical PSF, the anamorphic PSF allowed increased signal collection per pixel, in addition to adequate spatial sampling. A significant improvement is expected for the spectrum dimension oriented along the minor axis of the elliptical PSF, because of the reduced cross-talk between adjacent pixels. The most appreciable advantage of the reduced spectral cross-talk is expected in the improved OCT signal roll-off performance (Fig. 3(c)), provided the PSF is diffraction-limited for all field and spectrum points (Fig. 3(d)).

Signal roll-off under conditions of a spherical and an anamorphic PSF was theoretically computed. For the spherical PSF, the diffraction-limited Airy disk diameter (ADD) encompassed 5 camera pixels along both dimensions. For the anamorphic PSF, the ADD was reduced to 2 pixels in the spectral dimension, while the ADD in the spatial dimension was the same. These PSF dimensions were computed based on the spectrometer lens (L13 in Fig. 1) focal length, the camera pixel size and the beam size at the pupil plane Panamorphic containing the grating. First, the interference modulation in wavelength space was created as a function of OPL or axial depth and superimposed on the measured source spectrum. The interference modulation was convolved with the circular and elliptical PSFs to reflect the effect of diffraction due to the finite size of the PSFs. In principle, the PSF dimensions at each spectral position will be weighted by the wavelength, but this wavelength-dependent variation was not taken into account for the short 50 nm bandwidth source used here. Instead, all PSFs were computed for a center wavelength of 840 nm. Once convolved, the interferograms were subjected to traditional OCT reconstruction – k-space remapping, Fourier transform and mirror image removal. The peak signal at each axial depth is plotted in decibels (dB) in Fig. 3(c) for both the anamorphic and spherical PSFs. The improvement in signal roll-off is evident for the anamorphic PSF with this theoretical treatment. Figure 3(c) also shows that the measured signal roll-off agrees well with theory for the anamorphic detection. The 6 dB signal roll-off occurred experimentally at 2.1 mm. At 1.5 mm and 2.1 mm axial depths, the anamorphic PSF outperforms the circular PSF by 10.2 dB and 29.8 dB, respectively.
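The roll-off computation described above can be sketched as follows; the Gaussian approximation to the Airy PSF (FWHM ≈ 0.42 × ADD), the flat source spectrum and the depth sampling are our assumptions, and the output is the relative advantage of the anamorphic (2-pixel) over the circular (5-pixel) PSF rather than a reproduction of Fig. 3(c).

```python
# Sketch of the roll-off comparison: spectral fringes are blurred by the detection PSF
# (ADD of 5 px circular vs 2 px anamorphic) before k-remapping and FFT. The Gaussian
# approximation to the Airy core and a flat source spectrum are illustrative assumptions.
import numpy as np

n_pix, lam0, span = 768, 840e-9, 52e-9
lam = np.linspace(lam0 - span / 2, lam0 + span / 2, n_pix)
k = 2 * np.pi / lam
k_uniform = np.linspace(k.min(), k.max(), n_pix)

def peak_signal(depth_m, airy_diameter_px):
    fringe = 0.5 * (1 + np.cos(2 * k * depth_m))             # interference modulation vs k
    sigma = 0.42 * airy_diameter_px / 2.355                   # Gaussian approx. of the Airy core
    x = np.arange(-10, 11)
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    blurred = np.convolve(fringe, psf / psf.sum(), mode="same")
    resampled = np.interp(k_uniform, k[::-1], blurred[::-1])  # k-space remapping
    a_line = np.abs(np.fft.fft(resampled - resampled.mean()))[: n_pix // 2]
    return a_line.max()

for depth_mm in (0.5, 1.5, 2.1):
    gain = 20 * np.log10(peak_signal(depth_mm * 1e-3, 2) / peak_signal(depth_mm * 1e-3, 5))
    print(f"{depth_mm} mm: anamorphic advantage ~{gain:.1f} dB")
```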

Figure 3(d) shows the spatial imaging performance across discrete field and wavelength points of the system modeled in ZEMAX optical design software. Note that the ellipses in the Fig. 3(d) spot diagrams denote the diffraction-limited “Airy ellipses” (as opposed to Airy disks), because the diameters of their major and minor axes are determined by the powers of the cylindrical lenses. The ZEMAX model started from the pupil plane Pcirc, with the specified imaging field points and wavelengths. It contained the rest of the optical elements in the detection path (CLx, CLy and LB), plus a transmissive 1200 lp/mm grating and a focusing lens (L13 in Fig. 1, EFL = 200 mm) prior to the camera image plane. The distances between CLx and CLy, and between the focusing lens and the image plane, were optimized in the model to provide minimum spot sizes at Ianamorphic and the camera sensor, respectively. Three cases are shown to separate the contributions of the cylindrical lens telescope, diffraction due to the grating and the achromatic doublet spectrometer focus lens. The first column corresponds to the spot diagrams at the slit image plane Ianamorphic, where performance is diffraction-limited. The spot diagrams after diffraction through the grating and focusing onto the camera by an ideal paraxial lens are shown in the middle column, also demonstrating diffraction-limited performance. The last column shows the same, but with the off-the-shelf achromatic doublet focus lens (L13 in Fig. 1) used in the system. The field curvature typically observed in spectrometer designs is apparent with the doublet. Despite the use of an off-the-shelf achromat, the spot sizes for all field and wavelength points, except 0 deg/840 nm, are diffraction-limited. A custom-designed achromat or a lower grating pitch leading to a smaller Bragg angle of diffraction ought to improve performance further for such an anamorphic configuration. Overall, anamorphic detection improves spectral resolution, signal roll-off and light collection efficiency, without adversely affecting imaging resolution. Further, it allows customizing camera regions-of-interest for increased acquisition speeds, and the use of commercially available parts, i.e. transmissive gratings and achromatic focusing optics.

2.3 Acquisition and processing

For OCT, the 2D images from the camera are stored in the on-board memory and transferred via a dedicated ethernet connection to the computer. For LSO, the image stream from the line-scan camera is transferred via a frame grabber (NI PCIe-1427) to the same computer. Synchronization of the galvo scanner, OCT and LSO acquisition, and the visual stimulus was achieved by an analog output board (NI PCIe-6738). Custom software was developed in LabVIEW to display the LSO image stream and real-time OCT B-scan reconstruction.

Typical OCT reconstruction steps were followed. Each acquired B-scan was background subtracted to improve the SNR and the phase noise. The background was calculated as the average of 500 images recorded with only the reference beam. The background-subtracted B-scans in each OCT volume $I(x,y,\lambda)$ were resampled to $I(x,y,k)$ and Fourier transformed along the spectrum dimension for each pixel x on the line to extract the depth information in the form of the complex numbers $I(x,y,z)$.

$$I(x,y,z) = F\{ I(x,y,k) \} = A(x,y,z)\exp(-i\varphi(x,y,z))$$
For retrieving structure, only the absolute value of the complex number was used. Each OCT volume was segmented using open-source software [51] to obtain en face images by taking maximum intensity projections (MIPs) of 3 pixels centered at the cone inner-outer segment junction (ISOS) and cone outer segment tips (COST). The segmentation step also provided axial shifts between different volumes in a time series due to eye motion. The series of en face intensity images at the COST layer were spatially registered using a strip-based registration algorithm [52] and averaged to improve signal-to-noise ratio (SNR).

For light-induced OPL changes, each OCT volume was registered using the axial and lateral shifts from the prior step to yield $I_{reg}(x,y,z)$, and these volumes were used for further analysis. First, each volume was referenced to the mean of all the volumes that were recorded before the start of the stimulus to cancel the arbitrary phase at each pixel.

$$I_{reg}^{tref}(x,y,z) = I_{reg}^{(i)}(x,y,z) \times \left( \frac{\sum_{k = 1}^{n} I_{reg}^{(k)}(x,y,z)}{n} \right)^{\ast}$$
where $I_{reg}^{tref}(x,y,z)$ is the time-referenced complex volume, $i = 1,2,\ldots,m$ is the volume number (m is the total number of volumes recorded in each measurement), k indexes the volumes recorded before the stimulus, n is the total number of volumes recorded before the stimulus, and * represents the complex conjugate. Once referenced over time, the mean of 3 axial pixel complex values, centered at the ISOS boundary, $I_{ISOS}^{tref}(x,y)$, and at the cone outer segment tips (COST), $I_{COST}^{tref}(x,y)$, was calculated. The phase difference between these two layers was obtained by multiplying the ISOS layer with the complex conjugate of the COST layer.
$$I_{ISOS/COST}(x,y) = I_{ISOS}^{tref}(x,y) \times \left( I_{COST}^{tref}(x,y) \right)^{\ast}$$
In each volume, the complex numbers $I_{ISOS/COST}(x,y)$ were either averaged over a small retinal area (0.27 deg²) for measurements without AO, or averaged over the collection aperture of a cone for measurements with AO. Each individual measurement, consisting of one volume series, provided m complex numbers, converted into a time series, (t), based on the volume acquisition rate, which ranged between 15–324 Hz. For multiple measurements, the complex average of the individual time series was obtained. The phase responses $\Delta\phi_{ISOS/COST}(t)$ were computed by calculating the argument of the complex time series. For phase responses that exceeded $\pm\pi$ radians, the phase was unwrapped along the time dimension. The change in OPL, $\Delta OPL(t)$, was calculated as
$$\Delta OPL(t) = \lambda_o \times \frac{\Delta\phi_{ISOS/COST}(t)}{4\pi}$$
where $\lambda_o$ is the central wavelength.
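A compact sketch of this phase-processing pipeline (Eqs. (2)-(4)) is given below; the array layout, function name and default parameters are our assumptions, while the steps follow the description above.

```python
# Sketch of the ORG phase processing in Eqs. (2)-(4): time-referencing each complex volume,
# conjugate multiplication of the ISOS and COST layers, spatial averaging, and conversion of
# phase to OPL. Array layout and names are assumptions; the steps follow the text.
import numpy as np

def optoretinogram(volumes, isos_z, cost_z, n_prestim, lam0_nm=840.0, vol_rate_hz=324.0):
    """volumes: complex array (m, x, y, z) of registered OCT volumes."""
    prestim_mean = volumes[:n_prestim].mean(axis=0)           # mean of pre-stimulus volumes
    t_ref = volumes * np.conj(prestim_mean)                   # Eq. (2): cancel arbitrary phase
    isos = t_ref[..., isos_z - 1 : isos_z + 2].mean(axis=-1)  # 3 axial pixels at ISOS
    cost = t_ref[..., cost_z - 1 : cost_z + 2].mean(axis=-1)  # 3 axial pixels at COST
    layer_diff = isos * np.conj(cost)                         # Eq. (3): ISOS x conj(COST)
    avg = layer_diff.mean(axis=(1, 2))                        # average over the retinal patch
    dphi = np.unwrap(np.angle(avg))                           # phase vs time, unwrapped
    opl_nm = lam0_nm * dphi / (4 * np.pi)                     # Eq. (4): phase -> OPL
    t_s = np.arange(len(opl_nm)) / vol_rate_hz
    return t_s, opl_nm
```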

2.4 System characterization

The system performance was characterized with respect to sensitivity, signal roll-off and phase sensitivity.

2.4.1 Sensitivity and signal roll-off

The sensitivity of the system was measured by imaging OCT B-scans with a flat mirror and a neutral density filter placed in the sample arm, at the plane corresponding to the eye’s pupil. The axial position of the reference mirror was set to 315 µm in depth. The OCT interferograms were subjected to k-space remapping, Fourier transform and mirror image removal to retrieve the SNR in dB. The system sensitivity was obtained by adjusting the SNR for the neutral density filter attenuation. This process was repeated at several exposure times/frame rates, ranging from 62.5 to 400 µs (16 kHz down to 2.5 kHz). For shot-noise limited performance, a linear increase in sensitivity is predicted with integration time. The collimator and cylindrical lens focus together produce a Gaussian distribution of intensity along the line illumination. Therefore, the SNR follows the same distribution along the line, the variation in magnitude of which is indicated by the error bars at each frame rate in Fig. 4(a). The sensitivity was highest, 89.6 ± 2.2 dB, at the lowest frame rate of 2.5 kHz (longest integration time) and varied linearly with the log of integration time (Fig. 4(a)). Sensitivity was also measured as a function of depth to retrieve the signal roll-off (Fig. 3(c)).
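For reference, the sensitivity bookkeeping can be sketched as follows, assuming the neutral density filter is traversed twice (double pass) and shot-noise-limited scaling with exposure time; the numeric inputs below are placeholders, not the measured values.

```python
# Sensitivity bookkeeping for a mirror + ND-filter measurement, assuming double-pass
# attenuation through the ND filter and shot-noise-limited scaling with exposure time.
# Numeric inputs are placeholders, not the paper's measured values.
import numpy as np

def sensitivity_db(measured_snr_db, nd_optical_density):
    # Double-pass power attenuation of 10^(-2*OD): add 20*OD dB back to the measured SNR
    return measured_snr_db + 20.0 * nd_optical_density

def shot_noise_gain_db(t1_us, t2_us):
    # Expected sensitivity change when the exposure time scales from t1 to t2
    return 10.0 * np.log10(t2_us / t1_us)

print(sensitivity_db(50.0, 2.0))        # e.g. 50 dB measured SNR through an OD 2.0 filter
print(shot_noise_gain_db(62.5, 400.0))  # ~8 dB expected between 16 kHz and 2.5 kHz exposures
```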


Fig. 4. Signal-to-noise and phase sensitivity characterization. (a) Sensitivity in dB varies linearly with the log of the exposure time of the areal sensor, indicating near shot-noise limited performance. (b) Measured phase sensitivity (bottom panel) in milliradians (mrad) along the line illumination for a coverslip. The axially referenced (front and back surfaces of the coverslip) and laterally referenced (two points separated on the line) phase sensitivities are plotted and are similar. The phase sensitivity is worse at the edges of the line profile, consistent with the lower SNR (top panel) resulting from the Gaussian intensity distribution along the line illumination. (c) Measured phase sensitivity versus SNR follows a linear relationship and is close to the theoretical shot-noise limited phase sensitivity, defined as the square root of SNR⁻¹. The data points and error bars in (a) and (c) denote the mean and standard deviation of the sensitivity, SNR and phase noise for 236 pixels along the line, corresponding to 70% of the spatial extent of the Gaussian illumination (vertical dashed lines in b). (d) Optical phase change represented in terms of optical path length in air resulting from a piezo-transducer driven with a sinusoidal voltage signal. The phase was calculated by laterally referencing two points along the line, spaced by 75 pixels. A single response and the mean of 15 responses show good agreement.


2.4.2 Phase sensitivity

Phase sensitivity (σΔφ) in OCT is intimately linked to the SNR, the motion (Δx) of the sample, and the spot size (d) [53,54], as shown in Eq. (5) below.

$$\sigma_{\Delta\varphi} = \sqrt{ \frac{1}{SNR} + \frac{4\pi}{3}\left( 1 - \exp\left( -2\left( \frac{\Delta x}{d} \right)^{2} \right) \right) }$$

Therefore, it is instructive to quantify phase sensitivity as a function of SNR and sample/eye motion to understand the systematic limits for interferometric applications. First, the phase sensitivity was measured for different SNRs using an artificial model eye with a 50 mm achromat focus lens and a coverslip as the retina. M-scans, i.e. B-scans at the same location versus time, were recorded while varying the SNR using a variable neutral density filter in the sample arm. A set of one thousand M-scans was reconstructed, and the phase difference between the front and back surfaces of the coverslip was determined for a single point over time. This constituted the axial referencing method for computing phase sensitivity. Note that in line-scan operation, the parallel acquisition of a B-scan allows lateral referencing (also called self-phase referencing [55]) to remove common mode noise. For lateral referencing, the phase difference between two immediately adjacent pixels on the front surface of the cover glass was measured. This was repeated for each pair of adjacent pixels across the length of the line illumination, constituting the lateral referencing method for computing phase sensitivity. A measure of phase sensitivity or stability is given by the standard deviation of the measured phase differences. The effect of the galvo scanner jitter was quantified by measuring phase sensitivity under three conditions: a) galvo powered off, 1000 M-scans, b) galvo powered on, but actively reset to zero volts, 1000 M-scans and c) a linear galvo scan across 60 locations in a 0.75 deg field of view for 700 volumes. Two separate analyses were conducted to account for relative versus absolute phase variations. For absolute phase variation, the phase on the front surface of the coverslip over time was obtained for conditions (a) and (b) in an M-scan. For (c), the phase in the same spatial location (B-scan) in each volume was obtained over time. The standard deviation of the raw unreferenced phase versus time was equal to 0.92, 1.02, and 12.7 radians respectively for the three conditions a), b) and c).

For relative phase variations, the phase difference between the front and back surfaces of the coverslip versus time was computed for all conditions. For (c), the phase difference in the same spatial location (B-scan) in each volume was obtained over time. The standard deviation of the axially-referenced phase versus time was equal to 7.5, 7.2 and 10.9 mrad respectively for the three conditions a), b) and c). Even though the absolute phase variations during galvo scans are high, the relative phase noise remains low due to the cancellation of common mode noise.
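A minimal sketch of the axial- and lateral-referencing estimates from a stack of reconstructed M-scans is shown below; the array layout, surface indices and pixel choices are assumptions for illustration.

```python
# Sketch of the axial- and lateral-referencing phase sensitivity estimates from a stack of
# reconstructed M-scans of the coverslip. Array layout and indices are assumptions.
import numpy as np

def phase_sensitivity_mrad(mscans, front_z, back_z, x0):
    """mscans: complex array (n_frames, n_x, n_z) of repeated B-scans at one location."""
    front = mscans[:, :, front_z]                        # front coverslip surface
    back = mscans[:, :, back_z]                          # back coverslip surface
    # Axial referencing: front vs back surface at a single lateral point, over time
    axial = np.angle(front[:, x0] * np.conj(back[:, x0]))
    # Lateral (self-phase) referencing: two adjacent pixels on the front surface, over time
    lateral = np.angle(front[:, x0] * np.conj(front[:, x0 + 1]))
    return 1e3 * np.std(axial), 1e3 * np.std(lateral)    # standard deviations in mrad
```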

Figure 4(b) shows the measured phase noise and SNR corresponding to the highest sample power for the ∼300 pixels along the line dimension. The Gaussian intensity profile along the line illumination reduced the SNR at the edges, leading to an increase in phase noise. At all points along the line, axial and lateral referencing produce similar phase noise, limited to less than 15 mrad at this sample power. The Gaussian variation in SNR and phase noise across the line field is captured by the error bars in Figs. 4(a) and 4(c). Figure 4(c) shows the relation between SNR and phase sensitivity – a linear decrease in phase noise with increasing SNR is observed. Upon setting the dependence of phase sensitivity on motion, the term Δx, to zero in Eq. (5), the shot-noise limited phase sensitivity for a static sample reduces to the reciprocal of the square root of the SNR (theoretical curve – solid line – in Fig. 4(c)). This simple treatment assumes the same SNR for the front and back surfaces of the coverslip. We followed the approach of Tong et al. [54] to compute a composite SNR arising from the differing front and back surface SNRs to allow for a more thorough assessment. Their approach is reproduced here for completeness. Assume $SNR(z_1,t)$ and $SNR(z_2,t)$ are the SNRs versus time for the front and back surfaces of the coverslip. The phase sensitivity, computed as the standard deviation of the time-varying axially-referenced phase difference between the front and back surfaces, is expressed as:

$$\sigma_{\Delta\varphi} = \sqrt{ \frac{1}{2\,SNR(z_1,t)} + \frac{1}{2\,SNR(z_2,t)} }$$

The composite SNR, encompassing the front and back surfaces, can then be calculated by setting the term Δx to zero in Eq. (5) and equating the resulting right-hand side of Eq. (5) to Eq. (6), to obtain

$$SNR = \frac{2\,SNR(z_1,t)\,SNR(z_2,t)}{SNR(z_1,t) + SNR(z_2,t)}$$

The measured composite SNR from Eq. (7) is plotted on the x-axis of Fig. 4(c) against the measured phase sensitivity. The good match between the measured and theoretical phase noise demonstrates that the system phase sensitivity is near the shot-noise limit. For in vivo retinal imaging, it is important to differentiate between the factors affecting phase noise for absolute versus relative phase dynamics. For absolute phase dynamics, where, say, two neighboring A-lines are compared to perform OCT angiography, eye motion and galvo jitter degrade phase sensitivity, as captured by the term Δx/d in Eq. (5) [53,56]. For relative or axially referenced phase differences, as for the coverslip or in the cone outer segments, the Δx/d term drops out due to identical noise considerations, and Eq. (5) reduces to Eq. (6). Thus, the SNR alone dictates the minimum detectable phase difference for the latter, as long as the evolution of phase is computed from the same retinal location or cell of interest after eye motion registration. More generally, this analysis is instructive of the system’s limits on signal level and tolerable eye motion for recording different spatiotemporal regimes of functional activity in the retina – from angiography [56], to the slow and large-amplitude osmotic expansion in cone photoreceptors [23], to the rapid and tiny mechanical deformation measured in cones in vivo [47] and cultured neurons ex vivo [57]. The right y-axis in Fig. 4(c) denotes the corresponding OPL change in nanometers for an 840 nm center wavelength. A good starting point is defining the absolute spatiotemporal limits under ideal conditions, without eye motion and with no averaging. For instance, imaging action potentials in mammalian neuronal somas (∼1 nm, 1 ms, Ling et al. [57]) and the early contraction in human cones (∼5 nm, 2.5 ms, Pandiyan et al. [47]) requires an SNR of ∼37 dB (0.01 radians) and ∼23 dB (0.07 radians) respectively. Their temporal resolution defines the minimum acquisition speed required to register and track the phase evolution from the same cells across time.
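The SNR figures quoted above follow from the shot-noise limit of Eq. (5) with the motion term set to zero; the short calculation below reproduces them approximately (the rounding is ours).

```python
# SNR needed to resolve a given single-shot OPL change at the shot-noise limit,
# sigma_phi = 1/sqrt(SNR) (Eq. (5) with the motion term set to zero). Rounding is ours.
import numpy as np

def required_snr_db(opl_nm, lam0_nm=840.0):
    dphi = 4 * np.pi * opl_nm / lam0_nm      # phase change for the given OPL (Eq. (4))
    return dphi, 10 * np.log10(1.0 / dphi**2)

print(required_snr_db(1.0))   # ~0.015 rad -> ~36.5 dB (approx. 37 dB), neuronal soma case
print(required_snr_db(5.0))   # ~0.075 rad -> ~22.5 dB (approx. 23 dB), early cone contraction
```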

To further test the ability of laterally referenced phase-difference measurements to quantify axial change, we replaced the coverslip with a piezoelectric transducer at the focal plane of the model eye. The piezoelectric transducer deforms according to the drive voltage. The amplitude and frequency of the axial physical displacement were controlled with a sinusoidal voltage waveform. We sought to determine the minimum axial displacement measurable on the surface of the transducer. The phase difference was calculated between two points separated laterally on the surface of the transducer by 75 pixels, converted to OPL change and plotted versus the number of camera frames in Fig. 4(d). A minimum amplitude of 3 nm was observed in a single response and was repeatable across 15 repeats.

2.4.3 Effect of eye motion on phase sensitivity at different B-scan rates

Axial bulk motion adversely affects phase sensitivity and can be quantified by measuring the phase difference between two consecutive B-scans. To compensate axial bulk motion reliably, the resulting maximum phase change should ideally lie within $\pm\pi$ radians. Here, we estimated the average phase change between two consecutive B-scans recorded at different speeds ranging from 0.5 to 16 kHz. M-scans were recorded with the galvo set to 0 volts, and volumes were recorded with the galvo scanning, corresponding to an overall field of view of 2 × 3.5 deg. The average phase difference due to axial bulk motion was compared between M-mode and volumetric imaging across different camera frame rates. The complex data in the (n+1)th B-scan were averaged and multiplied by the complex conjugate of the averaged complex data in the nth B-scan. For M-mode, n signifies the number of B-scan acquisitions at the same spatial location, while for volumetric acquisition, n signifies the B-scan number within the volume. These measurements were made without AO, for a 4 mm pupil with defocus optimized.
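One literal reading of this estimator is sketched below (each B-scan is complex-averaged before conjugate multiplication with its predecessor); averaging the conjugate products instead is a common alternative formulation. The array layout is an assumption.

```python
# Sketch of the bulk axial motion estimate: average phase difference between consecutive
# B-scans. Each B-scan is complex-averaged before conjugate multiplication, following one
# literal reading of the description above; the array layout is an assumption.
import numpy as np

def bulk_phase_differences(bscans):
    """bscans: complex array (n, n_x, n_z) of consecutive B-scans (M-mode or within a volume)."""
    means = bscans.mean(axis=(1, 2))                  # complex average of each B-scan
    return np.angle(means[1:] * np.conj(means[:-1]))  # phase of (n+1)th vs nth B-scan
```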

2.5 In vivo imaging protocol

Two cyclopleged (0.5% tropicamide) subjects free of retinal disease participated in the study. The research was approved by the University of Washington institutional review board, and all subjects signed an informed consent before their participation, after the nature and possible consequences of the study were explained. All procedures involving human subjects were in accordance with the tenets of the Declaration of Helsinki. Structural and functional imaging of the retina between 2–7 deg temporal eccentricity was conducted. For the light-induced OPL response experiments, the subject was dark-adapted for 3 to 4 minutes, and OCT volumes were acquired for 1–3 seconds at different volume rates. For imaging without AO, defocus was optimized either by translating the eye’s pupil and L6 together as a unit, or using the DM, to achieve the best focus of cone photoreceptors in the real-time LSO image stream. After 10–20 volumes, the stimulus illuminated the retina. The above procedure was repeated under conditions where higher order aberrations were corrected with AO. For the AO ORG experiments, 40 volumes were recorded at a 6 kHz B-scan rate and three measurements were averaged.

3. Results

3.1 Axial bulk motion for different B-scan rates

In Fig. 5(a), B-scan rates of 2 kHz and below show substantial phase instabilities, leading to phase wrapping between consecutive B-scans. Such phase drifts during a single B-scan acquisition are attributed to eye motion and lead to fringe washout and reduced SNR. Fringe washout is a well-recognized issue in spectral domain OCT operation when acquiring A-scans at low speeds, evident here in the line-scan geometry for low B-scan rates. This observation aligns well with Ginner et al. [42].


Fig. 5. Axial bulk motion for a range of M-scan and B-scan rates. The average phase difference between successive M-scans at (a) 0.5 kHz, 1 kHz and 2 kHz frame rates shows severe phase wrapping that prevents its optimal correction, and at (b) 4 kHz, 8 kHz, 12 kHz and 16 kHz frame rates shows no phase wrapping, enabling accurate correction of axial bulk motion. (c) The average phase difference between successive B-scans along the scan direction in a volume shows no phase wrapping when measured in the range between 4–16 kHz B-scan rates. (d) Axial bulk motion quantified as the average phase difference in the two modes of operation – M-scan and multiple B-scans per volume – decreases monotonically with increasing frame rate. The error bars denote the mean variability, calculated as the mean of the standard deviation across the 10 measurements for each speed.


For B-scan rates in the range between 4 and 16 kHz, we observed that the average phase difference between consecutive B-scans remained below $\pm\pi$ radians, both in M-mode (Fig. 5(b)) and volumetric (Fig. 5(c)) acquisition. We observed that the average phase difference decreases monotonically with increasing acquisition rate (Fig. 5(d)). The average phase differences shown in Fig. 5(d) were obtained from 10 sets of measurements at each speed, for M-mode and volumetric acquisition. The mean and standard deviation of the axial bulk motion for volumetric acquisition are expectedly higher, because the galvo scans a different part of the retina with each scan, but the difference is not statistically significant.

3.2 Imaging retinal structure

Figure 6 shows OCT images with and without AO from the instrument. The OCT volumes were recorded at 5000–6000 B-scans/sec at 7 deg retinal eccentricity. Figures 6(a) and 6(b) show single B-scans taken without and with AO. The corresponding averaged B-scans and en face projections at the COST layer are shown in Figs. 6(c)-(f). Figure 7 shows retinal images taken with AO in the LSO and OCT modes of operation. The focus was optimized at the outer retina to reveal cone photoreceptors in Figs. 6 and 7. Figures 7(a)-(b) and 7(c)-(f) show the volume renderings and en face images obtained at 2 and 4 deg temporal eccentricity. From the AO-corrected volumes (Figs. 7(a) & (b)), the substantially higher signal in the outer retina compared to the inner retina is indicative of the small depth of field of imaging, resulting from the larger numerical aperture with AO. The LSO images in Figs. 7(c) and 7(e) were obtained at a high integration time and low speed of 30 Hz. This was to account for the limited signal arising from placing the LSO focus optic and detector at the zeroth order of the diffraction grating, which receives only ∼10% of the light in the detection arm. Figures 7(d) and 7(f) represent maximum intensity projections obtained at the cone outer segment tips (COST). The OCT volumes were recorded at 5000 B-scans/sec. In both LSO and OCT en face images, cone photoreceptors were clearly resolved. At eccentricities closer to the fovea, imaging resolution was restricted by the Nyquist sampling of the detection, which was chosen to favor speed and light collection efficiency per pixel.


Fig. 6. Cross-sectional and en face view of retina without and with adaptive optics. Single and averaged B-scans of retina at 7 deg. temporal to fovea are shown without (a,c) and with (b,d) adaptive optics. Corresponding en face images at COST layer are shown in (e) and (f). Separate scale bars are indicated for the B-scans and the en face images.



Fig. 7. High-resolution imaging of retinal structure with AO line-scan spectral domain OCT and line-scan ophthalmoscope. Three dimensional rendering of AO-OCT volumes acquired at 2 and 4 deg temporal eccentricity are shown in (a) and (b). The corresponding en face images at the COST layer show cone photoreceptors in (d) and (f). LSO en face images are shown in (c) and (e). Scale bars for the volumes are indicated and are equal to 100 µm for the en face images.


3.3 Optoretinography without AO

The OCT volumes were segmented for the layers corresponding to the cone inner-outer segment junction (ISOS) and COST. The phase at the ISOS and COST layers was axially referenced ($\phi_{COST} - \phi_{ISOS}$) with respect to each other to denote the light-induced OPL change in the cone outer segment after the 660 nm light stimulus, and plotted versus time to reveal its dynamics. The change in OPL obtained from the phase difference ($\phi_{COST} - \phi_{ISOS}$) was averaged over 0.27 deg² on the retina. Each curve represents an optoretinogram or ORG, and was obtained without AO for a 4 mm pupil with defocus optimized. Figures 8(a) and 8(b) were obtained at volume rates of 324 Hz and 120 Hz respectively to reveal light-induced OPL dynamics in cone photoreceptors across different temporal scales.


Fig. 8. Optoretinograms (ORGs) with 660 nm stimulus acquired without adaptive optics. After stimulus onset, cone outer segments undergo a rapid shrinkage in OPL (a) followed by a slower expansion (b). The response magnitude and rate of change scale with increasing stimulus strength. The phase response was averaged over a retinal area of 0.27 deg². In a), each curve is an average of two separate measurements, while single measurements are shown in b). The early reduction (ORG early response) and later expansion (ORG late response) were measured at 324 Hz and 120 Hz respectively. For the control (no stimulus case), the mean ± standard deviation OPL calculated over ∼1.2 seconds was 2.2 ± 4.9 nm for the single measurement.


In Fig. 8(a), a rapid reduction in OPL immediately after stimulus onset is observed. The maximum amplitude of shrinkage in OPL was 9 nm, 19 nm and 29 nm for the three stimulus intensity levels. Note that only two separate measurements were averaged for the traces in Fig. 8(a) to reduce phase noise for these minute OPL magnitudes recorded at high speed. In Fig. 8(b), while the OPL decrease is not readily apparent at the lower speed, an increase in OPL is observed after stimulus onset, saturating at 152 nm, 294 nm and 444 nm, ∼500 ms post-stimulus. While the fidelity of these curves will certainly improve with averaging of repeat measures, the curves in Fig. 8(b) represent single unaveraged recordings in order to establish the limits of phase stability in the line-scan paradigm. For the control (no stimulus case), the mean ± standard deviation OPL calculated over ∼1.2 seconds was 2.2 ± 4.9 nm for the single measurement (Fig. 8(b)). In general, the amplitude of the initial reduction and later expansion of OPL scales with the stimulus energy, tested in the range from 14.3 × 10⁶ to 62.4 × 10⁶ photons/µm² for the late expansion and from 0.9 × 10⁶ to 5.4 × 10⁶ photons/µm² for the early reduction.

The scaling of the ORG response – both the reduction and increase in OPL – with stimulus energy has been demonstrated before using point-scan spectral domain and swept-source OCT. The late response has been attributed to the influx of water to maintain osmotic balance during the phototransduction cascade [15]. Based on the latency of the rapid shrinkage and its dependence on stimulus strength, we earlier concluded that the reduction in OPL is the optical manifestation of the early receptor potential observed previously in vitro, attributed to charge movement across the outer segment disc membranes during photoisomerization [47,58–60].

3.4 Optoretinography with AO

Experiments similar to those in Fig. 8 were repeated with the increased resolution offered by AO, which allowed discerning the ORG in individual cones in response to the 660 nm stimulus. Figure 9(a) shows a cone photoreceptor image at 4.75 deg eccentricity obtained with AO-OCT. The mean ± standard deviation of the ORG in individual cones (n = 450 cones) is plotted in Fig. 9(c). The ORG maximum amplitude and time course are variable across cones. The histogram of the response amplitude averaged between 0.5 and 1.2 seconds was subjected to Gaussian mixture model clustering analysis (Fig. 9(d)). Three clusters emerged from this analysis, where each cone belonged to a sub-group with high probability (mean ± standard deviation: 0.99 ± 0.05 across all cones) and low uncertainty (< 0.18%). For a cone within each sub-group, the probability is defined as the ratio of its component Gaussian to the sum of all Gaussians, while the uncertainty is defined as the area of overlap between the component Gaussians. Based on the intersections of the component Gaussians, these clusters were labelled as belonging to S, M and L-cone photoreceptors as shown in Fig. 9(b), ordered by their increasing sensitivity to the 660 nm wavelength. In Figs. 9(b)-(d), individual cones and their corresponding ORGs are colored ‘red’, ‘green’ or ‘blue’ to denote the three cone types, on the basis of their spectral identity obtained from the clustering analysis. The maximum amplitudes (mean ± standard deviation) of L and M cones were 228 ± 40 nm and 90 ± 26 nm, corresponding to photopigment bleaches of 21.8% and 3.8% respectively. At 660 nm, S-cone photopigment is bleached to a negligible degree (<0.001%), resulting in an insignificant OPL change of 12 ± 22 nm.
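The clustering step can be sketched with an off-the-shelf Gaussian mixture model, for example scikit-learn's GaussianMixture; assigning clusters to cone types by ascending mean amplitude is our simplification of the intersection-based labelling described above.

```python
# Sketch of the cone classification: a three-component Gaussian mixture model fit to the
# per-cone ORG amplitudes (mean OPL between 0.5 and 1.2 s). Labelling clusters by ascending
# mean amplitude (S < M < L for a 660 nm stimulus) is a simplification of the text's
# intersection-based assignment; this is not the authors' code.
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_cones(org_amplitudes_nm):
    """org_amplitudes_nm: 1-D array with one saturated ORG amplitude per cone."""
    x = np.asarray(org_amplitudes_nm, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
    labels = gmm.predict(x)                               # cluster index per cone
    posterior = gmm.predict_proba(x).max(axis=1)          # assignment probability per cone
    ranks = np.argsort(np.argsort(gmm.means_.ravel()))    # rank of each cluster by mean
    cone_type = np.array(["S", "M", "L"])[ranks][labels]  # lowest mean -> S, highest -> L
    return cone_type, posterior
```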

Fig. 9. Single cone optoretinograms (ORGs) with 660 nm stimulus acquired with AO. (a) Cone photoreceptor image at 4.75 deg temporal eccentricity obtained at the COST layer from a set of registered AO-OCT volumes. (b) Long-, Middle- and Short- (LMS) cones marked as ‘red’, ‘green’ and ‘blue’, classified based on Gaussian mixture model clustering of the ORG magnitude shown in (d). (c) The mean and standard deviation of the ORG in individual L-, M- and S-cones. Solid lines and error bars denote the mean and ± 1 std across the population of cones. (d) A histogram showing the distribution of ORG magnitude. The black dashed line denotes the sum of three Gaussians obtained from the clustering analysis. The individual component Gaussians are shaded red, green, blue, denoting the LMS-cones respectively. Volume rate was 15 Hz. Scale bar: 50 µm.

4. Discussion and conclusions

The line-scan spectral domain operation with AO offers an excellent opportunity to optimize the triad of speed, sensitivity and resolution, and would benefit applications of OCT ranging from angiography to clinical retinal structure-function imaging. The B-scan rate of up to 16 kHz represents, to the best of our knowledge, the highest speed for two-dimensional cross-sectional OCT imaging of the retina. This speed is limited by the maximum camera frame rate. The camera cost is higher than that of conventional CCD and CMOS sensors, though at ∼$40,000, our sensor is less expensive than some science-grade CCDs and even faster burst-mode cameras that can cost up to ∼$100,000. The camera provides fast, phase-stable acquisition of multiple effective A-lines in parallel and is thus essential for resolving the rapid light-induced retinal responses that may otherwise be severely confounded by eye motion. Specifically, volume acquisition rates as high as 324 Hz were required to resolve responses with short latency and fast dynamics, such as the early axial contraction of outer segments. Alternatively, analyzing the phase at the resolution of individual B-scans, either acquired in succession within one volume or in separate volumes, provides increased temporal resolution at the cost of a complete loss of spatial information [25,47]. We summarized the relationship between acquisition speed and axial bulk motion for in vivo imaging across a large temporal range, extending the measurements of Ginner et al. up to 16 kHz B-scan rates. Given the impact of axial bulk motion on fringe contrast in spectral domain OCT and its application to interferometric imaging, this quantification will allow future optimization of speed and sensitivity for line-scan operation.
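
One straightforward way to quantify axial bulk motion at a given B-scan rate, and to correct it before phase analysis, is to track the amplitude-weighted mean phase difference between successive complex frames, as in the sketch below (Python/NumPy). This is a generic estimator offered for illustration under the assumption of a complex-valued B-scan series; it is not necessarily the exact procedure used in this work.

import numpy as np

def bulk_axial_phase(frames):
    # frames: complex array of shape (n_frames, n_z, n_x) of successive B-scans.
    # Returns the amplitude-weighted mean phase difference (rad) between
    # frame i and frame i-1; values approaching +/- pi indicate the frame
    # rate is too low to correct bulk motion without phase wrapping.
    prod = frames[1:] * np.conj(frames[:-1])
    return np.angle(prod.reshape(prod.shape[0], -1).sum(axis=1))

def correct_bulk_motion(frames):
    # Remove the cumulative bulk phase so that the residual phase reflects
    # tissue dynamics rather than axial eye motion.
    bulk = np.concatenate(([0.0], np.cumsum(bulk_axial_phase(frames))))
    return frames * np.exp(-1j * bulk)[:, None, None]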

An added benefit of parallel, phase-stable acquisition is the ability to perform lateral or self-phase referencing. The optical phase at any single point in time and space is inherently noisy, owing both to light-tissue interaction and to external factors such as air currents, galvo jitter and eye motion. Overcoming this common-mode noise is thus essential to reveal robust OPL changes, and is achieved by a combination of referencing the phase in space along the axial direction (for instance, COST vs. ISOS) and/or referencing the phase in time (with stimulus vs. without stimulus). We demonstrated lateral phase referencing using a piezo-transducer, made possible by the parallel acquisition of an entire cross-section. While the minimum detectable phase change is still ultimately restricted by SNR, lateral phase referencing enables measuring the OPL change between two laterally separated points in space, for example two neighboring cones or blood vessels.
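
As an illustration of lateral (self-) phase referencing, the sketch below computes the OPL change of one point referenced to a second point captured in the same camera frame; because both points share the frame, common-mode phase noise from air currents, galvo jitter and axial eye motion largely cancels in the difference. The array layout, point indices and wavelength are illustrative assumptions rather than details of the present implementation.

import numpy as np

def lateral_referenced_opl(tomograms, pt_a, pt_b, wavelength_m=840e-9):
    # tomograms: complex array (n_timepoints, n_z, n_x); pt_a, pt_b: (z, x)
    # indices of two laterally separated points, e.g. two neighboring cones.
    a = tomograms[:, pt_a[0], pt_a[1]]
    b = tomograms[:, pt_b[0], pt_b[1]]
    # Phase of A relative to B; common-mode phase noise cancels in the product.
    dphi = np.unwrap(np.angle(a * np.conj(b)))
    return 1e9 * wavelength_m * dphi / (4 * np.pi)   # OPL change in nm (double pass)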

The parallel, phase-stable acquisition comes at the cost of a loss of confocality across one lateral dimension, which affects imaging contrast. The anamorphic detection configuration that we introduce for line-field spectral domain OCT, while optimizing spatial and spectral resolution independently, also increases the light collection efficiency per pixel and, as a result, benefits contrast. The advantages of the anamorphic configuration are evident in the improved signal roll-off and the Nyquist-limited imaging of cone photoreceptors. In addition, the configuration is realized with commercially available parts, allowing efficient future implementation. We demonstrate the feasibility of recording light-induced responses across small retinal areas without AO (defocus optimized) and in individual cones with AO. The former is encouraging as an avenue to perform optoretinography over larger fields of view and through smaller, undilated pupils in a clinical population, and underlines the feasibility of such recordings without additional hardware for AO. With AO, we showed that ORGs can be assigned to individual cones, enabling their segregation into L-, M- and S-cones based on the magnitude of their response to the 660 nm stimulus. In comparison to AO retinal densitometry [18,19], this technique for classifying cone spectral types provides not only substantially higher precision but also considerably shorter imaging durations, as characterized by Zhang et al. using a point-scan spectral domain AO-OCT [25].
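
The improvement in signal roll-off with a finer spectral point spread function can be estimated with the commonly used sensitivity roll-off model for spectral domain OCT, which combines a sinc term for the finite pixel width with a Gaussian term governed by the ratio w of the spectral-resolution FWHM to the pixel sampling interval. The sketch below compares two illustrative values of w; these numbers are placeholders and not the measured parameters of the present spectrometer.

import numpy as np

def rolloff_db(z_norm, w):
    # Sensitivity roll-off versus normalized depth z_norm = z / z_max.
    # w: ratio of the spectrometer's spectral-resolution FWHM to its pixel
    # sampling interval (smaller w -> slower roll-off).
    zeta = np.pi * z_norm / 2.0
    sinc_term = np.sinc(z_norm / 2.0) ** 2           # np.sinc(x) = sin(pi*x)/(pi*x)
    gauss_term = np.exp(-(w ** 2) * zeta ** 2 / (2.0 * np.log(2.0)))
    return 10.0 * np.log10(sinc_term * gauss_term)

z = np.linspace(0.05, 1.0, 20)
# Hypothetical comparison: a broad spectral PSF (w = 2.5) versus a PSF
# narrowed 2.5x by anamorphic detection (w = 1.0).
print(rolloff_db(z, w=1.0)[-1] - rolloff_db(z, w=2.5)[-1],
      "dB less roll-off at z_max with the finer spectral PSF")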

Some shortcomings remain to be addressed in future work. First, the line-scan geometry may cause multiple scattering artifacts when imaging structures near and distal to the highly scattering pigment epithelium. We have not quantified here the extent to which multiple scattering affects imaging fidelity; this remains an active area of investigation. Second, the cylindrical lens used to generate the line illumination maintains the Gaussian intensity distribution, resulting from fiber collimation, across the imaging field. Phase sensitivity characterization across the line dimension showed a slightly higher phase noise floor at the edges of the line illumination due to the lower intensity and SNR there. A Powell lens [61] may be considered instead to generate a line profile with a uniform intensity distribution. Lastly, though the en face LSO image of photoreceptors was very helpful in optimizing the focus of the OCT, the same illumination was used for both OCT and LSO, preventing their simultaneous operation because the intense reflection from the OCT reference arm saturated the LSO sensor. In the future, it will be straightforward to incorporate different wavelengths for the two channels to allow simultaneous LSO and OCT operation.
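
The elevated phase noise floor at the dim edges of the Gaussian line follows from the shot-noise-limited phase sensitivity, which scales as the square root of 1/SNR: where the illumination, and hence the SNR, falls off, the minimum detectable phase rises. A short illustrative calculation is given below; the peak SNR and line profile are placeholder values, not measured system parameters.

import numpy as np

def phase_noise_floor_mrad(peak_snr_linear=1e4, n_pixels=512, fwhm_fraction=0.7):
    # Shot-noise-limited phase noise (mrad) across a Gaussian line illumination,
    # assuming the SNR at each pixel scales with the local illumination intensity.
    x = np.linspace(-1.0, 1.0, n_pixels)
    sigma = fwhm_fraction / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    intensity = np.exp(-x**2 / (2.0 * sigma**2))      # normalized Gaussian profile
    snr = peak_snr_linear * intensity
    return 1e3 * np.sqrt(1.0 / snr)                   # higher floor at the dim edges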

In conclusion, the high-speed line-scan spectral domain OCT with AO demonstrated here allows high-resolution volumetric imaging and acquisition of light-induced functional changes from cone photoreceptors across several spatial scales. The key features summarized above – high speed, anamorphic detection, and AO or non-AO operation – are fundamental to the application of this technology for interferometric imaging of neural activity in photoreceptors, and the approach holds promise for future applications to other retinal cell types. While this work represents an important step forward in the technology for all-optical, non-invasive interrogation of retinal activity, understanding the underlying mechanisms of such ORGs [15,47,58] is key for their application as biomarkers to study basic visual processes and retinal health in disease and therapy.

Funding

Burroughs Wellcome Fund (Careers at the Scientific Interfaces Award); M.J. Murdock Charitable Trust; Foundation Fighting Blindness; Research to Prevent Blindness (Career Development Award, Unrestricted grant to UW Ophthalmology); National Eye Institute (EY027941, EY029710, P30EY001730, U01EY025501).

Acknowledgments

We thank Daniel Palanker, Austin Roorda, Hyle Park and Yuhua Zhang for helpful discussions.

Disclosures

VPP and RS have a commercial interest in a US patent describing the technology for the line-scan OCT for optoretinography.

References

1. D. L. McCulloch, M. F. Marmor, M. G. Brigell, R. Hamilton, G. E. Holder, R. Tzekov, and M. Bach, “ISCEV Standard for full-field clinical electroretinography (2015 update),” Doc. Ophthalmol. 130(1), 1–12 (2015). [CrossRef]  

2. D. C. Hood, M. Bach, M. Brigell, D. Keating, M. Kondo, J. S. Lyons, M. F. Marmor, D. L. McCulloch, A. M. Palmowski-Wolfe, and the International Society for Clinical Electrophysiology of Vision, “ISCEV standard for clinical multifocal electroretinography (mfERG) (2011 edition),” Doc. Ophthalmol. 124(1), 1–13 (2012). [CrossRef]  

3. D. Baylor, B. Nunn, and J. Schnapf, “Spectral sensitivity of cones of the monkey Macaca fascicularis,” J. Physiol. 390(1), 145–160 (1987). [CrossRef]  

4. G. D. Field, J. L. Gauthier, A. Sher, M. Greschner, T. A. Machado, L. H. Jepson, J. Shlens, D. E. Gunning, K. Mathieson, W. Dabrowski, L. Paninski, A. M. Litke, and E. J. Chichilnisky, “Functional connectivity in the retina at the resolution of photoreceptors,” Nature 467(7316), 673–677 (2010). [CrossRef]  

5. L. Yin, Y. Geng, F. Osakada, R. Sharma, A. H. Cetin, E. M. Callaway, D. R. Williams, and W. H. Merigan, “Imaging light responses of retinal ganglion cells in the living mouse eye,” J. Neurophysiol. 109(9), 2415–2421 (2013). [CrossRef]  

6. J. E. McGregor, L. Yin, Q. Yang, T. Godat, K. T. Huynh, J. Zhang, D. R. Williams, and W. H. Merigan, “Functional architecture of the foveola revealed in the living primate,” PLoS One 13(11), e0207102 (2018). [CrossRef]  

7. K. Grieve and A. Roorda, “Intrinsic signals from human cone photoreceptors,” Invest. Ophthalmol. Visual Sci. 49(2), 713–719 (2008). [CrossRef]  

8. R. S. Jonnal, J. Rha, Y. Zhang, B. Cense, W. Gao, and D. T. Miller, “In vivo functional imaging of human cone photoreceptors,” Opt. Express 15(24), 16141–16160 (2007). [CrossRef]  

9. K. Bizheva, R. Pflug, B. Hermann, B. Považay, H. Sattmann, P. Qiu, E. Anger, H. Reitsamer, S. Popov, and J. Taylor, “Optophysiology: depth-resolved probing of retinal physiology with functional ultrahigh-resolution optical coherence tomography,” Proc. Natl. Acad. Sci. 103(13), 5066–5071 (2006). [CrossRef]  

10. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012). [CrossRef]  

11. X. Yao and B. Wang, “Intrinsic optical signal imaging of retinal physiology: a review,” J. Biomed. Opt. 20(9), 090901 (2015). [CrossRef]  

12. R. F. Cooper, W. S. Tuten, A. Dubra, D. H. Brainard, and J. I. Morgan, “Non-invasive assessment of human cone photoreceptor function,” Biomed. Opt. Express 8(11), 5098–5112 (2017). [CrossRef]  

13. J. J. Hunter, W. H. Merigan, and J. B. Schallek, “Imaging Retinal Activity in the Living Eye,” Annu. Rev. Vis. Sci. 5(1), 15–45 (2019). [CrossRef]  

14. M. Pircher, J. Kroisamer, F. Felberer, H. Sattmann, E. Götzinger, and C. Hitzenberger, “Temporal changes of human cone photoreceptors observed in vivo with SLO/OCT,” Biomed. Opt. Express 2(1), 100–112 (2011). [CrossRef]  

15. P. Zhang, R. J. Zawadzki, M. Goswami, P. T. Nguyen, V. Yarov-Yarovoy, M. E. Burns, and E. N. Pugh, “In vivo optophysiology reveals that G-protein activation triggers osmotic swelling and increased light scattering of rod photoreceptors,” Proc. Natl. Acad. Sci. 114(14), E2937–E2946 (2017). [CrossRef]  

16. Q. Zhang, R. Lu, B. Wang, J. D. Messinger, C. A. Curcio, and X. Yao, “Functional optical coherence tomography enables in vivo physiological assessment of retinal rod and cone photoreceptors,” Sci. Rep. 5(1), 9595 (2015). [CrossRef]  

17. P. Bedggood and A. Metha, “Optical imaging of human cone photoreceptors directly following the capture of light,” PLoS One 8(11), e79251 (2013). [CrossRef]  

18. R. Sabesan, H. Hofer, and A. Roorda, “Characterizing the Human Cone Photoreceptor Mosaic via Dynamic Photopigment Densitometry,” PLoS One 10(12), e0144891 (2015). [CrossRef]  

19. A. Roorda and D. R. Williams, “The arrangement of the three cone classes in the living human eye,” Nature 397(6719), 520–522 (1999). [CrossRef]  

20. J. F. De Boer, C. K. Hitzenberger, and Y. Yasuno, “Polarization sensitive optical coherence tomography–a review,” Biomed. Opt. Express 8(3), 1838–1873 (2017). [CrossRef]  

21. C. K. Hitzenberger, E. Götzinger, M. Sticker, M. Pircher, and A. F. Fercher, “Measurement and imaging of birefringence and optic axis orientation by phase resolved polarization sensitive optical coherence tomography,” Opt. Express 9(13), 780–790 (2001). [CrossRef]  

22. M. Pircher, E. Götzinger, R. Leitgeb, H. Sattmann, O. Findl, and C. K. Hitzenberger, “Imaging of polarization properties of human retina in vivo with phase resolved transversal PS-OCT,” Opt. Express 12(24), 5940–5951 (2004). [CrossRef]  

23. D. Hillmann, H. Spahr, C. Pfäffle, H. Sudkamp, G. Franke, and G. Hüttmann, “In vivo optical imaging of physiological responses to photostimulation in human photoreceptors,” Proc. Natl. Acad. Sci. 113(46), 13138–13143 (2016). [CrossRef]  

24. M. Azimipour, J. V. Migacz, R. J. Zawadzki, J. S. Werner, and R. S. Jonnal, “Functional retinal imaging using adaptive optics swept-source OCT at 1.6 MHz,” Optica 6(3), 300–303 (2019). [CrossRef]  

25. F. Zhang, K. Kurokawa, A. Lassoued, J. A. Crowell, and D. T. Miller, “Cone photoreceptor classification in the living human eye from photostimulation-induced phase dynamics,” Proc. Natl. Acad. Sci., 201816360 (2019).

26. C.-L. Chen and R. K. Wang, “Optical coherence tomography based angiography,” Biomed. Opt. Express 8(2), 1056–1082 (2017). [CrossRef]  

27. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, and J. Hornegger, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. 112(18), E2395–E2402 (2015). [CrossRef]  

28. D. Y. Kim, J. Fingler, R. J. Zawadzki, S. S. Park, L. S. Morse, D. M. Schwartz, S. E. Fraser, and J. S. Werner, “Optical imaging of the chorioretinal vasculature in the living human eye,” Proc. Natl. Acad. Sci. 110(35), 14354–14359 (2013). [CrossRef]  

29. A. Joseph, A. Guevara-Torres, and J. Schallek, “Imaging single-cell blood flow in the smallest to largest vessels in the living retina,” eLife 8, e45077 (2019). [CrossRef]  

30. M. Salas, M. Augustin, L. Ginner, A. Kumar, B. Baumann, R. Leitgeb, W. Drexler, S. Prager, J. Hafner, and U. Schmidt-Erfurth, “Visualization of micro-capillaries using optical coherence tomography angiography with and without adaptive optics,” Biomed. Opt. Express 8(1), 207–222 (2017). [CrossRef]  

31. J. Carroll, J. Neitz, and M. Neitz, “Estimates of L: M cone ratio from ERG flicker photometry and genetics,” J. Vis. 2(8), 1 (2002). [CrossRef]  

32. A. Stockman and L. T. Sharpe, “The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype,” Vision Res. 40(13), 1711–1737 (2000). [CrossRef]  

33. D. Hillmann, H. Spahr, C. Hain, H. Sudkamp, G. Franke, C. Pfäffle, C. Winter, and G. Hüttmann, “Aberration-free volumetric high-speed imaging of in vivo retina,” Sci. Rep. 6(1), 35209 (2016). [CrossRef]  

34. P. Xiao, V. Mazlin, K. Grieve, J.-A. Sahel, M. Fink, and A. C. Boccara, “In vivo high-resolution human retinal imaging with wavefront-correctionless full-field OCT,” Optica 5(4), 409–412 (2018). [CrossRef]  

35. M. Pircher and R. J. Zawadzki, “Review of adaptive optics OCT (AO-OCT): principles and applications for retinal imaging [Invited],” Biomed. Opt. Express 8(5), 2536–2562 (2017). [CrossRef]  

36. R. J. Zawadzki, S. S. Choi, A. R. Fuller, J. W. Evans, B. Hamann, and J. S. Werner, “Cellular resolution volumetric in vivo retinal imaging with adaptive optics–optical coherence tomography,” Opt. Express 17(5), 4084–4094 (2009). [CrossRef]  

37. Y. Jian, S. Lee, M. J. Ju, M. Heisler, W. Ding, R. J. Zawadzki, S. Bonora, and M. V. Sarunic, “Lens-based wavefront sensorless adaptive optics swept source OCT,” Sci. Rep. 6(1), 27620 (2016). [CrossRef]  

38. M. Pircher, E. Götzinger, H. Sattmann, R. A. Leitgeb, and C. K. Hitzenberger, “In vivo investigation of human cone photoreceptors with SLO/OCT in combination with 3D motion correction on a cellular level,” Opt. Express 18(13), 13935–13944 (2010). [CrossRef]  

39. Z. Liu, K. Kurokawa, F. Zhang, J. J. Lee, and D. T. Miller, “Imaging and quantifying ganglion cells and other transparent neurons in the living human retina,” Proc. Natl. Acad. Sci. 114(48), 12803–12808 (2017). [CrossRef]  

40. Y. Nakamura, S. Makita, M. Yamanari, M. Itoh, T. Yatagai, and Y. Yasuno, “High-speed three-dimensional human retinal imaging by line-field spectral domain optical coherence tomography,” Opt. Express 15(12), 7103–7116 (2007). [CrossRef]  

41. Y. Zhang, J. Rha, R. S. Jonnal, and D. T. Miller, “Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina,” Opt. Express 13(12), 4792–4811 (2005). [CrossRef]  

42. L. Ginner, A. Kumar, D. Fechtig, L. M. Wurster, M. Salas, M. Pircher, and R. A. Leitgeb, “Noniterative digital aberration correction for cellular resolution retinal optical coherence tomography in vivo,” Optica 4(8), 924–931 (2017). [CrossRef]  

43. D. J. Fechtig, B. Grajciar, T. Schmoll, C. Blatter, R. M. Werkmeister, W. Drexler, and R. A. Leitgeb, “Line-field parallel swept source MHz OCT for structural and functional retinal imaging,” Biomed. Opt. Express 6(3), 716–735 (2015). [CrossRef]  

44. J. B. Mulligan, “The Optoretinogram at 38,” (2018).

45. J. B. Mulligan, D. I. MacLeod, and I. C. Statler, “In search of an optoretinogram,” (1994).

46. V. P. Pandiyan, A. M. Bertelli, J. A. Kuchenbecker, B. H. Park, D. V. Palanker, A. Roorda, and R. Sabesan, “Optoretinogram: stimulus-induced optical changes in photoreceptors observed with phase-resolved line-scan OCT,” Invest. Ophthalmol. Visual Sci. 60, 1426 (2019).

47. V. P. Pandiyan, A. M. Bertelli, J. Kuchenbecker, K. C. Boyle, T. Ling, B. H. Park, A. Roorda, D. Palanker, and R. Sabesan, “The optoretinogram reveals the primary steps of phototransduction in the living human eye,” Sci. Adv. 6, eabc1124 (2020). [CrossRef]  

48. P. Zhang, B. Shibata, G. Peinado, R. J. Zawadzki, P. FitzGerald, and E. N. Pugh, “Measurement of Diurnal Variation in Rod Outer Segment Length In Vivo in Mice With the OCT Optoretinogram,” Invest. Ophthalmol. Visual Sci. 61(3), 9 (2020). [CrossRef]  

49. P. Artal, S. Marcos, R. Navarro, and D. R. Williams, “Odd aberrations and double-pass measurements of retinal image quality,” J. Opt. Soc. Am. A 12(2), 195–201 (1995). [CrossRef]  

50. J. Lu, B. Gu, X. Wang, and Y. Zhang, “High speed adaptive optics ophthalmoscopy with an anamorphic point spread function,” Opt. Express 26(11), 14356–14374 (2018). [CrossRef]  

51. P.-Y. Teng, “Caserel - An Open Source Software for Computer-aided Segmentation of Retinal Layers in Optical Coherence Tomography Images,” https://doi.org/10.5281/zenodo.17893.

52. S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy,” Proc. SPIE 5688, 12 (2005). [CrossRef]  

53. B. H. Park, M. C. Pierce, B. Cense, S.-H. Yun, M. Mujat, G. J. Tearney, B. E. Bouma, and J. F. De Boer, “Real-time fiber-based multi-functional spectral-domain optical coherence tomography at 1.3 µm,” Opt. Express 13(11), 3931–3944 (2005). [CrossRef]  

54. M. Q. Tong, M. M. Hasan, S. S. Lee, M. R. Haque, D.-H. Kim, M. S. Islam, M. E. Adams, and B. H. Park, “OCT intensity and phase fluctuations correlated with activity-dependent neuronal calcium dynamics in the Drosophila CNS,” Biomed. Opt. Express 8(2), 726–735 (2017). [CrossRef]  

55. Z. Yaqoob, W. Choi, S. Oh, N. Lue, Y. Park, C. Fang-Yen, R. R. Dasari, K. Badizadegan, and M. S. Feld, “Improved phase sensitivity in spectral domain phase microscopy using line-field illumination and self phase-referencing,” Opt. Express 17(13), 10681–10687 (2009). [CrossRef]  

56. B. Braaf, K. V. Vienola, C. K. Sheehy, Q. Yang, K. A. Vermeer, P. Tiruveedhula, D. W. Arathorn, A. Roorda, and J. F. de Boer, “Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO,” Biomed. Opt. Express 4(1), 51–65 (2013). [CrossRef]  

57. T. Ling, K. C. Boyle, V. Zuckerman, T. Flores, C. Ramakrishnan, K. Deisseroth, and D. Palanker, “High-speed interferometric imaging reveals dynamics of neuronal deformation during the action potential,” Proc. Natl. Acad. Sci. 117(19), 10278–10285 (2020). [CrossRef]  

58. K. C. Boyle, Z. C. Chen, T. Ling, V. P. Pandiyan, J. Kuchenbecker, R. Sabesan, and D. Palanker, “On mechanisms of light-induced deformations in photoreceptors,” bioRxiv, 2020.01.08.897728 (2020).

59. K. T. Brown and M. Murakami, “A New Receptor Potential of the Monkey Retina with no Detectable Latency,” Nature 201(4919), 626–628 (1964). [CrossRef]  

60. A. L. Hodgkin and P. M. Obryan, “Internal recording of the early receptor potential in turtle cones,” J. Physiol. 267(3), 737–766 (1977). [CrossRef]  

61. I. Powell, “Design of a laser beam line expander,” Appl. Opt. 26(17), 3705–3709 (1987). [CrossRef]  

Equations (7)


(1) $I(x,y,z) = \mathcal{F}\{I(x,y,k)\} = A(x,y,z)\exp(i\varphi(x,y,z))$

(2) $I_{reg}^{tref}(x,y,z) = I_{reg}(x,y,z)(i) \times \left( \frac{\sum_{k=1}^{n} I_{reg}(x,y,z)(k)}{n} \right)$

(3) $I_{ISOS/COST}(x,y) = I_{ISOS}^{tref}(x,y) \times I_{COST}^{tref}(x,y)$

(4) $\Delta OPL(t) = \frac{\lambda_{o} \times \Delta\phi_{ISOS/COST}(t)}{4\pi}$

(5) $\sigma_{\Delta\varphi}^{2} = \left( \frac{1}{SNR} \right) + \left( \frac{4\pi}{3} \right)\left( 1 - \exp\left( -2\left( \frac{\Delta x}{d} \right)^{2} \right) \right)$

(6) $\sigma_{\Delta\varphi}^{2} = \left( \frac{1}{2\,SNR(z_{1},t)} \right) + \left( \frac{1}{2\,SNR(z_{2},t)} \right)$

(7) $SNR = \frac{2\,SNR(z_{1},t)\,SNR(z_{2},t)}{SNR(z_{1},t) + SNR(z_{2},t)}$