
Combined hardware and computational optical wavefront correction

Open Access

Abstract

In many optical imaging applications, it is necessary to overcome aberrations to obtain high-resolution images. Aberration correction can be performed by either physically modifying the optical wavefront using hardware components, or by modifying the wavefront during image reconstruction using computational imaging. Here we address a longstanding issue in computational imaging: photons that are not collected cannot be corrected. This severely restricts the applications of computational wavefront correction. Additionally, performance limitations of hardware wavefront correction leave many aberrations uncorrected. We combine hardware and computational correction to address the shortcomings of each method. Coherent optical backscattering data is collected using high-speed optical coherence tomography, with aberrations corrected at the time of acquisition using a wavefront sensor and deformable mirror to maximize photon collection. Remaining aberrations are corrected by digitally modifying the coherently-measured wavefront during image reconstruction. This strategy obtains high-resolution images with improved signal-to-noise ratio of in vivo human photoreceptor cells with more complete correction of ocular aberrations, and increased flexibility to image at multiple retinal depths, field locations, and time points. While our approach is not restricted to retinal imaging, this application is one of the most challenging for computational imaging due to the large aberrations of the dilated pupil, time-varying aberrations, and unavoidable eye motion. In contrast with previous computational imaging work, we have imaged single photoreceptors and their waveguide modes in fully dilated eyes with a single acquisition. Combined hardware and computational wavefront correction improves the image sharpness of existing adaptive optics systems, and broadens the potential applications of computational imaging methods.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical imaging is often limited by wavefront aberrations, which may be inherent to the imaging system itself or introduced by the imaged sample. By compensating for the wavefront error using adaptive optics, it is possible to acquire high resolution images even in the presence of aberrations [1–4]. Conventionally, this has been achieved using a wavefront sensor and deformable mirror to measure and correct the aberrated optical wavefront. We term this method hardware adaptive optics (HAO).

HAO physically modifies the optical wavefront to allow tight focus of the imaging beam. This allows for high signal even in the presence of large aberrations. However, the HAO correction is optimized for a single depth within the sample, and thus, image quality is not optimal for other depth locations. The wavefront correction also suffers from limited sampling and inherent measurement and fitting errors, and is sensitive to system misalignment. Misalignment is particularly prevalent when acquiring data on living subjects due to involuntary head and eye motion and eye blinks. These effects can be reduced, but not eliminated.

Previous attempts to improve HAO images have relied upon amplitude-based deconvolution [5,6]. These methods neglect the phase information and operate upon the amplitude or intensity image only. Small values in the optical transfer function lead to either signal loss or noise amplification, making these methods poorly suited for imaging in scattering samples.

Optical coherence tomography (OCT) is a broadband interferometric imaging method which measures coherently backscattered light, and can be combined with HAO for imaging samples that aberrate the wavefront [7–10]. By acquiring interferometric data using OCT, the complex wavefront is measured, and the pupil phase can be digitally adjusted to compensate for aberrations [11]. This is done by applying a phase-only filter in the spatial frequency domain and is analogous to the physical operation of a deformable mirror in adaptive optics. Therefore, we term this method computational adaptive optics (CAO). Using CAO, computational wavefront correction has been demonstrated for a variety of biological imaging applications [11–15].

With CAO, the image formation process can continue after data acquisition, and therefore the data does not need to be aberration-free when acquired. This reduces the burden on HAO to provide optimal correction at the time of imaging and the need to maintain optimal sample alignment. Additionally, the aberration correction can be fine-tuned to each depth layer, field position, and time point of acquisition. However, because CAO does not physically modify the wavefront, any photons lost due to the presence of aberrations are not recovered using CAO. When the imaging beam is aberrated, the input signal strength is distributed away from the nominal focus. The backscattered photons are then rejected by the confocal detection of point-scanning OCT. This causes a drop in the detected OCT signal strength, leading to a loss in signal-to-noise ratio (SNR) that cannot be completely recovered using computational methods alone.

One alternative to scanned OCT is full-field OCT which removes the confocal gate and allows collection of aberrated photons. Using CAO, full-field OCT has acquired high-resolution images of the retina through a dilated pupil without hardware wavefront correction [16]. However, full-field OCT has stricter imaging speed requirements to achieve sufficient phase stability, lower diffraction-limited resolution, and reduced contrast due to the noise arising from scattered photons when compared to scanned OCT.

We combine HAO and CAO together to address the shortcomings of each method and to demonstrate how their strengths can be integrated to provide more complete correction of wavefront aberrations. A high-speed OCT system equipped with HAO is used to acquire 3D, phase-stable interferometric data with high SNR. This data is then reconstructed using CAO to remove aberrations left uncorrected by HAO. Together, HAO + CAO achieves improved resolution when compared to HAO, and improved SNR when compared to CAO.

This work is similar to, but distinct from point-spread function engineering [17], where hardware is used to introduce a known intensity profile onto the optical beam which can then be digitally corrected. For example, wavefront coding or airy beam imaging can create a distorted point-spread function which increases the depth-of-field [18–20]. Likewise, artificially introduced astigmatism can be used for depth localization of sparse signals [21,22]. In point-spread function engineering, the desired wavefront modification is known. In the case of combined HAO + CAO imaging, the wavefront aberrations are sample-induced, dynamic, and unknown. Rather than modify an ideal wavefront to suit some other purpose, the goal is to thoroughly correct the distorted wavefront for improved imaging capability, demonstrated here by in vivo retinal imaging.

Imaging the living human retina requires imaging through the optics of the eye itself, which are severely aberrated when the pupil of the eye is large [23]. To obtain a high numerical aperture and therefore high resolution, the pupil must be dilated, resulting in strong ocular aberrations and photon loss. Because of this, previous demonstrations of in vivo CAO-only imaging using scanned OCT were performed on undilated subjects [13,24,25]. Therefore, retinal imaging is an application well-suited for a combined hardware and computational approach.

2. Methods

2.1 Adaptive optics imaging system

A high-speed adaptive optics OCT system was used to physically correct aberrations and acquire retinal OCT data [26]. As in most HAO systems, the sample arm used mirrors instead of lenses to eliminate back reflections that may interfere with the wavefront measurement. Unfortunately, the use of spherical mirrors introduces strong astigmatism that must be compensated by placing some mirrors out of plane. The system used a unique design based on toroidal mirrors to allow in-the-plane alignment without strong system aberrations, the details of which were previously published in Ref [27]. An updated version of the system published in Ref [26] was used for the experiments presented here. Key differences from the original design include the use of a single deformable mirror and four interleaved spectrometers.

The HAO system consisted of a deformable mirror (DM 97, ALPAO) and a custom Shack-Hartmann wavefront sensor, constructed with a 20 x 20 lenslet array in front of an sCMOS camera (Neo, Andor). A superluminescent diode centered at 790 nm with 47 nm bandwidth was used for both imaging and wavefront sensing, giving an axial resolution in tissue (n = 1.38) of 4.7 µm. The pupil size was 6.67 mm at the eye, resulting in a theoretical diffraction-limited transverse resolution of 2.4 µm. The spectral-domain OCT system used four interleaved spectrometers operating at line rates of 250 kHz each, for an effective line rate of 1 MHz. All images were acquired at the 1 MHz rate.
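As a consistency check on the quoted transverse resolution, the Rayleigh criterion at the retina can be evaluated directly; the reduced-eye focal length of approximately 16.7 mm used below is an assumed value, not a measured parameter of this system:

$$\Delta x \approx \frac{1.22\,\lambda_0 f_{\mathrm{eye}}}{d} = \frac{1.22 \times 0.79\ \mu\mathrm{m} \times 16.7\ \mathrm{mm}}{6.67\ \mathrm{mm}} \approx 2.4\ \mu\mathrm{m},$$

in agreement with the value stated above.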

2.2 Human subject imaging

Data was acquired from the right eyes of two healthy male subjects, ages 27 years and 26 years. These are referred to as Subject 1 and Subject 2, respectively. The left eye was covered by a patch, and the right eye was dilated using 0.5% tropicamide. Imaging was performed at 0.5°, 1°, 3.5°, 7.5°, and 12.5° temporal (T) to the foveal center. The OCT data was acquired with the HAO system running in closed loop feedback, and a real-time display was used to place the focus near the cone photoreceptors prior to data acquisition [28]. Due to the relatively high numerical aperture of the dilated pupil, only posterior retinal layers had sufficient SNR for observation.

OCT data was acquired in bursts of 30 sequential volumes at 10 volumes per second, for a total acquisition time of 3 seconds. This acquisition scheme was standard for HAO to ensure that sufficient motion-free volumes were acquired for imaging of small features such as rods, and to avoid imaging during large motion artifacts such as saccades. This method was not altered for CAO and was sufficient to acquire many phase stable OCT volumes within a single burst. All procedures on the subjects adhered to the tenets of the Declaration of Helsinki and were approved by the Institutional Review Board of Indiana University.

2.3 Phase stability

Computational wavefront correction requires a stable phase relationship during the measurement of each location within the imaging volume. In a dynamic sample such as the human eye, there is significant motion which is largely overcome by using high imaging speeds. Although the 1 MHz line rate was sufficient to overcome much of the eye motion, it was important to measure the system stability due to the use of galvanometer scanning mirrors at such high frame rates and the fluctuations of the deformable mirror. This was analyzed using a model eye with the HAO system operating with and without closed-loop feedback. Repeated frames were acquired at the same location by fixing the position of the slow-axis scanning mirror. Complex conjugate multiplication was performed between consecutively acquired frames (along the temporal axis), and the result was averaged along both depth and the fast-scanning axis. This canceled out any transverse motion and measured purely axial motion, to which computational OCT is most sensitive. Following volume phase stabilization [29], the standard deviation of the phase motion was 0.06 radians, well below the previously determined threshold of 0.3 radians [30].
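A minimal sketch of this stability measurement is given below, assuming the repeated B-scans are available as a complex array of shape (frames, depth, fast axis); the function and variable names are illustrative and not taken from the authors' software. For the model-eye data described above, the equivalent statistic was 0.06 radians.

```python
import numpy as np

def phase_stability_std(frames):
    """Bulk axial phase stability from repeated complex B-scans.

    frames: complex array of shape (n_frames, n_depth, n_fast), acquired with
    the slow-axis mirror fixed. Returns the standard deviation (radians) of the
    frame-to-frame bulk phase, which should remain well below ~0.3 rad [30].
    """
    # Complex-conjugate product between consecutively acquired frames.
    pairwise = frames[1:] * np.conj(frames[:-1])
    # Average over depth and the fast-scanning axis; transverse structure
    # cancels and only the bulk axial phase shift of each frame pair remains.
    bulk = pairwise.mean(axis=(1, 2))
    return np.std(np.angle(bulk))
```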

Phase motion in the HAO-OCT data set was corrected using the axial motion stabilization method outlined in Ref [29], with the modification that the mean was taken along the entire length of the fast-scanning axis. This gave a single phase correction for each fast-axis B-scan. Each depth image then underwent additional preprocessing prior to CAO optimization. First, the Fourier spectrum was centered to remove any linear phase ramp across the depth image; this was done by calculating the centroid of the Fourier spectrum and shifting the spectrum so that the centroid coincided with the zero-frequency position. The final preprocessing step was to filter out the remaining phase noise. This was performed sequentially along each dimension by modifying the phase, $\phi_i$, at each pixel according to Eq. (1):

$$\phi_i = \begin{cases} \operatorname{mean}(\phi_{i-1},\,\phi_{i+1}), & \text{if } (\phi_i - \phi_{i-1})(\phi_{i+1} - \phi_i) < 0 \text{ and } |\phi_{i-1} - \phi_{i+1}| < \pi \\ \phi_i, & \text{otherwise.} \end{cases} \tag{1}$$
The result was a smooth phase profile suitable for CAO processing.
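A sketch of the Eq. (1) filter is shown below; it assumes the phase map is a real-valued array in radians and applies the stated rule along one dimension at a time, which is one reasonable reading of the procedure rather than the authors' exact implementation.

```python
import numpy as np

def smooth_phase_1d(phi):
    """Apply the Eq. (1) rule along the last axis of a phase array (radians)."""
    out = phi.copy()
    for i in range(1, out.shape[-1] - 1):
        prev_, cur, next_ = out[..., i - 1], out[..., i], out[..., i + 1]
        # Replace non-monotonic excursions whose neighbors agree to within pi.
        mask = ((cur - prev_) * (next_ - cur) < 0) & (np.abs(prev_ - next_) < np.pi)
        out[..., i] = np.where(mask, 0.5 * (prev_ + next_), cur)
    return out

def smooth_phase_2d(phi):
    """Filter sequentially along each dimension of a 2D phase map."""
    return smooth_phase_1d(smooth_phase_1d(phi).T).T
```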

2.4 CAO processing

Residual aberrations in the HAO-OCT data were corrected using CAO in post-processing. CAO was used to optimize the wavefront correction for each depth layer independently using a phase-only pupil filter in the spatial frequency domain, as illustrated in Fig. 1(a). The en face OCT data was Fourier transformed to the spatial frequency domain via a 2D Fourier transform. The phase of the wavefront was then modified by multiplication with a 2D wavefront correction filter prior to inverse Fourier transforming back to the spatial domain. Depth layers corresponding to reflections from the photoreceptors were extracted from single HAO and HAO + CAO volumes for direct comparison, as illustrated in Fig. 1(b).
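The core CAO operation described here reduces to a phase-only multiplication in the 2D spatial-frequency domain. The sketch below assumes the en face data for one depth layer is available as a complex array and that the aberration-correction phase has already been constructed on the same frequency grid (for example, from Zernike coefficients); the names and sign convention are illustrative.

```python
import numpy as np

def apply_cao_filter(en_face, phi_pupil):
    """Phase-only computational wavefront correction of one depth layer.

    en_face:   complex 2D array, the en face OCT signal S(x, y) at one depth.
    phi_pupil: real 2D array (radians) of the same shape, the correction phase
               defined over the computed pupil (zero outside its support).
    """
    spectrum = np.fft.fftshift(np.fft.fft2(en_face))    # to the spatial-frequency domain
    corrected = spectrum * np.exp(-1j * phi_pupil)      # phase-only pupil filter
    # The sign of the exponent depends on the FFT and aberration conventions used.
    return np.fft.ifft2(np.fft.ifftshift(corrected))    # back to the spatial domain
```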


Fig. 1 (a) Illustration of the CAO processing method on a human subject. Each depth plane in the retina is optimized independently using an aberration phase filter in the spatial frequency domain. (b) Depth planes corresponding to the photoreceptor reflections are extracted from the HAO and HAO + CAO volumes for comparison. Cross-sectional and depth-profile projections are shown to illustrate this process, where extracted depth layers are used to generate the cone and rod mosaics. IS/OS: Inner segment/outer segment junction. COST: Cone outer segment tips. ROST: Rod outer segment tips. Scalebar is 20 µm.


Cone mosaic images were generated by a maximum projection of the inner segment/outer segment junction (IS/OS) and cone outer segment tip (COST) depth layers [10]. The rod mosaic image was taken directly from the rod outer segment tip (ROST) depth layer. Prior to extracting individual cell layers, the HAO and HAO + CAO data were co-registered to remove any translation introduced by CAO processing, and flattened to remove tip, tilt, and slowly-varying axial eye motion [31,32].
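A simplified sketch of the mosaic extraction and co-registration steps is given below; it assumes a flattened amplitude volume indexed as [depth, y, x] with the IS/OS and COST depth indices already identified, and uses integer-pixel FFT cross-correlation in place of the subpixel method of Ref [31].

```python
import numpy as np

def cone_mosaic(volume_amp, isos_idx, cost_idx):
    """Maximum projection of the IS/OS and COST layers of a flattened amplitude
    volume indexed as [depth, y, x]; the depth indices are assumed known."""
    return volume_amp[[isos_idx, cost_idx]].max(axis=0)

def register_translation(reference, moving):
    """Remove the translation between two en face images using integer-pixel
    FFT cross-correlation (subpixel registration, Ref [31], would refine this)."""
    xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Convert the correlation peak into a (possibly negative) shift.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
    return np.roll(moving, shift, axis=(0, 1))
```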

The optimal aberration correction filter was determined via stochastic optimization of the image sharpness over the first five Zernike orders, using the resilient backpropagation procedure outlined in [33]. The CAO phase filter extended to the maximum theoretical cutoff frequency of the confocal system, defined as two times the spatial frequency coverage of the 6.67 mm pupil at the eye [34]. A single correction was used for the entire field-of-view, which is roughly half the size of the expected isoplanatic patch on the retina [35]. The image sharpness metric was calculated from the complex OCT signal S(x,y) as the sum of the squared intensity,

$$\sum_{x,y} \left[ S(x,y)\, S^*(x,y) \right]^2. \tag{2}$$
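Written out explicitly, the sharpness metric of Eq. (2) is the sum of the squared pixel intensities of the complex image; the normalized variant used in Section 3 to report sharpness improvement divides by the summed intensity, as stated there. The sketch below is illustrative.

```python
import numpy as np

def image_sharpness(S):
    """Eq. (2): sum over pixels of the squared intensity of the complex image S(x, y)."""
    intensity = np.abs(S) ** 2
    return np.sum(intensity ** 2)

def normalized_sharpness(S):
    """Variant used in Section 3 to report improvement: Eq. (2) normalized by
    the summed pixel intensity."""
    intensity = np.abs(S) ** 2
    return np.sum(intensity ** 2) / np.sum(intensity)
```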

The CAO procedure was tested on the COST layer at 12.5°T in Subject 2 to determine the run time and image sharpness improvement for an increasing number of Zernike modes. The maximum Zernike mode was increased from 2nd to 10th order (excluding piston, tip, and tilt), and the optimization was run 10 times at each step. The optimization was performed on the 300 x 300 pixel image using MATLAB 2015b on an Intel Core i7-6950X processor. Optimization up to 5th order (20th Zernike term) was determined to be a good balance between optimization time and image improvement, with an average runtime and sharpness improvement of 12 seconds and 42%, respectively. This was used as the default setting for processing other retinal data sets.

3. Results and discussion

Retinal data was acquired from living human subjects with fully dilated pupils using a high-speed adaptive optics OCT system. Representative cone photoreceptor mosaics for each possible combination of HAO and CAO are given in Fig. 2, along with the peak SNR in each case. Each image shows the OCT amplitude presented on a common grayscale normalized to the HAO + CAO image. The peak SNR was calculated as

$$\mathrm{SNR}_{\mathrm{peak}} = 10 \log_{10}\!\left( \frac{\max\left[ S(x,y)\, S^*(x,y) \right]^2}{\sigma_{\mathrm{noise}}^2} \right). \tag{3}$$
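Read literally, Eq. (3) compares the squared peak intensity to the noise variance. In the sketch below, the noise variance is estimated from a signal-free region of the image; this estimation step is an assumption, as the text does not specify how the noise term was obtained.

```python
import numpy as np

def peak_snr_db(S, noise_region):
    """Eq. (3): peak SNR in decibels of a complex en face image S. The noise
    variance is estimated here from the intensity of a signal-free region,
    which is an assumption; the text does not specify the estimator."""
    intensity = np.abs(S) ** 2
    sigma_noise_sq = np.var(np.abs(noise_region) ** 2)
    return 10.0 * np.log10(np.max(intensity) ** 2 / sigma_noise_sq)
```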
Images acquired without HAO (Fig. 2(a)) correspond to a fixed defocus applied to the deformable mirror based upon the subject's eyeglass prescription (−2 diopter). In the no-AO case, the strong ocular aberrations lead to poor SNR and poor resolution. Note that in many subjects, the SNR without adaptive optics may be so low that no photoreceptors are visible. CAO recovers the diffraction-limited resolution, and the peak SNR increases due to higher peak signal of the corrected point-spread function (PSF). However, the total signal collected remains constant before and after computational correction.


Fig. 2 Human photoreceptor imaging with and without wavefront correction. (a) Cone mosaic without hardware wavefront correction. CAO corrects the ocular aberrations but cannot recover lost photons. (b) Cone mosaic with hardware wavefront correction. Photon collection is improved by HAO, and remaining aberrations are corrected using CAO. All images are displayed on a common amplitude scale to highlight differences in signal level. The peak SNR is given in decibels. Data taken at 12.5° temporal to the fovea in Subject 2. Scalebar is 20 µm.


The greatest improvement in SNR comes from the addition of HAO (Fig. 2(b)), which increased the peak signal by nearly an order of magnitude over the no-AO case. Still, the point-spread function remains somewhat aberrated. The HAO + CAO image shows both improved resolution and the greatest increase in SNR when compared to the no-AO image, a 12.7 dB increase. This demonstrates the synergy between the hardware and computational wavefront corrections.

A comparison between HAO and HAO + CAO is shown in Fig. 3 for cone photoreceptor mosaics at increasing eccentricity from the foveal center. The cone mosaic images are compiled from the inner segment/outer segment junction (IS/OS) and cone outer segment tips (COST) depth layers [10]. Each was independently optimized with CAO. Improvement in the visualization of the cone photoreceptors across the field-of-view is seen following CAO residual aberration correction. The cones are smallest and most densely packed near the fovea. Signal traces taken through adjacent cones at 0.5° temporal (T) to the fovea show improved resolution, with an increase in the peak signal of each cone and a lower minimum between the two cones, a result of improvement in the PSF and image sharpness. Cones at 0.5°T, 1°T, and 3.5°T show similar improvement, with narrow and symmetric intensity profiles. At greater retinal eccentricities, the IS/OS reflection supports higher order optical modes [36], as was recently found with HAO [37,38]. Our HAO + CAO results confirm and extend these findings by better elucidating the modes. An example of this can be seen in the 7.5°T and 12.5°T mosaics (see also Fig. 2), where the higher-order modes are more clearly delineated following CAO optimization.


Fig. 3 HAO + CAO cone photoreceptor mosaic over 0.4° x 0.5° field-of-view at multiple retinal eccentricities. The top of each image is toward the fovea (nasal direction), and the fast-scanning axis is along the vertical dimension. Zoomed images correspond to the boxed areas in the cone mosaics. Signal traces are taken through the red lines in the zoomed images. Plots indicate the corresponding HAO (blue) and HAO + CAO (red) signals, with HAO + CAO showing an improved resolution. Scalebar is 20 µm. (See Visualization 1 and Visualization 2 for comparison across the full field-of-view.)


Because rods are absent near the foveal center and have a density that increases sharply with retinal eccentricity, these cells were examined at 12.5°T. The HAO + CAO rod mosaics from the rod outer segment tips (ROST) layer are shown in Fig. 4. Following CAO residual aberration correction, multiple individual rods appear resolved (Figs. 4(a) and 4(b)). In addition to the rods themselves, a repeating pattern of dark areas emerges which corresponds to pseudo-shadows of the cone photoreceptors [37,39,40]. This is confirmed by overlaying the COST layer onto the ROST layer, shown in Figs. 4(b) and 4(e), where the bright cone reflections are color coded magenta to facilitate comparison. The signal traces in Figs. 4(c) and 4(f) correspond to the rods highlighted by the white arrows in Figs. 4(a) and 4(d), indicating that imaging of the rod photoreceptors is improved following CAO. The expected spacing of the rod photoreceptors is 2.5-3 µm [41,42], which approaches the 2.4 µm theoretical diffraction-limited resolution of the imaging system. When imaging objects near the resolution limit using coherent light, interference effects dominate [43]. Consequently, some rods within the field-of-view remain obscured by speckle. However, rods that are sufficiently separated are resolved.


Fig. 4 (a, d) HAO + CAO rod photoreceptor mosaic taken at 12.5° temporal to the fovea. Zoomed images correspond to boxed areas of corresponding color in the mosaic. Multiple individual rod photoreceptors can be resolved in the HAO + CAO data. (b, e) Dark patches in the rod mosaic correspond to pseudo-shadows of the cone photoreceptors, demonstrated by presenting the COST in magenta overlay. (c, f) Signal traces through the rod photoreceptors indicated by the white arrows in (a, d). Scalebar is 20 µm.


In addition to being optimized for each eccentricity and depth layer, the CAO correction was also optimized for each time point. A time sequence of the CAO residual wavefront correction is shown in Fig. 5 for the peak COST depth layer, acquired at an OCT volume rate of 10 Hz. The optimized Zernike coefficients reveal temporal dynamics that were left uncorrected by HAO, but corrected with CAO. The residual wavefront corrections appear to vary around a general profile over time. For example, positive defocus (Z4), negative astigmatism (Z5), and positive spherical aberration (Z12) are present at each time point. However, the weights vary significantly, with many of the other aberrations changing sign as well as strength. The temporal variation in the residual aberration correction is likely due to the temporal dynamics of ocular aberrations [44], which interact with the varying state of the deformable mirror, resulting in a new residual aberration at each time point.


Fig. 5 Cone photoreceptors at the same location imaged across multiple time-points and the corresponding CAO residual aberration corrections. The optimized CAO Zernike weights (numbered per the ANSI Z80.28 standard [45]) are shown for each time-point, along with the CAO phase filter (without defocus for improved visualization). A single photoreceptor is encircled to aid the reader in tracking the photoreceptor mosaic over time. Data taken at 3.5° temporal to the fovea in Subject 2. Scalebar is 20 µm.


The root mean square (RMS) strength of the residual aberrations corrected by CAO and the corresponding image sharpness improvement are shown in Table 1, corresponding to the peak IS/OS and COST depth layers. The RMS wavefront variation was calculated over the spatial frequencies corresponding to the 6.67 mm physical pupil, and is given as a fraction of the central wavelength, λ. Note that there is a non-trivial relationship between residual aberration RMS and sharpness improvement, as each aberration mode has a unique influence upon the value of the metric [46].
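Assuming the residual correction phase is expressed in radians over the frequency support of the physical pupil, the conversion to the wavelength fractions listed in Table 1 is a straightforward RMS with piston removed; the sketch below is illustrative rather than the authors' calculation.

```python
import numpy as np

def rms_in_waves(phi_pupil, pupil_mask):
    """RMS of the residual correction phase (radians) over the physical-pupil
    frequency support, expressed as a fraction of the central wavelength.
    Piston (the mean phase) is removed first, since it does not affect the image."""
    phi = phi_pupil[pupil_mask]
    phi = phi - phi.mean()
    return np.sqrt(np.mean(phi ** 2)) / (2.0 * np.pi)
```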


Table 1. Residual wavefront RMS as a fraction of wavelength (λ) and image sharpness improvement (%)

The impact of ocular aberrations typically increases with retinal eccentricity, placing a greater burden on the HAO system, which could be shared by CAO. There is also increased difficulty in obtaining optimal alignment of the subject pupil with the HAO system at larger retinal eccentricities. The results in Table 1 follow that general trend, showing more improvement in the image sharpness metric with increasing eccentricity. The image sharpness improvement was calculated as the increase in the sharpness metric defined in Eq. (2), with the modification that the metric was normalized by the sum of the pixel intensities to account for any small variations in image power resulting from the computational correction.

These results were achieved by computationally correcting up to the 20th Zernike mode, while the HAO system corrected up to 70 singular-value modes. Therefore, the residual aberrations do not result from a limited number of modes corrected by HAO, but from the accuracy with which the modes are measured and corrected. Calibration error, fitting error, measurement error, and bandwidth error all contribute to the presence of residual aberrations [47]. The computed pupil also has many more adjustable elements than the number of actuators on the deformable mirror used in this study, which may partially explain the improvement gained from CAO.

The term computed pupil refers to the spatial frequency coverage of the OCT system accessed by taking the 2D transverse Fourier transform of the OCT signal, as illustrated in Fig. 1(a). The computed pupil is circular and extends to the cutoff frequency of the imaging system. Within the computed pupil, the phase of each pixel is digitally modified using double-precision floating-point numbers, making the pixels equivalent to piston-only actuators with nearly infinite stroke. Pixels outside the computed pupil are left unmodified. The number of pixels within the computed pupil is termed the number of computational actuators.

For the imaging protocol used here, the en face image size was 300 x 300 pixels, originally acquired with 0.4 x 0.5 µm spacing and a 6.67 mm physical pupil. This resulted in 14,111 piston-only computational actuators within the computed pupil. For comparison, the Alpao DM 97 used in this study had 97 discrete actuators with approximately Gaussian influence functions. For a piston-only wavefront corrector to achieve performance equivalent to a corrector with Gaussian influence functions, the required number of actuators is predicted to increase by a factor of 10 to 40, depending upon pupil size among other factors [23]. In this study, the piston-only computed pupil had approximately 150 times more actuators than the Gaussian influence function DM, several times more than predicted to be necessary for comparable performance. This suggests that the computed pupil is capable of superior wavefront correction, although this should be confirmed in a future study with a direct comparison between HAO and CAO in a controlled setting.
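The stated number of computational actuators can be approximately reproduced from the imaging parameters above. The sketch below assumes a reduced-eye focal length of roughly 16.7 mm to convert the 6.67 mm pupil into a numerical aperture and counts frequency-grid pixels inside the confocal cutoff; the result is close to, but not exactly, the 14,111 reported, with the difference attributable to the exact cutoff and sampling assumed.

```python
import numpy as np

# Illustrative reconstruction of the number of computational actuators.
# The reduced-eye focal length (~16.7 mm) is an assumed value, not given in the text.
n_x, n_y = 300, 300              # en face pixels
dx, dy = 0.4e-6, 0.5e-6          # pixel spacing (m)
wavelength = 790e-9              # central wavelength (m)
pupil_d, f_eye = 6.67e-3, 16.7e-3
na = pupil_d / (2 * f_eye)                 # numerical aperture at the retina
f_cutoff = 2 * na / wavelength             # confocal cutoff: twice the pupil's frequency coverage

fx = np.fft.fftfreq(n_x, d=dx)             # spatial-frequency grid (cycles/m)
fy = np.fft.fftfreq(n_y, d=dy)
FX, FY = np.meshgrid(fx, fy)
n_actuators = int(np.sum(FX ** 2 + FY ** 2 <= f_cutoff ** 2))
print(n_actuators)                         # ~14,500 under these assumptions; the text reports 14,111
```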

Improvement in resolution and image sharpness following CAO is not only beneficial for interpretation by human users, but also for automated image analysis. For example, image blur is the primary source of error in automated cone detection algorithms [48], because a broad PSF can cause multiple cones to be misidentified as a single cone. Therefore, the performance of such algorithms was expected to improve with residual aberration correction. This was tested on cone mosaics with primarily single-mode reflections (0.5°T, 1°T, 3.5°T) using the algorithm of Li and Roorda [49].

The results are shown in Fig. 6, where the cone density calculated using the automated method on the HAO and HAO + CAO images is compared to a manual count, which we use as a gold standard. As expected, the increase in performance is greatest near the fovea where the cones are most dense and have a diameter near the diffraction-limit of the imaging system. Photoreceptor densities are more accurately measured following CAO residual aberration correction, with an improvement in accuracy from 86% to 94% near the fovea. Representative images showing the automated counts for HAO and HAO + CAO are also shown. Several cones which are misidentified under HAO are accurately detected following CAO, explaining the increase in the automated photoreceptor density measurement with HAO + CAO.


Fig. 6 Manual and automated measurement of cone densities with and without CAO. The automated HAO + CAO count is closer to the manual count in each case, especially near to the fovea. Representative automated counting results show improved cone detection with HAO + CAO. Estimated cone locations are indicated by the yellow markers. Scalebar is 2 µm.


4. Conclusions

These results demonstrate the effectiveness of HAO + CAO for 3D imaging through dynamic aberrations. This combination of hardware and computational wavefront correction can potentially improve the performance of state-of-the-art HAO systems. Alternatively, it could lead to simplified, less-expensive, and more user-friendly HAO systems since complete physical correction of aberrations is no longer necessary. For human subject imaging, the combination of HAO + CAO could improve the subject's experience as it may allow data to be collected more quickly. For example, residual aberrations in Fig. 3 obscured the multi-mode reflection of the photoreceptors at 12.5°T. Without computational wavefront correction this would require realigning and re-imaging the subject, likely during an additional lab visit with pupil dilation. With residual aberration correction using CAO, this becomes unnecessary.

The combination of HAO and CAO may be particularly beneficial in diseased eyes where aberrations are commonly more difficult to correct due to aging of the ocular media and abnormalities in the reflectance profile of the retina that can impact the wavefront sensor performance. In these subjects, HAO would be used to correct the strongest aberrations to increase photon collection prior to high-fidelity correction by computational means. These subjects are expected to be less stable and may require higher imaging speeds to retain phase stability. Additionally, full integration of HAO and CAO for clinical use would require a near real-time implementation of the automated CAO optimization procedure. Manual CAO correction has previously been implemented for real-time display using a graphics processing unit [50]. It is anticipated that parallelization will enable optimization of multiple depth layers simultaneously to produce HAO + CAO images at an acceptable speed for clinical use.

This combined hardware and computational approach to wavefront correction is expected to be useful in other applications outside of retinal imaging. HAO for optical microscopy with direct wavefront sensing is often inaccurate due to poor performance of the wavefront sensor in thick samples [4,51]. The aberrated image acquired using direct wavefront sensing could be used as a high-SNR starting point for further computational correction. Sensorless adaptive optics, in which a deformable mirror is adjusted to maximize an appropriate image sharpness metric, must optimize the mirror shape for each scan point at the time of acquisition [10,46,52]. Therefore, the image acquisition time is dramatically increased when obtaining an optimal wavefront correction. Using CAO, the sensorless AO correction does not need to be optimal, since uncorrected aberrations can be removed post-acquisition. This could dramatically reduce the acquisition time for sensorless AO, enabling imaging of more dynamic samples.

Funding

National Institutes of Health (NIH) (R01-CA213149, R01-EB013723, R01-EB023232, R01-EY018339, P30-EY019008); Air Force Office of Scientific Research (FA9550-17-1-0387).

Acknowledgements

The authors thank Darold Spillman for technical support, and Andrew J. Bower for helpful discussion. We also thank Timothy Turner, William Monette, and Thomas Kemerly for computer software, electronic, and machining support. Additional information can be found at http://biophotonics.illinois.edu and http://www.opt.indiana.edu/dtmiller/Index.aspx.

Research reported in this publication was supported by the National Institute of Biomedical Imaging and Bioengineering, the National Cancer Institute, and the National Eye Institute of the National Institutes of Health under award numbers R01EB023232, R01EB013723, R01CA213149, R01EY018339, and P30EY019008, respectively, and by the Air Force Office of Scientific Research under award number FA9550-17-1-0387. One hundred percent of the total project costs were financed with Federal money and zero percent of the total costs were financed by nongovernmental sources. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Disclosures

SAB: Patents licensed by the Massachusetts Institute of Technology related to optical coherence tomography (P).

References and links

1. J. M. Beckers, “Adaptive optics for astronomy: Principles, performance, and applications,” Annu. Rev. Astron. Astrophys. 31(1), 13–62 (1993). [CrossRef]  

2. R. Davies and M. Kasper, “Adaptive optics for astronomy,” Annu. Rev. Astron. Astrophys. 50(1), 305–351 (2012). [CrossRef]  

3. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef]   [PubMed]  

4. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light Sci. Appl. 3(4), e165 (2014). [CrossRef]  

5. I. Iglesias and P. Artal, “High-resolution retinal images obtained by deconvolution from wave-front sensing,” Opt. Lett. 25(24), 1804–1806 (2000). [CrossRef]   [PubMed]  

6. J. C. Christou, A. Roorda, and D. R. Williams, “Deconvolution of adaptive optics retinal images,” J. Opt. Soc. Am. A 21(8), 1393–1401 (2004). [CrossRef]   [PubMed]  

7. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

8. J. G. Fujimoto and E. A. Swanson, “The development, commercialization, and impact of optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 57(9), OCT1–OCT13 (2016). [CrossRef]   [PubMed]  

9. Y. Zhang, B. Cense, J. Rha, R. S. Jonnal, W. Gao, R. J. Zawadzki, J. S. Werner, S. Jones, S. Olivier, and D. T. Miller, “High-speed volumetric imaging of cone photoreceptors with adaptive optics spectral-domain optical coherence tomography,” Opt. Express 14(10), 4380–4394 (2006). [CrossRef]   [PubMed]  

10. R. S. Jonnal, O. P. Kocaoglu, R. J. Zawadzki, Z. Liu, D. T. Miller, and J. S. Werner, “A review of adaptive optics optical coherence tomography: technical advances, scientific applications, and the future,” Invest. Ophthalmol. Vis. Sci. 57(9), OCT51–OCT68 (2016). [CrossRef]   [PubMed]  

11. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. U.S.A. 109(19), 7175–7180 (2012). [CrossRef]   [PubMed]  

12. Y.-Z. Liu, N. D. Shemonski, S. G. Adie, A. Ahmad, A. J. Bower, P. S. Carney, and S. A. Boppart, “Computed optical interferometric tomography for high-speed volumetric cellular imaging,” Biomed. Opt. Express 5(9), 2988–3000 (2014). [CrossRef]   [PubMed]  

13. N. D. Shemonski, F. A. South, Y.-Z. Liu, S. G. Adie, P. S. Carney, and S. A. Boppart, “Computational high-resolution optical imaging of the living human retina,” Nat. Photonics 9(7), 440–443 (2015). [CrossRef]   [PubMed]  

14. F. A. South, Y.-Z. Liu, P. S. Carney, and S. A. Boppart, “Computed optical interferometric imaging: Methods, achievements, and challenges,” IEEE J. Sel. Top. Quantum Electron. 22(3), 186–196 (2016). [CrossRef]   [PubMed]  

15. Y.-Z. Liu, F. A. South, Y. Xu, P. S. Carney, and S. A. Boppart, “Computational optical coherence tomography [Invited],” Biomed. Opt. Express 8(3), 1549–1574 (2017). [CrossRef]   [PubMed]  

16. D. Hillmann, H. Spahr, C. Hain, H. Sudkamp, G. Franke, C. Pfäffle, C. Winter, and G. Hüttmann, “Aberration-free volumetric high-speed imaging of in vivo retina,” Sci. Rep. 6(1), 35209 (2016). [CrossRef]   [PubMed]  

17. Z. S. Hegedus and V. Sarafis, “Superresolving filters in confocally scanned imaging systems,” J. Opt. Soc. Am. A 3(11), 1892–1896 (1986). [CrossRef]  

18. E. R. Dowski Jr and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859–1866 (1995). [CrossRef]   [PubMed]  

19. S. Jia, J. C. Vaughan, and X. Zhuang, “Isotropic three-dimensional super-resolution imaging with a self-bending point spread function,” Nat. Photonics 8(4), 302–306 (2014). [CrossRef]   [PubMed]  

20. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, “Light-sheet microscopy using an Airy beam,” Nat. Methods 11(5), 541–544 (2014). [CrossRef]   [PubMed]  

21. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319(5864), 810–813 (2008). [CrossRef]   [PubMed]  

22. S. Quirin, S. R. P. Pavani, and R. Piestun, “Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions,” Proc. Natl. Acad. Sci. U.S.A. 109(3), 675–679 (2012). [CrossRef]   [PubMed]  

23. N. Doble, D. T. Miller, G. Yoon, and D. R. Williams, “Requirements for discrete actuator and segmented wavefront correctors for aberration compensation in two large populations of human eyes,” Appl. Opt. 46(20), 4501–4514 (2007). [CrossRef]   [PubMed]  

24. A. Kumar, L. M. Wurster, M. Salas, L. Ginner, W. Drexler, and R. A. Leitgeb, “In-vivo digital wavefront sensing using swept source OCT,” Biomed. Opt. Express 8(7), 3369–3382 (2017). [CrossRef]   [PubMed]  

25. L. Ginner, A. Kumar, D. Fechtig, L. M. Wurster, M. Salas, M. Pircher, and R. A. Leitgeb, “Noniterative digital aberration correction for cellular resolution retinal optical coherence tomography in vivo,” Optica 4(8), 924–931 (2017). [CrossRef]  

26. O. P. Kocaoglu, T. L. Turner, Z. Liu, and D. T. Miller, “Adaptive optics optical coherence tomography at 1 MHz,” Biomed. Opt. Express 5(12), 4186–4200 (2014). [CrossRef]   [PubMed]  

27. Z. Liu, O. P. Kocaoglu, and D. T. Miller, “In-the-plane design of an off-axis ophthalmic adaptive optics system using toroidal mirrors,” Biomed. Opt. Express 4(12), 3007–3029 (2013). [CrossRef]   [PubMed]  

28. B. A. Shafer, J. E. Kriske, O. P. Kocaoglu, T. L. Turner, Z. Liu, J. J. Lee, and D. T. Miller, “Adaptive-optics optical coherence tomography processing using a graphics processing unit,” in 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2014), pp. 3877–3880. [CrossRef]  

29. N. D. Shemonski, S. S. Ahn, Y.-Z. Liu, F. A. South, P. S. Carney, and S. A. Boppart, “Three-dimensional motion correction using speckle and phase for in vivo computed optical interferometric tomography,” Biomed. Opt. Express 5(12), 4131–4143 (2014). [CrossRef]   [PubMed]  

30. N. D. Shemonski, S. G. Adie, Y.-Z. Liu, F. A. South, P. S. Carney, and S. A. Boppart, “Stability in computed optical interferometric tomography (Part I): Stability requirements,” Opt. Express 22(16), 19183–19197 (2014). [CrossRef]   [PubMed]  

31. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]   [PubMed]  

32. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012). [CrossRef]   [PubMed]  

33. P. Pande, Y.-Z. Liu, F. A. South, and S. A. Boppart, “Automated computational aberration correction method for broadband interferometric imaging techniques,” Opt. Lett. 41(14), 3324–3327 (2016). [CrossRef]   [PubMed]  

34. M. Gu, C. J. R. Sheppard, and X. Gan, “Image formation in a fiber-optical confocal scanning microscope,” J. Opt. Soc. Am. A 8(11), 1755–1761 (1991). [CrossRef]  

35. P. Bedggood, M. Daaboul, R. Ashman, G. Smith, and A. Metha, “Characteristics of the human isoplanatic patch and implications for adaptive optics retinal imaging,” J. Biomed. Opt. 13(2), 024008 (2008). [CrossRef]   [PubMed]  

36. V. Lakshminarayanan and J. M. Enoch, “Biological waveguides,” in Handbook of Optics Vol III, M. Bass, ed., 2nd ed. (McGraw-Hill, 2001).

37. F. Felberer, J.-S. Kroisamer, B. Baumann, S. Zotter, U. Schmidt-Erfurth, C. K. Hitzenberger, and M. Pircher, “Adaptive optics SLO/OCT for 3D imaging of human photoreceptors in vivo,” Biomed. Opt. Express 5(2), 439–456 (2014). [CrossRef]   [PubMed]  

38. Z. Liu, O. P. Kocaoglu, T. L. Turner, and D. T. Miller, “Modal content of living human cone photoreceptors,” Biomed. Opt. Express 6(9), 3378–3404 (2015). [CrossRef]   [PubMed]  

39. S.-H. Lee, J. S. Werner, and R. J. Zawadzki, “Improved visualization of outer retinal morphology with aberration cancelling reflective optical design for adaptive optics - optical coherence tomography,” Biomed. Opt. Express 4(11), 2508–2517 (2013). [CrossRef]   [PubMed]  

40. Z. Liu, O. P. Kocaoglu, and D. T. Miller, “3D imaging of retinal pigment epithelial cells in the living human retina,” Invest. Ophthalmol. Vis. Sci. 57(9), OCT533 (2016). [CrossRef]   [PubMed]  

41. C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson, “Human photoreceptor topography,” J. Comp. Neurol. 292(4), 497–523 (1990). [CrossRef]   [PubMed]  

42. D. Merino, J. L. Duncan, P. Tiruveedhula, and A. Roorda, “Observation of cone and rod photoreceptors in normal subjects and patients using a new generation adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 2(8), 2189–2201 (2011). [CrossRef]   [PubMed]  

43. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2004).

44. H. Hofer, P. Artal, B. Singer, J. L. Aragón, and D. R. Williams, “Dynamics of the eye’s wave aberration,” J. Opt. Soc. Am. A 18(3), 497–506 (2001). [CrossRef]   [PubMed]  

45. ANSI Z80.28, Methods of Reporting Optical Aberrations of Eyes (2004).

46. A. Facomprez, E. Beaurepaire, and D. Débarre, “Accuracy of correction in modal sensorless adaptive optics,” Opt. Express 20(3), 2598–2612 (2012). [CrossRef]   [PubMed]  

47. J. Porter, H. Queener, J. Lin, K. Thorn, and A. Awwal, eds., Adaptive Optics for Vision Science (Wiley, 2006).

48. L. Mariotti and N. Devaney, “Performance analysis of cone detection algorithms,” J. Opt. Soc. Am. A 32(4), 497–506 (2015). [CrossRef]   [PubMed]  

49. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” J. Opt. Soc. Am. A 24(5), 1358–1363 (2007). [CrossRef]   [PubMed]  

50. H. Tang, J. A. Mulligan, G. R. Untracht, X. Zhang, and S. G. Adie, “GPU-based computational adaptive optics for volumetric optical coherence microscopy,” Proc. SPIE 9720, 97200O (2016). [CrossRef]  

51. S. A. Rahman and M. J. Booth, “Direct wavefront sensing in adaptive optical microscopy using backscattered light,” Appl. Opt. 52(22), 5523–5532 (2013). [CrossRef]   [PubMed]  

52. J. Zeng, P. Mahou, M.-C. Schanne-Klein, E. Beaurepaire, and D. Débarre, “3D resolved mapping of optical aberrations in thick tissues,” Biomed. Opt. Express 3(8), 1898–1913 (2012). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: Cone photoreceptor mosaic before and after CAO wavefront correction. Subject 1.
Visualization 2: Cone photoreceptor mosaic before and after CAO wavefront correction. Subject 2.
