
Quantitative analysis of illumination and detection corrections in adaptive light sheet fluorescence microscopy


Abstract

Light-sheet fluorescence microscopy (LSFM) is a high-speed, high-resolution, and minimally phototoxic technique for 3D imaging of in vivo and in vitro specimens. LSFM provides optical sectioning, and when combined with tissue clearing techniques it facilitates imaging of centimeter-scale specimens with micrometer resolution. Although LSFM is ubiquitous, it still faces two main challenges that affect image quality, especially when imaging large volumes at high resolution. First, the light-sheet illumination plane and the focal plane of the detection lens need to be coplanar, yet sample-induced aberrations can violate this requirement and degrade image quality. Second, the sample introduces optical aberrations in the detection path. These challenges intensify when imaging whole organisms or structurally complex specimens such as cochleae and bones, which exhibit many transitions from soft to hard tissue, or when imaging deep (> 2 mm). To resolve these challenges, various illumination and aberration correction methods have been developed, yet no adaptive correction in both the illumination and the detection path has been applied to improve LSFM imaging. Here, we bridge this gap by implementing the two correction techniques in a custom-built adaptive LSFM. The angular properties of the illumination beam are controlled by two galvanometer scanners, while a deformable mirror positioned in the detection path corrects for aberrations. By imaging whole porcine cochleae, we compare and contrast these correction methods and their influence on image quality. This knowledge will greatly contribute to the field of adaptive LSFM and to the imaging of large volumes of tissue-cleared specimens.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) imaging of specimens with high spatial and temporal resolution is desirable in many biomedical applications [1,2]. Prominent microscopy techniques that aspire to achieve these properties are confocal microscopy [3], 2-photon microscopy [4], and light sheet microscopy [5–10]. In traditional confocal and 2-photon microscopes the imaging is performed using a focused laser beam that is scanned point-by-point across the specimen, and optical sectioning is achieved either by a pinhole (confocal) or by multiphoton absorption (2-photon). The major drawback associated with these point-scanning systems is that they are slow [11], and confocal microscopy can also cause extensive photobleaching and phototoxicity [11].

Light-sheet fluorescence microscopy (LSFM) is a fast imaging technique that can provide optical sectioning and high spatial resolution [12–14]. In LSFM the illumination and detection light paths are separated and orthogonal. A thin light sheet is generated and used to illuminate a single plane in the specimen, thus providing optical sectioning and minimizing phototoxicity and photobleaching. The detection path then simultaneously collects the emitted photons across the full field-of-view, similar to wide-field imaging, hence the faster acquisition time relative to point-scanning microscopy techniques [5,7,15].

Recent advancements in tissue clearing and labeling techniques allow researchers to render entire organs transparent and thereby overcome scattering and absorption [16–29,21,30]. These clearing techniques call for high-throughput microscopes, and LSFM is commonly used to image centimeter-scale specimens at a large variety of spatial resolutions [13,14]. However, when imaging large volumes, LSFM image quality depends on two major factors. The first is the geometrical position of the light-sheet illumination beam relative to the detection focal plane. Simply put, to achieve good imaging quality, the light-sheet plane and the focal plane of the detection objective need to be coplanar. Violating coplanarity (Fig. 1(a)) generally leads to spatially varying focus and therefore non-uniform image quality across the field-of-view [12,14,26,31–33]. The coplanarity violation typically depends on the specimen structure and composition, as well as on the quality of the tissue clearing; the tissue clearing process often trades off the optical clarity of the specimen against preserving weak or fragile molecular signals [34]. In these cases, the illumination error becomes location dependent, requiring correction in each individual tile and at several depths within each tile (e.g., every mm). This correction is typically done by a pair of galvanometer scanners or a spatial light modulator (SLM) [12]. The error in the angle or location of the light-sheet illumination is estimated either manually, which is a tedious and time-consuming process, or computationally [14,35].
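To make the coplanarity requirement concrete, the short sketch below shows how a small residual roll angle between the light sheet and the detection focal plane translates into defocus at the edges of the field-of-view. The field-of-view, tilt, and depth-of-field numbers are illustrative assumptions, not values from this work:

```python
import numpy as np

def defocus_across_fov(fov_um, tilt_deg, depth_of_field_um):
    """Defocus offset (um) between a tilted light sheet and the detection
    focal plane at the edge of the field-of-view, and whether that offset
    exceeds half the detection objective's depth of field."""
    half_fov = fov_um / 2.0
    edge_defocus = half_fov * np.tan(np.radians(tilt_deg))
    return edge_defocus, abs(edge_defocus) > depth_of_field_um / 2.0

# A 0.5 deg residual roll over a 1.3 mm field of view shifts the sheet by
# several micrometers at the edges, beyond a high-NA objective's depth of field.
edge, violated = defocus_across_fov(fov_um=1300, tilt_deg=0.5, depth_of_field_um=2.0)
```

Even sub-degree tilts therefore defocus the periphery of a high-resolution image, which is why the angular errors quantified later in this work matter.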


Fig. 1. Aberrations in light-sheet fluorescence microscopy. (a) The geometrical position of the light sheet illumination beam (green) relative to the detection focal plane (red) needs to satisfy the coplanarity condition (a1), and any violation of the coplanarity condition in angle or translation will degrade the image quality across the field-of-view (a2). (b) Deep in the tissue, the specimen distorts the wavefront thus causing aberrations. (c) Practically, the image quality is affected both from sample induced illumination and detection errors.


Aberrations in the detection path are the second major factor that affects image quality. These aberrations result from heterogeneity of the refractive index within the sample, from using optical components outside their nominal range (e.g., using a water-immersion objective with the high-refractive-index immersion media used for tissue clearing), and more. All of the above distorts the wavefront, as shown in Fig. 1(b), which directly leads to aberrations [12,14,36–38]. Approaches to remove these aberrations are referred to as adaptive optics [12,39–44]. In adaptive optics, an active optical device such as a deformable mirror (DM) or SLM is introduced at the exit pupil of the detection lens and acts as an aberration compensator once the aberration is estimated. The aberration estimation can be done with [45–47] or without [48–51] a wavefront sensor; see the following reviews for an extensive discussion of the pros and cons of each approach [52,53]. Here, we pursued an iterative method to estimate the different Zernike modes of each aberration [37,51,54]. The major disadvantage of this iterative process is that it is slow, as it requires multiple snapshots of the same field-of-view. Although the iterative process is time consuming, its estimation accuracy is very high, and it can be applied to a large number of Zernike modes (∼3–20).

It should be noted that in low-resolution LSFM imaging (∼1–2× magnification and numerical aperture (NA) < 0.05) the image quality is less susceptible to aberrations and the coplanarity requirement is relatively easy to satisfy. This is because low-NA objectives have a large depth of field that can tolerate angular and translational deviations of the illumination beam.
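This tolerance can be illustrated with the common approximation DOF ≈ nλ/NA² for the diffraction-limited depth of field (a simplified form that neglects the lateral/pixel term; the wavelength and refractive index below are assumptions chosen to match typical tissue clearing conditions):

```python
def depth_of_field_um(wavelength_um, refractive_index, na):
    """Approximate diffraction-limited depth of field, DOF ~ n*lambda/NA^2.
    The camera/pixel contribution to the depth of field is neglected."""
    return refractive_index * wavelength_um / na**2

# 561 nm excitation in a high-RI clearing medium (assumed n = 1.56):
low_na  = depth_of_field_um(0.561, 1.56, 0.05)  # hundreds of um: tolerant of tilt
high_na = depth_of_field_um(0.561, 1.56, 0.60)  # a few um: tilt quickly defocuses
```

The two-orders-of-magnitude gap between these depths of field is why the corrections studied here only become critical at high NA.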

As the above discussion shows, previous work on LSFM was dedicated either to correcting the illumination beam to satisfy the coplanarity condition, or to correcting aberrations in the detection path. However, in high-resolution LSFM imaging of large volumes, correcting only one path will often be insufficient to acquire a uniform, high-quality image across the entire field-of-view (Fig. 1(c)). Here, we integrated both corrections in a single optical setup and addressed the following questions: (1) What is the extent of the sample-induced aberrations? (2) Quantitatively, how much does each approach improve the image quality, and what is the effect of dual correction? (3) When correcting both the illumination and detection sides, which sequence of operations yields superior imaging quality? The answers to these questions will greatly advance the field of adaptive LSFM and the adaptation of LSFM to imaging large tissue specimens with high quality and resolution.

2. Methods

The following sections describe the methods used in this work. Different computational techniques can be used for estimating the angular errors on the illumination side; here, we used a machine-learning-based method to estimate the angular error of the illumination [55]. This method required capturing only two defocused images to estimate the illumination angles (roll and yaw). For estimating the detection-side aberrations, an iterative method that estimated the Zernike coefficients was used [37,51,54]. A schematic of the optical setup is shown in Fig. 2; this setup modulated the light-sheet illumination beam (roll and yaw angles) using two galvanometer scanner systems, whereas a DM was used for correcting the aberrations in the detection path. Tissue-cleared porcine cochleae were used in the proof-of-principle experiments, as bones in general and cochleae in particular are challenging specimens to image using LSFM.


Fig. 2. A schematic of the optical setup used for correcting illumination and detection errors in light sheet fluorescence microscopy. Two galvanometer scanner systems were used to correct for the roll and yaw angles of the illumination beam, a translation stage was used to correct for illumination beam defocus, and a deformable mirror was used to correct for aberrations on the detection side.


2.1. Sample preparation

In this study, four pig cochlea samples from newborn Yorkshire pigs were cleared and labeled using a modified BoneClear protocol [26]. The typical dimensions of a sample were 7 × 6 × 5 mm³, and imaging was performed up to 5 mm deep inside the sample. The cochlea samples were labeled using rabbit anti-MYO7a (Abcam; Ab-3481) and rabbit anti-SOX2 (Abcam; 703-605-155). The secondary antibody was Cy3 AffiniPure donkey anti-rabbit IgG (Millipore; 711-165-152). In general, the non-specific binding of the secondary antibody within the bone was used as the imaging target, as well as the auto-fluorescence of the sample; therefore, the conclusions from this study can be generalized to any clearing protocol that uses antibody staining. All the animals were harvested under the regulation and approval of the Institutional Animal Care and Use Committee (IACUC) at North Carolina State University. The fluorescent bead sample was prepared by adding 2% agarose to deionized water and heating it in a microwave for 20 seconds, followed by cooling to room temperature and adding 20 µl of bead solution (ThermoFisher; A34305). The final solution was kept at −4 °C for 30 min to solidify and then used for imaging.

2.2. Experimental setup

After clearing, samples were mounted and placed in a custom-built immersion chamber filled with dibenzyl ether (DBE) with a refractive index (RI) of 1.562. The experimental setup for controlling the illumination beam angle was described in detail in [26]; Fig. 2 shows a schematic that excludes mirrors and lenses for brevity. Briefly, a Gaussian beam was emitted by a continuous-wave laser (Coherent; OBIS LS 561-50); the beam was then expanded and dithered up and down across the field-of-view at a high frequency (600 Hz) to generate the light-sheet illumination. The calculated waist full-width-half-maximum (FWHM) was ∼8 µm. The scanning galvo system (Cambridge Technology; 6215H) dithered the illumination beam and controlled the roll angle relative to the detection focal plane. The scanning galvo was driven by a dual-channel arbitrary function generator (Tektronix; AFG31022A), which can synchronize the phase between its two channels. To control the yaw angle, an additional dual-galvo system was integrated (Thorlabs; GVS202); this system was controlled using an analog signal (Measurement Computing; USB-1208HS-4AO). The detection objective lens (10×, numerical aperture (NA) 0.6, Olympus; XLPLN10XSVMP-2) was mounted on a translation stage (Newport; 561D-XYZ, with a CONEX-TRB12CC motor) to correct for any defocus errors between the illumination beam and the detection lens focal plane. Immediately after the detection objective, an emission filter (AVRO; FF01-593/40-25) was placed in the detection path, followed by a mirror to direct the light to the DM (ALPAO; DM69-15). The DM had 69 actuators, with a stroke range of 40 µm. The light from the DM was reflected towards a tube lens (ASI; TL180-MMC) followed by a CMOS camera (Hamamatsu; C13440-20CU). The DM was then used to estimate and correct for any global aberrations that occurred during alignment.
A graphical user interface, written in MATLAB (2019b), was used during image acquisition. A detailed explanation of the setup can be found in [26,55,56].

2.3. Estimation of the sample induced angular error in the illumination beam path

The estimation of the illumination beam angle relative to the focal plane of the detection objective was based on [55]. This approach utilized a deep learning framework to detect the extent of defocus present in subsections of the field-of-view. A plane was then fitted to the predicted defocus values using linear regression, from which the roll and yaw angles were calculated relative to the detection focal plane. The input to the deep learning network was two defocused images a fixed focal distance apart (6 µm). To correct the angles, the galvo systems diverted the illumination beam (Fig. 2). The exact voltage that corresponded to the correction in the yaw and roll angles was obtained using a calibration table; the details of generating the table can be found in [55]. After the correction, the image quality was dramatically improved. This pipeline was integrated into the custom-built LSFM.
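The plane-fitting step can be sketched as follows. This is a minimal reconstruction assuming per-patch defocus estimates on a grid of patch centers; which of the two fitted slopes maps to roll versus yaw depends on the camera orientation and is an assumption here:

```python
import numpy as np

def fit_illumination_angles(xs_um, ys_um, defocus_um):
    """Least-squares fit of a plane z = a*x + b*y + c to per-patch defocus
    estimates; the two slopes give the tilt angles (degrees) of the light
    sheet relative to the detection focal plane."""
    A = np.column_stack([xs_um, ys_um, np.ones_like(xs_um)])
    (a, b, c), *_ = np.linalg.lstsq(A, defocus_um, rcond=None)
    return np.degrees(np.arctan(a)), np.degrees(np.arctan(b))

# Synthetic check: a sheet tilted by 0.5 deg along x and -0.2 deg along y.
xg, yg = np.meshgrid(np.linspace(-650, 650, 5), np.linspace(-650, 650, 5))
z = np.tan(np.radians(0.5)) * xg + np.tan(np.radians(-0.2)) * yg
roll, yaw = fit_illumination_angles(xg.ravel(), yg.ravel(), z.ravel())
```

In the paper the per-patch defocus values come from the deep learning network of [55]; the synthetic plane above merely stands in for those predictions.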

2.4. Estimation of aberrations in the detection path

The detection-side aberrations were estimated without a wavefront sensor. Instead, an iterative grid-search experimental method was used to estimate the amplitudes of the Zernike polynomials. Briefly, using Fourier diffraction theory [57–59], and under unit magnification, the aberrated incoherent point spread function (PSF) can be written as:

$$h_{abr} = \left| \Im\left( p(\vec{r})\exp\left( j\rho(\vec{r}) \right) \right) \right|^2$$
where $\Im$ is the Fourier transform, $\vec{r}$ is the spatial coordinate vector, $p(\vec{r})$ is the amplitude of the pupil function, and $\rho(\vec{r})$ is the aberrated phase introduced by the imaging system or specimen. The aberrated phase can be further broken down to:
$$\rho(\vec{r}) = \sum_{n=1}^{\infty} a_n Z_n(\vec{r})$$
where $Z_n$ are the Zernike polynomials and $a_n$ are the corresponding amplitudes. In the iterative method, a compensating phase was displayed in the Fourier plane. The contribution of the compensating phase to the incoherent point spread function (PSF) can be written as:
$$h_{cor} = \left| \Im\left( p(\vec{r})\exp\left( j\rho(\vec{r}) - j\gamma(\vec{r}) \right) \right) \right|^2$$
$$\gamma(\vec{r}) = \sum_{n=1}^{N} b_n Z_n(\vec{r})$$
where $b_n$ are the amplitudes corresponding to $Z_n$ that are displayed on the DM for aberration compensation, and $N$ is the number of Zernike polynomials, or modes of aberration, used in the correction process. In our implementation, we used the Zernike polynomials corresponding to the following aberrations: coma, astigmatism, trefoil, and spherical. Note that the defocus aberration was corrected on the illumination side and was therefore excluded here.
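Equations (1)–(4) can be simulated numerically. The sketch below builds an aberrated incoherent PSF from a circular pupil and a few hand-coded Zernike modes; the mode normalization and grid size are simplifications of my own, not taken from the paper:

```python
import numpy as np

def aberrated_psf(coeffs, n=128):
    """Incoherent PSF h = |F{p(r) exp(j*rho(r))}|^2 (Eq. 1), with the pupil
    phase rho expanded in a few hand-coded Zernike modes (Eq. 2).
    `coeffs` maps mode name -> amplitude in radians."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (r <= 1.0).astype(float)                    # p(r): unit disk
    modes = {
        "astigmatism": r**2 * np.cos(2 * theta),
        "coma":        (3 * r**3 - 2 * r) * np.sin(theta),
        "trefoil":     r**3 * np.cos(3 * theta),
        "spherical":   6 * r**4 - 6 * r**2 + 1,
    }
    rho = sum(a * modes[name] for name, a in coeffs.items())
    field = np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * rho)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                              # normalize total energy

psf_ideal = aberrated_psf({})               # diffraction-limited reference
psf_abr   = aberrated_psf({"coma": 1.5})    # coma broadens and skews the PSF
```

Lowering the PSF peak (the Strehl ratio) is exactly what the compensating phase $\gamma$ in Eq. (3) aims to undo.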

The goal of the iterative process was to search for the values of $b_n$ that minimized the argument of the exponential term in Eq. (3) and therefore canceled the aberrations. The iterative process was divided into two stages, a coarse and a fine search [37]. In the coarse search, a range of amplitudes (five values) was displayed on the DM for the first mode (vertical coma), and the corresponding images were captured. An image assessment metric (Shannon entropy [60]) was then computed, and a curve fit between the image metric and the Zernike amplitude determined the optimal amplitude for that aberration. A similar process was performed for the next mode while keeping the amplitude of the previous mode fixed, and this was repeated until all the modes were sequentially determined. In the fine search, the range of five amplitudes was narrower and centered around the optimal coarse amplitude. Depending on the severity of the aberration, 2–3 stages of fine search were required to estimate the final Zernike amplitudes. In total, ∼15 images per aberration were captured during the aberration estimation session.
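The coarse-to-fine search can be sketched as below. Because the real loop is hardware-in-the-loop (display amplitudes on the DM, capture an image, score it with Shannon entropy), the `measure` callback here is a stand-in for that capture step, and the window-shrink factor is my own assumption:

```python
import numpy as np

def grid_search_modes(measure, n_modes, coarse_range=2.0, n_fine=2):
    """Mode-by-mode coarse/fine search for the compensating amplitudes b_n
    (Eq. 4). `measure(b)` stands in for displaying b on the DM, capturing an
    image, and returning its quality metric. Five amplitudes per stage; the
    optimum is picked by a quadratic fit, and previous modes are held fixed."""
    b = np.zeros(n_modes)
    for mode in range(n_modes):
        center, half = 0.0, coarse_range
        for _ in range(1 + n_fine):                   # 1 coarse + n_fine stages
            amps = center + np.linspace(-half, half, 5)
            scores = []
            for a in amps:
                trial = b.copy()
                trial[mode] = a
                scores.append(measure(trial))
            c2, c1, c0 = np.polyfit(amps, scores, 2)  # parabola through 5 points
            center = -c1 / (2 * c2) if c2 < 0 else amps[np.argmax(scores)]
            half /= 4.0                               # shrink the search window
        b[mode] = center
    return b

# Synthetic stand-in: the metric peaks when b cancels a hidden aberration.
truth = np.array([0.8, -0.5, 0.3])
fake_measure = lambda trial: -np.sum((trial - truth) ** 2)
b_hat = grid_search_modes(fake_measure, n_modes=3)
```

With one coarse and two fine stages of five images each, this sketch costs 15 images per mode, consistent with the ∼15 images per aberration reported above.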

The image assessment was conducted on a small patch (∼130 × 130 µm²) rather than the full FOV. This patch was focused before the iterative process began; locally, within this small, well-focused patch, the angular error had a relatively small effect. Note that this adaptive optics scheme assumes the aberrations are uniform across the entire FOV, as the DM can only correct for spatially invariant aberrations. Therefore, correcting the aberrations for the entire FOV based on a smaller patch was a reasonable assumption (see Fig. S1).

2.5. Statistical analysis and plotting

MATLAB (2019b) was used to calculate the Shannon entropy of the captured images and to find the Zernike amplitudes using curve fitting. A two-sample t-test (Origin software) was used to assess the significance of differences between correction types. The plots in Fig. 3 and Fig. 6 were created using Origin.
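As a sketch of the comparison performed in Origin, the pooled two-sample t statistic can be computed as follows (equal-variance Student form; whether Origin used this or the Welch variant is not stated in the text):

```python
import math

def two_sample_t(x, y):
    """Student's pooled two-sample t statistic and its degrees of freedom,
    as used to compare image-quality scores between correction types."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    t = (mx - my) / math.sqrt(pooled * (1 / nx + 1 / ny))
    return t, nx + ny - 2

# Identical samples give t = 0; clearly shifted samples give a large |t|.
t_same, df = two_sample_t([0.017, 0.018, 0.016], [0.017, 0.018, 0.016])
t_diff, _  = two_sample_t([0.023, 0.024, 0.022], [0.019, 0.020, 0.018])
```

The sample values above are hypothetical placeholders on the scale of the DCTE scores reported in Section 3.3, not the actual measurements.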

3. Results

All the experiments were performed on tissue-cleared porcine cochlea samples. The cochleae are bulky, curved, and encased in a dense otic capsule, and are therefore hard to clear and image [26]. Consequently, the cochlea was a suitable specimen for examining the impact of the specimen on the illumination beam and on detection-side aberrations.

3.1. Sample induced illumination and detection errors

The aim of the following experiment was to quantify the extent of the illumination and detection errors introduced by the specimen. To accomplish this, we first made sure that, without a specimen, the coplanarity requirement held; the voltage values that held the galvanometer scanners in these positions were added as a constant bias. Then, we imaged a sparse bead specimen at its surface and removed any detection-side aberrations using a constant bias on the DM. Any corrections that followed were therefore sample related.

Then, the specimens were placed in the chamber, and the corrections were recorded at various depths and spatial locations; in total, 16 locations were examined across 4 specimens. Figure 3(a) shows the extent of the roll and yaw angles required to satisfy the coplanarity requirement. The means and standard deviations of the yaw and roll angles were 0.27° ± 2.03° and 0.51° ± 3.72°, respectively. As expected, the mean values were approximately zero, as sample-induced errors should have no systematic tendency to refract the illumination light in any particular direction. These results demonstrate that the illumination beam direction was greatly influenced by the specimen.

For detection-side aberrations, we decided to correct for the four most common aberrations: coma, astigmatism, trefoil, and spherical. Focus was corrected on the illumination side and, in general, any number of aberrations can be corrected for, at the expense of increasing the number of images captured during the iterative aberration estimation process (see Section 2.4). Figure 3(b) shows the different amplitudes of the Zernike modes introduced by the specimens at 21 locations. The high variation in the amplitude values demonstrates the need for local correction. Note that for some aberration modes the overall change in amplitude was small (e.g., spherical); however, this does not imply that the influence on the image quality was low. We believe this is because, when moving from low- to high-order aberrations (e.g., coma to spherical), the polynomial order increases, and hence a slight amplitude change can have a major local effect on the pupil function. To verify that we did not conflate the effects of the illumination beam errors with the detection aberrations, a small patch of ∼200 × 200 pixels (i.e., 130 × 130 µm²) within the field-of-view was used to estimate the aberration; if this patch was in focus, the illumination beam error had a minimal effect. In addition, we also calculated the Zernike amplitudes for different positions of the estimation patch across the FOV after illumination correction. The estimated amplitudes were almost identical (∼5% variation) regardless of the position of the patch (Fig. S1).


Fig. 3. Quantifying the extent of sample induced aberrations in porcine cochlea. (a) The estimated corrections in the yaw and roll angles to satisfy the coplanarity condition, after the introduction of the sample. The values were recorded from various depths and spatial positions, and they showed a large variability that required local correction in the illumination beam. (b) The estimated Zernike amplitudes at various depths and spatial positions due to the sample induced aberrations. The box shows the mean and standard deviation of the values, whereas the whiskers show the maximum and minimum values.


Please note, the DM cannot correct for aberrations that vary within the field-of-view, e.g., a tilted illumination beam. The tip and tilt aberrations that the DM could correct for were related only to the tip and tilt of the wavefront after the objective lens.

3.2. Empirically determining the order of correction

This section is dedicated to demonstrating the improvement in image quality when both the illumination and detection-side corrections are applied. First, we wanted to address the order of correction; simply put, which correction should be performed first. Therefore, we performed two experiments (Fig. S2). In the first experiment, the detection-side correction was performed using the DM; the iterative process was run on a small, manually focused patch in the center of the image (∼200 × 200 pixels, white box in Fig. 4(a)), and only then was the illumination beam corrected to satisfy the coplanarity requirement. In the second experiment, the order of correction was reversed; see Fig. S2 for the exact sequence.


Fig. 4. The effects of correction sequence on image quality. The sample was imaged using LSFM (a) without any correction, (b) corrected for aberrations by Zernike amplitude estimation, and (c) corrected for aberrations by Zernike amplitude estimation followed by illumination correction. The improvement in image quality is evident with each correction step. The specimen is tissue cleared porcine cochlea stained with MYO7a antibody.


Figure 4 shows the results of the first experiment: the uncorrected image (Fig. 4(a)), the detection-corrected image (Fig. 4(b)), and finally the image after the illumination beam correction was applied as well (Fig. 4(c)). The color-coded zoomed-in images qualitatively show the progressive improvement in image quality after the corrections. The field-of-view of Fig. 4(a)–(c) changed slightly (∼20 µm in depth) after each correction, for two reasons. First, the iterative detection correction process required capturing multiple images, which reduced the overall intensity (photo-bleaching); therefore, after the detection correction, we moved the sample a few microns in depth to a nearby plane that was not photo-bleached. Second, after the illumination correction, the field-of-view varied due to the change in the illumination angle; simply put, the same plane was no longer illuminated. Despite this, we kept the center of the FOV constant before and after the illumination correction. We verified experimentally that translating the sample by a few micrometers in depth did not drastically change the image quality or the content of the image (Fig. S3).

In the detection-corrected image (Fig. 4(b)), the aberrations were removed; nevertheless, not all parts of the image were in focus. The reason is that without illumination correction, the light-sheet illumination and the detection focal plane were not coplanar, resulting in variable defocus across the FOV (see Fig. 1(a)). When the illumination beam angle was corrected as well, the full FOV came into focus, improving the overall imaging quality.

Figure 5 shows the results of the second experiment: the uncorrected image (Fig. 5(a)), the illumination-corrected image (Fig. 5(b)), and finally the image after the detection-side aberrations were corrected as well (Fig. 5(c)). Again, the color-coded zoomed-in images show the progressive improvement in image quality, and the white box shows the region of interest used for the iterative process. In Fig. 5(b), the illumination correction improved the focus quality of the full image; however, strong aberrations still appeared in the image. When the detection correction was added, the aberrations disappeared and an overall improvement in image quality was achieved.


Fig. 5. The effects of correction sequence on image quality. Sample imaged using LSFM (a) without any correction, (b) illumination correction, (c) illumination correction followed by detection correction using Zernike amplitude estimation. The improvement in the image quality is evident with each correction step. The specimen is tissue cleared porcine cochlea stained with MYO7a antibody.



Fig. 6. The improvement in image quality as a function of correction. The box shows the mean and standard deviation of the values, whereas the whiskers show the maximum and minimum values. The p values for the t-tests between the corrections are shown as “**” = p ≤ 0.01, “***” = p ≤ 0.001, and “****” = p ≤ 0.0001.


Qualitatively, from Figs. 4, 5, and S4, it seemed that correcting the illumination first provided better results, and that in general the illumination beam correction improved the image quality more than the detection-side correction. We next assessed this observation quantitatively.

3.3. Impact of individual and combined corrections on image quality - quantitative evaluation

To estimate the image quality quantitatively and without bias, we first identified a robust image quality measure that concurred with our manual evaluation. We tried various methods (e.g., image contrast, wavelet variance, and more [61–65]; see Fig. S5–S9), and in general, methods that directly measure contrast tended to assign high values to aberrated images. We found that the Discrete Cosine Transform Energy (DCTE) [64] values were in line with our visual assessment of quality, judged by observing the 2D shape of the point-like objects present in the corrected images. In DCTE, the alternating (AC) and constant (DC) components of the DCT-transformed image are calculated, and their ratio is used as a focus measure; DCTE works equally well on high- and low-contrast images [64]. Therefore, the image assessment was done using DCTE. Figure 6 shows the DCTE values before and after the various corrections and for the different correction sequences. To generate the figure, at least 10 cases were captured for each condition, across 4 porcine cochlea samples. It was evident that illumination correction by itself improved the image quality significantly more than detection-side correction: 0.017 ± 0.001 versus 0.014 ± 0.001 (mean ± standard deviation). Additionally, the combined sequence of illumination correction followed by detection correction provided better imaging quality than its counterpart: 0.023 ± 0.002 versus 0.019 ± 0.001 (mean ± standard deviation).
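One plausible reading of the DCTE measure, given the description of an AC-to-DC energy ratio (the exact normalization in [64] may differ), can be sketched as:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2-D DCT-II built by matrix multiplication (no SciPy)."""
    def dct_mat(n):
        k = np.arange(n)[:, None]
        m = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
        m[0] *= 1 / np.sqrt(2)                 # DC row scaling for orthonormality
        return m * np.sqrt(2 / n)
    Dr, Dc = dct_mat(img.shape[0]), dct_mat(img.shape[1])
    return Dr @ img @ Dc.T

def dcte(img):
    """Focus score: ratio of alternating (AC) to constant (DC) DCT energy.
    Sharper images put more energy into AC coefficients, raising the score."""
    c = dct2(img.astype(float))
    dc = c[0, 0] ** 2
    ac = (c ** 2).sum() - dc
    return ac / dc

# Sanity check: blurring an image (local averaging) should lower its score.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1) +
           np.roll(sharp, (1, 1), (0, 1))) / 4.0
```

Because averaging preserves the image mean (the DC term) while shrinking high-frequency content, the score falls monotonically with defocus, which is the behavior a focus measure needs.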

This result can be explained intuitively: the DM can only correct invariant aberrations, i.e., aberrations that do not change across the field-of-view. Before the illumination correction, however, the aberrations were variant, as each location in the field-of-view experienced a different level of defocus. The angular correction of the illumination beam therefore made the aberrations more invariant, and hence more suitable for correction using the DM.

3.4. How persistent is the correction?

Finally, we wanted to qualitatively estimate how often the corrections would have to be repeated within a depth scan of a single tile; simply put, after a correction was done at one plane (the reference plane), how deep within the tissue could we image before the aberrations re-emerged. This point was vital for the detection-side correction, which was the time-limiting process, requiring the acquisition of ∼100 images, whereas the illumination correction required capturing only two. To answer this question, images within the same tile were captured at various depths after the aberrations were estimated and corrected based on a reference plane (Fig. 7). Figure 7(a) shows the plane on which the illumination and detection corrections were applied (i.e., the reference plane), while Fig. 7(b) and Fig. 7(c) show the planes 1 and 2 mm deeper, respectively. Note that the image assessment was conducted on only a small, manually focused portion of the FOV, so the angular error had a relatively small effect. Qualitatively, the aberration correction seemed to hold up to 1 mm from the reference plane before the image quality deteriorated and the detection-side aberration estimation had to be repeated. Quantitatively, comparing the obtained DCTE values per depth with the values from Fig. 6 confirmed that the correction held up to 1 mm deep inside the sample. By and large, the conclusion that the detection-side aberrations should be re-evaluated every 1 mm should be taken with a grain of salt, as this value is highly dependent on the sample complexity and the quality of the tissue clearing process.

4. Discussion and conclusion

Here, we investigated the effects of sample induced aberrations on LSFM imaging, and we have found that in the extreme case, of imaging challenging samples such as tissue cleared porcine cochlea, these aberrations were substantial and greatly affected the image quality. Although tissue clearing methods rendered the sample optically clear, refractive index mismatches between the immersion medium and sample were still present. The refractive index mismatch in addition with the complex geometry of the sample, manipulated the optical path of the light sheet and introduced aberrations locally. Therefore, individual corrections for each tile in the sample were required. After establishing the extent of aberrations, we have utilized an adaptive light-sheet microscope, that can (1) change the yaw and roll angles of the light-sheet illumination beam in order to fulfill the coplanarity requirement. (2) Correct for detection side aberrations using DM, which was inserted at the exit pupil plane of the detection objective. Therefore, we have compared the benefits of each correction approach, and found that the illumination side correction had bigger influence on the image quality (21%), when compared using the DCTE metric. Then, both the illumination and detection corrections were applied, and our results showed that the sequence of illumination correction followed by detection correction provided superior results by 21% in comparison with its counterpart. The reason being that after correcting for angles in the illumination path, the aberration was approximately invariant across the field of view. This is essential, as the DM can only correct for invariant aberrations, and therefore the DM cannot handle cases where the illumination beam was not parallel to the detection focal plane. 
Overall, applying both corrections improved the image quality by ∼35% and 64%, respectively, compared with correcting only the illumination or only the detection-side aberrations.
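The field-invariance argument can be illustrated numerically: a yaw angle between the light sheet and the detection focal plane produces a defocus that varies linearly across the field, and a pupil-conjugate DM can remove only its field-invariant (mean) part. A minimal sketch, with an assumed 1 mm field of view:

```python
import numpy as np

def residual_defocus(yaw_deg, fov_um=1000.0, n=101):
    """Worst-case axial offset between a tilted light sheet and the
    detection focal plane across the field of view, after subtracting
    the best single (field-invariant) defocus a pupil DM could apply.
    Field size and sampling are illustrative assumptions.
    """
    x = np.linspace(-fov_um / 2, fov_um / 2, n)
    dz = x * np.tan(np.radians(yaw_deg))   # sheet height vs. field position
    best = dz.mean()                       # one global defocus = DM's best move
    return float(np.abs(dz - best).max())  # residual the DM cannot remove
```

Even a 1° yaw leaves a residual of several micrometers over a 1 mm field, larger than the depth of field of typical detection optics, which is why the angular correction must precede the DM correction.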


Fig. 7. (a) A porcine cochlea imaged using LSFM after illumination and detection correction. (b) The same sample imaged 1 mm deeper than the correction plane using the same correction parameters as in (a). (c) The same sample imaged 2 mm deeper than the correction plane using the same correction parameters as in (a). The decrease in image quality shows that the correction parameters still provide reasonable image quality when imaging 1 mm away from the correction plane.


In conclusion, we have shown the effects of sample-induced aberrations and filled a gap in the literature by combining illumination and detection corrections in LSFM. Our results and conclusions are derived from one type of sample and a relatively aggressive tissue clearing protocol optimized for bones. Therefore, the magnitudes of the illumination and detection errors may change with the sample at hand, the tissue clearing and labeling protocol utilized, and the optical setup. However, we believe that the general trends reported here should be consistent across different samples and preparations. The illumination correction could be further improved by utilizing a detection lens with higher NA and, consequently, a smaller depth of field; a smaller depth of field is more sensitive to defocus, so the illumination beam's angular error can be estimated more accurately. Regarding the detection correction, instruments with higher NA will experience larger aberrations [66], and these more pronounced aberrations will have to be corrected by the DM.
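The depth-of-field argument can be made concrete with Berek's approximate formula, which combines a diffraction term and a detector-sampling term. All parameter values below (wavelength, medium index, pixel size, magnification) are illustrative assumptions, not the optics of this paper's setup:

```python
def depth_of_field_um(na, wavelength_um=0.52, n_medium=1.56,
                      pixel_um=6.5, mag=10.0):
    """Approximate detection depth of field (Berek's formula):
    a diffraction term plus a geometric detector-sampling term.
    n_medium = 1.56 is a typical clearing-solution refractive
    index (assumed); other defaults are likewise illustrative.
    """
    diffraction = n_medium * wavelength_um / na**2
    geometric = n_medium * pixel_um / (mag * na)
    return diffraction + geometric
```

Because the diffraction term scales as 1/NA², doubling the NA shrinks the depth of field severalfold, making defocus from a residual illumination tilt far easier to detect.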

A challenge associated with this work is the time-consuming process of estimating the detection-side aberrations. This can be mitigated by performing the correction at only a few locations along the depth scan, as the correction holds for ∼1 mm. Alternatively, faster methods could be developed to evaluate the aberrations in the detection path. In the future, we will work on deep learning-based methods that predict the aberrations from fewer images, as well as on combining the illumination and detection correction techniques so that aberrations and angles can be estimated simultaneously.
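One common way to reduce the image count in sensorless detection-side correction is a three-point parabolic fit per Zernike mode, which needs 2N+1 acquisitions for N modes instead of a dense amplitude scan. The sketch below assumes a smooth, single-peaked quality metric; the `metric` callable is a hypothetical stand-in for applying a DM shape, acquiring an image, and scoring it:

```python
import numpy as np

def sensorless_modal_correction(metric, n_modes=5, probe=0.5):
    """Estimate Zernike correction amplitudes mode by mode.

    For each mode, the metric is sampled at -probe, 0, +probe and a
    parabola is fitted through the three scores; its vertex gives the
    estimated optimal amplitude. `metric(coeffs)` returns an
    image-quality score (e.g. DCTE) for a vector of Zernike amplitudes.
    """
    coeffs = np.zeros(n_modes)
    for i in range(n_modes):
        scores = []
        for a in (-probe, 0.0, probe):
            trial = coeffs.copy()
            trial[i] += a
            scores.append(metric(trial))
        m_neg, m_0, m_pos = scores
        denom = m_neg - 2 * m_0 + m_pos
        if denom < 0:  # concave fit: the parabola's vertex is a maximum
            coeffs[i] += 0.5 * probe * (m_neg - m_pos) / denom
    return coeffs
```

For a separable quadratic metric this recovers the optimum exactly in one pass; on real data, a second refinement pass or a larger probe set trades speed against accuracy.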

Funding

Center for Human Health and the Environment, North Carolina State University (ES025128); Life Sciences Research Foundation.

Acknowledgments

The authors would like to thank Dr. Adele Moatti and the NCSU Central Procedure Lab for their help with tissue collection.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. P. J. Keller, “Imaging morphogenesis: technological advances and biological insights,” Science 340(6137), 1234168 (2013). [CrossRef]  

2. C. A. Combs, “Fluorescence microscopy: a concise guide to current imaging methods,” Curr. Protoc. Neurosci. Chapter 2, Unit 2.1 (2010).

3. J. Jonkman, C. M. Brown, G. D. Wright, K. I. Anderson, and A. J. North, “Tutorial: guidance for quantitative confocal microscopy,” Nat. Protoc. 15(5), 1585–1611 (2020). [CrossRef]  

4. R. K. P. Benninger and D. W. Piston, “Two-photon excitation microscopy for the study of living cells and tissues,” Curr. Protoc. Cell Biol. Chapter 4, Unit 4.11 (2013).

5. E. H. K. Stelzer, F. Strobl, B.-J. Chang, F. Preusser, S. Preibisch, K. McDole, and R. Fiolka, “Light sheet fluorescence microscopy,” Nat. Rev. Methods Primer 1(1), 1–25 (2021). [CrossRef]  

6. P. J. Verveer, J. Swoger, F. Pampaloni, K. Greger, M. Marcello, and E. H. K. Stelzer, “High-resolution three-dimensional imaging of large specimens with light sheet–based microscopy,” Nat. Methods 4(4), 311–313 (2007). [CrossRef]  

7. M. Weber and J. Huisken, “Light sheet microscopy for real-time developmental biology,” Curr. Opin. Genet. Dev. 21(5), 566–572 (2011). [CrossRef]  

8. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, “Light-sheet microscopy using an Airy beam,” Nat. Methods 11(5), 541–544 (2014). [CrossRef]  

9. E. G. Reynaud, J. Peychl, J. Huisken, and P. Tomancak, “Guide to light-sheet microscopy for adventurous biologists,” Nat. Methods 12(1), 30–34 (2015). [CrossRef]  

10. P. A. Santi, “Light sheet fluorescence microscopy: a review,” J. Histochem. Cytochem. 59(2), 129–138 (2011). [CrossRef]  

11. C. M. St. Croix, S. H. Shand, and S. C. Watkins, “Confocal microscopy: comparisons, applications, and problems,” BioTechniques 39(6S), S2–S5 (2005). [CrossRef]  

12. L. A. Royer, W. C. Lemon, R. K. Chhetri, Y. Wan, M. Coleman, E. W. Myers, and P. J. Keller, “Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms,” Nat. Biotechnol. 34(12), 1267–1278 (2016). [CrossRef]  

13. T. Chakraborty, M. K. Driscoll, E. Jeffery, M. M. Murphy, P. Roudot, B. J. Chang, S. Vora, W. M. Wong, C. D. Nielson, H. Zhang, V. Zhemkov, C. Hiremath, E. D. De La Cruz, Y. Yi, I. Bezprozvanny, H. Zhao, R. Tomer, R. Heintzmann, J. P. Meeks, D. K. Marciano, S. J. Morrison, G. Danuser, K. M. Dean, and R. Fiolka, “Light-sheet microscopy of cleared tissues with isotropic, subcellular resolution,” Nat. Methods 16(11), 1109–1113 (2019). [CrossRef]  

14. L. A. Royer, W. C. Lemon, R. K. Chhetri, and P. J. Keller, “A practical guide to adaptive light-sheet microscopy,” Nat. Protoc. 13(11), 2462–2500 (2018). [CrossRef]  

15. P. J. Keller and H.-U. Dodt, “Light sheet microscopy of living or cleared specimens,” Curr. Opin. Neurobiol. 22(1), 138–143 (2012). [CrossRef]  

16. C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016). [CrossRef]  

17. E. Lee, J. Choi, Y. Jo, J. Y. Kim, Y. J. Jang, H. M. Lee, S. Y. Kim, H.-J. Lee, K. Cho, N. Jung, E. M. Hur, S. J. Jeong, C. Moon, Y. Choe, I. J. Rhyu, H. Kim, and W. Sun, “ACT-PRESTO: Rapid and consistent tissue clearing and labeling method for 3-dimensional (3D) imaging,” Sci. Rep. 6(1), 18631 (2016). [CrossRef]  

18. H. Hama, H. Hioki, K. Namiki, T. Hoshida, H. Kurokawa, F. Ishidate, T. Kaneko, T. Akagi, T. Saito, T. Saido, and A. Miyawaki, “ScaleS: an optical clearing palette for biological imaging,” Nat. Neurosci. 18(10), 1518–1529 (2015). [CrossRef]  

19. M. R. Cronan, A. F. Rosenberg, S. H. Oehlers, J. W. Saelens, D. M. Sisk, K. L. Jurcic Smith, S. Lee, and D. M. Tobin, “CLARITY and PACT-based imaging of adult zebrafish and mouse for whole-animal analysis of infections,” Dis. Model. Mech. 8(12), 1643–1650 (2015). [CrossRef]  

20. I. Costantini, J.-P. Ghobril, A. P. Di Giovanna, A. L. A. Mascaro, L. Silvestri, M. C. Müllenbroich, L. Onofri, V. Conti, F. Vanzi, L. Sacconi, R. Guerrini, H. Markram, G. Iannello, and F. S. Pavone, “A versatile clearing agent for multi-modal brain imaging,” Sci. Rep. 5(1), 9808 (2015). [CrossRef]  

21. K. Chung, J. Wallace, S.-Y. Kim, S. Kalyanasundaram, A. S. Andalman, T. J. Davidson, J. J. Mirzabekov, K. A. Zalocusky, J. Mattis, A. K. Denisin, S. Pak, H. Bernstein, C. Ramakrishnan, L. Grosenick, V. Gradinaru, and K. Deisseroth, “Structural and molecular interrogation of intact biological systems,” Nature 497(7449), 332–337 (2013). [CrossRef]  

22. M. E. Boutin and D. Hoffman-Kim, “Application and assessment of optical clearing methods for imaging of tissue-engineered neural stem cell spheres,” Tissue Eng. Part C Methods 21(3), 292–302 (2015). [CrossRef]  

23. Y. Aoyagi, R. Kawakami, H. Osanai, T. Hibi, and T. Nemoto, “A rapid optical clearing protocol using 2,2′-thiodiethanol for microscopic observation of fixed mouse brain,” PLoS ONE 10(1), e0116280 (2015). [CrossRef]  

24. K. Sung, Y. Ding, J. Ma, H. Chen, V. Huang, M. Cheng, C. F. Yang, J. T. Kim, D. Eguchi, D. Di Carlo, T. K. Hsiai, A. Nakano, and R. P. Kulkarni, “Simplified three-dimensional tissue clearing and incorporation of colorimetric phenotyping,” Sci. Rep. 6(1), 30736 (2016). [CrossRef]  

25. M. Belle, D. Godefroy, C. Dominici, C. Heitz-Marchaland, P. Zelina, F. Hellal, F. Bradke, and A. Chédotal, “A simple method for 3D analysis of immunolabeled axonal tracts in a transparent nervous system,” Cell Rep. 9(4), 1191–1201 (2014). [CrossRef]  

26. A. Moatti, Y. Cai, C. Li, T. Sattler, L. Edwards, J. Piedrahita, F. S. Ligler, and A. Greenbaum, “Three-dimensional imaging of intact porcine cochlea using tissue clearing and custom-built light-sheet microscopy,” Biomed. Opt. Express 11(11), 6181–6196 (2020). [CrossRef]  

27. E. A. Susaki, K. Tainaka, D. Perrin, F. Kishino, T. Tawara, T. M. Watanabe, C. Yokoyama, H. Onoe, M. Eguchi, S. Yamaguchi, T. Abe, H. Kiyonari, Y. Shimizu, A. Miyawaki, H. Yokota, and H. R. Ueda, “Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis,” Cell 157(3), 726–739 (2014). [CrossRef]  

28. B. Yang, J. B. Treweek, R. P. Kulkarni, B. E. Deverman, C.-K. Chen, E. Lubeck, S. Shah, L. Cai, and V. Gradinaru, “Single-cell phenotyping within transparent intact tissue through whole-body clearing,” Cell 158(4), 945–958 (2014). [CrossRef]  

29. N. Renier, Z. Wu, D. J. Simon, J. Yang, P. Ariel, and M. Tessier-Lavigne, “iDISCO: a simple, rapid method to immunolabel large tissue samples for volume imaging,” Cell 159(4), 896–910 (2014). [CrossRef]  

30. H. R. Ueda, A. Ertürk, K. Chung, V. Gradinaru, A. Chédotal, P. Tomancak, and P. J. Keller, “Tissue clearing and its applications in neuroscience,” Nat. Rev. Neurosci. 21(2), 61–79 (2020). [CrossRef]  

31. J. Huisken and D. Y. R. Stainier, “Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM),” Opt. Lett. 32(17), 2608–2610 (2007). [CrossRef]  

32. H.-U. Dodt, U. Leischner, A. Schierloh, N. Jährling, C. P. Mauch, K. Deininger, J. M. Deussing, M. Eder, W. Zieglgänsberger, and K. Becker, “Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain,” Nat. Methods 4(4), 331–336 (2007). [CrossRef]  

33. A. Diaspro, F. Federici, and M. Robello, “Influence of refractive-index mismatch in high-resolution three-dimensional confocal microscopy,” Appl. Opt. 41(4), 685–690 (2002). [CrossRef]  

34. K. R. Weiss, F. F. Voigt, D. P. Shepherd, and J. Huisken, “Tutorial: practical considerations for tissue clearing and imaging,” Nat. Protoc. 16(6), 2732–2748 (2021). [CrossRef]  

35. R. Tomer, K. Khairy, F. Amat, and P. J. Keller, “Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy,” Nat. Methods 9(7), 755–763 (2012). [CrossRef]  

36. D. Turaga and T. E. Holy, “Aberrations and their correction in light-sheet microscopy: a low-dimensional parametrization,” Biomed. Opt. Express 4(9), 1654–1661 (2013). [CrossRef]  

37. C. Bourgenot, C. D. Saunter, J. M. Taylor, J. M. Girkin, and G. D. Love, “3D adaptive optics in a light sheet microscope,” Opt. Express 20(12), 13252–13261 (2012). [CrossRef]  

38. C. Zhang, W. Sun, Q. Mu, Z. Cao, X. Zhang, S. Wang, and L. Xuan, “Analysis of aberrations and performance evaluation of adaptive optics in two-photon light-sheet microscopy,” Opt. Commun. 435, 46–53 (2019). [CrossRef]  

39. A. Hubert, F. Harms, R. Juvénal, P. Treimany, X. Levecq, V. Loriette, G. Farkouh, F. Rouyer, and A. Fragola, “Adaptive optics light-sheet microscopy based on direct wavefront sensing without any guide star,” Opt. Lett. 44(10), 2514–2517 (2019). [CrossRef]  

40. Y. Liu, K. Lawrence, A. Malik, C. E. Gunderson, R. Ball, J. D. Lauderdale, and P. Kner, “Imaging neural activity in zebrafish larvae with adaptive optics and structured illumination light sheet microscopy,” in Adaptive Optics and Wavefront Control for Biological Systems V (SPIE, 2019), 10886, pp. 10–17.

41. V. Marx, “Microscopy: hello, adaptive optics,” Nat. Methods 14(12), 1133–1136 (2017). [CrossRef]  

42. A. Hubert, A. Hubert, F. Harms, S. Imperato, V. Loriette, C. Veilly, X. Levecq, G. Farkouh, F. Rouyer, and A. Fragola, “Adaptive optics light-sheet microscopy for functional neuroimaging,” in Biophotonics Congress 2021 (2021), Paper NM2C.4 (Optical Society of America, 2021), p. NM2C.4.

43. J. M. Girkin and M. T. Carvalho, “The light-sheet microscopy revolution,” J. Opt. 20(5), 053002 (2018). [CrossRef]  

44. C. Bourgenot, J. M. Taylor, C. D. Saunter, J. M. Girkin, and G. D. Love, “Light sheet adaptive optics microscope for 3D live imaging,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XX (SPIE, 2013), 8589, pp. 131–139.

45. J.-W. Cha, J. Ballesta, and P. T. C. So, “Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy,” J. Biomed. Opt. 15(4), 046022 (2010). [CrossRef]  

46. A. G. Basden, D. Atkinson, N. A. Bharmal, U. Bitenc, M. Brangier, T. Buey, T. Butterley, D. Cano, F. Chemla, P. Clark, M. Cohen, J.-M. Conan, F. J. de Cos, C. Dickson, N. A. Dipper, C. N. Dunlop, P. Feautrier, T. Fusco, J. L. Gach, E. Gendron, D. Geng, S. J. Goodsell, D. Gratadour, A. H. Greenaway, A. Guesalaga, C. D. Guzman, D. Henry, D. Holck, Z. Hubert, J. M. Huet, A. Kellerer, C. Kulcsar, P. Laporte, B. Le Roux, N. Looker, A. J. Longmore, M. Marteaud, O. Martin, S. Meimon, C. Morel, T. J. Morris, R. M. Myers, J. Osborn, D. Perret, C. Petit, H. Raynaud, A. P. Reeves, G. Rousset, F. Sanchez Lasheras, M. Sanchez Rodriguez, J. D. Santos, A. Sevin, G. Sivo, E. Stadler, B. Stobie, G. Talbot, S. Todd, F. Vidal, and E. J. Younger, “Experience with wavefront sensor and deformable mirror interfaces for wide-field adaptive optics systems,” Mon. Not. R. Astron. Soc. 459(2), 1350–1359 (2016). [CrossRef]  

47. A. L. Rukosuev, A. V. Kudryashov, A. N. Lylova, V. V. Samarkin, and Y. V. Sheldakova, “Adaptive optics system for real-time wavefront correction,” Atmos Ocean Opt 28(4), 381–386 (2015). [CrossRef]  

48. Y. Liu and P. Kner, “Sensorless adaptive optics for light sheet microscopy,” in Imaging and Applied Optics Congress (2020), Paper OF2B.2 (Optical Society of America, 2020), p. OF2B.2.

49. M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett. 32(1), 5–7 (2007). [CrossRef]  

50. A. Jesacher and M. J. Booth, “Sensorless adaptive optics for microscopy,” in MEMS Adaptive Optics V (SPIE, 2011), 7931, pp. 115–123.

51. M. J. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14(4), 1339–1352 (2006). [CrossRef]  

52. Y. Liu, K. Lawrence, J. D. Lauderdale, and P. Kner, “Sensorless and sensor based adaptive optics for light sheet microscopy,” in Adaptive Optics and Wavefront Control for Biological Systems VI (SPIE, 2020), 11248, pp. 8–14.

53. M. J. Booth, “Adaptive optics in microscopy,” Phil. Trans. R. Soc. A. 365(1861), 2829–2843 (2007). [CrossRef]  

54. A. Facomprez, E. Beaurepaire, and D. Débarre, “Accuracy of correction in modal sensorless adaptive optics,” Opt. Express 20(3), 2598–2612 (2012). [CrossRef]  

55. C. Li, M. R. Rai, H. T. Ghashghaei, and A. Greenbaum, “Illumination angle correction during image acquisition in light-sheet fluorescence microscopy using deep learning,” Biomed. Opt. Express 13(2), 888–901 (2022). [CrossRef]  

56. C. Li, A. Moatti, X. Zhang, H. T. Ghashghaei, and A. Greenbaum, “Deep learning-based autofocus method enhances image quality in light-sheet fluorescence microscopy,” Biomed. Opt. Express 12(8), 5214–5226 (2021). [CrossRef]  

57. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

58. K. M. Hampson, R. Turcotte, D. T. Miller, K. Kurokawa, J. R. Males, N. Ji, and M. J. Booth, “Adaptive optics for high-resolution imaging,” Nat. Rev. Methods Primer 1(1), 68 (2021). [CrossRef]  

59. P. Janout, P. Páta, P. Skala, and J. Bednář, “PSF estimation of space-variant ultra-wide field of view imaging systems,” Appl. Sci. 7(2), 151 (2017). [CrossRef]  

60. A. Namdari and Z. Li, “A review of entropy measures for uncertainty quantification of stochastic processes,” Adv. Mech. Eng. 11(6), 168781401985735 (2019). [CrossRef]  

61. H. Nanda and R. Cutler, Practical Calibrations for a Real-Time Digital Omnidirectional Camera (Technical Sketches, Computer Vision and Pattern Recognition, 2001).

62. E. Krotkov and J.-P. Martin, “Range from focus,” in 1986 IEEE International Conference on Robotics and Automation Proceedings (1986), 3, pp. 1093–1098.

63. F. S. Helmli and S. Scherer, “Adaptive shape from focus with an error estimation in light microscopy,” in ISPA 2001. Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis. In Conjunction with 23rd International Conference on Information Technology Interfaces (IEEE Cat, 2001), pp. 188–193.

64. C.-H. Shen and H. H. Chen, “Robust focus measure for low-contrast images,” in 2006 Digest of Technical Papers International Conference on Consumer Electronics (2006), pp. 69–70.

65. G. Yang and B. J. Nelson, “Wavelet-based autofocusing and unsupervised segmentation of microscopic images,” in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Cat. No.03CH37453 (2003), 3, pp. 2143–2148 vol.3.

66. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). [CrossRef]  

Equations (4)

$$h_{abr} = \left| \mathcal{F}\!\left\{ p(r)\,\exp\!\big(j\rho(r)\big) \right\} \right|^{2}$$

$$\rho(r) = \sum_{n=1}^{N} a_n Z_n(r)$$

$$h_{cor} = \left| \mathcal{F}\!\left\{ p(r)\,\exp\!\big(j\rho(r) - j\gamma(r)\big) \right\} \right|^{2}$$

$$\gamma(r) = \sum_{n=1}^{N} b_n Z_n(r)$$
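The aberrated-PSF model above can be evaluated numerically. The sketch below builds h_abr for a circular pupil with a single defocus-like Zernike term; the grid size and the amplitude are illustrative assumptions:

```python
import numpy as np

def aberrated_psf(defocus_amp=1.0, n=128):
    """Numerical sketch of h_abr = |F{p(r) exp(j*rho(r))}|^2 for a
    circular pupil p(r) and rho(r) = a * Z_defocus(r), with
    Z_defocus ~ 2r^2 - 1. Sampling and amplitude are illustrative.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    pupil = (r2 <= 1.0).astype(float)          # p(r): unit circular aperture
    rho = defocus_amp * (2 * r2 - 1)           # rho(r) = a * Z(r)
    field = pupil * np.exp(1j * rho)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()                     # normalize total energy
```

Increasing the Zernike amplitude lowers the PSF peak (the Strehl ratio), which is exactly the degradation the DM correction term γ(r) is designed to cancel.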