Optica Publishing Group

Accurate piecewise centroid calculation algorithm for wavefront measurement in adaptive optics

Open Access

Abstract

Adaptive optics using direct wavefront sensing (direct AO) is widely used in two-photon microscopy to correct sample-induced aberrations and restore diffraction-limited performance at high speeds. In general, the direct AO method employs a Shack-Hartmann wavefront sensor (SHWS) to directly measure the aberrations through a spot array. However, the signal-to-noise ratio (SNR) of spots in SHWS varies significantly within deep tissues, presenting challenges for accurately locating spot centroids over a large SNR range, particularly under extremely low SNR conditions. To address this issue, we propose a piecewise centroid calculation algorithm called GCP, which integrates three optimal algorithms for accurate spot centroid calculations under high-, medium-, and low-SNR conditions. Simulations and experiments demonstrate that the GCP can accurately measure aberrations over a large SNR range and exhibits robustness under extremely low-SNR conditions. Importantly, GCP improves the AO working depth by 150 µm compared to the conventional algorithm.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Adaptive optics (AO), which can recover diffraction-limited resolution by measuring and correcting optical aberration, has been widely utilized in two-photon microscopy (TPM) to facilitate observation within deep biological tissues [1,2]. Among the AO schemes, the direct wavefront sensing method using a Shack-Hartmann wavefront sensor (SHWS) has attracted much interest because of its fast aberration measurement ability [3–10]. However, the accuracy of the algorithm for calculating the wavefront significantly restricts its application, particularly under low-signal-to-noise-ratio conditions [4,6]. Developing new algorithms to improve wavefront measurement accuracy can therefore enhance AO performance [11] and, in turn, the image quality of TPM.

An SHWS consists of a microlens array and a camera [2]. The microlens array converts the wavefront from a guide star (GS) into a spot array that can be captured by the camera. The displacement of the centroids of each spot compared to a reference spot is calculated to indicate the wavefronts. The calculated wavefront is then used to determine the optical aberration, and aberration correction is performed using a deformable mirror for spatial light modulation. Therefore, the accuracy of the spot centroid on the SHWS is crucial for AO performance.

Many centroid calculation algorithms have been proposed to locate spot centroids [11,12]. The most common is the center of gravity (COG) algorithm. Because of its simplicity and low computational demand, the COG algorithm is suitable for wavefront measurements. However, the COG algorithm is sensitive to noise and produces considerable deviations under low signal-to-noise ratio (SNR) conditions. Several advanced COG algorithms have been developed to mitigate noise [11]. For example, thresholding COG (T-COG) uses a threshold to alleviate noise of small amplitude [13], weighted COG (WCOG) uses a weighting matrix to increase the signal weight and decrease the noise weight [14,15], windowing process COG (W-COG) uses small windows to suppress noise outside the spot region [16,17], and morphology COG (M-COG) uses morphological information to identify the region where the spot is located and eliminate the noise outside this region [6]. However, under extremely low-SNR conditions, these methods fail to extract accurate centroid positions.

The cross-correlation (Corr) algorithm is another powerful centroid calculation algorithm [18,19]. It locates spot centroids by searching for the peak correlation between the target and reference images. In principle, Corr algorithms have good anti-noise performance; however, they require a large amount of computation. Recently, deep learning algorithms have been successfully used to estimate centroids using artificial neural networks [20–23]. However, these approaches require not only large amounts of reliable data for training but also substantial computing power.

In addition to noise and computational requirements, the inhomogeneity of the spot pattern is another prominent issue for the accurate determination of the wavefront. When imaging biological tissues, the strong optical absorption caused by blood vessels in the light path significantly decreases the intensity of the relative spots on the SHWS [4,6]. As a result, the SNR of different spots on the SHWS varies dramatically. None of the aforementioned methods can work effectively over a large SNR range with minimal errors and small computations.

In this study, we propose a piecewise centroid calculation algorithm called GCP (algorithm based on gravity, correlation, and padding). The GCP method integrates three optimal algorithms to accurately compute the centroids of spots from high to extremely low SNR conditions. The capacity of the GCP was verified through simulations and experiments. The results suggest that the wavefront error of GCP is only 0.2 µm when the mean SNR (SNRm) of the SHWS image is around 4. Moreover, compared with the conventional T-COG algorithm, GCP achieves an extended AO working depth of 150 µm when imaging brain slices.

2. Materials and Methods

2.1 Two-photon fluorescence microscope with a direct adaptive optics module

The experimental setup consisted of an adaptive optics two-photon microscope, including a two-photon imaging module, a wavefront sensor, and a wavefront correction device (Fig. 1(a)). Briefly, the excitation light (920 nm, 80 MHz repetition rate) from a Ti:Sapphire laser (Chameleon Ultra, Coherent) was expanded tenfold by a lens pair (AC254-30-B, AC254-300-B, Thorlabs) to overfill the aperture of a deformable mirror (DM140A-35-P01, Thorlabs). After the DM, the excitation beam was scanned two-dimensionally using a pair of galvanometer mirrors (Galvo X and Y, TS8203, Sunny Technology) and then focused onto the samples using a water-immersion objective (N16XLWD-PF, Nikon). The DM, Galvo X, Galvo Y, and rear pupil of the objective were mutually conjugated by three pairs of relay lenses operating in the 4f configuration (from the DM to the objective: AC254-300-B, AC254-250-B; AC508-080-AB, AC508-080-AB; and two AC508-080-AB, AC508-200-AB, Thorlabs). For two-photon imaging, fluorescence was collected by the objective, reflected by a dichroic mirror DCM1 (T715LP, Chroma), and focused by a lens pair (LA1145-A, LA1805-A, Thorlabs) onto a photomultiplier tube (PMT, H7422P-40, Hamamatsu). A bandpass filter (ET520/40 nm, Chroma) was placed in front of the PMT to purify the fluorescence.

For AO correction, DCM1 was removed from the light path using a switch (ELL6 K; Thorlabs). The fluorescence from the guide star was then descanned by Galvos X and Y, separated from the excitation light by a dichroic mirror (DMLP650R, Thorlabs), expanded by a 4-f configuration (AC254-100-AB, AC254-200-AB, Thorlabs), purified by a fluorescence filter (FF01-720/SP-25, Semrock), and finally detected by the SHWS. The SHWS consisted of a microlens array (#64–483, Edmund Optics) conjugated with the objective rear pupil and a camera (Dhyana 400BSI, Tucsen) placed at the focal plane of the microlens array.

Custom MATLAB code based on the ScanImage program was used to capture two-photon excitation images, measure and compensate for optical aberrations, and control the devices.

2.2 Calibration between DM and SHWS

To relate the spot shifts in the SHWS to the actuator movement in the DM, the DM and SHWS were calibrated. The system aberration caused by the misalignment of lenses was measured using the methods in Ref. [6]. The measured results showed that the peak-to-valley (PV) value of the system aberration was lower than 0.1 µm, indicating that it would not degrade the image resolution. Therefore, the system aberration was ignored in the following steps.

The key to the DM-SHWS calibration is to obtain an influence matrix, M, which relates the spot shifts in the SHWS and the actuator movement in the DM. First, a flat mirror was placed at the back pupil plane of the objective to connect the DM and the SHWS in the laser path. Here, the DM, pupil plane, and lens array plane of the SHWS were mutually conjugated. Second, the DM flat command was applied to create a plane wavefront, generating a reference spot array in the SHWS. Third, the influence matrix M was obtained by sequentially moving each actuator of the DM and recording the corresponding spot shifts on the SHWS. The rows in M represent the spot shifts in the SHWS in response to each actuator movement.
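The poke-and-measure loop described above can be sketched with a simulated linear DM/SHWS (a minimal sketch in Python; `poke` and `measure` are hypothetical stand-ins for the real hardware interface, and the sizes follow the text: 140 actuators, 196 subimages, hence 392 spot-shift components):

```python
import numpy as np

def build_influence_matrix(poke_actuator, measure_spot_shifts,
                           n_actuators=140, amplitude=1.0):
    """Build M row by row: poke each DM actuator, record the SHWS spot shifts.

    Both callbacks are hypothetical stand-ins for the hardware interface.
    """
    rows = []
    for i in range(n_actuators):
        poke_actuator(i, amplitude)
        rows.append(np.asarray(measure_spot_shifts()) / amplitude)
        poke_actuator(i, 0.0)                    # restore the flat command
    return np.vstack(rows)                       # shape: (n_actuators, 2N)

# Demo with a simulated linear spot-shift response; the recovered matrix
# should match the simulated ground truth.
rng = np.random.default_rng(0)
M_true = rng.normal(size=(140, 392))             # 196 subimages -> 392 shifts
act = np.zeros(140)

def poke(i, amp):
    act[:] = 0.0
    act[i] = amp

def measure():
    return act @ M_true                          # linear response model

M = build_influence_matrix(poke, measure)
```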

To ensure that the calibration laser had the same emission angle and size as the fluorescence, the laser was aligned along the system’s optical axis and cut off by an aperture to fit the fluorescent beam size.

2.3 Sample aberration measurement

First, a fluorescent SHWS reference was obtained from an aberration-free sample, i.e., a fluorescent slide (FSK2, Thorlabs). The spot locations of the fluorescent SHWS reference are denoted as S0 = (x1…xN, y1…yN), where xi and yi are the spot centroids of subimage i, and N is the number of subimages in the SHWS. Then, the fluorescent slide was replaced with the target sample, and the laser was scanned within a small FOV (50 µm × 50 µm) to record the sample-induced SHWS images. It should be noted that a dynamic background image, captured by turning off the SHWS switch before each measurement, was subtracted from every fluorescent SHWS image; this subtraction effectively suppresses ambient noise. Subsequently, the spot locations of the sample-induced, background-subtracted SHWS images (S1) were calculated using a centroid-calculation algorithm. Finally, the actuator movements of the DM, Ф, could be computed as Eq. (1).

$${\boldsymbol \varPhi } = {\textbf M}{\boldsymbol D} = {\textbf M}({{\boldsymbol S}_1} - {{\boldsymbol S}_0}),$$
where Ф is the reconstructed wavefront, whose elements {a1…a140} are the movements ai of the ith DM actuator, with 140 being the number of DM actuators; D is the vector of spot shifts of the SHWS image, whose elements are {Δx1…ΔxN, Δy1…ΔyN}, where Δxi and Δyi are the horizontal and vertical spot shifts of the ith subimage. Here, we use the DM actuator movements to denote the wavefront because the actuators determine the DM surface and thus reshape the light wavefront.
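Eq. (1) can be exercised with a small numerical sketch (Python/NumPy). Here the reconstruction matrix M is synthesized as the pseudoinverse of a random influence map, purely for illustration; in the actual system M comes from the DM-SHWS calibration:

```python
import numpy as np

# Illustrative sizes from the text: 140 actuators, N = 196 subimages -> 392 shifts.
rng = np.random.default_rng(0)
A = rng.normal(size=(392, 140))      # assumed influence map: actuators -> spot shifts
M = np.linalg.pinv(A)                # reconstruction matrix, shape (140, 392)

S0 = rng.normal(size=392)            # reference spot centroids (stacked x and y)
phi_true = rng.normal(size=140)      # ground-truth actuator movements
S1 = S0 + A @ phi_true               # centroids displaced by the aberration

phi = M @ (S1 - S0)                  # Eq. (1): Phi = M (S1 - S0)
```

Because A has full column rank here, the least-squares reconstruction recovers the actuator movements exactly.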

2.4 SHWS simulation model

A numerical simulation model of the SHWS was established to evaluate the accuracy of the centroid algorithms. We created ten sets of simulated SHWS images, each with varying SNRm levels ranging from 3.3 to 8.5. In each set, there were eight SHWS images, each representing a different type of sample aberration. Each SHWS image consisted of 196 subimages with SNR randomly and normally distributed between (SNRm-1.5) and (SNRm + 1.5).

Fluorescent spots within the subimages were generated using Monte Carlo simulations. The Monte Carlo method, which introduces Poisson noise at low SNRs, was employed to imitate the stochastic behavior of the photons. In the Monte Carlo simulations, the probability of a photon occurring at a given position (x, y) is described by Eq. (2)

$$I(x,y) = \left( \frac{\sin ({Q({x - {x_0}})})\,\sin ({Q({y - {y_0}})})}{{Q^2}({x - {x_0}})({y - {y_0}})} \right)^2,$$
where Q is a scaling factor that determines the spot size of a subimage, x0 and y0 are the locations of the spot centers. The photon number of the spots was set from 500 to 50000, resulting in a change in the subimage SNR from 2.5 to 10.

To emulate the noise characteristics of the subimages in a real experimental situation, two types of noise were considered: Gaussian noise and salt noise. Gaussian noise was superimposed onto the subimages using the MATLAB function 'imnoise' with a mean value of 1.9E-2 and a variance of 2.5E-4. Salt noise, on the other hand, was introduced with a density decreasing from 1% to 0% as the subimage SNR increased from 2.5 to 10.
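The spot generation and noise model above can be sketched as follows (Python; the parameter values are illustrative rather than the paper's exact settings, and `np.sinc` is used because it handles the 0/0 limit of Eq. (2); drawing discrete photon arrivals makes the shot noise Poissonian by construction):

```python
import numpy as np

def simulate_spot(size=32, Q=0.5, x0=16.0, y0=16.0, n_photons=5000, rng=None):
    """Draw photon arrival positions from the sinc^2 profile of Eq. (2)."""
    if rng is None:
        rng = np.random.default_rng()
    X, Y = np.meshgrid(np.arange(size), np.arange(size), indexing='ij')
    # np.sinc(t) = sin(pi t)/(pi t), so sinc(Q u / pi) = sin(Q u)/(Q u)
    I = (np.sinc(Q * (X - x0) / np.pi) * np.sinc(Q * (Y - y0) / np.pi)) ** 2
    p = (I / I.sum()).ravel()                          # photon arrival probabilities
    idx = rng.choice(p.size, size=n_photons, p=p)      # sample photon positions
    return np.bincount(idx, minlength=p.size).reshape(size, size).astype(float)

# Camera noise on top of the normalized photon image, mirroring the text:
# Gaussian noise (as with MATLAB's imnoise) plus sparse salt noise.
rng = np.random.default_rng(1)
img = simulate_spot(rng=rng)
img /= img.max()
img += rng.normal(1.9e-2, np.sqrt(2.5e-4), img.shape)  # mean 1.9E-2, var 2.5E-4
img[rng.random(img.shape) < 0.01] = 1.0                # 1% salt density
```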

2.5 SNR calculation

The SNR of the subimage is defined as Eq. (3):

$$\mathrm{SNR} = \frac{{I_m} - {\mu _n}}{{\sigma _n}},$$
where Im is the mean of the signal (the first 40 pixels with the largest gray value); µn and σn are the mean and standard deviation of the noise (the pixels other than the signal pixels).

The SNRm of an SHWS image is defined as the mean SNR value of all subimages.
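Eq. (3) translates directly into code (a minimal sketch; the 40-pixel signal window follows the definition above):

```python
import numpy as np

def subimage_snr(img, n_signal=40):
    """Eq. (3): the signal level I_m is the mean of the n_signal brightest
    pixels; the noise statistics come from all remaining pixels."""
    flat = np.sort(img.ravel())[::-1]            # pixels sorted bright-to-dim
    i_m = flat[:n_signal].mean()                 # I_m
    noise = flat[n_signal:]                      # everything else is "noise"
    return (i_m - noise.mean()) / noise.std()    # (I_m - mu_n) / sigma_n
```

For example, 40 pixels of value 30 embedded in Gaussian background noise of mean 10 and standard deviation 2 yield an SNR near (30 − 10)/2 = 10.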

2.6 GCP centroid localization algorithm

To improve the SHWS accuracy, the GCP method was proposed to precisely locate the centroids of all subimages under different SNR conditions. The GCP method is a piecewise algorithm based on the SNR of the sub-images in the SHWS. First, for an SNR > 7, the GCP method calculates the centroid using the T-COG principle. Briefly, the process includes the following steps. (1) A threshold value is applied to remove noise lower than the threshold. In this work, the threshold is set as (Imax-Imin)/2, where Imax and Imin denote the maximum and minimum intensity value of the subimage; (2) The centroid can be calculated by

$$({x_c},{y_c}) = \left( \frac{\sum\limits_{m = 1}^M \sum\limits_{n = 1}^N {x_{nm}}{I_{nm}}}{\sum\limits_{m = 1}^M \sum\limits_{n = 1}^N {I_{nm}}},\; \frac{\sum\limits_{m = 1}^M \sum\limits_{n = 1}^N {y_{nm}}{I_{nm}}}{\sum\limits_{m = 1}^M \sum\limits_{n = 1}^N {I_{nm}}} \right),$$
where (xc, yc) are the calculated spot centroid of the sub-image, (M, N) are the width and height of the subimage, (xnm, ynm) are the coordinates of the pixels at (n, m), and Inm is the corresponding intensity.
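The T-COG step above can be sketched as follows (Python; the row-as-y, column-as-x convention is an assumption of this sketch):

```python
import numpy as np

def t_cog(img):
    """T-COG: zero out pixels at or below the threshold (Imax - Imin)/2,
    then take the intensity-weighted centroid of Eq. (4)."""
    thr = (img.max() - img.min()) / 2.0
    w = np.where(img > thr, img, 0.0)       # suppress sub-threshold noise
    Y, X = np.indices(img.shape)            # rows -> y, columns -> x (assumed)
    total = w.sum()
    return (X * w).sum() / total, (Y * w).sum() / total
```

For instance, two equal-intensity pixels at columns 4 and 6 of row 4 give the centroid (5.0, 4.0).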

Secondly, for 3 < SNR ≤ 7, the spot centroids of subimages are calculated by the principle of T-Corr. The process includes three steps: (1) A threshold, which is set as (Imax-Imin)/10, is applied to the subimage to remove noise. (2) The cross-correlation between the reference image and the SHWS subimage is computed. The intensity of the reference image can be obtained using Eq. (2) by assigning (M/2, N/2) to (x0, y0). (3) A 3 × 3 image window centered at the pixel with the maximal cross-correlation is set, and then the spot centroid of the subimage is located using a 2D quadratic polynomial fitting method through the image window.
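The T-Corr steps can be sketched as follows (Python; this sketch uses an FFT-based circular cross-correlation and a separable three-point parabola fit in place of the paper's full 2D quadratic fit over the 3 × 3 window):

```python
import numpy as np

def t_corr(img, ref, ref_center):
    """T-Corr sketch: threshold, circular cross-correlation via FFT, peak
    search, then parabolic sub-pixel refinement. ref_center is the (x, y)
    position of the reference spot."""
    thr = (img.max() - img.min()) / 10.0
    w = np.where(img > thr, img, 0.0)
    c = np.fft.ifft2(np.fft.fft2(w) * np.conj(np.fft.fft2(ref))).real
    ny, nx = c.shape
    sy, sx = np.unravel_index(np.argmax(c), c.shape)

    def parab(m1, c0, p1):                     # vertex of a 1D parabola
        d = m1 - 2.0 * c0 + p1
        return 0.0 if d == 0 else 0.5 * (m1 - p1) / d

    dy = parab(c[(sy - 1) % ny, sx], c[sy, sx], c[(sy + 1) % ny, sx])
    dx = parab(c[sy, (sx - 1) % nx], c[sy, sx], c[sy, (sx + 1) % nx])
    # convert the circular peak index to a signed shift from the reference
    shift_y = (sy + dy + ny / 2) % ny - ny / 2
    shift_x = (sx + dx + nx / 2) % nx - nx / 2
    return ref_center[0] + shift_x, ref_center[1] + shift_y
```

With a Gaussian reference spot at the subimage center, a shifted copy of the spot is localized to a small fraction of a pixel.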

Finally, for subimages with SNR ≤ 3, as well as for the small number of mislocated centroids, a novel padding process was proposed. The process is as follows:

  • (1) Search for mislocated centroids among the centroids calculated using T-COG and T-Corr. The search process is based on the assumption that mislocated centroids will not guide the compensation on the DM correctly and will therefore lead to large residual errors of spot shifts after compensation. The residual error (E) was estimated by performing a virtual compensation process using Eq. (5):
    $${\boldsymbol E} = {{\boldsymbol D}_1} - {\mathbf{M}^{-1}}\mathbf{M}{{\boldsymbol D}_1},$$
    where M-1 is the inverse matrix of the influence matrix M, which relates the DM actuator movements to the spot shifts in the SHWS image. Note that for the subimages with SNR ≤ 3, the spot shifts in D1 are temporarily set to 0 to complete the calculation. The mislocated centroids can then be determined using the following criteria:
    $$\left. \begin{array}{l} {\boldsymbol E} = (E{x_i}, E{y_i}),\; i = 1,2,3,\ldots,197\\ Ex_i^2 + Ey_i^2 > 16 \end{array} \right\}$$
  • (2) Set the spot shifts of the mislocated subimages and the subimages with SNR ≤ 3 as NaN.
  • (3) Replace the NaN values with the average values of the neighboring spot shifts, which include the upper, lower, left, and right spot shifts of the targeted subimage. This procedure was repeated until all the NaN values were padded.
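Step (3) above can be sketched as follows (Python; this sketch assumes the 196 spot shifts are laid out on a 14 × 14 grid, with NaN marking low-SNR or mislocated spots, and requires at least one valid spot shift for the iteration to terminate):

```python
import numpy as np

def pad_spot_shifts(D):
    """Iteratively replace NaN spot shifts with the mean of the available
    up/down/left/right neighbors until no NaN values remain.

    D: 2D array of one spot-shift component (e.g., 14x14 for 196 subimages).
    """
    D = D.copy()
    while np.isnan(D).any():
        Dn = D.copy()
        for y, x in zip(*np.where(np.isnan(D))):
            nbrs = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                yy, xx = y + dy, x + dx
                if (0 <= yy < D.shape[0] and 0 <= xx < D.shape[1]
                        and not np.isnan(D[yy, xx])):
                    nbrs.append(D[yy, xx])
            if nbrs:                      # pad once a valid neighbor exists
                Dn[y, x] = np.mean(nbrs)
        D = Dn
    return D
```

The x and y spot-shift maps would each be padded this way before the wavefront reconstruction of Eq. (1).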

2.7 Centroid localization accuracy and wavefront accuracy

The centroid localization error (CLE) is defined as the distance between the ground-truth centroids and the calculated centroids, as shown in Eq. (7):

$$\mathrm{CLE} = \sqrt {{{({{x_c} - {x_0}})}^2} + {{({{y_c} - {y_0}})}^2}},$$
where (xc, yc) are the calculated centroid and (x0, y0) are the ground truth centroid.

To quantify the wavefront measurement accuracy, the wavefront error is defined as the root mean square of the wavefront difference between the calculated wavefront and ground truth, as shown in Eq. (8):

$${\Phi _{rms}} = \sqrt {\frac{1}{140}\sum\limits_{i = 1}^{140} {{({a_{ci}} - {a_{0i}} - u)}^2}}, \qquad u = \frac{1}{140}\sum\limits_{i = 1}^{140} ({a_{ci}} - {a_{0i}}),$$
where aci and a0i represent the calculated and ground truth movements of ith DM actuator, respectively, and 140 is the total number of DM actuators.
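Both metrics translate directly into code (a minimal sketch following Eqs. (7) and (8)):

```python
import numpy as np

def centroid_localization_error(c_calc, c_true):
    """Eq. (7): Euclidean distance between calculated and true centroids."""
    return np.hypot(c_calc[0] - c_true[0], c_calc[1] - c_true[1])

def wavefront_rms_error(a_calc, a_true):
    """Eq. (8): RMS of the actuator-movement differences after removing
    their mean u (a global piston offset does not affect image quality)."""
    d = np.asarray(a_calc, dtype=float) - np.asarray(a_true, dtype=float)
    return np.sqrt(np.mean((d - d.mean()) ** 2))
```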

2.8 Preparation of mouse brain slices

To prepare the brain slices, Thy1-GFP-M (Stock number 007788, The Jackson Laboratory) mice (∼ six weeks old) were first deeply anesthetized with a mixture of 2% α-chloralose and 10% urethane (8 mL/kg) by intraperitoneal injection. Then, transcranial perfusion with PBS and 4% (wt/vol) paraformaldehyde (PFA) was performed. After the perfusion, the mice were sacrificed. Next, the brains were excised and fixed with 4% PFA at 4 °C overnight. Finally, 2-mm-thick coronal slices were sectioned freehand using a brain matrix. The slices were placed in a custom-built container that kept the cleared slices immersed in 80% glycerin during imaging. The container was sealed with 1-mm-thick PDMS, which introduced spherical aberration for the AO performance evaluation.

All experiments were performed in compliance with the protocols approved by the Guangdong Provincial Animal Care and Use Committee and following the guidelines of the Animal Experimentation Ethics Committee of Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences.

3. Results

3.1 Principle of GCP algorithm

The GCP method enables accurate localization of all subimages under the different SNR conditions in the SHWS. In TPM with direct AO, the SHWS is typically used to measure sample-induced aberration from the fluorescence of a GS (Fig. 1(a)). However, owing to scattering and absorption structures in the tissue, such as cell nuclei and blood, the fluorescence at different angles can be attenuated differently, leading to an obvious variation in spot intensity in the SHWS (Fig. 1(b)). We found that the SNR of spots in all SHWS subimages varied from 2.9 to 11 when imaging the neural circuit of a living mouse under a large vessel and at a depth of ∼250 µm below the pia (Fig. 1(c)). Here, SNR is defined as the ratio of the signal-to-noise difference to the noise standard deviation (see Materials and Methods). To calculate spot centroids over such a large SNR range, we present an SNR-based piecewise GCP algorithm. As shown in Fig. 1(d), the GCP method calculates the spot centroids using the principles of T-COG and T-Corr for SNR > 7 and 7 ≥ SNR > 3, respectively, to boost the computing speed and ensure the accuracy of the spot location. Next, for spots under extremely low SNR conditions (SNR ≤ 3), the GCP uses a novel padding strategy to precisely estimate the centroids of these spots using the spot shifts of nearby subimages. Details are presented in the Materials and Methods section.


Fig. 1. Principle of GCP and adaptive optics two-photon microscope setup. (a) Schematic of the system setup. EOM, electro-optical modulator; DM, deformable mirror; SL, scan lens; DCM, Dichroic Mirror; Obj, objective lens; F, filter; PMT, photomultiplier tube; SHWS, Shack–Hartmann wavefront sensor. (b) Scattering and absorption structures in the sample decrease the fluorescence from GS at different angles, leading to obvious intensity variation at the objective pupil plane. (c) An SHWS image was captured at 250 µm below the pia, showing the unevenly distributed spot intensities. (d) GCP integrates three optimal spot centroid calculation algorithms according to SNR. T-COG: thresholding center of gravity; T-Corr: thresholding correlation.


3.2 Simulation analysis of the feasibility of GCP over a large SNR range

To validate the GCP algorithm, we established an SHWS simulation model to assess the accuracy of spot centroid localization using the GCP method and five other classical centroid-calculation algorithms (T-COG, WCOG, WT-COG, M-COG, and Corr). The model creates an SHWS image with 196 spots with an SNR range from 2.5 to 9. Subsequently, those spot centroids were calculated using the six algorithms. The resulting centroid localization errors between the ground truth and the calculated centroids were computed and plotted as a function of the spot SNR, as shown in Fig. 2(a). It can be seen that the localization errors of the six algorithms are all less than three pixels when SNR > 7, whereas the localization accuracy of GCP and Corr is higher than that of the other algorithms in the SNR range between 3 and 7. When the SNR was less than 3, the centroid localization error of GCP was still less than 3 pixels, whereas the other algorithms showed obvious localization errors (larger than 5 pixels). Three subimages with different SNRs and the corresponding centroids calculated using T-COG, Corr, and GCP are presented in Fig. 2(b). These simulation results suggest that the GCP maintains a high accuracy of centroid localization over a large SNR range and exhibits great robustness under low SNR conditions.


Fig. 2. Simulation results of localization accuracy of GCP and five other algorithms at different SNR conditions. (a) Centroid localization error versus spot SNR (b) subimages at different SNR conditions located by T-COG, Corr, and GCP. COG: center of gravity; T-COG: thresholding COG; WCOG: weighted COG; WT-COG: windowing process T-COG; M-COG: morphology COG; Corr: cross-correlation.


3.3 Experiment validation of the localization accuracy by GCP

To further validate the feasibility of GCP, we captured the SHWS images from a mouse brain slice at a depth of 250 µm and compared the centroid localization accuracy of both GCP and T-COG. The T-COG algorithm was selected as a reference for comparison because it is the mainstream method for calculating spot centroids in direct AO [13]. The experiment was conducted using a two-photon microscope with a direct AO module, and the GSs were GFP-labeled neuronal cell bodies (see Materials and Methods). To obtain SHWS images with different SNRs, the SHWS exposure time was changed from 2500 ms to 31 ms, while the laser power, imaging area, and other AO parameters were kept constant. As shown in Fig. 3(a), the average SNR (SNRm) over all SHWS subimages changed from 9.3 to 3.8 when the SHWS exposure time was decreased from 2500 ms to 31 ms. In the following analysis, the result calculated for the SHWS image with an exposure time of 2500 ms (SNRm = 9.3) was regarded as the ground truth, and the errors in other SNRm conditions were evaluated based on the difference between the calculation results of the corresponding exposure times and the ground truth.


Fig. 3. Direct wavefront sensing experiments validate the AO performance of GCP at different SNR conditions. (a) SNR of SHWS spots versus exposure time of SHWS. (b) Centroid localization error of GCP and T-COG versus spot SNR. (c) Wavefront error of GCP and T-COG at different exposure times. The wavefront error is the root mean square of the wavefront difference between the calculated wavefront and the ground truth. (d) SNR map of the spot array at the exposure time of 63 ms. (e) Spot shifts map and residual wavefront (the difference between the calculated wavefront and ground truth) computed by T-COG. (f) Spot shifts map and residual wavefront computed by GCP.


To compare the localization accuracy of GCP and T-COG, we calculated the spot locations of all SHWS subimages using the two algorithms and plotted the centroid localization error as a function of the SNR (Fig. 3(b)). The experimental results demonstrated that GCP enabled the accurate calculation of spot centroids over a large SNR range, whereas the T-COG method could not effectively locate spot centroids with SNR < 5. In addition, compared with the simulation results (Fig. 2(a)), a larger localization error of the T-COG can be found in the experiments when the SNR is less than 4.5. This inconsistency may be attributed to more complex noise components, including scattered fluorescence and excitation laser reflections. However, the proposed GCP method was insensitive to these noise components.

Next, we reconstructed the wavefront from the calculated spot positions and compared the wavefront errors of both GCP and T-COG. The standard deviation of the residual wavefronts was used to evaluate the wavefront errors. As depicted in Fig. 3(c), when SHWS SNRm was reduced from 7 to 3.8, the wavefront errors of GCP were all less than 0.3 µm, while the wavefront error of T-COG reached up to 2.7 µm.

Finally, we analyzed the SNR distribution, spot shift maps, and residual wavefronts of the T-COG and the GCP algorithms with an SHWS exposure time of 63 ms (Fig. 3(d)-(f)). It was found that the SNR of the SHWS subimages was low, and the SNR distribution was not uniform (Fig. 3(d)) because of the uneven distribution of scattering and absorption structures within the brain slice. A comparison of the spot shifts and residual wavefronts indicates that the GCP method can effectively reconstruct the wavefront even under low SNR conditions and significantly outperforms T-COG in terms of spot localization accuracy (Fig. 3(c)-(f)).

3.4 GCP improves the image quality under different SNR conditions

To evaluate the TPM image intensity and resolution improvements achieved using the GCP algorithm, we imaged neurons near the wavefront measuring area and applied wavefront correction using T-COG and GCP under different SNR conditions (Fig. 4). All imaging and display parameters were kept constant to ensure a fair comparison. When the SHWS SNRm was decreased from 6.9 to 3.8 by changing the SHWS exposure time, the images corrected with the GCP algorithm maintained high contrast and resolution, whereas in the images corrected by T-COG, the resolution degradation was obvious, and some dendrite structures became invisible at an SNRm of 3.8 (Fig. 4(a)). To quantify the AO performance, six neuron bodies were selected to calculate their intensity improvement, and a region containing fine structures was translated into its spatial frequency domain to estimate the image resolution. Both GCP and T-COG provided 1-µm resolution and a 1.75-times intensity improvement when SNRm was 6.9 (Fig. 4(b) and (c)). However, when the SNRm was reduced to 3.8, the GCP algorithm maintained its AO improvement performance, whereas T-COG failed to improve the intensity and resolution, even degrading the image quality (Fig. 4(b) and (c)).


Fig. 4. TPM imaging of Thy1-GFP-M neuron with direct AO through T-COG or GCP at different SNR conditions. (a) MIP images of the brain slice at depth from 200-250 µm. (b) Intensity improvements achieved by T-COG and GCP at different SNR conditions. (c) Spatial frequency representation of the boxed area in a, indicating the image resolution of T-COG and GCP. Scale bar = 40 µm.


3.5 GCP improves the working depth of direct AO

Finally, we validated the practicability of the GCP for large-depth TPM imaging of biological samples. High-resolution TPM imaging of deep tissues with direct AO is still challenging [4,6]. As the imaging depth increases, the detected fluorescent photons in the SHWS will greatly decrease because of the significant enhancement in light scattering and absorption by the tissue. We imaged a brain slice at depths from 250 to 500 µm and compared the working depths of both GCP and T-COG (Fig. 5). The excitation power was adjusted from 10 to 30 mW to keep the measured fluorescence signals from different depths almost the same. The depth-dependent aberrations were measured at depths of 250, 300, 350, 400, and 450 µm, then the corresponding aberration correction patterns were loaded onto the DM for imaging the next 50-µm slice.


Fig. 5. TPM imaging of Thy1-GFP-M brain tissue at depths from 250 to 500 µm, AO compensated by T-COG and GCP. (a) 3D volume rendering of neuron network with AO correction of the GCP. (b) AO-off, T-COG-, and GCP-compensated neuronal maximum intensity images (MIP) images at depths of 250, 350, and 450 µm. The SHWS SNR maps are in the lower right corner of the first column. The calculated wavefronts of T-COG and GCP are in the lower right corner of the second and third columns, respectively. (c) The SNR of SHWS decreased with depth. (d) Intensity improvement achieved by two methods at different depths. (e) Intensity plots corresponding to the yellow line in b, showing that fine structures can be resolved after the GCP AO at 500 µm depth. Scale bar = 20 µm.


As expected, the SNRm of the SHWS image decreased from 5.5 to 3.5 as the imaging depth increased from 250 to 500 µm (Fig. 5(a)-(c)). To assess the intensity improvement, we calculated the improvement ratio of the neuron intensity after AO compensation using the two algorithms. Figures 5(b)-(d) show that the GCP algorithm enables an improvement ratio from 1.5 to 2 as the imaging depth increased from 250 to 500 µm. In contrast, the T-COG algorithm failed to improve the fluorescence signal when the depth exceeded 350 µm. For the resolution assessment, an intensity plot (Fig. 5(e)) shows that the dendrites at a depth of 450 µm can be resolved by GCP, while the T-COG algorithm introduced additional aberrations that degrade the image resolution. In conclusion, the GCP algorithm exhibited an effective AO working depth 150 µm deeper than that of the T-COG algorithm.

4. Discussion

In this study, we present a GCP algorithm to accurately locate the spot centroids of SHWS images under high to extremely low SNR conditions. The GCP method is a piecewise algorithm that enables accurate and quick computation of all spot centroids within a single framework. In particular, under low SNR conditions (SNR ≤ 3), the GCP method uses a padding strategy to accurately estimate the centroids of these spots from the information of neighboring spot shifts. The feasibility of the GCP method was verified through simulations and experiments. The results suggest that the GCP method can effectively calculate aberrated wavefronts with minimal error and few computations under different SNR conditions. Compared with the conventional T-COG algorithm, GCP provides a ninefold smaller wavefront error under low SNR conditions and a 150 µm deeper AO working depth when imaging brain slices.

However, there are still some limitations of this study. Firstly, there may be some deviations in the SNR evaluation method, particularly in low-SNR situations. Here, we used the brightest pixels of the subimages as the signal, which is a simple approach for quantifying the signal level. However, under extremely low SNR conditions, the maximum intensity of the subimage is mainly noise rather than signal. Consequently, when the signal approaches zero, the SNR value is still evaluated as 2.5 in the simulation and 2.9 in the experiment. In future studies, we believe that the development of deep-learning algorithms may help avoid this problem and evaluate the SNR levels of spots with high accuracy. Secondly, the AO working depth in this study was limited by the green fluorescent guide star, which peaks at 515 nm and is prone to scattering and absorption by tissue. We believe that the working depth will be further improved by combining the proposed GCP method with a near-infrared guide star [24]. Finally, it should be noted that the effectiveness of GCP is limited by the padding spot ratio, defined as the number of padded spots divided by the total count of SHWS spots, under low SNR conditions. Here, a ratio of 40% is recommended as the applicability criterion for GCP when the SNRm of the SHWS images is close to 3.

In summary, GCP is a piecewise algorithm for accurately locating spot centroids in SHWS images under different SNR conditions. We believe that the proposed method has the potential to facilitate high-resolution TPM imaging at large depths.

Funding

National Natural Science Foundation of China (62105353, 81927803, 82071972, 82102106, 92159104); Natural Science Foundation of Guangdong Province (2020B121201010, 2021A1515012022); Scientific Instrument Innovation Team of the Chinese Academy of Sciences (GJJSTD20180002); Shenzhen Basic Research Program (RCJC20200714114433058, RCYX20210609104445093, ZDSY20130401165820357); Shenzhen Institutes of Advanced Technology Innovation Program for Excellent Young Researchers (E1G029).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. M. Hampson, R. Turcotte, D. T. Miller, et al., “Adaptive optics for high-resolution imaging,” Nat. Rev. Methods Primers 1(1), 68 (2021). [CrossRef]  

2. J. W. Cha, J. Ballesta, and P. T. So, “Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy,” J. Biomed. Opt. 15(4), 046022 (2010). [CrossRef]  

3. K. Wang, D. E. Milkie, A. Saxena, et al., “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014). [CrossRef]  

4. K. Wang, W. Sun, C. T. Richie, et al., “Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue,” Nat. Commun. 6(1), 7276 (2015). [CrossRef]  

5. W. Zheng, Y. Wu, P. Winter, et al., “Adaptive optics improves multiphoton super-resolution imaging,” Nat. Methods 14(9), 869–872 (2017). [CrossRef]  

6. R. Liu, Z. Li, J. S. Marvin, et al., “Direct wavefront sensing enables functional imaging of infragranular axons and spines,” Nat. Methods 16(7), 615–618 (2019). [CrossRef]  

7. Z. Qin, C. Chen, S. He, et al., “Adaptive optics two-photon endomicroscopy enables deep-brain imaging at synaptic resolution over large volumes,” Sci. Adv. 6(40), eabc6521 (2020). [CrossRef]  

8. C. Chen, Z. Qin, S. He, et al., “High-resolution two-photon transcranial imaging of brain using direct wavefront sensing,” Photonics Res. 9(6), 1144–1156 (2021). [CrossRef]  

9. P. Zhang, D. J. Wahl, J. Mocci, et al., “Adaptive optics scanning laser ophthalmoscopy and optical coherence tomography (AO-SLO-OCT) system for in vivo mouse retina imaging,” Biomed. Opt. Express 14(1), 299–314 (2023). [CrossRef]  

10. Z. P. Yu, H. H. Li, and P. X. Lai, “Wavefront Shaping and Its Application to Enhance Photoacoustic Imaging,” Appl. Sci. 7(12), 1320 (2017). [CrossRef]  

11. S. Thomas, “Optimized centroid computing in a Shack-Hartmann sensor,” Proc. SPIE 5490, 1238 (2004). [CrossRef]  

12. P. Wei, X. Li, X. Luo, et al., “Analysis of the wavefront reconstruction error of the spot location algorithms for the Shack–Hartmann wavefront sensor,” Opt. Eng. 59(04), 1 (2020). [CrossRef]  

13. X. Li, X. Li, and C. Wang, “Optimum threshold selection method of centroid computation for Gaussian spot,” Proc. SPIE 9675, 967517 (2015). [CrossRef]  

14. S.-H. Baik, S.-K. Park, C.-J. Kim, et al., “A center detection algorithm for Shack–Hartmann wavefront sensor,” Opt. Laser Technol. 39(2), 262–267 (2007). [CrossRef]  

15. K. L. Baker and M. M. Moallem, “Iteratively weighted centroiding for Shack-Hartmann wave-front sensors,” Opt. Express 15(8), 5147–5159 (2007). [CrossRef]  

16. P. M. Prieto, F. Vargas-Martín, S. Goelz, et al., “Analysis of the performance of the Hartmann–Shack sensor in the human eye,” J. Opt. Soc. Am. A 17(8), 1388–1398 (2000). [CrossRef]  

17. X. Yin, X. Li, L. Zhao, et al., “Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor,” Appl. Opt. 48(32), 6088–6098 (2009). [CrossRef]  

18. L. Wei, G. Shi, J. Lu, et al., “Centroid offset estimation in the Fourier domain for a highly sensitive Shack–Hartmann wavefront sensor,” J. Opt. 15(5), 055702 (2013). [CrossRef]  

19. N. Anugu, P. J. V. Garcia, and C. M. Correia, “Peak-locking centroid bias in Shack–Hartmann wavefront sensing,” Mon. Not. R. Astron. Soc. 476(1), 300–306 (2018). [CrossRef]  

20. Z. Li and X. Li, “Centroid computation for Shack-Hartmann wavefront sensor in extreme situations based on artificial neural networks,” Opt. Express 26(24), 31675–31692 (2018). [CrossRef]  

21. L. Hu, S. Hu, W. Gong, et al., “Deep learning assisted Shack–Hartmann wavefront sensor for direct wavefront detection,” Opt. Lett. 45(13), 3741–3744 (2020). [CrossRef]  

22. Y. Guo, L. Zhong, L. Min, et al., “Adaptive optics based on machine learning: a review,” Opto-Electron. Adv. 5(7), 200082 (2022). [CrossRef]  

23. L. Hu, S. Hu, W. Gong, et al., “Learning-based Shack-Hartmann wavefront sensor for high-order aberration detection,” Opt. Express 27(23), 33504–33517 (2019). [CrossRef]  

24. R. Yi, P. Das, F. Lin, et al., “Fluorescence enhancement of small squaraine dye and its two-photon excited fluorescence in long-term near-infrared I&II bioimaging,” Opt. Express 27(9), 12360–12372 (2019). [CrossRef]  


Figures (5)

Fig. 1. Principle of GCP and adaptive optics two-photon microscope setup. (a) Schematic of the system setup. EOM, electro-optical modulator; DM, deformable mirror; SL, scan lens; DCM, dichroic mirror; Obj, objective lens; F, filter; PMT, photomultiplier tube; SHWS, Shack–Hartmann wavefront sensor. (b) Scattering and absorption structures in the sample decrease the fluorescence from the guide star (GS) at different angles, leading to obvious intensity variation at the objective pupil plane. (c) An SHWS image captured at 250 µm below the pia, showing the unevenly distributed spot intensities. (d) GCP integrates three optimal spot centroid calculation algorithms according to SNR. T-COG: thresholding center of gravity; T-Corr: thresholding correlation.
Fig. 2. Simulation results of localization accuracy of GCP and five other algorithms at different SNR conditions. (a) Centroid localization error versus spot SNR. (b) Subimages at different SNR conditions located by T-COG, Corr, and GCP. COG: center of gravity; T-COG: thresholding COG; WCOG: weighted COG; WT-COG: windowing process T-COG; M-COG: morphology COG; Corr: cross-correlation.
Fig. 3. Direct wavefront sensing experiments validate the AO performance of GCP at different SNR conditions. (a) SNR of SHWS spots versus exposure time of SHWS. (b) Centroid localization error of GCP and T-COG versus spot SNR. (c) Wavefront error of GCP and T-COG at different exposure times. The wavefront error is the root mean square of the wavefront difference between the calculated wavefront and the ground truth. (d) SNR map of the spot array at the exposure time of 63 ms. (e) Spot shifts map and residual wavefront (the difference between the calculated wavefront and ground truth) computed by T-COG. (f) Spot shifts map and residual wavefront computed by GCP.
Fig. 4. TPM imaging of Thy1-GFP-M neuron with direct AO through T-COG or GCP at different SNR conditions. (a) MIP images of the brain slice at depths from 200–250 µm. (b) Intensity improvements achieved by T-COG and GCP at different SNR conditions. (c) Spatial frequency representation of the boxed area in (a), indicating the image resolution of T-COG and GCP. Scale bar = 40 µm.
Fig. 5. TPM imaging of Thy1-GFP-M brain tissue at depths from 250 to 500 µm, AO compensated by T-COG and GCP. (a) 3D volume rendering of the neuronal network with AO correction by GCP. (b) AO-off, T-COG-, and GCP-compensated neuronal maximum intensity projection (MIP) images at depths of 250, 350, and 450 µm. The SHWS SNR maps are in the lower right corner of the first column. The calculated wavefronts of T-COG and GCP are in the lower right corner of the second and third columns, respectively. (c) The SNR of SHWS decreased with depth. (d) Intensity improvement achieved by the two methods at different depths. (e) Intensity plots corresponding to the yellow line in (b), showing that fine structures can be resolved after GCP AO at 500 µm depth. Scale bar = 20 µm.

Equations (8)


$$\Phi = MD = M(S_1 - S_0),$$

$$I(x, y) = \left( \frac{\sin\!\big(Q(x - x_0)\big)\,\sin\!\big(Q(y - y_0)\big)}{Q^2 (x - x_0)(y - y_0)} \right)^{2},$$

$$\mathrm{SNR} = \frac{I_m - \mu_n}{\sigma_n},$$

$$(x_c, y_c) = \left( \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} x_{nm} I_{nm}}{\sum_{m=1}^{M} \sum_{n=1}^{N} I_{nm}},\; \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} y_{nm} I_{nm}}{\sum_{m=1}^{M} \sum_{n=1}^{N} I_{nm}} \right),$$

$$E = D_1 - M_1^{-1} M D_1,$$

$$E = \left\{ (E_{xi}, E_{yi}) \;\middle|\; i = 1, 2, 3, \ldots, 197,\; E_{xi}^2 + E_{yi}^2 > 16 \right\},$$

$$\mathrm{CLE} = \sqrt{(x_c - x_0)^2 + (y_c - y_0)^2},$$

$$\Phi_{\mathrm{rms}} = \sqrt{\frac{\sum_{i=1}^{140} (a_{ci} - a_{0i} - u)^2}{140}}, \qquad u = \frac{\sum_{i=1}^{140} (a_{ci} - a_{0i})}{140}.$$
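The two evaluation metrics used in this paper, the centroid localization error (CLE) and the RMS wavefront error over the Zernike coefficients, can be sketched in Python. The function names are illustrative; `a_calc` and `a_true` stand for the calculated and ground-truth coefficient vectors (140 modes in the paper).

```python
import numpy as np

def centroid_localization_error(xc, yc, x0, y0):
    # CLE: Euclidean distance between the calculated centroid (xc, yc)
    # and the true spot position (x0, y0).
    return float(np.hypot(xc - x0, yc - y0))

def wavefront_rms_error(a_calc, a_true):
    # RMS of the mean-removed difference between calculated and
    # ground-truth Zernike coefficients (piston term u subtracted).
    d = np.asarray(a_calc, dtype=float) - np.asarray(a_true, dtype=float)
    u = d.mean()
    return float(np.sqrt(np.mean((d - u) ** 2)))
```

Subtracting the mean difference `u` makes the metric insensitive to a uniform offset between the two coefficient sets, matching the definition above.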