
Rapid 3D isotropic imaging of whole organ with double-ring light-sheet microscopy and self-learning side-lobe elimination

Open Access

Abstract

Bessel-like plane illumination forms a new type of light-sheet microscopy with an ultra-long optical sectioning distance that enables rapid 3D imaging of fine cellular structures across an entire large tissue. However, the side-lobe excitation of conventional Bessel light sheets severely impairs the quality of the reconstructed 3D image. Here, we propose a self-supervised deep learning (DL) approach that can completely eliminate the residual side lobes for a double-ring-modulated non-diffractive light-sheet microscope, thereby substantially improving the axial resolution of the 3D image. This lightweight DL model utilizes the microscope's own point spread function (PSF) as prior information, without the need for external high-resolution microscopy data. After a quick training process based on a small number of datasets, the trained model can restore side-lobe-free 3D images with near-isotropic resolution for diverse samples. Using an advanced double-ring light-sheet microscope in conjunction with this efficient restoration approach, we demonstrate 5-minute rapid imaging of an entire mouse brain with a size of ∼12 mm × 8 mm × 6 mm and achieve a uniform isotropic resolution of ∼4 µm (1.6-µm voxel), capable of discerning single neurons and vessels across the whole brain.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Rapid 3D fluorescence imaging of specifically-labelled large tissues remains a major challenge for current fluorescence microscopy. Light-sheet fluorescence microscopy (LSFM) has recently emerged from a variety of candidates owing to its selective plane illumination mode, capable of three-dimensionally imaging thick samples at high speed and low photon invasion [1–11]. A basic hyperbolic light sheet can be generated by simply focusing a Gaussian laser beam along one dimension using a cylindrical lens. Its geometry is usually described by three parameters: thickness, axial propagation distance, and width, which together determine the performance of the light-sheet sectioning. An optimal light-sheet geometry would maintain a very long propagation distance as well as an ultra-thin thickness, which is unfortunately a paradox owing to the diffraction of the laser beam [12]. Therefore, forming a thin light sheet for high axial resolution often results in a very limited field of view that is inappropriate for imaging large samples. Axially-swept light-sheet microscopes (ASLM) use refocusing apparatus, such as electrically tunable lenses (ETL) and piezo-mounted mirrors, to scan the thin-but-narrow Gaussian sheet axially and thereby achieve uniform plane illumination across an extended field of view [11–13]. Aside from the complicated opto-electronic synchronization, the extra response time of these refocusing units also compromises the imaging speed of ASLM techniques. Tuning the conventional Gaussian sheet into a more advanced shape, e.g., a Bessel-like sheet, represents another type of attempt that can directly extend the range of plane illumination while maintaining sharp axial sectioning [14–16]. A well-known example is to use an axicon or annular mask to generate a long Bessel-like optical needle with a thin central-lobe diameter and scan the 1D needle into a 2D sheet with a large illumination size and small axial extent [15]. While the direct FOV extension by Bessel-like LSFM notably improves the imaging efficiency for large samples, its congenital side-lobe issue also leads to excessive out-of-focus fluorescence excitation that can severely degrade the image quality. To address this problem, several technical improvements, such as two-photon excitation, confocal slit synchronization, and photobleaching imprinting, have been made to suppress either the excitation or the fluorescing by the side lobes of the Bessel sheet [17–20]. In our previous work, we also designed a double-ring mask to directly generate a static Bessel light sheet with minimized side lobes [21]. While this technique, termed double-ring SPIM, indeed helps to produce a wide laser sheet with superior quality for macro-scale imaging, the side-lobe issue still arises when the light-sheet propagation distance necessarily becomes ultra-long to cover a large FOV.

In contrast to approaches that rely on improvements in optics to obtain high-quality images, the advent of deep neural networks has recently provided a paradigm shift. End-to-end data training drives the network to learn how the image data are degraded, so that the network can computationally restore the quality of image data measured under imperfect conditions. Deep learning has become an efficient technique for restoring microscopy data from a variety of imaging challenges, such as light scattering, weak signal, and the diffraction limit [22–31]. This enlightens us to use this powerful tool to remove the artifacts caused by the residual side lobes and thus drive the double-ring SPIM towards a near-perfect imaging condition.

Here we propose a deep-learning-based side-lobe elimination model in combination with a double-ring SPIM to achieve rapid 3D imaging of a whole transparent mouse brain at near-isotropic resolution. This self-supervised model requires only a small amount of semi-synthetic data to quickly train the network to remove the residual side-lobe signals from raw double-ring SPIM data. This side-lobe elimination microscopy accomplishes thin and uniform 4-µm optical sectioning over a large imaging FOV of ∼10 mm2, making it especially suited for high-throughput, high-resolution 3D imaging of large samples.

2. Principle

2.1 Theory and simulation of the double-ring intensity modulation

We compared the geometry of the Gaussian, Bessel, and double-ring modulated light sheets in the y-z plane. The middle image in Fig. 1 represents the low-side-lobe pattern generated by the double-ring intensity-modulated mask, with a Rayleigh range of 3520 µm and a side-lobe ratio as low as 15%. The top image shows the simulation result of the Bessel mode, where the side-lobe ratio reaches 28% at a Rayleigh range of 2500 µm. The bottom image corresponds to the simulation result of the Gaussian mode, which has a much smaller Rayleigh range of only 750 µm. The double-ring modulated light sheet thus has a lower side-lobe ratio than the Bessel light sheet and a significantly longer Rayleigh length than the Gaussian light sheet.
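Two of the figures of merit quoted above, the axial FWHM and the side-lobe ratio, can be read directly off a simulated axial intensity profile. Below is a minimal Python sketch of how such metrics can be computed; the helper functions and the squared-Bessel toy profile are our own illustration, not the published simulation code.

```python
import numpy as np
from scipy.special import j0

def fwhm(z, profile):
    """Full width at half maximum of a 1D intensity profile (sub-pixel via interpolation)."""
    p = profile / profile.max()
    above = np.where(p >= 0.5)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(0.5, [p[i0 - 1], p[i0]], [z[i0 - 1], z[i0]])
    right = np.interp(0.5, [p[i1 + 1], p[i1]], [z[i1 + 1], z[i1]])
    return right - left

def side_lobe_ratio(profile):
    """Peak intensity of the strongest side lobe relative to the central peak."""
    p = profile / profile.max()
    c = int(np.argmax(p))
    rising = np.diff(p[c:]) > 0          # intensity rises again past the first valley
    valley = c + int(np.argmax(rising))  # index of the first local minimum
    return p[valley:].max()

# toy profile: a squared zeroth-order Bessel beam has a ~16% first side lobe
z = np.linspace(-30, 30, 2001)           # axial coordinate, µm
profile = j0(0.6 * z) ** 2
print(f"FWHM = {fwhm(z, profile):.2f} µm, side-lobe ratio = {side_lobe_ratio(profile):.1%}")
```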


Fig. 1. Simulation results of the Bessel light-sheet, double-ring light-sheet, and Gaussian light-sheet. The intensity distributions of the three modes in the y-z plane illustrate that the double-ring SPIM has a larger illumination field of view and a lower side-lobe ratio. The curve on the right represents the intensity distribution along the z-direction. In all three simulations, the full width at half maximum (FWHM) is 7.5 µm. Scale bar, 1000 µm.


2.2 Principle of using semi-synthetic training data in deep learning networks

We use high-resolution lateral (x-y) images as the ground truth (GT) and convolve them with the system PSF to generate image pairs that present the same signal at different image qualities. These semi-synthetic data are suitable for training the side-lobe elimination model (SLEM), which removes side-lobe signals while achieving isotropic resolution.

Our system employs a double-ring intensity mask, which modulates the focused Gaussian beam into a low-side-lobe double-ring modulated beam, as shown in Fig. 2(a). By designing the mask appropriately, the generated double-ring modulated beam can cover a field of view of ∼3 mm. Through axial scanning using a sample stage, we obtain low-side-lobe three-dimensional data.

During training, we input a small region-of-interest (ROI) stack, e.g., 200×200×100 voxels, together with the PSF of the system. The network convolves the system's axial PSF with each x-y plane to obtain a simulated x-z plane. Next, the network crops the x-y planes and the simulated x-z planes into small blocks of size 64×64×1. After this preprocessing, the simulated x-z plane images serve as low-resolution inputs and the high-resolution x-y plane images serve as labels, and training begins: the neural network learns to restore high-resolution images from elongated, side-lobe-containing inputs. The intermediate outputs are compared with the fixed label data to iteratively optimize the network's loss function, enabling the network to predict high-resolution outputs more accurately [32], as shown in Fig. 2(b). Once the network is trained, we use it to restore larger ROIs [Fig. 2(c)]. We input 1000×1000×1000 three-dimensional raw data into the well-trained network and obtain an improved 3D output that is free from side-lobe signals and has isotropic resolution.
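The pair-generation step can be summarized in a few lines of Python. The sketch below is our own minimal illustration, assuming a raw ROI stack ordered (z, y, x) and a measured 1D axial PSF profile; it is not the published SLEM code.

```python
import numpy as np
from scipy.ndimage import convolve1d

def make_training_pairs(volume, axial_psf_1d, patch=64):
    """Build (input, label) pairs from a raw ROI stack of shape (z, y, x).

    Each x-y slice (high resolution, side-lobe-free) serves as the label;
    blurring it with the system's 1D axial PSF along one lateral axis mimics
    the elongation and side lobes seen in the real x-z views.
    """
    inputs, labels = [], []
    for xy in volume:                                   # iterate over z-slices
        blurred = convolve1d(xy, axial_psf_1d, axis=0)  # simulate axial degradation
        for i in range(0, xy.shape[0] - patch + 1, patch):
            for j in range(0, xy.shape[1] - patch + 1, patch):
                inputs.append(blurred[i:i + patch, j:j + patch])
                labels.append(xy[i:i + patch, j:j + patch])
    return np.stack(inputs), np.stack(labels)

# usage with stand-in data; a real run would pass a 200x200x100 ROI from the
# microscope and the measured axial PSF profile
vol = np.random.rand(100, 200, 200).astype(np.float32)
psf = np.exp(-np.linspace(-3, 3, 21) ** 2)   # stand-in Gaussian axial profile
psf /= psf.sum()
x, y = make_training_pairs(vol, psf)
print(x.shape, y.shape)                      # (900, 64, 64) each
```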


Fig. 2. Workflow of SLEM. (a) Data preparation, including data acquisition, cropping small regions for training, convolution with the PSF, and cropping into small patches for input to the network (Steps 1-4). (b) Network training, where the network generates intermediate outputs based on input simulated low-resolution data and quantitatively compares them with the label data (high-resolution data without side lobes). The system loss function is calculated, and network optimization is iteratively performed (Steps 5-8). (c) Network restoration step, where the full-field low-resolution images are input to the trained network, and enhanced high-resolution outputs are obtained.


2.3 Structure of the deep-learning network

As shown in Fig. 3, the network adopts a two-dimensional U-Net convolutional neural network architecture [23,24]. The encoding path consists of multiple convolutional layers and downsampling layers, which enable the network to learn global image features as the number of layers increases. The shallow layers combine low-level features to form high-level features in the deep layers. The decoding path, in reverse, reconstructs the image from high-level to low-level features through multiple convolutional layers and upsampling layers. To transmit more contextual information to the deep layers, connections known as feature concatenation channels are established between layers $i$ and $n - i$, where $n$ is the total number of convolutional layers.

The network takes 512×512 two-dimensional images as input. First, the input image undergoes two convolutions with a 3×3 kernel size. The convolved image is then downsampled to the next layer using a MaxPooling layer. At each subsequent layer, the downsampled image goes through two more convolutions followed by downsampling, and this process continues iteratively. Upsampling in the decoding path is performed by dedicated upsampling layers. The last convolutional layer uses a 1×1 kernel to allow feature fusion across channels.
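This description maps directly onto a compact PyTorch implementation. The sketch below is our own reading of Fig. 3; the depth and channel widths are assumptions, since the paper does not list them, and the transposed-convolution upsampling follows the Fig. 3 caption.

```python
import torch
import torch.nn as nn

class UNet(nn.Module):
    """Compact 2D U-Net in the spirit of Fig. 3; depth and channel widths are guesses."""
    def __init__(self, ch=(1, 32, 64, 128, 256)):
        super().__init__()
        def block(c_in, c_out):  # two 3x3 convolutions per level, as in the text
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))
        self.enc = nn.ModuleList([block(ch[i], ch[i + 1]) for i in range(len(ch) - 1)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList([nn.ConvTranspose2d(ch[i + 1], ch[i], 2, stride=2)
                                 for i in range(len(ch) - 2, 0, -1)])
        self.dec = nn.ModuleList([block(2 * ch[i], ch[i])
                                  for i in range(len(ch) - 2, 0, -1)])
        self.head = nn.Conv2d(ch[1], 1, 1)  # final 1x1 kernel fuses channels

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < len(self.enc) - 1:
                skips.append(x)        # feature concatenation channel (skip connection)
                x = self.pool(x)
        for up, dec in zip(self.up, self.dec):
            x = up(x)                  # upsample with a transposed convolution
            x = dec(torch.cat([x, skips.pop()], dim=1))
        return self.head(x)

net = UNet()
print(net(torch.randn(1, 1, 512, 512)).shape)  # torch.Size([1, 1, 512, 512])
```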


Fig. 3. Illustration of our U-Net-based network. In the encoding path, the input image is progressively downsampled through convolutional layers, which capture and extract hierarchical features at different scales. This path is responsible for capturing the context and extracting high-level representations. The decoding path, which is symmetrical to the encoding path, gradually upsamples the feature maps using transposed convolutions. Skip connections are also established between the corresponding layers in the encoding and decoding paths. These skip connections allow the network to combine low-level and high-level features, preserving fine-grained details during the upsampling process.


2.4 Loss function

All deep learning algorithms rely on minimizing a function known as the loss function. In SLEM we use a common choice, the mean squared error (MSE). For this traditional image regression task with a convolutional neural network, $g_\theta$ denotes the network with model parameters $\theta$. The parameters are chosen by minimizing the loss function $L_{mse}(\theta)$: during each epoch, the network computes the mean squared error over every pixel of the paired data to optimize $\theta$, until the loss reaches a stable minimum [23].

Here, $T$ is the number of image pairs and $N$ is the total number of pixels in one image.

$$L_{mse}(\theta) = \frac{1}{T}\frac{1}{N}\sum_{t = 1}^{T} \sum_{i = 1}^{N} \left( y_i^t - g_\theta(x^t)_i \right)^2$$
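In code, Eq. (1) is simply the batched pixel-wise MSE. A minimal training loop is sketched below; the tiny stand-in network, the Adam optimizer, and the learning rate are our assumptions, not details given in the paper (a real run would train the U-Net above on the 64×64 patch pairs).

```python
import torch
import torch.nn as nn

# tiny stand-in for the U-Net so the sketch runs instantly
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
mse = nn.MSELoss()                                   # Eq. (1): mean squared pixel error
opt = torch.optim.Adam(net.parameters(), lr=1e-4)    # optimizer choice assumed

x = torch.randn(8, 1, 64, 64)            # simulated low-resolution patches
y = torch.randn(8, 1, 64, 64)            # matching high-resolution labels

for epoch in range(5):                   # a real run trains until the loss plateaus
    opt.zero_grad()
    loss = mse(net(x), y)                # L_mse(theta) over the mini-batch
    loss.backward()
    opt.step()
    print(epoch, loss.item())
```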

2.5 Performance criteria

SSIM is a widely used criterion that measures the similarity of structures in images based on their brightness, contrast, and structure. The output of SSIM ranges from 0 to 1, where a higher value indicates a higher level of fidelity [33].

$$\mathrm{SSIM} = \frac{(2\mu_M \mu_N + C_1)(2\sigma_{MN} + C_2)}{(\mu_M^2 + \mu_N^2 + C_1)(\sigma_M^2 + \sigma_N^2 + C_2)}$$

M is the output of the network and N is the ground truth; $\mu_M$ and $\mu_N$ are the mean values of M and N, $\sigma_M$ and $\sigma_N$ are the standard deviations of M and N, and $\sigma_{MN}$ is the covariance between M and N. $C_1$ and $C_2$ are constants used to avoid division by values close to zero.

We use Eq. (3) to evaluate the signal-to-noise ratio (SNR) of the image.

$$\mathrm{SNR} = \frac{\mu}{\sigma} = \frac{S - I_{background}}{\sqrt{S - I_{background} + \sigma_{background}^2}}$$

S is the average signal intensity over the full field of view; $I_{background}$ and $\sigma_{background}$ are the mean and the standard deviation of the background, respectively.

$$\mathrm{MSE} = \frac{1}{H \times W}\sum_{i = 1}^{H} \sum_{j = 1}^{W} \left( M(i,j) - N(i,j) \right)^2$$
$$\mathrm{PSNR} = 10\log_{10}\left( \frac{(2^n - 1)^2}{\mathrm{MSE}} \right)$$

$MSE$ represents the Mean Square Error between the current image M and the reference image N. H and W are the height and width of the images, respectively, and n is the number of bits per pixel. PSNR, measured in dB, indicates the level of distortion, with higher values indicating lower distortion.
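These criteria are standard and available off the shelf; only the SNR definition above needs a few lines of our own. The sketch below, with stand-in images and a crude background mask, shows how all three can be computed (the helper snr is our own implementation of Eq. (3)).

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def snr(img, background_mask):
    """SNR per Eq. (3): background-corrected signal over a noise estimate."""
    bg = img[background_mask]
    s = img.mean() - bg.mean()
    return s / np.sqrt(max(s, 0.0) + bg.var())

gt = np.random.rand(128, 128).astype(np.float32)                # stand-in ground truth
out = gt + 0.05 * np.random.randn(128, 128).astype(np.float32)  # stand-in network output

print("SSIM:", structural_similarity(gt, out, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(gt, out, data_range=1.0), "dB")  # data_range plays the role of 2^n - 1
print("SNR :", snr(out, out < np.percentile(out, 10)))          # crude background mask
```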

3. Results

3.1 Performance characterization of SLEM

We first used 4× detection to compare the effective imaging FOV and resolution of the conventional Gaussian light sheet, double-ring SPIM, and the SLEM model. We imaged fluorescent beads (0.5-µm diameter, Lumisphere, BaseLine Chromtech) embedded in 1% agarose gel to compare the point spread functions (PSFs) of the three modes [Fig. 4(a)]. While the double-ring mask makes the PSF uniform across the entire x-z image, it generates noticeable axial elongation and side lobes with ∼17% of the central peak intensity. After being recovered by the side-lobe elimination model (SLEM), the beads across the entire 4× FOV show uniform and sharp optical sectioning [Fig. 4(a)]. Five reconstructed beads at different x positions were further extracted from the entire FOV to analyze their axial performance [Fig. 4(b)]. The axial resolution of the five points is shown as a line chart in Fig. 4(c). Compared to the Gaussian beam, the double-ring SPIM mode clearly exhibits uniform axial resolution across the entire field of view. Furthermore, with the SLEM reconstruction, the axial resolution over the entire FOV is optimized to match the lateral resolution, achieving isotropic resolution within a ∼3 mm field of view with the side lobes eliminated. The full-FOV PSF statistics presented in Table 1 support the same conclusion.


Fig. 4. Comparative PSFs of Gaussian SPIM, double-ring SPIM and SLEM. (a) From top to bottom, the images show the x-z plane of Gaussian PSF, PSF of double-ring SPIM, and PSF of SLEM. (b) Five PSFs from the central and peripheral fields of view in (a) are zoomed in to show the details. (c) The statistical plots of the axial full width at half maximum (FWHM) for the five PSFs in (b) demonstrate that the PSF of SLEM exhibits nearly the same axial resolution as the original x-y plane.



Table 1. Comparative PSFs of double-ring SPIM and SLEM. We calculate the mean value and standard deviation of thirty PSFs in the whole FOV.

3.2 Verification of the fidelity of SLEM

To assess the fidelity of the SLEM model, the original 150×150×200-pixel stack was divided into image sequences, with 160 images used for training and 40 images for validation. The entire set of 2D images was convolved with the system's PSF to generate simulated low-resolution data, as shown in the leftmost column of Fig. 5. This synthetic low-resolution data reveals the elongation and side lobes, making some neural fibers hard to distinguish. Following the steps described above, paired data were input for network training. The low-resolution data from the validation set were then fed to the well-trained network to recover high-resolution enhanced data, displayed in the second column of Fig. 5. The error maps of the restored HR relative to the GT are shown in the rightmost column, with SSIM values of 0.907, 0.910, and 0.826, and PSNR values of 33.2, 35.105, and 30.145 dB, indicating sufficient structural fidelity of SLEM.


Fig. 5. Fidelity validation of SLEM. The third column shows the z-projection of the neuronal signal in the x-y plane, serving as the ground truth (GT) for the network. It is convolved with the system PSF to generate the simulated low-resolution data (LR) shown in the leftmost column. The LR data is input into the trained network to obtain the restored high-resolution (HR) result shown in the second column. The rightmost column displays the error map between the GT and HR. Scale bar, 50 µm.


3.3 Restoration of cortical neural fibers

Figure 6(a) illustrates the workflow of our whole-brain imaging and data analysis. A cleared mouse brain (Thy1-GFP-M, ∼12×8×6 mm3) [36] was first imaged using the double-ring SPIM mode under two opposite views, with 20 tiles in total (10 per view) acquired to cover the entire brain. We used ImageJ's stitching function to obtain the 3D image of the complete brain, resulting in low-resolution whole-brain data. The SLEM program was then applied to generate a high-axial-resolution output (1.6-µm isotropic voxel size). After obtaining the isotropic whole-brain data, we performed brain-region segmentation using ImageJ's BIRDS plugin [35], followed by cell-density statistics or vessel tracing in different regions.


Fig. 6. SLEM imaging workflow and cortical neural fiber recovery. (a) The workflow of two-view imaging, stitching, network enhancement, registration, counting, and tracing. (b) A 4× full-field rendering of neural fiber signals. Scale bar, 400 µm. (c) Zoomed-in 3D renderings of two regions in (b), with the original raw data shown in the top row and the enhanced data in the bottom row. (d) Magnified views of local ROIs in (c), showing the detailed resolution of the fibers. The intensity distributions along the vertical axis over a 55-pixel length are displayed in (e).


Figure 6(b) shows a cortical region with uniform resolution across the full 4× FOV. Figures 6(c) and 6(d) show magnified views of two regions of interest (ROIs) by raw double-ring SPIM and SLEM, clearly demonstrating that more fine nerve fibers are resolved after network enhancement. The axial elongation of cell bodies is also recovered. We selected two magnified regions to better compare the network's enhancement of neural fibers [Fig. 6(d)], and the normalized intensity profiles are plotted in Fig. 6(e). The left curve demonstrates that four adjacent fibers become distinguishable with enhanced contrast after network enhancement.

3.4 Whole-brain 3D imaging with brain region segmentation and quantification

In addition to the recovery of neural fibers, we also used SLEM to explore neuron cyto-architectures in different brain sub-regions. We registered the whole brain to the Allen Brain Atlas (ABA) using BIRDS [35] and segmented it into more than 600 regions [Fig. 7(a)]. An anatomical identity was then assigned to each cell in each region. We selected three brain regions, the isocortex, hippocampal formation (HPF), and midbrain (MB), for cell counting. While the raw 3D image can be quickly obtained by double-ring SPIM, the resolved axial signals remained too crowded for single-cell counting. After SLEM enhancement, single cells previously indistinguishable in the x-z plane became clearly separated [Fig. 7(b)]. We further verified the improved counting accuracy of SLEM in a small selected region, where each cell within the three-dimensional volume was better resolved with increased contrast [Fig. 7(c)]. An accurate count of 84 cells was then obtained as a result of SLEM enhancement. Magnified views of the three brain regions are shown in Fig. 7(d)–7(f), with three sub-regions highlighted at the top left corner.
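The counting step itself can be as simple as thresholding the enhanced volume and labeling connected components. The sketch below is a stand-in for that idea on synthetic data; the paper's actual counts were produced on real SLEM output, and the threshold here is illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

# stand-in volume with 30 bright "cell bodies"; a real run would use the
# SLEM-enhanced isotropic stack with thresholds tuned per brain region
rng = np.random.default_rng(1)
vol = np.zeros((64, 64, 64))
for z, y, x in rng.integers(5, 59, size=(30, 3)):
    vol[z, y, x] = 1.0
vol = gaussian_filter(vol, sigma=1.5)      # give each cell a soft body
vol += 0.002 * rng.random(vol.shape)       # faint background noise

mask = vol > 0.5 * vol.max()               # crude global threshold
_, n_cells = label(mask)                   # count connected components
print("counted cells:", n_cells)           # ~30; closely adjacent cells may merge
```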


Fig. 7. Whole-brain segmentation and cellular analysis based on SLEM imaging. (a) Coronal, transverse, and sagittal views of the whole mouse brain. (b) A selected ROI in the HPF compares the neuron cell bodies resolved by raw double-ring SPIM and SLEM. Scale bar, 100 µm. (c) A magnified 3D rendering of the ROI outlined by the dashed box in (b), comparing the resolution and counting accuracy before and after SLEM enhancement. Scale bar, 30 µm. (d)-(f) 3D visualizations of the isocortex, MB (midbrain), and HPF (hippocampal formation) regions, showing the volume rendering and x-y, x-z plane representations of selected areas. Scale bar, 50 µm.


3.5 Whole-brain SLEM imaging and quantitative analyses of blood vessel

We further demonstrated SLEM imaging of the blood vessels of the whole mouse brain and present vessel-tracing results in specific brain regions. A 532-nm laser was used to excite the labelled vessels (LEL-Dylight649). High-throughput imaging of the whole brain was completed in ∼5 min. The whole-brain vasculature image is shown in Fig. 8(a). Images of an ROI in the hippocampal formation (HPF) by double-ring SPIM and SLEM are compared in Fig. 8(b), showing that the x-z plane quality of SLEM is as high as that of the x-y plane. The magnified views show that the original signal exhibits elongation, whereas the enhanced axial view presents the same resolution as the x-y plane, with continuous signals and a clean background without noticeable artifacts [Fig. 8(c)]. Furthermore, owing to the high spatial resolution achieved at the whole-brain scale, our method also showed the potential to trace vessels across the entire brain. Figure 8(d) demonstrates the 3D enhanced image, the segmentation of vessels, and the accurate colocalization of the segmentation with the blood vessels in the ROI. We then manually assigned a starting point on a long-distance blood vessel and conducted statistical analysis, including vessel branching level, length from the starting point, and mean vessel diameter, using Imaris. We selected 1st-, 2nd-, and 3rd-order vessels originating from the starting point, as shown in Fig. 8(e). As the order increases, the diameter of the vessel branches becomes smaller, yet they can still be accurately segmented.
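Diameter measurements of this kind typically combine a centerline extraction with a Euclidean distance transform: the distance from each skeleton voxel to the vessel wall is the local radius. The paper used Imaris; the sketch below shows the same idea with open-source tools on a synthetic tube, so the data and parameters are entirely our own.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# stand-in binary vessel mask: a straight tube of radius 3 voxels along x
vol = np.zeros((40, 64, 64), dtype=bool)
zz, yy, xx = np.mgrid[:40, :64, :64]
vol[(zz - 20) ** 2 + (yy - 32) ** 2 < 9] = True

skel = skeletonize(vol, method="lee") > 0    # 3D centerline ('lee' method)
dist = ndimage.distance_transform_edt(vol)   # distance to the vessel wall
radii = dist[skel]                           # local radius along the centerline
print(f"mean diameter ~ {2 * radii.mean():.1f} voxels")
```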


Fig. 8. 3D SLEM imaging and quantitative analyses of vessels in the whole mouse brain. (a) 3D visualization of the whole mouse brain, showing LEL-Dylight649-labelled vessels. Scale bar, 1 mm. (b) From left to right: the original x-y plane, the recovered high-resolution axial plane, and the original low-resolution axial plane. The enhanced axial plane exhibits the same resolution as the x-y plane. Scale bar, 100 µm. (c) Magnified views of a small ROI (dotted box) in (b) reveal that several adjacent blood vessels that were indistinguishable in the original low-resolution image are clearly resolved. Scale bar, 50 µm. (d) From left to right: the 3D rendering of the recovered vessels, the rendering of the vessel tracing, and the tracing accuracy. Scale bar, 30 µm. (b)-(d) belong to ROI 2. (e) Applications of vessel tracing: vessel classification (left), length measurement from the starting point (middle), and average vessel diameter (right). Scale bar, 50 µm.


4. Methods

4.1 Imaging system design

In this paper, we designed and built a double-ring mask-based dual-sided light-sheet microscope. The illumination path mainly comprises a coupled four-wavelength laser generation module, a double-ring mask-based light-sheet generation module, and a light-sheet relay module. In the light-sheet generation module, a double-ring intensity mask was chosen as the element that converts a Gaussian light sheet into a non-diffractive static light sheet. The double-ring mask pattern and its simulation results are shown in Fig. 1. The mask can produce a 7.5-µm-thick light sheet with a side-lobe ratio of 17%, covering a confocal range of about 3317 µm. The detection path consists of an upright OLYMPUS MVX10 microscope, which provides magnifications from 1.26× to 12.6×. In our experiments, we mainly apply 4× detection magnification, which corresponds to a field of view of 3328 µm.

We mainly apply MATLAB for the simulation of the double-ring mask, and the underlying physical optics can be illustrated by the following equations. We simplify the optical path to a system consisting of a mask and an objective lens; the relay lenses are omitted in the simulation. The incident wave $U_0$ is modulated to $U_1$ after passing through the mask, where $F$ represents the intensity modulation imposed on the wavefront by the mask:

$$U_1 = U_0 F$$

The mask is located at the pupil plane of the objective lens. According to the Fresnel diffraction model, the amplitude distribution arriving at the objective lens is

$$U_2 = \frac{\exp (ikf)}{i\lambda f}\int\!\!\int_{-\infty}^{\infty} U_1(x_1, y_1)\exp \left\{ \frac{ik}{2f}\left[ (x - x_1)^2 + (y - y_1)^2 \right] \right\} dx_1\, dy_1$$
where f is the focal length of the objective lens and $\lambda$ is the wavelength of the incident light. After the light wave passes through the objective, the complex amplitude is:
$$U_3 = U_2 \exp \left[ \frac{-ik}{2f}(x_2^2 + y_2^2) \right]$$

Using the Fresnel diffraction formula again, the amplitude of the wavefront at a distance d behind the objective lens is:

$$U_4 = \frac{\exp (ikd)}{i\lambda d}\int\!\!\int_{-\infty}^{\infty} U_3(x_3, y_3)\exp \left\{ \frac{ik}{2d}\left[ (x - x_3)^2 + (y - y_3)^2 \right] \right\} dx_3\, dy_3$$
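Numerically, each Fresnel integral above is a convolution and can be evaluated with FFTs. The following Python sketch mirrors Eqs. (6)-(9) end to end; it is our own transcription (the paper used MATLAB), and the ring radii, widths, and optical parameters are illustrative, not the published mask design.

```python
import numpy as np

def fresnel_propagate(u, dx, wavelength, z):
    """Fresnel propagation over distance z by the transfer-function (FFT) method,
    numerically equivalent to the Fresnel integrals for well-sampled fields."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(u.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

n, dx, wl, f = 1024, 4e-6, 488e-9, 50e-3        # grid, pixel pitch, wavelength, focal length
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
# illustrative double-ring amplitude mask (two annuli, 100 µm wide)
mask = ((np.abs(R - 1.1e-3) < 50e-6) | (np.abs(R - 1.5e-3) < 50e-6)).astype(float)

u1 = mask                                        # Eq. (6): unit plane wave times mask F
u2 = fresnel_propagate(u1, dx, wl, f)            # Eq. (7): propagate to the objective
u3 = u2 * np.exp(-1j * np.pi / (wl * f) * R**2)  # Eq. (8): thin-lens phase factor
u4 = fresnel_propagate(u3, dx, wl, f)            # Eq. (9): propagate distance d = f
print("on-axis intensity near focus:", np.abs(u4[n // 2, n // 2]) ** 2)
```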

4.2 Brain clearing

We labeled the vasculature in Thy1-GFP-M mouse brains with DyLight 649 L. esculentum (Tomato) lectin (LEL-Dylight649, DL-1178, Vector Laboratories). Briefly, LEL-Dylight649 was diluted in sterile saline to a concentration of 0.5 mg/ml (0.1 ml per mouse) and injected into the mice via the tail vein. After injection, the animals were placed in a warm cage for 30 min prior to perfusion. The mouse brains were collected after perfusion and post-fixed in 4% PFA overnight. The fixed brains were then cleared using the FDISCO clearing protocol as previously described [36].

4.3 Transparent whole-brain imaging

Our double-ring SPIM is based on an orthogonal plane-illumination / wide-field detection configuration, which contains a lab-built double-ring-modulated plane excitation path generating a Bessel-like laser sheet with an axial extent of ∼7.5 µm, and an upright OLYMPUS MVX10 microscope [34] serving as a zoomable fluorescence detection system. The brain was clarified using the FDISCO method. We mounted the transparent whole-brain sample onto a self-designed holder with glue (Loctite) and performed imaging in DBE index-matched solvent. To ensure reliable tile alignment, we set a 20% overlap between adjacent fields of view. The whole-brain imaging was performed under 4× magnification with an axial step size of 3 µm. A Hamamatsu Flash 4.0 v2 sCMOS camera (pixel size 6.5 µm) was used to collect the plane images at a high frame rate of 100 fps. For a mouse brain with a size of ∼12×8×5 mm3, 20 (5×4) 3D image tiles were used to cover the entire brain.

5. Discussion and conclusion

We report SLEM, which combines non-diffractive optical elements with a semi-synthetic data-driven deep learning network, achieving high-throughput 3D imaging of whole brains at isotropic resolution. Compared to the conventional Gaussian light-sheet mode, a thin-and-wide non-diffractive static light sheet can be readily generated through the simple addition of a double-ring intensity-modulation mask. The integration of the SLEM network further removes the axial artifacts caused by the side-lobe excitation of the light sheet. The network training only requires self-supervised data from small regions captured by the microscope, eliminating the need for additional in-situ ground-truth data. This fast-training network achieves stable high-resolution outputs with minimal artifacts, eliminating side lobes across various signal regions. In our demonstrations, SLEM microscopy achieves authentic single-cell resolution across the whole mouse brain with merely 5 minutes of acquisition, enabling quantitative analyses, such as cell counting and vessel tracing, in various brain regions. Its fusion of optics and computation provides a simple-but-powerful paradigm shift in 3D fluorescence microscopy for advancing a variety of biological assays in neuroscience, tissue histology, and digital pathology.

Appendix A: Recovery of brain regions with different signal-to-noise ratios

We applied SLEM to the recovery of images from different brain regions and demonstrated a significant improvement in resolution and contrast using Fast Fourier Transform (FFT) and signal-to-noise ratio (SNR) analysis. Due to variations in scattering and light penetration depth across brain regions, the SNR and side-lobe distribution of the images may differ. We selected densely structured regions with different SNRs and cropped them into small blocks of 80×80×80 as training data for the network, yielding the outcomes shown in the third row of Fig. 9. Enhanced images in the x-z plane show the same resolution as the x-y plane without significant background artifacts, and the frequency spectrum is clearly extended. We also performed Richardson-Lucy (RL) deconvolution with the same system PSF for comparison. After ten iterations, the results in the second row of Fig. 9 clearly indicate that RL deconvolution did not significantly improve the image quality. This indicates that the network predictions are more stable and perform well across regions with different structures, resulting in a significant improvement in resolution and SNR and the complete elimination of the side lobes caused by double-ring SPIM.
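For reference, the RL baseline is essentially a one-liner with scikit-image. The sketch below runs ten iterations on synthetic data with an axially elongated PSF carrying ~17% side lobes; the data and PSF shape are stand-ins for the measured quantities, and the iteration-count argument is named num_iter in recent scikit-image releases.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# sparse point sources blurred by an elongated PSF with ~17% side lobes
rng = np.random.default_rng(0)
truth = (rng.random((256, 256)) > 0.999).astype(float)

z = np.linspace(-10, 10, 41)
axial = (np.exp(-z**2 / 8)
         + 0.17 * np.exp(-(z - 6)**2 / 2)
         + 0.17 * np.exp(-(z + 6)**2 / 2))
lateral = np.exp(-np.linspace(-3, 3, 13)**2)
psf = axial[:, None] * lateral[None, :]
psf /= psf.sum()

blurred = np.clip(fftconvolve(truth, psf, mode="same"), 0, None)
restored = richardson_lucy(blurred, psf, num_iter=10, clip=False)  # 10 iterations, as in Fig. 9
```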


Fig. 9. Recovery of brain regions with different signal-to-noise ratios (SNRs). The quality of image restoration by the network is demonstrated in regions of high SNR (left) and low SNR (right). The middle column shows magnified images of the regions indicated by dashed boxes in the left column. The signals recovered by the network achieve three-dimensional isotropic resolution. We calculated the Fast Fourier Transform (FFT) of the images in the first column to visually demonstrate the improvement in resolution, as shown in the third column. The SNR is significantly enhanced and the spectrum is visibly widened, especially in the axial direction.


Appendix B: Comparison of the results of SLEM and RL deconvolution on the PSF of the entire FOV

We compared the effectiveness of SLEM with traditional RL deconvolution, using the same PSF for both network training and RL deconvolution. The enhancement of the PSF over the entire FOV is shown in Fig. 10(a). To illustrate the details, we extracted 150×150-pixel ROIs and displayed them in Fig. 10(b). SLEM demonstrated significant suppression of side lobes and isotropic enhancement, while RL deconvolution after 10 iterations showed no apparent effect. Figure 10(c) compares the intensity profiles of a selected point, where SLEM not only narrowed the FWHM but also reduced the side-lobe ratio to nearly 0%. To avoid the influence of random point selection, we conducted a statistical analysis of the resolution and side-lobe ratio in the x, y, and z directions for 10 PSFs across the whole FOV. According to the statistical results depicted in the box plots, SLEM restoration achieved isotropic resolution with a uniform side-lobe ratio of 0%.


Fig. 10. Comparison of SLEM and RL deconvolution methods over the entire FOV. (a) Comparison of the axial restoration results between SLEM and RL deconvolution using the same PSF. Scale bar, 200 µm. (b) Magnified details of three regions indicated by solid boxes in (a). Scale bar, 50 µm. The axial intensity profiles of a selected point are shown in (c). (d) Statistics of three-dimensional resolution and side-lobe ratio for 10 PSFs within the entire FOV. While all three methods show uniform axial resolution across the x-z plane, only the SLEM method achieves three-dimensional isotropic resolution and side-lobe elimination.


Appendix C: Validation of the image degradation model

To generate the paired data for training SLEM, we degrade the high-resolution x-y plane images in two main steps. Step 1: generate anisotropic data. We convolve the high-resolution x-y plane image with the PSF of the system so that the image acquires the elongation and side lobes. Step 2: add noise. We approximate the real x-z image by adjusting the mean and variance of the added random noise. The second column of Fig. 11 shows the simulated axial data obtained after the two-step degradation model. When this semi-synthetic data and the real x-z plane data have similar Fourier-domain distributions, the degradation process is considered credible.
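That credibility check boils down to comparing log-magnitude Fourier spectra. Below is a minimal sketch, with random stand-ins for the degraded x-y slice and the acquired x-z slice, and a plain Pearson correlation as the similarity score (our choice of metric, not one stated in the paper):

```python
import numpy as np

def log_spectrum(img):
    """Log-magnitude 2D Fourier spectrum with the DC term centered."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def spectral_similarity(a, b):
    """Pearson correlation between two log spectra as a crude credibility score."""
    sa, sb = log_spectrum(a).ravel(), log_spectrum(b).ravel()
    return np.corrcoef(sa, sb)[0, 1]

# stand-ins: synth_xz is the degraded x-y slice, real_xz an acquired x-z slice
synth_xz = np.random.rand(256, 256)
real_xz = np.random.rand(256, 256)
print("spectral correlation:", spectral_similarity(synth_xz, real_xz))
```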


Fig. 11. Verification of the image degradation model. We visualized three samples (beads, neurons, and vessels) to verify the accuracy of our degradation model. The original image in the x-y plane (first column) is convolved with the system PSF and noise is added to obtain the synthetic LR image (second column); comparison of the synthetic LR image with the original x-z image (third column) in the frequency domain reveals that they are very similar for the mouse brain structures. For the bead samples, the simulated LR axial resolution is slightly lower than the original axial resolution, probably due to aberration in the x-y plane. Scale bar of first row, 50 µm. Scale bar of second and third rows, 100 µm.


Funding

National Natural Science Foundation of China (21874052, 21927802, 61860206009, T2225014).

Disclosures

The authors declare no conflicts of interest.

Data availability

Code Availability. The codes for SLEM in this paper are publicly available in [37].

References

1. J. Huisken, J. Swoger, F. Del Bene, et al., “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305(5686), 1007–1009 (2004). [CrossRef]  

2. H. U. Dodt, U. Leischner, A. Schierloh, et al., “Ultramicroscopy: Three-dimensional visualization of neuronal networks in the whole mouse brain,” Nat. Methods 4(4), 331–336 (2007). [CrossRef]  

3. P. J. Keller, A. D. Schmidt, J. Wittbrodt, et al., “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322(5904), 1065–1069 (2008). [CrossRef]  

4. Y. Wu, P. Wawrzusin, J. Senseney, et al., “Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy,” Nat. Biotechnol. 31(11), 1032–1038 (2013). [CrossRef]  

5. M. B. Ahrens, M. B. Orger, D. N. Robson, et al., “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013). [CrossRef]  

6. A. K. Glaser, K. W. Bishop, L. A. Barner, et al., “A hybrid open-top light-sheet microscope for versatile multi-scale imaging of cleared tissues,” Nat. Methods 19(5), 613–619 (2022). [CrossRef]  

7. J. Nie, S. Liu, T. Yu, et al., “Fast, 3D isotropic imaging of whole mouse brain using multiangle-resolved subvoxel SPIM,” Adv. Sci. 7(3), 1901891 (2020). [CrossRef]  

8. F. Zhao, Y. Yang, Y. Li, et al., “Efficient and cost-effective 3D cellular imaging by sub-voxel-resolving light-sheet add-on microscopy,” J. Biophotonics 13(6), e201960243 (2020). [CrossRef]  

9. P. Fei, J. Nie, J. Lee, et al., “Sub-voxel light-sheet microscopy for high-resolution, high-throughput volumetric imaging of large biomedical specimens,” Adv. Photonics 1(01), 016002 (2019). [CrossRef]  

10. B. Yang, M. Lange, A. Millett-Sikking, et al., “DaXi-high-resolution, large imaging volume and multi-view single-objective light-sheet microscopy,” Nat. Methods 19(4), 461–469 (2022). [CrossRef]  

11. A. K. Chmielewski, A. Kyrsting, P. Mahou, et al., “Fast imaging of live organisms with sculpted light sheets,” Sci. Rep. 5(1), 9385 (2015). [CrossRef]  

12. L. Gao, L. Shao, C. D. Higgins, et al., “Noninvasive imaging beyond the diffraction limit of 3D dynamics in thickly fluorescent specimens,” Cell 151(6), 1370–1385 (2012). [CrossRef]  

13. P. N. Hedde and E. Gratton, “Selective plane illumination microscopy with a light sheet of uniform thickness formed by an electrically tunable lens,” Microsc. Res. Tech. 81(9), 924–928 (2018). [CrossRef]  

14. T. A. Planchon, L. Gao, D. E. Milkie, et al., “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8(5), 417–423 (2011). [CrossRef]  

15. C. Fang, T. Yu, T. Chu, et al., “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” Nat. Commun. 12(1), 107 (2021). [CrossRef]  

16. B.-C. Chen, W. R. Legant, K. Wang, et al., “Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution,” Science 346(6208), 1257998 (2014). [CrossRef]  

17. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef]  

18. J. L. Fan, J. A. Rivera, W. Sun, et al., “High-speed volumetric two-photon fluorescence imaging of neurovascular dynamics,” Nat. Commun. 11(1), 6020 (2020). [CrossRef]  

19. E. Baumgart and U. Kubitscheck, “Scanned light sheet microscopy with confocal slit detection,” Opt. Express 20(19), 21805–21814 (2012). [CrossRef]  

20. B. Xiong, X. Han, J. Wu, et al., “Improving axial resolution of Bessel beam light-sheet fluorescence microscopy by photobleaching imprinting,” Opt. Express 28(7), 9464–9476 (2020). [CrossRef]  

21. Y. Zhao, M. Zhang, and W. Zhang, “Isotropic super-resolution light-sheet microscopy of dynamic intracellular structures at subsecond timescales,” Nat. Methods 19(3), 359–369 (2022). [CrossRef]  

22. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

23. M. Weigert, U. Schmidt, T. Boothe, et al., “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

24. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

25. W. Ouyang, A. Aristov, M. Lelek, et al., “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018). [CrossRef]  

26. C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019). [CrossRef]  

27. H. Wang, Y. Rivenson, Y. Jin, et al., “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

28. H. Zhang, C. Fang, X. Xie, et al., “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019). [CrossRef]  

29. L. Xiao, C. Fang, L. Zhu, et al., “Deep learning-enabled efficient image restoration for 3D microscopy of turbid biological specimens,” Opt. Express 28(20), 30234–30247 (2020). [CrossRef]  

30. H. Zhang, Y. Zhao, C. Fang, et al., “Exceeding the limits of 3D fluorescence microscopy using a dual-stage-processing network,” Optica 7(11), 1627–1640 (2020). [CrossRef]  

31. F. Zhao, L. Zhu, C. Fang, et al., “Deep-learning super-resolution light-sheet add-on microscopy (Deep-SLAM) for easy isotropic volumetric imaging of large biological specimens,” Biomed. Opt. Express 11(12), 7273–7285 (2020). [CrossRef]  

32. H. Zhao, O. Gallo, I. Frosio, et al., “Loss functions for neural networks for image processing,” arXiv, arXiv:1511.08861 (2015). [CrossRef]  

33. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

34. C. A. Werley, M. Chien, and A. E. Cohen, “Ultrawidefield microscope for high-speed fluorescence imaging and targeted optogenetic stimulation,” Biomed. Opt. Express 8(12), 5794–5813 (2017). [CrossRef]  

35. X. Wang, W. Zeng, X. Yang, et al., “Bi-channel image registration and deep-learning segmentation (BIRDS) for efficient, versatile 3D mapping of mouse brain,” eLife 10, e63455 (2021). [CrossRef]  

36. Y. Qi, T. Yu, J. Xu, et al., “FDISCO: Advanced solvent-based clearing method for imaging whole organs,” Sci. Adv. 5(1), eaau8355 (2019). [CrossRef]  

37. X. Y. Guo, “SLEM,” Github, 2023, https://github.com/XinyiGuo2023/SLEM.
