
Dual-wavelength multimodal multiphoton microscope with SMA-based depth scanning


Abstract

We report on a multimodal multiphoton microscopy (MPM) system with depth scanning. The multimodal capability is realized by an Er-doped femtosecond fiber laser with dual output wavelengths of 1580 nm and 790 nm that are responsible for three-photon and two-photon excitation, respectively. A shape-memory-alloy (SMA) actuated miniaturized objective enables the depth scanning capability. Image stacks combined with two-photon excitation fluorescence (TPEF), second harmonic generation (SHG), and third harmonic generation (THG) signals have been acquired from animal, fungus, and plant tissue samples with a maximum depth range over 200 µm.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a novel technique for non-invasive and label-free imaging, multiphoton microscopic imaging utilizes nonlinear excitation of fluorescence and scattering signals to visualize living tissues at high resolution with deep penetration [1–6]. It has shown its importance in medical fields such as cancer research, neurology, stomatology, cardiovascular disease diagnosis, and chronic disease diagnosis [4–10]. Apart from medical research and clinical uses, MPM is also utilized in studying the microstructure, biochemistry, and metabolism of plants, animals, and a variety of other living forms [11–17]. MPM also has the advantage of reduced phototoxicity compared with conventional confocal microscopy [2].

As the excitation beam is focused into the sample and nonlinear signals are generated only from the highly localized focal volume with high laser power density, MPM provides high resolution in both the XY plane and the axial (Z) direction [11,18]. Compact MPM endoscopy with 3D imaging capability is highly desirable for in vivo imaging to reveal detailed structures underneath the tissue surface and to differentiate tissue layers [11,19]. While XY scanning in MPM endoscopy has been reported using piezoelectric tube and microelectromechanical (MEMS) scanners [20–23], achieving depth scanning has been a challenge due to the requirements of compact size and precise actuation in the depth direction.

Motorized translation stages have been demonstrated in previous research but are considered unsuitable for clinical applications due to their bulky size [22]. Piezoelectric actuators have also been used to realize depth scanning [5,24,25]. In 2019, A. Dilipkumar et al. demonstrated a two-photon imaging system with electrically tunable lens (ETL) based depth scanning and a gradient index (GRIN) lens objective [5]. However, the numerical aperture (NA) of the GRIN lens objective was relatively low (NA = 0.38) and the axial resolution was over 30 µm, with a scanning range limited to 200 µm [5]. In addition, the ETL showed significantly deteriorated linearity and precision in focal position under high current (short focal length). Moreover, ETLs typically have a limited focusing power tuning range of 10 to 25 diopters [20], which limits their use in short-focal-length applications and prevents them from being integrated into the objective [25]. In 2019, A. Li et al. reported a two-photon imaging probe with depth scanning realized by a piezoelectric stage [20]. A GRIN lens with a relatively high NA of 0.8 was adopted, but the field of view (FOV) in the XY plane was limited to ∼120 µm and the penetration was also limited to 200 µm [20]. The piezoelectric stage (PP-18, Micronix) has a size of 22×17×10 mm³ (manufacturer specification), and mounting the lens makes the module even larger. Due to their small electrically induced strains, piezoelectric actuators require displacement-magnification configurations such as cantilever-type benders, which occupy space [24]. Therefore, piezoelectric actuators with a sufficient travel range are relatively bulky (or long in the axial dimension).

Shape-memory-alloy (SMA) actuators can potentially overcome these limitations when utilized for depth scanning. In an SMA actuator, current is applied to an SMA wire, where Joule heating causes a phase transformation of the malleable crystalline structure, and the movement of the actuator is determined by the equilibrium between the SMA wire and a counterforce (e.g., provided by a spring) [26,27]. Compared with piezo-based or ETL-based scanning, SMA actuators provide reduced device size and improved scanning range at a lower cost [28,29]. SMA actuators also have the advantage of operating at very low voltage (<5 V) with a relatively fast response speed (on the order of 10–100 ms). In 2010, Y. Wu et al. presented an SMA-based depth scanner with an actuation range of 150 µm [30]. However, the scanner was operated open-loop, the accuracy was limited to ∼10 µm, and a one-way full-range scan took ∼1 s [30]. In 2017, A. Li et al. demonstrated a closed-loop controlled SMA depth scanner for MPM [31]. In this design, they improved the travel range to 490 µm. However, the scanning speed declined significantly under closed-loop control, requiring ∼10 s to reach the target position [31]. In previous research, the trade-off between speed and accuracy has limited the development of SMA-actuated depth-scanning MPM systems, and a faster miniaturized closed-loop SMA actuator would increase the potential of SMA actuators in this application.

On the other hand, multimodal imaging is critical for achieving label-free MPM imaging because different nonlinear effects such as TPEF, SHG, 3PEF (three-photon excitation fluorescence), and THG reveal different biochemical properties and tissue structures [32–37]. TPEF signals are generated from intrinsic fluorophores such as nicotinamide adenine dinucleotide (NADH), flavin adenine dinucleotide (FAD), and elastin [38]. SHG signals are generated from ordered non-centrosymmetric structures such as collagen fibers [39]. 3PEF also excites fluorescence signals but with lower efficiency than TPEF. THG signals are related to heterogeneous boundaries such as aqueous-lipidic interfaces and air-aqueous interfaces [32,40]. In particular, adding THG to multimodal MPM provides important information on tissue structures that lack TPEF or SHG contrast, such as lipids and interfaces [41,42]. Therefore, by combining both two-photon (e.g., TPEF and SHG) and three-photon (e.g., THG) imaging, complementary information can be obtained from multiple channels and merged to form more informative images. Coherent anti-Stokes Raman scattering (CARS) is another type of nonlinear microscopy that can detect the vibrational signatures of molecules [43,44]. CARS has been utilized for non-invasive imaging of lipids in biological samples [3,43]. Compared with THG, CARS requires two laser sources for the pump and probe light, respectively.

Exciting both two-photon and three-photon signals is a challenge due to the involvement of multiple wavelengths. To prevent the potential THG signal from entering the ultraviolet range, a long-wavelength excitation laser (>1200 nm) is typically required [45]. In 2018, F. Akhoundi et al. demonstrated a multimodal MPM probe (for SHG, THG, and 3PEF) with an excitation wavelength of 1700 nm [46]. While SHG and THG images were acquired successfully from unstained samples and 3PEF images from stained samples, the 1700-nm laser was not able to excite intrinsic TPEF signals, and the system had no automatic depth-scanning capability. To excite common endogenous fluorophores in tissue for label-free TPEF imaging, a shorter excitation wavelength (<900 nm) is typically required [45,47]. Therefore, the scheme of dual excitation wavelengths has become an important approach to acquiring both intrinsic TPEF signals and three-photon signals. In 2018, A. Filippi et al. demonstrated multimodal label-free ex vivo MPM imaging using dual excitation wavelengths of 800 nm (for TPEF and SHG imaging) and 1200 nm (for THG imaging), but without depth scanning [48]. In 2020, our team reported a multimodal MPM system with dual excitation wavelengths for intrinsic TPEF, SHG, and THG imaging, but still without depth-scanning capability [32].

In this study, we demonstrate a compact MPM system with both depth scanning and multimodal imaging capabilities. The depth-scanning feature is enabled by a miniaturized SMA-based objective. The SMA actuator provides a relatively long scan range, high response speed, and accurate positioning in a miniature device. Our custom-designed depth-scanning objective balances the FOV and NA. The multimodal imaging capability is realized by the dual excitation wavelengths of 790 nm and 1580 nm from a single Er-doped fiber laser. The dual-wavelength excitation scheme ensures that intrinsic TPEF, SHG, and THG signals can all be acquired label-free from the sample. The compact all-fiber femtosecond excitation source further improves the mobility of this MPM system for potential clinical use. Image stacks of TPEF, SHG, and THG signals at various depths are acquired from several tissue samples to demonstrate the capability of multimodal and label-free imaging.

2. Experimental setup

Figure 1 shows the setup of the multimodal multiphoton microscopic imaging system. A home-built 1580 nm mode-locked Er-doped fiber laser with an average output power of ∼230 mW, a pulsewidth of ∼80 fs, and a repetition rate of 47 MHz is employed as the excitation light source [49]. A periodically poled MgO:LiNbO3 (PPLN) crystal (MSHG1550-0.5-0.3, Covesion) converts the 1580 nm pulses to 790 nm with an efficiency of ∼35%. The PPLN has a broad acceptance bandwidth of 42 nm, which maintains the pulsewidth at ∼80 fs for the 790 nm wavelength [49]. The 790 nm wavelength is utilized to excite TPEF and SHG signals while the 1580 nm wavelength is utilized to excite the THG signal. Convex lenses L1 and L2, with focal lengths of fL1 = fL2 = 7.5 mm, serve to focus the laser beam into the PPLN and to collimate the beam exiting the PPLN, respectively. Two achromatic lenses, L3 and L4, serve as the scan lens and the tube lens of the microscope, respectively. The scan lens has a focal length of fL3 = 25 mm and the tube lens has a focal length of fL4 = 50 mm. A gold-plated bonded MEMS mirror (13Z2.1-2400, Mirrorcle Technologies) with a diameter of Φ = 2.4 mm is employed for XY scanning. The overall size of the MEMS module is ∼15×20×5 mm³. The MEMS mirror has a resonance frequency of 1.6 kHz. For scanning a 2D frame of 512×512 pixels, a maximum speed of ∼4 FPS can be achieved at a pixel dwell time of 1 µs. However, to increase the signal-to-noise ratio (SNR), the pixel dwell time in the experiments is set to 10 µs, which results in a scanning speed of ∼0.4 FPS.
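
As a minimal back-of-the-envelope check of the quoted frame rates, the short Python sketch below computes the frame time from the pixel count and dwell time; the function name is illustrative and MEMS turnaround/flyback overhead is ignored, so real rates are slightly lower.

```python
# Frame-rate estimate from pixel count and dwell time (illustrative only;
# MEMS turnaround/flyback overhead is ignored).
def frame_rate_fps(pixels_x: int = 512, pixels_y: int = 512,
                   dwell_time_s: float = 1e-6) -> float:
    frame_time_s = pixels_x * pixels_y * dwell_time_s  # seconds per frame
    return 1.0 / frame_time_s

print(f"{frame_rate_fps(dwell_time_s=1e-6):.1f} FPS")   # ~3.8 FPS at 1 us dwell
print(f"{frame_rate_fps(dwell_time_s=10e-6):.2f} FPS")  # ~0.38 FPS at 10 us dwell
```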

Fig. 1. Setup of the multimodal MPM imaging system. L – lens; PPLN – periodically poled MgO:LiNbO3; F – filter; MEMS – microelectromechanical system; OBJ – objective lens pack; MMF – multimode fiber; DBS – dichroic beam splitter; PMT – photomultiplier tube; HV – high voltage; DAQ – data acquisition; PC – personal computer; SMA – shape memory alloy; PCIe – peripheral component interconnect express; I2C – inter-integrated circuit.

A custom SMA-actuated depth-scanning objective, consisting of an aspheric lens (352080, LightPath) and a plano-convex lens (45469, Edmund Optics), is designed to minimize spherical aberration [50]. Figure 2 shows the objective lens layout and a Zemax simulation of the root-mean-square (RMS) wavefront error versus field of view. The aspheric lens has an effective focal length of fAL = 3.9 mm and an NA of 0.55. The plano-convex lens has an effective focal length of fCL = 18 mm and an NA of 0.17. The effective focal length of the compound objective is fobj = 3.6 mm and the NA is calculated by Zemax to be 0.53. At both 790 nm and 1580 nm, the RMS wavefront error remains below the diffraction limit within the ∼0.15 mm (one-sided) field of view. During depth imaging, the aspheric lens is actuated by the SMA actuator, and the sample is immersed in water. The depth-scanning SMA actuator is controlled by an Arduino microcontroller (Mega 2560, Arduino) through the I2C communication protocol.

Fig. 2. Design of the objective. (a) Optical layout; (b) Zemax simulation of RMS wavefront error vs. field of view. BM – laser beam; AL – aspheric lens; AR – air; CL – plano-convex lens; WT – water (water is used as a sample in simulation); DL – diffraction-limited RMS wavefront error.

Emitted signals from the focal volume in the sample are collected in the backward direction through the objective and separated from the excitation beam path by a single-edge dichroic beam splitter (FF665-Di02, Semrock) with an edge wavelength of 665 nm. A multimode fiber with a diameter of 1.5 mm is employed to collect the signals. Another dichroic beam splitter (FF414-Di01, Semrock) separates the TPEF and SHG signals before they reach two photomultiplier tubes (PMTs). TPEF and SHG signals are acquired simultaneously by PMT1 (H9305-03, Hamamatsu) and PMT2 (H6780-20, Hamamatsu), respectively, under 790 nm excitation. The THG signal is acquired serially by PMT1, under 1580 nm excitation, after the two-photon imaging. Signals from the PMTs are delivered to two photon-counting units. A data acquisition (DAQ) board (PCIe-6363, National Instruments) is utilized to control the scanning of the MEMS mirror and to receive signals from the counting units. The DAQ module communicates with a PC through a PCIe slot. The PC runs a home-developed Visual C++ program that controls the DAQ module, communicates with the Arduino microcontroller (which controls the SMA actuator) through a serial port, and handles image formation and storage. The control and signal flowchart is also shown in Fig. 1.

3. SMA-based depth scanning

This section explains the SMA-enabled depth scanning. Figure 3 shows the engineering design and assembly of the SMA-actuated objective lens unit. The aspheric lens of the objective is attached to an SMA actuator (ASA10080-101, Actuator Solutions) which allows a 370 µm translation of the lens. The plano-convex lens is kept stationary, which ensures stable and safe positioning of the tissue. Immersion water is applied between the plano-convex lens and the tissue surface to reduce sample-induced aberration. When light is focused from an objective lens into a sample, the refractive index of the lens immersion medium (e.g., air for the aspheric lens) is different from that of the sample, which causes light refraction at the interface. Thus, a correction factor is needed to convert the actuation of the aspheric lens into the translation of the focal plane inside the tissue [51]. The correction factor is measured experimentally to be ∼1.2 by comparing the distance travelled by the aspheric lens with the distance moved by the sample plane controlled by a piezo stage. The experimental measurement matches the correction factor obtained from Zemax simulation.
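
As a minimal sketch of how such a calibration could be applied in software, the snippet below scales a commanded lens displacement by the measured factor of ∼1.2 to estimate the focal-plane shift in the sample; the function name and the assumption that the factor multiplies the lens travel linearly are illustrative, not the exact convention used in the system.

```python
# Illustrative conversion from aspheric-lens travel to focal-plane shift in
# the sample, using the experimentally measured correction factor (~1.2).
# A simple linear, multiplicative relation is assumed here for illustration.
CORRECTION_FACTOR = 1.2

def focal_plane_shift_um(lens_travel_um: float) -> float:
    return CORRECTION_FACTOR * lens_travel_um

print(focal_plane_shift_um(10.0))  # a 10 um lens step -> ~12 um focal-plane shift
```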

Fig. 3. Construction of the SMA-actuated objective lens pack. AC – actuator; AD – actuator driving component; LH – lens holder; AL – aspheric lens; CL – plano-convex lens; OC – objective chassis; EP – epoxy; double-headed arrows indicate moving parts and directions.

In Fig. 3, the aspheric lens is mounted in a 3D-printed lens holder that is attached to the SMA actuator’s ring-shaped driving component. The SMA actuator is mounted in a 3D-printed objective chassis which also holds the plano-convex lens in place. All 3D-printed components are fabricated with a laser stereolithography printer (Form 3, Formlabs) using photopolymer resin (Clear V4, Formlabs) at a layer thickness of 25 µm. Mounted components are bonded using epoxy (EP) (DP490, 3M) after alignment. The physical dimensions of the SMA actuator (including the mounted aspheric lens) and the whole objective unit are 10×10×4 mm³ and 12×12×7 mm³, respectively.

A laser Doppler vibrometer (LDV) (OFV-5000/OFV-551 with displacement decoder DD-200, Polytec) is utilized to record the actuation trajectory of the SMA actuator. Figure 4(a) shows a fast depth scan over the maximum travel distance of 370 µm at a step size of 37 µm and 0.55 s per step. It shows that the SMA actuator can scan over a long travel range at a relatively fast speed while maintaining a uniform actuation motion. Figure 4(b) shows the actuator movement with a large step size of 200 µm and a long cycle duration of 30 s, which demonstrates that the actuator can perform large steps and hold a stable position when a long integration time is needed. The precise and fast scanning of the SMA actuator is enabled by closed-loop control, where the position information is fed back by a Hall sensor integrated inside the actuator. Figure 4(c) shows the Hall-sensor-detected position versus the command position for forward and backward scanning over a travel distance of 370 µm. The Hall sensor data indicate relatively high linearity and precision in the actuator positioning. Due to the closed-loop control, the hysteresis of the SMA actuator is also significantly reduced. Based on the LDV movement test shown in Fig. 4(a), the hysteresis is calculated to be ∼0.86 µm within the full range of 370 µm. Based on the Hall sensor data shown in Fig. 4(c), the hysteresis is calculated to be 0.21 µm for the same 370 µm range. The stability, defined as the root-mean-square deviation (RMSD) between the measured and command positions over time, is calculated to be 0.19 µm.
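
As a minimal sketch of how these figures could be computed from recorded position traces, the snippet below implements two metrics: the stability follows the RMSD definition given above, while the hysteresis metric (largest forward-versus-backward difference at matched command positions) is an assumed definition, since the exact formula used for the LDV and Hall-sensor data is not spelled out.

```python
import numpy as np

def hysteresis_um(pos_forward, pos_backward):
    # Assumed metric: largest difference between the forward and backward
    # position traces sampled at the same command positions.
    return float(np.max(np.abs(np.asarray(pos_forward) - np.asarray(pos_backward))))

def stability_rmsd_um(pos_measured, pos_command):
    # Stability as defined in the text: RMS deviation between the measured
    # and command positions over time.
    d = np.asarray(pos_measured) - np.asarray(pos_command)
    return float(np.sqrt(np.mean(d ** 2)))
```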

Fig. 4. Performance characterization of the SMA actuator. (a) LDV measurement of the actuator movement in the full range of 370 µm; (b) LDV measurement of the actuator movement with a large step size of 200 µm and a long cycle duration of 30 s; (c) Actuator position measured by Hall sensor vs. command position under closed-loop control.

Figures 5(a) and 5(b) show the response curves measured by the Hall sensor for forward and backward actuation, respectively, with different command travel distances. The Hall sensor has a dynamic sampling rate of up to 200 Hz. The measurement is taken with the lens mounted, as the weight influences the response time. For the full range of 370 µm, the response time is ∼150 ms for actuating both forward and backward. The response time decreases with the travel distance, and for mid-to-small steps below 50 µm it is within 20 ms. For comparison, the response time can also be obtained from the LDV data; it is measured to be 148 ms and 24 ms for the 370-µm and 50-µm ranges, respectively. Table 1 summarizes the actuator performance measured by the LDV and the Hall sensor, respectively. Both measurements show very consistent results. Compared with the ETL-based depth scanning used in previous research, which showed an axial resolution over 30 µm [5], the SMA-actuated aspheric lens provides higher resolution as well as higher linearity and precision in positioning. Compared with the piezoelectric solution in previous research, which has a dimension of 22×17×10 mm³ plus the lens [20], our SMA-actuated scanner is significantly smaller, which is important for clinical imaging probes.

Fig. 5. Actuator response time measured with the lens mounted. (a) Actuating forward; (b) Actuating backward.


Table 1. Performance characterization of the SMA Actuator

4. Characterization of field of view and resolution

The FOV in the XY plane is related to the focal lengths of the objective, scan lens, and tube lens, as well as the scanning angle of the MEMS mirror. According to the geometric relationship between the components, it can be calculated by the following equation:

$$FOV = 2{f_{\textrm{obj}}}\frac{{{f_{\textrm{scan}}}\tan \theta }}{{{f_{\textrm{tube}}}}}$$
where fobj, fscan, and ftube are the focal lengths of the objective, scan lens, and tube lens, respectively; and θ is the maximum angle of the beam relative to the optical axis after the MEMS mirror. Because the MEMS mirror scans fast in the horizontal direction, a sinusoidal driving signal is adopted for its horizontal scanning to avoid the high-frequency Fourier components of a triangle or sawtooth wave. The maximum scanning angles in the horizontal and vertical directions are ±2.3° and ±3.5°, respectively. Given that fobj = 3.6 mm, fscan = 25 mm, and ftube = 50 mm, the FOV is calculated to be 145×220 µm². Experimentally, the FOV in the XY plane is measured to be ∼144×224 µm² by imaging a striped microscope target (200 lines/mm). The measured result is consistent with the calculated FOV. The system records 512×512 matrices as the raw data, and after correcting the non-uniformity of the horizontal sinusoidal scanning, the formed images show a rectangular FOV. The resulting pixel resolution is 328×512, which, according to the Nyquist-Shannon sampling theorem [52], is adequate to present images with a FOV of 144×224 µm² at an XY resolution of 0.88 µm.
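
For reference, Eq. (1) can be evaluated directly for the two scan directions; the short sketch below (a worked example, with an illustrative function name) reproduces the quoted FOV values from the stated focal lengths and MEMS scan angles.

```python
import math

# Worked evaluation of Eq. (1): FOV = 2 * f_obj * f_scan * tan(theta) / f_tube
def fov_um(f_obj_mm: float, f_scan_mm: float, f_tube_mm: float,
           half_angle_deg: float) -> float:
    theta = math.radians(half_angle_deg)
    return 2.0 * f_obj_mm * f_scan_mm * math.tan(theta) / f_tube_mm * 1e3  # in um

f_obj, f_scan, f_tube = 3.6, 25.0, 50.0
print(fov_um(f_obj, f_scan, f_tube, 2.3))  # horizontal: ~145 um
print(fov_um(f_obj, f_scan, f_tube, 3.5))  # vertical:   ~220 um
```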

Based on the assumption that the back aperture of the objective lens is uniformly illuminated and the focus is diffraction-limited, the theoretical resolution limit can be estimated by the following equations, obtained by calculating the illumination point spread function (PSF) [4,53]:

$${R_{\textrm{xy2P}}} = \sqrt {2\ln 2} \frac{{0.320\lambda }}{{NA}}$$
$${R_{\textrm{xy3P}}} = 2\sqrt {\frac{{\ln 2}}{3}} \frac{{0.320\lambda }}{{NA}}$$
$${R_{\textrm{z2P}}} = \sqrt {2\ln 2} \frac{{0.532\lambda }}{{n - \sqrt {{n^2} - N{A^2}} }}$$
$${R_{\textrm{z3P}}} = 2\sqrt {\frac{{\ln 2}}{3}} \frac{{0.532\lambda }}{{n - \sqrt {{n^2} - N{A^2}} }}$$
where Rxy2P, Rz2P, Rxy3P, and Rz3P are the resolutions in the XY plane and in the axial (Z) direction for two-photon imaging and three-photon imaging, respectively. Here λ is the excitation wavelength and n is the refractive index of the medium. Given that NA = 0.53, λ = 0.79 µm for two-photon imaging, λ = 1.58 µm for three-photon imaging, and n = 1.333 for water, the theoretical resolution values can be calculated as Rxy2P = 0.56 µm, Rz2P = 4.50 µm, Rxy3P = 0.92 µm, and Rz3P = 7.35 µm.
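
The theoretical values quoted above follow directly from Eqs. (2)–(5); as a worked example, the sketch below evaluates them for the stated NA, wavelengths, and refractive index (function names are illustrative).

```python
import math

def prefactor(order: int) -> float:
    # sqrt(2 ln2) for two-photon excitation, 2*sqrt(ln2/3) for three-photon
    return math.sqrt(2 * math.log(2)) if order == 2 else 2 * math.sqrt(math.log(2) / 3)

def res_xy_um(wavelength_um: float, na: float, order: int) -> float:
    return prefactor(order) * 0.320 * wavelength_um / na                       # Eqs. (2), (3)

def res_z_um(wavelength_um: float, na: float, n: float, order: int) -> float:
    return prefactor(order) * 0.532 * wavelength_um / (n - math.sqrt(n**2 - na**2))  # Eqs. (4), (5)

NA, n = 0.53, 1.333
print(res_xy_um(0.79, NA, 2), res_z_um(0.79, NA, n, 2))  # ~0.56 um, ~4.50 um (two-photon)
print(res_xy_um(1.58, NA, 3), res_z_um(1.58, NA, n, 3))  # ~0.92 um, ~7.35 um (three-photon)
```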

Experimentally, the PSF can be measured by imaging a point object [54–56]. The resolution of two-photon imaging is measured from fluorescent carboxylate microspheres with a diameter of ∼0.1 µm (Fluoresbrite YG, Polysciences) embedded in an agarose phantom. The resolution of THG imaging is determined by measuring a waveguide wire (with a width and height of ∼500 nm) on a silicon photonic chip. Figure 6 shows the experimentally measured full-width-half-maximum (FWHM) resolutions of the MPM system. The XY and Z resolutions of two-photon (TPEF) imaging are measured to be 0.9 µm and 10.1 µm, respectively, and those of THG imaging are 1.4 µm and 15.3 µm, respectively. Table 2 summarizes the theoretical and measured FOV and resolutions of the MPM system. In both the XY plane and the axial direction, the resolution of three-photon imaging is about 50% worse than that of two-photon imaging because the excitation wavelength is twice as long.

Fig. 6. Experimentally measured full-width-half-maximum (FWHM) resolution of the MPM system. (a) TPEF resolution in the XY plane; (b) TPEF resolution in the Z (depth) direction; (c) THG resolution in the XY plane; (d) THG resolution in the Z (depth) direction.


Table 2. Measured FOV and resolution of the MPM system

The experimentally measured resolution does not reach the diffraction-limited resolution calculated by Eqs. (2)–(5). One major reason is that the theoretical calculation assumes the objective is uniformly illuminated (overfilled), whereas experimentally, due to the small MEMS mirror, the excitation beam diameter is limited and the back aperture of the objective lens is not fully illuminated with the 25-mm scan lens. To verify this, the scan lens is replaced by another lens with a focal length of fL3 = 16 mm. The 16-mm scan lens expands the beam diameter ∼1.6 times more than the 25-mm scan lens, better filling the back aperture of the objective lens for a higher effective NA. Using the 16-mm scan lens, the XY and Z resolutions of two-photon imaging are improved to 0.7 µm and 8.9 µm, respectively, and those of THG imaging are improved to 1.1 µm and 14.8 µm, respectively. This brings the resolution closer to the theoretical limit but at the cost of a reduced FOV, as per Eq. (1). With the 16-mm scan lens, the FOV in the XY plane is measured to be ∼93×145 µm², a reduction of ∼58% in area compared with the 25-mm scan lens. Other possible factors for the difference between the experimental and theoretical resolution include chromatic aberration and astigmatism. The excitation laser has relatively broad bandwidths of ∼118 nm (–10 dB) at 1580 nm and ∼30 nm (–10 dB) at 790 nm [49]. Since the objective is not designed to be achromatic, different wavelengths within these bands can be focused at slightly different depths, making chromatic aberration a contributor to the resolution limitation. Astigmatism caused by alignment errors is also considered a potential factor that influences the resolution.
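
The ∼1.6× beam expansion and the corresponding FOV reduction follow from the scan-lens/tube-lens magnification and Eq. (1); the sketch below makes the scaling explicit (variable names are illustrative, and the measured 144×224 µm² FOV is taken from the text).

```python
# Swapping the 25-mm scan lens for a 16-mm one: the beam expansion through the
# scan-lens/tube-lens telescope scales as f_tube / f_scan, while the FOV of
# Eq. (1) scales proportionally to f_scan.
f_tube = 50.0
f_scan_old, f_scan_new = 25.0, 16.0

expansion_gain = (f_tube / f_scan_new) / (f_tube / f_scan_old)
print(expansion_gain)                      # ~1.56, i.e. ~1.6x larger beam diameter

fov_old = (144.0, 224.0)                   # measured FOV with the 25-mm scan lens (um)
fov_new = tuple(v * f_scan_new / f_scan_old for v in fov_old)
print(fov_new)                             # ~(92, 143) um, close to the measured 93x145 um
area_reduction = 1 - (f_scan_new / f_scan_old) ** 2
print(area_reduction)                      # ~0.59, i.e. the FOV area shrinks by ~58%
```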

5. Image processing method for weak signals in multimodal MPM

In multimodal MPM imaging, the SNR can differ greatly among the modalities (channels). Due to differences in biochemical composition and structural properties, the signal intensities of the TPEF, SHG, and THG channels vary significantly among tissues. Some samples generate very weak signals on certain channels, even though those channels may still reveal important tissue features. Compared with two-photon imaging, the THG signal is especially weak in most animal tissues due to the much lower efficiency of the three-photon nonlinear effect. A higher-power laser can improve the SNR of the THG channel, but at the cost of increased thermal impact on the tissue. To extract and present the information carried by an ultra-weak channel, image post-processing is required to improve the image contrast.

When merging the different channels of a multimodal image, the features of a weak channel can be hard to see. Simply brightening the weak signal also brightens its noise and degrades the SNR of the merged image. Common contrast-enhancement and denoising algorithms are found to be largely ineffective at enhancing feature brightness while reducing noise in low-SNR images of various samples. Therefore, combining a low-SNR channel into the multimodal image, while still presenting its weak features and limiting the impact on the SNR of the other channels, is a key challenge in the image post-processing of multimodal MPM.

To effectively process images with ultra-low signal and SNR, a custom denoising algorithm is developed. First, a global average of the pixel intensity over the entire image is obtained. Second, a kernel size (e.g., 20×20) is selected, which can be set according to the spatial features of the image. Third, for each pixel, a locally averaged pixel intensity is calculated within the kernel around that pixel. Fourth, the local average is compared with the global average and a denoising factor is obtained based on the difference between these two values. A pixel with a local average intensity significantly lower than the global average is considered to have a high chance of being a noise pixel; thus, the lower the local average, the smaller the denoising factor that is applied. Finally, the intensity of the targeted pixel is adjusted by multiplying it by the denoising factor.
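
A minimal Python sketch of the described procedure is given below. The kernel averaging is implemented with scipy.ndimage.uniform_filter, and because the exact mapping from the local/global difference to the denoising factor is not specified above, a clipped ratio is used here as an illustrative assumption rather than the exact formula of the algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_denoise(image: np.ndarray, kernel_size: int = 20) -> np.ndarray:
    """Sketch of the described denoising. Pixels whose local (kernel) average
    falls below the global average are attenuated; bright signal regions
    (local average >= global average) are left unchanged. The clipped-ratio
    mapping is an illustrative choice, not the exact published formula."""
    img = image.astype(np.float64)
    global_mean = img.mean()                                   # step 1: global average
    local_mean = uniform_filter(img, size=kernel_size,
                                mode="reflect")                # steps 2-3: kernel average
    factor = np.clip(local_mean / max(global_mean, 1e-12),
                     0.0, 1.0)                                 # step 4: denoising factor
    return img * factor                                        # step 5: rescale pixels
```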

Figure 7 shows the comparison between ultra-low SNR images before processing, processed by the ImageJ outlier-removal denoising, and processed by the custom denoising algorithm. The SNR is calculated and shown in each image in Fig. 7. To quantify the SNR, the signal level is defined as the global average intensity of the entire image, and the noise level is defined as the local average intensity of a small area in the background region (determined by the lowest kernel average). When processed by our algorithm, the noise is reduced while the signal representing sample structures is preserved at a strength similar to that of the raw images; as a result, the SNR is significantly improved. In contrast, regular outlier-removal denoising fails to increase the SNR. With the improved SNR, features in the low-signal channel can be presented in the merged multimodal images without introducing excessive noise.
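
For completeness, the SNR metric described above can be computed as sketched below; whether the ratio is reported linearly or in decibels is not stated, so the linear ratio is returned here as an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def snr_estimate(image: np.ndarray, kernel_size: int = 20) -> float:
    """Signal = global mean intensity of the image; noise = lowest local
    (kernel) mean, taken as the background level. Linear ratio (assumed)."""
    img = image.astype(np.float64)
    signal = img.mean()
    noise = uniform_filter(img, size=kernel_size, mode="reflect").min()
    return float(signal / max(noise, 1e-12))
```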

Fig. 7. Ultra-low SNR images before and after denoising.

In our algorithm, the intensity of a targeted pixel is adjusted by a denoising factor that depends on its local average. For bright signal regions, the denoising factor is set to 1. The lower the local average (i.e., the higher the chance of being noise), the smaller the applied denoising factor. Thus, the algorithm effectively reduces the noise while preserving the brightness of the signal regions. By choosing a proper kernel size, we can also preserve the spatial features of the tissue image. The algorithm is effective in enhancing the contrast of the weak channel and thus improving the visualization of the tissue structure. However, for quantitative imaging, the processing should avoid altering the strength of the signals, and calibration approaches may be necessary to relate the relative strengths of the multiple channels.

6. Results and discussions

To demonstrate the capability of multimodal MPM imaging with depth scanning, various animal, fungus, and plant tissue samples are imaged. Three channels of contrast signals are acquired, where TPEF and SHG signals are excited by 790 nm pulses and the THG signal is excited by 1580 nm pulses. Depth scanning is achieved by actuating the objective, and the penetration depth is limited by signal attenuation due to light absorption and scattering in tissue. The SMA-actuated objective acquires image stacks with a step size of 4 µm, and the results typically show ∼250 µm penetration into soft tissues and ∼150 µm penetration into bone tissue. The laser power applied to the samples is ∼30 mW and ∼60 mW for the 790-nm and 1580-nm wavelengths, respectively. There is a focal shift of ∼80 µm between the two- and three-photon raw images due to the different excitation wavelengths, and it is corrected when merging the channels. The TPEF, SHG, and THG channels are color-coded in red, green, and blue, respectively, in the combined multimodal images.

Figure 8 shows images acquired from animal tissues (mouse ear pinna, fish skin, and mouse femur) at different depths. The images show different layer structures in both the skin and bone tissues, with complementary information from the three channels differentiating a variety of internal structures and substances. In Figs. 8(a)–8(c), the epidermal cellular layer can be identified in the mouse ear pinna images as small honeycomb structures (diameter of ∼8 µm) near the tissue surface. The dark center indicates the nucleus, and the TPEF signals mainly come from the surrounding cytoplasm and intercellular substances. Large honeycomb structures (diameter of ∼27 µm) can be observed starting from 70–90 µm depth with TPEF and THG signals; these could be cartilage. SHG signals, mainly from collagen, are also observed. In Fig. 8(d), the fish skin shows SHG signals with a high intensity that indicates a high concentration of collagen fibers. Lipid-like circular structures showing TPEF signals in the center and THG signals at the boundary are observed between depths of ∼50–200 µm. In Figs. 8(e) and 8(f), the lacuna-canalicular framework can be observed in the mouse femur images. For the animal tissue images shown in Fig. 8, the signal level and SNR of the THG images are much lower than those of the two-photon images, so the THG images are averaged over 3 frames to improve the SNR.

Fig. 8. Multimodal MPM images from animal tissues at different depths. (a)–(c) Mouse ear pinna; (d) fish skin; (e) and (f) mouse femur. THG images are averaged over 3 frames; TPEF and SHG images are not averaged. Red – TPEF; Green – SHG; Blue – THG. Scale bar is 50 µm.

Figure 9 shows images acquired from fungus and plant tissues: the lamella (gill) of Amanita muscaria (commonly known as fly agaric), the lamina of an Acer rubrum (commonly known as red maple) leaf, and a leaf midrib. The fungus and plant samples in Fig. 9 provide mainly TPEF signals, but with some important features differentiated by the THG channel. Unlike animal tissues, which have abundant collagen fibers, only very weak SHG signals are obtained from fungus and plant tissues. Lamellae are the structures that contain the spores of the mushroom [57,58], and in Fig. 9(a) spores are differentiated from the lamella tissue by the THG signal and appear as round or oval-shaped blue structures with diameters of around 6 to 10 µm. Under the described excitation wavelengths, the mushroom spores show relatively strong THG signals and only THG signals. While the background lamella tissue shows mainly TPEF signals, the different structures are effectively differentiated by the multimodal capability of the MPM system. Stomata of the Acer rubrum leaf can be seen in Fig. 9(b), with strip structures underneath showing THG signals in part of the FOV. In Fig. 9(c), cell walls showing TPEF signals as well as some subcellular features can be clearly seen in the leaf midrib. Honeycomb structures can be identified in deeper layers, with various detailed structures inside showing TPEF, SHG, and THG signals. THG signals from the fungus and leaves are stronger than those from some of the animal tissues, and the images shown in Fig. 9 are not frame-averaged.

Fig. 9. Multimodal MPM images from fungus and plant tissue samples at different depths. (a) Lamella (gill) of Amanita muscaria (fly agaric) mushroom; (b) lamina of Acer rubrum (red maple) leaf; (c) leaf midrib. All TPEF, SHG, and THG images are not averaged. Red – TPEF; Green – SHG; Blue – THG. Scale bar is 50 µm.

The SNR decreases as the imaging plane moves deeper into the tissue, and images typically blur out when the penetration depth exceeds 250 µm for animal soft tissues or 150 µm for bone, fungus, or leaf tissues. Thanks to the custom water-dipping objective that minimizes spherical aberration, this penetration depth is approximately twice that obtained with a single aspheric lens working in air. Without tissue-clearing processes such as CUBIC and 3DISCO, our penetration is deeper than the 64–120 µm reported in recent studies of depth-scanning MPM systems [5,20,30]. Scattering of the excitation beam by the tissue is considered a major cause of focal-spot degradation deep inside the sample, which limits the penetration depth. For THG excitation, water absorption is another factor that can limit laser penetration, as the absorption coefficient of water around ∼1500 nm reaches ∼20 cm⁻¹, significantly higher than the ∼0.02 cm⁻¹ around ∼800 nm [59–61].

In future studies, a 1580 nm laser with higher power and a shorter pulsewidth may be employed to improve the THG image quality for animal tissue imaging. However, higher laser power comes at the cost of increased thermal damage risk, and a shorter pulsewidth leads to a broader bandwidth, which can worsen the chromatic aberration. This trade-off needs to be considered during optical design. Lowering the repetition rate in exchange for higher peak power is also likely beneficial for THG excitation [62] and is a potential approach for future improvement. Optics optimized for both the 790 nm and 1580 nm wavelengths may be developed in future work to further address the limitation of THG signal strength. As reported in several studies [63,64], adaptive optics may be used to correct aberrations from the microscope’s internal optics and the objective, as well as those induced by the tissue, further increasing resolution and imaging depth.

7. Conclusion

In this study, a fiber-laser-based compact multimodal MPM system with depth-scanning capability has been developed and demonstrated. Our home-built Er-doped femtosecond fiber laser with a PPLN provides excitation wavelengths of 1580 nm and 790 nm for three-photon and two-photon multimodal imaging, respectively. A custom-designed miniaturized objective incorporates an SMA actuator for depth scanning. Multimodal image stacks containing TPEF, SHG, and THG channels have been acquired from animal, fungus, and plant tissues. The image stacks cover a depth range of over 250 µm for animal soft tissues and 150 µm for animal bone, fungus, and plant tissues. Complementary and comprehensive information is obtained label-free thanks to the multimodal and depth-scanning capabilities. A variety of structures, such as the epidermal cellular layer and collagen fibers, the lacuna-canalicular framework in bone tissue, fungal spores, and the stomata, cell walls, and subcellular features of leaves, can be differentiated by multimodal imaging. Our miniaturized multimodal MPM system with the depth-scanning objective shows great potential for future MPM probe and endoscope designs for clinical use.

Funding

Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-05913, CHRP 508405-17); Canadian Institutes of Health Research (CPG-151974).

Acknowledgments

We thank the Centre for Disease Modeling and Animal Care Services at the University of British Columbia for providing the mouse tissue sample.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]  

2. M. Kaur, P. M. Lane, and C. Menon, “Endoscopic Optical Imaging Technologies and Devices for Medical Purposes: State of the Art,” Appl. Sci. 10(19), 6865 (2020). [CrossRef]  

3. R. Li, X. Wang, Y. Zhou, H. Zong, M. Chen, and M. Sun, “Advances in nonlinear optical microscopy for biophotonics,” J. Nanophotonics 12(3), 033007 (2018). [CrossRef]  

4. W. R. Zipfel, R. M. Williams, and W. W. Webb, “Nonlinear magic: multiphoton microscopy in the biosciences,” Nat. Biotechnol. 21(11), 1369–1377 (2003). [CrossRef]  

5. A. Dilipkumar, A. Al-Shemmary, L. Kreiß, K. Cvecek, B. Carlé, F. Knieling, J. G. Menezes, O. M. Thoma, M. Schmidt, M. F. Neurath, M. Waldner, O. Friedrich, and S. Schürmann, “Label-Free Multiphoton Endomicroscopy for Minimally Invasive In Vivo Imaging,” Adv. Sci. 6(8), 1801735 (2019). [CrossRef]  

6. P. T. C. So, C. Y. Dong, B. R. Masters, and K. M. Berland, “Two-photon excitation fluorescence microscopy,” Ann. Rev. Biomed. Eng. 2(1), 399–429 (2000). [CrossRef]  

7. M. C. Skala, J. M. Squirrell, K. M. Vrotsos, J. C. Eickhoff, A. Gendron-Fitzpatrick, K. W. Eliceiri, and N. Ramanujam, “Multiphoton microscopy of endogenous fluorescence differentiates normal, precancerous, and cancerous squamous epithelial tissues,” Cancer Res. 65(4), 1180–1186 (2005). [CrossRef]  

8. R. Aviles-Espinosa, S. I. Santos, A. Brodschelm, W. G. Kaenders, C. Alonso-Ortega, D. Artigas, and P. Loza-Alvarez, “Third-harmonic generation for the study of Caenorhabditis elegans embryogenesis,” J. Biomed. Opt. 15(4), 046020 (2010). [CrossRef]  

9. J. Kang, U. Kang, H. S. Nam, W. Kim, H. J. Kim, R. H. Kim, J. W. Kim, and H. Yoo, “Label-free multimodal microscopy using a single light source and detector for biological imaging,” Opt. Lett. 46(4), 892–895 (2021). [CrossRef]  

10. T. Boulesteix, A. M. Pena, N. Pagès, G. Godeau, M. P. Sauviat, E. Beaurepaire, and M. C. Schanne-Klein, “Micrometer scale Ex Vivo multiphoton imaging of unstained arterial wall structure,” Cytom. A 69A(1), 20–26 (2006). [CrossRef]  

11. K. König, “Multiphoton microscopy in life sciences,” J. Microsc. 200(2), 83–104 (2000). [CrossRef]  

12. M. J. Aragon, M. Wang, J. Shea, A. T. Mok, H. Kim, K. M. Lett, N. Barkdull, C. B. Schaffer, C. Xu, and N. Yapici, “Non-invasive multiphoton imaging of neural structure and activity in Drosophila,” bioRxiv:798686 (2019).

13. S. Lin, H. Tan, C. Kuo, R. Wu, S. Wang, W. Chen, S. Jee, and C. Dong, “Multiphoton autofluorescence spectral analysis for fungus imaging and identification,” Appl. Phys. Lett. 95(4), 043703 (2009). [CrossRef]  

14. R. Cisek, L. Spencer, N. Prent, D. Zigmantas, G. S. Espie, and V. Barzda, “Optical microscopy in photosynthesis,” Photosynth. Res. 102(2-3), 111–141 (2009). [CrossRef]  

15. B. A. Grubbs, N. P. Etter, W. E. Slaughter, A. M. Pittsford, C. R. Smith, and P. D. Schmitt, “A low-cost beam-scanning second harmonic generation microscope with application for agrochemical development and testing,” Anal. Chem. 91(18), 11723–11730 (2019). [CrossRef]  

16. M. Chen, G. Zhuo, K. Chen, P. Wu, T. Hsieh, T. Liu, and S. Chu, “Multiphoton imaging to identify grana, stroma thylakoid, and starch inside an intact leaf,” BMC Plant Biol. 14(1), 175 (2014). [CrossRef]  

17. E. Wild, J. Dent, G. O. Thomas, and K. C. Jones, “Visualizing the air-to-leaf transfer and within-leaf movement and distribution of phenanthrene: further studies utilizing two-photon excitation microscopy,” Environ. Sci. Technol. 40(3), 907–916 (2006). [CrossRef]  

18. W. Wu, Q. Liu, C. Brandt, and S. Tang, “Multimodal Multiphoton Microscope with Depth Scanning,” Proc. SPIE 11965, 119650J (2022). [CrossRef]  

19. E. E. Hoover and J. A. Squier, “Advances in multiphoton microscopy technology,” Nat. Photonics 7(2), 93–101 (2013). [CrossRef]  

20. A. Li, G. Hall, D. Chen, W. Liang, B. Ning, H. Guan, and X. Li, “A biopsy-needle compatible varifocal multiphoton rigid probe for depth-resolved optical biopsy,” J. Biophotonics 12(1), e201800229 (2019). [CrossRef]  

21. D. R. Rivera, C. M. Brown, D. G. Ouzounov, I. Pavlova, D. Kobat, W. W. Webb, and C. Xu, “Compact and flexible raster scanning multiphoton endoscope capable of imaging unstained tissue,” Proc. Natl. Acad. Sci. U.S.A. 108(43), 17598–17603 (2011). [CrossRef]  

22. G. Ducourthial, P. Leclerc, T. Mansuryan, M. Fabert, J. Brevier, R. Habert, F. Braud, R. Batrin, C. Vever-Bizet, G. Bourg-Heckly, L. Thiberville, A. Druilhe, A. Kudlinski, and F. Louradour, “Development of a real-time flexible multiphoton microendoscope for label-free imaging in a live animal,” Sci. Rep. 5(1), 18303 (2015). [CrossRef]  

23. Y. Zhao, M. Sheng, L. Huang, and S. Tang, “Design of a fiber-optic multiphoton microscopy handheld probe,” Biomed. Opt. Express 7(9), 3425–3437 (2016). [CrossRef]  

24. L. Chen, M. Ghilardi, J. J. C. Busfield, and F. Carpi, “Electrically tunable lenses: a review,” Front. Robot. AI 8, 678046 (2021). [CrossRef]  

25. M. Sato, Y. Motegi, S. Yagi, K. Gengyo-Ando, M. Ohkura, and J. Nakai, “Fast varifocal two-photon microendoscope for imaging neuronal activity in the deep brain,” Biomed. Opt. Express 8(9), 4049–4060 (2017). [CrossRef]  

26. D. E. Hodgson, M. H. Wu, and R. J. Biermann, “Shape Memory Alloys,” in ASM Handbook, Volume 2: Properties and Selection: Nonferrous Alloys and Special-Purpose Materials (ASM Handbook Committee, 1990), pp. 897–902.

27. J. M. Jani, M. Leary, and A. Subic, “Designing shape memory alloy linear actuators: A review,” J. Intell. Mater. Syst. Struct. 28(13), 1699–1718 (2017). [CrossRef]  

28. Z. Qiu and W. Piyawattanametha, “MEMS Actuators for Optical Microendoscopy,” Micromachines 10(2), 85 (2019). [CrossRef]  

29. J. M. Jani, M. Leary, A. Subic, and M. A. Gibson, “A review of shape memory alloy research, applications and opportunities,” Mater. Des. 56, 1078–1113 (2014). [CrossRef]  

30. Y. Wu, Y. Zhang, J. Xi, M. Li, and X. Li, “Fiber-optic nonlinear endomicroscopy with focus scanning by using shape memory alloy actuation,” J. Biomed. Opt. 15(6), 060506 (2010). [CrossRef]  

31. A. Li, W. Liang, H. Guan, Y. A. Gau, D. E. Bergles, and X. Li, “Focus scanning with feedback-control for fiber-optic nonlinear endomicroscopy,” Biomed. Opt. Express 8(5), 2519–2527 (2017). [CrossRef]  

32. L. Huang, X. Zhou, Q. Liu, C. E. MacAulay, and S. Tang, “Miniaturized multimodal multiphoton microscope for simultaneous two-photon and three-photon imaging with a dual-wavelength Er-doped fiber laser,” Biomed. Opt. Express 11(2), 624–635 (2020). [CrossRef]  

33. N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Débarre, P. Bourgine, and A. Santos, “Cell lineage reconstruction of early zebrafish embryos using label-free nonlinear microscopy,” Science 329(5994), 967–971 (2010). [CrossRef]  

34. B. Weigelin, G. J. Bakker, and P. Friedl, “Intravital third harmonic generation microscopy of collective melanoma cell invasion: principles of interface guidance and microvesicle dynamics,” IntraVital 1(1), 32–43 (2012). [CrossRef]  

35. S. Dietzel, J. Pircher, A. K. Nekolla, M. Gull, A. W. Brändli, U. Pohl, and M. Rehberg, “Label-free determination of hemodynamic parameters in the microcirculaton with third harmonic generation microscopy,” PLoS One 9(6), e99615 (2014). [CrossRef]  

36. S. Mehravar, B. Banerjee, H. Chatrath, B. Amirsolaimani, K. Patel, C. Patel, R. A. Norwood, N. Peyghambarian, and K. Kieu, “Label-free multi-photon imaging of dysplasia in Barrett’s esophagus,” Biomed. Opt. Express 7(1), 148–157 (2016). [CrossRef]  

37. E. Gavgiotaki, G. Filippidis, M. Kalognomou, A. A. Tsouko, I. Skordos, C. Fotakis, and I. Athanassakis, “Third harmonic generation microscopy as a reliable diagnostic tool for evaluating lipid body modification during cell activation: the example of BV-2 microglia cells,” J. Struct. Biol. 189(2), 105–113 (2015). [CrossRef]  

38. P. J. Campagnola and L. M. Loew, “Second-harmonic imaging microscopy for visualizing biomolecular arrays in cells, tissues and organisms,” Nat. Biotechnol. 21(11), 1356–1360 (2003). [CrossRef]  

39. P. J. Campagnola, A. Lewis, and L. M. Loew, “High-resolution nonlinear optical imaging of live cells by second harmonic generation,” Biophys. J. 77(6), 3341–3349 (1999). [CrossRef]  

40. Y. Barad, H. Eisenberg, M. Horowitz, and Y. Silberberg, “Nonlinear scanning laser microscopy by third harmonic generation,” Appl. Phys. Lett. 70(8), 922–924 (1997). [CrossRef]  

41. K. Harpel, R. D. Baker, B. Amirsolaimani, S. Mehravar, J. Vagner, T. O. Matsunaga, B. Banerjee, and K. Kieu, “Imaging of targeted lipid microbubbles to detect cancer cells using third harmonic generation microscopy,” Biomed. Opt. Express 7(7), 2849–2860 (2016). [CrossRef]  

42. N. V. Kuzmin, P. Wesseling, P. C. de Witt Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, “Third harmonic generation imaging for fast, label-free pathology of human brain tumors,” Biomed. Opt. Express 7(5), 1889–1904 (2016). [CrossRef]  

43. A. Volkmer, “Vibrational imaging and microspectroscopies based on coherent anti-Stokes Raman scattering microscopy,” J. Phys. D: Appl. Phys. 38(5), R59–R81 (2005). [CrossRef]  

44. T. B. Huff and J. X. Cheng, “In vivo coherent anti-Stokes Raman scattering imaging of sciatic nerve tissue,” J. Microsc. 225(2), 175–182 (2007). [CrossRef]  

45. C. Lefort, “A review of biomedical multiphoton microscopy and its laser sources,” J. Phys. D: Appl. Phys. 50(42), 423001 (2017). [CrossRef]  

46. F. Akhoundi, Y. Qin, N. Peyghambarian, J. K. Barton, and K. Kieu, “Compact fiber-based multi-photon endoscope working at 1700nm,” Biomed. Opt. Express 9(5), 2326–2335 (2018). [CrossRef]  

47. W. R. Zipfel, R. M. Williams, R. Christie, A. Y. Nikitin, B. T. Hyman, and W. W. Webb, “Live tissue intrinsic emission microscopy using multiphoton-excited native fluorescence and second harmonic generation,” Proc. Natl. Acad. Sci. U. S. A. 100(12), 7075–7080 (2003). [CrossRef]  

48. A. Filippi, E. D. Sasso, L. Iop, A. Armani, M. Gintoli, M. Sandri, G. Gerosa, F. Romanato, and G. Borile, “Multimodal label-free ex vivo imaging using a dual-wavelength microscope with axial chromatic aberration compensation,” J. Biomed. Opt. 23(9), 1–9 (2018). [CrossRef]  

49. L. Huang, X. Zhou, and S. Tang, “Optimization of frequency-doubled Er-doped fiber laser for miniature multiphoton endoscopy,” J. Biomed. Opt. 23(12), 126503 (2018). [CrossRef]  

50. C. Brandt, W. Wu, Q. Liu, and S. Tang, “Endoscopic MPM objective designed for depth scanning,” Proc. SPIE 11937, 1193706 (2022). [CrossRef]  

51. Y. Zhou, K. K. H. Chan, T. Lai, and S. Tang, “Characterizing refractive index and thickness of biological tissues using combined multiphoton microscopy and optical coherence tomography,” Biomed. Opt. Express 4(1), 38–50 (2013). [CrossRef]  

52. R. H. Webb and C. K. Dorey, “The Pixilated Image,” in Handbook of Biological Confocal Microscopy, J. B. Pawley, ed. (Springer, 1995), pp. 55–67.

53. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems, II. Structure of the image field in an aplanatic system,” Proc. R. Soc. Lond. A 253(1274), 358–379 (1959). [CrossRef]  

54. H. Yoo, I. Song, and D. G. Gweon, “Measurement and restoration of the point spread function of fluorescence confocal microscopy,” J. Microsc. 221(3), 172–176 (2006). [CrossRef]  

55. P. J. Shaw and D. J. Rawlins, “The point-spread function of a confocal microscope: its measurement and use in deconvolution of 3-D data,” J. Microsc. 163(2), 151–165 (1991). [CrossRef]  

56. R. Juškaitis and T. Wilson, “The measurement of the amplitude point spread function of microscope objective lenses,” J. Microsc. 189(1), 8–11 (1998). [CrossRef]  

57. D. Arora, “Basidiomycotina (Basidiomycetes),” in Mushrooms Demystified, 2nd ed. (Ten Speed Press, 1986), pp. 282–283.

58. D. Michelot and L. M. Melendez-Howell, “Amanita muscaria: chemistry, biology, toxicology, and ethnomycology,” Mycol. Res. 107(2), 131–146 (2003). [CrossRef]  

59. D. M. Wieliczka, S. Weng, and M. R. Querry, “Wedge shaped cell for highly absorbent liquids: infrared optical constants of water,” Appl. Opt. 28(9), 1714–1719 (1989). [CrossRef]  

60. M. Yildirim, N. Durr, and A. Ben-Yakar, “Tripling the maximum imaging depth with third-harmonic generation microscopy,” J. Biomed. Opt. 20(9), 096013 (2015). [CrossRef]  

61. R. C. Smith and K. S. Baker, “Optical properties of the clearest natural waters (200–800 nm),” Appl. Opt. 20(2), 177–184 (1981). [CrossRef]  

62. N. G. Horton, K. Wang, D. Kobat, C. G. Clark, F. W. Wise, C. B. Schaffer, and C. Xu, “In vivo three-photon microscopy of subcortical structures within an intact mouse brain,” Nat. Photonics 7(3), 205–209 (2013). [CrossRef]  

63. Z. Qin, C. Chen, S. He, Y. Wang, K. F. Tam, N. Y. Ip, and J. Y. Qu, “Adaptive optics two-photon endomicroscopy enables deep-brain imaging at synaptic resolution over large volumes,” Sci. Adv. 6(40), eabc6521 (2020). [CrossRef]  

64. C. Wang and N. Ji, “Characterization and improvement of three-dimensional imaging performance of GRIN-lens-based two-photon fluorescence endomicroscopes with adaptive optics,” Opt. Express 21(22), 27142–27154 (2013). [CrossRef]  
