
Video-rate full-ring ultrasound and photoacoustic computed tomography with real-time sound speed optimization

Open Access

Abstract

Full-ring dual-modal ultrasound and photoacoustic imaging provides complementary contrasts, high spatial resolution, and a full view angle, making it highly desirable in pre-clinical and clinical applications. However, two long-standing challenges hinder high-quality video-rate dual-modal imaging. One is the increased data-processing burden from the dense acquisition. The other is the object-dependent variation of the speed of sound, which may cause blurring, splitting artifacts, and low imaging contrast. Here, we develop video-rate full-ring ultrasound and photoacoustic computed tomography (VF-USPACT) with real-time optimization of the speed of sound. We improve the imaging speed by selective and parallel image reconstruction. We determine the optimal sound speed via co-registered ultrasound imaging. Equipped with a 256-channel ultrasound array, the dual-modal system can optimize the sound speed and reconstruct dual-modal images at 10 Hz in real time. The optimized sound speed effectively enhances the imaging quality for various sample sizes, types, and physiological states. In animal and human imaging, the system shows co-registered dual contrasts, high spatial resolution (140 µm), single-pulse photoacoustic imaging (< 50 µs), deep penetration (> 20 mm), full view, and adaptive sound speed correction. We believe VF-USPACT can advance many real-time biomedical imaging applications, such as vascular disease diagnosis, cancer screening, or neuroimaging.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Simultaneous ultrasound (US) and photoacoustic (PA) imaging extends the advantages of the single modalities by offering co-registered acoustic and optical contrasts [1–5]. Pulse-echo ultrasonography distinguishes anatomical structures based on their acoustic impedance mismatches and is safe, low cost, and easy to operate [6,7]. PA imaging provides optical absorption contrasts of different chromophores and achieves high-resolution imaging in deep tissue [8–16]. PA imaging has been widely exploited in pharmacokinetic studies [17–19], early tumor detection [20–24], and neuroimaging [25]. Dual-modal US and PA imaging inherits their advantages and offers high-resolution structural, functional, and molecular information. Recently, dual-modal US/PA imaging has demonstrated tremendous promise in the guidance of orthotopic-tumor therapy [26,27] and precise drug delivery [28].

Full-ring dual-modal US and PA imaging provides a full view, higher spatial resolution, and deeper penetration compared with linear-array or multi-segment-array imaging [29,30]. The cylindrically focused full-ring array transducer can also mitigate out-of-focus artifacts [29]. One remaining problem for the full-ring array is that the large element count and the special transducer geometry make it challenging to reconstruct dual-modal images in real time (Supplementary Table S1). To date, high-speed full-view US/PA imaging has not been thoroughly investigated. Another problem is the speed of sound (SoS). US and PA reconstructions require prior knowledge of the SoS, and even a small error in the SoS may distort and defocus the reconstructed image [31]. The SoS depends on many factors, such as the sample size, the temperature, and even the physiological status [32]. Thus, the accurate SoS value is often unknown and varies across objects. PA-feature-driven algorithms have been exploited to estimate an averaged SoS value or an SoS map [33–35]. The effectiveness of these methods may be degraded by sparse vessel features, out-of-plane artifacts, and optical heterogeneity. Furthermore, the iterative computation or operator intervention in these methods is often time-consuming, especially when cross-sections or samples change. Transmission-mode ultrasound computed tomography has been developed to map the SoS distribution [29]; however, the tomographic computation usually takes several minutes to hours per slice and is not suitable for real-time SoS compensation.

To achieve real-time SoS compensation in dual-modal US/PA imaging, we develop an efficient SoS optimization method and implement it in video-rate full-ring ultrasound and photoacoustic computed tomography (VF-USPACT). We use a 256-element ring-array transducer to realize full-view US/PA imaging. The US and PA modes are interleaved, and images are reconstructed at 10 Hz in real time. To speed up image reconstruction, we use a look-up table of the distance of flight (DoF), optimize the reconstruction region without compromising the resolution, and implement the computation on a graphics processing unit (GPU). To optimize the SoS in real time, we first reconstruct multiple US images using different estimated SoS values and then determine the matched SoS value from the maximal coherence factors of these US images. This method avoids iteration and can rapidly determine the optimal SoS value. Compared with PA-feature-driven methods, the US-based method is more robust because of the abundant US features and its immunity to optical heterogeneity. We demonstrate the VF-USPACT system in real-time dual-modal imaging of phantoms, small animals, and human subjects.

2. Dual-modal imaging platform

2.1 Experimental setup

The video-rate full-ring ultrasound and photoacoustic computed tomography (VF-USPACT) platform comprises five parts: (i) a nanosecond pulsed laser for PA excitation; (ii) a 1 × 4 optical fiber bundle; (iii) a customized 256-element ring-shaped US transducer; (iv) a linear scanner; and (v) a 256-channel US/PA data acquisition (DAQ) system. The laser source is a Q-switched Nd:YAG laser (Spectra-Physics, Santa Clara, CA, USA) that delivers 5–8-ns pulses at 20 Hz. Two wavelengths, 532 nm and 1064 nm, can be selected. The laser beam is coupled into the optical fiber bundle. The distal end of each branch is a 32 mm × 1 mm rectangle and is fixed near the US transducer array. To achieve uniform illumination, the distal end is oriented at about 60° relative to the imaging plane (Fig. 1(a)). To find the optimal illumination angle, we adjusted the fiber bundle to achieve the strongest PA signals (Supplementary Fig. S1). The ring-shaped transducer has 256 elements to provide full-view in-plane acoustic detection. The ring-array radius is 40 mm. Each element is 14 mm × 0.88 mm and is cylindrically focused in the elevational direction with an acoustic numerical aperture (NA) of 0.2 (Fig. 1(c)). The transducer has a central frequency of 6.25 MHz, a two-way (transmit and receive) bandwidth of 58.4%, and a one-way (receive) bandwidth of 76.8% (Supplementary Fig. S2). The US/PA DAQ system is a Vantage-256 from Verasonics Inc. The 256 channels are mapped one-to-one to the transducer elements for high-speed dual-modal imaging. Each channel has an independent amplifier with a tunable gain of up to 54 dB. Both the US and PA signals are sampled at 25 MHz with 14-bit resolution.


Fig. 1. Speed-of-sound-adaptive video-rate full-ring ultrasound and photoacoustic computed tomography (VF-USPACT) platform. (a) Layout of the experimental setup. (b) Close-up of the red dashed box region in (a), a photograph of the 3D-printed animal holder for trunk imaging. The animal's paws are secured to the holder. (c) Diagram of the 256-element cylindrically focused full-ring array transducer. (d) US transmit and receive sequence, based on sequential excitation of each element (Red dot) and parallel detection by 128 elements (Green dots). The acoustic simulation at the 1st position is plotted. The white solid line defines the reconstruction region for one transmission event. (e) Simulated acoustic focal field in the x-z plane. (f) Line profiles at the center (Red dashed line) and 5.5 mm off-center (Blue dashed line) in (e). (g) PA receiving sequence, in which all 256 elements detect in parallel after each laser pulse. The white box defines the reconstruction region. (h) Interleaved timing sequence for US/PA acquisition, image reconstruction, SoS correction, and laser trigger. One video-rate mode is 10-Hz US/PA imaging (Left shadowed area), and the other is single-shot US + 20-Hz PA imaging. AI, anesthesia inflow; AU, anesthesia unit; amp., amplitude; DAQ, data acquisition; FWHM, full width at half maximum; FB, fiber bundle; NA, numerical aperture; NIR, near-infrared; PA, photoacoustic; Rcv, receive; SR, support; SoS, speed of sound; Tx, transmit; TTH, transfer to host; TM, thermocouple; US, ultrasound; WT, water tank.


2.2 Data acquisition

In US imaging, we employed a synthetic transmit aperture (STA) approach to collect the US signals. The ring-array elements were sequentially excited with a 1-cycle sinusoidal pulse. Back-scattered US signals were detected in parallel by 128 elements without time delay. The 128 receiving elements are on the same side as the emitting element. The remaining 128 elements were electronically switched off to reduce interference. The transmit/receive (Tx/Rcv) matrix in Fig. 1(d) shows the element arrangement. In PA imaging, all 256 elements simultaneously receive acoustic signals within ∼50 µs after each laser pulse (Fig. 1(g)). We use a wavelength of 1064 nm because of its high pulse energy, reduced scattering, and relatively low background absorption [3,23]. We use the laser trigger to synchronize the DAQ. The dual-modal imaging speed depends on the time interval between US excitations (250 µs × 255), the Q-switch delay (200 µs), the PA acquisition time (50 µs), and the laser pulse repetition rate (20 Hz). The current system can alternate US and PA imaging at up to 10 Hz (Fig. 1(h)).
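As a back-of-the-envelope check, the minimal sketch below reads off the stated timing numbers (it is not the actual Verasonics event sequence) and shows why the interleaved rate settles at 10 Hz: one US frame spans more than a 50-ms laser period, so each PA shot waits for the following laser pulse.

```python
import math

# Illustrative timing budget for the interleaved US/PA mode, using only the
# parameters stated in the text (exact sequencer scheduling is an assumption).
US_TX_INTERVAL_S = 250e-6   # interval between successive US transmissions
N_US_TX = 255               # transmission events per US frame
Q_SWITCH_DELAY_S = 200e-6   # laser Q-switch delay
PA_ACQ_S = 50e-6            # PA receive window after one laser pulse
LASER_PERIOD_S = 1 / 20     # 20-Hz laser pulse repetition rate

us_frame_s = N_US_TX * US_TX_INTERVAL_S                     # 63.75 ms of US acquisition
frame_work_s = us_frame_s + Q_SWITCH_DELAY_S + PA_ACQ_S     # ~64.0 ms per US + PA pair

# The US frame occupies more than one 50-ms laser period, so the PA shot is
# deferred to the next laser pulse and the dual-modal period is two laser periods.
dual_modal_period_s = math.ceil(frame_work_s / LASER_PERIOD_S) * LASER_PERIOD_S
print(f"US frame: {us_frame_s * 1e3:.2f} ms, interleaved rate: {1 / dual_modal_period_s:.0f} Hz")
```

Running this prints a 10-Hz interleaved rate, consistent with the rate reported for the system.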

3. Methods and experiments

3.1 Speed of sound calculation and image reconstruction

Compensation for the heterogeneous and time-variant speed of sound (SoS) can improve the US and PA imaging quality but significantly increases the computation time. Moreover, the procedures often require operator intervention. Thus, real-time SoS compensation has not yet been achieved in high-speed US/PA imaging. To address this issue, we compute and compensate for the average SoS in the field of view. This approach enables real-time reconstruction of US and PA images at a video rate. At the same time, the average SoS effectively improves the image quality.

We adaptively compute the SoS using the coherence factor (CF) of the US signals. The CF is defined as the ratio between the total coherent energy and the total incoherent energy and increases as the phase aberration decreases. Because an accurate SoS minimizes the phase aberration in US imaging (Supplementary Fig. S3) [31,36], we can find the optimal SoS by searching for the maximum CF value of the US image. The CF value is determined from

$$CF_{v_i}(n) = \frac{\left| \sum_{Rcv=1}^{N_{elm}} \sum_{Tx=1}^{N_{elm}} I(n) \right|^2}{N_{elm} \times \sum_{Rcv=1}^{N_{elm}} \left| \sum_{Tx=1}^{N_{elm}} I(n) \right|^2},$$
where $n$ is the reconstructed pixel position, $v_i$ is an estimated SoS, $Tx$ and $Rcv$ index the transmitting and receiving elements, $N_{elm}$ is the number of transducer elements, and $I(n)$ is the channel data delayed according to $v_i$ and the distance of flight (DoF).
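For illustration, a minimal NumPy sketch of Eq. (1) is given below. The (N_rcv, N_tx, N_pixels) layout of the pre-delayed channel data is an assumption made for this sketch; the paper's implementation runs in MATLAB on a GPU.

```python
import numpy as np

def coherence_factor(delayed_data: np.ndarray) -> np.ndarray:
    """Pixel-wise coherence factor of Eq. (1).

    delayed_data: array of shape (N_rcv, N_tx, N_pixels) holding the channel
    data already delayed for one trial SoS (hypothetical memory layout).
    """
    n_rcv = delayed_data.shape[0]                      # plays the role of N_elm in Eq. (1)
    tx_sum = delayed_data.sum(axis=1)                  # sum over transmissions, per receiver
    coherent = np.abs(tx_sum.sum(axis=0)) ** 2         # |sum_Rcv sum_Tx I(n)|^2
    incoherent = n_rcv * (np.abs(tx_sum) ** 2).sum(axis=0)
    return coherent / np.maximum(incoherent, 1e-12)    # guard against division by zero
```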

To find the optimal SoS value, we acquired a series of CF maps and calculated their CF summation (CFS) values at different SoS values. Spline interpolation was then applied to the CFS curve to localize the optimal SoS, which is determined from

$$\widehat{SoS} = \mathop{\arg\max}_{SoS}\, f(CFS),$$
where $f$ is the interpolation operator. Considering the directivity of the US transducer, we confined the reconstruction to a small fan region (white solid line in Fig. 1(d)). To accelerate the computation, a look-up table of the DoF was pre-calculated and stored in memory. The DoF table ($N_y \times N_x \times N_{elm}$) is a 3D matrix that stores the distance from each reconstructed pixel to each receiving element, where $N_y \times N_x$ is the image size.
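A minimal sketch of this search is shown below, assuming a flattened DoF table of shape (N_pixels, N_elm) and a user-supplied `delay_and_sum_cf` helper (a placeholder for the beamforming/CF routine, e.g., built on the function above); it is not the paper's GPU implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def precompute_dof(pixel_xy: np.ndarray, element_xy: np.ndarray) -> np.ndarray:
    """Distance-of-flight table: (N_pixels, N_elm) distances from each pixel to
    each receiving element, computed once and reused for every trial SoS."""
    return np.linalg.norm(pixel_xy[:, None, :] - element_xy[None, :, :], axis=-1)

def optimal_sos(channel_data, dof, sos_candidates, fs, delay_and_sum_cf):
    """Sweep trial SoS values, sum each CF map into a CFS value, and refine the
    peak with cubic-spline interpolation (Eq. (2))."""
    cfs = np.array([delay_and_sum_cf(channel_data, dof / v * fs).sum()   # delays in samples
                    for v in sos_candidates])
    spline = CubicSpline(sos_candidates, cfs)
    fine = np.linspace(sos_candidates[0], sos_candidates[-1], 2001)
    return fine[np.argmax(spline(fine))]
```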

After determining the optimal SoS value, the US and PA images were reconstructed using a weight-based delay-and-sum method [7,37]. The weight compensates for the directional sensitivity and the detection sensitivity in both the US and PA reconstructions. The weight is written as

$$w_{\theta k}^i = \mathrm{abs}\!\left( \cos(\theta_k^i) \cdot \sin(\hat{X})/\hat{X} \right), \qquad \hat{X} = width \cdot f/c \cdot \pi \cdot \sin(\theta_k^i),$$
where $w_{\theta k}^i$ is the weight value for the reconstructed $i$th pixel, $\mathrm{abs}$ denotes the absolute value, $\theta_k^i$ is the angle between the acoustic wave from the $i$th pixel and the normal direction of the $k$th element, $width$ is the width of the transducer element, $f$ is the central frequency of the transducer, and $c$ is the sound speed of the coupling medium. Here, $\cos(\theta_k^i)$ accounts for the directional sensitivity, and $\sin(\hat{X})/\hat{X}$ compensates for the detection sensitivity related to the transducer element size and the central frequency. The term $\sin(\hat{X})/\hat{X}$ is set to unity if the reconstruction ignores the detection sensitivity variation.
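A minimal sketch of the weight in Eq. (3), assuming the angle between each pixel-element pair and the element normal has already been computed:

```python
import numpy as np

def das_weight(theta, width, f_c, c):
    """Directivity/detection-sensitivity weight of Eq. (3).

    theta: angle (rad) between the pixel-to-element direction and the element
    normal; width: element width (m); f_c: center frequency (Hz); c: coupling-
    medium sound speed (m/s). Accepts scalars or NumPy arrays.
    """
    x_hat = width * f_c / c * np.pi * np.sin(theta)
    # sin(x)/x -> 1 as x -> 0; handle the on-axis case explicitly
    sinc_term = np.where(np.abs(x_hat) < 1e-9, 1.0, np.sin(x_hat) / np.where(x_hat == 0, 1.0, x_hat))
    return np.abs(np.cos(theta) * sinc_term)
```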

The raw data were preprocessed with a 3rd-order digital Butterworth bandpass filter (0.05–8.5 MHz). For PA reconstruction, the image size is 30 mm × 30 mm (Fig. 1(g)), and the raw data were truncated by half [8]. Within this region, the resolution is better than 380 µm (Figs. 3(a)–3(c)). In addition, the confined reconstruction region reduces the computational burden and enables real-time imaging.
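A hedged SciPy sketch of this preprocessing step is shown below, assuming the 25-MHz sampling rate stated in Section 2.1; zero-phase filtering with filtfilt is an assumption, since the text specifies only the filter type, order, and passband.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_raw_data(raw, fs=25e6, band=(0.05e6, 8.5e6), order=3):
    """3rd-order Butterworth band-pass applied along the time axis of the raw
    channel data (shape: ..., N_samples)."""
    b, a = butter(order, np.array(band) / (fs / 2), btype="band")
    return filtfilt(b, a, raw, axis=-1)
```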

After reconstruction, we processed the US image with envelope detection, logarithmic compression, and median filtering [3]. To denoise and enhance the contrast of the PA image, we applied a non-local means filter [38], contrast-limited adaptive histogram equalization (CLAHE), and a vessel-filter algorithm [29]. To enhance visibility, we used a snake-based active-contour algorithm to segment the region of interest [39]. To maintain fidelity, however, the quantified parameters were determined from the reconstructed images without contrast enhancement. All data processing steps were implemented in MATLAB (2019b, MathWorks, USA) on a computer (Intel Core i7 @ 2.60 GHz, 16 GB of RAM, NVIDIA GeForce RTX 2060).
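The pipeline itself runs in MATLAB; the SciPy sketch below only illustrates the US post-processing chain (envelope detection, log compression, median filtering), with the kernel size and dynamic range as illustrative choices not reported in the paper.

```python
import numpy as np
from scipy.signal import hilbert, medfilt2d

def us_postprocess(rf_image, dynamic_range_db=50):
    """Envelope detection, log compression, and 3x3 median filtering of a
    beamformed US image (axial dimension assumed along axis 0)."""
    envelope = np.abs(hilbert(rf_image, axis=0))        # analytic-signal envelope
    envelope /= envelope.max()
    floor = 10 ** (-dynamic_range_db / 20)
    log_img = 20 * np.log10(np.maximum(envelope, floor))  # compress to [-DR, 0] dB
    return medfilt2d(log_img.astype(np.float64), kernel_size=3)
```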

3.2 Dual-modal ultrasound and photoacoustic simulation

We simulated the SoS calculation using the k-Wave toolbox [40] in two numerical phantoms: a simple numerical phantom and a realistic breast phantom. The coupling medium between the phantom and the detector is water with an SoS of 1500 m/s and a density of 1000 kg/m³. The geometry of the simple numerical phantom is shown in Fig. 2(d). To generate acoustic heterogeneity, we added random Gaussian noise to the acoustic impedances in the hypoechoic region [41]. The average SoS is 1538 m/s with a standard deviation of 39.4 m/s, and the maximal acoustic impedance is 1.7 MRayl. The realistic numerical phantom (Fig. 2(f)) was derived from a clinical magnetic resonance angiography (MRA) breast data set [42]. Rich anatomical structures, including fibroglandular tissue, fat, and skin, can be visualized. We set the acoustic impedances to range from 1.5 MRayl to 1.7 MRayl. Numerical vessel structures were added to the phantoms for PA imaging (Figs. 2(d) and 2(f)).
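A minimal sketch of how such a heterogeneous medium could be generated is shown below. Note that the paper perturbs the acoustic impedance, whereas this sketch perturbs the SoS map directly with the reported mean and standard deviation for simplicity; the disc geometry and the density contrast are illustrative assumptions (the actual phantom geometry is defined in Fig. 2(d)).

```python
import numpy as np

def make_simple_phantom(nx=1200, ny=1200, dx=100e-6,
                        c_water=1500.0, rho_water=1000.0,
                        c_mean=1538.0, c_std=39.4, seed=0):
    """Sound-speed and density maps for a water background plus a disc-shaped
    heterogeneous region (Gaussian-perturbed SoS, mean 1538 m/s, std 39.4 m/s)."""
    rng = np.random.default_rng(seed)
    sos = np.full((ny, nx), c_water)
    rho = np.full((ny, nx), rho_water)
    y, x = np.mgrid[:ny, :nx]
    disc = (x - nx // 2) ** 2 + (y - ny // 2) ** 2 < (5e-3 / dx) ** 2   # 5-mm-radius disc (assumed)
    sos[disc] = rng.normal(c_mean, c_std, size=int(disc.sum()))
    rho[disc] = rho_water * 1.05                                        # illustrative density contrast
    return sos, rho
```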


Fig. 2. Simulation of US/PA imaging with sound speed optimization. (a) Received channel data map. The signals from transmission and reflection can be identified. The transmitting element is sequentially excited along the red dashed line. (b) Pixel-based CF maps calculated with different average SoS values. (c) CFS at different average SoS values. (d) Simple numerical phantom simulation and reconstruction. The phantom contains two anechoic regions with different diameters. Left: GT image. Overlaid US and PA images reconstructed with a wrong SoS (Middle) and with the SoS optimized by the CFS calculation (Right). (e) Comparison of the diameters of the anechoic regions reconstructed with the wrong and optimized SoS values. (f) Realistic numerical breast phantom simulation and reconstruction. Left: GT image. Overlaid US and PA images reconstructed with a wrong SoS (Middle) and with the SoS optimized by the CFS calculation (Right). (g) Zoom-in images of the white solid box in (f). The arrows show the improved details. CF, coherence factor; CFS, coherence factor summation; FG, fibroglandular; GT, ground truth.



Fig. 3. Performance characterization of the VF-USPACT imaging system. (a) US (Left) and PA (Right) images of a tungsten wire with a diameter of 20 µm, which is small enough to be regarded as a spatial point source. The images were acquired with the tungsten wire located at different positions. Measured in-plane axial and tangential resolution of (b) US and (c) PA as a function of distance from the array center to the edge. The shadowed regions show the range with isotropic resolution, and the reconstruction regions are also highlighted with dashed-line boxes in (a). (d) Photograph of the three-layer coupling-media phantom. (e) Reconstructed US (Left) and PA (Right) images using the adaptive SoS. PDMS, polydimethylsiloxane.


For the US simulation, we sequentially transmitted a 1-cycle sinusoidal pulse from individual elements. The pulse was modulated with a Hanning window before being sent to the transducer. After each transmission, 128 contiguous elements simultaneously received the echoes. In the PA simulation, the blood vessels were assigned an initial pressure, and all 256 elements received the signals. We set the sampling frequency to 53.2566 MHz for both US and PA. The simulation ran 3600 time steps for US and 1800 for PA. The computational grid consists of 1200 × 1200 pixels, including a perfectly matched layer (damping block), with a pixel size of 100 µm. We added frequency-dependent acoustic attenuation ($\alpha = 0.75 f^{1.5}$) in the simulation. The simulations were run on a GPU to reduce the computation time.

3.3 System characterization

We characterized the axial, tangential, and elevational resolutions (Figs. 1(e)–1(f) and Figs. 3(a)–3(c)). Theoretically, both the axial and tangential resolutions depend on the transducer bandwidth, and the axial resolution is spatially invariant. The tangential resolution, however, also depends on the detector aperture size and is spatially variant [43]. To determine the reconstruction region with acceptable spatial resolution, we measured the spatial resolution by imaging a 20-µm-diameter tungsten wire at different positions.
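A minimal sketch of the full-width-at-half-maximum estimate typically used for such resolution measurements, assuming a 1D line profile extracted through the reconstructed wire image (the exact fitting procedure is not specified in the paper):

```python
import numpy as np

def fwhm(profile, pixel_size_um):
    """FWHM (in µm) of a 1D line profile, with the baseline removed and the
    half-maximum crossings refined by linear interpolation."""
    p = np.asarray(profile, dtype=float) - np.min(profile)
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # linear interpolation between a sample below (i0) and above (i1) half max
        return i0 + (half - p[i0]) / (p[i1] - p[i0]) * (i1 - i0)

    x_l = cross(left - 1, left) if left > 0 else float(left)
    x_r = cross(right + 1, right) if right < p.size - 1 else float(right)
    return abs(x_r - x_l) * pixel_size_um
```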

To validate the accuracy of US and PA reconstruction with the adaptive SoS value, we built a three-layer phantom (Fig. 3(d)). The innermost layer is a cylinder with a diameter of 10.4 mm, made of agar-water gel (6% w/w) with 0.9% (v/v) intralipid. The middle layer is a thin (<0.1 mm) black tape. The outermost hollow cylinder has a diameter of 15 mm and is made of tissue-mimicking polydimethylsiloxane (PDMS). Because of the acoustic heterogeneity, the multi-layer phantom and the surrounding water have an impedance mismatch.

3.4 Animal preparation and imaging

Adult six-week-old female nude mice (BALB/c, ∼28 g) were used for trunk imaging. In the experiments, the mouse was vertically fixed in a lab-made animal holder (Fig. 1(b)) and anesthetized with ∼2% vaporized isoflurane at 0.8 L/min. The animal holder consists of three parts to secure the animal for elevational scanning. The top part is a hollow tube with a mouth clamp; vaporized isoflurane flows through the tube to the mouse's nose, and the animal's nose and mouth are fixed to the mouth clamp. The middle part is a transparent rubber rod linking the top and bottom components. The bottom part is a supporting plate with a hole in the middle that allows the mouse tail to pass through. The length of the rod linker is adjustable to accommodate mice of different weights. Both the fore and hind paws were attached to the holder with strings. The animal was immersed in deionized water, and its scanning cross-section can be adjusted by a motor (Fig. 1(a)). The water temperature was maintained at 30 °C and monitored with a thermocouple.

We used two imaging speeds, 10 Hz and 20 Hz, to image the anatomy and dynamics of the mice. At 10 Hz, we acquired co-registered US/PA images with the optimized SoS from the upper thoracic cavity to the pelvic cavity with a 1-mm step size in the elevational scanning direction. We also continuously recorded US/PA images at the cross-sections of the thoracic cavity and the abdominal cavity. At 20 Hz, we acquired one US image and optimized the SoS at the beginning, and then continuously recorded PA images with the optimized SoS. The wavelength for PA imaging was 1064 nm, and the laser fluence on the skin was approximately 15.9 mJ/cm², well below the ANSI limit of 100 mJ/cm². All animal procedures were approved by the animal ethics committee of the City University of Hong Kong.

3.5 Hemodynamic imaging of the heart

We visualized and analyzed the hemodynamics in the heart wall with 20-Hz PA imaging. We recorded 16 seconds (320 frames) in the thoracic cavity. A region of interest (the line marked in Fig. 5(a)) on the heart wall was selected and segmented from the PA images. We calculated the displacement induced by the heartbeat and respiration. The displacement changes form a time trace (red solid line), extracted by averaging the amplitude along the displacement direction. Fourier analysis of the time trace reveals the respiration and heartbeat frequencies. In the spectral analysis, we used a second-order high-pass filter (0.15-Hz cutoff frequency) to remove low-frequency interference. To calculate the main artery maps, we applied the Fourier analysis to every pixel of the time-lapsed PA images, computed the magnitude at the heartbeat frequency, and used it to encode each pixel with pseudo-colors.
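A hedged sketch of this analysis is shown below. The 0.15-Hz second-order high-pass filter follows the text, while the array layout, the DC removal, and the 1-Hz threshold used to separate the heartbeat peak from the respiration peak are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def heartbeat_map(displacement, pa_stack, frame_rate=20.0, hp_cutoff=0.15):
    """Estimate the heartbeat frequency from the heart-wall displacement trace
    and encode each pixel of the PA time stack (N_frames, Ny, Nx) by its
    spectral magnitude at that frequency."""
    n = len(displacement)
    freqs = np.fft.rfftfreq(n, d=1.0 / frame_rate)
    b, a = butter(2, hp_cutoff / (frame_rate / 2), btype="high")  # 2nd-order, 0.15-Hz high-pass
    spec = np.abs(np.fft.rfft(filtfilt(b, a, displacement)))
    # dominant peak above 1 Hz taken as the heartbeat (assumed threshold;
    # respiration sits near 0.2 Hz)
    offset = np.count_nonzero(freqs <= 1.0)
    f_heart = freqs[offset + np.argmax(spec[offset:])]
    # pixel-wise magnitude at the heartbeat frequency -> pseudo-color arterial map
    stack_spec = np.abs(np.fft.rfft(pa_stack - pa_stack.mean(axis=0), axis=0))
    k = np.argmin(np.abs(freqs - f_heart))
    return f_heart, stack_spec[k]
```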

3.6 Human imaging

To demonstrate potential clinical applications, we conducted human finger-joint imaging. Because the VF-USPACT system can optimize the SoS in real time, the image quality is robust even in unknown acoustic coupling media. We deliberately set the water temperature to 23 °C, corresponding to an SoS of 1491.3 m/s [44]. Different finger diameters further induce variations in the average SoS value. We then acquired co-registered US/PA images of the joints of the five fingers. The optimal average SoS values were calculated in real time and used in the US and PA image reconstructions. The optical wavelength for PA imaging was 1064 nm, and the laser fluence on the skin was approximately 12.7 mJ/cm². All human experimental procedures were carried out in conformity with the requirements of the research committee of the City University of Hong Kong.
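For reference, the temperature-to-SoS conversion for the water bath can be sketched with the fifth-order polynomial of Marczak [44]; the coefficients below are quoted for illustration only and should be verified against the original reference.

```python
def water_sos(temp_c: float) -> float:
    """Speed of sound in pure water (m/s) versus temperature (deg C), following
    the fifth-order polynomial of Marczak [44] (coefficients for illustration)."""
    coeffs = [1.402385e3, 5.038813, -5.799136e-2,
              3.287156e-4, -1.398845e-6, 2.787860e-9]
    return sum(c * temp_c ** i for i, c in enumerate(coeffs))

# Example: water_sos(23.0) evaluates to roughly 1491 m/s, consistent with the
# 1491.3 m/s used for the 23 degC water bath in the finger-joint experiments.
```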

4. Results and discussion

4.1 Simulation results

We validated the SoS optimization method in simulation using a simple numerical phantom (Fig. 2(d)) and a realistic breast phantom (Fig. 2(f)). Figures 2(a)–2(c) illustrate how the optimal SoS is determined. Because the 128 elements on the opposite side of the transmitting element are switched off, only the directly transmitted and phantom-reflected signals are visible (Fig. 2(a)). We computed a series of CF maps of the received US signals while varying the SoS value from low to high (Fig. 2(b)). The optimal SoS value was then determined from the maximum CF summation (CFS) value (Visualization 1). Conventionally, a pre-determined SoS value may become inaccurate owing to variations in anatomical structures, tissue size, physiological status, and temperature [32]. An incorrect SoS may cause image distortion or blurred boundaries (Indicated by arrows in the middle images in Figs. 2(d) and 2(f)), and distorted and split PA features can be observed. The optimized SoS effectively reduces these distortions (Right images in Figs. 2(d) and 2(f)). As shown in Fig. 2(e), the diameters of the two anechoic regions in the numerical phantom are well corrected with the adaptive SoS. The zoom-in images (Fig. 2(g)) show that structures such as the fibroglandular (FG) tissue and fat can also be clearly distinguished with the optimized SoS.

4.2 System performance characterization

To shorten the processing time, we confined the US reconstruction to a fan-shaped region at each Tx/Rcv event and the PA reconstruction to a rectangular region at each laser pulse. The region size is determined by the sensitivity of the acoustic field (Fig. 1(d)) and the spatial resolution (Figs. 3(a)–3(c)). The maximal reconstruction region is 30 mm × 30 mm. The results also show that the system has a nearly isotropic resolution within a region of 15 mm × 15 mm. The center has the highest resolution: 140.3 µm for US and 151.7 µm for PA in the axial direction, and 141.5 µm for US and 158.9 µm for PA in the tangential direction.

We also validated the SoS optimization method in in vitro experiments by imaging a three-layer phantom (Fig. 3(d)) and measuring the rod diameter in the innermost layer. Although there is a large acoustic mismatch between the different layers of the phantom and the coupling medium (Water), we can reconstruct undistorted images and recover the correct rod size (Fig. 3(e)) by using the optimized SoS.

4.3 Dual-modal imaging of whole-body anatomy and dynamics

We used the VF-USPACT system to non-invasively image the small-animal whole body. The nude mouse was immobilized in the animal holder and positioned at the center of the US transducer array (Fig. 1(b)). We acquired a series of images at different cross-sectional positions of the animal. Four representative images from the upper thoracic cavity to the pelvic cavity are shown in Fig. 4(a). At each cross-sectional position, the mouse was first imaged with one US image to optimize the SoS and then continuously imaged with PA at 20 Hz for 16 seconds (320 frames). The optimal averaged SoS values at different cross-sections are shown in Fig. 4(b). We compared the CF method using US data with different autofocus-function methods using PA data (Supplementary Fig. S4); the CF method is more robust in determining the optimal sound speed. We also compared PA images of the liver region reconstructed with the optimized SoS (1515 m/s) and a wrong SoS value (1510 m/s). A 0.3% change in the SoS can be visually deceptive when the optimal SoS is selected subjectively. As a reference, a temperature change of 1.5 to 4.5 degrees can cause a 0.2% to 0.6% drop in sound speed, which may lead to obvious image artifacts and is sometimes more dominant than the acoustic heterogeneity [32]. The reconstruction results show that the SoS optimization method can minimize blurring and artifacts of the blood-vessel features (Visualization 2).

The dual-modal US/PA imaging provides complementary contrasts and different anatomical details (Fig. 4(c)). The vessels (Heart wall in the thoracic cavity; abdominal aorta, vena cava, and vena porta in the abdominal cavity) and vascularized organs (Liver, kidney, and spleen in the abdominal cavity) are highlighted in the PA images. Some other organs, for example, the stomach (Abdominal cavity), the bladder, and the iliac body (Pelvic cavity), are unobservable in the PA images but can be easily identified in the US images. The main reason is that US and PA imaging have different signal-generation mechanisms. The stomach, bladder, and iliac body cannot provide enough contrast in the PA image, whereas in US imaging the stomach, with diffuse reflections, appears hypoechoic, the bladder appears anechoic, and the iliac body appears hyperechoic. Therefore, these structures can be distinguished in the US image.

Because the 20-Hz PA imaging speed is higher than the Nyquist rate required to sample the mouse heartbeat under anesthesia, we can record the respiration and heartbeat motions (Visualization 3). Via temporal spectral analysis, we extracted the respiration frequency (0.2 Hz) and the heartbeat frequency (3.7 Hz) from the PA images (Figs. 5(a)–5(b)). Instantaneous monitoring of heartbeat dynamics is promising for cardiovascular disease diagnosis. The liver and kidney functions are intimately related to blood circulation. Using the detected heartbeat frequency, we processed the 16-second PA datasets at the cross-sections of the liver (Visualization 4) and the kidney (Visualization 5). The arteries with heart-rate-synchronized pulsation can be separated from other vessels (Figs. 5(c)–5(d)), which may be useful in the diagnosis or assessment of atherosclerosis or other arterial obstruction diseases [45].


Fig. 4. Label-free VF-USPACT of small-animal anatomy. (a) Photograph of the mouse and the imaged cross-sections. (b) Calculated average SoS values at different cross-sections. (c) Representative cross-sections imaged with the VF-USPACT system. BM, backbone muscles; CFS, coherence factor summation.



Fig. 5. Label-free VF-USPACT of small-animal dynamics. (a) Displacements of the heart wall (along the red solid line marked on the PA image) show respiration and heartbeats. The traces of the heart wall motion are highlighted with red solid lines. (b) Fourier transform of the displacement shows the respiratory and heartbeat frequencies. Arterial maps encoded at the heartbeat frequency in (c) the liver region and (d) the kidney region.


To demonstrate the 10-Hz US/PA imaging, we monitored two different cross-sections from the upper thoracic cavity to the abdominal cavity (Visualization 6). At each cross-section, we recorded 16.2 seconds of co-registered US/PA images (162 frames for each modality). The SoS value was updated whenever the cross-sectional position changed, and both the US and PA images were reconstructed with the matched SoS value. We also conducted a whole-body scan from the upper thoracic cavity to the pelvic cavity with high speed and uncompromised image quality (Visualization 7). These results show that the SoS-adaptive VF-USPACT system holds great potential in pre-clinical research, such as anatomical and hemodynamic imaging and monitoring the biodistribution and clearance of drugs in different organs.

4.4 Dual-modal imaging of human finger joints

We demonstrated the potential clinical translation of VF-USPACT in imaging human extremities (finger joints). Although the SoS value depends on the object size, the medium temperature, and even the physiological status, the VF-USPACT system provides robust, high-quality images of the finger joints (Fig. 6(a)). The fingers have varying diameters: the average SoS value for the little finger is 1496 m/s, smaller than that for the thumb (1500 m/s). Consequently, the thumb features show larger distortions when reconstructed with the little finger's SoS (Figs. 6(b)–6(c)). The co-registered US/PA images exhibit rich features, such as the skin, blood vessels, and bones. The regions with high PA signal amplitudes from the blood vessels correspond to anechoic regions in the US images (First row in Fig. 6(a)). The complementary contrasts can improve accuracy in characterizing arthritis and in diagnosing peripheral vascular diseases, skin malignancies, or diabetic foot [46,47].


Fig. 6. Label-free VF-USPACT of human finger joints. (a) US and PA images reconstructed using the optimized SoS at different cross-sections. The white dashed lines at the top mark high PA signals from blood vessels corresponding to anechoic regions in the US images. (b) Thumb images reconstructed using the SoS value from the little finger. (c) Comparison of the thumb images reconstructed using the optimized SoS and the SoS value from the little finger. Zoom-in images are from the green dashed boxes in (a) and (b). The arrows show the improved details.


5. Conclusions

We report VF-USPACT, which provides full-view dual-contrast imaging with high speed and an optimized SoS. Co-registered US and PA images provide complementary contrasts and reveal features that are not readily distinguishable with a single modality. Our system is comparable with state-of-the-art hybrid dual-modal US/PA systems (Supplementary Table S1) but is distinguished by real-time SoS optimization in the dual-modal image reconstruction. The automatic SoS calculation shortens the reconstruction time by avoiding subjective variability and time-consuming iteration. Because the SoS optimization uses only the US data, optical fluence attenuation does not affect its accuracy. VF-USPACT is suitable for scenarios that require real-time processing and display, for example, evaluating vascular perfusion function, investigating pharmacokinetics and pharmacodynamics across different organs, or intraoperative monitoring. The high imaging speed enables whole-body imaging and continuous collection of dynamic physiological information. Animal and human imaging results demonstrate the system's capability for fast dual-contrast imaging. VF-USPACT is also applicable to transcranial dual-modal US and PA imaging. However, unlike acoustic propagation in soft tissue, the US signals usually experience significant reverberation, mode conversion, refraction, and attenuation through the skull. Therefore, we envision that it would be better to calculate the coherence factor in different regions (skull and brain cortex) and determine the optimal sound speeds in these sub-regions.

In conclusion, VF-USPACT offers strong capabilities for preclinical and human imaging. We believe the system can accelerate pre-clinical research and facilitate the clinical translation of dual-modal US/PA imaging to more real-time applications.

Funding

National Natural Science Foundation of China (62135006, 81627805, 81930048); City University of Hong Kong (7005207, 7020004); University Grants Committee (11101618, 11103320, 11215817).

Acknowledgments

A set of ultrasound/photoacoustic anatomical images from iThera Medical was used for comparison in Supplementary Table S1.

Disclosures

Lidai Wang has a financial interest in PATech Limited, which, however, did not support this work. All authors declare no competing interests.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. G. S. Jeng, M. L. Li, M. W. Kim, S. J. Yoon, J. J. Pitre, D. S. Li, I. Pelivanov, and M. O’Donnell, “Real-time interleaved spectroscopic photoacoustic and ultrasound (PAUS) scanning with simultaneous fluence compensation and motion correction,” Nat. Commun. 12(1), 716 (2021). [CrossRef]  

2. J. Robin, R. Rau, B. Lafci, A. Schroeter, M. Reiss, X. L. Deán-Ben, O. Goksel, and D. Razansky, “Hemodynamic response to sensory stimulation in mice: Comparison between functional ultrasound and optoacoustic imaging,” NeuroImage 237, 118111 (2021). [CrossRef]  

3. Y. Zhang and L. Wang, “Video-rate ring-array ultrasound and photoacoustic tomography,” IEEE Trans. Med. Imaging 39(12), 4369–4375 (2020). [CrossRef]  

4. S. Park, J. Jang, J. Kim, Y. S. Kim, and C. Kim, “Real-time triple-modal photoacoustic, ultrasound, and magnetic resonance fusion imaging of humans,” IEEE Trans. Med. Imaging 36(9), 1912–1921 (2017). [CrossRef]  

5. A. Wiacek, K. C. Wang, H. Wu, and M. A. L. Bell, “Photoacoustic-guided laparoscopic and open hysterectomy procedures demonstrated with human cadavers,” IEEE Trans. Med. Imaging 40(12), 3279–3292 (2021). [CrossRef]  

6. J. Robin, A. Ozbek, M. Reiss, X. L. Dean-Ben, and D. Razansky, “Dual-mode volumetric photoacoustic and contrast enhanced ultrasound imaging with spherical matrix arrays,” IEEE Trans. Med. Imaging 41(4), 846–856 (2022). [CrossRef]  

7. Y. Zhang, Y. Wang, P. Lai, and L. Wang, “Video-rate dual-modal wide-beam harmonic ultrasound and photoacoustic computed tomography,” IEEE Trans. Med. Imaging 41(3), 727–736 (2022). [CrossRef]  

8. L. Li, L. Zhu, C. Ma, L. Lin, J. Yao, L. Wang, K. Maslov, R. Zhang, W. Chen, J. Shi, and L. V. Wang, “Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution,” Nat. Biomed. Eng. 1(5), 0071 (2017). [CrossRef]  

9. C. Liu and L. Wang, “Functional photoacoustic microscopy of hemodynamics: a review,” Biomed. Eng. Lett. 12(2), 97–124 (2022). [CrossRef]  

10. J. Chen, Y. Zhang, S. Bai, J. Zhu, P. Chirarattananon, K. Ni, Q. Zhou, and L. Wang, “Dual-foci fast-scanning photoacoustic microscopy with 3.2-MHz A-line rate,” Photoacoustics 23, 100292 (2021). [CrossRef]  

11. L. Wang, K. Maslov, and L. V. Wang, “Single-cell label-free photoacoustic flowoxigraphy in vivo,” Proc. Natl. Acad. Sci. U. S. A. 110(15), 5759–5764 (2013). [CrossRef]  

12. S. Na, J. J. Russin, L. Lin, X. Yuan, P. Hu, K. B. Jann, L. Yan, K. Maslov, J. Shi, D. J. Wang, C. Y. Liu, and L. V. Wang, “Massively parallel functional photoacoustic computed tomography of the human brain,” Nat. Biomed. Eng. 6(5), 584–592 (2022). [CrossRef]  

13. C. Liu, J. Chen, Y. Zhang, J. Zhu, and L. Wang, “Five-wavelength optical-resolution photoacoustic microscopy of blood and lymphatic vessels,” Adv. Photonics 3(1), 016002 (2021). [CrossRef]  

14. D. Li, Y. Zhang, C. Liu, J. Chen, D. Sun, and L. Wang, “Review of photoacoustic imaging for microrobots tracking in vivo [Invited],” Chin. Opt. Lett. 19(11), 111701 (2021). [CrossRef]  

15. J. Zhu, C. Liu, Y. Liu, J. Chen, Y. Zhang, K. Yao, and L. Wang, “Self-fluence-compensated functional photoacoustic microscopy,” IEEE Trans. Med. Imaging 40(12), 3856–3866 (2021). [CrossRef]  

16. J. Chen, Y. Zhang, X. Li, J. Zhu, D. Li, S. Li, C.-S. Lee, C.-S. Lee, L. Wang, L. Wang, and L. Wang, “Confocal visible/NIR photoacoustic microscopy of tumors with structural, functional, and nanoprobe contrasts,” Photonics Res. 8(12), 1875–1880 (2020). [CrossRef]  

17. A. Taruttis, S. Morscher, N. C. Burton, D. Razansky, and V. Ntziachristos, “Fast multispectral optoacoustic tomography (MSOT) for dynamic imaging of pharmacokinetics and biodistribution in multiple organs,” PLoS One 7(1), e30491 (2012). [CrossRef]  

18. J. Zhang, G. Wen, W. Wang, K. Cheng, Q. Guo, S. Tian, C. Liu, H. Hu, Y. Zhang, H. Zhang, L. Wang, and H. Sun, “Controllable cleavage of C-N bond-based fluorescent and photoacoustic dual-modal probes for the detection of H2S in living mice,” ACS Appl. Bio Mater. 4(3), 2020–2025 (2021). [CrossRef]  

19. S. Li, Q. Deng, Y. Zhang, X. Li, G. Wen, X. Cui, Y. Wan, Y. Huang, J. Chen, Z. Liu, L. Wang, and C. S. Lee, “Rational design of conjugated small molecules for superior photothermal theranostics in the NIR-II biowindow,” Adv. Mater. 32(33), 2001146 (2020). [CrossRef]  

20. L. Lin, P. Hu, J. Shi, C. M. Appleton, K. Maslov, L. Li, R. Zhang, and L. V. Wang, “Single-breath-hold photoacoustic computed tomography of the breast,” Nat. Commun. 9(1), 2352 (2018). [CrossRef]  

21. G. Wen, X. Li, Y. Zhang, X. Han, X. Xu, C. Liu, K. W. Y. Chan, C. S. Lee, C. Yin, L. Bian, and L. Wang, “Effective phototheranostics of brain tumor assisted by near-infrared-II light-responsive semiconducting polymer nanoparticles,” ACS Appl. Mater. Interfaces 12(30), 33492–33499 (2020). [CrossRef]  

22. L. Li, A. A. Shemetov, M. Baloban, P. Hu, L. Zhu, D. M. Shcherbakova, R. Zhang, J. Shi, J. Yao, L. V. Wang, and V. V. Verkhusha, “Small near-infrared photochromic protein for photoacoustic multi-contrast imaging and detection of protein interactions in vivo,” Nat. Commun. 9(1), 2734 (2018). [CrossRef]  

23. C. Yin, X. Li, G. Wen, B. Yang, Y. Zhang, X. Chen, P. Zhao, S. Li, R. Li, L. Wang, C. S. Lee, and L. Bian, “Organic semiconducting polymer amphiphile for near-infrared-II light-triggered phototheranostics,” Biomaterials 232, 119684 (2020). [CrossRef]  

24. M. Zha, X. Lin, J. S. Ni, Y. Li, Y. Zhang, X. Zhang, L. Wang, and K. Li, “An ester-substituted semiconducting polymer with efficient nonradiative decay enhances NIR-II photoacoustic performance for monitoring of tumor growth,” Angew. Chemie Int. Ed. 59(51), 23268–23276 (2020). [CrossRef]  

25. J. Yao and L. V. Wang, “Special Section on the BRAIN Initiative: Photoacoustic brain imaging: from microscopic to macroscopic scales,” Neurophotonics 1(1), 011003 (2014). [CrossRef]  

26. S. Qi, Y. Zhang, G. Liu, J. Chen, X. Li, Q. Zhu, Y. Yang, F. Wang, J. Shi, C. S. Lee, G. Zhu, P. Lai, L. Wang, and C. Fang, “Plasmonic-doped melanin-mimic for CXCR4-targeted NIR-II photoacoustic computed tomography-guided photothermal ablation of orthotopic hepatocellular carcinoma,” Acta Biomater. 129, 245–257 (2021). [CrossRef]  

27. L. He, Y. Zhang, J. Chen, G. Liu, J. Zhu, X. Li, D. Li, Y. Yang, C. S. Lee, J. Shi, C. Yin, P. Lai, L. Wang, and C. Fang, “A multifunctional targeted nanoprobe with high NIR-II PAI/MRI performance for precise theranostics of orthotopic early-stage hepatocellular carcinoma,” J. Mater. Chem. B 9(42), 8779–8792 (2021). [CrossRef]  

28. T. Wei, J. Liu, D. Li, S. Chen, Y. Zhang, J. Li, L. Fan, Z. Guan, C. M. Lo, L. Wang, K. Man, and D. Sun, “Development of magnet-driven and image-guided degradable microrobots for the precise delivery of engineered stem cells for cancer therapy,” Small 16(41), 1906908 (2020). [CrossRef]  

29. E. Merčep, J. L. Herraiz, X. L. Deán-Ben, and D. Razansky, “Transmission–reflection optoacoustic ultrasound (TROPUS) computed tomography of small animals,” Light: Sci. Appl. 8(1), 18 (2019). [CrossRef]  

30. X. L. Deán-Ben and D. Razansky, “On the link between the speckle free nature of optoacoustics and visibility of structures in limited-view tomography,” Photoacoustics 4(4), 133–140 (2016). [CrossRef]  

31. S. Jeon, W. Choi, B. Park, and C. Kim, “A deep learning-based model that reduces speed of sound aberrations for improved in vivo photoacoustic imaging,” IEEE Trans. Image Process. 30, 8773–8784 (2021). [CrossRef]  

32. D. van de Sompel, L. S. Sasportas, A. Dragulescu-Andrasi, S. Bohndiek, and S. S. Gambhir, “Improving image quality by accounting for changes in water temperature during a photoacoustic tomography scan,” PLoS One 7(10), e45337 (2012). [CrossRef]  

33. B. E. Treeby, T. K. Varslot, E. Z. Zhang, J. G. Laufer, and P. C. Beard, “Automatic sound speed selection in photoacoustic image reconstruction using an autofocus approach,” J. Biomed. Opt. 16(9), 090501 (2011). [CrossRef]  

34. M. Cui, H. Zuo, X. Wang, K. Deng, J. Luo, and C. Ma, “Adaptive photoacoustic computed tomography,” Photoacoustics 21, 100223 (2021). [CrossRef]  

35. S. Mandal, E. Nasonova, X. L. Deán-Ben, and D. Razansky, “Optimal self-calibration of tomographic reconstruction parameters in whole-body small animal optoacoustic imaging,” Photoacoustics 2(3), 128–136 (2014). [CrossRef]  

36. R. Rau, D. Schweizer, V. Vishnevskiy, and O. Goksel, “Ultrasound aberration correction based on local speed-of-sound map estimation,” in Proc. IEEE Int. Ultrasonics Symposium (IUS), 2003–2006 (2019).

37. Y. Zhang and L. Wang, “Adaptive dual-speed ultrasound and photoacoustic computed tomography,” Photoacoustics 27, 100380 (2022). [CrossRef]  

38. S. Siregar, R. Nagaoka, I. Ul Haq, and Y. Saijo, “Non local means denoising in photoacoustic imaging,” Jpn. J. Appl. Phys. 57(7S1), 07LB06 (2018). [CrossRef]  

39. K. Basak, X. Luís Deán-Ben, S. Gottschalk, M. Reiss, and D. Razansky, “Non-invasive determination of murine placental and foetal functional parameters with multispectral optoacoustic tomography,” Light: Sci. Appl. 8(1), 71 (2019). [CrossRef]  

40. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010). [CrossRef]  

41. S. Agrawal, T. Suresh, A. Garikipati, A. Dangi, and S. R. Kothapalli, “Modeling combined ultrasound and photoacoustic imaging: Simulations aiding device development and artificial intelligence,” Photoacoustics 24, 100304 (2021). [CrossRef]  

42. Y. Lou, W. Zhou, T. P. Matthews, C. M. Appleton, and M. A. Anastasio, “Generation of anatomically realistic numerical phantoms for photoacoustic and ultrasonic breast imaging,” J. Biomed. Opt. 22(4), 041015 (2017). [CrossRef]  

43. M. Xu and L. V. Wang, “Analytic explanation of spatial resolution related to bandwidth and detector aperture size in thermoacoustic or photoacoustic reconstruction,” Phys. Rev. E 67(5), 056605 (2003). [CrossRef]  

44. W. Marczak, “Water as a standard in the measurements of speed of sound in liquids,” J. Acoust. Soc. Am. 102(5), 2776–2779 (1997). [CrossRef]  

45. P. Wray, L. Lin, P. Hu, and L. V. Wang, “Photoacoustic computed tomography of human extremities,” J. Biomed. Opt. 24(2), 1 (2019). [CrossRef]  

46. P. J. van den Berg, K. Daoudi, H. J. Bernelot Moens, and W. Steenbergen, “Feasibility of photoacoustic/ultrasound imaging of synovitis in finger joints using a point-of-care system,” Photoacoustics 8, 8–14 (2017). [CrossRef]  

47. A. Buehler, D. Soliman, J. Aguirre, M. Schwarz, M. Omar, and V. Ntziachristos, “Broadband mesoscopic optoacoustic tomography reveals skin layers,” Opt. Lett. 39(21), 6297–6300 (2014). [CrossRef]  

Supplementary Material (8)

Supplement 1       Supplemental document
Visualization 1       Pixel-based CF maps and the CFS at different SoS values. The left panel shows the CF maps versus different SoS values from 1470 m/s to 1620 m/s. And the right panel shows corresponding CFS values.
Visualization 2       Comparison of PA images reconstruction using a wrong SoS selection (1510 m/s) and adaptive SoS computation (1515 m/s). The time-lapsed images (16 seconds) were acquired at single-shot US + 20-Hz PA imaging speed mode.
Visualization 3       In vivo label-free dual-modal US/PA imaging of the cross-section of a mouse heart (Thoracic cavity). The time-lapsed images were acquired at single-shot US + 20-Hz PA imaging speed mode.
Visualization 4       In vivo label-free dual-modal US/PA imaging of the cross-section of a mouse liver (Abdominal cavity). The time-lapsed images (16 seconds) were acquired at single-shot US + 20-Hz PA imaging speed mode.
Visualization 5       In vivo label-free dual-modal US/PA imaging of the cross-section of a mouse kidney (Abdominal cavity). The time-lapsed images (16 seconds) were acquired at single-shot US + 20-Hz PA imaging speed mode.
Visualization 6       In vivo label-free dual-modal US/PA imaging from a cross-section (Heart) of the upper thoracic cavity to a cross-section (Liver) of the abdominal cavity. The time-lapsed images were acquired at a 10-Hz US/PA speed mode.
Visualization 7       In vivo label-free dual-modal US/PA whole-body imaging from the upper thoracic cavity to the pelvic cavity. The step size of scanning is 1mm. The time-lapsed images were acquired at a 10-Hz US/PA speed mode.

