
Photoacoustic-guided surgery from head to toe [Invited]

Open Access

Abstract

Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a method to provide critical guidance of multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of the complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

18 March 2021: Typographical corrections were made to Figs. 1, 2, and the body text.

1. Introduction

From a historical perspective, photoacoustic imaging has shown great promise and remarkable impact in several applications, including small animal imaging [1], vascular network imaging [2,3], and molecular imaging [4]. The technology has only recently begun to show viability as a method for guidance of multiple surgeries and procedures. This review summarizes the promise of photoacoustic imaging for surgical guidance, spanning organs from head to toe, as illustrated in Fig. 1. On a rudimentary level, photoacoustic imaging is advantageous for surgical guidance because its penetration depth exceeds that of purely optical imaging methods, its spatial resolution is similar to that of ultrasound imaging, and it can determine functional information based on changes in optical properties [5]. These general statements also apply to photoacoustic imaging as an entire field when discussing its possibilities for clinical (and not necessarily surgical) translation.

Fig. 1. Summary of photoacoustic-guided surgery applications stratified by organ.

After performing a deeper analysis, it is clear that the concept of photoacoustic-guided surgery has become increasingly attainable over the years because of advances that include demonstrations of the clinical imaging of metal [6,7], as well as the ability to independently visualize common anatomical structures that are naturally present during surgery (e.g., blood vessels [8,9], nerves [10–12], tendons [12], lipids [13,14], and biological tissues [15–17]). The transition from benchtop to more mobile and portable photoacoustic systems [18–21] has also played a role in this important endeavor. The proliferation of open research-based ultrasound systems [20,22–26] and new data-driven image formation methods [27–29] have additionally fueled the innovation of novel photoacoustic imaging systems that will be useful for surgical guidance. When combined with existing optical engineering tools and the ability to miniaturize complete photoacoustic systems (e.g., catheter- and endoscopy-based photoacoustic imaging [30]), interest from surgeons and similar stakeholders has started to rise, as demonstrated by the number of authors with clinical affiliations rising from 5 in 2012 to 27 in 2020, based on the original research contributions referenced in Fig. 1. Additional optics-related advances include the flexible separation of light delivery systems from acoustic reception [31,32] and demonstrations of interstitial irradiation [33], bringing light as close to the surgical site as possible [34–38]. With external ultrasound reception, these advances enable deeper acoustic penetration depths while maximizing optical penetration, effectively increasing the overall penetration depth of the technology over previous reports [39,40]. This increase is otherwise not possible with traditional imaging system designs that either attach light sources to the ultrasound probe or maintain light sources at fixed distances from acoustic receivers. A summary of these critical events appears in the timeline shown in Fig. 2.

Fig. 2. Timeline of key enabling events advancing new possibilities for photoacoustic-guided surgery, with the size of each circle representing the number of clinical co-authors of the papers summarized in Fig. 1. As the year 2021 has not yet concluded at the time of publishing, this datapoint is expected to contain an incomplete clinical co-author count.

Although the advances described above are primarily focused on photoacoustic-guided surgery, a corollary set of observations can be made regarding photoacoustic-guided interventions, which do not necessarily require surgery and may be performed in outpatient clinics rather than in operating rooms. The term “intervention” encompasses both surgeries and outpatient procedures, with surgery representing the narrower subset of procedures. Nonetheless, both surgical and non-surgical interventions have the potential to benefit from photoacoustic-based image guidance. Therefore, both types of interventions are included in our review, which is primarily focused on surgical interventions.

Photoacoustic imaging is useful for surgical and interventional guidance because it provides the ability to distinguish any structure that has a higher optical absorption than surrounding tissue. This feature is beneficial for visualizing many of the common surgical contents named above (e.g., blood vessels, nerves), metal that is often implanted during surgeries and procedures (e.g., brachytherapy seeds [33,41–43], surgical tools or instruments [36]), and contrast agents that may be injected into the body during surgery. This technique also shares many benefits with the common interventional guidance method of ultrasound imaging, including the ability to produce real-time images for surgical guidance, the promise of portable systems, and the absence of harmful ionizing radiation.

A wide range of comprehensive review articles on photoacoustic imaging technology exist [5,44,45], and a growing body of literature in the past three years has summarized contributions to surgical [46,47], interventional [47–50], and clinical [51,52] applications of photoacoustic technology. This review is distinct from previous literature surveys because our focus is the historical progression of applications that extend beyond theoretical, simulated, and experimental phantom demonstrations of feasibility. We provide a critical assessment of the significance, innovations, advantages, and limitations of each application. We also summarize technological details, quantify key findings, and discuss possible future directions.

This review is organized as follows. Section 2 provides a brief overview of the photoacoustic imaging process, with more details available in a few of the landmark papers describing the technology [5,53]. Section 3 summarizes specific applications of photoacoustic surgical guidance, organized by the various organs within the human body, as detailed in Fig. 1, followed by applications that are relevant to multiple organs, then emerging applications with single demonstrations of feasibility. Visual icons of associated body parts accompany the example photoacoustic images in this section. Section 4 discusses the stage of development for each presented procedure. Section 5 describes independent, integrated, custom, and auxiliary hardware that fuels the field of photoacoustic imaging for surgical and interventional guidance. Section 6 summarizes software options and common image quality assessment methods. Section 7 shares possible future directions for photoacoustic-guided surgery, summarizing visions detailed in various papers on this topic, interjected with our own thoughts. Finally, Section 8 concludes the review with a summary and outlook.

2. Photoacoustic imaging overview

Photoacoustic images are generated through a multi-step process: a light source illuminates optically absorbing targets of interest, the absorbed energy causes transient thermal expansion, and the resulting initial pressure distribution relaxes as a pressure wave that propagates throughout a localized region of the body. Mathematically, the optically induced initial pressure distribution can be represented by the following equation [5]:

$$p_{0}(\lambda)=\Gamma \mu_{\mathrm{a}}(\lambda) \Phi\left(\lambda\right)$$
where $\Gamma$ is the Grüneisen parameter, $\mu _{\mathrm {a}}$ is the absorption coefficient, $\Phi$ is the optical fluence, and $\lambda$ is the optical wavelength. The propagated acoustic wave contains broadband ultrasonic frequencies. Forming a photoacoustic image from the raw data received by acoustic sensors requires mapping the received acoustic pressure waves to an approximation of the initial pressure distribution described in Eq. (1). Because the acoustic response can be received by standard clinical ultrasound transducers in current implementations of photoacoustic-guided surgery, the process of photoacoustic image reconstruction for this application often utilizes the standard concept of receive beamforming in ultrasound imaging, as discussed in more detail in Section 6.1.
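To make the reconstruction step concrete, the following minimal sketch illustrates delay-and-sum receive beamforming for photoacoustic channel data, assuming a linear array, a constant sound speed, one-way travel from absorber to element, and no apodization or envelope detection. The array geometry, sampling rate, and sound speed are illustrative assumptions rather than parameters of any system discussed in this review.

```python
# Minimal sketch of delay-and-sum (DAS) photoacoustic beamforming for a linear
# array; all parameters below are illustrative assumptions.
import numpy as np

def das_beamform(channel_data, element_x, fs, c, image_x, image_z):
    """Map received channel data (n_samples x n_elements) to an image grid.

    Photoacoustic delays are one-way: the wavefront travels directly from
    the absorber at each pixel to each element (no transmit delay, unlike
    pulse-echo ultrasound).
    """
    n_samples, n_elements = channel_data.shape
    image = np.zeros((len(image_z), len(image_x)))
    for zi, z in enumerate(image_z):
        for xi, x in enumerate(image_x):
            distances = np.sqrt((element_x - x) ** 2 + z ** 2)  # pixel-to-element
            samples = np.round(distances / c * fs).astype(int)  # one-way delays
            valid = samples < n_samples
            # Sum the delayed channel signals (apodization omitted for brevity)
            image[zi, xi] = channel_data[samples[valid], np.nonzero(valid)[0]].sum()
    return image

# Illustrative usage: 128 elements at 0.3 mm pitch, 40 MHz sampling, 1540 m/s
fs, c = 40e6, 1540.0
element_x = (np.arange(128) - 63.5) * 0.3e-3
channel_data = np.random.randn(2048, 128)  # placeholder for received RF data
image_x = np.linspace(-10e-3, 10e-3, 64)
image_z = np.linspace(5e-3, 30e-3, 128)
image = das_beamform(channel_data, element_x, fs, c, image_x, image_z)
```

Because the photoacoustic wavefront travels one way (from absorber to element), the computed delays are half those of the equivalent pulse-echo ultrasound geometry, which is the main difference between photoacoustic and ultrasound DAS beamforming.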

Due to the dependence of Eq. (1) on the optical wavelength, photoacoustic imaging offers the additional advantage of both targeted imaging and functional imaging. Targeted imaging is enabled by selecting the wavelength for the imaging task based on the absorbing chromophores of interest. Functional imaging is enabled by selecting the wavelength that optimally differentiates two chromophores, allowing for the calculation of physiological parameters indicative of function, such as oxygen saturation. Figure 3 shows the optical absorption properties of a variety of chromophores. These chromophores can be divided into two categories: (1) endogenous or (2) exogenous. Endogenous chromophores are native to the body and enable the creation of photoacoustic images without requiring additional contrast agents. The most common endogenous chromophore is hemoglobin (both oxygenated and deoxygenated), which is present in blood. Targeted imaging of hemoglobin enables visualization and surgical targeting or avoidance of critical blood vessels hidden by tissue. Functional imaging of hemoglobin enables the diagnosis of tumors based on vascularity, informing surgical resection strategies. Other endogenous chromophores include lipids (e.g., within the myelin sheath of nerves) and collagen (e.g., within connective tissue and tendons), which are important for surgeries and interventions that target or avoid nerves and tendons [12] and other biological structures dominated by these chromophores.
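As a concrete illustration of the two-wavelength functional calculation described above, the following minimal sketch estimates oxygen saturation by linear spectral unmixing, assuming the measured photoacoustic amplitude at each wavelength is proportional to the summed absorption of oxygenated and deoxygenated hemoglobin and that fluence differences between the two wavelengths are negligible (a common simplification that degrades at depth). The extinction values are illustrative placeholders rather than tabulated constants.

```python
# Hedged sketch of two-wavelength linear unmixing for oxygen saturation (sO2).
import numpy as np

# Assumed relative extinction coefficients [HbO2, Hb] at two wavelengths
# (illustrative placeholders; real values should come from tabulated data)
E = np.array([[0.29, 1.41],   # 750 nm: deoxygenated hemoglobin absorbs more
              [0.82, 0.78]])  # 800 nm: near the isosbestic point

def estimate_so2(p_750, p_800):
    """Solve E @ [C_HbO2, C_Hb] = [p_750, p_800], then compute sO2."""
    c_hbo2, c_hb = np.linalg.solve(E, np.array([p_750, p_800]))
    return c_hbo2 / (c_hbo2 + c_hb)

print(estimate_so2(p_750=1.0, p_800=1.2))  # example amplitude pair
```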

Fig. 3. Optical absorption spectra of a variety of endogenous chromophores (solid) including water, oxygenated hemoglobin [54], deoxygenated hemoglobin [54], lipids [55,56], and collagen [57], and exogenous chromophores (dashed) including stainless steel [7,58], methylene blue [59], and indocyanine green [60,61]. Stainless steel has a surface passivation layer composed primarily of Cr$_2$O$_3$, which is the primary optical absorber in the clinical imaging of metal [7,58].

Exogenous chromophores are introduced to the body, including contrast agents, metallic interventional tools (e.g., needles, drill bits), and metal-coated implants (e.g., brachytherapy seeds). Contrast agents are often utilized to improve photoacoustic visualization of structures located deep within tissue and structures that lack sufficient endogenous chromophores to produce photoacoustic contrast. Two of the most common FDA-approved contrast agents are methylene blue [62,63] and indocyanine green (ICG) [64–66], which have optical absorption peaks at 668 and 800 nm, respectively. In addition to these existing options, novel and emerging contrast agents have been created for targeted imaging [67–69] or for multi-modality functionality [70–72], the latter of which enables surgical application and validation with established imaging modalities. More details on contrast agents for photoacoustic imaging are available in review articles on this topic [73–75].

3. Surgical and interventional applications for multiple organ sites

3.1 Brain

Tumor resection within the brain is currently performed based on pre-operative computed tomography (CT) images, which are not updated with anatomical changes that naturally occur during the procedure [76]. Photoacoustic imaging has the potential to provide real-time information for guidance and assessment of tumor margins during the resection of brain tumors, such as gliomas, despite the presence of possible brain shifts during surgery. Najafzadeh et al. [77] presented a simulation-based study aimed at using multiwavelength excitation for precise localization and safe maximal resection of gliomas, despite their similar appearance to surrounding tissue. While still preliminary, the results are also promising for providing detailed images of blood vessels, which is critical for the detection of residual glioma.

Targeting a similar problem, Jia et al. [68] proposed the use of a combination of fluorescence molecular imaging and photoacoustic tomography and demonstrated the proposed system in vivo with a mouse model. This approach enabled visualization of tumors through the intact skull using targeted nanoparticles (IRDye899-Hfn). Although the data were not shown, the authors state that the combination of photoacoustic tomography and fluorescence molecular imaging extended the survival times of the in vivo mice.

In addition to the visualization of blood vessels, it is important for surgeons to differentiate blood vessels from nerves in order to minimize iatrogenic complications. Graham et al. [78] proposed multispectral photoacoustic imaging to visualize nerves and blood vessels using longer (i.e., 1230 nm) and shorter (i.e., 750 nm) wavelengths, respectively. This hypothesis was validated in an ex vivo experiment demonstrating 18.2 dB blood vessel contrast compared to 0.61 dB nerve contrast at a 750 nm laser wavelength. At a 1230 nm laser wavelength, this trend was reversed, and the nerve contrast was 10.7 dB compared to 6.6 dB blood vessel contrast. These results indicate the feasibility of simultaneous nerve and blood vessel visualization during surgery when using a fast-tuning system that exploits the optical absorption peaks of hemoglobin and lipids.
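The dB contrast values quoted in this and subsequent sections are commonly computed as a logarithmic ratio of mean signal amplitudes within a target region of interest (ROI) and a background ROI. The minimal sketch below assumes the 20 log10 convention; the image and ROI coordinates are synthetic placeholders rather than data from the cited studies.

```python
# Minimal sketch of a dB contrast calculation between target and background ROIs.
import numpy as np

def contrast_db(image, target_roi, background_roi):
    """Contrast = 20*log10(mean target amplitude / mean background amplitude)."""
    mu_target = np.abs(image[target_roi]).mean()
    mu_background = np.abs(image[background_roi]).mean()
    return 20 * np.log10(mu_target / mu_background)

# Synthetic example: a bright region in an otherwise noisy beamformed image
image = np.abs(np.random.randn(128, 64))
image[40:60, 20:30] += 5  # hypothetical target (e.g., a vessel cross section)
target_roi = (slice(40, 60), slice(20, 30))
background_roi = (slice(80, 100), slice(20, 30))
print(f"Contrast: {contrast_db(image, target_roi, background_roi):.1f} dB")
```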

3.2 Pituitary

The pituitary gland sits at the base of the brain within the skull. The resection of tumors on this gland is typically performed with an endonasal, transsphenoidal approach, which requires insertion of surgical instruments into the nostrils to access the sphenoid sinus and remove sphenoid bone. The internal carotid arteries (ICAs) are located behind the sphenoid bone [79,80] and must be avoided in order to prevent accidental injury, which could be catastrophic, possibly leading to patient death. Guidance is currently performed with pre-operative CT images, which do not show real-time changes in anatomy or the proximity of surgical tools relative to the ICAs.

To improve real-time surgical guidance, Bell et al. [81] demonstrated short-lag spatial coherence (SLSC) beamforming techniques to visualize metal targets through varying thicknesses of human temporal bone with an increase in signal contrast of 11-27 dB compared to more conventional delay-and-sum (DAS) beamforming. Simulations were then performed to determine target localization capabilities despite sound speed differences between surrounding tissue and bone [38]. Results demonstrated $\leq$ 2 mm localization errors and elucidated benefits of fiber positioning flexibility relative to the ultrasound transducer. Transitioning to targets that included bovine blood and a sheep brain, ex vivo experiments were performed to visualize blood vessels in the presence of 0 to 2.0 mm thick human sphenoid and temporal bones. Results demonstrated vessel visualization with contrast up to 19.2 dB with minimum required energies for visualization ranging from 1.2 to 5.9 mJ depending on the thickness of bone [82]. Kim et al. [83] then introduced the concept of a telerobotic photoacoustic image-guided navigation system to locate the cross-sectional carotid artery centers. The system was validated with the open source da Vinci Research Kit electronics and software [84] and phantom experiments, demonstrating mean overall system accuracy of 1.27 mm. Future directions of this work include the implementation of guidance virtual fixtures to constrain tool motion [83].

Fig. 4. Example photoacoustic image guidance during an endonasal transsphenoidal surgery, showing capability to visualize and avoid the right internal carotid artery (RCA) during pituitary tumor resection [35]. Photoacoustic signals were overlaid on co-registered CT or ultrasound images acquired with the ultrasound probe placed on the eyelid of a human cadaver. The SLSC beamforming approach provides clearer visualization of the RCA when compared to DAS beamforming of the same signals. (Adapted with permission from Graham et al., Photoacoustics 19, 100183 (2020). Copyright 2020 Elsevier.)

To prepare this technology to transition from ex vivo bone samples to patients, Eddins and Bell [36] designed a custom light delivery system surrounding a metallic drill, and the prototype was evaluated by imaging through cadaveric bone specimens ranging in thickness from 0.5 mm to 4 mm, showing increased vessel visibility as bone thickness decreased. Graham et al. [35] presented an alternative light delivery system design that is independent of surgical tools and investigated optimal locations for the acoustic receiver, newly identifying the eye as a potential region for photoacoustic signal reception during surgery. This optimal location was determined using simulations and validated using a combination of experimental setups including an empty human skull, a human skull filled with brain tissue and eyes, and finally the first demonstration of photoacoustic images from within an intact human cadaver head. Figure 4 shows example photoacoustic images overlaid on either the ultrasound or CT images from [35] with the ultrasound probe placed on the eyelid of the cadaver. In this probe position, the DAS image contains a significant level of background noise, reducing contrast of the ICA. However, the SLSC image removes the background noise and enables visualization of the ICA with 30 dB contrast. Overall, results demonstrated a system capable of producing photoacoustic images with contrast of the ICAs as high as 35 dB with the ultrasound probe in the ocular region and with laser energies within currently acceptable safety limits. These results are encouraging for the development of photoacoustic systems that assist with avoiding accidental patient death during surgery.

3.3 Spine

Reports of photoacoustic image guidance within the spine have focused on two procedures: (1) spinal fusion surgeries and (2) stem cell delivery into the spinal cord. During spinal fusion surgeries, screws are inserted through the pedicles of vertebrae to connect vertebrae with a metal rod in order to stabilize a damaged spine. Misplacement of these screws can cause significant complications, including numbness, lower-extremity paraplegia, and neurological deficits [85]. The first demonstration of photoacoustic signal visualization within a vertebra was achieved with prototype drill bits [86]. Specifically, an optical fiber was inserted into the hollow core of either a single- or a multi-hole drill bit, and a cannulated motor component was used to couple the rotating drill bit to the stationary light source. This design was validated with the drill tip inserted in the pedicle of a thoracic vertebra from a human cadaver.

Shubert and Bell [87] then investigated 3D photoacoustic imaging to determine the ideal entry point for the drill by detecting cancellous bone associated with the pedicle and differentiating this ideal target region from the surrounding cortical bone of the vertebrae. From left to right, Fig. 5(a) shows intersecting biplanar views of the 3D photoacoustic volume, lateral-elevational slices of this 3D volume, and the same lateral-elevational slices overlaid on co-registered ultrasound images. The top and bottom rows of Fig. 5(a) show examples associated with cortical and cancellous bone, respectively. Signals from the cancellous bone within the pedicle have lower amplitudes and are more diffuse than the higher amplitude, more compact signals from the cortical bone surrounding the pedicle.

Fig. 5. Example spinal surgery applications targeting spinal fusion surgeries [87] and targeting stem cell delivery into the spinal cord [67]. (a) Biplanar views of the 3D photoacoustic volume, lateral elevational photoacoustic image slices, and lateral elevational photoacoustic image slices overlaid on the co-registered ultrasound image (from left to right, respectively), demonstrating differences in photoacoustic signal appearance between cortical (orange arrow) and cancellous (blue arrow) bone. (b) In vivo 3D and 2D (top and bottom, respectively) photoacoustic images overlaid on ultrasound images of PBNC-labeled stem cells after injection and needle removal in the spinal cord. (Adapted from: J. Shubert and M. A. L. Bell, Phys. Med. Biol. 63(14), 144001 (2018). Copyright 2018 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License; K. Kubelick and S. Emelianov, Neurophotonics 7, 030501 (2020). Copyright 2020 Author(s), licensed under a Creative Commons Attribution 4.0 License.)

Photoacoustic imaging was additionally proposed as a method to ensure the correct drill trajectory once inside the pedicle, then validated using a human cadaver spine with surrounding tissue attachments intact [88]. Due to image artifacts caused by sound speed differences when imaging through bone, González and Bell [89] introduced and applied locally weighted short-lag spatial coherence (LW-SLSC) beamforming to improve visualization of single- and multi-hole drill bit tips, which were tracked over an insertion distance of 0-8 mm with mean location errors ranging from 1.02 to 1.5 mm in a human vertebra specimen cleaned of surrounding tissue attachments. Artifacts that contributed to larger errors were not present in follow-up human cadaver experiments, indicating that the presence of surrounding tissue assists with reducing image artifacts [88].

In addition to corrective surgery, neurodegenerative diseases and traumatic spinal cord injuries can potentially be treated through the delivery of stem cells into the spinal cord. Photoacoustic imaging was proposed to provide real-time guidance to targeted areas [67,69]. Specifically, Donnelly et al. [69] tagged stem cells with gold nanospheres, followed by ultrasound and photoacoustic imaging to confirm injection locations and quantify the volume of cells injected. Ex vivo rat experiments and spectral unmixing techniques were employed to separate photoacoustic signals from the needle, gold nanoparticles, and endogenous absorbers (i.e., oxygenated and deoxygenated hemoglobin), resulting in visualization of as few as 1,000 stem cells. Kubelick and Emelianov [67,90] introduced an improvement by using Prussian blue nanocube (PBNC) labeled stem cells, which enabled postoperative MRI guidance that was not available with the gold nanospheres. Photoacoustic imaging was used to guide needle insertion and injection of stem cells into the spinal cord of in vivo rats [67]. Figure 5(b) shows in vivo photoacoustic images overlaid on ultrasound images of the PBNC-labeled stem cells after injection and needle removal, including the volumetric ultrasound/photoacoustic images visualizing the bolus of injected stem cells (top) and an axial cross-sectional view of the volumetric acquisition (bottom). The locations of stem cells were visualized using 3D volumetric photoacoustic images (as shown in Fig. 5(b)) and verified with post-operative MRI and histology, demonstrating the accuracy of the imaging system and encouraging future work in the quantification of stem cell delivery using this platform. Future directions of this work include establishing a quantitative basis for photoacoustic monitoring of stem cells to measure concentration and dose delivered and longitudinal monitoring to understand stem cell behavior and therapy progression over time [67].

3.4 Breast

From an interventional perspective, photoacoustic imaging of breast tissue primarily targets margin detection in breast conserving surgery, which is critical to ensure cancerous tissue is removed during the procedure, as explored by multiple groups. Xi et al. [91] demonstrated photoacoustic tomography in order to map tumors in 3D in an in vivo mouse model. Micro-electromechanical system (MEMS) mirrors enabled scanning for tomographic reconstructions, with SNR decreasing from 32 dB to 18 dB depending on the target depth (i.e., 0-2.3 mm). Improving on this work, nanoparticle-enhanced photoacoustic and fluorescence imaging was shown to reduce the rate of cancer recurrence [71]. The dual photoacoustic and fluorescence imaging system used photoacoustic imaging to plan tumor resection (because of the deeper penetration offered by photoacoustic imaging), followed by fluorescence imaging to guide the tumor resection. The system was demonstrated with in vivo mice, resulting in visualization of tumor margins up to a tumor depth of 31 mm. Overall, the system resulted in a reduction in local tumor recurrence from 33.3% in the control group to 8.7% in the targeted group.

Kosik et al. [92] tested an intraoperative photoacoustic screening (iPAS) system on three human lumpectomy tissue samples using transmission-mode photoacoustic imaging and robotic data acquisition. Figure 6(a) shows the iPAS system results on a human lumpectomy specimen, where the hypointense areas in the ultrasound and lipid-weighted 930 nm iPAS images show good agreement with the hyperintense area in the hemoglobin-weighted 690 nm iPAS image. Results demonstrated differences in lipid concentration between benign and malignant breast tissue while imaging with an estimated resolution of 2.5 mm. While these results are promising for future investigations, the use of excised specimens removed expected vascularity from the samples, resulting in differentiation based on the lipid absorption spectrum as an indicator of malignancy, rather than the expected hemoglobin spectrum.

Fig. 6. Example applications targeting breast conserving surgery [92,93]. (a) Intraoperative photoacoustic screening (iPAS) assessment of a human lumpectomy specimen showing agreement between the hypoechoic and hyperechoic ultrasound regions with the 930 nm and 690 nm iPAS images, respectively [92]. (b) Positive and negative (top and bottom, respectively) margins of a human lumpectomy sample, with component 1 and component 2 photoacoustic images representing hemoglobin and fat, respectively. In the binary cancer map, magenta indicates normal and blue indicates cancer [93]. (Adapted from: I. Kosik et al., Journal of Biomedical Optics 24, 056002 (2019). Copyright 2019 Author(s), licensed under a Creative Commons Attribution 4.0 License; R. Li et al., Biomedical Optics Express 6, 1273-1281 (2015). Copyright 2015 Optical Society of America.)

Finally, Li et al. [93] relied on multispectral photoacoustic tomography and the local optical absorption of fat and hemoglobin for image contrast. A pixel containing fat but no hemoglobin was characterized as normal, while a pixel containing both fat and hemoglobin or hemoglobin but no fat was characterized as tumor. Figure 6(b) shows examples of positive and negative cancer margins (top and bottom, respectively) in ultrasound images, hemoglobin components, fat components, cancer maps with magenta representing normal tissue and blue representing cancer, and the histology ground truth images (from left to right, respectively). The system visualized structures as deep as 3 mm from the superficial detector and light source, with an axial resolution of 125 $\mu$m. However, the system requires samples to be excised, preserved in formalin, embedded in gel phantoms, and placed in a tank with PBS solution. A solution that mitigates these steps would be preferred for speedier intraoperative feedback.
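The pixel-wise rule described above maps directly to a simple classification function. The following minimal sketch assumes binary fat and hemoglobin masks have already been derived from the unmixed component images; the thresholded masks shown here are hypothetical placeholders, not outputs of the cited method.

```python
# Minimal sketch of the pixel-wise margin classification rule described above.
import numpy as np

def classify_margin(fat_mask, hgb_mask):
    """Fat without hemoglobin -> normal; hemoglobin (with or without fat)
    -> tumor; pixels with neither chromophore remain unlabeled."""
    labels = np.full(fat_mask.shape, "unlabeled", dtype=object)
    labels[fat_mask & ~hgb_mask] = "normal"
    labels[hgb_mask] = "tumor"
    return labels

# Hypothetical binary masks from thresholded component images
fat_mask = np.random.rand(64, 64) > 0.4
hgb_mask = np.random.rand(64, 64) > 0.8
cancer_map = classify_margin(fat_mask, hgb_mask)
```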

3.5 Heart

Demonstrations of photoacoustic-based guidance in the heart revolve around cardiac catheterization procedures. For example, two approaches to track the catheter were proposed: (1) photoacoustic active ultrasound and (2) photoacoustic-based visual servoing. In the first approach, an active ultrasound element was employed to track the cardiac catheter through the combination of one transmission fiber and one reception fiber using a Fabry-Perot hydrophone [94]. In the second approach, using photoacoustic images as the computer vision input for a robot-based tracking system, Graham et al. [95] demonstrated a photoacoustic-based visual servoing system as a method to guide cardiac catheterization procedures. In addition, a system was proposed to identify contact with the cardiac wall in order to improve ablation procedures. The complete system was validated in two in vivo swine experiments, demonstrating 3D root mean square errors of 1.24-1.54 mm as verified by a 3D electromagnetic-based cardiac mapping system. The first known photoacoustic images from within an in vivo beating heart with size and anatomy similar to human hearts were also presented and are shown in Fig. 7(a). These images were insightful to determine catheter tip contact with the endocardium in cases where ultrasound failed because of the poor catheter tip contrast in the ultrasound images. In addition, differences in photoacoustic signal amplitudes were observed in the absence or presence of this contact. Using a similar system, Allman et al. [96] implemented deep learning methods [28] to improve segmentation of catheter tips during these cardiac catheter interventions, resulting in correct classification of 91.4% of true sources.

Fig. 7. Example cardiac applications targeting cardiac catheterizations [95] and radiofrequency ablation monitoring [97]. (a) In vivo photoacoustic images of a cardiac catheter in contact (top) and not in contact (bottom) with an in vivo swine heart. (b) Pre- and post-ablation regions at three different wavelengths and a corresponding dual-wavelength image visualizing the ablated region (arrow). (Adapted from: Graham et al., IEEE Trans. Med. Imaging 39(4), 1015–1029 (2020). Copyright 2020 Author(s), licensed under a Creative Commons Attribution 4.0 License; S. Iskander-Rizk et al., Biomedical Optics Express 9, 1309-1322 (2018). Copyright 2018 Optical Society of America.)

Once inside the heart, Iskander-Rizk et al. [97] developed a dual-wavelength technique based on the ratio of photoacoustic signals at two different wavelengths in order to distinguish radiofrequency ablation (RFA) lesions from normal tissue. Figure 7(b) shows example results from this study demonstrating visualization both pre- and post-ablation at three different wavelengths. The dual-wavelength image shown at the bottom of Fig. 7(b) was formed using the 790 nm and 930 nm wavelength results to successfully distinguish the RFA lesion from the normal tissue with a diagnostic accuracy of 97%, compared to a diagnostic accuracy of 82% with the single-wavelength (i.e., 640 nm) approach. Iskander-Rizk et al. [98] later demonstrated the proposed system operating in real time on an ex vivo passively beating heart model using a photoacoustic-enabled RFA catheter. Possible future directions of this work include integration with electroanatomical maps in order to provide a 3D rendering of the extent of the photoacoustic-assessed lesion [98].
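A per-pixel version of this dual-wavelength ratio can be sketched as follows, assuming co-registered photoacoustic amplitude images at the two wavelengths named above; the decision threshold is a hypothetical placeholder rather than the classifier published in [97].

```python
# Hedged sketch of a dual-wavelength ratio map for ablation lesion assessment.
import numpy as np

def ratio_map(pa_790, pa_930, eps=1e-9):
    """Per-pixel amplitude ratio; ablated tissue is expected to exhibit a
    ratio distinct from untreated tissue."""
    return pa_790 / (pa_930 + eps)

# Placeholder co-registered amplitude images at 790 nm and 930 nm
pa_790 = np.abs(np.random.randn(128, 128))
pa_930 = np.abs(np.random.randn(128, 128))
lesion_mask = ratio_map(pa_790, pa_930) > 1.5  # hypothetical threshold
```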

3.6 Liver and pancreas

Photoacoustic guidance in abdominal surgery is focused on either the avoidance of major abdominal hemorrhage or the targeting of vessels for cauterization during abdominal surgeries in the liver and pancreas. Kempski et al. [101,102] demonstrated feasibility in a series of in vivo swine experiments, where the photoacoustic signal was observed to be focused when visualizing a major vessel in the liver and diffuse in nearby regions of the liver, a distinction made possible by subtle adjustment of the light source relative to a stationary ultrasound probe. Blood vessels in the in vivo liver were visualized with a contrast of 10 to 15 dB with laser energies ranging from 20 to 40 mJ. Similarly, vessels in the pancreas were visualized with a contrast of 17.3 dB at an energy of 36 mJ, which was determined to be sufficient for this organ. Post-surgery histopathological analyses showed no necrosis of pancreas tissue exposed to laser energy for approximately 40 minutes and necrosis of liver tissue exposed for approximately 80 minutes. These findings motivate the importance of future investigations with laser exposure durations that are more representative of surgical guidance applications. This work is the first to demonstrate photoacoustic visualization of blood vessels in the in vivo liver and pancreas, and the results are expected to inform future investigations in this area. Possible future directions include development of illumination methods that enable distinctions between focused and diffuse signals without requiring motion of the light source [101,102].

3.7 Kidney

Interventional photoacoustic applications for the kidney are centered on treatment monitoring during shockwave lithotripsy, which is used to break kidney stones. Specifically, Li et al. [99] proposed an integrated system capable of photoacoustic tomography and passive cavitation detection during shockwave lithotripsy. Figure 8(a) shows example photoacoustic images from an in vivo mouse obtained after applying 200 and 1000 shockwave pulses (top and bottom, respectively). The induced hemorrhage is visible as the bright region in the photoacoustic image near the shockwave focus. Follow-up work in [100] introduced a uniform light delivery system to be inserted through the urethra to image the kidneys, a route previously taken for transurethral illumination to achieve prostate imaging [37]. The new design was tested on an in vivo swine, with example results shown in Fig. 8(b). While the larger illumination area enables visualization of multiple contents in a single image, one disadvantage of larger illumination areas is the potential for more acoustic clutter when compared to smaller illumination areas [47], which effectively reduces the overall image quality and the ability to resolve vasculature and other contents. However, for this shockwave lithotripsy application, the purpose is to monitor vascular damage over time rather than resolve individual blood vessels, thus the acoustic clutter introduced by the larger illumination area is not a major concern.

Fig. 8. Example renal application targeting visualization of vascular injury monitoring during shockwave lithotripsy [99,100]. (a) Photoacoustic tomography images in in vivo mice after 200 and 1,000 shockwave pulses (top and bottom, respectively) with hemorrhage observed at the shockwave focus (arrow). (b) Example of the proposed internal diffuser (top) used to produce vascular images from an in vivo swine kidney. The ultrasound image is shown for anatomical orientation (bottom left) and the photoacoustic image is overlaid on the ultrasound image (bottom right). (Adapted from: M. Li et al., IEEE Transactions on Medical Imaging 39, 468-477 (2019); M. Li et al., IEEE Transactions on Medical Imaging 40(1), 346-356 (2021). Copyright 2020 IEEE.)

3.8 Uterus

Photoacoustic image guidance within and around the uterus has focused on two main procedures: (1) hysterectomy (i.e., the surgical removal of the uterus) and (2) fetal surgery. One critical concern during hysterectomy is the avoidance of accidental injury to the ureters when severing the main blood supply to the uterus (i.e., the uterine artery) [103]. With a 3D printed uterine vessel model covered in ex vivo bovine tissue, Allard et al. [34,104] demonstrated a photoacoustic approach to hysterectomy guidance with a specialized light delivery system that surrounded the teleoperated scissor tool of a da Vinci robot, which is used to perform teleoperative surgeries. Robot trajectories were recorded and displayed relative to the solid model of the 3D printed vessel network to confirm visualization of image contents. After initial feasibility was demonstrated, Wiacek et al. [62] analyzed the ability of photoacoustic imaging to differentiate between the ureter and the uterine artery. Methylene blue was used to improve ureter contrast and to simultaneously differentiate the ureter from the uterine artery with multi-wavelength photoacoustic imaging. This configuration was tested in vessel-mimicking phantom experiments, resulting in contrast differences between the ureter and uterine artery of 2.77 dB with an illumination wavelength of 690 nm and 12.87 dB with an illumination wavelength of 750 nm. Wiacek et al. [63] then demonstrated a dual-wavelength photoacoustic system in a human cadaver with surrounding anatomy intact, obtaining contrast differences between the ureter and uterine artery of 0.1 dB at a wavelength of 690 nm and 4.5 dB at a wavelength of 750 nm. Follow-up work in [63] demonstrated the first known surgical guidance system that converts photoacoustic signals to an auditory sound in efforts to alert surgeons of the risk of ureteral injury.

In fetal surgery, visualizing the placental vasculature has the potential to guide procedures such as minimally-invasive fetoscopic laser photocoagulation, which is used as a treatment for twin-to-twin transfusion syndrome (TTTS) and is associated with high perinatal mortality when left untreated. Xia et al. [105] visualized the placental vasculature with multispectral photoacoustic imaging. Light was delivered directly to the placental surface with an optical fiber inserted into a fetoscope and a fiber-optic hydrophone enabled ultrasonic tracking for improved visualization of the position of the light delivery fiber. The system was validated on an ex vivo human placenta specimen. Correspondence between the measured photoacoustic spectrum and the optical absorption spectrum of deoxygenated blood was observed and submillimeter tracking accuracy was achieved. Maneas et al. [106] followed up with a clinical system and demonstrated its ability to visualize vessels at a depth of up to 7 mm relative to the superficially placed light source and detector. Figure 9 shows example images of vasculature within an ex vivo human placenta. The 2D photoacoustic and ultrasound image shows superficial blood vessels indicated by red arrows, which correspond to the blood vessels in the photograph (also indicated by red arrows). The structure of these vessels is more prominent in the 3D photoacoustic image, displayed as a maximum intensity projection. This display highlights the benefit of 3D photoacoustic imaging.

Fig. 9. Example uterus applications from [106] targeting minimally invasive fetal interventions. The green dotted line in the photograph indicates the 2D cross section visualized in the 2D ultrasound and photoacoustic images. (Adapted from E. Maneas et al., Journal of Biophotonics 13, e201900167 (2020). Copyright 2019 Author(s), licensed under a Creative Commons Attribution 4.0 License.)

3.9 Prostate

The primary work for photoacoustic guidance in the prostate encompasses three areas related to prostate cancer: (1) prostate imaging and biopsy, (2) prostate brachytherapy guidance, and (3) radical prostatectomy. Prostate cancer is typically diagnosed using a combination of a prostate-specific antigen (PSA) blood test and a digital rectal examination (DRE), followed by a prostate biopsy. Prostate biopsies are performed using transrectal ultrasound (TRUS) based on a systematic sampling of tissue; however, the accuracy is poor [107]. To improve the differentiation between malignant, benign, and normal prostate tissue, Dogra et al. [108] demonstrated the use of multispectral photoacoustic imaging to create maps of oxygenated and deoxygenated hemoglobin, lipids, and water. Statistically significant distribution differences of deoxygenated hemoglobin and lipids were observed between malignant and normal prostate tissue, resulting in 81.3% sensitivity and 96.2% specificity. With a similar goal of distinguishing benign from malignant prostate tissue and using this information to determine the optimal biopsy target, Bungart et al. [109] demonstrated the use of photoacoustic tomography and ultrasound. Paired with a texture-based image processing technique, the algorithm employs k-means clustering to determine the optimal biopsy target based on photoacoustic images at 1064 nm, resulting in identification of 100% of the primary lesions and 67% of the secondary lesions. The algorithm was validated using ex vivo specimens that were rinsed in saline prior to imaging, and results were confirmed with histopathology.

Prostate brachytherapy is a form of radiation therapy used to treat prostate cancer in which small radioactive seeds are inserted into the prostate. The placement of the seeds is typically performed under TRUS guidance; however, the small seeds are difficult to visualize with ultrasound alone. Su et al. [43] proposed the use of photoacoustic imaging to improve brachytherapy seed visualization and placement accuracy. The proposed approach was demonstrated in a gelatin phantom with contrast improvements for seeds in the long-axis orientation from 2.7 dB to 27.9 dB compared to ultrasound alone. Although artifacts resulting from the angular dependence of the photoacoustic signal were observed, the results were promising for additional investigation. Kuo et al. [42] presented a photoacoustic imaging system for integration with the current TRUS system used for prostate brachytherapy visualization and localization. The system was validated with successful visualization of brachytherapy seeds in an ex vivo canine prostate. Bell et al. [33,37,41] demonstrated the ability of SLSC imaging to improve visualization of prostate brachytherapy seeds, allowing for higher contrast and SNR at lower laser energies. During ex vivo studies [33], improvements ranged from 3-25 dB in contrast compared to the traditional DAS beamformer at a laser energy of approximately 8 mJ. The SLSC technique was demonstrated in vivo [41] in a canine model using a novel interstitial light delivery system and a TRUS probe, with 10-20 dB improvements in contrast over DAS, laser energies ranging from 6.8 to 10.5 mJ, and fiber-to-seed distances as large as 9.5 mm. During a combination of in vivo and ex vivo experiments, visualization was also demonstrated using a novel transurethral light delivery system and a TRUS probe, enabling improved and consistent visualization of brachytherapy seeds with SLSC beamforming, with seed contrasts of 28-32 dB regardless of the distance of the seed from the probe [37]. Figure 10 shows example results of this system demonstrating CT, ultrasound, and photoacoustic images of three brachytherapy seeds in a canine prostate. DAS beamforming with a narrow light beam enabled selective illumination of individual seeds (depending on the direction and orientation of the light relative to the seeds), while SLSC beamforming enabled consistent visualization of all brachytherapy seeds.

Fig. 10. Example prostate application from [37] targeting prostate brachytherapy. Postoperative CT image of three brachytherapy seeds in an in vivo canine prostate and the corresponding ultrasound and DAS/SLSC photoacoustic images of these seeds using a transrectal ultrasound probe and transurethral light delivery. (Reprinted from M. A. L. Bell et al., Journal of Biomedical Optics 20, 036002 (2015). Copyright 2015 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License.)

Finally, radical prostatectomy is a method of treating prostate cancer by removing the entire prostate gland and seminal vesicles. This procedure is commonly performed with robotic assistance, and Moradi et al. [110] demonstrated a method to assist with determining optimal resection margins during robotic radical prostatectomy using photoacoustic imaging. The da Vinci Research Kit [84] was employed to maneuver the transducer to form a cylindrical detection surface around the prostate. The system utilizes a “pick-up” transducer that can be held by one of the patient-side manipulators of the robot [111] and was demonstrated with a gelatin-based phantom with a prostate-sized inclusion. Expanding on this work, [112] employed a shared control configuration with virtual fixtures to constrain the motion of the pick-up transducer. Tested on a phantom containing pencil leads within chicken breast tissue, the system demonstrated mean position and orientation errors of 5.77 mm and 2.82$^\circ$, respectively. Additional details on this robotic integration are available in Section 5.3.4.

3.10 Multi-organ applications

3.10.1 Intraoperative tumor margin assessment of intact or resected tissue

Assessing tumor margins is an important step for surgical applications that resect abnormal or cancerous tissue. Surgeons need to ensure that cancerous tissue is removed from the body, while keeping as much normal tissue as possible. This type of tumor margin assessment is critical in the brain [68,77] and breast [71,91–93], as described in Sections 3.1 and 3.4, respectively. In addition, to improve intraoperative assessment of tumor margins, Yu et al. [113] demonstrated the use of photoacoustic imaging to detect hepatic micrometastases from melanoma at an early stage. Specifically, a spectral unmixing approach was implemented to differentiate chromophores in both benign and malignant tissue, based on differences in dynamic oxygen saturation between the two tissue types [113]. The system was validated on an in vivo mouse model and compared to MRI, PET/CT, and bioluminescence imaging, demonstrating the ability to detect metastases as small as 400 $\mu$m at a tumor depth of up to 7 mm.

Qi et al. [72] demonstrated the use of a specialized nanoparticle-based contrast agent with triple-modality functionality, allowing for fluorescence, photoacoustic, and Raman properties to be tuned and boosted based on the molecular structure. In initial experiments on three in vivo mice, fluorescence and photoacoustic imaging provided preoperative tumor information with a maximum signal-to-background ratio of 7 after 24 hours. In addition, the intraoperative fluorescence-Raman imaging was performed in twenty in vivo mice, resulting in no tumor recurrence after a 60-day observation period. Overall, this contrast agent has the potential to improve cancer imaging and resection in a variety of oncology applications.

During microsurgeries, a microscope is used to perform procedures, and tumor margins can be assessed with high resolution. Therefore, aiming to improve surgical vision for microsurgeries, the work in [114] demonstrated a near-infrared virtual intraoperative photoacoustic optical coherence tomography (NIR-VISPAOCT) system that combines photoacoustic microscopy (PAM) and optical coherence tomography (OCT) with a conventional surgical microscope to provide surgeons with real-time comprehensive biological information, including tumor margins, tissue structure, and a magnified view of the region of interest. Figure 11(a) shows an example PAM image overlaid on the OCT image during an in vivo demonstration of NIR-VISPAOCT-guided needle insertion. The NIR-VISPAOCT system has the potential to improve tumor margin assessment and surgical vision in the brain and eye (i.e., neurosurgeries and ophthalmological surgery) as well as other microsurgeries.

Fig. 11. Photoacoustic needle visualization examples that span a range of organs and applications, including microsurgeries on the brain and eyes [114], percutaneous ablation of the liver, lung, kidney, and bone [115], and robot-assisted biopsy [116]. (a) Photoacoustic microscopy (PAM) image overlaid on an optical coherence tomography (OCT) image acquired during an in vivo demonstration of near-infrared virtual intraoperative photoacoustic optical coherence tomography (NIR-VISPAOCT)-guided needle insertion. (b) Schematic diagram, corresponding ultrasound image, and photoacoustic image overlaid on the ultrasound image (from left to right, respectively) obtained during RFA needle insertion into bovine liver through a layer of chicken tissue. (c) Pairs of ultrasound and overlaid photoacoustic images obtained in the presence of a needle inserted in fat and liver tissue (left and right, respectively). (Adapted from: D. Lee et al., Scientific Reports 6, 35176 (2016). Copyright 2016 Author(s), licensed under a Creative Commons Attribution 4.0 License; K. J. Francis and S. Manohar, Physics in Medicine & Biology 64, 184001 (2019). Copyright 2019 IOPscience; M. A. L. Bell and J. Shubert, Scientific Reports 8, 1-12 (2018). Copyright 2018 Author(s), licensed under a Creative Commons Attribution 4.0 License.)

Thawani et al. [70] demonstrated a novel contrast agent for use in tumor margin assessment by combining two FDA-approved components, ICG and superparamagnetic iron oxide (SPIO) nanoparticles. The contrast agent (ICG-SPIO clusters) enabled preoperative MRI detection and intraoperative photoacoustic imaging. The performance of ICG-SPIO clusters was assessed in a randomized, blinded surgical trial, in which 12 mice underwent microscopic surgery and 12 mice underwent photoacoustic-guided surgery to assess margins and resect an invasive tumor model. After the 42-day endpoint, the photoacoustic-guided surgery cohort demonstrated improved survival, indicated by recurrent tumors in 3 of 12 mice (compared to 8 of 12 mice in the microscopic surgery cohort). These surgical resection results are promising for future work with ICG-SPIO clusters and for investigation into cell-specific packaging to enable additional surgical applications. Other applications of tumor margin assessment based on photoacoustic images are discussed in the review by Valluru et al. [117].

3.10.2 Needle visualization during percutaneous interventions

The following percutaneous interventions focus on two categories: (1) ablation and (2) biopsy. Percutaneous ablation can be used to treat tumors in organs such as the liver, lung, kidney, and bone [118]. Francis and Manohar [115] proposed the use of photoacoustic imaging to visualize the radiofrequency ablation needle, guide this needle to the ablation site, and quantify the extent of ablation achieved. Figure 11(b) shows a schematic diagram and corresponding ultrasound and photoacoustic images demonstrating visualization of an RFA needle after insertion into bovine liver through a layer of chicken breast tissue. In the ultrasound image (i.e., Fig. 11(b) center), the tissue boundary between the liver and chicken breast tissue is visible; however, the RFA needle is not distinguishable. With the addition of photoacoustic imaging (i.e., Fig. 11(b) right), the RFA needle was accurately identified and tracked.

Ultrasound is often used to guide percutaneous biopsies; however, poor needle contrast with surrounding tissue and the presence of artifacts (e.g., acoustic clutter, reverberation from metal) have motivated investigations into using photoacoustic image guidance as an alternative. Kim et al. [66] proposed one of the first systems to guide sentinel lymph node biopsy with a hand-held ultrasound probe and bifurcated fiber bundles integrated with the probe. ICG was used to improve photoacoustic contrast, and the presence of signal was confirmed with in vivo and ex vivo fluorescence imaging. Piras et al. [31] improved this approach by separating the light source from the acoustic receiver and bringing the light source closer to the region of interest. Specifically, an optical fiber delivering 1064 nm wavelength laser light was inserted into the hollow core of a biopsy needle. This novel approach was demonstrated in a phantom consisting of a fish heart embedded in chicken breast tissue, and photoacoustic imaging enabled improved visualization of both the ground fish heart and the needle compared to ultrasound imaging, with contrast improvements of 48% and 17%, respectively. Expanding this optical fiber-based approach, Xia et al. [32] demonstrated a multispectral photoacoustic imaging system by delivering excitation light ranging from 750 to 900 nm and from 1150 to 1300 nm from within the cannula of a needle. The system was evaluated in phantoms and ex vivo tissue, demonstrating an axial resolution of 100 $\mu$m and submillimeter depth-dependent lateral resolution. In addition, two veins in ex vivo human placenta samples were visualized, and the photoacoustic signal amplitude as a function of wavelength was similar to the optical absorption spectrum of deoxygenated blood.

Transitioning this approach to a compact light-emitting diode (LED) solution, Xia et al. [21] employed a commercially available LED-based system to visualize clinical metal needles. The imaging depth of the system was characterized and superficial vasculature in humans was imaged. Results showed needle visualization with 1.2 to 2.2 times higher SNR than ultrasound alone over insertion angles from 26$^\circ$ to 51$^\circ$ and a maximum imaging depth of 38 mm from the superficial detector when averaging between 128 and 2560 frames.

Other applications include percutaneous needle biopsy using a fiber-coupled pulsed laser diode inserted within a custom ring transducer [119], as well as needle tracking with a fiber inserted into the hollow core of a biopsy needle and external ultrasound reception [116]. More specifically, the work in [116] demonstrated the use of a robot arm to hold the ultrasound probe, relieving the operator from searching for and staying centered on photoacoustic signals within the body. Figure 11(c) shows an example demonstration of images produced with this approach when visualizing the needle tip in fat and liver tissues. The needle tip could not be visualized in the ultrasound images of some of the tissues and was better localized in the photoacoustic images.

3.11 Emerging applications with single demonstrations of feasibility

Photoacoustic imaging for surgical guidance continues to expand with single demonstrations of feasibility in veins, lungs, and the foot. In the veins, Yan et al. [120] proposed a combined ultrasound and photoacoustic laser ablation system to track the catheter and monitor temperature in real time during endovenous laser ablation for the treatment of varicose veins. In the lungs, the work in [65] proposes the use of photoacoustic imaging enabled by ICG as a contrast agent for the diagnosis and resection of indeterminate pulmonary nodules that are suspicious for lung cancer. Finally, in the foot, Wang et al. [121] demonstrated a miniaturized photoacoustic tomography system based on a waterproof linear-array ultrasound transducer. The system has the potential to assist with treatment planning and post-surgical monitoring of revascularization surgery, which is used to treat chronic foot ulcers.

4. Developmental stages

Fig. 12. Developmental stages for specific surgeries and interventions, namely neurosurgery [35,36,38,68,70,77,78,81–83,114], spinal fusion surgery [86–89], spinal stem cell delivery [67,69,90], breast conserving surgery [71,91–93], cardiac catheterization procedures [94–98], pulmonary interventions [65], abdominal surgery [101,102], shock wave lithotripsy [99,100], hysterectomy [34,62,63,104], fetal interventions [105,106], prostate biopsy [108,109], prostate brachytherapy [33,37,41–43], endovenous laser ablation [120], and foot revascularization surgery [121].

In order to achieve photoacoustic imaging for surgical and interventional guidance, photoacoustic imaging systems and methods must undergo a series of testing and evaluation stages. The development of a new application typically starts with theoretical derivations and simulations, followed by experiments with tissue-mimicking phantoms. These phantoms can contain elements of ex vivo tissue embedded with structures of interest (e.g., blood-filled tubes, brachytherapy seeds). A common flaw with this type of phantom design is drawing conclusions about the suitability of the imaging method when using tissue that is not representative of the optical and scattering properties of the proposed surgical or interventional task (e.g., using chicken breast to represent prostate tissue). In addition to embedding structures within tissue, the ex vivo tissue (or other structures of interest) may also be surrounded by an acoustic coupling medium. Common coupling media include plastisol [122], gelatin, gelwax [123,124], and polyacrylamide [125]. Water, milk, and Intralipid are additional liquid coupling media that have been used to make phantoms [126]. A leaf phantom has also been introduced to mimic vasculature [127]. These phantoms are typically confined to a container with finite dimensions, and the associated designs are typically suitable for initial exploration when multiple properties of the imaging environment are replicated (e.g., target shapes and sizes, optical and acoustic properties, surrounding structures that will be encountered). The same phantom designs used for initial exploration may potentially be relevant for standardization of established technology across multiple hospitals and clinics after translation of the imaging technology to in vivo usage on human patients.

Figure 12 shows the experimental stages of development for multiple surgeries and procedures, which are listed in the left column. This list was derived from the literature cited in the figure caption, which includes the 14 applications summarized in Fig. 1 (i.e., the single-organ applications of Section 3) with demonstrations of feasibility that extend beyond the simulation and phantom stages. The developmental stages presented in Fig. 12 include testing with experimental phantoms, ex vivo tissue, small animals, large animals, human cadavers, and in vivo patient imaging. The presence of a bar indicates that the application has been tested at the specified developmental stage. An assessment of Fig. 12 offers a visual indication of missing and skipped steps along the presented continuum of developmental stages needed to translate the proposed photoacoustic technology to patients for each application indicated. With the exception of one application only tested on patients (i.e., foot revascularization surgery), the most commonly skipped stages include testing with small animals, large animals, and human cadavers. These three stages can be considered collectively as one requirement, with at least one stage being absolutely necessary and the others being optional or simply not possible. For example, animal testing is not possible if there are no suitable animal models, which is the case with neurosurgical guidance to remove pituitary tumors [35,38,81] or surgical guidance during gynecological procedures like hysterectomies [63]. Similarly, if animal models are sufficient, then testing with human cadavers may not be necessary. While small animal imaging has been useful to resolve vasculature in microsurgical applications [114], visualize cavitation and vascular injury during shock-wave lithotripsy [99], and visualize stem cell injection for spinal procedures [67], this stage of development has known limitations with regard to the penetration depth required to reliably image humans and guide surgeries. Therefore, success with small animals is not always indicative of successful translation to humans due to expected challenges introduced by acoustic clutter, optical scattering, and acoustic penetration. Coupling small animal imaging with either large animal imaging or human cadaver studies is therefore considered necessary prior to patient testing of any small animal demonstrations. Considering these multiple factors when developing the next generation of photoacoustic technology for surgical guidance will help to streamline and amplify potential impact and significance.

Table 1 summarizes important quantitative details about the surgeries and interventions listed in Fig. 12. Specifically, the 11 applications in Fig. 12 that extended beyond the developmental stage of ex vivo tissue were selected for inclusion. This tabular summary includes key optical details (e.g., laser wavelengths, energies, and fluence, which are important considerations for optical system design), the distance of visualized structures from the ultrasound receiver (i.e., acoustic penetration depth, which differs from optical penetration depths for separated light sources and acoustic receivers), and a summary of related quantitative details relevant to the specified task. For the optical details of energy and fluence, which are proportional to each other through the illumination area, only author-reported values appear in Table 1.

Table 1. Summary of photoacoustic-guided surgery or interventional applications with demonstrated feasibility beyond the developmental stage of ex vivo tissue.

5. Photoacoustic imaging hardware

5.1 Light transmission

Optical excitation to generate the photoacoustic effect starts with light transmission hardware, which has been demonstrated with one of four options: (1) a benchtop laser, (2) a portable laser, (3) a pulsed laser diode (PLD), or (4) a light-emitting diode (LED). These four options are shown in the left region of the Venn diagram in Fig. 13. The smaller PLDs and LEDs are advantageous because they occupy less space in the operating room or interventional suite. However, these smaller options suffer from limited peak energies, which ultimately limit the overall quality of photoacoustic images, as discussed in more detail in Section 5.3.3. The most common benchtop laser source is a Q-switched neodymium-doped yttrium aluminum garnet (Nd:YAG) laser. Other options include Ti:Sapphire or dye laser systems. The currently available portable systems can be thought of as benchtop systems on wheels and therefore use the same common laser sources. These portable laser systems are more desirable than the benchtop lasers for mobility in the operating room or interventional suite. Although not shown, these light transmission hardware options are typically coupled to optical fibers to deliver light to the surgical site [37,116,128].

Fig. 13. Venn diagram illustrating that the hardware required for photoacoustic imaging is a combination of optical and acoustic components. Examples of light transmission hardware from smallest to largest include a pulsed laser diode (PLD) (LS Series, Laser Components, Olching, Germany), a light-emitting diode (LED) array (Prexion Corporation, Tokyo, Japan), a benchtop laser (Vibrant B-355II, Opotek, Santa Clara, CA, USA), and a mobile laser (Phocus Mobile, Opotek, Santa Clara, CA, USA). Examples of research-based sound reception hardware in order of readiness for surgical use include the Alpinion ECUBE-12R (Alpinion, Seoul, South Korea), the Verasonics Vantage (Verasonics, Kirkland, WA, USA), and the SonixDAQ (Ultrasonix, British Columbia, Canada). An example of a complete photoacoustic imaging system is the Vevo LAZR small animal ultrasound and photoacoustic imaging system (Visualsonics, Toronto, Canada).

Regarding common requirements for these four options, the optical excitation pulses must be sufficiently short to ensure thermal and stress confinement within the excited volume [53], which are characterized by negligible thermal diffusion and stress relaxation, respectively. Therefore, excitation pulses are typically on the order of nanoseconds. The excitation wavelengths of larger laser systems are typically within the range of 400-1800 nm [44], spanning both the visible and near-infrared (NIR) regions in order to visualize the chromophores shown in Fig. 3 within the typical optical window for human tissue. By contrast, the smaller LED [19] and PLD [129] options are typically single-wavelength options, with multiple stacks required to achieve spectroscopic abilities similar to those of the larger laser systems. Despite this trade-off, a single wavelength can potentially be sufficient for guidance of interventional procedures, particularly if the primary goal is visualization of a tool tip and its proximity to a structure of interest that can be visualized using the same wavelength (e.g., major blood vessels, nerves, contrast agents injected into the urinary tract to avoid ureteral injury). One disadvantage of LED options for surgical guidance is that the larger stacks required for sufficient energy delivery yield systems that are too bulky for navigation in tight spaces. Therefore, these emerging systems currently have limited utility for interventional applications that require internal illumination within needles or mounting of light delivery systems on surgical tools, and they are better suited for external illumination (i.e., placement on an organ surface or outside of the body).
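To make the confinement conditions concrete, consider a target spatial scale of $d_c = 100~\mu$m and representative soft-tissue values of sound speed $v_s = 1500$ m/s and thermal diffusivity $\alpha_{th} = 1.4\times 10^{-7}$ m$^2$/s (values assumed here for illustration). Stress confinement requires pulse durations shorter than $\tau_s = d_c/v_s \approx 67$ ns, while thermal confinement requires pulse durations shorter than $\tau_{th} = d_c^2/(4\alpha_{th}) \approx 18$ ms. A typical 5 ns laser pulse therefore satisfies both conditions with considerable margin, with stress confinement being the more restrictive of the two requirements.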

5.2 Acoustic reception

The acoustic hardware required to receive a photoacoustic response includes an ultrasound transducer connected to an ultrasound system. Each ultrasound system contains the computing hardware, data acquisition (DAQ) boards, and electronics required to convert the received signals into images. Access to co-registered ultrasound images is often critical to provide anatomical context for photoacoustic images that provide structural and functional contrast information primarily based on optical absorption. Three of the primary research-based ultrasound systems or system components that have enabled custom sequences for developing and displaying co-registered ultrasound and photoacoustic images in real time are illustrated in the right region of the Venn diagram in Fig. 13. The SonixDAQ (Ultrasonix, British Columbia, Canada) enabled access to raw channel data [24], which is needed to beamform raw data to create photoacoustic images. This hardware was an essential addition to a series of the manufacturer's clinical ultrasound systems, because ultrasound and photoacoustic image formation require uniquely different beamforming time delays. Multiple DAQ units can also be combined to create custom photoacoustic systems with the addition of pre-amplification circuitry to boost input signals, monitors for image display, and ultrasound transducers for acoustic reception [130]. The Verasonics (Verasonics, Kirkland, WA, USA) [22] and Alpinion (Alpinion, Seoul, South Korea) [20] systems offer open access to raw data with internal rather than external components.

The ultrasound transducer hardware that interfaces with ultrasound systems includes piezoelectric elements, capacitive micromachined ultrasonic transducer (CMUT) arrays, and all-optical methods. Piezoelectric lead zirconate titanate (PZT) elements [131] are the primary sensors contained within most commercially available ultrasound transducers. Although piezoelectric elements are widely produced for planar and volumetric ultrasound imaging (which generally minimizes cost and fabrication complexity), these elements are bandlimited in comparison to the broadband acoustic frequency content of photoacoustic signals [131]. Conversely, CMUT arrays have wider bandwidths than piezoelectric transducers, can be integrated with existing electronics, and can be fabricated into two-dimensional arrays. These advantages provide a viable alternative for photoacoustic imaging systems [132,133]. However, because of constantly shifting operating points, CMUTs can fail easily and have a short shelf life caused by premature breakdown [134], which is not attractive for surgical guidance.

All-optical methods provide additional benefits over piezoelectric arrays and CMUTs, as they are smaller than piezoelectric elements while maintaining high SNR and minimizing cross-talk [135,136]. In addition, all-optical methods have a level of transparency that could be leveraged for minimal visual blockage of the surgical field, enable forward-viewing configurations, have intrinsically broadband acoustic reception frequencies, and offer the fine spatial sampling required for high-resolution images [137]. These all-optical methods are most commonly based on a Fabry–Pérot interferometer composed of a transparent film sandwiched between a pair of parallel mirrors. An incident acoustic wave modulates the optical thickness of the film, producing an optical phase shift that is converted to a measured intensity modulation [138]. Benefits of the Fabry–Pérot interferometer include inexpensive fabrication and suitability for combination with other imaging modalities such as optical coherence tomography (OCT) and ultrasound [136]. In addition, optical detection methods are promising for non-contact photoacoustic imaging [139,140], which is advantageous in comparison to piezoelectric elements and CMUTs, yet this non-contact approach suffers from sensitivity to motion and mechanical noise [141,142]. Additional limitations of all-optical methods include temperature sensitivity [139] and the speed with which they can deliver real-time updates for surgical guidance [143] without the use of multiple scanning beams [144,145] or full-field illumination [146]. Other methods for all-optical detection are summarized in reviews by Dong et al. [147] and Wissmeyer et al. [139]. More details on these and other sensors for photoacoustic imaging are available in the review by Manwar et al. [148].

5.3 Integration approaches

5.3.1 Customized photoacoustic imaging systems

The basic principles and hardware for light transmission and acoustic reception may be integrated to achieve customized photoacoustic hardware that is necessary to support surgical and interventional applications. One of the most important integration approaches is the combination of ultrasound and laser systems, as illustrated at the center of Fig. 13, which shows a commercially available Vevo LAZR small animal ultrasound and photoacoustic imaging system (Visualsonics, Toronto, Canada). This system has been used for visualization of stem cell delivery in the spinal cord [67,69,90], assessment of tumor margins [70,113], and localization of indeterminate pulmonary nodules [65]. The Acoustic X (Cyberdyne, Tsukuba, Japan) is a research-based, commercially available LED imaging system that has been characterized [19] and demonstrated to visualize needles and vasculature for guidance of minimally invasive procedures [21]. In addition to these commercial systems, any combination of suitable light delivery hardware and acoustic reception devices has the technical potential to be integrated to create a custom photoacoustic imaging system for surgical guidance, as previously demonstrated for hysterectomy guidance [64], visualization of prostate brachytherapy [37,41], and tumor margin assessment for breast conserving surgery [92].

Fig. 14. Custom light delivery systems for (a) minimally invasive fetal interventions [106], (b) neurosurgery [36], (c) visualization and detection of gynecological malignancies [149], and (d) endo-cavity imaging of adenocarcinomas [150]. (Adapted from: E. Maneas et al., Journal of Biophotonics 13, e201900167 (2020). Copyright 2019 Author(s), licensed under a Creative Commons Attribution 4.0 License; Eddins and Bell, Journal of Biomedical Optics 22, 041011 (2017). Copyright 2017 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License; M. Basij et al., Photoacoustics 15, 100139 (2019). Copyright 2019 Elsevier; G. Yang et al., Photoacoustics 13, 66-75 (2019). Copyright 2019 Elsevier.)

5.3.2 Customized light delivery systems

Two additional categories of custom integration methods include the merger of: (1) light delivery systems with ultrasound reception and (2) light delivery systems with surgical tool tips (e.g., needles [116], catheters [95], drill bits [36,86], the da Vinci scissor tool [34]). Figure 14 shows a sampling of custom integrated hardware designed over the past five years. A light delivery system developed by Maneas et al. [106] to image the human placenta is presented in Fig. 14(a). The system utilizes a Fabry–Pérot-based planar sensor that directly touches the fetal surface, enabling high-resolution imaging. The system has a lateral field of view of 14 mm × 14 mm, a spatial resolution between 50 and 125 $\mu$m, and an imaging depth of approximately 10 mm.

Figure 14(b) shows a custom light delivery system designed by Eddins and Bell [36] to attach to a surgical drill for applications in neurosurgery. The design was optimized using a combination of Monte Carlo and Zemax simulations to ensure uniform illumination. The optimized design was built to include a 3D printed piece that houses seven optical fibers spaced evenly around the drill shaft. These fibers enable an increased spot size compared to a single fiber bundle and provide an illumination area between 42 and 76 mm$^2$ depending on the distance from the detector surface. Similar designs enabled the light sources to surround a surgical tool for applications in teleoperated hysterectomy procedures [34].

Figure 14(c) shows a custom light delivery system integrated into an endoscope for visualization and detection of gynecological malignancies [149]. The system contains a phased array ultrasound transducer and six side-firing optical fibers polished at an 18$^\circ$ angle to offer an optimized illumination area and an imaging depth of approximately 35 mm. The fibers and transducer are integrated into a custom housing sheath, resulting in a 7.5 mm-diameter endoscopic probe capable of acquiring volumetric dual-modality ultrasound and photoacoustic images within the narrow cervical canal.

Figure 14(d) shows a custom light delivery system integrated with an endocavity probe for imaging of adenocarcinomas in the colon and cervix. The design employs ball-shaped lenses adhered to four 1 mm multi-mode end-firing optical fibers, which are attached to a transvaginal ultrasound transducer through the use of a custom-made sheath. Light delivery efficiency and uniformity were first optimized with simulations, followed by validation in phantoms, ex vivo human colorectal cancer samples, and an in vivo human palmar vein. The use of ball-shaped lenses increased the numerical aperture of each fiber, resulting in increased fluence on the central imaging area and improved light homogeneity.

In addition to these examples, Ansari et al. [137] demonstrated a novel, miniature, flexible, forward-viewing photoacoustic endoscopy probe capable of high-resolution 3D imaging. The probe employed a planar Fabry–Pérot ultrasound sensor at the tip, and the acoustic field was mapped with a flexible fiber bundle and a miniature optical relay system. Excitation light was delivered through a custom fiber bundle branching into seven multimode fibers polished at 22$^{\circ }$, yielding a 6 mm beam at the probe tip and enabling visualization at depths of up to 8 mm in a phantom. The 7.4 mm-diameter probe is potentially MRI-compatible and can be integrated with widefield endoscopy or other optical imaging techniques. Possible applications for this probe design include tumor margin assessment, guiding needle biopsies, and assessing minimally invasive laser photocoagulation therapy [137].

5.3.3 Light delivery design considerations

Custom light delivery methods are often designed to increase the illumination area over that offered by a single optical fiber (with no special modifications to its tip), offering the benefits of reduced fluence and larger viewing fields for photoacoustic-guided surgery. Reduced fluence is particularly important when considering clinical translation within current laser safety limits for skin [151]. However, with increased illumination, there is an increase in artifacts caused by the additional acoustic pathways that become available after optical excitation, giving rise to acoustic clutter [47,116,152], as noted for the transurethral design summarized in Section 3.7. In comparison, the transurethral light delivery design for brachytherapy seed imaging discussed in Section 3.9 had a narrower illumination area, which enabled the selective visualization of only a few seeds within the light beam path with amplitude-based DAS beamforming. This comparison highlights the importance of tailoring the light illumination area to the specific task, as a larger illumination area is expected to introduce more reflection artifacts that could be mistaken for brachytherapy seeds [41], while a smaller illumination area could potentially result in missed hemorrhage events in a lithotripsy application [100], particularly if amplitude-based beamforming is necessary.

Fig. 15. Integration of photoacoustic imaging with robotic systems, targeting minimally invasive surgery [155] and radical prostatectomy [110]. (a) Photoacoustic images, live stereo endoscope video, and solid models of the tool, laser beam, and ultrasound probe are transferred to the photoacoustic image guidance module (in 3D Slicer) through a combination of the da Vinci Research Kit (dVRK) image-guided therapy (IGT) module and the cisst stereo vision library (SVL) for visualization [84,156–158]. The visualizations are then sent to the da Vinci stereo viewer. (b) Arrangement of the transrectal ultrasound (TRUS) and “pick-up” ultrasound probes with respect to the prostate. (Reprinted from N. Gandhi et al., Journal of Biomedical Optics 22, 121606 (2017). Copyright 2017 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License; H. Moradi et al., IEEE Transactions on Medical Imaging 38, 57-68 (2019). Copyright 2019 IEEE).

There are at least two methods to address trade-offs among larger illumination areas, reduced fluence, and increased acoustic clutter. First, advanced signal processing (e.g., coherence-based beamforming) enables the removal of acoustic clutter by understanding, modeling, and designing methods that address the physics by which it occurs. Second, the future introduction of tissue-specific laser safety limits has the potential to raise fluence limits and avoid the need for increased illumination areas. Current laser safety limits [151] are defined for the skin and eyes. While the community has largely adopted these limits for other tissues, differences in tissue properties indicate potential differences in the damage threshold for tissues other than the skin and eyes [47,153].
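To illustrate the fluence consideration numerically, the following minimal Python sketch computes surface fluence for an assumed pulse energy spread over the 42-76 mm$^2$ illumination areas reported for the multifiber drill attachment in Section 5.3.2, then compares the result to the commonly cited ANSI Z136.1 skin limit at 1064 nm. The 10 mJ pulse energy and the comparison values are illustrative assumptions, not recommendations for any specific system:

```python
import numpy as np

def fluence_mj_per_cm2(pulse_energy_mj, illuminated_area_mm2):
    """Surface fluence (mJ/cm^2) for a given pulse energy and illumination area."""
    area_cm2 = illuminated_area_mm2 / 100.0  # 1 cm^2 = 100 mm^2
    return pulse_energy_mj / area_cm2

# Illustrative values (assumed, not reported by a specific system in this review):
# a 10 mJ pulse spread over the 42-76 mm^2 areas of the multifiber drill design.
for area_mm2 in (42.0, 76.0):
    fluence = fluence_mj_per_cm2(10.0, area_mm2)
    print(f"area = {area_mm2:5.1f} mm^2 -> fluence = {fluence:5.1f} mJ/cm^2")

# Comparison point: the ANSI Z136.1 skin maximum permissible exposure for
# nanosecond pulses is approximately 100 mJ/cm^2 at 1064 nm (and approximately
# 20 mJ/cm^2 at 700 nm), so both illustrative cases fall below the 1064 nm limit.
```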

5.3.4 Integration with robotics

Surgeries are trending toward robotic assistance as a result of advantages for patients that include smaller incisions, reduced hospital stays, greater precision, and the mitigation of hand tremors that can cause errors [154]. With the additional benefits of improved surgical vision and ergonomic considerations for the surgeon, these robotic surgical assistants enable surgeons to synthesize multiple sources of information to make and execute decisions more nimbly than is possible with typical human processing and reaction times. The most common surgical robot is the da Vinci robot (Intuitive Surgical, Sunnyvale, CA), a telerobotic system operated from a master console that remotely controls patient-side manipulators (PSMs) attached to various surgical tools. Augmentation of this robot with photoacoustic imaging has demonstrated feasibility and promise in two specific applications of hysterectomy [34] and radical prostatectomy [110,112], and more generally for any minimally invasive surgery that requires distinction of structures that appear with photoacoustic contrast [155]. In addition, Kim et al. [83] proposed the idea of locating target centers with assistance from guidance virtual fixtures that constrain tool motion, which is the focus of the control strategy implemented to constrain probe motion in the radical prostatectomy application [112] described in Section 3.9. Therefore, multiple components of a photoacoustic imaging system can be cooperatively controlled by the surgeons and the robot.

Figure 15 shows two applications integrating the da Vinci robot with a photoacoustic imaging system for applications in minimally invasive surgery [155] and radical prostatectomy [110]. Specifically, Fig. 15(a) shows the integrated system architecture demonstrated for minimally invasive surgery, where photoacoustic images, live stereo endoscope video, and solid models of the tool, laser beam, and ultrasound probe are transferred to a 3D Slicer [156,157] photoacoustic image guidance module for visualization. The visualizations are then sent to the da Vinci stereo viewer. Figure 15(b) shows the arrangement of a TRUS probe and a pick-up ultrasound transducer with respect to the prostate, bladder, and rectum.

In addition to existing surgical robotic systems, a robot arm can be attached to any of the photoacoustic system imaging components, including the ultrasound transducer, optical fiber, or surgical tool that is integrated with custom optical hardware. Robotic control of the ultrasound probe was demonstrated for biopsy guidance, as described in Section 3.10.2, and catheter guidance, as described in Section 3.5. These systems can be considered image-guided robots, as they enable tracking of the surgical instrument (i.e., biopsy needle or catheter) based on the photoacoustic images. By keeping photoacoustic targets centered in the image, image-guided robots obviate the need for additional operators to manually hold and track the imaging target. More specifically, the system in [116] was designed to visualize tool, needle, and catheter tips, with mean tracking accuracies on the order of 0.57–1.79 mm, as reported in [95,116]. Demonstrated with lower-frequency ultrasound probes utilized for acoustic penetration as deep as 8 cm [95], these accuracies are well within the field of view of the imaging system, ensuring that a needle tip, catheter, or tool will remain visible during tracking.
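The core of this image-guided tracking concept can be summarized as a segment-then-center control loop. The following minimal Python sketch conveys the idea with a simplified thresholding segmentation and a proportional controller; the function names, threshold, and gain are illustrative assumptions rather than the implementations reported in [95,116]:

```python
import numpy as np

def segment_target(pa_image, threshold_db=-6.0):
    """Segment the brightest photoacoustic target and return its centroid (row, col).

    pa_image: 2D envelope-detected photoacoustic image (linear scale).
    The -6 dB threshold is a simplified stand-in for the segmentation steps
    used in the cited visual servoing work.
    """
    db = 20.0 * np.log10(pa_image / pa_image.max() + 1e-12)
    rows, cols = np.nonzero(db >= threshold_db)
    if rows.size == 0:
        raise ValueError("no pixels above threshold; target not detected")
    return rows.mean(), cols.mean()

def servo_step(pa_image, pixel_pitch_mm, gain=0.5):
    """One proportional control step: lateral probe correction (mm) that
    recenters the segmented target in the image."""
    _, col = segment_target(pa_image)
    center_col = pa_image.shape[1] / 2.0
    lateral_error_mm = (col - center_col) * pixel_pitch_mm
    return -gain * lateral_error_mm  # correction commanded to the robot arm
```

In practice, this correction would be issued once per photoacoustic frame, with the controller gain tuned against the laser pulse repetition frequency to avoid overshooting the target between frames.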

6. Software options for image formation and image display

6.1 Beamforming

Beamforming is an image formation technique most commonly implemented with clinical ultrasound probes containing an array of elements. This image formation method can be grouped into three categories based on the type of information displayed and the fundamental image formation process: (1) amplitude-based beamformers, (2) coherence-based beamformers, and (3) combination approaches. Amplitude-based beamformers display brightness information based on the received pressure signals across the transducer elements. The most common amplitude-based beamformer is DAS, which compensates for time-of-arrival differences from a photoacoustic source to a detector position, followed by a summation of signals received by the detectors in an array. Low input laser energies, sound speed differences, and acoustic clutter are common factors that degrade the quality of DAS images. These factors motivated the creation of advanced beamforming methods, which were mainly developed for other types of imaging (e.g., ultrasound imaging), then applied to photoacoustic data. For example, coherence factor (CF) weighting [159] provides weights based on the coherence of the received signal, minimum variance (MV) beamforming [160] adaptively determines the optimal weights for each element prior to summation, and delay multiply and sum (DMAS) beamforming [161] employs a combinatorial coupling of received signals.
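As a concrete illustration of the photoacoustic DAS process, the following minimal Python sketch forms an image from raw channel data, assuming a linear array and a homogeneous sound speed. Note that photoacoustic time-of-flight is one-way (source to element), unlike the round-trip delays applied in pulse-echo ultrasound beamforming; the variable names and nested-loop structure are illustrative rather than an optimized implementation:

```python
import numpy as np

def das_beamform(channel_data, fs, c, element_x, image_x, image_z):
    """Delay-and-sum beamforming of photoacoustic channel data.

    channel_data: (n_samples, n_elements) array of received RF signals.
    fs: sampling frequency (Hz); c: assumed sound speed (m/s).
    element_x: (n_elements,) lateral element positions (m).
    image_x, image_z: 1D arrays of lateral and axial pixel positions (m).
    """
    n_samples, n_elements = channel_data.shape
    image = np.zeros((image_z.size, image_x.size))
    for zi, z in enumerate(image_z):
        for xi, x in enumerate(image_x):
            # one-way propagation distance from the pixel to each element
            dist = np.sqrt(z**2 + (x - element_x) ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samples  # discard delays beyond the recording
            image[zi, xi] = channel_data[idx[valid], np.nonzero(valid)[0]].sum()
    return image
```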

Coherence-based beamformers directly display the spatial coherence of the received signals from acoustic pressure waves. For example, SLSC beamforming, which was initially developed for ultrasound imaging [162], was later applied to photoacoustic imaging to improve visualization of prostate brachytherapy seeds [33] and other inclusions [163], leading to the development of a theoretical foundation for SLSC beamforming in photoacoustic imaging [164]. Benefits for surgical guidance include the removal of incoherent noise and artifacts from images that do not require amplitude information to navigate the surgical or interventional landscape. Disadvantages include the loss of amplitude information that may be required for spectroscopic analysis and quantitative functional characterization of tissue contents. Although the approach requires multiple computationally intensive correlation calculations, Gonzalez and Bell [165] demonstrated a graphics processing unit (GPU)-based implementation of the SLSC algorithm on photoacoustic data, enabling frame rates as high as 41.2 Hz (i.e., a processing time of 24.3 ms), depending on the pulse repetition frequency of the laser. Locally weighted short-lag spatial coherence (LW-SLSC) beamforming [89] was later introduced as a regularized version of SLSC beamforming to improve surgical drill tip visualization and localization.
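A minimal Python sketch of the SLSC computation for a single pixel follows, assuming DAS-style delays have already been applied to align the channel data. The kernel length, maximum lag, and function structure are illustrative choices rather than a reproduction of the cited GPU implementation:

```python
import numpy as np

def slsc_pixel(delayed, max_lag, kernel):
    """Short-lag spatial coherence (SLSC) value for one pixel.

    delayed: (n_samples, n_elements) time-delayed channel data, with delays
    applied as in DAS so that signals from this pixel are aligned in time.
    max_lag: number of short lags M to sum.
    kernel: slice of samples (axial correlation kernel) centered on the pixel depth.
    """
    s = delayed[kernel, :]  # (kernel_length, n_elements)
    value = 0.0
    for m in range(1, max_lag + 1):
        a, b = s[:, :-m], s[:, m:]  # element pairs separated by lag m
        num = np.sum(a * b, axis=0)
        den = np.sqrt(np.sum(a**2, axis=0) * np.sum(b**2, axis=0)) + 1e-12
        value += np.mean(num / den)  # mean normalized correlation at lag m
    return value
```

Repeating this computation over all pixels (e.g., with `kernel = slice(depth_idx - 15, depth_idx + 16)` for a roughly one-wavelength kernel) yields the SLSC image, which displays coherence rather than amplitude.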

We define combination beamforming approaches as mergers of existing beamforming methods that do not require a fundamental modification to a previously introduced beamforming process. For example, MV and CF were combined to suppress off-axis photoacoustic signals [166], MV and CF weighting were combined with techniques such as DMAS to improve resolution and reduce sidelobes [167,168], and SLSC was combined with DAS to produce SLSC-weighted DAS images that reduce clutter while retaining a component of the amplitude information [169]. Although these combinations often leverage the advantages of their individual component beamforming processes, they are often more computationally intensive than their original components. However, Jeon et al. [168] optimized the DMAS-CF beamformer by reformulating the combinatorial multiplication to implement it in real time on a clinical system with a reconstruction time of 11.3 ms, which translates to a frame rate of 88.5 Hz.
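Among these weighting strategies, the CF term is the simplest to sketch. The following Python function (an illustrative stand-in, not the exact formulation of the cited combinations) computes the per-pixel coherence factor that can multiply a DAS or DMAS pixel value to suppress clutter:

```python
import numpy as np

def coherence_factor(delayed_pixel_signals):
    """Coherence factor for one pixel from time-delayed element signals.

    delayed_pixel_signals: (n_elements,) samples aligned to the pixel.
    CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2), ranging from 0 (incoherent)
    to 1 (perfectly coherent). Multiplying the DAS pixel value by CF
    down-weights off-axis and clutter contributions.
    """
    s = delayed_pixel_signals
    n = s.size
    return np.abs(s.sum()) ** 2 / (n * np.sum(np.abs(s) ** 2) + 1e-12)
```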

6.2 Model-based image reconstruction

Photoacoustic images may be reconstructed from models that are more complicated than the beamforming principle, which we summarize as a simple focusing of received acoustic energy to a specific location. More complicated models can be thought of as a pair of optical and acoustic inverse problems. Solving the optical inverse problem is necessary for quantitative photoacoustic imaging [170], while solving the acoustic inverse problem enables image reconstruction. Focusing on the acoustic inverse problem, model-based image reconstruction techniques solve the wave equation, based on the initial conditions provided by Eq. (1). These techniques broadly include three categories [171,172]: (1) backprojection, (2) Fourier-domain reconstruction, and (3) time reversal.

Backprojection utilizes the inverse of the spherical Radon transform to provide an exact reconstruction [173]. The DAS beamformer is considered a backprojection method that offers direct mapping based on time-of-flight, as discussed in Section 6.1. More advanced algorithms for far-field detection were developed by Xu et al. for spherical, planar, and cylindrical geometries [174], resulting in a method for universal backprojection [173]. Fourier-domain reconstructions first solve the inverse problem in the Fourier domain, then transform the solution back to the spatial domain, offering improved computational efficiency compared to the Radon transform [175]. Xu et al. presented an exact frequency-domain reconstruction algorithm for both a planar geometry [176] and a cylindrical geometry [177]. Treeby and Cox [178] introduced a one-step model-based image reconstruction based on the fast Fourier transform (FFT) by mapping time-domain information into a third spatial dimension. This technique was released with the k-Wave photoacoustic simulation toolbox [178]. Finally, time reversal numerically retransmits the received pressure waves in reverse temporal order to reconstruct the initial pressure distribution of the target [171,179]. Based on the Green's function, Xu et al. proposed the first time-reversal algorithm [180], which was later utilized for arbitrary detection boundaries [181]. Additional details regarding model-based photoacoustic image reconstruction are available in the review by Rosenthal et al. [172].
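To contrast backprojection with the simpler DAS mapping of Section 6.1, the following Python sketch implements the backprojection term $b(t) = 2p(t) - 2t\,\partial p/\partial t$ of universal backprojection [173], evaluated at the one-way time-of-flight for each pixel-detector pair. The solid-angle weighting of the full formulation is omitted here for brevity, so this is an unweighted illustrative sketch rather than the exact published algorithm:

```python
import numpy as np

def universal_backprojection(p, fs, c, det_pos, pixels):
    """Simplified (unweighted) universal backprojection.

    p: (n_samples, n_detectors) measured pressure traces.
    fs: sampling rate (Hz); c: sound speed (m/s).
    det_pos: (n_detectors, 2) detector coordinates (m).
    pixels: (n_pixels, 2) reconstruction points (m).
    """
    t = np.arange(p.shape[0]) / fs
    dpdt = np.gradient(p, axis=0) * fs          # temporal derivative of pressure
    b = 2.0 * p - 2.0 * t[:, None] * dpdt        # backprojection term b(t)
    recon = np.zeros(pixels.shape[0])
    for d in range(det_pos.shape[0]):
        dist = np.linalg.norm(pixels - det_pos[d], axis=1)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, p.shape[0] - 1)
        recon += b[idx, d]                       # accumulate over detectors
    return recon
```

Relative to DAS, the derivative term boosts high-frequency content and sharpens boundaries, which is one reason backprojection approaches an exact reconstruction when full angular coverage is available.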

While acoustic inverse image reconstruction methods are beneficial because they are based on robust physical principles, they often require matrix sizes on the order of gigabytes [172], resulting in increased computational complexity compared to beamforming methods. These methods often perform best when full 360$^\circ$ views of the image target are available. When balancing these qualities against requirements for surgical guidance, two immediate disadvantages emerge. First, the goal of surgical guidance applications is typically to visualize or quantify properties of a specific imaging target, rather than to perform the perfect physics-based reconstruction of the target. Second, acoustic receiver views tend to be limited and constrained to minimally exposed tissue during surgery, tight surgical workspaces, or external placement on body surfaces, and none of these options lend themselves to full 360$^\circ$ views. Therefore, beamforming is considered sufficient (when compared to model-based image formation) to provide information for surgical and interventional guidance under these common constraints.

6.3 Deep learning-based image formation

Deep learning is a data-driven approach to image formation that is increasingly being investigated as an alternative to beamforming and more traditional model-based approaches. Reiter and Bell [27] introduced the first exploration of deep learning for interventional guidance, with follow-up work presented by Allman et al. [28]. Specifically, interventional photoacoustic targets can be modeled as point-like sources (e.g., needle tips, catheter tips, or surgical tool tips attached to a single optical fiber). Using k-Wave [178] simulations to learn the physics of wave propagation from these photoacoustic sources, a deep neural network was trained to identify patterns and features from wavefronts displayed in the input raw sensor data (also known as raw channel data). Features such as the unique shape-to-depth relationship between detected wavefront peaks from sources and corresponding sensor locations were leveraged to differentiate sources from artifacts. Trained only on simulated data, the network was successfully transferred to ex vivo and in vivo data [96]. In related work, Johnstonbaugh et al. [182] demonstrated the use of an encoder-decoder convolutional neural network to identify the origin of photoacoustic wavefronts inside an optically scattering deep-tissue medium in order to improve visualization of deep-seated photoacoustic targets. Deep learning methods introduced for more general photoacoustic reconstruction may also be applied to surgical and interventional guidance [183–185]. For example, Hauptmann et al. [183] designed a deep neural network to create high-resolution 3D images from restricted photoacoustic measurements. Specifically, the authors propose three possible methods based on total variation, backpropagation, and iterative reconstruction to learn the process of photoacoustic image reconstruction. In addition, Antholzer et al. [184,185] developed a deep convolutional neural network to reconstruct sparse data in photoacoustic tomography. Since these initial papers, there has been a significant body of work focusing on deep learning for both photoacoustic image reconstruction and improving photoacoustic image quality [186–189]. Other deep learning approaches are summarized in reviews by Hauptmann and Cox [29] and Yang et al. [190].
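To convey the flavor of this approach, the following minimal PyTorch sketch defines a network that regresses a point source location from raw channel data. This is a simplified illustrative stand-in: the cited work [27,28] used object-detection architectures trained on k-Wave simulations, not this exact model:

```python
import torch
import torch.nn as nn

class SourceLocator(nn.Module):
    """Illustrative CNN that regresses a point source (lateral, axial) position
    from raw photoacoustic channel data of shape (n_samples, n_elements)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the wavefront features
        )
        self.head = nn.Linear(64, 2)  # predicted (lateral, axial) position

    def forward(self, x):  # x: (batch, 1, n_samples, n_elements)
        return self.head(self.features(x).flatten(1))

# Training pairs (channel data, source position) can be generated with k-Wave
# simulations, mirroring the simulation-to-experiment transfer strategy above.
model = SourceLocator()
wavefronts = torch.randn(8, 1, 1024, 128)  # placeholder channel data
positions = model(wavefronts)              # (8, 2) predicted source locations
```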

6.4 Image display and image post-processing

After forming images based on physical and mathematical principles, additional steps may be applied to present images in a suitable format for surgeons and interventionalists, spanning four broad categories: (1) dynamic range alterations, (2) frame averaging, (3) co-registration, and (4) post-processing. First, dynamic range alterations can be performed to control the level of background noise in the image [102]. Dynamic range adjustment can also be implemented by thresholding images to remove low-amplitude signals. Second, to boost the presentation of images for surgeons, multiple frames of photoacoustic data can be averaged together, reducing noise and improving the image SNR. However, this approach introduces the limitation of reduced frame rates, which may be remedied with higher input energies or higher pulse repetition frequencies.
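The first two categories reduce to a few lines of array arithmetic. The following Python sketch shows frame averaging followed by log compression to a fixed display dynamic range; the 40 dB default is an illustrative choice rather than a recommended setting:

```python
import numpy as np

def average_frames(frames):
    """Average a stack of co-registered photoacoustic frames.

    frames: (n_frames, rows, cols). SNR against incoherent noise improves by
    roughly sqrt(n_frames), at the cost of a proportionally lower display rate.
    """
    return frames.mean(axis=0)

def log_compress(envelope, dynamic_range_db=40.0):
    """Map an envelope-detected image to a fixed dynamic range for display."""
    db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    # clipping at a shallower dynamic range darkens the noise floor
    return np.clip(db, -dynamic_range_db, 0.0)
```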

Third, as demonstrated in Figs. 7 and 11(a), co-registration of photoacoustic images with other interventional imaging options such as CT, OCT, and most commonly ultrasound imaging may also be necessary after a photoacoustic image has been formed. Ultrasound imaging is advantageous over MRI and CT in this context because MRI and CT images are typically not updated during surgery. However, ultrasound imaging alone is sometimes insufficient to provide appropriate anatomical context for critical or surrounding structures (e.g., ureters in hysterectomy [63], accurate needle detection during biopsy [21,116]).

Fourth, image post-processing may be necessary to achieve segmentations for interventional tasks such as spectral unmixing [69], robotic visual servoing [95,116], or real-time measurements of local oxygen saturation during a surgical procedure [113]. Another post-processing option is to make 2D images that encode depth information through the use of maximum amplitude projection images, as shown in Fig. 9.

6.5 Image quality assessment methods

Although perfection of physics-based reconstructions is not always necessary for interventional guidance, as noted in Section 6.2, acceptable image quality is an absolute requirement. Assessment of image quality is particularly important to balance this trade-off for surgical and interventional guidance applications, because poor image quality could result in incorrect or error-prone decisions. Approaches that rely on the photoacoustic images will also be affected, such as segmentation [95,116,165], conversion of proximity information to an auditory output [63], and robotic image guidance [83,110]. Sources of noise and artifacts in photoacoustic images for surgical and interventional guidance include reflections and reverberations from echogenic structures [28,41,191], acoustic clutter [41,116], out-of-plane signals appearing as artifacts [41,102], low laser energies [192], random fluctuations in the source distribution (e.g., variations in fluence at the absorber surface [192], variations in the optical absorption within the absorber [164]), receiver electronics sensitivity [192,193], and the limited view of the ultrasound transducer [194]. These noise and artifact sources have the combined net effect of producing images with inaccurate information for surgical or interventional guidance.

There are a few common variables and assessment metrics that can be combined to determine the suitability of system designs, as summarized in Table 2. Variables include the mean of regions within and outside a target of interest (i.e., $\mu _i$ and $\mu _o$, respectively), the standard deviation of these same regions (i.e., $\sigma _i$ and $\sigma _o$, respectively), and the histograms of these same regions (i.e., $h_i$ and $h_o$, respectively). Traditional metrics such as contrast, resolution, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) provide insight into the quality of photoacoustic images. However, these traditional metrics are highly susceptible to dynamic range alterations, which can produce higher contrast or CNR values without visual improvement in target detectability. To address these shortcomings, the generalized contrast-to-noise ratio (gCNR) was proposed for ultrasound images [195] and later translated to photoacoustic imaging [196]. The gCNR metric offers a robust definition of target visibility based on the separation of histograms in regions defined as the target and background, enabling a more robust benchmark to compare images created from different algorithms. The remaining metrics in Table 2 are either directly or indirectly related to the resolution of the photoacoustic imaging system used for surgical or interventional guidance. In summary, these assessment methods quantify important properties of target appearances in images (e.g., expected size, positional accuracy, and detectability) in order to ultimately determine imaging system suitability for a particular surgical or interventional guidance task.
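Because the gCNR plays this benchmarking role, a minimal Python sketch of its computation is shown below, following the histogram-overlap definition of [195,196]; the bin count is an illustrative parameter:

```python
import numpy as np

def gcnr(target_pixels, background_pixels, n_bins=256):
    """Generalized contrast-to-noise ratio from target and background samples.

    gCNR = 1 - OVL, where OVL is the overlap of the normalized histograms
    h_i and h_o. A gCNR of 1 indicates perfectly separable regions, and the
    metric is invariant to monotonic dynamic range transformations.
    """
    lo = min(target_pixels.min(), background_pixels.min())
    hi = max(target_pixels.max(), background_pixels.max())
    bins = np.linspace(lo, hi, n_bins + 1)          # shared bin edges
    h_i, _ = np.histogram(target_pixels, bins=bins)
    h_o, _ = np.histogram(background_pixels, bins=bins)
    h_i = h_i / h_i.sum()                           # normalize to probabilities
    h_o = h_o / h_o.sum()
    return 1.0 - np.minimum(h_i, h_o).sum()         # 1 - histogram overlap
```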

Table 2. Common image quality and assessment metrics and their application to surgical and interventional guidance system design: Contrast, signal-to-noise ratio (SNR); contrast-to-noise ratio (CNR); generalized contrast-to-noise ratio (gCNR); full width at half-maximum (FWHM); root mean squared error (RMSE); mean absolute error (MAE).

7. Possible future directions

Oftentimes, surgeons must be in tune with various aspects of surgery, such as tactile feedback, operating room sounds that are either familiar or unusual, and the constant alerts provided by physiology monitoring systems. With this existing cognitive load added to the cognitive abilities needed to perform surgery, it is not realistic to expect surgeons to focus their attention solely on photoacoustic image guidance systems, regardless of their expected utility to improve surgical outcomes. Therefore, the ambient conditions of the operating room or interventional suite must be considered to minimize disturbances when designing photoacoustic-based input information to guide interventional procedures, to demarcate tumor boundaries, and to alert surgeons of the impending risk of injury.

Options to more evenly distribute the multiple sources of information when incorporating photoacoustic systems include photoacoustic-based auditory information [64] or conversion of photoacoustic image information to other visual cues such as blinking warning lights that are mounted on surgical instruments [87]. During intraoperative tumor margin assessment, an uncertain illumination region can potentially be addressed with a visible aiming beam that illuminates targets of interest, enabling visual confirmation of the intended field-of-view [114], assuming that the wavelengths used for the imaging system are not already in the visible range. In this case, wavelengths of visible light can potentially be interspersed with the photoacoustic imaging laser pulses to achieve the proposed goal. This concept can potentially be extended to other areas, enabling selective visualization of the illuminating beam in order to confirm regions interrogated to elicit photoacoustic responses.

In addition to display format considerations, the methods described in Section 3, which are proposed for specific surgical procedures or targeted organs, may be applied to other areas. For example, ultrasound and photoacoustic image guidance of stem cell injections and stem cell monitoring has the potential to be broader than proposed for the spine [67,69,90], and similar innovations could be carried over for treatment of diabetes, heart disease, and stroke [197]. In addition, as a non-ionizing imaging tool, photoacoustic imaging could be used to replace fluoroscopy. This idea was first proposed for cardiac catheterization procedures [95]; however, the concept is also applicable to multiple applications of surgical and interventional guidance that typically utilize fluoroscopy. Similarly, methods that have been developed for diagnostic applications, such as uterine or cervical cancer detection [150,198], may be extended to usage on intended organs during an operation or intervention (e.g., gynecological surgery). Integrated photoacoustic and ultrasound balloon-tip catheter imaging probes, which have been demonstrated for photoacoustic imaging during colonoscopy applications [199,200], have the potential to be extended to other regions of the body to improve acoustic coupling in otherwise challenging spaces, such as the nasal cavity [35]. Additional novel system designs and form factors to enhance system capabilities include the presence of external ultrasound probes in multiple locations to improve detection [35] or the use of flexible ultrasound arrays that conform to various organ shapes and sizes to maximize image quality by offering more viewing angles for image reconstruction [47]. Some of these system designs may be prototyped purely in silico with the advent of new and emerging theoretical principles dedicated to surgical guidance, as recently described for newly introduced photoacoustic spatial coherence theory [201].

8. Summary and outlook

This review summarizes applications of photoacoustic imaging for surgical and interventional guidance in organs spanning the brain, pituitary, spine, breast, heart, lungs, liver, kidney, pancreas, uterus, prostate, leg, and foot. Specific surgical and interventional applications include neurosurgery, spinal fusion surgery, spinal stem cell delivery, breast conserving surgery, cardiac catheterization, pulmonary interventions, abdominal surgery, shock-wave lithotripsy, hysterectomy, fetal interventions, prostatectomy, prostate biopsy, prostate brachytherapy, endovenous laser ablation, foot revascularization surgery, tumor margin assessment, and percutaneous interventions. We consider the developmental stage of each presented application on its pathway toward clinical or surgical translation, and we share our assessment of optional and necessary steps along this pathway. The basic requirements to develop systems for photoacoustic-guided surgery include a merger of common light sources, acoustic reception hardware, and custom integration approaches. Custom approaches include commercially available integration of optical and acoustic systems, integration of optical fibers and piezoelectric elements with the tips of common surgical instruments, customized optical and acoustic devices that interface with the human body during surgery, and integration with robotics. In tandem with hardware customization, software customization is equally necessary to maximize image quality, with common and recommended assessment criteria for surgical guidance including metrics focused on target size, position, and detectability. These assessment criteria apply to both endogenous chromophores that naturally exist during surgery (e.g., hemoglobin, lipids) and exogenous chromophores that are introduced to assist the surgery or interventional procedure (e.g., metallic tool tips, contrast agents). With possible new directions that range from highly practical to exceptionally visionary, a bright future lights the pathway toward surgical and interventional adoption of photoacoustic-based technologies.

Funding

National Science Foundation (CAREER Award ECCS-1751522, SCH Award IIS-2014088); Alfred P. Sloan Foundation; National Institutes of Health (R00-EB018994, R21-EB025621).

Acknowledgments

This work was supported by the NSF CAREER Award ECCS-1751522, NSF SCH Award IIS-2014088, NIH R00-EB018994, NIH Trailblazer Award R21 EB025621, and the Alfred P. Sloan Research Fellowship.

Disclosures

The authors declare no conflicts of interest.

References

1. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003).

2. S. A. Ermilov, T. Khamapirad, A. Conjusteau, M. H. Leonard, R. Lacewell, K. Mehta, T. Miller, and A. A. Oraevsky, “Laser optoacoustic imaging system for detection of breast cancer,” J. Biomed. Opt. 14(2), 024007 (2009).

3. S. Manohar, S. E. Vaartjes, J. C. van Hespen, J. M. Klaase, F. M. van den Engh, W. Steenbergen, and T. G. Van Leeuwen, “Initial results of in vivo non-invasive cancer imaging in the human breast using near-infrared photoacoustics,” Opt. Express 15(19), 12277–12285 (2007).

4. K. Homan, S. Kim, Y.-S. Chen, B. Wang, S. Mallidi, and S. Emelianov, “Prospects of molecular photoacoustic imaging at 1064 nm wavelength,” Opt. Lett. 35(15), 2663–2665 (2010).

5. P. Beard, “Biomedical photoacoustic imaging,” Interface Focus 1(4), 602–631 (2011).

6. J. L.-S. Su, B. Wang, and S. Y. Emelianov, “Photoacoustic imaging of coronary artery stents,” Opt. Express 17(22), 19894–19901 (2009).

7. J. L. Su, A. B. Karpiouk, B. Wang, and S. Y. Emelianov, “Photoacoustic imaging of clinical metal needles in tissue,” J. Biomed. Opt. 15(2), 021309 (2010).

8. R. Fainchtein, B. J. Stoyanov, J. C. Murphy, D. A. Wilson, and D. F. Hanley, “In-situ determination of concentration and degree of oxygenation of hemoglobin in neural tissue by pulsed photoacoustic spectroscopy,” in Optical Tomography and Spectroscopy of Tissue: Theory, Instrumentation, Model, and Human Studies II, vol. 2979 (International Society for Optics and Photonics, 1997), pp. 417–428.

9. C. Hoelen, F. De Mul, R. Pongers, and A. Dekker, “Three-dimensional photoacoustic imaging of blood vessels in tissue,” Opt. Lett. 23(8), 648–650 (1998).

10. T. P. Matthews, C. Zhang, D.-K. Yao, K. I. Maslov, and L. V. Wang, “Label-free photoacoustic microscopy of peripheral nerves,” J. Biomed. Opt. 19(1), 1 (2014).

11. J. M. Mari, S. West, P. C. Beard, and A. E. Desjardins, “Multispectral photoacoustic imaging of nerves with a clinical ultrasound system,” in Photons Plus Ultrasound: Imaging and Sensing 2014, vol. 8943 (International Society for Optics and Photonics, 2014), p. 89430W.

12. J. M. Mari, W. Xia, S. J. West, and A. E. Desjardins, “Interventional multispectral photoacoustic imaging with a clinical ultrasound probe for discriminating nerves and tendons: an ex vivo pilot study,” J. Biomed. Opt. 20(11), 110503 (2015).

13. S. Sethuraman, J. H. Amirian, S. H. Litovsky, R. W. Smalling, and S. Y. Emelianov, “Spectroscopic intravascular photoacoustic imaging to differentiate atherosclerotic plaques,” Opt. Express 16(5), 3362–3367 (2008).

14. B. Wang, J. L. Su, J. Amirian, S. H. Litovsky, R. Smalling, and S. Emelianov, “Detection of lipid in atherosclerotic vessels using ultrasound-guided spectroscopic intravascular photoacoustic imaging,” Opt. Express 18(5), 4889–4897 (2010).

15. A. A. Oraevsky, S. L. Jacques, and F. K. Tittel, “Determination of tissue optical properties by piezoelectric detection of laser-induced stress waves,” in Laser-Tissue Interaction IV, vol. 1882 (International Society for Optics and Photonics, 1993), pp. 86–101.

16. F. Cross, R. Al-Dhahir, P. Dyer, and A. MacRobert, “Time-resolved photoacoustic studies of vascular tissue ablation at three laser wavelengths,” Appl. Phys. Lett. 50(15), 1019–1021 (1987).

17. A. Karabutov, N. Podymova, and V. Letokhov, “Time-resolved laser optoacoustic tomography of inhomogeneous media,” Appl. Phys. B 63(6), 545–563 (1996).

18. K. Daoudi, P. Van Den Berg, O. Rabot, A. Kohl, S. Tisserand, P. Brands, and W. Steenbergen, “Handheld probe integrating laser diode and ultrasound transducer array for ultrasound/photoacoustic dual modality imaging,” Opt. Express 22(21), 26365–26374 (2014).

19. A. Hariri, J. Lemaster, J. Wang, A. S. Jeevarathinam, D. L. Chao, and J. V. Jokerst, “The characterization of an economic and portable LED-based photoacoustic imaging system to facilitate molecular imaging,” Photoacoustics 9, 10–20 (2018).

20. J. Kim, S. Park, Y. Jung, S. Chang, J. Park, Y. Zhang, J. F. Lovell, and C. Kim, “Programmable real-time clinical photoacoustic and ultrasound imaging system,” Sci. Rep. 6(1), 35137 (2016).

21. W. Xia, M. Kuniyil Ajith Singh, E. Maneas, N. Sato, Y. Shigeta, T. Agano, S. Ourselin, S. J. West, and A. E. Desjardins, “Handheld real-time LED-based photoacoustic and ultrasound imaging system for accurate visualization of clinical metal needles and superficial vasculature to guide minimally invasive procedures,” Sensors 18(5), 1394 (2018).

22. P. J. Kaczkowski and R. E. Daigle, “The Verasonics ultrasound system as a pedagogic tool in teaching wave propagation, scattering, beamforming, and signal processing concepts in physics and engineering,” J. Acoust. Soc. Am. 129(4), 2648 (2011).

23. M. Walczak, M. Lewandowski, and N. Żołek, “A real-time streaming DAQ for Ultrasonix research scanner,” in 2014 IEEE International Ultrasonics Symposium (IEEE, 2014), pp. 1257–1260.

24. C. C. Cheung, C. Alfred, N. Salimi, B. Y. Yiu, I. K. Tsang, B. Kerby, R. Z. Azar, and K. Dickie, “Multi-channel pre-beamformed data acquisition system for research on advanced ultrasound imaging methods,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 59(2), 243–253 (2012).

25. C. Jia, J. Xia, I. M. Pelivanov, S.-W. Huang, Y. Jin, C. H. Seo, L. Huang, J. F. Eary, X. Gao, and M. O’Donnell, “Contrast-enhanced photoacoustic imaging,” in 2010 IEEE International Ultrasonics Symposium (IEEE, 2010), pp. 507–510.

26. N. Kuo, H. J. Kang, T. DeJournett, J. Spicer, and E. Boctor, “Photoacoustic imaging of prostate brachytherapy seeds in ex vivo prostate,” in Medical Imaging 2011: Visualization, Image-Guided Procedures, and Modeling, vol. 7964 (International Society for Optics and Photonics, 2011), p. 796409.

27. A. Reiter and M. A. L. Bell, “A machine learning approach to identifying point source locations in photoacoustic data,” in Photons Plus Ultrasound: Imaging and Sensing 2017, vol. 10064 (International Society for Optics and Photonics, 2017), p. 100643J.

28. D. Allman, A. Reiter, and M. A. L. Bell, “Photoacoustic source detection and reflection artifact removal enabled by deep learning,” IEEE Trans. Med. Imaging 37(6), 1464–1477 (2018).

29. A. Hauptmann and B. Cox, “Deep learning in photoacoustic tomography: Current approaches and future directions,” J. Biomed. Opt. 25(11), 112903 (2020).

30. J.-M. Yang, C. Li, R. Chen, Q. Zhou, K. K. Shung, and L. V. Wang, “Catheter-based photoacoustic endoscope,” J. Biomed. Opt. 19(6), 1 (2014).

31. D. Piras, C. Grijsen, P. Schutte, W. Steenbergen, and S. Manohar, “Photoacoustic needle: minimally invasive guidance to biopsy,” J. Biomed. Opt. 18(7), 070502 (2013).

32. W. Xia, D. I. Nikitichev, J. M. Mari, S. J. West, R. Pratt, A. L. David, S. Ourselin, P. C. Beard, and A. E. Desjardins, “Performance characteristics of an interventional multispectral photoacoustic imaging system for guiding minimally invasive procedures,” J. Biomed. Opt. 20(8), 1 (2015).

33. M. A. L. Bell, N. Kuo, D. Y. Song, and E. M. Boctor, “Short-lag spatial coherence beamforming of photoacoustic images for enhanced visualization of prostate brachytherapy seeds,” Biomed. Opt. Express 4(10), 1964–1977 (2013).

34. M. Allard, J. Shubert, and M. A. L. Bell, “Feasibility of photoacoustic-guided teleoperated hysterectomies,” J. Med. Imag. 5(2), 1 (2018).

35. M. T. Graham, J. Huang, F. Creighton, and M. A. L. Bell, “Simulations and human cadaver head studies to identify optimal acoustic receiver locations for minimally invasive photoacoustic-guided neurosurgery,” Photoacoustics 19, 100183 (2020).

36. B. Eddins and M. A. L. Bell, “Design of a multifiber light delivery system for photoacoustic-guided surgery,” J. Biomed. Opt. 22(4), 1 (2017).

37. M. A. L. Bell, X. Guo, D. Y. Song, and E. M. Boctor, “Transurethral light delivery for prostate photoacoustic imaging,” J. Biomed. Opt. 20(3), 036002 (2015).

38. M. A. L. Bell, A. K. Ostrowski, K. Li, P. Kazanzides, and E. M. Boctor, “Localization of transcranial targets for photoacoustic-guided endonasal surgeries,” Photoacoustics 3(2), 78–87 (2015).

39. C. Kim, T. N. Erpelding, L. Jankovic, M. D. Pashley, and L. V. Wang, “Deeply penetrating in vivo photoacoustic imaging using a clinical ultrasound array system,” Biomed. Opt. Express 1(1), 278–284 (2010).

40. T. Mitcham, K. Dextraze, H. Taghavi, M. Melancon, and R. Bouchard, “Photoacoustic imaging driven by an interstitial irradiation source,” Photoacoustics 3(2), 45–54 (2015).

41. M. A. L. Bell, N. P. Kuo, D. Y. Song, J. U. Kang, and E. M. Boctor, “In vivo visualization of prostate brachytherapy seeds with photoacoustic imaging,” J. Biomed. Opt. 19(12), 126011 (2014).

42. N. P. Kuo, H. J. Kang, D. Y. Song, J. U. Kang, and E. M. Boctor, “Real-time photoacoustic imaging of prostate brachytherapy seeds using a clinical ultrasound system,” J. Biomed. Opt. 17(6), 066005 (2012).

43. J. L. Su, R. R. Bouchard, A. B. Karpiouk, J. D. Hazle, and S. Y. Emelianov, “Photoacoustic imaging of prostate brachytherapy seeds,” Biomed. Opt. Express 2(8), 2243–2254 (2011).

44. R. Bouchard, O. Sahin, and S. Emelianov, “Ultrasound-guided photoacoustic imaging: current state and future development,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 61(3), 450–466 (2014).

45. T. Vu, D. Razansky, and J. Yao, “Listening to tissues with new light: recent technological advances in photoacoustic imaging,” J. Opt. 21(10), 103001 (2019).

46. S. H. Han, “Review of photoacoustic imaging for imaging-guided spinal surgery,” Neurospine 15(4), 306–322 (2018).

47. M. A. Lediju Bell, “Photoacoustic imaging for surgical guidance: Principles, applications, and outlook,” J. Appl. Phys. 128(6), 060904 (2020).

48. T. Zhao, A. E. Desjardins, S. Ourselin, T. Vercauteren, and W. Xia, “Minimally invasive photoacoustic imaging: Current status and future perspectives,” Photoacoustics 16, 100146 (2019).

49. M. S. Karthikesh and X. Yang, “Photoacoustic image-guided interventions,” Exp. Biol. Med. 245(4), 330–341 (2020).

50. S. Iskander-Rizk, A. F. van der Steen, and G. Van Soest, “Photoacoustic imaging for guidance of interventions in cardiovascular medicine,” Phys. Med. Biol. 64(16), 16TR01 (2019).

51. I. Steinberg, D. M. Huland, O. Vermesh, H. E. Frostig, W. S. Tummers, and S. S. Gambhir, “Photoacoustic clinical imaging,” Photoacoustics 14, 77–98 (2019).

52. A. B. E. Attia, G. Balasundaram, M. Moothanchery, U. Dinish, R. Bi, V. Ntziachristos, and M. Olivo, “A review of clinical photoacoustic imaging: Current and future trends,” Photoacoustics 16, 100144 (2019).

53. M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006).

54. S. Prahl, “Optical absorption of hemoglobin,” (1999).

55. R. L. van Veen, H. Sterenborg, A. Pifferi, A. Torricelli, and R. Cubeddu, “Determination of VIS-NIR absorption coefficients of mammalian fat, with time- and spatially resolved diffuse reflectance and transmission spectroscopy,” in Biomedical Topical Meeting (Optical Society of America, 2004), p. SF4.

56. R. R. Anderson, W. Farinelli, H. Laubach, D. Manstein, A. N. Yaroslavsky, J. Gubeli III, K. Jordan, G. R. Neil, M. Shinn, W. Chandler, G. P. Williams, S. V. Benson, D. R. Douglas, and H. Dylla, “Selective photothermolysis of lipid-rich tissues: A free electron laser study,” Lasers Surg. Med. 38(10), 913–919 (2006).

57. S. K. V. Sekar, I. Bargigia, A. Dalla Mora, P. Taroni, A. Ruggeri, A. Tosi, A. Pifferi, and A. Farina, “Diffuse optical characterization of collagen absorption from 500 to 1700 nm,” J. Biomed. Opt. 22(1), 015006 (2017).

58. B. Karlsson, C. G. Ribbing, A. Roos, E. Valkonen, and T. Karlsson, “Optical properties of some metal oxides in solar absorbers,” Phys. Scr. 25(6A), 826–831 (1982). [CrossRef]  

59. S. Prahl, “Methylene blue spectra,” (2017).

60. M. Landsman, G. Kwant, G. Mook, and W. Zijlstra, “Light-absorbing properties, stability, and spectral stabilization of indocyanine green,” J. Appl. Physiol. 40(4), 575–583 (1976). [CrossRef]  

61. S. Prahl, “Optical Absorption of Indocyanine Green (ICG),” (2018).

62. A. Wiacek, K. C. Wang, and M. A. L. Bell, “Techniques to distinguish the ureter from the uterine artery in photoacoustic-guided hysterectomies,” in Photons Plus Ultrasound: Imaging and Sensing 2019, vol. 10878 (International Society for Optics and Photonics, 2019), p. 108785K.

63. A. Wiacek, K. C. Wang, H. Wu, and M. A. L. Bell, “Dual-wavelength photoacoustic imaging for guidance of hysterectomy procedures,” in Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XVIII, vol. 11229 (International Society for Optics and Photonics, 2020), p. 112291D.

64. J. Humbert, O. Will, T. Peñate-Medina, O. Peñate-Medina, O. Jansen, M. Both, and C.-C. Glüer, “Comparison of photoacoustic and fluorescence tomography for the in vivo imaging of ICG-labelled liposomes in the medullary cavity in mice,” Photoacoustics 20, 100210 (2020). [CrossRef]  

65. C. Y. Lee, K. Fujino, Y. Motooka, A. Gregor, N. Bernards, H. Ujiie, T. Kinoshita, K. Y. Chung, S. H. Han, and K. Yasufuku, “Photoacoustic imaging to localize indeterminate pulmonary nodules: A preclinical study,” PLoS One 15(4), e0231488 (2020). [CrossRef]  

66. C. Kim, T. N. Erpelding, W. J. Akers, K. Maslov, L. Song, L. Jankovic, J. A. Margenthaler, S. Achilefu, and L. V. Wang, “Photoacoustic image-guided needle biopsy of sentinel lymph nodes,” in Photons Plus Ultrasound: Imaging and Sensing 2011, vol. 7899 (International Society for Optics and Photonics, 2011), p. 78990K.

67. K. P. Kubelick and S. Y. Emelianov, “In vivo photoacoustic guidance of stem cell injection and delivery for regenerative spinal cord therapies,” Neurophotonics 7(03), 1 (2020). [CrossRef]  

68. X. Jia, K. Fan, R. Zhang, D. Zhang, J. Zhang, Y. Gao, T. Zhang, W. Li, J. Li, X. Yan, and J. Tian, “Precise visual distinction of brain glioma from normal tissues via targeted photoacoustic and fluorescence navigation,” Nanomedicine 27, 102204 (2020). [CrossRef]  

69. E. M. Donnelly, K. P. Kubelick, D. S. Dumani, and S. Y. Emelianov, “Photoacoustic image-guided delivery of plasmonic-nanoparticle-labeled mesenchymal stem cells to the spinal cord,” Nano Lett. 18(10), 6625–6632 (2018). [CrossRef]  

70. J. P. Thawani, A. Amirshaghaghi, L. Yan, J. M. Stein, J. Liu, and A. Tsourkas, “Photoacoustic-guided surgery with indocyanine green-coated superparamagnetic iron oxide nanoparticle clusters,” Small 13(37), 1701300 (2017). [CrossRef]  

71. L. Xi, G. Zhou, N. Gao, L. Yang, D. A. Gonzalo, S. J. Hughes, and H. Jiang, “Photoacoustic and fluorescence image-guided surgery using a multifunctional targeted nanoprobe,” Ann. Surg. Oncol. 21(5), 1602–1609 (2014). [CrossRef]  

72. J. Qi, J. Li, R. Liu, Q. Li, H. Zhang, J. W. Lam, R. T. Kwok, D. Liu, D. Ding, and B. Z. Tang, “Boosting fluorescence-photoacoustic-Raman properties in one fluorophore for precise cancer surgery,” Chem 5(10), 2657–2677 (2019). [CrossRef]  

73. J. Weber, P. C. Beard, and S. E. Bohndiek, “Contrast agents for molecular photoacoustic imaging,” Nat. Methods 13(8), 639–650 (2016). [CrossRef]  

74. G. P. Luke, D. Yeager, and S. Y. Emelianov, “Biomedical applications of photoacoustic imaging with exogenous contrast agents,” Ann. Biomed. Eng. 40(2), 422–437 (2012). [CrossRef]  

75. Q. Fu, R. Zhu, J. Song, H. Yang, and X. Chen, “Photoacoustic imaging: contrast agents and their biomedical applications,” Adv. Mater. 31, 1805875 (2018). [CrossRef]  

76. Z. Z. Zhang, L. B. Shields, D. A. Sun, Y. P. Zhang, M. A. Hunt, and C. B. Shields, “The art of intraoperative glioma identification,” Front. Oncol. 5, 175 (2015). [CrossRef]  

77. E. Najafzadeh, H. Ghadiri, M. Alimohamadi, P. Farnia, M. Mehrmohammadi, and A. Ahmadian, “Evaluation of multi-wavelengths LED-based photoacoustic imaging for maximum safe resection of glioma: a proof of concept study,” Int. J. Comput. Assist. Radiol. Surg. 15(6), 1053–1062 (2020). [CrossRef]  

78. M. T. Graham, J. Y. Guo, and M. A. L. Bell, “Simultaneous visualization of nerves and blood vessels with multispectral photoacoustic imaging for intraoperative guidance of neurosurgeries,” in Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XVII, vol. 10868 (International Society for Optics and Photonics, 2019), p. 108680R.

79. M. Buchfelder and J. Kreutzer, “Transcranial surgery for pituitary adenomas,” Pituitary 11(4), 375–384 (2008). [CrossRef]  

80. M. S. Agam, M. A. Wedemeyer, B. Wrobel, M. H. Weiss, J. D. Carmichael, and G. Zada, “Complications associated with microscopic and endoscopic transsphenoidal pituitary surgery: experience of 1153 consecutive cases treated at a single tertiary care pituitary center,” J. Neurosurg. 130(5), 1576–1583 (2019). [CrossRef]  

81. M. A. L. Bell, A. K. Ostrowski, P. Kazanzides, and E. Boctor, “Feasibility of transcranial photoacoustic imaging for interventional guidance of endonasal surgeries,” in Photons Plus Ultrasound: Imaging and Sensing 2014, vol. 8943 (International Society for Optics and Photonics, 2014), p. 894307.

82. M. A. L. Bell, A. B. Dagle, P. Kazanzides, and E. M. Boctor, “Experimental assessment of energy requirements and tool tip visibility for photoacoustic-guided endonasal surgery,” in Photons Plus Ultrasound: Imaging and Sensing 2016, vol. 9708 (International Society for Optics and Photonics, 2016), p. 97080D.

83. S. Kim, Y. Tan, P. Kazanzides, and M. A. L. Bell, “Feasibility of photoacoustic image guidance for telerobotic endonasal transsphenoidal surgery,” in 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), (IEEE, 2016), pp. 482–488.

84. P. Kazanzides, Z. Chen, A. Deguet, G. S. Fischer, R. H. Taylor, and S. P. DiMaio, “An open-source research kit for the da Vinci® Surgical System,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), (IEEE, 2014), pp. 6434–6439.

85. O. P. Gautschi, B. Schatlo, K. Schaller, and E. Tessitore, “Clinically relevant complications related to pedicle screw placement in thoracolumbar surgery and their management: a literature review of 35,630 pedicle screws,” Neurosurg. Focus 31(4), E8 (2011). [CrossRef]

86. J. Shubert and M. A. L. Bell, “A novel drill design for photoacoustic guided surgeries,” in Photons Plus Ultrasound: Imaging and Sensing 2018, vol. 10494 (International Society for Optics and Photonics, 2018), p. 104940J.

87. J. Shubert and M. A. L. Bell, “Photoacoustic imaging of a human vertebra: implications for guiding spinal fusion surgeries,” Phys. Med. Biol. 63(14), 144001 (2018). [CrossRef]  

88. E. A. Gonzalez, A. Jain, and M. A. L. Bell, “Combined ultrasound and photoacoustic image guidance of spinal pedicle cannulation demonstrated with intact ex vivo specimens,” IEEE Trans. Biomed. Eng. (2021).

89. E. Gonzalez, A. Wiacek, and M. A. L. Bell, “Visualization of custom drill bit tips in a human vertebra for photoacoustic-guided spinal fusion surgeries,” in Photons Plus Ultrasound: Imaging and Sensing 2019, vol. 10878 (International Society for Optics and Photonics, 2019), p. 108785M.

90. K. P. Kubelick and S. Y. Emelianov, “Prussian blue nanocubes as a multimodal contrast agent for image-guided stem cell therapy of the spinal cord,” Photoacoustics 18, 100166 (2020). [CrossRef]  

91. L. Xi, S. R. Grobmyer, L. Wu, R. Chen, G. Zhou, L. G. Gutwein, J. Sun, W. Liao, Q. Zhou, H. Xie, and H. Jiang, “Evaluation of breast tumor margins in vivo with intraoperative photoacoustic imaging,” Opt. Express 20(8), 8726–8731 (2012). [CrossRef]  

92. I. Kosik, M. Brackstone, A. Kornecki, A. Chamson-Reig, P. Wong, M. H. Araghi, and J. J. Carson, “Intraoperative photoacoustic screening of breast cancer: a new perspective on malignancy visualization and surgical guidance,” J. Biomed. Opt. 24(05), 1 (2019). [CrossRef]  

93. R. Li, P. Wang, L. Lan, F. P. Lloyd, C. J. Goergen, S. Chen, and J.-X. Cheng, “Assessing breast tumor margin by multispectral photoacoustic tomography,” Biomed. Opt. Express 6(4), 1273–1281 (2015). [CrossRef]  

94. X. Guo, B. Tavakoli, H.-J. Kang, J. U. Kang, R. Etienne-Cummings, and E. M. Boctor, “Photoacoustic active ultrasound element for catheter tracking,” in Photons Plus Ultrasound: Imaging and Sensing 2014, vol. 8943 (International Society for Optics and Photonics, 2014), p. 89435M.

95. M. Graham, F. Assis, D. Allman, A. Wiacek, E. Gonzalez, M. Gubbi, J. Dong, H. Hou, S. Beck, J. Chrispin, and M. A. L. Bell, “In vivo demonstration of photoacoustic image guidance and robotic visual servoing for cardiac catheter-based interventions,” IEEE Trans. Med. Imaging 39(4), 1015–1029 (2020). [CrossRef]  

96. D. Allman, F. Assis, J. Chrispin, and M. A. L. Bell, “A deep learning-based approach to identify in vivo catheter tips during photoacoustic-guided cardiac interventions,” in Photons Plus Ultrasound: Imaging and Sensing 2019, vol. 10878 (International Society for Optics and Photonics, 2019), p. 108785E.

97. S. Iskander-Rizk, P. Kruizinga, A. F. Van Der Steen, and G. van Soest, “Spectroscopic photoacoustic imaging of radiofrequency ablation in the left atrium,” Biomed. Opt. Express 9(3), 1309–1322 (2018). [CrossRef]  

98. S. Iskander-Rizk, P. Kruizinga, R. Beurskens, G. Springeling, F. Mastik, N. M. de Groot, P. Knops, A. F. van der Steen, and G. van Soest, “Real-time photoacoustic assessment of radiofrequency ablation lesion formation in the left atrium,” Photoacoustics 16, 100150 (2019). [CrossRef]  

99. M. Li, B. Lan, G. Sankin, Y. Zhou, W. Liu, J. Xia, D. Wang, G. Trahey, P. Zhong, and J. Yao, “Simultaneous photoacoustic imaging and cavitation mapping in shockwave lithotripsy,” IEEE Trans. Med. Imaging 39(2), 468–477 (2020). [CrossRef]  

100. M. Li, T. Vu, G. Sankin, B. Winship, K. Boydston, R. Terry, P. Zhong, and J. Yao, “Internal-illumination photoacoustic tomography enhanced by a graded-scattering fiber diffuser,” IEEE Trans. Med. Imaging 40(1), 346–356 (2021). [CrossRef]

101. K. M. Kempski, A. Wiacek, J. Palmer, M. Graham, E. González, B. Goodson, D. Allman, H. Hou, S. Beck, J. He, and M. A. L. Bell, “In vivo demonstration of photoacoustic-guided liver surgery,” in Photons Plus Ultrasound: Imaging and Sensing 2019, vol. 10878 (International Society for Optics and Photonics, 2019), p. 108782T.

102. K. M. Kempski, A. Wiacek, M. Graham, E. González, B. Goodson, D. Allman, J. Palmer, H. Hou, S. Beck, J. He, and M. A. L. Bell, “In vivo photoacoustic imaging of major blood vessels in the pancreas and liver during surgery,” J. Biomed. Opt. 24(12), 1 (2019). [CrossRef]  

103. R. H. Blackwell, E. J. Kirshenbaum, A. S. Shah, P. C. Kuo, G. N. Gupta, and T. M. Turk, “Complications of recognized and unrecognized iatrogenic ureteral injury at time of hysterectomy: a population based analysis,” J. Urol. 199(6), 1540–1545 (2018). [CrossRef]  

104. M. Allard, J. Shubert, and M. A. L. Bell, “Feasibility of photoacoustic guided hysterectomies with the da vinci robot,” in Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling, vol. 10576 (International Society for Optics and Photonics, 2018), p. 105760A.

105. W. Xia, E. Maneas, D. I. Nikitichev, C. A. Mosse, G. S. Dos Santos, T. Vercauteren, A. L. David, J. Deprest, S. Ourselin, P. C. Beard, and A. E. Desjardins, “Interventional photoacoustic imaging of the human placenta with ultrasonic tracking for minimally invasive fetal surgeries,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 371–378.

106. E. Maneas, R. Aughwane, N. Huynh, W. Xia, R. Ansari, M. Kuniyil Ajith Singh, J. C. Hutchinson, N. J. Sebire, O. J. Arthurs, J. Deprest, S. Ourselin, P. C. Beard, A. Melbourne, T. Vercauteren, A. L. David, and A. E. Desjardins, “Photoacoustic imaging of the human placental vasculature,” J. Biophotonics 13(4), e201900167 (2020). [CrossRef]  

107. M. A. Bjurlin, H. B. Carter, P. Schellhammer, M. S. Cookson, L. G. Gomella, D. Troyer, T. M. Wheeler, S. Schlossberg, D. F. Penson, and S. S. Taneja, “Optimization of initial prostate biopsy in clinical practice: sampling, labeling and specimen processing,” J. Urol. 189(6), 2039–2046 (2013). [CrossRef]  

108. V. S. Dogra, B. K. Chinni, K. S. Valluru, J. V. Joseph, A. Ghazi, J. L. Yao, K. Evans, E. M. Messing, and N. A. Rao, “Multispectral photoacoustic imaging of prostate cancer: preliminary ex-vivo results,” J. Clin. Imaging Sci. 3, 41 (2013). [CrossRef]

109. B. L. Bungart, L. Lan, P. Wang, R. Li, M. O. Koch, L. Cheng, T. A. Masterson, M. Dundar, and J.-X. Cheng, “Photoacoustic tomography of intact human prostates and vascular texture analysis identify prostate cancer biopsy targets,” Photoacoustics 11, 46–55 (2018). [CrossRef]  

110. H. Moradi, S. Tang, and S. E. Salcudean, “Toward intra-operative prostate photoacoustic imaging: configuration evaluation and implementation using the da Vinci research kit,” IEEE Trans. Med. Imaging 38(1), 57–68 (2019). [CrossRef]  

111. C. Schneider, J. Guerrero, C. Nguan, R. Rohling, and S. Salcudean, “Intra-operative "pick-up" ultrasound for robot assisted surgery with vessel extraction and registration: A feasibility study,” in International Conference on Information Processing in Computer-Assisted Interventions, (Springer, 2011), pp. 122–132.

112. H. Moradi, S. Tang, and S. E. Salcudean, “Toward robot-assisted photoacoustic imaging: implementation using the da Vinci research kit and virtual fixtures,” IEEE Robot. Autom. Lett. 4(2), 1807–1814 (2019). [CrossRef]  

113. Q. Yu, S. Huang, Z. Wu, J. Zheng, X. Chen, and L. Nie, “Label-free visualization of early cancer hepatic micrometastasis and intraoperative image-guided surgery by photoacoustic imaging,” J. Nucl. Med. 61(7), 1079–1085 (2020). [CrossRef]  

114. D. Lee, C. Lee, S. Kim, Q. Zhou, J. Kim, and C. Kim, “In vivo near infrared virtual intraoperative surgical photoacoustic optical coherence tomography,” Sci. Rep. 6(1), 35176 (2016). [CrossRef]  

115. K. J. Francis and S. Manohar, “Photoacoustic imaging in percutaneous radiofrequency ablation: device guidance and ablation visualization,” Phys. Med. Biol. 64(18), 184001 (2019). [CrossRef]  

116. M. A. L. Bell and J. Shubert, “Photoacoustic-based visual servoing of a needle tip,” Sci. Rep. 8(1), 1–12 (2018). [CrossRef]  

117. K. S. Valluru, K. E. Wilson, and J. K. Willmann, “Photoacoustic imaging in oncology: translational preclinical and early clinical experience,” Radiology 280(2), 332–349 (2016). [CrossRef]  

118. M. Ahmed, C. L. Brace, F. T. Lee Jr, and S. N. Goldberg, “Principles of and advances in percutaneous ablation,” Radiology 258(2), 351–369 (2011). [CrossRef]  

119. J. Zhang, S. Agrawal, A. Dangi, N. Frings, and S.-R. Kothapalli, “Computer assisted photoacoustic imaging guided device for safer percutaneous needle operations,” in Photons Plus Ultrasound: Imaging and Sensing 2019, vol. 10878 (International Society for Optics and Photonics, 2019), p. 1087866.

120. Y. Yan, S. John, M. Ghalehnovi, L. Kabbani, N. A. Kennedy, and M. Mehrmohammadi, “Photoacoustic imaging for image-guided endovenous laser ablation procedures,” Sci. Rep. 9(1), 2933 (2019). [CrossRef]  

121. Y. Wang, Y. Zhan, L. M. Harris, S. Khan, and J. Xia, “A portable three-dimensional photoacoustic tomography system for imaging of chronic foot ulcers,” Quant. Imaging Med. Surg. 9(5), 799 (2019). [CrossRef]  

122. W. C. Vogt, C. Jia, K. A. Wear, B. S. Garra, and T. J. Pfefer, “Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties,” J. Biomed. Opt. 21(10), 101405 (2016). [CrossRef]  

123. S. L. Vieira, T. Z. Pavan, J. E. Junior, and A. A. Carneiro, “Paraffin-gel tissue-mimicking material for ultrasound-guided needle biopsy phantom,” Ultrasound Med. Biol. 39(12), 2477–2484 (2013). [CrossRef]

124. E. Maneas, W. Xia, O. Ogunlade, M. Fonseca, D. I. Nikitichev, A. L. David, S. J. West, S. Ourselin, J. C. Hebden, T. Vercauteren, and A. E. Desjardins, “Gel wax-based tissue-mimicking phantoms for multispectral photoacoustic imaging,” Biomed. Opt. Express 9(3), 1151–1163 (2018). [CrossRef]  

125. B. Arnal, C.-W. Wei, C. Perez, T.-M. Nguyen, M. Lombardo, I. Pelivanov, L. D. Pozzo, and M. O’Donnell, “Sono-photoacoustic imaging of gold nanoemulsions: Part II. Real time imaging,” Photoacoustics 3(1), 11–19 (2015). [CrossRef]  

126. W. C. Vogt, C. Jia, K. A. Wear, B. S. Garra, and T. J. Pfefer, “Phantom-based image quality test methods for photoacoustic imaging systems,” J. Biomed. Opt. 22(9), 1–14 (2017). [CrossRef]  

127. B. Huang, J. Xia, K. I. Maslov, and L. V. Wang, “Improving limited-view photoacoustic tomography with an acoustic reflector,” J. Biomed. Opt. 18(11), 110505 (2013). [CrossRef]  

128. J. Zhou and J. V. Jokerst, “Photoacoustic imaging with fiber optic technology: A review,” Photoacoustics 20, 100211 (2020). [CrossRef]  

129. P. K. Upputuri and M. Pramanik, “Fast photoacoustic imaging systems using pulsed laser diodes: a review,” Biomed. Eng. Lett. 8(2), 167–181 (2018). [CrossRef]  

130. L. Lin, P. Hu, J. Shi, C. M. Appleton, K. Maslov, L. Li, R. Zhang, and L. V. Wang, “Single-breath-hold photoacoustic computed tomography of the breast,” Nat. Commun. 9(1), 2352 (2018). [CrossRef]  

131. M. J. Zipparo, K. K. Shung, and T. R. Shrout, “Piezoceramics for high-frequency (20 to 100 MHz) single-element imaging transducers,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 44(5), 1038–1048 (1997). [CrossRef]

132. S. Vaithilingam, T.-J. Ma, Y. Furukawa, I. O. Wygant, X. Zhuang, A. De La Zerda, O. Oralkan, A. Kamaya, R. B. Jeffrey, and B. T. Khuri-Yakub, “Three-dimensional photoacoustic imaging using a two-dimensional CMUT array,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 56(11), 2411–2419 (2009). [CrossRef]  

133. S.-R. Kothapalli, G. A. Sonn, J. W. Choe, A. Nikoozadeh, A. Bhuyan, K. K. Park, P. Cristman, R. Fan, A. Moini, B. C. Lee, J. Wu, T. E. Carver, D. Trivedi, L. Shiiba, I. Steinberg, D. M. Huland, R. M. F. J. C. Liao, J. D. Brooks, P. T. Khuri-Yakub, and S. S. Gambhir, “Simultaneous transrectal ultrasound and photoacoustic human prostate imaging,” Sci. Transl. Med. 11(507), eaav2169 (2019). [CrossRef]  

134. R. Manwar, T. Simpson, A. Bakhtazad, and S. Chowdhury, “Fabrication and characterization of a high frequency and high coupling coefficient CMUT array,” Microsyst. Technol. 23(10), 4965–4977 (2017). [CrossRef]  

135. E. Z. Zhang, B. Povazay, J. Laufer, A. Alex, B. Hofer, B. Pedley, C. Glittenberg, B. Treeby, B. Cox, P. Beard, and W. Drexler, “Multimodal photoacoustic and optical coherence tomography scanner using an all optical detection scheme for 3D morphological skin imaging,” Biomed. Opt. Express 2(8), 2202–2215 (2011). [CrossRef]  

136. E. Z. Zhang and P. C. Beard, “A miniature all-optical photoacoustic imaging probe,” in Photons Plus Ultrasound: Imaging and Sensing 2011, vol. 7899 (International Society for Optics and Photonics, 2011), p. 78991F.

137. R. Ansari, E. Z. Zhang, A. E. Desjardins, and P. C. Beard, “Miniature all-optical flexible forward-viewing photoacoustic endoscopy probe for surgical guidance,” Opt. Lett. 45(22), 6238–6241 (2020). [CrossRef]  

138. P. C. Beard, F. Perennes, and T. N. Mills, “Transduction mechanisms of the Fabry-Perot polymer film sensing concept for wideband ultrasound detection,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 46(6), 1575–1582 (1999). [CrossRef]  

139. G. Wissmeyer, M. A. Pleitez, A. Rosenthal, and V. Ntziachristos, “Looking at sound: optoacoustics with all-optical ultrasound detection,” Light: Sci. Appl. 7(1), 53 (2018). [CrossRef]  

140. J. Horstmann, H. Spahr, C. Buj, M. Münter, and R. Brinkmann, “Full-field speckle interferometry for non-contact photoacoustic tomography,” Phys. Med. Biol. 60(10), 4045–4058 (2015). [CrossRef]  

141. H. Li, F. Cao, Z. Yu, and P. Lai, “Interferometry-free noncontact photoacoustic detection method based on speckle correlation change,” Opt. Lett. 44(22), 5481–5484 (2019). [CrossRef]  

142. Z. Hosseinaee, M. Le, K. Bell, and P. H. Reza, “Towards non-contact photoacoustic imaging,” Photoacoustics 20, 100207 (2020). [CrossRef]  

143. R. Ansari, E. Z. Zhang, A. E. Desjardins, and P. C. Beard, “All-optical forward-viewing photoacoustic probe for high-resolution 3D endoscopy,” Light: Sci. Appl. 7(1), 75 (2018). [CrossRef]  

144. N. Huynh, O. Ogunlade, E. Zhang, B. Cox, and P. Beard, “Photoacoustic imaging using an 8-beam Fabry-Perot scanner,” in Photons Plus Ultrasound: Imaging and Sensing 2016, vol. 9708 (International Society for Optics and Photonics, 2016), p. 97082L.

145. N. Huynh, F. Lucka, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Sub-sampled Fabry-Perot photoacoustic scanner for fast 3D imaging,” in Photons Plus Ultrasound: Imaging and Sensing 2017, vol. 10064 (International Society for Optics and Photonics, 2017), p. 100641Y.

146. N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26–29 (2016). [CrossRef]  

147. B. Dong, C. Sun, and H. F. Zhang, “Optical detection of ultrasound in photoacoustic imaging,” IEEE Trans. Biomed. Eng. 64(1), 4–15 (2017). [CrossRef]  

148. R. Manwar, K. Kratkiewicz, and K. Avanaki, “Overview of ultrasound detection technologies for photoacoustic imaging,” Micromachines 11(7), 692 (2020). [CrossRef]  

149. M. Basij, Y. Yan, S. S. Alshahrani, H. Helmi, T. K. Burton, J. W. Burmeister, M. M. Dominello, I. S. Winer, and M. Mehrmohammadi, “Miniaturized phased-array ultrasound and photoacoustic endoscopic imaging system,” Photoacoustics 15, 100139 (2019). [CrossRef]  

150. G. Yang, E. Amidi, S. Nandy, A. Mostafa, and Q. Zhu, “Optimized light delivery probe using ball lenses for co-registered photoacoustic and ultrasound endo-cavity subsurface imaging,” Photoacoustics 13, 66–75 (2019). [CrossRef]  

151. American National Standards Institute, American National Standard for Safe Use of Lasers: ANSI Z136.1–2000 (Laser Institute of America, 2007).

152. M. A. Lediju, M. J. Pihl, J. J. Dahl, and G. E. Trahey, “Quantitative assessment of the magnitude, impact and spatial extent of ultrasonic clutter,” Ultrason. Imaging 30(3), 151–168 (2008). [CrossRef]  

153. J. Huang, A. Wiacek, K. M. Kempski, T. Palmer, J. Izzi, S. Beck, and M. A. L. Bell, “Empirical assessment of laser safety for photoacoustic-guided liver surgeries,” Biomed. Opt. Express 12(3), 1205–1216 (2021). [CrossRef]  

154. T. L. Ghezzi and O. C. Corleta, “30 years of robotic surgery,” World J. Surg. 40(10), 2550–2557 (2016). [CrossRef]  

155. N. Gandhi, M. Allard, S. Kim, P. Kazanzides, and M. A. L. Bell, “Photoacoustic-based approach to surgical guidance performed with and without a da Vinci robot,” J. Biomed. Opt. 22(12), 1 (2017). [CrossRef]  

156. R. Kikinis, S. D. Pieper, and K. G. Vosburgh, “3D Slicer: a platform for subject-specific image analysis, visualization, and clinical support,” in Intraoperative imaging and image-guided therapy, (Springer, 2014), pp. 277–289.

157. J. Tokuda, G. S. Fischer, X. Papademetris, Z. Yaniv, L. Ibanez, P. Cheng, H. Liu, J. Blevins, J. Arata, A. J. Golby, T. Kapur, S. Pieper, E. C. Burdette, G. Fichtinger, C. M. Tempany, and N. Hata, “OpenIGTLink: an open network protocol for image-guided therapy environment,” Int. J. Med. Robot. Comput. Assist. Surg. 5(4), 423–434 (2009). [CrossRef]

158. P. Kazanzides, A. Deguet, B. Vagvolgyi, Z. Chen, and R. H. Taylor, “Modular interoperability in surgical robotics software,” Mech. Eng. 137(09), S19–S22 (2015). [CrossRef]  

159. K. Hollman, K. Rigby, and M. O’Donnell, “Coherence factor of speckle from a multi-row probe,” in 1999 IEEE Ultrasonics Symposium. Proceedings. International Symposium (Cat. No. 99CH37027), vol. 2 (IEEE, 1999), pp. 1257–1260.

160. J. F. Synnevag, A. Austeng, and S. Holm, “Adaptive beamforming applied to medical ultrasound imaging,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 54(8), 1606–1613 (2007). [CrossRef]  

161. G. Matrone, A. S. Savoia, G. Caliano, and G. Magenes, “The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging,” IEEE Trans. Med. Imaging 34(4), 940–949 (2015). [CrossRef]  

162. M. A. Lediju, G. E. Trahey, B. C. Byram, and J. J. Dahl, “Short-lag spatial coherence of backscattered echoes: Imaging characteristics,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 58(7), 1377–1388 (2011). [CrossRef]  

163. B. Pourebrahimi, S. Yoon, D. Dopsa, and M. C. Kolios, “Improving the quality of photoacoustic images using the short-lag spatial coherence imaging technique,” in Photons Plus Ultrasound: Imaging and Sensing 2013, vol. 8581 (International Society for Optics and Photonics, 2013), p. 85813Y.

164. M. T. Graham and M. A. L. Bell, “Photoacoustic spatial coherence theory and applications to coherence-based image contrast and resolution,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 67(10), 2069–2084 (2020). [CrossRef]  

165. E. A. Gonzalez and M. A. L. Bell, “GPU implementation of photoacoustic short-lag spatial coherence imaging for improved image-guided interventions,” J. Biomed. Opt. 25(07), 1 (2020). [CrossRef]  

166. S. Park, A. B. Karpiouk, S. R. Aglyamov, and S. Y. Emelianov, “Adaptive beamforming for photoacoustic imaging,” Opt. Lett. 33(12), 1291–1293 (2008). [CrossRef]  

167. M. Mozaffarzadeh, A. Mahloojifar, M. Orooji, K. Kratkiewicz, S. Adabi, and M. Nasiriavanaki, “Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm,” J. Biomed. Opt. 23(02), 1 (2018). [CrossRef]  

168. S. Jeon, E.-Y. Park, W. Choi, R. Managuli, K. J. Lee, and C. Kim, “Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans,” Photoacoustics 15, 100136 (2019). [CrossRef]

169. E. J. Alles, M. Jaeger, and J. C. Bamber, “Photoacoustic clutter reduction using short-lag spatial coherence weighted imaging,” in 2014 IEEE International Ultrasonics Symposium, (IEEE, 2014), pp. 41–44.

170. B. T. Cox, J. G. Laufer, P. C. Beard, and S. R. Arridge, “Quantitative spectroscopic photoacoustic imaging: a review,” J. Biomed. Opt. 17(6), 061202 (2012). [CrossRef]  

171. B. E. Treeby, E. Z. Zhang, and B. T. Cox, “Photoacoustic tomography in absorbing acoustic media using time reversal,” Inverse Probl. 26(11), 115003 (2010). [CrossRef]  

172. A. Rosenthal, V. Ntziachristos, and D. Razansky, “Acoustic inversion in optoacoustic tomography: A review,” Curr. Med. Imaging 9(4), 318–336 (2014). [CrossRef]  

173. M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E 71(1), 016706 (2005). [CrossRef]  

174. M. Xu and L. V. Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imaging 21(7), 814–822 (2002). [CrossRef]  

175. K. P. Köstli, M. Frenz, H. Bebie, and H. P. Weber, “Temporal backward projection of optoacoustic pressure transients using fourier transform methods,” Phys. Med. Biol. 46(7), 1863–1872 (2001). [CrossRef]  

176. Y. Xu, D. Feng, and L. V. Wang, “Exact frequency-domain reconstruction for thermoacoustic tomography. I. Planar geometry,” IEEE Trans. Med. Imaging 21(7), 823–828 (2002). [CrossRef]

177. Y. Xu, M. Xu, and L. V. Wang, “Exact frequency-domain reconstruction for thermoacoustic tomography. II. Cylindrical geometry,” IEEE Trans. Med. Imaging 21(7), 829–833 (2002). [CrossRef]

178. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010). [CrossRef]

179. M. Fink, “Time reversal of ultrasonic fields. I. Basic principles,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 39(5), 555–566 (1992). [CrossRef]  

180. Y. Xu and L. V. Wang, “Time reversal and its application to tomography with diffracting sources,” Phys. Rev. Lett. 92(3), 033902 (2004). [CrossRef]  

181. P. Burgholzer, G. J. Matt, M. Haltmeier, and G. Paltauf, “Exact and approximative imaging methods for photoacoustic tomography using an arbitrary detection surface,” Phys. Rev. E 75(4), 046706 (2007). [CrossRef]  

182. K. Johnstonbaugh, S. Agrawal, D. A. Durairaj, C. Fadden, A. Dangi, S. P. K. Karri, and S.-R. Kothapalli, “A deep learning approach to photoacoustic wavefront localization in deep-tissue medium,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 67(12), 2649–2659 (2020). [CrossRef]  

183. A. Hauptmann, F. Lucka, M. Betcke, N. Huynh, J. Adler, B. Cox, P. Beard, S. Ourselin, and S. Arridge, “Model-based learning for accelerated, limited-view 3-d photoacoustic tomography,” IEEE Trans. Med. Imaging 37(6), 1382–1393 (2018). [CrossRef]  

184. S. Antholzer, M. Haltmeier, R. Nuster, and J. Schwab, “Photoacoustic image reconstruction via deep learning,” in Photons Plus Ultrasound: Imaging and Sensing 2018, vol. 10494 (International Society for Optics and Photonics, 2018), p. 104944U.

185. S. Antholzer, M. Haltmeier, and J. Schwab, “Deep learning for photoacoustic tomography from sparse data,” Inverse Probl. Sci. Eng. 27(7), 987–1005 (2019). [CrossRef]  

186. S. Guan, A. A. Khan, S. Sikdar, and P. V. Chitnis, “Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal,” IEEE J. Biomed. Health Inform. 24(2), 568–576 (2020). [CrossRef]

187. T. Vu, M. Li, H. Humayun, Y. Zhou, and J. Yao, “A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer,” Exp. Biol. Med. 245(7), 597–605 (2020). [CrossRef]  

188. A. Hariri, K. Alipour, Y. Mantri, J. P. Schulze, and J. V. Jokerst, “Deep learning improves contrast in low-fluence photoacoustic imaging,” Biomed. Opt. Express 11(6), 3360–3373 (2020). [CrossRef]  

189. E. M. A. Anas, H. K. Zhang, J. Kang, and E. Boctor, “Enabling fast and high quality led photoacoustic imaging: a recurrent neural networks based approach,” Biomed. Opt. Express 9(8), 3852–3866 (2018). [CrossRef]  

190. C. Yang, H. Lan, F. Gao, and F. Gao, “Deep learning for photoacoustic imaging: a survey,” arXiv preprint arXiv:2008.04221 (2020).

191. M. K. A. Singh, V. Parameshwarappa, E. Hendriksen, W. Steenbergen, and S. Manohar, “Photoacoustic-guided focused ultrasound for accurate visualization of brachytherapy seeds with the photoacoustic needle,” J. Biomed. Opt. 21(12), 120501 (2016). [CrossRef]  

192. B. Stephanian, M. T. Graham, H. Hou, and M. A. L. Bell, “Additive noise models for photoacoustic spatial coherence theory,” Biomed. Opt. Express 9(11), 5566–5582 (2018). [CrossRef]  

193. A. M. Winkler, K. I. Maslov, and L. V. Wang, “Noise-equivalent sensitivity of photoacoustics,” J. Biomed. Opt. 18(9), 097003 (2013). [CrossRef]  

194. D. M. Egolf, R. K. Chee, and R. J. Zemp, “Sparsity-based reconstruction for super-resolved limited-view photoacoustic computed tomography deep in a scattering medium,” Opt. Lett. 43(10), 2221–2224 (2018). [CrossRef]  

195. A. Rodriguez-Molares, O. M. H. Rindal, J. D’hooge, S.-E. Måsøy, A. Austeng, M. A. L. Bell, and H. Torp, “The generalized contrast-to-noise ratio: a formal definition for lesion detectability,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 67(4), 745–759 (2020). [CrossRef]  

196. K. M. Kempski, M. T. Graham, M. R. Gubbi, T. Palmer, and M. A. L. Bell, “Application of the generalized contrast-to-noise ratio to assess photoacoustic image quality,” Biomed. Opt. Express 11(7), 3684–3698 (2020). [CrossRef]  

197. B. Abbaspanah, M. Momeni, M. Ebrahimi, and S. H. Mousavi, “Advances in perinatal stem cells research: a precious cell source for clinical applications,” Regen. Medicine 13(5), 595–610 (2018). [CrossRef]  

198. Y. Lin, P. Andreae, Z. Li, J. Cai, and H. Li, “Real-time co-registered photoacoustic and ultrasonic imaging for early endometrial cancer detection driven by cylindrical diffuser,” J. Innovative Opt. Health Sci. 12(02), 1950002 (2019). [CrossRef]  

199. G. Xu, H. Lei, Y. Zhu, L. Ni, L. Johnson, K. Eaton, J. Rubin, X. Wang, and P. Higgins, “Quantitative assessment of intestinal fibrosis in vivo with spectroscopic and strain by endoscopic photoacoustic imaging,” in Clinical and Translational Biophotonics, (Optical Society of America, 2020), p. TM2B.2.

200. Y. Zhu, L. Ni, L. Johnson, J. Yuan, X. Wang, P. Higgins, and G. Xu, “Characterizing intestinal obstruction using a photoacoustic-ultrasound catheter (conference presentation),” in Photons Plus Ultrasound: Imaging and Sensing 2020, vol. 11240 (International Society for Optics and Photonics, 2020), p. 112401L.

201. M. A. L. Bell, “Photoacoustic vision for surgical guidance,” in 2020 IEEE International Ultrasonics Symposium (IUS), (IEEE, 2020), pp. 1–6.

Figures (15)

Fig. 1. Summary of photoacoustic-guided surgery applications stratified by organ.
Fig. 2. Timeline of key enabling events advancing new possibilities for photoacoustic-guided surgery, with the size of each circle representing the number of clinical co-authors of the papers summarized in Fig. 1. Because the year 2021 had not concluded at the time of publication, this data point contains an incomplete clinical co-author count.
Fig. 3. Optical absorption spectra of a variety of endogenous chromophores (solid) including water, oxygenated hemoglobin [54], deoxygenated hemoglobin [54], lipids [55,56], and collagen [57], and exogenous chromophores (dashed) including stainless steel [7,58], methylene blue [59], and indocyanine green [60,61]. Stainless steel has a surface passivation layer composed primarily of Cr$_2$O$_3$, which is the primary optical absorber in the clinical imaging of metal [7,58].
Fig. 4. Example photoacoustic image guidance during an endonasal transsphenoidal surgery, showing the capability to visualize and avoid the right internal carotid artery (RCA) during pituitary tumor resection [35]. Photoacoustic signals were overlaid on co-registered CT or ultrasound images acquired with the ultrasound probe placed on the eyelid of a human cadaver. The SLSC beamforming approach provides clearer visualization of the RCA when compared to DAS beamforming of the same signals (a minimal sketch of the SLSC computation appears after this figure list). (Adapted with permission from Graham et al., Photoacoustics 19, 100183 (2020). Copyright 2020 Elsevier.)
Fig. 5. Example spinal surgery applications targeting spinal fusion surgeries [87] and stem cell delivery into the spinal cord [67]. (a) Biplanar views of the 3D photoacoustic volume, lateral elevational photoacoustic image slices, and lateral elevational photoacoustic image slices overlaid on the co-registered ultrasound image (from left to right, respectively), demonstrating differences in photoacoustic signal appearance between cortical (orange arrow) and cancellous (blue arrow) bone. (b) In vivo 3D and 2D (top and bottom, respectively) photoacoustic images overlaid on ultrasound images of PBNC-labeled stem cells after injection and needle removal in the spinal cord. (Adapted from: J. Shubert and M. A. L. Bell, Phys. Med. Biol. 63(14), 144001 (2018). Copyright 2018 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License; K. Kubelick and S. Emelianov, Neurophotonics 7, 030501 (2020). Copyright 2020 Author(s), licensed under a Creative Commons Attribution 4.0 License.)
Fig. 6. Example applications targeting breast conserving surgery [92,93]. (a) Intraoperative photoacoustic screening (iPAS) assessment of a human lumpectomy specimen showing agreement between the hypoechoic and hyperechoic ultrasound regions with the 930 nm and 690 nm iPAS images, respectively [92]. (b) Positive and negative (top and bottom, respectively) margins of a human lumpectomy sample, with component 1 and component 2 photoacoustic images representing hemoglobin and fat, respectively. In the binary cancer map, magenta indicates normal and blue indicates cancer [93]. (Adapted from: I. Kosik et al., Journal of Biomedical Optics, 24, 056002 (2019). Copyright 2019 Author(s), licensed under a Creative Commons Attribution 4.0 License; R. Li et al., Biomedical Optics Express, 6, 1273-1281 (2015). Copyright 2015 Optical Society of America.)
Fig. 7. Example cardiac applications targeting cardiac catheterizations [95] and radiofrequency ablation monitoring [97]. (a) In vivo photoacoustic images of a cardiac catheter in contact (top) and not in contact (bottom) with a swine heart. (b) Pre- and post-ablation regions at three different wavelengths and a corresponding dual-wavelength image visualizing the ablated region (arrow). (Adapted from: Graham et al., IEEE Trans. Med. Imaging 39(4), 1015–1029 (2020). Copyright 2020 Author(s), licensed under a Creative Commons Attribution 4.0 License; S. Iskander-Rizk et al., Biomedical Optics Express 9, 1309-1322 (2018). Copyright 2018 Optical Society of America.)
Fig. 8. Example renal application targeting vascular injury monitoring during shockwave lithotripsy [99,100]. (a) Photoacoustic tomography images acquired in vivo in mice after 200 and 1,000 shockwave pulses (top and bottom, respectively), with hemorrhage observed at the shockwave focus (arrow). (b) Example of the proposed internal diffuser (top) used to produce vascular images from an in vivo swine kidney. The ultrasound image is shown for anatomical orientation (bottom left), and the photoacoustic image is overlaid on the ultrasound image (bottom right). (Adapted from: M. Li et al., IEEE Transactions on Medical Imaging 39, 468-477 (2020); M. Li et al., IEEE Transactions on Medical Imaging 40(1), 346-356 (2021). Copyright 2020 IEEE).
Fig. 9. Example uterus applications from [106] targeting minimally invasive fetal interventions. The green dotted line in the photograph indicates the 2D cross section visualized in the 2D ultrasound and photoacoustic images. (Adapted from E. Maneas et al., Journal of Biophotonics 13, e201900167 (2020). Copyright 2019 Author(s), licensed under a Creative Commons Attribution 4.0 License.)
Fig. 10. Example prostate application from [37] targeting prostate brachytherapy. Postoperative CT image of three brachytherapy seeds in an in vivo canine prostate and the corresponding ultrasound and DAS/SLSC photoacoustic images of these seeds using a transrectal ultrasound probe and transurethral light delivery. (Reprinted from M. A. L. Bell et al., Journal of Biomedical Optics 20, 036002 (2015). Copyright 2015 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License.)
Fig. 11. Photoacoustic needle visualization examples that span a range of organs and applications, including microsurgeries on the brain and eyes [114], percutaneous ablation on the liver, lung, kidney, and bone [115], and robot-assisted biopsy [116]. (a) Photoacoustic microscopy (PAM) image overlaid on optical coherence tomography (OCT) image acquired during an in vivo demonstration of near-infrared virtual intraoperative photoacoustic optical coherence tomography (NIR-VISPAOCT)-guided needle insertion. (b) Schematic diagram, corresponding ultrasound image, and photoacoustic image overlaid on ultrasound image (from left to right, respectively) obtained during RFA needle insertion into bovine liver through a layer of chicken tissue. (c) Pairs of ultrasound and overlaid photoacoustic images obtained in the presence of a needle inserted in fat and liver tissue (left and right, respectively). (Adapted from: D. Lee et al., Scientific Reports 6, 35176 (2016). Copyright 2016 Author(s), licensed under a Creative Commons Attribution 4.0 License; K.J. Francis and S. Manohar, Physics in Medicine & Biology 64, 184001 (2019). Copyright 2019 IOPscience; M. A. L. Bell and J. Shubert, Scientific Reports 8, 1-12 (2018). Copyright 2018 Author(s), licensed under a Creative Commons Attribution 4.0 License.)
Fig. 12. Developmental stages for specific surgeries and interventions, namely neurosurgery [35,36,38,68,70,77,78,81–83,114], spinal fusion surgery [86–89], spinal stem cell delivery [67,69,90], breast conserving surgery [71,91–93], cardiac catheterization procedures [94–98], pulmonary interventions [65], abdominal surgery [101,102], shock wave lithotripsy [99,100], hysterectomy [34,62,63,104], fetal interventions [105,106], prostate biopsy [108,109], prostate brachytherapy [33,37,41–43], endovenous laser ablation [120], and foot revascularization surgery [121].
Fig. 13. Venn diagram illustrating that the hardware required for photoacoustic imaging is a combination of optical and acoustic components. Examples of light transmission hardware from smallest to largest include a pulsed laser diode (PLD) (LS Series, Laser Components, Olching, Germany), a light-emitting diode (LED) array (Prexion Corporation, Tokyo, Japan), a benchtop laser (Vibrant B-355II, Opotek, Santa Clara, CA, USA), and a mobile laser (Phocus Mobile, Opotek, Santa Clara, CA, USA). Examples of research-based sound reception hardware in order of readiness for surgical use include Alpinion ECUBE-12R (Alpinion, Seoul, South Korea), Verasonics Vantage (Verasonics, Kirkland, WA, USA), and SonixDAQ (Ultrasonix, British Columbia, Canada). An example complete photoacoustic imaging system is the Vevo LAZR small animal ultrasound and photoacoustic imaging system (Visualsonics, Toronto, Canada).
Fig. 14. Custom light delivery systems for (a) minimally invasive fetal interventions [106], (b) neurosurgery [36], (c) visualization and detection of gynecological malignancies [149], and (d) endo-cavity imaging of adenocarcinomas [150]. (Adapted from: E. Maneas et al., Journal of Biophotonics 13, e201900167 (2020). Copyright 2019 Author(s), licensed under a Creative Commons Attribution 4.0 License; Eddins and Bell, Journal of Biomedical Optics 22, 041011 (2017). Copyright 2017 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License; M. Basij et al., Photoacoustics 15, 100139 (2019). Copyright 2019 Elsevier; G. Yang et al., Photoacoustics 13, 66-75 (2019). Copyright 2019 Elsevier.)
Fig. 15. Integration of photoacoustic imaging with robotic systems, targeting minimally invasive surgery [155] and radical prostatectomy [110]. (a) Photoacoustic images, live stereo endoscope video, and solid models of the tool, laser beam, and ultrasound probe are transferred to the photoacoustic image guidance module (in 3D Slicer) through a combination of the da Vinci Research Kit (dVRK) image-guided therapy (IGT) module and the cisst stereo vision library (SVL) for visualization [84,156–158]. The visualizations are then sent to the da Vinci stereo viewer. (b) Arrangement of the transrectal ultrasound (TRUS) and “pick-up” ultrasound probes with respect to the prostate. (Reprinted from N. Gandhi et al., Journal of Biomedical Optics 22, 121606 (2017). Copyright 2017 Author(s), licensed under a Creative Commons Attribution 3.0 Unported License; H. Moradi et al., IEEE Transactions on Medical Imaging 38, 57-68 (2019). Copyright 2019 IEEE).
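
The DAS and SLSC beamformers compared in Figs. 4 and 10 differ in one key step: DAS sums delayed channel signals, whereas SLSC sums the normalized cross-correlations of delayed channel signals over short element lags [152,162,164]. The following is a minimal per-pixel sketch of the SLSC computation in Python; the function name, array shapes, and toy data are illustrative assumptions rather than code from the cited works.

import numpy as np

def slsc_pixel(delayed_channels, max_lag, eps=1e-12):
    # delayed_channels: (n_elements, kernel_samples) array of focused (delayed)
    # channel data within the axial correlation kernel of a single pixel.
    total = 0.0
    for m in range(1, max_lag + 1):
        a = delayed_channels[:-m]  # signals from elements i
        b = delayed_channels[m:]   # signals from elements i + m
        num = np.sum(a * b, axis=1)
        den = np.sqrt(np.sum(a ** 2, axis=1) * np.sum(b ** 2, axis=1)) + eps
        total += np.mean(num / den)  # mean normalized correlation at lag m
    return total

# Toy usage: 64 elements, 16-sample kernel, one shared waveform plus channel noise.
rng = np.random.default_rng(0)
waveform = np.sin(2 * np.pi * np.arange(16) / 8.0)
channels = waveform + 0.5 * rng.standard_normal((64, 16))
print(slsc_pixel(channels, max_lag=12))  # large for coherent targets, small for noise

Because the lag correlations are amplitude-normalized, coherent targets remain bright even where received amplitudes are weak, which is one reason SLSC can provide clearer target visualization than DAS in Figs. 4 and 10.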

Tables (2)


Table 1. Summary of photoacoustic-guided surgery or interventional applications with demonstrated feasibility beyond the developmental stage of ex vivo tissue.


Table 2. Common image quality and assessment metrics and their application to surgical and interventional guidance system design: Contrast, signal-to-noise ratio (SNR); contrast-to-noise ratio (CNR); generalized contrast-to-noise ratio (gCNR); full width at half-maximum (FWHM); root mean squared error (RMSE); mean absolute error (MAE).
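
Table 2 lists these metrics without their formulas. Below is a minimal Python sketch of three of them, namely contrast, CNR, and gCNR, following the gCNR definition of Rodriguez-Molares et al. [195] as applied to photoacoustic image quality assessment by Kempski et al. [196]; the ROI extraction step is omitted, and the Rayleigh-distributed toy data are illustrative assumptions.

import numpy as np

def contrast_db(target, background):
    # Contrast as the ratio of mean ROI amplitudes, expressed in dB.
    return 20 * np.log10(target.mean() / background.mean())

def cnr(target, background):
    return abs(target.mean() - background.mean()) / np.sqrt(target.var() + background.var())

def gcnr(target, background, bins=256):
    # gCNR = 1 - overlap of the two ROI amplitude distributions [195].
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    h_t, edges = np.histogram(target, bins=bins, range=(lo, hi), density=True)
    h_b, _ = np.histogram(background, bins=bins, range=(lo, hi), density=True)
    return 1.0 - np.sum(np.minimum(h_t, h_b)) * (edges[1] - edges[0])

rng = np.random.default_rng(0)
target_roi = rng.rayleigh(scale=2.0, size=5000)      # e.g., vessel or tool-tip ROI
background_roi = rng.rayleigh(scale=1.0, size=5000)  # background ROI
print(contrast_db(target_roi, background_roi), cnr(target_roi, background_roi), gcnr(target_roi, background_roi))

Unlike contrast and CNR, gCNR is bounded above by 1 and directly measures the separability of the two amplitude distributions, which makes it robust to dynamic range alterations when comparing beamformers [195,196].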

Equations (1)


$$p_0(\lambda) = \Gamma \, \mu_a(\lambda) \, \Phi(\lambda)$$
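
Here, $p_0$ is the initial photoacoustic pressure, $\Gamma$ is the Grüneisen parameter, $\mu_a(\lambda)$ is the wavelength-dependent optical absorption coefficient (see Fig. 3), and $\Phi(\lambda)$ is the local optical fluence. A worked numeric instance appears below; the parameter values and the 1-D exponential fluence decay are illustrative assumptions for soft tissue rather than values from this article.

import math

GRUENEISEN = 0.2        # dimensionless Grueneisen parameter, typical order for soft tissue
MU_A_PER_CM = 0.5       # assumed chromophore absorption coefficient, cm^-1
SURFACE_FLUENCE = 20.0  # assumed surface fluence, mJ/cm^2, on the order of ANSI skin limits [151]
MU_EFF_PER_CM = 1.0     # assumed effective optical attenuation for a 1-D fluence model, cm^-1

def initial_pressure_kpa(depth_cm):
    # With mu_a in cm^-1 and fluence in mJ/cm^2, mu_a * fluence has units
    # mJ/cm^3, which equals kPa, so p0 is returned directly in kPa.
    fluence = SURFACE_FLUENCE * math.exp(-MU_EFF_PER_CM * depth_cm)
    return GRUENEISEN * MU_A_PER_CM * fluence

print(initial_pressure_kpa(0.0))  # 2.0 kPa at the surface
print(initial_pressure_kpa(2.0))  # ~0.27 kPa at 2 cm depth

This simple model also illustrates why delivering light closer to the target increases $\Phi(\lambda)$, and therefore $p_0$, at the target, which is the motivation for the custom light delivery systems shown in Fig. 14.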