
Handheld interventional ultrasound/photoacoustic puncture needle navigation based on deep learning segmentation

Open Access

Abstract

Interventional ultrasound (US) faces challenges in accurately localizing the puncture needle because of intrinsic acoustic interferences, which lead to blurred, indistinct, or even invisible needles in handheld linear-array-transducer-based US navigation, and in particular to incorrect needle tip positioning. Photoacoustic (PA) imaging can provide complementary image contrast without additional data acquisition. Herein, we propose an internal illumination scheme to light up only the needle tip in PA imaging. Deep-learning-based feature segmentation then alleviates the acoustic interferences, enhancing the visibility of the needle shaft and tip. Finally, needle shaft-tip compensation aligns the needle shaft in the US image with the needle tip in the PA image. Experiments on a phantom, ex vivo chicken breast, preclinical radiofrequency ablation, and in vivo biopsy of sentinel lymph nodes were piloted. The target registration error reaches the submillimeter level, demonstrating precise puncture needle tracking with in-plane US/PA navigation.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Interventional ultrasound (US) is increasingly and widely applied in modern clinical medicine. Under the guidance of US imaging, various intraoperative procedures such as surgical ablation, biopsy, regional anesthesia, drug delivery, and therapeutic injection can be navigated [1-4]. Unlike X-ray computed tomography, it visualizes the puncture needle with real-time tracking, repeatable portability, and minimal invasion in bedside clinics [5]. Nevertheless, interventional US often fails due to either specular reflection from the needle's smooth surface or the large acoustic impedance mismatch between the needle and the surrounding tissue. Many strategies have been proposed to assist precise needle detection. A mechanical slot is commonly attached to the probe, which strictly constrains the needle trajectory to a fixed angle [6]. Optical sensors with two cameras outside the imaging plane were calibrated to track the needle location with an overall error of 3.1 ± 1.8 mm [7]. Electronic [8] and electromagnetic [9] tracking sensors were designed separately, with Krücker et al. demonstrating a basic tracking error of 3.5 ± 1.9 mm. An ultrasonic sensor fabricated from piezoelectric polymer and mounted on a catheter can receive the transmitted pulse to deduce its location [10]. An optical-flow-based method [11] can track the needle, and model fitting using random sample consensus can locate surgical tools in 3D US images [12]. Learning-based methods, including support vector machines [13] and U-Net [14], were recently proposed for needle segmentation.

Photoacoustic (PA) imaging is an increasingly widespread imaging modality in clinics [15-17], and it shows promise as an alternative way to visualize the puncture needle [18-20]. The constituent material of most commercial needles, i.e., metal, can absorb energy from a nanosecond pulsed laser and then generate an ultrasonic signal via the thermoelastic effect [21]. Conveniently, the wavelength-dependent optical absorption of metal tends to be higher than that of endogenous absorbers at 1064 nm, making the needle an excellent target to be distinguished by PA imaging. In addition, PA imaging shares the same data acquisition hardware with US imaging, forming a dual-modality US/PA imaging platform [22-26]. Wang et al. developed a novel handheld PA probe for image-guided needle biopsy of sentinel lymph nodes (SLNs) [27]. Bell et al. explored the feasibility of a robotically controlled probe to visualize the tool tip by tracking PA signals generated from a metallic needle tip, with a tracking error < 2 mm [28,29]; there, the fiber was directly inserted, with limited numerical aperture (NA). A PA needle with black resin attached at the needle tip was designed by Yorozu et al., improving needle tip visibility during deep peripheral nerve block [30]. Manohar et al. developed an annular illumination probe that accommodates an interventional needle (14 gauge) with multimode optical fibers arranged around the circumference of the hollow center [31], although multiple fibers had to be elaborately integrated with a large-diameter needle. By scanning a linear-array US probe, Tian et al. showed that 3D interventional PA imaging can precisely locate the needle [32]. Building on fiber fabrication techniques [33], a new internal illumination method needs to be piloted to consistently light up the needle tip.

To further improve the detection accuracy of the needle trajectory, conventional feature extraction methods such as centroid or line detection, image filtering, and projection approaches have been proposed [34]. However, most of the literature relies on the original visibility of the needle in US; these image processing methods become unreliable once the echo intensities from the needle shaft and tip are insignificant. With an additional PA module to improve needle visibility, advanced deep-learning-based feature segmentation methods have been used separately in US imaging [14] and PA imaging. The needle shaft and tip were enhanced from single beam-steered US images by Xia et al., whereas the overall needle features in PA were extracted from semi-synthetic datasets [35,36]. Allman et al. trained a convolutional neural network (CNN) to classify sources and remove reflection artifacts [37], identifying point-like targets in pre-beamformed data. Unlike these approaches based on simulation data and CNN models, here a modified end-to-end U-Net network [38] was applied to a small dataset for US/PA image feature segmentation. U-Net++ consists of U-Nets of varying depth whose decoders are densely connected at the same resolution via redesigned skip connections [39], enabling high-precision segmentation. An attention mechanism can be further integrated to suppress irrelevant regions of the input while highlighting meaningful features [40], which benefits the handheld transducer-needle coordination procedure.

In this work, to navigate needle puncture in deep tissue precisely, the contributions toward interventional guidance in the clinic are the custom PA apparatus, deep learning segmentation, and registration of US/PA images. The optical fiber was uniquely ground and polished to realize internal illumination at depth, overcoming the shallow penetration of traditional extracorporeal illumination. Most image processing approaches rely on the needle being visible in the unprocessed images as long line-like or point-like structures, and they become unreliable once acoustic interferences exist. Inspired by U-Net and the attention mechanism, we modified the U-Net++ architecture for better needle visualization and more accurate needle positioning. Decision-level image fusion was conducted [41] with the proposed needle shaft-tip alignment compensation. The target registration error (TRE) was used to evaluate the needle tip positioning accuracy. Through experiments (tissue-mimicking phantom and ex vivo chicken breast tissue with long muscle bundles) and preclinical advancements (percutaneous radiofrequency ablation and in vivo needle biopsy of SLNs in mouse), the success of the proposed strategy was demonstrated. This is crucial for controlling needle-based interventional procedures under freehand guidance. Beyond the inherent acoustic and optical dual contrast for the puncture needle, preliminary results have shown that PA imaging can serve as an effective complement to current interventional US navigation, with respect to (1) a custom-machined optical fiber that conforms to and irradiates deep needle tips of different geometries; (2) an attention gated (AG) U-Net++ model that enhances needle visibility and improves positioning accuracy; (3) needle shaft-tip compensation that predicts the alignment of the needle shaft in the US image and the needle tip in the PA image.

2. Methods

With a shared data acquisition (DAQ) platform, the US/PA navigation system realizes US imaging in transmit-receive mode while preserving all PA imaging functions in receive-only mode. Once the puncture needle is inserted, the US/PA images are dynamically reconstructed and then input to the AG U-Net++ framework to segment the features of interest. The US outputs (shaft and tip) and PA outputs (tip) are co-registered, aligned, and fused at the decision level with the original US inputs. The workflow of the proposed strategy is graphically illustrated in Fig. 1.


Fig. 1. Overview of interventional US/PA guidance of puncture needle with deep learning.


2.1. Dual-modality US/PA needle navigation

US image-guided interventions have become the standard of care for needle-based procedures. The core bottleneck lies in improving the positioning accuracy of the needle shaft and tip. In interventional US, the highly angle-dependent specular reflection often renders the needle blurred or invisible, especially the needle tip. In addition, the large acoustic impedance difference between the needle (metal) and the surrounding tissue produces acoustic interferences and reverberations, resulting in a discontinuous needle body and an indistinct needle tip. To alleviate the angle-dependent specular reflection, beam steering can tilt the US transmit beam to an angle maximally perpendicular to the needle. By combining several images from multiple beam steering angles, spatial compounding was applied here to improve needle visibility, eliminating the acoustic shadows from strong reflections and reducing the speckle generated in pulse-echo images. However, acoustic artifacts from grating lobes and reverberations still exist after compounding, as shown in the US images in Figs. 2(b-d).
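As a minimal illustration of the compounding step, the sketch below averages envelope-detected frames beamformed at the five steering angles used in this work; `beamform` is a hypothetical placeholder for the angle-specific DAS reconstruction described in Section 3.1, not a function from the authors' code.

```python
import numpy as np

def spatial_compound(steered_frames):
    """Average envelope-detected US frames beamformed at different
    steering angles (all on a common depth x lateral grid).  Averaging
    suppresses angle-dependent specular dropout and reduces speckle."""
    stack = np.stack(steered_frames, axis=0)
    return stack.mean(axis=0)

# Illustrative usage with the five steering angles used here:
# frames = [beamform(rf_data, angle) for angle in (-18, -9, 0, 9, 18)]
# compounded = spatial_compound(frames)
```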


Fig. 2. Handheld interventional US/PA needle navigation. (a) Procedure illustration with the machined fiber. (b-d) US/PA images with representative artifacts. FP, fiber photograph; BP, beam profile; NP, needle photograph; GL, grating lobe; RB, reverberation; IT, insignificant tip; AI, acoustic interface.


PA imaging in US-guided interventional procedures has the potential to provide a solution in acoustically challenging cases where conventional US guidance is unsuitable, and it is especially useful for handling needle deflection and deformation. As shown in the insets of Fig. 2(a), custom machining was performed: ultrasonic grinding to conform to the specific geometry of the needle tip, and polishing to increase the light transmission efficiency. After the fiber is inserted and attached to the hollow cavity of the puncture needle, the areas irradiated by the internal illumination directly highlight the metallic needle tip, serving as obvious PA markers. Because only the needle tip is illuminated, distinct PA signals are produced with subtle background signal, achieving a uniquely background-free image at 1064 nm. Nevertheless, optically induced interferences and acoustic reverberations can randomly generate two or more point-shape targets with a small probability (1.2% of the whole PA dataset). For example, 'Tip 1' and 'Tip 2' co-exist because optical diffusion lights up the 'Tip 2' area (PA image in Fig. 2(b)). Representative acoustic reverberations of the needle tip remain in the PA image of Fig. 2(d). In most cases, the needle tip appears in the PA image as a single point-like target, as shown in Fig. 2(c).

In current clinics, there is no choice but to depend on the single US contrast of the needle, whereas the dual-modality US/PA navigation strategy can enhance needle visibility and track the PA signals generated from the needle tip more precisely. Here we focus on in-plane 2D US/PA needle tracking with a handheld linear array ultrasound transducer (UT) in wide clinical use. 2D US is the current clinical standard, with the advantage of real-time, convenient imaging. In-plane navigation provides more valuable information about the entire needle structure in complicated acoustic environments. The dual-modality guidance enables cross validation based on two different image contrasts with improved navigation accuracy. By combining the complementary contrasts, we achieved the ability to visualize both the needle (shaft and tip) with the anatomical surrounding structures (US) and the advancing needle tip (PA). In particular, the optical contrast in PA imaging was introduced only at the tip of the needle, while the acoustic contrast synchronously displayed the background during the puncture. Configured in this dual-modality way, the approach does not diminish the value of traditional clinical US training, since the same US features remain visible.

2.2. Network implementation

Without any image processing, the original US/PA images cannot be freed of the aforementioned acoustic interferences. Also, traditional centroid or maximum-thresholding detection fails in the cases of multiple line-shape structures in US images and multiple point-shape targets in PA images (indicated in Section 2.1). Hence, needle feature segmentation is needed for both US and PA imaging to enhance needle visibility and identify the needle shaft and tip more precisely. We therefore modified the U-Net architecture with redesigned skip connections and an added attention mechanism, as shown in Fig. 3. The modified dense skip connections further exploit multiscale needle features for segmentation, and the attention gates apply greater weight to the predefined or estimated needle route, accounting for the transducer-needle geometry maintained by the US physician.


Fig. 3. Modified attention gated U-Net++ model.


The proposed AG U-Net++ model was built on top of a standard U-Net architecture, as labelled by the dashed square in Fig. 3. U-Net++ starts with an encoder sub-network followed by a symmetrically arranged decoder sub-network. The proposed model includes: (1) redesigned dense skip pathways (shown in green and blue in the green-shaded triangle) that connect the two sub-networks to extract more efficient hierarchical features; (2) deep supervision (shown in red), which enables model pruning and improves, or in the worst case matches, the performance of using only one loss layer [39]. A combination of binary cross-entropy and the Dice coefficient was defined as the loss function at each of the four semantic levels, Loss = λ1 × Binary Cross-Entropy + λ2 × Dice Coefficient, where λ1 and λ2 are the corresponding weighting factors; (3) attention gates added before the nested convolutional blocks to weight the features extracted at different levels with a focused selection. Thus, irrelevant background regions, especially artifacts induced by acoustic interferences, can be implicitly suppressed in both US and PA images, whereas the needle features are highlighted with the pre-planned angle and position of needle insertion into the imaging plane. In addition, the added AG module incurs minimal computational overhead, fulfilling the real-time demand of clinical interventional guidance.
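A minimal PyTorch sketch of this combined loss is given below, assuming sigmoid-activated predictions. Since the Dice coefficient itself is to be maximized, the sketch minimizes (1 − Dice); the λ values shown are illustrative defaults, not the weights used in the paper.

```python
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """Weighted combination of binary cross-entropy and Dice loss,
    applied at each deep-supervision output of the AG U-Net++.
    lambda1/lambda2 correspond to the weighting factors in the text
    (values here are assumed for illustration)."""
    def __init__(self, lambda1=0.5, lambda2=1.0, smooth=1e-5):
        super().__init__()
        self.bce = nn.BCELoss()  # predictions assumed already sigmoid-activated
        self.lambda1, self.lambda2, self.smooth = lambda1, lambda2, smooth

    def forward(self, pred, target):
        bce = self.bce(pred, target)
        inter = (pred * target).sum()
        dice = (2 * inter + self.smooth) / (pred.sum() + target.sum() + self.smooth)
        # Dice is a similarity in [0, 1]; minimize its complement.
        return self.lambda1 * bce + self.lambda2 * (1 - dice)
```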

Specifically, in the encoder path each scale consisted of two 3 × 3 convolutional layers followed by ReLU activation and a 2 × 2 max pooling layer. Symmetrically, each scale in the decoder path contained two activated convolutional layers with an up-sampling factor of 2. The inner dense skip pathways shared the same convolution unit with U-Net. To enable deep supervision, a 1 × 1 convolutional layer followed by a sigmoid activation function was appended to each of the target nodes. The attention gate module added to U-Net++ was a dual attention mechanism that uses spatial and channel attention to extract contextual information within the same channel and dependencies between different channels; the implementation details are given in Fig. S1 of the supplementary data. The segmentation performances of U-Net and AG U-Net++ were compared. The networks were implemented in Python using PyTorch v1.2.0. Training was performed for 200 iterations with a batch size of 4, minimizing the binary cross-entropy dice loss with an SGD optimizer (initial learning rate: 0.001), on a 12th Gen Intel Core CPU (i7-12700F, 2.10 GHz) with an NVIDIA GeForce RTX 3060 Ti GPU.
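The exact dual spatial/channel attention module is detailed in Fig. S1; as an illustration of the gating idea only, the sketch below implements the classic additive attention gate of Ref. [40], which conveys the same re-weighting of skip-connection features, rather than the authors' exact module.

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate (after Ref. [40], illustrative):
    a gating signal g from the coarser scale weights the
    skip-connection features x, suppressing background regions
    such as reverberation artifacts before concatenation."""
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(in_ch, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, g):
        # x and g are assumed resampled to the same spatial size
        # before the addition, as in the usual gate implementation.
        attn = self.sigmoid(self.psi(self.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # element-wise re-weighting of skip features
```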

2.3. US/PA image registration and fusion

Because the US and PA images share the same imaging plane, the two modalities can be directly pixel-added in the same x-z spatial coordinates. The US image served as the new background with enhanced needle shaft and tip, and the related PA image was processed with an appropriate transparency operation on its background region. The brightened needle tip in the PA image was then spatially overlaid on the corresponding US image to form the fused image. For US imaging, Hough line detection was performed on the US output mask (the features extracted via the proposed AG U-Net++ model) to obtain the single line-shape structure representing the rigid needle, where the shaft end position served as the tip with coordinate (xus_tip, zus_tip). For PA imaging, the needle tip was extracted from the PA output mask, and its centroid coordinate (xpa_tip, zpa_tip) was calculated from the detected maximum region of interest. With the calculated coordinates, the TRE [42] was applied to quantitatively evaluate the needle tip positioning accuracy. The corresponding 2D TREs were calculated as the Euclidean distance between the needle tip (xus_tip, zus_tip) in the US image and (xpa_tip, zpa_tip) in the PA image, which cross-validates the tracking error between the dual navigation modalities.
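A minimal sketch of this tip extraction and TRE computation is shown below, using OpenCV's probabilistic Hough transform. The Hough parameters, the deepest-endpoint tip convention, and the isotropic pixel spacing (`pixel_size_mm`) are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2

def needle_tip_from_us_mask(mask):
    """Fit the dominant line in the binary US output mask with the
    probabilistic Hough transform and take the deepest endpoint as
    the US needle tip (x_us_tip, z_us_tip).  Assumes at least one
    line is detected; parameters are illustrative."""
    lines = cv2.HoughLinesP(mask.astype(np.uint8) * 255, rho=1,
                            theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    # Pick the segment reaching the largest depth (z grows downward).
    x1, z1, x2, z2 = max(lines[:, 0, :], key=lambda l: max(l[1], l[3]))
    return (x2, z2) if z2 > z1 else (x1, z1)

def tre(us_tip, pa_tip, pixel_size_mm):
    """2D target registration error: Euclidean distance between the
    US- and PA-derived tip coordinates, converted to millimeters
    (assumes square pixels)."""
    return np.hypot(us_tip[0] - pa_tip[0], us_tip[1] - pa_tip[1]) * pixel_size_mm
```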

If the TRE is abnormal (for example, greater than 1 mm), the US physician is warned. Based on the above needle tip coordinates and the Hough-detected centerline, an efficient needle shaft-tip alignment method for the dual-contrast US/PA guidance was proposed. The centerline's parameters, such as its slope and the coordinates of points on the needle shaft, were obtained to compensate (extend or shorten) the needle tip in the US image toward the nearest position of the needle tip in the corresponding PA image. For the case where the US display of the needle tip is shorter than the PA display (case 1), i.e., an insignificant needle tip in US, the compensated line segment is extended. Specifically, a straight line in the positive z-depth direction passing through the extracted center point (xpa_tip, zpa_tip) was created. The Hough-detected centerline was then extended to intersect this line at a certain point in space. Afterwards, the straight line detected from the US output mask was slightly extended to the intersection position, which was specially marked (supplementary data, Fig. S2(a)). Conversely, for cases where the US display of the needle tip is longer than the PA display (case 2), e.g., percutaneous radiofrequency (RF) ablation, the line segment is shortened. Based on (xpa_tip, zpa_tip), the practical distance from the inserted fiber position to the needle tip (L = 10 mm in Fig. 6(a)), and the centerline's slope, the virtual needle tip (x′pa_tip, z′pa_tip) mapped from the tip displayed in the PA image can be calculated using the linear equation (supplementary data, Fig. S2(b)). The TREs were subsequently calculated between the needle tip (xus_tip, zus_tip) in the US image and (x′pa_tip, z′pa_tip) deduced from the PA image.
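The following sketch illustrates the geometry of both compensation cases under stated assumptions (slope defined as dz/dx, insertion advancing toward +x); it is a geometric illustration, not the authors' exact implementation.

```python
def compensate_tip(us_tip, pa_tip, slope, offset_mm=0.0, px_mm=1.0):
    """Align the Hough-detected US shaft with the PA tip.
    Case 1 (offset_mm == 0): extend/shorten the shaft along its
    centerline to the vertical line through the PA tip (Fig. S2(a)).
    Case 2 (offset_mm > 0, e.g. L = 10 mm for the RF needle): step
    the PA-illuminated point further along the centerline by the
    known fiber-to-tip distance to obtain the virtual tip (Fig. S2(b)).
    Coordinates are in pixels; px_mm converts mm to pixels."""
    x_pa, z_pa = pa_tip
    if offset_mm == 0.0:
        # Intersection of the shaft centerline (through the US tip,
        # with slope dz/dx) and the vertical line x = x_pa.
        z_new = us_tip[1] + slope * (x_pa - us_tip[0])
        return (x_pa, z_new)
    # Unit step along the centerline: (dx, dz) with dz = slope * dx.
    # Sign assumes the needle advances toward +x; flip if it does not.
    dx = (offset_mm / px_mm) / (1 + slope ** 2) ** 0.5
    return (x_pa + dx, z_pa + slope * dx)
```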

3. Experiments and results

3.1. US/PA experimental setup and dataset

The dual-modality US/PA experimental setup includes shared DAQ hardware (Vantage 128, Verasonics, USA), a shared handheld linear array UT (L11-4, Verasonics, USA), and a nanosecond-width Nd:YAG Q-switched laser source (I-20, Surelite, USA) for internal illumination. The imaging sequence and a detailed description of the imaging system can be found in our previous publication [43]. The needles used were 18-gauge commercial metal needles, except for the radiofrequency ablation needle. For internal illumination, a multimode optical fiber (FT800EMT, Thorlabs, USA) with 0.39 NA and an 800 µm core was ground, inserted, and spliced into the cavity of the needle, where the geometry of the fiber end perfectly matches the irregular shape of the needle tip. As shown in the insets of Fig. 2(a), the fiber outlet and beam profile can be seen under 4X microscope magnification. The fiber-integrated needle was monitored with US/PA imaging during the experiments, and it was also placed parallel to the transducer surface before and after each experiment to calibrate the alignment. The light transmission efficiency was measured as 76.2% with a laser power meter. The measured laser fluence was 1.27 mJ/cm2 at the output wavelength of 1064 nm. The US images were reconstructed with delay-and-sum (DAS) beamforming of two-way pulse-echo data, and spatial compounding was further applied to combine the multi-angle (-18, -9, 0, 9, 18 degrees) plane-wave beamformed images. For the PA images, the same DAS beamforming method was applied to reconstruct the one-way PA data. The dynamic range is 50 dB for the US images and 18 dB for the PA images to maintain high image contrast of the needle tip. The imaging speed was set to the laser pulse repetition rate.
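To make the reconstruction step concrete, below is a minimal (unoptimized) one-way DAS sketch for the PA data, assuming the channel data are recorded from the laser firing at t = 0; the sound speed and sampling rate are assumed values, and the pulse-echo US case would additionally include the transmit path in the delay.

```python
import numpy as np

def das_pa(rf, x_elem, x_grid, z_grid, c=1540.0, fs=50e6):
    """Minimal one-way delay-and-sum for PA channel data.
    rf: (n_samples, n_elements) array recorded from t = 0 (laser firing);
    x_elem: element lateral positions (m); x_grid/z_grid: image grid (m);
    c, fs: assumed sound speed (m/s) and sampling rate (Hz)."""
    img = np.zeros((z_grid.size, x_grid.size))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            d = np.sqrt((x - x_elem) ** 2 + z ** 2)   # pixel-to-element distance
            idx = np.round(d / c * fs).astype(int)    # one-way delay in samples
            valid = idx < rf.shape[0]                 # drop out-of-record delays
            img[iz, ix] = rf[idx[valid], np.nonzero(valid)[0]].sum()
    return img
```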

The dataset comes from the phantom and ex vivo chicken breast experiments and contains 3000 US images and the related 3000 PA images. It was split into training, test, and validation sets at a ratio of 8:1:1. The phantom data (20 groups) and chicken breast tissue data (40 groups) were randomly shuffled, and the US/PA images (50 images per group) were acquired with varying insertion depths and angles. Data augmentation such as random flipping and transformation was applied to the input images, increasing their diversity and richness. The ground truth labels of the US training data were generated by an experienced US physician (co-author), while the ground truth of the corresponding PA training data was labelled independently in the lab. Our networks were trained with input pairs at a resolution of 640 × 448 pixels, resized from the original size of 900 × 128 pixels via bicubic interpolation. We refer to the presence of insignificant needle tips in the original US images and the presence of two or more point-shape targets in the original PA images as the 'abnormal' dataset. The statistical proportions of these two representative types in the whole dataset are 3% (US) and 1.2% (PA), respectively.
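A minimal sketch of the input preparation is shown below; the (height, width) interpretation of the 640 × 448 target, the nearest-neighbor resizing of the binary labels, and the flip probability are assumptions made for illustration.

```python
import random
import numpy as np
import cv2

def prepare_pair(image, mask):
    """Resize a (900 x 128) reconstruction and its binary label to the
    640 x 448 network input via bicubic interpolation (nearest-neighbor
    for the label to keep it binary), with random lateral flipping as
    one simple augmentation; the full pipeline also applies random
    transformations."""
    image = cv2.resize(image, (448, 640), interpolation=cv2.INTER_CUBIC)
    mask = cv2.resize(mask, (448, 640), interpolation=cv2.INTER_NEAREST)
    if random.random() < 0.5:                      # random lateral flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    return np.ascontiguousarray(image), np.ascontiguousarray(mask)
```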

3.2. Phantom experiment results

A tissue-mimicking phantom experiment was conducted to validate the feasibility of the proposed strategy. The cuboid gelatin phantom was made with intralipid (3% weight concentration), giving a reduced optical scattering coefficient of 24.96 cm-1 at 1064 nm, and corn powder (3% weight concentration) acting as acoustic scatterers. In the conventional reconstruction, multiple line-shape reverberations and grating-lobe-induced acoustic artifacts exist in the US images, as shown in Figs. 4(a) and 4(c). As a common practice for US physicians, parallel needle insertion can effectively utilize the acoustic specular reflections, but the imaging depth is extremely limited. Also, the feature segmentation results from the plain U-Net network deteriorated sharply as the needle was placed parallel to the transducer surface. Nevertheless, after feature segmentation by the modified AG U-Net++ model, the multiple line-shape artifacts caused by acoustic reverberation near the needle and acoustic impedance differences at interfaces can be effectively eliminated in the US image. Hence only the single line-shape feature is extracted and identified with Hough line detection.


Fig. 4. Dual-modality US/PA navigation using phantom. (a) US images with indicated processing. (b) PA images with indicated processing. (c) US images containing insignificant tip with indicated processing. (d) PA images with indicated processing. (e) Image fusion and shaft-tip registration.


For the PA images, even when optical diffusion induces two point-shape targets in the original PA image (Fig. 4(b)), the proposed AG U-Net++ model can reduce them to retain the best-matching one. With the centroid thresholding method, by contrast, two tips with similar intensities are hard to distinguish. This problem remained in the original fused image in Fig. 4(e1), degrading the guidance accuracy. However, the single line-shape structure (needle shaft and tip) in the US image and the single point-shape target (needle tip) in the PA image can be precisely segmented with the proposed model. As shown in Figs. 4(e2) and 4(e4), the PA-highlighted needle tip and the US-extracted needle shaft-tip centerline were accurately registered in space. The visibility of the needle tip with dual complementary contrasts was enhanced. In addition, the proposed method does not interfere with the physician's prior experience, as the traditional US imaging remains unchanged. The dynamic procedure corresponding to Fig. 4(e2) can be seen in the Supplementary Material (Visualization 1).

To evaluate the performance of the neural networks, quantitative metrics were calculated and are displayed in Table 1. The TRE was relatively difficult to compute in the two-tip case with the original PA data, and it also cannot be calculated from the plain U-Net due to its poor feature segmentation performance. Specifically, the listed TRE for Fig. 4(e2) is 0.5507 mm from AG U-Net++, and the TREs from additional randomly selected samples were statistically calculated as 0.61 ± 0.06 mm (n = 50). Using the empirical 'Labels' as the gold standard, the intersection over union (IOU) and Dice coefficient evaluate the segmentation performance of the neural networks. The higher IOU and Dice coefficient of the proposed AG U-Net++ framework indicate more precise overlap of the needle shaft and tip with the labels. Compared with the U-Net model, the IOU increased by 46.40% with the proposed AG U-Net++ model, and the Dice coefficient shows a similar trend. In addition, the processing time per frame of the AG U-Net++ model is 0.0992 second.
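Both overlap metrics can be computed directly from the binary masks, as in the short sketch below (a straightforward implementation of the standard definitions, not code from the paper).

```python
import numpy as np

def iou_and_dice(pred, label):
    """Overlap metrics against the physician-labelled ground truth;
    pred and label are binary masks of the segmented needle."""
    pred, label = pred.astype(bool), label.astype(bool)
    inter = np.logical_and(pred, label).sum()
    union = np.logical_or(pred, label).sum()
    total = pred.sum() + label.sum()
    iou = inter / union if union else 1.0     # empty masks count as perfect
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```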


Table 1. Quantitative evaluation of the network using phantom

3.3. Ex vivo chicken breast experiment results

Ex vivo experiments using chicken breast tissue were further conducted, and the corresponding results are shown in Fig. 5. Figure 5(a) demonstrates the majority of puncture trajectories (accounting for 97%), whereas other representative results with acoustic interferences are shown in Fig. 5(c). Note that relatively long muscle bundles of the chicken breast were placed in the 2D imaging plane; these share similar line-shape features with the rigid needle in US. As shown in Figs. 5(a) and 5(c), in terms of removing these line-shape artifacts in US images, the U-Net model cannot precisely distinguish the needle structure from the muscle bundles and the regions around the needle. A certain number of acoustic speckles from the long muscle bundles, layer interfaces, and array surface were misclassified as the needle. The fine-grained recognition improvements of the AG U-Net++ model are quite noticeable, resisting the substantial interference of similar line-like features in the US images. The needle tips in both images were not obvious, especially in the case of Fig. 5(c) (belonging to the 'abnormal' US dataset, accounting for 3% of the US data). The intensities from the needle tip were insignificant, resulting in incorrect tip positioning.


Fig. 5. Dual-modality US/PA navigation using chicken breast. (a) US images with indicated processing. (b) PA images with indicated processing. (c) US images containing insignificant tip with indicated processing. (d) PA images with indicated processing. (e) Image fusion and shaft-tip registration.


For the PA images, the trained AG U-Net++ model kept performing well. Figure 5(b) is randomly selected from the normal cases (accounting for 98.8%), and Fig. 5(d) is a representative result from the 'abnormal' dataset with multiple point-shape targets induced by acoustic reverberation. When thresholding the maximum intensities in the conventionally reconstructed Fig. 5(d), the point-shape needle tip may be misidentified as signals from the surrounding reverberation-induced artifacts. Therefore, centroid thresholding is not reliable for feature segmentation in this case. After feature segmentation by the AG U-Net++ model, the point-shape needle tip can be accurately identified and clearly visualized. Fortunately, owing to the custom-designed internal illumination scheme, the point-shape target reconstructed in the original PA images is single in most cases.

From the fusion images in Figs. 5(e1) and 5(e3), the needle from US and PA guidance corresponded well spatially across the dual contrast modalities. For Fig. 5(e4), compared with the original overlaid images, the AG U-Net++ enhanced US/PA fusion images provide precise indications of the needle location on conventional US images where the needle tip was barely visible. The corresponding dynamic procedure can be seen in the Supplementary Material (Visualization 2). The proposed dual-modality needle shaft-tip alignment method was applied when an insignificant needle tip occurred in US images (case 1 in Section 2.3), compensating the needle trajectory after deep learning segmentation. The compensation details can be seen in Fig. S2(a) of the supplementary data. Based on the above dual-modality spatial registration after training the neural network, the prediction mechanism judges the needle as effectively tracked only when the point-like needle tip in PA lies on the extended straight line of the needle shaft in US. Combined with the TRE value, the US physician can be warned to adjust the puncture procedure upon encountering unclear visibility of the needle tip.

The quantitative evaluation metrics are reported in Table 2. The TRE calculated from Fig. 5(e2) shows that the result from the AG U-Net++ model is at a similar level to the phantom study; in contrast, the TRE could not be calculated at all for U-Net because of its poor segmentation. The statistical TREs were 0.62 ± 0.06 mm (n = 50) with the proposed AG U-Net++ model. The IOU improvement of AG U-Net++ over U-Net is 544.02%, while the Dice coefficient increased by 507.12%, indicating more precise needle shaft and tip segmentation. Additionally, the overall execution time of the AG U-Net++ model is 0.0970 second per frame, which could be substantially decreased with stronger computing power to meet the needs of real-time clinical intervention.


Table 2. Quantitative evaluation of the network using chicken breast tissue

3.4. Preclinical percutaneous RF ablation results

To validate the preclinical feasibility of the proposed strategy, a completely independent dataset, not partitioned into the training dataset, was produced from the percutaneous RF ablation experiment. For accurate RF puncture needle guidance, needle enhancement in US imaging is necessary [44,45]. The RF needle is a single electrode that mates with the RF ablation device (LDRF-120s, Lide Electronics Co., China). The schematic of the needle and the photograph of the ablated porcine liver sliced through the center of the lesion are shown in Figs. 6(a) and 6(b), respectively. As shown in Figs. 6(c) and 6(d), the visibility of the rigid needle can be enhanced with the AG U-Net++ model in both the US and PA images. The conformal internal fiber can only reach the bottom end of the hollow rigid needle, which lies a certain distance L (10 mm) from the needle tip. Thus the dual-modality needle shaft-tip alignment was required (case 2 in Section 2.3); the compensation details can be seen in Fig. S2(b) of the supplementary data. Note that the slope of the Hough-detected centerline is -0.411 in the x-z coordinates. Figures 6(e1) and 6(e2) are the fused images with and without the needle shaft-tip compensation, respectively. The corresponding dynamic procedure with the needle shaft-tip compensation can be seen in the Supplementary Material (Visualization 3). The TREs were calculated between the US-extracted tip (xus_tip, zus_tip) and the virtual PA-compensated tip (x′pa_tip, z′pa_tip), reaching 0.75 ± 0.44 mm (n = 50). Therefore, the percutaneous needle in RF ablation can be accurately tracked with additional PA guidance.


Fig. 6. Dual-modality US/PA navigation in percutaneous RF ablation. (a) Schematic of RF ablation needle. (b) Photograph of the ablated porcine sliced liver. (c) US images with indicated processing. (d) PA images with indicated processing. (e) Image fusion and shaft-tip registration.


3.5. In vivo needle biopsy of SLNs results

To measure the tracking accuracy toward a target in tissue, US/PA image-guided insertion of the puncture needle into mouse axillary lymph nodes (labelled known position in Fig. 7(a)) was conducted. A semiconducting polymer [46] with NIR-II PA characteristics was subcutaneously injected into the left forepaw of the mouse. It then accumulated in the proximal SLNs, which were covered with an overlaid ∼2.5 cm thick chicken breast layer to increase the imaging depth. Consistent with the ex vivo experiment results, the puncture needle can be accurately tracked before reaching the SLNs. In addition, the dyed SLNs gradually brightened as the needle tip approached, indicating high tracking ability. Representative results at the moment when the needle tip first contacts the SLNs are shown in Fig. 7. As depicted in Fig. 7(a), the trained AG U-Net++ model can alleviate the intrinsic acoustic interference in US imaging and enhance the needle shaft-tip visibility. However, the needle tip was not obvious because of the acoustic interferences. Figure 7(b) depicts the PA enhancement of the needle tip after deep learning segmentation. The optical absorption of the needle produces stronger PA intensities than the surrounding absorbers at 1064 nm, which can be distinguished and extracted by the proposed model. Finally, the needle shaft, needle tip, and targeted SLNs at depth can all be precisely identified. The dynamic procedure, in which the dyed SLNs were highlighted as the needle approached, can be seen in the Supplementary Material (Visualization 4). The statistical TREs were 0.66 ± 0.29 mm (n = 50). Overall, these results demonstrate the clinical potential of handheld US/PA guidance for needle biopsy of SLNs for cancer staging.


Fig. 7. Dual-modality US/PA navigation in in vivo needle biopsy of SLNs. (a) US images with indicated processing. (b) PA images with indicated processing. (c) Image fusion and shaft-tip registration.


4. Discussion

Accurate imaging is necessary to guide the needle puncture to the targeted location. By inserting the custom optical fiber into the puncture needle, a practical way to achieve PA imaging-guided intervention was realized. Compared with conventional external illumination, which visualizes the overall needle structure, the internal illumination (only at the needle tip) is capable of following the movement of the needle tip at depth. Ideally, the positioning accuracy of the needle tip in this scheme ultimately depends on the axial and lateral imaging resolution. Besides the above puncture needle with an asymmetric tip, the single multimode optical fiber can be machined to match and illuminate other interventional tools with different geometric shapes, as demonstrated in the RF ablation experiment. Similar to the needle biopsy of SLNs, the forward illumination can also be used to visualize other interventional procedures and specific contrast agents in tissue, such as regional anesthesia, drug delivery, and therapeutic injection. In addition, the internal illumination position at different depths can influence the amplitude and frequency of the externally detected PA signal. Hence it is essential to correct for the frequency-dependent acoustic attenuation.

Toward clinical translation, the dataset collected in the experiments of Sections 3.4 and 3.5 was kept independent, without being partitioned into the earlier training set (Sections 3.2 and 3.3). It is meaningful to test the model generalization ability on an independently collected dataset. Currently, interventional US operations are still recognized as challenging tasks, and considerable expertise is required to perform the procedure safely and conveniently. If the puncture site or needle orientation is incorrect, the hand-eye coordination may miss the target and require reinsertion. The preclinical experiments of percutaneous RF ablation and needle biopsy of SLNs demonstrated that US/PA-guided intervention has the potential to reduce execution time and facilitate needle positioning at specific targets, reducing physicians' professional training workload and improving patient safety. Meanwhile, a fast low-cost LED light source could be used because of the low energy required for internal illumination. For out-of-plane needle tracking, besides the point-like scatterers in the background, US guidance of the needle tip may also fail because the cross sections of the needle tip and shaft have the same cylindrical shape. Nevertheless, the detected feature in PA imaging is still a point-like target (the same as in-plane). Therefore, the PA navigation guarantees precise needle tip positioning in the case of out-of-plane needle tracking.

5. Conclusion

An ongoing challenge in interventional US is needle visibility, especially of the needle tip. While maintaining clinical US imaging, PA imaging has the potential to visualize the puncture needle. However, intrinsic acoustic interferences still exist, decreasing the needle positioning accuracy. In this study, we first developed an internal illumination scheme for PA imaging with contrast complementary to US guidance. The custom optical fiber demonstrated that PA imaging can highlight the needle tip and track its trajectory at high temporal and spatial resolution. We then proposed the corresponding AG U-Net++ model to enhance needle visibility and improve needle tracking accuracy. The needle shaft-tip compensation method can further align the needle shaft and tip displayed in the dual-modality US/PA imaging when the TRE is flagged as abnormal. Across the phantom, ex vivo punctures in chicken breast and porcine liver, and the in vivo needle biopsy experiments, the trained AG U-Net++ model performed consistently well on both US and PA images, eliminating strong acoustic interferences and background artifacts. US imaging provides the panoramic structural information of the entire needle, while the corresponding PA imaging finely outlines the needle tip. The TRE reaches the submillimeter level (0.61 ± 0.06 mm) with dual-contrast cross validation. US combined with PA guidance shows the potential to assist precise interventional procedures.

Funding

National Natural Science Foundation of China (62071310, 62101337, 81871429, 81971637).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. H. H. Holm and B. Skjoldbye, “Interventional ultrasound,” Ultrasound Med. Biol. 22(7), 773 (1996). [CrossRef]  

2. R. M. Comeau, A. F. Sadikot, A. Fenster, et al., “Intraoperative ultrasound for guidance and tissue shift correction in image guided neurosurgery,” Med. Phys. 27(4), 787–800 (2000). [CrossRef]  

3. S. Nicolaou, A. Talsky, K. Khashoggi, et al., “Ultrasound-guided interventional radiology in critical care,” Crit. Care Med. 35(Suppl), S186–S197 (2007). [CrossRef]  

4. K. J. Chin, A. Perlas, V. W. S. Chan, et al., “Needle visualization in ultrasound-guided regional anesthesia: Challenges and solutions,” Reg. Anesth. Pain Med. 33(6), 532–544 (2008). [CrossRef]  

5. J. Park, B. Park, J. K. Lim, et al., “Ultrasound-guided percutaneous needle biopsy for small pleural lesions: diagnostic yield and impact of CT and ultrasound characteristics,” AJR, Am. J. Roentgenol. 217(3), 699–706 (2021). [CrossRef]  

6. M. J. Bradley, “An in-vitro study to understand successful free-hand ultrasound guided intervention,” Clin. Radiol. 56(6), 495–498 (2001). [CrossRef]  

7. C. Chan, F. Lam, and R. Rohling, “A needle tracking device for ultrasound guided percutaneous procedures,” Ultrasound Med. Biol. 31(11), 1469–1483 (2005). [CrossRef]  

8. M. H. Howard, E. K. Paulson, M. A. Kliewer, et al., “An electronic device for needle placement during sonographically guided percutaneous intervention,” Radiology 218(3), 905–911 (2001). [CrossRef]

9. J. Krücker, S. Xu, N. Glossop, et al., “Electromagnetic tracking for thermal ablation and biopsy guidance: clinical evaluation of spatial accuracy,” J. Vasc. Interv. Radiol. 18(9), 1141–1150 (2007). [CrossRef]

10. D. Vilkomerson and D. Lyons, “A system for ultrasonic beacon-guidance of catheters and other minimally-invasive medical devices,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 44(2), 496–504 (1997). [CrossRef]  

11. E. Ayvali and J. P. Desai, “Optical flow-based tracking of needles and needle-tip localization using circular hough transform in ultrasound images,” Ann. Biomed. Eng. 43(8), 1828–1840 (2015). [CrossRef]  

12. M. Uhercik, J. Kybic, H. Liebgott, et al., “Model fitting using RANSAC for surgical tool localization in 3D ultrasound images,” IEEE Trans. Biomed. Eng. 57(8), 1907–1916 (2010). [CrossRef]  

13. P. Beigi, R. Rohling, T. Salcudean, et al., “Detection of an invisible needle in ultrasound using a probabilistic SVM and time-domain features,” Ultrasonics 78, 18–22 (2017). [CrossRef]

14. J. Gao, P. Liu, G. Liu, et al., “Robust needle localization and enhancement algorithm for ultrasound by deep learning and beam steering methods,” J. Comput. Sci. Technol. 36(2), 334–346 (2021). [CrossRef]  

15. S. Manohar and S. S. Gambhir, “Clinical photoacoustic imaging,” Photoacoustics 19, 100196 (2020). [CrossRef]  

16. L. V. Wang and J. Yao, “A practical guide to photoacoustic tomography in the life sciences,” Nat. Methods 13(8), 627–638 (2016). [CrossRef]  

17. C. Lutzweiler and D. Razansky, “Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification,” Sensors 13(6), 7345–7384 (2013). [CrossRef]  

18. T. Zhao, A. E. Desjardins, S. Ourselin, et al., “Minimally invasive photoacoustic imaging: current status and future perspectives,” Photoacoustics 16, 100146 (2019). [CrossRef]

19. M. S. Karthikesh and X. Yang, “Photoacoustic image-guided interventions,” Exp Biol Med 245(4), 330–341 (2020). [CrossRef]  

20. J. Su, A. Karpiouk, B. Wang, et al., “Photoacoustic imaging of clinical metal needles in tissue,” J. Biomed. Opt. 15(2), 021309 (2010). [CrossRef]  

21. D. Piras, C. Grijsen, P. Schütte, et al., “Photoacoustic needle: minimally invasive guidance to biopsy,” J. Biomed. Opt 18(7), 070502 (2013). [CrossRef]  

22. R. A. Kruger, P. Liu, and C. R. Appledorn, “Photoacoustic ultrasound (PAUS)-reconstruction tomography,” Med. Phys. 22(10), 1605–1609 (1995). [CrossRef]  

23. J. M. Riksen, A. V. Nikolaev, and G. van Soest, “Photoacoustic imaging on its way toward clinical utility: a tutorial review focusing on practical application in medicine,” J. Biomed. Opt. 28(12), 121205 (2023). [CrossRef]

24. R. Bouchard, O. Sahin, and S. Emelianov, “Ultrasound-guided photoacoustic imaging: current state and future development,” IEEE Trans. Ultrason., Ferroelect., Freq. Contr. 61(3), 450–466 (2014). [CrossRef]

25. X. Lin, N. Feng, Y. Qu, et al., “Compressed sensing in synthetic aperture photoacoustic tomography based on a linear-array ultrasound transducer,” Chin. Opt. Lett. 15(10), 101102 (2017). [CrossRef]  

26. C. W. Wei, T. M. Nguyen, J. Xia, et al., “Clinically translatable ultrasound/photoacoustic imaging for real-time needle biopsy guidance,” IEEE International Ultrasonics Symposium, 839–842 (2014).

27. C. Kim, T. N. Erpelding, K. Maslov, et al., “Handheld array-based photoacoustic probe for guiding needle biopsy of sentinel lymph nodes,” J. Biomed. Opt. 15(4), 1 (2010). [CrossRef]

28. M. A. L. Bell and J. Shubert, “Photoacoustic-based visual servoing of a needle tip,” Sci. Rep. 8(1), 33931 (2018). [CrossRef]  

29. M. Graham, F. Assis, D. Allman, et al., “In vivo demonstration of photoacoustic image guidance and robotic visual servoing for cardiac catheter-based interventions,” IEEE Trans. Med. Imaging 39(4), 1015–1029 (2020). [CrossRef]  

30. K. Watanabe, J. Tokumine, A. K. Lefor, et al., “Photoacoustic needle Improves needle tip visibility during deep peripheral nerve block: a cadaver study,” Sci. Rep. 11(1), 8432 (2021). [CrossRef]  

31. E. Rascevska, K. J. Francis, and S. Manohar, “Annular illumination photoacoustic probe for needle guidance in medical interventions,” Biomedical Photonic Imaging, 11077 (2019).

32. H. Wang, S. Liu, T. Wang, et al., “Three-dimensional interventional photoacoustic imaging for biopsy needle guidance with a linear array transducer,” J. Biophotonics 12(12), e201900212 (2019). [CrossRef]

33. M. Ai, J. Youn, S. E. Salcudean, et al., “Photoacoustic tomography for imaging the prostate: a transurethral illumination probe design and application,” Biomed. Opt. Express 10(5), 2588–2605 (2019). [CrossRef]  

34. P. Beigi, S. E. Salcudean, G. C. Ng, et al., “Enhancement of needle visualization and localization in ultrasound,” Int. J. Comput. Ass. Rad. 16(1), 169–178 (2021). [CrossRef]  

35. M. Shi, T. Zhao, S. J. West, et al., “Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets,” Photoacoustics 26, 100351 (2022). [CrossRef]

36. W. Xia, M. K. A. Singh, E. Maneas, et al., “Handheld real-time LED-based photoacoustic and ultrasound imaging system for accurate visualization of clinical metal needles and superficial vasculature to guide minimally invasive procedures,” Sensors 18(5), 1394 (2018). [CrossRef]  

37. D. Allman, A. Reiter, and M. A. L. Bell, “Photoacoustic source detection and reflection artifact removal enabled by deep learning,” IEEE Trans. Med. Imaging 37(6), 1464–1477 (2018). [CrossRef]  

38. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS 9351, 234–241 (2015).

39. Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, et al., “UNet++: redesigning skip connections to exploit multiscale features in image segmentation,” IEEE Trans. Med. Imaging 39(6), 1856–1867 (2020). [CrossRef]  

40. J. Schlemper, O. Oktay, M. Schaap, et al., “Attention gated networks: Learning to leverage salient regions in medical images,” Med. Image Anal. 53, 197–207 (2019). [CrossRef]  

41. T. Zhou, S. Ruan, and S. Canu, “A review: Deep learning for medical image segmentation using multi-modality fusion,” Array 3(4), 100004 (2020). [CrossRef]  

42. K. S. Rhode, D. L. Hill, P. J. Edwards, et al., “Registration and tracking to integrate X-ray and MR images in an XMR Facility,” IEEE Trans. Med. Imaging 22(11), 1369–1378 (2003). [CrossRef]  

43. X. Lin, Y. Shen, and L. Wang, “Multi-scale photoacoustic assessment of wound healing using chitosan-graphene oxide hemostatic sponge,” Nanomaterials 11(11), 2879 (2021). [CrossRef]

44. F. K. Joseph, H. Kruit, E. Rascevska, et al., “Minimally invasive photoacoustic imaging for device guidance and monitoring of radiofrequency ablation,” Proc. SPIE 11240, 11240SF (2020). [CrossRef]  

45. G. A. Pang, E. Bay, X. L. Deán-Ben, et al., “Optoacoustic monitoring of real-time lesion formation during radiofrequency catheter ablation,” J. Cardiovasc. Electrophysiol. 26, 339–345 (2015). [CrossRef]  

46. M. Zha, X. Lin, J. Ni, et al., “An ester-substituted semiconducting polymer with efficient nonradiative decay enhances NIR-II photoacoustic performance for monitoring of tumor growth,” Angew. Chem. Int. Ed. 59(51), 23268–23276 (2020). [CrossRef]  

Supplementary Material (5)

Supplement 1: Supplemental document.
Visualization 1: Dynamic needle insertion of US/PA guidance using tissue-mimicking phantom.
Visualization 2: Dynamic needle insertion of US/PA guidance using ex vivo chicken breast tissue.
Visualization 3: Dynamic needle insertion of US/PA guidance in percutaneous RF ablation.
Visualization 4: Dynamic needle insertion of US/PA guidance in the in vivo needle biopsy of SLNs.
