Optica Publishing Group

Dictionary learning technique enhances signal in LED-based photoacoustic imaging

Open Access

Abstract

There has been growing interest in low-cost light sources such as light-emitting diodes (LEDs) as an excitation source in photoacoustic imaging. However, LED-based photoacoustic imaging is limited by the low energy per pulse: the signal is easily buried in noise, leading to low-quality images. Here, we describe a de-noising approach for LED-based photoacoustic signals based on dictionary learning with an alternating direction method of multipliers. This signal enhancement step is followed by a simple reconstruction approach, delay and sum. The approach yields a sparse representation of the main components of the signal. Relative to the averaging method, it improves the contrast ratio by 38% and the axial resolution by 43% while using only 4% of the frames and consequently 49.5% less computational time. This makes it an appropriate option for real-time LED-based photoacoustic imaging.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photoacoustic imaging (PAI) is a non-invasive hybrid imaging modality with tremendous potential in structural, functional, and molecular imaging for pre-clinical and clinical applications such as brain mapping, tumor detection, cancer staging, tissue vasculature, and oral health [1–10]. PAI combines optical and ultrasound imaging modalities based on the photoacoustic effect to achieve the good contrast and spectral behavior of optical imaging as well as the spatial and temporal resolution of ultrasound imaging [11–14]. In PAI, the tissue is illuminated by a 5–100 ns light pulse; the absorbed optical energy leads to a local temperature rise and subsequent thermal expansion, which generates wideband ultrasound waves.

The acoustic signal is directly proportional to optical fluence, and thus the optical excitation source is a key component of PAI systems. Pulsed lasers are common in PAI and offer powers just below the ANSI limit for strong PAI signal generation. However, these lasers are also bulky, expensive, and delicate. Thus, recent efforts have focused on low-cost light sources such as pulsed laser diodes (PLDs) and light-emitting diodes (LEDs) to further facilitate widespread clinical utility of PAI [15,16]. LED-based PAI equipment is relatively inexpensive, compact, portable, and lightweight; these LEDs have a long lifetime and good stability [17–22]. Of course, LED-based PAI, like other PAI, is subject to thermal and electrical noise [23,24]. More significant is that LEDs generally have a fluence three orders of magnitude lower than competing laser systems (µJ/cm2 versus mJ/cm2) [25,26]. The low signal in LED-based photoacoustics leads to a low signal-to-noise ratio (SNR) and ultimately poor quality of the reconstructed photoacoustic images [19,27–29]. Thus, improved tools are needed to compensate for the low fluence of LED-based excitation in PAI.

Many studies have reported reconstruction algorithms that yield high-quality PA images [30–37]. Here, high-quality images could be acquired from noisy signals through an effective signal de-noising technique followed by a simple reconstruction method. There are several studies on laser-based photoacoustic signal de-noising, including filtering [38], wavelet de-noising methods [39,40], singular value decomposition for laser-induced noise reduction [41], and empirical mode decomposition [29,42]. Filtering-based approaches suffer from an inability to remove noise when the signal and noise share a similar frequency spectrum: this is a problem in PA because the size of the object can change the PA spectrum. Furthermore, these signals are usually broadband and cannot be considered signals with specific frequency bands [43]. Wavelet methods are promising but have challenges with choosing an appropriate basis function, the optimum number of wavelet decomposition levels, and the optimum threshold. While solutions have been proposed for these drawbacks, they are often complicated and time-consuming [44,45].

The most commonly used technique to reduce the noise level and improve the SNR of PA signals is data averaging [38]. It is especially useful to compensate for low signal, e.g., in LED-based PAI [25]. However, signal averaging requires multiple data acquisition steps and can be time-consuming [19,29]. Alternatively, an adaptive filtering method that requires no prior knowledge has been proposed for low-energy PLD photoacoustic signal enhancement with fewer acquisitions than conventional averaging techniques; this reduced number of acquisitions can shorten photoacoustic imaging times [19]. Nevertheless, adaptive filtering still requires data averaging, which makes the imaging slower.

Recently, sparse representation has been extensively used for applications such as compressed sensing, reconstruction, and de-noising of medical images [46]. However, there are limited studies on sparse bio-signal de-noising [47]. These methods assume that natural signals are sparse over either a fixed dictionary, such as the Fourier or wavelet transform, or a learned dictionary [48]. A fixed dictionary offers simplicity and known properties but may not capture all features of the signal. Dictionary learning is a newer approach for adaptive sparse representation of signals that has recently been used in photoacoustic imaging applications such as high-quality image reconstruction [49,50] and reverberation removal in photoacoustic tomography [51]. All of these studies used laser-based photoacoustic imaging. Moreover, they applied a simple dictionary, which may itself be extremely noisy and contain irrelevant information.

Here, for the first time, we propose a de-noising approach for LED-based photoacoustic signals based on dictionary learning via an alternating direction method of multipliers (ADMM). This approach was validated with point-target phantoms and in vivo experiments. The method offers a high peak signal-to-noise ratio (PSNR) of the de-noised signal and high contrast in the reconstructed images, but with fewer frames and in a timeframe compatible with medical imaging.

2. Theory

2.1 Sparse dictionary learning

The sparse representation of a signal $x \in {R^n}$ is a linear combination of a few elements, called atoms (k atoms), from a given overcomplete dictionary matrix $D \in {R^{n \times k}}$. More precisely, the signal x can be approximated as $x \approx D\alpha$, where $\alpha \in {R^k}$ is a sparse vector with the fewest nonzero entries containing the representation coefficients of x. The sparse representation problem can therefore be posed as the following optimization problem:

$${\mathop {\min }\limits_\alpha} {||\alpha ||_0},\,s.t\,{||{x - D\alpha } ||_2} \le \varepsilon$$
where ${||\alpha ||_0}$ is the zero norm of $\alpha$, i.e., the number of non-zero entries of $\alpha$. With a proper regularization parameter $\lambda$, Eq. (1) can be converted to the unconstrained problem [52]:
$$\hat{\alpha } = \mathop {\arg }\limits_{\alpha ,D} \min {\mkern 1mu} ||x - D\alpha ||_2^2 + \lambda ||\alpha |{|_0}$$
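As a concrete illustration of the sparse coding problem in Eqs. (1) and (2), a greedy solver such as orthogonal matching pursuit can approximate the sparsest $\alpha$ for a fixed dictionary. This is a generic stand-in (the paper itself uses an MM solver later); a minimal NumPy sketch:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: find an alpha with at most k
    nonzeros such that x is approximately D @ alpha."""
    residual = x.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = x - D @ alpha
    return alpha

# toy check: a signal built from two atoms of a random unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]
alpha = omp(D, x, k=2)
```

With an incoherent dictionary, the two active atoms are typically identified exactly; the least-squares refit on the selected support then drives the residual toward zero.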

2.2 Alternating direction method of multipliers (ADMM)

The ADMM is a candidate solver for convex problems. It is a simple but powerful algorithm that solves a convex optimization problem by breaking it into smaller sub-problems, and it has recently been used in several different areas [53,54]. The ADMM benefits from two main ideas: dual decomposition and augmented Lagrangian methods for constrained problems [55]. The ADMM is designed to solve separable convex problems of the form:

$$\min \,f(x) + g(y),\,\,\,\,\,s.t.\,\,\,Ax + By = c$$
where $x \in {R^n}$, $y \in {R^m}$, $A \in {R^{p \times n}}$, and $B \in {R^{p \times m}}$. The augmented Lagrangian for Eq. (3) can be written as:
$${L_p}(x,y,\lambda ) = f(x) + g(y) + {\lambda ^T}(Ax + By - c) + (\frac{\rho }{2})||{Ax + By - c} ||_2^2$$
where $\rho$ is a positive penalty parameter and $\lambda$ is the Lagrangian multiplier. Equation (4) is solved in three steps: an x-minimization and a y-minimization, followed by an update of the multiplier $\lambda$, as follows:
$$\begin{array}{l} {x^{k + 1}}: = \arg \,\mathop {\min }\limits_x {L_p}(x,{y^k},{\lambda ^k}),\\ {y^{k + 1}}: = \arg \,\mathop {\min }\limits_y {L_p}({x^{k + 1}},y,{\lambda ^k}),\\ {\lambda ^{k + 1}}: = {\lambda ^k} + \rho (A{x^{k + 1}} + B{y^{k + 1}} - c). \end{array}$$
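To make these three updates concrete, the sketch below applies them to the lasso, a standard textbook instance with $f(x) = \tfrac{1}{2}||Ax - b||_2^2$, $g(y) = \lambda ||y||_1$, and the constraint $x - y = 0$ (so the matrices in Eq. (3) are the identity and its negative, with $c = 0$). This is an illustration of the ADMM mechanics, not the paper's dictionary-learning problem; the `lam` and `rho` values are arbitrary.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=300):
    """ADMM for min 0.5*||A x - b||^2 + lam*||y||_1  s.t.  x - y = 0,
    using the scaled dual variable u = lambda / rho."""
    m, n = A.shape
    x = np.zeros(n); y = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    # factor (A^T A + rho I) once; it is reused in every x-update
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: ridge-regularized least squares
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (y - u)))
        # y-update: soft thresholding, the proximal operator of the l1 norm
        v = x + u
        y = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u += x - y
    return y

# toy check: recover a sparse vector from noiseless measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
truth = np.zeros(20); truth[[2, 7, 15]] = [3.0, -2.0, 4.0]
b = A @ truth
y_hat = admm_lasso(A, b, lam=0.1)
```

The y-update has a closed form (soft thresholding) precisely because the splitting isolates the non-smooth $\ell_1$ term, which is the same reason the splitting is attractive in the dictionary-learning setting.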

3. Materials and methods

3.1 Dictionary learning assisted signal de-noising (DLASD)

To de-noise the LED-based photoacoustic signal, the PA signal is modeled as an observed noisy signal s defined as:

$$s = x + n$$
Here, x is the desired signal and n denotes the observation noise, which is bounded as ${||n ||_2} \le \varepsilon$. The photoacoustic signal de-noising process estimates x from the observation s. Assuming the photoacoustic signal has a sparse representation over the dictionary D, i.e., $x = D\alpha$, the de-noising problem can be written as:
$$\mathop {\min }\limits_{D,\alpha } \,\,||{s - D\alpha } ||_F^2\,\,\,s.t\,\,\,\,{||{{\alpha_i}} ||_0} \le \varepsilon ,\,\,\,i = 1,\,2,\,\ldots ,\,k,\,\,\,\,\,x = D\alpha$$
In the dictionary learning phase, the dictionary is updated during the learning process according to the decomposed signal; it therefore follows the properties of the decomposed signals, which can lead to sparser coefficients than fixed dictionaries such as the wavelet transform. Here, ADMM is proposed for dictionary learning because designing an appropriate dictionary plays an important role in the ideal recovery of a signal [56]. The Lagrange function of the dictionary learning problem, based on Eqs. (7) and (4), is:
$$L\, = \,||{s - x} ||_F^2 + \sum\limits_{i = 1}^L {\left\langle {{\Lambda _i},\,{{(x - D\alpha )}_i}} \right\rangle + \frac{\beta }{2}} ||{x - D\alpha } ||_F^2$$
Here, the operator $\left\langle {{\Lambda _i},{{(x - D\alpha )}_i}} \right\rangle$ denotes the trace of the matrix ${\Lambda _i}^T{(x - D\alpha )_i}$, where $\Lambda$ is the Lagrange multiplier matrix. The ADMM algorithm is applied via this equation, and the majorization-minimization (MM) algorithm [57] is used to obtain the coefficients. Finally, the updated dictionary is obtained as:
$${D^{(n + 1)}} = {D^{(n)}} + \frac{{{M^{(n)}}{\alpha _i}^{T(n)}}}{{{\alpha _i}^{(n)}{\alpha _i}^{T(n)} + \varepsilon }}, $$
where,
$${M^{(n)}} = \frac{{\beta {D^{(n)}}{\alpha ^{(n)}} + 2S - {\Lambda ^{(n)}}}}{{2 + \beta }} + \frac{{{\Lambda ^{(n)}}}}{\beta } - {D^{(n)}}{\alpha ^{(n)}}.$$
The first step is based on a given initial dictionary ${D^0}$ and the training matrix; ${D^0}$ consists of column vectors of length n chosen randomly from the given signal. The input signal is the raw data detected by the ultrasonic transducers, decomposed into many patches. The proposed method is applied to the original signal without any pre-processing; DLASD does not require any prior knowledge about the characteristics of the signal or the training data. The MM algorithm implements the sparse coding to obtain the coefficient vector $\alpha$. In the next step, the sparse vector $\alpha$ is fixed, and the dictionary D is updated using the ADMM-based dictionary learning method via Eq. (9). The Lagrange multiplier matrix is then updated based on Eq. (5) as:
$${\Lambda ^{(n + 1)}} = {\Lambda ^{(n)}} + \gamma \beta (\frac{{\beta {D^{(n)}}{X^{(n)}} + 2Y - {\Lambda ^{(n)}}}}{{2 + \beta }} - {D^{(n + 1)}}{X^{(n)}})$$
The iterations continue until a pre-defined error of the reconstructed signal is achieved or a fixed iteration count is reached; here, we used a fixed number of 20 iterations (Fig. 1).

We created photoacoustic images that represent an optical absorption distribution map of the targets via the delay and sum (DAS) approach as the most commonly used reconstruction method in the photoacoustic imaging area.
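A minimal NumPy sketch of DAS beamforming for a linear array follows, assuming one-way photoacoustic time of flight, a nominal sound speed of 1500 m/s, and the system's 40 MHz sampling rate; the array geometry and grid below are illustrative, not the paper's exact configuration.

```python
import numpy as np

def delay_and_sum(rf, element_x, x_grid, z_grid, c=1500.0, fs=40e6):
    """rf: (n_elements, n_samples) channel data. For each image pixel, sum
    the channel samples at the one-way time of flight to each element."""
    n_el, n_s = rf.shape
    img = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            d = np.sqrt((element_x - x) ** 2 + z ** 2)   # pixel-element distance
            idx = np.round(d / c * fs).astype(int)       # sample index per element
            valid = idx < n_s
            img[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return img

# toy check: a point source at (x, z) = (0, 10 mm) focuses at that pixel
element_x = np.linspace(-5e-3, 5e-3, 16)
rf = np.zeros((16, 1024))
d_src = np.sqrt(element_x ** 2 + 0.010 ** 2)
rf[np.arange(16), np.round(d_src / 1500.0 * 40e6).astype(int)] = 1.0
img = delay_and_sum(rf, element_x, x_grid=[-2e-3, 0.0, 2e-3],
                    z_grid=[8e-3, 10e-3, 12e-3])
```

At the true source pixel the per-element delays align all 16 unit spikes, so the coherent sum peaks there, while mismatched pixels sum mostly zeros.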

Fig. 1. Flowchart of the proposed algorithm.

3.2 Experimental setup

In this study, we used a commercially available LED-based PAI system (Cyberdyne Co., Tokyo, Japan) to perform all experiments. This imaging system has been characterized previously [17]. There were two high-density, high-power LED arrays, each comprising four rows of 36 single LEDs. These were coupled to the sides of a 128-element linear array transducer with a central frequency of 7 MHz and a bandwidth of 80.9%. Each ultrasound element has a dynamic range of 16 bits with 1024 samples, and the photoacoustic sampling rate is 40 MHz. The illumination source has a repetition rate of 4 kHz, a wavelength of 850 nm, and a 100 ns pulse width.

3.2.1 Contrast measurement

To evaluate the contrast of the reconstructed image from de-noised signal processed by the proposed method, parallel lines (150 µm wide) with distances of 1.1 mm were printed on the transparent film. We placed the film between two layers of 1% agar and fixed the entire object inside the water tank. The B-mode frame rate was 6 Hz.

3.2.2 Spatial resolution and depth measurement

We placed black nylon monofilament sutures with a nominal diameter of 50 µm (Teleflex Medical OEM) inside 2% intralipid (20%, emulsion, Sigma-Aldrich Co, MO, USA) mixed with agar at different depths with an interval distance of 5 mm for the first five filaments and 10 mm for the remainder. The B-mode frame rate was 6 Hz.

3.2.3 In vivo experiment

All animal experiments were performed in compliance with the Institutional Animal Care and Use Committee of the University of California San Diego. Rabbits served as the animal model to evaluate the proposed de-noising method in vivo. We anesthetized a New Zealand rabbit (∼5 kg) using an intramuscular injection of ketamine (35 mg/kg) and xylazine (5 mg/kg). The pupils were dilated and anesthetized using 2.5% phenylephrine hydrochloride, 0.5% proparacaine hydrochloride, and 1% tropicamide. The transducer was placed on top of the opened eye, and ultrasound gel was used for acoustic coupling. The LED repetition rate and B-mode frame rate for the in vivo experiments were 4 kHz and 6 Hz, respectively. We used 690 nm illumination in this study.

4. Results and discussion

Three different data sets were used to evaluate the DLASD method. Figure 2 shows a single line of the detected photoacoustic signal generated by the point-target phantom, where averaging and the proposed method were applied to different numbers of frames. The proposed method was compared with averaging over the same number of frames as a tool to improve the PSNR of the signals. Our proposed method has a PSNR of about 27.93 when using one frame; 20 frames are required to achieve the same PSNR via averaging. The use of five frames in DLASD markedly reduced the noise amplitude from 16.1 mV to about zero (-2.2×10−5 mV), which improved the PSNR by ∼40%.

Fig. 2. A single line of the detected de-noised photoacoustic signal via the averaging method (left column) and DLASD (right column) for multiple frame numbers: (a) 1 frame, (b) 5 frames, (c) 10 frames, and (d) 20 frames. The cursors show the amplitude of the noisy part of the detected signals. The first 200 samples of each channel were intentionally set to zero before any de-noising to eliminate the transducer artifact.

4.1 Contrast assessment

To evaluate the contrast of the reconstructed images, the contrast ratio (CR) metric was used as below:

$$CR = 20{\log _{10}}(\frac{{{\mu _{background}}}}{{{\mu _{object}}}})$$
Here, µobject and µbackground are the maximum intensity of the object and the mean image intensity in the background, respectively [32]. The background was defined as the pixels inside the green dashed rectangular region (Fig. 3). The reconstructed images from the signals de-noised via averaging and DLASD are shown for 5 and 10 frames (Fig. 3).
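The CR definition above can be computed directly from a reconstructed image; a short sketch, assuming the object and background regions are supplied as boolean masks:

```python
import numpy as np

def contrast_ratio(img, obj_mask, bg_mask):
    """CR = 20*log10(mean(background)/max(object)); with a bright object on
    a dim background this is negative, and more negative means more contrast."""
    mu_obj = img[obj_mask].max()          # maximum intensity in the object
    mu_bg = img[bg_mask].mean()           # mean intensity in the background
    return 20.0 * np.log10(mu_bg / mu_obj)

# toy check: an object at 1.0 over a uniform 0.01 background gives -40 dB
img = np.full((10, 10), 0.01)
img[5, 5] = 1.0
obj = np.zeros((10, 10), bool); obj[5, 5] = True
cr = contrast_ratio(img, obj, ~obj)
```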

Fig. 3. Reconstructed images via the different signal de-noising methods. (a), (c) Averaging using 5 and 10 frames. (b), (d) DLASD using 5 and 10 frames. (e) Averaging using all 1050 frames. (f) Comparison of the CR and computational time of averaging and DLASD using five frames and of averaging using all frames. The white arrows indicate the noisy part of the images after de-noising via averaging (a, e). Our proposed DLASD method shows dramatically better de-noising performance, as indicated by the white arrow in (b).

When using five frames, the CR was about -30.11 dB via the averaging method, whereas DLASD had a CR of -58.61 dB; the CR is about -42.13 dB when all 1050 frames are used for averaging. The proposed method thus provides higher contrast than averaging with the same number of frames. The CR improved with more frames for both approaches; however, the CR of DLASD at a given frame count was higher than that of the averaging method. Furthermore, DLASD used only 0.5% of all frames but still had a 38% improvement in contrast ratio versus averaging using all frames. Our processing time includes the averaging time, signal de-noising time, and image reconstruction time. The computational time of the DLASD method for the phantom data was about 0.3 s. By using 0.5% of the frames, the averaging time was reduced from 1.8 s for all 1050 frames to 0.4 s for 5 frames. The image reconstruction time is about 0.5 s for both methods. Consequently, our proposed method markedly reduced the total processing time from 2.3 s for averaging all frames to 1.2 s.

Furthermore, the DLASD method can also eliminate the mirror artifact in the reconstructed images. The mirror-image artifact is a form of reverberation caused by the false assumption that an echo returns to the transducer after a single reflection. During the dictionary learning process, the dictionary is updated according to the decomposed signals; it follows the properties of the main components of the signal, which leads to a sparse representation. This sparse representation reduces the mirror artifacts.

Additionally, to evaluate the tolerance of the DLASD method to noise, noise was added to the signal at different SNR levels. We used LED-based photoacoustic signals, which inherently include noise, averaged all frames as a ground-truth signal, and then added noise to give SNRs of -5, -10, -15, -20, -25, and -30 dB. Figure 4 shows that DLASD could recover the objects until the SNR decreased to -25 dB (see Appendix, Fig. 7).
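The noise levels in this test can be generated by scaling white Gaussian noise to a target SNR relative to the signal power; a minimal sketch of that step (our own illustration of the procedure, not the authors' code):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, seed=None):
    """Add white Gaussian noise so that the result has the requested SNR (dB)
    relative to the input signal's average power."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.asarray(signal, float) ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(p_noise), size=np.shape(signal))
    return signal + noise

# toy check at -10 dB: the noise power should be ~10x the signal power
t = np.arange(100_000)
clean = np.sin(2 * np.pi * 0.01 * t)
noisy = add_noise_at_snr(clean, -10.0, seed=0)
measured = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```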

Fig. 4. Reconstructed images of noisy signals with SNRs of (a) -5 dB, (c) -10 dB, (e) -15 dB, (g) -20 dB, (i) -25 dB, and (k) -30 dB. Panels (b), (d), (f), (h), (j), and (l) are the corresponding reconstructed images of signals de-noised by the DLASD method. Panel (m) shows the reconstructed image of the ground-truth signal.

4.2 Spatial resolution and depth assessment

We next used a depth phantom to evaluate spatial resolution and imaging depth. This phantom contains point targets positioned at different depths. The photoacoustic images of the de-noised signals via averaging and the DL-based method for different numbers of frames were reconstructed (Fig. 5). The first column shows the performance of the averaging method using 20 and 50 frames. The middle column depicts the output of DLASD with the same numbers of frames, and the last column is the averaging method with all 1290 frames. The averaging method improved the overall CR of the images. However, de-noising by averaging 20 frames missed the 6th object at the deepest position in the reconstructed image (Fig. 5(a)), and even with 50 frames of averaging, the 6th object is barely distinguishable and suffers from low contrast. The proposed method uses only 20 frames and can still detect the deepest object (Fig. 5(b)); using 50 frames significantly improved the contrast of the 6th object. Thus, the DLASD method with 50 frames detects the deepest object and improves the image quality. Table 1 shows the CR and computational time of both signal de-noising methods for different numbers of frames. The proposed method has a CR of about -87.62 dB with 50 frames, better than the CR of about -71.25 dB obtained with 1290 frames via the averaging method.

Fig. 5. Reconstructed images of signals de-noised by averaging using 20 (a) and 50 (c) frames, and by DLASD using 20 (b) and 50 (d) frames. (e) Averaging using all 1290 frames. (f) Lateral and axial FWHM for objects 20 mm deep with averaging and DLASD using 50 frames and with averaging of all frames as a gold standard. Noise at the left and right of the images is related to image reconstruction artifacts.

Table 1. The CR and computational time of signal de-noising methods for different numbers of frames.

To quantitatively evaluate the DLASD method in terms of spatial resolution, the full width at half maximum (FWHM) was calculated along the lateral and axial axes of the reconstructed images for objects at different depths. The focal depth of this transducer is about 20 mm, and the best resolution was achieved at this depth; the lateral and axial FWHM for objects at this depth are presented in Fig. 5(f). The lateral FWHM values for objects 20 mm deep are 1.32, 0.78, and 0.79 mm for averaging with 50 frames, DLASD with 50 frames, and averaging with all 1290 frames, respectively. DLASD used only 4% of the frames of the averaging method with no loss in lateral FWHM. In the axial direction, averaging with 50 frames leads to a FWHM of about 0.77 mm at 20 mm depth, which improved by about 10% when averaging all frames. We achieved a 43% improvement in axial FWHM with only 4% of the frames. Thus, the DLASD method yields better axial resolution than averaging all frames as a gold-standard method. Additionally, the linearity of the photoacoustic signals is maintained through the DLASD method (see Appendix, Fig. 8).
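FWHM values of this kind are typically extracted from the lateral or axial intensity profile through an object's peak; a short sketch with linear interpolation of the half-maximum crossings:

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a single-peak 1-D profile, with linear
    interpolation of the two half-maximum crossings; dx is the sample spacing."""
    p = np.asarray(profile, float)
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    i0, i1 = above[0], above[-1]
    # interpolate the crossing on each side of the peak
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1]) if i0 > 0 else float(i0)
    right = i1 + (p[i1] - half) / (p[i1] - p[i1 + 1]) if i1 < len(p) - 1 else float(i1)
    return (right - left) * dx

# toy check: a Gaussian with sigma = 2 has FWHM = 2*sqrt(2*ln 2)*sigma ~ 4.71
x = np.arange(-10.0, 10.01, 0.01)
w = fwhm(np.exp(-x ** 2 / (2 * 2.0 ** 2)), dx=0.01)
```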

4.3 Temporal resolution assessment

Here, we investigated the effect of frame number on the quality of the reconstructed images. Averaging can decrease the effect of noise with more frames but leads to longer scan times. Our proposed method also has better CR with more frames, but it requires only 50 frames (4% of all frames) to achieve the same CR as 1290 frames of averaging for the deepest object. DLASD offers better CR than averaging all frames with only 4% of the frames. The computational time of the DLASD method for this data set was about 0.31 s. By using 4% of the frames, the averaging time decreased from 1.58 s for all 1290 frames to 0.57 s for 50 frames. These processes used MATLAB on an Intel Core i7 3.2 GHz CPU with 8 GB RAM. These results show that our method improved the temporal resolution: the frame rate of averaging all frames is 0.05 Hz, whereas the frame rate of DLASD with 50 frames is about 1 Hz.

In [35], to improve the quality of LED-based PA images, the PA signals were first averaged over a number of frames (about 1360 frames for the best result), and a recurrent neural network was then used to improve the quality of the PA images as well as the imaging frame rate. The computational time of that method was about 0.1 s on a CPU (Intel Core i7-7700K @ 4.20 GHz with 32 GB RAM), not counting the training phase, which used a GPU (NVIDIA GeForce GTX 1080 Ti). In comparison, the proposed DLASD method enables the use of the simple delay-and-sum reconstruction without any further post-processing. Its computational time was about 0.3 s on a CPU (Intel Core i7 3.2 GHz with 8 GB RAM); however, it does not need any training process, and we used only 50 frames for averaging.

4.4 In vivo experiment

Finally, we evaluated the performance of the proposed signal de-noising method with in vivo data from a rabbit retina. Figure 6 shows the reconstructed images of the de-noised signals via our proposed method as well as the averaging method with different numbers of frames. We defined the dashed region as the background to calculate the CR of the images. A CR of -14.5 dB was achieved for images de-noised by averaging all 1536 frames. DLASD with 30 frames has a better CR of -18.05 dB, a 24% improvement versus averaging 30 frames (CR of -13.6 dB). Therefore, the retina of the rabbit can clearly be seen in Fig. 6(b), whereas it is not distinguishable with 30-frame averaging in Fig. 6(a). The CR was improved using only 2% of the data versus averaging all frames. Furthermore, the background noise is about -35 dB for DLASD with 30 frames versus -27 dB for all averaged frames. In contrast to the averaging method, the background noise was considerably suppressed with increasing frame numbers: -38 and -42 dB for DLASD using 50 and 100 frames, respectively (Fig. 6(d, f)). The computational time of DLASD for the in vivo data was about 0.32 s. Using fewer than 2% of the frames reduced the averaging time from 4.58 s for all frames to 1.45 s for 30 frames. The image reconstruction time is almost equal for the two methods (about 0.5 s). Finally, our proposed method significantly reduced the total processing time (including averaging, signal de-noising, and image reconstruction) from 5.08 s for averaging all frames to 2.27 s for DLASD.

Fig. 6. Reconstructed rabbit retina images of de-noised signals by averaging using 30 (a), 50 (c), and 100 (e) frames, and by DLASD using 30 (b), 50 (d), and 100 (f) frames. Panel (g) shows the averaging method using all 1536 frames. Panel (h) shows the schematic of the imaging setup for the rabbit eye. Panel (i) compares the CR and computational time of averaging and DLASD using 30 frames, and of averaging using all frames. The in vivo reconstructed images contain some artifacts, especially for averaging all frames and DLASD, where the noise is suppressed.

5. Conclusion

We proposed a dictionary learning assisted signal de-noising method that combines a Majorization-minimization method with ADMM to compensate for the low SNR of LED-based PAI systems. The proposed method was compared to the averaging method via phantoms and a rabbit retina. The DL-based signal de-noising method outperforms the averaging method in terms of PSNR. It also provides higher contrast versus averaging methods with the same number of frames; this was seen for all samples.

The lateral FWHM of our proposed method is identical to that of the averaging method but requires only 4% of the frames, and the axial FWHM improved by around 43%; thus, DL-based signal de-noising offers better axial resolution than averaging. Indeed, DLASD with a single frame could achieve the same CR as averaging all frames, but we settled on 4% of all frames to identify the deepest object with better contrast than averaging all frames. The main improvement is the use of fewer frames with less computational time and faster frame rates.

Appendix

Fig. 7. Image profiles for two SNR levels. Panels (a) and (c) show reconstructed images of noisy signals with SNRs of -25 dB and -30 dB, respectively. Panels (b) and (d) show the corresponding reconstructed images of signals de-noised by the DLASD method. Panels (e)-(h) show image profiles of the reconstructed images.

Fig. 8. Normalized intensity versus depth in the reconstructed image using averaging of all frames (blue) and DLASD with 20 frames (red). The fitted lines indicate linear regression of the points, with R-squared values of 0.991 and 0.988 for averaging all frames and DLASD, respectively.

Funding

National Institutes of Health (1R21AG065776-01, 1R21DE029025-01, 3DP2HL137187-01S1); National Science Foundation (1842387).

Acknowledgements

Jesse V. Jokerst acknowledges funding from the National Institutes of Health including grant numbers DP HL137187, R21 DE029025, and R21 AG065776.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335(6075), 1458–1462 (2012).

2. L. V. Wang and J. Yao, “A practical guide to photoacoustic tomography in the life sciences,” Nat. Methods 13(8), 627–638 (2016).

3. J.-W. Kim, E. I. Galanzha, E. V. Shashkov, H.-M. Moon, and V. P. Zharov, “Golden carbon nanotubes as multimodal photoacoustic and photothermal high-contrast molecular agents,” Nat. Nanotechnol. 4(10), 688–694 (2009).

4. M. Nasiriavanaki, J. Xia, H. Wan, A. Q. Bauer, J. P. Culver, and L. V. Wang, “High-resolution photoacoustic tomography of resting-state functional connectivity in the mouse brain,” Proc. Natl. Acad. Sci. 111(1), 21–26 (2014).

5. M. Pramanik, G. Ku, C. Li, and L. V. Wang, “Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography,” Med. Phys. 35(6Part1), 2218–2223 (2008).

6. M. Mehrmohammadi, S. Joon Yoon, D. Yeager, and S. Y. Emelianov, “Photoacoustic imaging for cancer detection and staging,” Curr. Mol. Imaging 2(1), 89–105 (2013).

7. S. Arabpou, E. Najafzadeh, P. Farnia, A. Ahmadian, H. Ghadiri, and M. S. A. Akhoundi, “Detection of Early Stages Dental Caries Using Photoacoustic Signals: The Simulation Study,” Frontiers in Biomedical Technologies (2019).

8. E. Najafzadeh, H. Ghadiri, M. Alimohamadi, P. Farnia, M. Mehrmohammadi, and A. Ahmadian, “Application of multi-wavelength technique for photoacoustic imaging to delineate tumor margins during maximum-safe resection of glioma: A preliminary simulation study,” J. Clin. Neurosci. 70, 242–246 (2019).

9. J. Kang, D. Kim, J. Wang, Y. Han, J. M. Zuidema, A. Hariri, J. H. Park, J. V. Jokerst, and M. J. Sailor, “Enhanced performance of a molecular photoacoustic imaging agent by encapsulation in mesoporous silicon nanoparticles,” Adv. Mater. 30(27), 1800512 (2018).

10. C. Moore, Y. Bai, A. Hariri, J. B. Sanchez, C.-Y. Lin, S. Koka, P. Sedghizadeh, C. Chen, and J. V. Jokerst, “Photoacoustic imaging for monitoring periodontal health: A first human study,” Photoacoustics 12, 67–74 (2018).

11. P. Beard, “Biomedical photoacoustic imaging,” Interface Focus 1(4), 602–631 (2011).

12. A. Rosencwaig and A. Gersho, “Theory of the photoacoustic effect with solids,” J. Appl. Phys. 47(1), 64–69 (1976).

13. S. Zackrisson, S. Van De Ven, and S. Gambhir, “Light in and sound out: emerging translational strategies for photoacoustic imaging,” Cancer Res. 74(4), 979–1004 (2014).

14. M. Xu and L. V. Wang, “Photoacoustic imaging in biomedicine,” Rev. Sci. Instrum. 77(4), 041101 (2006).

15. M. Erfanzadeh and Q. Zhu, “Photoacoustic imaging with low-cost sources; A review,” Photoacoustics 14, 1–11 (2019).

16. A. Hariri, A. Fatima, N. Mohammadian, S. Mahmoodkalayeh, M. A. Ansari, N. Bely, and M. R. Avanaki, “Development of low-cost photoacoustic imaging systems using very low-energy pulsed laser diodes,” J. Biomed. Opt. 22(7), 075001 (2017).

17. A. Hariri, J. Lemaster, J. Wang, A. S. Jeevarathinam, D. L. Chao, and J. V. Jokerst, “The characterization of an economic and portable LED-based photoacoustic imaging system to facilitate molecular imaging,” Photoacoustics 9, 10–20 (2018).

18. T. J. Allen and P. C. Beard, “High power visible light emitting diodes as pulsed excitation sources for biomedical photoacoustics,” Biomed. Opt. Express 7(4), 1260–1270 (2016).

19. R. Manwar, M. Hosseinzadeh, A. Hariri, K. Kratkiewicz, S. Noei, and M. N. Avanaki, “Photoacoustic signal enhancement: towards utilization of low energy laser diodes in real-time photoacoustic imaging,” Sensors 18(10), 3498 (2018).

20. M. Zafar, K. Kratkiewicz, R. Manwar, and M. Avanaki, “Development of Low-Cost Fast Photoacoustic Computed Tomography: System Characterization and Phantom Study,” Appl. Sci. 9(3), 374 (2019).

21. N. Puri, “Comparative study of diode laser versus neodymium-yttrium aluminum: garnet laser versus intense pulsed light for the treatment of hirsutism,” J. Cutan. Aesthet. Surg. 8(2), 97 (2015).

22. A. Fatima, K. Kratkiewicz, R. Manwar, M. Zafar, R. Zhang, B. Huang, N. Dadashzadesh, J. Xia, and M. Avanaki, “Review of cost reduction methods in photoacoustic computed tomography,” Photoacoustics 15, 100137 (2019).

23. J. Yao and L. V. Wang, “Sensitivity of photoacoustic microscopy,” Photoacoustics 2(2), 87–101 (2014).

24. L. V. Wang, “Tutorial on photoacoustic microscopy and computed tomography,” IEEE J. Sel. Top. Quantum Electron. 14(1), 171–179 (2008).

25. Y. Zhu, G. Xu, J. Yuan, J. Jo, G. Gandikota, H. Demirci, T. Agano, N. Sato, Y. Shigeta, and X. Wang, “Light emitting diodes based photoacoustic imaging and potential clinical applications,” Sci. Rep. 8(1), 9885 (2018).

26. M. W. Schellenberg and H. K. Hunt, “Hand-held optoacoustic imaging: A review,” Photoacoustics 11, 14–27 (2018).

27. S. Telenkov and A. Mandelis, “Signal-to-noise analysis of biomedical photoacoustic measurements in time and frequency domains,” Rev. Sci. Instrum. 81(12), 124901 (2010).

28. A. M. Winkler, K. I. Maslov, and L. V. Wang, “Noise-equivalent sensitivity of photoacoustics,” J. Biomed. Opt. 18(9), 097003 (2013).

29. M. Zhou, H. Xia, H. Zhong, J. Zhang, and F. Gao, “A Noise Reduction Method for Photoacoustic Imaging In Vivo Based on EMD and Conditional Mutual Information,” IEEE Photonics J. 11, 1–10 (2019). [CrossRef]  

30. C. Huang, K. Wang, L. Nie, L. V. Wang, and M. A. Anastasio, “Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media,” IEEE Trans. Med. Imaging 32(6), 1097–1110 (2013). [CrossRef]  

31. S. Antholzer, M. Haltmeier, and J. Schwab, “Deep learning for photoacoustic tomography from sparse data,” Inverse Probl. Sci. Eng. 27(7), 987–1005 (2019). [CrossRef]  

32. M. Mozaffarzadeh, A. Hariri, C. Moore, and J. V. Jokerst, “The double-stage delay-multiply-and-sum image reconstruction method improves imaging quality in a led-based photoacoustic array scanner,” Photoacoustics 12, 22–29 (2018). [CrossRef]  

33. N. Davoudi, X. L. Deán-Ben, and D. Razansky, “Deep learning optoacoustic tomography with sparse data,” Nat. Mach. Intell. 1(10), 453–460 (2019). [CrossRef]  

34. P. Omidi, M. Zafar, M. Mozaffarzadeh, A. Hariri, X. Haung, M. Orooji, and M. Nasiriavanaki, “A novel dictionary-based image reconstruction for photoacoustic computed tomography,” Appl. Sci. 8(9), 1570 (2018). [CrossRef]  

35. E. M. A. Anas, H. K. Zhang, J. Kang, and E. Boctor, “Enabling fast and high quality LED photoacoustic imaging: a recurrent neural networks based approach,” Biomed. Opt. Express 9(8), 3852–3866 (2018). [CrossRef]  

36. E. M. A. Anas, H. K. Zhang, C. Audigier, and E. M. Boctor, “Robust Photoacoustic Beamforming Using Dense Convolutional Neural Networks,” in Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation (Springer, 2018), pp. 3–11.

37. H. Lan, D. Jiang, C. Yang, and F. Gao, “Y-Net: A Hybrid Deep Learning Reconstruction Framework for Photoacoustic Imaging in vivo,” arXiv preprint arXiv:1908.00975 (2019).

38. J. Li, B. Yu, W. Zhao, and W. Chen, “A review of signal enhancement and noise reduction techniques for tunable diode laser absorption spectroscopy,” Appl. Spectrosc. Rev. 49(8), 666–691 (2014). [CrossRef]  

39. S. H. Holan and J. A. Viator, “Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction,” Phys. Med. Biol. 53(12), N227 (2008). [CrossRef]  

40. S. Tzoumas, A. Rosenthal, C. Lutzweiler, D. Razansky, and V. Ntziachristos, “Spatiospectral denoising framework for multispectral optoacoustic imaging based on sparse signal representation,” Med. Phys. 41(11), 113301 (2014). [CrossRef]  

41. E. R. Hill, W. Xia, M. J. Clarkson, and A. E. Desjardins, “Identification and removal of laser-induced noise in photoacoustic imaging using singular value decomposition,” Biomed. Opt. Express 8(1), 68–77 (2017). [CrossRef]  

42. Y. Lei, J. Lin, Z. He, and M. J. Zuo, “A review on empirical mode decomposition in fault diagnosis of rotating machinery,” Mech. Syst. Signal Process. 35(1-2), 108–126 (2013). [CrossRef]  

43. C. Li and L. V. Wang, “Photoacoustic tomography and sensing in biomedicine,” Phys. Med. Biol. 54(19), R59–R97 (2009). [CrossRef]  

44. C. B. Smith, S. Agaian, and D. Akopian, “A wavelet-denoising approach using polynomial threshold operators,” IEEE Signal Process. Lett. 15, 906–909 (2008). [CrossRef]  

45. J. Xu, Z. Wang, C. Tan, L. Si, L. Zhang, and X. Liu, “Adaptive wavelet threshold denoising method for machinery sound based on improved fruit fly optimization algorithm,” Appl. Sci. 6(7), 199 (2016). [CrossRef]  

46. M. Elad and M. Aharon, “Image denoising via learned dictionaries and sparse representation,” in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06) (IEEE, 2006), pp. 895–900.

47. G. Grossi, R. Lanzarotti, and J. Lin, “Orthogonal procrustes analysis for dictionary learning in sparse linear representation,” PLoS One 12(1), e0169663 (2017). [CrossRef]  

48. B. Deka, M. Handique, and S. Datta, “Sparse regularization method for the detection and removal of random-valued impulse noise,” Multimed. Tools Appl. 76(5), 6355–6388 (2017). [CrossRef]  

49. F. Liu, X. Gong, L. V. Wang, J. Guan, L. Song, and J. Meng, “Dictionary learning sparse-sampling reconstruction method for in-vivo 3D photoacoustic computed tomography,” Biomed. Opt. Express 10(4), 1660–1677 (2019). [CrossRef]  

50. S. Zheng and Y. Xiangyang, “Image reconstruction based on compressed sensing for sparse-data endoscopic photoacoustic tomography,” Comput. Biol. Med. 116, 103587 (2020). [CrossRef]  

51. S. Govinahallisathyanarayana, B. Ning, R. Cao, S. Hu, and J. A. Hossack, “Dictionary learning-based reverberation removal enables depth-resolved photoacoustic microscopy of cortical microvasculature in the mouse brain,” Sci. Rep. 8(1), 985 (2018). [CrossRef]  

52. K. Huang and S. Aviyente, “Sparse representation for signal classification,” in Advances in neural information processing systems (2007), pp. 609–616.

53. J. Eckstein and W. Yao, “Augmented Lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results,” RUTCOR Research Reports 32 (2012).

54. B. Wahlberg, S. Boyd, M. Annergren, and Y. Wang, “An ADMM algorithm for a class of total variation regularized estimation problems,” IFAC Proc. Volumes 45(16), 83–88 (2012). [CrossRef]  

55. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3(1), 1–122 (2010). [CrossRef]  

56. Q. Tong, Z. Sun, Z. Nie, Y. Lin, and J. Cao, “Sparse decomposition based on ADMM dictionary learning for fault feature extraction of rolling element bearing,” J. Vibroeng. 18(8), 5204–5216 (2016). [CrossRef]  

57. M. Yaghoobi, T. Blumensath, and M. E. Davies, “Dictionary learning for sparse approximations with the majorization method,” IEEE Trans. Signal Process. 57(6), 2178–2191 (2009). [CrossRef]  



Figures (8)

Fig. 1. Flowchart of the proposed algorithm.
Fig. 2. A single line of the detected de-noised photoacoustic signal via the averaging method (left column) and DLASD (right column) for multiple frame counts: (a) 1 frame, (b) 5 frames, (c) 10 frames, and (d) 20 frames. The cursors show the amplitude of the noisy part of the detected signals. The first 200 samples of each channel were intentionally set to zero before any de-noising to eliminate the transducer artifact.
Fig. 3. Reconstructed images via the different signal de-noising methods. (a), (c) Averaging using 5 and 10 frames. (b), (d) DLASD using 5 and 10 frames. (e) Averaging using all 1050 frames. (f) Comparison of the CR and computational time of averaging and DLASD using five frames versus averaging using all frames. The white arrows in (a) and (e) indicate noisy regions remaining after de-noising via the averaging method; the white arrow in (b) shows the markedly better de-noising performance of the proposed DLASD method.
Fig. 4. Reconstructed images of noisy signals with SNRs of (a) −5 dB, (c) −10 dB, (e) −15 dB, (g) −20 dB, (i) −25 dB, and (k) −30 dB. Panels (b), (d), (f), (h), (j), and (l) are the corresponding reconstructed images of signals de-noised by the DLASD method. Panel (m) shows the reconstructed image of the ground-truth signal.
Fig. 5. Reconstructed images of signals de-noised by averaging using 20 (a) and 50 (c) frames, and by DLASD using 20 (b) and 50 (d) frames. (e) Averaging using all 1290 frames. Panel (f) shows the lateral and axial FWHM for objects 20 mm deep with averaging and DLASD via 50 frames, and with averaging of all frames as the gold standard. The noise at the left and right of the images is an image reconstruction artifact.
Fig. 6. Reconstructed rabbit retina images of signals de-noised by averaging using 30 (a), 50 (c), and 100 (e) frames, and by DLASD using 30 (b), 50 (d), and 100 (f) frames. Panel (g) shows averaging using all 1536 frames. Panel (h) shows the schematic of the imaging setup for the rabbit eye. Panel (i) compares the CR and computational time of averaging and DLASD using 30 frames, and of averaging using all frames. The in vivo study shows some artifacts in the reconstructed images, especially for the averaging-all-frames and DLASD methods, where the noise is suppressed.
Fig. 7. Image profiles for two SNR levels. Panels (a) and (c) show reconstructed images of noisy signals with SNRs of −25 dB and −30 dB, respectively. Panels (b) and (d) show the corresponding reconstructed images of signals de-noised with the DLASD method. Panels (e)-(h) show image profiles of the reconstructed images.
Fig. 8. The points show normalized intensity versus depth in the reconstructed image using averaging of all frames (blue) and DLASD via 20 frames (red). The fitted lines indicate linear regression of the points, with R-squared values of 0.991 and 0.988 for averaging of all frames and DLASD, respectively.

Tables (1)

Table 1. The CR and computational time of signal de-noising methods for different numbers of frames.

Equations (12)

(1) $\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|x - D\alpha\|_2 \le \varepsilon$
(2) $\hat{\alpha} = \arg\min_{\alpha, D} \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_0$
(3) $\min\ f(x) + g(y) \quad \text{s.t.} \quad Ax + By = c$
(4) $L_\rho(x, y, \lambda) = f(x) + g(y) + \lambda^{T}(Ax + By - c) + \frac{\rho}{2}\|Ax + By - c\|_2^2$
(5) $x^{k+1} := \arg\min_x L_\rho(x, y^k, \lambda^k), \quad y^{k+1} := \arg\min_y L_\rho(x^{k+1}, y, \lambda^k), \quad \lambda^{k+1} := \lambda^k + \rho(Ax^{k+1} + By^{k+1} - c)$
(6) $s = x + n$
(7) $\min_{D, \alpha} \|s - D\alpha\|_F^2 \quad \text{s.t.} \quad \|\alpha_i\|_0 \le \varepsilon,\ i = 1, 2, \ldots, k, \quad x = D\alpha$
(8) $L = \|s - x\|_F^2 + \sum_{i=1}^{L} \langle \Lambda_i, (x - D\alpha)_i \rangle + \frac{\beta}{2}\|x - D\alpha\|_F^2$
(9) $D^{(n+1)} = D^{(n)} + \frac{M^{(n)} \alpha_i^{(n)T}}{\alpha_i^{(n)} \alpha_i^{(n)T} + \varepsilon}$
(10) $M^{(n)} = \frac{\beta D^{(n)} \alpha^{(n)} + 2S - \Lambda^{(n)}}{2 + \beta} + \Lambda^{(n)} - \beta D^{(n)} \alpha^{(n)}$
(11) $\Lambda^{(n+1)} = \Lambda^{(n)} + \gamma \beta \left( \frac{\beta D^{(n)} X^{(n)} + 2Y - \Lambda^{(n)}}{2 + \beta} - D^{(n+1)} X^{(n)} \right)$
(12) $\mathrm{CR} = 20 \log_{10}\!\left(\frac{\mu_{\mathrm{background}}}{\mu_{\mathrm{object}}}\right)$
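To make the ADMM-based sparse-coding step concrete, the sketch below solves an L1-relaxed version of the denoising problem with a scaled-dual ADMM loop, following the general splitting in Eqs. (3)-(5). This is a minimal illustration under stated assumptions, not the authors' DLASD implementation: it uses a fixed DCT dictionary instead of a learned one, replaces the L0 constraint with an L1 penalty, and the parameters `lam`, `rho`, and `n_iter` are illustrative choices. The `contrast_ratio` helper implements the CR definition from Eq. (12).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sparse_denoise(s, D, lam=0.1, rho=1.0, n_iter=100):
    """Denoise s by solving min_a 0.5||s - D a||_2^2 + lam ||a||_1 via ADMM.

    Splitting (cf. Eqs. (3)-(5)): min f(a) + g(z) s.t. a - z = 0, with
    f(a) = 0.5||s - D a||^2 and g(z) = lam ||z||_1 (L1 relaxation of L0).
    Returns the reconstruction D @ z as the denoised signal.
    """
    m, k = D.shape
    a = np.zeros(k); z = np.zeros(k); u = np.zeros(k)  # u: scaled dual variable
    A = D.T @ D + rho * np.eye(k)                      # fixed matrix of the a-update
    Dts = D.T @ s
    for _ in range(n_iter):
        a = np.linalg.solve(A, Dts + rho * (z - u))    # quadratic subproblem (x-update)
        z = soft_threshold(a + u, lam / rho)           # prox of the L1 term (y-update)
        u = u + a - z                                  # dual ascent (lambda-update)
    return D @ z

def contrast_ratio(mu_background, mu_object):
    # CR = 20 log10(mu_background / mu_object), per the definition in the text.
    return 20.0 * np.log10(mu_background / mu_object)
```

In a typical use, a noisy A-line `s` is projected onto the dictionary, the shrinkage step suppresses the small (noise-dominated) coefficients, and the few surviving atoms reconstruct the de-noised signal, which is then passed to the delay-and-sum beamformer.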