Optica Publishing Group

Enhancing sparse-view photoacoustic tomography with combined virtually parallel projecting and spatially adaptive filtering

Open Access

Abstract

To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements are an effective way to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both the measurement scheme and the reconstruction algorithm must be considered together. In this study, we present an iterative sparse-view PAT reconstruction scheme in which the concept of virtual parallel-projection, matched to the measurement conditions, is introduced to enable "compressive sensing" in the reconstruction procedure, while non-local spatially adaptive filtering, exploiting the a priori mutual similarity found in natural images, is adopted to recover the unknowns in the transformed sparse domain. Consequently, for the same sparse views, the images reconstructed by the proposed scheme are evidently improved compared with those obtained by the universal back-projection method. The proposed approach has been validated by simulations and ex vivo experiments, exhibiting desirable image fidelity even with a small number of measuring positions.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photoacoustic tomography (PAT) is emerging as a powerful technique for providing deep-tissue structural, functional and molecular information in both small-animal [1–3] and human imaging studies [4–6]. In PAT, unfocused ultrasonic transducers, either single-element or arrayed, are usually adopted to receive the photoacoustic (PA) signals emitted from biological tissues. However, an ultrasonic array comprising several hundred elements is relatively expensive, since each element requires its own preamplifier and data acquisition channel. Thus, circularly scanning the imaging object with one or several single-element transducers is the usual choice in many experimental setups, in view of its cost-effectiveness and measurement flexibility [7, 8]. Nevertheless, such schemes normally require hundreds of scanning steps to acquire full-view PAT projections and usually take several minutes. To achieve the rapid scanning increasingly demanded in practice, sparse-view PAT measurements can be utilized, where the scanning time is greatly shortened by increasing the interval between projection angles. However, under sparse-view measurements, conventional algorithms, e.g., universal back-projection (UBP) [9] or model-based (matrix-based) inversion without sparse regularization [10], usually lead to blurred and distorted reconstructions.

To guarantee high-quality reconstructed images in sparse-view PAT, several reconstruction algorithms driven by the concept of compressed sensing (CS) have been studied. CS-based approaches have already been applied successfully in biomedical imaging, most notably in Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) [11]. In 2009, the idea of introducing CS theory into PAT was first proposed by Provost and Lesage and tested in phantom experiments [12]. Thereafter, many efforts on CS-based PAT have been made, and promising results of phantom, ex vivo and in vivo experiments have been presented. Most of these methods import certain a priori information, such as the sparsity of the object representation, into the model-based inverse problem as a regularization term, and then recover the image iteratively [13–19]. Commonly, total variation (TV), a sparsity regularization term exploiting the sparsity of the natural image gradient, has been adopted in CS-based PAT reconstructions [13–17]. Moreover, Meng et al. have proposed a reconstruction framework of "compressed sensing with partially known support", which uses a small part of the known nonzero-signal locations in the transformed sparse domain as a priori information [20], and have also developed a principal-component-analysis-based PAT to reduce the scale of the reconstruction [21]. In addition, Sandbichler et al. have presented a reconstruction that applies a sparse transform to integrating-line-detector data followed by the UBP process [22]. Besides, some approaches using patterned excitation light or different scanning schemes have also been proposed [23, 24]. Model-based PAT, however, demands substantial computational resources; in particular, extensive memory is needed to store the model matrix [25].
Furthermore, it is important but difficult to choose a suitable regularization parameter that balances the fidelity term and the regularization term during the reconstruction. Moreover, TV-based regularization measures only local changes in an image and tends to over-smooth edges and texture details, since it favors piece-wise constant solutions [26]. In addition, when the imaging objects are small and narrow targets, such as tiny blood vessels, TV-based methods may be ineffective because the image gradient is not sufficiently sparse in these cases.

Based on the above analysis, to enhance the image quality in sparse-view PAT, both the sparse representation method and the regularization procedure should be optimized. In this study, we present an iterative sparse-view PAT reconstruction scheme that works as follows. Under a certain measurement condition, the signals received at one measuring position can be viewed as the "virtual parallel-projection profile" of the initial pressure distribution (P0 image) along the direction orthogonal to the measuring direction (defined as the position vector of the transducer). In this case, similar to parallel-beam CT imaging, the PA signals received at each measuring position can be transformed to the Fourier domain, and according to the central-slice theorem the transformed signals can be considered a partial Fourier spectrum of the image to be recovered. In this way, the "compression" process is achieved. Although a high-fidelity reconstruction can be achieved using only part of the Fourier spectrum, in some sparse-view PAT scenarios the measured spectrum is still insufficient. It is therefore highly desirable to develop a method that can effectively estimate the unobserved part of the spectrum. To this end, we propose an iterative reconstruction approach. In each iteration step, a spatially adaptive filtering procedure, known as block-matching and 3D filtering (BM3D) [27], is applied in the image domain to effectively exploit new features and details of the object. The filtered image is then transformed back to the Fourier domain and, with the measured part of the Fourier spectrum kept unchanged, the next iteration is executed.
The BM3D filtering as a sparse representation process fully considers the a priori information of the existing mutually similar blocks in natural images, and it is a non-local filtering that utilizes more comprehensive feature information of the imaging object compared with the local-based methods. The proposed approach has been validated by the simulation and ex vivo experiments, exhibiting promising performances in imaging fidelity even from a small number of measuring positions.

2. Methods

2.1 Virtual parallel-projection

In CT reconstruction, one of the most fundamental concepts is the central-slice theorem. For an imaging object, let F(ωx,ωy) be its two-dimensional Fourier transform (2D-FT). The theorem states that the one-dimensional Fourier transform (1D-FT) of a projection P(ω,α) of the object equals the profile of F(ωx,ωy) along the line through the center of F(ωx,ωy) at the corresponding angle. The central-slice theorem takes the following form [28]:

\[ P(\omega,\alpha) = F(\omega_x,\omega_y)\big|_{\omega_x=\omega\cos\alpha,\ \omega_y=\omega\sin\alpha}, \]
where ω is the frequency component in the 1D Fourier domain; α is the angle of the projection; ωx and ωy represent the x-direction and y-direction frequency components in the 2D Fourier domain, respectively. The central-slice theorem provides a direct relationship between the measured signals and the imaging object in the transformed domain, where only a part of the coefficients is required to reconstruct the image.
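The relation can be checked numerically: the 1D-FT of a parallel projection at α = 0 should coincide with the ωy = 0 row of the object's 2D-FT. The following NumPy sketch (our own illustration, not code from this work) verifies this on a Gaussian blob:

```python
import numpy as np

# Build a simple 2D object (a centered Gaussian blob) on an N x N grid.
N = 128
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
obj = np.exp(-(X**2 + Y**2) / (2 * 8.0**2))

# Parallel projection at alpha = 0: integrate along y (sum over rows).
proj = obj.sum(axis=0)

# 1D-FT of the projection, with centered-frequency conventions.
P = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj)))

# Central slice of the 2D-FT along omega_y = 0 (the row through the center).
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj)))
central_slice = F[N // 2, :]

# The two agree to numerical precision (central-slice theorem).
print(np.max(np.abs(P - central_slice)))
```

The printed maximum deviation is at the level of floating-point round-off, confirming the identity for this discretized case.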

In 2D-PAT reconstruction, the signals received at one measuring position can be viewed as projections along different arcs centered on the transducer, as shown in Fig. 1(a). In practice, if the rotation radius and the size of the transducer are appropriately increased, the arc projection paths become approximately linear. This can be regarded as a "virtual parallel-projection" process, as depicted in Fig. 1(b).


Fig. 1 Sketches of the different back-projection modes of the PA signals: (a) Arc projections and (b) Virtual parallel-projections.


In this situation, similar to CT imaging, the measured PA signals can be transformed to the Fourier domain, and the transformed signals can be considered a partial Fourier spectrum of P0. If the remaining, unknown part of the spectrum can then be effectively obtained, a high-quality PAT image can be reconstructed.

The determination of the rotation radius is important. A too-short scanning distance cannot guarantee the "virtual parallel-projection" condition, while a too-long scanning distance leads to serious decay of the PA signals, especially of the high-frequency components. The considerations for selecting the rotation radius are as follows, with Fig. 2 illustrating the process: (1) For a transducer with a large active size (D), the signals received at one sampling position can be regarded as the superposition of the signals from a series of virtual point-transducers arranged along the surface of the transducer, as illustrated in Fig. 2(a). This superposition changes the multiple arc back-projection paths into an approximately straight line [29–32], especially in the central area (of the same size as D) of the reconstruction grid; (2) The PA signals are back-projected onto the given reconstruction grid from the measuring position. For the column of the grid closest to the measuring position (Column 1 in Fig. 2(a)), if one synthetic projection path lies almost completely within Column 1, then the other synthetic projection paths, which are further away from the same measuring position, are even closer to straight lines.


Fig. 2 Schematics of (a) the superposition process of the signals back projected from a series of virtual point-transducers arranged along the surface of the transducer, and (b) calculating the minimum rotation radius of the transducer.


As depicted in Fig. 2(b), D1 is regarded as a virtual-point detector located at the edge of the transducer. E is the edge point of the reconstruction grid with side length L. Point F and point G are located on the two boundary lines of Column 1, respectively, with EG ⊥ GD1. The distance between F and G is dl, which represents the minimum resolution of the reconstruction grid. Taking (1) and (2) into account, if one synthetic projection path lies almost entirely within Column 1 and passes through the upper and lower boundaries of Column 1, at least through the edge points of the boundaries, then the "virtual parallel-projection" condition is approximately satisfied. That is, if one back-projection path from D1 can pass through E and F, then the rotation radius (R) calculated in this case is taken as the minimum rotation radius. R is formulated as follows:

\[ R \ge R_1 - dl + \frac{L}{2}, \qquad R_1 = \frac{\sqrt{\left(\dfrac{L-D}{2}\right)^{2} + dl^{2}}}{2\cos\left[\arctan\left(\dfrac{L-D}{2\,dl}\right)\right]}, \]
where R1 is the distance between D1 and E, and ED1 = FD1 = R1. The derivation of R1 is detailed in Appendix A.

In our experiments, L is set to 22 mm for the small-animal model, and dl is set slightly less than the axial resolution of the transducer [33]. The central frequency of the selected unfocused transducer is 2.25 MHz (V306, Olympus NDT), thus dl is set to 100 μm accordingly, and D of this transducer is about 12.5 mm. In this case, the scanning radius R is finally selected as 155 mm.
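Plugging these experimental values into the rotation-radius formula above gives a quick numerical check. Note that the grouping of terms in the printed formula is our reading of it, so treat this as a hedged sketch rather than the authors' exact computation:

```python
import math

# Experimental values from Section 2.1, in millimetres.
L = 22.0    # side length of the reconstruction grid
D = 12.5    # active diameter of the transducer (V306)
dl = 0.1    # minimum grid resolution (100 um)

# R1: distance between the edge virtual-point detector D1 and the grid
# edge point E.  The term grouping below is our interpretation of the
# printed formula, not necessarily the authors' exact expression.
half_gap = (L - D) / 2.0
theta = math.atan(half_gap / dl)
R1 = math.sqrt(half_gap**2 + dl**2) / (2.0 * math.cos(theta))

# Minimum rotation radius.
R_min = R1 - dl + L / 2.0
print(round(R1, 1), round(R_min, 1))
```

Under this reading, R1 ≈ 112.9 mm and the minimum radius is about 124 mm, consistent with the 155 mm scanning radius actually selected in the experiments.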

Using the above settings, we pick three points in one PA signal sequence received at a single measuring position and simulate the corresponding projection paths, as shown in Fig. 3(b). For comparison, the projection paths back-projected from a point transducer are depicted in Fig. 3(a). The results show that using a transducer with a larger active size together with a suitable rotation radius turns the arc-shaped projection paths into nearly straight lines, so the "virtual parallel-projection" condition is essentially satisfied.


Fig. 3 Simulation results of the projection paths back projected from (a) Point transducer, and (b) Transducer with the proposed imaging conditions.


2.2 Iterative sparse-view PAT reconstruction cooperated with BM3D filtering

In order to exploit the unobserved partial Fourier spectrum of the P0 image, an iterative sparse-view PAT reconstruction scheme cooperating with BM3D filtering is proposed.

2.2.1 BM3D filtering

The BM3D filtering [27, 34] consists of two main steps:

Step 1: Taking the original image (u0) as the input, an intermediate (i.e., basic estimate) image (ubasic) is estimated using hard thresholding during the collaborative filtering process:

  • (a) Grouping and collaborative filtering: The input image u0 is processed block by block. These blocks are called reference blocks. For each reference block, blocks similar to the currently processed one are found using a similarity measure (block matching). A three-dimensional (3D) group (array) is built by stacking the matched blocks, and collaborative filtering is then applied to the grouped blocks. In this step, hard thresholding is applied to shrink the coefficients in the transform domain.
  • (b) Aggregation: After collaborative filtering, an estimate of each block is obtained; because the blocks overlap, multiple estimates are obtained for each pixel. The output of Step 1 (ubasic) is computed by weighted averaging of all the overlapping block-wise estimates.

Step 2: This step produces the final estimate of the image (ufinal) based on both u0 and ubasic obtained from Step 1. Here, instead of hard thresholding, Wiener filtering [35] is used as the shrinkage method.

  • (a) Grouping and collaborative filtering: Execute block matching on ubasic. For each reference block, record the locations of all matched blocks. Use these locations to form two 3D groups (arrays) of image blocks, one from u0 and the other from ubasic. Then, apply a 3D transform on both groups. The collaborative Wiener filtering is performed on the first group (u0-group). Here, the energy spectrum of ubasic is used as the true (pilot) energy spectrum in calculating the empirical Wiener coefficients. The estimates of all grouped blocks are obtained by applying the inverse 3D transform on the filtered coefficients and returning to their original locations.
  • (b) Aggregation: ufinal is obtained by aggregating all the block-wise estimates using a weighted average.

In the BM3D filtering process, since the grouped image blocks are similar, the transform achieves a highly sparse representation of the original image, and the collaborative filtering is able to reveal the finest details shared by the grouped blocks. Thus, during PAT reconstruction, BM3D filtering helps to enhance the useful features of the P0 image and to improve the reconstruction fidelity in the image domain.
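The data flow of Step 1 can be made concrete with a heavily simplified NumPy sketch. This is a didactic toy, not the actual BM3D: the real method uses specific 2D/1D transform pairs (e.g. DCT and Haar), noise-variance-scaled thresholds and Kaiser-window-weighted aggregation, none of which are reproduced here:

```python
import numpy as np

def bm3d_step1_sketch(u0, block=8, step=4, k=8, thr=0.1):
    """Toy version of BM3D Step 1: grouping by block matching,
    collaborative filtering by a 3D transform with hard thresholding,
    and aggregation by weighted averaging of overlapping estimates."""
    H, W = u0.shape
    est = np.zeros_like(u0, dtype=float)
    wgt = np.zeros_like(u0, dtype=float)
    # Candidate blocks on a regular grid.
    coords = [(i, j) for i in range(0, H - block + 1, step)
                     for j in range(0, W - block + 1, step)]
    blocks = np.stack([u0[i:i + block, j:j + block] for i, j in coords])
    for r, _ in enumerate(coords):
        # Block matching: k most similar blocks (by SSD) to the reference.
        ssd = ((blocks - blocks[r])**2).sum(axis=(1, 2))
        idx = np.argsort(ssd)[:k]
        group = blocks[idx]                       # 3D group (k, block, block)
        # Collaborative filtering: 3D transform + crude hard threshold.
        G = np.fft.fftn(group)
        G[np.abs(G) < thr * np.abs(G).max()] = 0
        filt = np.real(np.fft.ifftn(G))
        # Aggregation: return each filtered block to its location, average.
        for g, m in enumerate(idx):
            i, j = coords[m]
            est[i:i + block, j:j + block] += filt[g]
            wgt[i:i + block, j:j + block] += 1.0
    return est / np.maximum(wgt, 1e-12)
```

Calling bm3d_step1_sketch on a noisy image returns the "basic estimate"; Step 2 would repeat the grouping on this estimate and replace the hard threshold with empirical Wiener shrinkage.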

2.2.2 Iterative reconstruction process

The unobserved part of the Fourier spectrum can be gradually estimated by an iterative process, detailed in the pseudocode below. At the beginning of the iteration process, we first reconstruct the initial P0 image from the projection data using the inverse Radon transform (IRT), which has the following form [28]:

\[ f(x,y) = \int_{0}^{2\pi}\!\!\int \frac{\partial p(s,\alpha)}{\partial s}\, \frac{1}{2\pi^{2}\,(x\cos\alpha + y\sin\alpha - s)}\, \mathrm{d}s\, \mathrm{d}\alpha, \]
where f(x,y) is the P0 image to be reconstructed in our work, and p(s,α) denotes the projection at the sampling angle α, i.e., the PA signals received at the sampling angle α. Then the BM3D filtering process is executed (Eq. (4)). After that, the updated P0 image is transformed to the Fourier domain (Eq. (5)). Keeping the measured Fourier spectrum unchanged (Eq. (6)), the updated 2D Fourier spectrum is transformed back to the image domain (Eq. (7)), and the next filtering step is carried out. The proposed method is abbreviated as IRT-BM3D in this work. To illustrate the implementation of the proposed IRT-BM3D approach, the flowchart of the algorithm is shown in Fig. 4.


Fig. 4 Flowchart of the implementation of IRT-BM3D.



Algorithm 1. Iterative sparse-view PAT reconstruction cooperating with BM3D filtering.

At each iteration, pseudo-random noise Ω is injected only into the unobserved part of the spectrum (Eq. (6)), which helps the BM3D filtering to attenuate noise robustly, to reveal new features effectively, and to accelerate convergence. This is similar to a "random search" process. To prevent the injected noise in the Fourier domain from affecting the global reconstructed image, the amplitude of Ω is set very small and decreases exponentially as the iterations proceed. In the final iteration, the amplitude is set to 0.
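The loop of Eqs. (4)-(7), including the decaying noise injection, can be sketched end-to-end as follows. Everything here is a stand-in for illustration: a Gaussian smoother (scipy.ndimage.gaussian_filter) replaces the BM3D filter, the "measured" spectrum is a set of discrete central slices through the 2D-FT of a synthetic P0 image, and the angles and noise schedule are arbitrary choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Ground-truth P0 image (a stand-in): two bright discs on a 64 x 64 grid.
N = 64
yy, xx = np.mgrid[:N, :N]
p0 = ((xx - 20)**2 + (yy - 24)**2 < 36).astype(float) \
   + ((xx - 44)**2 + (yy - 40)**2 < 64).astype(float)

# "Measured" spectrum: sparse central slices through the 2D-FT of p0,
# one line per sampling angle (the sparse-view picture of the paper).
F_true = np.fft.fft2(p0)
mask = np.zeros((N, N), dtype=bool)
t = np.arange(1 - N // 2, N // 2)               # symmetric range keeps the
for ang in np.deg2rad(np.arange(0, 180, 12)):   # mask conjugate-symmetric
    ki = np.round(t * np.sin(ang)).astype(int) % N
    kj = np.round(t * np.cos(ang)).astype(int) % N
    mask[ki, kj] = True
F_meas = np.where(mask, F_true, 0)

# Iterate Eqs. (4)-(7): denoise in the image domain, re-impose the
# measured spectrum.  The decaying Omega mimics the "random search".
u = np.real(np.fft.ifft2(F_meas))               # initial estimate
for it in range(30):
    u = gaussian_filter(u, sigma=1.0)           # surrogate denoiser (Eq. (4))
    F = np.fft.fft2(u)                          # to Fourier domain  (Eq. (5))
    amp = 0.01 * 0.8**it if it < 29 else 0.0    # zero in the final pass
    omega = amp * (rng.standard_normal((N, N))
                   + 1j * rng.standard_normal((N, N)))
    F = np.where(mask, F_meas, F + omega)       # keep measured part (Eq. (6))
    u = np.real(np.fft.ifft2(F))                # back to image domain (Eq. (7))
```

After the loop, the spectrum on the measured mask is preserved exactly (enforced by Eq. (6)), while the unobserved entries have been filled in by the repeated denoise-and-reproject cycle.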

According to the central-slice theorem, for the measurements from the sampling angle α and α+180°, the corresponding measuring mask in Fourier domain is the same line. For the proposed method, if the sampling angles include both α and α+180°, the 1D-FT of the PA signal P(α) received from the sampling angle α can be calculated as 0.5×(FT(P(α))+FT(P^(α+180))), where P^(α+180) denotes the inverted order of the PA signal P(α+180).
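This redundancy is easy to verify numerically. In the noiseless parallel-projection model, the view at α + 180° is simply the α view traversed in reverse order, so the averaging above reproduces FT(P(α)) exactly; with real measurements the two views carry independent noise, and the average suppresses it. A small NumPy demonstration (our own, using an exact 180° rotation so no interpolation error enters):

```python
import numpy as np

# A test image and its parallel projections at alpha = 0 and alpha + 180
# degrees.  Rotating by 180 degrees via np.rot90(img, 2) is exact.
rng = np.random.default_rng(1)
img = rng.random((64, 64))

proj_0 = img.sum(axis=0)                       # projection at alpha
proj_180 = np.rot90(img, 2).sum(axis=0)        # projection at alpha + 180

# The opposite view is the same profile traversed in reverse order...
assert np.allclose(proj_180, proj_0[::-1])

# ...so averaging FT(P(alpha)) with the FT of the order-inverted opposite
# signal, as in the text, reproduces FT(P(alpha)).
avg = 0.5 * (np.fft.fft(proj_0) + np.fft.fft(proj_180[::-1]))
print(np.allclose(avg, np.fft.fft(proj_0)))    # True
```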

2.3 Experimental setup

Figure 5 shows the layout of the experimental setup. The optical excitation is provided by a pulsed Nd:YAG laser (Nimma-600, Beamtech Optronics, China) generating 6 ns pulses at 532 nm with a repetition rate of 10 Hz. The light beam is delivered to the top surface of the phantom via a reflection mirror and a concave mirror combined with ground glass for beam expansion, reaching a diameter of around 2.5 cm. The average laser intensity of the light spot is lower than the ANSI limit of 20 mJ/cm2. The phantom is placed in a cylindrical imaging chamber positioned along the axis of rotation (the z axis) by connecting it to a vertical translation stage; different imaging planes can be chosen by adjusting this stage. To achieve multi-view scanning, a rotation stage carries a transducer with a central frequency of 2.25 MHz (V306, Olympus NDT). The transducer is connected to a translation stage, and the rotation radius can be changed by adjusting the displacement of this stage. Both the phantom and the transducer are immersed in a water tank for acoustic coupling. The received PA signals are amplified by a 50-dB amplifier (PREAMP2-D, US Ultratek) and digitized by a data acquisition card with a sampling rate of 75 MHz (PCI8552, ART Technology, China). The whole PAT measurement is synchronized by the pulsed Nd:YAG laser.


Fig. 5 Schematic of the PAT measuring system.


To assess the experimental performance of the proposed imaging system in terms of spatial resolution, a phantom is built that consists of absorbing polyethylene microspheres with a diameter of 200 μm. The microspheres are placed approximately in one plane of the phantom. A full-angle measurement with a sampling interval of 1° is carried out, and the image is reconstructed by the proposed IRT-BM3D method. Some of the reconstructed microspheres are shown in Fig. 6(a), and the profile (along line A-A') of one recovered microsphere is depicted in Fig. 6(b). The reconstructed diameter of the microspheres is estimated as 310 μm, i.e., the average full width at half maximum of three recovered microspheres (I, II, and III).
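The FWHM estimate used above can be computed from a profile with a few lines of code. The helper below is our own hypothetical implementation, not from the paper; it linearly interpolates the half-maximum crossings, and the example profile is a synthetic Gaussian whose analytic FWHM (2·sqrt(2·ln 2)·σ ≈ 0.31 mm) matches the reported bead size:

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a 1D profile, with linear
    interpolation at the half-maximum crossings; dx is the pixel pitch."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # Linear interpolation on each side of the above-half region.
    if left > 0:
        left = left - (p[left] - half) / (p[left] - p[left - 1])
    if right < len(p) - 1:
        right = right + (p[right] - half) / (p[right] - p[right + 1])
    return (right - left) * dx

# Example: a Gaussian bead profile with sigma = 0.132 mm sampled at 10 um.
x = np.arange(-2, 2, 0.01)                     # mm
g = np.exp(-x**2 / (2 * 0.132**2))
print(round(fwhm(g, 0.01), 2))                 # ~0.31
```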


Fig. 6 Experimental performance of the imaging system: (a) Image reconstruction of the microspheres (with a diameter of 200 μm); (b) Profile along line A-A’, as marked in (a).


3. Experiments

3.1 Simulation experiment

To demonstrate the efficiency and superiority of the IRT-BM3D scheme in sparse-view PAT reconstruction, simulations are performed in a 2D setting. The exact P0 image is a vessel phantom, to which two "tumor" targets are added, as illustrated in Fig. 7(k). The reconstruction region is 20 mm × 20 mm (length × width) with image dimensions of 256 pixels × 256 pixels. The scanning radius is set to 155 mm, the same as in Section 2.1. The sound speed is 1500 m/s, assumed uniform throughout the simulation. The simulations are executed using the k-Wave toolbox [36].


Fig. 7 Reconstructed results of the vessel phantom with tumor targets: (a)-(e) UBP results of #360-, #120-, #90-, #60- and #30-view cases; (f)-(j) IRT-BM3D results of #360-, #120-, #90-, #60- and #30-view cases; (k) Exact image of the phantom; (l)-(p) Profiles (along line A-A') of the recovered images in #360-, #120-, #90-, #60- and #30-view cases.


To investigate the performance of the IRT-BM3D algorithm for different numbers of measuring positions, five cases with sampling intervals of 1°, 3°, 4°, 6° and 12° are simulated, termed the #360-, #120-, #90-, #60- and #30-view PAT measurements, respectively. The IRT-BM3D reconstructions are performed on an Intel(R) Core(TM) i7-2600 CPU @ 3.40 GHz with 16.0 GB RAM, and each iteration takes about 0.2109 s. The results reconstructed by the UBP and IRT-BM3D algorithms are shown in Figs. 7(a)-7(j), respectively. The IRT-BM3D results are clearly superior to the UBP results, especially in the #60- and #30-view cases. For example, the #60-view image reconstructed by UBP (Fig. 7(d)) contains many artifacts, and some fine structures, e.g. the branched thin vessels in the lower tumor target, become difficult to recognize; they can, however, be clearly observed in the IRT-BM3D result (Fig. 7(i)). In the #30-view UBP result shown in Fig. 7(e), even large structures are blurred: the boundaries of the two tumor targets cannot be extracted clearly, whereas these structures are reconstructed with high fidelity in the IRT-BM3D image (Fig. 7(j)). Moreover, both the UBP and IRT-BM3D schemes obtain high-quality results in the #360-view case, which also demonstrates the accuracy of the proposed IRT-BM3D method. To further analyze the texture details reconstructed by the different methods in each sparse-view case, the profiles of all recovered images along the yellow dashed line (A-A') drawn in Fig. 7(k) are shown in Figs. 7(l)-7(p). The profiles of the UBP results fluctuate strongly because of the severe artifacts in the reconstructed images. In contrast, the profiles of the IRT-BM3D results are much closer to the exact ones, especially in the #60- and #30-view cases. For example, the small vessel on the right side of the upper tumor target is indistinguishable in the #30-view UBP result, while it is conspicuous in the IRT-BM3D result (Fig. 7(p)). From the above analysis, the IRT-BM3D scheme achieves satisfactory imaging fidelity and structural precision even under sparser sampling conditions.

To quantitatively assess the reconstructed results, the parameter “distance (d)” is calculated to show the difference between the reconstructed image and the exact one. The parameter d is defined as

\[ d = \sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(u(i,j)-v(i,j)\bigr)^{2} \Big/ \sum_{i=1}^{M}\sum_{j=1}^{N} v(i,j)^{2}}, \]
where u and v are the reconstructed image and the exact phantom image, respectively. The size of the image is M×N. The smaller value of d denotes that the reconstructed image is closer to the exact one. Besides, the peak signal-to-noise ratio (PSNR) of the reconstructed result is also calculated, which is defined as
\[ \mathrm{PSNR} = 10\log_{10}\!\left(\frac{M \times N \times v_{\max}^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(u(i,j)-v(i,j)\bigr)^{2}}\right), \]
where vmax is the maximum gray value of phantom image, and the normalized vmax is set to 1 in this work.
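Both metrics are straightforward to implement; note that the square root in d below follows our reading of the printed formula:

```python
import numpy as np

def distance_d(u, v):
    """Relative difference d between reconstruction u and ground truth v
    (the square root is our interpretation of the printed definition)."""
    return np.sqrt(np.sum((u - v)**2) / np.sum(v**2))

def psnr(u, v, vmax=1.0):
    """PSNR in dB with the phantom normalized so that vmax = 1."""
    mse = np.mean((u - v)**2)
    return 10.0 * np.log10(vmax**2 / mse)

# Example on a toy pair of images.
rng = np.random.default_rng(0)
v = rng.random((64, 64))
u = v + 0.01 * rng.standard_normal((64, 64))
print(distance_d(u, v), psnr(u, v))
```

distance_d returns a relative error (0 for a perfect reconstruction), and psnr reduces to 10·log10(vmax²/MSE), which is equivalent to the expression above.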

Figures 8(a) and 8(b) show the values of d and the PSNR at each iteration step of the IRT-BM3D reconstruction in each sparse-view case. To better reconstruct fine structures and to evaluate the convergence behavior of the algorithm, the number of iterations is set to 300. The results show that convergence is reached in each case. The smaller d and larger PSNR of the final results compared with the initial ones demonstrate that the proposed method effectively reconstructs high-quality images. Furthermore, compared with the #30-view case, the final values of d are relatively close in the #120-, #90- and #60-view cases, as are the PSNR values. This indicates that, within this range, the number of sampling positions has only a limited effect on the quality of the IRT-BM3D results. When the sampling becomes particularly sparse, however, as in the #30-view case, the reconstruction quality is not as good as in the other sparse-sampling cases analyzed in our work. Nevertheless, the value of d still declines rapidly and substantially in the #30-view case.


Fig. 8 Values of (a) d, and (b) PSNR of the reconstructed results versus the iterations.


Table 1 lists the PSNR and d values of the final UBP and IRT-BM3D reconstructions for all sampling cases in the simulation.


Table 1. The PSNR and d values of the final reconstructed UBP and IRT-BM3D results

As the table shows, the PSNR values of the IRT-BM3D results are higher than those of UBP by 5.273%, 27.31%, 51.88%, 43.62% and 54.98%, respectively, and the d values of the IRT-BM3D results are lower than those of UBP by 10.48%, 63.21%, 68.03%, 67.70% and 64.31%, respectively. These quantitative data demonstrate that, compared with the UBP algorithm, the proposed approach vastly improves the imaging fidelity. For the #30-view case, the remarkable increase in PSNR and the relatively small value of d also indicate the validity of the IRT-BM3D method in this very sparse case.

In practical experiments, the measured PA signals are usually affected by system noise. To verify the performance of the algorithm under extremely harsh noise conditions, white Gaussian noise is added to the #60-view simulated PA signals to obtain signal-to-noise ratio (SNR) values of 20, 15, 10 and 5 dB, respectively. The results reconstructed by the UBP and IRT-BM3D methods are illustrated in Fig. 9, and the values of d and PSNR at each iteration step of the IRT-BM3D reconstruction in each SNR case are displayed in Figs. 10(a) and 10(b).


Fig. 9 Reconstructed results of the vessel phantom with tumor targets in #60-view case with different SNR values: (a)-(d) UBP results in the cases of SNR = 5 dB, 10 dB, 15 dB and 20 dB; (e)-(h) IRT-BM3D results in the cases of SNR = 5 dB, 10 dB, 15 dB and 20 dB.



Fig. 10 Values of (a) d, and (b) PSNR of the reconstructed results versus the iterations.


As the results show, the quality of the UBP images deteriorates seriously as the noise level increases. For instance, some fine structures, e.g. the branched thin vessels, cannot be recognized clearly at SNR = 5 dB. In contrast, the IRT-BM3D results show better noise robustness, and the texture details can be extracted accurately in all the noise cases. At SNR = 15 dB and SNR = 20 dB, there is no significant reduction in visual quality compared with the noise-free case. Although some background noise can be observed in the IRT-BM3D image at SNR = 5 dB, the fine structures can still be clearly distinguished. The results in Fig. 10 demonstrate that convergence is reached in each case and that, after the iterations, the PSNR and fidelity of the reconstructed images are dramatically improved. The increases of the PSNR values are 6.8604, 5.2097, 5.7316 and 5.2695 dB for SNR = 5, 10, 15 and 20 dB, respectively. Meanwhile, in all cases, the d values are reduced by at least 0.15.

3.2 Ex vivo experiments with biological tissue

To further validate the feasibility of the proposed IRT-BM3D approach for biomedical applications, ex vivo experiments are conducted on porcine tissue and mouse intestinal tissue.

3.2.1 Tumor-mimicking tissue imaging

To mimic subcutaneous tumor tissue, the biological sample is constructed as a sandwich structure, as shown in Figs. 11(k) and 11(l). The top and bottom layers of the sample are composed of porcine tenderloin tissue, and two small pieces of porcine liver tissue with different shapes, mimicking the "tumor" targets, are sandwiched between the two layers. The whole sample, with dimensions (length × width × height) of 20 mm × 20 mm × 12 mm, is placed into the imaging chamber, and the remaining space between the sample and the chamber is filled with an agar mixture (36 °C, A9414, SIGMA). The #360-, #120-, #90-, #60- and #30-view PAT measurements are carried out in the imaging plane close to the interlayer of the ex vivo sample. The measurement settings are the same as those in the simulation experiment.


Fig. 11 Reconstructed images of the tumor-mimicking tissue sample: (a)-(e) UBP results of #360-, #120-, #90-, #60- and #30-view cases; (f)-(j) IRT-BM3D results of #360-, #120-, #90-, #60- and #30-view cases; (k)-(l) Photographs of the biological tissue sample; (m) Reconstructed details in the yellow dotted boxes in (e) and (j).


The images reconstructed by the UBP and IRT-BM3D algorithms are shown in Figs. 11(a)-11(j), respectively. The UBP and IRT-BM3D results are of comparable quality in the #360- and #120-view cases, although a certain degree of blurring appears in the #120-view UBP result. The fidelity of the UBP images declines sharply as the number of sampling positions decreases: there are many artifacts in the #90-, #60- and #30-view UBP results, where the boundaries of the "tumor" targets are severely blurred, as indicated by the arrows in Fig. 11(d). In contrast, the IRT-BM3D image has sharper edges and clearer target boundaries (Fig. 11(i)). In the #30-view case, the two targets can hardly be distinguished visually in the UBP image, while they can still be extracted in the IRT-BM3D image (Fig. 11(m)). In the #360-view case, some texture details, for example some muscle fibers of the porcine tenderloin tissue, are recovered clearly. Although such particularly fine structures might not be preserved in the #60- and #30-view IRT-BM3D images, the reconstructed results are still satisfactory in image fidelity and in the accuracy of the target structure.

Taking the #360-view IRT-BM3D result as the reference image, the PSNR and d values of the final UBP and IRT-BM3D reconstructions in the #120-, #90-, #60- and #30-view cases are calculated and listed in Table 2. As can be seen from the table, compared with the UBP results, the IRT-BM3D images show a significant increase in PSNR and smaller d values. The maximum increase in PSNR is 11.4036 dB in the #120-view case, and the maximum drop in d is 0.1646 in the #30-view case.


Table 2. The PSNR and d values of the final reconstructed UBP and IRT-BM3D results

3.2.2 Vascular imaging for mouse intestinal tissue

To further demonstrate the applicability of the proposed IRT-BM3D method to more complex biological structures, mouse intestinal tissue is selected because it contains abundant blood vessels of various scales. A 6-week-old healthy male KM mouse is euthanized, and a portion of intestinal tissue is excised for imaging. The tissue is placed in the imaging chamber and embedded in turbid agar gel, as shown in Fig. 12(a). The sample is placed in the imaging system within 30 minutes of the animal's death. The experimental procedures in this study were reviewed and approved by the subcommittee on research animal care at Tianjin Medical University Cancer Institute & Hospital. To balance the measurement cost against the fidelity of the reconstruction for fine-structure tissue imaging, we evaluate the performance of the IRT-BM3D method in the #90-view case. The #90-view IRT-BM3D result is illustrated in Fig. 12(b), and the #90-view UBP image is shown in Fig. 12(c) for comparison.


Fig. 12 Reconstructed images of the ex vivo mouse intestinal tissue: (a) Photograph of the tissue sample; (b) #90-view IRT-BM3D result; (c) #90-view UBP result. The reconstructed details are shown in white dotted boxes.


As seen from the anatomical structure of the tissue in Fig. 12(a), blood vessels of different sizes (about 150–500 μm) and shapes are distributed throughout the sample. The image quality of the #90-view UBP result is not satisfactory, and only part of the fine structure is captured: the multiple artifacts in the background impair the image contrast and obscure fine details. This indicates that the UBP algorithm is not suitable for recovering texture details in the sparse-view case with a sampling interval of 4°. In contrast, the #90-view IRT-BM3D result performs better in image fidelity and structural precision, and corresponds well to the vascular anatomy, as indicated by the arrows. Vessels of different sizes (arrows 5, 11, 12 and 8) and different shapes (the "Y"-shaped vessels indicated by arrows 2 and 9) are recovered more clearly in the IRT-BM3D result than in the UBP image, e.g., the regions in the white dotted boxes in Figs. 12(b) and 12(c). The branching and orientation of the vessels indicated by arrows 1, 2, 3, 8 and 12 in Fig. 12(b) are explicit, while these features cannot be accurately resolved in the #90-view UBP image.

4. Discussions and conclusions

To keep the instrument cost reasonable and meet the need for rapid scanning in applications such as dynamic imaging, sparse-view PAT measurement is adopted. To recover a high-quality P0 image from such measurements, sparse-view PAT reconstruction approaches have been widely studied. In this work, a novel iterative IRT-BM3D reconstruction has been proposed. Simulations have been carried out, and experimental validations have been performed on a self-built PAT imaging system.

For the IRT-BM3D approach, we first proposed the "virtual parallel-projection" measurement condition. Under this condition, the arc back-projection paths tend toward approximately parallel lines, as verified and illustrated in Fig. 3. Thus, similarly to parallel-beam CT reconstruction and following the central-slice theorem, the measured signals can be transformed to the 2D Fourier domain, where the P0 image can be sparsely represented, and the transformed signals can directly serve as the partial Fourier spectrum of the P0 image. Furthermore, the unobserved part of the spectrum can be gradually recovered by iteratively applying BM3D filtering to the image obtained by transforming the original "measured" and the "newly explored" Fourier spectrum back to the image domain. This process exploits another sparse-representation assumption about the P0 image: natural images consist of a few templates, such as vessels or other tissues with a similar appearance in the image. The BM3D filtering enhances the similarity between blocks described by one template, which yields a sparse representation with a few block templates, allowing new features to be recovered and the imaging quality to be improved effectively.
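A minimal sketch of this iterative spectrum-completion loop is shown below. It is not the authors' implementation: a simple 3×3 averaging filter stands in for the BM3D denoiser, and the function names and parameters are our own illustrative choices.

```python
import numpy as np

def local_mean(img):
    """3x3 periodic averaging filter; a crude stand-in for the BM3D
    denoiser used by the authors (illustration only)."""
    acc = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return acc / 9.0

def iterative_fourier_recon(measured_spec, mask, denoise=local_mean, n_iter=50):
    """Iterate: image-domain filtering followed by re-insertion of the
    measured Fourier samples, gradually filling the unobserved spectrum.
    `measured_spec` is the partial 2D spectrum; `mask` marks sampled bins."""
    spec = measured_spec.copy()
    for _ in range(n_iter):
        img = np.real(np.fft.ifft2(spec))   # back to image domain
        img = denoise(img)                  # spatially adaptive filtering step
        spec = np.fft.fft2(img)             # candidate full spectrum
        spec[mask] = measured_spec[mask]    # enforce the measured data
    return np.real(np.fft.ifft2(spec))
```

The data-consistency step (`spec[mask] = measured_spec[mask]`) is what keeps the filtered estimate anchored to the actual measurements while the denoiser proposes values for the unobserved frequencies.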

As a commonly used sparse-view PAT reconstruction scheme, model-based inversion with a TV-based regularization term (MB-TV) has been widely investigated. To compare the performance of the proposed IRT-BM3D method with the common MB-TV scheme, simulation experiments on the vessel phantom in the #60- and #30-view sampling cases are performed. The images reconstructed by the UBP, TV-based gradient descent (TV-GD) [13] and IRT-BM3D algorithms are shown in Fig. 13. The same reconstructions have also been carried out on the tumor-mimicking tissue sample, as exhibited in Fig. 14. For the TV-GD reconstruction, the regularization parameter is set to 0.2 [13], and the iteration is terminated when the parameter d stops decreasing for two successive updating stages. The parameters for the IRT-BM3D reconstruction are the same as in the previous section. Table 3 and Table 4 list the PSNR and d values of the final TV-GD and IRT-BM3D results for the simulation and the ex vivo experiment, respectively.


Fig. 13 Reconstructed results of the vessel phantom with tumor targets: (a)-(c) UBP, TV-GD and IRT-BM3D results for #60-view case; (d)-(f) UBP, TV-GD and IRT-BM3D results for #30-view case.



Fig. 14 Reconstructed results of the tumor-mimicking tissue sample: (a)-(c) UBP, TV-GD and IRT-BM3D results for #60-view case; (d)-(f) UBP, TV-GD and IRT-BM3D results for #30-view case.



Table 3. The PSNR and d values of the final reconstructed TV-GD and IRT-BM3D results of the vessel phantom


Table 4. The PSNR and d values of the final reconstructed TV-GD and IRT-BM3D results of the tumor-mimicking tissue sample

It can be seen from the results that both the TV-GD and the IRT-BM3D images are conspicuously superior to the UBP results, especially in the #30-view case. However, the IRT-BM3D images outperform the TV-GD results in both visual inspection and quantitative assessment. For example, compared with the #60-view IRT-BM3D result (Fig. 13(c)), some details, e.g., the branched thin vessels in the lower tumor target, are slightly blurred in the TV-GD image (Fig. 13(b)). Moreover, there are more artifacts in the #30-view TV-GD result (Fig. 13(e)). In general, the TV-GD method tends to produce over-smoothed edges and texture details, as illustrated especially in Fig. 14(e). This can be explained by the fact that classical TV-based algorithms assume images to be locally piecewise-constant, an assumption violated by most natural images. In contrast, the BM3D method, as a non-local filter, is more appropriate for reconstruction because the selection of image blocks is not limited to local neighborhoods. Furthermore, in the MB-based approach, the choice of the regularization parameter balancing the two parts of the objective function has a significant effect on the imaging results and requires careful tuning. This issue is avoided in the IRT-BM3D reconstruction.
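To make the piecewise-constant bias concrete, below is a generic smoothed-TV gradient-descent denoiser. It is not the TV-GD algorithm of Ref. [13], which inverts the full PAT forward model; the function names, the smoothing constant `eps`, and the default parameters are our illustrative choices.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of the smoothed isotropic TV seminorm
    sum sqrt(dx^2 + dy^2 + eps), via forward differences."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    # negative divergence (adjoint of the forward differences)
    div = (np.diff(gx, axis=1, prepend=gx[:, :1])
           + np.diff(gy, axis=0, prepend=gy[:1, :]))
    return -div

def tv_denoise(y, lam=0.2, step=0.1, n_iter=100):
    """Plain gradient descent on 0.5*||x - y||^2 + lam*TV(x)."""
    x = y.copy()
    for _ in range(n_iter):
        x -= step * ((x - y) + lam * tv_grad(x))
    return x
```

Because the TV gradient pushes every local patch toward a constant value, textured regions are flattened along with the noise, which is exactly the over-smoothing behaviour observed in Fig. 14(e).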

In this paper, numerical simulations and ex vivo experiments have verified the effectiveness of the proposed IRT-BM3D algorithm. The qualitative and quantitative evaluations show that the IRT-BM3D approach outperforms UBP in terms of PSNR and imaging fidelity, especially in the relatively sparse #30-view case. Simulation results have also exhibited the high robustness and good convergence of the proposed method. The ex vivo experiments further demonstrate that the IRT-BM3D method is competent for practical sparse-view PAT applications: the reconstructions effectively reduce the under-sampling artifacts and clearly reveal target boundaries and some texture details. For instance, in the #90-view vascular imaging experiment, even the branched thin vessels, which are crucial for biomedical imaging diagnosis, can be reconstructed accurately, while these fine structures are indistinguishable in the UBP result. Moreover, it should be noted that if the imaging object contains fine structures, as in the vessel phantom, the quality of the #30-view IRT-BM3D reconstruction is worse than in the #120-, #90- and #60-view cases. In the tumor-mimicking tissue experiment, however, where the imaging target is larger, the reconstructed results have similar quality in the #90-, #60- and #30-view cases in terms of PSNR, d and visual observation. This indicates that selecting a sparse measuring scheme suited to the imaging demand is also important for obtaining good results in sparse-view PAT.

In a setup with only a single transducer, the overall data-acquisition time generally decreases as the sampling interval increases (equivalently, as the number of sampling positions decreases). Nevertheless, this reduction is not significant for sparse-view scanning, since the transducer must still traverse a full circle around the target, which takes much longer than the sparse sampling itself. For our imaging system, the total measurement durations in the #360-, #120-, #90-, #60- and #30-view cases are 56.88 s, 35.95 s, 34.66 s, 32.15 s and 25.76 s, respectively. In realistic scenarios, if a few more transducers are used, e.g., six transducers fixed on the rotation stage every 60°, then to achieve the #60-view measurement the rotation stage only needs to rotate nine times at an interval of 6°. In this case, the sampling time can be reduced to 4.8225 s.
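The multi-transducer view counting above can be checked in a few lines. The variable names are ours, and the 4.8225 s figure itself depends on the authors' hardware, so it is not reproduced here.

```python
# Six transducers fixed every 360/6 = 60 degrees share the 60 target views,
# so each transducer only has to cover a 60-degree arc in small steps.
n_views = 60                                   # target number of projections
n_transducers = 6                              # fixed every 60 degrees
positions = n_views // n_transducers           # angular positions per transducer
step_deg = (360 / n_transducers) / positions   # rotation step in degrees
rotations = positions - 1                      # the first position needs no rotation
```

This reproduces the statement in the text: ten angular positions per transducer, reached by nine rotations of 6° each.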

In the current work, the imaging system and the scanning geometry are designed for in vivo small-animal imaging. For a larger imaging target, such as the breast, this scanning geometry would impractically require a larger rotation radius to satisfy the "virtual parallel-projection" condition. In that case, instead of increasing the rotation radius, increasing the active size of the transducer, e.g., using an integrating line detector [30], can be a better alternative, as mentioned above.

Appendix A: Derivation of R1

R1 can be calculated by

$$R_1 = \frac{L_H}{\cos\alpha},$$

where

$$L_H = \frac{1}{2}EF = \frac{1}{2}\sqrt{EG^2 + dl^2} = \frac{1}{2}\sqrt{\left(\frac{LD}{2}\right)^2 + dl^2},$$

and

$$\alpha = \arctan\left(\frac{EG}{dl}\right) = \arctan\left(\frac{LD}{2\,dl}\right).$$

Thus, we have

$$R_1 = \frac{\sqrt{\left(\frac{LD}{2}\right)^2 + dl^2}}{2\cos\left[\arctan\left(\frac{LD}{2\,dl}\right)\right]}.$$
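As a numerical check of the appendix formula, the sketch below evaluates R1 both directly and via an equivalent closed form obtained with the identity cos(arctan t) = 1/sqrt(1 + t^2); the function names are ours.

```python
import math

def r1(LD, dl):
    """Appendix A expression:
    R1 = sqrt((LD/2)^2 + dl^2) / (2*cos(arctan(LD/(2*dl))))."""
    return (math.sqrt((LD / 2) ** 2 + dl ** 2)
            / (2 * math.cos(math.atan(LD / (2 * dl)))))

def r1_closed_form(LD, dl):
    """Equivalent closed form: R1 = (LD^2 + 4*dl^2) / (8*dl),
    from cos(arctan t) = 1/sqrt(1 + t^2) with t = LD/(2*dl)."""
    return (LD ** 2 + 4 * dl ** 2) / (8 * dl)
```

The closed form makes the geometric trade-off explicit: for a fixed detector aperture LD, the required radius grows quadratically with LD and inversely with the element depth dl.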

Funding

National Natural Science Foundation of China (81771880, 61475115, 61475116, 61575140, 81571723, 81671728), and Tianjin Municipal Government of China (16JCZDJC31200, 17JCZDJC32700).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References and links

1. X. L. Deán-Ben, S. Gottschalk, B. Mc Larney, S. Shoham, and D. Razansky, “Advanced optoacoustic methods for multiscale imaging of in vivo dynamics,” Chem. Soc. Rev. 46(8), 2158–2198 (2017).

2. L. Li, L. Zhu, C. Ma, L. Lin, J. Yao, L. Wang, K. Maslov, R. Zhang, W. Chen, J. Shi, and L. V. Wang, “Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution,” Nat. Biomed. Eng. 1(5), 0071 (2017).

3. J. Xia and L. V. Wang, “Small-animal whole-body photoacoustic tomography: a review,” IEEE Trans. Biomed. Eng. 61(5), 1380–1389 (2014).

4. J. Aguirre, M. Schwarz, N. Garzorz, M. Omar, A. Buehler, K. Eyerich, and V. Ntziachristos, “Precision assessment of label-free psoriasis biomarkers with ultra-broadband optoacoustic mesoscopy,” Nat. Biomed. Eng. 1(5), 0068 (2017).

5. M. Schwarz, A. Buehler, J. Aguirre, and V. Ntziachristos, “Three-dimensional multispectral optoacoustic mesoscopy reveals melanin and blood oxygenation in human skin in vivo,” J. Biophotonics 9(1-2), 55–60 (2016).

6. M. Heijblom, D. Piras, W. Xia, J. C. G. van Hespen, J. M. Klaase, F. M. van den Engh, T. G. van Leeuwen, W. Steenbergen, and S. Manohar, “Visualizing breast cancer using the Twente photoacoustic mammoscope: what do we learn from twelve new patient measurements?” Opt. Express 20(11), 11582–11597 (2012).

7. L. Nie, Z. Guo, and L. V. Wang, “Photoacoustic tomography of monkey brain using virtual point ultrasonic transducers,” J. Biomed. Opt. 16(7), 076005 (2011).

8. Z. Deng, W. Li, and C. Li, “Slip-ring-based multi-transducer photoacoustic tomography system,” Opt. Lett. 41(12), 2859–2862 (2016).

9. M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E 71(1), 016706 (2005).

10. X. L. Deán-Ben, V. Ntziachristos, and D. Razansky, “Acceleration of optoacoustic model-based reconstruction using angular image discretization,” IEEE Trans. Med. Imaging 31(5), 1154–1162 (2012).

11. C. G. Graff and E. Y. Sidky, “Compressive sensing in medical imaging,” Appl. Opt. 54(8), C23–C44 (2015).

12. J. Provost and F. Lesage, “The application of compressed sensing for photo-acoustic tomography,” IEEE Trans. Med. Imaging 28(4), 585–594 (2009).

13. Y. Zhang, Y. Wang, and C. Zhang, “Total variation based gradient descent algorithm for sparse-view photoacoustic image reconstruction,” Ultrasonics 52(8), 1046–1055 (2012).

14. Y. Dong, T. Gorner, and S. Kunis, “An algorithm for total variation regularized photoacoustic imaging,” Adv. Comput. Math. 41(2), 423–438 (2015).

15. Y. Han, S. Tzoumas, A. Nunes, V. Ntziachristos, and A. Rosenthal, “Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging,” Med. Phys. 42(9), 5444–5452 (2015).

16. C. Zhang and Y. Wang, “High total variation-based method for sparse-view photoacoustic reconstruction,” Chin. Opt. Lett. 12(11), 111703 (2014).

17. J. Wang, C. Zhang, and Y. Wang, “A photoacoustic imaging reconstruction method based on directional total variation with adaptive directivity,” Biomed. Eng. Online 16(1), 64 (2017).

18. M. Haltmeier, T. Berer, S. Moon, and P. Burgholzer, “Compressed sensing and sparsity in photoacoustic tomography,” J. Opt. 18(11), 114004 (2016).

19. Y. Han, L. Ding, X. L. Deán-Ben, D. Razansky, J. Prakash, and V. Ntziachristos, “Three-dimensional optoacoustic reconstruction using fast sparse representation,” Opt. Lett. 42(5), 979–982 (2017).

20. J. Meng, L. V. Wang, L. Ying, D. Liang, and L. Song, “Compressed-sensing photoacoustic computed tomography in vivo with partially known support,” Opt. Express 20(15), 16510–16523 (2012).

21. J. Meng, Z. Jiang, L. V. Wang, J. Park, C. Kim, M. Sun, Y. Zhang, and L. Song, “High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis,” J. Biomed. Opt. 21(7), 076007 (2016).

22. M. Sandbichler, F. Krahmer, T. Berer, P. Burgholzer, and M. Haltmeier, “A novel compressed sensing scheme for photoacoustic tomography,” SIAM J. Appl. Math. 75(6), 2475–2494 (2015).

23. M. Sun, N. Feng, Y. Shen, X. Shen, L. Ma, J. Li, and Z. Wu, “Photoacoustic imaging method based on arc-direction compressed sensing and multi-angle observation,” Opt. Express 19(16), 14801–14806 (2011).

24. S. Arridge, P. Beard, M. Betcke, B. Cox, N. Huynh, F. Lucka, O. Ogunlade, and E. Zhang, “Accelerated high-resolution photoacoustic tomography via compressed sensing,” Phys. Med. Biol. 61(24), 8908–8940 (2016).

25. C. Lutzweiler and D. Razansky, “Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification,” Sensors (Basel) 13(6), 7345–7384 (2013).

26. X. Fei, Z. Wei, and L. Xiao, “Iterative directional total variation refinement for compressive sensing image reconstruction,” IEEE Signal Process. Lett. 20(11), 1070–1073 (2013).

27. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16(8), 2080–2095 (2007).

28. G. L. Zeng, Medical Image Reconstruction: A Conceptual Tutorial (Higher Education Press, 2010).

29. A. Rosenthal, V. Ntziachristos, and D. Razansky, “Acoustic inversion in optoacoustic tomography: A review,” Curr. Med. Imaging Rev. 9(4), 318–336 (2014).

30. P. Burgholzer, J. Bauer-Marschallinger, H. Gruen, M. Haltmeier, and G. Paltauf, “Temporal back-projection algorithms for photoacoustic tomography with integrating line detectors,” Inverse Probl. 23(6), S65–S80 (2007).

31. G. Paltauf, R. Nuster, M. Haltmeier, and P. Burgholzer, “Photoacoustic tomography using a Mach-Zehnder interferometer as an acoustic line detector,” Appl. Opt. 46(16), 3352–3358 (2007).

32. M. Haltmeier, O. Scherzer, P. Burgholzer, and G. Paltauf, “Thermoacoustic computed tomography with large planar receivers,” Inverse Probl. 20(5), 1663–1673 (2004).

33. L. V. Wang and L. Gao, “Photoacoustic microscopy and computed tomography: from bench to bedside,” Annu. Rev. Biomed. Eng. 16(1), 155–185 (2014).

34. A. Danielyan, A. Foi, V. Katkovnik, and K. Egiazarian, “Spatially adaptive filtering as regularization in inverse imaging: Compressive sensing, super-resolution, and upsampling,” in Super-Resolution Imaging, M. Peyman, ed. (CRC, 2010).

35. S. P. Ghael, A. M. Sayeed, and R. G. Baraniuk, “Improved wavelet denoising via empirical Wiener filtering,” Proc. SPIE 3169, 389–399 (1997).

36. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010).


