
Algebraic determination of back-projection operators for optoacoustic tomography

Open Access

Abstract

The simplicity and computational efficiency of back-projection formulae have made them a popular choice in optoacoustic tomography. Nonetheless, exact back-projection formulae exist for only a small set of tomographic problems. This limitation is overcome by algebraic algorithms, but at the cost of higher numerical complexity. In this paper, we present a generic algebraic framework for calculating back-projection operators in optoacoustic tomography. We demonstrate our approach in a two-dimensional optoacoustic-tomography example and show that once the algebraic back-projection operator has been found, it achieves a comparable run time to that of the conventional back-projection algorithm, but with the superior image quality of algebraic methods.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Tomographic inversion problems, which pertain to the determination of 2D or 3D objects from their projections, are common in many imaging fields [1–8]. Most often, the projection data may be described by the Radon transform, or a modification thereof. While the Radon transform is most commonly known in the context of X-ray computerized tomography (X-ray CT) [1–3] it also appears in single-photon emission computed tomography (SPECT) [4], positron emission tomography (PET) [8], optical projection tomography (OPT) [5], and ultrasound tomography (UT) [6]. In the field of optoacoustic tomography (OAT), often referred to as photoacoustic tomography (PAT) [7], as well as in other forms of thermoacoustic tomography (TAT) [9], a variation of the Radon transform is used in which the projections are taken not over lines, but rather over arcs or spherical shells [10].

In OAT, most inversion algorithms fall under one of the following categories [10]: analytical and optimization-based. In analytical algorithms, it is assumed that the projection data are represented by a continuous function, and the solution is written as a set of analytical operations that are applied on that function. Known examples of that approach are back-projection (BP) algorithms [11–15] and Fourier-based methods [16,17]. While analytical algorithms are often exact, approximate formulae are also very valuable when they are simple to compute [11]. Approximate BP formulae may also be used to improve the reconstruction quality in non-ideal imaging scenarios, as demonstrated in [18–20]. Since all analytical algorithms are eventually discretized and applied on non-ideal data, small errors in the analytical approximations do not necessarily have a practical importance [1]. In optimization-based algorithms, the operator describing the projection is first discretized, and the inversion is performed for the discretized problem using tools from optimization theory. In the literature, such numerical algorithms are known by different names, which emphasize different aspects: algebraic reconstruction technique (ART) [21], model-based algorithms [22–25], and iterative algorithms [26].

The main advantages of analytical algorithms are the relative simplicity in which they may be programmed and the possibility for computationally efficient implementations. Analytical algorithms, and in particular BP algorithms, are especially attractive in the fields of X-ray CT and OAT, where they achieve high-quality reconstructions in various detection geometries. Optimization-based techniques offer several advantages over analytical ones, which include exact compensation for additional effects in the projection formulae [27–30], compatibility with arbitrary detection geometries [10], higher tolerance to discretization errors and additive white noise [26,31], and the ability to use nonlinear tools from compressed-sensing theory such as sparsity in alternative representations [32] and least total variation [26,33]. The two major disadvantages of optimization-based techniques are the complexity of their implementation and their computational burden, especially when non-linear regularizers are used [26,32,33]. In the case of 3D imaging, images with relatively modest resolutions may take several minutes to reconstruct when efficiently implemented on graphical processing units (GPUs) [34]. In the case of volumetric images with high voxel counts, analytical formulae, even when not exact, remain the most practical option for real-time operation [35,36].

In this paper we develop a novel scheme for algebraic calculation of BP operators for OAT, which we term algebraic back-projection (ABP). While in standard algebraic inversion, one may invert the projection operator without making any assumptions on the nature of its inverse, in ABP it is assumed that the inverse operation consists of a single BP operator that is applied on all the projections. As we show in this work, this BP operation may be found by an iterative least-square minimization procedure. Thus, similarly to conventional BP formulae, ABPs are most appropriate when the scanning geometry contains symmetries, namely when the scanning surface is either translation or rotation invariant.

To efficiently implement ABP, it is required that the discretizations of the image and detection surface share the same symmetry, i.e. share the same invariances. Accordingly, in our work, we chose 2D OAT with linear scanning as our geometry for the implementation of ABP, which is translation invariant and thus compatible with image discretization on a Cartesian grid. We demonstrated ABP in numerical simulations and found that it produced inverse operations in which each point in the projection data is back-projected over an arc in the image – the hallmark of optoacoustic BP formulae. In our numerical simulations, the reconstructions of ABP achieved comparable quality to those obtained by the conventional model-based approach, but with the runtime of a BP formula.

In contrast to BP formulae, there is no limitation in ABPs on the mathematical operation that describes the projections. Thus, one may include additional physical effects in the forward model, e.g. the effect of finite detector size in OAT [23], which may not always be accurately included in BP inversion formulae. For example, it has been previously shown that when focused detectors are used [27,28], e.g. in optoacoustic microscopy, approximate BP-based inversion formulae involve significant reconstruction errors. For such geometries, BP operators found via ABP may offer the benefit of accurate reconstruction without the high computational burden of conventional algebraic reconstruction methods.

The paper is organized as follows: In Section 2, a short theoretical introduction to the tomographic problem in optoacoustic imaging is given. In Section 3, the theory of ABP for the case of 2D OAT with linear scanning is developed. In Section 4, the performance of ABP is demonstrated in numerical simulations; in Section 5, ABP is tested on experimental data; and in Section 6, the results are discussed.

2. Image reconstruction in OAT

In OAT, the image, denoted by H(r), represents the energy-deposition map in the tissue and is indicative of the optical absorption coefficient within the imaged specimen. The projection data correspond to a set of 1D acoustic signals obtained at different positions of the ultrasound detector. The relation between the acoustic pressure p(r,t) at a given position r and time instant t and the optoacoustic image H(r) is often described by the following integral relation [10]:

$$p(\mathbf{r},t)=\frac{\Gamma}{4\pi c}\frac{\partial}{\partial t}\int_{|\mathbf{r}-\mathbf{r}'|=ct}\frac{H(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}',\tag{1}$$
where Γ is the Grüneisen coefficient, c is the speed of sound in the medium, and t is time. The projection formula in Eq. (1) represents a 3D integration (or projection) over a sphere with a radius of ct whose origin is at r. In 2D problems, in which it is assumed that H(r) is approximately confined to a plane, the integral in Eq. (1) is calculated over a circle.

The set of acoustic signals p(r,t) obtained over several locations represents the projection data used for reconstructing the image H(r). When H(r) has a finite support, and p(r,t) is known over a closed surface that surrounds H(r), Eq. (1) may be uniquely inverted [10]. When the detection surface is either a sphere, a cylinder, or a plane, exact inversion may be achieved by the following BP formula [12]:

$$H(\mathbf{r})=\frac{2}{\Gamma}\int_{|\mathbf{r}-\mathbf{r}'|=ct}\left[p(\mathbf{r}',t)-t\frac{\partial p(\mathbf{r}',t)}{\partial t}\right]\frac{d\Omega_{\mathbf{r}}(\mathbf{r}')}{\Omega_0},\tag{2}$$
where dΩr(r') is the solid-angle element corresponding to the detector surface element dS' when viewed from r, and is given by the following equation [11]:
$$d\Omega_{\mathbf{r}}(\mathbf{r}')=\frac{dS'}{|\mathbf{r}'-\mathbf{r}|^{2}}\left[\mathbf{n}(\mathbf{r}')\cdot\frac{\mathbf{r}'-\mathbf{r}}{|\mathbf{r}'-\mathbf{r}|}\right],\tag{3}$$
where n(r') denotes the outward-pointing normal of the integration surface at r'. In Eq. (2), Ω0 is the solid angle covered by the detection surface and is equal to 4π for spherical and cylindrical surfaces, and to 2π for planar surfaces. When the detection surface is in the far field of the imaged object, Eq. (2) represents an approximate solution to the inverse problem for any closed surface [11]. In that case, the first term in Eq. (2) may be omitted since p(r',t) ≪ t ∂p(r',t)/∂t.
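To make the discrete use of Eqs. (2) and (3) concrete, the following is a minimal sketch of a 2D back-projection step for the linear-scan geometry used later in this paper. It is an illustrative implementation only: the function and argument names, the nearest-sample time-of-flight lookup, and the final normalization (Eq. (2) is exact only in 3D) are assumptions rather than the code used for the results below.

```python
import numpy as np

def universal_bp_2d(p, det_x, det_y, x, y, c, dt):
    """p: (Np, Nt) signals; det_x: (Np,) detector x positions on the line y = det_y;
    x, y: pixel-center coordinates; returns a (Ny, Nx) image."""
    Np_, Nt = p.shape
    t = np.arange(Nt) * dt
    # b(r', t) = p(r', t) - t * dp/dt, the quantity back-projected in Eq. (2)
    b = p - t[None, :] * np.gradient(p, dt, axis=1)
    X, Y = np.meshgrid(x, y)                           # (Ny, Nx) pixel grids
    img = np.zeros_like(X, dtype=float)
    for i in range(Np_):
        R = np.sqrt((X - det_x[i]) ** 2 + (Y - det_y) ** 2)          # distances to detector i
        n = np.clip(np.round(R / (c * dt)).astype(int), 0, Nt - 1)   # time-of-flight sample
        # Solid-angle weight of Eq. (3) for a point detector on the scan line,
        # with the outward normal pointing toward the image plane.
        w = np.abs(Y - det_y) / np.maximum(R, 1e-12) ** 3
        img += w * b[i, n]
    return img / np.abs(img).max()   # Eq. (2) lacks the 2D scaling factor; normalize
```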

While Eq. (2), or variations thereof, is the most commonly used BP formula in optoacoustic tomography, it is important to note that numerous alternative formulae exist. For the case of infinite line detectors, Burgholzer et al. developed an exact 2D formula that was successfully tested on experimental data [13]. 2D formulae have also been developed for the case in which the optoacoustic image is restricted to a 2D surface [14,15]. However, these algorithms have not been tested on experimental data.

In algebraic inversion algorithms, both the optoacoustic image and the projection data are discretized and represented by vectors, which we denote by h and p, respectively. The discretization of Eq. (1) leads to the following matrix relation [11]:

$$\mathbf{p}=\mathbf{M}\mathbf{h},\tag{4}$$
where M is the model matrix that represents the integral in Eq. (1). The calculation of M may be performed via the interpolation of H(r) at positions that are not on the discretization grid or by assuming that H(r) is composed of atoms for which an analytical solution to Eq. (1) is known.
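For illustration, a minimal sketch of one simple way such a sparse matrix might be assembled for the 2D linear-scan geometry of Section 3 is given below. It is an assumption-laden toy construction: it uses a crude nearest-arc quadrature with 1/|r − r'| weights, omits the temporal derivative of Eq. (1) (which may instead be applied to the data), and assumes the image vector is ordered column-by-column in x.

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_model_matrix(det_x, det_y, x, y, t, c, dx):
    """Sparse M of size (Np*Nt) x (Nx*Ny); the image vector is assumed to be
    ordered column-by-column in x (Ny contiguous entries per image column)."""
    Nx, Ny, Np, Nt = len(x), len(y), len(det_x), len(t)
    X, Y = np.meshgrid(x, y, indexing='ij')            # (Nx, Ny) pixel centers
    M = lil_matrix((Np * Nt, Nx * Ny))
    for i in range(Np):
        R = np.sqrt((X - det_x[i]) ** 2 + (Y - det_y) ** 2).ravel()
        for j in range(Nt):
            idx = np.flatnonzero(np.abs(R - c * t[j]) < dx / 2)   # pixels crossed by the arc
            M[i * Nt + j, idx] = 1.0 / np.maximum(R[idx], dx)     # 1/|r - r'| weighting
    return M.tocsr()
```

The detector-major ordering of the rows (index i*Nt + j) matches the stacking of the projection vectors p1...pNp used in Section 3.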

In principle, the inversion of Eq. (4) may be performed for any detection surface by solving the following least-squares minimization problem [25]:

$$\mathbf{h}_{\mathrm{rec}}=\arg\min_{\mathbf{h}}\|\mathbf{p}-\mathbf{M}\mathbf{h}\|_{2},\tag{5}$$
where ‖·‖2 is the l2 norm. For a given signal p, the solution of Eq. (5) may be found by using the pseudo-inverse of M [37–39] or by using iterative methods such as LSQR [40,41]. The advantage of the pseudo-inverse approach is that it provides a closed-form solution to Eq. (5), given by
$$\mathbf{h}_{\mathrm{rec}}=\mathbf{M}^{\dagger}\mathbf{p},\tag{6}$$
where M† is the pseudo-inverse matrix, given by M† = (MᵀM)⁻¹Mᵀ. Once M† has been pre-calculated, the reconstruction may be efficiently performed by calculating the matrix-vector product in Eq. (6). However, the computational burden of the pre-calculation of M† has limited its use. In practice, Eq. (5) is solved by iterative methods that do not require pre-calculations and involve acceptable reconstruction times [41–43], albeit considerably slower than those achieved by BP methods. Both the pseudo-inverse and iterative solutions to Eq. (5) may be extended to include the Tikhonov regularization that is required when the projection data are insufficient for an accurate reconstruction of the image on the prescribed grid, leading to an ill-conditioned matrix M. In the case of LSQR, regularization may also be performed by stopping the algorithm after a few iterations, before convergence is reached [40,42]; this is the method of regularization chosen in this work. A discussion of the numerical complexity of the different inversion methods is given in Sec. 3.2 and in the Conclusion.
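As a brief illustration of the iterative route with early stopping, a minimal sketch using SciPy's LSQR is given below; the function and argument names, as well as the x-major ordering of the image vector, are assumptions rather than the implementation used in this work.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def reconstruct_lsqr(M, p, n_iter, Nx, Ny):
    # Early stopping (iter_lim) acts as the regularizer, as discussed above;
    # a Tikhonov-like term could instead be introduced through the `damp` argument.
    h = lsqr(M, p, iter_lim=n_iter)[0]
    return h.reshape(Nx, Ny)   # back to image form (ordering is an assumption)
```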

3. Algebraic calculation of back-projection operators

3.1 Theory

In this section we develop the theory for ABP reconstruction in OAT for a rectangular grid with a linear scan of the acoustic detector. The parameters of the tomographic problem are depicted in Fig. 1(a). Briefly, our optoacoustic image is discretized over a linear 2D grid with Nx and Ny pixels in the x and y directions, respectively, where the respective pixel sizes are Δx and Δy. We assume that the projections are obtained over a linear scan in the x direction with a step size of Δx and Np scanning positions, where Np > Nx. In case the projection data are measured with a step size different from Δx, they may be interpolated to a new grid with a step size of Δx. For each scanning position, the acoustic signal is discretized with Nt time points, where Nt > Ny. We further assume that the projections are acquired symmetrically with respect to the center of the image. For each position of the detector, the acoustic signal is discretized by the vector pi, where the entire projection data are given by the following equation:

Fig. 1 (a) The grid of the image and detector locations used for calculating the model matrix M. The image is divided into Nx × Ny square pixels with a pixel area of ΔxΔy, and the acoustic signals are sampled at Np positions over a line with a distance of Δx between them. (b) The image grid on which the back-projection operator K is calculated. Here, only a single back-projection is calculated, and the number of pixels in the x direction is increased to NK = Nx + Np − 1.

$$\mathbf{p}=\begin{pmatrix}\mathbf{p}_{1}\\ \mathbf{p}_{2}\\ \vdots\\ \mathbf{p}_{N_p-1}\\ \mathbf{p}_{N_p}\end{pmatrix}.\tag{7}$$

Similarly to analytical BP formulae, in ABP the same back-projection operation is applied on each data vector pi. However, in contrast to analytical formulae, the back-projection operation in ABP is represented by a kernel matrix K that back-projects each pi over an image with Ny pixels on the y axis and NK = Nx + Np − 1 pixels on the x axis, as illustrated in Fig. 1(b). As shown in Fig. 2, the requirement NK = Nx + Np − 1 ensures that all the back-projected images overlap with the grid of the image we wish to reconstruct.

Fig. 2 An illustration of how the back-projection operator is applied on data from different detectors. The back-projected images are shown by the arcs, and their span in the x direction is NK pixels. The back-projection operation is the same for all the detector locations, but the position of the projected image is shifted so it is always centered on the position of the detector. All the back-projected images overlap with the region of interest in which the image is to be reconstructed (shown in the gray square), where the final reconstruction is a sum of all the back-projected images.

The kernel matrix K may be represented as follows:

$$\mathbf{K}=\begin{pmatrix}\mathbf{K}_{1}\\ \mathbf{K}_{2}\\ \vdots\\ \mathbf{K}_{N_K-1}\\ \mathbf{K}_{N_K}\end{pmatrix},\tag{8}$$
where Kn (n = 1…NK) is the back-projection operation that creates the nth column in the back-projected image. Accordingly, Knpi represents the nth column of the image back-projected from the ith projection. As illustrated in Fig. 2, the reconstructed image is a sum of the translated back-projected images. Accordingly, the reconstructed image hrec may be described by the following matrix relation:
$$\mathbf{h}_{\mathrm{rec}}=\hat{\mathbf{K}}\mathbf{p},\tag{9}$$
where the modified kernel matrix K̂ is given by
$$\hat{\mathbf{K}}=\begin{pmatrix}\mathbf{K}_{N_p} & \mathbf{K}_{N_p-1} & \cdots & \mathbf{K}_{2} & \mathbf{K}_{1}\\ \mathbf{K}_{N_p+1} & \mathbf{K}_{N_p} & \cdots & \mathbf{K}_{3} & \mathbf{K}_{2}\\ \vdots & \vdots & & \vdots & \vdots\\ \mathbf{K}_{N_p+N_x-1} & \mathbf{K}_{N_p+N_x-2} & \cdots & \mathbf{K}_{N_x+1} & \mathbf{K}_{N_x}\end{pmatrix}.\tag{10}$$
Multiplying Eq. (9) by M and substituting Eq. (4), we obtain
$$\mathbf{p}_{\mathrm{rec}}=\mathbf{M}\hat{\mathbf{K}}\mathbf{p}.\tag{11}$$
An exact reconstruction would fulfill prec = p for any vector p. Using Eq. (11), a kernel matrix that achieves an exact reconstruction should fulfill
$$\mathbf{M}\hat{\mathbf{K}}=\mathbf{I}_{N_pN_t},\tag{12}$$
where I_{NpNt} is an identity matrix of size NpNt × NpNt. To invert Eq. (12), the matrix K̂ needs to be represented in vector form without recurring entries. The details of the transformation of Eq. (12) into a standard algebraic equation composed of a matrix-vector product are given in the Appendix.

3.2 Algorithmic implementation

The implementation of ABP, based on the derivations in the Appendix, may be summarized by the following steps (a short code sketch of the pre-calculation stage is given after the list):

  • 1) Build the model matrix M for the geometry shown in Fig. 1(a).
  • 2) Construct the matrix Mtemp = [0_{NtNp×Ny(NK−Nx)}, M].
  • 3) Set Ms = Mtemp.
  • 4) Execute Np − 1 times the following operations:
    • 4.1) Circularly shift all the rows of Mtemp by Ny places to the left and save the result in Mtemp.
    • 4.2) Set Ms = [Ms; Mtemp] (vertical concatenation).
  • 5) Build the matrix Itemp = [0_{NxNy×Ny(NK−Nx)}, I_{NxNy}].
  • 6) Set Iv = Itemp.
  • 7) Execute Np − 1 times the following operations:
    • 7.1) Circularly shift all the rows of Itemp by Ny places to the left and save the result in Itemp.
    • 7.2) Set Iv = [Iv; Itemp] (vertical concatenation).
  • 8) Set n = 1.
  • 9) Repeat Nt times:
    • 9.1) Set Ic,n as the nth column vector of the matrix Iv.
    • 9.2) Find the vector Kc,n that minimizes ‖MsKc,n − Ic,n‖2.
    • 9.3) Set n = n + 1.
  • 10) Construct the kernel matrix K by concatenating all the column vectors found in step 9, as shown in Eq. (A7a).
  • 11) Build the matrix pmat according to Eq. (A10) and calculate hmat = Kpmat.
  • 12) Construct the solution by using Eq. (A11).
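A compact sketch of the pre-calculation stage (steps 1–10) is given below. For clarity it builds Ms and the right-hand sides directly from Eqs. (A6)–(A9) rather than by the row-shifting bookkeeping of steps 2–7; the variable names and the explicit construction of each Ic,n are illustrative assumptions, not the code used for the results in Section 4.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def abp_kernel(M, Nx, Ny, Np, Nt, n_iter=120):
    """Compute the kernel K (size Ny*NK x Nt) from the sparse model matrix M
    (size Np*Nt x Nx*Ny) by solving Eq. (A9) column by column."""
    NK = Nx + Np - 1
    # Eq. (A6): Np copies of M, each shifted left by k*Ny columns within a
    # zero matrix of width Ny*NK, stacked vertically (steps 2-4 above).
    blocks = []
    for k in range(Np):
        left = sp.csr_matrix((Np * Nt, Ny * (Np - 1 - k)))
        right = sp.csr_matrix((Np * Nt, Ny * k))
        blocks.append(sp.hstack([left, M, right]))
    Ms = sp.vstack(blocks).tocsr()                 # size (Np**2 * Nt) x (Ny * NK)
    # Eqs. (A7b)-(A9): the n-th right-hand side has a single 1 inside each of
    # the Np identity blocks of Iv (steps 5-9 above).
    ones_at = np.arange(Np) * (Np + 1) * Nt        # row offsets of those identity blocks
    K = np.zeros((Ny * NK, Nt))
    for n in range(Nt):
        Icn = np.zeros(Np * Np * Nt)
        Icn[ones_at + n] = 1.0
        K[:, n] = lsqr(Ms, Icn, iter_lim=n_iter)[0]   # one column of K per time sample
    return K
```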

Steps 1-10 in the algorithm are considered pre-calculation and need to be performed only once for a given imaging geometry. In the pre-calculation, steps 1-7 involve merely a rearrangement of matrix elements, and their execution time is inconsequential when performed with sufficient random access memory (RAM) to store Ms. The computational burden of ABP lies mostly in step 9, which involves solving an optimization problem for the matrix Ms. The reason for this high computational burden is that the matrix Ms has Np times more entries than the model matrix M. When Eq. (A9) is solved iteratively, each iteration of LSQR involves a complexity proportional to calculating MsKc,n Nt times, i.e. O(Np^2 Nt^2 NyNK). In contrast, step 11, which is performed individually for each dataset, involves a single matrix-matrix multiplication with a complexity of O(NtNyNKNp), i.e. NtNp times less than a single iteration of step 9. Finally, step 12 involves summing Np back-projected images and thus has a complexity of O(NpNyNx). Since in most practical applications Nx, Ny, Np, and Nt are of comparable magnitude, which we denote by N, the complexity of reconstructing an image with ABP is O(N^4).

While the pre-calculation burden of ABP is high, it is still lower than that of the pseudo-inverse approach (Eq. (6)). The matrix multiplication and inversion steps in Eq. (6) lead to a complexity of O[(NpNt)^2 NxNy + NpNt(NxNy)^2 + (NxNy)^3]. In addition, in the case studied in this work, regularization is required since the projections are obtained with a limited view. As a result, the calculation of the pseudo-inverse matrix requires adding a Tikhonov regularization term to Eq. (5) [38] and performing the inversion for several values of the regularization parameter, leading to a total computational burden of O[(NpNt)^2 NxNyNλ + NpNt(NxNy)^2 Nλ + (NxNy)^3 Nλ], where Nλ is the number of regularization parameters that are tested.

4. Numerical results

Our numerical simulations were performed for the 2D geometry illustrated in Fig. 1 with the parameters Nx = Ny = 81 and Δx = Δy = 0.025 cm, which correspond to an image size of 2 × 2 cm – a typical image size for optoacoustic tomography [43]. The detector was scanned over a line at a distance of 1.5 cm from the center of the image grid and acquired Np = 325 projections. We note that taking Np to be considerably larger than Nx is essential for achieving a good tomographic coverage of the object and reducing limited-view artifacts [42]. The speed of sound used in the simulations was c = 1500 m/s. The time vector over which the projections were calculated had Nt = 647 points with a time step of Δx/(3c). To avoid using the same model for the forward and inverse problems, the model matrix used for reconstruction assumed acoustic signals with a time step of Δx/(2c), corresponding to Nt = 431 and a matrix size of 140075 × 6561, which occupied 156 MB when saved as a double sparse matrix in Matlab. Before reconstruction, the projections were interpolated from a Δx/(3c) to a Δx/(2c) time step. The simulations were performed without parallelization on a Lenovo ThinkStation P900 workstation with an Intel Xeon E5-2650 CPU operating at 2.3 GHz, 256 GB of RAM, and one terabyte of hard drive space.

To apply ABP, the matrices Ms and Iv (Eqs. (A6) and (A3)) were first calculated. The matrices had sizes of 45524375 × 32805 and 45524375 × 431, and occupied 51 GB and 2.2 GB of RAM, respectively. Once the matrices had been calculated, the LSQR algorithm was used to solve the optimization problem in Eq. (A9). The first iteration of LSQR took approximately 17 minutes for each time instant, and approximately 5 days for the entire matrix K. Subsequent iterations took only 45 seconds per iteration for a given time instant, or approximately 5 hours for the entire matrix K. While in previous studies on conventional algebraic inversion in OAT it was found that 50 iterations were sufficient for the LSQR algorithm to converge [21,35], in the current study 120 iterations were performed to enable us to properly present the convergence rate of the algorithm.
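The matrix sizes quoted above follow directly from the simulation parameters; the short worked-arithmetic check below (illustrative only) reproduces them.

```python
# Consistency check of the matrix sizes quoted in this section.
Nx = Ny = 81; Np = 325; Nt = 431
NK = Nx + Np - 1                               # 405
print(Np * Nt, Nx * Ny)                        # 140075 6561    -> size of M
print(Np ** 2 * Nt, Ny * NK)                   # 45524375 32805 -> size of Ms
print(Np ** 2 * Nt, Nt)                        # 45524375 431   -> size of Iv
```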

Figures 3(a)-3(c) show the back-projection operators obtained from K for different times at the 120th iteration. As the figures show, all the back-projections were on arcs and had a bipolar nature – the hallmarks of an optoacoustic back-projection operation (Eq. (2)). In Fig. 3(d), we show a 1D slice of the back-projection image of Fig. 3(b) taken over the y axis at x = 0 (solid curve). For comparison, Fig. 3(d) also shows the result obtained for the same 1D slice when only a single iteration was used in the LSQR solution of Eq. (A9) for K (dashed curve). As the figure clearly shows, the slice from the 1st iteration is non-zero only at two discrete time instants, at which the back-projected signal has opposite signs. This type of back-projected signal corresponds to a derivative operation, which is also the dominant operation in the 3D back-projection formula (Eq. (2)). As Fig. 3(d) shows, when more iterations were performed, a more elaborate back-projection structure emerged. We note that for all the time instants for which K was calculated, the slices of the back-projected arcs had approximately the same temporal profile shown in Fig. 3(d).

Fig. 3 (a-c) Images from the back-projection operator K for different time instants. The results were obtained with 120 iterations of the LSQR algorithm applied on Eq. (A9). (d) A 1D slice (solid curve) taken from the back-projection operator K shown in Fig. 3(b) (t = 11 µs) at x = 0. The slice, obtained with 120 iterations, is compared to the one obtained with a single iteration (dashed curve).

Figures 4(a)-4(c) show the three test images used to evaluate the performance of the ABP algorithm. The first image (Fig. 4(a)), composed of identical point-like circles, was chosen to test the reconstruction isotropy, whereas the second image (Fig. 4(b)), which contained circles of various sizes, was chosen to check how different scales in the image affected the reconstructions. In the third image (Fig. 4(c)), the reconstruction was tested for lines, which imitate the structure of blood vessels. Three methods were used to reconstruct the images: the BP formula of Eq. (2), algebraic inversion via the application of LSQR on Eq. (5), and the proposed ABP.

Fig. 4 (a-c) The input images from which the projection data originated and the corresponding reconstructions obtained via (d-f) the back-projection (BP) formula in Eq. (2); (g-l) the algebraic approach in which Eq. (5) was solved by using the LSQR algorithm with (g-i) a single iteration and (j-l) 120 iterations; and (m-r) the proposed algebraic back-projection (ABP) algorithm in which Eq. (A9) was solved by using the LSQR algorithm with (m-o) a single iteration and (p-r) 120 iterations.

The BP reconstructions are shown in Figs. 4(d)-4(f). Since Eq. (2) does not include the correct scaling factor for 2D images, its reconstructions were normalized. In Fig. 4(d), a good reconstruction is obtained, in which the major error is due to limited-view artifacts and smearing in the lateral direction. As expected, the artifacts become more significant the farther the objects are from the detector. In Fig. 4(e), an additional high-pass effect is observed. Indeed, it has been previously documented that when the BP formula in Eq. (2), which is exact in 3D, is applied to a 2D problem, the resulting reconstruction is high-passed compared to the originating image [25,28]. The reconstructions in Figs. 4(d)-4(f) suffered from negative values and an excess of background noise, similarly to previous studies [22]. The reconstruction time for each image was approximately 0.17 seconds, which is similar to the run times reported in Ref. [44] for a similar image size.

Figures 4(g)-4(i) show the algebraic reconstructions via Eq. (5) when the LSQR algorithm was performed with a single iteration, whereas Figs. 4(j)-4(l) show the results of the 120th iteration. The reconstruction took approximately 0.65 seconds for a single iteration and approximately 43.5 seconds for 120 iterations. When only a single iteration of the LSQR was performed, the reconstructions exhibited the same smearing effect and accentuation of small objects as those observed in the BP reconstructions (Figs. 4(d)-4(f)). In addition, the images also exhibited a depth-dependent attenuation. However, when 120 iterations were performed, the reconstructions were almost identical to the originating images, with only minor limited-view artifacts.

Figures 4(m)-4(o) show the reconstructed images obtained by ABP with a single iteration, and Figs. 4(p)-4(r) show the results of the 120th iteration. Once the matrix K had been pre-calculated, the ABP reconstructions required only 0.17 seconds, regardless of how many iterations were used to pre-calculate K. In particular, steps 11 and 12 of the ABP algorithm (Section 3.2) required 0.12 and 0.05 seconds, respectively.

The first iteration of the ABP reconstructions suffered from the same distortions observed in the first iteration of the algebraic reconstructions (Figs. 4(g)-4(i)). Similarly to the case of the algebraic reconstructions, in the 120th iteration of the ABP reconstructions the distortions were significantly reduced, and only weak limited-view artifacts remained (Figs. 4(p)-4(r)), albeit more visible than those obtained in the algebraic reconstructions.

In the last numerical example, we compared the performance of the reconstruction algorithms in the presence of noise for the images in Fig. 4(a)-4(c). To simulate a noisy measurement, we added to each point in the projection data a random variable with a Gaussian distribution, zero mean, and standard-deviation of 20% of the maximum value in the projection data, where all the random variables were uncorrelated.
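A minimal sketch of this noise model is given below; the function name and the fixed seed (added only for reproducibility) are assumptions.

```python
import numpy as np

def add_noise(p, level=0.2, seed=0):
    """Add uncorrelated zero-mean Gaussian noise with a standard deviation of
    `level` times the maximum value of the projection data."""
    rng = np.random.default_rng(seed)
    return p + level * p.max() * rng.standard_normal(p.shape)
```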

We used two measures to assess the quality of the reconstructions. The first measure was the root-mean-square error (RMSE) of the images when compared to their respective originating images in Fig. 4(a)-4(c). For the algebraic inversion, the reconstructions with the lowest RMSE were respectively obtained at the 2nd, 8th, and 3rd iterations for the images of Figs. 4(a)-4(c). Figure 5 shows the reconstructions obtained using the BP formula of Eq. (2) (Figs. 5(a)-5(c)), the algebraic reconstruction of Eq. (5) performed with 120 iterations (Figs. 5(d)-5(f)) and with the number of iterations that produced the lowest RMSE (Figs. 5(g)-5(i)), and ABP with 120 iterations (Figs. 5(j)-5(l)). A 1D profile of the reconstructions of Figs. 5(a), 5(d), 5(g) and 5(j) over the horizontal line y = 0 is shown in Fig. 6.

Fig. 5 The reconstruction of the images of Figs. 4(a)-4(c) from noisy data. The reconstructions were performed using (a-c) the back-projection (BP) formula in Eq. (2), (d-f) the algebraic solution to Eq. (5) with 120 iterations and (g-i) with the number of iterations that minimized the RMSE, and (j-l) algebraic back-projection (ABP) with 120 iterations in the solution of Eq. (A9).

Fig. 6 The 1D profiles of the reconstructions shown in Figs. 5(a), 5(d), 5(g) and 5(j) over the horizontal line y = 0.

The RMSE values for the noiseless and noisy cases are shown in Figs. 7(a)-7(c) as a function of iteration for standard algebraic inversion (Eq. (5)) and ABP. The figure shows that while the algebraic reconstructions achieved a lower RMSE in the noiseless case, they obtained a higher RMSE than ABP in the noisy case. As Figs. 7(a)-7(c) show, the RMSE of ABP in all three examples converges to a value that is larger than zero. The result is expected because of the restriction put on ABP that all the signals are back-projected with the same kernel. In contrast, in conventional algebraic inversion, the noiseless case would yield an RMSE that practically decays to zero, up to rounding errors, assuming the same model matrix is used in the forward and inverse problems. Since our case involved a limited-view reconstruction, the RMSE decay of the algebraic inversion was slow, as expected in an ill-conditioned problem [40,42]. Interestingly, for both ABP and the algebraic reconstruction, the slowest decay was obtained for the image in Fig. 4(b). This result may be explained by the different scales exhibited in the image of Fig. 4(b) in contrast to the images of Figs. 4(a) and 4(c) that exhibit structures of the same scale. In the noisy case, the lowest RMSE of the algebraic reconstruction was obtained by terminating the LSQR before the 120th iteration. We note that in [42], terminating the LSQR algorithm before convergence was used as a way to regularize the algebraic inversion. For ABP, the susceptibility of the later iterations to noise was found to be less significant than in the algebraic reconstructions.

Fig. 7 (a-c) The root-mean-square error (RMSE) of the algebraic and ABP reconstructions as a function of iteration for the images in Figs. 4(a)-4(c), respectively. While the algebraic reconstructions exhibit a lower RMSE, their RMSE is more susceptible to noise. The RMSE was calculated from the difference between the reconstructions and the originating images. (d-f) The signal-to-noise ratio (SNR) of the algebraic and ABP reconstructions as a function of iteration for the images in Figs. 4(a)-4(c), respectively. The results are compared to the ones achieved by the back-projection (BP) formula. The SNR was calculated for each image by dividing its maximum value by the standard deviation in a 10 × 10 pixel area at its top left. For all images, ABP obtained the highest SNR, while the BP formula obtained the lowest.

The second measure of quality was the signal-to-noise ratio (SNR) in the noisy reconstructions, calculated by dividing the maximum value of each image by the standard deviation in a 10 × 10 pixel area at the top left of the image, a region in which the originating images are identically zero. Figures 7(d)-7(f) respectively show the SNR of the reconstructions achieved by BP, algebraic inversion, and ABP for the test images of Figs. 4(a)-4(c). For all the test images, ABP achieved the highest SNR, while the BP formula achieved the lowest. This quantitative result can also be observed qualitatively when comparing the images in Fig. 5. We note that in Fig. 7(e), the SNR of ABP improved significantly in the first few iterations, in contrast to the other examples, because of the low signal level obtained in the first iteration (Fig. 4(n)). Thus, while the noise in all the reconstructions increased in the first iterations, only in Fig. 7(e) was the increase in the signal sufficient to improve the SNR.
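For completeness, a minimal sketch of the two quality measures used above is given below; the assumption that the object-free corner corresponds to the first rows and columns of the array is illustrative.

```python
import numpy as np

def rmse(recon, truth):
    """Root-mean-square error against the originating image."""
    return np.sqrt(np.mean((recon - truth) ** 2))

def snr(recon, corner=10):
    """Peak value divided by the standard deviation of a 10 x 10 pixel
    object-free region (assumed here to be the top-left corner)."""
    return recon.max() / recon[:corner, :corner].std()
```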

5. Experimental results

To further assess the performance of ABP, it was tested on experimental optoacoustic data. The optoacoustic excitation was performed with a pulsed OPO laser with a repetition rate of 100 Hz and a pulse energy of approximately 30 mJ, tuned to λ = 680 nm (SpitLight DPSS 100 OPO, InnoLas Laser GmbH, Krailling, Germany). The imaged object was a dark polyethylene microsphere with a diameter of 0.3 mm (Cospheric LLC, Santa Barbara, California), which was embedded in transparent agar. The detection was performed with a needle hydrophone with a diameter of 1 mm and an acoustic bandwidth of 12 MHz (Precision Acoustics, Dorchester, UK), which was scanned at a distance of approximately 6 mm from the microsphere over a span of 2 cm with a step size of 0.25 mm. The voltage signal was digitized by an oscilloscope with a bandwidth of 70 MHz (Rigol Technologies, Beaverton, OR, USA) and averaged 16 times per position.

Similarly to the previous section, the image was reconstructed over a grid of 1 × 1 cm, at a distance of 0.5 cm from the detector. Figures 8(a)-8(c) show the reconstruction results obtained using BP, algebraic inversion, and ABP, respectively. Since the image included only a single microsphere, only a 5 × 5 mm square of the image is presented. The algebraic reconstruction was performed with 5 iterations, where further iterations did not change the magnitude of the microsphere in the image. Figure 8(d) shows a 1D plot of the microsphere reconstruction, obtained on the horizontal line y = -3.72 mm in Figs. 8(a)-8(c). While all three methods obtained a similar reconstruction of the microsphere, the reconstruction of BP suffered from a significantly higher noise level than the algebraic reconstruction and ABP.

Fig. 8 (a-c) Reconstruction of a microsphere with a diameter of 0.3 mm from experimental data using (a) BP, (b) algebraic inversion with 5 iterations, and (c) the ABP operator calculated with 120 iterations. For the algebraic inversion, no further improvement in the peak value or width of the microsphere reconstruction was obtained with more iterations. (d) A 1D plot of the reconstruction through the horizontal line y = 3.72 mm, showing the 1D profile of the microsphere reconstructions.

6. Conclusion

In conclusion, we developed a method for the algebraic determination of back-projection operators. The method, which we term algebraic back-projection (ABP), finds the optimal back-projection operator in the least-squares sense under the restriction that the same operator is applied on each of the projections. Once the back-projection operator, represented by the kernel matrix K, is found, the reconstruction is achieved by a matrix-multiplication operation followed by a summation operation (Eqs. (A10) and (A11)). Since K is pre-calculated for a given geometry and does not depend on the projection data, ABP may be implemented with a numerical efficiency comparable to the one achieved by analytical BP formulae.

While the purpose of this paper is to introduce the general framework of ABP, which represents a new type of solution to tomographic problems, the specific implementation was for the problem of 2D OAT with a linear scanning of the acoustic detector. This specific geometry simplified the implementation of ABP in two ways. First, in this geometry both the projections and the image were discretized in Cartesian coordinates, simplifying the matrix operations required to describe the translation of the acoustic detector or of the image. Second, the use of a 2D example enabled the implementation of ABP without parallel programming. In our numerical simulations, the kernel matrix was calculated iteratively using the LSQR algorithm. When LSQR was applied with a single iteration, the result was a back-projection of the derivative of the acoustic signals onto arcs, which is the well-known far-field approximation to the 3D BP formula (Eq. (2)) [11]. Indeed, when we used the single-iteration ABP in our 2D geometry, we obtained high-passed reconstructions similar to those obtained when the analytical BP formula (Eq. (2)) was applied. When higher iterations were used to calculate K, finer details were added to the back-projection operation, though it still maintained its general structure of arcs with derivative operations.

In terms of run time, in our numerical simulations ABP was comparable to BP and considerably faster than iterative algebraic inversion. In iterative algebraic inversion, each iteration of the LSQR algorithm requires multiplying the model matrix M by a vector [40], leading to a complexity of O(N^4) per iteration. In contrast, in ABP the total complexity per reconstructed image was O(N^4). While the complexity of the BP algorithms is only O(N^3) [40], they involve more complex calculations per pixel than ABP, in which the complexity relates to an efficiently implementable matrix-matrix multiplication, and as a result the run times were comparable in our numerical example. We note that future implementations of ABP may achieve a complexity of O(N^3) by utilizing the sparsity of K, visible in Fig. 3, and performing the multiplications only for the significant elements of K.

The pre-calculation of ABP involves a complexity of O(N^5) per iteration, whereas the pseudo-inverse approach has a complexity of O(N^6). While efficient methods to calculate the pseudo-inverse have been developed using frequency-domain [45] and combined-space [46] representations, these methods did not reveal the underlying time-domain back-projection kernel shown in Fig. 3. In addition, in contrast to the current work, the methods in Refs. [45,46] have not been tested in limited-view scenarios in which the angular coverage is smaller than 180°.

In terms of reconstruction accuracy, the results obtained by ABP in our simulations were visually comparable to those obtained by standard algebraic inversion. While in the noiseless case the standard algebraic inversion obtained a lower reconstruction error for all the images tested, it was more sensitive to noise than ABP. Thus, we conclude that the restriction in ABP to a single back-projection operator acts as a regularizer, which suppresses noise at the price of a moderate loss of image fidelity. In comparison to ABP and standard algebraic inversion, the analytical BP formula obtained the lowest image fidelity and was the most susceptible to noise. We note that since the geometry studied involved a limited tomographic view, the reconstruction errors were also manifested as negative image artifacts around the imaged objects and a loss of tangential resolution [10].

Since ABP can achieve run times comparable to conventional BP formulae, while potentially offering better noise suppression, it may prove useful even in cases in which exact BP formulae exist. In addition, the generic framework of ABP may enable future implementations of this approach whose benefits go well beyond noise suppression. In principle, ABP may be applied to any linear tomographic problem, regardless of what operation is used to calculate the projection data. The performance of ABP will be determined by whether the discretization of the image and projection data have the same invariants. When they do, symmetry will require that the optimal back-projection operation be the same for all projections, making ABP equivalent to standard algebraic inversion. An example of such a scenario is cross-sectional imaging with ring detectors in OAT [7] with a polar image grid [45]. In 3D OAT, spherical or cylindrical grids may be used to develop ABP for spherical or cylindrical scans, respectively. A cylindrical grid may also be used for developing ABP for the recently introduced spiral scan [36]. When the projection data do not possess the symmetry of the image grid, ABP will find the best compromise, in the least-squares sense, for the back-projection operator.

One possible direction in which ABP may be generalized is the inclusion of complex projection operations, such as those found in practical OAT systems. For example, when the dimensions of the acoustic detector are larger than the acoustic wavelength, a spatial-averaging operation is added to Eq. (1) [10]. This operation leads to projection formulae that exhibit a complex dependency on the detector geometry for which no exact BP formulae are known; a high-fidelity reconstruction in such cases necessitates the use of algebraic methods [27,28]. Two more effects that may be included in the OAT model are non-uniform illumination [29] and signal attenuation [27]. Since all the above-mentioned effects are identical for all the projections, their inclusion is not expected to affect the performance of ABP with respect to the performance of standard algebraic inversion.

One of the major challenges of extending ABP to other imaging scenarios is computational. In our work, for a 2D image with a modest pixel count of 81 × 81, the corresponding matrix Ms occupied 51 GB of RAM, requiring a workstation to implement the pre-calculations of the algorithm. In addition, 120 iterations performed to calculate K required approximately six weeks when executed on a single processor. Clearly, directly extending the algorithm to higher resolutions or 3D may lead to an impractical computational burden. However, this computational burden may be decreased by minimizing the redundancy in the over-determined inversion problem of Eq. (A9). For example, in the simulations discussed in this work, the number of equations in Eq. (A8) was over 3 orders of magnitude larger than the number of unknowns. Thus, it may be possible to drastically reduce the number of linear equations without sacrificing the accuracy of the reconstruction, facilitating the extension of ABP to images with higher pixel/voxel count.

A different approach for accelerating the pre-calculation in ABP is to use parallel computing. Since the complexity of the pre-calculation stems from performing the Nt independent calculations in step 9 of Section 3.2, an improvement on the scale of Nt may be achieved by using a similar number of processors. Since the pre-calculation is performed only once for a given geometry, a computer cluster may be used to perform the pre-calculation offline. Alternatively, parallelization may be achieved by using graphical processing units (GPUs). Although the matrix Ms is too large to be saved in the RAM of conventional GPUs, a more efficient representation of its entries may enable one to implement ABP on a GPU. In particular, the matrix Ms is not composed of unique entries, but rather contains Np copies of M, which in our example occupied only 156 MB. Thus, the algorithm may be implemented on a GPU with the matrix M stored in the RAM of the GPU, with the elements of Ms calculated on the fly in the iterative solution of Eq. (A9). Moreover, the matrix M itself may be generated from an even smaller set of data, as previously demonstrated in [34,47]. Thus, ABP may be adapted for GPU implementation, which may enable its generalization to 3D.
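The on-the-fly strategy can already be prototyped on a CPU with a matrix-free operator. The sketch below applies Ms (Eq. (A6)) through M alone, without ever forming Ms; the slicing conventions and function names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def matrix_free_Ms(M, Nx, Ny, Np, Nt):
    """Apply Ms (Eq. (A6)) without storing it: block k of Ms @ x is M applied
    to an Nx*Ny-long slice of x, with offsets following the Ny-column shifts."""
    NK = Nx + Np - 1
    def matvec(x):
        y = np.empty(Np * Np * Nt)
        for k in range(Np):
            s = Ny * (Np - 1 - k)                      # column offset of the k-th copy of M
            y[k * Np * Nt:(k + 1) * Np * Nt] = M @ x[s:s + Nx * Ny]
        return y
    def rmatvec(y):                                    # Ms.T @ y, required by LSQR
        x = np.zeros(Ny * NK)
        for k in range(Np):
            s = Ny * (Np - 1 - k)
            x[s:s + Nx * Ny] += M.T @ y[k * Np * Nt:(k + 1) * Np * Nt]
        return x
    return LinearOperator((Np * Np * Nt, Ny * NK), matvec=matvec, rmatvec=rmatvec, dtype=float)

# e.g., K_column = lsqr(matrix_free_Ms(M, Nx, Ny, Np, Nt), Icn, iter_lim=120)[0]
```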

Finally, we note that the information found in the low-resolution BP operator may be extended to enable image reconstruction at arbitrary scales if an analytical operation is empirically fitted to the numerical BP operator. For example, the result of the first iteration of ABP found in our paper may be approximated by projecting onto the arcs the temporal derivatives of the acoustic signals and applying an angle-dependent weighting function over the arc. For higher iterations, the signal that is back-projected would need to be a convolution of the acoustic signal with the 1D BP response found in Fig. 3(d). This approach may enable finding empirical filtered back-projection formulae, analogous to those based on the Ram-Lak filter in computerized tomography [48].

Appendix

In this Appendix, a detailed derivation of the algorithm in Sec. 3.2 is provided. As Eq. (10) shows, the matrix K̂ has a size of NxNy × NpNt and is composed of NK = Nx + Np − 1 sub-matrices of size Ny × Nt that appear at several positions in the combined matrix. Thus, although the matrix K̂ contains NxNyNpNt elements, the number of its unique entries is only NKNyNt. In comparison, the number of linear equations in Eq. (12) is equal to the number of elements in I_{NpNt}, i.e. (NpNt)^2. Since Np > Nx and Nt > Ny, the number of equations vastly exceeds the number of unknowns.
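The imbalance can be made concrete with the parameters of Section 4; the short worked-arithmetic check below (illustrative only) gives the ratio of equations to unknowns quoted in the Conclusion.

```python
# Equations vs. unknowns for Nx = Ny = 81, Np = 325, Nt = 431.
Nx = Ny = 81; Np = 325; Nt = 431
NK = Nx + Np - 1
equations = (Np * Nt) ** 2          # number of entries of the identity in Eq. (12)
unknowns = NK * Ny * Nt             # number of unique entries of K
print(equations, unknowns, round(equations / unknowns))   # ~1.96e10, ~1.41e7, ratio ~1388
```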

To determine K̂ via algebraic inversion, Eq. (12) first needs to be rearranged in a way in which the matrix that contains the unknowns has only unique entries. As a first step, we break K̂ into Np matrices Kv,1…Kv,Np, each with a size of NxNy × Nt,

$$\mathbf{K}_{v,1}=\begin{pmatrix}\mathbf{K}_{N_p}\\ \mathbf{K}_{N_p+1}\\ \vdots\\ \mathbf{K}_{N_p+N_x-1}\end{pmatrix};\quad \mathbf{K}_{v,2}=\begin{pmatrix}\mathbf{K}_{N_p-1}\\ \mathbf{K}_{N_p}\\ \vdots\\ \mathbf{K}_{N_p+N_x-2}\end{pmatrix};\quad\cdots\quad \mathbf{K}_{v,N_p-1}=\begin{pmatrix}\mathbf{K}_{2}\\ \mathbf{K}_{3}\\ \vdots\\ \mathbf{K}_{N_x+1}\end{pmatrix};\quad \mathbf{K}_{v,N_p}=\begin{pmatrix}\mathbf{K}_{1}\\ \mathbf{K}_{2}\\ \vdots\\ \mathbf{K}_{N_x}\end{pmatrix},\tag{A1}$$
and the matrix I_{NpNt} into Np matrices Iv,1…Iv,Np, each with a size of NpNt × Nt,
$$\mathbf{I}_{v,1}=\begin{pmatrix}\mathbf{I}_{N_t}\\ 0\\ \vdots\\ 0\end{pmatrix};\quad \mathbf{I}_{v,2}=\begin{pmatrix}0\\ \mathbf{I}_{N_t}\\ \vdots\\ 0\end{pmatrix};\quad\cdots\quad \mathbf{I}_{v,N_p-1}=\begin{pmatrix}0\\ \vdots\\ \mathbf{I}_{N_t}\\ 0\end{pmatrix};\quad \mathbf{I}_{v,N_p}=\begin{pmatrix}0\\ \vdots\\ 0\\ \mathbf{I}_{N_t}\end{pmatrix},\tag{A2}$$
where K̂ = [Kv,1, Kv,2, …, Kv,Np−1, Kv,Np] and I_{NpNt} = [Iv,1, Iv,2, …, Iv,Np−1, Iv,Np]. Replacing K̂ with a vertical arrangement of Kv,1…Kv,Np, Eq. (12) may be rewritten as follows:
$$\begin{pmatrix}\mathbf{M} & 0 & \cdots & 0 & 0\\ 0 & \mathbf{M} & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & \mathbf{M} & 0\\ 0 & 0 & \cdots & 0 & \mathbf{M}\end{pmatrix}\begin{pmatrix}\mathbf{K}_{v,1}\\ \mathbf{K}_{v,2}\\ \vdots\\ \mathbf{K}_{v,N_p-1}\\ \mathbf{K}_{v,N_p}\end{pmatrix}=\begin{pmatrix}\mathbf{I}_{v,1}\\ \mathbf{I}_{v,2}\\ \vdots\\ \mathbf{I}_{v,N_p-1}\\ \mathbf{I}_{v,N_p}\end{pmatrix}.\tag{A3}$$
We denote the block-diagonal matrix on the left-hand side of Eq. (A3) by Md, and the vertical arrangements of K̂ and I_{NpNt} in Eq. (A3) by Kv and Iv, respectively. The dimensions of Md, Kv, and Iv are NtNp^2 × NxNyNp, NxNyNp × Nt, and NtNp^2 × Nt, respectively. The matrix Kv in Eq. (A3) is connected to the matrix K in Eq. (8) by the following transformation:
$$\mathbf{K}_{v}=\begin{pmatrix}0_{N_xN_y\times N_y(N_K-N_x)} & \mathbf{I}_{N_xN_y} & \\ 0_{N_xN_y\times N_y(N_K-N_x-1)} & \mathbf{I}_{N_xN_y} & 0_{N_xN_y\times N_y}\\ 0_{N_xN_y\times N_y(N_K-N_x-2)} & \mathbf{I}_{N_xN_y} & 0_{N_xN_y\times 2N_y}\\ \vdots & \vdots & \vdots\\ 0_{N_xN_y\times 2N_y} & \mathbf{I}_{N_xN_y} & 0_{N_xN_y\times N_y(N_K-N_x-2)}\\ 0_{N_xN_y\times N_y} & \mathbf{I}_{N_xN_y} & 0_{N_xN_y\times N_y(N_K-N_x-1)}\\ \mathbf{I}_{N_xN_y} & 0_{N_xN_y\times N_y(N_K-N_x)} & \end{pmatrix}\mathbf{K}.\tag{A4}$$
We denote the first matrix on the right-hand side of Eq. (A4) by Is. As Eq. (A4) shows, the size of the matrix Is is NxNy(NK − Nx + 1) × NyNK, or equivalently NxNyNp × NyNK. Substituting Eq. (A4) into Eq. (A3), we obtain:
$$\mathbf{M}_{s}\mathbf{K}=\mathbf{I}_{v},\tag{A5}$$
where Ms = MdIs and has the following form:
$$\mathbf{M}_{s}=\begin{pmatrix}0_{N_tN_p\times N_y(N_K-N_x)} & \mathbf{M} & \\ 0_{N_tN_p\times N_y(N_K-N_x-1)} & \mathbf{M} & 0_{N_tN_p\times N_y}\\ 0_{N_tN_p\times N_y(N_K-N_x-2)} & \mathbf{M} & 0_{N_tN_p\times 2N_y}\\ \vdots & \vdots & \vdots\\ 0_{N_tN_p\times 2N_y} & \mathbf{M} & 0_{N_tN_p\times N_y(N_K-N_x-2)}\\ 0_{N_tN_p\times N_y} & \mathbf{M} & 0_{N_tN_p\times N_y(N_K-N_x-1)}\\ \mathbf{M} & 0_{N_tN_p\times N_y(N_K-N_x)} & \end{pmatrix}.\tag{A6}$$
Thus, the matrix Ms has a size of Np^2 Nt × NyNK and is composed of Np left-circular shifts of the matrix [0_{NtNp×Ny(NK−Nx)}, M].

Similarly to Eq. (12), Eq. (A5) is a matrix representation of (NpNt)^2 linear equations. However, in contrast to the matrix K̂ in Eq. (12), the matrix K has only unique entries. To invert Eq. (A5) and obtain K, we first decompose the matrices K and Iv in Eq. (A5) into sets of Nt column vectors:

$$\mathbf{K}=\begin{pmatrix}\mathbf{K}_{c,1} & \mathbf{K}_{c,2} & \cdots & \mathbf{K}_{c,N_t-1} & \mathbf{K}_{c,N_t}\end{pmatrix},\tag{A7a}$$
$$\mathbf{I}_{v}=\begin{pmatrix}\mathbf{I}_{c,1} & \mathbf{I}_{c,2} & \cdots & \mathbf{I}_{c,N_t-1} & \mathbf{I}_{c,N_t}\end{pmatrix}.\tag{A7b}$$
Accordingly, Eq. (A5) may be rewritten as Nt matrix-vector multiplications:
$$\mathbf{M}_{s}\mathbf{K}_{c,n}=\mathbf{I}_{c,n},\qquad n=1\ldots N_t.\tag{A8}$$
For each value of n, Eq. (A8) represents a set of Np^2 Nt equations with NKNy unknowns. Accordingly, Eq. (A8) is an overdetermined set of equations, which may not have an exact solution, and one needs to find the vector Kc,n for which MsKc,n best approximates Ic,n. To invert Eq. (A8), we therefore use the following l2 minimization:
$$\min_{\mathbf{K}_{c,n}}\left\|\mathbf{M}_{s}\mathbf{K}_{c,n}-\mathbf{I}_{c,n}\right\|_{2},\qquad n=1\ldots N_t,\tag{A9}$$
which may be efficiently performed by the LSQR algorithm [40] due to the sparsity of Ms.

Once all the vectors Kc,n have been found via Eq. (A9), they are used to construct the kernel matrix K via Eq. (A7a). To reconstruct the images, one may construct K̂ via Eq. (10) and then use Eq. (9). Alternatively, one may rearrange the projection data in the following matrix form:

$$\mathbf{p}_{\mathrm{mat}}=\begin{pmatrix}\mathbf{p}_{1} & \mathbf{p}_{2} & \cdots & \mathbf{p}_{N_p-1} & \mathbf{p}_{N_p}\end{pmatrix},\tag{A10}$$
and then construct the matrix hmat = Kpmat. Each column of hmat represents a single back-projected image with sizes of NK and Ny in the x and y directions, respectively, as illustrated in Fig. 1(b). Denoting the nth column vector of hmat by hmat,n, the translation and summation of the back-projected images may be obtained by the following equation, which is equivalent to the operation shown in Fig. 2:
$$\mathbf{h}_{\mathrm{rec}}=\sum_{n=0}^{N_p-1}\mathbf{h}_{\mathrm{mat},n}\left(N_yN_p-nN_y:N_yN_K-nN_y\right),\tag{A11}$$
where the notation hmat,n(i:j) denotes the column vector composed of the ith through jth elements of hmat,n.
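A minimal sketch of this reconstruction stage (steps 11-12 of Section 3.2) is given below. The x-major ordering of the image vector and the placement of the region of interest inside each back-projected image are assumptions consistent with the Ny-element shifts of Eq. (A6) and the block structure of Eq. (10), not the code used for the results of Section 4.

```python
import numpy as np

def abp_reconstruct(K, p, Nx, Ny, Np, Nt):
    """Back-project all signals with the same kernel K and sum the shifted images."""
    NK = Nx + Np - 1
    p_mat = p.reshape(Np, Nt).T                 # columns p_1 ... p_Np, as in Eq. (A10)
    h_mat = K @ p_mat                           # each column: one back-projected image (Ny*NK entries)
    h_rec = np.zeros(Nx * Ny)
    for n in range(Np):                         # translate and sum, Eq. (A11) / Fig. 2
        start = Ny * (Np - 1 - n)               # offset of the region of interest
        h_rec += h_mat[start:start + Nx * Ny, n]
    return h_rec.reshape(Nx, Ny)
```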

Funding

The Israel Science Foundation (942/15); the Ministry of Science and Technology and Space (3-12970).

Acknowledgment

The author acknowledges the assistance of Yoav Hazan and Ahiad Levi with the experimental work.

Disclosures

The author declares that there are no conflicts of interest related to this article.

References

1. X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Probl. 25(12), 123009 (2009).
2. W. A. Kalender, “X-ray computed tomography,” Phys. Med. Biol. 51(13), R29–R43 (2006).
3. W. Kalender, Computed Tomography: Fundamentals, System Technology, Image Quality, Applications, 3rd ed. (Wiley-VCH, 2011).
4. R. M. Lewitt and S. Matej, “Overview of methods for image reconstruction from projections in emission computed tomography,” Proc. IEEE 91(10), 1588–1611 (2003).
5. J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sørensen, R. Baldock, and D. Davidson, “Optical projection tomography as a tool for 3D microscopy and gene expression studies,” Science 296(5567), 541–545 (2002).
6. C. Li, N. Duric, P. Littrup, and L. Huang, “In vivo breast sound-speed imaging with ultrasound tomography,” Ultrasound Med. Biol. 35(10), 1615–1628 (2009).
7. A. Taruttis and V. Ntziachristos, “Advances in real-time multispectral optoacoustic imaging and its applications,” Nat. Photonics 9(4), 219–227 (2015).
8. B. Bendriem and D. W. Townsend, The Theory and Practice of 3D PET (Springer, 1998).
9. D. Razansky, S. Kellnberger, and V. Ntziachristos, “Near-field radiofrequency thermoacoustic tomography with impulse excitation,” Med. Phys. 37(9), 4602–4607 (2010).
10. A. Rosenthal, V. Ntziachristos, and D. Razansky, “Acoustic inversion in optoacoustic tomography: a review,” Curr. Med. Imaging Rev. 9(4), 318–336 (2014).
11. P. Burgholzer, G. J. Matt, M. Haltmeier, and G. Paltauf, “Exact and approximative imaging methods for photoacoustic tomography using an arbitrary detection surface,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 75(4), 046706 (2007).
12. M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 71(1), 016706 (2005).
13. P. Burgholzer, J. Bauer-Marschallinger, H. Grün, M. Haltmeier, and G. Paltauf, “Temporal back-projection algorithms for photoacoustic tomography with integrating line detectors,” Inverse Probl. 23(6), S65–S80 (2007).
14. M. Haltmeier, “Universal inversion formulas for recovering a function from spherical means,” SIAM J. Math. Anal. 46(1), 214–232 (2014).
15. L. A. Kunyansky, “Explicit inversion formulae for the spherical mean radon transform,” Inverse Probl. 23(1), 373–383 (2007).
16. Y. Xu, D. Feng, and L. V. Wang, “Exact frequency-domain reconstruction for thermoacoustic tomography--I: Planar geometry,” IEEE Trans. Med. Imaging 21(7), 823–828 (2002).
17. K. Wang and M. A. Anastasio, “A simple Fourier transform-based reconstruction formula for photoacoustic computed tomography with a circular or spherical measurement geometry,” Phys. Med. Biol. 57(23), N493–N499 (2012).
18. J. Xiao, X. Luo, K. Peng, and B. Wang, “Improved back-projection method for circular-scanning-based photoacoustic tomography with improved tangential resolution,” Appl. Opt. 56(32), 8983–8990 (2017).
19. H. Huang, G. Bustamante, R. Peterson, and J. Y. Ye, “An adaptive filtered back-projection for photoacoustic image reconstruction,” Med. Phys. 42(5), 2169–2178 (2015).
20. J. Turner, H. Estrada, M. Kneipp, and D. Razansky, “Improved optoacoustic microscopy through three-dimensional spatial impulse response synthetic aperture focusing technique,” Opt. Lett. 39(12), 3390–3393 (2014).
21. Y. Zhao, X. Zhao, and P. Zhang, “An extended algebraic reconstruction technique (E-ART) for dual spectral CT,” IEEE Trans. Med. Imaging 34(3), 761–768 (2015).
22. Z. Yu, J. B. Thibault, C. A. Bouman, K. D. Sauer, and J. Hsieh, “Fast model-based X-ray CT reconstruction using spatially nonhomogeneous ICD optimization,” IEEE Trans. Image Process. 20(1), 161–175 (2011).
23. A. Rosenthal, V. Ntziachristos, and D. Razansky, “Model-based optoacoustic inversion with arbitrary-shape detectors,” Med. Phys. 38(7), 4285–4295 (2011).
24. X. L. Deán-Ben, A. Buehler, V. Ntziachristos, and D. Razansky, “Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography,” IEEE Trans. Med. Imaging 31(10), 1922–1928 (2012).
25. A. Rosenthal, D. Razansky, and V. Ntziachristos, “Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography,” IEEE Trans. Med. Imaging 29(6), 1275–1285 (2010).
26. K. Wang, R. Su, A. A. Oraevsky, and M. A. Anastasio, “Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography,” Phys. Med. Biol. 57(17), 5399–5423 (2012).
27. M. Á. Araque Caballero, A. Rosenthal, J. Gateau, D. Razansky, and V. Ntziachristos, “Model-based optoacoustic imaging using focused detector scanning,” Opt. Lett. 37(19), 4080–4082 (2012).
28. G. Drozdov and A. Rosenthal, “Analysis of negatively focused ultrasound detectors in optoacoustic tomography,” IEEE Trans. Med. Imaging 36(1), 301–309 (2017).
29. T. Jetzfellner, A. Rosenthal, A. Buehler, A. Dima, K.-H. Englmeier, V. Ntziachristos, and D. Razansky, “Optoacoustic tomography with varying illumination and non-uniform detection patterns,” J. Opt. Soc. Am. A 27(11), 2488–2495 (2010).
30. X. L. Deán-Ben, R. Ma, A. Rosenthal, V. Ntziachristos, and D. Razansky, “Weighted model-based optoacoustic reconstruction in acoustic scattering media,” Phys. Med. Biol. 58(16), 5555–5566 (2013).
31. X. Jia, Y. Lou, R. Li, W. Y. Song, and S. B. Jiang, “GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation,” Med. Phys. 37(4), 1757–1760 (2010).
32. J. Provost and F. Lesage, “The application of compressed sensing for photo-acoustic tomography,” IEEE Trans. Med. Imaging 28(4), 585–594 (2009).
33. A. Özbek, X. L. Deán-Ben, and D. Razansky, “Optoacoustic imaging at kilohertz volumetric frame rates,” Optica 5(7), 857 (2018).
34. L. Ding, X. L. Dean-Ben, and D. Razansky, “Efficient 3-D model-based reconstruction scheme for arbitrary optoacoustic acquisition geometries,” IEEE Trans. Med. Imaging 36(9), 1858–1867 (2017).
35. M. Omar, J. Gateau, and V. Ntziachristos, “Raster-scan optoacoustic mesoscopy in the 25-125 MHz range,” Opt. Lett. 38(14), 2472–2474 (2013).
36. X. L. Deán-Ben, T. F. Fehm, S. J. Ford, S. Gottschalk, and D. Razansky, “Spiral volumetric optoacoustic tomography visualizes multi-scale dynamics in mice,” Light Sci. Appl. 6(4), e16247 (2017).
37. J. Prakash, H. Dehghani, B. W. Pogue, and P. K. Yalavarthy, “Model-resolution-based basis pursuit deconvolution improves diffuse optical tomographic imaging,” IEEE Trans. Med. Imaging 33(4), 891–901 (2014).
38. J. Prakash, A. S. Raju, C. B. Shaw, M. Pramanik, and P. K. Yalavarthy, “Basis pursuit deconvolution for improving model-based reconstructed images in photoacoustic tomography,” Biomed. Opt. Express 5(5), 1363–1377 (2014).
39. M. S. Ždanov, Geophysical Inverse Theory and Regularization Problems (Elsevier, 2006).
40. C. C. Paige and M. A. Saunders, “LSQR: an algorithm for sparse linear equations and sparse least squares,” ACM Trans. Math. Softw. 8(1), 43–71 (1982).
41. C. B. Shaw, J. Prakash, M. Pramanik, and P. K. Yalavarthy, “Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography,” J. Biomed. Opt. 18(8), 080501 (2013).
42. A. Buehler, A. Rosenthal, T. Jetzfellner, A. Dima, D. Razansky, and V. Ntziachristos, “Model-based optoacoustic inversions with incomplete projection data,” Med. Phys. 38(3), 1694–1704 (2011).
43. D. Razansky, A. Buehler, and V. Ntziachristos, “Volumetric real-time multispectral optoacoustic tomography of biomarkers,” Nat. Protoc. 6(8), 1121–1129 (2011).
44. C. Zhang, C. Li, and L. V. Wang, “Fast and robust deconvolution-based image reconstruction for photoacoustic tomography in circular geometry: experimental validation,” IEEE Photonics J. 2(1), 57–66 (2010).
45. C. Lutzweiler, X. L. Deán-Ben, and D. Razansky, “Expediting model-based optoacoustic reconstructions with tomographic symmetries,” Med. Phys. 41(1), 013302 (2013).
46. A. Rosenthal, T. Jetzfellner, D. Razansky, and V. Ntziachristos, “Efficient framework for model-based tomographic image reconstruction using wavelet packets,” IEEE Trans. Med. Imaging 31(7), 1346–1357 (2012).
47. J. Aguirre, A. Giannoula, T. Minagawa, L. Funk, P. Turon, and T. Durduran, “A low memory cost model based reconstruction algorithm exploiting translational symmetry for photoacoustic microscopy,” Biomed. Opt. Express 4(12), 2813–2827 (2013).
48. T. Taylor and L. R. Lupton, “Resolution, artifacts and the design of computed tomography systems,” Nucl. Instrum. Methods Phys. Res. A 242(3), 603–609 (1986).



