
Decomposition of images and objects into measurement and null components

Open Access

Abstract

We derive and discuss an algorithm for separating measurement- and null-space components of an object with respect to a given imaging system. This algorithm requires a discrete-to-discrete approximation of a typically continuous-to-discrete imaging system, and problems associated with such an approximation are examined. Situations where knowledge of the measurement and null spaces is crucial for analyzing imaging systems are discussed. We conclude with two examples demonstrating the usefulness of this algorithm, even in the discrete-to-discrete approximation.

©1998 Optical Society of America

1. Introduction

A medical-imaging system is typically designed to estimate some set of parameters associated with a real-world object. For instance, in single-photon emission computed tomography (SPECT), the parameters that the system attempts to estimate are the radiotracer concentrations within object volume elements, or voxels. (Often some smaller set of values, or a single value, is the desired final product, but this final product is almost always calculated by condensing the voxel estimates.)

In SPECT, as in other forms of tomographic imaging, the estimation procedure is referred to as reconstruction, and a number of reconstruction algorithms have been advanced. These include filtered backprojection, maximum-likelihood algorithms, the algebraic reconstruction technique (ART), and various Bayesian techniques. Each of these reconstruction methods has advantages and disadvantages that have been discussed in the literature [see, for example, 1,2]. However, inherent in the tomographic reconstruction problem are limitations more fundamental than differences between the various algorithms. Three examples of estimation-limiting factors are statistical noise in the data collected by the imaging system, an improper model of the imaging system, and inadequacies of the imaging system itself.

Statistical noise and improper system modeling have been discussed in depth [3,4], and we shall not focus on them here. Instead we shall study the problems associated with the design of the imaging system, and how these problems relate to the inability of the system to estimate certain parameters. We do not, during the course of this study, attempt to suggest optimal system designs. Rather, we derive a method for determining the parameters that are estimable by a given system. We then demonstrate how our technique can separate estimation problems associated with the imaging system from problems associated with other factors.

2. Background and derivation

In the absence of noise, a SPECT imaging system can be well modeled with the equation

$$ g = \mathcal{H} f, \qquad (1) $$

where f, a continuous function of the position r, represents the radioisotope distribution within the patient, and the continuous-to-discrete imaging operator 𝛨 has kernel $h_m(\mathbf{r})$ equal to the probability that a photon emitted at object position r will be detected in detector pixel m. Elements of the discretely collected data vector, g, are calculated as

$$ g_m = \int h_m(\mathbf{r})\, f(\mathbf{r})\, d^3r \qquad (2) $$

for m = 1, …, M.

In SPECT, the desired imaging goal is to estimate f given g. However, it is clearly impossible to fully determine the infinite-dimensional object vector f from an M-dimensional data vector g. The usual method for circumventing this problem is to divide the continuous f into a number of voxels approximately equal to the number of elements in g. We rewrite equation (1) as

$$ \mathbf{g}_a = \mathbf{H}\,\mathbf{f}, \qquad (3) $$

where $\mathbf{f}$, the discrete approximation of f, has elements

$$ f_n = \int_{v_n} f(\mathbf{r})\, d^3r, \qquad (4) $$

for n = 1,…,N, and H, the matrix approximation of 𝛨, has elements

$$ h_{mn} = \int_{v_n} h_m(\mathbf{r})\, d^3r. \qquad (5) $$

In these equations $v_n$ represents the nth voxel in object space, and the subscript on $\mathbf{g}_a$ indicates that this is only an approximation of the true projection process.
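
As an illustration of this discretization, the following minimal sketch (Python/NumPy) builds a toy one-dimensional system matrix by approximating equation (5) with a midpoint rule and forms the data of equation (3). The Gaussian kernel, the array sizes, and the box-shaped object are assumptions chosen purely for illustration; this is not the FASTSPECT model.

```python
import numpy as np

# Toy 1-D system (an illustrative assumption, not the FASTSPECT model):
# each detector pixel m has a Gaussian sensitivity h_m(r) centred at c_m.
M, N = 32, 64                                  # detector pixels, object voxels
voxel_width = 1.0 / N
r = (np.arange(N) + 0.5) * voxel_width         # voxel centres on [0, 1]
c = (np.arange(M) + 0.5) / M                   # detector-pixel centres
sigma = 0.03                                   # assumed kernel width

# h_mn = integral of h_m(r) over voxel v_n, approximated by a midpoint rule (eq. 5).
H = np.exp(-0.5 * ((c[:, None] - r[None, :]) / sigma) ** 2) * voxel_width

# A simple box-shaped object and its discrete projection g_a = H f (eq. 3).
f_true = (np.abs(r - 0.5) < 0.2).astype(float)
g_a = H @ f_true
```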

2.1 Null space, measurement space and singular-value decomposition

Ideally, H would be invertible. However, it is generally found that even this approximation of the true imaging system is singular. This singularity results from the manner in which tomographic systems sample the object space and from the well-known fact that high frequencies are transferred very poorly in tomography. One way that the singularity of H has been explored is with singular-value decomposition (SVD) theory. It can be shown that any M×N matrix H can be decomposed as

$$ \mathbf{H} = \mathbf{U}\mathbf{D}\mathbf{V}^t, \qquad (6) $$

where the M×M U and the N×N V are nonsingular orthonormal matrices, and the M×N D has nonzero elements only at the first p locations on the diagonal.

Since U and V are orthonormal, their column vectors, $\mathbf{u}_i$ and $\mathbf{v}_i$, form orthonormal bases for the projection space and the discretized object space, respectively. Equation (3) can be rewritten, in terms of $\mathbf{u}_i$ and $\mathbf{v}_i$, as

$$ \mathbf{g}_a = \sum_{i=1}^{p} d_{ii}\,(\mathbf{v}_i^t \mathbf{f})\,\mathbf{u}_i, \qquad (7) $$

where $\mathbf{v}_i^t\mathbf{f}$ is the inner product of the ith object-space eigenvector and the object. Thus, we see that the first p $\mathbf{v}_i$ contribute to the projection operation and the rest do not. We can therefore define an N×N measurement-space projector, M, with p rows equal to the first p $\mathbf{v}_i$ and N − p rows equal to the zero vector. The null-space projector, N, is simply the identity matrix minus M. The Mf operation is then the projection of f onto the measurement space, and Nf is the projection of f onto the null space.
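
As a numerical illustration, these projectors can be formed directly from an SVD of the toy system matrix constructed above. In this sketch M is built as the sum of outer products of the first p $\mathbf{v}_i$ (equal to H⁺H, as noted below) rather than as the row matrix just described, but both constructions isolate the same measurement space.

```python
import numpy as np  # continues from the toy H and f_true constructed above

# SVD of H (eq. 6); the first p right-singular vectors v_i span the measurement
# space, the remaining N - p span the null space.
U, d, Vt = np.linalg.svd(H, full_matrices=True)
tol = d.max() * max(H.shape) * np.finfo(float).eps  # threshold for "zero" singular values
p = int(np.sum(d > tol))

V_meas = Vt[:p, :]                        # the first p v_i (as rows)
M_proj = V_meas.T @ V_meas                # measurement-space projector M
N_proj = np.eye(H.shape[1]) - M_proj      # null-space projector N = I - M

f_meas = M_proj @ f_true                  # Mf, the part of f the system can see
f_null = N_proj @ f_true                  # Nf, invisible to the system
assert np.allclose(H @ f_null, 0.0, atol=1e-8)   # the null component projects to ~0
```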

The Moore-Penrose pseudoinverse of H is

$$ \mathbf{H}^+ = \mathbf{V}\mathbf{D}^{-1}\mathbf{U}^t, \qquad (8) $$

where $d^{-1}_{ii} = 1/d_{ii}$ for $d_{ii}$ greater than zero, and zero otherwise. We note that the measurement- and null-space projectors can be written as $\mathbf{M} = \mathbf{H}^+\mathbf{H}$ and $\mathbf{N} = \mathbf{I} - \mathbf{H}^+\mathbf{H}$.

The vector $\mathbf{H}^+\mathbf{g}_a$ is a least-squares solution to the problem stated by equation (3). However, a modeling error has been introduced by the discretization of equation (1), and a least-squares solution of the discretized problem does not imply a least-squares solution of the real continuous-to-discrete system. Although $\mathbf{H}^+\mathbf{g}_a$ is not a least-squares solution to the real imaging problem, it is very advantageous to be able to calculate it when analyzing imaging systems or reconstruction algorithms. We shall present two examples demonstrating uses of $\mathbf{H}^+$ in Section 3.
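
A short numerical check of these relations on the same toy system: np.linalg.pinv realizes equation (8), and the least-squares estimate H⁺g_a is seen to recover only the measurement component of the object.

```python
import numpy as np  # continues from the sketches above

# The pseudoinverse relations quoted in the text, checked numerically on the toy system.
H_pinv = np.linalg.pinv(H)                       # H^+ = V D^{-1} U^t  (eq. 8)
M_from_pinv = H_pinv @ H                         # M = H^+ H
N_from_pinv = np.eye(H.shape[1]) - M_from_pinv   # N = I - H^+ H
assert np.allclose(M_from_pinv, M_proj, atol=1e-8)

# H^+ g_a is a least-squares solution of the discretized problem; it recovers
# only the measurement component of the object, and none of the null component.
f_ls = H_pinv @ g_a
assert np.allclose(f_ls, f_meas, atol=1e-6)
```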

2.2 Derivation of the algorithm

One problem associated with the singular-value decomposition of an imaging system is the computational difficulty of the operation. Most SVD algorithms require that the entire H be stored in memory, and typical three-dimensional tomographic systems have H matrices with a size of several gigabytes.

For this reason the pseudoinverse is rarely used for three-dimensional reconstruction. An estimate of the discretized object is normally reconstructed by an iterative process (except in the case of the filtered-backprojection algorithm, which assumes a very simplified form of H). Iterative algorithms allow the reconstruction process to be done “on the fly” so that the H matrix does not need to be stored in memory. They also allow the possibility of stopping when a result of suitable accuracy has been achieved, thus saving the computation time required to reach convergence.

Most iterative algorithms are not designed for separating the null and measurement space. Instead, they arrive at solutions that contain some null-space components and omit some measurement-space components. However, important examples of algorithms that do iterate toward measurement- and null-space solutions are the Landweber algorithm [5] and some algebraic reconstruction techniques [6].

We have developed an additional iterative algorithm capable of finding Nf and Mf for any given f and H. We shall present only an outline of the algorithm’s derivation here. (A full derivation is contained in [7].) We start with a limiting representation of the Moore-Penrose pseudoinverse:

$$ \mathbf{H}^+ = \lim_{\eta \to 0}\, [\mathbf{H}^t\mathbf{H} + \eta\mathbf{I}]^{-1}\mathbf{H}^t. \qquad (9) $$

We note that an operator $[\mathbf{I} + \mathbf{X}]^{-1}$ can be expanded in the Neumann series

$$ [\mathbf{I} + \mathbf{X}]^{-1} = \sum_{n=0}^{\infty} \mathbf{X}^n. \qquad (10) $$

When we expand equation (9) in the Neumann series and allow η to go to zero, we arrive at an iterative algorithm for calculating the null-space component of a digitized f:

$$ \mathbf{N}\mathbf{f} = \sum_{n=0}^{\infty} (\mathbf{I} - \mathbf{H}^t\mathbf{H})^n\, \mathbf{f}. \qquad (11) $$
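
One way to fill in the omitted steps (the complete derivation is in [7]) is to keep the regularization parameter explicit. Writing Nf = f − H⁺Hf and using equation (9) (since $\mathbf{f} - [\mathbf{H}^t\mathbf{H}+\eta\mathbf{I}]^{-1}\mathbf{H}^t\mathbf{H}\,\mathbf{f} = \eta\,[\mathbf{H}^t\mathbf{H}+\eta\mathbf{I}]^{-1}\mathbf{f}$),

$$ \mathbf{N}\mathbf{f} = \lim_{\eta\to 0}\,\eta\,[\mathbf{H}^t\mathbf{H}+\eta\mathbf{I}]^{-1}\mathbf{f} = \lim_{\eta\to 0}\,\frac{\eta}{1+\eta}\sum_{n=0}^{\infty}\frac{(\mathbf{I}-\mathbf{H}^t\mathbf{H})^{n}}{(1+\eta)^{n}}\,\mathbf{f}, $$

where the second equality follows from applying equation (10) to $[(1+\eta)\mathbf{I}-(\mathbf{I}-\mathbf{H}^t\mathbf{H})]^{-1}$, with H assumed scaled (or, equivalently, the step size α introduced below chosen) so that the series converges. As η → 0, only the component of f left unchanged by I − HᵗH survives, which is precisely what the iteration described next computes.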

Using this expression, it can be shown that by acting iteratively with the operator $\mathbf{I} - \alpha\mathbf{H}^t\mathbf{H}$ (with α chosen to ensure convergence), we can estimate the null component of f with respect to the system H. The measurement projection of f can then be calculated with the relationship Mf = f − Nf. It can readily be seen that the Mf operation will produce results very similar to those of the Landweber algorithm acting on Hf. Another related algorithm is the SIRT [8]. Readers interested in comparisons of these types of algorithms are referred to [6].
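
A minimal sketch of this iteration, applied to the toy system introduced earlier (the step size and iteration count are illustrative assumptions, not tuned values):

```python
import numpy as np  # continues from the toy system above


def null_space_component(H, f, alpha=None, n_iter=20000):
    """Estimate Nf by repeatedly applying the operator (I - alpha * H^t H) to f."""
    if alpha is None:
        smax = np.linalg.norm(H, 2)      # largest singular value of H
        alpha = 1.0 / smax ** 2          # inside the convergence range (0, 2/smax^2)
    HtH = H.T @ H
    f_k = f.astype(float).copy()
    for _ in range(n_iter):
        f_k -= alpha * (HtH @ f_k)       # one application of (I - alpha H^t H)
    return f_k


f_null_iter = null_space_component(H, f_true)
f_meas_iter = f_true - f_null_iter       # Mf = f - Nf

# Compare with the SVD-based projections from the earlier sketch; the differences
# are small and shrink further as n_iter grows (convergence is slowest for the
# most weakly measured components).
print(np.max(np.abs(f_null_iter - f_null)), np.max(np.abs(f_meas_iter - f_meas)))
```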

3. Examples

In this section we shall present two examples that illustrate uses of the null-space algorithm. In the first example we were able to find the source of an imaging-system resolution problem. The second example demonstrates how the null-space algorithm can be combined with another technique to reconstruct tomographic images.

3.1 Example 1: FASTSPECT imaging resolution

One of the major projects being carried out by our group is the design and construction of a stationary SPECT imager. This system consists of 24 photon-sensitive modular scintillation cameras arranged about the patient, with incoming photons collimated by pinholes. Since the system is stationary, it can collect full tomographic information without need of motion, giving it the capability of dynamic SPECT. For this reason we refer to our system as FASTSPECT. More information on the FASTSPECT system can be obtained at [9].

When imaging dynamically, we often collect projection frames over very short time periods. The obvious problem associated with fast imaging is the small number of collected photons (and resulting low signal-to-noise ratio) in the individual projection images. We can increase the number of collected photons by increasing the diameter of the pinholes, but the trade-off associated with increased pinhole diameter is a decrease in system spatial resolution.

Fig. 1. A slice from the reconstructed image of the wedge phantom.

An application of the FASTSPECT system is dynamic cardiac imaging (DCI). Since the goal of DCI is to collect images throughout the cardiac cycle, a very short imaging time per frame is required. In our first attempt at DCI, we increased the pinhole diameter to eight millimeters. This was far larger than any previously used pinhole diameter and caused concern that the system’s spatial resolution might be compromised.

One method we employ to measure system resolution is to image the wedge phantom shown at [10]. By determining the width at which we can resolve the wedge fingers in a reconstructed image, we can arrive at a very rough estimate of system resolution. A slice from the reconstructed image of the wedge phantom taken with the eight-millimeter-pinhole system is shown in Fig. 1. It demonstrates a resolution far below what is normally achieved by FASTSPECT, and worse than expected.

Fig. 2. A slice from (a) the digital wedge phantom, (b) the measurement space of the phantom, and (c) the null space of the phantom.

There were several possible reasons for the poor resolution. While the most likely cause was the large pinhole diameter, it was also possible that our problems resulted from one or more of: 1) the approximation of the continuous-to-discrete system by a discrete-to-discrete matrix (𝛨 → H), 2) an improper model of $h_m(\mathbf{r})$ when estimating H, or 3) the reconstruction process. Since we physically measured H, we were fairly confident that it accurately characterized a voxelized FASTSPECT system, but problems sometimes arise when we make these measurements. Additionally, scatter from within the wedge phantom was not included, and we had measured H with larger than usual voxels. Also, as we will demonstrate in example two, our reconstruction algorithm is sensitive to smoothing parameters.

Fig. 3. The wedge phantom for the redesigned system, with (a) measurement space for the digitized phantom and (b) reconstruction of the actual phantom.

In order to determine the source of the poor resolution, we created a digitized wedge phantom from measurements of the physical wedge phantom. A slice from this digital phantom is given in Fig. 2(a). We then calculated the null and measurement projections of this phantom with respect to H. Modeling errors, approximations due to voxelization, and reconstruction had now been removed from the problem. The only remaining effect was the measured response of the voxelized imaging system, so if the phantom's measurement space showed quality similar to the reconstruction given in Fig. 1, we would assume that the reduced resolution was a function of the actual imaging system. It is observed in Figs. 2(b) and 2(c) that the measurement space of the digitized phantom looks much the same as the reconstruction, with most of the high-frequency information contained in the null space. From this we concluded that the poor resolution was an inherent system problem, and we redesigned DCI FASTSPECT with smaller (two-millimeter) pinholes. The measurement space of the digitized wedge phantom with respect to the redesigned system is given in Fig. 3(a). Fig. 3(b) shows a reconstruction of the physical wedge phantom. It is seen that the system resolution is improved and that the null-space algorithm accurately predicts this improvement.

3.2 Example 2: Reconstruction using the null-space algorithm.

In this section we demonstrate a method for combining the null-space algorithm and another technique to produce a tomographic reconstruction algorithm. We do not suggest that this combination is superior to existing algorithms. Rather, we present our results to illustrate the detrimental effects associated with attempts to reconstruct the null space.

Fig. 4. A slice from the f̂ with best agreement with a g collected by our FASTSPECT imaging system.

We currently reconstruct images with a search method that adjusts the estimates of individual voxels to maximize agreement between the projection of the estimated object distribution, Hf̂, and the data collected by the imaging system, g. This algorithm quickly arrives at an f̂ in good agreement with the projection data, but it in no way suppresses the null-space components of f̂. Unless the reconstructed image is artificially smoothed, it will be of extremely poor visual quality. For this reason we perform some type of local smoothing of f̂ at each iteration of the reconstruction process. This diminishes the agreement between the estimate and the projection data, but increases the apparent visual quality of the image.
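
To illustrate this trade-off numerically, the following sketch uses a generic coordinate-wise least-squares fit as a stand-in for the search method (the actual algorithm is not reproduced here), with an optional crude smoothing step after each sweep. It reuses the toy system from the sketches in Section 2, and every detail is an illustrative assumption.

```python
import numpy as np  # reuses the toy H, g_a, and N_proj from the earlier sketches


def fit_to_projections(H, g, n_sweeps=50, smooth=False):
    """Generic coordinate-wise least-squares fit of f_hat to g (illustrative stand-in)."""
    n_vox = H.shape[1]
    f_hat = np.zeros(n_vox)
    col_norm2 = np.sum(H ** 2, axis=0)            # ||h_n||^2 for each voxel column
    for _ in range(n_sweeps):
        r = g - H @ f_hat                          # current data residual
        for n in range(n_vox):
            step = H[:, n] @ r / col_norm2[n]      # voxel update that most reduces ||r||
            f_hat[n] += step
            r -= step * H[:, n]
        if smooth:                                 # crude local smoothing of the estimate
            f_hat = np.convolve(f_hat, np.ones(3) / 3.0, mode="same")
    return f_hat


f_unsmoothed = fit_to_projections(H, g_a, smooth=False)
f_smoothed = fit_to_projections(H, g_a, smooth=True)
# Data agreement and null-space content of the two estimates; the unsmoothed fit
# typically agrees with g_a more closely.
print(np.linalg.norm(g_a - H @ f_unsmoothed), np.linalg.norm(N_proj @ f_unsmoothed))
print(np.linalg.norm(g_a - H @ f_smoothed), np.linalg.norm(N_proj @ f_smoothed))
```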

Fig. 5. The (a) null space and (b) measurement space of the reconstructed image shown in Fig. 4.

With our new technique, however, we did not smooth the estimates, but allowed Hf̂ to reach maximum agreement with g. The resulting f̂ is given in Fig. 4. This estimate has very good agreement with the data collected by the imaging system but, clearly, little can be determined from it, not even the phantom imaged in the experiment.

We then calculated Mf̂ (Fig. 5(a)) and Nf̂ (Fig. 5(b)) with respect to the H used for reconstruction. It is seen that Mf̂ conveys more information about the object being imaged than f̂ alone, and that Nf̂ only served to confound the original estimate. We draw from this experiment the somewhat obvious conclusion that a reconstruction algorithm should not be allowed to proceed heedless of the null space.

4. Conclusions

We have derived an iterative algorithm for separating null- and measurement-space components of a given digitized phantom. We have presented methods for applying this algorithm to analyze imaging-system design and determine the cause of problems found in reconstructed images. Our algorithm has also served to demonstrate reconstruction deficiencies associated with the null space.

References and links

1. D.R. Gilland, B.M.W. Tsui, C.E. Metz, R.J. Jaszczak, and J.R. Perry, “An evaluation of maximum likelihood-EM reconstruction for SPECT by ROC analysis,” J. Nucl. Med. 33, 451–457 (1992).

2. X-L. Xu, J-S. Liow, and S.C. Strother, “Iterative algebraic reconstruction algorithms for emission computed tomography: A unified framework and its application to positron emission tomography,” Med. Phys. 20, 1675–1684 (1993).

3. S.C. Moore, S.P. Kijewski, S.P. Muller, and B.L. Holman, “SPECT Image Noise Power: Effects of Nonstationary Projection Noise and Attenuation Compensation,” J. Nucl. Med. 29, 1704–1709 (1988).

4. H.H. Barrett, D.W. Wilson, and B.M.W. Tsui, “Noise properties of the EM algorithm: I. Theory,” Phys. Med. Biol. 39, 833–846 (1994).

5. O.N. Strand, “Theory and methods related to the singular-function expansion and Landweber’s iteration for integral equations of the first kind,” SIAM J. Numer. Anal. 11, 798–825 (1974).

6. K.M. Hanson, “Bayesian and Related Methods in Image Reconstruction from Incomplete Data,” in Image Recovery, Theory and Application, Henry Stark, ed. (Academic, Orlando, Fla., 1987).

7. H.H. Barrett and D.W. Wilson, “An algorithm to calculate null and measurement spaces,” http://www.radiology.arizona.edu/~fastspec/nul_spc.pdf

8. P. Gilbert, “Iterative Methods for the Three-Dimensional Reconstruction of an Object from Projections,” J. Theor. Biol. 36, 105–117 (1972).

9. H.H. Barrett, “The FASTSPECT imaging system,” http://www.radiology.arizona.edu/~fastspec.

10. I. Pang, “Wedge phantom for resolution measurement,” http://www.radiology.arizona.edu/~fastspec/wedge.htm.
