
Fluorescence lifetime optical tomography with Discontinuous Galerkin discretisation scheme

Open Access

Abstract

We develop a discontinuous Galerkin framework for solving the direct and inverse problems of fluorescence diffuse optical tomography in turbid media. We show the advantages and disadvantages of this method by comparing it with a previously developed framework based on the finite volume discretization. The reconstruction algorithm was applied to a time-gated experimental dataset acquired by imaging a highly scattering cylindrical phantom concealing small fluorescent tubes. Optical parameters, quantum yield and lifetime were simultaneously reconstructed. Reconstruction results are presented and discussed.

©2010 Optical Society of America

1. Introduction

Fluorescence Diffuse Optical Tomography (fDOT) aims to visualize and quantitatively characterize molecular events from noninvasive boundary measurements. Fluorescence emissions from specifically designed markers reveal biochemical processes at the molecular level that can be exploited by tomographic imaging. Combining new fluorescent agents with diffuse optical imaging in animal models is likely to become a very powerful tool in the development of new drugs and the study of biochemical processes [1]. In addition, fluorescence lifetime imaging [2–8] is a particularly useful technique because there are many reactions and dynamics of molecular processes that take place on the same time scale as the lifetime of the excited state. The measurement of fluorescence lifetime can provide information concerning the local fluorophore environment in biological tissues. Moreover, protein interactions and conformational changes can be demonstrated using Förster resonant energy transfer, for which fluorescence lifetime provides a reliable read-out [9–13].

In this paper we develop a Discontinuous Galerkin (DG) framework [14, 15] for solving the direct and inverse problems in fDOT and apply this methodology to fluorescence lifetime imaging. DG combines the Finite Volume (FV) numerical scheme and the Finite Element Method (FEM). The most valuable feature of DG is the ease of mesh adaptation inherited from FV, which is crucial for three-dimensional problems. Secondly, the DG method effectively handles discontinuities in the solution, which may inherently be present in media having, for instance, internal refractive index mismatches [16]. On the other hand, applying the DG numerical scheme to inverse problems has several disadvantages. One of them is the rather high computational cost due to the larger number of degrees of freedom it introduces. We show the advantages and disadvantages of the DG method by comparing it with a previously developed framework based on the FV discretization [17].

The lifetime reconstruction requires time-dependent information describing the evolution of the physical system. Acquired time-dependent data can be Fourier transformed with respect to time to give the equivalent Fourier-domain data at multiple harmonic samples. Reconstruction in the Fourier domain has significant advantages over time-domain reconstruction due to its simplicity [7, 17]. Our reconstruction algorithm is designed in the Fourier domain as an iterative solver of a system of equations of Helmholtz type and does not involve full ill-conditioned matrix computations. The algorithm is based on the idea of reconstructing a system of parameters iteratively by minimizing the differences of their values estimated for different projection angles and frequencies. The algorithm has been applied to a large experimental dataset acquired by use of the time-gating technique [7, 8, 18–21]. For this type of imaging, a time-gated CCD camera is placed at some distance from the scattering volume, and every pixel of the camera collects photons escaping from the imaged surface within a very short exposure time. Mapping experimental images onto the object's surface is not a straightforward procedure, because the angular dependence of the radiation leaving the surface must be taken into account. A brief discussion of this problem is also provided in this paper.

The paper is organized as follows. Mathematical and computational details of our approach are described in the next section. Section 3 is devoted to experimental details. The last section presents reconstruction results and discussions.

2. Methodology

In contrast to Optical Projection Tomography (OPT), where photons propagate along straight or curved lines, light transport in fDOT is modeled by the diffusion or telegraph equations describing the radiant energy density [22–24]. The diffusive nature of light transport in turbid media such as biological tissues presents significant difficulties in image reconstruction problems and requires the development of more sophisticated algorithms than those used in OPT. Thus, the well-known backprojection algorithm cannot be applied in fDOT. Nevertheless, an analogue of the backprojection algorithm for fDOT can be derived by constructing an appropriate cost functional. In this study, the cost functional is built as a sum of excitation and fluorescence error norms and two Lagrangian terms expressing the fact that the energy densities must satisfy the telegraph equation. In addition, the cost functional is penalized by adding one or more regularization terms.

2.1. Inverse problem

We begin by considering a simple experimental setup wherein the positions of the laser source and the CCD camera are fixed, but the object under study is rotated through an angle θ. This approach can be easily generalized if necessary. The variational problem is then formulated as a minimization problem for the cost functional ℱ:

\[ \mathcal{F} = \frac{1}{2\pi} \int \xi(\theta) \left( \Im_\theta + \mathcal{L}_\theta \right) d\theta + \Upsilon, \tag{1} \]

where the error norm is given as:

\[ \Im_\theta = \int \varsigma(\omega)\, d\omega \int_V \chi_\theta(\mathbf{r}) \left( \left| e_\theta - u_\theta \right|^2 + \left| h_\theta - v_\theta \right|^2 \right) d^3 r. \tag{2} \]

The functions uθ and vθ are the model excitation and fluorescence energy densities, respectively, corresponding to the projection angle θ; the functions eθ and hθ are the experimentally measured excitation and fluorescence energy densities at the surface of the light-scattering object, respectively. The function ξ(θ) is introduced for convenience and to emphasize the similarity with the backprojection operator. It represents the source distribution, which for the case of point sources, as used in this paper, is

\[ \xi(\theta) = \sum_{0 \le n < N} \delta(\theta - \theta_n), \tag{3} \]

where N is the number of source-camera positions. Similarly, the functions χθ and ς represent sampling of measurements in space and frequency

\[ \chi_\theta(\mathbf{r}) = \sum_{0 \le m < M} a_m \delta(\mathbf{r} - \mathbf{r}_{\theta m}), \qquad \varsigma(\omega) = \sum_{0 \le l < L} \delta(\omega - \omega_l), \tag{4} \]

where M is the number of discrete points on the imaged phantom's surface; L denotes the number of samples in the Fourier domain (ω); the vectors rθm denote the surface points visible to the CCD camera corresponding to the excitation source at rθ. The factors am are the surface areas around rθm such that ∫ χθ(r) d³r gives the total visible area. The form of ℑθ is chosen in order to simplify the variational procedure. Thus, the function χθ allows us to replace a sum over surface points visible to the CCD camera with a volume integral. Analogously, the function ς replaces a sum over samples in the Fourier domain with an integral.
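To make the sampled error norm concrete, the following minimal Python sketch (all array names, shapes and values are our own illustrative assumptions, not part of the original implementation) evaluates ℑθ for one projection angle as a weighted sum over visible surface points and Fourier-domain samples, mirroring Eq. (2) with the δ-function sampling of Eqs. (3)–(4):

```python
import numpy as np

def error_norm_theta(e, h, u, v, areas):
    """Sampled error norm for one projection angle, a sketch of Eq. (2).

    e, h : measured excitation/fluorescence data on the visible surface,
           complex arrays of shape (L, M) = (frequencies, surface points)
    u, v : modelled excitation/fluorescence energy densities at the same
           points and frequencies, complex arrays of shape (L, M)
    areas: surface areas a_m around the M visible points, shape (M,)
    """
    misfit = np.abs(e - u) ** 2 + np.abs(h - v) ** 2   # |e - u|^2 + |h - v|^2
    return float(np.sum(misfit @ areas))               # sum over omega_l and r_theta_m

# toy usage: L = 2 Fourier samples, M = 100 visible surface points
rng = np.random.default_rng(0)
shape = (2, 100)
e = rng.normal(size=shape) + 1j * rng.normal(size=shape)
h = rng.normal(size=shape) + 1j * rng.normal(size=shape)
u = np.zeros(shape, complex)
v = np.zeros(shape, complex)
areas = np.full(100, 0.5)                              # a_m in mm^2 (arbitrary)
print(error_norm_theta(e, h, u, v, areas))
```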

The Lagrangian terms in Eq. (1) are denoted by ℒθ and explicitly given by

\[ \mathcal{L}_\theta = \mathrm{Re} \int \varsigma(\omega) \left\langle \psi_\theta,\, \Lambda u_\theta - \frac{\rho_\theta}{c} \right\rangle d\omega + \mathrm{Re} \int \varsigma(\omega) \left\langle \varphi_\theta,\, \Lambda v_\theta - q u_\theta \right\rangle d\omega. \tag{5} \]

In Eq. (5), 〈·, ·〉 denotes the inner product; the functions ψθ and φθ are Lagrange multipliers satisfying the same boundary conditions as the energy densities. The Helmholtz operator Λ is given by

\[ \Lambda = -\nabla \cdot \kappa \nabla + \sigma, \tag{6} \]

where

\[ \kappa = \left[ 3 \left( \mu_s + \mu_a + \frac{i\omega}{c} \right) \right]^{-1}, \tag{7} \]
\[ \sigma = \mu_a \left( 1 + \frac{i\omega}{c\mu_a} \right), \tag{8} \]

c is the speed of light in the medium; ρθ is the power of the excitation source at rθ. The unknown reduced scattering and absorption coefficients are denoted as μs and μa, respectively. The function q depends on the unknown quantum yield, η, the absorption coefficient, and the lifetime, τ, in the medium as

\[ q = \eta \mu_a (1 + i\omega\tau)^{-1}. \tag{9} \]

To avoid confusion of terminology we term the quantity ημa the "fluorescence efficiency". The quantum yield, η, is understood here as the ratio of two energy densities: the energy density of re-emitted fluorescent photons to the energy density of absorbed excitation photons. The energy loss due to the Stokes shift is already included in this definition.

Note that, unlike the diffusion approximation, the telegraph equation assumes a dependence of the diffusion coefficient, κ, on the frequency, ω. For low frequencies, |ω/(cμs)| ≪ 1, and for μs ≫ μa this dependence is negligible. However, for |ω| above a few gigahertz and in the presence of regions with relatively low values of μs this dependence becomes important.
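As a small worked illustration of Eqs. (7)–(9) (our own sketch; the value of the speed of light in the medium and the parameter values are assumed, not taken from the experiment), the coefficients can be evaluated as:

```python
import numpy as np

def telegraph_coefficients(mu_s, mu_a, omega, c=2.1e2):
    """kappa and sigma of the operator Lambda, Eqs. (7)-(8).

    mu_s, mu_a in mm^-1, omega in rad/ns, c in mm/ns (here ~210 mm/ns for a
    refractive index of about 1.4 -- an assumed value).
    """
    kappa = 1.0 / (3.0 * (mu_s + mu_a + 1j * omega / c))
    sigma = mu_a + 1j * omega / c              # = mu_a * (1 + i*omega/(c*mu_a))
    return kappa, sigma

def fluorescence_factor(eta, mu_a, tau, omega):
    """q = eta * mu_a / (1 + i*omega*tau), Eq. (9); tau in ns."""
    return eta * mu_a / (1.0 + 1j * omega * tau)

# background optical parameters of the phantom at a 500 MHz modulation frequency
omega = 2.0 * np.pi * 0.5                      # rad/ns
print(telegraph_coefficients(0.83, 0.01, omega))
print(fluorescence_factor(0.2, 0.01, 1.2, omega))   # eta and tau are assumed values
```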

Next, we define a vector containing the unknown optical and fluorescent parameters, x = (μs, μa, ημa, τ)T, at every point of the domain and choose a dynamic form of the regularization term, ϒ(x), depending on the k-th iteration as

\[ \Upsilon(\mathbf{x}) = \sum_{1 \le i \le 4} \alpha_i \int \varsigma(\omega) \left| x^i_{k+1} - x^i_k \right|^2 d\omega, \tag{10} \]

where xik is the i-th component of xk, and αi are Tikhonov regularization parameters.

Rather than applying the Gauss-Newton algorithm [25, 26] for solving the minimization problem for ℱ, we compute variations of this functional with respect to the unknown functions [17, 27, 28]. Setting these variations to zero results in a system of partial differential equations for the energy densities, the adjoint energy densities (Lagrange multipliers), and the optical and fluorescence parameters. This system is given below:

\[ \Lambda u_\theta = \frac{\rho_\theta}{c}; \qquad \Lambda v_\theta = q u_\theta, \tag{11} \]
\[ \Lambda \varphi_\theta^* = 2 \chi_\theta(\mathbf{r}) \left( h_\theta^* - v_\theta^* \right), \tag{12} \]
\[ \Lambda \psi_\theta^* = 2 \chi_\theta(\mathbf{r}) \left( e_\theta^* - u_\theta^* \right) + q \varphi_\theta^*, \tag{13} \]
\[ x^i_{k+1} = x^i_k + \frac{1}{\alpha_i} \frac{1}{2\pi} \int \xi(\theta)\, f_i(\theta, \mathbf{x}_k)\, d\theta, \tag{14} \]

where the asterisk denotes complex conjugation and the functions fi(θ, xk) are given by

\[ f_1 = 3\, \mathrm{Re} \left[ \kappa^2 \left( \nabla\varphi_\theta^* \cdot \nabla v_\theta + \nabla\psi_\theta^* \cdot \nabla u_\theta \right) \right], \tag{15} \]
\[ f_2 = \mathrm{Re} \left( \varphi_\theta^* v_\theta + \psi_\theta^* u_\theta \right) + \eta f_3, \tag{16} \]
\[ f_3 = \mathrm{Re}\, \vartheta, \qquad f_4 = \omega\, \mathrm{Im}(q \vartheta), \tag{17} \]
\[ \vartheta = (1 + i\omega\tau)^{-1} \varphi_\theta^* u_\theta. \tag{18} \]

Note that Eq. (14) contains integration over projection angles θ. Insertion of the function ξ(θ), Eq. (3), into Eq. (14) allows us to rewrite this equation in the form

\[ x^i_{k+1} = \left[ x^i_{s,k} + \frac{1}{\alpha_i} f_i(\theta_s, \mathbf{x}_k) \right] + \frac{1}{\alpha_i} \sum_{n=s+1}^{N-1} f_i(\theta_n, \mathbf{x}_k), \tag{19} \]

where xi0,k = xik, and we define a subsequence of the k-th iteration by letting xk depend on the projection angle θs as

\[ x^i_{s+1,k} = x^i_{s,k} + \frac{1}{\alpha_i} f_i(\theta_s, \mathbf{x}_{s,k}). \tag{20} \]

Next, we let the index s run over the projection angles in an arbitrary order and associate the index k with samples in the Fourier domain. In this form Eq. (20) is a variant of the Landweber-Kaczmarz method [29].

Equations (11)–(14) are the Karush-Kuhn-Tucker conditions for the optimization of ℑθ subject to the constraints expressed by ℒθ [30]. The equations satisfied by the adjoint energy densities, φθn* and ψθn*, are Helmholtz equations (Fourier images of the corresponding telegraph equations) with source terms placed on the part of the scattering object's surface visible to the CCD camera. The source amplitudes are the differences between the recorded and computed energy densities. Equations (15)–(18) are used for the reconstruction of the optical and fluorescent parameters. The physical meaning of these equations can be understood by introducing the most probable (averaged) photon propagation paths, i.e. those paths that provide the largest contribution to the recorded or computed energy densities on the surface of the scattering object for a given source. For instance, for a single source-detector pair the averaged photon propagation paths can be visualized as banana-like distributions connecting source and detector [31, 32]. Thus, the parameters are updated iteratively by summing up backprojected differences between recorded and computed energy densities on the visible part of the surface while stepping through projection angles, θn, and frequencies, ωl. The parameters 1/αi in Eq. (20) are computed at each iteration step as described in [17]. Iterations are terminated when ℑθ + ϒ attains its global minimum. Notice that this approach can be used even for a single frequency; for example, in the time-independent case (ω = 0) the quantum yield and optical parameters can still be reconstructed. The overall structure of the resulting iteration is sketched below.
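The following sketch summarizes the Kaczmarz-style sweep over projection angles and Fourier samples implied by Eqs. (11)–(18) and (20). The solver object and its methods are placeholders introduced purely for illustration; they are not the authors' actual interfaces, and the stopping rule is simplified.

```python
import numpy as np

PARAMS = ('mu_s', 'mu_a', 'eta_mu_a', 'tau')

def reconstruct(x, data, angles, omegas, solver, n_sweeps=20):
    """Kaczmarz-style parameter update, a sketch of Eq. (20).

    x      : dict of parameter maps, one array per entry of PARAMS
    data   : data[(theta, omega)] -> (e_theta, h_theta), measured surface data
    solver : placeholder object providing direct/adjoint Helmholtz solves,
             the update functions f_i and the step lengths 1/alpha_i (as in [17])
    """
    for k in range(n_sweeps):
        omega = omegas[k % len(omegas)]                 # index k runs over Fourier samples
        for theta in angles:                            # projections in arbitrary order
            e, h = data[(theta, omega)]
            u = solver.excitation(x, theta, omega)                 # Eq. (11), excitation
            v = solver.fluorescence(x, u, omega)                   # Eq. (11), fluorescence
            phi = solver.adjoint_fluorescence(x, h - v, omega)     # Eq. (12)
            psi = solver.adjoint_excitation(x, e - u, phi, omega)  # Eq. (13)
            f = solver.update_functions(x, u, v, phi, psi, omega)  # Eqs. (15)-(18)
            for i, name in enumerate(PARAMS):
                x[name] = x[name] + solver.step_length(name) * f[i]  # Eq. (20)
        if solver.cost(x, data) < solver.tolerance:     # stop near the minimum of the cost
            break
    return x
```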

2.2. Implementation

We compute the functions fi by solving Helmholtz and adjoint Helmholtz equations numerically. The direct solver employs the DG method, which is outlined below. Let us start with the Helmholtz equation written as a system of two first order equations:

\[ \nabla \cdot \mathbf{q} + \sigma u = \frac{\rho}{c}, \tag{21} \]
\[ \mathbf{q} = -\kappa \nabla u. \tag{22} \]

Equations (21)–(22) are supplied with boundary conditions according to the approximate intensity

\[ I \approx \frac{c}{4\pi} u + \frac{3c}{4\pi} \mathbf{q} \cdot \mathbf{s}. \tag{23} \]

That is, I(s·n < 0) = γ I(s·n > 0) at the open boundary of the scattering domain, where n is the surface normal; s is the unit vector in a particular direction; cq is the energy flux; and γ is a constant depending on the refractive index mismatch [33]. Furthermore, the computational domain is divided into cells, where the solution of Eqs. (21)–(22) is expanded over shape functions ϕi(r) satisfying the completeness condition Σiϕi = 1 inside a cell. For the sake of computational performance we represent the optical parameters as piecewise constant functions, following the Finite Volume framework, while all other functions are expanded over a piecewise linear basis. Therefore, the energy density and the source term are represented as

\[ u = u_i \phi_i, \qquad \rho = \rho_i \phi_i. \tag{24} \]

Here, and in the rest of this section, a summation over repeated indices is assumed. Multiplying Eq. (21) by ϕj and integrating over the cell's volume we arrive at a weak formulation of the direct problem in the local form

\[ w_j + \kappa a_{ij} u_i + \sigma b_{ij} u_i = b_{ij} \frac{\rho_i}{c}, \tag{25} \]

where the matrix elements are given by

\[ w_j = \int_{\partial V} (\mathbf{q} \cdot \mathbf{n})\, \phi_j\, d^2 r, \tag{26} \]
\[ a_{ij} = \int_V \left( \nabla\phi_i \cdot \nabla\phi_j \right) d^3 r, \tag{27} \]
\[ b_{ij} = \int_V \phi_i \phi_j\, d^3 r. \tag{28} \]

Here V denotes the cell's volume, n is its outward normal, and ∂V denotes the cell's interface. At this stage the local form, Eq. (25), consists of a system of uncoupled equations. Coupling between cells is provided by replacing q with the interface flux, which is derived below.
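Before turning to the interface flux, the uncoupled local matrices themselves can also be evaluated by quadrature rather than by the semi-analytical tensors described later in this section. The following self-contained sketch (our own illustration, not the authors' code) computes aij and bij of Eqs. (27)–(28) for a single hexahedral cell with the trilinear shape functions given below in Eq. (39):

```python
import itertools
import numpy as np

SIGNS = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))   # 8 vertex sign triples

def shape(xi):
    """Trilinear shape functions of Eq. (39) and their reference-space gradients."""
    n = 0.125 * np.prod(1.0 + SIGNS * xi, axis=1)                   # phi_i, shape (8,)
    dn = np.empty((8, 3))
    for d in range(3):
        others = [a for a in range(3) if a != d]
        dn[:, d] = 0.125 * SIGNS[:, d] * np.prod(1.0 + SIGNS[:, others] * xi[others], axis=1)
    return n, dn

def cell_matrices(verts, order=2):
    """Local stiffness a_ij and mass b_ij of Eqs. (27)-(28) by Gauss-Legendre quadrature."""
    pts, wts = np.polynomial.legendre.leggauss(order)
    a, b = np.zeros((8, 8)), np.zeros((8, 8))
    for (p, wp), (q, wq), (r, wr) in itertools.product(zip(pts, wts), repeat=3):
        n, dn = shape(np.array([p, q, r]))
        J = dn.T @ verts                          # Jacobian rows: d r / d xi_a
        grad = np.linalg.solve(J, dn.T).T         # physical-space gradients of phi_i
        w = wp * wq * wr * np.linalg.det(J)
        a += w * grad @ grad.T                    # int grad(phi_i) . grad(phi_j) d^3r
        b += w * np.outer(n, n)                   # int phi_i phi_j d^3r
    return a, b

# unit-cube cell whose vertex ordering matches the sign triples above
a, b = cell_matrices(0.5 * (SIGNS + 1.0))
print(b.sum())   # equals the cell volume (1.0) by the completeness condition sum_i phi_i = 1
```

For the affine cell of this example the 2-point rule is exact; general hexahedra may call for a higher order.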

As the next step, it is convenient to introduce some useful notation. Let us define jumps and averages across cells' interfaces as

\[ [u] = u - u', \quad \{u\} = \tfrac{1}{2}(u + u'), \qquad [\mathbf{q}] = \mathbf{q} - \mathbf{q}', \quad \{\mathbf{q}\} = \tfrac{1}{2}(\mathbf{q} + \mathbf{q}'), \tag{29} \]

where the primes, u′ and q′, denote the corresponding quantities in the neighboring cell. A sum of the fluxes, Eq. (26), at a shared cell interface, together with the observation that n = −n′, results in

\[ w_j + w'_j = \int_{\partial V} [\mathbf{q} \phi_j] \cdot \mathbf{n}\, d^2 r. \tag{30} \]

This expression is further transformed into the following

\[ w_j + w'_j = \int_{\partial V} \left( \{\mathbf{q}\} [\phi_j] + [\mathbf{q}] \{\phi_j\} \right) \cdot \mathbf{n}\, d^2 r. \tag{31} \]

Noting that the condition [q]·n = 0 is satisfied for the exact solution, and taking into account the flux equation q = −κui∇ϕi, Eq. (31) simplifies to

\[ w_j + w'_j = -\int_{\partial V} \{ \kappa u_i \nabla\phi_i \} \cdot \mathbf{n}\, [\phi_j]\, d^2 r. \tag{32} \]

According to this equation we write the term wj, Eq. (26), in the form

\[ w_j = u_i f_{ij} + u_{i'} f_{i'j}, \tag{33} \]

where the interface flux contributions to the j-th row and the i-th and i′-th columns of the system matrix are

\[ f_{ij} = -\frac{1}{2} \kappa \int_{\partial V} \phi_j\, \mathbf{n} \cdot \nabla\phi_i\, d^2 r, \qquad f_{i'j} = -\frac{1}{2} \kappa' \int_{\partial V} \phi_j\, \mathbf{n} \cdot \nabla\phi_{i'}\, d^2 r. \tag{34} \]

Expression (33) has a simple meaning as a sum of outgoing and incoming fluxes through the shared cell interface. If the cell interface belongs to the open boundary ∂Ω, then the boundary conditions are applied:

\[ w_j = \frac{1}{3} \left( \frac{1-\gamma}{1+\gamma} \right) u_i \int_{\partial\Omega} \phi_i \phi_j\, d^2 r. \tag{35} \]

The numerical flux q̂ = −{κui∇ϕi} in Eq. (32), which replaces q in Eq. (26), is the so-called Bassi-Rebay interface flux [34]. The scheme with the Bassi-Rebay flux is stable for polynomial interpolation of the shape functions ϕi of degree higher than 1. For a linear interpolation the scheme is unstable and must be regularized. For this purpose we impose a set of constraints requiring the solution to be continuous across the cells' interfaces, i.e. [u] = 0, and add the following "zero-term" to the right-hand side of Eq. (32):

\[ v_j + v'_j = \beta \int_{\partial V} \{ \kappa \nabla\phi_j \} \cdot \mathbf{n}\, [u]\, d^2 r + \delta \int_{\partial V} [\phi_j][u]\, d^2 r, \tag{36} \]

where β ∈ {−1, 0, 1} and δ ∈ ℝ+ are penalty parameters. In the same way as before we identify vj in this expression as

\[ v_j = u_i e_{ij} + u_{i'} e_{i'j}, \tag{37} \]

where

\[ e_{ij} = \frac{\beta}{2} \kappa \int_{\partial V} \phi_i\, \mathbf{n} \cdot \nabla\phi_j\, d^2 r + \delta \int_{\partial V} \phi_i \phi_j\, d^2 r, \qquad e_{i'j} = \frac{\beta}{2} \kappa \int_{\partial V} \phi_{i'}\, \mathbf{n} \cdot \nabla\phi_j\, d^2 r - \delta \int_{\partial V} \phi_{i'} \phi_j\, d^2 r. \tag{38} \]

These penalty terms improve the condition number of the system matrix.
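The face integrals entering fij and eij can likewise be evaluated numerically. The sketch below (again our own illustration; the prefactors and signs of Eqs. (34) and (38) are deliberately left out) computes the two ingredients ∫ ϕj n·∇ϕi d²r and ∫ ϕi ϕj d²r over the ζ = +1 face of a hexahedral cell by two-dimensional Gauss quadrature:

```python
import itertools
import numpy as np

SIGNS = np.array(list(itertools.product((-1.0, 1.0), repeat=3)))

def shape(xi):
    """Trilinear shape functions of Eq. (39) and their reference-space gradients."""
    n = 0.125 * np.prod(1.0 + SIGNS * xi, axis=1)
    dn = np.empty((8, 3))
    for d in range(3):
        others = [a for a in range(3) if a != d]
        dn[:, d] = 0.125 * SIGNS[:, d] * np.prod(1.0 + SIGNS[:, others] * xi[others], axis=1)
    return n, dn

def face_integrals(verts, order=2):
    """S[j, i] = int phi_j (n . grad phi_i) d^2r and Mf[j, i] = int phi_j phi_i d^2r
    over the zeta = +1 face of one hexahedral cell (other faces are analogous)."""
    pts, wts = np.polynomial.legendre.leggauss(order)
    S, Mf = np.zeros((8, 8)), np.zeros((8, 8))
    for (p, wp), (q, wq) in itertools.product(zip(pts, wts), repeat=2):
        n_ref, dn = shape(np.array([p, q, 1.0]))
        J = dn.T @ verts                              # rows: d r/d xi, d r/d eta, d r/d zeta
        surf = np.cross(J[0], J[1])                   # area element vector of the face
        normal = surf / np.linalg.norm(surf)          # outward normal of the zeta = +1 face
        grad = np.linalg.solve(J, dn.T).T             # physical gradients of phi_i
        w = wp * wq * np.linalg.norm(surf)
        S += w * np.outer(n_ref, grad @ normal)       # phi_j times n . grad(phi_i)
        Mf += w * np.outer(n_ref, n_ref)              # phi_j times phi_i
    return S, Mf

S, Mf = face_integrals(0.5 * (SIGNS + 1.0))           # unit-cube cell
print(Mf.sum())                                       # equals the face area (1.0)
```

Scaling S and Mf by the κ-dependent and penalty prefactors of Eqs. (34) and (38) then gives the cell's own flux and penalty blocks; the neighbouring cell's contributions are obtained in the same way from its shape functions restricted to the shared face.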

Some technical details on the implementation of the DG numerical scheme are outlined below. Firstly, a hexahedral mesh was used in this study. Each computational cell is identified by 8 vertices ri. The shape functions are chosen in the trilinear form

\[ \phi_i = \frac{1}{8} (1 \pm \xi)(1 \pm \eta)(1 \pm \zeta), \tag{39} \]

where {ξ, η, ζ} ∈ [−1, 1]. In order to compute the matrix elements we map the cell's interior and interfaces into reference domains, a cube and a square, respectively. The Jacobians of these transformations can be computed by employing quadrature formulae. However, semi-analytical expressions are used in our implementation, which involve computation of the following tensors

\[ g_{ij} = v_{lm} \int \frac{\partial\phi_l}{\partial\xi} \frac{\partial\phi_m}{\partial\eta}\, \phi_j\, \mathbf{n} \cdot \nabla\phi_i\, d\xi\, d\eta, \tag{40} \]
\[ h_{ij} = v_{lm} \int \frac{\partial\phi_l}{\partial\xi} \frac{\partial\phi_m}{\partial\eta}\, \phi_i \phi_j\, d\xi\, d\eta, \tag{41} \]

plus similar expressions for other interfaces and

\[ a_{ij} = \lambda_{lmn} \int \frac{\partial\phi_l}{\partial\xi} \frac{\partial\phi_m}{\partial\eta} \frac{\partial\phi_n}{\partial\zeta}\, \nabla\phi_i \cdot \nabla\phi_j\, d\xi\, d\eta\, d\zeta, \tag{42} \]
\[ b_{ij} = \lambda_{lmn} \int \frac{\partial\phi_l}{\partial\xi} \frac{\partial\phi_m}{\partial\eta} \frac{\partial\phi_n}{\partial\zeta}\, \phi_i \phi_j\, d\xi\, d\eta\, d\zeta, \tag{43} \]

where

\[ \lambda_{lmn} = \mathbf{r}_l \cdot (\mathbf{r}_m \times \mathbf{r}_n); \qquad v_{lm} = \mathbf{n} \cdot (\mathbf{r}_l \times \mathbf{r}_m). \tag{44} \]

The octal tree data structure naturally supports the mesh adaptation technique. Mesh adaptation with this data structure introduces two new cases to consider, namely (i) when a particular cell has several neighbors at one of its interfaces, and (ii) when the neighboring cell is coarser. The only modification needed in the DG numerical scheme to handle these cases is a modification of the incoming flux. That is, the total incoming flux into a cell having many neighbors is simply the sum of the incoming fluxes from each neighbor. The incoming flux from a coarser neighboring cell is computed as a fraction of the outgoing flux from this neighbor. Technically, this is done by representing a shape function having larger support as a linear combination of shape functions having smaller support on the intersection of their supports; a minimal sketch of this restriction is given below.
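The sketch below illustrates that last point for one quarter of a shared face (the indexing and the bisection convention are our own assumptions): the coarse neighbour's bilinear face trace is expressed exactly through the finer cell's face shape functions by evaluating it at the fine corners.

```python
import itertools
import numpy as np

SIGNS2 = np.array(list(itertools.product((-1.0, 1.0), repeat=2)))   # 4 corner sign pairs

def face_shape(xi, eta):
    """Bilinear trace of the trilinear shape functions on a cell face."""
    return 0.25 * (1.0 + SIGNS2[:, 0] * xi) * (1.0 + SIGNS2[:, 1] * eta)

def restriction(quadrant=(-1.0, -1.0)):
    """Matrix R expressing the coarse neighbour's face shape functions as a linear
    combination of the fine sub-face shape functions: psi_l = sum_i R[l, i] phi_i_fine.

    'quadrant' selects which quarter of the coarse face the fine cell covers;
    the bisection convention used here is an assumption for illustration."""
    R = np.empty((4, 4))
    for i, (sf, tf) in enumerate(SIGNS2):              # corners of the fine sub-face
        xi_c = 0.5 * (sf + quadrant[0])                # map fine face coordinates into
        eta_c = 0.5 * (tf + quadrant[1])               # the coarse face coordinates
        R[:, i] = face_shape(xi_c, eta_c)              # coarse functions at that corner
    return R

R = restriction()
print(R.sum(axis=0))     # each column sums to 1: the partition of unity is preserved
# With R in hand, the coarse neighbour's outgoing flux is restricted to the sub-face
# shared with the fine cell, giving the "fraction" mentioned in the text.
```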

It is instructive to compare the implementation of the DG method with the Finite Volume (FV) method [17]. The FV scheme is similar to the finite difference scheme but can easily be formulated for unstructured meshes. The FV scheme can be obtained from the DG scheme by replacing ui and ρi in Eq. (25) with their arithmetic averages

\[ \bar{u} = \frac{1}{8} \sum_{0 \le i < 8} u_i; \qquad \bar{\rho} = \frac{1}{8} \sum_{0 \le i < 8} \rho_i, \tag{45} \]

and using the completeness condition for the shape functions. The numerical flux q becomes 0 inside each cell and, therefore, is defined only on the cells' interfaces, where, according to Eq. (22), n·q becomes a δ-function. In the weak form n·q is found in terms of a jump of the energy density ū across the cell's interface [35, 36], see Eq. (29). Both FV and DG belong to the family of so-called shock-capturing schemes. The DG solution space belongs to broken Sobolev spaces [14], while the solution space of FV is the space of piecewise constant functions. Hence, both methods are well suited for handling discontinuities in the parameters as well as in the solution itself [16].

3. Instrumentation

A brief description of the experimental setup is provided in this section. The experimental apparatus consists of three main parts: a pulsed laser, a rotational stage, and a gated camera. The light pulses (pulse width of about 100 ps and central wavelength of 633 nm) are emitted by a diode laser (PDL800, PicoQuant, Germany) operating at a repetition rate of 80 MHz. The light is coupled to a multi-mode graded index fiber and finally focused on the sample placed on a computer-controlled rotational stage. The diffusive light, which exits the sample, is imaged by means of an objective (Schneider, Germany) with high numerical aperture (NA = 1.4) onto the sensor of the intensified CCD camera (ICCD). In order to separate the fluorescence light from the excitation light, two filters are used in front of the objective: a long-pass glass filter (Schott, Germany), with cutoff wavelength at 665 nm, and a bandpass filter (650–690 nm, XF3030 Omega Optical, Brattleboro, VT). By removing the filters, the diffusive light at the excitation wavelength is acquired.

Fig. 1. (a) Phantom; (b) Surface mesh; (c) Mesh slice showing internal mesh structure.

The ICCD camera consists of a high repetition rate image intensifier (HRI, Kentech, UK) and a 12-bit Peltier-cooled CCD camera (PCO GmbH, Germany). The image intensifier provides a fast gate (approximately 300 ps wide) that allows one to temporally sample the diffused excitation and fluorescence light exiting the phantom. The opening of the gate is synchronized with the master oscillator of the diode laser and delayed by means of a jitter-free passive delay generator (minimum temporal step of about 50 ps). Even though a shorter temporal step could be set by the delay generator, this does not provide any advantage due to the jitter of the overall system, which is approximately 30 ps [19]. Since the spatial resolution of the image intensifier is limited to about 128×128 lines, the images are acquired by binning the 1280×1024 CCD sensor into 8×8-pixel blocks. The whole set-up is placed in a light-proof black box in order to reduce stray light and secondary reflections that could reach the ICCD sensor at different delays. In order to reduce the acquisition time, the whole measurement procedure is completely automated. For every pixel of the CCD camera the decay profile of the corresponding temporal signal was fitted by the Green function in infinite space, with the amplitude, absorption and diffusion coefficients as the 3 fitting parameters. These time-resolved signals were then Fourier transformed to give complex functions that formed the dataset for reconstruction; a sketch of this reduction step is given below.
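A minimal illustration of that last step (our own sketch; the gate delays, frequencies and rectangle-rule integration are assumed details, and the fitting of the decay tails mentioned above is omitted):

```python
import numpy as np

def fourier_samples(frames, delays, freqs_mhz):
    """Reduce a stack of time-gated images to complex Fourier-domain data.

    frames    : real array of shape (T, ny, nx), one image per gate delay
    delays    : equally spaced gate delays in ns, shape (T,)
    freqs_mhz : modulation frequencies in MHz at which to sample the spectrum
    returns   : complex array of shape (len(freqs_mhz), ny, nx)
    """
    dt = delays[1] - delays[0]
    out = []
    for f in freqs_mhz:
        omega = 2.0 * np.pi * f * 1e-3                        # rad/ns
        kernel = np.exp(1j * omega * delays)[:, None, None]   # e^{i*omega*t} per gate
        out.append(np.sum(kernel * frames, axis=0) * dt)      # rectangle-rule integral
    return np.array(out)

# toy usage: 41 gates at 50 ps steps and a 16x16 image with a 1.2 ns exponential decay
delays = 0.05 * np.arange(41)                                 # ns
frames = np.exp(-delays / 1.2)[:, None, None] * np.ones((41, 16, 16))
data = fourier_samples(frames, delays, freqs_mhz=[0.0, 500.0, 750.0])
print(data.shape, data[1, 0, 0])
```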

The phantom diagram is shown in Fig. 1.

Fig. 2. (a) Image recorded by the CCD camera; (b) corrected image; (c) computed energy density on the surface of a homogeneous cylinder. Intensities of all images are scaled by 10^5. Images are shown at ω = 0. The source is located on the opposite side of the visible part of the surface.

It is a solid homogeneous cylinder, 40 mm in diameter and 100 mm in height. The phantom was made of toner and TiO2 powder dispersed in the hardener [37]. It has tissue-like optical parameters, μa = 0.01 mm−1 and μs = 0.83 mm−1, which were measured by a time-resolved spectrophotometer for turbid media based on the Time Correlated Single Photon Counting technique [38]. The phantom has three tubes; two of them were filled with fluorophore (Nile Blue dissolved in methanol) and one was filled with an absorber only. One fluorescent tube is truncated, which makes the inverse problem three-dimensional. All tubes are placed 10 mm off the phantom's axis. Tubes A and C contain fluorophore. The concentration of fluorophore in tube C was four times higher than in tube A (10−5 M in tube C and 2.5 × 10−6 M in tube A); however, the quantum yield is the same in both tubes. Tube B was filled with absorber (μa = 0.04 mm−1) and its reduced scattering coefficient, μs, was set to the background value. Tube A has a value of μs half that of the background. Tubes A and B are 100 mm in height and 4 mm in diameter. Tube C has a smaller diameter of 3 mm and a shorter height of 50 mm. Its volume is approximately 3.6 times smaller than the volume of tube A and, therefore, in spite of the higher fluorophore concentration, its brightness upon excitation is comparable with that of tube A. The phantom was probed at three different heights, y = {42.5; 50.0; 67.5} mm. At each height the phantom was rotated in steps of π/6 and imaged. For each camera position 41 time windows were acquired.

Practically any reconstruction algorithm requires mapping of the experimental images onto the surface of the scattering object. Several mapping procedures have been suggested in the literature; one of them can be found in the work by J. Ripoll et al. [39]. Our mapping procedure is different from those and, therefore, we briefly outline it in the Appendix. As an illustration of our mapping method, the recorded, corrected and computed images for the first projection angle at the excitation wavelength are shown in Fig. 2.

Fig. 3. Functions fi. These functions are used for computing the fluorescence and optical parameters. Each row corresponds to a projection angle; only 3 angles are shown. Each column corresponds to a particular parameter according to Eqs. (14)–(18).

4. Results and Discussions

As mentioned above, our reconstruction algorithm, Eqs. (11)–(18), involves repeated solution of four Helmholtz equations. The excitation, fluorescence and corresponding adjoint energy densities are subsequently used for computing the functions fi from Eqs. (15)–(18). Integration of these functions over all projection angles serves as a diffusion analogue of the backprojection operator. However, unlike our previous approach [17], where this backprojection operator was computed for every frequency, we update the fluorescent and optical parameters for every projection angle, Eq. (20). This noticeably improves the performance of the algorithm without significant difference in the reconstruction results. The drawback of this improvement, however, is that the operator Λ, Eq. (6), must be updated together with the optical parameters, i.e. for each projection angle, which was not needed previously. To avoid repeated computation of the tensors [Eqs. (40)–(43)], we store them in a file. Reconstruction starts from an initial guess in which background values of μs and μa were chosen. Any physically meaningful values of the quantum yield and lifetime can serve as the initial guess; here we set η = 0.001 and τ = 0.01 ns initially everywhere in the computational domain except for the boundary, where η = 0 and τ = 0. Then, all parameters are updated iteratively according to Eq. (20). As an illustration of the algorithm, the functions fi are shown in Fig. 3 for 3 projection angles at ω = 500 MHz. Slices are taken at the middle of the phantom.

As we see in Fig. 3, the expected resolution of the reconstructed images will not be very high. The lack of resolution is a well-known problem in fDOT, in addition to the high computational cost of reconstruction. The computational cost of our algorithm mostly results from the repeated solution of Helmholtz equations, while updating the parameters does not take much time. The octal tree traversal takes O(n log8 n) operations, where n is the number of terminal tree nodes, which is much cheaper than solving four linear systems.

Fig. 4. Reconstruction results. First, second, and third rows show slices at y = 40mm; 50mm; and 60mm respectively. First column shows reconstructed reduced scattering coefficient μs, second column shows the absorption coefficient μa, third - the fluorescence efficiency ημa, and fourth - the lifetime τ.

Reconstruction results are shown in Fig. 4. Each column displays (i) the reduced scattering coefficient μs in mm−1; (ii) the absorption coefficient μa in mm−1; (iii) the fluorescence efficiency ημa in mm−1; and (iv) the lifetime τ in nanoseconds. Each row displays slices at one of three different heights, y = 40, 50, and 60 mm. Two frequencies were used in the reconstruction: 500 MHz and 750 MHz.

The value of the quantum yield η can be estimated by dividing the reconstructed fluorescence efficiency by μa. It is seen that the reduced scattering coefficient was reconstructed relatively well. Its minimal value in tube A is about 0.45 mm−1 at height y = 40 mm, which is slightly higher than the true value of 0.415 mm−1. The value of μs slightly increases with height, reaching 0.52 mm−1 at y = 60 mm. As usual, reconstruction artifacts are also present. Reconstruction of the absorption coefficient μa is far less accurate. Its value in tube B is roughly 1.5 times lower than it should be: the maximum reconstructed value is 0.027 mm−1, while the true value is 0.04 mm−1. Tubes A and C, which were filled with fluorophore, also appear in the reconstruction. This is an expected result due to the absorbing properties of fluorophores. Localization of the fluorescence efficiency appears to be relatively good. It is clearly seen that two tubes appear at height y = 40 mm and only one at heights 50 and 60 mm. The value of the quantum yield is lower than expected: dividing ημa by the reconstructed μa we obtain approximately [0.18; 0.2] at the center of tube A, while the true value is about [0.26; 0.27]. The lifetime distribution τ, the last column in Fig. 4, has quite high contrast and is well localized. Slices showing the lifetime have a background value of almost 0. Reconstructed lifetime values are close to the true value for the Nile Blue fluorophore, which is 1.2 ns: (i) at y = 40 mm the maximum lifetime is 1.17 ns; (ii) 1.29 ns at y = 50 mm; and (iii) 1.14 ns at y = 60 mm. The reconstruction in Fig. 4 shows reasonable errors in the optical and fluorescent parameters. The reconstruction error in fDOT must be attributed to the ill-conditioned nature of the inverse problem and can be much higher than presented here. It is well known that it is notoriously difficult to obtain perfectly correct parameter values in turbid media. The same arguments apply to localization and, especially, to the shapes of the targets.

Fig. 5. Reconstruction results based on the Finite Volume discretization at y = 40mm; 50mm; and 60mm respectively. First column shows reconstructed reduced scattering coefficient μs, second column shows the absorption coefficient μa, third - the fluorescence efficiency ημa, and fourth - the lifetime τ.

4.1. Finite volume method

We compare the DG method to the previously reported FV method [17]. The same octal tree data structure with the same static mesh adaptation technique was employed in reconstruction. FV reconstruction results are shown in Fig. 5.

At first sight, the resolution of the reconstructed images is lower than the resolution of the images computed by DG. This comes as no surprise because the FV scheme is only first-order accurate in space. Secondly, the image mapping adopted in FV is cruder for the same mesh resolution than in DG, because the DG scheme expands the mapped images over shape functions while FV computes average values of the pixels falling within the boundary cells' interfaces. Nevertheless, the quantitative accuracy of the reconstruction is comparable with that of the DG approach. The reduced scattering coefficient attains values of 0.40, 0.32 and 0.42 mm−1 at the middle of tube A at heights 40, 50 and 60 mm, respectively. The absorption coefficient is almost correct for tube B at y = 40 mm but decreases to 0.03 mm−1 at y = 60 mm. Tubes A and C are well separated at height 40 mm and only tube A is present at heights 50 and 60 mm in the fluorescence efficiency images, as it should be. The value of the quantum yield varies in the range [0.20; 0.28], which is consistent with the true value. The lifetime images do not show separation of tubes A and C at y = 40 mm. However, the lifetime value varies in the range [1.0; 1.28] ns, which is a reasonable result.

The main advantage of the FV technique is its performance. The FV direct solver is at least 8 times faster than the DG solver, which obviously speeds up the overall reconstruction. This was expected due to the different system matrix sizes: the number of non-zero entries of the discretized operator Λ is 64 times larger in the case of the DG discretization than in the case of FV. This estimate directly follows from the averaging procedure, Eq. (45). Secondly, the condition number of the operator Λ in FV is smaller, meaning that simple methods for solving the linear system can be applied. For instance, due to the diagonal dominance of the operator Λ in the case of FV, methods such as Jacobi or Gauss-Seidel iterations can be employed, while DG requires more sophisticated algorithms.

Notice that the same optical parameters for excitation and fluorescent light were assumed. Therefore, our reconstructions present somewhat weighted averages of the "true" values of the optical parameters corresponding to the excitation and fluorescence wavelengths. This assumption has the following reason. It is well known that in absorption-scattering tomography the problem of simultaneous reconstruction of both optical parameters is not unique at a single excitation wavelength. In fluorescence tomography this non-uniqueness is alleviated due to the presence of a fluorescence dataset. Thus, against 4 unknown functions, μs, μa, ημa and τ, we have 4 distinct datasets: (i) real and imaginary parts of the excitation energy density; (ii) real and imaginary parts of the fluorescence energy density. If we considered distinct optical parameters for excitation and fluorescence light transport, then the non-uniqueness problem would appear again. The same argument applies to the case of multiple lifetimes. Therefore, our assumption is quite reasonable even though the values of μs and μa can be noticeably different at different wavelengths, as happens in the visible part of the light spectrum. Finally, we would like to emphasize the importance of knowledge of the excitation amplitude, which is a complex number coming from the excitation source term ρ, Eq. (21). As suggested previously [17], we use a least squares fit for estimating the absolute value of the amplitude and its phase. Fitting works well when the background optical parameters are known. However, in the case of unknown background values the non-uniqueness problem appears, because two additional unknown scalar values are introduced. Thus, starting with any physically meaningful values for the background parameters, it is always possible to find a reasonably good fit for the excitation amplitude. This obviously affects the reconstruction. For instance, a higher background value of μs results in an increased value of the reconstructed ημa, while an incorrect phase affects the reconstruction of μs and τ due to the non-linearity of the inverse problem. A sketch of such an amplitude fit is given below.
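The least squares estimate of a single complex excitation amplitude has a closed form; the following sketch (our own, with simulated data standing in for the modelled and measured surface values) fits the amplitude and phase that best scale the modelled excitation density onto the measurements:

```python
import numpy as np

def fit_excitation_amplitude(u_model, e_meas):
    """Least squares complex scale factor A minimizing sum |A*u_model - e_meas|^2.

    The closed-form solution is A = <u, e> / <u, u> with the usual complex inner
    product; |A| is the amplitude and arg(A) the phase of the excitation source."""
    A = np.vdot(u_model, e_meas) / np.vdot(u_model, u_model)
    return np.abs(A), np.angle(A)

# toy usage: the "true" amplitude is 2.5 * exp(i*0.3); noise stands in for measurement error
rng = np.random.default_rng(1)
u = rng.normal(size=200) + 1j * rng.normal(size=200)
e = 2.5 * np.exp(1j * 0.3) * u + 0.01 * (rng.normal(size=200) + 1j * rng.normal(size=200))
print(fit_excitation_amplitude(u, e))          # close to (2.5, 0.3)
```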

In summary, we have demonstrated three-dimensional fluorescence lifetime imaging on the basis of the DG discretization scheme in the Fourier domain with a large experimental dataset. Optical parameters, quantum yield and lifetime were reconstructed simultaneously. The methodology was compared against FV discretization. Our previous variational formulation of the inverse problem was also improved.

Appendix

It is well known in astrophysics that the ratio of the brightness at the center of a star's disc to that at the edge is equal to 2.9 [22]. This effect is explained by taking into account the angular dependence of the radiation leaving the surface. Therefore, the distribution of brightness in the experimental images must be corrected according to that angular dependence. We employ the Schwarzschild-Schuster method [22, 40] and find an image correction factor. Although the method is well known in astrophysics, we describe its basic idea here. Let us assume an isotropically scattering layered medium and write the radiative transfer equation in the form

\[ \cos\theta\, \frac{dI(\rho,\theta)}{d\rho} = I(\rho,\theta) - B(\rho), \tag{46} \]
\[ B(\rho) = \frac{1}{2} \int_0^{\pi} I(\rho,\theta) \sin\theta\, d\theta, \tag{47} \]

where I is the intensity, ρ is the optical thickness, and θ is the polar angle. At the boundary we have I(0,θ) = 0 for θ > π/2. Next, the average intensities I1 and I2 are introduced by integrating I(ρ,θ) sinθ over θ from 0 to π/2 and from π/2 to π, respectively. Therefore, Eq. (47) becomes

\[ B(\rho) = \frac{1}{2} \left[ I_1(\rho) + I_2(\rho) \right]. \tag{48} \]

Multiplying Eq. (46) by sinθ dθ and integrating over the interval from 0 to π/2 and, subsequently, from π/2 to π, we obtain a system of two coupled ordinary differential equations for I1 and I2. Solving this system results in

\[ B(\rho) = F \left( \frac{1}{2} + \rho \right), \tag{49} \]

where F is a constant of integration. Knowledge of the function B(ρ) makes it possible to compute the intensity leaving the surface. Solving Eq. (46) we arrive at

\[ I(0,\theta) = \int_0^{\infty} B(\rho) \exp\left( -\frac{\rho}{\cos\theta} \right) \frac{d\rho}{\cos\theta} = F \left( \frac{1}{2} + \cos\theta \right). \tag{50} \]

Turning now to the constant F, we find that it is expressed by the flux of radiation. Equation (50) gives an approximate angular dependence of radiation leaving the surface. Thus, the quantity I(0,θ) defines the distribution of brightness over the image. Let us denote the ratio of brightness at the center of the image to that at the edge by r = I(0,0)/I(0,π/2). In accordance with Eq. (50), this method gives r = 3.

Next, we notice that (i) the quantity I(0,θ) cosψ is proportional to the energy collected by a pixel of the CCD camera, where ψ is the angle between the pixel's normal and the line connecting this pixel to the emitting surface [7]; (ii) the quantity F is the outgoing energy flux cn·q; and (iii) cosθ = n·s0, where s0 is the unit vector pointing to the CCD camera. Application of the boundary conditions, Eq. (23), allows us to replace n·q in Eq. (50) with (1/3)u and results in the following expression for the energy density on the cylinder surface in terms of the observed intensity:

\[ u\big|_{\mathbf{r} \in \partial V} = \frac{3}{c} \left( \frac{1+\gamma}{1-\gamma} \right) \frac{I(0,\theta)}{\frac{1}{2} + \cos\theta}. \tag{51} \]

The intensity I(0,θ) is converted to the image brightness as described in [7]. Obviously, other approximate methods for image correction can be employed as well. However, some of them are too complex, while others are less accurate near the surface. For instance, Eddington's method [41] provides the ratio r = 2.5, and the method of Chandrasekhar [42] with two discrete directions gives r = 2.73. Of course, the proper treatment of this problem requires solving the radiative transfer equation with an appropriate phase function.
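A minimal sketch of the resulting image correction (our own illustration; the values of c and γ are assumed, and the recovery of cosθ = n·s0 from the known cylinder geometry is reduced to a single pixel):

```python
import numpy as np

def surface_energy_density(I_pixel, cos_theta, gamma=0.43, c=2.1e2):
    """Corrected surface energy density from a CCD pixel value, Eq. (51).

    I_pixel   : observed intensity I(0, theta) for the surface patch seen by the pixel
    cos_theta : n . s0, cosine between the surface normal and the direction to the camera
    gamma, c  : refractive-index-mismatch constant and speed of light in the medium
                (both values here are assumed for illustration)."""
    return (3.0 / c) * ((1.0 + gamma) / (1.0 - gamma)) * I_pixel / (0.5 + cos_theta)

# the same pixel value maps to a larger energy density when viewed obliquely
for cos_theta in (1.0, 0.5, 0.0):
    print(cos_theta, surface_energy_density(1.0, cos_theta))
```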

Acknowledgments

This work was partly supported by Royal Society International Joint Project 2009/R2, EPSRC grant EP/E034950/1, and MIUR under the project PRIN2007 (prot. 2007AKK9LX/004).

References and links

1. R. E. Nothdurft, S. V. Patwardhan, W. Akers, Y. Ye, S. Achilefu, and J. P. Culver, “In vivo fluorescence lifetime tomography,” J. Biomed. Opt. 14, 024004 (2009).

2. M. A. O'Leary, D. A. Boas, X. D. Li, B. Chance, and A. G. Yodh, “Fluorescence lifetime imaging in turbid media,” Opt. Lett. 21, 158–160 (1996).

3. A. T. N. Kumar, J. Skoch, B. J. Bacskai, D. A. Boas, and A. K. Dunn, “Fluorescence-lifetime-based tomography for turbid media,” Opt. Lett. 30, 3347–3349 (2005).

4. A. T. N. Kumar, S. B. Raymond, G. Boverman, D. A. Boas, and B. J. Bacskai, “Time resolved fluorescence tomography of turbid media based on lifetime contrast,” Opt. Express 14, 12255–12270 (2006).

5. L. Zhang, F. Gao, H. He, and H. Zhao, “Three-dimensional scheme for time-domain fluorescence molecular tomography based on Laplace transforms with noise-robust factors,” Opt. Express 16, 7214–7223 (2008).

6. F. Gao, J. Li, L. Zhang, P. Poulet, H. Zhao, and Y. Yamada, “Simultaneous fluorescence yield and lifetime tomography from time-resolved transmittances of small-animal-sized phantom,” Appl. Opt. 49, 3163–3172 (2010).

7. V. Y. Soloviev, K. B. Tahir, J. McGinty, D. S. Elson, M. A. A. Neil, P. M. W. French, and S. R. Arridge, “Fluorescence lifetime imaging by using time gated data acquisition,” Appl. Opt. 46, 7384–7391 (2007).

8. V. Y. Soloviev, J. McGinty, K. B. Tahir, M. A. A. Neil, A. Sardini, J. V. Hajnal, S. R. Arridge, and P. M. W. French, “Fluorescence lifetime tomography of live cells expressing enhanced green fluorescent protein embedded in a scattering medium exhibiting background autofluorescence,” Opt. Lett. 32, 2034–2036 (2007).

9. J. R. Lakowicz, in Principles of Fluorescence Spectroscopy (Plenum Press, New York, 1999).

10. D. L. Andrews and D. S. Bradshaw, “Virtual photons, dipole fields and energy transfer: a quantum electrodynamical approach,” Eur. J. Phys. 25, 845–858 (2004).

11. J. McGinty, V. Y. Soloviev, K. B. Tahir, R. Laine, D. W. Stuckey, J. V. Hajnal, A. Sardini, P. M. W. French, and S. R. Arridge, “Three-dimensional imaging of Förster resonance energy transfer in heterogeneous turbid media by tomographic fluorescent lifetime imaging,” Opt. Lett. 34, 2772–2774 (2009).

12. V. Gaind, K. J Webb, S. Kularatne, and C. A. Bouman, “Towards in vivo imaging of intramolecular fluorescence resonance energy transfer parameters,” J. Opt. Soc. Am. A 26, 1805–1813 (2009).

13. V. Gaind, S. Kularatne, P. S. Low, and K. J. Webb, “Deep-tissue imaging of intramolecular fluorescence resonance energy-transfer parameters,” Opt. Lett. 35, 1314–1316 (2010).

14. B. Rivière, in Discontinuous Galerkin Methods for solving elliptic and parabolic equations (SIAM, Philadelphia, 2008).

15. J. S. Hesthaven and T. Warburton, in Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications (Springer, New York, 2008).

16. P. S. Mohan, V. Y. Soloviev, and S. R. Arridge, "Discontinuous Galerkin method for the forward modelling in optical diffusion tomography," Int. J. Numer. Meth. Engng., to appear (2010).

17. V. Y. Soloviev, C. D'Andrea, G. Valentini, R. Cubeddu, and S. R. Arridge, “Combined reconstruction of fluorescent and optical parameters using time-resolved data,” Appl. Opt. 48, 28–36 (2009).

18. R. Cubeddu, D. Comelli, C. D'Andrea, P. Taroni, and G. Valentini, “Time-resolved fluorescence imaging in biology and medicine,” J. Phys. D: Appl. Phys. 35, R61–R76 (2002).

19. C. D'Andrea, D. Comelli, A. Pifferi, A. Torricelli, G. Valentini, and R. Cubeddu, “Time-resolved optical imaging through turbid media using a fast data acquisition system based on a gated CCD camera,” J. Phys. D: Appl. Phys. 36, 1675–1681 (2003).

20. C. Dunsby, P. M. P. Lanigan, J. McGinty, D. S. Elson, J. Requejo-Isidro, I. Munro, N. Galletly, F. McCann, B. Treanor, B. Onfelt, D. M. Davis, M. A. A. Neil, and P. M. W. French, “An electronically tuneable ultrafast laser source applied to fluorescence imaging and fluorescence lifetime imaging microscopy,” J. Phys. D.: Appl. Phys. 37, 3296–3303 (2004).

21. J. McGinty, J. Requejo-Isidro, I. Munro, C. B. Talbot, P. A. Kellett, J. D. Hares, C. Dunsby, M. A. A. Neil, and P. M. W. French, “Signal-to-noise characterization of time-gated intensifiers used for wide-field time-domain FLIM,” J. Phys D.: Appl Phys. 42, 135103 (2009).

22. V. V. Sobolev, A Treatise on Radiative Transfer (D. Van Nostrand Company, Inc, Princeton, 1963).

23. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15, R41–R93 (1999).

24. S. R. Arridge and J. Schotland, “Optical tomography: forward and inverse problems,” Inverse Probl. 25, 123010 (2009).

25. A. Joshi, W. Bangerth, and E. M. Sevick-Muraca, “Non-contact fluorescence optical tomography with scanning patterned illumination,” Opt. Express 14, 6516–6534 (2006).

26. A. Joshi, W. Bangerth, K. Hwan, J. C. Rasmussen, and E. M. Sevick-Muraca, “Fully adaptive FEM based fluorescence tomography from time-dependant measurements with area illumination and detection,” Med. Phys. 33, 1299–1310 (2006).

27. M. Tadi, “Inverse heat conduction based on boundary measurement,” Inverse Probl. 13, 1585–1605 (1997).

28. V. Y. Soloviev, C. D'Andrea, M. Brambilla, G. Valentini, R. B. Schulz, R. Cubeddu, and S. R. Arridge, “Adjoint time domain method for fluorescent imaging in turbid media,” Appl. Opt. 47, 2303–2311 (2008).

29. S. Kaczmarz, “Approximate solution of system of linear equations,” Internat. J. Control 57, 1269–1271 (1993).

30. J. Nocedal and S. J. Wright, in Numerical Optimization (Springer-Verlag Inc., New York, 1999).

31. S. R. Arridge and M. Schweiger, “Photon measurement density functions, Part II: Finite Element results,” Appl. Opt. 34, 8026–8037 (1995).

32. S. B. Colak, D. G. Papaioannou, G. W. 't Hooft, M. B. van der Mark, H. Schomberg, J. C. J. Paasschens, J. B. M. Melissen, and N. A. A. J. van Asten, “Tomographic image reconstruction from optical projection in light-diffusing media,” Appl. Opt. 36, 180–213 (1997).

33. M. Schweiger, S. R. Arridge, M. Hiraoka, and D. T. Delpy, “The Finite Element method for the propagation of light in scattering media : boundary and source conditions,” Med. Phys. 22, 1779–1792 (1995).

34. F. Bassi and S. Rebay, “A high-order accurate discontinuous finite elements method for the numerical solution of the compressible Navier-Stokes equations,” J. Comp. Phys. 131, 267–279 (1997).

35. V. Y. Soloviev and L. V. Krasnosselskaia, “Dynamically adaptive mesh refinement technique for image reconstruction in optical tomography,” Appl. Opt. 45, 2828–2837 (2006).

36. V. Y. Soloviev, “Mesh adaptation technique for Fourier-domain fluorescence lifetime imaging,” Med. Phys. 33, 4176–4183 (2006).

37. A. Pifferi, A. Torricelli, A. Bassi, P. Taroni, R. Cubeddu, H. Wabnitz, D. Grosenick, M. Möller, R. MacDonald, J. Swartling, T. Svensson, S. Andersson-Engels, R. L. P. van Veen, H. J. C. M. Sterenborg, J. M. Tualle, H. L. Nghiem, S. Avrillier, M. Whelan, and H. Stamm, “Performance assessment of photon migration instruments: the MEDPHOT protocol,” Appl. Opt. 44, 2104–2114 (2005).

38. A. Bassi, A. Farina, C. D'Andrea, A. Pifferi, G. Valentini, and R. Cubeddu, “Portable, large-bandwidth time-resolved system for diffuse optical spectroscopy,” Opt. Express 15, 14482–14487 (2007).

39. J. Ripoll, R. B. Schulz, and V. Ntziachristos, “Free-Space propagation of diffuse light: theory and experiments,” Phys. Rev. Lett. 91, 103901 (2003).

40. A. Schuster, “Radiation through a foggy atmosphere,” Astrophys. J. 21, 1–22 (1905).

41. A. S. Eddington, in The Internal Constitution of the Stars (Cambridge University Press, Cambridge, 1926).

42. S. Chandrasekhar, in Radiative Transfer (Dover Publications, New York, 1960).
