Optica Publishing Group

Quantification and normalization of noise variance with sparsity regularization to enhance diffuse optical tomography

Open Access

Abstract

Conventional reconstruction of diffuse optical tomography (DOT) is based on the Tikhonov regularization and the white Gaussian noise assumption. Consequently, the reconstructed DOT images usually have a low spatial resolution. In this work, we have derived a novel quantification method for noise variance based on the linear Rytov approximation of the photon diffusion equation. Specifically, we have implemented this quantification of noise variance to normalize the measurement signals from all source-detector channels along with sparsity regularization to provide high-quality DOT images. Multiple experiments from computer simulations and laboratory phantoms were performed to validate and support the newly developed algorithm. The reconstructed images demonstrate that quantification and normalization of noise variance with sparsity regularization (QNNVSR) is an effective reconstruction approach to greatly enhance the spatial resolution and the shape fidelity for DOT images. Since noise variance can be estimated by our derived expression with relatively limited resources available, this approach is practically useful for many DOT applications.

© 2015 Optical Society of America

1. Introduction

Diffuse optical tomography (DOT) is a non-invasive medical imaging technique, which uses low power light in the near infrared (NIR) range to detect hemodynamic changes inside the brain. In particular, the absorption spectra of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (Hb) in the NIR range differ distinctly, resulting in the detectability of both forms of hemoglobin in the targeted cortical regions using two or more wavelengths. In general, DOT provides a noninvasive, portable, and cost-effective imaging solution with a higher temporal resolution. However, the spatial resolution and depth penetration of DOT are limited compared to functional magnetic resonance imaging (fMRI). Recently, DOT has gained considerable attention and become an important tool for functional brain imaging [1–4] to be used in various biomedical applications [5,6].

Similar to many other medical imaging modalities, reconstructing a high quality DOT image from a limited number of measurements is a great challenge. A variety of solutions have been proposed and explored to improve the spatial resolution of DOT by (1) optimally selecting optode arrangements [7–9], (2) applying different regularization terms in image reconstruction algorithms [9–11], and (3) reducing the measurement noise after noise information is calculated and used to weight/balance the measurements from different channels [12,13]. However, acquisition of the noise information usually requires a large amount of measurement samples, which might be difficult to collect in many DOT applications.

In this paper, we introduce a novel noise quantification method, which can be utilized with noise normalization and sparsity regularization to greatly improve DOT spatial resolution. This will be demonstrated in the following sections by both computer simulation and laboratory phantom results. Specifically, we have derived a noise quantification expression by associating the Rytov linear approximation of the forward diffusion model with noise analysis. The relative noise variance associated with the Rytov approximation can be computed via an equation consisting of the magnitude of the original measurements and the variance of the measurement noise. This noise quantification method is practical for most DOT applications, wherever the Rytov approximation can be applied. Furthermore, this quantification method is combined with noise normalization and sparsity regularization-based reconstruction to form an effective method for DOT image reconstruction. Noise normalization reduces distortion of the reconstructed objects, and sparsity regularization-based reconstruction improves the spatial resolution of the image. Hence, when the two are combined, the quality of the reconstructed images improves greatly compared to that of the conventional method.

The rest of the paper is organized as follows: In section 2, we will review image reconstruction algorithms for DOT. In section 3, we derive the novel method to quantify noise variance that can be used to enhance DOT image reconstruction. In Section 4, we will present both computational and experimental DOT results to illustrate the effectiveness of our new method, followed by final discussion and conclusion.

2. Review of image reconstruction algorithms for DOT

A medical imaging problem can be generally divided into two sub-problems: the forward problem, which deals with the physical model of the imaging system, and the inverse problem, which converts the measurements into an image to show hidden objects or underlying information [4].

When light propagates in biological media, scattering and absorption are two fundamental interactions between light and tissue. Given that the reduced scattering coefficient is much greater than the absorption coefficient for near infrared light, this process can be modeled as the diffusion approximation (DA) of the Boltzmann transport equation [14]. The DA of the frequency domain system is given by:

$$-\nabla \cdot \left[D(\mathbf{r})\nabla\Phi(\mathbf{r},\omega)\right] + c\mu_a(\mathbf{r})\Phi(\mathbf{r},\omega) + j\omega\Phi(\mathbf{r},\omega) = cS(\mathbf{r},\omega), \qquad (1)$$
where c is the speed of light in the medium, S is the isotropic source providing the number of photons emitted at position r, and Φ is the photon fluence rate. D is the diffusion coefficient, defined as D = c/[3(μa + μs′)], where μa is the absorption coefficient and μs′ is the reduced scattering coefficient, defined as μs′ = (1 − g)μs with g the average cosine of the scattering angle. Moreover, since μs′ >> μa in most tissues, D ≈ c/(3μs′). The continuous-wave (CW) system can be considered as a special frequency-domain system with frequency ω = 0; hence its DA can be obtained from Eq. (1) by setting ω = 0.
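As a quick numerical sanity check of the relation above, the sketch below evaluates D = c/[3(μa + μs′)] for the background optical properties used later in this paper (μa = 0.06 cm−1, μs′ = 8.2 cm−1) and confirms that the approximation D ≈ c/(3μs′) holds to better than 1%; the tissue refractive index n = 1.4 is an assumed typical value, not taken from the paper.

```python
# Diffusion coefficient D = c / [3(mu_a + mu_s')] and its approximation
# D ~ c / (3 mu_s') when mu_s' >> mu_a. Optical properties are the paper's
# background values; n = 1.4 is an assumed tissue refractive index.
c = 3.0e10 / 1.4      # speed of light in tissue (cm/s), assuming n = 1.4
mu_a = 0.06           # absorption coefficient (1/cm)
mu_sp = 8.2           # reduced scattering coefficient (1/cm)

D_exact = c / (3.0 * (mu_a + mu_sp))
D_approx = c / (3.0 * mu_sp)
rel_err = abs(D_approx - D_exact) / D_exact   # ~ mu_a / mu_s'
```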

In functional brain imaging, the changes of light scattering induced by neural activation are relatively small and therefore can be ignored without causing significant errors [3,15]. In the field of diffuse optical tomography or near infrared spectroscopy, two common forms are used to solve the inverse problem [3]. The Born approach writes U(r) = U0(r) + Usc(r), while the Rytov approach writes U(r) = U0(r) exp[Usc(r)], where U0(r) and U(r) are the spatial parts of the photon fluence rate in the homogeneous background and in the heterogeneous medium, respectively, while Usc(r) is the correction to U0 resulting from the heterogeneities. The linearized Rytov approach, i.e., U(r) = U0(r) × exp[Usc(r)], leads to a matrix equation in which the measurement term can be written as Usc(rd, rs) = ln[U(rd, rs)/U0(rd, rs)], where rs and rd represent the light source and detector positions. Since most functional brain imaging experiments measure relative changes of optical signals with respect to a baseline reference, the measured signals can be well expressed by or matched with U(rd, rs) and U0(rd, rs) for further data/image processing. Therefore, the Rytov linear approximation is a better fit for DOT brain imaging studies. The solution of Eq. (1) can be expressed by the linear Rytov approximation [16].

On the other hand, the modified Beer-Lambert law [17] can be written as I = Is10^(−μaL), and thus OD = log(Is/I) = μaL, where μa is the absorption coefficient along the optical pathlength L from a light source at position rs to a light detector at position rd, and Is and I are the incident and detected light intensities. If a relative absorption change, Δμa, occurs within the medium along the optical pathlength from the light source to the detector between two measurement states, Φ0 and Φ, the Rytov approximation is related to the modified Beer-Lambert law by the following equation [15]:

$$\frac{1}{2.3}\ln\frac{\Phi_0(r_d,r_s)}{\Phi(r_d,r_s)} = \Delta OD = OD - OD_0 = \left[\log\left(\frac{I_s}{I}\right) - \log\left(\frac{I_s}{I_0}\right)\right] = \Delta\mu_a L. \qquad (2)$$

Equation (2) can be further discretized into the following linear equation:

$$Ax = b, \qquad (3)$$
where b ∈ R^(M×1) is the vector of measured relative light intensity changes (i.e., ΔOD) from M different S-D pairs, x ∈ R^(N×1) is the vector related to the changes of absorption properties (i.e., Δμa) in the spatial domain, discretized into pixels, and A ∈ R^(M×N) is the sensing matrix referring to the sensitivity of measurements at different S-D pairs with respect to the changes in the absorption property of tissue [18].

2.1 Regularization algorithms used for DOT

Due to the limited numbers of sources and detectors used in DOT, the number of measurements obtained is much smaller than the number of pixels to be reconstructed. As a result, Eq. (3) is highly ill-posed; its solution is not unique [4,19]. Therefore, regularization of its solutions is often required. The conventional way to regularize the solutions is to use ||x||2, the l2 norm of x, defined by ||x||2 = (Σi|xi|²)^(1/2). This approach is also known as the Tikhonov regularization method [20], where the image can be reconstructed from the following equation:

$$\hat{x} = \arg\min_x \left(\|Ax - b\|_2^2 + \lambda\|x\|_2^2\right) = A^T\left(AA^T + \lambda I\right)^{-1}b, \qquad (4)$$
where A^T denotes the transpose of A, and λ is the regularization parameter that controls the balance between the data fidelity and regularization terms. Methods of selecting this parameter for DOT have been discussed extensively in [21]. Since the Tikhonov method has a closed form and is easy to implement, it is widely used.
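The closed form of Eq. (4) is straightforward to implement. The sketch below applies it to a toy underdetermined system with random stand-ins for A and b (sizes and data are illustrative, not from the paper):

```python
import numpy as np

# Toy Tikhonov reconstruction via the closed form of Eq. (4):
# x_hat = A^T (A A^T + lambda*I)^(-1) b, with far fewer measurements (M)
# than unknowns (N), as in DOT.
rng = np.random.default_rng(0)
M, N = 20, 100
A = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[[10, 50]] = 1.0                      # a sparse "image" with two active pixels
b = A @ x_true

lam = 1e-3                                  # regularization parameter
x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(M), b)
```

For small λ this approaches the minimum-l2-norm solution: it fits the data well but spreads energy over many pixels, which is the blurring behavior attributed to l2 regularization in the text.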

However, using the l2 norm as the regularization term for DOT image reconstruction over-penalizes the pixels that have near-zero coefficients, producing many pixels with unwanted small coefficients. This approach tends to blur reconstructed images and results in poor spatial resolution. As a result, a reconstructed object in DOT often appears to have low spatial resolution and unclear boundaries compared to the actual imaged object(s) and to those obtained by other medical imaging modalities (e.g., CT or MRI). Hence, one of the main challenges in DOT is to improve the spatial resolution of reconstructed images.

In general, many images in DOT problems are sparse; namely, most of the coefficients in the vector x are zero or close to zero. For example, in the application of functional brain imaging [15], a reconstructed DOT image reflects relative changes in light absorption within imaged areas between two different brain functional states. For better visualization, the region interrogated by the optical probe array is often chosen to be reasonably larger than the activation area. Thus, a large portion of the reconstructed image will have little change in absorption, i.e., be close to zero relative to the activation area. Given this condition, researchers have utilized sparsity regularization or l1 norm regularization to reconstruct DOT images to improve their spatial resolution [9,11,21,22]. Let us briefly inspect the specificity and/or suitability of each approach for DOT image reconstruction.

  • (i) Based on ref [11], the expectation maximization (EM) algorithm is employed to solve the following optimization:
    $$\hat{x} = \arg\max_x \left(\ln p(b|x) - \lambda\|x\|_1\right), \qquad (5)$$

    where p(b|x) is the conditional probability distribution function (pdf) of b given x. The results yielded by this method demonstrate better spatial resolution than the Tikhonov regularization. However, applying the EM algorithm requires statistical information of measurements in order to solve the inverse problem. Such information is easy to find in computer simulations, but it can be difficult to quantify in actual measurements due to the diversity of devices and measuring environments.

  • (ii) When the compressed sensing concept was introduced in ref [21], the measurement data was mapped into the frequency domain and yielded the following cost function:
    $$\hat{x} = \arg\min_x \left(\|Ax - b\|_2^2 + \lambda\|F(x)\|_1\right), \qquad (6)$$

    where F represents the Fourier transform of the desired image data. Hence, the regularization term in this method is Fourier coefficients of the image under the assumption that the image is sparse in its frequency domain. This assumption might be true for certain images, but as we mentioned, the images in DOT are usually sparse in the spatial domain. In general, a spatially sparse image is not sparse in its frequency domain. As a result, the use of l1 norm of image's Fourier coefficients as a regularization term is highly likely to yield blurred images.

  • (iii) Regularization with the total variation (TV) as well as with the Huber function was also discussed in ref [10]; these could better preserve the edges of targets but are mainly suitable for piecewise-constant images.
  • (iv) Sparseness of DOT images in the spatial domain and an l0 norm based compressive sensing algorithm were discussed in ref [9], with the major challenge being its non-convexity. Recently, a combination of l1 regularization and the depth compensation algorithm (DCA) has been reported [22], showing great improvement in both the spatial resolution and depth localization in 3D DOT reconstruction. A method of preconditioning the sensing matrix was discussed in [23], which can be used to improve compressive sensing based reconstruction methods.

Based on the brief review given above, it is clear that the l1 norm of image coefficients serves more effectively as the regularization term for sparse images than the l2 norm [24]. Hence, in our development, we first utilized sparsity regularization (SR) for DOT by replacing the l2 norm of x in Eq. (4) with an l1 norm term. The image reconstruction task now becomes to solve the following equation [25]:

$$\hat{x} = \arg\min_x \left(\|Ax - b\|_2^2 + \lambda\|x\|_1\right). \qquad (7)$$
Unlike Tikhonov regularization, as given in Eq. (4), a closed-form solution of Eq. (7) does not exist. Nonetheless, several iterative methods can be used to solve Eq. (7). For example, the conjugate gradient descent method was discussed in [26], the l1 regularized least squares (l1-ls) algorithm was proposed in [27] and utilized in [22], and the recent split augmented Lagrangian shrinkage algorithm (SALSA) was introduced in [28]. Among these algorithms, SALSA is specifically designed to solve the l1 norm regularized image reconstruction problem. It also has a high convergence speed, as it splits the l1 regularized least squares problem into a minimization problem of two simple quadratic functions by using the augmented Lagrangian method [28,29]. Therefore, in this work, we use the SALSA algorithm to solve Eq. (7).
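This paper solves Eq. (7) with SALSA; as a minimal self-contained illustration of l1-regularized reconstruction, the sketch below instead uses the simpler ISTA (iterative shrinkage-thresholding) scheme, which minimizes the same objective via gradient steps followed by soft-thresholding. The matrix sizes and data are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

# ISTA for min_x ||Ax - b||_2^2 + lam*||x||_1: a gradient step on the
# quadratic term followed by the soft-thresholding (shrinkage) operator,
# which is the proximal map of the l1 norm.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2           # ||A||_2^2; 2L bounds the gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # half the gradient of ||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / (2 * L))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = 1.0                   # sparse ground truth, as in DOT images
b = A @ x_true
lam = 0.1
x_hat = ista(A, b, lam)
```

Each ISTA iteration is guaranteed not to increase the objective, so the recovered x_hat fits the data better than the zero image while retaining sparsity via the shrinkage step.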

2.2 Measurement noise removal for DOT

Measurement noise always exists, masks the true response signals, and can lead to wrong conclusions when DOT is utilized to image brain activities stimulated by given tasks. In current practice, instrumental noise is usually inspected after data acquisition of functional DOT. The channels showing significant signal oscillations or a low signal to noise ratio (SNR) are removed prior to removal of physiological noises and the image reconstruction process [30,31], where SNR is calculated as 10 × log10(signal power/noise variance). However, the noise level within the remaining channels may still be substantial and vary among them. Methods to design a weighting or balancing matrix for the Tikhonov method were explored and discussed in [12,13]. In ref [13], the so-called generalized least-squares (GLS) method modified the Tikhonov method in Eq. (4) as follows:

$$\hat{x} = \arg\min_x \left(\|DAx - Db\|_2^2 + \lambda\|D_x x\|_2^2\right), \qquad (8)$$
where D and Dx are diagonal matrices with the diagonal entries being equal to the reciprocal of the corresponding standard deviation of the noise and pixel values of the reconstructed image, respectively. With the appropriate noise variance information, the GLS method improves the image quality of conventional Tikhonov-based reconstruction. However, finding appropriate noise variance for a specific measurement might be difficult in practice. Hence, a noise quantification method is needed to weigh signals from different channels at various S-D separations and consequently to promote the fidelity of reconstructed DOT images.

3. Novel development on quantification of noise variance

In practice, measurement noise is unavoidable. As in many previous reports, the noise within b given in Eq. (3) is usually treated as white Gaussian noise (WGN) [9,11,32]. However, this might not be true in actual experimental measurements when the Rytov linear approximation is applied, because WGN should be added to or included in the raw measurement data, not in the logarithm of a ratio of two raw measurements. For example, Fig. 1 shows the noise variances seen at different channels with variable S-D separations in a laboratory phantom experiment [7], where 100 data samples were collected from all channels with and without the object within the liquid phantom. The natural logarithm of the ratio between the two sets of 100 samples at the two measurement states was then calculated to obtain 100 relative measurements b [e.g., b(1) = ln(Φ0(1)/Φ(1)), b(2) = ln(Φ0(2)/Φ(2)), …]. The sample variance of these 100 data points for each channel was then calculated and is shown in Fig. 1. This figure clearly shows the variation of noise variances among different S-D separations; namely, a larger variance is obtained as the S-D separation increases, particularly when the separation is larger than 4 cm. Since the noise variance is not identical among different channels, the relative noise is no longer WGN.

Fig. 1 Noise variance of different S-D pairs from a phantom experiment.

To better analyze the noise variances of DOT measurements, in this section we first derive a quantitative expression for noise variance and then utilize it as a weight matrix to normalize the measurement signals, followed by noise-suppressed, sparsity-regularized DOT image reconstruction. The novel development reported in this paper rests on two specific aspects of DOT images: (i) varying noise variances and (ii) sparse reconstructed objects. In brief, our approach integrates (1) quantification and (2) normalization of noise variance with (3) sparsity regularization, denoted QNNVSR.

3.1 Noise variances of DOT signals

Following the concept given in Eq. (8), we modify Eq. (7) by including a weight matrix D, which normalizes noise variances of the fidelity term [13,33], as given below:

$$\hat{x} = \arg\min_x \left(\|DAx - Db\|_2^2 + \lambda\|x\|_1\right), \qquad (9)$$
where D = diag[1/σ1, 1/σ2, …, 1/σn], and σj² is the noise variance of the j-th channel, 1 ≤ j ≤ n. One can solve Eq. (9) using the same algorithm as that used to solve Eq. (7) by setting AD = DA and bD = Db, provided the values of the σj are known. Note that Eq. (9) is built on the rationales of Eqs. (7) and (8) but is different from either of them.

The noise covariance matrix is relatively easy to obtain from both simulations and phantom experiments with given objects to be imaged. In computer simulations, the statistical information is known in advance. Even in a laboratory phantom experiment, each measurement of an individual S-D pair can be acquired with relatively stable values; hence, the noise variance of each S-D pair or channel can be found with a conventional estimation method from a number of measurement samples. However, such a covariance matrix might be difficult to obtain in biological applications where the measurements are dynamic or not enough acquisition samples are available for estimating the covariance information, particularly for in vivo imaging of brain activities and functions. Hence, an alternative approach to quantify the noise variance is needed.

In functional brain imaging by DOT, researchers often measure or determine changes in light intensity or photon density at different time instances. It follows that with the linear Rytov approximation, the noise-free, relative signal changes measured by DOT [as seen in Eq. (3)] can be expressed by

$$b = \ln\frac{\Phi_0}{\Phi}, \qquad (10)$$
where both Φ0 and Φ are vectors to represent the baseline and transient measurements of light intensity or photon density from all the channels at baseline time t1 and time instant t2, respectively. By defining the measurement noise at these two time instants as w1 and w2, a noisy measurement is then given by
$$\tilde{b} = \ln\frac{\Phi_0 + w_1}{\Phi + w_2} = \ln\frac{\Phi_0}{\Phi} + \ln\frac{1 + w_1/\Phi_0}{1 + w_2/\Phi} = b + \ln\frac{1 + w_1/\Phi_0}{1 + w_2/\Phi}. \qquad (11)$$
Hence, a noisy DOT measurement can be considered as a summation of a noise-free, relative light intensity measurement vector from all channels, as given by Eq. (10), and a relative noise vector w defined below:

$$w = \ln\frac{1 + w_1/\Phi_0}{1 + w_2/\Phi} = \ln\left(1 + \frac{w_1}{\Phi_0}\right) - \ln\left(1 + \frac{w_2}{\Phi}\right). \qquad (12)$$

Mathematically, this equation implies that the relative noise vector, w, is affected not only by noises at both time instants, w1 and w2, but also by actual signals from imaged objects or activities at both temporal states, Φ0 and Φ. Consequently, the assumption that the noise variances (i.e., elements of w) of different channels at different S-D separations have a distribution of WGN [9,11,32] is not accurate or appropriate in this case.
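A quick Monte Carlo sketch of Eq. (12) makes this concrete: with the same additive Gaussian noise on the raw intensities, the variance of the relative noise w depends strongly on the signal amplitudes Φ0 and Φ, mimicking the growth of variance with S-D separation seen in Fig. 1. The amplitudes and noise level below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Variance of the relative noise w = ln(1 + w1/phi0) - ln(1 + w2/phi) for the
# same raw-noise level but different signal amplitudes (strong vs. weak channel).
rng = np.random.default_rng(2)
sigma = 0.01                                  # std of raw measurement noise (assumed)
n = 200_000

def relative_noise_var(phi0, phi):
    w1 = sigma * rng.standard_normal(n)
    w2 = sigma * rng.standard_normal(n)
    return (np.log(1 + w1 / phi0) - np.log(1 + w2 / phi)).var()

var_short = relative_noise_var(phi0=1.0, phi=1.0)   # strong signal (short S-D)
var_long = relative_noise_var(phi0=0.1, phi=0.1)    # weak signal (long S-D)
```

For the strong channel the variance is close to 2σ², while the weak channel's variance is roughly 100 times larger, so the relative noise across channels is clearly not WGN.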

As shown in ref [34], incorporating the noise variance information into image reconstruction is a meaningful step to achieve high-quality DOT. Before starting to derive a novel noise quantification method in the next sub-section, we state the following assumptions that simplify the noise quantification process when the covariance matrix of Eq. (12) is approximated:

  • (A1) Noise random variables of different S-D pairs are independent with a zero-mean Gaussian distribution, namely, the covariance matrix Cw is diagonal [34,35].
  • (A2) The measurement noise variance is much less than the measured photon intensity or fluence; namely, the SNR is high enough that the probabilities Pr(w1/Φ0 < −1) and Pr(w2/Φ < −1) are essentially zero.

The first assumption is acceptable if the noise or signal fluctuation of each light source is much smaller than the respective light intensity and each detector takes measurements independently of the others at separate locations. The second assumption is also valid since the amplitude of instrument noise is usually much less than that of the measured signal; for example, the system noise is usually assumed to be a few percent of the measured signals [5,36].

3.2 Quantification of noise variance for DOT measurements

Now, based on Eq. (12) and assumption (A1), the noise added to the i-th S-D channel is

$$w(i) = \ln\left(1 + \frac{w_1(i)}{\phi_0(i)}\right) - \ln\left(1 + \frac{w_2(i)}{\phi(i)}\right), \qquad (13)$$
whose variance is equal to the i-th diagonal entry of the covariance matrix Cw. In actual applications, the sample means of the raw measurements can be used as Φ0 and Φ, and the sample variances of the raw measurements correspond to the variances of the random variables w1(i) and w2(i).

Next, let a = w1(i)/ϕ0(i) and b = w2(i)/ϕ(i), where w1(i) and w2(i) are random variables with zero-mean Gaussian distributions and variances σ1² and σ2², respectively. Then, because ϕ0(i) and ϕ(i) are taken to be constants, a and b are also zero-mean Gaussian random variables, with variances σa² = σ1²/ϕ0²(i) and σb² = σ2²/ϕ²(i), respectively.

Therefore, Eq. (13) becomes

$$w(i) = \ln(1+a) - \ln(1+b),$$
whose variance is

$$\begin{aligned} s_{w(i)}^2 &= E[w^2(i)] - \left(E[w(i)]\right)^2 \\ &= E\left\{[\ln(1+a) - \ln(1+b)]^2\right\} - \left\{E[\ln(1+a) - \ln(1+b)]\right\}^2 \\ &= E[\ln^2(1+a)] + E[\ln^2(1+b)] - \left\{E[\ln(1+a)]\right\}^2 - \left\{E[\ln(1+b)]\right\}^2 \\ &= \mathrm{Var}[\ln(1+a)] + \mathrm{Var}[\ln(1+b)], \end{aligned}$$
where the cross terms cancel because a and b are independent.

Based on Assumption (A2), both a and b are Gaussian random variables with variances much less than 1. Next, we let

$$g(x) = \ln(1+x), \qquad (14)$$
where x can be either a or b and is normal, i.e., x ~N(0, σx2) with σx<< 1.

Applying Taylor expansion to Eq. (14) yields:

$$g(x) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} x^k}{k}, \quad \text{for} \ -1 < x < 1.$$
Hence,
$$E[g(x)] = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} E[x^k]}{k}.$$
By using the fact that the odd moments of Gaussian distribution are zero, we can obtain
$$E[g(x)] = \sum_{k \ \mathrm{even}} \frac{(-1)^{k+1}}{k} \sigma_x^k (k-1)!! = -\sum_{k=1}^{\infty} \frac{(2k-1)!!}{2k} \sigma_x^{2k}, \qquad (15)$$
where k!! is the double factorial, defined by k!! = k (k-2)… × 5 × 3 × 1 if k is a positive odd number.

Following the same steps, we have

$$E[g^2(x)] = E\left[\left(\sum_{i=1}^{\infty} \frac{(-1)^{i+1} x^i}{i}\right)\left(\sum_{j=1}^{\infty} \frac{(-1)^{j+1} x^j}{j}\right)\right] = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} \frac{(-1)^{i+j}}{ij} E[x^{i+j}] = \sum_{i+j \ \mathrm{even}} \frac{(i+j-1)!!}{ij} \sigma_x^{i+j}, \qquad (16)$$
where the last sum runs over index pairs with i + j even, since the odd moments vanish.

Since the variance of x is much less than one, Eq. (15) can be approximated as:

$$E[g(x)] \approx -\left(\frac{1}{2}\sigma_x^2 + \frac{3}{4}\sigma_x^4 + \frac{5}{2}\sigma_x^6 + O(\sigma_x^8)\right).$$
Hence,
$$E^2[g(x)] \approx \left(\frac{1}{2}\sigma_x^2 + \frac{3}{4}\sigma_x^4 + \frac{5}{2}\sigma_x^6 + O(\sigma_x^8)\right)^2 \approx \frac{1}{4}\sigma_x^4 + \frac{3}{4}\sigma_x^6 + O(\sigma_x^8).$$
Similarly, Eq. (16) can be approximated as:
$$E[g^2(x)] \approx \sigma_x^2 + \frac{11}{4}\sigma_x^4 + \frac{137}{12}\sigma_x^6 + O(\sigma_x^8).$$
Therefore, the variance of g(x) can be found from the following equation:
$$\mathrm{Var}[g(x)] = E[g^2(x)] - E^2[g(x)] \approx \sigma_x^2 + \frac{5}{2}\sigma_x^4 + \frac{32}{3}\sigma_x^6 + O(\sigma_x^8).$$
Replacing x by a and b with the expressions derived above, we have:
$$E[w(i)] = E\left\{\ln\left[1 + \frac{w_1(i)}{\phi_0(i)}\right]\right\} - E\left\{\ln\left[1 + \frac{w_2(i)}{\phi(i)}\right]\right\} \approx -\left\{\frac{1}{2}\left[\frac{\sigma_1^2}{\phi_0^2(i)} - \frac{\sigma_2^2}{\phi^2(i)}\right] + \frac{3}{4}\left[\frac{\sigma_1^4}{\phi_0^4(i)} - \frac{\sigma_2^4}{\phi^4(i)}\right] + \frac{5}{2}\left[\frac{\sigma_1^6}{\phi_0^6(i)} - \frac{\sigma_2^6}{\phi^6(i)}\right] + O(\sigma^8)\right\} \approx 0$$
and

$$\sigma_{w(i)}^2 = \mathrm{Var}[w(i)] \approx \left[\frac{\sigma_1^2}{\phi_0^2(i)} + \frac{\sigma_2^2}{\phi^2(i)}\right] + \frac{5}{2}\left[\frac{\sigma_1^4}{\phi_0^4(i)} + \frac{\sigma_2^4}{\phi^4(i)}\right] + \frac{32}{3}\left[\frac{\sigma_1^6}{\phi_0^6(i)} + \frac{\sigma_2^6}{\phi^6(i)}\right] + O(\sigma^8). \qquad (17)$$

Hence, the variance of the relative noise of one S-D channel, i, can be quantified as shown in Eq. (17), consisting of original noise variances, σ12 and σ22, and the amplitude of raw measurements, ϕ0(i) and ϕ(i), at two different time instances. Note that even if the noise terms, w1 and w2, are WGN, due to the ϕ0(i) and ϕ(i) terms, the noise variances of Eq. (12) are not necessarily the same at different channels, as derived and expressed in Eq. (17).
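Eq. (17) can be checked numerically. The sketch below draws Gaussian noise at two assumed signal amplitudes and noise levels (illustrative values, not the paper's data) and compares the Monte Carlo variance of the relative noise against the series approximation:

```python
import numpy as np

# Monte Carlo check of Eq. (17): sample variance of
# w(i) = ln(1 + w1/phi0) - ln(1 + w2/phi) vs. the series approximation
# (r1 + r2) + (5/2)(r1^2 + r2^2) + (32/3)(r1^3 + r2^3),
# where r1 = sigma1^2/phi0^2 and r2 = sigma2^2/phi^2.
rng = np.random.default_rng(4)
phi0, phi = 0.8, 0.5          # assumed signal amplitudes at the two time states
sigma1, sigma2 = 0.01, 0.02   # assumed raw-noise standard deviations
n = 1_000_000

w1 = sigma1 * rng.standard_normal(n)
w2 = sigma2 * rng.standard_normal(n)
w = np.log(1 + w1 / phi0) - np.log(1 + w2 / phi)

r1, r2 = sigma1**2 / phi0**2, sigma2**2 / phi**2
eq17_var = (r1 + r2) + 2.5 * (r1**2 + r2**2) + (32.0 / 3.0) * (r1**3 + r2**3)
mc_var = w.var()
```

Under assumption (A2) the two estimates agree to well within Monte Carlo error, supporting the truncation at O(σ⁸).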

3.3 Normalization of noise variance with sparsity regularization for DOT reconstruction

Note that in Eq. (12), the measurement noises w1 and w2 are the noise from the same S-D channel at two different time instances. If we further assume that the noise does not change significantly over time, namely σ1² = σ2² = σw², Eq. (17) simplifies to

$$\sigma_{w(i)}^2 \approx \sigma_w^2\left[\frac{1}{\phi_0^2(i)} + \frac{1}{\phi^2(i)}\right] + \frac{5}{2}\sigma_w^4\left[\frac{1}{\phi_0^4(i)} + \frac{1}{\phi^4(i)}\right] + \frac{32}{3}\sigma_w^6\left[\frac{1}{\phi_0^6(i)} + \frac{1}{\phi^6(i)}\right] + O(\sigma_w^8). \qquad (18)$$
This simplification makes the noise quantification method more practical and suitable for most DOT applications. In practice, the noise variance of each S-D channel, σw², can be measured in advance; e.g., it can be estimated from a short acquisition of N samples of the baseline measurement, whose sample variance is $\frac{1}{N-1}\sum_{k=1}^{N}(w_k - \mu)^2$, where wk is the k-th measurement sample and μ is the sample mean. Then, with Eq. (18), the noise variance of the relative measurements can be quantitatively calculated. Furthermore, it is now possible to perform noise normalization with sparsity regularization using Eq. (9). The weighting matrix D in Eq. (9) becomes:
$$D = \begin{pmatrix} \dfrac{1}{\sigma_{w(1)}} & & \\ & \ddots & \\ & & \dfrac{1}{\sigma_{w(n)}} \end{pmatrix}, \qquad (19)$$
where n is the total number of channels with different S-D separations and σw(n)² is the measurement noise variance of the n-th channel.
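The construction of D from Eqs. (18) and (19) can be sketched as follows; the per-channel intensities and the common noise level below are hypothetical illustrative values:

```python
import numpy as np

# Build the weighting matrix D of Eq. (19) from the simplified per-channel
# variance of Eq. (18), assuming a common raw-noise std sigma_w.
def qnnv_weights(phi0, phi, sigma_w):
    s2, s4, s6 = sigma_w**2, sigma_w**4, sigma_w**6
    var_w = (s2 * (1 / phi0**2 + 1 / phi**2)
             + 2.5 * s4 * (1 / phi0**4 + 1 / phi**4)
             + (32.0 / 3.0) * s6 * (1 / phi0**6 + 1 / phi**6))
    return np.diag(1.0 / np.sqrt(var_w))    # D = diag(1/sigma_w(1), ..., 1/sigma_w(n))

phi0 = np.array([1.0, 0.5, 0.1])            # mean raw intensities, strong -> weak channel
phi = np.array([0.9, 0.45, 0.09])
D = qnnv_weights(phi0, phi, sigma_w=0.01)
```

As expected, the channel with the strongest signal (lowest relative noise) receives the largest weight, so the weighted problem in Eq. (9) emphasizes high-SNR channels.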

It is noteworthy that the estimated variance of the relative noise at each channel can be considered as the noise-to-signal power ratio of that channel, since σa² = σ1²/ϕ0²(i) and σb² = σ2²/ϕ²(i). Hence, its reciprocal, the weighting coefficient given in matrix D, is closely associated with or equivalent to the SNR of each channel. As the weighting matrix D is applied in Eq. (9), measurement channels with a higher SNR gain higher weights compared to those with a low SNR. Equivalently, matrix D suppresses the signals from measurement channels with a lower SNR, thus reducing the impact of noise. It should also be noted that for applications where the original noise level cannot be estimated, one can approximately assume that noise variances are nearly identical (i.e., σ1 = σ2 = a small constant) for all channels. Then, the measured amplitudes can be used to estimate the relative noise variances and to build the weighting matrix.

In Section 3 above, we derived and explained our newly developed algorithm on quantification and normalization of noise variance with sparsity regularization, QNNVSR, for DOT image reconstruction. Specifically, in order to perform QNNVSR, the following steps need to be followed:

  • (1) Eq. (17) or (18) is used to quantify the noise variance of each measurement channel, σw(i)², based on actual experimental measurements;
  • (2) Eq. (19) is used to calculate the weight matrix D;
  • (3) Eq. (9) is used for reconstructing the DOT image, x^;
  • (4) the SALSA algorithm [28] is applied to solve Eq. (9).

4. Algorithm validation by numerical simulations and laboratory experiments

4.1 Procedures and setup for computer simulations

In the computer simulation experiment, a CW DOT system was considered. The geometry of sources and detectors is illustrated in Fig. 2, where 25 bifurcated optodes were placed as a 5 × 5 square grid over the top surface of a semi-infinite medium and centered at the origin. The distance between every two closest optodes was 1.4 cm. By using the first through sixth nearest S-D pairs, 188 channels could be obtained, consistent with the laboratory phantom measurements (see Section 4.3). White Gaussian noise with a variance of 10−4, approximately equal to the noise variance found in the phantom experiments (see Fig. 1), was added to both the background measurements, Φ0, and the inhomogeneous measurements, Φ.
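The probe geometry just described can be sketched as follows. Note that the exact grouping of separations that yields the 188 channels follows the authors' probe configuration, so the code below only builds the grid and enumerates the distinct S-D separations:

```python
import numpy as np
from itertools import combinations

# 25 optodes on a 5x5 grid with 1.4 cm spacing, centered at the origin.
# Channels are formed by pairing optodes; grouping pairs by separation
# (first, second, ... nearest) determines which channels are kept.
spacing = 1.4
coords = np.array([(spacing * (i - 2), spacing * (j - 2))
                   for i in range(5) for j in range(5)])

# Distinct S-D separations over all optode pairs (rounded to avoid FP jitter)
seps = sorted({round(float(np.hypot(*(p - q))), 6)
               for p, q in combinations(coords, 2)})
```

The smallest separation equals the grid spacing (1.4 cm), the next is the diagonal 1.4√2 cm, and so on; keeping only the first several separation groups limits the channel count, as in the paper's setup.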

Fig. 2 Probe array setup for both simulation and phantom experiments. The unit in both x and y directions is cm.

The absorption and reduced scattering coefficients of the homogeneous background medium were chosen in a reasonable range [37] as μa = 0.06 cm−1 and μs’ = 8.2 cm−1, respectively. The forward model was computed analytically as in refs [18,38]. Since the main focus of this study is to improve the spatial resolution for functional brain imaging, only relative changes in absorption within activated brain areas are considered. Thus, the difference in absorption, Δμa, between the imaged brain area and background is what we need to reconstruct for DOT images. An “L” shape absorber with Δμa = 0.1 cm−1 was placed in the center of optode array as shown in Fig. 3, at a depth of 1.5 cm.

Fig. 3 Original absorbing object used for the simulation experiment; the x and y dimensions are in cm. The color bar represents Δμa in cm−1.

In order to demonstrate the advantage of the QNNVSR method, we compared its reconstruction results with those of the conventional Tikhonov regularization method given in Eq. (4), SR in Eq. (7), and the GLS method in Eq. (8). The regularization parameter in the Tikhonov regularization method was selected with the L-curve approach [19], as it is effective in providing an optimal solution and thus is commonly used with Tikhonov regularization. For all the other methods, the regularization parameters were selected from a number of trials that yielded the least distortion of the imaged object's shape. Moreover, all methods besides Tikhonov regularization were performed with the SALSA algorithm [28], a solver for convex optimization problems, using the same stopping criteria.

Two quantitative metrics were used to evaluate the reconstruction results: the structural similarity index metric (SSIM) [39] and the contrast-to-noise ratio (CNR) [40].

The SSIM is given by

$$\mathrm{SSIM}(x,y) = \left(\frac{2\mu_x\mu_y}{\mu_x^2 + \mu_y^2}\right)\left(\frac{2\sigma_{xy}}{\sigma_x^2 + \sigma_y^2}\right), \qquad (20)$$
where μx, μy, σx², and σy² are the means and variances of the vectors x and y, respectively, and σxy is the covariance between x and y.

On the other hand, by defining the area of targeted object as region of interest (ROI) and the rest area as background (BKG), the CNR is calculated by:

$$\mathrm{CNR} = \frac{\bar{\mu}_{ROI} - \bar{\mu}_{BKG}}{\sqrt{c_1\sigma_{\mu ROI}^2 + c_2\sigma_{\mu BKG}^2}}, \qquad (21)$$
where $\bar{\mu}_{ROI}$ and $\bar{\mu}_{BKG}$ are the average values of the reconstructed coefficients in the ROI and BKG, σμROI² and σμBKG² are the variances of the reconstructed coefficients of the two, and c1 and c2 are weighting coefficients calculated as follows:

$$c_i = \frac{\text{Number of pixels in ROI/BKG}}{\text{Total number of pixels in the entire image}}, \quad i = 1, 2. \qquad (22)$$
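Eqs. (20)–(22) translate directly into code. The sketch below implements the stabilizer-free SSIM of Eq. (20) and the weighted CNR of Eqs. (21)–(22); the images and the ROI mask in any usage are illustrative stand-ins:

```python
import numpy as np

# Stabilizer-free SSIM of Eq. (20): a luminance term times a structure term.
def ssim(x, y):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))       # covariance between x and y
    return (2 * mx * my / (mx**2 + my**2)) * (2 * cxy / (vx + vy))

# CNR of Eqs. (21)-(22): contrast between ROI and background, normalized by
# their pixel-fraction-weighted variances.
def cnr(img, roi_mask):
    roi, bkg = img[roi_mask], img[~roi_mask]
    c1 = roi.size / img.size                 # fraction of pixels in the ROI
    c2 = bkg.size / img.size                 # fraction of pixels in the BKG
    return (roi.mean() - bkg.mean()) / np.sqrt(c1 * roi.var() + c2 * bkg.var())
```

A perfect reconstruction of a non-constant positive image gives SSIM = 1, and an image whose large values sit inside the ROI yields a large positive CNR.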

4.2 Results from computer simulations

4.2.1 Validation of noise quantification

In order to verify the noise quantification method derived in Section 3.2, two sets of data from two spherical objects were simulated and collected: the Φ0 readings corresponded to the baseline measurements from all S-D channels in the absence of the absorbers, and the corresponding Φ values were acquired from measurements in the presence of the objects inside the medium.

Both Φ0 and Φ were each simulated 100 times. The mean and variance of Φ0 and Φ over the 100 samples were computed and used to calculate the relative noise variance using Eq. (17). These calculated noise variances were compared with the actual ones directly estimated from the 100 simulated relative measurement samples, i.e., b, as shown in Fig. 4. The figure shows clearly that the approximate noise variances quantified by Eq. (17) are consistent with the actual ones; the mean square error between the estimated and actual noise variances was 1.76 × 10−9. Note that since the noise term here corresponds to relative changes in light absorption on a log scale [see Eq. (2)], it is derived from a ratio of two raw measurements and thus is unitless.

Fig. 4 Comparison of noise variances from all simulated channels. Noise variances calculated by Eq. (17) (y-axis) are plotted against the actual noise variances computed from the 100 simulated measurements (x-axis).

4.2.2 Comparison of reconstructed images using different reconstruction algorithms

Reconstructed images from computational experiments using four different reconstruction algorithms are shown in Figs. 5(a)–5(d). The result of Tikhonov regularization is shown in Fig. 5(a), with a blurry and diffuse image boundary; the shape of the reconstructed object is completely distorted from the “L” shape [see Fig. 3] into an elliptical object. After normalization of the noise variance with Eq. (17), the GLS method improves the shape of the reconstructed object [see Fig. 5(b)] compared to the Tikhonov method. However, the recovered object is still fuzzy and accompanied by substantial background noise. The SR approach yields a less blurred image, as shown in Fig. 5(c), but the reconstructed object tends toward a “C” shape, still deviating somewhat from the “L” shape. Finally, the QNNVSR method gives rise to a much improved reconstruction that recovers the original shape almost completely, as shown in Fig. 5(d). Note that since each reconstruction algorithm applies different weighting factors or matrices, the absolute values of the absorption changes reconstructed by different algorithms cannot be compared consistently. Thus, all reconstructed images are normalized to their respective maxima for fair comparison.


Fig. 5 Comparison of normalized reconstruction results from the simulation experiment with: (a) Tikhonov method, (b) GLS, (c) SR, and (d) QNNVSR.


The SSIM [see Eq. (20)] and CNR [Eq. (21)] values of the objects reconstructed by all four methods were computed and are listed in Table 1. Notice that QNNVSR yields the highest SSIM among all the methods tested, meaning that quantification and normalization of noise variance with sparsity regularization best enhances image reconstruction in DOT, because it recovers an object shape very close to the actual one. Moreover, a high CNR indicates that most of the non-zero pixels in the reconstructed image are consistently located within the expected ROI. Even though the SR method yields a clean background, its CNR is the lowest. This is partially due to the distortion of the reconstructed object's shape; some of the non-zero pixels fall outside the expected ROI and are therefore counted as background noise in the CNR calculation. In sharp contrast, our QNNVSR method recovers the object in both location and shape almost perfectly, with a clean background; therefore, its CNR is the highest among all four methods.


Table 1. SSIM and CNR values of reconstructed images from numerical experiments with Tikhonov, GLS, SR, and QNNVSR

4.3 Setup for laboratory experiments

In addition to computational experiments, laboratory phantom experiments were also performed to test our newly derived reconstruction method with real experimental noise.

In the phantom experiment, the experimental setup and procedures were similar to those used in [7,38]: a CW-based DOT imaging system (DYNOT, NIRx, New York) was used to obtain measurements from a 15 × 10 × 10 cm3 container filled with 1% Intralipid solution. The optodes were placed on the top surface of the Intralipid solution such that the optode tips just touched the liquid phantom surface; in this way, no air gap existed between the tips and the liquid, minimizing the refractive index (RI) mismatch. This setup provided a boundary condition very similar to that of a light-tissue interaction. The wavelength of the incident light was 830 nm, and the absorption and reduced scattering coefficients of the homogeneous background medium were measured (Model 96208, ISS Inc., Champaign, Illinois) as μa = 0.12 cm−1 and μs′ = 8.8 cm−1, respectively. The geometry of sources and detectors was the same as that used for the computer simulation, as illustrated in Fig. 2, where 25 bifurcated optodes were placed in a 5 × 5 square grid over the top surface of the Intralipid medium, centered at the origin. By considering all possible combinations of different optodes, 300 measurement channels were obtained, and the SNRs of all these channels were calculated as shown in Fig. 6.

Fig. 6 Calculated SNRs of different channels in the phantom experiment (“+” marks outliers). The x-axis represents S-D separations; successive boxes correspond to the 1st, 2nd, 3rd, …, and nth nearest S-D pairs.

It is seen that the SNRs of the first (1.4-cm separation) through the sixth (4.2-cm separation) nearest S-D pairs have median values larger than 45 dB, higher than those of the remaining separations. Therefore, to avoid channels with a significantly low SNR, only the first through sixth nearest S-D pairs were selected for data processing (i.e., channels with S-D separations of 4.2 cm or less). As a result, 188 channels were used for image analysis and reconstruction. Two sets of objects were used separately: (1) two spherical absorbers of identical size (diameter = 1 cm) were placed along the y axis (see Fig. 2), 1.5 cm below the measurement surface. The two absorbers were plastic black beads and should therefore be treated as “pure” absorbers; their center-to-center distance was 1.5 cm. (2) A black “L”-shaped object, shown in Fig. 7, was placed along the y axis (see Fig. 2) 1.5 cm below the measurement surface. The total grid area in the background of Fig. 7 is 25 mm × 25 mm. The L-shaped absorber was made from a vinyl eraser; like the beads in the first phantom experiment, it was black and should be treated as a “pure” absorber.
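Both channel counts follow directly from the 5 × 5 grid geometry. The sketch below (assuming a 1.4-cm grid pitch, consistent with the stated first-nearest separation) enumerates all unordered optode pairs and confirms the 300 total channels and the 188 channels retained at separations up to 4.2 cm:

```python
from itertools import combinations
import math

# 5 x 5 optode grid with an assumed 1.4-cm pitch (matching the stated
# 1.4-cm first-nearest S-D separation), centered at the origin.
pitch = 1.4
coords = [(pitch * (i - 2), pitch * (j - 2)) for i in range(5) for j in range(5)]

# Every unordered pair of distinct optodes is one S-D channel: C(25, 2) = 300.
channels = list(combinations(range(25), 2))

# S-D separation of each channel; keep only the first through sixth
# nearest pairs, i.e. separations up to 4.2 cm.
sep = [math.dist(coords[a], coords[b]) for a, b in channels]
kept = [c for c, s in zip(channels, sep) if s <= 4.2 + 1e-9]
```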

Fig. 7 Absorber used in the second phantom experiment; the background grid has a total area of 25 mm × 25 mm, with a grid unit of 5 mm in each direction.

Moreover, strong light scattering in biological tissue causes the detection sensitivity of DOT to attenuate exponentially with depth, so depth localization in DOT is generally poor. It thus becomes unrealistic to accurately quantify a local absorption perturbation without knowing its actual depth. A variety of efforts have been made by different research groups to improve the accuracy of depth localization. One common method used in functional brain imaging is to combine DOT with prior anatomical information from MRI. It is well known that most of the optical signals measured by functional DOT come from cortical regions of the human brain below the measurement surface [15]. Therefore, a mathematical constraint that forces the DOT images to be reconstructed on the cortical layers is added as a spatial prior based on structural and functional MRI [41] to reduce the DOT partial volume error and to improve the quantitative accuracy of reconstructed DOT images. This means that in DOT reconstruction, a 2D image is reconstructed at a pre-selected depth (between 1 and 1.5 cm); in this way, a 3D reconstruction problem is reduced to a 2D slice reconstruction problem. This is how we obtain 2D reconstructed images from our 3D experimental space. Since the thickness of the extra-cranial or extra-cerebral layers is relatively constant across adult human heads (between 1 and 1.5 cm), such a cortical depth constraint can be applied in practical and/or clinical applications without introducing large errors. This depth constraint would need to be adjusted for pediatric and adolescent populations based on corresponding MRI images.
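The depth-constrained reduction from 3D to 2D can be sketched as a column selection on the sensitivity (Jacobian) matrix: only voxels at the prior cortical depth are kept as unknowns. The helper below is a hypothetical illustration; the function name, arguments, and tolerance are our choices, not the paper's code:

```python
import numpy as np

def constrain_to_depth(A, voxel_z, depth=1.5, tol=0.25):
    # Keep only the Jacobian columns whose voxel depth (cm) lies within tol
    # of the prior cortical depth, turning the 3D inverse problem into a
    # 2D slice reconstruction at that depth.
    keep = np.abs(np.asarray(voxel_z) - depth) < tol
    return A[:, keep], keep
```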

Figure 8 shows reconstructed images of the first phantom experiment (two spherical objects) obtained using (a) Tikhonov regularization, as given by Eq. (4), where the regularization parameter was selected by the L-curve method [18,19] with λ = 10−4; (b) GLS with λ = 3 × 10−6; (c) SR, as given in Eq. (5), with λ = 3 × 10−4; and (d) QNNVSR with λ = 3 × 10−5. Despite slight distortion of one absorber, SR successfully reconstructs two objects along the vertical (i.e., y) axis, while the Tikhonov and GLS methods fail to recover two distinct absorbers inside the medium. This demonstrates that sparsity-regularized reconstruction indeed improves the spatial resolution of DOT images. However, due to measurement noise, one of the two objects in the image tends to be smaller than its actual size. When QNNVSR is applied, the two imaged objects are retrieved with sizes comparable to the actual ones and only minor shape distortion, as seen in Fig. 8(d).


Fig. 8 Normalized reconstructed images of the first phantom experiment (two spherical objects) obtained by using (a) Tikhonov regularization, (b) GLS, (c) SR, and (d) QNNVSR.


To emphasize and compare the spatial contrast, cross sections of the non-normalized image amplitudes recovered with Tikhonov, GLS, SR, and QNNVSR at x = 0 (i.e., along the y-axis of Fig. 2 or Fig. 8) are compiled and plotted in Fig. 9. As mentioned in Section 4.2.2, since each reconstruction algorithm applies different weighting factors or matrices, the absolute values of the absorption changes reconstructed by different algorithms cannot be compared consistently; the main question here is which algorithm can separate the two imaged objects well. Figure 9 illustrates clearly that the two targeted objects are well separated by SR and QNNVSR, while Tikhonov and GLS cannot resolve the two objects at all, showing very blurry cut-off edges. In this one-dimensional plot along this specific cross section, no clear distinction between the reconstruction quality of SR and that of QNNVSR can be observed.

Fig. 9 Recovered amplitudes of absorption changes along the y-axis of the image plane (see Fig. 2 or Fig. 8) at x = 0 with the Tikhonov, GLS, SR, and QNNVSR reconstruction algorithms.

The reconstruction results of the second laboratory experiment, with the L-shaped absorber, are compared in Fig. 10. The regularization parameters used for the Tikhonov, GLS, SR, and QNNVSR methods were λ = 10−5, 8 × 10−7, 2 × 10−5, and 8 × 10−6, respectively. Though all four methods are able to recover an absorber in the center of the images, the quality of the reconstructed background and object varies across the four cases. Even though S-D pairs with larger separations (i.e., S-D separation >4.2 cm, beyond the sixth nearest S-D pairs) were removed to minimize significant noise, the object reconstructed by Tikhonov regularization in Fig. 10(a) is surrounded by many background artifacts and has a distorted shape. In Fig. 10(b), a blurred “L”-shaped object can be roughly identified, but the background is still filled with artifacts. The result in Fig. 10(c), reconstructed with SR, shows a clearer background and an improved object shape. Finally, the reconstruction by our QNNVSR method, shown in Fig. 10(d), demonstrates that not only is the shape of the reconstructed object improved (it is similar to the actual one given in Fig. 7), but the background noise is also further suppressed compared to SR alone.

Fig. 10 Normalized images of the second phantom experiment with an L-shaped object (see Fig. 7) reconstructed by (a) Tikhonov, (b) GLS, (c) SR, and (d) QNNVSR. The data used for reconstruction were taken from the first through sixth nearest S-D pairs (separations up to 4.2 cm), for a total of 188 channels.

Furthermore, the corresponding SSIM and CNR values for all four reconstructed images are calculated and listed in Table 2. The top four rows correspond to the cases with S-D separations up to 4.2 cm, namely the 188 S-D channels from the first through sixth nearest S-D pairs. The table illustrates clearly that QNNVSR offers the best SSIM and CNR values among all the reconstruction algorithms tested.


Table 2. SSIM and CNR values of reconstructed images from phantom experiments with Tikhonov, GLS, SR, and QNNVSR

5. Discussion and future work

In this paper, we have derived and demonstrated a novel method for the quantification and normalization of noise variance and combined it with sparsity-regularized reconstruction to enhance the spatial resolution and image contrast of DOT images. According to our step-by-step derivation, the noise variance can be approximately estimated from the relatively limited resources available, provided a short period of time is allotted for baseline data acquisition [see Eqs. (17) and (18)]. Furthermore, a weight matrix D can be generated and used for normalization of the noise variance [see Eq. (19)], which enhances signals with a large SNR and suppresses signals with a low SNR. The novelty of our work rests on the following finding: combining normalization of the measurement data by the estimated noise variance with sparsity regularization [i.e., combining Eqs. (9) and (19)] considerably enhances the spatial resolution and shape fidelity of DOT images. Specifically, once the weight matrix D is defined and quantified, the sparsity-regularized DOT reconstruction can be solved using the SALSA algorithm [28]. Throughout this paper, we have demonstrated these points with reconstructed images from computer simulations and laboratory phantom experiments.
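Once D is built from the estimated per-channel noise standard deviations, the weighted problem min_x ||DAx − Db||² + λ||x||₁ can be handed to any proximal solver. The paper uses SALSA [28]; as a simpler stand-in, the sketch below solves the same objective with plain ISTA (iterative soft thresholding). The function name and iteration count are our choices:

```python
import numpy as np

def qnnvsr_ista(A, b, sigma_w, lam, n_iter=500):
    # Minimize ||D A x - D b||^2 + lam * ||x||_1 with plain ISTA,
    # where D = diag(1/sigma_w) is the noise-variance normalization of Eq. (19).
    D = np.diag(1.0 / np.asarray(sigma_w))
    DA, Db = D @ A, D @ b
    L = 2 * np.linalg.norm(DA, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * DA.T @ (DA @ x - Db)      # gradient of the quadratic term
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

On a small synthetic problem with a sparse ground truth, the solver recovers the non-zero entries; SALSA reaches the same minimizer faster on larger problems, which is why it is preferred in practice.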

This study has a few limitations. First, we acknowledge that a major weakness of the experimental setup was the use of “dark” or “pure” absorbers, which do not rigorously satisfy the mathematical requirements of the Rytov approximation, as given by Eq. (2). Such a mismatch between the mathematical requirements and the experimental conditions may lead to potentially inaccurate results and conclusions. To overcome this problem, further laboratory experiments using absorbers with much lower absorption coefficients are needed to confirm QNNVSR. Second, in practical DOT brain imaging applications, light absorption changes due to brain activation are usually less than 5% of the baseline signals [7,42], much less than the absorption change used in our laboratory experiments. Hence, in actual functional brain studies, the contrast will be much lower than in the phantom experiments we have utilized; this implies that we may be able to use only the 1st and 2nd nearest S-D separations for sensitive DOT measurements, depending on the actual optode setup and probe geometry. Third, since the overall development and derivation are based on a CW setting with ω = 0 in Eq. (1), our method cannot separate light scattering from light absorption, and thus it works mainly for functional brain imaging, namely, measuring only changes in HbO and Hb concentrations while assuming constant light scattering. Last, the current results were obtained by selecting the regularization parameter through trial and error; we have not yet established a systematic method to optimally determine the regularization parameter, which will be the next step of this research project.

While our development focuses on functional brain imaging, where the images of absorption changes are often sparse in the spatial domain, the methodology remains valid and can be modified accordingly for broader applications. For example, if the image to be reconstructed is not sparse in the spatial domain, one can map or transform the image into another domain where it becomes sparser (e.g., for some transform S, S(x) will be much sparser than the original x). Such techniques have been utilized and demonstrated in other compressive sensing applications. Further development in this direction will be future work that broadens the usefulness of QNNVSR for other DOT applications.
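The transform-sparsity idea can be illustrated with a one-dimensional toy example (our own, not from the paper): a piecewise-constant absorption profile is dense in the spatial domain, but under a finite-difference transform (one possible choice of S) only its edges survive:

```python
import numpy as np

# A piecewise-constant 1D absorption profile: every pixel is non-zero,
# so the profile is not sparse in the spatial domain.
x = np.concatenate([np.full(20, 0.3), np.full(30, 1.0), np.full(50, 0.3)])

Sx = np.diff(x)                            # one possible sparsifying transform S
spatial_nnz = int(np.count_nonzero(x))     # all 100 samples are non-zero
transform_nnz = int(np.count_nonzero(Sx))  # only the two edges are non-zero
```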

References and links

1. M. Ferrari and V. Quaresima, “A brief review on the history of human functional near-infrared spectroscopy (fNIRS) development and fields of application,” Neuroimage 63(2), 921–935 (2012). [CrossRef]   [PubMed]  

2. D. R. Leff, F. Orihuela-Espina, C. E. Elwell, T. Athanasiou, D. T. Delpy, A. W. Darzi, and G. Z. Yang, “Assessment of the cerebral cortex during motor task behaviours in adults: a systematic review of functional near infrared spectroscopy (fNIRS) studies,” Neuroimage 54(4), 2922–2936 (2011). [CrossRef]   [PubMed]  

3. T. Durduran, R. Choe, W. B. Baker, and A. G. Yodh, “Diffuse optics for tissue monitoring and tomography,” Rep. Prog. Phys. 73(7), 076701 (2010). [CrossRef]   [PubMed]  

4. S. R. Arridge and J. C. Schotland, “Optical tomography: forward and inverse problems,” Inverse Probl. 25(12), 123010 (2009). [CrossRef]  

5. F. Tian, M. R. Delgado, S. C. Dhamne, B. Khan, G. Alexandrakis, M. I. Romero, L. Smith, D. Reid, N. J. Clegg, and H. Liu, “Quantification of functional near infrared spectroscopy to assess cortical reorganization in children with cerebral palsy,” Opt. Express 18(25), 25973–25986 (2010). [CrossRef]   [PubMed]  

6. A. T. Eggebrecht, B. R. White, S. L. Ferradal, C. Chen, Y. Zhan, A. Z. Snyder, H. Dehghani, and J. P. Culver, “A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping,” Neuroimage 61(4), 1120–1128 (2012). [CrossRef]   [PubMed]  

7. F. Tian, G. Alexandrakis, and H. Liu, “Optimization of probe geometry for diffuse optical brain imaging based on measurement density and distribution,” Appl. Opt. 48(13), 2496–2504 (2009). [CrossRef]   [PubMed]  

8. J. P. Culver, V. Ntziachristos, M. J. Holboke, and A. G. Yodh, “Optimization of optode arrangements for diffuse optical tomography: A singular-value analysis,” Opt. Lett. 26(10), 701–703 (2001). [CrossRef]   [PubMed]  

9. O. Lee, J. M. Kim, Y. Bresler, and J. C. Ye, “Compressive diffuse optical tomography: noniterative exact reconstruction using joint sparsity,” IEEE Trans. Med. Imaging 30(5), 1129–1142 (2011). [CrossRef]   [PubMed]  

10. A. Douiri, M. Schweiger, J. Riley, and S. Arridge, “Local diffusion regularization method for optical tomography reconstruction by using robust statistics,” Opt. Lett. 30(18), 2439–2441 (2005). [CrossRef]   [PubMed]  

11. N. Cao, A. Nehorai, and M. Jacobs, “Image reconstruction for diffuse optical tomography using sparsity regularization and expectation-maximization algorithm,” Opt. Express 15(21), 13695–13708 (2007). [CrossRef]   [PubMed]  

12. R. J. Gaudette, D. H. Brooks, C. A. DiMarzio, M. E. Kilmer, E. L. Miller, T. Gaudette, and D. A. Boas, “A comparison study of linear reconstruction techniques for diffuse optical tomographic imaging of absorption coefficient,” Phys. Med. Biol. 45(4), 1051–1070 (2000). [CrossRef]   [PubMed]  

13. P. K. Yalavarthy, B. W. Pogue, H. Dehghani, and K. D. Paulsen, “Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography,” Med. Phys. 34(6), 2085–2098 (2007). [CrossRef]   [PubMed]  

14. V. Ntziachristos, B. Chance, and A. Yodh, “Differential diffuse optical tomography,” Opt. Express 5(10), 230–242 (1999). [CrossRef]   [PubMed]  

15. D. A. Boas, A. M. Dale, and M. A. Franceschini, “Diffuse optical imaging of brain activation: approaches to optimizing image sensitivity, resolution, and accuracy,” Neuroimage 23(Suppl 1), S275–S288 (2004). [CrossRef]   [PubMed]  

16. M. A. O’Leary, D. A. Boas, B. Chance, and A. G. Yodh, “Experimental images of heterogeneous turbid media by frequency-domain diffusing-photon tomography,” Opt. Lett. 20(5), 426–428 (1995). [CrossRef]   [PubMed]  

17. M. Cope, D. T. Delpy, E. O. Reynolds, S. Wray, J. Wyatt, and P. van der Zee, “Methods of quantitating cerebral near infrared spectroscopy data,” Adv. Exp. Med. Biol. 215, 183–189 (1988). [CrossRef]   [PubMed]  

18. H. Niu, F. Tian, Z. J. Lin, and H. Liu, “Development of a compensation algorithm for accurate depth localization in diffuse optical tomography,” Opt. Lett. 35(3), 429–431 (2010). [CrossRef]   [PubMed]  

19. C. R. Vogel, Computational Methods for Inverse Problems (SIAM, 2002).

20. A. Tikhonov, “Solution of incorrectly formulated problems and the regularization method,” Soviet Mathematics Doklady 4, 1035–1038 (1963).

21. M. Süzen, A. Giannoula, and T. Durduran, “Compressed sensing in diffuse optical tomography,” Opt. Express 18(23), 23676–23690 (2010). [CrossRef]   [PubMed]  

22. V. C. Kavuri, Z. J. Lin, F. Tian, and H. Liu, “Sparsity enhanced spatial resolution and depth localization in diffuse optical tomography,” Biomed. Opt. Express 3(5), 943–957 (2012). [CrossRef]   [PubMed]  

23. A. Jin, B. Yazici, A. Ale, and V. Ntziachristos, “Preconditioning of the fluorescence diffuse optical tomography sensing matrix based on compressive sensing,” Opt. Lett. 37(20), 4326–4328 (2012). [CrossRef]   [PubMed]  

24. E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

25. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med. 58(6), 1182–1195 (2007). [CrossRef]   [PubMed]  

26. M. Lustig, D. Donoho, J. Santos, and J. Pauly, “Compressed Sensing MRI,” IEEE Signal Process. Mag. 25(2), 72–82 (2008). [CrossRef]  

27. S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale l1-regularized least squares,” IEEE Sel. Top. Sig. Process. 1(4), 606–617 (2007). [CrossRef]  

28. M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19(9), 2345–2356 (2010). [CrossRef]   [PubMed]  

29. M. Figueiredo, J. Bioucas-Dias, and M. Afonso, “Fast frame-based image deconvolution using variable splitting and constrained optimization,” Statistical Signal Processing, IEEE/SP 15th Workshop on, 109 - 112 (2009). [CrossRef]  

30. F. Tian, F. A. Kozel, A. Yennu, P. E. Croarkin, S. M. McClintock, K. S. Mapes, M. M. Husain, and H. Liu, “Test-retest assessment of cortical activation induced by repetitive transcranial magnetic stimulation with brain atlas-guided optical topography,” J. Biomed. Opt. 17(11), 116020 (2012). [CrossRef]   [PubMed]  

31. B. Khan, P. Chand, and G. Alexandrakis, “Spatiotemporal relations of primary sensorimotor and secondary motor activation patterns mapped by NIR imaging,” Biomed. Opt. Express 2(12), 3367–3386 (2011). [CrossRef]   [PubMed]  

32. A. Douiri, M. Schweiger, J. Riley, and R. Arridge, “Anisotropic diffusion regularization methods for diffuse optical tomography using edge prior information,” Meas. Sci. Technol. 18(1), 87–95 (2007). [CrossRef]  

33. M. Kay, Fundamentals of Statistical Signal Processing, 1st, Prentice Hall (2001).

34. N. Cao and A. Nehorai, “Tumor localization using diffuse optical tomography and linearly constrained minimum variance beamforming,” Opt. Express 15(3), 896–909 (2007). [CrossRef]   [PubMed]  

35. J. C. Ye, A. Bouman, J. Webb, and P. Millane, “Nonlinear multigrid algorithms for Bayesian optical diffusion tomography,” IEEE Image Processing 10(6), 909–922 (2001). [CrossRef]  

36. A. Zourabian, A. Siegel, B. Chance, N. Ramanujan, M. Rode, and D. A. Boas, “Trans-abdominal monitoring of fetal arterial blood oxygenation using pulse oximetry,” J. Biomed. Opt. 5(4), 391–405 (2000). [CrossRef]   [PubMed]  

37. F. Tian and H. Liu, “Depth-compensated diffuse optical tomography enhanced by general linear model analysis and an anatomical atlas of human head,” Neuroimage 85(Pt 1), 166–180 (2014). [CrossRef]   [PubMed]  

38. H. Niu, Z. J. Lin, F. Tian, S. Dhamne, and H. Liu, “Comprehensive investigation of three-dimensional diffuse optical tomography with depth compensation algorithm,” J. Biomed. Opt. 15(4), 046005 (2010). [CrossRef]   [PubMed]  

39. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  

40. X. Song, B. W. Pogue, S. Jiang, M. M. Doyley, H. Dehghani, T. D. Tosteson, and K. D. Paulsen, “Automated region detection based on the contrast-to-noise ratio in near-infrared tomography,” Appl. Opt. 43(5), 1053–1062 (2004). [CrossRef]   [PubMed]  

41. D. A. Boas and A. M. Dale, “Simulation study of magnetic resonance imaging-guided cortically constrained diffuse optical tomography of human brain function,” Appl. Opt. 44(10), 1957–1968 (2005). [CrossRef]   [PubMed]  

42. D. K. Joseph, T. J. Huppert, M. A. Franceschini, and D. A. Boas, “Diffuse optical tomography system to image brain activation with improved spatial resolution and validation with functional magnetic resonance imaging,” Appl. Opt. 45(31), 8142–8151 (2006). [CrossRef]   [PubMed]  




