
Wide-field fluorescence molecular tomography with compressive sensing based preconditioning

Open Access

Abstract

Wide-field optical tomography based on structured light illumination and detection strategies enables efficient tomographic imaging of large tissues at very fast acquisition speeds. However, the optical inverse problem based on such an instrumental approach is still ill-conditioned. Herein, we investigate the benefit of employing compressive sensing-based preconditioning with wide-field structured illumination and detection approaches. We assess the performance of Fluorescence Molecular Tomography (FMT) when using such preconditioning methods both in silico and with experimental data. Additionally, we demonstrate that this methodology can be used to select the subset of patterns that provides optimal reconstruction performance. Lastly, we compare preconditioning of data collected using a normal base that offers good experimental SNR against data directly acquired with the optimally designed base. An experimental phantom study is provided to validate the proposed technique.

© 2015 Optical Society of America

1. Introduction

Fluorescence Molecular Tomography (FMT) has undergone rapid development over the last decade and a half [1]. Coupled with an ever-increasing availability of fluorescent molecular probes operating in the near-infrared spectrum, the high sensitivity and multiplexing potential of optical systems have positioned FMT as a powerful molecular imaging technique for preclinical studies. Recently, a new compressive instrumental approach has been proposed for preclinical FMT in which wide-field structured light is employed to enable fast acquisition of tomographic data sets over large volumes. Using light modulators, illumination (and detection) bases can be created with great flexibility. Bélanger et al [2] demonstrated that diffuse optical tomography based on a double light modulator architecture was achievable at very high speed. In this implementation, both illumination and detection masks are jointly employed, leading to a tomographic “single pixel” system. We successfully extended this approach to time-resolved FMT by implementing time-resolved wide-field structured light illumination and generating detection patterns via spatial integration of gated ICCD data sets. First, we demonstrated that the spatial integration did not compromise temporal data sets and that quantitatively accurate optical tomography was feasible with performance similar to punctual optode strategies for absorptive inclusions [3]. Then, we demonstrated that whole-body small animal FMT was feasible within a time frame of a few minutes when using such an approach with a quantized low-frequency imaging base of 36 patterns [4], with cross-validation against micro-CT. Similar compressive imaging work in FMT was reported later on by Ducros et al, who used a wavelet base [5]. Besides validations in well-controlled settings, the method has been applied in vivo to quantify FRET signals [6], with applications in drug delivery monitoring [7].

Recently, we have extended the methodology to hyperspectral time-resolved imaging by coupling the double light modulator design with a time-resolved spectrophotometer [8]. This implementation enables the acquisition of dense 4D data cubes for FMT (2D spatial, temporal, and spectral). However, although all these compressive implementations based on wide-field structured light yield shorter acquisition times, better SNR, and less hypersensitivity to the surface, the resulting optical inverse problem is still ill-conditioned. Sparsity constraints can be employed to improve the reconstructed distribution, especially when using early gates [9], but the inverse problem nonetheless suffers from high sensitivity to noise.

Herein, we investigate the benefits of applying compressive sensing-based preconditioning to the wide-field FMT method. More precisely, we adapt the preconditioning technique proposed by Jin et al [10], originally devised for point source/detection strategies, to the wide-field illumination/detection strategy. The merit of the technique is evaluated when applied to the whole Jacobian or independently to the illumination/detection bases. Beyond applying this preconditioning method, we propose a novel approach to identify the subset of preconditioned patterns that leads to optimal reconstruction results. Lastly, we compare results when applying the preconditioning method post hoc to experimental data from our bar pattern base with those directly collected from the optimal base.

2. Method

Optical tomography is an ill-conditioned inverse problem that is notoriously difficult to solve [11]. In the case of FMT, where fluorescence molecular probes are employed, fortunately the imaging space is, by design, sparse. Hence, there has been increasing interest in using compressive sensing-based methods to enhance FMT performance. To date, the main focus of compressive sensing-based approaches has been the use of sparsity constraints when solving the inverse problem iteratively [9, 11–15]. However, compressive sensing methods can also be devised to optimize the Jacobian features for improved sparse signal recovery [16–18]. Recently, Jin et al [10] proposed a preconditioning method based on compressive sensing for punctual optode data sets and recovery of sparse fluorophore distributions. The approach was validated in a phantom experiment and improvements in FMT performance were demonstrated. Herein, we follow the same strategy and apply it to the wide-field structured light instrumental paradigm. We provide below the salient points of the formulation of the inverse problem, the theoretical derivation of the preconditioners, the inverse solvers, and the objective metrics employed.

2.1 Forward model and inverse problem

To cast the optical inverse problem, we follow the formulation described in [19]. In this formulation, a Monte Carlo (MC) method [20] is employed to derive the excitation and emission forward models (simplified here to the continuous case):

$$\phi_x^i(r) = \int_\Omega g_x(r, r')\, s^i(r')\, dr', \qquad i = 1, \ldots, N_s$$
and
$$\phi_m^i(r) = \int_\Omega g_m(r, r')\, \phi_x^i(r')\, \eta\mu_{axf}(r')\, dr', \qquad i = 1, \ldots, N_s$$
where $\phi$ is the light field and the subscripts $x$ and $m$ stand for the excitation and emission wavelengths. $r \in \Omega$ denotes location and $s^i$ denotes the $i$th source. $\mu_a$ is the absorption coefficient, $\eta\mu_{axf}$ is the fluorophore yield, and $g_x$/$g_m$ are the Green's functions computed via the MC approach. Then, the measurement at the $j$th detector due to the $i$th source can be expressed as:

$$\Gamma_{i,j} = \int_\Omega g_m^j(r)\, \phi_x^i(r)\, \eta\mu_{axf}(r)\, dr, \qquad i = 1, \ldots, N_s, \quad j = 1, \ldots, N_d.$$

A linear relationship between measurements b and fluorophore yields x can be derived as:

$$Ax = b; \quad \text{with} \quad A = \begin{bmatrix} g_{em,1}^{1}\phi_{ex,1}^{1} & \cdots & g_{em,N}^{1}\phi_{ex,N}^{1} \\ \vdots & & \vdots \\ g_{em,1}^{N_d}\phi_{ex,1}^{1} & \cdots & g_{em,N}^{N_d}\phi_{ex,N}^{1} \\ g_{em,1}^{1}\phi_{ex,1}^{2} & \cdots & g_{em,N}^{1}\phi_{ex,N}^{2} \\ \vdots & & \vdots \\ g_{em,1}^{N_d}\phi_{ex,1}^{N_s} & \cdots & g_{em,N}^{N_d}\phi_{ex,N}^{N_s} \end{bmatrix} \in \mathbb{R}^{M \times N}$$
where A is the sensitivity/Jacobian matrix, x is the discretized image vector, and b is the vector of measurements. M is the total number of source-detector pairs and N is the number of elements in the imaging domain. The matrix A is always ill-conditioned and can be further manipulated (preconditioned) for improved reconstructions.

2.2 Compressive sensing-based preconditioning

In this section, we provide a brief summary of the theoretical derivation of the preconditioners which follow the formulation described in Ref [10]. The sensitivity matrix A is actually the column-wise Kronecker product of the excitation light field Φ via structured illumination and the emission light field G acquired via wide-field detection:

$$A = \Phi \circ G = [\phi_1 \otimes g_1, \phi_2 \otimes g_2, \ldots, \phi_N \otimes g_N],$$
where $\phi_k = [\phi_{ex,k}^1, \ldots, \phi_{ex,k}^{N_s}]^T$, $g_k = [g_{em,k}^1, \ldots, g_{em,k}^{N_d}]^T$, and $\otimes$ denotes the Kronecker product. The preconditioning strategy implemented herein focuses on improving the incoherence of the Jacobian matrix. Supposing $\Phi$ and $G$ are already column-normalized, the coherence of $A$ can be bounded as:
$$\mu(A) \le \mu_1(k, A) \le K\left(\left\|\Phi^T\Phi - I_N\right\|_F^2 + \left\|G^TG - I_N\right\|_F^2\right),$$
in which $\mu(A)$ and $\mu_1(k, A)$ are the coherence and cumulative coherence, $\|\cdot\|_F$ denotes the Frobenius norm, $I_N$ is the identity matrix, and $K$ is a constant determined by $N$, $N_s$, $N_d$ and $k$. Hence, the coherence of the sensitivity matrix $A$ can be reduced by decreasing $\|\Phi^T\Phi - I_N\|_F^2$ and $\|G^TG - I_N\|_F^2$. Reducing these norms can be achieved by preconditioning $\Phi$ and $G$ ($\Phi_{pre} = M_s\Phi$; $G_{pre} = M_dG$) such that:
$$\Phi_{pre}^T\Phi_{pre} \approx I_N; \qquad G_{pre}^TG_{pre} \approx I_N,$$
where Ms and Md are the wide-field structured light excitation and detection preconditioners, also known as optical and measurement masks or separate masks.
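The Khatri-Rao (column-wise Kronecker) structure of A makes the coherence bound easy to check numerically. Below is a minimal sketch in Python/NumPy, where the random non-negative matrices Phi and G are stand-ins for the MC-derived light fields (toy sizes, not our actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, Nd, N = 8, 8, 50                      # sources, detectors, mesh nodes (toy sizes)

# Stand-ins for the excitation (Phi) and emission (G) light-field matrices
Phi = np.abs(rng.standard_normal((Ns, N)))
G = np.abs(rng.standard_normal((Nd, N)))

# Column-normalize, as assumed by the coherence bound
Phi /= np.linalg.norm(Phi, axis=0)
G /= np.linalg.norm(G, axis=0)

# Column-wise Kronecker (Khatri-Rao) product: column k of A is phi_k (x) g_k
A = np.vstack([np.kron(Phi[:, k], G[:, k]) for k in range(N)]).T

def coherence(M):
    """Mutual coherence: largest |inner product| between distinct normalized columns."""
    Mn = M / np.linalg.norm(M, axis=0)
    gram = np.abs(Mn.T @ Mn)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

mu_A = coherence(A)                        # lies in (0, 1]; smaller is better
```

Lowering the two Frobenius-norm terms on the right-hand side of the bound (via the preconditioners below) lowers this measured coherence.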

Alternatively, we can also apply a preconditioning matrix $M_A$, also called the global mask, to the whole Jacobian to reduce its coherence. Following the derivation from Jin et al [10], the preconditioning matrices can be obtained via Singular Value Decomposition (i.e., $\Phi = U\Sigma V^T$). The mathematical expressions for these preconditioners are similar and expressed as:

$$M_i = (\Lambda_i + \varepsilon I)^{-1/2} U_i^T,$$
where $i$ stands for the respective preconditioner ($s$, $d$ or $A$), $\Lambda_i = \Sigma_i\Sigma_i^T$, and $\varepsilon > 0$ is a regularization parameter for stabilization in the case of a poor condition number. Lastly, since $M_s$ and $M_d$ might have negative entries and cannot be directly applied in experiments, they are each decomposed into two matrices with non-negative values as:

$$M_i = M_i^{(+)} - M_i^{(-)}.$$
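These steps — the SVD, the ε-regularized whitening of Eq. (8), and the positive/negative split — can be sketched as follows. This is a toy illustration with a random stand-in for Φ, not our acquisition code; ε is specified relative to the largest squared singular value, as done in the study below:

```python
import numpy as np

def precondition_matrix(X, eps_rel):
    """Eq. (8): M = (Lambda + eps*I)^(-1/2) U^T, with Lambda = Sigma Sigma^T.
    eps is given relative to the largest squared singular value."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    lam = s ** 2
    eps = eps_rel * lam.max()
    return np.diag(1.0 / np.sqrt(lam + eps)) @ U.T

rng = np.random.default_rng(1)
Phi = np.abs(rng.standard_normal((40, 200)))   # 40 patterns x 200 nodes (stand-in)
Ms = precondition_matrix(Phi, eps_rel=1e-6)

# For small eps the preconditioned rows are near-orthonormal:
# (Ms Phi)(Ms Phi)^T = diag(lambda / (lambda + eps)) ~ I
Phi_pre = Ms @ Phi
assert np.allclose(Phi_pre @ Phi_pre.T, np.eye(40), atol=1e-3)

# Non-negative decomposition M = M(+) - M(-), usable as physical patterns
Ms_pos = np.maximum(Ms, 0.0)
Ms_neg = np.maximum(-Ms, 0.0)
assert np.allclose(Ms, Ms_pos - Ms_neg)
```

Increasing eps_rel degrades the orthogonality check but damps noise amplification, which is exactly the trade-off examined in section 3.3.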

2.3 Reconstruction algorithms and objective evaluation metrics

The performance of FMT is highly dependent on the selection of the inverse solver and its parameters. Herein, we employ two different solvers. First, for all in silico investigations, we employ a popular solver, the Matlab built-in LSQR function. As we focus solely on assessing the merits of the CS-based preconditioning method and numerous tests are performed, this solver provides a fast and reliable approach for comparing different preconditioning strategies. The convergence tolerance is set to 10^−4 and the iteration number is set equal to the noise level in dB. For example, if we add 25dB white Gaussian noise, the iteration number will be 25. No regularization scheme is employed jointly with the LSQR solver.
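For reference, the equivalent call with SciPy's lsqr (a stand-in for the Matlab built-in; the Jacobian A, measurements b, and noise model below are toy stand-ins, not our simulated data) looks like:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
A = rng.standard_normal((1600, 500))       # stand-in Jacobian (measurements x nodes)
x_true = np.zeros(500)
x_true[[50, 300]] = 1.0                    # sparse fluorophore distribution
b = A @ x_true

# Add white Gaussian noise at a prescribed SNR (in dB)
snr_db = 25
noise = rng.standard_normal(b.shape)
noise *= np.linalg.norm(b) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
b_noisy = b + noise

# Tolerance 1e-4; iteration count tied to the noise level (25 dB -> 25 iterations)
x_rec = lsqr(A, b_noisy, atol=1e-4, btol=1e-4, iter_lim=snr_db)[0]
```

Tying the iteration cap to the SNR acts as implicit regularization by early stopping, which is why no explicit regularization term is added here.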

However, LSQR is not expected to provide the best reconstruction results for our application. Hence, for experimental data, we also employ our previously reported Lp regularization method [9], in which an Lp (0 < p ≤ 1) norm is added to the optimization problem as a sparsity constraint. Such a solver is expected to be optimal for the recovery of sparse signals. An L-curve is plotted in all cases (L1) to objectively select the best regularization parameter [9, 15]. L-curve analysis was performed using the same approach described in Ref [21].

To objectively compare the tomographic reconstruction quality, we established three quantitative metrics: localization error (LE), volume error (VE), and root mean square error (RMSE), all computed within an isovolume set at 50% of the maximum. LE is defined as the Euclidean distance (in mm) between the 3D centroid of the reconstructed object (within the isovolume) and that of the ground truth. VE is defined as the difference between the reconstructed object volume (within the isovolume) and the ground truth volume, divided by the ground truth volume:

$$VE = (V_{recon} - V_{truth}) / V_{truth}.$$

RMSE is a good metric for reporting the quantitative accuracy of the reconstructions and is defined as:

$$RMSE = \sqrt{\sum_{i=1}^N \left(X_{recon}(i) - X_{truth}(i)\right)^2 \Big/ \sum_{i=1}^N \left(X_{truth}(i)\right)^2},$$
where Xtruth is the ground truth object and Xrecon is the reconstructed object. N is the total number of nodes in the imaging domain.
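The three metrics can be computed directly from the nodal values; a minimal sketch, assuming node coordinates are available as an (N, 3) array and using the node count within the isovolume as a volume proxy:

```python
import numpy as np

def evaluation_metrics(coords, x_recon, x_truth, iso=0.5):
    """LE (mm), VE (relative) and RMSE for nodal images on a mesh.
    coords: (N, 3) node positions; iso: isovolume threshold (50% of max)."""
    rec = x_recon >= iso * x_recon.max()
    tru = x_truth >= iso * x_truth.max()
    # Localization error: distance between the centroids of the two isovolumes
    le = np.linalg.norm(coords[rec].mean(axis=0) - coords[tru].mean(axis=0))
    # Volume error: (V_recon - V_truth) / V_truth, node count as a volume proxy
    ve = (rec.sum() - tru.sum()) / tru.sum()
    # RMSE normalized by the ground-truth energy
    rmse = np.sqrt(np.sum((x_recon - x_truth) ** 2) / np.sum(x_truth ** 2))
    return le, ve, rmse

# Sanity check: a perfect reconstruction gives LE = VE = RMSE = 0
coords = np.random.default_rng(3).uniform(0.0, 50.0, (100, 3))
x = np.zeros(100)
x[:10] = 1.0
le, ve, rmse = evaluation_metrics(coords, x, x)
```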

3. In silico investigation

3.1 Numerical set up

For the in silico study, we built a numerical phantom of size 50mm × 40mm × 20mm with optical properties set to μa = 0.05 cm−1, μs = 8 cm−1 and g = 0.9 (Henyey-Greenstein phase function). Two fluorescent inclusions with the same fluorescence yield, length (20mm), and radius (3mm) were positioned in the middle of the phantom (at a height of 10mm). The two inclusions were set 15mm apart (see Fig. 1(a)). The overall imaging domain was discretized with a homogeneous mesh containing 9,678 nodes.


Fig. 1 (a) Simulated numerical phantom. Two tubes are segmented at 50% isovolume. (b) Subset of the quantized low-frequency imaging base used as illumination and detection bases. Half of the imaging area is taken in each case.


To replicate our typical experimental settings, we simulated quantized illumination and detection pattern bases comprised of 40 bar-shaped patterns with an overall size of 40mm × 40mm (Fig. 1(b)). This base led to the best imaging performance as reported in [2]. Additionally, such a base is easy to implement for fast acquisition and does not depend on a priori knowledge of the sample parameters. If such parameters, i.e. animal size/posture/geometry and optical properties, are known prior to the imaging session, then an optimal base can be derived using the approach proposed by Dutta et al. [22].

Measurements in transmission geometry were obtained for all combinations of illumination and detection patterns (1,600) using our wide-field mesh-based MC code [23]. This code allows accurate and efficient simulation of complex extended illumination sources and construction of the associated Jacobians using the adjoint method [20]. 10^8 photons were launched per simulated pattern. Note that the simulations were performed in the time domain and then integrated to yield continuous-wave (CW) data sets. Different levels of noise were simulated by adding Gaussian noise to the synthetic tomographic data sets. The simulated Signal-to-Noise Ratios (SNR) ranged from 20dB to 40dB, which is representative of the experimental noise typically encountered in FMT.

3.2 Masks and resulting patterns

Applying the preconditioning method described in section 2.2 yields up to three masks: the illumination, detection, and global masks. In all cases, these preconditioners are computational constructs, though in the case of the illumination and detection masks, they can also be applied experimentally. These masks, when applied to the quantized low-frequency imaging base, yield an optimal base. Each optimal pattern is computed as the weighted superposition of the 40 original patterns, with weights provided by one row of the preconditioner. Each row of the preconditioner then leads to one optimal structured light illumination/detection pattern. As each separate mask is decomposed into two matrices with non-negative entries, the optimal illumination and detection patterns come in pairs. Figure 2 provides a visual depiction of the paired preconditioning matrices and some associated optimal patterns. It is worth noting that the masks and resulting patterns are completely independent of the object being imaged, so no a priori information is required to apply the proposed scheme.
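In code, generating an optimal base from a mask reduces to one weighted superposition per row; a sketch with hypothetical array names (base, mask_pos) and a toy bar-shaped base standing in for the actual quantized low-frequency patterns:

```python
import numpy as np

def optimal_patterns(mask, base):
    """Each optimal pattern is the weighted superposition of the original patterns,
    with the weights taken from one row of the preconditioner (mask).
    base: (n_patterns, H, W) stack; mask: (n_out, n_patterns) -> (n_out, H, W)."""
    return np.tensordot(mask, base, axes=([1], [0]))

# Toy bar-shaped base: one vertical bar per pattern (stand-in for the real base)
base = np.zeros((40, 64, 64))
for k in range(40):
    base[k, :, (k * 64) // 40] = 1.0

# Hypothetical positive half of a separate mask (rows = weights of one pattern)
mask_pos = np.abs(np.random.default_rng(4).standard_normal((40, 40)))
opt = optimal_patterns(mask_pos, base)
```

The negative half of the mask is handled identically, which is why the optimal illumination and detection patterns come in positive/negative pairs.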


Fig. 2 (a) Positive and negative optical masks, with regularization parameter ε = 10^−2. (b) First nine pattern pairs in the optimal base; figures are scaled to the individual maximum value in each pair (see Visualization 1 for the complete optimal base).


3.3 Effect of regularization parameter on the preconditioning

As described in Eq. (8), a regularization parameter ε is introduced in the SVD-driven derivation of the preconditioner. Such a regularization parameter is necessary to obtain a stable matrix given the poor conditioning of the illumination/detection structured wide-field light field matrices, as well as of the full Jacobian. The regularization parameter plays a critical role in balancing matrix orthogonality, noise amplification, and retention of high-frequency information. To provide insight into the effect of this parameter, we show in Fig. 3 the ranked values of the normalized inner product of the Jacobian with global preconditioning ($M_A$) and separate masks ($M_s$, $M_d$).


Fig. 3 Normalized inner product of the Jacobian after (a) global preconditioning and (b) separate masks preconditioning. The regularization parameter value is provided as the relative factor compared to the squared maximum singular value of the matrix (with ε = 10^−1, 10^−2, 10^−3 and 10^−4).


As expected, increasing the regularization parameter decreases the matrix orthogonality, impairing the preconditioning effect. Hence, low regularization parameters are preferred to retain information content. However, noise propagation can then become an issue. In Fig. 4, we provide a pictorial description of this effect. In the case of perfect (noiseless) measurements, noise propagation is limited even for ε = 10^−3 (Fig. 4(d)), whereas it becomes overwhelming at a noise level of 25dB (Fig. 4(h)). The same trend was observed in the reconstruction results (not shown), in which use of a larger regularization parameter reduced the localization error and root mean square error whereas the volume error stayed large or increased, since high regularization is associated with loss of resolution. Based on these results, ε = 10^−2 was selected for subsequent studies.


Fig. 4 Single-pixel measurements for each combination of illumination/detection patterns (40 × 40) in the case of no noise (1st row) and 25dB noise added (2nd row). The first column corresponds to measurements obtained without preconditioning (a, e). The other columns correspond to measurements obtained via separate mask preconditioning at ε = 10^−1 (b, f), ε = 10^−2 (c, g) and ε = 10^−3 (d, h).


3.4 Selection of pattern/measurement subset

We have seen in the last section that noise is over-amplified for some source-detector pairs and could impair reconstruction results. We also notice that, after preconditioning, the information content is mainly associated with a few patterns (top left corner in Fig. 4). Hence, selecting this subset of measurements after preconditioning should provide the most useful and accurate sub data set. Herein, we propose to select this sub data set in two steps. From Eq. (8) we can see that the kth preconditioned pattern is associated with the kth singular value in the ranked SVD. As it is well established that the most relevant measurements are captured by the first m largest singular values, we can select the associated patterns to form a sub-base that contains the most important and relevant information. This process typically decreases the number of patterns employed from a couple of dozen to around ten. In the second step, the measurements with the largest absolute values are picked out for reconstruction. An example of the pattern and measurement selection process is provided in Fig. 5. The size of the sub data set depends on multiple parameters such as the experimental base used, the fluorophore distribution, and the SNR. A few sub data sets should be tested, and the one leading to the best reconstruction is selected.
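The two-step selection can be sketched as follows, with toy stand-ins: sigma would come from the ranked SVD used to build the preconditioner, meas is the matrix of preconditioned single-pixel measurements, and (as in Fig. 5) 9 patterns and the 15 largest values are kept. For simplicity a single singular-value ranking is used for both the illumination and detection sides:

```python
import numpy as np

def select_subset(sigma, meas, n_patterns, n_meas):
    """Step 1: keep the patterns tied to the n_patterns largest singular values.
    Step 2: among those, keep the n_meas largest-magnitude measurements."""
    order = np.argsort(sigma)[::-1][:n_patterns]     # pattern indices to keep
    sub = meas[np.ix_(order, order)]                 # illumination x detection subset
    keep = np.argsort(np.abs(sub).ravel())[::-1][:n_meas]
    return order, keep, sub.ravel()[keep]

rng = np.random.default_rng(5)
sigma = np.sort(rng.uniform(0.0, 1.0, 40))[::-1]     # ranked singular values (toy)
meas = rng.standard_normal((40, 40))                 # preconditioned measurements (toy)
patterns, idx, values = select_subset(sigma, meas, n_patterns=9, n_meas=15)
```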


Fig. 5 Flow of pattern and measurement subset selection after preconditioning. The whole data set is shown in (a), with the squared singular value plot in (b); the first 9 illumination/detection patterns are kept. The 81 absolute measurement values (c) are then sorted as in (d), and the 15 largest values are selected for reconstruction (e).


3.5 Comparison of separate masks and global mask preconditioning

To establish the merits of the preconditioning technique, reconstructions of the numerical phantom with separate masks, a global mask, and no preconditioning were performed at noise levels ranging from 20dB to 40dB in 5dB steps. At each noise level, 100 independent synthetic tomographic data sets were created and reconstructed (a total of 1,500 reconstructions). For both preconditioning strategies, the first 7 patterns (49 source-detector pairs) and the 13 largest measurement values were selected for reconstruction. Each individual reconstruction was post-processed to yield LE, VE and RMSE. We report in Table 1 the mean values computed from these 100 trials for each condition investigated (noise level, preconditioning used).


Table 1. Objective evaluation metrics mean (from 100 trials) for the reconstruction of the numerical phantom at different levels of noise and using different preconditioning approaches.

As discussed in section 3.4, the optimal pattern/measurement number varies from case to case, so not all of the 100 reconstructions with global mask/separate masks preconditioning provide the best possible results. Even so, preconditioning leads to improved reconstructions versus no preconditioning in terms of RMSE and LE in all cases. In terms of VE, the performance is slightly worse due to the loss of high-frequency information when selecting the pattern/measurement subset. Furthermore, the reconstructions are performed using a simple LSQR algorithm without regularization. We show in Fig. 6 an example reconstruction for 25dB noise.


Fig. 6 Visualization of the reconstructions at isovolume of 50% for (a) ground truth, (b) no-preconditioning, (c) separate masks preconditioning and (d) global mask preconditioning.


While both types of preconditioning improve the reconstructions by reducing artifacts and providing a better rendering of the structural details, only separate mask preconditioning is able to fully resolve the two objects, which explains the smaller VE for separate masks than for global mask preconditioning. The difference could be due to the larger condition number and thus more significant noise amplification for the global mask (the condition number of the sensitivity matrix after applying separate masks is 5.4 × 10^6 while it is 3.8 × 10^7 for the global mask). This is in agreement with observations reported in Ref [10]. Hence, we conclude that separate masks preconditioning provides better reconstruction performance than global mask preconditioning.

3.6 Comparison of preconditioning with separate masks and imaging with optimal base

Since preconditioning yields better results, one may hypothesize that, in conditions where the phantom properties (geometrical and optical) are known, it would be better to compute the optimal structured illumination and detection bases ad hoc and use them directly in the experiment. To do so, a positive and a negative base, as shown in Fig. 2, can be constructed to sequentially acquire measurements. This approach is classical in CS-based imaging and has been implemented for wavelet bases in FMT by Ducros et al. [24] (termed virtual sources). The caveat of this approach is that it requires at least twice the acquisition time, since the positive and negative components need to be acquired sequentially to form the mathematical base. When directly applying the optimal pattern base, the measurements used for reconstruction are computed from four experimental measurements, as

$$m = m_1 - m_2 - m_3 + m_4,$$
where $m_1$ is collected from $P_i^+$ and $P_d^+$, $m_2$ from $P_i^+$ and $P_d^-$, $m_3$ from $P_i^-$ and $P_d^+$, and $m_4$ from $P_i^-$ and $P_d^-$. $P_i^+$ and $P_i^-$ stand for the positive/negative optimal illumination patterns and $P_d^+$ and $P_d^-$ for the positive/negative optimal detection patterns.

To assess the merits of directly imaging with the optimal illumination/detection base obtained from the SVD versus performing preconditioning on experimental data from the quantized low-frequency base, we show in Fig. 7 the measurements associated with each case at different noise levels. The reported SNR is associated with each measurement ($m_i$) in the case of optimal base imaging, and with each measurement as in Fig. 4(e) in the case of the quantized low-frequency base.


Fig. 7 Single-pixel measurements for first 81 pairs of illumination and detection patterns in the case of preconditioning after imaging with quantized low-frequency base (first row) and directly imaging with optimal base (second row). From left to right, 50dB, 40dB, 30dB and 20dB Gaussian noise is added.


As seen, directly employing the optimal base leads to very poor SNR in the measurements, whereas the preconditioned measurements using quantized low-frequency base experimental data are robust even at 20dB. This is expected, as optimal base imaging data are computed from 4 experimental measurements, whereas preconditioned measurements using quantized low-frequency base experimental data are obtained from the weighted sum of 1,600 measurements. Hence, these results indicate that it is preferable to acquire data sets with high SNR values using a quantized base and then perform preconditioning, instead of using a mathematically optimal base with poor SNR. Note also that the optimal base as shown in Visualization 1 is challenging to implement experimentally. Using the optimal positive illumination pattern set as an example, the range (difference between maximum and minimum pixel value) is between 100 and 150 for patterns 1-9 (out of the 0-255 potential values using the 8-bit DMDs typically employed), 60-80 for patterns 10-21, and almost 0 for the rest. Hence, it is impossible to experimentally use the generated patterns beyond #21. In addition, for the first 9 patterns, the minimum difference between pixel values can be as low as 0.003 (3rd pattern), well below the minimum value that can be encoded using our DMDs.

4. Experimental validation

A 20mm thick phantom with optical properties of μa = 0.1cm−1 and μs = 9.6cm−1 was created using the recipe described in Ref [6] and validated via time-resolved spectroscopy. Two tubes (3.0mm diameter and 15.5mm center-to-center distance) filled with different fluorescent dyes, Alexa Fluor 750 and 3,3′-diethylthiatricarbocyanine iodide, were placed at 10mm depth in the phantom. On both the illumination and detection sides, light is encoded by a digital micromirror device (DMD) over a 40mm × 30mm area on the sample with the 36 quantized low-frequency patterns used in [8]. Excitation is performed at 750nm and emission light is collected through a TCSPC module over 10 wavelength channels (centered at 770nm-820nm) and 256 time gates (48.9ps width). We integrated the data temporally and over 3 spectral channels (775-785nm) to obtain a continuous-wave data set, to which we applied the preconditioning approach. Figure 8 provides the experimental data set as well as the measurements after preconditioning. The relative regularization parameter was set to ε = 10^−3 due to the high SNR of the acquired data; 7 patterns and 25 measurements were selected after preconditioning.


Fig. 8 Wide-field structured light measurements: (a) experimental data using the quantized low frequency base, (b) measurements after separate mask preconditioning.


Reconstructions were performed using both the LSQR and L1-regularization methods, with the data sets before and after preconditioning. The number of iterations was set to 30 for LSQR in all cases, and to 3 × 10^4 and 3 × 10^5 for L1-regularization before/after preconditioning, respectively. The regularization parameters were chosen from the L-curves. The reconstruction results are provided in Fig. 9. In all cases, separate mask preconditioning led to better reconstructions. In particular, when associated with sparsity constraints, the two inclusions were separated efficiently and located accurately. Note that these reconstructions were performed using a CW data type and better performance is expected when using early gates [9].


Fig. 9 (a)-(c): Reconstruction using the experimental measurements directly; (d)-(f): reconstruction using separate mask preconditioning and subset selection. (a) and (d) are obtained with LSQR; (b) and (e) with L1-norm regularization; (c) and (f) are the curvature plots of the L-curves used to select the optimal regularization parameters.


5. Conclusion

We have implemented and investigated a compressive sensing-based preconditioning method for wide-field structured light fluorescence molecular tomography. The method was tested in silico and experimentally. In all cases investigated, separate mask preconditioning, in which the structured illumination and detection light fields are preconditioned independently, led to the best overall 3D reconstruction performance. Additionally, we demonstrated that imaging with a quantized low-frequency base, in conjunction with preconditioning, led to better results than directly imaging with the optimal base derived from the SVD.

However, our investigation was limited to low-frequency quantized bases and the CW data type. For future studies, we plan to apply this preconditioning scheme to 1) various kinds of bases commonly used in compressive imaging, such as wavelet-based or DCT-based patterns; and 2) time-domain imaging, especially early gates, which offer improved resolution. We also plan to extend the method to hyperspectral data sets. 4D data cubes lead to large data sets that are difficult to employ for FMT due to memory constraints. The pattern/measurement subset selection approach described herein leads to significantly reduced data sets that would allow us to leverage the rich information content captured by compressive hyperspectral time-resolved imaging for fast whole-body multiplexed FMT.

Acknowledgments

This work is supported by the National Science Foundation (NSF) under Career award CBET 1149407 and National Institute of Health (NIH) Grants R01 EB19443 and R21 EB013421. The authors would like to thank An Jin and Birsen Yazici for insightful discussions.

References and links

1. C. Darne, Y. Lu, and E. M. Sevick-Muraca, “Small animal fluorescence and bioluminescence tomography: a review of approaches, algorithms and technology update,” Phys. Med. Biol. 59(1), R1–R64 (2014). [CrossRef]   [PubMed]  

2. S. Bélanger, M. Abran, X. Intes, C. Casanova, and F. Lesage, “Real-time diffuse optical tomography based on structured illumination,” J. Biomed. Opt. 15(1), 016006 (2010). [CrossRef]   [PubMed]  

3. J. Chen, V. Venugopal, F. Lesage, and X. Intes, “Time-resolved diffuse optical tomography with patterned-light illumination and detection,” Opt. Lett. 35(13), 2121–2123 (2010). [CrossRef]   [PubMed]  

4. V. Venugopal, J. Chen, F. Lesage, and X. Intes, “Full-field time-resolved fluorescence tomography of small animals,” Opt. Lett. 35(19), 3189–3191 (2010). [CrossRef]   [PubMed]  

5. N. Ducros, A. Bassi, G. Valentini, G. Canti, S. Arridge, and C. D’Andrea, “Fluorescence molecular tomography of an animal model using structured light rotating view acquisition,” J. Biomed. Opt. 18(2), 020503 (2013). [CrossRef]   [PubMed]  

6. V. Venugopal, J. Chen, M. Barroso, and X. Intes, “Quantitative tomographic imaging of intermolecular FRET in small animals,” Biomed. Opt. Express 3(12), 3161–3175 (2012). [CrossRef]   [PubMed]  

7. K. Abe, L. Zhao, A. Periasamy, X. Intes, and M. Barroso, “Non-invasive in vivo imaging of near infrared-labeled transferrin in breast cancer cells and tumors using fluorescence lifetime FRET,” PLoS One 8(11), e80269 (2013). [CrossRef]   [PubMed]  

8. Q. Pian, R. Yao, L. Zhao, and X. Intes, “Hyperspectral time-resolved wide-field fluorescence molecular tomography based on structured light and single-pixel detection,” Opt. Lett. 40(3), 431–434 (2015). [CrossRef]   [PubMed]  

9. L. Zhao, H. Yang, W. Cong, G. Wang, and X. Intes, “Lp regularization for early gate fluorescence molecular tomography,” Opt. Lett. 39(14), 4156–4159 (2014). [CrossRef]   [PubMed]  

10. A. Jin, B. Yazici, and V. Ntziachristos, “Light illumination and detection patterns for fluorescence diffuse optical tomography based on compressive sensing,” IEEE Trans. Image Process. 23(6), 2609–2624 (2014). [CrossRef]   [PubMed]  

11. S. Arridge and J. Schotland, “Optical tomography: forward and inverse problems,” arXiv preprint arXiv:0907.2586 (2009).

12. V. C. Kavuri, Z.-J. Lin, F. Tian, and H. Liu, “Sparsity enhanced spatial resolution and depth localization in diffuse optical tomography,” Biomed. Opt. Express 3(5), 943–957 (2012). [CrossRef]   [PubMed]  

13. S. Okawa, Y. Hoshi, and Y. Yamada, “Improvement of image quality of time-domain diffuse optical tomography with lp sparsity regularization,” Biomed. Opt. Express 2(12), 3334–3348 (2011). [CrossRef]   [PubMed]  

14. J. Shi, F. Liu, J. Zhang, J. Luo, and J. Bai, “Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization,” J. Biomed. Opt. 20(5), 055004 (2015). [CrossRef]   [PubMed]  

15. F. Yang, M. S. Ozturk, L. Zhao, W. Cong, G. Wang, and X. Intes, “High-resolution mesoscopic fluorescence molecular tomography based on compressive sensing,” IEEE Trans. Biomed. Eng. 62(1), 248–255 (2015). [CrossRef]   [PubMed]  

16. M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Sig. Proc. 55(12), 5695–5702 (2007). [CrossRef]  

17. K. Schnass and P. Vandergheynst, “Dictionary preconditioning for greedy algorithms,” IEEE Trans. Sig. Proc. 56(5), 1994–2002 (2008). [CrossRef]  

18. L. Zelnik-Manor, K. Rosenblum, and Y. C. Eldar, “Sensing matrix optimization for block-sparse decoding,” Signal Processing, IEEE Transactions on 59(9), 4300–4312 (2011). [CrossRef]  

19. J. Chen, V. Venugopal, and X. Intes, “Monte Carlo based method for fluorescence tomographic imaging with lifetime multiplexing using time gates,” Biomed. Opt. Express 2(4), 871–886 (2011). [CrossRef]   [PubMed]  

20. J. Chen and X. Intes, “Comparison of Monte Carlo methods for fluorescence molecular tomography-computational efficiency,” Med. Phys. 38(10), 5788–5798 (2011). [CrossRef]   [PubMed]  

21. X. Intes, J. Ripoll, Y. Chen, S. Nioka, A. G. Yodh, and B. Chance, “In vivo continuous-wave optical breast imaging enhanced with Indocyanine Green,” Med. Phys. 30(6), 1039–1047 (2003). [CrossRef]   [PubMed]  

22. J. Dutta, S. Ahn, A. A. Joshi, and R. M. Leahy, “Illumination pattern optimization for fluorescence tomography: theory and simulation studies,” Phys. Med. Biol. 55(10), 2961–2982 (2010). [CrossRef]   [PubMed]  

23. J. Chen, Q. Fang, and X. Intes, “Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography,” J. Biomed. Opt. 17(10), 1060091 (2012). [CrossRef]   [PubMed]  

24. N. Ducros, C. D’Andrea, A. Bassi, G. Valentini, and S. Arridge, “A virtual source pattern method for fluorescence tomography with structured light,” Phys. Med. Biol. 57(12), 3811–3832 (2012). [CrossRef]   [PubMed]  

Supplementary Material (1)

Visualization 1: AVI (26177 KB). Full optimal base (40 pattern pairs) shown as a video.

Figures (9)

Fig. 1 (a) Simulated numerical phantom; the two tubes are segmented at the 50% isovolume. (b) Subset of the quantized low-frequency imaging base used as illumination and detection bases; half of the imaging area is covered in each case.
Fig. 2 (a) Positive and negative optical masks, with regularization parameter ε = 10⁻². (b) First nine pattern pairs in the optimal base; figures are scaled to the individual maximum value in each pair (see Visualization 1 for the complete optimal base).
Fig. 3 Normalized inner product of the Jacobian after (a) global preconditioning and (b) separate-masks preconditioning. The regularization parameter value is given as a factor relative to the squared maximum singular value of the matrix (ε = 10⁻¹, 10⁻², 10⁻³ and 10⁻⁴).
Fig. 4 Single-pixel measurements for each combination of illumination/detection patterns (40×40) with no noise (first row) and with 25 dB noise added (second row). The first column shows measurements obtained without preconditioning (a, e); the other columns show measurements obtained via separate-masks preconditioning at ε = 10⁻¹ (b, f), ε = 10⁻² (c, g) and ε = 10⁻³ (d, h).
Fig. 5 Flow of pattern and measurement subset selection after preconditioning. The whole data set is shown in (a); based on the squared-singular-value plot (b), the first 9 illumination/detection patterns are kept. The 81 absolute measurement values (c) are then sorted (d), and the 15 largest values are selected for reconstruction (e).
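The selection flow summarized in the Fig. 5 caption (keep the leading patterns from the squared-singular-value plot, then keep the largest absolute measurements) can be sketched as follows. This is a minimal illustration with random stand-in data; the array sizes follow the caption, but the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 40 squared singular values and a 9x9 grid of preconditioned
# single-pixel measurements (random data, for illustration only).
sq_sv = np.sort(rng.random(40))[::-1] ** 2
meas = rng.normal(size=(9, 9))

# Step 1: keep the leading illumination/detection patterns (the first 9 here,
# as in the caption), based on the sorted squared singular values.
n_patterns = 9
kept_patterns = np.arange(n_patterns)

# Step 2: sort the 81 absolute measurement values and keep the 15 largest.
order = np.argsort(np.abs(meas).ravel())[::-1]
selected = order[:15]
print(selected.size)  # 15 measurements retained for reconstruction
```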
Fig. 6 Visualization of the reconstructions at the 50% isovolume for (a) the ground truth, (b) no preconditioning, (c) separate-masks preconditioning and (d) global-mask preconditioning.
Fig. 7 Single-pixel measurements for the first 81 pairs of illumination and detection patterns when preconditioning after imaging with the quantized low-frequency base (first row) versus directly imaging with the optimal base (second row). From left to right, 50 dB, 40 dB, 30 dB and 20 dB Gaussian noise is added.
Fig. 8 Wide-field structured-light measurements: (a) experimental data using the quantized low-frequency base; (b) measurements after separate-masks preconditioning.
Fig. 9 (a)-(c): reconstruction using the experimental measurements directly; (d)-(f): reconstruction using separate-masks preconditioning and subset selection. (a) and (d) are obtained with LSQR; (b) and (e) with L1-norm regularization; (c) and (f) are the curvature plots of the L-curves used to select the optimal regularization parameters.

Tables (1)


Table 1 Mean objective evaluation metrics (over 100 trials) for the reconstruction of the numerical phantom at different noise levels and with the different preconditioning approaches.
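The metrics reported in Table 1 are the volume error and normalized RMSE defined in Eqs. (10) and (11). A minimal sketch (the function names and sample arrays are ours, for illustration only):

```python
import numpy as np

# Volume error (Eq. (10)) and normalized RMSE (Eq. (11));
# the sample values below are illustrative, not the paper's data.
def volume_error(v_recon, v_truth):
    return (v_recon - v_truth) / v_truth

def nrmse(x_recon, x_truth):
    return np.sqrt(np.sum((x_recon - x_truth) ** 2) / np.sum(x_truth ** 2))

x_truth = np.array([0.0, 1.0, 1.0, 0.0])
x_recon = np.array([0.0, 0.9, 1.1, 0.0])
print(volume_error(2.5, 2.0))             # 25% volume overestimation -> 0.25
print(round(nrmse(x_recon, x_truth), 3))  # -> 0.1
```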

Equations (12)


\phi_x^i(\mathbf{r}) = \int_\Omega g_x(\mathbf{r},\mathbf{r}')\, s^i(\mathbf{r}')\, d\mathbf{r}', \qquad i = 1,\dots,N_s

\phi_m^i(\mathbf{r}) = \int_\Omega g_m(\mathbf{r},\mathbf{r}')\, \phi_x^i(\mathbf{r}')\, \eta\mu_{axf}(\mathbf{r}')\, d\mathbf{r}', \qquad i = 1,\dots,N_s

\Gamma_{i,j} = \int_\Omega g_m^j(\mathbf{r})\, \phi_x^i(\mathbf{r})\, \eta\mu_{af}(\mathbf{r})\, d\mathbf{r}, \qquad i = 1,\dots,N_s, \; j = 1,\dots,N_d.

A\mathbf{x} = \mathbf{b}, \quad \text{with } A = \begin{bmatrix} g_{em,1}^{1}\phi_{ex,1}^{1} & \cdots & g_{em,N}^{1}\phi_{ex,N}^{1} \\ \vdots & & \vdots \\ g_{em,1}^{N_d}\phi_{ex,1}^{1} & \cdots & g_{em,N}^{N_d}\phi_{ex,N}^{1} \\ g_{em,1}^{1}\phi_{ex,1}^{2} & \cdots & g_{em,N}^{1}\phi_{ex,N}^{2} \\ \vdots & & \vdots \\ g_{em,1}^{N_d}\phi_{ex,1}^{N_s} & \cdots & g_{em,N}^{N_d}\phi_{ex,N}^{N_s} \end{bmatrix} \in \mathbb{R}^{M\times N}

A = \Phi \odot G = \left[ \phi_1 \odot g_1, \; \phi_2 \odot g_2, \; \dots, \; \phi_N \odot g_N \right],

\mu(A) \le \mu_1(k,A) \le K\left( \left\| \Phi^T\Phi - I_N \right\|_F^2 + \left\| G^TG - I_N \right\|_F^2 \right),

\Phi_{pre}^T \Phi_{pre} \approx I_N; \qquad G_{pre}^T G_{pre} \approx I_N,

M_i = \left( \Lambda_i + \varepsilon I \right)^{-1/2} U_i^T

M_i = M_i^{(+)} - M_i^{(-)}.

VE = \left( V_{recon} - V_{truth} \right) / V_{truth}.

RMSE = \sqrt{ \sum_{i=1}^N \big( X_{recon}(i) - X_{truth}(i) \big)^2 \Big/ \sum_{i=1}^N \big( X_{truth}(i) \big)^2 },

m = m_1 - m_2 - m_3 + m_4,
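The preconditioning and acquisition equations above, Eqs. (8), (9) and (12), can be illustrated with a small numerical sketch (NumPy, random stand-in data; the pattern matrix Phi, the transport operator X and all variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pattern matrix Phi (rows = wide-field patterns); random stand-in data.
Phi = rng.random((9, 64))

# Eq. (8): preconditioner M = (Lambda + eps*I)^(-1/2) U^T from the SVD Phi = U S V^T,
# with Lambda = S^2 the squared singular values.
U, s, _ = np.linalg.svd(Phi, full_matrices=False)
eps = 1e-6
M = (U / np.sqrt(s**2 + eps)).T          # equals diag(1/sqrt(lam + eps)) @ U.T

# With a small eps the preconditioned patterns are near-orthonormal, cf. Eq. (7).
Phi_pre = M @ Phi
print(np.allclose(Phi_pre @ Phi_pre.T, np.eye(9), atol=1e-3))  # -> True

# Eq. (9): split each signed mask into non-negative parts that a light
# modulator can actually display.
Phi_plus, Phi_minus = np.maximum(Phi_pre, 0.0), np.maximum(-Phi_pre, 0.0)

# Eq. (12): recover the signed single-pixel measurement from the four
# acquisitions taken with the +/- illumination and detection masks.
X = rng.random((64, 64))                 # stand-in light-transport operator
d_p, d_m = Phi_plus[0], Phi_minus[0]     # detection mask parts
i_p, i_m = Phi_plus[1], Phi_minus[1]     # illumination mask parts
m1, m2, m3, m4 = d_p @ X @ i_p, d_p @ X @ i_m, d_m @ X @ i_p, d_m @ X @ i_m
m = m1 - m2 - m3 + m4
print(np.isclose(m, (d_p - d_m) @ X @ (i_p - i_m)))  # -> True
```

The four-measurement recombination follows from bilinearity: the signed masks are never displayed, only their non-negative parts, and the subtraction restores the signed measurement exactly.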