
Implementation of the extended Kalman filter for determining the optical and geometrical properties of turbid layered media by time-resolved single distance measurements

Open Access

Abstract

In this article we propose an implementation of the extended Kalman filter (EKF) for the retrieval of optical and geometrical properties of two-layered turbid media in a dynamic setting, where the absorption of each layer was changed in several steps. Prior works implemented the EKF in the frequency domain, with several pairs of light sources and detectors, and for static parameter estimation problems. Here we explore the use of the EKF on single-distance, time-domain measurements, together with a corresponding forward model. Results show good agreement between retrieved and nominal values, with rather narrow analytical credibility intervals, indicating that the recovery process has low uncertainty, especially for the absorption coefficients.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

During the past thirty years, Near Infrared Spectroscopy (NIRS) has become an increasingly interesting field in Biomedical Optics, due to the low absorption that NIR light presents in biological tissues [1–6]. Among these tissues, one of the most important organs is the human brain, where the concentration of chromophores in blood (such as haemoglobin) is modified by external stimuli or internal lesions [7–9]. In this sense, brain haemodynamics can be studied by analyzing changes in light absorption when light signals are injected from the outside. To this end, photons must propagate through a layered system consisting of, among others, scalp, skull and cerebrospinal fluid (CSF), before reaching the cerebral cortex. It is important to translate these absorption changes into changes in the content of chromophores in the blood. This can be done by analyzing the distributions of times of flight (DTOFs) of photons, which are affected by variations in the optical properties of the studied system, namely the absorption coefficient ($\mu _a$) and the reduced scattering coefficient ($\mu _s^{\prime }$).

Several methods have been introduced to solve the inverse problem of retrieving the optical properties of layered media from a single measurement or a set of measurements [10–14]. Most of these methods are based on the Maximum Likelihood Estimator (MLE) approach. There are at least two disadvantages in using such techniques. First, depending on the configuration of the solvers, it might be necessary to keep track of the initial points and to propose ad hoc criteria for the selection of the corresponding initial point for each measurement separately. This approach was considered in [12], where the authors use a fixed initial distribution (characterized by an initial mean and covariance) for every single measurement. Second, we may waste prior information on the dynamics, as well as any other information collected before the measurements. For example, if after a given set of recoveries all of them showed an increasing behavior, we might expect the next recovery to increase as well.

In this publication we present an algorithm based on the Extended Kalman Filter (EKF) for retrieving the optical as well as the geometrical properties of two-layered media, using time-resolved, single-distance measurements. This algorithm is a modified version of the one presented in Ref. [15], where the recovery of optical properties and thicknesses of a four-layered turbid medium in a multi-distance setup was performed using a frequency-domain approach, posed as a static estimation problem. The main improvements are the use of a single-distance approach, a more straightforward deconvolution process that does not require calibration of the corresponding hyperparameters, and the removal of the calibration process, which reduces complexity and computation time. Other changes regarding the forward model were also implemented, which strongly impact the efficiency of our method.

This work is structured as follows. Section 2 introduces the analytical model and the experimental setup. Section 3 describes the Extended Kalman Filter together with the corresponding improvements in the methodology. Section 4 deals with the tools used to reduce the complexity and the computation time of the forward model. Section 5 gives details on the particular implementation of the EKF from the statistical point of view for the different studied situations. Section 6 presents the main results, which are finally analyzed in Section 7, concluding with a brief summary of the whole work together with some ideas for future improvements.

2. Problem formulation and setup

2.1 Theoretical model

The situation to be modelled is presented in Fig. 1. A laser beam impinges on the center of the top surface of an $N$-layered cylinder of radius $R$, where layer $j=1,\ldots ,N$ has optical properties $\mu _{a,\;j}$ and $\mu _{s,\;j} ^{\prime }$ (so that the diffusion coefficient is defined as $D_j = 1/(3\mu _{s,\;j} ^{\prime })$), refractive index $n_j$ and thickness $l_j$, except for the last layer, which is taken as semi-infinite [16], i.e., $l_N = \infty$. According to the diffusion approximation, scattering becomes isotropic at a depth $z_0 = 1/\mu _{s,1} ^{\prime }$ [1], so the actual position of the isotropic source is $\mathbf {r} = \left ( 0,0,\;z_0 \right )$. The reflectance $R \left ( \rho , t \right )$ at the surface $z=0$, for optode separation $\rho$ and time $t$, can be obtained as follows:

$$R \left( \rho, t \right) = \frac{1}{4A\pi^2 R_{EB}^2}\sum_{n=1}^{\infty} \left[ \int _{-\infty} ^{\infty} G_1 \left( s_n, z=0,\omega \right) e ^{i\omega t} d\omega \right] \frac{J_0 \left( s_n \rho \right)}{J_1 ^2 \left( s_n R_{EB} \right)},$$
where $J_0$ and $J_1$ are the Bessel functions of the first kind of orders zero and one, respectively; $A=A(n)$ is a factor that depends on the refractive index mismatch between the first layer and the surrounding medium [3]; $R_{EB}$ is an extrapolated radius given by $R_{EB}=R+z_{b,1}$ (with extrapolation distance $z_{b,1}=2AD_1$); and $s_n$ is the $n$-th scaled root of the zero-order Bessel function, such that $J_0 \left ( s_n R_{EB} \right ) = 0$. The Green’s function $G_1 \left (s_n,\;z,\;\omega \right )$ for the first layer has the form:
$$\begin{aligned} G_1 \left(s_n,\;z,\;\omega \right) &= \frac{\exp{\left[-\alpha_1 \left( z - z_0 \right) \right]}-\exp{\left[-\alpha_1 \left( z + z_0 + 2z_{b,1} \right) \right]}}{2D_1 \alpha_1}+\\ &+\frac{\sinh \left[ \alpha_1 \left( z_0 + z_{b,1} \right) \right]\sinh \left[ \alpha_1 \left( z + z_{b,1} \right) \right]}{D_1 \alpha_1 \exp{\left[\alpha_1 \left(l_1+z_{b,1} \right) \right]}} \times\\ &\times \frac{D_1 \alpha_1 n_1 ^2 \beta_N - D_2 \alpha_2 n_2 ^2 \gamma_N}{D_1 \alpha_1 n_1 ^2 \beta_N \cosh \left[\alpha_1 \left(l_1+z_{b,1} \right)\right]+D_2 \alpha_2 n_2 ^2 \gamma_N \sinh \left[\alpha_1 \left(l_1+z_{b,1} \right)\right]}, \end{aligned}$$
where $\alpha _j = \sqrt {\frac {\mu _{a,\;j}}{D_j}+s_n^2+\frac {i\omega }{D_j c_j}}$ and $c_j$ is the speed of light in layer $j$. The factors $\beta _N$ and $\gamma _N$ depend on the optical properties and thicknesses of the remaining layers, except for the two-layered case, where $\beta _N = \gamma _N = 1$.
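For concreteness, the following is a minimal sketch (in Python, not the authors' code) of how the reflectance above can be evaluated for the two-layered case ($\beta _N = \gamma _N = 1$): $G_1$ is built on an $(s_n,\omega)$ grid, transformed to the time domain by an inverse FFT, and summed over the Bessel series. All parameter defaults, the grid sizes, and the mismatch factor $A$ are illustrative assumptions, and normalization constants of the transform are omitted.

```python
import numpy as np
from scipy.special import jn_zeros, j0, j1

def reflectance_two_layer(rho, mua, musp, l1, n_ref=(1.4, 1.4), R_cyl=100.0,
                          A=2.95, n_s=2000, n_w=1024, dt=5e-12):
    """Sketch of R(rho, t) for a two-layered cylinder (beta_N = gamma_N = 1).
    mua, musp in mm^-1, lengths in mm; returns t (s) and R up to a constant
    normalization. All defaults (A, grid sizes, n_ref) are assumptions."""
    mua, musp, n_ref = map(np.asarray, (mua, musp, n_ref))
    D = 1.0 / (3.0 * musp)                    # diffusion coefficients
    c = 2.998e11 / n_ref                      # speed of light per layer, mm/s
    zb = 2.0 * A * D[0]                       # extrapolation distance z_{b,1}
    z0 = 1.0 / musp[0]                        # isotropic source depth
    R_EB = R_cyl + zb
    s = jn_zeros(0, n_s) / R_EB               # scaled roots s_n of J_0
    w = 2.0 * np.pi * np.fft.fftfreq(n_w, d=dt)
    S, W = np.meshgrid(s, w, indexing="ij")
    a1 = np.sqrt(mua[0] / D[0] + S**2 + 1j * W / (D[0] * c[0]))
    a2 = np.sqrt(mua[1] / D[1] + S**2 + 1j * W / (D[1] * c[1]))
    # G_1 at z = 0; the source term is taken with |z - z0|
    G1 = (np.exp(-a1 * z0) - np.exp(-a1 * (z0 + 2 * zb))) / (2 * D[0] * a1)
    x = a1 * (l1 + zb)
    E = np.exp(-2.0 * x)                      # rescaled so den = denominator*exp(-x)
    num = D[0] * a1 * n_ref[0]**2 - D[1] * a2 * n_ref[1]**2
    den = 0.5 * (D[0] * a1 * n_ref[0]**2 * (1 + E)
                 + D[1] * a2 * n_ref[1]**2 * (1 - E))
    # the sinh factors can overflow for very large n_s; fine for the default grid
    G1 += (np.sinh(a1 * (z0 + zb)) * np.sinh(a1 * zb)
           / (D[0] * a1)) * num * E / den
    g_t = np.fft.ifft(G1, axis=1).real        # inverse transform over omega
    weights = j0(s * rho) / j1(s * R_EB)**2   # Bessel-series weights
    Rt = (g_t * weights[:, None]).sum(axis=0) / (4 * A * np.pi**2 * R_EB**2)
    return np.arange(n_w) * dt, Rt
```

For instance, a call such as `reflectance_two_layer(30.0, [0.01, 0.01], [1.0, 1.0], 9.0)` would mimic the phantom geometry described in Section 2.2.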

Fig. 1. Scheme for the modelling of light diffusion in a multi-layered cylinder with the last layer of infinite thickness.

2.2 Experimental setup

Experimental data were taken from single-distance, time-resolved reflectance measurements conducted on a two-layered liquid phantom, as illustrated in Fig. 2. The experiments were performed with the time-domain NIRS instrumentation described in Refs. [17,18]. It was based on picosecond diode lasers (Sepia, PicoQuant GmbH, Berlin, Germany), fast single-photon detectors, and time-correlated single-photon counting (TCSPC) modules (SPC-134, Becker & Hickl GmbH, Berlin, Germany). The measurements were part of a performance assessment of time-domain optical brain imagers according to the "nEUROPt Protocol" [19], in which several systems and configurations were compared. The dataset used in the present work, as well as in Ref. [20], corresponds to configuration "PTB 1" in Ref. [19]. It was acquired with a 797 nm laser head and an R7400U-02 photomultiplier tube (Hamamatsu Photonics, Japan). Data from the same two-layer phantom experiment, but recorded by another detector, were analyzed in Ref. [12]. The picosecond laser pulses were guided to the surface of the phantom by a multimode optical fiber (core diameter 200 µm). At a distance of $\rho = 30$ mm, diffusely scattered photons were collected by a fiber bundle (diameter 4 mm, length 1.5 m, NA 0.54) connected to the PMT. The phantom, described in detail in Ref. [21], consisted of a container made of black polyvinyl chloride. The front plate (thickness 2 mm) was equipped with two plexiglass windows (diameter 7 mm) for the source fiber and the detector fiber bundle. A Mylar foil of 30 µm thickness was used as a separator to realize the two-layer structure. The thickness of the first layer was $l_1 = 9$ mm, that of the second layer $l_2 = 61$ mm. Both layer volumes in the container were filled with liquid solutions made of water, Intralipid-20% to adjust the scattering coefficient, and black India ink to adjust the absorption. The dependence of the (nominal) optical properties on the Intralipid and India ink concentrations was known with high accuracy from previous measurements within a multicenter study [17]. The procedure for the preparation of the liquids is described in detail in Ref. [20].

Fig. 2. Scheme of the experimental setup for measuring the optical properties and the thickness of the two-layered phantom. The source-detector distance $\rho$ was set to 30 mm.

A total of 24 measurements with different absorption coefficients $\mu _{a,1}$ and $\mu _{a,2}$ of the two layers were performed, while the reduced scattering coefficients of both layers remained constant ($\mu _{s,1}^\prime = \mu _{s,2}^\prime = 1$ mm$^{-1}$) throughout the study. In the first twelve measurements, the absorption coefficient $\mu _{a,1}$ of the first layer was increased almost linearly, with smaller steps in the range of the smallest changes. In the second twelve measurements, $\mu _{a,2}$ was increased stepwise after $\mu _{a,1}$ was set back to the start value of measurement 1. Figure 3 illustrates the nominal values of the parameters of interest that are considered as unknown parameters in the following analysis. The photon count rate was adjusted to $1\times 10^6$ s$^{-1}$ in the homogeneous case (measurements # 1 and # 13), and the filter settings were not changed when adding absorption. The lowest count rate in the series was $2\times 10^5$ s$^{-1}$. The DTOFs were recorded with a collection time of 1 s, with 100 repetitions in each measurement. After each measurement the instrument response function (IRF) was recorded; owing to the short time gap between IRF and phantom measurements, long-term drifts could be excluded. Figure 4 shows the DTOF for measurement # 1 (homogeneous case) after summation over the 100 repetitions, together with the normalized IRF. Furthermore, the DTOFs for measurement # 12 (largest absorption in layer 1) and measurement # 24 (largest absorption in layer 2) are plotted for comparison. The DTOFs have been normalized to the maximum of DTOF # 1. The IRF exhibits a small shoulder due to the shape of the laser pulse, and its half-width is approximately 500 ps.

Fig. 3. Nominal parameter values of the studied two-layered phantom.

Fig. 4. Normalized DTOFs for the homogeneous case (measurement # 1), the largest absorption in layer 1 (measurement # 12), and the largest absorption in layer 2 (measurement # 24), together with the measured IRF.

3. The extended Kalman filter

The Extended Kalman Filter (EKF) [15,22] is a non-linear, non-optimal generalization of the Kalman Filter, suitable for problems where the forward model is non-linear. In our specific application we denote by $x$ the vector containing the optical properties $\mu _a$ and $\mu _s^\prime$ (for the corresponding layer), the geometrical parameter $l$, and $t_0$ (an optional time shift to account for an imperfect correction of the delay due to different fiber arrangements in IRF and phantom measurements), and by $y$ the measured DTOF. The subscript $t$ indicates the respective measurement in the sequence described above, i.e., $t=1,\ldots ,24$. Considering an evolution operator $F$ and an observation operator $H$, the Extended Kalman Filter can be used to solve problems written in the form

$$ x_{t+1} = F(x_t) + \eta _t, \qquad \eta _t \sim \mathcal{N}(0, Q_t) $$
$$y_{t+1} = H(x_{t+1}) + \nu _t, \qquad \nu _t \sim \mathcal{N}(0, R_t) $$
where the covariance matrices $Q_t$ and $R_t$ stand for the process noise and the measurement noise, respectively. This means that the uncertainty in the evolution of the variable $x_t$ follows a Normal distribution with zero mean and covariance matrix $Q_t$ (denoted as $\mathcal{N}(0,Q_t)$), and the uncertainty in the measurement follows a Normal distribution with mean $0$ and covariance $R_t$. The variables $\eta$ and $\nu$ are realizations of those random variables. Assuming independence between $F$ and $\eta$ and between $H$ and $\nu$, it can be seen that the residuals follow a Normal distribution as described above. The EKF advances in time by updating the information provided by two operators: the evolution operator and the observation operator. The predicted distribution is constructed using the information of the prior step and the evolution operator; then, the prediction is compared with the measurement through the observation operator and the discrepancies are corrected. The result of this procedure is the updated distribution, and it can be summarized in the following iterative process: considering the initial distribution $\mathcal{N}(x_0,\Gamma _0)$, in step $t+1$ the mean and covariance $x_{t+1}, \Gamma _{t+1}$ follow the update equations
$$\begin{aligned}\underline{\textbf{Prediction step:}}&\\ x_{t+1|t} &= F(x_t) \end{aligned}$$
$$\Gamma _{t+1|t} = F^{\prime}\Gamma _{t|t}F^{\prime T} + Q_t $$
$$\begin{aligned} \underline{\textbf{Update step:}}&\\ x_{t+1|t+1} &= \arg\!\min _{x\in \mathbb{R}^n} \left\Vert y_{t+1}-H(x) \right\Vert ^2_{R_t} + \left\Vert x-x_{t+1|t} \right\Vert ^2_{\Gamma _{t+1|t}} \end{aligned}$$
$$K_t = \Gamma _{t+1|t} H^{\prime T} _t \left(H^\prime _t\Gamma _{t+1|t} H^{\prime T} _t + R_t \right)^{{-}1} $$
$$\Gamma _{t+1|t+1} = (I-K_t H^\prime_t)\Gamma _{t+1|t}$$
where $H^\prime = \frac {\partial H}{\partial x}(x_{t+1|t})$ is the Jacobian matrix of the observation operator and $K_t$ is the Kalman gain matrix. The output of the EKF is the updated distribution obtained after considering, at each step, the prediction according to our evolution proposal and the correction after seeing the measurement, i.e., $\mathcal{N}(x_{t+1|t+1},\Gamma _{t+1|t+1})$.
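As an illustration of the equations above, a single, generic EKF cycle might look as follows. This is a minimal sketch, not the authors' implementation; the arg-min update is realized here through the equivalent Kalman-gain form, and `jac_F`, `jac_H` are assumed helpers returning the Jacobians of $F$ and $H$.

```python
import numpy as np

def ekf_step(x, Gamma, y, F, H, jac_F, jac_H, Q, R):
    """One EKF prediction/update cycle following the equations above."""
    # Prediction step
    x_pred = F(x)
    Fp = jac_F(x)
    Gamma_pred = Fp @ Gamma @ Fp.T + Q
    # Update step: Kalman gain, then corrected mean and covariance
    Hp = jac_H(x_pred)
    S = Hp @ Gamma_pred @ Hp.T + R          # innovation covariance
    K = Gamma_pred @ Hp.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H(x_pred))
    Gamma_new = (np.eye(len(x)) - K @ Hp) @ Gamma_pred
    return x_new, Gamma_new
```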

Although the output of the EKF is a distribution, we may be interested in particular point estimates which may be useful to analyze particular cases, such as the mode, the median, the mean, etc. In this work we will use the Maximum A Posteriori (MAP) estimate, which corresponds to the mode of the resulting distribution after an appropriate reparametrization [15].

4. Model acceleration

The model presented in Section 2.1 requires the calculation of the $G_1$ function for many frequencies $\omega$ and many Bessel zeros. To ensure stability and convergence of the model, many calculations must be performed, some of which can be superfluous. To accelerate the calculations we propose two improvements that give a substantial gain in computation time.

4.1 GPU implementation

As the method involves an inverse Fourier transform and an inverse discrete Hankel transform, the computational cost is $O((sw)^2\log (w))$, where $s$ and $w$ are the numbers of Bessel zeros and frequencies, respectively. Although a vectorized implementation can be used, the number of operations can be huge. The impact of this can be observed in the inverse problem time: on an Intel i7 3630QM processor with 16 GB RAM at 1600 MHz, the evaluation time is about 2.15 seconds. An iteration of the EKF may need some hundreds of model evaluations, leading to very large iteration times. The first proposal is to use a tailored GPU parallel implementation; to this end, we used an NVIDIA GeForce GT 760M graphics card. In Fig. 5 the ratio between CPU and GPU times is plotted, showing that a gain of up to 4x can be obtained with this GPU.
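The gain comes from evaluating the $(s_n, \omega)$ grid elementwise in parallel on the GPU. As a rough illustration (assuming CuPy as the GPU array backend; the authors' tailored implementation is not described in detail), the pattern is:

```python
# Minimal sketch of the GPU strategy: CuPy as a drop-in NumPy replacement
# for the elementwise work on the (s_n, omega) grid. Helper name and
# arguments are illustrative assumptions.
import cupy as cp

def alpha_grid_gpu(s, w, mua, D, c):
    """Evaluate alpha_1 and alpha_2 on the full (s_n, omega) grid on the GPU."""
    S, W = cp.meshgrid(cp.asarray(s), cp.asarray(w), indexing="ij")
    a1 = cp.sqrt(mua[0] / D[0] + S**2 + 1j * W / (D[0] * c[0]))
    a2 = cp.sqrt(mua[1] / D[1] + S**2 + 1j * W / (D[1] * c[1]))
    return a1, a2   # the remaining factors of G_1 follow the same pattern
```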

Fig. 5. Speed comparison between the models implemented in CPU and GPU. The chosen parameters were $\mu _{a,1}$ = $\mu _{a,2}$ = 0.01 mm$^{-1}$, $\mu _{s,1}^\prime$ = $\mu _{s,2}^\prime$ = 1 mm$^{-1}$ and $l$ = 10 mm, using $n_{\omega }$ = 1024 frequencies and $n_{s}$ = 7000 Bessel zeros.

4.2 Model reduction

The model defined in Section 2.1 is constructed in the frequency domain and taken to the time domain through a Fourier transform. A possible reduction of the number of frequencies in the model can be obtained by examining how much information can be retrieved through deconvolution. Given that the measurement is the convolution of the DTOF with the instrument response function, the DTOF must be extracted from the measurement by deconvolution in order to compare the information it provides with our models. Since both the measurement and the IRF are noisy (they are realizations of random variables), we may encounter problems in this process. Here we consider two types of deconvolution: a plain deconvolution and a Tikhonov-regularized deconvolution [23].

The plain deconvolution can be stated in a very natural form and is obtained by performing:

$$f_{\textrm{plain}} = \mathcal{F}^{{-}1}\left[\frac{\mathcal{F}(f_{\textrm{measured}})}{\mathcal{F}(g)}\right]$$
where $g$ is the Instrument Response Function (IRF) and $f_{\textrm {measured}}$ is the collected data in the time domain. This implementation is not reliable because both $f_{\textrm {measured}}$ and $g$ are contaminated by noise, and we cannot guarantee that the quotient $\frac {\mathcal {F}(f_{\textrm {measured}})}{\mathcal {F}(g)}$ lies in the Schwartz space; thus, it may not converge to a useful solution. Nevertheless, some useful information may still be recovered in the frequency domain at low frequencies, before noise dominates, as Fig. 6 shows in the highlighted region of interest.
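A minimal sketch of this plain deconvolution (assuming equal-length, uniformly sampled arrays) is:

```python
import numpy as np

def plain_deconvolution(f_measured, g):
    """Naive Fourier deconvolution as in the expression above; unreliable
    wherever F(g) is close to zero (high frequencies dominated by noise)."""
    return np.fft.ifft(np.fft.fft(f_measured) / np.fft.fft(g)).real
```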

Fig. 6. Normalized logarithm of the amplitude (for both the theory and deconvolved measurement number 1) vs. frequency. The region of interest (highlighted in red) shows the frequencies required for the DTOF reconstruction.

The Tikhonov regularized deconvolution [23] can be stated as:

$$f_{\textrm{Tikhonov}} = \mathcal{F}^{{-}1}\left[\frac{\mathcal{F}^{*}(g)\mathcal{F}(f_{\textrm{measured}})}{\vert\mathcal{F}(g)\vert ^2 + \lambda}\right]$$
where $^*$ stands for complex conjugation and $\lambda$ is a tunable hyperparameter which, in this work, was obtained through the L-curve criterion [24]. It can be shown that the Tikhonov-regularized deconvolution is well defined and can be used to reconstruct the underlying DTOF [25]. However, we are interested in the region where both approaches carry the same kind of information when compared with the corresponding model, and we need to assess the usable information in the frequency domain, not in the time domain. This is also necessary because our model is constructed in the frequency domain and then transformed to the time domain. In Fig. 6 we can see a region which allows us to devise a reduced model: instead of using the full information provided by the model at all frequencies in Fourier space, we can use the information provided by the model evaluated on a subset of the frequency domain. Using that subset, we reduce the computational effort of computing the function $G_1$ from $O(sw)$ to $O(s)$ once an appropriate number of frequencies has been chosen and fixed (the total cost of computing the reduced model is $O(s^2\log (s))$, which can be dramatically lower than that of the full model). For the remaining frequencies, we set $G_1(s_n,\;z,\;\omega ) = 0$.
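A corresponding sketch of the Tikhonov-regularized deconvolution, with $\lambda$ supplied externally (e.g., from the L-curve criterion), is:

```python
import numpy as np

def tikhonov_deconvolution(f_measured, g, lam):
    """Tikhonov-regularized deconvolution as in the expression above;
    lam is the regularization hyperparameter."""
    G = np.fft.fft(g)
    return np.fft.ifft(np.conj(G) * np.fft.fft(f_measured)
                       / (np.abs(G)**2 + lam)).real
```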

In Fig. 7 a comparison of the full model (where $G_1(s_n,\;z,\;\omega )$ is calculated without truncation) with the reduced model is shown (both after convolution with the same IRF). In log-scale there is good agreement except at very early and very late times. This disagreement is dealt with in the next Section, as we will not consider the regions where the relative error is large.

Fig. 7. Comparison between full and reduced model after convolution with the IRF. The chosen parameters are the same as in Fig. 5.

The EKF uses an observation operator which is not the model compared directly against the measurements, but its convolution with the IRF, i.e., $H(x) = g*R(x)$. This convolution must be computed for both the full and the reduced model to see how much information is lost. To quantify the information lost when the full model is replaced by the reduced one, we use the Kullback-Leibler divergence [26].

According to this test, the obtained value of the divergence is 5.4852$\times 10^{-4}$, which is the same information loss as when a standard Gaussian distribution (taken as the full model) is replaced by a Gaussian distribution of zero mean and standard deviation 0.9770; this suggests that the information lost is negligible. The Kullback-Leibler divergence is also used as a measure of discrepancy between probability distributions, so this interpretation is appropriate as well, because the DTOFs can be regarded as probability distributions.
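A sketch of how such a divergence can be computed between two DTOFs treated as discrete probability distributions follows; the floor `eps` is an assumption to guard against division by zero and log(0).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two DTOFs treated as probability distributions;
    both are normalized to unit sum first."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = p > eps                               # ignore empty channels
    return float(np.sum(p[m] * np.log(p[m] / (q[m] + eps))))
```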

Figure 8 shows the speed gain of the reduced model over the full model implemented in GPU. The total speed improvement is almost 60x.

Fig. 8. Speed gain ratio between the reduced model and the full model, both implemented in GPU. The total speed gain is around 60 times that of the original model (in its CPU version). The parameters were set to $\mu _{a,1,2} = 0.01$ mm$^{-1}$, $\mu _{s,1,2}^\prime = 1$ mm$^{-1}$, $l = 10$ mm. For the full model, $n_{\omega }=1024$ and $n_{s}=7000$; for the reduced model, $n_{\omega } = 20$ and $n_s=7000$.

5. Implementation of the EKF

The implementation of the EKF is straightforward once the inputs are specified. However, the main assumption of the method is the normality of the measurement error, i.e., for a measurement $y$ obtained from a sample with optical properties $\mu _{a,\;j}, \mu _{s,\;j}^{\prime }$ and layer thicknesses $l_j$, and an appropriate model $H$ evaluated at the same properties, we have:

$$y \sim \mathcal{N}(H(\mu _{a,\;j}, \mu _{s,\;j}^\prime),R) $$
$$y - H(\mu _{a,\;j}, \mu _{s,\;j}^\prime) \sim \mathcal{N}(0,R) $$
$$D^{{-}1}(y-H(\mu _{a,\;j}, \mu _{s,\;j}^\prime)) \sim \mathcal{N}(0,I) $$
This means that the samples, after appropriate scaling, are independent and identically distributed samples of a standard normal distribution ($D$ is the matrix obtained from the Cholesky factorization [27] of $R$, i.e., $DD^T = R$). This allows us to develop a routine to verify the normality of the data. First, the covariance matrix $R$ must be provided (this should be the same measurement covariance matrix that will be provided to the EKF, $R_t$) and its Cholesky factor $D$ must be calculated. Second, using the model and the corresponding optical properties, calculate $z = D^{-1}(y-H(\mu _{a,\;j}, \mu _{s,\;j}^\prime ))$. Finally, perform a hypothesis test on the normality of the resulting sample to decide whether the method is applicable or not. Also, considering Poisson statistics, it is possible to apply the same reasoning as above through the following transformation, valid when the expected number of counts is large [28]:
$$X\sim Poisson(\lambda) $$
$$\frac{(X-\lambda)}{\sqrt{\lambda}}\sim N(0,1) $$
$$\frac{(y-H(\mu _{a,\;j},\;\mu _{s,\;j}^\prime))}{\sqrt{y}}\sim N(0,1) $$
and the procedure described above holds with $D$ a diagonal matrix containing the square roots of the measured counts (so that $R$ is diagonal with the counts themselves). As our model is based on the diffusion approximation, care must be taken when applying the method described above, especially at early times, where the assumed diffusive regime fails [1,29]. An example of this situation can be seen in Fig. 9 where, within the first nanosecond, the residuals are not in the range $[-3,3]$ in which the samples should lie with approximately 99.7% probability. This suggests that the discrepancy is caused not only by noise but also by the model, which is not appropriate in that region, and we obtain a histogram with tails far too heavy to follow a Gaussian distribution. The proposal here is similar to previous works [30]: the DTOFs and the model (reduced or not) are cropped so that the selected region of interest follows a Gaussian distribution. In Fig. 10 the DTOF is cropped between 1 and 5 ns, where the resulting residuals histogram follows a Gaussian distribution, as expected. In this particular case the employed test was the Kolmogorov-Smirnov test [28], which uses the empirical cumulative distribution to infer whether the sample comes from a proposed distribution, in our case a standard Normal one. After cropping, the test fails to reject the normality hypothesis, allowing us to apply the EKF to this data.
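A minimal sketch of this normality check under the Poisson scaling described above (the helper name and the significance level `alpha` are assumptions for illustration):

```python
import numpy as np
from scipy.stats import kstest

def residuals_pass_normality(y, model, alpha=0.05):
    """Whiten the residuals with D = diag(sqrt(counts)) and run the
    Kolmogorov-Smirnov test against a standard Normal distribution."""
    z = (y - model) / np.sqrt(y)
    _, p_value = kstest(z, "norm")
    return p_value > alpha          # True: normality is not rejected
```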

Fig. 9. Full histogram of weighted residuals, which does not fulfill the normality conditions. The measurement chosen is the first, corresponding to $\mu _{a,1} = 0.01$, $\mu _{a,2} = 0.01$ and $\mu _{s}^\prime = 1$, all in mm$^{-1}$. The lower subplot shows the weighted residuals for each time channel, suggesting the region where the measurement fails the Kolmogorov-Smirnov test.

Fig. 10. The cropped DTOF satisfies the normality conditions. The lower subplot shows the weighted residuals for each time channel. The shape of the histogram resembles a Normal distribution; when the data is scaled, we get a standard normal distribution according to the Kolmogorov-Smirnov test with $p$-value 0.81.

6. Results

In this Section we discuss the results for two particular situations using the data acquired as explained in Section 2.2. First, we retrieve the parameters of interest $\mu _{a,1}$ and $\mu _{a,2}$, considering the rest of the parameters fixed (namely $\mu _{s,1}^\prime$, $\mu _{s,2}^\prime$, $l$ and $t_0$). In the second retrieval, the entire set of parameters is allowed to vary.

The data were pre-processed as explained in the previous Section, starting at the leftmost point at 5% of the maximum of the DTOF and ending at the rightmost point at 0.5% of the maximum, where the statistical uncertainty is ensured to be Gaussian. In this way, we can obtain information from the lower layer by collecting late photons. As explained above, once we have verified that the EKF can be applied, we must provide the initial distributions. For the parameter distributions of $\mu _{a,\;j},$ $\mu _{s,\;j}^{\prime }$ and $l$, we use the same change of variables as in [15], but not for $t_0$, which we assume follows a Gaussian distribution, as it can be either positive or negative. We assume that the parameters are independent a priori and, using a $3\sigma$-confidence interval around the mean, we obtain the values in Table 1.
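A sketch of this cropping rule (the function name and threshold arguments are illustrative assumptions):

```python
import numpy as np

def crop_dtof(t, dtof, left_frac=0.05, right_frac=0.005):
    """Crop a DTOF between the leftmost channel at 5% of the maximum
    (rising edge) and the rightmost channel at 0.5% of the maximum
    (falling tail), as in the pre-processing described above."""
    peak = dtof.max()
    i_left = int(np.argmax(dtof >= left_frac * peak))       # first crossing
    i_right = int(np.where(dtof >= right_frac * peak)[0][-1])  # last crossing
    return t[i_left:i_right + 1], dtof[i_left:i_right + 1]
```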

Table 1. Parameter setup for situations 1 and 2. M1 and M2 stand for the initial values of the corresponding parameters; CI stands for the $3\sigma$-confidence interval. The fourth and seventh columns (Fixed?) indicate whether the corresponding parameter is fixed in situation 1 or 2, respectively.

In Fig. 11 the MAP estimate for each absorption coefficient is shown, together with the corresponding limits of the 99% credibility intervals. The behavior is consistent with the nominal values as well as with the results reported in prior works [12,20], which make use of the same experimental data. These intervals are very narrow, giving high confidence to the recovery. In particular, the interval for $\mu _{a,2}$ is narrower than that for $\mu _{a,1}$; the reason might lie in the fact that the DTOFs were cropped only at rather late times, thereby gathering a large fraction of information from late photons, which should have travelled mostly through the second layer. As the method evolves through the iterations, we make use of information from past measurements, which can lead to higher uncertainty, as can be seen in Fig. 11. This phenomenon is well known and can be reduced using memory-fading variants [31].

Fig. 11. Recovery of the absorption coefficients of the first and second layers (left and right plots, respectively) when fixing the rest of the parameters. The blue dots represent the retrieved values, together with the uncertainty intervals (red lines). The nominal values, indicated by the green line, are shown for comparison.

Figure 12 presents the recovery results for all six parameters. Note that in this case the retrieved values of $\mu _{a,1}$ and $\mu _{a,2}$ show qualitatively the same behaviour as those obtained when fixing the other parameters, i.e., the algorithm correctly predicts the expected changes in both absorption coefficients throughout all the measurements (except, for example, in measurement 21); however, the credibility intervals are wider, this effect being particularly evident for $\mu _{a,1}$. We can also observe that $\mu _{s,1}^{\prime }$ remains almost unaltered along the measurements, as expected, although the interval for $\mu _{s,2}^{\prime }$ is wider than that for $\mu _{s,1}^{\prime }$. The reason for this behavior is that large variations in the scattering of the second layer do not dramatically change the shape of the DTOFs [16], resulting in high uncertainties for this particular parameter. The uncertainty also increases because of the correlation between the effects produced by different parameters and their impact on the model, i.e., no parameter has a unique effect on the shape of the DTOF (an effect further masked by the IRF), and this enlarges the plausibility region.

Fig. 12. Recovery of all the parameters. The blue dots represent the MAP estimates obtained by our methodology, the green lines the nominal values, and the red lines the 99% credibility intervals. Our estimates are in good agreement with the nominal values, which lie within the credibility intervals.

Although the only parameters that evolve in the experiment are $\mu _{a,1}$ and $\mu _{a,2}$, while the rest are constant, the latter must also be estimated. We expect these parameters to converge to the corresponding constants and then stay there. This behavior can be seen in all the constant parameters, showing the robustness of our method. The thickness of the first layer, which starts at 11 mm, takes a few more iterations to reach a value closer to the nominal one of 9 mm. This is reasonable, considering that the corresponding variance was set rather low, implying good a priori knowledge and, consequently, difficulty in undergoing large changes between iterations. Finally, the retrieved value of $t_0$, initially set to $0.006$ ns, immediately increases to almost 0.025 ns, which corresponds to about two or three time channels at our experimental resolution, and remains stable for the rest of the measurements.

Figure 13 shows the marginalized distribution for each parameter, as opposed to Fig. 12, where point estimates are presented. As explained in Ref. [15], the result of the EKF is a multivariate distribution which can be too complex to analyze; in order to study each parameter separately, it is possible to marginalize every parameter by integrating the distribution with respect to all the parameters but the one of interest. This gives us information on the uncertainty of the parameter of interest after considering all the possible values of the other ones (according to our full distribution). In this case an asymmetry of the rescaled parameters can be observed, which is the result of using log-normal distributions. Since we are dealing with distributions, it can also be seen that values closer to the MAP have a higher probability of being correct than those far away from it, even when lying inside the 99% credibility interval.

Fig. 13. Marginal a posteriori distributions for the optical, geometrical and temporal parameters. The more intense the yellow, the higher the density of the corresponding parameter. These plots show the full distributions behind the point estimates of Fig. 12.

Figure 14 shows, for the particular case of measurement 2, the good agreement between the recovery and the collected data, together with the reduced-$\chi ^2$ estimate between the recovery and the measurement. The window used is shown in the lower plot of Fig. 14 and results from the analysis performed in Section 5; the resulting time window is $[1.1238, 3.9822]$ ns, and it was used for all measurements. It is important to note that the change in frequency content actually changes the temporal resolution; by interpolation it is possible to perform the required calculation within the chosen interval. Figure 15 summarizes the reduced-$\chi ^2$ estimates for all measurements. Although we are minimizing two terms (measurement and evolution) together, the EKF is capable of obtaining a good estimate according to the error variance.
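For reference, a sketch of the reduced $\chi ^2$ computation under Poisson variance (Var$[y_i] \approx y_i$; the helper name is illustrative):

```python
import numpy as np

def reduced_chi2(y, model, n_params):
    """Reduced chi-square with Poisson variance, as used to assess
    the goodness of fit; values near 1 indicate a good fit."""
    return float(np.sum((y - model)**2 / y) / (y.size - n_params))
```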

Fig. 14. Fit comparison showing the goodness of fit for measurement 2. The upper plot compares the reconstructed DTOF (blue line) with the measured one (red line); the lower plot shows the residuals. The temporal window used is $[1.1238, 3.9822]$ ns.

Fig. 15. Reduced $\chi ^2$ plot showing the goodness of fit for all measurements. As the values are near $1$, the method succeeds in capturing the noise structure while preserving the proposed evolution.

7. Discussion and conclusions

In this work we have presented an implementation of the Extended Kalman Filter for determining the optical parameters as well as the thicknesses of layered media through single-distance, time-resolved reflectance measurements. This implementation incorporates several improvements with respect to the algorithm introduced by some of the authors in a previous publication [15], by simplifying the data acquisition to a single source-detector pair and avoiding a calibration procedure. It is also implemented in the time domain, avoiding the need for further data processing to transform the data to the frequency domain. The present algorithm was validated with measurements on a two-layered liquid phantom, which had already been used in previous works [20], suggesting that this method could be enhanced for implementation in true dynamic situations. The algorithm is a probabilistic approach whose output is a distribution fully characterized by its mean and covariance, and it can be analyzed using statistical tools.

We have introduced two possible approaches for the acceleration of the forward model. They are useful to reduce the recovery time, which is directly related to the forward model computation times. The GPU approach led to a 4x improvement, and a better implementation could obtain a higher reduction. This is also technology-dependent, which means that a better GPU may also yield better times. The second approach is a model reduction which sacrifices some numerical precision in order to gain speed. We have shown that, for the studied cases, this loss is negligible. The gain in this case was about 15x over our GPU implementation, which means a total gain of 60x compared to a full CPU implementation. A proper numerical analysis of the loss can be performed so that the error can be bounded.

We have also performed an analysis of the DTOF that justifies its cropping, needed by the EKF. As the measurements are described by Poisson statistics, if we take many measurements we can apply the Central Limit Theorem [28] to describe the noise statistics of the mean through a Normal distribution. However, there are some problems in the tails of the error distribution, especially if we use a reduced model, because the main loss occurs in that region, making the cropping necessary. This was tested using the Kolmogorov-Smirnov test, which failed to reject our hypotheses.

In Ref. [15], a calibration procedure was implemented to make measurements and theory compatible with each other. In this work, we have shown that, in the region of interest where we have worked, this calibration procedure is not necessary, which removes some uncertainties introduced by it; in particular, the regularized deconvolution becomes unnecessary, leading to a simpler and faster method that is also less prone to errors. This method, as well as the analysis performed in Section 5, can be applied to the multi-distance approach to enhance the recovery of parameters, i.e., the true values lie within our a posteriori credibility intervals, and those intervals are narrow. Since the advantage of the multi-distance approach is the availability of perspectives through different layers, as the distances are related to the light penetration in the media, the combination of the approaches of this work and Ref. [15] might result in better performance in terms of speed and certainty.

Although the set of measurements was not meant for a dynamic setup, it can be used artificially for that purpose. The objective of using the Extended Kalman Filter, instead of performing a recovery from each measurement separately, was to incorporate information from past measurements, given that they are not "far" from each other; separate recoveries would mean starting over and wasting recently acquired information. It could be argued that using a recent recovery as the initial point of a subsequent iteration would be analogous to the proposed approach, but this would neglect the information provided by the updated covariance matrix $\Gamma _{t+1|t+1}$. This is the essence of using prior information: employing something obtained before (or even assumed) in the retrieval of information at a further step. The Extended Kalman Filter works for Normal distributions, but other approaches are available when this is not the case, e.g., the Unscented Kalman Filter, the Particle Filter, the Gaussian Sum filter, etc. [31].

In a future work we propose to compare the performance of this new implementation of the EKF between the time-domain and frequency-domain approaches.

Funding

Comisión de Investigaciones Científicas (FCCIC 2016).

Acknowledgements

The authors would like to thank the valuable contribution of Dr. Thomas Gladitz and Mr. Lin Yang on discussing how to improve the method and deal with time-domain data.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. M. Patterson, B. Chance, and B. Wilson, “Time resolved reflectance and transmittance for the non-invasive measurement of tissue optical properties,” Appl. Opt. 28(12), 2331–2336 (1989). [CrossRef]  

2. M. Hiraoka, M. Firbank, M. Essenpreis, M. Cope, S. Arridge, P. van der Zee, and D. Delpy, “A monte carlo investigation of optical pathlength in inhomogeneous tissue and its application to near-infrared spectroscopy,” Phys. Med. Biol. 38(12), 1859–1876 (1993). [CrossRef]  

3. D. Contini, F. Martelli, and G. Zaccanti, “Photon migration through a turbid slab described by a model based on diffusion approximation. i. theory,” Appl. Opt. 36(19), 4587–4599 (1997). [CrossRef]  

4. B. Montcel, R. Chabrier, and P. Poulet, “Detection of cortical activation with time-resolved diffuse optical methods,” Appl. Opt. 44(10), 1942–1947 (2005). [CrossRef]  

5. B. Tromberg, A. Cerussi, N. Shah, M. Compton, A. Durkin, D. Hsiang, J. Butler, and R. Mehta, “Diffuse optics in breast cancer: detecting tumors in pre-menopausal women and monitoring neoadjuvant chemotherapy,” Breast Cancer Res. 7(6), 279–285 (2005). [CrossRef]  

6. D. Comelli, A. Bassi, A. Pifferi, P. Taroni, A. Torricelli, R. Cubeddu, F. Martelli, and G. Zaccanti, “In vivo time-resolved reflectance spectroscopy of the human forehead,” Appl. Opt. 46(10), 1717–1725 (2007). [CrossRef]  

7. D. Contini, A. Torricelli, A. Pifferi, F. Paglia, and R. Cubeddu, “Multi-channel time-resolved system for functional near infrared spectroscopy,” Opt. Express 14(12), 5418–5432 (2006). [CrossRef]  

8. E. Hillman, “Optical brain imaging in vivo: techniques and applications from animal to man,” J. Biomed. Opt. 12(5), 051402 (2007). [CrossRef]  

9. P. Jones, H. Shin, D. Boas, B. Hyman, M. Moskowitz, C. Ayata, and A. Dunn, “Simultaneous multispectral reflectance imaging and laser speckle flowmetry of cerebral blood flow and oxygen metabolism in focal cerebral ischemia,” J. Biomed. Opt. 13(4), 044007 (2008). [CrossRef]

10. C. Sato, M. Shimada, Y. Tanikawa, and Y. Hoshi, “Estimating the absorption coefficient of the bottom layer in four-layered turbid mediums based on the time-domain depth sensitivity of near-infrared light reflectance,” J. Biomed. Opt. 18(9), 097005 (2013). [CrossRef]  

11. Y.-K. Liao and S.-H. Tseng, “Reliable recovery of the optical properties of multi-layer turbid media by iteratively using a layered diffusion model at multiple source-detector separations,” Biomed. Opt. Express 5(3), 975–989 (2014). [CrossRef]  

12. F. Martelli, S. D. Bianco, L. Spinelli, S. Cavalieri, P. D. Ninni, T. Binzoni, A. Jelzow, R. Macdonald, and H. Wabnitz, “Optimal estimation reconstruction of the optical properties of a two-layered tissue phantom from time-resolved single-distance measurements,” J. Biomed. Opt. 20(11), 115001 (2015). [CrossRef]  

13. R. Re, D. Contini, L. Zucchelli, A. Torricelli, and L. Spinelli, “Effect of a thin superficial layer on the estimate of hemodynamic changes in a two-layer medium by time domain nirs,” Biomed. Opt. Express 7(2), 264–278 (2016). [CrossRef]  

14. D. Milej, A. Abdalmalak, P. McLachlan, M. Diop, A. Liebert, and K. S. Lawrence, “Subtraction-based approach for enhancing the depth sensitivity of time-resolved nirs,” Biomed. Opt. Express 7(11), 4514–4526 (2016). [CrossRef]  

15. H. García, G. Baez, and J. Pomarico, “Simultaneous retrieval of optical and geometrical parameters of multilayered turbid media via state-estimation algorithms,” Biomed. Opt. Express 9(8), 3953–3973 (2018). [CrossRef]  

16. H. García, D. Iriarte, J. Pomarico, D. Grosenick, and R. Macdonald, “Retrieval of the optical properties of a semiinfinite compartment in a layered scattering medium by single-distance, time-resolved diffuse reflectance measurements,” J. Quant. Spectrosc. Radiat. Transfer 189, 66–74 (2017). [CrossRef]  

17. H. Wabnitz, M. Möller, A. Liebert, A. Walter, R. Macdonald, H. Obrig, J. Steinbrink, R. Erdmann, and O. Raitza, “A time-domain nir brain imager applied in functional stimulation experiments,” in Photon Migration and Diffuse-Light Imaging II, (Optical Society of America, 2005), p. WA5.

18. H. Wabnitz, M. Moeller, A. Liebert, H. Obrig, J. Steinbrink, and R. Macdonald, “Time-resolved near-infrared spectroscopy and imaging of the adult human brain,” in Oxygen Transport to Tissue XXXI, E. Takahashi and D. F. Bruley, eds. (Springer US, Boston, MA, 2010), pp. 143–148.

19. H. Wabnitz, A. Jelzow, M. Mazurenka, O. Steinkellner, R. Macdonald, D. Milej, N. Zołek, M. Kacprzak, P. Sawosz, R. Maniewski, A. Liebert, A. Torricelli, D. Contini, R. Re, L. Zucchelli, L. Spinelli, R. Cubeddu, and A. Pifferi, “Performance assessment of time-domain optical brain imagers, part 2: nEUROPt protocol,” J. Biomed. Opt. 19(8), 086012 (2014). [CrossRef]

20. A. Jelzow, H. Wabnitz, I. Tachtsidis, E. Kirilina, R. Brühl, and R. Macdonald, “Separation of superficial and cerebral hemodynamics using a single-distance time-domain NIRS measurement,” Biomed. Opt. Express 5(5), 1465–1482 (2014). [CrossRef]

21. F. Martelli, P. D. Ninni, G. Zaccanti, D. Contini, L. Spinelli, A. Torricelli, R. Cubeddu, H. Wabnitz, M. Mazurenka, R. Macdonald, A. Sassaroli, and A. Pifferi, “Phantoms for diffuse optical imaging based on totally absorbing objects, part 2: experimental implementation,” J. Biomed. Opt. 19(7), 076011 (2014). [CrossRef]  

22. G. R. Baez, J. A. Pomarico, and G. E. Elicabe, “An improved extended kalman filter for diffuse optical tomography,” Biomed. Phys. Eng. Express 3(1), 015013 (2017). [CrossRef]  

23. R. A. Willoughby, “Solutions of ill-posed problems (A. N. Tikhonov and V. Y. Arsenin),” SIAM Rev. 21(2), 266–267 (1979). [CrossRef]  

24. P. Hansen and D. O’Leary, “The use of the l-curve in the regularization of discrete ill-posed problems,” SIAM J. Sci. Comput. 14(6), 1487–1503 (1993). [CrossRef]  

25. J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, vol. 160 (Springer Science and Business Media, 2006).

26. J. M. Bernardo and A. F. M. Smith, Bayesian Theory (John Wiley & Sons, 1994).

27. G. H. Golub and C. F. van Loan, Matrix Computations (JHU Press, 2013), 4th ed.

28. V. Rohatgi, Statistical Inference, Dover Books on Mathematics (Dover Publications, 2003).

29. F. Martelli, D. Contini, A. Taddeucci, and G. Zaccanti, “Photon migration through a turbid slab described by a model based on diffusion approximation. ii. comparison with monte carlo results,” Appl. Opt. 36(19), 4600–4612 (1997). [CrossRef]

30. A. Liebert, H. Wabnitz, D. Grosenick, M. Möller, R. Macdonald, and H. Rinneberg, “Evaluation of optical properties of highly scattering media by moments of distributions of times of flight of photons,” Appl. Opt. 42(28), 5785–5792 (2003). [CrossRef]  

31. D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches (John Wiley and Sons, 2006).
