Optica Publishing Group

Finite sampling corrected 3D noise with confidence intervals

Open Access

Abstract

When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density for noise in imaging systems known as 3D noise. The goal was to decompose the 3D noise process into spatial and temporal components to identify potential sources of origin. To characterize a sensor in terms of its 3D noise values, measurements with a finite number of samples in each of the three dimensions (two spatial, one temporal) are performed. In this correspondence, we developed the full sampling corrected 3D noise measurement and the corresponding confidence bounds. The accuracy of these methods was demonstrated through Monte Carlo simulations. Both the sampling correction and the confidence intervals can be applied a posteriori to the classic 3D noise calculation. The MATLAB functions associated with this work can be found on the MathWorks File Exchange ["Finite sampling corrected 3D noise with confidence intervals," https://www.mathworks.com/matlabcentral/fileexchange/49657-finite-sampling-corrected-3d-noise-with-confidence-intervals].

1. Introduction

Noise observed from an imaging sensor, when evaluated with a spatially uniform irradiance, exhibits both spatial and temporal variations that can be described as a three-dimensional (3D) random process. When the random process may be considered linear and shift invariant, one method of characterization is through the 3D power spectral density (PSD) [1,2]. Typically, the 3D PSD is a complex 3D function, making it difficult to draw significant conclusions without further processing or with additional metrics.

To aid in interpretation, in the early 1990s, NVESD engineers developed a model for the 3D power spectral density known as 3D noise [3]. The 3D noise model is a white noise approximation consisting of seven independent contributions that span the 3D correlation space (three 1D uncorrelated processes, three 2D uncorrelated processes, and one 3D uncorrelated process). Approximating the 3D power spectral density with the 3D noise model gives seven variances that can be loosely correlated with underlying imager parameters. The spatial noise terms can be indicative of poor nonuniformity correction (NUC), while a high variance of any of the 1D temporal processes can be related to flicker [6].

A measurement of 3D noise is conducted by sampling the noise process along two spatial dimensions and a temporal dimension [4-7]. The number of samples available for the calculation of 3D noise depends on both the parameters of the system under test and the test geometry. For example, some testing methodologies employ a collimator to provide a uniform flat field that occupies only a portion of the field of view. Temporal limitations can also arise from source or system instability; for example, if the system under test employs any automatic features on a compressed time scale (automatic NUCing). In these finite sampling geometries, errors arise when using the classical 3D noise methodology because of its underlying assumption of infinite sampling.

Recently, efforts have been made to correct for cases with finite sampling. In [8], a sampling correction was derived working directly with the 3D PSD. This method accurately captures the sampling artifacts, resulting in minimal errors on average. However, working with the PSD requires all samples to be considered, so the PSD can be significantly affected by outliers (an outlier, for example, could be a nonresponsive pixel that violates the typical assumption of Gaussianity of the underlying process). Another method [9,10] was proposed to address the bias correction in sample space, where the effects of defective pixels can be removed from the calculation. Although this method provides a significant improvement in error compared to the classic method, in the calculation of some of the processes it fails to capture instances of coupling to the 2D random processes.

In addition to capturing sampling errors for the estimate in situations with limited sampling, a measure of uncertainty is also valuable. This is especially true in the current environment where the requirement margins for system performance are small and programmatic decisions are made from single measurements. To that end, in this correspondence we derived and demonstrated the bounding confidence intervals for the 3D noise measurement. Additional details of the methods and calculation tools can be found in the MATLAB functions on the MathWorks File Exchange [1].

This correspondence is organized as follows: Section 2 presents the derivation of the 3D noise directional averaging and how the variance of all seven terms couple to the variance observed. Section 3 discusses confidence intervals as applied to the 3D noise measurement problem. Section 4 then shows the results of Monte Carlo simulations for both the mean estimation as well as the associated confidence intervals. Finally, we offer additional observations and comments on model assumptions, and then discuss additional preprocessing steps that can influence results.

2. 3D Noise Variance Estimation

The 3D noise model consists of three 1D random processes, one in each of the orthogonal directions: two spatial (vertical and horizontal) and one temporal (time). There are three 2D random processes describing the 2D correlations (time-vertical, time-horizontal, and vertical-horizontal), and one 3D random process (time-vertical-horizontal). The 3D noise model is defined as

$$\mathrm{Noise}(h,v,t)=\mathrm{noise}_{t}(t)+\mathrm{noise}_{v}(v)+\mathrm{noise}_{h}(h)+\mathrm{noise}_{tv}(v,t)+\mathrm{noise}_{th}(h,t)+\mathrm{noise}_{vh}(h,v)+\mathrm{noise}_{tvh}(h,v,t).\tag{1}$$

Each of the independent noise sources is treated as a zero-mean, delta-correlated Gaussian random process completely characterized by its variance: $\sigma_t^2$, $\sigma_v^2$, $\sigma_h^2$, $\sigma_{tv}^2$, $\sigma_{th}^2$, $\sigma_{vh}^2$, $\sigma_{tvh}^2$. An example cartoon depicting the 2D random process $vh$ sampled in 3D is shown in Fig. 1.
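The superposition above is straightforward to simulate numerically. Below is a minimal NumPy sketch; the function name, the axis ordering (T, V, H), and the dictionary of standard deviations are our own illustrative choices, not part of the reference MATLAB code:

```python
import numpy as np

def simulate_noise_cube(T, V, H, sigmas, seed=0):
    """Draw one realization of the seven-component 3D noise model.

    sigmas: dict of standard deviations for the seven independent,
    zero-mean, delta-correlated Gaussian processes.
    Returns a (T, V, H) array; broadcasting replicates each lower-
    dimensional process across the dimensions it does not depend on.
    """
    rng = np.random.default_rng(seed)
    return (
        sigmas["t"] * rng.standard_normal((T, 1, 1))      # 1D temporal
        + sigmas["v"] * rng.standard_normal((1, V, 1))    # 1D vertical
        + sigmas["h"] * rng.standard_normal((1, 1, H))    # 1D horizontal
        + sigmas["tv"] * rng.standard_normal((T, V, 1))   # 2D time-vertical
        + sigmas["th"] * rng.standard_normal((T, 1, H))   # 2D time-horizontal
        + sigmas["vh"] * rng.standard_normal((1, V, H))   # 2D vertical-horizontal
        + sigmas["tvh"] * rng.standard_normal((T, V, H))  # 3D
    )
```

Because the components are independent, the total variance of such a cube is approximately the sum of the seven component variances.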

 figure: Fig. 1.

Fig. 1. Example cartoon depiction of the vh random process sampled in 3D across H, V, and T samples. Note that the vh frame is independent of time. The results of the directional averaging of this process also are shown.


Experimentally, the noise is measured as a 3D cube of sampled observations, as shown in Fig. 1. To obtain the variance of the independent random processes, we calculated statistics following directional averaging. The required seven equations for each of the unknown variances can be found by calculating the variance for each of: three orthogonal directional averages applied across one dimension, three different two dimensional directional averages, and the total variance. Because the individual random processes are independent, we can independently assess the effects of directional averages on each random process.

The unbiased estimate of the sample variance for each of the noise processes is defined as

$$\sigma_t^2=\frac{1}{T-1}\sum_{\ell=1}^{T}\mathrm{noise}_t(\ell)^2,\quad \sigma_v^2=\frac{1}{V-1}\sum_{k=1}^{V}\mathrm{noise}_v(k)^2,\quad \sigma_h^2=\frac{1}{H-1}\sum_{j=1}^{H}\mathrm{noise}_h(j)^2,$$
$$\sigma_{tv}^2=\frac{1}{TV-1}\sum_{\ell=1}^{T}\sum_{k=1}^{V}\mathrm{noise}_{tv}(k,\ell)^2,\quad \sigma_{th}^2=\frac{1}{TH-1}\sum_{\ell=1}^{T}\sum_{j=1}^{H}\mathrm{noise}_{th}(j,\ell)^2,$$
$$\sigma_{vh}^2=\frac{1}{VH-1}\sum_{j=1}^{H}\sum_{k=1}^{V}\mathrm{noise}_{vh}(j,k)^2,\quad \sigma_{tvh}^2=\frac{1}{TVH-1}\sum_{\ell=1}^{T}\sum_{j=1}^{H}\sum_{k=1}^{V}\mathrm{noise}_{tvh}(j,k,\ell)^2.\tag{2}$$
Another important definition is the directional average, defined as
$$\bar{x}=\frac{1}{n}\sum_{j=1}^{n}x_j.\tag{3}$$
Following a directional average, the unbiased sample variance is calculated [2]:
$$\sigma^2=\frac{1}{n-1}\sum_{j=1}^{n}\left(x_j-\bar{x}\right)^2.\tag{4}$$

The notation $D_{x,y}$ represents the directional average of the $y$th noise component along the $x$ direction, where $x$ can be either one or two dimensions. Figure 1 shows a pictorial representation of the results of a directional average along with the relevant notation. Following the directional averaging, the calculated variance is denoted, e.g., $\sigma_{D_t,v}^2$.
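For a measured cube, the seven directional-average variances that enter the mixing relation can be computed directly. A minimal NumPy sketch follows; the helper name and the (T, V, H) axis ordering are our assumptions:

```python
import numpy as np

def directional_variances(cube):
    """Measured variances after directional averaging of a (T, V, H) cube.

    Returned in the order [Dvh, Dth, Dtv, Dh, Dv, Dt, Dtvh]; subscripts
    name the averaged axes, and Dtvh denotes the total variance with no
    averaging, following the convention of the measurement vector.
    """
    var = lambda x: np.var(x, ddof=1)      # unbiased sample variance
    t, v, h = 0, 1, 2                      # axis indices
    return np.array([
        var(cube.mean(axis=(v, h))),       # average over v,h -> 1D in t
        var(cube.mean(axis=(t, h))),       # average over t,h -> 1D in v
        var(cube.mean(axis=(t, v))),       # average over t,v -> 1D in h
        var(cube.mean(axis=h)),            # average over h -> 2D in t,v
        var(cube.mean(axis=v)),            # average over v -> 2D in t,h
        var(cube.mean(axis=t)),            # average over t -> 2D in v,h
        var(cube),                         # no averaging -> total variance
    ])
```

As a sanity check, a cube containing only a vh process gives exactly zero for the vh-averaged variance, since every frame averages to the same constant.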

The Appendix provides a complete derivation of the resulting variance following directional averaging for each of the seven independent random processes. Because the noise sources are independent, we can calculate the moments separately. Thus, the total variance of the directional averages is the sum of the individual terms:

$$\sigma_{D_t}^2=\sigma_{D_t,t}^2+\sigma_{D_t,v}^2+\sigma_{D_t,h}^2+\sigma_{D_t,tv}^2+\sigma_{D_t,th}^2+\sigma_{D_t,vh}^2+\sigma_{D_t,tvh}^2.\tag{5}$$
All the variances can be compactly represented in matrix form as
$$\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_{h}}^2\\ \sigma_{D_{v}}^2\\ \sigma_{D_{t}}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}=M\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix},$$
$$M=\begin{bmatrix}
1 & 0 & 0 & \frac{1}{V} & \frac{1}{H} & 0 & \frac{1}{VH}\\[2pt]
0 & 1 & 0 & \frac{1}{T} & 0 & \frac{1}{H} & \frac{1}{TH}\\[2pt]
0 & 0 & 1 & 0 & \frac{1}{T} & \frac{1}{V} & \frac{1}{TV}\\[2pt]
\frac{V(T-1)}{TV-1} & \frac{T(V-1)}{TV-1} & 0 & 1 & \frac{V(T-1)}{(TV-1)H} & \frac{T(V-1)}{(TV-1)H} & \frac{1}{H}\\[2pt]
\frac{H(T-1)}{TH-1} & 0 & \frac{T(H-1)}{TH-1} & \frac{H(T-1)}{(TH-1)V} & 1 & \frac{T(H-1)}{(TH-1)V} & \frac{1}{V}\\[2pt]
0 & \frac{H(V-1)}{VH-1} & \frac{V(H-1)}{VH-1} & \frac{H(V-1)}{(VH-1)T} & \frac{V(H-1)}{(VH-1)T} & 1 & \frac{1}{T}\\[2pt]
\frac{VH(T-1)}{TVH-1} & \frac{TH(V-1)}{TVH-1} & \frac{TV(H-1)}{TVH-1} & \frac{H(TV-1)}{TVH-1} & \frac{V(TH-1)}{TVH-1} & \frac{T(VH-1)}{TVH-1} & 1
\end{bmatrix},\tag{6}$$
which is one of the main results of this correspondence. The 3D noise variances of the individual noise sources are found by applying the inverse of the above matrix to the measured variances after directional averaging. The inverse of Eq. (6) has a closed-form analytical solution that is not presented here due to space constraints. The classic measurement matrix is the limit of the above matrix as T, V, and H go to infinity [3]:
$$\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_{h}}^2\\ \sigma_{D_{v}}^2\\ \sigma_{D_{t}}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}=\begin{bmatrix}
1&0&0&0&0&0&0\\
0&1&0&0&0&0&0\\
0&0&1&0&0&0&0\\
1&1&0&1&0&0&0\\
1&0&1&0&1&0&0\\
0&1&1&0&0&1&0\\
1&1&1&1&1&1&1
\end{bmatrix}\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix}.\tag{7}$$
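The finite-sampling matrix M and its infinite-sampling limit above can be transcribed directly into code. A sketch follows (the function name is ours; the column order is [t, v, h, tv, th, vh, tvh] and the row order [Dvh, Dth, Dtv, Dh, Dv, Dt, Dtvh]):

```python
import numpy as np

def mixing_matrix(T, V, H):
    """Finite-sampling mixing matrix relating the seven true component
    variances to the seven measured directional-average variances."""
    return np.array([
        [1, 0, 0, 1/V, 1/H, 0, 1/(V*H)],
        [0, 1, 0, 1/T, 0, 1/H, 1/(T*H)],
        [0, 0, 1, 0, 1/T, 1/V, 1/(T*V)],
        [V*(T-1)/(T*V-1), T*(V-1)/(T*V-1), 0, 1,
         V*(T-1)/((T*V-1)*H), T*(V-1)/((T*V-1)*H), 1/H],
        [H*(T-1)/(T*H-1), 0, T*(H-1)/(T*H-1),
         H*(T-1)/((T*H-1)*V), 1, T*(H-1)/((T*H-1)*V), 1/V],
        [0, H*(V-1)/(V*H-1), V*(H-1)/(V*H-1),
         H*(V-1)/((V*H-1)*T), V*(H-1)/((V*H-1)*T), 1, 1/T],
        [V*H*(T-1)/(T*V*H-1), T*H*(V-1)/(T*V*H-1), T*V*(H-1)/(T*V*H-1),
         H*(T*V-1)/(T*V*H-1), V*(T*H-1)/(T*V*H-1), T*(V*H-1)/(T*V*H-1), 1],
    ])

# Corrected estimates: apply the inverse to the measured variances, e.g.
# sigma2_hat = np.linalg.solve(mixing_matrix(T, V, H), measured)
```

As T, V, and H grow large, this matrix converges to the classic mixing matrix, which can serve as a quick numerical check of the transcription.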
For comparison purposes, we can also represent the bias correction derived in [10] in the same form as above. In [10], the bias correction is represented as a correction to the classic mixing matrix of Eq. (7). The correction matrix proposed in [10] is:
$$\begin{bmatrix}\bar{\sigma}_t^2\\ \bar{\sigma}_v^2\\ \bar{\sigma}_h^2\\ \bar{\sigma}_{tv}^2\\ \bar{\sigma}_{th}^2\\ \bar{\sigma}_{vh}^2\\ \bar{\sigma}_{tvh}^2\end{bmatrix}=\begin{bmatrix}
1&0&0&\frac{1}{V}&\frac{1}{H}&0&\frac{1}{VH}\\[2pt]
0&1&0&\frac{1}{T}&0&\frac{1}{H}&\frac{1}{TH}\\[2pt]
0&0&1&0&\frac{1}{T}&\frac{1}{V}&\frac{1}{TV}\\[2pt]
0&0&0&1&0&0&\frac{1}{H}\\[2pt]
0&0&0&0&1&0&\frac{1}{V}\\[2pt]
0&0&0&0&0&1&\frac{1}{T}\\[2pt]
0&0&0&0&0&0&1
\end{bmatrix}\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix}.\tag{8}$$
This matrix also can be represented as a single forward mixing operation in a form similar to Eq. (6) as
$$\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_{h}}^2\\ \sigma_{D_{v}}^2\\ \sigma_{D_{t}}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}=M_{UB0}\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix},$$
$$M_{UB0}=\begin{bmatrix}
1&0&0&\frac{1}{V}&\frac{1}{H}&0&\frac{1}{VH}\\[2pt]
0&1&0&\frac{1}{T}&0&\frac{1}{H}&\frac{1}{TH}\\[2pt]
0&0&1&0&\frac{1}{T}&\frac{1}{V}&\frac{1}{TV}\\[2pt]
1&1&0&\frac{V+T+TV}{TV}&\frac{1}{H}&\frac{1}{H}&\frac{T+V+TV}{TVH}\\[2pt]
1&0&1&\frac{1}{V}&\frac{H+T+TH}{TH}&\frac{1}{V}&\frac{H+T+HT}{TVH}\\[2pt]
0&1&1&\frac{1}{T}&\frac{1}{T}&\frac{H+V+HV}{VH}&\frac{H+V+HV}{TVH}\\[2pt]
1&1&1&\frac{V+T+TV}{TV}&\frac{H+T+TH}{TH}&\frac{H+V+HV}{VH}&\frac{C}{TVH}
\end{bmatrix},\tag{9}$$
where $C=H+T+V+TV+HT+HV+TVH$. Note that the 1D terms match Eq. (6); however, as the inverse of this matrix is applied to the measurements, the 1D measured variances will differ from the derived results of Eq. (6). Section 4 presents Monte Carlo comparisons between the derived measurement matrix of Eq. (6), the classic method of Eq. (7), and Eq. (9) from the literature [10].

3. Confidence Intervals for Variance Estimation

Recall that each of the measured variances represents a single realization of a random process. If this measurement were repeated, a different value would likely be found, and the total variability would be determined by a governing distribution. We approximate the distribution of $\hat{\sigma}_t^2$ and the other estimates as Gaussian, as they are a mix of many random processes. A Gaussian distribution is completely determined by its mean [as found by Eq. (6)] and its variance. To obtain the variance of the variance estimators, we must calculate the cross covariance [11]:

$$\mathrm{cov}\!\left(\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix},\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix}\right)=M^{-1}\,\mathrm{cov}\!\left(\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_{h}}^2\\ \sigma_{D_{v}}^2\\ \sigma_{D_{t}}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix},\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_{h}}^2\\ \sigma_{D_{v}}^2\\ \sigma_{D_{t}}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}\right)\left(M^{-1}\right)^T,\tag{10}$$
where M represents the forward mixing matrix and cov represents the cross-covariance, defined as
$$\mathrm{cov}(x,y)=E\!\left[(x-\mu_x)(y-\mu_y)\right]=\sigma_x\sigma_y\,\mathrm{corr}(x,y).\tag{11}$$

In Eq. (11), corr is the correlation, which can be used to aid in calculating the covariance. Recall that the random process $\sigma_{D_{vh}}^2$ and the others are each a summation of terms from the seven original independent random processes. Therefore, we can calculate the cross covariance for each one independently, then sum to obtain the total variance. To derive the cross covariance, we must derive the distribution of the calculated variance for each of the independent random processes following each of the different directional averages.

Let $X_1, X_2, \ldots, X_n$ be random samples from a Gaussian distributed random process with mean $\mu$, population variance $\sigma^2$, and sample variance $S^2$. Then the random variable

$$X^2=\frac{(n-1)S^2}{\sigma^2}\tag{12}$$
has a chi-square ($\chi^2$) distribution with $(n-1)$ degrees of freedom. Thus the probability density function of the sample variance has a chi-squared distribution [2]. To account for the additional scaling that is required by the mixing, the scaled distribution can be expressed as
$$f_{\mathrm{scaled}\,\chi^2}(x;\nu,a)=\begin{cases}\dfrac{(x/a)^{\nu/2-1}\,e^{-x/(2a)}}{2^{\nu/2}\,\Gamma(\nu/2)\,a}, & x\ge 0\\[6pt] 0, & x<0\end{cases}\tag{13}$$
where the scaling is $a=\sigma^2/(n-1)$. The first moment of this distribution can be found as
$$E[S^2]=\int_{-\infty}^{\infty} f_{\mathrm{scaled}\,\chi^2}(x;\nu,a)\,x\,dx=a\nu=\sigma^2.\tag{14}$$
Thus the expectation of the sample variance is an unbiased estimate of the population variance $\sigma^2$, in accordance with Eq. (2). The variance of the variance estimator can be found as
$$\mathrm{Var}[S^2]=E\!\left[(S^2)^2\right]-E[S^2]^2=\int_{-\infty}^{\infty} f_{\mathrm{scaled}\,\chi^2}(x;\nu,a)\,x^2\,dx-(a\nu)^2=2a^2\nu.\tag{15}$$
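The two moments derived above, $E[S^2]=a\nu$ and $\mathrm{Var}[S^2]=2a^2\nu$, are easy to spot-check with a Monte Carlo draw of sample variances (the sample size, seed, and trial count here are arbitrary):

```python
import numpy as np

# For n Gaussian samples, S^2 = a * chi2(nu) with a = sigma^2/(n-1) and
# nu = n-1, so E[S^2] = a*nu = sigma^2 and Var[S^2] = 2*a^2*nu.
rng = np.random.default_rng(1)
sigma, n, trials = 2.0, 16, 200_000
s2 = rng.standard_normal((trials, n)).var(axis=1, ddof=1) * sigma**2
a, nu = sigma**2 / (n - 1), n - 1
print(s2.mean(), a * nu)          # both close to sigma^2 = 4
print(s2.var(), 2 * a**2 * nu)    # both close to 2*sigma^4/(n-1)
```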

To calculate the total cross covariance, we must know the scaling and degrees of freedom for each variance following the directional averaging, for each of the seven random processes. Below we present the scaling and degrees of freedom for the 1D temporal random process; the full details of the remaining terms can be found in the MATLAB files [1].

$$\bar{a}_t=\begin{bmatrix}a_{D_{vh},t}\\ a_{D_{th},t}\\ a_{D_{tv},t}\\ a_{D_{h},t}\\ a_{D_{v},t}\\ a_{D_{t},t}\\ a_{D_{tvh},t}\end{bmatrix}=\begin{bmatrix}\frac{1}{T-1}\\ 0\\ 0\\ \frac{V}{TV-1}\\ \frac{H}{TH-1}\\ 0\\ \frac{VH}{TVH-1}\end{bmatrix}\sigma_t^2,\qquad \bar{\nu}_t=\begin{bmatrix}\nu_{D_{vh},t}\\ \nu_{D_{th},t}\\ \nu_{D_{tv},t}\\ \nu_{D_{h},t}\\ \nu_{D_{v},t}\\ \nu_{D_{t},t}\\ \nu_{D_{tvh},t}\end{bmatrix}=\begin{bmatrix}T-1\\ 0\\ 0\\ T-1\\ T-1\\ 0\\ T-1\end{bmatrix}.\tag{16}$$

To calculate the cross covariance of the temporal random process, we must also take into account the correlations remaining between the different directional averages. Below is the cross correlation of the variance following the directional averaging, for the temporal random process:

$$\mathrm{corr}_t\!\left(\begin{bmatrix}\sigma_{D_{vh},t}^2\\ \sigma_{D_{th},t}^2\\ \sigma_{D_{tv},t}^2\\ \sigma_{D_{h},t}^2\\ \sigma_{D_{v},t}^2\\ \sigma_{D_{t},t}^2\\ \sigma_{D_{tvh},t}^2\end{bmatrix},\begin{bmatrix}\sigma_{D_{vh},t}^2\\ \sigma_{D_{th},t}^2\\ \sigma_{D_{tv},t}^2\\ \sigma_{D_{h},t}^2\\ \sigma_{D_{v},t}^2\\ \sigma_{D_{t},t}^2\\ \sigma_{D_{tvh},t}^2\end{bmatrix}\right)=\begin{bmatrix}
1&0&0&1&1&0&1\\
0&0&0&0&0&0&0\\
0&0&0&0&0&0&0\\
1&0&0&1&1&0&1\\
1&0&0&1&1&0&1\\
0&0&0&0&0&0&0\\
1&0&0&1&1&0&1
\end{bmatrix}.\tag{17}$$

Using the above expressions, the cross covariance contribution from the temporal random process is found from

$$\mathrm{cov}\!\left(\begin{bmatrix}\sigma_{D_{vh},t}^2\\ \sigma_{D_{th},t}^2\\ \sigma_{D_{tv},t}^2\\ \sigma_{D_{h},t}^2\\ \sigma_{D_{v},t}^2\\ \sigma_{D_{t},t}^2\\ \sigma_{D_{tvh},t}^2\end{bmatrix},\begin{bmatrix}\sigma_{D_{vh},t}^2\\ \sigma_{D_{th},t}^2\\ \sigma_{D_{tv},t}^2\\ \sigma_{D_{h},t}^2\\ \sigma_{D_{v},t}^2\\ \sigma_{D_{t},t}^2\\ \sigma_{D_{tvh},t}^2\end{bmatrix}\right)=\mathrm{corr}_t\cdot\left(\sqrt{2\bar{a}_t^2\bar{\nu}_t}\,\sqrt{2\bar{a}_t^2\bar{\nu}_t}^{\,T}\right),\tag{18}$$
where “·” denotes the element-by-element multiplication. Once the covariance is found, this expression can be substituted into Eq. (10) to find the contribution to the cross covariance. The final cross covariance is the sum of contributions from each of the seven terms.

From the cross covariance in Eq. (10), the confidence intervals can be found, again assuming a Gaussian distribution. The confidence bounds for the conventional two-sided confidence of $100(1-\alpha)\%$ are found from the PDF by solving for the confidence intervals $[CI_L, CI_H]$:

$$\int_{-\infty}^{CI_L}\mathrm{PDF}(x)\,dx=\frac{\alpha}{2},\qquad \int_{-\infty}^{CI_H}\mathrm{PDF}(x)\,dx=1-\frac{\alpha}{2}.\tag{19}$$

For a Gaussian PDF, the bounds are symmetric; therefore, only the difference to the mean is required.
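Under the Gaussian approximation, the two integrals above reduce to the familiar symmetric half-width $z_{1-\alpha/2}\,\sigma$ about the mean. A small sketch (the function name is ours):

```python
from statistics import NormalDist

def gaussian_ci_halfwidth(var_of_estimator, alpha=0.1):
    """Half-width of the symmetric two-sided 100*(1-alpha)% confidence
    interval under the Gaussian assumption; var_of_estimator is a diagonal
    element of the cross covariance of the variance estimates."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.645 for alpha = 0.1
    return z * var_of_estimator ** 0.5
```

The bounds are then the estimated variance plus or minus this half-width.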

4. Monte Carlo Simulations

To test the derived expressions and the method of calculating confidence intervals from the above, we performed a series of Monte Carlo simulations across a variety of sampling conditions. The underlying noise simulations used a Gaussian random number generator. The notional values for the different 3D noise terms were selected as $\sigma_t=\sigma_{tv}=\sigma_{th}=10$, $\sigma_v=\sigma_h=20$, and $\sigma_{vh}=\sigma_{tvh}=50$. The simulations were carried out by varying the sampled percentage of a notional sensor size of 640×480 pixels using between 30 and 240 frames. For each sampling geometry, 10,000 independent noise cube realizations were generated and then processed with directional averaging and the 3D noise calculation. In addition to the classic method of processing and the derived method, the bias correction proposed by [10] was also calculated and reported.

Figure 2 demonstrates the variability of the measured variance: the probability distributions of obtained variances using the derived mixing matrix in Eq. (6) for $V=24$, $H=32$, and $T=30$.

 figure: Fig. 2.

Fig. 2. Plot of distributions of observed variance compared to Gaussian assumption for V=24, H=32, and T=30; estimated (A) temporal variance, (B) vertical variance, (C) temporal-vertical variance, (D) vertical-horizontal variance, and (E) temporal-vertical-horizontal variance.


From symmetry considerations, Fig. 2 omits the th and h distributions as th and tv have similar distributions, as do v and h (both differing slightly due to different number of samples along V and H). As can be seen in the figure, the distributions of values observed vary depending on the magnitude of the noise term as well as the number of samples. The average of the distribution agrees with the theoretical prediction. Also shown is the predicted distribution using the coupled Gaussian assumption derived for the confidence interval calculation (red solid). It is this distribution that is used to provide the confidence bounds. For comparison, the assumption of no coupling (the black dashed line) using a chi-squared distribution is also shown; for these particular values, there is limited coupling and the independent assumption works well for most of the noise components.

To summarize the first-moment results of the Monte Carlo simulations, Table 1 presents the percent error of the average across the 10,000 realizations for the different mixing matrices and a variety of sampling geometries.


Table 1. Percent Error in Average Variance from the Monte Carlo Simulation. Class Refers to Mixing with Eq. (7), UB0 Refers to Eq. (9), and New Refers to the Mixing Matrix Derived in Eq. (6)

As can be seen in Table 1, in general the percent error decreases with an increase in the sampling size. However, there remains a bias in the classic method on the order of a few percent. The bias correction in [10] does improve the error compared to the classic method; however, the new method presented captures the bias to within a fraction of a percent. Although not reported here, the method of [8] based on the 3D PSD gave the same answer as the proposed mixing of Eq. (6).

To validate the assumption of a Gaussian distribution for the confidence intervals, we can also use the 10,000 realizations to test what percentage of the calculated confidence intervals enclose the true population variance. The results can be seen in Table 2 using 90% confidence intervals ($\alpha=0.1$).


Table 2. Observed Percentage of Confidence Intervals Capturing the Population Statistic

As Table 2 shows, as the number of samples increases the distributions become more Gaussian and the method of calculating the confidence intervals captures the uncertainty in the measurement. Even when the sampling is limited, the confidence intervals are still within approximately 5% of the observed. It is important to note that the above table is only for a single distribution of variance for the underlying random processes.

From the expressions for the cross covariance, coupling can be observed; for example, a large amount of noise in vh will leak into the v and h variance calculations. As a demonstration, consider the situation where all the noise terms have a standard deviation of 10 except for the vh term, which has a standard deviation of 100. The 3D random process is sampled with $T=30$, $V=24$, and $H=32$. The average values (10,000 realizations) correspond to the population values, but due to the coupling the confidence intervals can be significantly larger for the terms experiencing coupling:

$$\begin{bmatrix}\hat{\sigma}_t^2\\ \hat{\sigma}_v^2\\ \hat{\sigma}_h^2\\ \hat{\sigma}_{tv}^2\\ \hat{\sigma}_{th}^2\\ \hat{\sigma}_{vh}^2\\ \hat{\sigma}_{tvh}^2\end{bmatrix}=\begin{bmatrix}100\\100\\100\\100\\100\\100\\100\end{bmatrix},\; CI_{\sigma^2}=\begin{bmatrix}43.3\\48.6\\41.8\\9.0\\7.8\\8.7\\1.6\end{bmatrix};\qquad \begin{bmatrix}\hat{\sigma}_t^2\\ \hat{\sigma}_v^2\\ \hat{\sigma}_h^2\\ \hat{\sigma}_{tv}^2\\ \hat{\sigma}_{th}^2\\ \hat{\sigma}_{vh}^2\\ \hat{\sigma}_{tvh}^2\end{bmatrix}=\begin{bmatrix}100\\100\\100\\100\\100\\10000\\100\end{bmatrix},\; CI_{\sigma^2}=\begin{bmatrix}43.3\\161.4\\182.5\\9.0\\7.8\\869.9\\1.6\end{bmatrix}.\tag{20}$$

As can be seen in Eq. (20), the lower confidence bounds of the v and h terms become negative in the strongly coupled case. We can further demonstrate these bounds by plotting the distributions.

As shown in Fig. 3(B), there are situations where the calculated variance itself is negative, in addition to situations where the confidence bound is negative. A negative variance should be seen as an indication of severe coupling, and caution should be used in interpreting its significance.

 figure: Fig. 3.

Fig. 3. Plot of the estimated variance of the v random process in the case of limited coupling (A) σv=10, σvh=10, and severe coupling (B) σv=10, σvh=100. Blue circles correspond to Monte Carlo, black dashed is independent chi-squared distribution, and red solid is coupled variance derived Gaussian distribution.


3D noise is typically reported through the standard deviation as opposed to the variance (to retain the same units as the input, typically counts or temperature). To accommodate a negative variance, we propose to retain the sign and take the square root as

$$\hat{\sigma}=\mathrm{sign}(\hat{\sigma}^2)\sqrt{\left|\hat{\sigma}^2\right|},\tag{21}$$
which will signify that there is uncertainty in the measurement. For the confidence intervals, we must also deal with the occasional negative variance. As opposed to using the confidence intervals for the square root of Gaussian distribution (which is a strictly positive distribution), we approximate the distribution of the standard deviation as a Gaussian, using a single parameter to be added or subtracted to give the confidence interval. To accommodate the negative values, we suggest taking the maximum difference of the signed root with respect to the estimated standard deviation. In this way, the confidence intervals will tend to bound the problem and give some indication of the level of uncertainty:
$$CI_\sigma=\max\left(\left|\hat{\sigma}-\mathrm{sign}\!\left(\hat{\sigma}^2\pm CI_{\sigma^2}\right)\sqrt{\left|\hat{\sigma}^2\pm CI_{\sigma^2}\right|}\right|\right).\tag{22}$$

For example, the standard deviation and confidence intervals for the large coupling case above can be found as

$$\begin{bmatrix}\hat{\sigma}_t\\ \hat{\sigma}_v\\ \hat{\sigma}_h\\ \hat{\sigma}_{tv}\\ \hat{\sigma}_{th}\\ \hat{\sigma}_{vh}\\ \hat{\sigma}_{tvh}\end{bmatrix}=\begin{bmatrix}10\\10\\10\\10\\10\\100\\10\end{bmatrix},\qquad CI_{\sigma,\alpha=0.1}=\begin{bmatrix}2.5\\17.8\\19.1\\0.5\\0.4\\4.4\\0.1\end{bmatrix}.\tag{23}$$
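The signed square root and the worst-case interval mapping above amount to a few lines of code; a sketch (the helper names are ours):

```python
import numpy as np

def signed_std(var_hat):
    """Signed square root: a negative variance estimate keeps its sign
    to flag severe coupling."""
    return np.sign(var_hat) * np.sqrt(np.abs(var_hat))

def ci_std(var_hat, ci_var):
    """Confidence half-width in standard-deviation units: the larger of
    the deviations of the signed roots of var_hat +/- ci_var from the
    signed root of var_hat itself."""
    s = signed_std(var_hat)
    return max(abs(s - signed_std(var_hat - ci_var)),
               abs(s - signed_std(var_hat + ci_var)))
```

For the severe-coupling case above, ci_std(100, 161.4) evaluates to about 17.8, consistent with the v entry of the confidence vector, with the lower variance bound going negative.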

The severity of the negative confidence bounds depends on the sampling as well as the strength of the coupling between the different processes. In addition to strong coupling, a negative variance value also could indicate that the underlying process is not well approximated by the 3D noise model.

5. Summary and Conclusions

We have derived and demonstrated the full sampling corrected 3D noise measurement and the corresponding confidence bounds. We accomplished this analysis by decomposing the temporal and spatial variances following the directional average into each noise component, and demonstrated it through the use of Monte Carlo simulations of noise cubes. Our results provide a significant improvement in the average (bias) error when small sampling sizes are used. In the limit of infinite sampling, the measurement matrix developed here converges to the original method.

An important property of our method is that it allows for the effects of defective pixels to be ignored. A large outlier violates the assumption of the underlying Gaussian processes and can skew the variance estimation. The method presented is conducted in the spatial domain and is based on statistical calculations of the variance. It allows for defective pixels to be ignored by simply removing them from the calculation.

The methods for both the sampling correction as well as the confidence intervals are determined by the average 3D noise variances and the corresponding sampling size; therefore, an a posteriori calculation of both the sampling correction and confidence bounds can be performed. The confidence intervals also can be used in a predictive fashion; for example, if the expectation of the noise is known, the confidence intervals can be used to provide an estimate of the necessary sampling required.

Our future work will examine the signal intensity transfer function (SiTF), which is used to convert noise in counts into noise in temperature, and its contribution to the uncertainty of the 3D noise process. Additionally, the characterization of defective pixels and how to report their influence is also important, requiring adequate definitions and agreed-upon metrics. The 3D noise model we have discussed assumes a summation of seven independent spectrally white random processes; future work also could aim at characterizing nonwhite effects.

Finally, in support of this work, a series of four MATLAB functions has been provided to the MathWorks File Exchange. These scripts allow users to simulate a noise cube, calculate the seven independent variances, and calculate the confidence intervals. An additional script provides examples of how the functions can be called under different circumstances; for example, correcting a past measurement, or iteratively calculating the noise cube by chunking the data.

Appendix A

The 1D directional average is calculated by simply taking the mean across a certain direction (horizontal, vertical, or time). There are two different possibilities, as the directional average is either along the direction of the random process or orthogonal to it. First, we consider the situation where the directional average is along the random process; for example, when taking the directional average with respect to time of the temporal noise:

$$D_{t,t}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_t(t_\ell)=\mathrm{constant},\tag{A1}$$
which equals a constant that is near zero but random from realization to realization. The resulting variance is
$$\sigma_{D_t,t}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,t}(j,k)-\bar{D}_{t,t}\right)^2=0.\tag{A2}$$
Similarly, the other 1D parallel averages of 1D random processes are also zero as seen in Eq. (6).

Now let us consider taking the orthogonal 1D average across a 1D random process followed by the 2D variance. For example, taking the directional average with respect to time of the vertical noise:

$$D_{t,v}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_v(h,v,t_\ell)=\mathrm{noise}_v(v).\tag{A3}$$
Calculating the variance, we have:
$$\sigma_{D_t,v}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,v}(j,k)-\bar{D}_{t,v}\right)^2=\frac{H}{VH-1}\sum_{k=1}^{V}\mathrm{noise}_v(k)^2=\frac{H(V-1)}{VH-1}\sigma_v^2.\tag{A4}$$

A similar procedure can be applied to the other 1D random processes, and the results are presented in Eq. (6).
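The coefficient $H(V-1)/(VH-1)$ derived above can be verified with a quick Monte Carlo experiment (the sizes match the simulations in the main text; the variable names are ours):

```python
import numpy as np

# The t-average of a pure vertical process, followed by the 2D sample
# variance over the V x H frame, should average to H*(V-1)/(V*H-1) * sigma_v^2.
rng = np.random.default_rng(2)
V, H, sigma_v, trials = 24, 32, 20.0, 20_000
vals = np.empty(trials)
for i in range(trials):
    x = sigma_v * rng.standard_normal(V)       # one realization of noise_v
    frame = np.repeat(x[:, None], H, axis=1)   # constant in t, so the t-average
    vals[i] = frame.var(ddof=1)                # is the frame itself
print(vals.mean(), H * (V - 1) * sigma_v**2 / (V * H - 1))
```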

For the 1D average across the 2D random processes, we again have two scenarios: performing the 1D average along a dimension of the random process, or orthogonally to it. Consider taking the directional average orthogonal to the random process; for example, taking the temporal average of the vh noise:

$$D_{t,vh}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_{vh}(h,v,t_\ell)=\mathrm{noise}_{vh}(h,v).\tag{A5}$$
Calculating the variance, we have:
$$\sigma_{D_t,vh}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,vh}(j,k)-\bar{D}_{t,vh}\right)^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\mathrm{noise}_{vh}(j,k)^2=\sigma_{vh}^2.\tag{A6}$$
A similar procedure can be applied to the other 2D random processes, and the results are presented in Eq. (6). The other case occurs when the directional average is along a dimension of the 2D random process, such as when taking the temporal average of the tv noise:
$$D_{t,tv}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_{tv}(h,v,t_\ell)=\mathrm{noise}_{t,tv}(v).\tag{A7}$$
The result is a 1D random process with a sample variance of:
$$D_{t,tv}(h,v)\sim N\!\left(0,\sigma_{t,tv}^2\right),\qquad \sigma_{t,tv}^2=\frac{1}{V-1}\sum_{k=1}^{V}\left(\mathrm{noise}_{t,tv}(k)-\overline{\mathrm{noise}}_{t,tv}\right)^2=\frac{\sigma_{tv}^2}{T}.\tag{A8}$$
As we now have a 1D random process, the variance is calculated in the same way as before, giving:
$$\sigma_{D_t,tv}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,tv}(j,k)-\bar{D}_{t,tv}\right)^2=\frac{H}{VH-1}\sum_{k=1}^{V}\left(\mathrm{noise}_{t,tv}(k)-\overline{\mathrm{noise}}_{t,tv}\right)^2=\frac{H(V-1)}{VH-1}\frac{\sigma_{tv}^2}{T}.\tag{A9}$$
A similar procedure can be applied to the other 2D random processes, and the results are presented in Eq. (6).

Taking the 1D average across the single 3D random process follows the same procedure as the first step in Eq. (A7), where we now will have a resulting 2D random process:

$$D_{t,tvh}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_{tvh}(h,v,t_\ell)=\mathrm{noise}_{t,tvh}(h,v)\sim N\!\left(0,\frac{\sigma_{tvh}^2}{T}\right).\tag{A10}$$
Calculating the variance gives:
$$\sigma_{D_t,tvh}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,tvh}(j,k)-\bar{D}_{t,tvh}\right)^2=\frac{\sigma_{tvh}^2}{T}.\tag{A11}$$
The 2D average of a 1D random process again falls into one of two conditions: either the 2D averaging is applied along the direction of the random process, or it is applied orthogonally to it. The order of the directional averages does not affect the results. Consider the case when either of the directional averages is applied along the direction of the 1D random process, such as a tv directional average applied to the t random process:
$$D_{tv,t}(h)=\frac{1}{V}\frac{1}{T}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_t(h,v_k,t_\ell)=\mathrm{constant}.\tag{A12}$$
The constant will be a random variable across different noise realizations, but the resulting variance will be:
$$\sigma_{D_{tv},t}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,t}(j)-\bar{D}_{tv,t}\right)^2=0.\tag{A13}$$
A similar procedure can be applied to the other 1D random processes, and the results are presented in Eq. (6). The other case is also straightforward, as both of the 2D averages are applied orthogonally to the 1D random process. Again considering the temporal and vertical directional averages, we have:
$$D_{tv,h}(h)=\frac{1}{V}\frac{1}{T}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_h(h,v_k,t_\ell)=\mathrm{noise}_h(h).\tag{A14}$$
Calculating the variance, we have:
$$\sigma_{D_{tv},h}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,h}(j)-\bar{D}_{tv,h}\right)^2=\sigma_h^2.\tag{A15}$$
A similar procedure can be applied to the other 1D random processes, and the results are presented in Eq. (6).

The 2D average across a 2D random process also has a relatively straightforward calculation and can be one of two different conditions: either the 2D averages correspond to both of the two dimensions of the random process, or one of those directions is orthogonal. Let us first consider the case where the directional averaging is applied across both dimensions of the random process:

$$D_{tv,tv}(h)=\frac{1}{V}\frac{1}{T}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_{tv}(h,v_k,t_\ell)=\mathrm{constant}.\tag{A16}$$
The constant will be a random variable across different noise realizations, but the resulting variance will be:
$$\sigma_{D_{tv},tv}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,tv}(j)-\bar{D}_{tv,tv}\right)^2=0.\tag{A17}$$
Therefore, we also have:
$$\sigma_{D_{tv},tv}^2=0,\qquad \sigma_{D_{th},th}^2=0,\qquad \sigma_{D_{vh},vh}^2=0.\tag{A18}$$
The other situation is where one of the 2D averages occurs along one of the dimensions of the random process; for example:
$$D_{tv,th}(h)=\frac{1}{V}\frac{1}{T}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_{th}(h,v_k,t_\ell)=\mathrm{noise}_{tv,th}(h)\sim N\!\left(0,\frac{\sigma_{th}^2}{T}\right).\tag{A19}$$
Only the directional averaging along the random process has an effect; the random process is constant with respect to the other averaged direction. The result of the 1D average is again a 1D random process with the sample variance shown above. The variance is then evaluated to be:
$$\sigma_{D_{tv},th}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,th}(j)-\bar{D}_{tv,th}\right)^2=\frac{\sigma_{th}^2}{T}.\tag{A20}$$
A similar procedure can be applied to the other 2D random processes, and the results are presented in Eq. (6).

Similar to the 1D average of the 3D random process, a 2D average again yields a random process with a scaled variance:

$$D_{tv,tvh}(h)=\frac{1}{V}\frac{1}{T}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_{tvh}(h,v_k,t_\ell)=\mathrm{noise}_{tv,tvh}(h)\sim N\!\left(0,\frac{\sigma_{tvh}^2}{TV}\right),\tag{A21}$$
giving:
$$\sigma_{D_{tv},tvh}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,tvh}(j)-\bar{D}_{tv,tvh}\right)^2=\frac{\sigma_{tvh}^2}{TV}.\tag{A22}$$
Therefore:
$$\sigma_{D_{tv},tvh}^2=\frac{\sigma_{tvh}^2}{TV},\qquad \sigma_{D_{th},tvh}^2=\frac{\sigma_{tvh}^2}{TH},\qquad \sigma_{D_{vh},tvh}^2=\frac{\sigma_{tvh}^2}{VH}.\tag{A23}$$

The final expressions are the calculation of the 3D variance. Consider the total variance of the 1D temporal noise:

$$\sigma_{D_{tvh},t}^2=\frac{1}{TVH-1}\sum_{m=1}^{H}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_t(t_\ell)^2=\frac{VH}{TVH-1}\sum_{\ell=1}^{T}\mathrm{noise}_t(t_\ell)^2=\frac{VH(T-1)}{TVH-1}\sigma_t^2.\tag{A24}$$
A similar procedure can be applied to the other random processes, and the results are presented in Eq. (6).

References

1. D. Haefner, "Finite sampling corrected 3D noise with confidence intervals," https://www.mathworks.com/matlabcentral/fileexchange/49657-finite-sampling-corrected-3d-noise-with-confidence-intervals.

2. A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, 2002).

3. J. A. D'Agostino and C. M. Webb, "Three-dimensional analysis framework and measurement methodology for imaging system noise," Proc. SPIE 1488, 110 (1991).

4. C. M. Webb, "Approach to three-dimensional noise spectral analysis," Proc. SPIE 2470, 288 (1995).

5. P. O'Shea and S. Sousk, "Practical issues with 3D noise measurements and application to modern infrared sensors," Proc. SPIE 5784, 262 (2005).

6. R. G. Driggers, S. J. Pruchnic Jr., C. E. Halford, and E. E. Burroughs Jr., "Laboratory measurement of sampled infrared imaging system performance," Opt. Eng. 38(5), 852-861 (1999).

7. G. C. Holst, Testing and Evaluation of Infrared Imaging Systems, 2nd ed. (JCD Publishing, 1998).

8. A. Lundmark, "3D detector noise revisited," Proc. SPIE 8014, 801410 (2011).

9. Z. Bomzon, "Biases in the estimation of 3D noise in thermal imagers," Proc. SPIE 7834, 78340 (2010).

10. Z. Bomzon, "Removing the statistical bias from three-dimensional noise measurements," Proc. SPIE 8014, 801416 (2011).

11. P. Brémaud, An Introduction to Probabilistic Modeling (Springer, 1988).

Cited By

Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (3)

Fig. 1. Example cartoon depiction of the $vh$ random process sampled in 3D across $H$, $V$, and $T$ samples. Note that the $vh$ frame is independent of time. The results of the directional averaging of this process are also shown.

Fig. 2. Distributions of observed variance compared to the Gaussian assumption for $V=24$, $H=32$, and $T=30$; estimated (A) temporal variance, (B) vertical variance, (C) temporal-vertical variance, (D) vertical-horizontal variance, and (E) temporal-vertical-horizontal variance.

Fig. 3. Estimated variance of the $v$ random process in the case of limited coupling (A) $\sigma_v=10$, $\sigma_{vh}=10$, and severe coupling (B) $\sigma_v=10$, $\sigma_{vh}=100$. Blue circles correspond to Monte Carlo, the black dashed curve to the independent chi-squared distribution, and the red solid curve to the coupled-variance-derived Gaussian distribution.

Tables (2)

Table 1. Percent Error in Average Variance from the Monte Carlo Simulation. Classic Refers to Mixing with Eq. (7), UB0 Refers to Eq. (9), and New Refers to the Mixing Derived in Eq. (6)

Table 2. Observed Percentage of Confidence Intervals Capturing the Population Statistic

Equations (49)


$$\mathrm{Noise}(h,v,t)=\mathrm{noise}_t(t)+\mathrm{noise}_v(v)+\mathrm{noise}_h(h)+\mathrm{noise}_{tv}(v,t)+\mathrm{noise}_{th}(h,t)+\mathrm{noise}_{vh}(h,v)+\mathrm{noise}_{tvh}(h,v,t).$$

$$\sigma_t^2=\frac{1}{T-1}\sum_{\ell=1}^{T}\mathrm{noise}_t(\ell)^2,\qquad \sigma_{tv}^2=\frac{1}{TV-1}\sum_{\ell=1}^{T}\sum_{k=1}^{V}\mathrm{noise}_{tv}(k,\ell)^2,$$

$$\sigma_v^2=\frac{1}{V-1}\sum_{k=1}^{V}\mathrm{noise}_v(k)^2,\qquad \sigma_{th}^2=\frac{1}{TH-1}\sum_{\ell=1}^{T}\sum_{j=1}^{H}\mathrm{noise}_{th}(j,\ell)^2,$$

$$\sigma_h^2=\frac{1}{H-1}\sum_{j=1}^{H}\mathrm{noise}_h(j)^2,\qquad \sigma_{vh}^2=\frac{1}{VH-1}\sum_{j=1}^{H}\sum_{k=1}^{V}\mathrm{noise}_{vh}(j,k)^2,$$

$$\sigma_{tvh}^2=\frac{1}{TVH-1}\sum_{\ell=1}^{T}\sum_{j=1}^{H}\sum_{k=1}^{V}\mathrm{noise}_{tvh}(j,k,\ell)^2.$$
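The seven directional averages and their sample variances are straightforward to compute from a measured noise cube. The code published with this paper is MATLAB [1]; the sketch below is an illustrative NumPy translation (the function and dictionary keys are my own names, not those of the published code), assuming the cube is ordered $(T, V, H)$ and using the $1/(n-1)$ normalization defined above.

```python
import numpy as np

def directional_variances(cube):
    """Sample variances of the directionally averaged noise cube.

    `cube` is a (T, V, H) array; key names (Dvh, Dt, ...) indicate which
    directions were averaged before the variance was taken.
    """
    var = lambda x: np.var(x, ddof=1)         # 1/(n-1) normalization
    return {
        'Dvh':  var(cube.mean(axis=(1, 2))),  # average v,h -> T samples
        'Dth':  var(cube.mean(axis=(0, 2))),  # average t,h -> V samples
        'Dtv':  var(cube.mean(axis=(0, 1))),  # average t,v -> H samples
        'Dh':   var(cube.mean(axis=2)),       # average h   -> T*V samples
        'Dv':   var(cube.mean(axis=1)),       # average v   -> T*H samples
        'Dt':   var(cube.mean(axis=0)),       # average t   -> V*H samples
        'Dtvh': var(cube),                    # full 3D sample variance
    }

# Pure tvh (white) noise with sigma = 10: the full 3D variance is near 100,
# while heavily averaged directions shrink toward sigma^2 / (samples averaged).
rng = np.random.default_rng(0)
v = directional_variances(rng.normal(0.0, 10.0, size=(30, 24, 32)))
```

For pure 3D white noise the expectations follow the last column of the mixing matrix below, e.g. the variance after averaging over $v$ and $h$ shrinks by a factor of $VH$.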
$$\bar{x}=\frac{1}{n}\sum_{j=1}^{n}x_j.$$

$$\sigma^2=\frac{1}{n-1}\sum_{j=1}^{n}\left(x_j-\bar{x}\right)^2.$$

$$\sigma_{D_t}^2=\sigma_{D_t,t}^2+\sigma_{D_t,v}^2+\sigma_{D_t,h}^2+\sigma_{D_t,tv}^2+\sigma_{D_t,th}^2+\sigma_{D_t,vh}^2+\sigma_{D_t,tvh}^2.$$
$$\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_h}^2\\ \sigma_{D_v}^2\\ \sigma_{D_t}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}=M\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix},$$

$$M=\begin{bmatrix}
1 & 0 & 0 & \frac{1}{V} & \frac{1}{H} & 0 & \frac{1}{VH}\\
0 & 1 & 0 & \frac{1}{T} & 0 & \frac{1}{H} & \frac{1}{TH}\\
0 & 0 & 1 & 0 & \frac{1}{T} & \frac{1}{V} & \frac{1}{TV}\\
\frac{V(T-1)}{TV-1} & \frac{T(V-1)}{TV-1} & 0 & 1 & \frac{V(T-1)}{H(TV-1)} & \frac{T(V-1)}{H(TV-1)} & \frac{1}{H}\\
\frac{H(T-1)}{TH-1} & 0 & \frac{T(H-1)}{TH-1} & \frac{H(T-1)}{V(TH-1)} & 1 & \frac{T(H-1)}{V(TH-1)} & \frac{1}{V}\\
0 & \frac{H(V-1)}{VH-1} & \frac{V(H-1)}{VH-1} & \frac{H(V-1)}{T(VH-1)} & \frac{V(H-1)}{T(VH-1)} & 1 & \frac{1}{T}\\
\frac{VH(T-1)}{TVH-1} & \frac{TH(V-1)}{TVH-1} & \frac{TV(H-1)}{TVH-1} & \frac{H(TV-1)}{TVH-1} & \frac{V(TH-1)}{TVH-1} & \frac{T(VH-1)}{TVH-1} & 1
\end{bmatrix},$$
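Written out numerically, the sampling-corrected variances are obtained by solving $M\vec{\sigma}^2 = \vec{\sigma}_D^2$. The sketch below builds my transcription of $M$ (row and column ordering as in the vectors above) and inverts it; it is an illustration, not the published MATLAB function.

```python
import numpy as np

def mixing_matrix(T, V, H):
    """Transcription of the finite-sampling mixing matrix M (rows ordered
    Dvh, Dth, Dtv, Dh, Dv, Dt, Dtvh; columns t, v, h, tv, th, vh, tvh)."""
    return np.array([
        [1, 0, 0, 1/V, 1/H, 0, 1/(V*H)],
        [0, 1, 0, 1/T, 0, 1/H, 1/(T*H)],
        [0, 0, 1, 0, 1/T, 1/V, 1/(T*V)],
        [V*(T-1)/(T*V-1), T*(V-1)/(T*V-1), 0, 1,
         V*(T-1)/(H*(T*V-1)), T*(V-1)/(H*(T*V-1)), 1/H],
        [H*(T-1)/(T*H-1), 0, T*(H-1)/(T*H-1),
         H*(T-1)/(V*(T*H-1)), 1, T*(H-1)/(V*(T*H-1)), 1/V],
        [0, H*(V-1)/(V*H-1), V*(H-1)/(V*H-1),
         H*(V-1)/(T*(V*H-1)), V*(H-1)/(T*(V*H-1)), 1, 1/T],
        [V*H*(T-1)/(T*V*H-1), T*H*(V-1)/(T*V*H-1), T*V*(H-1)/(T*V*H-1),
         H*(T*V-1)/(T*V*H-1), V*(T*H-1)/(T*V*H-1), T*(V*H-1)/(T*V*H-1), 1],
    ])

# measured = M @ true, so the correction is true = solve(M, measured).
# Example with the paper's simulation sizes T=30, V=24, H=32.
M = mixing_matrix(30, 24, 32)
measured = M @ np.full(7, 100.0)        # synthetic "measured" variances
corrected = np.linalg.solve(M, measured)
```

Because the correction is a single linear solve, it can be applied a posteriori to any previously measured set of directional variances.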
$$\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_h}^2\\ \sigma_{D_v}^2\\ \sigma_{D_t}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}=\begin{bmatrix}1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&1&0&0&0&0\\1&1&0&1&0&0&0\\1&0&1&0&1&0&0\\0&1&1&0&0&1&0\\1&1&1&1&1&1&1\end{bmatrix}\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix}.$$
$$\begin{bmatrix}\bar{\sigma}_t^2\\ \bar{\sigma}_v^2\\ \bar{\sigma}_h^2\\ \bar{\sigma}_{tv}^2\\ \bar{\sigma}_{th}^2\\ \bar{\sigma}_{vh}^2\\ \bar{\sigma}_{tvh}^2\end{bmatrix}=\begin{bmatrix}
1&0&0&\frac{1}{V}&\frac{1}{H}&0&\frac{1}{VH}\\
0&1&0&\frac{1}{T}&0&\frac{1}{H}&\frac{1}{TH}\\
0&0&1&0&\frac{1}{T}&\frac{1}{V}&\frac{1}{TV}\\
0&0&0&1&0&0&\frac{1}{H}\\
0&0&0&0&1&0&\frac{1}{V}\\
0&0&0&0&0&1&\frac{1}{T}\\
0&0&0&0&0&0&1
\end{bmatrix}\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix}.$$
$$\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_h}^2\\ \sigma_{D_v}^2\\ \sigma_{D_t}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}=M_{UB0}\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix},$$

$$M_{UB0}=\begin{bmatrix}
1&0&0&\frac{1}{V}&\frac{1}{H}&0&\frac{1}{VH}\\
0&1&0&\frac{1}{T}&0&\frac{1}{H}&\frac{1}{TH}\\
0&0&1&0&\frac{1}{T}&\frac{1}{V}&\frac{1}{TV}\\
1&1&0&\frac{V+T+TV}{TV}&\frac{1}{H}&\frac{1}{H}&\frac{V+T+TV}{TVH}\\
1&0&1&\frac{1}{V}&\frac{H+T+TH}{TH}&\frac{1}{V}&\frac{H+T+TH}{TVH}\\
0&1&1&\frac{1}{T}&\frac{1}{T}&\frac{H+V+HV}{VH}&\frac{H+V+HV}{TVH}\\
1&1&1&\frac{V+T+TV}{TV}&\frac{H+T+TH}{TH}&\frac{H+V+HV}{VH}&\frac{C}{TVH}
\end{bmatrix},\qquad C=T+V+H+TV+TH+VH+TVH.$$
$$\operatorname{cov}\!\left(\begin{bmatrix}\sigma_t^2\\ \sigma_v^2\\ \sigma_h^2\\ \sigma_{tv}^2\\ \sigma_{th}^2\\ \sigma_{vh}^2\\ \sigma_{tvh}^2\end{bmatrix}\right)=M^{-1}\operatorname{cov}\!\left(\begin{bmatrix}\sigma_{D_{vh}}^2\\ \sigma_{D_{th}}^2\\ \sigma_{D_{tv}}^2\\ \sigma_{D_h}^2\\ \sigma_{D_v}^2\\ \sigma_{D_t}^2\\ \sigma_{D_{tvh}}^2\end{bmatrix}\right)\left(M^{-1}\right)^{T},$$
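The covariance propagates through the same inverse mixing matrix. The sketch below uses stand-in numbers (not values from the paper) purely to show the linear-algebra step.

```python
import numpy as np

# cov(corrected variances) = M^-1 @ cov(measured variances) @ (M^-1)^T.
# Both M and cov_D here are illustrative stand-ins.
M = np.eye(7) + 0.05 * np.tril(np.ones((7, 7)), -1)    # stand-in mixing matrix
cov_D = np.diag([4.0, 4.5, 4.2, 0.9, 0.8, 0.9, 0.2])   # stand-in covariance
Minv = np.linalg.inv(M)
cov_sigma = Minv @ cov_D @ Minv.T        # covariance of corrected variances
std_sigma = np.sqrt(np.diag(cov_sigma))  # per-term standard deviations
```

The diagonal of the propagated covariance supplies the widths used for the confidence intervals below.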
$$\operatorname{cov}(x,y)=E\left[(x-\mu_x)(y-\mu_y)\right]=\sigma_x\sigma_y\operatorname{corr}(x,y),$$

$$X^2=\frac{(n-1)S^2}{\sigma^2}.$$
$$f_{\text{scaled }\chi^2}(x;\nu,a)=\begin{cases}\dfrac{(x/a)^{\nu/2-1}e^{-x/(2a)}}{2^{\nu/2}\,\Gamma(\nu/2)\,a} & x\ge 0,\\[4pt] 0 & x<0,\end{cases}$$

$$E[S^2]=\int_{-\infty}^{\infty}f_{\text{scaled }\chi^2}(x;\nu,a)\,x\,dx=a\nu=\sigma^2.$$

$$\operatorname{Var}[S^2]=E\big[(S^2)^2\big]-E[S^2]^2=\int_{-\infty}^{\infty}f_{\text{scaled }\chi^2}(x;\nu,a)\,x^2\,dx-(a\nu)^2=2a^2\nu.$$
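Both moments are easy to confirm by simulation: for $n$ i.i.d. normal draws the sample variance $S^2$ follows the scaled chi-squared law with $\nu=n-1$ and $a=\sigma^2/(n-1)$. A quick Monte Carlo sketch (parameter choices are mine):

```python
import numpy as np

# Verify E[S^2] = a*nu = sigma^2 and Var[S^2] = 2*a^2*nu by Monte Carlo.
rng = np.random.default_rng(1)
n, sigma2 = 30, 100.0
nu, a = n - 1, sigma2 / (n - 1)
draws = rng.normal(0.0, np.sqrt(sigma2), size=(100000, n))
s2 = np.var(draws, axis=1, ddof=1)   # 100k independent sample variances
# s2.mean() approaches sigma2; s2.var() approaches 2*a*a*nu
```

For $n=30$ and $\sigma^2=100$, the predicted variance of $S^2$ is $2a^2\nu\approx 690$, a substantial spread that motivates reporting confidence intervals alongside the variance estimates.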
$$\bar{a}_t=\begin{bmatrix}a_{D_{vh},t}\\ a_{D_{th},t}\\ a_{D_{tv},t}\\ a_{D_h,t}\\ a_{D_v,t}\\ a_{D_t,t}\\ a_{D_{tvh},t}\end{bmatrix}=\begin{bmatrix}\frac{1}{T-1}\\ 0\\ 0\\ \frac{V}{TV-1}\\ \frac{H}{TH-1}\\ 0\\ \frac{VH}{TVH-1}\end{bmatrix}\sigma_t^2,\qquad \bar{\nu}_t=\begin{bmatrix}\nu_{D_{vh},t}\\ \nu_{D_{th},t}\\ \nu_{D_{tv},t}\\ \nu_{D_h,t}\\ \nu_{D_v,t}\\ \nu_{D_t,t}\\ \nu_{D_{tvh},t}\end{bmatrix}=\begin{bmatrix}T-1\\ 0\\ 0\\ T-1\\ T-1\\ 0\\ T-1\end{bmatrix},$$

so that $E[S^2]=a\nu$ reproduces the temporal-noise column of $M$.
$$\operatorname{corr}_t=\begin{bmatrix}1&0&0&1&1&0&1\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\\1&0&0&1&1&0&1\\1&0&0&1&1&0&1\\0&0&0&0&0&0&0\\1&0&0&1&1&0&1\end{bmatrix},$$

$$\operatorname{cov}_t=\operatorname{corr}_t\circ\left(\sqrt{2\bar{a}_t^{2}\bar{\nu}_t}\,\left(\sqrt{2\bar{a}_t^{2}\bar{\nu}_t}\right)^{T}\right),$$

where $\operatorname{corr}_t$ is the correlation among the directional variance estimates induced by the shared temporal noise realization, $\circ$ denotes the elementwise (Hadamard) product, and the square and square root are applied elementwise.
$$\int_{-\infty}^{CI_L}\mathrm{PDF}(x)\,dx=\frac{\alpha}{2},\qquad \int_{-\infty}^{CI_H}\mathrm{PDF}(x)\,dx=1-\frac{\alpha}{2}.$$
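Under the Gaussian approximation to the estimator's PDF (the coupled-variance-derived distribution of Fig. 3), these bounds reduce to quantiles of a normal distribution. A minimal stdlib sketch (the function name is mine):

```python
from statistics import NormalDist

def gaussian_ci(mean, std, alpha=0.1):
    """CI_L and CI_H as the alpha/2 and 1 - alpha/2 quantiles of a
    Gaussian approximation to the estimator's PDF."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return mean - z * std, mean + z * std

ci_lo, ci_hi = gaussian_ci(100.0, 5.0, alpha=0.1)  # 90% two-sided bounds
```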
$$\begin{bmatrix}\hat{\sigma}_t^2\\ \hat{\sigma}_v^2\\ \hat{\sigma}_h^2\\ \hat{\sigma}_{tv}^2\\ \hat{\sigma}_{th}^2\\ \hat{\sigma}_{vh}^2\\ \hat{\sigma}_{tvh}^2\end{bmatrix}=\begin{bmatrix}100\\100\\100\\100\\100\\100\\100\end{bmatrix},\quad CI_{\sigma^2}=\begin{bmatrix}43.3\\48.6\\41.8\\9.0\\7.8\\8.7\\1.6\end{bmatrix};\qquad
\begin{bmatrix}\hat{\sigma}_t^2\\ \hat{\sigma}_v^2\\ \hat{\sigma}_h^2\\ \hat{\sigma}_{tv}^2\\ \hat{\sigma}_{th}^2\\ \hat{\sigma}_{vh}^2\\ \hat{\sigma}_{tvh}^2\end{bmatrix}=\begin{bmatrix}100\\100\\100\\100\\100\\10000\\100\end{bmatrix},\quad CI_{\sigma^2}=\begin{bmatrix}43.3\\161.4\\182.5\\9.0\\7.8\\869.9\\1.6\end{bmatrix}.$$
$$\hat{\sigma}=\operatorname{sign}(\hat{\sigma}^2)\sqrt{\left|\hat{\sigma}^2\right|},$$

$$CI_{\sigma}=\max_{\pm}\left(\left|\hat{\sigma}-\operatorname{sign}\!\left(\hat{\sigma}^2\pm CI_{\sigma^2}\right)\sqrt{\left|\hat{\sigma}^2\pm CI_{\sigma^2}\right|}\right|\right).$$
$$\begin{bmatrix}\hat{\sigma}_t\\ \hat{\sigma}_v\\ \hat{\sigma}_h\\ \hat{\sigma}_{tv}\\ \hat{\sigma}_{th}\\ \hat{\sigma}_{vh}\\ \hat{\sigma}_{tvh}\end{bmatrix}=\begin{bmatrix}10\\10\\10\\10\\10\\100\\10\end{bmatrix},\qquad CI_{\sigma,\alpha=0.1}=\begin{bmatrix}2.5\\17.8\\19.1\\0.5\\0.4\\4.4\\0.1\end{bmatrix}.$$
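The signed square root and the conservative choice of the larger mapped deviation are a few lines of code. A sketch of the two relations above (function names are mine); with $\hat{\sigma}^2=100$ and $CI_{\sigma^2}=43.3$ it returns about 2.5, consistent with the $\sigma_t$ entry in the example.

```python
import math

def signed_sqrt(x):
    # sigma = sign(sigma^2) * sqrt(|sigma^2|); keeps the sign of a
    # (possibly negative) corrected variance estimate.
    return math.copysign(math.sqrt(abs(x)), x)

def sigma_ci(var_hat, ci_var):
    # Larger of the two deviations when the variance CI endpoints are
    # mapped through the signed square root.
    s = signed_sqrt(var_hat)
    return max(abs(s - signed_sqrt(var_hat + ci_var)),
               abs(s - signed_sqrt(var_hat - ci_var)))
```

Taking the larger deviation keeps the reported interval in $\sigma$ conservative, since the square root is nonlinear about the estimate.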
$$D_{t,t}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_t(t_\ell)=\text{constant},$$

$$\sigma_{D_t,t}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,t}(j,k)-\bar{D}_{t,t}\right)^2=0.$$

$$D_{t,v}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_v(h,v,t_\ell)=\mathrm{noise}_v(v).$$

$$\sigma_{D_t,v}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,v}(j,k)-\bar{D}_{t,v}\right)^2=\frac{H}{VH-1}\sum_{k=1}^{V}\mathrm{noise}_v(k)^2=\frac{H(V-1)}{VH-1}\sigma_v^2.$$

$$D_{t,vh}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_{vh}(h,v,t_\ell)=\mathrm{noise}_{vh}(h,v).$$

$$\sigma_{D_t,vh}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,vh}(j,k)-\bar{D}_{t,vh}\right)^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\mathrm{noise}_{vh}(j,k)^2=\sigma_{vh}^2.$$

$$D_{t,tv}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_{tv}(h,v,t_\ell)=\mathrm{noise}_{t,tv}(v).$$

$$D_{t,tv}(h,v)\sim N\!\left(0,\sigma_{t,tv}^2\right),\qquad \sigma_{t,tv}^2=\frac{1}{V-1}\sum_{k=1}^{V}\left(\mathrm{noise}_{t,tv}(k)-\overline{\mathrm{noise}}_{t,tv}\right)^2=\frac{\sigma_{tv}^2}{T}.$$

$$\sigma_{D_t,tv}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,tv}(j,k)-\bar{D}_{t,tv}\right)^2=\frac{H}{VH-1}\sum_{k=1}^{V}\left(\mathrm{noise}_{t,tv}(k)-\overline{\mathrm{noise}}_{t,tv}\right)^2=\frac{H(V-1)}{VH-1}\,\frac{\sigma_{tv}^2}{T}.$$

$$D_{t,tvh}(h,v)=\frac{1}{T}\sum_{\ell=1}^{T}\mathrm{noise}_{tvh}(h,v,t_\ell)=\mathrm{noise}_{t,tvh}\sim N\!\left(0,\frac{\sigma_{tvh}^2}{T}\right).$$

$$\sigma_{D_t,tvh}^2=\frac{1}{VH-1}\sum_{k=1}^{V}\sum_{j=1}^{H}\left(D_{t,tvh}(j,k)-\bar{D}_{t,tvh}\right)^2=\frac{\sigma_{tvh}^2}{T}.$$
$$D_{tv,t}(h)=\frac{1}{VT}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_t(h,v_k,t_\ell)=\text{constant}.$$

$$\sigma_{D_{tv},t}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,t}(j)-\bar{D}_{tv,t}\right)^2=0.$$

$$D_{tv,h}(h)=\frac{1}{VT}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_h(h,v_k,t_\ell)=\mathrm{noise}_h(h).$$

$$\sigma_{D_{tv},h}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,h}(j)-\bar{D}_{tv,h}\right)^2=\sigma_h^2.$$

$$D_{tv,tv}(h)=\frac{1}{VT}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_{tv}(h,v_k,t_\ell)=\text{constant}.$$

$$\sigma_{D_{tv},tv}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,tv}(j)-\bar{D}_{tv,tv}\right)^2=0.$$

$$\sigma_{D_{tv},tv}^2=0,\qquad \sigma_{D_{th},th}^2=0,\qquad \sigma_{D_{vh},vh}^2=0.$$

$$D_{tv,th}(h)=\frac{1}{VT}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_{th}(h,v_k,t_\ell)=\mathrm{noise}_{tv,th}(h)\sim N\!\left(0,\frac{\sigma_{th}^2}{T}\right).$$

$$\sigma_{D_{tv},th}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,th}(j)-\bar{D}_{tv,th}\right)^2=\frac{\sigma_{th}^2}{T}.$$

$$D_{tv,tvh}(h)=\frac{1}{VT}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_{tvh}(h,v_k,t_\ell)=\mathrm{noise}_{tv,tvh}(h)\sim N\!\left(0,\frac{\sigma_{tvh}^2}{TV}\right),$$

$$\sigma_{D_{tv},tvh}^2=\frac{1}{H-1}\sum_{j=1}^{H}\left(D_{tv,tvh}(j)-\bar{D}_{tv,tvh}\right)^2=\frac{\sigma_{tvh}^2}{TV}.$$

$$\sigma_{D_{tv},tvh}^2=\frac{\sigma_{tvh}^2}{TV},\qquad \sigma_{D_{th},tvh}^2=\frac{\sigma_{tvh}^2}{TH},\qquad \sigma_{D_{vh},tvh}^2=\frac{\sigma_{tvh}^2}{VH}.$$

$$\sigma_{N_t,tvh}^2=\frac{1}{TVH-1}\sum_{m=1}^{H}\sum_{k=1}^{V}\sum_{\ell=1}^{T}\mathrm{noise}_t(h_m,v_k,t_\ell)^2=\frac{VH}{TVH-1}\sum_{\ell=1}^{T}\mathrm{noise}_t(t_\ell)^2=\frac{VH(T-1)}{TVH-1}\sigma_t^2.$$
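The finite-sampling factors derived above are easy to check numerically. For example, $\sigma_{D_t,v}^2 = H(V-1)\sigma_v^2/(VH-1)$ can be verified by generating a pure $v$-noise cube, averaging over time, and taking the spatial sample variance. A Monte Carlo sketch with my own parameter choices:

```python
import numpy as np

# Check sigma^2_{D_t,v} = H*(V-1)/(V*H-1) * sigma_v^2 by simulation.
rng = np.random.default_rng(2)
T, V, H, sigma_v2 = 30, 24, 32, 100.0
est = []
for _ in range(2000):
    col = rng.normal(0.0, np.sqrt(sigma_v2), size=V)       # noise_v(v)
    cube = np.broadcast_to(col[None, :, None], (T, V, H))  # constant in t,h
    est.append(np.var(cube.mean(axis=0), ddof=1))          # var of D_t frame
expected = H * (V - 1) / (V * H - 1) * sigma_v2            # ~95.96 here
```

The average of `est` converges to the predicted value rather than to $\sigma_v^2$ itself, which is precisely the finite-sampling bias the correction removes.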