Optica Publishing Group

Rounding noise effects’ reduction for estimated movement of speckle patterns

Open Access

Abstract

The problem of resolution enhancement for movement estimation based on speckle-pattern analysis is considered. In our previous publications we showed that this movement represents the corresponding tilt vibrations of the illuminated object and can be measured as a relative spatial shift between time-adjacent images of the speckle pattern. In this paper we show how to overcome the resolution limitation that arises when using the optical sensor of an optical mouse, which measures the Cartesian coordinates of the shift as an integer number of pixels. To overcome this limitation, we propose to combine simultaneous measurements of the same illuminated spot from several cameras (sensors) whose imaging lenses have different amounts of defocusing. The amount of defocusing defines the proportionality ratio between actual changes in the tilt plane and the measured shift between speckle images. To exploit the diversity of these ratios we apply a beam-forming signal processing approach, which makes it possible to satisfy different design criteria and correspondingly improve the measurement accuracy. The validity and properties of the proposed solution are demonstrated by several examples of in-vivo touchless measurements of human heartbeat sounds.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Speckle-based techniques for measuring displacements, movements and vibrations have produced notable results in recent years [1–4] and cover a wide range of applications, e.g. object deformation measurements [5], improving the resolving capabilities of imaging sensors [6] and bio-vibration estimation [7]. The popularity of these methods largely stems from the fact that speckles are self-interference random patterns with a remarkable property: each individual speckle serves as a reference point from which one can track changes in the phase of the light scattered from a surface.

Because of this, speckle methods can be especially useful for noncontact measurement of human vital signs, which in turn can become the basis of continuous and long-term patient monitoring. The latter field has attracted increasing attention in recent years due to its huge economic potential [8]. The practical feasibility of electronic speckle-pattern interferometry (ESPI) for cardiovascular pulse detection was considered in Ref [9]. However, the great challenge in applying an ESPI scheme to real-time physiological signal estimation, as for all interferometer-based schemes, is the need for a relatively complicated detector that is highly sensitive to noise and external disturbances. In addition, remote and continuous monitoring of important physiological signs (such as heartbeat, respiration rate, blood pressure, etc.) should be possible at home rather than only under the controlled conditions of healthcare facilities. Therefore, any constraint on the locations of the detector and the reflecting object should be relaxed as much as possible.

A configuration that overcomes the above-mentioned restrictions was proposed in Ref [10]. Application of this technique to vital sign estimation was considered in Refs [11,12]. The suggested system contains a regular camera with its optics as well as a laser source. The laser illuminates the region of interest where the measurement should be performed. Light is reflected from the illuminated surface and a speckle pattern is produced on the detector. The optics of the camera is slightly defocused; this feature is important because it converts the tilting movement of the inspected surface into a transversal movement of the speckles (rather than a change of this reflected random pattern [11]). This keeps the speckle pattern itself constant under vibration of the illuminated object and distinguishes the method from other speckle-related techniques where the pattern varies in an uncontrollable manner. The shift of the constant speckle pattern, which is caused by vibration of the illuminated object, can then be tracked by a correlation-based algorithm.

Since the light scatters from the microscopic structure of the object, the speckle diffraction pattern occupies a wide angle (close to 2π steradians), and thus the speckle pattern can be captured no matter where the camera is placed. Therefore, no constraint remains on the location of the detector or of the reflecting object. Moreover, the detection is done by simple imaging, so the detection module is not an interferometer and is thus less sensitive to noise.

Due to these properties, the approach proposed in Refs [10–12] serves as a convenient basis for developing an easy-to-use system for continuous home health monitoring. However, correct estimation of physiological parameters requires precise tracking of the speckle pattern movement [12], and thus implementation of this technology should be based on a precise imaging sensor with subpixel motion estimation accuracy. This excludes the use of simple, widely available commercial sensors [13] and may present an obstacle to the development of cost-effective health monitoring products.

Our aim in this paper is to develop a processing procedure that extends the configuration of Refs [10–12] to scenarios with simplified sensors that do not provide subpixel motion estimation accuracy. Namely, we assume that the 2D speckle pattern movement is tracked in Cartesian coordinates and that the position along the x-y axes, estimated by means of correlation, takes only integer values. Such low-cost sensors are widely used in optical mouse designs and allow the consecutive images reflected from the illuminated surface to be captured at a very high rate. As each image is captured, it is transferred to the computation unit of the integrated circuit, where the movement is computed at integer-pixel resolution by relatively simple analysis of successive images.

The idea of combining measurements over an array of optical sensors has been utilized in many applications in recent years. For example, mobile robot navigation based on multiple optical flow sensors, and the related issues of sensor array calibration and data selection, are discussed in Ref [14] and references therein. In the present work we exploit a fusion approach for low-accuracy sensors in a speckle-based measurement system based on the configuration of Refs [10–12]. To compensate for the resolution loss caused by the additional rounding noise relative to the best subpixel estimation scheme, we propose to combine simultaneous measurements from several sensors having different amounts of defocusing.

The additional information that can help to reconstruct a high-resolution movement trajectory may be extracted from the different defocusing of the sensors. According to the results of Refs [10,11], there is a linear relation between the tilting vibration of the illuminated object and the x-y shifts in the correlation plane. The amount of defocusing sets the proportionality ratio. By capturing several differently defocused images and processing them, one can obtain different linear translations between the x-y position of the correlation function and the tilting of the object. Subpixel-resolution tilt extraction can then be achieved by combining these measured translations in a way that allows accurate interpolation of the real tilt value.

The major problem is to construct appropriate combination schemes for sensors with different defocusing. We resolve it by re-formulating this task as a classical array signal processing problem [15,16] with a known constant steering vector and noise uniformly distributed in time and space. One possible approach to this problem is beam-forming. Beamformer weights can be used to interpolate between the integer measurements for better reconstruction of sub-pixel accuracy. Different design goals may be achieved depending on the optimization criterion of the chosen beam-forming solution.

A more optical-design-related explanation of the idea used in this paper is as follows: a laser and several sensors with imaging lenses having different amounts of defocusing are directed at the same analyzed surface to collect the time-varying speckle patterns generated in the back-reflected light. By capturing several differently defocused images and processing them, one can obtain different linear translations between the x-y position of the speckle correlation function and the tilting of the inspected surface, and then perform a sub-pixel super-resolved extraction of the relevant tilt even if the x-y outputs are integer-valued.

The rest of the paper is organized as follows. In the next section we formulate the resolution enhancement task for a chosen set of camera defocusings as a beam-forming problem. Some general considerations about the performance and characteristics of possible beam-former solutions, as well as about the choice of relative defocusing for the different cameras (i.e. a good design of the steering vector), are given in Section 3. Then, in Section 4, a few different beam-forming schemes are proposed based on the formalization of Sections 2 and 3. The illustrative examples in Section 5 are based on in-vivo measurements of human heartbeat sounds. Some concluding remarks are given in Section 6.

2. Problem formulation for given sensors’ defocusing

Consider a setup with N cameras of different defocusing that simultaneously capture the same illuminated spot. Let $s_i$ denote the actual tilt increment at time instant i, measured as the 2D shift of the maximum peak in the correlation plane between speckle images at times i and i-1. Assume for a moment that an available subpixel-accuracy correlation algorithm on any camera provides the best possible estimate of $s_i$. The results from the various cameras should then differ only by a constant scalar factor that is, in fact, the ratio between their defocusing amounts. So, from a practical point of view, $s_i$ from one of the cameras can be considered the output of a subpixel motion estimation scheme and may be used as a reference.

Without any loss of generality, we number the sensors from 1 to N in ascending order of their defocusing and pick the output of one of them (say, the signal from camera 1, with the smallest defocus) as the reference signal. Thus, we have the following model for the proposed measurement setup:

\[ \mathbf{y}_i = \mathbf{a}\,s_i, \qquad i = 1,\ldots,M \tag{1} \]

where i is the time index, $s_i$ is a real scalar, $\mathbf{y}_i \in \mathbb{R}^N$ is the vector of sub-pixel (i.e. the best possible) estimates of $s_i$ from the different cameras, and the constant "steering-like" vector $\mathbf{a} \in \mathbb{R}^N$ has the form $\mathbf{a} = [1,\; d_2/d_1,\; \ldots,\; d_N/d_1]^T$, where $d_1,\ldots,d_N$ are the defocusing amounts of the specific cameras. Let us now assume that $\mathbf{y}_i$ is measured with integer-pixel accuracy; namely, we add rounding noise $\mathbf{v}_i$ to Eq. (1) and re-write it in the following form

\[ \mathbf{y}_i = \mathbf{a}\,s_i + \mathbf{v}_i, \qquad i = 1,\ldots,M \tag{2} \]
where now $\mathbf{y}_i \in \mathbb{R}^N$ and $\mathbf{v}_i \in \mathbb{R}^N$ are assumed to be

  • independent of (or at least uncorrelated with) the original signal $s_i$;
  • uncorrelated between different cameras;
  • uniformly distributed in $[-0.5, 0.5]$ for each camera.

Namely, we can write the covariance matrix of $\mathbf{v}$ as $\sigma_q^2\mathbf{I}$, where $\sigma_q^2 = 1/12$, as given by the rounding error to integer values. Our aim is to estimate $s_i$ from the available measurements $\mathbf{y}_i$ under the model of Eq. (2).
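As a quick numerical sanity check of this model (a sketch with hypothetical defocusing ratios and a hypothetical signal, not taken from the experiments), the rounding noise of Eq. (2) can be simulated and its per-sensor variance compared against $\sigma_q^2 = 1/12$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N = 3 sensors; steering vector a = [1, d2/d1, d3/d1]^T
a = np.array([1.0, 1.7, 2.4])
M = 10_000

# Hypothetical reference tilt signal, a few pixels in amplitude
s = rng.uniform(-4.0, 4.0, M)

y_exact = np.outer(a, s)   # sub-pixel measurements, Eq. (1)
y = np.round(y_exact)      # integer-pixel measurements, Eq. (2)
v = y - y_exact            # rounding noise

# For signals well above 1 pixel the rounding noise is close to
# uniform on [-0.5, 0.5], whose variance is 1/12 ~= 0.0833
print(v.var(axis=1))
```

The printed per-sensor variances land close to 1/12, which is the value used for $\sigma_q^2$ throughout the paper.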

Remark 1: The signal $s_i$ in Eq. (2) is assumed to be the best possible sub-pixel-accuracy estimate of the actual tilt change. If $s_i$ is taken to be the tilt change itself, then additive white Gaussian noise (AWGN) should be added to Eq. (2) to characterize the total measurement uncertainty, which is mainly a function of the sensor/camera. Namely, the model becomes:

\[ \mathbf{y}_i = \mathbf{a}\,s_i + \mathbf{n}_i + \mathbf{v}_i, \qquad i = 1,\ldots,M \tag{3} \]

The steering vector $\mathbf{a}$ in this case should contain the defocusing amounts themselves and not their ratios. The assumption of spatial (between-camera) uniformity and independence of $\mathbf{n}_i$, i.e. $\mathbf{n}_i \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$, preserves the diagonal structure of the noise covariance in Eq. (3).

Remark 2: The correctness of the assumption of uniformly distributed rounding noise $\mathbf{v}$ in Eq. (2) clearly depends on the signal amplitude; it holds for signals larger than 1 pixel. When a large part of the signal's samples have small amplitude (for example, near or below 0.5 pixels), the rounding noise has a distribution concentrated closer to zero. Note also that the estimated signal amplitude is determined by the defocusing amount, and in practical applications a measurement below 0.5 pixels is, in fact, a system noise component (like the variable $\mathbf{n}_i$ in Eq. (3)).
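Remark 2 can be illustrated with a short simulation (the amplitudes below are arbitrary illustrative choices): for small signals the rounding error simply reproduces the small signal itself and concentrates near zero, while for large signals it is nearly uniform:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100_000
vars_by_amp = {}

for amp in (0.3, 3.0):                 # small vs. large amplitude (pixels)
    s = rng.uniform(-amp, amp, M)      # hypothetical sub-pixel shift samples
    v = np.round(s) - s                # rounding noise
    vars_by_amp[amp] = v.var()
    print(f"amplitude {amp}: rounding-noise variance {v.var():.4f}")

# amp = 0.3: every sample rounds to 0, so v = -s and the variance is
# (0.6)^2 / 12 = 0.03; amp = 3.0: the variance is close to 1/12 ~= 0.0833
```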

In Fig. 1 we show experimental recordings of signals for sensors with different defocusing, estimated with either integer or sub-pixel accuracy for the correlation peak's x-y position. The dependence of the rounding error on the sensor's defocus and on the signal amplitude is clearly seen.

Fig. 1 Comparison of the recorded signals for different defocusing that are estimated with integer and sub-pixel accuracy.

In Figs. 2 and 3 we present the errors (rounding noise) between signals estimated with sub-pixel and integer accuracy, as well as the histograms of these errors for different defocusing, respectively. The results relate to the signals from Fig. 1 above. The deviation from a uniform distribution is significant only for signals with very small amplitudes, as expected from Remark 2.

Fig. 2 Errors (rounding noise) between signals with sub-pixel and integer accuracy estimated by sensors with different defocusing.

Fig. 3 Histograms of the errors (rounding noise) between signals with sub-pixel and integer accuracy for sensors with different defocusing.

3. Beam-forming solution for resolution enhancement

Our approach to computing the estimated shift $\hat{s}_i$ is restricted to searching for a linear weighting vector $\mathbf{w} \in \mathbb{R}^N$ such that

\[ \hat{s}_i = \mathbf{w}^T\mathbf{y}_i \tag{4} \]

In other words, we attempt to weight the output of the array of Eq. (2) to obtain a proper estimate of the signal s. In light of the model of Eq. (2), such weighting may be interpreted as "steering" the output of the array in a specific "direction" in the N-dimensional spatial space so as to achieve a certain optimization criterion [15]. The definition of this criterion and the corresponding solution depend on whether the weights can be adapted to the actual measurements, as described in Section 4 below.

Note first that the output of the beamformer of Eq. (4) may possess a linear scaling/gain with respect to the estimated signal s. Adaptive beam-forming will tend to absorb the measurement's properties and set this scaling parameter close to unity. In the case of fixed weights that do not depend on the measured data, on the other hand, the scaling is determined by the actual measurement amplitude and by the values of the components of vector w.

It is also worth noting that the noise component at the output of Eq. (4) is given by:

\[ E\{|\mathbf{w}^T\mathbf{v}|^2\} = E\Big\{\Big|\sum_{j=1}^{N} w_j v_j\Big|^2\Big\} = \sigma_q^2 \sum_{j=1}^{N} w_j^2 \tag{5} \]

namely, the noise is amplified by the squared norm of the weighting vector. On the other hand, given a weight vector w, the beamformer output takes the following form (we omit the time index i for convenience)

\[ \mathbf{w}^T\mathbf{y} = s\,\mathbf{w}^T\mathbf{a} + \mathbf{w}^T\mathbf{v}, \tag{6} \]

and the output SNR is given by

\[ \mathrm{SNR} = \frac{P_s\,|\mathbf{w}^T\mathbf{a}|^2}{E\{|\mathbf{w}^T\mathbf{v}|^2\}} \tag{7} \]

where $P_s = E\{|s|^2\}$ is the average signal power. So, applying Eq. (5) and taking $\mathbf{w}$ proportional to $\mathbf{a}$, we conclude that

\[ \mathrm{SNR} = \frac{P_s\,\|\mathbf{a}\|^2}{\sigma_q^2} \]

The components of vector a are always larger than or equal to 1 (recall that the smallest defocusing is defined as the unity value). Therefore, on the one hand, the SNR increases linearly with the number of cameras N and with the relative defocusing ratios.

On the other hand, there is an obvious tradeoff between increasing the number of sensors and the norm of vector a and the final beam-forming performance. This tradeoff stems from the fact that the output of Eq. (4) is a linear combination of independent uniformly distributed variables with different means. Its probability density function (PDF) can be approximated by piecewise polynomial functions [17,18] (in the limiting case of a sum of such variables with equal means it is called the Irwin-Hall distribution). Such a PDF tends to be close to a normal distribution whose variance also increases with the number of cameras and the relative defocusing ratios. This tendency, together with the SNR considerations in light of Eq. (7), may be a major factor in the optimal choice of the resolution enhancement setup (i.e. the choice of the number N and the vector a in Eq. (2)). As an example, we present histograms of a few simulated independent uniformly distributed components (as shown in Fig. 4).
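Both effects, the noise gain of Eq. (5) and the drift of the combined rounding noise toward a normal shape, can be checked numerically. The weights below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 200_000
results = []

# Weighted sums of independent U[-0.5, 0.5] components (rounding noises).
# Per Eq. (5) the output noise variance is sigma_q^2 * sum(w_j^2); as the
# number of components grows the PDF approaches a normal (Irwin-Hall-like).
for w in (np.array([1.0]),
          np.array([0.5, 0.5]),
          np.array([0.2, 0.3, 0.5])):
    v = rng.uniform(-0.5, 0.5, (len(w), M))
    out = w @ v
    predicted = (1 / 12) * np.sum(w ** 2)
    results.append((out.var(), predicted))
    print(f"N={len(w)}: empirical {out.var():.5f}, predicted {predicted:.5f}")
```

The empirical output variances match the $\sigma_q^2\sum_j w_j^2$ prediction of Eq. (5) for every choice of weights.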

Fig. 4 Examples of the independent uniformly distributed components (distributed in [-0.5 0.5] with zero-mean for all of them).

To illustrate the design considerations explained above, we also show the PDFs and the corresponding normal-distribution fits for different linear combinations of these simulated independent components in Figs. 5 and 6. The examples in Fig. 5 illustrate the dependence of the parameters of the approximating normal distribution on the number of components; the dependence of these parameters on the linear weights is presented in Fig. 6.

Fig. 5 Dependence of the parameters for the distribution of linear combination (normal PDF approximation) on number of components.

Fig. 6 Dependence of the parameters for the distribution of linear combination (normal PDF approximation) on linear coefficients (weights).

4. Different beam-forming solutions

In this Section we suggest a few different beam-forming schemes (with various design criteria) for a given number of cameras N with given relative defocusing ratios, i.e. a given vector a. We divide the proposed beam-forming schemes into two major groups:

  • Fixed beam-forming, whose weights depend only on the defocusing ratios in a
  • Adaptive beam-forming, whose weights depend on the measured data

4.1 Fixed weights

4.1.1. Weighted averaging (center of mass)

Probably the most straightforward way to interpolate a value between the outputs of different sensors is to weight the output of each sensor by its relative defocusing ratio. Namely, in this case

\[ \mathbf{w} = \frac{\mathbf{a}}{\sum_j a_j} \tag{8} \]
where $a_j$ denotes the j-th element of vector $\mathbf{a}$.

4.1.2 Conventional beam-forming

The goal of conventional beam-forming is to maximize the output power of Eq. (4) in the direction of the steering vector a. The maximization problem is formulated as:

\[ \max_{\mathbf{w}} E\{\mathbf{w}^T\mathbf{y}\mathbf{y}^T\mathbf{w}\} \]

Using the assumption of spatially white noise and constraining $\|\mathbf{w}\| = 1$ to avoid the trivial solution, this optimization problem clearly has the following solution:

\[ \mathbf{w} = \frac{\mathbf{a}}{\|\mathbf{a}\|} \tag{9} \]

We see that Eq. (8) and Eq. (9) differ only by the normalization factor applied to the steering vector a. However, as follows from the analysis of Section 3, this difference may lead to an essential variation in estimation performance.
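The two fixed schemes are a one-liner each. With illustrative (hypothetical) defocusing ratios, the difference in the noise gain $\|\mathbf{w}\|^2$ of Eq. (5), which drives the performance gap, is immediate:

```python
import numpy as np

a = np.array([1.0, 1.7, 2.4])          # assumed defocusing ratios

w_avg = a / a.sum()                    # weighted averaging, Eq. (8)
w_conv = a / np.linalg.norm(a)         # conventional beam-forming, Eq. (9)

# Both are proportional to a; they differ only in normalization and hence
# in the output scaling and in the noise gain ||w||^2 of Eq. (5).
print(np.sum(w_avg ** 2))              # smaller noise gain
print(np.sum(w_conv ** 2))             # equals 1 by construction
```

Since $\sum_j a_j \geq \|\mathbf{a}\|$ for non-negative components, the weighted-averaging weights always yield the smaller noise gain, consistent with the experimental comparison in Section 5.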

4.2 Adaptive weights

We may expect that allowing the chosen beam-forming weights to adapt to the measured data can essentially improve the estimation performance. We therefore consider a few possible adaptive beam-forming schemes that vary in their optimization criteria.

4.2.1 Capon/MVDR beam-forming

The aim of this approach is to minimize the power contributed by noise (i.e. from any spatial direction that differs from the direction of the steering vector) while keeping a fixed power gain in the direction of vector a. Thus, the problem is formulated as an optimization with a linear constraint, namely,

\[ \min_{\mathbf{w}} E\{\mathbf{w}^T\mathbf{y}\mathbf{y}^T\mathbf{w}\}, \qquad \mathbf{w}^T\mathbf{a} = 1 \]

The solution of this optimization problem can be obtained using the standard method of Lagrange multipliers and has the following form:

\[ \mathbf{w} = \frac{\mathbf{R}^{-1}\mathbf{a}}{\mathbf{a}^T\mathbf{R}^{-1}\mathbf{a}}, \qquad \mathbf{R} = E\{\mathbf{y}\mathbf{y}^T\} \tag{10} \]

In practical applications the measurement covariance matrix R should be replaced by its approximation, the sample covariance matrix. Namely, writing the whole batch of measurements in matrix form as

\[ \mathbf{Y} \in \mathbb{R}^{N \times M}: \qquad \mathbf{Y} = [\mathbf{y}_1 \;\ldots\; \mathbf{y}_M] \]

we have

\[ \mathbf{R} \approx \frac{1}{M}\,\mathbf{Y}\mathbf{Y}^T \]

This scheme is sometimes called minimum variance distortionless response (MVDR) in the literature, because the output has minimum energy (variance) while the desired signal is not distorted (the gain in the spatial direction of a is unity).

4.2.2 Maximum SNR beam-forming

Another reasonable design criterion is to maximize the output SNR, namely $\max_{\mathbf{w}} \mathrm{SNR}$, where the SNR is given by Eq. (7). Using the facts that multiplying the weights by a constant does not change the SNR and that the steering vector a is constant, we can restrict the search to weight vectors that satisfy:

\[ \mathbf{w}^T\mathbf{a} = c, \]
where c is any constant, for instance c = 1.

We thus obtain a constrained optimization problem that can be solved by a method similar to the one used in Section 4.2.1. The solution has the following form:

\[ \mathbf{w} = \frac{\mathbf{R}_v^{-1}\mathbf{a}}{\mathbf{a}^T\mathbf{R}_v^{-1}\mathbf{a}}, \qquad \mathbf{R}_v = E\{\mathbf{v}\mathbf{v}^T\} \tag{11} \]

It can be observed that using the special diagonal structure of the noise covariance matrix $\mathbf{R}_v$ defined in Eq. (2), which is independent of the actual measurements $\mathbf{y}$, namely $\mathbf{R}_v = \sigma_q^2\mathbf{I}$ with $\sigma_q^2 = 1/12$, turns maximum-SNR beam-forming into one of the fixed weighting schemes.

To keep the adaptive behavior of this scheme, we can instead estimate the actual noise covariance in the form $\sigma^2\mathbf{I}$ via eigenvector decomposition of the sample covariance matrix R (a procedure that allows all possible artifacts, interferences and uncertainties to be included in the noise estimate). In this case the beamformer may be regarded as a maximum-SNIR scheme. Indeed, it follows from the model of Eq. (2) that R has the well-defined structure:

\[ \mathbf{R} = P_s\,\mathbf{a}\mathbf{a}^T + \sigma^2\mathbf{I} \tag{12} \]

Thus, the eigenvector decomposition of R should yield a single large eigenvalue equal to $\|\mathbf{a}\|^2 P_s + \sigma^2$ and N-1 small eigenvalues, whose mean may be taken as a good estimate of the actual $\sigma^2$.

Remark 3: It follows from Eq. (12) that the major eigenvector of R may be considered a proper estimate of the steering vector a. In this sense, choosing w as this major eigenvector differs only by a normalization factor from its fixed-weight counterparts (8) and (9) (despite the formally adaptive character of such an estimate of w).
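The eigendecomposition route can be sketched as follows (again on simulated data with hypothetical ratios): the N-1 smallest eigenvalues of the sample covariance estimate $\sigma^2$, and the major eigenvector recovers the steering vector up to scale, as stated in Remark 3:

```python
import numpy as np

rng = np.random.default_rng(4)
a = np.array([1.0, 1.7, 2.4])          # hypothetical defocusing ratios
s = rng.uniform(-4.0, 4.0, 20_000)     # hypothetical tilt signal
Y = np.round(np.outer(a, s))           # integer measurements, Eq. (2)

R = Y @ Y.T / Y.shape[1]               # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order

# Per Eq. (12), R = Ps a a^T + sigma^2 I: one large eigenvalue
# ||a||^2 Ps + sigma^2 and N-1 small ones close to sigma^2.
sigma2_hat = eigvals[:-1].mean()       # noise power estimate, near 1/12
print(sigma2_hat)

a_hat = eigvecs[:, -1]                 # major eigenvector ~ a (up to scale)
a_hat = a_hat / a_hat[0]               # fix scale/sign via first component
print(a_hat)
```

Here the rounding noise plays the role of $\sigma^2\mathbf{I}$, so the small-eigenvalue average lands near 1/12 and the rescaled major eigenvector reproduces the assumed ratios.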

5. Illustrative examples

To test the resolution enhancement methods of Sections 2-4, a laboratory setup with 3 sensors was created. The cameras of the different sensors have various amounts of defocusing. The relative defocusing ratios (namely, the design of the steering vector a in Eq. (2)) were chosen quite arbitrarily. The cameras have slightly different angles with respect to the illuminated spot. Images of the illuminated spots from every camera for a specific time frame are presented in Figs. 7(a)-7(c).

Fig. 7 Experimental data. (a)-(c). Illuminated spots from different cameras. Blue points mark estimated spots’ center of mass.

Remark 4: It is assumed that vector a in the model of Eq. (2) (or Eq. (3)) is known exactly, since it contains the defocusing amount ratios and the defocusing is part of the measurement setup. However, if the exact defocusing ratio is not known for some technical reason, an estimate of the steering vector a, based on the mean ratio of all samples between different cameras/sensors, may replace its actual value. Figure 8 demonstrates the estimated defocusing ratios. Every point in this figure represents the amplitude ratio between corresponding samples of two different sensors. Results for various sensor pairs along both the x and y axes are shown.

Fig. 8 Estimated relative defocusing ratio for different sensors.

It is important to note the relatively wide dispersion of the actual amplitude ratios around their average values, which are taken as the defocusing ratios. This dispersion may be caused by non-stationary properties of the illuminated surface in the in-vivo experimental environment.

A small window of 64 × 64 pixels was extracted for every camera around the estimated center of mass of the illuminated spot. The movement of the speckle patterns was estimated with sub-pixel accuracy for each sensor by means of correlation between these 64 × 64 pixel windows in adjacent time frames. Following the formalization of Section 2, the estimate for the sensor with the smallest defocusing is used as the reference signal. Then, movement estimation at integer-pixel resolution was performed for every sensor (it is worth noting that for research purposes such integer-pixel estimates may be obtained equivalently either by bypassing the subpixel interpolation routine in the correlation-based computation or simply by rounding the subpixel-accuracy estimates). Finally, the different beamforming schemes of Section 4 were implemented.
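The integer-pixel movement estimation step can be sketched as a cross-correlation peak search over 64 × 64 windows. The following is an illustrative FFT-based implementation under a circular-shift assumption, not the code of the actual sensors:

```python
import numpy as np

def integer_shift(img0, img1):
    """Integer 2-D shift between two equal-size frames, taken as the
    location of the circular cross-correlation peak (FFT-based)."""
    f = np.fft.fft2(img0) * np.conj(np.fft.fft2(img1))
    corr = np.fft.ifft2(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts in [-size/2, size/2)
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, corr.shape))

# Check on a synthetic "speckle" window shifted by a known amount
rng = np.random.default_rng(5)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(integer_shift(shifted, frame))   # recovers the (3, -2) shift
```

Rounding a sub-pixel (interpolated) peak position to the nearest integer gives the same output, which is why the two routes mentioned above are equivalent for generating integer-resolution data.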

Figures 9(a)-(b) present the results of the fixed beam-forming solutions of Eq. (8) and Eq. (9), respectively. The linear scaling factor between the reference signal and its estimated counterpart is computed as the mean ratio over all samples between the reconstructed and reference signals; it is also indicated in these plots.

Fig. 9 (a)-(c). Resolution performance based on fixed beam-forming solutions.

Figure 9(c) shows the estimated PDFs of the error between the scaled version of the reference signal and the reconstructed signals. We can see that the error PDFs fit a normal distribution quite well (as could be expected from the analysis of Section 3). It is also seen that, despite the similar structure of vector w in both cases, weighted averaging provided better results (in the sense of error variance) than conventional beam-forming (BF), due to the different distribution of weights between sensors.

Similar results for the adaptive schemes, the Capon beamformer of Eq. (10) and the maximum-SNR beamformer of Eq. (11), are presented in Figs. 10(a)-(c). It is important to note that the beamformer of Eq. (11) keeps its adaptive nature, since $\mathbf{R}_v$ in Eq. (11) takes the form $\sigma^2\mathbf{I}$, where $\sigma^2$ is estimated by the eigenvector decomposition of Eq. (12).

Fig. 10 (a)-(c). Resolution improvement based on adaptive beam-forming.

As could be expected from the analysis in Section 3, it is seen that:

  • the scaling factor in both adaptive schemes is very close to 1;
  • the adaptive estimation is much better than its fixed counterparts;
  • both adaptive schemes provide results with similar performance (although the maximum-SNR scheme is slightly better than the MVDR beamformer).

Figure 11 presents the result (the error PDF estimate) of the major-eigenvector beam-forming, which is equivalent to the result of the conventional beamformer (as could be expected in light of Remark 3).

Fig. 11 Error PDF for major eigenvector beamformer.

Finally, Figs. 12(a)-(c) compare the errors in the time domain for the different schemes and demonstrate the clear advantage of the adaptive approach.

Fig. 12 (a)-(c). Performance (time domain errors) comparison for different schemes.

Remark 5: It is worth noting that the errors of all beam-forming schemes in Fig. 12 exhibit occasional spikes (error levels larger than 0.5 pixels). This phenomenon stems from the fact that the actual defocusing ratios of the various sensors differ essentially from the average values used in the steering vector a, as illustrated in Fig. 8 above. It is well known [15,16] that the beam-forming approach is sensitive to this kind of parameter uncertainty (also known as calibration noise), which may lead to large errors. It is clearly seen in Fig. 12 that the adaptive beam-forming schemes are more robust to the calibration errors than their fixed-weight counterparts. Appropriate solutions to the problem of strong calibration noise can be found among the methods of robust beam-forming; this can be a direction for further research.

All experiments were repeated several times for various tested subjects and similar results were obtained.

6. Concluding remarks

In this paper we proposed a method of resolution enhancement for speckle pattern movement estimation based on the combination of simultaneous measurements from several low-resolution imaging sensors. The sensors differ in the applied amount of defocusing, and we formulated the resolution improvement problem in terms of array-based signal processing. We then applied a beam-forming approach to utilize the measurement diversity of the different sensors in an optimal manner (under different optimization criteria) and reconstructed the estimated speckle pattern trajectory with higher accuracy.

The main advantages of the proposed approach are the possibility of choosing different design criteria and of providing fully adaptive solutions in closed analytical form. The method can easily be adapted to a wide range of use cases, including non-contact and continuous estimation of a variety of human vital signs. Once the chosen problem formulation is validated, many extensions and improvements of the proposed solutions may be developed based on advanced methods of array signal processing.

Experimental in-vivo remote measurements of heartbeat sounds demonstrated the validity of the suggested solution.

References and links

1. J. C. Dainty, Laser Speckle and Related Phenomena, 2nd ed. (Springer-Verlag, Berlin, 1989).

2. H. M. Pedersen, “Intensity correlation metrology: a comparative study,” Opt. Acta (Lond.) 29(1), 105–118 (1982). [CrossRef]  

3. P. K. Rastogi and P. Jacquot, “Measurement of difference deformation using speckle interferometry,” Opt. Lett. 12(8), 596–598 (1987). [CrossRef]   [PubMed]  

4. J. A. Leedertz, “Interferometric displacement measurements on scattering surfaces utilizing speckle effects,” J. Phys. E Sci. Instrum. 3(3), 214–218 (1970). [CrossRef]  

5. P. K. Rastogi and P. Jacquot, “Measurement of difference deformation using speckle interferometry,” Opt. Lett. 12(8), 596–598 (1987). [CrossRef]   [PubMed]  

6. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005). [CrossRef]   [PubMed]  

7. V. V. Tuchin, A. V. Ampilogov, A. G. Bogoroditsky, E. M. Rabinovich, V. P. Ryabukhov, S. S. Ul’yanov, and M. E. V’yushkin, “Laser speckle and optical fiber sensors for micromovements monitoring in biotissue,” Proc. SPIE 1420, 81–92 (1991). [CrossRef]  

8. S. P. Slight, C. Franz, M. Olugbile, H. V. Brown, D. W. Bates, and E. Zimlichman, “The return on investment of implementing a continuous monitoring system in general medical-surgical units,” Crit. Care Med. 42(8), 1862–1868 (2014). [CrossRef]   [PubMed]  

9. S. S. Ul’yanov, V. P. Ryabukho, and V. V. Tuchin, “Speckle interferometry for biotissue vibration measurement,” Opt. Eng. 33(3), 908–914 (1994). [CrossRef]  

10. Z. Zalevsky and J. Garcia, “Motion detection system and method,” Israeli Patent Application No. 184868 (July 2007).

11. Z. Zalevsky, Y. Beiderman, I. Margalit, S. Gingold, M. Teicher, V. Mico, and J. Garcia, “Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern,” Opt. Express 17(24), 21566–21580 (2009). [CrossRef]   [PubMed]  

12. Y. Beiderman, I. Horovitz, N. Burshtein, M. Teicher, J. Garcia, V. Mico, and Z. Zalevsky, “Remote estimation of blood pulse pressure via temporal tracking of reflected secondary speckles pattern,” J. Biomed. Opt. 15(6), 061707 (2010). [CrossRef]   [PubMed]  

13. STMicroelectronics, “VD5377 datasheet: Ultra-low power motion sensor for optical finger navigation (OFN),” http://www.st.com/en/imaging-and-photonics-solutions/vd5377.html

14. J. Hu, Y. Chang, and Y. Hsu, “Calibration and on-line data selection of multiple optical flow sensors for odometry application,” Sens. Actuators A Phys. 149(1), 74–80 (2009). [CrossRef]  

15. P. Stoica and R. Moses, Spectral Analysis of Signals (Prentice Hall, 2005).

16. H. L. Van Trees, Detection, Estimation and Modulation Theory, Part IV: Optimum Array Processing (John Wiley & Sons, Inc., 2002).

17. S. K. Mitra, “On the probability distribution of the sum of uniformly distributed random variables,” SIAM J. Appl. Math. 20(2), 195–198 (1971). [CrossRef]  

18. S. Sadooghi-Alvandi, A. Nematollahi, and R. Habibi, “On the distribution of the sum of independent uniform random variables,” Stat. Papers 50(1), 171–175 (2009). [CrossRef]  



