Optica Publishing Group

Optimal calibration of optical tweezers with arbitrary integration time and sampling frequencies: a general framework [Invited]

Open Access

Abstract

Optical tweezers (OT) have become an essential technique in several fields of physics, chemistry, and biology as precise micromanipulation tools and microscopic force transducers. Quantitative measurements require the accurate calibration of the trap stiffness of the optical trap and the diffusion constant of the optically trapped particle. This is typically done by statistical estimators constructed from the position signal of the particle, which is recorded by a digital camera or a quadrant photodiode. The finite integration time and sampling frequency of the detector need to be properly taken into account. Here, we present a general approach based on the joint probability density function of the sampled trajectory that corrects exactly the biases due to the detector’s finite integration time and limited sampling frequency, providing theoretical formulas for the most widely employed calibration methods: equipartition, mean squared displacement, autocorrelation, power spectral density, and force reconstruction via maximum-likelihood-estimator analysis (FORMA). Our results, tested with experiments and Monte Carlo simulations, will permit users of OT to confidently estimate the trap stiffness and diffusion constant, extending their use to a broader set of experimental conditions.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical tweezers (OT) are a key enabling technology in a wide range of fields including single-molecule biophysics, cell biology, colloidal studies, chemistry, statistical physics, transport phenomena, and even quantum physics [1–7]. They have been used to manipulate microscopic objects with forces ranging from femtonewtons ($10^{-15}\,\rm {N}$) to piconewtons ($10^{-12}\,\rm {N}$) [2]. Furthermore, they have also been used to characterize the mechanical properties of the (possibly complex) fluids where these particles are immersed [2,8,9].

To a first approximation, OT exert a force $F$ on the trapped particle that is proportional to the displacement $x$ of the particle from the center of the trap, i.e., $F= -\kappa x$, where $\kappa$ is the trap stiffness. There are various ways to experimentally determine $\kappa$. Most of these methods rely on tracking the position of the particle as a function of time, typically measured by a digital camera or a quadrant photodiode [1,10–13]. From the stochastic trajectory, different statistical estimators are used to estimate the trap stiffness of the optical trap and the diffusion constant of the optically trapped particle [2]. For instance, the equipartition (EP) method estimates $\kappa$ relying on the fact that, at thermodynamic equilibrium, the probability distribution of the particle position follows the Boltzmann distribution [1,2,13]. Other methods can additionally estimate the diffusion constant by relying on the mean square displacement (MSD) [1], the autocorrelation function (ACF) [14], the power spectral density (PSD) [15], and the force reconstruction via maximum likelihood estimator (FORMA) [2,16]. Generally speaking, the accurate and precise calibration of OT depends on several parameters involved in the detection of the position of the particle, such as the signal-to-noise ratio (SNR), the sampling frequency or bandwidth, the integration time, and the length of the acquired data.

Typically, an OT setup includes a microscope digital camera or a quadrant photodiode (QPD) to measure the position of the optically trapped particle. QPDs represent an excellent solution to calibrate OT when dealing with a single static optical trap; they work at very high sampling frequencies (on the order of hundreds of kilohertz) and short integration times (on the order of microseconds) with high SNR. Nevertheless, when a larger field of view is needed (e.g., in holographic optical tweezers trapping multiple particles at the same time), digital cameras are commonly used to visualize the whole sample, permitting the user to select the region of interest and the particular particle(s) to be analyzed [1,13]; however, standard microscopy cameras have the disadvantage of low sampling frequencies (usually up to a few kilohertz) and long integration times (up to milliseconds), which can affect the quality of the OT calibration with standard methods.

Several works have discussed the effect of the integration time and sampling frequency on the calibration of OT [17–28]. The EP method is arguably the simplest one and, often, the natural starting point for more complex analyses, with the limitation that it only provides the stiffness of the optical trap but not the diffusion constant of the particle [17–19]; a finite integration time and a limited sampling frequency have been shown to introduce artifacts in the reconstructed trapping potentials [17]. There has also been some work exploring the effects of the integration time on different geometries of the trapping potential and in cases of anisotropic diffusion, which may have applications in nuclear magnetic resonance imaging techniques [19]. For MSD-based methods, the effect of the integration time has been discussed in the context of rheological studies measuring the viscoelastic properties of the liquid where the particles are immersed [17,19,21–25], relevant for the study of processes intertwined with, e.g., cellular dynamics (such as cell growth, stem cell differentiation, cell crawling, wound healing, protein regulation, cell malignancy, and even cell death [24,26]). When using the PSD method, the finite integration time and limited sampling frequency result in an effective low-pass filtering of the signal whose signature can be identified in the acquired power spectra [18,20,27,28].

Recently, there has been growing interest in applying techniques of classical information theory to the calibration of OT. For instance, estimations using posterior distributions and maximum posterior estimators (e.g., FORMA with an uninformative prior [16]) are two of the recently developed alternatives, which, nonetheless, still assume restrictive ideal conditions, such as high sampling frequency and negligible integration time [3,16,29,30]. Using maximum likelihood estimators, Refs. [23,25,28] were able to incorporate a limited sampling frequency and integration time into the mathematical analysis and then use concepts from information theory, such as Fisher information [25] and the Cramér-Rao bound [23], to assess the validity of their techniques. Other works have also considered a covariance-based estimator [31].

While many works and methodologies are available, a general and comprehensive approach to tackle the effects of the integration time, sampling rate, and trajectory duration in standard calibration methods is still lacking. Moreover, some of the works mentioned above rely on intermediate procedures to fit the models' functions, typically by means of non-linear fitting algorithms, with the disadvantage of being time consuming and potentially adding errors to the final estimators.

Here, we develop a general framework from which we derive analytical solutions that incorporate sampling frequency and integration time in their descriptions for the most commonly used methods to estimate the stiffness and the diffusion constant in OT (namely EP, MSD, ACF, PSD, and FORMA). We provide generalized analytical solutions for these methods, enabling accurate and precise estimations, independent of the frame rate and of the integration time of the camera. This will offer a framework for future research to tackle problems where fast or real-time estimations of the stiffness and diffusion are required, such as those dealing with non-equilibrium thermodynamics or molecular biology.

This article is organized as follows: First, in Sec. 2, we highlight the problems that arise due to a finite integration time and a limited sampling frequency on the estimates derived from the standard calibration methods (EP, MSD, ACF, PSD, and FORMA), which assume ideal conditions (i.e., high sampling frequency and negligible integration time). In Sec. 3, we derive analytical expressions that incorporate arbitrary integration time and sampling frequency for all the methods, and we further use them to estimate the stiffness and diffusion. A summary of the methods in their standard and generalized forms, including practical considerations, is presented in Table 1. Then, in Sec. 4, we compare the performance of these new formulas, which we will refer to as generalized formulas, against the standard ones for estimating the stiffness and diffusion constant in a broad range of experimental conditions, going from low sampling frequencies and large integration times to high sampling frequencies and short integration times. Finally, in Sec. 5, we show the performance of the generalized formulas to estimate the stiffness and diffusion constant when controlling the total time and the number of samples. At the end of the article, in Secs. 6 and 7, we discuss our results and give general conclusions and future research directions.


Table 1. Overview of the calibration analyses in the overdamped regime. These methods allow one to estimate the stiffness $\kappa$ of the optical trap and, in most cases, the diffusion constant $D$ of the optically trapped particle from the sampled trajectory of the particle $\mathcal{D}\equiv\{ \tilde{x}_{j}\}_{j=1}^{N_{\rm s}}$ with sampling time $\Delta t$ and integration time $\delta$. The standard model functions assume much shorter sampling and integration times than the characteristic time of the optical trap. The generalized model functions, on the other hand, account for arbitrary sampling frequency and integration time, drastically improving the calibration of OT even when data acquisition is very far from the ideal conditions.

2. Problems due to finite sampling frequency and integration time

Let us start with a reminder of the standard calibration methods to later dwell on the effects that a finite sampling frequency and integration time may have on them. We assume that the effect of the OT on a particle is well described by the Langevin equation modeling the motion of a Brownian particle in the overdamped regime subject to a Hookean restoring force. For simplicity, we also restrict ourselves to the one-dimensional case:

$$\frac{dx(t)}{dt}={-}\frac{\kappa}{\gamma} x(t)+\sqrt{2D} W(t)\,.$$

The diffusion constant is $D=k_{\rm B}\,T/\gamma$, with $k_{\rm B}$ the Boltzmann constant, $T$ the absolute temperature, $\gamma =3\pi \nu d_{\rm p}$ the drag coefficient, $d_{\rm p}$ the diameter of the particle, $\nu$ the dynamic viscosity of the medium, and $W(t)$ a white noise with zero mean and Dirac-delta-correlated variance $\left \langle W(t)W(t')\right \rangle =\delta (t-t')$ [32].
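For concreteness, the quantities entering this relation can be evaluated numerically. The following sketch is our own (not the authors' code); the temperature and viscosity are assumed values for water near room temperature, chosen to reproduce the order of magnitude of the example parameters used later in the text:

```python
# Drag coefficient, diffusion constant, and trap relaxation time from
# D = k_B T / gamma and gamma = 3 pi nu d_p (assumed conditions: water
# at ~25 C; particle and stiffness as in the examples below).
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.15              # absolute temperature, K (assumed)
nu = 0.89e-3            # dynamic viscosity of water, Pa*s (assumed)
d_p = 1.54e-6           # particle diameter, m
kappa = 4.08e-6         # trap stiffness, N/m (4.08 pN/um)

gamma = 3 * math.pi * nu * d_p   # Stokes drag coefficient, kg/s
D = k_B * T / gamma              # diffusion constant, m^2/s
tau_ot = gamma / kappa           # trap relaxation time, s

print(f"gamma  = {gamma:.3e} kg/s")
print(f"D      = {D * 1e12:.3f} um^2/s")
print(f"tau_ot = {tau_ot * 1e3:.2f} ms")
```

The resulting $D\approx 0.32\,{\mu}\rm{m}^2/\rm{s}$ and $\tau_{\rm ot}\approx 3\,\rm{ms}$ are close to the values quoted in the examples below; the exact numbers depend on the assumed temperature and viscosity.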

A formal solution to Eq. (1) reads

$$x(t)=x_0 e^{- t/\tau_{\text{ot}}}+\sqrt{2D} \int_0^{t} ds W(s) e^{-(t-s)/\tau_{\text{ot}}},$$
where $x_0$ is the initial position at time $t=0$ and $\tau _{\text {ot}}= \gamma /\kappa$ is the characteristic relaxation time of the particle in the optical trap. From this theoretical model, one can easily derive exact formulas for physical quantities of interest that will depend on the two parameters $\kappa$ and $D$. To infer these parameters, one must contrast the theoretical formulas with sample estimators constructed from a dataset containing a time series of the particle’s position measured experimentally. Let our dataset be $\mathcal {D}\equiv \{x_n\}_{n=1}^{N_{\rm s}}$, where $x_n\equiv x(t_n)$ are the particle’s positions taken at regular times $t_n=n\Delta t=n/f_{\rm s}$, for $n=1,2,\ldots,N_{\rm s}$, and $f_{\rm s}=1/\Delta t$ denotes the sampling frequency. A summary of the implementation and the relevant functions involved in these methods is as follows (for further details, see Refs. [1,2]):
  • Potential and EP analyses. At thermodynamic equilibrium, the probability of finding the particle around a position $x$ is given by the Boltzmann distribution:
    $$\rho(x)=\rho_0 e^{-\frac{U(x)}{k_{\rm B} T}},$$
    where $\rho _0$ is a normalization factor. In this case, the statistical estimator is, fairly simply, the histogram of the probability of finding the particle at a given position constructed from the dataset $\mathcal {D}$. The experimental potential is estimated by means of the histogram $h(x_b)$ of the position, with $x_b$ the bins’ coordinates,
    $$U(x_b)={-}k_B T \log(h(x_b))+{\rm constant}.$$

    If we further assume that the potential is harmonic,

    $$U(x)=\frac{1}{2}\kappa (x-x_{\rm eq})^2,$$
    with $x_{\rm eq}$ the stable equilibrium position, we can fit this model function to the estimates in Eq. (4) and obtain $\kappa$ as a free parameter. Equivalently, by the equipartition theorem we know that
    $$\kappa=\frac{k_{\rm B} T}{\left\langle (x-x_{\rm eq})^2\right\rangle}\,.$$

    This implies that a simple way to infer the value of $\kappa$ is to use the dataset $\mathcal {D}$ to construct an estimate of the position’s variance $\left \langle (x-x_{\rm eq})^2\right \rangle$. Its statistical estimator is the so-called sample variance, usually denoted as $s^2_{N_{\rm s}}$. Thus,

    $$\kappa=\frac{k_B T}{s^2_{N_{\rm s}}},$$
    with $s^2_{N_{\rm s}}=\frac {1}{N_{\rm s}-1}\sum ^{N_{\rm s}}_{n=1}( x_n-x_{\rm {eq}})^2$ and $x_{\rm {eq}}=\frac {1}{N_{\rm s}}\sum ^{N_{\rm s}}_{n=1}x_n$. Since the potential and the EP formulas are essentially equivalent, very similar results are expected when the number of data points is high enough; therefore, in the following, we will focus only on the EP method.

  • MSD analysis. The MSD is theoretically defined as
    $$\text{MSD}(\tau)=\left\langle [x(t+\tau)-x(t)]^2\right\rangle=\frac{2 k_{\rm B} T}{\kappa}\left(1-e^{-\tau/\tau_{{\rm ot}}}\right),$$
    for $t$ larger than $\tau _{{\rm ot}}$. Given the dataset $\mathcal {D}$, its statistical estimator is given by
    $$\text{MSD}(\tau_\ell)=\frac{1}{N_{\rm s}-\ell}\sum^{N_{\rm s}-\ell}_{n=1}(x_{n+\ell}-x_n)^2,$$
    with $x_n=x(n\Delta t)$ and $\tau _\ell =\ell \Delta t$. The theoretical formula and its estimator are then fitted to infer $\kappa$ and $\tau _{{\rm ot}}$ (from which $D$ can be straightforwardly obtained).
  • ACF analysis. The autocorrelation function is defined as:
    $$\text{ACF}(\tau)=\left\langle x(t+\tau)x(t)\right\rangle=\frac{k_{\rm B} T}{\kappa} e^{-\tau/\tau_{{\rm ot}}},$$
    again for $t$ larger than the characteristic relaxation time. In this case, its estimator is given by
    $$\text{ACF}(\tau_\ell)=\frac{1}{N_{\rm s}-\ell}\sum^{N_{\rm s}-\ell}_{n=1}(x_{n+\ell}x_n),$$
    which upon comparison with its theoretical counterpart allows us to infer $\kappa$ and $D$ (through $\tau _{{\rm ot}}$).
  • PSD analysis. For a continuous and infinitely long signal (solution to Eq. (1)), the PSD is given by
    $$P(f)=\frac{1}{2\pi^2}\frac{D}{f_{{\rm c}}^2+f^2},$$
    which depends on $f_{\rm c}=\kappa /(2\pi \gamma )$ and $D$. If, however, the signal is taken at discrete time steps, the above formula must be replaced by [15]:
    $$P(f)=\frac{{\Delta x}^2/f_{\rm s}}{1+c^2-2c\cos(2\pi f/f_{\rm s})},$$
    where $\Delta x=((1-c^2)D/2\pi f_{{\rm c}})^{1/2}$ and $c=e^{-2\pi f_{{\rm c}}/f_{\rm s}}$. The corresponding sampled estimator of the PSD is computed by means of the Fast Fourier Transform (FFT),
    $$P(f_k)=\frac{\left\langle |\hat{x}(f_k)|^2\right\rangle}{T_{\rm s}} =\frac{\Delta t^2}{T_{\rm s}} \left\langle |\rm{FFT}\{\{x_j\}^{N_{\rm s}}_{j=1}\}_k|^2\right\rangle$$
    with $f_k=k/T_{{\rm s}}$ for $k=1,\ldots, (N_{\rm s}-1)/2$. In this case, $\left \langle (\cdots )\right \rangle$ stands for the expected value estimated by averaging many experimental replicas or by data compression (typically obtained by moving average).
  • FORMA analysis. The values of $\kappa$ and $D$ are directly computed from the experimental dataset $\mathcal {D}$ by maximizing the likelihood of the linear model. The linear model comes about by discretizing the Langevin equation to linear order and rescaling the noise accordingly, yielding:
    $$\frac{x_{n+1}-x_{n}}{\Delta t}={-}\frac{\kappa}{\gamma }x_{n}+\sqrt{\frac{2D}{\Delta t}}w_n$$
    with $w_n$ Gaussian variables of zero mean and unit variance. From here, the model’s likelihood is:
    $$\mathcal{L}(\{x_n\}_{n=1}^{N_{\rm s}})=\frac{1}{\left(2D\Delta t\right)^{N_{\rm s}/2}}\exp\left[-\frac{1}{2}\sum_{n=1}^{N_{\rm s}-1}\left(\frac{x_{n+1}-\left(1-\frac{\Delta t}{\tau_{\rm ot}}\right)x_{n}}{\sqrt{2D \Delta t}}\right)^2\right]$$
    maximizing with respect to the parameters we obtain the maximum likelihood estimators:
    $$\frac{\kappa}{\gamma}={-}\frac{\sum_{n=1}^{N_{\rm s}-1}x_{n}\frac{x_{n+1} -x_{n}}{\Delta t}}{\sum_{n=1}^{N_{\rm s}-1} [x_{n}]^2},\quad\quad D=\frac{\Delta t}{2 (N_{\rm s}-1)}\sum_{n=1}^{N_{\rm s}-1}\left(\frac{x_{n+1}-{x_{n}}}{\Delta t}+\frac{\kappa}{\gamma}x_{n}\right)^2\,.$$
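As a minimal sketch (our own, in dimensionless units; not the authors' implementation), the EP estimator of Eq. (7) and the FORMA estimators of Eq. (17) can be applied to an exactly simulated Ornstein–Uhlenbeck trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensionless test parameters (our own choice, not from the paper):
# kappa/gamma = 1 (so tau_ot = 1) and D = 1, giving k_B T / kappa = D*tau_ot = 1.
k_over_g = 1.0
D = 1.0
dt = 0.01
N = 500_000

# Exact OU update over one time step (no discretization error):
# x_{n+1} = c x_n + sqrt(D/(kappa/gamma) * (1 - c^2)) w_n,  c = exp(-dt/tau_ot)
c = np.exp(-k_over_g * dt)
noise_sd = np.sqrt(D / k_over_g * (1.0 - c**2))
w = rng.standard_normal(N - 1)
x = np.empty(N)
x[0] = 0.0
for n in range(N - 1):
    x[n + 1] = c * x[n] + noise_sd * w[n]

# EP, Eq. (7): k_B T / kappa estimated by the sample variance.
kT_over_kappa_ep = np.var(x, ddof=1)

# FORMA, Eq. (17): maximum-likelihood estimators of kappa/gamma and D.
dx = np.diff(x)
k_over_g_forma = -np.sum(x[:-1] * dx / dt) / np.sum(x[:-1] ** 2)
D_forma = dt / (2 * (N - 1)) * np.sum((dx / dt + k_over_g_forma * x[:-1]) ** 2)

print(kT_over_kappa_ep, k_over_g_forma, D_forma)
```

With $\kappa/\gamma = D = 1$, all three printed values come out close to 1; the small residual bias of the FORMA estimate of $\kappa/\gamma$, of order $\Delta t/\tau_{\rm ot}$, is an example of the discretization effects discussed next.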
All these methods assume that each sample contained in the dataset $\mathcal {D}$ is measured instantaneously. Additionally, the MSD, ACF, PSD and FORMA methods need the sampling frequency to be high enough in comparison with the characteristic time of the trap to fully access the time dependence of these observables. However, a real sampled trajectory will be distorted by the time the detector takes to collect the data, called the integration time, as schematically shown in Fig. 1(a). This skews the estimations of $\kappa$ and $D$. A way to model the effect of the integration time is to assume that the sampled position at a given time $t_n$ is averaged over a time window of size $\delta$, which represents the integration time, that is:
$$\tilde{x}_n\equiv \frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} x(t)\, dt\,.$$

The new dataset $\tilde {\mathcal {D}}\equiv \{\tilde {x}_n\}_{n=1}^{N_{\rm s}}$ better captures the trajectory obtained in an experiment. This averaging is equivalent to a low-pass filter, with the upper bound $\delta \leq \Delta t$. In a digital camera with a global shutter, the integration time (exposure time) can be set within a range of values defined by the camera maker and is limited by the maximum sampling frequency (frame rate) allowed by the device. In a QPD, the effective integration time depends on the electronic bandwidth and typically is not a controllable parameter; in contrast to standard digital cameras, QPDs can quite easily reach sampling frequencies of the order of $\sim 10^5\,\rm {Hz}$, allowing very short integration times of the order of $10^{-5}\,\rm {s}$.
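The blurring of Eq. (18) can be explored numerically by averaging a finely sampled trajectory over non-overlapping windows. The sketch below is our own (dimensionless units with $\tau_{\rm ot}=D=1$ and $\delta=\Delta t=0.5\,\tau_{\rm ot}$, i.e., the $\delta=\Delta t$ limit case) and shows the resulting reduction of the measured variance:

```python
import numpy as np

rng = np.random.default_rng(7)

# Finely sampled dimensionless OU trajectory (tau_ot = 1, D = 1), used as
# a stand-in for the "true" continuous trajectory x(t).
tau_ot, D = 1.0, 1.0
dt_fine = 1e-3
N_fine = 1_000_000
c = np.exp(-dt_fine / tau_ot)
sd = np.sqrt(D * tau_ot * (1.0 - c**2))
w = rng.standard_normal(N_fine - 1)
x = np.empty(N_fine)
x[0] = 0.0
for n in range(N_fine - 1):
    x[n + 1] = c * x[n] + sd * w[n]

# Blur: approximate the integral of Eq. (18) by averaging non-overlapping
# windows of length delta = 0.5 tau_ot, which also sets Delta t = delta.
window = 500                      # delta / dt_fine
x_blur = x[: (N_fine // window) * window].reshape(-1, window).mean(axis=1)

var_true = np.var(x)       # ~ k_B T / kappa = D * tau_ot = 1
var_blur = np.var(x_blur)  # smaller: motion blur acts as a low-pass filter
print(var_true, var_blur)
```

The blurred variance comes out noticeably below the true one, which is precisely the low-pass-filter effect described above; Sec. 3 quantifies this reduction analytically.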


Fig. 1. Effect of low sampling frequency and long integration time on optical tweezers calibration. (a) The trajectory of a particle (solid line) is sampled every $\Delta t$, instantaneously (black dots) or with a finite integration time $\delta$ (red dots). (b-e) Behavior of the standard methods for various sampling frequencies and integration times on simulated trajectories generated through Monte Carlo simulations of a particle of diameter $d_{\rm p}=1.54\,{\mu}\rm {m}$ in a trap with stiffness $\kappa =4.08\,\rm {pN/}{\mu}\rm {m}$ and diffusion constant $D= 0.299\,{\mu}\rm {m}^2/\rm{s}$. The gray solid lines represent the analytical solutions of the standard methods, whereas the colored markers represent their corresponding estimates obtained from the simulated data. From left to right, potential (POT), mean square displacement (MSD), autocorrelation function (ACF), power spectrum density (PSD), and force reconstruction via maximum likelihood estimator (FORMA) methods. (b) When the conditions are ideal (i.e., with high sampling frequency $f_{\rm s}=5000\,\rm {Hz}$ and short integration time $\delta =0\,\rm {s}$), there is good agreement between the estimators and the theoretical predictions (see Eqs. (7), (8), (10), (13) and (17)). (c-e) This agreement worsens as the conditions become less ideal (c) by lowering the sampling frequency to $f_{\rm s}= 100\,\rm {Hz}$ ($\delta =0\,\rm {m s}$), and then by increasing the integration time to (d) $\delta =5\,\rm {m s}$ and (e) $\delta =10\,\rm {m s}$.


To appreciate the effect of the sampling frequency and the integration time on the standard methods previously described, we perform Monte Carlo simulations of the dynamics of a particle trapped in an OT, varying the sampling frequency and the integration time (see Ref. [1] and Appendix A for more details on the Monte Carlo simulations). For this purpose, we used a particle of diameter $d_{\rm p}=1.54\,{\mu}\rm {m}$ and diffusion constant $D= 0.299 \,{\mu}\rm {m}^2/\rm{s}$ trapped by an OT with $\kappa = 4.08 \,\rm {pN/}{\mu}\rm {m}$. These values were chosen to be in line with the experiments discussed in Secs. 4 and 5.

The scatter plots in Figs. 1(b-e) show the behavior of the estimators under various acquisition conditions (from left to right, the methods based on the potential, MSD, ACF, PSD, and FORMA, estimated by means of Eqs. (4), (9), (11), (14), and (17), respectively). Figure 1(b) shows the behavior for high sampling frequency, $f_{\rm s}=5000\,\rm {Hz}$, and zero integration time, while Figs. 1(c-e) show the behavior for low sampling frequency, $f_{\rm s}=100\,\rm {Hz}$, and three different integration times, $\delta =0\,\rm {s}$, $5 \,\rm {ms}$, and $10\,\rm {ms}$ (the latter corresponds to the limit case where $\delta =\Delta t$). The solid gray lines show the behavior of the analytical expressions corresponding to the solutions of the standard methods with the theoretical values of $\kappa$ and $D$ (Eqs. (5), (8), (10), (13), and the deterministic part of Eq. (15) for FORMA).

Figure 1(b) shows the ideal case where the data points corresponding to each estimator follow the theoretical predictions of the standard formulas. In this case, the sampling frequency is more than one order of magnitude higher than the characteristic frequency of the trap, $f_{\rm s} = 5000\,\rm {Hz} \gg f_{\rm ot}=1/\tau _{\rm ot}=\kappa /\gamma =297\,\rm {Hz}$.

A less ideal condition, still with zero integration time but with a low sampling rate, is shown in Fig. 1(c). In this case, the sampling frequency, $f_{\rm s}=100\,\rm {Hz}$, is slightly lower than the characteristic frequency of the trap. Broadly speaking, we can already notice some important effects on the estimators that could have repercussions on the estimated values of $\kappa$ and $D$. The least affected is probably the potential, since this method does not depend on the sampling frequency: the main condition to properly recover the potential or the variance of the position is that the total elapsed time of the trajectory be much larger than the relaxation time of the trap ($T_{\rm s} \gg \tau _{\rm ot}$) [2], which is easily met in the majority of cases, since typically $\tau _{\rm ot}$ is of the order of some milliseconds while $T_{\rm s}$ can be of the order of seconds or more.

Regarding the MSD (second column in Fig. 1(c)), the estimator's data do not appear appreciably distorted. The only condition needed to properly resolve the MSD is a sampling frequency high enough to recover the first part of the function. Since this curve reaches its maximum value at a characteristic time $\tau =\tau _{\rm ot}$, $\Delta t\lesssim \tau _{\rm ot}$ ensures enough data points in the first region of the plot.

The third column of Fig. 1(c) shows the case of the ACF. The estimator, similarly to what happens with the MSD, is not distorted by the low sampling frequency and follows the standard formula. Nevertheless, since this function falls rapidly to zero with time, with a characteristic time given by $\tau _{\rm ot}$, from a statistical point of view the behavior of the ACF is better reproduced by the estimator if $\Delta t \lesssim \tau _{\rm ot}$, similarly to what happens with the MSD.

The estimator of the PSD, shown in the fourth column of Fig. 1(c), follows its corresponding analytical expression. Since the estimator of the PSD depends on the discrete Fourier transform, aliasing of the data becomes more evident as the sampling frequency decreases, as in this example; the analytical solution for the aliased PSD (Eq. (13)) is therefore the appropriate one and is the one shown in these figures. Under these still ideal conditions, the only requirements for the PSD are a sufficiently long trajectory, $T_{\rm s}\gg \tau _{\rm ot}$, and a sampling frequency $f_{\rm s}\gtrsim f_{\rm ot}$ high enough to represent the PSD well at frequencies below and above the corner frequency $f_{\rm c}$, which in our example is $f_{\rm c}=\kappa /(2 \pi \gamma )=47.3\,\rm {Hz}$.

The last panel in Fig. 1(c) depicts the behavior of FORMA. Here, in contrast to the high-sampling-frequency case described above, the trend of the scatter plot, particularly its slope, clearly departs from the analytical formula. This is expected, since FORMA in its simple form was designed for data sets taken at sampling frequencies much higher than the characteristic frequency of the trap [16], giving rise to wrong estimates of $\kappa$ and $D$ when its formulas are applied to data acquired under these conditions.

Figures 1(d-e) show the cases with low sampling frequency and a finite, nonzero integration time. These figures illustrate how the estimators of the different methods fail as the integration time increases. In the case of the potential, as the integration time increases, the estimated values follow a steeper parabola than the expected one, which, according to Eq. (3), would indicate an apparently higher stiffness than the real one. Likewise, this effect biases the estimation of the stiffness via the equipartition formula in its standard form (see Eq. (7)).

Under the same conditions, the estimator of the MSD is distorted at all times, yielding lower values than expected, with a lower initial slope than the theoretical prediction. According to the ideal formula (see Eq. (8)), the MSD grows linearly for short times with a slope of $2D$, while it reaches a plateau at $2k_{\rm B}T/ \kappa$ for long times. Hence, directly comparing the MSD values estimated with a finite integration time against the standard formula (Eq. (8)) gives rise to biased values of $D$ and $\kappa$, namely a lower diffusion constant and a higher stiffness than the expected ones.

As the integration time increases, the ACF is, surprisingly, not drastically distorted, as can be seen in the third column of Figs. 1(d-e). The difference between the estimated values and the standard analytical description is quite subtle, albeit clearly visible in the data points evaluated at $\tau =0$. We will show in the following section that the zero-lag value of the ACF is the one mainly affected by the integration time, while the subsequent points may present negligible distortions when $\delta$ is small. Owing to this property, the ACF enables more reliable estimations of $\kappa$ and $D$ than the other methods, particularly if the integration time is short or moderate and the first data points of the ACF are ignored in the fitting procedure.

For the PSD, the integration time distorts the estimated spectrum by reducing its expected values at low frequencies and steepening the decay at high frequencies. According to the analytical expression Eq. (13), the dominant behavior of the PSD is $D/(2\pi ) f_{\rm c}^{-2}$ at low frequencies, while at high frequencies (ignoring aliasing) it decays predominantly as $D/(2\pi ) f^{-2}$; similarly to the MSD, this gives rise to an apparent increase of $\kappa$ and a reduction of $D$.

In the case of FORMA, as the integration time increases, the scatter plot becomes thinner and tilts counterclockwise, again indicating apparently higher and lower estimates of the stiffness and the diffusion constant, respectively.

3. Generalized methods

As mentioned before, the dynamics of a particle of diameter $d_{\rm p}$, immersed in a fluid with viscosity $\nu$ and subject to the action of an OT with stiffness $\kappa$, can be modeled by the Langevin Eq. (1). Recall that the standard formulas of the methods described above, i.e., EP (Eqs. (5) and (7)), MSD (Eq. (8)), ACF (Eq. (10)), PSD (Eq. (13)), and FORMA (Eq. (17)), do not account for the integration time, which implies that the inferred parameters will be biased. To incorporate this effect in a general framework, we need the probability of observing a whole particle trajectory already corrected for the integration time. If such an expression can be derived, any physical quantity of interest follows easily from it. In the following, we show that this is indeed possible.

Starting from Eq. (18), using the formal solution Eq. (2), and after some calculations based on the path-integral formalism (see Appendix C), it can be shown that the joint probability density function, denoted here as $p\left (\{\tilde {x}_n\}_{n=1}^{N_{\rm s}}\right )$, of observing the particle at sampled positions $\tilde {x}_1,\ldots, \tilde {x}_{N_{\rm s}}$ at times $0<t_1< \cdots <t_{N_{\rm s}}<T_{\rm s}$ is given by:

$$p(\{\tilde{x}_n\}_{n=1}^{N_{\rm s}})=\frac{1}{\sqrt{(2\pi)^{N_{\rm s}} \det I}}\exp\left[-\frac{1}{2}\sum_{n,m=1}^{N_{\rm s}} \left(\tilde{x}_n- \tilde{x}_0 e^{- t_n/\tau_{\text{ot}}}\right)[I^{{-}1}]_{nm}\left(\tilde{x}_m- \tilde{x_0} e^{- t_m/\tau_{\text{ot}}}\right) \right],$$
where $\tilde {x}_0=\frac {\sinh (\alpha )}{\alpha } x_0$, and $[I^{-1}]_{nm}$ corresponds to the $(n,m)$-entry of the inverse covariance matrix between two particle’s blurry positions at time $t_n$ and $t_m$,
$$I_{nm}=\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 D \tau_{\text{ot}}\left[e^{-|t_n-t_m|/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right]+D\tau_{\text{ot}} \frac{\alpha-\cosh(\alpha)\sinh(\alpha)}{\alpha^2}\delta_{nm},$$
where $\delta _{nm}$ denotes the Kronecker delta and $\alpha =\delta /2\tau _{\text {ot}}$ is half the ratio between the integration time and the relaxation time. For the derivation of this formula, we have only assumed that $|t_n-t_m|>\delta$ for $n\neq m$, which is quite realistic. Moreover, since the process runs within an interval starting at zero, the first recorded time $t_1$ must obey $t_1-\delta >0$.

In the limit of $\alpha$ going to zero, which corresponds to either a very fast shutter or a very long relaxation time, we recover the standard covariance matrix, and the joint PDF given by Eq. (19) can be factorized as $p(\{x_n\}_{n=1}^{N_{\rm s}})=p(x_0)\prod _{n=1}^{N_{\rm s}-1} G(x_{n+1}|x_n)$, with $G(x_{n+1}|x_n)$ the propagator of the standard Ornstein–Uhlenbeck (OU) process. In general, the joint PDF given by Eq. (19) cannot be factorized in this way, since the corrected process is non-Markovian. While one might be tempted to invert the covariance matrix of Eq. (20) numerically, we will see in our subsequent analysis of the calibration methods that this is not needed, not even for FORMA, for which the inversion of $I$ is cleverly avoided. This expression contains all the statistical information of the system, and we can now proceed to rederive the theoretical formulas used in the calibration methods. Henceforth, we take the initial condition $x_0=0$ and, moreover, for the MSD and ACF we assume that enough time has elapsed that we only keep the time-translation-invariant term in the covariance matrix.
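As a quick numerical sanity check (our own, not part of the paper's derivation), the stationary diagonal of the covariance in Eq. (20) can be compared with the closed-form motion-blur factor $\mathcal{F}(\alpha)$ that appears in Eq. (21) below:

```python
import numpy as np

# Check: for t_n = t_m well past the transient, the diagonal I_nn of
# Eq. (20) should reduce to D * tau_ot * F(alpha), with F(alpha) as in Eq. (21).
D, tau_ot = 1.0, 1.0  # dimensionless units (our choice)

def I_diag(t, alpha):
    # Diagonal entry I_nn of Eq. (20) at time t_n = t.
    s = np.sinh(alpha) / alpha
    return (s**2 * D * tau_ot * (1.0 - np.exp(-2.0 * t / tau_ot))
            + D * tau_ot * (alpha - np.cosh(alpha) * np.sinh(alpha)) / alpha**2)

def F(alpha):
    # Motion-blur correction factor of Eq. (21).
    return (np.exp(-2.0 * alpha) + 2.0 * alpha - 1.0) / (2.0 * alpha**2)

for alpha in (0.05, 0.25, 1.0):
    print(alpha, I_diag(20.0, alpha), D * tau_ot * F(alpha))
```

For each $\alpha$, the two printed values agree to numerical precision, and $\mathcal{F}(\alpha)\to 1$ as $\alpha\to 0$, recovering the standard equipartition result.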

  • Potential and EP analysis. From Eq. (19), we can evaluate the expectation value $\left \langle \tilde {x}^2_{N_{\rm s}}\right \rangle$, which, in the stationary limit, equals the diagonal element $I_{N_{\rm s}N_{\rm s}}$ of the covariance matrix. This automatically implies that
    $$\left\langle \tilde{x}^2_{N_{\rm s}}\right\rangle=\frac{k_B T}{\kappa}\mathcal{F}(\alpha)\,\quad \text{with}\quad \mathcal{F}(\alpha)=\frac{e^{{-}2\alpha}+2\alpha-1}{2\alpha^2}\,.$$

    This implies, in turn, that the effective harmonic potential is given by the following formula:

    $$U(x)=\frac{1}{2}\frac{\kappa}{\mathcal{F}(\alpha)} x^2\,.$$

    When $\delta =0$, Eqs. (21) and (22) reduce to their standard forms (Eqs. (7) and (5)). Notice that the formulas of the standard methods do not depend on $\tau _{\rm ot}$, so $\kappa$ can be directly determined from them without any prior knowledge of the fluid properties. This must be contrasted with Eqs. (21) and (22), which depend on $\tau _{\rm ot}$ through $\alpha$, and hence on the drag coefficient of the particle in the fluid, $\gamma$. In this sense, to directly use the generalized equations to predict the stiffness of the trap from a single experiment, prior knowledge of $\gamma$ has to be incorporated into the solution. A different approach, as pointed out in Ref. [18], is to carry out at least two experiments with different exposure times in order to obtain a data set of the variance $\langle \tilde {x}^2_{N_{\rm s}}\rangle (\delta )=k_B T\mathcal {F}(\delta /2\tau _{\rm ot})/\kappa$, and from these determine $\kappa$ and $\tau _{\rm ot}$ either by solving the resulting equations or by fitting the data points to the analytical formula. A similar approach can be followed in the potential method, with the difference that the variance is in this case obtained by fitting the potential, Eq. (22). For simplicity, in the following sections, we apply the EP using $\gamma =3\pi \nu d_{\rm p}$, where we assume that the viscosity, $\nu$, is that of water at the laboratory temperature and $d_{\rm p}$ is the particle diameter reported by the manufacturer.
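Since $\alpha$ depends on $\kappa$ through $\tau_{\rm ot}=\gamma/\kappa$, Eq. (21) is an implicit equation for $\kappa$ once $\gamma$ is known. A minimal sketch of how it can be solved (our own illustrative code, not the authors'; the numerical values are only indicative of the experimental conditions reported later) is a fixed-point iteration of $\kappa \mapsto k_BT\,\mathcal{F}(\delta\kappa/2\gamma)/\langle\tilde{x}^2\rangle$:

```python
import numpy as np

def F(alpha):
    """Blur factor of Eq. (21)."""
    return (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)

# indicative values: pN*um units throughout
kB_T = 4.07e-3                       # k_B T at ~295 K, in pN*um
nu = 0.95e-3                         # water viscosity, in pN*s/um^2 (0.95 mPa*s)
d_p = 1.54                           # particle diameter, um
gamma = 3 * np.pi * nu * d_p         # Stokes drag, pN*s/um

kappa_true, delta = 4.08, 1.0e-3     # pN/um, s
tau_true = gamma / kappa_true
# "measured" blurred variance generated from the forward model, Eq. (21)
var_measured = kB_T / kappa_true * F(delta / (2 * tau_true))

kappa = kB_T / var_measured          # naive (uncorrected) equipartition value
for _ in range(100):                 # fixed-point iteration converges here
    kappa = kB_T * F(delta * kappa / (2 * gamma)) / var_measured

print(kappa)                         # recovers kappa_true
```

The naive equipartition value overestimates $\kappa$ (here by roughly $10\%$); the iteration contracts back onto the true stiffness because $|\mathrm{d}\mathcal{F}/\mathrm{d}\alpha|$ is small for moderate $\alpha$.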

  • MSD analysis. The mean squared displacement for a lag time $\tau _\ell =t_{n+\ell }-t_{n}$ is defined as $\text {MSD}(\tau _\ell )=\left \langle (\tilde {x}_{n+\ell }-\tilde {x}_n)^2\right \rangle$. Now, from the new joint PDF we have that
    $$\left\langle \tilde{x}_{n}^2\right\rangle=\frac{k_B T}{\kappa}\mathcal{F}(\alpha),\quad\left\langle \tilde{x}_n\tilde{x}_{n+\ell}\right\rangle=\frac{k_B T}{\kappa}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 e^{-|\tau_{\ell}|/\tau_{\text{ot}}},$$
    and gathering these results yields
    $$\text{MSD}(\tau_\ell)=\left\langle (\tilde{x}_{n+\ell}-\tilde{x}_n)^2\right\rangle=\frac{2 k_B T}{\kappa}\left\{\mathcal{F}(\alpha)-\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 e^{-|\tau_{\ell}|/\tau_{\text{ot}}}\right\}\,.$$

    Fitting this equation to the experimental estimators allows us to infer the three free parameters $a=2 k_B T \mathcal {F}(\alpha )/\kappa$, $b=2 k_B T (\sinh (\alpha )/\alpha )^2/\kappa$, and $c=\tau _{\rm ot}$. Interestingly, from these three parameters we can infer $\tau _{\rm ot}$ and $\kappa$ without knowing $\delta$ in advance; in fact, $\delta$ itself could be inferred as well. In our case, as we will see later, we treat $\delta$ as a known parameter in all the realizations discussed below.
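The three-parameter fit can be sketched as follows (our own illustrative code with assumed parameter values; we use SciPy's `curve_fit` as a stand-in for the non-linear least-squares routines mentioned later), here applied to noiseless MSD values generated from Eq. (24) itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_model(tau_lag, a, b, c):
    """MSD(tau_lag) = a - b*exp(-|tau_lag|/c), the form of Eq. (24)."""
    return a - b * np.exp(-np.abs(tau_lag) / c)

# assumed parameters: kB_T in pN*um, kappa in pN/um, times in s
kB_T, kappa_true, tau_true, delta = 4.07e-3, 4.08, 3.4e-3, 1.0e-3
alpha = delta / (2 * tau_true)
F = (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)
S2 = (np.sinh(alpha) / alpha) ** 2

lags = np.arange(1, 30) * 2e-3                 # lag times, s
msd = 2 * kB_T / kappa_true * (F - S2 * np.exp(-lags / tau_true))

(a, b, c), _ = curve_fit(msd_model, lags, msd, p0=[2e-3, 2e-3, 3e-3])
kappa_est = 2 * kB_T * S2 / b                  # from b, with delta known
print(kappa_est, c)                            # ~ kappa_true, ~ tau_true
```

With experimental estimators in place of the exact values, the same fit returns $a$, $b$, and $c$, from which $\kappa$ and $\tau_{\rm ot}$ follow.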

  • ACF analysis. Recall that the autocorrelation function is defined as $\left \langle \tilde {x}_{m}\tilde {x}_n\right \rangle =I_{mn}$, and therefore
    $$\text{ACF}(t_n-t_m)=\left\langle \tilde{x}_m\tilde{x}_n\right\rangle=\frac{k_B T}{\kappa}\left\{ \begin{array}{ll} \left[\frac{\sinh(\alpha)}{\alpha} \right]^2 e^{-|t_n-t_m|/\tau_{\text{ot}}} & t_n\neq t_m,\\ \mathcal{F}(\alpha) & t_n= t_m\,. \end{array}\right.$$

    Interestingly, since $\mathcal {F}(\alpha )\leq \left [\frac {\sinh (\alpha )}{\alpha } \right ]^2$, the autocorrelation function jumps from its equal-time value to the values at different times. Notice also that, for small $\alpha$, the leading correction to $\mathcal {F}(\alpha )$ is linear in $\alpha$, while it is quadratic for the factor $\left [\frac {\sinh (\alpha )}{\alpha } \right ]^2$. Comparing the generalised equation, Eq. (25), with the standard one, Eq. (10), the term corresponding to zero lag time ($t_n=t_m$) is the one that differs the most, while the behaviour at non-zero lag times receives a more moderate correction for $\delta \neq 0$. Overall, ignoring the zero-lag-time term makes the ACF more resilient to trajectories sampled with a finite integration time. This explains the robustness of the standard ACF as the integration time increases (Fig. 1(b-e)).

    Looking at Eq. (25), and similarly to what happens with the generalized MSD, one can split the inference problem into its zero and non-zero lag-time parts and obtain the values of $\tau _{\rm ot}$, $\kappa$, and $\alpha$. For instance, we could fit Eq. (25) to the non-zero lag-time values to obtain the free parameters $a=k_B T (\sinh (\alpha )/\alpha )^2/\kappa$ and $b=\tau _{\rm ot}$, and use the zero lag-time value to estimate $c=k_B T\mathcal {F}(\alpha )/\kappa$; from these three parameters we can then compute $\kappa$, $\tau _{\rm ot}$, and $\alpha$, or equivalently $\delta$. Importantly, this implies that we do not need to know $\delta$ in advance in order to infer $\kappa$ and $D$ accurately. In our case, as we will see later, we treat $\delta$ as a known parameter in all the realizations discussed below.
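The key observation is that the ratio $c/a=\mathcal{F}(\alpha)/(\sinh(\alpha)/\alpha)^2$ depends only on $\alpha$ and can be inverted numerically. The following sketch (our own illustrative code with assumed values, using exact ACF values in place of the experimental estimators) shows the inversion:

```python
import numpy as np
from scipy.optimize import brentq

def F(alpha):
    return (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)

def ratio(alpha):
    """F(alpha) / (sinh(alpha)/alpha)^2: zero-lag over non-zero-lag amplitude."""
    return F(alpha) / (np.sinh(alpha) / alpha) ** 2

# assumed parameters: kB_T in pN*um, kappa in pN/um, times in s
kB_T, kappa_true, tau_true, delta_true = 4.07e-3, 4.08, 3.4e-3, 1.0e-3
alpha_true = delta_true / (2 * tau_true)

# exact ACF values standing in for the fitted parameters a (non-zero lag) and
# c (zero lag); b = tau_ot would come from the exponential decay of the fit
a = kB_T * (np.sinh(alpha_true) / alpha_true) ** 2 / kappa_true
c = kB_T * F(alpha_true) / kappa_true

alpha_est = brentq(lambda al: ratio(al) - c / a, 1e-6, 10.0)
kappa_est = kB_T * F(alpha_est) / c
delta_est = 2 * alpha_est * tau_true    # tau_true plays the role of fitted b
print(kappa_est, delta_est)             # ~ kappa_true, ~ delta_true
```

In a real analysis, $a$, $b$, and $c$ would come from the fit of Eq. (25); the root-finding step then recovers $\alpha$, and hence $\delta$, without prior knowledge of the exposure time.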

  • PSD analysis. Given the dataset $\tilde {\mathcal {D}}$ of the blurry trajectory, the PSD is defined as follows:
    $$P(f)=\frac{1}{N_{\rm s}\Delta t}\left\langle \left|\Delta t \sum_{n=1}^{N_{\rm s}} \tilde{x}_{n} e^{ 2\pi i f t_n}\right|^2\right\rangle,$$
    which can be rewritten as:
    $$\begin{aligned}P(f)&=\frac{\Delta t}{N_{\rm s}}\sum_{n,m=1}^{N_{\rm s}}\left\langle \tilde{x}_n\tilde{x}_m\right\rangle\left[\cos(2\pi f t_n)\cos(2\pi f t_m)+\sin(2\pi f t_n)\sin(2\pi f t_m)\right] \\ &=\frac{\Delta t}{N_{\rm s}}\sum_{n,m=1}^{N_{\rm s}} I_{nm}\cos[2\pi f( t_n- t_m)], \end{aligned}$$
    where we have used the fact that $\left \langle \tilde {x}_n\tilde {x}_m\right \rangle =I_{nm}$ is the covariance matrix. Gathering the previous results, we get the generalised formula for the PSD:
    $$\begin{aligned}P(f)&=D\tau_{\text{ot}}\frac{\Delta t}{N_{\rm s}}\sum_{n,m=1}^{N_{\rm s}} \Bigg\{\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 \left[e^{-|t_n-t_m|/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right] \\&+ \delta_{nm}\frac{\alpha-\cosh(\alpha)\sinh(\alpha)}{\alpha^2}\Bigg\}\cos[2\pi f( t_n- t_m)]\,. \end{aligned}$$

    After some rather tedious algebra, one can show that the double sum reads

    $$\begin{aligned}&\sum_{n,m=1}^{N_{\rm s}} \left[e^{-|t_n-t_m|/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right]\cos[2\pi f( t_n- t_m)] \\&=N_{\rm s}\frac{\sinh(\Delta t/\tau_{\text{ot}})}{\cosh(\Delta t/\tau_{\text{ot}})-\cos(2\pi f\Delta t)}-e^{-(N_{\rm s}+1)\Delta t/\tau_{\text{ot}}}\frac{\cos(2\pi f N_{\rm s}\Delta t)-\cosh(N_{\rm s}\Delta t/\tau_{\text{ot}})}{\cos(2\pi f\Delta t)-\cosh(\Delta t/\tau_{\text{ot}})} \\&-\frac{e^{{-}N_{\rm s}\Delta t/\tau_{\text{ot}}}}{\left(\cos(2\pi f\Delta t)-\cosh(\Delta t/\tau_{\text{ot}})\right)^2}\Bigg\{\cos(2\pi f N_{\rm s}\Delta t) \\&+\sin(2\pi f\Delta t)\sin(2\pi f N_{\rm s}\Delta t)\sinh(\Delta t/\tau_{\text{ot}})-e^{N_{\rm s}\Delta t/\tau_{\text{ot}}} \\&+\cos(2\pi f\Delta t)\cosh(\Delta t/\tau_{\text{ot}})\left[e^{N_{\rm s}\Delta t/\tau_{\text{ot}}}-\cos(2\pi f N_{\rm s}\Delta t)\right]\Bigg\}\,. \end{aligned}$$

    This allows us to get the final expression for the generalised PSD formula:

    $$\begin{aligned} P(f)&=D\tau_{\text{ot}}\frac{\Delta t}{N_{\rm s}}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2\Bigg(N_{\rm s}\frac{\sinh(\Delta t/\tau_{\text{ot}})}{\cosh(\Delta t/\tau_{\text{ot}})-\cos(2\pi f\Delta t)} \\&-e^{-(N_{\rm s}+1)\Delta t/\tau_{\text{ot}}}\frac{\cos(2\pi f N_{\rm s}\Delta t)-\cosh(N_{\rm s}\Delta t/\tau_{\text{ot}})}{\cos(2\pi f\Delta t)-\cosh(\Delta t/\tau_{\text{ot}})} \\&-\frac{e^{{-}N_{\rm s}\Delta t/\tau_{\text{ot}}}}{\left(\cos(2\pi f\Delta t)-\cosh(\Delta t/\tau_{\text{ot}})\right)^2}\Bigg\{\cos(2\pi f N_{\rm s}\Delta t) \\&+\sin(2\pi f\Delta t)\sin(2\pi f N_{\rm s}\Delta t)\sinh(\Delta t/\tau_{\text{ot}})-e^{N_{\rm s}\Delta t/\tau_{\text{ot}}} \\&+\cos(2\pi f\Delta t)\cosh(\Delta t/\tau_{\text{ot}})\left[e^{N_{\rm s}\Delta t/\tau_{\text{ot}}}-\cos(2\pi f N_{\rm s}\Delta t)\right]\Bigg\}\Bigg) \\&+D\tau_{\text{ot}}\Delta t \frac{\alpha-\cosh(\alpha)\sinh(\alpha)}{\alpha^2}\,. \end{aligned}$$

    Notice that the leading term for large $N_{\rm s}$ is:

    $$\begin{aligned} P(f)&=D\tau_{\text{ot}}\Delta t\left(\left[\frac{\sinh(\alpha)}{\alpha} \right]^2\frac{\sinh(\Delta t/\tau_{\text{ot}})}{\cosh(\Delta t/\tau_{\text{ot}})-\cos(2\pi f\Delta t)}+\frac{\alpha-\cosh(\alpha)\sinh(\alpha)}{\alpha^2}\right) \\&+\mathcal{O}(N_{\rm s}^{{-}1})\,. \end{aligned}$$
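As a consistency check of this large-$N_{\rm s}$ expression (our own sketch, with illustrative parameter values, not the authors' code), one can verify numerically that the PSD integrates, over one frequency period $1/\Delta t$, to the blurred variance $D\tau_{\rm ot}\mathcal{F}(\alpha)=k_BT\mathcal{F}(\alpha)/\kappa$ of Eq. (21):

```python
import numpy as np

def psd_inf(f, D, tau, dt, delta):
    """Large-N_s limit of the generalized PSD (aliased Lorentzian + blur term)."""
    alpha = delta / (2.0 * tau)
    s2 = (np.sinh(alpha) / alpha) ** 2
    lorentz = np.sinh(dt / tau) / (np.cosh(dt / tau) - np.cos(2 * np.pi * f * dt))
    blur = (alpha - np.cosh(alpha) * np.sinh(alpha)) / alpha**2
    return D * tau * dt * (s2 * lorentz + blur)

# illustrative parameters: D in um^2/s, times in s
D, tau, dt, delta = 0.3, 3.4e-3, 2e-3, 1e-3
f = np.linspace(0.0, 1.0 / dt, 200001)[:-1]   # one full frequency period
var_from_psd = np.mean(psd_inf(f, D, tau, dt, delta)) / dt

alpha = delta / (2 * tau)
F = (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)
print(var_from_psd, D * tau * F)              # both equal the blurred variance
```

Note also that as $\alpha \to 0$ the blur term vanishes and the expression reduces to the standard aliased Lorentzian.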

  • Bayesian inference and maximum likelihood estimator (FORMA). For the model at hand, the likelihood $\mathcal {L}$ of observing the dataset $\tilde {\mathcal {D}}$ given the model’s parameters, denoted here collectively as $\theta$, is:
    $$\mathcal{L}(\tilde{\mathcal{D}}|\theta)=p(\{\tilde{x}_n\}_{n=1}^{N_{\rm s}}|\theta).$$

    Using Bayes’ rule, the posterior distribution of the model’s parameters given the dataset is:

    $$p(\theta|\{\tilde{x}_n\}_{n=1}^{N_{\rm s}})=\frac{\mathcal{L}(\tilde{\mathcal{D}}|\theta)p_0(\theta)}{\int d\theta \mathcal{L}(\tilde{\mathcal{D}}|\theta)p_0(\theta)}.$$

    Unfortunately, the model’s likelihood $\mathcal {L}(\tilde {\mathcal {D}}|\theta )$ is fairly complicated since the corrected process is non-Markovian, which implies that the inverse of the covariance matrix is not tridiagonal. Nevertheless, since $p(\{\tilde {x}_n\}_{n=1}^{N_{\rm s}}|\theta )$ is multivariate Gaussian, any marginal is multivariate Gaussian with the corresponding reduced covariance matrix. This allows us to easily write the probability $G(\tilde {x}_{n+1}|\tilde {x}_n)$ of observing the particle at the corrected position $\tilde {x}_{n+1}$ at time $t_{n+1}$, conditioned on it being at the corrected position $\tilde {x}_n$ at the previous time $t_n$. Let us denote $\Delta t=t_{n+1}-t_n$. Then, one can show that

    $$G(\tilde{x}_{n+1}|\tilde{x}_{n})\propto\exp\left[-\frac{1}{2\left(I_{n+1,n+1}-\frac{I^2_{n+1,n}}{I_{n,n}}\right)}\left(\tilde{x}_{n+1}- \frac{I_{n+1,n}}{I_{n,n}} \tilde{x}_n\right)^2\right],$$
    where
    $$I_{nn}=I _{n+1,n+1}= \frac{k_BT}{\kappa}\mathcal{F}(\alpha),\quad I_{n+1,n}= \frac{k_BT}{\kappa}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 e^{-\Delta t/\tau_{\text{ot}}},$$
    and, therefore,
    $$\begin{aligned} G(\tilde{x}_{n+1}|\tilde{x}_{n})&=\frac{1}{\sqrt{2 \pi\frac{k_BT}{\kappa}\mathcal{F}(\alpha)\left[1-\mathcal{G}^2(\alpha) e^{{-}2\Delta t/\tau_{\text{ot}}}\right]}}\times \\ &\exp\left[-\frac{\left(\tilde{x}_{n+1}- \mathcal{G}(\alpha) e^{-\Delta t/\tau_{\text{ot}}} \tilde{x}_n\right)^2}{2\frac{k_BT}{\kappa}\mathcal{F}(\alpha)\left[1-\mathcal{G}^2(\alpha) e^{{-}2\Delta t/\tau_{\text{ot}}}\right]}\right], \end{aligned}$$
    where we have introduced the function
    $$\mathcal{G}(\alpha)=\frac{\left[\frac{\sinh(\alpha)}{\alpha} \right]^2}{\mathcal{F}(\alpha)}\,.$$

    From here, we can approximate the model’s likelihood as

    $$\mathcal{L}(\tilde{\mathcal{D}}|\theta)=\frac{\exp\left[-\sum_{n=1}^{N_{\rm s}}\frac{\left(\tilde{x}_{n+1}- \mathcal{G}(\theta_2\theta_3) e^{-\theta_2\Delta t} \tilde{x}_n\right)^2}{2\theta_1\mathcal{F}(\theta_2\theta_3)\left[1-\mathcal{G}^2(\theta_2\theta_3) e^{{-}2\theta_2\Delta t}\right]}\right]}{\left(2 \pi\theta_1\mathcal{F}(\theta_2\theta_3)\left[1-\mathcal{G}^2(\theta_2\theta_3) e^{{-}2\theta_2\Delta t}\right]\right)^{N_{\rm s}/2}},$$
    where we have considered as parameters $\theta =(\theta _1,\theta _2,\theta _3)=(\frac {k_BT}{\kappa },1/\tau _{\text {ot}},\delta /2)$. Let us next introduce the estimators:
    $$\mathcal{T}_1=\frac{1}{N_{\rm s}}\sum_{n=1}^{N_{\rm s}}[\tilde{x}_{n+1}]^2,\quad \mathcal{T}_2=\frac{1}{N_{\rm s}}\sum_{n=1}^{N_{\rm s}}\tilde{x}_{n+1}\tilde{x}_{n} ,\quad \mathcal{T}_3=\frac{1}{N_{\rm s}}\sum_{n=1}^{N_{\rm s}}[\tilde{x}_{n}]^2\,.$$

    Then, maximizing with respect to $\theta _1$ and $\theta _2$ we obtain the MLEs

    $$\mathcal{G}(\theta^\star_2\theta_3) e^{-\theta^\star_2\Delta t}=\frac{\mathcal{T}_2}{\mathcal{T}_3},\quad\quad \theta_1^\star\mathcal{F}(\theta^\star_2\theta_3)=\frac{\mathcal{T}_1-\frac{\mathcal{T}_2^2}{\mathcal{T}_3}}{1-\left(\frac{\mathcal{T}_2}{\mathcal{T}_3}\right)^2}.$$

    If we want to plot the dataset $\mathcal {D}$ in a two-dimensional scatter plot where the $x$-axis represents $\tilde {x}_{n}$ and the $y$-axis represents $(\tilde {x}_{n+1}-\tilde {x}_n)/\Delta t$, then, looking at the argument of the exponential in Eq. (37), the cloud of experimental points will have an elliptical form with the major axis having a slope given by $-\left (1-\mathcal {T}_2/\mathcal {T}_3\right )/\Delta t$ and a width proportional to $\sqrt {\mathcal {T}_1-\mathcal {T}_2^2/\mathcal {T}_3}/\Delta t$.
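The inversion of the MLE equations, Eq. (38), can be sketched as follows (our own illustrative code, not the authors'; for clarity we feed in the exact expectation values of $\mathcal{T}_1$, $\mathcal{T}_2$, $\mathcal{T}_3$ instead of experimental sums, and all parameter values are assumed). Given $\delta$, the first equation is solved for $\theta_2=1/\tau_{\rm ot}$ by root finding, and the second then yields $\theta_1=k_BT/\kappa$:

```python
import numpy as np
from scipy.optimize import brentq

def F(a):
    return (np.exp(-2 * a) + 2 * a - 1) / (2 * a**2)

def G(a):
    """G(alpha) = (sinh(alpha)/alpha)^2 / F(alpha), Eq. (36)."""
    return (np.sinh(a) / a) ** 2 / F(a)

# assumed parameters: pN*um units, times in s
kB_T, kappa_true, tau_true = 4.07e-3, 4.08, 3.4e-3
dt, delta = 2e-3, 1e-3
alpha = delta / (2 * tau_true)

# exact expectation values standing in for the estimator sums T1, T2, T3
T1 = T3 = kB_T * F(alpha) / kappa_true
T2 = kB_T * (np.sinh(alpha) / alpha) ** 2 * np.exp(-dt / tau_true) / kappa_true

# first MLE equation: G(theta2*delta/2) * exp(-theta2*dt) = T2/T3
theta2 = brentq(lambda th: G(th * delta / 2) * np.exp(-th * dt) - T2 / T3,
                1.0, 1e4)
tau_est = 1.0 / theta2
# second MLE equation gives theta1 * F(alpha_est)
theta1 = (T1 - T2**2 / T3) / (1 - (T2 / T3) ** 2) / F(theta2 * delta / 2)
kappa_est = kB_T / theta1
print(kappa_est, tau_est)               # ~ kappa_true, ~ tau_true
```

With experimental data, the only change is computing $\mathcal{T}_1$, $\mathcal{T}_2$, $\mathcal{T}_3$ from the recorded trajectory; no matrix inversion is ever needed.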

In the following section, we show the performance of these generalized solutions and compare them to the standard ones on experimental measurements.

4. Experimental estimation of the stiffness and diffusion

To demonstrate the performance of the generalized formulas presented in the previous section, we performed a series of experiments with an OT. The details of the experimental setup are described in Appendix B. Briefly, a silica particle of diameter $d_{\rm p}=1.54\pm 0.10\,{\mu}\rm {m}$ was trapped in water with an OT obtained by focusing a laser beam of $532\,\rm {nm}$ wavelength. The laboratory temperature was $T=22\,\pm 0.5^\circ \rm {C}$. The trapped particle was recorded at three different sampling frequencies and four integration times (exposure times of the camera) for each frequency: $f_{\rm s}=500\,\rm {Hz}$ with $\delta =59$, $500$, $1000$, and $2000\,\mu \textrm {s}$; $f_{\rm s}=1499.25\,\rm {Hz}$ ($1500\,\rm {Hz}$ nominal value) with $\delta =59$, $200$, $350$, and $500\,\mu \textrm {s}$; and $f_{\rm s}=3496.5\,\rm {Hz}$ ($3500\,\rm {Hz}$ nominal value) with $\delta =59$, $100$, $150$, and $200\,\mu \textrm {s}$. In all these experiments, we acquired trajectories with $N_{\rm s}=10^5$ samples and, in all the cases where fitting is required (e.g., MSD, ACF, and PSD), we used the standard non-linear least-squares fitting routines provided by MATLAB.

The estimated values of $\kappa$ and $D$ are shown in Figs. 2, 3, and 4 for the three sampling frequencies $f_{\rm s}=500$, $1500$ and $3500\,\rm {Hz}$. For each of these experiments, the values of $\kappa$ and $D$ were estimated as a function of $\delta$ following the standard and generalized formulas. In Figs. 2, 3, and 4, the dots represent the experimental results. To compare our results with theory, we performed Monte Carlo simulations of the OU process (Eq. (1)) taking the mean of the experimental values of $\kappa$ and $D$ retrieved by means of the generalized analyses, giving $\bar {\kappa }_\alpha =4.08\pm 0.05\,\rm {pN/}{\mu}\rm {m}$ and $\bar {D}_\alpha = 0.299\pm 0.008\,{\mu}\rm {m}^2/\rm{s}$. Performing 50 realizations with $N_{\rm s}=10^5$ samples for each case, the Monte Carlo simulations allow us to estimate the confidence intervals, obtained from the standard deviation of all the replicas as a function of $\delta$, which are depicted as colored shaded areas in Figs. 2, 3, and 4. The gray shaded areas of the diffusion constant depict the confidence interval of the expected diffusion computed from the size of the particle and the viscosity of water at the laboratory temperature ($\nu =0.95\pm 0.01\,\rm {mPa\,s}$, with its error estimated from the error range of $T$), $D^*=0.295\pm 0.020\,{\mu}\rm {m}^2/\rm{s}$, which is useful as a reference parameter to evaluate our estimations. Additionally, using $\bar {\kappa }_\alpha$ and $\bar {D}_\alpha$, together with the Stokes-Einstein relation $D=k_B T/\gamma$, we estimate the characteristic time of our OT to be $\tau _{\rm {ot}}=\gamma /\kappa =3.4\,\rm {ms}$, which corresponds to the characteristic frequency $f_{\rm {ot}}= 1/\tau _{\rm {ot}}=297\,\rm {Hz}$.
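A Monte Carlo scheme of the kind described above can be sketched as follows (our own minimal version, not the code used for the paper; parameters are the mean experimental values quoted above). The OU process is propagated exactly on a fine grid, and each recorded sample is the average of the positions inside the exposure window, mimicking the camera's finite integration time; for simplicity we take $\delta=1/f_{\rm s}$ (exposure equal to the frame period, as in the $500\,\rm{Hz}$, $2000\,\mu\rm{s}$ setting):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
kappa, D, kB_T = 4.08, 0.299, 4.07e-3        # pN/um, um^2/s, pN*um
tau = kB_T / (kappa * D)                     # = gamma/kappa ~ 3.3 ms
delta, sub, n_samples = 2e-3, 50, 100_000    # exposure, substeps, frames
h = delta / sub                              # fine time step
a = np.exp(-h / tau)
sigma = np.sqrt(D * tau * (1 - a**2))        # exact OU transition std

# exact OU recursion x_i = a*x_{i-1} + noise_i, vectorized via an IIR filter
noise = sigma * rng.standard_normal(n_samples * sub)
x_fine = lfilter([1.0], [1.0, -a], noise)
# camera blur: average the sub positions inside each exposure window
blurred = x_fine.reshape(n_samples, sub).mean(axis=1)

# the blurred variance is reduced by F(alpha) relative to kB_T/kappa
alpha = delta / (2 * tau)
F = (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)
print(blurred.var(), kB_T / kappa * F)
```

For $\delta<1/f_{\rm s}$, one simply advances the fine-grid process through the idle interval between exposures before averaging the next window.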


Fig. 2. Influence of $\delta$ on the estimation of the stiffness and diffusion: low sampling frequency. (a,c) standard and (b,d) generalized formulas at sampling frequency $f_{\rm s}=500\,\rm {Hz}$. The data points show estimates from experimental realizations and the colored shaded areas depict confidence intervals obtained from simulations using $\bar {\kappa }_\alpha = 4.08\,\rm {pN/}{\mu}\rm {m}$ and $\bar {D}_\alpha = 0.299\,{\mu}\rm {m}^2/\rm{s}$. As a reference, the gray shaded areas in (c,d) depict the range of the expected value $D^*=0.295\pm 0.020\,{\mu}\rm {m}^2/\rm{s}$. All the experimental data points were estimated using $N_{\rm s}=10^5$ samples.



Fig. 3. Influence of $\delta$ on the estimation of the stiffness and diffusion: medium sampling frequency. (a,c) standard and (b,d) generalized formulas at sampling frequency $f_{\rm s}=1500\,\rm {Hz}$. The data points show estimates from experimental realizations and the colored shaded areas depict confidence intervals obtained from simulations using $\bar {\kappa }_\alpha = 4.08\,\rm {pN/}{\mu}\rm {m}$ and $\bar {D}_\alpha = 0.299\,{\mu}\rm {m}^2/\rm{s}$. As a reference, the gray shaded areas in (c,d) depict the range of the expected value $D^*=0.295\pm 0.020\,{\mu}\rm {m}^2/\rm{s}$. All the experimental data points were estimated using $N_{\rm s}=10^5$ samples.



Fig. 4. Influence of $\delta$ on the estimation of the stiffness and diffusion: high sampling frequency. (a,c) standard and (b,d) generalized formulas at sampling frequency $f_{\rm s}=3500\,\rm {Hz}$. The data points show estimates from experimental realizations and the colored shaded areas depict confidence intervals obtained from simulations using $\bar {\kappa }_\alpha = 4.08\,\rm {pN/}{\mu}\rm {m}$ and $\bar {D}_\alpha = 0.299\,{\mu}\rm {m}^2/\rm{s}$. As a reference, the gray shaded areas in (c-d) depict the range of the expected value $D^*=0.295\pm 0.020\,{\mu}\rm {m}^2/\rm{s}$. All the experimental data points were estimated using $N_{\rm s}=10^5$ samples.


As already illustrated in Fig. 1, the standard formulas fail to give a good description of the main quantities as $\delta$ increases. This gives rise to wrong estimations of $\kappa$ and $D$ in all the analyses, as can clearly be seen in Figs. 2(a) and 2(c). As $\delta$ increases, the estimated values of $\kappa$ and $D$ move linearly away from the expected values: $\kappa$ is more and more overestimated, while $D$ is increasingly underestimated. An exception is the ACF method, which shows the opposite behavior and gives the best results under these conditions. These values were obtained from the fitting procedure ignoring the first data point of the ACF ($\rm {ACF}(\tau _{\ell =0})$), which, as we saw in the previous section, carries most of the bias arising from the integration time. Moreover, in the MSD and ACF, only the first data points corresponding at most to $6\,\tau _{\rm ot}$ ($\tau _\ell \leq 6\,\tau _{\rm ot}$) were taken into consideration for the fitting.

We can see from the estimated values that the bias due to the integration time becomes less important as the sampling frequency increases, as shown in Figs. 3 and 4: in the extreme case with $f_{\rm s}=3500\,\rm {Hz}$ and $\delta =150\,\mu \rm {s}$, the sampling frequency is at least one order of magnitude larger than the characteristic frequency of the trap and the integration time is one order of magnitude shorter than its characteristic time ($f_{\rm s} \gg f_{\rm {ot}}$ and $\delta \ll \tau _{\rm {ot}}$). The bias of $\kappa$ and $D$ with respect to $\delta$ and the sampling frequency disappears for all the methods when the generalized formulas are used (see Figs. 2(b,d), 3(b,d), and 4(b,d)). Interestingly, all the diffusion constants estimated by the generalized analyses fall within the confidence interval of $D^*$.

All of the experimental data points shown in Figs. 2, 3, and 4 were estimated with the same number of samples ($N_{\rm s}=10^5$). This means that one experiment at $500\,\rm {Hz}$ lasted $3.3$ minutes, while those with $f_{\rm s}=3500\,\rm {Hz}$ lasted only $28.5\,\textrm {s}$. To analyze the dependence of these results on the number of samples and on the total acquisition time, in the following section we show how the estimates for some of the data points in Figs. 2, 3, and 4 converge as a function of time and number of samples.

5. Influence of the trajectory length

While it is intuitive to assume that both the total time $T_{\rm s}$ and the number of data points $N_{\rm s}$ have to be large in order to obtain accurate and precise values, increasing these two parameters does not necessarily lead to better estimates, as we show in Fig. 5. At first glance, as Figs. 5(a-c) show, the convergence of the stiffness with respect to the total sampled time $T_{\rm s}$ does not differ significantly for the three considered sampling frequencies. After a long enough sampled time of about $T_{\rm s}=10\,\rm {s}$, all the methods yield good results within an error of $10{\%}$ (dashed black lines). Since these results are independent of the sampling frequency, for a given $T_{\rm s}$ the number of data points does not importantly affect the result: Fig. 5(a) has only $N_{\rm s}=14300$ samples, while Fig. 5(c) has $N_{\rm s}=10^5$ samples (see the red vertical lines marking the time that corresponds to $N_{\rm s}=5000$ samples in the three cases). This result has important consequences if our main goal is to determine the stiffness, since an accurate and precise estimation only requires an acquisition time much larger than the characteristic relaxation time of the trap, and a faster camera or QPD will not necessarily improve the experimental accuracy. Moreover, this result can play a very important role when the experiment requires a long integration time, for instance when the light used for detection is very dim, whether for practical reasons or because the sample of interest could be photodamaged or altered by high-intensity illumination.

 figure: Fig. 5.

Fig. 5. Performance of the generalized methods to estimate the stiffness and diffusion. (a-c) The stiffness is estimated as a function of the total sampled time $T_{\rm s}$, while the diffusion (d-f) is estimated as a function of the number of samples $N_{\rm s}$ in the trajectory (at three sampling frequencies $f_{\rm s}$ and three long integration times $\delta$). Experimental results are shown with solid dots while colored shaded areas show confidence intervals estimated by Monte Carlo simulations considering $\bar {\kappa }_\alpha =4.08\,\rm {pN/}{\mu}\rm {m}$ and $\bar {D}_\alpha =0.299\,{\mu}\rm {m}^2/\rm{s}$. Dashed lines in (a-c) depict the $10{\%}$ error range of $\bar {\kappa }$ and gray shaded areas in (d-f) depict the range of the expected value $D^*=0.295\pm 0.02\,{\mu}\rm {m}^2/\rm{s}$. The vertical red lines in (a-c) correspond to $N_{\rm s}=5000$ samples and the vertical red lines in (d-f) correspond to $T_{\rm s}=10\,\rm {s}$ acquisition time.

Download Full Size | PDF

On the contrary, a good estimation of the diffusion constant requires a large number of samples, independently of the total acquisition time. Figures 5(d-f) show the behavior of the diffusion estimate as a function of the number of samples $N_{\rm s}$ for all the methods, with no dramatic dependence on the sampling frequency. In this case, we fixed the maximum number of samples to $N_{\rm s}=10^5$, corresponding to $T_{\rm s}=28.6\,\rm {s}$ for the highest sampling frequency (Fig. 5(f)) and to $T_{\rm s}=3.3\,\rm {min}$ for the lowest one (Fig. 5(d)). Thanks to this property, the estimation of the diffusion constant is not limited by the total acquisition time, so it can be retrieved much faster than the stiffness by increasing the speed of the camera or detector. This is important in practice if the main goal is to estimate the diffusion, the drag coefficient, or the viscosity in conditions where these parameters are not stationary. For instance, using the highest sampling frequency in our experiments, $f_{\rm s}=3500\,\rm {Hz}$ (Fig. 5(f)), obtaining a good estimation (within the gray shaded area in Figs. 5(d-f)) requires only a few seconds, or even a fraction of a second using FORMA.

Comparing the performance of the generalized methods, all of them behave very similarly when estimating the stiffness. Looking at the diffusion as a function of the number of samples $N_{\rm s}$ (Figs. 5(d-f)), however, the rates at which the various methods converge differ considerably. While the MSD, ACF, and PSD methods have an error of about $10{\%}$ at $N_{\rm s}=2\times 10^4$ samples, FORMA achieves a much lower error, below $1{\%}$. The PSD seems to be more sensitive to the sampling frequency, showing larger errors at the highest sampling frequencies. This probably arises from a deficient representation of the PSD at low frequencies, which biases the fitting, meaning that a longer total time could benefit this estimation. It is important to stress that these estimations were obtained with standard non-linear fitting; weighted non-linear fitting would improve them, bringing them closer to those obtained with FORMA, at the expense of more computing time and more sophisticated algorithms. Moreover, using the complete expression for the PSD, Eq. (30), may also contribute to improving these results.

6. Discussion

Integration time, sampling frequency and data length of the trajectory of the particle in an OT affect the estimation of stiffness and diffusion constant in all the standard methods normally employed to extract these observables. All the standard methods enable accurate results when (i) the sampling frequency $f_{\rm {s}}$ is much higher than the characteristic frequency of the trap $f_{\rm {ot}}$, (ii) the integration time $\delta$ is much shorter than the characteristic relaxation time $\tau _{\rm {ot}}$, and (iii) the total acquisition time is several orders of magnitude larger than the characteristic relaxation time of the optical trap.

When these ideal conditions start to weaken, the estimators for $\kappa$ and $D$ based on the standard methods diverge from the real values. First, consider an ideal experimental situation where the integration time is negligible and only the sampling frequency is reduced to a value close to, or even lower than, the characteristic frequency of the trap: most of the methods give reasonable predictions as long as the sampling frequency remains comparable to the characteristic frequency. Under this condition, FORMA shows less reliable results, which is understandable since this method is based on the assumption of high sampling frequencies [16]. If the sampling frequency gets much lower than the characteristic frequency, then an important loss of information at short times in the MSD and ACF can bias the fitting of their respective analytical functions, particularly affecting the prediction of $D$. The predictions by means of the PSD may also be affected if the Nyquist frequency (half the sampling frequency) does not reach values around the corner frequency, which is needed to properly define the PSD, obtain a good fit, and hence a good estimation of $\kappa$ and $D$. The potential and equipartition methods are probably the least affected under these circumstances, as long as the sampled trajectory is long enough to minimize the statistical error in the experimental histogram or the variance.

Second, moving a bit farther from the ideal conditions, as the integration time increases, the deviations of the estimates of $\kappa$ and $D$ obtained using the standard methods increase as well. For the EP, MSD, and PSD the trend is similar: the stiffness gets overestimated and the diffusion underestimated as the integration time increases, as can be seen in Figs. 2, 3, and 4. As shown in Fig. 1, the main deviation of the ACF from the theoretical value occurs at $\rm {ACF}(\tau _{\ell =0})$. Ignoring this value allows one to obtain very good estimations of the stiffness and diffusion, even when the data set is far from the ideal conditions, as can be seen in Figs. 2, 3, and 4. On the opposite end there is FORMA, which is a reliable, simple, and fast method when the sampling frequency is much higher than the characteristic frequency of the OT [16]. In this respect, low sampling frequencies and non-zero integration times drastically degrade the estimations obtained by means of this method in its standard form.

Incorporating both the integration time and the sampling frequency into the estimators of all the common methods improves the estimates drastically, making them more accurate and precise. For a sampled trajectory with a large number of data points taken over a long time, the estimations for all these generalized methods are comparable, showing a high degree of accuracy and precision, even if the integration time is large or the sampling frequency low, as can be seen in Figs. 2, 3, and 4 for the generalized formulas. At this level, the main differences between the methods come from the way in which the experimental estimators are used or compared with model functions in order to extract $\kappa$ and $D$. MSD, ACF, and PSD require an intermediate routine to fit their experimental estimators, commonly well-established non-linear fitting routines, or intermediate procedures such as data selection or windowing in the case of the PSD, requiring in many cases preliminary knowledge of the system to optimize the solutions and computing time. In this respect, EP and FORMA contrast with the other methods in that no fitting is involved in their solutions and it is not necessary to incorporate any extra information or intermediate routine, making them very fast and reliable. This can clearly be seen in Figs. 2, 3, and 4, where EP and FORMA in their generalized forms retrieve, overall, the most accurate estimations of $\kappa$ and $D$.

In general, the estimations are expected to improve as the amount of information contained in the trajectories increases. This information is given by the number of data points sampled and the total acquisition time. One might be tempted to increase them by recording for longer times and/or by increasing the number of data points in the trajectory via the sampling frequency. However, one must take into account the specific requirements imposed by the experiment; for instance, there are many situations where the sampling frequency can in principle be increased to very high values (e.g., using a fast camera system or a quadrant photodiode), but at the cost of degrading the signal-to-noise ratio and hence the estimations of $\kappa$ and $D$.

In practice, the total acquisition time cannot be extended indefinitely because the experimental conditions may be non-stationary or because this would considerably increase the computational resources needed to record, store, and analyze the data. In this respect, there are better approaches depending on whether one needs an accurate value of the stiffness $\kappa$ or of the diffusion constant $D$. For instance, determining the stiffness with high accuracy only requires an acquisition time much larger than the characteristic time of the trap, while a faster camera or detector will not necessarily improve the estimation. On the other hand, contrary to the stiffness, the diffusion constant can be retrieved much faster by increasing the speed of the camera or detector (Fig. 5). Additionally, we notice important discrepancies between the methods when insufficient information is contained in the trajectories. In particular, in Fig. 5 the stiffness behaves very similarly for all the methods, but for the diffusion constant, FORMA in its generalized form shows much better accuracy and precision than the other methods.

The analysis shown in this work is directly applicable to data acquired by means of a standard digital camera, where the integration time is typically well defined. When dealing with quadrant photodiodes, the integration time may not be accessible, but a slightly adapted version of the analysis presented here may still be applicable. For example, the generalized MSD and ACF allow one to estimate $\delta$ in addition to $D$ and $\kappa$. Similar considerations apply to cases where an effective integration time is indirectly implemented in the data processing by means of low-pass filtering.

7. Conclusions

In this work, we have derived generalized formulas for the most common OT calibration methods by incorporating the integration time and sampling frequency. The effects of these factors on the estimates of the stiffness and diffusion were analyzed theoretically, numerically, and experimentally.

We have shown that the integration time has important consequences for all the standard methods, leading to errors of up to $50{\%}$ in some extreme cases. The ACF seems to be an exception, behaving better than the other methods if the zero lag time ($\text {ACF}(\tau _{\ell =0})$) is ignored in the fitting procedure. On the other hand, the generalized formulas give accurate and precise results independently of the technique, with errors of less than $10{\%}$ for a few seconds, or about a thousand data points sampled, under our experimental conditions. The results can be drastically improved by incorporating more data or by sampling for a longer time, independently of the integration time or the sampling frequency. In particular, we show that the generalized formulas give very good estimates of the stiffness if the sampled time is long enough, more than an order of magnitude longer than the characteristic relaxation time, regardless of the sampling frequency. In stark contrast, the diffusion constant can be estimated very quickly and accurately with all the generalized methods, even when the sampled time is very short or the data points are very few. More interestingly, the generalized expression for FORMA makes it possible to retrieve very good estimates of the diffusion constant, within an error of less than $10{\%}$, with a few thousand, or even hundreds, of data points, making it feasible to estimate the diffusion extremely fast and with high accuracy, limited mainly by the speed of the detector. In our case, we showed that a sampling frequency of $3500\,\rm {Hz}$ allows one to obtain the diffusion constant in less than a second and with a relative error well below $10{\%}$ for a stiffness of $4.08\,\rm {pN/}{\mu}\rm {m}$.

This work paves the way to extending the applicability of the common calibration methods to contexts where the experimental conditions or the limitations of the setup do not allow measurements in ideal conditions, i.e., at short integration times, high sampling frequencies, and with large amounts of data, in order to obtain fast and reliable information about the stiffness and diffusion in OT.

Appendix A Brownian dynamics simulations

The dynamics of the particle in an optical trap is described by the Langevin equation defined in Eq. (1), which can be integrated numerically in terms of the Wiener process as follows [1,33]:

$$x_{k+1}=x_k-\frac{\kappa}{\gamma}x_k\Delta t_{\rm s}+\sqrt{2D\Delta t_{\rm s}}\,w_k,$$
where $w_k$ is a normally distributed random variable with zero mean and unit variance. The trajectory $\{x_{k}\}^{M}_{k=1}$ is generated with a fixed time interval between successive points of $\Delta t_{\rm s}=1\,{\mu}\rm {s}$. To emulate the effect of a detection system with a given sampling frequency $f_{\rm s}$, it suffices to select sub-trajectories $\{x_n\}_{n=1}^{N_{\rm s}}$ whose consecutive points are separated by a time difference $\Delta t=1/f_{\rm s}$ larger than $\Delta t_{\rm s}$. For the effect of a finite integration time $\delta > 0$, one rounds the ratio $\delta / \Delta t_{\rm s}$ to the nearest integer and averages over the corresponding points of the trajectory $\{x_k\}$, obtaining the sampled trajectory that corresponds to the given sampling frequency and finite integration time. In our simulations, we used $\kappa = 4.08\,\rm {pN/}{\mu}\rm {m}$ and $D= 0.299\,{\mu}\rm {m}^2/\rm{s}$.
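
The sub-sampling and window-averaging procedure just described can be sketched as follows (a minimal illustration; the function name and the centred-window convention are our own):

```python
import numpy as np

def blur_and_subsample(x, dt_sim, f_s, delta):
    """Emulate a detector with sampling frequency f_s and integration time
    delta acting on a finely simulated trajectory x with time step dt_sim.
    Each output point is the average of the simulated points falling in a
    window of width delta centred on the sampling instant."""
    step = round(1.0 / (f_s * dt_sim))      # simulated steps per detector sample
    half = round(delta / dt_sim) // 2       # half-window width, in steps
    centres = np.arange(half, len(x) - half, step)
    if half == 0:
        return x[centres]                   # instantaneous (delta = 0) sampling
    return np.array([x[c - half:c + half + 1].mean() for c in centres])
```

For a symmetric window, the average of a linear ramp equals its centre value, which provides a convenient sanity check of the implementation.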

Appendix B Experimental methods

The OT was generated by means of a Gaussian laser beam (Coherent Verdi V6, $\lambda =532\,\rm {nm}$): this laser beam was expanded, phase corrected by a phase-only spatial light modulator (Hamamatsu X10468-04), relayed by a 1:1 telescope, and tightly focused by a water immersion objective (Olympus UPLANSApo $60\times$ with $\rm {NA}=1.2$), within an inverted microscope. The sample consists of spherical particles (Bangs Laboratories Inc., non-functionalized silica particles of diameter $d_{\rm p}=1.54\pm 0.10\,{\mu}\rm {m}$) suspended in an aqueous solution at laboratory temperature ($T=22\,\pm 0.5^\circ \rm {C}$) and placed inside a sealed sample cell made with two coverslips and a spacer of about $100\,\mu \rm {m}$ thickness. The beam was focused deep into the sample cell, ensuring that the 3D trapped particle was far from both the bottom and the upper coverslips (at least $10\,{\mu}\rm {m}$ from the bottom one), preventing in this way any significant hydrodynamic interactions with the walls of the sample cell. To perform long-term experiments with this microscope objective, which works with a thin layer of water between the lens and the bottom coverslip, we used a water dispenser based on capillary action to compensate for the water that constantly evaporates (see Ref. [34] for more details). The dynamics of the particle was recorded with a CMOS camera (Basler Ace acA640-750um) at different sampling frequencies and integration times (see main text), and its position was obtained using standard video microscopy techniques [10,35], with a position-detection accuracy better than $5\,\rm {nm}$.

Appendix C Derivation of the joint PDF of the corrected particle’s trajectory

Starting from the definition of the time-average position given in Eq. (18) and using the formal solution given by Eq. (2), we need to evaluate two terms. The first term reads:

$$\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt x_0 e^{- t/\tau_{\text{ot}}}= x_0 \frac{e^{- t_n/\tau_{\text{ot}}}}{2\alpha} \left(e^{\alpha}-e^{-\alpha}\right)=\tilde{x}_0 e^{- t_n/\tau_{\text{ot}}}.$$

For the second term, the idea is to change the order of integration to express it as an integral over the whole trajectory of a function coupled with the noise. Thus, we write

$$ \begin{aligned} \frac{\sqrt{2D}}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt\int_0^{t} ds W_x(s) e^{-(t-s)/\tau_{\text{ot}}} &= \frac{\sqrt{2D}}{\delta}\Bigg[\int^{t_n-\delta/2}_0 ds W_x(s)\int_{t_n-\delta/2}^{t_n+\delta/2} dte^{-(t-s)/\tau_{\text{ot}}} \\ & +\int^{t_n+\delta/2}_{t_n-\delta/2} ds W_x(s) \int_{s}^{t_n+\delta/2} dte^{-(t-s)/\tau_{\text{ot}}} \Bigg]\\ &=\sqrt{2D}\int_0^T ds W_x(s) g_n(s), \end{aligned}$$
where we have defined
$$\begin{aligned} g_n(s)&\equiv \Theta(t_{n}-\delta/2 -s)\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dte^{-(t-s)/\tau_{\text{ot}}}\\ &+\Theta(s-t_{n}+\delta/2)\Theta(t_{n}+\delta/2- s) \frac{1}{\delta}\int_{s}^{t_n+\delta/2} dte^{-(t-s)/\tau_{\text{ot}}}, \end{aligned}$$
where $\Theta (x)$ is the Heaviside step function. The two integrals appearing in the above definition of $g_n(s)$ can be easily done, viz.
$$\begin{aligned}\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dte^{-(t-s)/\tau_{\text{ot}}}=\frac{1}{2\alpha}\left[e^{-(t_n-\delta/2-s)/\tau_{\text{ot}}}-e^{-(t_n+\delta/2-s)/\tau_{\text{ot}}}\right] \end{aligned}$$
$$\begin{aligned}=\frac{\sinh(\alpha)}{\alpha}e^{-(t_n-s)/\tau_{\text{ot}}}, \end{aligned}$$
$$\begin{aligned}\frac{1}{\delta}\int_{s}^{t_n+\delta/2} dte^{-(t-s)/\tau_{\text{ot}}}=\frac{1}{2\alpha}\left[1-e^{-(t_n+\delta/2-s)/\tau_{\text{ot}}}\right]\,. \end{aligned}$$
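
These two closed forms are easy to check numerically; the snippet below compares them against a direct trapezoidal quadrature (parameter values are illustrative, with $\alpha =\delta /2\tau _{\text {ot}}$):

```python
import numpy as np

tau, delta, t_n = 0.01, 0.004, 0.05     # illustrative values, in seconds
alpha = delta / (2 * tau)               # alpha = delta / (2 tau) = 0.2

def mean_exp(lower, upper, s):
    """(1/delta) * integral of exp(-(t - s)/tau) dt from lower to upper,
    evaluated by trapezoidal quadrature on a fine grid."""
    t = np.linspace(lower, upper, 20001)
    f = np.exp(-(t - s) / tau)
    return ((f[:-1] + f[1:]) / 2 * np.diff(t)).sum() / delta

# s before the window: full-window integral -> (sinh(alpha)/alpha) e^{-(t_n - s)/tau}
s1 = 0.02
lhs1 = mean_exp(t_n - delta / 2, t_n + delta / 2, s1)
rhs1 = np.sinh(alpha) / alpha * np.exp(-(t_n - s1) / tau)

# s inside the window: partial integral -> (1 - e^{-(t_n + delta/2 - s)/tau}) / (2 alpha)
s2 = t_n
lhs2 = mean_exp(s2, t_n + delta / 2, s2)
rhs2 = (1 - np.exp(-(t_n + delta / 2 - s2) / tau)) / (2 * alpha)
```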

We finally obtain the following expression for the stochastic dynamics of the corrected trajectory:

$$\tilde{x}(t_n)=\tilde{x}_0 e^{- t_n/\tau_{\text{ot}}}+\sqrt{2D}\int_0^T ds W_x(s)g_n(s)$$
with
$$\begin{aligned} g_n(s)&\equiv \Theta(t_{n}-\delta/2 -s)\frac{\sinh(\alpha)}{\alpha}e^{-(t_n-s)/\tau_{\text{ot}}}\\ &+\Theta(s-t_{n}+\delta/2)\Theta(t_{n}+\delta/2- s) \frac{1}{2\alpha}\left[1-e^{-(t_n+\delta/2-s)/\tau_{\text{ot}}}\right]. \end{aligned}$$

Now, we are in a position to derive the joint PDF, denoted here $p(\{\tilde {x}_n\}_{n=1}^{N_{\rm s}})$, of observing the corrected particle’s trajectory at the blurred positions $\tilde {x}_1,\ldots, \tilde {x}_{N_{\rm s}}$ and times $t_1,\ldots, t_{N_{\rm s}}$. Mathematically, this is given by:

$$p(\{\tilde{x}_n\}_{n=1}^{N_{\rm s}})=\left\langle \prod_{n=1}^{N_{\rm s}}\delta \left(\tilde{x}_n-\tilde{x}(t_n)\right)\right\rangle_{W_x},$$
where $\left \langle (\cdots )\right \rangle _{W_x}$ denotes the average over all possible realizations of the noise’s trajectory $W_x(t)$, whose probability measure is given by the following path integral:
$$\mathcal{P}[W_x(t)]=\frac{1}{Z}\exp\left[-\frac{1}{2}\int_0^T ds W_x^2(s)\right],$$
where $Z$ is a normalization constant. Let us now derive an exact expression for the joint PDF. To do this, we write
$$\begin{aligned} \left\langle \prod_{n=1}^{N_{\rm s}}\delta \left(\tilde{x}_n-\tilde{x}(t_n)\right)\right\rangle_{W_x}&=\left\langle \int\left[\prod_{n=1}^{N_{\rm s}}\frac{dy_n}{2\pi}\right]\exp\left[i\sum_{n=1}^{N_{\rm s}} y_n \left(\tilde{x}_n-\tilde{x}(t_n)\right)\right]\right\rangle_{W_x}\\ &=\int\left[\prod_{n=1}^{N_{\rm s}}\frac{dy_n}{2\pi}\right]\exp\left(i\sum_{n=1}^{N_{\rm s}} y_n \tilde{x}_n\right)\left\langle \exp\left[{-}i\sum_{n=1}^{N_{\rm s}} y_n \tilde{x}(t_n)\right]\right\rangle_{W_x}\,. \end{aligned}$$

The average over all possible realizations of the noise $W_x(t)$ has the following form:

$$\begin{aligned}\left\langle \exp\left[{-}i\sum_{n=1}^{N_{\rm s}} y_n \tilde{x}(t_n)\right]\right\rangle_{W_x}&=\frac{1}{Z}\int \mathcal{D}W_x(t) \exp\Bigg[-\frac{1}{2}\int_0^Tds W_x^{2}(s) \\&-i\sum_{n=1}^{N_{\rm s}} y_n \left(\tilde{x}_0 e^{- t_n/\tau_{\text{ot}}} +\sqrt{2D}\int_0^Tds W_x(s) g_n(s)\right)\Bigg] \\&=e^{{-}i\sum_{n=1}^{N_{\rm s}} y_n \tilde{x}_0 e^{- t_n/\tau_{\text{ot}}}} \\&\times\frac{1}{Z}\int \mathcal{D}W_x(t) \exp\Bigg[-\frac{1}{2}\int_0^Tds W_x^{2}(s) \\&-i\sqrt{2D}\int_0^Tds W_x(s)\sum_{n=1}^{N_{\rm s}} y_n g_n(s)\Bigg]\,. \end{aligned}$$

The path integral over $W_x(t)$ is easy to evaluate, yielding:

$$\begin{aligned}&\frac{1}{Z}\int \mathcal{D}W_x(t) \exp\left[-\frac{1}{2}\int_0^Tds W_x^{2}(s)-i\sqrt{2D}\int_0^Tds W_x(s)\sum_{n=1}^{N_{\rm s}} y_n g_n(s)\right] \\&=\frac{1}{Z}\int \mathcal{D}W_x(t)\times \\& \exp\left[-\frac{1}{2}\int_0^Tds \left(W_x(s)+i\sqrt{2D}\sum_{n=1}^{N_{\rm s}} y_n g_n(s)\right)^2-D\sum_{n,m=1}^{N_{\rm s}} y_ny_m \int_0^Tdsg_n(s)g_{m}(s)\right] \\&=\exp\left[-\frac{1}{2}\sum_{n,m=1}^{N_{\rm s}} y_n I_{nm}y_m\right], \end{aligned}$$
where we have defined
$$I_{nm}=2D\int_0^Tdsg_n(s)g_{m}(s)\,.$$

Gathering the results, we have that:

$$p(\{\tilde{x}_n\}_{n=1}^{N_{\rm s}})=\int\left[\prod_{n=1}^{N_{\rm s}}\frac{dy_n}{2\pi}\right]\exp\left(i\sum_{n=1}^{N_{\rm s}}y_n\left( \tilde{x}_n-\tilde{x}_0 e^{- t_n/\tau_{\text{ot}}}\right)-\frac{1}{2}\sum_{n,m=1}^{N_{\rm s}} y_n I_{nm}y_m \right)\,.$$

With respect to the auxiliary variables $y_n$, this is a multivariate Gaussian integral that is easy to carry out. The final result for the joint PDF of observing a blurred trajectory is:

$$p(\{\tilde{x}_n\}_{n=1}^{N_{\rm s}})=\frac{1}{\sqrt{(2\pi)^{N_{\rm s}} \det I}}\exp\left[-\frac{1}{2}\sum_{n,m=1}^{N_{\rm s}} \left(\tilde{x}_n- \tilde{x}_0 e^{- t_n/\tau_{\text{ot}}}\right)[I^{{-}1}]_{nm}\left(\tilde{x}_m- \tilde{x}_0 e^{- t_m/\tau_{\text{ot}}}\right) \right],$$
where the matrix elements $I_{nm}$ are given by Eq. (54).
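
The joint PDF above can also be evaluated numerically. The sketch below (illustrative; the function names are our own and the initial condition is taken as $\tilde{x}_0=0$) assembles the covariance matrix $I$ from the closed form derived in Appendix D and computes the Gaussian log-likelihood through a Cholesky factorization, avoiding the explicit inversion of $I$:

```python
import numpy as np

def cov_matrix(N, dt, tau, D, delta):
    """Covariance I_nm of the blurred trajectory (Appendix D closed form),
    with alpha = delta/(2*tau) and sampling times t_n = n*dt.
    Assumes non-overlapping exposure windows, i.e. delta < dt."""
    alpha = delta / (2.0 * tau)
    blur = (np.sinh(alpha) / alpha) ** 2 if alpha > 0 else 1.0
    diag = (alpha - np.cosh(alpha) * np.sinh(alpha)) / alpha**2 if alpha > 0 else 0.0
    t = dt * np.arange(1, N + 1)
    I = D * tau * blur * (np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
                          - np.exp(-(t[:, None] + t[None, :]) / tau))
    return I + D * tau * diag * np.eye(N)

def log_likelihood(x_blur, I):
    """log p({x_n}) of the blurred trajectory, assuming x_0 = 0."""
    L = np.linalg.cholesky(I)                   # I = L L^T
    z = np.linalg.solve(L, x_blur)              # whitened residuals
    log_det = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (z @ z + log_det + len(x_blur) * np.log(2.0 * np.pi))
```

Maximizing this log-likelihood with respect to $\kappa$ (through $\tau_{\text{ot}}=\gamma/\kappa$) and $D$ yields the generalized maximum-likelihood estimates; for $\delta \to 0$ the matrix reduces to the covariance of the discretely sampled Ornstein-Uhlenbeck process.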

Appendix D Exact expression for the corrected covariant matrix $I$

We now proceed to derive a simple expression for the corrected covariance matrix, introduced in Eq. (54), where the function $g_n(s)$ is defined in Eq. (48). We first write the covariance matrix as the sum of three contributions, that is $I_{nm}=I^{(1)}_{nm}+I^{(2)}_{nm}+I^{(3)}_{nm}$, where

$$\begin{aligned}I^{(1)}_{nm}/2D&=\left[\frac{\sinh(\alpha)}{\alpha} \right]^2\int_0^Tds\Theta(t_n-\delta/2-s)\Theta(t_m-\delta/2-s)e^{-( t_n+t_m-2s)/\tau_{\text{ot}}} \\ I^{(2)}_{nm}/2D&=\left[\frac{\sinh(\alpha)}{\alpha} \right]\Bigg\{\int_0^Tds\Theta(t_n-\delta/2-s)\Theta(t_m+\delta/2-s)\Theta(s-t_m+\delta/2) \\ &\times\frac{e^{-( t_n-s)/\tau_{\text{ot}}}-e^{-(t_m+t_n+\delta/2-2s)/\tau_{\text{ot}}}}{2\alpha} \\&+\int_0^Tds\Theta(t_m-\delta/2-s)\Theta(t_n+\delta/2-s)\Theta(s-t_n+\delta/2) \\&\times\frac{e^{-( t_m-s)/\tau_{\text{ot}}}-e^{-(t_m+t_n+\delta/2-2s)/\tau_{\text{ot}}}}{2\alpha}\Bigg\}, \\ I^{(3)}_{nm}/2D&=\int_0^Tds\Theta(t_n+\delta/2-s)\Theta(s-t_n+\delta/2)\Theta(t_m+\delta/2-s)\Theta(s-t_m+\delta/2) \\&\times\frac{1-e^{-(t_n+\delta/2-s)/\tau_{\text{ot}}}}{2\alpha}\frac{1-e^{-(t_m+\delta/2-s)/\tau_{\text{ot}}}}{2\alpha}. \end{aligned}$$

The evaluation of these integrals is relatively simple. Let us evaluate each carefully. For the integral $I^{(1)}_{nm}$, we notice that the two Heaviside functions indicate that the time integral over the $s$-variable must be carried out in the interval $s\in [0,\min (t_n,t_m)-\delta /2]$. Thus, we write

$$\begin{aligned}I^{(1)}_{nm}/2D&=\left[\frac{\sinh(\alpha)}{\alpha} \right]^2\int_0^Tds\Theta(t_n-\delta/2-s)\Theta(t_m-\delta/2-s)e^{-( t_n+t_m-2s)/\tau_{\text{ot}}} \\&=\left[\frac{\sinh(\alpha)}{\alpha} \right]^2\int_0^{\min(t_n,t_m)-\delta/2}dse^{-( t_n+t_m-2s)/\tau_{\text{ot}}} \\&=\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 \frac{\tau_{\text{ot}}}{2}\left[e^{-( t_n+t_m-2(\min(t_n,t_m)-\delta/2))/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right]\,. \end{aligned}$$

Now, using the following expressions for the minimum and maximum of two real numbers, viz.

$$\min(t_n,t_m)=\frac{t_n+t_m}{2}-\frac{|t_n-t_m|}{2},\quad\quad \max(t_n,t_m)=\frac{t_n+t_m}{2}+\frac{|t_n-t_m|}{2},$$
we finally arrive at:
$$I^{(1)}_{nm}= D\tau_{\text{ot}}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 \left[e^{-(|t_n-t_m|+\delta)/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right]\,.$$

To carry out the integral $I^{(2)}_{nm}$, we will assume that $|t_n-t_m|> \delta$ for $n\neq m$, that is, the intervals $[t_n-\delta /2,t_n+\delta /2]$ and $[t_m-\delta /2,t_m+\delta /2]$ never overlap (even for consecutive points) unless $n=m$. Next, looking at the expression of $I^{(2)}_{nm}$, we see that the three Heaviside functions indicate that, for the integral to be non-zero, the intersections of the intervals $[0,t_n-\delta /2]\cap [t_m-\delta /2,t_m+\delta /2]$ and $[0,t_m-\delta /2]\cap [t_n-\delta /2,t_n+\delta /2]$ must be non-empty. This implies that the integral vanishes whenever $t_n=t_m$, leaving only the two cases $t_m>t_n$ and $t_m<t_n$:

$$\begin{aligned}I^{(2)}_{nm}/2D&=\left(1-\delta_{nm}\right)\left[\frac{\sinh(\alpha)}{\alpha} \right]\Bigg[\Theta(t_n>t_m)\int_{t_m-\delta/2}^{t_m+\delta/2} ds\frac{e^{-( t_n-s)/\tau_{\text{ot}}}-e^{-(t_m+t_n+\delta/2-2s)/\tau_{\text{ot}}}}{2\alpha} \\&+\Theta(t_m>t_n)\int_{t_n-\delta/2}^{t_n+\delta/2}ds\frac{e^{-( t_m-s)/\tau_{\text{ot}}}-e^{-(t_m+t_n+\delta/2-2s)/\tau_{\text{ot}}}}{2\alpha}\Bigg]\,. \end{aligned}$$

Doing the integrals and rearranging terms, we finally arrive at the following expression

$$I^{(2)}_{nm}= 2D\tau_{\text{ot}}\left(1-\delta_{nm}\right) \alpha\left[\frac{\sinh(\alpha)}{\alpha} \right]^3 e^{-\alpha-|t_m-t_n|/\tau_{\text{ot}}}\,.$$

Finally, looking at the expression of the integral $I^{(3)}_{nm}$, we observe that the integration interval is given by $[t_n-\delta /2,t_n+\delta /2]\cap [t_m-\delta /2,t_m+\delta /2]$, so unless $t_n=t_m$ the integral is zero, as the intervals do not overlap. Thus,

$$I^{(3)}_{nm}/2D=\delta_{nm}\int_{t_n-\delta/2}^{t_n+\delta/2}ds\left(\frac{1-e^{-(t_n+\delta/2-s)/\tau_{\text{ot}}}}{2\alpha}\right)^2=\delta_{nm} \delta/2 \frac{4e^{{-}2\alpha }+4\alpha-3-e^{{-}4\alpha}}{8\alpha^3},$$
and therefore
$$I^{(3)}_{nm}= 2D \tau_{\text{ot}}\delta_{nm} \alpha \frac{4e^{{-}2\alpha }+4\alpha-3-e^{{-}4\alpha}}{8\alpha^3}\,.$$

We can now add up the three contributions to obtain the following expression for $I_{nm}$, viz.

$$\begin{aligned}I_{nm}&= D\tau_{\text{ot}}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 \left[e^{{-}2\alpha-|t_n-t_m|/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right]+2D\tau_{\text{ot}} \alpha\left[\frac{\sinh(\alpha)}{\alpha} \right]^3 e^{-\alpha-|t_m-t_n|/\tau_{\text{ot}}} \\&+ 2D \tau_{\text{ot}}\delta_{nm}\Bigg[ \alpha \frac{4e^{{-}2\alpha }+4\alpha-3-e^{{-}4\alpha}}{8\alpha^3}- \alpha\left[\frac{\sinh(\alpha)}{\alpha} \right]^3 e^{-\alpha}\Bigg]. \end{aligned}$$

Finally, using the following identities

$$\begin{aligned}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 e^{{-}2\alpha} +2\alpha \left[\frac{\sinh(\alpha)}{\alpha} \right]^3 e^{-\alpha}&=\left[\frac{\sinh(\alpha)}{\alpha} \right]^2, \\ \alpha \frac{4e^{{-}2\alpha }+4\alpha-3-e^{{-}4\alpha}}{8\alpha^3}- \alpha\left[\frac{\sinh(\alpha)}{\alpha} \right]^3 e^{-\alpha}&=\frac{\alpha-\cosh(\alpha)\sinh(\alpha)}{2\alpha^2}, \end{aligned}$$
we can simplify the covariance matrix, yielding:
$$I_{nm}= D\tau_{\text{ot}}\left[\frac{\sinh(\alpha)}{\alpha} \right]^2 \left[e^{-|t_n-t_m|/\tau_{\text{ot}}}-e^{-( t_n+t_m)/\tau_{\text{ot}}}\right]+ D \tau_{\text{ot}}\delta_{nm}\frac{\alpha-\cosh(\alpha)\sinh(\alpha)}{\alpha^2}\,.$$
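
As a consistency check, at long times the diagonal of this covariance tends to $D\tau_{\text{ot}}\left[(\sinh\alpha/\alpha)^2+(\alpha-\cosh\alpha\sinh\alpha)/\alpha^2\right]=D\tau_{\text{ot}}F(\alpha)$, the motion-blur equipartition factor $F(\alpha)=(e^{-2\alpha}+2\alpha-1)/(2\alpha^2)$ of the main text. This can be verified with a short Monte Carlo run (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
tau, D = 1e-2, 0.3            # relaxation time (s) and diffusion constant (um^2/s)
dt, m = 1e-4, 50              # fine time step and steps per exposure window
delta = m * dt                # integration time: 5 ms
alpha = delta / (2 * tau)     # alpha = 0.25

# exact discrete update of the Ornstein-Uhlenbeck process, stationary start
a = np.exp(-dt / tau)
noise = np.sqrt(D * tau * (1 - a * a))
N = 2_000_000                 # divisible by m, so the reshape below is exact
w = rng.standard_normal(N - 1)
x = np.empty(N)
x[0] = np.sqrt(D * tau) * rng.standard_normal()
for n in range(N - 1):
    x[n + 1] = a * x[n] + noise * w[n]

x_blur = x.reshape(-1, m).mean(axis=1)   # consecutive window averages
F = (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)
# x_blur.var() should approach D*tau*F(alpha), noticeably below D*tau
```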

Funding

Dirección General de Asuntos del Personal Académico, Universidad Nacional Autónoma de México (IN113422, IN111919); Knut och Alice Wallenbergs Stiftelse (2019.0079); HORIZON EUROPE European Research Council (101001267); H2020 European Research Council (677511).

Acknowledgements

We thank Agnese Callegari for her insights about the presentation of the results.

A.V.A. had the original idea. G.V., I.P.C., and A.V.A. supervised the study. I.P.C. performed all theoretical derivations. L.P.G., M.S., A.C., G.P., and A.V.A. performed the experiments. L.P.G. and A.V.A. implemented the models, performed the simulations (some of which were initially carried out by I.P.C.), did the data analysis, and made the figures. All the authors discussed the results, and L.P.G., I.P.C., and A.V.A. wrote the draft of the article. All authors revised the final version of the article.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. H. Jones, O. M. Maragó, and G. Volpe, Optical Tweezers: Principles and Applications (Cambridge University Press, 2015).

2. J. Gieseler, J. R. Gomez-Solano, A. Magazzù, I. P. Castillo, L. P. García, M. Gironella-Torrent, X. Viader-Godoy, F. Ritort, G. Pesce, A. V. Arzola, K. Volke-Sepúlveda, and G. Volpe, “Optical tweezers — from calibration to applications: a tutorial,” Adv. Opt. Photonics 13(1), 74–241 (2021). [CrossRef]  

3. G. Volpe, O. M. Marago, H. Rubinsztein-Dunlop, et al., “Roadmap for optical tweezers 2023,” JPhys Photonics 5(2), 022501 (2023). [CrossRef]  

4. C. Bustamante, W. Cheng, and Y. X. Mejia, “Revisiting the central dogma one molecule at a time,” Cell 144(4), 480–497 (2011). [CrossRef]  

5. P. Zemánek, G. Volpe, A. Jonáš, and O. Brzobohatỳ, “Perspective on light-induced transport of particles: from optical forces to phoretic motion,” Adv. Opt. Photonics 11(3), 577–678 (2019). [CrossRef]  

6. P. Roca-Cusachs, V. Conte, and X. Trepat, “Quantifying forces in cell biology,” Nat. Cell Biol. 19(7), 742–751 (2017). [CrossRef]  

7. C. Gonzalez-Ballestero, M. Aspelmeyer, L. Novotny, R. Quidant, and O. Romero-Isart, “Levitodynamics: Levitation and control of microscopic objects in vacuum,” Science 374(6564), eabg3027 (2021). [CrossRef]  

8. M. Tassieri, F. D. Giudice, E. J. Robertson, et al., “Microrheology with optical tweezers: Measuring the relative viscosity of solutions "at a glance",” Sci. Rep. 5(1), 8831 (2015). [CrossRef]  

9. R. M. Robertson-Anderson, “Optical tweezers microrheology: from the basics to advanced techniques and applications,” (2018).

10. J. C. Crocker and D. G. Grier, “Methods of digital video microscopy for colloidal studies,” J. Colloid Interface Sci. 179(1), 298–310 (1996). [CrossRef]  

11. A. Pralle, M. Prummer, E.-L. Florin, E. Stelzer, and J. Hörber, “Three-dimensional high-resolution particle tracking for optical tweezers by forward scattered light,” Microsc. Res. Tech. 44(5), 378–386 (1999). [CrossRef]  

12. G. Pesce, P. H. Jones, O. M. Maragò, and G. Volpe, “Optical tweezers: theory and practice,” Eur. Phys. J. Plus 135(12), 949 (2020). [CrossRef]  

13. K. C. Neuman and S. M. Block, “Optical trapping,” Rev. Sci. Instrum. 75(9), 2787–2809 (2004). [CrossRef]  

14. G. Volpe, G. Volpe, and D. Petrov, “Brownian motion in a nonhomogeneous force field and photonic force microscope,” Phys. Rev. E 76(6), 061118 (2007). [CrossRef]  

15. K. Berg-Sørensen and H. Flyvbjerg, “Power spectrum analysis for optical tweezers,” Rev. Sci. Instrum. 75(3), 594–612 (2004). [CrossRef]  

16. L. Pérez García, J. Donlucas Pérez, G. Volpe, A. V. Arzola, and G. Volpe, “High-performance reconstruction of microscopic force fields from brownian trajectories,” Nat. Commun. 9(1), 5166 (2018). [CrossRef]  

17. M. J. Bogdan and T. Savin, “Errors in Energy Landscapes Measured with Particle Tracking,” Biophys. J. 115(1), 139–149 (2018). [CrossRef]  

18. W. P. Wong and K. Halvorsen, “The effect of integration time on fluctuation measurements: calibrating an optical trap in the presence of motion blur,” Opt. Express 14(25), 12517 (2006). [CrossRef]  

19. K. I. Mortensen, H. Flyvbjerg, and J. N. Pedersen, “Confined Brownian Motion Tracked With Motion Blur: Estimating Diffusion Coefficient and Size of Confining Space,” Front. Phys. 8, 1–15 (2021). [CrossRef]  

20. J. P. Sharpe, C. Iniguez-Palomares, and R. Jimenez-Flores, “Optical tweezers force calibration using a fast-shuttering camera,” Proc. SPIE 6441, 644114 (2007). [CrossRef]  

21. T. Savin and P. S. Doyle, “Role of a finite exposure time on measuring an elastic modulus using microrheology,” Phys. Rev. E 71(4), 041106 (2005). [CrossRef]  

22. T. Savin and P. S. Doyle, “Static and dynamic errors in particle tracking microrheology,” Biophys. J. 88(1), 623–638 (2005). [CrossRef]  

23. X. Michalet and A. J. Berglund, “Optimal diffusion coefficient estimation in single-particle tracking,” Phys. Rev. E 85(6), 061916 (2012). [CrossRef]  

24. V. E. Loosemore and N. R. Forde, “Effects of finite and discrete sampling and blur on microrheology experiments,” Opt. Express 25(25), 31239 (2017). [CrossRef]  

25. A. J. Berglund, “Statistics of camera-based single-particle tracking,” Phys. Rev. E 82(1), 011917 (2010). [CrossRef]  

26. A. van der Horst and N. R. Forde, “Power spectral analysis for optical trap stiffness calibration from high-speed camera position detection with limited bandwidth,” Opt. Express 18(8), 7670–7677 (2010). [CrossRef]  

27. Z. Gong, H. Chen, S. Xu, Y. Li, and L. Lou, “Monte-Carlo simulation of optical trap stiffness measurement,” Opt. Commun. 263(2), 229–234 (2006). [CrossRef]  

28. B. M. Lansdorp and O. A. Saleh, “Power spectrum and Allan variance methods for calibrating single-molecule video-tracking instruments,” Rev. Sci. Instrum. 83(2), 025115 (2012). [CrossRef]  

29. M. U. Richly, S. Türkcan, A. Le Gall, N. Fiszman, J.-B. Masson, N. Westbrook, K. Perronet, and A. Alexandrou, “Calibrating optical tweezers with Bayesian inference,” Opt. Express 21(25), 31578 (2013). [CrossRef]  

30. J. N. Pedersen, L. Li, C. Grǎdinaru, R. H. Austin, E. C. Cox, and H. Flyvbjerg, “How to connect time-lapse recorded trajectories of motile microorganisms with dynamical models in continuous time,” Phys. Rev. E 94(6), 062401 (2016). [CrossRef]  

31. C. L. Vestergaard, P. C. Blainey, and H. Flyvbjerg, “Optimal estimation of diffusion coefficients from single-particle trajectories,” Phys. Rev. E 89(2), 022726 (2014). [CrossRef]  

32. H. Risken, The Fokker-Planck equation (Springer, 1996).

33. G. Volpe and G. Volpe, “Simulation of a brownian particle in an optical trap,” Am. J. Phys. 81(3), 224–230 (2013). [CrossRef]  

34. A. V. Arzola, L. Chvátal, P. Jákl, and P. Zemánek, “Spin to orbital light momentum conversion visualized by particle trajectory,” Sci. Rep. 9(1), 4127 (2019). [CrossRef]  

35. M. D. Shattuck and S. V. Franklin, “Handbook of granular materials,” Contemp. Phys. 58, 181 (2017).




Figures (5)

Fig. 1. Effect of low sampling frequency and long integration time on optical tweezers calibration. (a) The trajectory of a particle (solid line) is sampled every $\Delta t$, instantaneously (black dots) or with a finite integration time $\delta$ (red dots). (b-e) Behavior of the standard methods for various sampling frequencies and integration times on simulated trajectories generated through Monte Carlo simulations of a particle of diameter $d_{\rm p}=1.54\,{\mu}\rm{m}$ in a trap with stiffness $\kappa =4.08\,\rm{pN/}{\mu}\rm{m}$ and diffusion constant $D= 0.299\,{\mu}\rm{m}^2/\rm{s}$. The gray solid lines represent the analytical solutions of the standard methods, whereas the colored markers represent their corresponding estimates obtained from the simulated data. From left to right: potential (POT), mean squared displacement (MSD), autocorrelation function (ACF), power spectral density (PSD), and force reconstruction via maximum likelihood estimator (FORMA) methods. (b) When the conditions are ideal (i.e., with high sampling frequency $f_{\rm s}=5000\,\rm{Hz}$ and short integration time $\delta =0\,\rm{s}$), there is good agreement between the estimators and the theoretical predictions (see Eqs. (7), (8), (10), (13) and (17)). (c-e) This agreement worsens as the conditions become less ideal (c) by lowering the sampling frequency to $f_{\rm s}= 100\,\rm{Hz}$ ($\delta =0\,\rm{ms}$), and then by increasing the integration time to (d) $\delta =5\,\rm{ms}$ and (e) $\delta =10\,\rm{ms}$.
Fig. 2. Influence of $\delta$ on the estimation of the stiffness and diffusion: low sampling frequency. (a,c) Standard and (b,d) generalized formulas at sampling frequency $f_{\rm s}=500\,\rm{Hz}$. The data points show estimates from experimental realizations and the colored shaded areas depict confidence intervals obtained from simulations using $\bar {\kappa }_\alpha = 4.08\,\rm{pN/}{\mu}\rm{m}$ and $\bar {D}_\alpha = 0.299\,{\mu}\rm{m}^2/\rm{s}$. As a reference, the gray shaded areas in (c,d) depict the range of the expected value $D^*=0.295\pm 0.020\,{\mu}\rm{m}^2/\rm{s}$. All the experimental data points were estimated using $N_{\rm s}=10^5$ samples.
Fig. 3. Influence of $\delta$ on the estimation of the stiffness and diffusion: medium sampling frequency. (a,c) Standard and (b,d) generalized formulas at sampling frequency $f_{\rm s}=1500\,\rm{Hz}$. The data points show estimates from experimental realizations and the colored shaded areas depict confidence intervals obtained from simulations using $\bar {\kappa }_\alpha = 4.08\,\rm{pN/}{\mu}\rm{m}$ and $\bar {D}_\alpha = 0.299\,{\mu}\rm{m}^2/\rm{s}$. As a reference, the gray shaded areas in (c,d) depict the range of the expected value $D^*=0.295\pm 0.020\,{\mu}\rm{m}^2/\rm{s}$. All the experimental data points were estimated using $N_{\rm s}=10^5$ samples.
Fig. 4. Influence of $\delta$ on the estimation of the stiffness and diffusion: high sampling frequency. (a,c) Standard and (b,d) generalized formulas at sampling frequency $f_{\rm s}=3500\,\rm{Hz}$. The data points show estimates from experimental realizations and the colored shaded areas depict confidence intervals obtained from simulations using $\bar {\kappa }_\alpha = 4.08\,\rm{pN/}{\mu}\rm{m}$ and $\bar {D}_\alpha = 0.299\,{\mu}\rm{m}^2/\rm{s}$. As a reference, the gray shaded areas in (c,d) depict the range of the expected value $D^*=0.295\pm 0.020\,{\mu}\rm{m}^2/\rm{s}$. All the experimental data points were estimated using $N_{\rm s}=10^5$ samples.
Fig. 5. Performance of the generalized methods to estimate the stiffness and diffusion. (a-c) The stiffness is estimated as a function of the total sampled time $T_{\rm s}$, while the diffusion (d-f) is estimated as a function of the number of samples $N_{\rm s}$ in the trajectory (at three sampling frequencies $f_{\rm s}$ and three long integration times $\delta$). Experimental results are shown with solid dots, while colored shaded areas show confidence intervals estimated by Monte Carlo simulations considering $\bar {\kappa }_\alpha =4.08\,\rm{pN/}{\mu}\rm{m}$ and $\bar {D}_\alpha =0.299\,{\mu}\rm{m}^2/\rm{s}$. Dashed lines in (a-c) depict the $10{\%}$ error range of $\bar {\kappa }$ and gray shaded areas in (d-f) depict the range of the expected value $D^*=0.295\pm 0.02\,{\mu}\rm{m}^2/\rm{s}$. The vertical red lines in (a-c) correspond to $N_{\rm s}=5000$ samples and the vertical red lines in (d-f) correspond to $T_{\rm s}=10\,\rm{s}$ acquisition time.

Tables (1)

Table 1. Overview of the calibration analyses in the overdamped regime. These methods allow one to estimate the stiffness $\kappa$ of the optical trap and, in most cases, the diffusion constant $D$ of the optically trapped particle from the sampled trajectory $\{\tilde{x}_j\}_{j=1}^{N_{\rm s}}$ of the particle, acquired with sampling time $\Delta t$ and integration time $\delta$. The standard model functions assume sampling and integration times much shorter than the characteristic time of the optical trap. On the other hand, the generalized model functions take into account arbitrary sampling frequencies and integration times, allowing one to drastically improve the calibration of OT even when the data acquisition is very far from ideal conditions.

Equations (68)

Equations on this page are rendered with MathJax. Learn more.

d x ( t ) d t = κ γ x ( t ) + 2 D W ( t ) .
x ( t ) = x 0 e t / τ ot + 2 D 0 t d s W ( s ) e ( t s ) / τ ot ,
ρ ( x ) = ρ 0 e U ( x ) k B T ,
U ( x b ) = k B T log ( h ( x b ) ) + c o n s t a n t .
U ( x ) = 1 2 κ ( x x e q ) 2 ,
κ = k B T ( x x e q ) 2 .
κ = k B T s N s 2 ,
MSD ( τ ) = [ x ( t + τ ) x ( t ) ] 2 = 2 k B T κ ( 1 e τ / τ o t ) ,
MSD ( τ ) = 1 N s n = 1 N s ( x n + x n ) 2 ,
ACF ( τ ) = x ( t + τ ) x ( t ) = k B T κ e τ / τ o t ,
ACF ( τ ) = 1 N s n = 1 N s ( x n + x n ) ,
P ( f ) = 1 2 π 2 D f c 2 + f 2 ,
P ( f ) = Δ x 2 / f s 1 + c 2 2 c cos ( 2 π f / f s ) ,
P ( f k ) = | x ^ ( f k ) | 2 T s = Δ t 2 T s | F F T { { x j } j = 1 N s } k | 2
x n + 1 x n Δ t = κ γ x n + 2 D Δ t w n
L ( { x n } n = 1 N s ) = 1 ( 2 D Δ t ) N s / 2 exp [ 1 2 n = 1 N s 1 ( x n + 1 ( 1 Δ t τ o t ) x n 2 D Δ t ) 2 ]
κ γ = n = 1 N s 1 x n x n + 1 x n Δ t n = 1 N s 1 [ x n ] 2 , D = Δ t 2 ( N s 1 ) n = 1 N s 1 ( x n + 1 x n Δ t + κ γ x n ) 2 .
x ~ n 1 δ t n δ / 2 t n + δ / 2 x ( t ) d t .
p ( { x ~ n } n = 1 N s ) = 1 ( 2 π ) N s det I exp [ 1 2 n , m = 1 N s ( x ~ n x ~ 0 e t n / τ ot ) [ I 1 ] n m ( x ~ m x 0 ~ e t m / τ ot ) ] ,
I n m = [ sinh ( α ) α ] 2 D τ ot , x [ e | t n t m | / τ ot , x e ( t n + t m ) / τ ot , x ] + D τ ot , x α cosh ( α ) sinh ( α ) α 2 δ n m ,
x ~ N s 2 = k B T κ F ( α ) with F ( α ) = e 2 α + 2 α 1 2 α 2 .
U ( x ) = 1 2 κ F ( α ) x 2 .
x ~ n 2 = k B T κ F ( α ) , x ~ n x ~ n + = k B T κ [ sinh ( α ) α ] 2 e | τ | / τ ot ,
MSD ( τ ) = ( x ~ n + x ~ n ) 2 = 2 k B T κ { F ( α ) [ sinh ( α ) α ] 2 e | τ | / τ ot } .
ACF ( t n t m ) = x ~ m x ~ n = k B T κ { [ sinh ( α ) α ] 2 e | t n t m | / τ ot t n t m , F ( α ) t n = t m .
$$P(f) = \frac{1}{N_s\,\Delta t}\left|\Delta t\sum_{n=1}^{N_s}\tilde{x}_n\,e^{-2\pi i f t_n}\right|^2,$$
$$\langle P(f)\rangle = \frac{\Delta t}{N_s}\sum_{n,m=1}^{N_s}\left\langle\tilde{x}_n\tilde{x}_m\right\rangle\left[\cos(2\pi f t_n)\cos(2\pi f t_m) + \sin(2\pi f t_n)\sin(2\pi f t_m)\right] = \frac{\Delta t}{N_s}\sum_{n,m=1}^{N_s} I_{nm}\,\cos\left[2\pi f(t_n - t_m)\right],$$
$$\langle P(f)\rangle = \frac{D\,\tau_{\rm ot}\,\Delta t}{N_s}\sum_{n,m=1}^{N_s}\left\{\left[\frac{\sinh(\alpha)}{\alpha}\right]^2\left[e^{-|t_n-t_m|/\tau_{\rm ot}} - e^{-(t_n+t_m)/\tau_{\rm ot}}\right] + \delta_{nm}\,\frac{\alpha - \cosh(\alpha)\sinh(\alpha)}{\alpha^2}\right\}\cos\left[2\pi f(t_n-t_m)\right].$$
$$\sum_{n,m=1}^{N_s}\left[e^{-|t_n-t_m|/\tau_{\rm ot}} - e^{-(t_n+t_m)/\tau_{\rm ot}}\right]\cos\left[2\pi f(t_n-t_m)\right] = N_s\,\frac{\sinh(\Delta t/\tau_{\rm ot})}{\cosh(\Delta t/\tau_{\rm ot}) - \cos(2\pi f\Delta t)} - e^{-(N_s+1)\Delta t/\tau_{\rm ot}}\,\frac{\cos(2\pi f N_s\Delta t) - \cosh(N_s\Delta t/\tau_{\rm ot})}{\cos(2\pi f\Delta t) - \cosh(\Delta t/\tau_{\rm ot})} - \frac{e^{-N_s\Delta t/\tau_{\rm ot}}}{\left[\cos(2\pi f\Delta t) - \cosh(\Delta t/\tau_{\rm ot})\right]^2}\left\{\cos(2\pi f N_s\Delta t) + \sin(2\pi f\Delta t)\sin(2\pi f N_s\Delta t)\sinh(\Delta t/\tau_{\rm ot})\,e^{-N_s\Delta t/\tau_{\rm ot}} + \cos(2\pi f\Delta t)\cosh(\Delta t/\tau_{\rm ot})\left[e^{-N_s\Delta t/\tau_{\rm ot}} - \cos(2\pi f N_s\Delta t)\right]\right\}.$$
$$\langle P(f)\rangle = \frac{D\,\tau_{\rm ot}\,\Delta t}{N_s}\left[\frac{\sinh(\alpha)}{\alpha}\right]^2\Bigg(N_s\,\frac{\sinh(\Delta t/\tau_{\rm ot})}{\cosh(\Delta t/\tau_{\rm ot}) - \cos(2\pi f\Delta t)} - e^{-(N_s+1)\Delta t/\tau_{\rm ot}}\,\frac{\cos(2\pi f N_s\Delta t) - \cosh(N_s\Delta t/\tau_{\rm ot})}{\cos(2\pi f\Delta t) - \cosh(\Delta t/\tau_{\rm ot})} - \frac{e^{-N_s\Delta t/\tau_{\rm ot}}}{\left[\cos(2\pi f\Delta t) - \cosh(\Delta t/\tau_{\rm ot})\right]^2}\left\{\cos(2\pi f N_s\Delta t) + \sin(2\pi f\Delta t)\sin(2\pi f N_s\Delta t)\sinh(\Delta t/\tau_{\rm ot})\,e^{-N_s\Delta t/\tau_{\rm ot}} + \cos(2\pi f\Delta t)\cosh(\Delta t/\tau_{\rm ot})\left[e^{-N_s\Delta t/\tau_{\rm ot}} - \cos(2\pi f N_s\Delta t)\right]\right\}\Bigg) + D\,\tau_{\rm ot}\,\Delta t\,\frac{\alpha - \cosh(\alpha)\sinh(\alpha)}{\alpha^2}.$$
$$\langle P(f)\rangle = D\,\tau_{\rm ot}\,\Delta t\left(\left[\frac{\sinh(\alpha)}{\alpha}\right]^2\frac{\sinh(\Delta t/\tau_{\rm ot})}{\cosh(\Delta t/\tau_{\rm ot}) - \cos(2\pi f\Delta t)} + \frac{\alpha - \cosh(\alpha)\sinh(\alpha)}{\alpha^2}\right) + O\!\left(N_s^{-1}\right).$$
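A sketch of this large-$N_s$ model function (assumed demo parameters; as a consistency check, for $\delta, \Delta t \ll \tau_{\rm ot}$ it reduces to the Lorentzian of the standard analysis):

```python
import numpy as np

def psd_model(f, D, tau_ot, dt, delta):
    """Large-N_s PSD model with finite sampling time dt and exposure delta."""
    a = delta / (2 * tau_ot)                       # alpha
    s2 = (np.sinh(a) / a) ** 2
    blur = (a - np.cosh(a) * np.sinh(a)) / a**2    # delta_nm contribution
    x = dt / tau_ot
    lorentz = np.sinh(x) / (np.cosh(x) - np.cos(2 * np.pi * f * dt))
    return D * tau_ot * dt * (s2 * lorentz + blur)

def lorentzian(f, D, tau_ot):
    """Standard model: P(f) = (1 / 2 pi^2) * D / (f_c^2 + f^2)."""
    f_c = 1.0 / (2 * np.pi * tau_ot)
    return D / (2 * np.pi**2 * (f_c**2 + f**2))

# For dt, delta << tau_ot the two models agree.
p_gen = psd_model(100.0, 0.1, 1e-3, 1e-7, 1e-9)
p_std = lorentzian(100.0, 0.1, 1e-3)
```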
The likelihood of the data set $\tilde{D} = \{\tilde{x}_n\}_{n=1}^{N_s}$ given the parameters $\theta$ is
$$L(\tilde{D}|\theta) = p\left(\{\tilde{x}_n\}_{n=1}^{N_s}\,\middle|\,\theta\right).$$
Bayes' theorem then gives the posterior distribution
$$p\left(\theta\,\middle|\,\{\tilde{x}_n\}_{n=1}^{N_s}\right) = \frac{L(\tilde{D}|\theta)\,p_0(\theta)}{\int d\theta\,L(\tilde{D}|\theta)\,p_0(\theta)},$$
with $p_0(\theta)$ the prior.
Since the process is Gaussian, the conditional probability of consecutive samples is
$$G(\tilde{x}_{n+1}|\tilde{x}_n) \propto \exp\left[-\frac{1}{2\left(I_{n+1,n+1} - \frac{I_{n+1,n}^2}{I_{n,n}}\right)}\left(\tilde{x}_{n+1} - \frac{I_{n+1,n}}{I_{n,n}}\,\tilde{x}_n\right)^2\right],$$
where, in the stationary regime,
$$I_{nn} = I_{n+1,n+1} = \frac{k_B T}{\kappa}\,F(\alpha),\qquad I_{n+1,n} = \frac{k_B T}{\kappa}\left[\frac{\sinh(\alpha)}{\alpha}\right]^2 e^{-\Delta t/\tau_{\rm ot}},$$
$$G(\tilde{x}_{n+1}|\tilde{x}_n) = \frac{1}{\sqrt{2\pi\,\frac{k_B T}{\kappa}F(\alpha)\left[1 - G^2(\alpha)\,e^{-2\Delta t/\tau_{\rm ot}}\right]}}\times\exp\left[-\frac{\left(\tilde{x}_{n+1} - G(\alpha)\,e^{-\Delta t/\tau_{\rm ot}}\,\tilde{x}_n\right)^2}{2\,\frac{k_B T}{\kappa}F(\alpha)\left[1 - G^2(\alpha)\,e^{-2\Delta t/\tau_{\rm ot}}\right]}\right],$$
$$G(\alpha) = \frac{\left[\sinh(\alpha)/\alpha\right]^2}{F(\alpha)}.$$
$$L(\tilde{D}|\theta) = \frac{\exp\left[-\displaystyle\sum_{n=1}^{N_s}\frac{\left(\tilde{x}_{n+1} - G(\theta_2\theta_3)\,e^{-\theta_2\Delta t}\,\tilde{x}_n\right)^2}{2\,\theta_1\,F(\theta_2\theta_3)\left[1 - G^2(\theta_2\theta_3)\,e^{-2\theta_2\Delta t}\right]}\right]}{\left(2\pi\,\theta_1\,F(\theta_2\theta_3)\left[1 - G^2(\theta_2\theta_3)\,e^{-2\theta_2\Delta t}\right]\right)^{N_s/2}},$$
Introducing the sufficient statistics
$$T_1 = \frac{1}{N_s}\sum_{n=1}^{N_s}\tilde{x}_{n+1}^2,\qquad T_2 = \frac{1}{N_s}\sum_{n=1}^{N_s}\tilde{x}_{n+1}\,\tilde{x}_n,\qquad T_3 = \frac{1}{N_s}\sum_{n=1}^{N_s}\tilde{x}_n^2,$$
the maximum-likelihood conditions read
$$G(\theta_2\theta_3)\,e^{-\theta_2\Delta t} = \frac{T_2}{T_3},\qquad \theta_1\,F(\theta_2\theta_3) = \frac{T_1 - \frac{T_2^2}{T_3}}{1 - \left(\frac{T_2}{T_3}\right)^2}.$$
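These moment-matching conditions can be sketched on an AR(1) surrogate process, where `rho` plays the role of $G(\theta_2\theta_3)\,e^{-\theta_2\Delta t}$ and `var` that of $\theta_1\,F(\theta_2\theta_3)$ (assumed demo values):

```python
import numpy as np

# AR(1) surrogate of the blurred, sampled trajectory (demo values).
rng = np.random.default_rng(5)
rho, var, N = 0.8, 2.5, 300_000

x = np.empty(N + 1)
x[0] = rng.normal(0.0, np.sqrt(var))
eps = rng.normal(0.0, np.sqrt(var * (1 - rho**2)), size=N)
for n in range(N):
    x[n + 1] = rho * x[n] + eps[n]

# Sufficient statistics T1, T2, T3.
T1 = np.mean(x[1:] ** 2)
T2 = np.mean(x[1:] * x[:-1])
T3 = np.mean(x[:-1] ** 2)

rho_est = T2 / T3                                   # G(...) e^{-theta_2 dt}
var_est = (T1 - T2**2 / T3) / (1 - (T2 / T3) ** 2)  # theta_1 F(...)
```

Inverting these two numbers for $(\theta_1, \theta_2, \theta_3)$ then requires the explicit forms of $F$ and $G$.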
The Monte Carlo simulations employ the Euler scheme with time step $\Delta t_s$,
$$x_{k+1} = x_k - \frac{\kappa}{\gamma}\,x_k\,\Delta t_s + \sqrt{2D\,\Delta t_s}\,w_k,$$
with $w_k$ independent standard Gaussian random numbers.
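A sketch of such a simulation, including the block averaging that mimics the detector's finite integration time (assumed demo parameters; the blurred variance should shrink by the factor $F(\alpha)$ derived above):

```python
import numpy as np

# Demo parameters (assumed, arbitrary units).
rng = np.random.default_rng(6)
tau_ot, D = 1e-3, 0.1          # s, units^2/s
dt_s, M = 1e-6, 1_000_000      # simulation step << tau_ot

# Euler scheme, started in the stationary state.
x = np.empty(M)
x[0] = rng.normal(0.0, np.sqrt(D * tau_ot))
w = rng.normal(size=M - 1)
sd = np.sqrt(2 * D * dt_s)
for k in range(M - 1):
    x[k + 1] = x[k] - x[k] / tau_ot * dt_s + sd * w[k]

# Average non-overlapping blocks of duration delta to mimic the detector.
delta = 1e-4                   # detector integration time
n_avg = int(round(delta / dt_s))
x_tilde = x[: (M // n_avg) * n_avg].reshape(-1, n_avg).mean(axis=1)

# Blurred variance / true variance should be F(alpha), alpha = delta/(2 tau_ot).
alpha = delta / (2 * tau_ot)
F = (np.exp(-2 * alpha) + 2 * alpha - 1) / (2 * alpha**2)
var_ratio = np.var(x_tilde) / (D * tau_ot)
```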
Averaging the deterministic part of the solution over the exposure window gives
$$\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt\; x_0\,e^{-t/\tau_{\rm ot}} = x_0\,\frac{e^{-t_n/\tau_{\rm ot}}}{2\alpha}\left(e^{\alpha} - e^{-\alpha}\right) = \tilde{x}_0\,e^{-t_n/\tau_{\rm ot}},$$
which defines $\tilde{x}_0 = x_0\,\sinh(\alpha)/\alpha$.
$$\frac{\sqrt{2D}}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt\int_0^t ds\,W_x(s)\,e^{-(t-s)/\tau_{\rm ot}} = \sqrt{2D}\left[\int_0^{t_n-\delta/2} ds\,W_x(s)\,\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt\,e^{-(t-s)/\tau_{\rm ot}} + \int_{t_n-\delta/2}^{t_n+\delta/2} ds\,W_x(s)\,\frac{1}{\delta}\int_s^{t_n+\delta/2} dt\,e^{-(t-s)/\tau_{\rm ot}}\right] = \sqrt{2D}\int_0^T ds\,W_x(s)\,g_n(s),$$
$$g_n(s) \equiv \Theta(t_n - \delta/2 - s)\,\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt\,e^{-(t-s)/\tau_{\rm ot}} + \Theta(s - t_n + \delta/2)\,\Theta(t_n + \delta/2 - s)\,\frac{1}{\delta}\int_s^{t_n+\delta/2} dt\,e^{-(t-s)/\tau_{\rm ot}},$$
$$\frac{1}{\delta}\int_{t_n-\delta/2}^{t_n+\delta/2} dt\,e^{-(t-s)/\tau_{\rm ot}} = \frac{1}{2\alpha}\left[e^{-(t_n-\delta/2-s)/\tau_{\rm ot}} - e^{-(t_n+\delta/2-s)/\tau_{\rm ot}}\right] = \frac{\sinh(\alpha)}{\alpha}\,e^{-(t_n-s)/\tau_{\rm ot}},$$
$$\frac{1}{\delta}\int_s^{t_n+\delta/2} dt\,e^{-(t-s)/\tau_{\rm ot}} = \frac{1}{2\alpha}\left[1 - e^{-(t_n+\delta/2-s)/\tau_{\rm ot}}\right].$$
$$\tilde{x}(t_n) = \tilde{x}_0\,e^{-t_n/\tau_{\rm ot}} + \sqrt{2D}\int_0^T ds\,W_x(s)\,g_n(s)$$
with
$$g_n(s) \equiv \Theta(t_n - \delta/2 - s)\,\frac{\sinh(\alpha)}{\alpha}\,e^{-(t_n-s)/\tau_{\rm ot}} + \Theta(s - t_n + \delta/2)\,\Theta(t_n + \delta/2 - s)\,\frac{1}{2\alpha}\left[1 - e^{-(t_n+\delta/2-s)/\tau_{\rm ot}}\right].$$
The joint probability density of the sampled values is
$$p\left(\{\tilde{x}_n\}_{n=1}^{N_s}\right) = \left\langle\prod_{n=1}^{N_s}\delta\left(\tilde{x}_n - \tilde{x}(t_n)\right)\right\rangle_{W_x},$$
where the average is taken over realizations of the white noise with Gaussian measure
$$\mathcal{P}\left[W_x(t)\right] = \frac{1}{Z}\exp\left[-\frac{1}{2}\int_0^T ds\,W_x^2(s)\right],$$
Writing each delta function in its Fourier representation,
$$\left\langle\prod_{n=1}^{N_s}\delta\left(\tilde{x}_n - \tilde{X}(t_n)\right)\right\rangle_{W_x} = \int\left[\prod_{n=1}^{N_s}\frac{dy_n}{2\pi}\right]\left\langle\exp\left[i\sum_{n=1}^{N_s} y_n\left(\tilde{x}_n - \tilde{X}(t_n)\right)\right]\right\rangle_{W_x} = \int\left[\prod_{n=1}^{N_s}\frac{dy_n}{2\pi}\right]\exp\left(i\sum_{n=1}^{N_s} y_n\,\tilde{x}_n\right)\left\langle\exp\left[-i\sum_{n=1}^{N_s} y_n\,\tilde{X}(t_n)\right]\right\rangle_{W_x}.$$
$$\left\langle\exp\left[-i\sum_{n=1}^{N_s} y_n\,\tilde{X}(t_n)\right]\right\rangle_{W_x} = \frac{1}{Z}\int\mathcal{D}W_x(t)\,\exp\left[-\frac{1}{2}\int_0^T ds\,W_x^2(s) - i\sum_{n=1}^{N_s} y_n\left(\tilde{x}_0\,e^{-t_n/\tau_{\rm ot}} + \sqrt{2D}\int_0^T ds\,W_x(s)\,g_n(s)\right)\right] = e^{-i\sum_{n=1}^{N_s} y_n\,\tilde{x}_0\,e^{-t_n/\tau_{\rm ot}}}\times\frac{1}{Z}\int\mathcal{D}W_x(t)\,\exp\left[-\frac{1}{2}\int_0^T ds\,W_x^2(s) - i\sqrt{2D}\int_0^T ds\,W_x(s)\sum_{n=1}^{N_s} y_n\,g_n(s)\right].$$
$$\frac{1}{Z}\int\mathcal{D}W_x(t)\,\exp\left[-\frac{1}{2}\int_0^T ds\,W_x^2(s) - i\sqrt{2D}\int_0^T ds\,W_x(s)\sum_{n=1}^{N_s} y_n\,g_n(s)\right] = \frac{1}{Z}\int\mathcal{D}W_x(t)\times\exp\left[-\frac{1}{2}\int_0^T ds\left(W_x(s) + i\sqrt{2D}\sum_{n=1}^{N_s} y_n\,g_n(s)\right)^2 - D\sum_{n,m=1}^{N_s} y_n\,y_m\int_0^T ds\,g_n(s)\,g_m(s)\right] = \exp\left[-\frac{1}{2}\sum_{n,m=1}^{N_s} y_n\,I_{nm}\,y_m\right],$$
$$I_{nm} = 2D\int_0^T ds\,g_n(s)\,g_m(s).$$
$$p\left(\{\tilde{x}_n\}_{n=1}^{N_s}\right) = \int\left[\prod_{n=1}^{N_s}\frac{dy_n}{2\pi}\right]\exp\left(i\sum_{n=1}^{N_s} y_n\left(\tilde{x}_n - \tilde{x}_0\,e^{-t_n/\tau_{\rm ot}}\right) - \frac{1}{2}\sum_{n,m=1}^{N_s} y_n\,I_{nm}\,y_m\right).$$
$$p\left(\{\tilde{x}_n\}_{n=1}^{N_s}\right) = \frac{1}{\sqrt{(2\pi)^{N_s}\det I}}\,\exp\left[-\frac{1}{2}\sum_{n,m=1}^{N_s}\left(\tilde{x}_n - \tilde{x}_0\,e^{-t_n/\tau_{\rm ot}}\right)\left[I^{-1}\right]_{nm}\left(\tilde{x}_m - \tilde{x}_0\,e^{-t_m/\tau_{\rm ot}}\right)\right],$$
The product $g_n(s)\,g_m(s)$ generates three contributions:
$$I^{(1)}_{nm}/2D = \left[\frac{\sinh(\alpha)}{\alpha}\right]^2\int_0^T ds\,\Theta(t_n - \delta/2 - s)\,\Theta(t_m - \delta/2 - s)\,e^{-(t_n+t_m-2s)/\tau_{\rm ot}},$$
$$I^{(2)}_{nm}/2D = \left[\frac{\sinh(\alpha)}{\alpha}\right]\Bigg\{\int_0^T ds\,\Theta(t_n - \delta/2 - s)\,\Theta(t_m + \delta/2 - s)\,\Theta(s - t_m + \delta/2)\times\frac{e^{-(t_n-s)/\tau_{\rm ot}} - e^{-(t_m+t_n+\delta/2-2s)/\tau_{\rm ot}}}{2\alpha} + \int_0^T ds\,\Theta(t_m - \delta/2 - s)\,\Theta(t_n + \delta/2 - s)\,\Theta(s - t_n + \delta/2)\times\frac{e^{-(t_m-s)/\tau_{\rm ot}} - e^{-(t_m+t_n+\delta/2-2s)/\tau_{\rm ot}}}{2\alpha}\Bigg\},$$
$$I^{(3)}_{nm}/2D = \int_0^T ds\,\Theta(t_n + \delta/2 - s)\,\Theta(s - t_n + \delta/2)\,\Theta(t_m + \delta/2 - s)\,\Theta(s - t_m + \delta/2)\times\frac{1 - e^{-(t_n+\delta/2-s)/\tau_{\rm ot}}}{2\alpha}\,\frac{1 - e^{-(t_m+\delta/2-s)/\tau_{\rm ot}}}{2\alpha}.$$
$$I^{(1)}_{nm}/2D = \left[\frac{\sinh(\alpha)}{\alpha}\right]^2\int_0^{\min(t_n,t_m)-\delta/2} ds\,e^{-(t_n+t_m-2s)/\tau_{\rm ot}} = \left[\frac{\sinh(\alpha)}{\alpha}\right]^2\frac{\tau_{\rm ot}}{2}\left[e^{-\left(t_n+t_m-2\min(t_n,t_m)+\delta\right)/\tau_{\rm ot}} - e^{-(t_n+t_m)/\tau_{\rm ot}}\right].$$
$$\min(t_n,t_m) = \frac{t_n+t_m}{2} - \frac{|t_n-t_m|}{2},\qquad \max(t_n,t_m) = \frac{t_n+t_m}{2} + \frac{|t_n-t_m|}{2},$$
$$I^{(1)}_{nm} = D\,\tau_{\rm ot}\left[\frac{\sinh(\alpha)}{\alpha}\right]^2\left[e^{-(|t_n-t_m|+\delta)/\tau_{\rm ot}} - e^{-(t_n+t_m)/\tau_{\rm ot}}\right].$$
$$I^{(2)}_{nm}/2D = (1-\delta_{nm})\left[\frac{\sinh(\alpha)}{\alpha}\right]\left[\Theta(t_n - t_m)\int_{t_m-\delta/2}^{t_m+\delta/2} ds\,\frac{e^{-(t_n-s)/\tau_{\rm ot}} - e^{-(t_m+t_n+\delta/2-2s)/\tau_{\rm ot}}}{2\alpha} + \Theta(t_m - t_n)\int_{t_n-\delta/2}^{t_n+\delta/2} ds\,\frac{e^{-(t_m-s)/\tau_{\rm ot}} - e^{-(t_m+t_n+\delta/2-2s)/\tau_{\rm ot}}}{2\alpha}\right].$$
$$I^{(2)}_{nm} = 2D\,\tau_{\rm ot}\,(1-\delta_{nm})\,\alpha\left[\frac{\sinh(\alpha)}{\alpha}\right]^3 e^{-\alpha}\,e^{-|t_m-t_n|/\tau_{\rm ot}}.$$
$$I^{(3)}_{nm}/2D = \delta_{nm}\int_{t_n-\delta/2}^{t_n+\delta/2} ds\left(\frac{1 - e^{-(t_n+\delta/2-s)/\tau_{\rm ot}}}{2\alpha}\right)^2 = \delta_{nm}\,\frac{\delta}{2}\,\frac{4e^{-2\alpha} + 4\alpha - 3 - e^{-4\alpha}}{8\alpha^3},$$
so that
$$I^{(3)}_{nm} = 2D\,\tau_{\rm ot}\,\delta_{nm}\,\alpha\,\frac{4e^{-2\alpha} + 4\alpha - 3 - e^{-4\alpha}}{8\alpha^3}.$$
$$I_{nm} = D\,\tau_{\rm ot}\left[\frac{\sinh(\alpha)}{\alpha}\right]^2\left[e^{-2\alpha}\,e^{-|t_n-t_m|/\tau_{\rm ot}} - e^{-(t_n+t_m)/\tau_{\rm ot}}\right] + 2D\,\tau_{\rm ot}\,\alpha\left[\frac{\sinh(\alpha)}{\alpha}\right]^3 e^{-\alpha}\,e^{-|t_m-t_n|/\tau_{\rm ot}} + 2D\,\tau_{\rm ot}\,\delta_{nm}\left[\alpha\,\frac{4e^{-2\alpha} + 4\alpha - 3 - e^{-4\alpha}}{8\alpha^3} - \alpha\left[\frac{\sinh(\alpha)}{\alpha}\right]^3 e^{-\alpha}\right].$$
Using the identities
$$\left[\frac{\sinh(\alpha)}{\alpha}\right]^2 e^{-2\alpha} + 2\alpha\left[\frac{\sinh(\alpha)}{\alpha}\right]^3 e^{-\alpha} = \left[\frac{\sinh(\alpha)}{\alpha}\right]^2,\qquad \alpha\,\frac{4e^{-2\alpha} + 4\alpha - 3 - e^{-4\alpha}}{8\alpha^3} - \alpha\left[\frac{\sinh(\alpha)}{\alpha}\right]^3 e^{-\alpha} = \frac{\alpha - \cosh(\alpha)\sinh(\alpha)}{2\alpha^2},$$
we finally obtain
$$I_{nm} = D\,\tau_{\rm ot}\left[\frac{\sinh(\alpha)}{\alpha}\right]^2\left[e^{-|t_n-t_m|/\tau_{\rm ot}} - e^{-(t_n+t_m)/\tau_{\rm ot}}\right] + D\,\tau_{\rm ot}\,\delta_{nm}\,\frac{\alpha - \cosh(\alpha)\sinh(\alpha)}{\alpha^2}.$$
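As a numerical consistency check of this closed form, one can evaluate $2D\int_0^T g_n(s)\,g_m(s)\,ds$ by quadrature on a fine grid and compare (a sketch with assumed demo parameters; it requires $t_n - \delta/2 \ge 0$ and $\Delta t \ge \delta$):

```python
import numpy as np

# Assumed demo parameters (arbitrary units).
D, tau, dt, delta = 1.0, 1.0, 0.5, 0.2
alpha = delta / (2 * tau)
s_grid = np.linspace(0.0, 5.0, 500_001)   # integration grid, ds = 1e-5
ds = s_grid[1] - s_grid[0]

def g(n, s):
    """Kernel g_n(s) of the time-averaged solution (piecewise definition)."""
    tn = n * dt
    early = np.sinh(alpha) / alpha * np.exp(-(tn - s) / tau)
    inside = (1 - np.exp(-(tn + delta / 2 - s) / tau)) / (2 * alpha)
    return np.where(s <= tn - delta / 2, early,
                    np.where(s <= tn + delta / 2, inside, 0.0))

def I_numeric(n, m):
    """Direct quadrature of the covariance integral."""
    return 2 * D * np.sum(g(n, s_grid) * g(m, s_grid)) * ds

def I_closed(n, m):
    """Closed-form covariance, including the transient term."""
    tn, tm = n * dt, m * dt
    s2 = (np.sinh(alpha) / alpha) ** 2
    blur = (alpha - np.cosh(alpha) * np.sinh(alpha)) / alpha**2
    out = D * tau * s2 * (np.exp(-abs(tn - tm) / tau) - np.exp(-(tn + tm) / tau))
    if n == m:
        out += D * tau * blur
    return out
```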