Abstract
In this tutorial, three examples of stochastic systems are considered: a strongly damped oscillator, a weakly damped oscillator and an undamped oscillator (integrator) driven by noise. The evolution of these systems is characterized by the temporal correlation functions and spectral densities of their displacements, which are determined and discussed. Damped oscillators reach steady stochastic states. Their correlations are decreasing functions of the difference between the sample times, and their spectra have peaks near their resonance frequencies. An undamped oscillator never reaches a steady state. Its energy increases with time, and its spectrum is sharply peaked at low frequencies. The required mathematical methods and physical concepts are explained on a just-in-time basis, and some theoretical pitfalls are mentioned. The insights one gains from studies of oscillators can be applied to a wide variety of physical systems, such as atom and semiconductor lasers, which will be discussed in a subsequent tutorial.
© 2021 Optical Society of America
1. INTRODUCTION
A wide variety of physical systems are driven by noise. Systems of interest to us include atom and semiconductor lasers [1–4]. To model lasers, one has to solve equations for the upper-level electron density (or number), the optical power (or photon flux), and the optical phase. These equations are nonlinear and are driven by three different noise sources. Although power and phase fluctuations have been modeled successfully, the complexity of the mathematics obscures the underlying physics, which is relatively simple. In this tutorial, three simple stochastic systems (a strongly damped oscillator, a weakly damped oscillator and an undamped oscillator driven by noise) are studied, with only a moderate amount of mathematics. One can apply the insights gained from this study of oscillators to the aforementioned physical systems.
This tutorial focuses on systems whose evolution is governed by stochastic differential equations of the form

$${d_t}A(t) = L(A) + R(t), \tag{1}$$
where $A$ is a dependent variable (an oscillator displacement or a complex wave amplitude), $L$ is a deterministic linear function of its argument, and $R$ is a random function of time (noise source). In addition to linear systems, there are many nonlinear systems for which Eq. (1) governs the initial growth of the amplitude, or perturbations of the steady-state amplitude. Because the amplitude is driven by noise, it too is a random function of time (or has a random component). The properties of the amplitude are characterized by its first two moments in the time and frequency domains. The first temporal moment $M(t) = \langle A(t)\rangle$, where $\langle \;\rangle$ denotes an ensemble average, is called the mean, and the second moment $C(t,t^\prime) = \langle {A^*}(t)A(t^\prime)\rangle$ is called the two-time correlation. The variance $\langle |A(t)|^2\rangle - |\langle A(t)\rangle |^2$ is specified by the equal-time correlation and the mean. Similar definitions of mean, variance, and correlation apply to amplitude vectors (in which case $L$ is a matrix).

Fourier transforms are usually defined for integrable functions, which tend to zero as time tends to infinity. However, in the examples considered, noise is always present, so the driven amplitudes are not described by integrable functions. To avoid singularities in the theory, we use the finite-time transforms
In one interpretation of these equations, $A(t)$ is a Fourier series, in which the frequencies are separated by $2\pi /T$. However, if this frequency difference is much smaller than any frequency of interest, the series can be replaced by an integral. In another interpretation, $A(\omega)$ is the infinite-time transform of the truncated function ${A_T}(t) = A(t)H(t)H(T - t)$, where $H$ is the Heaviside step function. This tutorial is based on the latter interpretation. Although we introduced finite integration times to regularize the mathematics, they represent reality, in the sense that measurements of physical systems are taken over finite time intervals, so one can think of $T$ as the measurement time.

Some properties of Fourier transforms are reviewed in Appendix A. Of particular importance are the $\delta$-function identity
and the Parseval equation, which follows from Eqs. (2) and (4). In this paper, $A(t)$ is dimensionless, so $A(\omega)$ has units of time. However, in some applications (electronics and photonics), $|A(t)|^2$ has units of power, so its time integral has units of energy, and $|A(\omega)|^2$ has units of energy multiplied by time (divided by frequency). For this reason, $|A(\omega)|^2$ is called the spectral energy density, and $|A(\omega)|^2/T$ is called the spectral power density. We will use the descriptive terms power and energy whenever it is helpful to do so. The Parseval equation states that the time- and frequency-domain formulas for the energy of a system are equivalent. This equivalence provides a way to check the consistency of time- and frequency-domain analyses.
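The Parseval check described above is easy to perform numerically. The following sketch (Python with NumPy; the sampled signal and grid sizes are illustrative) evaluates the finite-time transform of Eq. (2) on a discrete grid and compares the time- and frequency-domain energies:

```python
import numpy as np

# Finite-time transform of Eq. (2) on a grid: A(omega_k) ~ dt * sum_t A(t) exp(+i*omega_k*t),
# which (for real samples) is dt times the conjugated discrete Fourier transform.
T, n = 10.0, 4096
dt = T / n
t = np.arange(n) * dt
A = np.exp(-t) * np.cos(5.0 * t)          # any finite-energy realization will do

A_w = dt * np.conj(np.fft.fft(A))
domega = 2 * np.pi / T                    # frequency spacing of the discrete grid

energy_time = np.sum(np.abs(A)**2) * dt
energy_freq = np.sum(np.abs(A_w)**2) * domega / (2 * np.pi)
print(energy_time, energy_freq)           # the two energies agree (Parseval)
```

On a discrete grid the agreement is exact up to rounding error, because the discrete transform satisfies its own Parseval identity.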
The two-frequency correlation $C(\omega ,\omega ^\prime) = \langle {A^*}(\omega)A(\omega ^\prime)\rangle$, where ${A^*}(\omega)$ is an abbreviation for ${[A(\omega)]^*}$, is the double Fourier transform of the two-time correlation $C(t,t^\prime)$. In turn, the temporal correlation is the double inverse transform of the frequency correlation. Some consequences of this relation are discussed in Appendix B. In the main text, it is sufficient to consider the spectral energy density $S(\omega) = C(\omega ,\omega)$, which is specified by the equation
This tutorial is organized as follows: In Section 2, the properties of white and colored noise are defined and discussed. In Sections 3–5, the behaviors of strongly damped, weakly damped and undamped oscillators driven by noise are studied. Damped oscillators attain stochastic steady states, whereas undamped oscillators do not. Formulas are derived for the temporal correlations of the displacements and their associated spectra. These formulas are nonsingular, but only because the integration (measurement) time is finite. In Section 6, the specific results obtained in the preceding sections are discussed in the broader context of stochastic processes [5–8]. Finally, in Section 7, the main results of the tutorial are summarized. For convenience, the tutorial also contains appendices on Fourier transforms and the Wiener–Khinchin theorem [9,10].
2. NOISE
The most common model of noise is white noise, which is defined by the properties
where $\sigma$ is the source strength. $R(t)dt$ is the impulse applied to a realization of the system during the time interval $dt$. The first equation states that the average impulse is zero (positive and negative, real and imaginary impulses are equally likely). The second and third equations state that different impulses are independent, and the real and imaginary parts of the same impulse have the same strength ($\sigma /2$), but are otherwise independent.

Although this noise model is common, the assumption of zero correlation time is problematic. First, the noise power $\langle |R(t)|^2\rangle = \sigma \delta (0)$ is undefined (infinite), as is the associated noise energy. Second, the spectral energy density
One can resolve these problems by assuming that the correlation time is very short, but nonzero. Colored noise also has properties (7) and (8). The correlation function
is exponentially localized and has a correlation (coherence) time of $1/\nu$. (Other localized functions are also suitable.) The normalized correlation, which is correlation (11) divided by $(\sigma \nu /2)$, is plotted as a function of the time difference $\tau = t^\prime - t$ in Fig. 1(a). The noise power $\langle |R(t)|^2\rangle = \sigma \nu /2$ is finite, because $\nu$ is finite, and the noise energy is finite, because $\nu$ and $T$ are both finite.

The spectral energy density
Notice that for white noise, the spectral power density $\langle |R(\omega {)|^2}\rangle /T = \sigma$, which is the two-sided-in-time Fourier transform of correlation function (9) with respect to the time difference $t - t^\prime $. For colored noise, $\langle |R(\omega {)|^2}\rangle /T \sim \sigma {\nu ^2}/({\nu ^2} + {\omega ^2})$, which is the transform of correlation function (11) with respect to the time difference. These results are not coincidental. They are manifestations of the Wiener–Khinchin theorem, which states that if the correlation function depends on the time difference $t - t^\prime $ (rather than $t$ and $t^\prime $ separately), then the spectral power density is (asymptotically) equal to the two-sided Fourier transform of the correlation function with respect to the time difference. This important result is proved in Appendix C.
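The Wiener–Khinchin statement can be illustrated numerically for white noise, whose spectral power density should be flat at the level $\sigma$. The sketch below (illustrative parameters; the sampling rule $\delta(0) \to 1/dt$ is an assumption of the discretization) averages periodograms over an ensemble of realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, n_t, n_real = 2.0, 10.0, 1024, 400
dt = T / n_t

# Discretized complex white noise: sample variance sigma/dt (delta(0) -> 1/dt),
# split equally between the real and imaginary parts.
R = np.sqrt(sigma / (2 * dt)) * (rng.standard_normal((n_real, n_t))
                                 + 1j * rng.standard_normal((n_real, n_t)))

# Finite-time transform R(omega) ~ dt * sum_t R(t) exp(+i*omega*t) on the FFT grid.
R_w = dt * np.conj(np.fft.fft(np.conj(R), axis=1))

# Ensemble-averaged spectral power density <|R(omega)|^2>/T: flat at the level sigma.
S_over_T = np.mean(np.abs(R_w)**2, axis=0) / T
print(S_over_T.mean(), S_over_T.std())
```

The mean over frequency bins approaches $\sigma$, and the residual scatter shrinks as the ensemble grows.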
In summary of this section, the energy provided by a noise source is only finite if the measurement (reception) time and the noise bandwidth are both finite. If the correlation time of colored noise is much shorter than any other time scale relevant to the receiving system, its effects on that system should be similar to those of white noise. For this reason, one usually analyzes the effects of white noise first, because the analysis is simpler, and checks the results for singularities. If none is present, then the white-noise model is sufficient.
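A practical way to generate colored noise with correlation (11) is to filter white noise through a first-order relaxation; the sketch below assumes the filter $d_tR = -\nu R + \nu \xi$, with $\xi$ white of strength $\sigma$, which is one convenient choice among many. It checks the predicted power $\sigma\nu/2$ and the exponential decay of the correlation over one correlation time:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, nu = 2.0, 50.0            # source strength and inverse correlation time
dt, n = 2e-4, 500_000

# White-noise impulses of strength sigma, filtered through d_tR = -nu*R + nu*xi.
dW = np.sqrt(sigma * dt / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
R = np.empty(n, dtype=complex)
R[0] = 0.0
a = np.exp(-nu * dt)             # exact relaxation over one step
for k in range(1, n):
    R[k] = a * R[k - 1] + nu * dW[k]

R = R[n // 10:]                  # discard the start-up transient
power = np.mean(np.abs(R)**2)    # should be close to sigma*nu/2 = 50
lag = int(round(1 / (nu * dt)))  # one correlation time, 1/nu
ratio = np.mean(R[:-lag].conj() * R[lag:]).real / power
print(power, ratio)              # ratio should be close to exp(-1)
```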
3. STRONGLY DAMPED OSCILLATOR
Consider the response of a driven oscillator (or wave), which is governed by the displacement (or amplitude) equation
where ${\nu _r}$ and ${\nu _i}$ are damping and detuning (frequency-shift) parameters, respectively. The Langevin source term ${R_a}$ [11] is a random function of time with the properties (7)–(9). For example, Eq. (18) governs the evolution of a cavity-mode amplitude in a below-threshold laser. Let $A(t) = B(t)\exp (- i{\nu _i}t)$. Then the transformed amplitude satisfies the stochastic differential equation (19), where ${R_b}(t) = {R_a}(t)\exp (i{\nu _i}t)$. It is easy to verify that ${R_b}$ has the same properties as ${R_a}$. In statistical physics, Eq. (19) defines an Ornstein–Uhlenbeck process [5,8,12,13].

Equation (19) is linear in the dependent variable, so one can use the superposition principle to write its solution in the Green-function form
where $G(t)$ is the solution of the equation together with the initial condition $G(0) = 0$. The solution of Eq. (21) is $H(t){e^{- {\nu _r}t}}$, from which it follows that the solution of Eq. (20) is given by Eq. (22), where $R$ and $\nu$ are abbreviations of ${R_b}$ and ${\nu _r}$, respectively. Although the impulse $R(s)ds$ is a random function of $s$, its subsequent effect on the amplitude is determined by a homogeneous linear equation, which one can solve using regular calculus. The stochastic differential equations that will be considered in Sections 4 and 5 are also linear, so regular calculus will be used throughout this tutorial. For nonlinear equations driven by noise (such as those that govern laser power and phase fluctuations), one has to choose between Ito and Stratonovich calculus [5]. It is fortunate that some interesting stochastic systems can be discussed without the extra complexity of stochastic calculus.

It follows from solution (20) and property (7) that $\langle B(t)\rangle = 0$. The amplitude is a random function of time, with a mean of zero. Its variance is $\langle |B(t)|^2\rangle - |\langle B(t)\rangle |^2$. For systems such as this one, which have means of zero, the variance equals the power $P(t) = \langle |B(t)|^2\rangle$. It follows from solution (20) that
The two-time correlation function
For times ($t$ and $t^\prime $) that are longer than the damping time, the second term on the right side of Eq. (26) is negligible, and the correlation is a function of only the time difference $\tau = t^\prime - t$. The correlation does not depend on the time origin, so $C(t,t + \tau) = C(0,\tau)$. In abbreviated notation,
The normalized correlation, which is correlation (27) divided by $\sigma /2\nu$, is plotted as a function of the time difference in Fig. 3(a). It decreases exponentially with the difference (positive or negative). This behavior reflects the fact that each noise impulse only affects the cumulative displacement for a time of the order of the damping time [Eq. (21)].

The correlation function (26) has two contributions: The first (longer-lasting) term is proportional to the correlation function of colored noise, which was specified in Eq. (11). Its contribution to the spectrum was stated in Eq. (16). The contribution to the spectrum of the second (transient) term is $-\sigma /[2\nu ({\nu ^2} + {\omega ^2})]$, plus exponentially small terms. Hence, the spectrum of a strongly damped oscillator is
One can also work in the frequency domain, in which the source function has the properties
By transforming Eq. (19), applying the initial condition $B(0) = 0$ and omitting $B(T)$, one finds that
The convolution theorem (Appendix A) shows that the frequency-domain solution (33) is consistent with the time-domain solution (22). [In Appendix D, the inverse transform of $1/(\nu - i\omega)$ is shown to be $H(t){e^{- \nu t}}$.] By combining Eqs. (31) and (33), one obtains the spectrum (34). This spectrum is consistent with the first term in Eq. (28). It misses the second and third terms, which are of relative order $1/\nu T$. (The second term is the finite-time correction to the Lorentzian, and the third term is the transient deficit.) Thus, for a strongly damped oscillator, which attains a steady stochastic state, the standard frequency-domain analysis, which omits $B(T)$, is only asymptotically correct.

Results (27) and (28) apply to the transformed amplitude $B$. For the amplitude $A$, the corresponding results are
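The steady-state results of this section can be verified by direct simulation. The sketch below (illustrative parameters) integrates Eq. (19) with an exact per-step decay and one white-noise impulse per step, then checks the steady-state power $\sigma/2\nu$ and the Lorentzian spectral power density $\sigma/(\nu^2+\omega^2)$ at zero frequency:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, nu, T, n_t, n_real = 2.0, 5.0, 20.0, 2000, 300
dt = T / n_t
a = np.exp(-nu * dt)    # exact decay of the homogeneous solution over one step

# Start each realization near the steady state (power sigma/2*nu) to suppress transients.
P_ss = sigma / (2 * nu)
B = np.sqrt(P_ss / 2) * (rng.standard_normal(n_real) + 1j * rng.standard_normal(n_real))

traj = np.empty((n_real, n_t), dtype=complex)
for k in range(n_t):
    # One step: exact decay plus one white-noise impulse of strength sigma.
    dW = np.sqrt(sigma * dt / 2) * (rng.standard_normal(n_real)
                                    + 1j * rng.standard_normal(n_real))
    B = a * B + dW
    traj[:, k] = B

power = np.mean(np.abs(traj)**2)    # should be close to sigma/(2*nu) = 0.2

# Finite-time transform (kernel exp(+i*omega*t)) and averaged spectral power density.
B_w = dt * np.conj(np.fft.fft(np.conj(traj), axis=1))
S_over_T = np.mean(np.abs(B_w)**2, axis=0) / T
print(power, S_over_T[0])           # S_over_T[0] should be close to sigma/nu^2 = 0.08
```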
4. WEAKLY DAMPED OSCILLATOR
In this section, the response of a weakly damped oscillator to a noise source is studied. The material is important for two reasons. First, oscillatory behavior is ubiquitous in science and engineering. Second, the response of a weakly damped oscillator is intermediate between the responses of a strongly damped oscillator (Section 3) and an undamped oscillator (Section 5). However, the physical results of this section are similar to those of Section 3 (after transients have died out, the oscillator displacement reaches a stochastic steady state in which the temporal correlation depends on only the time difference, and the spectrum is peaked near the natural frequency), but require much more mathematical effort to obtain. Consequently, we recommend that readers proceed directly to Section 5 and return to Section 4 later.
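For readers who want a quick numerical preview of this section's results, the sketch below simulates a weakly damped oscillator driven by noise and confirms that the averaged periodogram peaks near the natural frequency. The equations of motion are assumed in the form $d_tX = V$ and $d_tV = -2\nu_0 V - \omega_0^2 X + \epsilon R$, consistent with the transfer function quoted in Appendix A; real-valued noise is used for simplicity, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, eps, nu0, w0 = 2.0, 1.0, 0.5, 10.0       # illustrative parameters
dt, n_t, n_real = 0.005, 20_000, 16

X = np.zeros(n_real)
V = np.zeros(n_real)
traj = np.empty((n_real, n_t))
for k in range(n_t):
    R = np.sqrt(sigma / dt) * rng.standard_normal(n_real)   # discretized real white noise
    V += dt * (-2 * nu0 * V - w0**2 * X + eps * R)          # assumed form of Eq. (38)
    X += dt * V                                             # assumed form of Eq. (37)
    traj[:, k] = X

traj = traj[:, 2000:]               # discard several damping times (1/nu0 = 2) of transient
S = np.mean(np.abs(np.fft.rfft(traj, axis=1))**2, axis=0)   # averaged periodogram
w = 2 * np.pi * np.fft.rfftfreq(traj.shape[1], d=dt)

w_peak = w[np.argmax(S)]
print(w_peak)                       # the peak lies near the resonance w0 = 10
```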
Let $X$ and $V$ be the oscillator displacement and velocity, respectively. Then the oscillator equations are
where ${\nu _0}$ and ${\omega _0}$ are damping and frequency parameters, respectively. In Eq. (38), $\epsilon$ is a book-keeping parameter with units of frequency, which allows the source function $R$ to have the same properties in this section as it did in the previous ones [Eqs. (7)–(9)]. The oscillator equations are similar to the ones that govern electron-number ($N$) and photon-number ($P$) fluctuations in an above-threshold laser [3,4]. $X$ corresponds to $P$, and $V$ corresponds to $N$.

Equations (37) and (38) are linear, so their solutions can be written in the forms
where the time-domain Green functions are given by Eqs. (40) and (41). It follows from Eqs. (9), (39), and (41) that the squared displacement (power) is
The energy is the time integral of the power. The component integrals are
It also follows from Eqs. (9), (39), and (41) that the temporal correlation
For times ($t$ and $t^\prime $) that are longer than the damping time $1/{\nu _0}$, the second contributions to integrals (53) and (54) can be neglected. By omitting these terms and combining the cosine terms that remain, one obtains the time-asymptotic correlation
The spectral energy density is defined by Eqs. (6) and (56), the second of which can be rewritten in the form
Now consider the transient terms, which were omitted from correlation (56). When one Fourier transforms the transient term in Eq. (53), the double integral involves exponentials with the arguments
The frequency-domain versions of Eqs. (37) and (38), which are based on the standard derivative rule, can be written in the matrix form
By combining Eqs. (31) and (63), one obtains the spectral energy density
Solution (63) implies that $V(\omega) = - i\omega X(\omega)$. By doing another contour integral [14], one finds that the spectral energy
For a weakly damped oscillator, most of the spectral energy is contained in the peaks near ${\pm}{\omega _0}$. Let $\omega = \pm {\omega _0} + \delta$, where $\delta$, ${\nu _0} \ll {\omega _0}$. Then, near each peak, the spectral density
The normalized spectral density, which is density (69) divided by ${\epsilon ^2}\sigma T/\omega _0^4$, is plotted as a function of frequency in Fig. 6. Spectrum (69) has a maximum of ${\epsilon ^2}\sigma T/4\nu _0^2\omega _0^2$ and an effective width of ${\pm}{\nu _0}$, and contains the spectral energy ${\epsilon ^2}\sigma T/8{\nu _0}\omega _0^2$. [The integral of $1/(\nu _0^2 + {\delta ^2})$ is $\pi /{\nu _0}$.] The sum of the spectral energies of the two Lorentzian peaks equals the total spectral energy [Eq. (66)]. These results suggest that the strongly-damped-oscillator equation (19) models the behavior of a resonantly driven, weakly damped oscillator.

It is easy to establish this connection. By combining Eqs. (37) and (38), one obtains the second-order equation
Suppose that the displacement is given by ansatz (71), in which the amplitude $B$ varies slowly compared to the phase factor. Then, by substituting ansatz (71) in Eq. (70), and omitting the terms $d_{\textit{tt}}^2B$ and $2{\nu _0}{d_t}B$, one finds that $B$ satisfies Eq. (72), where ${R_b} = i\epsilon R(t){e^{i{\omega _0}t}}/2{\omega _0}$. This source function satisfies an equation similar to Eq. (9), in which the source strength ${\sigma _b} = {\epsilon ^2}\sigma /4\omega _0^2 = {\sigma _v}/4\omega _0^2$. Thus, a strongly damped oscillator can be considered as the near-resonance limit of a weakly damped oscillator.

5. UNDAMPED OSCILLATOR
Now consider the response of an undamped oscillator (integrator) that is driven by noise. The amplitude equation is
where ${R_a}$ has the properties (7)–(9). The solution of Eq. (73) is given by Eq. (74), which can be written in the form of Eq. (20), with $G(t) = H(t)$. Solution (74) describes a Wiener process, which is also called Brownian motion [5,8]. It follows from solution (74) that the correlation function is given by Eq. (75), which depends on $\min (t,t^\prime)$.

Despite the fact that the oscillator response is nonstationary, the spectral energy density can be determined. It follows from Eqs. (6) and (75) that
For low frequencies, $S(\omega) \approx \sigma {T^3}/3$, which is finite because $T$ is finite, whereas for high frequencies, $S(\omega) \approx 2\sigma T/{\omega ^2}$. The spectral energy
It is sometimes helpful to replace the complicated function $[1 - {\rm{sinc}}(x)]/{x^2}$ by the simple Lorentzian $1/({a^2} + {x^2})$, which is finite for $x = 0$ and tends to $1/{x^2}$ as $x \to \infty$. This Lorentzian has a maximum of $1/{a^2}$ and an integral of $\pi /a$. By choosing $a = {6^{1/2}}$, one reproduces the maximum correctly, but underestimates the integral. In contrast, by choosing $a = 2$, one overestimates the maximum, but reproduces the integral correctly. We prefer the second choice, because the associated spectrum satisfies the Parseval relation. The exact and approximate spectral densities, divided by $2\sigma {T^3}$, are plotted as functions of frequency in Fig. 8.
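The comparison above is easy to verify numerically. The sketch below evaluates $[1-{\rm sinc}(x)]/x^2$ on a grid, and checks its maximum ($1/6$) and integral ($\pi/2$) against the two Lorentzian surrogates:

```python
import numpy as np

# The exact spectral shape f(x) = [1 - sinc(x)]/x^2, with sinc(x) = sin(x)/x.
# (np.sinc includes a factor of pi, hence the rescaling of the argument.)
x = np.linspace(-1000.0, 1000.0, 2_000_000)     # even count, so x = 0 is excluded
f = (1 - np.sinc(x / np.pi)) / x**2

f_max = f.max()                       # limit at x -> 0 is 1/6
integral = np.sum(f) * (x[1] - x[0])  # exact value is pi/2
print(f_max, integral)

# Lorentzian surrogates 1/(a^2 + x^2):
# a = 6**0.5 matches the maximum (1/6) but its integral, pi/6**0.5, is below pi/2;
# a = 2 matches the integral (pi/2) but its maximum, 1/4, exceeds 1/6.
print(np.pi / 6**0.5, np.pi / 2)
```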
If we had solved the problem in the frequency domain, using the standard derivative rule, we would have found that
It follows from this solution that the spectrum is given by Eq. (86). Although these results were easy to obtain, they are wrong in several ways. First, the inverse transform of $i/\omega$ is ${\rm{sign}}(t)/2$ (Appendix D), not $H(t)$, so solution (85) is not consistent with solution (74). Second, not only is spectrum (86) infinite at the origin, but its integral is infinite, and it underestimates spectrum (80) by a factor of two at high frequencies. These are all serious failures, which were caused by a naive application of the standard derivative rule. The derivative rule is applied carefully in Appendix E.

When singularities occur in the analysis of a physical system, it is common to add some damping ($\alpha$) to the system, redo the analysis, and take the limit as $\alpha \to 0$. By following this prescription, one would obtain Eq. (85), with $\omega$ replaced by $\omega + i\alpha$, and Eq. (86), with ${\omega ^2}$ replaced by ${\omega ^2} + {\alpha ^2}$. The inverse transform of $i/(\omega + i\alpha)$ is $H(t){e^{- \alpha t}}$ (Appendix D), which tends to $H(t)$ as $\alpha \to 0$. This result is consistent with Eq. (74). However, the integral of the spectrum is $\sigma T/2\alpha$, which tends to $\infty$ as $\alpha \to 0$. For an undamped oscillator, the standard prescription does not work.
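The time-domain solution (74), by contrast, is straightforward to verify by simulation. The sketch below (illustrative parameters) builds Wiener paths as cumulative sums of independent impulses and checks that the power grows as $\sigma t$ and that the correlation of two samples is $\sigma \min(t,t^\prime)$:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, dt, n_t, n_real = 2.0, 2.5e-3, 400, 5000     # grid covers 0 <= t <= 1

# Wiener paths: cumulative sums of independent complex impulses [solution (74)].
dW = np.sqrt(sigma * dt / 2) * (rng.standard_normal((n_real, n_t))
                                + 1j * rng.standard_normal((n_real, n_t)))
A = np.cumsum(dW, axis=1)

P_end = np.mean(np.abs(A[:, -1])**2)    # power at t = 1: close to sigma*t = 2
# Correlation of the samples at t = 0.5 and t' = 1: close to sigma*min(t, t') = 1.
C = np.mean(A[:, n_t // 2 - 1].conj() * A[:, -1]).real
print(P_end, C)
```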
In this section and the previous ones, only the spectra $S(\omega)$ were calculated [Eqs. (10), (16), (28), (59), and (80)]. The associated frequency correlations $C(\omega ,\omega ^\prime)$ are calculated in Appendix F, for completeness.
6. DISCUSSION
In Sections 3–5, the systems (processes) considered were characterized by their means and two-time correlations (from which their variances can be deduced). These quantities provided useful information about the processes, but did not describe them completely. In this section, the probability distributions of the displacements are discussed and the processes are characterized.
Let $X$ denote a random variable (process). Then the probability that $X$ has a value between $x$ and $x + dx$ is $P(x)dx$, where $P$ is the probability distribution function. The process is said to have Gaussian statistics if the probability distribution is the Gaussian function
where $\mu$ and $\sigma$ are parameters. In the rest of this section, we will use the same notation for $X$ and $x$. The $n$th moment $\langle {x^n}\rangle$ is defined as the integral of ${x^n}P(x)$ over all $x$. For distribution function (87), the first moment (mean) $\langle x\rangle =\mu$, and the second moment $\langle {x^2}\rangle = {\sigma ^2} + {\mu ^2}$, from which it follows that the variance $\langle {x^2}\rangle - {\langle x\rangle ^2} = {\sigma ^2}$.

The characteristic function
Now let $X = [{x_1},{x_2} \ldots {x_n}{]^t}$ denote multiple random variables (displacement and velocity, for example) and $K = [{k_1},{k_2} \ldots {k_n}]$ denote multiple transform variables. Then this multiple-variable process is said to be Gaussian, with zero means, if its characteristic function
where $M = [{m_{\textit{ij}}}]$ is a covariance matrix, which determines the moments completely.

For many stochastic processes, it is reasonable to assume that the noise source has Gaussian statistics. To be precise, define the (Wiener) increment, or impulse, $W(\tau) = \int _0^\tau R(s){\rm d}s$, where $R(s)$ is the source function and $\tau$ is a short time interval. Then, for white noise, the impulses have a Gaussian distribution function with mean $\langle W(\tau)\rangle = 0$ and variance $\langle |W(\tau)|^2\rangle = \sigma \tau$, where $\sigma$ is the source strength. This assumption is common for two complementary reasons. First, suppose that $\hat a$ is the amplitude operator of a quantum mode, in which case $\langle {\hat a^\dagger}\hat a\rangle$ is the expected number of quanta. Then, if the mode is in a coherent state, the expected quadratures have Gaussian probability distributions with variances of 1/2 [15,16]. If the mode experiences linear amplification or attenuation, the quadrature means and variances change, but their distributions remain Gaussian. One can reproduce this behavior semi-classically by adding noise terms with Gaussian statistics to the classical amplitude equation. (Gain and loss both degrade the signal-to-noise ratio.) Second, if a classical displacement is driven by many independent impulses, the central limit theorem [5,8] ensures that the displacement statistics are Gaussian, even if the impulse statistics are not.
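The central-limit argument can be demonstrated directly: sums of decidedly non-Gaussian (uniform) impulses are already very nearly Gaussian for a few hundred terms. A sketch (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(6)
n_impulses, n_real = 200, 50_000

# Each displacement is a sum of many independent, decidedly non-Gaussian impulses
# (uniform on [-1, 1], variance 1/3); the sums are standardized to unit variance.
impulses = rng.uniform(-1.0, 1.0, size=(n_real, n_impulses))
s = impulses.sum(axis=1) / np.sqrt(n_impulses / 3.0)

frac_1sigma = np.mean(np.abs(s) < 1.0)           # Gaussian value: about 0.683
kurtosis = np.mean(s**4) / np.mean(s**2)**2      # Gaussian value: 3
print(frac_1sigma, kurtosis)
```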
A time-dependent random process is said to be strictly stationary if the $n$th moment $\langle x({t_1})x({t_2}) \ldots x({t_n})\rangle$ is independent of the origin of time. In other words, one can replace ${t_1},{t_2}, \ldots ,{t_n}$ by ${t_1} - \tau ,{t_2} - \tau , \ldots ,{t_n} - \tau$ without changing the moment. A process is said to be weakly (or wide-sense) stationary if the first two moments $\langle x({t_1})\rangle$ and $\langle x({t_1})x({t_2})\rangle$ are independent of the time origin. For such a process, the mean is constant, and the two-time correlation is a function of only the time difference ${t_2} - {t_1}$. Because the mean is constant, one can simplify the analysis of the process by measuring $x(t)$ relative to the mean.
A time-dependent process is said to have Gaussian statistics if its $n$-sample characteristic function has the form (92). If such a process is weakly stationary, it is also strictly stationary, because the correlation condition applies to arbitrary ${t_1}$ and ${t_2}$ (or ${t_i}$ and ${t_j}$). Hence, in the covariance matrix, each entry ${m_{\textit{ij}}}$ is a function of ${t_j} - {t_i}$, which is independent of the time origin. Consequently, so also is the $n$th moment, which is determined completely by the covariance matrix.
In Sections 3–5, the oscillator displacements (22), (39), and (74) are sums of Gaussian impulses multiplied by deterministic response (Green) functions, so the displacements also have Gaussian statistics, which are specified completely by their means (of zero) and correlations (27), (56), and (75), from which the variances follow. In steady state, strongly and weakly damped oscillations are weakly stationary processes. Hence, as explained above, they are also strictly stationary. In contrast, undamped oscillation is a nonstationary process, because the variance of the displacement increases linearly with time. Nonetheless, undamped oscillation has stationary increments: The displacement difference
Because the source term in Eq. (96) is stationary, the integral (increment) does not depend on the time origin $t$, so the displacement difference is a function of $\tau$ only: $\delta X(\tau) = \int _0^\tau R(s){\rm d}s$. Thus, the analyses of Sections 3–5 provide examples of the three main classes of stationarity, about which much is known [7].

7. SUMMARY
In this tutorial, three examples of stochastic systems were discussed: a strongly damped oscillator, a weakly damped oscillator and an undamped oscillator (integrator) driven by noise. The dynamics of these systems are characterized by the temporal correlation functions and spectral densities of their displacements [$\langle {A^*}(t)A(t^\prime)\rangle$ and $\langle |A(\omega {)|^2}\rangle$, respectively], which were determined and discussed.
After transients associated with the initial conditions have died out, strongly and weakly damped oscillators attain stochastic steady states, in which the temporal correlations (27) and (56) depend on the time difference $|t - t^\prime |$, rather than $t$ and $t^\prime $ separately. The correlations are decreasing functions of the time difference, because the effects of noise impulses only influence the displacements for times of the order of the damping time. (These systems have finite memory.) Because these systems (processes) are weakly stationary, the Wiener–Khinchin theorem applies, and their spectral energy densities (28) and (59) are the Fourier transforms of their temporal correlations with respect to the time difference, multiplied by the measurement time $T$. Although the idealized noise model on which these results are based (white noise) has infinite energy, the spectral energy densities and integrated spectral energies are finite, because damping filters the applied noise.
In contrast, an undamped oscillator never reaches a steady state. The oscillator power increases monotonically with time, because the oscillator integrates the effects of the applied noise impulses. (This system has infinite memory.) The correlation (75) is a function of $\min (t,t^\prime)$, because the noise applied between $t$ and $t^\prime $ has no effect on the correlation of the earlier and later displacements. The spectrum (80) has a sharp peak at zero frequency, but the spectral energy density and integrated spectral energy are finite if the measurement time is finite. Because this system is nonstationary, the Wiener–Khinchin theorem does not apply.
For each system, the dynamics were analyzed in the time domain (by solving differential equations) and in the frequency domain (by solving algebraic equations). The frequency-domain analyses were based on the standard derivative rule that the Fourier transform of ${d_t}A$ is the transform of $A$ multiplied by ${-}i\omega$. For damped oscillators, the frequency-domain analyses miss the transient displacements, but predict the time-asymptotic steady states accurately. (The relative errors in the spectral densities and energies are of order $1/\nu T$, where $\nu$ is the damping rate.) However, for an undamped oscillator, the frequency-domain analysis produces results that are incorrect and singular. These failures are caused by the omission of the final value $A(T)$ in the derivative rule. When one models driven systems, in which the displacements do not tend to zero as time tends to infinity, one should use the derivative rule with caution.
The mathematical methods required to study randomly driven oscillators and the physical insights one gains from such a study can be applied to a variety of problems, in fields such as biology, chemistry, finance, and optimization [17–20]. In a subsequent tutorial [21], we will use the methods and insights discussed herein to study laser linewidths (phase fluctuations). This tutorial was restricted to linear oscillators subject to additive noise. A review of linear and nonlinear oscillators, subject to additive and multiplicative noise, can be found in [22].
APPENDIX A: FOURIER TRANSFORMS
Let ${A_2}(t)$ be an integrable two-sided function of time (defined for positive and negative times). Then the forward and backward Fourier transforms are defined by the equations
The convolution is defined by the integral
where $\tau$ is arbitrary. In contrast to the correlation integral, ${A_2}$ is not conjugated, and ${B_2}$ is a function of $\tau - t$, not $\tau + t$. The convolution theorem states that the transform of a convolution is the product of the transforms of the convolved functions. It is used in analyses of linear systems, in which context ${A_2}$ is the driving term, and ${B_2}$ is the response (Green) function.

This tutorial involves the solutions of initial-value problems, which are defined for nonnegative times. If these solutions were integrable, one could derive one-sided versions of the correlation and convolution theorems from the two-sided versions by replacing ${A_2}(t)$ with ${A_1}(t) = {A_2}(t)H(t)$, where $H$ is the Heaviside step function. However, in the problems considered, the dependent variables are driven by noise, which is always present, so the solutions are not integrable. One can prevent singularities in the analyses of these problems by limiting the integration (measurement) time to the large, but finite, value $T$, and one can derive finite-time versions of the correlation and convolution theorems from the one-sided versions by replacing ${A_1}(t)$ with ${A_T}(t) = {A_1}(t)H(T - t) = {A_2}(t)H(t)H(T - t)$. In this appendix, finite-time results are derived directly from the two-sided results stated above. One can deduce the corresponding one-sided results by letting $T \to \infty$ and assuming integrability.
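The role of the doubled support $[0, 2T]$ is easy to see in a discrete experiment: the product of transforms reproduces the convolution of two truncated functions only if the transform grid is zero-padded to twice the signal length. A sketch (illustrative functions):

```python
import numpy as np

# A and B are truncated to [0, T]; their convolution lives on [0, 2T], so the
# product of transforms reproduces it only on a grid zero-padded to length 2n.
T, n = 4.0, 1024
dt = T / n
t = np.arange(n) * dt
A = np.exp(-t)                     # illustrative driving term
B = np.exp(-3 * t) * (t < 2.0)     # illustrative response (Green) function

conv_direct = np.convolve(A, B) * dt                    # discrete C(tau), length 2n - 1

A_w = np.fft.fft(A, 2 * n) * dt                         # transforms on the padded grid
B_w = np.fft.fft(B, 2 * n) * dt
conv_fft = np.fft.ifft(A_w * B_w).real[: 2 * n - 1] / dt

err = np.max(np.abs(conv_direct - conv_fft))
print(err)                         # agreement to rounding error
```

Without the padding, the circular convolution computed by the FFT wraps the tail of $C(\tau)$ back onto $[0, T]$ and the two results disagree.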
The finite-time amplitudes ${A_T}(t)$ and ${B_T}(t)$ are only nonzero for $0 \le t \le T$, so the Fourier transform
and the correlation is given by Eq. (A9). On the right side of Eq. (A9), the second function is nonzero for ${-}t \le \tau \le T - t$. Because the extreme values of $t$ are zero and $T$, the extreme values of $\tau$ are ${-}T$ and $T$. Hence, the Fourier transform is given by Eq. (A10). The integration region for the correlation theorem is illustrated in Fig. 9(a). By combining Eqs. (A9) and (A10), and reversing the order of integration, one finds that
The finite-time convolution is
$$C_T(\tau) = \int_0^T A_T(t)\,B_T(\tau - t)\,dt. \tag{A13}$$
On the right side of Eq. (A13), the second function is nonzero for $t \le \tau \le T + t$, so the formal upper limit $T$ can be replaced by the effective limit $\tau$. Because the extreme values of $t$ are zero and $T$, the extreme values of $\tau$ are zero and $2T$. Hence, the Fourier transform of the convolution is
$$C_T(\omega) = \int_0^{2T} C_T(\tau)\,e^{i\omega\tau}\,d\tau. \tag{A14}$$
The integration region for the convolution theorem is illustrated in Fig. 9(b). By combining Eqs. (A13) and (A14), and reversing the order of integration, one finds that
$$C_T(\omega) = A_T(\omega)\,B_T(\omega). \tag{A15}$$
The preceding discussion of the convolution theorem was based on the assumption that the functions $A$ and $B$ are both defined for arguments between zero and $T$. Solutions (20), (39), and (74) were written in the Green-function form
$$A(t) = \int_0^t G(t - s)\,R(s)\,ds,$$
where $0 \le s \le t$ (causality) and $t \le T$ (finite time). The transformed amplitude is approximately $G_T(\omega)R_T(\omega)$. For a strongly damped oscillator, the transformed Green function is ${G_T}(\omega) = 1/(\nu - i\omega)$, and for a weakly damped oscillator, it is ${G_T}(\omega) = \epsilon /(\omega _0^2 - 2i{\nu _0}\omega - {\omega ^2})$, where exponentially small terms were omitted. By taking the squared magnitudes of these functions and multiplying by the noise spectral density $\sigma T$, one obtains the largest contributions to spectrum (28) and spectrum (59), respectively. Thus, for damped oscillators, formula (A15) produces spectra that are asymptotically correct.
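One can check the quoted transform of the strongly damped Green function numerically; a minimal sketch, assuming illustrative values of $\nu$ and $T$ with $\nu T \gg 1$:

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule (works for complex-valued y)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# finite-time transform of G(t) = H(t) exp(-nu t) versus 1/(nu - i w);
# the two should agree up to terms of order exp(-nu T)
nu, T = 2.0, 10.0
t = np.linspace(0.0, T, 20001)
errors = []
for w in (0.0, 1.0, 5.0):
    GT = trap(np.exp((1j * w - nu) * t), t)
    errors.append(abs(GT - 1.0 / (nu - 1j * w)))
print(errors)  # all small (exponential tail plus quadrature error)
```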
For an undamped oscillator, $G(t) = H(t)$, so its transform is
$$G_T(\omega) = \int_0^T e^{i\omega t}\,dt = \frac{e^{i\omega T} - 1}{i\omega},$$
and the associated spectrum is
$$\sigma T\,|G_T(\omega)|^2 = \frac{4\sigma T\sin^2(\omega T/2)}{\omega^2}, \tag{A20}$$
where nothing was omitted. Formula (A20) is much better than the naive formula $\sigma T/{\omega ^2}$. It has a maximum of $\sigma {T^3}$ at $\omega = 0$ and an integral of $2\pi \sigma {T^2}$, and for $|\omega| T \gg 1$ its local average is $2\sigma T/{\omega ^2}$ (with the correct factor of two). However, it differs from the exact formula (80), which has a maximum of $\sigma {T^3}/3$ and an integral of $\pi \sigma {T^2}$. Thus, for the systems of interest to us, the convolution theorem does not apply as written, and for an undamped oscillator, the relative error is of order one, which does not tend to zero as $T$ tends to infinity. It is for this reason that we did not use the standard formula $A(\omega) = G(\omega)R(\omega)$ to calculate spectra.

APPENDIX B: DOUBLE FOURIER TRANSFORMS
Equation (6) shows that one can always determine the spectral energy density of a stochastic process from the two-time correlation. However, one cannot always determine the temporal correlation from the spectrum, because there is only one frequency variable ($\omega$), but two time variables ($t$ and $t^\prime$) are required. (If one already knows that the process is stationary, then one can apply the Wiener–Khinchin theorem, which is stated in Appendix C.) In this appendix, the relations between the time- and frequency-domain characterizations of stochastic processes are discussed.
Define the two-time correlation $C(t,t^\prime) = \langle {A^*}(t)A(t^\prime)\rangle$ and the two-frequency correlation $C(\omega ,\omega ^\prime) = \langle {A^*}(\omega)A(\omega ^\prime)\rangle$, and suppose that both functions are integrable. Then they are linked by the double Fourier transforms
$$C(\omega ,\omega^\prime) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} C(t,t^\prime)\,e^{i(\omega^\prime t^\prime - \omega t)}\,dt\,dt^\prime, \tag{B1}$$
$$C(t,t^\prime) = \frac{1}{(2\pi)^2}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} C(\omega ,\omega^\prime)\,e^{-i(\omega^\prime t^\prime - \omega t)}\,d\omega\,d\omega^\prime. \tag{B2}$$
Now consider some examples. Suppose that the correlation
$$C(t,t^\prime) = S(t)\,\delta(t^\prime - t) \tag{B5}$$
is $\delta$-correlated in time. Then its transform,
$$C(\omega ,\omega^\prime) = \int_{-\infty}^{\infty} S(t)\,e^{i(\omega^\prime - \omega)t}\,dt = S(\omega^\prime - \omega), \tag{B6}$$
is a function of the frequency difference $\omega ^\prime - \omega$. For the special case in which $S(t) = \sigma$, where $\sigma$ is constant, $C(\omega ,\omega ^\prime) = 2\pi \sigma \delta (\omega - \omega ^\prime)$. Conversely, if the correlation
$$C(\omega ,\omega^\prime) = 2\pi S(\omega)\,\delta(\omega^\prime - \omega) \tag{B7}$$
is $\delta$-correlated in frequency, then its inverse transform,
$$C(t,t^\prime) = S(t^\prime - t), \tag{B8}$$
is a function of the time difference $t^\prime - t$. For the special case in which $S(\omega) = \sigma$, $C(t,t^\prime) = \sigma \delta (t - t^\prime)$, as stated above. If we had used frequency $f$ rather than angular frequency $\omega$, the factors of $2\pi$ in the preceding results would have been absent, because $a\delta (ax) = \delta (x)$. The bidirectional relations described by Eqs. (B5)–(B8) are well known. However, they are also problematic. In the first relation, the power $\langle |A(t)|^2\rangle = S(t)\delta (0)$ is undefined, whereas in the second relation, the spectrum $\langle |A(\omega)|^2\rangle = 2\pi S(\omega)\delta (0)$ is undefined.

Because the displacements and correlation functions associated with driven processes are not integrable, one replaces infinite-time integrals by finite ones. In the one-sided formalism, the amplitude ${A_T}(t)$ is nonzero for $0 \le t \le T$. The frequency correlation is
$$C_T(\omega ,\omega^\prime) = \int_0^T\!\!\int_0^T C_T(t,t^\prime)\,e^{i(\omega^\prime t^\prime - \omega t)}\,dt\,dt^\prime.$$
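A short Monte Carlo experiment illustrates the statement that white noise is $\delta$-correlated in frequency: the diagonal correlation $\langle |R(\omega)|^2\rangle$ approaches $\sigma T$, whereas correlations between distinct frequencies are much smaller. The grid, seed, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, T, n, trials = 1.0, 10.0, 512, 4000
dt = T / n
t = np.arange(n) * dt
w, wp = 3.0, 7.0                  # two distinct probe frequencies
diag, off = 0.0, 0.0
for _ in range(trials):
    R = rng.normal(0.0, np.sqrt(sigma / dt), n)   # discretized white noise
    Rw = np.sum(R * np.exp(1j * w * t)) * dt      # finite-time transform
    Rwp = np.sum(R * np.exp(1j * wp * t)) * dt
    diag += abs(Rw) ** 2
    off += np.conj(Rw) * Rwp
diag /= trials
off /= trials
print(diag)        # approaches sigma * T
print(abs(off))    # much smaller than diag
```

The sample variance $\sigma/dt$ makes the discrete increments mimic a continuous white-noise source with correlation $\sigma\delta(t - t^\prime)$.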
Now reconsider the previous examples with finite-time integrals. If the temporal correlation ${C_T}(t,t^\prime) = {S_T}(t)\delta (t - t^\prime)$, then its transform is
$$C_T(\omega ,\omega^\prime) = \int_0^T S_T(t)\,e^{i(\omega^\prime - \omega)t}\,dt.$$
APPENDIX C: WIENER–KHINCHIN THEOREM
In Section 2, the spectral power densities of white and colored noise were shown to be the Fourier transforms of their temporal correlation functions with respect to the time difference. In Sections 3 and 4, this relation, which is called the Wiener–Khinchin theorem, was demonstrated for strongly and weakly damped oscillators that have attained stochastic steady states. This theorem is similar to the auto-correlation theorem of Appendix A. The difference is that the correlation theorem involves the time integral of an integrable function, whereas the Wiener–Khinchin theorem involves the ensemble average of a nonintegrable function.
The inverse transform of the spectral energy density is
For white noise, only the line $\tau = 0$ contributes to the integral in Eq. (C1), so Eq. (C3) is exact. For colored noise, Eq. (C3) is asymptotic, as written. For a damped system that starts from rest, the values of the correlation $C(t,t + \tau)$ are smaller than the steady-state value $C(\tau)$ for $0 \lt t \lt 1/\nu$ and comparable to it for $1/\nu \lt t \lt T$. The lengths of the left line segments (one line for each value of $\tau$) are much shorter than those of the right segments, so Eq. (C3) also applies when the transient response is included.
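The Wiener–Khinchin relation can be illustrated numerically for a strongly damped oscillator: the averaged periodogram of simulated trajectories of $d_tA = -\nu A + R$ approaches the Lorentzian spectrum $\sigma/(\nu^2 + \omega^2)$. The Euler discretization, seed, and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, nu, T, n, trials = 1.0, 1.0, 100.0, 4096, 400
dt = T / n
t = np.arange(n) * dt
w = 2.0                                    # probe frequency
est = 0.0
for _ in range(trials):
    R = rng.normal(0.0, np.sqrt(sigma / dt), n)
    A = np.zeros(n)                        # starts from rest
    for k in range(1, n):                  # Euler step for d_t A = -nu A + R
        A[k] = A[k - 1] + dt * (-nu * A[k - 1] + R[k - 1])
    Aw = np.sum(A * np.exp(1j * w * t)) * dt
    est += abs(Aw) ** 2
est /= trials * T
print(est, sigma / (nu ** 2 + w ** 2))     # comparable
```

Because $\nu T \gg 1$, the transient near $t = 0$ contributes little, consistent with the argument above that the short initial line segments do not spoil the theorem.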
APPENDIX D: SPECIFIC FOURIER TRANSFORMS
In this appendix, the forward and backward Fourier transforms of some functions that appear in the main text are calculated. For a strongly damped oscillator (Section 3), the time-domain Green function is
$$G(t) = H(t)\,e^{-\nu t}, \tag{D1}$$
and its frequency-domain counterpart is
$$G(\omega) = \frac{1}{\nu - i\omega}, \tag{D2}$$
where the Heaviside step function $H(t)$ equals zero for negative times and one for positive times.

If one uses contour integration to evaluate the inversion integral (A2), one needs to complete the contour in the upper (lower) half-plane for $t \lt 0$ ($t \gt 0$). The Green function in Eq. (D2) is analytic in the upper half-plane, but has a pole in the lower half-plane at $\omega = - i\nu$. Hence, for $t \lt 0$, $G(t) = 0$. For $t = 0$, the imaginary part of the inversion integral is zero, whereas the real part is 1/2 [because the integral of $1/({\nu ^2} + {\omega ^2})$ is $\pi /\nu$]. For $t \gt 0$, the inversion integral is ${-}2\pi i/2\pi$ multiplied by the residue $i{e^{- \nu t}}$, which is ${e^{- \nu t}}$. Hence, the inverse transform of the frequency-domain Green function equals the time-domain Green function (as it should).
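The contour-integration results can be confirmed by brute-force quadrature of the inversion integral; a sketch, with an illustrative truncation of the frequency axis:

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule (works for complex-valued y)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# numerical inversion of G(w) = 1/(nu - i w): the integral
# (1/2 pi) int G(w) exp(-i w t) dw should give H(t) exp(-nu t),
# including the value 1/2 at t = 0
nu = 1.0
w = np.linspace(-2000.0, 2000.0, 2_000_001)
G = 1.0 / (nu - 1j * w)
results = []
for t0, expected in ((-1.0, 0.0), (0.0, 0.5), (1.0, np.exp(-nu))):
    val = trap(G * np.exp(-1j * w * t0), w).real / (2.0 * np.pi)
    results.append(abs(val - expected))
print(results)  # all small
```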
For an undamped oscillator (Section 5), the time-domain Green function is
$$G(t) = H(t). \tag{D3}$$
Unfortunately, this Green function is not integrable. The related function $H(t){e^{- \alpha t}}$ is integrable when $\alpha$ is positive, so one can try to fix the non-integrability problem by using the limit function
$$H(t) = \lim_{\alpha \to 0^+} H(t)\,e^{-\alpha t}. \tag{D5}$$
It follows from Eqs. (D3) and (D5) that
$$H(\omega) = \lim_{\alpha \to 0^+} \frac{1}{\alpha - i\omega} \tag{D6}$$
$$= \pi \delta(\omega) + \frac{i}{\omega}, \tag{D7}$$
where the second term is interpreted as a principal value. The backward transform of the first term on the right side of Eq. (D7) is 1/2, for all times. The inversion integral for the second term is
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{i}{\omega}\,e^{-i\omega t}\,d\omega .$$
The real part of this integral is ${\rm{sign}}(t)/2$ [because the integral of ${\rm{sinc}}(x)$ is $\pi$ and sine is an odd function of $t$], whereas the imaginary part has a principal value of zero. Thus, the inverse transform of the frequency-domain limit function equals the time-domain limit function $H(t)$. The limit procedure used to obtain Eq. (D7) is sensible.

Another important function (which was just mentioned) is the sign function $\Sigma(t)$, which equals ${-}1$ for negative times and 1 for positive times.
One can also try to fix the non-integrability problems by using finite-time transforms, in which context the truncated step function is ${H_T}(t) = H(t)H(T - t)$. Its transform is
$$H_T(\omega) = \int_0^T e^{i\omega t}\,dt = \frac{e^{i\omega T} - 1}{i\omega}. \tag{D13}$$
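A quick quadrature check of the transform of the truncated step function against the closed form $(e^{i\omega T} - 1)/(i\omega)$, with illustrative values of $T$ and the probe frequencies:

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule (works for complex-valued y)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

T = 2.5
t = np.linspace(0.0, T, 100001)
errs = []
for w in (0.7, 3.0, 11.0):
    num = trap(np.exp(1j * w * t), t)                   # int_0^T e^{iwt} dt
    errs.append(abs(num - (np.exp(1j * w * T) - 1.0) / (1j * w)))
print(errs)  # quadrature error only
```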
Likewise, the truncated sign function is ${\Sigma _T}(t) = \Sigma (t)H(t + T)H(T - t)$, and its transform is
$$\Sigma_T(\omega) = \int_{-T}^{T} \Sigma(t)\,e^{i\omega t}\,dt = \frac{2i\,[1 - \cos(\omega T)]}{\omega}. \tag{D15}$$
Now consider the inverse transforms of formulas (D13) and (D15). If the time-integration domain is extended slightly to the left of ${-}T$ and slightly to the right of $T$, the end values of the step function are zero, in which case one can apply the rule that the transform of ${d_t}A(t)$ is the transform of $A(t)$ multiplied by ${-}i\omega$. Conversely, the inverse transform of $iA(\omega)/\omega$ is the integral $\int _{- T}^tA(s)ds$. Hence, the inverse transform of ${H_T}(\omega)$ is
$$\int_{-T}^{t} [\delta(s) - \delta(s - T)]\,ds = H(t) - H(t - T),$$
which equals the truncated step function (as it should).
Suppose that $F(t)$ describes a causal response, which equals zero for $t \lt 0$. Then its Fourier transform $F(\omega)$ is analytic in the upper half-plane and tends to zero as $|\omega | \to \infty$. By integrating along a contour that consists of the real $\omega ^\prime $ axis, a small semicircle around the point $\omega ^\prime = \omega$, and a large semicircle in the upper half-plane, one obtains the Kramers–Kronig equation
$$F(\omega) = \frac{1}{i\pi}\,P\!\int_{-\infty}^{\infty} \frac{F(\omega^\prime)}{\omega^\prime - \omega}\,d\omega^\prime ,$$
where $P$ denotes a principal value.
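The Kramers–Kronig equation can be tested on the strongly damped Green function, whose real and imaginary parts are known in closed form. Taking real parts of the equation gives ${\rm Re}\,F(\omega) = (1/\pi)P\!\int {\rm Im}\,F(\omega^\prime)/(\omega^\prime - \omega)\,d\omega^\prime$. The midpoint grid used below to evaluate the principal value, and the parameter values, are illustrative numerical devices.

```python
import numpy as np

nu, w0, h = 1.0, 1.0, 2e-3
# midpoint grid symmetric about w0, so the two points straddling the
# pole cancel and the sum approximates the principal value
wp = w0 + h * (np.arange(-2_000_000, 2_000_000) + 0.5)
imF = wp / (nu ** 2 + wp ** 2)       # Im F for F(w) = 1/(nu - i w)
pv = np.sum(imF / (wp - w0)) * h / np.pi
print(pv, nu / (nu ** 2 + w0 ** 2))  # both close to Re F(w0)
```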
Let $u$ denote the upper limit $\infty$ or $T$. Then it follows from Eqs. (2) and (3) that
APPENDIX E: DERIVATIVE RULE
One can reconcile the frequency- and time-domain analyses of Sections 3–5 if one applies the derivative rule carefully. We will prove this statement for an undamped oscillator, because the frequency-domain analysis of this system was the most problematic.
First, recall that the time-domain solution of Eq. (73) is
$$A(t) = \int_0^t R(s)\,ds,$$
from which the consistency of the two analyses follows.

This proof of consistency depends on prior knowledge of the time-domain solution. It is also possible to work entirely in the frequency domain. By setting $\omega = 0$ in Eq. (E3), one finds that $A(T) = R(0) = \sigma T$. Hence, one can rewrite Eq. (E3) in the alternative form
in which the terms on the right side are known. Equation (E6) follows directly from Eq. (E7).

APPENDIX F: SPECIFIC FREQUENCY CORRELATIONS
Let $R(t)$ be the source function for noise, which has properties (7) and (8). For white noise, the two-time correlation is
$$\langle R^*(t)R(t^\prime)\rangle = \sigma\,\delta(t - t^\prime). \tag{F1}$$
White noise is $\delta$-correlated in time. The associated two-frequency correlation was calculated in Appendix B.

Now let $A(t)$ be the displacement of a damped oscillator. In Sections 3 and 4, and Appendix B, it was shown that $A(\omega) \sim G(\omega)R(\omega)$, where $A(\omega)$ and $G(\omega)$ are the transformed displacement and Green function, respectively. For such a system, the frequency correlation is
$$\langle A^*(\omega)A(\omega^\prime)\rangle \sim G^*(\omega)\,G(\omega^\prime)\,\langle R^*(\omega)R(\omega^\prime)\rangle .$$
For colored noise, the temporal correlation
The associated frequency correlation is given by Eq. (F12).

The normalized correlation, which is correlation (F12) divided by $\sigma T$, is illustrated in Fig. 12. Its real part is a symmetric function of $\omega$ and $\omega ^\prime $, whereas its imaginary part is an antisymmetric function. The first term on the right side of Eq. (F12), which is proportional to the effective $\delta$-function, is only significant for a narrow range of frequencies ($|\omega ^\prime - \omega | \sim 1/T$). Although the second term is significant for a wide range of frequencies ($\omega$, $\omega ^\prime \sim \nu$), it is smaller than the first term by a factor of $\nu T$. In this sense, colored noise is $\delta$-correlated in frequency.
In Section 5, it was shown that the temporal correlation of an undamped oscillator is $\sigma \min (t,t^\prime)$. One can calculate the associated frequency correlation by doing the $t$- and $t^\prime $-integrals in Eq. (B1) directly. Alternatively, by combining Eqs. (81) and (F1), one finds that
The normalized correlation, which is correlation (F16) divided by $\sigma {T^3}/3$, is illustrated in Fig. 13. Its real part is a symmetric function of $\omega$ and $\omega ^\prime $, whereas its imaginary part is an antisymmetric function. If one sections the real and imaginary correlation surfaces along the lines $\omega ^\prime = - \omega$, the curves that result are similar to those in Fig. 11. Although the heights and depths of the peaks and troughs are independent of $T$, the range of frequencies for which the correlation is significant decreases as $T$ increases. In this sense, the fluctuations of an undamped oscillator are $\delta$-correlated in frequency.
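The correlation $\sigma \min(t,t^\prime)$ quoted above is easy to confirm by simulation, since an undamped oscillator driven by white noise executes a random walk. The sample times, seed, and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, T, n, trials = 1.0, 8.0, 400, 20000
dt = T / n
# independent increments with variance sigma * dt build a Wiener process
inc = rng.normal(0.0, np.sqrt(sigma * dt), (trials, n))
A = np.cumsum(inc, axis=1)          # A at times dt, 2 dt, ..., T
i, j = 149, 299                     # sample times t = 3.0 and t' = 6.0
corr = np.mean(A[:, i] * A[:, j])
print(corr, sigma * min(3.0, 6.0))  # both near 3.0
```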
Funding
Natural Sciences and Engineering Research Council of Canada.
Acknowledgment
We thank the reviewers for bringing references [17,18,20,22] to our attention. TJS and ASH were supported by NSERC.
Disclosures
The authors declare no conflicts of interest.
Data Availability
No data were generated or analyzed in this research.
REFERENCES
1. A. E. Siegman, Lasers (University Science Books, 1986).
2. P. W. Milonni and J. H. Eberly, Lasers (Wiley, 1988).
3. G. P. Agrawal and N. K. Dutta, Semiconductor Lasers, 2nd ed. (Van Nostrand Reinhold, 1993).
4. L. A. Coldren, S. W. Corzine, and M. L. Masanovic, Diode Lasers and Photonic Integrated Circuits, 2nd ed. (Wiley, 2012).
5. C. W. Gardiner, Handbook of Stochastic Methods, 2nd ed. (Springer, 1985).
6. J. W. Goodman, Statistical Optics (Wiley, 1985).
7. A. Papoulis, Probability, Random Variables and Stochastic Processes, 3rd ed. (McGraw-Hill, 1991).
8. D. S. Lemons, An Introduction to Stochastic Processes in Physics (Johns Hopkins University, 2002).
9. N. Wiener, “Generalized harmonic analysis,” Acta Math. 55, 117–258 (1930). [CrossRef]
10. A. Khintchine, “Korrelationstheorie der stationären stochastischen Prozesse,” Math. Ann. 109, 604–615 (1934). [CrossRef]
11. D. S. Lemons and A. Gythiel, “Paul Langevin’s 1908 paper ‘On the Theory of Brownian Motion’ [‘Sur la théorie du mouvement brownien,’ C. R. Acad. Sci. (Paris) 146, 530–533 (1908)],” Am. J. Phys. 65, 1079–1081 (1997). [CrossRef]
12. L. S. Ornstein, “On the Brownian motion,” Proc. Acad. Amst. 21, 96–108 (1919).
13. G. E. Uhlenbeck and L. S. Ornstein, “On the theory of Brownian motion,” Phys. Rev. 36, 823–841 (1930). [CrossRef]
14. C. J. McKinstrie, “Stochastic and probabilistic equations for three- and four-level lasers,” J. Opt. Soc. Am. B 37, 1333–1358 (2020). [CrossRef] The referenced integrals are in Appendix E.
15. C. J. McKinstrie and J. P. Gordon, “Field fluctuations produced by parametric processes in fibers,” IEEE J. Sel. Top. Quantum Electron. 18, 958–969 (2012). [CrossRef]
16. W. H. Louisell, Quantum Statistical Properties of Radiation (Wiley, 1973), Sec. 2.5.
17. H. C. Berg, Random Walks in Biology (Princeton University, 1983).
18. N. G. van Kampen, Stochastic Processes in Physics and Chemistry, 3rd ed. (Elsevier, 2007).
19. M. Baxter and A. Rennie, Financial Calculus (Cambridge, 1996).
20. K. Marti, Stochastic Optimization Methods, 3rd ed. (Springer, 2015).
21. C. J. McKinstrie, T. J. Stirling, and A. S. Helmy, “Laser linewidths: tutorial,” J. Opt. Soc. Am. B 38, 3837–3848 (2021). [CrossRef]
22. M. Gitterman, The Noisy Oscillator (World Scientific, 2005).
23. R. N. Bracewell, The Fourier Transform and its Applications, 3rd ed. (McGraw-Hill, 2000).
24. B. Y. K. Hu, “Kramers–Kronig in two lines,” Am. J. Phys. 57, 821 (1989). [CrossRef]
25. J. D. Jackson, Classical Electrodynamics, 3rd ed. (Wiley, 1999).