Conditioned on a particular value of x_0, the mean is

: \operatorname{E}(x_t \mid x_0)=x_0 e^{-\theta t}+\mu(1-e^{-\theta t})

and the covariance is

: \operatorname{cov}(x_s,x_t) = \frac{\sigma^2}{2\theta} \left( e^{-\theta|t-s|} - e^{-\theta(t+s)} \right).

For the stationary (unconditioned) process, the mean of x_t is \mu, and the covariance of x_s and x_t is \frac{\sigma^2}{2\theta} e^{-\theta|t-s|}. The Ornstein–Uhlenbeck process is an example of a
Gaussian process that has a bounded variance and admits a
stationary probability distribution, in contrast to the
Wiener process; the difference between the two is in their "drift" term. For the Wiener process the drift term is constant, whereas for the Ornstein–Uhlenbeck process it is dependent on the current value of the process: if the current value of the process is less than the (long-term) mean, the drift will be positive; if the current value of the process is greater than the (long-term) mean, the drift will be negative. In other words, the mean acts as an equilibrium level for the process. This gives the process its informative name, "mean-reverting."
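Because the conditional distribution of x_t given x_0 is Gaussian with the mean and variance given above, sample paths can be generated exactly, step by step, without Euler discretization error. A minimal sketch (the parameter names `theta`, `mu`, `sigma`, `x0` are illustrative, not from the source):

```python
import numpy as np

def ou_exact(x0, theta, mu, sigma, dt, n_steps, rng):
    """Sample an OU path using the exact Gaussian transition:
    x_{t+dt} | x_t has mean x_t e^{-theta dt} + mu (1 - e^{-theta dt})
    and variance sigma^2/(2 theta) (1 - e^{-2 theta dt})."""
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = a * x[i] + mu * (1.0 - a) + sd * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
# Start well above the long-term mean mu = 0: the drift pulls the path back down.
path = ou_exact(x0=5.0, theta=1.0, mu=0.0, sigma=0.3, dt=0.01, n_steps=1000, rng=rng)
```

After t = 10 time units (ten mean-reversion timescales 1/theta), the path has forgotten its starting value and fluctuates around mu with stationary standard deviation sigma/sqrt(2 theta) ≈ 0.21.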
== Properties of sample paths ==

A temporally homogeneous Ornstein–Uhlenbeck process starting at x_0 = 0 can be represented as a scaled, time-transformed Wiener process:

: x_t = \frac{\sigma}{\sqrt{2\theta}} e^{-\theta t} W_{e^{2 \theta t}-1}

where W_t is the standard Wiener process. This is roughly Theorem 1.2 in . Equivalently, with the change of variable s = e^{2 \theta t} this becomes

: W_s = \frac{\sqrt{2 \theta}}{\sigma} s^{1/2} x_{(\ln s) / (2\theta)}, \qquad s > 0.

Using this mapping, one can translate known properties of W_t into corresponding statements for x_t. For instance, the
law of the iterated logarithm for W_t becomes

: \limsup_{t \to \infty} \frac{x_t}{\sqrt{(\sigma^2 / \theta) \ln t}} = 1, \quad \text{with probability 1.}
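The time-change representation gives a quick numerical sanity check: since W_u is Gaussian with variance u, the variance of x_t built from it should equal \sigma^2/(2\theta)(1-e^{-2\theta t}), i.e. the conditional covariance formula with s = t. A sketch with illustrative parameter values:

```python
import numpy as np

theta, sigma, t = 0.7, 0.5, 2.0
rng = np.random.default_rng(1)

# Transformed time for the Wiener process: u = e^{2 theta t} - 1.
u = np.exp(2.0 * theta * t) - 1.0
# Draw many realizations of W_u ~ N(0, u) at this single time point.
w = np.sqrt(u) * rng.standard_normal(200_000)
# Map them through the scaling: x_t = sigma/sqrt(2 theta) * e^{-theta t} * W_u.
x = sigma / np.sqrt(2.0 * theta) * np.exp(-theta * t) * w

# Conditional OU variance (x_0 = 0) from the covariance formula with s = t.
var_expected = sigma**2 / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * t))
```

The empirical variance of `x` agrees with `var_expected` to within Monte Carlo error.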
== Formal solution ==

The stochastic differential equation for x_t can be formally solved by variation of parameters. Writing

: f(x_t, t) = x_t e^{\theta t}

we get, after substituting dx_t = \theta(\mu - x_t)\, dt + \sigma\, dW_t,

: \begin{align} df(x_t,t) & = \theta\,x_t\,e^{\theta t}\, dt + e^{\theta t}\, dx_t \\[6pt] & = e^{\theta t}\theta\,\mu \, dt + \sigma\,e^{\theta t}\, dW_t. \end{align}

Integrating from 0 to t, we get

: x_t e^{\theta t} = x_0 + \int_0^t e^{\theta s}\theta\,\mu \, ds + \int_0^t \sigma\,e^{\theta s}\, dW_s

whereupon we see

: x_t = x_0\,e^{-\theta t} + \mu\,(1-e^{-\theta t}) + \sigma \int_0^t e^{-\theta (t-s)}\, dW_s.

From this representation, the first moment (i.e. the mean) is shown to be

: \operatorname{E}(x_t)=x_0 e^{-\theta t}+\mu(1-e^{-\theta t})

assuming x_0 is constant. Moreover, the
Itō isometry can be used to calculate the
covariance function by

: \begin{align} \operatorname{cov}(x_s,x_t) & = \operatorname E[(x_s - \operatorname E[x_s])(x_t - \operatorname E[x_t])] \\[5pt] & = \operatorname E \left[ \int_0^s \sigma e^{\theta (u-s)}\, dW_u \int_0^t \sigma e^{\theta (v-t)}\, dW_v \right] \\[5pt] & = \sigma^2 e^{-\theta (s+t)} \operatorname E \left[ \int_0^s e^{\theta u}\, dW_u \int_0^t e^{\theta v}\, dW_v \right] \\[5pt] & = \frac{\sigma^2}{2\theta} \, e^{-\theta (s+t)}(e^{2\theta \min(s,t)}-1) \\[5pt] & = \frac{\sigma^2}{2\theta} \left( e^{-\theta|t-s|} - e^{-\theta(t+s)} \right). \end{align}
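The closed-form covariance can be compared against a Monte Carlo estimate over many simulated paths. The sketch below uses the exact Gaussian transition rather than the stochastic-integral representation; the parameter values are illustrative:

```python
import numpy as np

theta, mu, sigma, x0 = 1.0, 0.0, 0.4, 0.0
dt, n_steps, n_paths = 0.01, 200, 100_000
rng = np.random.default_rng(2)

# Exact one-step transition coefficients for step size dt.
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))

x = np.full(n_paths, x0)
snap = {}
for i in range(1, n_steps + 1):
    x = a * x + mu * (1.0 - a) + sd * rng.standard_normal(n_paths)
    if i in (100, 200):          # record ensembles at steps 100 and 200
        snap[i] = x.copy()

s, t = 1.0, 2.0                  # times corresponding to steps 100 and 200
emp = np.cov(snap[100], snap[200])[0, 1]
exact = sigma**2 / (2.0 * theta) * (np.exp(-theta * abs(t - s))
                                    - np.exp(-theta * (t + s)))
```

With 100,000 paths the empirical cross-time covariance `emp` matches `exact` (about 0.025 here) to within sampling noise.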
== Kolmogorov equations ==

The infinitesimal generator of the process is

: L f = -\theta (x-\mu) f' + \frac{1}{2} \sigma^2 f''.

If we let y = (x- \mu)\sqrt{\frac{2\theta}{\sigma^2}}, then the eigenvalue equation simplifies to

: \frac{d^2}{dy^2}\phi - y\frac{d}{dy}\phi - \frac{\lambda}{\theta} \phi = 0,

which is the defining equation for Hermite polynomials. Its solutions are \phi(y) = He_n(y), with \lambda = -n\theta, which implies that the mean first passage time for a particle to hit a point on the boundary is on the order of \theta^{-1}.

== Numerical simulation ==
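Before turning to path simulation, the eigenvalue relation itself can be checked numerically: in the rescaled variable y the generator acts as \theta(\phi'' - y\phi'), and applying it to the probabilists' Hermite polynomial He_n should return -n\theta\,He_n. A sketch using NumPy's `HermiteE` polynomial class:

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

theta = 1.3
y = np.linspace(-2.0, 2.0, 9)     # sample points for the comparison

for n in range(5):
    he = HermiteE.basis(n)        # probabilists' Hermite polynomial He_n
    # Generator applied to He_n in the rescaled variable y:
    # L phi = theta * (phi'' - y * phi').
    lhs = theta * (he.deriv(2)(y) - y * he.deriv(1)(y))
    rhs = -n * theta * he(y)      # eigenvalue lambda = -n * theta
    assert np.allclose(lhs, rhs)
```

The assertion holds for each n, confirming that He_n solves the eigenvalue equation with \lambda = -n\theta.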