=== Basic properties ===

The unconditional probability density function of W_t at a fixed time t follows a normal distribution with mean 0 and variance t:

f_{W_t}(x) = \frac{1}{\sqrt{2 \pi t}} e^{-x^2/(2t)}.

The expectation is zero: \operatorname{E}[W_t] = 0.

The variance, using the computational formula, is t: \operatorname{Var}(W_t) = t.

These results follow immediately from the definition that increments have a normal distribution, centered at zero. Thus W_t = W_t - W_0 \sim N(0, t).

A useful decomposition for proving martingale properties, also called the Brownian increment decomposition, is W_t = W_s + (W_t - W_s), \; s \le t.
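These distributional facts are easy to check numerically. The following sketch (my own illustration, assuming NumPy; the sample sizes and seed are arbitrary choices) samples W_t directly from N(0, t) and also rebuilds it from the increment decomposition W_t = W_s + (W_t - W_s):

```python
import numpy as np

rng = np.random.default_rng(0)
t = 2.0
n = 200_000

# Sample W_t ~ N(0, t) at a fixed time t and estimate mean and variance.
w_t = rng.normal(loc=0.0, scale=np.sqrt(t), size=n)
mean_est = w_t.mean()  # theory: E[W_t] = 0
var_est = w_t.var()    # theory: Var(W_t) = t

# Increment decomposition: W_t = W_s + (W_t - W_s) with independent pieces
# W_s ~ N(0, s) and W_t - W_s ~ N(0, t - s); the sum has the same N(0, t) law.
s = 0.5
w_s = rng.normal(0.0, np.sqrt(s), size=n)
incr = rng.normal(0.0, np.sqrt(t - s), size=n)
var_decomposed = (w_s + incr).var()  # again close to t
```

Both variance estimates should agree with t up to Monte Carlo error.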
=== Covariance and correlation ===

The covariance and correlation (where s \leq t) are:

\begin{align} \operatorname{cov}(W_s, W_t) &= s, \\ \operatorname{corr}(W_s,W_t) &= \frac{\operatorname{cov}(W_s,W_t)}{\sigma_{W_s} \sigma_{W_t}} = \frac{s}{\sqrt{st}} = \sqrt{\frac{s}{t}}. \end{align}

These results follow from the definition that non-overlapping increments are independent, of which only the property that they are uncorrelated is used. Suppose that t_1 \leq t_2. Then

\operatorname{cov}(W_{t_1}, W_{t_2}) = \operatorname{E}\left[(W_{t_1}-\operatorname{E}[W_{t_1}]) \cdot (W_{t_2}-\operatorname{E}[W_{t_2}])\right] = \operatorname{E}\left[W_{t_1} \cdot W_{t_2} \right].

Substituting W_{t_2} = ( W_{t_2} - W_{t_1} ) + W_{t_1} we arrive at:

\begin{align} \operatorname{E}[W_{t_1} \cdot W_{t_2}] & = \operatorname{E}\left[W_{t_1} \cdot ((W_{t_2} - W_{t_1})+ W_{t_1}) \right] \\ & = \operatorname{E}\left[W_{t_1} \cdot (W_{t_2} - W_{t_1} )\right] + \operatorname{E}\left[ W_{t_1}^2 \right]. \end{align}

Since W_{t_1} = W_{t_1} - W_0 and W_{t_2} - W_{t_1} are independent,

\operatorname{E}\left [W_{t_1} \cdot (W_{t_2} - W_{t_1} ) \right ] = \operatorname{E}[W_{t_1}] \cdot \operatorname{E}[W_{t_2} - W_{t_1}] = 0.

Thus

\operatorname{cov}(W_{t_1}, W_{t_2}) = \operatorname{E} \left [W_{t_1}^2 \right ] = t_1.

A corollary useful for simulation is that we can write, for t_1 < t_2:

W_{t_2} = W_{t_1} + \sqrt{t_2-t_1}\cdot Z,

where Z is an independent standard normal variable.
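The covariance formula and the simulation corollary can be checked together. In this sketch (my own illustration, assuming NumPy; s, t, and the sample size are arbitrary), W_t is generated from W_s via W_t = W_s + \sqrt{t-s}\,Z, and the empirical covariance is compared with min(s, t) = s:

```python
import numpy as np

rng = np.random.default_rng(1)
s, t = 0.7, 1.5
n = 300_000

w_s = np.sqrt(s) * rng.standard_normal(n)  # W_s ~ N(0, s)
z = rng.standard_normal(n)                 # independent standard normal
w_t = w_s + np.sqrt(t - s) * z             # corollary: extend W_s to W_t

cov_est = np.mean(w_s * w_t)               # theory: cov(W_s, W_t) = s
corr_est = cov_est / np.sqrt(s * t)        # theory: sqrt(s / t)
```

The estimates should approach s = 0.7 and \sqrt{0.7/1.5} \approx 0.683 respectively.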
=== Wiener representation ===

Wiener (1923) also gave a representation of a Brownian path in terms of a random Fourier series. If \xi_n are independent Gaussian variables with mean zero and variance one, then

W_t = \xi_0 t + \sqrt{2}\sum_{n=1}^\infty \xi_n\frac{\sin \pi n t}{\pi n}

and

W_t = \sqrt{2} \sum_{n=1}^\infty \xi_n \frac{\sin \left(\left(n - \frac{1}{2}\right) \pi t\right)}{ \left(n - \frac{1}{2}\right) \pi}

represent a Brownian motion on [0,1]. The scaled process \sqrt{c}\, W\left(\frac{t}{c}\right) is a Brownian motion on [0,c] (cf. Karhunen–Loève theorem).
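The first series can be implemented directly. The sketch below (my own illustration, assuming NumPy; the truncation level, grid, and number of paths are arbitrary choices) builds many truncated-series paths on [0, 1] and checks that the sample variance at time t is close to t:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000            # truncation level of the Fourier series
paths = 2000
t_grid = np.linspace(0.0, 1.0, 201)

n_idx = np.arange(1, N + 1)
# basis[k, n-1] = sin(pi * n * t_k) / (pi * n)
basis = np.sin(np.pi * np.outer(t_grid, n_idx)) / (np.pi * n_idx)

xi = rng.standard_normal((paths, N + 1))  # i.i.d. N(0, 1) coefficients
# W_t = xi_0 * t + sqrt(2) * sum_n xi_n * sin(pi n t) / (pi n), truncated at N
w = np.outer(xi[:, 0], t_grid) + np.sqrt(2.0) * xi[:, 1:] @ basis.T

var_end = w[:, -1].var()    # theory: Var(W_1) = 1
var_half = w[:, 100].var()  # theory: Var(W_{0.5}) = 0.5
```

Every path starts at 0, and the variances match t up to truncation and sampling error.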
=== Running maximum ===

The joint distribution of the running maximum M_t = \max_{0 \leq s \leq t} W_s and W_t is

f_{M_t,W_t}(m,w) = \frac{2(2m - w)}{t\sqrt{2 \pi t}} e^{-\frac{(2m-w)^2}{2t}}, \qquad m \ge 0, w \leq m.

To get the unconditional distribution f_{M_t}, integrate over -\infty < w \leq m:

\begin{align} f_{M_t}(m) & = \int_{-\infty}^m f_{M_t,W_t}(m,w)\,dw = \int_{-\infty}^m \frac{2(2m - w)}{t\sqrt{2 \pi t}} e^{-\frac{(2m-w)^2}{2t}} \,dw \\[5pt] & = \sqrt{\frac{2}{\pi t}}e^{-\frac{m^2}{2t}}, \qquad m \ge 0, \end{align}

the probability density function of a half-normal distribution. The expectation is

\operatorname{E}[M_t] = \int_0^\infty m f_{M_t}(m)\,dm = \int_0^\infty m \sqrt{\frac{2}{\pi t}}e^{-\frac{m^2}{2t}}\,dm = \sqrt{\frac{2t}{\pi}}.

If at time t the Wiener process has a known value W_t, it is possible to calculate the conditional probability distribution of the maximum in the interval [0, t] (cf. Probability distribution of extreme points of a Wiener stochastic process). The cumulative probability distribution function of the maximum value, conditioned on the known value W_t, is:

F_{M_{W_t}} (m) = \Pr \left( M_{W_t} = \max_{0 \leq s \leq t} W(s) \leq m \mid W(t) = W_t \right) = 1 - e^{-2\frac{m(m - W_t)}{t}}, \qquad m > \max(0, W_t).
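The half-normal law of the running maximum can be checked by simulating discretized paths. This sketch (my own illustration, assuming NumPy; step count, path count, and seed are arbitrary, and the discrete maximum slightly underestimates the true one) compares the sample mean of M_1 with \sqrt{2/\pi} \approx 0.798 and the tail probability with 2\Pr(W_1 > m):

```python
import numpy as np

rng = np.random.default_rng(3)
t = 1.0
steps, paths = 2000, 20_000
dt = t / steps

cur = np.zeros(paths)      # current value W_{k dt} of each path
running = np.zeros(paths)  # running maximum; W_0 = 0 is included
for _ in range(steps):
    cur += np.sqrt(dt) * rng.standard_normal(paths)
    np.maximum(running, cur, out=running)

mean_max = running.mean()         # theory: E[M_1] = sqrt(2/pi) ~= 0.798
frac = (running > 1.0).mean()     # theory: P(M_1 > 1) = 2 P(W_1 > 1) ~= 0.317
```

Both estimates sit slightly below the continuous-time values because of the time discretization.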
=== Self-similarity ===

==== Brownian scaling ====
For every c > 0 the process V_t = (1 / \sqrt c) W_{ct} is another Wiener process.

==== Time reversal ====
The process V_t = W_{1-t} - W_1 for 0 \le t \le 1 is distributed like W_t for 0 \le t \le 1.

==== Time inversion ====
The process V_t = t W_{1/t} (with V_0 = 0) is another Wiener process.
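A quick sanity check of these invariances is to verify that the transformed processes have the Wiener variance Var(V_t) = t. The sketch below (my own illustration, assuming NumPy; c, t, and the sample size are arbitrary) does this for Brownian scaling and time inversion at a single fixed time:

```python
import numpy as np

rng = np.random.default_rng(4)
c, t = 4.0, 0.8
n = 200_000

# Brownian scaling: V_t = (1/sqrt(c)) * W_{c t}
w_ct = np.sqrt(c * t) * rng.standard_normal(n)  # W_{ct} ~ N(0, c t)
var_scaled = (w_ct / np.sqrt(c)).var()          # theory: Var(V_t) = t

# Time inversion: V_t = t * W_{1/t}, so Var(V_t) = t^2 * (1/t) = t
w_inv = np.sqrt(1.0 / t) * rng.standard_normal(n)  # W_{1/t} ~ N(0, 1/t)
var_inverted = (t * w_inv).var()                   # theory: Var(V_t) = t
```

Checking the full finite-dimensional distributions would require comparing covariances at several times; the one-time variance is only a necessary condition.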
==== Projective invariance ====

Consider a Wiener process W(t), t\in\mathbb R, conditioned so that \lim_{t\to\pm\infty}tW(t)=0 (which holds almost surely) and as usual W(0)=0. Then the following are all Wiener processes:

\begin{array}{rcl} W_{1,s}(t) &=& W(t+s)-W(s), \quad s\in\mathbb R\\ W_{2,\sigma}(t) &=& \sigma^{-1/2}W(\sigma t),\quad \sigma > 0\\ W_3(t) &=& tW(-1/t). \end{array}

Thus the Wiener process is invariant under the projective group PSL(2,R), being invariant under the generators of the group. The action of an element g = \begin{bmatrix}a&b\\c&d\end{bmatrix} is

W_g(t) = (ct+d)W\left(\frac{at+b}{ct+d}\right) - ctW\left(\frac{a}{c}\right) - dW\left(\frac{b}{d}\right),

which defines a group action, in the sense that (W_g)_h = W_{gh}.
==== Conformal invariance in two dimensions ====

Let W(t) be a two-dimensional Wiener process, regarded as a complex-valued process with W(0)=0\in\mathbb C. Let D\subset\mathbb C be an open set containing 0, and \tau_D be the associated Markov time:

\tau_D = \inf \{ t\ge 0 \mid W(t)\not\in D\}.

If f:D\to \mathbb C is a holomorphic function which is not constant, such that f(0)=0, then f(W_t) is a time-changed Wiener process in f(D). More precisely, the process Y(t) is Wiener in f(D) with the Markov time S(t), where

Y(t) = f(W(\sigma(t))),

S(t) = \int_0^t|f'(W(s))|^2\,ds,

\sigma(t) = S^{-1}(t):\quad t = \int_0^{\sigma(t)}|f'(W(s))|^2\,ds.
=== A class of Brownian martingales ===

If a polynomial p(x, t) satisfies the partial differential equation

\left( \frac{\partial}{\partial t} + \frac{1}{2} \frac{\partial^2}{\partial x^2} \right) p(x,t) = 0

then the stochastic process M_t = p ( W_t, t ) is a martingale.

Example: W_t^2 - t is a martingale, which shows that the quadratic variation of W on [0, t] is equal to t. It follows that the expected time of first exit of W from (−c, c) is equal to c^2.

More generally, for every polynomial p(x, t) the following stochastic process is a martingale:

M_t = p ( W_t, t ) - \int_0^t a(W_s,s) \, \mathrm{d}s,

where a is the polynomial

a(x,t) = \left( \frac{\partial}{\partial t} + \frac 1 2 \frac{\partial^2}{\partial x^2} \right) p(x,t).

Example: p(x,t) = \left(x^2 - t\right)^2, a(x,t) = 4x^2; the process

\left(W_t^2 - t\right)^2 - 4 \int_0^t W_s^2 \, \mathrm{d}s

is a martingale, which shows that the quadratic variation of the martingale W_t^2 - t on [0, t] is equal to

4 \int_0^t W_s^2 \, \mathrm{d}s.

About functions more general than polynomials, see local martingales.
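A martingale has constant expectation, so a Monte Carlo check of E[p(W_t, t)] at different times gives numerical evidence for this class. The sketch below (my own illustration, assuming NumPy) checks p(x, t) = x^2 - t from the example above, plus the cubic solution p(x, t) = x^3 - 3tx of the same PDE (the cubic example is my addition, not the source's):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400_000

# p(x, t) = x^2 - t solves p_t + p_xx / 2 = 0, so E[W_t^2 - t] = 0 at every t.
means = {}
for t in (0.5, 2.0):
    w = np.sqrt(t) * rng.standard_normal(n)  # W_t ~ N(0, t)
    means[t] = (w**2 - t).mean()

# p(x, t) = x^3 - 3 t x also solves the PDE (check: p_t = -3x, p_xx/2 = 3x),
# so W_t^3 - 3 t W_t is a martingale with expectation 0.
t = 2.0
w = np.sqrt(t) * rng.standard_normal(n)
cubic_mean = (w**3 - 3 * t * w).mean()
```

All three estimates should be near zero, consistent with constant (zero) expectation.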
=== Some properties of sample paths ===

The set of all functions w with these properties is of full Wiener measure. That is, a path (sample function) of the Wiener process has all these properties almost surely:
==== Qualitative properties ====

• For every ε > 0, the function w takes both (strictly) positive and (strictly) negative values on (0, ε).
• The function w is continuous everywhere but differentiable nowhere (like the Weierstrass function).
• For any \epsilon > 0, w(t) is almost surely not (\tfrac 1 2 + \epsilon)-Hölder continuous, and almost surely (\tfrac 1 2 - \epsilon)-Hölder continuous.
• Points of local maximum of the function w are a dense countable set; the maximum values are pairwise different; each local maximum is sharp in the following sense: if w has a local maximum at t, then \lim_{s \to t} \frac{|w(s)-w(t)|}{|s-t|} = \infty. The same holds for local minima.
• The function w has no points of local increase; that is, no t > 0 satisfies the following for some ε in (0, t): first, w(s) ≤ w(t) for all s in (t − ε, t), and second, w(s) ≥ w(t) for all s in (t, t + ε). (Local increase is a weaker condition than that w is increasing on (t − ε, t + ε).) The same holds for local decrease.
• The function w is of unbounded variation on every interval.
• The quadratic variation of w over [0, t] is t.
• Zeros of the function w are a nowhere dense perfect set of Lebesgue measure 0 and Hausdorff dimension 1/2 (therefore, uncountable).
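The last two variation properties are easy to see numerically: as the partition is refined, the sum of squared increments stabilizes at t while the sum of absolute increments diverges. A sketch (my own illustration, assuming NumPy; partition size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
t, steps = 1.0, 200_000
# Independent increments over a uniform partition of [0, t]
dw = np.sqrt(t / steps) * rng.standard_normal(steps)

quad_var = np.sum(dw**2)        # quadratic variation: converges to t
total_var = np.sum(np.abs(dw))  # total variation: grows like sqrt(steps)
```

Here quad_var is close to t = 1, while total_var is of order \sqrt{steps} (hundreds for this partition), illustrating unbounded variation.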
==== Quantitative properties ====

===== Law of the iterated logarithm =====

\limsup_{t\to+\infty} \frac{ |w(t)| }{ \sqrt{ 2t \log\log t } } = 1, \quad \text{almost surely}.

===== Modulus of continuity =====

Local modulus of continuity:

\limsup_{\varepsilon \to 0+} \frac{ |w(\varepsilon)| }{ \sqrt{ 2\varepsilon \log\log(1/\varepsilon) } } = 1, \qquad \text{almost surely}.
Global modulus of continuity (Lévy):

\limsup_{\varepsilon\to 0+} \sup_{0\le s<t\le 1,\; t-s\le\varepsilon} \frac{|w(s)-w(t)|}{\sqrt{ 2\varepsilon \log(1/\varepsilon)}} = 1, \qquad \text{almost surely}.

===== Dimension doubling theorem =====

The dimension doubling theorems state that the Hausdorff dimension of the image of a set under a Brownian motion doubles almost surely.
=== Local time ===

The image of the Lebesgue measure on [0, t] under the map w (the pushforward measure) has a density L_t. Thus,

\int_0^t f(w(s)) \, \mathrm{d}s = \int_{-\infty}^{+\infty} f(x) L_t(x) \, \mathrm{d}x

for a wide class of functions f (namely: all continuous functions; all locally integrable functions; all non-negative measurable functions). The density L_t is (more exactly, can and will be chosen to be) continuous. The number L_t(x) is called the local time at x of w on [0, t]. It is strictly positive for all x of the interval (a, b), where a and b are the least and the greatest value of w on [0, t], respectively. (For x outside this interval the local time evidently vanishes.) Treated as a function of two variables x and t, the local time is still continuous. Treated as a function of x (while t is fixed), the local time is a singular function corresponding to a nonatomic measure on the set of zeros of w.

These continuity properties are fairly non-trivial. Consider that the local time can also be defined (as the density of the pushforward measure) for a smooth function. Then, however, the density is discontinuous, unless the given function is monotone. In other words, there is a conflict between good behavior of a function and good behavior of its local time. In this sense, the continuity of the local time of the Wiener process is another manifestation of non-smoothness of the trajectory.
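The occupation-time identity above can be approximated on a discretized path by estimating L_t(x) with a histogram of the path's values. A sketch (my own illustration, assuming NumPy; the bin count, step count, and test function f are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
t, steps = 1.0, 400_000
dt = t / steps
w = np.cumsum(np.sqrt(dt) * rng.standard_normal(steps))  # discretized path

def f(x):
    return x ** 2  # any continuous test function

# Left-hand side: Riemann sum for the time integral of f along the path
lhs = f(w).sum() * dt

# Right-hand side: histogram estimate of the occupation density L_t(x)
hist, edges = np.histogram(w, bins=400)
widths = np.diff(edges)
centers = 0.5 * (edges[:-1] + edges[1:])
local_time = hist * dt / widths                  # estimate of L_t(x)
rhs = np.sum(f(centers) * local_time * widths)   # space integral of f * L_t

total_occupation = np.sum(local_time * widths)   # should equal t exactly
```

The two integrals agree up to binning error, and the total occupation mass equals t, as the pushforward of Lebesgue measure on [0, t] must have total mass t.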
=== Information rate ===

The information rate of the Wiener process with respect to the squared error distance, i.e. its quadratic rate-distortion function, is given by

R(D) = \frac{2}{\pi^2 D \ln 2} \approx 0.29 D^{-1}.

Therefore, it is impossible to encode \{w_t \}_{t \in [0,T]} using a binary code of less than T R(D) bits and recover it with expected mean squared error less than D. On the other hand, for any \varepsilon>0, there exists T large enough and a binary code of no more than 2^{TR(D)} distinct elements such that the expected mean squared error in recovering \{w_t \}_{t \in [0,T]} from this code is at most D - \varepsilon.

In many cases, it is impossible to encode the Wiener process without sampling it first. When the Wiener process is sampled at intervals T_s before applying a binary code to represent these samples, the optimal trade-off between code rate R(T_s,D) and expected mean square error (in estimating the continuous-time Wiener process) follows the parametric representation

R(T_s,D_\theta) = \frac{T_s}{2} \int_0^1 \log_2^+\left[\frac{S(\varphi)- \frac{1}{6}}{\theta}\right] d\varphi,

D_\theta = \frac{T_s}{6} + T_s\int_0^1 \min\left\{S(\varphi)-\frac{1}{6},\theta \right\} d\varphi,

where S(\varphi) = (2 \sin(\pi \varphi /2))^{-2} and \log^+[x] = \max\{0,\log(x)\}. In particular, T_s/6 is the mean squared error associated only with the sampling operation (without encoding).

== Related processes ==