==Stochastic process==
A stochastic process is defined as a collection of random variables defined on a common probability space (\Omega, \mathcal{F}, P), where \Omega is a sample space, \mathcal{F} is a \sigma-algebra, and P is a probability measure, and where the random variables, indexed by some set T, all take values in the same mathematical space S, which must be measurable with respect to some \sigma-algebra \Sigma. In other words, the process can be written as \{X(t):t\in T \}. Historically, in many problems from the natural sciences a point t\in T had the meaning of time, so X(t) is a random variable representing a value observed at time t. A stochastic process can also be written as \{X(t,\omega):t\in T \} to reflect that it is actually a function of two variables, t\in T and \omega\in \Omega. There are other ways to consider a stochastic process, with the above definition being considered the traditional one. For example, a stochastic process can be interpreted or defined as an S^T-valued random variable, where S^T is the space of all the possible functions from the set T into the space S.
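The two-variable view X(t,\omega) can be made concrete with a short simulation. The following is a minimal sketch, using a simple random walk as an assumed illustrative example (not a canonical construction), with an outcome \omega encoded as a tuple of coin flips:

```python
# Sketch (illustrative, assumed example): a simple random walk viewed as a
# collection of random variables {X(t) : t in T} on a common sample space.
import random

T = range(11)                 # index set: discrete "time" 0..10

def sample_omega(seed):
    """One outcome omega, encoded here as a tuple of +/-1 coin flips."""
    rng = random.Random(seed)
    return tuple(rng.choice((-1, 1)) for _ in range(len(T) - 1))

def X(t, omega):
    """The random variable X(t) evaluated at the outcome omega: a partial sum."""
    return sum(omega[:t])

omega = sample_omega(seed=0)
values = [X(t, omega) for t in T]   # one realization of the whole process
print(values)
```

For each fixed t, `X(t, .)` is a random variable; for each fixed `omega`, `X(., omega)` is an ordinary function of t, which is exactly the two-variable reading above.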
==Index set==
The set T is called the index set of the stochastic process. Often this set is some subset of the real line, such as the natural numbers or an interval, giving the set T the interpretation of time. But the index set can also be other mathematical sets, such as the Cartesian plane \mathbb{R}^2 or n-dimensional Euclidean space, where an element t\in T can represent a point in space. That said, many results and theorems are only possible for stochastic processes with a totally ordered index set.
==State space==
The mathematical space S of a stochastic process is called its state space. This space can be defined using the integers, the real line, n-dimensional Euclidean space, the complex plane, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take.
==Sample function==
A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process. More precisely, if \{X(t,\omega):t\in T \} is a stochastic process, then for any point \omega\in\Omega, the mapping X(\cdot,\omega): T \rightarrow S is called a sample function, a realization, or, particularly when T is interpreted as time, a sample path or path of the stochastic process \{X(t,\omega):t\in T \}. This means that for a fixed \omega\in\Omega, the sample function maps the index set T to the state space S.
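The idea that fixing \omega yields one deterministic function of t can be sketched as follows; this is an assumed illustration in which \omega is represented by a random-number-generator seed:

```python
# Sketch (assumed illustration): fixing omega (here, an RNG seed) turns the
# process into one deterministic sample function T -> S.
import random

def sample_path(seed, n=5):
    rng = random.Random(seed)          # fixing the seed fixes omega
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)       # Gaussian steps, purely illustrative
        path.append(x)
    return path                        # the mapping t -> X(t, omega)

path_a = sample_path(seed=1)
path_b = sample_path(seed=2)           # a different omega, a different path
assert sample_path(seed=1) == path_a   # the same omega gives the same function
```

Each call with a new seed produces a new realization; re-running with the same seed reproduces the same sample path, mirroring the fact that a sample function is a single, fixed outcome.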
==Increment==
An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if \{X(t):t\in T \} is a stochastic process with state space S and index set T=[0,\infty), then for any two non-negative numbers t_1\in [0,\infty) and t_2\in [0,\infty) such that t_1\leq t_2, the difference X_{t_2}-X_{t_1} is an S-valued random variable known as an increment.

==Law==
The law of a stochastic process X, also called its distribution, is the image probability measure \mu=P\circ X^{-1} on S^T. For a measurable subset B of S^T, the pre-image of X gives X^{-1}(B)=\{\omega\in \Omega: X(\omega)\in B \}, so the law of X can be written as: \mu(B)=P(X^{-1}(B))=P(\{\omega\in \Omega: X(\omega)\in B \}).
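Increments can be illustrated with a discretized Brownian motion, whose increment X_{t_2}-X_{t_1} is normally distributed with mean zero and variance t_2-t_1 (a standard fact, used here as an assumed example):

```python
# Sketch (assumed example): increments of Brownian motion. The increment
# X_{t2} - X_{t1} has distribution N(0, t2 - t1), so it can be sampled directly.
import random

def brownian_increment(rng, t1, t2):
    # Standard deviation is sqrt(t2 - t1) for a Brownian increment.
    return rng.gauss(0.0, (t2 - t1) ** 0.5)

rng = random.Random(0)
t1, t2 = 0.5, 2.0
samples = [brownian_increment(rng, t1, t2) for _ in range(100_000)]
var = sum(s * s for s in samples) / len(samples)
print(round(var, 2))   # empirical variance, close to t2 - t1 = 1.5
```

The empirical variance of the sampled increments approximates t_2-t_1, matching the time-period interpretation of an increment.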
==Finite-dimensional probability distributions==
For a stochastic process X with law \mu, its finite-dimensional distribution for t_1,\dots,t_n\in T is defined as: \mu_{t_1,\dots,t_n} =P\circ (X({t_1}),\dots, X({t_n}))^{-1}. This measure \mu_{t_1,\dots,t_n} is the joint distribution of the random vector (X({t_1}),\dots, X({t_n})); it can be viewed as a "projection" of the law \mu onto a finite subset of T. For any measurable subset C of the n-fold Cartesian power S^n=S\times\dots \times S, the finite-dimensional distributions of a stochastic process X can be written as: \mu_{t_1,\dots,t_n}(C)=P\big( (X({t_1}),\dots, X({t_n}))\in C \big).

==Stationarity==
When the index set T can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations. But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time. A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.
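Translation invariance of the finite-dimensional distributions can be checked empirically. The sketch below is an assumed example using an i.i.d. \pm 1 sequence, which is stationary, and compares the joint distribution at (t_1,t_2) with the time-shifted pair:

```python
# Sketch (assumed example): for an i.i.d. +/-1 sequence, the finite-dimensional
# distribution at (t1, t2) matches the one at the shifted pair (t1+h, t2+h).
import random

rng = random.Random(0)
n_paths, length = 50_000, 10
paths = [[rng.choice((-1, 1)) for _ in range(length)] for _ in range(n_paths)]

def joint_prob(t1, t2):
    """Empirical P(X_{t1} = 1 and X_{t2} = 1), estimated across sample paths."""
    return sum(p[t1] == 1 and p[t2] == 1 for p in paths) / n_paths

print(joint_prob(0, 1), joint_prob(4, 5))   # both near 1/4 = P(heads)^2
```

Both estimates are close to 1/4, consistent with the finite-dimensional distributions being invariant under the shift h = 4.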
Khinchin introduced the related concept of stationarity in the wide sense, which has other names including covariance stationarity or stationarity in the broad sense.
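Wide-sense stationarity requires only a constant mean and a covariance \operatorname{Cov}(X_t, X_{t+h}) that depends on the lag h alone, not on t. A minimal sketch, using white noise as an assumed example:

```python
# Sketch (assumed example): wide-sense stationarity means the mean is constant
# and Cov(X_t, X_{t+h}) depends only on the lag h.  White noise is used here.
import random

rng = random.Random(0)
n_paths, length = 20_000, 12
paths = [[rng.gauss(0.0, 1.0) for _ in range(length)] for _ in range(n_paths)]

def autocov(t, h):
    """Empirical Cov(X_t, X_{t+h}) across paths (the true means are zero)."""
    return sum(p[t] * p[t + h] for p in paths) / n_paths

# The same lag at different times gives (approximately) the same covariance.
print(round(autocov(0, 2), 3), round(autocov(5, 2), 3))   # both near 0
print(round(autocov(3, 0), 2))                            # near Var = 1
```

The estimates at lag 2 agree (near zero) regardless of the starting time t, while lag 0 recovers the common variance, which is the covariance-stationary pattern.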
==Filtration==
A filtration is an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has some total order relation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration \{\mathcal{F}_t\}_{t\in T} on a probability space (\Omega, \mathcal{F}, P) is a family of sigma-algebras such that \mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F} for all s \leq t, where t, s\in T and \leq denotes the total order of the index set T.
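For a finite sample space, each sigma-algebra \mathcal{F}_n can be represented by the partition of \Omega it induces, and the inclusion \mathcal{F}_s \subseteq \mathcal{F}_t becomes partition refinement. A minimal sketch, assuming the standard three-coin-flip example:

```python
# Sketch (assumed example): for three coin flips, F_n is generated by the
# first n flips.  Representing each F_n by the partition it induces, the
# filtration property F_s ⊆ F_t corresponds to partition refinement.
from itertools import product

omega_space = list(product("HT", repeat=3))       # all 8 outcomes

def partition(n):
    """Blocks of outcomes that agree on the first n flips."""
    blocks = {}
    for w in omega_space:
        blocks.setdefault(w[:n], set()).add(w)
    return list(blocks.values())

def refines(fine, coarse):
    """True if every block of `fine` lies inside some block of `coarse`."""
    return all(any(b <= c for c in coarse) for b in fine)

for s in range(3):
    assert refines(partition(s + 1), partition(s))   # F_s ⊆ F_{s+1}
```

As n grows, the partitions get finer (1, 2, 4, then 8 blocks), capturing the idea that a filtration accumulates information over time.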
==Modification==
A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process X that has the same index set T, state space S, and probability space (\Omega,{\cal F},P) as another stochastic process Y is said to be a modification of Y if for all t\in T the following P(X_t=Y_t)=1 holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law and they are said to be stochastically equivalent or equivalent. Instead of modification, the term version is also used, however some authors use the term version when two stochastic processes have the same finite-dimensional distributions but may be defined on different probability spaces; in the latter sense, two processes that are modifications of each other are also versions of each other, but not conversely. If a stochastic process satisfies certain moment conditions on its increments, then the Kolmogorov continuity theorem says that there exists a modification of the process with continuous sample paths with probability one, so the stochastic process has a continuous modification or version. The theorem can also be generalized to random fields, so the index set is n-dimensional Euclidean space, as well as to stochastic processes with metric spaces as their state spaces.
==Indistinguishable==
Two stochastic processes X and Y defined on the same probability space (\Omega,\mathcal{F},P) with the same index set T and state space S are said to be indistinguishable if the following P(X_t=Y_t \text{ for all } t\in T )=1 holds. Indistinguishability is strictly stronger than being a modification: two modifications of each other need not be indistinguishable when the index set is uncountable.
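The gap between the two notions is shown by the classic example X_t = 0 and Y_t = \mathbf{1}\{t = U\} with U uniform on [0,1]: for each fixed t, P(X_t = Y_t) = 1, yet every sample path of Y differs from X at the random time U. A minimal sketch of this assumed example:

```python
# Sketch (classic assumed example): X_t = 0 and Y_t = 1{t = U}, U ~ Uniform[0,1].
# Y is a modification of X, but the two processes are not indistinguishable.
import random

rng = random.Random(0)

def Y(t, u):
    return 1.0 if t == u else 0.0     # X(t, u) is identically 0.0

us = [rng.random() for _ in range(10_000)]   # one U per sampled outcome omega
t = 0.5
# At the fixed time t, Y agrees with X for every sampled outcome ...
assert all(Y(t, u) == 0.0 for u in us)
# ... but along each sample path, Y differs from X at the random time U.
assert all(Y(u, u) == 1.0 for u in us)
```

Because P(U = t) = 0 for each fixed t, the pointwise agreement holds with probability one, while no sample path of Y coincides with the zero function.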
==Separability==
Separability is a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be a separable space, which means that the index set has a dense countable subset. More precisely, a real-valued continuous-time stochastic process X on a probability space (\Omega,{\cal F},P) is separable if its index set T has a dense countable subset U\subset T and there is a set \Omega_0 \subset \Omega of probability zero, so P(\Omega_0)=0, such that for every open set G\subset T and every closed set F\subset \mathbb{R}=(-\infty,\infty), the two events \{ X_t \in F \text{ for all } t \in G\cap U\} and \{ X_t \in F \text{ for all } t \in G\} differ from each other at most on a subset of \Omega_0. The definition of separability can also be stated for other index sets and state spaces, such as in the case of random fields, where the index set as well as the state space can be n-dimensional Euclidean space. A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification. Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line.
==Uncorrelatedness==
Two stochastic processes \left\{X_t\right\} and \left\{Y_t\right\} are called uncorrelated if their cross-covariance \operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_1,t_2) = \operatorname{E} \left[ \left( X(t_1)- \mu_X(t_1) \right) \left( Y(t_2)- \mu_Y(t_2) \right) \right] is zero for all times. Formally: \left\{X_t\right\},\left\{Y_t\right\} \text{ uncorrelated} \quad \iff \quad \operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_1,t_2) = 0 \quad \forall t_1,t_2.
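The definition can be checked empirically for two independently generated processes, whose cross-covariance should be near zero at every pair of times. A minimal sketch, assuming Gaussian white noise for both processes:

```python
# Sketch (assumed example): empirical cross-covariance of two independently
# generated processes; K_XY(t1, t2) should be near zero for all time pairs.
import random

rng = random.Random(0)
n_paths, length = 20_000, 6
X = [[rng.gauss(0.0, 1.0) for _ in range(length)] for _ in range(n_paths)]
Y = [[rng.gauss(0.0, 1.0) for _ in range(length)] for _ in range(n_paths)]

def cross_cov(t1, t2):
    """Empirical K_XY(t1, t2) estimated across sample paths."""
    mx = sum(x[t1] for x in X) / n_paths
    my = sum(y[t2] for y in Y) / n_paths
    return sum((x[t1] - mx) * (y[t2] - my) for x, y in zip(X, Y)) / n_paths

vals = [abs(cross_cov(t1, t2)) for t1 in range(length) for t2 in range(length)]
print(max(vals))   # small at every pair of times: the processes are uncorrelated
```

Every estimate is close to zero, consistent with the "for all t_1, t_2" requirement in the definition.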
==Independence implies uncorrelatedness==
If two stochastic processes X and Y are independent, then they are also uncorrelated.

==Skorokhod space==
A Skorokhod function space, introduced by Anatoliy Skorokhod, is a space of functions on some interval of the real line that are right-continuous everywhere and have left limits everywhere. Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrase continue à droite, limite à gauche. The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example, D[0,1] denotes the space of càdlàg functions defined on the unit interval [0,1]. Skorokhod function spaces are frequently used in the theory of stochastic processes because it is often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space.
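A typical càdlàg function is a step function, such as a counting-process sample path: right-continuous at each jump, with a left limit that differs from the value at the jump. A minimal sketch with assumed jump locations:

```python
# Sketch (assumed example): a counting-process-style step function is cadlag:
# right-continuous with left limits, but not continuous at its jumps.
jump_times = [0.2, 0.5, 0.9]          # illustrative jump locations in [0, 1]

def N(t):
    """Right-continuous step function: the number of jumps in [0, t]."""
    return sum(s <= t for s in jump_times)

eps = 1e-9
t = 0.5
assert N(t) == N(t + eps) == 2        # right-continuous at the jump
assert N(t - eps) == 1                # the left limit exists but differs
```

At t = 0.5 the value from the right equals N(t) while the left limit is one less, which is exactly the "continue à droite, limite à gauche" behaviour that members of D[0,1] may exhibit.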
==Regularity==
In the context of mathematical construction of stochastic processes, the term regularity is used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues. For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous.

==Further examples==