===For events===

====Two events====
Two events A and B are independent (often written as A \perp B or A \perp\!\!\!\perp B, where the latter symbol is also often used for
conditional independence) if and only if their
joint probability equals the product of their probabilities; that is, if and only if
:\mathrm{P}(A \cap B) = \mathrm{P}(A)\,\mathrm{P}(B).

====More than two events====
A finite set of events \{A_1, \ldots, A_n\} is pairwise independent if every pair of events is independent; that is, if and only if for all distinct pairs of indices m, k,
:\mathrm{P}(A_m \cap A_k) = \mathrm{P}(A_m)\,\mathrm{P}(A_k).
A finite set of events is
mutually independent if every event is independent of any intersection of the other events; that is, if and only if for every k \le n and for every k-element subset of indices \{i_1, \ldots, i_k\},
:\mathrm{P}\left(\bigcap_{j=1}^k A_{i_j}\right) = \prod_{j=1}^k \mathrm{P}\left(A_{i_j}\right).

===For random variables===

====Two random variables====
Two random variables X and Y are independent if and only if their joint cumulative distribution function factors into the product of the marginal cumulative distribution functions:
:F_{X,Y}(x,y) = F_X(x)\,F_Y(y) \quad \text{for all } x, y.
More generally and equivalently, if X and Y are real valued and the pair of random variables (X,Y) has values in \mathcal X \times \mathcal Y with joint probability distribution P_{X,Y} and marginals P_X and P_Y, we have the equality of measures
:P_{X,Y}(d(x,y)) = P_X(dx)\,P_Y(dy),
i.e. for every
Borel set A \subseteq \mathcal X \times \mathcal Y we have
:P_{X,Y}(A) = \int_A P_{X,Y}(d(x,y)) = \int_A P_X(dx)\,P_Y(dy),
where P_{X,Y}(A) = P\left((X,Y) \in A\right). If X and Y are discrete valued, this simplifies to
:P_{X,Y}(x_i, y_j) = P_X(x_i)\,P_Y(y_j) \quad \text{for all } i = 1, \ldots, |\mathcal X|,\ j = 1, \ldots, |\mathcal Y|,
while if X and Y are real valued and have
probability densities p_X(x) and p_Y(y) and joint probability density p_{X,Y}(x,y), it becomes
:p_{X,Y}(x,y) = p_X(x)\,p_Y(y) \quad \text{for almost all } (x,y) \in \mathbb R^2,
where "almost all" means all except for a set of
measure zero.
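For discrete-valued variables, the factorization criterion above can be verified directly from a joint probability mass function. The sketch below uses a hypothetical joint pmf (chosen here only for illustration) and checks whether P_{X,Y}(x_i, y_j) = P_X(x_i) P_Y(y_j) holds for every pair of values:

```python
from fractions import Fraction

# Hypothetical joint pmf of (X, Y) with X in {0, 1} and Y in {0, 1, 2},
# deliberately constructed so that it factorizes.
joint = {
    (0, 0): Fraction(1, 10), (0, 1): Fraction(2, 10), (0, 2): Fraction(2, 10),
    (1, 0): Fraction(1, 10), (1, 1): Fraction(2, 10), (1, 2): Fraction(2, 10),
}

def is_independent(joint):
    """Check P(X=x, Y=y) == P(X=x) * P(Y=y) for every pair of values."""
    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    # marginal pmfs obtained by summing out the other variable
    px = {x: sum(joint.get((x, y), Fraction(0)) for y in ys) for x in xs}
    py = {y: sum(joint.get((x, y), Fraction(0)) for x in xs) for y in ys}
    return all(joint.get((x, y), Fraction(0)) == px[x] * py[y]
               for x in xs for y in ys)

# A dependent pair for contrast: X = Y with probability 1, so the joint
# mass 1/2 on (0, 0) exceeds the product of marginals 1/4.
dependent = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2)}
```

Exact rational arithmetic (`Fraction`) avoids the floating-point tolerance one would otherwise need in the equality test.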
====More than two random variables====
A finite set of n random variables \{X_1,\ldots,X_n\} is
pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily
mutually independent, as defined next. A finite set of n random variables \{X_1,\ldots,X_n\} is
mutually independent if and only if for any sequence of numbers \{x_1, \ldots, x_n\}, the events \{X_1 \le x_1\}, \ldots, \{X_n \le x_n\} are mutually independent events (as defined above). This is equivalent to the following condition on the joint cumulative distribution function F_{X_1,\ldots,X_n}(x_1,\ldots,x_n): a finite set of n random variables \{X_1,\ldots,X_n\} is mutually independent if and only if
:F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = F_{X_1}(x_1) \cdots F_{X_n}(x_n) \quad \text{for all } x_1, \ldots, x_n.

====For two random vectors====
Two random vectors \mathbf{X} = (X_1,\ldots,X_m)^{\mathrm{T}} and \mathbf{Y} = (Y_1,\ldots,Y_n)^{\mathrm{T}} are called independent if and only if
:F_{\mathbf{X,Y}}(\mathbf{x,y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y}) \quad \text{for all } \mathbf{x}, \mathbf{y},
where F_{\mathbf{X}}(\mathbf{x}) and F_{\mathbf{Y}}(\mathbf{y}) denote the cumulative distribution functions of \mathbf{X} and \mathbf{Y}, and F_{\mathbf{X,Y}}(\mathbf{x,y}) denotes their joint cumulative distribution function. Independence of \mathbf{X} and \mathbf{Y} is often denoted by \mathbf{X} \perp\!\!\!\perp \mathbf{Y}. Written component-wise, \mathbf{X} and \mathbf{Y} are called independent if
:F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n) \quad \text{for all } x_1, \ldots, x_m, y_1, \ldots, y_n.
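The gap between pairwise and mutual independence noted above can be illustrated with a classic counterexample: let X and Y be independent fair bits and set Z = X XOR Y. Any two of the three variables are independent, but the triple is not mutually independent, since any two determine the third. A small exact check:

```python
from fractions import Fraction
from itertools import product

# pmf of (X, Y, Z) with Z = X XOR Y; the four equally likely outcomes are
# (0,0,0), (0,1,1), (1,0,1), (1,1,0).
pmf = {(x, y, x ^ y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

def prob(fixed):
    """P of the event {outcome[i] == v for every (i, v) in fixed}, 0=X, 1=Y, 2=Z."""
    return sum(q for outcome, q in pmf.items()
               if all(outcome[i] == v for i, v in fixed.items()))

# Every pair factorizes: P(V_i=a, V_j=b) == P(V_i=a) * P(V_j=b).
pairwise = all(
    prob({i: a, j: b}) == prob({i: a}) * prob({j: b})
    for i, j in ((0, 1), (0, 2), (1, 2))
    for a, b in product((0, 1), repeat=2)
)
# The triple does not: e.g. P(X=0, Y=0, Z=0) = 1/4, not (1/2)**3 = 1/8.
mutual = all(
    prob({0: a, 1: b, 2: c}) == prob({0: a}) * prob({1: b}) * prob({2: c})
    for a, b, c in product((0, 1), repeat=3)
)
```

Here `pairwise` evaluates to `True` while `mutual` evaluates to `False`, confirming that pairwise independence does not imply mutual independence.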
===For stochastic processes===

====For one stochastic process====
The definition of independence may be extended from random vectors to a
stochastic process. An independent stochastic process is one for which the random variables obtained by sampling the process at any n times t_1, \ldots, t_n are independent random variables, for any n. Formally, a stochastic process \left\{ X_t \right\}_{t\in\mathcal{T}} is called independent if and only if for all n \in \mathbb{N} and for all t_1, \ldots, t_n \in \mathcal{T},
:F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) = F_{X_{t_1}}(x_1) \cdots F_{X_{t_n}}(x_n) \quad \text{for all } x_1, \ldots, x_n,
where F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) = \mathrm{P}(X(t_1) \leq x_1,\ldots,X(t_n) \leq x_n). Independence of a stochastic process is a property
within a stochastic process, not between two stochastic processes.
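The simplest independent process is an i.i.d. sequence. As a minimal numerical sketch (the sample times, test point, and sample size below are arbitrary choices), we simulate X(t_1), X(t_2) as independent standard normals and check empirically that the joint CDF factorizes at one point:

```python
import random

random.seed(0)
n = 200_000
# Each sample is one path of the process observed at two times (t1, t2);
# the two coordinates are drawn independently, as in an i.i.d. process.
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

x1, x2 = 0.3, -0.5
# Empirical joint CDF F(x1, x2) = P(X(t1) <= x1, X(t2) <= x2) ...
joint_cdf = sum(a <= x1 and b <= x2 for a, b in samples) / n
# ... versus the product of the empirical marginal CDFs F(x1) * F(x2).
prod_cdf = (sum(a <= x1 for a, _ in samples) / n
            * sum(b <= x2 for _, b in samples) / n)
# The two agree up to Monte Carlo error.
```

Agreement here only checks the factorization at one point for n = 2; the definition requires it for all finite collections of times and all arguments.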
====For two stochastic processes====
Independence of two stochastic processes is a property between two stochastic processes \left\{ X_t \right\}_{t\in\mathcal{T}} and \left\{ Y_t \right\}_{t\in\mathcal{T}} that are defined on the same probability space (\Omega,\mathcal{F},P). Formally, two stochastic processes \left\{ X_t \right\}_{t\in\mathcal{T}} and \left\{ Y_t \right\}_{t\in\mathcal{T}} are said to be independent if for all n \in \mathbb{N} and for all t_1, \ldots, t_n \in \mathcal{T}, the random vectors (X(t_1),\ldots,X(t_n)) and (Y(t_1),\ldots,Y(t_n)) are independent, i.e. if
:F_{X_{t_1},\ldots,X_{t_n},Y_{t_1},\ldots,Y_{t_n}}(x_1,\ldots,x_n,y_1,\ldots,y_n) = F_{X_{t_1},\ldots,X_{t_n}}(x_1,\ldots,x_n) \cdot F_{Y_{t_1},\ldots,Y_{t_n}}(y_1,\ldots,y_n) \quad \text{for all } x_1, \ldots, x_n, y_1, \ldots, y_n.
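The cross-process criterion can be made concrete with a toy setup (hypothetical, for illustration only): binary processes observed at two times, so each restricted path is a pair in \{0,1\}^2. When Y is an independent copy of X the joint law is the product measure and factorizes; when Y equals X pathwise it does not:

```python
from fractions import Fraction
from itertools import product

quarter = Fraction(1, 4)
# Law of (X(t1), X(t2)): two i.i.d. fair coin flips.
px = {v: quarter for v in product((0, 1), repeat=2)}
py = dict(px)  # Y has the same marginal law

# Case 1: Y independent of X -> joint law over (X-path, Y-path) is the product.
joint_indep = {(xv, yv): px[xv] * py[yv] for xv in px for yv in py}
# Case 2: Y = X pathwise -> joint law concentrates on the diagonal.
joint_dep = {(xv, xv): px[xv] for xv in px}

def factorizes(joint):
    """Check the product form of the joint law via pmf factorization."""
    mx, my = {}, {}
    for (xv, yv), q in joint.items():
        mx[xv] = mx.get(xv, Fraction(0)) + q
        my[yv] = my.get(yv, Fraction(0)) + q
    return all(joint.get((xv, yv), Fraction(0)) == mx[xv] * my[yv]
               for xv in mx for yv in my)
```

`factorizes(joint_indep)` holds, while `factorizes(joint_dep)` fails because the diagonal mass 1/4 exceeds the product of marginals 1/16.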
===Independent σ-algebras===
The definitions above are both generalized by the following definition of independence for
σ-algebras. Let (\Omega, \Sigma, \mathrm{P}) be a
probability space and let \mathcal{A} and \mathcal{B} be two sub-σ-algebras of \Sigma. \mathcal{A} and \mathcal{B} are said to be independent if, whenever A \in \mathcal{A} and B \in \mathcal{B}, :\mathrm{P}(A \cap B) = \mathrm{P}(A) \mathrm{P}(B). Likewise, a finite family of σ-algebras (\tau_i)_{i\in I}, where I is an
index set, is said to be independent if and only if
:\forall \left(A_i\right)_{i\in I} \in \prod\nolimits_{i\in I}\tau_i \ : \ \mathrm{P}\left(\bigcap\nolimits_{i\in I}A_i\right) = \prod\nolimits_{i\in I}\mathrm{P}\left(A_i\right),
and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent. The new definition relates to the previous ones very directly:
* Two events are independent (in the old sense)
if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by an event E \in \Sigma is, by definition,
::\sigma(\{E\}) = \{ \emptyset, E, \Omega \setminus E, \Omega \}.
* Two random variables X and Y defined over \Omega are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by a random variable X taking values in some
measurable space S consists, by definition, of all subsets of \Omega of the form X^{-1}(U), where U is any measurable subset of S. Using this definition, it is easy to show that if X and Y are random variables and Y is constant, then X and Y are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra \{ \varnothing, \Omega \}. Probability-zero events cannot affect independence, so independence also holds if Y is only Pr-almost surely constant.

==Properties==