=== Simple random walk on the integers ===
Take X=Y=\Z, and \mathcal A = \mathcal B = \mathcal P(\Z) (the
power set of \Z). Then a Markov kernel is fully determined by the probability it assigns to singletons \{m\},\, m \in Y = \Z, for each n \in X = \Z: :\kappa(B|n)=\sum_{m \in B}\kappa(\{m\}|n), \qquad \forall n \in \Z, \, \forall B \in \mathcal B. Now the random walk \kappa that goes to the right with probability p and to the left with probability 1 - p is defined by :\kappa(\{m\}|n)= p \delta_{m, n + 1}+ (1-p) \delta_{m, n - 1}, \quad \forall n,m \in \Z, where \delta is the
Kronecker delta. The transition probabilities P(m|n) = \kappa(\{m\}|n) of the random walk are an equivalent description of the Markov kernel.

===General Markov processes with countable state space===
More generally, take X and Y both countable and \mathcal A = \mathcal P(X),\ \mathcal B = \mathcal P(Y). Again a Markov kernel is fully determined by the probability it assigns to singleton sets, for each i \in X: :\kappa(B|i)=\sum_{j \in B}\kappa(\{j\}|i), \qquad \forall i \in X, \, \forall B \in \mathcal B. We define a Markov process by a transition probability P(j|i) = K_{ji}, where the numbers K_{ji} define a (countable)
stochastic matrix (K_{ji}), i.e. :\begin{align} K_{ji} &\ge 0, \qquad &\forall (j,i) \in Y\times X, \\ \sum_{j \in Y}K_{ji}&=1, \qquad &\forall i \in X.\\ \end{align} We then define : \kappa(\{j\} | i) = K_{ji} = P(j|i), \qquad \forall (j,i) \in Y \times X. Again the transition probability, the stochastic matrix and the Markov kernel are equivalent reformulations.
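The equivalence can be sketched in code (an illustrative example only; the function names and the choice p = 0.25 are ours, not standard): the simple random walk above plays the role of the countable stochastic "matrix" K_{ji}, and the kernel on finite sets is recovered by summation.

```python
# Sketch: a countable-state Markov kernel as a transition-probability
# function K(j, i) = P(j | i).  Example: simple random walk with p = 0.25.
# (All names here are illustrative, not a standard API.)
def K(j, i, p=0.25):
    """Column-stochastic transition probabilities: sum over j of K(j, i) = 1."""
    if j == i + 1:
        return p          # step right with probability p
    if j == i - 1:
        return 1 - p      # step left with probability 1 - p
    return 0.0

def kappa(B, i):
    """kappa(B | i) = sum over j in B of K(j, i), for a finite set B."""
    return sum(K(j, i) for j in B)

# Each "column" of K is a probability distribution over target states:
assert kappa(set(range(-3, 4)), 0) == 1.0   # 0.25 + 0.75 = 1, exact in binary
```

Note that p = 0.25 makes both branch probabilities exactly representable in floating point, so the normalisation check holds with equality.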
===Markov kernel defined by a kernel function and a measure===
Let \nu be a
measure on (Y, \mathcal B), and k: Y \times X\to [0, \infty] a
measurable function with respect to the
product \sigma-algebra \mathcal B \otimes \mathcal A such that : \int_Y k(y, x)\nu(\mathrm{d} y) = 1, \qquad \forall x \in X , then \kappa(\mathrm{d}y |x) = k(y, x)\nu(\mathrm{d}y), i.e. the mapping :\begin{cases} \kappa:\mathcal B \times X \to [0,1] \\ \kappa(B|x)=\int_{B}k(y, x)\nu(\mathrm{d} y) \end{cases} defines a Markov kernel. This example generalises the countable Markov process example, where \nu was the
counting measure. Moreover it encompasses other important examples such as the convolution kernels, in particular the Markov kernels defined by the
heat equation. The latter example includes the
Gaussian kernel on X = Y = \mathbb R with \nu(dx) = dx standard
Lebesgue measure and :k_t(y, x) = \frac{1}{\sqrt{2\pi}t}e^{-(y - x)^2/(2t^2)}.
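As a quick numerical sanity check (an illustrative sketch, not part of the formal development), the normalisation \int_\R k_t(y, x)\,\mathrm{d}y = 1 of the Gaussian kernel can be verified with a Riemann sum:

```python
import math

def k_t(y, x, t=1.0):
    """Gaussian kernel density k_t(y, x) with respect to Lebesgue measure."""
    return math.exp(-(y - x) ** 2 / (2 * t ** 2)) / (math.sqrt(2 * math.pi) * t)

# Riemann-sum check of the normalisation  int k_t(y, x) dy = 1,
# on the truncated range [-10, 10] with grid spacing h = 0.01.
h = 0.01
integral = sum(k_t(i * h, 0.0) for i in range(-1000, 1001)) * h
assert abs(integral - 1.0) < 1e-6
```

Truncating the integral to [-10, 10] discards only Gaussian-tail mass of order e^{-50}, far below the 10^{-6} tolerance used here.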
===Measurable functions===
Take (X, \mathcal{A}) and (Y, \mathcal{B}) arbitrary measurable spaces, and let f:X \to Y be a measurable function. Now define \kappa(dy|x) = \delta_{f(x)}(dy), i.e. : \kappa(B|x) = \mathbf{1}_B(f(x)) = \mathbf{1}_{f^{-1}(B)}(x) = \begin{cases}1 & \text{if } f(x) \in B\\ 0 & \text{otherwise}\end{cases} for all B \in \mathcal{B}. Note that the indicator function \mathbf{1}_{f^{-1}(B)} is \mathcal{A}-measurable for all B \in \mathcal{B} if and only if f is measurable. This example allows us to think of a Markov kernel as a generalised function whose value is in general random rather than certain. That is, it is a
multivalued function where the values are not equally weighted.

=== Galton–Watson process ===
As a less obvious example, take X = \N, \mathcal A = \mathcal P(\N), and (Y, \mathcal B) the real numbers \R with the standard sigma algebra of
Borel sets. Then :\kappa(B|n)=\begin{cases} \mathbf{1}_B(0) & n=0\\ \Pr(\xi_1 + \cdots + \xi_n \in B) & n \neq 0 \\ \end{cases} where n is the number of elements at the current state, the \xi_i are
i.i.d. random variables (usually with mean 0) and where \mathbf{1}_B is the
indicator function. For the simple case of
coin flips, this models the different levels of a
Galton board.

== Composition of Markov Kernels ==