Big O in probability notation

The order in probability notation is used in probability theory and statistical theory in direct parallel to the big O notation that is standard in mathematics. Where the big O notation deals with the convergence of sequences or sets of ordinary numbers, the order in probability notation deals with convergence of sets of random variables, where convergence is in the sense of convergence in probability.

Definitions
Small o: convergence in probability

For a set of random variables Xn and a corresponding set of constants an (both indexed by n, which need not be discrete), the notation

:X_n = o_p(a_n)

means that the set of values Xn/an converges to zero in probability as n approaches an appropriate limit. Equivalently, Xn = op(an) can be written as Xn/an = op(1), i.e.

:\lim_{n \to \infty} P\left[\left|\frac{X_n}{a_n}\right| \geq \varepsilon\right] = 0,

for every positive ε.

Big O: stochastic boundedness

The notation

:X_n = O_p(a_n) \text{ as } n \to \infty

means that the set of values Xn/an is stochastically bounded. That is, for any ε > 0, there exist a finite M > 0 and a finite N > 0 such that

:P\left(\left|\frac{X_n}{a_n}\right| > M\right) < \varepsilon \quad \text{for all } n > N.

Comparison of the two definitions

The difference between the definitions is subtle. If one unpacks the definition of the limit, one gets:

• Big O_p(1): \forall \varepsilon \quad \exists N_{\varepsilon}, \delta_{\varepsilon} \quad \text{such that } P(|X_n| \geq \delta_{\varepsilon}) \leq \varepsilon \quad \forall n > N_{\varepsilon}
• Small o_p(1): \forall \varepsilon, \delta \quad \exists N_{\varepsilon,\delta} \quad \text{such that } P(|X_n| \geq \delta) \leq \varepsilon \quad \forall n > N_{\varepsilon,\delta}

The difference lies in the \delta: for stochastic boundedness, it suffices that there exists one (arbitrarily large) \delta satisfying the inequality, and \delta is allowed to depend on \varepsilon (hence the \delta_\varepsilon). For convergence, on the other hand, the statement has to hold not just for one but for every (arbitrarily small) \delta. In a sense, this means that the sequence must be bounded, with a bound that gets smaller as the sample size increases.

It follows that if a sequence is o_p(1), then it is O_p(1), i.e. convergence in probability implies stochastic boundedness. The reverse does not hold: for example, a sequence of i.i.d. standard normal random variables is O_p(1), but since it does not converge to zero in probability it is not o_p(1).
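The contrast can also be checked numerically. The following Monte Carlo sketch is illustrative only and not part of the article: the choice of i.i.d. standard normal draws, the sample sizes, the threshold delta and the NumPy implementation are all assumptions made here. By the law of large numbers the sample mean satisfies \bar{X}_n = o_p(1), while by the central limit theorem \sqrt{n}\,\bar{X}_n is O_p(1) but not o_p(1).

<syntaxhighlight lang="python">
# Illustrative sketch (not from the article): contrast o_p(1) with O_p(1)
# using the sample mean of i.i.d. standard normal variables.
import numpy as np

rng = np.random.default_rng(0)   # seed chosen arbitrarily
reps = 10_000                    # Monte Carlo replications per sample size
delta = 0.5                      # a fixed threshold delta > 0

for n in (10, 100, 1_000):
    mean_n = rng.standard_normal((reps, n)).mean(axis=1)

    # P(|mean_n| >= delta) -> 0, consistent with mean_n = o_p(1).
    p_small_o = np.mean(np.abs(mean_n) >= delta)

    # P(|sqrt(n) * mean_n| >= delta) stays near a nonzero constant:
    # sqrt(n) * mean_n is stochastically bounded, O_p(1), but not o_p(1).
    p_big_o = np.mean(np.abs(np.sqrt(n) * mean_n) >= delta)

    print(f"n={n:>5}  P(|mean|>=delta)={p_small_o:.4f}  "
          f"P(|sqrt(n)*mean|>=delta)={p_big_o:.4f}")
</syntaxhighlight>

In a typical run the first probability shrinks towards zero as n grows, while the second stays near 2(1 - \Phi(0.5)) \approx 0.62 for every n (here \sqrt{n}\,\bar{X}_n is exactly standard normal), which is precisely the distinction the two definitions draw.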
Chebyshev lemma for stochastic order
If X_n is a stochastic sequence such that each element has finite variance, then

:X_n - E(X_n) = O_p\left(\sqrt{\operatorname{var}(X_n)}\right).

{{Math proof |proof=Let's introduce another definition for convenience: X_n = O_p(1) if for every \eta > 0 there exist a constant K(\eta) and an integer n(\eta) such that if n \geq n(\eta) then

:P\left(\left|X_n\right| \leq K(\eta)\right) \geq 1 - \eta.

Chebyshev's inequality states:

:P\left(\left|X - \mu\right| \leq h\sigma\right) \geq 1 - h^{-2}, \quad \text{where } h > 0.

If we set h = \eta^{-1/2} for any \eta > 0, then we have

:P\left(\frac{\left|X_n - \mu_n\right|}{\sigma_n} \leq \eta^{-1/2}\right) \geq 1 - \eta,

which holds for all n \geq 1. Setting K(\eta) = \eta^{-1/2}, we apply the definition above and conclude that

:\frac{X_n - \mu_n}{\sigma_n} = O_p(1).
}}

If, moreover, a_n^{-2}\operatorname{var}(X_n) = \operatorname{var}(a_n^{-1}X_n) is a null sequence for a sequence (a_n) of real numbers, then a_n^{-1}(X_n - E(X_n)) converges to zero in probability by Chebyshev's inequality, so

:X_n - E(X_n) = o_p(a_n).
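As a complement, here is a hedged numerical check of the lemma and its corollary. It is illustrative and not from the article: the choice X_n = sum of n i.i.d. Exp(1) variables (so that E(X_n) = n and var(X_n) = n), the sequence a_n = n, and all parameter values are assumptions made for the demonstration.

<syntaxhighlight lang="python">
# Illustrative check (not from the article) of the Chebyshev lemma:
# (X_n - mu_n) / sigma_n = O_p(1), with K(eta) = eta^{-1/2} as in the proof.
import numpy as np

rng = np.random.default_rng(1)   # seed chosen arbitrarily
reps = 10_000
eta = 0.05
K = eta ** -0.5                  # K(eta) = eta^{-1/2}, roughly 4.47

for n in (10, 100, 1_000):
    # X_n = sum of n i.i.d. Exp(1) variables: E(X_n) = n, var(X_n) = n.
    x = rng.exponential(1.0, size=(reps, n)).sum(axis=1)
    standardized = (x - n) / np.sqrt(n)          # (X_n - mu_n) / sigma_n

    # Chebyshev guarantees P(|standardized| <= K) >= 1 - eta = 0.95.
    coverage = np.mean(np.abs(standardized) <= K)

    # Corollary with a_n = n: var(X_n)/n^2 = 1/n -> 0, so (X_n - n)/n
    # should converge to zero in probability, i.e. be o_p(1).
    p_tail = np.mean(np.abs((x - n) / n) >= 0.1)

    print(f"n={n:>5}  P(|std|<=K)={coverage:.4f} (bound {1 - eta})  "
          f"P(|(X_n-n)/n|>=0.1)={p_tail:.4f}")
</syntaxhighlight>

The observed coverage sits far above the bound 1 - \eta because Chebyshev's inequality is distribution-free and therefore conservative; the lemma only needs the bound itself. The corollary column shrinks towards zero as n grows, matching X_n - E(X_n) = o_p(a_n) for a_n = n.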