Product If f_1 = O(g_1) and f_2 = O(g_2), then f_1 f_2 = O(g_1 g_2). In particular,
:f\cdot O(g) = O(|f| g) .
Sum If f_1 = O(g_1) and f_2 = O(g_2), then f_1 + f_2 = O(\max(g_1, g_2)). It follows that if f_1 = O(g) and f_2 = O(g), then f_1 + f_2 = O(g) .
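As a numeric illustration of the sum rule (a sketch using hypothetical example functions, not ones from the text): take f_1(n) = n^2 and f_2(n) = 5 n \log n, both of which are O(n^2); the sum rule then predicts f_1 + f_2 = O(n^2), and C = 6 is one explicit witness constant.

```python
import math

# Hypothetical example functions (chosen for illustration only):
def f1(n):
    return n ** 2               # trivially O(n^2)

def f2(n):
    return 5 * n * math.log(n)  # O(n^2), since log n <= n for n >= 1

# Sum rule: f1 + f2 = O(n^2).  C = 6 works because
# n^2 + 5 n log n <= n^2 + 5 n^2 = 6 n^2 for all n >= 1.
assert all(f1(n) + f2(n) <= 6 * n ** 2 for n in range(1, 10_001))
```

This is only a finite-range spot check; the displayed inequality is what makes the bound hold for every n \ge 1.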
Multiplication by a constant Let k be a nonzero constant. Then O(|k| \cdot g) = O(g). In other words, if f = O(g), then k \cdot f = O(g).
Transitive property If f = O(g) and g = O(h), then f = O(h) .

If the function f of a positive integer n can be written as a finite sum of other functions, then the fastest-growing one determines the order of f(n). For example,
:f(n) = 9 \log n + 5 (\log n)^4 + 3n^2 + 2n^3 = O(n^3) \qquad\text{for } n\ge 1 .

Some general rules about the growth of functions toward infinity follow; the second and third properties below can be proved rigorously using L'Hôpital's rule:
Large powers dominate small powers If b \ge a, then n^a = O(n^b) as n \to \infty .
Powers dominate logarithms For any positive a,b, (\log n)^a = O_{a,b}(n^b), no matter how large a is and how small b is. Here, the implied constant depends on both a and b.
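As noted above, this property can be proved rigorously with L'Hôpital's rule; a sketch of the computation, substituting t = \log n:

```latex
% With t = \log n, the ratio (\log n)^a / n^b becomes t^a / e^{bt}.
% Applying L'Hôpital's rule repeatedly (\lceil a \rceil times) lowers the
% power of t in the numerator until the limit is visibly zero:
\lim_{n\to\infty} \frac{(\log n)^a}{n^b}
  = \lim_{t\to\infty} \frac{t^a}{e^{bt}}
  = \lim_{t\to\infty} \frac{a\,t^{a-1}}{b\,e^{bt}}
  = \cdots = 0 .
```

Since the ratio tends to 0, it is in particular bounded for all large n, which gives (\log n)^a = O(n^b).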
Exponentials dominate powers For any positive a, b, n^a = O_{a,b}(e^{bn}), no matter how large a is and how small b is.

A function that grows faster than n^c for every constant c is called superpolynomial. One that grows more slowly than any exponential function of the form c^n with c > 1 is called subexponential. An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for
integer factorization and the function n^{\log n} . We may ignore any powers of n inside the logarithms: for any positive c, the notation O(\log n) means exactly the same thing as O(\log (n^c)), since \log(n^c) = c\log n. Similarly, logarithms with different constant bases are equivalent with respect to big O notation. On the other hand, exponentials with different bases are not of the same order; for example, 2^n and 3^n are not of the same order.
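Both claims in this paragraph can be illustrated numerically (a sketch; the particular values sampled are arbitrary): changing a logarithm's base only rescales by a constant, while changing an exponential's base changes the growth rate itself.

```python
import math

# Base change: log_2 n = (ln n) / (ln 2), a fixed constant multiple of ln n,
# so O(log_2 n), O(log_10 n), and O(ln n) describe the same class.
for n in (10, 100, 1000, 10_000):
    assert math.isclose(math.log2(n), math.log(n) / math.log(2))

# Different exponential bases are NOT the same class: (3/2)^n is unbounded,
# so no constant C can give 3^n <= C * 2^n for all n.
ratios = [3 ** n / 2 ** n for n in (10, 20, 30, 40)]
assert all(later > earlier for earlier, later in zip(ratios, ratios[1:]))
```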
More complicated expressions In more complicated usage, O(\cdot) can appear in different places in an equation, even several times on each side. For example, the following are true for n a positive integer: \begin{align} (n+1)^2 & = n^2 + O(n), \\ (n + O(n^{1/2})) \cdot (n + O(\log n))^2 & = n^3 + O(n^{5/2}), \\ n^{O(1)} & = O(e^n). \end{align} The meaning of such statements is as follows: for
any functions which satisfy each O(\cdot) on the left side, there are
some functions satisfying each O(\cdot) on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function satisfying f(n)=O(1), there is some function g(n)=O(e^n) such that n^{f(n)}=g(n) ". The implied constant in the statement " g(n)=O(e^n) " may depend on the implied constant in the expression " f(n)=O(1)". Some further examples: \begin{align} f=O(g)\; &\Rightarrow\; \int_a^b f = O\bigg( \int_a^b g \bigg) \\ f(x)=g(x)+O(1)\; &\Rightarrow\; e^{f(x)}=O(e^{g(x)}) \\ (1+O(1/x))^{O(x)} &= O(1) \quad \text{ for } x>0\\ \sin x &= O(|x|) \quad \text{ for all real } x. \end{align}
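The first displayed equation above, (n+1)^2 = n^2 + O(n), can be checked concretely: the difference is exactly 2n + 1, and 2n + 1 \le 3n once n \ge 1, so C = 3 is a witness constant. A quick numeric confirmation (a sketch):

```python
# (n+1)^2 - n^2 = 2n + 1 exactly, and 2n + 1 <= 3n for n >= 1,
# which is precisely what (n+1)^2 = n^2 + O(n) asserts.
for n in range(1, 10_001):
    diff = (n + 1) ** 2 - n ** 2
    assert diff == 2 * n + 1
    assert diff <= 3 * n
```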
Vinogradov's ≫ and Knuth's big Ω When f, g are both positive functions, the Vinogradov notation f \gg g and Knuth's notation f = \Omega(g) both mean g = O(f). Knuth's big \Omega enjoys widespread use today in computer science and combinatorics.
Hardy's ≍ and Knuth's big Θ In analytic number theory, the notation f(x) \asymp g(x) means both f(x)=O(g(x)) and g(x)=O(f(x)) . This notation is originally due to Hardy. Knuth's notation for the same notion is f(x) = \Theta(g(x)) . Roughly speaking, these statements assert that f(x) and g(x) have the
same order. These notations mean that there are positive constants M,N so that N g(x) \le f(x) \le M g(x) for all x in the common domain of f,g . When the functions are defined on the positive integers or positive real numbers, as with big O, writers oftentimes interpret statements f(x) = \Omega(g(x)) and f(x)=\Theta(g(x)) as holding for all sufficiently large x , that is, for all x beyond some point x_0 . Sometimes this is indicated by appending
x\to\infty to the statement. For example, 2n^2 - 10n = \Theta(n^2) is true for the domain n\ge 6 but false if the domain is all positive integers, since the function is zero at n=5 .
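The claim about 2n^2 - 10n can be made concrete (a sketch; the witness constants below are one valid choice): on n \ge 6 the ratio (2n^2 - 10n)/n^2 = 2 - 10/n is at least 1/3, so N = 1/3 and M = 2 work, while over all positive integers no positive N can work because the function vanishes at n = 5.

```python
def f(n):
    return 2 * n ** 2 - 10 * n

# Theta(n^2) on the domain n >= 6, with witnesses N = 1/3 and M = 2:
# the ratio 2 - 10/n is increasing in n and equals exactly 1/3 at n = 6.
assert all(n ** 2 / 3 <= f(n) <= 2 * n ** 2 for n in range(6, 10_001))

# But not Theta(n^2) over all positive integers: f(5) = 0, so no
# positive constant N can satisfy N * 5**2 <= f(5).
assert f(5) == 0
```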
Further examples
:n^3 + 20n^2 + n + 12 \asymp n^3 \quad \text{ for all } n\ge 1,
:(1+x)^8 = x^8 + \Theta(x^7) \quad \text{ for all } x\ge 1.
The notation
:f(n) = e^{\Omega(n)} \quad \text{ for all } n\ge 1
means that there is a positive constant M so that f(n) \ge e^{Mn} for all n\ge 1. By contrast,
:f(n) = e^{-O(n)} \quad \text{ for all } n\ge 1
means that there is a positive constant M so that f(n) \ge e^{-Mn} for all n\ge 1, and
:f(n) = e^{\Theta(n)} \quad \text{ for all } n\ge 1
means that there are positive constants M, N so that e^{Mn} \le f(n) \le e^{Nn} for all n\ge 1. For any domain D,
:f(x) = g(x)+O(1) \Longleftrightarrow e^{f(x)} \asymp e^{g(x)},
each statement holding for all x in D.

== Orders of common functions ==