The iterated logarithm is useful in the analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms, such as:
* Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time.
* Fürer's algorithm for integer multiplication: O(n log n 2^{O(log* n)}).
* Finding an approximate maximum (an element at least as large as the median): log* n − 1 ± 3 parallel operations.
* Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds (the sketch after this list illustrates the color-reduction phase).
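To illustrate how an O(log* n) bound arises here, the following Python sketch simulates the color-reduction phase of a Cole–Vishkin-style algorithm on a directed cycle; it is a sequential simulation of the synchronous rounds, not a distributed implementation, and the helper name cole_vishkin_round is illustrative. Each node starts with its ID as its color and, in every round, re-colors itself from the lowest bit position at which its color differs from its successor's; the number of bits per color shrinks roughly logarithmically each round, so a constant-size palette is reached after about log* n rounds. The final reduction from 6 colors to 3 takes O(1) additional rounds and is omitted.

<syntaxhighlight lang="python">
def cole_vishkin_round(colors):
    """One synchronous round on a directed cycle: node i compares its color
    with its successor's, finds the lowest differing bit position k, and
    re-colors itself with the pair (k, own k-th bit) packed as 2k + bit."""
    n = len(colors)
    new_colors = []
    for i in range(n):
        c, succ = colors[i], colors[(i + 1) % n]
        diff = c ^ succ                      # nonzero: neighboring colors differ
        k = (diff & -diff).bit_length() - 1  # index of the lowest set bit
        new_colors.append(2 * k + ((c >> k) & 1))
    return new_colors

n = 100_000
colors = list(range(n))   # initial proper coloring: each node's unique ID
rounds = 0
while max(colors) > 5:    # stop once colors fit in the palette {0, ..., 5}
    colors = cole_vishkin_round(colors)
    rounds += 1
print("rounds:", rounds, "colors left:", len(set(colors)))
</syntaxhighlight>

Even for n = 100000 (17-bit IDs) only a handful of rounds are needed, reflecting the O(log* n) round bound.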
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or any fixed number of repetitions of it. This is because tetration grows much faster than the iterated exponential:

{}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y} \gg \underbrace{b^{b^{\cdot^{\cdot^{b^{y}}}}}}_{n}

so the inverse grows much more slowly: \log_b^* x \ll \log_b^n x.
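A direct way to see this slow growth is to compute the function. The following Python sketch (the name log_star2 is illustrative, not standard) repeatedly takes floor(log2) using exact integer arithmetic via bit_length, so it works even for an input as large as 2^65536; at the power-of-two towers used in the demonstration every intermediate value is an exact power of two, so the count it returns equals log*.

<syntaxhighlight lang="python">
def log_star2(n):
    """Count how many times floor(log2) must be applied to the positive
    integer n before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = n.bit_length() - 1  # floor(log2(n)), exact for any Python int
        count += 1
    return count

# The count climbs by 1 only when n reaches the next level of the
# power tower 2, 2^2, 2^(2^2), ...
for n in (2, 2**2, 2**2**2, 2**2**2**2, 2**2**2**2**2):
    label = n if n < 10**6 else "2^65536"
    print(label, "->", log_star2(n))
# 2 -> 1, 4 -> 2, 16 -> 3, 65536 -> 4, 2^65536 -> 5
</syntaxhighlight>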
For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5. Higher bases give smaller iterated logarithms.

==Other applications==