==Growth and approximation==
[Image: relative error in a truncated Stirling series vs. number of terms]
As a function of n, the factorial has faster than exponential growth, but grows more slowly than a double exponential function. Its growth rate is similar to n^n, but slower by an exponential factor. One way of approaching this result is by taking the
natural logarithm of the factorial, which turns its product formula into a sum, and then estimating the sum by an integral: \ln n! = \sum_{x=1}^n \ln x \approx \int_1^n\ln x\, dx=n\ln n-n+1. Exponentiating the result (and ignoring the negligible +1 term) approximates n! as (n/e)^n. More carefully bounding the sum both above and below by an integral, using the trapezoid rule, shows that this estimate needs a correction factor proportional to \sqrt n. The constant of proportionality for this correction can be found from the
Wallis product, which expresses \pi as a limiting ratio of factorials and powers of two. The result of these corrections is
Stirling's approximation: n!\sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^n\,. Here, the \sim symbol means that, as n goes to infinity, the ratio between the left and right sides approaches 1 in the
limit. Stirling's formula provides the first term in an
asymptotic series that becomes even more accurate when taken to greater numbers of terms: n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left(1 +\frac{1}{12n}+\frac{1}{288n^2} - \frac{139}{51840n^3} -\frac{571}{2488320n^4}+ \cdots \right). An alternative version (derived directly from the Euler–Maclaurin formula) converges faster because it requires only odd exponents in its correction terms: n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \exp\left(\frac{1}{12n} - \frac{1}{360n^3} + \frac{1}{1260n^5} - \cdots\right). The binary logarithm of the factorial, used in the analysis of comparison sorting algorithms, can be estimated very accurately from Stirling's approximation: \log_2 n! = n\log_2 n- n \log_2 e + \frac12\log_2 n + O(1).
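For concreteness, a minimal Python sketch (standard library only; the helper name stirling is illustrative) compares the exact factorial with the leading term of Stirling's series and with its first correction:

import math

def stirling(n, terms=1):
    # Stirling's approximation to n!; terms=1 gives the leading term,
    # terms=2 adds the 1/(12n) correction from the asymptotic series.
    approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    if terms >= 2:
        approx *= 1 + 1 / (12 * n)
    return approx

for n in (5, 10, 20):
    exact = math.factorial(n)
    print(n, exact, stirling(n) / exact, stirling(n, 2) / exact)
# The printed ratios approach 1 as n grows, and the corrected version is closer.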
==Divisibility and digits==
The product formula for the factorial implies that n! is divisible by all prime numbers that are at most n, and by no larger prime numbers. More precise information about its divisibility is given by
Legendre's formula, which gives the exponent of each prime p in the prime factorization of n! as \sum_{i=1}^\infty \left \lfloor \frac n {p^i} \right \rfloor=\frac{n - s_p(n)}{p - 1}. Here s_p(n) denotes the sum of the base-p digits of n. The exponent given by this formula can more technically be called the p-adic valuation of the factorial. Grouping the prime factors of the factorial into
prime powers in different ways produces the
multiplicative partitions of factorials. The special case of Legendre's formula for p=5 gives the number of
trailing zeros in the decimal representation of the factorials. Legendre's formula implies that the exponent of the prime p=2 is always larger than the exponent for p=5, so each factor of five can be paired with a factor of two to produce one of these trailing zeros.
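As an illustration, Legendre's formula and the trailing-zero count can be computed directly; a minimal Python sketch (the helper name prime_exponent is illustrative, not from any particular library):

import math

def prime_exponent(n, p):
    # Exponent of the prime p in n!, by Legendre's formula.
    exponent, power = 0, p
    while power <= n:
        exponent += n // power
        power *= p
    return exponent

n = 100
zeros = prime_exponent(n, 5)          # trailing zeros of 100!
assert str(math.factorial(n)).endswith("0" * zeros)
assert not str(math.factorial(n)).endswith("0" * (zeros + 1))
print(zeros)  # 24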
The leading digits of the factorials are distributed according to Benford's law. Every sequence of digits, in any base, is the sequence of initial digits of some factorial number in that base. Another result on divisibility of factorials,
Wilson's theorem, states that (n-1)!+1 is divisible by n if and only if n is a
prime number. The
greatest common divisor of the values of a
primitive polynomial of degree d over the integers evenly divides d!.
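Wilson's theorem can be read as a (highly inefficient) primality test; a minimal Python sketch:

import math

def is_prime_wilson(n):
    # Primality test via Wilson's theorem: n > 1 is prime
    # exactly when (n-1)! + 1 is divisible by n.
    return n > 1 and (math.factorial(n - 1) + 1) % n == 0

print([n for n in range(2, 30) if is_prime_wilson(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]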
==Continuous interpolation and non-integer generalization==
There are infinitely many ways to extend the factorials to a continuous function. The most widely used of these uses the gamma function, which can be defined for positive real numbers as the integral \Gamma(z)=\int_0^\infty x^{z-1} e^{-x}\,dx and is related to the factorial by n!=\Gamma(n+1) for every non-negative integer n. The same integral converges more generally for any complex number z whose real part is positive.
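In Python, this interpolation is exposed as math.gamma; a minimal sketch checking the relation n! = \Gamma(n+1) and evaluating one non-integer point:

import math

# Gamma offset by one reproduces the factorial at non-negative integers.
for n in range(6):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# A non-integer "factorial": (1/2)! = Gamma(3/2) = sqrt(pi)/2.
print(math.gamma(1.5), math.sqrt(math.pi) / 2)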
It can be extended to the non-integer points in the rest of the complex plane by solving Euler's
reflection formula \Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z}. However, this formula cannot be used at integers because, for them, the \sin\pi z term would produce a
division by zero. The result of this extension process is an
analytic function (more specifically a
meromorphic function), the
analytic continuation of the integral formula for the gamma function. It has a nonzero value at all complex numbers, except for the non-positive integers where it has
simple poles. Correspondingly, this provides a definition for the factorial at all complex numbers other than the negative integers. One property of the gamma function, distinguishing it from other continuous interpolations of the factorials, is given by the
Bohr–Mollerup theorem, which states that the gamma function (offset by one) is the only
log-convex function on the positive real numbers that interpolates the factorials and obeys the same functional equation. A related uniqueness theorem of
Helmut Wielandt states that the complex gamma function and its scalar multiples are the only
holomorphic functions on the positive complex half-plane that obey the functional equation and remain bounded for complex numbers with real part between 1 and 2. Other complex functions that interpolate the factorial values include
Hadamard's gamma function, which is an
entire function over all the complex numbers, including the non-positive integers. In the p-adic numbers, it is not possible to continuously interpolate the factorial function directly, because the factorials of large integers (a dense subset of the p-adic integers) converge to zero according to Legendre's formula, forcing any continuous function that is close to their values to be zero everywhere. Instead, the p-adic gamma function provides a continuous interpolation of a modified form of the factorial, omitting the factors in the factorial that are divisible by p. The
digamma function is the
logarithmic derivative of the gamma function. Just as the gamma function provides a continuous interpolation of the factorials, offset by one, the digamma function provides a continuous interpolation of the
harmonic numbers, offset by the
Euler–Mascheroni constant.
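A numerical check of this interpolation, as a minimal Python sketch; since the standard library has no digamma function, it is approximated here by a central difference of math.lgamma:

import math

def digamma(x, h=1e-5):
    # Rough central-difference approximation to the logarithmic derivative
    # of the gamma function, for illustration only.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

euler_mascheroni = 0.5772156649015329
n = 10
harmonic = sum(1 / k for k in range(1, n + 1))      # H_10 = 2.9289682...
print(harmonic, digamma(n + 1) + euler_mascheroni)   # nearly equal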
==Computation==
[Image: a 1975 calculator with a factorial key (third row, center right)]
The factorial function is a common feature in
scientific calculators. It is also included in scientific programming libraries such as the
Python mathematical functions module and the
Boost C++ library. If efficiency is not a concern, computing factorials is trivial: just successively multiply a variable initialized to 1 by the integers up to n. The simplicity of this computation makes it a common example in the use of different computer programming styles and methods. The computation of n! can be expressed in
pseudocode using
iteration as
define factorial(n):
    f := 1
    for i := 1, 2, 3, ..., n:
        f := f * i
    return f

or using recursion based on its recurrence relation as

define factorial(n):
    if (n = 0):
        return 1
    return n * factorial(n − 1)

Other methods suitable for its computation include
memoization,
dynamic programming, and
functional programming.
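A minimal Python sketch of two of these styles, memoization via functools.lru_cache and a functional fold via functools.reduce (the function names are illustrative):

from functools import lru_cache, reduce
import operator

@lru_cache(maxsize=None)
def factorial_memo(n):
    # Memoized recursion: each value of n! is computed at most once.
    return 1 if n == 0 else n * factorial_memo(n - 1)

def factorial_fold(n):
    # Functional style: fold multiplication over the range 1..n.
    return reduce(operator.mul, range(1, n + 1), 1)

print(factorial_memo(10), factorial_fold(10))  # 3628800 3628800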
The computational complexity of these algorithms may be analyzed using the unit-cost random-access machine model of computation, in which each arithmetic operation takes constant time and each number uses a constant amount of storage space. In this model, these methods can compute n! in time O(n), and the iterative version uses space O(1). Unless optimized for
tail recursion, the recursive version takes linear space to store its
call stack. However, this model of computation is only suitable when n is small enough to allow n! to fit into a
machine word. The values 12! and 20! are the largest factorials that can be stored in, respectively, the 32-bit and 64-bit integers. Floating point can represent larger factorials, but approximately rather than exactly, and will still overflow for factorials larger than 170!.
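These limits are easy to observe; a minimal Python sketch (Python's own integers are arbitrary-precision, so conversion to float is used here to show the double-precision limit):

import math

print(math.factorial(12) <= 2**31 - 1 < math.factorial(13))   # True: 12! fits in a 32-bit signed word
print(math.factorial(20) <= 2**63 - 1 < math.factorial(21))   # True: 20! fits in a 64-bit signed word
print(float(math.factorial(170)))   # about 7.26e306, still representable as a double
try:
    float(math.factorial(171))      # exceeds the double-precision range
except OverflowError as e:
    print("overflow:", e)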
The exact computation of larger factorials involves arbitrary-precision arithmetic, because of
fast growth and
integer overflow. Time of computation can be analyzed as a function of the number of digits or bits in the result. By Stirling's formula, n! has O(n\log n) bits, and fast multiplication algorithms can multiply two b-bit numbers in time O(b\log b). However, computing the factorial involves repeated products, rather than a single multiplication, so these time bounds do not apply directly. In this setting, computing n! by multiplying the numbers from 1 up to n in sequence is inefficient, because it involves n multiplications, a constant fraction of which take time O(n\log^2 n) each, giving total time O(n^2\log^2 n). A better approach is to perform the multiplications as a divide-and-conquer algorithm that multiplies a sequence of i numbers by splitting it into two subsequences of i/2 numbers, multiplies each subsequence, and combines the results with one last multiplication. This approach to the factorial takes total time O(n\log^3 n): one logarithm comes from the number of bits in the factorial, a second comes from the multiplication algorithm, and a third comes from the divide and conquer.
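A minimal Python sketch of this divide-and-conquer product (the function names are illustrative; Python's built-in integers supply the arbitrary-precision arithmetic):

def product_range(lo, hi):
    # Product of the integers lo, lo+1, ..., hi by balanced splitting,
    # so that most multiplications involve numbers of similar size.
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return product_range(lo, mid) * product_range(mid + 1, hi)

def factorial_dc(n):
    return product_range(1, n)

import math
assert factorial_dc(1000) == math.factorial(1000)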
Even better efficiency is obtained by computing n! from its prime factorization, based on the principle that exponentiation by squaring is faster than expanding an exponent into a product. An algorithm for this by Arnold Schönhage begins by finding the list of the primes up to n, for instance using the sieve of Eratosthenes, and uses Legendre's formula to compute the exponent for each prime. Then it computes the product of the prime powers with these exponents, using a recursive algorithm, as follows (a sketch is given after the list):
• Use divide and conquer to compute the product of the primes whose exponents are odd
• Divide all of the exponents by two (rounding down to an integer), recursively compute the product of the prime powers with these smaller exponents, and square the result
• Multiply together the results of the two previous steps
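A minimal Python sketch of this recursion (illustrative only, not Schönhage's implementation; the helper names are hypothetical):

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def legendre_exponent(n, p):
    # Exponent of p in n!, by Legendre's formula.
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def product(nums):
    # Balanced (divide-and-conquer) product of a list of numbers.
    if not nums:
        return 1
    if len(nums) == 1:
        return nums[0]
    mid = len(nums) // 2
    return product(nums[:mid]) * product(nums[mid:])

def prime_power_product(pairs):
    # Product of p**e over (p, e) pairs, by the odd/halve/square recursion above.
    if all(e == 0 for _, e in pairs):
        return 1
    odd_part = product([p for p, e in pairs if e % 2 == 1])
    halved = [(p, e // 2) for p, e in pairs]
    return odd_part * prime_power_product(halved) ** 2

def factorial_pf(n):
    pairs = [(p, legendre_exponent(n, p)) for p in primes_up_to(n)]
    return prime_power_product(pairs)

import math
assert factorial_pf(100) == math.factorial(100)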
The product of all primes up to n is an O(n)-bit number, by the prime number theorem, so the time for the first step is O(n\log^2 n), with one logarithm coming from the divide and conquer and another coming from the multiplication algorithm. In the recursive calls to the algorithm, the prime number theorem can again be invoked to prove that the numbers of bits in the corresponding products decrease by a constant factor at each level of recursion, so the total time for these steps at all levels of recursion adds in a geometric series to O(n\log^2 n). The time for the squaring in the second step and the multiplication in the third step are again O(n\log^2 n), because each is a single multiplication of a number with O(n\log n) bits. Again, at each level of recursion the numbers involved have a constant fraction as many bits (because otherwise repeatedly squaring them would produce too large a final result), so again the amounts of time for these steps in the recursive calls add in a geometric series to O(n\log^2 n). Consequently, the whole algorithm takes time O(n\log^2 n), proportional to a single multiplication with the same number of bits in its result.
==Related sequences and functions==