[Figure: a nautilus shell displaying a logarithmic spiral.]
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of
scale invariance. For example, each chamber of the shell of a
nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a
logarithmic spiral.
Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to
self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture are also based on logarithms.
Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function \log(x) grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the
Tsiolkovsky rocket equation, the
Fenske equation, or the
Nernst equation.
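As an illustration, the Tsiolkovsky rocket equation, \Delta v = v_e \ln(m_0/m_f), can be evaluated directly; the rocket figures below are purely hypothetical, chosen only to show the logarithmic dependence on the mass ratio.

```python
import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Tsiolkovsky rocket equation: the achievable change in velocity
    grows only logarithmically with the ratio of initial to final mass."""
    return exhaust_velocity * math.log(initial_mass / final_mass)

# Hypothetical rocket: 4500 m/s exhaust velocity, 9:1 mass ratio.
print(delta_v(4500, 9000, 1000))  # roughly 9888 m/s
```

Because of the logarithm, doubling the propellant load yields far less than double the velocity gain, which is why the equation is a standard example of logarithmic compression in physics.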
[Figure: a logarithmic-scale chart of the value of one mark in Papiermarks during the German hyperinflation in the 1920s. The line showing its value is increasing very quickly, even with logarithmic scale.]
Scientific quantities are often expressed as logarithms of other quantities, using a
logarithmic scale. For example, the
decibel is a
unit of measurement associated with
logarithmic-scale quantities. It is based on the common logarithm of
ratios—10 times the common logarithm of a
power ratio or 20 times the common logarithm of a
voltage ratio. It is used to quantify the attenuation or amplification of electrical signals, to describe power levels of sounds in
acoustics, and to describe the
absorbance of light in the fields of
spectrometry and
optics. The
signal-to-noise ratio describing the amount of unwanted
noise in relation to a (meaningful)
signal is also measured in decibels. In a similar vein, the
peak signal-to-noise ratio is commonly used to assess the quality of sound and
image compression methods; it, too, is based on the logarithm. The strength of an earthquake is measured by taking the common logarithm of the energy emitted by the quake. This is used in the
moment magnitude scale or the
Richter magnitude scale. For example, a 5.0 earthquake releases 32 times and a 6.0 releases 1000 times the energy of a 4.0.
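The factors of 32 and 1000 follow from the definition of the moment magnitude scale, under which the radiated energy grows by a factor of 10^{1.5} for each whole magnitude step; a small sketch:

```python
def energy_ratio(m1, m2):
    """Ratio of energies radiated by earthquakes of moment magnitudes
    m1 and m2: each whole magnitude step multiplies the energy by
    10**1.5 (about 31.6)."""
    return 10 ** (1.5 * (m1 - m2))

print(round(energy_ratio(5.0, 4.0)))  # about 32
print(round(energy_ratio(6.0, 4.0)))  # 1000
```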
Apparent magnitude measures the brightness of stars logarithmically. In
chemistry, the negative of the decimal logarithm, the decimal cologarithm, is indicated by the letter p. For instance,
pH is the decimal cologarithm of the
activity of
hydronium ions (the form
hydrogen ions take in water). The activity of hydronium ions in neutral water is 10^{-7}
mol·L^{-1}, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^{4} of the activity, that is, vinegar's hydronium ion activity is about 10^{-3} mol·L^{-1}.
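The pH computation is a direct application of the decimal cologarithm and can be sketched in a few lines (activities as in the text):

```python
import math

def pH(hydronium_activity):
    """pH is the decimal cologarithm (negative base-10 logarithm)
    of the hydronium ion activity in mol/L."""
    return -math.log10(hydronium_activity)

print(pH(1e-7))  # neutral water: pH 7
print(pH(1e-3))  # vinegar (approximate): pH 3
```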
Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs,
exponential functions of the form f(x) = a \cdot b^x appear as straight lines with
slope equal to the logarithm of b.
Log–log graphs scale both axes logarithmically, which causes functions of the form f(x) = a \cdot x^k to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing
power laws.
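The straight-line behaviour on log–log axes gives a standard way to read off a power-law exponent from data: the slope of the line equals the exponent. A minimal sketch with synthetic data:

```python
import math

# Synthetic power-law data following y = 3 * x**2.
xs = [1, 2, 4, 8, 16]
ys = [3 * x ** 2 for x in xs]

# On log-log axes the points lie on a straight line; its slope
# (rise over run in log coordinates) recovers the exponent.
slope = (math.log(ys[-1]) - math.log(ys[0])) / (math.log(xs[-1]) - math.log(xs[0]))
print(slope)  # close to 2, the exponent of the power law
```

With real, noisy data one would fit a line through all the log-transformed points rather than use just the endpoints, but the principle is the same.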
==Psychology==
Logarithms occur in several laws describing
human perception:
Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have.
Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the ratio between the distance to a target and the size of the target. In
psychophysics, the
Weber–Fechner law proposes a logarithmic relationship between
stimulus and
sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less realistic than more recent models, such as
Stevens's power law.) Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 ten times as far away as 100) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.
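Fitts's law is usually written in its Shannon formulation, T = a + b·log2(1 + D/W), where D is the distance to the target, W its width, and a and b are device-dependent constants; the constants below are purely illustrative.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Fitts's law (Shannon formulation): predicted movement time in
    seconds. The constants a and b are device-dependent; the defaults
    here are illustrative, not measured values."""
    return a + b * math.log2(1 + distance / width)

# Doubling the distance to a target adds only a roughly constant
# increment to the movement time, reflecting the logarithm.
print(fitts_time(200, 20))
print(fitts_time(400, 20))
```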
==Probability theory and statistics==
[Figure: probability density functions (PDF) of random variables with log-normal distributions. The location parameter μ, which is zero for all three of the PDFs shown, is the mean of the logarithm of the random variable, not the mean of the variable itself.]
[Figure: distribution of first digits in populations of the countries of the world (bar chart); black dots indicate the distribution predicted by Benford's law. The two differ slightly, but both decrease in a similar fashion.]
Logarithms arise in
probability theory: the
law of large numbers dictates that, for a
fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads
approaches one-half. The fluctuations of this proportion about one-half are described by the
law of the iterated logarithm. Logarithms also occur in
log-normal distributions. When the logarithm of a
random variable has a
normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence. Logarithms are used for
maximum-likelihood estimation of parametric
statistical models. For such a model, the
likelihood function depends on at least one
parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "
log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for
independent random variables.
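The advantage of the log-likelihood is concrete: for independent observations the likelihood is a product of many small factors, which underflows numerically, while its logarithm is a simple sum. A minimal sketch with made-up coin-flip data:

```python
import math

# Hypothetical data: 100 coin flips, 62 heads. The likelihood of a
# bias p is p**62 * (1 - p)**38; its logarithm is a sum and is far
# easier to handle numerically.
heads, tails = 62, 38

def log_likelihood(p):
    return heads * math.log(p) + tails * math.log(1 - p)

# Grid search for the maximizer; the closed-form maximum-likelihood
# estimate is heads / (heads + tails), because log is increasing.
best = max((p / 1000 for p in range(1, 1000)), key=log_likelihood)
print(best)  # 0.62
```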
Benford's law describes the occurrence of digits in many
data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal digit of an item in the data sample is d (from 1 to 9) equals \log_{10}(d + 1) - \log_{10}(d),
regardless of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting. The
logarithm transformation is a type of
data transformation used to bring the empirical distribution closer to the assumed one.
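Benford's predicted first-digit probabilities follow directly from the formula \log_{10}(d + 1) - \log_{10}(d):

```python
import math

# Probability that the leading digit is d, according to Benford's law.
benford = {d: math.log10(d + 1) - math.log10(d) for d in range(1, 10)}

for d, p in benford.items():
    print(d, f"{p:.1%}")  # digit 1 gets about 30.1%, digit 9 only 4.6%

# The nine probabilities sum to 1, since the logarithms telescope
# to log10(10) - log10(1) = 1.
```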
==Computational complexity==
Analysis of algorithms is a branch of
computer science that studies the
performance of
algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that
divide a problem into smaller ones, and join the solutions of the subproblems. For example, to find a number in a sorted list, the
binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, \log_2(N) comparisons, where N is the list's length. Similarly, the
merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time
approximately proportional to N \cdot \log(N). The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard
uniform cost model. A function f(x) is said to
grow logarithmically if f(x) is (exactly or approximately) proportional to the logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any
natural number N can be represented in
binary form in no more than \log_2(N) + 1
bits. In other words, the amount of
memory needed to store N grows logarithmically with N.
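This logarithmic growth of storage can be checked directly: the number of binary digits of N is the floor of \log_2(N) plus one.

```python
import math

def bits_needed(n):
    """Number of binary digits needed to store the natural number n,
    i.e. floor(log2(n)) + 1."""
    return math.floor(math.log2(n)) + 1

# Doubling n adds just one bit; squaring n merely doubles the bit count.
print(bits_needed(1000))     # 10
print(bits_needed(1000000))  # 20
```

For very large integers, Python's built-in `int.bit_length()` computes the same quantity exactly, avoiding floating-point rounding near powers of two.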
==Entropy and chaos==
[Figure: two particle trajectories on an oval billiard table. The particles, starting at the center with an angle differing by one degree, take paths that diverge chaotically because of reflections at the boundary.]
Entropy is broadly a measure of the disorder of some system. In
statistical thermodynamics, the entropy S of some physical system is defined as
S = - k \sum_i p_i \ln(p_i).
The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, p_i is the probability that the state i is attained and k is the
Boltzmann constant. Similarly,
entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as \log_2(N) bits.
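The information-theoretic formula can be checked numerically; the probability distributions below are illustrative:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: -sum(p * log2(p)) over the nonzero
    probabilities of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# N equally likely messages carry log2(N) bits of information each.
print(shannon_entropy([0.25] * 4))  # 2.0 bits, since log2(4) = 2
# A biased source carries less information per message.
print(shannon_entropy([0.9, 0.1]))  # about 0.47 bits
```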
Lyapunov exponents use logarithms to gauge the degree of chaoticity of a
dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are
chaotic in a
deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive.
==Fractals==
[Figure: the Sierpinski triangle, constructed by repeatedly replacing equilateral triangles by three smaller ones; parts of the triangle are removed in an iterated way.]
Logarithms occur in definitions of the
dimension of
fractals. Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The
Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the
Hausdorff dimension of this structure \ln(3)/\ln(2) \approx 1.58. Another logarithm-based notion of dimension is obtained by
counting the number of boxes needed to cover the fractal in question.
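Both notions reduce, for strictly self-similar shapes, to a ratio of logarithms: the log of the number of self-similar pieces divided by the log of the scaling factor.

```python
import math

def similarity_dimension(pieces, scale):
    """Dimension of a self-similar fractal made of `pieces` copies of
    itself, each scaled down by the factor `scale`."""
    return math.log(pieces) / math.log(scale)

print(similarity_dimension(3, 2))  # Sierpinski triangle: about 1.58
print(similarity_dimension(4, 2))  # a filled square: 2.0, as expected
```

The second call is a sanity check: a solid square is four half-size copies of itself, and the formula duly returns the ordinary dimension 2.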
==Music==
Logarithms are related to musical tones and
intervals. In
equal temperament tunings, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or
pitch, of the individual tones. In the
12-tone equal temperament tuning common in modern Western music, each
octave (doubling of frequency) is broken into twelve equally spaced intervals called
semitones. For example, if the
note A has a frequency of 440
Hz then the note
B-flat has a frequency of 466 Hz. The interval between
A and
B-flat is a
semitone, as is the one between
B-flat and
B (frequency 493 Hz). Accordingly, the frequency ratios agree:
\frac{466}{440} \approx \frac{493}{466} \approx 1.059 \approx \sqrt[12]2.
Intervals between arbitrary pitches can be measured in octaves by taking the base-2 logarithm of the
frequency ratio, can be measured in equally tempered semitones by taking the base-2^{1/12} logarithm (12 times the base-2 logarithm), or can be measured in
cents, hundredths of a semitone, by taking the base-2^{1/1200} logarithm (1200 times the base-2 logarithm). The latter is used for finer encoding, as it is needed for finer measurements or non-equal temperaments.
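The three units differ only by a constant factor applied to the same base-2 logarithm, as a quick sketch with the A/B-flat ratio from the text shows:

```python
import math

def octaves(ratio):
    return math.log2(ratio)

def semitones(ratio):
    return 12 * math.log2(ratio)

def cents(ratio):
    return 1200 * math.log2(ratio)

ratio = 466 / 440          # A (440 Hz) up to B-flat (466 Hz)
print(round(semitones(ratio), 2))  # close to 1 equal-tempered semitone
print(round(cents(ratio)))         # close to 100 cents
```

The small deviation from exactly 1 semitone reflects that 466 Hz is itself a rounded value; the exact equal-tempered B-flat above A440 is 440 · 2^{1/12} ≈ 466.16 Hz.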
==Number theory==
Natural logarithms are closely linked to
counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in
number theory. For any
integer x, the quantity of
prime numbers less than or equal to x is denoted \pi(x). The
prime number theorem asserts that \pi(x) is approximately given by
\frac{x}{\ln(x)},
in the sense that the ratio of \pi(x) and that fraction approaches 1 when x tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely
proportional to the number of decimal digits of x. A far better estimate of \pi(x) is given by the
offset logarithmic integral function \mathrm{Li}(x), defined by
\mathrm{Li}(x) = \int_2^x \frac1{\ln(t)} \,dt.
The
Riemann hypothesis, one of the oldest open mathematical
conjectures, can be stated in terms of comparing \pi(x) and \mathrm{Li}(x). The
Erdős–Kac theorem describing the number of distinct
prime factors also involves the
natural logarithm. The logarithm of
n factorial, n! = 1 \cdot 2 \cdots n, is given by \ln (n!) = \ln (1) + \ln (2) + \cdots + \ln (n). This can be used to obtain
Stirling's formula, an approximation of n! for large n.

==Generalizations==