=== First use of numbers ===
Bones and other artifacts have been discovered with marks cut into them that many believe are
tally marks. Some historians suggest that the
Lebombo bone (dated about 43,000 years ago) and the
Ishango bone (dated about 22,000 to 30,000 years ago) are the oldest arithmetic artifacts, but this interpretation is disputed. These tally marks may have been used for counting elapsed time, such as numbers of days or lunar cycles, or for keeping records of
quantities, such as of animals. A
perceptual system for quantity, thought to underlie numeracy, is shared with other species; this phylogenetic distribution suggests that it existed before the emergence of language. The earliest unambiguous numbers in the archaeological record are the
Mesopotamian base 60 (sexagesimal) system, in use by the 4th millennium BC; place value emerged in the 3rd millennium BC. The earliest known base 10 system dates to 3100 BC in
Egypt. A Babylonian clay tablet from the early 2nd millennium BC gives an estimate of the ratio of the circumference of a circle to its diameter of 3\frac{1}{8} = 3.125, possibly the oldest approximation of π.
=== Numerals ===
[Image: numerals in the Hindu–Arabic, Devanagari, Eastern Arabic, Chinese, Chinese financial, and Roman systems.]
Numbers should be distinguished from
numerals, the symbols used to represent numbers. The Egyptians invented the first ciphered numeral system, and the Greeks followed by mapping their counting numbers onto Ionian and Doric alphabets. (However, in the 3rd century BC,
Archimedes first demonstrated the use of a
positional numeral system to display extremely large numbers in
The Sand Reckoner.) Roman numerals, a system that used combinations of letters from the Roman alphabet, remained dominant in Europe until the spread of the
Hindu–Arabic numeral system around the late 14th century, and the Hindu–Arabic numeral system remains the most common system for representing numbers in the world today. The key to the effectiveness of the system was the symbol for
zero, which was developed by ancient
Indian mathematicians around 500 AD. The first known recorded use of
zero as an
integer dates to AD 628, and appeared in the
Brāhmasphuṭasiddhānta, the main work of the
Indian mathematician Brahmagupta. He is usually considered the first to formulate the mathematical concept of zero. Brahmagupta treated 0 as a number and discussed operations involving it, including
division by zero. He gave rules of using zero with negative and positive numbers, such as "zero plus a positive number is a positive number, and a negative number plus zero is the negative number". By this time (the 7th century), the concept had clearly reached Cambodia in the form of
Khmer numerals. There are other uses of zero before Brahmagupta, though the documentation is not as complete as it is in the
Brāhmasphuṭasiddhānta. Many ancient texts used 0, including Babylonian and Egyptian texts. Egyptians used the word
nfr to denote zero balance in
double entry accounting. Indian texts used the
Sanskrit word śūnya to refer to the concept of
void. In mathematics texts this word often refers to the number zero. In a similar vein,
Pāṇini (5th century BC) used the null (zero) operator in the
Ashtadhyayi, an early example of an
algebraic grammar for the Sanskrit language (also see
Pingala). Records show that the Ancient Greeks seemed unsure about the status of 0 as a number: they asked themselves "How can 'nothing' be something?" leading to interesting
philosophical and, by the Medieval period, religious arguments about the nature and existence of 0 and the vacuum. The
paradoxes of
Zeno of Elea depend in part on the uncertain interpretation of 0. (The ancient Greeks even questioned whether 1 was a number.) Maya numerals are an example of a base-20 numeral system. It would be the
Maya who developed zero as a cardinal number, employing it in their
numeral system and in the
Maya calendar. Maya used a
base 20 numeral system, writing each digit with up to four dots (worth 1 each) and up to three bars (worth 5 each).
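The bar-and-dot digit scheme just described can be sketched as follows (a modern illustration of pure base-20 positional notation; it ignores the shell glyph the Maya used for a zero digit and the modified third position of their calendrical system):

```python
def maya_digit(d):
    # A base-20 digit 0..19 is drawn with bars (worth 5 each, up to three)
    # and dots (worth 1 each, up to four).
    assert 0 <= d < 20
    return d // 5, d % 5          # (bars, dots)

def maya_digits(n):
    # Decompose a nonnegative integer into base-20 digits
    # (most significant first) and render each as a (bars, dots) pair.
    if n == 0:
        return [(0, 0)]           # the Maya wrote zero with a shell glyph
    digits = []
    while n:
        digits.append(maya_digit(n % 20))
        n //= 20
    return digits[::-1]

print(maya_digits(19))    # [(3, 4)] — three bars and four dots
print(maya_digits(429))   # [(0, 1), (0, 1), (1, 4)] — 1·400 + 1·20 + 9
```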
George I. Sánchez in 1961 reported a base 4, base 5 "finger" abacus. By 130 AD,
Ptolemy, influenced by
Hipparchus and the Babylonians, was using a symbol for 0 (a small circle with a long overbar) within a
sexagesimal numeral system otherwise using alphabetic
Greek numerals. Because it was used alone, not as just a placeholder, this
Hellenistic zero was the first
documented use of a true zero in the Old World. In later
Byzantine manuscripts of his
Syntaxis Mathematica (
Almagest), the Hellenistic zero had morphed into the Greek letter
Omicron (otherwise meaning 70 in
isopsephy). A true zero was used in tables alongside
Roman numerals by 525 (first known use by
Dionysius Exiguus), but as a word, meaning
nothing, not as a symbol. When division produced 0 as a remainder, nihil, also meaning
nothing, was used. These medieval zeros were used by all future medieval
computists (calculators of Easter). An isolated use of their initial, N, as a true zero symbol appears in a table of Roman numerals by
Bede or a colleague about 725.
=== Negative numbers ===
The abstract concept of negative numbers was recognized as early as 100–50 BC in China.
The Nine Chapters on the Mathematical Art contains methods for finding the areas of figures; red rods were used to denote positive
coefficients, black for negative. The first reference in a Western work was in the 3rd century AD in Greece.
Diophantus referred to the equation equivalent to 4x + 20 = 0 (the solution is negative) in
Arithmetica, saying that the equation gave an absurd result. During the 600s, negative numbers were in use in India to represent debts. Diophantus' previous reference was discussed more explicitly by Indian mathematician
Brahmagupta, in
Brāhmasphuṭasiddhānta in 628, who used negative numbers to produce the general form of the
quadratic formula that remains in use today. However, in the 12th century in India,
Bhaskara gives negative roots for quadratic equations but says the negative value "is in this case not to be taken, for it is inadequate; people do not approve of negative roots". At the same time, the Chinese were indicating negative numbers by drawing a diagonal stroke through the right-most non-zero digit of the corresponding positive number's numeral. An early European experimenter with negative numbers was
Nicolas Chuquet during the 15th century. He used them as
exponents, but referred to them as "absurd numbers". As recently as the 18th century, it was common practice to ignore any negative results returned by equations on the assumption that they were meaningless.
=== Rational numbers ===
[Image: confining the value of π between the perimeters of circumscribed and inscribed polygons yields rational estimates.]
It is likely that the concept of fractional numbers dates to
prehistoric times. The Rhind Papyrus includes an example of deriving the area of a circle from its diameter, which yields an estimate of π as \bigl(\frac{16}{9}\bigr)^2 ≈ 3.16049.... Of the Indian texts, the most relevant is the
Sthananga Sutra, which also covers number theory as part of a general study of mathematics. The concept of
decimal fractions is closely linked with decimal place-value notation; the two seem to have developed in tandem. For example, it is common for Jain mathematical
sutras to include calculations of decimal-fraction approximations to
pi or the
square root of 2. Similarly, Babylonian math texts used sexagesimal (base 60) fractions.
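As a quick numerical check (a modern illustration, not part of the historical record), the two ancient rational approximations of π mentioned above compare to the true value as follows:

```python
import math
from fractions import Fraction

# Two ancient rational approximations of pi discussed above.
babylonian = Fraction(25, 8)        # 3 1/8, from a Babylonian clay tablet
rhind = Fraction(16, 9) ** 2        # (16/9)^2 = 256/81, from the Rhind Papyrus

for name, approx in [("Babylonian 25/8", babylonian), ("Rhind (16/9)^2", rhind)]:
    error = abs(float(approx) - math.pi)
    print(f"{name}: {float(approx):.6f} (error {error:.4f})")
# Babylonian 25/8: 3.125000 (error 0.0166)
# Rhind (16/9)^2: 3.160494 (error 0.0189)
```

Interestingly, the older Babylonian estimate is slightly closer to π than the Rhind value.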
=== Real numbers and irrational numbers ===
[Image: place values for an approximation of the square root of 2.]
These values were primarily used for practical calculations in geometry and land measurement. There were practical approximations of irrational numbers in the
Indian Shulba Sutras composed between 800 and 500 BC. The first existence proof of irrational numbers is usually attributed to
Pythagoras, more specifically to the
Pythagorean Hippasus, who produced a (most likely geometrical) proof of the irrationality of the
square root of 2. The story goes that Hippasus discovered irrational numbers when trying to represent the square root of 2 as a fraction. However, Pythagoras believed in the absoluteness of numbers. He could not disprove the existence of irrational numbers, or accept them, so according to legend, he sentenced Hippasus to death by drowning, to impede the spread of this unsettling news. The 16th century brought final European acceptance of negative integers and fractional numbers. By the 17th century, mathematicians generally used decimal fractions with modern notation. The concept of
real numbers was introduced in the 17th century by
René Descartes. While studying
compound interest, in 1683
Jacob Bernoulli found that as the compounding intervals grew ever shorter, the rate of
exponential growth converged to a
base of 2.71828...; this key mathematical constant would later be named
Euler's number (e). Irrational numbers began to be studied systematically in the 18th century, with
Leonhard Euler, who proved that the irrational numbers are those numbers whose
simple continued fraction expansion is not finite, and that Euler's number (e) is irrational. The
irrationality of π was proved in 1761 by
Johann Lambert. It is in the second half of the 19th century that real numbers, and thus irrational numbers, were rigorously defined, with the work of
Augustin-Louis Cauchy,
Charles Méray (1869),
Karl Weierstrass (1872),
Eduard Heine (1872),
Georg Cantor (1883), and
Richard Dedekind (1872).
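The two characterizations of e mentioned above, Bernoulli's compounding limit and Euler's non-terminating continued fraction, can be illustrated numerically (a modern sketch, not a historical computation):

```python
import math

# Bernoulli's limit: (1 + 1/n)^n approaches e = 2.71828... as n grows.
for n in (1, 12, 365, 10**6):
    print(n, (1 + 1 / n) ** n)
# n = 10**6 already agrees with e to five decimal places.

def continued_fraction(x, terms):
    # Compute the first few simple continued fraction terms of x.
    # (Floating-point precision limits how many terms are reliable.)
    result = []
    for _ in range(terms):
        a = math.floor(x)
        result.append(a)
        x = 1 / (x - a)
    return result

# Euler observed the regular, never-terminating pattern
# [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], from which the irrationality of e follows.
print(continued_fraction(math.e, 8))   # [2, 1, 2, 1, 1, 4, 1, 1]
```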
=== Transcendental numbers and reals ===
A
transcendental number is a numerical value that is not a root of any nonzero
polynomial with integer coefficients. This means it is not
algebraic; in particular, since every rational number is algebraic, every transcendental number is irrational. The existence of transcendental numbers was first established by
Liouville (1844, 1851).
Hermite proved in 1873 that
e is transcendental and
Lindemann proved in 1882 that π is transcendental. Finally,
Cantor showed that the set of all
real numbers is
uncountably infinite but the set of all
algebraic numbers is
countably infinite, so there is an uncountably infinite number of transcendental numbers.
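Cantor's counting argument can be summarized as follows (a modern summary of the reasoning, not Cantor's original presentation):

```latex
\mathbb{A} \;=\; \bigcup_{n \ge 1}\;
  \bigcup_{\substack{p \in \mathbb{Z}[x] \\ \deg p = n}} \{\, x : p(x) = 0 \,\}
% Integer polynomials of degree n are indexed by tuples in Z^{n+1}, a
% countable set, and each nonzero polynomial has at most n roots. A
% countable union of finite sets is countable, so |A| = aleph_0. Since R
% is uncountable and R = A ∪ T, the set T of transcendental real numbers
% must be uncountable.
```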
=== Infinity and infinitesimals ===
In mathematics,
infinity is considered an abstract
concept rather than a number; instead of being "greater than any number", infinity is the property of having no end. The earliest known conception of mathematical infinity appears in the
Yajurveda, an ancient Indian text, which at one point states, "If [the whole] is subtracted from [the whole], the leftover will still be [the whole]". Infinity was a popular topic of philosophical study among the
Jain mathematicians c. 400 BC. They distinguished between five types of infinity: infinite in one and two directions, infinite in area, infinite everywhere, and infinite perpetually.
Aristotle defined the traditional Western notion of mathematical infinity. He distinguished between
actual infinity and potential infinity—the general consensus being that only the latter had true value.
Galileo Galilei's
Two New Sciences discussed the idea of
one-to-one correspondences between infinite sets, known as
Galileo's paradox. The next major advance in the theory was made by
Georg Cantor; in 1895 he published a book about his new
set theory, introducing, among other things,
transfinite numbers and formulating the
continuum hypothesis. In the 1960s,
Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. The system of
hyperreal numbers represents a rigorous method of treating the ideas about
infinite and
infinitesimal numbers that had been used casually by mathematicians, scientists, and engineers ever since the invention of
infinitesimal calculus by
Newton and
Leibniz. A modern geometrical version of infinity is given by
projective geometry, which introduces "ideal
points at infinity", one for each spatial direction. Each family of parallel lines in a given direction is postulated to converge to the corresponding ideal point. This is closely related to the idea of vanishing points in
perspective drawing.
=== Complex numbers ===
The earliest fleeting reference to square roots of negative numbers occurred in the work of the mathematician and inventor
Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible
frustum of a
pyramid. They became more prominent when in the 16th century closed formulas for the roots of third and fourth degree polynomials were discovered by Italian mathematicians such as
Niccolò Fontana Tartaglia and
Gerolamo Cardano. It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. This was doubly unsettling since mathematicians of the time did not even consider negative numbers to be on firm ground.
René Descartes is sometimes credited with coining the term "imaginary" for these quantities in 1637, intending it as derogatory. (See
imaginary number for a discussion of the "reality" of complex numbers.) A further source of confusion was that the equation \left(\sqrt{-1}\right)^2 = \sqrt{-1}\sqrt{-1} = -1 seemed capriciously inconsistent with the algebraic identity \sqrt{a}\sqrt{b} = \sqrt{ab}, which is valid for positive real numbers
a and
b, and was also used in complex number calculations with one of
a,
b positive and the other negative. The incorrect use of this identity, and the related identity \frac{1}{\sqrt{a}} = \sqrt{\frac{1}{a}}, in the case when both
a and
b are negative, even bedeviled
Euler. This difficulty eventually led him to the convention of using the special symbol
i in place of \sqrt{-1} to guard against this mistake.
[Image: Euler's formula in the complex plane, showing real and imaginary coordinates.]
The 18th century saw the work of
Abraham de Moivre and
Leonhard Euler.
De Moivre's formula (1730) states: (\cos \theta + i\sin \theta)^{n} = \cos n\theta + i\sin n\theta, while
Euler's formula of
complex analysis (1748) gave us: \cos \theta + i\sin \theta = e^{i\theta}. A special case of this formula yields
Euler's identity: e^{i\pi} + 1 = 0, showing a profound connection between the most fundamental numbers in mathematics. The existence of complex numbers was not completely accepted until
Caspar Wessel described the geometrical interpretation in 1799.
Carl Friedrich Gauss rediscovered and popularized it several years later, and as a result the theory of complex numbers received a notable expansion. However, the idea of the graphic representation of complex numbers had appeared as early as 1685, in
Wallis's
De algebra tractatus. Also in 1799, Gauss provided the first generally accepted proof of the
fundamental theorem of algebra, showing that every polynomial over the complex numbers has a full set of solutions in that realm. Gauss studied complex numbers of the form a + bi, where
a and
b are integers (now called
Gaussian integers) or rational numbers. His student,
Gotthold Eisenstein, studied numbers of the type a + bω, where
ω is a complex root of x^3 - 1 = 0 (now called
Eisenstein integers). Other such classes (called
cyclotomic fields) of complex numbers derive from the
roots of unity x^k - 1 = 0 for higher values of
k. This generalization is largely due to
Ernst Kummer, who also invented
ideal numbers, which were expressed as geometrical entities by
Felix Klein in 1893. In 1850
Victor Alexandre Puiseux took the key step of distinguishing between poles and branch points, and introduced the concept of
essential singular points. This eventually led to the concept of the
extended complex plane.
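The algebraic pitfall with \sqrt{a}\sqrt{b} described above, together with the identities of De Moivre and Euler, can be checked with Python's complex-math module (a modern illustration):

```python
import cmath
import math

# The identity sqrt(a)*sqrt(b) = sqrt(a*b) fails when both a and b are negative:
print(cmath.sqrt(-1) * cmath.sqrt(-1))   # (-1+0j)
print(cmath.sqrt((-1) * (-1)))           # (1+0j) — not the same!

# De Moivre's formula: (cos t + i sin t)^n == cos(nt) + i sin(nt)
t, n = 0.7, 5
lhs = (math.cos(t) + 1j * math.sin(t)) ** n
rhs = math.cos(n * t) + 1j * math.sin(n * t)
print(abs(lhs - rhs) < 1e-12)            # True

# Euler's identity: e^(i*pi) + 1 == 0, up to floating-point rounding
print(abs(cmath.exp(1j * math.pi) + 1))  # on the order of 1e-16
```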
=== Prime numbers ===
Prime numbers may have been studied throughout recorded history. A prime is a natural number greater than 1 that is not a product of two smaller natural numbers. It has been suggested that the Ishango bone includes a list of the prime numbers between 10 and 20. The Rhind Papyrus contains fraction expansions of different forms for prime and composite numbers. But the formal study of prime numbers is first documented by the ancient Greeks. Euclid devoted one book of the
Elements to the theory of primes; in it he proved the infinitude of the primes and the
fundamental theorem of arithmetic, and presented the
Euclidean algorithm for finding the
greatest common divisor of two numbers. In 240 BC,
Eratosthenes used the
Sieve of Eratosthenes to quickly isolate prime numbers. But most further development of the theory of primes in Europe dates to the
Renaissance and later eras. At around 1000 AD,
Ibn al-Haytham discovered
Wilson's theorem.
Ibn al-Banna' al-Marrakushi found a way to speed up the Sieve of Eratosthenes by only testing up to the square root of the number. Fibonacci communicated Islamic mathematical contributions to Europe, and in 1202 was the first to describe the method of
trial division. Other results concerning the distribution of the primes include Euler's proof that the sum of the reciprocals of the primes diverges, and the
Goldbach conjecture, which claims that every even integer greater than 2 is the sum of two primes. Yet another conjecture related to the distribution of prime numbers is the
Riemann hypothesis, formulated by
Bernhard Riemann in 1859. The
prime number theorem was finally proved by
Jacques Hadamard and
Charles de la Vallée-Poussin in 1896. In Ancient Greece,
number symbolism heavily influenced the development of
Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today. Folktales in different cultures exhibit preferences for particular numbers, with three and seven holding special significance in European culture, while four and five are more prominent in Chinese folktales. Numbers are sometimes associated with luck: in Western society, the
number 13 is considered
unlucky while in Chinese culture the
number eight is considered auspicious.

== Main classification ==