Theorems and results within analytic number theory tend not to be exact structural results about the integers, for which algebraic and geometrical tools are more appropriate. Instead, they give approximate bounds and estimates for various number theoretical functions, as the following examples illustrate.
== Multiplicative number theory ==
Euclid showed that there are infinitely many prime numbers. An important question is to determine the asymptotic distribution of the prime numbers; that is, a rough description of how many primes are smaller than a given number.
Gauss, amongst others, after computing a large list of primes, conjectured that the number of primes less than or equal to a large number
N is close to the value of the
integral \int^N_2 \frac{1}{\log t} \, dt. In 1859
Bernhard Riemann used complex analysis and a special
meromorphic function now known as the
Riemann zeta function to derive an analytic expression for the number of primes less than or equal to a real number
x. Remarkably, the main term in Riemann's formula was exactly the above integral, lending substantial weight to Gauss's conjecture. Riemann found that the error terms in this expression, and hence the manner in which the primes are distributed, are closely related to the complex zeros of the zeta function. Using Riemann's ideas and by getting more information on the zeros of the zeta function,
Jacques Hadamard and
Charles Jean de la Vallée-Poussin managed to complete the proof of Gauss's conjecture. In particular, they proved that if \pi(x) = (\text{number of primes }\leq x), then \lim_{x \to \infty} \frac{\pi(x)}{x/\log x} = 1. This remarkable result is what is now known as the
prime number theorem. It is a central result in analytic number theory. Loosely speaking, it states that given a large number
N, the number of primes less than or equal to
N is about
N/log(
N). More generally, the same question can be asked about the number of primes in any
arithmetic progression a +
nq for any integer
n. In one of the first applications of analytic techniques to number theory, Dirichlet proved that any arithmetic progression with
a and
q coprime contains infinitely many primes. The prime number theorem can be generalised to this problem; letting \pi(x, a, q) = (\text{number of primes } \leq x \text{ in the arithmetic progression } a + nq, \ n \in \mathbf Z), then if
a and
q are coprime, \lim_{x \to \infty} \frac{\pi(x,a,q)\phi(q)}{x/\log x} = 1, where \phi is the
totient function. There are also many deep and wide-ranging conjectures in number theory whose proofs seem too difficult for current techniques, such as the
twin prime conjecture which asks whether there are infinitely many primes
p such that
p + 2 is prime. On the assumption of the
Elliott–Halberstam conjecture, it has recently been proven that there are infinitely many primes
p such that
p +
k is prime for some positive even
k at most 12. Also, it has been proven unconditionally (i.e. not depending on unproven conjectures) that there are infinitely many primes
p such that
p +
k is prime for some positive even
k at most 246.
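The limits above can be checked numerically. The following is a minimal Python sketch, not part of the original discussion; the bound N = 10^6 and the modulus q = 4 are arbitrary illustrative choices, and the helper name primes_up_to is ours:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

N = 10**6
primes = primes_up_to(N)

# Prime number theorem: pi(N) / (N / log N) -> 1 as N -> infinity.
pi_N = len(primes)
print(pi_N, N / math.log(N))  # the ratio slowly approaches 1

# PNT for arithmetic progressions: the primes split evenly among the
# residue classes coprime to q (here q = 4, phi(4) = 2).
q, phi_q = 4, 2
for a in (1, 3):
    pi_aq = sum(1 for p in primes if p % q == a)
    print(a, pi_aq * phi_q / (N / math.log(N)))  # both ratios approach 1
```

Because convergence in the prime number theorem is slow, the printed ratios are still around 1.08 at N = 10^6; Gauss's logarithmic-integral approximation is noticeably closer at this range.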
== Additive number theory ==
One of the most important problems in additive number theory is
Waring's problem, which asks whether it is possible, for any
k ≥ 2, to write any positive integer as the sum of a bounded number of
kth powers, :n=x_1^k+\cdots+x_\ell^k. The case for squares,
k = 2, was
answered by Lagrange in 1770, who proved that every positive integer is the sum of at most four squares. The general case was proved by
Hilbert in 1909, using algebraic techniques which gave no explicit bounds. An important breakthrough was the application of analytic tools to the problem by
Hardy and
Littlewood. These techniques are known as the circle method, and give explicit upper bounds for the function G(k), the smallest number of kth powers needed to represent every sufficiently large positive integer, such as
Vinogradov's bound :G(k)\leq k(3\log k+11).
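Both Lagrange's four-square theorem and the quantities in Waring's problem can be explored for small n by dynamic programming. The sketch below (the function name and the test ranges are our illustrative choices) computes the least number of kth powers of positive integers summing to n:

```python
def min_kth_powers(n, k):
    """Least number of k-th powers of positive integers summing to n >= 1."""
    powers = [i ** k for i in range(1, n + 1) if i ** k <= n]
    best = [0] + [n] * n  # n copies of 1^k always work, so n is an upper bound
    for m in range(1, n + 1):
        best[m] = min(best[m - p] + 1 for p in powers if p <= m)
    return best[n]

# Lagrange: four squares always suffice, and e.g. 7 actually needs four.
print(max(min_kth_powers(n, 2) for n in range(1, 100)))  # prints 4
# Cubes: 23 = 2^3 + 2^3 + 7*1^3 needs nine cubes.
print(min_kth_powers(23, 3))  # prints 9
```

Note that this computes exact values for individual n only; G(k) concerns all sufficiently large n, which is why analytic tools like the circle method are needed rather than finite computation.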
== Diophantine problems ==
Diophantine problems are concerned with integer solutions to polynomial equations: one may study the distribution of solutions, that is, counting solutions according to some measure of "size" or
height. An important example is the
Gauss circle problem, which asks for integer points (x, y) which satisfy :x^2+y^2\leq r^2. In geometrical terms, given a circle centered about the origin in the plane with radius
r, the problem asks how many
integer lattice points lie on or inside the circle. It is not hard to prove that the answer is \pi r^2 + E(r), where E(r)/r^2 \to 0 as r \to \infty. Again, the difficult part and a great achievement of analytic number theory is obtaining specific upper bounds on the error term
E(
r). It was shown by Gauss that E(r) = O(r). In general, an
O(
r) error term would be possible with the unit circle (or, more properly, the closed unit disk) replaced by the dilates of any bounded planar region with piecewise smooth boundary. Furthermore, replacing the unit circle by the unit square, the error term for the general problem can be as large as a linear function of
r. Therefore, getting an
error bound of the form O(r^{\delta}) for some \delta < 1 in the case of the circle is a significant improvement. The first to attain this was
Sierpiński in 1906, who showed E(r) = O(r^{2/3}). In 1915, Hardy and
Landau each showed that one does
not have E(r) = O(r^{1/2}). Since then the goal has been to show that for each fixed \epsilon > 0 there exists a real number C(\epsilon) such that E(r) \leq C(\epsilon) r^{1/2 + \epsilon}. In 2000
Huxley showed that E(r) = O(r^{131/208}), which is the best published result.

== Methods of analytic number theory ==