Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like
projection and
change of basis from their usual finite dimensional setting. In particular, the
spectral theory of
continuous self-adjoint linear operators on a Hilbert space generalizes the usual
spectral decomposition of a
matrix, and this often plays a major role in applications of the theory to other areas of mathematics and physics.
=== Sturm–Liouville theory ===
[Figure: The overtones of a vibrating string. These are eigenfunctions of an associated Sturm–Liouville problem. The eigenvalues 1, 1/2, 1/3, ... form the (musical) harmonic series.]

In the theory of
ordinary differential equations, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the
Sturm–Liouville problem arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in
ordinary differential equations. The problem is a differential equation of the form -\frac{\mathrm{d}}{\mathrm{d}x}\left[p(x)\frac{\mathrm{d}y}{\mathrm{d}x}\right] + q(x)y = \lambda w(x)y for an unknown function y on an interval [a, b], satisfying general homogeneous
Robin boundary conditions \begin{cases} \alpha y(a)+\alpha' y'(a) &= 0 \\ \beta y(b) + \beta' y'(b) &= 0 \,. \end{cases} The functions p, q, and w are given in advance, and the problem is to find the function y and constants \lambda for which the equation has a solution. The problem only has solutions for certain values of \lambda, called eigenvalues of the system, and this is a consequence of the spectral theorem for
compact operators applied to the
integral operator defined by the
Green's function for the system. Furthermore, another consequence of this general result is that the eigenvalues of the system can be arranged in an increasing sequence tending to infinity.
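The increasing sequence of eigenvalues described above can be observed numerically. The following is a minimal finite-difference sketch (the specific problem, grid size, and discretization are illustrative assumptions, not part of the text):

```python
import numpy as np

# A finite-difference sketch (assumed setup) of the simplest Sturm-Liouville
# problem: -y'' = lambda * y on [0, pi] with y(0) = y(pi) = 0, i.e. p = w = 1
# and q = 0. Its exact eigenvalues are n^2 for n = 1, 2, 3, ..., an
# increasing sequence tending to infinity, as the text describes.
n = 500                              # number of interior grid points (assumed)
h = np.pi / (n + 1)                  # grid spacing
A = (np.diag(np.full(n, 2.0))        # tridiagonal approximation of -d^2/dx^2
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
eigenvalues = np.sort(np.linalg.eigvalsh(A))
print(eigenvalues[:3])               # approximately [1, 4, 9]
```

The discrete operator is symmetric, so its eigenvalues are real and can be sorted into the increasing sequence the spectral theorem guarantees.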
=== Partial differential equations ===
Hilbert spaces form a basic tool in the study of
partial differential equations. For many classes of partial differential equations, such as linear
elliptic equations, it is possible to consider a generalized solution (known as a
weak solution) by enlarging the class of functions. Many weak formulations involve the class of
Sobolev functions, which is a Hilbert space. A suitable weak formulation reduces the analytic problem of finding a solution, or, often more importantly, showing that a solution exists and is unique for given boundary data, to a geometrical problem. For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the
Lax–Milgram theorem. This strategy forms the rudiment of the
Galerkin method (a
finite element method) for numerical solution of partial differential equations. A typical example is the
Poisson equation with
Dirichlet boundary conditions in a bounded domain \Omega in the plane. The weak formulation consists of finding a function u such that, for all continuously differentiable functions v in \Omega vanishing on the boundary: \int_\Omega \nabla u\cdot\nabla v = \int_\Omega gv\,. This can be recast in terms of the Hilbert space H_0^1(\Omega) consisting of functions u such that u, along with its weak partial derivatives, are square integrable on \Omega, and vanish on the boundary. The question then reduces to finding u in this space such that for all v in this space a(u, v) = b(v) where a is a continuous
bilinear form, and b is a continuous
linear functional, given respectively by a(u, v) = \int_\Omega \nabla u\cdot\nabla v,\quad b(v)= \int_\Omega gv\,. Since the Poisson equation is
elliptic, it follows from Poincaré's inequality that the bilinear form is
coercive. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation. Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis. With suitable modifications, similar techniques can be applied to
parabolic partial differential equations and certain
hyperbolic partial differential equations.
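The Galerkin strategy described above can be sketched for a one-dimensional Poisson problem. The mesh, the piecewise-linear "hat" basis, and the particular right-hand side g below are illustrative assumptions:

```python
import numpy as np

# Galerkin / finite element sketch (assumed 1D setting): solve -u'' = g on
# (0, 1) with u(0) = u(1) = 0 using piecewise-linear "hat" basis functions.
# The weak form a(u, v) = b(v) becomes the linear system K c = F for the
# coefficients c of u in the finite basis.
g = lambda x: np.pi**2 * np.sin(np.pi * x)   # chosen so the exact solution is sin(pi x)
n = 100                                      # number of interior nodes (assumed)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
K = (np.diag(np.full(n, 2.0))                # stiffness matrix K_ij = a(e_i, e_j)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
F = h * g(x)                                 # load vector b(e_i), midpoint quadrature
c = np.linalg.solve(K, F)                    # nodal values of the Galerkin solution
error = np.max(np.abs(c - np.sin(np.pi * x)))
print(error)                                 # small; shrinks as the mesh is refined
```

Coercivity of the bilinear form makes the stiffness matrix K symmetric positive definite, so the linear system has a unique solution, mirroring the Lax–Milgram conclusion in the discrete setting.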
=== Ergodic theory ===
[Figure: The path of a billiard ball in the Bunimovich stadium is described by an ergodic dynamical system.]

The field of
ergodic theory is the study of the long-term behavior of
chaotic dynamical systems. The prototypical field to which ergodic theory applies is
thermodynamics, in which—though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter)—the average behavior over sufficiently long time intervals is tractable. The
laws of thermodynamics are assertions about such average behavior. In particular, one formulation of the
zeroth law of thermodynamics asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of
temperature. An ergodic dynamical system is one for which, apart from the energy—measured by the
Hamiltonian—there are no other functionally independent
conserved quantities on the
phase space. More explicitly, suppose that the energy E is fixed, and let \Omega_E be the subset of the phase space consisting of all states of energy E (an energy surface), and let T_t denote the evolution operator on the phase space. The dynamical system is ergodic if every invariant measurable function on \Omega_E is constant
almost everywhere. An invariant function f is one for which f(T_tw) = f(w) for all w on \Omega_E and all times t.
Liouville's theorem implies that there exists a
measure on the energy surface that is invariant under the
time translation. As a result, time translation is a
unitary transformation of the Hilbert space L^2\left(\Omega_E, \mu\right) consisting of square-integrable functions on the energy surface, with respect to the inner product \left\langle f, g\right\rangle_{L^2\left(\Omega_E, \mu\right)} = \int_{\Omega_E} f\bar{g}\,\mathrm{d}\mu\,. The von Neumann mean ergodic theorem states that if the system is ergodic, then for every f in this Hilbert space, \underset{T\to\infty}{L^2 - \lim} \frac{1}{T}\int_0^T f(T_tw)\,\mathrm{d}t = \int_{\Omega_E} f(y)\,\mathrm{d}\mu(y)\,. That is, the long time average of an observable is equal to its expectation value over an energy surface.
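The equality of time averages and space averages can be illustrated numerically in a simple assumed setting: an irrational rotation of the circle, which is an ergodic transformation for Lebesgue measure.

```python
import numpy as np

# Assumed example system: the rotation T(theta) = theta + alpha (mod 1) with
# irrational alpha is ergodic on the circle, so time averages of an
# observable along an orbit converge to its space average.
alpha = np.sqrt(2) - 1                             # irrational rotation number
theta = (0.1 + alpha * np.arange(100_000)) % 1.0   # orbit of theta_0 = 0.1
f = np.cos(2 * np.pi * theta)                      # observable f(theta) = cos(2*pi*theta)
time_average = f.mean()                            # long-time average along the orbit
space_average = 0.0                                # integral of cos(2*pi*theta) over [0, 1]
print(abs(time_average - space_average))           # small
```

Any other starting point theta_0 would give the same limit, which is exactly what ergodicity asserts: the time average does not depend on the orbit chosen (almost everywhere).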
=== Fourier analysis ===
[Figure: Spherical harmonics, an orthonormal basis for the Hilbert space of square-integrable functions on the sphere, shown graphed along the radial direction.]

One of the basic goals of
Fourier analysis is to decompose a function into a (possibly infinite)
linear combination of given basis functions: the associated
Fourier series. The classical Fourier series associated to a function f defined on the interval [0, 1] is a series of the form \sum_{n=-\infty}^\infty a_n e^{2\pi in\theta} where a_n = \int_0^1f(\theta)\;\!e^{-2\pi in\theta}\,\mathrm{d}\theta\,. The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths \lambda/n (for integer n) shorter than the wavelength \lambda of the sawtooth itself (except for n = 1, the
fundamental wave). A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function f. Hilbert space methods provide one possible answer to this question. The functions e_n(\theta) = e^{2\pi in\theta} form an orthogonal basis of the Hilbert space L^2([0, 1]). Consequently, any square-integrable function can be expressed as a series f(\theta) = \sum_n a_n e_n(\theta)\,,\quad a_n = \langle f, e_n\rangle and, moreover, this series converges in the Hilbert space sense (that is, in the
mean). The problem can also be studied from the abstract point of view: every Hilbert space has an
orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space. The abstraction is especially useful when it is more natural to use different basis functions for a space such as L^2([0, 1]). In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into
orthogonal polynomials or
wavelets for instance, and in higher dimensions into
spherical harmonics. For instance, if e_1, \ldots, e_n are any orthonormal basis functions of L^2([0, 1]), then a given function in L^2([0, 1]) can be approximated as a finite linear combination f(x) \approx f_n (x) = a_1 e_1 (x) + a_2 e_2(x) + \cdots + a_n e_n (x)\,. The coefficients \{a_j\} are selected to make the magnitude of the difference \|f - f_n\| as small as possible. Geometrically, the
best approximation is the
orthogonal projection of f onto the subspace consisting of all linear combinations of the \{e_j\}, and can be calculated by a_j = \int_0^1 \overline{e_j(x)}f (x) \, \mathrm{d}x\,. That this formula minimizes the difference is a consequence of
Bessel's inequality and Parseval's formula. In various applications to physical problems, a function can be decomposed into physically meaningful
eigenfunctions of a
differential operator (typically the
Laplace operator): this forms the foundation for the spectral study of functions, in reference to the
spectrum of the differential operator. A concrete physical application involves the problem of
hearing the shape of a drum: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself? The mathematical formulation of this question involves the
Dirichlet eigenvalues of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string.
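The projection formula for Fourier coefficients and Parseval's formula can be checked numerically. The sawtooth function f(θ) = θ, the grid, and the truncation to |n| ≤ 200 below are assumptions for illustration:

```python
import numpy as np

# Sketch of the projection formula a_n = <f, e_n> for the sawtooth
# f(theta) = theta on [0, 1), with the integral replaced by a Riemann sum.
N = 4096
theta = np.arange(N) / N
f = theta                                   # sawtooth function
ns = np.arange(-200, 201)                   # frequencies kept (assumed truncation)
# Fourier coefficients a_n = integral of f(theta) e^{-2 pi i n theta} d theta
a = np.array([(f * np.exp(-2j * np.pi * n * theta)).mean() for n in ns])
# Parseval's formula: sum of |a_n|^2 equals the integral of |f|^2, which
# for this sawtooth is 1/3.
print(np.sum(np.abs(a) ** 2), (f ** 2).mean())   # both close to 1/3
```

The small residual gap between the two numbers comes from the truncated frequencies and the Riemann-sum discretization; both shrink as the truncation and grid are refined, reflecting convergence of the series in the mean.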
Spectral theory also underlies certain aspects of the
Fourier transform of a function. Whereas Fourier analysis decomposes a function defined on a
compact set into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the
continuous spectrum of the Laplacian. The Fourier transformation is also geometrical, in a sense made precise by the
Plancherel theorem, that asserts that it is an
isometry of one Hilbert space (the "time domain") with another (the "frequency domain"). This isometry property of the Fourier transformation is a recurring theme in abstract
harmonic analysis (since it reflects the conservation of energy for the continuous Fourier Transform), as evidenced for instance by the
Plancherel theorem for spherical functions occurring in
noncommutative harmonic analysis.
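The isometry property asserted by the Plancherel theorem has a finite-dimensional analogue that is easy to verify: with unitary normalization, the discrete Fourier transform preserves norms. The signal below is an arbitrary assumed example.

```python
import numpy as np

# Discrete analogue of the Plancherel theorem: the unitarily normalized
# ("ortho") DFT is an isometry between the "time domain" and the
# "frequency domain", i.e. it preserves the l^2 norm.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                 # arbitrary "time domain" signal
X = np.fft.fft(x, norm="ortho")               # "frequency domain" representation
print(np.linalg.norm(x), np.linalg.norm(X))   # the two norms agree
```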
=== Quantum mechanics ===
[Figure: The orbitals of an electron in a hydrogen atom are eigenfunctions of the energy.]

In the mathematically rigorous formulation of
quantum mechanics, developed by
John von Neumann, the possible states (more precisely, the
pure states) of a quantum mechanical system are represented by
unit vectors (called
state vectors) residing in a complex separable Hilbert space, known as the
state space, well defined up to a complex number of norm 1 (the
phase factor). In other words, the possible states are points in the
projectivization of a Hilbert space, usually called the
complex projective space. The exact nature of this Hilbert space depends on the system; for example, the space of position and momentum states for a single non-relativistic spin zero particle is the space of all
square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of
spinors. Each observable is represented by a
self-adjoint linear operator acting on the state space. Each eigenstate of an observable corresponds to an
eigenvector of the operator, and the associated
eigenvalue corresponds to the value of the observable in that eigenstate. The inner product between two state vectors is a complex number known as a
probability amplitude. During an ideal
measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the
absolute value of the probability amplitudes between the initial and final states. The possible results of a measurement are the eigenvalues of the operator—which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by
density matrices: self-adjoint operators of
trace one on a Hilbert space. Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a
positive operator valued measure. Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states.
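The measurement rules for pure states described above can be made concrete in a finite-dimensional sketch; the particular observable and state below are arbitrary choices for illustration:

```python
import numpy as np

# A two-dimensional state space: for a unit state vector psi and a
# self-adjoint observable A, the possible outcomes are the eigenvalues of A,
# and the probability of each outcome is the squared absolute value of the
# probability amplitude <v_k, psi> with the corresponding eigenvector v_k.
A = np.array([[0, 1], [1, 0]], dtype=complex)    # a self-adjoint operator
psi = np.array([1, 0], dtype=complex)            # a unit state vector
eigenvalues, eigenvectors = np.linalg.eigh(A)    # real eigenvalues, orthonormal eigenvectors
amplitudes = eigenvectors.conj().T @ psi         # probability amplitudes <v_k, psi>
probabilities = np.abs(amplitudes) ** 2          # squared absolute values
print(eigenvalues, probabilities)                # outcomes -1 and +1, each with probability 0.5
# The expectation value of the observable agrees with <psi, A psi>:
print(probabilities @ eigenvalues, (psi.conj() @ A @ psi).real)
```

Self-adjointness guarantees the eigenvalues are real (the possible measurement results) and the eigenvectors orthonormal, so the probabilities sum to 1 for any unit state.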
=== Probability theory ===
In
probability theory, Hilbert spaces also have diverse applications. Here a fundamental Hilbert space is the space of
random variables on a given
probability space that are of class L^2 (that is, have finite first and second
moments). A common operation in statistics is that of centering a random variable by subtracting its
expectation. Thus if X is a random variable, then X - E(X) is its centering. In the Hilbert space view, this is the orthogonal projection of X onto the
kernel of the expectation operator, which is a
continuous linear functional on the Hilbert space (in fact, the inner product with the constant random variable 1), and so this kernel is a closed subspace. The
conditional expectation has a natural interpretation in the Hilbert space. Suppose that a probability space (\Omega, P, \mathcal B) is given, where \mathcal B is a
sigma algebra on the set \Omega, and P is a
probability measure on the measure space (\Omega, \mathcal B). If \mathcal F\le\mathcal B is a sigma subalgebra of \mathcal B, then the conditional expectation E[X\mid\mathcal F] is the orthogonal projection of X onto the subspace of L^2(\Omega, P) consisting of the \mathcal F-measurable functions. If the random variable X in L^2(\Omega, P) is independent of the sigma algebra \mathcal F, then the conditional expectation E(X\mid\mathcal F) = E(X), i.e., its projection onto the \mathcal F-measurable functions is constant. Equivalently, the projection of its centering is zero. In particular, if two random variables X and Y (in L^2(\Omega, P)) are independent, then the centered random variables X-E(X) and Y-E(Y) are orthogonal. (This means that the two variables have zero
covariance: they are
uncorrelated.) In that case, the Pythagorean theorem in the kernel of the expectation operator implies that the
variances of X and Y satisfy the identity \operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)\,, sometimes called the Pythagorean theorem of statistics, which is of importance in
linear regression. As one author puts it, "the
analysis of variance may be viewed as the decomposition of the squared length of a vector into the sum of the squared lengths of several vectors, using the Pythagorean Theorem." The theory of
martingales can be formulated in Hilbert spaces. A martingale in a Hilbert space is a sequence x_1,x_2,\dots of elements of a Hilbert space such that, for each n, x_n is the orthogonal projection of x_{n+1} onto the linear hull of x_1,\dots,x_n. If the x_k are random variables, this reproduces the usual definition of a (discrete) martingale: the expectation of x_{n+1}, conditioned on x_1,\dots,x_n, is equal to x_n. Hilbert spaces are also used throughout the foundations of the
Itô calculus. To any square-integrable
martingale, it is possible to associate a Hilbert norm on the space of equivalence classes of
progressively measurable processes with respect to the martingale (using the
quadratic variation of the martingale as the measure). The
Itô integral can be constructed by first defining it for
simple processes, and then exploiting their density in the Hilbert space. A noteworthy result is then the
Itô isometry, which asserts that for any martingale
M having quadratic variation measure d\langle M\rangle_t, and any progressively measurable process
H: E\left[\left(\int_0^tH_s \, dM_s\right)^2\right] = E\left[\int_0^tH_s^2 \, d\langle M\rangle_s\right] whenever the expectation on the right-hand side is finite. A deeper application of Hilbert spaces that is especially important in the theory of
Gaussian processes is an attempt, due to
Leonard Gross and others, to make sense of certain formal integrals over infinite dimensional spaces like the
Feynman path integral from
quantum field theory. The problem with integrals like this is that there is no
infinite dimensional Lebesgue measure. The notion of an
abstract Wiener space allows one to construct a measure on a Banach space B that contains a Hilbert space H, called the
Cameron–Martin space, as a dense subset, out of a finitely additive cylinder set measure on H. The resulting measure on B is countably additive and invariant under translation by elements of H, and this provides a mathematically rigorous way of thinking of the
Wiener measure as a Gaussian measure adapted to the Sobolev space H^1([0,\infty)).
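The orthogonality of centered independent variables and the resulting "Pythagorean theorem of statistics" discussed above can be illustrated by simulation; the particular distributions and sample size below are arbitrary assumptions:

```python
import numpy as np

# Independent samples X and Y: their centerings should be (nearly)
# orthogonal in L^2, so their variances should add.
rng = np.random.default_rng(1)
n = 1_000_000
X = rng.exponential(2.0, size=n)     # Var(X) = 4
Y = rng.normal(3.0, 1.5, size=n)     # Var(Y) = 2.25
# Empirical L^2 inner product of the centered variables (the covariance).
inner = np.mean((X - X.mean()) * (Y - Y.mean()))
print(inner)                                     # near zero: uncorrelated
print(np.var(X + Y), np.var(X) + np.var(Y))      # nearly equal
```

The two printed variances differ by exactly twice the empirical covariance, so the near-orthogonality of the centerings is precisely what makes the Pythagorean identity hold up to sampling error.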
=== Color perception ===
Any true physical color can be represented by a combination of pure
spectral colors. As physical colors can be composed of any number of spectral colors, the space of physical colors may aptly be represented by a Hilbert space over spectral colors. Humans have
three types of cone cells for color perception, so the perceivable colors can be represented by 3-dimensional Euclidean space. The many-to-one linear mapping from the Hilbert space of physical colors to the Euclidean space of human perceivable colors explains why many distinct physical colors may be perceived by humans to be identical (e.g., pure yellow light versus a mix of red and green light, see
Metamerism).

== Properties ==