From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space over a given field is characterized, up to isomorphism, by its dimension. However, vector spaces
per se do not offer a framework to deal with the question—crucial to analysis—whether a sequence of functions
converges to another function. Likewise, linear algebra is not adapted to deal with
infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of
functional analysis require considering additional structures. A vector space may be given a
partial order \leq, under which some vectors can be compared. For example, n-dimensional real space \mathbf{R}^n can be ordered by comparing its vectors componentwise.
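The componentwise order on \mathbf{R}^n can be sketched in a few lines of Python; the helper name `leq` is illustrative, not standard notation:

```python
# Componentwise partial order on R^n (illustrative sketch):
# x <= y iff x_i <= y_i for every component i.
def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

x, y, z = (1, 2), (3, 4), (0, 5)
assert leq(x, y)                            # (1, 2) <= (3, 4) componentwise
assert not leq(x, z) and not leq(z, x)      # x and z are incomparable
```

Note that the order is only partial: as the last line shows, some pairs of vectors cannot be compared at all.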
Ordered vector spaces, for example
Riesz spaces, are fundamental to
Lebesgue integration, which relies on the ability to express a function as a difference of two positive functions f = f^+ - f^-, where f^+ denotes the positive part of f and f^- the negative part.
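The decomposition into positive and negative parts can be illustrated directly; `positive_part` and `negative_part` are illustrative names, assuming real-valued functions:

```python
# Decompose f into positive and negative parts: f = f_plus - f_minus,
# with f_plus(x) = max(f(x), 0) and f_minus(x) = max(-f(x), 0).
def positive_part(f):
    return lambda x: max(f(x), 0.0)

def negative_part(f):
    return lambda x: max(-f(x), 0.0)

f = lambda x: x**3 - x                      # sample function
fp, fm = positive_part(f), negative_part(f)
for x in (-2.0, -0.5, 0.5, 2.0):
    assert abs(fp(x) - fm(x) - f(x)) < 1e-12   # f = f^+ - f^-
    assert fp(x) >= 0.0 and fm(x) >= 0.0       # both parts are positive
```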
==Normed vector spaces and inner product spaces==
"Measuring" vectors is done by specifying a
norm, a datum which measures lengths of vectors, or by an
inner product, which measures angles between vectors. Norms and inner products are denoted |\mathbf v| and \lang \mathbf v, \mathbf w \rang, respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm |\mathbf v| := \sqrt{\langle \mathbf v, \mathbf v \rangle}. Vector spaces endowed with such data are known as
normed vector spaces and
inner product spaces, respectively. Coordinate space F^n can be equipped with the standard
dot product: \lang \mathbf x , \mathbf y \rang = \mathbf x \cdot \mathbf y = x_1 y_1 + \cdots + x_n y_n. In \mathbf{R}^2, this reflects the common notion of the angle between two vectors \mathbf{x} and \mathbf{y}, by the
law of cosines: \mathbf x \cdot \mathbf y = \cos\left(\angle (\mathbf x, \mathbf y)\right) \cdot |\mathbf x| \cdot |\mathbf y|. Because of this, two vectors satisfying \lang \mathbf x , \mathbf y \rang = 0 are called
orthogonal. An important variant of the standard dot product is used in
Minkowski space: \mathbf{R}^4 endowed with the Lorentz product \lang \mathbf x | \mathbf y \rang = x_1 y_1 + x_2 y_2 + x_3 y_3 - x_4 y_4. In contrast to the standard dot product, it is not
positive definite: \lang \mathbf x | \mathbf x \rang also takes negative values, for example, for \mathbf x = (0, 0, 0, 1). Singling out the fourth coordinate—
corresponding to time, as opposed to three space-dimensions—makes it useful for the mathematical treatment of
special relativity. Note that in other conventions time is often written as the first, or "zeroth", component so that the Lorentz product is written \lang \mathbf x | \mathbf y \rang = - x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3.
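A minimal numerical sketch of the two products above; the helper names `dot` and `lorentz` are illustrative:

```python
import math

def dot(x, y):
    # standard dot product on R^n
    return sum(a * b for a, b in zip(x, y))

def lorentz(x, y):
    # Lorentz product with signature (+, +, +, -); last coordinate is time
    return x[0]*y[0] + x[1]*y[1] + x[2]*y[2] - x[3]*y[3]

# Angle via the law of cosines: cos(angle) = x.y / (|x| |y|)
x, y = (1.0, 0.0), (1.0, 1.0)
cos_angle = dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))
assert abs(math.degrees(math.acos(cos_angle)) - 45.0) < 1e-9

# Orthogonality: <x, y> = 0
assert dot((1.0, 0.0), (0.0, 1.0)) == 0.0

# The Lorentz product is not positive definite:
assert lorentz((0, 0, 0, 1), (0, 0, 0, 1)) == -1
```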
==Topological vector spaces==
Convergence questions are treated by considering vector spaces V carrying a compatible
topology, a structure that allows one to talk about elements being
close to each other. Compatible here means that addition and scalar multiplication have to be
continuous maps. Roughly, if \mathbf{x} and \mathbf{y} in V, and a in F, vary by a bounded amount, then so do \mathbf{x} + \mathbf{y} and a \mathbf{x}. To make sense of specifying the amount a scalar changes, the field F also has to carry a topology in this context; a common choice is the reals or the complex numbers. In such
topological vector spaces one can consider
series of vectors. The
infinite sum \sum_{i=1}^\infty f_i ~=~ \lim_{n \to \infty} \left(f_1 + \cdots + f_n\right) denotes the
limit of the corresponding finite partial sums of the sequence f_1, f_2, \ldots of elements of V. For example, the f_i could be (real or complex) functions belonging to some
function space V, in which case the series is a
function series. The
mode of convergence of the series depends on the topology imposed on the function space. In such cases,
pointwise convergence and
uniform convergence are two prominent examples.

Figure: the unit spheres in \mathbf{R}^2 consist of plane vectors of norm 1; depicted are the unit spheres in different p-norms, for p = 1, 2, and \infty, and the bigger diamond depicts points of 1-norm equal to 2.

A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any
Cauchy sequence has a limit; such a vector space is called
complete. Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval [0, 1], equipped with the
topology of uniform convergence, is not complete because any continuous function on [0, 1] can be uniformly approximated by a sequence of polynomials, by the
Weierstrass approximation theorem. In contrast, the space of
all continuous functions on [0, 1] with the same topology is complete. A norm gives rise to a topology by defining that a sequence of vectors \mathbf{v}_n converges to \mathbf{v} if and only if \lim_{n \to \infty} |\mathbf v_n - \mathbf v| = 0. Banach and Hilbert spaces are complete topological vector spaces whose topologies are given, respectively, by a norm and an inner product. Their study—a key piece of
functional analysis—focuses on infinite-dimensional vector spaces, since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence. The image at the right shows the equivalence of the 1-norm and \infty-norm on \mathbf{R}^2: as the unit "balls" enclose each other, a sequence converges to zero in one norm if and only if it does so in the other norm. In the infinite-dimensional case, however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer than that of vector spaces without additional data. From a conceptual point of view, all notions related to topological vector spaces should match the topology. For example, instead of considering all linear maps (also called
functionals) V \to W, maps between topological vector spaces are required to be continuous. In particular, the (topological) dual space V^* consists of continuous functionals V \to \mathbf{R} (or to \mathbf{C}). The fundamental
Hahn–Banach theorem is concerned with separating subspaces of appropriate topological vector spaces by continuous functionals.
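The equivalence of the 1-norm and \infty-norm on \mathbf{R}^2 rests on the two-sided bound \|\mathbf v\|_\infty \leq \|\mathbf v\|_1 \leq 2\,\|\mathbf v\|_\infty, which can be spot-checked numerically; a minimal sketch:

```python
import random

def norm1(v):
    # 1-norm: sum of absolute values of the components
    return sum(abs(c) for c in v)

def norm_inf(v):
    # infinity-norm: largest absolute value of a component
    return max(abs(c) for c in v)

# On R^2: ||v||_inf <= ||v||_1 <= 2 ||v||_inf, so a sequence converges
# to zero in one norm exactly when it does in the other.
random.seed(0)
for _ in range(1000):
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert norm_inf(v) <= norm1(v) <= 2 * norm_inf(v)
```

Since the two norms bound each other up to a constant factor, the unit "balls" enclose each other, as the figure above depicts.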
==Banach spaces==
Banach spaces, introduced by
Stefan Banach, are complete normed vector spaces. A first example is
the vector space \ell^p consisting of infinite vectors with real entries \mathbf{x} = \left(x_1, x_2, \ldots, x_n, \ldots\right) whose
p-norm (1 \leq p \leq \infty), given by \|\mathbf{x}\|_p := \left(\sum_i |x_i|^p\right)^\frac{1}{p} \text{ for } p < \infty, \text{ and } \|\mathbf{x}\|_\infty := \sup_i |x_i| \text{ for } p = \infty, is finite. The topologies on the infinite-dimensional space \ell^p are inequivalent for different p. For example, the sequence of vectors \mathbf{x}_n = \left(2^{-n}, 2^{-n}, \ldots, 2^{-n}, 0, 0, \ldots\right), in which the first 2^n components are 2^{-n} and the following ones are 0, converges to the
zero vector for p = \infty, but does not for p = 1: \|\mathbf{x}_n\|_\infty = \sup (2^{-n}, 0) = 2^{-n} \to 0, but \|\mathbf{x}_n\|_1 = \sum_{i=1}^{2^n} 2^{-n} = 2^n \cdot 2^{-n} = 1. More generally than sequences of real numbers, functions f : \Omega \to \Reals are endowed with a norm that replaces the above sum by the
Lebesgue integral \|f\|_p := \left(\int_{\Omega} |f(x)|^p \, {d\mu(x)}\right)^\frac{1}{p}. The space of
integrable functions on a given
domain \Omega (for example an interval) satisfying \|f\|_p < \infty, and equipped with this norm are called
Lebesgue spaces, denoted L^{\;\!p}(\Omega). These spaces are complete. (If one uses the
Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory.) Concretely, this means that for any sequence of Lebesgue-integrable functions f_1, f_2, \ldots, f_n, \ldots with \|f_n\|_p < \infty, satisfying the condition \lim_{k,\ n \to \infty} \int_{\Omega} \left|f_k(x) - f_n(x)\right|^p \, {d\mu(x)} = 0 there exists a function f(x) belonging to the vector space L^{\;\!p}(\Omega) such that \lim_{k \to \infty} \int_{\Omega} \left|f(x) - f_k(x)\right|^p \, {d\mu(x)} = 0. Imposing boundedness conditions not only on the function, but also on its
derivatives leads to
Sobolev spaces.
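The norm computations in this section can be spot-checked numerically. A sketch with illustrative helper names (`lp_norm_seq`, `lp_norm_fun`), approximating the Lebesgue integral by a simple midpoint rule:

```python
import math

def lp_norm_seq(v, p):
    # p-norm of a finite sequence; p = math.inf gives the sup-norm
    if p == math.inf:
        return max(abs(c) for c in v)
    return sum(abs(c) ** p for c in v) ** (1.0 / p)

def lp_norm_fun(f, p, a=0.0, b=1.0, n=100_000):
    # midpoint-rule approximation of the L^p norm on [a, b]
    h = (b - a) / n
    return (sum(abs(f(a + (i + 0.5) * h)) ** p for i in range(n)) * h) ** (1.0 / p)

# The sequence x_n from above: first 2**n entries equal 2**-n, the rest 0.
x = lambda n: [2.0 ** -n] * (2 ** n)
assert abs(lp_norm_seq(x(10), 1) - 1.0) < 1e-9      # 1-norm stays 1
assert lp_norm_seq(x(10), math.inf) == 2.0 ** -10   # sup-norm tends to 0

# The L^2 norm of f(t) = t on [0, 1] is 1/sqrt(3).
assert abs(lp_norm_fun(lambda t: t, 2) - 1 / math.sqrt(3)) < 1e-6
```

The first two assertions confirm that \mathbf{x}_n converges to zero in the \infty-norm but not in the 1-norm, so the two topologies on \ell^p really are inequivalent.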
==Hilbert spaces==
Complete inner product spaces are known as
Hilbert spaces, in honor of
David Hilbert. The Hilbert space L^2(\Omega), with inner product given by \langle f\ , \ g \rangle = \int_\Omega f(x) \overline{g(x)} \, dx, where \overline{g(x)} denotes the
complex conjugate of g(x), is a key case. By definition, in a Hilbert space, any Cauchy sequence converges to a limit. Conversely, finding a sequence of functions f_n with desirable properties that approximate a given limit function is equally crucial. Early analysis, in the guise of the
Taylor approximation, established an approximation of
differentiable functions f by polynomials. By the
Stone–Weierstrass theorem, every continuous function on [a, b] can be approximated as closely as desired by a polynomial. A similar approximation technique by
trigonometric functions is commonly called
Fourier expansion, and is much applied in engineering. More generally, and more conceptually, the theorem yields a simple description of what "basic functions", or, in abstract Hilbert spaces, what basic vectors suffice to generate a Hilbert space H, in the sense that the
closure of their span (that is, finite linear combinations and limits of those) is the whole space. Such a set of functions is called a
basis of H; its cardinality is known as the
Hilbert space dimension. Not only does the theorem exhibit suitable basis functions as sufficient for approximation purposes, but also together with the
Gram–Schmidt process, it enables one to construct a
basis of orthogonal vectors. Such orthogonal bases are the Hilbert space generalization of the coordinate axes in finite-dimensional
Euclidean space. The solutions to various
differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal. As an example from physics, the time-dependent
Schrödinger equation in
quantum mechanics describes the change of physical properties in time by means of a
partial differential equation, whose solutions are called
wavefunctions. Definite values for physical properties such as energy, or momentum, correspond to
eigenvalues of a certain (linear)
differential operator and the associated wavefunctions are called
eigenstates. The
spectral theorem decomposes a linear
compact operator acting on functions in terms of these eigenfunctions and their eigenvalues.
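The Gram–Schmidt process mentioned above can be sketched as follows. This is the classical variant for vectors in \mathbf{R}^n; a numerically robust implementation would use the modified algorithm:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    # Produce an orthonormal basis spanning the same subspace.
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)                    # projection onto b (b has unit length)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(dot(w, w))
        if norm > 1e-12:                     # skip vectors dependent on earlier ones
            basis.append([wi / norm for wi in w])
    return basis

b = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
assert abs(dot(b[0], b[1])) < 1e-12          # orthogonal
assert abs(dot(b[0], b[0]) - 1.0) < 1e-12    # unit length
```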
==Algebras over fields==

Figure: a hyperbola, given by the equation x \cdot y = 1. The coordinate ring of functions on this hyperbola is given by \mathbf{R}[x, y] / (x \cdot y - 1), an infinite-dimensional vector space over \mathbf{R}.

General vector spaces do not possess a multiplication between vectors. A vector space equipped with an additional
bilinear operator defining the multiplication of two vectors is an
algebra over a field (or
F-algebra if the field
F is specified). For example, the set of all
polynomials p(t) forms an algebra known as the
polynomial ring: using that the sum of two polynomials is a polynomial, they form a vector space; they form an algebra since the product of two polynomials is again a polynomial. Rings of polynomials (in several variables) and their
quotients form the basis of
algebraic geometry, because they are
rings of functions of algebraic geometric objects. Another crucial example are
Lie algebras, which are neither commutative nor associative, but the failure to be so is limited by the constraints ([x, y] denotes the product of x and y):
• [x, y] = - [y, x] (anticommutativity), and
• [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 (
Jacobi identity). Examples include the vector space of n-by-n matrices, with [x, y] = x y - y x, the
commutator of two matrices, and \mathbf{R}^3, endowed with the
cross product. The
tensor algebra \operatorname{T}(V) is a formal way of adding products to any vector space V to obtain an algebra. As a vector space, it is spanned by symbols, called simple
tensors \mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_n, where the
degree n varies. The multiplication is given by concatenating such symbols, imposing the
distributive law under addition, and requiring that scalar multiplication commute with the tensor product ⊗, much the same way as with the tensor product of two vector spaces introduced in the above section on
tensor products. In general, there are no relations between \mathbf{v}_1 \otimes \mathbf{v}_2 and \mathbf{v}_2 \otimes \mathbf{v}_1. Forcing two such elements to be equal leads to the
symmetric algebra, whereas forcing \mathbf{v}_1 \otimes \mathbf{v}_2 = - \mathbf{v}_2 \otimes \mathbf{v}_1 yields the
exterior algebra.

==Related structures==