Given any
vector space V over a
field F, the
(algebraic) dual space V^{*} (alternatively denoted by V^{\lor} or V') is defined as the set of all
linear maps
\varphi: V \to F (
linear functionals). Since linear maps are vector space
homomorphisms, the dual space may be denoted \hom (V, F). The pairing of a functional \varphi in the dual space with an element x of V is sometimes denoted by a bracket: \varphi(x) = [x, \varphi] or
\varphi (x) = \langle x, \varphi \rangle. This pairing defines a nondegenerate
bilinear mapping \langle \cdot, \cdot \rangle : V \times V^* \to F called the
natural pairing.
=== Dual set ===

Given a vector space V and a basis E of that space, one can define a
linearly independent set in V^* called the
dual set. Each vector in E corresponds to a unique vector in the dual set. This correspondence yields an injection V \to V^*. If V is finite-dimensional, the dual set is a basis, called the
dual basis, and the injection V \to V^* is an
isomorphism.
=== Finite-dimensional case ===

If V is finite-dimensional and \{\mathbf{e}_1,\dots,\mathbf{e}_n\} is a basis of V, the dual basis is the set \{\mathbf{e}^1,\dots,\mathbf{e}^n\} of linear functionals on V defined by the relation \mathbf{e}^i(c^1 \mathbf{e}_1+\cdots+c^n\mathbf{e}_n) = c^i, \quad i=1,\ldots,n for any choice of coefficients c^i\in F. In particular, letting each one of those coefficients in turn be equal to one and the other coefficients zero gives the system of equations \mathbf{e}^i(\mathbf{e}_j) = \delta^{i}_{j}, where \delta^{i}_{j} is the
Kronecker delta symbol. This property is referred to as the
bi-orthogonality property. Consider \{\mathbf{e}_1,\dots,\mathbf{e}_n\} a basis of V, and let \{\mathbf{e}^1,\dots,\mathbf{e}^n\} be defined as above: \mathbf{e}^i(c^1 \mathbf{e}_1+\cdots+c^n\mathbf{e}_n) = c^i, \quad i=1,\ldots,n. These form a basis of V^* because:
• The \mathbf{e}^i, i=1, 2, \dots, n, are linear functionals. Indeed, they map vectors x,y \in V with x= \alpha_1\mathbf{e}_1 + \dots + \alpha_n\mathbf{e}_n and y = \beta_1\mathbf{e}_1 + \dots + \beta_n \mathbf{e}_n to the scalars \mathbf{e}^i(x)=\alpha_i and \mathbf{e}^i(y)=\beta_i. Since x+\lambda y=(\alpha_1+\lambda \beta_1)\mathbf{e}_1 + \dots + (\alpha_n+\lambda\beta_n)\mathbf{e}_n, it follows that \mathbf{e}^i(x+\lambda y)=\alpha_i+\lambda\beta_i=\mathbf{e}^i(x)+\lambda \mathbf{e}^i(y). Therefore, \mathbf{e}^i \in V^* for i= 1, 2, \dots, n.
• Suppose \lambda_1 \mathbf{e}^1 + \cdots + \lambda_n \mathbf{e}^n = 0 \in V^*. Applying this functional to the basis vectors of V successively leads to \lambda_1=\lambda_2= \dots=\lambda_n=0 (the functional applied to \mathbf{e}_i gives \lambda_i). Therefore, \{\mathbf{e}^1,\dots,\mathbf{e}^n\} is linearly independent in V^*.
• Lastly, consider g \in V^*. Then g(x)=g(\alpha_1\mathbf{e}_1 + \dots + \alpha_n\mathbf{e}_n)=\alpha_1g(\mathbf{e}_1) + \dots + \alpha_ng(\mathbf{e}_n)=\mathbf{e}^1(x)g(\mathbf{e}_1) + \dots + \mathbf{e}^n(x)g(\mathbf{e}_n), so g=g(\mathbf{e}_1)\mathbf{e}^1 + \dots + g(\mathbf{e}_n)\mathbf{e}^n. Hence \{\mathbf{e}^1,\dots,\mathbf{e}^n\} spans V^*, and it is therefore a basis of V^*.
For example, if V is \R^2, let its basis be chosen as \{\mathbf{e}_1=(1/2,1/2),\mathbf{e}_2=(0,1)\}. The basis vectors are not orthogonal to each other. Then, \mathbf{e}^1 and \mathbf{e}^2 are
one-forms (functions that map a vector to a scalar) such that \mathbf{e}^1(\mathbf{e}_1)=1, \mathbf{e}^1(\mathbf{e}_2)=0, \mathbf{e}^2(\mathbf{e}_1)=0, and \mathbf{e}^2(\mathbf{e}_2)=1. (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as \begin{bmatrix} e^{11} & e^{12} \\ e^{21} & e^{22} \end{bmatrix} \begin{bmatrix} e_{11} & e_{21} \\ e_{12} & e_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. Solving for the unknown values in the first matrix shows the dual basis to be \{\mathbf{e}^1=(2,0),\mathbf{e}^2=(-1,1)\}. Because \mathbf{e}^1 and \mathbf{e}^2 are functionals, they can be rewritten as \mathbf{e}^1(x,y)=2x and \mathbf{e}^2(x,y)=-x+y. In general, when V is \R^n, if E=[\mathbf{e}_1|\cdots|\mathbf{e}_n] is a matrix whose columns are the basis vectors and \hat{E}=[\mathbf{e}^1|\cdots|\mathbf{e}^n] is a matrix whose columns are the dual basis vectors, then \hat{E}^\textrm{T}\cdot E = I_n, where I_n is the
identity matrix of order n. The biorthogonality property of these two basis sets allows any point \mathbf{x}\in V to be represented as \mathbf{x} = \sum_i \langle\mathbf{x},\mathbf{e}^i \rangle \mathbf{e}_i = \sum_i \langle \mathbf{x}, \mathbf{e}_i \rangle \mathbf{e}^i, even when the basis vectors are not orthogonal to each other. Strictly speaking, this statement only makes sense once the inner product \langle \cdot, \cdot \rangle and the corresponding duality pairing are introduced, as described below in the section on bilinear products and dual spaces. If \R^n is interpreted as the space of columns of n real numbers, its dual space is typically written as the space of rows of n real numbers. Such a row acts on \R^n as a linear functional by ordinary
matrix multiplication. This is because a functional maps every n-vector x into a real number y. Seeing this functional as a matrix M, with x an n\times 1 matrix and y a 1\times 1 matrix (trivially, a real number), the equation Mx=y forces M, for dimension reasons, to be a 1\times n matrix; that is, M must be a row vector. If V consists of the space of geometrical
vectors in the plane, then the
level curves of an element of V^* form a family of parallel lines in V, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of V^* can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses. More generally, if V is a vector space of any dimension, then the
level sets of a linear functional in V^* are parallel hyperplanes in V, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.
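The finite-dimensional discussion above can be checked numerically: the dual basis matrix is the inverse-transpose of the basis matrix. The sketch below (using NumPy) reproduces the \R^2 example with \mathbf{e}_1=(1/2,1/2) and \mathbf{e}_2=(0,1):

```python
import numpy as np

# Columns of E are the basis vectors e_1 = (1/2, 1/2) and e_2 = (0, 1).
E = np.array([[0.5, 0.0],
              [0.5, 1.0]])

# Biorthogonality Ehat^T @ E = I means Ehat = inv(E)^T.
Ehat = np.linalg.inv(E).T

# Columns of Ehat are the dual basis vectors: e^1 = (2, 0), e^2 = (-1, 1),
# matching the dual basis found in the example above.
assert np.allclose(Ehat, np.array([[2.0, -1.0],
                                   [0.0,  1.0]]))

# Any x decomposes as x = sum_i e^i(x) e_i, even for a non-orthogonal basis.
x = np.array([3.0, -2.0])
coeffs = Ehat.T @ x                # c^i = e^i(x)
assert np.allclose(E @ coeffs, x)
```

The choice of test vector x is arbitrary; the decomposition identity holds for every x because Ehat.T is exactly the inverse of E.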
=== Infinite-dimensional case ===

If V is not finite-dimensional but has a
basis \mathbf{e}_\alpha indexed by an infinite set A, then the same construction as in the finite-dimensional case yields
linearly independent elements \mathbf{e}^\alpha (\alpha\in A) of the dual space, but they will not form a basis. For instance, consider the space \R^\infty, whose elements are those
sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers \N. For i \in \N, \mathbf{e}_i is the sequence consisting of all zeroes except in the i-th position, which is 1. The dual space of \R^\infty is (isomorphic to) \R^\N, the space of
all sequences of real numbers: each real sequence (a_n) defines a function where the element (x_n) of \R^\infty is sent to the number \sum_n a_nx_n, which is a finite sum because there are only finitely many nonzero x_n. The
dimension of \R^\infty is
countably infinite, whereas \R^\N does not have a countable basis. This observation generalizes: for any infinite-dimensional vector space, the dual space has strictly larger dimension than the original space. The exact dimension of the dual is given by the
Erdős–Kaplansky theorem.
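The pairing between \R^\infty and its dual \R^\N can be sketched in code: a finitely supported vector is paired with an arbitrary sequence, and the defining sum is always finite. The sparse-dict representation below is an illustrative choice, not a standard API:

```python
# An element of R^inf: only finitely many nonzero entries, stored sparsely
# as {index: value}.
x = {0: 2.0, 3: -1.0, 7: 4.0}      # x_0 = 2, x_3 = -1, x_7 = 4, rest zero

# An element of the dual space R^N: an arbitrary sequence, given by a rule.
a = lambda n: 1.0 / (n + 1)        # a_n = 1/(n+1); not finitely supported

# The pairing sum_n a_n x_n is a finite sum, since x has finite support.
pairing = sum(a(n) * x_n for n, x_n in x.items())
assert abs(pairing - 2.25) < 1e-12   # 2*1 + (-1)*(1/4) + 4*(1/8) = 2.25
```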
=== Bilinear products and dual spaces ===

If V is finite-dimensional, then V is isomorphic to V^*. But there is in general no
natural isomorphism between these two spaces. Any
bilinear form \langle \cdot, \cdot \rangle on V gives a mapping of V into its dual space via v\mapsto \langle v, \cdot\rangle where the right hand side is defined as the functional on V taking each w \in V to \langle v, w \rangle. In other words, the bilinear form determines a linear mapping \Phi_{\langle\cdot,\cdot\rangle} : V\to V^* defined by \left[\Phi_{\langle\cdot,\cdot\rangle}(v), w\right] = \langle v, w\rangle. If the bilinear form is
nondegenerate, then this is an isomorphism onto a subspace of V^*. If V is finite-dimensional, then this is an isomorphism onto all of V^*. Conversely, any isomorphism \Phi from V to a subspace of V^* (resp., all of V^* if V is finite dimensional) defines a unique nondegenerate bilinear form \langle \cdot, \cdot \rangle_{\Phi} on V by \langle v, w \rangle_\Phi = (\Phi (v))(w) = [\Phi (v), w].\, Thus there is a one-to-one correspondence between isomorphisms of V to a subspace of (resp., all of) V^* and nondegenerate bilinear forms on V. If the vector space V is over the
complex field, then sometimes it is more natural to consider
sesquilinear forms instead of bilinear forms. In that case, a given sesquilinear form \langle \cdot, \cdot \rangle determines an isomorphism of V with the
complex conjugate of the dual space \Phi_{\langle \cdot, \cdot \rangle} : V\to \overline{V^*}. The conjugate of the dual space \overline{V^*} can be identified with the set of all additive complex-valued functionals f: V \to \Complex such that f(\alpha v) = \overline{\alpha}f(v).
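The defining property of elements of the conjugate dual space — additivity together with conjugate-homogeneity f(\alpha v) = \overline{\alpha}f(v) — can be checked numerically. The sketch below fixes a vector w and forms the functional v \mapsto \sum_i \overline{v_i} w_i; this particular sesquilinear form is one conventional choice, not forced by the text:

```python
import numpy as np

w = np.array([1 + 2j, 3 - 1j])

# f = Phi(w): additive, but scales by the *conjugate* of the scalar.
f = lambda v: np.sum(np.conj(v) * w)

v = np.array([2 - 1j, 1j])
alpha = 0.5 + 2j
assert np.isclose(f(alpha * v), np.conj(alpha) * f(v))  # conjugate-homogeneous
assert np.isclose(f(v + v), f(v) + f(v))                # additive
```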
=== Injection into the double-dual ===

There is a
natural homomorphism \Psi from V into the double dual V^{**}=\hom (V^*, F), defined by (\Psi(v))(\varphi)=\varphi(v) for all v\in V, \varphi\in V^*. In other words, if \mathrm{ev}_v:V^*\to F is the evaluation map defined by \varphi \mapsto \varphi(v), then \Psi: V \to V^{**} is defined as the map v\mapsto\mathrm{ev}_v. This map \Psi is always
injective; it is an isomorphism if and only if V is finite-dimensional. Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a
natural isomorphism. Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals.
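The evaluation map \Psi is easy to model directly: \Psi(v) is the functional "evaluate at v". A minimal sketch, with V = \R^2 modeled as tuples and functionals as plain callables:

```python
# Psi sends v to the evaluation functional ev_v, with ev_v(phi) = phi(v).
def Psi(v):
    return lambda phi: phi(v)

v = (3.0, 4.0)
phi = lambda x: 2 * x[0] - x[1]    # an arbitrary linear functional on R^2

# (Psi(v))(phi) = phi(v) by definition.
assert Psi(v)(phi) == phi(v) == 2.0
```

Injectivity of \Psi is visible here: if v had some nonzero coordinate, the corresponding coordinate functional would send \Psi(v) to a nonzero value.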
=== Transpose of a linear map ===

If f: V \to W is a
linear map, then the
transpose (or
dual) f^*: W^* \to V^* is defined by f^*(\varphi) = \varphi \circ f \, for every
\varphi \in W^*. The resulting functional
f^* (\varphi) in
V^* is called the
pullback of
\varphi along
f. The following identity holds for all
\varphi \in W^* and
v \in V: [f^*(\varphi),\, v] = [\varphi,\, f(v)], where the bracket [\cdot,\cdot] on the left is the natural pairing of V with its dual space, and that on the right is the natural pairing of W with its dual. This identity characterizes the transpose, and is formally similar to the definition of the
adjoint. The assignment f \mapsto f^* produces an
injective linear map between the space of linear operators from V to W and the space of linear operators from W^* to V^*; this homomorphism is an
isomorphism if and only if W is finite-dimensional. If V = W then the space of linear maps is actually an
algebra under
composition of maps, and the assignment is then an
antihomomorphism of algebras, meaning that {(fg)}^* = g^*f^*. In the language of
category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a
contravariant functor from the category of vector spaces over F to itself. It is possible to identify f^{**} with f using the natural injection into the double dual. If the linear map f is represented by the
matrix A with respect to two bases of V and W, then f^* is represented by the
transpose matrix A^T with respect to the dual bases of W^* and V^*, hence the name. Alternatively, as f is represented by A acting on the left on column vectors, f^* is represented by the same matrix acting on the right on row vectors. These points of view are related by the canonical inner product on \Reals^n, which identifies the space of column vectors with the dual space of row vectors.
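The relationship between a linear map, its matrix, and the transpose acting on row vectors can be verified numerically. The sketch below checks the defining identity [f^*(\varphi),\, v] = [\varphi,\, f(v)] for a randomly chosen matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))    # f: R^2 -> R^3 in the column convention
v = rng.standard_normal(2)         # v in V = R^2
phi = rng.standard_normal(3)       # phi in W*, acting as a row vector

# Defining identity of the transpose: [f*(phi), v] = [phi, f(v)].
# f*(phi) is represented by A^T applied to phi.
lhs = (A.T @ phi) @ v
rhs = phi @ (A @ v)
assert np.isclose(lhs, rhs)
```

The identity is just associativity of matrix multiplication, (phi A) v = phi (A v), which is why the transpose matrix represents f^* in the dual bases.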
=== Quotient spaces and annihilators ===

Let S be a subset of V. The
annihilator of S in V^*, denoted here S^0, is the collection of linear functionals f\in V^* such that [f,s]=0 for all s\in S. That is, S^0 consists of all linear functionals f:V\to F such that the restriction to S vanishes: f|_S = 0. Within finite dimensional vector spaces, the annihilator is dual to (isomorphic to) the
orthogonal complement. The annihilator of a subset is itself a vector space. The annihilator of the zero vector is the whole dual space: \{ 0 \}^0 = V^*, and the annihilator of the whole space is just the zero covector: V^0 = \{ 0 \} \subseteq V^*. Furthermore, the assignment of an annihilator to a subset of V reverses inclusions, so that if \{ 0 \} \subseteq S\subseteq T\subseteq V, then \{ 0 \} \subseteq T^0 \subseteq S^0 \subseteq V^* . If A and B are two subsets of V then A^0 + B^0 \subseteq (A \cap B)^0 . If (A_i)_{i\in I} is any family of subsets of V indexed by i belonging to some index set I, then \left( \bigcup_{i\in I} A_i \right)^0 = \bigcap_{i\in I} A_i^0 . In particular, if A and B are subspaces of V, then (A + B)^0 = A^0 \cap B^0 and (A \cap B)^0 \supseteq A^0 + B^0, with equality when V is finite-dimensional.
If the elements of V carry physical units, the elements of the dual space carry the inverse units. Under the natural pairing, these units cancel, and the resulting scalar value is
dimensionless, as expected. For example, in (continuous)
Fourier analysis, or more broadly
time–frequency analysis: given a one-dimensional vector space with a
unit of time t, the dual space has units of
frequency: occurrences
per unit of time (units of 1/t). For example, if time is measured in
seconds, the corresponding dual unit is the
inverse second: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to 3s \cdot 2s^{-1} = 6. Similarly, if the primal space measures length, the dual space measures
inverse length.

== Continuous dual space ==

When dealing with
topological vector spaces, the
continuous linear functionals from the space into the base field \mathbb{F} = \Complex (or \R) are particularly important. This gives rise to the notion of the "continuous dual space" or "topological dual" which is a linear subspace of the algebraic dual space V^*, denoted by V'. For any
finite-dimensional normed vector space or topological vector space, such as
Euclidean n-space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space, as shown by the example of
discontinuous linear maps. Nevertheless, in the theory of
topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space". For a
topological vector space V its
continuous dual space, or
topological dual space, or just
dual space (in the sense of the theory of topological vector spaces) V' is defined as the space of all continuous linear functionals \varphi:V\to{\mathbb F}. Important examples for continuous dual spaces are the space of compactly supported
test functions \mathcal{D} and its dual \mathcal{D}', the space of arbitrary
distributions (generalized functions); the space of arbitrary test functions \mathcal{E} and its dual \mathcal{E}', the space of compactly supported distributions; and the space of rapidly decreasing test functions \mathcal{S}, the
Schwartz space, and its dual \mathcal{S}', the space of
tempered distributions (slowly growing distributions) in the theory of
generalized functions.
=== Properties ===

If X is a
Hausdorff topological vector space (TVS), then the continuous dual space of X is identical to the continuous dual space of the
completion of X.
=== Topologies on the dual ===

There is a standard construction for introducing a topology on the continuous dual V' of a topological vector space V. Fix a collection \mathcal{A} of
bounded subsets of V. This gives the topology on V of
uniform convergence on sets from \mathcal{A}, or what is the same thing, the topology generated by
seminorms of the form \|\varphi\|_A = \sup_{x\in A} |\varphi(x)|, where \varphi is a continuous linear functional on V, and A runs over the class \mathcal{A}. This means that a net of functionals \varphi_i tends to a functional \varphi in V' if and only if \text{ for all } A\in\mathcal{A}\qquad \|\varphi_i-\varphi\|_A = \sup_{x\in A} |\varphi_i(x)-\varphi(x)|\underset{i\to\infty}{\longrightarrow} 0. Usually (but not necessarily) the class \mathcal{A} is supposed to satisfy the following conditions:
• Each point x of V belongs to some set A\in\mathcal{A}.
• Each two sets A \in \mathcal{A} and B \in \mathcal{A} are contained in some set C \in \mathcal{A}.
• \mathcal{A} is closed under the operation of multiplication by scalars.
If these requirements are fulfilled, then the corresponding topology on V' is Hausdorff and the sets U_A ~=~ \left \{ \varphi \in V' ~:~ \|\varphi\|_A < 1 \right \} form its local base. Here are the three most important special cases.
• The
strong topology on V' is the topology of uniform convergence on
bounded subsets in V (so here \mathcal{A} can be chosen as the class of all bounded subsets in V). If V is a
normed vector space (for example, a
Banach space or a
Hilbert space) then the strong topology on V' is normed (in fact a Banach space if the field of scalars is complete), with the norm \|\varphi\| = \sup_{\|x\| \le 1 } |\varphi(x)|. • The
stereotype topology on V' is the topology of uniform convergence on
totally bounded sets in V (so here \mathcal{A} can be chosen as the class of all totally bounded subsets in V). • The
weak topology on V' is the topology of uniform convergence on finite subsets in V (so here \mathcal{A} can be chosen as the class of all finite subsets in V). Each of these three choices of topology on V' leads to a variant of
reflexivity property for topological vector spaces: • If V' is endowed with the
strong topology, then the corresponding notion of reflexivity is the standard one: the spaces reflexive in this sense are just called
reflexive. • If V' is endowed with the stereotype dual topology, then the corresponding reflexivity is presented in the theory of
stereotype spaces: the spaces reflexive in this sense are called
stereotype. • If V' is endowed with the
weak topology, then the corresponding reflexivity is presented in the theory of
dual pairs: the spaces reflexive in this sense are arbitrary (Hausdorff) locally convex spaces with the weak topology.
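The seminorms \|\varphi\|_A that generate these topologies are straightforward to compute when A is finite, which is exactly the situation for the weak topology. A small numerical sketch, with an arbitrary functional and finite set:

```python
import numpy as np

phi = np.array([2.0, -1.0])        # the functional x -> phi . x on R^2
A = [np.array([1.0, 0.0]),         # a finite subset of V, as used for
     np.array([0.5, 0.5]),         # the weak topology's seminorms
     np.array([0.0, 2.0])]

# ||phi||_A = sup_{x in A} |phi(x)|, a finite maximum here
norm_A = max(abs(phi @ x) for x in A)
assert norm_A == 2.0               # attained at x = (1,0) and x = (0,2)
```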
=== Examples ===

Let 1 < p < \infty be a real number and consider the Banach space \ell^p of all
sequences \mathbf{a} = (a_n) for which \|\mathbf{a}\|_p = \left ( \sum_{n=0}^\infty |a_n|^p \right) ^{\frac{1}{p}} is finite. Define the number q by 1/p + 1/q = 1. Then the continuous dual of \ell^p is naturally identified with \ell^q: given an element \varphi \in (\ell^p)', the corresponding element of \ell^q is the sequence (\varphi(\mathbf {e}_n)) where
\mathbf{e}_n denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element \mathbf{a} = (a_n) \in \ell^q, the corresponding continuous linear functional
\varphi on \ell^p is defined by \varphi (\mathbf{b}) = \sum_n a_n b_n for all \mathbf{b} = (b_n) \in \ell^p (see
Hölder's inequality). In a similar manner, the continuous dual of \ell^1 is naturally identified with \ell^\infty (the space of bounded sequences). Furthermore, the continuous duals of the Banach spaces c (consisting of all
convergent sequences, with the
supremum norm) and c_0 (the sequences converging to zero) are both naturally identified with \ell^1. By the
Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is
anti-isomorphic to the original space. This gives rise to the
bra–ket notation used by physicists in the mathematical formulation of
quantum mechanics. By the
Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures.
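The identification of (\ell^p)' with \ell^q can be illustrated on truncated sequences: the functional defined by \mathbf{a} \in \ell^q acts by \varphi(\mathbf{b}) = \sum_n a_n b_n, and Hölder's inequality bounds its value. A sketch with arbitrary sample data:

```python
import numpy as np

p, q = 3.0, 1.5                         # conjugate exponents: 1/p + 1/q = 1
b = np.array([1.0, -0.5, 0.25, 2.0])    # truncated element of l^p
a = np.array([0.5, 1.0, -2.0, 0.1])     # element of l^q defining phi

phi_b = np.sum(a * b)                   # phi(b) = sum_n a_n b_n
norm_p = np.sum(np.abs(b) ** p) ** (1 / p)
norm_q = np.sum(np.abs(a) ** q) ** (1 / q)

# Hölder's inequality: |phi(b)| <= ||a||_q ||b||_p, so phi is bounded,
# with operator norm exactly ||a||_q.
assert abs(phi_b) <= norm_q * norm_p
```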
=== Transpose of a continuous linear map ===

If T: V \to W is a continuous linear map between two topological vector spaces, then the (continuous) transpose T': W' \to V' is defined by the same formula as before: T'(\varphi) = \varphi \circ T, \quad \varphi \in W'. The resulting functional T'(\varphi) is in V'. The assignment T \mapsto T' produces a linear map between the space of continuous linear maps from V to W and the space of linear maps from W' to V'. When T and U are composable continuous linear maps, then (U \circ T)' = T' \circ U'. When V and W are normed spaces, the norm of the transpose in L(W', V') is equal to that of T in L(V, W). Several properties of transposition depend upon the
Hahn–Banach theorem. For example, the bounded linear map T has dense range
if and only if the transpose T' is injective. When T is a
compact linear map between two Banach spaces V and W, then the transpose T' is compact. This can be proved using the
Arzelà–Ascoli theorem. When V is a Hilbert space, there is an antilinear isomorphism i_V from V onto its continuous dual V'. For every bounded linear map T on V, the transpose and the
adjoint operators are linked by i_V \circ T^* = T' \circ i_V. When T is a continuous linear map between two topological vector spaces V and W, then the transpose T' is continuous when W' and V' are equipped with "compatible" topologies: for example, when for X = V and X = W, both duals X' have the
strong topology \beta(X', X) of uniform convergence on bounded sets of X, or both have the weak-∗ topology \sigma(X', X) of pointwise convergence on X. The transpose T' is continuous from \beta(W', W) to \beta(V', V), or from \sigma(W', W) to \sigma(V', V).
=== Annihilators ===

Assume that W is a closed linear subspace of a normed space V, and consider the annihilator of W in V', W^\perp = \{ \varphi \in V' : W \subseteq \ker \varphi\}. Then, the dual of the quotient V/W can be identified with W^\perp, and the dual of W can be identified with the quotient V'/{W^\perp}. Indeed, let P denote the canonical
surjection from V onto the quotient V/W. Then the transpose P' is an isometric isomorphism from (V/W)' into V', with range equal to W^\perp. If j denotes the injection map from W into V, then the kernel of the transpose j' is the annihilator of W: \ker (j') = W^\perp and it follows from the
Hahn–Banach theorem that j' induces an isometric isomorphism V'/W^\perp \to W'.
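In finite dimensions, where functionals are row vectors, the annihilator of a subspace W = \operatorname{ran} B is the left null space of the matrix B, computable via the SVD. A sketch (the specific matrix B is an arbitrary example):

```python
import numpy as np

# W = span of the columns of B inside V = R^3.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Functionals on R^3 are row vectors f; the annihilator W^0 is the set of
# rows with f @ B = 0, i.e. the left null space of B.
_, s, Vt = np.linalg.svd(B.T)
f = Vt[-1]                         # spans the 1-dimensional annihilator
assert np.allclose(f @ B, 0.0)

# Dimension count: dim W + dim W^0 = dim V, here 2 + 1 = 3.
```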
=== Further properties ===

If the dual of a normed space V is
separable, then so is the space V itself. The converse is not true: for example, the space \ell^1 is separable, but its dual \ell^\infty is not.
=== Double dual ===

In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator \Psi: V \to V'' from a normed space V into its continuous double dual V'', defined by \Psi(x)(\varphi) = \varphi(x), \quad x \in V, \ \varphi \in V' . The map \Psi also carries vector addition from V to V'': writing \langle x_1, x_2\rangle for the ordered pair of two vectors, the addition + sends x_1 and x_2 to x_1 + x_2, and the induced addition +' satisfies [\Psi(x_1) +' \Psi(x_2)](\varphi) = \varphi(x_1 + x_2) = [\Psi(x_1 + x_2)](\varphi) for any \varphi in the dual space. As a consequence of the
Hahn–Banach theorem, this map is in fact an
isometry, meaning \| \Psi(x) \| = \| x \| for all x \in V. Normed spaces for which the map \Psi is a
bijection are called
reflexive. When V is a
topological vector space then \Psi(x) can still be defined by the same formula, for every x \in V, however several difficulties arise. First, when V is not
locally convex, the continuous dual may be equal to { 0 } and the map \Psi trivial. However, if V is
Hausdorff and locally convex, the map \Psi is injective from V to the algebraic dual V^* of the continuous dual, again as a consequence of the Hahn–Banach theorem. Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual V', so that the continuous double dual V'' is not uniquely defined as a set. Saying that \Psi maps from V to V'', or in other words, that \Psi(x) is continuous on V' for every x \in V, is a reasonable minimal requirement on the topology of V', namely that the evaluation mappings \varphi \in V' \mapsto \varphi(x), \quad x \in V , be continuous for the chosen topology on V'. Further, there is still a choice of a topology on V'', and continuity of \Psi depends upon this choice. As a consequence, defining
reflexivity in this framework is more involved than in the normed case.

== See also ==