A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V^* or V'. If v_1, ..., v_n is a basis of V (this implies that V is finite-dimensional), then one can define, for each i = 1, ..., n, a linear map v_i^* such that v_i^*(v_i) = 1 and v_i^*(v_j) = 0 if j ≠ i. These linear maps form a basis of V^*, called the dual basis of v_1, ..., v_n. (If V is not finite-dimensional, the v_i^* may be defined similarly; they are linearly independent, but do not form a basis.) For v in V, the map
:f\to f(\mathbf v)
is a linear form on V^*. This defines the canonical linear map from V into (V^*)^*, the dual of V^*, called the double dual or bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.) There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation
:\langle f, \mathbf x\rangle
for denoting f(\mathbf x).
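As a numerical sketch of the dual basis (an illustration using NumPy, not part of the original text): if the basis vectors of a finite-dimensional space are taken as the columns of an invertible matrix B, then the coordinate rows of the dual basis are the rows of B⁻¹, since B⁻¹B = I encodes exactly the conditions v_i^*(v_i) = 1 and v_i^*(v_j) = 0 for j ≠ i.

```python
import numpy as np

# A basis of R^2, given as the columns of an (arbitrary invertible) matrix B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# The dual basis forms are the rows of B^{-1}:
# row i of B_inv applied to column j of B gives 1 if i == j, else 0.
B_inv = np.linalg.inv(B)

# Check v_i^*(v_j) = delta_ij for all i, j.
assert np.allclose(B_inv @ B, np.eye(2))

# Evaluate the linear form v_1^* at an arbitrary vector v.
v = np.array([3.0, 4.0])
print(B_inv[0] @ v)  # prints 1.0
```

Here B and v are hypothetical data chosen only to make the check concrete; any invertible B would do.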
Dual map
Let f : V → W be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map
:f^*:W^*\to V^*
between the dual spaces, which is called the dual or the transpose of f. If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f^* over the dual bases is the transpose of M, obtained by exchanging rows and columns. If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by
:\langle h^\mathsf T , M \mathbf v\rangle = \langle h^\mathsf T M, \mathbf v\rangle.
To highlight this symmetry, the two members of this equality are sometimes written
:\langle h^\mathsf T \mid M \mid \mathbf v\rangle.
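The identity above can be checked numerically. The following NumPy sketch (not part of the original text; the matrix and vectors are arbitrary random data) verifies that pairing h with Mv gives the same scalar as pairing the transposed (dual) map's output Mᵀh with v.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear map f : R^3 -> R^2 given by its matrix M,
# a vector v in R^3, and a linear form h on W = R^2 (as a coordinate vector).
M = rng.standard_normal((2, 3))
v = rng.standard_normal(3)
h = rng.standard_normal(2)

# <h, M v> = <M^T h, v>: applying h after f equals first applying
# the dual map (the transpose) to h, then pairing the result with v.
lhs = h @ (M @ v)
rhs = (M.T @ h) @ v
assert np.isclose(lhs, rhs)
```

The design point is that no new computation is needed to apply the dual map: it is the same matrix read with rows and columns exchanged.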
Inner-product spaces
Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map
: \langle \cdot, \cdot \rangle : V \times V \to F
that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:
• Conjugate symmetry:
:\langle \mathbf u, \mathbf v\rangle =\overline{\langle \mathbf v, \mathbf u\rangle}.
:Over \mathbb{R}, conjugate symmetry reduces to ordinary symmetry.
• Linearity in the first argument:
:\begin{align} \langle a \mathbf u, \mathbf v\rangle &= a \langle \mathbf u, \mathbf v\rangle. \\ \langle \mathbf u + \mathbf v, \mathbf w\rangle &= \langle \mathbf u, \mathbf w\rangle+ \langle \mathbf v, \mathbf w\rangle. \end{align}
• Positive-definiteness:
:\langle \mathbf v, \mathbf v\rangle \geq 0,
:with equality only for v = 0.
We can define the length of a vector v in V by
:\|\mathbf v\|^2=\langle \mathbf v, \mathbf v\rangle,
and we can prove the Cauchy–Schwarz inequality:
:|\langle \mathbf u, \mathbf v\rangle| \leq \|\mathbf u\| \cdot \|\mathbf v\|.
In particular, the quantity
:\frac{|\langle \mathbf u, \mathbf v\rangle|}{\|\mathbf u\| \cdot \|\mathbf v\|} \leq 1,
and so we can call this quantity the cosine of the angle between the two vectors. Two vectors are orthogonal if \langle \mathbf u, \mathbf v\rangle = 0. An orthonormal basis is a basis in which all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if \mathbf v = a_1 \mathbf v_1 + \cdots + a_n \mathbf v_n, then
:a_i = \langle \mathbf v, \mathbf v_i \rangle.
The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T^* as the linear transform satisfying
: \langle T \mathbf u, \mathbf v \rangle = \langle \mathbf u, T^* \mathbf v\rangle.
If T satisfies T T^* = T^* T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.
==Relationship with geometry==