In a finite-dimensional
vector space V over a field
K with a non-degenerate symmetric
bilinear form (which may be referred to as the
metric tensor), there is little distinction between covariant and contravariant vectors, because the
bilinear form allows covectors to be identified with vectors. That is, a vector
v uniquely determines a covector
α via \alpha(w) = g(v, w) for all vectors
w. Conversely, each covector
α determines a unique vector
v by this equation. Because of this identification of vectors with covectors, one may speak of the
covariant components or
contravariant components of a vector, that is, they are just representations of the same vector using the
reciprocal basis. Given a basis \mathbf{f} = (X_1, \dots, X_n) of V, there is a unique reciprocal basis \mathbf{f}^\sharp = (Y^1, \dots, Y^n) of V determined by requiring that g(Y^i, X_j) = \delta^i_j, the Kronecker delta. In terms of these bases, any vector v can be written in two ways: \begin{align} v &= \sum_i v^i[\mathbf{f}]X_i = \mathbf{f}\,\mathbf{v}[\mathbf{f}]\\ &= \sum_i v_i[\mathbf{f}]Y^i = \mathbf{f}^\sharp\mathbf{v}^\sharp[\mathbf{f}]. \end{align}
The components v^i[\mathbf{f}] are the contravariant components of the vector v in the basis \mathbf{f}, and the components v_i[\mathbf{f}] are the covariant components of v in the basis \mathbf{f}. The terminology is justified because under a change of basis, \mathbf{v}[\mathbf{f}A] = A^{-1}\mathbf{v}[\mathbf{f}],\quad \mathbf{v}^\sharp[\mathbf{f}A] = A^T\mathbf{v}^\sharp[\mathbf{f}], where A is an invertible n\times n matrix and the matrix transpose has its usual meaning.
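These transformation rules can be illustrated numerically. The following sketch uses hypothetical example numbers, NumPy, and the standard dot product on \mathbb{R}^2 as the bilinear form g:

```python
import numpy as np

# Verify the transformation rules
#   v[fA]  = A^{-1} v[f]   (contravariant components)
#   v#[fA] = A^T   v#[f]   (covariant components)
# for an example basis f of R^2 with the standard dot product as g.

f = np.array([[2.0, 1.0],
              [0.0, 1.0]])          # columns are the basis vectors X_1, X_2
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # an invertible change-of-basis matrix
v = np.array([3.0, 5.0])            # a vector, in standard coordinates

def contravariant(basis, vec):
    # components v^i such that vec = sum_i v^i X_i
    return np.linalg.solve(basis, vec)

def covariant(basis, vec):
    # components v_i = g(vec, X_i) = vec . X_i
    return basis.T @ vec

fA = f @ A                          # the transformed basis

assert np.allclose(contravariant(fA, v), np.linalg.inv(A) @ contravariant(f, v))
assert np.allclose(covariant(fA, v), A.T @ covariant(f, v))
```

The basis, matrix A, and vector v here are arbitrary illustrative choices; any invertible basis and matrix satisfy the same identities.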
==Euclidean plane==
In the Euclidean plane, the dot product allows for vectors to be identified with covectors. If \mathbf{e}_1,\mathbf{e}_2 is a basis, then the dual basis \mathbf{e}^1,\mathbf{e}^2 satisfies \begin{align} \mathbf{e}^1\cdot\mathbf{e}_1 = 1, &\quad \mathbf{e}^1\cdot\mathbf{e}_2 = 0 \\ \mathbf{e}^2\cdot\mathbf{e}_1 = 0, &\quad \mathbf{e}^2\cdot\mathbf{e}_2 = 1. \end{align} Thus, \mathbf{e}^1 is perpendicular to \mathbf{e}_2 and \mathbf{e}^2 is perpendicular to \mathbf{e}_1, and the lengths of \mathbf{e}^1 and \mathbf{e}^2 are normalized against \mathbf{e}_1 and \mathbf{e}_2, respectively.
==Example==
For example, suppose that we are given a basis \mathbf{e}_1, \mathbf{e}_2 consisting of a pair of vectors making a 45° angle with one another, such that \mathbf{e}_1 has length 2 and \mathbf{e}_2 has length 1. Then the dual basis vectors are given as follows:
• \mathbf{e}^2 is the result of rotating \mathbf{e}_1 through an angle of 90° (where the sense is measured by assuming the pair \mathbf{e}_1, \mathbf{e}_2 to be positively oriented), and then rescaling so that \mathbf{e}^2 \cdot \mathbf{e}_2 = 1 holds.
• \mathbf{e}^1 is the result of rotating \mathbf{e}_2 through an angle of −90°, and then rescaling so that \mathbf{e}^1 \cdot \mathbf{e}_1 = 1 holds.
Applying these rules, we find \mathbf{e}^1 = \frac{1}{2}\mathbf{e}_1 - \frac{1}{\sqrt{2}}\mathbf{e}_2 and \mathbf{e}^2 = -\frac{1}{\sqrt{2}}\mathbf{e}_1 + 2\mathbf{e}_2. Thus the change of basis matrix in going from the original basis to the reciprocal basis is R = \begin{bmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & 2 \end{bmatrix}, since [\mathbf{e}^1\ \mathbf{e}^2] = [\mathbf{e}_1\ \mathbf{e}_2]\begin{bmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & 2 \end{bmatrix}. For instance, the vector v = \frac{3}{2}\mathbf{e}_1 + 2\mathbf{e}_2 has contravariant components v^1 = \frac{3}{2},\quad v^2 = 2. The covariant components are obtained by equating the two expressions for the vector v: v = v_1\mathbf{e}^1 + v_2\mathbf{e}^2 = v^1\mathbf{e}_1 + v^2\mathbf{e}_2, so \begin{align} \begin{bmatrix}v_1\\ v_2\end{bmatrix} &= R^{-1}\begin{bmatrix}v^1 \\ v^2\end{bmatrix} \\ &= \begin{bmatrix}4 & \sqrt{2} \\ \sqrt{2} & 1\end{bmatrix} \begin{bmatrix}v^1 \\ v^2\end{bmatrix} \\ &= \begin{bmatrix}6 + 2\sqrt{2} \\ 2 + \frac{3}{\sqrt{2}}\end{bmatrix}. \end{align}
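The worked example can be checked numerically. The sketch below (assuming NumPy, and placing \mathbf{e}_1 along the x-axis, which is one choice among many) reproduces the metric, the dual basis, and the covariant components:

```python
import numpy as np

# e_1 has length 2, e_2 has length 1, and the angle between them is 45 deg.
e1 = np.array([2.0, 0.0])
e2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # unit vector at 45 deg

E = np.column_stack([e1, e2])        # basis vectors as columns
G = E.T @ E                          # Gram (metric) matrix, g_ij = e_i . e_j

# Dual basis: the columns of E @ G^{-1} satisfy e^i . e_j = delta^i_j.
E_dual = E @ np.linalg.inv(G)
assert np.allclose(E_dual.T @ E, np.eye(2))

# Contravariant components of v = (3/2) e_1 + 2 e_2 ...
v_up = np.array([1.5, 2.0])
# ... and its covariant components v_i = g_ij v^j:
v_down = G @ v_up
assert np.allclose(v_down, [6 + 2 * np.sqrt(2), 2 + 3 / np.sqrt(2)])
```

Note that G^{-1} here equals the matrix R of the example, so lowering indices with G is the same as the R^{-1} computation above.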
==Three-dimensional Euclidean space==
In the three-dimensional
Euclidean space, one can also determine explicitly the dual basis to a given set of
basis vectors
e1,
e2,
e3 of
E3 that are not necessarily assumed to be orthogonal nor of unit norm. The dual basis vectors are: \mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)} ; \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_2 \cdot (\mathbf{e}_3 \times \mathbf{e}_1)}; \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_3 \cdot (\mathbf{e}_1 \times \mathbf{e}_2)}. Even when the
\mathbf{e}_i and \mathbf{e}^i are not orthonormal, they are still mutually reciprocal: \mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j. Then the contravariant components of any vector
v can be obtained by the
dot product of
v with the dual basis vectors: q^1 = \mathbf{v} \cdot \mathbf{e}^1; \qquad q^2 = \mathbf{v} \cdot \mathbf{e}^2; \qquad q^3 = \mathbf{v} \cdot \mathbf{e}^3 . Likewise, the covariant components of
v can be obtained from the dot product of
v with basis vectors, viz. q_1 = \mathbf{v} \cdot \mathbf{e}_1; \qquad q_2 = \mathbf{v} \cdot \mathbf{e}_2; \qquad q_3 = \mathbf{v} \cdot \mathbf{e}_3 . Then
v can be expressed in two (reciprocal) ways, viz. \mathbf{v} = q^i \mathbf{e}_i = q^1 \mathbf{e}_1 + q^2 \mathbf{e}_2 + q^3 \mathbf{e}_3 . or \mathbf{v} = q_i \mathbf{e}^i = q_1 \mathbf{e}^1 + q_2 \mathbf{e}^2 + q_3 \mathbf{e}^3 Combining the above relations, we have \mathbf{v} = (\mathbf{v} \cdot \mathbf{e}^i) \mathbf{e}_i = (\mathbf{v} \cdot \mathbf{e}_i) \mathbf{e}^i and we can convert between the basis and dual basis with q_i = \mathbf{v}\cdot \mathbf{e}_i = (q^j \mathbf{e}_j)\cdot \mathbf{e}_i = (\mathbf{e}_j\cdot\mathbf{e}_i) q^j and q^i = \mathbf{v}\cdot \mathbf{e}^i = (q_j \mathbf{e}^j)\cdot \mathbf{e}^i = (\mathbf{e}^j\cdot\mathbf{e}^i) q_j . If the basis vectors are
orthonormal, then they are the same as the dual basis vectors.
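The cross-product formulas and the reciprocity relation can be verified numerically. In this sketch the basis vectors are hypothetical example values and NumPy is assumed:

```python
import numpy as np

# Dual basis in E^3 via cross products, for an arbitrary (non-orthogonal,
# non-unit) example basis.
e1 = np.array([2.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 3.0])

vol = np.dot(e1, np.cross(e2, e3))   # scalar triple product e1 . (e2 x e3)
d1 = np.cross(e2, e3) / vol          # e^1
d2 = np.cross(e3, e1) / vol          # e^2
d3 = np.cross(e1, e2) / vol          # e^3

# Reciprocity: e^i . e_j = delta^i_j
D = np.array([d1, d2, d3])           # rows are the dual basis vectors
E = np.array([e1, e2, e3])           # rows are the basis vectors
assert np.allclose(D @ E.T, np.eye(3))

# A vector decomposes both ways: v = (v . e^i) e_i = (v . e_i) e^i
v = np.array([1.0, -2.0, 0.5])
q_up = D @ v                         # contravariant components q^i
q_down = E @ v                       # covariant components q_i
assert np.allclose(q_up @ E, v)
assert np.allclose(q_down @ D, v)
```

The three denominators in the formulas are equal (the scalar triple product is invariant under cyclic permutation), which is why a single `vol` suffices.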
==Vector spaces of any dimension==
The following applies to any vector space of finite dimension n equipped with a non-degenerate commutative and distributive dot product, and thus also to Euclidean spaces of any dimension. All indices in the formulas run from 1 to n. The
Einstein notation for the implicit summation of the terms with the same upstairs (contravariant) and downstairs (covariant) indices is followed. The historical and geometrical meaning of the terms
contravariant and
covariant will be explained at the end of this section.
==Definitions==
• Covariant basis of a vector space of dimension n: \mathbf{e_j} \triangleq any linearly independent basis, for which in general \mathbf{e_i} \cdot \mathbf{e_j} \neq \delta_{ij}, i.e. not necessarily orthonormal (D.1).
• Contravariant components of a vector \mathbf{v}: v^i \triangleq \{ v^i \mid \mathbf{v} = v^i \mathbf{e_i}\} (D.2).
• Dual (contravariant) basis of a vector space of dimension n: \mathbf{e^i} \triangleq \{ \mathbf{e^i} : \mathbf{e^i} \cdot \mathbf{e_j} = \delta^i_j \} (D.3).
• Covariant components of a vector \mathbf{v}: v_i \triangleq \{ v_i \mid \mathbf{v} = v_i \mathbf{e^i}\} (D.4).
• Components of the covariant metric tensor: g_{ij} \triangleq \mathbf{e_i} \cdot \mathbf{e_j}; the metric tensor can be considered a square matrix, since it has only two covariant indices: G \triangleq \{ g_{ij} \}; by the commutative property of the dot product, the g_{ij} are symmetric (D.5).
• Components of the contravariant metric tensor: g^{ij} \triangleq \{ h_{ij} : G^{-1} = \{ h_{ij} \}\}; these are the elements of the inverse G^{-1} of the covariant metric tensor/matrix, and by the properties of the inverse of a symmetric matrix they are also symmetric (D.6).
==Corollaries==
• g^{ij}g_{jk} = \delta^i_k (1).
Proof: from the properties of the inverse matrix (D.6). • \mathbf{e}^i = g^{ij} \mathbf{e}_j (2).
Proof: let's suppose that \{A^{ij} \mid \mathbf{e}^i = A^{ij} \mathbf{e}_j\}; we will show that A^{ij} = g^{ij}. Taking the dot product of both sides with \mathbf{e}_k:\begin{align} &\mathbf{e}^i \cdot \mathbf{e}_k = (A^{ij} \mathbf{e}_j) \cdot \mathbf{e}_k \\ \xrightarrow{\text{(D.3,D.5)}}{}& A^{ij}g_{jk} = \delta^i_k; \end{align}multiplying both sides by g^{mk}:\begin{align} &g^{mk} A^{ij}g_{jk} = g^{mk}\delta^i_k \\ \xrightarrow{\text{(D.5)}}{}&\; g^{mk}g_{kj}A^{ij} = g^{mi} \\ \xrightarrow{\text{(1)}} {}&\; \delta^m_j A^{ij} = g^{mi} \\ \longrightarrow{}&\; A^{im} = g^{mi} \stackrel{\text{(D.6)}}{=} g^{im}. \quad \blacksquare \end{align} • \mathbf{e}^i \cdot \mathbf{e}^j = g^{ij} (3).
Proof: \begin{align} &\mathbf{e}^i \stackrel{\text{(2)}}{{}={}} g^{ik} \mathbf{e}_k; \\ &\mathbf{e}^j \stackrel{\text{(2)}}{{}={}} g^{jm} \mathbf{e}_m \\ \longrightarrow{}& \mathbf{e}^i \cdot \mathbf{e}^j = g^{ik} g^{jm} (\mathbf{e}_k \cdot \mathbf{e}_m) \stackrel{\text{(D.5)}}{{}={}} g^{ik}g^{jm}g_{km} \stackrel{\text{(1)}}{{}={}} \delta^i_m g^{jm} = g^{ji} \stackrel{\text{(D.6)}}{{}={}} g^{ij}. \quad \blacksquare \end{align} • \mathbf{e}_i = g_{ij} \mathbf{e}^j (4).
Proof: let's suppose that \{B_{ij} \mid \mathbf{e}_i = B_{ij} \mathbf{e}^j\}; we will show that B_{ij} = g_{ij}. Taking the dot product of both sides with \mathbf{e}^k: \mathbf{e}_i \cdot \mathbf{e}^k = (B_{ij} \mathbf{e}^j) \cdot \mathbf{e}^k \stackrel{\text{(D.3,3)}}{\to} B_{ij}g^{jk} = \delta_i^k; multiplying both sides by g_{mk}:\begin{align} &g_{mk}B_{ij}g^{jk} = g_{mk}\delta_i^k \\ \xrightarrow{\text{(D.5)}}{} & g_{mk}g^{kj}B_{ij} = g_{mi} \\ \xrightarrow{\text{(1)}}{} & \delta_m^j B_{ij} = g_{mi} \\ \longrightarrow{} & B_{im} = g_{mi} \stackrel{\text{(D.5)}}{{}={}} g_{im} . \quad \blacksquare \end{align} • v^i = g^{ij} v_j (5).
Proof: \begin{align} &\mathbf{v} \stackrel{\text{(D.2)}}{=} v^i \mathbf{e}_i; \\ &\mathbf{v} \stackrel{\text{(D.4)}}{=} v_j \mathbf{e}^j \stackrel{\text{(2)}}{=} v_j(g^{ji} \mathbf{e}_i) \\ \longrightarrow{}& v^i \mathbf{e}_i = v_j g^{ji} \mathbf{e}_i, \\ \forall i &: v^i = g^{ji}v_j \stackrel{\text{(D.6)}}{=} g^{ij} v_j. \quad \blacksquare \end{align} • v_i = g_{ij} v^j (6).
Proof: specular to (5). • v_i = \mathbf{v} \cdot \mathbf{e}_i (7).
Proof: \begin{align} \mathbf{v} \cdot \mathbf{e}_i \stackrel{\text{(D.4)}}{{}={}} (v_j \mathbf{e}^j) \cdot \mathbf{e}_i \stackrel{\text{(D.3)}}{{}={}} v_j \delta^j_i = v_i. \quad \blacksquare \end{align} • v^i = \mathbf{v} \cdot \mathbf{e}^i (8).
Proof: specular to (7). • \mathbf{u} \cdot \mathbf{v} = g_{ij}u^iv^j (9).
Proof: \mathbf{u} \cdot \mathbf{v} \stackrel{\text{(D.2)}}{{}={}} (u^i\mathbf{e}_i) \cdot (v^j\mathbf{e}_j) = (\mathbf{e}_i \cdot \mathbf{e}_j) u^i v^j \stackrel{\text{(D.5)}}{{}={}} g_{ij}u^iv^j . \quad \blacksquare • \mathbf{u} \cdot \mathbf{v} = g^{ij}u_iv_j (10).
Proof: specular to (9).
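The corollaries above can be verified numerically for a concrete case. This sketch assumes NumPy, a hypothetical non-orthonormal basis of \mathbb{R}^3, and the standard dot product as the metric:

```python
import numpy as np

# Check corollaries (1)-(10) for an example non-orthonormal basis of R^3.
E = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])      # rows are the covariant basis vectors e_i

G = E @ E.T                          # g_ij = e_i . e_j            (D.5)
G_inv = np.linalg.inv(G)             # g^ij                        (D.6)

assert np.allclose(G_inv @ G, np.eye(3))        # (1) g^ij g_jk = delta^i_k

E_dual = G_inv @ E                   # (2) e^i = g^ij e_j
assert np.allclose(E_dual @ E.T, np.eye(3))     # (D.3) e^i . e_j = delta^i_j
assert np.allclose(E_dual @ E_dual.T, G_inv)    # (3) e^i . e^j = g^ij
assert np.allclose(G @ E_dual, E)               # (4) e_i = g_ij e^j

v = np.array([1.0, -2.0, 0.5])       # a vector in standard coordinates
v_up = E_dual @ v                    # (8) v^i = v . e^i
v_down = E @ v                       # (7) v_i = v . e_i
assert np.allclose(v_up, G_inv @ v_down)        # (5) v^i = g^ij v_j
assert np.allclose(v_down, G @ v_up)            # (6) v_i = g_ij v^j

u = np.array([0.5, 3.0, -1.0])
u_up, u_down = E_dual @ u, E @ u
assert np.allclose(u @ v, u_up @ G @ v_up)          # (9)  u.v = g_ij u^i v^j
assert np.allclose(u @ v, u_down @ G_inv @ v_down)  # (10) u.v = g^ij u_i v_j
```

The specific basis and vectors are arbitrary; any invertible E yields the same identities, since they follow from the definitions alone.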
==Historical and geometrical meaning==
Considering this figure for the case of a Euclidean space with n = 2: since \mathbf{v} = \mathbf{OA} + \mathbf{OB}, if we want to express \mathbf{v} in terms of the covariant basis, we have to multiply the basis vectors by the coefficients v^1 = \frac{\vert \mathbf{OA} \vert}{\vert \mathbf{e_1} \vert}, v^2 = \frac{\vert \mathbf{OB} \vert}{\vert \mathbf{e_2} \vert}. With \mathbf{v}, and thus \mathbf{OA} and \mathbf{OB}, fixed, if the magnitude of \mathbf{e_i} increases, the value of the component v^i decreases; that is why these components are called contra-variant (they vary contrary to the magnitude of the basis vectors). Symmetrically, corollary (7) states that the components v_i equal the dot product \mathbf{v} \cdot \mathbf{e_i} between the vector and the covariant basis vectors, and since this is directly proportional to the magnitude of the basis vectors, they are called co-variant. If we consider the dual (contravariant) basis, the situation is perfectly specular: the covariant components are contra-variant with respect to the magnitude of the dual basis vectors, while the contravariant components are co-variant. So in the end it all boils down to a matter of convention: historically, the first non-orthonormal basis of the vector space of choice was called "covariant", its dual basis "contravariant", and the corresponding components were named specularly. If the covariant basis becomes orthonormal, the dual contravariant basis aligns with it and the covariant components collapse into the contravariant ones, which is the most familiar situation when dealing with geometrical Euclidean vectors. G and G^{-1} become the identity matrix I, and: \begin{align} g_{ij} &= \delta_{ij}, \\ g^{ij} &= \delta^{ij}, \\[1ex] \mathbf{u} \cdot \mathbf{v} &= \delta_{ij}u^iv^j = \sum_{i}u^iv^i \\ &= \delta^{ij}u_iv_j = \sum_{i}u_iv_i. \end{align} If the metric is non-Euclidean, but for instance Minkowskian as in the special relativity and general relativity theories, the bases are never orthonormal, even in the case of special relativity, where G and G^{-1} become, for n = 4, \eta \triangleq \operatorname{diag}(1,-1,-1,-1). In this scenario, the covariant and contravariant components always differ.

==Use in tensor analysis==