There are a number of different ways to define a geometric algebra. Hestenes's original approach was axiomatic, "full of geometric significance" and equivalent to the universal Clifford algebra. Given a finite-dimensional vector space {{tmath|1= V }} over a field {{tmath|1= F }} with a symmetric bilinear form (the inner product, e.g., the Euclidean or Lorentzian metric) {{tmath|1= g : V \times V \to F }}, the geometric algebra of the quadratic space {{tmath|1= (V, g) }} is the Clifford algebra {{tmath|1= \operatorname{Cl}(V, g) }}, an element of which is called a multivector. The Clifford algebra is commonly defined as a quotient of the tensor algebra of {{tmath|1= V }}, though this definition is abstract, so the following definition is presented without requiring abstract algebra.

; Definition : A unital associative algebra {{tmath|1= \operatorname{Cl}(V, g) }} with a nondegenerate symmetric bilinear form {{tmath|1= g }} is the Clifford algebra of the quadratic space {{tmath|1= (V, g) }} if
:* it contains {{tmath|1= F }} and {{tmath|1= V }} as distinct subspaces
:* {{tmath|1= a^2 = g(a,a) }} for {{tmath|1= a \in V }}
:* {{tmath|1= V }} generates {{tmath|1= \operatorname{Cl}(V, g) }} as an algebra
:* {{tmath|1= \operatorname{Cl}(V, g) }} is not generated by any proper subspace of {{tmath|1= V }}.

To cover degenerate symmetric bilinear forms, the last condition must be modified.{{efn|It may be replaced by the condition that the product of any set of linearly independent vectors in {{tmath|1= V }} must not be in {{tmath|1= F }} or that the dimension of the algebra must be {{tmath|1= 2^{\dim V} }}.}} It can be shown that these conditions uniquely characterize the geometric product.

For the remainder of this article, only the real case, {{tmath|1= F = \R }}, will be considered. The notation {{tmath|1= \mathcal{G}(p,q) }} (respectively {{tmath|1= \mathcal{G}(p,q,r) }}) will be used to denote a geometric algebra for which the bilinear form has the signature {{tmath|1= (p,q) }} (respectively {{tmath|1= (p,q,r) }}). The product in the geometric algebra is called the
geometric product, and the product in the contained exterior algebra is called the
exterior product (frequently called the
wedge product or the
outer product). It is standard to denote these respectively by juxtaposition (i.e., suppressing any explicit multiplication symbol) and the symbol {{tmath|1= \wedge }}. Their connection to the inner product will be elaborated on below.

The above definition of the geometric algebra is still somewhat abstract, so we summarize the properties of the geometric product here. For multivectors {{tmath|1= A, B, C\in \mathcal{G}(p,q) }}:
* {{tmath|1= AB \in \mathcal{G}(p,q) }} (closure)
* {{tmath|1= 1A = A1 = A }}, where {{tmath|1= 1 }} is the identity element (existence of an identity element)
* {{tmath|1= A(BC) = (AB)C }} (associativity)
* {{tmath|1= A(B + C) = AB + AC }} and {{tmath|1= (B + C)A = BA + CA }} (distributivity)
* {{tmath|1= a^2 = g(a,a) \in \R }} for {{tmath|1= a \in V }}.

The exterior product has the same properties, except that the last property above is replaced by {{tmath|1= a \wedge a = 0 }} for {{tmath|1= a \in V }}. Note that in the last property above, the real number {{tmath|1= a^2 }} need not be nonnegative if {{tmath|1= g }} is not positive-definite. An important property of the geometric product is the existence of elements that have a multiplicative inverse. For a vector {{tmath|1= a }}, if {{tmath|1= a^2 \ne 0 }} then {{tmath|1= a^{-1} }} exists and is equal to {{tmath|1= g(a,a)^{-1}a }}. A nonzero element of the algebra does not necessarily have a multiplicative inverse. For example, if {{tmath|1= u }} is a vector in {{tmath|1= V }} such that {{tmath|1= u^2 = 1 }}, the element {{tmath|1= \tfrac{1}{2}(1 + u) }} is both a nontrivial idempotent element and a nonzero zero divisor, and thus has no inverse.{{efn|Given {{tmath|1= u^2 = 1 }}, we have that {{tmath|1= (\tfrac{1}{2}(1 + u))^2 = \tfrac{1}{4}(1 + 2u + uu) = \tfrac{1}{4}(1 + 2u + 1) = \tfrac{1}{2}(1 + u) }}, showing that {{tmath|1= \tfrac{1}{2}(1 + u) }} is idempotent, and that {{tmath|1= \tfrac{1}{2}(1 + u)(1 - u) = \tfrac{1}{2}(1 - uu) = \tfrac{1}{2}(1 - 1) = 0 }}, showing that it is a nonzero zero divisor.}} It is usual to identify {{tmath|1= \R }} and {{tmath|1= V }} with their images under the natural
embeddings \R \to \mathcal{G}(p,q) and {{tmath|1= V \to \mathcal{G}(p,q) }}. In this article, this identification is assumed. Throughout, the terms
scalar and
vector refer to elements of \R and V respectively (and of their images under this embedding).
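These properties can be made concrete in code. The following is a minimal Python sketch (not any standard library: the dict-based multivector encoding and the names `blade_mul` and `gp` are invented here) of {{tmath|1= \mathcal{G}(p,q) }} over an orthonormal basis: a multivector is a dictionary from basis blades (increasing tuples of indices) to real coefficients, and the geometric product reorders and contracts basis factors with the appropriate signs. It exhibits the vector inverse {{tmath|1= g(a,a)^{-1}a }} and the non-invertible idempotent {{tmath|1= \tfrac{1}{2}(1+u) }} discussed above.

```python
# Minimal sketch of G(p, q): multivectors as {blade: coeff} dicts, where a
# blade is a sorted tuple of 1-based basis indices and () is the scalar 1.
# Basis vectors are orthonormal: e_i e_i = sig[i] (+1 or -1). Names ad hoc.

def blade_mul(a, b, sig):
    """Product of two basis blades; returns (sign, blade)."""
    sign, out = 1, list(a)
    for x in b:
        pos = len(out)
        while pos > 0 and out[pos - 1] > x:
            pos -= 1
            sign = -sign          # each transposition of distinct vectors flips sign
        if pos > 0 and out[pos - 1] == x:
            sign *= sig[x]        # e_x e_x = sig[x]
            out.pop(pos - 1)
        else:
            out.insert(pos, x)
    return sign, tuple(out)

def gp(A, B, sig):
    """Geometric product of two multivectors."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sig)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {b: c for b, c in C.items() if c != 0}

sig = {1: 1, 2: 1, 3: 1}                      # G(3, 0)
e1, e2 = {(1,): 1.0}, {(2,): 1.0}
print(gp(e1, e1, sig))                        # {(): 1.0}: a vector squares to a scalar
print(gp(e1, e2, sig))                        # {(1, 2): 1.0}: a bivector

# A vector with a^2 != 0 has inverse g(a,a)^{-1} a ...
a = {(1,): 3.0}
a_inv = {(1,): 3.0 / 9.0}
print(gp(a, a_inv, sig))                      # the scalar 1

# ... but 1/2 (1 + u) with u^2 = 1 is a nonzero zero divisor.
p = {(): 0.5, (1,): 0.5}
print(gp(p, p, sig))                          # idempotent: equals p
one_minus_u = {(): 1.0, (1,): -1.0}
print(gp(p, one_minus_u, sig))                # {}: the zero multivector
```

The same blade-reordering scheme extends to any signature by changing `sig`, which is how the degenerate case (entries of 0) would also be handled.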
=== Geometric product ===
For vectors {{tmath|1= a }} and {{tmath|1= b }}, we may write the geometric product of any two vectors {{tmath|1= a }} and {{tmath|1= b }} as the sum of a symmetric product and an antisymmetric product:
: ab = \frac{1}{2} (ab + ba) + \frac{1}{2} (ab - ba) .
Thus we can define the inner product of vectors as
: a \cdot b := g(a,b),
so that the symmetric product can be written as
: \frac{1}{2}(ab + ba) = \frac{1}{2} \left((a + b)^2 - a^2 - b^2\right) = a \cdot b .
Conversely, {{tmath|1= g }} is completely determined by the algebra. The antisymmetric part is the exterior product of the two vectors, the product of the contained
exterior algebra:
: a \wedge b := \frac{1}{2}(ab - ba) = -(b \wedge a) .
Then by simple addition:
: ab=a \cdot b + a \wedge b
the ungeneralized or vector form of the geometric product. The inner and exterior products are associated with familiar concepts from standard vector algebra. Geometrically, {{tmath|1= a }} and {{tmath|1= b }} are
parallel if their geometric product is equal to their inner product, whereas a and b are
perpendicular if their geometric product is equal to their exterior product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the
dot product of standard vector algebra. The exterior product of two vectors can be identified with the
signed area enclosed by a
parallelogram the sides of which are the vectors. The
cross product of two vectors in 3 dimensions with positive-definite quadratic form is closely related to their exterior product. Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully
degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras. The exterior product is naturally extended as an associative bilinear binary operator between any two elements of the algebra, satisfying the identities : \begin{align} 1 \wedge a_i &= a_i \wedge 1 = a_i \\ a_1 \wedge a_2\wedge\cdots\wedge a_r &= \frac{1}{r!}\sum_{\sigma\in\mathfrak{S}_r} \operatorname{sgn}(\sigma) a_{\sigma(1)}a_{\sigma(2)} \cdots a_{\sigma(r)}, \end{align} where the sum is over all permutations of the indices, with \operatorname{sgn}(\sigma) the
sign of the permutation, and a_i are vectors (not general elements of the algebra). Since every element of the algebra can be expressed as the sum of products of this form, this defines the exterior product for every pair of elements of the algebra. It follows from the definition that the exterior product forms an
alternating algebra. The equivalent structure equation for Clifford algebra is : a_1 a_2 a_3 \dots a_n = \sum^{[\frac{n}2]}_{i=0} \sum_{\mu\in{}\mathcal{C}} (-1)^k \operatorname{Pf}(a_{\mu_1}\cdot a_{\mu_2},\dots,a_{\mu_{2i-1}} \cdot a_{\mu_{2i}}) a_{\mu_{2i+1}}\land\dots\land a_{\mu_n} where \operatorname{Pf}(A) is the
Pfaffian of {{tmath|1= A }} and {{tmath|1= \mathcal{C} = \binom{n}{2i} }} provides combinations, {{tmath|1= \mu }}, of {{tmath|1= n }} indices divided into {{tmath|1= 2i }} and {{tmath|1= n-2i }} parts, and {{tmath|1= k }} is the parity of the combination. The Pfaffian provides a metric for the exterior algebra and, as pointed out by Claude Chevalley, Clifford algebra reduces to the exterior algebra with a zero quadratic form. The role the Pfaffian plays can be understood from a geometric viewpoint by developing Clifford algebra from
simplices. This derivation provides a better connection between
Pascal's triangle and
simplices because it provides an interpretation of the first column of ones.
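The polarization step in the derivation above, {{tmath|1= \tfrac{1}{2}(ab + ba) = \tfrac{1}{2}((a+b)^2 - a^2 - b^2) = a \cdot b }}, involves only squares of vectors, so it can be spot-checked with ordinary dot products; the antisymmetric part corresponds to the signed areas {{tmath|1= a_i b_j - a_j b_i }} on the coordinate planes. A small numerical sketch (variable names arbitrary):

```python
import numpy as np

# Numerical spot-check in R^3, where a vector's square a^2 equals the
# ordinary dot product a . a.
rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# Symmetric part: 1/2 (ab + ba) = 1/2 ((a+b)^2 - a^2 - b^2) = a . b
sym = 0.5 * ((a + b) @ (a + b) - a @ a - b @ b)
assert np.isclose(sym, a @ b)

# Antisymmetric part: the bivector a ^ b has components a_i b_j - a_j b_i,
# the signed areas of the parallelogram projected onto coordinate planes.
wedge = np.outer(a, b) - np.outer(b, a)
assert np.allclose(wedge, -wedge.T)
print("ab splits into a . b plus a ^ b")
```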
=== Blades, grades, and basis ===
A multivector that is the exterior product of {{tmath|1= r }} linearly independent vectors is called a blade, and is said to be of grade {{tmath|1= r }}.{{efn|Grade is a synonym for degree of a homogeneous element under the grading as an algebra with the exterior product (a {{tmath|1= \mathrm{Z} }}-grading), and not under the geometric product.}} A multivector that is the sum of blades of grade {{tmath|1= r }} is called a (homogeneous) multivector of grade {{tmath|1= r }}. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades.

Consider a set of {{tmath|1= r }} linearly independent vectors {{tmath|1= \{a_1,\ldots,a_r\} }} spanning an {{tmath|1= r }}-dimensional subspace of the vector space. With these, we can define a real
symmetric matrix (in the same way as a
Gramian matrix) : [\mathbf{A}]_{ij} = a_i \cdot a_j By the
spectral theorem, {{tmath|1= \mathbf{A} }} can be diagonalized to a diagonal matrix {{tmath|1= \mathbf{D} }} by an
orthogonal matrix \mathbf{O} via : \sum_{k,l}[\mathbf{O}]_{ik}[\mathbf{A}]_{kl}[\mathbf{O}^{\mathrm{T}}]_{lj}=\sum_{k,l}[\mathbf{O}]_{ik}[\mathbf{O}]_{jl}[\mathbf{A}]_{kl}=[\mathbf{D}]_{ij} Define a new set of vectors {{tmath|1= \{e_1, \ldots,e_r\} }}, known as orthogonal basis vectors, to be those transformed by the orthogonal matrix: : e_i=\sum_j[\mathbf{O}]_{ij}a_j Since orthogonal transformations preserve inner products, it follows that e_i\cdot e_j=[\mathbf{D}]_{ij} and thus the \{e_1, \ldots, e_r\} are perpendicular. In other words, the geometric product of two distinct vectors e_i \ne e_j is completely specified by their exterior product, or more generally : \begin{array}{rl} e_1e_2\cdots e_r &= e_1 \wedge e_2 \wedge \cdots \wedge e_r \\ &= \left(\sum_j [\mathbf{O}]_{1j}a_j\right) \wedge \left(\sum_j [\mathbf{O}]_{2j}a_j \right) \wedge \cdots \wedge \left(\sum_j [\mathbf{O}]_{rj}a_j\right) \\ &= (\det \mathbf{O}) a_1 \wedge a_2 \wedge \cdots \wedge a_r \end{array} Therefore, every blade of grade r can be written as the exterior product of r vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a
block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are
normalized according to
: \widehat{e_i}=\frac{1}{\sqrt{|e_i \cdot e_i|}}e_i,
then these normalized vectors must square to {{tmath|1= +1 }} or {{tmath|1= -1 }}. By Sylvester's law of inertia, the total number of {{tmath|1= +1 }}s and the total number of {{tmath|1= -1 }}s along the diagonal matrix is invariant. By extension, the total number {{tmath|1= p }} of these vectors that square to {{tmath|1= +1 }} and the total number {{tmath|1= q }} that square to {{tmath|1= -1 }} is invariant. (The total number of basis vectors that square to zero is also invariant, and may be nonzero if the degenerate case is allowed.) We denote this algebra {{tmath|1= \mathcal{G}(p,q) }}. For example, {{tmath|1= \mathcal{G}(3,0) }} models three-dimensional
Euclidean space, \mathcal{G}(1,3) relativistic
spacetime and \mathcal{G}(4,1) a
conformal geometric algebra of a three-dimensional space. The set of all possible products of n orthogonal basis vectors with indices in increasing order, including 1 as the
empty product, forms a basis for the entire geometric algebra (an analogue of the
PBW theorem). For example, the following is a basis for the geometric algebra {{tmath|1= \mathcal{G}(3,0) }}: : \{1, e_1, e_2, e_3, e_1e_2, e_2e_3, e_3e_1, e_1e_2e_3\} A basis formed this way is called a
standard basis for the geometric algebra, and any other orthogonal basis for {{tmath|1= V }} will produce another standard basis. Each standard basis consists of {{tmath|1= 2^n }} elements. Every multivector of the geometric algebra can be expressed as a linear combination of the standard basis elements. If the standard basis elements are {{tmath|1= \{ B_i \mid i \in S \} }} with {{tmath|1= S }} being an index set, then the geometric product of any two multivectors is
: \left( \sum_i \alpha_i B_i \right) \left( \sum_j \beta_j B_j \right) = \sum_{i,j} \alpha_i\beta_j B_i B_j .
The terminology "{{tmath|1= k }}-vector" is often encountered to describe multivectors containing elements of only one grade. In higher dimensional space, some such multivectors are not blades (cannot be factored into the exterior product of {{tmath|1= k }} vectors). By way of example, {{tmath|1= e_1 \wedge e_2 + e_3 \wedge e_4 }} in {{tmath|1= \mathcal{G}(4,0) }} cannot be factored; typically, however, such elements of the algebra do not yield to geometric interpretation as objects, although they may represent geometric quantities such as rotations. Only {{tmath|1= 0 }}-, {{tmath|1= 1 }}-, {{tmath|1= (n-1) }}- and {{tmath|1= n }}-vectors are always blades in {{tmath|1= n }}-space.
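The standard basis just described is easy to enumerate programmatically; a brief sketch (the tuple encoding of basis blades is an arbitrary choice) confirming the {{tmath|1= 2^n }} count, with {{tmath|1= \binom{n}{k} }} blades in each grade {{tmath|1= k }}:

```python
from itertools import combinations
from math import comb

n = 3
# Each standard basis element is a product of distinct orthogonal basis
# vectors with indices in increasing order; () stands for the empty product 1.
basis = [b for k in range(n + 1) for b in combinations(range(1, n + 1), k)]

print(basis)   # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
assert len(basis) == 2 ** n
by_grade = [sum(1 for b in basis if len(b) == k) for k in range(n + 1)]
assert by_grade == [comb(n, k) for k in range(n + 1)]   # 1, 3, 3, 1
```

(The listing above writes the grade-2 element {{tmath|1= e_3e_1 }} as {{tmath|1= e_1e_3 = -e_3e_1 }}; either sign convention spans the same basis.)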
=== Versor ===
A {{tmath|1= k }}-versor is a multivector that can be expressed as the geometric product of {{tmath|1= k }} invertible vectors. Unit quaternions (originally called versors by Hamilton) may be identified with rotors in 3D space in much the same way as real 2D rotors subsume complex numbers; for the details refer to Dorst.

Some authors use the term "versor product" to refer to the frequently occurring case where an operand is "sandwiched" between operators. The descriptions for rotations and reflections, including their outermorphisms, are examples of such sandwiching. These outermorphisms have a particularly simple algebraic form. Specifically, a mapping of vectors of the form
: V \to V : a \mapsto RaR^{-1}
extends to the outermorphism {{tmath|1= \mathcal{G}(V) \to \mathcal{G}(V) : A \mapsto RAR^{-1} }}. Since both operators and operand are versors there is potential for alternative examples such as rotating a rotor or reflecting a spinor, always provided that some geometrical or physical significance can be attached to such operations. By the
Cartan–Dieudonné theorem, every isometry can be expressed as a composition of reflections in hyperplanes; since composed reflections yield rotations, it follows that orthogonal transformations are versors. In group terms, for a real, non-degenerate {{tmath|1= \mathcal{G}(p,q) }}, having identified the group {{tmath|1= \mathcal{G}^\times }} as the group of all invertible elements of {{tmath|1= \mathcal{G} }}, Lundholm gives a proof that the "versor group" {{tmath|1= \{ v_1 v_2 \cdots v_k \in \mathcal{G} \mid v_i \in V^\times\} }} (the set of invertible versors) is equal to the Lipschitz group {{tmath|1= \Gamma }} (also known as the Clifford group, although Lundholm deprecates this usage).
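The sandwich {{tmath|1= a \mapsto RaR^{-1} }} can be illustrated with a rotor in the plane {{tmath|1= e_1 e_2 }}. The sketch below (a dict encoding of multivectors as index-tuple-to-coefficient maps; the encoding and helper names are invented here, not a library API) uses the rotor {{tmath|1= R = \cos(\theta/2) - \sin(\theta/2)\, e_1 e_2 }}, whose inverse is its reverse, to rotate {{tmath|1= e_1 }} by {{tmath|1= 90^\circ }}:

```python
import math

# Sandwich product a -> R a R^{-1} in G(2,0). Multivectors are {blade: coeff}
# dicts with blades as increasing index tuples; names ad hoc.

def blade_mul(a, b, sig):
    sign, out = 1, list(a)
    for x in b:
        pos = len(out)
        while pos > 0 and out[pos - 1] > x:
            pos -= 1
            sign = -sign
        if pos > 0 and out[pos - 1] == x:
            sign *= sig[x]
            out.pop(pos - 1)
        else:
            out.insert(pos, x)
    return sign, tuple(out)

def gp(A, B, sig):
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sig)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {b: c for b, c in C.items() if abs(c) > 1e-12}

sig = {1: 1, 2: 1}
theta = math.pi / 2
# Rotor for rotation by theta in the e1 e2 plane; its inverse is its reverse.
R     = {(): math.cos(theta / 2), (1, 2): -math.sin(theta / 2)}
R_inv = {(): math.cos(theta / 2), (1, 2): +math.sin(theta / 2)}

a = {(1,): 1.0}                                # the vector e1
rotated = gp(gp(R, a, sig), R_inv, sig)
print(rotated)                                 # e1 rotated by 90 degrees: e2
```

Note the half-angle in the rotor: the sandwich applies it twice, giving a rotation by {{tmath|1= \theta }}.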
=== Subgroups of the Lipschitz group ===
We denote the grade involution as {{tmath|1= \widehat{S} }} and reversion as {{tmath|1= \widetilde{S} }}. Although the Lipschitz group (defined as {{tmath|1= \{ S \in \mathcal{G}^{\times} \mid \widehat{S} V S^{-1} \subseteq V \} }}) and the versor group (defined as {{tmath|1= \textstyle \{ \prod_{i=0}^{k} v_i \mid v_i \in V^{\times}, k \in \N \} }}) have divergent definitions, they are the same group. Lundholm defines the {{tmath|1= \operatorname{Pin} }}, {{tmath|1= \operatorname{Spin} }}, and {{tmath|1= \operatorname{Spin}^{+} }} subgroups of the Lipschitz group. Multiple analyses of spinors use GA as a representation.
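On a blade of grade {{tmath|1= k }}, the grade involution multiplies by {{tmath|1= (-1)^k }} and reversion by {{tmath|1= (-1)^{k(k-1)/2} }}, so both are one-liners on a dictionary encoding of multivectors (a sketch; the encoding and names are invented here):

```python
# Grade involution and reversion on an ad hoc dict encoding of multivectors
# (blade = tuple of basis indices, value = coefficient).

def grade_involution(A):
    """Negate the odd-grade parts: a grade-k blade picks up (-1)^k."""
    return {b: c * (-1) ** len(b) for b, c in A.items()}

def reversion(A):
    """Reverse the vector factors of each blade: sign (-1)^(k(k-1)/2)."""
    return {b: c * (-1) ** (len(b) * (len(b) - 1) // 2) for b, c in A.items()}

A = {(): 1.0, (1,): 1.0, (1, 2): 1.0, (1, 2, 3): 1.0, (1, 2, 3, 4): 1.0}
print(grade_involution(A))  # signs by grade 0..4: +, -, +, -, +
print(reversion(A))         # signs by grade 0..4: +, +, -, -, +
```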
=== Grade projection ===
A {{tmath|1= \mathrm{Z} }}-
graded vector space structure can be established on a geometric algebra by use of the exterior product that is naturally induced by the geometric product. Since the geometric product and the exterior product are equal on orthogonal vectors, this grading can be conveniently constructed by using an orthogonal basis {{tmath|1= \{e_1,\ldots,e_n\} }}. Elements of the geometric algebra that are scalar multiples of 1 are of grade 0 and are called
scalars. Elements that are in the span of {{tmath|1= \{e_1,\ldots,e_n\} }} are of grade {{tmath|1= 1 }} and are the ordinary vectors. Elements in the span of {{tmath|1= \{e_ie_j\mid 1\leq i<j\leq n\} }} are of grade {{tmath|1= 2 }} and are the bivectors. This terminology continues through to the last grade of {{tmath|1= n }}-vectors. Alternatively, {{tmath|1= n }}-vectors are called pseudoscalars, {{tmath|1= (n-1) }}-vectors are called pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of
mixed grade. The grading of multivectors is independent of the basis chosen originally. This is a grading as a vector space, but not as an algebra. Because the product of an {{tmath|1= r }}-blade and an {{tmath|1= s }}-blade is contained in the span of {{tmath|1= 0 }} through {{tmath|1= (r+s) }}-blades, the geometric algebra is a
filtered algebra. A multivector A may be decomposed with the
grade-projection operator {{tmath|1= \langle A \rangle _r }}, which outputs the grade-{{tmath|1= r }} portion of {{tmath|1= A }}. As a result:
: A = \sum_{r=0}^{n} \langle A \rangle _r
As an example, the geometric product of two vectors {{tmath|1= a }} and {{tmath|1= b }} is {{tmath|1= ab = a \cdot b + a \wedge b = \langle ab \rangle_0 + \langle ab \rangle_2 }} since {{tmath|1= \langle ab \rangle_0 = a\cdot b }} and {{tmath|1= \langle ab \rangle_2 = a\wedge b }}, and {{tmath|1= \langle ab \rangle_i = 0 }} for {{tmath|1= i }} other than {{tmath|1= 0 }} and {{tmath|1= 2 }}. A multivector {{tmath|1= A }} may also be decomposed into even and odd components, which may respectively be expressed as the sum of the even and the sum of the odd grade components above:
: A^{[0]} = \langle A \rangle _0 + \langle A \rangle _2 + \langle A \rangle _4 + \cdots
: A^{[1]} = \langle A \rangle _1 + \langle A \rangle _3 + \langle A \rangle _5 + \cdots
This is the result of forgetting structure from a {{tmath|1= \mathrm{Z} }}-
graded vector space to {{tmath|1= \mathrm{Z}_2 }}-
graded vector space. The geometric product respects this coarser grading. Thus in addition to being a {{tmath|1= \mathrm{Z}_2 }}-
graded vector space, the geometric algebra is a {{tmath|1= \mathrm{Z}_2 }}-
graded algebra, a
superalgebra. Restricting to the even part, the product of two even elements is also even. This means that the even multivectors define an even subalgebra. The even subalgebra of an {{tmath|1= n }}-dimensional geometric algebra is
algebra-isomorphic (without preserving either filtration or grading) to a full geometric algebra of (n-1) dimensions. Examples include \mathcal{G}^{[0]}(2,0) \cong \mathcal{G}(0,1) and {{tmath|1= \mathcal{G}^{[0]}(1,3) \cong \mathcal{G}(3,0) }}.
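Grade projection and the even/odd split are straightforward on a dictionary encoding of multivectors (index tuples to coefficients, an ad hoc scheme): {{tmath|1= \langle A \rangle_r }} keeps the blades of length {{tmath|1= r }}, and the parity of the tuple length gives the {{tmath|1= \mathrm{Z}_2 }} components. A sketch:

```python
# Grade projection <A>_r on a dict encoding of multivectors: keep only the
# blades (index tuples) of length r. Encoding and names are ad hoc.

def grade_part(A, r):
    return {b: c for b, c in A.items() if len(b) == r}

A = {(): 2.0, (1,): 1.0, (2,): -1.0, (1, 2): 3.0, (1, 2, 3): 0.5}

assert grade_part(A, 2) == {(1, 2): 3.0}

# The full multivector is the sum of its grade parts ...
n = 3
parts = [grade_part(A, r) for r in range(n + 1)]
assert {b: c for p in parts for b, c in p.items()} == A

# ... and splits into even and odd components A^[0] and A^[1].
even = {b: c for b, c in A.items() if len(b) % 2 == 0}
odd  = {b: c for b, c in A.items() if len(b) % 2 == 1}
print(even)   # scalar and bivector parts
print(odd)    # vector and trivector parts
```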
=== Representation of subspaces ===
Geometric algebra represents subspaces of {{tmath|1= V }} as blades, and so they coexist in the same algebra with vectors from {{tmath|1= V }}. A {{tmath|1= k }}-dimensional subspace {{tmath|1= W }} of {{tmath|1= V }} is represented by taking an orthogonal basis {{tmath|1= \{b_1,b_2,\ldots, b_k\} }} and using the geometric product to form the blade {{tmath|1= D = b_1 b_2 \cdots b_k }}. There are multiple blades representing {{tmath|1= W }}; all those representing {{tmath|1= W }} are scalar multiples of {{tmath|1= D }}. These blades can be separated into two sets: positive multiples of {{tmath|1= D }} and negative multiples of {{tmath|1= D }}. The positive multiples of {{tmath|1= D }} are said to have the same orientation as {{tmath|1= D }}, and the negative multiples the opposite orientation.

Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the exterior product that (the restricted class of) {{tmath|1= k }}-blades provide but that (the generalized class of) grade-{{tmath|1= k }} multivectors do not when {{tmath|1= 2 \le k \le n-2 }}.
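One way to compute the blade of a subspace numerically is through its components in a coordinate blade basis, which are the {{tmath|1= k \times k }} minors (Plücker coordinates) of the matrix of spanning vectors. The sketch below (function name invented here) shows that two bases of the same plane give proportional blades, with the sign of the factor comparing orientations:

```python
import numpy as np
from itertools import combinations

# Sketch: the blade b1 ^ ... ^ bk representing a subspace, expanded on the
# coordinate k-blades, has as components the k x k minors of the matrix
# whose rows are the b_i (its Pluecker coordinates). Name ad hoc.

def blade_coords(vectors):
    M = np.array(vectors, dtype=float)
    k, n = M.shape
    return {c: np.linalg.det(M[:, c]) for c in combinations(range(n), k)}

# Three bases of the same plane in R^3:
D1 = blade_coords([[1, 0, 0], [0, 1, 0]])
D2 = blade_coords([[2, 0, 0], [0, 1, 0]])   # same orientation as D1
D3 = blade_coords([[0, 1, 0], [1, 0, 0]])   # opposite orientation

ratio = [D2[c] / D1[c] for c in D1 if abs(D1[c]) > 1e-12]
print(ratio)       # [2.0]: D2 is a positive multiple of D1
print(D3[(0, 1)])  # -1.0: D3 is a negative multiple of D1
```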
=== Unit pseudoscalars ===
Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace {{tmath|1= W }} of {{tmath|1= V }} is a blade that is the product of the members of an orthonormal basis for {{tmath|1= W }}. It can be shown that if {{tmath|1= I }} and {{tmath|1= I' }} are both unit pseudoscalars for {{tmath|1= W }}, then {{tmath|1= I = \pm I' }} and {{tmath|1= I^2 = \pm 1 }}. If one doesn't choose an orthonormal basis for {{tmath|1= W }}, then the Plücker embedding gives a vector in the exterior algebra but only up to scaling. Using the vector space isomorphism between the geometric algebra and exterior algebra, this gives the equivalence class of {{tmath|1= \alpha I }} for all {{tmath|1= \alpha \neq 0 }}. Orthonormality gets rid of this ambiguity except for the signs above.

Suppose the geometric algebra {{tmath|1= \mathcal{G}(n,0) }} with the familiar positive definite inner product on {{tmath|1= \R^n }} is formed. Given a plane (two-dimensional subspace) of {{tmath|1= \R^n }}, one can find an orthonormal basis {{tmath|1= \{ b_1, b_2 \} }} spanning the plane, and thus find a unit pseudoscalar {{tmath|1= I = b_1 b_2 }} representing this plane. The geometric product of any two vectors in the span of {{tmath|1= b_1 }} and {{tmath|1= b_2 }} lies in {{tmath|1= \{ \alpha_0 + \alpha_1 I \mid \alpha_i \in \R \} }}, that is, it is the sum of a {{tmath|1= 0 }}-vector and a {{tmath|1= 2 }}-vector. By the properties of the geometric product, {{tmath|1= I^2 = -1 }}. The resemblance to the
imaginary unit is not incidental: the subspace {{tmath|1= \{ \alpha_0 + \alpha_1 I \mid \alpha_i \in \R \} }} is {{tmath|1= \R }}-algebra isomorphic to the
complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each two-dimensional subspace of {{tmath|1= V }} on which the quadratic form is definite. It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to {{tmath|1= -1 }}, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.

In {{tmath|1= \mathcal{G}(3,0) }}, a further familiar case occurs. Given a standard basis consisting of orthonormal vectors {{tmath|1= e_i }} of {{tmath|1= \R^3 }}, the set of
all {{tmath|1= 2 }}-vectors is spanned by
: \{ e_3 e_2 , e_1 e_3 , e_2 e_1 \} .
Labelling these {{tmath|1= i }}, {{tmath|1= j }} and {{tmath|1= k }} (momentarily deviating from our uppercase convention), the subspace generated by {{tmath|1= 0 }}-vectors and {{tmath|1= 2 }}-vectors is exactly {{tmath|1= \{ \alpha_0 + i \alpha_1 + j \alpha_2 + k \alpha_3 \mid \alpha_i \in \R\} }}. This set is seen to be the even subalgebra of {{tmath|1= \mathcal{G}(3,0) }}, and furthermore is isomorphic as an {{tmath|1= \R }}-algebra to the
quaternions, another important algebraic system.
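Both embeddings can be checked mechanically. The sketch below (the dict encoding of multivectors and helper names are invented here, not a library API) verifies in {{tmath|1= \mathcal{G}(3,0) }} that a plane's unit pseudoscalar squares to {{tmath|1= -1 }} and that {{tmath|1= i = e_3 e_2 }}, {{tmath|1= j = e_1 e_3 }}, {{tmath|1= k = e_2 e_1 }} obey Hamilton's relations:

```python
# Complex and quaternion subalgebras inside G(3,0), using an ad hoc dict
# encoding of multivectors (index tuples -> coefficients).

def blade_mul(a, b, sig):
    sign, out = 1, list(a)
    for x in b:
        pos = len(out)
        while pos > 0 and out[pos - 1] > x:
            pos -= 1
            sign = -sign
        if pos > 0 and out[pos - 1] == x:
            sign *= sig[x]
            out.pop(pos - 1)
        else:
            out.insert(pos, x)
    return sign, tuple(out)

def gp(A, B, sig):
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sig)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {b: c for b, c in C.items() if c != 0}

sig = {1: 1, 2: 1, 3: 1}

# The unit pseudoscalar I = b1 b2 of a plane squares to -1 ...
I_plane = {(1, 2): 1.0}
assert gp(I_plane, I_plane, sig) == {(): -1.0}

# ... and i = e3 e2, j = e1 e3, k = e2 e1 satisfy Hamilton's relations.
i = {(2, 3): -1.0}       # e3 e2 written on the increasing-index blade e2 e3
j = {(1, 3): 1.0}        # e1 e3
k = {(1, 2): -1.0}       # e2 e1
minus_one = {(): -1.0}
for q in (i, j, k):
    assert gp(q, q, sig) == minus_one            # i^2 = j^2 = k^2 = -1
assert gp(i, j, sig) == k                        # ij = k
assert gp(gp(i, j, sig), k, sig) == minus_one    # ijk = -1
print("quaternion relations verified")
```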
=== Extensions of the inner and exterior products ===
It is common practice to extend the exterior product on vectors to the entire algebra of multivectors. This may be done through the use of the above-mentioned
grade projection operator: : A \wedge B := \sum_{r,s}\langle \langle A \rangle_r \langle B \rangle_s \rangle_{r+s} (the
exterior product) This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the exterior product is the commutator product: : A \times B := \tfrac{1}{2}(A B - B A) (the
commutator product)
The regressive product is the dual of the exterior product (respectively corresponding to the "meet" and "join" in this context). The dual specification of elements permits, for blades {{tmath|1= A }} and {{tmath|1= B }}, the intersection (or meet) where the duality is to be taken relative to a blade containing both {{tmath|1= A }} and {{tmath|1= B }} (the smallest such blade being the join).
: A_r \vee B_s := ((A_r I^{-1}) \wedge (B_s I^{-1}))I
with {{tmath|1= I }} the unit pseudoscalar of the algebra. The regressive product, like the exterior product, is associative.

The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol {{tmath|1= \cdot }} as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged.

The original extension by Hestenes defines the inner (or dot) product as the lowest grade of the geometric product, except that any scalar part of the factors will not contribute (the dot product of a multivector by a scalar is 0):
: A \cdot B := \sum_{r \ne 0,s \ne 0}\langle \langle A\rangle_r \langle B \rangle_{s} \rangle_{ \vert s-r \vert }
The original intention was to make the identity {{tmath|1= ab = a \cdot b + a \wedge b }} remain true for {{tmath|1= a }} and {{tmath|1= b }} scalars or vectors. Many authors don't use the restriction of {{tmath|1= r, s \ne 0 }} in the definition, which is equivalent to the "fat dot" product below. Among the other several different generalizations of the inner product on vectors are:
: A \mathbin{\rfloor} B := \sum_{r,s}\langle \langle A\rangle_r \langle B \rangle_{s} \rangle_{s-r} (the
left contraction) : A \mathbin{\lfloor} B := \sum_{r,s}\langle \langle A\rangle_r \langle B \rangle_{s} \rangle_{r-s} (the
right contraction) : A * B := \sum_{r,s}\langle \langle A \rangle_r \langle B \rangle_s \rangle_{0} (the
scalar product)
: A \bullet B := \sum_{r,s}\langle \langle A\rangle_r \langle B \rangle_{s} \rangle_{ \vert s-r \vert } (the "(fat) dot" product){{efn| This should not be confused with Hestenes's irregular generalization {{tmath|1= \textstyle A \bullet_\text{H} B := \sum_{r\ne0,s\ne0}\langle \langle A\rangle_r \langle B \rangle_{s} \rangle_{ \vert s-r \vert } }}, where the distinguishing notation is from }}
Dorst makes an argument for the use of contractions in preference to Hestenes's inner product; they are algebraically more regular and have cleaner geometric interpretations. A number of identities incorporating the contractions are valid without restriction of their inputs. For example,
: A \mathbin{\rfloor} B = ( A \wedge ( B I^{-1} ) ) I
: A \mathbin{\lfloor} B = I ( ( I^{-1} A) \wedge B )
: ( A \wedge B ) * C = A * ( B \mathbin{\rfloor} C )
: C * ( B \wedge A ) = ( C \mathbin{\lfloor} B ) * A
: A \mathbin{\rfloor} ( B \mathbin{\rfloor} C ) = ( A \wedge B ) \mathbin{\rfloor} C
: ( A \mathbin{\rfloor} B ) \mathbin{\lfloor} C = A \mathbin{\rfloor} ( B \mathbin{\lfloor} C ) .
Benefits of using the left contraction as an extension of the inner product on vectors include that the identity {{tmath|1= ab = a \cdot b + a \wedge b }} is extended to {{tmath|1= aB = a \mathbin{\rfloor} B + a \wedge B }} for any vector {{tmath|1= a }} and multivector {{tmath|1= B }}, and that the projection operation {{tmath|1= \mathcal{P}_b (a) = (a \cdot b^{-1})b }} is extended to {{tmath|1= \mathcal{P}_B (A) = (A \mathbin{\rfloor} B^{-1}) \mathbin{\rfloor} B }} for any blade {{tmath|1= B }} and any multivector {{tmath|1= A }} (with a minor modification to accommodate null {{tmath|1= B }}, given below).
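The grade-projection definitions above translate directly into code. The sketch below (an ad hoc dict encoding of multivectors; all helper names invented here) implements the exterior product and the left contraction from the geometric product and checks the identity {{tmath|1= aB = a \mathbin{\rfloor} B + a \wedge B }} for a vector {{tmath|1= a }} and a bivector {{tmath|1= B }}:

```python
# Extended products via grade projection in G(3,0). Multivectors are
# {blade: coeff} dicts with blades as increasing index tuples; names ad hoc.

def blade_mul(a, b, sig):
    sign, out = 1, list(a)
    for x in b:
        pos = len(out)
        while pos > 0 and out[pos - 1] > x:
            pos -= 1
            sign = -sign
        if pos > 0 and out[pos - 1] == x:
            sign *= sig[x]
            out.pop(pos - 1)
        else:
            out.insert(pos, x)
    return sign, tuple(out)

def gp(A, B, sig):
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = blade_mul(ba, bb, sig)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {b: c for b, c in C.items() if abs(c) > 1e-12}

def add(A, B):
    C = dict(A)
    for b, c in B.items():
        C[b] = C.get(b, 0) + c
    return {b: c for b, c in C.items() if abs(c) > 1e-12}

def graded_product(A, B, sig, out_grade):
    """sum_{r,s} < <A>_r <B>_s >_{out_grade(r, s)}"""
    C = {}
    for br, cr in A.items():
        for bs, cs in B.items():
            target = out_grade(len(br), len(bs))
            for b, c in gp({br: cr}, {bs: cs}, sig).items():
                if len(b) == target:
                    C[b] = C.get(b, 0) + c
    return {b: c for b, c in C.items() if abs(c) > 1e-12}

def wedge(A, B, sig):      # A ^ B: keep grade r + s
    return graded_product(A, B, sig, lambda r, s: r + s)

def lcontract(A, B, sig):  # A _| B: keep grade s - r
    return graded_product(A, B, sig, lambda r, s: s - r)

sig = {1: 1, 2: 1, 3: 1}
a = {(1,): 2.0, (3,): 1.0}           # a vector
B = {(1, 2): 1.0, (2, 3): -3.0}      # a bivector

# aB = a _| B + a ^ B for any vector a and multivector B.
assert gp(a, B, sig) == add(lcontract(a, B, sig), wedge(a, B, sig))
print("aB = a _| B + a ^ B verified")
```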
=== Dual basis ===
Let {{tmath|1= \{ e_1 , \ldots , e_n \} }} be a basis of {{tmath|1= V }}, i.e. a set of {{tmath|1= n }} linearly independent vectors that span the {{tmath|1= n }}-dimensional vector space {{tmath|1= V }}. The basis that is dual to {{tmath|1= \{ e_1 , \ldots , e_n \} }} is the set of elements of the
dual vector space V^{*} that forms a
biorthogonal system with this basis, thus being the elements denoted \{ e^1 , \ldots , e^n \} satisfying : e^i \cdot e_j = \delta^i{}_j, where \delta is the
Kronecker delta. Given a nondegenerate quadratic form on {{tmath|1= V }}, {{tmath|1= V^{*} }} becomes naturally identified with {{tmath|1= V }}, and the dual basis may be regarded as elements of {{tmath|1= V }}, but are not in general the same set as the original basis.

Given further a GA of {{tmath|1= V }}, let
: I = e_1 \wedge \cdots \wedge e_n
be the pseudoscalar (which does not necessarily square to {{tmath|1= \pm 1 }}) formed from the basis {{tmath|1= \{ e_1 , \ldots , e_n \} }}. The dual basis vectors may be constructed as
: e^i=(-1)^{i-1}(e_1 \wedge \cdots \wedge \check{e}_i \wedge \cdots \wedge e_n) I^{-1},
where the {{tmath|1= \check{e}_i }} denotes that the {{tmath|1= i }}th basis vector is omitted from the product. A dual basis is also known as a reciprocal basis or reciprocal frame.

A major usage of a dual basis is to separate vectors into components. Given a vector {{tmath|1= a }}, scalar components {{tmath|1= a^i }} can be defined as
: a^i=a\cdot e^i\ ,
in terms of which {{tmath|1= a }} can be separated into vector components as
: a=\sum_i a^i e_i\ .
We can also define scalar components {{tmath|1= a_i }} as
: a_i=a\cdot e_i\ ,
in terms of which {{tmath|1= a }} can be separated into vector components in terms of the dual basis as
: a=\sum_i a_i e^i\ .

A dual basis as defined above for the vector subspace of a geometric algebra can be extended to cover the entire algebra. For compactness, we'll use a single capital letter to represent an ordered set of vector indices. I.e., writing
: J=(j_1,\dots ,j_n)\ ,
where {{tmath|1= j_1 < j_2 < \cdots < j_n }}, we can write a basis blade as
: e_J=e_{j_1}\wedge e_{j_2}\wedge\cdots\wedge e_{j_n}\ .
The corresponding reciprocal blade has the indices in opposite order:
: e^J=e^{j_n}\wedge\cdots \wedge e^{j_2}\wedge e^{j_1}\ .
Similar to the case above with vectors, it can be shown that
: e^J * e_K=\delta^J_K\ ,
where {{tmath|1= * }} is the scalar product. With {{tmath|1= A }} a multivector, we can define scalar components as
: A^{ij\cdots k}=(e^k\wedge\cdots\wedge e^j\wedge e^i)*A\ ,
in terms of which {{tmath|1= A }} can be separated into component blades as
: A=\sum_{i<j<\cdots<k} A^{ij\cdots k}\, e_i\wedge e_j\wedge\cdots\wedge e_k\ .
We can alternatively define scalar components
: A_{ij\cdots k}=(e_k\wedge\cdots\wedge e_j\wedge e_i)*A\ ,
in terms of which {{tmath|1= A }} can be separated into component blades as
: A=\sum_{i<j<\cdots<k} A_{ij\cdots k}\, e^i\wedge e^j\wedge\cdots\wedge e^k\ .
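For a Euclidean inner product the reciprocal frame can be computed by plain linear algebra: if the rows of a matrix are the basis vectors {{tmath|1= e_i }}, the rows of its inverse-transpose are the {{tmath|1= e^i }}, since biorthogonality {{tmath|1= e^i \cdot e_j = \delta^i{}_j }} is exactly a matrix identity. A numerical sketch (matrix names arbitrary):

```python
import numpy as np

# Reciprocal frame for a Euclidean inner product: rows of inv(E).T are the
# e^i when rows of E are the e_i, since then R @ E.T = identity.

E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])      # rows: e_1, e_2, e_3 (not orthonormal)
R = np.linalg.inv(E).T               # rows: e^1, e^2, e^3

assert np.allclose(R @ E.T, np.eye(3))   # e^i . e_j = delta^i_j

a = np.array([1.0, 2.0, 3.0])
upper = R @ a                        # components a^i = a . e^i
lower = E @ a                        # components a_i = a . e_i
assert np.allclose(upper @ E, a)     # a = sum_i a^i e_i
assert np.allclose(lower @ R, a)     # a = sum_i a_i e^i
print(upper, lower)
```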
=== Linear functions ===
Although a versor is easier to work with because it can be directly represented in the algebra as a multivector, versors are a subgroup of
linear functions on multivectors, which can still be used when necessary. The geometric algebra of an -dimensional vector space is spanned by a basis of 2^n elements. If a multivector is represented by a 2^n \times 1 real
column matrix of coefficients of a basis of the algebra, then all linear transformations of the multivector can be expressed as the
matrix multiplication by a 2^n \times 2^n real matrix. However, such a general linear transformation allows arbitrary exchanges among grades, such as a "rotation" of a scalar into a vector, which has no evident geometric interpretation. A general linear transformation from vectors to vectors is of interest. With the natural restriction to preserving the induced exterior algebra, the
outermorphism of the linear transformation is the unique{{efn|The condition that \underline{\mathsf{f}}(1) = 1 is usually added to ensure that the
zero map is unique.}} extension of the versor. If {{tmath|1= f }} is a linear function that maps vectors to vectors, then its outermorphism is the function that obeys the rule
: \underline{\mathsf{f}}(a_1 \wedge a_2 \wedge \cdots \wedge a_r) = f(a_1) \wedge f(a_2) \wedge \cdots \wedge f(a_r)
for an {{tmath|1= r }}-blade, extended to the whole algebra through linearity.

== Modeling geometries ==