Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general
fields or even
rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is
tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realized as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers. Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions, matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields sometimes called matrix fields. (However, the term "matrix field" is ambiguous, as it also refers to certain forms of physical
fields that continuously map points of some space to matrices.) In general, matrices over any ring and their
multiplication can be represented as the arrows and composition of arrows in a
category, the
category of matrices over that ring. The objects of this category are natural numbers, representing the dimensions of the matrices.
=== Matrices with entries in a field or ring ===

This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any
field, that is, a
set where
addition,
subtraction,
multiplication, and
division operations are defined and well-behaved, may be used instead of \R or \C, for example
rational numbers or
finite fields. For example,
coding theory makes use of matrices over finite fields. Wherever
eigenvalues are considered, as these are roots of a polynomial, they may exist only in a larger field than that of the entries of the matrix. For instance, they may be complex in the case of a matrix with real entries. The possibility of reinterpreting the entries of a matrix as elements of a larger field (for example, viewing a real matrix as a complex matrix whose entries happen to all be real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively, one can consider only matrices with entries in an algebraically closed field, such as \C, from the outset. Matrices whose entries are
polynomials, and more generally, matrices with entries in a
ring R are widely used in mathematics. Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set \operatorname{Mat}(n;R) (also denoted M_n(R)) of all square n-by-n matrices over R is a ring called the matrix ring, isomorphic to the endomorphism ring of the left R-module R^n. If the ring R is commutative, that is, its multiplication is commutative, then the ring \operatorname{Mat}(n;R) is also an associative algebra over R. The
determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalizing the situation over a field, where every nonzero element is invertible. Matrices over
superrings are called
supermatrices. Matrices do not always have all their entries in the same ring, or even in any ring at all. One special but common case is
block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any
ring; but in order to multiply them, their sizes must fulfill certain conditions: each pair of submatrices that are multiplied in forming the overall product must have compatible sizes.
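The invertibility criterion over a commutative ring can be made concrete for R = \Z/n\Z: the determinant is computed by the Leibniz formula, and the matrix is invertible exactly when that determinant is a unit, i.e. coprime to n. A minimal Python sketch (function names are illustrative):

```python
from itertools import permutations
from math import gcd

def sign(perm):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_mod(M, n):
    """Leibniz-formula determinant of a square matrix, reduced mod n."""
    size = len(M)
    total = 0
    for perm in permutations(range(size)):
        term = sign(perm)
        for i in range(size):
            term *= M[i][perm[i]]
        total += term
    return total % n

# A matrix over Z/nZ is invertible iff its determinant is a unit,
# i.e. gcd(det, n) == 1.
M = [[1, 2], [3, 5]]
d = det_mod(M, 6)         # (1*5 - 2*3) mod 6 = 5
print(d, gcd(d, 6) == 1)  # 5 True: invertible over Z/6Z
print(det_mod([[2, 0], [0, 3]], 6))  # 0: not invertible over Z/6Z
```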
=== Relationship to linear maps ===

Linear maps \R^n \to \R^m are equivalent to m-by-n matrices, as described
above. More generally, any linear map f: V \to W between finite-dimensional vector spaces can be described by a matrix A = (a_{ij}), after choosing bases \mathbf{v}_1, \ldots, \mathbf{v}_n of V, and \mathbf{w}_1, \ldots, \mathbf{w}_m of W (so n is the dimension of V and m is the dimension of W), which is such that f(\mathbf{v}_j) = \sum_{i=1}^m a_{i,j} \mathbf{w}_i\qquad\mbox{for}\ j=1,\ldots,n. In other words, column j of A expresses the image of \mathbf{v}_j in terms of the basis vectors \mathbf{w}_i of W; thus this relation uniquely determines the entries of the matrix A. The matrix A depends on the choice of the bases: different choices of bases give rise to different, but
equivalent matrices. Many of the above concrete notions can be reinterpreted in this light; for example, the transpose matrix A^{\rm T} describes the transpose of the linear map given by A, with respect to the
dual bases. These properties can be restated more naturally: the
category of matrices with entries in a field k with multiplication as composition is
equivalent to the category of finite-dimensional
vector spaces and linear maps over this field. More generally, the set of m \times n matrices can be used to represent the R-linear maps between the free modules R^m and R^n for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n \times n matrices, representing the endomorphism ring of R^n.
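The rule that column j of the matrix records the image of the j-th basis vector can be sketched directly. The Python snippet below (with an arbitrarily chosen example map; all names are illustrative) builds the matrix of a linear map with respect to the standard bases and checks that applying the matrix agrees with applying the map:

```python
def matrix_of(f, n):
    """Matrix of a linear map f: R^n -> R^m w.r.t. the standard bases.
    Column j is f(e_j), the image of the j-th standard basis vector."""
    cols = [f([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    # transpose the list of columns into a list of rows
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def apply(A, v):
    """Matrix-vector product."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Example map (illustrative): f(x, y) = (x + 2y, 3y)
f = lambda v: [v[0] + 2 * v[1], 3 * v[1]]
A = matrix_of(f, 2)
print(A)                              # [[1, 2], [0, 3]]
print(apply(A, [4, 5]) == f([4, 5]))  # True
```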
=== Matrix groups ===

A
group is a mathematical structure consisting of a set of objects together with a
binary operation, that is, an operation combining any two objects into a third, subject to certain requirements. A group in which the objects are
invertible matrices and the group operation is matrix multiplication is called a
matrix group of degree n. Every such matrix group is a
subgroup of (that is, a smaller group contained within) the group of
all invertible n \times n matrices, the general linear group of degree n. Any property of square matrices that is preserved under matrix products and inverses can be used to define a matrix group. For example, the set of all n \times n matrices whose determinant is 1 forms a group called the special linear group of degree n. The set of
orthogonal matrices, determined by the condition \bold M^{\rm T} \bold M = \bold I, forms the orthogonal group. Every orthogonal matrix has determinant 1 or -1. Orthogonal matrices with determinant 1 form a group called the special orthogonal group. Every
finite group is
isomorphic to a matrix group, as one can see by considering the
regular representation of the
symmetric group. By means of representation theory, general groups can be studied through matrix groups, which are comparatively well understood.
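The defining condition \bold M^{\rm T} \bold M = \bold I and closure under multiplication can be checked numerically. A small Python sketch using 2-by-2 rotation matrices, which form the special orthogonal group SO(2) (helper names are illustrative):

```python
import math

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_orthogonal(M, tol=1e-12):
    """Check the defining condition M^T M = I up to rounding error."""
    P = matmul(transpose(M), M)
    n = len(M)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

R1, R2 = rotation(0.3), rotation(1.1)
print(is_orthogonal(R1))              # True
print(is_orthogonal(matmul(R1, R2)))  # True: closed under multiplication
# det of a rotation is cos^2 + sin^2 = 1, so rotations lie in SO(2)
```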
=== Infinite matrices ===

It is also possible to consider matrices with infinitely many rows and columns. The basic operations introduced above are defined the same way in this case. Matrix multiplication, however, and all operations stemming therefrom are only meaningful when restricted to certain matrices, since the sum featuring in the above definition of the matrix product would contain infinitely many summands. An easy way to circumvent this issue is to restrict to finitary matrices, all of whose rows (or columns) contain only finitely many nonzero terms. As in the finite case (see
above), where matrices describe linear maps, infinite matrices can be used to describe
operators on Hilbert spaces, where convergence and
continuity questions arise. However, the explicit point of view of matrices tends to obfuscate the matter, and the abstract and more powerful tools of
functional analysis are used instead, by relating matrices to linear maps (as in the finite case
above), but imposing additional convergence and continuity constraints.
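One way to realize finitary matrices concretely is to store only the nonzero entries; every sum in the product then has finitely many terms, so multiplication is always well-defined. A Python sketch (the dictionary representation is an illustrative choice):

```python
from collections import defaultdict

def sparse_matmul(A, B):
    """Product of two finitary matrices given as {(row, col): value} dicts
    with finitely many nonzero entries. Each entry of the product,
    C[i, j] = sum_k A[i, k] * B[k, j], is a finite sum."""
    B_rows = defaultdict(list)        # index B by its row coordinate
    for (k, j), w in B.items():
        B_rows[k].append((j, w))
    C = defaultdict(int)
    for (i, k), v in A.items():
        for j, w in B_rows.get(k, []):
            C[(i, j)] += v * w
    return {ij: x for ij, x in C.items() if x != 0}

# The shift operator S maps basis vector e_n to e_{n+1};
# here it is truncated to finite support for illustration.
S = {(n + 1, n): 1 for n in range(5)}
print(sparse_matmul(S, S))  # S^2 shifts by two: entries at (n+2, n)
```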
=== Empty matrix ===

An
empty matrix is a matrix in which the number of rows or columns (or both) is zero. Empty matrices can be a useful
base case for certain
recursive constructions, and can help to deal with maps involving the
zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is conventionally defined to be 1, consistent with the empty product occurring in the Leibniz formula for the determinant. This value is also needed for consistency with the base case of the Desnanot–Jacobi identity, which relates the determinant of a matrix to determinants of smaller matrices.
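These conventions can be checked mechanically: the product of a 3-by-0 and a 0-by-3 matrix is the 3-by-3 zero matrix (each entry is an empty sum), and the Leibniz formula applied to the 0-by-0 matrix yields the empty product, 1. A Python sketch (illustrative names; sizes are passed explicitly since empty lists do not determine them):

```python
from itertools import permutations

def matmul(A, B, n, p, m):
    """Multiply an n-by-p matrix by a p-by-m matrix (lists of rows).
    When p == 0, every entry is an empty sum, i.e. 0."""
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

def det(M):
    """Leibniz formula. For the 0-by-0 matrix the sum has exactly one
    term, the empty product, so the determinant is 1."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n)
                    for j in range(i + 1, n) if perm[i] > perm[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

A = [[], [], []]              # a 3-by-0 matrix
B = []                        # a 0-by-3 matrix
print(matmul(A, B, 3, 0, 3))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(det([]))                # 1
```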
=== Matrices with entries in a semiring ===

A
semiring is similar to a ring, but elements need not have
additive inverses, so subtraction is not generally possible there. The definition of addition and multiplication of matrices with entries in a ring applies to matrices with entries in a semiring without modification. Matrices of fixed size with entries in a semiring form a
commutative monoid \operatorname{Mat}(m,n;R) under addition. Square matrices of fixed size with entries in a semiring form a semiring \operatorname{Mat}(n;R) under addition and multiplication. The determinant of an n \times n matrix M with entries in a commutative semiring R cannot be defined in general, because the definition would involve additive inverses of semiring elements. What plays its role instead is the pair of positive and negative determinants

\det\nolimits_+ M = \sum_{\sigma\in\operatorname{Alt}(n)} M_{1\sigma(1)}\cdots M_{n\sigma(n)}

\det\nolimits_- M = \sum_{\sigma\in\operatorname{Sym}(n)\setminus\operatorname{Alt}(n)} M_{1\sigma(1)}\cdots M_{n\sigma(n)}

where the sums are taken over even permutations and odd permutations, respectively.
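The positive and negative determinants can be computed without any subtraction, as the following Python sketch over the natural numbers (a semiring without additive inverses) illustrates; names are illustrative:

```python
from itertools import permutations

def pos_neg_det(M):
    """Positive and negative determinants over a commutative semiring:
    det_+ sums the Leibniz terms over even permutations, det_- over odd
    ones. Only addition and multiplication are used."""
    n = len(M)
    det_pos, det_neg = 0, 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n)
                    for j in range(i + 1, n) if perm[i] > perm[j])
        term = 1
        for i in range(n):
            term *= M[i][perm[i]]
        if inv % 2 == 0:
            det_pos += term
        else:
            det_neg += term
    return det_pos, det_neg

M = [[1, 2], [3, 4]]
print(pos_neg_det(M))  # (4, 6); over the integers, det M = 4 - 6 = -2
```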
=== Matrices with entries in a category ===

Matrices and their multiplication can be defined with entries that are objects of a
category equipped with a "
tensor product" similar to multiplication in a ring, having
coproducts similar to addition in a ring, in that the former is
distributive over the latter. However, the multiplication thus defined may be associative only in a sense weaker than usual. These are part of a bigger structure called the
bicategory of matrices. For interested readers, a complete description of the above summary follows. Let (\mathcal C,\otimes,I) be a
monoidal category satisfying the following two conditions: • All (small)
coproducts exist; in particular, let \varnothing be an
initial object. • The functor \otimes is distributive over coproducts; i.e., for every object X and every family of objects (Y_i)_{i\in I} in \mathcal C, the canonical \mathcal C-morphisms \coprod_{i\in I}(X\otimes Y_i)\to X\otimes\coprod_{i\in I}Y_i \quad\text{and}\quad \coprod_{i\in I}(Y_i\otimes X)\to\left(\coprod_{i\in I}Y_i\right)\otimes X are
isomorphisms. In particular, the canonical morphisms \varnothing\to X\otimes\varnothing and \varnothing\to\varnothing\otimes X are isomorphisms. Then, the
bicategory of \mathcal C-matrices \operatorname{Mat}(\mathcal C) is as follows: • The objects are the sets. • A
1-morphism M\colon A\to B is a map M\colon A\times B\to\operatorname{Ob}(\mathcal C); this is just a matrix over \mathcal C. • The composition of 1-morphisms M\colon A\to B and N\colon B\to C, which can be understood as matrix multiplication, is (N\circ M)(a,c)=\coprod_{b\in B}M(a,b)\otimes N(b,c). • The identity 1-morphism on A is \operatorname{id}_A(a,b)=\begin{cases} I & a=b \\ \varnothing & a\ne b \end{cases}. • A 2-morphism between 1-morphisms M,N\colon A\to B is a family of \mathcal C-morphisms (f_{ab}\colon M(a,b)\to N(a,b))_{(a,b)\in A\times B}. The definition of vertical and horizontal composition of 2-morphisms is natural: the vertical composition is componentwise composition of \mathcal C-morphisms; the horizontal composition is that derived from the functoriality of \otimes and the
universal property of coproducts. In general, the bicategory of matrices need not be a strict
2-category. For example, the composition of 1-morphisms may not be associative in the usual strict sense, but only up to
coherent isomorphism.

== Applications ==