In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F.
Dimension of image Given the matrix A, there is an associated
linear mapping f : F^n \to F^m defined by f(x) = Ax. The rank of A is the dimension of the image of f. This definition has the advantage that it can be applied to any linear map without need for a specific matrix.
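As an illustrative sketch (the specific matrix and the use of NumPy are assumptions for illustration, not part of the original text), the rank of A can be computed as the dimension of the image of f(x) = Ax:

```python
import numpy as np

# A hypothetical 3x4 example matrix (m = 3, n = 4); its third row is
# the sum of the first two, so only two rows/columns are independent.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

# The image of f(x) = Ax is spanned by the columns of A, and its
# dimension is the rank; NumPy estimates it from the singular values.
rank = np.linalg.matrix_rank(A)
print(rank)  # 2
```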
Rank in terms of nullity Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The
rank–nullity theorem states that this definition is equivalent to the preceding one.
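A minimal numerical check of the rank–nullity theorem (the example matrix is an assumption; the nullity is counted from the near-zero singular values):

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])  # a rank-2 example matrix
m, n = A.shape

rank = np.linalg.matrix_rank(A)

# Nullity = dimension of the kernel of f(x) = Ax, i.e. n minus the
# number of singular values above a numerical tolerance.
s = np.linalg.svd(A, compute_uv=False)
tol = max(m, n) * np.finfo(A.dtype).eps * s.max()
nullity = n - int(np.sum(s > tol))

print(rank, nullity)        # 2 2
assert rank + nullity == n  # rank-nullity: rank + dim(kernel) = n
```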
Column rank – dimension of column space The rank of A is the maximal number of linearly independent columns \mathbf{c}_1,\mathbf{c}_2,\dots,\mathbf{c}_k of A; this is the
dimension of the
column space of A (the column space being the subspace of F^m generated by the columns of A, which is in fact just the image of the linear map f associated to A).
Row rank – dimension of row space The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A.
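Since the row space of A is the column space of its transpose, the equality of row rank and column rank can be observed numerically (a random example matrix is assumed here for illustration):

```python
import numpy as np

# A hypothetical 5x8 integer matrix, chosen at random for illustration.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(5, 8)).astype(float)

# Row rank of A = column rank of A^T; both computations agree,
# illustrating that row rank equals column rank.
col_rank = np.linalg.matrix_rank(A)
row_rank = np.linalg.matrix_rank(A.T)
print(col_rank == row_rank)  # True
```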
Decomposition rank The rank of A is the smallest positive integer k such that A can be factored as A = CR, where C is an m \times k matrix and R is a k \times n matrix. In fact, for all integers k, the following are equivalent:
1. the column rank of A is less than or equal to k,
2. there exist k columns \mathbf{c}_1,\ldots,\mathbf{c}_k of size m such that every column of A is a linear combination of \mathbf{c}_1,\ldots,\mathbf{c}_k,
3. there exist an m \times k matrix C and a k \times n matrix R such that A = CR (when k is the rank, this is a rank factorization of A),
4. there exist k rows \mathbf{r}_1,\ldots,\mathbf{r}_k of size n such that every row of A is a linear combination of \mathbf{r}_1,\ldots,\mathbf{r}_k,
5. the row rank of A is less than or equal to k.
Indeed, the following equivalences are obvious: (1)\Leftrightarrow(2)\Leftrightarrow(3)\Leftrightarrow(4)\Leftrightarrow(5). For example, to prove (3) from (2), take C to be the matrix whose columns are \mathbf{c}_1,\ldots,\mathbf{c}_k from (2). To prove (2) from (3), take \mathbf{c}_1,\ldots,\mathbf{c}_k to be the columns of C. It follows from the equivalence (1)\Leftrightarrow(5) that the row rank is equal to the column rank. As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V \to W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V \to X and a map X \to W. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See
rank factorization for details.
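One way to construct a rank factorization A = CR in practice (a sketch assuming NumPy and a small example matrix) is to truncate the singular value decomposition at k = rank(A):

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])  # a rank-2 example matrix

# Truncating the SVD at the rank k yields a factorization A = C R
# with C of shape m x k and R of shape k x n.
U, s, Vt = np.linalg.svd(A)
k = np.linalg.matrix_rank(A)   # k = 2
C = U[:, :k] * s[:k]           # m x k: scaled left singular vectors
R = Vt[:k, :]                  # k x n: leading right singular vectors

assert np.allclose(C @ R, A)   # the product recovers A exactly
```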
Rank in terms of singular values The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the
singular value decomposition A = U\Sigma V^{*}.
Determinantal rank – size of largest non-vanishing minor The rank of A is the largest order of any non-zero minor in A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix. A non-vanishing p × p minor (a p × p submatrix with non-zero determinant) shows that the p rows and p columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent).
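The determinantal characterization can be made concrete by brute force for small matrices (the helper function and example matrix below are illustrative assumptions; this search is exponential and only sensible for tiny inputs):

```python
import numpy as np
from itertools import combinations

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])  # a rank-2 example matrix

def determinantal_rank(A, tol=1e-10):
    """Largest order of a square submatrix of A with non-zero
    determinant (brute force over all row/column subsets)."""
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k  # found a non-vanishing k x k minor
    return 0

print(determinantal_rank(A))  # 2, matching the row and column rank
```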
Tensor rank – minimum number of simple tensors The rank of A is the smallest number k such that A can be written as a sum of k rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product c \cdot r of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition.

== Properties ==