===Cofactor expansion of the determinant===
The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix A = (a_{ij}), the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, defining C_{ij} = (-1)^{i+j} M_{ij}, the cofactor expansion along the j-th column gives:

\begin{align} \det(\mathbf A) &= a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} \\[2pt] &= \sum_{i=1}^{n} a_{ij} C_{ij} \\[2pt] &= \sum_{i=1}^{n} a_{ij}(-1)^{i+j} M_{ij} \end{align}

The cofactor expansion along the i-th row gives:

\begin{align} \det(\mathbf A) &= a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} \\[2pt] &= \sum_{j=1}^{n} a_{ij} C_{ij} \\[2pt] &= \sum_{j=1}^{n} a_{ij} (-1)^{i+j} M_{ij} \end{align}
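As a concrete illustration, the expansion along the first row can be sketched in a few lines of Python (a minimal example; the names `det` and `minor_matrix` are chosen here for illustration and are not part of the article):

```python
def minor_matrix(a, i, j):
    """Submatrix of a with row i and column j removed (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """Determinant by cofactor (Laplace) expansion along the first row.

    With 0-based indices: det(A) = sum_j (-1)^j * a[0][j] * M_{0j}.
    """
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(n))

# Example: a 3x3 determinant computed by repeated expansion
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3
```

Expanding along the first row here is a convention; by the formulas above, expansion along any row or column yields the same value.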
===Inverse of a matrix===
One can write down the inverse of an invertible matrix by computing its cofactors and using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix):

\mathbf C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}

Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:

\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} \mathbf C^\mathsf{T}.

The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.

The above formula can be generalized as follows: let I = (1 \le i_1 < i_2 < \cdots < i_k \le n) and J = (1 \le j_1 < j_2 < \cdots < j_k \le n) be ordered sequences (in natural order) of indexes (here A is an n × n matrix). Then

[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},

where I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (similarly for the J and J′), and [\mathbf A]_{I,J} denotes the determinant of the submatrix of A formed by choosing the rows with index in I and the columns with index in J. Also,

[\mathbf A]_{I,J} = \det \bigl( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \bigr).

A simple proof can be given using the wedge product. Indeed,

\bigl[ \mathbf A^{-1} \bigr]_{I,J} (e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},

where e_1, \ldots, e_n are the basis vectors. Acting by \mathbf A on both sides, one gets

\begin{align} &\ \bigl[\mathbf A^{-1} \bigr]_{I,J} \det \mathbf A (e_1\wedge\ldots \wedge e_n) \\[2pt] =&\ \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}}) \\[2pt] =&\ \pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n). \end{align}

The sign can be worked out to be (-1)^{\left( \sum_{s=1}^{k} i_s - \sum_{s=1}^{k} j_s \right)}, so the sign is determined by the sums of the elements of I and J.
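The adjugate formula A^{-1} = C^T / det(A) translates directly into code. The sketch below (helper names such as `cofactor_matrix` and `inverse` are chosen here, not taken from the article) uses exact rational arithmetic via `fractions.Fraction` to avoid floating-point noise:

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def cofactor_matrix(a):
    """Matrix C with C[i][j] = (-1)^(i+j) * M_ij."""
    n = len(a)
    return [[(-1) ** (i + j)
             * det([row[:j] + row[j + 1:]
                    for k, row in enumerate(a) if k != i])
             for j in range(n)]
            for i in range(n)]

def inverse(a):
    """A^{-1} = (1/det A) * C^T, valid when det(A) != 0."""
    d = det(a)
    c = cofactor_matrix(a)
    n = len(a)
    # Transposing the cofactor matrix gives the adjugate.
    return [[Fraction(c[j][i], d) for j in range(n)] for i in range(n)]

A = [[2, 1],
     [5, 3]]
print(inverse(A))  # equals [[3, -1], [-5, 2]] (entries are Fractions; det(A) = 1)
```

Note that this O(n!) expansion is useful for exposition and small matrices only; for numerical work, Gaussian elimination is the standard route to the inverse.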
===Other applications===
Given an m × n matrix with real entries (or entries from any other field) and rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero.

We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write [A]_{I,J} for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.

• If I = J, then [A]_{I,J} is called a principal minor.
• If A is square and I = J = {1, ..., k}, then the principal minor [A]_{I,J} is called a leading principal minor (of order k) or corner (principal) minor (of order k). For an n × n square matrix, there are n leading principal minors.
• A basic minor of a matrix with rank r is an r × r minor with nonzero value.
• For Hermitian matrices, the leading principal minors can be used to test for positive definiteness and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details.

Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements and J is a subset of {1, ..., p} with k elements. Then

[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,

where the sum extends over all subsets K of {1, ..., n} with k elements.

==Multilinear algebra approach==