
Minor (linear algebra)

In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix generated from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.

Definition and illustration
First minors

If A is a square matrix, then the minor of the entry in its i-th row and j-th column (also called the (i, j) minor, or a first minor) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted M_{i,j}. The (i, j) cofactor is obtained by multiplying the minor by (-1)^{i+j}, and is often denoted C_{i,j}.

To illustrate these definitions, consider the following 3 × 3 matrix:

\begin{bmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{bmatrix}

To compute the minor M_{2,3} and the cofactor C_{2,3}, we find the determinant of the above matrix with row 2 and column 3 removed:

M_{2,3} = \det \begin{bmatrix} 1 & 4 & \Box \\ \Box & \Box & \Box \\ -1 & 9 & \Box \end{bmatrix} = \det \begin{bmatrix} 1 & 4 \\ -1 & 9 \end{bmatrix} = 9 - (-4) = 13

So the cofactor of the (2, 3) entry is

C_{2,3} = (-1)^{2+3} M_{2,3} = -13.

General definition

Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n − k)-th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m − k rows and n − k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of

{m \choose k} \cdot {n \choose k}

minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.

The notation for minors varies between authors: by the minor associated to ordered sequences of indexes I and J, some authors mean the determinant of the matrix formed as above by taking the elements of the original matrix from the rows whose indexes are in I and the columns whose indexes are in J, whereas other authors mean by a minor associated to I and J the determinant of the matrix formed from the original matrix by deleting the rows in I and the columns in J. Which convention is used should always be checked from the source in question.
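As a concrete check of the first-minor and cofactor definitions illustrated earlier in this section, the following Python/NumPy sketch is a minimal illustration (the helper names first_minor and cofactor are chosen here for exposition and are not standard); it reproduces M_{2,3} = 13 and C_{2,3} = −13 for the 3 × 3 matrix above.

```python
import numpy as np

def first_minor(A, i, j):
    """Determinant of A with row i and column j removed (1-based indices)."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Cofactor C_{i,j} = (-1)**(i + j) * M_{i,j}."""
    return (-1) ** (i + j) * first_minor(A, i, j)

A = np.array([[1, 4, 7],
              [3, 0, 5],
              [-1, 9, 11]], dtype=float)

print(first_minor(A, 2, 3))  # 13.0 (up to floating-point rounding)
print(cofactor(A, 2, 3))     # -13.0
```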
Applications of minors and cofactors
Cofactor expansion of the determinant

The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix A, the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, defining C_{ij} = (-1)^{i+j} M_{ij}, the cofactor expansion along the j-th column gives:

\begin{align} \det(\mathbf A) &= a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} \\[2pt] &= \sum_{i=1}^{n} a_{ij} C_{ij} \\[2pt] &= \sum_{i=1}^{n} a_{ij}(-1)^{i+j} M_{ij} \end{align}

The cofactor expansion along the i-th row gives:

\begin{align} \det(\mathbf A) &= a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} \\[2pt] &= \sum_{j=1}^{n} a_{ij} C_{ij} \\[2pt] &= \sum_{j=1}^{n} a_{ij} (-1)^{i+j} M_{ij} \end{align}

Inverse of a matrix

One can write down the inverse of an invertible matrix by computing its cofactors and using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix):

\mathbf C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}

Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:

\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} \mathbf C^\mathsf{T}.

The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.

The above formula can be generalized as follows: let I = (1 \le i_1 < i_2 < \cdots < i_k \le n) and J = (1 \le j_1 < j_2 < \cdots < j_k \le n) be ordered sequences (in natural order) of indexes (here A is an n × n matrix). Then

[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},

where I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (and similarly for J and J′), and [A]_{I,J} denotes the determinant of the submatrix of A formed by choosing the rows with index set I and the columns with index set J. Also,

[\mathbf A]_{I,J} = \det \bigl( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \bigr).

A simple proof can be given using the wedge product. Indeed,

\bigl[ \mathbf A^{-1} \bigr]_{I,J} (e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},

where e_1, \ldots, e_n are the basis vectors. Acting by A on both sides, one gets

\begin{align} &\ \bigl[\mathbf A^{-1} \bigr]_{I,J} \det \mathbf A (e_1\wedge\ldots \wedge e_n) \\[2pt] =&\ \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}}) \\[2pt] =&\ \pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n). \end{align}

The sign can be worked out to be (-1)^{\left( \sum_{s=1}^{k} i_s - \sum_{s=1}^{k} j_s \right)}, so the sign is determined by the sums of the elements in I and J.
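As a numerical sanity check on the cofactor expansion and the adjugate formula above, the following Python/NumPy sketch (illustrative only; it repeats a small cofactor helper like the one sketched earlier so the block is self-contained) expands the determinant along a column and assembles the inverse from the transposed cofactor matrix, comparing both against NumPy's built-in routines.

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor C_{i,j}: (-1)**(i+j) times det of A with row i, column j removed (1-based)."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
n = A.shape[0]

# Cofactor expansion along column j: det(A) = sum_i a_{ij} C_{ij}
j = 1
det_A = sum(A[i - 1, j - 1] * cofactor(A, i, j) for i in range(1, n + 1))
print(det_A, np.linalg.det(A))              # both approximately -8

# Inverse via the adjugate: A^{-1} = (1/det A) * C^T
C = np.array([[cofactor(A, i, j) for j in range(1, n + 1)]
              for i in range(1, n + 1)])
print(np.allclose(C.T / det_A, np.linalg.inv(A)))   # True
```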
Other applications

Given an m × n matrix with real entries (or entries from any other field) and rank r, there exists at least one non-zero r × r minor, while all larger minors are zero.

We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write [A]_{I,J} for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.

• If A is square and I = J, then [A]_{I,J} is called a principal minor.
• If A is square and I = J = {1, ..., k}, then the principal minor is called a leading principal minor (of order k) or corner (principal) minor (of order k). For an n × n square matrix, there are n leading principal minors.
• A basic minor of a matrix with rank r is an r × r minor with nonzero value.
• For Hermitian matrices, the leading principal minors can be used to test for positive definiteness, and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details.

Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements and J is a subset of {1, ..., p} with k elements. Then

[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,,

where the sum extends over all subsets K of {1, ..., n} with k elements.
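The statement about minors of a product can be checked numerically. The sketch below is only an illustration on small random test matrices; the helper minor(...), standing for [·]_{I,J}, is introduced here for convenience. It sums [A]_{I,K}[B]_{K,J} over all k-element subsets K and compares the result with the corresponding minor of AB.

```python
import itertools
import numpy as np

def minor(M, rows, cols):
    """[M]_{I,J}: determinant of the submatrix with the given rows and columns (0-based)."""
    return np.linalg.det(M[np.ix_(rows, cols)])

rng = np.random.default_rng(0)
m, n, p, k = 4, 5, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

I = (0, 2)   # k row indices of A (and of AB)
J = (1, 3)   # k column indices of B (and of AB)

lhs = minor(A @ B, I, J)
rhs = sum(minor(A, I, K) * minor(B, K, J)
          for K in itertools.combinations(range(n), k))
print(np.isclose(lhs, rhs))  # True
```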
Multilinear algebra approach
A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the k-th exterior power map. If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix

\begin{pmatrix} 1 & 4 \\ 3 & \!\!-1 \\ 2 & 1 \end{pmatrix}

are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product

(\mathbf{e}_1 + 3\mathbf{e}_2 + 2\mathbf{e}_3) \wedge (4\mathbf{e}_1 - \mathbf{e}_2 + \mathbf{e}_3)

where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and alternating, \mathbf{e}_i \wedge \mathbf{e}_i = 0, and antisymmetric, \mathbf{e}_i\wedge \mathbf{e}_j = - \mathbf{e}_j\wedge \mathbf{e}_i, we can simplify this expression to

-13\, \mathbf{e}_1\wedge \mathbf{e}_2 - 7\, \mathbf{e}_1\wedge \mathbf{e}_3 + 5\, \mathbf{e}_2\wedge \mathbf{e}_3,

where the coefficients agree with the minors computed earlier.
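For the 3 × 2 example above, the same components can be obtained computationally: each coefficient of e_i ∧ e_j in the wedge of the two columns is the 2 × 2 minor taken from rows i and j. The sketch below is purely illustrative (it enumerates row pairs with itertools rather than using a dedicated exterior-algebra library) and reproduces −13, −7, and 5.

```python
import itertools
import numpy as np

A = np.array([[1.0, 4.0],
              [3.0, -1.0],
              [2.0, 1.0]])

# The coefficient of e_i ∧ e_j (i < j) in col1 ∧ col2 equals the 2 x 2 minor
# formed from rows i and j of A.
for i, j in itertools.combinations(range(A.shape[0]), 2):
    print(f"e_{i + 1} ∧ e_{j + 1}: {np.linalg.det(A[[i, j], :]):+.0f}")
# e_1 ∧ e_2: -13,  e_1 ∧ e_3: -7,  e_2 ∧ e_3: +5
```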
A remark about different notation
In some books, instead of "cofactor" the term "adjunct" is used. Moreover, it is denoted as A_{ij} and defined in the same way as the cofactor:

\mathbf{A}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}

Using this notation, the inverse matrix is written this way:

\mathbf{M}^{-1} = \frac{1}{\det(M)}\begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{bmatrix}

Keep in mind that the adjunct is not the adjugate or the adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.