There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations):

;Row switching: A row within the matrix can be switched with another row.
: R_i \leftrightarrow R_j
;Row multiplication: Each element in a row can be multiplied by a non-zero constant. This is also known as scaling a row.
: kR_i \rightarrow R_i,\ \mbox{where } k \neq 0
;Row addition: A row can be replaced by the sum of that row and a multiple of another row.
: R_i + kR_j \rightarrow R_i,\ \mbox{where } i \neq j

If E is an elementary matrix, as described below, then to apply the elementary row operation to a matrix A, one multiplies A by the elementary matrix on the left: EA. The elementary matrix for any row operation is obtained by executing the operation on the identity matrix. This fact can be understood as an instance of the Yoneda lemma applied to the category of matrices.
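To make the rule concrete, here is a minimal NumPy sketch (an illustration, not part of the standard exposition; the helper name row_add and the 0-based indexing are conventions of the example, not of the article): build E by executing a row operation on the identity, then check that left multiplication by E performs the same operation.

<syntaxhighlight lang="python">
import numpy as np

def row_add(M, i, j, k):
    """In place: R_i <- R_i + k * R_j (0-based row indices)."""
    M[i] += k * M[j]

A = np.arange(1.0, 10.0).reshape(3, 3)

# Build the elementary matrix by executing the operation on the identity ...
E = np.eye(3)
row_add(E, 2, 0, 5.0)                 # E is now L_{2,0}(5)

# ... then left multiplication by E applies the same operation to A.
B = A.copy()
row_add(B, 2, 0, 5.0)
assert np.allclose(E @ A, B)
</syntaxhighlight>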
==Row-switching transformations==
The first type of row operation on a matrix A switches all matrix elements on row i with their counterparts on a different row j. The corresponding elementary matrix is obtained by swapping row i and row j of the identity matrix.
:T_{i,j} = \begin{bmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & 0 & & 1 & & \\ & & & \ddots & & & \\ & & 1 & & 0 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{bmatrix}
So T_{i,j}A is the matrix produced by exchanging row i and row j of A. Coefficient-wise, the matrix T_{i,j} is defined by:
:[T_{i,j}]_{k,l} = \begin{cases} 0 & k \neq i, k \neq j, k \neq l \\ 1 & k \neq i, k \neq j, k = l \\ 0 & k = i, l \neq j \\ 1 & k = i, l = j \\ 0 & k = j, l \neq i \\ 1 & k = j, l = i \end{cases}
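A minimal NumPy check of this definition (0-based indices; the builder name T is chosen for the example):

<syntaxhighlight lang="python">
import numpy as np

def T(n, i, j):
    """T_{i,j}: the n x n identity with rows i and j exchanged (0-based)."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

A = np.arange(1.0, 13.0).reshape(4, 3)
# Left multiplication by T_{1,3} exchanges rows 1 and 3 of A.
assert np.array_equal(T(4, 1, 3) @ A, A[[0, 3, 2, 1]])
</syntaxhighlight>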
===Properties===
* The inverse of this matrix is itself: T_{i,j}^{-1} = T_{i,j}.
* Since the determinant of the identity matrix is unity, \det(T_{i,j}) = -1. It follows that for any square matrix A (of the correct size), we have \det(T_{i,j}A) = -\det(A).
* For theoretical considerations, the row-switching transformation can be obtained from the row-addition and row-multiplication transformations introduced below, because T_{i,j} = D_i(-1)\,L_{i,j}(-1)\,L_{j,i}(1)\,L_{i,j}(-1), as verified in the sketch below.
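These properties, including the decomposition into row-multiplication and row-addition matrices, can be checked numerically; the following sketch (NumPy, 0-based indices; the builder names T, D, L are conventions of these examples) is illustrative only.

<syntaxhighlight lang="python">
import numpy as np

def T(n, i, j):
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E

def D(n, i, m):
    E = np.eye(n); E[i, i] = m; return E

def L(n, i, j, m):
    E = np.eye(n); E[i, j] = m; return E

n, i, j = 4, 1, 3
assert np.array_equal(T(n, i, j) @ T(n, i, j), np.eye(n))  # self-inverse
assert np.isclose(np.linalg.det(T(n, i, j)), -1.0)         # determinant is -1
# T_{i,j} = D_i(-1) L_{i,j}(-1) L_{j,i}(1) L_{i,j}(-1)
P = D(n, i, -1) @ L(n, i, j, -1) @ L(n, j, i, 1) @ L(n, i, j, -1)
assert np.allclose(P, T(n, i, j))
</syntaxhighlight>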
==Row-multiplying transformations==
The next type of row operation on a matrix A multiplies all elements on row i by m, where m is a non-zero scalar (usually a real number). The corresponding elementary matrix is a diagonal matrix, with diagonal entries 1 everywhere except in the ith position, where it is m.
:D_i(m) = \begin{bmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & 1 & & & & \\ & & & m & & & \\ & & & & 1 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{bmatrix}
So D_i(m)A is the matrix produced from A by multiplying row i by m. Coefficient-wise, the matrix D_i(m) is defined by:
:[D_i(m)]_{k,l} = \begin{cases} 0 & k \neq l \\ 1 & k = l, k \neq i \\ m & k = l, k = i \end{cases}
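Again a small NumPy sketch (0-based indices; the builder name D is chosen for the example) showing that left multiplication by D_i(m) scales row i:

<syntaxhighlight lang="python">
import numpy as np

def D(n, i, m):
    """D_i(m): the n x n identity with entry (i, i) replaced by m (0-based)."""
    E = np.eye(n)
    E[i, i] = m
    return E

A = np.arange(1.0, 13.0).reshape(4, 3)
B = A.copy()
B[2] *= 5.0                           # multiply row 2 of A by 5 directly
assert np.allclose(D(4, 2, 5.0) @ A, B)
</syntaxhighlight>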
===Properties===
* The inverse of this matrix is given by D_i(m)^{-1} = D_i\left(\tfrac{1}{m}\right).
* The matrix and its inverse are diagonal matrices.
* \det(D_i(m)) = m. Therefore, for a square matrix A (of the correct size), we have \det(D_i(m)A) = m\det(A), as checked in the sketch below.
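A numerical check of these properties, reusing the illustrative D builder from the sketch above:

<syntaxhighlight lang="python">
import numpy as np

def D(n, i, m):
    E = np.eye(n); E[i, i] = m; return E

n, i, m = 4, 2, 5.0
A = np.random.default_rng(0).random((n, n))

assert np.allclose(D(n, i, m) @ D(n, i, 1 / m), np.eye(n))  # inverse is D_i(1/m)
assert np.isclose(np.linalg.det(D(n, i, m)), m)             # determinant is m
assert np.isclose(np.linalg.det(D(n, i, m) @ A), m * np.linalg.det(A))
</syntaxhighlight>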
==Row-addition transformations==
The final type of row operation on a matrix A adds row j multiplied by a scalar m to row i. The corresponding elementary matrix is the identity matrix but with an m in the (i, j) position.
:L_{i,j}(m) = \begin{bmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & 1 & & & & \\ & & & \ddots & & & \\ & & m & & 1 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{bmatrix}
So L_{i,j}(m)A is the matrix produced from A by adding m times row j to row i. And AL_{i,j}(m) is the matrix produced from A by adding m times column i to column j. Coefficient-wise, the matrix L_{i,j}(m) is defined by:
:[L_{i,j}(m)]_{k,l} = \begin{cases} 0 & k \neq l,\ (k, l) \neq (i, j) \\ 1 & k = l \\ m & (k, l) = (i, j) \end{cases}
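A short NumPy sketch (0-based indices; the builder name L is chosen for the example) checking both the row action of left multiplication and the column action of right multiplication:

<syntaxhighlight lang="python">
import numpy as np

def L(n, i, j, m):
    """L_{i,j}(m): the n x n identity with m placed at entry (i, j) (0-based)."""
    E = np.eye(n)
    E[i, j] = m
    return E

A = np.arange(1.0, 17.0).reshape(4, 4)

B = A.copy(); B[3] += 2.0 * A[1]        # rows: R_3 <- R_3 + 2 R_1
assert np.allclose(L(4, 3, 1, 2.0) @ A, B)

C = A.copy(); C[:, 1] += 2.0 * A[:, 3]  # columns: C_1 <- C_1 + 2 C_3
assert np.allclose(A @ L(4, 3, 1, 2.0), C)
</syntaxhighlight>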
===Properties===
* These transformations are a kind of shear mapping, also known as transvections.
* The inverse of this matrix is given by L_{i,j}(m)^{-1} = L_{i,j}(-m).
* The matrix and its inverse are triangular matrices.
* \det(L_{i,j}(m)) = 1. Therefore, for a square matrix A (of the correct size) we have \det(L_{i,j}(m)A) = \det(A).
* Row-addition transforms satisfy the Steinberg relations, as checked numerically in the sketch below.
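The sketch below (NumPy, 0-based indices, illustrative only) verifies the inverse and determinant properties together with two Steinberg relations: the additivity relation L_{i,j}(a)L_{i,j}(b) = L_{i,j}(a+b) and the commutator relation [L_{i,j}(a), L_{j,k}(b)] = L_{i,k}(ab) for pairwise distinct i, j, k.

<syntaxhighlight lang="python">
import numpy as np

def L(n, i, j, m):
    E = np.eye(n); E[i, j] = m; return E

n, a, b = 4, 2.0, 3.0

assert np.allclose(L(n, 0, 1, a) @ L(n, 0, 1, -a), np.eye(n))  # inverse is L_{i,j}(-a)
assert np.isclose(np.linalg.det(L(n, 0, 1, a)), 1.0)           # determinant is 1
# Steinberg relations (indices 0, 1, 2 pairwise distinct):
assert np.allclose(L(n, 0, 1, a) @ L(n, 0, 1, b), L(n, 0, 1, a + b))
comm = L(n, 0, 1, a) @ L(n, 1, 2, b) @ L(n, 0, 1, -a) @ L(n, 1, 2, -b)
assert np.allclose(comm, L(n, 0, 2, a * b))   # [L_{0,1}(a), L_{1,2}(b)] = L_{0,2}(ab)
</syntaxhighlight>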
==See also==