==Singularity and regularity==
The only non-singular idempotent matrix is the identity matrix; that is, if a non-identity matrix is idempotent, its number of independent rows (and columns) is less than its number of rows (and columns). This can be seen from writing A^2 = A, assuming that A has full rank (is non-singular), and pre-multiplying by A^{-1} to obtain A = IA = A^{-1}A^2 = A^{-1}A = I.

When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent. This holds since (I-A)(I-A) = I - A - A + A^2 = I - A - A + A = I - A.

If a matrix A is idempotent, then A^n = A for all positive integers n. This can be shown by induction. The result clearly holds for n = 1, since A^1 = A. Suppose that A^{k-1} = A. Then A^k = A^{k-1}A = AA = A, since A is idempotent. Hence, by the principle of induction, the result follows.
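The properties above can be checked numerically. A minimal sketch, assuming NumPy and using the projection onto the first coordinate axis as an illustrative non-identity idempotent matrix:

```python
import numpy as np

# Projection onto the first coordinate axis of R^2: a standard
# example of a non-identity idempotent matrix (chosen for illustration).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# A^2 = A, and more generally A^n = A for every positive integer n.
assert np.allclose(A @ A, A)
assert np.allclose(np.linalg.matrix_power(A, 5), A)

# A non-identity idempotent matrix is singular (rank-deficient).
assert np.linalg.matrix_rank(A) < A.shape[0]

# I - A is again idempotent.
B = np.eye(2) - A
assert np.allclose(B @ B, B)
```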
==Eigenvalues==
An idempotent matrix is always diagonalizable, and its eigenvalues are either 0 or 1: if \mathbf{x} is a non-zero eigenvector of an idempotent matrix A and \lambda its associated eigenvalue, then
\lambda \mathbf{x} = A\mathbf{x} = A^2\mathbf{x} = A\lambda\mathbf{x} = \lambda A\mathbf{x} = \lambda^2 \mathbf{x},
which implies \lambda \in \{0, 1\}. This further implies that the determinant of an idempotent matrix is always 0 or 1. As stated above, if the determinant is equal to one, the matrix is invertible and is therefore the identity matrix.
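The eigenvalue and determinant claims can be verified on a concrete projection matrix. A sketch, assuming NumPy; X below is an arbitrary illustrative matrix with linearly independent columns:

```python
import numpy as np

# P = X (X^T X)^{-1} X^T is the orthogonal projection onto col(X),
# a standard example of an idempotent matrix.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(P @ P, P)  # idempotent

# Every eigenvalue is (numerically) 0 or 1.
eigvals = np.linalg.eigvals(P)
assert all(np.isclose(v, 0) or np.isclose(v, 1) for v in eigvals)

# Hence the determinant is 0 or 1; here P is 3x3 of rank 2, so det(P) = 0.
assert np.isclose(np.linalg.det(P), 0)
```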
==Trace==
The trace of an idempotent matrix (the sum of the elements on its main diagonal) equals the rank of the matrix and thus is always an integer. This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in statistics, for example, in establishing the degree of bias in using a sample variance as an estimate of a population variance).
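The trace-equals-rank property can be checked on the hat matrix and residual-maker matrix from least squares. A sketch, assuming NumPy; X is an arbitrary full-column-rank design matrix generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))  # 6 observations, 3 covariates

# Hat matrix H and residual maker M = I - H are both idempotent.
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(6) - H

# For idempotent matrices, trace = rank:
# rank(H) = 3 and rank(M) = 6 - 3 = 3.
assert np.isclose(np.trace(H), np.linalg.matrix_rank(H))
assert np.isclose(np.trace(M), np.linalg.matrix_rank(M))
```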
==Relationships between idempotent matrices==
In regression analysis, the matrix M = I - X(X'X)^{-1}X' is known to produce the residuals e from the regression of the vector of dependent variables y on the matrix of covariates X. (See the section on Applications.) Now let X_1 be a matrix formed from a subset of the columns of X, and let M_1 = I - X_1(X_1'X_1)^{-1}X_1'. It is easy to show that both M and M_1 are idempotent, but a somewhat surprising fact is that M M_1 = M. This is because M X_1 = 0: the residuals from the regression of the columns of X_1 on X are 0, since X_1 can be perfectly interpolated, as it is a subset of X (by direct substitution it is also straightforward to show that M X = 0). This leads to two other important results: one is that (M_1 - M) is symmetric and idempotent, and the other is that (M_1 - M)M = 0, i.e., (M_1 - M) is orthogonal to M. These results play a key role, for example, in the derivation of the F test.

Any matrix similar to an idempotent matrix is also idempotent; that is, idempotency is preserved under a change of basis. This can be shown by squaring the transformed matrix S A S^{-1}, with A idempotent:
(S A S^{-1})^2 = (S A S^{-1})(S A S^{-1}) = S A (S^{-1}S) A S^{-1} = S A^2 S^{-1} = S A S^{-1}.
==Applications==
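The residual-maker identities and the change-of-basis invariance above can be checked numerically. A sketch, assuming NumPy; the matrices X and S below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))
X1 = X[:, :2]  # a subset of the columns of X

def residual_maker(Z):
    # M_Z = I - Z (Z'Z)^{-1} Z', the annihilator of col(Z).
    n = Z.shape[0]
    return np.eye(n) - Z @ np.linalg.inv(Z.T @ Z) @ Z.T

M = residual_maker(X)
M1 = residual_maker(X1)

assert np.allclose(M @ M, M)    # both are idempotent
assert np.allclose(M1 @ M1, M1)
assert np.allclose(M @ M1, M)   # M M1 = M, since M X1 = 0
assert np.allclose((M1 - M) @ M, np.zeros_like(M))  # (M1 - M) orthogonal to M

# Idempotency is preserved under a change of basis: S M S^{-1} is idempotent.
S = rng.standard_normal((8, 8))
T = S @ M @ np.linalg.inv(S)
assert np.allclose(T @ T, T)
```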