While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include:
===Triangular matrices===
Since the determinant of a triangular matrix is the product of its diagonal entries, if T is triangular, then \det(\lambda I - T) = \prod_i (\lambda - T_{ii}). Thus the eigenvalues of T are its diagonal entries.
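As a quick numerical check of this fact, one can evaluate the characteristic polynomial of a small triangular matrix at each diagonal entry and confirm it vanishes. A minimal Python sketch (the matrix below is an arbitrary illustrative example):

```python
# Hypothetical 3x3 upper-triangular matrix; its eigenvalues should be the
# diagonal entries 2, 5, and -1.
T = [[2.0, 3.0, 1.0],
     [0.0, 5.0, 4.0],
     [0.0, 0.0, -1.0]]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly(M, lam):
    """Evaluate det(lam*I - M)."""
    n = len(M)
    shifted = [[lam * (i == j) - M[i][j] for j in range(n)] for i in range(n)]
    return det3(shifted)

# Each diagonal entry is a root of the characteristic polynomial.
for lam in (2.0, 5.0, -1.0):
    assert char_poly(T, lam) == 0.0
```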
===Factorable polynomial equations===
If p is any polynomial and p(A) = 0, then the eigenvalues of A also satisfy the same equation. If p happens to have a known factorization, then the eigenvalues of A lie among its roots. For example, a projection is a square matrix P satisfying P^2 = P. The roots of the corresponding scalar polynomial equation, \lambda^2 = \lambda, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the nullity of P, while the multiplicity of 1 is the rank of P.

Another example is a matrix A that satisfies A^2 = \alpha^2 I for some scalar \alpha. The eigenvalues must be \pm\alpha. The projection operators
:P_+ = \frac{1}{2}\left(I + \frac{A}{\alpha}\right)
:P_- = \frac{1}{2}\left(I - \frac{A}{\alpha}\right)
satisfy
:AP_+ = \alpha P_+ \quad AP_- = -\alpha P_-
and
:P_+P_+ = P_+ \quad P_-P_- = P_- \quad P_+P_- = P_-P_+ = 0.
The column spaces of P_+ and P_- are the eigenspaces of A corresponding to +\alpha and -\alpha, respectively.
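The projection construction can be verified numerically. In the Python sketch below, the matrix A is an arbitrary illustrative example satisfying A^2 = \alpha^2 I with \alpha = 2:

```python
# Example matrix (chosen for illustration): A = [[0, 2], [2, 0]] satisfies
# A^2 = 4*I, so alpha = 2 and the eigenvalues are +2 and -2.
A = [[0.0, 2.0], [2.0, 0.0]]
alpha = 2.0

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]

# Verify the hypothesis A^2 = alpha^2 * I.
assert matmul(A, A) == [[4.0, 0.0], [0.0, 4.0]]

# P_+ = (I + A/alpha)/2  and  P_- = (I - A/alpha)/2
P_plus  = [[0.5 * (I2[i][j] + A[i][j] / alpha) for j in range(2)] for i in range(2)]
P_minus = [[0.5 * (I2[i][j] - A[i][j] / alpha) for j in range(2)] for i in range(2)]

# A P_+ = alpha P_+  and  A P_- = -alpha P_-
assert matmul(A, P_plus) == [[alpha * x for x in row] for row in P_plus]
assert matmul(A, P_minus) == [[-alpha * x for x in row] for row in P_minus]

# P_+ and P_- are idempotent and annihilate each other.
assert matmul(P_plus, P_plus) == P_plus
assert matmul(P_plus, P_minus) == [[0.0, 0.0], [0.0, 0.0]]
```

The columns of P_+ and P_- then span the eigenspaces for +2 and -2.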
===2×2 matrices===
For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While a common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the root formulas makes this approach less attractive.

For the 2×2 matrix
:A = \begin{bmatrix} a & b \\ c & d \end{bmatrix},
the characteristic polynomial is
:\det \begin{bmatrix} \lambda - a & -b \\ -c & \lambda - d \end{bmatrix} = \lambda^2 - \left( a + d \right)\lambda + \left( ad - bc \right) = \lambda^2 - \lambda\,{\rm tr}(A) + \det(A).
Thus the eigenvalues can be found by using the quadratic formula:
:\lambda = \frac{{\rm tr}(A) \pm \sqrt{{\rm tr}^2(A) - 4\det(A)}}{2}.
Defining {\rm gap}\left( A \right) = \sqrt{{\rm tr}^2(A) - 4\det(A)} to be the distance between the two eigenvalues, it is straightforward to calculate
:\frac{\partial\lambda}{\partial a} = \frac{1}{2}\left( 1 \pm \frac{a - d}{{\rm gap}(A)} \right), \qquad \frac{\partial\lambda}{\partial b} = \frac{\pm c}{{\rm gap}(A)}
with similar formulas for \partial\lambda/\partial c and \partial\lambda/\partial d. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.

Eigenvectors can be found by exploiting the Cayley–Hamilton theorem. If \lambda_1, \lambda_2 are the eigenvalues, then (A - \lambda_1 I)(A - \lambda_2 I) = (A - \lambda_2 I)(A - \lambda_1 I) = 0, so the columns of (A - \lambda_2 I) are annihilated by (A - \lambda_1 I) and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then A is a multiple of the identity and any non-zero vector is an eigenvector.)

For example, suppose
:A = \begin{bmatrix} 4 & 3 \\ -2 & -3 \end{bmatrix},
then {\rm tr}(A) = 4 - 3 = 1 and \det(A) = 4(-3) - 3(-2) = -6, so the characteristic equation is
:0 = \lambda^2 - \lambda - 6 = (\lambda - 3)(\lambda + 2),
and the eigenvalues are 3 and −2. Now,
:A - 3I = \begin{bmatrix} 1 & 3 \\ -2 & -6 \end{bmatrix}, \qquad A + 2I = \begin{bmatrix} 6 & 3 \\ -2 & -1 \end{bmatrix}.
In both matrices, the columns are multiples of each other, so either column can be used. Thus, \begin{bmatrix} 1 \\ -2 \end{bmatrix} can be taken as an eigenvector associated with the eigenvalue −2, and \begin{bmatrix} 3 \\ -1 \end{bmatrix} as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by A.
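The two steps above, the quadratic formula for the eigenvalues and the Cayley–Hamilton trick for an eigenvector, can be sketched in a few lines of Python (restricted, for simplicity, to the case of real eigenvalues):

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula.

    Assumes real eigenvalues, i.e. tr^2(A) - 4*det(A) >= 0.
    """
    tr, det = a + d, a * d - b * c
    gap = math.sqrt(tr * tr - 4.0 * det)   # distance between the eigenvalues
    return (tr + gap) / 2.0, (tr - gap) / 2.0

# Worked example from the text: A = [[4, 3], [-2, -3]].
lam1, lam2 = eig2x2(4.0, 3.0, -2.0, -3.0)
assert (lam1, lam2) == (3.0, -2.0)

# Cayley-Hamilton: a non-zero column of (A - lam2*I) is an eigenvector
# for lam1.  First column of A + 2I is (6, -2), a multiple of (3, -1).
v = (4.0 - lam2, -2.0)
Av = (4.0 * v[0] + 3.0 * v[1], -2.0 * v[0] - 3.0 * v[1])
assert Av == (lam1 * v[0], lam1 * v[1])   # A v = 3 v
```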
===Symmetric 3×3 matrices===
The characteristic equation of a symmetric 3×3 matrix A is:
:\det \left( \alpha I - A \right) = \alpha^3 - \alpha^2 {\rm tr}(A) - \alpha \frac{1}{2}\left( {\rm tr}(A^2) - {\rm tr}^2(A) \right) - \det(A) = 0.
This equation may be solved using the methods of Cardano or Lagrange, but an affine change to A will simplify the expression considerably, and lead directly to a trigonometric solution. If A = pB + qI, then A and B have the same eigenvectors, and \beta is an eigenvalue of B if and only if \alpha = p\beta + q is an eigenvalue of A. Letting q = {\rm tr}(A)/3 and p = \left({\rm tr}\left((A - qI)^2\right)/6\right)^{1/2} gives
:\det \left( \beta I - B \right) = \beta^3 - 3\beta - \det(B) = 0.
The substitution \beta = 2\cos\theta and some simplification using the identity \cos 3\theta = 4\cos^3\theta - 3\cos\theta reduces the equation to \cos 3\theta = \det(B)/2. Thus
:\beta = 2\cos\left(\frac{1}{3}\arccos\left( \det(B)/2 \right) + \frac{2k\pi}{3}\right), \quad k = 0, 1, 2.
If \det(B) is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of k. This issue doesn't arise when A is real and symmetric, resulting in a simple algorithm:
 % Given a real symmetric 3x3 matrix A, compute the eigenvalues
 % Note that acos and cos operate on angles in radians
 
 p1 = A(1,2)^2 + A(1,3)^2 + A(2,3)^2
 if (p1 == 0)
    % A is diagonal.
    eig1 = A(1,1)
    eig2 = A(2,2)
    eig3 = A(3,3)
 else
    q = trace(A)/3               % trace(A) is the sum of all diagonal values
    p2 = (A(1,1) - q)^2 + (A(2,2) - q)^2 + (A(3,3) - q)^2 + 2 * p1
    p = sqrt(p2 / 6)
    B = (1 / p) * (A - q * I)    % I is the identity matrix
    r = det(B) / 2
 
    % In exact arithmetic for a symmetric matrix -1 <= r <= 1
    % but computation error can leave it slightly outside this range.
    if (r <= -1)
       phi = pi / 3
    elseif (r >= 1)
       phi = 0
    else
       phi = acos(r) / 3
    end
 
    % the eigenvalues satisfy eig3 <= eig2 <= eig1
    eig1 = q + 2 * p * cos(phi)
    eig3 = q + 2 * p * cos(phi + (2*pi/3))
    eig2 = 3 * q - eig1 - eig3   % since trace(A) = eig1 + eig2 + eig3
 end
Once again, the eigenvectors of A can be obtained by recourse to the
Cayley–Hamilton theorem. If \alpha_1, \alpha_2, \alpha_3 are distinct eigenvalues of A, then (A - \alpha_1 I)(A - \alpha_2 I)(A - \alpha_3 I) = 0. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if \alpha_3 = \alpha_1, then (A - \alpha_1 I)^2(A - \alpha_2 I) = 0 and (A - \alpha_2 I)(A - \alpha_1 I)^2 = 0. Thus the generalized eigenspace of \alpha_1 is spanned by the columns of A - \alpha_2 I while the ordinary eigenspace is spanned by the columns of (A - \alpha_1 I)(A - \alpha_2 I). The ordinary eigenspace of \alpha_2 is spanned by the columns of (A - \alpha_1 I)^2.

For example, let
:A = \begin{bmatrix} 3 & 2 & 6 \\ 2 & 2 & 5 \\ -2 & -1 & -4 \end{bmatrix}.
The characteristic equation is
:0 = \lambda^3 - \lambda^2 - \lambda + 1 = (\lambda - 1)^2(\lambda + 1),
with eigenvalues 1 (of multiplicity 2) and −1. Calculating,
:A - I = \begin{bmatrix} 2 & 2 & 6 \\ 2 & 1 & 5 \\ -2 & -1 & -5 \end{bmatrix}, \qquad A + I = \begin{bmatrix} 4 & 2 & 6 \\ 2 & 3 & 5 \\ -2 & -1 & -3 \end{bmatrix}
and
:(A - I)^2 = \begin{bmatrix} -4 & 0 & -8 \\ -4 & 0 & -8 \\ 4 & 0 & 8 \end{bmatrix}, \qquad (A - I)(A + I) = \begin{bmatrix} 0 & 4 & 4 \\ 0 & 2 & 2 \\ 0 & -2 & -2 \end{bmatrix}.
Thus \begin{bmatrix} -4 \\ -4 \\ 4 \end{bmatrix} is an eigenvector for −1, and \begin{bmatrix} 4 \\ 2 \\ -2 \end{bmatrix} is an eigenvector for 1. \begin{bmatrix} 2 \\ 3 \\ -1 \end{bmatrix} and \begin{bmatrix} 6 \\ 5 \\ -3 \end{bmatrix} are both generalized eigenvectors associated with 1, either one of which could be combined with \begin{bmatrix} -4 \\ -4 \\ 4 \end{bmatrix} and \begin{bmatrix} 4 \\ 2 \\ -2 \end{bmatrix} to form a basis of generalized eigenvectors of A. Once found, the eigenvectors can be normalized if needed.
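The eigenvector extraction in this worked example can be checked numerically; the Python sketch below forms the matrix products and verifies that the extracted columns really are eigenvectors:

```python
# The worked example: eigenvalues 1 (multiplicity 2) and -1.
A = [[3.0, 2.0, 6.0],
     [2.0, 2.0, 5.0],
     [-2.0, -1.0, -4.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def shifted(M, s):
    """M - s*I."""
    return [[M[i][j] - s * (i == j) for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Columns of (A - I)(A + I) contain eigenvectors for the eigenvalue 1 ...
P = matmul(shifted(A, 1.0), shifted(A, -1.0))
v1 = [P[i][1] for i in range(3)]          # second column: (4, 2, -2)
assert matvec(A, v1) == v1                # A v = 1 * v

# ... and columns of (A - I)^2 contain eigenvectors for the eigenvalue -1.
Q = matmul(shifted(A, 1.0), shifted(A, 1.0))
v2 = [Q[i][0] for i in range(3)]          # first column: (-4, -4, 4)
assert matvec(A, v2) == [-x for x in v2]  # A v = -1 * v
```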
===Eigenvectors of normal 3×3 matrices===
If a 3×3 matrix A is normal, then the cross-product can be used to find eigenvectors. If \lambda is an eigenvalue of A, then the null space of A - \lambda I is perpendicular to its column space. The cross product of two independent columns of A - \lambda I will be in the null space. That is, it will be an eigenvector associated with \lambda. Since the column space is two-dimensional in this case, the eigenspace must be one-dimensional, so any other eigenvector will be parallel to it.

If A - \lambda I does not contain two independent columns but is not 0, the cross-product can still be used. In this case \lambda is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose \mathbf v is a non-zero column of A - \lambda I. Choose an arbitrary vector \mathbf u not parallel to \mathbf v. Then \mathbf v \times \mathbf u and (\mathbf v \times \mathbf u) \times \mathbf v will be perpendicular to \mathbf v and thus will be eigenvectors of \lambda.

This does not work when A is not normal, as the null space and column space do not need to be perpendicular for such matrices.

==See also==