Let \mathbf{A} be a square n \times n matrix with n linearly independent eigenvectors \mathbf{q}_i (where i = 1, \dots, n). Then \mathbf{A} can be factored as \mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} where \mathbf{Q} is the square n \times n matrix whose i-th column is the eigenvector \mathbf{q}_i of \mathbf{A}, and \mathbf{\Lambda} is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, \Lambda_{ii} = \lambda_i. Note that only
diagonalizable matrices can be factorized in this way. For example, the
defective matrix \left[ \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right] (which is a
shear matrix) cannot be diagonalized. The eigenvectors are usually normalized, but they don't have to be. A non-normalized set of eigenvectors, \mathbf{v}_i, can also be used as the columns of \mathbf{Q}. That can be understood by noting that the magnitude of the eigenvectors in \mathbf{Q} gets canceled in the decomposition by the presence of \mathbf{Q}^{-1}. If one of the eigenvalues \lambda_i has multiple linearly independent eigenvectors (that is, the geometric multiplicity of \lambda_i is greater than 1), then these eigenvectors for this eigenvalue can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if \mathbf{A} is a normal matrix, then by the spectral theorem, it is always possible to diagonalize \mathbf{A} in an orthonormal basis \{\mathbf{q}_i\}. The decomposition can be derived from the fundamental property of eigenvectors: \begin{align} \mathbf{A} \mathbf{v} &= \lambda \mathbf{v} \\ \mathbf{A} \mathbf{Q} &= \mathbf{Q} \mathbf{\Lambda} \\ \mathbf{A} &= \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} . \end{align} The linearly independent eigenvectors \mathbf{q}_i with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products \mathbf{A}\mathbf{x}, for \mathbf{x} \in \mathbb{C}^n, which is the same as the
image (or
range) of the corresponding
matrix transformation, and also the
column space of the matrix \mathbf{A}. The number of linearly independent eigenvectors with nonzero eigenvalues is equal to the rank of the matrix \mathbf{A}, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space. The linearly independent eigenvectors with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation \mathbf{A}.
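As a concrete illustration (not part of the original derivation), the factorization can be checked numerically; the matrix below is an arbitrary diagonalizable example chosen only for this sketch.

<syntaxhighlight lang="python">
import numpy as np

# Arbitrary diagonalizable matrix, chosen only for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose columns are the
# (normalized) eigenvectors: the diagonal of Lambda and the matrix Q.
eigvals, Q = np.linalg.eig(A)
Lam = np.diag(eigvals)

# Reconstruct A = Q Lambda Q^{-1} and compare with the original matrix.
A_reconstructed = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_reconstructed))  # True
</syntaxhighlight>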
=== Example ===

The 2 × 2 real matrix \mathbf{A} = \begin{bmatrix} 1 & 0 \\ 1 & 3 \\ \end{bmatrix} may be decomposed into a diagonal matrix through a similarity transformation by a non-singular matrix \mathbf{Q} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \mathbb{R}^{2\times2}. Then \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}\begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}, for some real diagonal matrix \left[ \begin{smallmatrix} x & 0 \\ 0 & y \end{smallmatrix} \right]. Multiplying both sides of the equation on the left by \mathbf{Q}: \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}. The above equation can be decomposed into two
simultaneous equations: \begin{cases} \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = \begin{bmatrix} ax \\ cx \end{bmatrix} \\[1.2ex] \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = \begin{bmatrix} by \\ dy \end{bmatrix} \end{cases} . Factoring out the
eigenvalues x and y: \begin{cases} \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = x\begin{bmatrix} a \\ c \end{bmatrix} \\[1.2ex] \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = y\begin{bmatrix} b \\ d \end{bmatrix} \end{cases} Letting \mathbf{a} = \begin{bmatrix} a \\ c \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b \\ d \end{bmatrix}, this gives us two vector equations: \begin{cases} \mathbf{A} \mathbf{a} = x \mathbf{a} \\ \mathbf{A} \mathbf{b} = y \mathbf{b} \end{cases} These can be represented by a single vector equation involving two solutions as eigenvalues: \mathbf{A} \mathbf{u} = \lambda \mathbf{u} where \lambda represents the two eigenvalues x and y, and \mathbf{u} represents the vectors \mathbf{a} and \mathbf{b}. Shifting \lambda\mathbf{u} to the left-hand side and factoring \mathbf{u} out: \left(\mathbf{A} - \lambda \mathbf{I}\right) \mathbf{u} = \mathbf{0} Since \mathbf{Q} is non-singular, it is essential that \mathbf{u} is nonzero, so \mathbf{A} - \lambda \mathbf{I} must be singular. Therefore, \det(\mathbf{A} - \lambda \mathbf{I}) = 0 Thus (1- \lambda)(3 - \lambda) = 0 giving us the solutions of the eigenvalues for the matrix \mathbf{A} as \lambda = 1 or \lambda = 3, and the resulting diagonal matrix from the eigendecomposition of \mathbf{A} is thus \left[ \begin{smallmatrix} 1 & 0 \\ 0 & 3 \end{smallmatrix} \right]. Putting the solutions back into the above simultaneous equations: \begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = 1\begin{bmatrix} a \\ c \end{bmatrix} \\[1.2ex] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = 3\begin{bmatrix} b \\ d \end{bmatrix} \end{cases} Solving the equations, we have a = -2c \quad\text{and} \quad b = 0, \qquad c,d \in \mathbb{R}\setminus\{0\}. Thus the matrix \mathbf{Q} required for the eigendecomposition of \mathbf{A} is \mathbf{Q} = \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix},\qquad c, d\in \mathbb{R}\setminus\{0\}, that is: \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix},\qquad c, d\in \mathbb{R}\setminus\{0\}. Excluding 0 from the allowed values of c and d is necessary to ensure that the matrix \mathbf{Q} is non-singular.
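As a quick numerical check of this example (an illustrative sketch; the choice c = d = 1 is arbitrary):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# One valid choice of Q from the example above, taking c = d = 1.
Q = np.array([[-2.0, 0.0],
              [ 1.0, 1.0]])

# Q^{-1} A Q should equal the diagonal matrix diag(1, 3).
D = np.linalg.inv(Q) @ A @ Q
print(np.round(D, 10))  # [[1. 0.], [0. 3.]]
</syntaxhighlight>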
=== Matrix inverse via eigendecomposition ===

If a matrix \mathbf{A} can be eigendecomposed and if none of its eigenvalues are zero, then \mathbf{A} is invertible and its inverse is given by \mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1} If \mathbf{A} is a real symmetric matrix, its eigenvectors can be chosen to be orthonormal, so \mathbf{Q} can be taken to be an orthogonal matrix and therefore \mathbf{Q}^{-1} = \mathbf{Q}^\mathrm{T}. Furthermore, because \mathbf{\Lambda} is a
diagonal matrix, its inverse is easy to calculate: \left[\mathbf{\Lambda}^{-1}\right]_{ii} = \frac{1}{\lambda_i}
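A minimal numerical sketch of this inversion route (illustrative only; the symmetric matrix below is an arbitrary example with nonzero eigenvalues):

<syntaxhighlight lang="python">
import numpy as np

# Arbitrary symmetric matrix with nonzero eigenvalues, for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For a symmetric matrix, numpy.linalg.eigh returns orthonormal eigenvectors,
# so Q^{-1} = Q^T.
eigvals, Q = np.linalg.eigh(A)

# A^{-1} = Q diag(1 / lambda_i) Q^T
A_inv = Q @ np.diag(1.0 / eigvals) @ Q.T
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
</syntaxhighlight>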
==== Practical implications ====

When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because the inversion uses the reciprocals 1/\lambda_i, so as eigenvalues become relatively small, their contribution to the inversion is large. Eigenvalues near zero, or at the "noise" level of the measurement system, will have undue influence and could hamper solutions (detection) using the inverse. Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also
Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise. The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution. The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found. The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems). If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the
Laplacian of the sorted eigenvalues: \min\left|\nabla^2 \lambda_\mathrm{s}\right| where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimum is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
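The following sketch illustrates both ideas under stated assumptions (a symmetric positive semi-definite measurement matrix with at least three eigenvalues); the helper names reliable_cutoff and truncated_inverse are hypothetical, not from the original text.

<syntaxhighlight lang="python">
import numpy as np

def reliable_cutoff(eigvals):
    """Lowest reliable eigenvalue: the point where the discrete Laplacian
    (second difference) of the rank-sorted eigenvalues is smallest."""
    lam = np.sort(eigvals)            # lambda_s: eigenvalues sorted by value
    lap = np.abs(np.diff(lam, n=2))   # |second difference| of the sorted spectrum
    return lam[np.argmin(lap) + 1]    # center point of the minimal-Laplacian stencil

def truncated_inverse(A):
    """Inverse of a symmetric positive semi-definite matrix in which
    eigenvalues below the reliable cutoff are discarded (set to zero)."""
    eigvals, Q = np.linalg.eigh(A)    # ascending eigenvalues, orthonormal Q
    cutoff = reliable_cutoff(eigvals)
    inv_lam = np.zeros_like(eigvals)
    keep = eigvals >= cutoff
    inv_lam[keep] = 1.0 / eigvals[keep]
    return Q @ np.diag(inv_lam) @ Q.T
</syntaxhighlight>

== Functional calculus ==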