Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A_0 := A . At the k-th step (starting with k = 0 ), we compute the
QR decomposition A_k = Q_k R_k , where Q_k is an
orthogonal matrix (i.e., Q_k^{\mathsf{T}} = Q_k^{-1} ) and R_k is an upper triangular matrix. We then form A_{k+1} = R_k Q_k . Note that
A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^{\mathsf{T}} A_k Q_k ,
so all the A_k are
similar and hence they have the same eigenvalues. The algorithm is
numerically stable because it proceeds by
orthogonal similarity transforms. Under certain conditions, the matrices
A_k converge to a triangular matrix, the
Schur form of
A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the
Gershgorin circle theorem provides a bound on the error. If the matrices converge, then the eigenvalues along the diagonal will appear according to their geometric multiplicity. To guarantee convergence, A must be a symmetric matrix, and for every nonzero eigenvalue \lambda there must not be a corresponding eigenvalue -\lambda . Because a single QR iteration costs \mathcal{O}(n^3) and convergence is linear, the standard QR algorithm is extremely expensive to compute, especially considering that it is not guaranteed to converge.
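In code, the crude iteration is only a few lines. The following NumPy sketch (the function name and iteration count are illustrative choices, not prescribed by the algorithm) demonstrates it on a small symmetric matrix:

<syntaxhighlight lang="python">
import numpy as np

def qr_algorithm_basic(A, num_iters=500):
    """Crude, unshifted QR iteration: A_{k+1} = R_k Q_k.

    Returns the final iterate, whose diagonal approximates the
    eigenvalues when the iteration converges.
    """
    Ak = np.array(A, dtype=float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)   # A_k = Q_k R_k
        Ak = R @ Q                # A_{k+1} = R_k Q_k = Q_k^T A_k Q_k
    return Ak

# Example: a symmetric matrix, so the iterates approach a (nearly)
# diagonal matrix whose diagonal entries are the eigenvalues.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(np.diag(qr_algorithm_basic(A))))
print(np.sort(np.linalg.eigvalsh(A)))  # reference values
</syntaxhighlight>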
== Using Hessenberg form ==
In the above crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix to upper
Hessenberg form (which costs \tfrac{10}{3} n^3 + \mathcal{O}(n^2) arithmetic operations using a technique based on
Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs 6 n^2 + \mathcal{O}(n) arithmetic operations. Moreover, because the Hessenberg form is already nearly upper triangular (it has just one nonzero subdiagonal), using it as a starting point reduces the number of steps required for convergence of the QR algorithm. If the original matrix is
symmetric, then the upper Hessenberg matrix is also symmetric and thus
tridiagonal, and so are all the A_k . In this case reaching Hessenberg form costs \tfrac{4}{3} n^3 + \mathcal{O}(n^2) arithmetic operations using a technique based on Householder reduction.
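As an illustration, the two-sided reduction can be hand-rolled with explicit Householder reflectors. The sketch below is illustrative only; a practical code would instead call a library routine such as scipy.linalg.hessenberg, which wraps the corresponding LAPACK routine:

<syntaxhighlight lang="python">
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form H = Q^T A Q by a finite
    sequence of Householder similarity transforms."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        # Build a reflector that zeroes column k below the subdiagonal.
        x = H[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue                  # column already has the right shape
        v /= norm_v
        # Apply P = I - 2 v v^T from the left ...
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        # ... and from the right, preserving similarity (and symmetry).
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H
</syntaxhighlight>

If A is symmetric, the returned H is tridiagonal, as noted above.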
== Iteration phase ==
If a Hessenberg matrix A has element a_{k,k-1} = 0 for some k, i.e., if one of the elements just below the diagonal is in fact zero, then it decomposes into blocks whose eigenproblems may be solved separately; an eigenvalue is either an eigenvalue of the submatrix of the first k-1 rows and columns, or an eigenvalue of the submatrix of the remaining rows and columns. The purpose of the QR iteration step is to shrink one of these a_{k,k-1} elements so that effectively a small block along the diagonal is split off from the bulk of the matrix. In the case of a real eigenvalue that is usually the 1 \times 1 block in the lower right corner (in which case element a_{n,n} holds that eigenvalue), whereas in the case of a pair of conjugate complex eigenvalues it is the 2 \times 2 block in the lower right corner. The
rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
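In floating point one cannot wait for an exact zero, so practical codes deflate when a subdiagonal entry is negligible relative to its neighbors. A sketch of such a test (the relative-tolerance criterion shown here is one common choice and an assumption of this sketch, not mandated by the algorithm):

<syntaxhighlight lang="python">
import numpy as np

def deflation_points(H, tol=1e-12):
    """Indices k (0-based) where the Hessenberg matrix H effectively
    decouples, i.e. |h_{k,k-1}| is negligible compared to its diagonal
    neighbors; the eigenproblem then splits into independent blocks."""
    n = H.shape[0]
    return [k for k in range(1, n)
            if abs(H[k, k - 1]) <= tol * (abs(H[k - 1, k - 1]) + abs(H[k, k]))]
</syntaxhighlight>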
== A single iteration with explicit shift ==
The steps of a QR iteration with explicit shift on a real Hessenberg matrix A are:
1. Pick a shift \mu and subtract it from all diagonal elements, producing the matrix A - \mu I . A basic strategy is to use \mu = a_{n,n} , but there are more refined strategies that would further accelerate convergence. The idea is that \mu should be close to an eigenvalue, since making this shift will accelerate convergence to that eigenvalue.
2. Perform a sequence of
Givens rotations G_1, G_2, \dots, G_{n-1} on A - \mu I , where G_i acts on rows i and i+1, and G_i is chosen to zero out position (i+1,i) of G_{i-1} \dotsb G_1 (A - \mu I) . This produces the upper triangular matrix R = G_{n-1} \dotsb G_1 (A - \mu I) . The orthogonal factor Q would be G_1^\mathrm{T} G_2^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} , but it is neither necessary nor efficient to produce it explicitly.
3. Now multiply R by the Givens matrices G_1^\mathrm{T}, G_2^\mathrm{T}, \dots, G_{n-1}^\mathrm{T} on the right, where G_i^\mathrm{T} instead acts on columns i and i+1 . This produces the matrix RQ = R G_1^\mathrm{T} G_2^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} , which is again in Hessenberg form.
4. Finally, undo the shift by adding \mu to all diagonal entries. The result is A' = RQ + \mu I . Since Q commutes with I, we have A' = Q^\mathrm{T} (A - \mu I) Q + \mu I = Q^\mathrm{T} A Q .
The purpose of the shift is to change which Givens rotations are chosen. In more detail, the structure of one of these G_i matrices is

G_i = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & c & -s & 0 \\ 0 & s & c & 0 \\ 0 & 0 & 0 & I \end{bmatrix}

where the I in the upper left corner is an (i-1) \times (i-1)
identity matrix, and the two scalars c = \cos\theta and s = \sin\theta are determined by what rotation angle \theta is appropriate for zeroing out position (i+1,i) . It is not necessary to exhibit \theta ; the factors c and s can be determined directly from elements in the matrix that G_i should act on. Nor is it necessary to produce the whole matrix; multiplication (from the left) by G_i only affects rows i and i+1 , so it is easier to just update those two rows in place. Likewise, for the Step 3 multiplication by G_i^\mathrm{T} from the right, it is sufficient to remember i, c, and s.

If using the simple \mu = a_{n,n} strategy, then at the beginning of Step 2 we have the matrix

A - a_{n,n} I = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & 0 \end{pmatrix}

where \times denotes “could be whatever”. The first Givens rotation G_1 zeroes out the (2,1) position of this, producing

G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & 0 \end{pmatrix} \text{.}

Each new rotation zeroes out another subdiagonal element, thus increasing the number of known zeroes until we are at

\begin{align} H &= G_{n-2} \dotsb G_1 (A - a_{n,n} I) \\[1ex] &= \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & h_{n-1,n-1} & h_{n-1,n} \\ 0 & 0 & 0 & h_{n,n-1} & 0 \end{pmatrix}. \end{align}

The final rotation G_{n-1} has (c,s) chosen so that s h_{n-1,n-1} + c h_{n,n-1} = 0 . If |h_{n-1,n-1}| \gg |h_{n,n-1}| , as is typically the case when we approach convergence, then c \approx 1 and |s| \ll 1 . Making this rotation produces

\begin{align} R &= G_{n-1} G_{n-2} \dotsb G_1 (A - a_{n,n} I) \\[1ex] &= \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & c h_{n-1,n} \\ 0 & 0 & 0 & 0 & s h_{n-1,n} \end{pmatrix}, \end{align}

which is our upper triangular matrix. But now we reach Step 3, and need to start rotating data between columns. The first rotation acts on columns 1 and 2, producing

R G_1^\mathrm{T} = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & c h_{n-1,n} \\ 0 & 0 & 0 & 0 & s h_{n-1,n} \end{pmatrix} \text{.}

The expected pattern is that each rotation moves some nonzero value from the diagonal out to the subdiagonal, returning the matrix to Hessenberg form. This ends at

R G_1^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & -s^2 h_{n-1,n} & cs h_{n-1,n} \end{pmatrix} \text{.}

Algebraically the form is unchanged, but numerically the element in position (n, n{-}1) has gotten a lot closer to zero: there used to be a factor s gap between it and the diagonal element above, but now the gap is more like a factor s^2 , and another iteration would make it a factor s^4 ; we have quadratic convergence.
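The four steps above translate almost directly into code. The NumPy sketch below performs one explicit-shift iteration with the simple \mu = a_{n,n} strategy, remembering only (i, c, s) for each rotation as described; it is illustrative, and production implementations instead use the implicit (Francis) shift variant:

<syntaxhighlight lang="python">
import numpy as np

def qr_step_explicit_shift(H):
    """One explicit-shift QR iteration A' = Q^T A Q on an upper
    Hessenberg matrix H, with shift mu = h_{n,n} and in-place Givens
    rotations (0-based indexing)."""
    H = np.array(H, dtype=float)
    n = H.shape[0]
    mu = H[-1, -1]
    H -= mu * np.eye(n)                 # Step 1: shift the diagonal
    rotations = []                      # remember (i, c, s) for Step 3
    for i in range(n - 1):              # Step 2: rotate rows, forming R
        a, b = H[i, i], H[i + 1, i]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0.0 else (a / r, -b / r)
        G = np.array([[c, -s], [s, c]])
        H[i:i + 2, i:] = G @ H[i:i + 2, i:]   # update only rows i, i+1
        rotations.append((i, c, s))
    for i, c, s in rotations:           # Step 3: rotate columns, R Q
        G = np.array([[c, -s], [s, c]])
        H[:i + 2, i:i + 2] = H[:i + 2, i:i + 2] @ G.T
    H += mu * np.eye(n)                 # Step 4: undo the shift
    return H
</syntaxhighlight>

Repeated application drives |h_{n,n-1}| toward zero quadratically, at which point the trailing 1 \times 1 block deflates as described in the iteration phase.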
In practice this means that \mathcal{O}(1) iterations per eigenvalue suffice for convergence, and thus overall we can complete in \mathcal{O}(n) QR steps, each of which costs a mere \mathcal{O}(n^2) arithmetic operations (or as little as \mathcal{O}(n) operations, in the case that A is symmetric).

== Visualization ==