There are various methods for calculating the Cholesky decomposition. The computational complexity of commonly used algorithms is O(n^3) in general. The algorithms described below all involve about \tfrac{1}{3}n^3 FLOPs (\tfrac{1}{6}n^3 multiplications and the same number of additions) for real flavors and \tfrac{4}{3}n^3 FLOPs for complex flavors, where n is the size of the matrix \mathbf{A}. Hence, they have half the cost of the LU decomposition, which uses \tfrac{2}{3}n^3 FLOPs (see Trefethen and Bau 1997). Which of the algorithms below is faster depends on the details of the implementation. Generally, the first algorithm will be slightly slower because it accesses the data in a less regular manner. The Cholesky decomposition has been shown to be numerically stable without need for pivoting.
=== The Cholesky algorithm ===
The Cholesky algorithm, used to calculate the decomposition matrix \mathbf{L}, is a modified version of Gaussian elimination. The recursive algorithm starts with i := 1 and \mathbf{A}^{(1)} := \mathbf{A}. At step i, the matrix \mathbf{A}^{(i)} has the following form: \mathbf{A}^{(i)}= \begin{pmatrix} \mathbf{I}_{i-1} & 0 & 0 \\ 0 & a_{i,i} & \mathbf{b}_{i}^{*} \\ 0 & \mathbf{b}_{i} & \mathbf{B}^{(i)} \end{pmatrix}, where \mathbf{I}_{i-1} denotes the identity matrix of dimension i - 1. If the matrix \mathbf{L}_{i} is defined by \mathbf{L}_{i}:= \begin{pmatrix} \mathbf{I}_{i-1} & 0 & 0 \\ 0 & \sqrt{a_{i,i}} & 0 \\ 0 & \frac{1}{\sqrt{a_{i,i}}} \mathbf{b}_{i} & \mathbf{I}_{n-i} \end{pmatrix}, (note that a_{i,i} > 0 since \mathbf{A}^{(i)} is positive definite), then \mathbf{A}^{(i)} can be written as \mathbf{A}^{(i)} = \mathbf{L}_{i} \mathbf{A}^{(i+1)} \mathbf{L}_{i}^{*} where \mathbf{A}^{(i+1)}= \begin{pmatrix} \mathbf{I}_{i-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \mathbf{B}^{(i)} - \frac{1}{a_{i,i}} \mathbf{b}_{i} \mathbf{b}_{i}^{*} \end{pmatrix}. Note that \mathbf{b}_{i} \mathbf{b}_{i}^{*} is an outer product, therefore this algorithm is called the outer-product version in (Golub & Van Loan). This is repeated for i from 1 to n. After n steps, \mathbf{A}^{(n+1)} = \mathbf{I} is obtained, and hence, the lower triangular matrix \mathbf{L} sought for is calculated as \mathbf{L} := \mathbf{L}_{1} \mathbf{L}_{2} \dots \mathbf{L}_{n}.
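For illustration, the outer-product recursion can be sketched in a few lines of NumPy (an illustrative translation rather than a reference implementation; for simplicity it works on a complex copy of \mathbf{A}):

import numpy as np

def cholesky_outer_product(A):
    # Illustrative outer-product Cholesky: returns lower-triangular L with A = L L^*.
    A = np.array(A, dtype=complex)   # work on a copy; the trailing block is updated in place
    n = A.shape[0]
    L = np.zeros((n, n), dtype=complex)
    for i in range(n):
        d = np.sqrt(A[i, i].real)    # a_{i,i} is real and positive for Hermitian positive-definite A
        L[i, i] = d
        L[i+1:, i] = A[i+1:, i] / d  # (1/sqrt(a_{i,i})) * b_i, the nontrivial column of L_i
        # form A^(i+1): replace the trailing block by B^(i) - b_i b_i^* / a_{i,i}
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i+1:, i].conj()) / A[i, i].real
    return L

Each pass scales the current column by \sqrt{a_{i,i}} and subtracts the outer product \mathbf{b}_{i} \mathbf{b}_{i}^{*}/a_{i,i} from the trailing block, exactly as in the factorization of \mathbf{A}^{(i)} above; for a real input the result has zero imaginary part.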
=== The Cholesky–Banachiewicz and Cholesky–Crout algorithms ===
If the equation \begin{align} \mathbf{A} = \mathbf{LL}^T & = \begin{pmatrix} L_{11} & 0 & 0 \\ L_{21} & L_{22} & 0 \\ L_{31} & L_{32} & L_{33}\\ \end{pmatrix} \begin{pmatrix} L_{11} & L_{21} & L_{31} \\ 0 & L_{22} & L_{32} \\ 0 & 0 & L_{33} \end{pmatrix} \\[8pt] & = \begin{pmatrix} L_{11}^2 & &(\text{symmetric}) \\ L_{21}L_{11} & L_{21}^2 + L_{22}^2& \\ L_{31}L_{11} & L_{31}L_{21}+L_{32}L_{22} & L_{31}^2 + L_{32}^2+L_{33}^2 \end{pmatrix}, \end{align} is written out, the following is obtained: \begin{align} \mathbf{L} = \begin{pmatrix} \sqrt{A_{11}} & 0 & 0 \\ A_{21}/L_{11} & \sqrt{A_{22} - L_{21}^2} & 0 \\ A_{31}/L_{11} & \left( A_{32} - L_{31}L_{21} \right) /L_{22} &\sqrt{A_{33}- L_{31}^2 - L_{32}^2} \end{pmatrix} \end{align} and therefore the following formulas for the entries of \mathbf{L}: L_{j,j} = (\pm)\sqrt{ A_{j,j} - \sum_{k=1}^{j-1} L_{j,k}^2 }, L_{i,j} = \frac{1}{L_{j,j}} \left( A_{i,j} - \sum_{k=1}^{j-1} L_{i,k} L_{j,k} \right) \quad \text{for } i>j. For complex and real matrices, inconsequential arbitrary sign changes of diagonal and associated off-diagonal elements are allowed. The expression under the square root is always positive if \mathbf{A} is real and positive-definite. For a complex Hermitian matrix, the following formula applies: L_{j,j} = \sqrt{ A_{j,j} - \sum_{k=1}^{j-1} L_{j,k}^*L_{j,k} }, L_{i,j} = \frac{1}{L_{j,j}} \left( A_{i,j} - \sum_{k=1}^{j-1} L_{j,k}^* L_{i,k} \right) \quad \text{for } i>j, and it can be shown that L_{j,j} is always real and positive if \mathbf{A} is positive-definite. So it now is possible to compute the (i, j) entry if the entries to the left and above are known. The computation is usually arranged in either of the following orders:
• The Cholesky–Banachiewicz algorithm starts from the upper left corner of the matrix \mathbf{L} and proceeds to calculate the matrix row by row:

for (i = 0; i < dimensionSize; i++) {
    for (j = 0; j <= i; j++) {
        float sum = 0;
        for (k = 0; k < j; k++)
            sum += L[i][k] * L[j][k];

        if (i == j)
            L[i][j] = sqrt(A[i][i] - sum);
        else
            L[i][j] = (A[i][j] - sum) / L[j][j];
    }
}

The above algorithm can be succinctly expressed as combining a dot product and matrix multiplication in vectorized programming languages such as Fortran as the following:

do i = 1, size(A,1)
  L(i,i) = sqrt(A(i,i) - dot_product(L(i,1:i-1), L(i,1:i-1)))
  L(i+1:,i) = (A(i+1:,i) - matmul(L(i+1:,1:i-1), conjg(L(i,1:i-1)))) / L(i,i)
end do

where conjg refers to complex conjugate of the elements.
• The Cholesky–Crout algorithm starts from the upper left corner of the matrix \mathbf{L} and proceeds to calculate the matrix column by column:

for (j = 0; j < dimensionSize; j++) {
    float sum = 0;
    for (k = 0; k < j; k++)
        sum += L[j][k] * L[j][k];
    L[j][j] = sqrt(A[j][j] - sum);

    for (i = j + 1; i < dimensionSize; i++) {
        sum = 0;
        for (k = 0; k < j; k++)
            sum += L[i][k] * L[j][k];
        L[i][j] = (A[i][j] - sum) / L[j][j];
    }
}

The above algorithm can be succinctly expressed as combining a dot product and matrix multiplication in vectorized programming languages such as Fortran as the following:

do i = 1, size(A,1)
  L(i,i) = sqrt(A(i,i) - dot_product(L(1:i-1,i), L(1:i-1,i)))
  L(i,i+1:) = (A(i,i+1:) - matmul(conjg(L(1:i-1,i)), L(1:i-1,i+1:))) / L(i,i)
end do

where conjg refers to complex conjugate of the elements. (Here the factor is accumulated in the upper triangle, i.e. the code builds \mathbf{L}^* row by row, which is the same access pattern as building \mathbf{L} column by column.)

Either pattern of access allows the entire computation to be performed in-place if desired.
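As a cross-check on the recurrences, here is a short NumPy sketch of the row-by-row (Banachiewicz) ordering, verified against numpy.linalg.cholesky (the helper name is illustrative):

import numpy as np

def cholesky_banachiewicz(A):
    # Row-by-row Cholesky from the scalar recurrences above (Hermitian positive-definite A).
    n = A.shape[0]
    L = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(i + 1):
            s = np.dot(L[i, :j], L[j, :j].conj())      # sum_k L_{i,k} * conj(L_{j,k})
            if i == j:
                L[i, j] = np.sqrt((A[i, i] - s).real)  # diagonal entries are real and positive
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]
    return L

# Quick check against NumPy's built-in factorization
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)   # symmetric positive definite by construction
assert np.allclose(cholesky_banachiewicz(A), np.linalg.cholesky(A))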
=== Stability of the computation ===
Suppose that there is a desire to solve a well-conditioned system of linear equations. If the LU decomposition is used, then the algorithm is unstable unless some sort of pivoting strategy is used. In the latter case, the error depends on the so-called growth factor of the matrix, which is usually (but not always) small. Now, suppose that the Cholesky decomposition is applicable. As mentioned above, the algorithm will be twice as fast. Furthermore, no pivoting is necessary, and the error will always be small. Specifically, if \mathbf{A} \mathbf{x} = \mathbf{b}, and \mathbf{y} denotes the computed solution, then \mathbf{y} solves the perturbed system (\mathbf{A} + \mathbf{E}) \mathbf{y} = \mathbf{b}, where \|\mathbf{E}\|_2 \le c_n \varepsilon \|\mathbf{A}\|_2. Here \|\cdot\|_2 is the matrix 2-norm, c_n is a small constant depending on n, and \varepsilon denotes the unit round-off. One concern with the Cholesky decomposition to be aware of is the use of square roots. If the matrix being factorized is positive definite as required, the numbers under the square roots are always positive
in exact arithmetic. Unfortunately, the numbers can become negative because of
round-off errors, in which case the algorithm cannot continue. However, this can only happen if the matrix is very ill-conditioned. One way to address this is to add a diagonal correction matrix to the matrix being decomposed in an attempt to promote the positive-definiteness. While this might lessen the accuracy of the decomposition, it can be very favorable for other reasons; for example, when performing
Newton's method in optimization, adding a diagonal matrix can improve stability when far from the optimum.
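A minimal sketch of that correction idea, assuming a NumPy environment (the geometric shift schedule, the starting value, and the function name are illustrative choices, not a prescribed scheme):

import numpy as np

def shifted_cholesky(A, beta=1e-10, max_tries=20):
    # Try to factor A; on failure retry with A + tau*I for increasing tau.
    tau = 0.0
    for _ in range(max_tries):
        try:
            return np.linalg.cholesky(A + tau * np.eye(A.shape[0])), tau
        except np.linalg.LinAlgError:
            tau = max(beta, 10.0 * tau)   # grow the diagonal correction and retry
    raise np.linalg.LinAlgError("no positive-definite shift found")

The returned shift tau indicates how much the matrix had to be perturbed; in a Newton-type method it plays the role of the diagonal matrix added when far from the optimum.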
=== LDL decomposition ===
An alternative form, eliminating the need to take square roots when \mathbf{A} is symmetric, is the symmetric indefinite factorization \begin{align} \mathbf{A} = \mathbf{LDL}^\mathrm{T} & = \begin{pmatrix} 1 & 0 & 0 \\ L_{21} & 1 & 0 \\ L_{31} & L_{32} & 1\\ \end{pmatrix} \begin{pmatrix} D_1 & 0 & 0 \\ 0 & D_2 & 0 \\ 0 & 0 & D_3\\ \end{pmatrix} \begin{pmatrix} 1 & L_{21} & L_{31} \\ 0 & 1 & L_{32} \\ 0 & 0 & 1\\ \end{pmatrix} \\[8pt] & = \begin{pmatrix} D_1 & &(\mathrm{symmetric}) \\ L_{21}D_1 & L_{21}^2D_1 + D_2& \\ L_{31}D_1 & L_{31}L_{21}D_{1}+L_{32}D_2 & L_{31}^2D_1 + L_{32}^2D_2+D_3 \end{pmatrix}. \end{align} The following recursive relations apply for the entries of \mathbf{D} and \mathbf{L}: D_j = A_{jj} - \sum_{k=1}^{j-1} L_{jk}^2 D_k, L_{ij} = \frac{1}{D_j} \left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk} D_k \right) \quad \text{for } i>j. This works as long as the generated diagonal elements in \mathbf{D} stay non-zero. The decomposition is then unique. \mathbf{D} and \mathbf{L} are real if \mathbf{A} is real. For a complex Hermitian matrix \mathbf{A}, the following formula applies: D_{j} = A_{jj} - \sum_{k=1}^{j-1} L_{jk}L_{jk}^* D_k, L_{ij} = \frac{1}{D_j} \left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk}^* D_k \right) \quad \text{for } i>j. Again, the pattern of access allows the entire computation to be performed in-place if desired.
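The recursions translate directly into code; a minimal NumPy sketch for the real symmetric case (illustrative only, no pivoting):

import numpy as np

def ldl_decomposition(A):
    # LDL^T from the recurrences above, for real symmetric A with non-zero D_j.
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        D[j] = A[j, j] - np.sum(L[j, :j]**2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D   # A should equal L @ np.diag(D) @ L.T up to rounding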
=== Block variant ===
When used on indefinite matrices, the
LDL* factorization is known to be unstable without careful pivoting; specifically, the elements of the factorization can grow arbitrarily. A possible improvement is to perform the factorization on block sub-matrices, commonly 2 × 2: \begin{align} \mathbf{A} = \mathbf{LDL}^\mathrm{T} & = \begin{pmatrix} \mathbf I & 0 & 0 \\ \mathbf L_{21} & \mathbf I & 0 \\ \mathbf L_{31} & \mathbf L_{32} & \mathbf I\\ \end{pmatrix} \begin{pmatrix} \mathbf D_1 & 0 & 0 \\ 0 & \mathbf D_2 & 0 \\ 0 & 0 & \mathbf D_3\\ \end{pmatrix} \begin{pmatrix} \mathbf I & \mathbf L_{21}^\mathrm T & \mathbf L_{31}^\mathrm T \\ 0 & \mathbf I & \mathbf L_{32}^\mathrm T \\ 0 & 0 & \mathbf I\\ \end{pmatrix} \\[8pt] & = \begin{pmatrix} \mathbf D_1 & &(\mathrm{symmetric}) \\ \mathbf L_{21} \mathbf D_1 & \mathbf L_{21} \mathbf D_1 \mathbf L_{21}^\mathrm T + \mathbf D_2& \\ \mathbf L_{31} \mathbf D_1 & \mathbf L_{31} \mathbf D_{1} \mathbf L_{21}^\mathrm T + \mathbf L_{32} \mathbf D_2 & \mathbf L_{31} \mathbf D_1 \mathbf L_{31}^\mathrm T + \mathbf L_{32} \mathbf D_2 \mathbf L_{32}^\mathrm T + \mathbf D_3 \end{pmatrix}, \end{align} where every element in the matrices above is a square submatrix. From this, these analogous recursive relations follow: \mathbf D_j = \mathbf A_{jj} - \sum_{k=1}^{j-1} \mathbf L_{jk} \mathbf D_k \mathbf L_{jk}^\mathrm T, \mathbf L_{ij} = \left(\mathbf A_{ij} - \sum_{k=1}^{j-1} \mathbf L_{ik} \mathbf D_k \mathbf L_{jk}^\mathrm T\right) \mathbf D_j^{-1}. This involves matrix products and explicit inversion, thus limiting the practical block size.
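To make the block recursion concrete, here is an unpivoted NumPy sketch with a fixed block size (the stable variants discussed above instead choose 1 × 1 and 2 × 2 pivot blocks adaptively; the fixed size and function name here are purely for illustration):

import numpy as np

def block_ldl(A, b=2):
    # Unpivoted block LDL^T with fixed block size b; illustrates the recursion only,
    # and requires every leading block D_j to be invertible.
    n = A.shape[0]
    assert n % b == 0, "for simplicity, require the block size to divide n"
    m = n // b
    L = np.zeros((n, n))
    D = np.zeros((n, n))
    blk = lambda i, j: (slice(i*b, (i+1)*b), slice(j*b, (j+1)*b))
    for j in range(m):
        S = A[blk(j, j)].copy()
        for k in range(j):
            S -= L[blk(j, k)] @ D[blk(k, k)] @ L[blk(j, k)].T   # D_j = A_jj - sum_k L_jk D_k L_jk^T
        D[blk(j, j)] = S
        L[blk(j, j)] = np.eye(b)
        for i in range(j + 1, m):
            S = A[blk(i, j)].copy()
            for k in range(j):
                S -= L[blk(i, k)] @ D[blk(k, k)] @ L[blk(j, k)].T
            # L_ij = (...) D_j^{-1}: solve against D_j rather than forming the inverse explicitly
            L[blk(i, j)] = np.linalg.solve(D[blk(j, j)].T, S.T).T
    return L, D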
=== Updating the decomposition ===
A task that often arises in practice is that one needs to update a Cholesky decomposition. In more detail, one has already computed the Cholesky decomposition \mathbf{A} = \mathbf{L}\mathbf{L}^* of some matrix \mathbf{A}, then one changes the matrix \mathbf{A} in some way into another matrix, say \tilde{\mathbf{A}}, and one wants to compute the Cholesky decomposition of the updated matrix: \tilde{\mathbf{A}} = \tilde{\mathbf{L}} \tilde{\mathbf{L}}^*. The question is now whether one can use the Cholesky decomposition of \mathbf{A} that was computed before to compute the Cholesky decomposition of \tilde{\mathbf{A}}.
=== Rank-one update ===
The specific case, where the updated matrix \tilde{\mathbf{A}} is related to the matrix \mathbf{A} by \tilde{\mathbf{A}} = \mathbf{A} + c\,\mathbf{x} \mathbf{x}^*, is known as a
rank-one update. Here the constant c is allowed to be negative, but must always be such that the new matrix \tilde{\mathbf{A}} is still positive definite. Here is a function written in Matlab syntax that realizes a rank-one update:

function L = updateChol(L, x, c)
% given the L*L' Cholesky decomposition of a matrix, compute the updated
% factor L so that we have the Cholesky decomposition of L*L' + c*x*x'
n = length(x);
for k = 1:n-1
    l = L(:,k);                        % old value of k-th column
    lk = l(k);
    xk = x(k);
    dk = sqrt(lk^2 + c*xk^2);          % new diagonal value
    L(:,k) = (lk/dk)*l + (c*xk/dk)*x;  % new column value
    x = x - l*(xk/lk);
    c = c*(lk/dk)^2;
end
L(n,n) = sqrt(L(n,n)^2 + c*x(n)^2);
end

A
rank-n update is one where for a matrix \mathbf{M} one updates the decomposition such that \tilde{\mathbf{A}} = \mathbf{A} + \mathbf{M} \mathbf{M}^* . This can be achieved by successively performing rank-one updates for each of the columns of \mathbf{M}.
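A NumPy transcription of the same idea, looped over the columns of \mathbf{M} (function names are illustrative; the rank-one routine mirrors the Matlab function above, real case only):

import numpy as np

def chol_update(L, x, c):
    # Rank-one update: return the factor of L @ L.T + c * np.outer(x, x).
    L, x = L.copy(), x.astype(float).copy()
    n = len(x)
    for k in range(n - 1):
        l = L[:, k].copy()                       # old value of k-th column
        dk = np.sqrt(l[k]**2 + c * x[k]**2)      # new diagonal value
        L[:, k] = (l[k] / dk) * l + (c * x[k] / dk) * x
        x -= l * (x[k] / l[k])
        c *= (l[k] / dk)**2
    L[n-1, n-1] = np.sqrt(L[n-1, n-1]**2 + c * x[n-1]**2)
    return L

def chol_rank_n_update(L, M):
    # Rank-n update via successive rank-one updates over the columns of M.
    for j in range(M.shape[1]):
        L = chol_update(L, M[:, j], 1.0)
    return L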
=== Adding and removing rows and columns ===
If a symmetric and positive definite matrix \mathbf A is represented in block form as \mathbf{A} = \begin{pmatrix} \mathbf A_{11} & \mathbf A_{13} \\ \mathbf A_{13}^{\mathrm{T}} & \mathbf A_{33} \\ \end{pmatrix} and its upper Cholesky factor is \mathbf{L} = \begin{pmatrix} \mathbf L_{11} & \mathbf L_{13} \\ 0 & \mathbf L_{33} \\ \end{pmatrix}, then for a new matrix \tilde{\mathbf{A}}, which is the same as \mathbf A but with the insertion of new rows and columns, \begin{align} \tilde{\mathbf{A}} &= \begin{pmatrix} \mathbf A_{11} & \mathbf A_{12} & \mathbf A_{13} \\ \mathbf A_{12}^{\mathrm{T}} & \mathbf A_{22} & \mathbf A_{23} \\ \mathbf A_{13}^{\mathrm{T}} & \mathbf A_{23}^{\mathrm{T}} & \mathbf A_{33} \\ \end{pmatrix}, \end{align} there is an interest in finding the Cholesky factorization of \tilde{\mathbf{A}}, which can be called \tilde{\mathbf S}, without directly computing the entire decomposition: \begin{align} \tilde{\mathbf{S}} &= \begin{pmatrix} \mathbf S_{11} & \mathbf S_{12} & \mathbf S_{13} \\ 0 & \mathbf S_{22} & \mathbf S_{23} \\ 0 & 0 & \mathbf S_{33} \\ \end{pmatrix}. \end{align} Writing \mathbf A \setminus \mathbf{b} for the solution of \mathbf A \mathbf x = \mathbf b, which can be found easily for triangular matrices, and \text{chol}(\mathbf M) for the Cholesky decomposition of \mathbf M, the following relations can be found: \begin{align} \mathbf S_{11} &= \mathbf L_{11}, \\ \mathbf S_{12} &= \mathbf L_{11}^{\mathrm{T}} \setminus \mathbf A_{12}, \\ \mathbf S_{13} &= \mathbf L_{13}, \\ \mathbf S_{22} &= \mathrm{chol} \left(\mathbf A_{22} - \mathbf S_{12}^{\mathrm{T}} \mathbf S_{12}\right), \\ \mathbf S_{23} &= \mathbf S_{22}^{\mathrm{T}} \setminus \left(\mathbf A_{23} - \mathbf S_{12}^{\mathrm{T}} \mathbf S_{13}\right), \\ \mathbf S_{33} &= \mathrm{chol} \left(\mathbf L_{33}^{\mathrm{T}} \mathbf L_{33} - \mathbf S_{23}^{\mathrm{T}} \mathbf S_{23}\right). \end{align} These formulas may be used to determine the Cholesky factor after the insertion of rows or columns in any position, if the row and column dimensions are appropriately set (including to zero). The inverse problem, \begin{align} \tilde{\mathbf{A}} &= \begin{pmatrix} \mathbf A_{11} & \mathbf A_{12} & \mathbf A_{13} \\ \mathbf A_{12}^{\mathrm{T}} & \mathbf A_{22} & \mathbf A_{23} \\ \mathbf A_{13}^{\mathrm{T}} & \mathbf A_{23}^{\mathrm{T}} & \mathbf A_{33} \\ \end{pmatrix}, \end{align} with known Cholesky decomposition \begin{align} \tilde{\mathbf{S}} &= \begin{pmatrix} \mathbf S_{11} & \mathbf S_{12} & \mathbf S_{13} \\ 0 & \mathbf S_{22} & \mathbf S_{23} \\ 0 & 0 & \mathbf S_{33} \\ \end{pmatrix}, \end{align} and the desire to determine the Cholesky factor \begin{align} \mathbf{L} &= \begin{pmatrix} \mathbf L_{11} & \mathbf L_{13} \\ 0 & \mathbf L_{33} \\ \end{pmatrix} \end{align} of the matrix \mathbf A with rows and columns removed, \begin{align} \mathbf{A} &= \begin{pmatrix} \mathbf A_{11} & \mathbf A_{13} \\ \mathbf A_{13}^{\mathrm{T}} & \mathbf A_{33} \\ \end{pmatrix}, \end{align} yields the following rules: \begin{align} \mathbf L_{11} &= \mathbf S_{11}, \\ \mathbf L_{13} &= \mathbf S_{13}, \\ \mathbf L_{33} &= \mathrm{chol} \left(\mathbf S_{33}^{\mathrm{T}} \mathbf S_{33} + \mathbf S_{23}^{\mathrm{T}} \mathbf S_{23}\right). \end{align} Notice that the equations above that involve finding the Cholesky decomposition of a new matrix are all of the form \tilde{\mathbf{A}} = \mathbf{A} + c\, \mathbf{x} \mathbf{x}^* for some constant c = \pm 1, which allows them to be efficiently calculated using the procedure detailed in the previous section.
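The insertion relations can be transcribed directly; below is a direct (not update-based) sketch, assuming SciPy's solve_triangular, the upper-factor convention \mathbf A = \mathbf L^{\mathrm{T}} \mathbf L, and an illustrative function name:

import numpy as np
from scipy.linalg import solve_triangular

def chol_insert(L, A12, A22, A23, p):
    # Given the upper factor L of A (A = L.T @ L), return the upper factor of
    # A with new rows/columns (blocks A12, A22, A23) inserted at position p.
    L11, L13 = L[:p, :p], L[:p, p:]
    L33 = L[p:, p:]
    S11, S13 = L11, L13
    S12 = solve_triangular(L11, A12, trans='T', lower=False)   # L11^T \ A12
    S22 = np.linalg.cholesky(A22 - S12.T @ S12).T              # chol(...), upper factor
    S23 = solve_triangular(S22, A23 - S12.T @ S13, trans='T', lower=False)
    S33 = np.linalg.cholesky(L33.T @ L33 - S23.T @ S23).T
    n1, n2, n3 = S11.shape[0], S22.shape[0], S33.shape[0]
    S = np.zeros((n1 + n2 + n3, n1 + n2 + n3))
    S[:n1, :n1], S[:n1, n1:n1+n2], S[:n1, n1+n2:] = S11, S12, S13
    S[n1:n1+n2, n1:n1+n2], S[n1:n1+n2, n1+n2:] = S22, S23
    S[n1+n2:, n1+n2:] = S33
    return S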
== Proof for positive semi-definite matrices ==