==Gaussian elimination==
From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix A into its LU factorization, which Gaussian elimination accomplishes by left-multiplying A by a succession of matrices L_{m-1} \cdots L_2 L_1 A = U until U is upper triangular and L is lower triangular, where L \equiv L_1^{-1} L_2^{-1} \cdots L_{m-1}^{-1}. Naive implementations of Gaussian elimination are notoriously unstable: when a small pivot element is encountered, rounding errors can grow enormously and swamp the computed answer. The simplest remedy is to introduce pivoting, which reorders the rows at each step and produces a modified Gaussian elimination algorithm that is stable in practice.
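A minimal NumPy sketch of the pivoted factorization (the function name and test matrix are invented for this example; practical code would call a library routine such as scipy.linalg.lu):

<syntaxhighlight lang="python">
import numpy as np

def lu_partial_pivoting(A):
    """Illustrative sketch: factor P A = L U with partial pivoting."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(m)
    P = np.eye(m)
    for k in range(m - 1):
        # Partial pivoting: bring the largest entry in column k into the pivot position.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p], k:] = U[[p, k], k:]
        L[[k, p], :k] = L[[p, k], :k]
        P[[k, p], :] = P[[p, k], :]
        # Eliminate the entries below the pivot.
        for j in range(k + 1, m):
            L[j, k] = U[j, k] / U[k, k]
            U[j, k:] -= L[j, k] * U[k, k:]
    return P, L, U

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])  # arbitrary example
P, L, U = lu_partial_pivoting(A)
assert np.allclose(P @ A, L @ U)
</syntaxhighlight>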
==Solutions of linear systems==
Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. To solve the linear system Ax = b, the traditional algebraic approach is to understand the solution x = A^{-1}b as the product of A^{-1} with b. Numerical linear algebra instead interprets x as the vector of coefficients of the linear expansion of b in the basis formed by the columns of A. Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and the vectors x and b, which may make one factorization much easier to obtain than others. If A = QR is a QR factorization of A, then Ax = b is equivalent to Rx = Q^\ast b; since R is upper triangular, this system can be solved cheaply by back substitution once the factorization has been computed.
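A brief SciPy sketch of this approach, with an arbitrary example system:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import qr, solve_triangular

A = np.array([[4., 1.], [2., 3.]])  # arbitrary example system
b = np.array([1., 2.])

Q, R = qr(A)                        # A = QR, with R upper triangular
x = solve_triangular(R, Q.T @ b)    # back substitution on Rx = Q* b

assert np.allclose(A @ x, b)
</syntaxhighlight>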
If A = X \Lambda X^{-1} is an eigendecomposition of A, and we seek x so that Ax = b, then with b' = X^{-1}b and x' = X^{-1}x we have b' = \Lambda x', a diagonal system that is trivial to solve. This is closely related to solving the linear system using the singular value decomposition, because the singular values of a matrix are the square roots of the eigenvalues of the Gram matrix A^{\ast} A; when A is normal, they also equal the absolute values of the eigenvalues of A.
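A NumPy sketch of the diagonalization approach, assuming A is diagonalizable with nonzero eigenvalues (the example matrix is arbitrary); the last lines also verify the stated relation between singular values and the Gram matrix:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[3., 1.], [0., 2.]])  # arbitrary diagonalizable example
b = np.array([1., 1.])

lam, X = np.linalg.eig(A)           # A = X diag(lam) X^{-1}
b_prime = np.linalg.solve(X, b)     # b' = X^{-1} b
x_prime = b_prime / lam             # diagonal system b' = Lambda x'
x = X @ x_prime                     # map back: x = X x'
assert np.allclose(A @ x, b)

# Singular values are the square roots of the eigenvalues of the Gram matrix A* A.
s = np.linalg.svd(A, compute_uv=False)
gram_eigs = np.linalg.eigvalsh(A.T @ A)
assert np.allclose(np.sort(s), np.sort(np.sqrt(gram_eigs)))
</syntaxhighlight>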
If A = LU is an LU factorization of A, then Ax = b can be solved by two triangular systems: forward substitution on Ly = b, followed by back substitution on Ux = y.
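A sketch of the two triangular solves using SciPy, whose lu routine returns the factorization in the form A = PLU (example data arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])  # arbitrary example
b = np.array([1., 2., 3.])

P, L, U = lu(A)                                 # SciPy convention: A = P L U
y = solve_triangular(L, P.T @ b, lower=True)    # forward substitution: L y = P^T b
x = solve_triangular(U, y)                      # back substitution:    U x = y
assert np.allclose(A @ x, b)
</syntaxhighlight>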
==Least squares optimisation==
Matrix decompositions suggest a number of ways to solve the linear least squares problem, in which we seek the x that minimizes the norm of the residual r = b − Ax, as in the regression problem. The QR approach solves this problem by computing the reduced QR factorization A = \widehat{Q}\widehat{R} and rearranging to obtain the upper triangular system \widehat{R}x = \widehat{Q}^\ast b, which can then be solved for x by back substitution. The SVD also suggests an algorithm for obtaining linear least squares: by computing the reduced SVD decomposition A = \widehat{U}\widehat{\Sigma}V^\ast and then computing the vector \widehat{U}^\ast b, we reduce the least squares problem to a simple diagonal system.
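A SciPy sketch of both approaches on an arbitrary small regression problem; the final check compares against numpy.linalg.lstsq:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import qr, svd, solve_triangular

# Overdetermined system: fit a line to four points (illustrative data).
A = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
b = np.array([0.1, 0.9, 2.1, 2.9])

# Least squares via the reduced QR factorization: R_hat x = Q_hat* b.
Q_hat, R_hat = qr(A, mode='economic')
x_qr = solve_triangular(R_hat, Q_hat.T @ b)

# Least squares via the reduced SVD: x = V Sigma_hat^{-1} U_hat* b.
U_hat, s, Vt = svd(A, full_matrices=False)
x_svd = Vt.T @ ((U_hat.T @ b) / s)

assert np.allclose(x_qr, x_svd)
assert np.allclose(x_qr, np.linalg.lstsq(A, b, rcond=None)[0])
</syntaxhighlight>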
The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram-Schmidt algorithm and Householder methods.

==Conditioning and stability==