Let A\mathbf{x} = \mathbf{b} be a square system of n linear equations, where

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.

When A and \mathbf{b} are known and \mathbf{x} is unknown, the Jacobi method can be used to approximate \mathbf{x}. The vector \mathbf{x}^{(0)} denotes the initial guess for \mathbf{x} (often x^{(0)}_i = 0 for i = 1, 2, \ldots, n). We denote \mathbf{x}^{(k)} as the k-th approximation (iteration) of \mathbf{x}, so \mathbf{x}^{(k+1)} is the next, (k+1)-th, iteration.
== Matrix-based formula ==

The matrix A can be decomposed into a diagonal component D, a strictly lower triangular part L, and a strictly upper triangular part U:

A = D + L + U \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} \text{ and } L + U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}.

The solution is then obtained iteratively via

\mathbf{x}^{(k+1)} = D^{-1} \left(\mathbf{b} - (L+U)\,\mathbf{x}^{(k)}\right).
== Element-based formula ==

The element-based formula for each row i is

x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j \ne i} a_{ij} x^{(k)}_j\right), \quad i = 1, 2, \ldots, n.

The computation of x_i^{(k+1)} requires every element of \mathbf{x}^{(k)} except x_i^{(k)} itself. Unlike the Gauss–Seidel method, we cannot overwrite x_i^{(k)} with x_i^{(k+1)}, as that value is still needed for the remaining rows of the current iteration. The minimum amount of storage is therefore two vectors of size n.

== Algorithm ==
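As a concrete starting point, the element-based formula can be sketched in plain Python; the helper name jacobi_element and the fixed iteration count are hypothetical choices for illustration:

```python
def jacobi_element(A, b, x0, num_iters):
    """Element-based Jacobi:
    x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii.

    Keeps two vectors of size n, since x^(k) must not be
    overwritten while x^(k+1) is being computed.
    """
    n = len(b)
    x = list(x0)
    for _ in range(num_iters):
        x_new = [0.0] * n            # second vector of size n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new                    # swap roles for the next sweep
    return x

# Example: 3x + y = 5, x + 2y = 5 has solution x = 1, y = 2,
# and the matrix is strictly diagonally dominant, so Jacobi converges.
A = [[3.0, 1.0],
     [1.0, 2.0]]
b = [5.0, 5.0]
x = jacobi_element(A, b, [0.0, 0.0], 50)
```

Note the two-vector structure: x holds \mathbf{x}^{(k)} while x_new accumulates \mathbf{x}^{(k+1)}, matching the storage requirement stated above.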