# Chapter 89. Jacobi Method

The Jacobi method is a stationary iterative method for solving a linear system

$$
Ax=b.
$$

It is one of the simplest iterative solvers in numerical linear algebra. The method updates each unknown by using the values from the previous iteration. This makes it easy to understand, easy to implement, and naturally parallel.

The method is most useful as a basic model for iterative solution, as a smoother in multigrid methods, and as a simple preconditioner. By itself, it may converge slowly, but it introduces the main ideas behind stationary methods: matrix splitting, iteration matrices, residuals, convergence criteria, and stopping rules.

## 89.1 The Basic Problem

Let

$$
A=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix},
\qquad
x=
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{bmatrix},
\qquad
b=
\begin{bmatrix}
b_1\\
b_2\\
\vdots\\
b_n
\end{bmatrix}.
$$

The system

$$
Ax=b
$$

is equivalent to the equations

$$
a_{i1}x_1+a_{i2}x_2+\cdots+a_{in}x_n=b_i,
\qquad i=1,\ldots,n.
$$

Assume that

$$
a_{ii}\ne 0
$$

for every \(i\). Then the \(i\)-th equation can be solved for \(x_i\):

$$
x_i =
\frac{1}{a_{ii}}
\left(
b_i-\sum_{j\ne i}a_{ij}x_j
\right).
$$

The Jacobi method turns this identity into an iteration.

## 89.2 Jacobi Update Formula

Starting from an initial guess

$$
x^{(0)},
$$

the Jacobi method defines

$$
x_i^{(k+1)} =
\frac{1}{a_{ii}}
\left(
b_i-\sum_{j\ne i}a_{ij}x_j^{(k)}
\right),
\qquad i=1,\ldots,n.
$$

All entries of \(x^{(k+1)}\) are computed from entries of \(x^{(k)}\), not from entries already updated during the same iteration.

This is the defining feature of the Jacobi method.

The update is simultaneous:

$$
x^{(k)}
\longmapsto
x^{(k+1)}.
$$

Each component uses old values from the previous iterate.

## 89.3 Example

Consider the system

$$
\begin{aligned}
4x_1+x_2 &= 9,\\
x_1+3x_2 &= 7.
\end{aligned}
$$

Solving each equation for its diagonal variable gives

$$
x_1=\frac{9-x_2}{4},
\qquad
x_2=\frac{7-x_1}{3}.
$$

Thus the Jacobi iteration is

$$
x_1^{(k+1)}=\frac{9-x_2^{(k)}}{4},
\qquad
x_2^{(k+1)}=\frac{7-x_1^{(k)}}{3}.
$$

Start with

$$
x^{(0)}=
\begin{bmatrix}
0\\
0
\end{bmatrix}.
$$

Then

$$
x_1^{(1)}=\frac{9}{4}=2.25,
\qquad
x_2^{(1)}=\frac{7}{3}\approx 2.3333.
$$

So

$$
x^{(1)} =
\begin{bmatrix}
2.25\\
2.3333
\end{bmatrix}.
$$

Next,

$$
x_1^{(2)} =
\frac{9-2.3333}{4}
\approx 1.6667,
$$

$$
x_2^{(2)} =
\frac{7-2.25}{3}
\approx 1.5833.
$$

The exact solution is

$$
x=
\begin{bmatrix}
20/11\\
19/11
\end{bmatrix}
\approx
\begin{bmatrix}
1.8182\\
1.7273
\end{bmatrix}.
$$

The iterates move toward this solution.
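
A short script makes it easy to watch the iterates approach this limit. The following is a minimal sketch in plain Python (not part of the chapter's pseudocode); it simply applies the two update formulas above.

```python
# The 2x2 system from this section:
#   4*x1 +   x2 = 9
#     x1 + 3*x2 = 7
x1, x2 = 0.0, 0.0  # initial guess x^(0) = (0, 0)

for _ in range(20):
    # Both components are computed from the OLD values (simultaneous update).
    x1_new = (9.0 - x2) / 4.0
    x2_new = (7.0 - x1) / 3.0
    x1, x2 = x1_new, x2_new

print(x1, x2)                # approaches the exact solution
print(20.0 / 11, 19.0 / 11)  # exact solution (20/11, 19/11)
```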

## 89.4 Matrix Splitting

Write the matrix \(A\) as

$$
A = D + L + U,
$$

where:

| Symbol | Meaning |
|---|---|
| \(D\) | diagonal part of \(A\) |
| \(L\) | strictly lower triangular part of \(A\) |
| \(U\) | strictly upper triangular part of \(A\) |

Then

$$
Ax=b
$$

becomes

$$
(D+L+U)x=b.
$$

Move the off-diagonal terms to the right side:

$$
Dx=-(L+U)x+b.
$$

The Jacobi iteration is

$$
Dx^{(k+1)} =
-(L+U)x^{(k)}+b.
$$

Since \(D\) is diagonal, solving with \(D\) is cheap:

$$
x^{(k+1)} =
-D^{-1}(L+U)x^{(k)}+D^{-1}b.
$$

Thus the Jacobi iteration matrix is

$$
G_J=-D^{-1}(L+U).
$$
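
As an illustration, the splitting and the iteration matrix can be formed directly. This is a sketch assuming NumPy, reusing the \(2\times 2\) system from Section 89.3:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # the matrix from Section 89.3

D = np.diag(np.diag(A))  # diagonal part
L = np.tril(A, k=-1)     # strictly lower triangular part
U = np.triu(A, k=1)      # strictly upper triangular part

# Jacobi iteration matrix G_J = -D^{-1}(L + U).
G_J = -np.linalg.solve(D, L + U)
print(G_J)
```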

## 89.5 Fixed-Point Form

The Jacobi method has the fixed-point form

$$
x^{(k+1)}=G_Jx^{(k)}+c,
$$

where

$$
G_J=-D^{-1}(L+U),
\qquad
c=D^{-1}b.
$$

If the iteration converges to a vector \(x\), then

$$
x=G_Jx+c.
$$

Substituting the definitions gives

$$
x=-D^{-1}(L+U)x+D^{-1}b.
$$

Multiplying by \(D\),

$$
Dx=-(L+U)x+b.
$$

Therefore,

$$
(D+L+U)x=b.
$$

So the limit is a solution of the original system.

## 89.6 Error Recurrence

Let \(x\) be the exact solution. The exact solution satisfies

$$
x=G_Jx+c.
$$

The computed iterate satisfies

$$
x^{(k+1)}=G_Jx^{(k)}+c.
$$

Subtracting gives

$$
x-x^{(k+1)} =
G_J(x-x^{(k)}).
$$

Thus the error

$$
e^{(k)}=x-x^{(k)}
$$

satisfies

$$
e^{(k+1)}=G_Je^{(k)}.
$$

By repeated substitution,

$$
e^{(k)}=G_J^k e^{(0)}.
$$

The convergence of Jacobi is therefore controlled entirely by powers of the iteration matrix.
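
A quick numerical check of this identity, as a sketch assuming NumPy and reusing the \(2\times 2\) system from Section 89.3:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([9.0, 7.0])
x_exact = np.linalg.solve(A, b)

D = np.diag(np.diag(A))
G_J = -np.linalg.solve(D, A - D)  # iteration matrix (A - D = L + U)
c = np.linalg.solve(D, b)         # c = D^{-1} b

x = np.zeros(2)      # x^(0)
e0 = x_exact - x     # e^(0)
for k in range(1, 6):
    x = G_J @ x + c  # one fixed-point step
    predicted = np.linalg.matrix_power(G_J, k) @ e0
    print(np.allclose(x_exact - x, predicted))  # True at every iteration
```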

## 89.7 Convergence Criterion

The Jacobi method converges for every initial guess if and only if

$$
\rho(G_J)<1,
$$

where \(\rho(G_J)\) is the spectral radius of \(G_J\). The spectral radius is the largest absolute value of the eigenvalues of \(G_J\).

This is the standard convergence condition for stationary linear iterations of the form \(x^{(k+1)}=Gx^{(k)}+c\); for Jacobi, it is applied with \(G=G_J\).

If

$$
\rho(G_J)<1,
$$

then

$$
G_J^k \to 0
$$

as

$$
k\to\infty.
$$

Consequently,

$$
e^{(k)}=G_J^ke^{(0)}\to 0.
$$

The iterates converge to the exact solution.
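
For small dense matrices, the spectral radius can be evaluated directly from the eigenvalues. A minimal sketch, assuming NumPy:

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix of A."""
    D = np.diag(np.diag(A))
    G_J = -np.linalg.solve(D, A - D)  # A - D = L + U
    return max(abs(np.linalg.eigvals(G_J)))

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
print(jacobi_spectral_radius(A))  # about 0.2887 < 1, so Jacobi converges
```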

## 89.8 Diagonal Dominance

A common sufficient condition for convergence is strict diagonal dominance.

A matrix \(A\) is strictly row diagonally dominant if

$$
|a_{ii}|
>
\sum_{j\ne i}|a_{ij}|
$$

for every row \(i\).

This means each diagonal entry is larger in magnitude than the sum of the magnitudes of the off-diagonal entries in the same row.

If \(A\) is strictly row diagonally dominant, the Jacobi method converges for every initial guess. Section 89.9 explains why.

Strict diagonal dominance is sufficient, but not necessary. Some matrices that are not strictly diagonally dominant still allow Jacobi to converge.

## 89.9 Why Diagonal Dominance Helps

The Jacobi update divides each equation by its diagonal coefficient.

If the diagonal coefficient dominates the off-diagonal coefficients, then the new value of \(x_i\) depends only weakly on the other variables.

In that case, every row of the iteration matrix has an absolute sum below \(1\), so the iteration contracts errors in the infinity norm.

For example, if

$$
|a_{ii}|
>
\sum_{j\ne i}|a_{ij}|,
$$

then the sum of the absolute values in row \(i\) of \(G_J\) is less than \(1\):

$$
\sum_{j=1}^n |(G_J)_{ij}| =
\sum_{j\ne i}
\left|
\frac{a_{ij}}{a_{ii}}
\right|
<1.
$$

Thus

$$
\|G_J\|_\infty < 1.
$$

Since

$$
\rho(G_J)\le \|G_J\|_\infty,
$$

we get

$$
\rho(G_J)<1.
$$

Therefore, Jacobi converges.
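
The row-sum bound above is easy to verify numerically. The following sketch (assuming NumPy) checks strict row diagonal dominance and evaluates \(\|G_J\|_\infty\):

```python
import numpy as np

def is_strictly_row_dominant(A):
    """True if |a_ii| > sum_{j != i} |a_ij| for every row i."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d  # row sums of off-diagonal magnitudes
    return bool(np.all(d > off))

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
D = np.diag(np.diag(A))
G_J = -np.linalg.solve(D, A - D)

print(is_strictly_row_dominant(A))    # True
print(np.linalg.norm(G_J, np.inf))    # 0.3333... < 1, as the bound predicts
```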

## 89.10 Residual Form

The residual at step \(k\) is

$$
r^{(k)}=b-Ax^{(k)}.
$$

The Jacobi update can be rewritten in terms of the residual. Since \(L+U=A-D\), the update from Section 89.4 gives

$$
x^{(k+1)}
= D^{-1}\bigl(b-(A-D)x^{(k)}\bigr)
= x^{(k)}+D^{-1}\bigl(b-Ax^{(k)}\bigr),
$$

that is,

$$
x^{(k+1)} =
x^{(k)}+D^{-1}r^{(k)}.
$$

Jacobi updates the current approximation by adding a diagonally scaled residual.

This form is useful in implementation and interpretation.

It says that Jacobi corrects each variable according to the residual in its corresponding equation, scaled by the diagonal entry.
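
In code, the residual form gives a particularly compact whole-vector update. A minimal sketch, assuming NumPy:

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep in residual form: x + D^{-1} r."""
    r = b - A @ x              # residual of the current iterate
    return x + r / np.diag(A)  # componentwise division by the diagonal
```

Each call advances the iterate by exactly one Jacobi sweep, matching the componentwise formula of Section 89.2.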

## 89.11 Algorithm

Input:

$$
A,\quad b,\quad x^{(0)},\quad \tau,\quad k_{\max}.
$$

Repeat for

$$
k=0,1,\ldots,k_{\max}-1:
$$

Compute, for each \(i\),

$$
x_i^{(k+1)} =
\frac{1}{a_{ii}}
\left(
b_i-\sum_{j\ne i}a_{ij}x_j^{(k)}
\right).
$$

Compute the residual

$$
r^{(k+1)}=b-Ax^{(k+1)}.
$$

Stop if

$$
\frac{\|r^{(k+1)}\|}{\|b\|}
\le \tau.
$$

Return

$$
x^{(k+1)}.
$$

## 89.12 Pseudocode

```text
jacobi(A, b, x, tol, max_iter):
    n = length(b)

    for k = 0 to max_iter - 1:
        x_new = zero vector of length n

        for i = 1 to n:
            s = 0

            for j = 1 to n:
                if j != i:
                    s = s + A[i,j] * x[j]

            x_new[i] = (b[i] - s) / A[i,i]

        r = b - A * x_new

        if norm(r) / norm(b) <= tol:
            return x_new

        x = x_new

    return x
```

The new vector must be stored separately from the old vector. Otherwise the method becomes Gauss-Seidel rather than Jacobi.
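
A direct, runnable translation of the pseudocode into Python with NumPy might look as follows. This is a sketch, not a hardened solver: it assumes nonzero diagonal entries and a nonzero right-hand side (see Sections 89.20 and 89.23).

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=1000):
    """Jacobi iteration; returns the last iterate computed."""
    x = np.array(x0, dtype=float)
    d = np.diag(A)              # assumed nonzero (Section 89.20)
    b_norm = np.linalg.norm(b)  # assumed nonzero
    for _ in range(max_iter):
        # Simultaneous update: x_new is built entirely from the old x.
        x_new = (b - (A @ x - d * x)) / d   # A @ x - d * x = (L + U) x
        if np.linalg.norm(b - A @ x_new) / b_norm <= tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([9.0, 7.0])
print(jacobi(A, b, np.zeros(2)))  # approaches (20/11, 19/11)
```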

## 89.13 Storage Requirements

Jacobi needs:

| Object | Storage |
|---|---:|
| Matrix \(A\) | problem-dependent |
| Right-hand side \(b\) | \(n\) |
| Current vector \(x^{(k)}\) | \(n\) |
| New vector \(x^{(k+1)}\) | \(n\) |
| Residual \(r^{(k)}\), optional | \(n\) |

For sparse matrices, the matrix is stored in a sparse format such as compressed sparse row (CSR).

The method does not require factorizing \(A\). This makes it memory-light compared with direct methods.

## 89.14 Computational Cost

For a dense \(n\times n\) matrix, each Jacobi iteration costs

$$
O(n^2)
$$

operations.

For a sparse matrix, each iteration costs approximately

$$
O(\operatorname{nnz}(A)),
$$

where \(\operatorname{nnz}(A)\) is the number of nonzero entries.

The cost per iteration is low, but many iterations may be required.

This is the main tradeoff.

## 89.15 Parallelism

Jacobi is naturally parallel.

Each component

$$
x_i^{(k+1)}
$$

depends only on values from the previous iterate \(x^{(k)}\). Therefore all components of \(x^{(k+1)}\) can be computed independently after \(x^{(k)}\) is known.

This makes Jacobi attractive on parallel hardware.

However, parallel efficiency alone does not guarantee overall efficiency. If convergence is slow, a faster converging method with less parallelism may still be better.

## 89.16 Weighted Jacobi

Weighted Jacobi modifies the update by mixing the old iterate with the ordinary Jacobi iterate.

Let

$$
\tilde{x}^{(k+1)} =
D^{-1}\bigl(b-(L+U)x^{(k)}\bigr)
$$

be the ordinary Jacobi update.

Weighted Jacobi defines

$$
x^{(k+1)} =
(1-\omega)x^{(k)}
+
\omega \tilde{x}^{(k+1)}.
$$

Here \(\omega\) is a relaxation parameter.

Equivalently,

$$
x^{(k+1)} =
x^{(k)}+\omega D^{-1}r^{(k)}.
$$

When

$$
\omega=1,
$$

weighted Jacobi becomes ordinary Jacobi.

For some problems, choosing

$$
0<\omega<1
$$

improves smoothing behavior. A classic example is the one-dimensional Poisson model problem, where \(\omega=2/3\) gives good damping of high-frequency error components.
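
As a sketch (assuming NumPy), the weighted update differs from the plain residual-form step of Section 89.10 only by the factor \(\omega\):

```python
import numpy as np

def weighted_jacobi_step(A, b, x, omega=2.0 / 3.0):
    """One weighted Jacobi sweep: x + omega * D^{-1} r.

    omega = 2/3 is the classic smoothing choice for the 1D Poisson
    model problem; omega = 1 recovers ordinary Jacobi.
    """
    r = b - A @ x
    return x + omega * r / np.diag(A)
```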

## 89.17 Jacobi as a Smoother

In multigrid methods, Jacobi is often used as a smoother.

A smoother reduces high-frequency error components quickly, even if it reduces low-frequency components slowly.

This property is useful because multigrid handles different error scales on different grids.

Weighted Jacobi is especially common in this context.

The method may be poor as a standalone solver but useful as a component inside a stronger algorithm.

## 89.18 Jacobi Preconditioning

Jacobi preconditioning uses the diagonal of \(A\) as a preconditioner.

Set

$$
P=D.
$$

Then the preconditioned system is

$$
D^{-1}Ax=D^{-1}b.
$$

This scales each equation by its diagonal entry.

Jacobi preconditioning is cheap and easy to apply. It may help when diagonal scaling is the main issue.

It is weak for strongly coupled systems, where off-diagonal entries play a major role.
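
A minimal sketch of explicit diagonal scaling, assuming NumPy. In practice, Krylov libraries apply the preconditioner implicitly rather than forming \(D^{-1}A\):

```python
import numpy as np

def jacobi_precondition(A, b):
    """Return the diagonally scaled system D^{-1} A x = D^{-1} b."""
    d = np.diag(A)  # assumed nonzero
    return A / d[:, None], b / d

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([9.0, 7.0])
A_hat, b_hat = jacobi_precondition(A, b)
print(A_hat)  # each row of A divided by its diagonal entry
```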

## 89.19 Comparison with Gauss-Seidel

Jacobi and Gauss-Seidel are closely related.

| Feature | Jacobi | Gauss-Seidel |
|---|---|---|
| Update style | simultaneous | sequential |
| Uses newest values immediately | no | yes |
| Needs separate new vector | yes | no |
| Parallelism | high | lower |
| Typical convergence speed | slower | faster |
| Implementation simplicity | high | high |

Gauss-Seidel often converges faster because it uses updated values during the same iteration. Jacobi is easier to parallelize because each update is independent.

## 89.20 Failure Modes

Jacobi may fail for several reasons.

| Failure mode | Cause |
|---|---|
| Divergence | \(\rho(G_J)\ge 1\) |
| Slow convergence | \(\rho(G_J)\) close to \(1\) |
| Division by zero | some \(a_{ii}=0\) |
| Poor scaling | diagonal entries too small |
| Misleading residual | ill-conditioned \(A\) |

Before applying Jacobi, one should check that diagonal entries are nonzero and that convergence is plausible.

## 89.21 Residual Versus Error

A small residual does not always mean a small error.

For the residual

$$
r^{(k)}=b-Ax^{(k)},
$$

the error satisfies

$$
e^{(k)}=A^{-1}r^{(k)}.
$$

Thus

$$
\|e^{(k)}\|
\le
\|A^{-1}\|\|r^{(k)}\|.
$$

If \(A\) is ill-conditioned, then \(\|A^{-1}\|\) may be large. The residual can be small while the solution error remains significant.

This is not specific to Jacobi. It applies to all linear solvers.
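
A small numerical illustration (a constructed example, not from the chapter): for a nearly singular \(2\times 2\) matrix, an approximation with a tiny residual can still be far from the solution.

```python
import numpy as np

# Nearly singular matrix: the two rows are almost parallel.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])  # exact solution is x = (1, 1)

x_bad = np.array([2.0, 0.0001])  # a poor approximation
r = b - A @ x_bad

print(np.linalg.norm(r))                              # about 1e-4: tiny residual
print(np.linalg.norm(np.array([1.0, 1.0]) - x_bad))   # about 1.41: large error
```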

## 89.22 Practical Use

Jacobi is rarely the best standalone method for difficult linear systems.

It remains important because it is:

| Role | Reason |
|---|---|
| Teaching method | exposes core iteration ideas |
| Parallel method | updates are independent |
| Preconditioner | diagonal scaling is cheap |
| Multigrid smoother | damps high-frequency error |
| Baseline solver | simple reference implementation |

In production solvers, Jacobi is often replaced or supplemented by Krylov methods, multigrid methods, domain decomposition, or stronger preconditioners.

## 89.23 Implementation Notes

A robust implementation should:

| Issue | Practical handling |
|---|---|
| Zero diagonal | reject matrix or reorder equations |
| Small diagonal | warn or scale system |
| Sparse storage | avoid looping over zeros |
| Residual check | use relative residual |
| Maximum iterations | always enforce a limit |
| Parallel update | separate old and new vectors |
| Weighted update | expose \(\omega\) as parameter |

For sparse matrices, it is inefficient to compute sums over all \(j=1,\ldots,n\). Instead, loop only over nonzero entries in each row.

## 89.24 Simple Sparse Row Update

For a sparse row, the update is computed by separating the diagonal entry from the off-diagonal entries:

```text
for i = 1 to n:
    s = 0
    diag = 0

    for each nonzero (j, aij) in row i:
        if j == i:
            diag = aij
        else:
            s = s + aij * x[j]

    x_new[i] = (b[i] - s) / diag
```

This structure avoids unnecessary operations on zero entries.

It is the natural implementation for compressed sparse row storage.
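
A runnable version of this row loop, as a sketch assuming SciPy's CSR format (`indptr`, `indices`, and `data` are the standard CSR arrays):

```python
import numpy as np
from scipy.sparse import csr_matrix

def jacobi_sweep_csr(A_csr, b, x):
    """One Jacobi sweep over a scipy.sparse.csr_matrix."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        s = 0.0
        diag = 0.0
        # Iterate only over the stored nonzeros of row i.
        for idx in range(A_csr.indptr[i], A_csr.indptr[i + 1]):
            j = A_csr.indices[idx]
            aij = A_csr.data[idx]
            if j == i:
                diag = aij
            else:
                s += aij * x[j]
        x_new[i] = (b[i] - s) / diag
    return x_new

A = csr_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
b = np.array([9.0, 7.0])
print(jacobi_sweep_csr(A, b, np.zeros(2)))  # (2.25, 2.333...), matching Section 89.3
```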

## 89.25 Summary

The Jacobi method is the iteration

$$
x_i^{(k+1)} =
\frac{1}{a_{ii}}
\left(
b_i-\sum_{j\ne i}a_{ij}x_j^{(k)}
\right).
$$

In matrix form,

$$
x^{(k+1)} =
-D^{-1}(L+U)x^{(k)}+D^{-1}b.
$$

Its iteration matrix is

$$
G_J=-D^{-1}(L+U).
$$

The method converges for every initial guess exactly when

$$
\rho(G_J)<1.
$$

Jacobi is simple, parallel, and memory-light. Its main weakness is slow or absent convergence. It is often more useful as a building block than as a final solver.
