# Chapter 83. Tridiagonal Form

A tridiagonal matrix is a matrix whose nonzero entries lie only on the main diagonal, the first subdiagonal, and the first superdiagonal. Tridiagonal form is central in numerical linear algebra because many eigenvalue and linear system algorithms become much faster and simpler on tridiagonal matrices.

An \(n \times n\) matrix \(T\) is tridiagonal if

$$
t_{ij}=0
\qquad
\text{whenever}
\qquad
|i-j|>1.
$$

Thus \(T\) has the structure

$$
T =
\begin{bmatrix}
d_1 & e_1 & 0 & \cdots & 0 \\
f_1 & d_2 & e_2 & \cdots & 0 \\
0 & f_2 & d_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & e_{n-1} \\
0 & 0 & 0 & f_{n-1} & d_n
\end{bmatrix}.
$$

If the matrix is symmetric, then

$$
f_i = e_i,
$$

and if it is Hermitian, then

$$
f_i = \overline{e_i}.
$$

Tridiagonal matrices arise naturally in discretized differential equations, orthogonal polynomial recurrences, Krylov methods, and symmetric eigenvalue algorithms. Dense symmetric matrices are typically reduced to tridiagonal form before eigenvalue iteration. ([people.inf.ethz.ch](https://people.inf.ethz.ch/arbenz/ewp/Lnotes/chapter4.pdf))

## 83.1 Definition

A tridiagonal matrix has nonzero entries only on three diagonals:

| Diagonal | Allowed entries |
|---|---|
| Main diagonal | \(t_{ii}\) |
| First subdiagonal | \(t_{i+1,i}\) |
| First superdiagonal | \(t_{i,i+1}\) |

All other entries must be zero.

For example,

$$
\begin{bmatrix}
4 & 2 & 0 & 0 \\
1 & 5 & 3 & 0 \\
0 & 6 & 7 & 8 \\
0 & 0 & 9 & 2
\end{bmatrix}
$$

is tridiagonal.

The matrix

$$
\begin{bmatrix}
4 & 2 & 1 & 0 \\
1 & 5 & 3 & 0 \\
0 & 6 & 7 & 8 \\
0 & 0 & 9 & 2
\end{bmatrix}
$$

is not tridiagonal because the entry in position \((1,3)\) is nonzero.
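
The definition translates directly into a membership test. Here is a small NumPy sketch (`is_tridiagonal` is a hypothetical helper, not a library routine) applied to the two matrices above:

```python
import numpy as np

def is_tridiagonal(A):
    """Return True if A[i, j] == 0 whenever |i - j| > 1."""
    i, j = np.indices(A.shape)
    return bool(np.all(A[np.abs(i - j) > 1] == 0))

A = np.array([[4, 2, 0, 0], [1, 5, 3, 0], [0, 6, 7, 8], [0, 0, 9, 2]])
B = np.array([[4, 2, 1, 0], [1, 5, 3, 0], [0, 6, 7, 8], [0, 0, 9, 2]])
print(is_tridiagonal(A), is_tridiagonal(B))   # True False
```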

## 83.2 Relation to Hessenberg Form

Every tridiagonal matrix is both upper Hessenberg and lower Hessenberg.

Indeed, if

$$
|i-j|>1,
$$

then the entry is zero. In particular:

- if \(i>j+1\), the entry is zero, so the matrix is upper Hessenberg;
- if \(j>i+1\), the entry is zero, so the matrix is lower Hessenberg.

Thus tridiagonal form is more restrictive than Hessenberg form.

For symmetric matrices, Hessenberg reduction automatically produces tridiagonal form. Symmetry forces the upper and lower structures to mirror each other.

## 83.3 Symmetric Tridiagonal Matrices

A symmetric tridiagonal matrix has the form

$$
T =
\begin{bmatrix}
d_1 & e_1 & 0 & \cdots & 0 \\
e_1 & d_2 & e_2 & \cdots & 0 \\
0 & e_2 & d_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & e_{n-1} \\
0 & 0 & 0 & e_{n-1} & d_n
\end{bmatrix}.
$$

This form is extremely important in eigenvalue computation.

A dense real symmetric matrix can be reduced by orthogonal similarity to symmetric tridiagonal form:

$$
T = Q^TAQ.
$$

The reduction preserves eigenvalues. Since tridiagonal matrices are much cheaper to process than dense matrices, eigenvalue algorithms usually operate on \(T\) instead of directly on \(A\). ([en.wikipedia.org](https://en.wikipedia.org/wiki/Tridiagonal_matrix))

## 83.4 Reduction by Householder Transformations

Dense symmetric matrices are reduced to tridiagonal form using Householder reflections.

A Householder reflector has the form

$$
P = I - 2vv^T,
$$

where

$$
\|v\|_2=1.
$$

At step \(k\), the reflector zeros entries below the first subdiagonal in column \(k\). Since the matrix is symmetric, applying the reflector from both sides also zeros the corresponding row entries:

$$
A \leftarrow P_k A P_k.
$$

After \(n-2\) steps, the matrix becomes symmetric tridiagonal.

The final result satisfies

$$
T = Q^TAQ,
$$

where \(Q\) is the product of the Householder reflectors.
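
The reduction can be sketched in a few lines of dense NumPy. The name `householder_tridiagonalize` is hypothetical, and this is the unblocked textbook variant rather than the blocked LAPACK reduction a production code would use:

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce real symmetric A to tridiagonal T = Q.T @ A @ Q (unblocked sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        v = A[k + 1:, k].copy()
        s = 1.0 if v[0] >= 0 else -1.0
        v[0] += s * np.linalg.norm(v)       # sign chosen to avoid cancellation
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue                        # column already has the required zeros
        v /= norm_v
        P = np.eye(n)
        P[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)   # embedded reflector I - 2 v v^T
        A = P @ A @ P                       # two-sided similarity keeps symmetry and eigenvalues
        Q = Q @ P
    return A, Q
```

For a random symmetric `A`, the returned matrix has only roundoff-size entries outside the three central diagonals, and `Q.T @ A @ Q` reproduces it.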

## 83.5 A Four by Four Example

A symmetric \(4 \times 4\) matrix

$$
A =
\begin{bmatrix}
* & * & * & * \\
* & * & * & * \\
* & * & * & * \\
* & * & * & *
\end{bmatrix}
$$

is reduced to

$$
T =
\begin{bmatrix}
* & * & 0 & 0 \\
* & * & * & 0 \\
0 & * & * & * \\
0 & 0 & * & *
\end{bmatrix}.
$$

Only the main diagonal and the first neighboring diagonals remain nonzero.

This sparse structure dramatically reduces storage and computational cost.

## 83.6 Why Tridiagonal Form Matters

The QR algorithm preserves tridiagonal structure.

Suppose \(T_k\) is tridiagonal. If

$$
T_k = Q_kR_k
$$

and

$$
T_{k+1}=R_kQ_k,
$$

then \(T_{k+1}\) is also tridiagonal.

Thus the QR algorithm can operate entirely within the tridiagonal structure. This makes each iteration much cheaper than a dense QR iteration.
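
This preservation is easy to check numerically. The following sketch uses dense NumPy QR purely for illustration; a real tridiagonal QR code would instead apply \(O(n)\) Givens rotations per step:

```python
import numpy as np

# One unshifted QR step T_{k+1} = R_k Q_k on a symmetric tridiagonal matrix.
n = 6
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

Qk, Rk = np.linalg.qr(T)
T_next = Rk @ Qk

# Entries outside the three central diagonals remain at roundoff level.
off = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) > 1
print(np.max(np.abs(T_next[off])))
```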

For dense symmetric eigenvalue problems, the standard pipeline is:

| Stage | Cost |
|---|---|
| Reduce dense symmetric matrix to tridiagonal form | \(O(n^3)\) |
| QR iteration on tridiagonal matrix | \(O(n^2)\) total for all eigenvalues |

The expensive dense work occurs only once.

## 83.7 Storage Efficiency

A dense \(n \times n\) matrix requires

$$
n^2
$$

storage locations.

A tridiagonal matrix requires only

$$
3n-2
$$

entries:

- \(n\) diagonal entries,
- \(n-1\) subdiagonal entries,
- \(n-1\) superdiagonal entries.

For a symmetric tridiagonal matrix, only

$$
2n-1
$$

entries are needed, since the superdiagonal and subdiagonal are identical.

This storage reduction is important for large problems.
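
One common convention, sketched here in NumPy, stores the three diagonals as separate 1-D arrays and expands to a dense matrix only when convenient:

```python
import numpy as np

# Store an n-by-n tridiagonal matrix as three 1-D arrays (3n - 2 numbers) instead of n^2.
n = 5
d = np.full(n, 2.0)        # main diagonal: n entries
e = np.full(n - 1, -1.0)   # superdiagonal: n - 1 entries
f = np.full(n - 1, -1.0)   # subdiagonal: n - 1 entries

# Expand to dense form only when needed, e.g. for testing.
T = np.diag(d) + np.diag(e, 1) + np.diag(f, -1)
```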

## 83.8 Solving Tridiagonal Systems

Linear systems

$$
Tx=b
$$

with tridiagonal \(T\) can be solved very efficiently.

A specialized form of Gaussian elimination, called the Thomas algorithm, solves such systems in linear time:

$$
O(n).
$$

This is much cheaper than the

$$
O(n^3)
$$

cost of dense elimination.

The efficiency comes from the sparse structure. Elimination creates no additional fill outside the tridiagonal pattern.

## 83.9 The Thomas Algorithm

Suppose

$$
T =
\begin{bmatrix}
d_1 & e_1 & 0 & \cdots & 0 \\
f_1 & d_2 & e_2 & \cdots & 0 \\
0 & f_2 & d_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & e_{n-1} \\
0 & 0 & 0 & f_{n-1} & d_n
\end{bmatrix}.
$$

The Thomas algorithm performs forward elimination and backward substitution using only neighboring entries.

The forward sweep computes modified coefficients recursively. The backward sweep recovers the solution vector.

Because each step touches only a constant number of entries, the total cost grows linearly with \(n\).

## 83.10 Algorithm

The following algorithm solves a tridiagonal system.

```python id="aqk5rb"
def thomas_algorithm(f, d, e, b):
    """Solve T x = b for tridiagonal T with subdiagonal f, diagonal d, superdiagonal e.

    Arrays are 0-indexed: d has length n, while f and e have length n - 1.
    d and b are overwritten; no pivoting is performed (zero pivots are assumed absent).
    """
    n = len(d)
    x = [0.0] * n

    # Forward sweep: eliminate the subdiagonal one row at a time.
    for i in range(1, n):
        m = f[i - 1] / d[i - 1]      # multiplier for row i
        d[i] -= m * e[i - 1]         # updated pivot
        b[i] -= m * b[i - 1]         # updated right-hand side

    # Backward sweep: solve the remaining upper bidiagonal system.
    x[n - 1] = b[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - e[i] * x[i + 1]) / d[i]

    return x
```

Here:

- \(d_i\) are diagonal entries,
- \(e_i\) are superdiagonal entries,
- \(f_i\) are subdiagonal entries.

The algorithm assumes no zero pivots appear during elimination.

## 83.11 Example

Consider

$$
T =
\begin{bmatrix}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2
\end{bmatrix},
\qquad
b =
\begin{bmatrix}
1 \\
0 \\
1
\end{bmatrix}.
$$

The system is

$$
2x_1 - x_2 = 1,
$$

$$
-x_1 + 2x_2 - x_3 = 0,
$$

$$
-x_2 + 2x_3 = 1.
$$

Forward elimination updates the pivots \(d_i\) and the right-hand side \(b_i\):

$$
\begin{aligned}
d_1 &= 2, & b_1 &= 1, \\
d_2 &= 2 - \tfrac{1}{2} = \tfrac{3}{2}, & b_2 &= 0 + \tfrac{1}{2} = \tfrac{1}{2}, \\
d_3 &= 2 - \tfrac{2}{3} = \tfrac{4}{3}, & b_3 &= 1 + \tfrac{1}{3} = \tfrac{4}{3}.
\end{aligned}
$$

Backward substitution yields

$$
x_3 = \frac{4/3}{4/3} = 1,
\qquad
x_2 = \frac{1/2 - (-1)(1)}{3/2} = 1,
\qquad
x_1 = \frac{1 - (-1)(1)}{2} = 1.
$$

Thus

$$
x =
\begin{bmatrix}
1 \\
1 \\
1
\end{bmatrix}.
$$
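
The same system can also be solved from banded storage. A short check using SciPy's `solve_banded`, which performs banded Gaussian elimination:

```python
import numpy as np
from scipy.linalg import solve_banded

# Banded storage: row 0 is the superdiagonal, row 1 the diagonal, row 2 the subdiagonal.
ab = np.array([[ 0.0, -1.0, -1.0],
               [ 2.0,  2.0,  2.0],
               [-1.0, -1.0,  0.0]])
b = np.array([1.0, 0.0, 1.0])

x = solve_banded((1, 1), ab, b)
print(x)   # [1. 1. 1.]
```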

## 83.12 Eigenvalues of Symmetric Tridiagonal Matrices

Symmetric tridiagonal matrices have especially favorable eigenvalue properties.

Their eigenvalues are real because the matrix is symmetric.

Moreover, eigenvalue algorithms become much faster because each QR step preserves tridiagonal structure. Specialized algorithms for symmetric tridiagonal matrices can compute eigenvalues with very high efficiency and accuracy.

This is why dense symmetric eigenvalue computation nearly always begins with tridiagonal reduction.
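
For example, SciPy's `eigh_tridiagonal` accepts just the diagonal and off-diagonal arrays. A small sketch on the \(5 \times 5\) second-difference matrix:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Eigenvalues of the 5x5 matrix with 2 on the diagonal and -1 off the diagonal.
d = np.full(5, 2.0)     # main diagonal
e = np.full(4, -1.0)    # off-diagonal (symmetric)
w = eigh_tridiagonal(d, e, eigvals_only=True)
print(w)                # real, equal to 2 - 2*cos(k*pi/6) for k = 1..5
```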

## 83.13 Jacobi Matrices

A symmetric tridiagonal matrix with positive subdiagonal entries is often called a Jacobi matrix.

These matrices appear naturally in orthogonal polynomial theory. If

$$
p_0(x), p_1(x), p_2(x), \ldots
$$

are orthogonal polynomials satisfying a three-term recurrence relation, then multiplication by \(x\) in that basis is represented by a Jacobi matrix.

This connection links tridiagonal matrices to Gaussian quadrature, spectral theory, and approximation theory.
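
As a concrete instance of this link, the \(n\)-point Gauss-Legendre quadrature nodes are the eigenvalues of the Jacobi matrix built from the monic Legendre recurrence coefficients \(\beta_k = k^2/(4k^2-1)\) (these coefficients are assumed here, not derived above):

```python
import numpy as np

# Jacobi matrix for Legendre polynomials: zero diagonal, off-diagonals k / sqrt(4k^2 - 1).
n = 4
k = np.arange(1, n)
beta = k / np.sqrt(4.0 * k**2 - 1.0)
J = np.diag(beta, 1) + np.diag(beta, -1)

print(np.sort(np.linalg.eigvalsh(J)))           # Gauss-Legendre nodes
print(np.polynomial.legendre.leggauss(n)[0])    # same nodes from NumPy
```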

## 83.14 Discrete Differential Operators

Finite difference discretizations of differential equations often produce tridiagonal matrices.

For example, consider the second derivative approximation

$$
u''(x_i)
\approx
\frac{
u_{i-1} - 2u_i + u_{i+1}
}{h^2}.
$$

The resulting linear system has matrix

$$
\frac{1}{h^2}
\begin{bmatrix}
-2 & 1 & 0 & \cdots \\
1 & -2 & 1 & \cdots \\
0 & 1 & -2 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}.
$$

This is symmetric tridiagonal.

Such matrices arise in heat equations, Poisson equations, wave equations, quantum mechanics, and many other discretized physical systems.
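
A minimal construction of this interior-grid matrix (Dirichlet boundary values dropped; `n` and `h` are illustrative values):

```python
import numpy as np

n, h = 5, 0.1   # number of interior grid points and grid spacing
D2 = (np.diag(np.full(n, -2.0))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2   # symmetric tridiagonal second-difference matrix
```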

## 83.15 Tridiagonalization and Lanczos Iteration

Lanczos iteration is a Krylov subspace method for symmetric matrices.

Given a symmetric matrix \(A\), it constructs orthonormal vectors

$$
q_1, q_2, \ldots
$$

such that

$$
AQ_k = Q_kT_k + r_ke_k^T,
$$

where \(T_k\) is symmetric tridiagonal.

The tridiagonal structure appears because symmetry shortens the recurrence. Each new vector depends only on the previous two basis vectors.

This is analogous to how Arnoldi iteration produces Hessenberg matrices for general matrices.

## 83.16 Three-Term Recurrence

Lanczos iteration satisfies the recurrence

$$
Aq_k =
\beta_{k-1}q_{k-1}
+
\alpha_k q_k
+
\beta_k q_{k+1}.
$$

Only three neighboring basis vectors appear.

This recurrence corresponds exactly to the tridiagonal structure:

$$
T_k =
\begin{bmatrix}
\alpha_1 & \beta_1 & 0 & \cdots \\
\beta_1 & \alpha_2 & \beta_2 & \cdots \\
0 & \beta_2 & \alpha_3 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}.
$$

The symmetry of \(A\) is what allows the Hessenberg structure to collapse into tridiagonal structure.
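
A minimal Lanczos sketch of this recurrence (no reorthogonalization, and breakdown when some \(\beta_j = 0\) is not handled; `lanczos` is a hypothetical helper name):

```python
import numpy as np

def lanczos(A, q1, k):
    """Run k steps of plain Lanczos on symmetric A; return (alpha, beta, Q)."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q, q_prev, b_prev = q1 / np.linalg.norm(q1), np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - b_prev * q_prev      # three-term recurrence: only two previous vectors needed
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < k - 1:
            beta[j] = np.linalg.norm(w)  # breakdown (beta[j] == 0) not handled here
            q_prev, q, b_prev = q, w / beta[j], beta[j]
    return alpha, beta, Q
```

In exact arithmetic the tridiagonal matrix built from `alpha` and `beta` equals \(Q_k^T A Q_k\); in floating point the columns of `Q` slowly lose orthogonality, which is why practical codes reorthogonalize.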

## 83.17 Positive Definite Tridiagonal Matrices

Many tridiagonal matrices are also positive definite.

For example,

$$
T =
\begin{bmatrix}
2 & -1 & 0 & \cdots \\
-1 & 2 & -1 & \cdots \\
0 & -1 & 2 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}
$$

is symmetric positive definite.

Such matrices arise in discretized elliptic partial differential equations.

Positive definiteness allows the use of Cholesky factorization. For tridiagonal matrices, the Cholesky factors are bidiagonal, so factorization and solves remain linear-time operations.
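
A quick check of the no-fill-in claim, using dense Cholesky purely for illustration (a production code would factor the banded storage directly):

```python
import numpy as np

n = 6
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
L = np.linalg.cholesky(T)                                 # T = L @ L.T

# L is lower bidiagonal: nothing appears below the first subdiagonal.
below = (np.arange(n)[:, None] - np.arange(n)[None, :]) > 1
print(np.max(np.abs(L[below])))   # 0.0, no fill-in
```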

## 83.18 Comparison with Other Structured Forms

| Form | Nonzero pattern | Main use |
|---|---|---|
| Diagonal | Main diagonal only | Fully decoupled systems |
| Tridiagonal | Three diagonals | Symmetric eigenproblems, PDEs |
| Hessenberg | Upper triangle plus first subdiagonal | General eigenproblems |
| Bidiagonal | Main diagonal plus one adjacent diagonal | SVD computation |
| Band matrix | Limited-width diagonal band | Sparse structured systems |

Tridiagonal matrices are among the simplest structured sparse matrices while still capturing important coupling between neighboring variables.

## 83.19 Numerical Stability

Algorithms for symmetric tridiagonal matrices are usually highly stable.

Orthogonal similarity transformations preserve symmetry and eigenvalues while controlling roundoff growth. QR iteration on symmetric tridiagonal matrices is particularly reliable and efficient.

The Thomas algorithm, however, may require pivoting if the matrix is not diagonally dominant or positive definite. Without pivoting, zero or very small pivots can cause instability.

For symmetric positive definite tridiagonal matrices, Cholesky-based methods are especially robust.

## 83.20 Summary

A tridiagonal matrix has nonzero entries only on the main diagonal and the first neighboring diagonals:

$$
t_{ij}=0
\qquad
\text{when}
\qquad
|i-j|>1.
$$

Symmetric matrices can be reduced by orthogonal similarity to symmetric tridiagonal form:

$$
T = Q^TAQ.
$$

Tridiagonal structure greatly reduces storage and computational cost. Linear systems can be solved in linear time, and eigenvalue algorithms become much more efficient.

Tridiagonal matrices arise naturally in finite difference methods, orthogonal polynomials, Lanczos iteration, and symmetric eigenvalue computation. They form one of the most important structured matrix classes in numerical linear algebra.
