Chapter 75. LU Decomposition

LU decomposition is a factorization of a matrix into a lower triangular part and an upper triangular part. It is one of the standard matrix decompositions used to solve linear systems, compute determinants, and organize Gaussian elimination in matrix form.

For a square matrix $A$, an LU decomposition has the form

A = LU,

where $L$ is lower triangular and $U$ is upper triangular. A lower triangular matrix has zero entries above the main diagonal. An upper triangular matrix has zero entries below the main diagonal. This is the standard lower-upper factorization described in numerical linear algebra references.

75.1 Triangular Matrices

A lower triangular matrix has the form

L = \begin{bmatrix} \ell_{11} & 0 & 0 & \cdots & 0 \\ \ell_{21} & \ell_{22} & 0 & \cdots & 0 \\ \ell_{31} & \ell_{32} & \ell_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \ell_{n1} & \ell_{n2} & \ell_{n3} & \cdots & \ell_{nn} \end{bmatrix}.

An upper triangular matrix has the form

U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix}.

Triangular systems are easy to solve because each equation contains only a controlled set of unknowns. A lower triangular system is solved from top to bottom. An upper triangular system is solved from bottom to top.

This is the reason LU decomposition is useful. It replaces one general linear system by two triangular systems.

75.2 The Basic Factorization

Let

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}.

An LU decomposition writes $A$ as

A = LU.

In many treatments, $L$ is chosen to have ones on the diagonal:

L = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ \ell_{21} & 1 & 0 & \cdots & 0 \\ \ell_{31} & \ell_{32} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \ell_{n1} & \ell_{n2} & \ell_{n3} & \cdots & 1 \end{bmatrix}.

Such a matrix is called unit lower triangular. With this convention, the diagonal scaling is placed in $U$. This convention is often called the Doolittle form of LU decomposition. A related convention, called the Crout form, places the unit diagonal in $U$ instead.

75.3 LU and Gaussian Elimination

LU decomposition is Gaussian elimination recorded as a matrix factorization.

Suppose Gaussian elimination transforms $A$ into an upper triangular matrix $U$. At each step, a multiple of one row is subtracted from a lower row to eliminate an entry below the pivot. The multipliers used in these eliminations become the entries of $L$.

For example, to eliminate $a_{21}$ using $a_{11}$, one uses the multiplier

\ell_{21} = \frac{a_{21}}{a_{11}}.

To eliminate $a_{31}$, one uses

\ell_{31} = \frac{a_{31}}{a_{11}}.

These multipliers are stored below the diagonal in $L$. The remaining matrix after elimination is $U$.

Thus $U$ records the result of elimination, while $L$ records how the elimination was performed.

75.4 A Two by Two Example

Let

A = \begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix}.

We seek

A = LU,

with

L = \begin{bmatrix} 1 & 0 \\ \ell_{21} & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} u_{11} & u_{12} \\ 0 & u_{22} \end{bmatrix}.

Multiplying gives

LU = \begin{bmatrix} u_{11} & u_{12} \\ \ell_{21}u_{11} & \ell_{21}u_{12} + u_{22} \end{bmatrix}.

Equating this with $A$, we obtain

u_{11} = 2, \qquad u_{12} = 3, \qquad \ell_{21}u_{11} = 4,

so

\ell_{21} = 2.

The final entry gives

\ell_{21}u_{12} + u_{22} = 7.

Therefore

2 \cdot 3 + u_{22} = 7,

so

u_{22} = 1.

Hence

L = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 2 & 3 \\ 0 & 1 \end{bmatrix}.

Indeed,

\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 2 & 3 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix}.
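This arithmetic can be checked mechanically. A minimal Python sketch (plain lists, no libraries; the variable names are illustrative) that derives the factors of this $2 \times 2$ matrix exactly as above and multiplies them back together:

```python
# LU of A = [[2, 3], [4, 7]] by the equations above:
# u11 = a11, u12 = a12, l21 = a21 / u11, u22 = a22 - l21 * u12.
A = [[2.0, 3.0], [4.0, 7.0]]

u11, u12 = A[0][0], A[0][1]
l21 = A[1][0] / u11            # 4 / 2 = 2
u22 = A[1][1] - l21 * u12      # 7 - 2 * 3 = 1

L = [[1.0, 0.0], [l21, 1.0]]
U = [[u11, u12], [0.0, u22]]

# Multiply L and U back together; the product should reproduce A.
LU = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
```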

75.5 Solving Linear Systems with LU

Suppose

Ax = b

and

A = LU.

Then

LUx = b.

Introduce an intermediate vector

y = Ux.

Then the original system becomes two triangular systems:

Ly = b

and

Ux = y.

The first system is solved by forward substitution. The second system is solved by backward substitution.

This procedure is especially useful when many systems have the same coefficient matrix $A$ but different right-hand sides $b$. The factorization $A = LU$ is computed once. Each new right-hand side then requires only two triangular solves.

75.6 Forward Substitution

Let

Ly = b,

where $L$ is lower triangular. Written componentwise,

\ell_{11}y_1 = b_1, \qquad \ell_{21}y_1 + \ell_{22}y_2 = b_2, \qquad \ell_{31}y_1 + \ell_{32}y_2 + \ell_{33}y_3 = b_3,

and so on.

The first equation gives $y_1$. The second equation then gives $y_2$. The third equation gives $y_3$. Continuing in this order gives all entries of $y$.

For a general lower triangular matrix,

y_i = \frac{ b_i - \sum_{j=1}^{i-1} \ell_{ij}y_j }{ \ell_{ii} }.

If $L$ is unit lower triangular, then $\ell_{ii} = 1$, and the formula becomes

y_i = b_i - \sum_{j=1}^{i-1} \ell_{ij}y_j.
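The componentwise formula maps directly to a loop. A short Python sketch (function name and test values are illustrative):

```python
def forward_substitution(L, b):
    """Solve L y = b for a lower triangular L, working top to bottom."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        # y_i = (b_i - sum_{j<i} l_ij * y_j) / l_ii
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

# Example with a general (not unit) lower triangular matrix.
L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 1.0, 2.0]]
y = forward_substitution(L, [4.0, 5.0, 12.0])
```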

75.7 Backward Substitution

Let

Ux = y,

where $U$ is upper triangular. Written componentwise, the last equation is

u_{nn}x_n = y_n.

This gives $x_n$. The previous equation gives $x_{n-1}$, since $x_n$ is already known. The process continues upward.

For a general upper triangular matrix,

x_i = \frac{ y_i - \sum_{j=i+1}^{n} u_{ij}x_j }{ u_{ii} }.

Backward substitution works only when the diagonal entries needed for division are nonzero. In the LU setting, this means the pivots must be nonzero.
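The mirror-image loop runs from the last row upward. A short Python sketch (names and test values are illustrative):

```python
def backward_substitution(U, y):
    """Solve U x = y for an upper triangular U, working bottom to top."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # x_i = (y_i - sum_{j>i} u_ij * x_j) / u_ii
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
x = backward_substitution(U, [7.0, 8.0, 4.0])
```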

75.8 Example: Solving a System

Let

A = \begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix}, \qquad b = \begin{bmatrix} 8 \\ 18 \end{bmatrix}.

From the previous example,

A = LU,

where

L = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 2 & 3 \\ 0 & 1 \end{bmatrix}.

First solve

Ly = b.

That is,

\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 8 \\ 18 \end{bmatrix}.

The equations are

y_1 = 8, \qquad 2y_1 + y_2 = 18.

Therefore

y_2 = 18 - 16 = 2.

So

y = \begin{bmatrix} 8 \\ 2 \end{bmatrix}.

Now solve

Ux = y.

That is,

\begin{bmatrix} 2 & 3 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 8 \\ 2 \end{bmatrix}.

The second equation gives

x_2 = 2.

The first equation gives

2x_1 + 3x_2 = 8.

Thus

2x_1 + 6 = 8,

so

x_1 = 1.

The solution is

x = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.
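The whole two-stage solve for this system fits in a few lines. A Python sketch that repeats the substitutions above with the concrete factors:

```python
# Solve A x = b with A = [[2, 3], [4, 7]], b = [8, 18], using the
# factors L = [[1, 0], [2, 1]] and U = [[2, 3], [0, 1]].
L = [[1.0, 0.0], [2.0, 1.0]]
U = [[2.0, 3.0], [0.0, 1.0]]
b = [8.0, 18.0]

# Forward substitution: L y = b.
y1 = b[0]                            # y1 = 8
y2 = b[1] - L[1][0] * y1             # y2 = 18 - 16 = 2

# Backward substitution: U x = y.
x2 = y2 / U[1][1]                    # x2 = 2
x1 = (y1 - U[0][1] * x2) / U[0][0]   # x1 = (8 - 6) / 2 = 1
```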

75.9 Existence of LU Decomposition

An LU decomposition without row exchanges does not always exist. The elimination process requires nonzero pivots. If a pivot is zero, division by that pivot cannot be performed.

For example,

A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}

cannot be decomposed as $A = LU$ with $L$ unit lower triangular and $U$ upper triangular. The first pivot would be zero.

However, row exchanges fix this problem. Instead of factoring $A$ directly, one factors a row-permuted version of $A$:

PA = LU.

Here $P$ is a permutation matrix. This form is called LU decomposition with pivoting, or LUP decomposition. Standard numerical algorithms usually include pivoting because it avoids zero pivots and improves numerical stability.

75.10 Pivoting

Pivoting means exchanging rows before or during elimination.

The most common form is partial pivoting. At each step, the algorithm looks down the current column and selects the entry of largest absolute value as the pivot. The row containing it is swapped into the pivot position.

The factorization then has the form

PA = LU.

Equivalently,

A = P^{-1}LU.

Since a permutation matrix satisfies

P^{-1} = P^T,

one may also write

A = P^TLU.

Pivoting has two purposes. First, it prevents division by zero when a possible nonzero pivot is available. Second, it can reduce the growth of rounding errors in floating point arithmetic.
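A sketch of the factorization $PA = LU$ with partial pivoting, in plain Python (list-of-lists matrices; the permutation is returned as a list `perm`, meaning row `perm[i]` of $A$ becomes row $i$ of $PA$ — names are illustrative):

```python
def lu_partial_pivoting(A):
    """Compute PA = LU with partial pivoting, L unit lower triangular."""
    n = len(A)
    U = [row[:] for row in A]           # working copy; becomes U
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n):
        # Pick the pivot row: largest |entry| in column k among rows k..n-1.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            perm[k], perm[p] = perm[p], perm[k]
            # Swap the already-computed multipliers as well.
            L[k][:k], L[p][:k] = L[p][:k], L[k][:k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U

# The matrix from Section 75.9 that has no LU factorization without pivoting:
# swapping its rows gives the identity, so P A = I = L U with L = U = I.
perm, L, U = lu_partial_pivoting([[0.0, 1.0], [1.0, 0.0]])
```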

75.11 Determinants from LU

LU decomposition also simplifies determinant computation.

If

A = LU,

then

\det(A) = \det(L)\det(U).

The determinant of a triangular matrix is the product of its diagonal entries. Therefore,

\det(A) = \left(\prod_{i=1}^{n} \ell_{ii}\right) \left(\prod_{i=1}^{n} u_{ii}\right).

If $L$ is unit lower triangular, then

\det(L) = 1.

Hence

\det(A) = \prod_{i=1}^{n} u_{ii}.

If pivoting is used and

PA = LU,

then

\det(P)\det(A) = \det(L)\det(U).

Thus

\det(A) = \det(P)\det(L)\det(U),

because $\det(P) = \pm 1$. Each row swap changes the sign of the determinant.
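For the $2 \times 2$ matrix from the earlier example, the determinant can be read off the factors directly. A small Python check (no pivoting was needed there, so the permutation contributes no sign change):

```python
# A = [[2, 3], [4, 7]] factored as L (unit lower triangular) times
# U = [[2, 3], [0, 1]]: det(A) is the product of U's diagonal entries.
U = [[2.0, 3.0], [0.0, 1.0]]
det_from_lu = U[0][0] * U[1][1]      # 2 * 1

# Direct 2x2 determinant ad - bc for comparison.
det_direct = 2.0 * 7.0 - 3.0 * 4.0   # 14 - 12
```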

75.12 Inverses from LU

LU decomposition can be used to compute the inverse of a matrix, although direct inversion is often avoided in numerical work when only a solution of $Ax = b$ is needed.

To find $A^{-1}$, solve

AX = I.

If

A = LU,

then

LUX = I.

This is equivalent to solving

LY = I

and then

UX = Y.

Each column of $X$ is found by solving two triangular systems. The columns of $X$ are the columns of $A^{-1}$.

This method is systematic, but it is usually better to solve $Ax = b$ directly by LU rather than compute $A^{-1}$ explicitly.
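As a sketch of the column-by-column procedure, here is an illustrative Python snippet that inverts the $2 \times 2$ example matrix by solving $Ax = e_j$ for each unit vector $e_j$ (the helper name `solve_lu` is made up for this example):

```python
def solve_lu(L, U, b):
    """Solve (LU) x = b: forward substitution, then backward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Factors of A = [[2, 3], [4, 7]] from the earlier example.
L = [[1.0, 0.0], [2.0, 1.0]]
U = [[2.0, 3.0], [0.0, 1.0]]

# Column j of the inverse solves A x = e_j.
n = 2
cols = [solve_lu(L, U, [1.0 if i == j else 0.0 for i in range(n)])
        for j in range(n)]
A_inv = [[cols[j][i] for j in range(n)] for i in range(n)]
```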

75.13 Algorithm

The following algorithm computes an LU decomposition without pivoting, using the convention that $L$ has ones on the diagonal.

lu_decomposition(A):
    n = number of rows of A

    L = identity matrix of size n
    U = zero matrix of size n by n

    for k = 1 to n:
        for j = k to n:
            U[k,j] = A[k,j]
            for s = 1 to k-1:
                U[k,j] = U[k,j] - L[k,s] * U[s,j]

        for i = k+1 to n:
            L[i,k] = A[i,k]
            for s = 1 to k-1:
                L[i,k] = L[i,k] - L[i,s] * U[s,k]
            L[i,k] = L[i,k] / U[k,k]

    return L, U

The division by U[k,k] requires a nonzero pivot. If the pivot is zero or very small, the algorithm should use pivoting.
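The pseudocode translates almost line for line into Python (0-based indices instead of the 1-based indices above; list-of-lists matrices):

```python
def lu_decomposition(A):
    """Doolittle LU without pivoting: A = L U with L unit lower triangular."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(k, n):           # fill row k of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, n):       # fill column k of L
            L[i][k] = (A[i][k]
                       - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U

# Reproduce the factors of the 2x2 example.
L, U = lu_decomposition([[2.0, 3.0], [4.0, 7.0]])
```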

75.14 Operation Count

For a dense $n \times n$ matrix, LU decomposition requires on the order of

\frac{2}{3}n^3

arithmetic operations.

Solving the two triangular systems requires on the order of

2n^2

operations for each right-hand side.

Thus the expensive step is the factorization. Once the factorization is available, solving for additional right-hand sides is much cheaper.

This explains why LU decomposition is useful in repeated-solve problems.

75.15 Interpretation

LU decomposition separates elimination into two pieces.

The matrix $U$ is the upper triangular matrix obtained after elimination. The matrix $L$ stores the multipliers that produced the eliminations. The equation

A = LU

says that the original matrix can be reconstructed from these two pieces.

In computational terms, LU decomposition is a reusable form of Gaussian elimination. In algebraic terms, it is a factorization into simpler matrices. In geometric terms, it decomposes a linear transformation into transformations with triangular structure.

75.16 Common Variants

Variant     Form        Description
LU          A = LU      Factorization without pivoting
LUP         PA = LU     LU with row pivoting
PLU         A = PLU     Equivalent convention using a permutation factor
LDU         A = LDU     Separates diagonal scaling into D
Crout       A = LU      U has unit diagonal
Doolittle   A = LU      L has unit diagonal

Different conventions place diagonal entries in different factors. The essential idea remains the same: a general matrix is represented by triangular factors.

75.17 Summary

LU decomposition factors a matrix into lower and upper triangular matrices:

A = LU.

With pivoting, the more stable and general form is

PA = LU.

The decomposition is Gaussian elimination written as a matrix product. It is used to solve linear systems, compute determinants, and perform repeated solves efficiently. Its practical value comes from replacing one general system by two triangular systems:

Ly = b, \qquad Ux = y.

The triangular systems are solved by forward and backward substitution. This makes LU decomposition one of the central tools of computational linear algebra.