LU decomposition is a factorization of a matrix into the product of a lower triangular matrix and an upper triangular matrix. It is one of the standard matrix decompositions used to solve linear systems, compute determinants, and organize Gaussian elimination in matrix form.
For a square matrix $A$, an LU decomposition has the form

$$A = LU,$$

where $L$ is lower triangular and $U$ is upper triangular. A lower triangular matrix has zero entries above the main diagonal. An upper triangular matrix has zero entries below the main diagonal. This is the standard lower-upper factorization described in numerical linear algebra references.
75.1 Triangular Matrices
A lower triangular matrix has the form

$$L = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}.$$

An upper triangular matrix has the form

$$U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}.$$
Triangular systems are easy to solve because each equation introduces only one new unknown. A lower triangular system is solved from top to bottom. An upper triangular system is solved from bottom to top.
This is the reason LU decomposition is useful. It replaces one general linear system by two triangular systems.
75.2 The Basic Factorization
Let

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$

An LU decomposition writes $A$ as

$$A = LU.$$

In many treatments, $L$ is chosen to have ones on the diagonal:

$$L = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix}.$$

Such a matrix is called unit lower triangular. With this convention, the diagonal scaling is placed in $U$. This convention is often called the Doolittle form of LU decomposition. A related convention, called the Crout form, places the unit diagonal in $U$ instead.
75.3 LU and Gaussian Elimination
LU decomposition is Gaussian elimination recorded as a matrix factorization.
Suppose Gaussian elimination transforms $A$ into an upper triangular matrix $U$. At each step, a multiple of one row is subtracted from a lower row to eliminate an entry below the pivot. The multipliers used in these eliminations become the entries of $L$.

For example, to eliminate $a_{21}$ using the pivot $a_{11}$, one uses the multiplier

$$l_{21} = \frac{a_{21}}{a_{11}}.$$

To eliminate $a_{31}$, one uses

$$l_{31} = \frac{a_{31}}{a_{11}}.$$

These multipliers are stored below the diagonal in $L$. The remaining matrix after elimination is $U$.

Thus $U$ records the result of elimination, while $L$ records how the elimination was performed.
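This correspondence can be sketched directly in code: eliminate below each pivot and record the multipliers as the subdiagonal entries of $L$. The following is a minimal NumPy illustration (no pivoting, so it assumes all pivots are nonzero; the matrix chosen here is just an example).

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
n = A.shape[0]

U = A.copy()      # will become the eliminated (upper triangular) matrix
L = np.eye(n)     # unit lower triangular; collects the multipliers
for k in range(n):                    # for each pivot column
    for i in range(k + 1, n):         # rows below the pivot
        L[i, k] = U[i, k] / U[k, k]   # multiplier used in this elimination
        U[i, :] -= L[i, k] * U[k, :]  # subtract a multiple of the pivot row

assert np.allclose(L @ U, A)  # elimination, recorded as A = LU
```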
75.4 A Two by Two Example
Let

$$A = \begin{pmatrix} 2 & 1 \\ 6 & 8 \end{pmatrix}.$$

We seek

$$A = LU$$

with

$$L = \begin{pmatrix} 1 & 0 \\ l_{21} & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} u_{11} & u_{12} \\ 0 & u_{22} \end{pmatrix}.$$

Multiplying gives

$$LU = \begin{pmatrix} u_{11} & u_{12} \\ l_{21} u_{11} & l_{21} u_{12} + u_{22} \end{pmatrix}.$$

Equating this with $A$, we obtain

$$u_{11} = 2, \qquad u_{12} = 1, \qquad l_{21} u_{11} = 6,$$

so

$$l_{21} = \frac{6}{2} = 3.$$

The final entry gives

$$l_{21} u_{12} + u_{22} = 8,$$

so

$$u_{22} = 8 - 3 \cdot 1 = 5.$$

Hence

$$L = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 2 & 1 \\ 0 & 5 \end{pmatrix}.$$

Indeed,

$$LU = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 5 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 6 & 8 \end{pmatrix} = A.$$
75.5 Solving Linear Systems with LU
Suppose

$$Ax = b$$

and

$$A = LU.$$

Then

$$LUx = b.$$

Introduce an intermediate vector

$$y = Ux.$$

Then the original system becomes two triangular systems:

$$Ly = b$$

and

$$Ux = y.$$
The first system is solved by forward substitution. The second system is solved by backward substitution.
This procedure is especially useful when many systems have the same coefficient matrix $A$ but different right-hand sides $b$. The factorization $A = LU$ is computed once. Each new right-hand side then requires only two triangular solves.
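Library routines package exactly this pattern. As a sketch, SciPy's `scipy.linalg.lu_factor` and `lu_solve` factor the matrix once (with partial pivoting) and reuse the factors for each right-hand side; the matrix and vectors here are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

lu, piv = lu_factor(A)  # factorization computed once

# Each new right-hand side costs only two triangular solves.
for b in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```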
75.6 Forward Substitution
Let

$$Ly = b,$$

where $L$ is lower triangular. Written componentwise,

$$l_{11} y_1 = b_1,$$
$$l_{21} y_1 + l_{22} y_2 = b_2,$$
$$l_{31} y_1 + l_{32} y_2 + l_{33} y_3 = b_3,$$

and so on.

The first equation gives $y_1 = b_1 / l_{11}$. The second equation then gives $y_2 = (b_2 - l_{21} y_1)/l_{22}$. The third equation gives $y_3 = (b_3 - l_{31} y_1 - l_{32} y_2)/l_{33}$. Continuing in this order gives all entries of $y$.

For a general lower triangular matrix,

$$y_i = \frac{1}{l_{ii}} \left( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \right).$$

If $L$ is unit lower triangular, then $l_{ii} = 1$, and the formula becomes

$$y_i = b_i - \sum_{j=1}^{i-1} l_{ij} y_j.$$
75.7 Backward Substitution
Let

$$Ux = y,$$

where $U$ is upper triangular. Written componentwise, the last equation is

$$u_{nn} x_n = y_n.$$

This gives $x_n = y_n / u_{nn}$. The previous equation gives $x_{n-1} = (y_{n-1} - u_{n-1,n} x_n)/u_{n-1,n-1}$, since $x_n$ is already known. The process continues upward.

For a general upper triangular matrix,

$$x_i = \frac{1}{u_{ii}} \left( y_i - \sum_{j=i+1}^{n} u_{ij} x_j \right).$$

Backward substitution works only when the diagonal entries needed for division are nonzero. In the LU setting, this means the pivots $u_{ii}$ must be nonzero.
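The mirror-image sketch of forward substitution, again with an illustrative function name:

```python
def backward_substitution(U, y):
    """Solve U x = y for upper triangular U, working bottom to top."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # already-known entries
        x[i] = (y[i] - s) / U[i][i]  # requires a nonzero pivot U[i][i]
    return x
```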
75.8 Example: Solving a System
Let

$$A = \begin{pmatrix} 2 & 1 \\ 6 & 8 \end{pmatrix}, \qquad b = \begin{pmatrix} 4 \\ 22 \end{pmatrix}.$$

From the previous example, $A = LU$, where

$$L = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 2 & 1 \\ 0 & 5 \end{pmatrix}.$$

First solve

$$Ly = b.$$

That is,

$$\begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 4 \\ 22 \end{pmatrix}.$$

The equations are

$$y_1 = 4, \qquad 3 y_1 + y_2 = 22.$$

Therefore

$$y_2 = 22 - 3 \cdot 4 = 10,$$

so

$$y = \begin{pmatrix} 4 \\ 10 \end{pmatrix}.$$

Now solve

$$Ux = y.$$

That is,

$$\begin{pmatrix} 2 & 1 \\ 0 & 5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 4 \\ 10 \end{pmatrix}.$$

The second equation gives

$$x_2 = \frac{10}{5} = 2.$$

The first equation gives

$$2 x_1 + x_2 = 4, \qquad x_1 = \frac{4 - 2}{2} = 1.$$

The solution is

$$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
75.9 Existence of LU Decomposition
An LU decomposition without row exchanges does not always exist. The elimination process requires nonzero pivots. If a pivot is zero, division by that pivot cannot be performed.
For example,

$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

cannot be decomposed as $A = LU$ with $L$ unit lower triangular and $U$ upper triangular. The first pivot would be zero.

However, row exchanges fix this problem. Instead of factoring $A$ directly, one factors a row-permuted version of $A$:

$$PA = LU.$$

Here $P$ is a permutation matrix. This form is called LU decomposition with pivoting, or LUP decomposition. Standard numerical algorithms usually include pivoting because it avoids zero pivots and improves numerical stability.
75.10 Pivoting
Pivoting means exchanging rows before or during elimination.
The most common form is partial pivoting. At each step, the algorithm chooses a pivot row by looking down the current column and selecting an entry with large absolute value. That row is swapped into the pivot position.
The factorization then has the form

$$PA = LU.$$

Equivalently,

$$A = P^{-1} L U.$$

Since a permutation matrix satisfies

$$P^{-1} = P^{T},$$

one may also write

$$A = P^{T} L U.$$
Pivoting has two purposes. First, it prevents division by zero when a possible nonzero pivot is available. Second, it can reduce the growth of rounding errors in floating point arithmetic.
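As a sketch of pivoted factorization in practice, SciPy's `scipy.linalg.lu` returns the permutation as an explicit matrix, using the convention $A = PLU$; the matrix below is the zero-first-pivot example, which pivoting handles without trouble.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # zero first pivot: a row exchange is required

P, L, U = lu(A)  # SciPy's convention: A = P @ L @ U

assert np.allclose(P @ L @ U, A)
assert np.allclose(L, np.tril(L))  # L is unit lower triangular
assert np.allclose(U, np.triu(U))  # U is upper triangular
```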
75.11 Determinants from LU
LU decomposition also simplifies determinant computation.
If

$$A = LU,$$

then

$$\det A = \det L \cdot \det U.$$

The determinant of a triangular matrix is the product of its diagonal entries. Therefore,

$$\det A = \left( \prod_{i=1}^{n} l_{ii} \right) \left( \prod_{i=1}^{n} u_{ii} \right).$$

If $L$ is unit lower triangular, then

$$\det L = 1.$$

Hence

$$\det A = u_{11} u_{22} \cdots u_{nn}.$$

If pivoting is used and

$$PA = LU,$$

then

$$\det P \cdot \det A = \det L \cdot \det U.$$

Thus

$$\det A = (-1)^{s} \, u_{11} u_{22} \cdots u_{nn},$$

because $\det P = (-1)^{s}$, where $s$ is the number of row swaps. Each row swap changes the sign of the determinant.
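With SciPy's `lu_factor`, this becomes a one-liner sketch: the diagonal of the packed factor holds $u_{11}, \dots, u_{nn}$, and each entry of the pivot vector with `piv[i] != i` records one row swap (assumption: LAPACK-style pivot indices, as `lu_factor` documents).

```python
import numpy as np
from scipy.linalg import lu_factor

A = np.array([[0.0, 2.0],
              [3.0, 4.0]])

lu, piv = lu_factor(A)
swaps = np.count_nonzero(piv != np.arange(len(piv)))  # number of row swaps s
det = (-1.0) ** swaps * np.prod(np.diag(lu))  # (-1)^s * u11 * u22 * ... * unn

assert np.isclose(det, np.linalg.det(A))
```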
75.12 Inverses from LU
LU decomposition can be used to compute the inverse of a matrix, although direct inversion is often avoided in numerical work when only a solution of $Ax = b$ is needed.

To find $A^{-1}$, solve

$$AX = I.$$

If

$$A = LU,$$

then

$$LUX = I.$$

This is equivalent to solving

$$LY = I$$

and then

$$UX = Y.$$

Each column of $X$ is found by solving two triangular systems. The columns of $X$ are the columns of $A^{-1}$.

This method is systematic, but it is usually better to solve $Ax = b$ directly by LU rather than compute $A^{-1}$ explicitly.
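Column by column, this is just the two-triangular-solve procedure applied to each column of the identity. A sketch using SciPy's `lu_factor`/`lu_solve` (the matrix is illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

factors = lu_factor(A)  # factor once
I = np.eye(A.shape[0])

# Solve L Y = e_j then U X = Y for each column e_j of the identity;
# the solutions are the columns of A^{-1}.
A_inv = np.column_stack([lu_solve(factors, I[:, j]) for j in range(A.shape[0])])

assert np.allclose(A @ A_inv, I)
```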
75.13 Algorithm
The following algorithm computes an LU decomposition without pivoting, using the convention that $L$ has ones on the diagonal (Doolittle form).

```python
def lu_decomposition(A):
    """Doolittle LU decomposition without pivoting, for A given as a list of rows."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Row k of U.
        for j in range(k, n):
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        # Column k of L: the elimination multipliers below the diagonal.
        for i in range(k + 1, n):
            L[i][k] = (A[i][k] - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U
```

The division by $u_{kk}$ requires a nonzero pivot. If the pivot is zero or very small, the algorithm should use pivoting.
75.14 Operation Count
For a dense $n \times n$ matrix, LU decomposition requires on the order of

$$\tfrac{2}{3} n^{3}$$

arithmetic operations.

Solving the two triangular systems requires on the order of

$$2 n^{2}$$

operations for each right-hand side.
Thus the expensive step is the factorization. Once the factorization is available, solving for additional right-hand sides is much cheaper.
This explains why LU decomposition is useful in repeated-solve problems.
75.15 Interpretation
LU decomposition separates elimination into two pieces.
The matrix $U$ is the upper triangular matrix obtained after elimination. The matrix $L$ stores the multipliers that produced the eliminations. The equation

$$A = LU$$

says that the original matrix can be reconstructed from these two pieces.
In computational terms, LU decomposition is a reusable form of Gaussian elimination. In algebraic terms, it is a factorization into simpler matrices. In geometric terms, it decomposes a linear transformation into transformations with triangular structure.
75.16 Common Variants
| Variant | Form | Description |
|---|---|---|
| LU | $A = LU$ | Factorization without pivoting |
| LUP | $PA = LU$ | LU with row pivoting |
| PLU | $A = PLU$ | Equivalent convention using a permutation factor |
| LDU | $A = LDU$ | Separates diagonal scaling into $D$ |
| Crout | $A = LU$ | $U$ has unit diagonal |
| Doolittle | $A = LU$ | $L$ has unit diagonal |
Different conventions place diagonal entries in different factors. The essential idea remains the same: a general matrix is represented by triangular factors.
75.17 Summary
LU decomposition factors a matrix into lower and upper triangular matrices:

$$A = LU.$$

With pivoting, the more stable and general form is

$$PA = LU.$$

The decomposition is Gaussian elimination written as a matrix product. It is used to solve linear systems, compute determinants, and perform repeated solves efficiently. Its practical value comes from replacing one general system by two triangular systems:

$$Ly = b, \qquad Ux = y.$$
The triangular systems are solved by forward and backward substitution. This makes LU decomposition one of the central tools of computational linear algebra.