# Chapter 9. Gaussian Elimination

Gaussian elimination is the standard method for solving systems of linear equations by row reduction. Elementary row operations create zeros below each pivot, transforming the augmented matrix into row echelon form while preserving the solution set. Once the matrix is in echelon form, the solution is obtained by back substitution. The same procedure also exposes pivots, free variables, inconsistent rows, rank, and triangular structure.

## 9.1 Purpose of Gaussian Elimination

Given a linear system

$$
Ax=b,
$$

Gaussian elimination rewrites the system in a simpler equivalent form.

The goal is to transform the augmented matrix

$$
[A\mid b]
$$

into row echelon form:

$$
[U\mid c],
$$

where \(U\) has zeros below its pivots.

The new system

$$
Ux=c
$$

has the same solutions as the original system, but it is easier to solve because the equations have a triangular structure.

## 9.2 Row Echelon Target

A matrix is in row echelon form when:

| Condition | Meaning |
|---|---|
| Zero rows are below nonzero rows | Trivial equations \(0=0\) appear at the bottom |
| Leading entries move rightward | Each pivot is to the right of the pivot above it |
| Entries below pivots are zero | The system has triangular structure |

For example,

$$
\left[
\begin{array}{ccc|c}
2&1&-1&8\\
0&3&1&7\\
0&0&5&10
\end{array}
\right]
$$

is in row echelon form.

It represents

$$
2x+y-z=8,
$$

$$
3y+z=7,
$$

$$
5z=10.
$$

The last equation gives \(z\). The second equation then gives \(y\). The first equation then gives \(x\).

## 9.3 The Basic Elimination Step

Suppose the pivot is \(a_{kk}\ne 0\). To eliminate an entry \(a_{ik}\) below it, replace row \(i\) by

$$
R_i \leftarrow R_i-\frac{a_{ik}}{a_{kk}}R_k.
$$

The new entry in position \((i,k)\) is

$$
a_{ik}-\frac{a_{ik}}{a_{kk}}a_{kk}=0.
$$

This formula is the central arithmetic step of Gaussian elimination.

For example, in

$$
\left[
\begin{array}{cc|c}
2&1&5\\
6&7&19
\end{array}
\right],
$$

the first pivot is \(2\). The entry below it is \(6\). Use

$$
R_2\leftarrow R_2-3R_1.
$$

Then

$$
\left[
\begin{array}{cc|c}
2&1&5\\
0&4&4
\end{array}
\right].
$$

The second equation is now simpler.
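
The single elimination step can be sketched in Python, using the worked 2×3 augmented matrix above (a minimal illustration; the variable names are ours, not the text's):

```python
# One elimination step: R_i <- R_i - (a_ik / a_kk) * R_k.
A = [[2.0, 1.0, 5.0],
     [6.0, 7.0, 19.0]]

k, i = 0, 1                  # pivot row k, target row i
factor = A[i][k] / A[k][k]   # 6 / 2 = 3
A[i] = [a - factor * p for a, p in zip(A[i], A[k])]

print(A[i])  # -> [0.0, 4.0, 4.0]
```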

## 9.4 Forward Elimination

Forward elimination is the first phase of Gaussian elimination. It creates zeros below successive pivots.

The process is:

| Step | Action |
|---|---|
| 1 | Choose a pivot column |
| 2 | Choose a nonzero pivot entry |
| 3 | Swap rows if needed |
| 4 | Eliminate entries below the pivot |
| 5 | Move to the next row and next column |
| 6 | Continue until echelon form is reached |

Forward elimination works from left to right and from top to bottom.

It does not eliminate entries above pivots; that extra work belongs to Gauss-Jordan elimination.

## 9.5 Back Substitution

Back substitution solves a system after forward elimination has produced echelon form.

For example,

$$
\left[
\begin{array}{ccc|c}
2&1&-1&8\\
0&3&1&7\\
0&0&5&10
\end{array}
\right]
$$

represents

$$
2x+y-z=8,
$$

$$
3y+z=7,
$$

$$
5z=10.
$$

Start from the bottom:

$$
5z=10,
$$

so

$$
z=2.
$$

Substitute into the second equation:

$$
3y+2=7,
$$

so

$$
y=\frac{5}{3}.
$$

Substitute into the first equation:

$$
2x+\frac{5}{3}-2=8.
$$

Thus

$$
2x-\frac{1}{3}=8,
$$

so

$$
2x=\frac{25}{3},
$$

and

$$
x=\frac{25}{6}.
$$

Back substitution solves from the last pivot upward.
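
Back substitution is a short loop from the last pivot upward. The sketch below (function name ours) assumes an upper-triangular coefficient block with nonzero diagonal entries:

```python
def back_substitute(U, c):
    """Solve Ux = c for upper-triangular U with nonzero diagonal entries."""
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # last pivot first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# The echelon system from this section:
U = [[2.0, 1.0, -1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 5.0]]
c = [8.0, 7.0, 10.0]
print(back_substitute(U, c))  # approximately [25/6, 5/3, 2]
```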

## 9.6 Worked Example with a Unique Solution

Consider the system

$$
\begin{aligned}
x+y+z&=6,\\
2x-y+z&=3,\\
x+2y-z&=2.
\end{aligned}
$$

The augmented matrix is

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
2&-1&1&3\\
1&2&-1&2
\end{array}
\right].
$$

Use the first row as the first pivot row.

Eliminate the \(2\) below the first pivot:

$$
R_2\leftarrow R_2-2R_1.
$$

Eliminate the \(1\) below the first pivot:

$$
R_3\leftarrow R_3-R_1.
$$

This gives

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&-3&-1&-9\\
0&1&-2&-4
\end{array}
\right].
$$

Swap rows \(2\) and \(3\):

$$
R_2\leftrightarrow R_3.
$$

Then

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&1&-2&-4\\
0&-3&-1&-9
\end{array}
\right].
$$

Eliminate the entry below the second pivot:

$$
R_3\leftarrow R_3+3R_2.
$$

Then

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&1&-2&-4\\
0&0&-7&-21
\end{array}
\right].
$$

This is row echelon form.

Back substitution gives

$$
-7z=-21,
$$

so

$$
z=3.
$$

Then

$$
y-2z=-4
$$

gives

$$
y-6=-4,
$$

so

$$
y=2.
$$

Finally,

$$
x+y+z=6
$$

gives

$$
x+2+3=6,
$$

so

$$
x=1.
$$

The solution is

$$
\begin{bmatrix}
1\\
2\\
3
\end{bmatrix}.
$$
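
The row operations of this example can be replayed mechanically in Python; the helper `axpy` is our shorthand for "add a multiple of one row to another":

```python
M = [[1.0, 1.0, 1.0, 6.0],
     [2.0, -1.0, 1.0, 3.0],
     [1.0, 2.0, -1.0, 2.0]]

def axpy(M, i, k, factor):
    """R_i <- R_i + factor * R_k."""
    M[i] = [a + factor * p for a, p in zip(M[i], M[k])]

axpy(M, 1, 0, -2.0)         # R2 <- R2 - 2 R1
axpy(M, 2, 0, -1.0)         # R3 <- R3 - R1
M[1], M[2] = M[2], M[1]     # R2 <-> R3
axpy(M, 2, 1, 3.0)          # R3 <- R3 + 3 R2

print(M[2])  # -> [0.0, 0.0, -7.0, -21.0]
```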

## 9.7 Pivots

A pivot is a leading nonzero entry used to eliminate entries below it.

In the echelon matrix

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&1&-2&-4\\
0&0&-7&-21
\end{array}
\right],
$$

the pivot entries are

$$
1,\quad 1,\quad -7.
$$

The pivot columns are columns \(1\), \(2\), and \(3\). Since every variable column contains a pivot, the system has a unique solution.

Pivots measure how many independent constraints the system contains.

## 9.8 Row Swapping

A pivot entry must be nonzero. If the entry in the intended pivot position is zero, one must swap with a lower row containing a nonzero entry in the same column.

For example,

$$
\left[
\begin{array}{cc|c}
0&2&4\\
3&1&5
\end{array}
\right]
$$

cannot use the first row, first column as a pivot. Swap the two rows:

$$
R_1\leftrightarrow R_2.
$$

Then

$$
\left[
\begin{array}{cc|c}
3&1&5\\
0&2&4
\end{array}
\right].
$$

Now the pivot is \(3\).

Row swapping preserves the solution set. It only changes the order of equations.

## 9.9 Free Variables

If a column has no pivot, the corresponding variable is free.

Consider

$$
\left[
\begin{array}{ccc|c}
1&2&-1&4\\
0&0&3&6
\end{array}
\right].
$$

The pivot columns are \(1\) and \(3\). Therefore \(x_1\) and \(x_3\) are leading variables. The variable \(x_2\) is free.

The equations are

$$
x_1+2x_2-x_3=4,
$$

$$
3x_3=6.
$$

Thus

$$
x_3=2.
$$

Let

$$
x_2=t.
$$

Then

$$
x_1+2t-2=4,
$$

so

$$
x_1=6-2t.
$$

The solution set is

$$
\left\{
\begin{bmatrix}
6-2t\\
t\\
2
\end{bmatrix}
:t\in\mathbb{R}
\right\}.
$$

The system has infinitely many solutions.
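
A quick check of the parametric solution: substituting \((6-2t,\,t,\,2)\) into both equations should give zero residuals for every \(t\) (the function name is ours):

```python
def residual(t):
    """Residuals of the two equations at the point (6 - 2t, t, 2)."""
    x1, x2, x3 = 6 - 2 * t, t, 2
    return (x1 + 2 * x2 - x3 - 4, 3 * x3 - 6)

print(residual(0), residual(5), residual(-3))  # -> (0, 0) (0, 0) (0, 0)
```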

## 9.10 Inconsistent Systems

Gaussian elimination detects inconsistency by producing a row of the form

$$
\left[
\begin{array}{cccc|c}
0&0&\cdots&0&b
\end{array}
\right],
\qquad b\ne 0.
$$

This row represents

$$
0=b,
$$

which is impossible.

For example,

$$
\left[
\begin{array}{cc|c}
1&1&2\\
2&2&5
\end{array}
\right]
$$

after

$$
R_2\leftarrow R_2-2R_1
$$

becomes

$$
\left[
\begin{array}{cc|c}
1&1&2\\
0&0&1
\end{array}
\right].
$$

The second row says

$$
0=1.
$$

Therefore the system is inconsistent.
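
Once the matrix is in echelon form, the inconsistency test is easy to automate; this sketch (function name ours) scans for a row whose coefficients are all zero but whose right-hand side is not:

```python
def is_inconsistent(M):
    """True if some row of the augmented matrix reads 0 = b with b != 0."""
    return any(all(a == 0 for a in row[:-1]) and row[-1] != 0 for row in M)

print(is_inconsistent([[1, 1, 2], [0, 0, 1]]))  # -> True
print(is_inconsistent([[1, 1, 2], [0, 1, 1]]))  # -> False
```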

## 9.11 Infinitely Many Solutions

A consistent system with at least one free variable has infinitely many solutions.

Consider

$$
\left[
\begin{array}{ccc|c}
1&1&1&4\\
2&2&2&8
\end{array}
\right].
$$

Use

$$
R_2\leftarrow R_2-2R_1.
$$

Then

$$
\left[
\begin{array}{ccc|c}
1&1&1&4\\
0&0&0&0
\end{array}
\right].
$$

There is one pivot and three variables. Therefore there are two free variables.

The equation is

$$
x+y+z=4.
$$

Let

$$
y=s,
\qquad
z=t.
$$

Then

$$
x=4-s-t.
$$

Thus

$$
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix} =
\begin{bmatrix}
4-s-t\\
s\\
t
\end{bmatrix}.
$$

Equivalently,

$$
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix} =
\begin{bmatrix}
4\\
0\\
0
\end{bmatrix}
+
s
\begin{bmatrix}
-1\\
1\\
0
\end{bmatrix}
+
t
\begin{bmatrix}
-1\\
0\\
1
\end{bmatrix}.
$$

## 9.12 Gaussian Elimination Algorithm

For an augmented matrix with \(m\) rows and \(n\) variable columns, the algorithm is:

```
row = 1

for col = 1 to n:
    find a row r >= row with a nonzero entry in column col

    if no such row exists:
        continue to next column

    swap row r with current row

    for each row i below current row:
        factor = A[i, col] / A[row, col]
        R_i = R_i - factor * R_row

    row = row + 1
```

After this process, the matrix is in row echelon form. Then one checks for contradictions and performs back substitution.

The exact pivot choice may vary. In exact arithmetic, any nonzero pivot is valid. In floating point arithmetic, pivot choice affects numerical stability.
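
The pseudocode, the contradiction check, and back substitution can be combined into one short Python sketch. The function name `solve`, the use of `Fraction` for exact arithmetic, and the assumption of a square system with a pivot in every variable column during back substitution are ours, not the text's:

```python
from fractions import Fraction

def solve(A, b):
    """Solve Ax = b for square A by Gaussian elimination (exact arithmetic)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    row = 0
    for col in range(n):
        # Find a row at or below `row` with a nonzero entry in this column.
        r = next((r for r in range(row, n) if M[r][col] != 0), None)
        if r is None:
            continue                     # no pivot here; move to next column
        M[row], M[r] = M[r], M[row]      # swap into pivot position
        for i in range(row + 1, n):      # eliminate entries below the pivot
            f = M[i][col] / M[row][col]
            M[i] = [a - f * p for a, p in zip(M[i], M[row])]
        row += 1
    # Contradiction check: a row 0 0 ... 0 | b with b != 0.
    if any(all(a == 0 for a in r[:-1]) and r[-1] != 0 for r in M):
        raise ValueError("inconsistent system")
    # Back substitution (assumes a pivot in every variable column).
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][-1] - s) / M[i][i]
    return x

# The system from Section 9.6; the solution is (1, 2, 3).
print(solve([[1, 1, 1], [2, -1, 1], [1, 2, -1]], [6, 3, 2]))
```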

## 9.13 Partial Pivoting

Partial pivoting chooses a pivot with large absolute value among the available entries in the pivot column.

Instead of using the first nonzero entry below the current row, one chooses the row \(r\) for which

$$
|a_{r k}|
$$

is largest among rows still available.

Then that row is swapped into the pivot position.

Partial pivoting is used in numerical computation because small pivots can magnify rounding error. Choosing a larger pivot generally improves stability when computations are performed with floating point arithmetic.
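
In code, partial pivoting only changes the pivot search from "first nonzero entry" to "entry of largest absolute value" (the helper name is ours):

```python
def choose_pivot_row(M, row, col):
    """Index of the available row with the largest |entry| in the pivot column."""
    return max(range(row, len(M)), key=lambda r: abs(M[r][col]))

M = [[0.001, 1.0, 1.0],
     [1.0, 2.0, 3.0]]
print(choose_pivot_row(M, 0, 0))  # -> 1: the entry 1.0 beats 0.001
```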

## 9.14 Exact Arithmetic and Floating Point Arithmetic

Gaussian elimination can be studied in exact arithmetic or floating point arithmetic.

In exact arithmetic, fractions and integers are represented without rounding. The main concern is algebraic correctness.

In floating point arithmetic, numbers are rounded after operations. The main concerns are accuracy, stability, and conditioning.

For example, if a pivot is very small, then the factor

$$
\frac{a_{ik}}{a_{kk}}
$$

may be very large. Large factors can amplify errors in the row entries. Pivoting reduces this risk.
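
The risk can be seen in a small floating point experiment (an illustrative 2×2 system, not one from the text): with a tiny pivot, the huge multiplier wipes out the information in the second row.

```python
# System: eps*x + y = 1,  x + y = 2.  The exact solution is close to (1, 1).
eps = 1e-20

# Without pivoting: eliminate with the tiny pivot eps.
factor = 1.0 / eps                               # huge multiplier
y = (2.0 - factor * 1.0) / (1.0 - factor * 1.0)  # reduced second equation
x = (1.0 - y) / eps                              # back substitution
print(x, y)                                      # x comes out 0.0: badly wrong

# With the rows swapped, the pivot is 1 and the multiplier is eps.
factor = eps
y2 = (1.0 - factor * 2.0) / (1.0 - factor * 1.0)
x2 = 2.0 - y2
print(x2, y2)                                    # accurate: (1.0, 1.0)
```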

Numerical linear algebra studies these issues systematically.

## 9.15 Operation Count

For a dense \(n\times n\) system, Gaussian elimination requires on the order of

$$
n^3
$$

arithmetic operations.

More precisely, the leading cost of forward elimination is approximately

$$
\frac{2}{3}n^3
$$

floating point operations for large \(n\), with lower-order costs for back substitution.

The cubic cost is acceptable for moderate dense systems. For very large systems, especially sparse ones, specialized methods may be needed.
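
The \(\frac{2}{3}n^3\) figure can be checked by direct counting: pivot \(k\) updates \(n-k\) rows, and each updated row touches the \(n-k\) entries to the right of the pivot plus the right-hand side, at one multiply and one subtract apiece (the factor divisions are a lower-order cost, ignored in this sketch):

```python
def flops(n):
    """Multiply/subtract count for forward elimination of an n x n augmented system."""
    return sum(2 * (n - k) * (n - k + 1) for k in range(1, n))

for n in (10, 100, 1000):
    print(n, flops(n) / ((2 / 3) * n ** 3))  # ratio approaches 1 as n grows
```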

## 9.16 Gaussian Elimination and Rank

Gaussian elimination also computes rank.

The rank of a matrix is the number of pivot positions in an echelon form. Since row operations preserve rank, reducing a matrix to echelon form makes the rank visible.

For example,

$$
A=
\begin{bmatrix}
1&2&3\\
2&4&6\\
1&1&0
\end{bmatrix}.
$$

Use

$$
R_2\leftarrow R_2-2R_1,
\qquad
R_3\leftarrow R_3-R_1.
$$

Then

$$
\begin{bmatrix}
1&2&3\\
0&0&0\\
0&-1&-3
\end{bmatrix}.
$$

Swap rows \(2\) and \(3\):

$$
\begin{bmatrix}
1&2&3\\
0&-1&-3\\
0&0&0
\end{bmatrix}.
$$

There are two nonzero rows in echelon form. Therefore

$$
\operatorname{rank}(A)=2.
$$

## 9.17 Gaussian Elimination and Determinants

For square matrices, Gaussian elimination can help compute determinants.

Row operations affect determinants as follows:

| Row operation | Effect on determinant |
|---|---|
| Swap two rows | Changes sign |
| Multiply a row by \(c\) | Multiplies determinant by \(c\) |
| Add a multiple of one row to another | No change |

After reducing a matrix to triangular form, the determinant is the product of the diagonal entries, adjusted for row swaps and scalings.

This use of elimination will be developed fully in the chapter on determinants.
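
The bookkeeping can be sketched as follows (function name ours; exact arithmetic via `Fraction`): reduce to triangular form, flip the sign once per swap, and multiply the diagonal entries.

```python
from fractions import Fraction

def det(A):
    """Determinant of square A by elimination: signed product of the pivots."""
    M = [[Fraction(a) for a in row] for row in A]
    n = len(M)
    sign = 1
    for col in range(n):
        r = next((r for r in range(col, n) if M[r][col] != 0), None)
        if r is None:
            return Fraction(0)       # no pivot: the determinant is zero
        if r != col:
            M[col], M[r] = M[r], M[col]
            sign = -sign             # each swap changes the sign
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            M[i] = [a - f * p for a, p in zip(M[i], M[col])]
    product = Fraction(sign)
    for i in range(n):
        product *= M[i][i]           # product of the diagonal entries
    return product

print(det([[2, 1], [6, 7]]))  # -> 8
```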

## 9.18 Gaussian Elimination and Inverses

Gaussian elimination can also compute matrix inverses.

For an \(n\times n\) matrix \(A\), form the augmented matrix

$$
[A\mid I].
$$

Then row reduce. If \(A\) can be reduced to \(I\), then the right-hand side becomes \(A^{-1}\):

$$
[A\mid I]\longrightarrow [I\mid A^{-1}].
$$

If \(A\) cannot be reduced to \(I\), then \(A\) is not invertible.

This method is a direct application of row operations and will be studied in detail later.
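
A sketch of the \([A\mid I]\to[I\mid A^{-1}]\) computation (function name ours; `Fraction` for exact arithmetic). Note that it is Gauss-Jordan style reduction, since entries above pivots must also be cleared to reach \(I\):

```python
from fractions import Fraction

def inverse(A):
    """Invert square A by reducing [A | I]; returns None if A is singular."""
    n = len(A)
    M = [[Fraction(a) for a in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        r = next((r for r in range(col, n) if M[r][col] != 0), None)
        if r is None:
            return None                         # A cannot be reduced to I
        M[col], M[r] = M[r], M[col]
        piv = M[col][col]
        M[col] = [a / piv for a in M[col]]      # scale pivot row to 1
        for i in range(n):
            if i != col and M[i][col] != 0:     # clear above and below the pivot
                f = M[i][col]
                M[i] = [a - f * p for a, p in zip(M[i], M[col])]
    return [row[n:] for row in M]               # right half is A^{-1}

print(inverse([[2, 1], [1, 1]]) == [[1, -1], [-1, 2]])  # -> True
```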

## 9.19 Relation to Gauss-Jordan Elimination

Gaussian elimination stops after reaching row echelon form and then uses back substitution.

Gauss-Jordan elimination continues until reduced row echelon form is reached. It eliminates entries above pivots as well as below pivots.

| Method | Final matrix form | Final solution step |
|---|---|---|
| Gaussian elimination | Row echelon form | Back substitution |
| Gauss-Jordan elimination | Reduced row echelon form | Read solution directly |

Gaussian elimination usually does less row-reduction work when one only needs to solve a system. Gauss-Jordan elimination gives a more explicit final matrix form.

## 9.20 Common Mistakes

Several mistakes occur often in Gaussian elimination.

| Mistake | Correction |
|---|---|
| Dividing by a zero pivot | Swap with a row containing a nonzero entry |
| Changing only coefficient entries | Apply operations to the whole augmented row |
| Forgetting row swaps | Record swaps when determinant signs matter |
| Assuming every system has a unique solution | Check for contradictions and free variables |
| Eliminating above pivots too early | This belongs to Gauss-Jordan elimination |
| Rounding too aggressively | Keep exact values when possible |

Gaussian elimination is mechanical, but small arithmetic errors can change the final classification.

## 9.21 Summary

Gaussian elimination solves a linear system by reducing its augmented matrix to row echelon form. It uses elementary row operations to create zeros below pivots. The resulting triangular system is solved by back substitution.

The main phases are:

| Phase | Purpose |
|---|---|
| Forward elimination | Create zeros below pivots |
| Echelon inspection | Detect contradiction and free variables |
| Back substitution | Solve for leading variables |

The possible outcomes are:

| Echelon result | Solution type |
|---|---|
| Contradictory row | No solution |
| Pivot in every variable column | Unique solution |
| Free variable and no contradiction | Infinitely many solutions |

Gaussian elimination is more than a method for solving equations. It is also a method for revealing the structure of a matrix: pivots, rank, dependence, invertibility, and triangular form.
