# Chapter 10. Gauss-Jordan Elimination

Gauss-Jordan elimination is the row-reduction method that carries an augmented matrix all the way to reduced row echelon form. Gaussian elimination stops at row echelon form and then uses back substitution. Gauss-Jordan elimination continues the reduction until each pivot is \(1\) and each pivot column has zeros everywhere except at the pivot. In this final form, the solution can be read directly.

## 10.1 Purpose

Given a linear system

$$
Ax=b,
$$

Gauss-Jordan elimination transforms the augmented matrix

$$
[A\mid b]
$$

into reduced row echelon form.

The target has the form

$$
[R\mid c],
$$

where \(R\) is reduced as far as row operations allow. The resulting system is equivalent to the original system, but the equations are arranged so that leading variables are already isolated.

For example,

$$
\left[
\begin{array}{ccc|c}
1&0&0&2\\
0&1&0&-1\\
0&0&1&5
\end{array}
\right]
$$

directly represents

$$
x_1=2,\qquad x_2=-1,\qquad x_3=5.
$$

No back substitution is needed.

## 10.2 Reduced Row Echelon Form

A matrix is in reduced row echelon form when it satisfies the following conditions:

| Condition | Meaning |
|---|---|
| Zero rows are at the bottom | Empty equations appear last |
| Each nonzero row has a leading \(1\) | The first nonzero entry of each nonzero row is normalized |
| Leading \(1\)'s move rightward | Each pivot is to the right of the pivot above it |
| Each pivot column has zeros elsewhere | A leading \(1\) is the only nonzero entry in its column |

These conditions define the final target of Gauss-Jordan elimination.

For example,

$$
\begin{bmatrix}
1&0&3&0\\
0&1&-2&0\\
0&0&0&1
\end{bmatrix}
$$

is in reduced row echelon form.

The matrix

$$
\begin{bmatrix}
1&2&0\\
0&1&3
\end{bmatrix}
$$

is in row echelon form, but not reduced row echelon form, because the pivot in the second column has a nonzero entry above it.
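The four conditions in the table can be checked mechanically. Here is a minimal Python sketch (the function name `is_rref` is our own, not from the text):

```python
def is_rref(M):
    """Check the four RREF conditions for a matrix given as a list of rows."""
    last_pivot = -1
    seen_zero_row = False
    for i, row in enumerate(M):
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if not nonzero:
            seen_zero_row = True               # a zero row; all later rows must be zero too
            continue
        if seen_zero_row:
            return False                       # nonzero row below a zero row
        j = nonzero[0]
        if row[j] != 1:
            return False                       # leading entry must be 1
        if j <= last_pivot:
            return False                       # pivots must move strictly rightward
        if any(M[k][j] != 0 for k in range(len(M)) if k != i):
            return False                       # pivot must be alone in its column
        last_pivot = j
    return True

print(is_rref([[1, 0, 3, 0], [0, 1, -2, 0], [0, 0, 0, 1]]))   # True
print(is_rref([[1, 2, 0], [0, 1, 3]]))                        # False: entry above a pivot
```

The two calls correspond to the two matrices above: the first passes all four conditions, the second fails the last one.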

## 10.3 Difference from Gaussian Elimination

Gaussian elimination and Gauss-Jordan elimination use the same elementary row operations. They differ in the stopping point.

| Method | Final form | Final step |
|---|---|---|
| Gaussian elimination | Row echelon form | Back substitution |
| Gauss-Jordan elimination | Reduced row echelon form | Read solution directly |

Gaussian elimination creates zeros below pivots. Gauss-Jordan elimination creates zeros below and above pivots.

For the echelon matrix

$$
\left[
\begin{array}{ccc|c}
1&2&-1&4\\
0&1&3&5\\
0&0&1&2
\end{array}
\right],
$$

Gaussian elimination would solve by back substitution. Gauss-Jordan elimination continues by eliminating the entries above the pivot in column \(3\), then above the pivot in column \(2\).

## 10.4 The Basic Gauss-Jordan Step

Suppose a pivot has been normalized to \(1\) in row \(k\) and column \(j\). To eliminate another entry \(a_{ij}\) in the same column, replace row \(i\) by

$$
R_i \leftarrow R_i-a_{ij}R_k.
$$

The new entry in column \(j\) is

$$
a_{ij}-a_{ij}\cdot 1=0.
$$

This same formula works for entries below the pivot and above the pivot. Gaussian elimination uses it only below pivots. Gauss-Jordan elimination uses it throughout the pivot column.
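The step can be written as a one-line row update. A minimal Python sketch, using the rows from the example in Section 10.6 (the function name `eliminate_entry` is illustrative):

```python
from fractions import Fraction

def eliminate_entry(R_i, R_k, j):
    """Replace row R_i by R_i - a_ij * R_k, where R_k has pivot 1 in column j."""
    factor = R_i[j]                       # a_ij, the entry to eliminate
    return [x - factor * y for x, y in zip(R_i, R_k)]

pivot_row = [Fraction(0), Fraction(1), Fraction(-2), Fraction(-4)]  # pivot 1 in column 1
row = [Fraction(0), Fraction(-3), Fraction(-1), Fraction(-9)]
new_row = eliminate_entry(row, pivot_row, 1)
print(new_row)                            # entry in column 1 is now 0
```

The same update applies whether the target row lies above or below the pivot row.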

## 10.5 Algorithm

For an augmented matrix, Gauss-Jordan elimination proceeds as follows:

```text
row = 1

for each column from left to right:
    find a nonzero entry in this column, in or below the current row

    if no such entry exists:
        continue to the next column

    swap that row into the current row position

    scale the current row so that the pivot equals 1

    eliminate every other entry in the pivot column

    move to the next row
```

When the process stops, the matrix is in reduced row echelon form.

The algorithm uses only elementary row operations: row swaps, nonzero row scalings, and row replacements. Since each operation is reversible, the final system is equivalent to the original system.
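The pseudocode above translates directly into a short Python implementation. This is a sketch, not a production routine; exact `Fraction` arithmetic avoids floating-point roundoff, and the function name `rref` is our own:

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M (a list of rows) to reduced row echelon form."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0                                       # next pivot row to fill
    for c in range(cols):                       # sweep columns left to right
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue                            # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]         # swap pivot row into place
        p = A[r][c]
        A[r] = [x / p for x in A[r]]            # scale so the pivot equals 1
        for i in range(rows):
            if i != r and A[i][c] != 0:         # clear the rest of the column
                f = A[i][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return A

# The augmented matrix of the system solved in Section 10.6:
aug = [[1, 1, 1, 6], [2, -1, 1, 3], [1, 2, -1, 2]]
print(rref(aug))   # left block becomes the identity; the last column reads 1, 2, 3
```

Note that this version clears the whole pivot column as soon as the pivot is found, rather than sweeping back in a separate phase; both orders give the same reduced form.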

## 10.6 Example with a Unique Solution

Consider

$$
\begin{aligned}
x+y+z&=6,\\
2x-y+z&=3,\\
x+2y-z&=2.
\end{aligned}
$$

The augmented matrix is

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
2&-1&1&3\\
1&2&-1&2
\end{array}
\right].
$$

Eliminate below the first pivot:

$$
R_2\leftarrow R_2-2R_1,
\qquad
R_3\leftarrow R_3-R_1.
$$

Then

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&-3&-1&-9\\
0&1&-2&-4
\end{array}
\right].
$$

Swap rows \(2\) and \(3\):

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&1&-2&-4\\
0&-3&-1&-9
\end{array}
\right].
$$

Eliminate the entry below the second pivot:

$$
R_3\leftarrow R_3+3R_2.
$$

This gives

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&1&-2&-4\\
0&0&-7&-21
\end{array}
\right].
$$

Scale the third row:

$$
R_3\leftarrow -\frac17R_3.
$$

Then

$$
\left[
\begin{array}{ccc|c}
1&1&1&6\\
0&1&-2&-4\\
0&0&1&3
\end{array}
\right].
$$

Eliminate entries above the third pivot:

$$
R_2\leftarrow R_2+2R_3,
\qquad
R_1\leftarrow R_1-R_3.
$$

Then

$$
\left[
\begin{array}{ccc|c}
1&1&0&3\\
0&1&0&2\\
0&0&1&3
\end{array}
\right].
$$

Eliminate the entry above the second pivot:

$$
R_1\leftarrow R_1-R_2.
$$

Thus

$$
\left[
\begin{array}{ccc|c}
1&0&0&1\\
0&1&0&2\\
0&0&1&3
\end{array}
\right].
$$

The solution is

$$
\begin{bmatrix}
1\\
2\\
3
\end{bmatrix}.
$$
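The answer can be confirmed by substituting back into the original equations, for example with a few lines of Python:

```python
# Check the solution (1, 2, 3) against the original system of Section 10.6.
A = [[1, 1, 1], [2, -1, 1], [1, 2, -1]]
b = [6, 3, 2]
x = [1, 2, 3]

for row, rhs in zip(A, b):
    value = sum(a * xi for a, xi in zip(row, x))
    print(value == rhs)   # True for every equation
```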

## 10.7 Reading a Unique Solution

When the left side of an augmented matrix reduces to the identity matrix, the solution is the right-hand side.

If

$$
[A\mid b]\longrightarrow [I\mid c],
$$

then

$$
x=c.
$$

For example,

$$
\left[
\begin{array}{ccc|c}
1&0&0&4\\
0&1&0&-2\\
0&0&1&7
\end{array}
\right]
$$

represents

$$
x_1=4,\qquad x_2=-2,\qquad x_3=7.
$$

The solution vector is

$$
x=
\begin{bmatrix}
4\\
-2\\
7
\end{bmatrix}.
$$

## 10.8 Free Variables in Reduced Form

Reduced row echelon form makes free variables explicit.

Consider

$$
\left[
\begin{array}{ccc|c}
1&0&2&3\\
0&1&-1&4\\
0&0&0&0
\end{array}
\right].
$$

The pivot columns are columns \(1\) and \(2\). The third variable is free.

The equations are

$$
x_1+2x_3=3,
$$

$$
x_2-x_3=4.
$$

Let

$$
x_3=t.
$$

Then

$$
x_1=3-2t,
\qquad
x_2=4+t.
$$

Thus

$$
x=
\begin{bmatrix}
3-2t\\
4+t\\
t
\end{bmatrix} =
\begin{bmatrix}
3\\
4\\
0
\end{bmatrix}
+
t
\begin{bmatrix}
-2\\
1\\
1
\end{bmatrix}.
$$

The reduced form separates the particular solution from the homogeneous direction.

## 10.9 Inconsistent Systems in Reduced Form

An inconsistent system produces a row of the form

$$
\left[
\begin{array}{cccc|c}
0&0&\cdots&0&b
\end{array}
\right],
\qquad b\ne 0.
$$

This row represents

$$
0=b.
$$

For example,

$$
\left[
\begin{array}{ccc|c}
1&0&2&5\\
0&1&-1&3\\
0&0&0&4
\end{array}
\right]
$$

contains the equation

$$
0=4.
$$

Therefore the system has no solution.
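Detecting such a row programmatically is simple: scan the reduced augmented matrix for a row whose coefficient entries are all zero but whose last entry is not. A hedged sketch (the function name `is_inconsistent` is our own):

```python
# Detect an inconsistent row [0 ... 0 | b], b != 0, in a reduced
# augmented matrix (coefficients in every column but the last).
def is_inconsistent(aug):
    return any(all(x == 0 for x in row[:-1]) and row[-1] != 0
               for row in aug)

reduced = [[1, 0, 2, 5], [0, 1, -1, 3], [0, 0, 0, 4]]
print(is_inconsistent(reduced))   # True: the last row says 0 = 4
```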

## 10.10 Pivot Columns and Leading Variables

Each pivot column corresponds to a leading variable. Each nonpivot column corresponds to a free variable.

In

$$
\left[
\begin{array}{ccccc|c}
1&0&3&0&-1&2\\
0&1&-2&0&4&5\\
0&0&0&1&6&-3
\end{array}
\right],
$$

the pivot columns are \(1\), \(2\), and \(4\). Thus

$$
x_1,\quad x_2,\quad x_4
$$

are leading variables.

The nonpivot columns are \(3\) and \(5\). Thus

$$
x_3,\quad x_5
$$

are free variables.

Let

$$
x_3=s,\qquad x_5=t.
$$

Then the equations give

$$
x_1=2-3s+t,
$$

$$
x_2=5+2s-4t,
$$

$$
x_4=-3-6t.
$$

So the solution set is

$$
x=
\begin{bmatrix}
2-3s+t\\
5+2s-4t\\
s\\
-3-6t\\
t
\end{bmatrix}.
$$
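As with the one-parameter case, any choice of \(s\) and \(t\) must satisfy the three pivot equations. A short Python check over a grid of samples:

```python
# Sample the two-parameter family and confirm each sample satisfies
# the three equations read from the reduced matrix.
for s in range(-1, 2):
    for t in range(-1, 2):
        x1, x2, x3, x4, x5 = 2 - 3*s + t, 5 + 2*s - 4*t, s, -3 - 6*t, t
        assert x1 + 3*x3 - x5 == 2
        assert x2 - 2*x3 + 4*x5 == 5
        assert x4 + 6*x5 == -3
print("all (s, t) samples check out")
```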

## 10.11 Homogeneous Systems

For a homogeneous system

$$
Ax=0,
$$

the augmented matrix has a zero right-hand side:

$$
[A\mid 0].
$$

Row operations preserve the zero right-hand side. Therefore the reduced row echelon form also has zero in the augmented column:

$$
[R\mid 0].
$$

The system always has the trivial solution

$$
x=0.
$$

If every variable column has a pivot, the trivial solution is the only solution. If at least one variable is free, the system has infinitely many solutions.

For example,

$$
\left[
\begin{array}{ccc|c}
1&0&-2&0\\
0&1&3&0
\end{array}
\right]
$$

gives

$$
x_1-2x_3=0,
\qquad
x_2+3x_3=0.
$$

Let

$$
x_3=t.
$$

Then

$$
x_1=2t,
\qquad
x_2=-3t.
$$

Thus

$$
x=t
\begin{bmatrix}
2\\
-3\\
1
\end{bmatrix}.
$$
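Every scalar multiple of \((2,-3,1)\) solves the system, which a few lines of Python confirm:

```python
# Every multiple of the direction vector (2, -3, 1) solves the
# homogeneous system x1 - 2*x3 = 0, x2 + 3*x3 = 0.
direction = (2, -3, 1)
for t in [-1, 0, 1, 5]:
    x1, x2, x3 = (t * d for d in direction)
    assert x1 - 2 * x3 == 0
    assert x2 + 3 * x3 == 0
print("every multiple of (2, -3, 1) is a solution")
```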

## 10.12 Computing an Inverse

Gauss-Jordan elimination gives a direct method for computing the inverse of a square matrix.

For an \(n\times n\) matrix \(A\), form

$$
[A\mid I_n].
$$

Then row reduce.

If

$$
[A\mid I_n]\longrightarrow [I_n\mid B],
$$

then

$$
B=A^{-1}.
$$

If the left side cannot be reduced to \(I_n\), then \(A\) is not invertible.

This works because the row operations applied to \(A\) are also applied to \(I_n\). The accumulated row operations form the inverse matrix.
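The \([A\mid I_n]\) procedure can be sketched in Python with exact rational arithmetic. This is a minimal illustration (the function name `inverse` is our own), not a numerically robust routine:

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix A by row-reducing [A | I]; return None if singular."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if M[i][c] != 0), None)
        if pivot is None:
            return None                        # no pivot: A is not invertible
        M[c], M[pivot] = M[pivot], M[c]        # swap pivot row into place
        p = M[c][c]
        M[c] = [x / p for x in M[c]]           # scale pivot to 1
        for i in range(n):
            if i != c and M[i][c] != 0:        # clear the rest of column c
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    return [row[n:] for row in M]              # right block is the inverse

print(inverse([[1, 2], [3, 5]]))   # equals [[-5, 2], [3, -1]]
```

The example matrix is the one inverted by hand in the next section; a singular input such as `[[1, 2], [2, 4]]` returns `None` because its left side cannot reach the identity.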

## 10.13 Example: Inverse of a \(2\times 2\) Matrix

Let

$$
A=
\begin{bmatrix}
1&2\\
3&5
\end{bmatrix}.
$$

Form

$$
[A\mid I] =
\left[
\begin{array}{cc|cc}
1&2&1&0\\
3&5&0&1
\end{array}
\right].
$$

Eliminate below the first pivot:

$$
R_2\leftarrow R_2-3R_1.
$$

Then

$$
\left[
\begin{array}{cc|cc}
1&2&1&0\\
0&-1&-3&1
\end{array}
\right].
$$

Scale row \(2\):

$$
R_2\leftarrow -R_2.
$$

Then

$$
\left[
\begin{array}{cc|cc}
1&2&1&0\\
0&1&3&-1
\end{array}
\right].
$$

Eliminate above the second pivot:

$$
R_1\leftarrow R_1-2R_2.
$$

Thus

$$
\left[
\begin{array}{cc|cc}
1&0&-5&2\\
0&1&3&-1
\end{array}
\right].
$$

Therefore

$$
A^{-1} =
\begin{bmatrix}
-5&2\\
3&-1
\end{bmatrix}.
$$

One may check:

$$
\begin{bmatrix}
1&2\\
3&5
\end{bmatrix}
\begin{bmatrix}
-5&2\\
3&-1
\end{bmatrix} =
\begin{bmatrix}
1&0\\
0&1
\end{bmatrix}.
$$
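The same check is easy to run in Python with an explicit matrix product:

```python
# Multiply A by the computed inverse and confirm the product is I.
A = [[1, 2], [3, 5]]
A_inv = [[-5, 2], [3, -1]]

product = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)   # [[1, 0], [0, 1]]
```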

## 10.14 Rank from Reduced Row Echelon Form

The rank of a matrix is the number of pivot positions in its row echelon form. Since reduced row echelon form makes pivots explicit, it also makes rank explicit.

For example,

$$
R=
\begin{bmatrix}
1&0&3&0\\
0&1&-2&0\\
0&0&0&1\\
0&0&0&0
\end{bmatrix}
$$

has three pivots. Therefore

$$
\operatorname{rank}(R)=3.
$$

Any matrix row equivalent to \(R\) has rank \(3\).
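Since each nonzero row of a reduced matrix carries exactly one pivot, the rank can be read off by counting nonzero rows:

```python
# Rank of a matrix already in RREF: count the nonzero rows
# (each nonzero row contains exactly one pivot).
def rank_of_rref(R):
    return sum(1 for row in R if any(x != 0 for x in row))

R = [[1, 0, 3, 0], [0, 1, -2, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
print(rank_of_rref(R))   # 3
```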

## 10.15 Uniqueness of Reduced Row Echelon Form

Every matrix is row equivalent to exactly one reduced row echelon matrix.

This uniqueness is what separates reduced row echelon form from ordinary row echelon form: a matrix may have many possible echelon forms, because pivot choices and row swaps can vary, but it has exactly one reduced row echelon form.

Thus the notation

$$
\operatorname{rref}(A)
$$

is meaningful: it refers to the unique reduced row echelon form of \(A\).

## 10.16 Column Information and a Caution

Row reduction preserves row relations and solution sets of augmented systems. It does not preserve the actual columns of the original matrix.

For example, the pivot positions in the reduced form identify which columns of the original matrix are pivot columns. But the entries of the reduced columns are usually not the same as the original columns.

This distinction matters when finding a basis for the column space. One uses the pivot column positions found from row reduction, then returns to the corresponding original columns of \(A\).
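As a sketch, suppose row reduction of a hypothetical matrix \(A\) has reported pivots in columns \(1\), \(2\), and \(4\) (the matrix below and its pivot positions are an illustrative example, not from the text). The basis vectors are then selected from \(A\) itself:

```python
# Given pivot column positions found from row reduction, a basis for
# the column space uses the ORIGINAL columns, not the reduced ones.
A = [[1, 0, 3, 0],
     [2, 1, 4, 0],
     [3, 1, 7, 1]]          # hypothetical example matrix
pivot_cols = [0, 1, 3]      # 0-based positions of pivots found by row reducing A
basis = [[row[j] for row in A] for j in pivot_cols]
print(basis)   # [[1, 2, 3], [0, 1, 1], [0, 0, 1]]
```

Here column 3 of `A` equals three times column 1 minus twice column 2, so it is correctly excluded from the basis.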

## 10.17 Computational Cost

For a dense \(n\times n\) system, Gauss-Jordan elimination has cubic arithmetic cost, like Gaussian elimination. It usually performs more arithmetic than Gaussian elimination because it eliminates both below and above pivots.

For solving a single system, Gaussian elimination followed by back substitution is often more efficient. For finding reduced row echelon form or computing inverses by \([A\mid I]\), Gauss-Jordan elimination is the natural method.
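As a rough guide, the standard leading-order operation counts for a dense \(n\times n\) system (counting multiplications and additions together as flops) are

$$
\text{Gaussian elimination with back substitution:}\quad \tfrac{2}{3}n^{3}+O(n^{2})\ \text{flops},
\qquad
\text{Gauss-Jordan elimination:}\quad n^{3}+O(n^{2})\ \text{flops},
$$

so Gauss-Jordan elimination does roughly \(1.5\) times the arithmetic of Gaussian elimination on a single system.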

## 10.18 Common Mistakes

| Mistake | Correction |
|---|---|
| Stopping at echelon form | Gauss-Jordan continues to reduced row echelon form |
| Forgetting to scale pivots to \(1\) | Every leading entry in RREF must be \(1\) |
| Eliminating only below pivots | RREF requires zeros above pivots too |
| Applying row operations only to \(A\) in \([A\mid b]\) | Apply each operation to the whole augmented matrix |
| Treating free variables as zero without saying so | Free variables must be parameterized |
| Using reduced columns as original column-space basis vectors | Use pivot positions to select columns from the original matrix |

## 10.19 Summary

Gauss-Jordan elimination is the row-reduction process that produces reduced row echelon form. It uses elementary row operations to create leading \(1\)'s and zeros everywhere else in each pivot column.

The method gives direct access to:

| Output | How it appears in RREF |
|---|---|
| Unique solution | Left side becomes identity |
| No solution | A row \(0=b\), \(b\ne 0\) |
| Infinitely many solutions | At least one free variable |
| Rank | Number of pivots |
| Invertibility | Square matrix reduces to identity |
| Inverse | \([A\mid I]\to[I\mid A^{-1}]\) |

Gaussian elimination is usually preferred for solving one system efficiently. Gauss-Jordan elimination is preferred when the reduced form itself is needed.
