
Chapter 9. Gaussian Elimination

Gaussian elimination is the standard method for solving systems of linear equations by row reduction. It uses elementary row operations to create zeros below each pivot, transforming an augmented matrix into row echelon form while preserving the solution set. Once the matrix is in echelon form, the solution is obtained by back substitution. The same procedure also exposes pivots, free variables, inconsistent rows, rank, and triangular structure.

9.1 Purpose of Gaussian Elimination

Given a linear system

Ax = b,

Gaussian elimination rewrites the system in a simpler equivalent form.

The goal is to transform the augmented matrix

[A \mid b]

into row echelon form:

[U \mid c],

where U has zeros below its pivots.

The new system

Ux = c

has the same solutions as the original system, but it is easier to solve because the equations have a triangular structure.

9.2 Row Echelon Target

A matrix is in row echelon form when:

Condition | Meaning
Zero rows are below nonzero rows | Empty equations appear at the bottom
Leading entries move rightward | Each pivot is to the right of the pivot above it
Entries below pivots are zero | The system has triangular structure

For example,

\left[ \begin{array}{ccc|c} 2&1&-1&8\\ 0&3&1&7\\ 0&0&5&10 \end{array} \right]

is in row echelon form.

It represents

2x + y - z = 8,
3y + z = 7,
5z = 10.

The last equation gives z. The second equation then gives y. The first equation then gives x.

9.3 The Basic Elimination Step

Suppose the pivot is a_{kk} \ne 0. To eliminate an entry a_{ik} below it, replace row i by

R_i \leftarrow R_i - \frac{a_{ik}}{a_{kk}} R_k.

The new entry in position (i,k) is

a_{ik} - \frac{a_{ik}}{a_{kk}} a_{kk} = 0.

This formula is the central arithmetic step of Gaussian elimination.

For example, in

\left[ \begin{array}{cc|c} 2&1&5\\ 6&7&19 \end{array} \right],

the first pivot is 2. The entry below it is 6. Use

R_2 \leftarrow R_2 - 3R_1.

Then

\left[ \begin{array}{cc|c} 2&1&5\\ 0&4&4 \end{array} \right].

The second equation is now simpler.
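This single elimination step can be carried out directly. Below is a minimal sketch in Python, using exact Fractions and representing the augmented matrix as a list of rows (the layout is my own choice, not from the text):

```python
from fractions import Fraction

# Augmented matrix [2 1 | 5; 6 7 | 19] from the example above.
A = [[Fraction(2), Fraction(1), Fraction(5)],
     [Fraction(6), Fraction(7), Fraction(19)]]

# Eliminate the entry below the pivot A[0][0]:
# R_2 <- R_2 - (a_21 / a_11) R_1, where the factor is 6/2 = 3.
factor = A[1][0] / A[0][0]
A[1] = [a - factor * p for a, p in zip(A[1], A[0])]

print(A[1])  # [Fraction(0, 1), Fraction(4, 1), Fraction(4, 1)]
```

The new second row is [0, 4, 4], matching the matrix above.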

9.4 Forward Elimination

Forward elimination is the first phase of Gaussian elimination. It creates zeros below successive pivots.

The process is:

Step | Action
1 | Choose a pivot column
2 | Choose a nonzero pivot entry
3 | Swap rows if needed
4 | Eliminate entries below the pivot
5 | Move to the next row and next column
6 | Continue until echelon form is reached

Forward elimination works from left to right and from top to bottom.

It does not usually eliminate entries above pivots. That extra work belongs to Gauss-Jordan elimination.

9.5 Back Substitution

Back substitution solves a system after forward elimination has produced echelon form.

For example,

\left[ \begin{array}{ccc|c} 2&1&-1&8\\ 0&3&1&7\\ 0&0&5&10 \end{array} \right]

represents

2x + y - z = 8,
3y + z = 7,
5z = 10.

Start from the bottom:

5z = 10,

so

z = 2.

Substitute into the second equation:

3y + 2 = 7,

so

y = \frac{5}{3}.

Substitute into the first equation:

2x + \frac{5}{3} - 2 = 8.

Thus

2x - \frac{1}{3} = 8,

so

2x = \frac{25}{3},

and

x = \frac{25}{6}.

Back substitution solves from the last pivot upward.
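The upward sweep just described can be written as a short loop. This is a sketch in Python with exact Fractions, applied to the triangular system above (the variable names are my own):

```python
from fractions import Fraction

# Echelon system: 2x + y - z = 8, 3y + z = 7, 5z = 10.
U = [[Fraction(2), Fraction(1), Fraction(-1)],
     [Fraction(0), Fraction(3), Fraction(1)],
     [Fraction(0), Fraction(0), Fraction(5)]]
c = [Fraction(8), Fraction(7), Fraction(10)]

n = len(U)
x = [Fraction(0)] * n
# Solve from the last pivot upward.
for i in range(n - 1, -1, -1):
    s = sum(U[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (c[i] - s) / U[i][i]

print(x)  # x = 25/6, y = 5/3, z = 2, matching the hand computation
```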

9.6 Worked Example with a Unique Solution

Consider the system

\begin{aligned} x+y+z&=6,\\ 2x-y+z&=3,\\ x+2y-z&=2. \end{aligned}

The augmented matrix is

\left[ \begin{array}{ccc|c} 1&1&1&6\\ 2&-1&1&3\\ 1&2&-1&2 \end{array} \right].

Use the first row as the first pivot row.

Eliminate the 2 below the first pivot:

R_2 \leftarrow R_2 - 2R_1.

Eliminate the 1 below the first pivot:

R_3 \leftarrow R_3 - R_1.

This gives

\left[ \begin{array}{ccc|c} 1&1&1&6\\ 0&-3&-1&-9\\ 0&1&-2&-4 \end{array} \right].

Swap rows 2 and 3:

R_2 \leftrightarrow R_3.

Then

\left[ \begin{array}{ccc|c} 1&1&1&6\\ 0&1&-2&-4\\ 0&-3&-1&-9 \end{array} \right].

Eliminate the entry below the second pivot:

R_3 \leftarrow R_3 + 3R_2.

Then

\left[ \begin{array}{ccc|c} 1&1&1&6\\ 0&1&-2&-4\\ 0&0&-7&-21 \end{array} \right].

This is row echelon form.

Back substitution gives

-7z = -21,

so

z = 3.

Then

y - 2z = -4

gives

y - 6 = -4,

so

y = 2.

Finally,

x + y + z = 6

gives

x + 2 + 3 = 6,

so

x = 1.

The solution is

\begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix}.

9.7 Pivots

A pivot is a leading nonzero entry used to eliminate entries below it.

In the echelon matrix

\left[ \begin{array}{ccc|c} 1&1&1&6\\ 0&1&-2&-4\\ 0&0&-7&-21 \end{array} \right],

the pivot entries are

1, \quad 1, \quad -7.

The pivot columns are columns 1, 2, and 3. Since every variable column contains a pivot, the system has a unique solution.

Pivots measure how many independent constraints the system contains.

9.8 Row Swapping

A pivot entry must be nonzero. If the entry in the intended pivot position is zero, one must swap with a lower row containing a nonzero entry in the same column.

For example,

\left[ \begin{array}{cc|c} 0&2&4\\ 3&1&5 \end{array} \right]

cannot use the first row, first column as a pivot. Swap the two rows:

R_1 \leftrightarrow R_2.

Then

\left[ \begin{array}{cc|c} 3&1&5\\ 0&2&4 \end{array} \right].

Now the pivot is 3.

Row swapping preserves the solution set. It only changes the order of equations.

9.9 Free Variables

If a column has no pivot, the corresponding variable is free.

Consider

\left[ \begin{array}{ccc|c} 1&2&-1&4\\ 0&0&3&6 \end{array} \right].

The pivot columns are 1 and 3. Therefore x_1 and x_3 are leading variables. The variable x_2 is free.

The equations are

x_1 + 2x_2 - x_3 = 4, \qquad 3x_3 = 6.

Thus

x_3 = 2.

Let

x_2 = t.

Then

x_1 + 2t - 2 = 4,

so

x_1 = 6 - 2t.

The solution set is

\left\{ \begin{bmatrix} 6-2t\\ t\\ 2 \end{bmatrix} : t \in \mathbb{R} \right\}.

The system has infinitely many solutions.
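A quick sanity check on the parametric solution: every choice of t should satisfy both original equations. A minimal check in Python, using exact Fractions (the sample values of t are arbitrary):

```python
from fractions import Fraction

# Verify that (6 - 2t, t, 2) solves x1 + 2 x2 - x3 = 4 and 3 x3 = 6
# for several values of the free parameter t.
for t in [Fraction(0), Fraction(1), Fraction(-7, 2)]:
    x1, x2, x3 = 6 - 2 * t, t, Fraction(2)
    assert x1 + 2 * x2 - x3 == 4
    assert 3 * x3 == 6
```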

9.10 Inconsistent Systems

Gaussian elimination detects inconsistency by producing a row of the form

\left[ \begin{array}{cccc|c} 0&0&\cdots&0&b \end{array} \right], \qquad b \ne 0.

This row represents

0 = b,

which is impossible.

For example,

\left[ \begin{array}{cc|c} 1&1&2\\ 2&2&5 \end{array} \right]

after

R_2 \leftarrow R_2 - 2R_1

becomes

\left[ \begin{array}{cc|c} 1&1&2\\ 0&0&1 \end{array} \right].

The second row says

0 = 1.

Therefore the system is inconsistent.

9.11 Infinitely Many Solutions

A consistent system with at least one free variable has infinitely many solutions.

Consider

\left[ \begin{array}{ccc|c} 1&1&1&4\\ 2&2&2&8 \end{array} \right].

Use

R_2 \leftarrow R_2 - 2R_1.

Then

\left[ \begin{array}{ccc|c} 1&1&1&4\\ 0&0&0&0 \end{array} \right].

There is one pivot and three variables. Therefore there are two free variables.

The equation is

x + y + z = 4.

Let

y = s, \qquad z = t.

Then

x = 4 - s - t.

Thus

\begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} 4-s-t\\ s\\ t \end{bmatrix}.

Equivalently,

\begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} 4\\ 0\\ 0 \end{bmatrix} + s \begin{bmatrix} -1\\ 1\\ 0 \end{bmatrix} + t \begin{bmatrix} -1\\ 0\\ 1 \end{bmatrix}.

9.12 Gaussian Elimination Algorithm

For an augmented matrix with m rows and n variable columns, the algorithm is:

row = 1

for col = 1 to n:
    if row > m:
        stop

    find a row r >= row with a nonzero entry in column col

    if no such row exists:
        continue to next column

    swap row r with the current row

    for each row i below the current row:
        factor = A[i, col] / A[row, col]
        R_i = R_i - factor * R_row

    row = row + 1

After this process, the matrix is in row echelon form. Then one checks for contradictions and performs back substitution.

The exact pivot choice may vary. In exact arithmetic, any nonzero pivot is valid. In floating point arithmetic, pivot choice affects numerical stability.
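The pseudocode above can be sketched in Python. This is a minimal implementation for the square case with a unique solution, working in exact Fractions so that any nonzero pivot is valid; the function name and error handling are my own choices, not from the text:

```python
from fractions import Fraction

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination and back substitution.

    Assumes a square system; raises ValueError when a pivot is missing,
    i.e. when the system has no unique solution.
    """
    n = len(A)
    # Build the augmented matrix [A | b] with exact Fraction entries.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]

    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot_row = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot_row is None:
            raise ValueError("no pivot in column %d: no unique solution" % col)
        M[col], M[pivot_row] = M[pivot_row], M[col]  # row swap
        # Eliminate entries below the pivot.
        for i in range(col + 1, n):
            factor = M[i][col] / M[col][col]
            M[i] = [a - factor * p for a, p in zip(M[i], M[col])]

    # Back substitution on the triangular system.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The system from Section 9.6.
x = gaussian_elimination([[1, 1, 1], [2, -1, 1], [1, 2, -1]], [6, 3, 2])
print(x)  # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]
```

Running it on the worked example of Section 9.6 reproduces the solution x = 1, y = 2, z = 3.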

9.13 Partial Pivoting

Partial pivoting chooses a pivot with large absolute value among the available entries in the pivot column.

Instead of using the first nonzero entry below the current row, one chooses the row rr for which

|a_{rk}|

is largest among rows still available.

Then that row is swapped into the pivot position.

Partial pivoting is used in numerical computation because small pivots can magnify rounding error. Choosing a larger pivot generally improves stability when computations are performed with floating point arithmetic.
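The pivot-row choice itself is a one-liner. A sketch, assuming the matrix is a list of row lists of floats (the helper name `choose_pivot_row` is my own):

```python
# Pick the row (at or below `row`) whose entry in the pivot column
# has the largest absolute value.
def choose_pivot_row(A, row, col):
    return max(range(row, len(A)), key=lambda r: abs(A[r][col]))

A = [[0.001, 1.0], [2.0, -3.0], [0.5, 4.0]]
print(choose_pivot_row(A, 0, 0))  # 1: the entry 2.0 has the largest magnitude
```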

9.14 Exact Arithmetic and Floating Point Arithmetic

Gaussian elimination can be studied in exact arithmetic or floating point arithmetic.

In exact arithmetic, fractions and integers are represented without rounding. The main concern is algebraic correctness.

In floating point arithmetic, numbers are rounded after operations. The main concerns are accuracy, stability, and conditioning.

For example, if a pivot is very small, then the factor

\frac{a_{ik}}{a_{kk}}

may be very large. Large factors can amplify errors in the row entries. Pivoting reduces this risk.

Numerical linear algebra studies these issues systematically.
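The danger of a tiny pivot can be seen on a standard two-equation illustration (the specific coefficients below are my own choice, not from the text). Without pivoting, the factor 10^{20} wipes out the information in the second row; with a row swap, the answer is recovered:

```python
def solve2(a11, a12, a21, a22, b1, b2, pivot=False):
    # Solve a 2x2 system by elimination; optionally swap rows so the
    # larger first-column entry becomes the pivot.
    if pivot and abs(a21) > abs(a11):
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    factor = a21 / a11
    a22 -= factor * a12
    b2 -= factor * b1
    y = b2 / a22
    x = (b1 - a12 * y) / a11
    return x, y

# 1e-20 x + y = 1,  x + y = 2.  The exact solution is very close to x = y = 1.
x_naive, _ = solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False)
x_piv, _ = solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True)
print(x_naive)  # 0.0 -- the tiny pivot destroys the answer
print(x_piv)    # 1.0 -- pivoting recovers it
```

In the naive run, 1 - 10^{20} and 2 - 10^{20} both round to -10^{20} in double precision, so the second equation degenerates and x comes out as 0.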

9.15 Operation Count

For a dense n \times n system, Gaussian elimination requires on the order of

n^3

arithmetic operations.

More precisely, the leading cost of forward elimination is approximately

\frac{2}{3} n^3

floating point operations for large n, with lower-order costs for back substitution.

The cubic cost is acceptable for moderate dense systems. For very large systems, especially sparse ones, specialized methods may be needed.
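The (2/3) n^3 estimate can be checked by counting operations directly. A sketch in Python that tallies the divisions, multiplications, and subtractions of forward elimination on the coefficient matrix alone (the function name is my own):

```python
# Count the arithmetic operations of forward elimination on an n x n
# coefficient matrix (right-hand side excluded).
def forward_elimination_ops(n):
    ops = 0
    for k in range(n - 1):          # pivot index
        for i in range(k + 1, n):   # rows below the pivot
            ops += 1                # one division for the factor
            ops += 2 * (n - k - 1)  # one multiply + one subtract per entry
    return ops

n = 200
print(forward_elimination_ops(n) / n**3)  # close to 2/3 for large n
```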

9.16 Gaussian Elimination and Rank

Gaussian elimination also computes rank.

The rank of a matrix is the number of pivot positions in an echelon form. Since row operations preserve rank, reducing a matrix to echelon form makes the rank visible.

For example,

A = \begin{bmatrix} 1&2&3\\ 2&4&6\\ 1&1&0 \end{bmatrix}.

Use

R_2 \leftarrow R_2 - 2R_1, \qquad R_3 \leftarrow R_3 - R_1.

Then

\begin{bmatrix} 1&2&3\\ 0&0&0\\ 0&-1&-3 \end{bmatrix}.

Swap rows 2 and 3:

\begin{bmatrix} 1&2&3\\ 0&-1&-3\\ 0&0&0 \end{bmatrix}.

There are two nonzero rows in echelon form. Therefore

\operatorname{rank}(A) = 2.
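Counting pivots mechanically gives a rank routine for any rectangular matrix. A sketch in Python with exact Fractions (the function name `rank` is my own):

```python
from fractions import Fraction

def rank(A):
    """Reduce A to echelon form and count the pivots."""
    M = [[Fraction(v) for v in row] for row in A]
    m, n = len(M), len(M[0])
    row = 0
    for col in range(n):
        if row == m:
            break
        pivot = next((r for r in range(row, m) if M[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column: move right
        M[row], M[pivot] = M[pivot], M[row]
        for i in range(row + 1, m):
            factor = M[i][col] / M[row][col]
            M[i] = [a - factor * p for a, p in zip(M[i], M[row])]
        row += 1
    return row  # number of pivots found

print(rank([[1, 2, 3], [2, 4, 6], [1, 1, 0]]))  # 2
```

Applied to the matrix of this section, it reports rank 2, matching the hand computation.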

9.17 Gaussian Elimination and Determinants

For square matrices, Gaussian elimination can help compute determinants.

Row operations affect determinants as follows:

Row operation | Effect on determinant
Swap two rows | Changes sign
Multiply a row by c | Multiplies the determinant by c
Add a multiple of one row to another | No change

After reducing a matrix to triangular form, the determinant is the product of the diagonal entries, adjusted for row swaps and scalings.

This use of elimination will be developed fully in the chapter on determinants.
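As a preview, the rules above already yield a determinant routine: eliminate with row additions (no change), track swaps (sign flips), and multiply the diagonal of the triangular result. A sketch in Python with exact Fractions (the function name `det` is my own):

```python
from fractions import Fraction

def det(A):
    """Determinant via elimination: triangularize, track row swaps."""
    M = [[Fraction(v) for v in row] for row in A]
    n = len(M)
    sign = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # no pivot: the matrix is singular
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign        # each swap flips the sign
        for i in range(col + 1, n):
            factor = M[i][col] / M[col][col]
            M[i] = [a - factor * p for a, p in zip(M[i], M[col])]
    prod = Fraction(1)
    for i in range(n):
        prod *= M[i][i]         # product of the diagonal entries
    return sign * prod

# The coefficient matrix from Section 9.6.
print(det([[1, 1, 1], [2, -1, 1], [1, 2, -1]]))  # 7
```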

9.18 Gaussian Elimination and Inverses

Gaussian elimination can also compute matrix inverses.

For an n \times n matrix A, form the augmented matrix

[A \mid I].

Then row reduce. If A can be reduced to I, then the right-hand side becomes A^{-1}:

[A \mid I] \longrightarrow [I \mid A^{-1}].

If A cannot be reduced to I, then A is not invertible.

This method is a direct application of row operations and will be studied in detail later.
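As a preview of that later treatment, here is a sketch of the [A | I] method in Python with exact Fractions. Reaching [I | A^{-1}] requires clearing entries above the pivots as well, so this is the Gauss-Jordan variant (the function name `inverse` is my own):

```python
from fractions import Fraction

def inverse(A):
    """Invert A by row reducing [A | I] to [I | A^{-1}]."""
    n = len(A)
    # Form the augmented matrix [A | I] with exact Fraction entries.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if j == i else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]  # scale the pivot to 1
        for i in range(n):                # clear the column above and below
            if i != col and M[i][col] != 0:
                f = M[i][col]
                M[i] = [a - f * u for a, u in zip(M[i], M[col])]
    return [row[n:] for row in M]         # right half is the inverse

print(inverse([[2, 1], [5, 3]]))  # [[3, -1], [-5, 2]] as Fractions
```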

9.19 Relation to Gauss-Jordan Elimination

Gaussian elimination stops after reaching row echelon form and then uses back substitution.

Gauss-Jordan elimination continues until reduced row echelon form is reached. It eliminates entries above pivots as well as below pivots.

Method | Final matrix form | Final solution step
Gaussian elimination | Row echelon form | Back substitution
Gauss-Jordan elimination | Reduced row echelon form | Read solution directly

Gaussian elimination usually does less row-reduction work when one only needs to solve a system. Gauss-Jordan elimination gives a more explicit final matrix form.

9.20 Common Mistakes

Several mistakes occur often in Gaussian elimination.

Mistake | Correction
Dividing by a zero pivot | Swap with a row containing a nonzero entry
Changing only coefficient entries | Apply operations to the whole augmented row
Forgetting row swaps | Record swaps when determinant signs matter
Assuming every system has a unique solution | Check for contradictions and free variables
Eliminating above pivots too early | This belongs to Gauss-Jordan elimination
Rounding too aggressively | Keep exact values when possible

Gaussian elimination is mechanical, but small arithmetic errors can change the final classification.

9.21 Summary

Gaussian elimination solves a linear system by reducing its augmented matrix to row echelon form. It uses elementary row operations to create zeros below pivots. The resulting triangular system is solved by back substitution.

The main phases are:

Phase | Purpose
Forward elimination | Create zeros below pivots
Echelon inspection | Detect contradictions and free variables
Back substitution | Solve for leading variables

The possible outcomes are:

Echelon result | Solution type
Contradictory row | No solution
Pivot in every variable column | Unique solution
Free variable and no contradiction | Infinitely many solutions

Gaussian elimination is more than a method for solving equations. It is also a method for revealing the structure of a matrix: pivots, rank, dependence, invertibility, and triangular form.