# Chapter 13. Determinants

A determinant is a scalar attached to a square matrix. It measures several things at once: whether the matrix is invertible, how the corresponding linear transformation scales volume, and whether orientation is preserved or reversed. A matrix has a determinant only when it is square. For an \(n\times n\) matrix \(A\), the determinant is written as

$$
\det(A)
$$

or

$$
|A|.
$$

The determinant is zero exactly when the matrix is singular. Equivalently, a square matrix is invertible exactly when its determinant is nonzero.

## 13.1 Determinants of \(1\times 1\) Matrices

The determinant of a \(1\times 1\) matrix is its only entry:

$$
\det([a])=a.
$$

For example,

$$
\det([7])=7.
$$

This base case is used in recursive definitions of larger determinants.

## 13.2 Determinants of \(2\times 2\) Matrices

For a \(2\times 2\) matrix

$$
A=
\begin{bmatrix}
a&b\\
c&d
\end{bmatrix},
$$

the determinant is

$$
\det(A)=ad-bc.
$$

For example,

$$
\det
\begin{bmatrix}
3&2\\
5&4
\end{bmatrix} =
3\cdot 4-2\cdot 5 =
12-10 =
2.
$$

The two products \(ad\) and \(bc\) are the main-diagonal and off-diagonal contributions. If they are equal, they cancel and the determinant is zero.
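
The \(2\times 2\) formula is short enough to check directly in code. The following is a minimal sketch in Python; the function name `det2` and its flat argument list are our own convention, not a standard API:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

print(det2(3, 2, 5, 4))  # 3*4 - 2*5 = 2
print(det2(1, 2, 2, 4))  # rows proportional, so the products cancel: 0
```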

## 13.3 Area Interpretation in the Plane

Let

$$
A=
\begin{bmatrix}
a&b\\
c&d
\end{bmatrix}.
$$

The columns of \(A\) are

$$
u=
\begin{bmatrix}
a\\
c
\end{bmatrix},
\qquad
v=
\begin{bmatrix}
b\\
d
\end{bmatrix}.
$$

The absolute value

$$
|\det(A)|
$$

is the area of the parallelogram spanned by \(u\) and \(v\).

If

$$
\det(A)>0,
$$

the transformation preserves orientation. If

$$
\det(A)<0,
$$

the transformation reverses orientation. If

$$
\det(A)=0,
$$

the parallelogram has zero area, so the two column vectors are linearly dependent.
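
The signed-area reading can be illustrated with a small sketch (the helper name `signed_area` is ours). Swapping the two column vectors reverses orientation and flips the sign:

```python
def signed_area(u, v):
    """Signed area of the parallelogram spanned by column vectors u and v.

    For A = [[a, b], [c, d]] with columns u = (a, c) and v = (b, d),
    this equals det(A) = a*d - b*c.
    """
    a, c = u
    b, d = v
    return a * d - b * c

u, v = (3, 5), (2, 4)
print(abs(signed_area(u, v)))  # area of the parallelogram: 2
print(signed_area(v, u))       # swapped order reverses orientation: -2
```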

## 13.4 Determinants of \(3\times 3\) Matrices

For

$$
A=
\begin{bmatrix}
a&b&c\\
d&e&f\\
g&h&i
\end{bmatrix},
$$

one formula is

$$
\det(A) =
a(ei-fh)-b(di-fg)+c(dh-eg).
$$

For example,

$$
A=
\begin{bmatrix}
1&2&3\\
0&4&5\\
1&0&6
\end{bmatrix}.
$$

Then

$$
\det(A) =
1(4\cdot 6-5\cdot 0) -
2(0\cdot 6-5\cdot 1)
+
3(0\cdot 0-4\cdot 1).
$$

Thus

$$
\det(A)=24-2(-5)+3(-4)=24+10-12=22.
$$

The absolute value of a \(3\times 3\) determinant is the volume of the parallelepiped spanned by its three column vectors.
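
The \(3\times 3\) formula translates directly into code. A sketch, with `det3` as our own name and the matrix stored as a list of rows:

```python
def det3(m):
    """Determinant of a 3x3 matrix m (a list of rows), via the
    formula a(ei - fh) - b(di - fg) + c(dh - eg)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3],
     [0, 4, 5],
     [1, 0, 6]]
print(det3(A))  # 22, matching the worked example
```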

## 13.5 Minors

Let \(A\) be an \(n\times n\) matrix. The minor \(M_{ij}\) is the determinant of the matrix obtained by deleting row \(i\) and column \(j\) from \(A\).

For example, if

$$
A=
\begin{bmatrix}
1&2&3\\
4&5&6\\
7&8&9
\end{bmatrix},
$$

then the minor \(M_{12}\) is obtained by deleting row \(1\) and column \(2\):

$$
M_{12} =
\det
\begin{bmatrix}
4&6\\
7&9
\end{bmatrix}.
$$

Therefore

$$
M_{12}=4\cdot 9-6\cdot 7=36-42=-6.
$$
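
Deleting a row and a column is easy to express with list comprehensions. A sketch for the \(3\times 3\) case; note that the code uses 0-based indices, so the text's \(M_{12}\) is `minor(A, 0, 1)` here:

```python
def minor(A, i, j):
    """Minor of a 3x3 matrix A: the determinant of the 2x2 submatrix
    obtained by deleting row i and column j (0-indexed)."""
    sub = [[A[r][c] for c in range(3) if c != j]
           for r in range(3) if r != i]
    (a, b), (c, d) = sub
    return a * d - b * c

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(minor(A, 0, 1))  # the text's M_12: det([[4, 6], [7, 9]]) = -6
```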

## 13.6 Cofactors

The cofactor \(C_{ij}\) is the signed minor

$$
C_{ij}=(-1)^{i+j}M_{ij}.
$$

The signs follow the checkerboard pattern

$$
\begin{bmatrix}
+&-&+&\cdots\\
-&+&-&\cdots\\
+&-&+&\cdots\\
\vdots&\vdots&\vdots&\ddots
\end{bmatrix}.
$$

For the previous example,

$$
C_{12}=(-1)^{1+2}M_{12}=-M_{12}=6.
$$

Cofactors are used in Laplace expansion and in the adjugate formula for the inverse.
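
The sign factor is a one-line addition to the minor computation. A sketch that keeps the text's 1-based indices \(i,j\), assuming a \(3\times 3\) input so the minor is \(2\times 2\):

```python
def cofactor(A, i, j):
    """Cofactor C_ij = (-1)**(i + j) * M_ij, with 1-indexed i, j
    as in the text. Assumes A is 3x3."""
    sub = [[A[r][c] for c in range(3) if c != j - 1]
           for r in range(3) if r != i - 1]
    (a, b), (c, d) = sub
    return (-1) ** (i + j) * (a * d - b * c)

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(cofactor(A, 1, 2))  # C_12 = -M_12 = -(-6) = 6
```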

## 13.7 Laplace Expansion

The determinant can be computed by expanding along any row or column.

Expansion along row \(i\) gives

$$
\det(A)=\sum_{j=1}^{n} a_{ij}C_{ij}.
$$

Expansion along column \(j\) gives

$$
\det(A)=\sum_{i=1}^{n} a_{ij}C_{ij}.
$$

This is called Laplace expansion. It reduces an \(n\times n\) determinant to determinants of \((n-1)\times(n-1)\) matrices.
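
The recursion bottoms out at the \(1\times 1\) base case from Section 13.1. A sketch of Laplace expansion along the first row, usable for any size (though, as discussed later, elimination is faster for large matrices):

```python
def det(A):
    """Determinant by Laplace expansion along the first row (recursive)."""
    n = len(A)
    if n == 1:
        return A[0][0]  # 1x1 base case: det([a]) = a
    total = 0
    for j in range(n):
        # Submatrix with row 0 and column j deleted.
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)  # sign (-1)**(1 + (j+1)) in 1-indexing
    return total

print(det([[7]]))                              # 7
print(det([[3, 2], [5, 4]]))                   # 2
print(det([[1, 2, 3], [0, 4, 5], [1, 0, 6]]))  # 22
```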

## 13.8 Example of Cofactor Expansion

Let

$$
A=
\begin{bmatrix}
2&0&1\\
3&4&-1\\
0&5&2
\end{bmatrix}.
$$

Expand along the first row:

$$
\det(A) =
2
\det
\begin{bmatrix}
4&-1\\
5&2
\end{bmatrix} -
0
\det
\begin{bmatrix}
3&-1\\
0&2
\end{bmatrix}
+
1
\det
\begin{bmatrix}
3&4\\
0&5
\end{bmatrix}.
$$

Compute the \(2\times 2\) determinants:

$$
\det
\begin{bmatrix}
4&-1\\
5&2
\end{bmatrix} =
4\cdot 2-(-1)\cdot 5=13,
$$

and

$$
\det
\begin{bmatrix}
3&4\\
0&5
\end{bmatrix} =
3\cdot 5-4\cdot 0=15.
$$

Thus

$$
\det(A)=2(13)+15=41.
$$

Choosing a row or column with zeros reduces the amount of computation.
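
The arithmetic in this expansion can be replayed line by line. A sketch; the variable names `m1` and `m3` are ours, for the two minors that survive the zero entry:

```python
# Expansion of A = [[2, 0, 1], [3, 4, -1], [0, 5, 2]] along its first row.
m1 = 4 * 2 - (-1) * 5   # det [[4, -1], [5, 2]] = 13
m3 = 3 * 5 - 4 * 0      # det [[3, 4], [0, 5]] = 15
# The middle term is multiplied by the entry 0, so it drops out.
print(2 * m1 + 1 * m3)  # 2*13 + 15 = 41
```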

## 13.9 Triangular Matrices

If \(A\) is upper triangular or lower triangular, then its determinant is the product of its diagonal entries.

For

$$
A=
\begin{bmatrix}
a_{11}&*&\cdots&*\\
0&a_{22}&\cdots&*\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&a_{nn}
\end{bmatrix},
$$

we have

$$
\det(A)=a_{11}a_{22}\cdots a_{nn}.
$$

For example,

$$
\det
\begin{bmatrix}
2&5&1\\
0&-3&4\\
0&0&7
\end{bmatrix} =
2(-3)(7) =
-42.
$$

This property is central to computing determinants by elimination.
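
For triangular matrices the computation collapses to a single product. A minimal sketch, with `det_triangular` as our own name:

```python
def det_triangular(T):
    """Determinant of a triangular matrix: the product of its diagonal entries."""
    p = 1
    for k in range(len(T)):
        p *= T[k][k]
    return p

T = [[2, 5, 1],
     [0, -3, 4],
     [0, 0, 7]]
print(det_triangular(T))  # 2 * (-3) * 7 = -42
```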

## 13.10 Determinants and Row Operations

Elementary row operations affect determinants in simple ways.

| Row operation | Effect on determinant |
|---|---|
| Swap two rows | Multiplies determinant by \(-1\) |
| Multiply one row by \(c\) | Multiplies determinant by \(c\) |
| Add a multiple of one row to another | Does not change determinant |

These rules allow determinants to be computed by row reduction, provided the effects of row operations are tracked.
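
The three rules in the table can be verified on a small example. A sketch using a \(2\times 2\) matrix with \(\det(A)=-2\); all names are ours:

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2], [3, 4]]                                          # det(A) = -2
swap = [A[1], A[0]]                                           # swap the two rows
scale = [[5 * x for x in A[0]], A[1]]                         # multiply row 1 by 5
replace = [A[0], [A[1][k] - 3 * A[0][k] for k in range(2)]]   # R2 <- R2 - 3 R1

print(det2(swap))     # sign flips: 2
print(det2(scale))    # scaled by 5: -10
print(det2(replace))  # unchanged: -2
```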

## 13.11 Example by Row Reduction

Let

$$
A=
\begin{bmatrix}
1&2&3\\
2&5&7\\
1&0&6
\end{bmatrix}.
$$

Use row replacement operations, which do not change the determinant:

$$
R_2\leftarrow R_2-2R_1,
\qquad
R_3\leftarrow R_3-R_1.
$$

Then

$$
\begin{bmatrix}
1&2&3\\
0&1&1\\
0&-2&3
\end{bmatrix}.
$$

Now use

$$
R_3\leftarrow R_3+2R_2.
$$

This gives

$$
\begin{bmatrix}
1&2&3\\
0&1&1\\
0&0&5
\end{bmatrix}.
$$

The matrix is upper triangular. Since only row replacement operations were used, the determinant has not changed. Therefore

$$
\det(A)=1\cdot 1\cdot 5=5.
$$
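
The same sequence of replacement operations can be carried out in code. A sketch; the helper `replace_row` is our own, and since only replacement operations are used, the determinant is read off the final diagonal:

```python
A = [[1, 2, 3],
     [2, 5, 7],
     [1, 0, 6]]

def replace_row(M, target, source, factor):
    """R_target <- R_target - factor * R_source (does not change the determinant)."""
    M[target] = [M[target][k] - factor * M[source][k] for k in range(len(M))]

replace_row(A, 1, 0, 2)   # R2 <- R2 - 2 R1
replace_row(A, 2, 0, 1)   # R3 <- R3 - R1
replace_row(A, 2, 1, -2)  # R3 <- R3 + 2 R2

print(A)                            # [[1, 2, 3], [0, 1, 1], [0, 0, 5]]
print(A[0][0] * A[1][1] * A[2][2])  # product of diagonal entries: 5
```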

## 13.12 Row Swaps and Scaling

If row swaps or row scalings are used, their effects must be recorded.

For example,

$$
A=
\begin{bmatrix}
0&2\\
3&4
\end{bmatrix}.
$$

Swap rows:

$$
\begin{bmatrix}
0&2\\
3&4
\end{bmatrix}
\longrightarrow
\begin{bmatrix}
3&4\\
0&2
\end{bmatrix}.
$$

The triangular determinant is

$$
3\cdot 2=6.
$$

But one row swap was used, so the original determinant is

$$
\det(A)=-6.
$$

Indeed,

$$
\det(A)=0\cdot 4-2\cdot 3=-6.
$$
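
The bookkeeping for the swap looks like this in code (a sketch; the factor `(-1) ** 1` records the single swap):

```python
A = [[0, 2], [3, 4]]
swapped = [A[1], A[0]]                        # one row swap
tri_det = swapped[0][0] * swapped[1][1]       # triangular determinant: 3 * 2 = 6
print((-1) ** 1 * tri_det)                    # undo the swap's sign: det(A) = -6
print(A[0][0] * A[1][1] - A[0][1] * A[1][0])  # direct 2x2 check: -6
```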

## 13.13 Zero Determinants

A square matrix has determinant zero when its rows or columns are linearly dependent.

For example,

$$
A=
\begin{bmatrix}
1&2\\
2&4
\end{bmatrix}.
$$

The second row is twice the first. Therefore the rows are dependent.

Compute:

$$
\det(A)=1\cdot 4-2\cdot 2=4-4=0.
$$

Geometrically, the corresponding transformation collapses area or volume to zero. Algebraically, the matrix is singular.

## 13.14 Determinants and Invertibility

For an \(n\times n\) matrix \(A\),

$$
A \text{ is invertible}
$$

if and only if

$$
\det(A)\ne 0.
$$

Equivalently,

$$
A \text{ is singular}
$$

if and only if

$$
\det(A)=0.
$$

This criterion connects determinants with rank, nullity, pivots, and solutions of linear systems.

## 13.15 Determinant of a Product

If \(A\) and \(B\) are \(n\times n\) matrices, then

$$
\det(AB)=\det(A)\det(B).
$$

This rule is one of the most important determinant identities. It says that volume-scaling factors multiply under composition of linear transformations.

For example, if

$$
\det(A)=3
$$

and

$$
\det(B)=-2,
$$

then

$$
\det(AB)=-6.
$$
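
The product rule can be checked on concrete \(2\times 2\) matrices. A sketch; the matrices and helper names are our own:

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]   # det(A) = 3
B = [[0, 1], [2, 3]]   # det(B) = -2
print(det2(matmul2(A, B)))  # det(AB) = -6
print(det2(A) * det2(B))    # det(A) det(B) = -6
```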

## 13.16 Determinant of an Inverse

If \(A\) is invertible, then

$$
\det(A^{-1})=\frac{1}{\det(A)}.
$$

This follows from

$$
AA^{-1}=I
$$

and the product rule:

$$
\det(A)\det(A^{-1})=\det(I)=1.
$$

Thus the determinant of the inverse is the reciprocal of the determinant.
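
The reciprocal relation can be verified exactly using rational arithmetic and the \(2\times 2\) inverse formula. A sketch with our own example matrix:

```python
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(2)]]      # det(A) = 3
d = det2(A)
# 2x2 inverse formula: (1/det) * [[d, -b], [-c, a]]
inv = [[ A[1][1] / d, -A[0][1] / d],
       [-A[1][0] / d,  A[0][0] / d]]
print(det2(inv))        # 1/3
print(Fraction(1) / d)  # 1/3
```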

## 13.17 Determinant of a Transpose

For every square matrix \(A\),

$$
\det(A^T)=\det(A).
$$

Therefore every row property of determinants has a corresponding column property. For example, swapping two columns changes the sign of the determinant, multiplying one column by \(c\) multiplies the determinant by \(c\), and adding a multiple of one column to another leaves the determinant unchanged.

## 13.18 Determinant of a Scalar Multiple

If \(A\) is an \(n\times n\) matrix and \(c\) is a scalar, then

$$
\det(cA)=c^n\det(A).
$$

The exponent \(n\) appears because multiplying the whole matrix by \(c\) multiplies every row by \(c\). Since there are \(n\) rows, the determinant is multiplied by \(c\) exactly \(n\) times.

For example, if \(A\) is \(3\times 3\), then

$$
\det(2A)=2^3\det(A)=8\det(A).
$$
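
The exponent \(n\) is easy to confirm numerically. A sketch using the \(3\times 3\) example from Section 13.4, whose determinant is \(22\):

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]        # det(A) = 22
twoA = [[2 * x for x in row] for row in A]   # every entry doubled
print(det3(twoA))        # 2**3 * 22 = 176
print(2 ** 3 * det3(A))  # 176
```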

## 13.19 Determinants and Volume

If \(A\) is an \(n\times n\) real matrix, then \(|\det(A)|\) is the factor by which the linear transformation \(x\mapsto Ax\) scales \(n\)-dimensional volume.

For example, if

$$
\det(A)=5,
$$

then \(A\) multiplies volumes by \(5\).

If

$$
\det(A)=-5,
$$

then \(A\) still multiplies volumes by \(5\), but it reverses orientation.

If

$$
\det(A)=0,
$$

then \(A\) collapses \(n\)-dimensional volume to zero.

## 13.20 Orientation

In \(\mathbb{R}^2\), a positive determinant preserves counterclockwise orientation. A negative determinant reverses it.

For example,

$$
A=
\begin{bmatrix}
1&0\\
0&1
\end{bmatrix}
$$

has determinant \(1\), so it preserves orientation.

The reflection matrix

$$
B=
\begin{bmatrix}
1&0\\
0&-1
\end{bmatrix}
$$

has determinant \(-1\), so it reverses orientation.

The sign of the determinant therefore records orientation, while its absolute value records volume scaling.

## 13.21 Determinants and Eigenvalues

If \(A\) is an \(n\times n\) matrix with eigenvalues

$$
\lambda_1,\lambda_2,\ldots,\lambda_n,
$$

counted with algebraic multiplicity, then

$$
\det(A)=\lambda_1\lambda_2\cdots\lambda_n.
$$

This fact is developed later through the characteristic polynomial. It explains why a zero eigenvalue is equivalent to a zero determinant.
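
For a \(2\times 2\) matrix the eigenvalues solve the quadratic \(t^2-\operatorname{tr}(A)\,t+\det(A)=0\), so the relation can be checked directly. A sketch for one concrete matrix (this quadratic-formula shortcut is specific to the \(2\times 2\) case):

```python
import math

A = [[4, 1], [2, 3]]
trace = A[0][0] + A[1][1]                    # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 10
disc = math.sqrt(trace ** 2 - 4 * det)       # sqrt(9) = 3
lam1 = (trace + disc) / 2                    # eigenvalue 5.0
lam2 = (trace - disc) / 2                    # eigenvalue 2.0
print(lam1 * lam2)  # product of eigenvalues equals det: 10.0
```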

## 13.22 Determinants and Linear Independence

The columns of an \(n\times n\) matrix \(A\) are linearly independent if and only if

$$
\det(A)\ne 0.
$$

They are linearly dependent if and only if

$$
\det(A)=0.
$$

Thus the determinant tests whether the columns form a basis of \(F^n\).

For example, if

$$
A=
\begin{bmatrix}
|&|&&|\\
a_1&a_2&\cdots&a_n\\
|&|&&|
\end{bmatrix},
$$

then

$$
\det(A)\ne 0
$$

means that \(a_1,\ldots,a_n\) form a basis of \(F^n\).

## 13.23 Determinants and Rank

For an \(n\times n\) matrix \(A\),

$$
\det(A)\ne 0
$$

if and only if

$$
\operatorname{rank}(A)=n.
$$

If

$$
\det(A)=0,
$$

then

$$
\operatorname{rank}(A)<n.
$$

Thus determinant zero means that at least one pivot is missing.

## 13.24 Cramer's Rule

Cramer's rule gives a formula for solving a square system

$$
Ax=b
$$

when

$$
\det(A)\ne 0.
$$

Let \(A_i(b)\) be the matrix obtained from \(A\) by replacing column \(i\) with \(b\). Then the solution satisfies

$$
x_i=\frac{\det(A_i(b))}{\det(A)}.
$$

Cramer's rule is theoretically important because it gives an explicit formula for the solution. For computation, elimination and matrix factorizations are usually preferred, especially for large systems.
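
A \(2\times 2\) instance of the rule fits in a few lines. A sketch using exact rational arithmetic; the helper `cramer2` is our own name:

```python
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def cramer2(A, b):
    """Solve the 2x2 system A x = b by Cramer's rule (requires det(A) != 0)."""
    d = det2(A)
    A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # replace column 1 with b
    A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # replace column 2 with b
    return [Fraction(det2(A1), d), Fraction(det2(A2), d)]

A = [[2, 1], [1, 3]]
b = [5, 10]
print(cramer2(A, b))  # x1 = 1, x2 = 3; check: 2*1 + 1*3 = 5 and 1*1 + 3*3 = 10
```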

## 13.25 Determinants and Computation

Cofactor expansion is useful for small matrices or matrices with many zeros. For large dense matrices, row reduction is more efficient.

The practical method is:

| Step | Action |
|---|---|
| 1 | Use row operations to reduce to triangular form |
| 2 | Track row swaps and row scalings |
| 3 | Multiply the diagonal entries |
| 4 | Adjust for recorded row operations |

This connects determinant computation directly with Gaussian elimination.
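
The four steps in the table can be sketched as one function. This is a minimal illustration, not a production routine: it uses exact rational arithmetic, eliminates with row replacements only, and tracks the sign changes from row swaps:

```python
from fractions import Fraction

def det_by_elimination(M):
    """Determinant via Gaussian elimination: reduce to triangular form,
    recording each row swap, then multiply the diagonal entries."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    sign = 1
    for col in range(n):
        # Step 1: find a nonzero pivot at or below the diagonal.
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot in this column: singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign                # Step 2: each swap flips the sign
        for r in range(col + 1, n):     # replacement ops leave det unchanged
            factor = A[r][col] / A[col][col]
            A[r] = [A[r][k] - factor * A[col][k] for k in range(n)]
    result = Fraction(sign)
    for k in range(n):
        result *= A[k][k]               # Steps 3-4: diagonal product, with sign
    return result

print(det_by_elimination([[1, 2, 3], [2, 5, 7], [1, 0, 6]]))  # 5
print(det_by_elimination([[0, 2], [3, 4]]))                   # -6
print(det_by_elimination([[1, 2], [2, 4]]))                   # 0 (singular)
```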

## 13.26 Common Mistakes

| Mistake | Correction |
|---|---|
| Taking determinants of non-square matrices | Determinants are defined for square matrices |
| Forgetting sign changes from row swaps | Each row swap multiplies determinant by \(-1\) |
| Treating row scaling as harmless | Scaling a row by \(c\) scales determinant by \(c\) |
| Forgetting that row replacement preserves determinant | Adding a multiple of one row to another leaves determinant unchanged |
| Assuming \(\det(A+B)=\det(A)+\det(B)\) | This is generally false |
| Writing \(\det(cA)=c\det(A)\) for \(n\times n\) matrices | Correct formula is \(\det(cA)=c^n\det(A)\) |
| Ignoring orientation | The sign of the determinant has geometric meaning |

## 13.27 Summary

The determinant is a scalar invariant of a square matrix. It detects invertibility, measures volume scaling, and records orientation.

The main formulas are:

| Concept | Formula |
|---|---|
| \(1\times 1\) determinant | \(\det([a])=a\) |
| \(2\times 2\) determinant | \(\det\begin{bmatrix}a&b\\c&d\end{bmatrix}=ad-bc\) |
| Triangular matrix | Product of diagonal entries |
| Product | \(\det(AB)=\det(A)\det(B)\) |
| Inverse | \(\det(A^{-1})=1/\det(A)\) |
| Transpose | \(\det(A^T)=\det(A)\) |
| Scalar multiple | \(\det(cA)=c^n\det(A)\) |
| Invertibility | \(A\) invertible iff \(\det(A)\ne 0\) |

Determinants are not only formulas. They encode how a square matrix changes space. A nonzero determinant means the transformation preserves full dimension. A zero determinant means the transformation collapses space in at least one direction.
