
Chapter 11. Invertible Matrices

An invertible matrix is a square matrix that has a matrix inverse. The inverse reverses the action of the original matrix. If A sends x to Ax, then A^{-1} sends Ax back to x. Formally, an n×n matrix A is invertible if there exists an n×n matrix B such that AB = BA = I_n. In that case, B is unique and is written A^{-1}.

11.1 Definition

Let A be an n×n matrix over a field F. The matrix A is invertible if there exists an n×n matrix B such that

AB = I_n

and

BA = I_n.

The matrix B is called the inverse of A. It is denoted by

A^{-1}.

Thus

AA^{-1} = I_n

and

A^{-1}A = I_n.

The identity matrix plays the role of 1 in matrix multiplication. An invertible matrix is therefore a matrix that has a multiplicative inverse.

11.2 Why the Matrix Must Be Square

The usual inverse is defined for square matrices.

If A is m×n, then a product AB can equal I_m only if B is n×m, and a product BA can equal I_n only if the same B is n×m. For both identities

AB = I_m

and

BA = I_n

to hold, the dimensions must agree. The identity AB = I_m forces rank(A) ≥ m, and BA = I_n forces rank(A) ≥ n, while rank(A) ≤ min(m, n) always; hence m = n, and the matrix must be square. Non-square matrices may have one-sided inverses in special cases, but they do not have ordinary two-sided inverses of the kind used for square matrices.

11.3 The Meaning of an Inverse

If A is invertible, then the transformation

x \mapsto Ax

can be undone.

Starting with a vector x, apply A:

x \mapsto Ax.

Then apply A^{-1}:

Ax \mapsto A^{-1}(Ax).

By associativity,

A^{-1}(Ax) = (A^{-1}A)x = I_n x = x.

Thus A^{-1} recovers the original vector.

Likewise,

A(A^{-1}y) = y

for every vector y ∈ F^n.

11.4 Uniqueness of the Inverse

If a matrix has an inverse, that inverse is unique.

Suppose B and C are both inverses of A. Then

AB = BA = I

and

AC = CA = I.

We show that B = C:

B = BI.

Since AC = I,

B = B(AC).

By associativity,

B = (BA)C.

Since BA = I,

B = IC = C.

Therefore the inverse is unique.

This justifies the notation A^{-1}.

11.5 Invertibility and Linear Systems

Invertibility gives the simplest solution rule for square systems.

If

Ax = b

and A is invertible, then multiply both sides by A^{-1}:

A^{-1}Ax = A^{-1}b.

Since

A^{-1}A = I,

we get

x = A^{-1}b.

Thus an invertible coefficient matrix gives a unique solution for every right-hand side b. Conversely, if every system Ax = b has a unique solution, then A is invertible. This equivalence is one form of the invertible matrix theorem.
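The rule x = A^{-1}b can be checked concretely. A minimal Python sketch, using exact rational arithmetic and the matrix A = [[1, 2], [3, 5]] that appears later in this chapter; the right-hand side b is chosen here for illustration:

```python
from fractions import Fraction

# The invertible matrix A = [[1, 2], [3, 5]] and an arbitrary right-hand side b.
A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(5)]]
b = [Fraction(7), Fraction(18)]

# Inverse via the 2x2 formula A^{-1} = (1/(ad-bc)) [[d, -b], [-c, a]].
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

# x = A^{-1} b is the unique solution of Ax = b.
x = [A_inv[0][0] * b[0] + A_inv[0][1] * b[1],
     A_inv[1][0] * b[0] + A_inv[1][1] * b[1]]

# Substitute back: Ax should reproduce b.
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
print(x == [1, 3], Ax == b)   # → True True
```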

11.6 Singular Matrices

A square matrix that is not invertible is called singular.

A singular matrix cannot be undone. The transformation x ↦ Ax loses information. In geometric terms, it collapses at least one nonzero direction. In algebraic terms, the homogeneous equation

Ax = 0

has a nonzero solution.

For example,

A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}

is singular. Its second row is twice its first row, so the two rows do not give two independent constraints.

Indeed,

A \begin{bmatrix} -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 1(-2)+2(1) \\ 2(-2)+4(1) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

A nonzero vector has been sent to zero, so A cannot be invertible.
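The collapse can be observed numerically. A small sketch checking that the determinant of the singular matrix above vanishes and that the nonzero vector (-2, 1) is sent to zero:

```python
# The singular matrix from the example: the second row is twice the first.
A = [[1, 2],
     [2, 4]]

# Its determinant ad - bc is zero, so no inverse exists.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# The nonzero vector (-2, 1) is mapped to the zero vector.
v = [-2, 1]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
print(det, Av)   # → 0 [0, 0]
```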

11.7 The Invertible Matrix Theorem

For an n×n matrix A over a field, the following statements are equivalent:

Statement | Meaning
A is invertible | There exists A^{-1}
A is row equivalent to I_n | Row reduction gives the identity
A has a pivot in every column | No free variables occur
A has rank n | Full rank
Ax = 0 has only the trivial solution | No nonzero vector is collapsed
Ax = b has a unique solution for every b ∈ F^n | Every right-hand side is reachable exactly once
The columns of A are linearly independent | No column is redundant
The columns of A span F^n | Every vector can be produced
The linear map x ↦ Ax is bijective | One-to-one and onto
det(A) ≠ 0 | Nonzero determinant

These conditions are either all true or all false for a given square matrix.

11.8 Invertibility and Row Reduction

A square matrix A is invertible exactly when its reduced row echelon form is the identity matrix.

For example,

A = \begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix}.

Row reduce:

\begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix}

using

R_2 \leftarrow R_2 - 3R_1.

Then

\begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}

using

R_2 \leftarrow -R_2.

Then

\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

using

R_1 \leftarrow R_1 - 2R_2.

Since A row reduces to I_2, it is invertible.

11.9 Computing the Inverse by Row Reduction

To compute A^{-1}, form the augmented matrix

[A \mid I_n].

Then row reduce. If the left side becomes I_n, the right side becomes A^{-1}:

[A \mid I_n] \longrightarrow [I_n \mid A^{-1}].

If the left side cannot be reduced to I_n, then A is singular.

This method works because each row operation is equivalent to multiplying on the left by an elementary matrix. If the operations that reduce A to I_n correspond to E_1, …, E_k, then E_k ⋯ E_1 A = I_n, so E_k ⋯ E_1 = A^{-1}; applying the same operations to the identity side therefore builds up exactly A^{-1}.

11.10 Example: Computing an Inverse

Let

A = \begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix}.

Form

[A \mid I] = \left[ \begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 5 & 0 & 1 \end{array} \right].

Use

R_2 \leftarrow R_2 - 3R_1.

Then

\left[ \begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & -1 & -3 & 1 \end{array} \right].

Use

R_2 \leftarrow -R_2.

Then

\left[ \begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 1 & 3 & -1 \end{array} \right].

Use

R_1 \leftarrow R_1 - 2R_2.

Then

\left[ \begin{array}{cc|cc} 1 & 0 & -5 & 2 \\ 0 & 1 & 3 & -1 \end{array} \right].

Therefore

A^{-1} = \begin{bmatrix} -5 & 2 \\ 3 & -1 \end{bmatrix}.

Check:

AA^{-1} = \begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix} \begin{bmatrix} -5 & 2 \\ 3 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
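The augmented-matrix procedure can be written as a short routine. A sketch in exact Fraction arithmetic, where `invert` is a name chosen here for illustration, not a library function:

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row reducing [A | I]; return None if singular."""
    n = len(A)
    # Build the augmented matrix [A | I_n] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                       # left side cannot reach I_n: singular
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [entry / p for entry in M[col]]
        # Eliminate the column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    # The right half now holds A^{-1}.
    return [row[n:] for row in M]

print(invert([[1, 2], [3, 5]]))   # the worked example: entries -5, 2, 3, -1
print(invert([[1, 2], [2, 4]]))   # the singular example: None
```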

11.11 Formula for a 2×2 Inverse

Let

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.

Then A is invertible exactly when

ad - bc ≠ 0.

In that case,

A^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.

The scalar ad - bc is the determinant of A.

For example,

A = \begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix}

has determinant

1 \cdot 5 - 2 \cdot 3 = -1.

Therefore

A^{-1} = \frac{1}{-1} \begin{bmatrix} 5 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -5 & 2 \\ 3 & -1 \end{bmatrix}.
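The formula translates directly into code. A minimal sketch; the name `inv2` is illustrative:

```python
from fractions import Fraction

def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the 2x2 formula, or None if singular."""
    det = Fraction(a) * d - Fraction(b) * c   # the determinant ad - bc
    if det == 0:
        return None                            # ad - bc = 0: no inverse exists
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

print(inv2(1, 2, 3, 5))   # the example above: entries -5, 2, 3, -1
print(inv2(1, 2, 2, 4))   # → None (determinant is zero)
```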

11.12 Invertibility and Determinants

For a square matrix over a field,

A is invertible

if and only if

det(A) ≠ 0.

If

det(A) = 0,

then A is singular.

The determinant criterion is often useful theoretically. In computation, however, solving systems by row reduction or decomposition is usually preferred to explicitly computing an inverse. Matrix inversion is often unnecessary when the goal is only to solve Ax = b.

11.13 Products of Invertible Matrices

If A and B are invertible n×n matrices, then AB is invertible, and

(AB)^{-1} = B^{-1}A^{-1}.

The order reverses.

To verify this, compute:

(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I.

Also,

(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I.

Thus B^{-1}A^{-1} is the inverse of AB.
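The reversal of order can be confirmed numerically. A sketch with two invertible 2×2 matrices chosen here for illustration, which also shows that the common mistake A^{-1}B^{-1} gives a different matrix:

```python
from fractions import Fraction

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """2x2 inverse via the ad - bc formula (assumes M is invertible)."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 5]]    # det = -1
B = [[2, 1], [1, 1]]    # det =  1

lhs   = inv2(matmul(A, B))          # (AB)^{-1}
rhs   = matmul(inv2(B), inv2(A))    # B^{-1} A^{-1}, order reversed
wrong = matmul(inv2(A), inv2(B))    # A^{-1} B^{-1}, the common mistake

print(lhs == rhs, lhs == wrong)   # → True False
```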

11.14 Powers of an Invertible Matrix

If A is invertible, then every positive power

A^k

is invertible, and

(A^k)^{-1} = (A^{-1})^k.

We also define negative powers by

A^{-k} = (A^{-1})^k.

Thus

A^{-2} = A^{-1}A^{-1}.

This notation is meaningful only when A is invertible.
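The rule (A^k)^{-1} = (A^{-1})^k can likewise be checked directly. A small sketch for k = 3:

```python
from fractions import Fraction

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """2x2 inverse via the ad - bc formula (assumes M is invertible)."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 5]]
A3 = matmul(matmul(A, A), A)                 # A^3
Ainv = inv2(A)
Ainv3 = matmul(matmul(Ainv, Ainv), Ainv)     # (A^{-1})^3

# (A^3)^{-1} should equal (A^{-1})^3.
print(inv2(A3) == Ainv3)   # → True
```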

11.15 Transposes of Invertible Matrices

If A is invertible, then A^T is invertible, and

(A^T)^{-1} = (A^{-1})^T.

To see this, transpose the identity

AA^{-1} = I.

Using the rule

(AB)^T = B^T A^T,

we get

(A^{-1})^T A^T = I.

Similarly, transposing

A^{-1}A = I

gives

A^T (A^{-1})^T = I.

Therefore (A^{-1})^T is the inverse of A^T.
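This identity, too, is easy to verify on a concrete matrix. A minimal sketch:

```python
from fractions import Fraction

def inv2(M):
    """2x2 inverse via the ad - bc formula (assumes M is invertible)."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    """Transpose of a 2x2 matrix."""
    return [[M[j][i] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 5]]

lhs = inv2(transpose(A))     # (A^T)^{-1}
rhs = transpose(inv2(A))     # (A^{-1})^T

print(lhs == rhs)   # → True
```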

11.16 Elementary Matrices and Inverses

Every elementary row operation corresponds to multiplication by an elementary matrix.

Each elementary matrix is invertible because each elementary row operation is reversible.

For example, the row operation

R_2 \leftarrow R_2 - 3R_1

on a 2×2 matrix is represented by

E = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}.

The inverse operation is

R_2 \leftarrow R_2 + 3R_1,

represented by

E^{-1} = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}.

Indeed,

EE^{-1} = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
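The pair E, E^{-1} can be multiplied out in code, and one can also check that left multiplication by E really performs R_2 ← R_2 - 3R_1:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E     = [[1, 0], [-3, 1]]   # encodes R2 <- R2 - 3*R1
E_inv = [[1, 0], [ 3, 1]]   # encodes the reverse operation R2 <- R2 + 3*R1

# The two elementary matrices undo each other.
print(matmul(E, E_inv))     # → [[1, 0], [0, 1]]

# Left multiplication by E performs the row operation on A.
A = [[1, 2], [3, 5]]
print(matmul(E, A))         # → [[1, 2], [0, -1]]
```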

11.17 Invertible Matrices as a Group

The set of all invertible n×n matrices over a field F is denoted

GL_n(F).

It is called the general linear group.

This set is closed under matrix multiplication. It contains the identity matrix. Every element has an inverse. Matrix multiplication is associative. Thus GL_n(F) forms a group under multiplication.

This group is important because it represents all invertible linear changes of coordinates in F^n.

11.18 Geometric Meaning

An invertible matrix represents a transformation that preserves dimension.

It may rotate, reflect, shear, scale, or combine these actions. But it cannot collapse a line to a point, a plane to a line, or a three-dimensional region to a plane.

For example,

A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}

is invertible. It stretches the x-direction by 2 and the y-direction by 3. Its inverse is

A^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/3 \end{bmatrix}.

The inverse reverses the stretching.

By contrast,

B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}

is singular. It sends

\begin{bmatrix} x \\ y \end{bmatrix}

to

\begin{bmatrix} x \\ 0 \end{bmatrix}.

It collapses the plane onto the x-axis, losing the y-coordinate. No inverse can recover information that has been lost.

11.19 Solving Without Explicit Inversion

Although the formula

x = A^{-1}b

is mathematically correct, it is often not the best computational method.

In numerical computation, one usually solves

Ax = b

by elimination or factorization, such as LU decomposition, rather than computing A^{-1} explicitly. Computing an inverse can cost extra work and may introduce additional numerical error.

Thus the inverse is conceptually important, but direct inversion is not always computationally preferred.
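As an illustration of solving without forming A^{-1}, here is a sketch of Gaussian elimination with back substitution. It is a minimal version in exact arithmetic, not a production solver; the name `solve` is chosen here for illustration:

```python
from fractions import Fraction

def solve(A, b):
    """Solve Ax = b by forward elimination and back substitution."""
    n = len(A)
    # Work on an augmented copy [A | b] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    # Forward elimination to upper triangular form.
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("singular matrix")
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    # Back substitution from the last row upward.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[1, 2], [3, 5]], [7, 18]))   # → the solution [1, 3] as Fractions
```

No inverse is ever formed; the elimination works directly on the augmented system, which is the pattern library routines based on LU decomposition follow as well.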

11.20 Common Mistakes

Mistake | Correction
Assuming every square matrix is invertible | A square matrix may be singular
Trying to invert a non-square matrix | Ordinary two-sided inverses require square matrices
Writing (AB)^{-1} = A^{-1}B^{-1} | The correct order is (AB)^{-1} = B^{-1}A^{-1}
Using x = A^{-1}b when A^{-1} does not exist | First check invertibility
Treating A^{-1} as an entrywise reciprocal | Matrix inversion is not entrywise inversion
Computing an inverse just to solve one system | Use elimination or factorization when appropriate

11.21 Summary

An invertible matrix is a square matrix whose action can be reversed. Its inverse A^{-1} satisfies

AA^{-1} = A^{-1}A = I.

Invertibility has many equivalent forms:

Viewpoint | Equivalent condition
Algebraic | A^{-1} exists
Row-reduction | A row reduces to I_n
Systems | Ax = b has a unique solution for every b
Homogeneous systems | Ax = 0 has only the zero solution
Rank | rank(A) = n
Columns | Columns are linearly independent and span F^n
Geometry | The transformation does not collapse dimension
Determinants | det(A) ≠ 0

Invertible matrices are the algebraic model of reversible linear transformations. They are central to solving systems, changing coordinates, understanding dimension, and studying linear maps.