# Chapter 36. Isomorphisms

An isomorphism is an invertible linear transformation. It gives a precise meaning to the statement that two vector spaces have the same linear structure.

Let \(V\) and \(W\) be vector spaces over the same field \(F\). A linear map

$$
T:V\to W
$$

is called an isomorphism if there exists a linear map

$$
T^{-1}:W\to V
$$

such that

$$
T^{-1}(T(v))=v
$$

for every \(v\in V\), and

$$
T(T^{-1}(w))=w
$$

for every \(w\in W\).

Equivalently, an isomorphism is a linear map that is both injective and surjective. Two vector spaces are called isomorphic when there exists an isomorphism between them. Isomorphic vector spaces have the same structure from the viewpoint of linear algebra.

## 36.1 The Meaning of Isomorphism

The word "isomorphism" means "same form." In linear algebra, it means the same vector-space structure.

If

$$
T:V\to W
$$

is an isomorphism, then every vector in \(V\) corresponds to exactly one vector in \(W\), and every vector in \(W\) comes from exactly one vector in \(V\). Addition and scalar multiplication are preserved by the correspondence.

Thus

$$
T(u+v)=T(u)+T(v)
$$

and

$$
T(cv)=cT(v).
$$

The inverse transformation preserves the same operations in the reverse direction.

An isomorphism does not say that the elements of \(V\) and \(W\) look the same. It says that they behave the same under the operations of linear algebra.

For example, the space of polynomials

$$
P_2=\{a+bx+cx^2:a,b,c\in\mathbb{R}\}
$$

looks different from \(\mathbb{R}^3\). One space contains polynomials. The other contains coordinate columns. But they are isomorphic.

The map

$$
T:P_2\to\mathbb{R}^3
$$

defined by

$$
T(a+bx+cx^2)=
\begin{bmatrix}
a\\
b\\
c
\end{bmatrix}
$$

is an isomorphism.

It preserves addition and scalar multiplication, and every vector in \(\mathbb{R}^3\) arises from exactly one polynomial in \(P_2\).
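As a numerical sanity check, the coordinate map above can be sketched in a few lines (a hypothetical helper `T`, representing a polynomial by its coefficient tuple):

```python
import numpy as np

def T(poly):
    """Coordinate map P_2 -> R^3 relative to the basis (1, x, x^2)."""
    a, b, c = poly
    return np.array([a, b, c], dtype=float)

p = (1.0, 2.0, 3.0)   # 1 + 2x + 3x^2
q = (4.0, -1.0, 0.5)  # 4 - x + 0.5x^2

# Polynomial addition is coefficientwise, so T(p + q) = T(p) + T(q).
p_plus_q = tuple(pi + qi for pi, qi in zip(p, q))
assert np.allclose(T(p_plus_q), T(p) + T(q))

# Scalar multiplication: T(c * p) = c * T(p).
c = 2.5
cp = tuple(c * pi for pi in p)
assert np.allclose(T(cp), c * T(p))
```

The checks confirm that the map preserves both operations on these sample inputs; invertibility is clear because the coefficients can be read back off the column.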

## 36.2 Injective and Surjective Linear Maps

An isomorphism is both injective and surjective.

A linear map

$$
T:V\to W
$$

is injective if

$$
T(u)=T(v)
$$

implies

$$
u=v.
$$

It is surjective if

$$
\operatorname{im}(T)=W.
$$

Thus \(T\) is an isomorphism exactly when every output vector has exactly one input vector.

For linear maps, injectivity is controlled by the kernel:

$$
T \text{ is injective}
\quad\Longleftrightarrow\quad
\ker(T)=\{0\}.
$$

Surjectivity is controlled by the image:

$$
T \text{ is surjective}
\quad\Longleftrightarrow\quad
\operatorname{im}(T)=W.
$$

Therefore

$$
T \text{ is an isomorphism}
\quad\Longleftrightarrow\quad
\ker(T)=\{0\}
\text{ and }
\operatorname{im}(T)=W.
$$

The kernel measures the failure of injectivity. The gap between the image and \(W\) measures the failure of surjectivity.
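For matrix maps these two criteria reduce to a rank computation. The sketch below (a hypothetical helper `is_isomorphism`, not a library function) tests whether \(T(x)=Mx\) is invertible:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])   # rank 2: trivial kernel, image is all of R^2
B = np.array([[1., 2.],
              [2., 4.]])   # rank 1: nonzero kernel, image is only a line

def is_isomorphism(M):
    """T(x) = Mx is an isomorphism iff M is square with full rank."""
    rows, cols = M.shape
    return rows == cols and np.linalg.matrix_rank(M) == rows

assert is_isomorphism(A)
assert not is_isomorphism(B)
```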

## 36.3 The Inverse Map

Suppose

$$
T:V\to W
$$

is an isomorphism. Since \(T\) is bijective, each vector \(w\in W\) has a unique preimage \(v\in V\). Define

$$
T^{-1}(w)=v
$$

when

$$
T(v)=w.
$$

This gives a function

$$
T^{-1}:W\to V.
$$

We now show that \(T^{-1}\) is linear.

Let \(w_1,w_2\in W\). Since \(T\) is surjective, there exist \(v_1,v_2\in V\) such that

$$
T(v_1)=w_1
$$

and

$$
T(v_2)=w_2.
$$

Then

$$
w_1+w_2=T(v_1)+T(v_2)=T(v_1+v_2).
$$

Therefore

$$
T^{-1}(w_1+w_2)=v_1+v_2.
$$

But

$$
v_1=T^{-1}(w_1),
\qquad
v_2=T^{-1}(w_2).
$$

So

$$
T^{-1}(w_1+w_2)=T^{-1}(w_1)+T^{-1}(w_2).
$$

For scalar multiplication, let \(c\in F\). If \(T(v)=w\), then

$$
cw=cT(v)=T(cv).
$$

Therefore

$$
T^{-1}(cw)=cv=cT^{-1}(w).
$$

Thus \(T^{-1}\) is linear.
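The linearity of the inverse can be observed numerically. A minimal sketch for a matrix isomorphism, where \(T^{-1}\) is multiplication by \(A^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2., 1.],
              [1., 1.]])          # invertible, so T_A is an isomorphism
A_inv = np.linalg.inv(A)

w1, w2 = rng.standard_normal(2), rng.standard_normal(2)
c = 3.0

# T^{-1}(w1 + w2) = T^{-1}(w1) + T^{-1}(w2)
assert np.allclose(A_inv @ (w1 + w2), A_inv @ w1 + A_inv @ w2)

# T^{-1}(c w) = c T^{-1}(w)
assert np.allclose(A_inv @ (c * w1), c * (A_inv @ w1))

# Both round trips recover the input, as the definition requires.
assert np.allclose(A_inv @ (A @ w1), w1)
assert np.allclose(A @ (A_inv @ w1), w1)
```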

## 36.4 Isomorphic Vector Spaces

Two vector spaces \(V\) and \(W\) are isomorphic if there exists an isomorphism

$$
T:V\to W.
$$

This is written

$$
V\cong W.
$$

The symbol \(\cong\) is read "is isomorphic to."

Isomorphism is an equivalence relation on vector spaces over a fixed field. It is reflexive, symmetric, and transitive.

It is reflexive because every vector space is isomorphic to itself by the identity map:

$$
I_V(v)=v.
$$

It is symmetric because if

$$
T:V\to W
$$

is an isomorphism, then

$$
T^{-1}:W\to V
$$

is also an isomorphism.

It is transitive because if

$$
T:U\to V
$$

and

$$
S:V\to W
$$

are isomorphisms, then

$$
S\circ T:U\to W
$$

is an isomorphism.

## 36.5 Dimension and Isomorphism

Finite-dimensional vector spaces over the same field are isomorphic exactly when they have the same dimension.

First suppose

$$
T:V\to W
$$

is an isomorphism. Since \(T\) is injective and surjective, it sends a basis of \(V\) to a basis of \(W\). Therefore

$$
\dim(V)=\dim(W).
$$

Conversely, suppose

$$
\dim(V)=\dim(W)=n.
$$

Choose a basis

$$
B=(v_1,\ldots,v_n)
$$

of \(V\), and choose a basis

$$
C=(w_1,\ldots,w_n)
$$

of \(W\).

Define \(T:V\to W\) by sending

$$
T(v_i)=w_i
$$

for each \(i\), and extending linearly.

If

$$
v=c_1v_1+\cdots+c_nv_n,
$$

define

$$
T(v)=c_1w_1+\cdots+c_nw_n.
$$

This map is linear by construction. It is injective because \(w_1,\ldots,w_n\) are linearly independent, so the only linear combination of them equal to zero is the trivial one. It is surjective because the vectors \(w_1,\ldots,w_n\) span \(W\).

Therefore \(T\) is an isomorphism.

Thus, for finite-dimensional vector spaces over the same field,

$$
V\cong W
\quad\Longleftrightarrow\quad
\dim(V)=\dim(W).
$$

## 36.6 Coordinate Isomorphism

Every finite-dimensional vector space is isomorphic to a coordinate space.

Let \(V\) be an \(n\)-dimensional vector space over \(F\), and let

$$
B=(v_1,\ldots,v_n)
$$

be an ordered basis of \(V\).

Define

$$
\Phi_B:V\to F^n
$$

by

$$
\Phi_B(v)=[v]_B.
$$

This map sends each vector to its coordinate vector relative to \(B\).

If

$$
v=c_1v_1+\cdots+c_nv_n,
$$

then

$$
\Phi_B(v)=
\begin{bmatrix}
c_1\\
\vdots\\
c_n
\end{bmatrix}.
$$

The map \(\Phi_B\) is linear because coordinates respect addition and scalar multiplication.

It is injective because a vector has only one coordinate representation in a basis. It is surjective because every coordinate vector in \(F^n\) determines a vector in \(V\).

Therefore

$$
V\cong F^n.
$$

This is why finite-dimensional linear algebra can often be done with coordinate columns. A basis converts abstract vectors into coordinates.
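Concretely, computing \([v]_B\) amounts to solving a linear system: if the basis vectors are the columns of an invertible matrix \(P\), then \(\Phi_B(v)\) is the unique \(c\) with \(Pc=v\). A sketch with a sample basis of \(\mathbb{R}^3\) (the helper name `phi_B` is ours):

```python
import numpy as np

# Basis B of R^3 stored as the columns of P.
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])

def phi_B(v):
    """Coordinate vector [v]_B: the unique c with c1*b1 + ... + cn*bn = v."""
    return np.linalg.solve(P, v)

v = np.array([2., 3., 1.])
c = phi_B(v)

# Reconstructing from coordinates recovers v, so Phi_B is invertible.
assert np.allclose(P @ c, v)
```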

## 36.7 Examples

### Polynomial Spaces

Let

$$
P_3=\{a+bx+cx^2+dx^3:a,b,c,d\in\mathbb{R}\}.
$$

The set

$$
(1,x,x^2,x^3)
$$

is a basis. Therefore

$$
\dim(P_3)=4.
$$

Since

$$
\dim(\mathbb{R}^4)=4,
$$

we have

$$
P_3\cong \mathbb{R}^4.
$$

An explicit isomorphism is

$$
T(a+bx+cx^2+dx^3)=
\begin{bmatrix}
a\\
b\\
c\\
d
\end{bmatrix}.
$$

### Matrix Spaces

Let \(M_{2\times 2}(\mathbb{R})\) be the vector space of all \(2\times 2\) real matrices. Each matrix has the form

$$
\begin{bmatrix}
a & b\\
c & d
\end{bmatrix}.
$$

Define

$$
T:M_{2\times 2}(\mathbb{R})\to\mathbb{R}^4
$$

by

$$
T
\left(
\begin{bmatrix}
a & b\\
c & d
\end{bmatrix}
\right) =
\begin{bmatrix}
a\\
b\\
c\\
d
\end{bmatrix}.
$$

This is an isomorphism. It preserves addition and scalar multiplication, and it has an inverse:

$$
\begin{bmatrix}
a\\
b\\
c\\
d
\end{bmatrix}
\mapsto
\begin{bmatrix}
a & b\\
c & d
\end{bmatrix}.
$$

Thus

$$
M_{2\times 2}(\mathbb{R})\cong\mathbb{R}^4.
$$
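This isomorphism is exactly what flattening a matrix does in practice. A short sketch using NumPy's row-major `reshape`, which reads the entries in the order \(a,b,c,d\) used above:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

# T: M_{2x2} -> R^4 reads the entries row by row.
v = A.reshape(4)
assert np.allclose(v, [1., 2., 3., 4.])

# The inverse reshapes the column back into a 2x2 matrix.
assert np.allclose(v.reshape(2, 2), A)

# Linearity: flattening a linear combination equals the same
# combination of the flattenings.
B = np.array([[0., 1.],
              [1., 0.]])
assert np.allclose((A + 2.0 * B).reshape(4),
                   A.reshape(4) + 2.0 * B.reshape(4))
```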

### Solution Spaces

The solution space of a homogeneous linear differential equation is a vector space; when it is finite-dimensional, it is isomorphic to a coordinate space.

For example, the equation

$$
y''+y=0
$$

has solution space

$$
\{a\cos x+b\sin x:a,b\in\mathbb{R}\}.
$$

This space has basis

$$
(\cos x,\sin x).
$$

Therefore it is isomorphic to \(\mathbb{R}^2\).

An isomorphism is

$$
a\cos x+b\sin x
\mapsto
\begin{bmatrix}
a\\
b
\end{bmatrix}.
$$
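As a numerical sanity check that every \(a\cos x+b\sin x\) really solves the equation, the sketch below approximates \(y''\) by central differences and verifies \(y''+y\approx 0\) (the sample coefficients and tolerance are ours):

```python
import numpy as np

a, b = 2.0, -1.5
x = np.linspace(0.0, 2.0 * np.pi, 2001)
h = x[1] - x[0]
y = a * np.cos(x) + b * np.sin(x)

# Second derivative by central differences on the interior points.
y2 = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

# y'' + y should vanish up to discretization error.
assert np.max(np.abs(y2 + y[1:-1])) < 1e-4
```

The isomorphism with \(\mathbb{R}^2\) is then the coordinate map \(a\cos x+b\sin x\mapsto(a,b)\).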

## 36.8 Nonexamples

Not every linear map is an isomorphism.

Define

$$
T:\mathbb{R}^2\to\mathbb{R}^2
$$

by

$$
T
\begin{bmatrix}
x\\
y
\end{bmatrix} =
\begin{bmatrix}
x\\
0
\end{bmatrix}.
$$

This is projection onto the \(x\)-axis. It is linear, but it is not an isomorphism.

Its kernel is

$$
\ker(T)=
\operatorname{span}
\left\{
\begin{bmatrix}
0\\
1
\end{bmatrix}
\right\}.
$$

Since the kernel contains a nonzero vector, \(T\) is not injective.

Its image is

$$
\operatorname{im}(T)=
\left\{
\begin{bmatrix}
x\\
0
\end{bmatrix}
:x\in\mathbb{R}
\right\}.
$$

Since the image is a proper subspace of \(\mathbb{R}^2\), \(T\) is not surjective.

Thus projection loses information and cannot be reversed.
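The lost information is visible numerically: the projection matrix is singular, so NumPy refuses to invert it. A minimal sketch:

```python
import numpy as np

# Projection onto the x-axis.
P = np.array([[1., 0.],
              [0., 0.]])

# The kernel contains a nonzero vector, so T is not injective.
e2 = np.array([0., 1.])
assert np.allclose(P @ e2, [0., 0.])

# The image has dimension 1 < 2, so T is not surjective.
assert np.linalg.matrix_rank(P) == 1

# A singular matrix has no inverse; numpy raises LinAlgError.
try:
    np.linalg.inv(P)
    inverted = True
except np.linalg.LinAlgError:
    inverted = False
assert not inverted
```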

## 36.9 Matrix Isomorphisms

A matrix

$$
A\in F^{m\times n}
$$

defines a linear map

$$
T_A:F^n\to F^m
$$

by

$$
T_A(x)=Ax.
$$

The map \(T_A\) is an isomorphism exactly when \(A\) is square and invertible.

If \(A\) is invertible, then \(m=n\), and the inverse transformation is

$$
T_A^{-1}(y)=A^{-1}y.
$$

If \(A\) is not square, then \(T_A\) cannot be an isomorphism between \(F^n\) and \(F^m\), because the domain and codomain have different finite dimensions.

If \(A\) is square but singular, then \(T_A\) is not an isomorphism. It has either a nontrivial kernel, or an image smaller than the codomain, or both.

For square matrices, the following conditions are equivalent:

| Condition | Meaning |
|---|---|
| \(A\) is invertible | \(A^{-1}\) exists |
| \(T_A\) is an isomorphism | The matrix map is reversible |
| \(\ker(A)=\{0\}\) | No nonzero vector is collapsed |
| \(\operatorname{im}(A)=F^n\) | Every output is reached |
| \(\operatorname{rank}(A)=n\) | Full rank |
| The columns of \(A\) form a basis of \(F^n\) | Independent and spanning |
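The equivalence of these conditions can be checked for a sample matrix; each row of the table reduces to the same full-rank test (the variable names below are ours):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
n = A.shape[0]
rank = np.linalg.matrix_rank(A)

invertible = not np.isclose(np.linalg.det(A), 0.0)
trivial_kernel = (rank == n)       # ker(A) = {0}
full_image = (rank == n)           # im(A) = F^n
columns_form_basis = (rank == n)   # n independent columns in F^n

# All the conditions agree, as the table asserts.
assert invertible == trivial_kernel == full_image == columns_form_basis
assert invertible
```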

## 36.10 Isomorphism and Rank-Nullity

Let

$$
T:V\to W
$$

be a linear map, with \(V\) finite-dimensional.

The rank-nullity theorem states

$$
\dim(V)=\operatorname{rank}(T)+\operatorname{nullity}(T).
$$

If \(T\) is an isomorphism, then

$$
\ker(T)=\{0\},
$$

so

$$
\operatorname{nullity}(T)=0.
$$

Also,

$$
\operatorname{im}(T)=W,
$$

so

$$
\operatorname{rank}(T)=\dim(W).
$$

Thus

$$
\dim(V)=\dim(W).
$$

Conversely, if

$$
\dim(V)=\dim(W)
$$

and \(T:V\to W\) is linear, then injectivity implies surjectivity, and surjectivity implies injectivity. This follows from rank-nullity.

Suppose \(T\) is injective. Then

$$
\operatorname{nullity}(T)=0.
$$

So

$$
\operatorname{rank}(T)=\dim(V)=\dim(W).
$$

Hence

$$
\operatorname{im}(T)=W,
$$

so \(T\) is surjective.

Suppose \(T\) is surjective. Then

$$
\operatorname{rank}(T)=\dim(W)=\dim(V).
$$

So

$$
\operatorname{nullity}(T)=0.
$$

Hence

$$
\ker(T)=\{0\},
$$

so \(T\) is injective.

Therefore, between finite-dimensional spaces of equal dimension, it is enough to prove either injectivity or surjectivity.
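Rank-nullity is easy to observe for a matrix map, where the nullity is the number of columns minus the rank. A sketch with a sample square matrix (the helper `nullity` is ours):

```python
import numpy as np

def nullity(M):
    """nullity(T_M) = dim of the domain minus rank."""
    return M.shape[1] - np.linalg.matrix_rank(M)

A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

rank = np.linalg.matrix_rank(A)

# rank + nullity = dim of the domain.
assert rank + nullity(A) == A.shape[1]

# For a square matrix, nullity 0 (injective) forces full rank (surjective).
if nullity(A) == 0:
    assert rank == A.shape[0]
```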

## 36.11 Isomorphism Preserves Structure

An isomorphism preserves all vector-space properties.

If \(T:V\to W\) is an isomorphism and \(S\subseteq V\) is a subspace, then

$$
T(S)=\{T(s):s\in S\}
$$

is a subspace of \(W\). Moreover,

$$
\dim(T(S))=\dim(S).
$$

If

$$
(v_1,\ldots,v_k)
$$

is a linearly independent list in \(V\), then

$$
(T(v_1),\ldots,T(v_k))
$$

is linearly independent in \(W\).

If

$$
(v_1,\ldots,v_k)
$$

spans \(V\), then

$$
(T(v_1),\ldots,T(v_k))
$$

spans \(W\).

If

$$
(v_1,\ldots,v_n)
$$

is a basis of \(V\), then

$$
(T(v_1),\ldots,T(v_n))
$$

is a basis of \(W\).

This explains why isomorphic spaces are treated as structurally identical. Basis, dimension, linear independence, span, subspaces, and linear equations transfer through an isomorphism.

## 36.12 Proof That Bases Are Preserved

Let \(T:V\to W\) be an isomorphism, and let

$$
B=(v_1,\ldots,v_n)
$$

be a basis of \(V\).

First, prove that

$$
(T(v_1),\ldots,T(v_n))
$$

is linearly independent.

Suppose

$$
c_1T(v_1)+\cdots+c_nT(v_n)=0.
$$

By linearity,

$$
T(c_1v_1+\cdots+c_nv_n)=0.
$$

Since \(T\) is injective, its kernel is \(\{0\}\). Hence

$$
c_1v_1+\cdots+c_nv_n=0.
$$

Since \(B\) is linearly independent,

$$
c_1=\cdots=c_n=0.
$$

Therefore the image list is linearly independent.

Next, prove that it spans \(W\). Let \(w\in W\). Since \(T\) is surjective, there exists \(v\in V\) such that

$$
T(v)=w.
$$

Since \(B\) spans \(V\), write

$$
v=c_1v_1+\cdots+c_nv_n.
$$

Then

$$
w=T(v)=c_1T(v_1)+\cdots+c_nT(v_n).
$$

Thus every \(w\in W\) lies in the span of the image list.

So the image list is a basis of \(W\).
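The argument above can be watched in coordinates: pushing a basis of \(\mathbb{R}^2\) through an invertible matrix produces columns that are again independent and spanning. A minimal sketch:

```python
import numpy as np

T = np.array([[1., 1.],
              [0., 2.]])     # invertible, hence an isomorphism of R^2

basis = np.eye(2)            # standard basis e1, e2 as columns
image = T @ basis            # columns are T(e1), T(e2)

# Two independent columns in R^2: the image list is a basis.
assert np.linalg.matrix_rank(image) == 2
```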

## 36.13 Isomorphism Versus Equality

Isomorphic spaces are not necessarily equal as sets.

For example,

$$
P_2
$$

and

$$
\mathbb{R}^3
$$

are different sets. One contains polynomials. The other contains ordered triples. But they are isomorphic because both are three-dimensional real vector spaces.

Equality is stricter than isomorphism. Equality says two objects are the same object. Isomorphism says two objects have the same structure.

In linear algebra, structure is usually what matters. Once a basis is chosen, any \(n\)-dimensional vector space can be represented by \(F^n\). But this representation depends on the basis. A different basis gives a different isomorphism.

Thus the statement

$$
V\cong F^n
$$

is always true for an \(n\)-dimensional space over \(F\), but a specific isomorphism is pinned down only after a basis has been chosen.

## 36.14 Natural and Chosen Isomorphisms

Some isomorphisms are natural. Others depend on arbitrary choices.

The map

$$
T:P_2\to\mathbb{R}^3
$$

defined by

$$
T(a+bx+cx^2)=
\begin{bmatrix}
a\\
b\\
c
\end{bmatrix}
$$

depends on the chosen basis

$$
(1,x,x^2).
$$

If instead we choose the basis

$$
(1,x-1,(x-1)^2),
$$

we get a different coordinate isomorphism.

Both are valid. Neither changes the fact that

$$
P_2\cong\mathbb{R}^3.
$$

But the actual coordinate vector assigned to a polynomial may change.

This is a recurring theme. Vector spaces of the same finite dimension are isomorphic, but a specific isomorphism usually requires choosing a basis.
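The basis dependence can be computed directly. The sketch below finds the coordinates of one sample polynomial in both bases by solving a change-of-basis system (the matrix \(M\) lists the shifted basis in monomial coordinates):

```python
import numpy as np

# p(x) = 2 + 3x + x^2, written in the monomial basis (1, x, x^2).
p = np.array([2., 3., 1.])

# Columns: 1, x-1, (x-1)^2 expressed in monomial coordinates.
M = np.array([[1., -1.,  1.],
              [0.,  1., -2.],
              [0.,  0.,  1.]])

# Coordinates of p relative to the shifted basis (1, x-1, (x-1)^2).
c = np.linalg.solve(M, p)

# Same polynomial, different coordinate vector.
assert not np.allclose(c, p)
assert np.allclose(M @ c, p)
```

Here \(c=(6,5,1)\): indeed \(2+3x+x^2 = 6 + 5(x-1) + (x-1)^2\).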

## 36.15 Isomorphism of Operators

Isomorphism also appears when comparing linear operators.

Let

$$
T:V\to V
$$

and

$$
S:W\to W
$$

be linear operators. Suppose there is an isomorphism

$$
P:V\to W
$$

such that

$$
S\circ P=P\circ T.
$$

Then \(T\) and \(S\) represent the same operator structure under the identification \(P\).

Equivalently,

$$
S=P\circ T\circ P^{-1}.
$$

In matrix form, this becomes similarity:

$$
B=PAP^{-1}
$$

or, depending on the coordinate convention,

$$
B=P^{-1}AP.
$$

Similar matrices represent the same linear operator in different bases. They share structural invariants such as rank, determinant, trace, eigenvalues, and characteristic polynomial.
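These shared invariants can be verified for a sample pair of similar matrices. A sketch using the convention \(B=PAP^{-1}\):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
P = np.array([[1., 1.],
              [0., 1.]])            # invertible change of basis

B = P @ A @ np.linalg.inv(P)        # B is similar to A

# Similar matrices share trace, determinant, rank, and eigenvalues.
assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
assert np.allclose(np.sort(np.linalg.eigvals(B)),
                   np.sort(np.linalg.eigvals(A)))
```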

## 36.16 First Isomorphism Theorem

Let

$$
T:V\to W
$$

be linear. The first isomorphism theorem states that the quotient space \(V/\ker(T)\) is isomorphic to the image of \(T\):

$$
V/\ker(T)\cong \operatorname{im}(T).
$$

The idea is that vectors in \(V\) that differ by an element of the kernel have the same image under \(T\).

Indeed,

$$
T(u)=T(v)
$$

if and only if

$$
T(u-v)=0,
$$

which holds if and only if

$$
u-v\in\ker(T).
$$

Thus each coset of \(\ker(T)\) corresponds to exactly one output vector in \(\operatorname{im}(T)\). This gives a well-defined isomorphism

$$
v+\ker(T)\mapsto T(v).
$$

The theorem formalizes a simple idea: after collapsing exactly the kernel, the remaining domain is the image.
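The well-definedness step has a concrete numerical face: two vectors differing by a kernel element land on the same output. A minimal sketch with a projection:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 0.]])        # kernel of T_A is the y-axis

u = np.array([3., 1.])
k = np.array([0., 5.])          # a kernel element

# Vectors differing by a kernel element have the same image,
# so the map v + ker(T) -> T(v) is well defined on cosets.
assert np.allclose(A @ u, A @ (u + k))
```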

## 36.17 Geometric Interpretation

An isomorphism is a reversible linear change of description.

In \(\mathbb{R}^2\), an invertible matrix may rotate, reflect, shear, or stretch the plane. It may change lengths and angles (an orthogonal matrix preserves both), but it does not collapse the plane onto a line or a point.

A projection is not an isomorphism because it loses information. A map from \(\mathbb{R}^2\) onto a line cannot be reversed on all of \(\mathbb{R}^2\). A map from a line onto \(\mathbb{R}^2\) cannot reach every point.

In finite-dimensional geometry, an isomorphism preserves dimension. It may change coordinates or shape, but it keeps the number of independent directions.

## 36.18 Summary

An isomorphism is an invertible linear map.

For vector spaces \(V\) and \(W\), a linear map

$$
T:V\to W
$$

is an isomorphism when it is both injective and surjective. Equivalently,

$$
\ker(T)=\{0\}
$$

and

$$
\operatorname{im}(T)=W.
$$

If \(V\) and \(W\) are finite-dimensional over the same field, then

$$
V\cong W
\quad\Longleftrightarrow\quad
\dim(V)=\dim(W).
$$

Every \(n\)-dimensional vector space over \(F\) is isomorphic to \(F^n\) after a basis is chosen.

Isomorphisms preserve linear structure. They send bases to bases, linearly independent sets to linearly independent sets, spanning sets to spanning sets, and subspaces to subspaces of the same dimension.

Thus isomorphism is the correct notion of sameness for vector spaces.
