Chapter 36. Isomorphisms

An isomorphism is an invertible linear transformation. It gives a precise meaning to the statement that two vector spaces have the same linear structure.

Let $V$ and $W$ be vector spaces over the same field $F$. A linear map

$$T : V \to W$$

is called an isomorphism if there exists a linear map

$$T^{-1} : W \to V$$

such that

$$T^{-1}(T(v)) = v$$

for every $v \in V$, and

$$T(T^{-1}(w)) = w$$

for every $w \in W$.

Equivalently, an isomorphism is a linear map that is both injective and surjective. Two vector spaces are called isomorphic when there exists an isomorphism between them. Isomorphic vector spaces have the same structure from the viewpoint of linear algebra.

36.1 The Meaning of Isomorphism

The word isomorphism means same form. In linear algebra, it means same vector-space structure.

If

$$T : V \to W$$

is an isomorphism, then every vector in $V$ corresponds to exactly one vector in $W$, and every vector in $W$ comes from exactly one vector in $V$. Addition and scalar multiplication are preserved by the correspondence.

Thus

$$T(u + v) = T(u) + T(v)$$

and

$$T(cv) = cT(v).$$

The inverse transformation preserves the same operations in the reverse direction.

An isomorphism does not say that the elements of $V$ and $W$ look the same. It says that they behave the same under the operations of linear algebra.

For example, the space of polynomials

$$P_2 = \{a + bx + cx^2 : a, b, c \in \mathbb{R}\}$$

looks different from $\mathbb{R}^3$. One space contains polynomials. The other contains coordinate columns. But they are isomorphic.

The map

$$T : P_2 \to \mathbb{R}^3$$

defined by

$$T(a + bx + cx^2) = \begin{bmatrix} a\\ b\\ c \end{bmatrix}$$

is an isomorphism.

It preserves addition and scalar multiplication, and every vector in $\mathbb{R}^3$ arises from exactly one polynomial in $P_2$.
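The preservation of the two operations can be checked numerically. Below is a minimal sketch in NumPy (not from the chapter): a polynomial $a + bx + cx^2$ is represented by its coefficient tuple, and `T` sends it to a column in $\mathbb{R}^3$.

```python
import numpy as np

# A polynomial a + b x + c x^2 is represented here by its coefficient
# tuple (a, b, c); T repackages it as a vector in R^3.
def T(p):
    a, b, c = p
    return np.array([a, b, c], dtype=float)

p = (1.0, 2.0, 3.0)   # 1 + 2x + 3x^2
q = (4.0, 0.0, -1.0)  # 4 - x^2

p_plus_q = tuple(pi + qi for pi, qi in zip(p, q))
five_p = tuple(5.0 * pi for pi in p)

# T preserves addition and scalar multiplication:
add_preserved = np.allclose(T(p_plus_q), T(p) + T(q))
scale_preserved = np.allclose(T(five_p), 5.0 * T(p))
print(add_preserved, scale_preserved)  # True True
```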

36.2 Injective and Surjective Linear Maps

An isomorphism is both injective and surjective.

A linear map

$$T : V \to W$$

is injective if

$$T(u) = T(v)$$

implies

$$u = v.$$

It is surjective if

$$\operatorname{im}(T) = W.$$

Thus $T$ is an isomorphism exactly when every output vector has exactly one input vector.

For linear maps, injectivity is controlled by the kernel:

$$T \text{ is injective} \quad\Longleftrightarrow\quad \ker(T) = \{0\}.$$

Surjectivity is controlled by the image:

$$T \text{ is surjective} \quad\Longleftrightarrow\quad \operatorname{im}(T) = W.$$

Therefore

$$T \text{ is an isomorphism} \quad\Longleftrightarrow\quad \ker(T) = \{0\} \text{ and } \operatorname{im}(T) = W.$$

The kernel records failure of injectivity. The image records failure of surjectivity.
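For a matrix map, both conditions can be read off from the rank. A small NumPy sketch (the matrix here is an illustrative choice, not from the chapter):

```python
import numpy as np

A = np.array([[1., 2.],
              [0., 1.],
              [1., 0.]])  # a linear map R^2 -> R^3

rank = np.linalg.matrix_rank(A)
n, m = A.shape[1], A.shape[0]  # dim of domain, dim of codomain

injective = (rank == n)   # ker = {0} iff rank equals the domain dimension
surjective = (rank == m)  # im = codomain iff rank equals the codomain dimension
print(injective, surjective)  # True False: injective but not surjective
```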

36.3 The Inverse Map

Suppose

$$T : V \to W$$

is an isomorphism. Since $T$ is bijective, each vector $w \in W$ has a unique preimage $v \in V$. Define

$$T^{-1}(w) = v$$

when

$$T(v) = w.$$

This gives a function

$$T^{-1} : W \to V.$$

We now show that $T^{-1}$ is linear.

Let $w_1, w_2 \in W$. Since $T$ is surjective, there exist $v_1, v_2 \in V$ such that

$$T(v_1) = w_1$$

and

$$T(v_2) = w_2.$$

Then

$$w_1 + w_2 = T(v_1) + T(v_2) = T(v_1 + v_2).$$

Therefore

$$T^{-1}(w_1 + w_2) = v_1 + v_2.$$

But

$$v_1 = T^{-1}(w_1), \qquad v_2 = T^{-1}(w_2).$$

So

$$T^{-1}(w_1 + w_2) = T^{-1}(w_1) + T^{-1}(w_2).$$

For scalar multiplication, let $c \in F$. If $T(v) = w$, then

$$cw = cT(v) = T(cv).$$

Therefore

$$T^{-1}(cw) = cv = cT^{-1}(w).$$

Thus $T^{-1}$ is linear.

36.4 Isomorphic Vector Spaces

Two vector spaces $V$ and $W$ are isomorphic if there exists an isomorphism

$$T : V \to W.$$

This is written

$$V \cong W.$$

The symbol $\cong$ means isomorphic to.

Isomorphism is an equivalence relation on vector spaces over a fixed field. It is reflexive, symmetric, and transitive.

It is reflexive because every vector space is isomorphic to itself by the identity map:

$$I_V(v) = v.$$

It is symmetric because if

$$T : V \to W$$

is an isomorphism, then

$$T^{-1} : W \to V$$

is also an isomorphism.

It is transitive because if

$$T : U \to V$$

and

$$S : V \to W$$

are isomorphisms, then

$$S \circ T : U \to W$$

is an isomorphism.

36.5 Dimension and Isomorphism

Finite-dimensional vector spaces over the same field are isomorphic exactly when they have the same dimension.

First suppose

$$T : V \to W$$

is an isomorphism. Since $T$ is injective and surjective, it sends a basis of $V$ to a basis of $W$. Therefore

$$\dim(V) = \dim(W).$$

Conversely, suppose

$$\dim(V) = \dim(W) = n.$$

Choose a basis

$$B = (v_1, \ldots, v_n)$$

of $V$, and choose a basis

$$C = (w_1, \ldots, w_n)$$

of $W$.

Define $T : V \to W$ by sending

$$T(v_i) = w_i$$

for each $i$, and extending linearly.

If

$$v = c_1 v_1 + \cdots + c_n v_n,$$

define

$$T(v) = c_1 w_1 + \cdots + c_n w_n.$$

This map is linear by construction. It is injective because the only linear combination of $w_1, \ldots, w_n$ equal to zero is the trivial one. It is surjective because the vectors $w_1, \ldots, w_n$ span $W$.

Therefore $T$ is an isomorphism.

Thus, for finite-dimensional vector spaces over the same field,

$$V \cong W \quad\Longleftrightarrow\quad \dim(V) = \dim(W).$$
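The construction $T(v_i) = w_i$ can be sketched concretely in $\mathbb{R}^3$. Below, the columns of two invertible matrices play the roles of the bases $B$ and $C$ (both matrices are illustrative choices); the unique linear map sending $b_i \mapsto c_i$ has matrix $CB^{-1}$.

```python
import numpy as np

# Two bases of R^3, stored as the columns of invertible matrices.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
C = np.array([[2., 0., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# The unique linear map with T(b_i) = c_i has matrix C B^{-1}.
T = C @ np.linalg.inv(B)

sends_basis_to_basis = np.allclose(T @ B, C)  # each b_i goes to c_i
invertible = abs(np.linalg.det(T)) > 1e-12    # so T is an isomorphism
print(sends_basis_to_basis, invertible)  # True True
```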

36.6 Coordinate Isomorphism

Every finite-dimensional vector space is isomorphic to a coordinate space.

Let $V$ be an $n$-dimensional vector space over $F$, and let

$$B = (v_1, \ldots, v_n)$$

be an ordered basis of $V$.

Define

$$\Phi_B : V \to F^n$$

by

$$\Phi_B(v) = [v]_B.$$

This map sends each vector to its coordinate vector relative to $B$.

If

$$v = c_1 v_1 + \cdots + c_n v_n,$$

then

$$\Phi_B(v) = \begin{bmatrix} c_1\\ \vdots\\ c_n \end{bmatrix}.$$

The map $\Phi_B$ is linear because coordinates respect addition and scalar multiplication.

It is injective because a vector has only one coordinate representation in a basis. It is surjective because every coordinate vector in $F^n$ determines a vector in $V$.

Therefore

$$V \cong F^n.$$

This is why finite-dimensional linear algebra can often be done with coordinate columns. A basis converts abstract vectors into coordinates.
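When $V = \mathbb{R}^n$ with a non-standard basis, computing $[v]_B$ amounts to solving a linear system: if the basis vectors are the columns of $B$, then the coordinates $c$ satisfy $Bc = v$. A short NumPy sketch (the basis and vector are illustrative choices):

```python
import numpy as np

# A basis of R^3 (as columns), deliberately not the standard one.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
v = np.array([2., 3., 3.])

# The coordinate vector [v]_B solves B c = v; this is Phi_B(v).
c = np.linalg.solve(B, v)

reconstructs = np.allclose(B @ c, v)  # coordinates rebuild v exactly
print(reconstructs)  # True
```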

36.7 Examples

Polynomial Spaces

Let

$$P_3 = \{a + bx + cx^2 + dx^3 : a, b, c, d \in \mathbb{R}\}.$$

The set

$$(1, x, x^2, x^3)$$

is a basis. Therefore

$$\dim(P_3) = 4.$$

Since

$$\dim(\mathbb{R}^4) = 4,$$

we have

$$P_3 \cong \mathbb{R}^4.$$

An explicit isomorphism is

$$T(a + bx + cx^2 + dx^3) = \begin{bmatrix} a\\ b\\ c\\ d \end{bmatrix}.$$

Matrix Spaces

Let $M_{2\times 2}(\mathbb{R})$ be the vector space of all $2\times 2$ real matrices. Each matrix has the form

$$\begin{bmatrix} a & b\\ c & d \end{bmatrix}.$$

Define

$$T : M_{2\times 2}(\mathbb{R}) \to \mathbb{R}^4$$

by

$$T\left(\begin{bmatrix} a & b\\ c & d \end{bmatrix}\right) = \begin{bmatrix} a\\ b\\ c\\ d \end{bmatrix}.$$

This is an isomorphism. It preserves addition and scalar multiplication, and it has an inverse:

$$\begin{bmatrix} a\\ b\\ c\\ d \end{bmatrix} \mapsto \begin{bmatrix} a & b\\ c & d \end{bmatrix}.$$

Thus

$$M_{2\times 2}(\mathbb{R}) \cong \mathbb{R}^4.$$
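In NumPy this isomorphism is just `reshape`: reading the entries row by row flattens a matrix into $\mathbb{R}^4$, and reshaping back is the inverse map. A minimal sketch:

```python
import numpy as np

M = np.array([[1., 2.],
              [3., 4.]])

# T reads the entries row by row: M_{2x2}(R) -> R^4.
v = M.reshape(4)          # forward map: [1. 2. 3. 4.]
M_back = v.reshape(2, 2)  # inverse map

round_trip_ok = np.array_equal(M, M_back)  # the map is invertible
print(v, round_trip_ok)
```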

Solution Spaces

The solution space of a homogeneous linear differential equation may be isomorphic to a coordinate space.

For example, the equation

$$y'' + y = 0$$

has solution space

$$\{a\cos x + b\sin x : a, b \in \mathbb{R}\}.$$

This space has basis

$$(\cos x, \sin x).$$

Therefore it is isomorphic to $\mathbb{R}^2$.

An isomorphism is

$$a\cos x + b\sin x \mapsto \begin{bmatrix} a\\ b \end{bmatrix}.$$

36.8 Nonexamples

Not every linear map is an isomorphism.

Define

$$T : \mathbb{R}^2 \to \mathbb{R}^2$$

by

$$T\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} x\\ 0 \end{bmatrix}.$$

This is projection onto the $x$-axis. It is linear, but it is not an isomorphism.

Its kernel is

$$\ker(T) = \operatorname{span}\left\{\begin{bmatrix} 0\\ 1 \end{bmatrix}\right\}.$$

Since the kernel contains a nonzero vector, $T$ is not injective.

Its image is

$$\operatorname{im}(T) = \left\{\begin{bmatrix} x\\ 0 \end{bmatrix} : x \in \mathbb{R}\right\}.$$

Since the image is a proper subspace of $\mathbb{R}^2$, $T$ is not surjective.

Thus projection loses information and cannot be reversed.
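The failure is visible numerically: the projection matrix has rank 1, kills a nonzero vector, and is singular, so no inverse exists. A quick NumPy sketch:

```python
import numpy as np

P = np.array([[1., 0.],
              [0., 0.]])  # projection onto the x-axis

rank = np.linalg.matrix_rank(P)            # 1: image is a line, not all of R^2
kills_e2 = np.allclose(P @ np.array([0., 1.]), 0)  # (0, 1) is in the kernel
singular = abs(np.linalg.det(P)) < 1e-12   # no inverse: projection cannot be undone
print(rank, kills_e2, singular)  # 1 True True
```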

36.9 Matrix Isomorphisms

Let

$$A \in F^{m\times n}$$

define a linear map

$$T_A : F^n \to F^m$$

by

$$T_A(x) = Ax.$$

The map $T_A$ is an isomorphism exactly when $A$ is square and invertible.

If $A$ is invertible, then $m = n$, and the inverse transformation is

$$T_A^{-1}(y) = A^{-1}y.$$

If $A$ is not square, then $T_A$ cannot be an isomorphism between $F^n$ and $F^m$, because the domain and codomain have different finite dimensions.

If $A$ is square but singular, then $T_A$ is not an isomorphism. It has either a nontrivial kernel, or an image smaller than the codomain, or both.

For a square $n\times n$ matrix $A$, the following conditions are equivalent:

| Condition | Meaning |
| --- | --- |
| $A$ is invertible | $A^{-1}$ exists |
| $T_A$ is an isomorphism | The matrix map is reversible |
| $\ker(A) = \{0\}$ | No nonzero vector is collapsed |
| $\operatorname{im}(A) = F^n$ | Every output is reached |
| $\operatorname{rank}(A) = n$ | Full rank |
| The columns of $A$ form a basis of $F^n$ | Independent and spanning |
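Since all of these conditions reduce to full rank, a single rank check decides them at once. A minimal NumPy sketch, with one invertible and one singular matrix chosen for illustration:

```python
import numpy as np

def is_isomorphism(A, tol=1e-10):
    """For square A, every condition in the table reduces to full rank."""
    n = A.shape[0]
    return np.linalg.matrix_rank(A, tol=tol) == n

A = np.array([[1., 2.],
              [3., 4.]])  # det = -2: invertible
S = np.array([[1., 2.],
              [2., 4.]])  # det = 0: singular

print(is_isomorphism(A), is_isomorphism(S))  # True False
```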

36.10 Isomorphism and Rank-Nullity

Let

$$T : V \to W$$

be a linear map, with $V$ finite-dimensional.

The rank-nullity theorem states

$$\dim(V) = \operatorname{rank}(T) + \operatorname{nullity}(T).$$

If $T$ is an isomorphism, then

$$\ker(T) = \{0\},$$

so

$$\operatorname{nullity}(T) = 0.$$

Also,

$$\operatorname{im}(T) = W,$$

so

$$\operatorname{rank}(T) = \dim(W).$$

Thus

$$\dim(V) = \dim(W).$$

Conversely, if

$$\dim(V) = \dim(W)$$

and $T : V \to W$ is linear, then injectivity implies surjectivity, and surjectivity implies injectivity. This follows from rank-nullity.

Suppose $T$ is injective. Then

$$\operatorname{nullity}(T) = 0.$$

So

$$\operatorname{rank}(T) = \dim(V) = \dim(W).$$

Hence

$$\operatorname{im}(T) = W,$$

so $T$ is surjective.

Suppose $T$ is surjective. Then

$$\operatorname{rank}(T) = \dim(W) = \dim(V).$$

So

$$\operatorname{nullity}(T) = 0.$$

Hence

$$\ker(T) = \{0\},$$

so $T$ is injective.

Therefore, between finite-dimensional spaces of equal dimension, it is enough to prove either injectivity or surjectivity.
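For square matrices the equivalence is easy to observe: with equal domain and codomain dimensions, rank-nullity forces injectivity and surjectivity to stand or fall together. A small sketch with one invertible and one singular matrix (illustrative choices):

```python
import numpy as np

results = []
for M in (np.array([[1., 1.], [0., 1.]]),   # invertible
          np.array([[1., 1.], [1., 1.]])):  # singular
    rank = np.linalg.matrix_rank(M)
    injective = (M.shape[1] - rank == 0)  # nullity is zero
    surjective = (rank == M.shape[0])     # image is the whole codomain
    results.append((injective, surjective))

print(results)  # [(True, True), (False, False)]: the two notions coincide
```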

36.11 Isomorphism Preserves Structure

An isomorphism preserves all vector-space properties.

If $T : V \to W$ is an isomorphism and $S \subseteq V$ is a subspace, then

$$T(S) = \{T(s) : s \in S\}$$

is a subspace of $W$. Moreover,

$$\dim(T(S)) = \dim(S).$$

If

$$(v_1, \ldots, v_k)$$

is a linearly independent list in $V$, then

$$(T(v_1), \ldots, T(v_k))$$

is linearly independent in $W$.

If

$$(v_1, \ldots, v_k)$$

spans $V$, then

$$(T(v_1), \ldots, T(v_k))$$

spans $W$.

If

$$(v_1, \ldots, v_n)$$

is a basis of $V$, then

$$(T(v_1), \ldots, T(v_n))$$

is a basis of $W$.

This explains why isomorphic spaces are treated as structurally identical. Basis, dimension, linear independence, span, subspaces, and linear equations transfer through an isomorphism.

36.12 Proof That Bases Are Preserved

Let $T : V \to W$ be an isomorphism, and let

$$B = (v_1, \ldots, v_n)$$

be a basis of $V$.

First, prove that

$$(T(v_1), \ldots, T(v_n))$$

is linearly independent.

Suppose

$$c_1 T(v_1) + \cdots + c_n T(v_n) = 0.$$

By linearity,

$$T(c_1 v_1 + \cdots + c_n v_n) = 0.$$

Since $T$ is injective, its kernel is $\{0\}$. Hence

$$c_1 v_1 + \cdots + c_n v_n = 0.$$

Since $B$ is linearly independent,

$$c_1 = \cdots = c_n = 0.$$

Therefore the image list is linearly independent.

Next, prove that it spans $W$. Let $w \in W$. Since $T$ is surjective, there exists $v \in V$ such that

$$T(v) = w.$$

Since $B$ spans $V$, write

$$v = c_1 v_1 + \cdots + c_n v_n.$$

Then

$$w = T(v) = c_1 T(v_1) + \cdots + c_n T(v_n).$$

Thus every $w \in W$ lies in the span of the image list.

So the image list is a basis of $W$.

36.13 Isomorphism Versus Equality

Isomorphic spaces are not necessarily equal as sets.

For example, $P_2$ and $\mathbb{R}^3$ are different sets. One contains polynomials. The other contains ordered triples. But they are isomorphic because both are three-dimensional real vector spaces.

Equality is stricter than isomorphism. Equality says two objects are the same object. Isomorphism says two objects have the same structure.

In linear algebra, structure is usually what matters. Once a basis is chosen, any $n$-dimensional vector space can be represented by $F^n$. But this representation depends on the basis. A different basis gives a different isomorphism.

Thus the isomorphism in

$$V \cong F^n$$

depends on the choice of basis; there is no single canonical one.

36.14 Natural and Chosen Isomorphisms

Some isomorphisms are natural. Others depend on arbitrary choices.

The map

$$T : P_2 \to \mathbb{R}^3$$

defined by

$$T(a + bx + cx^2) = \begin{bmatrix} a\\ b\\ c \end{bmatrix}$$

depends on the chosen basis

$$(1, x, x^2).$$

If instead we choose the basis

$$(1, x-1, (x-1)^2),$$

we get a different coordinate isomorphism.

Both are valid. Neither changes the fact that

$$P_2 \cong \mathbb{R}^3.$$

But the actual coordinate vector assigned to a polynomial may change.

This is a recurring theme. Vector spaces of the same finite dimension are isomorphic, but a specific isomorphism usually requires choosing a basis.
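The basis dependence can be computed directly. Below, the same polynomial $p(x) = 3 + 2x + x^2$ gets different coordinate vectors in the two bases above; the change-of-basis matrix has the shifted basis vectors, written in the standard basis, as its columns (the specific polynomial is an illustrative choice).

```python
import numpy as np

# p(x) = 3 + 2x + x^2 in the standard basis (1, x, x^2):
std = np.array([3., 2., 1.])

# Columns are 1, x-1, (x-1)^2 expanded in powers of x:
# 1 -> (1,0,0), x-1 -> (-1,1,0), (x-1)^2 = x^2 - 2x + 1 -> (1,-2,1).
P = np.array([[1., -1., 1.],
              [0., 1., -2.],
              [0., 0., 1.]])

# Coordinates in the shifted basis solve P c = std.
shifted = np.linalg.solve(P, std)
print(shifted)  # [6. 4. 1.]: p(x) = 6 + 4(x-1) + (x-1)^2
```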

36.15 Isomorphism of Operators

Isomorphism also appears when comparing linear operators.

Let

$$T : V \to V$$

and

$$S : W \to W$$

be linear operators. Suppose there is an isomorphism

$$P : V \to W$$

such that

$$S \circ P = P \circ T.$$

Then $T$ and $S$ represent the same operator structure under the identification $P$.

Equivalently,

$$S = P \circ T \circ P^{-1}.$$

In matrix form, this becomes similarity:

$$B = PAP^{-1}$$

or, depending on the coordinate convention,

$$B = P^{-1}AP.$$

Similar matrices represent the same linear operator in different bases. They share structural invariants such as rank, determinant, trace, eigenvalues, and characteristic polynomial.
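The shared invariants are easy to verify numerically. A minimal sketch (the matrices $A$ and $P$ are illustrative choices): conjugating $A$ by an invertible $P$ leaves trace, determinant, and eigenvalues unchanged.

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
P = np.array([[1., 1.],
              [0., 1.]])  # an invertible change of basis

B = P @ A @ np.linalg.inv(P)  # B is similar to A

same_trace = np.isclose(np.trace(A), np.trace(B))
same_det = np.isclose(np.linalg.det(A), np.linalg.det(B))
same_eigs = np.allclose(np.sort(np.linalg.eigvals(A).real),
                        np.sort(np.linalg.eigvals(B).real))
print(same_trace, same_det, same_eigs)  # True True True
```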

36.16 First Isomorphism Theorem

Let

$$T : V \to W$$

be linear. The first isomorphism theorem states that the quotient space $V/\ker(T)$ is isomorphic to the image of $T$:

$$V/\ker(T) \cong \operatorname{im}(T).$$

The idea is that vectors in $V$ that differ by an element of the kernel have the same image under $T$.

Indeed,

$$T(u) = T(v)$$

if and only if

$$T(u - v) = 0,$$

which holds if and only if

$$u - v \in \ker(T).$$

Thus each coset of $\ker(T)$ corresponds to exactly one output vector in $\operatorname{im}(T)$. This gives a well-defined isomorphism

$$v + \ker(T) \mapsto T(v).$$

The theorem formalizes a simple idea: after collapsing exactly the kernel, the remaining domain is the image.
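In dimensions, the theorem reads $\dim V - \operatorname{nullity}(T) = \operatorname{rank}(T)$, since $\dim(V/\ker T) = \dim V - \dim\ker T$. A quick numerical check with a rank-1 matrix (an illustrative choice):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])  # rank-1 map R^3 -> R^2

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank

# dim(V / ker T) = dim V - dim ker T, which the theorem says equals rank T.
quotient_dim = A.shape[1] - nullity
print(quotient_dim == rank)  # True
```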

36.17 Geometric Interpretation

An isomorphism is a reversible linear change of description.

In $\mathbb{R}^2$, an invertible matrix may rotate, reflect, shear, or stretch the plane. It may change lengths and angles, unless it is orthogonal, but it does not collapse the plane into a line or a point.

A projection is not an isomorphism because it loses information. A map from $\mathbb{R}^2$ onto a line cannot be reversed on all of $\mathbb{R}^2$. A map from a line into $\mathbb{R}^2$ cannot reach every point.

In finite-dimensional geometry, an isomorphism preserves dimension. It may change coordinates or shape, but it keeps the number of independent directions.

36.18 Summary

An isomorphism is an invertible linear map.

For vector spaces $V$ and $W$, a linear map

$$T : V \to W$$

is an isomorphism when it is both injective and surjective. Equivalently,

$$\ker(T) = \{0\}$$

and

$$\operatorname{im}(T) = W.$$

If $V$ and $W$ are finite-dimensional over the same field, then

$$V \cong W \quad\Longleftrightarrow\quad \dim(V) = \dim(W).$$

Every $n$-dimensional vector space over $F$ is isomorphic to $F^n$ after a basis is chosen.

Isomorphisms preserve linear structure. They send bases to bases, linearly independent sets to linearly independent sets, spanning sets to spanning sets, and subspaces to subspaces of the same dimension.

Thus isomorphism is the correct notion of sameness for vector spaces.