# Chapter 33. Kernel and Image

Let \(T : V \to W\) be a linear transformation. Two subspaces are naturally attached to \(T\): the kernel and the image.

The kernel records the vectors in the domain that are sent to zero. The image records the vectors in the codomain that are actually reached by the transformation. In standard notation,

$$
\ker(T)=\{v\in V:T(v)=0_W\}
$$

and

$$
\operatorname{im}(T)=\{T(v):v\in V\}.
$$

The kernel is also called the null space. The image is also called the range. For a matrix transformation \(T_A(x)=Ax\), the kernel is the null space of \(A\), and the image is the column space of \(A\).

## 33.1 The Kernel

The kernel of a linear transformation is the set of all input vectors that disappear under the transformation.

If

$$
T : V \to W,
$$

then

$$
\ker(T)=\{v\in V:T(v)=0_W\}.
$$

The zero vector in the definition is the zero vector of the codomain \(W\), not the zero vector of the domain \(V\). The distinction matters when \(V\) and \(W\) are different spaces.

For example, define

$$
T:\mathbb{R}^3\to\mathbb{R}^2
$$

by

$$
T
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix} =
\begin{bmatrix}
x+y\\
y+z
\end{bmatrix}.
$$

A vector belongs to the kernel when

$$
T
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix} =
\begin{bmatrix}
0\\
0
\end{bmatrix}.
$$

Thus

$$
x+y=0
$$

and

$$
y+z=0.
$$

From these equations,

$$
x=-y,
\qquad
z=-y.
$$

Let \(y=t\). Then

$$
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix} =
\begin{bmatrix}
-t\\
t\\
-t
\end{bmatrix} =
t
\begin{bmatrix}
-1\\
1\\
-1
\end{bmatrix}.
$$

Therefore

$$
\ker(T)=
\operatorname{span}
\left\{
\begin{bmatrix}
-1\\
1\\
-1
\end{bmatrix}
\right\}.
$$

The kernel is a line through the origin in \(\mathbb{R}^3\).
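The computation above is easy to check numerically. The following Python sketch (the function name `T` mirrors the text; the check itself is illustrative, not part of the chapter) confirms that every multiple of \((-1,1,-1)\) is sent to the zero vector of \(\mathbb{R}^2\):

```python
def T(x, y, z):
    """The transformation T : R^3 -> R^2 from the example."""
    return (x + y, y + z)

# Every multiple t * (-1, 1, -1) should map to (0, 0).
for t in (-2, -1, 0, 1, 3):
    assert T(-t, t, -t) == (0, 0)

print("every multiple of (-1, 1, -1) maps to (0, 0)")
```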

## 33.2 The Kernel as a Subspace

The kernel of a linear transformation is always a subspace of the domain.

Let \(T : V \to W\) be linear. We prove that \(\ker(T)\) is a subspace of \(V\).

First, since \(T\) is linear,

$$
T(0_V)=0_W.
$$

Thus

$$
0_V\in \ker(T).
$$

Second, suppose \(u,v\in\ker(T)\). Then

$$
T(u)=0_W
$$

and

$$
T(v)=0_W.
$$

By linearity,

$$
T(u+v)=T(u)+T(v)=0_W+0_W=0_W.
$$

So

$$
u+v\in\ker(T).
$$

Third, suppose \(u\in\ker(T)\) and \(c\) is a scalar. Then

$$
T(cu)=cT(u)=c0_W=0_W.
$$

So

$$
cu\in\ker(T).
$$

The kernel contains zero, is closed under addition, and is closed under scalar multiplication. Hence it is a subspace of \(V\).

## 33.3 The Image

The image of a linear transformation is the set of all possible outputs.

If

$$
T : V \to W,
$$

then

$$
\operatorname{im}(T)=\{T(v):v\in V\}.
$$

The image is a subset of the codomain \(W\). It may be all of \(W\), or it may be a smaller subspace.

For example, define

$$
P:\mathbb{R}^3\to\mathbb{R}^3
$$

by

$$
P
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix} =
\begin{bmatrix}
x\\
y\\
0
\end{bmatrix}.
$$

The output always has third coordinate zero. Therefore

$$
\operatorname{im}(P) =
\left\{
\begin{bmatrix}
a\\
b\\
0
\end{bmatrix}
:a,b\in\mathbb{R}
\right\}.
$$

This is the \(xy\)-plane in \(\mathbb{R}^3\).

The transformation \(P\) projects space onto the \(xy\)-plane. The image is the plane that remains after projection.
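A small sketch makes the projection concrete (the function `P` is named after the text's transformation; the idempotence check is an extra observation, not a claim from the chapter): every output has third coordinate zero, and projecting an output again leaves it unchanged, since it already lies in the image.

```python
def P(v):
    """Projection of R^3 onto the xy-plane."""
    x, y, z = v
    return (x, y, 0)

v = (3.0, -2.0, 7.5)
w = P(v)
assert w[2] == 0      # outputs lie in the xy-plane
assert P(w) == w      # w is already in the image, so projecting again does nothing
```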

## 33.4 The Image as a Subspace

The image of a linear transformation is always a subspace of the codomain.

Let \(T : V \to W\) be linear. We prove that \(\operatorname{im}(T)\) is a subspace of \(W\).

First,

$$
0_W=T(0_V),
$$

so

$$
0_W\in\operatorname{im}(T).
$$

Second, suppose \(y_1,y_2\in\operatorname{im}(T)\). Then there exist \(v_1,v_2\in V\) such that

$$
y_1=T(v_1)
$$

and

$$
y_2=T(v_2).
$$

Then

$$
y_1+y_2=T(v_1)+T(v_2)=T(v_1+v_2).
$$

Since \(v_1+v_2\in V\), this shows

$$
y_1+y_2\in\operatorname{im}(T).
$$

Third, suppose \(y_1\in\operatorname{im}(T)\) and \(c\) is a scalar. Then \(y_1=T(v_1)\) for some \(v_1\in V\). Hence

$$
cy_1=cT(v_1)=T(cv_1).
$$

Since \(cv_1\in V\),

$$
cy_1\in\operatorname{im}(T).
$$

Thus the image is a subspace of \(W\).

## 33.5 Matrix Transformations

Let \(A\) be an \(m\times n\) matrix. It defines a linear transformation

$$
T_A:\mathbb{R}^n\to\mathbb{R}^m
$$

by

$$
T_A(x)=Ax.
$$

The kernel of \(T_A\) is

$$
\ker(T_A)=\{x\in\mathbb{R}^n:Ax=0\}.
$$

This is the null space of \(A\).

The image of \(T_A\) is

$$
\operatorname{im}(T_A)=\{Ax:x\in\mathbb{R}^n\}.
$$

If the columns of \(A\) are

$$
a_1,a_2,\ldots,a_n,
$$

then

$$
Ax=x_1a_1+x_2a_2+\cdots+x_na_n.
$$

Therefore

$$
\operatorname{im}(T_A)=\operatorname{span}\{a_1,a_2,\ldots,a_n\}.
$$

So the image of a matrix transformation is exactly the column space of the matrix.
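The identity \(Ax=x_1a_1+\cdots+x_na_n\) can be verified directly. The sketch below (helper names `matvec` and `column_combination` are my own) computes \(Ax\) both ways, once row by row and once as a weighted sum of columns, and checks that the results agree:

```python
def matvec(A, x):
    """Ax computed row by row."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def column_combination(A, x):
    """Ax computed as x1*a1 + x2*a2 + ... + xn*an."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):          # add x_j times column j
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2, 1], [0, 1, -1]]
x = [3, -1, 2]
assert matvec(A, x) == column_combination(A, x)
```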

## 33.6 Example: Computing Kernel and Image

Let

$$
A=
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & -1
\end{bmatrix}.
$$

Define

$$
T_A:\mathbb{R}^3\to\mathbb{R}^2
$$

by

$$
T_A(x)=Ax.
$$

To find the kernel, solve

$$
Ax=0.
$$

That is,

$$
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & -1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix} =
\begin{bmatrix}
0\\
0
\end{bmatrix}.
$$

This gives

$$
x_1+2x_2+x_3=0
$$

and

$$
x_2-x_3=0.
$$

The second equation gives

$$
x_2=x_3.
$$

Let

$$
x_3=t.
$$

Then

$$
x_2=t.
$$

Substitute into the first equation:

$$
x_1+2t+t=0,
$$

so

$$
x_1=-3t.
$$

Thus

$$
x=
\begin{bmatrix}
-3t\\
t\\
t
\end{bmatrix} =
t
\begin{bmatrix}
-3\\
1\\
1
\end{bmatrix}.
$$

Therefore

$$
\ker(A)=
\operatorname{span}
\left\{
\begin{bmatrix}
-3\\
1\\
1
\end{bmatrix}
\right\}.
$$

To find the image, examine the columns of \(A\):

$$
a_1=
\begin{bmatrix}
1\\
0
\end{bmatrix},
\qquad
a_2=
\begin{bmatrix}
2\\
1
\end{bmatrix},
\qquad
a_3=
\begin{bmatrix}
1\\
-1
\end{bmatrix}.
$$

The image is

$$
\operatorname{im}(A) =
\operatorname{span}\{a_1,a_2,a_3\}.
$$

Since

$$
a_1=
\begin{bmatrix}
1\\
0
\end{bmatrix}
$$

and

$$
a_2=
\begin{bmatrix}
2\\
1
\end{bmatrix}
$$

are linearly independent, they already span \(\mathbb{R}^2\). Hence

$$
\operatorname{im}(A)=\mathbb{R}^2.
$$

The transformation collapses one direction in \(\mathbb{R}^3\) to zero, but it still reaches every vector in \(\mathbb{R}^2\).
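Both conclusions of this example can be checked by hand or in code. The sketch below (helper `matvec` is illustrative) verifies that \((-3,1,1)\) lies in the kernel, and that any \(b\in\mathbb{R}^2\) is reached using only \(a_1\) and \(a_2\): since \(a_1=(1,0)\) and \(a_2=(2,1)\), the coefficients \(c_2=b_2\) and \(c_1=b_1-2b_2\) work.

```python
def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[1, 2, 1], [0, 1, -1]]

# (-3, 1, 1) is in the kernel:
assert matvec(A, [-3, 1, 1]) == [0, 0]

# a1 and a2 span R^2: b = c1*a1 + c2*a2 with c2 = b2, c1 = b1 - 2*b2.
for b in [(5, -3), (0, 1), (7, 7)]:
    c1, c2 = b[0] - 2 * b[1], b[1]
    assert (c1 * 1 + c2 * 2, c1 * 0 + c2 * 1) == b
```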

## 33.7 Kernel and Injectivity

The kernel determines whether a linear transformation is injective.

A function is injective when different inputs have different outputs. For a linear transformation, this condition has a simple test:

$$
T \text{ is injective}
\quad\Longleftrightarrow\quad
\ker(T)=\{0\}.
$$

Suppose \(T\) is injective. Since

$$
T(0)=0,
$$

no other vector can map to zero. Hence the kernel contains only \(0\).

Conversely, suppose

$$
\ker(T)=\{0\}.
$$

If

$$
T(u)=T(v),
$$

then

$$
T(u)-T(v)=0.
$$

By linearity,

$$
T(u-v)=0.
$$

Thus

$$
u-v\in\ker(T).
$$

Since the kernel contains only \(0\),

$$
u-v=0.
$$

Therefore

$$
u=v.
$$

So \(T\) is injective.

The kernel measures failure of injectivity. A large kernel means many different vectors are identified by the transformation.
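The "identification" in the last sentence can be seen concretely with the transformation from Section 33.1 (the code names are illustrative): two vectors that differ by a kernel element receive the same output, so \(T\) cannot tell them apart.

```python
def T(x, y, z):
    return (x + y, y + z)

u = (4, -1, 2)
k = (-5, 5, -5)    # 5 * (-1, 1, -1), a kernel vector
v = tuple(ui + ki for ui, ki in zip(u, k))

# u and v differ by a kernel element, so T identifies them:
assert T(*u) == T(*v)
```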

## 33.8 Image and Surjectivity

The image determines whether a linear transformation is surjective.

A function

$$
T:V\to W
$$

is surjective if every vector in \(W\) occurs as an output. In symbols,

$$
T \text{ is surjective}
\quad\Longleftrightarrow\quad
\operatorname{im}(T)=W.
$$

For a matrix transformation

$$
T_A:\mathbb{R}^n\to\mathbb{R}^m,
$$

this means that the columns of \(A\) span \(\mathbb{R}^m\).

Thus a transformation into \(\mathbb{R}^m\) is surjective exactly when its image has dimension \(m\).

The image measures failure of surjectivity. If the image is a proper subspace of the codomain, then some vectors in the codomain are never reached.

## 33.9 Rank and Nullity

The dimension of the image is called the rank:

$$
\operatorname{rank}(T)=\dim(\operatorname{im}(T)).
$$

The dimension of the kernel is called the nullity:

$$
\operatorname{nullity}(T)=\dim(\ker(T)).
$$

For a linear transformation

$$
T:V\to W
$$

with finite-dimensional domain \(V\), the rank-nullity theorem states

$$
\dim(V)=\operatorname{rank}(T)+\operatorname{nullity}(T).
$$

This theorem divides the dimension of the domain into two parts. One part survives in the image. The other part collapses into the kernel.

For the matrix

$$
A=
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & -1
\end{bmatrix},
$$

we found

$$
\ker(A)=
\operatorname{span}
\left\{
\begin{bmatrix}
-3\\
1\\
1
\end{bmatrix}
\right\}.
$$

So

$$
\operatorname{nullity}(A)=1.
$$

We also found

$$
\operatorname{im}(A)=\mathbb{R}^2.
$$

So

$$
\operatorname{rank}(A)=2.
$$

The domain is \(\mathbb{R}^3\), and

$$
3=2+1.
$$

This confirms the rank-nullity theorem in this example.

## 33.10 Geometric Meaning

The kernel describes directions that are lost.

If a transformation sends a whole line to zero, then all vectors on that line become indistinguishable from the zero vector after the transformation. If a transformation sends a plane to zero, then even more information is lost.

The image describes the space that remains visible.

A projection from \(\mathbb{R}^3\) onto the \(xy\)-plane has a one-dimensional kernel:

$$
\ker(P)=
\operatorname{span}
\left\{
\begin{bmatrix}
0\\
0\\
1
\end{bmatrix}
\right\}.
$$

This is the \(z\)-axis.

Its image is the \(xy\)-plane:

$$
\operatorname{im}(P)=
\left\{
\begin{bmatrix}
x\\
y\\
0
\end{bmatrix}
:x,y\in\mathbb{R}
\right\}.
$$

The \(z\)-direction is lost. The \(x\)- and \(y\)-directions remain.

## 33.11 Kernel, Image, and Solutions

Kernel and image are directly connected to solving linear systems.

Consider

$$
Ax=b.
$$

A solution exists exactly when

$$
b\in\operatorname{im}(A).
$$

This means the right-hand side must lie in the column space of \(A\).

If one solution \(x_0\) exists, then every solution has the form

$$
x=x_0+z,
$$

where

$$
z\in\ker(A).
$$

Indeed,

$$
A(x_0+z)=Ax_0+Az=b+0=b.
$$

Thus the kernel describes the freedom in the solution set.

If

$$
\ker(A)=\{0\},
$$

then the solution is unique whenever it exists.

If the kernel contains nonzero vectors, then any one solution yields infinitely many solutions, provided the scalar field is infinite.
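The structure \(x=x_0+z\) can be verified with the matrix from Section 33.6. In the sketch below (the particular solution \(x_0=(1,0,0)\) and the right-hand side \(b=(1,0)\) are chosen for illustration), every shift of \(x_0\) by a kernel vector still solves the system:

```python
def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[1, 2, 1], [0, 1, -1]]
b = [1, 0]
x0 = [1, 0, 0]      # one particular solution of Ax = b
z = [-3, 1, 1]      # spans ker(A), found in Section 33.6

assert matvec(A, x0) == b
for t in range(-3, 4):
    x = [x0[i] + t * z[i] for i in range(3)]
    assert matvec(A, x) == b    # x0 + t*z is still a solution
```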

## 33.12 Computing Kernel and Image by Row Reduction

For a matrix \(A\), the kernel is found by solving the homogeneous system

$$
Ax=0.
$$

Row reduction gives the pivot variables and free variables. The free variables parametrize the kernel. A basis for the kernel is obtained by assigning one free variable at a time to \(1\), setting the others to \(0\), and solving for the pivot variables.

The image is the span of the columns of \(A\). To find a basis for the image, row-reduce \(A\) and identify the pivot columns. The corresponding original columns of \(A\) form a basis for the column space.

The word *original* is important. Row operations change the column space in general. They preserve linear relations among columns, which is why the pivot positions can be read from the row-reduced form, but the basis vectors for the image are taken from the original matrix.
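The procedure described above can be sketched in code. The following is a minimal implementation (the function names `rref` and `kernel_basis` are my own, and exact arithmetic with `Fraction` is one possible design choice; a production routine would need pivoting strategies for floating point). It row-reduces, reads off the pivot columns, and builds one kernel basis vector per free variable, exactly as the text prescribes:

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form with exact arithmetic; returns (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in A]
    m, n = len(R), len(R[0])
    pivots = []
    r = 0
    for c in range(n):
        # find a row with a nonzero entry in column c, at or below row r
        pivot_row = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pivot_row is None:
            continue
        R[r], R[pivot_row] = R[pivot_row], R[r]
        R[r] = [x / R[r][c] for x in R[r]]              # scale the pivot to 1
        for i in range(m):
            if i != r and R[i][c] != 0:                 # clear the rest of the column
                R[i] = [R[i][j] - R[i][c] * R[r][j] for j in range(n)]
        pivots.append(c)
        r += 1
    return R, pivots

def kernel_basis(A):
    """One basis vector per free variable: set it to 1, the others to 0."""
    R, pivots = rref(A)
    n = len(A[0])
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]     # solve each pivot variable in terms of the free one
        basis.append(v)
    return basis

A = [[1, 2, 1], [0, 1, -1]]
_, pivots = rref(A)
print("pivot columns:", pivots)         # image basis: those columns of the ORIGINAL A
print("kernel basis:", kernel_basis(A))
```

On the matrix of Section 33.6 this recovers the kernel basis vector \((-3,1,1)\) and pivot columns \(1\) and \(2\) (indices 0 and 1 in the code).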

## 33.13 Example with Row Reduction

Let

$$
A=
\begin{bmatrix}
1 & 2 & 3\\
2 & 4 & 6\\
1 & 1 & 1
\end{bmatrix}.
$$

Row-reduce:

$$
\begin{bmatrix}
1 & 2 & 3\\
2 & 4 & 6\\
1 & 1 & 1
\end{bmatrix}
\to
\begin{bmatrix}
1 & 2 & 3\\
0 & 0 & 0\\
0 & -1 & -2
\end{bmatrix}
\to
\begin{bmatrix}
1 & 2 & 3\\
0 & 1 & 2\\
0 & 0 & 0
\end{bmatrix}.
$$

Continue to reduced echelon form:

$$
\begin{bmatrix}
1 & 0 & -1\\
0 & 1 & 2\\
0 & 0 & 0
\end{bmatrix}.
$$

The homogeneous system gives

$$
x_1-x_3=0
$$

and

$$
x_2+2x_3=0.
$$

Let

$$
x_3=t.
$$

Then

$$
x_1=t,
\qquad
x_2=-2t.
$$

So

$$
x=
t
\begin{bmatrix}
1\\
-2\\
1
\end{bmatrix}.
$$

Hence

$$
\ker(A)=
\operatorname{span}
\left\{
\begin{bmatrix}
1\\
-2\\
1
\end{bmatrix}
\right\}.
$$

The pivot columns are columns \(1\) and \(2\). Therefore a basis for the image is given by the first two original columns:

$$
\left\{
\begin{bmatrix}
1\\
2\\
1
\end{bmatrix},
\begin{bmatrix}
2\\
4\\
1
\end{bmatrix}
\right\}.
$$

So

$$
\operatorname{rank}(A)=2
$$

and

$$
\operatorname{nullity}(A)=1.
$$

Again,

$$
3=2+1.
$$

## 33.14 Kernel and Image for Abstract Vector Spaces

Kernel and image also apply when vectors are not coordinate columns.

Let \(P_2\) be the vector space of polynomials of degree at most \(2\), and define

$$
D:P_2\to P_1
$$

by

$$
D(p)=p',
$$

where \(p'\) is the derivative.

If

$$
p(x)=a+bx+cx^2,
$$

then

$$
D(p)=b+2cx.
$$

The kernel consists of all polynomials whose derivative is zero. These are the constant polynomials:

$$
\ker(D)=\{p(x)=a:a\in\mathbb{R}\}.
$$

The image consists of all polynomials in \(P_1\). Given any

$$
q(x)=\alpha+\beta x,
$$

we can choose

$$
p(x)=\alpha x+\frac{\beta}{2}x^2.
$$

Then

$$
D(p)=q.
$$

Thus

$$
\operatorname{im}(D)=P_1.
$$

Here

$$
\dim(P_2)=3,
$$

$$
\operatorname{nullity}(D)=1,
$$

and

$$
\operatorname{rank}(D)=2.
$$

Again,

$$
3=1+2.
$$

## 33.15 Summary

The kernel and image are the two basic subspaces associated with a linear transformation.

For

$$
T:V\to W,
$$

the kernel is

$$
\ker(T)=\{v\in V:T(v)=0_W\},
$$

and the image is

$$
\operatorname{im}(T)=\{T(v):v\in V\}.
$$

The kernel is a subspace of the domain. The image is a subspace of the codomain.

For a matrix transformation \(T_A(x)=Ax\), the kernel is the null space of \(A\), and the image is the column space of \(A\).

The kernel controls injectivity:

$$
T \text{ is injective}
\quad\Longleftrightarrow\quad
\ker(T)=\{0\}.
$$

The image controls surjectivity:

$$
T \text{ is surjective}
\quad\Longleftrightarrow\quad
\operatorname{im}(T)=W.
$$

Rank and nullity measure their dimensions:

$$
\operatorname{rank}(T)=\dim(\operatorname{im}(T)),
\qquad
\operatorname{nullity}(T)=\dim(\ker(T)).
$$

For finite-dimensional domains,

$$
\dim(V)=\operatorname{rank}(T)+\operatorname{nullity}(T).
$$

Kernel and image are therefore not auxiliary notions. They describe exactly what a linear transformation destroys and what it produces.
