# Chapter 34. Matrix Representation of Linear Maps

A linear map can be studied abstractly as a function between vector spaces. It can also be studied concretely as a matrix. The connection between these two descriptions is one of the main bridges in linear algebra.

If \(V\) and \(W\) are finite-dimensional vector spaces over the same field \(F\), and

$$
T:V\to W
$$

is linear, then \(T\) can be represented by a matrix after bases are chosen for \(V\) and \(W\). The matrix stores the coordinates of the images of the basis vectors of \(V\). In the standard coordinate spaces, a linear transformation \(T:\mathbb{R}^n\to\mathbb{R}^m\) has an \(m\times n\) matrix \(A\) such that \(T(x)=Ax\). The columns of \(A\) are the images of the standard basis vectors.

## 34.1 The Basic Idea

A linear map is determined by what it does to a basis.

Let

$$
B=(v_1,v_2,\ldots,v_n)
$$

be a basis of \(V\). Every vector \(v\in V\) can be written uniquely as

$$
v=c_1v_1+c_2v_2+\cdots+c_nv_n.
$$

If \(T:V\to W\) is linear, then

$$
T(v)=c_1T(v_1)+c_2T(v_2)+\cdots+c_nT(v_n).
$$

Thus, to know \(T(v)\) for every \(v\), it is enough to know

$$
T(v_1),T(v_2),\ldots,T(v_n).
$$

A matrix representation is a compact way to store these output vectors.

## 34.2 Standard Matrix Representation

Consider a linear transformation

$$
T:\mathbb{R}^n\to\mathbb{R}^m.
$$

Let

$$
e_1,e_2,\ldots,e_n
$$

be the standard basis of \(\mathbb{R}^n\). The standard matrix of \(T\) is the \(m\times n\) matrix

$$
A=
\begin{bmatrix}
| & | & & |\\
T(e_1) & T(e_2) & \cdots & T(e_n)\\
| & | & & |
\end{bmatrix}.
$$

That is, the first column of \(A\) is \(T(e_1)\), the second column is \(T(e_2)\), and so on.

Then, for every

$$
x=
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{bmatrix},
$$

we have

$$
x=x_1e_1+x_2e_2+\cdots+x_ne_n.
$$

By linearity,

$$
T(x)=x_1T(e_1)+x_2T(e_2)+\cdots+x_nT(e_n).
$$

But this is exactly matrix multiplication:

$$
T(x)=Ax.
$$

So a linear transformation between coordinate spaces is the same data as a matrix.

## 34.3 Example in \(\mathbb{R}^2\)

Define

$$
T:\mathbb{R}^2\to\mathbb{R}^2
$$

by

$$
T
\begin{bmatrix}
x\\
y
\end{bmatrix} =
\begin{bmatrix}
2x+y\\
3x-y
\end{bmatrix}.
$$

The standard basis vectors are

$$
e_1=
\begin{bmatrix}
1\\
0
\end{bmatrix},
\qquad
e_2=
\begin{bmatrix}
0\\
1
\end{bmatrix}.
$$

Compute their images:

$$
T(e_1) =
T
\begin{bmatrix}
1\\
0
\end{bmatrix} =
\begin{bmatrix}
2\\
3
\end{bmatrix},
$$

and

$$
T(e_2) =
T
\begin{bmatrix}
0\\
1
\end{bmatrix} =
\begin{bmatrix}
1\\
-1
\end{bmatrix}.
$$

Therefore the standard matrix of \(T\) is

$$
A=
\begin{bmatrix}
2 & 1\\
3 & -1
\end{bmatrix}.
$$

Indeed,

$$
A
\begin{bmatrix}
x\\
y
\end{bmatrix} =
\begin{bmatrix}
2 & 1\\
3 & -1
\end{bmatrix}
\begin{bmatrix}
x\\
y
\end{bmatrix} =
\begin{bmatrix}
2x+y\\
3x-y
\end{bmatrix}.
$$

This agrees with the original definition of \(T\).
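
As a quick numerical check, here is a minimal NumPy sketch (the function name `T` and the sample vector are only illustrative) that builds the standard matrix column by column and verifies \(T(x)=Ax\):

```python
import numpy as np

# The map T(x, y) = (2x + y, 3x - y) from the example above.
def T(v):
    x, y = v
    return np.array([2 * x + y, 3 * x - y])

# Standard matrix: the columns are the images of the standard basis vectors.
e1, e2 = np.array([1, 0]), np.array([0, 1])
A = np.column_stack([T(e1), T(e2)])
print(A)                              # [[ 2  1]
                                      #  [ 3 -1]]

# Check T(x) = Ax on a sample vector.
v = np.array([4, -3])
assert np.array_equal(T(v), A @ v)
```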

## 34.4 Matrix Size

The size of the representing matrix is determined by the dimensions of the domain and codomain.

If

$$
T:\mathbb{R}^n\to\mathbb{R}^m,
$$

then the matrix of \(T\) has \(m\) rows and \(n\) columns.

The number of columns equals the number of input coordinates. The number of rows equals the number of output coordinates.

Thus:

| Linear map | Matrix size |
|---|---:|
| \(T:\mathbb{R}^2\to\mathbb{R}^2\) | \(2\times 2\) |
| \(T:\mathbb{R}^3\to\mathbb{R}^2\) | \(2\times 3\) |
| \(T:\mathbb{R}^2\to\mathbb{R}^4\) | \(4\times 2\) |
| \(T:\mathbb{R}^n\to\mathbb{R}^m\) | \(m\times n\) |

This convention matches matrix-vector multiplication:

$$
(m\times n)(n\times 1)=(m\times 1).
$$

The input vector has \(n\) entries, and the output vector has \(m\) entries.

## 34.5 From Matrix to Linear Map

Every \(m\times n\) matrix defines a linear map.

Let

$$
A\in F^{m\times n}.
$$

Define

$$
T_A:F^n\to F^m
$$

by

$$
T_A(x)=Ax.
$$

Then \(T_A\) is linear. For vectors \(u,v\in F^n\) and scalar \(c\in F\),

$$
T_A(u+v)=A(u+v)=Au+Av=T_A(u)+T_A(v),
$$

and

$$
T_A(cu)=A(cu)=cAu=cT_A(u).
$$

Thus matrices and linear maps between coordinate spaces are equivalent descriptions.

The matrix is the data. The linear map is the action.

## 34.6 From Linear Map to Matrix

Every linear map between finite-dimensional coordinate spaces gives a matrix.

Let

$$
T:F^n\to F^m
$$

be linear. Let \(e_1,\ldots,e_n\) be the standard basis of \(F^n\). Define

$$
A=
\begin{bmatrix}
| & | & & |\\
T(e_1) & T(e_2) & \cdots & T(e_n)\\
| & | & & |
\end{bmatrix}.
$$

Then

$$
T(x)=Ax
$$

for every \(x\in F^n\).

This construction is unique. If another matrix \(B\) satisfies

$$
T(x)=Bx
$$

for every \(x\), then in particular

$$
T(e_j)=Be_j
$$

for each \(j\). But \(Be_j\) is the \(j\)-th column of \(B\). Hence every column of \(B\) equals the corresponding column of \(A\). Therefore

$$
B=A.
$$

So a linear transformation between coordinate spaces has one and only one standard matrix.

## 34.7 Coordinates Relative to a Basis

The previous sections used the standard basis. For abstract vector spaces, or for nonstandard coordinate systems, we must use coordinates relative to chosen bases.

Let

$$
B=(v_1,v_2,\ldots,v_n)
$$

be an ordered basis of \(V\). If

$$
v=c_1v_1+c_2v_2+\cdots+c_nv_n,
$$

then the coordinate vector of \(v\) relative to \(B\) is

$$
[v]_B=
\begin{bmatrix}
c_1\\
c_2\\
\vdots\\
c_n
\end{bmatrix}.
$$

The order of the basis matters. Changing the order changes the coordinate vector.

For example, in \(\mathbb{R}^2\), let

$$
B=\left(
\begin{bmatrix}
1\\
1
\end{bmatrix},
\begin{bmatrix}
1\\
-1
\end{bmatrix}
\right).
$$

If

$$
v=
\begin{bmatrix}
3\\
1
\end{bmatrix},
$$

then we solve

$$
\begin{bmatrix}
3\\
1
\end{bmatrix} =
c_1
\begin{bmatrix}
1\\
1
\end{bmatrix}
+
c_2
\begin{bmatrix}
1\\
-1
\end{bmatrix}.
$$

This gives

$$
c_1+c_2=3,
\qquad
c_1-c_2=1.
$$

Solving,

$$
c_1=2,
\qquad
c_2=1.
$$

Therefore

$$
[v]_B=
\begin{bmatrix}
2\\
1
\end{bmatrix}.
$$
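
In coordinate space this computation is just a linear solve. A minimal NumPy sketch of the example above (the variable names are illustrative):

```python
import numpy as np

# Basis B = ((1, 1), (1, -1)), stored as the columns of a matrix.
P_B = np.array([[1, 1],
                [1, -1]])
v = np.array([3, 1])

# [v]_B solves P_B c = v, i.e. c1*(1, 1) + c2*(1, -1) = (3, 1).
coords = np.linalg.solve(P_B, v)
print(coords)   # [2. 1.]
```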

## 34.8 Matrix of a Linear Map Relative to Bases

Let

$$
T:V\to W
$$

be linear. Let

$$
B=(v_1,\ldots,v_n)
$$

be an ordered basis of \(V\), and let

$$
C=(w_1,\ldots,w_m)
$$

be an ordered basis of \(W\).

The matrix of \(T\) relative to \(B\) and \(C\) is the \(m\times n\) matrix whose \(j\)-th column is the coordinate vector of \(T(v_j)\) in the basis \(C\):

$$
[T]_{C\leftarrow B} =
\begin{bmatrix}
| & | & & |\\
[T(v_1)]_C & [T(v_2)]_C & \cdots & [T(v_n)]_C\\
| & | & & |
\end{bmatrix}.
$$

The notation \(C\leftarrow B\) means that inputs are written in basis \(B\), and outputs are written in basis \(C\).

The defining relation is

$$
[T(v)]_C=[T]_{C\leftarrow B}[v]_B.
$$

This equation is the coordinate form of the linear map.

## 34.9 Example with Nonstandard Bases

Let

$$
T:\mathbb{R}^2\to\mathbb{R}^2
$$

be defined by

$$
T
\begin{bmatrix}
x\\
y
\end{bmatrix} =
\begin{bmatrix}
x+y\\
x-y
\end{bmatrix}.
$$

Let the domain basis be

$$
B=
\left(
\begin{bmatrix}
1\\
1
\end{bmatrix},
\begin{bmatrix}
1\\
-1
\end{bmatrix}
\right),
$$

and let the codomain basis be the standard basis

$$
E=(e_1,e_2).
$$

Compute the images of the basis vectors:

$$
T
\begin{bmatrix}
1\\
1
\end{bmatrix} =
\begin{bmatrix}
2\\
0
\end{bmatrix},
$$

and

$$
T
\begin{bmatrix}
1\\
-1
\end{bmatrix} =
\begin{bmatrix}
0\\
2
\end{bmatrix}.
$$

Since \(E\) is the standard basis, these are already their coordinate vectors in \(E\). Therefore

$$
[T]_{E\leftarrow B} =
\begin{bmatrix}
2 & 0\\
0 & 2
\end{bmatrix}.
$$

Relative to these bases, the coordinate description of \(T\) is multiplication by \(2\): the \(E\)-coordinates of the output are twice the \(B\)-coordinates of the input. (Since the input and output bases differ here, this does not mean that \(T\) itself is a scaling.)
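
A short NumPy sketch of this computation (names are illustrative): since the codomain basis is standard, the images of the \(B\)-basis vectors are already their \(E\)-coordinates.

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([x + y, x - y])

b1, b2 = np.array([1, 1]), np.array([1, -1])

# Codomain basis is standard, so T(b1) and T(b2) are already E-coordinates.
M = np.column_stack([T(b1), T(b2)])
print(M)   # [[2 0]
           #  [0 2]]
```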

This illustrates an important point: the same linear map can have different matrices under different choices of bases.

## 34.10 The Same Map, Different Matrices

The matrix is not the transformation itself. It is the coordinate description of the transformation.

A linear map \(T:V\to W\) exists independently of coordinates. To write it as a matrix, we choose a basis for \(V\) and a basis for \(W\). Different choices may produce different matrices.

For example, a rotation or reflection in the plane has a familiar standard matrix. But if we describe the plane using a different basis, the entries of the matrix change.

The transformation remains the same. Only its coordinate representation changes.

This distinction prevents a common error: two different matrices may represent the same linear map in different bases, and the same matrix may represent different maps when different bases are being used.

## 34.11 Change-of-Coordinates Matrices

Let

$$
B=(v_1,\ldots,v_n)
$$

be a basis of \(F^n\). The change-of-coordinates matrix from \(B\)-coordinates to standard coordinates is

$$
P_B=
\begin{bmatrix}
| & | & & |\\
v_1 & v_2 & \cdots & v_n\\
| & | & & |
\end{bmatrix}.
$$

If \(x\) is a vector in \(F^n\), then

$$
x=P_B[x]_B.
$$

Thus \(P_B\) converts coordinates in basis \(B\) into standard coordinates.

Since \(B\) is a basis, \(P_B\) is invertible. Therefore

$$
[x]_B=P_B^{-1}x.
$$

The inverse matrix converts standard coordinates into \(B\)-coordinates.
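
A small NumPy sketch of both conversions (the basis and the sample vector are illustrative); solving a linear system implements multiplication by \(P_B^{-1}\) without forming the inverse explicitly:

```python
import numpy as np

# Columns of P_B are the basis vectors of B.
P_B = np.array([[1, 1],
                [1, -1]])

x = np.array([5, -1])            # a vector in standard coordinates
x_B = np.linalg.solve(P_B, x)    # [x]_B = P_B^{-1} x
print(x_B)                       # [2. 3.]
print(P_B @ x_B)                 # back to standard coordinates: [ 5. -1.]
```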

## 34.12 Matrix Relative to Domain and Codomain Bases

Suppose \(T:F^n\to F^m\) has standard matrix \(A\). Let \(B\) be a basis of \(F^n\), and let \(C\) be a basis of \(F^m\).

Let \(P_B\) be the matrix with columns equal to the basis vectors in \(B\), and let \(P_C\) be the matrix with columns equal to the basis vectors in \(C\).

We want a matrix \(M\) such that

$$
[T(x)]_C=M[x]_B.
$$

Since

$$
x=P_B[x]_B,
$$

we have

$$
T(x)=A x=A P_B[x]_B.
$$

Now convert the output to \(C\)-coordinates:

$$
[T(x)]_C=P_C^{-1}T(x)=P_C^{-1}A P_B[x]_B.
$$

Therefore

$$
M=P_C^{-1}AP_B.
$$

So

$$
[T]_{C\leftarrow B}=P_C^{-1}AP_B.
$$

This formula is the main computational rule for changing matrix representations.
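
A NumPy sketch of the formula (the matrices \(A\), \(P_B\), and \(P_C\) here are illustrative), together with a check of the defining relation \([T(x)]_C=M[x]_B\):

```python
import numpy as np

A = np.array([[1.0, 1.0],     # standard matrix of T(x, y) = (x + y, x - y)
              [1.0, -1.0]])
P_B = np.array([[1.0, 1.0],   # domain basis B as columns
                [1.0, -1.0]])
P_C = np.array([[2.0, 0.0],   # an arbitrary codomain basis C as columns
                [1.0, 1.0]])

M = np.linalg.solve(P_C, A @ P_B)    # M = P_C^{-1} A P_B

# Check [T(x)]_C = M [x]_B on a sample vector.
x = np.array([3.0, 1.0])
x_B = np.linalg.solve(P_B, x)        # [x]_B
Tx_C = np.linalg.solve(P_C, A @ x)   # [T(x)]_C
assert np.allclose(Tx_C, M @ x_B)
print(M)
```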

## 34.13 Operators and Similarity

A linear operator is a linear map from a vector space to itself:

$$
T:V\to V.
$$

When the same basis \(B\) is used for both the domain and codomain, the matrix is written

$$
[T]_B.
$$

If \(A\) is the standard matrix of a linear operator on \(F^n\), and \(B\) is a basis with change-of-coordinates matrix \(P_B\), then

$$
[T]_B=P_B^{-1}AP_B.
$$

Matrices related by

$$
B=P^{-1}AP
$$

are called similar matrices.

Similar matrices represent the same linear operator in different bases. They have many shared properties, including determinant, trace, rank, characteristic polynomial, and eigenvalues.

This is why choosing a good basis matters. A complicated matrix may become simpler after a change of basis.
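
A quick NumPy check (with a matrix \(A\) and an invertible \(P\) chosen only for illustration) that similar matrices share trace, determinant, and eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])          # any invertible matrix

B = np.linalg.solve(P, A @ P)        # B = P^{-1} A P

print(np.trace(A), np.trace(B))                       # 5.0 5.0
print(np.linalg.det(A), np.linalg.det(B))             # 6.0 6.0 (up to rounding)
print(np.sort(np.linalg.eigvals(A)),
      np.sort(np.linalg.eigvals(B)))                  # same eigenvalues: 2 and 3
```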

## 34.14 Diagonal Representation

A particularly useful case occurs when a linear operator has a basis of eigenvectors.

Let

$$
T:V\to V
$$

have basis

$$
B=(v_1,\ldots,v_n)
$$

such that

$$
T(v_j)=\lambda_j v_j
$$

for each \(j\). Then the matrix of \(T\) in basis \(B\) is diagonal:

$$
[T]_B=
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}.
$$

The reason is simple. The \(j\)-th column of \([T]_B\) is the coordinate vector of \(T(v_j)\) in basis \(B\). Since

$$
T(v_j)=\lambda_jv_j,
$$

that coordinate vector has \(\lambda_j\) in position \(j\) and zeros elsewhere.

Diagonalization is the process of finding such a basis when it exists.
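
A minimal NumPy sketch (the matrix is an illustrative, diagonalizable example): `np.linalg.eig` returns eigenvectors as columns, and conjugating by that matrix produces the diagonal representation.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
D = np.linalg.solve(P, A @ P)       # [T]_B = P^{-1} A P in the eigenvector basis

print(np.round(D, 10))              # diagonal, with the eigenvalues on the diagonal
print(eigvals)                      # the eigenvalues, in some order
```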

## 34.15 Composition of Linear Maps

Matrix representation converts composition of linear maps into multiplication of matrices.

Let

$$
T:U\to V
$$

and

$$
S:V\to W
$$

be linear maps. Choose bases \(A\) for \(U\), \(B\) for \(V\), and \(C\) for \(W\).

Then

$$
[T]_{B\leftarrow A}
$$

represents \(T\), and

$$
[S]_{C\leftarrow B}
$$

represents \(S\).

The composition

$$
S\circ T:U\to W
$$

has matrix

$$
[S\circ T]_{C\leftarrow A} =
[S]_{C\leftarrow B}[T]_{B\leftarrow A}.
$$

The order of multiplication follows the order of application: \(T\) acts first, then \(S\). With column vectors, the map applied first appears on the right. In this sense, matrix multiplication is composition of linear transformations.
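
A small NumPy sketch (the matrices and the sample vector are illustrative) confirming that multiplying the representing matrices matches applying \(T\) first and then \(S\):

```python
import numpy as np

# [T]_{B<-A}: 3x2, so T maps a 2-dimensional U into a 3-dimensional V.
T_mat = np.array([[1, 0],
                  [2, 1],
                  [0, 3]])
# [S]_{C<-B}: 2x3, so S maps V into a 2-dimensional W.
S_mat = np.array([[1, 1, 0],
                  [0, 1, 1]])

ST = S_mat @ T_mat                 # [S o T]_{C<-A}, a 2x2 matrix

u = np.array([1, 2])
assert np.array_equal(ST @ u, S_mat @ (T_mat @ u))   # T first, then S
print(ST)
```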

## 34.16 Inverses

If \(T:V\to W\) is an isomorphism, then \(T^{-1}:W\to V\) exists and is linear.

Choose bases \(B\) for \(V\) and \(C\) for \(W\). If

$$
M=[T]_{C\leftarrow B},
$$

then the matrix of the inverse map is

$$
[T^{-1}]_{B\leftarrow C}=M^{-1}.
$$

Indeed,

$$
[T^{-1}(T(v))]_B=[v]_B.
$$

In coordinates,

$$
[T^{-1}]_{B\leftarrow C}[T]_{C\leftarrow B}[v]_B=[v]_B.
$$

Therefore

$$
[T^{-1}]_{B\leftarrow C}M=I.
$$

So

$$
[T^{-1}]_{B\leftarrow C}=M^{-1}.
$$

A linear map is invertible exactly when its representing matrix is invertible.
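
A minimal NumPy check (the matrix and the vector are illustrative) that inverting the representing matrix undoes the map in coordinates:

```python
import numpy as np

M = np.array([[2.0, 1.0],      # [T]_{C<-B} for some invertible T
              [1.0, 1.0]])
M_inv = np.linalg.inv(M)       # [T^{-1}]_{B<-C}

w = np.array([3.0, 5.0])       # C-coordinates of a vector in W
v = M_inv @ w                  # B-coordinates of T^{-1}(w)
assert np.allclose(M @ v, w)   # applying T again recovers w
print(np.round(M_inv @ M, 10)) # the identity matrix
```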

## 34.17 Kernel and Image in Matrix Form

Let

$$
M=[T]_{C\leftarrow B}.
$$

Then the kernel of \(T\) corresponds to the null space of \(M\) in coordinates:

$$
v\in\ker(T)
\quad\Longleftrightarrow\quad
M[v]_B=0.
$$

Thus

$$
[\ker(T)]_B=\ker(M).
$$

Similarly, the image of \(T\) corresponds to the column space of \(M\) in codomain coordinates:

$$
[\operatorname{im}(T)]_C=\operatorname{col}(M).
$$

This means that row reduction and matrix methods can be used to compute bases for kernels and images of abstract linear maps, once bases have been chosen.
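
For exact answers on small examples, a computer algebra sketch works well. Assuming SymPy is available, here the matrix of the differentiation map \(D\) from Section 34.18 below serves as an illustration; its null space corresponds to the constant polynomials and its column space to all of \(P_1\):

```python
from sympy import Matrix

# [D]_{C<-B} for differentiation, with B = (1, x, x^2) and C = (1, x).
M = Matrix([[0, 1, 0],
            [0, 0, 2]])

print(M.nullspace())     # [Matrix([[1], [0], [0]])] : the kernel is the constants
print(M.columnspace())   # [Matrix([[1], [0]]), Matrix([[0], [2]])] : image is all of P_1
```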

## 34.18 Example: A Polynomial Map

Let \(P_2\) be the vector space of polynomials of degree at most \(2\), and let \(P_1\) be the vector space of polynomials of degree at most \(1\).

Define

$$
D:P_2\to P_1
$$

by differentiation:

$$
D(p)=p'.
$$

Choose the ordered basis

$$
B=(1,x,x^2)
$$

for \(P_2\), and

$$
C=(1,x)
$$

for \(P_1\).

Compute the images of the basis vectors:

$$
D(1)=0,
$$

$$
D(x)=1,
$$

$$
D(x^2)=2x.
$$

Now write each image in the basis \(C\):

$$
[D(1)]_C=
\begin{bmatrix}
0\\
0
\end{bmatrix},
\qquad
[D(x)]_C=
\begin{bmatrix}
1\\
0
\end{bmatrix},
\qquad
[D(x^2)]_C=
\begin{bmatrix}
0\\
2
\end{bmatrix}.
$$

Therefore

$$
[D]_{C\leftarrow B} =
\begin{bmatrix}
0 & 1 & 0\\
0 & 0 & 2
\end{bmatrix}.
$$

If

$$
p=a+bx+cx^2,
$$

then

$$
[p]_B=
\begin{bmatrix}
a\\
b\\
c
\end{bmatrix}.
$$

Multiplying,

$$
[D]_{C\leftarrow B}[p]_B =
\begin{bmatrix}
0 & 1 & 0\\
0 & 0 & 2
\end{bmatrix}
\begin{bmatrix}
a\\
b\\
c
\end{bmatrix} =
\begin{bmatrix}
b\\
2c
\end{bmatrix}.
$$

This is the coordinate vector of

$$
p'=b+2cx
$$

in the basis \(C=(1,x)\).
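
A NumPy sketch of this computation with a sample polynomial (the coefficients are illustrative):

```python
import numpy as np

# [D]_{C<-B} for differentiation, bases B = (1, x, x^2) and C = (1, x).
D_mat = np.array([[0, 1, 0],
                  [0, 0, 2]])

# p(x) = 3 + 5x - 2x^2  has  [p]_B = (3, 5, -2).
p_B = np.array([3, 5, -2])
print(D_mat @ p_B)   # [ 5 -4]  i.e.  p'(x) = 5 - 4x
```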

## 34.19 Example: Integration as a Linear Map

Let \(P_1\) be the vector space of polynomials of degree at most \(1\), and let \(P_2\) be the vector space of polynomials of degree at most \(2\).

Define

$$
I:P_1\to P_2
$$

by

$$
I(p)(x)=\int_0^x p(t)\,dt.
$$

Use bases

$$
B=(1,x)
$$

for \(P_1\), and

$$
C=(1,x,x^2)
$$

for \(P_2\).

Compute:

$$
I(1)=x,
$$

and

$$
I(x)=\frac{x^2}{2}.
$$

Therefore

$$
[I(1)]_C=
\begin{bmatrix}
0\\
1\\
0
\end{bmatrix},
\qquad
[I(x)]_C=
\begin{bmatrix}
0\\
0\\
\frac12
\end{bmatrix}.
$$

So

$$
[I]_{C\leftarrow B} =
\begin{bmatrix}
0 & 0\\
1 & 0\\
0 & \frac12
\end{bmatrix}.
$$

If

$$
p=a+bx,
$$

then

$$
[p]_B=
\begin{bmatrix}
a\\
b
\end{bmatrix}.
$$

Multiplying gives

$$
[I]_{C\leftarrow B}[p]_B =
\begin{bmatrix}
0\\
a\\
b/2
\end{bmatrix},
$$

which represents

$$
ax+\frac{b}{2}x^2.
$$
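
A short NumPy check ties the last two examples together: differentiating the antiderivative returns the original polynomial, so the product of the two representing matrices should be the \(2\times 2\) identity.

```python
import numpy as np

I_mat = np.array([[0.0, 0.0],        # integration matrix [I]_{C<-B} from above
                  [1.0, 0.0],
                  [0.0, 0.5]])
D_mat = np.array([[0.0, 1.0, 0.0],   # differentiation matrix from Section 34.18
                  [0.0, 0.0, 2.0]])

print(D_mat @ I_mat)   # [[1. 0.]
                       #  [0. 1.]]
```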

## 34.20 Reading a Matrix as a Linear Map

A matrix can be read column by column.

If

$$
A=
\begin{bmatrix}
1 & 4 & -2\\
0 & 3 & 5
\end{bmatrix},
$$

then \(A\) represents a map

$$
T_A:F^3\to F^2.
$$

The columns are

$$
a_1=
\begin{bmatrix}
1\\
0
\end{bmatrix},
\qquad
a_2=
\begin{bmatrix}
4\\
3
\end{bmatrix},
\qquad
a_3=
\begin{bmatrix}
-2\\
5
\end{bmatrix}.
$$

For

$$
x=
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix},
$$

we have

$$
Ax=x_1a_1+x_2a_2+x_3a_3.
$$

Thus a matrix does not merely multiply numbers. It forms a linear combination of its columns using the input coordinates as coefficients.

This column view explains why the image of \(A\) is the span of its columns.
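
A tiny NumPy check of the column view (the input vector is illustrative):

```python
import numpy as np

A = np.array([[1, 4, -2],
              [0, 3, 5]])
x = np.array([2, -1, 3])

# Ax is the linear combination of the columns of A with coefficients from x.
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
assert np.array_equal(A @ x, combo)
print(A @ x)   # [-8 12]
```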

## 34.21 Row View

The row view is also useful.

If \(A\) is an \(m\times n\) matrix, then each row defines a linear functional on \(F^n\). The output vector \(Ax\) contains the dot products of the rows of \(A\) with \(x\).

For

$$
A=
\begin{bmatrix}
r_1\\
r_2\\
\vdots\\
r_m
\end{bmatrix},
$$

where each \(r_i\) is a row vector,

$$
Ax=
\begin{bmatrix}
r_1x\\
r_2x\\
\vdots\\
r_mx
\end{bmatrix}.
$$

The row view is useful for systems of equations. The equation

$$
Ax=b
$$

means that each row imposes one linear equation on \(x\).

The column view emphasizes the image. The row view emphasizes constraints.

## 34.22 Standard Geometric Matrices

Many geometric transformations in the plane are represented by simple matrices.

A scaling transformation is

$$
\begin{bmatrix}
a & 0\\
0 & b
\end{bmatrix}.
$$

A reflection across the \(x\)-axis is

$$
\begin{bmatrix}
1 & 0\\
0 & -1
\end{bmatrix}.
$$

A projection onto the \(x\)-axis is

$$
\begin{bmatrix}
1 & 0\\
0 & 0
\end{bmatrix}.
$$

A shear parallel to the \(x\)-axis is

$$
\begin{bmatrix}
1 & k\\
0 & 1
\end{bmatrix}.
$$

A rotation through angle \(\theta\) is

$$
\begin{bmatrix}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{bmatrix}.
$$

These matrices act on column vectors in \(\mathbb{R}^2\). They are linear because they preserve linear combinations; in particular, each of them fixes the origin.
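
A brief NumPy sketch applying two of these matrices (the angle, shear factor, and test vectors are illustrative):

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rotation by 90 degrees

shear = np.array([[1.0, 2.0],
                  [0.0, 1.0]])                      # shear parallel to the x-axis, k = 2

print(np.round(R @ np.array([1.0, 0.0]), 10))       # [0. 1.]  e1 rotates onto e2
print(shear @ np.array([0.0, 1.0]))                 # [2. 1.]  points slide horizontally
```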

## 34.23 Common Mistakes

The first common mistake is reversing the matrix size. A map

$$
T:F^n\to F^m
$$

has an \(m\times n\) matrix, not an \(n\times m\) matrix.

The second common mistake is putting images of basis vectors in rows instead of columns. With column vectors, the image of the \(j\)-th basis vector goes in the \(j\)-th column.

The third common mistake is forgetting the codomain basis. To form \([T]_{C\leftarrow B}\), compute \(T(v_j)\), then express it in the basis \(C\).

The fourth common mistake is treating a matrix as basis-independent. A matrix represents a linear map only after the relevant bases are known.

The fifth common mistake is using the columns of a row-reduced matrix as a basis for the image of the original matrix. Row reduction identifies the pivot columns, but the basis vectors for the image must be the corresponding columns of the original matrix.

## 34.24 Summary

A matrix representation gives coordinates for a linear map.

For a linear map

$$
T:V\to W,
$$

with ordered basis \(B=(v_1,\ldots,v_n)\) of \(V\) and ordered basis \(C=(w_1,\ldots,w_m)\) of \(W\), the matrix of \(T\) is

$$
[T]_{C\leftarrow B} =
\begin{bmatrix}
| & | & & |\\
[T(v_1)]_C & [T(v_2)]_C & \cdots & [T(v_n)]_C\\
| & | & & |
\end{bmatrix}.
$$

It satisfies

$$
[T(v)]_C=[T]_{C\leftarrow B}[v]_B.
$$

In standard coordinate spaces, the standard matrix \(A\) of \(T:\mathbb{R}^n\to\mathbb{R}^m\) has columns \(T(e_1),\ldots,T(e_n)\), and

$$
T(x)=Ax.
$$

Composition of linear maps becomes matrix multiplication. Inverses of linear maps become inverse matrices. A change of basis changes the matrix representation by multiplication with change-of-coordinates matrices.

The linear map is the coordinate-free object. The matrix is its coordinate description.
