# Chapter 7. Matrix Operations

Matrix operations are the algebraic rules used to combine and transform matrices. The basic operations are equality, addition, subtraction, scalar multiplication, matrix-vector multiplication, matrix multiplication, transpose, and powers. Each operation has size conditions. A matrix expression is meaningful only when these size conditions are satisfied. Standard matrix algebra defines addition entry by entry, scalar multiplication entry by entry, matrix multiplication by row-column dot products, and transpose by interchanging rows and columns.

## 7.1 Equality

Two matrices are equal when they have the same size and the same entries in corresponding positions.

If \(A=(a_{ij})\) and \(B=(b_{ij})\), then \(A=B\) means \(a_{ij}=b_{ij}\) for every valid pair \((i,j)\).

For example,

$$
\begin{bmatrix}
1&2\\
3&4
\end{bmatrix} =
\begin{bmatrix}
1&2\\
3&4
\end{bmatrix}.
$$

But

$$
\begin{bmatrix}
1&2\\
3&4
\end{bmatrix}
\ne
\begin{bmatrix}
1&3\\
2&4
\end{bmatrix}.
$$

The entries are the same numbers, but they occupy different positions.
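Equality of matrices can be checked by position. As a quick illustration (a NumPy sketch; the chapter itself assumes no software):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 3], [2, 4]])  # same numbers, different positions

# Equality requires the same shape and the same entry in every position.
assert np.array_equal(A, A)
assert not np.array_equal(A, B)
```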

## 7.2 Addition

Matrices of the same size can be added. The sum is formed by adding corresponding entries.

If \(A=(a_{ij})\) and \(B=(b_{ij})\) are both \(m\times n\) matrices, then

$$
A+B=(a_{ij}+b_{ij}).
$$

For example,

$$
\begin{bmatrix}
1&2&3\\
4&5&6
\end{bmatrix}
+
\begin{bmatrix}
7&8&9\\
10&11&12
\end{bmatrix} =
\begin{bmatrix}
8&10&12\\
14&16&18
\end{bmatrix}.
$$

Matrix addition is defined only when the two matrices have the same number of rows and the same number of columns.
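The entrywise sum above can be verified directly (a NumPy sketch for illustration only):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[7, 8, 9], [10, 11, 12]])

# Entrywise sum; both matrices must have the same shape.
S = A + B
assert np.array_equal(S, np.array([[8, 10, 12], [14, 16, 18]]))
```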

## 7.3 Subtraction

Matrix subtraction is defined by adding the additive inverse:

$$
A-B=A+(-1)B.
$$

If \(A\) and \(B\) have the same size, then

$$
A-B=(a_{ij}-b_{ij}).
$$

For example,

$$
\begin{bmatrix}
9&4\\
7&2
\end{bmatrix} -
\begin{bmatrix}
5&1\\
3&8
\end{bmatrix} =
\begin{bmatrix}
4&3\\
4&-6
\end{bmatrix}.
$$

Like addition, subtraction requires equal sizes.

## 7.4 Scalar Multiplication

A scalar multiplies a matrix by multiplying each entry.

If \(c\) is a scalar and \(A=(a_{ij})\), then

$$
cA=(ca_{ij}).
$$

For example,

$$
-2
\begin{bmatrix}
1&3\\
-4&5
\end{bmatrix} =
\begin{bmatrix}
-2&-6\\
8&-10
\end{bmatrix}.
$$

Scalar multiplication preserves the size of the matrix. If \(A\) is \(m\times n\), then \(cA\) is also \(m\times n\).
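The example above, and the fact that the size is preserved, can be checked numerically (NumPy sketch, not part of the text):

```python
import numpy as np

A = np.array([[1, 3], [-4, 5]])

# Each entry is multiplied by the scalar; the shape is unchanged.
C = -2 * A
assert np.array_equal(C, np.array([[-2, -6], [8, -10]]))
assert C.shape == A.shape
```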

## 7.5 Algebraic Laws for Addition and Scaling

For matrices \(A,B,C\) of the same size and scalars \(r,s\), the following laws hold:

| Law | Formula |
|---|---|
| Commutativity of addition | \(A+B=B+A\) |
| Associativity of addition | \((A+B)+C=A+(B+C)\) |
| Additive identity | \(A+0=A\) |
| Additive inverse | \(A+(-A)=0\) |
| Scalar distributivity over matrix addition | \(r(A+B)=rA+rB\) |
| Scalar distributivity over scalar addition | \((r+s)A=rA+sA\) |
| Compatibility of scalar multiplication | \(r(sA)=(rs)A\) |

These laws follow from the corresponding laws for scalar arithmetic because addition and scalar multiplication are defined entry by entry.

## 7.6 Matrix-Vector Multiplication

Let \(A\) be an \(m\times n\) matrix and let \(x\in F^n\). Then \(Ax\) is a vector in \(F^m\).

If

$$
A=
\begin{bmatrix}
a_{11}&a_{12}&\cdots&a_{1n}\\
a_{21}&a_{22}&\cdots&a_{2n}\\
\vdots&\vdots&\ddots&\vdots\\
a_{m1}&a_{m2}&\cdots&a_{mn}
\end{bmatrix},
\qquad
x=
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{bmatrix},
$$

then

$$
Ax=
\begin{bmatrix}
a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\\
a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n\\
\vdots\\
a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n
\end{bmatrix}.
$$

Each entry of \(Ax\) is the dot product of one row of \(A\) with \(x\).

For example,

$$
\begin{bmatrix}
2&-1&3\\
0&4&5
\end{bmatrix}
\begin{bmatrix}
1\\
2\\
-1
\end{bmatrix} =
\begin{bmatrix}
2(1)-1(2)+3(-1)\\
0(1)+4(2)+5(-1)
\end{bmatrix} =
\begin{bmatrix}
-3\\
3
\end{bmatrix}.
$$
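The row-by-row computation above corresponds to `@` in NumPy (an illustrative sketch; the text assumes no software):

```python
import numpy as np

A = np.array([[2, -1, 3], [0, 4, 5]])
x = np.array([1, 2, -1])

# Each entry of Ax is the dot product of one row of A with x.
y = A @ x
assert np.array_equal(y, np.array([-3, 3]))
```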

## 7.7 Column Combination Form

Matrix-vector multiplication can also be read by columns.

If

$$
A=
\begin{bmatrix}
|&|&&|\\
a_1&a_2&\cdots&a_n\\
|&|&&|
\end{bmatrix},
$$

then

$$
Ax=x_1a_1+x_2a_2+\cdots+x_na_n.
$$

Thus \(Ax\) is a linear combination of the columns of \(A\).

This view is often more important than the row view. It shows that the equation

$$
Ax=b
$$

asks whether \(b\) lies in the span of the columns of \(A\).
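The column view can be checked against the row view numerically (NumPy sketch for illustration):

```python
import numpy as np

A = np.array([[2, -1, 3], [0, 4, 5]])
x = np.array([1, 2, -1])

# Ax as a linear combination of the columns of A:
# x_1 a_1 + x_2 a_2 + ... + x_n a_n.
combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))
assert np.array_equal(A @ x, combo)
```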

## 7.8 Matrix Multiplication

Let \(A\) be an \(m\times n\) matrix and let \(B\) be an \(n\times p\) matrix. Then the product \(AB\) is an \(m\times p\) matrix.

The entry in row \(i\) and column \(j\) of \(AB\) is

$$
(AB)_{ij} =
a_{i1}b_{1j}+a_{i2}b_{2j}+\cdots+a_{in}b_{nj}.
$$

Equivalently,

$$
(AB)_{ij}=\sum_{k=1}^{n} a_{ik}b_{kj}.
$$

This is the dot product of row \(i\) of \(A\) with column \(j\) of \(B\). Matrix multiplication is defined only when the number of columns of the left factor equals the number of rows of the right factor.

For example,

$$
A=
\begin{bmatrix}
1&2&3\\
4&5&6
\end{bmatrix},
\qquad
B=
\begin{bmatrix}
7&8\\
9&10\\
11&12
\end{bmatrix}.
$$

Then

$$
AB=
\begin{bmatrix}
1(7)+2(9)+3(11)&1(8)+2(10)+3(12)\\
4(7)+5(9)+6(11)&4(8)+5(10)+6(12)
\end{bmatrix} =
\begin{bmatrix}
58&64\\
139&154
\end{bmatrix}.
$$
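The same product, with each entry a row-column dot product, can be reproduced as follows (NumPy sketch; not part of the text):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])          # 2x3
B = np.array([[7, 8], [9, 10], [11, 12]])     # 3x2

C = A @ B                                      # (2x3)(3x2) -> 2x2
assert np.array_equal(C, np.array([[58, 64], [139, 154]]))

# Entry (i, j) is the dot product of row i of A with column j of B.
assert C[0, 0] == A[0, :] @ B[:, 0]
```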

## 7.9 Size Rule for Matrix Products

The product \(AB\) is defined when \(A\) is \(m\times n\) and \(B\) is \(n\times p\). The inner dimensions must match:

$$
(m\times n)(n\times p)=m\times p.
$$

The outer dimensions give the size of the result.

For example, \((2\times 3)(3\times 4)\) produces a \(2\times 4\) matrix. But \((2\times 3)(2\times 4)\) is undefined, because the inner dimensions \(3\) and \(2\) do not match.
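Both cases of the size rule can be observed in software, where a mismatch raises an error (NumPy sketch for illustration):

```python
import numpy as np

A = np.ones((2, 3))
B = np.ones((3, 4))
assert (A @ B).shape == (2, 4)   # (2x3)(3x4) -> 2x4

C = np.ones((2, 4))
defined = True
try:
    A @ C                        # (2x3)(2x4): inner dimensions 3 and 2 differ
except ValueError:
    defined = False
assert not defined
```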

## 7.10 Columns of a Product

The columns of \(AB\) can be computed by multiplying \(A\) by each column of \(B\).

If

$$
B=
\begin{bmatrix}
|&|&&|\\
b_1&b_2&\cdots&b_p\\
|&|&&|
\end{bmatrix},
$$

then

$$
AB=
\begin{bmatrix}
|&|&&|\\
Ab_1&Ab_2&\cdots&Ab_p\\
|&|&&|
\end{bmatrix}.
$$

This interpretation is useful because it reduces matrix multiplication to repeated matrix-vector multiplication.

It also shows that each column of \(AB\) is a linear combination of the columns of \(A\).

## 7.11 Rows of a Product

The rows of \(AB\) can also be read from the rows of \(A\).

If the rows of \(A\) are

$$
r_1,r_2,\ldots,r_m,
$$

then the rows of \(AB\) are

$$
r_1B,r_2B,\ldots,r_mB.
$$

Thus left multiplication by \(A\) combines rows, while right multiplication by \(B\) combines columns. This distinction matters in computation and in proofs.

## 7.12 Matrix Multiplication as Composition

Matrix multiplication represents composition of linear transformations.

Suppose \(B:F^p\to F^n\) and \(A:F^n\to F^m\). Then \(AB:F^p\to F^m\) represents the transformation that first applies \(B\), then applies \(A\):

$$
x \mapsto A(Bx).
$$

Thus

$$
(AB)x=A(Bx).
$$

This explains the size rule: the output of \(B\) must live in the input space of \(A\). It also explains the order of the factors: the right factor acts first.
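The identity \((AB)x=A(Bx)\) can be confirmed on random data (NumPy sketch; random matrices chosen here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # A : F^3 -> F^4
B = rng.standard_normal((3, 5))   # B : F^5 -> F^3
x = rng.standard_normal(5)

# The right factor acts first: (AB)x equals A(Bx).
assert np.allclose((A @ B) @ x, A @ (B @ x))
```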

## 7.13 Noncommutativity

Matrix multiplication is generally not commutative. Usually,

$$
AB\ne BA.
$$

Sometimes one product is defined and the other is not. Even when both are defined, they may differ.

For example, let

$$
A=
\begin{bmatrix}
1&1\\
0&1
\end{bmatrix},
\qquad
B=
\begin{bmatrix}
1&0\\
1&1
\end{bmatrix}.
$$

Then

$$
AB=
\begin{bmatrix}
2&1\\
1&1
\end{bmatrix},
$$

while

$$
BA=
\begin{bmatrix}
1&1\\
1&2
\end{bmatrix}.
$$

Thus

$$
AB\ne BA.
$$

The order of multiplication must be preserved.
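The counterexample above can be recomputed directly (NumPy sketch for illustration):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])

assert np.array_equal(A @ B, np.array([[2, 1], [1, 1]]))
assert np.array_equal(B @ A, np.array([[1, 1], [1, 2]]))

# Both products are defined and have the same size, yet they differ.
assert not np.array_equal(A @ B, B @ A)
```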

## 7.14 Associativity

Matrix multiplication is associative when all products are defined:

$$
(AB)C=A(BC).
$$

This law allows the expression \(ABC\) to be written without ambiguity.

Associativity is essential for linear transformations. If \(A\), \(B\), and \(C\) represent transformations, then composing them in a fixed order gives the same result no matter how the products are grouped.

## 7.15 Distributive Laws

Matrix multiplication distributes over matrix addition:

$$
A(B+C)=AB+AC,
$$

and

$$
(A+B)C=AC+BC.
$$

These formulas are valid when the sizes make all products and sums meaningful.

For example, \(A(B+C)\) requires \(B\) and \(C\) to have the same size, and \(A\) must be compatible with both.

## 7.16 Identity Matrices

The identity matrix acts as a multiplicative identity.

If \(A\) is \(m\times n\), then

$$
I_mA=A
$$

and

$$
AI_n=A.
$$

The identity on the left has size \(m\times m\). The identity on the right has size \(n\times n\).

For example,

$$
I_2
\begin{bmatrix}
1&2&3\\
4&5&6
\end{bmatrix} =
\begin{bmatrix}
1&2&3\\
4&5&6
\end{bmatrix}.
$$

The size of the identity matrix is determined by context.
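The two identity sizes for a rectangular matrix can be checked as follows (NumPy sketch; not part of the text):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])   # 2x3

# The left identity is 2x2; the right identity is 3x3.
assert np.array_equal(np.eye(2) @ A, A)
assert np.array_equal(A @ np.eye(3), A)
```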

## 7.17 Zero Products

If \(A\) is \(m\times n\), then

$$
0_{p\times m}A=0_{p\times n}
$$

and

$$
A0_{n\times q}=0_{m\times q}.
$$

A product may be zero even when neither factor is zero.

For example,

$$
\begin{bmatrix}
1&0\\
0&0
\end{bmatrix}
\begin{bmatrix}
0&0\\
0&1
\end{bmatrix} =
\begin{bmatrix}
0&0\\
0&0
\end{bmatrix}.
$$

This behavior differs from ordinary scalar multiplication. It is one reason matrix algebra must be handled carefully.
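The zero product of two nonzero factors above can be verified directly (NumPy sketch for illustration):

```python
import numpy as np

A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 0], [0, 1]])

# Neither factor is the zero matrix, yet the product is zero.
P = A @ B
assert (P == 0).all()
assert (A != 0).any() and (B != 0).any()
```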

## 7.18 Transpose

The transpose of an \(m\times n\) matrix \(A\) is the \(n\times m\) matrix \(A^T\) obtained by interchanging rows and columns.

If \(A=(a_{ij})\), then

$$
(A^T)_{ij}=a_{ji}.
$$

For example,

$$
\begin{bmatrix}
1&2&3\\
4&5&6
\end{bmatrix}^T =
\begin{bmatrix}
1&4\\
2&5\\
3&6
\end{bmatrix}.
$$

The transpose changes the shape of a rectangular matrix. It leaves a square matrix square.

## 7.19 Laws of Transpose

The transpose satisfies the following identities:

| Law | Formula |
|---|---|
| Double transpose | \((A^T)^T=A\) |
| Transpose of a sum | \((A+B)^T=A^T+B^T\) |
| Transpose of a scalar multiple | \((cA)^T=cA^T\) |
| Transpose of a product | \((AB)^T=B^TA^T\) |

The last formula is especially important. The order of multiplication reverses under transpose.
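The product rule, with its reversed order, can be confirmed on the matrices from Section 7.8 (NumPy sketch; not part of the text):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])          # 2x3
B = np.array([[7, 8], [9, 10], [11, 12]])     # 3x2

# (AB)^T = B^T A^T: the order of the factors reverses.
assert np.array_equal((A @ B).T, B.T @ A.T)
assert (A @ B).T.shape == (2, 2)
```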

## 7.20 Powers of a Matrix

Powers are defined only for square matrices.

If \(A\) is \(n\times n\), then

$$
A^2=AA,
$$

$$
A^3=AAA,
$$

and in general,

$$
A^k=\underbrace{AA\cdots A}_{k\text{ factors}}.
$$

Also,

$$
A^0=I_n.
$$

For example, if

$$
A=
\begin{bmatrix}
1&1\\
0&1
\end{bmatrix},
$$

then

$$
A^2=
\begin{bmatrix}
1&1\\
0&1
\end{bmatrix}
\begin{bmatrix}
1&1\\
0&1
\end{bmatrix} =
\begin{bmatrix}
1&2\\
0&1
\end{bmatrix}.
$$

Matrix powers occur in difference equations, graph theory, Markov chains, dynamical systems, and matrix functions.
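The power computed above, together with the convention \(A^0=I\), can be checked with `np.linalg.matrix_power` (an illustrative NumPy sketch):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])

A2 = np.linalg.matrix_power(A, 2)   # A^2 = AA
A0 = np.linalg.matrix_power(A, 0)   # A^0 = I

assert np.array_equal(A2, np.array([[1, 2], [0, 1]]))
assert np.array_equal(A0, np.eye(2))
```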

## 7.21 Polynomial Expressions in a Matrix

If \(A\) is square, then one may form polynomial expressions such as

$$
p(A)=c_0I+c_1A+c_2A^2+\cdots+c_kA^k.
$$

For example, if

$$
p(t)=t^2-3t+2,
$$

then

$$
p(A)=A^2-3A+2I.
$$

The identity matrix appears in the constant term because each term must be an \(n\times n\) matrix.
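Evaluating \(p(A)=A^2-3A+2I\) for the matrix \(A\) of Section 7.20 gives a concrete instance (NumPy sketch; this particular \(A\) is chosen only for illustration):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
n = A.shape[0]

# p(t) = t^2 - 3t + 2 evaluated at A; the constant term becomes 2I.
pA = np.linalg.matrix_power(A, 2) - 3 * A + 2 * np.eye(n, dtype=A.dtype)
assert np.array_equal(pA, np.array([[0, -1], [0, 0]]))
```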

Matrix polynomials are used in the Cayley-Hamilton theorem, minimal polynomials, diagonalization, and matrix functions.

## 7.22 Trace

The trace of a square matrix is the sum of its diagonal entries.

If

$$
A=
\begin{bmatrix}
a_{11}&a_{12}&\cdots&a_{1n}\\
a_{21}&a_{22}&\cdots&a_{2n}\\
\vdots&\vdots&\ddots&\vdots\\
a_{n1}&a_{n2}&\cdots&a_{nn}
\end{bmatrix},
$$

then

$$
\operatorname{tr}(A)=a_{11}+a_{22}+\cdots+a_{nn}.
$$

For example,

$$
\operatorname{tr}
\begin{bmatrix}
2&5&1\\
0&-3&4\\
7&6&9
\end{bmatrix} =
2+(-3)+9=8.
$$

The trace is defined only for square matrices. It becomes important in eigenvalue theory and inner products on matrix spaces.
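The trace computed above corresponds to `np.trace` (NumPy sketch for illustration):

```python
import numpy as np

A = np.array([[2, 5, 1], [0, -3, 4], [7, 6, 9]])

# Sum of the diagonal entries: 2 + (-3) + 9.
t = np.trace(A)
assert t == 8
```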

## 7.23 Hadamard Product

The Hadamard product is the entrywise product of two matrices of the same size.

If \(A=(a_{ij})\) and \(B=(b_{ij})\), then

$$
A\circ B=(a_{ij}b_{ij}).
$$

For example,

$$
\begin{bmatrix}
1&2\\
3&4
\end{bmatrix}
\circ
\begin{bmatrix}
5&6\\
7&8
\end{bmatrix} =
\begin{bmatrix}
5&12\\
21&32
\end{bmatrix}.
$$

This operation differs from matrix multiplication. It does not represent composition of linear transformations. It is useful in statistics, numerical analysis, optimization, and elementwise computation.
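In NumPy the distinction is built into the operators: `*` is entrywise (Hadamard) and `@` is the row-column matrix product (an illustrative sketch; not part of the text):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

H = A * B          # Hadamard (entrywise) product
assert np.array_equal(H, np.array([[5, 12], [21, 32]]))

# The matrix product of the same factors is a different matrix.
assert not np.array_equal(H, A @ B)
```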

## 7.24 Common Errors

Many errors in matrix algebra come from ignoring size or order.

| Error | Correction |
|---|---|
| Adding different-sized matrices | Addition requires equal sizes |
| Multiplying \(AB\) when inner dimensions do not match | \(A\) must be \(m\times n\), \(B\) must be \(n\times p\) |
| Assuming \(AB=BA\) | Matrix multiplication is generally noncommutative |
| Treating \(A^2\) as entrywise squaring | \(A^2=AA\), not \((a_{ij}^2)\) |
| Forgetting transpose reverses products | \((AB)^T=B^TA^T\) |
| Using the wrong identity size | \(I_mA=A\), \(AI_n=A\) |

Matrix notation is compact, but the size constraints must always be checked.

## 7.25 Summary

Matrix operations turn matrices into algebraic objects. Addition and scalar multiplication are entrywise. Matrix multiplication uses row-column dot products and represents composition of linear transformations. Transpose interchanges rows and columns. Powers and polynomial expressions are defined for square matrices.

The main operations are:

| Operation | Condition | Result |
|---|---|---|
| \(A+B\) | Same size | Same size |
| \(A-B\) | Same size | Same size |
| \(cA\) | Scalar \(c\), any matrix \(A\) | Same size as \(A\) |
| \(Ax\) | \(A\) is \(m\times n\), \(x\in F^n\) | Vector in \(F^m\) |
| \(AB\) | \(A\) is \(m\times n\), \(B\) is \(n\times p\) | \(m\times p\) matrix |
| \(A^T\) | \(A\) is \(m\times n\) | \(n\times m\) matrix |
| \(A^k\) | \(A\) is square | Same size as \(A\) |

These operations form the computational language used throughout the rest of linear algebra.
