# Chapter 48. Orthogonal Complements

An orthogonal complement records all directions perpendicular to a given set of vectors. If a subspace describes the directions allowed by a problem, its orthogonal complement describes the directions excluded by it.

Let \(V\) be an inner product space, and let \(S \subseteq V\). The orthogonal complement of \(S\) is

$$
S^\perp =
\{x \in V : \langle x,s\rangle = 0 \text{ for every } s \in S\}.
$$

The notation \(S^\perp\) is read as “\(S\) perp.” It is the set of all vectors orthogonal to every vector in \(S\), and it is always a subspace of the ambient inner product space, as Section 48.3 shows.

## 48.1 First Examples

In \(\mathbb{R}^2\), let

$$
S =
\operatorname{span}
\left\{
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\right\}.
$$

Then \(S\) is the \(x\)-axis. A vector

$$
x =
\begin{bmatrix}
a \\
b
\end{bmatrix}
$$

belongs to \(S^\perp\) precisely when

$$
\left\langle
\begin{bmatrix}
a \\
b
\end{bmatrix},
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\right\rangle
= 0.
$$

This gives

$$
a = 0.
$$

Therefore

$$
S^\perp =
\left\{
\begin{bmatrix}
0 \\
b
\end{bmatrix}
: b\in\mathbb{R}
\right\}.
$$

Thus the orthogonal complement of the \(x\)-axis is the \(y\)-axis.

In \(\mathbb{R}^3\), the orthogonal complement of a line through the origin is the plane through the origin perpendicular to that line. The orthogonal complement of a plane through the origin is the line through the origin perpendicular to that plane.

## 48.2 Orthogonal Complement of a Set

The definition applies to any subset \(S\), not only to a subspace.

If

$$
S = \{s_1,s_2,\ldots,s_k\},
$$

then

$$
S^\perp =
\{x\in V : \langle x,s_i\rangle=0
\text{ for } i=1,\ldots,k\}.
$$

Thus \(S^\perp\) is the common solution set of several homogeneous linear equations.

For example, in \(\mathbb{R}^3\), let

$$
S =
\left\{
\begin{bmatrix}
1 \\
1 \\
0
\end{bmatrix},
\begin{bmatrix}
0 \\
1 \\
1
\end{bmatrix}
\right\}.
$$

A vector

$$
x =
\begin{bmatrix}
a \\
b \\
c
\end{bmatrix}
$$

lies in \(S^\perp\) when

$$
a+b=0
$$

and

$$
b+c=0.
$$

Thus

$$
a=-b,
\qquad
c=-b.
$$

So

$$
x =
b
\begin{bmatrix}
-1 \\
1 \\
-1
\end{bmatrix}.
$$

Therefore

$$
S^\perp =
\operatorname{span}
\left\{
\begin{bmatrix}
-1 \\
1 \\
-1
\end{bmatrix}
\right\}.
$$

## 48.3 The Orthogonal Complement Is a Subspace

For every subset \(S\subseteq V\), the set \(S^\perp\) is a subspace of \(V\). This remains true even when \(S\) itself is not a subspace.

First, the zero vector belongs to \(S^\perp\), since

$$
\langle 0,s\rangle = 0
$$

for every \(s\in S\).

Now suppose \(x,y\in S^\perp\), and let \(a,b\) be scalars. For every \(s\in S\),

$$
\langle ax+by,s\rangle =
a\langle x,s\rangle + b\langle y,s\rangle.
$$

Since \(x\in S^\perp\) and \(y\in S^\perp\),

$$
\langle x,s\rangle=0,
\qquad
\langle y,s\rangle=0.
$$

Therefore

$$
\langle ax+by,s\rangle=0.
$$

Thus

$$
ax+by\in S^\perp.
$$

So \(S^\perp\) is closed under linear combinations and is a subspace.

## 48.4 Orthogonal Complement of a Span

A vector is orthogonal to a set if and only if it is orthogonal to every linear combination of vectors in that set. Hence

$$
S^\perp = \operatorname{span}(S)^\perp.
$$

This identity is useful because it allows us to replace a set by its span without changing the orthogonal complement.

Proof: Suppose \(x\in S^\perp\). Let

$$
w = c_1s_1+\cdots+c_ks_k
$$

be a finite linear combination of vectors from \(S\). Then

$$
\langle x,w\rangle =
\langle x,c_1s_1+\cdots+c_ks_k\rangle.
$$

By linearity,

$$
\langle x,w\rangle =
c_1\langle x,s_1\rangle+\cdots+c_k\langle x,s_k\rangle =
0.
$$

Thus \(x\) is orthogonal to every vector in \(\operatorname{span}(S)\).

The converse is immediate because

$$
S\subseteq \operatorname{span}(S).
$$

Therefore

$$
S^\perp = \operatorname{span}(S)^\perp.
$$

## 48.5 Inclusion Reverses

If

$$
S\subseteq T,
$$

then

$$
T^\perp \subseteq S^\perp.
$$

The inclusion reverses direction.

This happens because being orthogonal to a larger set is a stronger condition. If a vector is orthogonal to every vector in \(T\), then it is certainly orthogonal to every vector in the smaller set \(S\).

For example, in \(\mathbb{R}^3\), if \(S\) is a line inside a plane \(T\), then \(T^\perp\) is a line perpendicular to the plane, while \(S^\perp\) is a plane perpendicular to the line. The complement of the larger subspace is smaller.

## 48.6 Orthogonal Complements in Finite Dimensions

Let \(W\) be a subspace of a finite-dimensional inner product space \(V\). Then

$$
\dim W + \dim W^\perp = \dim V.
$$

Equivalently,

$$
\dim W^\perp = \dim V - \dim W.
$$

This gives the expected geometric rule. In \(\mathbb{R}^3\), a line has dimension \(1\), so its orthogonal complement has dimension \(2\). A plane has dimension \(2\), so its orthogonal complement has dimension \(1\). In an \(n\)-dimensional inner product space, a \(k\)-dimensional subspace has an \((n-k)\)-dimensional orthogonal complement.
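
As a numerical illustration of the dimension formula, here is a minimal NumPy sketch. The matrix `B` is an arbitrary choice, and `null_space_basis` is an ad hoc SVD-based helper rather than a library routine; the check simply confirms that \(\dim W\) and \(\dim W^\perp\) add up to \(\dim V\) for a subspace of \(\mathbb{R}^5\).

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    # Orthonormal basis (as columns) of Null(A), read off from the SVD of A.
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# W = span of the rows of B inside R^5 (illustrative vectors).
B = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 0., 2.],
              [1., 3., 1., 1., 5.]])   # third row = first + second, so dim W = 2

dim_V = B.shape[1]
dim_W = np.linalg.matrix_rank(B)
dim_W_perp = null_space_basis(B).shape[1]   # W^perp = Null(B)

print(dim_W, dim_W_perp, dim_V)             # 2, 3, 5
assert dim_W + dim_W_perp == dim_V
```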

## 48.7 Trivial Intersection

If \(W\) is a subspace of an inner product space \(V\), then

$$
W \cap W^\perp = \{0\}.
$$

Indeed, if \(x\in W\cap W^\perp\), then \(x\in W\) and \(x\) is orthogonal to every vector in \(W\). Since \(x\in W\), it is orthogonal to itself:

$$
\langle x,x\rangle = 0.
$$

Positive definiteness gives

$$
x=0.
$$

Thus the only vector that lies both in a subspace and in its orthogonal complement is the zero vector.

## 48.8 Direct Sum Decomposition

If \(W\) is a subspace of a finite-dimensional inner product space \(V\), then

$$
V = W \oplus W^\perp.
$$

This means every vector \(v\in V\) can be written uniquely as

$$
v = w + z,
$$

where

$$
w\in W,
\qquad
z\in W^\perp.
$$

The uniqueness follows from

$$
W\cap W^\perp=\{0\}.
$$

The existence follows from the dimension formula:

$$
\dim W + \dim W^\perp = \dim V.
$$

This decomposition is called the orthogonal decomposition of \(V\) with respect to \(W\). In finite-dimensional inner product spaces, this direct-sum decomposition is one of the central structural properties of orthogonal complements.
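
The decomposition can be checked numerically. In the NumPy sketch below (the matrix `B` and vector `v` are arbitrary illustrations, not taken from the text), \(v\) is projected onto the column span of `B` by least squares, and the remainder is verified to lie in \(W^\perp\).

```python
import numpy as np

# W = column span of B in R^4 (illustrative vectors).
B = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [0., 0.]])
v = np.array([2., 1., -1., 3.])

# w = orthogonal projection of v onto W, from least squares; z = v - w.
coeffs, *_ = np.linalg.lstsq(B, v, rcond=None)
w = B @ coeffs
z = v - w

print(np.allclose(w + z, v))     # v = w + z
print(np.allclose(B.T @ z, 0))   # z is orthogonal to every column of B, so z is in W^perp
```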

## 48.9 Double Orthogonal Complement

In finite-dimensional inner product spaces,

$$
(W^\perp)^\perp = W.
$$

The inclusion

$$
W \subseteq (W^\perp)^\perp
$$

is immediate: every vector in \(W\) is orthogonal to every vector in \(W^\perp\), so every vector in \(W\) belongs to \((W^\perp)^\perp\).

To prove equality, compare dimensions. Since

$$
\dim W^\perp = \dim V - \dim W,
$$

we have

$$
\dim (W^\perp)^\perp =
\dim V - \dim W^\perp.
$$

Substituting,

$$
\dim (W^\perp)^\perp =
\dim V - (\dim V - \dim W) =
\dim W.
$$

Thus \(W\) is a subspace of \((W^\perp)^\perp\) with the same dimension. Therefore

$$
(W^\perp)^\perp = W.
$$

This finite-dimensional identity must be handled carefully in infinite-dimensional spaces. In Hilbert spaces, the double orthogonal complement of a subspace is its closure, so closedness becomes part of the statement.

## 48.10 Computing Orthogonal Complements

In \(\mathbb{R}^n\), orthogonal complements are often computed by solving homogeneous systems.

Suppose

$$
W = \operatorname{span}\{w_1,\ldots,w_k\}.
$$

A vector \(x\in\mathbb{R}^n\) belongs to \(W^\perp\) if and only if

$$
w_1^T x = 0,
\quad
w_2^T x = 0,
\quad
\ldots,
\quad
w_k^T x = 0.
$$

If we form the matrix

$$
A =
\begin{bmatrix}
w_1^T \\
w_2^T \\
\vdots \\
w_k^T
\end{bmatrix},
$$

then the conditions become

$$
Ax=0.
$$

Therefore

$$
W^\perp = \operatorname{Null}(A).
$$

So computing an orthogonal complement reduces to computing a null space.
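
A short NumPy sketch of this reduction follows. The helper name `orth_complement` and the tolerance are illustrative choices, not library conventions: the rows of \(A\) are the spanning vectors, and the null space is read off from the SVD.

```python
import numpy as np

def orth_complement(vectors, tol=1e-10):
    # Basis (as columns) of the orthogonal complement of span{vectors} in R^n,
    # computed as Null(A) where the rows of A are the given vectors.
    A = np.atleast_2d(np.array(vectors, dtype=float))
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# W = span{(1,1,0), (0,1,1)} in R^3, as in Section 48.2.
basis = orth_complement([[1, 1, 0], [0, 1, 1]])
print(basis.shape)      # (3, 1): the complement is a line
print(basis.ravel())    # proportional to (-1, 1, -1), up to sign and scale
```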

## 48.11 Example in \(\mathbb{R}^4\)

Let

$$
W=
\operatorname{span}
\left\{
\begin{bmatrix}
1\\
1\\
0\\
0
\end{bmatrix},
\begin{bmatrix}
0\\
1\\
1\\
0
\end{bmatrix}
\right\}.
$$

Let

$$
x=
\begin{bmatrix}
a\\
b\\
c\\
d
\end{bmatrix}.
$$

The condition \(x\in W^\perp\) gives

$$
a+b=0
$$

and

$$
b+c=0.
$$

Thus

$$
a=-b,
\qquad
c=-b,
$$

while \(d\) is free. Hence

$$
x=
b
\begin{bmatrix}
-1\\
1\\
-1\\
0
\end{bmatrix}
+
d
\begin{bmatrix}
0\\
0\\
0\\
1
\end{bmatrix}.
$$

Therefore

$$
W^\perp =
\operatorname{span}
\left\{
\begin{bmatrix}
-1\\
1\\
-1\\
0
\end{bmatrix},
\begin{bmatrix}
0\\
0\\
0\\
1
\end{bmatrix}
\right\}.
$$

Since \(W\) has dimension \(2\) in \(\mathbb{R}^4\), its orthogonal complement also has dimension \(2\), as expected.
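
A quick numerical confirmation of this example, as a NumPy sketch that only restates what was computed by hand:

```python
import numpy as np

# Rows are the spanning vectors of W from this example.
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.]])

# Claimed basis of W^perp, as columns.
Z = np.array([[-1., 0.],
              [ 1., 0.],
              [-1., 0.],
              [ 0., 1.]])

print(np.allclose(A @ Z, 0))                  # every claimed vector satisfies Ax = 0
print(np.linalg.matrix_rank(Z))               # 2: the two vectors are independent
print(A.shape[1] - np.linalg.matrix_rank(A))  # 2: dim W^perp = 4 - dim W
```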

## 48.12 Orthogonal Complement and Null Space

Let \(A\) be an \(m\times n\) real matrix. The null space of \(A\) is the orthogonal complement of the row space of \(A\):

$$
\operatorname{Null}(A) =
\operatorname{Row}(A)^\perp.
$$

Indeed,

$$
Ax=0
$$

means that every row of \(A\) has dot product zero with \(x\). Thus \(x\) is orthogonal to every vector in the row space.

Similarly,

$$
\operatorname{Null}(A^T) =
\operatorname{Col}(A)^\perp.
$$

The orthogonal-complement identities for row, column, and null spaces are standard finite-dimensional facts. They express the fundamental relation between equations and orthogonality.
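
The identities can be observed numerically. In the NumPy sketch below (the matrix is an arbitrary example and `null_space_basis` is an ad hoc SVD-based helper), each null-space vector is checked against the rows of \(A\), and each vector of \(\operatorname{Null}(A^T)\) against the columns of \(A\).

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    # Orthonormal basis (as columns) of Null(A), from the SVD.
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])   # illustrative 3x4 matrix of rank 2

N  = null_space_basis(A)      # basis of Null(A)
Nt = null_space_basis(A.T)    # basis of Null(A^T)

print(np.allclose(A @ N, 0))      # null-space vectors are orthogonal to every row of A
print(np.allclose(A.T @ Nt, 0))   # vectors of Null(A^T) are orthogonal to every column of A
```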

## 48.13 Four Fundamental Subspaces

For an \(m\times n\) matrix \(A\), the four fundamental subspaces are:

| Subspace | Ambient space | Orthogonal complement |
|---|---:|---|
| \(\operatorname{Row}(A)\) | \(\mathbb{R}^n\) | \(\operatorname{Null}(A)\) |
| \(\operatorname{Null}(A)\) | \(\mathbb{R}^n\) | \(\operatorname{Row}(A)\) |
| \(\operatorname{Col}(A)\) | \(\mathbb{R}^m\) | \(\operatorname{Null}(A^T)\) |
| \(\operatorname{Null}(A^T)\) | \(\mathbb{R}^m\) | \(\operatorname{Col}(A)\) |

Thus

$$
\mathbb{R}^n =
\operatorname{Row}(A)
\oplus
\operatorname{Null}(A),
$$

and

$$
\mathbb{R}^m =
\operatorname{Col}(A)
\oplus
\operatorname{Null}(A^T).
$$

These decompositions separate each ambient space into a range part and a null part. They are central in solving linear systems, least squares, and understanding rank.
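
The first decomposition can be illustrated numerically. The NumPy sketch below (arbitrary matrix, ad hoc SVD helper) checks that a spanning set of \(\operatorname{Row}(A)\) together with a basis of \(\operatorname{Null}(A)\) fills out all of \(\mathbb{R}^n\), and that the two pieces are orthogonal.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    # Orthonormal basis (as columns) of Null(A), from the SVD.
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])   # illustrative 3x4 matrix of rank 2
m, n = A.shape

row_basis  = A.T                    # columns span Row(A) (possibly dependent)
null_basis = null_space_basis(A)    # columns span Null(A)

combined = np.hstack([row_basis, null_basis])
print(np.linalg.matrix_rank(combined) == n)       # True: R^n = Row(A) + Null(A)
print(np.allclose(row_basis.T @ null_basis, 0))   # True: the two pieces are orthogonal
```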

## 48.14 Orthogonal Complement and Projection

The orthogonal complement gives the residual part of a projection.

Let \(W\) be a finite-dimensional subspace of an inner product space \(V\). For every \(v\in V\), there exists a unique decomposition

$$
v = w + z,
$$

where

$$
w\in W,
\qquad
z\in W^\perp.
$$

The vector \(w\) is the orthogonal projection of \(v\) onto \(W\), and \(z\) is the residual.

Thus

$$
z = v-w.
$$

The defining condition for projection is

$$
v-w \in W^\perp.
$$

Equivalently,

$$
\langle v-w,u\rangle = 0
$$

for every \(u\in W\).

This is the main equation behind least squares approximation.

## 48.15 Projection onto a Subspace with Orthonormal Basis

Suppose \(W\) has an orthonormal basis

$$
q_1,\ldots,q_k.
$$

Then the projection of \(v\) onto \(W\) is

$$
\operatorname{proj}_W(v) =
\sum_{j=1}^k \langle v,q_j\rangle q_j.
$$

The residual is

$$
r =
v-\operatorname{proj}_W(v).
$$

For every \(i\),

$$
\langle r,q_i\rangle =
\left\langle
v-\sum_{j=1}^k \langle v,q_j\rangle q_j,
q_i
\right\rangle.
$$

Using orthonormality,

$$
\langle r,q_i\rangle =
\langle v,q_i\rangle -
\langle v,q_i\rangle =
0.
$$

Therefore \(r\in W^\perp\).
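
A NumPy sketch of this computation (the spanning vectors and the test vector are arbitrary; `np.linalg.qr` supplies the orthonormal basis \(q_1,\ldots,q_k\)):

```python
import numpy as np

# W = column span of B in R^4 (illustrative vectors).
B = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [0., 0.]])
v = np.array([2., 1., -1., 3.])

# Orthonormal basis q_1, ..., q_k of W via the reduced QR factorization.
Q, _ = np.linalg.qr(B)

# proj_W(v) = sum_j <v, q_j> q_j, written compactly as Q (Q^T v).
proj = Q @ (Q.T @ v)
r = v - proj

print(np.allclose(Q.T @ r, 0))   # residual is orthogonal to each q_i, so r is in W^perp
print(np.allclose(B.T @ r, 0))   # equivalently, orthogonal to the original spanning vectors
```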

## 48.16 Least Squares Interpretation

Consider a system

$$
Ax=b
$$

where \(A\) is an \(m\times n\) matrix and \(b\in\mathbb{R}^m\). If \(b\) does not lie in \(\operatorname{Col}(A)\), the system has no exact solution.

The least squares problem asks for \(\hat{x}\) such that

$$
A\hat{x}
$$

is the closest vector in \(\operatorname{Col}(A)\) to \(b\).

The residual

$$
r=b-A\hat{x}
$$

must lie in the orthogonal complement of the column space:

$$
r\in \operatorname{Col}(A)^\perp.
$$

Since

$$
\operatorname{Col}(A)^\perp=\operatorname{Null}(A^T),
$$

we get

$$
A^T r = 0.
$$

Substituting \(r=b-A\hat{x}\) gives the normal equations:

$$
A^T(b-A\hat{x})=0,
$$

or

$$
A^T A\hat{x}=A^T b.
$$

This derivation shows that least squares is fundamentally an orthogonal-complement problem.
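
A NumPy sketch of the derivation (the data are arbitrary; the point is only that the residual lands in \(\operatorname{Col}(A)^\perp\)):

```python
import numpy as np

# Inconsistent system: b does not lie in Col(A) (illustrative data).
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([6., 0., 0.])

# Least squares solution from the normal equations A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

r = b - A @ x_hat
print(np.allclose(A.T @ r, 0))   # residual lies in Col(A)^perp = Null(A^T)

# The same solution from NumPy's least squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_lstsq))
```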

## 48.17 Complex Inner Product Spaces

In complex inner product spaces, the definition remains

$$
S^\perp =
\{x\in V : \langle x,s\rangle=0 \text{ for every } s\in S\}.
$$

The main change is conjugation. In \(\mathbb{C}^n\), the standard inner product is

$$
\langle x,y\rangle =
x^*y
$$

or, depending on convention,

$$
\langle x,y\rangle =
y^*x.
$$

The zero condition is unaffected by the convention, provided it is used consistently.

For a complex matrix \(A\),

$$
\operatorname{Null}(A) =
\operatorname{Row}(A)^\perp
$$

with rows interpreted through the complex inner product. Also,

$$
\operatorname{Null}(A^*) =
\operatorname{Col}(A)^\perp.
$$

The transpose in the real case becomes the conjugate transpose in the complex case.
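
The same computation carries over numerically, with the conjugate transpose in place of the transpose. A NumPy sketch (arbitrary complex matrix, ad hoc SVD-based null-space helper; the convention \(\langle x,y\rangle = x^*y\) is assumed):

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    # Orthonormal basis (as columns) of Null(M), from the SVD; works for complex M.
    U, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj().T

A = np.array([[1.0 + 1.0j, 0.0 + 0.0j],
              [0.0 + 0.0j, 0.0 + 2.0j],
              [1.0 + 1.0j, 0.0 + 2.0j]])   # illustrative complex 3x2 matrix of rank 2

# Null(A^*) = Col(A)^perp under <x, y> = x^* y.
N = null_space_basis(A.conj().T)
print(np.allclose(A.conj().T @ N, 0))   # each column of A is orthogonal to each vector in N
```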

## 48.18 Infinite-Dimensional Caution

In finite dimensions, every subspace is closed, and

$$
(W^\perp)^\perp = W.
$$

In infinite-dimensional Hilbert spaces, a subspace may fail to be closed. In that setting,

$$
(W^\perp)^\perp = \overline{W},
$$

where \(\overline{W}\) is the closure of \(W\). If \(W\) is closed, then

$$
(W^\perp)^\perp = W.
$$

This distinction is invisible in elementary finite-dimensional linear algebra but becomes important in functional analysis. Orthogonal complements are always closed in Hilbert spaces, even when the original subspace is not closed.

## 48.19 Common Identities

For subspaces \(U,W\) of a finite-dimensional inner product space,

$$
U\subseteq W
\quad
\Longrightarrow
\quad
W^\perp\subseteq U^\perp.
$$

Also,

$$
(U+W)^\perp =
U^\perp\cap W^\perp.
$$

Indeed, a vector is orthogonal to every vector in \(U+W\) exactly when it is orthogonal to every vector in \(U\) and every vector in \(W\).

In finite dimensions,

$$
(U\cap W)^\perp =
U^\perp + W^\perp.
$$

These identities show that orthogonal complementation exchanges sums and intersections. It reverses inclusion and changes dimension by complementarity.
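
The sum identity can be checked numerically. The NumPy sketch below (arbitrary subspaces of \(\mathbb{R}^4\), ad hoc SVD helper) verifies that every vector of \((U+W)^\perp\) is orthogonal to both \(U\) and \(W\), and that its dimension matches \(n-\dim(U+W)\); in this example both sides of the identity are one-dimensional.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    # Orthonormal basis (as columns) of Null(A), from the SVD.
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# U and W given by spanning vectors (rows), inside R^4 (illustrative).
U_rows = np.array([[1., 0., 0., 0.],
                   [0., 1., 0., 0.]])
W_rows = np.array([[0., 1., 0., 0.],
                   [0., 0., 1., 0.]])

# Left side: (U + W)^perp is the null space of the stacked spanning rows.
left = null_space_basis(np.vstack([U_rows, W_rows]))

print(np.allclose(U_rows @ left, 0))   # every vector of (U+W)^perp lies in U^perp
print(np.allclose(W_rows @ left, 0))   # ... and in W^perp

# Dimension count: dim (U+W)^perp = n - dim(U+W).
n = 4
dim_sum = np.linalg.matrix_rank(np.vstack([U_rows, W_rows]))
print(left.shape[1] == n - dim_sum)    # True: here both are 1
```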

## 48.20 Summary

The orthogonal complement of a set \(S\) is the subspace of all vectors orthogonal to every vector in \(S\):

$$
S^\perp =
\{x\in V : \langle x,s\rangle=0 \text{ for all } s\in S\}.
$$

It is always a subspace. It depends only on the span of \(S\), so

$$
S^\perp = \operatorname{span}(S)^\perp.
$$

For a finite-dimensional subspace \(W\subseteq V\),

$$
\dim W + \dim W^\perp = \dim V,
$$

$$
W\cap W^\perp=\{0\},
$$

and

$$
V=W\oplus W^\perp.
$$

Orthogonal complements connect geometry with computation. They describe null spaces, residuals, projections, least squares, and the four fundamental subspaces of a matrix.
