Chapter 48. Orthogonal Complements

An orthogonal complement records all directions perpendicular to a given set of vectors. If a subspace describes the directions allowed by a problem, its orthogonal complement describes the directions excluded by it.

Let $V$ be an inner product space, and let $S \subseteq V$. The orthogonal complement of $S$ is

$$ S^\perp = \{x \in V : \langle x,s\rangle = 0 \text{ for every } s \in S\}. $$

The notation $S^\perp$ is read as “$S$ perp.” It is the set of all vectors orthogonal to every vector in $S$. Standard references define it this way and note that it is always a subspace of the ambient inner product space.

48.1 First Examples

In $\mathbb{R}^2$, let

$$ S = \operatorname{span} \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}. $$

Then $S$ is the $x$-axis. A vector

$$ x = \begin{bmatrix} a \\ b \end{bmatrix} $$

belongs to $S^\perp$ precisely when

$$ \left\langle \begin{bmatrix} a \\ b \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\rangle = 0. $$

This gives

$$ a = 0. $$

Therefore

$$ S^\perp = \left\{ \begin{bmatrix} 0 \\ b \end{bmatrix} : b\in\mathbb{R} \right\}. $$

Thus the orthogonal complement of the $x$-axis is the $y$-axis.

In $\mathbb{R}^3$, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to that line. The orthogonal complement of a plane through the origin is the line through the origin perpendicular to that plane.

48.2 Orthogonal Complement of a Set

The definition applies to any subset $S$, not only to a subspace.

If

$$ S = \{s_1,s_2,\ldots,s_k\}, $$

then

$$ S^\perp = \{x\in V : \langle x,s_i\rangle=0 \text{ for } i=1,\ldots,k\}. $$

Thus $S^\perp$ is the common solution set of $k$ homogeneous linear equations.

For example, in $\mathbb{R}^3$, let

$$ S = \left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}. $$

A vector

$$ x = \begin{bmatrix} a \\ b \\ c \end{bmatrix} $$

lies in $S^\perp$ when

$$ a+b=0 $$

and

$$ b+c=0. $$

Thus

$$ a=-b, \qquad c=-b. $$

So

$$ x = b \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}. $$

Therefore

$$ S^\perp = \operatorname{span} \left\{ \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} \right\}. $$

48.3 The Orthogonal Complement Is a Subspace

For every subset $S\subseteq V$, the set $S^\perp$ is a subspace of $V$. This remains true even when $S$ itself is not a subspace.

First, the zero vector belongs to $S^\perp$, since

$$ \langle 0,s\rangle = 0 $$

for every $s\in S$.

Now suppose $x,y\in S^\perp$, and let $a,b$ be scalars. For every $s\in S$,

$$ \langle ax+by,s\rangle = a\langle x,s\rangle + b\langle y,s\rangle. $$

Since $x\in S^\perp$ and $y\in S^\perp$,

$$ \langle x,s\rangle=0, \qquad \langle y,s\rangle=0. $$

Therefore

$$ \langle ax+by,s\rangle=0. $$

Thus

$$ ax+by\in S^\perp. $$

So $S^\perp$ is closed under linear combinations and is a subspace.

48.4 Orthogonal Complement of a Span

A vector is orthogonal to a set if and only if it is orthogonal to every linear combination of vectors in that set. Hence

$$ S^\perp = \operatorname{span}(S)^\perp. $$

This identity is useful because it allows us to replace a set by its span without changing the orthogonal complement.

Proof: Suppose $x\in S^\perp$. Let

$$ w = c_1s_1+\cdots+c_ks_k $$

be a finite linear combination of vectors from $S$. Then

$$ \langle x,w\rangle = \langle x,c_1s_1+\cdots+c_ks_k\rangle. $$

By linearity,

$$ \langle x,w\rangle = c_1\langle x,s_1\rangle+\cdots+c_k\langle x,s_k\rangle = 0. $$

Thus $x$ is orthogonal to every vector in $\operatorname{span}(S)$.

The converse is immediate because

$$ S\subseteq \operatorname{span}(S). $$

Therefore

$$ S^\perp = \operatorname{span}(S)^\perp. $$

48.5 Inclusion Reverses

If

$$ S\subseteq T, $$

then

$$ T^\perp \subseteq S^\perp. $$

The inclusion reverses direction.

This happens because being orthogonal to a larger set is a stronger condition. If a vector is orthogonal to every vector in $T$, then it is certainly orthogonal to every vector in the smaller set $S$.

For example, in $\mathbb{R}^3$, if $S$ is a line inside a plane $T$, then $T^\perp$ is a line perpendicular to the plane, while $S^\perp$ is a plane perpendicular to the line. The complement of the larger subspace is smaller.

48.6 Orthogonal Complements in Finite Dimensions

Let $W$ be a subspace of a finite-dimensional inner product space $V$. Then

$$ \dim W + \dim W^\perp = \dim V. $$

Equivalently,

$$ \dim W^\perp = \dim V - \dim W. $$

This gives the expected geometric rule. In $\mathbb{R}^3$, a line has dimension $1$, so its orthogonal complement has dimension $2$. A plane has dimension $2$, so its orthogonal complement has dimension $1$. More generally, in an $n$-dimensional inner product space, a $k$-dimensional subspace has an $(n-k)$-dimensional orthogonal complement.
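
The dimension formula is easy to confirm numerically for a concrete subspace. The sketch below is a minimal check assuming NumPy and SciPy; the spanning vectors of $W$ are arbitrary illustrative choices.

```python
# Minimal numerical check of dim W + dim W^perp = dim V (assuming NumPy/SciPy).
# The rows of W_span are arbitrary vectors spanning a subspace W of R^5.
import numpy as np
from scipy.linalg import null_space

V_dim = 5
W_span = np.array([[1., 2., 0., 0., 1.],
                   [0., 1., 1., 3., 0.]])   # rows span W

dim_W = np.linalg.matrix_rank(W_span)
dim_W_perp = null_space(W_span).shape[1]    # columns of null_space span W^perp

print(dim_W, dim_W_perp, dim_W + dim_W_perp == V_dim)   # 2 3 True
```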

48.7 Trivial Intersection

If $W$ is a subspace of an inner product space $V$, then

$$ W \cap W^\perp = \{0\}. $$

Indeed, if $x\in W\cap W^\perp$, then $x\in W$ and $x$ is orthogonal to every vector in $W$. Since $x\in W$, it is orthogonal to itself:

$$ \langle x,x\rangle = 0. $$

Positive definiteness gives

$$ x=0. $$

Thus the only vector that lies both in a subspace and in its orthogonal complement is the zero vector.

48.8 Direct Sum Decomposition

If $W$ is a subspace of a finite-dimensional inner product space $V$, then

$$ V = W \oplus W^\perp. $$

This means every vector $v\in V$ can be written uniquely as

$$ v = w + z, $$

where

$$ w\in W, \qquad z\in W^\perp. $$

The uniqueness follows from

$$ W\cap W^\perp=\{0\}. $$

The existence follows from the dimension formula:

$$ \dim W + \dim W^\perp = \dim V. $$

Indeed, since $W\cap W^\perp=\{0\}$, we have $\dim(W+W^\perp)=\dim W+\dim W^\perp=\dim V$, so $W+W^\perp=V$.

This decomposition is called the orthogonal decomposition of $V$ with respect to $W$. In finite-dimensional inner product spaces, this direct-sum decomposition is one of the central structural properties of orthogonal complements.
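
In $\mathbb{R}^n$ the decomposition can be computed explicitly by projecting onto $W$. A minimal sketch assuming NumPy; the basis of $W$ and the vector $v$ are made-up examples.

```python
# Minimal sketch (assuming NumPy) of the orthogonal decomposition v = w + z,
# with w in W and z in W^perp. The basis of W and the vector v are arbitrary.
import numpy as np

B = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [0., 0.]])                 # columns of B span W in R^4
v = np.array([3., 1., 4., 1.])

Q, _ = np.linalg.qr(B)                   # orthonormal basis of W (columns of Q)
w = Q @ (Q.T @ v)                        # component in W
z = v - w                                # component in W^perp

print(np.allclose(v, w + z))             # True
print(np.allclose(B.T @ z, 0))           # True: z is orthogonal to every spanning vector
```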

48.9 Double Orthogonal Complement

In finite-dimensional inner product spaces,

$$ (W^\perp)^\perp = W. $$

The inclusion

$$ W \subseteq (W^\perp)^\perp $$

is immediate. Every vector in $W$ is orthogonal to every vector in $W^\perp$, so every vector in $W$ belongs to $(W^\perp)^\perp$.

To prove equality, compare dimensions. Since

$$ \dim W^\perp = \dim V - \dim W, $$

we have

$$ \dim (W^\perp)^\perp = \dim V - \dim W^\perp. $$

Substituting,

$$ \dim (W^\perp)^\perp = \dim V - (\dim V - \dim W) = \dim W. $$

Thus $W$ is a subspace of $(W^\perp)^\perp$ with the same dimension. Therefore

$$ (W^\perp)^\perp = W. $$
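
Numerically, applying the null-space construction twice returns the original subspace. A small check, assuming NumPy and SciPy, with an arbitrary spanning set; equality of subspaces is tested by comparing ranks.

```python
# Minimal sketch (assuming NumPy/SciPy) that (W^perp)^perp = W in R^4.
import numpy as np
from scipy.linalg import null_space

W = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.]]).T            # columns of W span the subspace

W_perp = null_space(W.T)                      # columns span W^perp
W_perp_perp = null_space(W_perp.T)            # columns span (W^perp)^perp

# Same subspace: adjoining the recovered basis does not increase the rank.
print(np.linalg.matrix_rank(np.hstack([W, W_perp_perp])) == np.linalg.matrix_rank(W))
```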

This finite-dimensional identity must be handled carefully in infinite-dimensional spaces. In Hilbert spaces, the double orthogonal complement of a subspace is its closure, so closedness becomes part of the statement.

48.10 Computing Orthogonal Complements

In $\mathbb{R}^n$, orthogonal complements are often computed by solving homogeneous systems.

Suppose

$$ W = \operatorname{span}\{w_1,\ldots,w_k\}. $$

A vector $x\in\mathbb{R}^n$ belongs to $W^\perp$ if and only if

$$ w_1^T x = 0, \quad w_2^T x = 0, \quad \ldots, \quad w_k^T x = 0. $$

If we form the matrix

$$ A = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_k^T \end{bmatrix}, $$

then the conditions become

$$ Ax=0. $$

Therefore

$$ W^\perp = \operatorname{Null}(A). $$

So computing an orthogonal complement reduces to computing a null space.
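
In code this reduction is a single call to a null-space routine. The sketch below assumes NumPy and SciPy and reuses the spanning vectors from the example in 48.2, stacked as the rows of $A$.

```python
# Minimal sketch (assuming NumPy/SciPy): W^perp = Null(A), where the rows of A
# are the spanning vectors of W (here the two vectors from the 48.2 example).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 1., 0.],
              [0., 1., 1.]])            # row i encodes the equation w_i^T x = 0

basis = null_space(A)                   # orthonormal basis of W^perp, as columns
print(basis.shape)                      # (3, 1): W^perp is a line in R^3
print(np.allclose(A @ basis, 0))        # True: each basis vector satisfies Ax = 0
```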

48.11 Example in $\mathbb{R}^4$

Let

$$ W= \operatorname{span} \left\{ \begin{bmatrix} 1\\ 1\\ 0\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 1\\ 0 \end{bmatrix} \right\}. $$

Let

$$ x= \begin{bmatrix} a\\ b\\ c\\ d \end{bmatrix}. $$

The condition $x\in W^\perp$ gives

$$ a+b=0 $$

and

$$ b+c=0. $$

Thus

$$ a=-b, \qquad c=-b, $$

while $d$ is free. Hence

$$ x= b \begin{bmatrix} -1\\ 1\\ -1\\ 0 \end{bmatrix} + d \begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix}. $$

Therefore

$$ W^\perp = \operatorname{span} \left\{ \begin{bmatrix} -1\\ 1\\ -1\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix} \right\}. $$

Since $W$ has dimension $2$ in $\mathbb{R}^4$, its orthogonal complement also has dimension $2$, as expected.
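
The computed basis is easy to double-check: every claimed basis vector of $W^\perp$ must be orthogonal to both spanning vectors of $W$. A brief check, assuming NumPy:

```python
# Quick orthogonality check of the R^4 example above (assuming NumPy).
import numpy as np

W_span = np.array([[1., 1., 0., 0.],
                   [0., 1., 1., 0.]])            # rows span W
W_perp_span = np.array([[-1., 1., -1., 0.],
                        [0., 0., 0., 1.]])       # rows span the claimed W^perp

print(np.allclose(W_span @ W_perp_span.T, 0))    # True: all pairwise dot products are zero
```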

48.12 Orthogonal Complement and Null Space

Let $A$ be an $m\times n$ real matrix. The null space of $A$ is the orthogonal complement of the row space of $A$:

$$ \operatorname{Null}(A) = \operatorname{Row}(A)^\perp. $$

Indeed,

$$ Ax=0 $$

means that every row of $A$ has dot product zero with $x$. Thus $x$ is orthogonal to every vector in the row space.

Similarly,

$$ \operatorname{Null}(A^T) = \operatorname{Col}(A)^\perp. $$

The orthogonal-complement identities for row, column, and null spaces are standard finite-dimensional facts. They express the fundamental relation between equations and orthogonality.

48.13 Four Fundamental Subspaces

For an $m\times n$ matrix $A$, the four fundamental subspaces are:

| Subspace | Ambient space | Orthogonal complement |
| --- | --- | --- |
| $\operatorname{Row}(A)$ | $\mathbb{R}^n$ | $\operatorname{Null}(A)$ |
| $\operatorname{Null}(A)$ | $\mathbb{R}^n$ | $\operatorname{Row}(A)$ |
| $\operatorname{Col}(A)$ | $\mathbb{R}^m$ | $\operatorname{Null}(A^T)$ |
| $\operatorname{Null}(A^T)$ | $\mathbb{R}^m$ | $\operatorname{Col}(A)$ |

Thus

$$ \mathbb{R}^n = \operatorname{Row}(A) \oplus \operatorname{Null}(A), $$

and

$$ \mathbb{R}^m = \operatorname{Col}(A) \oplus \operatorname{Null}(A^T). $$

These decompositions separate each ambient space into a range part and a null part. They are central in solving linear systems, least squares, and understanding rank.
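
Both decompositions can be confirmed for a concrete matrix by counting dimensions: $\operatorname{rank}(A)+\dim\operatorname{Null}(A)=n$ and $\operatorname{rank}(A)+\dim\operatorname{Null}(A^T)=m$. A sketch with an arbitrary example matrix, assuming NumPy and SciPy:

```python
# Minimal sketch (assuming NumPy/SciPy): dimension counts for the two
# orthogonal decompositions, using an arbitrary 3 x 4 example matrix.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])               # rank 2 (third row = first + second)

m, n = A.shape
r = np.linalg.matrix_rank(A)
dim_null = null_space(A).shape[1]              # dim Null(A)
dim_left_null = null_space(A.T).shape[1]       # dim Null(A^T)

print(r + dim_null == n)                       # True: R^n = Row(A) (+) Null(A)
print(r + dim_left_null == m)                  # True: R^m = Col(A) (+) Null(A^T)
```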

48.14 Orthogonal Complement and Projection

The orthogonal complement gives the residual part of a projection.

Let $W$ be a finite-dimensional subspace of an inner product space $V$. For every $v\in V$, there exists a unique decomposition

$$ v = w + z, $$

where

$$ w\in W, \qquad z\in W^\perp. $$

The vector $w$ is the orthogonal projection of $v$ onto $W$, and $z$ is the residual.

Thus

$$ z = v-w. $$

The defining condition for projection is

$$ v-w \in W^\perp. $$

Equivalently,

$$ \langle v-w,u\rangle = 0 $$

for every $u\in W$.

This is the main equation behind least squares approximation.

48.15 Projection onto a Subspace with Orthonormal Basis

Suppose $W$ has an orthonormal basis

$$ q_1,\ldots,q_k. $$

Then the projection of $v$ onto $W$ is

$$ \operatorname{proj}_W(v) = \sum_{j=1}^k \langle v,q_j\rangle q_j. $$

The residual is

$$ r = v-\operatorname{proj}_W(v). $$

For every $i$,

$$ \langle r,q_i\rangle = \left\langle v-\sum_{j=1}^k \langle v,q_j\rangle q_j, q_i \right\rangle. $$

Using orthonormality,

$$ \langle r,q_i\rangle = \langle v,q_i\rangle - \langle v,q_i\rangle = 0. $$

Therefore $r\in W^\perp$.
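
The projection formula translates directly into code once an orthonormal basis is available, for example from a QR factorization. A minimal sketch assuming NumPy; the subspace and the vector are arbitrary examples.

```python
# Minimal sketch (assuming NumPy): projection onto W via an orthonormal basis
# q_1, ..., q_k from a QR factorization; the residual lands in W^perp.
import numpy as np

B = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [0., 1.]])                  # columns span W in R^4
v = np.array([2., 0., -1., 5.])

Q, _ = np.linalg.qr(B)                    # columns q_j form an orthonormal basis of W
proj = sum((v @ Q[:, j]) * Q[:, j] for j in range(Q.shape[1]))   # sum of <v, q_j> q_j
r = v - proj

print(np.allclose(Q.T @ r, 0))            # True: <r, q_i> = 0 for every i
```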

48.16 Least Squares Interpretation

Consider a system

$$ Ax=b $$

where $A$ is an $m\times n$ matrix and $b\in\mathbb{R}^m$. If $b$ does not lie in $\operatorname{Col}(A)$, the system has no exact solution.

The least squares problem asks for $\hat{x}$ such that

$$ A\hat{x} $$

is the closest vector in $\operatorname{Col}(A)$ to $b$.

The residual

$$ r=b-A\hat{x} $$

must lie in the orthogonal complement of the column space:

$$ r\in \operatorname{Col}(A)^\perp. $$

Since

$$ \operatorname{Col}(A)^\perp=\operatorname{Null}(A^T), $$

we get

$$ A^T r = 0. $$

Substituting $r=b-A\hat{x}$ gives the normal equations:

$$ A^T(b-A\hat{x})=0, $$

or

$$ A^T A\hat{x}=A^T b. $$

This derivation shows that least squares is fundamentally an orthogonal-complement problem.
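
A short numerical check, assuming NumPy and an arbitrary full-column-rank example: solve the normal equations directly, compare with a library least-squares routine, and confirm that the residual is orthogonal to the columns of $A$.

```python
# Minimal sketch (assuming NumPy) of least squares via the normal equations.
# A and b are arbitrary data; A has full column rank, so A^T A is invertible.
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([1., 0., 2.])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations: A^T A x = A^T b
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]    # library least-squares solution

r = b - A @ x_hat
print(np.allclose(x_hat, x_ref))                # True
print(np.allclose(A.T @ r, 0))                  # True: r in Null(A^T) = Col(A)^perp
```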

48.17 Complex Inner Product Spaces

In complex inner product spaces, the definition remains

$$ S^\perp = \{x\in V : \langle x,s\rangle=0 \text{ for every } s\in S\}. $$

The main change is conjugation. In $\mathbb{C}^n$, the standard inner product is

$$ \langle x,y\rangle = x^*y $$

or, depending on convention,

$$ \langle x,y\rangle = y^*x. $$

The zero condition is unaffected by the convention, provided it is used consistently: since $x^*y=\overline{y^*x}$, one expression vanishes exactly when the other does.

For a complex matrix $A$,

$$ \operatorname{Null}(A) = \operatorname{Row}(A)^\perp $$

with rows interpreted through the complex inner product. Also,

$$ \operatorname{Null}(A^*) = \operatorname{Col}(A)^\perp. $$

The transpose in the real case becomes the conjugate transpose in the complex case.
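
The same null-space computation carries over to $\mathbb{C}^n$ once the conjugate transpose replaces the transpose. A sketch assuming NumPy and SciPy (scipy.linalg.null_space is SVD-based and accepts complex input); the matrix entries are arbitrary.

```python
# Minimal sketch (assuming NumPy/SciPy): Null(A*) = Col(A)^perp for a complex A.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1. + 1.j, 2. + 0.j],
              [0. + 0.j, 1. - 1.j],
              [0. + 1.j, 3. + 0.j]])

Z = null_space(A.conj().T)              # columns span Null(A*) = Col(A)^perp
# Complex inner product of each column of A with each column of Z is zero.
print(np.allclose(A.conj().T @ Z, 0))   # True
```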

48.18 Infinite-Dimensional Caution

In finite dimensions, every subspace is closed, and

$$ (W^\perp)^\perp = W. $$

In infinite-dimensional Hilbert spaces, a subspace may fail to be closed. In that setting,

$$ (W^\perp)^\perp = \overline{W}, $$

where $\overline{W}$ is the closure of $W$. If $W$ is closed, then

$$ (W^\perp)^\perp = W. $$

This distinction is invisible in elementary finite-dimensional linear algebra but becomes important in functional analysis. Orthogonal complements are always closed in Hilbert spaces, even when the original subspace is not closed.

48.19 Common Identities

For subspaces $U,W$ of a finite-dimensional inner product space,

$$ U\subseteq W \quad \Longrightarrow \quad W^\perp\subseteq U^\perp. $$

Also,

$$ (U+W)^\perp = U^\perp\cap W^\perp. $$

Indeed, a vector is orthogonal to every vector in $U+W$ exactly when it is orthogonal to every vector in $U$ and every vector in $W$.

In finite dimensions,

$$ (U\cap W)^\perp = U^\perp + W^\perp. $$

These identities show that orthogonal complementation exchanges sums and intersections. It reverses inclusion and changes dimension by complementarity.
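
The identity $(U\cap W)^\perp = U^\perp + W^\perp$ can be checked numerically by comparing dimensions; since the inclusion $U^\perp + W^\perp \subseteq (U\cap W)^\perp$ always holds, equal dimensions force equality. A sketch with arbitrary subspaces of $\mathbb{R}^4$, assuming NumPy and SciPy:

```python
# Minimal sketch (assuming NumPy/SciPy): dim (U cap W)^perp = dim (U^perp + W^perp)
# for two arbitrary subspaces of R^4 given by spanning rows.
import numpy as np
from scipy.linalg import null_space

rank = np.linalg.matrix_rank
n = 4
U = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])                 # rows span U
W = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.]])                 # rows span W

dim_U_cap_W = rank(U) + rank(W) - rank(np.vstack([U, W]))   # dim(U cap W)
dim_lhs = n - dim_U_cap_W                                   # dim (U cap W)^perp

U_perp = null_space(U)                                      # columns span U^perp
W_perp = null_space(W)                                      # columns span W^perp
dim_rhs = rank(np.hstack([U_perp, W_perp]))                 # dim (U^perp + W^perp)

print(dim_lhs == dim_rhs)                                   # True (both equal 3)
```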

48.20 Summary

The orthogonal complement of a set $S$ is the subspace of all vectors orthogonal to every vector in $S$:

$$ S^\perp = \{x\in V : \langle x,s\rangle=0 \text{ for all } s\in S\}. $$

It is always a subspace. It depends only on the span of $S$, so

$$ S^\perp = \operatorname{span}(S)^\perp. $$

For a subspace $W$ of a finite-dimensional inner product space $V$,

$$ \dim W + \dim W^\perp = \dim V, \qquad W\cap W^\perp=\{0\}, $$

and

$$ V=W\oplus W^\perp. $$

Orthogonal complements connect geometry with computation. They describe null spaces, residuals, projections, least squares, and the four fundamental subspaces of a matrix.