An orthogonal complement records all directions perpendicular to a given set of vectors. If a subspace describes the directions allowed by a problem, its orthogonal complement describes the directions excluded by it.
Let $V$ be an inner product space, and let $S \subseteq V$. The orthogonal complement of $S$ is
$$
S^\perp = \{\, v \in V : \langle v, s \rangle = 0 \text{ for all } s \in S \,\}.
$$
The notation $S^\perp$ is read as “$S$ perp.” It is the set of all vectors orthogonal to every vector in $S$. Standard references define it this way and note that it is always a subspace of the ambient inner product space.
48.1 First Examples
In $\mathbb{R}^2$, let
$$
S = \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}.
$$
Then $\operatorname{span}(S)$ is the $x$-axis. A vector
$$
v = \begin{bmatrix} a \\ b \end{bmatrix}
$$
belongs to $S^\perp$ precisely when
$$
\left\langle \begin{bmatrix} a \\ b \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\rangle = a \cdot 1 + b \cdot 0 = 0.
$$
This gives
$$
a = 0.
$$
Therefore
$$
S^\perp = \left\{ \begin{bmatrix} 0 \\ b \end{bmatrix} : b \in \mathbb{R} \right\}.
$$
Thus the orthogonal complement of the $x$-axis is the $y$-axis.
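A quick numerical check (a minimal NumPy sketch of ours, not part of the standard development) confirms that vectors on the $y$-axis are orthogonal to $(1, 0)$:

```python
import numpy as np

# Sample a few vectors on the y-axis and check each against (1, 0).
e1 = np.array([1.0, 0.0])
for b in (-2.0, 0.5, 3.0):
    v = np.array([0.0, b])        # a vector on the y-axis
    assert np.dot(v, e1) == 0.0   # <v, e1> = 0, so v lies in S-perp
print("all sampled y-axis vectors are orthogonal to e1")
```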
In $\mathbb{R}^3$, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to that line. The orthogonal complement of a plane through the origin is the line through the origin perpendicular to that plane.
48.2 Orthogonal Complement of a Set
The definition applies to any subset $S \subseteq V$, not only to a subspace.
If
$$
S = \{ s_1, s_2, \ldots, s_k \},
$$
then
$$
S^\perp = \{\, v \in V : \langle v, s_1 \rangle = 0,\ \langle v, s_2 \rangle = 0,\ \ldots,\ \langle v, s_k \rangle = 0 \,\}.
$$
Thus $S^\perp$ is the common solution set of several homogeneous linear equations.
For example, in $\mathbb{R}^3$, let
$$
S = \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right\}.
$$
A vector
$$
v = \begin{bmatrix} x \\ y \\ z \end{bmatrix}
$$
lies in $S^\perp$ when
$$
\left\langle v, \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \right\rangle = x = 0
$$
and
$$
\left\langle v, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right\rangle = y = 0.
$$
Thus $x = 0$ and $y = 0$, while $z$ is free. So
$$
S^\perp = \left\{ \begin{bmatrix} 0 \\ 0 \\ z \end{bmatrix} : z \in \mathbb{R} \right\}.
$$
Therefore $S^\perp$ is the $z$-axis.
48.3 The Orthogonal Complement Is a Subspace
For every subset $S \subseteq V$, the set $S^\perp$ is a subspace of $V$. This remains true even when $S$ itself is not a subspace.
First, the zero vector belongs to $S^\perp$, since
$$
\langle 0, s \rangle = 0
$$
for every $s \in S$.
Now suppose $u, w \in S^\perp$, and let $\alpha, \beta$ be scalars. For every $s \in S$,
$$
\langle \alpha u + \beta w, s \rangle = \alpha \langle u, s \rangle + \beta \langle w, s \rangle.
$$
Since $\langle u, s \rangle = 0$ and $\langle w, s \rangle = 0$, the right-hand side vanishes. Therefore
$$
\langle \alpha u + \beta w, s \rangle = 0.
$$
Thus
$$
\alpha u + \beta w \in S^\perp.
$$
So $S^\perp$ is closed under linear combinations and is a subspace.
48.4 Orthogonal Complement of a Span
A vector is orthogonal to a set if and only if it is orthogonal to every linear combination of vectors in that set. Hence
$$
S^\perp = (\operatorname{span} S)^\perp.
$$
This identity is useful because it allows us to replace a set by its span without changing the orthogonal complement.
Proof: Suppose $v \in S^\perp$. Let
$$
w = c_1 s_1 + c_2 s_2 + \cdots + c_k s_k
$$
be a finite linear combination of vectors from $S$. Then
$$
\langle v, w \rangle = \langle v, c_1 s_1 + c_2 s_2 + \cdots + c_k s_k \rangle.
$$
By linearity,
$$
\langle v, w \rangle = c_1 \langle v, s_1 \rangle + c_2 \langle v, s_2 \rangle + \cdots + c_k \langle v, s_k \rangle = 0.
$$
Thus $v$ is orthogonal to every vector in $\operatorname{span}(S)$.
The converse is immediate because
$$
S \subseteq \operatorname{span}(S),
$$
so a vector orthogonal to everything in $\operatorname{span}(S)$ is in particular orthogonal to everything in $S$. Therefore
$$
S^\perp = (\operatorname{span} S)^\perp.
$$
48.5 Inclusion Reverses
If
$$
S \subseteq T,
$$
then
$$
T^\perp \subseteq S^\perp.
$$
The inclusion reverses direction.
This happens because being orthogonal to a larger set is a stronger condition. If a vector is orthogonal to every vector in $T$, then it is certainly orthogonal to every vector in the smaller set $S$.
For example, in $\mathbb{R}^3$, if $L$ is a line inside a plane $P$, then $P^\perp$ is a line perpendicular to the plane, while $L^\perp$ is a plane perpendicular to the line. The complement of the larger subspace is smaller.
48.6 Orthogonal Complements in Finite Dimensions
Let $W$ be a subspace of a finite-dimensional inner product space $V$. Then
$$
\dim W + \dim W^\perp = \dim V.
$$
Equivalently,
$$
\dim W^\perp = \dim V - \dim W.
$$
This gives the expected geometric rule. In $\mathbb{R}^3$, a line has dimension $1$, so its orthogonal complement has dimension $2$. A plane has dimension $2$, so its orthogonal complement has dimension $1$. In an $n$-dimensional inner product space, a $k$-dimensional subspace has an $(n-k)$-dimensional orthogonal complement.
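The dimension formula is easy to verify numerically. The sketch below (NumPy; it assumes the random rows are linearly independent, which holds with probability 1) computes $\dim W$ as a numerical rank and $\dim W^\perp$ as the corresponding nullity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
B = rng.standard_normal((k, n))   # rows span a subspace W of R^n

s = np.linalg.svd(B, compute_uv=False)
dim_W = int(np.sum(s > 1e-10))    # dim W = numerical rank of B
dim_W_perp = n - dim_W            # dim W-perp = nullity of B
assert dim_W + dim_W_perp == n    # the dimension formula
print(dim_W, dim_W_perp)          # 3 2
```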
48.7 Trivial Intersection
If $W$ is a subspace of an inner product space $V$, then
$$
W \cap W^\perp = \{ 0 \}.
$$
Indeed, if $v \in W \cap W^\perp$, then $v \in W$ and $v$ is orthogonal to every vector in $W$. Since $v \in W$, it is orthogonal to itself:
$$
\langle v, v \rangle = 0.
$$
Positive definiteness gives
$$
v = 0.
$$
Thus the only vector that lies both in a subspace and in its orthogonal complement is the zero vector.
48.8 Direct Sum Decomposition
If $W$ is a subspace of a finite-dimensional inner product space $V$, then
$$
V = W \oplus W^\perp.
$$
This means every vector $v \in V$ can be written uniquely as
$$
v = w + w',
$$
where
$$
w \in W, \qquad w' \in W^\perp.
$$
The uniqueness follows from
$$
W \cap W^\perp = \{ 0 \}.
$$
The existence follows from the dimension formula: since the intersection is trivial,
$$
\dim(W + W^\perp) = \dim W + \dim W^\perp = \dim V,
$$
so $W + W^\perp = V$.
This decomposition is called the orthogonal decomposition of $v$ with respect to $W$. In finite-dimensional inner product spaces, this direct-sum decomposition is one of the central structural properties of orthogonal complements.
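The decomposition can be carried out explicitly when an orthonormal basis of $W$ is available. A minimal NumPy sketch (the basis matrix $Q$ and the sample vector are our own choices):

```python
import numpy as np

# W = span{e1, e2} inside R^3; the columns of Q are an orthonormal basis of W.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
v = np.array([2.0, -1.0, 4.0])

w = Q @ (Q.T @ v)                       # component of v in W
w_perp = v - w                          # component of v in W-perp
assert np.allclose(w + w_perp, v)       # v = w + w_perp
assert np.allclose(Q.T @ w_perp, 0.0)   # w_perp is orthogonal to W
print(w, w_perp)                        # [ 2. -1.  0.]  [0. 0. 4.]
```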
48.9 Double Orthogonal Complement
In finite-dimensional inner product spaces,
$$
(W^\perp)^\perp = W.
$$
The inclusion
$$
W \subseteq (W^\perp)^\perp
$$
is direct. Every vector in $W$ is orthogonal to every vector in $W^\perp$, so every vector in $W$ belongs to $(W^\perp)^\perp$.
To prove equality, compare dimensions. Since
$$
\dim W^\perp = \dim V - \dim W,
$$
we have
$$
\dim (W^\perp)^\perp = \dim V - \dim W^\perp.
$$
Substituting,
$$
\dim (W^\perp)^\perp = \dim V - (\dim V - \dim W) = \dim W.
$$
Thus $W$ is a subspace of $(W^\perp)^\perp$ with the same dimension. Therefore
$$
(W^\perp)^\perp = W.
$$
This finite-dimensional identity must be handled carefully in infinite-dimensional spaces. In Hilbert spaces, the double orthogonal complement of a subspace is its closure, so closedness becomes part of the statement.
48.10 Computing Orthogonal Complements
In $\mathbb{R}^n$, orthogonal complements are often computed by solving homogeneous systems.
Suppose
$$
S = \{ s_1, s_2, \ldots, s_k \} \subseteq \mathbb{R}^n.
$$
A vector $v$ belongs to $S^\perp$ if and only if
$$
s_1 \cdot v = 0, \quad s_2 \cdot v = 0, \quad \ldots, \quad s_k \cdot v = 0.
$$
If we form the matrix
$$
A = \begin{bmatrix} s_1^{\mathsf T} \\ s_2^{\mathsf T} \\ \vdots \\ s_k^{\mathsf T} \end{bmatrix},
$$
then the conditions become
$$
A v = 0.
$$
Therefore
$$
S^\perp = N(A).
$$
So computing an orthogonal complement reduces to computing a null space.
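In practice the null space is often read off the SVD. The following sketch (NumPy; the helper name `orthogonal_complement` and the tolerance are our own choices) returns an orthonormal basis of $S^\perp$:

```python
import numpy as np

def orthogonal_complement(S):
    """Orthonormal basis (as columns) for the complement of span(S) in R^n.

    Stacks the vectors of S as the rows of A; the right-singular vectors
    beyond the numerical rank of A span its null space N(A) = S-perp.
    """
    A = np.vstack(S)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))   # numerical rank of A
    return Vt[rank:].T

# The complement of span{e1, e2} in R^3 should be the z-axis.
basis = orthogonal_complement([np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 1.0, 0.0])])
print(basis)   # a single column proportional to (0, 0, 1)
```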
48.11 Example in $\mathbb{R}^3$
Let
$$
S = \left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}.
$$
Let
$$
v = \begin{bmatrix} x \\ y \\ z \end{bmatrix}.
$$
The condition $Av = 0$ gives
$$
x + y = 0
$$
and
$$
y + z = 0.
$$
Thus
$$
x = -y, \qquad z = -y,
$$
while $y$ is free. Hence
$$
v = y \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}.
$$
Therefore
$$
S^\perp = \operatorname{span}\left\{ \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} \right\}.
$$
Since $\operatorname{span}(S)$ has dimension $2$ in $\mathbb{R}^3$, its orthogonal complement has dimension $1$, as expected.
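The same computation can be checked numerically (a NumPy sketch using the vectors above; the tolerance is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])            # rows are the spanning vectors
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[int(np.sum(s > 1e-10)):]   # rows spanning N(A) = S-perp

direction = np.array([-1.0, 1.0, -1.0])    # expected direction of S-perp
cosine = null_basis[0] @ direction / np.linalg.norm(direction)
assert np.isclose(abs(cosine), 1.0)        # computed basis vector is parallel to it
print(null_basis)
```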
48.12 Orthogonal Complement and Null Space
Let $A$ be an $m \times n$ real matrix. The null space of $A$ is the orthogonal complement of the row space of $A$:
$$
N(A) = \operatorname{Row}(A)^\perp.
$$
Indeed,
$$
A v = 0
$$
means that every row of $A$ has dot product zero with $v$. Thus $v$ is orthogonal to every vector in the row space.
Similarly,
$$
N(A^{\mathsf T}) = \operatorname{Col}(A)^\perp.
$$
The orthogonal-complement identities for row, column, and null spaces are standard finite-dimensional facts. They express the fundamental relation between equations and orthogonality.
48.13 Four Fundamental Subspaces
For an $m \times n$ matrix $A$, the four fundamental subspaces are:

| Subspace | Ambient space | Orthogonal complement |
|---|---|---|
| Row space $\operatorname{Row}(A)$ | $\mathbb{R}^n$ | $N(A)$ |
| Null space $N(A)$ | $\mathbb{R}^n$ | $\operatorname{Row}(A)$ |
| Column space $\operatorname{Col}(A)$ | $\mathbb{R}^m$ | $N(A^{\mathsf T})$ |
| Left null space $N(A^{\mathsf T})$ | $\mathbb{R}^m$ | $\operatorname{Col}(A)$ |

Thus
$$
\mathbb{R}^n = \operatorname{Row}(A) \oplus N(A)
$$
and
$$
\mathbb{R}^m = \operatorname{Col}(A) \oplus N(A^{\mathsf T}).
$$
These decompositions separate each ambient space into a range part and a null part. They are central in solving linear systems, least squares, and understanding rank.
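A single SVD exposes all four subspaces at once. The sketch below (NumPy, our own illustration on a rank-$1$ matrix) extracts bases for each and verifies the two orthogonality relations:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, with m = 2 and n = 3
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))        # rank of A

row_space  = Vt[:r]               # basis rows for Row(A), in R^n
null_space = Vt[r:]               # basis rows for N(A),   in R^n
col_space  = U[:, :r]             # basis cols for Col(A), in R^m
left_null  = U[:, r:]             # basis cols for N(A^T), in R^m

assert np.allclose(row_space @ null_space.T, 0.0)   # Row(A) is orthogonal to N(A)
assert np.allclose(col_space.T @ left_null, 0.0)    # Col(A) is orthogonal to N(A^T)
```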
48.14 Orthogonal Complement and Projection
The orthogonal complement gives the residual part of a projection.
Let $W$ be a finite-dimensional subspace of an inner product space $V$. For every $v \in V$, there exists a unique decomposition
$$
v = p + r,
$$
where
$$
p \in W, \qquad r \in W^\perp.
$$
The vector $p$ is the orthogonal projection of $v$ onto $W$, and $r$ is the residual.
Thus
$$
r = v - p \in W^\perp.
$$
The defining condition for projection is
$$
v - p \perp W.
$$
Equivalently,
$$
\langle v - p, w \rangle = 0
$$
for every $w \in W$.
This is the main equation behind least squares approximation.
48.15 Projection onto a Subspace with Orthonormal Basis
Suppose $W$ has an orthonormal basis
$$
u_1, u_2, \ldots, u_k.
$$
Then the projection of $v$ onto $W$ is
$$
p = \sum_{i=1}^{k} \langle v, u_i \rangle\, u_i.
$$
The residual is
$$
r = v - p.
$$
For every $j$,
$$
\langle r, u_j \rangle = \langle v, u_j \rangle - \sum_{i=1}^{k} \langle v, u_i \rangle \langle u_i, u_j \rangle.
$$
Using orthonormality,
$$
\langle u_i, u_j \rangle = \delta_{ij},
$$
so
$$
\langle r, u_j \rangle = \langle v, u_j \rangle - \langle v, u_j \rangle = 0.
$$
Therefore $r \in W^\perp$.
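The projection formula translates directly into code. A NumPy sketch (the orthonormal basis comes from a QR factorization of random vectors, an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))   # columns u1, u2: orthonormal basis of W

v = rng.standard_normal(4)
p = sum((Q[:, i] @ v) * Q[:, i] for i in range(2))  # p = sum of <v, u_i> u_i
r = v - p                                           # residual

assert np.allclose(Q.T @ r, 0.0)   # <r, u_j> = 0 for each j, so r lies in W-perp
```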
48.16 Least Squares Interpretation
Consider a system
$$
A x = b,
$$
where $A$ is an $m \times n$ matrix and $b \in \mathbb{R}^m$. If $b$ does not lie in $\operatorname{Col}(A)$, the system has no exact solution.
The least squares problem asks for $\hat{x}$ such that
$$
A \hat{x}
$$
is the closest vector in $\operatorname{Col}(A)$ to $b$.
The residual
$$
r = b - A \hat{x}
$$
must lie in the orthogonal complement of the column space:
$$
r \in \operatorname{Col}(A)^\perp.
$$
Since
$$
\operatorname{Col}(A)^\perp = N(A^{\mathsf T}),
$$
we get
$$
A^{\mathsf T} r = 0.
$$
Substituting $r = b - A\hat{x}$ gives the normal equations:
$$
A^{\mathsf T}(b - A \hat{x}) = 0,
$$
or
$$
A^{\mathsf T} A \hat{x} = A^{\mathsf T} b.
$$
This derivation shows that least squares is fundamentally an orthogonal-complement problem.
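The normal equations can be checked against a library solver. The sketch below (NumPy, with random data chosen only for illustration) also verifies that the residual lies in $N(A^{\mathsf T})$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))   # overdetermined: 6 equations, 3 unknowns
b = rng.standard_normal(6)

x_hat = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations: A^T A x = A^T b
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)   # library least-squares solution
assert np.allclose(x_hat, x_ref)

r = b - A @ x_hat
assert np.allclose(A.T @ r, 0.0)   # residual lies in Col(A)-perp = N(A^T)
```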
48.17 Complex Inner Product Spaces
In complex inner product spaces, the definition remains
$$
S^\perp = \{\, v \in V : \langle v, s \rangle = 0 \text{ for all } s \in S \,\}.
$$
The main change is conjugation. In $\mathbb{C}^n$, the standard inner product is
$$
\langle u, v \rangle = \sum_{i=1}^{n} u_i \overline{v_i},
$$
or, depending on convention,
$$
\langle u, v \rangle = \sum_{i=1}^{n} \overline{u_i}\, v_i.
$$
The zero condition is unaffected by the convention, provided it is used consistently.
For a complex matrix $A$,
$$
N(A) = \operatorname{Row}(A)^\perp,
$$
with rows interpreted through the complex inner product. Also,
$$
N(A^{\mathsf H}) = \operatorname{Col}(A)^\perp.
$$
The transpose in the real case becomes the conjugate transpose $A^{\mathsf H} = \overline{A}^{\mathsf T}$ in the complex case.
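A small NumPy check of the conjugation rule (our own illustration; `np.vdot` conjugates its first argument, matching the second convention above):

```python
import numpy as np

A = np.array([[1.0 + 1.0j, 2.0 + 0.0j],
              [0.0 + 0.0j, 1.0 - 2.0j]])
x = np.array([1.0j, 1.0 + 0.0j])
y = np.array([2.0 + 0.0j, -1.0j])

# vdot(u, v) = sum(conj(u_i) * v_i), the complex inner product <u, v>.
lhs = np.vdot(A @ x, y)             # <Ax, y>
rhs = np.vdot(x, A.conj().T @ y)    # <x, A^H y>, with A^H the conjugate transpose
assert np.isclose(lhs, rhs)         # the adjoint of A is its conjugate transpose
```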
48.18 Infinite-Dimensional Caution
In finite dimensions, every subspace is closed, and
$$
(W^\perp)^\perp = W.
$$
In infinite-dimensional Hilbert spaces, a subspace may fail to be closed. In that setting,
$$
(W^\perp)^\perp = \overline{W},
$$
where $\overline{W}$ is the closure of $W$. If $W$ is closed, then
$$
(W^\perp)^\perp = W.
$$
This distinction is invisible in elementary finite-dimensional linear algebra but becomes important in functional analysis. Orthogonal complements are always closed in Hilbert spaces, even when the original subspace is not closed.
48.19 Common Identities
For subspaces $U$ and $W$ of a finite-dimensional inner product space $V$,
$$
\{0\}^\perp = V, \qquad V^\perp = \{0\}.
$$
Also,
$$
(U + W)^\perp = U^\perp \cap W^\perp.
$$
Indeed, a vector is orthogonal to every vector in $U + W$ exactly when it is orthogonal to every vector in $U$ and every vector in $W$.
In finite dimensions,
$$
(U \cap W)^\perp = U^\perp + W^\perp.
$$
These identities show that orthogonal complementation exchanges sums and intersections. It reverses inclusion and changes dimension by complementarity.
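A numerical illustration of the sum identity (a NumPy sketch of ours; the subspaces are random, so their dimensions are generic):

```python
import numpy as np

rng = np.random.default_rng(3)
U_rows = rng.standard_normal((1, 4))   # U: a line in R^4
W_rows = rng.standard_normal((2, 4))   # W: a plane in R^4

# (U + W)-perp is the null space of the stacked spanning vectors.
stacked = np.vstack([U_rows, W_rows])
_, s, Vt = np.linalg.svd(stacked)
sum_perp = Vt[int(np.sum(s > 1e-10)):]   # here 4 - 3 = 1 dimensional

# Each of its vectors is orthogonal to U and to W separately, i.e. it lies
# in U-perp and in W-perp, illustrating (U + W)-perp = U-perp ∩ W-perp.
assert np.allclose(U_rows @ sum_perp.T, 0.0)
assert np.allclose(W_rows @ sum_perp.T, 0.0)
```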
48.20 Summary
The orthogonal complement of a set $S$ is the subspace of all vectors orthogonal to every vector in $S$:
$$
S^\perp = \{\, v \in V : \langle v, s \rangle = 0 \text{ for all } s \in S \,\}.
$$
It is always a subspace. It depends only on the span of $S$, so
$$
S^\perp = (\operatorname{span} S)^\perp.
$$
For a subspace $W$ of a finite-dimensional inner product space $V$,
$$
V = W \oplus W^\perp
$$
and
$$
(W^\perp)^\perp = W.
$$
Orthogonal complements connect geometry with computation. They describe null spaces, residuals, projections, least squares, and the four fundamental subspaces of a matrix.