# Chapter 45. Inner Products

An inner product is a rule that assigns a scalar to a pair of vectors. It generalizes the ordinary dot product in Euclidean space. With an inner product, a vector space gains geometric structure: lengths, angles, orthogonality, projections, and distances become meaningful. Standard references define an inner product space as a real or complex vector space equipped with such a product satisfying linearity, symmetry or conjugate symmetry, and positive definiteness.

## 45.1 The Dot Product as the Model Example

In \(\mathbb{R}^n\), the standard inner product is the dot product. If

$$
x =
\begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{bmatrix},
\qquad
y =
\begin{bmatrix}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{bmatrix},
$$

then

$$
\langle x, y \rangle =
x_1y_1 + x_2y_2 + \cdots + x_ny_n.
$$

Equivalently,

$$
\langle x, y \rangle = x^T y.
$$

For example,

$$
\left\langle
\begin{bmatrix}
2 \\
-1 \\
4
\end{bmatrix},
\begin{bmatrix}
3 \\
5 \\
2
\end{bmatrix}
\right\rangle =
2 \cdot 3 + (-1) \cdot 5 + 4 \cdot 2 =
9.
$$

The dot product is the simplest inner product. It measures how much two vectors point in the same direction.
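
A minimal numerical sketch, assuming NumPy is available, that reproduces the computation above:

```python
import numpy as np

x = np.array([2, -1, 4])
y = np.array([3, 5, 2])

# <x, y> = x1*y1 + x2*y2 + x3*y3 = x^T y
print(np.dot(x, y))  # 9
print(x @ y)         # the same value via the matrix-product operator
```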

## 45.2 Definition

Let \(V\) be a vector space over \(\mathbb{R}\). An inner product on \(V\) is a function

$$
\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}
$$

satisfying the following properties for all \(u, v, w \in V\) and all scalars \(a, b \in \mathbb{R}\).

First, it is linear in the first argument:

$$
\langle au + bv, w \rangle =
a \langle u, w \rangle + b \langle v, w \rangle.
$$

Second, it is symmetric:

$$
\langle u, v \rangle = \langle v, u \rangle.
$$

Third, it is positive definite:

$$
\langle v, v \rangle \ge 0,
$$

and

$$
\langle v, v \rangle = 0
\quad
\text{if and only if}
\quad
v = 0.
$$

A vector space equipped with an inner product is called an inner product space.
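
The axioms can be spot-checked numerically, which is an illustration rather than a proof. A sketch for the dot product on \(\mathbb{R}^4\), with randomly chosen vectors and scalars:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))  # three random vectors in R^4
a, b = 2.0, -3.0

# Linearity in the first argument: <au + bv, w> = a<u, w> + b<v, w>
assert np.isclose((a * u + b * v) @ w, a * (u @ w) + b * (v @ w))

# Symmetry: <u, v> = <v, u>
assert np.isclose(u @ v, v @ u)

# Positive definiteness: <v, v> > 0 for this nonzero v
assert v @ v > 0
```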

## 45.3 Complex Inner Products

If \(V\) is a vector space over \(\mathbb{C}\), symmetry must be replaced by conjugate symmetry:

$$
\langle u, v \rangle =
\overline{\langle v, u \rangle}.
$$

A common convention is that the inner product is linear in the first argument and conjugate-linear in the second:

$$
\langle au + bv, w \rangle =
a\langle u, w \rangle + b\langle v, w \rangle,
$$

and

$$
\langle u, av + bw \rangle =
\overline{a}\langle u, v \rangle
+
\overline{b}\langle u, w \rangle.
$$

Some books use the opposite convention. The mathematics is equivalent, but the convention must remain consistent.

For \(x, y \in \mathbb{C}^n\), the standard complex inner product is

$$
\langle x, y \rangle =
x_1\overline{y_1}
+
x_2\overline{y_2}
+
\cdots
+
x_n\overline{y_n}.
$$

The conjugates are necessary. Without them, \(\langle x, x \rangle\) may fail to be nonnegative.

For example, if \(x = i \in \mathbb{C}\), then

$$
x^2 = i^2 = -1,
$$

which cannot be a squared length. But

$$
x\overline{x} = i(-i) = 1.
$$

Thus conjugation is essential in complex inner product spaces.
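
A sketch of the standard complex inner product in NumPy. One caveat worth flagging: NumPy's `np.vdot(a, b)` conjugates its *first* argument, so under this chapter's convention (conjugation in the second slot) the explicit sum below corresponds to `np.vdot(y, x)`:

```python
import numpy as np

x = np.array([1 + 1j, 2j])
y = np.array([3 + 0j, 1 - 1j])

# <x, y> = sum_i x_i * conj(y_i): linear in x, conjugate-linear in y
print(np.sum(x * np.conj(y)))
print(np.vdot(y, x))  # same value; vdot conjugates its first argument

# <x, x> is real and nonnegative thanks to the conjugates
print(np.sum(x * np.conj(x)).real)  # squared length of x
```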

## 45.4 Length from an Inner Product

An inner product defines the length, or norm, of a vector by

$$
\|v\| = \sqrt{\langle v, v \rangle}.
$$

The positive definiteness axiom guarantees that the quantity under the square root is nonnegative, so the expression is well-defined. It also ensures that only the zero vector has length zero.

In \(\mathbb{R}^n\), this gives the usual Euclidean length:

$$
\|v\| =
\sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}.
$$

For example,

$$
v =
\begin{bmatrix}
3 \\
4
\end{bmatrix}
$$

has length

$$
\|v\| =
\sqrt{3^2 + 4^2} =
5.
$$

Thus inner products generalize the Pythagorean notion of length.
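
A short sketch of the 3-4-5 computation, assuming NumPy:

```python
import numpy as np

v = np.array([3.0, 4.0])

# ||v|| = sqrt(<v, v>)
print(np.sqrt(v @ v))     # 5.0
print(np.linalg.norm(v))  # same value via the built-in Euclidean norm
```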

## 45.5 Orthogonality

Two vectors \(u\) and \(v\) are orthogonal if

$$
\langle u, v \rangle = 0.
$$

Orthogonality generalizes perpendicularity.

In \(\mathbb{R}^2\), let

$$
u =
\begin{bmatrix}
1 \\
2
\end{bmatrix},
\qquad
v =
\begin{bmatrix}
2 \\
-1
\end{bmatrix}.
$$

Then

$$
\langle u, v \rangle =
1 \cdot 2 + 2 \cdot (-1) =
0.
$$

Therefore \(u\) and \(v\) are orthogonal.

Orthogonality is one of the main reasons inner products are useful. It allows vectors to be decomposed into independent geometric components.

## 45.6 Angles

In a real inner product space, the angle \(\theta\) between two nonzero vectors \(u\) and \(v\) is defined by

$$
\cos \theta =
\frac{\langle u, v \rangle}{\|u\|\|v\|}.
$$

This formula agrees with elementary geometry in \(\mathbb{R}^2\) and \(\mathbb{R}^3\).

If \(\langle u, v \rangle > 0\), the angle is acute.

If \(\langle u, v \rangle = 0\), the angle is right.

If \(\langle u, v \rangle < 0\), the angle is obtuse.

The formula depends on the Cauchy-Schwarz inequality, which guarantees that

$$
-1
\le
\frac{\langle u, v \rangle}{\|u\|\|v\|}
\le
1.
$$

This ensures that the angle is well-defined.
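
A sketch of the angle formula in NumPy. The `np.clip` call is a floating-point safeguard: Cauchy-Schwarz guarantees the ratio lies in \([-1, 1]\) exactly, but round-off can push it slightly outside.

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
print(np.degrees(theta))  # 45.0
```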

## 45.7 Cauchy-Schwarz Inequality

The Cauchy-Schwarz inequality states that for all vectors \(u\) and \(v\) in an inner product space,

$$
|\langle u, v \rangle|
\le
\|u\|\|v\|.
$$

This is one of the fundamental inequalities in linear algebra.

It says that, in absolute value, the inner product of two vectors cannot exceed the product of their lengths.

Equality holds precisely when one vector is a scalar multiple of the other.

In \(\mathbb{R}^n\), this becomes

$$
|x_1y_1 + x_2y_2 + \cdots + x_ny_n|
\le
\sqrt{x_1^2 + \cdots + x_n^2}
\sqrt{y_1^2 + \cdots + y_n^2}.
$$

The inequality is the algebraic foundation for angles, projections, and many estimates in analysis.
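
As a numerical spot-check (not a proof), one can test the inequality on many random pairs; the small slack term allows for round-off:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal((2, 5))  # a random pair in R^5
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
print("Cauchy-Schwarz held on all 1000 samples")
```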

## 45.8 Distance

An inner product defines a norm, and a norm defines a distance.

The distance between \(u\) and \(v\) is

$$
d(u,v) = \|u - v\|.
$$

In \(\mathbb{R}^n\), this is the Euclidean distance:

$$
d(u,v) =
\sqrt{(u_1-v_1)^2 + \cdots + (u_n-v_n)^2}.
$$

For example, if

$$
u =
\begin{bmatrix}
1 \\
2
\end{bmatrix},
\qquad
v =
\begin{bmatrix}
4 \\
6
\end{bmatrix},
$$

then

$$
d(u,v) =
\left\|
\begin{bmatrix}
-3 \\
-4
\end{bmatrix}
\right\| =
5.
$$

Thus an inner product space is not only algebraic. It also has metric structure.
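
The example above, sketched in NumPy:

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([4.0, 6.0])

# d(u, v) = ||u - v||
print(np.linalg.norm(u - v))  # 5.0
```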

## 45.9 Projection onto a Vector

Let \(v\) be a nonzero vector. The projection of \(u\) onto \(v\) is the component of \(u\) in the direction of \(v\).

It is given by

$$
\operatorname{proj}_v(u) =
\frac{\langle u, v \rangle}{\langle v, v \rangle}v.
$$

The scalar

$$
\frac{\langle u, v \rangle}{\langle v, v \rangle}
$$

measures how much of \(u\) lies in the direction of \(v\).

The difference

$$
u - \operatorname{proj}_v(u)
$$

is orthogonal to \(v\).

Indeed,

$$
\left\langle
u - \frac{\langle u, v \rangle}{\langle v, v \rangle}v,
v
\right\rangle =
\langle u, v \rangle -
\frac{\langle u, v \rangle}{\langle v, v \rangle}
\langle v, v \rangle =
0.
$$

Projection is the basic operation behind orthogonal decomposition, least squares, Fourier approximation, and many numerical algorithms.
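
A sketch of projection and its orthogonal residual, assuming NumPy; the helper name `proj` mirrors the notation above:

```python
import numpy as np

def proj(u, v):
    """Orthogonal projection of u onto the line spanned by a nonzero v."""
    return ((u @ v) / (v @ v)) * v

u = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])

p = proj(u, v)  # component of u along v
r = u - p       # residual; this exhibits the decomposition u = p + r
print(p, r)                    # [2. 0.] [0. 3.]
print(np.isclose(r @ v, 0.0))  # True: the residual is orthogonal to v
```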

## 45.10 Orthogonal Decomposition

Suppose \(v\) is nonzero. Every vector \(u\) can be decomposed as

$$
u = p + r,
$$

where \(p\) is parallel to \(v\) and \(r\) is orthogonal to \(v\).

The parallel part is

$$
p = \operatorname{proj}_v(u),
$$

and the orthogonal part is

$$
r = u - \operatorname{proj}_v(u).
$$

This decomposition separates a vector into a part explained by \(v\) and a residual part perpendicular to \(v\).

In data fitting, the projection is the best approximation in the chosen direction. The residual is the error that remains after the approximation.

## 45.11 Orthonormal Vectors

A vector \(v\) is a unit vector if

$$
\|v\| = 1.
$$

A collection of vectors \(v_1, v_2, \ldots, v_k\) is orthogonal if

$$
\langle v_i, v_j \rangle = 0
\quad
\text{whenever}
\quad
i \ne j.
$$

It is orthonormal if it is orthogonal and each vector has length one:

$$
\langle v_i, v_j \rangle =
\begin{cases}
1, & i = j, \\
0, & i \ne j.
\end{cases}
$$

Orthonormal vectors are especially convenient because coordinates are computed directly by inner products.

If \(e_1, e_2, \ldots, e_n\) is an orthonormal basis and \(v \in V\), then

$$
v =
\langle v, e_1 \rangle e_1
+
\langle v, e_2 \rangle e_2
+
\cdots
+
\langle v, e_n \rangle e_n.
$$

The coefficient of \(e_i\) is simply \(\langle v, e_i \rangle\).
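
A sketch of coordinate extraction in an orthonormal basis of \(\mathbb{R}^2\) (the standard basis rotated by 45 degrees):

```python
import numpy as np

# An orthonormal basis of R^2
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([3.0, 5.0])

# Coordinates are read off by inner products: c_i = <v, e_i>
c1, c2 = v @ e1, v @ e2
print(np.allclose(c1 * e1 + c2 * e2, v))  # True: v is recovered exactly
```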

## 45.12 Matrix Form of an Inner Product

The standard dot product is not the only inner product on \(\mathbb{R}^n\). In fact, every inner product on \(\mathbb{R}^n\) has the form

$$
\langle x, y \rangle_A = x^T A y,
$$

where \(A\) is a symmetric positive definite matrix.

The symmetry of \(A\) gives

$$
x^T A y = y^T A x.
$$

Positive definiteness gives

$$
x^T A x > 0
\quad
\text{for every nonzero } x.
$$

For example, let

$$
A =
\begin{bmatrix}
2 & 0 \\
0 & 1
\end{bmatrix}.
$$

Then

$$
\langle x, y \rangle_A =
x^T
\begin{bmatrix}
2 & 0 \\
0 & 1
\end{bmatrix}
y.
$$

This inner product weights the first coordinate twice as strongly as the second.

If

$$
x =
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix},
\qquad
y =
\begin{bmatrix}
y_1 \\
y_2
\end{bmatrix},
$$

then

$$
\langle x, y \rangle_A =
2x_1y_1 + x_2y_2.
$$

Different inner products impose different geometries on the same vector space.
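
The weighted example above, sketched in NumPy:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # symmetric positive definite

def ip_A(x, y):
    """<x, y>_A = x^T A y."""
    return x @ A @ y

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
print(ip_A(x, y))                          # 2*1*3 + 2*4 = 14.0
print(np.isclose(ip_A(x, y), ip_A(y, x)))  # True: symmetric because A is
```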

## 45.13 Inner Products on Function Spaces

Inner products also appear on spaces of functions.

For continuous real-valued functions on an interval \([a,b]\), a standard inner product is

$$
\langle f, g \rangle =
\int_a^b f(x)g(x)\,dx.
$$

This is analogous to the dot product. Instead of summing products of components, we integrate products of function values.

The corresponding norm is

$$
\|f\| =
\left(
\int_a^b f(x)^2\,dx
\right)^{1/2}.
$$

Two functions \(f\) and \(g\) are orthogonal if

$$
\int_a^b f(x)g(x)\,dx = 0.
$$

For example, on \([-\pi,\pi]\), the functions \(\sin x\) and \(\cos x\) are orthogonal because

$$
\int_{-\pi}^{\pi} \sin x \cos x \, dx = 0.
$$

This idea is central in Fourier series, approximation theory, probability, and differential equations.
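
A numerical sketch of the sin/cos computation, assuming SciPy is available for the quadrature:

```python
import numpy as np
from scipy.integrate import quad

# <f, g> = integral of f(x) g(x) dx over [-pi, pi]
ip, _ = quad(lambda x: np.sin(x) * np.cos(x), -np.pi, np.pi)
print(abs(ip) < 1e-12)  # True: sin and cos are orthogonal on [-pi, pi]

# ||sin||^2 = integral of sin^2 dx = pi on this interval
sq, _ = quad(lambda x: np.sin(x) ** 2, -np.pi, np.pi)
print(np.sqrt(sq), np.sqrt(np.pi))  # both approximately 1.7725
```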

## 45.14 Inner Products on Matrix Spaces

Matrices can also form inner product spaces.

For real \(m \times n\) matrices, the Frobenius inner product is

$$
\langle A, B \rangle =
\operatorname{tr}(A^T B).
$$

Equivalently,

$$
\langle A, B \rangle =
\sum_{i=1}^m \sum_{j=1}^n a_{ij}b_{ij}.
$$

This treats a matrix as a long vector formed from its entries.

The corresponding norm is the Frobenius norm:

$$
\|A\|_F =
\sqrt{
\sum_{i=1}^m \sum_{j=1}^n a_{ij}^2
}.
$$

This inner product is used in numerical linear algebra, statistics, optimization, and matrix approximation.
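
Both forms of the Frobenius inner product, sketched in NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# trace(A^T B) equals the entrywise sum of products
print(np.trace(A.T @ B))  # 70.0
print(np.sum(A * B))      # 70.0, entry by entry
print(np.linalg.norm(A))  # Frobenius norm: NumPy's default matrix norm
```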

## 45.15 Weighted Inner Products

A weighted inner product assigns different importance to different coordinates or regions.

In \(\mathbb{R}^n\), if \(w_1, \ldots, w_n\) are positive weights, define

$$
\langle x, y \rangle_w =
w_1x_1y_1
+
w_2x_2y_2
+
\cdots
+
w_nx_ny_n.
$$

This is an inner product because all weights are positive.

On a function space, a weighted inner product may have the form

$$
\langle f, g \rangle_w =
\int_a^b f(x)g(x)w(x)\,dx,
$$

where \(w(x) > 0\).

Weighted inner products are common when some coordinates, samples, or regions matter more than others.
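
A sketch of the coordinate-weighted case, which is also the matrix form of Section 45.12 with \(A = \operatorname{diag}(w)\):

```python
import numpy as np

w = np.array([2.0, 1.0, 0.5])  # positive weights
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# <x, y>_w = sum_i w_i x_i y_i
print(np.sum(w * x * y))   # 8 + 10 + 9 = 27.0
print(x @ np.diag(w) @ y)  # same value, as x^T diag(w) y
```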

## 45.16 Inner Products and Bases

Let \(V\) be a finite-dimensional real vector space with basis

$$
B = (v_1, v_2, \ldots, v_n).
$$

An inner product on \(V\) is determined by the inner products of the basis vectors.

Define the Gram matrix

$$
G =
\begin{bmatrix}
\langle v_1, v_1 \rangle & \langle v_1, v_2 \rangle & \cdots & \langle v_1, v_n \rangle \\
\langle v_2, v_1 \rangle & \langle v_2, v_2 \rangle & \cdots & \langle v_2, v_n \rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle v_n, v_1 \rangle & \langle v_n, v_2 \rangle & \cdots & \langle v_n, v_n \rangle
\end{bmatrix}.
$$

If

$$
u = x_1v_1 + \cdots + x_nv_n,
\qquad
v = y_1v_1 + \cdots + y_nv_n,
$$

then

$$
\langle u, v \rangle =
x^T G y.
$$

The Gram matrix records the geometry of the basis. If the basis is orthonormal, then \(G = I\). If the basis is not orthonormal, \(G\) contains the correction terms needed to compute lengths and angles.
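
A sketch in NumPy: for basis vectors stored as the columns of a matrix \(V\), the Gram matrix under the dot product is \(V^T V\), and \(x^T G y\) reproduces the inner product of the expanded vectors.

```python
import numpy as np

# A non-orthonormal basis of R^2, stored as columns of V
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])

G = V.T @ V  # Gram matrix under the dot product
print(G)     # [[1. 1.], [1. 2.]]

# For u = V x and v = V y, <u, v> equals x^T G y
x = np.array([2.0, 1.0])
y = np.array([0.0, 3.0])
print(np.isclose((V @ x) @ (V @ y), x @ G @ y))  # True
```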

## 45.17 The Gram Matrix

For vectors \(v_1, \ldots, v_k\) in an inner product space, the Gram matrix is

$$
G =
\begin{bmatrix}
\langle v_1, v_1 \rangle & \langle v_1, v_2 \rangle & \cdots & \langle v_1, v_k \rangle \\
\langle v_2, v_1 \rangle & \langle v_2, v_2 \rangle & \cdots & \langle v_2, v_k \rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle v_k, v_1 \rangle & \langle v_k, v_2 \rangle & \cdots & \langle v_k, v_k \rangle
\end{bmatrix}.
$$

The Gram matrix is symmetric in real spaces and Hermitian in complex spaces.

It is positive semidefinite:

$$
c^T G c \ge 0
$$

for all coefficient vectors \(c\).

It is positive definite precisely when the vectors \(v_1, \ldots, v_k\) are linearly independent.

Thus the Gram matrix connects inner products with linear independence.
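
A numerical illustration of that connection, assuming NumPy:

```python
import numpy as np

# Independent columns: the Gram matrix is positive definite
V1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(np.linalg.eigvalsh(V1.T @ V1))  # [1. 3.]: all eigenvalues positive

# Dependent columns (second = 2 * first): the Gram matrix is singular
V2 = np.array([[1.0, 2.0], [1.0, 2.0]])
print(np.linalg.eigvalsh(V2.T @ V2))  # one eigenvalue is 0 (up to round-off)
```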

## 45.18 Polarization

An inner product determines a norm. In real inner product spaces, the inner product can also be recovered from the norm by the polarization identity:

$$
\langle u, v \rangle =
\frac{1}{4}
\left(
\|u+v\|^2 - \|u-v\|^2
\right).
$$

This identity shows that a norm arising from an inner product determines that inner product completely: lengths alone suffice to recover \(\langle u, v \rangle\) for every pair of vectors.

Not every norm comes from an inner product. A norm comes from an inner product precisely when it satisfies the parallelogram law.
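
A quick random check of the identity, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal((2, 4))  # a random pair in R^4

lhs = u @ v
rhs = 0.25 * (np.linalg.norm(u + v) ** 2 - np.linalg.norm(u - v) ** 2)
print(np.isclose(lhs, rhs))  # True: the norm recovers the inner product
```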

## 45.19 Parallelogram Law

In every inner product space,

$$
\|u+v\|^2 + \|u-v\|^2 =
2\|u\|^2 + 2\|v\|^2.
$$

This is the parallelogram law.

It expresses a geometric fact: in a parallelogram, the sum of the squares of the diagonal lengths equals the sum of the squares of all four side lengths.

The law is a signature property of norms induced by inner products. For example, the norm

$$
\|x\|_1 = |x_1| + |x_2|
$$

on \(\mathbb{R}^2\) does not satisfy the parallelogram law, so it does not come from an inner product.
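
A sketch contrasting the two norms on the vectors \(u = (1,0)\) and \(v = (0,1)\):

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

def parallelogram_sides(norm):
    lhs = norm(u + v) ** 2 + norm(u - v) ** 2
    rhs = 2 * norm(u) ** 2 + 2 * norm(v) ** 2
    return lhs, rhs

print(parallelogram_sides(np.linalg.norm))                  # (4.0, 4.0): Euclidean norm passes
print(parallelogram_sides(lambda x: np.linalg.norm(x, 1)))  # (8.0, 4.0): the 1-norm fails
```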

## 45.20 Examples and Nonexamples

The standard dot product on \(\mathbb{R}^n\) is an inner product.

The standard complex product

$$
\langle x, y \rangle = \sum_{i=1}^n x_i\overline{y_i}
$$

is an inner product on \(\mathbb{C}^n\).

The integral formula

$$
\langle f,g\rangle = \int_a^b f(x)g(x)\,dx
$$

is an inner product on suitable real function spaces.

The formula

$$
\langle x,y\rangle = x_1y_1 - x_2y_2
$$

on \(\mathbb{R}^2\) is not an inner product because

$$
\langle (0,1),(0,1)\rangle = -1.
$$

It fails positive definiteness.

The formula

$$
\langle x,y\rangle = x_1y_1
$$

on \(\mathbb{R}^2\) is also not an inner product because

$$
\langle (0,1),(0,1)\rangle = 0
$$

even though \((0,1)\) is not the zero vector.

It fails positive definiteness.

## 45.21 Summary

An inner product turns a vector space into a geometric object. It defines length, angle, orthogonality, projection, distance, and approximation.

The standard dot product is the basic example, but inner products also occur on complex vector spaces, matrix spaces, polynomial spaces, and function spaces.

The essential properties are linearity, symmetry or conjugate symmetry, and positive definiteness. These axioms are strong enough to support the main geometry of linear algebra.
