# Chapter 114. Fourier Series and Transforms

Fourier analysis studies how functions and signals can be decomposed into waves.

The basic idea is the same as the idea of coordinates in linear algebra. A vector can be written as a linear combination of basis vectors. A function can often be written as a linear combination of basis functions. In Fourier analysis, the basis functions are sines, cosines, or complex exponentials.

Fourier series represent periodic functions by infinite sums of sines and cosines. Fourier transforms represent nonperiodic functions by continuous superpositions of complex exponentials. In both cases, the main operation is a change of basis from the original domain to a frequency domain. MIT's linear algebra notes describe Fourier series as linear algebra for functions, with sines and cosines acting as an orthogonal basis.

## 114.1 Periodic Functions

A function \(f\) is periodic with period \(T>0\) if

$$
f(x+T)=f(x)
$$

for all \(x\) in its domain.

The graph repeats after every interval of length \(T\). For example, sine and cosine are periodic with period \(2\pi\):

$$
\sin(x+2\pi)=\sin x,
\qquad
\cos(x+2\pi)=\cos x.
$$

Fourier series are designed for periodic functions. They express such functions as sums of waves whose frequencies are integer multiples of a fundamental frequency.

If the period is \(2\pi\), the basic functions are

$$
1,\quad
\cos x,\quad
\sin x,\quad
\cos 2x,\quad
\sin 2x,\quad
\cos 3x,\quad
\sin 3x,\ldots
$$

These functions repeat every \(2\pi\).

## 114.2 Function Spaces

Linear algebra begins with vector spaces. Fourier analysis uses vector spaces whose elements are functions.

For example, consider real-valued functions on the interval \([-\pi,\pi]\). Such functions can be added and multiplied by scalars:

$$
(f+g)(x)=f(x)+g(x),
$$

$$
(cf)(x)=c f(x).
$$

These operations obey the usual vector space rules.

Thus functions can be treated as vectors. Such a vector may have infinitely many degrees of freedom, but the algebraic ideas remain the same.

| Finite-dimensional linear algebra | Fourier analysis |
|---|---|
| Vector | Function |
| Dot product | Integral inner product |
| Basis vector | Basis function |
| Coordinates | Fourier coefficients |
| Projection | Fourier approximation |
| Orthogonal basis | Sine and cosine system |

This analogy is the foundation of the chapter.

## 114.3 Inner Products of Functions

For real-valued functions on \([-\pi,\pi]\), define the inner product by

$$
\langle f,g\rangle =
\int_{-\pi}^{\pi} f(x)g(x)\,dx.
$$

This is the function-space analogue of the dot product.

It allows us to define orthogonality. Two functions \(f\) and \(g\) are orthogonal if

$$
\langle f,g\rangle=0.
$$

For example,

$$
\int_{-\pi}^{\pi} \sin x \cos x\,dx = 0.
$$

Thus \(\sin x\) and \(\cos x\) are orthogonal on \([-\pi,\pi]\).

More generally, the sine and cosine functions of different integer frequencies are orthogonal on this interval. This orthogonality is what makes Fourier coefficients computable by projection.

## 114.4 Orthogonality of Sines and Cosines

The basic orthogonality relations are

$$
\int_{-\pi}^{\pi} \cos(mx)\cos(nx)\,dx = 0
\quad\text{if }m\neq n,
$$

$$
\int_{-\pi}^{\pi} \sin(mx)\sin(nx)\,dx = 0
\quad\text{if }m\neq n,
$$

and

$$
\int_{-\pi}^{\pi} \sin(mx)\cos(nx)\,dx = 0
$$

for positive integers \(m,n\).

Also,

$$
\int_{-\pi}^{\pi} \cos^2(nx)\,dx = \pi,
\qquad
\int_{-\pi}^{\pi} \sin^2(nx)\,dx = \pi.
$$

The constant function has squared norm

$$
\int_{-\pi}^{\pi} 1^2\,dx = 2\pi.
$$

These formulas make the sine and cosine system behave like an orthogonal coordinate system.
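These orthogonality relations can be checked numerically. The sketch below, assuming NumPy is available, approximates the integrals on \([-\pi,\pi]\) with a midpoint rule, which is highly accurate for trigonometric integrands:

```python
import numpy as np

# Approximate <f, g> = integral of f*g over [-pi, pi] by the midpoint rule.
N = 4096
dx = 2 * np.pi / N
x = -np.pi + (np.arange(N) + 0.5) * dx  # midpoints of N equal subintervals

def inner(f, g):
    """Inner product of two functions on [-pi, pi]."""
    return np.sum(f(x) * g(x)) * dx

# Different frequencies are orthogonal.
cos2_cos3 = inner(lambda t: np.cos(2 * t), lambda t: np.cos(3 * t))
sin2_sin5 = inner(lambda t: np.sin(2 * t), lambda t: np.sin(5 * t))
sin4_cos4 = inner(lambda t: np.sin(4 * t), lambda t: np.cos(4 * t))

# Squared norms: pi for cos(nx) and sin(nx), 2*pi for the constant function 1.
cos3_norm2 = inner(lambda t: np.cos(3 * t), lambda t: np.cos(3 * t))
one_norm2 = inner(lambda t: np.ones_like(t), lambda t: np.ones_like(t))
```

The cross terms come out near zero and the squared norms near \(\pi\) and \(2\pi\), matching the formulas above.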

## 114.5 Fourier Series

A Fourier series represents a periodic function as

$$
f(x)
\sim
\frac{a_0}{2}
+
\sum_{n=1}^{\infty}
\left(
a_n\cos nx + b_n\sin nx
\right).
$$

The symbol \(\sim\) indicates that the series is associated with \(f\). Under suitable hypotheses, it converges to \(f\) at many points, but convergence requires separate analysis.

The coefficients \(a_n\) and \(b_n\) are the coordinates of \(f\) in the trigonometric basis.

They are computed by projection:

$$
a_n =
\frac{1}{\pi}
\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx,
\qquad n\geq 0,
$$

$$
b_n =
\frac{1}{\pi}
\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx,
\qquad n\geq 1.
$$

This is the same idea as computing coordinates in an orthogonal basis:

$$
\text{coordinate} =
\frac{\langle v,b\rangle}{\langle b,b\rangle}.
$$
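The projection formulas can be applied numerically. The following sketch, assuming NumPy, computes the coefficients of the square wave \(f(x)=\operatorname{sign}(x)\), whose known series has \(a_n=0\) and \(b_n=4/(\pi n)\) for odd \(n\):

```python
import numpy as np

# Fourier coefficients of f(x) = sign(x) on [-pi, pi] via the projection
# formulas, with integrals approximated by a midpoint rule.
N = 200000
dx = 2 * np.pi / N
x = -np.pi + (np.arange(N) + 0.5) * dx
f = np.sign(x)

def a(n):
    return np.sum(f * np.cos(n * x)) * dx / np.pi

def b(n):
    return np.sum(f * np.sin(n * x)) * dx / np.pi

# The square wave is odd, so every a_n vanishes;
# b_n = 4/(pi*n) for odd n and 0 for even n.
coeffs_a = [a(n) for n in range(0, 6)]
coeffs_b = [b(n) for n in range(1, 6)]
```

The computed values agree with the known coefficients to high accuracy.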

## 114.6 The Constant Term

The coefficient \(a_0\) is

$$
a_0 =
\frac{1}{\pi}
\int_{-\pi}^{\pi} f(x)\,dx.
$$

The Fourier series uses \(a_0/2\), so the constant term is

$$
\frac{a_0}{2} =
\frac{1}{2\pi}
\int_{-\pi}^{\pi} f(x)\,dx.
$$

This is the average value of \(f\) over one period.

Thus the Fourier series begins with the average level of the function. The remaining sine and cosine terms describe oscillatory deviations from that average.

## 114.7 Example: A Simple Cosine Function

Let

$$
f(x)=3+2\cos x.
$$

This function is already written in Fourier form:

$$
f(x) =
\frac{a_0}{2}
+
a_1\cos x
+
b_1\sin x
+
a_2\cos 2x
+
\cdots.
$$

The coefficients are

$$
a_0=6,
\qquad
a_1=2,
$$

and all other coefficients are zero.

The average value is \(3\). The only nonconstant frequency present is frequency \(1\), with cosine amplitude \(2\).

This example shows that Fourier coefficients measure the amount of each frequency in a function.

## 114.8 Even and Odd Functions

Symmetry simplifies Fourier series.

A function is even if

$$
f(-x)=f(x).
$$

A function is odd if

$$
f(-x)=-f(x).
$$

Cosine is even. Sine is odd.

If \(f\) is even, then all sine coefficients vanish:

$$
b_n=0.
$$

The Fourier series contains only cosines.

If \(f\) is odd, then all cosine coefficients vanish, including the constant term:

$$
a_n=0.
$$

The Fourier series contains only sines.

This is a useful application of orthogonality and symmetry.

## 114.9 Complex Exponential Form

Fourier series can also be written using complex exponentials.

Euler's formula gives

$$
e^{inx}=\cos(nx)+i\sin(nx).
$$

A Fourier series may be written as

$$
f(x)
\sim
\sum_{n=-\infty}^{\infty} c_n e^{inx}.
$$

The complex Fourier coefficients are

$$
c_n =
\frac{1}{2\pi}
\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx.
$$

The complex form is often cleaner than the sine-cosine form. It treats positive and negative frequencies symmetrically.

For real-valued functions, the coefficients satisfy

$$
c_{-n}=\overline{c_n}.
$$

This condition ensures that the imaginary parts cancel and the reconstructed function is real.
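The complex coefficients and their conjugate symmetry can be checked on a small example. The sketch below, assuming NumPy, uses the real function \(f(x)=3+2\cos x+\sin 2x\), for which \(c_0=3\), \(c_1=1\), and \(c_2=-i/2\):

```python
import numpy as np

# Complex Fourier coefficients c_n of a real trigonometric polynomial,
# computed by a midpoint-rule approximation of the defining integral.
N = 4096
dx = 2 * np.pi / N
x = -np.pi + (np.arange(N) + 0.5) * dx
f = 3 + 2 * np.cos(x) + np.sin(2 * x)

def c(n):
    return np.sum(f * np.exp(-1j * n * x)) * dx / (2 * np.pi)

# c_0 is the average value; c_1 = a_1/2; c_2 = -i*b_2/2.
c0, c1, c2 = c(0), c(1), c(2)

# Conjugate symmetry c_{-n} = conj(c_n) for a real function.
sym_ok = all(abs(c(-n) - np.conj(c(n))) < 1e-10 for n in range(1, 4))
```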

## 114.10 Fourier Series as Projection

Let \(V_N\) be the finite-dimensional subspace spanned by

$$
1,\cos x,\sin x,\ldots,\cos Nx,\sin Nx.
$$

The \(N\)-th partial Fourier sum is the orthogonal projection of \(f\) onto \(V_N\):

$$
S_N f(x) =
\frac{a_0}{2}
+
\sum_{n=1}^{N}
(a_n\cos nx+b_n\sin nx).
$$

This is the best approximation to \(f\) inside \(V_N\) in the mean-square sense:

$$
\|f-S_Nf\|_2
\leq
\|f-g\|_2
$$

for every \(g\in V_N\).

Thus Fourier approximation is least squares approximation in a function space.

This connects Fourier series directly with orthogonal projections from linear algebra.
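The least-squares property can be observed numerically: as \(N\) grows, the projection \(S_Nf\) captures more of \(f\) and the mean-square error shrinks. A sketch, assuming NumPy and using the square wave with its known coefficients \(b_n=4/(\pi n)\) for odd \(n\):

```python
import numpy as np

# Mean-square error ||f - S_N f||_2 for the square wave sign(x),
# with the L2 norm approximated by a midpoint rule.
M = 200000
dx = 2 * np.pi / M
x = -np.pi + (np.arange(M) + 0.5) * dx
f = np.sign(x)

def partial_sum(N):
    """S_N f using the known coefficients b_n = 4/(pi*n) for odd n."""
    s = np.zeros_like(x)
    for n in range(1, N + 1, 2):  # only odd harmonics contribute
        s += (4 / (np.pi * n)) * np.sin(n * x)
    return s

def l2_error(N):
    return np.sqrt(np.sum((f - partial_sum(N)) ** 2) * dx)

errors = [l2_error(N) for N in (1, 5, 25, 125)]
decreasing = all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))
```

Each additional group of basis functions enlarges \(V_N\), so the best approximation can only improve.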

## 114.11 Parseval's Identity

In finite-dimensional linear algebra, an orthonormal basis preserves length:

$$
\|v\|^2 =
\sum_i |\langle v,q_i\rangle|^2.
$$

Fourier analysis has an analogous result.

For suitable functions,

$$
\frac{1}{\pi}
\int_{-\pi}^{\pi}|f(x)|^2\,dx =
\frac{a_0^2}{2}
+
\sum_{n=1}^{\infty}(a_n^2+b_n^2).
$$

In complex form,

$$
\frac{1}{2\pi}
\int_{-\pi}^{\pi}|f(x)|^2\,dx =
\sum_{n=-\infty}^{\infty}|c_n|^2.
$$

This identity says that energy in the original function equals energy in its Fourier coefficients. Fourier transform theory has a related Parseval-Plancherel theorem, which preserves \(L^2\) norms and inner products under the transform.
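Parseval's identity can be checked for the square wave \(\operatorname{sign}(x)\): the left side is \(\frac{1}{\pi}\int_{-\pi}^{\pi}1\,dx=2\), and the right side is \(\sum (4/(\pi n))^2\) over odd \(n\). A sketch, assuming NumPy:

```python
import numpy as np

# Energy balance for the square wave: (1/pi) * integral of f^2 equals the
# sum of the squared coefficients. The series is truncated, so the partial
# sum approaches 2 from below.
lhs = 2.0  # (1/pi) * integral_{-pi}^{pi} sign(x)^2 dx = (1/pi) * 2*pi
rhs = sum((4 / (np.pi * n)) ** 2 for n in range(1, 200001, 2))
```

Incidentally, rearranging this identity gives the classical evaluation \(\sum_{n\ \text{odd}} 1/n^2 = \pi^2/8\).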

## 114.12 Convergence

Fourier series may converge in different senses.

Pointwise convergence asks whether

$$
S_Nf(x)\to f(x)
$$

for each fixed \(x\).

Mean-square convergence asks whether

$$
\|f-S_Nf\|_2\to 0.
$$

Uniform convergence asks whether

$$
\sup_x |f(x)-S_Nf(x)|\to 0.
$$

These are different notions. A Fourier series may converge in mean square without converging uniformly.

At a jump discontinuity, the Fourier series often converges to the midpoint of the left and right limits:

$$
\frac{f(x^-)+f(x^+)}{2}.
$$

Near a jump, partial sums may overshoot. This is called the Gibbs phenomenon.
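The Gibbs overshoot can be measured directly. For the square wave with values \(\pm 1\), the peak of \(S_Nf\) near the jump tends to about \(1.179\), an overshoot of roughly \(9\%\) of the full jump height \(2\), and it does not shrink as \(N\) grows. A sketch, assuming NumPy:

```python
import numpy as np

# Overshoot of the partial sums of the square wave just to the right of
# the jump at x = 0. The true function value there is 1.
x = np.linspace(0.0001, 0.5, 5000)

def partial_sum(N):
    s = np.zeros_like(x)
    for n in range(1, N + 1, 2):
        s += (4 / (np.pi * n)) * np.sin(n * x)
    return s

overshoot_small = partial_sum(51).max() - 1.0
overshoot_large = partial_sum(501).max() - 1.0
```

Both overshoots are close to \(0.18\): adding more terms narrows the spike but does not reduce its height.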

## 114.13 The Fourier Transform

Fourier series apply to periodic functions. The Fourier transform applies to functions on the whole real line.

One common convention defines the Fourier transform by

$$
\hat{f}(\xi) =
\int_{-\infty}^{\infty} f(x)e^{-i\xi x}\,dx.
$$

The inverse transform is

$$
f(x) =
\frac{1}{2\pi}
\int_{-\infty}^{\infty}\hat{f}(\xi)e^{i\xi x}\,d\xi.
$$

Other conventions distribute factors of \(2\pi\) differently. The mathematical content remains the same.

The Fourier transform converts a function of position or time into a function of frequency.

Fourier series use discrete frequencies. Fourier transforms use a continuum of frequencies. Stanford's Fourier transform lecture notes develop this transform as a basic tool for signals, systems, and applications.

## 114.14 Frequency Domain

The original function \(f(x)\) is often called the time-domain or spatial-domain representation.

The transform \(\hat{f}(\xi)\) is called the frequency-domain representation.

Large values of \(|\hat{f}(\xi)|\) indicate strong presence of frequency \(\xi\).

Low frequencies describe slow variation. High frequencies describe rapid variation.

| Domain | Object | Interpretation |
|---|---|---|
| Time or space | \(f(x)\) | Original signal |
| Frequency | \(\hat{f}(\xi)\) | Frequency content |

The Fourier transform is a change of representation. It does not destroy information when the inverse transform exists. It rewrites the same object in a basis of oscillatory functions.

## 114.15 Linearity of the Fourier Transform

The Fourier transform is linear.

If \(f\) and \(g\) have Fourier transforms and \(\alpha,\beta\) are scalars, then

$$
\mathcal{F}(\alpha f+\beta g) =
\alpha\mathcal{F}(f)+\beta\mathcal{F}(g).
$$

Equivalently,

$$
\widehat{\alpha f+\beta g} =
\alpha\hat{f}+\beta\hat{g}.
$$

This property places Fourier analysis directly inside linear algebra.

The transform is a linear operator between function spaces.

## 114.16 Differentiation and Multiplication

The Fourier transform converts differentiation into multiplication.

Under suitable hypotheses,

$$
\widehat{f'}(\xi)=i\xi \hat{f}(\xi).
$$

Thus derivatives become algebraic factors in frequency space.

This is one reason Fourier methods are powerful for differential equations. A differential equation may become an algebraic equation after transformation.

For example, if

$$
f'(x)=g(x),
$$

then

$$
i\xi\hat{f}(\xi)=\hat{g}(\xi).
$$

Solving for \(\hat{f}\) can be easier than solving directly in the original domain.

Iowa State lecture notes summarize several standard Fourier transform rules, including conversion of differentiation into multiplication by \(i\xi\), translation into modulation, and convolution into multiplication.

## 114.17 Translation and Modulation

Translation in one domain becomes modulation in the other.

If

$$
g(x)=f(x-a),
$$

then

$$
\hat{g}(\xi)=e^{-ia\xi}\hat{f}(\xi).
$$

Thus shifting a function changes the phase of its Fourier transform.

Multiplication by a complex exponential shifts the transform:

$$
g(x)=e^{iax}f(x)
$$

implies

$$
\hat{g}(\xi)=\hat{f}(\xi-a).
$$

These dualities are central in signal processing.
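The discrete analogue of the shift rule is easy to verify: cyclically shifting a sequence by \(a\) multiplies its DFT by the phase \(e^{-2\pi i a k/N}\). A sketch, assuming NumPy:

```python
import numpy as np

# Discrete shift theorem: DFT of a cyclic shift equals the original DFT
# times a phase factor.
rng = np.random.default_rng(1)
N = 32
a = 5  # shift amount
f = rng.standard_normal(N)

g = np.roll(f, a)  # g[j] = f[(j - a) mod N]
phase = np.exp(-2j * np.pi * a * np.arange(N) / N)

shift_theorem = np.allclose(np.fft.fft(g), phase * np.fft.fft(f))
```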

## 114.18 Convolution

The convolution of two functions is

$$
(f*g)(x) =
\int_{-\infty}^{\infty} f(t)g(x-t)\,dt.
$$

Convolution combines one function with a shifted copy of another.

The Fourier transform converts convolution into multiplication:

$$
\widehat{f*g}(\xi)=\hat{f}(\xi)\hat{g}(\xi)
$$

up to a convention-dependent constant.

This is one of the most important Fourier identities.

It means that filtering in the time domain becomes multiplication in the frequency domain.

Conversely, multiplication in the time domain becomes convolution in the frequency domain.
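In the discrete setting the convolution theorem holds exactly: circular convolution of two sequences equals the inverse DFT of the pointwise product of their DFTs. A sketch, assuming NumPy:

```python
import numpy as np

# Circular convolution two ways: directly from the definition, and via
# transform, multiply, inverse transform.
rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Direct circular convolution: (f*g)[j] = sum_t f[t] g[(j - t) mod N].
direct = np.array(
    [sum(f[t] * g[(j - t) % N] for t in range(N)) for j in range(N)]
)

# The same result through the frequency domain.
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

max_diff = np.max(np.abs(direct - via_fft))
```

The direct computation costs \(O(N^2)\); the FFT route costs \(O(N\log N)\), which is why fast convolution is done in the frequency domain.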

## 114.19 Fourier Transform and Differential Equations

Consider the differential equation

$$
-u''(x)+u(x)=f(x).
$$

Take the Fourier transform of both sides.

Since

$$
\widehat{u''}(\xi)=-\xi^2\,\hat{u}(\xi),
$$

we obtain

$$
(\xi^2+1)\hat{u}(\xi)=\hat{f}(\xi).
$$

Therefore

$$
\hat{u}(\xi)=\frac{\hat{f}(\xi)}{\xi^2+1}.
$$

The differential equation has become an algebraic equation.

After solving in frequency space, the inverse Fourier transform recovers \(u\).

This method is a standard reason Fourier transforms appear in partial differential equations, heat flow, wave propagation, and signal analysis.
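The same transform-divide-invert recipe works numerically on a periodic grid, with the DFT in place of the continuous transform. The sketch below, assuming NumPy, manufactures the right-hand side from the known solution \(u(x)=\sin 3x\), for which \(-u''+u=10\sin 3x\):

```python
import numpy as np

# Solve -u'' + u = f spectrally on a 2*pi-periodic grid:
# transform f, divide by (xi^2 + 1), transform back.
N = 256
x = 2 * np.pi * np.arange(N) / N
u_exact = np.sin(3 * x)
f = 10 * np.sin(3 * x)  # since -u'' + u = 9 sin(3x) + sin(3x)

xi = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers for this grid
u_hat = np.fft.fft(f) / (xi ** 2 + 1)
u = np.real(np.fft.ifft(u_hat))

max_err = np.max(np.abs(u - u_exact))
```

The division by \(\xi^2+1\) is the entire "solve" step: the operator \(-d^2/dx^2 + 1\) is diagonal in the Fourier basis.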

## 114.20 The Discrete Fourier Transform

In computation, data are finite.

The discrete Fourier transform, or DFT, maps a finite sequence to another finite sequence.

Let

$$
x =
(x_0,x_1,\ldots,x_{N-1})
$$

be a complex sequence. Its DFT is

$$
X_k =
\sum_{j=0}^{N-1}
x_j e^{-2\pi i jk/N},
\qquad
k=0,1,\ldots,N-1.
$$

The inverse DFT is

$$
x_j =
\frac{1}{N}
\sum_{k=0}^{N-1}
X_k e^{2\pi i jk/N}.
$$

The DFT is a linear transformation from \(\mathbb{C}^N\) to \(\mathbb{C}^N\). Iowa State notes describe the DFT as mapping a finite sequence into a finite sequence, in contrast with Fourier series and continuous Fourier transforms.
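The defining sums can be evaluated directly for a short sequence and compared with a library FFT, which computes the same transform. A sketch, assuming NumPy:

```python
import numpy as np

# The DFT straight from its definition, checked against numpy.fft.fft.
N = 8
x = np.arange(N, dtype=float)  # the sequence (x_0, ..., x_{N-1})

X = np.array([
    sum(x[j] * np.exp(-2j * np.pi * j * k / N) for j in range(N))
    for k in range(N)
])

# The inverse DFT recovers the original sequence.
x_back = np.array([
    sum(X[k] * np.exp(2j * np.pi * j * k / N) for k in range(N)) / N
    for j in range(N)
])

matches_fft = np.allclose(X, np.fft.fft(x))
recovers = np.allclose(x_back, x)
```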

## 114.21 The Fourier Matrix

The DFT can be written as matrix multiplication.

Let

$$
\omega=e^{-2\pi i/N}.
$$

Define the Fourier matrix \(F\) by

$$
F_{kj}=\omega^{kj}.
$$

Then

$$
X=Fx.
$$

The inverse is

$$
x=\frac{1}{N}F^*X,
$$

where \(F^*\) is the conjugate transpose of \(F\).

If we normalize by \(1/\sqrt{N}\), the Fourier matrix becomes unitary:

$$
U=\frac{1}{\sqrt{N}}F,
\qquad
U^*U=I.
$$

Thus the normalized DFT is a unitary change of basis in \(\mathbb{C}^N\).
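The Fourier matrix and its unitarity can be constructed and checked directly. A sketch, assuming NumPy:

```python
import numpy as np

# Build F with entries omega^(kj) and verify that U = F / sqrt(N)
# is unitary, and that F applied to a vector is the DFT.
N = 16
omega = np.exp(-2j * np.pi / N)
k, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = omega ** (k * j)  # F[k, j] = omega^(k*j)

U = F / np.sqrt(N)
unitary = np.allclose(U.conj().T @ U, np.eye(N))

v = np.arange(N, dtype=float)
matches_dft = np.allclose(F @ v, np.fft.fft(v))
```

Unitarity is exactly the matrix form of the orthogonality of the discrete exponentials.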

## 114.22 Fast Fourier Transform

Computing the DFT directly requires \(O(N^2)\) arithmetic operations.

The fast Fourier transform, or FFT, computes the same result in

$$
O(N\log N)
$$

operations for many lengths \(N\).

The FFT is an algorithmic factorization of the Fourier matrix.

This is a linear algebraic idea: a dense matrix-vector multiplication is replaced by a product of sparse structured transformations.

The FFT is one of the most important algorithms in scientific computing. It makes Fourier methods practical for large signals, images, simulations, and numerical differential equations.

## 114.23 Fourier Analysis and Filtering

A filter modifies a signal by changing its frequency content.

A low-pass filter keeps low frequencies and suppresses high frequencies.

A high-pass filter suppresses low frequencies and keeps high frequencies.

A band-pass filter keeps a selected frequency band.

In Fourier terms, filtering often has the form

$$
\hat{g}(\xi)=H(\xi)\hat{f}(\xi),
$$

where \(H(\xi)\) is the frequency response of the filter.

Then

$$
g=\mathcal{F}^{-1}(H\hat{f}).
$$

This is why Fourier transforms are central in signal processing and image processing.
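An ideal low-pass filter is a one-line operation in the frequency domain: zero the coefficients above a cutoff and invert. The sketch below, assuming NumPy, removes the high tone from a two-tone test signal:

```python
import numpy as np

# Ideal low-pass filtering via the DFT: keep |frequency| <= cutoff.
N = 1024
t = 2 * np.pi * np.arange(N) / N
signal = np.sin(3 * t) + 0.5 * np.sin(40 * t)  # low tone plus high tone

freqs = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies on this grid
H = (np.abs(freqs) <= 10).astype(float)  # frequency response of the filter

filtered = np.real(np.fft.ifft(H * np.fft.fft(signal)))

# The high tone at frequency 40 is removed; sin(3t) survives unchanged.
max_err = np.max(np.abs(filtered - np.sin(3 * t)))
```

Multiplying by \(H\) in the frequency domain is, by the convolution theorem, a convolution in the time domain.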

## 114.24 Fourier Analysis and Linear Algebra

Fourier analysis can be viewed as linear algebra in several related settings.

| Setting | Linear algebra object |
|---|---|
| Fourier series | Infinite orthogonal expansion |
| Fourier transform | Continuous frequency representation |
| DFT | Finite-dimensional unitary matrix |
| FFT | Fast matrix factorization |
| Filtering | Diagonal multiplication in frequency basis |
| PDE solving | Diagonalization of differential operators |
| Signal compression | Approximation by dominant coefficients |

The common theme is diagonalization.

Many operators that are complicated in the original domain become simple in the Fourier basis.

For example, differentiation becomes multiplication by \(i\xi\). Convolution becomes multiplication by a frequency response. Circulant matrices are diagonalized by the Fourier matrix.
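The circulant claim can be verified concretely: the Fourier vectors are eigenvectors of any circulant matrix, with eigenvalues given by the DFT of its first column. A sketch, assuming NumPy:

```python
import numpy as np

# A circulant matrix C has columns that are cyclic shifts of its first
# column c, so C[j, m] = c[(j - m) mod N].
N = 8
c = np.arange(1.0, N + 1)  # first column
C = np.column_stack([np.roll(c, k) for k in range(N)])

# Fourier eigenvectors: column k has entries exp(2*pi*i*j*k/N).
V = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

# Eigenvalues are the DFT of the first column: C v_k = lambda_k v_k.
lam = np.fft.fft(c)
diagonalized = np.allclose(C @ V, V * lam)
```

This is the matrix-level reason fast convolution works: convolution with a fixed kernel is multiplication by a circulant matrix, which the DFT diagonalizes.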

## 114.25 Summary

Fourier series and Fourier transforms decompose functions into oscillatory components.

Fourier series apply to periodic functions and use discrete frequencies. Fourier transforms apply to functions on the real line and use continuous frequencies. The discrete Fourier transform applies to finite sequences and is represented by the Fourier matrix.

The linear algebra viewpoint is central. Sines, cosines, and complex exponentials act as basis functions. Fourier coefficients are coordinates. Partial sums are projections. Parseval identities preserve energy. The DFT is a unitary change of basis.

Fourier analysis shows that the language of vectors, inner products, bases, projections, and diagonalization extends from finite-dimensional spaces to functions and signals.
