Chapter 114. Fourier Series and Transforms

Fourier analysis studies how functions and signals can be decomposed into waves.

The basic idea is the same as the idea of coordinates in linear algebra. A vector can be written as a linear combination of basis vectors. A function can often be written as a linear combination of basis functions. In Fourier analysis, the basis functions are sines, cosines, or complex exponentials.

Fourier series represent periodic functions by infinite sums of sines and cosines. Fourier transforms represent nonperiodic functions by continuous superpositions of complex exponentials. In both cases, the main operation is a change of basis from the original domain to a frequency domain. MIT’s linear algebra notes describe Fourier series as linear algebra for functions, with sines and cosines acting as an orthogonal basis.

114.1 Periodic Functions

A function f is periodic with period T > 0 if

f(x+T) = f(x)

for all x in its domain.

The graph repeats after every interval of length T. For example, sine and cosine are periodic with period 2π:

\sin(x+2\pi) = \sin x, \qquad \cos(x+2\pi) = \cos x.

Fourier series are designed for periodic functions. They express such functions as sums of waves whose frequencies are integer multiples of a fundamental frequency.

If the period is 2π, the basic functions are

1,\quad \cos x,\quad \sin x,\quad \cos 2x,\quad \sin 2x,\quad \cos 3x,\quad \sin 3x,\ \ldots

These functions repeat every 2π.

114.2 Function Spaces

Linear algebra begins with vector spaces. Fourier analysis uses vector spaces whose elements are functions.

For example, consider real-valued functions on the interval [-π, π]. Such functions can be added and multiplied by scalars:

(f+g)(x) = f(x) + g(x), \qquad (cf)(x) = c\,f(x).

These operations obey the usual vector space rules.

Thus functions can be treated as vectors. Such a vector may have infinitely many degrees of freedom, but the algebraic ideas remain the same.

| Finite-dimensional linear algebra | Fourier analysis |
| --- | --- |
| Vector | Function |
| Dot product | Integral inner product |
| Basis vector | Basis function |
| Coordinates | Fourier coefficients |
| Projection | Fourier approximation |
| Orthogonal basis | Sine and cosine system |

This analogy is the foundation of the chapter.

114.3 Inner Products of Functions

For real-valued functions on [-π, π], define the inner product by

\langle f,g\rangle = \int_{-\pi}^{\pi} f(x)\,g(x)\,dx.

This is the function-space analogue of the dot product.

It allows us to define orthogonality. Two functions f and g are orthogonal if

\langle f,g\rangle = 0.

For example,

\int_{-\pi}^{\pi} \sin x \cos x\,dx = 0.

Thus sin x and cos x are orthogonal on [-π, π].

More generally, the sine and cosine functions of different integer frequencies are orthogonal on this interval. This orthogonality is what makes Fourier coefficients computable by projection.

114.4 Orthogonality of Sines and Cosines

The basic orthogonality relations are

\int_{-\pi}^{\pi} \cos(mx)\cos(nx)\,dx = 0 \quad\text{if } m\neq n,

\int_{-\pi}^{\pi} \sin(mx)\sin(nx)\,dx = 0 \quad\text{if } m\neq n,

and

\int_{-\pi}^{\pi} \sin(mx)\cos(nx)\,dx = 0

for positive integers m, n.

Also, for n ≥ 1,

\int_{-\pi}^{\pi} \cos^2(nx)\,dx = \pi, \qquad \int_{-\pi}^{\pi} \sin^2(nx)\,dx = \pi.

The constant function has squared norm

\int_{-\pi}^{\pi} 1^2\,dx = 2\pi.

These formulas make the sine and cosine system behave like an orthogonal coordinate system.

114.5 Fourier Series

A Fourier series represents a periodic function as

f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n\cos nx + b_n\sin nx \right).

The symbol \sim indicates that the series is associated with f. Under suitable hypotheses, it converges to f at many points, but convergence requires separate analysis.

The coefficients a_n and b_n are the coordinates of f in the trigonometric basis.

They are computed by projection:

a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\cos(nx)\,dx, \qquad n\geq 0,

b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\sin(nx)\,dx, \qquad n\geq 1.

This is the same idea as computing coordinates in an orthogonal basis:

\text{coordinate} = \frac{\langle v,b\rangle}{\langle b,b\rangle}.
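
Since the coefficient formulas are ordinary integrals, they can be approximated numerically. A minimal sketch in Python, using only the standard library (the helper name `fourier_coefficients` and the midpoint-rule quadrature are illustrative choices, not from the text):

```python
import math

def fourier_coefficients(f, n_max, samples=2000):
    """Approximate a_n and b_n on [-pi, pi] with a midpoint Riemann sum."""
    h = 2 * math.pi / samples
    xs = [-math.pi + (j + 0.5) * h for j in range(samples)]
    # a_n = (1/pi) * integral of f(x) cos(nx) dx, for n = 0, 1, ..., n_max
    a = [sum(f(x) * math.cos(n * x) for x in xs) * h / math.pi
         for n in range(n_max + 1)]
    # b_n = (1/pi) * integral of f(x) sin(nx) dx, for n = 1, ..., n_max
    b = [sum(f(x) * math.sin(n * x) for x in xs) * h / math.pi
         for n in range(1, n_max + 1)]
    return a, b
```

For f(x) = sin x this recovers b_1 ≈ 1 and leaves every other coefficient near zero, as orthogonality predicts.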

114.6 The Constant Term

The coefficient a_0 is

a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\,dx.

The Fourier series uses a_0/2, so the constant term is

\frac{a_0}{2} = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\,dx.

This is the average value of f over one period.

Thus the Fourier series begins with the average level of the function. The remaining sine and cosine terms describe oscillatory deviations from that average.

114.7 Example: A Simple Cosine Function

Let

f(x) = 3 + 2\cos x.

This function is already written in Fourier form:

f(x) = \frac{a_0}{2} + a_1\cos x + b_1\sin x + a_2\cos 2x + \cdots.

The coefficients are

a_0 = 6, \qquad a_1 = 2,

and all other coefficients are zero.

The average value is 3. The only nonconstant frequency present is frequency 1, with cosine amplitude 2.

This example shows that Fourier coefficients measure the amount of each frequency in a function.

114.8 Even and Odd Functions

Symmetry simplifies Fourier series.

A function is even if

f(-x) = f(x).

A function is odd if

f(-x) = -f(x).

Cosine is even. Sine is odd.

If f is even, then all sine coefficients vanish:

b_n = 0.

The Fourier series contains only cosines.

If f is odd, then all cosine coefficients vanish, including the constant term:

a_n = 0.

The Fourier series contains only sines.

This is a useful application of orthogonality and symmetry.
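
The cancellation is easy to see numerically. As an illustration (the even function x² and the midpoint grid are my choices), the sine coefficient b_1 vanishes because the integrand is odd:

```python
import math

# For the even function f(x) = x**2 on [-pi, pi], the integrand
# f(x) * sin(x) is odd, so b_1 should vanish.
samples = 2000
h = 2 * math.pi / samples
xs = [-math.pi + (j + 0.5) * h for j in range(samples)]
b1 = sum(x**2 * math.sin(x) for x in xs) * h / math.pi
# b1 is ~0: contributions at x and -x cancel on this symmetric grid.
```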

114.9 Complex Exponential Form

Fourier series can also be written using complex exponentials.

Euler’s formula gives

e^{inx} = \cos(nx) + i\sin(nx).

A Fourier series may be written as

f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{inx}.

The complex Fourier coefficients are

c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\,e^{-inx}\,dx.

The complex form is often cleaner than the sine-cosine form. It treats positive and negative frequencies symmetrically.

For real-valued functions, the coefficients satisfy

c_{-n} = \overline{c_n}.

This condition ensures that the imaginary parts cancel and the reconstructed function is real.
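
This symmetry can be checked numerically. A sketch (the helper `c_n` and the midpoint quadrature are illustrative, not from the text):

```python
import cmath
import math

def c_n(f, n, samples=2000):
    """Approximate c_n = (1/(2*pi)) * integral of f(x) e^{-inx} dx on [-pi, pi]."""
    h = 2 * math.pi / samples
    xs = [-math.pi + (j + 0.5) * h for j in range(samples)]
    return sum(f(x) * cmath.exp(-1j * n * x) for x in xs) * h / (2 * math.pi)
```

For the real function f(x) = 3 + 2 cos x, this gives c_0 ≈ 3 and c_{±1} ≈ 1, consistent with cos x = (e^{ix} + e^{-ix})/2, and c_{-1} agrees with the conjugate of c_1.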

114.10 Fourier Series as Projection

Let V_N be the finite-dimensional subspace spanned by

1,\ \cos x,\ \sin x,\ \ldots,\ \cos Nx,\ \sin Nx.

The N-th partial Fourier sum is the orthogonal projection of f onto V_N:

S_N f(x) = \frac{a_0}{2} + \sum_{n=1}^{N} (a_n\cos nx + b_n\sin nx).

This is the best approximation to f inside V_N in the mean-square sense:

\|f - S_N f\|_2 \leq \|f - g\|_2

for every g \in V_N.

Thus Fourier approximation is least squares approximation in a function space.

This connects Fourier series directly with orthogonal projections from linear algebra.

114.11 Parseval’s Identity

In finite-dimensional linear algebra, an orthonormal basis preserves length:

\|v\|^2 = \sum_i |\langle v,q_i\rangle|^2.

Fourier analysis has an analogous result.

For suitable functions,

\frac{1}{\pi} \int_{-\pi}^{\pi} |f(x)|^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} (a_n^2 + b_n^2).

In complex form,

\frac{1}{2\pi} \int_{-\pi}^{\pi} |f(x)|^2\,dx = \sum_{n=-\infty}^{\infty} |c_n|^2.

This identity says that energy in the original function equals energy in its Fourier coefficients. Fourier transform theory has a related Parseval-Plancherel theorem, which preserves L^2 norms and inner products under the transform.
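
As a concrete check (my example, not the text's): for f(x) = 3 + 2 cos x the coefficient side is a_0²/2 + a_1² = 36/2 + 4 = 22, and a numerical integral of the energy side agrees:

```python
import math

samples = 2000
h = 2 * math.pi / samples
xs = [-math.pi + (j + 0.5) * h for j in range(samples)]

f = lambda x: 3 + 2 * math.cos(x)
energy = sum(f(x) ** 2 for x in xs) * h / math.pi  # (1/pi) * integral of |f|^2
coefficient_energy = 6 ** 2 / 2 + 2 ** 2           # a_0^2/2 + a_1^2 = 22
```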

114.12 Convergence

Fourier series may converge in different senses.

Pointwise convergence asks whether

S_N f(x) \to f(x)

for each fixed x.

Mean-square convergence asks whether

\|f - S_N f\|_2 \to 0.

Uniform convergence asks whether

\sup_x |f(x) - S_N f(x)| \to 0.

These are different notions. A Fourier series may converge in mean square without converging uniformly.

At a jump discontinuity, the Fourier series often converges to the midpoint of the left and right limits:

\frac{f(x^-) + f(x^+)}{2}.

Near a jump, partial sums may overshoot. This is called the Gibbs phenomenon.
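
Both behaviors are easy to observe with the square wave sign(x), whose Fourier series on [-π, π] is (4/π) Σ over odd n of sin(nx)/n (a standard example; the code sketch is mine):

```python
import math

def square_wave_partial_sum(N, x):
    """Partial Fourier sum of sign(x) on [-pi, pi]:
    (4/pi) * sum over odd n <= N of sin(n*x)/n."""
    return (4 / math.pi) * sum(math.sin(n * x) / n for n in range(1, N + 1, 2))
```

At the jump x = 0 every partial sum is exactly 0, the midpoint of -1 and +1. Just to the right of the jump, the partial sums overshoot the value 1 by roughly 9 percent, no matter how large N is; that persistent overshoot is the Gibbs phenomenon.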

114.13 The Fourier Transform

Fourier series apply to periodic functions. The Fourier transform applies to functions on the whole real line.

One common convention defines the Fourier transform by

\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-i\xi x}\,dx.

The inverse transform is

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\xi)\,e^{i\xi x}\,d\xi.

Other conventions distribute factors of 2π differently. The mathematical content remains the same.

The Fourier transform converts a function of position or time into a function of frequency.

Fourier series use discrete frequencies. Fourier transforms use a continuum of frequencies. Stanford’s Fourier transform lecture notes develop this transform as a basic tool for signals, systems, and applications.

114.14 Frequency Domain

The original function f(x) is often called the time-domain or spatial-domain representation.

The transform f̂(ξ) is called the frequency-domain representation.

Large values of |f̂(ξ)| indicate strong presence of frequency ξ.

Low frequencies describe slow variation. High frequencies describe rapid variation.

| Domain | Object | Interpretation |
| --- | --- | --- |
| Time or space | f(x) | Original signal |
| Frequency | f̂(ξ) | Frequency content |

The Fourier transform is a change of representation. It does not destroy information when the inverse transform exists. It rewrites the same object in a basis of oscillatory functions.

114.15 Linearity of the Fourier Transform

The Fourier transform is linear.

If f and g have Fourier transforms and α, β are scalars, then

\mathcal{F}(\alpha f+\beta g) = \alpha\mathcal{F}(f)+\beta\mathcal{F}(g).

Equivalently,

\widehat{\alpha f+\beta g} = \alpha\hat{f}+\beta\hat{g}.

This property places Fourier analysis directly inside linear algebra.

The transform is a linear operator between function spaces.

114.16 Differentiation and Multiplication

The Fourier transform converts differentiation into multiplication.

Under suitable hypotheses,

\widehat{f'}(\xi) = i\xi\,\hat{f}(\xi).

Thus derivatives become algebraic factors in frequency space.

This is one reason Fourier methods are powerful for differential equations. A differential equation may become an algebraic equation after transformation.

For example, if

f'(x) = g(x),

then

i\xi\,\hat{f}(\xi) = \hat{g}(\xi).

Solving for f̂ can be easier than solving directly in the original domain.

Iowa State lecture notes summarize several standard Fourier transform rules, including conversion of differentiation into multiplication by iξ, translation into modulation, and convolution into multiplication.

114.17 Translation and Modulation

Translation in one domain becomes modulation in the other.

If

g(x) = f(x-a),

then

\hat{g}(\xi) = e^{-ia\xi}\,\hat{f}(\xi).

Thus shifting a function changes the phase of its Fourier transform.

Multiplication by a complex exponential shifts the transform:

g(x) = e^{iax} f(x)

implies

\hat{g}(\xi) = \hat{f}(\xi-a).

These dualities are central in signal processing.

114.18 Convolution

The convolution of two functions is

(f*g)(x) = \int_{-\infty}^{\infty} f(t)\,g(x-t)\,dt.

Convolution combines one function with a shifted copy of another.

The Fourier transform converts convolution into multiplication:

\widehat{f*g}(\xi) = \hat{f}(\xi)\,\hat{g}(\xi)

up to a convention-dependent constant.

This is one of the most important Fourier identities.

It means that filtering in the time domain becomes multiplication in the frequency domain.

Conversely, multiplication in the time domain becomes convolution in the frequency domain.
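
The discrete analogue of the convolution theorem can be verified directly: the DFT of a circular convolution equals the pointwise product of the DFTs. A minimal sketch with naive O(N²) loops (function names are mine):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
            for k in range(N)]

def circular_convolution(x, y):
    """(x * y)_k = sum_j x_j * y_{(k-j) mod N}."""
    N = len(x)
    return [sum(x[j] * y[(k - j) % N] for j in range(N)) for k in range(N)]
```

For any two equal-length sequences, `dft(circular_convolution(x, y))` matches the entrywise product of `dft(x)` and `dft(y)` up to roundoff.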

114.19 Fourier Transform and Differential Equations

Consider the differential equation

-u''(x) + u(x) = f(x).

Take the Fourier transform of both sides.

Since

\widehat{u''}(\xi) = -\xi^2\,\hat{u}(\xi),

we obtain

(\xi^2+1)\,\hat{u}(\xi) = \hat{f}(\xi).

Therefore

\hat{u}(\xi) = \frac{\hat{f}(\xi)}{\xi^2+1}.

The differential equation has become an algebraic equation.

After solving in frequency space, the inverse Fourier transform recovers u.

This method is a standard reason Fourier transforms appear in partial differential equations, heat flow, wave propagation, and signal analysis.
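
The same idea works on a periodic grid with the DFT in place of the continuous transform. A sketch for the periodic problem -u'' + u = f on [0, 2π), where mode k satisfies (k² + 1) û_k = f̂_k (the discretization choices here are mine):

```python
import cmath
import math

def solve_periodic(f_vals):
    """Solve -u'' + u = f on a uniform periodic grid over [0, 2*pi)
    by dividing each DFT mode by (k^2 + 1)."""
    N = len(f_vals)
    F = [sum(f_vals[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
         for k in range(N)]
    # Integer wavenumber for DFT index k: k itself up to N/2, then k - N.
    U = [F[k] / ((k if k <= N // 2 else k - N) ** 2 + 1) for k in range(N)]
    # Inverse DFT; the imaginary parts cancel for real input f.
    return [sum(U[k] * cmath.exp(2j * cmath.pi * j * k / N) for k in range(N)).real / N
            for j in range(N)]
```

For f(x) = cos x the exact solution is u(x) = cos(x)/2, since (1² + 1) û = f̂ mode by mode, and the solver reproduces it to roundoff.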

114.20 The Discrete Fourier Transform

In computation, data are finite.

The discrete Fourier transform, or DFT, maps a finite sequence to another finite sequence.

Let

x = (x_0, x_1, \ldots, x_{N-1})

be a complex sequence. Its DFT is

X_k = \sum_{j=0}^{N-1} x_j\,e^{-2\pi i jk/N}, \qquad k = 0, 1, \ldots, N-1.

The inverse DFT is

x_j = \frac{1}{N} \sum_{k=0}^{N-1} X_k\,e^{2\pi i jk/N}.

The DFT is a linear transformation from ℂ^N to ℂ^N. Iowa State notes describe the DFT as mapping a finite sequence into a finite sequence, in contrast with Fourier series and continuous Fourier transforms.
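
The two formulas translate directly into code. A naive O(N²) sketch (function names are mine; practical code would call an FFT library):

```python
import cmath

def dft(x):
    """X_k = sum_j x_j * e^{-2*pi*i*j*k/N}."""
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
            for k in range(N)]

def idft(X):
    """x_j = (1/N) * sum_k X_k * e^{2*pi*i*j*k/N}."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * j * k / N) for k in range(N)) / N
            for j in range(N)]
```

Applying `idft` after `dft` returns the original sequence up to roundoff, and the DFT of a constant sequence is concentrated entirely in X_0.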

114.21 The Fourier Matrix

The DFT can be written as matrix multiplication.

Let

\omega = e^{-2\pi i/N}.

Define the Fourier matrix F by

F_{kj} = \omega^{kj}.

Then

X = Fx.

The inverse is

x = \frac{1}{N} F^* X,

where F^* is the conjugate transpose of F.

If we normalize by 1/\sqrt{N}, the Fourier matrix becomes unitary:

U = \frac{1}{\sqrt{N}} F, \qquad U^* U = I.

Thus the normalized DFT is a unitary change of basis in ℂ^N.
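
The unitarity claim can be verified numerically for a small size (the choice N = 4 and the direct entrywise check are mine):

```python
import cmath

N = 4
w = cmath.exp(-2j * cmath.pi / N)
F = [[w ** (k * j) for j in range(N)] for k in range(N)]

# Entries of U* U with U = F / sqrt(N); the 1/N factor absorbs both sqrt(N)'s.
UstarU = [[sum(F[k][m].conjugate() * F[k][n] for k in range(N)) / N
           for n in range(N)] for m in range(N)]
```

Every diagonal entry comes out 1 and every off-diagonal entry 0 up to roundoff, confirming U*U = I.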

114.22 Fast Fourier Transform

Computing the DFT directly requires O(N^2) arithmetic operations.

The fast Fourier transform, or FFT, computes the same result in

O(N\log N)

operations for many lengths N.

The FFT is an algorithmic factorization of the Fourier matrix.

This is a linear algebraic idea: a dense matrix-vector multiplication is replaced by a product of sparse structured transformations.

The FFT is one of the most important algorithms in scientific computing. It makes Fourier methods practical for large signals, images, simulations, and numerical differential equations.
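
A minimal radix-2 Cooley-Tukey sketch shows the factorization idea: each level splits the problem into two half-size DFTs combined by twiddle factors (this recursive form is illustrative, not optimized):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    # Combine the two half-size DFTs with twiddle factors.
    t = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + t[k] for k in range(N // 2)] +
            [even[k] - t[k] for k in range(N // 2)])
```

Each level does O(N) work across O(log N) levels, giving O(N log N) in total, and the output matches the naive O(N²) DFT.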

114.23 Fourier Analysis and Filtering

A filter modifies a signal by changing its frequency content.

A low-pass filter keeps low frequencies and suppresses high frequencies.

A high-pass filter suppresses low frequencies and keeps high frequencies.

A band-pass filter keeps a selected frequency band.

In Fourier terms, filtering often has the form

\hat{g}(\xi) = H(\xi)\,\hat{f}(\xi),

where H(ξ) is the frequency response of the filter.

Then

g = \mathcal{F}^{-1}(H\hat{f}).

This is why Fourier transforms are central in signal processing and image processing.
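
A discrete low-pass filter is a few lines with the DFT: transform, multiply by a 0/1 frequency response, transform back (the mask construction and the name `lowpass` are my sketch; `cutoff` is an integer frequency):

```python
import cmath
import math

def lowpass(x, cutoff):
    """Keep DFT modes with integer frequency |k| <= cutoff, zero the rest."""
    N = len(x)
    X = [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
         for k in range(N)]
    # Frequency response H: DFT index k corresponds to frequency min(k, N - k).
    H = [1.0 if min(k, N - k) <= cutoff else 0.0 for k in range(N)]
    return [sum(H[k] * X[k] * cmath.exp(2j * cmath.pi * j * k / N)
                for k in range(N)).real / N for j in range(N)]
```

Filtering cos(x) + cos(3x) with cutoff 1 removes the cos(3x) component and leaves cos(x).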

114.24 Fourier Analysis and Linear Algebra

Fourier analysis can be viewed as linear algebra in several related settings.

| Setting | Linear algebra object |
| --- | --- |
| Fourier series | Infinite orthogonal expansion |
| Fourier transform | Continuous frequency representation |
| DFT | Finite-dimensional unitary matrix |
| FFT | Fast matrix factorization |
| Filtering | Diagonal multiplication in frequency basis |
| PDE solving | Diagonalization of differential operators |
| Signal compression | Approximation by dominant coefficients |

The common theme is diagonalization.

Many operators that are complicated in the original domain become simple in the Fourier basis.

For example, differentiation becomes multiplication by iξ. Convolution becomes multiplication by a frequency response. Circulant matrices are diagonalized by the Fourier matrix.

114.25 Summary

Fourier series and Fourier transforms decompose functions into oscillatory components.

Fourier series apply to periodic functions and use discrete frequencies. Fourier transforms apply to functions on the real line and use continuous frequencies. The discrete Fourier transform applies to finite sequences and is represented by the Fourier matrix.

The linear algebra viewpoint is central. Sines, cosines, and complex exponentials act as basis functions. Fourier coefficients are coordinates. Partial sums are projections. Parseval identities preserve energy. The DFT is a unitary change of basis.

Fourier analysis shows that the language of vectors, inner products, bases, projections, and diagonalization extends from finite-dimensional spaces to functions and signals.