
Chapter 113. Differential Equations

Differential equations describe quantities that change continuously.

A differential equation relates an unknown function to one or more of its derivatives. The unknown is usually a function of time, position, or both. The derivative expresses rate of change.

Linear algebra enters differential equations because many differential equations can be written as vector equations. Systems of first-order linear differential equations are governed by matrices. Their solutions depend on eigenvalues, eigenvectors, matrix exponentials, diagonalization, Jordan form, and numerical linear algebra.

The central linear system has the form

\frac{dx}{dt} = Ax.

Here x(t) is a vector-valued function and A is a matrix. The matrix determines how the state changes over time.

113.1 Scalar Differential Equations

A scalar differential equation involves one unknown function.

For example,

\frac{dy}{dt} = ay

describes exponential growth or decay.

If a > 0, the solution grows. If a < 0, the solution decays.

The solution is

y(t) = y(0)e^{at}.

This equation says that the rate of change of y is proportional to y itself.

Many physical and mathematical models begin with this principle.
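As a quick numerical sanity check (a sketch using only the Python standard library, not part of the chapter), the centered difference of y(t) = y(0)e^{at} should match a·y(t) at any point, confirming that the exponential satisfies the equation:

```python
import math

# The claimed solution of dy/dt = a*y is y(t) = y(0) * e^(a*t).
a, y0 = -0.5, 2.0

def y(t):
    return y0 * math.exp(a * t)

# Centered-difference derivative at t = 1 should equal a * y(1).
t, h = 1.0, 1e-6
dy_dt = (y(t + h) - y(t - h)) / (2 * h)
print(abs(dy_dt - a * y(t)) < 1e-6)   # the ODE holds
```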

| Equation | Interpretation |
| --- | --- |
| \frac{dy}{dt}=ay | Growth or decay |
| \frac{dy}{dt}=a(y-b) | Relaxation toward or away from b |
| \frac{d^2y}{dt^2}+\omega^2y=0 | Harmonic oscillation |
| \frac{dy}{dt}+ay=f(t) | Forced first-order system |

The scalar case gives the basic idea. The vector case shows where linear algebra becomes essential.

113.2 Systems of Differential Equations

A system of differential equations has several unknown functions.

Let

x(t)= \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}.

A first-order linear homogeneous system has the form

\frac{dx}{dt}=Ax,

where

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}.

Written componentwise, this means

\begin{aligned} x_1'(t) &= a_{11}x_1(t)+a_{12}x_2(t)+\cdots+a_{1n}x_n(t), \\ x_2'(t) &= a_{21}x_1(t)+a_{22}x_2(t)+\cdots+a_{2n}x_n(t), \\ &\vdots \\ x_n'(t) &= a_{n1}x_1(t)+a_{n2}x_2(t)+\cdots+a_{nn}x_n(t). \end{aligned}

The matrix A encodes all interactions among the components.

113.3 Initial Value Problems

An initial value problem specifies both the differential equation and the initial state:

x'(t)=Ax(t), \qquad x(0)=x_0.

The goal is to find the function x(t) satisfying both conditions.

The matrix A determines the dynamics. The vector x_0 determines which particular trajectory occurs.

For a scalar equation, the solution is e^{at}x_0. For a matrix equation, the analogous solution is

x(t)=e^{At}x_0.

The object e^{At} is the matrix exponential.

113.4 Matrix Exponential

The matrix exponential is defined by the power series

e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots.

This definition is valid for every square matrix A.

The solution of

x'(t)=Ax(t), \qquad x(0)=x_0

is

x(t)=e^{At}x_0.

This is the exact vector analogue of the scalar solution y(t)=e^{at}y(0). Matrix exponentials are standard tools for solving systems of linear differential equations and for describing linear time evolution.
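In practice the matrix exponential is computed numerically. A minimal sketch, assuming NumPy and SciPy are available: solve x' = Ax with `scipy.linalg.expm` and cross-check against a truncated power series:

```python
import math
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# Solve x'(t) = A x(t), x(0) = x0 via x(t) = e^{At} x0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

x_t = expm(A * t) @ x0

# Cross-check against the defining series I + At + (At)^2/2! + ...
series = sum(np.linalg.matrix_power(A * t, k) / math.factorial(k)
             for k in range(20))
print(np.allclose(x_t, series @ x0))   # True
```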

113.5 Why the Matrix Exponential Works

Differentiate the power series term by term:

\frac{d}{dt}e^{At} = A + A^2t + \frac{A^3t^2}{2!} + \cdots.

Factor out A:

\frac{d}{dt}e^{At} = A\left(I+At+\frac{(At)^2}{2!}+\cdots\right).

Therefore

\frac{d}{dt}e^{At}=Ae^{At}.

If

x(t)=e^{At}x_0,

then

x'(t)=Ae^{At}x_0=Ax(t).

Also,

x(0)=e^{A\cdot 0}x_0=Ix_0=x_0.

Thus e^{At}x_0 solves the initial value problem.

113.6 Diagonal Matrices

Matrix exponentials are easy when A is diagonal.

Let

A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}.

Then

e^{At} = \begin{bmatrix} e^{\lambda_1t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_nt} \end{bmatrix}.

Each coordinate evolves independently:

x_i(t)=e^{\lambda_i t}x_i(0).

A diagonal system is therefore a collection of uncoupled scalar equations.
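This can be confirmed in one line, again assuming NumPy and SciPy: for diagonal A, `expm` agrees with the entrywise exponentials on the diagonal.

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# For diagonal A, e^{At} is diagonal with entries e^{lambda_i t}.
lam = np.array([1.0, -2.0, 0.5])
A = np.diag(lam)
t = 0.3

print(np.allclose(expm(A * t), np.diag(np.exp(lam * t))))   # True
```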

113.7 Diagonalization

Suppose A is diagonalizable. Then

A = PDP^{-1},

where D is diagonal.

The columns of P are eigenvectors of A, and the diagonal entries of D are eigenvalues.

Since powers of A satisfy

A^k = PD^kP^{-1},

the matrix exponential satisfies

e^{At}=Pe^{Dt}P^{-1}.

Thus diagonalization reduces a coupled system to independent scalar equations.

This is one of the main reasons eigenvalues and eigenvectors are central in differential equations.
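The identity e^{At} = P e^{Dt} P^{-1} can be checked numerically (a sketch assuming NumPy and SciPy; `np.linalg.eig` returns the eigenvalues and an eigenvector matrix P):

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# Check e^{At} = P e^{Dt} P^{-1} for a diagonalizable matrix.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
evals, P = np.linalg.eig(A)    # columns of P are eigenvectors of A
t = 0.7

lhs = expm(A * t)
rhs = P @ np.diag(np.exp(evals * t)) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))   # True
```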

113.8 Eigenvector Solutions

If v is an eigenvector of A with eigenvalue λ, then

Av=\lambda v.

Consider

x(t)=e^{\lambda t}v.

Then

x'(t)=\lambda e^{\lambda t}v,

and

Ax(t)=Ae^{\lambda t}v=e^{\lambda t}Av=e^{\lambda t}\lambda v.

Thus

x'(t)=Ax(t).

Each eigenvector gives a special solution.

If A has a basis of eigenvectors v_1, \ldots, v_n, then the general solution is

x(t)=c_1e^{\lambda_1t}v_1+\cdots+c_ne^{\lambda_nt}v_n.

The constants are determined by the initial condition.

113.9 Example: A Diagonalizable System

Let

A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}.

The eigenvalues are 3 and 2.

For λ = 3, an eigenvector is

v_1= \begin{bmatrix} 1 \\ 0 \end{bmatrix}.

For λ = 2, an eigenvector is

v_2= \begin{bmatrix} -1 \\ 1 \end{bmatrix}.

Therefore the general solution of

x'(t)=Ax(t)

is

x(t) = c_1e^{3t} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2e^{2t} \begin{bmatrix} -1 \\ 1 \end{bmatrix}.

This expression separates the motion into two independent eigen-directions.

The term with eigenvalue 3 grows faster than the term with eigenvalue 2.
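A numerical check of this example (a sketch assuming NumPy and SciPy): build x(t) from the two eigenvector modes and compare with e^{At} applied to the matching initial condition.

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
v1 = np.array([1.0, 0.0])     # eigenvector for lambda = 3
v2 = np.array([-1.0, 1.0])    # eigenvector for lambda = 2

# Pick constants, form x(t) = c1 e^{3t} v1 + c2 e^{2t} v2, and compare.
c1, c2 = 2.0, -1.0
x0 = c1 * v1 + c2 * v2
t = 0.4
x_t = c1 * np.exp(3 * t) * v1 + c2 * np.exp(2 * t) * v2

print(np.allclose(x_t, expm(A * t) @ x0))   # True
```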

113.10 Stability

Stability concerns the behavior of solutions as t → ∞.

For the linear homogeneous system

x'(t)=Ax(t),

the eigenvalues of A determine stability.

If all eigenvalues have negative real parts, then

x(t)\to 0

for every initial condition.

If some eigenvalue has positive real part, then there are solutions that grow without bound.

If eigenvalues lie on the imaginary axis, the behavior may involve oscillation, neutral stability, or instability, depending on the matrix structure.

| Eigenvalue condition | Typical behavior |
| --- | --- |
| \operatorname{Re}\lambda < 0 | Decay |
| \operatorname{Re}\lambda > 0 | Growth |
| \lambda = i\omega | Oscillation |
| Repeated eigenvalue with defective structure | Polynomial factors may appear |

Thus spectral information gives qualitative information about the differential equation.
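The eigenvalue test is easy to apply in code. A sketch (assuming NumPy and SciPy): verify that all eigenvalues of a chosen A lie in the left half-plane, then watch the trajectory norm decay.

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# If every eigenvalue of A has negative real part, e^{At} x0 -> 0.
A = np.array([[-1.0, 5.0],
              [0.0, -0.5]])
print(max(np.linalg.eigvals(A).real) < 0)   # True: A is stable

x0 = np.array([10.0, -3.0])
norms = [np.linalg.norm(expm(A * t) @ x0) for t in (0.0, 10.0, 20.0, 30.0)]
print(norms[-1] < 1e-4)                     # the trajectory has decayed
```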

113.11 Complex Eigenvalues

Real matrices may have complex eigenvalues.

Suppose

\lambda = \alpha + i\beta

is an eigenvalue.

Then the corresponding exponential is

e^{\lambda t} = e^{\alpha t}e^{i\beta t}.

Using Euler’s formula,

e^{i\beta t}=\cos(\beta t)+i\sin(\beta t).

Thus complex eigenvalues produce oscillation.

The real part α controls growth or decay. The imaginary part β controls angular frequency.

| Eigenvalue | Behavior |
| --- | --- |
| \alpha+i\beta, \alpha<0 | Decaying oscillation |
| \alpha+i\beta, \alpha=0 | Sustained oscillation |
| \alpha+i\beta, \alpha>0 | Growing oscillation |

This explains spirals in planar systems.

113.12 Planar Systems

A planar linear system has the form

\begin{bmatrix} x' \\ y' \end{bmatrix} = A \begin{bmatrix} x \\ y \end{bmatrix}.

The phase plane shows trajectories in the (x, y)-plane.

The eigenvalues of A classify many common behaviors.

| Eigenvalues | Phase portrait |
| --- | --- |
| Two negative real eigenvalues | Stable node |
| Two positive real eigenvalues | Unstable node |
| Opposite signs | Saddle |
| Complex with negative real part | Stable spiral |
| Complex with positive real part | Unstable spiral |
| Pure imaginary | Center |

For example,

A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}

has eigenvalues

\lambda = \pm i.

The system is

x'=-y, \qquad y'=x.

Its solutions rotate around the origin.
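For this matrix, e^{At} is exactly the rotation by angle t, so trajectories circle the origin with constant norm. A numerical sketch (assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# For A = [[0, -1], [1, 0]], e^{At} is rotation by angle t: a center.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])
t = np.pi / 3
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(expm(A * t), R))   # True

# Rotation preserves length, so the state's norm is constant in time.
x0 = np.array([2.0, 1.0])
print(np.isclose(np.linalg.norm(expm(A * 5.0) @ x0), np.linalg.norm(x0)))  # True
```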

113.13 Second-Order Equations as First-Order Systems

Many differential equations involve second derivatives. Linear algebra handles them by rewriting them as first-order systems.

Consider

y'' + ay' + by = 0.

Set

x_1=y, \qquad x_2=y'.

Then

x_1'=x_2,

and

x_2'=-bx_1-ax_2.

Thus

\begin{bmatrix} x_1' \\ x_2' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -b & -a \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.

This converts one second-order scalar equation into a first-order vector system.

The matrix

\begin{bmatrix} 0 & 1 \\ -b & -a \end{bmatrix}

then determines the behavior.
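The reduction is easy to test numerically (a sketch assuming NumPy and SciPy): take y'' + 4y = 0 with y(0) = 1, y'(0) = 0, whose exact solution is y = cos(2t), and recover y(t) from the first component of e^{At}x_0.

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# y'' + a y' + b y = 0 as a first-order system with the companion matrix.
a, b = 0.0, 4.0
A = np.array([[0.0, 1.0],
              [-b, -a]])
x0 = np.array([1.0, 0.0])      # (y(0), y'(0))

t = 0.8
y_t = (expm(A * t) @ x0)[0]    # first component recovers y(t)
print(np.isclose(y_t, np.cos(2 * t)))   # True
```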

113.14 Forced Linear Systems

A nonhomogeneous linear system has the form

x'(t)=Ax(t)+f(t).

Here f(t) is an external forcing term.

The solution is given by variation of constants:

x(t)=e^{At}x_0+\int_0^t e^{A(t-s)}f(s)\,ds.

The first term is the natural response. The integral term is the forced response.

This formula shows how the matrix exponential propagates both the initial condition and the external input.
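For a constant forcing vector f and invertible A, the integral has the closed form A^{-1}(e^{At} − I)f, which makes the formula easy to evaluate. A numerical sketch (assuming NumPy and SciPy; the stable A and f below are illustrative choices, not from the chapter):

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# Variation of constants with constant forcing f.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
f = np.array([1.0, 1.0])
x0 = np.array([0.0, 0.0])

def x(t):
    # e^{At} x0 + A^{-1}(e^{At} - I) f  (closed form of the integral term)
    return expm(A * t) @ x0 + np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ f

# For stable A the forced response settles at the equilibrium -A^{-1} f.
x_inf = -np.linalg.inv(A) @ f
print(np.allclose(x(20.0), x_inf, atol=1e-6))   # True
```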

113.15 Equilibrium Points

An equilibrium point is a constant solution.

For the system

x'(t)=Ax(t),

the vector x = 0 is always an equilibrium.

For an affine system

x'(t)=Ax(t)+b,

an equilibrium x^\ast satisfies

Ax^\ast+b=0.

If A is invertible, then

x^\ast=-A^{-1}b.

The stability of this equilibrium is determined by the eigenvalues of A.

By shifting variables,

z=x-x^\ast,

the affine system becomes

z'=Az.

Thus the study of affine systems reduces to homogeneous linear systems.
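Finding the equilibrium is a plain linear solve. A minimal sketch (assuming NumPy; A and b are illustrative):

```python
import numpy as np

# Equilibrium of the affine system x' = Ax + b: solve A x* + b = 0.
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
b = np.array([1.0, 2.0])

x_star = np.linalg.solve(A, -b)            # x* = -A^{-1} b
print(np.allclose(A @ x_star + b, 0.0))    # True: it is an equilibrium
print(max(np.linalg.eigvals(A).real) < 0)  # True: and it is stable
```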

113.16 Systems with Constant Coefficients

The matrix equation

x'=Ax

is called a constant-coefficient linear system because A does not depend on t.

If the matrix depends on time,

x'=A(t)x,

then the problem is more complicated. In general,

e^{\int A(t)\,dt}

does not give the solution unless the matrices A(t) commute at different times.

For constant coefficients, all powers of A commute with each other, and the exponential formula is exact.

This makes constant-coefficient systems a fundamental class.

113.17 Defective Matrices and Jordan Form

A matrix may fail to have enough eigenvectors for diagonalization.

In that case, Jordan form gives the replacement.

A Jordan block has the form

J = \begin{bmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ 0 & 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & 1 \\ 0 & 0 & 0 & \cdots & \lambda \end{bmatrix}.

Jordan form records eigenvalues and the structure of generalized eigenvectors.

For a Jordan block,

J=\lambda I+N,

where N is nilpotent. Therefore

e^{Jt}=e^{\lambda t}e^{Nt}.

Since N^k=0 for some k, the exponential e^{Nt} is a finite polynomial in t.

Thus defective matrices produce terms such as

te^{\lambda t}, \qquad t^2e^{\lambda t},

in solutions.
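For a 3×3 Jordan block, N³ = 0, so e^{Jt} = e^{λt}(I + Nt + N²t²/2) exactly. A numerical sketch (assuming NumPy and SciPy):

```python
import math
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# A 3x3 Jordan block J = lambda*I + N with N nilpotent (N^3 = 0).
lam, t = -1.0, 0.5
N = np.diag([1.0, 1.0], k=1)      # ones on the superdiagonal
J = lam * np.eye(3) + N

# e^{Nt} truncates to a finite polynomial in t.
poly = np.eye(3) + N * t + (N @ N) * t**2 / 2
print(np.allclose(expm(J * t), math.exp(lam * t) * poly))   # True
```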

113.18 Discretization

Differential equations are often solved numerically.

A simple method is Euler’s method. For

x'=Ax,

choose a step size h > 0. Approximate

x((k+1)h)

by

x_{k+1}=x_k+hAx_k.

Thus

x_{k+1}=(I+hA)x_k.

This turns a continuous differential equation into a discrete linear recurrence.

After k steps,

x_k=(I+hA)^kx_0.

This approximation should be compared with the exact solution

x(kh)=e^{Akh}x_0.

Numerical methods for differential equations therefore depend on matrix powers, spectral stability, conditioning, and approximation.
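The comparison between (I + hA)^k x_0 and e^{Akh} x_0 can be made concrete (a sketch assuming NumPy and SciPy): halving the step size should shrink Euler's error, since the method is first-order accurate.

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is installed

# Euler's method x_{k+1} = (I + hA) x_k versus the exact flow e^{Akh} x0.
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
x0 = np.array([1.0, 0.0])
T = 1.0

def euler(h):
    steps = int(round(T / h))
    return np.linalg.matrix_power(np.eye(2) + h * A, steps) @ x0

exact = expm(A * T) @ x0
err_coarse = np.linalg.norm(euler(0.1) - exact)
err_fine = np.linalg.norm(euler(0.01) - exact)
print(err_fine < err_coarse)   # smaller steps track e^{At} better
```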

113.19 Linear Differential Equations in Applications

Linear differential equations appear in many areas.

| Field | Model |
| --- | --- |
| Mechanics | Coupled springs and masses |
| Electrical engineering | Circuits |
| Control theory | State-space systems |
| Biology | Linear population models |
| Chemistry | Reaction networks near equilibrium |
| Economics | Linear dynamic systems |
| Quantum mechanics | Schrödinger equation |
| Heat flow | Discretized diffusion equations |
| Vibrations | Normal modes |
| Signal processing | Linear filters |

Many nonlinear systems are also studied by linearization near equilibrium points. This means replacing a nonlinear system by its derivative matrix at a point.

The resulting matrix describes local behavior.

113.20 Summary

Differential equations describe change. Linear algebra describes coupled change.

A first-order homogeneous linear system has the form

x'(t)=Ax(t).

Its solution is

x(t)=e^{At}x_0.

When A is diagonalizable, the system decomposes into independent modes determined by eigenvalues and eigenvectors. Eigenvalues describe growth, decay, oscillation, and stability.

Second-order equations can be rewritten as first-order systems. Forced systems are solved using matrix exponentials and integrals. Numerical methods convert differential equations into matrix recurrences.

The main lesson is that a linear differential equation is a dynamic form of a matrix problem. The matrix determines how the state evolves, and the tools of linear algebra reveal the structure of that evolution.