# Chapter 113. Differential Equations

Differential equations describe quantities that change continuously.

A differential equation relates an unknown function to one or more of its derivatives. The unknown is usually a function of time, position, or both. The derivative expresses rate of change.

Linear algebra enters differential equations because many differential equations can be written as vector equations. Systems of first-order linear differential equations are governed by matrices, and their solutions are described by eigenvalues, eigenvectors, and matrix exponentials, with diagonalization, Jordan form, and numerical linear algebra supplying the computational tools.

The central linear system has the form

$$
\frac{dx}{dt} = Ax.
$$

Here \(x(t)\) is a vector-valued function and \(A\) is a matrix. The matrix determines how the state changes over time.

## 113.1 Scalar Differential Equations

A scalar differential equation involves one unknown function.

For example,

$$
\frac{dy}{dt} = ay
$$

describes exponential growth or decay.

If \(a > 0\), the solution grows. If \(a < 0\), the solution decays.

The solution is

$$
y(t) = y(0)e^{at}.
$$

This equation says that the rate of change of \(y\) is proportional to \(y\) itself.

Many physical and mathematical models begin with this principle.
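
Numerically, this closed form is immediate to evaluate. A minimal sketch, assuming NumPy, with the rate and initial value chosen arbitrarily:

```python
import numpy as np

# Arbitrary rate and initial value; a < 0 gives decay, a > 0 gives growth.
a, y0 = -0.5, 2.0
t = np.linspace(0.0, 4.0, 5)

y = y0 * np.exp(a * t)   # exact solution y(t) = y(0) e^{at}
print(y)                 # monotone decay toward 0 because a < 0
```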

| Equation | Interpretation |
|---|---|
| \(\frac{dy}{dt}=ay\) | Growth or decay |
| \(\frac{dy}{dt}=a(y-b)\) | Relaxation toward or away from \(b\) |
| \(\frac{d^2y}{dt^2}+\omega^2y=0\) | Harmonic oscillation |
| \(\frac{dy}{dt}+ay=f(t)\) | Forced first-order system |

The scalar case gives the basic idea. The vector case shows where linear algebra becomes essential.

## 113.2 Systems of Differential Equations

A system of differential equations has several unknown functions.

Let

$$
x(t)=
\begin{bmatrix}
x_1(t) \\
x_2(t) \\
\vdots \\
x_n(t)
\end{bmatrix}.
$$

A first-order linear homogeneous system has the form

$$
\frac{dx}{dt}=Ax,
$$

where

$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}.
$$

Written componentwise, this means

$$
\begin{aligned}
x_1'(t) &= a_{11}x_1(t)+a_{12}x_2(t)+\cdots+a_{1n}x_n(t), \\
x_2'(t) &= a_{21}x_1(t)+a_{22}x_2(t)+\cdots+a_{2n}x_n(t), \\
&\vdots \\
x_n'(t) &= a_{n1}x_1(t)+a_{n2}x_2(t)+\cdots+a_{nn}x_n(t).
\end{aligned}
$$

The matrix \(A\) encodes all interactions among the components.

## 113.3 Initial Value Problems

An initial value problem specifies both the differential equation and the initial state:

$$
x'(t)=Ax(t),
\qquad
x(0)=x_0.
$$

The goal is to find the function \(x(t)\) satisfying both conditions.

The matrix \(A\) determines the dynamics. The vector \(x_0\) determines which particular trajectory occurs.

For the scalar equation \(x'=ax\), the solution is \(x(t)=e^{at}x_0\). For the matrix equation, the analogous solution is

$$
x(t)=e^{At}x_0.
$$

The object \(e^{At}\) is the matrix exponential.

## 113.4 Matrix Exponential

The matrix exponential is defined by the power series

$$
e^{At} =
I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots.
$$

This definition is valid for every square matrix \(A\).

The solution of

$$
x'(t)=Ax(t),
\qquad
x(0)=x_0
$$

is

$$
x(t)=e^{At}x_0.
$$

This is the exact vector analogue of the scalar solution \(y(t)=e^{at}y(0)\). Matrix exponentials are standard tools for solving systems of linear differential equations and for describing linear time evolution.
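
In numerical work the matrix exponential is available directly: SciPy's `scipy.linalg.expm` computes it. A minimal sketch, assuming NumPy and SciPy, with \(A\) and \(x_0\) chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
x0 = np.array([1.0, 1.0])

t = 0.5
x_t = expm(A * t) @ x0   # x(t) = e^{At} x0
print(x_t)
```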

## 113.5 Why the Matrix Exponential Works

Differentiate the power series term by term:

$$
\frac{d}{dt}e^{At} =
A + A^2t + \frac{A^3t^2}{2!} + \cdots.
$$

Factor out \(A\):

$$
\frac{d}{dt}e^{At} =
A\left(I+At+\frac{(At)^2}{2!}+\cdots\right).
$$

Therefore

$$
\frac{d}{dt}e^{At}=Ae^{At}.
$$

If

$$
x(t)=e^{At}x_0,
$$

then

$$
x'(t)=Ae^{At}x_0=Ax(t).
$$

Also,

$$
x(0)=e^{A\cdot 0}x_0=Ix_0=x_0.
$$

Thus \(e^{At}x_0\) solves the initial value problem.
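
The identity \(\frac{d}{dt}e^{At}=Ae^{At}\) can also be checked numerically with a central difference. A minimal sketch, assuming SciPy, with the test matrix chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # arbitrary test matrix
t, h = 0.7, 1e-6

# Central-difference approximation of d/dt e^{At} at time t
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)

print(np.allclose(deriv, A @ expm(A * t), atol=1e-6))  # True
```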

## 113.6 Diagonal Matrices

Matrix exponentials are easy when \(A\) is diagonal.

Let

$$
A =
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}.
$$

Then

$$
e^{At} =
\begin{bmatrix}
e^{\lambda_1t} & 0 & \cdots & 0 \\
0 & e^{\lambda_2t} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{\lambda_nt}
\end{bmatrix}.
$$

Each coordinate evolves independently:

$$
x_i(t)=e^{\lambda_i t}x_i(0).
$$

A diagonal system is therefore a collection of uncoupled scalar equations.
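
A quick numerical confirmation, assuming SciPy: for diagonal \(A\), `expm` agrees with the elementwise exponential of the diagonal.

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([-1.0, 0.5, 2.0])   # arbitrary diagonal entries
t = 0.3

# For diagonal A, e^{At} is the elementwise exponential of the diagonal.
print(np.allclose(expm(np.diag(lam) * t), np.diag(np.exp(lam * t))))  # True
```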

## 113.7 Diagonalization

Suppose \(A\) is diagonalizable. Then

$$
A = PDP^{-1},
$$

where \(D\) is diagonal.

The columns of \(P\) are eigenvectors of \(A\), and the diagonal entries of \(D\) are eigenvalues.

Since powers of \(A\) satisfy

$$
A^k = PD^kP^{-1},
$$

the matrix exponential satisfies

$$
e^{At}=Pe^{Dt}P^{-1}.
$$

Substituting \(y=P^{-1}x\) transforms \(x'=Ax\) into \(y'=Dy\), so diagonalization reduces a coupled system to independent scalar equations.

This is one of the main reasons eigenvalues and eigenvectors are central in differential equations.
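
This identity is easy to test in code. A minimal sketch, assuming NumPy and SciPy, using `np.linalg.eig` to obtain \(P\) and \(D\) for an arbitrary diagonalizable matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # arbitrary diagonalizable matrix
t = 0.4

evals, P = np.linalg.eig(A)              # A = P D P^{-1}
via_eig = P @ np.diag(np.exp(evals * t)) @ np.linalg.inv(P)

print(np.allclose(via_eig, expm(A * t)))  # True
```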

## 113.8 Eigenvector Solutions

If \(v\) is an eigenvector of \(A\) with eigenvalue \(\lambda\), then

$$
Av=\lambda v.
$$

Consider

$$
x(t)=e^{\lambda t}v.
$$

Then

$$
x'(t)=\lambda e^{\lambda t}v,
$$

and

$$
Ax(t)=Ae^{\lambda t}v=e^{\lambda t}Av=e^{\lambda t}\lambda v.
$$

Thus

$$
x'(t)=Ax(t).
$$

Each eigenvector gives a special solution.

If \(A\) has a basis of eigenvectors \(v_1,\ldots,v_n\), then the general solution is

$$
x(t)=c_1e^{\lambda_1t}v_1+\cdots+c_ne^{\lambda_nt}v_n.
$$

The constants are determined by the initial condition.
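
Concretely, the constants solve \(Pc = x_0\), where the columns of \(P\) are the eigenvectors. A minimal sketch, assuming SciPy, that expands an arbitrary \(x_0\) and checks the result against the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
x0 = np.array([2.0, 1.0])

evals, P = np.linalg.eig(A)      # columns of P are eigenvectors
c = np.linalg.solve(P, x0)       # coefficients in x0 = c1 v1 + ... + cn vn

t = 0.25
x_t = P @ (c * np.exp(evals * t))          # sum of c_i e^{lambda_i t} v_i
print(np.allclose(x_t, expm(A * t) @ x0))  # True
```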

## 113.9 Example: A Diagonalizable System

Let

$$
A =
\begin{bmatrix}
3 & 1 \\
0 & 2
\end{bmatrix}.
$$

The eigenvalues are \(3\) and \(2\).

For \(\lambda=3\), an eigenvector is

$$
v_1=
\begin{bmatrix}
1 \\
0
\end{bmatrix}.
$$

For \(\lambda=2\), an eigenvector is

$$
v_2=
\begin{bmatrix}
-1 \\
1
\end{bmatrix}.
$$

Therefore the general solution of

$$
x'(t)=Ax(t)
$$

is

$$
x(t) =
c_1e^{3t}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
+
c_2e^{2t}
\begin{bmatrix}
-1 \\
1
\end{bmatrix}.
$$

This expression separates the motion into two independent eigen-directions.

The term with eigenvalue \(3\) grows faster than the term with eigenvalue \(2\).
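
Both the eigenpairs and the general solution can be verified numerically. A minimal sketch, assuming SciPy, using the initial condition \(x(0)=v_1+v_2\):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
v1 = np.array([1.0, 0.0])    # eigenvector for lambda = 3
v2 = np.array([-1.0, 1.0])   # eigenvector for lambda = 2

print(np.allclose(A @ v1, 3 * v1), np.allclose(A @ v2, 2 * v2))  # True True

# With c1 = c2 = 1, the initial condition is x(0) = v1 + v2 = (0, 1).
t = 0.8
x_formula = np.exp(3 * t) * v1 + np.exp(2 * t) * v2
print(np.allclose(x_formula, expm(A * t) @ (v1 + v2)))  # True
```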

## 113.10 Stability

Stability concerns the behavior of solutions as \(t\to\infty\).

For the linear homogeneous system

$$
x'(t)=Ax(t),
$$

the eigenvalues of \(A\) determine stability.

If all eigenvalues have negative real parts, then

$$
x(t)\to 0
$$

for every initial condition.

If some eigenvalue has positive real part, then there are solutions that grow without bound.

If eigenvalues lie on the imaginary axis, the behavior may involve oscillation, neutral stability, or instability, depending on the matrix structure.

| Eigenvalue condition | Typical behavior |
|---|---|
| \(\operatorname{Re}\lambda < 0\) | Decay |
| \(\operatorname{Re}\lambda > 0\) | Growth |
| \(\lambda = i\omega\) | Oscillation |
| Repeated eigenvalue with defective structure | Polynomial factors may appear |

The spectrum of \(A\) thus gives qualitative information about the differential equation.
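
These rules translate into a simple spectral test. The following sketch assumes NumPy; the helper name `classify_stability` is illustrative, not standard.

```python
import numpy as np

def classify_stability(A, tol=1e-12):
    """Classify x' = Ax by the real parts of the eigenvalues of A."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable (all solutions decay)"
    if np.any(re > tol):
        return "unstable (some solutions grow)"
    return "marginal case (eigenvalues on the imaginary axis)"

print(classify_stability(np.array([[-1.0, 0.0], [0.0, -2.0]])))
print(classify_stability(np.array([[0.0, -1.0], [1.0,  0.0]])))
```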

## 113.11 Complex Eigenvalues

Real matrices may have complex eigenvalues.

Suppose

$$
\lambda = \alpha + i\beta
$$

is an eigenvalue.

Then the corresponding exponential is

$$
e^{\lambda t} =
e^{\alpha t}e^{i\beta t}.
$$

Using Euler's formula,

$$
e^{i\beta t}=\cos(\beta t)+i\sin(\beta t).
$$

Thus complex eigenvalues produce oscillation.

The real part \(\alpha\) controls growth or decay. The imaginary part \(\beta\) controls angular frequency.

| Eigenvalue | Behavior |
|---|---|
| \(\alpha+i\beta\), \(\alpha<0\) | Decaying oscillation |
| \(\alpha+i\beta\), \(\alpha=0\) | Sustained oscillation |
| \(\alpha+i\beta\), \(\alpha>0\) | Growing oscillation |

This explains spirals in planar systems.

## 113.12 Planar Systems

A planar linear system has the form

$$
\begin{bmatrix}
x' \\
y'
\end{bmatrix} =
A
\begin{bmatrix}
x \\
y
\end{bmatrix}.
$$

The phase plane shows trajectories in the \((x,y)\)-plane.

The eigenvalues of \(A\) classify many common behaviors.

| Eigenvalues | Phase portrait |
|---|---|
| Two negative real eigenvalues | Stable node |
| Two positive real eigenvalues | Unstable node |
| Opposite signs | Saddle |
| Complex with negative real part | Stable spiral |
| Complex with positive real part | Unstable spiral |
| Pure imaginary | Center |

For example,

$$
A =
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
$$

has eigenvalues

$$
\lambda = \pm i.
$$

The system is

$$
x'=-y,
\qquad
y'=x.
$$

Its solutions rotate around the origin.
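
For this matrix, \(e^{At}\) is rotation by angle \(t\), so trajectories preserve the norm of the state. A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
x0 = np.array([1.0, 0.0])

# e^{At} is rotation by angle t: the state circles the origin at constant norm.
for t in (0.0, np.pi / 2, np.pi):
    x_t = expm(A * t) @ x0
    print(np.round(x_t, 6), round(float(np.linalg.norm(x_t)), 6))
```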

## 113.13 Second-Order Equations as First-Order Systems

Many differential equations involve second derivatives. Linear algebra handles them by rewriting them as first-order systems.

Consider

$$
y'' + ay' + by = 0.
$$

Set

$$
x_1=y,
\qquad
x_2=y'.
$$

Then

$$
x_1'=x_2,
$$

and

$$
x_2'=-bx_1-ax_2.
$$

Thus

$$
\begin{bmatrix}
x_1' \\
x_2'
\end{bmatrix} =
\begin{bmatrix}
0 & 1 \\
-b & -a
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}.
$$

This converts one second-order scalar equation into a first-order vector system.

The matrix

$$
\begin{bmatrix}
0 & 1 \\
-b & -a
\end{bmatrix}
$$

then determines the behavior.
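
A minimal sketch, assuming SciPy, that solves the harmonic oscillator \(y''+4y=0\) through its companion matrix and checks the first component against \(y(t)=\cos(2t)\):

```python
import numpy as np
from scipy.linalg import expm

# y'' + a y' + b y = 0 rewritten as x' = [[0, 1], [-b, -a]] x
a, b = 0.0, 4.0                      # harmonic oscillator, omega = 2
M = np.array([[0.0, 1.0],
              [-b,  -a]])

y0, yp0 = 1.0, 0.0                   # y(0) and y'(0)
t = 0.9
x_t = expm(M * t) @ np.array([y0, yp0])

omega = np.sqrt(b)
print(np.isclose(x_t[0], y0 * np.cos(omega * t)))  # True: y(t) = cos(2t)
```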

## 113.14 Forced Linear Systems

A nonhomogeneous linear system has the form

$$
x'(t)=Ax(t)+f(t).
$$

Here \(f(t)\) is an external forcing term.

The solution is given by variation of constants:

$$
x(t)=e^{At}x_0+\int_0^t e^{A(t-s)}f(s)\,ds.
$$

The first term is the natural response. The integral term is the forced response.

This formula shows how the matrix exponential propagates both the initial condition and the external input.
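
The variation-of-constants integral can be approximated by quadrature and cross-checked with a general-purpose solver. A minimal sketch, assuming SciPy, with an arbitrary forcing term:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
x0 = np.array([1.0, 1.0])
f = lambda s: np.array([np.sin(s), 1.0])   # arbitrary forcing term

t = 1.5
s = np.linspace(0.0, t, 2001)
vals = np.array([expm(A * (t - si)) @ f(si) for si in s])  # integrand samples
x_t = expm(A * t) @ x0 + trapezoid(vals, s, axis=0)

# Cross-check with a general-purpose ODE solver
sol = solve_ivp(lambda tt, xx: A @ xx + f(tt), (0.0, t), x0,
                rtol=1e-9, atol=1e-9)
print(np.allclose(x_t, sol.y[:, -1], atol=1e-5))  # True
```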

## 113.15 Equilibrium Points

An equilibrium point is a constant solution.

For the system

$$
x'(t)=Ax(t),
$$

the vector \(x=0\) is always an equilibrium.

For an affine system

$$
x'(t)=Ax(t)+b,
$$

an equilibrium \(x^\ast\) satisfies

$$
Ax^\ast+b=0.
$$

If \(A\) is invertible, then

$$
x^\ast=-A^{-1}b.
$$

The stability of this equilibrium is determined by the eigenvalues of \(A\).

By shifting variables,

$$
z=x-x^\ast,
$$

the affine system becomes

$$
z'=Az.
$$

Thus the study of affine systems reduces to homogeneous linear systems.
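
Computing the equilibrium is a single linear solve. A minimal sketch, assuming NumPy, with \(A\) and \(b\) chosen arbitrarily:

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
b = np.array([1.0, 6.0])

# Equilibrium of x' = Ax + b: solve A x* + b = 0, i.e. x* = -A^{-1} b
x_star = np.linalg.solve(A, -b)
print(x_star, np.allclose(A @ x_star + b, 0.0))  # residual check
```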

## 113.16 Systems with Constant Coefficients

The matrix equation

$$
x'=Ax
$$

is called a constant-coefficient linear system because \(A\) does not depend on \(t\).

If the matrix depends on time,

$$
x'=A(t)x,
$$

then the problem is more complicated. In general,

$$
e^{\int A(t)\,dt}
$$

does not give the solution unless the matrices \(A(t)\) commute at different times.

For constant coefficients, all powers of \(A\) commute with each other, and the exponential formula is exact.

This makes constant-coefficient systems a fundamental class.

## 113.17 Defective Matrices and Jordan Form

A matrix may fail to have enough eigenvectors for diagonalization.

In that case, Jordan form gives the replacement.

A Jordan block has the form

$$
J =
\begin{bmatrix}
\lambda & 1 & 0 & \cdots & 0 \\
0 & \lambda & 1 & \cdots & 0 \\
0 & 0 & \lambda & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & 1 \\
0 & 0 & 0 & \cdots & \lambda
\end{bmatrix}.
$$

Jordan form records eigenvalues and the structure of generalized eigenvectors.

For a Jordan block,

$$
J=\lambda I+N,
$$

where \(N\) is nilpotent. Therefore

$$
e^{Jt}=e^{\lambda t}e^{Nt}.
$$

Since \(N^k=0\) for some \(k\), the exponential \(e^{Nt}\) is a finite polynomial in \(t\).

Thus defective matrices produce terms such as

$$
te^{\lambda t},
\qquad
t^2e^{\lambda t},
$$

in solutions.
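
For a \(3\times 3\) Jordan block, \(N^3=0\), so \(e^{Nt}\) truncates after the \(t^2\) term. A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.linalg import expm

lam, t = -1.0, 0.6
N = np.diag([1.0, 1.0], k=1)      # nilpotent part: N^3 = 0
J = lam * np.eye(3) + N           # 3x3 Jordan block

# e^{Nt} is a finite polynomial: I + Nt + (Nt)^2 / 2
e_Nt = np.eye(3) + N * t + (N @ N) * t**2 / 2
print(np.allclose(expm(J * t), np.exp(lam * t) * e_Nt))  # True
```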

## 113.18 Discretization

Differential equations are often solved numerically.

A simple method is Euler's method. For

$$
x'=Ax,
$$

choose a step size \(h>0\). Approximate

$$
x((k+1)h)
$$

by

$$
x_{k+1}=x_k+hAx_k.
$$

Thus

$$
x_{k+1}=(I+hA)x_k.
$$

This turns a continuous differential equation into a discrete linear recurrence.

After \(k\) steps,

$$
x_k=(I+hA)^kx_0.
$$

This approximation should be compared with the exact solution

$$
x(kh)=e^{Akh}x_0.
$$

Numerical methods for differential equations therefore depend on matrix powers, spectral stability, conditioning, and approximation.
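
The comparison between \((I+hA)^k\) and \(e^{Akh}\) is easy to carry out. A minimal sketch, assuming SciPy, using the rotation matrix from Section 113.12, where Euler's method slowly spirals outward because the eigenvalues of \(I+hA\) have modulus \(\sqrt{1+h^2}>1\):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
x0 = np.array([1.0, 0.0])

h, k = 0.01, 100   # step size and number of steps
x_euler = np.linalg.matrix_power(np.eye(2) + h * A, k) @ x0
x_exact = expm(A * (k * h)) @ x0

print(x_euler)   # close to x_exact, but with norm slightly above 1
print(x_exact)   # exact rotation keeps the norm exactly 1
```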

## 113.19 Linear Differential Equations in Applications

Linear differential equations appear in many areas.

| Field | Model |
|---|---|
| Mechanics | Coupled springs and masses |
| Electrical engineering | Circuits |
| Control theory | State-space systems |
| Biology | Linear population models |
| Chemistry | Reaction networks near equilibrium |
| Economics | Linear dynamic systems |
| Quantum mechanics | Schrödinger equation |
| Heat flow | Discretized diffusion equations |
| Vibrations | Normal modes |
| Signal processing | Linear filters |

Many nonlinear systems are also studied by linearization near equilibrium points. This means replacing the nonlinear system by its Jacobian matrix evaluated at the equilibrium.

The resulting matrix describes local behavior.

## 113.20 Summary

Differential equations describe change. Linear algebra describes coupled change.

A first-order homogeneous linear system has the form

$$
x'(t)=Ax(t).
$$

Its solution is

$$
x(t)=e^{At}x_0.
$$

When \(A\) is diagonalizable, the system decomposes into independent modes determined by eigenvalues and eigenvectors. Eigenvalues describe growth, decay, oscillation, and stability.

Second-order equations can be rewritten as first-order systems. Forced systems are solved using matrix exponentials and integrals. Numerical methods convert differential equations into matrix recurrences.

The main lesson is that a linear differential equation is a dynamic form of a matrix problem. The matrix determines how the state evolves, and the tools of linear algebra reveal the structure of that evolution.
