# Chapter 73. Matrix Exponential

The matrix exponential is the matrix function corresponding to the scalar exponential function.

For a square matrix \(A\), the matrix exponential is written

$$
e^A
$$

or

$$
\exp(A).
$$

It is defined by the power series

$$
e^A =
\sum_{k=0}^{\infty}\frac{A^k}{k!} =
I+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\cdots.
$$

This series converges for every real or complex square matrix, so \(e^A\) is always well-defined. The matrix exponential is used to solve systems of linear differential equations, and it also defines the exponential map from matrix Lie algebras to matrix Lie groups.

## 73.1 Definition

Let \(A\) be an \(n\times n\) matrix over \(\mathbb{R}\) or \(\mathbb{C}\). The exponential of \(A\) is

$$
\exp(A)=e^A =
\sum_{k=0}^{\infty}\frac{A^k}{k!}.
$$

Here

$$
A^0=I.
$$

Thus the first terms are

$$
e^A =
I+A+\frac{1}{2}A^2+\frac{1}{6}A^3+\cdots.
$$

This definition is directly analogous to the scalar exponential series

$$
e^x =
1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots.
$$

The difference is that powers of \(A\) are matrix powers.
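
As a concrete illustration, the series can be summed directly for a small matrix and compared against a library implementation. The following is a minimal sketch, assuming NumPy and SciPy are available; the truncation count of 30 terms is an illustrative choice, not a recommendation.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Approximate e^A by truncating the power series after `terms` terms."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])       # the k = 0 term, A^0 = I
    for k in range(1, terms):
        term = term @ A / k         # term is now A^k / k!
        result = result + term
    return result

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(np.allclose(expm_series(A), expm(A)))  # True
```

Direct summation is shown here only to mirror the definition; Section 73.21 discusses why practical codes use other methods.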

## 73.2 Convergence

The exponential series converges for every square matrix.

One way to see this is to use a matrix norm. For a submultiplicative norm,

$$
\|A^k\|\leq \|A\|^k.
$$

Therefore

$$
\left\|\frac{A^k}{k!}\right\|
\leq
\frac{\|A\|^k}{k!}.
$$

The scalar series

$$
\sum_{k=0}^{\infty}\frac{\|A\|^k}{k!}
$$

converges to

$$
e^{\|A\|}.
$$

Thus the matrix exponential series converges absolutely.

This proves that \(e^A\) is defined for every square matrix \(A\), with no restriction on eigenvalues.

## 73.3 Exponential of the Zero Matrix

Let \(0\) be the zero matrix. Since

$$
0^k=0
$$

for every positive integer \(k\), we have

$$
e^0 =
I+0+\frac{0^2}{2!}+\cdots =
I.
$$

Thus

$$
e^0=I.
$$

This agrees with the scalar identity

$$
e^0=1.
$$

For matrices, the identity matrix plays the role of the scalar number \(1\).

## 73.4 Exponential of a Diagonal Matrix

Let

$$
D=
\operatorname{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n).
$$

Then

$$
D^k=
\operatorname{diag}(\lambda_1^k,\lambda_2^k,\ldots,\lambda_n^k).
$$

Therefore

$$
e^D =
\operatorname{diag}(e^{\lambda_1},e^{\lambda_2},\ldots,e^{\lambda_n}).
$$

In matrix form,

$$
e^D =
\begin{bmatrix}
e^{\lambda_1} & 0 & \cdots & 0 \\
0 & e^{\lambda_2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{\lambda_n}
\end{bmatrix}.
$$

Thus diagonal matrices are exponentiated entry by entry along the diagonal.
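
A quick numerical check of this entrywise behavior, as a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([1.0, -2.0, 0.5])   # illustrative eigenvalues
D = np.diag(lam)
print(np.allclose(expm(D), np.diag(np.exp(lam))))  # True
```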

## 73.5 Exponential of a Diagonalizable Matrix

Suppose \(A\) is diagonalizable:

$$
A=PDP^{-1}.
$$

Then

$$
A^k=PD^kP^{-1}
$$

for every nonnegative integer \(k\). Substitute this into the exponential series:

$$
e^A =
\sum_{k=0}^{\infty}\frac{A^k}{k!} =
\sum_{k=0}^{\infty}\frac{PD^kP^{-1}}{k!}.
$$

Since \(P\) and \(P^{-1}\) are constant,

$$
e^A =
P
\left(
\sum_{k=0}^{\infty}\frac{D^k}{k!}
\right)
P^{-1}.
$$

Hence

$$
e^A=Pe^DP^{-1}.
$$

If

$$
D=\operatorname{diag}(\lambda_1,\ldots,\lambda_n),
$$

then

$$
e^A =
P
\operatorname{diag}(e^{\lambda_1},\ldots,e^{\lambda_n})
P^{-1}.
$$

## 73.6 Example: Diagonalizable Case

Let

$$
A=
\begin{bmatrix}
2 & 1 \\
1 & 2
\end{bmatrix}.
$$

The eigenvalues are

$$
3
\qquad
\text{and}
\qquad
1.
$$

A diagonalization is

$$
A=PDP^{-1},
$$

where

$$
P=
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix},
\qquad
D=
\begin{bmatrix}
3 & 0 \\
0 & 1
\end{bmatrix}.
$$

Then

$$
e^A=Pe^DP^{-1}.
$$

Since

$$
e^D=
\begin{bmatrix}
e^3 & 0 \\
0 & e
\end{bmatrix},
$$

and \(P^{-1}=\tfrac12 P\) (because \(P^2=2I\)), we have

$$
e^A =
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
e^3 & 0 \\
0 & e
\end{bmatrix}
\frac12
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
$$

Multiplying gives

$$
e^A =
\frac12
\begin{bmatrix}
e^3+e & e^3-e \\
e^3-e & e^3+e
\end{bmatrix}.
$$
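
The closed form can be verified numerically; the following sketch assumes NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [1.0, 2.0]])
e3, e1 = np.exp(3.0), np.exp(1.0)
closed_form = 0.5 * np.array([[e3 + e1, e3 - e1],
                              [e3 - e1, e3 + e1]])
print(np.allclose(expm(A), closed_form))  # True
```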

## 73.7 Exponential of a Nilpotent Matrix

A matrix \(N\) is nilpotent if

$$
N^m=0
$$

for some positive integer \(m\).

For a nilpotent matrix, the exponential series terminates:

$$
e^N =
I+N+\frac{N^2}{2!}+\cdots+\frac{N^{m-1}}{(m-1)!}.
$$

All later terms are zero.

For example, let

$$
N=
\begin{bmatrix}
0 & 1 \\
0 & 0
\end{bmatrix}.
$$

Then

$$
N^2=0.
$$

Therefore

$$
e^N=I+N =
\begin{bmatrix}
1 & 1 \\
0 & 1
\end{bmatrix}.
$$
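
For this \(N\), the series stops after two terms, which a quick check confirms (a sketch assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import expm

N = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.allclose(expm(N), np.eye(2) + N))  # True: the series terminates at I + N
```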

Nilpotent matrices are important because every Jordan block is a scalar matrix plus a nilpotent matrix.

## 73.8 Exponential of a Jordan Block

Let

$$
J=\lambda I+N,
$$

where \(N\) is nilpotent and commutes with \(\lambda I\).

Then

$$
e^J=e^{\lambda I+N}.
$$

Since

$$
(\lambda I)N=N(\lambda I),
$$

we may split the exponential:

$$
e^J=e^{\lambda I}e^N.
$$

Now

$$
e^{\lambda I}=e^\lambda I.
$$

Thus

$$
e^J=e^\lambda e^N.
$$

If \(J=J_k(\lambda)\), then \(N^k=0\), so

$$
e^J =
e^\lambda
\left(
I+N+\frac{N^2}{2!}+\cdots+\frac{N^{k-1}}{(k-1)!}
\right).
$$

For a \(3\times3\) Jordan block,

$$
J=
\begin{bmatrix}
\lambda & 1 & 0 \\
0 & \lambda & 1 \\
0 & 0 & \lambda
\end{bmatrix},
$$

we get

$$
e^J =
e^\lambda
\begin{bmatrix}
1 & 1 & \frac12 \\
0 & 1 & 1 \\
0 & 0 & 1
\end{bmatrix}.
$$
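
A numerical check of the \(3\times3\) Jordan block formula, as a sketch assuming NumPy and SciPy; the value of \(\lambda\) is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.linalg import expm

lam = -0.7
J = lam * np.eye(3) + np.diag([1.0, 1.0], k=1)   # Jordan block J_3(lam)
expected = np.exp(lam) * np.array([[1.0, 1.0, 0.5],
                                   [0.0, 1.0, 1.0],
                                   [0.0, 0.0, 1.0]])
print(np.allclose(expm(J), expected))  # True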

## 73.9 Exponential of a Matrix in Jordan Form

If

$$
A=PJP^{-1}
$$

is a Jordan decomposition, then

$$
e^A=Pe^JP^{-1}.
$$

Since \(J\) is block diagonal,

$$
J=
J_{k_1}(\lambda_1)\oplus\cdots\oplus J_{k_r}(\lambda_r),
$$

its exponential is also block diagonal:

$$
e^J =
e^{J_{k_1}(\lambda_1)}
\oplus
\cdots
\oplus
e^{J_{k_r}(\lambda_r)}.
$$

Thus the exponential of a matrix is computed block by block in Jordan form.

This formula explains why defective matrices produce polynomial factors multiplied by exponentials in the solutions of \(x'=Ax\).

## 73.10 Basic Properties

The matrix exponential satisfies several basic identities.

First,

$$
e^0=I.
$$

Second,

$$
(e^A)^T=e^{A^T}.
$$

For complex matrices,

$$
(e^A)^*=e^{A^*}.
$$

Third, if \(P\) is invertible, then

$$
e^{PAP^{-1}}=Pe^AP^{-1}.
$$

Fourth,

$$
e^A
$$

is always invertible, with inverse

$$
(e^A)^{-1}=e^{-A}.
$$

These identities follow from the power series definition and its compatibility with matrix multiplication. In particular, \(e^A\) is invertible even when \(A\) itself is singular.
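
Each identity can be spot-checked numerically on a random test matrix; the following is a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))          # generically invertible

print(np.allclose(expm(A.T), expm(A).T))                     # transpose rule
print(np.allclose(expm(P @ A @ np.linalg.inv(P)),
                  P @ expm(A) @ np.linalg.inv(P)))           # similarity rule
print(np.allclose(expm(-A), np.linalg.inv(expm(A))))         # inverse rule
```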

## 73.11 The Product Rule and Commutation

For scalars,

$$
e^{x+y}=e^xe^y.
$$

For matrices, this identity generally requires commutation.

If

$$
AB=BA,
$$

then

$$
e^{A+B}=e^Ae^B.
$$

The proof follows the same power series argument as in the scalar case: commutation lets the binomial theorem expand \((A+B)^k\), so the Cauchy product of the two series rearranges exactly as for scalars.

If

$$
AB\neq BA,
$$

then generally

$$
e^{A+B}\neq e^Ae^B.
$$

This is one of the main differences between scalar and matrix exponentials: the identity \(e^{A+B}=e^Ae^B\) is guaranteed only for commuting matrices.
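
The failure for noncommuting matrices is easy to exhibit numerically; this sketch (assuming NumPy and SciPy) uses the standard pair of \(2\times2\) nilpotent matrices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(A @ B, B @ A))                    # False: A and B do not commute
print(np.allclose(expm(A + B), expm(A) @ expm(B)))  # False: the product rule fails
```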

## 73.12 One-Parameter Groups

For a fixed matrix \(A\), define

$$
\Phi(t)=e^{tA}.
$$

Then

$$
\Phi(0)=I.
$$

Also,

$$
\Phi(t+s)=e^{(t+s)A}.
$$

Since \(tA\) and \(sA\) commute, we have

$$
e^{(t+s)A}=e^{tA}e^{sA}.
$$

Therefore

$$
\Phi(t+s)=\Phi(t)\Phi(s).
$$

This means that \(t\mapsto e^{tA}\) is a one-parameter group of invertible matrices.

Its inverse is

$$
\Phi(t)^{-1}=\Phi(-t)=e^{-tA}.
$$

This group property is essential in differential equations and Lie theory.
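
The group law and the inverse formula can both be verified numerically; a sketch assuming NumPy and SciPy, with an arbitrary test matrix and parameter values:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, -0.5]])
t, s = 0.3, 1.1

print(np.allclose(expm((t + s) * A), expm(t * A) @ expm(s * A)))  # group law
print(np.allclose(np.linalg.inv(expm(t * A)), expm(-t * A)))      # inverse
```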

## 73.13 Derivative of the Matrix Exponential

For fixed \(A\), the derivative of

$$
e^{tA}
$$

is

$$
\frac{d}{dt}e^{tA}=Ae^{tA}.
$$

Since \(A\) commutes with every power of itself, we also have

$$
\frac{d}{dt}e^{tA}=e^{tA}A.
$$

To see this, differentiate term by term:

$$
e^{tA} =
I+tA+\frac{t^2A^2}{2!}+\frac{t^3A^3}{3!}+\cdots.
$$

Then

$$
\frac{d}{dt}e^{tA} =
A+tA^2+\frac{t^2A^3}{2!}+\cdots.
$$

Factor \(A\):

$$
\frac{d}{dt}e^{tA} =
A
\left(
I+tA+\frac{t^2A^2}{2!}+\cdots
\right).
$$

Thus

$$
\frac{d}{dt}e^{tA}=Ae^{tA}.
$$
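
A finite-difference check of the derivative formula; this is a sketch assuming NumPy and SciPy, with an illustrative step size `h`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, h = 0.8, 1e-6

numerical = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)  # central difference
analytic = A @ expm(t * A)
print(np.allclose(numerical, analytic, atol=1e-5))  # True up to O(h^2) error
```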

## 73.14 Homogeneous Linear Systems

Consider the system

$$
x'(t)=Ax(t),
$$

with initial condition

$$
x(0)=x_0.
$$

The solution is

$$
x(t)=e^{tA}x_0.
$$

Indeed,

$$
\frac{d}{dt}x(t) =
\frac{d}{dt}(e^{tA}x_0) =
Ae^{tA}x_0 =
Ax(t).
$$

Also,

$$
x(0)=e^0x_0=Ix_0=x_0.
$$

Thus the matrix exponential is the fundamental solution matrix for constant-coefficient linear systems.
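
The formula \(x(t)=e^{tA}x_0\) can be compared against a general-purpose ODE integrator; the following sketch assumes NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t_final = 2.0

exact = expm(t_final * A) @ x0
numeric = solve_ivp(lambda t, x: A @ x, (0.0, t_final), x0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(exact, numeric))  # True
```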

## 73.15 Inhomogeneous Linear Systems

Consider the inhomogeneous system

$$
x'(t)=Ax(t)+b(t),
$$

with

$$
x(0)=x_0.
$$

The solution is

$$
x(t)=e^{tA}x_0+\int_0^t e^{(t-s)A}b(s)\,ds.
$$

This is the variation of constants formula.

The first term describes free evolution. The integral term accumulates the forcing input \(b(s)\), transported forward by \(e^{(t-s)A}\).

If \(b(t)=b\) is constant and \(A\) is invertible, then

$$
x(t)=e^{tA}x_0+A^{-1}(e^{tA}-I)b.
$$

This follows by evaluating

$$
\int_0^t e^{(t-s)A}b\,ds.
$$
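
For constant forcing and invertible \(A\), the closed form can be checked against direct quadrature of the variation of constants integral. This is a sketch assuming NumPy and SciPy; the grid size and test data are illustrative:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # invertible
b = np.array([1.0, 1.0])
x0 = np.array([0.0, 1.0])
t = 1.5

# Closed form: x(t) = e^{tA} x0 + A^{-1} (e^{tA} - I) b
closed = expm(t * A) @ x0 + np.linalg.solve(A, (expm(t * A) - np.eye(2)) @ b)

# Trapezoid-rule approximation of the integral term on a fine grid.
s = np.linspace(0.0, t, 2001)
ds = s[1] - s[0]
vals = np.array([expm((t - si) * A) @ b for si in s])
integral = ds * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))
quadrature = expm(t * A) @ x0 + integral
print(np.allclose(closed, quadrature, atol=1e-5))  # True up to quadrature error
```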

## 73.16 Stability

The eigenvalues of \(A\) control the long-term behavior of

$$
e^{tA}.
$$

If \(A\) is diagonalizable and has eigenvalues

$$
\lambda_1,\ldots,\lambda_n,
$$

then

$$
e^{tA}=P
\operatorname{diag}(e^{t\lambda_1},\ldots,e^{t\lambda_n})
P^{-1}.
$$

If all real parts satisfy

$$
\operatorname{Re}(\lambda_i)<0,
$$

then the corresponding exponential factors decay as

$$
t\to\infty.
$$

If some eigenvalue satisfies

$$
\operatorname{Re}(\lambda_i)>0,
$$

then some component grows exponentially.

If some eigenvalues have zero real part, boundedness depends on the fine structure: diagonalizable eigenvalues on the imaginary axis give bounded oscillations, while nontrivial Jordan blocks produce polynomial growth, and nonnormality can cause large transient amplification.

Thus stability is governed by both eigenvalues and Jordan structure.

## 73.17 Oscillation and Rotation

Complex eigenvalues produce oscillations.

Consider

$$
A=
\begin{bmatrix}
0 & -\omega \\
\omega & 0
\end{bmatrix}.
$$

This matrix satisfies

$$
A^2=-\omega^2 I.
$$

Using the power series,

$$
e^{tA} =
I+tA+\frac{t^2A^2}{2!}+\frac{t^3A^3}{3!}+\cdots.
$$

Separating even and odd powers gives

$$
e^{tA} =
\cos(\omega t)I+\frac{\sin(\omega t)}{\omega}A.
$$

Therefore

$$
e^{tA} =
\begin{bmatrix}
\cos(\omega t) & -\sin(\omega t) \\
\sin(\omega t) & \cos(\omega t)
\end{bmatrix}.
$$

Thus the exponential of a skew-symmetric matrix generates rotations.
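
A numerical check that this generator produces the expected rotation matrix, as a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

omega, t = 2.0, 0.4
A = np.array([[0.0, -omega], [omega, 0.0]])
c, s = np.cos(omega * t), np.sin(omega * t)
rotation = np.array([[c, -s], [s, c]])
print(np.allclose(expm(t * A), rotation))  # True
```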

## 73.18 Exponential of Skew-Symmetric Matrices

A real matrix \(S\) is skew-symmetric if

$$
S^T=-S.
$$

Then

$$
(e^S)^T=e^{S^T}=e^{-S}.
$$

Since

$$
e^{-S}=(e^S)^{-1},
$$

we get

$$
(e^S)^T(e^S)=I.
$$

Thus

$$
e^S
$$

is orthogonal.

If also \(\det(e^S)=1\), it represents a rotation rather than a reflection. In fact,

$$
\det(e^S)=e^{\operatorname{tr}(S)}.
$$

For skew-symmetric \(S\), the trace is zero, so

$$
\det(e^S)=1.
$$

Thus exponentials of real skew-symmetric matrices lie in the rotation group.
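
The same conclusion holds in higher dimensions. The following sketch (assuming NumPy and SciPy) builds a random \(4\times4\) skew-symmetric matrix and checks orthogonality and the determinant:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
S = M - M.T                                # skew-symmetric: S^T = -S

Q = expm(S)
print(np.allclose(Q.T @ Q, np.eye(4)))     # orthogonal
print(np.isclose(np.linalg.det(Q), 1.0))   # determinant 1: a rotation
```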

## 73.19 Exponential of Hermitian and Symmetric Matrices

If \(A\) is Hermitian, then

$$
A=U\Lambda U^*
$$

with real diagonal \(\Lambda\).

Then

$$
e^A=Ue^\Lambda U^*.
$$

The eigenvalues of \(e^A\) are

$$
e^{\lambda_1},\ldots,e^{\lambda_n}.
$$

Since each \(e^{\lambda_i}>0\), the matrix \(e^A\) is Hermitian positive definite.

In the real case, if \(A\) is symmetric, then

$$
e^A
$$

is symmetric positive definite.

Thus the exponential maps symmetric matrices to positive definite matrices.

## 73.20 Trace and Determinant

For every square matrix \(A\),

$$
\det(e^A)=e^{\operatorname{tr}(A)}.
$$

This identity is easy to see when \(A\) is triangular or diagonalizable. In general, it follows from Schur form or from spectral arguments.

If the eigenvalues of \(A\) are

$$
\lambda_1,\ldots,\lambda_n
$$

counted with algebraic multiplicity, then the eigenvalues of \(e^A\) are

$$
e^{\lambda_1},\ldots,e^{\lambda_n}.
$$

Therefore

$$
\det(e^A) =
e^{\lambda_1}\cdots e^{\lambda_n} =
e^{\lambda_1+\cdots+\lambda_n} =
e^{\operatorname{tr}(A)}.
$$

This identity connects the matrix exponential with volume scaling.
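
A quick numerical confirmation of the trace–determinant identity, as a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))  # True
```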

## 73.21 Numerical Computation

The definition

$$
e^A=\sum_{k=0}^{\infty}\frac{A^k}{k!}
$$

is conceptually simple, but direct summation may be inefficient or unstable.

Common numerical methods include:

| Method | Basic idea |
|---|---|
| Scaling and squaring | Compute \(e^{A/2^s}\), then square repeatedly |
| Padé approximation | Approximate \(e^A\) by a rational function |
| Schur method | Reduce \(A\) to triangular form first |
| Krylov methods | Approximate \(e^A v\) without forming \(e^A\) |
| Diagonalization | Use eigenvectors when well-conditioned |

For large sparse systems, one often needs

$$
e^{tA}v
$$

rather than the full matrix

$$
e^{tA}.
$$

Krylov methods are designed for this case.
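
As an illustration of the scaling and squaring idea, one can scale \(A\) until its norm is small, sum a short Taylor series, and square back up. This is a simplified sketch assuming NumPy and SciPy, not the Padé-based algorithm that production routines such as `scipy.linalg.expm` actually use; the scaling heuristic and term count are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def expm_scaling_squaring(A, taylor_terms=12):
    """Simplified scaling and squaring: e^A = (e^{A / 2^s})^{2^s}."""
    norm = max(np.linalg.norm(A, 1), 1e-16)
    s = max(0, int(np.ceil(np.log2(norm))) + 1)
    B = A / 2**s                        # now ||B|| is small, so the series converges fast
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, taylor_terms):    # short Taylor series for e^B
        term = term @ B / k
        E = E + term
    for _ in range(s):                  # square back up s times
        E = E @ E
    return E

A = np.random.default_rng(3).standard_normal((4, 4))
print(np.allclose(expm_scaling_squaring(A), expm(A)))  # True
```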

## 73.22 Common Errors

The first common error is to compute \(e^A\) by exponentiating the entries of \(A\). The matrix exponential is not the entrywise exponential.

The second common error is to assume

$$
e^{A+B}=e^Ae^B
$$

without checking whether

$$
AB=BA.
$$

The third common error is to assume diagonalization is always available. Defective matrices require Jordan form, Schur form, or another method.

The fourth common error is to forget the identity matrix in the first term:

$$
e^A=I+A+\frac{A^2}{2!}+\cdots.
$$

The fifth common error is to assume eigenvalues alone always determine boundedness. Jordan blocks and nonnormality may introduce polynomial growth or transient amplification.

## 73.23 Summary

The matrix exponential is defined by

$$
e^A =
\sum_{k=0}^{\infty}\frac{A^k}{k!}.
$$

It is defined for every square matrix.

If

$$
A=PDP^{-1},
$$

then

$$
e^A=Pe^DP^{-1}.
$$

If \(N\) is nilpotent, then the exponential series terminates.

The matrix exponential solves the linear system

$$
x'(t)=Ax(t)
$$

through

$$
x(t)=e^{tA}x_0.
$$

It also produces one-parameter groups, describes rotations from skew-symmetric matrices, maps symmetric matrices to positive definite matrices, and satisfies

$$
\det(e^A)=e^{\operatorname{tr}(A)}.
$$

The matrix exponential is the central matrix function for continuous-time linear dynamics.
