
Chapter 118. Control Theory

Control theory studies how to influence the behavior of a dynamical system.

A system has a state. The state changes with time. A controller chooses inputs so that the state behaves in a desired way. The goal may be stabilization, tracking, disturbance rejection, regulation, optimal performance, safety, or robustness.

Linear algebra enters control theory through state-space models. In a linear time-invariant system, the state, input, and output are represented by vectors, and the dynamics are represented by matrices:

\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t).

This representation is central in modern control theory. It expresses system behavior through matrix equations, eigenvalues, subspaces, rank tests, and feedback laws. Control texts commonly develop controllability, observability, stability, and feedback design directly from this state-space form.

118.1 State, Input, and Output

The state vector contains the variables needed to describe the current condition of the system:

x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}.

The input vector contains variables chosen by the controller:

u(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_m(t) \end{bmatrix}.

The output vector contains variables measured or reported by the system:

y(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \\ \vdots \\ y_p(t) \end{bmatrix}.

The dimensions are:

| Object | Space | Meaning |
| --- | --- | --- |
| x(t) | \mathbb{R}^n | State |
| u(t) | \mathbb{R}^m | Input |
| y(t) | \mathbb{R}^p | Output |
| A | \mathbb{R}^{n\times n} | State matrix |
| B | \mathbb{R}^{n\times m} | Input matrix |
| C | \mathbb{R}^{p\times n} | Output matrix |
| D | \mathbb{R}^{p\times m} | Feedthrough matrix |

The matrix A describes how the state evolves naturally. The matrix B describes how inputs affect the state. The matrix C describes how the state is observed. The matrix D describes any direct input-to-output effect.

118.2 Linear Time-Invariant Systems

A continuous-time linear time-invariant system has the form

\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t).

It is linear because x, u, and y enter through linear expressions. It is time-invariant because the matrices A, B, C, D do not depend on t.

The homogeneous system is

\dot{x}(t) = Ax(t).

Its solution is

x(t) = e^{At}x(0).

With input, the solution is

x(t) = e^{At}x(0) + \int_0^t e^{A(t-s)}Bu(s)\,ds.

The first term is the free response. The integral term is the forced response.

This formula shows that control theory depends on matrix exponentials, linear operators, and convolution-like integrals.
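The free response can be computed directly from the matrix exponential. Below is a minimal numerical sketch using NumPy and SciPy, with an illustrative double-integrator A and initial state chosen for this example; for this particular A, the exponential has the closed form e^{At} = [[1, t], [0, 1]].

```python
import numpy as np
from scipy.linalg import expm

# Illustrative double-integrator dynamics (position-velocity).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
x0 = np.array([1.0, 2.0])   # initial position 1, initial velocity 2

t = 0.5
# Free response: x(t) = e^{At} x(0).
x_t = expm(A * t) @ x0

# For this A, e^{At} = [[1, t], [0, 1]], so x(t) = (x1 + t*x2, x2).
```

With t = 0.5 the position advances by t times the velocity while the velocity stays fixed, matching the closed form.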

118.3 Discrete-Time Systems

A discrete-time system evolves in steps:

x_{k+1} = Ax_k + Bu_k, \qquad y_k = Cx_k + Du_k.

Here k is an integer time index.

Without input,

x_k = A^k x_0.

With input,

x_k = A^k x_0 + \sum_{j=0}^{k-1} A^{k-1-j}Bu_j.

Thus powers of A describe natural evolution, and the columns of

B,\; AB,\; A^2B,\; \ldots

describe how inputs affect the state over time.

Discrete-time systems appear in digital control, sampled-data systems, robotics, signal processing, economics, and machine learning.
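The recursion and the closed-form sum describe the same trajectory, which is easy to verify numerically. A minimal sketch with NumPy, using illustrative matrices and an arbitrary three-step input sequence:

```python
import numpy as np

# Illustrative sampled double integrator with step 0.1.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
x0 = np.array([1.0, 0.0])
u = [1.0, -0.5, 0.25]          # input sequence u_0, u_1, u_2

# Step-by-step recursion x_{k+1} = A x_k + B u_k.
x = x0.copy()
for uk in u:
    x = A @ x + (B * uk).ravel()

# Closed form: x_k = A^k x_0 + sum_j A^{k-1-j} B u_j.
k = len(u)
x_formula = np.linalg.matrix_power(A, k) @ x0
for j, uj in enumerate(u):
    x_formula += (np.linalg.matrix_power(A, k - 1 - j) @ B * uj).ravel()
```

Both computations produce the same x_3, illustrating how powers of A propagate both the initial state and past inputs.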

118.4 Equilibrium

An equilibrium is a state that remains constant under a constant input.

For a continuous-time system,

\dot{x} = Ax + Bu,

an equilibrium pair (x^\ast, u^\ast) satisfies

Ax^\ast + Bu^\ast = 0.

For a discrete-time system,

x_{k+1} = Ax_k + Bu_k,

an equilibrium pair satisfies

x^\ast = Ax^\ast + Bu^\ast.

Equilibria are important because many control tasks seek to hold the system near a desired operating point.

If the desired equilibrium is not at the origin, the variables can often be shifted so that the equilibrium becomes the origin.

Let

z = x - x^\ast, \qquad v = u - u^\ast.

Then the shifted continuous-time dynamics are

\dot{z} = Az + Bv.

Thus regulation around an equilibrium reduces to stabilization of the origin.

118.5 Stability

Stability describes whether small deviations remain small or decay.

For the continuous-time homogeneous system

\dot{x} = Ax,

the origin is asymptotically stable if

x(t) \to 0

for every initial state near the origin.

For linear systems, this is determined by the eigenvalues of A.

If every eigenvalue of A has negative real part, then the continuous-time system is asymptotically stable.

For the discrete-time system

x_{k+1} = Ax_k,

asymptotic stability requires every eigenvalue of A to lie strictly inside the unit circle:

|\lambda_i| < 1.

Thus stability is a spectral property.
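Because stability is a spectral property, it reduces to an eigenvalue computation. A minimal sketch with NumPy, using illustrative test matrices:

```python
import numpy as np

def ct_stable(A):
    """Continuous time: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def dt_stable(A):
    """Discrete time: all eigenvalues strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

# Illustrative examples: triangular matrices, so eigenvalues sit on the diagonal.
A_ct = np.array([[-1.0, 2.0], [0.0, -3.0]])   # eigenvalues -1, -3: stable
A_dt = np.array([[0.5, 1.0], [0.0, 0.9]])     # eigenvalues 0.5, 0.9: stable
```

Flipping the sign of A_ct, or scaling A_dt past the unit circle, makes the same tests report instability.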

118.6 Feedback

Feedback means that the input depends on the current state or output.

A state feedback law has the form

u(t) = -Kx(t).

Substituting into

\dot{x} = Ax + Bu

gives

\dot{x} = (A - BK)x.

The matrix

A - BK

is called the closed-loop state matrix.

The controller changes the dynamics by changing the eigenvalues of the system matrix.

This is the basic mechanism of state feedback. State-space control design commonly studies laws of the form u(t) = Nr(t) - Kx(t), where K is chosen to modify closed-loop behavior.

118.7 Pole Placement

The eigenvalues of the closed-loop matrix

A - BK

are called closed-loop poles.

Pole placement asks whether K can be chosen so that A - BK has prescribed eigenvalues.

This is possible when the pair (A, B) is controllable.

The reason is linear algebraic: the input directions, propagated through the dynamics by A, must span the whole state space.

If some state direction cannot be reached by the input, then feedback cannot arbitrarily control its dynamics.

Pole placement is a direct link between eigenvalue assignment and matrix rank.
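Pole placement can be carried out numerically with scipy.signal.place_poles. A minimal sketch for the (controllable) double-integrator pair, with desired poles chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.signal import place_poles

# Double integrator: controllable, so the poles can be placed anywhere.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

desired = np.array([-2.0, -3.0])
K = place_poles(A, B, desired).gain_matrix

# The closed-loop matrix A - BK now has the prescribed eigenvalues.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

The computed gain moves the open-loop eigenvalues (both at 0) to the desired locations -2 and -3.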

118.8 Controllability

A system is controllable if suitable inputs can move the state from any initial state to any final state in finite time.

For the continuous-time system

\dot{x} = Ax + Bu,

the controllability matrix is

\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}.

The system is controllable if

\operatorname{rank}(\mathcal{C}) = n.

This is the Kalman rank condition.

Controllability is one of the central concepts of modern control theory. It formalizes whether the input can influence all state directions, and it has a concrete matrix-rank test.
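The Kalman rank condition is a direct matrix computation. A minimal sketch with NumPy, applied here to an illustrative double-integrator pair:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack B, AB, ..., A^{n-1}B column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

C_mat = controllability_matrix(A, B)
rank = np.linalg.matrix_rank(C_mat)   # full rank means controllable
```

Here the rank equals the state dimension 2, so the pair (A, B) is controllable.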

118.9 Meaning of the Controllability Matrix

The columns of \mathcal{C} are input directions and their images under powers of A:

B, \quad AB, \quad A^2B, \quad \ldots, \quad A^{n-1}B.

The matrix B gives directions affected immediately by input.

The matrix AB gives directions produced after the natural dynamics acts on those input directions.

The matrix A^2B gives directions produced after two applications of the natural dynamics.

Together, these columns describe the reachable subspace.

If they span \mathbb{R}^n, every state direction is reachable.

If they span a proper subspace, then the system has unreachable directions.

118.10 Example of Controllability

Let

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.

Then

AB = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.

The controllability matrix is

\mathcal{C} = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.

Its rank is 2. Therefore the system is controllable.

This example represents a simple double-integrator structure:

\dot{x}_1 = x_2, \qquad \dot{x}_2 = u.

The input directly changes velocity, and velocity changes position. Therefore both state variables can be controlled.

118.11 Observability

A system is observable if the initial state can be determined from output measurements over time.

For

\dot{x} = Ax + Bu, \qquad y = Cx + Du,

observability concerns whether measurements reveal the internal state.

The observability matrix is

\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}.

The system is observable if

\operatorname{rank}(\mathcal{O}) = n.

Observability is dual to controllability. Controllability asks whether inputs can reach all state directions. Observability asks whether outputs can detect all state directions.
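The duality is visible in the code: the observability test stacks rows C, CA, ... just as the controllability test stacks columns B, AB, .... A minimal NumPy sketch with the same illustrative A:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^{n-1} row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])   # measure position only

O_mat = observability_matrix(A, C)
rank = np.linalg.matrix_rank(O_mat)   # full rank means observable
```

Measuring position alone already gives full rank here, since the dynamics couple velocity into the position measurement.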

118.12 Meaning of the Observability Matrix

The first block C tells what is measured directly.

The next block CA tells how the measured output changes under one application of the dynamics.

The block CA^2 gives information after two applications of the dynamics.

Together, these blocks describe how hidden state directions appear in the output over time.

If a nonzero state vector x satisfies

\mathcal{O}x = 0,

then that state direction is invisible to the output.

Such a direction cannot be reconstructed from measurements.

118.13 Example of Observability

Let

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}.

Then

CA = \begin{bmatrix} 0 & 1 \end{bmatrix}.

The observability matrix is

\mathcal{O} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

Its rank is 2. Therefore the system is observable.

Here the output measures position. Since the derivative of position reveals velocity, the full state can be inferred from output evolution.

118.14 Stabilizability and Detectability

Full controllability is stronger than what is needed for stabilization.

A system is stabilizable if every unstable mode is controllable.

Uncontrollable modes may exist, but they must already be stable.

Similarly, a system is detectable if every unstable mode is observable.

Unobservable modes may exist, but they must already be stable.

These concepts are important in practice because some internal modes may be inaccessible or unmeasured, while the system can still be stabilized or estimated safely.

118.15 Transfer Functions

For single-input single-output continuous-time systems, transfer functions describe input-output behavior in the Laplace domain.

Assume zero initial condition. Taking the Laplace transform of

\dot{x} = Ax + Bu, \qquad y = Cx + Du

gives

sX(s) = AX(s) + BU(s).

Thus

(sI - A)X(s) = BU(s),

and

X(s) = (sI - A)^{-1}BU(s).

Therefore

Y(s) = \left(C(sI - A)^{-1}B + D\right)U(s).

The transfer function is

G(s) = C(sI - A)^{-1}B + D.

This formula expresses input-output behavior using a matrix inverse.
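Evaluating G(s) at a point only needs a linear solve, not an explicit inverse. A minimal sketch for the double-integrator example, whose transfer function is known to be 1/s^2:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D via a linear solve."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Double integrator: position output, force input, so G(s) = 1/s^2.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

g = transfer_function(A, B, C, D, 2.0)   # 1/2^2 = 0.25
```

Using solve rather than inv is the standard numerically preferred way to apply (sI - A)^{-1}.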

118.16 Poles and Eigenvalues

The poles of the transfer function are related to eigenvalues of A.

Since

(sI - A)^{-1}

appears in the transfer function, singularities occur when

\det(sI - A) = 0.

These values of s are eigenvalues of A.

For minimal realizations, the poles of the transfer function are exactly the eigenvalues of the state matrix.

Thus spectral properties control both internal dynamics and input-output response.

118.17 State Feedback with Reference Input

A controller often tracks a reference signal r(t).

A common control law is

u(t) = Nr(t) - Kx(t).

Substitution gives

\dot{x} = (A - BK)x + BNr.

The feedback term -Kx shapes stability and transient response. The feedforward term Nr sets the steady-state gain for reference tracking.

The matrix K modifies closed-loop eigenvalues. The matrix N modifies the input scaling.

This separates dynamic shaping from reference scaling in simple linear designs.

118.18 Observers

In many systems, the full state x(t) is not measured.

An observer estimates the state using inputs and outputs.

A Luenberger observer has the form

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x}).

Here \hat{x} is the estimated state and L is the observer gain.

The term

y - C\hat{x}

is the output estimation error.

Let

e = x - \hat{x}.

Then the error dynamics are

\dot{e} = (A - LC)e.

Thus observer design is an eigenvalue assignment problem for A - LC.

It is dual to state feedback design.
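The duality means an observer gain can be computed with the same pole-placement routine, applied to the transposed pair. A minimal sketch using scipy.signal.place_poles on the double-integrator example, with observer poles chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.signal import place_poles

# Eigenvalues of A - LC equal eigenvalues of (A - LC)^T = A^T - C^T L^T,
# so observer design is pole placement for the pair (A^T, C^T).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

L = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T

error_eigs = np.linalg.eigvals(A - L @ C)
```

The estimation error then decays at rates set by the placed eigenvalues -5 and -6.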

118.19 Separation Principle

State feedback assumes access to the full state.

When the state is not directly measured, we may use the estimate \hat{x}:

u = -K\hat{x}.

Under standard linear assumptions, the controller gain K and observer gain L can be designed separately.

The closed-loop behavior then combines the eigenvalues of

A - BK

and

A - LC.

This result is called the separation principle.

It reflects the dual roles of controllability and observability.

118.20 Lyapunov Stability

Lyapunov theory studies stability using scalar energy-like functions.

For a linear continuous-time system

\dot{x} = Ax,

choose

V(x) = x^TPx,

where P is symmetric positive definite.

Then

\dot{V}(x) = x^T(A^TP + PA)x.

If there exists P \succ 0 such that

A^TP + PA \prec 0,

then the system is asymptotically stable.

This is a matrix inequality. It gives a linear algebraic test for stability.

Lyapunov methods are important because they extend beyond explicit solution formulas and support robust and nonlinear control analysis.
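In practice one often fixes a right-hand side Q \succ 0 and solves the Lyapunov equation A^T P + P A = -Q for P. A minimal sketch with SciPy, using an illustrative stable A:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])   # eigenvalues -1, -3: stable
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q;
# passing a = A^T and q = -Q yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# A positive definite P certifies asymptotic stability.
P_is_pd = bool(np.all(np.linalg.eigvalsh(P) > 0))
```

Since A is stable and Q is positive definite, the solution P comes out positive definite, confirming the Lyapunov test.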

118.21 Linear Quadratic Regulator

The linear quadratic regulator, or LQR, chooses feedback to minimize a quadratic cost.

For

\dot{x} = Ax + Bu,

the cost is often

J = \int_0^\infty \left( x(t)^TQx(t) + u(t)^TRu(t) \right) dt,

where

Q \succeq 0, \qquad R \succ 0.

The optimal controller has the form

u = -Kx.

The gain is

K = R^{-1}B^TP,

where P solves the algebraic Riccati equation

A^TP + PA - PBR^{-1}B^TP + Q = 0.

LQR connects control theory with quadratic forms, positive definite matrices, matrix equations, and optimization.
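SciPy solves the continuous algebraic Riccati equation directly, after which the LQR gain is one more matrix product. A minimal sketch for the double integrator with illustrative weights Q = I and R = 1:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state weight (illustrative)
R = np.array([[1.0]])   # input weight (illustrative)

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Optimal gain K = R^{-1} B^T P.
K = np.linalg.solve(R, B.T @ P)

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

The Riccati residual vanishes and the closed-loop eigenvalues land in the left half-plane, so the optimal feedback is also stabilizing.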

118.22 Kalman Filtering

Kalman filtering estimates the state of a noisy linear dynamical system.

A common discrete-time model is

x_{k+1} = Ax_k + Bu_k + w_k, \qquad y_k = Cx_k + v_k.

Here w_k is process noise and v_k is measurement noise.

The Kalman filter recursively updates an estimate of the state and its covariance.

Its algebra involves matrix prediction, covariance propagation, least squares correction, and Riccati-type equations.

Kalman filtering is an observer design method under probabilistic assumptions.

It is used in navigation, tracking, robotics, signal processing, economics, and aerospace systems.
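The predict-correct recursion fits in a few lines. Below is a minimal sketch for a scalar example with illustrative noise covariances (the model, values, and seed are all chosen for demonstration): the true state is a constant 3.0 observed through noisy measurements, and the filter estimate converges toward it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar model (illustrative): x_{k+1} = x_k + w_k, y_k = x_k + v_k.
A = np.array([[1.0]])
C = np.array([[1.0]])
W = np.array([[1e-4]])   # process noise covariance
V = np.array([[1e-2]])   # measurement noise covariance

x_true = 3.0             # constant true state for this demo
x_hat = np.array([[0.0]])
P = np.array([[1.0]])    # initial estimate covariance

for _ in range(200):
    y = x_true + rng.normal(scale=0.1)
    # Predict: propagate estimate and covariance through the dynamics.
    x_hat = A @ x_hat
    P = A @ P @ A.T + W
    # Correct: weight the measurement residual by the Kalman gain.
    S = C @ P @ C.T + V
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (np.array([[y]]) - C @ x_hat)
    P = (np.eye(1) - K @ C) @ P
```

The gain computation is a least-squares correction, and the covariance update is the Riccati-type recursion mentioned above.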

118.23 Controllability Gramian

For a continuous-time stable system, the controllability Gramian is

W_c = \int_0^\infty e^{At}BB^Te^{A^Tt}\,dt.

This matrix measures how input energy reaches state directions.

If W_c is positive definite, the system is controllable.

Directions corresponding to small eigenvalues of W_c are difficult to reach. They require large input energy.

The Gramian also satisfies the Lyapunov equation

AW_c + W_cA^T + BB^T = 0

when A is stable.

This gives another bridge between reachability, energy, and matrix equations.
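Rather than evaluating the integral, the Gramian is usually obtained from the Lyapunov equation. A minimal sketch with SciPy, using an illustrative stable and controllable pair:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable system.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])

# Solve A Wc + Wc A^T = -B B^T for the controllability Gramian.
Wc = solve_continuous_lyapunov(A, -B @ B.T)

# Positive definiteness of Wc certifies controllability.
wc_is_pd = bool(np.all(np.linalg.eigvalsh(Wc) > 0))
```

The observability Gramian of the next section is computed the same way from A^T W_o + W_o A = -C^T C.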

118.24 Observability Gramian

For a continuous-time stable system, the observability Gramian is

W_o = \int_0^\infty e^{A^Tt}C^TCe^{At}\,dt.

This matrix measures how strongly state directions appear in the output.

If W_o is positive definite, the system is observable.

Directions corresponding to small eigenvalues of W_o are difficult to observe.

The observability Gramian satisfies

A^TW_o + W_oA + C^TC = 0.

Thus controllability and observability have parallel Gramian formulations.

118.25 Model Reduction

Large control systems may have very high-dimensional states.

Model reduction seeks a lower-dimensional system that preserves important input-output behavior.

Balanced truncation uses controllability and observability Gramians.

The idea is to find coordinates in which

W_c = W_o = \Sigma,

where \Sigma is diagonal.

The diagonal entries are Hankel singular values.

Large values correspond to states that are both easy to control and easy to observe. Small values correspond to states that have weak input-output influence.

Truncating small states gives a reduced model.

This is an application of change of basis, singular values, and matrix equations.

118.26 Robustness

A controller must work despite modeling errors, disturbances, and parameter uncertainty.

Robust control studies performance under such uncertainty.

Linear algebra appears through norms, singular values, structured matrices, and matrix inequalities.

For example, the singular value

\sigma_{\max}(G(i\omega))

measures the largest input-output amplification at frequency \omega.

A small gain condition can guarantee stability under bounded uncertainty.

Thus robustness is closely tied to induced norms and frequency-domain matrix analysis.

118.27 Nonlinear Control and Linearization

Many real systems are nonlinear:

\dot{x} = f(x, u).

Near an equilibrium (x^\ast, u^\ast), the system can be approximated by a linear model.

Let

z = x - x^\ast, \qquad v = u - u^\ast.

Then

\dot{z} \approx Az + Bv,

where

A = \left.\frac{\partial f}{\partial x}\right|_{(x^\ast, u^\ast)}, \qquad B = \left.\frac{\partial f}{\partial u}\right|_{(x^\ast, u^\ast)}.

Thus the Jacobian matrices determine local behavior.

Linear control methods can often stabilize or analyze the nonlinear system near an operating point.

118.28 Control Theory and Linear Algebra

The main objects of control theory are matrix objects.

| Control concept | Linear algebra object |
| --- | --- |
| State | Vector |
| Input | Vector |
| Output | Vector |
| Dynamics | State matrix |
| Input action | Column space of B |
| Measurements | Row space of C |
| Stability | Eigenvalues of A |
| Feedback | Closed-loop matrix A - BK |
| Controllability | Rank of \mathcal{C} |
| Observability | Rank of \mathcal{O} |
| Observer | Matrix A - LC |
| LQR | Riccati equation |
| Kalman filter | Covariance matrices |
| Model reduction | Gramians and singular values |

Control theory is therefore a structured application of linear algebra to dynamic systems.

118.29 Summary

Control theory studies how inputs can shape the behavior of dynamical systems.

In state-space form, a linear system is written as

\dot{x} = Ax + Bu, \qquad y = Cx + Du.

The matrix A determines natural dynamics. The matrix B determines how inputs influence the state. The matrix C determines what is measured.

Stability depends on eigenvalues. Controllability and observability depend on rank conditions. Feedback changes closed-loop eigenvalues. Observers reconstruct unmeasured states. LQR and Kalman filtering use quadratic forms, covariance matrices, and Riccati equations.

The central principle is that control turns dynamics into linear algebra over time. The state evolves through matrices, and the controller modifies those matrices to obtain desired behavior.