# Chapter 55. Random Walks on Graphs

A random walk on a graph is a process that moves from vertex to vertex by choosing edges at random.

Random walks connect graph theory with probability. They describe diffusion, exploration, sampling, search, ranking, electrical networks, and Markov chains. A graph gives the possible moves. Probability gives the rule for choosing among them.

The basic model is simple. Start at a vertex. At each step, choose one of the current vertex's neighbors and move there. The sequence of visited vertices is random, but the graph controls which moves are allowed.

## 55.1 The Simple Random Walk

Let

$$
G=(V,E)
$$

be a finite undirected graph.

A simple random walk on \(G\) is a sequence of random vertices

$$
X_0,X_1,X_2,\ldots
$$

such that, if the walk is at vertex \(v\), then the next vertex is chosen uniformly from the neighbors of \(v\).

Thus, for adjacent vertices \(u\) and \(v\),

$$
\Pr(X_{t+1}=u\mid X_t=v)=\frac{1}{\deg(v)}.
$$

If \(u\) is not adjacent to \(v\), then

$$
\Pr(X_{t+1}=u\mid X_t=v)=0.
$$

The next position depends only on the current position. It does not depend on the earlier history of the walk. This is the Markov property. A random walk on a graph is therefore a Markov chain whose state space is the vertex set of the graph.

## 55.2 Transition Matrix

Let the vertices be

$$
V=\{v_1,v_2,\ldots,v_n\}.
$$

The transition matrix of the random walk is the \(n\times n\) matrix \(P\) defined by

$$
P_{ij} =
\Pr(X_{t+1}=v_j\mid X_t=v_i).
$$

For a simple random walk on an undirected graph,

$$
P_{ij} =
\begin{cases}
\dfrac{1}{\deg(v_i)}, & \text{if } v_iv_j\in E,\\
0, & \text{otherwise.}
\end{cases}
$$

Each row of \(P\) sums to \(1\). Such a matrix is called stochastic.

If \(A\) is the adjacency matrix of \(G\), and \(D\) is the diagonal degree matrix, then

$$
P=D^{-1}A.
$$

This formula assumes there are no isolated vertices. If isolated vertices exist, the random walk rule must be modified, often by adding a self-loop at each isolated vertex.
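
As a small sketch, the transition matrix can be assembled directly from an adjacency list; the path on 4 vertices used here is an illustrative choice.

```python
# Simple random walk transition matrix for a small undirected graph.
# The graph (a path on 4 vertices) is stored as an adjacency list,
# which plays the role of A and D together.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = len(adj)

# Row v of P puts probability 1/deg(v) on each neighbor of v,
# which is exactly the matrix P = D^{-1} A.
P = [[0.0] * n for _ in range(n)]
for v, nbrs in adj.items():
    for u in nbrs:
        P[v][u] = 1.0 / len(nbrs)

# P is stochastic: every row sums to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```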

## 55.3 Distributions

A probability distribution on \(V\) assigns a nonnegative number to each vertex, with total sum \(1\).

If the initial distribution is written as a row vector

$$
\mu_0,
$$

then the distribution after one step is

$$
\mu_1=\mu_0P.
$$

After \(t\) steps,

$$
\mu_t=\mu_0P^t.
$$

Thus powers of the transition matrix describe the evolution of the walk.

The entry

$$
(P^t)_{ij}
$$

is the probability that a walk starting at \(v_i\) is at \(v_j\) after \(t\) steps.

This gives an algebraic way to study random movement on a graph.
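
A minimal sketch of this evolution on the same illustrative path graph; `step` computes the row-vector product \(\mu P\), and repeating it gives \(\mu_0 P^t\).

```python
# Evolve a distribution mu_t = mu_0 P^t on a path with 4 vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = len(adj)
P = [[(1.0 / len(adj[v]) if u in adj[v] else 0.0) for u in range(n)]
     for v in range(n)]

def step(mu, P):
    """One step of the walk: the row-vector product mu P."""
    return [sum(mu[i] * P[i][j] for i in range(len(mu)))
            for j in range(len(mu))]

mu = [1.0, 0.0, 0.0, 0.0]   # start deterministically at vertex 0
for _ in range(3):
    mu = step(mu, P)

# mu[j] is now (P^3)_{0j}: the probability of being at j after 3 steps.
assert abs(sum(mu) - 1.0) < 1e-12
```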

## 55.4 Stationary Distribution

A stationary distribution is a distribution \(\pi\) satisfying

$$
\pi P=\pi.
$$

If the walk starts with distribution \(\pi\), then one step of the walk leaves the distribution unchanged.

For a connected undirected graph, the simple random walk has the unique stationary distribution

$$
\pi(v)=\frac{\deg(v)}{2|E|}.
$$

This says that the walk spends more time at high-degree vertices. In a \(d\)-regular graph, every vertex has the same degree, so the stationary distribution is uniform.
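
The formula can be checked numerically on a small non-regular graph; the path on 4 vertices below (degrees 1, 2, 2, 1) is an illustrative choice.

```python
# Check that pi(v) = deg(v) / (2|E|) is stationary on a path.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = len(adj)
m = sum(len(nb) for nb in adj.values()) // 2    # |E| = 3
P = [[(1.0 / len(adj[v]) if u in adj[v] else 0.0) for u in range(n)]
     for v in range(n)]

pi = [len(adj[v]) / (2.0 * m) for v in range(n)]  # [1/6, 1/3, 1/3, 1/6]

# One step leaves pi unchanged: pi P = pi.
pi_next = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
assert all(abs(a - b) < 1e-12 for a, b in zip(pi, pi_next))
```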

## 55.5 Detailed Balance

The stationary distribution above can be checked by detailed balance.

For adjacent vertices \(u\) and \(v\),

$$
\pi(u)P_{uv} =
\frac{\deg(u)}{2|E|}\cdot\frac{1}{\deg(u)} =
\frac{1}{2|E|}.
$$

Similarly,

$$
\pi(v)P_{vu} =
\frac{\deg(v)}{2|E|}\cdot\frac{1}{\deg(v)} =
\frac{1}{2|E|}.
$$

Therefore

$$
\pi(u)P_{uv}=\pi(v)P_{vu}.
$$

This condition means that, at stationarity, probability flow from \(u\) to \(v\) equals probability flow from \(v\) to \(u\).

A Markov chain satisfying detailed balance is called reversible. Random walks on undirected graphs are reversible.
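
The detailed-balance computation can be verified edge by edge. On the same kind of small non-regular graph (a path), every edge carries stationary flow exactly \(1/(2|E|)\) in both directions.

```python
# Detailed balance checked edge by edge on a path with 4 vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
m = sum(len(nb) for nb in adj.values()) // 2     # |E| = 3
pi = {v: len(nb) / (2 * m) for v, nb in adj.items()}

for u, nbrs in adj.items():
    for v in nbrs:
        flow_uv = pi[u] / len(adj[u])   # pi(u) * P_{uv}
        flow_vu = pi[v] / len(adj[v])   # pi(v) * P_{vu}
        # Both flows equal 1/(2|E|), so they equal each other.
        assert abs(flow_uv - 1 / (2 * m)) < 1e-12
        assert abs(flow_uv - flow_vu) < 1e-12
```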

## 55.6 Regular Graphs

If \(G\) is \(d\)-regular, then

$$
\deg(v)=d
$$

for every vertex \(v\).

The stationary distribution becomes

$$
\pi(v)=\frac{d}{nd}=\frac{1}{n}.
$$

Thus the uniform distribution is stationary.

Regular graphs are therefore especially clean for random walks. The walk has no degree bias. In the long run, every vertex has the same probability.

This is one reason regular graphs are common in expander theory.

## 55.7 Periodicity

A random walk may fail to converge to its stationary distribution because of periodicity.

Consider a bipartite graph with bipartition

$$
V=A\cup B.
$$

Every edge joins a vertex of \(A\) to a vertex of \(B\). Therefore a walk starting in \(A\) is in \(B\) after one step, in \(A\) after two steps, and so on.

The walk alternates sides, so its distribution oscillates between the two parts and cannot converge to a single limiting distribution.

This is a periodic Markov chain.

A common remedy is the lazy random walk.

## 55.8 Lazy Random Walks

In a lazy random walk, the walker stays at the current vertex with probability

$$
\frac{1}{2},
$$

and moves to a uniformly chosen neighbor with probability

$$
\frac{1}{2}.
$$

The transition matrix is

$$
P_{\mathrm{lazy}} =
\frac{1}{2}I+\frac{1}{2}P.
$$

The self-loop probability removes periodic oscillation.

Lazy random walks are widely used because they preserve the stationary distribution while improving convergence behavior. On connected graphs, laziness makes the chain aperiodic.
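
The effect is easy to see numerically. On a 4-cycle (which is bipartite), the ordinary walk keeps all its mass on one side of the bipartition at even times, while the lazy walk converges to the uniform stationary distribution; the cycle and step count are illustrative choices.

```python
# Ordinary vs lazy walk on a bipartite 4-cycle.
n = 4
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
P = [[(0.5 if u in adj[v] else 0.0) for u in range(n)] for v in range(n)]
# Lazy walk: P_lazy = (1/2) I + (1/2) P.
P_lazy = [[0.5 * (i == j) + 0.5 * P[i][j] for j in range(n)]
          for i in range(n)]

def step(mu, M):
    return [sum(mu[i] * M[i][j] for i in range(n)) for j in range(n)]

mu_plain = [1.0, 0.0, 0.0, 0.0]
mu_lazy = [1.0, 0.0, 0.0, 0.0]
for _ in range(50):
    mu_plain = step(mu_plain, P)
    mu_lazy = step(mu_lazy, P_lazy)

# After an even number of steps the periodic walk is still confined
# to one side of the bipartition...
assert mu_plain[1] == 0.0 and mu_plain[3] == 0.0
# ...while the lazy walk is essentially uniform.
assert all(abs(p - 0.25) < 1e-9 for p in mu_lazy)
```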

## 55.9 Irreducibility

A random walk is irreducible if every vertex can be reached from every other vertex with positive probability in some number of steps.

For an undirected graph, this is equivalent to the graph being connected.

If \(G\) is disconnected, a walk starting in one component can never reach another component. There is then no single global behavior across the whole graph. Each connected component must be analyzed separately.

Thus connectivity of the graph becomes irreducibility of the Markov chain.

## 55.10 Mixing

Mixing describes how quickly a random walk approaches its stationary distribution.

A walk mixes rapidly if, after relatively few steps, its distribution is close to \(\pi\), regardless of the starting vertex.

The mixing time depends strongly on graph structure.

| Graph | Mixing behavior |
|---|---|
| Path | Slow |
| Cycle | Slow |
| Complete graph | Very fast |
| Expander | Fast |
| Graph with bottleneck | Slow |

Bottlenecks slow mixing because the walk can remain trapped in one region for a long time.

Expansion speeds mixing because every region has many edges leading outward.

## 55.11 Random Walks and Expansion

Random walks give a probabilistic view of expansion.

If a set \(S\) has a small boundary, then a walk inside \(S\) has few chances to leave. The set acts as a trap.

If every set has a large boundary, then the walk escapes local regions quickly. The distribution spreads across the graph.

Thus expansion and mixing are closely related.

Expander graphs are important partly because random walks on them mix quickly. They provide sparse graphs that behave, for many purposes, like complete graphs.

## 55.12 Spectral View

The speed of mixing is controlled by eigenvalues of the transition matrix.

For a connected \(d\)-regular graph,

$$
P=\frac{1}{d}A.
$$

The largest eigenvalue of \(P\) is

$$
1.
$$

The remaining eigenvalues measure how quickly nonuniform components of a distribution decay.

If the second-largest eigenvalue in absolute value is small, then the walk mixes quickly.

The spectral gap is often written as

$$
1-\lambda_2,
$$

where \(\lambda_2\) is the second-largest eigenvalue in an appropriate ordering.

A larger spectral gap means faster convergence.

This is one of the main links between random walks, spectral graph theory, and expansion.
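
The geometric decay rate can be observed directly. For the 4-cycle, \(P\) has eigenvalues \(1, 0, -1, 0\), so the lazy matrix \(\tfrac12 I + \tfrac12 P\) has eigenvalues \(1, \tfrac12, 0, \tfrac12\); the sketch below checks that the distance to stationarity shrinks by exactly the second eigenvalue \(\tfrac12\) per step (this clean behavior is specific to this small example).

```python
# Distance to stationarity decays geometrically at rate |lambda_2|.
n = 4
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
# Lazy walk on the 4-cycle: diagonal 1/2, each neighbor 1/4.
P_lazy = [[0.5 * (i == j) + (0.25 if j in adj[i] else 0.0)
           for j in range(n)] for i in range(n)]

def step(mu):
    return [sum(mu[i] * P_lazy[i][j] for i in range(n)) for j in range(n)]

def dist(mu):
    """Max-norm distance from the uniform stationary distribution."""
    return max(abs(p - 0.25) for p in mu)

mu = [1.0, 0.0, 0.0, 0.0]
for _ in range(2):
    mu = step(mu)
d2 = dist(mu)           # distance after 2 steps
mu = step(mu)
d3 = dist(mu)           # distance after 3 steps

# Geometric decay at rate lambda_2 = 1/2.
assert abs(d3 / d2 - 0.5) < 1e-9
```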

## 55.13 Hitting Time

The hitting time from \(u\) to \(v\) is the expected number of steps needed for a random walk starting at \(u\) to visit \(v\) for the first time.

It is denoted

$$
H(u,v).
$$

Hitting times need not be symmetric. It may take longer on average to hit \(v\) from \(u\) than to hit \(u\) from \(v\).

For example, in a graph with one high-degree hub and many leaves, a walk from a leaf reaches the hub in one step. But a walk from the hub may take many tries before reaching a particular leaf.
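
The hub-and-leaves example can be checked by simulation. For a star with hub \(0\) and \(m\) leaves, \(H(\text{leaf},\text{hub}) = 1\) exactly, while \(H(\text{hub},\text{leaf})\) works out to \(2m-1\); the choice \(m=5\) and the trial count are illustrative.

```python
import random

# Hitting-time asymmetry on a star: hub 0 with leaves 1..m.
random.seed(0)
m = 5
TRIALS = 20000

def hub_to_leaf():
    """Steps for a walk from the hub to first reach leaf 1."""
    pos, steps = 0, 0
    while pos != 1:
        # From the hub, move to a uniform leaf; from a leaf, back to hub.
        pos = random.randrange(1, m + 1) if pos == 0 else 0
        steps += 1
    return steps

est = sum(hub_to_leaf() for _ in range(TRIALS)) / TRIALS
# True value: H(hub, leaf) = 2m - 1 = 9, versus H(leaf, hub) = 1.
assert abs(est - (2 * m - 1)) < 0.5
```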

## 55.14 Commute Time

The commute time between \(u\) and \(v\) is

$$
C(u,v)=H(u,v)+H(v,u).
$$

It is the expected time to go from \(u\) to \(v\) and then return from \(v\) to \(u\).

Commute time is symmetric.

There is a deep relation between commute times and electrical networks. If each edge is treated as a unit resistor, then commute time is proportional to effective resistance between the two vertices.

This connection is one of the reasons random walks are useful in network analysis.

## 55.15 Cover Time

The cover time of a graph is the expected time needed for a random walk to visit every vertex.

Cover time measures exploration difficulty.

A complete graph has relatively small cover time because each step can jump to any other vertex.

A path has large cover time because the walk moves locally and often revisits vertices before reaching new ones.

Cover time is important in randomized algorithms, search processes, and network exploration.
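
The contrast between the path and the complete graph can be seen in a short simulation; the size \(n = 10\) and the trial count are illustrative choices.

```python
import random

# Empirical cover times: a path explores slowly, a complete graph fast.
random.seed(1)
n, TRIALS = 10, 2000

def cover_time(adj):
    """Steps until a walk from vertex 0 has visited every vertex."""
    pos, seen, steps = 0, {0}, 0
    while len(seen) < len(adj):
        pos = random.choice(adj[pos])
        seen.add(pos)
        steps += 1
    return steps

path = {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}
complete = {v: [u for u in range(n) if u != v] for v in range(n)}

avg_path = sum(cover_time(path) for _ in range(TRIALS)) / TRIALS
avg_complete = sum(cover_time(complete) for _ in range(TRIALS)) / TRIALS

# Local moves make the path far slower to cover.
assert avg_path > avg_complete
```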

## 55.16 Weighted Graphs

Random walks extend naturally to weighted graphs.

Let each edge \(uv\) have a positive weight

$$
w_{uv}.
$$

At vertex \(u\), the walk chooses neighbor \(v\) with probability proportional to the edge weight:

$$
P_{uv} =
\frac{w_{uv}}{\sum_{x\sim u}w_{ux}}.
$$

The weighted degree of \(u\) is

$$
w(u)=\sum_{x\sim u}w_{ux}.
$$

For an undirected weighted graph, the stationary distribution is

$$
\pi(u)=\frac{w(u)}{\sum_{x\in V}w(x)}.
$$

The unweighted formula is the special case in which every edge has weight \(1\).
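
A sketch on a weighted triangle; the particular weights are an illustrative choice, giving weighted degrees \(3, 4, 5\) and stationary distribution \((3, 4, 5)/12\).

```python
# Weighted random walk on a triangle with edge weights
# w(0,1) = 1, w(1,2) = 3, w(0,2) = 2.
w = {(0, 1): 1.0, (1, 2): 3.0, (0, 2): 2.0}
n = 3
W = [[0.0] * n for _ in range(n)]
for (u, v), wt in w.items():
    W[u][v] = W[v][u] = wt          # undirected: weight is symmetric

deg = [sum(W[u]) for u in range(n)]              # weighted degrees 3, 4, 5
P = [[W[u][v] / deg[u] for v in range(n)] for u in range(n)]
pi = [deg[u] / sum(deg) for u in range(n)]       # proportional to w(u)

# pi P = pi: the weighted-degree distribution is stationary.
pi_next = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
assert all(abs(a - b) < 1e-12 for a, b in zip(pi, pi_next))
```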

## 55.17 Directed Graphs

Random walks on directed graphs require more care.

If \(D=(V,A)\) is a directed graph, a walk at vertex \(u\) chooses among outgoing arcs from \(u\).

If some vertex has out-degree \(0\), the walk may get stuck unless a special rule is added.

Stationary distributions may fail to exist uniquely unless the directed graph satisfies appropriate strong connectivity and aperiodicity conditions.

Directed random walks are central in ranking algorithms, especially web ranking. PageRank modifies the directed walk by adding teleportation, so the walker sometimes jumps to a random vertex instead of following an outgoing link.
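
A minimal sketch of a teleporting walk, computed by power iteration on a toy directed graph. The graph, the damping value \(0.85\), and the rule of always teleporting from a vertex with out-degree \(0\) are illustrative choices, not a definitive ranking implementation.

```python
# Teleporting walk: with probability d follow a uniform outgoing arc,
# otherwise jump to a uniform random vertex. Vertex 3 has out-degree 0
# (a "dangling" vertex) and is handled by always teleporting from it.
out = {0: [1, 2], 1: [2], 2: [0], 3: []}
n = len(out)
d = 0.85

rank = [1.0 / n] * n
for _ in range(100):            # power iteration
    new = [(1 - d) / n] * n     # teleportation mass to every vertex
    for v, targets in out.items():
        if targets:
            for u in targets:
                new[u] += d * rank[v] / len(targets)
        else:
            for u in range(n):  # dangling vertex: spread mass uniformly
                new[u] += d * rank[v] / n
    rank = new

# rank is a probability distribution over the vertices.
assert abs(sum(rank) - 1.0) < 1e-9
```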

## 55.18 Absorbing Random Walks

A vertex is absorbing if, once the walk reaches it, the walk stays there forever.

Absorbing walks are useful for modeling processes with terminal states.

Examples include:

| Setting | Absorbing state |
|---|---|
| Game | Winning or losing position |
| Web navigation | Exit page |
| Epidemic model | Removed state |
| Search process | Target found |
| Randomized algorithm | Termination state |

Absorbing Markov chains are studied by separating transient states from absorbing states and computing absorption probabilities and expected absorption times.
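
As a sketch of that computation, take the walk on the path \(0\)-\(1\)-\(2\)-\(3\) with \(0\) and \(3\) absorbing (a tiny gambler's-ruin chain). For the transient states \(\{1,2\}\), the expected absorption times satisfy \(t = \mathbf{1} + Qt\) and the absorption probabilities satisfy \(b = r + Qb\), where \(Q\) is the transient-to-transient block and \(r\) collects one-step moves into the chosen absorbing state; iterating these fixed-point equations converges because the powers of \(Q\) shrink.

```python
# Transient-to-transient block Q for states {1, 2} on the path 0-1-2-3.
Q = [[0.0, 0.5],
     [0.5, 0.0]]

# Expected absorption times: t = 1 + Q t, iterated to a fixed point.
t = [0.0, 0.0]
for _ in range(200):
    t = [1.0 + sum(Q[i][j] * t[j] for j in range(2)) for i in range(2)]
# By symmetry both transient states take 2 expected steps to absorb.
assert abs(t[0] - 2.0) < 1e-9 and abs(t[1] - 2.0) < 1e-9

# Probability of absorbing at state 3: b = r + Q b, where only state 2
# can step directly into 3 (with probability 1/2).
b = [0.0, 0.0]
for _ in range(200):
    b = [(0.5 if i == 1 else 0.0)
         + sum(Q[i][j] * b[j] for j in range(2)) for i in range(2)]
# Gambler's-ruin answer: 1/3 from state 1, 2/3 from state 2.
assert abs(b[0] - 1/3) < 1e-9 and abs(b[1] - 2/3) < 1e-9
```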

## 55.19 Applications

Random walks on graphs appear in many areas.

| Area | Use |
|---|---|
| Search engines | Ranking pages by link structure |
| Sampling | Draw samples from large state spaces |
| Network analysis | Measure centrality and reachability |
| Machine learning | Graph embeddings and label propagation |
| Distributed systems | Gossip and load balancing |
| Algorithms | Randomized exploration |
| Physics | Diffusion on networks |
| Biology | Movement through interaction networks |

The common pattern is movement through a structured set of possible states.

## 55.20 Common Mistakes

A common mistake is to assume that the stationary distribution is always uniform. It is uniform for regular undirected graphs, but in general it is proportional to degree.

Another mistake is to confuse existence of a stationary distribution with convergence to it. A bipartite connected graph has a stationary distribution, but the ordinary walk is periodic.

A third mistake is to ignore disconnected components. A walk cannot move between components.

A fourth mistake is to assume that edge-disjoint paths alone guarantee fast mixing. Bottlenecks and expansion control mixing more directly.

## 55.21 Summary

A random walk is a probabilistic process that moves through a graph by following edges.

| Concept | Meaning |
|---|---|
| Simple random walk | Move uniformly to a neighbor |
| Transition matrix | Matrix of one-step probabilities |
| Stationary distribution | Distribution unchanged by the walk |
| Lazy walk | Walk that may stay in place |
| Mixing time | Time to approach stationarity |
| Hitting time | Expected time to reach a vertex |
| Cover time | Expected time to visit all vertices |
| Spectral gap | Eigenvalue measure controlling convergence |

Random walks provide a probabilistic language for graph structure. They connect connectivity, expansion, spectra, algorithms, sampling, ranking, and network dynamics.
