# Chapter 111. Graphs and Adjacency Matrices

A graph is a mathematical object used to describe pairwise relationships.

The objects being related are called vertices. The relationships between them are called edges. Graphs appear in network analysis, computer science, chemistry, optimization, social systems, transportation, web search, and machine learning.

Linear algebra enters graph theory by encoding a graph as a matrix. Once a graph is represented by a matrix, its structure can be studied using rank, eigenvalues, eigenvectors, matrix powers, null spaces, and decompositions. This is the main idea behind algebraic and spectral graph theory, where graph properties are studied through matrices associated with the graph.

## 111.1 Graphs

A graph is an ordered pair

$$
G = (V,E),
$$

where \(V\) is a set of vertices and \(E\) is a set of edges.

If

$$
V = \{v_1,v_2,\ldots,v_n\},
$$

then the graph has \(n\) vertices.

An edge joins two vertices. In an undirected graph, the edge between \(v_i\) and \(v_j\) has no direction. In a directed graph, the edge has an initial vertex and a terminal vertex.

For example, if

$$
V = \{1,2,3,4\}
$$

and

$$
E = \{\{1,2\},\{1,3\},\{2,4\}\},
$$

then the graph has four vertices and three undirected edges.

The edge set says that vertex \(1\) is adjacent to vertices \(2\) and \(3\), and vertex \(2\) is adjacent to vertex \(4\).

## 111.2 Adjacency

Two vertices are adjacent if an edge connects them.

Adjacency is the basic relation in a graph. Most graph questions begin with this relation:

| Question | Meaning |
|---|---|
| Is \(v_i\) connected to \(v_j\)? | Adjacency |
| How many neighbors does \(v_i\) have? | Degree |
| Can \(v_i\) reach \(v_j\)? | Connectivity |
| How many walks go from \(v_i\) to \(v_j\)? | Matrix powers |
| How central is \(v_i\)? | Spectral information |

Linear algebra makes these questions computable by storing adjacency information in a matrix.

## 111.3 The Adjacency Matrix

Let \(G\) be a graph with vertices

$$
v_1,v_2,\ldots,v_n.
$$

The adjacency matrix of \(G\) is the \(n \times n\) matrix \(A\) whose entries are

$$
a_{ij} =
\begin{cases}
1, & \text{if } v_i \text{ is adjacent to } v_j, \\
0, & \text{otherwise.}
\end{cases}
$$

For a finite simple graph, this gives a square \(0\)-\(1\) matrix with zeros on the diagonal. For an undirected graph, the adjacency matrix is symmetric.

For the graph

$$
E = \{\{1,2\},\{1,3\},\{2,4\}\},
$$

the adjacency matrix is

$$
A =
\begin{bmatrix}
0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}.
$$

The entry \(a_{12}=1\) because vertices \(1\) and \(2\) are adjacent. The entry \(a_{34}=0\) because vertices \(3\) and \(4\) are not adjacent.
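As a quick check, the matrix above can be built from the edge list with a few lines of NumPy (a minimal sketch; the function name `adjacency_matrix` is illustrative):

```python
import numpy as np

def adjacency_matrix(n, edges):
    """Adjacency matrix of a simple undirected graph with vertices 1..n."""
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i - 1, j - 1] = 1  # convert 1-based vertex labels to 0-based indices
        A[j - 1, i - 1] = 1  # undirected edge: mirror the entry
    return A

edges = [(1, 2), (1, 3), (2, 4)]
A = adjacency_matrix(4, edges)
print(A)
```

The printed matrix agrees with the one displayed above, and it is symmetric because every edge is recorded in both directions.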

## 111.4 Symmetry

For an undirected graph,

$$
a_{ij}=a_{ji}.
$$

Therefore the adjacency matrix satisfies

$$
A^T = A.
$$

Thus the adjacency matrix of an undirected graph is symmetric.

This has important consequences. A real symmetric matrix has real eigenvalues and an orthogonal basis of eigenvectors. Therefore an undirected graph can be studied using the spectral theorem. The eigenvalues of the adjacency matrix form part of the spectrum of the graph.

For directed graphs, symmetry usually fails. If there is an edge from \(v_i\) to \(v_j\), but no edge from \(v_j\) to \(v_i\), then

$$
a_{ij}=1,
\qquad
a_{ji}=0.
$$

In that case \(A\) is not symmetric.

## 111.5 Degree

The degree of a vertex in an undirected graph is the number of edges incident to it.

For a simple undirected graph, the degree of \(v_i\) is

$$
d_i = \sum_{j=1}^n a_{ij}.
$$

Thus the degree is the sum of the entries in row \(i\) of the adjacency matrix.

For the example matrix

$$
A =
\begin{bmatrix}
0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix},
$$

the degrees are

$$
d_1 = 2,\quad d_2 = 2,\quad d_3 = 1,\quad d_4 = 1.
$$

The degree vector is

$$
d =
\begin{bmatrix}
2 \\
2 \\
1 \\
1
\end{bmatrix}.
$$

It can be computed as

$$
d = A\mathbf{1},
$$

where \(\mathbf{1}\) is the vector whose entries are all \(1\).
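The identity \(d = A\mathbf{1}\) is easy to verify numerically; here is a small sketch using the example matrix:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

ones = np.ones(4, dtype=int)
d = A @ ones          # d = A·1: each entry is a row sum, i.e. a degree
print(d)              # [2 2 1 1]
```

The same vector is produced by `A.sum(axis=1)`, which sums each row directly.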

## 111.6 Directed Graphs

For a directed graph, adjacency has direction.

The adjacency matrix is usually defined by

$$
a_{ij} =
\begin{cases}
1, & \text{if there is an edge from } v_i \text{ to } v_j, \\
0, & \text{otherwise.}
\end{cases}
$$

The row sum gives the out-degree:

$$
d_i^{\text{out}} = \sum_{j=1}^n a_{ij}.
$$

The column sum gives the in-degree:

$$
d_j^{\text{in}} = \sum_{i=1}^n a_{ij}.
$$

Thus directed graphs naturally distinguish between outgoing and incoming structure.

A web graph is a directed graph. A page links to another page, but the reverse link need not exist.
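The row-sum/column-sum distinction can be illustrated on a tiny directed graph (the edge list here is made up for illustration):

```python
import numpy as np

# Directed edges i -> j, 0-based, e.g. a tiny link graph
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
n = 3
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1               # no mirrored entry: direction matters

out_degree = A.sum(axis=1)    # row sums
in_degree = A.sum(axis=0)     # column sums
print(out_degree, in_degree)  # [2 1 1] [1 1 2]
```

Vertex \(0\) has two outgoing links but only one incoming link, so its row sum and column sum differ.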

## 111.7 Weighted Graphs

In a weighted graph, edges carry numerical weights.

The adjacency matrix becomes

$$
a_{ij} =
\begin{cases}
w_{ij}, & \text{if there is an edge from } v_i \text{ to } v_j, \\
0, & \text{otherwise.}
\end{cases}
$$

The weight may represent distance, cost, strength, similarity, probability, flow capacity, or frequency.

For example,

$$
A =
\begin{bmatrix}
0 & 3 & 5 \\
3 & 0 & 2 \\
5 & 2 & 0
\end{bmatrix}
$$

describes an undirected weighted graph with three vertices. The edge between vertices \(1\) and \(2\) has weight \(3\). The edge between vertices \(1\) and \(3\) has weight \(5\).

Weighted adjacency matrices allow graph algorithms to use numerical intensity, not only presence or absence.
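The weighted example above can be assembled the same way as an unweighted adjacency matrix, storing the weight instead of a \(1\):

```python
import numpy as np

# Weighted undirected edges (i, j, w), 0-based, matching the example above
weighted_edges = [(0, 1, 3), (0, 2, 5), (1, 2, 2)]
n = 3
A = np.zeros((n, n))
for i, j, w in weighted_edges:
    A[i, j] = A[j, i] = w     # symmetric because the graph is undirected
print(A)
```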

## 111.8 Matrix Powers and Walks

One of the most important facts about adjacency matrices is that powers of \(A\) count walks.

A walk of length \(k\) from \(v_i\) to \(v_j\) is a sequence of \(k\) edges, each beginning where the previous one ends, that starts at \(v_i\) and ends at \(v_j\).

The entry \((A^k)_{ij}\) equals the number of walks of length \(k\) from \(v_i\) to \(v_j\).

For \(k=2\),

$$
(A^2)_{ij} =
\sum_{\ell=1}^n a_{i\ell}a_{\ell j}.
$$

Each nonzero term corresponds to a vertex \(v_\ell\) such that

$$
v_i \to v_\ell \to v_j
$$

is a walk of length \(2\).

For \(k=3\), the entries of \(A^3\) count walks of length \(3\). The same pattern continues for all positive integers \(k\).

This connects graph traversal to matrix multiplication.

## 111.9 Paths, Walks, and Reachability

A walk may repeat vertices and edges. A path does not repeat vertices.

Matrix powers count walks, not necessarily paths. Still, they are useful for determining reachability.

If there exists some \(k\) such that

$$
(A^k)_{ij} > 0,
$$

then there is a walk from \(v_i\) to \(v_j\). In a finite graph, the existence of a walk implies the existence of a path.

Thus reachability can be studied through the sequence

$$
A, A^2, A^3, \ldots.
$$

For a graph with \(n\) vertices, if \(v_i\) can reach \(v_j\), then there is a path of length at most \(n-1\). Therefore it is enough to examine powers up to \(A^{n-1}\).
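The observation above suggests a simple (unoptimized) reachability sketch: accumulate which entries of \(A, A^2, \ldots, A^{n-1}\) are positive. The function name `reachable` is illustrative:

```python
import numpy as np

def reachable(A):
    """Boolean reachability: entry (i, j) is True when some walk of
    length 1..n-1 runs from v_i to v_j (a simple sketch, not optimized)."""
    n = A.shape[0]
    R = np.zeros_like(A, dtype=bool)
    P = np.eye(n, dtype=int)
    for _ in range(n - 1):
        P = P @ A              # P is now the next power of A
        R |= (P > 0)           # record every pair joined by a walk so far
    return R

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
print(reachable(A).all())      # True: every vertex reaches every vertex
```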

## 111.10 The Degree Matrix

The degree matrix of an undirected graph is the diagonal matrix

$$
D =
\begin{bmatrix}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n
\end{bmatrix}.
$$

The diagonal entries are the degrees of the vertices.

For the earlier example,

$$
D =
\begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
$$

The degree matrix is often used together with the adjacency matrix to form the graph Laplacian.

## 111.11 The Graph Laplacian

The graph Laplacian is

$$
L = D - A.
$$

For an undirected graph, \(L\) is symmetric.

Using the example,

$$
L =
\begin{bmatrix}
2 & -1 & -1 & 0 \\
-1 & 2 & 0 & -1 \\
-1 & 0 & 1 & 0 \\
0 & -1 & 0 & 1
\end{bmatrix}.
$$

The Laplacian records both adjacency and degree information. It is central in spectral graph theory, random walks, electrical networks, graph partitioning, clustering, and discrete analogues of differential operators. Spectral graph theory commonly studies graph structure through matrices such as the adjacency matrix and the Laplacian matrix.
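Both \(D\) and \(L\) follow from the adjacency matrix in one line each; a quick sketch reproducing the example:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

D = np.diag(A.sum(axis=1))   # degree matrix: row sums on the diagonal
L = D - A                    # graph Laplacian
print(L)
```

Every row of \(L\) sums to zero, since the diagonal degree cancels the \(-1\) entries for the neighbors.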

## 111.12 Quadratic Form of the Laplacian

For an undirected graph, the Laplacian has a useful quadratic form.

If \(x \in \mathbb{R}^n\), then

$$
x^TLx =
\sum_{\{i,j\}\in E}(x_i-x_j)^2.
$$

This identity shows that \(x^TLx\) measures how much the values of \(x\) vary across edges.

If adjacent vertices have similar values, the sum is small. If adjacent vertices have very different values, the sum is large.

This formula explains why the Laplacian appears in smoothing, graph signal processing, clustering, and energy minimization.
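The identity can be verified numerically for an arbitrary vector on the example graph (the vector `x` here is just a made-up test input):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 3)]          # 0-based edges of the example graph
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 2.0, -1.0, 0.5])       # arbitrary values on the vertices
lhs = x @ L @ x                           # quadratic form x^T L x
rhs = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(np.isclose(lhs, rhs))               # True
```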

## 111.13 Connected Components

A graph is connected if every vertex can be reached from every other vertex.

The Laplacian detects connected components.

For an undirected graph,

$$
L\mathbf{1}=0.
$$

Thus \(0\) is always an eigenvalue of \(L\).

The multiplicity of the eigenvalue \(0\) equals the number of connected components of the graph.

If the graph is connected, then the null space of \(L\) is

$$
\operatorname{Null}(L) = \operatorname{span}\{\mathbf{1}\}.
$$

This is a strong example of a graph property expressed as a linear algebra property.
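In floating point, counting connected components amounts to counting (near-)zero eigenvalues of \(L\); a sketch for the example graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
L = np.diag(A.sum(axis=1)) - A

eigenvalues = np.linalg.eigvalsh(L)       # real eigenvalues, ascending order
num_components = np.sum(np.isclose(eigenvalues, 0.0))
print(num_components)                     # 1: the graph is connected
```

The check `np.isclose` is needed because the zero eigenvalue is computed only up to roundoff.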

## 111.14 Eigenvalues of the Adjacency Matrix

The eigenvalues of the adjacency matrix give information about graph structure.

If \(A\) is the adjacency matrix of an undirected graph, then \(A\) is real and symmetric. Therefore it has real eigenvalues

$$
\lambda_1,\lambda_2,\ldots,\lambda_n
$$

and an orthonormal basis of eigenvectors.

The multiset of eigenvalues is called the spectrum of the graph. The spectrum is invariant under relabeling of vertices, although it does not fully determine the graph.

A large leading eigenvalue often indicates high connectivity or strong concentration of edges. Eigenvectors can reveal clusters, central vertices, or structural divisions.

## 111.15 Regular Graphs

A graph is \(d\)-regular if every vertex has degree \(d\).

In a \(d\)-regular graph,

$$
A\mathbf{1} = d\mathbf{1}.
$$

Therefore \(\mathbf{1}\) is an eigenvector of \(A\) with eigenvalue \(d\).

For regular graphs, the adjacency matrix and Laplacian are closely related:

$$
L = dI - A.
$$

Thus each eigenvalue of \(L\) is obtained by subtracting an eigenvalue of \(A\) from \(d\).

If

$$
Aq = \lambda q,
$$

then

$$
Lq = (d-\lambda)q.
$$

This relation makes regular graphs especially convenient in spectral analysis.
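The relation \(\lambda_L = d - \lambda_A\) can be checked on a small regular graph, for instance the \(2\)-regular \(4\)-cycle:

```python
import numpy as np

# 4-cycle: every vertex has degree d = 2
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
d = 2
L = d * np.eye(4) - A                     # equals D - A for a d-regular graph

lam_A = np.sort(np.linalg.eigvalsh(A))    # eigenvalues of A
lam_L = np.sort(np.linalg.eigvalsh(L))    # eigenvalues of L
# The two spectra are related by lambda_L = d - lambda_A
print(np.allclose(np.sort(d - lam_A), lam_L))   # True
```

For the \(4\)-cycle the adjacency spectrum is \(\{2, 0, 0, -2\}\) and the Laplacian spectrum is \(\{0, 2, 2, 4\}\), exactly as the relation predicts.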

## 111.16 Incidence Matrices

Another matrix representation of a graph is the incidence matrix.

Choose an orientation for each edge. For a graph with \(n\) vertices and \(m\) edges, the oriented incidence matrix \(B\) is an \(n \times m\) matrix.

For an edge \(e\) directed from \(v_i\) to \(v_j\),

$$
B_{ie} = -1,
\qquad
B_{je} = 1,
$$

and all other entries in column \(e\) are \(0\).

For an undirected graph, the orientation is arbitrary. The Laplacian satisfies

$$
L = BB^T.
$$

This factorization shows that the Laplacian is positive semidefinite:

$$
x^TLx = x^TBB^Tx = \|B^Tx\|^2 \geq 0.
$$

Thus a structural graph matrix becomes a Gram-type matrix.
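The factorization \(L = BB^T\) is easy to confirm on the running example; the orientation chosen below is arbitrary, as the text notes:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 3)]          # 0-based, arbitrarily oriented i -> j
n, m = 4, len(edges)

B = np.zeros((n, m), dtype=int)
for e, (i, j) in enumerate(edges):
    B[i, e] = -1                          # edge e leaves v_i
    B[j, e] = 1                           # edge e enters v_j

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

print(np.array_equal(B @ B.T, L))         # True
```

Flipping the orientation of any edge negates a column of \(B\), which leaves \(BB^T\) unchanged.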

## 111.17 Graphs as Linear Operators

An adjacency matrix can be viewed as a linear operator on functions defined on vertices.

A vector

$$
x =
\begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{bmatrix}
$$

assigns a number \(x_i\) to each vertex \(v_i\).

Then

$$
(Ax)_i = \sum_{j=1}^n a_{ij}x_j.
$$

For an unweighted graph, this is the sum of the values of \(x\) over the neighbors of \(v_i\).

Thus \(A\) acts by aggregating neighboring values. This interpretation is used in random walks, diffusion, message passing, graph neural networks, ranking algorithms, and spectral methods.
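The aggregation view is visible in a direct computation (the values attached to the vertices are made up for illustration):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

x = np.array([10, 20, 30, 40])            # a value attached to each vertex
y = A @ x
# Vertex 1 has neighbors 2 and 3, so (Ax)_1 = 20 + 30 = 50.
print(y)                                  # [50 50 10 20]
```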

## 111.18 Random Walk Matrices

For a graph with degree matrix \(D\), the matrix

$$
P = D^{-1}A
$$

is the transition matrix of a simple random walk, assuming all degrees are positive.

The entry

$$
p_{ij}
$$

is the probability of moving from vertex \(i\) to vertex \(j\) in one step.

If \(x\) is a row probability vector, then

$$
xP
$$

is the distribution after one step.

Random walk matrices connect graph theory with Markov chains.

They are used in PageRank, diffusion models, network analysis, and stochastic processes.
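For the example graph, one step of the walk can be sketched as follows; dividing each row of \(A\) by its degree is the same as forming \(D^{-1}A\):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

degrees = A.sum(axis=1)
P = A / degrees[:, None]                  # D^{-1} A; requires all degrees > 0

x = np.array([1.0, 0.0, 0.0, 0.0])        # walker starts at vertex 1
print(x @ P)                              # [0.  0.5 0.5 0. ]
```

Each row of \(P\) sums to \(1\), so \(P\) is row-stochastic: from vertex \(1\) the walker moves to vertex \(2\) or vertex \(3\), each with probability \(1/2\).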

## 111.19 Relabeling Vertices

The adjacency matrix depends on the ordering of vertices.

If the vertices are relabeled, the matrix changes. However, the graph itself remains the same.

Relabeling corresponds to multiplication by a permutation matrix.

If \(P\) is a permutation matrix, then the relabeled adjacency matrix is

$$
A' = P^TAP.
$$

Matrices related in this way represent the same graph with different vertex labels.

Because \(A'\) is similar to \(A\), they have the same eigenvalues.

This explains why the spectrum is a graph invariant.
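Invariance of the spectrum under relabeling can be checked directly; the permutation chosen here is arbitrary:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

perm = [2, 0, 3, 1]                       # an arbitrary relabeling of vertices
P = np.eye(4, dtype=int)[:, perm]         # permutation matrix
A_relabeled = P.T @ A @ P

lam = np.sort(np.linalg.eigvalsh(A))
lam_relabeled = np.sort(np.linalg.eigvalsh(A_relabeled))
print(np.allclose(lam, lam_relabeled))    # True: the spectrum is unchanged
```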

## 111.20 Example: A Path Graph

Consider the path graph on four vertices:

$$
1 - 2 - 3 - 4.
$$

Its adjacency matrix is

$$
A =
\begin{bmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0
\end{bmatrix}.
$$

The degree matrix is

$$
D =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
$$

The Laplacian is

$$
L =
\begin{bmatrix}
1 & -1 & 0 & 0 \\
-1 & 2 & -1 & 0 \\
0 & -1 & 2 & -1 \\
0 & 0 & -1 & 1
\end{bmatrix}.
$$

This matrix records the discrete second-difference structure of the path. It is the graph analogue of a one-dimensional Laplace operator.
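Because the path's edges join consecutive vertices, its adjacency matrix is just a pair of off-diagonals, and the matrices above can be generated for any length:

```python
import numpy as np

n = 4
off = np.ones(n - 1, dtype=int)
A = np.diag(off, 1) + np.diag(off, -1)    # path graph: edges i -- (i+1)
D = np.diag(A.sum(axis=1))
L = D - A                                 # tridiagonal second-difference matrix
print(L)
```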

## 111.21 Applications

Adjacency matrices give a common language for many systems.

| System | Vertices | Edges |
|---|---|---|
| Web graph | Web pages | Hyperlinks |
| Social network | People or accounts | Relationships |
| Road network | Intersections | Roads |
| Citation network | Papers | Citations |
| Molecular graph | Atoms | Bonds |
| Computer network | Machines | Connections |
| Knowledge graph | Entities | Relations |
| Recommender system | Users and items | Interactions |

Once encoded as matrices, these systems can be analyzed with linear algebra.

Common tasks include:

| Task | Linear algebra method |
|---|---|
| Ranking | Leading eigenvectors |
| Clustering | Laplacian eigenvectors |
| Connectivity | Null space of Laplacian |
| Similarity | Matrix powers and kernels |
| Diffusion | Random walk matrices |
| Compression | Low-rank approximation |
| Link prediction | Matrix factorization |

## 111.22 Summary

Graphs describe relationships. Adjacency matrices turn those relationships into linear algebra.

For a graph with \(n\) vertices, the adjacency matrix is an \(n \times n\) matrix whose entries record which vertices are adjacent. For undirected graphs, the adjacency matrix is symmetric. Its powers count walks. Its row sums give degrees. Together with the degree matrix, it forms the graph Laplacian

$$
L = D - A.
$$

The Laplacian, adjacency matrix, incidence matrix, and random walk matrix provide complementary views of graph structure.

This chapter shows a general principle: discrete structure can be encoded as linear algebra. Once encoded, graph problems become questions about matrices, subspaces, eigenvalues, and operators.
