
Chapter 111. Graphs and Adjacency Matrices

A graph is a mathematical object used to describe pairwise relationships.

The objects being related are called vertices. The relationships between them are called edges. Graphs appear in network analysis, computer science, chemistry, optimization, social systems, transportation, web search, and machine learning.

Linear algebra enters graph theory by encoding a graph as a matrix. Once a graph is represented by a matrix, its structure can be studied using rank, eigenvalues, eigenvectors, matrix powers, null spaces, and decompositions. This is the main idea behind algebraic and spectral graph theory, where graph properties are studied through matrices associated with the graph.

111.1 Graphs

A graph is an ordered pair

G = (V,E),

where V is a set of vertices and E is a set of edges.

If

V = \{v_1, v_2, \ldots, v_n\},

then the graph has n vertices.

An edge joins two vertices. In an undirected graph, the edge between v_i and v_j has no direction. In a directed graph, the edge has an initial vertex and a terminal vertex.

For example, if

V = \{1,2,3,4\}

and

E = \{\{1,2\},\{1,3\},\{2,4\}\},

then the graph has four vertices and three undirected edges.

The edge set says that vertex 1 is adjacent to vertices 2 and 3, and vertex 2 is adjacent to vertex 4.

111.2 Adjacency

Two vertices are adjacent if an edge connects them.

Adjacency is the basic relation in a graph. Most graph questions begin with this relation:

| Question | Meaning |
| --- | --- |
| Is v_i connected to v_j? | Adjacency |
| How many neighbors does v_i have? | Degree |
| Can v_i reach v_j? | Connectivity |
| How many walks go from v_i to v_j? | Matrix powers |
| How central is v_i? | Spectral information |

Linear algebra makes these questions computable by storing adjacency information in a matrix.

111.3 The Adjacency Matrix

Let GG be a graph with vertices

v_1, v_2, \ldots, v_n.

The adjacency matrix of G is the n \times n matrix A whose entries are

a_{ij} = \begin{cases} 1, & \text{if } v_i \text{ is adjacent to } v_j, \\ 0, & \text{otherwise.} \end{cases}

For a finite simple graph, this gives a square 0-1 matrix with zeros on the diagonal. For an undirected graph, the adjacency matrix is symmetric.

For the graph

E = \{\{1,2\},\{1,3\},\{2,4\}\},

the adjacency matrix is

A = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.

The entry a_{12} = 1 because vertices 1 and 2 are adjacent. The entry a_{34} = 0 because vertices 3 and 4 are not adjacent.
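As a short sketch in Python with NumPy (assuming vertices 1 through 4 are stored as 0-based indices), the adjacency matrix of this example graph can be built directly from the edge list:

```python
import numpy as np

# Edges of the example graph, with vertices 1..4 stored as indices 0..3.
edges = [(0, 1), (0, 2), (1, 3)]
n = 4

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected: record the edge in both directions

print(A)
```

The resulting matrix matches the one above: symmetric, zeros on the diagonal, with a 1 exactly where two vertices are adjacent.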

111.4 Symmetry

For an undirected graph,

a_{ij} = a_{ji}.

Therefore the adjacency matrix satisfies

A^T = A.

Thus the adjacency matrix of an undirected graph is symmetric.

This has important consequences. A real symmetric matrix has real eigenvalues and an orthogonal basis of eigenvectors. Therefore an undirected graph can be studied using the spectral theorem. The eigenvalues of the adjacency matrix form part of the spectrum of the graph.

For directed graphs, symmetry usually fails. If there is an edge from v_i to v_j, but no edge from v_j to v_i, then

a_{ij} = 1, \qquad a_{ji} = 0.

In that case A need not be symmetric.

111.5 Degree

The degree of a vertex in an undirected graph is the number of edges incident to it.

For a simple undirected graph, the degree of v_i is

d_i = \sum_{j=1}^n a_{ij}.

Thus the degree is the sum of the entries in row i of the adjacency matrix.

For the example matrix

A = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix},

the degrees are

d_1 = 2, \quad d_2 = 2, \quad d_3 = 1, \quad d_4 = 1.

The degree vector is

d = \begin{bmatrix} 2 \\ 2 \\ 1 \\ 1 \end{bmatrix}.

It can be computed as

d = A\mathbf{1},

where \mathbf{1} is the vector whose entries are all 1.
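This computation can be sketched numerically (using NumPy and the example matrix above, with 0-based indices):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

ones = np.ones(4, dtype=int)
d = A @ ones   # degree vector d = A·1
print(d)       # [2 2 1 1]
```

The same vector is obtained from `A.sum(axis=1)`, the row sums.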

111.6 Directed Graphs

For a directed graph, adjacency has direction.

The adjacency matrix is usually defined by

a_{ij} = \begin{cases} 1, & \text{if there is an edge from } v_i \text{ to } v_j, \\ 0, & \text{otherwise.} \end{cases}

The row sum gives the out-degree:

d_i^{\text{out}} = \sum_{j=1}^n a_{ij}.

The column sum gives the in-degree:

d_j^{\text{in}} = \sum_{i=1}^n a_{ij}.

Thus directed graphs naturally distinguish between outgoing and incoming structure.

A web graph is a directed graph. A page links to another page, but the reverse link need not exist.
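A small sketch of the row-sum and column-sum computations (the three-page link structure here is a made-up example):

```python
import numpy as np

# Hypothetical web graph: page 0 links to pages 1 and 2; page 1 links to page 2.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])

out_deg = A.sum(axis=1)  # row sums: out-degrees
in_deg = A.sum(axis=0)   # column sums: in-degrees
print(out_deg, in_deg)   # [2 1 0] [0 1 2]
```

Page 2 has out-degree 0 and in-degree 2: it is linked to but links nowhere, which is exactly the asymmetry a directed adjacency matrix records.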

111.7 Weighted Graphs

In a weighted graph, edges carry numerical weights.

The adjacency matrix becomes

a_{ij} = \begin{cases} w_{ij}, & \text{if there is an edge from } v_i \text{ to } v_j, \\ 0, & \text{otherwise.} \end{cases}

The weight may represent distance, cost, strength, similarity, probability, flow capacity, or frequency.

For example,

A = \begin{bmatrix} 0 & 3 & 5 \\ 3 & 0 & 2 \\ 5 & 2 & 0 \end{bmatrix}

describes an undirected weighted graph with three vertices. The edge between vertices 1 and 2 has weight 3. The edge between vertices 1 and 3 has weight 5.

Weighted adjacency matrices allow graph algorithms to use numerical intensity, not only presence or absence.

111.8 Matrix Powers and Walks

One of the most important facts about adjacency matrices is that powers of A count walks.

A walk of length k from v_i to v_j is a sequence of k edges starting at v_i and ending at v_j.

The entry (A^k)_{ij} equals the number of walks of length k from v_i to v_j.

For k = 2,

(A^2)_{ij} = \sum_{\ell=1}^n a_{i\ell} a_{\ell j}.

Each nonzero term corresponds to a vertex v_\ell such that

v_i \to v_\ell \to v_j

is a walk of length 2.

For k = 3, the entries of A^3 count walks of length 3. The same pattern continues for all positive integers k.

This connects graph traversal to matrix multiplication.
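The walk counts can be checked numerically for the example graph (a NumPy sketch, with vertices 1..4 stored as 0-based indices):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

A2 = A @ A
# (A^2)_{11} = 2: the two closed walks 1 -> 2 -> 1 and 1 -> 3 -> 1.
print(A2[0, 0])  # 2
# (A^2)_{14} = 1: the single walk 1 -> 2 -> 4.
print(A2[0, 3])  # 1
```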

111.9 Paths, Walks, and Reachability

A walk may repeat vertices and edges. A path does not repeat vertices.

Matrix powers count walks, not necessarily paths. Still, they are useful for determining reachability.

If there exists some k such that

(A^k)_{ij} > 0,

then there is a walk from v_i to v_j. In a finite graph, the existence of a walk implies the existence of a path, since any repeated vertices can be cut out of the walk.

Thus reachability can be studied through the sequence

A, A^2, A^3, \ldots.

For a graph with n vertices, if v_i can reach v_j, then there is a path of length at most n-1. Therefore it is enough to examine powers up to A^{n-1}.
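The reachability test can be sketched by accumulating the powers A, A^2, ..., A^{n-1} (NumPy, 0-based indices; the example graph is connected, so every vertex reaches every other by a walk of length at most 3):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
n = A.shape[0]

reach = np.zeros((n, n), dtype=bool)
Ak = np.eye(n, dtype=int)
for _ in range(n - 1):
    Ak = Ak @ A          # next power A^k
    reach |= Ak > 0      # a positive entry means a walk of length k exists

print(reach.all())  # True: the example graph is connected
```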

111.10 The Degree Matrix

The degree matrix of an undirected graph is the diagonal matrix

D = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}.

The diagonal entries are the degrees of the vertices.

For the earlier example,

D = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.

The degree matrix is often used together with the adjacency matrix to form the graph Laplacian.

111.11 The Graph Laplacian

The graph Laplacian is

L = D - A.

For an undirected graph, L is symmetric.

Using the example,

L = \begin{bmatrix} 2 & -1 & -1 & 0 \\ -1 & 2 & 0 & -1 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \end{bmatrix}.

The Laplacian records both adjacency and degree information. It is central in spectral graph theory, random walks, electrical networks, graph partitioning, clustering, and discrete analogues of differential operators. Spectral graph theory commonly studies graph structure through matrices such as the adjacency matrix and the Laplacian matrix.
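The construction L = D - A is a one-liner in NumPy (a sketch using the example graph, 0-based indices):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

D = np.diag(A.sum(axis=1))  # degree matrix from the row sums
L = D - A                   # graph Laplacian
print(L)
```

Every row of L sums to zero, since each diagonal degree equals the number of -1 entries in its row.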

111.12 Quadratic Form of the Laplacian

For an undirected graph, the Laplacian has a useful quadratic form.

If x \in \mathbb{R}^n, then

x^T L x = \sum_{\{i,j\} \in E} (x_i - x_j)^2.

This identity shows that x^T L x measures how much the values of x vary across edges.

If adjacent vertices have similar values, the sum is small. If adjacent vertices have very different values, the sum is large.

This formula explains why the Laplacian appears in smoothing, graph signal processing, clustering, and energy minimization.
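The identity can be verified numerically on the example graph (a sketch; the vector x here is arbitrary):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 3)]   # example graph, 0-based indices
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 2.0, 4.0, 3.0])  # arbitrary values on the vertices
quad = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(quad, edge_sum)  # both equal 11.0
```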

111.13 Connected Components

A graph is connected if every vertex can be reached from every other vertex.

The Laplacian detects connected components.

For an undirected graph,

L\mathbf{1} = 0.

Thus 0 is always an eigenvalue of L.

The multiplicity of the eigenvalue 0 equals the number of connected components of the graph.

If the graph is connected, then the null space of L is

\operatorname{Null}(L) = \operatorname{span}\{\mathbf{1}\}.

This is a strong example of a graph property expressed as a linear algebra property.
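A sketch that counts components spectrally (the five-vertex graph here, a triangle plus a disjoint edge, is a made-up example with two components):

```python
import numpy as np

# Two components: a triangle {0,1,2} and a single edge {3,4}.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

eigvals = np.linalg.eigvalsh(L)  # real eigenvalues, since L is symmetric
num_components = int(np.isclose(eigvals, 0.0).sum())
print(num_components)  # 2
```

Counting eigenvalues that are numerically zero recovers the number of components, exactly as the theorem states.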

111.14 Eigenvalues of the Adjacency Matrix

The eigenvalues of the adjacency matrix give information about graph structure.

If A is the adjacency matrix of an undirected graph, then A is real and symmetric. Therefore it has real eigenvalues

\lambda_1, \lambda_2, \ldots, \lambda_n

and an orthonormal basis of eigenvectors.

The multiset of eigenvalues is called the spectrum of the graph. The spectrum is invariant under relabeling of vertices, although it does not fully determine the graph.

A large leading eigenvalue often indicates high connectivity or strong concentration of edges. Eigenvectors can reveal clusters, central vertices, or structural divisions.

111.15 Regular Graphs

A graph is d-regular if every vertex has degree d.

In a d-regular graph,

A\mathbf{1} = d\mathbf{1}.

Therefore \mathbf{1} is an eigenvector of A with eigenvalue d.

For regular graphs, the adjacency matrix and Laplacian are closely related:

L = dI - A.

Thus each eigenvalue of L is obtained by subtracting an eigenvalue of A from d.

If

Aq = \lambda q,

then

Lq = (d - \lambda)q.

This relation makes regular graphs especially convenient in spectral analysis.
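For instance, the 4-cycle is 2-regular, and the relation λ(L) = d - λ(A) can be checked directly (a NumPy sketch):

```python
import numpy as np

# Adjacency matrix of the 4-cycle C_4, which is 2-regular.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
d = 2
L = d * np.eye(4) - A   # for a d-regular graph, L = dI - A

adj_eigs = np.linalg.eigvalsh(A)
lap_eigs = np.linalg.eigvalsh(L)
# The Laplacian spectrum is d minus the adjacency spectrum.
print(np.allclose(np.sort(d - adj_eigs), np.sort(lap_eigs)))  # True
```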

111.16 Incidence Matrices

Another matrix representation of a graph is the incidence matrix.

Choose an orientation for each edge. For a graph with n vertices and m edges, the oriented incidence matrix B is an n \times m matrix.

For an edge e directed from v_i to v_j,

B_{ie} = -1, \qquad B_{je} = 1,

and all other entries in column e are 0.

For an undirected graph, the orientation is arbitrary. The Laplacian satisfies

L = BB^T.

This factorization shows that the Laplacian is positive semidefinite:

x^T L x = x^T B B^T x = \|B^T x\|^2 \geq 0.

Thus a structural graph matrix becomes a Gram-type matrix.
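The factorization L = BB^T can be checked on the example graph (a sketch; each edge is arbitrarily oriented from its smaller to its larger endpoint, which is allowed since the orientation does not affect BB^T):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 3)]   # example graph, each edge oriented i -> j
n, m = 4, len(edges)

B = np.zeros((n, m))
for e, (i, j) in enumerate(edges):
    B[i, e] = -1   # tail of edge e
    B[j, e] = 1    # head of edge e

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

print(np.allclose(B @ B.T, L))  # True: L = B B^T
```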

111.17 Graphs as Linear Operators

An adjacency matrix can be viewed as a linear operator on functions defined on vertices.

A vector

x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

assigns a number x_i to each vertex v_i.

Then

(Ax)_i = \sum_{j=1}^n a_{ij} x_j.

For an unweighted graph, this is the sum of the values of x over the neighbors of v_i.

Thus A acts by aggregating neighboring values. This interpretation is used in random walks, diffusion, message passing, graph neural networks, ranking algorithms, and spectral methods.
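A brief sketch of this neighbor-aggregation view (example graph, 0-based indices, with arbitrary values placed on the vertices):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
x = np.array([10.0, 20.0, 30.0, 40.0])  # a value on each vertex

Ax = A @ x
# (Ax)_1 sums x over the neighbors of vertex 1, namely vertices 2 and 3.
print(Ax[0])  # 50.0 = 20 + 30
```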

111.18 Random Walk Matrices

For a graph with degree matrix DD, the matrix

P=D1A P = D^{-1}A

is the transition matrix of a simple random walk, assuming all degrees are positive.

The entry

The entry

p_{ij}

is the probability of moving from vertex i to vertex j in one step.

If x is a row probability vector, then

xP

is the distribution after one step.

is the distribution after one step.

Random walk matrices connect graph theory with Markov chains.

They are used in PageRank, diffusion models, network analysis, and stochastic processes.
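One step of the walk on the example graph can be sketched as follows (all degrees are positive, so D^{-1} exists; dividing each row of A by its degree computes D^{-1}A without forming an inverse):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
d = A.sum(axis=1)
P = A / d[:, None]          # P = D^{-1} A; each row sums to 1

x = np.array([1.0, 0.0, 0.0, 0.0])  # start at vertex 1 with probability 1
x1 = x @ P
print(x1)  # [0.  0.5 0.5 0. ]: equal chance of moving to vertex 2 or 3
```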

111.19 Relabeling Vertices

The adjacency matrix depends on the ordering of vertices.

If the vertices are relabeled, the matrix changes. However, the graph itself remains the same.

Relabeling corresponds to multiplication by a permutation matrix.

If P is a permutation matrix, then the relabeled adjacency matrix is

A' = P^T A P.

Matrices related in this way represent the same graph with different vertex labels.

Because A' is similar to A, they have the same eigenvalues.

This explains why the spectrum is a graph invariant.
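The invariance can be checked with a permutation of the example graph (a sketch; the specific relabeling chosen below is arbitrary):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

perm = [2, 0, 3, 1]          # an arbitrary relabeling of the 4 vertices
P = np.eye(4)[perm]          # the corresponding permutation matrix
A_relabeled = P.T @ A @ P

same = np.allclose(np.sort(np.linalg.eigvalsh(A)),
                   np.sort(np.linalg.eigvalsh(A_relabeled)))
print(same)  # True: the spectrum is unchanged by relabeling
```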

111.20 Example: A Path Graph

Consider the path graph on four vertices:

1 - 2 - 3 - 4.

Its adjacency matrix is

A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}.

The degree matrix is

D = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.

The Laplacian is

L = \begin{bmatrix} 1 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{bmatrix}.

This matrix records the discrete second-difference structure of the path. It is the graph analogue of a one-dimensional Laplace operator.

111.21 Applications

Adjacency matrices give a common language for many systems.

| System | Vertices | Edges |
| --- | --- | --- |
| Web graph | Web pages | Hyperlinks |
| Social network | People or accounts | Relationships |
| Road network | Intersections | Roads |
| Citation network | Papers | Citations |
| Molecular graph | Atoms | Bonds |
| Computer network | Machines | Connections |
| Knowledge graph | Entities | Relations |
| Recommender system | Users and items | Interactions |

Once encoded as matrices, these systems can be analyzed with linear algebra.

Common tasks include:

| Task | Linear algebra method |
| --- | --- |
| Ranking | Leading eigenvectors |
| Clustering | Laplacian eigenvectors |
| Connectivity | Null space of Laplacian |
| Similarity | Matrix powers and kernels |
| Diffusion | Random walk matrices |
| Compression | Low-rank approximation |
| Link prediction | Matrix factorization |

111.22 Summary

Graphs describe relationships. Adjacency matrices turn those relationships into linear algebra.

For a graph with n vertices, the adjacency matrix is an n \times n matrix whose entries record which vertices are adjacent. For undirected graphs, the adjacency matrix is symmetric. Its powers count walks. Its row sums give degrees. Together with the degree matrix, it forms the graph Laplacian

L = D - A.

The Laplacian, adjacency matrix, incidence matrix, and random walk matrix provide complementary views of graph structure.

This chapter shows a general principle: discrete structure can be encoded as linear algebra. Once encoded, graph problems become questions about matrices, subspaces, eigenvalues, and operators.