
Chapter 80. Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors describe special directions of a linear transformation. In graph theory, they describe special patterns on the vertices of a graph.

Let

G=(V,E)

be a finite simple undirected graph with vertices

V=\{v_1,v_2,\ldots,v_n\}.

Let A be the adjacency matrix of G. A nonzero vector

x\in \mathbb{R}^n

is an eigenvector of A if there is a scalar λ such that

Ax=\lambda x.

The scalar λ is called the corresponding eigenvalue.

In spectral graph theory, eigenvalues and eigenvectors of graph matrices, especially adjacency and Laplacian matrices, are used to study structural properties of graphs. For an undirected graph, the adjacency matrix is real and symmetric, so its eigenvalues are real and it has an orthonormal basis of eigenvectors.

80.1 Eigenvectors as Vertex Functions

A vector

x= \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}

may be viewed as a function on the vertices of G. The entry x_i assigns a value to vertex v_i.

The equation

Ax=\lambda x

can be read one vertex at a time. Since

(Ax)_i=\sum_{v_j\sim v_i}x_j,

the eigenvalue equation becomes

\sum_{v_j\sim v_i}x_j=\lambda x_i.

Thus an adjacency eigenvector assigns values to vertices so that, at every vertex, the sum of neighboring values is a fixed multiple of the value at that vertex.

This is the basic local meaning of an adjacency eigenvector.

80.2 Example: The Path on Three Vertices

Consider the path

P_3: v_1 - v_2 - v_3.

Its adjacency matrix is

A= \begin{bmatrix} 0&1&0\\ 1&0&1\\ 0&1&0 \end{bmatrix}.

Let

x= \begin{bmatrix} 1\\ \sqrt{2}\\ 1 \end{bmatrix}.

Then

Ax= \begin{bmatrix} \sqrt{2}\\ 2\\ \sqrt{2} \end{bmatrix} = \sqrt{2} \begin{bmatrix} 1\\ \sqrt{2}\\ 1 \end{bmatrix}.

Hence x is an eigenvector with eigenvalue

\lambda=\sqrt{2}.

The middle vertex receives the largest value. This reflects its central position in the path: both endpoints are adjacent to it, and it connects the two sides of the graph.
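This small example is easy to verify numerically. A minimal sketch (using numpy, an illustrative choice of tool) checks the eigenvalue equation directly:

```python
import numpy as np

# Adjacency matrix of the path P3 from the example above
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# The candidate eigenvector (1, sqrt(2), 1)
x = np.array([1.0, np.sqrt(2), 1.0])

# A @ x should equal sqrt(2) * x, confirming the eigenvalue sqrt(2)
print(np.allclose(A @ x, np.sqrt(2) * x))  # True
```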

80.3 Scaling of Eigenvectors

If x is an eigenvector with eigenvalue λ, then every nonzero scalar multiple of x is also an eigenvector with eigenvalue λ.

Indeed,

A(cx)=cAx=c\lambda x=\lambda(cx).

Therefore an eigenvector is not unique as a single vector. It represents a direction in vector space.

For this reason, eigenvectors are often normalized. A common convention is

\|x\|=1.

The normalization changes the length of the vector, but not its eigendirection.

80.4 Eigenspaces

The eigenspace corresponding to an eigenvalue λ is

E_\lambda=\{x\in \mathbb{R}^n: Ax=\lambda x\}.

This set includes the zero vector, even though the zero vector is not called an eigenvector.

Equivalently,

E_\lambda=\ker(A-\lambda I).

Thus the eigenspace is a subspace of R^n.

The dimension of E_λ is the geometric multiplicity of λ. It counts the maximal number of linearly independent eigenvectors associated with λ.
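Geometric multiplicity can be computed as the dimension of ker(A - λI). A minimal sketch, using the triangle K_3 as an illustrative graph (its spectrum is worked out in the section on complete graphs below):

```python
import numpy as np

# Adjacency matrix of the triangle K3 (illustrative choice)
A = np.ones((3, 3)) - np.eye(3)

# For the eigenvalue -1, A - (-1)I = A + I = J, the all-ones matrix
M = A - (-1.0) * np.eye(3)

# dim ker(M) = n - rank(M), the geometric multiplicity of -1
geo_mult = 3 - np.linalg.matrix_rank(M)
print(geo_mult)  # 2
```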

80.5 Orthogonality

For a simple undirected graph, the adjacency matrix is symmetric:

A^T=A.

A real symmetric matrix has orthogonal eigenspaces for distinct eigenvalues. Therefore, if

Ax=\lambda x, \qquad Ay=\mu y,

and

\lambda\neq \mu,

then

x^Ty=0.

This orthogonality gives a clean decomposition of vertex functions. Any vector on the vertices can be written as a sum of eigenvector components.

This is one reason spectral graph theory is so useful: it gives a coordinate system adapted to the graph.
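This decomposition is easy to carry out numerically. A sketch on the 4-cycle (an illustrative graph): for a symmetric matrix, `np.linalg.eigh` returns an orthonormal eigenbasis, and any vertex function expands exactly in that basis.

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 (illustrative choice)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# eigh returns orthonormal eigenvectors as the columns of Q
lam, Q = np.linalg.eigh(A)

f = np.array([3.0, -1.0, 4.0, 1.0])  # an arbitrary vertex function
coeffs = Q.T @ f                     # coordinates in the eigenbasis
reconstructed = Q @ coeffs           # sum of eigenvector components

print(np.allclose(reconstructed, f))  # True
```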

80.6 Diagonalization

Since A is real and symmetric, it can be diagonalized orthogonally. There is an orthogonal matrix Q and a diagonal matrix Λ such that

A=Q\Lambda Q^T.

The columns of Q are orthonormal eigenvectors of A. The diagonal entries of Λ are the corresponding eigenvalues.

If

\Lambda= \operatorname{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n),

then many powers and functions of A become simple:

A^k=Q\Lambda^kQ^T.

This formula explains why eigenvalues control walk counts, diffusion, random walks, and many iterative processes on graphs.
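A quick numerical check of this identity, using the triangle K_3 as an illustrative graph:

```python
import numpy as np

A = np.ones((3, 3)) - np.eye(3)  # adjacency matrix of the triangle K3
lam, Q = np.linalg.eigh(A)

k = 4
Ak_direct = np.linalg.matrix_power(A, k)
Ak_spectral = Q @ np.diag(lam**k) @ Q.T  # A^k = Q Λ^k Q^T

print(np.allclose(Ak_direct, Ak_spectral))  # True
# Diagonal entries of A^k count closed walks of length k
print(Ak_direct[0, 0])  # 6.0: closed walks of length 4 at v_1
```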

80.7 Eigenvalues and Walk Growth

The entry (A^k)_{ij} counts walks of length k from v_i to v_j. Since

A^k=Q\Lambda^kQ^T,

the eigenvalues determine how powers of A grow.

Large eigenvalues dominate high powers. If λ_1 is the eigenvalue of largest absolute value, then the terms involving λ_1^k dominate as k becomes large, unless the corresponding eigenvector components cancel (as happens, for example, in bipartite graphs, where -λ_1 is also an eigenvalue).

Thus the largest eigenvalues describe large-scale walk behavior.

80.8 The Largest Eigenvalue

For a connected graph, the adjacency matrix is nonnegative and irreducible. The Perron-Frobenius theorem implies that the largest adjacency eigenvalue has an eigenvector whose entries are strictly positive. This eigenvector is called the Perron vector.

The largest eigenvalue is often denoted

\lambda_1.

For a connected d-regular graph,

\lambda_1=d.

The all-ones vector is an eigenvector:

A\mathbf{1}=d\mathbf{1}.

For irregular graphs, the Perron vector often assigns larger values to vertices that are connected to other important vertices. This is the basis of eigenvector centrality.

80.9 Eigenvector Centrality

Eigenvector centrality assigns a score x_i to each vertex v_i so that a vertex is important when it is adjacent to important vertices.

The defining equation is

Ax=\lambda x.

At vertex v_i, this says

\lambda x_i=\sum_{v_j\sim v_i}x_j.

Thus x_i is proportional to the sum of the scores of the neighbors of v_i.

The principal eigenvector, when positive, gives a natural centrality measure. It differs from degree centrality. Degree centrality counts neighbors. Eigenvector centrality weights neighbors by their own importance.
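The Perron vector can be approximated by power iteration: repeatedly multiply by A and renormalize. A minimal sketch on an illustrative graph (a triangle v_1 v_2 v_3 with a pendant path v_1 - v_4 - v_5; the graph and iteration count are assumptions for the example, not from the text). Since the graph is connected and contains a triangle, the iteration converges to the positive principal eigenvector:

```python
import numpy as np

# Triangle {v1, v2, v3} plus the path v1-v4-v5 (0-indexed vertices)
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1

x = np.ones(5)
for _ in range(200):           # power iteration
    x = A @ x                  # amplify the dominant eigencomponent
    x /= np.linalg.norm(x)     # renormalize to unit length

# All scores are positive, and v1 (index 0) scores highest
print(int(np.argmax(x)))  # 0
```

Note that v_2 and v_4 have the same degree, yet v_2 scores higher because its neighbors are themselves more central; this is exactly how eigenvector centrality differs from degree centrality.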

80.10 Regular Graphs

Let G be d-regular. Then every row of A sums to d. Hence

A\mathbf{1}=d\mathbf{1}.

So d is an eigenvalue.

If G is connected, then d is the largest eigenvalue and has multiplicity 1.

The other eigenvalues measure structure beyond uniform degree. For example, in a good expander graph, all nontrivial eigenvalues are small in absolute value compared with dd. This means that the graph behaves, in many ways, like a random regular graph.

For regular graphs, spectral information from the adjacency matrix is closely related to spectral information from the Laplacian. For nonregular graphs, the choice of matrix can make a substantial difference.

80.11 Complete Graphs

For the complete graph K_n, the adjacency matrix is

A=J-I,

where J is the all-ones matrix.

The all-ones vector satisfies

A\mathbf{1}=(n-1)\mathbf{1}.

Therefore n-1 is an eigenvalue.

If x is orthogonal to the all-ones vector, then

Jx=0.

Hence

Ax=(J-I)x=-x.

Thus the eigenvalues of K_n are

n-1,\quad -1,\quad -1,\quad \ldots,\quad -1.

The eigenspace for -1 consists of all vectors whose entries sum to zero.
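A numerical check of this spectrum for n = 5 (the value of n is an arbitrary illustrative choice):

```python
import numpy as np

n = 5
A = np.ones((n, n)) - np.eye(n)  # adjacency matrix of K_n is J - I
lam = np.linalg.eigvalsh(A)      # eigenvalues in ascending order

# n - 1 copies of -1, then the simple eigenvalue n - 1
print(np.allclose(lam, [-1, -1, -1, -1, n - 1]))  # True
```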

80.12 Empty Graphs

The empty graph on n vertices has adjacency matrix

A=0.

Therefore

Ax=0

for every vector x. Every nonzero vector is an eigenvector with eigenvalue 0.

The only eigenvalue is

0,

with multiplicity n.

This reflects the absence of adjacency structure.

80.13 Bipartite Graphs

For a bipartite graph with parts X and Y, the adjacency matrix can be written in block form:

A= \begin{bmatrix} 0&B\\ B^T&0 \end{bmatrix}.

Suppose

A \begin{bmatrix} u\\ v \end{bmatrix} = \lambda \begin{bmatrix} u\\ v \end{bmatrix}.

Then

Bv=\lambda u, \qquad B^Tu=\lambda v.

Changing the sign of the second block gives

A \begin{bmatrix} u\\ -v \end{bmatrix} = -\lambda \begin{bmatrix} u\\ -v \end{bmatrix}.

Therefore if λ is an eigenvalue, then -λ is also an eigenvalue with the same multiplicity.

Thus the adjacency spectrum of a bipartite graph is symmetric about zero.
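A numerical illustration on the path P_4, which is bipartite (an illustrative choice of graph):

```python
import numpy as np

# Adjacency matrix of the path P4, a bipartite graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam = np.linalg.eigvalsh(A)

# The multiset of eigenvalues equals its own negation
print(np.allclose(np.sort(lam), np.sort(-lam)))  # True
```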

80.14 Laplacian Eigenvectors

The Laplacian matrix is

L=\Delta-A,

where Δ is the diagonal matrix of vertex degrees.

A Laplacian eigenvector satisfies

Lx=\mu x.

Since

(Lx)_i=\deg(v_i)x_i-\sum_{v_j\sim v_i}x_j,

the equation at vertex v_i is

\deg(v_i)x_i-\sum_{v_j\sim v_i}x_j=\mu x_i.

Equivalently,

\sum_{v_j\sim v_i}(x_i-x_j)=\mu x_i.

Thus a Laplacian eigenvector describes how a vertex value differs from neighboring values.

80.15 The Zero Laplacian Eigenvalue

The all-ones vector satisfies

L\mathbf{1}=0.

Thus 0 is always a Laplacian eigenvalue.

More generally, if a graph has c connected components, then the eigenspace for eigenvalue 0 has dimension c. Its vectors are exactly those that are constant on each connected component.

In particular, a graph is connected if and only if the zero eigenvalue of L has multiplicity 1. Equivalently, the multiplicity of 0 in the Laplacian spectrum equals the number of connected components.
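A sketch that reads off the number of components from the Laplacian spectrum, using two disjoint edges on four vertices as an illustrative graph:

```python
import numpy as np

# Two disjoint edges v1-v2 and v3-v4: two connected components
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # Laplacian L = Δ - A
mu = np.linalg.eigvalsh(L)

# The multiplicity of the eigenvalue 0 equals the number of components
print(int(np.sum(np.isclose(mu, 0))))  # 2
```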

80.16 The Fiedler Vector

For a connected graph, the Laplacian eigenvalues are ordered as

0=\mu_1<\mu_2\leq \cdots\leq \mu_n.

The eigenvalue μ_2 is the algebraic connectivity. An eigenvector for μ_2 is called a Fiedler vector.

The Fiedler vector is important because it often reveals a natural partition of the graph. Vertices with positive entries may be placed on one side, and vertices with negative entries on the other. This idea is used in spectral partitioning and spectral clustering.

The value of μ_2 also measures how strongly connected the graph is. Small μ_2 indicates a bottleneck. Larger μ_2 indicates stronger connectivity. This connection is a standard theme in spectral graph theory.
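A minimal spectral-bisection sketch, on an illustrative graph made of two triangles joined by a single bridge edge; the sign pattern of the Fiedler vector recovers the two triangles:

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

L = np.diag(A.sum(axis=1)) - A
mu, Q = np.linalg.eigh(L)   # ascending: mu[0] = 0, mu[1] is μ2
fiedler = Q[:, 1]           # a Fiedler vector

side = fiedler > 0          # sign pattern suggests the cut
print(side[:3], side[3:])   # one triangle on each side of the cut
```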

80.17 Normalized Eigenvectors

For graphs with uneven degrees, normalized matrices are often more appropriate.

The symmetric normalized Laplacian is

L_{\mathrm{sym}} = I-\Delta^{-1/2}A\Delta^{-1/2}.

The random-walk matrix is

P=\Delta^{-1}A.

Eigenvectors of these matrices account for degree variation. They are common in random walks, spectral clustering, and graph-based data analysis.

For regular graphs, normalization changes little because all degrees are equal. For irregular graphs, normalized and unnormalized eigenvectors may give substantially different information.

80.18 Sign Patterns

Eigenvectors are often useful because of their sign patterns.

For example, a Laplacian eigenvector with both positive and negative entries can suggest a cut. Vertices with similar signs and similar magnitudes tend to be placed near one another in the graph.

The sign pattern alone does not determine the graph structure. However, it often reveals coarse geometry: clusters, bottlenecks, central regions, and peripheral regions.

This is one reason eigenvectors are used for visualization and partitioning.

80.19 Spectral Embeddings

Several eigenvectors can be used together to embed a graph into Euclidean space.

Choose eigenvectors

x^{(1)},x^{(2)},\ldots,x^{(k)}.

Map each vertex v_i to the point

\bigl(x^{(1)}_i,x^{(2)}_i,\ldots,x^{(k)}_i\bigr) \in \mathbb{R}^k.

This construction is called a spectral embedding.

Nearby points in the embedding often correspond to vertices with similar graph positions. Spectral embeddings are used for clustering, drawing graphs, dimensionality reduction, and network analysis.
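A sketch of a two-dimensional spectral embedding of the 6-cycle (an illustrative choice), using the two Laplacian eigenvectors after the constant one; the cycle's symmetry places every vertex at the same distance from the origin:

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):          # edges of the cycle C6
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

L = np.diag(A.sum(axis=1)) - A
mu, Q = np.linalg.eigh(L)

points = Q[:, 1:3]          # one point in R^2 per vertex
radii = np.linalg.norm(points, axis=1)

print(np.allclose(radii, radii[0]))  # True: vertices lie on a circle
```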

80.20 Eigenvalues as Frequencies

For Laplacian eigenvectors, eigenvalues can be interpreted as graph frequencies.

The quadratic form

x^TLx=\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2

measures variation across edges.

If Lx=μx and x is a unit vector, then

x^TLx=\mu.

Thus small Laplacian eigenvalues correspond to smooth eigenvectors that vary slowly across edges. Large eigenvalues correspond to oscillatory eigenvectors that change sharply across edges.

This is analogous to Fourier analysis, where low frequencies vary slowly and high frequencies oscillate rapidly.
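Both identities are easy to confirm numerically; the sketch below uses the path on three vertices (with the center listed first) as an illustrative graph:

```python
import numpy as np

# Path on 3 vertices with the center v1 adjacent to v2 and v3
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
mu, Q = np.linalg.eigh(L)

x = Q[:, 2]                 # unit eigenvector for the largest μ
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in [(0, 1), (0, 2)])

print(np.isclose(x @ L @ x, mu[2]))  # True: x^T L x = μ
print(np.isclose(edge_sum, mu[2]))   # True: matches the edge-difference sum
```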

80.21 Summary

Eigenvalues and eigenvectors translate graph structure into linear algebra.

For the adjacency matrix,

Ax=\lambda x,

an eigenvector is a vertex function whose neighbor sum is proportional to its value.

For the Laplacian matrix,

Lx=\mu x,

an eigenvector is a vertex function whose difference from neighboring values is proportional to its value.

| Object | Meaning |
| --- | --- |
| Adjacency eigenvalue | Scaling factor for neighbor-sum patterns |
| Adjacency eigenvector | Vertex function balanced by neighboring values |
| Largest adjacency eigenvalue | Measures dominant walk growth and centrality |
| Perron vector | Positive eigenvector for a connected graph |
| Laplacian eigenvalue 0 | Constant functions on connected components |
| Fiedler vector | Eigenvector used to detect bottlenecks and partitions |
| Large Laplacian eigenvalues | Highly oscillatory vertex functions |
| Spectral embedding | Coordinates built from graph eigenvectors |

Eigenvalues give numerical invariants. Eigenvectors give structured functions on vertices. Together they provide a coordinate system for studying graphs through matrices.