# Chapter 80. Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors describe special directions of a linear transformation. In graph theory, they describe special patterns on the vertices of a graph.

Let

$$
G=(V,E)
$$

be a finite simple undirected graph with vertices

$$
V=\{v_1,v_2,\ldots,v_n\}.
$$

Let \(A\) be the adjacency matrix of \(G\). A nonzero vector

$$
x\in \mathbb{R}^n
$$

is an eigenvector of \(A\) if there is a scalar \(\lambda\) such that

$$
Ax=\lambda x.
$$

The scalar \(\lambda\) is called the corresponding eigenvalue.

In spectral graph theory, eigenvalues and eigenvectors of graph matrices, especially adjacency and Laplacian matrices, are used to study structural properties of graphs. For an undirected graph, the adjacency matrix is real and symmetric, so its eigenvalues are real and it has an orthonormal basis of eigenvectors.

## 80.1 Eigenvectors as Vertex Functions

A vector

$$
x=
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{bmatrix}
$$

may be viewed as a function on the vertices of \(G\). The entry \(x_i\) assigns a value to vertex \(v_i\).

The equation

$$
Ax=\lambda x
$$

can be read one vertex at a time. Since

$$
(Ax)_i=\sum_{v_j\sim v_i}x_j,
$$

the eigenvalue equation becomes

$$
\sum_{v_j\sim v_i}x_j=\lambda x_i.
$$

Thus an adjacency eigenvector assigns values to vertices so that, at every vertex, the sum of neighboring values is a fixed multiple of the value at that vertex.

This is the basic local meaning of an adjacency eigenvector.
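This local reading is easy to check numerically. The sketch below (Python with NumPy; the 4-cycle \(C_4\) is an illustrative choice) verifies that an adjacency eigenvector satisfies the neighbor-sum equation at every vertex:

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 (illustrative choice).
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

# eigh returns eigenvalues in ascending order with orthonormal eigenvectors.
lam, vecs = np.linalg.eigh(A)
mu = lam[-1]          # largest eigenvalue
x = vecs[:, -1]       # a corresponding eigenvector

# At every vertex, the sum of neighboring values is mu times the vertex value.
for i in range(4):
    neighbor_sum = sum(x[j] for j in range(4) if A[i, j] == 1)
    assert abs(neighbor_sum - mu * x[i]) < 1e-10
```

Since \(C_4\) is \(2\)-regular, the largest eigenvalue found here is \(2\).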

## 80.2 Example: The Path on Three Vertices

Consider the path

$$
P_3: v_1-v_2-v_3.
$$

Its adjacency matrix is

$$
A=
\begin{bmatrix}
0&1&0\\
1&0&1\\
0&1&0
\end{bmatrix}.
$$

Let

$$
x=
\begin{bmatrix}
1\\
\sqrt{2}\\
1
\end{bmatrix}.
$$

Then

$$
Ax=
\begin{bmatrix}
\sqrt{2}\\
2\\
\sqrt{2}
\end{bmatrix} =
\sqrt{2}
\begin{bmatrix}
1\\
\sqrt{2}\\
1
\end{bmatrix}.
$$

Hence \(x\) is an eigenvector with eigenvalue

$$
\lambda=\sqrt{2}.
$$

The middle vertex receives the largest value. This reflects its central position in the path: both endpoints are adjacent to it, and it connects the two sides of the graph.
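A quick NumPy check confirms the computation:

```python
import numpy as np

# Adjacency matrix of the path P3.
A = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

x = np.array([1.0, np.sqrt(2), 1.0])

# A x = sqrt(2) x, so x is an eigenvector with eigenvalue sqrt(2).
assert np.allclose(A @ x, np.sqrt(2) * x)
```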

## 80.3 Scaling of Eigenvectors

If \(x\) is an eigenvector with eigenvalue \(\lambda\), then every nonzero scalar multiple of \(x\) is also an eigenvector with eigenvalue \(\lambda\).

Indeed,

$$
A(cx)=cAx=c\lambda x=\lambda(cx).
$$

Therefore an eigenvector is never unique: any nonzero multiple works equally well. What the eigenvector determines is a direction in \(\mathbb{R}^n\).

For this reason, eigenvectors are often normalized. A common convention is

$$
\|x\|=1.
$$

Normalization fixes the length of the vector but leaves its direction, and hence its eigenvalue, unchanged.

## 80.4 Eigenspaces

The eigenspace corresponding to an eigenvalue \(\lambda\) is

$$
E_\lambda=\{x\in \mathbb{R}^n: Ax=\lambda x\}.
$$

This set includes the zero vector, even though the zero vector is not called an eigenvector.

Equivalently,

$$
E_\lambda=\ker(A-\lambda I).
$$

Thus the eigenspace is a subspace of \(\mathbb{R}^n\).

The dimension of \(E_\lambda\) is the geometric multiplicity of \(\lambda\). It counts the number of independent eigenvectors associated with \(\lambda\).
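The geometric multiplicity can be computed as the dimension of \(\ker(A-\lambda I)\). A minimal sketch, using the triangle \(K_3\) as an illustrative graph (its spectrum is \(2, -1, -1\)):

```python
import numpy as np

# Triangle K3, whose spectrum is 2, -1, -1 (illustrative choice).
A = np.ones((3, 3)) - np.eye(3)

lam = -1.0
# dim E_lambda = dim ker(A - lambda I) = n - rank(A - lambda I)
geo_mult = 3 - np.linalg.matrix_rank(A - lam * np.eye(3))
assert geo_mult == 2
```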

## 80.5 Orthogonality

For a simple undirected graph, the adjacency matrix is symmetric:

$$
A^T=A.
$$

A real symmetric matrix has orthogonal eigenspaces for distinct eigenvalues. Therefore, if

$$
Ax=\lambda x,
\qquad
Ay=\mu y,
$$

and

$$
\lambda\neq \mu,
$$

then

$$
x^Ty=0.
$$

This orthogonality gives a clean decomposition of vertex functions. Any vector on the vertices can be written as a sum of eigenvector components.

This is one reason spectral graph theory is so useful: it gives a coordinate system adapted to the graph.
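The decomposition can be demonstrated directly: for a symmetric adjacency matrix, `np.linalg.eigh` returns an orthonormal eigenbasis, and every vertex function is recovered exactly from its eigenvector components. The random graph below is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random symmetric 0/1 adjacency matrix on 5 vertices (illustrative choice).
U = np.triu(rng.integers(0, 2, size=(5, 5)), 1)
A = (U + U.T).astype(float)

# Columns of Q form an orthonormal basis of eigenvectors (A is symmetric).
lam, Q = np.linalg.eigh(A)
assert np.allclose(Q.T @ Q, np.eye(5))

# Any vertex function decomposes into eigenvector components and is
# recovered exactly from them.
f = rng.standard_normal(5)
coeffs = Q.T @ f
assert np.allclose(Q @ coeffs, f)
```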

## 80.6 Diagonalization

Since \(A\) is real and symmetric, it can be diagonalized orthogonally. There is an orthogonal matrix \(Q\) and a diagonal matrix \(\Lambda\) such that

$$
A=Q\Lambda Q^T.
$$

The columns of \(Q\) are orthonormal eigenvectors of \(A\). The diagonal entries of \(\Lambda\) are the corresponding eigenvalues.

If

$$
\Lambda=
\operatorname{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n),
$$

then many powers and functions of \(A\) become simple:

$$
A^k=Q\Lambda^kQ^T.
$$

This formula explains why eigenvalues control walk counts, diffusion, random walks, and many iterative processes on graphs.
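The power formula can be checked numerically; the triangle \(K_3\) is used below as an illustrative choice:

```python
import numpy as np

# Triangle K3 (illustrative choice).
A = np.ones((3, 3)) - np.eye(3)

lam, Q = np.linalg.eigh(A)          # A = Q diag(lam) Q^T
k = 5
Ak = Q @ np.diag(lam ** k) @ Q.T    # A^k via the spectral decomposition

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```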

## 80.7 Eigenvalues and Walk Growth

The entry \((A^k)_{ij}\) counts walks of length \(k\) from \(v_i\) to \(v_j\). Since

$$
A^k=Q\Lambda^kQ^T,
$$

the eigenvalues determine how powers of \(A\) grow.

Large eigenvalues dominate high powers. If \(\lambda_1\) is the largest eigenvalue in absolute value, then terms involving \(\lambda_1^k\) dominate as \(k\) becomes large, unless the corresponding eigenvector components vanish, or unless \(-\lambda_1\) is also an eigenvalue, as happens in bipartite graphs, where the two contributions oscillate.

Thus the largest eigenvalues describe large-scale walk behavior.
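Both claims can be checked on the path \(P_3\) from the earlier example: \((A^2)_{22}=2\) counts the two closed walks \(v_2 v_1 v_2\) and \(v_2 v_3 v_2\), and total walk counts grow by a factor of \(\lambda_1^2=2\) every two steps (two steps are compared because \(P_3\) is bipartite, so odd and even powers alternate):

```python
import numpy as np

# Path P3, with largest eigenvalue sqrt(2).
A = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

# (A^2)_{ij} counts walks of length 2; index 1 is the middle vertex v2.
A2 = np.linalg.matrix_power(A, 2)
assert A2[1, 1] == 2   # v2 -> v1 -> v2 and v2 -> v3 -> v2

# Total walk counts grow by lambda_1^2 = 2 over two extra steps.
t10 = np.linalg.matrix_power(A, 10).sum()
t12 = np.linalg.matrix_power(A, 12).sum()
assert np.isclose(t12 / t10, 2.0)
```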

## 80.8 The Largest Eigenvalue

For a connected graph, the adjacency matrix is nonnegative and irreducible. The Perron-Frobenius theorem implies that the largest adjacency eigenvalue has an eigenvector whose entries are strictly positive. This eigenvector is called the Perron vector.

The largest eigenvalue is often denoted

$$
\lambda_1.
$$

For a \(d\)-regular connected graph,

$$
\lambda_1=d.
$$

The all-ones vector is an eigenvector:

$$
A\mathbf{1}=d\mathbf{1}.
$$

For irregular graphs, the eigenvector for the largest eigenvalue often assigns larger values to vertices that are connected to other important vertices. This is the basis of eigenvector centrality.

## 80.9 Eigenvector Centrality

Eigenvector centrality assigns a score \(x_i\) to each vertex \(v_i\) so that a vertex is important when it is adjacent to important vertices.

The defining equation is

$$
Ax=\lambda x.
$$

At vertex \(v_i\), this says

$$
\lambda x_i=\sum_{v_j\sim v_i}x_j.
$$

Thus \(x_i\) is proportional to the sum of the scores of its neighbors.

The principal eigenvector, when positive, gives a natural centrality measure. It differs from degree centrality. Degree centrality counts neighbors. Eigenvector centrality weights neighbors by their own importance.
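A standard way to compute the principal eigenvector is power iteration. The sketch below uses a small illustrative graph with a hub vertex; starting from a uniform vector, the iteration converges to the Perron vector, and the hub receives the top score:

```python
import numpy as np

# Illustrative graph: hub v1 joined to v2, v3, v4, plus the edge v2-v3.
# The triangle v1 v2 v3 makes the graph non-bipartite, so power iteration
# converges to the Perron vector.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

x = np.ones(4)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

assert np.all(x > 0)       # Perron vector: strictly positive entries
assert np.argmax(x) == 0   # the hub gets the highest centrality score
```

Note that the hub outscores its neighbors even though vertices \(v_2\) and \(v_3\) also have degree \(2\): neighbors are weighted by their own importance, not merely counted.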

## 80.10 Regular Graphs

Let \(G\) be \(d\)-regular. Then every row of \(A\) sums to \(d\). Hence

$$
A\mathbf{1}=d\mathbf{1}.
$$

So \(d\) is an eigenvalue.

If \(G\) is connected, then \(d\) is the largest eigenvalue and has multiplicity \(1\).

The other eigenvalues measure structure beyond uniform degree. For example, in a good expander graph, all nontrivial eigenvalues are small in absolute value compared with \(d\). This means that the graph behaves, in many ways, like a random regular graph.

For regular graphs, spectral information from the adjacency matrix is closely related to spectral information from the Laplacian. For nonregular graphs, the choice of matrix can make a substantial difference.
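The facts about regular graphs above are easy to verify on a concrete example; the cycle \(C_5\), which is connected and \(2\)-regular, is an illustrative choice:

```python
import numpy as np

# Cycle C5: connected and 2-regular.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

lam = np.linalg.eigvalsh(A)                         # ascending order
assert np.isclose(lam[-1], 2.0)                     # largest eigenvalue is d = 2
assert np.sum(np.isclose(lam, 2.0)) == 1            # with multiplicity 1
assert np.allclose(A @ np.ones(n), 2 * np.ones(n))  # A·1 = d·1
```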

## 80.11 Complete Graphs

For the complete graph \(K_n\), the adjacency matrix is

$$
A=J-I,
$$

where \(J\) is the all-ones matrix.

The vector \(\mathbf{1}\) satisfies

$$
A\mathbf{1}=(n-1)\mathbf{1}.
$$

Therefore \(n-1\) is an eigenvalue.

If \(x\) is orthogonal to \(\mathbf{1}\), then

$$
Jx=0.
$$

Hence

$$
Ax=(J-I)x=-x.
$$

Thus the eigenvalues of \(K_n\) are

$$
n-1,\quad -1,\quad -1,\quad \ldots,\quad -1.
$$

The eigenspace for \(-1\) consists of all vectors whose entries sum to zero.
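The spectrum of \(K_n\) can be confirmed numerically for any particular \(n\); here \(n=6\) is used as an illustrative choice:

```python
import numpy as np

n = 6
A = np.ones((n, n)) - np.eye(n)     # adjacency matrix of K_n, A = J - I
lam = np.linalg.eigvalsh(A)         # ascending order

# Spectrum: -1 with multiplicity n-1, and n-1 with multiplicity 1.
expected = np.array([-1.0] * (n - 1) + [float(n - 1)])
assert np.allclose(lam, expected)
```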

## 80.12 Empty Graphs

The empty graph on \(n\) vertices has adjacency matrix

$$
A=0.
$$

Therefore

$$
Ax=0
$$

for every vector \(x\). Every nonzero vector is an eigenvector with eigenvalue \(0\).

The only eigenvalue is

$$
0,
$$

with multiplicity \(n\).

This reflects the absence of adjacency structure.

## 80.13 Bipartite Graphs

For a bipartite graph with parts \(X\) and \(Y\), the adjacency matrix can be written in block form:

$$
A=
\begin{bmatrix}
0&B\\
B^T&0
\end{bmatrix}.
$$

Suppose

$$
A
\begin{bmatrix}
u\\
v
\end{bmatrix} =
\lambda
\begin{bmatrix}
u\\
v
\end{bmatrix}.
$$

Then

$$
Bv=\lambda u,
\qquad
B^Tu=\lambda v.
$$

Changing the sign of the second block gives

$$
A
\begin{bmatrix}
u\\
-v
\end{bmatrix} =
-\lambda
\begin{bmatrix}
u\\
-v
\end{bmatrix}.
$$

Therefore if \(\lambda\) is an eigenvalue, then \(-\lambda\) is also an eigenvalue with the same multiplicity.

Thus the adjacency spectrum of a bipartite graph is symmetric about zero.
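This symmetry can be checked on a concrete bipartite graph; the complete bipartite graph \(K_{2,3}\) is used below as an illustrative choice:

```python
import numpy as np

# Complete bipartite graph K_{2,3} in the block form [[0, B], [B^T, 0]].
B = np.ones((2, 3))
A = np.block([
    [np.zeros((2, 2)), B],
    [B.T, np.zeros((3, 3))],
])

lam = np.sort(np.linalg.eigvalsh(A))
# The spectrum is symmetric about zero: lam and -lam agree as multisets.
assert np.allclose(lam, -lam[::-1])
```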

## 80.14 Laplacian Eigenvectors

The Laplacian matrix is

$$
L=\Delta-A,
$$

where \(\Delta\) is the diagonal matrix of vertex degrees.

A Laplacian eigenvector satisfies

$$
Lx=\mu x.
$$

Since

$$
(Lx)_i=\deg(v_i)x_i-\sum_{v_j\sim v_i}x_j,
$$

the equation at vertex \(v_i\) is

$$
\deg(v_i)x_i-\sum_{v_j\sim v_i}x_j=\mu x_i.
$$

Equivalently,

$$
\sum_{v_j\sim v_i}(x_i-x_j)=\mu x_i.
$$

Thus a Laplacian eigenvector describes how a vertex value differs from neighboring values.

## 80.15 The Zero Laplacian Eigenvalue

The all-ones vector satisfies

$$
L\mathbf{1}=0.
$$

Thus \(0\) is always a Laplacian eigenvalue.

More generally, if a graph has \(c\) connected components, then the eigenspace for eigenvalue \(0\) has dimension \(c\); its vectors are exactly the vectors that are constant on each connected component.

In particular, a graph is connected if and only if the zero eigenvalue of \(L\) has multiplicity \(1\).
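This count is easy to verify. The graph below, an illustrative choice, is the disjoint union of a triangle and a single edge, so the Laplacian eigenvalue \(0\) should appear with multiplicity \(2\):

```python
import numpy as np

# Disjoint union of a triangle {v1,v2,v3} and an edge {v4,v5}: 2 components.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A                 # L = Delta - A
mu = np.linalg.eigvalsh(L)
zero_mult = int(np.sum(np.isclose(mu, 0.0)))   # multiplicity of eigenvalue 0
assert zero_mult == 2
```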

## 80.16 The Fiedler Vector

For a connected graph, the Laplacian eigenvalues are ordered as

$$
0=\mu_1<\mu_2\leq \cdots\leq \mu_n.
$$

The eigenvalue \(\mu_2\) is the algebraic connectivity. An eigenvector for \(\mu_2\) is called a Fiedler vector.

The Fiedler vector is important because it often reveals a natural partition of the graph. Vertices with positive entries may be placed on one side, and vertices with negative entries on the other. This idea is used in spectral partitioning and spectral clustering.

The value of \(\mu_2\) also measures how strongly connected the graph is. Small \(\mu_2\) indicates a bottleneck. Larger \(\mu_2\) indicates stronger connectivity. This connection is a standard theme in spectral graph theory.
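A minimal sketch of spectral partitioning with the Fiedler vector, on two triangles joined by a bridge edge (an illustrative bottleneck graph): the signs of the Fiedler vector recover the two triangles, and \(\mu_2\) is small because of the bridge.

```python
import numpy as np

# Two triangles {v1,v2,v3} and {v4,v5,v6} joined by the bridge edge v3-v4.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A
mu, vecs = np.linalg.eigh(L)

assert mu[1] < 0.5                # small algebraic connectivity: a bottleneck
fiedler = vecs[:, 1]              # eigenvector for mu_2

# Splitting by sign recovers the two triangles (up to the overall sign).
side = set(np.flatnonzero(fiedler > 0))
assert side in ({0, 1, 2}, {3, 4, 5})
```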

## 80.17 Normalized Eigenvectors

For graphs with uneven degrees, normalized matrices are often more appropriate.

The symmetric normalized Laplacian is

$$
L_{\mathrm{sym}} =
I-\Delta^{-1/2}A\Delta^{-1/2}.
$$

The random-walk matrix is

$$
P=\Delta^{-1}A.
$$

Eigenvectors of these matrices account for degree variation. They are common in random walks, spectral clustering, and graph-based data analysis.

For regular graphs, normalization changes little because all degrees are equal. For irregular graphs, normalized and unnormalized eigenvectors may give substantially different information.

## 80.18 Sign Patterns

Eigenvectors are often useful because of their sign patterns.

For example, a Laplacian eigenvector with both positive and negative entries can suggest a cut. Vertices with similar signs and similar magnitudes tend to be placed near one another in the graph.

The sign pattern alone does not determine the graph structure. However, it often reveals coarse geometry: clusters, bottlenecks, central regions, and peripheral regions.

This is one reason eigenvectors are used for visualization and partitioning.

## 80.19 Spectral Embeddings

Several eigenvectors can be used together to embed a graph into Euclidean space.

Choose eigenvectors

$$
x^{(1)},x^{(2)},\ldots,x^{(k)}.
$$

Map each vertex \(v_i\) to the point

$$
\bigl(x^{(1)}_i,x^{(2)}_i,\ldots,x^{(k)}_i\bigr)
\in \mathbb{R}^k.
$$

This construction is called a spectral embedding.

Nearby points in the embedding often correspond to vertices with similar graph positions. Spectral embeddings are used for clustering, drawing graphs, dimensionality reduction, and network analysis.
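A minimal spectral embedding of the path \(P_6\) into \(\mathbb{R}^2\), using the second and third Laplacian eigenvectors (an illustrative choice): adjacent vertices land near each other, while the two endpoints of the path land far apart.

```python
import numpy as np

# Path P6.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

L = np.diag(A.sum(axis=1)) - A
mu, vecs = np.linalg.eigh(L)

# Map vertex v_i to its entries in the 2nd and 3rd Laplacian eigenvectors.
points = vecs[:, 1:3]

def d(i, j):
    return np.linalg.norm(points[i] - points[j])

# Adjacent vertices land close together; the endpoints land far apart.
assert d(0, 1) < d(0, n - 1)
```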

## 80.20 Eigenvalues as Frequencies

For Laplacian eigenvectors, eigenvalues can be interpreted as graph frequencies.

The quadratic form

$$
x^TLx=\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2
$$

measures variation across edges.

If \(Lx=\mu x\) and \(\|x\|=1\), then

$$
x^TLx=\mu.
$$

Thus small Laplacian eigenvalues correspond to smooth eigenvectors that vary slowly across edges. Large eigenvalues correspond to oscillatory eigenvectors that change sharply across edges.

This is analogous to Fourier analysis, where low frequencies vary slowly and high frequencies oscillate rapidly.
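The identity \(x^TLx=\mu\) for unit eigenvectors can be verified edge by edge; the cycle \(C_4\), whose Laplacian spectrum is \(0, 2, 2, 4\), is used below as an illustrative choice:

```python
import numpy as np

# Cycle C4, with Laplacian eigenvalues 0, 2, 2, 4.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A
mu, vecs = np.linalg.eigh(L)

for k in range(4):
    x = vecs[:, k]                     # unit eigenvector for mu_k
    # Sum of (x_i - x_j)^2 over edges; each unordered edge counted once.
    form = 0.5 * sum(A[i, j] * (x[i] - x[j]) ** 2
                     for i in range(4) for j in range(4))
    assert np.isclose(form, mu[k])
```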

## 80.21 Summary

Eigenvalues and eigenvectors translate graph structure into linear algebra.

For the adjacency matrix,

$$
Ax=\lambda x,
$$

an eigenvector is a vertex function whose neighbor sum is proportional to its value.

For the Laplacian matrix,

$$
Lx=\mu x,
$$

an eigenvector is a vertex function whose difference from neighboring values is proportional to its value.

| Object | Meaning |
|---|---|
| Adjacency eigenvalue | Scaling factor for neighbor-sum patterns |
| Adjacency eigenvector | Vertex function balanced by neighboring values |
| Largest adjacency eigenvalue | Measures dominant walk growth and centrality |
| Perron vector | Positive eigenvector for a connected graph |
| Laplacian eigenvalue \(0\) | Constant functions on connected components |
| Fiedler vector | Eigenvector used to detect bottlenecks and partitions |
| Large Laplacian eigenvalues | Highly oscillatory vertex functions |
| Spectral embedding | Coordinates built from graph eigenvectors |

Eigenvalues give numerical invariants. Eigenvectors give structured functions on vertices. Together they provide a coordinate system for studying graphs through matrices.
