# Chapter 78. Laplacian Matrices

The Laplacian matrix is a matrix representation of a graph built from degrees and adjacency. It is one of the central matrices of algebraic graph theory because it encodes connectivity, cuts, flows, spanning trees, and spectral structure.

Let

$$
G=(V,E)
$$

be a finite simple undirected graph with vertex set

$$
V=\{v_1,v_2,\ldots,v_n\}.
$$

Let \(A\) be the adjacency matrix of \(G\), and let \(\Delta\) be the degree matrix, defined by

$$
\Delta_{ii}=\deg(v_i),
\qquad
\Delta_{ij}=0 \quad \text{for } i\neq j.
$$

The Laplacian matrix of \(G\) is

$$
L=\Delta-A.
$$

Equivalently,

$$
L_{ij} =
\begin{cases}
\deg(v_i), & \text{if } i=j, \\
-1, & \text{if } i\neq j \text{ and } v_i \text{ is adjacent to } v_j, \\
0, & \text{otherwise.}
\end{cases}
$$

This is the standard unnormalized graph Laplacian, also called the Kirchhoff matrix or the discrete Laplacian. Its definition as degree matrix minus adjacency matrix is the starting point for all of the properties developed below.
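As a quick sketch, the definition \(L=\Delta-A\) translates directly into code. This assumes NumPy is available; `laplacian` is a hypothetical helper name, not a library function:

```python
import numpy as np

def laplacian(A):
    """Return L = Delta - A for a simple undirected graph.

    A is the symmetric, zero-diagonal adjacency matrix; its row
    sums are exactly the vertex degrees.
    """
    A = np.asarray(A, dtype=float)
    degrees = A.sum(axis=1)        # deg(v_i) = number of neighbors
    return np.diag(degrees) - A    # Delta - A

# Triangle K_3: every vertex has degree 2.
L = laplacian([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
```

The diagonal of the result holds the degrees, and every off-diagonal entry corresponding to an edge is \(-1\), matching the case description above.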

## 78.1 The Degree Matrix

The degree matrix is diagonal. Its purpose is to record the degree of each vertex:

$$
\Delta =
\begin{bmatrix}
\deg(v_1) & 0 & \cdots & 0 \\
0 & \deg(v_2) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \deg(v_n)
\end{bmatrix}.
$$

The adjacency matrix records which vertices are joined. The degree matrix records how many edges meet each vertex. The Laplacian combines these two kinds of information.

For a simple graph, the diagonal of \(A\) is zero. Therefore the diagonal of \(L\) is exactly the degree sequence of the graph.

## 78.2 Example

Consider the graph with vertices

$$
V=\{v_1,v_2,v_3,v_4\}
$$

and edges

$$
E=
\{\{v_1,v_2\},\{v_1,v_3\},\{v_2,v_3\},\{v_3,v_4\}\}.
$$

The adjacency matrix is

$$
A=
\begin{bmatrix}
0 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 \\
0 & 0 & 1 & 0
\end{bmatrix}.
$$

The degrees are

$$
\deg(v_1)=2,\quad
\deg(v_2)=2,\quad
\deg(v_3)=3,\quad
\deg(v_4)=1.
$$

Hence

$$
\Delta=
\begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
$$

Thus

$$
L=\Delta-A =
\begin{bmatrix}
2 & -1 & -1 & 0 \\
-1 & 2 & -1 & 0 \\
-1 & -1 & 3 & -1 \\
0 & 0 & -1 & 1
\end{bmatrix}.
$$

The diagonal entries are the vertex degrees. The off-diagonal entries are \(-1\) where edges exist and \(0\) where they do not.
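The computation above can be reproduced numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

# Adjacency matrix of the example graph on {v1, v2, v3, v4}.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

Delta = np.diag(A.sum(axis=1))  # degrees 2, 2, 3, 1 on the diagonal
L = Delta - A
```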

## 78.3 Row Sums

Every row sum of the Laplacian matrix is zero.

For row \(i\), the diagonal entry is \(\deg(v_i)\). The row also contains one entry \(-1\) for each of the \(\deg(v_i)\) neighbors of \(v_i\). Therefore

$$
\sum_{j=1}^{n} L_{ij} =
\deg(v_i)-\deg(v_i) =
0.
$$

Since the graph is undirected, \(L\) is symmetric, so every column sum is also zero.

This implies

$$
L\mathbf{1}=0,
$$

where

$$
\mathbf{1}=
\begin{bmatrix}
1\\
1\\
\vdots\\
1
\end{bmatrix}.
$$

Thus \(0\) is always an eigenvalue of the Laplacian matrix. The all-ones vector is an eigenvector for this eigenvalue. The zero row-sum property is one of the standard elementary properties of the graph Laplacian.
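The identity \(L\mathbf{1}=0\) is easy to verify numerically for the example Laplacian from 78.2 (NumPy assumed):

```python
import numpy as np

# Laplacian of the example graph from 78.2.
L = np.array([[ 2, -1, -1,  0],
              [-1,  2, -1,  0],
              [-1, -1,  3, -1],
              [ 0,  0, -1,  1]], dtype=float)

ones = np.ones(4)
residual = L @ ones   # should be the zero vector
```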

## 78.4 Symmetry

If \(G\) is undirected, then its adjacency matrix is symmetric:

$$
A^T=A.
$$

The degree matrix is diagonal, hence symmetric. Therefore

$$
L^T=(\Delta-A)^T=\Delta^T-A^T=\Delta-A=L.
$$

So the Laplacian matrix of an undirected graph is symmetric.

Because \(L\) is real and symmetric, it has real eigenvalues and an orthonormal basis of eigenvectors. This fact is the starting point of spectral graph theory.

## 78.5 The Quadratic Form

The Laplacian matrix has an important quadratic form.

For any vector

$$
x=
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{bmatrix},
$$

one has

$$
x^T L x =
\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.
$$

This identity is fundamental. It says that the Laplacian measures variation across edges.

If adjacent vertices have similar values of \(x\), then \(x^TLx\) is small. If adjacent vertices have very different values, then \(x^TLx\) is large.

To derive the formula, write

$$
x^T L x = x^T \Delta x - x^T A x.
$$

The first term is

$$
x^T \Delta x = \sum_{i=1}^{n}\deg(v_i)x_i^2.
$$

The second term is

$$
x^T A x = \sum_{i,j=1}^{n}a_{ij}x_ix_j.
$$

For an undirected graph, each edge \(\{v_i,v_j\}\) contributes both \(x_ix_j\) and \(x_jx_i\). Thus

$$
x^T A x = 2\sum_{\{v_i,v_j\}\in E}x_ix_j.
$$

Also,

$$
\sum_{i=1}^{n}\deg(v_i)x_i^2 =
\sum_{\{v_i,v_j\}\in E}(x_i^2+x_j^2).
$$

Therefore

$$
x^TLx =
\sum_{\{v_i,v_j\}\in E}(x_i^2-2x_ix_j+x_j^2) =
\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.
$$

This identity immediately implies that \(L\) is positive semidefinite.
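The identity can be checked numerically on the example graph from 78.2; a small sketch, assuming NumPy:

```python
import numpy as np

# Edges of the example graph from 78.2, with 0-indexed vertices.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Assemble L edge by edge: each edge {i, j} adds 1 to both diagonal
# entries and -1 to both off-diagonal entries.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

x = np.array([0.5, -1.0, 2.0, 3.0])
lhs = x @ L @ x                               # quadratic form
rhs = sum((x[i] - x[j]) ** 2 for i, j in edges)  # sum over edges
```

For any choice of \(x\), the two quantities agree, which is exactly the derived identity.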

## 78.6 Positive Semidefiniteness

A symmetric matrix \(M\) is positive semidefinite if

$$
x^T M x \geq 0
$$

for every vector \(x\).

The Laplacian matrix is positive semidefinite because

$$
x^T L x =
\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2
\geq 0.
$$

Every term is a square. Therefore no vector can give a negative value.

It follows that every eigenvalue of \(L\) is nonnegative. If the eigenvalues are written in increasing order,

$$
0=\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n,
$$

then all \(\lambda_i\) are at least zero. The nonnegativity of Laplacian eigenvalues is a standard property of the graph Laplacian.

## 78.7 Null Space

The null space of \(L\) consists of all vectors \(x\) satisfying

$$
Lx=0.
$$

Since \(L\) is positive semidefinite,

$$
Lx=0
$$

is equivalent to

$$
x^TLx=0.
$$

Using the quadratic form,

$$
x^TLx =
\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.
$$

This sum is zero exactly when

$$
x_i=x_j
$$

for every edge \(\{v_i,v_j\}\).

Therefore \(x\) is constant on each connected component of the graph.

If \(G\) is connected, then every vertex can be reached from every other vertex by a path. Along that path, equality across edges forces all entries of \(x\) to be equal. Hence

$$
\ker L = \operatorname{span}\{\mathbf{1}\}
$$

when \(G\) is connected.

If \(G\) has \(c\) connected components, then \(\ker L\) has dimension \(c\). The number of connected components equals the multiplicity of the eigenvalue \(0\). This is one of the central spectral facts about the Laplacian matrix.
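A small numerical check of this fact, assuming NumPy: for two disjoint triangles, \(0\) should appear as an eigenvalue with multiplicity \(2\).

```python
import numpy as np

# Two disjoint triangles on vertices {0,1,2} and {3,4,5}: c = 2.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A
eigenvalues = np.linalg.eigvalsh(L)           # real, sorted ascending
num_zero = int(np.sum(np.isclose(eigenvalues, 0.0)))
```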

## 78.8 Connected Components

The previous result gives an algebraic test for connectivity.

A graph \(G\) is connected if and only if the Laplacian matrix has nullity \(1\):

$$
G \text{ is connected}
\quad \Longleftrightarrow \quad
\dim \ker L = 1.
$$

Equivalently, \(0\) is a simple eigenvalue of \(L\).

If \(G\) has \(c\) connected components, then

$$
\dim \ker L = c.
$$

Thus the Laplacian matrix does not merely store local adjacency. Its spectrum detects a global property: the number of connected components.

## 78.9 Algebraic Connectivity

The second smallest eigenvalue of \(L\) is called the algebraic connectivity of \(G\). It is usually denoted

$$
\lambda_2.
$$

If

$$
0=\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n,
$$

then \(G\) is connected precisely when

$$
\lambda_2>0.
$$

The value of \(\lambda_2\) measures how strongly connected the graph is. Small values indicate a bottleneck: a sparse cut separating large parts of the graph. Larger values indicate stronger global connectivity.

An eigenvector corresponding to \(\lambda_2\) is called a Fiedler vector. It is used in graph partitioning and spectral clustering because its entries often separate the graph along a sparse cut. The connection between the second Laplacian eigenvalue, cuts, and the Fiedler vector is a standard theme in spectral graph theory.
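As a hedged illustration of the partitioning idea (NumPy assumed): for two triangles joined by a single bridge edge, the sign pattern of the Fiedler vector should separate the two triangles.

```python
import numpy as np

# Triangles {0,1,2} and {3,4,5} joined by the bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

vals, vecs = np.linalg.eigh(L)   # eigenvalues in increasing order
fiedler = vecs[:, 1]             # eigenvector for lambda_2
side = fiedler > 0               # sign pattern suggests a 2-way split
```

Here the entries of the Fiedler vector take one sign on each triangle, so thresholding at zero recovers the sparse cut formed by the bridge.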

## 78.10 Incidence Matrix Formula

Let \(D\) be an oriented incidence matrix of an undirected graph \(G\). Then

$$
L=DD^T.
$$

This identity connects the Laplacian with flows and edge differences.

Each column of \(D\) corresponds to one edge. If the edge is oriented from \(v_i\) to \(v_j\), then the column contains \(-1\) in row \(i\), \(1\) in row \(j\), and zeros elsewhere.

The product \(DD^T\) gives degree entries on the diagonal and \(-1\) in positions corresponding to adjacent vertices. Therefore it equals the Laplacian.

The identity is independent of the chosen orientation. Reversing an edge multiplies one column of \(D\) by \(-1\), and this does not change \(DD^T\). The formula \(L=DD^T\) is a standard equivalent construction of the graph Laplacian.
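The identity \(L=DD^T\) can be verified on the example graph from 78.2; a sketch assuming NumPy:

```python
import numpy as np

# Oriented incidence matrix of the example graph from 78.2.
# Column k corresponds to edge k, with -1 at the tail and +1 at the head.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
D = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    D[i, k] = -1.0   # edge oriented from v_i ...
    D[j, k] = 1.0    # ... to v_j

L = D @ D.T          # should reproduce Delta - A
```

Reversing the orientation of any edge flips the sign of one column of `D`, which leaves `D @ D.T` unchanged, as stated above.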

## 78.11 Edge Differences

The incidence matrix also explains the quadratic form.

For a vertex vector \(x\), the vector

$$
D^T x
$$

has one entry for each edge. If an edge is oriented from \(v_i\) to \(v_j\), then the corresponding entry is

$$
x_j-x_i.
$$

Therefore

$$
x^TLx =
x^TDD^Tx =
(D^Tx)^T(D^Tx) =
\|D^Tx\|^2.
$$

This equals the sum of squared edge differences:

$$
\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.
$$

Thus the Laplacian measures how much a function on vertices changes across edges.

## 78.12 Cuts

The Laplacian matrix is closely related to cuts.

Let \(S\subseteq V\). Define the indicator vector

$$
\chi_S \in \mathbb{R}^n
$$

by

$$
(\chi_S)_i=
\begin{cases}
1, & \text{if } v_i\in S,\\
0, & \text{if } v_i\notin S.
\end{cases}
$$

Then

$$
\chi_S^T L \chi_S
$$

equals the number of edges with one endpoint in \(S\) and the other endpoint in \(V\setminus S\).

Indeed, for an edge \(\{v_i,v_j\}\),

$$
((\chi_S)_i-(\chi_S)_j)^2
$$

is \(1\) exactly when the edge crosses the cut, and \(0\) otherwise.

Thus the Laplacian converts a cut-counting problem into a quadratic form.
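A short numerical check of the cut formula on the example graph from 78.2, assuming NumPy:

```python
import numpy as np

# Example graph from 78.2; take S = {v1, v2} (indices 0 and 1).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

chi = np.array([1.0, 1.0, 0.0, 0.0])  # indicator vector of S
cut = chi @ L @ chi
# Edges crossing the cut are {v1,v3} and {v2,v3}, so the value is 2.
```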

## 78.13 Weighted Graphs

For a weighted undirected graph, let \(w_{ij}\geq 0\) be the weight of the edge between \(v_i\) and \(v_j\). The weighted adjacency matrix has entries

$$
a_{ij}=w_{ij}.
$$

The weighted degree of \(v_i\) is

$$
d_i=\sum_{j=1}^{n}w_{ij}.
$$

The weighted Laplacian is still

$$
L=\Delta-A,
$$

where

$$
\Delta_{ii}=d_i.
$$

Thus

$$
L_{ij} =
\begin{cases}
d_i, & \text{if } i=j,\\
-w_{ij}, & \text{if } i\neq j \text{ and } v_i \text{ is adjacent to } v_j,\\
0, & \text{otherwise.}
\end{cases}
$$

The quadratic form becomes

$$
x^T L x =
\sum_{\{v_i,v_j\}\in E} w_{ij}(x_i-x_j)^2.
$$

Weights therefore scale the penalty for differences across edges. The weighted definition \(L=\Delta-A\) is the standard extension from simple graphs to weighted graphs.
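The weighted quadratic form can be checked on a small example; the weights below are arbitrary illustrative values, and NumPy is assumed:

```python
import numpy as np

# A weighted triangle with hypothetical weights
# w_12 = 2.0, w_13 = 0.5, w_23 = 1.0.
W = np.array([[0.0, 2.0, 0.5],
              [2.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])

L = np.diag(W.sum(axis=1)) - W   # weighted Laplacian

x = np.array([1.0, 0.0, -1.0])
lhs = x @ L @ x
# Sum of w_ij (x_i - x_j)^2 over the three edges:
rhs = 2.0 * (1 - 0) ** 2 + 0.5 * (1 - (-1)) ** 2 + 1.0 * (0 - (-1)) ** 2
```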

## 78.14 Normalized Laplacians

The unnormalized Laplacian uses raw degrees. In some applications, especially when degrees vary greatly, normalized versions are useful.

The symmetric normalized Laplacian is

$$
L_{\mathrm{sym}} =
I-\Delta^{-1/2}A\Delta^{-1/2},
$$

where isolated vertices require a convention because \(\Delta^{-1/2}\) is not defined for degree zero.

The random-walk normalized Laplacian is

$$
L_{\mathrm{rw}} =
I-\Delta^{-1}A.
$$

These matrices adjust adjacency by degree. They are common in spectral clustering, random walks, and graph-based data analysis. Normalized graph Laplacians are standard variants of the Laplacian matrix.

## 78.15 Random Walk Interpretation

Let \(G\) be a graph with no isolated vertices. Define

$$
P=\Delta^{-1}A.
$$

The matrix \(P\) is the transition matrix of the simple random walk on \(G\). From vertex \(v_i\), the walk moves uniformly to one of its neighbors.

Then

$$
L_{\mathrm{rw}}=I-P.
$$

Thus the random-walk Laplacian measures the difference between a value at a vertex and the average value at its neighbors.

For a function \(x\) on vertices,

$$
(L_{\mathrm{rw}}x)_i =
x_i-\frac{1}{\deg(v_i)}\sum_{v_j\sim v_i}x_j.
$$

This is a discrete analogue of a Laplace operator. It compares a value with a local average.
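The local-average formula can be checked numerically on the example graph from 78.2 (NumPy assumed):

```python
import numpy as np

# Example graph from 78.2; it has no isolated vertices.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]              # transition matrix Delta^{-1} A
L_rw = np.eye(4) - P

x = np.array([1.0, 2.0, 3.0, 4.0])
lhs = L_rw @ x
# Entry i should be x_i minus the average of x over the neighbors of v_i.
rhs = x - np.array([x[A[i] > 0].mean() for i in range(4)])
```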

## 78.16 Complete Graphs

For the complete graph \(K_n\), every vertex has degree \(n-1\). Its adjacency matrix is

$$
A=J-I,
$$

where \(J\) is the all-ones matrix. The degree matrix is

$$
\Delta=(n-1)I.
$$

Therefore

$$
L=\Delta-A=(n-1)I-(J-I)=nI-J.
$$

Thus

$$
L(K_n)=nI-J.
$$

The vector \(\mathbf{1}\) is in the null space. Every vector orthogonal to \(\mathbf{1}\) is an eigenvector with eigenvalue \(n\). Hence the spectrum is

$$
0,\ n,\ n,\ldots,n.
$$

## 78.17 Path Graphs

For the path graph

$$
P_n: v_1-v_2-\cdots-v_n,
$$

the Laplacian matrix is tridiagonal:

$$
L(P_n)=
\begin{bmatrix}
1 & -1 & 0 & \cdots & 0 \\
-1 & 2 & -1 & \cdots & 0 \\
0 & -1 & 2 & \ddots & 0 \\
\vdots & \vdots & \ddots & \ddots & -1 \\
0 & 0 & 0 & -1 & 1
\end{bmatrix}.
$$

The endpoints have degree \(1\). The interior vertices have degree \(2\). This matrix is a discrete second-difference operator on a finite line.

## 78.18 Cycle Graphs

For the cycle graph \(C_n\), every vertex has degree \(2\), and each vertex is adjacent to two neighbors. Its Laplacian has the form

$$
L(C_n)=
\begin{bmatrix}
2 & -1 & 0 & \cdots & -1 \\
-1 & 2 & -1 & \cdots & 0 \\
0 & -1 & 2 & \ddots & 0 \\
\vdots & \vdots & \ddots & \ddots & -1 \\
-1 & 0 & 0 & -1 & 2
\end{bmatrix}.
$$

This is a circulant matrix. It represents a discrete Laplace operator on a ring.

## 78.19 Trees

If \(T\) is a tree with \(n\) vertices, then \(T\) has \(n-1\) edges and is connected. Therefore the Laplacian matrix of \(T\) has nullity \(1\) and rank

$$
n-1.
$$

The absence of cycles also appears through the incidence matrix. If \(D\) is an oriented incidence matrix of a tree, then \(D\) has full column rank. Since

$$
L=DD^T,
$$

the Laplacian has rank \(n-1\).

## 78.20 Matrix-Tree Theorem

The Laplacian matrix counts spanning trees through the matrix-tree theorem.

Let \(G\) be a graph with Laplacian matrix \(L\). Delete any one row and the corresponding column from \(L\). The determinant of the resulting \((n-1)\times(n-1)\) matrix equals the number of spanning trees of \(G\).

This number is independent of which row and column are deleted.

The theorem is one of the central results connecting graph theory and linear algebra. It shows that a purely combinatorial quantity, the number of spanning trees, can be computed as a determinant. The use of the Laplacian in Kirchhoff's matrix-tree theorem is a standard application of graph Laplacians.
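As a check, the spanning trees of the example graph from 78.2 can be counted this way (NumPy assumed). That graph is a triangle with a pendant edge: every spanning tree must keep the pendant edge and drop exactly one of the three triangle edges, so it has exactly \(3\) spanning trees.

```python
import numpy as np

# Laplacian of the example graph from 78.2: triangle v1 v2 v3
# with pendant vertex v4 attached at v3.
L = np.array([[ 2, -1, -1,  0],
              [-1,  2, -1,  0],
              [-1, -1,  3, -1],
              [ 0,  0, -1,  1]], dtype=float)

# Delete the last row and column, then take a determinant.
minor = L[:3, :3]
num_spanning_trees = int(round(np.linalg.det(minor)))
```

Deleting any other row and its matching column gives the same count, as the theorem states.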

## 78.21 Dirichlet Energy

The quantity

$$
x^T L x
$$

is often called the Dirichlet energy of the function \(x\) on the graph.

It measures roughness. A function is smooth on a graph when adjacent vertices receive similar values. It is rough when many adjacent vertices receive very different values.

This idea is used in graph signal processing, semi-supervised learning, spectral clustering, and numerical methods on networks.

## 78.22 Summary

The Laplacian matrix of a simple undirected graph is

$$
L=\Delta-A.
$$

Its diagonal entries are vertex degrees. Its off-diagonal entries are \(-1\) for adjacent vertices and \(0\) otherwise.

The Laplacian matrix has several fundamental properties.

| Property | Statement |
|---|---|
| Symmetry | \(L^T=L\) for undirected graphs |
| Row sums | \(L\mathbf{1}=0\) |
| Positive semidefinite | \(x^TLx\geq 0\) |
| Quadratic form | \(x^TLx=\sum_{\{i,j\}\in E}(x_i-x_j)^2\) |
| Incidence form | \(L=DD^T\) |
| Connectivity | \(\dim\ker L\) equals the number of connected components |
| Algebraic connectivity | \(\lambda_2>0\) iff the graph is connected |
| Matrix-tree theorem | Cofactors of \(L\) count spanning trees |

The Laplacian is important because it turns graph structure into linear algebraic structure. It connects adjacency, degree, connectivity, cuts, flows, random walks, spanning trees, and eigenvalues in one matrix.
