
Chapter 78. Laplacian Matrices

The Laplacian matrix is a matrix representation of a graph built from degrees and adjacency. It is one of the central matrices of algebraic graph theory because it encodes connectivity, cuts, flows, spanning trees, and spectral structure.

Let

$$G=(V,E)$$

be a finite simple undirected graph with vertex set

$$V=\{v_1,v_2,\ldots,v_n\}.$$

Let $A$ be the adjacency matrix of $G$, and let $\Delta$ be the degree matrix, defined by

$$\Delta_{ii}=\deg(v_i), \qquad \Delta_{ij}=0 \quad \text{for } i\neq j.$$

The Laplacian matrix of $G$ is

$$L=\Delta-A.$$

Equivalently,

$$L_{ij} = \begin{cases} \deg(v_i), & \text{if } i=j, \\ -1, & \text{if } i\neq j \text{ and } v_i \text{ is adjacent to } v_j, \\ 0, & \text{otherwise.} \end{cases}$$

This is the standard unnormalized graph Laplacian, also called the Kirchhoff matrix or the discrete Laplacian. Its definition as the degree matrix minus the adjacency matrix is standard throughout graph theory and spectral graph theory.

78.1 The Degree Matrix

The degree matrix is diagonal. Its purpose is to record the degree of each vertex:

$$\Delta = \begin{bmatrix} \deg(v_1) & 0 & \cdots & 0 \\ 0 & \deg(v_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \deg(v_n) \end{bmatrix}.$$

The adjacency matrix records which vertices are joined. The degree matrix records how many edges meet each vertex. The Laplacian combines these two kinds of information.

For a simple graph, the diagonal of $A$ is zero. Therefore the diagonal of $L$ is exactly the degree sequence of the graph.

78.2 Example

Consider the graph with vertices

$$V=\{v_1,v_2,v_3,v_4\}$$

and edges

$$E=\{\{v_1,v_2\},\{v_1,v_3\},\{v_2,v_3\},\{v_3,v_4\}\}.$$

The adjacency matrix is

$$A=\begin{bmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$

The degrees are

$$\deg(v_1)=2,\quad \deg(v_2)=2,\quad \deg(v_3)=3,\quad \deg(v_4)=1.$$

Hence

$$\Delta=\begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Thus

$$L=\Delta-A=\begin{bmatrix} 2 & -1 & -1 & 0 \\ -1 & 2 & -1 & 0 \\ -1 & -1 & 3 & -1 \\ 0 & 0 & -1 & 1 \end{bmatrix}.$$

The diagonal entries are degrees. The off-diagonal entries are negative where edges exist and zero where edges do not exist.
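This construction is easy to verify numerically. Below is a minimal sketch in Python (NumPy assumed; the variable names are illustrative) that builds $A$, $\Delta$, and $L$ for this example from the edge list:

```python
import numpy as np

# Edges of the example graph; vertices v1..v4 become indices 0..3
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix A: symmetric, zero diagonal
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Degree matrix Delta (diagonal of row sums) and Laplacian L = Delta - A
Delta = np.diag(A.sum(axis=1))
L = Delta - A
```

The resulting `L` matches the matrix computed by hand above.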

78.3 Row Sums

Every row sum of the Laplacian matrix is zero.

For row $i$, the diagonal entry is $\deg(v_i)$. The row also contains one entry $-1$ for each neighbor of $v_i$. Therefore

$$\sum_{j=1}^{n} L_{ij} = \deg(v_i)-\deg(v_i) = 0.$$

Since the graph is undirected, $L$ is symmetric, so every column sum is also zero.

This implies

$$L\mathbf{1}=0,$$

where

$$\mathbf{1}=\begin{bmatrix} 1\\ 1\\ \vdots\\ 1 \end{bmatrix}.$$

Thus $0$ is always an eigenvalue of the Laplacian matrix. The all-ones vector is an eigenvector for this eigenvalue. The zero row-sum property is one of the standard elementary properties of the graph Laplacian.
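The zero row-sum property can be checked directly. A small sketch (Python with NumPy; names illustrative), using the example graph from above:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

ones = np.ones(n, dtype=int)
row_sums = L @ ones   # L 1, i.e. the vector of row sums
col_sums = ones @ L   # column sums; also zero, since L is symmetric
```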

78.4 Symmetry

If $G$ is undirected, then its adjacency matrix is symmetric:

$$A^T=A.$$

The degree matrix is diagonal, hence symmetric. Therefore

$$L^T=(\Delta-A)^T=\Delta^T-A^T=\Delta-A=L.$$

So the Laplacian matrix of an undirected graph is symmetric.

Because $L$ is real and symmetric, it has real eigenvalues and an orthonormal basis of eigenvectors. This fact is the starting point of spectral graph theory.

78.5 The Quadratic Form

The Laplacian matrix has an important quadratic form.

For any vector

$$x=\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix},$$

one has

$$x^T L x = \sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.$$

This identity is fundamental. It says that the Laplacian measures variation across edges.

If adjacent vertices have similar values of $x$, then $x^TLx$ is small. If adjacent vertices have very different values, then $x^TLx$ is large.

To derive the formula, write

$$x^T L x = x^T \Delta x - x^T A x.$$

The first term is

$$x^T \Delta x = \sum_{i=1}^{n}\deg(v_i)x_i^2.$$

The second term is

$$x^T A x = \sum_{i,j=1}^{n}a_{ij}x_ix_j.$$

For an undirected graph, each edge $\{v_i,v_j\}$ contributes both $x_ix_j$ and $x_jx_i$. Thus

$$x^T A x = 2\sum_{\{v_i,v_j\}\in E}x_ix_j.$$

Also, since $\deg(v_i)$ counts the edges at $v_i$, each edge $\{v_i,v_j\}$ contributes $x_i^2$ and $x_j^2$ exactly once to the degree sum:

$$\sum_{i=1}^{n}\deg(v_i)x_i^2 = \sum_{\{v_i,v_j\}\in E}(x_i^2+x_j^2).$$

Therefore

$$x^TLx = \sum_{\{v_i,v_j\}\in E}(x_i^2-2x_ix_j+x_j^2) = \sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.$$

This identity immediately implies that $L$ is positive semidefinite.
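The identity can be confirmed on a random vector. A quick numerical check (Python with NumPy; names illustrative), using the example graph from earlier:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(0)
x = rng.normal(size=n)

quad = x @ L @ x                                     # x^T L x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges) # sum of squared edge differences
```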

78.6 Positive Semidefiniteness

A symmetric matrix $M$ is positive semidefinite if

$$x^T M x \geq 0$$

for every vector $x$.

The Laplacian matrix is positive semidefinite because

$$x^T L x = \sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2 \geq 0.$$

Every term is a square. Therefore no vector can give a negative value.

It follows that every eigenvalue of $L$ is nonnegative. If the eigenvalues are written in increasing order,

$$0=\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n,$$

then all $\lambda_i$ are at least zero. The nonnegativity of Laplacian eigenvalues is a standard property of the graph Laplacian.

78.7 Null Space

The null space of $L$ consists of all vectors $x$ satisfying

$$Lx=0.$$

Since $L$ is positive semidefinite,

$$Lx=0$$

is equivalent to

$$x^TLx=0.$$

Using the quadratic form,

$$x^TLx = \sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.$$

This sum is zero exactly when

$$x_i=x_j$$

for every edge $\{v_i,v_j\}$.

Therefore $x$ is constant on each connected component of the graph.

If $G$ is connected, then every vertex can be reached from every other vertex by a path. Along that path, equality across edges forces all entries of $x$ to be equal. Hence

$$\ker L = \operatorname{span}\{\mathbf{1}\}$$

when $G$ is connected.

If $G$ has $c$ connected components, then $\ker L$ has dimension $c$. The number of connected components equals the multiplicity of the eigenvalue $0$. This is one of the central spectral facts about the Laplacian matrix.
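The component count can be read off numerically as the multiplicity of the eigenvalue $0$. A sketch (Python with NumPy; the graph and the tolerance are illustrative):

```python
import numpy as np

# A graph with three connected components:
# a triangle {0,1,2}, an edge {3,4}, and an isolated vertex 5
edges = [(0, 1), (0, 2), (1, 2), (3, 4)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Nullity of L = number of eigenvalues equal to 0 (up to rounding)
eigvals = np.linalg.eigvalsh(L)
nullity = int(np.sum(np.abs(eigvals) < 1e-9))
```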

78.8 Connected Components

The previous result gives an algebraic test for connectivity.

A graph $G$ is connected if and only if the Laplacian matrix has nullity $1$:

$$G \text{ is connected} \quad \Longleftrightarrow \quad \dim \ker L = 1.$$

Equivalently, $0$ is a simple eigenvalue of $L$.

If $G$ has $c$ connected components, then

$$\dim \ker L = c.$$

Thus the Laplacian matrix does not merely store local adjacency. Its spectrum detects a global property: the number of connected components.

78.9 Algebraic Connectivity

The second smallest eigenvalue of $L$ is called the algebraic connectivity of $G$. It is usually denoted $\lambda_2$.

If

$$0=\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n,$$

then $G$ is connected precisely when

$$\lambda_2>0.$$

The value of $\lambda_2$ measures how strongly connected the graph is. Small values indicate the presence of a weak bottleneck. Larger values indicate stronger global connectivity.

An eigenvector corresponding to $\lambda_2$ is called a Fiedler vector. It is used in graph partitioning and spectral clustering because its entries often separate the graph along a sparse cut. The connection between the second Laplacian eigenvalue, cuts, and the Fiedler vector is a standard theme in spectral graph theory.
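For the path graph $P_4$, the algebraic connectivity has the known closed form $2-2\cos(\pi/4)=2-\sqrt{2}$, so it makes a convenient numerical check (Python with NumPy; names illustrative):

```python
import numpy as np

# Laplacian of the path graph P4: v1 - v2 - v3 - v4
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
lam2 = eigvals[1]                     # algebraic connectivity
fiedler = eigvecs[:, 1]               # a Fiedler vector
```

The Fiedler vector changes sign between the two halves of the path, which is exactly the sparse cut it suggests.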

78.10 Incidence Matrix Formula

Let $D$ be an oriented incidence matrix of an undirected graph $G$. Then

$$L=DD^T.$$

This identity connects the Laplacian with flows and edge differences.

Each column of $D$ corresponds to one edge. If the edge is oriented from $v_i$ to $v_j$, then the column contains $-1$ in row $i$, $1$ in row $j$, and zeros elsewhere.

The product $DD^T$ gives degree entries on the diagonal and $-1$ in positions corresponding to adjacent vertices. Therefore it equals the Laplacian.

The identity is independent of the chosen orientation. Reversing an edge multiplies one column of $D$ by $-1$, and this does not change $DD^T$. The formula $L=DD^T$ is a standard equivalent construction of the graph Laplacian.
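Both the identity $L=DD^T$ and its independence of orientation are easy to verify numerically (Python with NumPy; names illustrative):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Oriented incidence matrix: one column per edge, -1 at the tail, +1 at the head
D = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    D[i, k] = -1.0
    D[j, k] = 1.0

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Reversing an edge flips the sign of one column of D; D D^T is unchanged
D_flipped = D.copy()
D_flipped[:, 0] *= -1
```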

78.11 Edge Differences

The incidence matrix also explains the quadratic form.

For a vertex vector $x$, the vector

$$D^T x$$

has one entry for each edge. If an edge is oriented from $v_i$ to $v_j$, then the corresponding entry is

$$x_j-x_i.$$

Therefore

$$x^TLx = x^TDD^Tx = (D^Tx)^T(D^Tx) = \|D^Tx\|^2.$$

This equals the sum of squared edge differences:

$$\sum_{\{v_i,v_j\}\in E}(x_i-x_j)^2.$$

Thus the Laplacian measures how much a function on vertices changes across edges.

78.12 Cuts

The Laplacian matrix is closely related to cuts.

Let $S\subseteq V$. Define the indicator vector $\chi_S \in \mathbb{R}^n$ by

$$(\chi_S)_i=\begin{cases} 1, & \text{if } v_i\in S,\\ 0, & \text{if } v_i\notin S. \end{cases}$$

Then

$$\chi_S^T L \chi_S$$

equals the number of edges with one endpoint in $S$ and the other endpoint in $V\setminus S$.

Indeed, for an edge $\{v_i,v_j\}$,

$$((\chi_S)_i-(\chi_S)_j)^2$$

is $1$ exactly when the edge crosses the cut, and $0$ otherwise.

Thus the Laplacian converts a cut-counting problem into a quadratic form.
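As a concrete check, take $S=\{v_1,v_2\}$ in the example graph from earlier; exactly the edges $\{v_1,v_3\}$ and $\{v_2,v_3\}$ cross the cut (a sketch in Python with NumPy; names illustrative):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Indicator vector of S = {v1, v2}
chi = np.array([1, 1, 0, 0])
cut = chi @ L @ chi   # number of edges between S and its complement
```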

78.13 Weighted Graphs

For a weighted undirected graph, let $w_{ij}\geq 0$ be the weight of the edge between $v_i$ and $v_j$. The weighted adjacency matrix has entries

$$a_{ij}=w_{ij}.$$

The weighted degree of $v_i$ is

$$d_i=\sum_{j=1}^{n}w_{ij}.$$

The weighted Laplacian is still

$$L=\Delta-A,$$

where

$$\Delta_{ii}=d_i.$$

Thus

$$L_{ij} = \begin{cases} d_i, & \text{if } i=j,\\ -w_{ij}, & \text{if } i\neq j \text{ and } v_i \text{ is adjacent to } v_j,\\ 0, & \text{otherwise.} \end{cases}$$

The quadratic form becomes

$$x^T L x = \sum_{\{v_i,v_j\}\in E} w_{ij}(x_i-x_j)^2.$$

Weights therefore scale the penalty for differences across edges. The weighted definition $L=\Delta-A$ is the standard extension from simple graphs to weighted graphs.
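The weighted quadratic form can be checked the same way as the unweighted one (Python with NumPy; the edge weights here are purely illustrative):

```python
import numpy as np

# Weighted edges (i, j, w_ij); weights chosen arbitrarily for illustration
wedges = [(0, 1, 2.0), (0, 2, 0.5), (1, 2, 1.0), (2, 3, 3.0)]
n = 4

A = np.zeros((n, n))
for i, j, w in wedges:
    A[i, j] = A[j, i] = w
L = np.diag(A.sum(axis=1)) - A   # weighted Laplacian

x = np.array([1.0, 0.0, 2.0, -1.0])
quad = x @ L @ x
weighted_sum = sum(w * (x[i] - x[j]) ** 2 for i, j, w in wedges)
```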

78.14 Normalized Laplacians

The unnormalized Laplacian uses raw degrees. In some applications, especially when degrees vary greatly, normalized versions are useful.

The symmetric normalized Laplacian is

$$L_{\mathrm{sym}} = I-\Delta^{-1/2}A\Delta^{-1/2},$$

where isolated vertices require a convention because $\Delta^{-1/2}$ is not defined for degree zero.

The random-walk normalized Laplacian is

$$L_{\mathrm{rw}} = I-\Delta^{-1}A.$$

These matrices adjust adjacency by degree. They are common in spectral clustering, random walks, and graph-based data analysis. Normalized graph Laplacians are standard variants of the Laplacian matrix.

78.15 Random Walk Interpretation

Let $G$ be a graph with no isolated vertices. Define

$$P=\Delta^{-1}A.$$

The matrix $P$ is the transition matrix of the simple random walk on $G$. From vertex $v_i$, the walk moves uniformly to one of its neighbors.

Then

$$L_{\mathrm{rw}}=I-P.$$

Thus the random-walk Laplacian measures the difference between a value at a vertex and the average value at its neighbors.

For a function $x$ on vertices,

$$(L_{\mathrm{rw}}x)_i = x_i-\frac{1}{\deg(v_i)}\sum_{v_j\sim v_i}x_j.$$

This is a discrete analogue of a Laplace operator. It compares a value with a local average.
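The entrywise formula can be verified against the matrix definition $L_{\mathrm{rw}} = I - \Delta^{-1}A$ (Python with NumPy; names and the test vector are illustrative):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
deg = A.sum(axis=1)

P = A / deg[:, None]          # transition matrix Delta^{-1} A; rows sum to 1
L_rw = np.eye(n) - P

x = np.array([3.0, 1.0, 4.0, 1.0])
y = L_rw @ x
# Entry i is x_i minus the average of x over the neighbors of v_i
manual = np.array([x[i] - A[i] @ x / deg[i] for i in range(n)])
```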

78.16 Complete Graphs

For the complete graph $K_n$, every vertex has degree $n-1$. Its adjacency matrix is

$$A=J-I,$$

where $J$ is the all-ones matrix. The degree matrix is

$$\Delta=(n-1)I.$$

Therefore

$$L=\Delta-A=(n-1)I-(J-I)=nI-J.$$

Thus

$$L(K_n)=nI-J.$$

The vector $\mathbf{1}$ is in the null space. Every vector orthogonal to $\mathbf{1}$ is an eigenvector with eigenvalue $n$. Hence the spectrum is

$$0,\ n,\ n,\ldots,n.$$
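The spectrum of $K_n$ is easy to confirm numerically (Python with NumPy; $n=5$ is an arbitrary choice):

```python
import numpy as np

n = 5
J = np.ones((n, n))
L = n * np.eye(n) - J   # Laplacian of the complete graph K_n

# Eigenvalues should be 0 once and n with multiplicity n-1
eigvals = np.sort(np.linalg.eigvalsh(L))
```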

78.17 Path Graphs

For the path graph

$$P_n: v_1-v_2-\cdots-v_n,$$

the Laplacian matrix is tridiagonal:

$$L(P_n)=\begin{bmatrix} 1 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ 0 & -1 & 2 & \ddots & 0 \\ \vdots & \vdots & \ddots & \ddots & -1 \\ 0 & 0 & 0 & -1 & 1 \end{bmatrix}.$$

The endpoints have degree $1$. The interior vertices have degree $2$. This matrix is a discrete second-difference operator on a finite line.

78.18 Cycle Graphs

For the cycle graph $C_n$, every vertex has degree $2$, and each vertex is adjacent to two neighbors. Its Laplacian has the form

$$L(C_n)=\begin{bmatrix} 2 & -1 & 0 & \cdots & -1 \\ -1 & 2 & -1 & \cdots & 0 \\ 0 & -1 & 2 & \ddots & 0 \\ \vdots & \vdots & \ddots & \ddots & -1 \\ -1 & 0 & 0 & -1 & 2 \end{bmatrix}.$$

This is a circulant matrix. It represents a discrete Laplace operator on a ring.

78.19 Trees

If $T$ is a tree with $n$ vertices, then $T$ has $n-1$ edges and is connected. Therefore the Laplacian matrix of $T$ has nullity $1$ and rank

$$n-1.$$

The absence of cycles also appears through the incidence matrix. If $D$ is an oriented incidence matrix of a tree, then $D$ has full column rank. Since

$$L=DD^T,$$

the Laplacian has rank $n-1$.

78.20 Matrix-Tree Theorem

The Laplacian matrix counts spanning trees through the matrix-tree theorem.

Let $G$ be a graph with Laplacian matrix $L$. Delete any one row and the corresponding column from $L$. The determinant of the resulting $(n-1)\times(n-1)$ matrix equals the number of spanning trees of $G$.

This number is independent of which row and column are deleted.

The theorem is one of the central results connecting graph theory and linear algebra. It shows that a purely combinatorial quantity, the number of spanning trees, can be computed as a determinant. The use of the Laplacian in Kirchhoff’s matrix-tree theorem is a standard application of graph Laplacians.
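For the example graph from earlier (a triangle on $v_1,v_2,v_3$ with a pendant edge to $v_4$), each of the triangle's three spanning trees extends uniquely by the pendant edge, so there are exactly 3 spanning trees. A numerical check via the matrix-tree theorem (Python with NumPy; names illustrative):

```python
import numpy as np

# Triangle {v1, v2, v3} plus the pendant edge {v3, v4}
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Delete row 0 and column 0, then take the determinant
reduced = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
num_spanning_trees = round(np.linalg.det(reduced))
```

Deleting any other row/column pair gives the same count, as the theorem asserts.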

78.21 Dirichlet Energy

The quantity

$$x^T L x$$

is often called the Dirichlet energy of the function $x$ on the graph.

It measures roughness. A function is smooth on a graph when adjacent vertices receive similar values. It is rough when many adjacent vertices receive very different values.

This idea is used in graph signal processing, semi-supervised learning, spectral clustering, and numerical methods on networks.

78.22 Summary

The Laplacian matrix of a simple undirected graph is

$$L=\Delta-A.$$

Its diagonal entries are vertex degrees. Its off-diagonal entries are $-1$ for adjacent vertices and $0$ otherwise.

The Laplacian matrix has several fundamental properties.

| Property | Statement |
| --- | --- |
| Symmetry | $L^T=L$ for undirected graphs |
| Row sums | $L\mathbf{1}=0$ |
| Positive semidefinite | $x^TLx\geq 0$ |
| Quadratic form | $x^TLx=\sum_{\{i,j\}\in E}(x_i-x_j)^2$ |
| Incidence form | $L=DD^T$ |
| Connectivity | $\dim\ker L$ equals the number of connected components |
| Algebraic connectivity | $\lambda_2>0$ iff the graph is connected |
| Matrix-tree theorem | Cofactors of $L$ count spanning trees |

The Laplacian is important because it turns graph structure into linear algebraic structure. It connects adjacency, degree, connectivity, cuts, flows, random walks, spanning trees, and eigenvalues in one matrix.