# Chapter 77. Incidence Matrices

An incidence matrix is a matrix representation of a graph that records the relation between vertices and edges. In contrast with the adjacency matrix, which records whether two vertices are adjacent, the incidence matrix records whether a vertex belongs to an edge.

Let

$$
G = (V,E)
$$

be a finite graph with vertex set

$$
V = \{v_1,v_2,\ldots,v_n\}
$$

and edge set

$$
E = \{e_1,e_2,\ldots,e_m\}.
$$

The incidence matrix of \(G\) has one row for each vertex and one column for each edge. Thus it is an \(n \times m\) matrix.

For an undirected graph, the unoriented incidence matrix is the matrix

$$
B = (b_{ij}),
$$

where

$$
b_{ij} =
\begin{cases}
1, & \text{if } v_i \text{ is incident with } e_j, \\
0, & \text{otherwise.}
\end{cases}
$$

Each column describes one edge. Since an ordinary edge has two endpoints, each column contains two entries equal to \(1\), except when loops are allowed and treated by a separate convention.

## 77.1 Incidence

The word *incidence* describes a membership relation between objects of two different types, such as vertices and edges.

In graph theory, a vertex and an edge are incident when the vertex is an endpoint of the edge. If

$$
e = \{u,v\},
$$

then \(u\) is incident with \(e\), and \(v\) is incident with \(e\).

The incidence matrix records this relation directly. Rows correspond to vertices. Columns correspond to edges. The entry in row \(i\), column \(j\) answers the question:

$$
\text{Is } v_i \text{ an endpoint of } e_j?
$$

If yes, the entry is \(1\). If no, the entry is \(0\).

## 77.2 Example

Consider the graph with vertices

$$
V = \{v_1,v_2,v_3,v_4\}
$$

and edges

$$
e_1 = \{v_1,v_2\}, \qquad
e_2 = \{v_1,v_3\}, \qquad
e_3 = \{v_2,v_3\}, \qquad
e_4 = \{v_3,v_4\}.
$$

The incidence matrix is

$$
B =
\begin{bmatrix}
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1
\end{bmatrix}.
$$

The first column corresponds to \(e_1\). It has ones in rows \(1\) and \(2\), because \(e_1\) joins \(v_1\) and \(v_2\).

The fourth column corresponds to \(e_4\). It has ones in rows \(3\) and \(4\), because \(e_4\) joins \(v_3\) and \(v_4\).

Thus each column gives the endpoints of an edge.
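As a check, the matrix above can be built directly from the edge list. A minimal NumPy sketch, mapping \(v_1,\ldots,v_4\) to 0-based indices \(0,\ldots,3\):

```python
import numpy as np

# Edge list of the example graph (0-based: v1..v4 become 0..3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(edges)

# Unoriented incidence matrix: B[i, j] = 1 iff v_i is an endpoint of e_j.
B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u, j] = 1
    B[v, j] = 1

expected = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
])
assert (B == expected).all()
# Every column sums to 2: each edge has exactly two endpoints.
assert (B.sum(axis=0) == 2).all()
```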

## 77.3 Comparison with the Adjacency Matrix

The adjacency matrix is a vertex-by-vertex matrix. The incidence matrix is a vertex-by-edge matrix.

| Matrix | Rows | Columns | Entry records |
|---|---|---|---|
| Adjacency matrix | Vertices | Vertices | Whether two vertices are adjacent |
| Incidence matrix | Vertices | Edges | Whether a vertex is incident with an edge |

For a graph with \(n\) vertices and \(m\) edges, the adjacency matrix has size

$$
n \times n,
$$

while the incidence matrix has size

$$
n \times m.
$$

The adjacency matrix is often convenient for studying walks and spectral properties. The incidence matrix is often convenient for studying degrees, cuts, flows, cycles, and graph Laplacians.

## 77.4 Column Sums

For a simple undirected graph, each edge has exactly two endpoints. Therefore every column of the incidence matrix has sum \(2\):

$$
\sum_{i=1}^{n} b_{ij} = 2
$$

for every edge \(e_j\).

This property distinguishes ordinary graph incidence matrices from more general incidence structures. In a hypergraph, an edge may contain more than two vertices, so a column may have more than two nonzero entries.

## 77.5 Row Sums and Degrees

The row sum of the incidence matrix gives the degree of a vertex.

For a simple undirected graph,

$$
\deg(v_i) = \sum_{j=1}^{m} b_{ij}.
$$

Each entry \(b_{ij}=1\) corresponds to one edge incident with \(v_i\). Therefore the row sum counts the number of edges touching \(v_i\).

In the example above,

$$
B =
\begin{bmatrix}
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1
\end{bmatrix},
$$

the row sums are

$$
2,\ 2,\ 3,\ 1.
$$

Thus

$$
\deg(v_1)=2,\quad
\deg(v_2)=2,\quad
\deg(v_3)=3,\quad
\deg(v_4)=1.
$$
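The row sums can be verified numerically. A small sketch for the same example graph:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u, j] = 1
    B[v, j] = 1

degrees = B.sum(axis=1)            # row sums give degrees
assert (degrees == [2, 2, 3, 1]).all()
assert degrees.sum() == 2 * m      # handshaking identity
```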

## 77.6 The Handshaking Identity

Since every column of the incidence matrix of a simple graph has sum \(2\),

$$
\sum_{i=1}^{n}\sum_{j=1}^{m} b_{ij} = 2m.
$$

But the sum of row \(i\) is \(\deg(v_i)\). Therefore

$$
\sum_{i=1}^{n} \deg(v_i) = 2m.
$$

This is the handshaking identity.

The incidence matrix gives a direct proof: counting all ones by columns gives twice the number of edges, while counting all ones by rows gives the sum of all degrees.

## 77.7 Oriented Incidence Matrices

An oriented incidence matrix assigns a direction to each edge. This may be done even when the original graph is undirected. The orientation is arbitrary but fixed.

If an edge \(e_j\) is oriented from \(v_p\) to \(v_q\), then the oriented incidence matrix \(D=(d_{ij})\) has

$$
d_{pj} = -1,
\qquad
d_{qj} = 1,
$$

and all other entries in column \(j\) are zero.

Equivalently,

$$
d_{ij} =
\begin{cases}
-1, & \text{if } e_j \text{ leaves } v_i, \\
1, & \text{if } e_j \text{ enters } v_i, \\
0, & \text{otherwise.}
\end{cases}
$$

Some authors use the opposite sign convention. The mathematical content is unchanged as long as the convention is used consistently.

## 77.8 Example of an Oriented Incidence Matrix

Use the same graph as before:

$$
e_1 = \{v_1,v_2\}, \qquad
e_2 = \{v_1,v_3\}, \qquad
e_3 = \{v_2,v_3\}, \qquad
e_4 = \{v_3,v_4\}.
$$

Choose the orientations

$$
v_1 \to v_2,\qquad
v_1 \to v_3,\qquad
v_2 \to v_3,\qquad
v_3 \to v_4.
$$

Then the oriented incidence matrix is

$$
D =
\begin{bmatrix}
-1 & -1 & 0 & 0 \\
1 & 0 & -1 & 0 \\
0 & 1 & 1 & -1 \\
0 & 0 & 0 & 1
\end{bmatrix}.
$$

Each column has one entry \(1\), one entry \(-1\), and all other entries zero. The sum of every column is zero:

$$
\sum_{i=1}^{n} d_{ij}=0.
$$

This expresses the fact that every oriented edge leaves one vertex and enters one vertex.
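The oriented matrix \(D\) can be built the same way, now recording a tail and a head for each edge. A sketch using the orientations chosen above:

```python
import numpy as np

# Oriented edges (tail, head): v1->v2, v1->v3, v2->v3, v3->v4 (0-based).
arcs = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(arcs)

# Convention from the text: -1 where the edge leaves, +1 where it enters.
D = np.zeros((n, m), dtype=int)
for j, (tail, head) in enumerate(arcs):
    D[tail, j] = -1
    D[head, j] = 1

expected = np.array([
    [-1, -1,  0,  0],
    [ 1,  0, -1,  0],
    [ 0,  1,  1, -1],
    [ 0,  0,  0,  1],
])
assert (D == expected).all()
assert (D.sum(axis=0) == 0).all()   # each column sums to zero
```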

## 77.9 Directed Graphs

For a directed graph, the orientation is part of the graph itself. The oriented incidence matrix uses the direction of each arc.

If \(e_j\) is an arc from \(v_p\) to \(v_q\), then

$$
d_{pj} = -1,
\qquad
d_{qj} = 1.
$$

Thus a directed graph naturally has an oriented incidence matrix.

The row of a vertex records all arcs entering and leaving that vertex. Positive entries mark incoming arcs. Negative entries mark outgoing arcs. With the opposite sign convention, these signs are reversed.

## 77.10 Net Flow Interpretation

The oriented incidence matrix has a natural flow interpretation.

Let

$$
x =
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_m
\end{bmatrix}
$$

be a vector of edge flows, where \(x_j\) is the amount of flow along edge \(e_j\) in its chosen orientation.

Then

$$
Dx
$$

is a vector indexed by vertices. Its \(i\)-th entry is

$$
(Dx)_i = \sum_{j=1}^{m} d_{ij}x_j.
$$

With the convention \(-1\) for leaving and \(1\) for entering, this entry equals

$$
\text{inflow at } v_i - \text{outflow at } v_i.
$$

Thus the equation

$$
Dx = 0
$$

means that every vertex has zero net imbalance. In network language, this is flow conservation.
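A concrete circulation in the example graph illustrates this. One unit of flow around the cycle \(v_1 \to v_2 \to v_3 \to v_1\) uses \(e_1\) and \(e_3\) forward and \(e_2\) backward:

```python
import numpy as np

arcs = [(0, 1), (0, 2), (1, 2), (2, 3)]
D = np.zeros((4, 4), dtype=int)
for j, (tail, head) in enumerate(arcs):
    D[tail, j] = -1
    D[head, j] = 1

# Circulation around v1 -> v2 -> v3 -> v1:
# e1 forward, e2 backward, e3 forward, no flow on e4.
x = np.array([1, -1, 1, 0])
assert (D @ x == 0).all()   # flow is conserved at every vertex
```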

## 77.11 Cuts and Potential Differences

The transpose of the oriented incidence matrix also has an important meaning.

Let

$$
p =
\begin{bmatrix}
p_1\\
p_2\\
\vdots\\
p_n
\end{bmatrix}
$$

be a vector assigning a scalar potential to each vertex. Then

$$
D^T p
$$

is a vector indexed by edges.

If \(e_j\) is oriented from \(v_a\) to \(v_b\), then the \(j\)-th entry of \(D^T p\) is

$$
p_b - p_a.
$$

Thus \(D^T p\) gives the potential difference across each oriented edge.

This interpretation appears in electrical networks, graph Laplacians, optimization, and numerical methods on graphs.
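A quick numerical check, using a hypothetical potential vector that assigns \(0,1,2,3\) to \(v_1,\ldots,v_4\):

```python
import numpy as np

arcs = [(0, 1), (0, 2), (1, 2), (2, 3)]
D = np.zeros((4, 4), dtype=int)
for j, (tail, head) in enumerate(arcs):
    D[tail, j] = -1
    D[head, j] = 1

p = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical potentials at v1..v4
diffs = D.T @ p
# Entry j is p(head) - p(tail) for edge e_j.
assert np.allclose(diffs, [p[h] - p[t] for t, h in arcs])
```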

## 77.12 The Graph Laplacian

The oriented incidence matrix gives a concise formula for the graph Laplacian.

For an undirected graph with an arbitrary orientation, let \(D\) be its oriented incidence matrix. Then

$$
L = DD^T
$$

is the graph Laplacian.

The result does not depend on the chosen orientation. Reversing an edge changes the sign of one column of \(D\). Writing \(DD^T = \sum_j d_j d_j^T\) as a sum of outer products of the columns \(d_j\), we have \((-d_j)(-d_j)^T = d_j d_j^T\), so the contribution of each column to \(L\) is unchanged.

The Laplacian satisfies

$$
L_{ii} = \deg(v_i)
$$

and, for \(i \neq j\),

$$
L_{ij} =
\begin{cases}
-1, & \text{if } v_i \text{ is adjacent to } v_j, \\
0, & \text{otherwise.}
\end{cases}
$$

Equivalently,

$$
L = \Delta - A,
$$

where \(\Delta\) is the diagonal degree matrix and \(A\) is the adjacency matrix. The identity \(L=DD^T\) is a standard reason incidence matrices are central in spectral graph theory and network analysis.
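Both formulas can be checked against each other on the example graph. A minimal sketch:

```python
import numpy as np

arcs = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
D = np.zeros((n, len(arcs)), dtype=int)
A = np.zeros((n, n), dtype=int)
for j, (u, v) in enumerate(arcs):
    D[u, j], D[v, j] = -1, 1
    A[u, v] = A[v, u] = 1

Delta = np.diag(A.sum(axis=1))      # diagonal degree matrix
L = D @ D.T
assert (L == Delta - A).all()       # L = DD^T = Delta - A
```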

## 77.13 The Matrix \(D^T D\)

The matrix

$$
D^T D
$$

is indexed by edges rather than vertices. Its diagonal entries are \(2\), because each column of \(D\) has one entry \(1\) and one entry \(-1\), so the inner product of a column with itself is \(1^2 + (-1)^2 = 2\).

For two distinct edges \(e_j\) and \(e_k\), the entry

$$
(D^T D)_{jk}
$$

depends on whether the two edges share a vertex and on how their orientations meet at that vertex.

Thus \(D^T D\) records edge-edge interaction. It is closely related to the line graph, whose vertices correspond to edges of the original graph. For the unoriented incidence matrix \(B\), one has the relation

$$
A(L(G)) = B^T B - 2I_m,
$$

where \(L(G)\) is the line graph and \(I_m\) is the \(m \times m\) identity matrix.
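The line-graph relation can be verified on the running example: two edges are adjacent in \(L(G)\) exactly when they share a vertex. A sketch:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1

A_line = B.T @ B - 2 * np.eye(m, dtype=int)
# Off-diagonal entry (j, k) is 1 exactly when e_j and e_k share a vertex.
for j in range(m):
    for k in range(m):
        shares = len(set(edges[j]) & set(edges[k])) > 0
        if j != k:
            assert A_line[j, k] == (1 if shares else 0)
assert (np.diag(A_line) == 0).all()
```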

## 77.14 Cycle Space

The null space of the oriented incidence matrix describes cycles.

A vector

$$
x \in \mathbb{R}^m
$$

lies in the null space of \(D\) when

$$
Dx = 0.
$$

This means that the net flow at every vertex is zero. Such a flow can circulate around cycles without accumulating at any vertex.

In this sense, the cycle space of a graph is represented by the null space of the oriented incidence matrix. Over the real numbers, this gives a vector space of circulations. Over the field with two elements, it gives the binary cycle space.

For a connected graph with \(n\) vertices and \(m\) edges, the dimension of the cycle space is

$$
m-n+1.
$$

This number counts the independent cycles of the graph.
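The dimension can be computed as the nullity of \(D\), i.e. \(m\) minus the rank. For the example graph, which has one independent cycle:

```python
import numpy as np

arcs = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(arcs)
D = np.zeros((n, m), dtype=int)
for j, (t, h) in enumerate(arcs):
    D[t, j], D[h, j] = -1, 1

# Nullity of D = m - rank(D); for a connected graph this equals m - n + 1.
nullity = m - np.linalg.matrix_rank(D)
assert nullity == m - n + 1 == 1
```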

## 77.15 Rank

The rank of the oriented incidence matrix is determined by the number of connected components.

If \(G\) has \(n\) vertices and \(c\) connected components, then

$$
\operatorname{rank}(D) = n-c.
$$

In particular, if \(G\) is connected, then

$$
\operatorname{rank}(D)=n-1.
$$

The reason is that each column of \(D\) has entries summing to zero, so the rows are linearly dependent:

$$
\mathbf{1}^T D = 0.
$$

For a connected graph, this is the only row dependency up to scalar multiplication. For a graph with \(c\) components, there is one such dependency for each component.
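The rank formula can be checked on a small hypothetical disconnected graph, here a path \(v_1\)-\(v_2\)-\(v_3\) together with a separate edge \(v_4\)-\(v_5\):

```python
import numpy as np

# A graph with two components: a path v1-v2-v3 and an edge v4-v5 (0-based).
arcs = [(0, 1), (1, 2), (3, 4)]
n, c = 5, 2
D = np.zeros((n, len(arcs)), dtype=int)
for j, (t, h) in enumerate(arcs):
    D[t, j], D[h, j] = -1, 1

assert np.linalg.matrix_rank(D) == n - c == 3
# The all-ones vector lies in the left null space: 1^T D = 0.
assert (np.ones(n) @ D == 0).all()
```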

## 77.16 Reduced Incidence Matrix

For a connected graph, the oriented incidence matrix has rank \(n-1\). Therefore one row is redundant.

A reduced incidence matrix is obtained by deleting one row from \(D\). If \(G\) is connected, the reduced incidence matrix has full row rank:

$$
n-1.
$$

Reduced incidence matrices appear in the matrix-tree theorem, electrical network theory, and flow problems. Deleting one row corresponds to choosing a reference vertex, sometimes called a ground node in network theory.
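Deleting a row and checking full row rank is straightforward. A sketch choosing \(v_4\) as the reference vertex in the example graph:

```python
import numpy as np

arcs = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
D = np.zeros((n, len(arcs)), dtype=int)
for j, (t, h) in enumerate(arcs):
    D[t, j], D[h, j] = -1, 1

# Delete the last row (choose v4 as the reference, or "ground", vertex).
D_reduced = D[:-1, :]
assert np.linalg.matrix_rank(D_reduced) == n - 1   # full row rank
```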

## 77.17 Trees

Incidence matrices are especially simple for trees.

If \(T\) is a tree with \(n\) vertices, then it has

$$
m=n-1
$$

edges. If \(D\) is an oriented incidence matrix of \(T\), then

$$
\operatorname{rank}(D)=n-1=m.
$$

Thus the columns of \(D\) are linearly independent.

This reflects the absence of cycles. A nonzero circulation would require a cycle. Since a tree has no cycle, the only solution of

$$
Dx=0
$$

is

$$
x=0.
$$
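A quick check on a hypothetical path tree \(v_1\)-\(v_2\)-\(v_3\)-\(v_4\), oriented left to right:

```python
import numpy as np

# A path tree on 4 vertices: v1 - v2 - v3 - v4, oriented left to right.
arcs = [(0, 1), (1, 2), (2, 3)]
n, m = 4, len(arcs)
D = np.zeros((n, m), dtype=int)
for j, (t, h) in enumerate(arcs):
    D[t, j], D[h, j] = -1, 1

# Columns are independent: the only solution of Dx = 0 is x = 0.
assert np.linalg.matrix_rank(D) == m == n - 1
```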

## 77.18 Multigraphs and Loops

For multigraphs, the incidence matrix has one column for each edge, including parallel edges.

If two edges have the same endpoints, their columns in the unoriented incidence matrix are identical. In the oriented incidence matrix, their columns may be identical or negatives of each other, depending on their chosen orientations.

Loops require a convention. In the unoriented incidence matrix, a loop may be recorded as \(2\) in the row of its endpoint, because the loop touches the vertex twice. In some conventions, it is recorded as \(1\). In an oriented incidence matrix for an ordinary directed loop, the edge both leaves and enters the same vertex, so the contributions cancel and the column is zero.

Because of these convention choices, loops should be specified explicitly when incidence matrices are used.

## 77.19 Hypergraphs

The incidence matrix extends naturally to hypergraphs.

A hypergraph has vertices and hyperedges, where a hyperedge may contain any number of vertices. Its incidence matrix is still a vertex-by-edge matrix:

$$
b_{ij} =
\begin{cases}
1, & \text{if } v_i \in e_j, \\
0, & \text{otherwise.}
\end{cases}
$$

Unlike ordinary graphs, a column may contain three or more ones. Thus the incidence matrix is often the most natural matrix representation of a hypergraph.
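Building a hypergraph incidence matrix works the same way, with hyperedges as vertex sets. A sketch for a hypothetical hypergraph with hyperedges \(\{v_1,v_2,v_3\}\) and \(\{v_3,v_4\}\):

```python
import numpy as np

# A small hypergraph: hyperedge E1 = {v1, v2, v3}, hyperedge E2 = {v3, v4}.
hyperedges = [{0, 1, 2}, {2, 3}]
n, m = 4, len(hyperedges)
B = np.zeros((n, m), dtype=int)
for j, e in enumerate(hyperedges):
    for i in e:
        B[i, j] = 1

# Column sums now equal hyperedge sizes, not always 2.
assert (B.sum(axis=0) == [3, 2]).all()
```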

## 77.20 Incidence Structures

Incidence matrices also appear outside graph theory. More generally, an incidence matrix records a relation between two finite sets.

For example, in finite geometry, one set may consist of points and the other of lines. The entry is \(1\) if a point lies on a line and \(0\) otherwise.

In block design theory, one set may consist of points and the other of blocks. The entry is \(1\) if the point belongs to the block.

Graph incidence matrices are one special case of this broader idea.

## 77.21 Summary

An incidence matrix records the relation between vertices and edges.

For an undirected graph, the unoriented incidence matrix has entries

$$
b_{ij}=1
$$

when vertex \(v_i\) is an endpoint of edge \(e_j\), and

$$
b_{ij}=0
$$

otherwise.

With a chosen orientation of the edges, the oriented incidence matrix also records direction:

$$
d_{ij} =
\begin{cases}
-1, & \text{if } e_j \text{ leaves } v_i, \\
1, & \text{if } e_j \text{ enters } v_i, \\
0, & \text{otherwise.}
\end{cases}
$$

The incidence matrix supports several basic interpretations.

| Object | Matrix expression |
|---|---|
| Degree of \(v_i\) in a simple graph | Row sum of the unoriented incidence matrix |
| Handshaking identity | Sum of all entries equals \(2m\) |
| Flow conservation | \(Dx=0\) |
| Edge potential differences | \(D^T p\) |
| Graph Laplacian | \(L=DD^T\) |
| Cycle space | \(\ker D\) |
| Rank | \(\operatorname{rank}(D)=n-c\) |

The incidence matrix gives a vertex-edge view of a graph. It is especially important in the study of flows, cuts, cycles, trees, Laplacians, electrical networks, and spectral graph theory.
