A maximum matching algorithm finds a matching of largest possible size in a graph. Matching algorithms are among the most important combinatorial optimization methods. They appear in scheduling, assignment, routing, network design, computational biology, and resource allocation.
The structure of the graph strongly influences the complexity of the problem. Bipartite graphs admit especially elegant algorithms. General graphs require deeper structural ideas because odd cycles create additional complications.
This chapter develops the main algorithmic principles behind maximum matching.
60.1 The Maximum Matching Problem
Let $G = (V, E)$ be a graph with $n = |V|$ vertices and $m = |E|$ edges.
A matching is a set of pairwise disjoint edges. The goal is to find a matching of maximum size.
Formally, the problem asks for a set $M \subseteq E$ such that:
- No two edges in $M$ share a vertex.
- $|M|$ is as large as possible.
The size of a maximum matching is denoted by $\nu(G)$.
The problem can be solved exactly in polynomial time.
60.2 Basic Strategy
Most matching algorithms follow the same general framework:
- Start with an initial matching.
- Search for an augmenting path.
- Augment the matching along the path.
- Repeat until no augmenting path exists.
The theoretical foundation is Berge’s lemma.
60.3 Berge’s Lemma
Theorem 60.1 (Berge’s Lemma).
A matching $M$ is maximum if and only if there exists no augmenting path with respect to $M$.
An augmenting path is an alternating path whose endpoints are unmatched.
If such a path exists, flipping the matching status of its edges increases the matching size by one.
Suppose
$$P = (v_0, v_1, \dots, v_{2k+1})$$
alternates between nonmatching and matching edges, beginning and ending with nonmatching edges.
Then:
| Edge Type | Count |
|---|---|
| Nonmatching edges | $k+1$ |
| Matching edges | $k$ |
Flipping the path removes $k$ edges and adds $k+1$ edges.
Thus the matching size increases by one.
60.4 Augmentation
Let $M$ be a matching and $P$ an augmenting path.
Define the new matching:
$$M' = M \,\triangle\, P,$$
where $\triangle$ denotes symmetric difference.
Thus:
| Edge in $P$ | Action |
|---|---|
| In $M$ | Remove |
| Not in $M$ | Add |
The result is again a matching, and:
$$|M'| = |M| + 1.$$
This operation is called augmentation.
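As a minimal sketch, the symmetric-difference step can be written directly with Python set operations; the edge representation as `frozenset` pairs is an assumption made here for illustration.

```python
# A minimal sketch of augmentation, assuming the matching M and the
# augmenting path P are given as sets of edges stored as frozensets {u, v}.

def augment(matching, path_edges):
    """Return M' = M symmetric-difference P for an augmenting path P."""
    return matching ^ path_edges

# Path a-b-c-d with b-c matched: endpoints a and d are unmatched.
M = {frozenset({"b", "c"})}
P = {frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"c", "d"})}
M2 = augment(M, P)
assert M2 == {frozenset({"a", "b"}), frozenset({"c", "d"})}
assert len(M2) == len(M) + 1   # size increases by exactly one
```

The symmetric difference removes the matching edges of the path and adds its nonmatching edges, exactly as in the table above.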
60.5 Naive Augmenting-Path Algorithm
The simplest maximum matching algorithm repeatedly searches for augmenting paths.
Algorithm
- Start with $M = \emptyset$.
- Search for an augmenting path.
- If found, augment the matching.
- Repeat until no augmenting path exists.
By Berge’s lemma, the final matching is maximum.
Complexity
Suppose augmenting paths are found using breadth-first search or depth-first search.
Each augmentation increases the matching size by one. Since the maximum matching size is at most $\lfloor n/2 \rfloor$,
the number of augmentations is at most $\lfloor n/2 \rfloor$.
If each search costs $O(n + m)$,
then the total running time is $O(n(n + m))$, i.e. $O(nm)$ on connected graphs.
This is acceptable for small and moderate graphs but inefficient for very large instances.
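For the bipartite case, the loop above can be sketched in a few lines using a DFS-based search for augmenting paths (often credited to Kuhn). The adjacency dict `adj`, mapping left vertices to their right-side neighbors, is a hypothetical example input.

```python
# A sketch of the naive augmenting-path loop for bipartite graphs,
# assuming `adj` maps each left vertex to a list of right-side neighbors.

def max_bipartite_matching(adj):
    match_right = {}              # right vertex -> matched left vertex

    def try_augment(u, visited):
        # DFS for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere:
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in adj:                 # one augmenting-path search per left vertex
        if try_augment(u, set()):
            size += 1             # each success grows the matching by one
    return size, match_right

adj = {"a": ["x", "y"], "b": ["x"], "c": ["y"]}
size, m = max_bipartite_matching(adj)
assert size == 2                  # b-x and a-y (or c-y) is maximum here
```

Each call to `try_augment` either fails or flips one alternating path, so by Berge's lemma the loop terminates with a maximum matching.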
60.6 Alternating Trees
Augmenting-path algorithms usually organize the search using alternating trees.
Start from an unmatched vertex.
Grow a tree whose edges alternate with depth:
| Edge From Depth | Edge Type |
|---|---|
| Even | Nonmatching edge |
| Odd | Matching edge |
This structure ensures that every root-to-vertex path is alternating.
If an unmatched vertex is reached at an odd level, an augmenting path has been found.
Alternating trees are fundamental in both bipartite and general matching algorithms.
60.7 Bipartite Matching Algorithms
Bipartite graphs simplify the problem because they contain no odd cycles.
This allows augmenting paths to be found cleanly using layered searches.
Layered Search
Let
$$G = (A \cup B, E)$$
be bipartite.
Start from unmatched vertices in $A$.
Construct layers:
| Step | Edge Type |
|---|---|
| From $A$ to $B$ | Nonmatching edges |
| From $B$ to $A$ | Matching edges |
This produces alternating paths automatically.
60.8 The Hopcroft-Karp Algorithm
The Hopcroft-Karp algorithm is the standard algorithm for maximum bipartite matching.
The key improvement is that the algorithm augments along many shortest augmenting paths simultaneously.
Main Structure
Each phase contains:
- Breadth-first search to compute distance layers.
- Depth-first search to find vertex-disjoint shortest augmenting paths.
- Simultaneous augmentation along all such paths.
This reduces the total number of phases dramatically.
Complexity
The running time is $O(m \sqrt{n})$.
This is one of the major results in combinatorial optimization.
60.9 Breadth-First Search Layers
During the Hopcroft-Karp algorithm, breadth-first search builds a layered graph.
Vertices are partitioned by distance from unmatched vertices.
The search respects alternation:
| Edge Type | Traversal Direction |
|---|---|
| Nonmatching edge | Left to right |
| Matching edge | Right to left |
The first layer containing unmatched vertices on the opposite side identifies shortest augmenting paths.
Only shortest augmenting paths are considered during the phase.
60.10 Why Shortest Paths Matter
Suppose one augments along arbitrary paths.
A later augmentation may destroy much of the previous structure.
The Hopcroft-Karp algorithm avoids this inefficiency.
By augmenting along many shortest disjoint paths simultaneously:
| Benefit | Result |
|---|---|
| Paths are vertex-disjoint | Augmentations do not interfere |
| Path lengths increase over time | Number of phases decreases |
| Many augmentations occur per phase | Faster convergence |
This leads to the $O(m \sqrt{n})$ bound.
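Putting the phases together, the algorithm can be sketched as follows. This is one common way to implement Hopcroft-Karp, not the only one; `adj` maps each left vertex to its right-side neighbors, and `None` stands in for the "free right vertex" sink during the layering.

```python
from collections import deque

INF = float("inf")

def hopcroft_karp(adj, left, right):
    """Maximum matching in a bipartite graph; adj maps left -> right lists."""
    match_l = {u: None for u in left}
    match_r = {v: None for v in right}
    dist = {}

    def bfs():
        # Layer left vertices by alternating-path distance from free vertices.
        q = deque()
        for u in left:
            if match_l[u] is None:
                dist[u] = 0
                q.append(u)
            else:
                dist[u] = INF
        dist[None] = INF          # None = "some free right vertex" sink
        while q:
            u = q.popleft()
            if dist[u] < dist[None]:
                for v in adj[u]:
                    w = match_r[v]        # None if v is unmatched
                    if dist[w] == INF:
                        dist[w] = dist[u] + 1
                        if w is not None:
                            q.append(w)
        return dist[None] != INF  # some shortest augmenting path exists

    def dfs(u):
        # Extract one shortest augmenting path inside the layered graph.
        if u is None:
            return True
        for v in adj[u]:
            w = match_r[v]
            if dist[w] == dist[u] + 1 and dfs(w):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF             # dead end: prune u for this phase
        return False

    size = 0
    while bfs():                  # one phase per shortest-path length
        for u in left:
            if match_l[u] is None and dfs(u):
                size += 1
    return size, match_l

left, right = ["a", "b", "c"], ["x", "y"]
adj = {"a": ["x", "y"], "b": ["x"], "c": ["y"]}
size, match_l = hopcroft_karp(adj, left, right)
assert size == 2
```

The `dist[w] == dist[u] + 1` check in the DFS is what restricts augmentation to shortest paths, and marking dead ends with `INF` keeps the paths found within a phase vertex-disjoint.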
60.11 General Graph Matching
General graphs introduce new complications because odd cycles may appear.
In bipartite graphs, alternating trees never encounter parity conflicts. In arbitrary graphs, alternating paths may become trapped inside odd cycles.
These odd cycles are called blossoms.
Handling blossoms is the key difficulty in general matching algorithms.
60.12 Blossoms
A blossom is an odd cycle appearing during the search for augmenting paths.
Suppose an alternating tree reaches two vertices already connected by an edge. If the resulting cycle has odd length, the cycle may prevent straightforward augmentation.
For example:
$$v_1 - v_2 - v_3 - v_4 - v_5 - v_1$$
may form an alternating odd cycle.
The cycle obstructs the layered structure used in bipartite algorithms.
60.13 Edmonds’ Blossom Algorithm
Edmonds introduced the blossom algorithm in 1965. It was the first polynomial-time algorithm for maximum matching in arbitrary graphs.
The main idea is blossom contraction.
Blossom Contraction
Suppose an odd alternating cycle appears.
Contract the entire blossom into a single supervertex.
Then continue searching for augmenting paths in the contracted graph.
If an augmenting path is found, the contraction can later be expanded consistently.
This remarkable idea preserves correctness while eliminating odd-cycle obstructions.
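The contraction step itself can be illustrated in isolation. The sketch below, under the assumption that the graph is stored as an adjacency dict of sets, merges a blossom into a single supervertex; the full algorithm would also record enough information to expand the blossom later.

```python
# A sketch of blossom contraction on an undirected graph stored as an
# adjacency dict of sets. The blossom's vertices are merged into one
# supervertex; edges leaving the blossom are redirected to it.

def contract(adj, blossom, supervertex):
    new_adj = {supervertex: set()}
    for u, nbrs in adj.items():
        if u in blossom:
            # Edges leaving the blossom attach to the supervertex.
            new_adj[supervertex] |= {v for v in nbrs if v not in blossom}
        else:
            new_adj[u] = {supervertex if v in blossom else v for v in nbrs}
    return new_adj

# Odd cycle 1-2-3-4-5-1 with an outside vertex 6 adjacent to 3:
adj = {1: {2, 5}, 2: {1, 3}, 3: {2, 4, 6}, 4: {3, 5}, 5: {1, 4}, 6: {3}}
c = contract(adj, {1, 2, 3, 4, 5}, "B")
assert c == {"B": {6}, 6: {"B"}}
```

After contraction, the search for augmenting paths continues in the smaller graph as if the supervertex were an ordinary vertex.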
60.14 Complexity of Blossom Algorithms
Early blossom algorithms had relatively high polynomial complexity.
Modern implementations achieve significantly better performance.
Typical bounds include:
| Algorithm | Complexity |
|---|---|
| Naive augmenting paths | $O(nm)$ |
| Hopcroft-Karp | $O(m \sqrt{n})$ |
| Edmonds blossom | Polynomial |
| Micali-Vazirani | $O(m \sqrt{n})$ |
The Micali-Vazirani algorithm extends Hopcroft-Karp-type efficiency to general graphs.
60.15 Weighted Matching
In weighted matching problems, edges carry weights or costs.
The goal becomes:
| Objective | Optimization |
|---|---|
| Maximum-weight matching | Maximize total weight |
| Minimum-cost matching | Minimize total cost |
Weighted problems require more sophisticated methods.
60.16 The Hungarian Algorithm
The Hungarian algorithm solves weighted bipartite matching problems.
The input is typically a cost matrix
$$C = (c_{ij}),$$
where $c_{ij}$ is the cost of assigning row $i$ to column $j$.
The goal is to assign rows to columns with minimum total cost.
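Before looking at the algorithm itself, it helps to state the objective concretely. The brute-force baseline below simply tries every row-to-column permutation of a small hypothetical cost matrix; the Hungarian algorithm reaches the same optimum in polynomial time.

```python
from itertools import permutations

# A brute-force baseline for the assignment problem: try every
# row-to-column permutation of a small example cost matrix.

def brute_force_assignment(cost):
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):          # n! candidate assignments
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, total = brute_force_assignment(cost)
assert total == 5        # rows 0, 1, 2 -> columns 1, 0, 2
```

The $n!$ growth of this search is exactly what the Hungarian algorithm's dual variables and alternating trees avoid.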
Main Ideas
The algorithm maintains:
| Structure | Purpose |
|---|---|
| Dual variables | Maintain feasibility |
| Tight edges | Candidate assignments |
| Alternating trees | Search for augmenting structure |
The algorithm combines graph theory and linear programming duality.
Complexity
Standard implementations run in polynomial time, commonly $O(n^3)$.
60.17 Matching and Linear Programming
The maximum matching problem can be expressed as a linear program.
Introduce variables:
$$x_e \in \{0, 1\} \quad \text{for each edge } e \in E.$$
The objective is:
$$\max \sum_{e \in E} x_e$$
Subject to:
$$\sum_{e \ni v} x_e \le 1 \quad \text{for every vertex } v \in V.$$
This formulation expresses the fact that each vertex participates in at most one matching edge.
In bipartite graphs, the linear relaxation automatically yields integral solutions. This property is one reason bipartite matching is especially tractable.
60.18 Matching Polytope
The set of all matchings forms a polytope in high-dimensional space.
Each matching corresponds to a 0-1 incidence vector.
The convex hull of these vectors is called the matching polytope.
Edmonds characterized this polytope using linear inequalities, including blossom inequalities.
This result became foundational in polyhedral combinatorics.
60.19 Online Matching
In online matching problems, vertices or edges arrive over time.
The algorithm must make decisions immediately without knowing future inputs.
Applications include:
| Application | Online Interpretation |
|---|---|
| Advertisement allocation | Users arrive sequentially |
| Ride matching | Requests appear dynamically |
| Cloud scheduling | Jobs arrive continuously |
Online matching theory studies competitive algorithms and probabilistic guarantees.
60.20 Randomized Matching Algorithms
Randomization often improves matching algorithms.
Randomized algorithms appear in:
| Context | Purpose |
|---|---|
| Online matching | Adversarial robustness |
| Approximation algorithms | Faster computation |
| Large sparse graphs | Sampling techniques |
| Distributed systems | Conflict reduction |
One famous result is the Karp-Vazirani-Vazirani algorithm for online bipartite matching.
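The core idea of that algorithm, known as Ranking, can be sketched briefly: fix one random permutation of the offline side up front, then match each arriving vertex to its highest-ranked free neighbor. The arrival sequence below is a hypothetical example, not data from the original paper.

```python
import random

# A sketch of the Karp-Vazirani-Vazirani "Ranking" idea: one random
# permutation of the offline side is fixed in advance; each arriving
# online vertex takes its free neighbor of best (lowest) rank.

def ranking(offline, arrivals, rng=random):
    perm = list(offline)
    rng.shuffle(perm)
    rank = {v: i for i, v in enumerate(perm)}
    matched = {}                              # offline vertex -> online vertex
    for u, neighbors in arrivals:             # online vertices arrive in order
        free = [v for v in neighbors if v not in matched]
        if free:
            matched[min(free, key=rank.get)] = u
    return matched

arrivals = [("u1", ["x", "y"]), ("u2", ["x"]), ("u3", ["y", "z"])]
m = ranking(["x", "y", "z"], arrivals)
assert 2 <= len(m) <= 3   # outcome depends on the random permutation
```

The random permutation is what protects the algorithm against adversarial arrival orders; a fixed priority order could be exploited by the adversary.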
60.21 Parallel and Distributed Matching
Modern systems frequently require matching algorithms on massive graphs.
Parallel algorithms divide computation across processors.
Distributed algorithms allow vertices to make decisions locally.
Applications include:
| Domain | Graph Size |
|---|---|
| Social networks | Millions to billions of vertices |
| Web graphs | Massive scale |
| Communication networks | Dynamic topology |
| Data centers | Real-time allocation |
Efficient distributed matching remains an active research area.
60.22 Approximation Algorithms
For some matching variants, exact computation is expensive.
Approximation algorithms produce near-optimal solutions efficiently.
A simple greedy maximal matching gives a factor-$\frac{1}{2}$ approximation to maximum matching.
More advanced approximation schemes achieve stronger guarantees for weighted and dynamic settings.
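The greedy baseline is short enough to sketch in full: scan the edges once, keeping each edge whose endpoints are both still free.

```python
# A sketch of greedy maximal matching: one pass over the edge list,
# keeping every edge whose two endpoints are still unmatched. The result
# is maximal, hence at least half the size of a maximum matching.

def greedy_matching(edges):
    used, matching = set(), []
    for u, v in edges:
        if u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    return matching

# Path a-b-c-d: greedy may take the middle edge first and stop at size 1,
# while the maximum matching {a-b, c-d} has size 2.
m = greedy_matching([("b", "c"), ("a", "b"), ("c", "d")])
assert m == [("b", "c")]
```

The example also shows the factor of $\frac{1}{2}$ is tight: an unlucky edge order can lock in half the optimum.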
60.23 Applications
Maximum matching algorithms appear throughout science and engineering.
| Application | Matching Interpretation |
|---|---|
| Job assignment | Workers to tasks |
| Kidney exchange | Donors to recipients |
| Image processing | Feature correspondence |
| Scheduling | Tasks to machines |
| Resource allocation | Requests to servers |
| Transportation | Vehicles to routes |
| Chemistry | Molecular pairing |
The mathematical abstraction is extremely general.
60.24 Summary of Major Algorithms
| Algorithm | Graph Type | Complexity |
|---|---|---|
| Naive augmenting paths | General | $O(nm)$ |
| Hopcroft-Karp | Bipartite | $O(m \sqrt{n})$ |
| Blossom algorithm | General | Polynomial |
| Hungarian algorithm | Weighted bipartite | $O(n^3)$ |
| Micali-Vazirani | General | $O(m \sqrt{n})$ |
These algorithms form the foundation of practical matching computation.
60.25 Exercises
1. Prove Berge’s lemma.
2. Show that augmenting along an augmenting path increases matching size by one.
3. Apply the naive augmenting-path algorithm to a small bipartite graph.
4. Explain why odd cycles create difficulties in general matching problems.
5. Describe blossom contraction in your own words.
6. Show that every maximal matching is at least half the size of a maximum matching.
7. Construct the flow network associated with a bipartite matching problem.
8. Explain why Hopcroft-Karp augments along shortest augmenting paths.
9. Write the linear programming formulation for maximum matching.
10. Explain why bipartite matching is easier than general matching.