Divide and conquer solves a problem by splitting it into smaller subproblems, solving those subproblems, and combining their answers. It is useful when the original problem has repeated structure and when the cost of combining subanswers is controlled.
The pattern has three parts:
divide:
split the input into smaller pieces
conquer:
solve each piece recursively
combine:
merge the subanswers into the answer for the full input

The key question is whether the subproblems are easier than the original problem and whether combining their answers costs less than solving the whole problem directly.
Problem
You need to solve a problem whose input can be split into smaller parts, and you want to reason about correctness and running time through recursion.
Solution
Use this template:
```
solve(input):
    if input is small:
        return direct answer
    split input into subinputs
    solve each subinput recursively
    combine recursive answers
    return combined answer
```

A divide and conquer algorithm needs:
1. A base case
2. A valid split
3. Recursive calls on smaller inputs
4. A combine step
5. A recurrence for the running time

Example: Merge Sort
Problem:
Given an array A, return the elements in sorted order.

Algorithm:
```
merge_sort(A):
    if length(A) <= 1:
        return A
    split A into left and right halves
    L = merge_sort(left)
    R = merge_sort(right)
    return merge(L, R)
```

The merge operation combines two sorted arrays:
```
merge(L, R):
    result = empty array
    i = 0
    j = 0
    while i < length(L) and j < length(R):
        if L[i] <= R[j]:
            append L[i] to result
            i = i + 1
        else:
            append R[j] to result
            j = j + 1
    append remaining elements of L
    append remaining elements of R
    return result
```

Correctness Argument
The proof is by induction on the length of the array.
Base case:
If the array has length 0 or 1, it is already sorted.
Inductive step:
Assume merge sort correctly sorts arrays smaller than length n. For an array of length n, the algorithm splits it into two smaller arrays. By the inductive hypothesis, the recursive calls return sorted versions of both halves.
The merge procedure preserves sorted order because it repeatedly selects the smaller first element from the two sorted arrays. Every element from both halves is appended exactly once. Therefore the merged result is sorted and contains exactly the same elements as the original input.
Thus merge sort correctly sorts the array.
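To make the argument concrete, the pseudocode above can be rendered as a short runnable sketch. This is one possible Python version; the function names simply follow the pseudocode.

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two lists is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(a):
    """Sort a list by recursively sorting both halves and merging them."""
    if len(a) <= 1:          # base case: already sorted
        return a
    mid = len(a) // 2        # balanced split
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```

Using `<=` in the comparison keeps the sort stable: when elements compare equal, the one from the left half is taken first.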
Time Complexity
Merge sort satisfies the recurrence:
T(n) = 2T(n/2) + O(n)

There are two recursive calls on half-size inputs. The merge step scans all elements once, so it costs O(n).
The recurrence solves to:
O(n log n)

The recursion has O(log n) levels, and each level performs O(n) total merge work.
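One way to sanity-check this is to unroll the recurrence numerically. The sketch below assumes unit cost at the base case and a combine cost of exactly n; under those assumptions, for powers of two the total is n * log2(n) + n.

```python
def T(n):
    """Unrolled recurrence T(n) = 2*T(n//2) + n, with T(1) = 1.

    Counts abstract cost units, not real running time."""
    if n <= 1:
        return 1            # base case costs one unit
    return 2 * T(n // 2) + n  # two half-size calls plus linear combine

# For powers of two this matches the closed form n * log2(n) + n,
# e.g. T(8) = 8*3 + 8 = 32 and T(1024) = 1024*10 + 1024 = 11264.
```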
Example: Fast Exponentiation
Problem:
Compute x^n for a nonnegative integer n.

A direct algorithm multiplies x by itself n times. Divide and conquer reduces the exponent by half at each step.
```
pow(x, n):
    if n == 0:
        return 1
    half = pow(x, n // 2)
    if n is even:
        return half * half
    else:
        return half * half * x
```

Correctness follows from the identities:
```
x^n = x^(n/2) * x^(n/2)             when n is even
x^n = x^((n-1)/2) * x^((n-1)/2) * x when n is odd
```

The time complexity is O(log n), since each call halves n.
Designing the Split
A good split has these properties:
| Property | Meaning |
|---|---|
| Smaller subproblems | Recursion must progress |
| Balanced size | Prevents excessive recursion depth |
| Complete coverage | Every part of the input is represented |
| Limited overlap | Repeated work should be avoided |
| Cheap combine step | Combining should not dominate unnecessarily |
Unbalanced splits can degrade performance. For example, quicksort with a poor pivot may split into sizes 0 and n - 1, leading to O(n^2) time.
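The effect on recursion depth can be seen without sorting anything. The hypothetical helper below only tracks how fast the larger side of each split shrinks, given a rule for where the pivot lands:

```python
def quicksort_depth(n, smaller_side):
    """Recursion depth of quicksort on n elements.

    smaller_side(n) returns the size of the smaller half of the split;
    the other half then has n - 1 - smaller_side(n) elements (the pivot
    itself is placed). Depth is driven by the larger side."""
    depth = 0
    while n > 1:
        small = smaller_side(n)
        n = max(small, n - 1 - small)  # descend into the larger side
        depth += 1
    return depth

# Worst case: pivot always lands at one end (splits of size 0 and n - 1).
worst = quicksort_depth(1024, lambda n: 0)
# Balanced: pivot is always the median (both sides roughly n/2).
balanced = quicksort_depth(1024, lambda n: (n - 1) // 2)
```

On 1024 elements the end-pivot rule produces 1023 levels of recursion, while the median rule produces only 10.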
Recurrence Patterns
Common divide and conquer costs:
| Recurrence | Typical Complexity | Example |
|---|---|---|
| T(n) = T(n/2) + O(1) | O(log n) | Binary search |
| T(n) = T(n/2) + O(n) | O(n) | Some selection routines |
| T(n) = 2T(n/2) + O(n) | O(n log n) | Merge sort |
| T(n) = 2T(n/2) + O(1) | O(n) | Full binary recursion with constant combine |
| T(n) = 3T(n/2) + O(n) | O(n^(log2 3)) ≈ O(n^1.585) | Karatsuba multiplication |
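To illustrate the last row, here is a minimal Karatsuba sketch for nonnegative integers, splitting each number at a bit boundary. It uses three recursive half-size multiplications instead of four, which is exactly what produces T(n) = 3T(n/2) + O(n).

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y with three recursive
    half-size multiplications instead of four."""
    if x < 10 or y < 10:                      # base case: a small factor
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)   # y = hi_y * 2^m + lo_y
    a = karatsuba(hi_x, hi_y)                 # high parts
    b = karatsuba(lo_x, lo_y)                 # low parts
    # (hi_x + lo_x)(hi_y + lo_y) = a + cross + b, so one extra
    # multiplication recovers both cross terms at once.
    cross = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * m)) + (cross << m) + b
```

The trick is in the `cross` line: the naive expansion needs hi_x*lo_y and lo_x*hi_y separately, but their sum falls out of a single product of sums, saving the fourth recursive call.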
Common Pitfalls
Do not split without a clear combine step. Solving smaller pieces helps only if their answers can be assembled into the full answer.
Do not ignore repeated subproblems. If recursion recomputes the same states many times, dynamic programming may be more appropriate.
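The classic illustration is Fibonacci: the naive divide and conquer recursion recomputes the same subproblems exponentially often, while memoization solves each one exactly once. A sketch, where `fib_calls` is a hypothetical instrumented version that counts its own invocations:

```python
from functools import lru_cache

def fib_calls(n, counter):
    """Naive divide and conquer Fibonacci; counter[0] tallies every call."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

@lru_cache(maxsize=None)
def fib_memo(n):
    """The same recursion, but each subproblem is computed only once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

For n = 20 the naive version makes 21891 recursive calls, while the memoized version touches only the 21 distinct subproblems 0 through 20.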
Do not assume divide and conquer is automatically fast. Poor splits or expensive merging can make it slower than a direct method.
Do not omit the base case. Every recursive branch must eventually reach a directly solvable input.
Takeaway
Divide and conquer turns a large problem into smaller problems of the same form. Its correctness follows from induction, and its time complexity follows from a recurrence. The method works best when the split is balanced, the subproblems have little overlap, and the combine step is efficient.