1.15 Dynamic Programming

Dynamic programming solves problems by storing answers to subproblems and reusing them. It is useful when a naive recursive solution repeats the same work many times. Instead of recomputing a subproblem every time it appears, the algorithm computes it once and records the result.

The method depends on two properties:

Optimal substructure:
  The answer to a larger problem can be built from answers to smaller problems.

Overlapping subproblems:
  The same smaller problems appear repeatedly.

If both properties are present, dynamic programming is often a good fit.

Problem

You have a recursive problem where many branches solve the same subproblems, and you need to reduce repeated work.

Solution

Use this design pattern:

1. Define the state.
2. Define the value stored for each state.
3. Write the recurrence.
4. Identify base cases.
5. Choose an evaluation order.
6. Return the target state.

The state is the most important part. A state should contain exactly the information needed to describe a subproblem.

Example: Fibonacci Numbers

The naive recursive algorithm repeats work:

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

For large n, this recomputes the same values many times.

With memoization:

def fib(n):
    memo = {}

    def solve(k):
        if k <= 1:
            return k

        if k in memo:
            return memo[k]

        memo[k] = solve(k - 1) + solve(k - 2)
        return memo[k]

    return solve(n)

Each value fib(k) is computed once. The time complexity becomes O(n), and the auxiliary space is O(n).
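
In Python, the same memoization can be obtained without a hand-written cache by using the standard library's functools.lru_cache decorator; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: each fib(k) is stored after its first call
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

The decorator records each return value keyed by the argument, so repeated calls with the same k hit the cache instead of recursing again.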

Tabulation

The same computation can be written bottom-up:

def fib(n):
    if n <= 1:
        return n

    dp = [0] * (n + 1)
    dp[1] = 1

    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]

    return dp[n]

This version fills states in increasing order, so every dependency is available when needed.
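
Because each state dp[i] depends only on the previous two states, the table can be collapsed to two variables, reducing auxiliary space to O(1); a sketch:

```python
def fib(n):
    # a and b hold dp[i] and dp[i + 1] as the loop advances
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

This rolling-variable trick applies whenever the recurrence only reaches back a fixed number of states.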

Example: 0/1 Knapsack

Problem:

Given items with weights and values, choose a subset with total weight at most W and maximum value.

State:

dp[i][w] = maximum value using the first i items with capacity w

Recurrence:

dp[i][w] =
  dp[i - 1][w]                                            if weight[i] > w
  max(dp[i - 1][w], dp[i - 1][w - weight[i]] + value[i])  otherwise

The first case applies when item i cannot fit; the max compares skipping item i against taking it.

Base cases:

dp[0][w] = 0
dp[i][0] = 0

The answer is:

dp[n][W]

The time complexity is O(nW), and the space complexity is O(nW), or O(W) with rolling-array optimization.
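
The recurrence can be sketched in Python with the rolling-array optimization, keeping only a one-dimensional table over capacities (the example item data in the test is illustrative):

```python
def knapsack(weights, values, W):
    # dp[w] = best value achievable with capacity w using the items seen so far
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # iterate capacity downward so each item is counted at most once
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```

Iterating w downward ensures dp[w - wt] still refers to the previous item row; iterating upward would allow the same item to be taken repeatedly, which solves the unbounded knapsack instead.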

Memoization vs Tabulation

Method        Direction   Strength                       Weakness
Memoization   Top-down    Computes only reached states   Recursion overhead
Tabulation    Bottom-up   Predictable iteration          May compute unused states

Memoization is often easier when the recurrence is natural but the state graph is irregular. Tabulation is often better when the state space is dense and ordered.

Choosing a State

A good dynamic programming state has three properties:

Completeness:
  It contains enough information to solve the subproblem.

Minimality:
  It avoids information that does not affect future decisions.

Orderability:
  Its dependencies can be evaluated before the state itself.

If the state is incomplete, the recurrence gives wrong answers. If the state is too large, the algorithm becomes slow or memory-heavy.

Common DP Patterns

Pattern       Typical State
Prefix DP     dp[i] over first i elements
Interval DP   dp[l][r] over subarray A[l..r]
Grid DP       dp[row][col]
Knapsack DP   dp[i][capacity]
Bitmask DP    dp[mask] or dp[mask][last]
Tree DP       dp[node][state]
Digit DP      dp[pos][tight][state]
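
As one illustration of the grid DP pattern, consider counting down/right paths through a grid; the problem itself is an assumed example, not part of this section:

```python
def count_paths(rows, cols):
    # dp[r][c] = number of paths from (0, 0) to (r, c) moving only down or right
    dp = [[1] * cols for _ in range(rows)]  # first row and column have one path each
    for r in range(1, rows):
        for c in range(1, cols):
            dp[r][c] = dp[r - 1][c] + dp[r][c - 1]
    return dp[rows - 1][cols - 1]
```

The state dp[row][col] has the three properties discussed below: it is complete (position determines the remaining subproblem), minimal, and orderable row by row.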

Correctness Argument

A DP proof usually follows induction over the evaluation order.

For each state, prove:

1. The base states match the specification directly.
2. The recurrence considers all valid choices.
3. The recurrence chooses the best valid value.
4. Dependencies are already correct when the state is computed.

For knapsack, the recurrence is correct because every optimal solution either excludes the current item or includes it. These two cases are exhaustive and disjoint.

Common Pitfalls

Do not start by writing a table. Start by defining the subproblem.

Do not use a state that forgets necessary information. For example, in path problems, the current node may not be enough if visited nodes matter.

Do not fill states before their dependencies are ready.

Do not assume pseudo-polynomial time is polynomial in the input length. Knapsack time O(nW) depends on the numeric capacity, not just the number of bits needed to represent it.

Takeaway

Dynamic programming turns repeated recursive work into a table of reusable subproblem answers. The core design task is state definition. Once the state, recurrence, base cases, and evaluation order are correct, the algorithm follows mechanically.