Dynamic Programming
Introduction
Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them into overlapping subproblems and storing their results so you never solve the same subproblem twice. Think of it like taking notes during a long calculation — instead of redoing work, you look up answers you’ve already figured out.
If you’ve worked through Recursion, you’ve already seen the problem DP solves. Remember the naive recursive Fibonacci? It recalculates the same values over and over. DP eliminates that waste. You’ll also want to be comfortable with Big-O Notation to understand the complexity improvements.
When does DP apply?
A problem is a good candidate for DP when it has two properties:
1. Overlapping Subproblems — The same smaller problems are solved multiple times.
2. Optimal Substructure — The optimal solution to the problem can be built from optimal solutions to its subproblems.
If a problem only has optimal substructure but no overlapping subproblems, you’re looking at plain divide and conquer (like merge sort). If subproblems overlap, DP gives you a speedup.
The Fibonacci problem
Fibonacci is the classic example because it clearly shows both properties. Let’s trace through the naive recursive version:
fib(5)
├── fib(4)
│ ├── fib(3)
│ │ ├── fib(2)
│ │ │ ├── fib(1) → 1
│ │ │ └── fib(0) → 0
│ │ └── fib(1) → 1
│ └── fib(2)
│ ├── fib(1) → 1
│ └── fib(0) → 0
└── fib(3)
├── fib(2)
│ ├── fib(1) → 1
│ └── fib(0) → 0
└── fib(1) → 1
fib(3) is computed twice and fib(2) three times. The waste compounds quickly: the naive version makes roughly 2·fib(n+1) total calls, so fib(40) triggers over 300 million calls. The time complexity is O(2^n), unusable for any meaningful input.
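For reference, the tree above is produced by the plain recursive definition. A minimal JavaScript version (the later examples in this tutorial are also in JavaScript):

```javascript
// Naive recursion: every call spawns two more, and the same
// subproblems (fib(3), fib(2), ...) are recomputed over and over.
function fib(n) {
  if (n <= 1) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}
```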
Two approaches to DP
Top-Down (Memoization)
Start with the original problem, recurse into subproblems, and cache results as you go. This is the recursive approach with a lookup table bolted on.
Bottom-Up (Tabulation)
Start from the smallest subproblems, build up a table of results, and use them to solve larger subproblems iteratively. No recursion needed.
Both approaches produce the same results. The choice comes down to what feels more natural for the problem.
Fibonacci with DP
Top-Down (Memoization)
Each subproblem is solved once and looked up in O(1) after that. Time: O(n). Space: O(n).
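A sketch of the memoized version in JavaScript (the `memo` parameter name is my own choice):

```javascript
// Top-down: same recursion as the naive version, plus a cache.
function fib(n, memo = {}) {
  if (n in memo) return memo[n]; // already solved: O(1) lookup
  if (n <= 1) return n;          // base cases
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
  return memo[n];
}
```

Because every result is cached on first computation, each of the n subproblems is solved exactly once.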
Bottom-Up (Tabulation)
Same O(n) time, but no recursion overhead and no risk of stack overflow.
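A sketch of the tabulated version:

```javascript
// Bottom-up: fill a table from the base cases upward.
function fib(n) {
  if (n <= 1) return n;
  const dp = new Array(n + 1);
  dp[0] = 0;
  dp[1] = 1;
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2]; // each entry uses already-computed entries
  }
  return dp[n];
}
```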
Space-Optimized Bottom-Up
Since Fibonacci only depends on the previous two values, we don’t need the full table:
```javascript
function fib(n) {
  if (n <= 1) return n;
  let prev2 = 0, prev1 = 1;        // fib(0) and fib(1)
  for (let i = 2; i <= n; i++) {
    const current = prev1 + prev2; // fib(i) = fib(i-1) + fib(i-2)
    prev2 = prev1;
    prev1 = current;
  }
  return prev1;
}
```
Time: O(n). Space: O(1). For a linear-time approach, this is as good as it gets.
Climbing Stairs
A classic DP problem: you’re climbing a staircase with n steps. Each time you can climb 1 or 2 steps. How many distinct ways can you reach the top?
This is Fibonacci in disguise. To reach step n, you either came from step n-1 (one step) or step n-2 (two steps). So ways(n) = ways(n-1) + ways(n-2).
n=1: 1 way → [1]
n=2: 2 ways → [1,1], [2]
n=3: 3 ways → [1,1,1], [1,2], [2,1]
n=4: 5 ways → [1,1,1,1], [1,1,2], [1,2,1], [2,1,1], [2,2]
Time: O(n). Space: O(1).
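Since only the previous two values are needed, the same space optimization applies. A sketch (the function name is my own):

```javascript
// ways(n) = ways(n-1) + ways(n-2), with ways(1) = 1 and ways(2) = 2.
function climbStairs(n) {
  if (n <= 2) return n;            // 1 way for n=1, 2 ways for n=2
  let prev2 = 1, prev1 = 2;        // ways(1) and ways(2)
  for (let i = 3; i <= n; i++) {
    const current = prev1 + prev2;
    prev2 = prev1;
    prev1 = current;
  }
  return prev1;
}
```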
Minimum Cost Climbing Stairs
Now each step has a cost. Find the minimum cost to reach the top. You can start from step 0 or step 1.
This introduces a key DP pattern: at each step, you make a choice (come from i-1 or i-2) and take the minimum.
Recurrence: dp[i] = cost[i] + min(dp[i-1], dp[i-2]), where dp[i] is the minimum cost to stand on step i. The answer is min(dp[n-1], dp[n-2]), since the top can be reached from either of the last two steps.
Time: O(n). Space: O(1).
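A space-optimized sketch of this recurrence (function name as on the common version of this problem; assumes `cost` has at least two entries):

```javascript
function minCostClimbingStairs(cost) {
  let prev2 = cost[0], prev1 = cost[1]; // min cost to stand on steps 0 and 1
  for (let i = 2; i < cost.length; i++) {
    const current = cost[i] + Math.min(prev1, prev2); // choose the cheaper predecessor
    prev2 = prev1;
    prev1 = current;
  }
  return Math.min(prev1, prev2); // the top is one step past the last entry
}
```

For example, with costs [10, 15, 20], the cheapest route is to start on step 1 (cost 15) and jump straight to the top.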
House Robber
You’re a robber planning to rob houses along a street. Each house has a certain amount of money, but you can’t rob two adjacent houses (alarm system). Find the maximum amount you can rob.
Recurrence: At each house, you either rob it (add its value to the best you could do two houses back) or skip it (keep the best from the previous house).
dp[i] = max(dp[i-1], dp[i-2] + nums[i])
Time: O(n). Space: O(1).
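A sketch of the space-optimized recurrence:

```javascript
function rob(nums) {
  let prev2 = 0, prev1 = 0; // best loot up to two houses back / the previous house
  for (const value of nums) {
    const current = Math.max(prev1, prev2 + value); // skip this house, or rob it
    prev2 = prev1;
    prev1 = current;
  }
  return prev1;
}
```

For [2, 7, 9, 3, 1], the best plan robs houses 0, 2, and 4 for a total of 12.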
How to approach a DP problem
Here’s a framework that works for most DP problems:
1. Define the subproblem. What does dp[i] (or dp[i][j]) represent? State this clearly in words before writing any code.
2. Find the recurrence relation. How does dp[i] relate to smaller subproblems? This is the core of the solution.
3. Identify the base cases. What are the smallest subproblems you can solve directly?
4. Determine the computation order. For bottom-up, make sure you compute subproblems before you need them.
5. Optimize space if possible. If dp[i] only depends on the last few values, you don’t need the full table.
Top-Down vs Bottom-Up
| | Top-Down (Memoization) | Bottom-Up (Tabulation) |
|---|---|---|
| Approach | Recursive + cache | Iterative + table |
| Ease of writing | Often more intuitive | Requires knowing computation order |
| Stack overflow risk | Yes, for deep recursion | No |
| Subproblems computed | Only those needed | All subproblems in range |
| Space optimization | Harder | Easier |
In interviews, start with whichever feels more natural. If the interviewer asks for optimization, convert to bottom-up and then reduce space.
Complexity analysis
DP transforms problems from exponential to polynomial time:
| Problem | Naive Recursive | With DP |
|---|---|---|
| Fibonacci | O(2^n) | O(n) |
| Climbing Stairs | O(2^n) | O(n) |
| House Robber | O(2^n) | O(n) |
| 0/1 Knapsack | O(2^n) | O(n × W) |
| Longest Common Subsequence | O(2^n) | O(n × m) |
The space complexity depends on the number of dimensions in your DP table and whether you can optimize it.
Practice problems
- Climbing Stairs (1, 2, or 3 steps variant)
- Coin Change — minimum coins to make amount
- Longest Increasing Subsequence
- Maximum Subarray (Kadane’s Algorithm)
- Unique Paths in a grid
- 0/1 Knapsack Problem
- Longest Common Subsequence
- Edit Distance
- Word Break
- Decode Ways
Dynamic programming is a skill that improves with practice. The patterns repeat — once you’ve solved a few problems in each category (1D, 2D, string DP, decision-making), new problems start to feel familiar. Start with the examples in this tutorial, then work through the practice problems above.