Dynamic Programming

DP Pattern

Never solve the same puzzle piece twice — write it down and look it up next time
• Requires: overlapping subproblems + optimal substructure
• Approach: state → transition → base case
• Vs naive: exponential → polynomial

Imagine you're assembling a huge jigsaw puzzle and you keep solving the same corner section over and over — every time you come back to it. That's what naive recursion does. Dynamic Programming (DP) says: solve each piece once, write the answer on a sticky note, and look it up if you need it again.

That simple idea — cache results, never recompute — turns exponential algorithms into polynomial ones.

Two Properties You Need

Overlapping Subproblems: The recursive solution calls the same sub-problems many times. Classic example: Fibonacci numbers. Computing F(5) calls F(4) and F(3). F(4) calls F(3) and F(2). F(3) is computed twice, F(2) is computed three times... Without caching, computing F(50) makes tens of billions of function calls. With caching: 51 unique sub-problems (F(0) through F(50)), done.
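You can watch the overlap happen by instrumenting the naive recursion. This is a minimal sketch (the `calls` counter is just an illustrative helper, not part of any standard API):

```python
from collections import Counter

# Count how many times the naive recursion is invoked with each argument.
calls = Counter()

def fib_naive(n):
    calls[n] += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(10)
print(calls[2], sum(calls.values()))  # F(2) alone is recomputed 34 times; 177 calls total
```

Even at n = 10 the waste is obvious, and it grows exponentially with n.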

Optimal Substructure: The optimal answer to the big problem is built from optimal answers to smaller sub-problems. Shortest path from A to C through B: the A→B segment must be the shortest A→B path, and the B→C segment must be the shortest B→C path. If either sub-path were suboptimal, you could improve it and get a better overall path — contradiction. So optimal substructure holds.
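The shortest-path argument translates directly into a DP over a directed acyclic graph: the best distance to each node is built from the best distances to its predecessors. A minimal sketch with a hypothetical three-node graph:

```python
# Shortest path in a DAG via DP: dist[v] = min over edges (u, v) of dist[u] + weight.
# Tiny hypothetical graph; nodes are already listed in topological order.
edges = {
    'A': [('B', 1), ('C', 4)],   # node -> list of (successor, weight)
    'B': [('C', 2)],
    'C': [],
}
topo = ['A', 'B', 'C']

dist = {v: float('inf') for v in topo}
dist['A'] = 0
for u in topo:                    # dependencies (predecessors) are done before u is used
    for v, w in edges[u]:
        dist[v] = min(dist[v], dist[u] + w)   # best A->v extends some best A->u

print(dist['C'])  # 3: A->B->C beats the direct A->C edge of weight 4
```

Because each `dist[u]` is optimal when it is used, the combined answer is optimal too: exactly the substructure argument above.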

Both properties together mean: compute each sub-problem once in optimal fashion, combine them, and you're done.

The DP Design Framework

1. Define the state: What does dp[i] (or dp[i][j]) represent? Be precise and write it out. Wrong state definition → everything downstream is wrong.

2. Write the recurrence (transition): Express dp[i] in terms of smaller indices. This is the hardest and most creative step.
• Fibonacci: dp[i] = dp[i-1] + dp[i-2]
• 0/1 Knapsack: dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt[i]] + val[i])
• LCS: dp[i][j] = dp[i-1][j-1]+1 if s1[i]==s2[j] else max(dp[i-1][j], dp[i][j-1])
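As a worked example, the knapsack recurrence above drops straight into a table. This is a sketch (item values and weights are made up for illustration; arrays are 0-indexed, so item i lives at `wt[i-1]`):

```python
def knapsack(wt, val, W):
    # dp[i][w] = best value achievable using the first i items with capacity w
    n = len(wt)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]              # choice 1: skip item i
            if wt[i - 1] <= w:                   # choice 2: take item i, if it fits
                dp[i][w] = max(dp[i][w], dp[i - 1][w - wt[i - 1]] + val[i - 1])
    return dp[n][W]

print(knapsack([1, 3, 4], [15, 20, 30], 4))  # 35: take the weight-1 and weight-3 items
```

Note how the code is a line-for-line transcription of the recurrence: the "skip" and "take" branches are exactly the two arguments of the `max`.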

3. Define base cases: The smallest sub-problems you can answer directly.
• Fibonacci: dp[0]=0, dp[1]=1
• Knapsack: dp[0][w]=0 for all w (no items → no value)

4. Fill order or memoize: Either fill the table bottom-up (tabulation) ensuring all dependencies are computed before they're needed, or add a cache to the top-down recursion (memoization).

The four-step DP design pipeline: define state, write the recurrence, set base cases, then choose memoization or tabulation — and optimize space if possible.

DP vs Greedy vs Divide & Conquer

All three break problems into smaller pieces, but differently:
Divide & Conquer: subproblems are independent — solve each once, combine. (Merge sort, closest pair.)
Greedy: makes one locally-optimal choice, never revisits. Works only when the greedy-choice property holds.
DP: subproblems overlap, caches results. Considers all choices at each step. More powerful than greedy but typically slower.

Rule of thumb: try greedy first. If you can find a counterexample to the greedy choice, switch to DP.
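Coin change is the classic counterexample. With coins {1, 3, 4} and target 6, "always take the biggest coin" gives 4+1+1 (three coins), while the optimum is 3+3 (two coins). A minimal sketch contrasting the two (both functions are illustrative, not a library API):

```python
def greedy_coins(coins, amount):
    # Greedy: repeatedly take the largest coin that still fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    # dp[a] = fewest coins summing to a; considers every coin at every amount.
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        dp[a] = min((dp[a - c] + 1 for c in coins if c <= a), default=INF)
    return dp[amount] if dp[amount] < INF else None

coins = [1, 3, 4]
print(greedy_coins(coins, 6))  # 3 — greedy takes 4, then 1, then 1
print(dp_coins(coins, 6))      # 2 — DP finds 3 + 3
```

One counterexample like this is all the justification you need to switch from greedy to DP.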

Decision flowchart: given a problem with smaller sub-problems, determine whether to use Divide & Conquer, Greedy, or DP.
Note: The number of distinct subproblems is what matters. If there are only n unique subproblems (like Fibonacci 0..n) and each takes \(O(1)\) to compute given its dependencies, your DP runs in \(O(n)\) — no matter how many times the naive recursion would have recomputed them.
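You can verify the distinct-subproblem count empirically: `functools.lru_cache` tracks a cache miss exactly once per unique argument, so the miss count equals the number of states actually computed.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(50)
print(fib.cache_info().misses)  # 51 — one miss per unique state F(0)..F(50)
```

Everything else the naive version would have done shows up as cache hits, which cost \(O(1)\) each.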

DP Framework — Fibonacci (Naive → Memo → Tab → Space-Optimised)

from functools import lru_cache

# 1. Naive — exponential O(2^n)
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n-1) + fib_naive(n-2)

# 2. Memoization (top-down) — O(n)
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n-1) + fib_memo(n-2)

# 3. Tabulation (bottom-up) — O(n) time, O(n) space
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n+1)
    dp[1] = 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

# 4. Space-optimised — O(n) time, O(1) space
def fib_opt(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a+b
    return a

for fn, name in [(fib_memo, 'memo'), (fib_tab, 'tab'), (fib_opt, 'opt')]:
    print(f"fib(10) [{name}] = {fn(10)}")
Output
fib(10) [memo] = 55
fib(10) [tab] = 55
fib(10) [opt] = 55

Complexity

🐢 Naive Fibonacci (no cache): every call spawns two more; the same sub-problems are recomputed exponentially many times. Exponential blowup: \(O(2^n)\)
⚡ Fibonacci with memoization: n unique states (F(0)..F(n)); each computed once and cached. Each state computed once: \(O(n)\)
📏 General 1D DP: n states, \(O(1)\) transition each. Linear in state count: \(O(n)\)
📐 General 2D DP: n × m state grid, \(O(1)\) transition each — LCS, edit distance, etc. Quadratic: \(O(n \cdot m)\)
📦 Space (rolling array trick): when the recurrence only looks back a fixed number of rows, discard older rows. Table reduced to between \(O(1)\) and \(O(\text{row size})\)
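The rolling-array trick applied to knapsack: the 2D recurrence only ever reads row i-1, so a single 1D row suffices if capacities are swept in descending order (ascending would let item i be taken twice). A sketch with the same hypothetical items as before:

```python
def knapsack_1d(wt, val, W):
    # One row instead of an (n+1) x (W+1) table.
    dp = [0] * (W + 1)
    for i in range(len(wt)):
        # Descending w: dp[w - wt[i]] still holds the previous row's value.
        for w in range(W, wt[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - wt[i]] + val[i])
    return dp[W]

print(knapsack_1d([1, 3, 4], [15, 20, 30], 4))  # 35 — same answer, O(W) space
```

Time is unchanged at \(O(n \cdot W)\); only the memory footprint shrinks.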

Quick check

Naive Fibonacci is \(O(2^n)\). What does DP exploit to reduce this to \(O(n)\)?

Continue reading

• Memoization vs Tabulation: top-down is calling a friend who calls a friend; bottom-up is filling a spreadsheet row by row.
• Greedy Pattern: always grab the biggest slice of pizza — sometimes that's optimal, sometimes it isn't.
• 1D DP: build the answer one step at a time — just like climbing stairs.