DP Pattern
Imagine you're assembling a huge jigsaw puzzle and you keep solving the same corner section over and over — every time you come back to it. That's what naive recursion does. Dynamic Programming (DP) says: solve each piece once, write the answer on a sticky note, and look it up if you need it again.
That simple idea — cache results, never recompute — turns exponential algorithms into polynomial ones.
Two Properties You Need
Overlapping Subproblems: The recursive solution calls the same sub-problems many times. Classic example: Fibonacci numbers. Computing F(5) calls F(4) and F(3). F(4) calls F(3) and F(2). F(3) is computed twice, F(2) is computed three times... Without caching, computing F(50) makes over 40 billion function calls. With caching: 51 unique calls (one per subproblem F(0) through F(50)), done.
Optimal Substructure: The optimal answer to the big problem is built from optimal answers to smaller sub-problems. Shortest path from A to C through B: the A→B segment must be the shortest A→B path, and the B→C segment must be the shortest B→C path. If either sub-path were suboptimal, you could improve it and get a better overall path — contradiction. So optimal substructure holds.
Both properties together mean: compute each sub-problem once in optimal fashion, combine them, and you're done.
The DP Design Framework
1. Define the state: What does dp[i] (or dp[i][j]) represent? Be precise and write it out. Wrong state definition → everything downstream is wrong.
2. Write the recurrence (transition): Express dp[i] in terms of smaller indices. This is the hardest and most creative step.
• Fibonacci: dp[i] = dp[i-1] + dp[i-2]
• 0/1 Knapsack: dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt[i]] + val[i])
• LCS: dp[i][j] = dp[i-1][j-1]+1 if s1[i]==s2[j] else max(dp[i-1][j], dp[i][j-1])
3. Define base cases: The smallest sub-problems you can answer directly.
• Fibonacci: dp[0]=0, dp[1]=1
• Knapsack: dp[0][w]=0 for all w (no items → no value)
4. Fill order or memoize: Either fill the table bottom-up (tabulation) ensuring all dependencies are computed before they're needed, or add a cache to the top-down recursion (memoization).
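The four steps above can be walked through concretely on the 0/1 knapsack recurrence. A bottom-up sketch (variable names and the example weights/values are illustrative):

```python
def knapsack(wt, val, W):
    """0/1 knapsack via tabulation.

    Step 1 (state):      dp[i][w] = best value using the first i items, capacity w.
    Step 3 (base case):  dp[0][w] = 0 for all w -- no items, no value.
    Step 4 (fill order): increasing i, so dp[i-1][...] is always ready.
    """
    n = len(wt)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            # Step 2 (recurrence): skip item i, or take it if it fits.
            dp[i][w] = dp[i - 1][w]
            if wt[i - 1] <= w:  # item i is wt[i-1] in 0-indexed Python lists
                dp[i][w] = max(dp[i][w], dp[i - 1][w - wt[i - 1]] + val[i - 1])
    return dp[n][W]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9: take the weight-3 and weight-4 items
```

Note the off-by-one trap: the recurrence indexes items from 1, while Python lists index from 0, hence `wt[i - 1]` inside the loop.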
DP vs Greedy vs Divide & Conquer
All three break problems into smaller pieces, but differently:
• Divide & Conquer: subproblems are independent — solve each once, combine. (Merge sort, closest pair.)
• Greedy: makes one locally-optimal choice, never revisits. Works only when the greedy-choice property holds.
• DP: subproblems overlap, so it caches results. Considers all choices at each step. More powerful than greedy but typically slower.
Rule of thumb: try greedy first. If you can find a counterexample to the greedy choice, switch to DP.
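A classic counterexample where greedy fails is coin change with denominations {1, 3, 4}. A sketch comparing the two approaches (function names are illustrative):

```python
def greedy_coins(coins, amount):
    # Repeatedly take the largest coin that fits -- locally optimal, never revisits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    # dp[a] = fewest coins summing to a; considers every coin at each step.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else None

print(greedy_coins([1, 3, 4], 6))  # 3  (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # 2  (3 + 3)
```

Greedy grabs the 4-coin and gets stuck with two 1-coins; DP, by trying every coin at every amount, finds 3 + 3. One such counterexample is enough to rule greedy out.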
DP Framework — Fibonacci (Naive → Memo → Tab → Space-Optimised)
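A sketch of the progression this heading describes, from exponential recursion down to constant space (function names are illustrative):

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential: recomputes overlapping subproblems.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down with a cache: O(n) time, O(n) space.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    # Bottom-up table: O(n) time, O(n) space, no recursion.
    dp = [0] * (n + 1)
    if n > 0:
        dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

def fib_opt(n):
    # The recurrence only looks back two steps, so keep just two values: O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_naive(10), fib_memo(50), fib_tab(50), fib_opt(50))
```

The last step is a general trick: whenever dp[i] depends only on a fixed window of previous entries, the table can be shrunk to that window.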
Complexity
Quick check