Memoization vs Tabulation
Once you've identified a DP problem and defined your state and recurrence, you face one more question: how do I actually compute this? There are two approaches, and they're like two different personalities.
Memoization (top-down): Start at the big problem. Recurse down. If you've already solved a sub-problem, return the cached answer immediately. Otherwise, solve it, cache it, return it. Like calling a friend — who might call another friend — who might call someone else. Eventually the chain bottoms out, and answers propagate back up.
Tabulation (bottom-up): Start at the smallest sub-problems. Fill a table from smallest to largest, ensuring each cell's dependencies are already filled in. Like filling in a spreadsheet: each cell uses already-computed cells to its left or above. No recursion, no call stack.
Both give the same worst-case asymptotic time and space. The choice comes down to practical tradeoffs.
Memoization — The Lazy Approach
Write the recursive solution exactly as the recurrence says, then add a cache check at the top.
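For instance, a memoized Fibonacci in Python is just the recurrence plus a cache decorator — a minimal sketch using functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache check + store, added "at the top" for free
def fib(n: int) -> int:
    if n < 2:                            # base cases of the recurrence
        return n
    return fib(n - 1) + fib(n - 2)       # the recurrence, written verbatim
```

The function body mirrors the recurrence exactly; the decorator handles all the caching.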
Advantages:
• Natural to write — the code mirrors the recurrence directly.
• Lazy: only computes sub-problems actually needed. If your solution only touches a small fraction of the state space, memoization avoids the wasted work tabulation would do.
• Easy to implement with Python's @lru_cache, JavaScript closures, or a dictionary.
Disadvantages:
• Function call overhead and call-stack memory.
• Stack overflow risk: if n = 1,000,000 and each recursive call adds one stack frame, you'll blow past the limit — CPython caps recursion at roughly 1,000 frames by default, and even native call stacks typically hold only tens of thousands.
• Space optimization (rolling arrays) is harder to apply — the recursive structure makes it non-obvious which cached states you can discard.
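To make the stack-overflow risk concrete, here is a CPython-specific sketch, including a common workaround — warming the cache in increasing order so no single call recurses deeply:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# CPython caps recursion at about 1,000 frames by default, so this
# first call tries to recurse ~5,000 deep and overflows.
try:
    fib(5000)
except RecursionError:
    print("hit the default recursion limit")

# Workaround: fill the cache bottom-up, so each call recurses
# at most a couple of frames before hitting cached values.
for i in range(5001):
    fib(i)
# fib(5000) now returns instantly from the cache.
```

Note that failed calls are not cached, so the warm-up loop starts from a clean slate.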
Tabulation — The Eager Approach
Determine the correct fill order (usually smallest-index to largest), iterate, and fill each cell.
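A tabulated Fibonacci makes that fill order explicit — a minimal sketch:

```python
def fib_tab(n: int) -> int:
    if n < 2:
        return n
    dp = [0] * (n + 1)        # dp[i] will hold F(i)
    dp[1] = 1
    for i in range(2, n + 1): # dependencies dp[i-1], dp[i-2] already filled
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

No recursion, no cache lookups — just a loop over the table in dependency order.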
Advantages:
• No recursion overhead, no stack overflow.
• Fill order is explicit — you can often optimize space by discarding old rows (rolling window), reducing \(O(n^2)\) to \(O(n)\) or \(O(n)\) to \(O(1)\).
• Better cache locality in practice — sequential array access is fast.
Disadvantages:
• Determining the correct fill order can be non-obvious for 2D or irregular DP.
• Computes all sub-problems, even those not needed for the final answer.
Space Optimization — Tabulation's Superpower
Many DP recurrences look only one or two rows back. When that's the case, you can throw away older rows and keep only what you need.
Fibonacci: dp[i] = dp[i-1] + dp[i-2]. Only need the last two values. Space: \(O(n)\) → \(O(1)\).
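A sketch of the rolling-window version, keeping only the last two values instead of the whole table:

```python
def fib_o1(n: int) -> int:
    prev, curr = 0, 1              # F(i-2) and F(i-1), rolled forward
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev                    # after n steps, prev holds F(n)
```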
0/1 Knapsack: dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt]+val). Only need the previous row. Space: \(O(n·W)\) → \(O(W)\). (Iterate w in reverse to avoid overwriting values you still need.)
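A sketch of the one-row knapsack with the reverse capacity loop (the item values and weights below are illustrative):

```python
def knapsack(values: list, weights: list, W: int) -> int:
    # dp[w] = best value achievable at capacity w using items seen so far
    dp = [0] * (W + 1)
    for val, wt in zip(values, weights):
        # Iterate capacities high-to-low so dp[w - wt] still holds the
        # PREVIOUS row's value — otherwise an item could be taken twice.
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```

Iterating w forward instead would turn this into the unbounded knapsack, where each item may be reused.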
This optimization is straightforward with tabulation. With memoization, you'd need to manually track which cached states are still needed — much harder.