Dynamic Programming

DP on Intervals

Solve tiny sections first, then merge them into bigger ones
Time: \(O(n^3)\) · Space: \(O(n^2)\)

Imagine solving a jigsaw puzzle. You wouldn't try to place every piece at once — you'd first complete tiny corner sections, then merge those into bigger regions, until the whole picture comes together. That's exactly how Interval DP works: solve small subproblems first, then combine them into larger ones.

The state is always a range [i, j] of some sequence. The key insight: to solve a large interval, you must have already solved all its sub-intervals. This forces a specific fill order — from short intervals to long.

The Pattern

State: dp[i][j] = optimal answer for the problem restricted to range [i, j].

Transition: Pick a split point k inside [i, j] and combine solutions for sub-intervals — usually [i, k] and [k+1, j], or [i, k-1] + k + [k+1, j].

Fill order: Outer loop over length l = 1, 2, …, n. Inner loop over starting positions i (with j = i + l - 1 determined). Innermost loop over split points k.

Base case: dp[i][i] — single elements, often 0 or trivial.
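The four ingredients above can be put together as a short skeleton. This is a sketch, not code from the article: `interval_dp` and its `combine(i, k, j)` cost callback are hypothetical names, and the objective is fixed to minimization.

```python
def interval_dp(a, combine):
    """Generic fill order for interval DP (minimization).

    combine(i, k, j) is a hypothetical cost of joining [i, k] and [k+1, j].
    """
    n = len(a)
    dp = [[0] * n for _ in range(n)]       # base case: dp[i][i] = 0
    for length in range(2, n + 1):         # short intervals first
        for i in range(0, n - length + 1):
            j = i + length - 1             # j is determined by i and length
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] + combine(i, k, j)
                           for k in range(i, j))
    return dp[0][n - 1]
```

Any concrete interval-DP problem then only has to supply its own `combine` cost and, for maximization problems, flip `min` to `max`.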

Burst Balloons — The Canonical Twist

You have n balloons, each with a value. Popping balloon i earns nums[i-1] × nums[i] × nums[i+1] coins (neighbors, with virtual 1s at boundaries). Maximize total coins.

The clever reframe: Instead of asking "which balloon to pop first", ask "which balloon is popped last in interval [i, j]?" When balloon k is the last to go in [i, j], all others are already gone — so k's neighbors are fixed as the interval boundaries. This gives: dp[i][j] = max over k of (dp[i][k-1] + nums[i-1]×nums[k]×nums[j+1] + dp[k+1][j]).

Stone Merge

Merge n piles of stones into one (adjacent piles only). The cost of each merge = total stones in the merged group. Minimize total cost. dp[i][j] = min cost to merge piles i..j into one pile. Try all split points, and add the cost of the final combining merge — the sum of stones in i..j, computed in O(1) with prefix sums.
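A sketch of that recurrence in Python (the function name `min_merge_cost` and the prefix-sum helper are my own; the article only states the recurrence):

```python
def min_merge_cost(piles):
    """Minimum total cost to merge adjacent piles into one pile.

    Each merge costs the total number of stones in the merged group.
    """
    n = len(piles)
    # prefix[i] = sum of piles[0..i-1], so sum of piles[i..j] is
    # prefix[j+1] - prefix[i]
    prefix = [0]
    for p in piles:
        prefix.append(prefix[-1] + p)
    dp = [[0] * n for _ in range(n)]       # dp[i][i] = 0: one pile, no merge
    for length in range(2, n + 1):         # short intervals first
        for i in range(0, n - length + 1):
            j = i + length - 1
            dp[i][j] = min(dp[i][k] + dp[k + 1][j] for k in range(i, j))
            dp[i][j] += prefix[j + 1] - prefix[i]   # final combining merge
    return dp[0][n - 1]

print(min_merge_cost([4, 1, 1, 4]))  # 18: merge 1+1, then 2+4, then 4+6
```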

Palindrome Partitioning

Minimum cuts to split a string so every piece is a palindrome. A 1D DP problem in disguise: dp[i] = min cuts for s[0..i]. Try all j ≤ i where s[j..i] is a palindrome: dp[i] = min(dp[j-1] + 1), with dp[i] = 0 if s[0..i] is itself a palindrome. Pre-compute which substrings are palindromes separately.
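A sketch of the 1D cut DP with the palindrome pre-computation (the names `min_cuts` and `is_pal` are assumptions, not from the article):

```python
def min_cuts(s):
    """Minimum cuts so that every piece of s is a palindrome."""
    n = len(s)
    # is_pal[j][i] = True iff s[j..i] is a palindrome.
    # A span is a palindrome if its ends match and the inside is a
    # palindrome (or has length < 2); inner spans are filled first.
    is_pal = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            if s[j] == s[i] and (i - j < 2 or is_pal[j + 1][i - 1]):
                is_pal[j][i] = True
    # dp[i] = min cuts for the prefix s[0..i]
    dp = [0] * n
    for i in range(n):
        if is_pal[0][i]:
            dp[i] = 0                      # whole prefix is one palindrome
        else:
            dp[i] = min(dp[j - 1] + 1
                        for j in range(1, i + 1) if is_pal[j][i])
    return dp[n - 1]

print(min_cuts("aab"))  # 1: "aa" | "b"
```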

Note: The 'last balloon' trick in Burst Balloons is a classic insight: it's easier to reason about what happens last inside an interval because the boundaries are fixed — whereas reasoning about what happens first leaves the remaining structure chaotic.

Burst Balloons — interval DP

def max_coins(nums):
    nums = [1] + nums + [1]  # virtual boundary balloons worth 1 coin each
    n = len(nums)
    # dp[i][j] = max coins from popping every balloon strictly between
    # i and j (open interval; i and j index the padded array and survive)
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n):                # interval length, short to long
        for i in range(0, n - length):
            j = i + length
            for k in range(i + 1, j):         # k = last balloon popped in (i, j)
                dp[i][j] = max(dp[i][j],
                               dp[i][k] + nums[i] * nums[k] * nums[j] + dp[k][j])
    return dp[0][n - 1]
print(max_coins([3, 1, 5, 8])) # 167
print(max_coins([1, 5])) # 10
Output
167
10

Complexity

⏱️ Time: \(O(n^2)\) intervals × \(O(n)\) split points per interval — three nested loops. Cubic, \(O(n^3)\).
📊 Space: a dp[n][n] table storing an answer for every interval. Quadratic, \(O(n^2)\).
📐 Fill order: outer loop over n lengths, middle loop over n starting positions, inner loop over n split points — \(O(n^2)\) intervals × \(O(n)\) splits each.
🔢 Matrix chain multiplication (MCM): a special case of interval DP with the same pattern and identical \(O(n^3)\) complexity.

Quick check

In Burst Balloons, why do we think about which balloon is popped LAST in an interval — not first?

Continue reading

Matrix Chain Multiplication
The order you multiply matrices matters enormously — find the cheapest order
DP Pattern
Never solve the same puzzle piece twice — write it down and look it up next time