Advanced Topics

Branch & Bound

Backtracking with a crystal ball — prune branches that can't possibly win
Best case: polynomial (tight bounds prune aggressively)
Worst case: exponential (bounds never help)
Goal: exact optimal solution to NP-hard problems

Imagine you're shopping for the cheapest flight with a layover. You're at the airport considering Route A (New York → Chicago → LA) and Route B (New York → Dallas → LA). Before you even book the leg to Chicago, you check: "What's the absolute minimum the rest of this route could cost?" If even the best-case completion of Route A is more expensive than a complete Route B you already found — skip Route A entirely.

That's Branch and Bound: backtracking upgraded with a bounding function that estimates the best possible outcome of any partial solution. If the best possible outcome can't beat what you've already found, prune the whole branch — don't go there.

The two parts of the name

Branch — split the problem into sub-choices, just like backtracking builds a decision tree.

Bound — for each partial solution, compute a bound: the best possible objective value achievable by completing it. For minimization, this is a lower bound. For maximization, an upper bound.

The pruning rule for minimization: if lower_bound(node) ≥ best_complete_solution_so_far, prune — nothing in this subtree can improve the current best.
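The branch/bound/prune loop can be sketched generically. This is a minimal best-first sketch for minimization, assuming the caller supplies four hypothetical callables (`lower_bound`, `expand`, `is_complete`, `cost`) that define the problem:

```python
import heapq
import itertools

def branch_and_bound_min(root, lower_bound, expand, is_complete, cost):
    """Best-first Branch and Bound for minimization (generic sketch).

    The callables are hypothetical, problem-specific parameters:
    lower_bound(node) -> optimistic estimate of the cheapest completion
    expand(node)      -> child partial solutions (the "branch" step)
    is_complete(node) -> True when node is a full solution
    cost(node)        -> exact cost of a complete solution
    """
    best = float("inf")
    tie = itertools.count()  # tie-breaker so the heap never compares nodes directly
    heap = [(lower_bound(root), next(tie), root)]
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best:            # the pruning rule: nothing here can improve best
            continue
        if is_complete(node):
            best = min(best, cost(node))
            continue
        for child in expand(node):
            b = lower_bound(child)
            if b < best:             # only enqueue branches that might still win
                heapq.heappush(heap, (b, next(tie), child))
    return best
```

Plain backtracking is this same loop without the two bound checks; everything else is unchanged.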

Traversal strategies

The order you expand nodes dramatically affects how quickly you find a good solution to tighten the bound:

Depth-first (stack): finds complete solutions quickly, tightening the bound early. Memory-efficient. May explore poor branches before good ones.

Best-first (priority queue): always expands the node with the best (lowest) bound next. With a valid bound, it finds the optimal solution while exploring the fewest nodes of any strategy using that bound. Downside: the frontier may require exponential memory.

In practice, best-first search with a good bounding function is often dramatically faster than the alternatives.
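The two strategies differ only in the frontier data structure: a LIFO stack for depth-first, a min-heap keyed on the bound for best-first. A small illustrative sketch (using a simplified version of the knapsack instance below as a stand-in problem) that counts how many nodes each traversal pops:

```python
import heapq

def count_pops(use_best_first):
    """Run one tiny 0/1 knapsack with the same bound under either traversal.

    Returns (best_value, nodes_popped). Illustrative sketch only: the
    instance and structure are hardcoded so the two frontiers can be compared.
    """
    values, weights, capacity = [3, 4, 5, 6], [2, 3, 4, 5], 8
    items = sorted(zip(values, weights), key=lambda x: x[0] / x[1], reverse=True)
    n = len(items)

    def bound(idx, cap, val):  # fractional (relaxed) upper bound
        for v, w in items[idx:]:
            if w <= cap:
                val, cap = val + v, cap - w
            else:
                return val + v * cap / w
        return val

    best, pops = 0, 0
    frontier = [(-bound(0, capacity, 0), 0, capacity, 0)]
    while frontier:
        if use_best_first:
            entry = heapq.heappop(frontier)   # most promising node first
        else:
            entry = frontier.pop()            # newest node first (stack)
        neg_ub, idx, cap, val = entry
        pops += 1
        if -neg_ub <= best or idx == n:
            continue
        v, w = items[idx]
        for take in (True, False):
            if take and w > cap:
                continue
            nval, ncap = (val + v, cap - w) if take else (val, cap)
            best = max(best, nval)            # any partial solution is feasible
            ub = bound(idx + 1, ncap, nval)
            if ub > best:
                item = (-ub, idx + 1, ncap, nval)
                if use_best_first:
                    heapq.heappush(frontier, item)
                else:
                    frontier.append(item)
    return best, pops
```

Both traversals return the same optimum; best-first typically pops fewer nodes, at the price of keeping a larger frontier in memory.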

0/1 Knapsack — worked example

Branch on each item: include it or skip it. The upper bound for a partial solution: take the value accumulated so far, then greedily fill remaining capacity with the fractional knapsack solution on remaining items (allow breaking items). Since fractional filling is always at least as good as 0/1 filling, this is a valid upper bound. If this upper bound is below the best complete solution found, prune.

With items sorted by value-per-weight ratio, the fractional bound is tight, and pruning is aggressive.
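As a concrete sketch of the bound itself, assuming items are already sorted by value-per-weight ratio:

```python
def fractional_bound(items, capacity):
    """Fractional-knapsack upper bound.

    `items` is a list of (value, weight) pairs pre-sorted by value/weight
    descending. Whole items are taken greedily; the first item that no
    longer fits is taken fractionally, then we stop.
    """
    bound, cap = 0.0, capacity
    for value, weight in items:
        if weight <= cap:
            bound += value
            cap -= weight
        else:
            bound += value * cap / weight  # break the item: fraction of its value
            break
    return bound

# Ratios: 1.5, 1.33, 1.25, 1.2 -- already sorted
items = [(3, 2), (4, 3), (5, 4), (6, 5)]
print(fractional_bound(items, 8))  # 10.75
```

Here the bound is 10.75 while the true 0/1 optimum is 10 — close enough to prune almost everything that cannot reach 10.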

Traveling Salesman Problem

A lower bound for a partial TSP tour: (cost of edges chosen so far) + (minimum spanning tree of unvisited cities). This underestimates the remaining cost — a valid lower bound. When this lower bound exceeds the best complete tour found, prune. With tight bounds, Branch and Bound solves TSP instances with hundreds of cities exactly — though worst case stays exponential.
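A sketch of that MST-based bound, using Prim's algorithm over the unvisited cities (the `dist` matrix and the `partial_cost` bookkeeping are assumptions of this sketch, not a full TSP solver):

```python
def mst_cost(nodes, dist):
    """Prim's algorithm: total weight of a minimum spanning tree over `nodes`.

    `dist` is a symmetric distance matrix indexed by city number.
    """
    if len(nodes) <= 1:
        return 0.0
    nodes = list(nodes)
    in_tree = {nodes[0]}
    total = 0.0
    while len(in_tree) < len(nodes):
        # cheapest edge crossing from the tree to a node outside it
        w, nxt = min((dist[u][v], v)
                     for u in in_tree for v in nodes if v not in in_tree)
        total += w
        in_tree.add(nxt)
    return total

def tsp_lower_bound(partial_cost, unvisited, dist):
    """Lower bound for a partial tour, as described above:
    cost of the edges already chosen plus an MST of the unvisited cities."""
    return partial_cost + mst_cost(unvisited, dist)
```

The MST never costs more than the cheapest path visiting the same cities, so this can only underestimate — exactly what a valid lower bound requires.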

When to use Branch and Bound

Use it when you need a provably optimal answer and the problem instance is small enough. For larger instances: approximation algorithms give polynomial-time answers within a guaranteed ratio of optimal; dynamic programming works when the problem has suitable substructure; heuristics are fast with no guarantee. The quality of the bounding function is everything — poor bounds mean near-exhaustive search; tight bounds can tame surprisingly large instances.

Note: The bounding function is the soul of Branch and Bound. A bound so loose that it never prunes (say, "the remaining cost is at least zero") tells you nothing useful. A bound that is fast to compute and close to the true optimum can prune 99% of the tree. Invest in your bound.

0/1 Knapsack via Branch & Bound

import heapq

def knapsack_bb(weights, values, capacity):
    n = len(weights)
    # Sort by value/weight ratio descending
    items = sorted(zip(values, weights), key=lambda x: x[0] / x[1], reverse=True)

    def upper_bound(idx, remaining_cap, current_val):
        """Fractional knapsack upper bound from item idx onward."""
        val = current_val
        for i in range(idx, n):
            v, w = items[i]
            if w <= remaining_cap:
                val += v
                remaining_cap -= w
            else:
                val += v * (remaining_cap / w)
                break
        return val

    best = 0
    # heap entries: (-upper_bound, idx, remaining_cap, current_val)
    heap = [(-upper_bound(0, capacity, 0), 0, capacity, 0)]
    while heap:
        neg_ub, idx, rem, cur = heapq.heappop(heap)
        if -neg_ub <= best or idx == n:
            continue
        v, w = items[idx]
        # Branch: include item
        if w <= rem:
            new_val = cur + v
            best = max(best, new_val)
            ub = upper_bound(idx + 1, rem - w, new_val)
            if ub > best:
                heapq.heappush(heap, (-ub, idx + 1, rem - w, new_val))
        # Branch: skip item
        ub = upper_bound(idx + 1, rem, cur)
        if ub > best:
            heapq.heappush(heap, (-ub, idx + 1, rem, cur))
    return best

weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
print(knapsack_bb(weights, values, 8))  # 10
Output
10

Complexity

🎯 Best case (tight bounds): with bounds that closely match the true optimum, most branches are pruned immediately and only a polynomial number of nodes may be visited — \(O(n^c)\) possible.

💥 Worst case (loose bounds): if the bounds never prune anything, Branch and Bound is just backtracking with extra overhead — it degrades to exhaustive search, \(O(2^n)\) or \(O(n!)\).

📊 Best-first traversal: guarantees the optimum with the fewest nodes explored, but the priority queue costs \(O(\log N)\) per node and needs \(O(N)\) memory, where N is the number of nodes explored.

📐 Fractional knapsack bound: a greedy fill over the remaining items costs \(O(n)\) per node after an \(O(n \log n)\) pre-sort, and produces tight bounds that prune aggressively.

Quick check

What does Branch and Bound add on top of plain backtracking?
