Complexity & Analysis

Amortized Analysis

One big bill, paid in tiny installments
push (typical): \(O(1)\) — just write
push (with resize): \(O(n)\) — rare spike
push (amortized): \(O(1)\)* — averaged out
n pushes total: \(O(n)\) guaranteed

Think about a gym membership. Most months you just pay $30 and go. But once a year there's a $300 annual equipment fee. If you average that fee across all 12 months, it's only $25 extra per month. The big payment hurts once — but spread out, it's cheap.

Amortized analysis works exactly the same way. Instead of asking "how slow can a single operation be?", we ask "what is the average cost per operation over a long sequence of operations?" Some operations are expensive, but if they're rare enough, the long-run average stays low.

The Classic Example: Dynamic Array Push

A dynamic array (Python's list, Java's ArrayList) starts small and doubles in capacity whenever it fills up. Most push calls are \(O(1)\) — just write to the next slot. But when the array is full, it must allocate a new array twice as large and copy everything over. That resize costs \(O(n)\).
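You can watch this over-allocation on a real Python list: `sys.getsizeof` reports the list's memory footprint, which stays flat between growth spurts and jumps at a resize. (CPython's growth factor is modest over-allocation rather than exact doubling, but the spike-then-flat pattern is the same — this is just an illustration.)

```python
import sys

# Append to a plain Python list and watch its memory footprint.
# The reported size stays flat between growth spurts and jumps
# only when the list actually resizes.
sizes = []
lst = []
for i in range(32):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

jumps = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)
print(f"32 appends, but only {jumps} capacity jumps")
```

Most appends leave the footprint untouched — only a handful trigger a real reallocation.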

Seeing that one expensive resize might make you think pushing is slow. But look at the whole picture: if the array just doubled from size n/2 to n, you copied n/2 elements. Before that resize could happen again, you must do another n/2 cheap pushes. So each element is "paying" for its own eventual copy. Spread across all pushes, the amortized cost per push is \(O(1)\).

The Three Methods for Proving It

Aggregate method: Count the total work for n operations, then divide by n. For n pushes with doubling: resizes copy 1 + 2 + 4 + … + n/2 elements = \(O(n)\) total. So n pushes cost \(O(n)\) total → \(O(1)\) each.
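A quick sketch of the aggregate count (the `total_copy_work` helper is illustrative, not from the source): simulate n pushes into a doubling array, tally every element copied at a resize, and confirm the total stays within a constant factor of n.

```python
def total_copy_work(n):
    """Total elements copied across all doubling resizes
    while absorbing n pushes into an array of capacity 1."""
    cap, copied = 1, 0
    for size in range(n):
        if size == cap:      # array full: copy everything, then double
            copied += size
            cap *= 2
    return copied

for n in (10, 100, 1000, 10**6):
    print(n, total_copy_work(n), total_copy_work(n) / n)
```

The ratio hovers around 1 no matter how large n gets: the copies form a geometric series bounded by 2n, so each push's share is a constant.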

Accounting method: Charge each push 3 credits: 1 pays for the insertion itself, 2 go in the bank. When a resize happens, each recently pushed element's 2 banked credits pay for copying itself and for copying one element from the older half of the array. Showing the bank balance never goes negative proves every resize is fully pre-paid.
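That bookkeeping can be simulated directly — a sketch with an illustrative `simulate_credits` helper that deposits the per-push charge, spends 1 credit per write and 1 per copied element, and tracks the lowest bank balance ever seen:

```python
def simulate_credits(n_pushes, charge=3):
    """Deposit `charge` credits per push; spend 1 to insert and
    1 per element copied at a resize. Returns the minimum bank
    balance seen -- it must stay >= 0 for the scheme to work."""
    bank, cap, size = 0, 1, 0
    low = 0
    for _ in range(n_pushes):
        bank += charge       # deposit the amortized charge
        if size == cap:      # full: copy `size` elements, 1 credit each
            bank -= size
            cap *= 2
        bank -= 1            # pay for the write itself
        size += 1
        low = min(low, bank)
    return low

print(simulate_credits(10_000))     # never dips below zero with charge=3
print(simulate_credits(10_000, 2))  # charge=2 is too cheap: goes negative
```

Charging 3 keeps the balance non-negative forever; charging only 2 bankrupts the account at the first big resize — which is exactly why 3 is the right amortized price.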

Potential method: Define a potential Φ (like stored energy) that rises as the array fills and drops at resize. The amortized cost = actual cost + ΔΦ. A good Φ for dynamic arrays is 2·(size) − capacity. This method scales to complex structures where intuition alone isn't enough.
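Here is a sketch of that calculation (the `amortized_costs` helper is illustrative): with Φ = 2·size − capacity, every push — cheap or resizing — works out to exactly 3 units of amortized cost.

```python
def amortized_costs(n_pushes):
    """Potential method check: with Phi = 2*size - capacity,
    amortized cost = actual cost + (Phi_after - Phi_before)
    should be a constant for every push, resize or not."""
    cap, size = 1, 0
    costs = []
    for _ in range(n_pushes):
        phi_before = 2 * size - cap
        actual = 1                   # the write itself
        if size == cap:              # resize: copy `size` elements
            actual += size
            cap *= 2
        size += 1
        phi_after = 2 * size - cap
        costs.append(actual + phi_after - phi_before)
    return costs

print(set(amortized_costs(1000)))   # {3}
```

A cheap push costs 1 but raises Φ by 2; a resizing push costs size + 1 but drops Φ by size − 2. Either way the sum is 3 — the potential "absorbs" the spike.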

Note: Amortized \(O(1)\) does NOT mean every operation is fast. It means that no matter how many operations you run, the total cost divided by the count is \(O(1)\). One occasional spike is fine — as long as it's rare enough.

Dynamic Array Push — Watching Resizes

class DynamicArray:
    def __init__(self):
        self._cap = 1
        self._size = 0
        self._data = [None] * self._cap

    def push(self, val):
        # Grow first if full, then write to the next free slot.
        if self._size == self._cap:
            self._resize()
        self._data[self._size] = val
        self._size += 1

    def _resize(self):
        # Double the capacity and copy every existing element over.
        new_cap = self._cap * 2
        print(f"  Resize! {self._cap} -> {new_cap}, copying {self._cap} elements")
        new_data = [None] * new_cap
        for i in range(self._size):
            new_data[i] = self._data[i]
        self._data = new_data
        self._cap = new_cap

arr = DynamicArray()
for i in range(9):
    print(f"Push {i}")
    arr.push(i)
Output
Push 0
Push 1
  Resize! 1 -> 2, copying 1 elements
Push 2
  Resize! 2 -> 4, copying 2 elements
Push 3
Push 4
  Resize! 4 -> 8, copying 4 elements
Push 5
Push 6
Push 7
Push 8
  Resize! 8 -> 16, copying 8 elements

Complexity

⚡ Push (no resize needed) — the common case, happens most of the time: instant, just write — \(O(1)\)
💸 Push (triggers resize) — rare spike: doubles capacity, copies all elements — linear in current size, \(O(n)\)
📊 Push (amortized average) — spread the resize cost across all pushes that led to it: constant on average — \(O(1)\)*
🧮 n pushes total — 1 + 2 + 4 + … + n < 2n copies across all resizes: linear total work — \(O(n)\)

Quick check

A dynamic array doubles its capacity on overflow. After n total pushes, what is the total copy work done across all resize events?
