Amortized Analysis
Think about a gym membership. Most months you just pay $30 and go. But once a year there's a $300 annual equipment fee. If you average that fee across all 12 months, it's only $25 extra per month. The big payment hurts once — but spread out, it's cheap.
Amortized analysis works exactly the same way. Instead of asking "how slow can a single operation be?", we ask "what is the average cost per operation over a long sequence of operations?" Some operations are expensive, but if they're rare enough, the long-run average stays low.
The Classic Example: Dynamic Array Push
A dynamic array (Python's list, Java's ArrayList) starts small and doubles in capacity whenever it fills up. Most push calls are \(O(1)\) — just write to the next slot. But when the array is full, it must allocate a new array twice as large and copy everything over. That resize costs \(O(n)\).
Seeing that one expensive resize might make you think pushing is slow. But look at the whole picture: if the array just doubled from size n/2 to n, you copied n/2 elements. Before that resize could happen again, you must do another n/2 cheap pushes. So each element is "paying" for its own eventual copy. Spread across all pushes, the amortized cost per push is \(O(1)\).
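To make this concrete, here is a minimal sketch of a doubling dynamic array (the class name `DynamicArray` and the `copies` counter are illustrative, not any real library's API). The counter tallies every element moved during resizes, so you can see the total copying stay proportional to the number of pushes:

```python
class DynamicArray:
    """Minimal dynamic array sketch: doubles capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0  # total elements copied across all resizes

    def push(self, value):
        if self.size == self.capacity:        # full: resize, O(n)
            new_data = [None] * (self.capacity * 2)
            for i in range(self.size):        # copy every existing element
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
            self.capacity *= 2
        self.data[self.size] = value          # the common case: O(1) write
        self.size += 1

arr = DynamicArray()
for i in range(1024):
    arr.push(i)
# Resizes copied 1 + 2 + 4 + ... + 512 = 1023 elements -- less than
# one copy per push on average.
print(arr.copies)  # 1023
```

Running it shows the point of the paragraph above: 1024 pushes trigger resizes that copy only 1023 elements in total, so the copying work per push is bounded by a constant.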
The Three Methods for Proving It
Aggregate method: Count the total work for n operations, then divide by n. For n pushes with doubling, the resizes copy \(1 + 2 + 4 + \dots + n/2 < n\) elements in total. Adding the n cheap writes, n pushes cost \(O(n)\) total → \(O(1)\) each.
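The aggregate count can be checked directly by simulating the cost of n pushes (the function name `total_push_cost` is illustrative; "cost" here counts element-writes, one per insertion plus one per element copied at a resize):

```python
def total_push_cost(n):
    """Aggregate method: total element-writes for n pushes with doubling."""
    cost = 0
    capacity = 1
    size = 0
    for _ in range(n):
        if size == capacity:
            cost += size      # resize: copy all existing elements
            capacity *= 2
        cost += 1             # write the new element
        size += 1
    return cost

for n in (10, 1000, 10**6):
    print(n, total_push_cost(n), total_push_cost(n) / n)
# The average cost per push stays below 3 no matter how large n gets.
```

The printed averages hover around 2, never exceeding 3: exactly the \(O(1)\) amortized bound the division argument predicts.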
Accounting method: Charge each push 3 units of "credit": 1 pays for the insertion itself, and 2 are banked. When a resize happens, each element added since the last resize spends its 2 banked credits — one to copy itself, and one to copy an older element whose credits were used up in an earlier resize. The bank never goes negative, so 3 units per push covers all the work: amortized \(O(1)\).
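The invariant — the credit bank never goes negative — can be verified by simulation. This sketch (the name `accounting_simulation` is illustrative) deposits 3 credits per push and withdraws the actual cost of each operation:

```python
def accounting_simulation(n):
    """Deposit 3 credits per push, withdraw actual costs.
    Returns the lowest bank balance ever observed."""
    bank = 0
    capacity = 1
    size = 0
    min_bank = 0
    for _ in range(n):
        bank += 3             # the amortized charge for this push
        if size == capacity:
            bank -= size      # pay for copying every existing element
            capacity *= 2
        bank -= 1             # pay for the write itself
        size += 1
        min_bank = min(min_bank, bank)
    return min_bank

print(accounting_simulation(10_000))  # never drops below 0
```

A non-negative result for every n is exactly the correctness condition of the accounting method: the prepaid credits always cover the real work.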
Potential method: Define a potential Φ (like stored energy) that rises as the array fills and drops at a resize. The amortized cost of an operation is its actual cost plus ΔΦ. A good Φ for dynamic arrays is 2·(size) − capacity: with it, every push — cheap or resizing — has amortized cost exactly 3. This method scales to complex structures where intuition alone isn't enough.
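The claim that amortized cost = actual cost + ΔΦ stays constant with Φ = 2·size − capacity can also be checked mechanically (the function names below are illustrative). A cheap push has actual cost 1 and ΔΦ = 2; a resizing push has actual cost 1 + size, but Φ drops by size − 2, so both come out to 3:

```python
def potential(size, capacity):
    """Phi = 2*size - capacity (the potential suggested above)."""
    return 2 * size - capacity

def max_amortized_cost(n):
    """Largest (actual cost + delta-Phi) seen over n pushes with doubling."""
    capacity = 1
    size = 0
    worst = 0
    for _ in range(n):
        phi_before = potential(size, capacity)
        actual = 1                 # the write itself
        if size == capacity:
            actual += size         # resize copies every element
            capacity *= 2
        size += 1
        phi_after = potential(size, capacity)
        worst = max(worst, actual + (phi_after - phi_before))
    return worst

print(max_amortized_cost(100_000))  # 3
```

Every push, expensive or not, has amortized cost exactly 3 under this potential — a constant, which is the \(O(1)\) bound stated above.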