Big-O, Theta, Omega
Imagine you're cooking for a dinner party. A recipe for grilled cheese takes about the same time whether you're feeding 2 people or 200 — you just stack them on the grill together. That's \(O(1)\): constant time, no matter the size.
But if the recipe says "taste every dish and compare it to every other dish", cooking for 200 people means nearly 20,000 pairwise comparisons. That's \(O(n^2)\): the work explodes as the guest list grows.
Big-O notation is just a way to describe that growth rate — how many more steps does an algorithm need as the input gets bigger? We don't care about exact milliseconds (those depend on your computer). We care about the shape of the growth.
The Core Idea: Drop the Details
If an algorithm does \(3n^2 + 100n + 42\) steps, Big-O says that's simply \(O(n^2)\). Here's why: when n is a million, the \(n^2\) term completely drowns out everything else. So we drop constants and lower-order terms and just keep the dominant piece.
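You can see the dominant term take over numerically. A minimal sketch (the function name `total_steps` is just an illustrative stand-in for an algorithm's step count):

```python
def total_steps(n):
    """Hypothetical step count: 3n^2 + 100n + 42."""
    return 3 * n**2 + 100 * n + 42

# What fraction of the total work comes from the n^2 term alone?
for n in (10, 1_000, 1_000_000):
    share = 3 * n**2 / total_steps(n)
    print(f"n={n}: the n^2 term is {share:.4%} of the work")
```

For small n the lower-order terms matter, but by n = 1,000,000 the \(n^2\) term accounts for essentially all of the work, which is exactly why Big-O keeps only that piece.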
Two rules to remember: (1) Drop constants — \(5n\) is \(O(n)\). (2) Drop smaller terms — \(n^2 + n\) is \(O(n^2)\). Sequential steps add; nested loops multiply.
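Both combining rules are easy to verify with toy step counters (hypothetical functions written only to count iterations):

```python
def sequential(n):
    """Two loops one after another: n + n = 2n steps, still O(n)."""
    steps = 0
    for _ in range(n):
        steps += 1
    for _ in range(n):
        steps += 1
    return steps

def nested(n):
    """A loop inside a loop: n * n steps, O(n^2)."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

print(sequential(100))  # 200
print(nested(100))      # 10000
```

Doubling the input doubles `sequential` but quadruples `nested` — the signature difference between linear and quadratic growth.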
The Most Common Complexities
\(O(1)\) — Constant. Grabbing an array element by index. Same speed no matter what.
\(O(\log n)\) — Logarithmic. Binary search on a sorted list. Each step cuts the problem in half, so even a billion items only takes ~30 steps.
\(O(n)\) — Linear. Scanning every item once. Double the input, double the time.
\(O(n \log n)\) — Linearithmic. Merge sort, heap sort. The sweet spot for comparison-based sorting.
\(O(n^2)\) — Quadratic. A loop inside a loop. 10× more input = 100× more time.
\(O(2^n)\) — Exponential. Brute-force subset search. Completely infeasible past ~30 items.
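The logarithmic entry is the easiest to watch in action. Here's a minimal binary search sketch that also counts its halving steps (the step counter is added purely for illustration):

```python
def binary_search(sorted_items, target):
    """Return (index, steps) if found, (-1, steps) otherwise.

    Each iteration halves the search range, so the step count
    is at most about log2(len(sorted_items)).
    """
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
index, steps = binary_search(data, 765_432)
print(index, steps)  # a million items, found in at most ~20 halving steps
```

A million items takes at most about 20 steps; a billion, about 30 — exactly the \(O(\log n)\) behavior described above.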
Big-O, Big-Omega, Big-Theta
Big-O is an upper bound — the growth never exceeds it. Big-Omega (Ω) is a lower bound — the algorithm needs at least that many steps. Big-Theta (Θ) is both at once: the algorithm grows exactly like that function, up to constants. Merge sort is \(\Theta(n \log n)\) in all cases — best, worst, and average.
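Merge sort's tight Θ bound comes from its structure: it always splits in half (\(\log n\) levels) and always merges every element at each level (\(n\) work per level), regardless of the input's initial order. A minimal sketch:

```python
def merge_sort(items):
    """Sort by splitting in half, sorting each half, and merging.

    The split depth is log2(n) and each level does O(n) merge work,
    so the total is n log n in the best, worst, and average case.
    """
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Notice that nothing in the code checks whether the input is already sorted — the same splits and merges happen every time, which is why the bound is Θ and not just O.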