Heap Sort
Heap sort elegantly combines the two key heap operations — build-heap and sift-down — into a sorting algorithm. It offers merge sort's \(O(n \log n)\) worst-case guarantee together with truly in-place operation in \(O(1)\) extra space, which even quicksort can't quite claim (its recursion needs \(O(\log n)\) stack) — but with a catch we'll get to.
The algorithm has two phases:
- Build a max-heap from the input array using Floyd's heapify algorithm — \(O(n)\)
- Extract repeatedly: swap the root (current maximum) with the last element, shrink the heap boundary by 1, then sift down the new root — repeat n-1 times — \(O(n \log n)\)
Each extraction places the current maximum at the end of the array, building a sorted sequence from the right. No extra array needed — just the original array and a moving "heap boundary".
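The two phases can be sketched in Python (a minimal, uncommented-page sketch — function names are illustrative, not from the original):

```python
def heap_sort(a):
    """In-place heap sort: build a max-heap, then repeatedly extract the max."""
    n = len(a)

    def sift_down(i, end):
        # Restore the max-heap property for the subtree rooted at i,
        # considering only indices < end (the current heap boundary).
        while True:
            left, right = 2 * i + 1, 2 * i + 2
            largest = i
            if left < end and a[left] > a[largest]:
                largest = left
            if right < end and a[right] > a[largest]:
                largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    # Phase 1: Floyd's heapify — sift down every internal node, bottom-up. O(n)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)

    # Phase 2: swap the max to the end, shrink the boundary, sift down. O(n log n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heap_sort([4, 1, 3, 2, 6]))  # → [1, 2, 3, 4, 6]
```

Note that `sift_down` takes an explicit `end` parameter: the same routine serves both phases, differing only in where the heap boundary sits.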
*Interactive demo: heap sort step-by-step on [4, 1, 3, 2, 6].*
Why not use heap sort everywhere?
Despite its perfect asymptotic profile, heap sort is rarely used in practice compared to quicksort or merge sort. The reason is cache performance.
Quicksort accesses memory sequentially during partitioning. Heap sort's sift-down jumps between parent and child positions \(i\), \(2i+1\), \(2i+2\) — for large arrays these jumps are cache misses, typically making heap sort 2-5× slower in practice even though the Big-O is the same.
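To see why those jumps hurt, here is a small sketch (illustrative only) that records the indices a sift-down from the root would visit in a million-element heap, always following the left child. The stride doubles at every level, so after the first few levels each access lands far from the previous one:

```python
# Indices visited by a root-to-leaf sift-down path in an n-element heap.
# Each level moves from i to child 2i+1 (or 2i+2); following the left
# child gives indices 0, 1, 3, 7, 15, ... — the gap doubles every step,
# so successive accesses quickly leave the current cache line.
n = 1_000_000
i = 0
path = []
while 2 * i + 1 < n:
    path.append(i)
    i = 2 * i + 1  # left child, for illustration
path.append(i)

strides = [b - a for a, b in zip(path, path[1:])]
print(path[:6])     # → [0, 1, 3, 7, 15, 31]
print(strides[:6])  # → [1, 2, 4, 8, 16, 32]
```

By contrast, quicksort's partition pointers move one element at a time, so nearly every access hits a line already in cache.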
Heap sort vs merge sort vs quicksort
- Heap sort: \(O(n \log n)\) always, \(O(1)\) space, cache-unfriendly, not stable
- Merge sort: \(O(n \log n)\) always, \(O(n)\) space, cache-friendly (sequential), stable
- Quicksort: \(O(n \log n)\) average, \(O(n^2)\) worst, \(O(\log n)\) stack, very cache-friendly, not stable
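If you just want heap-based sorting in Python without the in-place trick, the extract phase maps directly onto the stdlib `heapq` module (a min-heap, so repeated pops come out ascending — at the cost of the \(O(n)\) copy):

```python
import heapq

def heapq_sort(items):
    # heapq implements a binary min-heap on a plain list.
    heap = list(items)        # O(n) extra space: we copy rather than sort in place
    heapq.heapify(heap)       # Floyd's heapify, O(n)
    # Repeatedly pop the minimum — n pops of O(log n) each.
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapq_sort([4, 1, 3, 2, 6]))  # → [1, 2, 3, 4, 6]
```

(`heapq_sort` is an assumed name for this sketch.) In practice, though, Python's built-in `sorted` — a merge-sort variant (Timsort) — is the right default for the cache-behavior reasons above.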