Fast Power
How would you compute \(2^{64}\)? The obvious way: multiply 2 by itself 64 times. That's 63 multiplications.
Here's a smarter way. Start with 2. Square it → 4. Square it → 16. Square it → 256. Keep squaring 6 times in all and you get \(2^{64}\). That's 6 multiplications instead of 63. And the gap only grows: \(2^{1{,}000{,}000}\) takes about 20 squarings (plus a few extra multiplications), while the naive method needs a million steps.
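The squaring chain above can be sketched in a few lines of Python (the variable names are mine, not from the text):

```python
# Repeated squaring: reach 2^64 in 6 squarings instead of 63 multiplications.
x = 2
for _ in range(6):
    x = x * x  # each pass squares the running value, doubling the exponent
print(x == 2**64)  # → True
```

Each squaring doubles the exponent: 1 → 2 → 4 → 8 → 16 → 32 → 64.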
This trick — repeated squaring — is the heart of fast power (also called exponentiation by squaring). It's a classic example of divide and conquer.
The core idea
If n is even: \(x^n = (x^{n/2})^2\). Compute \(x^{n/2}\) once, square it. One recursive call, one multiplication.
If n is odd: \(x^n = x \cdot x^{n-1}\). Reduce to the even case with one extra multiplication.
Every even step halves n, and every odd step is immediately followed by an even one, so n at least halves every two steps. After \(O(\log n)\) steps, n reaches 0 — the base case (anything to the power 0 is 1). Total: \(O(\log n)\) multiplications.
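The two cases translate directly into a recursive function. A minimal Python sketch, assuming a non-negative integer exponent (the function name is mine):

```python
def fast_pow(x, n):
    """Compute x**n in O(log n) multiplications; n must be a non-negative integer."""
    if n == 0:
        return 1                        # base case: anything to the power 0 is 1
    if n % 2 == 0:
        half = fast_pow(x, n // 2)      # one recursive call on n/2
        return half * half              # one squaring
    return x * fast_pow(x, n - 1)       # odd: peel off one factor of x
```

For example, `fast_pow(2, 64)` performs a handful of multiplications rather than 63.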
The iterative version
Look at n in binary. Each bit tells you whether to include a power of x in the result. The iterative approach scans bits from LSB to MSB: maintain a running result and a current base. If the current bit is 1, multiply result by base. Always square the base and shift n right by 1. No recursion needed — \(O(1)\) space.
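The bit-scanning loop described above might look like this in Python (a sketch; the name is mine):

```python
def fast_pow_iter(x, n):
    """Iterative fast power: scan the bits of n from LSB to MSB, O(1) extra space."""
    result = 1
    base = x
    while n > 0:
        if n & 1:            # current bit is 1: fold this power of x into the result
            result *= base
        base *= base         # always square the base for the next bit
        n >>= 1              # shift n right by 1 to move to the next bit
    return result
```

After the k-th iteration, `base` holds \(x^{2^k}\), so each set bit of n contributes exactly the right factor.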
Beyond integers: matrix fast power
The same trick works for any associative operation — including matrix multiplication. Raising the matrix [[1,1],[1,0]] to the nth power yields \([[F_{n+1}, F_n], [F_n, F_{n-1}]]\), so its off-diagonal entry is the nth Fibonacci number. Compute that power with fast power and you get Fibonacci(n) in \(O(\log n)\) matrix multiplications instead of \(O(n)\). This is closely tied to solving recurrence relations. It matters when n can be \(10^{18}\).
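A minimal sketch of matrix fast power applied to Fibonacci, assuming plain Python integers (in practice, for n near \(10^{18}\), one would usually reduce modulo some m at every multiplication; the function names are mine):

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices of Python ints (arbitrary precision)."""
    return [
        [a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
        [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]],
    ]

def fib(n):
    """nth Fibonacci number via fast power on [[1,1],[1,0]], O(log n) matrix multiplies."""
    result = [[1, 0], [0, 1]]          # identity matrix plays the role of 1
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]                # M^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
```

It is the same iterative loop as before, with integer multiplication swapped for `mat_mul` and 1 swapped for the identity matrix — associativity is the only property the loop relies on.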