Divide & Conquer

Fast Power

Compute \(x^n\) in \(O(\log n)\) — not \(O(n)\) multiplications
time: \(O(\log n)\) · space: \(O(\log n)\) recursive, \(O(1)\) iterative

How would you compute \(2^{64}\)? The obvious way: multiply 2 by itself 64 times. That's 63 multiplications.

Here's a smarter way. Start with 2. Square it → 4. Square it → 16. Square it → 256. Keep going, six squarings in all, and you get \(2^{64}\). That's 6 multiplications instead of 63. And the gap only grows: \(2^{1{,}000{,}000}\) takes roughly 20 multiplications, while the naive method needs a million steps.

This trick — repeated squaring — is the heart of fast power (also called exponentiation by squaring). It's a classic example of divide and conquer.

The core idea

If n is even: \(x^n = (x^{n/2})^2\). Compute \(x^{n/2}\) once, square it. One recursive call, one multiplication.

If n is odd: \(x^n = x \cdot x^{n-1}\). Reduce to the even case with one extra multiplication.

Each step halves n, so after \(\log_2(n)\) steps, n reaches 0 — the base case (anything to the power 0 is 1). Total: \(O(\log n)\) multiplications.
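As a concrete instance of the two rules, here is \(x^{13}\) unrolled by hand:

\(x^{13} = x \cdot x^{12} = x \cdot (x^6)^2, \quad x^6 = (x^3)^2, \quad x^3 = x \cdot x^2\)

Computing \(x^2\), then \(x^3\), \(x^6\), \(x^{12}\), and finally \(x^{13}\) takes five multiplications instead of the naive twelve.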

The iterative version

Look at n in binary. Each bit tells you whether to include a power of x in the result. The iterative approach scans bits from LSB to MSB: maintain a running result and a current base. If the current bit is 1, multiply result by base. Always square the base and shift n right by 1. No recursion needed — \(O(1)\) space.
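To make the bit-scan concrete, here is a minimal sketch of that loop inlined for one example, \(3^{13}\) (13 = 0b1101, so the bits are 1, 0, 1, 1 from LSB to MSB):

```python
# Trace of the iterative bit-scan for 3**13 (13 = 0b1101).
x, n = 3, 13
result, base = 1, x
while n > 0:
    if n % 2 == 1:      # lowest bit is 1: fold the current base into the result
        result *= base
    base *= base        # square the base for the next bit position
    n //= 2             # shift n right by one bit
print(result)  # 1594323, i.e. 3**13
```

The result picks up base values 3, 27, and 6561 (the bits that are 1) and skips 81 (the 0 bit).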

Beyond integers: matrix fast power

The same trick works for any associative operation — including matrix multiplication. The matrix \([[1,1],[1,0]]\) raised to the nth power contains the nth Fibonacci number. Raise that matrix to the nth power using fast power, and you compute Fibonacci(n) in \(O(\log n)\) matrix multiplications instead of \(O(n)\). This is closely tied to solving recurrence relations. It matters when n can be \(10^{18}\).
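A minimal sketch of the idea, using the identity \([[1,1],[1,0]]^n = [[F(n{+}1), F(n)], [F(n), F(n{-}1)]]\) (function names here are illustrative):

```python
# Fibonacci via 2x2 matrix fast power.
def mat_mult(A, B):
    # Multiply two 2x2 matrices.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    result = [[1, 0], [0, 1]]      # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    return result[0][1]            # F(n)

print(fib(10))  # 55
```

The loop is exactly the iterative fast power from above, with `*` swapped for `mat_mult` and `1` swapped for the identity matrix — associativity is the only property the algorithm needs.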

Note: Modular fast power (computing \(x^n\) mod m) is the backbone of RSA encryption, Diffie-Hellman key exchange, and primality tests. The numbers involved are hundreds of digits long — without fast power, modern cryptography would be computationally impossible.

Fast Power — Recursive and Iterative

# Recursive
def fast_pow_recursive(x, n):
    if n == 0:
        return 1
    half = fast_pow_recursive(x, n // 2)
    if n % 2 == 0:
        return half * half
    else:
        return x * half * half
# Iterative
def fast_pow_iterative(x, n):
    result = 1
    base = x
    while n > 0:
        if n % 2 == 1:
            result *= base
        base *= base
        n //= 2
    return result
# Modular fast power (for cryptography)
def fast_pow_mod(x, n, mod):
    result = 1
    x %= mod
    while n > 0:
        if n % 2 == 1:
            result = result * x % mod
        x = x * x % mod
        n //= 2
    return result
print(fast_pow_recursive(2, 10)) # 1024
print(fast_pow_iterative(2, 10)) # 1024
print(fast_pow_mod(2, 64, 10**9+7)) # 2^64 mod 1e9+7
Output
1024
1024
582344008

Complexity

🐌 Naive power: multiply x by itself n times (63 multiplications for \(2^{64}\)) → linear, \(O(n)\) multiplications
⚡ Fast power (time): halve the exponent at each step (6 squarings for \(2^{64}\)) → logarithmic, \(O(\log n)\)
📚 Space (recursive): one stack frame per bit of n; depth = number of bits → \(O(\log n)\)
💾 Space (iterative): just the result and base variables, no recursion stack → constant, \(O(1)\)
🔢 Matrix fast power: one k×k matrix multiply per step (k = 2 for Fibonacci) → \(O(k^3 \log n)\) for linear recurrences

Quick check

How many multiplications does fast power need to compute 2^64?

Continue reading

Karatsuba Multiplication
3 multiplications instead of 4 — per recursive level
Recurrence Relations
A recipe that calls itself — until it's simple enough to solve