💡 Python QuickBits — 🚀 Speed Up Functions with lru_cache
Posted On: August 20, 2025
Description:
When dealing with recursive or expensive functions in Python, performance can quickly become a problem. The same computations are often repeated many times, wasting valuable time.
Python provides a built-in solution: functools.lru_cache. This decorator memoizes (caches) function calls, so repeated inputs return instantly from cache instead of being recomputed.
The Problem — Naive Recursion
A simple recursive Fibonacci implementation recalculates the same values over and over:
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
Calling fib(35) can take a second or more, since it expands into millions of redundant function calls.
The Fix — One Decorator
By adding @lru_cache, Python automatically remembers previous results. The first call computes, and the second call returns instantly:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)
Now, fib_cached(35) is lightning fast after the first run.
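On Python 3.9 and later, functools.cache offers the same unbounded memoization as lru_cache(maxsize=None) with slightly less call overhead. A minimal sketch:

```python
from functools import cache  # Python 3.9+; equivalent to lru_cache(maxsize=None)

@cache
def fib_c(n: int) -> int:
    """Cached Fibonacci using functools.cache."""
    if n < 2:
        return n
    return fib_c(n - 1) + fib_c(n - 2)

print(fib_c(35))  # -> 9227465, fast even on the first call thanks to memoized subproblems
```

Both decorators expose the same cache_info() and cache_clear() methods, so switching between them is painless.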
Managing the Cache
Sometimes you’ll want to clear or limit the cache. That’s easy too:
fib_cached.cache_clear()
The next call recomputes and refills the cache.
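Besides clearing, cache_info() reports hits, misses, and the current cache size, which is handy for checking that the cache is actually being used. A quick sketch with a hypothetical square function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(n: int) -> int:
    return n * n

square(4)  # miss: computed and stored
square(4)  # hit: returned from cache

info = square.cache_info()
print(info.hits, info.misses, info.currsize)  # 1 1 1

square.cache_clear()  # also resets the hit/miss counters
print(square.cache_info().currsize)  # 0
```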
Key Points
- One decorator = instant performance boost.
- Great for pure functions (same input → same output).
- Arguments must be hashable.
- Use maxsize to control memory usage.
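The hashability requirement means a list argument raises a TypeError; convert it to a tuple first. A sketch using a hypothetical total function with a bounded cache:

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # bounded cache: least-recently-used entries are evicted
def total(values: tuple) -> int:
    """Sum a tuple of numbers (tuples are hashable; lists are not)."""
    return sum(values)

print(total((1, 2, 3)))   # OK: tuple is hashable -> 6

try:
    total([1, 2, 3])      # TypeError: unhashable type: 'list'
except TypeError as e:
    print("unhashable:", e)
```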
Code Snippet:
# Built-in tools: no external installs required
from functools import lru_cache # decorator that adds memoization
from time import perf_counter # high-resolution timer for simple benchmarking
def fib(n: int) -> int:
    """
    Classic recursive Fibonacci (intentionally slow).
    Recomputes the same subproblems many times.
    """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
# Quick timing: naive version
start = perf_counter()
print("Naive fib(35) =", fib(35))
print("Naive elapsed: {:.3f}s".format(perf_counter() - start))
@lru_cache(maxsize=None)  # None => unbounded cache; memoize all distinct n
def fib_cached(n: int) -> int:
    """
    Same logic, now cached.
    Subsequent calls with the same n are O(1) after the first computation.
    """
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)
# First timed call: computes & fills the cache
start = perf_counter()
print("Cached fib_cached(35) =", fib_cached(35))
print("Cached (1st call) elapsed: {:.3f}s".format(perf_counter() - start))
# Second timed call: returns instantly from cache
start = perf_counter()
print("Cached fib_cached(35) again =", fib_cached(35))
print("Cached (2nd call) elapsed: {:.6f}s".format(perf_counter() - start))
# Inspect cache stats: hits/misses/currsize
print("Cache info:", fib_cached.cache_info())
# If your underlying data/logic changes, clear the cache:
fib_cached.cache_clear()
# After clearing, the next call recomputes and repopulates the cache.
start = perf_counter()
_ = fib_cached(35)
print("After clear, recompute elapsed: {:.3f}s".format(perf_counter() - start))
# ✅ Requirements / Pitfalls:
# - Function must be "pure" (same input -> same output) for correctness.
# - All arguments must be hashable (usable as dict keys). E.g., tuples ok; lists not ok.
# - Use maxsize to bound memory (e.g., @lru_cache(maxsize=1024)) if inputs vary widely.