Profiling — Measure Before You Optimize
The first rule of optimization: don't guess, measure. Developers are notoriously bad at predicting where bottlenecks are. A function you think is slow might take 1% of total time, while a "fast" function called 100,000 times dominates. Always profile first.
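To make this concrete, here is a toy sketch (the names cheap, looks_slow, and workload are invented for illustration) in which a trivial function called 100,000 times ends up dominating the profile, while the function that "looks slow" barely registers:

```python
import cProfile
import io
import pstats

def cheap(x):
    # Trivial work, but called 100,000 times
    return x * x

def looks_slow():
    # Looks expensive, but runs exactly once
    return sum(range(10_000))

def workload():
    total = looks_slow()
    for i in range(100_000):
        total += cheap(i)
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort the report by call count; cheap() tops the list
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("ncalls").print_stats(5)
print(stream.getvalue())
```

The report shows cheap() with ncalls = 100000, which is exactly the kind of hotspot that intuition tends to miss.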
"Premature optimization is the root of all evil." — Donald Knuth
Write clear, correct code first. Optimize only when you have evidence of a performance problem, and only the specific parts that profiling identifies as bottlenecks.
cProfile — Built-in Profiler
import cProfile

def slow_function():
    total = 0
    for i in range(1_000_000):
        total += i ** 2
    return total

def fast_function():
    return sum(i ** 2 for i in range(1_000_000))
# Profile a specific function
cProfile.run('slow_function()')
# Or from the command line:
# python -m cProfile -s cumtime your_script.py
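For programmatic control over the report, the standard-library pstats module can sort and truncate the collected stats. A minimal sketch (the context-manager form of Profile requires Python 3.8+):

```python
import cProfile
import io
import pstats

def slow_function():
    total = 0
    for i in range(1_000_000):
        total += i ** 2
    return total

# Collect stats with the profiler as a context manager (Python 3.8+)
with cProfile.Profile() as profiler:
    slow_function()

# Sort by cumulative time and print only the top 10 entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Sorting by "cumulative" surfaces the functions that account for the most total time, including time spent in their callees.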
Output of cProfile.run('slow_function()')
4 function calls in 0.182 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1    0.000    0.000    0.182    0.182  <string>:1(<module>)
     1    0.182    0.182    0.182    0.182  your_script.py:3(slow_function)
     1    0.000    0.000    0.182    0.182  {built-in method builtins.exec}
     1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler'}

timeit — Micro-benchmarks
For comparing two approaches, timeit runs a snippet many times (one million iterations by default) to get reliable measurements:
import timeit
# Compare list comprehension vs loop
loop_time = timeit.timeit(