Testing in Python

Master pytest, fixtures, parametrize, mocking, coverage, and test organization for reliable Python code.

Intermediate · 40 min read · 🐍 Python

Why Test?

Testing is how you prove your code works — and keep it working as you make changes. Without tests, every bug fix or new feature risks breaking something else. With tests, you refactor with confidence because the tests catch regressions instantly.

Tests are not optional for professional code

Every serious Python project uses tests. Django, Flask, NumPy, Pandas — they all have thousands of tests. If you want to contribute to open source or work on production code, testing is a core skill.

pytest — The Modern Testing Framework

pytest is the de facto standard testing framework for Python. It's simpler than the built-in unittest module, more powerful, and has a huge plugin ecosystem. Install it with pip install pytest.
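A quick check that the install worked:

# Install pytest and confirm the version
pip install pytest
pytest --version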

Your First Test

Create the module under test, calculator.py, and next to it a test file named test_calculator.py (the test_ prefix matters: pytest discovers tests by looking for files and functions whose names start with test):

# calculator.py
def add(a, b):
    return a + b

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# test_calculator.py
import pytest
from calculator import add, divide

def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0

def test_add_floats():
    assert add(0.1, 0.2) == pytest.approx(0.3)  # Float comparison!

def test_divide():
    assert divide(10, 2) == 5.0

def test_divide_by_zero():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)

# Run tests
pytest test_calculator.py -v

Output
test_calculator.py::test_add_positive PASSED
test_calculator.py::test_add_negative PASSED
test_calculator.py::test_add_floats PASSED
test_calculator.py::test_divide PASSED
test_calculator.py::test_divide_by_zero PASSED
===== 5 passed in 0.01s =====
Key Takeaway: Use plain assert statements in pytest — no special assertion methods needed. pytest gives detailed failure messages showing exactly what was expected vs. what you got.
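For instance, a deliberately broken test (hypothetical, just to show the failure report) produces output like this; pytest rewrites the assert to show both sides of the comparison:

def test_add_broken():
    assert add(2, 2) == 5  # Wrong on purpose, to show the failure report

Output

    def test_add_broken():
>       assert add(2, 2) == 5
E       assert 4 == 5
E        +  where 4 = add(2, 2)

The "+ where" line traces the value back to the call that produced it.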

Fixtures — Setup and Teardown

Fixtures provide reusable test setup. Instead of duplicating setup code in every test, define it once as a fixture and pytest injects it automatically:

import sqlite3

import pytest

@pytest.fixture
def sample_users():
    """Provide sample user data for tests."""
    return [
        {"name": "Alice", "age": 30, "role": "admin"},
        {"name": "Bob", "age": 25, "role": "user"},
        {"name": "Charlie", "age": 35, "role": "user"},
    ]

@pytest.fixture
def db_connection():
    """Setup and teardown a database connection."""
    conn = sqlite3.connect(":memory:")  # Setup: in-memory SQLite DB
    yield conn                          # Provide to test
    conn.close()                        # Teardown (runs after test)

def test_user_count(sample_users):
    assert len(sample_users) == 3

def test_admin_exists(sample_users):
    admins = [u for u in sample_users if u["role"] == "admin"]
    assert len(admins) == 1
    assert admins[0]["name"] == "Alice"

The yield in a fixture separates setup from teardown. Everything before yield runs before the test, everything after runs after — even if the test fails.
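A sketch of a test that leans on the db_connection fixture above: pytest opens the in-memory connection, hands it to the test, and closes it afterwards, pass or fail:

def test_insert_and_query(db_connection):
    db_connection.execute("CREATE TABLE users (name TEXT)")
    db_connection.execute("INSERT INTO users VALUES ('Alice')")
    rows = db_connection.execute("SELECT name FROM users").fetchall()
    assert rows == [("Alice",)]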

Parametrize — Test Multiple Inputs

Instead of writing separate tests for each input, use @pytest.mark.parametrize to run the same test with different data:

import pytest
from calculator import add

@pytest.mark.parametrize("a, b, expected", [
    (2, 3, 5),
    (-1, 1, 0),
    (0, 0, 0),
    (100, -50, 50),
    (0.1, 0.2, pytest.approx(0.3)),
])
def test_add(a, b, expected):
    assert add(a, b) == expected

This generates 5 separate tests from one function. Each runs independently — if one fails, the others still run.
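When cases need names or special handling, pytest.param lets you attach an id and marks to each one. A sketch (the known-bug case is hypothetical):

@pytest.mark.parametrize("a, b, expected", [
    pytest.param(2, 3, 5, id="positives"),
    pytest.param(-1, 1, 0, id="cancel-out"),
    pytest.param(1, 1, 3, id="known-bug", marks=pytest.mark.xfail),
])
def test_add_labeled(a, b, expected):
    assert add(a, b) == expected

Run with -v and each id shows up in the test name (test_add_labeled[positives]), which makes failures easy to locate.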

Mocking — Testing in Isolation

Sometimes your code calls external services (APIs, databases, file systems). You don't want tests to depend on these — they're slow, unreliable, and might cost money. Mocking replaces real dependencies with fake ones that you control:

import requests
from unittest.mock import patch, MagicMock

# The function we want to test
def get_user_name(user_id):
    """Fetch user name from an API."""
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()["name"]

# Test with mocking — no real API call!
@patch("requests.get")
def test_get_user_name(mock_get):
    # Configure the mock
    mock_response = MagicMock()
    mock_response.json.return_value = {"name": "Alice", "id": 1}
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    # Run the test
    result = get_user_name(1)
    assert result == "Alice"

    # Verify the mock was called correctly
    mock_get.assert_called_once_with("https://api.example.com/users/1")

When to Mock

Mock external dependencies (APIs, databases, file I/O, time, random). Don't mock the code you're testing — that defeats the purpose. A good rule: mock at the boundary between your code and the outside world.
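The same boundary principle applies to time. A sketch (make_record is hypothetical): patch time.time so the timestamp is deterministic instead of different on every run:

import time
from unittest.mock import patch

# Hypothetical code under test: stamps each record with the current time
def make_record(name):
    return {"name": name, "created_at": time.time()}

@patch("time.time", return_value=1_700_000_000.0)
def test_make_record_timestamp(mock_time):
    record = make_record("Alice")
    assert record["created_at"] == 1_700_000_000.0

For heavily time-dependent code, the freezegun library automates this, but plain patch covers simple cases.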

Test Coverage

Coverage measures what percentage of your code is executed by tests. Install with pip install pytest-cov:

# Run tests with coverage report
pytest --cov=mymodule --cov-report=term-missing

# Generate HTML report (opens in browser)
pytest --cov=mymodule --cov-report=html
Output
---------- coverage: platform linux, python 3.12 ----------
Name              Stmts   Miss  Cover   Missing
------------------------------------------------
calculator.py         8      0   100%
mymodule/core.py     45      3    93%   34-36
------------------------------------------------
TOTAL                53      3    94%

Aim for 80%+ coverage on critical paths. Don't chase 100% blindly — testing trivial getters or __repr__ methods adds noise without value.
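For code you deliberately leave untested, coverage.py (which pytest-cov wraps) can exclude lines from the report with a pragma comment. A minimal sketch:

class User:
    def __init__(self, name):
        self.name = name

    def __repr__(self):  # pragma: no cover
        return f"User(name={self.name!r})"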

Test Organization

myproject/
    src/
        mymodule/
            __init__.py
            core.py
            utils.py
    tests/
        __init__.py
        conftest.py          # Shared fixtures (auto-discovered by pytest)
        test_core.py
        test_utils.py
    pyproject.toml           # pytest configuration
# pyproject.toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-v --tb=short"
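Fixtures defined in conftest.py are visible to every test file in the directory without any import. A sketch, reusing the earlier sample_users fixture:

# tests/conftest.py
import pytest

@pytest.fixture
def sample_users():
    """Available to test_core.py and test_utils.py without an import."""
    return [
        {"name": "Alice", "age": 30, "role": "admin"},
        {"name": "Bob", "age": 25, "role": "user"},
    ]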
🔍 Deep Dive: TDD (Test-Driven Development)

TDD flips the order: write the test first, watch it fail, then write the minimum code to make it pass. The cycle is Red (failing test) → Green (passing test) → Refactor (clean up). This ensures every feature has a test and forces you to think about the API before implementation. It feels slow at first but prevents bugs and produces cleaner designs.
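One Red/Green round as a sketch (slugify is a hypothetical function, chosen just to show the order of work):

# Red: the test comes first and fails, because slugify doesn't exist yet
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Green: the minimum code that makes it pass; refactoring comes after
def slugify(text):
    return text.lower().replace(" ", "-")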

⚠️ Common Mistake: Testing Implementation, Not Behavior

Wrong:

def test_sort_uses_quicksort():
    # Tests HOW it works, not WHAT it does
    assert sorter.algorithm == "quicksort"

Why: If you change the algorithm to mergesort (same results, better for your use case), the test breaks even though nothing is wrong.

Instead:

def test_sort_returns_sorted_list():
    # Tests WHAT it does — the behavior
    assert sorter.sort([3, 1, 2]) == [1, 2, 3]