
FastAPI Test Best Practices

Introduction

Testing is a crucial aspect of software development that ensures your application works as expected and continues to do so as it evolves. When building FastAPI applications, following established testing best practices helps you create more reliable, maintainable, and robust APIs.

In this guide, we'll explore essential testing best practices specifically for FastAPI applications. Whether you're new to testing or looking to improve your existing testing strategy, these recommendations will help you create effective test suites for your FastAPI projects.

Why Testing Matters in FastAPI Applications

FastAPI is designed for building high-performance APIs with automatic validation, serialization, and documentation. Proper testing ensures:

  1. Your API endpoints return the expected responses
  2. Request validation works correctly
  3. Dependencies and middleware function as intended
  4. Database operations perform properly
  5. Authentication and authorization mechanisms are secure

Setting Up Your Testing Environment

For testing FastAPI applications, we recommend the following tools:

  • pytest: The most popular testing framework for Python
  • pytest-asyncio: For testing asynchronous code (see the async sketch after this list)
  • TestClient from FastAPI: For making test requests to your API (built on httpx)
  • pytest-cov: For measuring test coverage
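
If some of your endpoints or dependencies are asynchronous and you want to exercise them through an async client instead of TestClient, a minimal sketch using pytest-asyncio and httpx could look like the following (it assumes the app/main.py layout shown below; the exact AsyncClient setup depends on your httpx version):

python
import pytest
from httpx import ASGITransport, AsyncClient
from app.main import app

@pytest.mark.asyncio
async def test_read_main_async():
    """Call the app in-process through an async client."""
    # On older httpx versions, use AsyncClient(app=app, base_url="http://test") instead
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        response = await ac.get("/")
    assert response.status_code == 200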

Basic Setup

First, install the necessary packages:

bash
pip install pytest pytest-asyncio pytest-cov httpx

Create a basic test structure:

my_fastapi_project/
├── app/
│   ├── __init__.py
│   ├── main.py
│   └── ...
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_main.py
│   └── ...
├── requirements.txt
└── ...

Best Practice 1: Use Fixtures for Reusable Test Components

Pytest fixtures let you define reusable test components, such as clients, database sessions, and sample data, once and inject them into any test that needs them.

Example: Creating a TestClient Fixture

In your conftest.py:

python
import pytest
from fastapi.testclient import TestClient
from app.main import app

@pytest.fixture
def client():
    """Create a test client for the app."""
    return TestClient(app)

Then in your tests:

python
def test_read_main(client):
    """Test the main endpoint returns the expected status code and response."""
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"message": "Hello World"}

Best Practice 2: Create Test Databases

Avoid testing against your production database by creating a separate test database.

Example: Test Database Fixture

python
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.database import Base, get_db
from app.main import app

SQLALCHEMY_TEST_DATABASE_URL = "sqlite:///./test.db"

@pytest.fixture(scope="session")
def test_db_engine():
    """Create a test database engine."""
    engine = create_engine(
        SQLALCHEMY_TEST_DATABASE_URL,
        connect_args={"check_same_thread": False},  # needed for SQLite with TestClient
    )
    Base.metadata.create_all(bind=engine)
    yield engine
    Base.metadata.drop_all(bind=engine)

@pytest.fixture
def db_session(test_db_engine):
    """Create a test database session."""
    TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=test_db_engine)
    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()

@pytest.fixture
def client(db_session):
    """Create a test client with a test database."""
    def override_get_db():
        try:
            yield db_session
        finally:
            pass

    app.dependency_overrides[get_db] = override_get_db
    with TestClient(app) as client:
        yield client
    app.dependency_overrides.clear()

Best Practice 3: Mock External Services

When your FastAPI application interacts with external services, you should mock these interactions during testing.

Example: Mocking an External API

python
import pytest
from unittest.mock import patch

@pytest.fixture
def mock_weather_api():
    """Mock the weather API response."""
    with patch('app.services.weather_service.get_weather_data') as mock:
        mock.return_value = {
            "city": "Test City",
            "temperature": 25,
            "condition": "Sunny"
        }
        yield mock

def test_get_weather_endpoint(client, mock_weather_api):
    """Test the weather endpoint with mocked data."""
    response = client.get("/weather/Test City")
    assert response.status_code == 200
    assert response.json() == {
        "city": "Test City",
        "temperature": 25,
        "condition": "Sunny"
    }
    mock_weather_api.assert_called_once_with("Test City")

Best Practice 4: Parametrize Tests

Use pytest's parametrize decorator to test multiple scenarios with the same test function.

Example: Testing Different Input Scenarios

python
import pytest

@pytest.mark.parametrize(
    "item_id,expected_status,expected_detail",
    [
        (1, 200, None),                                   # Valid ID
        (999, 404, "Item not found"),                     # Non-existent ID
        ("abc", 422, "Input should be a valid integer"),  # Invalid type
    ]
)
def test_get_item(client, item_id, expected_status, expected_detail):
    """Test getting items with different IDs and expected outcomes."""
    response = client.get(f"/items/{item_id}")
    assert response.status_code == expected_status

    if expected_status == 200:
        assert "id" in response.json()
    elif expected_status == 404:
        assert response.json()["detail"] == expected_detail
    elif expected_status == 422:
        assert expected_detail in str(response.json()["detail"])

Best Practice 5: Test Authentication and Authorization

Secure endpoints should have dedicated tests to ensure authentication and authorization work correctly.

Example: Testing Protected Endpoints

python
def test_read_users_me_unauthorized(client):
    """Test that unauthorized access is properly rejected."""
    response = client.get("/users/me")
    assert response.status_code == 401
    assert response.json() == {"detail": "Not authenticated"}

def test_read_users_me_authorized(client):
    """Test that authorized access works correctly."""
    # Create a mock token
    mock_token = "fake-super-secret-token"

    # Create a user in the test database with this token
    # ... (setup code omitted) ...

    response = client.get(
        "/users/me",
        headers={"Authorization": f"Bearer {mock_token}"}
    )
    assert response.status_code == 200
    assert response.json()["username"] == "testuser"
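
Rather than minting real tokens in every test, a common pattern is to override the authentication dependency itself. A minimal sketch, assuming your protected route depends on a get_current_user dependency in app.auth (the module and function names here are hypothetical):

python
import pytest
from fastapi.testclient import TestClient
from app.main import app
from app.auth import get_current_user  # hypothetical dependency used by /users/me

@pytest.fixture
def authorized_client():
    """A client whose requests resolve to a fake authenticated user."""
    app.dependency_overrides[get_current_user] = lambda: {"username": "testuser"}
    with TestClient(app) as client:
        yield client
    app.dependency_overrides.clear()

def test_read_users_me_with_override(authorized_client):
    response = authorized_client.get("/users/me")
    assert response.status_code == 200
    assert response.json()["username"] == "testuser"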

Best Practice 6: Use Test-Driven Development

Consider using Test-Driven Development (TDD) by writing tests before implementing features.

Example TDD Workflow:

  1. Write a test for a new feature
  2. Run the test and see it fail
  3. Implement the feature
  4. Run the test again and make it pass
  5. Refactor your code while ensuring tests still pass

python
# Step 1: Write the test first
def test_create_item(client):
    """Test creating a new item."""
    item_data = {"name": "Test Item", "price": 10.5}
    response = client.post("/items/", json=item_data)
    assert response.status_code == 201
    created_item = response.json()
    assert created_item["name"] == item_data["name"]
    assert created_item["price"] == item_data["price"]
    assert "id" in created_item

# Step 2: Implement the feature in your FastAPI app
# from fastapi import FastAPI, status
# from pydantic import BaseModel
#
# app = FastAPI()
# items = []  # simple in-memory store for the example
#
# class Item(BaseModel):
#     name: str
#     price: float
#
# @app.post("/items/", status_code=status.HTTP_201_CREATED)
# def create_item(item: Item):
#     new_item = item.model_dump()  # use item.dict() on Pydantic v1
#     new_item["id"] = len(items) + 1
#     items.append(new_item)
#     return new_item

Best Practice 7: Test Middleware and Dependencies

Don't forget to test your middleware and dependencies separately from your routes.

Example: Testing Custom Middleware

python
from starlette.middleware.base import BaseHTTPMiddleware

# A custom middleware that adds a header to responses
class CustomHeaderMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        response.headers["X-Custom-Header"] = "test-value"
        return response

# Register the middleware on the app, e.g. in app/main.py:
# app.add_middleware(CustomHeaderMiddleware)

# Test for the middleware
def test_custom_header_middleware(client):
    """Test that the custom header middleware adds the expected header."""
    response = client.get("/")
    assert response.headers["X-Custom-Header"] == "test-value"
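
Dependencies, unlike middleware, can often be tested directly as plain functions without going through a route at all. A short sketch using a hypothetical pagination dependency:

python
# A hypothetical dependency that normalizes pagination parameters
def pagination_params(skip: int = 0, limit: int = 10):
    return {"skip": skip, "limit": min(limit, 100)}

def test_pagination_params_defaults():
    """Call the dependency directly, as an ordinary function."""
    assert pagination_params() == {"skip": 0, "limit": 10}

def test_pagination_params_caps_limit():
    assert pagination_params(skip=20, limit=500) == {"skip": 20, "limit": 100}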

Best Practice 8: Measure and Maintain Test Coverage

Use pytest-cov to measure your test coverage and aim to maintain high coverage.

Example: Running Tests with Coverage

bash
pytest --cov=app tests/

You can generate a coverage report:

bash
pytest --cov=app --cov-report=html tests/

This will generate an HTML report showing which parts of your code are covered by tests.
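
If you want the run to fail when coverage drops below a threshold, pytest-cov can enforce a minimum, for example:

bash
pytest --cov=app --cov-fail-under=90 tests/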

Best Practice 9: Test Error Handling

Make sure to test how your application handles errors and edge cases.

python
def test_invalid_json_body(client):
    """Test that invalid JSON bodies are properly handled."""
    response = client.post(
        "/items/",
        headers={"Content-Type": "application/json"},
        data="This is not valid JSON"
    )
    assert response.status_code == 422

def test_database_connection_error(client, monkeypatch):
    """Test handling of database connection errors."""
    def mock_db_error(*args, **kwargs):
        raise Exception("Database connection error")

    # Patch the database function (if the route captured get_db at import time,
    # override it via app.dependency_overrides instead)
    monkeypatch.setattr("app.database.get_db", mock_db_error)

    # Assumes the app has an exception handler that converts unexpected errors
    # into a 500 response with this detail message
    response = client.get("/items/")
    assert response.status_code == 500
    assert response.json() == {"detail": "Internal server error"}

Best Practice 10: Organize Tests Logically

Structure your tests to match your application structure for better organization.

tests/
├── conftest.py
├── test_main.py
├── api/
│   ├── test_items.py
│   ├── test_users.py
│   └── ...
├── models/
│   ├── test_item.py
│   ├── test_user.py
│   └── ...
└── utils/
    ├── test_security.py
    └── ...

Real-World Application: E-commerce API Testing

Let's look at a more comprehensive example testing endpoints for an e-commerce API:

python
# In conftest.py
import pytest
from fastapi.testclient import TestClient
from app.main import app
from app.database import get_db, Base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

SQLALCHEMY_TEST_DATABASE_URL = "sqlite:///./test_ecommerce.db"

@pytest.fixture(scope="session")
def db_engine():
    engine = create_engine(
        SQLALCHEMY_TEST_DATABASE_URL,
        connect_args={"check_same_thread": False},  # needed for SQLite with TestClient
    )
    Base.metadata.create_all(bind=engine)
    yield engine
    Base.metadata.drop_all(bind=engine)

@pytest.fixture
def db(db_engine):
    TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=db_engine)
    session = TestingSessionLocal()
    try:
        yield session
    finally:
        session.close()

@pytest.fixture
def client(db):
    def override_get_db():
        try:
            yield db
        finally:
            pass

    app.dependency_overrides[get_db] = override_get_db
    with TestClient(app) as client:
        yield client
    app.dependency_overrides.clear()

@pytest.fixture
def test_user(client):
    user_data = {"username": "testuser", "email": "testuser@example.com", "password": "testpassword"}
    response = client.post("/users/", json=user_data)
    assert response.status_code == 201
    # Assumes the endpoint returns an access token along with the created user;
    # otherwise, log in separately and attach the token to the returned dict.
    return response.json()

@pytest.fixture
def test_product(client, test_user, db):
    # Create a product for testing
    product_data = {
        "name": "Test Product",
        "description": "Test description",
        "price": 29.99,
        "stock": 100
    }
    response = client.post(
        "/products/",
        json=product_data,
        headers={"Authorization": f"Bearer {test_user['access_token']}"}
    )
    assert response.status_code == 201
    return response.json()

# In test_products.py
def test_get_product_list(client):
    """Test retrieving the product list."""
    response = client.get("/products/")
    assert response.status_code == 200
    assert isinstance(response.json(), list)

def test_get_product_by_id(client, test_product):
    """Test retrieving a product by ID."""
    product_id = test_product["id"]
    response = client.get(f"/products/{product_id}")
    assert response.status_code == 200
    assert response.json()["name"] == test_product["name"]
    assert response.json()["price"] == test_product["price"]

def test_update_product(client, test_user, test_product):
    """Test updating a product."""
    product_id = test_product["id"]
    updated_data = {
        "name": "Updated Product Name",
        "description": "Updated description",
        "price": 39.99,
        "stock": 50
    }

    response = client.put(
        f"/products/{product_id}",
        json=updated_data,
        headers={"Authorization": f"Bearer {test_user['access_token']}"}
    )
    assert response.status_code == 200
    assert response.json()["name"] == updated_data["name"]
    assert response.json()["price"] == updated_data["price"]

def test_delete_product(client, test_user, test_product):
    """Test deleting a product."""
    product_id = test_product["id"]

    response = client.delete(
        f"/products/{product_id}",
        headers={"Authorization": f"Bearer {test_user['access_token']}"}
    )
    assert response.status_code == 204

    # Verify product no longer exists
    get_response = client.get(f"/products/{product_id}")
    assert get_response.status_code == 404

Continuous Integration Best Practices

Integrate your tests with a CI/CD pipeline to ensure they run on every code change:

yaml
# Example GitHub Actions workflow
name: FastAPI Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov
      - name: Test with pytest
        run: |
          pytest --cov=app tests/
      - name: Upload coverage report
        uses: codecov/codecov-action@v1

Summary

Following these testing best practices for your FastAPI applications will help ensure your API is reliable, maintainable, and functions as expected. Remember these key points:

  1. Use pytest fixtures for reusable components
  2. Create dedicated test databases
  3. Mock external services
  4. Parametrize tests for multiple scenarios
  5. Test authentication and authorization
  6. Consider test-driven development
  7. Test middleware and dependencies
  8. Measure and maintain test coverage
  9. Test error handling
  10. Organize your tests logically

By implementing these practices, you'll create a robust testing strategy that gives you confidence in your FastAPI application's functionality and helps catch issues early in development.

Exercises

  1. Basic Testing: Create a simple FastAPI application with a /users/ endpoint and write tests for it.

  2. Database Testing: Implement proper database fixtures and test CRUD operations on a model of your choice.

  3. Authentication Testing: Add JWT authentication to an endpoint and write tests to verify both authorized and unauthorized access.

  4. Error Handling: Create tests for various error conditions in your API (404, 422, 500, etc.).

  5. CI Integration: Set up a GitHub Actions workflow to run your tests on every push.

Happy testing!


