FastAPI Testing Strategy
In modern web development, ensuring your API works correctly is crucial. A solid testing strategy helps catch bugs early, enables refactoring with confidence, and ensures your API behaves as expected. This guide will walk you through the best practices for testing FastAPI applications.
Introduction to FastAPI Testing
FastAPI comes with built-in support for testing, largely thanks to its integration with Starlette's TestClient and Python's powerful pytest framework. By establishing a good testing strategy, you can verify your API's behavior, maintain code quality, and simplify future updates.
Setting Up Your Testing Environment
Prerequisites
Before we begin, make sure you have the following packages installed:
pip install fastapi pytest pytest-cov httpx
httpx is on the list because Starlette's TestClient is built on top of it, and pytest-cov adds the coverage reporting we'll use later in this guide.
Basic Testing Structure
A well-organized FastAPI test suite typically includes:
- Unit tests - Testing individual functions and methods in isolation (see the sketch after this list)
- Integration tests - Testing interactions between components
- End-to-end tests - Testing complete API workflows
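To make the distinction concrete, here is a minimal unit test that exercises a plain helper function directly, with no HTTP involved; the calculate_discount helper is hypothetical, purely for illustration:
# A pure function somewhere in your application code (hypothetical)
def calculate_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# A unit test calls it directly - no TestClient, no HTTP
def test_calculate_discount():
    assert calculate_discount(100.0, 10) == 90.0
    assert calculate_discount(19.99, 0) == 19.99
Integration and end-to-end tests, by contrast, go through the HTTP layer with TestClient, as the rest of this guide shows.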
Let's create a basic directory structure for our tests:
my_fastapi_app/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── models.py
│   └── routers/
│       └── items.py
└── tests/
    ├── __init__.py
    ├── conftest.py
    ├── test_main.py
    └── test_items.py
Creating Your First Test
Let's start with a simple FastAPI application and write tests for it:
Sample FastAPI Application (app/main.py)
from fastapi import FastAPI, HTTPException

app = FastAPI()

items = {}

@app.post("/items/", status_code=201)
def create_item(name: str, price: float):
    if name in items:
        raise HTTPException(status_code=400, detail="Item already exists")
    items[name] = {"name": name, "price": price}
    return items[name]

@app.get("/items/{name}")
def read_item(name: str):
    if name not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[name]
Writing Tests (tests/test_main.py)
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_create_item():
    # Clear items dictionary before test
    from app.main import items
    items.clear()

    # Test creating a new item
    response = client.post("/items/?name=test_item&price=10.5")
    assert response.status_code == 201
    assert response.json() == {"name": "test_item", "price": 10.5}

    # Test creating an existing item (should fail)
    response = client.post("/items/?name=test_item&price=20.0")
    assert response.status_code == 400
    assert response.json() == {"detail": "Item already exists"}

def test_read_item():
    # Ensure item exists
    from app.main import items
    items.clear()
    items["test_item"] = {"name": "test_item", "price": 10.5}

    # Test reading an existing item
    response = client.get("/items/test_item")
    assert response.status_code == 200
    assert response.json() == {"name": "test_item", "price": 10.5}

    # Test reading a non-existent item
    response = client.get("/items/nonexistent_item")
    assert response.status_code == 404
    assert response.json() == {"detail": "Item not found"}
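With the app and its tests in place, you can run the suite from the project root (the -v flag prints one line per test):
pytest tests/ -v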
Using Fixtures with Pytest
Fixtures in pytest help you set up preconditions for your tests. Let's create a fixture to initialize our test database:
Creating Fixtures (tests/conftest.py)
import pytest
from fastapi.testclient import TestClient
from app.main import app

@pytest.fixture
def client():
    with TestClient(app) as client:
        yield client

@pytest.fixture
def sample_item():
    from app.main import items
    items.clear()
    sample = {"name": "sample_item", "price": 15.5}
    items[sample["name"]] = sample
    return sample
Using Fixtures in Tests
def test_read_item_with_fixture(client, sample_item):
    response = client.get(f"/items/{sample_item['name']}")
    assert response.status_code == 200
    assert response.json() == sample_item
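Fixtures can also tear down after themselves: any code after a yield runs once the test finishes. A small sketch (the fixture name is just for illustration):
@pytest.fixture
def sample_item_with_cleanup():
    from app.main import items
    items.clear()
    sample = {"name": "sample_item", "price": 15.5}
    items[sample["name"]] = sample
    yield sample   # the test runs at this point
    items.clear()  # teardown: runs after the test completes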
Testing Dependencies and Mocking
FastAPI applications often depend on external services like databases. We can use mocking to isolate our tests.
Example with Dependencies
Let's refactor our app to use a database dependency:
from fastapi import FastAPI, HTTPException, Depends

app = FastAPI()

class Database:
    def __init__(self):
        self.items = {}

    def get_item(self, name: str):
        return self.items.get(name)

    def create_item(self, name: str, price: float):
        self.items[name] = {"name": name, "price": price}
        return self.items[name]

# A single shared instance: if get_db returned a new Database() on every
# call, each request would see an empty store and state would never persist.
database = Database()

def get_db():
    return database

@app.post("/items/", status_code=201)
def create_item(name: str, price: float, db: Database = Depends(get_db)):
    if db.get_item(name):
        raise HTTPException(status_code=400, detail="Item already exists")
    return db.create_item(name, price)

@app.get("/items/{name}")
def read_item(name: str, db: Database = Depends(get_db)):
    item = db.get_item(name)
    if not item:
        raise HTTPException(status_code=404, detail="Item not found")
    return item
Mocking the Database in Tests
import pytest
from unittest.mock import MagicMock
from app.main import app, get_db

@pytest.fixture
def mock_db():
    db = MagicMock()
    app.dependency_overrides[get_db] = lambda: db
    yield db
    app.dependency_overrides.clear()

def test_read_item_with_mock_db(client, mock_db):
    mock_item = {"name": "mocked_item", "price": 20.0}
    mock_db.get_item.return_value = mock_item

    response = client.get("/items/mocked_item")
    assert response.status_code == 200
    assert response.json() == mock_item
    mock_db.get_item.assert_called_once_with("mocked_item")
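If MagicMock feels too permissive (it silently accepts any method call, even misspelled ones), an alternative is a small hand-written fake. A sketch, with FakeDatabase being an illustrative name of my own:
class FakeDatabase:
    # A minimal in-memory stand-in mirroring Database's interface
    def __init__(self, items=None):
        self.items = items or {}

    def get_item(self, name: str):
        return self.items.get(name)

    def create_item(self, name: str, price: float):
        self.items[name] = {"name": name, "price": price}
        return self.items[name]

def test_read_item_with_fake_db(client):
    fake = FakeDatabase({"fake_item": {"name": "fake_item", "price": 5.0}})
    app.dependency_overrides[get_db] = lambda: fake
    try:
        response = client.get("/items/fake_item")
        assert response.status_code == 200
        assert response.json()["price"] == 5.0
    finally:
        app.dependency_overrides.clear()
A fake fails loudly if the real interface changes, at the cost of a little more code.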
Testing with Async Client
For asynchronous endpoints, we should use an asynchronous test client. This requires the pytest-asyncio plugin (pip install pytest-asyncio), and newer httpx versions wire the app in through ASGITransport:
import pytest
from httpx import ASGITransport, AsyncClient
from app.main import app  # Assuming app has async endpoints

@pytest.mark.asyncio
async def test_async_endpoint():
    # Newer httpx versions removed the AsyncClient(app=app, ...) shortcut;
    # ASGITransport is the supported way to test an ASGI app in-process.
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        response = await ac.get("/async-endpoint")
    assert response.status_code == 200
    assert response.json() == {"message": "This is an async endpoint"}
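Rather than decorating every test with @pytest.mark.asyncio, pytest-asyncio can collect async tests automatically. A minimal sketch, assuming you keep your configuration in pytest.ini:
[pytest]
asyncio_mode = auto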
Integration Testing with Real Dependencies
Sometimes you'll want to test against real dependencies. Let's run the suite against a real SQLAlchemy-backed SQLite database:
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.database import Base
from app.main import app, get_db

# Create a test database (check_same_thread=False lets TestClient's
# worker threads share the SQLite connection)
SQLALCHEMY_TEST_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(
    SQLALCHEMY_TEST_DATABASE_URL, connect_args={"check_same_thread": False}
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

@pytest.fixture
def test_db():
    # Create tables
    Base.metadata.create_all(bind=engine)

    # Override dependency
    def override_get_db():
        db = TestingSessionLocal()
        try:
            yield db
        finally:
            db.close()

    app.dependency_overrides[get_db] = override_get_db

    # Run tests
    yield

    # Clean up
    Base.metadata.drop_all(bind=engine)
    app.dependency_overrides.clear()

def test_create_item_with_real_db(client, test_db):
    response = client.post("/items/?name=db_item&price=25.0")
    assert response.status_code == 201
    assert response.json()["name"] == "db_item"
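This example assumes your application defines its SQLAlchemy plumbing in app/database.py. A minimal sketch of what that module might contain (the names are assumptions, not anything FastAPI prescribes):
# app/database.py (assumed layout for the example above)
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

SQLALCHEMY_DATABASE_URL = "sqlite:///./app.db"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# All ORM models inherit from Base, so the test fixture can call
# Base.metadata.create_all() / drop_all() against the test engine
Base = declarative_base()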
Testing Authentication and Authorization
Most APIs require authentication. Here's how to test protected endpoints:
from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")

def get_api_key(api_key: str = Security(api_key_header)):
    if api_key != "valid_key":
        raise HTTPException(status_code=403, detail="Invalid API Key")
    return api_key

@app.get("/protected/")
def protected_route(api_key: str = Depends(get_api_key)):
    return {"message": "You have access to protected route"}

# Testing the protected route
def test_protected_route_with_valid_key(client):
    response = client.get("/protected/", headers={"X-API-Key": "valid_key"})
    assert response.status_code == 200
    assert response.json() == {"message": "You have access to protected route"}

def test_protected_route_with_invalid_key(client):
    response = client.get("/protected/", headers={"X-API-Key": "invalid_key"})
    assert response.status_code == 403
    assert response.json() == {"detail": "Invalid API Key"}
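It's also worth covering a request that omits the header entirely. With APIKeyHeader's default auto_error=True, FastAPI rejects the request before get_api_key ever runs; in the FastAPI versions I've used this yields a 403 with "Not authenticated", but verify the exact detail string against your version:
def test_protected_route_without_key(client):
    # No X-API-Key header at all: APIKeyHeader itself rejects the request
    response = client.get("/protected/")
    assert response.status_code == 403
    assert response.json() == {"detail": "Not authenticated"}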
Measuring Test Coverage
Understanding how much of your code is tested is important. Use pytest-cov for this:
pytest --cov=app tests/
This command will show you the percentage of code covered by your tests. You can generate HTML reports for more details:
pytest --cov=app --cov-report=html tests/
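If you want the run to fail outright when coverage drops below a threshold (handy in CI, and for the exercise at the end of this guide), pytest-cov provides a flag for that:
pytest --cov=app --cov-fail-under=80 tests/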
Real-World Example: Testing a Todo API
Let's look at a more complete example of testing a Todo API:
# app/models.py
from pydantic import BaseModel
from typing import Optional

class TodoCreate(BaseModel):
    title: str
    description: Optional[str] = None
    completed: bool = False

class Todo(TodoCreate):
    id: int
# app/routers/todos.py
from fastapi import APIRouter, HTTPException, Depends
from typing import List
from app.models import Todo, TodoCreate

router = APIRouter()

todo_db = {}
todo_counter = 0

def get_todo_db():
    # Return only the dict: ints are passed by value, so handing out
    # todo_counter here would never reflect later increments.
    return todo_db

@router.post("/todos/", response_model=Todo, status_code=201)
def create_todo(todo: TodoCreate, db=Depends(get_todo_db)):
    global todo_counter
    todo_counter += 1
    new_todo = Todo(id=todo_counter, **todo.dict())  # use model_dump() on Pydantic v2
    db[todo_counter] = new_todo
    return new_todo

@router.get("/todos/", response_model=List[Todo])
def list_todos(db=Depends(get_todo_db)):
    return list(db.values())

@router.get("/todos/{todo_id}", response_model=Todo)
def get_todo(todo_id: int, db=Depends(get_todo_db)):
    if todo_id not in db:
        raise HTTPException(status_code=404, detail="Todo not found")
    return db[todo_id]

@router.put("/todos/{todo_id}", response_model=Todo)
def update_todo(todo_id: int, todo: TodoCreate, db=Depends(get_todo_db)):
    if todo_id not in db:
        raise HTTPException(status_code=404, detail="Todo not found")
    updated_todo = Todo(id=todo_id, **todo.dict())  # use model_dump() on Pydantic v2
    db[todo_id] = updated_todo
    return updated_todo

@router.delete("/todos/{todo_id}", status_code=204)
def delete_todo(todo_id: int, db=Depends(get_todo_db)):
    if todo_id not in db:
        raise HTTPException(status_code=404, detail="Todo not found")
    del db[todo_id]
    return None
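The tests below import app from app/main.py, which is assumed to mount this router; a minimal sketch of that wiring:
# app/main.py (assumed wiring for the tests below)
from fastapi import FastAPI
from app.routers import todos

app = FastAPI()
app.include_router(todos.router)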
Now, let's write comprehensive tests:
# tests/test_todos.py
import pytest
from fastapi.testclient import TestClient
from app.main import app  # Assuming you've included the router

client = TestClient(app)

@pytest.fixture
def reset_db():
    # Reset state through the module object: a plain `global todo_counter`
    # here would only rebind a name in this test module, leaving the
    # router's counter untouched.
    import app.routers.todos as todos_module
    todos_module.todo_db.clear()
    todos_module.todo_counter = 0

def test_create_todo(reset_db):
    response = client.post(
        "/todos/",
        json={"title": "Test todo", "description": "Test description"}
    )
    assert response.status_code == 201
    data = response.json()
    assert data["title"] == "Test todo"
    assert data["description"] == "Test description"
    assert data["completed"] is False
    assert "id" in data

def test_list_todos(reset_db):
    # Create two todos first
    client.post("/todos/", json={"title": "First todo"})
    client.post("/todos/", json={"title": "Second todo"})

    response = client.get("/todos/")
    assert response.status_code == 200
    data = response.json()
    assert len(data) == 2
    assert data[0]["title"] == "First todo"
    assert data[1]["title"] == "Second todo"

def test_get_todo(reset_db):
    # Create a todo first
    create_response = client.post("/todos/", json={"title": "Get todo"})
    todo_id = create_response.json()["id"]

    response = client.get(f"/todos/{todo_id}")
    assert response.status_code == 200
    data = response.json()
    assert data["title"] == "Get todo"
    assert data["id"] == todo_id

def test_get_nonexistent_todo(reset_db):
    response = client.get("/todos/999")
    assert response.status_code == 404

def test_update_todo(reset_db):
    # Create a todo first
    create_response = client.post("/todos/", json={"title": "Original title"})
    todo_id = create_response.json()["id"]

    response = client.put(
        f"/todos/{todo_id}",
        json={"title": "Updated title", "completed": True}
    )
    assert response.status_code == 200
    data = response.json()
    assert data["title"] == "Updated title"
    assert data["completed"] is True
    assert data["id"] == todo_id

def test_delete_todo(reset_db):
    # Create a todo first
    create_response = client.post("/todos/", json={"title": "To be deleted"})
    todo_id = create_response.json()["id"]

    # Delete it
    response = client.delete(f"/todos/{todo_id}")
    assert response.status_code == 204

    # Verify it's gone
    get_response = client.get(f"/todos/{todo_id}")
    assert get_response.status_code == 404
Organizing Tests for Larger Applications
As your application grows, organize your tests to match your application structure:
tests/
├── conftest.py            # Shared fixtures
├── test_main.py           # Main app tests
├── test_dependencies.py   # Test custom dependencies
├── routers/
│   ├── test_users.py      # Tests for user routes
│   └── test_items.py      # Tests for item routes
└── utils/
    └── test_security.py   # Tests for security utils
Best Practices for FastAPI Testing
- Test isolation: Each test should be independent and not rely on state created by another test
- Use fixtures: Create fixtures for common setup and teardown operations
- Test failure cases: Don't just test the happy path; test error cases too
- Use parametrized tests: Test multiple inputs with @pytest.mark.parametrize (see the sketch after this list)
- Mock external dependencies: Use mocks to isolate tests from external services
- Test middleware: Don't forget to test any custom middleware
- Separate unit and integration tests: Keep faster unit tests separate from slower integration tests
- Use CI/CD: Run tests automatically in your CI/CD pipeline
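To make the parametrization point concrete, here is a sketch that drives the read_item endpoint from earlier with several inputs, reusing the client and sample_item fixtures from conftest.py:
import pytest

@pytest.mark.parametrize(
    "name, expected_status",
    [
        ("sample_item", 200),    # seeded by the sample_item fixture
        ("missing_item", 404),   # never created, so it should 404
        ("also_missing", 404),
    ],
)
def test_read_item_parametrized(client, sample_item, name, expected_status):
    response = client.get(f"/items/{name}")
    assert response.status_code == expected_status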
Summary
A robust testing strategy is essential for any FastAPI application. By using TestClient, fixtures, mocking, and good test organization, you can ensure your API works correctly and remains maintainable.
Testing should be an integral part of your development workflow. Start writing tests early in your project, and you'll thank yourself later when you need to make changes with confidence.
Additional Resources
- FastAPI Testing Documentation
- Pytest Documentation
- Starlette TestClient Documentation
- SQLAlchemy Testing
Exercise
- Create a FastAPI application with endpoints for a book library, including routes to add, list, and delete books.
- Write comprehensive tests for each endpoint, including error cases.
- Add authentication to your API and write tests for both authenticated and unauthenticated requests.
- Implement test coverage reporting and ensure you have at least 80% coverage.
By following this guide and completing the exercises, you'll develop a solid understanding of FastAPI testing strategies that will serve you well in your development career.
If you spot any mistakes on this website, please let me know at [email protected]. I’d greatly appreciate your feedback! :)