FastAPI Test Coverage
Test coverage is a crucial metric that helps you understand how much of your code is being tested by your test suite. In this guide, we'll explore how to measure test coverage in FastAPI applications, analyze the results, and implement strategies to improve your test coverage.
Introduction to Test Coverage
Test coverage measures the percentage of your codebase that is executed during your test suite runs. It helps you identify:
- Code that isn't being tested
- Potential dead code that's never executed
- Areas of your application that need more testing
High test coverage gives you confidence that your application behaves as expected and reduces the risk of introducing bugs when making changes.
Setting Up Test Coverage Tools
Required Packages
First, let's install the necessary packages:
pip install pytest pytest-cov
- pytest: The testing framework we'll use
- pytest-cov: A pytest plugin that integrates with the coverage.py library to measure code coverage
Project Structure
For this tutorial, let's assume we have a FastAPI application with the following structure:
my_fastapi_app/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── models.py
│   └── routers/
│       ├── __init__.py
│       └── items.py
└── tests/
    ├── __init__.py
    ├── test_main.py
    └── test_items.py
Running Tests with Coverage
Basic Coverage Report
To run your tests with coverage analysis:
pytest --cov=app tests/
This command runs all tests in the tests/ directory and measures coverage for the app package.
The output will look something like:
============================= test session starts ==============================
...
collected 5 items
tests/test_items.py .. [ 40%]
tests/test_main.py ... [100%]
---------- coverage: platform linux, python 3.9.5-final-0 -----------
Name                      Stmts   Miss  Cover
---------------------------------------------
app/__init__.py               0      0   100%
app/main.py                  15      2    87%
app/models.py                10      0   100%
app/routers/__init__.py       0      0   100%
app/routers/items.py         25      5    80%
---------------------------------------------
TOTAL                        50      7    86%
============================== 5 passed in 0.98s ==============================
Generating HTML Reports
For more detailed information, generate an HTML report:
pytest --cov=app --cov-report=html tests/
This command creates an htmlcov directory with an interactive HTML report. Open htmlcov/index.html in your browser to explore the coverage in detail.
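If you also need a machine-readable report for CI tooling (for example, a service that tracks coverage across pull requests), pytest-cov can emit one with the xml report type:
pytest --cov=app --cov-report=xml tests/
This writes a coverage.xml file that most coverage services can consume.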
Understanding Coverage Reports
Let's look at what coverage reports tell us:
- Stmts: Total number of statements in the file
- Miss: Number of statements not executed during tests
- Cover: Percentage of statements executed
- Lines: When using HTML reports, you can see exactly which lines were executed (green) and which were missed (red)
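If you prefer to stay in the terminal, the term-missing report type adds a Missing column listing the exact line numbers your tests never executed:
pytest --cov=app --cov-report=term-missing tests/
This is often the quickest way to spot an untested error branch without opening the HTML report.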
Practical Example: Testing a FastAPI Endpoint
Let's create a simple FastAPI application and write tests with coverage:
Application Code (app/main.py)
from fastapi import FastAPI, HTTPException, Query
from typing import List, Optional
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str
    description: Optional[str] = None
    price: float

# In-memory database
items_db = {}

@app.post("/items/", response_model=Item)
def create_item(item: Item):
    if item.id in items_db:
        raise HTTPException(status_code=400, detail="Item ID already exists")
    items_db[item.id] = item
    return item

@app.get("/items/{item_id}", response_model=Item)
def read_item(item_id: int):
    if item_id not in items_db:
        raise HTTPException(status_code=404, detail="Item not found")
    return items_db[item_id]

@app.get("/items/", response_model=List[Item])
def list_items(skip: int = 0, limit: int = Query(default=10, le=100)):
    items_values = list(items_db.values())
    return items_values[skip:skip + limit]

@app.delete("/items/{item_id}")
def delete_item(item_id: int):
    if item_id not in items_db:
        raise HTTPException(status_code=404, detail="Item not found")
    del items_db[item_id]
    return {"message": "Item deleted successfully"}
Test File (tests/test_main.py)
from fastapi.testclient import TestClient
from app.main import app, items_db

client = TestClient(app)

# Clear the database before each test
def setup_function():
    items_db.clear()

def test_create_item():
    response = client.post(
        "/items/",
        json={"id": 1, "name": "Test Item", "price": 10.5}
    )
    assert response.status_code == 200
    assert response.json() == {
        "id": 1,
        "name": "Test Item",
        "description": None,
        "price": 10.5
    }

def test_create_existing_item():
    # Create an item first
    client.post(
        "/items/",
        json={"id": 1, "name": "Test Item", "price": 10.5}
    )
    # Try to create another item with the same ID
    response = client.post(
        "/items/",
        json={"id": 1, "name": "Another Item", "price": 20.0}
    )
    assert response.status_code == 400
    assert response.json()["detail"] == "Item ID already exists"

def test_read_item():
    # Create an item first
    client.post(
        "/items/",
        json={"id": 1, "name": "Test Item", "price": 10.5}
    )
    # Retrieve the item
    response = client.get("/items/1")
    assert response.status_code == 200
    assert response.json() == {
        "id": 1,
        "name": "Test Item",
        "description": None,
        "price": 10.5
    }

def test_read_nonexistent_item():
    response = client.get("/items/999")
    assert response.status_code == 404
    assert response.json()["detail"] == "Item not found"
Running Coverage
Now let's run our tests with coverage:
pytest --cov=app tests/
We might see something like:
Name          Stmts   Miss  Cover
---------------------------------
app/main.py      30      5    83%
---------------------------------
TOTAL            30      5    83%
Looking at the HTML report, we notice that we're not testing the list_items and delete_item endpoints.
Improving Coverage
Let's add tests for the remaining endpoints:
def test_list_items():
    # Create a few items
    client.post("/items/", json={"id": 1, "name": "Item 1", "price": 10.0})
    client.post("/items/", json={"id": 2, "name": "Item 2", "price": 20.0})
    # List all items
    response = client.get("/items/")
    assert response.status_code == 200
    items = response.json()
    assert len(items) == 2
    assert items[0]["name"] == "Item 1"
    assert items[1]["name"] == "Item 2"

def test_delete_item():
    # Create an item
    client.post("/items/", json={"id": 1, "name": "Test Item", "price": 10.5})
    # Delete the item
    response = client.delete("/items/1")
    assert response.status_code == 200
    assert response.json() == {"message": "Item deleted successfully"}
    # Verify item is gone
    response = client.get("/items/1")
    assert response.status_code == 404

def test_delete_nonexistent_item():
    response = client.delete("/items/999")
    assert response.status_code == 404
    assert response.json()["detail"] == "Item not found"
After adding these tests, our coverage should increase significantly:
Name          Stmts   Miss  Cover
---------------------------------
app/main.py      30      0   100%
---------------------------------
TOTAL            30      0   100%
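Note that line coverage alone doesn't tell you whether both outcomes of every condition were exercised. pytest-cov can also track branch coverage:
pytest --cov=app --cov-branch tests/
With --cov-branch enabled, a line like if item_id not in items_db: only counts as fully covered once your tests have taken both the true and the false path.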
Setting Coverage Thresholds
To ensure your code maintains a certain level of test coverage, you can set minimum thresholds:
pytest --cov=app --cov-fail-under=90 tests/
This command will fail the test run if coverage falls below 90%. You can add this to your CI/CD pipeline to prevent merging code with insufficient test coverage.
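Rather than retyping these flags on every run, you can put them in your pytest configuration. A minimal sketch, assuming your project uses pyproject.toml:
[tool.pytest.ini_options]
addopts = "--cov=app --cov-fail-under=90"
With this in place, a plain pytest invocation measures coverage and enforces the threshold automatically.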
Coverage Configuration
You can create a .coveragerc file in your project root to configure coverage settings:
[run]
source = app
omit =
    */migrations/*
    */tests/*
    */__init__.py

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    if self.debug:
    raise NotImplementedError
    if __name__ == .__main__.:
    pass
    raise ImportError
This configuration:
- Specifies the source code to measure
- Omits certain directories from coverage analysis
- Excludes specific lines or patterns that don't need coverage
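If you'd rather not add another dotfile, coverage.py can also read these settings from pyproject.toml (on older Python versions this may require installing the coverage[toml] extra). A rough equivalent of the [run] section above:
[tool.coverage.run]
source = ["app"]
omit = [
    "*/migrations/*",
    "*/tests/*",
    "*/__init__.py",
]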
Best Practices for Test Coverage
- Aim for realistic coverage goals: 100% coverage is often impractical. Focus on critical code paths instead.
- Don't just chase numbers: High coverage doesn't necessarily mean good testing. Focus on meaningful tests.
- Test edge cases: Make sure you're testing error conditions and boundary cases, not just the happy path.
- Use parametrized tests: Pytest's @pytest.mark.parametrize decorator allows you to test multiple variations with minimal code (see the sketch after this list).
- Create fixtures: Use pytest fixtures to set up test data and reduce code duplication.
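Here is a minimal sketch of the last two points against the items API from earlier; the fixture and test names are illustrative, not part of the application:
import pytest
from fastapi.testclient import TestClient
from app.main import app, items_db

client = TestClient(app)

# Autouse fixture so every test starts with an empty in-memory database
@pytest.fixture(autouse=True)
def clean_items_db():
    items_db.clear()
    yield

# One test function covers the happy path and two validation failures
@pytest.mark.parametrize(
    "payload, expected_status",
    [
        ({"id": 1, "name": "Valid item", "price": 9.99}, 200),    # happy path
        ({"id": 2, "name": "No price"}, 422),                     # missing required field
        ({"id": "abc", "name": "Bad id", "price": 1.0}, 422),     # wrong type for id
    ],
)
def test_create_item_validation(payload, expected_status):
    response = client.post("/items/", json=payload)
    assert response.status_code == expected_status
Each tuple becomes its own test case in the report, so a failing input is easy to pinpoint.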
Common Challenges with Coverage in FastAPI
Testing Background Tasks
FastAPI background tasks run after the response is returned, which can make them feel harder to test. The TestClient executes background tasks as part of handling the request, so one approach is to mock the task function and assert that it was scheduled with the expected arguments:
@app.post("/send-notification/")
async def send_notification(email: str, background_tasks: BackgroundTasks):
background_tasks.add_task(send_email, email, "Hello")
return {"message": "Notification will be sent"}
# In tests
def test_send_notification():
# Mock the send_email function
with patch("app.main.send_email") as mock_send_email:
response = client.post("/send-notification/[email protected]")
assert response.status_code == 200
# Verify the background task was scheduled
mock_send_email.assert_called_once_with("[email protected]", "Hello")
Testing Dependency Injection
Use the app.dependency_overrides dictionary to override dependencies in tests:
# In your app
async def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
# In your tests
def override_get_db():
db = TestingSessionLocal()
try:
yield db
finally:
db.close()
app.dependency_overrides[get_db] = override_get_db
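Because app.dependency_overrides is a plain dictionary, an override stays in place until you remove it. A small sketch, reusing get_db and override_get_db from the snippet above, that installs the override for each test and cleans up afterwards:
import pytest

@pytest.fixture(autouse=True)
def use_test_db():
    # Install the override for this test...
    app.dependency_overrides[get_db] = override_get_db
    yield
    # ...and remove it afterwards so other tests see the real dependency
    app.dependency_overrides.clear()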
Summary
Test coverage is a powerful tool for ensuring the quality and reliability of your FastAPI applications. Key takeaways include:
- Use pytest and pytest-cov to measure test coverage
- Generate HTML reports for detailed coverage analysis
- Set coverage thresholds for continuous integration
- Focus on meaningful tests, not just high coverage numbers
- Configure coverage to ignore non-essential code paths
- Test edge cases and error conditions
- Use mocks and dependency overrides for complex scenarios
By integrating coverage analysis into your development workflow, you'll build more robust FastAPI applications with fewer bugs and greater confidence in your code.
Additional Resources
- pytest Documentation
- pytest-cov GitHub
- Coverage.py Documentation
- FastAPI Testing Documentation
- Starlette TestClient Documentation
Exercises
- Set up a basic FastAPI application with at least two endpoints and write tests to achieve 100% coverage.
- Create a FastAPI application with a database dependency and write tests using dependency overrides.
- Use parametrized tests to test an endpoint with different inputs and expected outputs.
- Configure a CI/CD pipeline (like GitHub Actions) to run tests with coverage reports and enforce a minimum coverage threshold.
- Add a complex endpoint with conditional logic and write comprehensive tests to cover all branches of the code.