Flask Test Coverage
Introduction
Test coverage is a crucial metric in software development that measures how much of your codebase is being executed when your tests run. In Flask applications, understanding and improving test coverage helps ensure that your web application behaves as expected and reduces the likelihood of bugs in production.
In this tutorial, we'll explore how to measure test coverage in Flask applications, interpret the results, and implement strategies to improve coverage. By the end of this guide, you'll have a solid understanding of how to ensure your Flask application is thoroughly tested.
Understanding Test Coverage
Test coverage refers to the percentage of code that is executed when your test suite runs. It helps identify:
- Which parts of your code are being tested
- Which parts are not being tested (potential risk areas)
- How thorough your tests are overall
Coverage is typically measured across several dimensions:
- Statement coverage: The percentage of statements that have been executed
- Branch coverage: The percentage of possible branches (like if/else paths) that have been executed (see the sketch after this list)
- Function coverage: The percentage of functions that have been called
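To make the difference between statement and branch coverage concrete, here is a minimal, hypothetical sketch (apply_discount is not part of the application we build below). A single test can execute every statement while still leaving one side of an if untested:

```python
def apply_discount(price, is_member):
    total = price
    if is_member:
        total = price * 0.9  # 10% member discount
    return total


def test_member_discount():
    # Executes every statement, so statement coverage is 100%,
    # but the is_member == False path is never taken, so branch
    # coverage of the `if` is only 50%.
    assert apply_discount(100, True) == 90.0
```

Note that pytest-cov only measures branches when you pass --cov-branch (or set branch = True under [run] in your coverage configuration).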
Setting Up Coverage Tools
Before we can measure coverage, we need to install the necessary tools. For Python and Flask applications, pytest and pytest-cov are excellent choices.
Installation
```bash
pip install pytest pytest-cov
```
Let's set up a basic Flask application structure to demonstrate:
```text
my_flask_app/
├── app.py
├── models.py
├── routes.py
└── tests/
    ├── __init__.py
    ├── test_models.py
    └── test_routes.py
```
Here's a simple Flask application to work with:
```python
# app.py
from flask import Flask

from routes import register_routes


def create_app():
    app = Flask(__name__)
    register_routes(app)
    return app


if __name__ == '__main__':
    app = create_app()
    app.run(debug=True)
```
```python
# routes.py
from flask import jsonify, request


def register_routes(app):
    @app.route('/api/greeting', methods=['GET'])
    def greeting():
        name = request.args.get('name', 'Guest')
        return jsonify({'message': f'Hello, {name}!'})

    @app.route('/api/calculate', methods=['POST'])
    def calculate():
        # silent=True returns None instead of raising when the body is
        # missing or isn't valid JSON, so we can return our own 400 below
        data = request.get_json(silent=True)
        if not data:
            return jsonify({'error': 'No data provided'}), 400

        operation = data.get('operation')
        x = data.get('x', 0)
        y = data.get('y', 0)

        if operation == 'add':
            result = x + y
        elif operation == 'subtract':
            result = x - y
        elif operation == 'multiply':
            result = x * y
        elif operation == 'divide':
            if y == 0:
                return jsonify({'error': 'Cannot divide by zero'}), 400
            result = x / y
        else:
            return jsonify({'error': 'Invalid operation'}), 400

        return jsonify({'result': result})
```
Writing Tests for Coverage
Now, let's write tests that we'll use to measure coverage:
```python
# tests/test_routes.py
import json

import pytest

from app import create_app


@pytest.fixture
def client():
    app = create_app()
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client


def test_greeting(client):
    response = client.get('/api/greeting')
    data = json.loads(response.data)
    assert response.status_code == 200
    assert data['message'] == 'Hello, Guest!'


def test_greeting_with_name(client):
    response = client.get('/api/greeting?name=John')
    data = json.loads(response.data)
    assert response.status_code == 200
    assert data['message'] == 'Hello, John!'


def test_calculate_add(client):
    response = client.post(
        '/api/calculate',
        data=json.dumps({'operation': 'add', 'x': 5, 'y': 3}),
        content_type='application/json'
    )
    data = json.loads(response.data)
    assert response.status_code == 200
    assert data['result'] == 8
```
Running Tests with Coverage
To run the tests and generate a coverage report, use the following command:
```bash
pytest --cov=. tests/
```
This will run all tests and show a basic coverage report in the terminal:
```text
----------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                   Stmts   Miss  Cover
-------------------------------------------
app.py                     8      1    88%
routes.py                 24      8    67%
tests/__init__.py          0      0   100%
tests/test_routes.py      23      0   100%
-------------------------------------------
TOTAL                     55      9    84%
```
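If you prefer to stay in the terminal, pytest-cov can also list the exact line numbers that were missed:

```bash
pytest --cov=. --cov-report=term-missing tests/
```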
For a more detailed HTML report:
```bash
pytest --cov=. --cov-report=html tests/
```
This generates an htmlcov directory with an interactive HTML coverage report that highlights which lines of code were executed and which were missed.
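The entry point of the report is htmlcov/index.html, which you can open in any browser, for example:

```bash
# Linux; use `open` on macOS or `start` on Windows
xdg-open htmlcov/index.html
```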
Interpreting Coverage Results
After running the coverage report, you'll see that some code isn't covered. Looking at the HTML report or terminal output, we notice:
- The calculate endpoint isn't fully tested:
  - We only tested the 'add' operation
  - We didn't test the error cases (invalid operation, division by zero, missing data)
  - The subtract, multiply, and divide branches are never exercised
- Some lines in app.py aren't covered:
  - The if __name__ == '__main__' block isn't executed during testing
Improving Test Coverage
Let's improve our test coverage by adding more tests:
```python
# Add these tests to tests/test_routes.py

def test_calculate_subtract(client):
    response = client.post(
        '/api/calculate',
        data=json.dumps({'operation': 'subtract', 'x': 10, 'y': 4}),
        content_type='application/json'
    )
    data = json.loads(response.data)
    assert response.status_code == 200
    assert data['result'] == 6


def test_calculate_multiply(client):
    response = client.post(
        '/api/calculate',
        data=json.dumps({'operation': 'multiply', 'x': 7, 'y': 3}),
        content_type='application/json'
    )
    data = json.loads(response.data)
    assert response.status_code == 200
    assert data['result'] == 21


def test_calculate_divide(client):
    response = client.post(
        '/api/calculate',
        data=json.dumps({'operation': 'divide', 'x': 20, 'y': 5}),
        content_type='application/json'
    )
    data = json.loads(response.data)
    assert response.status_code == 200
    assert data['result'] == 4


def test_calculate_divide_by_zero(client):
    response = client.post(
        '/api/calculate',
        data=json.dumps({'operation': 'divide', 'x': 10, 'y': 0}),
        content_type='application/json'
    )
    data = json.loads(response.data)
    assert response.status_code == 400
    assert 'error' in data
    assert data['error'] == 'Cannot divide by zero'


def test_calculate_invalid_operation(client):
    response = client.post(
        '/api/calculate',
        data=json.dumps({'operation': 'power', 'x': 2, 'y': 3}),
        content_type='application/json'
    )
    data = json.loads(response.data)
    assert response.status_code == 400
    assert 'error' in data
    assert data['error'] == 'Invalid operation'


def test_calculate_no_data(client):
    response = client.post('/api/calculate')
    data = json.loads(response.data)
    assert response.status_code == 400
    assert 'error' in data
    assert data['error'] == 'No data provided'
```
Now when we run the coverage again:
```bash
pytest --cov=. tests/
```
The output should show significantly improved coverage:
```text
----------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                   Stmts   Miss  Cover
-------------------------------------------
app.py                     8      1    88%
routes.py                 24      0   100%
tests/__init__.py          0      0   100%
tests/test_routes.py      65      0   100%
-------------------------------------------
TOTAL                     97      1    99%
```
We've achieved nearly 100% coverage; the only missed line is the if __name__ == '__main__' block, which is typically left untested since it's just the application entry point.
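If you'd rather exclude that entry-point block from the report entirely, coverage.py recognises a pragma comment; here it is applied to the app.py from earlier:

```python
if __name__ == '__main__':  # pragma: no cover
    app = create_app()
    app.run(debug=True)
```

The configuration shown in the next section achieves the same thing project-wide with an exclude_lines pattern.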
Configuration Options
For more control over coverage, you can create a .coveragerc file in your project root:
```ini
[run]
source = .
omit =
    */tests/*
    venv/*
    */site-packages/*

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    if __name__ == .__main__.:
    pass
    raise NotImplementedError
```
This configuration:
- Specifies which files to include in coverage
- Excludes test files, virtual environments, and external packages
- Excludes specific lines from coverage calculation
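You can also make coverage part of every test run instead of typing the flags each time. This is a minimal sketch assuming a pytest.ini at the project root (a setup.cfg or pyproject.toml section works the same way); --cov-fail-under makes the run fail when coverage drops below the threshold:

```ini
# pytest.ini
[pytest]
addopts = --cov=. --cov-report=term-missing --cov-fail-under=90
```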
Integrating with CI/CD
Test coverage is most valuable when integrated into your Continuous Integration (CI) pipeline. Here's how to set up GitHub Actions for Flask test coverage:
```yaml
# .github/workflows/test.yml
name: Test

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          pip install pytest pytest-cov
      - name: Test with pytest
        run: |
          pytest --cov=. --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.xml
          fail_ci_if_error: true
```
This workflow runs your tests with coverage on every push and pull request, then uploads the results to Codecov for visualization and tracking over time.
Best Practices for Flask Test Coverage
1. Aim for high but realistic coverage
   - 100% coverage isn't always necessary or practical
   - Focus on critical paths and business logic
2. Write meaningful tests
   - Don't write tests just to increase coverage
   - Make sure tests validate functionality, not just execution
3. Use test fixtures effectively (see the conftest.py sketch after this list)
   - Set up common test environments
   - Reduce test code duplication
4. Test edge cases
   - Error conditions
   - Boundary values
   - Unexpected inputs
5. Review coverage regularly
   - Make coverage reports part of code reviews
   - Track coverage trends over time
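As an example of point 3, shared fixtures can live in a tests/conftest.py so that every test module picks them up automatically. This is a sketch based on the create_app factory used earlier:

```python
# tests/conftest.py
import pytest

from app import create_app


@pytest.fixture
def app():
    app = create_app()
    app.config['TESTING'] = True
    return app


@pytest.fixture
def client(app):
    # Any test module under tests/ can request `client` without redefining it
    return app.test_client()
```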
Practical Real-World Example
Let's look at a more complex real-world example: an authentication system for a Flask API:
```python
# auth.py
import datetime
from functools import wraps

import jwt
from flask import request, jsonify

# In a real application, load this from configuration or an environment variable
SECRET_KEY = "your-secret-key"


def generate_token(user_id):
    """Generate a JWT token for a user"""
    payload = {
        'exp': datetime.datetime.utcnow() + datetime.timedelta(days=1),
        'iat': datetime.datetime.utcnow(),
        'sub': user_id
    }
    return jwt.encode(payload, SECRET_KEY, algorithm='HS256')


def token_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        token = None

        # Check if the token is in the Authorization header
        if 'Authorization' in request.headers:
            auth_header = request.headers['Authorization']
            try:
                token = auth_header.split(" ")[1]
            except IndexError:
                return jsonify({'error': 'Token is missing or invalid'}), 401

        if not token:
            return jsonify({'error': 'Token is missing!'}), 401

        try:
            # Decode and validate the token
            data = jwt.decode(token, SECRET_KEY, algorithms=['HS256'])
            current_user_id = data['sub']
        except jwt.ExpiredSignatureError:
            return jsonify({'error': 'Token has expired!'}), 401
        except jwt.InvalidTokenError:
            return jsonify({'error': 'Token is invalid!'}), 401

        # Pass the user_id to the wrapped function
        return f(current_user_id, *args, **kwargs)

    return decorated
```
Here's how to test this authentication module comprehensively:
```python
# tests/test_auth.py
import datetime

import jwt
import pytest
from flask import Flask, jsonify

from auth import generate_token, token_required, SECRET_KEY


@pytest.fixture
def app():
    app = Flask(__name__)

    @app.route('/protected')
    @token_required
    def protected(current_user_id):
        return jsonify({'message': 'This is protected', 'user_id': current_user_id})

    return app


@pytest.fixture
def client(app):
    return app.test_client()


def test_generate_token():
    token = generate_token('user123')
    decoded = jwt.decode(token, SECRET_KEY, algorithms=['HS256'])
    assert decoded['sub'] == 'user123'
    assert 'exp' in decoded
    assert 'iat' in decoded


def test_valid_token_access(client):
    token = generate_token('user123')
    response = client.get('/protected', headers={'Authorization': f'Bearer {token}'})
    data = response.get_json()
    assert response.status_code == 200
    assert data['message'] == 'This is protected'
    assert data['user_id'] == 'user123'


def test_missing_token(client):
    response = client.get('/protected')
    data = response.get_json()
    assert response.status_code == 401
    assert data['error'] == 'Token is missing!'


def test_invalid_token_format(client):
    response = client.get('/protected', headers={'Authorization': 'InvalidFormat'})
    data = response.get_json()
    assert response.status_code == 401
    assert data['error'] == 'Token is missing or invalid'


def test_invalid_token(client):
    response = client.get('/protected', headers={'Authorization': 'Bearer invalid.token.here'})
    data = response.get_json()
    assert response.status_code == 401
    assert data['error'] == 'Token is invalid!'


def test_expired_token(client):
    # Build a token whose expiry is already in the past instead of patching
    # datetime; this keeps the test simple and deterministic.
    payload = {
        'exp': datetime.datetime.utcnow() - datetime.timedelta(days=1),
        'iat': datetime.datetime.utcnow() - datetime.timedelta(days=2),
        'sub': 'user123',
    }
    expired_token = jwt.encode(payload, SECRET_KEY, algorithm='HS256')
    response = client.get('/protected', headers={'Authorization': f'Bearer {expired_token}'})
    data = response.get_json()
    assert response.status_code == 401
    assert data['error'] == 'Token has expired!'
```
Running coverage on this complex example would show how well we've tested the authentication system, including edge cases and error handling.
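For example, to measure just the auth module and list any lines the tests missed:

```bash
pytest --cov=auth --cov-report=term-missing tests/test_auth.py
```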
Summary
Test coverage is an essential tool for ensuring the quality and reliability of your Flask applications. In this tutorial, we covered:
- The basics of test coverage and why it's important
- How to set up and run coverage tests in Flask applications
- Interpreting and improving coverage results
- Configuring coverage settings for your specific needs
- Integrating coverage into CI/CD pipelines
- Best practices for maintaining good test coverage
- A real-world example of comprehensive testing
By incorporating these practices into your development workflow, you'll build more robust Flask applications with fewer bugs and greater confidence in your code.
Additional Resources and Exercises
Resources
- pytest-cov Documentation
- Flask Testing Documentation
- Codecov - Coverage Visualization
- Python Coverage Documentation
Exercises
1. Basic Coverage
   - Create a simple Flask application with at least two routes
   - Write tests to achieve at least 90% coverage
   - Generate and analyze an HTML coverage report
2. Advanced Coverage
   - Add a database layer (SQLAlchemy) to your Flask application
   - Write tests that use database fixtures or mocks
   - Identify and improve coverage of database operations
3. Coverage Integration
   - Set up a GitHub repository with GitHub Actions
   - Configure automated coverage reporting
   - Add a coverage badge to your README.md
4. Coverage Improvement
   - Fork an existing open-source Flask project
   - Run coverage on its test suite
   - Submit a pull request that improves test coverage
By completing these exercises, you'll gain practical experience in measuring and improving test coverage in real Flask applications.