FastAPI Logging Strategy
Effective logging is a crucial aspect of developing robust FastAPI applications. A well-designed logging strategy helps you debug issues, monitor application health, and understand user behavior. In this guide, we'll explore how to implement a comprehensive logging system in your FastAPI applications.
Introduction to Logging in FastAPI
Logging provides a way to track events that happen when your application runs. Instead of using print()
statements throughout your code (which are difficult to control and manage), a proper logging system offers flexibility, configurability, and different severity levels.
FastAPI doesn't provide its own logging mechanism but instead works seamlessly with Python's built-in logging
module. This allows you to leverage your existing Python logging knowledge while building FastAPI applications.
Basic Logging Setup
Let's start with a basic logging setup for a FastAPI application:
import logging
from fastapi import FastAPI
# Configure basic logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S"
)
# Create a logger instance
logger = logging.getLogger(__name__)
app = FastAPI()
@app.get("/")
async def read_root():
logger.info("Root endpoint accessed")
return {"message": "Hello World"}
@app.get("/items/{item_id}")
async def read_item(item_id: int):
logger.info(f"Accessed item with id: {item_id}")
return {"item_id": item_id}
When you run this application and access these endpoints, you'll see log entries like:
2023-07-15 14:32:45 - __main__ - INFO - Root endpoint accessed
2023-07-15 14:33:12 - __main__ - INFO - Accessed item with id: 42
Logging Levels
Python's logging module provides several logging levels, ordered by severity:
- DEBUG: Detailed information, typically useful for diagnosing problems
- INFO: Confirmation that things are working as expected
- WARNING: Indication that something unexpected happened or may happen
- ERROR: The software couldn't perform some function
- CRITICAL: A serious error indicating the program itself may be unable to continue running
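These levels also act as thresholds: a logger only emits records at or above its configured level. A quick sketch (the logger name is illustrative):

```python
import logging

demo = logging.getLogger("levels_demo")
demo.setLevel(logging.WARNING)  # DEBUG and INFO records are now filtered out

demo.debug("detailed trace")    # below threshold: dropped
demo.info("normal operation")   # below threshold: dropped
demo.warning("something odd")   # at threshold: emitted
demo.error("operation failed")  # above threshold: emitted
```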
In FastAPI applications, you should use these levels appropriately:
@app.get("/user/{user_id}")
async def get_user(user_id: int):
    # Requires: from fastapi import HTTPException
    logger.debug(f"Looking up user with id {user_id}")  # Detailed trace info
    if user_id <= 0:
        logger.warning(f"Invalid user_id received: {user_id}")
        raise HTTPException(status_code=400, detail="Invalid user ID")
    try:
        # Simulating a database lookup
        if user_id > 1000:
            logger.error(f"User with id {user_id} not found")
            raise HTTPException(status_code=404, detail="User not found")
        user = {"id": user_id, "name": f"User {user_id}"}
        logger.info(f"Successfully retrieved user: {user_id}")
        return user
    except HTTPException:
        raise  # expected errors were already logged at the appropriate level
    except Exception as e:
        logger.critical(f"Critical error when retrieving user {user_id}: {e}")
        raise
Creating a Middleware for Request Logging
A middleware can help you log all incoming requests automatically:
import time
from fastapi import Request
from starlette.middleware.base import BaseHTTPMiddleware
class LoggingMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request: Request, call_next):
start_time = time.time()
# Log request info
logger.info(f"Request started: {request.method} {request.url}")
# Process the request
try:
response = await call_next(request)
# Log response info
process_time = time.time() - start_time
logger.info(f"Request completed: {request.method} {request.url} - Status: {response.status_code} - Time: {process_time:.3f}s")
return response
except Exception as e:
logger.error(f"Request failed: {request.method} {request.url} - Error: {str(e)}")
raise
# Add middleware to the FastAPI app
app.add_middleware(LoggingMiddleware)
This middleware logs when requests start, complete, and any errors that occur during processing.
Configuring Logging for Different Environments
Different environments (development, testing, production) require different logging configurations. Here's how you can set up environment-specific logging:
import os
import json
import logging.config
def setup_logging(env="development"):
"""Configure logging based on the environment."""
# Base config for all environments
log_config = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"standard": {
"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
"datefmt": "%Y-%m-%d %H:%M:%S"
},
"json": {
"format": "%(asctime)s %(levelname)s %(name)s %(message)s",
"class": "pythonjsonlogger.jsonlogger.JsonFormatter"
}
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"level": "DEBUG",
"formatter": "standard",
"stream": "ext://sys.stdout"
},
"file": {
"class": "logging.handlers.RotatingFileHandler",
"level": "INFO",
"formatter": "standard",
"filename": "app.log",
"maxBytes": 10485760, # 10MB
"backupCount": 10
}
},
"loggers": {
"": { # Root logger
"handlers": ["console"],
"level": "INFO"
},
"app": {
"handlers": ["console", "file"],
"level": "INFO",
"propagate": False
},
"uvicorn": {
"handlers": ["console", "file"],
"level": "INFO",
"propagate": False
}
}
}
# Environment-specific overrides
if env == "development":
log_config["loggers"]["app"]["level"] = "DEBUG"
elif env == "production":
log_config["handlers"]["console"]["formatter"] = "json"
log_config["handlers"]["file"]["formatter"] = "json"
log_config["handlers"]["console"]["level"] = "INFO"
# Apply configuration
logging.config.dictConfig(log_config)
# Set up logging using environment variable or default to development
env = os.getenv("ENVIRONMENT", "development")
setup_logging(env)
# Create a logger for this application
logger = logging.getLogger("app")
app = FastAPI()
# Rest of your FastAPI code...
Working with Structured Logging
Structured logging formats logs as JSON objects rather than plain text, making them easier to parse and analyze with log management tools:
# Requires the python-json-logger package
# pip install python-json-logger
@app.get("/products")
async def list_products(category: str | None = None, limit: int = 10):
extra = {
"category": category,
"limit": limit,
"endpoint": "/products"
}
logger.info("Products listing requested", extra=extra)
# Logic to fetch products
products = [{"id": i, "name": f"Product {i}"} for i in range(limit)]
extra["product_count"] = len(products)
logger.info("Products listing completed", extra=extra)
return products
This produces structured logs that contain all the context information about the request:
{
"asctime": "2023-07-15 15:42:33",
"levelname": "INFO",
"name": "app",
"message": "Products listing requested",
"category": "electronics",
"limit": 10,
"endpoint": "/products"
}
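python-json-logger produces the output shown above; if you want to see what's involved (or avoid the dependency), a minimal stdlib-only JSON formatter might look like this. This is a sketch, not a drop-in replacement:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Minimal stdlib-only JSON formatter (python-json-logger is more complete)."""

    # Attribute names every LogRecord has; anything else arrived via extra=
    _STANDARD = set(vars(logging.makeLogRecord({})))

    def format(self, record):
        payload = {
            "asctime": self.formatTime(record, "%Y-%m-%d %H:%M:%S"),
            "levelname": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        }
        # Merge in any custom fields passed through extra=
        for key, value in vars(record).items():
            if key not in self._STANDARD:
                payload.setdefault(key, value)
        return json.dumps(payload)
```

Attach it to a handler with `handler.setFormatter(JsonFormatter())` and the `extra` fields from the endpoint above will appear as top-level JSON keys.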
Handling Sensitive Information
Be careful not to log sensitive information like passwords or personal data:
@app.post("/login")
async def login(username: str, password: str):
    # In a real app, accept credentials via a request body or form,
    # never as query parameters (URLs routinely end up in access logs).
    # Log the username but NEVER the password
    logger.info(f"Login attempt for user: {username}")
    # WRONG! Don't do this:
    # logger.debug(f"Login with username: {username}, password: {password}")
# Authentication logic here
# ...
return {"status": "success"}
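Beyond discipline at the call site, you can add a safety net: a logging filter that masks anything credential-shaped before a record reaches a handler. A sketch, with an illustrative pattern you would extend for your own field names:

```python
import logging
import re

# Illustrative: catches key=value / key: value pairs for common secret names
SENSITIVE_PATTERN = re.compile(
    r"(password|token|secret|api_key)\s*[=:]\s*\S+", re.IGNORECASE
)

class RedactingFilter(logging.Filter):
    """Mask anything that looks like a credential in the log message."""

    def filter(self, record):
        message = record.getMessage()
        redacted = SENSITIVE_PATTERN.sub(r"\1=[REDACTED]", message)
        if redacted != message:
            record.msg = redacted
            record.args = ()  # message is already fully formatted
        return True  # never drop the record, only sanitize it
```

Attach it with `logger.addFilter(RedactingFilter())`. A filter like this is a backstop, not a substitute for never passing secrets to the logger in the first place.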
Creating Custom Log Filters
You might want to filter out certain log entries or modify them before they're recorded:
class HealthCheckFilter(logging.Filter):
"""Filter out health check endpoint logs to reduce noise."""
def filter(self, record):
# Skip logging for health check endpoint
if "health" in record.getMessage() and "endpoint accessed" in record.getMessage():
return False
return True
# Apply the filter to the logger
logger.addFilter(HealthCheckFilter())
@app.get("/health")
async def health_check():
# This won't be logged due to our filter
logger.info("Health check endpoint accessed")
return {"status": "healthy"}
Integrating with Log Management Systems
For production environments, consider integrating with centralized logging systems:
# For Sentry integration
# pip install sentry-sdk
import sentry_sdk
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
# Initialize Sentry (in production)
if env == "production":
sentry_sdk.init(
dsn="your-sentry-dsn",
traces_sample_rate=0.2,
environment=env
)
# Wrap your FastAPI app with Sentry's middleware
app = SentryAsgiMiddleware(app)
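Whichever backend you ship logs to, avoid doing network or disk I/O on the request path itself. The standard library's QueueHandler/QueueListener pair moves handler work onto a background thread; a sketch (the function name is mine):

```python
import logging
import logging.handlers
import queue

def setup_queue_logging(target_handler):
    """Route records through a queue so request handling never blocks
    on slow I/O (file, network, log shipper)."""
    log_queue = queue.Queue(-1)  # unbounded
    queue_handler = logging.handlers.QueueHandler(log_queue)
    listener = logging.handlers.QueueListener(
        log_queue, target_handler, respect_handler_level=True
    )
    logging.getLogger().addHandler(queue_handler)
    listener.start()  # background thread drains the queue
    return listener   # call listener.stop() on application shutdown
```

The target handler can be a RotatingFileHandler, an HTTP shipper, or anything else; endpoints just enqueue a record and move on.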
Real-world Example: Complete Logging Setup
Here's a more complete example bringing together several concepts:
import logging
import time
import uuid
from contextvars import ContextVar
import os
from fastapi import FastAPI, Request, Depends
from fastapi.responses import JSONResponse
# Create a context variable to store request ID
request_id_contextvar = ContextVar("request_id", default=None)
# Configure logging
logging.basicConfig(
level=logging.INFO if os.getenv("ENVIRONMENT") == "production" else logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - [%(request_id)s] - %(message)s',
datefmt="%Y-%m-%d %H:%M:%S"
)
# Create a filter to inject request ID into log records
class RequestIdFilter(logging.Filter):
def filter(self, record):
record.request_id = request_id_contextvar.get() or "-"
return True
# Get logger and add filter
logger = logging.getLogger(__name__)
logger.addFilter(RequestIdFilter())
app = FastAPI()
# Middleware to generate request IDs
@app.middleware("http")
async def request_middleware(request: Request, call_next):
# Generate unique request ID
request_id = str(uuid.uuid4())
request_id_contextvar.set(request_id)
    start_time = time.time()
# Log the request
logger.info(
f"Request started: {request.method} {request.url.path}",
extra={
"method": request.method,
"path": request.url.path,
"client_ip": request.client.host,
"user_agent": request.headers.get("user-agent", "")
}
)
try:
# Process request
response = await call_next(request)
# Calculate processing time
process_time = time.time() - start_time
# Log the response
logger.info(
f"Request completed in {process_time:.3f}s with status {response.status_code}",
extra={
"status_code": response.status_code,
"processing_time": process_time
}
)
# Add the request ID to the response headers
response.headers["X-Request-ID"] = request_id
return response
except Exception as e:
# Log exceptions
logger.exception(f"Error processing request: {str(e)}")
# Return error response with request ID
return JSONResponse(
status_code=500,
content={
"error": "Internal server error",
"request_id": request_id
}
)
# Example endpoint
@app.get("/items/{item_id}")
async def read_item(item_id: int, q: str | None = None):
logger.debug(f"Processing item request with query param q={q}")
if item_id <= 0:
logger.warning(f"Invalid item_id received: {item_id}")
return {"error": "Invalid item ID"}
# Simulate item retrieval
logger.info(f"Item {item_id} was successfully retrieved")
return {"item_id": item_id, "q": q}
Additional Logging Best Practices
- Use context managers for complex operations (note: contextualize() comes from the third-party loguru library; the standard logging module has no equivalent, so this snippet assumes logger is a loguru logger):
@app.post("/upload")
async def upload_file(file: UploadFile):
    with logger.contextualize(filename=file.filename, size=file.size):
        logger.info("Starting file upload")
        # Upload logic
        logger.info("File upload completed")
        return {"filename": file.filename}
- Log the start and end of important operations:
@app.get("/generate-report")
async def generate_report():
logger.info("Starting report generation")
# Report generation logic
# ...
logger.info("Report generation completed")
return {"status": "completed"}
- Create utility functions for common logging patterns:
def log_database_query(query, params=None):
"""Log database queries with proper handling of parameters."""
if params:
# Avoid logging actual parameter values in production
if os.getenv("ENVIRONMENT") == "production":
param_count = len(params) if isinstance(params, (list, tuple)) else 1
logger.debug(f"Executing query: {query} with {param_count} parameters")
else:
logger.debug(f"Executing query: {query} with params: {params}")
else:
logger.debug(f"Executing query: {query}")
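With the standard library, the patterns above can be folded into one reusable helper: a context manager that logs the start, outcome, and duration of any operation. A sketch (log_operation is my name, not a FastAPI or stdlib API):

```python
import contextlib
import logging
import time

logger = logging.getLogger("app")

@contextlib.contextmanager
def log_operation(name, **context):
    """Log start, completion (with duration), and failure of an operation."""
    logger.info("%s started", name, extra=context)
    start = time.perf_counter()
    try:
        yield
    except Exception:
        logger.exception("%s failed after %.3fs", name, time.perf_counter() - start)
        raise
    else:
        logger.info("%s completed in %.3fs", name, time.perf_counter() - start, extra=context)
```

Inside an endpoint this reads naturally: `with log_operation("report generation", user_id=user_id): ...`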
Summary
A robust logging strategy is essential for maintaining and troubleshooting FastAPI applications. By implementing proper logging, you can:
- Track application behavior and identify issues quickly
- Monitor performance and detect bottlenecks
- Create an audit trail of important events
- Simplify debugging in production environments
- Improve the overall robustness and maintainability of your code
Remember these key points:
- Use Python's built-in logging module with appropriate logging levels
- Create middleware for automatic request/response logging
- Configure logging differently for development and production
- Implement structured logging for better analysis
- Never log sensitive information
- Include context information in your logs
- Use request IDs to trace requests across your application
- Consider integration with centralized logging systems
Additional Resources
- Python Logging Documentation
- FastAPI Documentation
- Starlette Middleware
- Python JSON Logger for structured logging
- Sentry Documentation for error tracking
Exercises
- Implement a logging system that captures both request and response bodies (be careful with sensitive data).
- Create a custom logging middleware that measures and logs the time spent in database operations.
- Build a custom context manager for logging that automatically times operations and logs the duration.
- Implement a rate limiting filter that prevents logging the same message more than X times per minute to avoid log flooding.
- Set up a rotating log file system that creates new log files daily and compresses old ones.