FastAPI Response Pagination
When building APIs that return large collections of data, it's essential to implement pagination to improve performance and usability. In this tutorial, we'll learn how to implement pagination in FastAPI responses.
What is Pagination?
Pagination is the process of dividing a large dataset into smaller, discrete pages of data. Instead of returning all results at once, which could be inefficient and slow, we return a subset of results along with metadata that allows the client to navigate through the complete dataset.
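In practice this means each response carries the current page of data plus a few bookkeeping fields. The exact field names vary from API to API; the shape below is only an illustrative sketch:
# Illustrative shape of a paginated response (field names vary between APIs)
{
    "items": [{"id": 11, "name": "Item 11"}, {"id": 12, "name": "Item 12"}],  # the current page of data
    "total": 100,      # items in the whole dataset
    "page": 2,         # which page this is
    "page_size": 10,   # items per page
    "pages": 10        # total number of pages
}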
Why Implement Pagination?
- Performance: Loading thousands of records at once can be slow and resource-intensive
- User Experience: Presenting data in manageable chunks makes it easier for users to process
- Network Efficiency: Reduces bandwidth consumption and payload size
- Resource Management: Decreases server load and database query time
Basic Pagination in FastAPI
Let's start with a basic example of implementing pagination in FastAPI:
from fastapi import FastAPI, Query
from typing import List
from pydantic import BaseModel
app = FastAPI()
class Item(BaseModel):
    id: int
    name: str
    description: str

# Simulate database with sample items
items_db = [
    Item(id=i, name=f"Item {i}", description=f"Description for item {i}")
    for i in range(1, 101)  # 100 sample items
]

@app.get("/items/", response_model=List[Item])
async def read_items(
    skip: int = Query(0, ge=0, description="Number of items to skip"),
    limit: int = Query(10, ge=1, le=100, description="Number of items to return")
):
    return items_db[skip:skip + limit]
In this example:
- The skip parameter indicates how many items to skip (the offset)
- The limit parameter indicates the maximum number of items to return
- We've added validation to ensure skip is not negative and limit is between 1 and 100
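With this endpoint, a request like GET /items/?skip=20&limit=5 returns items 21 through 25. You can verify that quickly with FastAPI's TestClient; the sketch below assumes the example above is saved as main.py:
from fastapi.testclient import TestClient
from main import app  # assumes the basic example above is saved as main.py

client = TestClient(app)

response = client.get("/items/", params={"skip": 20, "limit": 5})
print(response.status_code)                      # 200
print([item["id"] for item in response.json()])  # [21, 22, 23, 24, 25]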
Enhanced Pagination with Response Models
For a more comprehensive pagination solution, we can create dedicated response models that include metadata about the pagination state:
from fastapi import FastAPI, Query
from typing import List, Generic, TypeVar
from pydantic import BaseModel
from pydantic.generics import GenericModel  # Pydantic v1 only; with Pydantic v2, subclass BaseModel and Generic[T] directly
T = TypeVar('T')
class Item(BaseModel):
    id: int
    name: str
    description: str

class PaginatedResponse(GenericModel, Generic[T]):
    items: List[T]
    total: int
    page: int
    page_size: int
    pages: int

app = FastAPI()

# Simulate database with sample items
items_db = [
    Item(id=i, name=f"Item {i}", description=f"Description for item {i}")
    for i in range(1, 101)  # 100 sample items
]

@app.get("/items/", response_model=PaginatedResponse[Item])
async def read_items(
    page: int = Query(1, ge=1, description="Page number"),
    page_size: int = Query(10, ge=1, le=50, description="Items per page")
):
    start = (page - 1) * page_size
    end = start + page_size
    items_on_page = items_db[start:end]
    total_items = len(items_db)
    total_pages = (total_items + page_size - 1) // page_size  # Ceiling division
    return {
        "items": items_on_page,
        "total": total_items,
        "page": page,
        "page_size": page_size,
        "pages": total_pages
    }
This approach provides a much richer response that includes (see the sample response after this list):
- The requested page of items
- Total number of items in the dataset
- Current page number
- Page size
- Total number of pages
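For example, with the 100 sample items, GET /items/?page=2&page_size=10 returns a body equivalent to:
{
    "items": [
        {"id": 11, "name": "Item 11", "description": "Description for item 11"},
        # ... items 12 through 19 ...
        {"id": 20, "name": "Item 20", "description": "Description for item 20"}
    ],
    "total": 100,
    "page": 2,
    "page_size": 10,
    "pages": 10
}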
Cursor-Based Pagination
For very large datasets or for data that changes frequently, cursor-based pagination can be more efficient than offset-based pagination:
from fastapi import FastAPI, Query
from typing import List, Optional
from pydantic import BaseModel
app = FastAPI()
class Item(BaseModel):
    id: int
    name: str
    description: str

class ItemPage(BaseModel):
    items: List[Item]
    next_cursor: Optional[str] = None

# Simulate database with sample items
items_db = [
    Item(id=i, name=f"Item {i}", description=f"Description for item {i}")
    for i in range(1, 101)  # 100 sample items
]

@app.get("/items/", response_model=ItemPage)
async def read_items(
    cursor: Optional[str] = None,
    limit: int = Query(10, ge=1, le=50)
):
    if cursor is None:
        # First page
        start_index = 0
    else:
        # Convert cursor to index
        try:
            start_index = int(cursor)
        except ValueError:
            start_index = 0
    end_index = start_index + limit
    items = items_db[start_index:end_index]
    # Generate next cursor if there are more items
    next_cursor = str(end_index) if end_index < len(items_db) else None
    return {"items": items, "next_cursor": next_cursor}
In this approach:
- Instead of page numbers, we use a cursor that points to the next item in the sequence
- The client passes the returned cursor back to fetch the next set of results
- This is more efficient for datasets that change between requests; in production the cursor usually encodes the last item's ID or a timestamp rather than a raw index, so inserted or deleted rows don't shift the results the way an offset does (the client sketch below shows how the cursor is consumed)
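To page through the whole collection, a client keeps requesting with the cursor it was last given until next_cursor comes back as null. A minimal sketch using FastAPI's TestClient, assuming the cursor-based example above is saved as main.py:
from fastapi.testclient import TestClient
from main import app  # assumes the cursor-based example above is saved as main.py

client = TestClient(app)

cursor = None
collected = []
while True:
    params = {"limit": 25}
    if cursor is not None:
        params["cursor"] = cursor
    page = client.get("/items/", params=params).json()
    collected.extend(page["items"])
    cursor = page["next_cursor"]
    if cursor is None:  # no more pages
        break

print(len(collected))  # 100 with the sample data above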
Real-World Example: Paginating Database Results with SQLAlchemy
Let's see how to implement pagination with a real database using SQLAlchemy:
from fastapi import FastAPI, Depends, Query
from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base  # in SQLAlchemy 2.0, import this from sqlalchemy.orm instead
from sqlalchemy.orm import Session, sessionmaker
from typing import List
from pydantic import BaseModel

# Database setup
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(
    SQLALCHEMY_DATABASE_URL,
    connect_args={"check_same_thread": False}  # needed for SQLite when FastAPI serves requests from multiple threads
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

class ProductDB(Base):
    __tablename__ = "products"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, index=True)
    description = Column(String)
    price = Column(Float)

# Create tables
Base.metadata.create_all(bind=engine)

# Pydantic models
class Product(BaseModel):
    id: int
    name: str
    description: str
    price: float

    class Config:
        orm_mode = True  # Pydantic v1; in Pydantic v2 use model_config = ConfigDict(from_attributes=True)

class PaginatedProducts(BaseModel):
    items: List[Product]
    total: int
    page: int
    size: int
    pages: int
# FastAPI app
app = FastAPI()

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/products/", response_model=PaginatedProducts)
def read_products(
    page: int = Query(1, ge=1),
    size: int = Query(10, ge=1, le=100),
    db: Session = Depends(get_db)
):
    # Get total count
    total = db.query(ProductDB).count()

    # Get products for the requested page
    products = db.query(ProductDB).offset((page - 1) * size).limit(size).all()

    # Calculate total pages
    pages = (total + size - 1) // size

    return {
        "items": products,
        "total": total,
        "page": page,
        "size": size,
        "pages": pages
    }
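To try this end to end, you can seed a few rows and call the endpoint with FastAPI's TestClient. The sketch below assumes the example above is saved as main.py; the sample product values are made up for illustration:
from fastapi.testclient import TestClient
from main import app, SessionLocal, ProductDB  # assumes the example above is saved as main.py

# Seed 25 sample products once (illustrative values)
db = SessionLocal()
if db.query(ProductDB).count() == 0:
    db.add_all([
        ProductDB(name=f"Product {i}", description=f"Description {i}", price=9.99 + i)
        for i in range(1, 26)
    ])
    db.commit()
db.close()

client = TestClient(app)
data = client.get("/products/", params={"page": 2, "size": 10}).json()
print(data["total"], data["pages"])        # 25 3
print([p["name"] for p in data["items"]])  # typically Product 11 ... Product 20 (the query has no explicit ORDER BY)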
Best Practices for API Pagination
- Be Consistent: Use the same pagination approach across your API
- Document Pagination Parameters: Make sure to clearly document how your pagination works
- Set Reasonable Defaults: Choose sensible default values for page size and starting page
- Use Parameter Validation: Validate pagination parameters to avoid edge cases
- Include Total Counts: When possible, include the total number of items and pages
- Include Links: Consider adding links for next/previous pages (HATEOAS principle)
- Handle Edge Cases: Properly handle requests for non-existent pages (see the sketch after this list)
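For the last point, one option is to reject out-of-range pages explicitly instead of silently returning an empty list. A minimal standalone sketch (hypothetical, not tied to the examples above):
from fastapi import FastAPI, HTTPException, Query

app = FastAPI()
items_db = list(range(1, 101))  # stand-in data: 100 items

@app.get("/items/")
async def read_items(page: int = Query(1, ge=1), size: int = Query(10, ge=1, le=50)):
    total = len(items_db)
    pages = max((total + size - 1) // size, 1)
    if page > pages:
        # Fail loudly rather than returning an empty page
        raise HTTPException(status_code=404, detail=f"Page {page} does not exist (only {pages} pages)")
    start = (page - 1) * size
    return {"items": items_db[start:start + size], "total": total, "page": page, "size": size, "pages": pages}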
Implementing HATEOAS for Pagination
HATEOAS (Hypermedia as the Engine of Application State) improves API usability by including navigation links:
from fastapi import FastAPI, Query, Request
from typing import List, Optional
from pydantic import BaseModel, HttpUrl
app = FastAPI()
class Item(BaseModel):
    id: int
    name: str
    description: str

class PaginationLinks(BaseModel):
    first: HttpUrl
    last: HttpUrl
    prev: Optional[HttpUrl] = None
    next: Optional[HttpUrl] = None

class ItemsResponse(BaseModel):
    items: List[Item]
    total: int
    page: int
    size: int
    pages: int
    links: PaginationLinks

# Simulate database with sample items
items_db = [
    Item(id=i, name=f"Item {i}", description=f"Description for item {i}")
    for i in range(1, 101)  # 100 sample items
]
@app.get("/items/", response_model=ItemsResponse)
async def read_items(
request: Request,
page: int = Query(1, ge=1),
size: int = Query(10, ge=1, le=50)
):
# Calculate pagination values
total = len(items_db)
pages = (total + size - 1) // size
start = (page - 1) * size
end = min(start + size, total)
# Get base URL for links
base_url = str(request.base_url)
# Create pagination links
links = PaginationLinks(
first=f"{base_url}items/?page=1&size={size}",
last=f"{base_url}items/?page={pages}&size={size}"
)
if page > 1:
links.prev = f"{base_url}items/?page={page-1}&size={size}"
if page < pages:
links.next = f"{base_url}items/?page={page+1}&size={size}"
return {
"items": items_db[start:end],
"total": total,
"page": page,
"size": size,
"pages": pages,
"links": links
}
This implementation provides hyperlinks for navigating through the paginated collection, making your API more discoverable and easier to use.
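With the 100 sample items, the links object for GET /items/?page=2&size=10 would look roughly like this (the host depends on where the app is served; http://localhost:8000 is just an assumption):
# "links" portion of the response for GET /items/?page=2&size=10
{
    "first": "http://localhost:8000/items/?page=1&size=10",
    "last": "http://localhost:8000/items/?page=10&size=10",
    "prev": "http://localhost:8000/items/?page=1&size=10",
    "next": "http://localhost:8000/items/?page=3&size=10"
}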
Summary
Pagination is a critical technique for efficiently handling large datasets in APIs. In this tutorial, we've covered:
- Basic offset-based pagination using skip and limit
- Enhanced pagination with metadata
- Cursor-based pagination for large or frequently changing datasets
- Real-world implementation with SQLAlchemy
- Best practices for API pagination
- HATEOAS implementation for improved API usability
By implementing proper pagination in your FastAPI applications, you'll improve performance, reduce server load, and provide a better experience for API consumers.
Exercises
- Modify the basic pagination example to sort results by a field specified by the client
- Implement cursor-based pagination using a timestamp field instead of an ID
- Create a pagination system that allows both page-based and cursor-based approaches
- Extend the HATEOAS example to include links for sorting and filtering
- Implement a caching system for paginated responses to improve performance