Commit 80a32b8

stackjayjaygaha authored and committed

day #48 fastapi #10 cache, rate-limit, schedule tasks

1 parent 7675616 · commit 80a32b8

7 files changed: 372 additions & 1 deletion

workspace/7_framework/fastapi/README.md

Lines changed: 5 additions & 1 deletion

```diff
@@ -99,7 +99,11 @@ uvicorn main:app --reload
 
 - [Day 09: Database Integration with SQLAlchemy](day09/README.md)
   Learn how to integrate a SQL database with FastAPI using SQLAlchemy, manage database sessions, define models and schemas, and implement CRUD operations.
-  _Includes: SQLAlchemy setup, session management, CRUD utils, and testing with a SQLite database._
+  _Includes: SQLAlchemy setup, session management, CRUD utils, and testing with an in-memory database._
+
+- [Day 10: Advanced Features - Caching, Rate Limiting, and Background Tasks](day10/README.md)
+  Learn to implement caching with Redis, protect endpoints with rate limiting, and defer long-running jobs with background tasks.
+  _Includes: `fastapi-cache2`, `slowapi`, `BackgroundTasks`, lifespan events, and testing._
 
 ---
 
```

workspace/7_framework/fastapi/day10/README.md (new file)

Lines changed: 151 additions & 0 deletions

# FastAPI Day 10: Advanced Features - Caching, Rate Limiting, and Background Tasks

Welcome to **Day 10** of the FastAPI tutorial series! Today, we're moving beyond the basics to explore advanced features that are essential for building robust, scalable, and production-ready applications. You'll learn how to implement caching to improve performance, rate limiting to protect your API from abuse, and background tasks to handle long-running operations without blocking the client.

---
## What You'll Learn

- **Response Caching**: Implement caching with `fastapi-cache2` and a Redis backend to dramatically reduce response times for frequent requests.
- **Rate Limiting**: Use `slowapi` to apply flexible rate limits to your endpoints, preventing individual users from overwhelming the service.
- **Background Tasks**: Leverage FastAPI's built-in `BackgroundTasks` to execute operations (like writing to a log or sending an email) after a response has been sent to the client.
- **Application Lifespan Events**: Use the `lifespan` context manager to run setup and teardown logic (like initializing a cache connection) when the application starts and stops.
- **Custom Middleware & Exception Handling**: Integrate third-party middleware and write custom exception handlers to manage application-wide concerns like rate limiting.
- **Testing Advanced Features**: Write unit tests to verify that caching, rate limiting, and background tasks are all functioning as expected.

---
## Key Concepts

For this tutorial, we've simplified the structure to focus on the new concepts within a single `src` directory.

- `src/main.py`: The application's entry point. It initializes the FastAPI app and contains all the logic for caching, rate limiting, and background tasks.
- `src/dependencies.py`: Defines the `slowapi` limiter instance.
- `tests/test_main.py`: Contains unit tests for all the API endpoints and their advanced features.
- `requirements.txt`: Lists the new dependencies: `redis`, `slowapi`, and `fastapi-cache2`.
### 1. Caching with `fastapi-cache2` and Redis

Caching is a powerful technique for improving API performance. Instead of re-computing a result for every request, we can store it temporarily and serve the stored version for subsequent requests.

- **Lifespan Event**: We use the `lifespan` async context manager to initialize the Redis cache when the application starts and properly close the connection when it shuts down. This is the modern replacement for startup/shutdown events.
```python
# src/main.py, lines 13-27
@asynccontextmanager
async def lifespan(app: FastAPI):
    """
    This function is executed when the application starts.
    It initializes the Redis connection and the FastAPI caching.
    """
    # Connect to your Redis instance
    redis = aioredis.from_url("redis://localhost")
    # Initialize the cache with the Redis backend
    FastAPICache.init(RedisBackend(redis), prefix="fastapi-cache")
    print("FastAPI application startup complete. Cache initialized.")

    try:
        yield
    finally:
        await redis.close()
```
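Because the code after `yield` sits in a `finally` block, teardown runs even when the application is interrupted by an error. A stdlib-only sketch of the same pattern (no FastAPI or Redis needed) shows the ordering of the three phases:

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def lifespan():
    events.append("startup")       # runs once, before any requests are served
    try:
        yield
    finally:
        events.append("shutdown")  # always runs at exit, even after an error

async def main():
    async with lifespan():
        events.append("serving")   # stands in for the app handling requests

asyncio.run(main())
print(events)  # ['startup', 'serving', 'shutdown']
```

FastAPI drives the real `lifespan` the same way: it enters the context manager at startup, serves requests inside the `with` body, and exits it on shutdown.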
- **The `@cache` Decorator**: Applying the `@cache` decorator to an endpoint is all it takes to enable caching. The `expire` argument specifies how long the response should be cached, in seconds.
```python
# src/main.py, lines 93-102
@app.get("/cached-data")
@cache(expire=30)  # Cache this response for 30 seconds
async def get_cached_data():
    """
    This endpoint demonstrates caching.
    The first time it's called, it will "process" for 2 seconds.
    Subsequent calls within 30 seconds will return the cached response instantly.
    """
    print("Processing request to get cached data...")
    time.sleep(2)  # Simulate a slow operation
    return {"detail": "This is some cached data", "timestamp": time.time()}
```
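Conceptually, what the decorator does can be sketched with a tiny in-memory TTL cache (a hypothetical stdlib-only illustration; the real `fastapi-cache2` serializes responses into Redis and derives keys from the request):

```python
import time
from functools import wraps

def simple_cache(expire: float):
    """Cache a function's results in memory for `expire` seconds."""
    store = {}  # key -> (result, time it was stored)

    def decorator(func):
        @wraps(func)
        def wrapper(*args):
            key = (func.__name__, args)
            if key in store:
                result, stored_at = store[key]
                if time.monotonic() - stored_at < expire:
                    return result  # entry is still fresh: skip the function body
            result = func(*args)  # miss or expired: compute and store
            store[key] = (result, time.monotonic())
            return result
        return wrapper
    return decorator

@simple_cache(expire=30)
def slow_lookup(item_id):
    return {"item_id": item_id, "computed_at": time.monotonic()}

first = slow_lookup(1)
second = slow_lookup(1)  # within 30 s: served from the cache
assert first["computed_at"] == second["computed_at"]
```

The decorated endpoint behaves the same way: within the expiry window the function body never runs, which is why the `timestamp` field stays identical across cached calls.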
### 2. Rate Limiting with `slowapi`

Rate limiting is crucial for preventing abuse and ensuring your API remains available for all users. We use the `slowapi` library, which integrates smoothly with FastAPI.

- **Limiter Instance**: We create a `Limiter` instance that uses the client's IP address to identify unique users.
```python
# src/dependencies.py, lines 4-5
# Create a Limiter instance that uses the client's IP address as the key.
limiter = Limiter(key_func=get_remote_address)
```
- **Middleware and Exception Handler**: The `SlowAPIMiddleware` is added to the application to process requests. We also add a custom exception handler to return a clear JSON response when a user exceeds the rate limit.
```python
# src/main.py, lines 36-51
# Add the SlowAPI middleware to handle rate limiting
app.add_middleware(SlowAPIMiddleware)

# Define a custom exception handler for RateLimitExceeded
@app.exception_handler(RateLimitExceeded)
async def rate_limit_exceeded_handler(request: Request, exc: RateLimitExceeded):
    """
    Custom exception handler for rate-limited requests.
    Returns a JSON response with a 429 status code.
    """
    return JSONResponse(
        status_code=429,
        content={"detail": f"Rate limit exceeded: {exc.detail}"},
    )
```
- **The `@limiter.limit` Decorator**: To protect an endpoint, we apply the `@limiter.limit` decorator with a specific limit (e.g., `"5/minute"`).
```python
# src/main.py, lines 104-111
@app.get("/rate-limited")
@limiter.limit("5/minute")  # Allow 5 requests per minute
async def get_rate_limited_endpoint(request: Request):
    """
    This endpoint is rate-limited.
    It allows a maximum of 5 requests per minute from the same IP address.
    """
    return {"detail": "This endpoint is rate-limited."}
```
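The `"5/minute"` string boils down to "at most 5 hits per key within the last 60 seconds". A toy sliding-window counter (an illustration only, not slowapi's actual implementation; the key name and IP are made up) makes the bookkeeping concrete:

```python
import time
from collections import defaultdict
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `limit` hits per key within the last `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(list)  # key -> recent hit timestamps

    def allow(self, key: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Forget hits that have aged out of the window
        self.hits[key] = [t for t in self.hits[key] if now - t < self.window]
        if len(self.hits[key]) >= self.limit:
            return False  # over the limit: the middleware would answer 429
        self.hits[key].append(now)
        return True

toy_limiter = SlidingWindowLimiter(limit=5, window=60)
results = [toy_limiter.allow("203.0.113.7", now=0.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

With `key_func=get_remote_address`, the key is the client IP, so each IP gets its own counter and one noisy client cannot exhaust another's quota.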
### 3. Background Tasks

For operations that don't need to complete before sending a response, FastAPI provides a `BackgroundTasks` dependency. This is perfect for tasks like sending confirmation emails, processing data, or writing logs.

- **Injecting `BackgroundTasks`**: Simply add `background_tasks: BackgroundTasks` to your path operation function's signature.
- **Adding a Task**: Use the `background_tasks.add_task()` method, passing the function to run and its arguments. The API will immediately return a response to the client while the task executes in the background.
```python
# src/main.py, lines 113-122
@app.post("/background-task")
async def trigger_background_task(background_tasks: BackgroundTasks):
    """
    This endpoint triggers a background task.
    It immediately returns a response to the client while the task
    (writing to a log file) runs in the background.
    """
    # Add the task to be executed in the background
    background_tasks.add_task(write_log, "Task started: Processing data in the background.\n")
    return {"message": "Background task has been initiated."}
```
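Under the hood, `BackgroundTasks` is essentially an ordered list of `(function, args)` pairs that the framework invokes after the response has been sent. A stdlib sketch of that contract (hypothetical, not FastAPI's actual internals):

```python
class TinyBackgroundTasks:
    """Collects callables and runs them after the response goes out."""

    def __init__(self):
        self.tasks = []

    def add_task(self, func, *args, **kwargs):
        self.tasks.append((func, args, kwargs))

    def run_all(self):
        # In a real framework this is invoked after the response is flushed
        for func, args, kwargs in self.tasks:
            func(*args, **kwargs)

log = []
tasks = TinyBackgroundTasks()
tasks.add_task(log.append, "Task started")

response = {"message": "Background task has been initiated."}  # goes to the client first
tasks.run_all()  # ...then the deferred work runs
print(log)  # ['Task started']
```

This ordering is exactly why the client sees the JSON response before `log.txt` is written.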
130+
131+
### 4. Testing These Features
132+
133+
Our `tests/test_main.py` file demonstrates how to effectively test these advanced features:
134+
- **Testing Caching**: We call the cached endpoint twice. The first call's duration is asserted to be slow, while the second is fast. We also assert that the `timestamp` in the response body is identical for both calls, proving the data came from the cache.
135+
- **Testing Rate Limiting**: We loop to hit the rate-limited endpoint just enough times to succeed, then make one more call and assert that we receive a `429 Too Many Requests` status code.
136+
- **Testing Background Tasks**: We call the endpoint and then check for the side effect of the task—in this case, we assert that a log file has been created and contains the expected content.
137+
138+
---
## Next Steps

- Make sure you have Redis running on your local machine.
- Install the new dependencies: `pip install -r requirements.txt`.
- Run the application with `uvicorn src.main:app --reload`.
- Use an API client like `curl` or Postman to test the endpoints:
  - Hit `GET /cached-data` twice and observe the difference in response time.
  - Hit `GET /rate-limited` six times in under a minute to see the rate limit kick in.
  - Call `POST /background-task` and check for the `log.txt` file in your project root.
- Run the automated tests with `python -m pytest`.

---
workspace/7_framework/fastapi/day10/pytest.ini (new file)

Lines changed: 3 additions & 0 deletions

```ini
[pytest]
asyncio_default_fixture_loop_scope = function
pythonpath = . src
```
workspace/7_framework/fastapi/day10/requirements.txt (new file)

Lines changed: 7 additions & 0 deletions

```text
fastapi
uvicorn[standard]
pytest
httpx
redis
slowapi
fastapi-cache2[redis]
```
workspace/7_framework/fastapi/day10/src/dependencies.py (new file)

Lines changed: 5 additions & 0 deletions

```python
from slowapi import Limiter
from slowapi.util import get_remote_address

# Create a Limiter instance that uses the client's IP address as the key.
limiter = Limiter(key_func=get_remote_address)
```
workspace/7_framework/fastapi/day10/src/main.py (new file)

Lines changed: 106 additions & 0 deletions

```python
import time
from contextlib import asynccontextmanager
from fastapi import FastAPI, Request, BackgroundTasks
from fastapi.responses import JSONResponse
from slowapi.errors import RateLimitExceeded
from slowapi.middleware import SlowAPIMiddleware
from fastapi_cache import FastAPICache
from fastapi_cache.backends.redis import RedisBackend
from fastapi_cache.decorator import cache
from redis import asyncio as aioredis

from .dependencies import limiter

# --- Lifespan Events for Caching ---
@asynccontextmanager
async def lifespan(app: FastAPI):
    """
    This function is executed when the application starts.
    It initializes the Redis connection and the FastAPI caching.
    """
    # Connect to your Redis instance
    redis = aioredis.from_url("redis://localhost")
    # Initialize the cache with the Redis backend
    FastAPICache.init(RedisBackend(redis), prefix="fastapi-cache")
    print("FastAPI application startup complete. Cache initialized.")

    try:
        yield
    finally:
        await redis.close()

# Create the FastAPI app instance
app = FastAPI(lifespan=lifespan)

# Attach the limiter instance to the application's state.
# This is the crucial step that makes the limiter accessible to the middleware.
app.state.limiter = limiter

# Add the SlowAPI middleware to handle rate limiting
app.add_middleware(SlowAPIMiddleware)

# Define a custom exception handler for RateLimitExceeded
@app.exception_handler(RateLimitExceeded)
async def rate_limit_exceeded_handler(request: Request, exc: RateLimitExceeded):
    """
    Custom exception handler for rate-limited requests.
    Returns a JSON response with a 429 status code.
    """
    return JSONResponse(
        status_code=429,
        content={"detail": f"Rate limit exceeded: {exc.detail}"},
    )


# --- Background Task Function ---
def write_log(message: str):
    """
    A simple background task that writes a message to a log file.
    """
    with open("log.txt", mode="a") as log_file:
        log_file.write(message)
    print(f"Log written: {message.strip()}")


# --- API Endpoints ---

@app.get("/")
def read_root():
    """
    A simple root endpoint to confirm the API is running.
    """
    return {"status": "API is running"}


@app.get("/cached-data")
@cache(expire=30)  # Cache this response for 30 seconds
async def get_cached_data():
    """
    This endpoint demonstrates caching.
    The first time it's called, it will "process" for 2 seconds.
    Subsequent calls within 30 seconds will return the cached response instantly.
    """
    print("Processing request to get cached data...")
    time.sleep(2)  # Simulate a slow operation
    return {"detail": "This is some cached data", "timestamp": time.time()}


@app.get("/rate-limited")
@limiter.limit("5/minute")  # Allow 5 requests per minute
async def get_rate_limited_endpoint(request: Request):
    """
    This endpoint is rate-limited.
    It allows a maximum of 5 requests per minute from the same IP address.
    """
    return {"detail": "This endpoint is rate-limited."}


@app.post("/background-task")
async def trigger_background_task(background_tasks: BackgroundTasks):
    """
    This endpoint triggers a background task.
    It immediately returns a response to the client while the task
    (writing to a log file) runs in the background.
    """
    # Add the task to be executed in the background
    background_tasks.add_task(write_log, "Task started: Processing data in the background.\n")
    return {"message": "Background task has been initiated."}
```
workspace/7_framework/fastapi/day10/tests/test_main.py (new file)

Lines changed: 95 additions & 0 deletions

```python
import time
import os
import pytest
from fastapi.testclient import TestClient
from src.main import app


@pytest.fixture(scope="module")
def client():
    """
    Pytest fixture to create a TestClient.
    Using a `with` statement ensures that the application's lifespan
    (startup and shutdown events) is handled correctly.
    """
    with TestClient(app) as c:
        yield c


def test_read_root(client):
    """
    Test the root endpoint to ensure the API is running.
    """
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"status": "API is running"}


def test_caching(client):
    """
    Test the caching functionality.
    - The first request should take ~2 seconds.
    - The second request should be much faster due to the cache.
    - The data from both requests should be identical.
    """
    # First request - should be slow and not from the cache
    start_time = time.time()
    response1 = client.get("/cached-data")
    duration1 = time.time() - start_time

    assert response1.status_code == 200
    assert duration1 >= 2.0
    data1 = response1.json()

    # Second request - should be fast and served from the cache
    start_time = time.time()
    response2 = client.get("/cached-data")
    duration2 = time.time() - start_time

    assert response2.status_code == 200
    assert duration2 < 1.0  # Should be significantly faster
    data2 = response2.json()

    # The timestamp should be the same, proving it's cached data
    assert data1["timestamp"] == data2["timestamp"]


def test_rate_limiting(client):
    """
    Test the rate limiting functionality.
    - The first 5 requests should succeed (200 OK).
    - The 6th request should fail with a 429 Too Many Requests error.
    """
    # Make 5 successful requests
    for i in range(5):
        response = client.get("/rate-limited")
        assert response.status_code == 200, f"Request {i + 1} failed unexpectedly"

    # The 6th request should be rate-limited
    response = client.get("/rate-limited")
    assert response.status_code == 429
    assert "Rate limit exceeded" in response.json()["detail"]


def test_background_task(client):
    """
    Test the background task endpoint.
    - The endpoint should return an immediate success response.
    - The background task should create/write to a log file.
    """
    log_file = "log.txt"
    # Clean up the log file before the test if it exists
    if os.path.exists(log_file):
        os.remove(log_file)

    # Trigger the background task
    response = client.post("/background-task")
    assert response.status_code == 200
    assert response.json() == {"message": "Background task has been initiated."}

    # Give the background task a moment to run
    time.sleep(0.5)

    # Check that the log file was created and contains the correct message
    assert os.path.exists(log_file)
    with open(log_file, "r") as f:
        content = f.read()
    assert "Processing data in the background" in content

    # Clean up the log file after the test
    os.remove(log_file)
```
