Commit 0fd3f20

Merge pull request #37 from igorbenav/rate-limiting: Rate limiting

2 parents 394c791 + ac22c6b, commit 0fd3f20

27 files changed: +1083 −88 lines

README.md

Lines changed: 200 additions & 7 deletions
@@ -5,7 +5,7 @@
 
 <p align="center">
   <a href="https://github.com/igormagalhaesr/FastAPI-boilerplate">
-    <img src="https://user-images.githubusercontent.com/43156212/277095260-ef5d4496-8290-4b18-99b2-0c0b5500504e.png" width="35%" height="auto">
+    <img src="https://user-images.githubusercontent.com/43156212/277095260-ef5d4496-8290-4b18-99b2-0c0b5500504e.png" alt="Blue rocket with the FastAPI logo as its window, with the word FAST written on it" width="35%" height="auto">
   </a>
 </p>
@@ -49,6 +49,7 @@
 - 🚦 ARQ integration for task queue
 - ⚙️ Efficient querying (only queries what's needed)
 - ⎘ Out of the box pagination support
+- 🛑 Rate Limiter dependency
 - 👮 FastAPI docs behind authentication and hidden based on the environment
 - 🦾 Easily extendable
 - 🤸‍♂️ Flexible
@@ -64,7 +65,7 @@
 - [ ] Docs for other databases (MySQL, SQLite)
 
 #### Features
-- [ ] Add a Rate Limiter decorator
+- [x] Add a Rate Limiter dependency
 - [ ] Add MongoDB support
 
 #### Tests
@@ -100,7 +101,8 @@
 8. [Caching](#58-caching)
 9. [More Advanced Caching](#59-more-advanced-caching)
 10. [ARQ Job Queues](#510-arq-job-queues)
-11. [Running](#511-running)
+11. [Rate Limiting](#511-rate-limiting)
+12. [Running](#512-running)
 7. [Running in Production](#6-running-in-production)
 8. [Testing](#7-testing)
 9. [Contributing](#8-contributing)
@@ -112,7 +114,7 @@ ___
 ## 3. Prerequisites
 Start by using the template and naming the repository what you want.
 <p align="left">
-  <img src="https://user-images.githubusercontent.com/43156212/277866726-975d1c98-b1c9-4c8e-b4bd-001c8a5728cb.png" width="35%" height="auto">
+  <img src="https://user-images.githubusercontent.com/43156212/277866726-975d1c98-b1c9-4c8e-b4bd-001c8a5728cb.png" alt="clicking the Use this template button, then the Create a new repository option" width="35%" height="auto">
 </p>
 
 Then clone your created repository (I'm using the base for the example)
@@ -195,6 +197,24 @@ REDIS_CACHE_PORT=6379
 > **Warning**
 > You may use the same redis for both caching and queue while developing, but the recommendation is to use two separate containers for production.
 
+To create the first tier:
+```
+# ------------- first tier -------------
+TIER_NAME="free"
+```
+
+For the rate limiter:
+```
+# ------------- redis rate limit -------------
+REDIS_RATE_LIMIT_HOST="localhost" # default="localhost"
+REDIS_RATE_LIMIT_PORT=6379 # default=6379
+
+
+# ------------- default rate limit settings -------------
+DEFAULT_RATE_LIMIT_LIMIT=10 # default=10
+DEFAULT_RATE_LIMIT_PERIOD=3600 # default=3600
+```
+
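These variables typically end up in the app's settings object. A minimal sketch of how that mapping might look with `pydantic-settings` (field names mirror the `.env` entries above; the boilerplate's real settings classes live in `app/core/config.py` and may differ):

```python
# Illustrative only: field names mirror the .env entries above, but the
# boilerplate's real settings classes live in app/core/config.py.
from pydantic_settings import BaseSettings, SettingsConfigDict


class RateLimitSettings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    REDIS_RATE_LIMIT_HOST: str = "localhost"
    REDIS_RATE_LIMIT_PORT: int = 6379
    DEFAULT_RATE_LIMIT_LIMIT: int = 10     # requests allowed per period
    DEFAULT_RATE_LIMIT_PERIOD: int = 3600  # period length in seconds


settings = RateLimitSettings()
redis_url = f"redis://{settings.REDIS_RATE_LIMIT_HOST}:{settings.REDIS_RATE_LIMIT_PORT}"
```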
 For tests (optional to run):
 ```
 # ------------- test -------------
@@ -368,13 +388,15 @@ to stop the create_superuser service:
 docker-compose stop create_superuser
 ```
 
-
 #### 4.3.2 From Scratch
 While in the `src` folder, run (after you started the application at least once to create the tables):
 ```sh
 poetry run python -m scripts.create_first_superuser
 ```
 
+#### 4.3.3 Creating the first tier
+Creating the first tier is similar: just replace `create_superuser` with `create_tier`. If using `docker compose`, do not forget to uncomment the `create_tier` service in `docker-compose.yml`.
+
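The script itself is essentially a one-row insert keyed by `TIER_NAME`. A rough sketch of the idea, assuming an async SQLAlchemy setup and a `Tier` model like the boilerplate's (the real script is `src/scripts/create_first_tier.py` and may differ in detail):

```python
# Rough sketch of the idea behind the create_first_tier script: insert one
# Tier row named by TIER_NAME. Engine URL, model path, and session setup
# are assumptions; the real script is src/scripts/create_first_tier.py.
import asyncio
import os

from sqlalchemy import select
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

from app.models.tier import Tier  # assumed model location

DATABASE_URL = os.environ["DATABASE_URL"]  # assumed single-URL config


async def create_first_tier() -> None:
    engine = create_async_engine(DATABASE_URL)
    session_factory = async_sessionmaker(engine, expire_on_commit=False)
    tier_name = os.environ.get("TIER_NAME", "free")

    async with session_factory() as session:
        # Idempotent: skip creation if the tier already exists.
        if await session.scalar(select(Tier).where(Tier.name == tier_name)) is None:
            session.add(Tier(name=tier_name))
            await session.commit()


if __name__ == "__main__":
    asyncio.run(create_first_tier())
```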
 ### 4.4 Database Migrations
 While in the `src` folder, run Alembic migrations:
 ```sh
@@ -473,7 +495,7 @@ First, you may want to take a look at the project structure and understand what
 
 ### 5.2 Database Model
 Create the new entities and relationships and add them to the model
-![diagram](https://user-images.githubusercontent.com/43156212/274053323-31bbdb41-15bf-45f2-8c8e-0b04b71c5b0b.png)
+![diagram](https://user-images.githubusercontent.com/43156212/282272311-c7a36e26-dcd0-42cf-939d-6434b5579f29.png)
 
 ### 5.3 SQLAlchemy Models
 Inside `app/models`, create a new `entity.py` for each new entity (replacing entity with the name) and define the attributes according to [SQLAlchemy 2.0 standards](https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#orm-mapping-styles):
@@ -832,6 +854,8 @@ Passing resource_id_name is usually preferred.
 The behaviour of the `cache` decorator changes based on the request method of your endpoint.
 It caches the result if you are passing it to a **GET** endpoint, and it invalidates the cache with this key_prefix and id if passed to other endpoints (**PATCH**, **DELETE**).
 
+
+#### Invalidating Extra Keys
 If you also want to invalidate the cache with a different key, you can use the decorator with the `to_invalidate_extra` variable.
 
 In the following example, I want to invalidate the cache for a certain `user_id`, since I'm deleting it, but I also want to invalidate the cache for the list of users, so it will not be out of sync.
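The full example follows in the README; a sketch of its shape, assuming `to_invalidate_extra` takes a mapping of extra key prefixes to resource ids (verify the exact shape against the cache decorator in the boilerplate):

```python
# Sketch only: the shape passed to to_invalidate_extra (a mapping of extra
# key prefixes to resource ids) is an assumption; verify it against the
# cache decorator's signature in the boilerplate before relying on it.
@router.delete("/{username}")
@cache(
    "{username}_cache",
    resource_id_name="username",
    to_invalidate_extra={"users": "all"},  # also drop the cached users list
)
async def erase_user(
    request: Request,
    username: str,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)],
):
    ...
```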
@@ -886,6 +910,68 @@ async def patch_post(
 > **Warning**
 > Note that adding `to_invalidate_extra` will not work for **GET** requests.
 
+#### Invalidate Extra By Pattern
+Let's assume we have an endpoint with a paginated response, such as:
+```python
+@router.get("/{username}/posts", response_model=PaginatedListResponse[PostRead])
+@cache(
+    key_prefix="{username}_posts:page_{page}:items_per_page:{items_per_page}",
+    resource_id_name="username",
+    expiration=60
+)
+async def read_posts(
+    request: Request,
+    username: str,
+    db: Annotated[AsyncSession, Depends(async_get_db)],
+    page: int = 1,
+    items_per_page: int = 10
+):
+    db_user = await crud_users.get(db=db, schema_to_select=UserRead, username=username, is_deleted=False)
+    if not db_user:
+        raise HTTPException(status_code=404, detail="User not found")
+
+    posts_data = await crud_posts.get_multi(
+        db=db,
+        offset=compute_offset(page, items_per_page),
+        limit=items_per_page,
+        schema_to_select=PostRead,
+        created_by_user_id=db_user["id"],
+        is_deleted=False
+    )
+
+    return paginated_response(
+        crud_data=posts_data,
+        page=page,
+        items_per_page=items_per_page
+    )
+```
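To see why a fixed extra key is not enough here, note how the cache key varies per page (key layout inferred from the `key_prefix` template above):

```python
# Keys the decorator above generates for a user "alice" (layout inferred
# from the key_prefix template; illustrative only):
key_prefix = "{username}_posts:page_{page}:items_per_page:{items_per_page}"
for page in (1, 2):
    print(key_prefix.format(username="alice", page=page, items_per_page=10))
# alice_posts:page_1:items_per_page:10
# alice_posts:page_2:items_per_page:10
```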
+
+Just passing `to_invalidate_extra` will not work to invalidate this cache, since the key will change based on the `page` and `items_per_page` values.
+To overcome this, we may use the `pattern_to_invalidate_extra` parameter:
+
+```python
+@router.patch("/{username}/post/{id}")
+@cache(
+    "{username}_post_cache",
+    resource_id_name="id",
+    pattern_to_invalidate_extra=["{username}_posts:*"]
+)
+async def patch_post(
+    request: Request,
+    username: str,
+    id: int,
+    values: PostUpdate,
+    current_user: Annotated[UserRead, Depends(get_current_user)],
+    db: Annotated[AsyncSession, Depends(async_get_db)]
+):
+    ...
+```
+
+Now it will invalidate all caches with a key that matches the pattern `{username}_posts:*`, which will work for the paginated responses.
+
+> **Warning**
+> Using `pattern_to_invalidate_extra` can be resource-intensive on large datasets. Use it judiciously and consider the potential impact on Redis performance. Be cautious with patterns that could match a large number of keys, as deleting many keys simultaneously may impact the performance of the Redis server.
+
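The cost warned about above comes from enumerating keys. Pattern invalidation is usually implemented with Redis `SCAN` rather than the blocking `KEYS`; a minimal sketch of that technique with `redis-py` (the general approach, not necessarily the boilerplate's exact code):

```python
# General technique for deleting keys by pattern with redis-py's asyncio
# client: SCAN walks the keyspace in batches instead of blocking like KEYS.
# This mirrors what pattern_to_invalidate_extra has to do; the boilerplate's
# actual implementation may differ.
from redis.asyncio import Redis


async def delete_keys_by_pattern(redis: Redis, pattern: str) -> int:
    deleted = 0
    async for key in redis.scan_iter(match=pattern, count=100):
        deleted += await redis.delete(key)
    return deleted


# e.g. await delete_keys_by_pattern(redis, "alice_posts:*")
```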
 #### Client-side Caching
 For `client-side caching`, all you have to do is let the `Settings` class defined in `app/core/config.py` inherit from the `ClientSideCacheSettings` class. You can set the `CLIENT_CACHE_MAX_AGE` value in `.env`; it defaults to 60 (seconds).
 
@@ -931,8 +1017,115 @@ If you are doing it from scratch, run while in the `src` folder:
 ```sh
 poetry run arq app.worker.WorkerSettings
 ```
+### 5.11 Rate Limiting
+To limit how many times a user can make a request in a certain interval of time (very useful for creating subscription plans or just protecting your API against DDoS attacks), you may just use the `rate_limiter` dependency:
+
+```python
+from fastapi import Depends
+
+from app.api.dependencies import rate_limiter
+from app.core import queue
+from app.schemas.job import Job
+
+@router.post("/task", response_model=Job, status_code=201, dependencies=[Depends(rate_limiter)])
+async def create_task(message: str):
+    job = await queue.pool.enqueue_job("sample_background_task", message)
+    return {"id": job.job_id}
+```
+
+By default, if no token is passed in the header (that is, the user is not authenticated), the user will be limited by their IP address with the default `limit` (how many times the user can make this request every period) and `period` (time in seconds) defined in `.env`.
+
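For intuition, such a dependency is commonly a fixed-window counter in Redis, keyed by the user (or, here, the IP) and the current window. A hedged sketch of that idea, not the boilerplate's actual `rate_limiter` implementation:

```python
# Fixed-window sketch of the idea behind a rate limiter dependency: count
# requests per (identifier, window) key in Redis and reject at the limit.
# This is NOT the boilerplate's actual code; its dependency lives in
# app/api/dependencies and also resolves tier-specific limits.
import time

from fastapi import HTTPException, Request
from redis.asyncio import Redis

redis = Redis(host="localhost", port=6379)  # assumed connection details
LIMIT, PERIOD = 10, 3600  # mirrors the .env defaults shown earlier


async def simple_rate_limiter(request: Request) -> None:
    # Without a token we fall back to the client's IP address.
    identifier = request.client.host if request.client else "anonymous"
    window = int(time.time()) // PERIOD
    key = f"ratelimit:{identifier}:{window}"

    count = await redis.incr(key)
    if count == 1:
        await redis.expire(key, PERIOD)  # let the window expire on its own
    if count > LIMIT:
        raise HTTPException(status_code=429, detail="Rate limit exceeded.")
```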
+Even though this is useful, the real power comes from creating `tiers` (categories of users) and standard `rate_limits` (`limits` and `periods` defined for specific `paths`, that is, endpoints) for these tiers.
+
+All of the `tier` and `rate_limit` models, schemas, and endpoints are already created in the respective folders (and usable only by superusers). You may use the `create_tier` script to create the first tier (it uses the `.env` variable `TIER_NAME`, which is all you need to create a tier) or just use the API.
+
+Here I'll create a `free` tier:
+
+<p align="left">
+  <img src="https://user-images.githubusercontent.com/43156212/282275103-d9c4f511-4cfa-40c6-b882-5b09df9f62b9.png" alt="passing name = free to api request body" width="70%" height="auto">
+</p>
+
+And a `pro` tier:
+
+<p align="left">
+  <img src="https://user-images.githubusercontent.com/43156212/282275107-5a6ca593-ccc0-4965-b2db-09ec5ecad91c.png" alt="passing name = pro to api request body" width="70%" height="auto">
+</p>
+
+Then I'll associate a `rate_limit` with the path `api/v1/tasks/task` for each of them:
+
+1 request every hour (3600 seconds) for the free tier:
+
+<p align="left">
+  <img src="https://user-images.githubusercontent.com/43156212/282275105-95d31e19-b798-4f03-98f0-3e9d1844f7b3.png" alt="passing path=api/v1/tasks/task, limit=1, period=3600, name=api_v1_tasks:1:3600 to free tier rate limit" width="70%" height="auto">
+</p>
+
+10 requests every hour for the pro tier:
+
+<p align="left">
+  <img src="https://user-images.githubusercontent.com/43156212/282275108-deec6f46-9d47-4f01-9899-ca42da0f0363.png" alt="passing path=api/v1/tasks/task, limit=10, period=3600, name=api_v1_tasks:10:3600 to pro tier rate limit" width="70%" height="auto">
+</p>
+
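If you prefer scripting these two steps over clicking through the docs UI, the same calls can be made over HTTP. A sketch with `httpx`, where the endpoint paths and payload fields are read off the screenshots above and should be verified against the routers in `app/api`:

```python
# Scripted version of the two steps above, inferred from the screenshots:
# create a tier, then attach a rate limit to it. Paths and payload fields
# are assumptions; check the routers in app/api for the exact contract.
import httpx

BASE = "http://127.0.0.1:8000/api/v1"
headers = {"Authorization": "Bearer <superuser-token-here>"}

with httpx.Client(base_url=BASE, headers=headers) as client:
    client.post("/tier", json={"name": "pro"})  # assumed tier endpoint
    client.post(
        "/tier/pro/rate_limit",  # assumed rate limit endpoint
        json={
            "path": "api/v1/tasks/task",
            "limit": 10,
            "period": 3600,
            "name": "api_v1_tasks:10:3600",
        },
    )
```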
+Now let's read all the tiers available (`GET api/v1/tiers`):
+
+```javascript
+{
+  "data": [
+    {
+      "name": "free",
+      "id": 1,
+      "created_at": "2023-11-11T05:57:25.420360"
+    },
+    {
+      "name": "pro",
+      "id": 2,
+      "created_at": "2023-11-12T00:40:00.759847"
+    }
+  ],
+  "total_count": 2,
+  "has_more": false,
+  "page": 1,
+  "items_per_page": 10
+}
+```
+
+And read the `rate_limits` for the `pro` tier to ensure it's working (`GET api/v1/tier/pro/rate_limits`):
+
+```javascript
+{
+  "data": [
+    {
+      "path": "api_v1_tasks_task",
+      "limit": 10,
+      "period": 3600,
+      "id": 1,
+      "tier_id": 2,
+      "name": "api_v1_tasks:10:3600"
+    }
+  ],
+  "total_count": 1,
+  "has_more": false,
+  "page": 1,
+  "items_per_page": 10
+}
+```
+
+Now, whenever an authenticated user makes a `POST` request to `api/v1/tasks/task`, they'll use the quota defined by their tier.
+You may check this by getting the token from the `api/v1/login` endpoint, then passing it in the request header:
+```sh
+curl -X POST 'http://127.0.0.1:8000/api/v1/tasks/task?message=test' \
+  -H 'Authorization: Bearer <your-token-here>'
+```
+
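The same check in Python, assuming `api/v1/login` follows FastAPI's standard OAuth2 password flow (form-encoded `username`/`password` returning an `access_token`; verify against the boilerplate's login router):

```python
# Python equivalent of the curl check above. Assumes api/v1/login is a
# standard OAuth2 password flow (form fields and token key are assumptions).
import httpx

BASE = "http://127.0.0.1:8000/api/v1"

with httpx.Client(base_url=BASE) as client:
    login = client.post(
        "/login",
        data={"username": "your_username", "password": "your_password"},
    )
    token = login.json()["access_token"]

    response = client.post(
        "/tasks/task",
        params={"message": "test"},
        headers={"Authorization": f"Bearer {token}"},
    )
    print(response.status_code)  # expect 429 once the tier's quota is spent
```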
+> **Warning**
+> Since the `rate_limiter` dependency uses the `get_optional_user` dependency instead of `get_current_user`, it will not require authentication, but it will behave accordingly if the user is authenticated (and the token is passed in the header). If you want to ensure authentication, also use `get_current_user`.
+
+To change a user's tier, you may just use the `PATCH api/v1/user/{username}/tier` endpoint.
+Note that for flexibility (since this is a boilerplate), it's not necessary to provide a `tier_id` to create a user, but you probably should set every user to a certain tier (let's say `free`) once they are created.
+
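That tier change over HTTP might look like the following, where the `tier_id` payload field is an assumption (the path comes from the endpoint above, superuser auth assumed):

```python
# Assigning a tier to a user via the endpoint mentioned above. The
# tier_id payload field is an assumption; superuser auth assumed.
import httpx

response = httpx.patch(
    "http://127.0.0.1:8000/api/v1/user/some_username/tier",
    json={"tier_id": 1},  # e.g. the "free" tier created earlier
    headers={"Authorization": "Bearer <superuser-token-here>"},
)
print(response.status_code)
```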
+> **Warning**
+> If a user does not have a `tier`, or their tier has no `rate limit` defined for the path, and a token is still passed with the request, the default `limit` and `period` will be used; this event is logged in `app/logs`.
 
-### 5.11 Running
+### 5.12 Running
 If you are using docker compose, just running the following command should ensure everything is working:
 ```sh
 docker compose up

docker-compose.yml

Lines changed: 42 additions & 26 deletions
@@ -45,34 +45,50 @@ services:
     volumes:
       - redis-data:/data
 
-  # #-------- uncomment to create first superuser --------
-  # create_superuser:
-  #   build:
-  #     context: .
-  #     dockerfile: Dockerfile
-  #   env_file:
-  #     - ./src/.env
-  #   depends_on:
-  #     - db
-  #   command: python -m src.scripts.create_first_superuser
-  #   volumes:
-  #     - ./src:/code/src
+  # #-------- uncomment to create first superuser --------
+  # create_superuser:
+  #   build:
+  #     context: .
+  #     dockerfile: Dockerfile
+  #   env_file:
+  #     - ./src/.env
+  #   depends_on:
+  #     - db
+  #     - web
+  #   command: python -m src.scripts.create_first_superuser
+  #   volumes:
+  #     - ./src:/code/src
 
-  # #-------- uncomment to run tests --------
-  # pytest:
-  #   build:
-  #     context: .
-  #     dockerfile: Dockerfile
-  #   env_file:
-  #     - ./src/.env
-  #   depends_on:
-  #     - db
-  #     - create_superuser
-  #     - redis
-  #   command: python -m pytest
-  #   volumes:
-  #     - ./src:/code/src
+  # #-------- uncomment to run tests --------
+  # # pytest:
+  # #   build:
+  # #     context: .
+  # #     dockerfile: Dockerfile
+  # #   env_file:
+  # #     - ./src/.env
+  # #   depends_on:
+  # #     - db
+  # #     - create_superuser
+  # #     - redis
+  # #   command: python -m pytest
+  # #   volumes:
+  # #     - ./src:/code/src
+
+  # #-------- uncomment to create first tier --------
+  # create_tier:
+  #   build:
+  #     context: .
+  #     dockerfile: Dockerfile
+  #   env_file:
+  #     - ./src/.env
+  #   depends_on:
+  #     - db
+  #     - web
+  #   command: python -m src.scripts.create_first_tier
+  #   volumes:
+  #     - ./src:/code/src
 
 volumes:
   postgres-data:
   redis-data:
+
