<p align="center">
  <a href="https://github.com/igormagalhaesr/FastAPI-boilerplate">
    <img src="https://user-images.githubusercontent.com/43156212/277095260-ef5d4496-8290-4b18-99b2-0c0b5500504e.png" alt="Blue rocket with the FastAPI logo as its window and the word FAST written on it" width="35%" height="auto">
  </a>
</p>

- 🚦 ARQ integration for task queue
- ⚙️ Efficient querying (only queries what's needed)
- ⎘ Out-of-the-box pagination support
- 🛑 Rate Limiter dependency
- 👮 FastAPI docs behind authentication and hidden based on the environment
- 🦾 Easily extendable
- 🤸‍♂️ Flexible
- [ ] Docs for other databases (MySQL, SQLite)

#### Features
- [x] Add a Rate Limiter dependency
- [ ] Add MongoDB support

#### Tests
   8. [Caching](#58-caching)
   9. [More Advanced Caching](#59-more-advanced-caching)
   10. [ARQ Job Queues](#510-arq-job-queues)
   11. [Rate Limiting](#511-rate-limiting)
   12. [Running](#512-running)
7. [Running in Production](#6-running-in-production)
8. [Testing](#7-testing)
9. [Contributing](#8-contributing)
## 3. Prerequisites
Start by using the template and naming the repository whatever you want.
<p align="left">
  <img src="https://user-images.githubusercontent.com/43156212/277866726-975d1c98-b1c9-4c8e-b4bd-001c8a5728cb.png" alt="Clicking the use this template button, then the create a new repository option" width="35%" height="auto">
</p>

Then clone your created repository (I'm using the base repository for the example):
> **Warning**
> You may use the same Redis instance for both caching and the queue while developing, but the recommendation is to use two separate containers in production.

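A sketch of what the two separate containers could look like in `docker-compose.yml` (the service names, image, and port mapping here are illustrative, not the boilerplate's actual compose file):

```yaml
services:
  redis-cache:
    image: redis:alpine
    ports:
      - "6379:6379"
  redis-queue:
    image: redis:alpine
    ports:
      - "6380:6379"  # exposed on a different host port so both can run side by side
```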
To create the first tier:
```
# ------------- first tier -------------
TIER_NAME="free"
```

| 205 | + |
| 206 | +For the rate limiter: |
| 207 | +``` |
| 208 | +# ------------- redis rate limit ------------- |
| 209 | +REDIS_RATE_LIMIT_HOST="localhost" # default="localhost" |
| 210 | +REDIS_RATE_LIMIT_PORT=6379 # default=6379 |
| 211 | +
|
| 212 | +
|
| 213 | +# ------------- default rate limit settings ------------- |
| 214 | +DEFAULT_RATE_LIMIT_LIMIT=10 # default=10 |
| 215 | +DEFAULT_RATE_LIMIT_PERIOD=3600 # default=3600 |
| 216 | +``` |
| 217 | + |
For tests (optional to run):
```
# ------------- test -------------
```
To stop the `create_superuser` service:
```sh
docker-compose stop create_superuser
```

#### 4.3.2 From Scratch
While in the `src` folder, run (after you have started the application at least once to create the tables):
```sh
poetry run python -m scripts.create_first_superuser
```

#### 4.3.3 Creating the First Tier
Creating the first tier is similar: just replace `create_superuser` with `create_tier`. If you are using `docker compose`, do not forget to uncomment the `create_tier` service in `docker-compose.yml`.

### 4.4 Database Migrations
While in the `src` folder, run Alembic migrations:
```sh
poetry run alembic revision --autogenerate
```

### 5.2 Database Model
Create the new entities and relationships and add them to the model.

### 5.3 SQLAlchemy Models
Inside `app/models`, create a new `entity.py` for each new entity (replacing `entity` with the entity's name) and define the attributes according to [SQLAlchemy 2.0 standards](https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#orm-mapping-styles):
Passing `resource_id_name` is usually preferred.

The behaviour of the `cache` decorator changes based on the request method of your endpoint.
It caches the result if you are passing it to a **GET** endpoint, and it invalidates the cache with this `key_prefix` and `id` if passed to other endpoints (**PATCH**, **DELETE**).

#### Invalidating Extra Keys
If you also want to invalidate the cache for a different key, you can use the decorator with the `to_invalidate_extra` parameter.

In the following example, I want to invalidate the cache for a certain `user_id`, since I'm deleting it, but I also want to invalidate the cache for the list of users, so it will not be out of sync.
> **Warning**
> Note that adding `to_invalidate_extra` will not work for **GET** requests.

#### Invalidate Extra By Pattern
Let's assume we have an endpoint with a paginated response, such as:
```python
@router.get("/{username}/posts", response_model=PaginatedListResponse[PostRead])
@cache(
    key_prefix="{username}_posts:page_{page}:items_per_page:{items_per_page}",
    resource_id_name="username",
    expiration=60
)
async def read_posts(
    request: Request,
    username: str,
    db: Annotated[AsyncSession, Depends(async_get_db)],
    page: int = 1,
    items_per_page: int = 10
):
    db_user = await crud_users.get(db=db, schema_to_select=UserRead, username=username, is_deleted=False)
    if not db_user:
        raise HTTPException(status_code=404, detail="User not found")

    posts_data = await crud_posts.get_multi(
        db=db,
        offset=compute_offset(page, items_per_page),
        limit=items_per_page,
        schema_to_select=PostRead,
        created_by_user_id=db_user["id"],
        is_deleted=False
    )

    return paginated_response(
        crud_data=posts_data,
        page=page,
        items_per_page=items_per_page
    )
```

Just passing `to_invalidate_extra` will not work to invalidate this cache, since the key changes based on the `page` and `items_per_page` values.
To overcome this, we may use the `pattern_to_invalidate_extra` parameter:

```python
@router.patch("/{username}/post/{id}")
@cache(
    "{username}_post_cache",
    resource_id_name="id",
    pattern_to_invalidate_extra=["{username}_posts:*"]
)
async def patch_post(
    request: Request,
    username: str,
    id: int,
    values: PostUpdate,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)]
):
    ...
```

Now it will invalidate all caches with a key that matches the pattern `{username}_posts:*`, which will work for the paginated responses.

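The reason a fixed `to_invalidate_extra` key cannot cover paginated caches is that the page parameters end up inside the key itself. A quick standalone sketch in plain Python (the `paginated_key` helper is illustrative, not part of the boilerplate; `fnmatch` is used here only to mimic Redis-style glob matching):

```python
from fnmatch import fnmatch


def paginated_key(username: str, page: int, items_per_page: int) -> str:
    # mirrors the key_prefix "{username}_posts:page_{page}:items_per_page:{items_per_page}"
    return f"{username}_posts:page_{page}:items_per_page:{items_per_page}"


keys = [paginated_key("alice", page, 10) for page in (1, 2, 3)]

# a fixed extra key can only ever match one page of the cache...
assert [k for k in keys if k == paginated_key("alice", 1, 10)] == [keys[0]]

# ...while the glob pattern matches every cached page for that user
assert [k for k in keys if fnmatch(k, "alice_posts:*")] == keys
```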
> **Warning**
> Using `pattern_to_invalidate_extra` can be resource-intensive on large datasets. Use it judiciously and be cautious with patterns that could match a large number of keys, as deleting many keys at once may degrade the performance of the Redis server.

#### Client-side Caching
For `client-side caching`, all you have to do is let the `Settings` class defined in `app/core/config.py` inherit from the `ClientSideCacheSettings` class. You can set the `CLIENT_CACHE_MAX_AGE` value in `.env`; it defaults to 60 (seconds).

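Following the `.env` conventions used earlier in this README, the setting could look like this (the section header comment is illustrative):

```
# ------------- client-side cache -------------
CLIENT_CACHE_MAX_AGE=60  # default=60 (seconds)
```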
If you are doing it from scratch, run while in the `src` folder:
```sh
poetry run arq app.worker.WorkerSettings
```

### 5.11 Rate Limiting
To limit how many times a user can make a request in a certain interval of time (very useful to create subscription plans or just to protect your API against DDoS), you may just use the `rate_limiter` dependency:

```python
from fastapi import Depends

from app.api.dependencies import rate_limiter
from app.core import queue
from app.schemas.job import Job


@router.post("/task", response_model=Job, status_code=201, dependencies=[Depends(rate_limiter)])
async def create_task(message: str):
    job = await queue.pool.enqueue_job("sample_background_task", message)
    return {"id": job.job_id}
```

By default, if no token is passed in the header (that is, the user is not authenticated), the user will be limited by their IP address with the default `limit` (how many times the user can make this request every period) and `period` (time in seconds) defined in `.env`.

Even though this is useful, the real power comes from creating `tiers` (categories of users) and standard `rate_limits` (`limits` and `periods` defined for specific `paths`, that is, endpoints) for these tiers.

All of the `tier` and `rate_limit` models, schemas, and endpoints are already created in the respective folders (and usable only by superusers). You may use the `create_tier` script to create the first tier (it uses the `.env` variable `TIER_NAME`, which is all you need to create a tier) or just use the API.

Here I'll create a `free` tier:

<p align="left">
  <img src="https://user-images.githubusercontent.com/43156212/282275103-d9c4f511-4cfa-40c6-b882-5b09df9f62b9.png" alt="passing name = free to api request body" width="70%" height="auto">
</p>

And a `pro` tier:

<p align="left">
  <img src="https://user-images.githubusercontent.com/43156212/282275107-5a6ca593-ccc0-4965-b2db-09ec5ecad91c.png" alt="passing name = pro to api request body" width="70%" height="auto">
</p>

Then, for each of them, I'll associate a `rate_limit` with the path `api/v1/tasks/task`.

1 request every hour (3600 seconds) for the free tier:

<p align="left">
  <img src="https://user-images.githubusercontent.com/43156212/282275105-95d31e19-b798-4f03-98f0-3e9d1844f7b3.png" alt="passing path=api/v1/tasks/task, limit=1, period=3600, name=api_v1_tasks:1:3600 to free tier rate limit" width="70%" height="auto">
</p>

10 requests every hour for the pro tier:

<p align="left">
  <img src="https://user-images.githubusercontent.com/43156212/282275108-deec6f46-9d47-4f01-9899-ca42da0f0363.png" alt="passing path=api/v1/tasks/task, limit=10, period=3600, name=api_v1_tasks:10:3600 to pro tier rate limit" width="70%" height="auto">
</p>

Now let's read all the tiers available (`GET api/v1/tiers`):

```javascript
{
  "data": [
    {
      "name": "free",
      "id": 1,
      "created_at": "2023-11-11T05:57:25.420360"
    },
    {
      "name": "pro",
      "id": 2,
      "created_at": "2023-11-12T00:40:00.759847"
    }
  ],
  "total_count": 2,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}
```

And read the `rate_limits` for the `pro` tier to ensure it's working (`GET api/v1/tier/pro/rate_limits`):

```javascript
{
  "data": [
    {
      "path": "api_v1_tasks_task",
      "limit": 10,
      "period": 3600,
      "id": 1,
      "tier_id": 2,
      "name": "api_v1_tasks:10:3600"
    }
  ],
  "total_count": 1,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}
```

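Note how the stored `path` in the response above (`api_v1_tasks_task`) appears to be the endpoint route with slashes replaced by underscores. A tiny illustrative helper (my own, not from the boilerplate) showing that mapping:

```python
def rate_limit_path(route: str) -> str:
    """Illustrative: derive a rate-limit path name from an endpoint route.

    "api/v1/tasks/task" -> "api_v1_tasks_task"
    """
    return route.strip("/").replace("/", "_")


assert rate_limit_path("api/v1/tasks/task") == "api_v1_tasks_task"
```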
Now, whenever an authenticated user makes a `POST` request to `api/v1/tasks/task`, they'll use the quota defined by their tier.
You may check this by getting the token from the `api/v1/login` endpoint, then passing it in the request header:
```sh
curl -X POST 'http://127.0.0.1:8000/api/v1/tasks/task?message=test' \
-H 'Authorization: Bearer <your-token-here>'
```

> **Warning**
> Since the `rate_limiter` dependency uses the `get_optional_user` dependency instead of `get_current_user`, it will not require authentication, but it will behave accordingly if the user is authenticated (and the token is passed in the header). If you want to require authentication, also use `get_current_user`.

To change a user's tier, you may just use the `PATCH api/v1/user/{username}/tier` endpoint.
Note that, for flexibility (since this is a boilerplate), a `tier_id` is not required to create a user, but you probably should assign every user to a certain tier (let's say `free`) once they are created.

> **Warning**
> If a user does not have a `tier`, or their tier does not have a `rate limit` defined for the path while a token is still passed in the request, the default `limit` and `period` will be used. This will be logged in `app/logs`.

### 5.12 Running
If you are using docker compose, just running the following command should ensure everything is working:
```sh
docker compose up
```