Sanic

Since you're giving Robyn the opportunity to run on its own server, you should do the same for Sanic.

Here are the results of Sanic with uvicorn:

And this is what I get when I run Sanic with its own server:
```
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.50ms    6.17ms  68.92ms   75.49%
    Req/Sec     6.16k     3.23k   38.93k    75.60%
  736312 requests in 10.10s, 85.67MB read
Requests/sec:  72901.33
Transfer/sec:      8.48MB
```
This would actually make it the fastest of the tested frameworks.
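For reference, here is a minimal sketch of what running Sanic on its own server looks like; the handler name and echo behaviour are assumptions based on the /echo benchmark, not the repo's actual code:

```python
# Minimal sketch of a Sanic echo app served by Sanic's own server
# (handler name and app setup are assumptions, not the repo's code).
from sanic import Sanic
from sanic.response import json

app = Sanic("EchoBenchmark")

@app.post("/echo")
async def echo(request):
    # Echo the posted JSON body straight back.
    return json(request.json)

if __name__ == "__main__":
    # app.run() starts Sanic's built-in server; no ASGI server
    # such as uvicorn is involved.
    app.run(host="0.0.0.0", port=8000)
```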
Litestar
The Litestar app is set up to respond on /, whereas the other apps, and the benchmark, are run against /echo, meaning all results you're seeing are just 404 responses. This is actually visible in the results (python-framework-benchmarks/results/litestar.txt, lines 7 to 9 at 40a3cad).
I've also noticed that you enabled much stricter data validation for Litestar than for FastAPI: dict[str, str] for both incoming and outgoing data in Litestar, versus just dict for incoming data in FastAPI and no validation for outgoing data. To make this a useful comparison, those should probably be equivalent (=
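For illustration, the adjusted Litestar app would look something like this (the handler name is an assumption): the route moves from / to /echo to match the benchmark, and the annotations are loosened from dict[str, str] to plain dict so the validation work is comparable to the FastAPI app's.

```python
# Sketch of the adjusted Litestar app (handler name assumed): route on
# /echo to match the benchmark, annotations loosened to plain dict so
# the validation overhead matches what the FastAPI app does.
from litestar import Litestar, post

@post("/echo")
async def echo(data: dict) -> dict:
    # `data` is Litestar's reserved kwarg for the parsed request body.
    return data

app = Litestar(route_handlers=[echo])
```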
Litestar before the adjustments:
```
wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.27ms    9.04ms 111.02ms   64.85%
    Req/Sec     3.17k     1.64k   20.26k    77.37%
  379577 requests in 10.08s, 62.26MB read
  Non-2xx or 3xx responses: 379577
Requests/sec:  37667.42
Transfer/sec:      6.18MB
```
Litestar after the adjustments:
Adjusted results and rankings:
I've also run Starlette and FastAPI for comparison, and compiled a table with the results of the adjusted tests:
This gives a very different picture than your original run.
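For completeness, the FastAPI endpoint as described earlier, with plain dict validation on the incoming body and none on the outgoing data, would look roughly like this (handler name assumed):

```python
# Rough sketch of the FastAPI counterpart (handler name assumed):
# plain `dict` validation on the incoming body, and no return-type
# annotation or response_model, so the response is not validated.
from fastapi import FastAPI

app = FastAPI()

@app.post("/echo")
async def echo(data: dict):
    return data
```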
tushar5526 changed the title from "Use sanic's own server to run benchmarks 🚀" to "Use sanic's own server to run benchmarks and adjust Litestart and FastAPI servers for same grounds 🚀" on Oct 8, 2023.