
Use Sanic's own server to run benchmarks and adjust the Litestar and FastAPI setups for a fair comparison 🚀 #3

provinzkraut opened this issue Oct 8, 2023 · 1 comment

Sanic

Since you're giving Robyn the opportunity to run on its own server, you should do the same for Sanic.
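A minimal sketch of what running the echo app on Sanic's own server could look like (handler details and worker count are assumptions on my part; the app in this repo may differ):

from sanic import Sanic
from sanic.response import json as json_response

app = Sanic("EchoBench")

@app.post("/echo")
async def echo(request):
    # Echo the JSON body straight back, like the other benchmark apps do.
    return json_response(request.json)

if __name__ == "__main__":
    # Sanic's built-in server instead of uvicorn; the worker count is an assumption.
    app.run(host="0.0.0.0", port=8000, workers=4, access_log=False)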

Here are the results of Sanic with uvicorn:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    23.96ms   23.94ms 205.95ms   85.51%
    Req/Sec     1.71k     1.33k    6.51k    85.60%
  204405 requests in 10.10s, 33.33MB read
Requests/sec:  20241.36
Transfer/sec:      3.30MB

And this is what I get when I run Sanic with its own server:

Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.50ms    6.17ms  68.92ms   75.49%
    Req/Sec     6.16k     3.23k   38.93k    75.60%
  736312 requests in 10.10s, 85.67MB read
Requests/sec:  72901.33
Transfer/sec:      8.48MB

This would actually make it the fastest of the tested frameworks.

Litestar

The Litestar app is set up to respond on /, whereas the other apps, and the benchmark itself, target /echo, meaning all the Litestar results you're seeing are just 404 responses. This is actually visible in the output:

433571 requests in 10.10s, 71.12MB read
Socket errors: connect 0, read 306, write 0, timeout 0
Non-2xx or 3xx responses: 433571
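A minimal sketch of the fixed-up Litestar handler, moved to /echo (handler name and exact signature are assumptions; the app in the repo may differ):

from litestar import Litestar, post

@post("/echo")
async def echo(data: dict) -> dict:
    # Same path the benchmark hits, echoing the payload back.
    return data

app = Litestar(route_handlers=[echo])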

I've also noticed that you enabled much stricter data validation for Litestar than for FastAPI: dict[str, str] for both incoming and outgoing data in Litestar, versus just dict for incoming data and no validation of outgoing data in FastAPI. To make this a useful comparison, those should probably be equivalent (=
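For comparison, a FastAPI handler with roughly the same (loose) validation could look something like this — a sketch under the assumption that "equivalent" means a plain dict in and out for both frameworks:

from fastapi import FastAPI

app = FastAPI()

@app.post("/echo")
async def echo(data: dict) -> dict:
    # Plain dict in, plain dict out, mirroring the loosened Litestar handler above.
    return data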


Litestar before the adjustments:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.27ms    9.04ms 111.02ms   64.85%
    Req/Sec     3.17k     1.64k   20.26k    77.37%
  379577 requests in 10.08s, 62.26MB read
  Non-2xx or 3xx responses: 379577
Requests/sec:  37667.42
Transfer/sec:      6.18MB

Litestar after the adjustments:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.42ms    6.14ms 106.89ms   73.79%
    Req/Sec     4.89k     1.64k   28.00k    78.47%
  585402 requests in 10.10s, 87.65MB read
Requests/sec:  57968.14
Transfer/sec:      8.68MB

Adjusted results and rankings:

I've also run Starlette and FastAPI for comparison and compiled a table with the results of the adjusted tests:

Framework   RPS
Sanic       72901
Starlette   68016
Litestar    57968
FastAPI     38225

This gives a very different picture than your original run.

tushar5526 (Owner) commented Oct 8, 2023

This is awesome!! I think there should be some CI that runs these benchmarks on every commit and updates the images in the README.
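A small sketch of what such a CI job could call — a helper that runs wrk and pulls out the Requests/sec number for the README (the command line mirrors the one used above; script name and paths are assumptions):

import re
import subprocess

def run_wrk(url: str, script: str = "wrk_script.lua", duration: str = "10s") -> float:
    # Run wrk the same way as in the benchmarks above and return the Requests/sec figure.
    out = subprocess.run(
        ["wrk", "-t12", "-c400", f"-d{duration}", "-s", script, url],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Requests/sec:\s+([\d.]+)", out)
    return float(match.group(1)) if match else 0.0

if __name__ == "__main__":
    print(run_wrk("http://localhost:8000/echo"))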

Feel free to open a PR with the changes you mentioned if you have them handy; otherwise I will make these changes when I get a chance :)
