Closed · eddy2by4 closed this 3 months ago
You can use this repository to test locally:
https://github.com/eddy2by4/django-testing
```shell
pip install -r requirements.txt
python manage.py makemigrations
python manage.py migrate
```
Manually add one record to the Books table in the database (with pgAdmin or another tool).
```shell
granian --interface wsgi djangotesting.wsgi:application --workers 10
```
Run the wrk tests
From the readme:

> the default number of blocking threads should work properly with the majority of applications; in synchronous protocols like WSGI this will also impact the number of concurrent requests you can handle, but you should use the `backpressure` configuration parameter to control it and set a lower number of blocking threads only if your application has a very low (1ms order) average response time;
Thus, you should set `--backpressure` to the maximum number of database connections (per worker). See also https://polar.sh/emmett-framework/posts/granian-1-4
I'm not sure what value I should use here.
I've tried `--workers 2 --backpressure 2`. I would expect this to be very limiting, with a maximum of 4 db connections, but when I run the wrk script it still hits 100.
What would be a good backpressure value to get both performance and reliability, and to make sure db connections don't go through the roof?
> I've tried `--workers 2 --backpressure 2`. I would expect this to be very limiting and have a max of 4 db connections, but when I run the wrk script it still hits 100.
That's somewhat strange. Backpressure should work as you expected:
```
❯ granian --interface wsgi --workers 1 benchmarks.app.wsgi:app
[INFO] Websockets are not supported on WSGI
[INFO] Starting granian (main PID: 88229)
[INFO] Listening at: http://127.0.0.1:8000
[INFO] Spawning worker-1 with pid: 88231
[INFO] Started worker-1
[INFO] Started worker-1 runtime-1

❯ wrk -d 10s -c 100 http://localhost:8000/io10
Running 10s test @ http://localhost:8000/io10
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.70ms    0.92ms   20.30ms   71.78%
    Req/Sec     3.95k    105.30     4.16k    76.62%
  79020 requests in 10.10s, 10.32MB read
Requests/sec:   7821.30
Transfer/sec:      1.02MB
```
```
❯ granian --interface wsgi --workers 1 --backpressure 2 benchmarks.app.wsgi:app
[INFO] Websockets are not supported on WSGI
[INFO] Starting granian (main PID: 88332)
[INFO] Listening at: http://127.0.0.1:8000
[INFO] Spawning worker-1 with pid: 88334
[INFO] Started worker-1
[INFO] Started worker-1 runtime-1

❯ wrk -d 10s -c 100 http://localhost:8000/io10
Running 10s test @ http://localhost:8000/io10
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.58ms  695.12us   17.45ms   83.02%
    Req/Sec    159.29      7.73   181.00     68.00%
  1596 requests in 10.10s, 213.53KB read
Requests/sec:    158.04
Transfer/sec:     21.14KB
```
The only other limitation you can set is `--blocking-threads` (also per worker): that puts a hard limit on the number of active threads running Python code.
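As a rough illustration of what a hard cap on blocking threads means (this is a generic fixed-size thread-pool sketch, not granian's actual implementation), a bounded pool limits how many handlers can run Python code concurrently, no matter how many requests are queued:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_THREADS = 2  # analogous to a per-worker --blocking-threads limit

active = 0
peak = 0
lock = threading.Lock()

def handler(_):
    # Track how many simulated "requests" run at the same time.
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)  # simulate blocking I/O, e.g. a db query
    with lock:
        active -= 1

# 20 queued requests, but at most MAX_THREADS execute concurrently.
with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    list(pool.map(handler, range(20)))

print(peak)  # never exceeds MAX_THREADS
```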
I'm not sure if something is going on with the way the Django ORM uses connections; maybe it's just opening new connections instead of reusing the previous ones.
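On the Django side, one thing worth checking is persistent connections: by default `CONN_MAX_AGE` is 0, so Django opens a new database connection for every request and closes it afterwards, which under load can look exactly like connections not being reused. A sketch of the relevant `settings.py` fragment (database name and credentials are placeholders):

```python
# settings.py fragment -- placeholder database name and credentials
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "djangotesting",  # placeholder
        "USER": "postgres",       # placeholder
        "PASSWORD": "secret",     # placeholder
        "HOST": "localhost",
        "PORT": "5432",
        # Keep each connection open for up to 60 seconds so request
        # threads reuse it instead of opening a fresh one per request.
        "CONN_MAX_AGE": 60,
    }
}
```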
Anyway, a good value for `backpressure` (or `blocking-threads`, if the first one doesn't work for you) for your use case would be `max_db_connections_you_want / workers`.
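For example, with a hypothetical budget of 80 total database connections spread across 10 workers, the rule above gives a per-worker value like this (all numbers are made up for illustration):

```python
# Hypothetical numbers: cap total db connections at 80 across 10 workers.
max_db_connections = 80
workers = 10

# backpressure is a per-worker limit, so divide the total budget.
backpressure = max_db_connections // workers
print(backpressure)  # 8
```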
@eddy2by4 do you have any updates on this? May I close it?
Closing this due to inactivity. Feel free to comment again @eddy2by4 and I might re-open this.
So with a simple Django project, I've created 2 routes:
Started the server with the command:

```shell
granian --interface wsgi djangotesting.wsgi:application --workers 10
```
When I benchmark the first route, everything is OK:

```shell
wrk -t12 -c100 -d15s http://localhost:8000/test/
```
But when I benchmark the route with the db call:

```shell
wrk -t12 -c100 -d15s http://localhost:8000/test/db
```
It just opens many db connections and doesn't really serve much: I get a lot of non-200 responses, probably because it's hitting the db connection limit.
I don't have this issue when running the same Django app with gunicorn or other servers.