dawenxi-only opened 1 month ago
I deployed a multi-process HTTP service using Gunicorn, but during stress testing I found that requests were not evenly distributed among the worker processes. Is there any way to achieve an even distribution of requests?
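One way to measure the distribution (a sketch, not from the thread: the app and module name are illustrative) is a minimal WSGI app that reports the PID of the worker handling each request. Run it under Gunicorn, e.g. `gunicorn -w 4 app:application`, hit it with your load tool, and count how often each PID appears.

```python
# app.py -- minimal WSGI app that returns the handling worker's PID,
# so a stress test can show how requests spread across Gunicorn workers.
import os


def application(environ, start_response):
    # Each Gunicorn worker is a separate process, so os.getpid()
    # identifies which worker served this request.
    body = str(os.getpid()).encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Tallying the response bodies from a few thousand requests gives the per-worker request counts directly.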
Dispatching of requests depends on the OS kernel and the scheduling policy you have set. If the kernel considers that a process can still accept connections, it will dispatch the connection accordingly. You shouldn't be worried about uneven distribution.
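The pre-fork pattern behind this can be sketched in a few lines (an illustration of the general mechanism, not Gunicorn's actual code): several worker processes block in `accept()` on the same inherited listening socket, and the kernel alone decides which worker receives each incoming connection.

```python
# Demo: forked workers sharing one listening socket; the kernel picks
# which worker accepts each connection. Requires a POSIX system (os.fork).
import os
import socket


def run_demo(n_workers=3, n_requests=12):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 0))  # ephemeral port
    listener.listen(16)
    port = listener.getsockname()[1]

    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:
            # Worker: accept connections and report own PID to the client.
            while True:
                conn, _ = listener.accept()
                conn.sendall(str(os.getpid()).encode())
                conn.close()
        pids.append(pid)

    # Parent acts as the client and records which worker served each request.
    served_by = []
    for _ in range(n_requests):
        c = socket.create_connection(("127.0.0.1", port))
        served_by.append(c.recv(64).decode())
        c.close()

    for pid in pids:
        os.kill(pid, 9)
        os.waitpid(pid, 0)
    listener.close()
    return served_by
```

Running this typically shows the counts per PID are close but not identical: nothing in user space round-robins the connections, which matches the behaviour observed in the stress test.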
Thank you for your response. I am concerned that multiple requests concentrated in one process may affect performance.
Well, the kernel scheduler is taking care of it. Some schedulers rebalance more often than others. If you use Linux, check https://docs.kernel.org/scheduler/index.html
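One related knob worth benchmarking (a suggestion, not a guaranteed fix from the thread): Gunicorn's `reuse_port` setting (`--reuse-port` on the command line) sets SO_REUSEPORT on the listening socket. On Linux 3.9+ the kernel can hash incoming connections across sockets bound with that flag, which can change how evenly connections are spread; measure it under your stress test rather than assuming an improvement.

```python
# gunicorn_conf.py -- sketch of a config enabling SO_REUSEPORT.
# Setting names are real Gunicorn settings; the values are illustrative.
bind = "0.0.0.0:8000"
workers = 4          # one worker per core is a common starting point
reuse_port = True    # sets SO_REUSEPORT on the listening socket (Linux 3.9+)
```

Launch with `gunicorn -c gunicorn_conf.py myapp:app` (module and callable names here are placeholders).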
What do you mean? Accepted requests are always passed to the WSGI callable. Can you clarify what you're asking for?