Closed: onixlas closed this 12 months ago
The backlog parameter applies to new socket-level connections, not new requests. Is your script opening new connections, or reusing a keep-alive socket with the server? Ultimately the backlog is not in our control; it's just a number we pass down to socket.listen(backlog), so it's subject to any idiosyncrasies there.
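To illustrate the distinction: backlog is only a hint to the kernel about the accept queue for *pending TCP connections*, and waitress hands it straight to the socket layer. A minimal sketch of what that amounts to (standalone, not waitress's actual internals):

```python
import socket

# backlog caps the kernel's queue of connections that have completed the
# TCP handshake but have not yet been accept()ed by the server. It says
# nothing about HTTP requests already accepted and sitting in waitress's
# task queue -- that is a separate, application-level queue.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
sock.listen(10)              # backlog=10; the kernel may silently round this up
print("listening on port", sock.getsockname()[1])
sock.close()
```

Keep-alive requests never touch this queue at all, because they reuse an already-accepted connection.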
I'd suggest sharing your script.
Closing - can re-evaluate if we have a way to repro.
I use waitress to run my Flask app like this:

```python
serve(app, host="0.0.0.0", port=8003, threads=4, backlog=10, channel_timeout=0.4)
```

After that I launch a script which makes lots of requests simultaneously. In the waitress output I see this:

```
...
WARNING:waitress.queue:Task queue depth is 26
WARNING:waitress.queue:Task queue depth is 27
WARNING:waitress.queue:Task queue depth is 28
...
```

Shouldn't the backlog parameter limit the queue depth? Is there a way to respond with something like 429 Too Many Requests if the queue is too deep?
Python 3.11.4 (main, Jun 9 2023, 07:59:55) [GCC 12.3.0] on linux
Flask==2.3.3
waitress==2.1.2