emmett-framework / granian

A Rust HTTP server for Python applications
BSD 3-Clause "New" or "Revised" License

Implement a timeout #177

Closed jefer94 closed 2 months ago

jefer94 commented 10 months ago

Something like --timeout 30. I'm writing a benchmark and I need to set this value.

gi0baro commented 10 months ago

🤔 A timeout on what? Can you be more specific? Also, if I were implementing a benchmark, and we're talking about a timeout on request time, I'd implement it on the client side, not on the server. Having a timeout on the server would produce false results in a benchmark IMHO.
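
For reference, a minimal sketch of a client-side per-request timeout, assuming httpx as the benchmark client (the URL and the 30-second value are placeholders):

import httpx

# Give each benchmark request its own 30 s deadline on the client side,
# instead of relying on the server to cut it off.
timeout = httpx.Timeout(30.0)
with httpx.Client(timeout=timeout) as client:
    try:
        resp = client.get("http://127.0.0.1:8000/")
    except httpx.TimeoutException:
        # count this as a failed/slow request in the benchmark results
        resp = None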

jefer94 commented 10 months ago

For handling request timeouts, I have something like:

echo "# Django Workers" > $FILE

sudo fuser -k $PORT/tcp
gunicorn mysite.asgi --timeout $TIMEOUT --workers $THREADS --worker-class uvicorn.workers.UvicornWorker & echo "starting server..."
sleep $SLEEP_TIME
echo "## ASGI Gunicorn Uvicorn" >> $FILE
bench

sudo fuser -k $PORT/tcp
granian --interface asgi mysite.asgi:application --port $PORT --workers $THREADS --loop asyncio & echo "starting server..."
sleep $SLEEP_TIME
echo "## Granian Asyncio" >> $FILE
bench

sudo fuser -k $PORT/tcp
granian --interface asgi mysite.asgi:application --port $PORT --workers $THREADS --loop uvloop & echo "starting server..."
sleep $SLEEP_TIME
echo "## Granian uvloop" >> $FILE
bench

sudo fuser -k $PORT/tcp

And I thought that if each server had a different timeout it could affect the results. Also, I'm running on Heroku, which has a request timeout of 30 seconds.

My benchmark is not very realistic, I think, but it covers a lot of common tasks.

I got good results, but I could not get any good results for the AIOHTTP or HTTPX tests, so I think maybe the default request timeout is affecting the Granian results.

jefer94 commented 10 months ago

And this appears even with --no-opt. Apparently it is not handling the request timeout, and all the output it is generating in the console should penalize the results:

[WARNING] Application callable raised an exception
Task exception was never retrieved
future: <Task finished name='Task-55077' coro=<future_watcher_wrapper.<locals>.future_watcher() done, defined at /home/jefer/dev/work/apiv2/benchmarks/django-workers/.venv/lib/python3.11/site-packages/granian/_futures.py:2> exception=TypeError('PyFutureAwaitable.cancel() takes no arguments (1 given)')>
Traceback (most recent call last):
  File "/home/jefer/dev/work/apiv2/benchmarks/django-workers/.venv/lib/python3.11/site-packages/granian/_futures.py", line 4, in future_watcher
    await inner(watcher.scope, watcher.proto)
  File "/home/jefer/dev/work/apiv2/benchmarks/django-workers/.venv/lib/python3.11/site-packages/django/core/handlers/asgi.py", line 170, in __call__
    await self.handle(scope, receive, send)
  File "/home/jefer/dev/work/apiv2/benchmarks/django-workers/.venv/lib/python3.11/site-packages/django/core/handlers/asgi.py", line 218, in handle
    task.cancel()
TypeError: PyFutureAwaitable.cancel() takes no arguments (1 given)
Task exception was never retrieved
future: <Task finished name='Task-55434' coro=<ASGIHandler.listen_for_disconnect() done, defined at /home/jefer/dev/work/apiv2/benchmarks/django-workers/.venv/lib/python3.11/site-packages/django/core/handlers/asgi.py:226> exception=RequestAborted()>
Traceback (most recent call last):
  File "/home/jefer/dev/work/apiv2/benchmarks/django-workers/.venv/lib/python3.11/site-packages/django/core/handlers/asgi.py", line 230, in listen_for_disconnect
    raise RequestAborted()
django.core.exceptions.RequestAborted

gi0baro commented 10 months ago

@jefer94 1.0 added the proper code to handle cancel calls on PyFutureAwaitable; this should solve your issue.

As for the timeout thing, I'm still not convinced Granian should implement this; I'd rather leave it to the application/framework.
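
A minimal sketch of what an application-level timeout could look like as plain ASGI middleware, using asyncio.wait_for to cancel the handler (the names here are illustrative and not part of Granian's API):

import asyncio

def timeout_middleware(app, seconds=30):
    # Wrap an ASGI app and cancel any HTTP request that runs longer than `seconds`.
    async def wrapper(scope, receive, send):
        if scope["type"] != "http":
            await app(scope, receive, send)
            return
        try:
            await asyncio.wait_for(app(scope, receive, send), timeout=seconds)
        except asyncio.TimeoutError:
            # Best effort: this only works if the app has not started the response yet.
            await send({"type": "http.response.start", "status": 504, "headers": []})
            await send({"type": "http.response.body", "body": b"Gateway Timeout"})
    return wrapper

# e.g. in mysite/asgi.py: application = timeout_middleware(application, seconds=30)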

jefer94 commented 10 months ago

I will make an endpoint on Heroku with an asyncio.sleep(40); print('somebody') on Wednesday to see how they interact.
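
Roughly something like this (a sketch as an async Django view; the names are illustrative):

import asyncio
from django.http import HttpResponse

async def slow_view(request):
    await asyncio.sleep(40)  # longer than Heroku's 30 s router timeout
    print("somebody")        # should only run if the handler is never cancelled
    return HttpResponse("done")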

JaimeOnaindia commented 9 months ago

A timeout would be nice, one that pairs with the proxy (nginx or whatever), just like uvicorn and gunicorn have.

gi0baro commented 2 months ago

Closing this as stale

Object905 commented 4 days ago

It would be really nice to have this, to kill "misbehaving" workers. I've started using Granian for one "legacy" service (in WSGI mode) and faced an issue where some requests hang, deadlocking in a DB transaction on a hot table. Because of this, Postgres spikes in memory usage when it happens. After the app is restarted, everything is OK. This deadlock happens very rarely and is hard to debug, so it seems like timeouts are the best solution for this.

Gunicorn states that its timeout only applies to "silent" workers. And in my case it's a bit harder to set a timeout on the reverse proxy, because I'm using Traefik and it supports timeouts only on "endpoints", but on the same endpoint I have other services that require different settings.

gi0baro commented 4 days ago

faced an issue where some requests hang, deadlocking in a DB transaction on a hot table. Because of this, Postgres spikes in memory usage when it happens. After the app is restarted, everything is OK. This deadlock happens very rarely and is hard to debug, so it seems like timeouts are the best solution for this.

To me it sounds like you actually want statement execution timeouts and lock acquisition timeouts in Postgres, and then deal with the relevant errors in the Python app. There's no way for Granian in WSGI mode to stop PostgreSQL queries.
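
For example, with a Django/psycopg setup like the one earlier in the thread, a minimal sketch could pass those timeouts as libpq options (the database name and the values are placeholders):

# Django settings sketch: have Postgres itself abort long statements and long
# lock waits, then handle the resulting OperationalError in the app.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "OPTIONS": {
            # passed through to libpq; values are in milliseconds
            "options": "-c statement_timeout=30000 -c lock_timeout=5000",
        },
    }
}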

Object905 commented 4 days ago

Yes, surely it would be better to fix that in code. But it's legacy that stinks 😃. I was unable to pinpoint exactly why/where it happens. Global timeouts will cause other stuff to break for sure, but I'll try them out, thanks for the tip.

In that case I would be fine if the hanging worker were just killed altogether, so I've rolled back to Gunicorn with timeouts for now because of that.