nonprofitnerd opened 6 months ago
Got any context on when this gets triggered? Did you bump dependency or Python versions at the same time?
I partially reverted https://github.com/benoitc/gunicorn/commit/1ccebab763e2fa3a6b580f02d848a9dc314ed181 (meant to silence exceptions below SSLError) in https://github.com/benoitc/gunicorn/commit/0b10cbab1d6368fcab2d5a7b6fe359a6cecc81a7 specifically to make cases like this visible again - you are not necessarily seeing changed behaviour beyond changed log entries.
Hello, we're experiencing an issue in our production environment, and since we have limited information, diagnosing it is challenging. The problem occurs randomly, and we're using Python version 3.8.18 and Django version 4.2.12.
Hello, here are the relevant log excerpts:
[INFO] 2024-05-13 06:08:22,863 gunicorn.error: Worker exiting (pid: 28063)
[ERROR] 2024-05-13 06:08:22,860 gunicorn.error: Error handling request (no URI read)
[DEBUG] 2024-05-13 06:08:22,409 gunicorn.error: Closing connection.
[WARNING] 2024-05-13 06:08:21,858 gunicorn.error: Worker graceful timeout (pid:28063)
[DEBUG] 2024-05-13 06:07:51,192 gunicorn.error: Closing connection.
[DEBUG] 2024-05-13 06:07:50,892 gunicorn.error: Closing connection.
[INFO] 2024-05-13 06:07:50,835 gunicorn.error: Autorestarting worker after current request.
--
[INFO] 2024-05-13 04:11:34,150 gunicorn.error: Worker exiting (pid: 5111)
[ERROR] 2024-05-13 04:11:34,146 gunicorn.error: Error handling request (no URI read)
[WARNING] 2024-05-13 04:11:33,145 gunicorn.error: Worker graceful timeout (pid:5111)
[DEBUG] 2024-05-13 04:11:03,058 gunicorn.error: Closing connection.
[INFO] 2024-05-13 04:11:02,192 gunicorn.error: Autorestarting worker after current request.
--
[DEBUG] 2024-05-13 02:48:42,494 gunicorn.error: Closing connection.
[INFO] 2024-05-13 02:48:42,460 gunicorn.error: Worker exiting (pid: 2016)
[ERROR] 2024-05-13 02:48:42,459 gunicorn.error: Error handling request (no URI read)
[WARNING] 2024-05-13 02:48:41,457 gunicorn.error: Worker graceful timeout (pid:2016)
[DEBUG] 2024-05-13 02:48:11,691 gunicorn.error: Closing connection.
[INFO] 2024-05-13 02:48:10,755 gunicorn.error: Autorestarting worker after current request.
--
[INFO] 2024-05-13 00:51:37,997 gunicorn.error: Worker exiting (pid: 123650)
[ERROR] 2024-05-13 00:51:37,994 gunicorn.error: Error handling request (no URI read)
[WARNING] 2024-05-13 00:51:36,992 gunicorn.error: Worker graceful timeout (pid:123650)
[INFO] 2024-05-13 00:51:06,191 gunicorn.error: Autorestarting worker after current request.
--
[INFO] 2024-05-12 23:19:47,497 gunicorn.error: Worker exiting (pid: 93595)
[ERROR] 2024-05-12 23:19:47,494 gunicorn.error: Error handling request (no URI read)
[WARNING] 2024-05-12 23:19:46,492 gunicorn.error: Worker graceful timeout (pid:93595)
[INFO] 2024-05-12 23:19:16,330 gunicorn.error: Autorestarting worker after current request.
--
[INFO] 2024-05-12 12:22:49,710 gunicorn.error: Worker exiting (pid: 61916)
[ERROR] 2024-05-12 12:22:49,709 gunicorn.error: Error handling request (no URI read)
[WARNING] 2024-05-12 12:22:48,708 gunicorn.error: Worker graceful timeout (pid:61916)
[INFO] 2024-05-12 12:22:17,893 gunicorn.error: Autorestarting worker after current request.
Your logs are upside down 🙃 - but yes, you did find something.
There are various issues around the max_requests feature (and the basic recommendation is to not use it unless you really do need it). I expect this one is not any more serious than the other ones, given that the worker has stopped processing requests and is shutting down. If it bothers you, suppress or patch those log lines while no proper fix is available.
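If you do not actually need the feature, a minimal gunicorn.conf.py sketch of switching the recycling off (0 is the documented default, meaning disabled):

# gunicorn.conf.py - sketch only: disable request-count based worker recycling
max_requests = 0         # 0 disables the max_requests feature (the default)
max_requests_jitter = 0  # only relevant when max_requests is non-zero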
In 40232284934c32939c0e4e78caad1987c3773e08 all BaseException handling was forced into the parent class, and because exceptions from connections that had not yet received a full request line were previously not logged, that greenlet-specific BaseException suppression appears(²) to have been neutralized. I think some class-hierarchy-respecting way of moving just those two would do the trick, but I would prefer a solution that undoes the imho too broad except block in favour of only handling those not-quite-fatal BaseExceptions that are in our area of responsibility. (² I'll have to write a few tests to tell; gevent exception propagation is somewhat special.)
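For illustration only (not gunicorn's actual code), roughly the shape of handling I have in mind: catch just the not-quite-fatal BaseException subclasses we are responsible for and let everything else propagate. The handle_request/log names here are placeholders; greenlet.GreenletExit would only be added when running under gevent.

import logging

NOT_QUITE_FATAL = (SystemExit,)  # assumption: extend with greenlet.GreenletExit under gevent

def handle(handle_request, log: logging.Logger):
    try:
        handle_request()
    except NOT_QUITE_FATAL as exc:
        # expected during worker shutdown/abort: record quietly, keep unwinding
        log.debug("request aborted during worker shutdown: %r", exc)
        raise
    except Exception:
        # genuine request-handling errors keep error severity
        log.exception("Error handling request")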
Thank you, @pajod, for the detailed explanation. We will attempt to disable max_requests. Have a wonderful week!
P.S. Sorry for the logs 🙃
Hi @pajod, thank you for looking into this! I have the same issue here.
My environment setup is basically the same, but I was receiving a huge volume of greenlet.GreenletExit (~2k/h).
Is there any reason for exposing "no URI read" and greenlet.GreenletExit here?
Since greenlet.GreenletExit is claimed to be harmless in the official greenlet doc here, can we change self.log.exception("Error handling request (no URI read)") to self.log.info("Error handling request (no URI read)")? Please correct me if I'm wrong.
Thanks!
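In the meantime I am considering a workaround like this (sketch only, not an official gunicorn option): attach a logging filter to the gunicorn.error logger that drops this specific record. The matched message text is taken from the log lines quoted above.

import logging

class DropNoURIRead(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # keep every record except the noisy "no URI read" one
        return "Error handling request (no URI read)" not in record.getMessage()

# run inside each worker, e.g. from the post_worker_init hook in gunicorn.conf.py
def post_worker_init(worker):
    logging.getLogger("gunicorn.error").addFilter(DropNoURIRead())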
FYI, I am describing a related bug. I am not opening a new issue so as not to make too much noise, but I would be interested in comments from the maintainers: https://github.com/benoitc/gunicorn/commit/40232284934c32939c0e4e78caad1987c3773e08#r144628336
Hi, this might be completely unrelated, but I was seeing a similar error message:
app-1 | [2024-07-30 22:16:06 +0000] [9] [CRITICAL] WORKER TIMEOUT (pid:15)
app-1 | [2024-07-30 22:16:06 +0000] [15] [ERROR] Error handling request (no URI read)
app-1 | Traceback (most recent call last):
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/workers/sync.py", line 134, in handle
app-1 | req = next(parser)
app-1 | ^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/parser.py", line 42, in __next__
app-1 | self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 257, in __init__
app-1 | super().__init__(cfg, unreader, peer_addr)
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 60, in __init__
app-1 | unused = self.parse(self.unreader)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 269, in parse
app-1 | self.get_data(unreader, buf, stop=True)
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 260, in get_data
app-1 | data = unreader.read()
app-1 | ^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/unreader.py", line 37, in read
app-1 | d = self.chunk()
app-1 | ^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/unreader.py", line 64, in chunk
app-1 | return self.sock.recv(self.mxchunk)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/workers/base.py", line 203, in handle_abort
app-1 | sys.exit(1)
app-1 | SystemExit: 1
app-1 | [2024-07-30 22:16:06 +0000] [15] [INFO] Worker exiting (pid: 15)
app-1 | [2024-07-30 22:16:06 +0000] [17] [INFO] Booting worker with pid: 17
In my case, I was playing around with sendfile/X-Accel-Redirect and had a Django view like this:
from urllib.parse import quote

from django.http import HttpResponse

def login_required_media_view(request, path):
    response = HttpResponse(status=200)
    response["Content-Type"] = ""
    response["X-Accel-Redirect"] = "/media/" + quote(path)
    return response
nginx was not set up at this point, and opening a media URL led to the timeout ERROR above.
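A hedged sketch of the fallback I would expect to work while nginx is not in front yet: serve the file straight from Django in DEBUG and only emit X-Accel-Redirect in production. The settings names and the DEBUG switch are assumptions, not from the snippet above, and path sanitisation is omitted.

from pathlib import Path
from urllib.parse import quote

from django.conf import settings
from django.http import FileResponse, Http404, HttpResponse

def login_required_media_view(request, path):
    if settings.DEBUG:
        # no reverse proxy yet: let Django stream the file itself
        full_path = Path(settings.MEDIA_ROOT) / path
        if not full_path.is_file():
            raise Http404(path)
        return FileResponse(open(full_path, "rb"))
    # production: hand the file off to nginx via an internal /media/ location
    response = HttpResponse(status=200)
    response["Content-Type"] = ""
    response["X-Accel-Redirect"] = "/media/" + quote(path)
    return response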
Just in case this is useful for someone here.
@jeverling It is related - that is what the exception handling looks like without the added gevent complexity. Useful for others to find this issue, where they can learn that they can revert https://github.com/benoitc/gunicorn/commit/0b10cbab1d6368fcab2d5a7b6fe359a6cecc81a7 (to silence the error log again) or https://github.com/benoitc/gunicorn/commit/40232284934c32939c0e4e78caad1987c3773e08 (to return to the previous exception handling) before a proper bugfix release is issued. (At this time I do not suggest a downgrade.)
So, I'm a bit confused since I'm getting both these errors as well in my Django application:
[Critical] WORKER TIMEOUT
[ERROR] Error handling request (no URI read)
The Django application is a very simple render, with no slow operations in the background, and I can see it complete well before the timeout (checking the network tab in Chrome). It doesn't matter how long I set --timeout; it will eventually kill the worker.
Is my issue linked to this error handling or should I be looking elsewhere? The primary effect I'm getting is that:
Any suggestions?
I don't understand this issue. Gunicorn responds appropriately when queried directly with wget/curl, but fails when there's a webserver in between. Is that correct? How could such behavior be correct? My nginx configuration uses the default uwsgi_params from the example documentation: https://docs.nginx.com/nginx/admin-guide/web-server/app-gateway-uwsgi-django/
I can reproduce this issue when I use Chrome: after the default timeout it throws the error. With Postman or Safari I am not seeing it.
I am facing the same problem, except that the error shows up intermittently. I am running gunicorn in Docker without any proxy:
app-1 | [2024-09-08 05:52:36 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:7)
app-1 | [2024-09-08 06:52:36 +0100] [7] [ERROR] Error handling request (no URI read)
app-1 | Traceback (most recent call last):
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/workers/sync.py", line 134, in handle
app-1 | req = next(parser)
app-1 | ^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/parser.py", line 42, in __next__
app-1 | self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/message.py", line 257, in __init__
app-1 | super().__init__(cfg, unreader, peer_addr)
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/message.py", line 60, in __init__
app-1 | unused = self.parse(self.unreader)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/message.py", line 269, in parse
app-1 | self.get_data(unreader, buf, stop=True)
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/message.py", line 260, in get_data
app-1 | data = unreader.read()
app-1 | ^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/unreader.py", line 37, in read
app-1 | d = self.chunk()
app-1 | ^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/http/unreader.py", line 64, in chunk
app-1 | return self.sock.recv(self.mxchunk)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/gunicorn/workers/base.py", line 203, in handle_abort
app-1 | sys.exit(1)
app-1 | SystemExit: 1
app-1 | [2024-09-08 06:52:36 +0100] [7] [INFO] Worker exiting (pid: 7)
app-1 | [2024-09-08 05:52:36 +0000] [13] [INFO] Booting worker with pid: 13
As you can see, we get a timeout and suddenly an exception shows up. When this happens, there is a temporary Internal Server Error, but it goes away once another worker is up.
I have the same problem:
[2024-09-11 12:45:45 +0100] [26508] [CRITICAL] WORKER TIMEOUT (pid:26634)
[2024-09-11 11:45:45 +0000] [26634] [ERROR] Error handling request (no URI read)
Traceback (most recent call last):
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/workers/sync.py", line 133, in handle
req = next(parser)
^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/parser.py", line 41, in __next__
self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/message.py", line 259, in __init__
super().__init__(cfg, unreader, peer_addr)
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/message.py", line 60, in __init__
unused = self.parse(self.unreader)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/message.py", line 274, in parse
line, rbuf = self.read_line(unreader, buf, self.limit_request_line)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/message.py", line 326, in read_line
self.get_data(unreader, buf)
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/message.py", line 262, in get_data
data = unreader.read()
^^^^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/unreader.py", line 36, in read
d = self.chunk()
^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/http/unreader.py", line 63, in chunk
return self.sock.recv(self.mxchunk)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mb/Documents/venv/registo3.11/lib/python3.11/site-packages/gunicorn/workers/base.py", line 204, in handle_abort
sys.exit(1)
SystemExit: 1
[2024-09-11 11:45:45 +0000] [26634] [INFO] Worker exiting (pid: 26634)
Even with a substantial timeout:
gunicorn -w 2 -b 0.0.0.0:7000 --timeout 120 --log-level=debug wsgi:application
Software versions:
nginx version: nginx/1.22.1
Django==5.1
gunicorn==23.0.0
UPDATE:
With this setup on NGINX, the timeout stopped occurring:
(...)
location / {
proxy_pass http://0.0.0.0:7000;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header User-Agent $http_user_agent;
proxy_set_header REMOTE_ADDR $remote_addr;
}
(...)
For those in a position to review and/or test it: you are welcome to provide feedback on my suggested patch: https://github.com/benoitc/gunicorn/pull/3275
Has anyone identified what the actual originating issue is that is causing the worker timeout? I see the PR addresses the exception bubbling, but I've yet to find any threads where anyone has identified what the actual cause of this is.
I'm seeing the same issue: a worker will handle the request, the browser receives the data and the request ends there, but then the worker dies after the set timeout period and reboots. However, even after rebooting (or with multiple other workers that should be fine to take the request), the next immediate request fails with 500 Internal Server Error, and requests after that work fine.
Setting timeout to 0 appears to prevent the workers from dying (as they never hit the max timeout), but why are they held up after successful requests anyway?
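For reference, that workaround as a gunicorn.conf.py sketch; note that 0 disables the timeout mechanism entirely, so a genuinely hung sync worker would never be detected and restarted:

# gunicorn.conf.py - sketch only
timeout = 0  # disables the worker timeout watchdog; use with care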
Considering that using raw HTTP or the proxy protocol with nginx doesn't cause problems, while uwsgi_pass does, this problem seems confined to the uwsgi_pass (WSGI) path.
I seem to encounter the same problem after running any Celery task (@shared_task with func.delay).
core-1 | [2024-10-24 15:58:04 +0000] [7] [CRITICAL] WORKER TIMEOUT (pid:8)
core-1 | [2024-10-24 15:58:04 +0000] [8] [ERROR] Error handling request (no URI read)
core-1 | Traceback (most recent call last):
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/workers/sync.py", line 133, in handle
core-1 | req = next(parser)
core-1 | ^^^^^^^^^^^^
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/parser.py", line 41, in __next__
core-1 | self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
core-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 259, in __init__
core-1 | super().__init__(cfg, unreader, peer_addr)
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 60, in __init__
core-1 | unused = self.parse(self.unreader)
core-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 271, in parse
core-1 | self.get_data(unreader, buf, stop=True)
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/message.py", line 262, in get_data
core-1 | data = unreader.read()
core-1 | ^^^^^^^^^^^^^^^
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/unreader.py", line 36, in read
core-1 | d = self.chunk()
core-1 | ^^^^^^^^^^^^
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/http/unreader.py", line 63, in chunk
core-1 | return self.sock.recv(self.mxchunk)
core-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
core-1 | File "/usr/local/lib/python3.12/site-packages/gunicorn/workers/base.py", line 204, in handle_abort
core-1 | sys.exit(1)
core-1 | SystemExit: 1
core-1 | [2024-10-24 15:58:04 +0000] [8] [INFO] Worker exiting (pid: 8)
core-1 | [2024-10-24 15:58:05 +0000] [17] [INFO] Booting worker with pid: 17
gunicorn==23.0.0
Django==5.1.1
celery==5.4.0
django-celery-beat==2.7.0
django-celery-results==2.5.1
On gunicorn==20.1.0 I see a simpler output:
core-1 | [2024-10-24 16:02:31 +0000] [7] [CRITICAL] WORKER TIMEOUT (pid:8)
core-1 | [2024-10-24 16:02:31 +0000] [8] [INFO] Worker exiting (pid: 8)
core-1 | [2024-10-24 16:02:32 +0000] [10] [INFO] Booting worker with pid: 10
I also faced the problem, but in my case I could find out what was going on.
Very simple (after one day of figuring out why): my worker was trying to make an HTTPS request to another web server (also Gunicorn, by the way, but not really important) that was listening on plain HTTP (without TLS).
It was really difficult to "see" that, as I was debugging a running set of services without a comfortable way to see every environment variable I passed to each one, and I was migrating one service from HTTPS to HTTP (internal Docker containers, not exposed, but talking to each other).
I hope this helps someone.
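In case it helps anyone debugging a similar mismatch, a small sketch (host and port are placeholders) that checks whether an upstream actually speaks TLS before pointing an HTTPS client at it:

import socket
import ssl

def speaks_tls(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TLS handshake with host:port succeeds."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # probe only: skip name/cert validation
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

print(speaks_tls("internal-service", 8000))  # False if it only speaks plain HTTP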
I just tried #3275 and I am still getting these errors in sentry.
I just tried #3275 and I am still getting these errors in sentry.
In #3275 the message in the title of this issue is logged at debug, not error severity. If you still get errors, please quote them in that PR.
Has anyone identified what the actual originating issue is that is causing the worker timeout?
@ncpalmie I think I did, at least for my specific case (homelab). My frontend is nginx (nginx version: nginx/1.26.2) running in a FreeBSD jail, and from there I reverse-proxy to another FreeBSD jail running Gunicorn (gunicorn==23.0.0), serving Flask. The proxy connection is TLS1.3 only - the nginx side has a client cert and the gunicorn side has a server cert, and you would get SSL_ERROR_RX_CERTIFICATE_REQUIRED_ALERT from Firefox when trying to access the Gunicorn side directly, as only the client cert (nginx) and the server cert (Gunicorn) together can establish the connection. The whole CA infrastructure is set up using openssl, and I documented every single step when I created my test CA, the certs, the keys, nginx, gunicorn, etc.
My error logs looked exactly like those of @DvaMishkiLapa, @lordsarcastic and @nonprofitnerd when I did the following: after I managed to get the reverse-proxy connection TLS1.3-encrypted, I tried to find out how SNI works, as mentioned in the sni_callback() part of the Gunicorn example config file.
To avoid the error everyone had in their logs, I believe the following must be set up correctly (all of this assumes that your firewall is set up correctly, otherwise openssl gives you connect() permission denied errors):
- On the nginx side, proxy_ssl_name. This, combined with proxy_ssl_server_name on;, was necessary to ensure that nginx did not crash anymore when proxy_ssl_verify on; is set.
- When I tried to use sni_callback() in gunicorn.conf.py, I commented out keyfile and certfile, believing it would be enough that those are present anyway in the new_context.load_cert_chain() part of the sni_callback() function. This was when I got the error everyone had in my logs (restoring keyfile and certfile made the error disappear). Investigating the error brought me here. Note that I only configured my original Gunicorn server in sni_callback() (and not yet alternative ones, which is probably the purpose of all that SNI stuff); see the sketch below.
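For anyone trying the same thing, a minimal gunicorn.conf.py sketch of what ended up working, assuming the ssl_context(config, default_ssl_context_factory) hook that the example config's sni_callback() section refers to. All paths and hostnames are placeholders; the key point is that keyfile/certfile stay set even though the callback also calls load_cert_chain().

import ssl

# keep the base server certificate configured - commenting these out is what
# triggered the "no URI read" error for me
keyfile = "/path/to/gunicorn/server-key.pem"
certfile = "/path/to/gunicorn/server-cert.pem"

def ssl_context(config, default_ssl_context_factory):
    context = default_ssl_context_factory()

    def sni_callback(sslsocket, server_hostname, _ctx):
        # swap in a different certificate only for an alternative hostname
        if server_hostname == "alt.example.com":
            new_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
            new_context.load_cert_chain(
                certfile="/path/to/alt-cert.pem",
                keyfile="/path/to/alt-key.pem",
            )
            sslsocket.context = new_context

    context.sni_callback = sni_callback
    return context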
Also note that Gunicorn did not crash until the arrival of a TLS1.3 request. I could reproduce the error via nginx and also using either of the following openssl s_client commands that I used to verify that TLS1.3 works:
echo -n | openssl s_client -cert /full/path/to/nginx/side/cert.pem -key /full/path/to/nginx/side/key.pem -CAfile /full/path/fullchain.pem -connect correct-gunicorn-dns-name.com:443 -verify_return_error -servername correct-gunicorn-dns-name.com </dev/null | openssl x509 -noout -text | grep correct-gunicorn-dns-name.com
and
openssl s_client -cert /full/path/to/nginx/side/cert.pem -key /full/path/to/nginx/side/key.pem -CAfile /full/path/fullchain.pem -connect correct-gunicorn-dns-name.com:443 -verify_return_error -crlf -brief
The internet said that the first one is good for checking the SANs in the cert; the -servername option is SNI-related.
Using either of these commands on the Nginx side made Gunicorn crash and produced the error everyone had in the logs, just like Nginx did before.
Not sure if it matters, but I used EC instead of RSA for all my certs and used the P-256 curve in my openssl commands.
I'm fairly new to all this. I was a little surprised that both nginx and Gunicorn seem to behave so erratically when a TLS error of this kind occurs. I don't know what kind of parsing and the like is involved behind the scenes, though; this is probably very complicated.
Anyway, I hope someone will benefit from my observations.
After upgrading from version 20.1.0 to 22.0.0, we have encountered "Error handling request (no URI read)" errors.
/var/log/messages
gunicorn.conf.py
Env: