getsentry / self-hosted

Sentry, feature-complete and packaged up for low-volume deployments and proofs-of-concept
https://develop.sentry.dev/self-hosted/

uWSGI reports full listen queue #1573

Open ethanhs opened 2 years ago

ethanhs commented 2 years ago

Version

22.7.0.dev0

Steps to Reproduce

  1. Run ./install.sh
  2. docker compose up -d
  3. docker compose logs web

Expected Result

No uWSGI errors.

Actual Result

The web container logs contain

uWSGI listen queue of socket "127.0.0.1:46721" (fd: 3) full

It seems that uWSGI is reporting that the socket connection queue is full. This is weird for 2 reasons:

Edit: there is some more info on the listen queue in the uWSGI docs https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#the-listen-queue
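
For anyone who wants to experiment with a larger backlog in the meantime, here is a rough sketch of what that could look like in sentry/sentry.conf.py. It assumes the keys of SENTRY_WEB_OPTIONS are passed straight through to uWSGI as options; "listen" is the standard uWSGI option for the socket backlog size (default 100), and the value below is only an example, not a recommendation.

# Sketch only: raise uWSGI's listen queue size from its default of 100.
# The kernel limit (net.core.somaxconn) must be at least this large,
# otherwise uWSGI will complain at startup.
SENTRY_WEB_OPTIONS = {
    # ...keep the options already shipped in the default sentry.conf.py...
    "listen": 1024,  # example value, not a recommendation
}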

aminvakil commented 2 years ago

We receive 300 requests per second on average on our sentry instance and do not have a problem, probably because we're using an nginx installed on the host to reverse proxy to the sentry instance, so we're not hitting sentry with those 300 requests directly.

I guess it's reasonable to put a reverse proxy in front of any uWSGI instance instead of sending requests to it directly.

Do you think it suffices if we document this?

ethanhs commented 2 years ago

Yeah, I think documenting this is probably sufficient.

ValentinFrancois commented 2 years ago

@aminvakil sorry for the ignorance, could you elaborate a bit more on how the nginx reverse proxy helps in this case? Does it somehow enqueue the 300 connections and feed them to the uWSGI instance without hitting the 100-connection limit on the uWSGI side?

aminvakil commented 2 years ago

@ValentinFrancois Sorry for the late response.

Based on your configuration, nginx can keep a limited set of open connections to the backend you've configured and use those connections to pass the traffic.

Therefore there is no need to open a new connection to the backend for each new user connection.

ValentinFrancois commented 2 years ago

@aminvakil you're mentioning this mechanism, right? I think I understand better now.

[image]

aminvakil commented 2 years ago

@ValentinFrancois Exactly!

Nekuromento commented 1 year ago

Encountering this error on an M1 Mac host (I think this is caused by https://github.com/unbit/uwsgi/issues/2406). Has anyone managed to find a workaround?

KevinMind commented 11 months ago

I had this issue and what fixed it was enabling Rosetta emulation in Docker for Mac. I'm using an M2 MacBook Pro.

I'm not sure if this fixes it in all situations, as my underlying issue was related to file system changes not propagating to the container, but you could give it a try.

@Nekuromento

[image]

yanghua-ola commented 2 months ago

We receive 300 requests per second on average on our sentry instance and do not have a problem, probably because we're using an nginx installed on the host to reverse proxy to the sentry instance, so we're not hitting sentry with those 300 requests directly.

I guess it's reasonable to put a reverse proxy in front of any uWSGI instance instead of sending requests to it directly.

Do you think it suffices if we document this?

I faced the same issue. If I am not mistaken, the latest docker compose file includes an NGINX in front of the web service. Is that any different from running a reverse proxy on the host?

aminvakil commented 2 months ago

We receive 300 requests per second on average on our sentry instance and do not have a problem, probably because we're using an nginx installed on the host to reverse proxy to the sentry instance, so we're not hitting sentry with those 300 requests directly. I guess it's reasonable to put a reverse proxy in front of any uWSGI instance instead of sending requests to it directly. Do you think it suffices if we document this?

I faced the same issue. If I am not mistaken, the latest docker compose file includes an NGINX in front of the web service. Is that any different from running a reverse proxy on the host?

Yes, it's different: the bundled nginx does not keep open connections to the backend, it just passes them through.

aminvakil commented 1 month ago

I've just faced this error myself :)

web-1  | 2024-10-08T10:53:12.953684904Z worker 3 lifetime reached, it was running for 86401 second(s)
web-1  | 2024-10-08T10:59:41.334198402Z Tue Oct  8 10:59:41 2024 - *** uWSGI listen queue of socket "127.0.0.1:39041" (fd: 3) full !!! (101/100) ***

And the log was flooded with uWSGI listen queue of socket full messages afterwards.

I've searched, and it seems like this is an issue with uWSGI itself: it should respawn workers after a certain amount of time or number of requests (which is already configured in sentry/sentry.conf.py), but it seems it does not do so in certain situations. https://github.com/unbit/uwsgi/issues/2527#issuecomment-1493527398

cc @guoard
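
For reference, a hedged sketch of what the respawn-related knobs look like in sentry/sentry.conf.py. The option names are standard uWSGI ones; the worker-lifetime value lines up with the "running for 86401 second(s)" log line above, but treat the exact numbers as illustrative rather than the shipped defaults.

SENTRY_WEB_OPTIONS = {
    # ...other options from the shipped config...
    "max-requests": 100000,        # respawn a worker after this many requests (illustrative value)
    "max-requests-delta": 500,     # stagger respawns so workers don't all recycle at once (illustrative)
    "max-worker-lifetime": 86400,  # respawn a worker after 24 hours, matching the log line above
}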

guoard commented 1 month ago

In the comment linked above, we see the message: Waiting for the GIL.

Both workers and threads can be used to increase concurrency. Threads are generally more lightweight than workers and consume fewer resources. However, due to Python's Global Interpreter Lock (GIL), threads cannot always run truly in parallel.

One possible solution to address the deadlock issue could be to increase the number of workers and configure each worker to run with just one thread:

workers = 6
threads = 1

This way, you avoid shared memory between threads and may bypass GIL-related contention.
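
If someone wants to try this in self-hosted, the equivalent in sentry/sentry.conf.py would presumably look like the sketch below: the same standard uWSGI option names, expressed as SENTRY_WEB_OPTIONS keys, with the rest of the shipped options left as they are. The values are just the suggestion above, not a tested recommendation.

SENTRY_WEB_OPTIONS = {
    # ...keep the rest of the shipped options...
    "workers": 6,  # more worker processes for parallelism across CPU cores
    "threads": 1,  # one thread per worker to avoid GIL contention between threads
}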