celery / kombu

Messaging library for Python.
http://kombu.readthedocs.org/
BSD 3-Clause "New" or "Revised" License

Celery worker deploy on Azure fails: didn't respond to HTTP pings on port: 80 #1917

Open · sglebs opened this issue 9 months ago

sglebs commented 9 months ago

If we could have some port that these PaaS platforms could ping to check for liveness, it would make things so much easier.

2024-02-05T18:59:27.316Z ERROR - Container staging-foobarcelery_0_3ed668ba for site staging-foobarcelery has exited, failing site start
2024-02-05T18:59:27.318Z ERROR - Container staging-foobarcelery_0_3ed668ba didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.

How can I run Celery in worker mode and still have a port that can be pinged for something? Even a "nothing here" reply would do.

Any tips on how to do this? Subclass? Or is it in the roadmap? Thanks!
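
One possible shape for this (a sketch on my part, not an official Celery feature or anything confirmed in this thread): hook the worker_ready signal and serve a bare HTTP 200 from a daemon thread inside the worker process, so the platform's ping has something to hit. Port 80 and the module layout below are assumptions; the module just has to be imported by the worker (e.g. alongside the tasks module).

# health_probe.py -- hypothetical sketch: answer liveness pings from inside the
# Celery worker by starting a tiny stdlib HTTP server once the worker is ready.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

from celery.signals import worker_ready

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A "nothing here" reply -- just enough to satisfy the HTTP ping.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep probe noise out of the worker log

@worker_ready.connect
def start_ping_server(sender=None, **kwargs):
    server = HTTPServer(("0.0.0.0", 80), PingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()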

sglebs commented 9 months ago

#1177 seems to be related, but I am using Redis, not RabbitMQ, and I am deploying to Azure (free credits!), not Kubernetes. The deploy uses this GitHub Action: https://github.com/Azure/webapps-deploy

I have not been able to disable the liveness check on the exposed port at deploy time, nor have I been able to bundle a tiny web server that just replies "go away". Docker Compose is experimental on Azure (Compose with celery+nginx might be a valid hack).

I am willing to write my own code that is basically celery-worker + FastAPI with a single dummy endpoint, but I am not sure how to achieve this.
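
For what it's worth, a rough sketch of that celery-worker + FastAPI shape (my own guess, not tested on Azure; tasks.celery_app and port 80 are placeholders for your project) would be to let uvicorn serve the dummy endpoint from a background thread while the Celery worker owns the main thread:

# health_worker.py -- hypothetical sketch: a Celery worker plus a single FastAPI
# endpoint for the platform's HTTP ping, in one container and one process.
import threading

import uvicorn
from fastapi import FastAPI

from tasks import celery_app  # placeholder: wherever your Celery() instance lives

api = FastAPI()

@api.get("/")
def ping():
    # A "nothing here" reply is enough for the liveness probe.
    return {"status": "ok"}

def serve_http():
    # Port 80 matches the port Azure pings; binding to it usually requires
    # the container to run as root.
    uvicorn.run(api, host="0.0.0.0", port=80, log_level="warning")

if __name__ == "__main__":
    threading.Thread(target=serve_http, daemon=True).start()
    # Equivalent to `celery -A tasks worker --loglevel=info -E`, but blocking
    # in this process so the container stays up.
    celery_app.worker_main(["worker", "--loglevel=info", "-E"])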

Help is welcome. Thanks.

sglebs commented 9 months ago

The current workaround I am trying requires two extra Python processes:

  1. supervisord
  2. http.server

Here's the supervisord.conf:

[supervisord]
# http://supervisord.org/configuration.html
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

[program:celery]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
command=celery worker --loglevel=info -E

[program:web]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
#https://realpython.com/python-http-server/
command=python3 -m http.server -b "::" -d ./app/static 80

iloveitaly commented 5 months ago

Running into the same issue—curious if anyone has a workaround?

sglebs commented 5 months ago

I ended up bundling Celery and Flower together, so there is an HTTP server listening (separate processes, managed by supervisord). It sucks, but it works, and as a bonus I get a web viewer for the Celery workers. Here is the config:

[supervisord]
# http://supervisord.org/configuration.html
nodaemon=true
logfile=/dev/stdout
logfile_maxbytes=0

[program:celery]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
command=newrelic-admin run-program celery worker --loglevel=info -E

[program:httpd]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
#https://realpython.com/python-http-server/
#command=python3 -m http.server -b "::" -d ./app/static 80
# --basic_auth is picked up by default from $FLOWER_BASIC_AUTH - https://flower.readthedocs.io/en/latest/config.html#environment-variables
command=celery flower --port=80
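
For completeness: this setup presumably also needs the flower (and, here, newrelic) packages installed in the image, and FLOWER_BASIC_AUTH set as an App Service application setting so Flower's --basic_auth picks it up. Because Flower listens on port 80 here, Azure's default HTTP ping succeeds without extra configuration; if Flower moved to another port, the WEBSITES_PORT setting would presumably have to follow.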