@lzgabel do you have an example to reproduce the issue?
Hi @mbierma. After investigating, we found it is due to the following code in our system:
@celeryd_init.connect
def recover_job(sender=None, conf=None, **kwargs) -> None:
    i = app.control.inspect()
    running_list = []
    for worker_list in i.active().values():
        for task_item in worker_list:
            name = task_item.get('name')
            # do something
            args = task_item.get('args')
            job_id = args[1]
            if job_id:
                running_list.append(job_id)
    logger.info(f'lost job : {running_list}')
The worker then blocks here, in _brpop_read(), because of the change introduced by PR #1733.
hi @lzgabel, I think this is different from my issue. My issue is not with the fork pool workers but with the Celery workers themselves. I can have 2 Celery workers when I'm using RabbitMQ as a broker, but when I use Kafka as a broker, only one Celery worker is working.
We have exactly the same problem. Downgrading kombu to 5.3.1 solves it.
@mfaw is this still the case? If so, then maybe I'll note it as a limitation in my Celery documentation PR: https://github.com/celery/celery/pull/8935
It seems that if you specify another queue, like add, instead of the default celery queue, then the task is routed to a new topic called add. If you do that, you can also create a second worker listening on the add queue, and then you can have two workers working. It is unfortunate that you can't have two workers listening to the same queue, though. That definitely seems like a regression. I wonder if we can construct a test case for this.
I wonder if we can use partitions to allow for multiple workers.
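For anyone who wants to try this, here is a minimal sketch of the setup described above. The tasks module name, the add task, and the localhost broker address are assumptions for illustration, and confluentkafka:// is kombu's Kafka transport URL scheme as I understand it:

from celery import Celery

# Assumed broker URL for kombu's Kafka (confluent-kafka) transport.
app = Celery('tasks', broker='confluentkafka://localhost:9092')

# Route the add task away from the default `celery` queue; with the
# Kafka transport that queue shows up as a topic also named `add`.
app.conf.task_routes = {'tasks.add': {'queue': 'add'}}


@app.task
def add(x, y):
    return x + y

One worker can then consume the default queue and a second worker the add queue, for example with celery -A tasks worker -Q celery -n worker1@%h and celery -A tasks worker -Q add -n worker2@%h.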
Is there any plan to fix the issue reported by @lzgabel or is there any known workaround? It's still preventing us from upgrading kombu and celery beyond 5.3.1.
I don't think that there is a maintainer for Kafka at the moment.
The original issue reported by @lzgabel is not for Kafka but using Redis as broker, which we are also using.
@lzgabel @ojnas I am unable to reproduce the issue using Celery 5.2.7, Kombu 5.3.2, Redis-Py 5.0.7, and Python 3.9.0. What version of Redis are you using? I am using:
$ redis-server --version
Redis server v=6.0.16 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=a3fdef44459b3ad6
[2024-07-28 10:38:49,622: INFO/ForkPoolWorker-1] Task tasks.add[8adf78ef-53ff-4845-80d3-2582089060fb] succeeded in 10.009252441999706s: 8
[2024-07-28 10:38:59,371: INFO/ForkPoolWorker-2] Task tasks.add[b59745df-bef0-48be-9667-691dc1893aa9] succeeded in 10.008264554000107s: 8
Additionally, have you tried the latest versions of Kombu and Celery?
@thuibr I just tried with the latest versions, Celery 5.4.0 and Kombu 5.4.2, and the issue still persists for us. Redis version is 7.4.0.
@thuibr If you want to reproduce the issue, as @lzgabel mentioned in their comment here https://github.com/celery/kombu/issues/1785#issuecomment-1713973850, it seems related to app.control.inspect(). When this has been used at any time, multiple workers don't work.
These issues seem to be related: https://github.com/celery/celery/issues/8671 and https://github.com/celery/celery/issues/8874
@ojnas I was able to produce a different issue, but I would think that it has the same root cause. A Warm Shutdown hangs with the following code:
import logging

from celery import Celery
from celery.signals import celeryd_init

app = Celery('tasks', broker='redis://localhost:6379/0')
logger = logging.getLogger(__name__)


@app.task
def add(x, y):
    return x + y


@celeryd_init.connect
def recover_job(sender=None, conf=None, **kwargs) -> None:
    i = app.control.inspect()
    running_list = []
    if i.active():
        for worker_list in i.active().values():
            for task_item in worker_list:
                name = task_item.get('name')
                args = task_item.get('args')
                job_id = args[1]
                if job_id:
                    running_list.append(job_id)
    logger.info(f'lost job : {running_list}')
Adding a breakpoint here in kombu/transport/redis.py for some reason fixes the issue:
def close(self):
    self._closing = True
    if self._in_poll:
        try:
            breakpoint()
            self._brpop_read()
        except Empty:
            pass
    if not self.closed:
        # remove from channel poller.
        self.connection.cycle.discard(self)

        # delete fanout bindings
        client = self.__dict__.get('client')  # only if property cached
        if client is not None:
            for queue in self._fanout_queues:
                if queue in self.auto_delete_queues:
                    self.queue_delete(queue, client=client)
    self._disconnect_pools()
    self._close_clients()
    super().close()
I also notice that we get stuck here:
worker: Warm shutdown (MainProcess)
[2024-10-15 06:33:14,725: DEBUG/MainProcess] | Worker: Closing Hub...
[2024-10-15 06:33:14,725: DEBUG/MainProcess] | Worker: Closing Pool...
[2024-10-15 06:33:14,725: DEBUG/MainProcess] | Worker: Closing Consumer...
[2024-10-15 06:33:14,726: DEBUG/MainProcess] | Worker: Stopping Consumer...
[2024-10-15 06:33:14,726: DEBUG/MainProcess] | Consumer: Closing Connection...
[2024-10-15 06:33:14,726: DEBUG/MainProcess] | Consumer: Closing Events...
[2024-10-15 06:33:14,726: DEBUG/MainProcess] | Consumer: Closing Mingle...
[2024-10-15 06:33:14,726: DEBUG/MainProcess] | Consumer: Closing Gossip...
[2024-10-15 06:33:14,727: DEBUG/MainProcess] | Consumer: Closing Tasks...
[2024-10-15 06:33:14,727: DEBUG/MainProcess] | Consumer: Closing Control...
[2024-10-15 06:33:14,727: DEBUG/MainProcess] | Consumer: Closing Heart...
[2024-10-15 06:33:14,727: DEBUG/MainProcess] | Consumer: Closing event loop...
[2024-10-15 06:33:14,728: DEBUG/MainProcess] | Consumer: Stopping event loop...
[2024-10-15 06:33:14,728: DEBUG/MainProcess] | Consumer: Stopping Heart...
[2024-10-15 06:33:14,730: DEBUG/MainProcess] | Consumer: Stopping Control...
[2024-10-15 06:33:14,734: DEBUG/MainProcess] | Consumer: Stopping Tasks...
[2024-10-15 06:33:14,734: DEBUG/MainProcess] Canceling task consumer...
[2024-10-15 06:33:14,734: DEBUG/MainProcess] | Consumer: Stopping Gossip...
[2024-10-15 06:33:14,738: DEBUG/MainProcess] | Consumer: Stopping Mingle...
[2024-10-15 06:33:14,738: DEBUG/MainProcess] | Consumer: Stopping Events...
[2024-10-15 06:33:14,738: DEBUG/MainProcess] | Consumer: Stopping Connection...
[2024-10-15 06:33:14,739: DEBUG/MainProcess] | Worker: Stopping Pool...
but we make it past that with the breakpoint:
worker: Warm shutdown (MainProcess)
[2024-10-15 06:37:19,521: DEBUG/MainProcess] | Worker: Closing Hub...
[2024-10-15 06:37:19,522: DEBUG/MainProcess] | Worker: Closing Pool...
[2024-10-15 06:37:19,522: DEBUG/MainProcess] | Worker: Closing Consumer...
[2024-10-15 06:37:19,522: DEBUG/MainProcess] | Worker: Stopping Consumer...
[2024-10-15 06:37:19,522: DEBUG/MainProcess] | Consumer: Closing Connection...
[2024-10-15 06:37:19,523: DEBUG/MainProcess] | Consumer: Closing Events...
[2024-10-15 06:37:19,523: DEBUG/MainProcess] | Consumer: Closing Mingle...
[2024-10-15 06:37:19,523: DEBUG/MainProcess] | Consumer: Closing Gossip...
[2024-10-15 06:37:19,523: DEBUG/MainProcess] | Consumer: Closing Tasks...
[2024-10-15 06:37:19,523: DEBUG/MainProcess] | Consumer: Closing Control...
[2024-10-15 06:37:19,523: DEBUG/MainProcess] | Consumer: Closing Heart...
[2024-10-15 06:37:19,524: DEBUG/MainProcess] | Consumer: Closing event loop...
[2024-10-15 06:37:19,524: DEBUG/MainProcess] | Consumer: Stopping event loop...
[2024-10-15 06:37:19,524: DEBUG/MainProcess] | Consumer: Stopping Heart...
[2024-10-15 06:37:19,526: DEBUG/MainProcess] | Consumer: Stopping Control...
[2024-10-15 06:37:19,530: DEBUG/MainProcess] | Consumer: Stopping Tasks...
[2024-10-15 06:37:19,530: DEBUG/MainProcess] Canceling task consumer...
[2024-10-15 06:37:19,531: DEBUG/MainProcess] | Consumer: Stopping Gossip...
[2024-10-15 06:37:19,534: DEBUG/MainProcess] | Consumer: Stopping Mingle...
[2024-10-15 06:37:19,535: DEBUG/MainProcess] | Consumer: Stopping Events...
[2024-10-15 06:37:19,535: DEBUG/MainProcess] | Consumer: Stopping Connection...
[2024-10-15 06:37:19,535: DEBUG/MainProcess] | Worker: Stopping Pool...
[2024-10-15 06:37:20,544: DEBUG/MainProcess] | Worker: Stopping Hub...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] | Consumer: Shutdown Heart...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] | Consumer: Shutdown Control...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] | Consumer: Shutdown Tasks...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] Canceling task consumer...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] Closing consumer channel...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] | Consumer: Shutdown Gossip...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] | Consumer: Shutdown Events...
[2024-10-15 06:37:20,545: DEBUG/MainProcess] | Consumer: Shutdown Connection...
[2024-10-15 06:37:20,548: WARNING/MainProcess] > /home/tom/code/kombu/kombu/transport/redis.py(1093)close()
-> self._brpop_read()
[2024-10-15 06:37:20,548: WARNING/MainProcess] (Pdb)
c
[2024-10-15 06:37:22,115: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
I am wondering if disconnecting all Redis connections after an inspect operation would mitigate this issue, but for some reason the Redis connections are not disconnected when I call client.connection_pool.disconnect():
@celeryd_init.connect
def recover_job(sender=None, conf=None, **kwargs) -> None:
    # breakpoint()
    i = app.control.inspect()
    running_list = []
    # breakpoint()

    # Get number of active_connections from redis
    client = app.connection().channel().client
    info = client.info()
    active_connections = info.get("connected_clients", 0)
    # breakpoint()
    i.ping()

    # Disconnect all redis connections
    client.connection_pool.disconnect()
    info = client.info()
    active_connections = info.get("connected_clients", 0)
    breakpoint()
It appears that more than one of the ForkPoolWorker workers gets stuck blocking on BRPOP. Setting the following resolves the issue:
app.conf.broker_transport_options = {'socket_timeout': 5}
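For reference, here is a minimal sketch of that workaround applied to the Redis reproducer from earlier in the thread; the 5-second timeout is just the value quoted above, not a tuned recommendation:

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

# Workaround: give the Redis transport a socket timeout so a BRPOP
# that never returns cannot block a worker (or its shutdown) forever.
app.conf.broker_transport_options = {'socket_timeout': 5}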
👋 Hi All. FYI, since 5.3.2 was released, we found that multiple Celery fork pool workers don't work, so we rolled back to 5.3.1 and everything returned to normal.
Version: 5.3.2
Version: 5.3.1
🤔 We compared the kombu version changes, and when we reverted this PR (#1733) in version 5.3.2, all workers worked normally. cc @auvipy @Nusnus @mfaw @mbierma.