Closed: hidbpark closed this issue 3 months ago
@hidbpark I'm trying to establish whether a recent issue we had is related to what you've reported here. We use retries, and as soon as we upgraded to 5.3.6 our RabbitMQ queues filled up very quickly; we rolled back to 5.3.5 and everything stabilised again. However, looking at the diff between the versions, we can't see anything that could have caused this.
What did you see on your side that made you raise this issue?
Thank you for your response. The only difference between what we experienced and what you've described is the broker (RabbitMQ vs. Redis); otherwise the situation is the same. The queue fills up immediately, with no delay between retries, so the retried job runs right away. Likewise, after rolling back to version 5.3.5 everything stabilized and worked as intended. Unfortunately, we can't compare the differences in the kombu code between versions yet because we are focused on implementing service features. If we can afford the time, we would like to compare them.
This behaves normally with Celery 5.4.0.
There is a problem where calling retry() inside a Celery task does not wait for the default_retry_delay value. With multiple pre-forked workers, the retried task appears to be picked up immediately by another worker. With kombu 5.3.5, the retried task was only executed after waiting for the default_retry_delay (the expected behavior). Was anything changed in kombu 5.3.6 that could relate to this?
django: 4.2.11, redis: 5.0.3, celery: 5.3.6, kombu: 5.3.6
```python
@shared_task(bind=True, default_retry_delay=5, max_retries=3)
def my_task(self, some_data):
    try:
        ...  # do something
    except Exception as exc:
        raise self.retry(exc=exc)  # should wait default_retry_delay (5 s) before re-running
    else:
        return
```
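For context on the expected timing (a sketch of the documented behavior, not Celery internals): when `retry()` is called without an explicit `countdown`, Celery uses `default_retry_delay` as the countdown, so the retried message should be scheduled at roughly now + `default_retry_delay` seconds. The bug described above is the task running immediately instead. The helper name and timestamp below are purely illustrative:

```python
from datetime import datetime, timedelta, timezone

def expected_retry_eta(default_retry_delay, now=None):
    """Earliest time a retried task should run: now + default_retry_delay seconds.

    Mirrors the documented behavior of Task.retry() when no explicit
    countdown/eta is passed.
    """
    now = now or datetime.now(timezone.utc)
    return now + timedelta(seconds=default_retry_delay)

# A task retried at 12:00:00 with default_retry_delay=5
# should not run before 12:00:05.
start = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
eta = expected_retry_eta(5, start)
```

With the regression described in this thread, the observed execution time is immediate rather than at the computed ETA.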