hp685 opened this issue 7 years ago
Additionally, I'm using the following package versions: amqp (2.1.4), billiard (3.5.0.2), celery (4.0.2), hiredis (0.2.0), redis (2.10.5), redis-collections (0.1.7).
Are there any ideas on possible solutions? One idea is to try increasing the socket timeout. Another idea I had was to catch the specific error and re-attempt the task.
What would be recommended by celery contributors/developers? Any additional diagnostics that could help? Thanks!
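For illustration, a minimal sketch of the "catch the error and re-attempt" idea; the helper name, timeout, and retry count here are made up, and it only relies on AsyncResult.get() and celery.exceptions.TimeoutError:
from celery.exceptions import TimeoutError as CeleryTimeoutError

def get_with_retries(async_result, timeout=60, attempts=3):
    # re-attempt a blocking get() a few times before giving up (sketch only)
    for attempt in range(attempts):
        try:
            return async_result.get(timeout=timeout)
        except CeleryTimeoutError:
            if attempt == attempts - 1:
                raise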
What's the status of this issue with the latest release of Celery?
Please report whether it still exists.
Yes, this is still an issue...
I'm now using the amqp backend, but I can switch back to Redis and report back whether the issue still exists with Celery 4.2.1.
Hi, I think I'm facing the same issue. I can open a new one if you think it's different.
I found the issue while testing a RedisLabs Cloud instance as both broker and backend. I intentionally connected over the public Internet, with a ping of around 15 ms to the AWS datacenter and some packet loss.
Under these conditions, I observe that sometimes the result.get(timeout=X) call times out even though I'm able to query the Redis database and retrieve the key while the get() is blocked.
from celery import Celery

app = Celery('testapp')
app.conf.broker_url = 'redis:///0'
app.conf.result_backend = 'redis:///1'

@app.task
def hello():
    return 'hello world'
sudo tc qdisc add dev lo root netem loss 20% delay 10ms
>>> from testapp import *
>>> for i in range(1000):
... resp = hello.delay()
... resp.get(timeout=60)
...
'hello world'
'hello world'
'hello world'
'hello world'
Traceback (most recent call last):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/async.py", line 255, in _wait_for_pending
on_interval=on_interval):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/async.py", line 54, in drain_events_until
raise socket.timeout()
socket.timeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/result.py", line 224, in get
on_message=on_message,
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/async.py", line 188, in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/async.py", line 259, in _wait_for_pending
raise TimeoutError('The operation timed out.')
celery.exceptions.TimeoutError: The operation timed out.
>>> resp.get(timeout=1)
'hello world'
>>> resp.task_id
'559cac82-192f-4cdb-a7fb-02fda758baf1'
[2018-11-22 12:28:12,279: INFO/MainProcess] Received task: testapp.hello[559cac82-192f-4cdb-a7fb-02fda758baf1]
[2018-11-22 12:28:12,302: INFO/ForkPoolWorker-8] Task testapp.hello[559cac82-192f-4cdb-a7fb-02fda758baf1] succeeded in 0.021271978001095704s: 'hello world'
These lines appear in the worker log, and roughly 60 s later the calling code times out.
The database can be queried at any time after the log lines appear in the worker log.
$ redis-cli -n 1 GET celery-task-meta-559cac82-192f-4cdb-a7fb-02fda758baf1
"{\"status\": \"SUCCESS\", \"result\": \"hello world\", \"traceback\": null, \"children\": [], \"task_id\": \"559cac82-192f-4cdb-a7fb-02fda758baf1\"}"
~Even after a Python restart, Celery isn't able to get the result:~
EDIT: This isn't true. I was trying different backend URLs and was connecting to the wrong one. The rest still holds.
amqp==2.3.2
backcall==0.1.0
billiard==3.5.0.4
celery==4.2.1
decorator==4.3.0
ipython==7.1.1
ipython-genutils==0.2.0
jedi==0.13.1
kombu==4.2.1
parso==0.3.1
pexpect==4.6.0
pickleshare==0.7.5
prompt-toolkit==2.0.7
ptyprocess==0.6.0
Pygments==2.2.0
pytz==2018.7
redis==2.10.6
six==1.11.0
traitlets==4.3.2
vine==1.1.4
wcwidth==0.1.7
Please try celery and kombu from the master branch and report back here.
Thank you for your reply. I'll do it (I plan to on Tuesday).
Install all the Celery dependencies along with Celery from master. If the issue still persists, we will reopen it.
Hi, I can confirm the issue with the master branch and 2% simulated packet loss.
$ sudo tc qdisc add dev lo root netem loss 2% delay 10ms
$ python3 -u tester.py
.............................................................................Traceback (most recent call last):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 255, in _wait_for_pending
on_interval=on_interval):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 54, in drain_events_until
raise socket.timeout()
socket.timeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tester.py", line 10, in <module>
resp.get(timeout=60)
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/result.py", line 224, in get
on_message=on_message,
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 188, in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 259, in _wait_for_pending
raise TimeoutError('The operation timed out.')
celery.exceptions.TimeoutError: The operation timed out.
hello world
a9c86651-cdd8-48ba-866a-927f3bcdf783
$ pip3 freeze
amqp==2.3.2
billiard==3.5.0.4
celery==4.2.0
kombu==4.2.1
pytz==2018.7
redis==3.0.1
vine==1.1.4
$ sudo tc qdisc del dev lo root netem
$ python -u tester.py
........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................hello world
a4a5bee1-c333-422e-a3d0-20a4a79f9ee0
$ cat tester.py
import traceback
import sys

from testapp import *
from celery.exceptions import TimeoutError

try:
    for i in range(1000):
        resp = hello.delay()
        resp.get(timeout=60)
        print('.', end='')
        sys.stdout.flush()
except TimeoutError:
    traceback.print_exc(file=sys.stdout)
    print(resp.get(timeout=1))
    print(resp.task_id)
$ celery -A testapp inspect report
-> celery@nb-krab: OK
software -> celery:4.2.0 (windowlicker) kombu:4.2.1 py:3.6.7
billiard:3.5.0.4 redis:3.0.1
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis:///1
broker_url: 'redis://localhost:6379/0'
result_backend: 'redis:///1'
include:
('testapp', 'celery.app.builtins')
$ cat Pipfile.lock
{
    "_meta": {
        "hash": {
            "sha256": "8534f027020a820a831a2e4dd01d00b0ed03a39ed31258208ce2e55384583eeb"
        },
        "pipfile-spec": 6,
        "requires": {
            "python_version": "3.6"
        },
        "sources": [
            {
                "name": "pypi",
                "url": "https://pypi.org/simple",
                "verify_ssl": true
            }
        ]
    },
    "default": {
        "amqp": {
            "file": "https://github.com/celery/py-amqp/zipball/master"
        },
        "billiard": {
            "file": "https://github.com/celery/billiard/zipball/master"
        },
        "celery": {
            "file": "https://github.com/celery/celery/zipball/master"
        },
        "kombu": {
            "file": "https://github.com/celery/kombu/zipball/master"
        },
        "redis": {
            "hashes": [
                "sha256:2100750629beff143b6a200a2ea8e719fcf26420adabb81402895e144c5083cf",
                "sha256:8e0bdd2de02e829b6225b25646f9fb9daffea99a252610d040409a6738541f0a"
            ],
            "index": "pypi",
            "version": "==3.0.1"
        },
        "vine": {
            "file": "https://github.com/celery/vine/zipball/master"
        }
    },
    "develop": {}
}
A more complete version of the test script that shows the result is indeed already stored.
import queue
import traceback
import sys
from multiprocessing import Process, Queue

q = Queue()

def side_process(q):
    from testapp import app
    while True:
        task_id = q.get(timeout=60)  # started task
        try:
            assert 'finished' == q.get(timeout=20)
        except queue.Empty:
            result = app.AsyncResult(task_id)
            print('From another process:', task_id, result.get())
            return

p = Process(target=side_process, args=(q,))
p.start()

from testapp import *
from celery.exceptions import TimeoutError

try:
    for i in range(1000):
        resp = hello.delay()
        q.put(resp.task_id)
        resp.get(timeout=60)
        q.put('finished')
        print('.', end='')
        sys.stdout.flush()
except TimeoutError:
    traceback.print_exc(file=sys.stdout)
    print(resp.get(timeout=1))
    print(resp.task_id)
$ python -u tester.py
.....................From another process: 998f5853-0a5d-445f-b5a7-a62ba57f4fb8 hello world
Traceback (most recent call last):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 255, in _wait_for_pending
on_interval=on_interval):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 54, in drain_events_until
raise socket.timeout()
socket.timeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tester.py", line 31, in <module>
q.put('finished')
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/result.py", line 224, in get
on_message=on_message,
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 188, in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
File "/home/krab/.virtualenvs/celery-skKHJbkz/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 259, in _wait_for_pending
raise TimeoutError('The operation timed out.')
celery.exceptions.TimeoutError: The operation timed out.
hello world
998f5853-0a5d-445f-b5a7-a62ba57f4fb8
@auvipy Thank you for reopening the issue. If you have any pointers what to investigate, I could have a look, although I don't see much into Celery internals.
I see there is still the label "Feedback Needed". Is there anything that would help you reproduce the issue?
Celery uses billiard, which is a fork of multiprocessing. Could you try that instead? Though I'm not sure it will solve the problem. I don't have enough bandwidth to dig into the issue in depth right now, sorry. If you can dig into it further, that would be really appreciated.
Any update on the above timeout issue?
Having the same issue (intermittent TimeoutError) under somewhat heavy loads (~200 tasks, 25 workers) whenever I use concurrent gets (2 or more).
billiard 3.6.1.0, celery 4.3.0, kombu 4.6.3, redis 3.3.8
What about celery==4.4.0rc3?
Just tested 4.4.0rc3. Yes, the issue is still there.
I seem to be running into a similar problem. We found that the task had already executed successfully, but the result was never returned. The error log is as follows:
File "/data/mobi-backend-mtm/submodule/mobi-rpc/mrpc/util/decorator.py", line 7, in wrapper
    return func(req_payload, *args, **kw)
File "/data/mobi-backend-mtm/src/mtm/service/order.py", line 330, in create_order
    scope=scope)
File "/data/mobi-backend-mtm/src/mtm/modules/order.py", line 558, in create_order
    order.sn = dal.otc_freeze(order.seller_id, order.crypto_currency_code, amount_in, amount_out, order.id)
File "/data/mobi-backend-mtm/src/mtm/modules/dal.py", line 45, in otc_freeze
    resp = dal.otc_freeze.apply_async([payer_customer_id, currency, amount_in, amount_out], kwargs={'extra_data': extra_data}).get(GET_DAL_OPTS)
File "/data/mobi-backend-mtm/src/mtm/celery/__init__.py", line 29, in get
    return super().get(*args, **kwargs)
File "/data/mobi-backend-mtm/venv/local/lib/python3.6/site-packages/celery/result.py", line 169, in get
    no_ack=no_ack,
File "/data/mobi-backend-mtm/venv/local/lib/python3.6/site-packages/celery/backends/base.py", line 238, in wait_for
    raise TimeoutError('The operation timed out.')
celery.exceptions.TimeoutError: The operation timed out.
For now, I am forced to use a mutex to allow only one get() at a time, which is a temporary fix for my application. It is the concurrent access that is the problem.
@frogger72 How do you use a mutex to allow only one get() at a time?
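Not an answer from the original commenter, but a minimal sketch of what such a workaround could look like, assuming all the concurrent get() calls happen in threads of a single process (a cross-process lock would be needed otherwise):
import threading

_get_lock = threading.Lock()

def locked_get(async_result, timeout=60):
    # serialize result fetching so only one get() drains backend events at a time
    with _get_lock:
        return async_result.get(timeout=timeout)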
Can you try celery==4.4.0rc4 and report again?
Just started looking into this issue as well -- I believe I am experiencing the same with celery 4.4.1 and Redis. I'm brand new to Celery, so perhaps something is wrong on my end.
However in the celery debug logs, I can see that the task is in fact succeeding and has acquired the response I am expecting:
[2020-03-17 23:36:27,781: INFO/ForkPoolWorker-4] Task celery.starmap[59a9a6ce-2f11-46b6-9b59-c2e15916c8a4] succeeded in 1.7543270770693198s: [{'InvocationsErrorCode': 0, 'InvocationsResult': [{...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}]}]
...but .get() never returns anything and just seems to lock up; it doesn't even time out.
Can you try celery==4.4.2?
Hi, I'm facing the same issue but with rpc as my backend and RabbitMQ as my broker, using:
software -> celery:5.0.5 (singularity) kombu:5.0.2 py:3.6.12
billiard:3.6.3.0 py-amqp:5.0.2
platform -> system:Linux arch:64bit
kernel version:4.19.112+ imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:rpc:///
Like others above, I'm seeing the task succeed in the logs but get() never returns. After setting a timeout, I see:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 267, in _wait_for_pending
on_interval=on_interval):
File "/usr/local/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 52, in drain_events_until
raise socket.timeout()
socket.timeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/deimos/app/itemizer.py", line 132, in perform_scrapes
scraper_results = jobs.apply_async().join_native(timeout=10)
File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 799, in join_native
on_message, on_interval):
File "/usr/local/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 150, in iter_native
for _ in self._wait_for_pending(result, no_ack=no_ack, **kwargs):
File "/usr/local/lib/python3.6/site-packages/celery/backends/asynchronous.py", line 271, in _wait_for_pending
raise TimeoutError('The operation timed out.')
celery.exceptions.TimeoutError: The operation timed out.
This happens in about 1 in 5 tasks; the rest succeed and return without issue.
I am experiencing this as well. Is a fix for 5.2 realistic (it has been moved a couple of times so far)? Does anyone have a workaround?
contributions are welcome
For me, Redis works fine as a production backend, but we do not have an enormous farm, only a few tasks running every few minutes.
However, this problem does occur when I try to run the pytest celery_worker fixture in combination with Redis, using the simplest test examples taken from https://docs.celeryq.dev/en/stable/userguide/testing.html
../../../../.pyenv/versions/3.10.6/envs/bla/lib/python3.10/site-packages/celery/result.py:224: in get
return self.backend.wait_for_pending(
../../../../.pyenv/versions/3.10.6/envs/bla/lib/python3.10/site-packages/celery/backends/asynchronous.py:221: in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
except socket.timeout:
> raise TimeoutError('The operation timed out.')
E celery.exceptions.TimeoutError: The operation timed out.
software -> celery:5.2.7 (dawn-chorus) kombu:5.2.4 py:3.10.6
billiard:3.6.4.0 py-amqp:5.1.1
platform -> system:Darwin arch:64bit
kernel version:21.6.0 imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
deprecated_settings: None
Also tried the latest dev version (5.3.0b), but that threw a different error (assignment before initialisation), so I could not test it further.
I'm facing the same issue. In production Celery and Redis work fine, but I can't set up tests to check whether my task pipeline works as expected. I made a simple project reproducing the issue here.
P.S. I spent several days trying to figure out how to set up Celery and pytest and it still doesn't work :( I think the examples in the docs should be extended with clearer setups.
@Nusnus do you remember fixing anything like this recently?
Unfortunately not exactly like this, but I will keep an open eye on this issue.
I had a similar error recently, but it was a bug in my own code, so it made sense (I had a race condition between revoking a task and get()-ing it, which caused a TimeoutError randomly; synchronizing it fixed it 100%, so the task was correctly recognized as revoked and the error matched, instead of just a timeout).
P.S. This made the most sense to me from the whole discussion:
For now, I am forced to use a mutex to allow only one get() at a time, which is a temporary fix for my application. It is the concurrent access that is the problem.
I seem to have this same problem right now, was there any solution to this?
@reedjones No, from what I can remember it was a Redis bug that has been unsolved for years.
I went with the solution of running Celery synchronously in tests.
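For reference, a minimal sketch of that approach (not the commenter's exact setup); task_always_eager and task_eager_propagates are real Celery settings that make get() return the inline result instead of waiting on the result backend:
from celery import Celery

app = Celery('testapp')
app.conf.update(
    task_always_eager=True,       # run tasks inline in the calling process during tests
    task_eager_propagates=True,   # re-raise task exceptions instead of storing them
)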
@auvipy I have the same issue as well. Someone said using a mutex fixes this issue (comment), but isn't the get() method process-safe?
I'm using Celery with Django (10 gunicorn workers). I also have the issue described in https://github.com/celery/celery/discussions/7028; if that is the cause, it might also be the reason for this discussion.
I just discovered this with our prod. workers. Any fix yet?
Same problem, using celery 5.4.0rc1 with Django and Redis as a broker. From pod logs, I can see that these failures correlate with how many requests were made in that period: more requests, more timeouts.
Traceback
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/sentry_sdk/integrations/django/asgi.py", line 157, in sentry_wrapped_callback
return await callback(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/ninja/operation.py", line 390, in _async_view
return await cast(AsyncOperation, operation).run(request, *a, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/ninja/operation.py", line 268, in run
return self.api.on_exception(request, e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/config/ninja_api.py", line 167, in handle_exception
return handler(request, exc)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/ninja/errors.py", line 104, in _default_exception
raise exc # let django deal with it
^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/ninja/operation.py", line 265, in run
result = await self.view_func(request, **values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/user_backend/core/api/routes.py", line 1094, in retrieve_promised_caption
caption = await to_thread.run_sync(functools.partial(async_result.get, timeout=timeout))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/celery/result.py", line 251, in get
return self.backend.wait_for_pending(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/celery/backends/asynchronous.py", line 221, in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
File "/home/user/.pyenv/versions/3.12.1/lib/python3.12/site-packages/celery/backends/asynchronous.py", line 293, in _wait_for_pending
raise TimeoutError('The operation timed out.')
celery.exceptions.TimeoutError: The operation timed out.
celery report
software -> celery:5.4.0rc1 (opalescent) kombu:5.3.6 py:3.12.1
billiard:4.2.0 redis:5.1.0b3
platform -> system:Linux arch:64bit
kernel version:6.1.58+ imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:redis results:disabled
pip freeze
aioboto3==12.3.0
aiobotocore==2.11.2
aiodns==3.1.1
aiofiles==23.1.0
aiohttp==3.9.3
aioitertools==0.11.0
aiosignal==1.3.1
alabaster==0.7.16
amplitude-analytics==1.1.1
amqp==5.2.0
anyio==3.6.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asgiref==3.8.1
astroid==2.15.8
asttokens==2.4.1
async-lru==2.0.4
async-timeout==4.0.3
attrs==23.2.0
autopep8==2.0.4
Babel==2.14.0
beautifulsoup4==4.12.3
billiard==4.2.0
black==23.3.0
bleach==6.1.0
boto3==1.34.34
botocore==1.34.34
cachetools==5.3.3
cairocffi==1.6.1
celery @ file:///home/user/celery # 5.4.0rc1
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
channels==4.0.0
channels-redis==4.1.0
charset-normalizer==3.3.2
click==8.1.7
click-didyoumean==0.3.1
click-plugins==1.1.1
click-repl==0.3.0
colorama==0.4.6
comm==0.2.2
coverage==7.2.7
crispy-bootstrap5==0.7
cron-descriptor==1.4.3
cryptography==42.0.5
debugpy==1.8.1
decorator==5.1.1
defusedxml==0.7.1
derpconf==0.8.4
devtools==0.12.2
dill==0.3.8
distlib==0.3.8
Django==4.2.2
django-admin-autocomplete-list-filter @ git+https://github.com/demiroren-teknoloji/django-admin-autocomplete-list-filter.git@239fca057b9aa29e92806fbaf2bb955f9fa8bedd
django-admin-rangefilter==0.10.0
django-allauth==0.54.0
django-anymail==10.0
django-celery-beat==2.5.0
django-cors-headers==4.0.0
django-coverage-plugin==3.0.0
django-crispy-forms==2.0
django-debug-toolbar==4.1.0
django-environ==0.10.0
django-extensions==3.2.3
django-model-utils==4.3.1
django-ninja==0.22.2
django-prometheus==2.3.1
django-quill-editor==0.1.40
django-redis==5.2.0
django-silk==5.0.3
django-storages==1.14.2
django-stubs==4.2.7
django-stubs-ext==4.2.7
django-timezone-field==6.1.0
djangorestframework==3.14.0
djangorestframework-stubs==3.14.1
dnspython==2.6.1
docutils==0.20.1
drf-spectacular==0.26.2
email-validator==2.0.0.post2
executing==2.0.1
factory-boy==3.2.1
Faker==24.4.0
fastjsonschema==2.19.1
filelock==3.13.3
flake8==6.0.0
flake8-isort==6.0.0
flower==1.2.0
fqdn==1.5.1
frozenlist==1.4.1
google-api-core==2.11.0
google-api-python-client==2.88.0
google-auth==2.23.3
google-auth-httplib2==0.1.0
google-auth-oauthlib==1.1.0
googleapis-common-protos==1.59.0
gprof2dot==2022.7.29
graphql-core==3.2.3
gunicorn==20.1.0
h11==0.14.0
hiredis==2.2.3
httpcore==1.0.5
httplib2==0.22.0
httptools==0.6.1
httpx==0.27.0
humanize==4.9.0
hypothesis==6.78.1
identify==2.5.35
idna==3.6
imageio==2.34.0
imagesize==1.4.1
inflection==0.5.1
iniconfig==2.0.0
ipdb==0.13.13
ipykernel==6.29.4
ipython==8.22.2
isoduration==20.11.0
isort==5.13.2
jedi==0.19.1
Jinja2==3.1.3
jmespath==1.0.1
joblib==1.3.2
JpegIPTC==1.5
json5==0.9.24
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
jupyter-events==0.10.0
jupyter-lsp==2.2.4
jupyter_client==8.6.1
jupyter_core==5.7.2
jupyter_server==2.13.0
jupyter_server_terminals==0.5.3
jupyterlab==4.1.5
jupyterlab_pygments==0.3.0
jupyterlab_server==2.25.4
kombu==5.3.6
kubernetes==27.2.0
lazy-object-proxy==1.10.0
lazy_loader==0.3
libthumbor==2.0.2
livereload==2.6.3
MarkupSafe==2.1.5
matplotlib-inline==0.1.6
mccabe==0.7.0
merge-args==0.1.5
mistune==3.0.2
msgpack==1.0.8
multidict==6.0.5
mypy==1.3.0
mypy-extensions==1.0.0
nbclient==0.10.0
nbconvert==7.16.3
nbformat==5.10.3
nest-asyncio==1.6.0
networkx==3.2.1
nodeenv==1.8.0
notebook_shim==0.2.4
numpy==1.26.0
oauthlib==3.2.2
opencv-python-headless==4.9.0.80
ormsgpack==1.4.2
overrides==7.7.0
packaging==24.0
pandocfilters==1.5.1
pangocairocffi==0.7.0
pangocffi==0.12.0
parso==0.8.3
pathspec==0.12.1
pexpect==4.9.0
piexif==1.1.3
pillow==10.2.0
pillow_heif==0.12.0
platformdirs==4.2.0
pluggy==1.4.0
pre-commit==3.3.2
prometheus-client==0.19.0
prompt-toolkit==3.0.43
protobuf==4.25.3
psutil==5.9.8
psycopg==3.1.17
psycopg-binary==3.1.17
psycopg-pool==3.2.1
ptyprocess==0.7.0
pure-eval==0.2.2
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycairo==1.26.0
pycares==4.4.0
pycodestyle==2.10.0
pycparser==2.21
pydantic==1.10.14
pyflakes==3.0.1
Pygments==2.17.2
pyheif==0.7.1
PyJWT==2.8.0
pylint==2.17.7
pylint-celery==0.3
pylint-django==2.5.3
pylint-plugin-utils==0.8.2
pymongo==4.6.2
pyparsing==3.1.2
pytest==7.3.2
pytest-asyncio==0.21.0
pytest-django==4.5.2
pytest-sugar==0.9.7
python-crontab==3.0.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-json-logger==2.0.7
python-slugify==8.0.1
python3-openid==3.2.0
pytz==2022.6
PyWavelets==1.5.0
PyYAML==6.0.1
pyzmq==25.1.2
redis==5.1.0b3
referencing==0.34.0
requests==2.31.0
requests-oauthlib==2.0.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.18.0
rsa==4.9
s3transfer==0.10.1
scikit-image==0.21.0
scikit-learn==1.3.2
scipy==1.12.0
Send2Trash==1.8.2
sentry-sdk==1.40.4
setuptools==69.2.0
six==1.16.0
slack-sdk==3.20.0
sniffio==1.3.1
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soupsieve==2.5
Sphinx==7.0.1
sphinx-autobuild==2021.3.14
sphinxcontrib-applehelp==1.0.8
sphinxcontrib-devhelp==1.0.6
sphinxcontrib-htmlhelp==2.0.5
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.7
sphinxcontrib-serializinghtml==1.1.10
sqlparse==0.4.4
stack-data==0.6.3
statsd==3.3.0
strawberry-graphql==0.219.2
strawberry-graphql-django==0.32.1
stripe==5.5.0
teamcity-messages==1.32
termcolor==2.4.0
terminado==0.18.1
text-unidecode==1.3
threadpoolctl==3.2.0
thumbor==7.5.1
thumbor-plugins-gifv==0.1.5
tifffile==2023.9.26
tinycss2==1.2.1
tomlkit==0.12.4
tornado==6.4
tqdm==4.65.0
traitlets==5.14.2
types-python-dateutil==2.9.0.20240316
types-pytz==2024.1.0.20240203
types-PyYAML==6.0.12.20240311
types-requests==2.31.0.20240311
typing_extensions==4.10.0
tzdata==2024.1
uri-template==1.3.0
uritemplate==4.1.1
urllib3==2.0.7
uvicorn==0.22.0
uvloop==0.19.0
vine==5.1.0
virtualenv==20.25.1
watchdog==4.0.0
watchfiles==0.19.0
wcwidth==0.2.13
webcolors==1.13
webencodings==0.5.1
websocket-client==1.7.0
websockets==12.0
Werkzeug==2.3.6
whitenoise==6.4.0
wrapt==1.16.0
yarl==1.9.4
Any more updates on this timeout issue? Could it be caused by a race condition due to using FastAPI?
I'm using Celery version 4.0.2. When using Redis as the backend, I've been observing a TimeoutError exception while waiting on a .get() of a Celery task. The issue is intermittent; roughly 1 in 30 cases exhibit it. From the logs, I observe that the task has finished successfully, and therefore I expect the result to be stored in the backend and available to the client. However, the client never receives that result and instead raises a TimeoutError.