Hi, I would like to report an issue that has been bothering us for a while.
We are using Django Q with Redis (Azure Cache for Redis) as the broker, and we observe that from a certain moment on, every submitted async_task just stays queued on the broker side, as qmonitor shows below:
You can see that 50 tasks are queued. These tasks are only consumed once we launch another cluster with 'python3 manage.py qcluster'. Right after the Django and django-q services are started, all tasks execute correctly, so we have no idea what is going on.
Our django-q logs look like this. You can see that after 07:42:50 no run is executed anymore. There is one very interesting line: 23:00:17 [Q] ERROR Error while reading from socket: (110, 'Connection timed out'). However, we do not really understand what it means.
06:45:57 [Q] INFO Process-1:1 processing [run_sJPzfCmiGjZovHV]
06:45:57 [Q] INFO Processed [run_sJPzfCmiGjZovHV]
06:46:03 [Q] INFO Process-1:2 processing [run_boxDUf8Dr7qw6j9]
06:46:03 [Q] INFO Processed [run_boxDUf8Dr7qw6j9]
06:46:10 [Q] INFO Process-1:3 processing [run_64v4l50oCRbZ4tT]
06:46:10 [Q] INFO Processed [run_64v4l50oCRbZ4tT]
06:46:17 [Q] INFO Process-1:4 processing [run_8QCQcY0TaeCM2bp]
06:46:17 [Q] INFO Processed [run_8QCQcY0TaeCM2bp]
06:46:23 [Q] INFO Process-1:1 processing [run_rFLAS6Qvp3y2e4k]
06:46:24 [Q] INFO Processed [run_rFLAS6Qvp3y2e4k]
06:46:31 [Q] INFO Process-1:2 processing [run_Xh2WEfXMb4Ocrfz]
06:46:31 [Q] INFO Processed [run_Xh2WEfXMb4Ocrfz]
07:42:50 [Q] INFO Process-1:3 processing [run_b9ZvOHdaALQWoTP]
07:42:50 [Q] INFO Processed [run_b9ZvOHdaALQWoTP]
23:00:17 [Q] ERROR Error while reading from socket: (110, 'Connection timed out')
02:37:38 [Q] INFO Enqueued 1
02:37:38 [Q] INFO Process-1 created a task from schedule [delete_workspace_resource_async_task]
02:37:38 [Q] INFO Enqueued 1
02:37:38 [Q] INFO Process-1 created a task from schedule [delete_all_async_task]
02:38:08 [Q] INFO Enqueued 1
02:38:08 [Q] INFO Process-1 created a task from schedule [delete_workspace_resource_async_task]
02:38:08 [Q] INFO Enqueued 1
02:38:08 [Q] INFO Process-1 created a task from schedule [delete_all_async_task]
02:39:39 [Q] INFO Enqueued 1
02:39:39 [Q] INFO Process-1 created a task from schedule [delete_workspace_resource_async_task]
02:39:39 [Q] INFO Enqueued 1
02:39:39 [Q] INFO Process-1 created a task from schedule [delete_all_async_task]
06:57:13 [Q] INFO Enqueued 39
06:57:13 [Q] INFO Process-1 created a task from schedule [267]
08:36:12 [Q] INFO Enqueued 40
08:36:12 [Q] INFO Process-1 created a task from schedule [267]
08:37:13 [Q] INFO Enqueued 41
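Since the 23:00:17 error appears to come from the Redis client (errno 110 is a TCP read timeout on Linux), we suspect the cluster's long-lived blocking connection to the broker is being dropped while idle. Below is a sketch of the settings change we are considering. It assumes that the 'redis' section of Q_CLUSTER is passed through to the redis-py client, and the host, password, and cluster name are placeholders, not our real values:

```python
# Draft Q_CLUSTER for settings.py (placeholder values, unverified mitigation).
# socket_keepalive and health_check_interval are redis-py connection kwargs
# intended to keep an otherwise idle connection alive / detected as dead.
Q_CLUSTER = {
    "name": "myproject",      # placeholder cluster name
    "workers": 4,
    "timeout": 60,            # seconds a task may run
    "retry": 120,             # must be larger than timeout
    "redis": {
        "host": "example.redis.cache.windows.net",  # placeholder host
        "port": 6380,                               # Azure SSL port
        "password": "***",                          # placeholder
        "ssl": True,
        "socket_keepalive": True,        # enable TCP keepalives
        "health_check_interval": 30,     # ping the server every 30s
    },
}
```

If anyone can confirm whether these connection options actually prevent the idle-timeout disconnect on Azure, that would already help us a lot.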
Our Django Q settings are configured as follows:
Any ideas are highly welcome.