Just to add more insight to this issue, which was created by my colleague:

We have two DAGs. DAG one acts as an internal scheduler: it runs with a 1-minute heartbeat to check for files in S3, and once a file arrives in S3 it triggers the second DAG, which does the data processing and runs the other Python operators in our pipeline.

DAG One (scheduler) -> checks for the file in S3 and triggers DAG two
DAG Two (upload DAG) -> does the data processing

The problem happens in the second DAG: it gets stuck on the success state for any of the above tasks, while in Flower the Celery task stays in the active/running state forever. The issue seems to be in the scheduler-to-worker communication about task state.
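For reference, the scheduler DAG follows roughly this pattern (a minimal sketch only; the dag_ids, bucket name, and key pattern below are placeholders, not our actual pipeline code):

```python
# Minimal sketch of the "DAG one polls S3, then triggers DAG two" pattern.
# All names below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG(
    dag_id="dag_one_scheduler",           # runs on a 1-minute heartbeat
    schedule_interval="* * * * *",
    start_date=datetime(2022, 1, 1),
    catchup=False,
    max_active_runs=1,
) as dag_one:
    wait_for_file = S3KeySensor(
        task_id="wait_for_file",
        bucket_name="my-ingest-bucket",    # placeholder bucket
        bucket_key="incoming/*.csv",       # placeholder key pattern
        wildcard_match=True,
        mode="reschedule",                 # frees the worker slot between pokes
        poke_interval=60,
    )
    trigger_upload = TriggerDagRunOperator(
        task_id="trigger_upload_dag",
        trigger_dag_id="dag_two_upload",   # the data-processing/upload DAG
    )
    wait_for_file >> trigger_upload
```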
Apache Airflow version
2.3.3 (latest released)
What happened
To be honest I don't know why it stopped working properly. In our process we have 2 DAGs per client; the first DAG has 3 tasks, the second one has 5-8 tasks. In general the first DAG should take ~3 min and the second one ~5-10 min to finish. A week ago we added 2 new clients with a similar amount of data to previous customers, and Airflow started to behave strangely. DAGs (different ones, not only for those 2 customers) are in the `running` state for hours (all tasks inside finish a few minutes after the start, but the worker is doing "something" that is not in the logs and causes a high load of ~12, when in normal conditions we have < 1). Or the DAG is in the `running` state and a task can have the `queued` (or `no_status`) status for hours. We've mitigated the issue by restarting the workers and schedulers every hour, but that's not a long-term or mid-term solution.

We're using CeleryExecutor (in Kubernetes - 1 pod = 1 worker). It doesn't help if we change concurrency from 4 to 1, for example. On the worker pod the process list shows only celery, gunicorn, and the current task.
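To show what we mean by "queued for hours", here is a rough diagnostic sketch (not part of our pipeline; the 2-hour cutoff is just an arbitrary example) that lists task instances sitting in the queued state longer than a threshold, queried through Airflow's ORM:

```python
# Rough diagnostic sketch: list task instances that have been queued longer
# than a cutoff. The 2-hour threshold is an arbitrary example value.
from datetime import timedelta

from airflow.models import TaskInstance
from airflow.utils import timezone
from airflow.utils.session import create_session
from airflow.utils.state import State

cutoff = timezone.utcnow() - timedelta(hours=2)

with create_session() as session:
    stuck = (
        session.query(TaskInstance)
        .filter(
            TaskInstance.state == State.QUEUED,
            TaskInstance.queued_dttm < cutoff,
        )
        .all()
    )
    for ti in stuck:
        print(ti.dag_id, ti.task_id, ti.run_id, ti.queued_dttm)
```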
We had `apache/airflow:2.2.5-python3.8`, but right now it's `apache/airflow:2.3.3-python3.8`, with the same problems.

What you think should happen instead
No response
How to reproduce
No response
Operating System
Debian GNU/Linux 11 (bullseye) (on pods), amazon-linux on EKS
Versions of Apache Airflow Providers
Deployment
Other 3rd-party Helm chart
Deployment details
The Airflow scheduler, webserver, workers, and Redis run on our EKS cluster, deployed via our own Helm charts.
We also have RDS (PostgreSQL).
Anything else
Are you willing to submit PR?
Code of Conduct