Open paramjeet01 opened 7 months ago
Without any logs, errors, metrics, or details, it is impossible to (1) understand your problem and (2) fix anything.
Can you please provide more details?
Apologies, I'm relatively new to Airflow. We've checked the scheduler logs thoroughly, and everything seems to be functioning correctly without any errors. Additionally, the scheduler pods are operating within normal CPU and memory limits. Our database, RDS, doesn't indicate any breaches either. Currently, we're running a DAG with 150 parallel DAG runs. However, a significant portion of tasks remain in a queued state for an extended period: about 140 tasks are queued, while only 39 are actively running. I've already reviewed the configurations for max_active_tasks_per_dag and max_active_runs_per_dag, and they appear to be properly set. We did not face this issue in 2.3.3.
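For context, "properly set" here means something along these lines in our config (the values below are illustrative, not the exact ones from our deployment):
[core]
max_active_tasks_per_dag = 256   # illustrative: high enough not to cap 150 parallel runs
max_active_runs_per_dag = 200    # illustrative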
Can you try increasing [scheduler] max_tis_per_query to 512? In one performance-debugging session we found that increasing it worked better, but that might depend on the environment.
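For reference, a rough sketch of that change in airflow.cfg (or the equivalent key in your config map):
[scheduler]
# upper bound on how many task instances the scheduler examines per query in its loop
max_tis_per_query = 512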
I have updated the config map with max_tis_per_query = 512 and redeployed the scheduler. I'll monitor it for some time and let you know. Thanks for the quick response.
@ephraimbuddy , the above config has improved task scheduling performance, and the Gantt view shows that task queue times are lower than before. Also, could you please share the performance tuning documentation? That would be really helpful.
@paramjeet01 This might be helpful
@ephraimbuddy , we also saw that the DAGs were stuck in the scheduled state; after restarting the scheduler, everything works fine now. We found that the executor was showing no open slots available. Attaching an image of the metrics.
This is a similar issue to #36998 and #36478.
We hit the same issue twice. Same observation: it happened when the executor open slots dropped below 0.
cc @paramjeet01
@jscheffl , can you remove the pending-response label?
After reviewing various GitHub and Stack Overflow discussions, I've updated the following configuration and migrated to Airflow 2.7.2 with apache-airflow-providers-cncf-kubernetes 8.0.0:
[scheduler]
task_queued_timeout = 90
max_dagruns_per_loop_to_schedule = 128
max_dagruns_to_create_per_loop = 128
max_tis_per_query = 1024
Disabled git-sync. Additionally, scaling the scheduler to 8 replicas has notably improved performance. The executor slot exhaustion was resolved by raising max_tis_per_query to a high value. Sorry, I couldn't find the root cause of the issue, but I hope this helps.
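For deployments that pass configuration through environment variables rather than airflow.cfg, these should map to the following (assuming the standard AIRFLOW__SECTION__KEY convention):
AIRFLOW__SCHEDULER__TASK_QUEUED_TIMEOUT=90
AIRFLOW__SCHEDULER__MAX_DAGRUNS_PER_LOOP_TO_SCHEDULE=128
AIRFLOW__SCHEDULER__MAX_DAGRUNS_TO_CREATE_PER_LOOP=128
AIRFLOW__SCHEDULER__MAX_TIS_PER_QUERY=1024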
After observing for some time, we encountered instances where the executor open slots approached negative values, leading to tasks becoming stuck in the scheduled state. Restarting all the scheduler pods resolved this in Airflow v2.8.3 with apache-airflow-providers-cncf-kubernetes v8.0.0.
We have also observed that pods are not cleaned up after task completion, and all the pods are stuck in the SUCCEEDED state.
Sorry, the above comment is a false positive. We are customizing our KPO and missed adding on_finish_action, so the pods were stuck in the SUCCEEDED state. After adding it, all the pods are removed properly.
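For anyone customizing the KubernetesPodOperator the same way, a minimal sketch of the fix (class, task, and image names are illustrative; the key part is not dropping on_finish_action when wrapping the operator):
from datetime import datetime
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

class MyCustomPodOperator(KubernetesPodOperator):  # illustrative subclass name
    def __init__(self, **kwargs):
        # Keep the pod cleanup behaviour when customizing the operator:
        # "delete_pod" removes the pod once the task finishes (success or failure).
        kwargs.setdefault("on_finish_action", "delete_pod")
        super().__init__(**kwargs)

with DAG(dag_id="kpo_cleanup_example", start_date=datetime(2024, 1, 1), schedule=None):
    MyCustomPodOperator(
        task_id="example_task",   # illustrative
        name="example-pod",       # illustrative
        image="busybox",          # illustrative
        cmds=["sh", "-c", "echo hello"],
    )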
We were also able to mitigate the executor slot leak by adding a cron job that restarts our schedulers once in a while.
@paramjeet01 You can use the Airflow num_runs configuration parameter to restart the scheduler container based on your needs. https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html
This issue is related to the watcher not being able to scale and process events on time, which leads to many completed pods accumulating over time. Related: https://github.com/apache/airflow/issues/22612
@dirrao , the purpose of the Airflow num_runs configuration parameter changed a while ago AFAIK, and it can no longer be used for restarting the scheduler. We also removed run_duration, which was previously used for restarting the scheduler. https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html#num-runs https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html#remove-run-duration
If I understood this correctly, the performance issues with tasks stuck in the queued state were mitigated by adjusting max_tis_per_query, scaling scheduler replicas, and periodically restarting all scheduler pods as a temporary workaround.
Related Issues: #36998, #22612
Can anyone try this patch https://github.com/apache/airflow/pull/40183 for the scheduler restarting issue?
Airflow 2.10.3 is now out and it includes fix #42932, which is likely to resolve the problems you reported. Please try upgrading.
Apache Airflow version
Other Airflow 2 version (please specify below)
If "Other Airflow 2 version" selected, which one?
2.8.3
What happened?
Tasks stay in the queued state for much longer than expected. This worked perfectly fine in 2.3.3.
What you think should happen instead?
The tasks should be in the running state instead of being queued.
How to reproduce
Spin up more than 150 DAG runs in parallel; tasks get stuck in the queued state in Airflow 2.8.3 even though there is capacity to execute them.
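A minimal sketch of the kind of DAG that reproduces this for us (dag_id, task, and timings are illustrative; the point is triggering well over 150 runs of it in parallel, e.g. via the REST API or airflow dags trigger):
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="queued_tasks_repro",   # illustrative
    start_date=datetime(2024, 1, 1),
    schedule=None,
    max_active_runs=200,           # allow well over 150 parallel runs
    catchup=False,
):
    BashOperator(task_id="sleep_a_bit", bash_command="sleep 60")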
Operating System
Amazon Linux 2
Versions of Apache Airflow Providers
No response
Deployment
Official Apache Airflow Helm Chart
Deployment details
No response
Anything else?
No response
Are you willing to submit PR?
Code of Conduct