
Tasks are in queued state for a longer time and executor slots are exhausted often #38968

Open paramjeet01 opened 7 months ago

paramjeet01 commented 7 months ago

Apache Airflow version

Other Airflow 2 version (please specify below)

If "Other Airflow 2 version" selected, which one?

2.8.3

What happened?

The tasks stay in the queued state for longer than expected. This worked perfectly in 2.3.3.

What you think should happen instead?

The tasks should be in the running state instead of being queued.

How to reproduce

Spin up more than 150 DAG runs in parallel; on Airflow 2.8.3 the tasks get queued even though there is capacity to execute them.

Operating System

Amazon Linux 2

Versions of Apache Airflow Providers

No response

Deployment

Official Apache Airflow Helm Chart

Deployment details

No response

Anything else?

No response

Are you willing to submit PR?

Code of Conduct

jscheffl commented 7 months ago

Without any logs, errors, metrics or details it is impossible to (1) understand your problem and (2) fix anything.

Can you please describe more details?

paramjeet01 commented 7 months ago

Apologies, I'm relatively new to Airflow. We've checked the scheduler logs thoroughly, and everything seems to be functioning correctly without any errors. Additionally, the scheduler pods are operating within normal CPU and memory limits, and our RDS database doesn't indicate any breaches either. Currently, we're running a DAG with 150 parallel DAG runs, but a significant portion of tasks remain in a queued state for an extended period: about 140 tasks are queued, while only 39 are actively running. I've already reviewed the configurations for max_active_tasks_per_dag and max_active_runs_per_dag, and they appear to be properly set. We did not face this issue in 2.3.3.

ephraimbuddy commented 7 months ago

Can you try increasing [scheduler] max_tis_per_query to 512? In one round of performance debugging we found this worked better when increased, but it might depend on the environment.
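
For reference, a minimal sketch of how this option can be applied, assuming a standard airflow.cfg (the 512 value is just the suggested starting point):

[scheduler]
max_tis_per_query = 512

The same setting can also be supplied as the environment variable AIRFLOW__SCHEDULER__MAX_TIS_PER_QUERY=512, following Airflow's AIRFLOW__{SECTION}__{KEY} convention.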

paramjeet01 commented 7 months ago

I have updated the config map with max_tis_per_query = 512 and redeployed the scheduler. Will monitor for some time and let you know. Thanks for the quick response.

paramjeet01 commented 7 months ago

@ephraimbuddy, the above config has improved task-scheduling performance, and the Gantt view shows that the tasks' queue time is lower than before. Also, could you please share the performance-tuning documentation? That would be really nice of you.

tirkarthi commented 7 months ago

@paramjeet01 This might be helpful

https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/scheduler.html#fine-tuning-your-scheduler-performance

paramjeet01 commented 7 months ago

@ephraimbuddy, I also saw that the DAGs were in the scheduled state; after restarting the scheduler, everything works fine now. I found that the executor was showing no open slots available. Attaching an image of the metrics.

Screenshot 2024-04-13 at 7 13 55 PM

paramjeet01 commented 7 months ago

This is similar issue to #36998 , #36478

changqian9 commented 7 months ago

We got the same issue twice. Same observation: this happened when executor open slots < 0.

Screenshot 2024-04-14 at 14 21 36

cc @paramjeet01
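
For context, the executor's open-slots metric is, roughly, the configured [core] parallelism minus the task instances the executor still tracks as running or queued, so a persistently negative value points to slots being leaked rather than genuinely exhausted. A simplified sketch of that accounting (not the actual Airflow source):

# Simplified model of the executor's open-slots accounting.
# parallelism: the [core] parallelism setting
# running / queued: task instances the executor still tracks
def open_slots(parallelism: int, running: set, queued: dict) -> int:
    # If entries are never removed from running/queued, this can go negative.
    return parallelism - len(running) - len(queued)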

paramjeet01 commented 7 months ago

@jscheffl, can you remove the pending-response label?

paramjeet01 commented 7 months ago

After reviewing various GitHub and Stack Overflow discussions, I've updated the following configuration and migrated to Airflow version 2.7.2 with apache-airflow-providers-cncf-kubernetes version 8.0.0:

[scheduler]
task_queued_timeout = 90
max_dagruns_per_loop_to_schedule = 128
max_dagruns_to_create_per_loop = 128
max_tis_per_query = 1024

Disabled gitsync. Additionally, scaling the scheduler to 8 replicas has notably improved performance. The executor-slot exhaustion was resolved by raising max_tis_per_query to a high value. Sorry, I couldn't find the root cause of the issue, but I hope this helps.
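
In case it helps anyone applying the same settings on the official Helm chart, a hedged sketch of the equivalent environment-variable overrides (names follow Airflow's AIRFLOW__{SECTION}__{KEY} convention; the values are the ones listed above, and where you inject them, e.g. via the chart's extra env values, depends on your deployment):

AIRFLOW__SCHEDULER__TASK_QUEUED_TIMEOUT=90
AIRFLOW__SCHEDULER__MAX_DAGRUNS_PER_LOOP_TO_SCHEDULE=128
AIRFLOW__SCHEDULER__MAX_DAGRUNS_TO_CREATE_PER_LOOP=128
AIRFLOW__SCHEDULER__MAX_TIS_PER_QUERY=1024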

paramjeet01 commented 7 months ago

After observing for some time, we encountered instances where the executor open slots were approaching negative values, leading to tasks becoming stuck in the scheduled state. Restarting all the scheduler pods resolved this in Airflow v2.8.3 with apache-airflow-providers-cncf-kubernetes v8.0.0 (screenshot attached).

paramjeet01 commented 7 months ago

We have also observed that the pods are not cleaned up after the tasks complete, and all the pods are stuck in the SUCCEEDED state.

paramjeet01 commented 6 months ago

Sorry, the above comment is a false positive. We customize our KPO (KubernetesPodOperator), and we missed adding on_finish_action, so the pods were stuck in the SUCCEEDED state. After adding it, all the pods are removed properly. We were also able to mitigate the executor slots leak by adding a cronjob to restart our schedulers periodically.
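
For anyone hitting the same pod-cleanup symptom, a minimal, self-contained sketch of a KubernetesPodOperator task with on_finish_action set explicitly (the DAG id, namespace, and image are purely illustrative; in recent cncf-kubernetes provider versions "delete_pod" is already the default, so this mainly matters if a custom wrapper overrides it):

from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(dag_id="kpo_cleanup_example", start_date=datetime(2024, 1, 1), schedule=None):
    KubernetesPodOperator(
        task_id="hello",
        name="hello-pod",
        namespace="airflow",              # illustrative namespace
        image="busybox",
        cmds=["sh", "-c", "echo hello"],
        # Delete the pod once the task finishes so completed pods do not
        # accumulate in the SUCCEEDED phase; other accepted values are
        # "delete_succeeded_pod" and "keep_pod".
        on_finish_action="delete_pod",
    )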

dirrao commented 6 months ago

@paramjeet01 You can use the Airflow num_runs configuration parameter to restart the scheduler container based on your needs. https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html

dirrao commented 6 months ago

This issue is related to the watcher not being able to scale and process events on time, which leads to many completed pods accumulating over time. Related: https://github.com/apache/airflow/issues/22612

paramjeet01 commented 6 months ago

@dirrao, AFAIK the purpose of the Airflow num_runs configuration parameter was changed a while ago, so it cannot be used for restarting the scheduler. run_duration, which was previously used for restarting the scheduler, has also been removed. https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html#num-runs https://airflow.apache.org/docs/apache-airflow/stable/release_notes.html#remove-run-duration

sunank200 commented 5 months ago

If I understood this correctly, the performance issues with tasks in the queued state were mitigated by adjusting max_tis_per_query, scaling scheduler replicas, and implementing periodic scheduler restarts. @paramjeet01 tried periodic restarts of all scheduler pods to temporarily resolve the issue.

Related Issues: #36998, #22612

ephraimbuddy commented 5 months ago

Can anyone try this patch https://github.com/apache/airflow/pull/40183 for the scheduler restarting issue?

potiuk commented 1 week ago

Airflow 2.10.3 is now out and it has fix #42932, which is likely to fix the problems you reported. Please upgrade, check whether it fixed your problem, and report back, @paramjeet01?