Closed: ajsquared closed this issue 4 months ago.
I think this might be better reported against dbt-core; I believe the handling of cancel signals happens there, and dbt-core then delegates to adapters to handle the cancellation. If cancellation works for one scenario but not another, the fix probably needs to happen in core.
Ah makes sense. I actually found a similar issue there (https://github.com/dbt-labs/dbt-core/issues/8356), so I'll close this one.
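For background on the signal handling discussed in the comments above: by default, CPython turns SIGINT (ctrl-c) into a KeyboardInterrupt, which running code can catch to cancel in-flight work, whereas SIGTERM ends the process without that hook unless a handler is installed. The sketch below is purely illustrative and is not dbt-core's or the adapter's actual implementation; it only shows how a SIGTERM handler can be routed onto the same KeyboardInterrupt path that ctrl-c already takes.

```python
import signal

def _terminate_like_ctrl_c(signum, frame):
    # Re-raise as KeyboardInterrupt so SIGTERM follows the same
    # cancellation path that ctrl-c (SIGINT) already triggers.
    raise KeyboardInterrupt

# Illustrative only: whether and where to register such a handler would be a
# decision for dbt-core / the adapter; this does not describe current behavior.
signal.signal(signal.SIGTERM, _terminate_like_ctrl_c)
```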
Describe the bug
We run DBT via Airflow. When a running Airflow task is manually marked as failed, Airflow terminates all of the task's child processes.
However, with our DBT jobs the process is killed on the Airflow worker, but the job continues executing on our Databricks cluster.
Cancellation does work properly with a ctrl-c in an interactive shell.
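A possible stopgap on the Airflow side, assuming Airflow delivers SIGTERM to the task's process tree: run dbt through a small wrapper that translates SIGTERM into SIGINT for the dbt child, so the ctrl-c cancellation path (which does stop the query, per the report above) is exercised. The project path and invocation below are placeholders, not something confirmed by the maintainers.

```python
import signal
import subprocess
import sys

# Hypothetical wrapper: launch dbt as a child process and, if Airflow sends
# us SIGTERM, forward SIGINT to the child so dbt's ctrl-c handling runs.
proc = subprocess.Popen(["dbt", "run", "--project-dir", "/path/to/project"])  # placeholder path

def _forward_as_sigint(signum, frame):
    proc.send_signal(signal.SIGINT)

signal.signal(signal.SIGTERM, _forward_as_sigint)
sys.exit(proc.wait())
```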
Steps To Reproduce
Sending a `SIGTERM` with `kill` to a local DBT process appears to produce the same behavior (the local process dies, but the job keeps executing on the cluster), so this can be reproduced outside of Airflow.
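A minimal way to reproduce this outside of Airflow, assuming a local dbt project targeting Databricks (the path and timing below are placeholders):

```python
import signal
import subprocess
import time

# Start a dbt run, wait for the model to be submitted to the cluster,
# then send SIGTERM the way Airflow does when the task is marked failed.
proc = subprocess.Popen(["dbt", "run", "--project-dir", "/path/to/project"])  # placeholder path
time.sleep(60)  # long enough for a long-running model to start on Databricks
proc.send_signal(signal.SIGTERM)
proc.wait()
# Observed: the local dbt process exits, but the query keeps running on the cluster.
```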
Expected behavior
When the DBT process is terminated, the job should stop executing on the cluster.
Screenshots and log output
A screenshot (not included here) shows the termination process from Airflow.
System information
The output of `dbt --version`:
The operating system you're using: Linux
The output of `python --version`: Python 3.11.7