jPinhao-Rover opened 9 months ago
For anyone interested, there is a non-ideal workaround: effectively intercept the materialisation events before they're submitted to Dagster, and if there were test failures, don't report the asset as materialised. Dagster will see the asset as "skipped" and will prevent downstream dependencies from running.
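A minimal sketch of that interception, applied to each `@dbt_assets` definition. It assumes dbt tests surface as `AssetCheckResult` events (on this dagster-dbt version that may need enabling explicitly via `DagsterDbtTranslatorSettings`); the manifest path and function names are hypothetical:

```python
from pathlib import Path

from dagster import AssetCheckResult, Output
from dagster_dbt import (
    DagsterDbtTranslator,
    DagsterDbtTranslatorSettings,
    DbtCliResource,
    dbt_assets,
)

MANIFEST = Path("target/manifest.json")  # assumed manifest location

# Assumption: dbt tests must be surfaced as asset checks for this to work;
# on this dagster-dbt version that may require enabling them here.
translator = DagsterDbtTranslator(
    settings=DagsterDbtTranslatorSettings(enable_asset_checks=True)
)

@dbt_assets(manifest=MANIFEST, dagster_dbt_translator=translator)
def guarded_dbt_assets(context, dbt: DbtCliResource):
    # Buffer the events instead of streaming them straight through, so all
    # test results are known before any materialisation is reported.
    events = list(
        dbt.cli(["build"], context=context, raise_on_error=False).stream()
    )

    # Asset keys whose dbt tests (surfaced as asset checks) failed.
    failed = {
        event.asset_key
        for event in events
        if isinstance(event, AssetCheckResult) and not event.passed
    }

    for event in events:
        # Withhold the Output for any asset with a failed test; Dagster then
        # treats that asset as skipped and won't run downstream dependencies.
        if (
            isinstance(event, Output)
            and context.asset_key_for_output(event.output_name) in failed
        ):
            continue
        yield event
```

Withholding the `Output` (rather than raising) is what makes Dagster mark the asset, and therefore its downstream dependencies, as skipped instead of failing the whole step.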
Dagster version
1.5.12
What's the issue?
When defining different sets of @dbt_assets as part of a single job, Dagster correctly identifies dependencies across them and correctly creates and orchestrates tasks within the job to execute downstream dependencies after the upstream assets have been materialised. In our case, we need to have different @dbt_assets definitions to correctly configure and run partitioned and non-partitioned assets as part of a single job.
However, if one of the upstream assets' tests fails, Dagster does not recognise this when selecting downstream assets to execute, and will run them regardless of test success or failure.
If the assets are instead defined within a single @dbt_assets definition, Dagster correctly respects test failures and doesn't execute downstream assets.
What did you expect to happen?
Test failures are respected when deciding whether to execute downstream assets as part of a single job, whether the assets are defined in the same @dbt_assets definition or across separate ones.
How to reproduce?
Create 4 dbt models where the downstream models depend on the upstream ones, and at least one upstream model has a dbt test that will fail.
Create 2 @dbt_assets definitions, splitting the upstream and downstream models between them.
Create 1 job definition that selects all four assets.
Execute the job, and note that the downstream assets are executed even though the upstream test failed (a minimal sketch of this setup follows).
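A sketch of such a setup, with hypothetical model names (model_a through model_d, downstream models depending on the upstream pair) and an assumed manifest location:

```python
from pathlib import Path

from dagster import AssetSelection, Definitions, define_asset_job
from dagster_dbt import DbtCliResource, dbt_assets

MANIFEST = Path("target/manifest.json")  # assumed manifest location

# Hypothetical split: the upstream pair in one definition,
# the downstream pair in another.
@dbt_assets(manifest=MANIFEST, select="model_a model_b")
def upstream_dbt_assets(context, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()

@dbt_assets(manifest=MANIFEST, select="model_c model_d")
def downstream_dbt_assets(context, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()

# A single job spanning both definitions; Dagster wires the cross-definition
# dependencies and runs downstream_dbt_assets after upstream_dbt_assets.
all_models_job = define_asset_job("all_models_job", selection=AssetSelection.all())

defs = Definitions(
    assets=[upstream_dbt_assets, downstream_dbt_assets],
    jobs=[all_models_job],
    resources={"dbt": DbtCliResource(project_dir=".")},
)
```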
Deployment type
None
Deployment details
This occurs in both a local deployment and a Dagster Helm Chart deployment.
Additional information
You have two alternative ways to handle this, as far as I'm aware: define all the models within a single @dbt_assets definition (which respects test failures, as noted above), or intercept the materialisation events as described in the workaround at the top of this issue.
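The single-definition alternative is just the one-decorator layout; a sketch assuming the same hypothetical models and manifest location as above:

```python
from pathlib import Path

from dagster_dbt import DbtCliResource, dbt_assets

MANIFEST = Path("target/manifest.json")  # assumed manifest location

@dbt_assets(manifest=MANIFEST)  # no select: all four models in one definition
def all_dbt_assets(context, dbt: DbtCliResource):
    # With everything in a single dbt invocation, dbt itself skips downstream
    # models when an upstream test fails, so Dagster reports them as skipped.
    yield from dbt.cli(["build"], context=context).stream()
```

The trade-off, as described above, is losing the ability to configure partitioned and non-partitioned assets separately.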
Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization.