Closed: vladimir-vvalov closed this issue 7 months ago
That is a valid point, but to me there is no clear way to move forward.
I don't think changing default behavior at this point is reasonable. The expectation is that dbt will "force its way through" when the state can't be determined. We could potentially surface the behavior as a parameter - fail if you can't get metadata. But at that point I'd rather we focus on ways to make the metadata retrieval more consistent.
I'm closing for now as won't fix, but I'm happy to listen to counter arguments :)
Is this a new bug in dbt-spark?
Current Behavior
Sometimes an existing table maintained by an incremental model is replaced by a `create or replace table` statement because load_relation(this) returns None. This happens when an error occurs during the metadata command (details below).
After the error, dbt continues to run, but every incremental model executed afterwards uses create_table_as. Because of this, the entire history in the incremental table is replaced by the short slice of data from the current update. More details are below.
Expected Behavior
This may not be a good default behavior: an error in the metadata command is interpreted as the tables being missing, when those tables actually exist.
Steps To Reproduce
profiles:
The project has two sets of models that can be executed in parallel in different environments. set1 has several set1_stg__ models, several set1_int__ models, and an incremental set1_mart__table. set2 has several set2_stg__ models, several view models like set2_int__, and set2_mart. set1 is isolated from set2 (models of set1 have no dependencies on models of set2 and do not depend on any tables/views of set2, and vice versa). Models of both sets use the same schema, 'logistics_dbt'.
Configs that are used for set1_mart__table (or other similar incremental models) in 1.7 versions
in 1.7 and older, example 1
in 1.7 and older, example 2
When these sets are run in parallel in different isolated environments, the incremental model set1_mart__table is sometimes considered non-existent, and dbt generates the script 'create or replace table set1_mart__table'. A normal run doesn't show any errors. When I run set1 with '--debug', I see an error after this command:
show table extended in logistics_dbt like '*'
'Table or view 'gold_staging_stg__stkprod_tb_ecde' not found in database 'logistics_dbt'' (this is a view model from the parallel set2, which executes 'create or replace view gold_staging_stg__stkprod_tb_ecde' at the same time). Part of the log:
After this error, dbt continues to run, but load_relation(this) returns None for every incremental model. When the set1 incremental model runs, the macro dbt/include/spark/macros/materializations/incremental/incremental.sql in dbt-labs/dbt-spark gets None from this expression:
{%- set existing_relation = load_relation(this) -%}
and, based on that value, chooses the create_table_as path.

Workaround: I created a macro, called in the pre_hook of each incremental model, that checks whether the table exists before the model runs and compares that with the result of load_relation(this). If load_relation(this) == None but the table exists, the macro raises an error to block further execution.
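A minimal sketch of such a pre_hook guard (the macro name and the Spark `show tables` check are illustrative, not my exact production code):

```sql
{% macro check_relation_exists() %}
  {# Illustrative guard: abort the run when the table exists in the
     warehouse but load_relation(this) returned None #}
  {% if execute %}
    {% set relation = load_relation(this) %}
    {% if relation is none %}
      {# Double-check existence directly with Spark SQL #}
      {% set result = run_query("show tables in " ~ this.schema ~ " like '" ~ this.identifier ~ "'") %}
      {% if result.rows | length > 0 %}
        {{ exceptions.raise_compiler_error(
             this ~ " exists but load_relation(this) returned None; aborting to avoid 'create or replace table'") }}
      {% endif %}
    {% endif %}
  {% endif %}
{% endmacro %}
```

The macro is wired up via the model config, e.g. `pre_hook = "{{ check_relation_exists() }}"` on each incremental model.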
Below is the full log excerpt from the start until the error.
Relevant log output
Environment
Additional Context
Don't blame me if this is not a bug; I tried to find a similar issue but didn't find one. I'm confused by the default behavior. It's not pleasant when the DAGs finish successfully but a few days later customers report that the history in the mart has disappeared.