jtcohen6 opened 1 year ago
An argument in favor of prioritizing this is that BigQuery now supports the use of foreign keys for optimizing joins.
I would also submit that, database enforcement implementation aside, forcing the use of explicit `<schema>.<table>` hardcodings and not supporting `ref()` is a crack in dbt's abstraction model. On its own it's certainly not the end of the world, but these breaks in the overall architectural vision and product conceptualization tend to proliferate if left unaddressed.
Snowflake can also use foreign keys for optimizing joins: https://docs.snowflake.com/en/user-guide/join-elimination#setting-the-rely-constraint-property-to-eliminate-unnecessary-joins
I'd really be interested in referencing a FK constraint to a model that lives in a custom schema. The referenced model lives in a custom schema that depends on an environment variable passed in at runtime, so I cannot hardcode a `<schema>.<table>` reference in my constraint: I do not know what it will be ahead of time. Until dbt is enhanced to support `ref()` in a foreign key constraint, I cannot model my FKs in constraints.
Another reason to add this is to ensure that dbt builds the DAG dependencies that foreign keys imply. Because there is no `ref()`, only a hard-coded `<schema>.<table>` specification, there is no way for dbt to understand the DAG dependency that a foreign key constraint creates.
For example, let's say I have 3 models: `A`, `B`, and `C`. `B` depends on `A`.
So if I say `dbt run -m +B`, it will first build `A`, then `B`.
So far so good. Now, suppose I specify a foreign key constraint on a column in `B`, referring to a column in `C`. For this to work, `C` has to exist. In other words, there is now a DAG dependency between `B` and `C`.
But with that constraint specified, `dbt run -m +B` still just builds `A` and then `B`. The constraint itself causes an error, because `C` does not exist.
In any non-trivially sized DAG, this will cause constant build errors, because there is no guarantee that a thread gets to `C` before `B`.
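As a concrete sketch of that scenario (schema, model, and column names here are illustrative), the constraint on `B` today has to spell out a hard-coded target, which is exactly what hides the dependency from dbt:

```yaml
models:
  - name: B
    columns:
      - name: c_id
        constraints:
          - type: foreign_key
            # Hard-coded schema.table: dbt cannot infer a DAG edge to C from this
            expression: "my_schema.C (id)"
```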
The workaround is to force the dependency by placing a SQL-commented `ref()` in the model's `.sql` file, as described here. In other words, something like:

```sql
-- {{ ref('C') }}
```
But this is just extra work, and it becomes difficult to maintain as it scales. So this is one more reason to support `ref` in foreign key constraint expressions in the `.yml`, i.e. all in the same place.
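For completeness, a sketch of that workaround in context (model names follow the `A`/`B`/`C` example above; the `select` is illustrative):

```sql
-- models/B.sql
-- Dummy ref: Jinja renders this at parse time, so dbt records the B -> C edge,
-- but as a SQL comment it has no effect on the compiled query.
-- {{ ref('C') }}

select * from {{ ref('A') }}
```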
Like Snowflake and BigQuery, Redshift also uses foreign keys for optimizing joins: https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-defining-constraints.html
During development we build into developer-dependent datasets (e.g. `dev_developer_name.dataset_name__model_name` instead of `dataset_name.model_name` in production), so hard-coding foreign keys seems impossible.
@elyobo
The dependency issue raised by @noahjgreen295 will still be an issue and was a major issue for us in using this feature. Our pipelines were less reliable and there was essentially a race condition when running multiple models in parallel.
I use a similar naming convention to you, and I used something like this in the model YAML:

```yaml
- type: foreign_key
  expression: "{{ 'warehouse' if target.name != 'dev' else target.dataset }}.tableA(tableB_ForeignKey)"
```

You can define simple if-else logic in the brackets. This allows the FKs to be created in a `dev_developer_name` schema under a `dev` target. Hope this helps!
Thanks @Stochastic-Squirrel, I didn't realise you could do that; it ends up something like this for ours and does indeed work, though it leaves both the logic duplication (this is already handled in the naming macros that `ref` calls) and the dependency issue unresolved:

```yaml
- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name != 'dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"
```
Another option might be post-hook alterations with `ALTER TABLE` statements, but that is also not ideal. `ref` support would be ideal, but I can appreciate that it's a pain to implement.
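A sketch of that post-hook alternative (table, column, and constraint names are illustrative; note that whether a `ref()` inside a hook registers a DAG dependency has varied across dbt versions, so verify on yours):

```yaml
models:
  - name: orders
    config:
      post-hook:
        # Hypothetical ALTER TABLE hook; {{ this }} resolves to the current model's relation
        - "alter table {{ this }} add constraint fk_customer foreign key (customer_id) references {{ ref('customers') }} (id)"
```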
@jtcohen6 Given that Snowflake, Redshift and BigQuery use foreign keys to optimize joins, will this issue get re-prioritized? Also, I'll add that downstream tools can use PK/FK to infer table relationships, perhaps bumping the priority further.
Any updates on the priority for this? I feel like dbt focuses a lot on adding new features but pushes aside improvements to the great features already present...
Any updates on this? It defeats the purpose of foreign key constraints; we cannot use them because dbt is unable to build a correct DAG. I have to run the project a couple of times so that parent tables get built.
Problem
Because you must hard-code your database.schema.table name when setting a foreign key constraint:
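For example (database, schema, and model names here are illustrative), a constraint expression today must spell out the fully qualified name:

```yaml
columns:
  - name: customer_id
    constraints:
      - type: foreign_key
        # Must be hard-coded; ref('customers') is not supported here
        expression: "my_db.my_schema.customers (id)"
```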
This feature has become more important now that warehouses use foreign key constraints for better performance.
Instead, we should support `ref` in foreign key constraint expressions, both at the model and the column level. This is similar to how the relationships data test works.
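One hypothetical shape for that support, by analogy with the relationships test (the `to` / `to_columns` keys here are illustrative, not an implemented API):

```yaml
columns:
  - name: customer_id
    constraints:
      - type: foreign_key
        to: ref('customers')   # hypothetical: resolved like any other ref
        to_columns: [id]
```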
Current workaround
Having to use jinja to specify the expression based on the target:
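Based on the examples earlier in this thread (dataset and table names are illustrative):

```yaml
- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name != 'dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"
```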
Acceptance criteria

- `ref` at the column level

Notes from technical refinement

- here’s where we parse constraints currently
originally left as comment in https://github.com/dbt-labs/dbt-core/issues/7417
I'm opening this issue to track upvotes/comments that could inform eventual prioritization. Is this something people want/need in their production workflows? Are you happy to solve it by other means in the meantime (e.g. `dbt_constraints`)? If we were to take FK constraints more seriously, we're missing a pretty important ingredient: the ability to include & template `ref` inside the `expression` field, or providing more structure, i.e.
Per https://github.com/dbt-labs/dbt-core/issues/6754#issuecomment-1449200569, we kicked that out of scope for v1.5, and we're unlikely to prioritize it while this remains a metadata-only (nonfunctional & unenforceable) feature on the majority of data platforms.