dbt-labs / dbt-core

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
https://getdbt.com
Apache License 2.0

[CT-2809] Support `ref` in foreign key constraint expressions #8062

Open jtcohen6 opened 1 year ago

jtcohen6 commented 1 year ago

Problem

Because you must hard-code your database.schema.table name when setting a foreign key constraint:

    constraints:
      - type: FOREIGN_KEY # multi_column
        columns: [FIRST_COLUMN, SECOND_COLUMN, ...]
        expression: "OTHER_MODEL_SCHEMA.OTHER_MODEL_NAME (OTHER_MODEL_FIRST_COLUMN, OTHER_MODEL_SECOND_COLUMN, ...)"

    columns:
      - name: FIRST_COLUMN
        data_type: DATA_TYPE

        # column-level constraints
        constraints:
          - type: foreign_key
            expression: OTHER_MODEL_SCHEMA.OTHER_MODEL_NAME (OTHER_MODEL_COLUMN)

This feature has become more important now that several data warehouses use foreign key constraints to optimize joins.

Instead, we should support ref in foreign key constraint expressions, at both the model and column level.

This is similar to how the relationships data test works.

models:
  - name: orders
    columns:
      - name: customer_id
        tests:
          - relationships:
              to: ref('customers')
              field: id

Current workaround

Use Jinja to build the expression based on the target:

- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name!='dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"
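To make the rendering behavior concrete, here is a small Python sketch (not part of dbt; the target and dataset names are invented) that mirrors what that Jinja expression evaluates to under prod vs. dev targets:

```python
def fk_expression(target_name: str, target_dataset: str) -> str:
    """Mirror the Jinja workaround:
    {{ 'prod_dataset.' if target.name != 'dev'
       else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)
    """
    if target_name != "dev":
        prefix = "prod_dataset."                     # prod: fixed production schema
    else:
        prefix = target_dataset + ".prod_dataset__"  # dev: developer's own dataset
    return prefix + "foreign_table(foreign_key)"

# prod target resolves to the real schema
print(fk_expression("prod", "unused"))
# prod_dataset.foreign_table(foreign_key)

# dev target resolves to the developer's dataset with a prefixed table name
print(fk_expression("dev", "dev_alice"))
# dev_alice.prod_dataset__foreign_table(foreign_key)
```

This is exactly the schema-resolution logic that ref() already performs, which is why hand-rolling it in every constraint expression feels like duplication.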

Acceptance criteria

Notes from technical refinement

originally left as comment in https://github.com/dbt-labs/dbt-core/issues/7417

I'm opening this issue to track upvotes/comments that could inform eventual prioritization. Is this something people want/need in their production workflows? Are folks happy to solve it by other means in the meantime (e.g. the dbt_constraints package)?


If we were to take FK constraints more seriously, we're missing a pretty important ingredient: the ability to include and template ref inside the expression field, or to provide more structure, i.e.

constraints:
  - type: foreign_key
    ref_table_name: ref('other_table_name')
    ref_column_names: ['id'] # could be multiple
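A fuller schema.yml sketch of that structured form might look like the following. To be clear, this syntax is hypothetical; it does not exist in dbt today, and the model and column names are illustrative:

```yaml
models:
  - name: orders
    columns:
      - name: customer_id
        constraints:
          - type: foreign_key
            # hypothetical: dbt would resolve this ref at parse time,
            # registering a DAG edge and rendering the correct
            # database.schema.table name for the active target
            ref_table_name: ref('customers')
            ref_column_names: ['id']
```

Resolving the ref at parse time would solve both problems in this thread at once: the rendered relation name would track the target, and the DAG would gain the edge the constraint implies.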

Per https://github.com/dbt-labs/dbt-core/issues/6754#issuecomment-1449200569, we kicked that out of scope for v1.5, and we're unlikely to prioritize it while this remains a metadata-only (nonfunctional & unenforceable) feature on the majority of data platforms.

jgillies commented 11 months ago

An argument in favor of prioritizing this is that BigQuery now supports the use of foreign keys for optimizing joins.

https://cloud.google.com/blog/products/data-analytics/join-optimizations-with-bigquery-primary-and-foreign-keys?hl=en

noahjgreen295 commented 8 months ago

I would also submit that, database enforcement implementation aside, forcing the usage of explicit <schema>.<table> hardcodings and not supporting ref() is a crack in dbt's abstraction model. On its own it's certainly not the end of the world, but these breaks in the overall architectural vision and product conceptualization tend to proliferate if left unaddressed.

awal11 commented 7 months ago

Snowflake can also use foreign keys for optimizing joins: https://docs.snowflake.com/en/user-guide/join-elimination#setting-the-rely-constraint-property-to-eliminate-unnecessary-joins

Juniper-vdg commented 7 months ago

I'd really be interested in referencing a FK constraint to a model that lives in a custom schema. The referred model lives in a custom schema that is dependent on an Environment Variable that is passed in at runtime, so I cannot hardcode a <schema>.<table> reference in my constraint as I do not know what it will be ahead of time.

Until dbt is enhanced to support ref() in a foreign key constraint, I cannot model my FKs in constraints.

noahjgreen295 commented 7 months ago

Another reason to add this is to ensure that dbt builds DAG dependencies that reflect the foreign keys. Because there is no ref(), but instead a hard-coded <schema.table>, there's no way for dbt to understand the DAG dependency that a foreign key constraint creates.

For example, let’s say I have 3 models: A, B, and C

B depends on A.

So if I say dbt run -m +B it will first build A, then B.

So far so good. Now, suppose I specify a foreign key constraint on a column in B, referring to a column in C. For this to work, C has to exist. In other words, there’s now a DAG dependency between B and C, for that reason.

But with that constraint specified, dbt run -m +B still just builds A and then B. The constraint itself causes an error, because C does not exist.

In any non-trivially sized DAG, this will cause constant build errors, because there is no guarantee of a thread getting to C before B.

The workaround is to force the dependency by placing a SQL-commented ref() in the model .sql, as described here. In other words, something like:

-- {{ ref('C') }}

But this is extra work, and it becomes difficult to maintain at scale. So this is one more reason to support ref in foreign key constraint expressions in the .yml; i.e. all in the same place.
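Spelled out, the workaround model file might look something like this (the model and column names are illustrative, mirroring the A/B/C example above):

```sql
-- models/B.sql
-- no-op ref to force a DAG edge: dbt will now build C before B,
-- so B's foreign key constraint can reference an existing table
-- {{ ref('C') }}

select
    id,
    c_id  -- column constrained by the FK pointing at C
from {{ ref('A') }}
```

The ref() inside the SQL comment never affects the compiled query; dbt's parser still picks it up and adds the dependency, which is exactly why it feels like a hack.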

horony commented 7 months ago

Like Snowflake and BigQuery, Redshift also uses foreign keys for optimizing joins: https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-defining-constraints.html

elyobo commented 6 months ago

During development we build into developer dependent datasets (e.g. dev_developer_name.dataset_name__model_name instead of dataset_name.model_name in production), so hard coding foreign keys seems impossible.

Stochastic-Squirrel commented 6 months ago

> During development we build into developer dependent datasets (e.g. dev_developer_name.dataset_name__model_name instead of dataset_name.model_name in production), so hard coding foreign keys seems impossible.

@elyobo

The dependency issue raised by @noahjgreen295 will still apply; it was a major problem for us when using this feature. Our pipelines were less reliable because there was essentially a race condition when running multiple models in parallel.

I use a similar naming convention to you and I used something like this in the model YAML

- type: foreign_key
  expression: "{{ 'warehouse' if target.name!='dev' else target.dataset }}.tableA(tableB_ForeignKey)"

You can define simple if/else logic inside the curly braces. This allows the FKs to be created in a dev_developer_name schema under a dev target. Hope this helps!

elyobo commented 6 months ago

Thanks @Stochastic-Squirrel, I didn't realise you could do that; it ends up something like this for us and does indeed work, though it leaves both the logic duplication (this is already handled in the naming macros that ref calls) and the dependency issue unresolved.

- type: foreign_key
  expression: "{{ 'prod_dataset.' if target.name!='dev' else target.dataset ~ '.prod_dataset__' }}foreign_table(foreign_key)"

Another option might be post-hooks with ALTER TABLE statements, but that's also not ideal. ref support would be ideal, but I can appreciate that it's a pain to implement.

ataft commented 4 months ago

@jtcohen6 Given that Snowflake, Redshift and BigQuery use foreign keys to optimize joins, will this issue get re-prioritized? Also, I'll add that downstream tools can use PK/FK to infer table relationships, perhaps bumping the priority further.

babschlott commented 3 weeks ago

Any updates on the priority for this? I feel like dbt focuses a lot on adding new features but pushes aside improvements to the great features already present...

vishaalkk commented 2 weeks ago

Any updates on this? It defeats the purpose of foreign key constraints: we cannot use them because dbt is unable to build a correct DAG from them. I have to run the project a couple of times so that parent tables get built.