dbt-labs / dbt-redshift

dbt-redshift contains all of the code enabling dbt to work with Amazon Redshift

[Feature] Datasharing: cross-database support #742

Open dataders opened 8 months ago

dataders commented 8 months ago

Describe the feature

What

I think this is a direct successor to #217. There are many other related issues listed below.

Today, the only supported cross-database scenario in dbt-redshift is allowing source tables to live in another database. There are considerable caveats to this narrow use case:

  1. the cluster must use node type ra3 (or be serverless)
  2. the `profiles.yml` profile must specify `ra3_node: true`
  3. a source table is defined in a database different from the one given in the profile
  4. all models referencing the "foreign db" source table must be materialized as tables (views are not supported) -- see the sketch below
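
A hedged sketch of what this looks like today, with hypothetical source and model names (and assuming the profile already sets `ra3_node: true`):

```sql
-- models/staging/stg_foreign_orders.sql
-- The source below is defined (in its sources .yml) with a `database:` that
-- differs from the profile's database, so this model must be a table, not a view.
{{ config(materialized='table') }}

select *
from {{ source('foreign_db', 'orders') }}
```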

There is a newer feature, still in preview, Datashare, that allows writes to external databases. For example, if I am logged into the FOO database on cluster FIZZ, a configuration exists such that I can create tables within database BAR on cluster BUZZ (and vice versa).

Ideal end-state

A dbt-redshift project could take any model in the DAG and have it materialized in another database simply by providing {{ config( database = 'BAR' ) }} at the top of the model, provided that the databases involved have been properly configured for Datasharing.
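
In other words, a hypothetical model like the one below would land in database BAR even though the connection targets FOO. This is a sketch of the desired end-state, not something that works today:

```sql
-- models/marts/fct_cross_db.sql
-- Hypothetical model: built in database BAR while the profile targets FOO.
{{ config(materialized='table', database='BAR') }}

select *
from {{ ref('stg_orders') }}  -- an upstream model living in FOO
```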

How

Challenges

The challenges in supporting this new feature are varied:

| problem | example |
| --- | --- |
| dbt-redshift inherits macros from dbt-postgres | `redshift__create_schema()` invokes `postgres__create_schema()`, which only uses two-part names (`schema.relation`), excluding the required database |
| dbt-redshift relies on Postgres metadata views that don't support Datashares | `redshift__get_columns_in_relation()` queries `information_schema."columns"` but should perhaps use `SVV_ALL_COLUMNS` instead |
| discrepancy across Redshift SKUs in support for Datashares | ra3 and serverless support Datasharing/cross-db writes; dc2 clusters do not |
| discrepancy in performance between `pg_*` metadata tables and the `SVV_*` datashare-aware system views | `information_schema."columns"` takes a few dozen milliseconds to return all column metadata in the current database; `SVV_ALL_COLUMNS` can be more than 50X slower |
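
For context, the two metadata lookups being compared look roughly like the following. This is a sketch with hypothetical schema/table names, not the exact columns the adapter projects:

```sql
-- Current approach (inherited from dbt-postgres): fast, but only sees the
-- local database.
select column_name, data_type, character_maximum_length, numeric_precision, numeric_scale
from information_schema."columns"
where table_schema = 'analytics'
  and table_name = 'fct_orders';

-- Datashare-aware alternative: also returns columns for "external" databases,
-- but has been observed to be dramatically slower (50X+ in some cases).
select column_name, data_type, character_maximum_length, numeric_precision, numeric_scale
from svv_all_columns
where database_name = 'bar'
  and schema_name = 'analytics'
  and table_name = 'fct_orders';
```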

Work required

Before the actual coding begins, it is imperative that we first learn the following:

### Possible work required
- [ ] shift away from `pg_` metadata queries to queries that include "external" database metadata
- [ ] override any `postgres` macros that exclude `database` from the `relation` name, e.g. `redshift__create_schema()` (see the sketch below)
- [ ] (possibly) remove both the `verify_database` method and the `ra3_node` profile parameter
- [ ] ensure functionality of metadata-related macros in "core" packages such as dbt-codegen, dbt-utils, dbt-audit-helper, etc.
- [ ] write cross-database integration tests
- [ ] add RA3 and serverless to our CI pipeline
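
As a hedged sketch of what one such override might look like, assuming the relation object can render its database (the macro body is illustrative, not the final implementation):

```sql
{% macro redshift__create_schema(relation) -%}
  {#- Illustrative override: keep the database in the rendered name so the
      schema can be created in a datashare ("external") database. -#}
  {%- call statement('create_schema') -%}
    create schema if not exists {{ relation.without_identifier().include(database=True) }}
  {%- endcall -%}
{% endmacro %}
```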

related

### related issues
- [ ] https://github.com/dbt-labs/dbt-core/issues/3179
- [ ] https://github.com/dbt-labs/dbt-core/issues/3236
- [ ] https://github.com/dbt-labs/dbt-core/issues/5297
- [ ] #94
- [ ] #179
- [ ] #217
- [ ] #281 
- [ ] #501
- [ ] #555
- [ ] #656
- [ ] #736
- [ ] https://github.com/dbt-labs/dbt-redshift/pull/905

relevant AWS Redshift docs

Who will this benefit?

dbt-redshift users who want to use Datasharing to federate a dbt project across databases and clusters

ryanwith commented 7 months ago

Q: What data sharing scenarios could feasibly be supported? There are a few different scenarios that could be supported. Here are some of the key ones:

Q: The current & future best practice for fetching metadata? Ideally, you leverage the redshift-connector metadata APIs and don't have to think about Redshift system tables. We plan to offer Metadata API support in line with our JDBC driver when we announce the General Availability of Multi-Warehouse Writes through Data Sharing. If this isn't possible because our APIs don't support your use cases, let's chat so we can figure out how to fix this. If we need to provide more info in our APIs, we need to provide more info in our APIs.

Q: The performance impact of using SVV_ tables for all SKUs (ra3, serverless, dc2)? Again, ideally you leverage the redshift-connector metadata APIs and don't think about Redshift system tables. However, if you need to use them, a few recommendations:

dataders commented 7 months ago

re: metadata over the API I think we're totally on the same page. Can discuss that more down the line.

> If this isn't possible because our APIs don't support your use cases, let's chat so we can figure out how to fix this. If we need to provide more info in our APIs, we need to provide more info in our APIs.

What concerns me most is not the APIs lacking information; rather, my issue is maintaining our users' performance expectations. @ryanwaldorf check out #741 (and my response).

dataders commented 7 months ago

with respect to workloads, I totally buy the business case for cross-cluster reads & writes. My concern is the perspective of a dbt end-user.

How a dbt user thinks about their dbt project

Mapping dbt perspective to Redshift Datasharing

If the Redshift Datasharing experience is to map directly to the expectations communicated by the dbt framework, then the boundary between models in different clusters has to feel as invisible as the boundary between models that share a database.

Based on how much of the below is met, we have the following options:

| Support level | Action |
| --- | --- |
| Full | No action; dbt assumptions hold for cross-database |
| Nearly complete (some caveats) | Heavily communicate the caveats |
| Partial | Recommend dbt Mesh project separation |

One example caveat is that we already know that a view of a table in another db is not supported. This should be clearly documented and communicated (at least until it is supported).

If the full set of Datasharing cross-database capabilities results in an intra-dbt-project experience that is not "invisible", perhaps we should instead take the stance that "best practice" is to mirror the database boundary as a dbt project boundary. This aligns well with the concept of dbt Mesh.

Expectations of full-support cross-database queries in dbt

For example, creating the following dependencies between objects within the same database is easy, but what if they're in distinct databases? Assuming that the administration to connect the clusters has already been done, which of the following are possible today with Datasharing?

  1. create a table that depends on a table in another cluster's database
  2. create a table that depends on a view in another cluster's database
  3. create a view that depends on a table in another cluster's database
  4. create a view that depends on a view in another cluster's database

Adding on to that:

ryanwith commented 7 months ago

Saw the issue with the driver performance over time. I can't speak to what happened with those previous releases but we should be able to address this together going forward through better communication between our teams. All the key folks for this going forward will be included in our Friday meeting.

ryanwith commented 7 months ago

> If the Redshift Datasharing experience is to map directly to the expectations communicated by the dbt framework, then the boundary between models in different clusters has to feel as invisible as the boundary between models that share a database.

This is our intent. With the metadata changes we're working on we should get there.

> One example caveat is that we already know that a view of a table in another db is not supported. This should be clearly documented and communicated (at least until it is supported).

Not 100% true, since it's available in SVV_DATASHARE_OBJECTS. However, it requires additional work, hence why we're making the metadata changes. You shouldn't have to do this.
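
For reference, a minimal sketch of inspecting what a datashare exposes via SVV_DATASHARE_OBJECTS (the share name is hypothetical; see the AWS docs for the full column list):

```sql
-- List the objects (schemas, tables, views) shared through a given datashare.
select share_name, object_type, object_name
from svv_datashare_objects
where share_name = 'my_datashare';
```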

For what's possible:

  1. create a table that depends on a table in another cluster's database -- possible, no restrictions
  2. create a table that depends on a view in another cluster's database -- possible, no restrictions
  3. create a view that depends on a table in another cluster's database -- late-binding views yes, standard views no (see the sketch after this list)
  4. create a view that depends on a view in another cluster's database -- late-binding views yes, standard views no
  5. temp tables -- shouldn't be any restrictions, but I may need a bit more clarity
  6. we don't have support for all alters yet. Add/remove column and rename table/view/column will be possible at GA. Some others, like alter dist style, will take a bit longer. If you could tell me which alters are most important so that we can make sure we're prioritizing right, that would be appreciated.
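
To make the view cases concrete, here is a hedged sketch of the distinction, assuming the local database is FOO and a datashare database BAR is already mounted (all object names are hypothetical):

```sql
-- Connected to database FOO; BAR is a datashare ("external") database.

-- 1./2. A table can depend on a table or view in another cluster's database,
--       read via a three-part name.
create table analytics.fct_orders as
select *
from bar.public.orders;

-- 3./4. A standard view cannot reference the cross-database object; it has to
--       be a late-binding view (WITH NO SCHEMA BINDING).
create view analytics.v_orders as
select *
from bar.public.orders
with no schema binding;
```
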
dataders commented 7 months ago

> 3. create a view that depends on a table in another cluster's database--possible, no restrictions

I'm surprised to hear that this is supported because I literally got an error when I tried to do this. Will dig up the specific error message again.

ryanwith commented 7 months ago

Wait, sorry, I messed up my answer. You did get an error. It needs to be a late-binding view. I updated my answer above and bolded the changes.