jtcohen6 opened 2 years ago
Any updates on this topic?
Do you plan to resolve this issue?
@jtcohen6 Any update on timeline for this one? This would be a big help.
per BLG -- maybe use protocols?
@jtcohen6 Is there any update on this? We're looking to reduce the Snowflake costs of our pipelines, and this functionality would support us in that.
Thank you to all of you that have asked about this recently -- it's helpful for us to see the interest level.
While we don't currently have a timeline for implementing this feature, we are still interested in it.
I’ve created a dbt project to demonstrate how to implement WAP on table materializations using “dummy” post-test-hooks:

1. Create the wap model:

```sql
-- my_table_wap.sql
{{ config(
    table_type='iceberg',
    s3_data_naming='unique',
    s3_data_dir=transform_table.generate_s3_location(),
) }}

SELECT 1 AS id
```
2. Create a view model that references the wap model, then drop the view and rename the wap table via post-hooks:

```sql
-- my_table.sql
{{ config(
    materialized='view',
    post_hook=[
        "DROP VIEW {{ this }}",
        "{{ rename_relation(ref('my_table_wap'), this) }}",
    ],
) }}

SELECT * FROM {{ ref('my_table_wap') }}
```
It applies the approach to the example [jaffle_shop_db](https://github.com/dbt-labs/jaffle_shop_duckdb) dbt project, which uses the [dbt-duckdb](https://github.com/duckdb/dbt-duckdb) adapter.
Would this feature replace the need for these “dummy” post-test-hooks? Otherwise I'm happy to raise a ticket for implementing post-test-hooks, since my approach is just a workaround.
@SoumayaMauthoorMOJ cool that you were able to create an example of applying Write-Audit-Publish (WAP) with `dbt build`!

This issue (https://github.com/dbt-labs/dbt-adapters/issues/212) seems focused on enabling use cases like `use warehouse` in dbt-snowflake for dbt tests rather than WAP.
Rather, what you are asking about seems like it is covered by this Discussion: https://github.com/dbt-labs/dbt-core/discussions/5687. If applicable, do you want to add your thoughts or questions to that Discussion?
@dbeatty10 thanks for clarifying. I did add a comment already (see here), but I didn't get a response, so I thought an issue might be a good way to push the idea forward?
@SoumayaMauthoorMOJ Ah, I see that now! https://github.com/dbt-labs/dbt-core/discussions/5687 is still the best place at this stage, and it would also be the perfect place to share this!
If you are looking to instigate further discussion, you could try posing some questions that invite the community to think and interact with your idea.
e.g., you could ask folks for feedback on the pros/cons of your approach. You could also ask if anyone has ideas how to enable that pattern in dbt-core without the need for the “dummy” post-test-hooks.
Hi, have there been any updates on this issue? Alternative suggestions are also welcome. We are trying to get tests to run on the same warehouse as models to optimize time and cost. Thank you!
@vskarine We're still interested in this feature, but we don't currently have a timeline for implementing it.
Thanks for the update. I guess for now the only way for us to do it is to change the warehouse size in the profile before each pipeline runs.
Hey @dbeatty10, although it will probably take some time, I would like to try to work on this. 😊
Awesome @bjarneschroeder ! 🏆
Give it a go and let us know if you need any help along the way.
Hey @dbeatty10, quick update:
I'm on it, but I found that it takes me more time than I expected to understand the overall structure of dbt and how the different parts of the project interact with each other under the hood. After diving deeper into the project, playing around with a custom project, and implementing some first changes, I wanted to check out the test cases which currently cover hook execution for the `run` command, so I can add similar cases for the `test` command. And although I can generally run tests successfully with `make`, I currently struggle to find good cases to debug, so I can inspect the internal program state at a specific point in the execution.

I found what looked like appropriate test cases in `dbt/tests/adapter`, but I struggled to execute them. I then found out that they were removed in this commit, and that some restructuring of tests is going on (see dbt-labs/dbt-core#9513); from the dbt-postgres repo it seems the moved tests have not been integrated yet.

TLDR: I'm still working toward a good understanding of the issue and of what matters for a good implementation. Because hooks always require interaction with adapters, it's tricky for me to find a good way of debugging things and understanding stuff better. I'm on it; it's a grind (a fun one though). ☺️
Hey @bjarneschroeder, any luck with some progress on this? :)
Hey @jwolos I started a new job a few weeks ago which keeps me very busy. I unfortunately do not really have the time to work on this at the moment. Sorry!
Upvoting this as a request from several active customers.
We are using Snowflake, and for our environment the default warehouse is a Small. This is fast enough to run 95% of our models. Where we have a model whose number of rows pushes the limit of the warehouse, we have used the `snowflake_warehouse` config with a `get_warehouse` macro to set a larger warehouse. Using the macro, we can set different warehouses for each of the environments the model is being built in: Dev, UAT, Production.

What we are experiencing now is that the model is successfully built with an XLarge warehouse, but the tests are failing because they default back to the environment's default warehouse, which is a Small. The test is a unique test on a table with 14b rows, so it is not complex SQL, just a lot of data.
@AlexanderStephenson beginning in dbt-core v1.9 (currently in beta), you can do something like this to configure `snowflake_warehouse` for your models and data tests:
```yaml
models:
  - name: my_model
    config:
      snowflake_warehouse: something
    columns:
      - name: id
        tests:
          - accepted_values:
              values: [2]
              config:
                severity: warn
                snowflake_warehouse: something_else
```
Wanna give that a shot and see if it works for you?
Adapting from https://github.com/dbt-labs/dbt-snowflake/issues/23#issuecomment-1223773953:
Let's talk about `adapter.pre_model_hook` + `adapter.post_model_hook`.

**Background**

Here's where they're triggered to run, right before and after a model materialization:
https://github.com/dbt-labs/dbt-core/blob/8c8be687019014ced9be37c084f944205fc916ab/core/dbt/task/run.py#L279-L283
These are different from the user-provided `pre-hook` and `post-hook`, which run within materializations. (I wish we had named these things a bit more distinctly.) They are also no-ops by default:

https://github.com/dbt-labs/dbt-core/blob/8c8be687019014ced9be37c084f944205fc916ab/core/dbt/adapters/base/impl.py#L1093-L1116
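Stripped down, the default no-ops and the way the model runner wraps them look roughly like this (a paraphrased sketch of the linked source, not a verbatim copy):

```python
from typing import Any, Mapping


class BaseAdapter:
    """Sketch of the two "internal" hooks; both are no-ops by default."""

    def pre_model_hook(self, config: Mapping[str, Any]) -> Any:
        # Runs right before a model materialization. An adapter may override
        # this and return a context value, which is handed back to
        # post_model_hook once the materialization finishes.
        return None

    def post_model_hook(self, config: Mapping[str, Any], context: Any) -> None:
        # Runs right after the materialization, even if it raised, so the
        # adapter can restore any state it changed.
        pass


# In the model runner (see the run.py link above), the pair wraps the
# materialization so post_model_hook always runs:
#
#   hook_ctx = self.adapter.pre_model_hook(context_config)
#   try:
#       result = ...  # execute the materialization macro
#   finally:
#       self.adapter.post_model_hook(context_config, hook_ctx)
```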
For certain adapter plugins, these "internal" hooks are the appropriate mechanism for database-specific behavior that needs to wrap a node's execution. For instance, on dbt-snowflake, this is where we turn the `snowflake_warehouse` config into a `use warehouse` command. @dataders and I were just discussing the same principle for `use database` (compute and storage) in serverless Azure Synapse (?).
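In dbt-snowflake, that override looks roughly like the following (a simplified sketch building on the `BaseAdapter` sketch above; `_get_warehouse` and `_use_warehouse` stand in for the adapter's internal helpers that run `select current_warehouse()` and `use warehouse ...`):

```python
from typing import Any, Mapping, Optional


class SnowflakeAdapter(BaseAdapter):  # sketch, not the exact source
    def pre_model_hook(self, config: Mapping[str, Any]) -> Optional[str]:
        # Only switch warehouses when the model configures a non-default one.
        default_warehouse = self.config.credentials.warehouse
        warehouse = config.get("snowflake_warehouse", default_warehouse)
        if warehouse is None or warehouse == default_warehouse:
            return None
        previous = self._get_warehouse()  # e.g. select current_warehouse()
        self._use_warehouse(warehouse)    # e.g. use warehouse <name>
        return previous                   # handed back to post_model_hook

    def post_model_hook(
        self, config: Mapping[str, Any], context: Optional[str]
    ) -> None:
        # Restore whatever warehouse was active before this model ran.
        if context is not None:
            self._use_warehouse(context)
```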
**Current limitations**

- `TestRunner` does not inherit from `ModelRunner` or extend its `execute` method (where `adapter.pre_model_hook` + `adapter.post_model_hook` get called). We've gotten the request to support `snowflake_warehouse` on tests several times.
- Queries run from within Jinja (e.g. via `run_query`) will use the default warehouse (`target.warehouse`, or whatever is configured for the user/role), rather than the value of `snowflake_warehouse` configured on the model.

**Why improve this**
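Supporting these hooks in the test task would let configs like `snowflake_warehouse` apply to test queries too, so an expensive test (like the unique test on 14b rows mentioned above) can run on a larger warehouse. A hypothetical sketch of a test runner adopting the same wrapping (assumed names such as `run_test`; not dbt-core's actual implementation):

```python
class TestRunner:  # sketch; the real TestRunner subclasses CompileRunner
    def execute(self, test, manifest):
        context_config = test.config  # the test node's resolved config
        hook_ctx = self.adapter.pre_model_hook(context_config)
        try:
            # run_test stands in for the existing logic that executes the
            # compiled test query and builds the test result
            result = self.run_test(test, manifest)
        finally:
            # always restore adapter state (e.g. switch the warehouse back)
            self.adapter.post_model_hook(context_config, hook_ctx)
        return result
```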