**Open** · svdimchenko opened 2 weeks ago
Hey @svdimchenko, a workaround that might work for you is to separate the artifact uploading into a different job. This can be done as follows:

Disable the automatic artifact upload in your `dbt_project.yml`:

```yaml
vars:
  disable_dbt_artifacts_autoupload: true
```

Then upload the artifacts in a separate job:

```
dbt run --select edr.dbt_artifacts
```

This will make sure that the metadata is not uploaded after every job (avoiding the parallel uploading), but is still kept up to date.
Let me know if this helps 🙏
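For concreteness, a minimal sketch of what the two-job split could look like in a GitHub Actions workflow — the workflow layout, job names, and schedule here are purely illustrative assumptions; only the `disable_dbt_artifacts_autoupload` var and the `edr.dbt_artifacts` selector come from the workaround above:

```yaml
# .github/workflows/dbt.yml — hypothetical CI layout, names are illustrative
name: dbt
on:
  schedule:
    - cron: "0 * * * *"
jobs:
  transform:
    runs-on: ubuntu-latest
    steps:
      # Regular transformations; artifact autoupload is disabled
      # via disable_dbt_artifacts_autoupload in dbt_project.yml
      - run: dbt run
  upload_artifacts:
    runs-on: ubuntu-latest
    needs: transform   # upload metadata once, after transformations finish
    steps:
      - run: dbt run --select edr.dbt_artifacts
```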
**Is your feature request related to a problem? Please describe.**
Currently I'm using AWS Athena as my query engine for dbt transformations. The problem with integrating elementary is the following:
**Describe the solution you'd like**
There are several possible solutions I can offer to solve the issue:

1. Implement partitioning for elementary tables and utilise the partition fields in the monitoring models. Unfortunately, we cannot use the `created_at` field with the `hive` table format, so we'll need to add a `created_at_date` field and utilise it for partition pruning.
2. Implement the possibility to load dbt artifacts to a separate backend, for instance AWS RDS. Currently, elementary loads data from the dbt context, and there is no way to work with dbt's JSON files (`run_results.json`, `manifest.json`, etc.). Here is datahub's example of how JSON files can be ingested into an external database.
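As a rough sketch of the partitioning idea, assuming the dbt-athena adapter's `partitioned_by` config: a model could derive a date column from `created_at` and partition on it, so monitoring queries that filter on the date get hive-style partition pruning. The model and source names below are illustrative, not elementary's actual models:

```sql
{{ config(
    materialized='incremental',
    partitioned_by=['created_at_date']
) }}

select
    *,
    -- derived date column, usable for partition pruning in hive tables
    cast(created_at as date) as created_at_date
from {{ ref('some_elementary_source') }}  -- hypothetical source model
{% if is_incremental() %}
  -- only load rows newer than what is already in the table
  where created_at > (select max(created_at) from {{ this }})
{% endif %}
```

Monitoring models would then filter on `created_at_date` (rather than `created_at`) so Athena can skip partitions.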
**Describe alternatives you've considered**
As a quick workaround I can keep the elementary tables in hive format and set up an S3 bucket lifecycle policy to remove outdated elementary data, but such an approach requires accurate lifecycle tuning for every specific elementary table, which can be tricky.
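For illustration, a lifecycle rule for one such table might look roughly like this — the prefix and retention period are placeholders, and a rule like this would be needed per table prefix, which is exactly the tuning burden mentioned above:

```json
{
  "Rules": [
    {
      "ID": "expire-old-elementary-data",
      "Filter": { "Prefix": "elementary/elementary_test_results/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
```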
**Would you be willing to contribute this feature?**
Once we clarify the most appropriate way of Athena integration, I can of course contribute.