DistributedDoge closed this 6 months ago
I think it is ready to merge. I really do appreciate the notes though! They seem very in line with the Dagster way; noting them here for a future refactor:

- `dune_table_x` assets (in a `dune` group), not operations like `push_to_dune`
- instead of the op job `refresh dune`, create an asset job that materializes the `dune` group
- `DuneAPI` logic can be packaged into a resource for easier re-use and configuration

What I'd love to have, and I don't know how easy it would be, is for us to generate these Dune assets dynamically, basically based on tags of upstream assets, similar to how the new dbt integration treats dbt assets.
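The tag-driven idea above can be sketched in plain Python (no Dagster imports here, and every name is hypothetical): a factory scans upstream asset definitions for a `dune` tag and emits one Dune-upload spec per match. In Dagster itself this would produce `@asset` definitions rather than dicts.

```python
# Plain-Python sketch of generating Dune assets from upstream tags.
# All names (make_dune_asset_specs, the "dune" tag) are hypothetical.

def make_dune_asset_specs(upstream_assets):
    """Yield one Dune-upload spec for every upstream asset tagged for Dune."""
    for asset in upstream_assets:
        if asset.get("tags", {}).get("dune") == "true":
            yield {
                "name": f"dune_{asset['name']}",  # e.g. dune_table_x
                "group": "dune",
                "depends_on": asset["name"],
            }

# Example: only the tagged asset gets a matching Dune asset.
upstream = [
    {"name": "table_x", "tags": {"dune": "true"}},
    {"name": "votes", "tags": {}},
]
specs = list(make_dune_asset_specs(upstream))
```

A real implementation would mirror how `load_assets_from_dbt_project` fans out one asset per dbt model, but driven by our own tags instead of a dbt manifest.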
This is something I can test for Flipside, where we can actually afford to load multiple tables without hitting the upload limit.
One thing before merging: do you mind checking whether re-uploading the same or new data to Dune is possible? I think it might give an error, and we might need to do an upsert or something like that.
Last time I tried, it was working as-is. The data branch of `DuneAPI` is limited to a single POST endpoint, where you either create a new dataset or overwrite an existing one under the same name, so re-uploading simply replaces the data.
Note that currently there is no way to actually remove a dataset, except through the web interface.
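For reference, a minimal sketch of that single upload endpoint, assuming Dune's public CSV-upload API (`POST /api/v1/table/upload/csv` with an `X-Dune-Api-Key` header); the helper name is made up, and the request is built but deliberately not sent:

```python
import json
import urllib.request

DUNE_UPLOAD_URL = "https://api.dune.com/api/v1/table/upload/csv"

def build_upload_request(api_key: str, table_name: str, csv_text: str):
    """Build (but do not send) the POST that creates or overwrites a Dune
    dataset. Re-running with the same table_name overwrites the data."""
    payload = json.dumps({"table_name": table_name, "data": csv_text}).encode()
    return urllib.request.Request(
        DUNE_UPLOAD_URL,
        data=payload,
        headers={
            "X-Dune-Api-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_upload_request("your-api-key", "table_x", "col_a,col_b\n1,2\n")
# urllib.request.urlopen(req) would perform the actual upload.
```

Because the endpoint is create-or-overwrite, there is no separate upsert call to worry about; deleting still requires the web interface.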
Alright! Will try to clear some time tomorrow morning (generate API keys and stuff) and get this deployed! :rocket:
This PR builds an `op` and three jobs to see if we can play nicely with the new Dagster primitives and address #38. It does not modify the existing workflow in any way (or at least I hope it doesn't), but allows us to run new CLI commands.
In the Dagster UI, navigate to Overview => Jobs to see the new additions:

- `build all`: materializes everything
- `build_indexer_assets`: builds the assets provided by the indexer, except votes, for fast testing
- `refresh dune`: materializes an asset and then uses the op to upload said asset to Dune. To succeed, the job needs the envvar `DUNE_API_KEY` to be set first.
Once uploaded, the dataset becomes accessible like so.