Closed fleupold closed 2 months ago
Be aware that this endpoint consumes API credits, so running it regularly might use up a lot of them.
I wasn't able to find any details on how many credits are used (so I ended up thinking it might be free). @bh2smith do you know where this is documented? Not here afaict.
Ah, it's just part of the storage plan on the data set. But it looks like you've got 15 GB on the Plus plan, so nothing to worry about.
This PR dramatically simplifies the way we sync app_data into Dune. Instead of looking for hashes on Dune for which we don't have the app_data pre-image and then trying to fetch that data from IPFS, we simply mirror the entire app_data pre-image table we have locally to Dune using their CSV upload feature.
We ensured that we are no longer seeing orders for which app_data can only be retrieved from IPFS (on the contrary, we see more and more orders for which the pre-image has only been written into the DB).
In order to achieve this, we rewrote sync/app_data.py and its configuration to basically perform a full table scan from the backend, which gets written using the CSV upload feature of the dune python client. App data for each network (mainnet, gnosis, arbitrum) will be written into its own table (since each network requires a separate db connection to the designated target db). The table name can be specified as an additional parameter (which feels a bit 🤡; I'm not sure how sync-job-specific arguments were envisioned in the current architecture).
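The new flow can be sketched roughly as follows. This is a minimal sketch, not the actual implementation: the column names, the `fetch_app_data_rows` helper, and the table name are hypothetical, and the upload step assumes the dune-client Python package's CSV upload feature.

```python
import csv
import io

def rows_to_csv(rows, header=("app_hash", "content")) -> str:
    """Serialize backend rows (full table scan result) into a CSV payload.

    Column names here are illustrative, not the actual schema.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Sketch of the per-network upload, assuming an API key and the
# dune-client package; fetch_app_data_rows is a hypothetical helper
# that scans the backend DB for the given network:
#
# from dune_client.client import DuneClient
#
# dune = DuneClient(api_key="...")
# dune.upload_csv(
#     table_name="app_data_mainnet",  # one table per network
#     data=rows_to_csv(fetch_app_data_rows("mainnet")),
# )
```

Since each network requires its own backend DB connection, the scan-and-upload step would simply be repeated once per network with its own target table name.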
Test Plan
Both

`python3 -m src.main --sync-table app_data`

as well as

`python -m pytest tests/e2e/test_sync_app_data.py`

pass.