The schema proposed here is [order_uid: str | tx_hash: str | solver: str | data ] where (currently) data takes the following form:
Data {
surplus_fee: str (we could make this nullable, or use zero in place of null)
amount: float
safe_liquidity: boolean (nullable)
}
Examples of this are provided in the unit test.
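To make the shape concrete, here is a minimal sketch of one record under the proposed schema. The class names and example values are hypothetical, not taken from the actual codebase:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Data:
    surplus_fee: Optional[str]  # string to avoid large-integer overflow; nullable (or zero)
    amount: float
    safe_liquidity: Optional[bool]  # nullable

@dataclass
class OrderRecord:  # hypothetical name for the [order_uid | tx_hash | solver | data] row
    order_uid: str
    tx_hash: str
    solver: str
    data: Data

# Hypothetical example values, shaped like the real identifiers.
record = OrderRecord(
    order_uid="0x" + "aa" * 56,
    tx_hash="0x" + "bb" * 32,
    solver="0x" + "cc" * 20,
    data=Data(surplus_fee="1004746975155240", amount=37.0, safe_liquidity=None),
)
print(asdict(record)["data"]["surplus_fee"])
```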
Additional Notes
Technically the solver column isn't necessary since it can be obtained from the tx_hash. Let me know if we think it should be removed.
I spoke with Dune today about string types in spark SQL and their answer was as I expected:
We use decimal(38,0) or just strings for large integers in spark. DuneSQL will provide native support for large integers but it’s not directly available in spark sql
Note that we will use strings for the surplus_fee and cast to Decimal in the spell that parses this content.
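The string-to-Decimal step can be sketched as follows. This is an illustrative Python version of the cast (the actual cast happens in the spell's SQL); the function name and the null-as-zero handling are assumptions:

```python
from decimal import Context, Decimal

# Mirror spark's decimal(38,0): up to 38 significant digits, integer values.
ctx = Context(prec=38)

def parse_surplus_fee(surplus_fee: str) -> Decimal:
    # Treat null/empty as zero, per the nullable-or-zero note above (assumption).
    if not surplus_fee:
        return Decimal(0)
    return ctx.create_decimal(surplus_fee)

print(parse_surplus_fee("1004746975155240"))
```

The point of round-tripping through strings is that values like wei-denominated fees can exceed 64-bit integer range, which spark handles safely only as decimal(38,0) or strings.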
Test Plan
With all the appropriate credentials, you can run:
This will process orders and write the results locally. A sample output with two rows would look like this: