hexonaut opened this issue 4 years ago
Hi Sam. For those who aren't as familiar with SQL but are still curious about what is being proposed, could you provide a pseudo-example response for each query?
Each of these takes the form `<table>_time(start, end, granularity)`. So, for example, `spot_poke_time('2020-05-01'::DATE, '2020-06-01'::DATE, '1 day'::INTERVAL)` returns the ilk price over the month of May in 1-day buckets. Similarly, `ilk_ink_time('2020-05-01'::DATE, '2020-06-01'::DATE, '1 day'::INTERVAL)` returns the amount of collateral an ilk has over the month of May in 1-day buckets.
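As a rough sketch of how such a function could be implemented in Postgres (the `ilk_ink_history` table and its columns are assumptions for illustration, not the project's actual schema):

```sql
-- Rough sketch only: ilk_ink_history and its columns are assumed for
-- illustration and are not the project's actual schema.
CREATE OR REPLACE FUNCTION ilk_ink_time(start_time TIMESTAMP, end_time TIMESTAMP, granularity INTERVAL)
RETURNS TABLE (bucket TIMESTAMP, ilk_id INTEGER, ink NUMERIC, block_number BIGINT) AS $$
  SELECT buckets.bucket, r.ilk_id, r.ink, r.block_number
  FROM generate_series(start_time, end_time, granularity) AS buckets(bucket)
  CROSS JOIN LATERAL (
    -- latest known value per ilk as of the end of this bucket
    SELECT DISTINCT ON (h.ilk_id) h.ilk_id, h.ink, h.block_number
    FROM ilk_ink_history h
    WHERE h.block_timestamp < buckets.bucket + granularity
    ORDER BY h.ilk_id, h.block_timestamp DESC
  ) r
$$ LANGUAGE sql STABLE;
```

The `generate_series` call produces one row per bucket, and the `LATERAL` subquery picks the most recent value per ilk within each bucket boundary.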
Here is an example in GraphQL form:
Query:
```graphql
{
  ilkInkTime(startTime: "2020-05-01", endTime: "2020-06-01", granularity: {days: 1}) {
    nodes {
      date,
      ilkId,
      ink,
      blockNumber,
      blockTimestamp
    }
  }
}
```
Response:
```json
{
  "data": {
    "ilkInkTime": {
      "nodes": [
        {
          "date": "2020-05-28T00:00:00+00:00",
          "ilkId": 10,
          "ink": "14243787879208000000000000",
          "blockNumber": "10070473",
          "blockTimestamp": "1589542052"
        },
        {
          "date": "2020-05-11T00:00:00+00:00",
          "ilkId": 1,
          "ink": "648226674586864668452888",
          "blockNumber": "10047822",
          "blockTimestamp": "1589238672"
        },
        {
          "date": "2020-05-12T00:00:00+00:00",
          "ilkId": 10,
          "ink": "13323780487907000000000000",
          "blockNumber": "10052841",
          "blockTimestamp": "1589306670"
        },
        {
          "date": "2020-05-05T00:00:00+00:00",
          "ilkId": 9,
          "ink": "282783589266214677304525",
          "blockNumber": "10007975",
          "blockTimestamp": "1588705424"
        },
        {
          "date": "2020-05-10T00:00:00+00:00",
          "ilkId": 508622,
          "ink": "57743128310000000000",
          "blockNumber": "10041360",
          "blockTimestamp": "1589152472"
        },
        {
          "date": "2020-05-04T00:00:00+00:00",
          "ilkId": 1,
          "ink": "90936570123725291376599",
          "blockNumber": "10002705",
          "blockTimestamp": "1588634910"
        },
        ...
      ]
    }
  }
}
```
Does this explain it better?
It also probably makes sense to add an optional ilk filter and to order the result by date.
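One way to sketch that is against the function's output directly; here `ilk_ink_time` is the function discussed above, and the `bucket`/`ilk_id` output column names are assumptions for illustration:

```sql
-- Hypothetical usage: filter to a single ilk and order deterministically.
-- The ilk filter could instead be an optional function parameter.
SELECT *
FROM ilk_ink_time('2020-05-01'::DATE, '2020-06-01'::DATE, '1 day'::INTERVAL)
WHERE ilk_id = 1
ORDER BY bucket, ilk_id;
```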
Thanks for the input! Will put some thought into considering how we could support these kinds of queries 👍
Wrote a PR for ilk_snapshot: https://github.com/makerdao/vdb-mcd-transformers/pull/202
I think this format can be generalized to many of the other trigger/storage tables.
In particular, I am suffixing function names with `time` to indicate a bucketed, time-series query, and requiring `bucket_start`, `bucket_end`, and `bucket_interval` parameters.
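Under that convention, a definition might be sketched as follows (the schema, return type, and body are illustrative assumptions; see the linked PR for the real definition):

```sql
-- Illustrative signature only; not the PR's actual code.
CREATE FUNCTION api.ilk_snapshot_time(
    bucket_start    TIMESTAMPTZ,
    bucket_end      TIMESTAMPTZ,
    bucket_interval INTERVAL
) RETURNS SETOF api.ilk_snapshot
AS $$
  -- body omitted: select the latest snapshot per ilk in each bucket
  ...
$$ LANGUAGE sql STABLE;
```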
PR for auctions: https://github.com/makerdao/vdb-mcd-transformers/pull/211
Continuing on from here: https://github.com/makerdao/vdb-mcd-transformers/issues/179
I think a common use case for this data is going to be displaying time series data on these values/events.
I think we should provide a common, generic API for querying stats with three parameters: a start datetime, an end datetime, and a granularity (bucket size). I've added these three for collateral price, savings Dai and collateral locked:
I'm not the best with Postgres, so maybe there's a cleaner way to write those, but I think this pattern can be generalized to produce aggregations over most stats. For single values we can provide the latest value in the bucket (as in the examples above); for event-based stats such as Draws/Wipes we can provide both SUM and AVERAGE aggregators.
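For the event-based case, a bucketed SUM/AVG might be sketched like this (the `draw_events(amount, block_timestamp)` table is a hypothetical stand-in, not the project's actual schema):

```sql
-- Hypothetical example: draw_events(amount, block_timestamp) is an assumed table.
SELECT b.bucket,
       SUM(d.amount) AS total_drawn,
       AVG(d.amount) AS average_draw
FROM generate_series('2020-05-01'::TIMESTAMP,
                     '2020-06-01'::TIMESTAMP,
                     '1 day'::INTERVAL) AS b(bucket)
LEFT JOIN draw_events d
  ON d.block_timestamp >= b.bucket
 AND d.block_timestamp <  b.bucket + '1 day'::INTERVAL
GROUP BY b.bucket
ORDER BY b.bucket;
```

The `LEFT JOIN` keeps empty buckets in the output (with NULL aggregates) rather than dropping them, which is usually what a time-series chart wants.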
Also, does it make sense to have all these definitions in one giant schema.sql file? Maybe it makes sense to break these apart into more manageable pieces?