oceanprotocol / market

🧜‍♀️ THE Data Market
https://market.oceanprotocol.com

Track total data $ volume consumed / day, and total data $ volume bought / day #380

Closed: trentmc closed this issue 2 years ago

trentmc commented 3 years ago

Motivations:

What:

Notes:

trentmc commented 3 years ago

cc @kremalicious

kremalicious commented 3 years ago

blocked by https://github.com/oceanprotocol/ocean-subgraph/issues/10

mihaisc commented 3 years ago

Most of these will be solvable after https://github.com/oceanprotocol/ocean-subgraph/issues/180. For consume volume, I think we can do a sum of orders like: spotPrice at consume timestamp 1 + spotPrice at consume timestamp 2, and so on, and have this per day. The issue is that spotPrice is expressed in the pair token of that pool (if the pool is DT/OCEAN then spotPrice is in OCEAN; if the pool is DT/ETH then spotPrice is in ETH), so we end up with an array of tokens.

The next step would be to convert all of them to fiat at that day's conversion rate, and I believe this gives the consume volume for day x. We can add consumeVolume on the poolSnapshot as well, and to get total consumption for a datatoken you just fetch all pools/fixed-rate exchanges (FREs) for that datatoken (this needs some more thinking to create an optimal flow). A sketch of the per-day aggregation is below.
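For illustration, here is a minimal TypeScript sketch of that per-day aggregation. The `Order` shape, its field names, and the `fiatRate` lookup are assumptions made up for this sketch; the real order data would come from ocean-subgraph once oceanprotocol/ocean-subgraph#180 lands, and the fiat rates from whatever price feed gets chosen.

```ts
// Hypothetical shape of a consume/order event. Field names are assumptions;
// the actual schema depends on ocean-subgraph#180.
interface Order {
  timestamp: number;   // unix seconds of the consume event
  spotPrice: number;   // price at consume time, in the pool's pair token
  pairToken: string;   // e.g. "OCEAN" or "ETH", depending on the pool
}

// Hypothetical lookup: fiat (e.g. USD) rate for a token on a given day.
declare function fiatRate(token: string, day: string): number;

// Bucket a timestamp into a "YYYY-MM-DD" day string.
function dayOf(tsSeconds: number): string {
  return new Date(tsSeconds * 1000).toISOString().slice(0, 10);
}

// Sum spotPrice-at-consume-time per day, converting each order's pair
// token to fiat at that day's conversion rate.
function dailyConsumeVolume(orders: Order[]): Map<string, number> {
  const volumeByDay = new Map<string, number>();
  for (const o of orders) {
    const day = dayOf(o.timestamp);
    const fiat = o.spotPrice * fiatRate(o.pairToken, day);
    volumeByDay.set(day, (volumeByDay.get(day) ?? 0) + fiat);
  }
  return volumeByDay;
}
```

Converting at the rate of the consume day (rather than a single end-of-period rate) is what makes the per-day totals comparable across pools with different pair tokens.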

mihaisc commented 2 years ago

Related to https://github.com/oceanprotocol/market/issues/871

trentmc commented 2 years ago

FYI, we've built what we needed for this, for DF (Data Farming). Maybe some of it is useful for this issue. Or maybe this issue is sufficiently redundant with the work done and the outstanding issue noted below that we can now close this one.

Key things that DF has:

Then, DF has infrastructure around that to meet its needs:

Here's how the data flows (GSlide source).

One thing that DF does not have yet, but is slated for later and would be useful to the broader Ocean ecosystem, is a Dune (or Dune-style) dashboard to track TVL, DCV, etc. over time. This is tracked in df-py#19.

mihaisc commented 2 years ago

Cool, thanks. I'll spec something lighter then, and think of some integration/link to DF. No need to do redundant stuff.