citycoins / api

A simple API to interact with Stacks and CityCoins data.
https://api.citycoins.co/docs
Apache License 2.0

Feature: aggregate mining and stacking data #43

Open whoabuddy opened 2 years ago

whoabuddy commented 2 years ago

This will likely turn into an endpoint request in the end, but it starts as an exploration of how best to compile and serve data similar to the mining dashboards available now: https://miamining.com and https://mining.nyc

Some of the raw data is available through API endpoints, but the file sizes are growing pretty large: https://miamining.com/blocks https://miamining.com/winners https://miamining.com/stacking

Since the data is organized per block, I think this is a situation similar to #42 where we can take advantage of KV, although the logic is a bit more complex.


The KV key name could be mia-miningdata-{blocknumber}, with information stored for each block height since the contract activated.

The /blocks and /winners data above could be combined into one json record with a little more specificity:

```json
{
  "minersVerified": "boolean",
  "miners": {
    "minerAddress1": "ustx",
    "minerAddress2": "ustx",
    "minerAddressN": "ustx"
  },
  "winnerVerified": "boolean",
  "winnerClaimed": "boolean",
  "winner": {
    "miner": "address",
    "commitment": "ustx",
    "reward": "citycoins"
  }
}
```
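The record above could be typed roughly as follows; the interface and key-builder names (`MiningDataRecord`, `miningDataKey`) are illustrative sketches, not part of the existing API:

```typescript
// Illustrative types for the proposed per-block KV record.
// All names here are sketches, not part of the existing API.

interface WinnerInfo {
  miner: string;      // winning STX address
  commitment: number; // uSTX committed by the winner
  reward: number;     // CityCoins block reward
}

interface MiningDataRecord {
  minersVerified: boolean;
  miners: Record<string, number>; // miner address -> uSTX committed
  winnerVerified: boolean;
  winnerClaimed: boolean;
  winner?: WinnerInfo; // populated once winnerVerified is true
}

// KV key per the proposed naming scheme, e.g. "mia-miningdata-24497"
function miningDataKey(city: string, blockHeight: number): string {
  return `${city}-miningdata-${blockHeight}`;
}
```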

miners keeps track of the list of miners who participated in that block, which can be found by querying and filtering the transactions, then extracting the addresses.

minersVerified is set to true sequentially once the surrounding 200 blocks are accounted for and verified, since a mine-many transaction can span up to 200 blocks. This should be monitored/updated by a cron script.
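Since a mine-many commitment for a block can originate up to 200 blocks before that block, the cron check reduces to "has every block in that window been processed?". A minimal sketch, where the window constant and function name are assumptions based on the text above:

```typescript
// mine-many transactions can commit to blocks up to 200 ahead of where
// they are mined, so block H's miner list depends on blocks H-200 .. H.
const MINE_MANY_WINDOW = 200;

// Hypothetical cron-side check: true once every block whose transactions
// could contribute miners to `blockHeight` has been processed.
function canVerifyMiners(blockHeight: number, processed: Set<number>): boolean {
  const start = Math.max(0, blockHeight - MINE_MANY_WINDOW);
  for (let h = start; h <= blockHeight; h++) {
    if (!processed.has(h)) return false;
  }
  return true;
}
```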

winner keeps track of the winning miner, commitment, reward, and whether they claimed it.

winnerVerified is set to true once the winner data is populated via is-block-winner, and indicates the winner properties are available.

winnerClaimed is based on querying the winning miner address against can-claim-mining-reward once the winning address is known. This should be separate as the miner doesn't have to claim right away.

Both data points should be monitored/updated by a cron script.
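The winner side of that cron pass could look like the sketch below, with the contract reads (is-block-winner, can-claim-mining-reward) injected as callbacks. The wiring and names are hypothetical; only the contract function names come from the text above:

```typescript
// Sketch of the cron pass for winner data. The record shape matches the
// JSON above; the callbacks stand in for read-only contract calls.

interface WinnerStatus {
  winnerVerified: boolean;
  winnerClaimed: boolean;
  winner?: { miner: string; commitment: number; reward: number };
}

async function updateWinnerStatus(
  record: WinnerStatus,
  getBlockWinner: () => Promise<{ miner: string; commitment: number; reward: number } | null>, // backed by is-block-winner
  canClaimReward: (miner: string) => Promise<boolean> // backed by can-claim-mining-reward
): Promise<WinnerStatus> {
  const next = { ...record };
  if (!next.winnerVerified) {
    const winner = await getBlockWinner();
    if (winner) {
      next.winner = winner;
      next.winnerVerified = true;
    }
  }
  // a miner doesn't have to claim right away, so re-check until claimed
  if (next.winnerVerified && !next.winnerClaimed && next.winner) {
    // assumption: can-claim-mining-reward returns true while unclaimed
    next.winnerClaimed = !(await canClaimReward(next.winner.miner));
  }
  return next;
}
```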

The resulting endpoint could take query parameters for start and end, then aggregate the KV responses with Promise.all(). It will be interesting to see how performance scales with the number of blocks returned at a time.
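A minimal sketch of that aggregation, with the KV binding's get() injected as a function so the range logic stays testable; the names are illustrative:

```typescript
// Fan out one KV read per block in the requested range and collect the
// results with Promise.all(). Missing blocks are simply omitted.
type KvGet = (key: string) => Promise<string | null>;

async function aggregateMiningData(
  kvGet: KvGet,
  city: string,
  startBlock: number,
  endBlock: number
): Promise<Record<number, unknown>> {
  const heights: number[] = [];
  for (let h = startBlock; h <= endBlock; h++) heights.push(h);
  // one KV read per block height, resolved in parallel
  const values = await Promise.all(
    heights.map((h) => kvGet(`${city}-miningdata-${h}`))
  );
  const result: Record<number, unknown> = {};
  heights.forEach((h, i) => {
    const raw = values[i];
    if (raw !== null) result[h] = JSON.parse(raw);
  });
  return result;
}
```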


Stacking data could follow a similar format:

mia-stackingdata-{cycleid}

```json
{
  "amountToken": "citycoins",
  "amountUstx": "ustx",
  "firstBlock": "blockheight",
  "lastBlock": "blockheight",
  "complete": "boolean"
}
```
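The per-cycle counterpart could be typed the same way; again, these names are sketches rather than existing API types:

```typescript
// Illustrative type for the proposed per-cycle stacking record.
interface StackingDataRecord {
  amountToken: number; // CityCoins stacked in the cycle
  amountUstx: number;  // uSTX rewards for the cycle
  firstBlock: number;
  lastBlock: number;
  complete: boolean;   // true once the cycle has ended
}

// KV key per the proposed naming scheme, e.g. "mia-stackingdata-17"
function stackingDataKey(city: string, cycleId: number): string {
  return `${city}-stackingdata-${cycleId}`;
}
```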

Side note - is it possible to know the number of Stacked addresses in a cycle? That's one interesting data point that doesn't exist here.

@jamil would love to know if you have any additional input here!

whoabuddy commented 2 years ago

These patterns could be used to aggregate individual user data on mining and stacking as well.

whoabuddy commented 2 years ago

Thinking this through a bit more after the upgrades, and expanding on the info above: this could be done in a phased approach using similar data structures.

Use data structures from API

The get-city-list endpoint gives us the supported cities through the API.

```json
["mia","nyc"]
```

Iterating that list over the get-city-info endpoint gives us the name, logo, and version info for a given city.

```json
{
  "fullName": "Miami",
  "logo": "https://cdn.citycoins.co/brand/MIA_Miami/Coins/SVG/MiamiCoin_StandAlone_Coin.svg",
  "versions": ["v1", "v2"],
  "currentVersion": "v2"
}
```

Using the data from those two endpoints, the required contract data can be queried through get-city-configuration for a specific version or through get-full-city-configuration for all versions.

```json
{
  "v1": {
    "cityName": "Miami",
    "deployed": true,
    "deployer": "SP466FNC0P7JWTNM2R9T199QRZN1MYEDTAR0KP27",
    "auth": { "name": "miamicoin-auth", "initialized": true },
    "core": {
      "name": "miamicoin-core-v1",
      "activated": false,
      "startBlock": 24497,
      "shutdown": true,
      "shutdownBlock": 58917
    },
    "token": {
      "name": "miamicoin-token",
      "activated": true,
      "activationBlock": 24497,
      "displayName": "MiamiCoin",
      "tokenName": "miamicoin",
      "symbol": "MIA",
      "decimals": 0,
      "logo": "https://cdn.citycoins.co/logos/miamicoin.png",
      "uri": "https://cdn.citycoins.co/metadata/miamicoin.json"
    }
  },
  "v2": {
    "cityName": "Miami",
    "deployed": true,
    "deployer": "SP1H1733V5MZ3SZ9XRW9FKYGEZT0JDGEB8Y634C7R",
    "auth": { "name": "miamicoin-auth-v2", "initialized": true },
    "core": {
      "name": "miamicoin-core-v2",
      "activated": true,
      "startBlock": 58921,
      "shutdown": false
    },
    "token": {
      "name": "miamicoin-token-v2",
      "activated": true,
      "activationBlock": 24497,
      "displayName": "MiamiCoin",
      "tokenName": "miamicoin",
      "symbol": "MIA",
      "decimals": 6,
      "logo": "https://cdn.citycoins.co/logos/miamicoin.png",
      "uri": "https://cdn.citycoins.co/metadata/miamicoin.json"
    }
  }
}
```
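Walking those three endpoints could look like the sketch below. The fetcher is injected, and the route strings are assumptions, since only the endpoint names are given above:

```typescript
// Sketch: build a city -> configuration map from the endpoints described
// above. fetchJson stands in for fetch(url).then(r => r.json()), and the
// exact paths are assumptions, not the API's documented routes.
type FetchJson = (path: string) => Promise<any>;

async function loadCityConfigurations(
  fetchJson: FetchJson
): Promise<Record<string, { info: any; config: any }>> {
  // e.g. ["mia", "nyc"]
  const cities: string[] = await fetchJson("/cities/get-city-list");
  const result: Record<string, { info: any; config: any }> = {};
  for (const city of cities) {
    const info = await fetchJson(`/cities/get-city-info/${city}`);
    const config = await fetchJson(`/cities/get-full-city-configuration/${city}`);
    result[city] = { info, config };
  }
  return result;
}
```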

Cache the current data

Starting with the current endpoints, adopt a get-or-put strategy using KV, where immutable data is written to the cache once and read from there rather than queried directly from the blockchain.

This could be tested with mining stats per block and stacking stats per cycle, and the patterns here could be reused in the future for tracking and/or aggregating additional data.
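A minimal get-or-put helper, assuming a KV-like binding with get/put; this is a sketch against a simplified interface, not the actual Workers KV types:

```typescript
// Simplified stand-in for a KV namespace binding.
interface KvLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Read-through cache for immutable data: return the cached value if
// present, otherwise run the loader (e.g. a blockchain query) once and
// store the result for every future read.
async function getOrPut<T>(
  kv: KvLike,
  key: string,
  load: () => Promise<T>
): Promise<T> {
  const cached = await kv.get(key);
  if (cached !== null) return JSON.parse(cached) as T;
  const fresh = await load();
  await kv.put(key, JSON.stringify(fresh));
  return fresh;
}
```

Because the cached data is immutable (a past block or a completed cycle never changes), there is no invalidation problem: the loader runs at most once per key.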

Mining Stats

Stacking Stats

Additional Thoughts

Both structures above can take advantage of some KV features which should help with scalability in the long-term:

Future ideas for data stored in this format cover both data served by endpoints now and some aggregated options:

This could also be expanded to start building user data:

One focus throughout should be creating self-healing data, so that the same process can be run (and re-run) to fill in past and future data, along with, if possible, a metadata flag that allows reprocessing old entries. That way, if something changes or an error is found, updating old or new data should be straightforward.
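One way to get that reprocessing flag is to version each record: stamp every KV entry with the version of the code that produced it, and let a cron pass rebuild anything stale. The shape below is illustrative:

```typescript
// Illustrative: bump this whenever the processing logic changes, so a
// cron pass can find and rebuild entries written by older code.
const PROCESSING_VERSION = 2;

interface Versioned<T> {
  processedWith: number; // PROCESSING_VERSION at write time
  data: T;
}

function needsReprocessing(record: Versioned<unknown>): boolean {
  return record.processedWith < PROCESSING_VERSION;
}
```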