XLNT / gnarly

🤙 A blockchain ETL tool to make your life better. Now that’s fuckin’ gnarly.
https://xlnt.co

Run the default gnarly reducer for a few days and report on usage statistics #13

shrugs opened this issue 6 years ago

The current version of gnarly-bin (commit https://github.com/XLNT/gnarly/commit/999b9e3b74c9dd3e04f194576bc6a6ba2968398d) is configured to index CryptoKitties NFT transfers, Transfer events, and block metadata.

Build yourself a Docker container, push it to your own username on Docker Hub, and then run the container against CryptoKitties from the beginning. We want to see how it performs for you!
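A minimal sketch of the build-and-push step, assuming a Dockerfile at the root of the repo at the commit above (yourusername is a placeholder for your Docker Hub username):

git clone https://github.com/XLNT/gnarly.git && cd gnarly
git checkout 999b9e3b74c9dd3e04f194576bc6a6ba2968398d
# build and tag the image, then push it under your own username
docker build -t yourusername/gnarly-test:latest .
docker login
docker push yourusername/gnarly-test:latest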

Requirements

Extra Usage Details

To run the gnarly instance as a Docker container, you can use a command like:

sudo docker run --name gnarly --net=host -d \
  -e "DEBUG=*,-sequelize*,-gnarly-core:store:*,-gnarly-core:api,-gnarly-core:blockstream:fast-forward,-gnarly-core:ourbit:notifyPatches" \
  -e "GNARLY_RESET=true" \
  -e "LATEST_BLOCK_HASH=0x62f9bc4786f6f8101dc9510aba5744a4ca5ed94d0e47d9b625d5f09afd7e842a" \
  -e "NODE_ENDPOINT=http://127.0.0.1:8545" \
  -e "CONNECTION_STRING=postgres:////" \
  shrugs/gnarly-test:latest

# and then save your logs with something like
sudo docker logs gnarly -f &> gnarly.log &

(Remember to configure the postgres connection string correctly. Also, if gnarly happens to crash (please report that bug ❤️), remove the GNARLY_RESET=true env var before restarting: that var tells gnarly to nuke your data store to nothingness, so avoid it if there's anything in your postgres database that you want to keep!)
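For reference, the connection string follows the standard Postgres URI format; a filled-in example might look like this (the user, password, and database name here are all hypothetical):

-e "CONNECTION_STRING=postgres://gnarly:mypassword@127.0.0.1:5432/gnarly"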

That hash is the block hash of the block just before CryptoKitties was deployed, so we'll be indexing every CryptoKitty since the beginning.
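If you want to sanity-check that hash against your node before kicking things off, a standard eth_getBlockByHash JSON-RPC call should return a non-null block object (this assumes your node is listening at the NODE_ENDPOINT above):

curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBlockByHash","params":["0x62f9bc4786f6f8101dc9510aba5744a4ca5ed94d0e47d9b625d5f09afd7e842a",false]}'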

Report on:

-- overall indexing rate: elapsed seconds, blocks processed, and blocks per second
SELECT
    EXTRACT(EPOCH FROM (f."createdAt" - s."createdAt")) as duration,
    bc."block_count",
    bc."block_count" / EXTRACT(EPOCH FROM (f."createdAt" - s."createdAt")) as blocks_per_second
FROM (
    select "createdAt" from "transactions" order by "createdAt" ASC limit 1
) as s JOIN (
    select "createdAt" from "transactions" order by "createdAt" DESC limit 1
) as f ON 1=1 JOIN (
    select count(*) as block_count from "transactions"
) as bc ON 1=1;

-- the most recently indexed kitty and the block it appeared in
SELECT ckt."tokenId" as latest_token_id, b."timestamp" as block_time, b."number" as block_number
FROM (select max("tokenId"::integer)::text as "tokenId" from "cryptoKitties_tokens") as lt
JOIN "cryptoKitties_tokens" ckt ON lt."tokenId" = ckt."tokenId"
JOIN "patches" p ON p."id" = ckt."patchId"
JOIN "transactions" t ON t."id" = p."transactionId"
JOIN "blocks" b ON b.hash = t."blockHash";
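One way to get output in the comma-separated shape below is to run each query through psql in unaligned mode (the .sql file names are placeholders for wherever you save the queries above):

# -A = unaligned output, -F',' = comma separator, footer off to drop the row count
psql "$CONNECTION_STRING" -A -F',' -P footer=off -f blocks_per_second.sql
psql "$CONNECTION_STRING" -A -F',' -P footer=off -f latest_kitty.sql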

The output should look something like:

duration,block_count,blocks_per_second
297313.879,109786,0.369259586431887

latest_token_id,block_time,block_number
250419,2017-12-11 20:16:46+00,4715906

(which shows that this instance has been running for ~3.4 days, has processed ~110k blocks, and has indexed 250419 kitties up to block 4715906, at an average of ~0.37 blocks per second)
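Since the ask is a report after a few days of runtime, it may also be handy to sample these numbers on an interval rather than just once at the end; a rough sketch (the hourly cadence and file names are arbitrary):

# append a timestamped snapshot of both queries every hour
while true; do
  date >> gnarly-stats.log
  psql "$CONNECTION_STRING" -A -F',' -P footer=off -f blocks_per_second.sql >> gnarly-stats.log
  psql "$CONNECTION_STRING" -A -F',' -P footer=off -f latest_kitty.sql >> gnarly-stats.log
  sleep 3600
done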

Notes

If there are any questions, mention me in this thread and I'll be around to help!