Closed: NatPDeveloper closed this issue 4 years ago.
If I delete the state-db storage folder, I get:
2020-08-17T12:48:06.701Z (dfuse) ulimit max open files before adjustment (launcher/setup.go:61){"current_value": 1048576}
2020-08-17T12:48:06.701Z (dfuse) no need to update ulimit as it's already higher than our good enough value (launcher/setup.go:63){"good_enough_value": 256000}
2020-08-17T12:48:06.701Z (dfuse) dfuseeos binary started (cli/start.go:51){"data_dir": "./dfuse-data"}
2020-08-17T12:48:06.701Z (dfuse) Starting dfuse for EOSIO with config file '/etc/dfuseeos-configs/dfuse-p2-b0.yaml' (cli/start.go:54)
2020-08-17T12:48:06.700Z (dfuse) starting atomic level switcher (launcher/logging.go:119){"listen_addr": "localhost:1065"}
2020-08-17T12:48:06.701Z (dfuse) launcher created (cli/start.go:125)
2020-08-17T12:48:06.722Z (dfuse) nodeos version regexp matched (cli/utils.go:101){"matches": [["v2.0.6-dm.12.0","2","0","6","-dm.12.0","dm.12.0"]]}
2020-08-17T12:48:06.722Z (dfuse) Launching applications: mindreader,statedb,trxdb-loader (cli/start.go:140)
2020-08-17T12:48:06.722Z (dfuse) initialize application (launcher/launcher.go:71){"app": "mindreader"}
2020-08-17T12:48:06.722Z (dfuse) creating application (launcher/launcher.go:83){"app": "mindreader"}
2020-08-17T12:48:06.723Z (mindreader) registered log plugin (superviser/superviser.go:59){"plugin count": 1}
2020-08-17T12:48:06.723Z (mindreader) registered log plugin (superviser/superviser.go:59){"plugin count": 2}
2020-08-17T12:48:06.723Z (mindreader) creating operator (operator/operator.go:93){"options": {"BackupTag":"default","BackupStoreURL":"file:///dfuse-data/storage/pitreos","SnapshotStoreURL":"file:///dfuse-data/storage/snapshots","VolumeSnapshotAppVer":"","Namespace":"","Pod":"","PVCPrefix":"","Project":"","BootstrapDataURL":"","AutoRestoreSource":"snapshot","NumberOfSnapshotsToKeep":0,"RestoreBackupName":"","RestoreSnapshotName":"","Profiler":null,"StartFailureHandlerFunc":"func()","EnableSupervisorMonitoring":false,"ShutdownDelay":0}}
2020-08-17T12:48:06.723Z (mindreader) creating mindreader plugin (mindreader/mindreader.go:97){"archive_store_url": "file:///dfuse-data/storage/one-blocks", "merge_archive_store_url": "file:///dfuse-data/storage/merged-blocks", "batch_mode": true, "merge_threshold_age": "12h0m0s", "working_directory": "/dfuse-data/mindreader/work", "start_block_num": 0, "stop_block_num": 205400, "channel_capacity": 100000, "with_head_block_update_func": true, "with_set_maintenance_func": true, "with_stop_block_reach_func": true, "fail_on_non_continuous_blocks": false, "wait_upload_complete_on_shutdown": "30s"}
2020-08-17T12:48:06.723Z (mindreader) loading continuity checker info (mindreader/continuity.go:85){"locked": false, "highest_seen_block": 0}
2020-08-17T12:48:06.723Z (mindreader) resetting continuity checker (mindreader/continuity.go:58)
2020-08-17T12:48:06.744Z (mindreader) creating new mindreader plugin (mindreader/mindreader.go:205)
2020-08-17T12:48:06.745Z (mindreader) registered log plugin (superviser/superviser.go:59){"plugin count": 3}
2020-08-17T12:48:06.745Z (dfuse) creating application (launcher/launcher.go:83){"app": "statedb"}
2020-08-17T12:48:06.745Z (dfuse) creating application (launcher/launcher.go:83){"app": "trxdb-loader"}
2020-08-17T12:48:06.745Z (dfuse) launching app (launcher/launcher.go:110){"app": "trxdb-loader"}
2020-08-17T12:48:06.745Z (dfuse) launching app (launcher/launcher.go:110){"app": "mindreader"}
2020-08-17T12:48:06.745Z (dfuse) launching app (launcher/launcher.go:110){"app": "statedb"}
2020-08-17T12:48:06.745Z (mindreader) launching nodeos mindreader (node_mindreader/app.go:79){"config": {"ManagerAPIAddress":":13009","ConnectionWatchdog":false,"AutoBackupModulo":0,"AutoBackupPeriod":0,"AutoBackupHostnameMatch":"","AutoSnapshotModulo":0,"AutoSnapshotPeriod":86400000000000,"AutoSnapshotHostnameMatch":"","GRPCAddr":":13010"}}
2020-08-17T12:48:06.745Z (trxdb-loader) launching trxdb loader (trxdb-loader/app.go:72){"config": {"ChainID":"","ProcessingType":"live","BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","BlockStreamAddr":":13011","KvdbDsn":"badger:///dfuse-data/storage/trxdb","BatchSize":100,"StartBlockNum":0,"StopBlockNum":205400,"NumBlocksBeforeStart":300,"ParallelFileDownloadCount":2,"AllowLiveOnEmptyTable":true,"HTTPListenAddr":":13020","EnableTruncationMarker":false,"TruncationWindow":0,"PurgerInterval":1000}}
2020-08-17T12:48:06.745Z (mindreader) retrieved hostname from os (node_mindreader/app.go:82){"hostname": "2481df0efe09"}
2020-08-17T12:48:06.745Z (statedb) running statedb (statedb/app.go:61){"config": {"StoreDSN":"badger:///dfuse-data/storage/statedb-v1","BlockStreamAddr":":13011","EnableServerMode":false,"EnableInjectMode":false,"EnablePipeline":true,"EnableReprocSharderMode":false,"EnableReprocInjectorMode":true,"BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","ReprocShardStoreURL":"file:///dfuse-data/statedb/reproc-shards","ReprocShardCount":10,"ReprocSharderStartBlockNum":0,"ReprocSharderStopBlockNum":205400,"ReprocInjectorShardIndex":0,"HTTPListenAddr":":13029","GRPCListenAddr":":13032"}}
2020-08-17T12:48:06.745Z (registry) new trxdb from dsn string (trxdb/registry.go:52){"dsn_string": "badger:///dfuse-data/storage/trxdb"}
2020-08-17T12:48:06.745Z (registry) trxdb instance factory (trxdb/registry.go:58){"dsns": ["badger:///dfuse-data/storage/trxdb"]}
2020-08-17T12:48:06.746Z (db) creating kv db (kv/db.go:72){"dsns": ["badger:///dfuse-data/storage/trxdb"]}
2020-08-17T12:48:06.746Z (dsn) parsing DSN (kv/dsn.go:14){"dsn": "badger:///dfuse-data/storage/trxdb"}
2020-08-17T12:48:06.746Z (cache) kv store store is not cached for this DSN, creating a new one (kv/cache.go:21){"dsn": "badger:///dfuse-data/storage/trxdb"}
2020-08-17T12:48:06.746Z (mindreader) starting grpc listener (mindreader/publisher.go:27){"listen_addr": ":13010"}
2020-08-17T12:48:06.746Z (mindreader) unable to execute get health request (node_mindreader/app.go:156){"error": "Get \"http://:13009/healthz\": dial tcp :13009: connect: connection refused"}
2020-08-17T12:48:06.746Z (dfuse) app status switching to warning (launcher/launcher.go:249){"app_id": "mindreader"}
2020-08-17T12:48:06.746Z (statedb) running fluxdb (fluxdb/app.go:72){"config": {"StoreDSN":"badger:///dfuse-data/storage/statedb-v1","BlockStreamAddr":":13011","EnableServerMode":false,"EnableInjectMode":false,"EnablePipeline":true,"EnableReprocSharderMode":false,"EnableReprocInjectorMode":true,"BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","ReprocShardStoreURL":"file:///dfuse-data/statedb/reproc-shards","ReprocShardCount":10,"ReprocSharderStartBlockNum":0,"ReprocSharderStopBlockNum":205400,"ReprocInjectorShardIndex":0}}
2020-08-17T12:48:06.746Z (statedb) creating underlying kv store engine (fluxdb@v0.0.0-20200812140457-086d625689d9/store.go:37){"scheme": "badger", "dsn": "badger:///dfuse-data/storage/statedb-v1"}
2020-08-17T12:48:06.746Z (trxdb-loader) is ready request execution error (trxdb-loader/app.go:170){"error": "Get \"http://:13020/healthz\": dial tcp :13020: connect: connection refused"}
2020-08-17T12:48:06.746Z (dfuse) app status switching to warning (launcher/launcher.go:249){"app_id": "trxdb-loader"}
2020-08-17T12:48:06.759Z (statedb) fetching last written block (kv/store.go:128){"key": "636865636b706f696e74"}
2020-08-17T12:48:06.759Z (statedb) using shards url (fluxdb/app.go:188){"store_url": "file:///dfuse-data/statedb/reproc-shards/000"}
2020-08-17T12:48:06.759Z (statedb) fetching last written block (kv/store.go:128){"key": "73686172642d303030"}
2020-08-17T12:48:06.760Z (statedb) last written block empty, returning empty checkpoint values (fluxdb@v0.0.0-20200812140457-086d625689d9/read.go:361)
2020-08-17T12:48:06.760Z (statedb) starting back shard injector (fluxdb@v0.0.0-20200812140457-086d625689d9/shardinject.go:80){"block": "Block <empty>"}
2020-08-17T12:48:06.760Z (badger) prefix scanning (badger/badger.go:260){"prefix": "0173686172642d", "limit": "unlimited"}
2020-08-17T12:48:06.760Z (statedb) all shards are not done yet, not updating last block (fluxdb/app.go:210){"error": "missing shards: [002 006 009 004 005 007 008 000 001 003]"}
2020-08-17T12:48:06.760Z (dfuse) app statedb triggered clean shutdown (launcher/launcher.go:176)
2020-08-17T12:48:06.760Z (dfuse) Application statedb triggered a clean shutdown, quitting (cli/start.go:154)
2020-08-17T12:48:06.760Z (dfuse) Waiting for all apps termination... (launcher/launcher.go:259)
2020-08-17T12:48:06.760Z (dfuse) app terminated (launcher/launcher.go:266){"app_id": "statedb"}
2020-08-17T12:48:06.760Z (dfuse) app terminated (launcher/launcher.go:266){"app_id": "trxdb-loader"}
2020-08-17T12:48:06.760Z (dfuse) app terminated (launcher/launcher.go:266){"app_id": "mindreader"}
2020-08-17T12:48:06.760Z (dfuse) All apps terminated gracefully (launcher/launcher.go:273)
2020-08-17T12:48:06.760Z (dfuse) Goodbye (cli/start.go:62)
I strongly suggest that you do all these steps in separate jobs (processes).
dfuseeos does not support running two systems in batch mode together: the first "stop block" reached will cause the others to stop!
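Concretely (a hypothetical sketch; the file name is made up), running the apps as separate jobs means one dfuseeos process per config file, each config listing a single app under args:

```yaml
# hypothetical statedb-only config (e.g. statedb.yaml); mindreader and
# trxdb-loader would each get a similar file listing only themselves
start:
  args:
  - statedb
```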
Running statedb by itself gets me:
2020-08-17T12:58:38.079Z (api) registering development exporters from environment variables (dtracing@v0.0.0-20200417133307-c09302668d0c/api.go:139)
2020-08-17T12:58:38.080Z (dfuse) ulimit max open files before adjustment (launcher/setup.go:61){"current_value": 1048576}
2020-08-17T12:58:38.080Z (dfuse) no need to update ulimit as it's already higher than our good enough value (launcher/setup.go:63){"good_enough_value": 256000}
2020-08-17T12:58:38.079Z (dfuse) starting atomic level switcher (launcher/logging.go:119){"listen_addr": "localhost:1065"}
2020-08-17T12:58:38.080Z (dfuse) dfuseeos binary started (cli/start.go:51){"data_dir": "./dfuse-data"}
2020-08-17T12:58:38.080Z (dfuse) Starting dfuse for EOSIO with config file '/etc/dfuseeos-configs/dfuse-p2-b0.yaml' (cli/start.go:54)
2020-08-17T12:58:38.080Z (dfuse) launcher created (cli/start.go:125)
2020-08-17T12:58:38.080Z (dfuse) Launching applications: statedb (cli/start.go:140)
2020-08-17T12:58:38.080Z (dfuse) creating application (launcher/launcher.go:83){"app": "statedb"}
2020-08-17T12:58:38.080Z (dfuse) launching app (launcher/launcher.go:110){"app": "statedb"}
2020-08-17T12:58:38.080Z (statedb) running statedb (statedb/app.go:61){"config": {"StoreDSN":"badger:///dfuse-data/storage/statedb-v1","BlockStreamAddr":":13011","EnableServerMode":false,"EnableInjectMode":false,"EnablePipeline":true,"EnableReprocSharderMode":false,"EnableReprocInjectorMode":true,"BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","ReprocShardStoreURL":"file:///dfuse-data/statedb/reproc-shards","ReprocShardCount":10,"ReprocSharderStartBlockNum":0,"ReprocSharderStopBlockNum":205400,"ReprocInjectorShardIndex":0,"HTTPListenAddr":":13029","GRPCListenAddr":":13032"}}
2020-08-17T12:58:38.080Z (statedb) running fluxdb (fluxdb/app.go:72){"config": {"StoreDSN":"badger:///dfuse-data/storage/statedb-v1","BlockStreamAddr":":13011","EnableServerMode":false,"EnableInjectMode":false,"EnablePipeline":true,"EnableReprocSharderMode":false,"EnableReprocInjectorMode":true,"BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","ReprocShardStoreURL":"file:///dfuse-data/statedb/reproc-shards","ReprocShardCount":10,"ReprocSharderStartBlockNum":0,"ReprocSharderStopBlockNum":205400,"ReprocInjectorShardIndex":0}}
2020-08-17T12:58:38.080Z (statedb) creating underlying kv store engine (fluxdb@v0.0.0-20200812140457-086d625689d9/store.go:37){"scheme": "badger", "dsn": "badger:///dfuse-data/storage/statedb-v1"}
2020-08-17T12:58:38.084Z (statedb) fetching last written block (kv/store.go:128){"key": "636865636b706f696e74"}
2020-08-17T12:58:38.084Z (statedb) using shards url (fluxdb/app.go:188){"store_url": "file:///dfuse-data/statedb/reproc-shards/000"}
2020-08-17T12:58:38.084Z (statedb) fetching last written block (kv/store.go:128){"key": "73686172642d303030"}
2020-08-17T12:58:38.084Z (statedb) last written block empty, returning empty checkpoint values (fluxdb@v0.0.0-20200812140457-086d625689d9/read.go:361)
2020-08-17T12:58:38.084Z (statedb) starting back shard injector (fluxdb@v0.0.0-20200812140457-086d625689d9/shardinject.go:80){"block": "Block <empty>"}
2020-08-17T12:58:38.084Z (badger) prefix scanning (badger/badger.go:260){"prefix": "0173686172642d", "limit": "unlimited"}
2020-08-17T12:58:38.084Z (statedb) all shards are not done yet, not updating last block (fluxdb/app.go:210){"error": "missing shards: [009 003 006 007 004 005 008 000 001 002]"}
2020-08-17T12:58:38.084Z (dfuse) app statedb triggered clean shutdown (launcher/launcher.go:176)
2020-08-17T12:58:38.084Z (dfuse) Application statedb triggered a clean shutdown, quitting (cli/start.go:154)
2020-08-17T12:58:38.084Z (dfuse) Waiting for all apps termination... (launcher/launcher.go:259)
2020-08-17T12:58:38.084Z (dfuse) app terminated (launcher/launcher.go:266){"app_id": "statedb"}
2020-08-17T12:58:38.084Z (dfuse) All apps terminated gracefully (launcher/launcher.go:273)
2020-08-17T12:58:38.084Z (dfuse) Goodbye (cli/start.go:62)
With YAML:
start:
  args:
  - statedb
  flags:
    statedb-reproc-shard-stop-block-num: 205400
    statedb-enable-reproc-injector-mode: true
    statedb-enable-server-mode: false
    statedb-enable-inject-mode: false
    statedb-reproc-shard-count: 10
Shard reprocessing in statedb is designed for a specific use case: multi-phase injection (built to inject all of eos-mainnet in 3 days instead of 3 weeks). It is not linear and cannot be done more than once.
Sharding steps are like this:
1) Run 10 parallel batch sharding jobs (--statedb-enable-reproc-sharder-mode=true), each with a different index (0 to 9 in the case of shard-count=10). This takes the dfuse block logs and creates "temporary shard files".
2) Run 10 parallel batch injecting jobs (--statedb-enable-reproc-injector-mode=true), indices 0 to 9. These jobs read from the "temporary shard files" and apply the operations to the database.
3) Once all 10 batch injecting jobs have completed successfully, you can run the normal operations mode of statedb (--statedb-enable-inject-mode) and never hear about the concept of "shards" again for that chain. You can then delete the temporary shard files.
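For step 2, each batch injecting job would use a config along these lines (a sketch only; the flag names are taken from the configs shown elsewhere in this thread, and the index value is illustrative):

```yaml
start:
  args:
  - statedb
  flags:
    statedb-enable-reproc-injector-mode: true
    statedb-enable-reproc-sharder-mode: false
    statedb-enable-inject-mode: false
    statedb-enable-server-mode: false
    statedb-reproc-shard-count: 10
    statedb-reproc-injector-shard-index: 3  # one job per index, 0 through 9
```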
It seems to me like you are trying to run step #2 without having run step #1, while expecting the behavior of step #3...
It makes no sense to use sharded injecting on 200k blocks; do not use sharding for your use case.
statedb-reproc-shard-stop-block-num: 0
statedb-enable-reproc-injector-mode: false
statedb-reproc-shard-count: 0
statedb-enable-inject-mode: true
Also, know that statedb can only be injected linearly (block 2, 3, 4, 5, ...), so the only way to gain some parallelism in injection is to do the "shard-injecting", a single time, as described above. Even then, each shard must still be injected linearly in terms of block numbers.
The 4 flags given should allow you to fill your statedb from block 0 to 200k. (The statedb injection does not have a stop block in normal mode; it will just wait for more blocks at the end.)
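In the start/args/flags layout used by the other configs in this thread, those four flags would sit like this (a sketch, with all other settings left at their defaults):

```yaml
start:
  args:
  - statedb
  flags:
    statedb-reproc-shard-stop-block-num: 0
    statedb-enable-reproc-injector-mode: false
    statedb-reproc-shard-count: 0
    statedb-enable-inject-mode: true
```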
https://docs.dfuse.io/eosio/admin-guide/parallel-processing/#fluxdb-reprocessing <-- some (outdated :( ) documentation on sharded reprocessing
Well, it is not """outdated""", it is just not on par with develop HEAD :sweat_smile:
Created 10 YAMLs for a 120M-block chain, 12M blocks each, incrementing the shard index 0-9, with start/stop blocks stepping in 12M increments up to the last merged-blocks file. Getting the error below.
YAML:
start:
  args:
  - statedb
  flags:
    statedb-enable-reproc-sharder-mode: true
    statedb-enable-server-mode: false
    statedb-enable-reproc-injector-mode: false
    statedb-enable-inject-mode: false
    statedb-reproc-shard-count: 10
    statedb-reproc-injector-shard-index: 0
    statedb-reproc-shard-stop-block-num: 12000000
Other YAMLs (incrementing the index 0-9 and updating the start/stop blocks up to my final merged-blocks file):
start:
  args:
  - statedb
  flags:
    statedb-enable-reproc-sharder-mode: true
    statedb-enable-server-mode: false
    statedb-enable-reproc-injector-mode: false
    statedb-enable-inject-mode: false
    statedb-reproc-shard-count: 10
    statedb-reproc-injector-shard-index: 1
    statedb-reproc-shard-start-block-num: 12000000
    statedb-reproc-shard-stop-block-num: 24000000
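The 10 configs follow a regular pattern, so they can be generated in a loop. A minimal sketch (the output directory, file names, and the fixed 12M-block stride are assumptions mirroring the configs above):

```shell
#!/bin/sh
# Hypothetical generator for the per-shard sharder YAMLs.
# Adjust SHARD_COUNT / BLOCKS_PER_SHARD for your chain.
SHARD_COUNT=10
BLOCKS_PER_SHARD=12000000

mkdir -p statedb-yamls
i=0
while [ "$i" -lt "$SHARD_COUNT" ]; do
  start=$((i * BLOCKS_PER_SHARD))
  stop=$(((i + 1) * BLOCKS_PER_SHARD))
  # shard 0 gets an explicit start of 0, which should be the default anyway
  cat > "statedb-yamls/dfuse-p2-b${i}.yaml" <<EOF
start:
  args:
  - statedb
  flags:
    statedb-enable-reproc-sharder-mode: true
    statedb-enable-server-mode: false
    statedb-enable-reproc-injector-mode: false
    statedb-enable-inject-mode: false
    statedb-reproc-shard-count: ${SHARD_COUNT}
    statedb-reproc-injector-shard-index: ${i}
    statedb-reproc-shard-start-block-num: ${start}
    statedb-reproc-shard-stop-block-num: ${stop}
EOF
  i=$((i + 1))
done
```

Each generated file can then be passed to its own container via -c, as in the docker command below.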
Docker command:
docker run -d -v "/newvolume/work/config/dfuse-data:/dfuse-data" -v "/newvolume/work/config/mindreader:/mindreader" -v "/newvolume/work/config/phase2_yamls/statedb:/etc/dfuseeos-configs" --name dfuse-p2-b0 dfuseeos /app/dfuseeos -c /etc/dfuseeos-configs/dfuse-p2-b0.yaml start -vvvv
Error:
2020-08-17T21:32:45.255Z (api) registering development exporters from environment variables (dtracing@v0.0.0-20200417133307-c09302668d0c/api.go:139)
2020-08-17T21:32:45.255Z (dfuse) starting atomic level switcher (launcher/logging.go:119){"listen_addr": "localhost:1065"}
2020-08-17T21:32:45.255Z (dfuse) ulimit max open files before adjustment (launcher/setup.go:61){"current_value": 1048576}
2020-08-17T21:32:45.255Z (dfuse) no need to update ulimit as it's already higher than our good enough value (launcher/setup.go:63){"good_enough_value": 256000}
2020-08-17T21:32:45.255Z (dfuse) dfuseeos binary started (cli/start.go:51){"data_dir": "./dfuse-data"}
2020-08-17T21:32:45.255Z (dfuse) Starting dfuse for EOSIO with config file '/etc/dfuseeos-configs/dfuse-p2-b0.yaml' (cli/start.go:54)
2020-08-17T21:32:45.256Z (dfuse) launcher created (cli/start.go:125)
2020-08-17T21:32:45.256Z (dfuse) Launching applications: statedb (cli/start.go:140)
2020-08-17T21:32:45.256Z (dfuse) creating application (launcher/launcher.go:83){"app": "statedb"}
2020-08-17T21:32:45.256Z (dfuse) launching app (launcher/launcher.go:110){"app": "statedb"}
2020-08-17T21:32:45.256Z (statedb) running statedb (statedb/app.go:61){"config": {"StoreDSN":"badger:///dfuse-data/storage/statedb-v1","BlockStreamAddr":":13011","EnableServerMode":false,"EnableInjectMode":false,"EnablePipeline":true,"EnableReprocSharderMode":true,"EnableReprocInjectorMode":false,"BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","ReprocShardStoreURL":"file:///dfuse-data/statedb/reproc-shards","ReprocShardCount":10,"ReprocSharderStartBlockNum":0,"ReprocSharderStopBlockNum":12000000,"ReprocInjectorShardIndex":0,"HTTPListenAddr":":13029","GRPCListenAddr":":13032"}}
2020-08-17T21:32:45.256Z (statedb) running fluxdb (fluxdb/app.go:72){"config": {"StoreDSN":"badger:///dfuse-data/storage/statedb-v1","BlockStreamAddr":":13011","EnableServerMode":false,"EnableInjectMode":false,"EnablePipeline":true,"EnableReprocSharderMode":true,"EnableReprocInjectorMode":false,"BlockStoreURL":"file:///dfuse-data/storage/merged-blocks","ReprocShardStoreURL":"file:///dfuse-data/statedb/reproc-shards","ReprocShardCount":10,"ReprocSharderStartBlockNum":0,"ReprocSharderStopBlockNum":12000000,"ReprocInjectorShardIndex":0}}
2020-08-17T21:32:45.256Z (statedb) creating underlying kv store engine (fluxdb@v0.0.0-20200812140457-086d625689d9/store.go:37){"scheme": "badger", "dsn": "badger:///dfuse-data/storage/statedb-v1"}
2020-08-17T21:32:45.270Z (statedb) file stream looking for (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:130){"base_block_num": 0}
2020-08-17T21:32:45.270Z (statedb) downloading archive file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:161){"filename": "0000000000"}
2020-08-17T21:32:45.270Z (statedb) file stream looking for (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:130){"base_block_num": 100}
2020-08-17T21:32:45.270Z (statedb) downloading archive file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:161){"filename": "0000000100"}
2020-08-17T21:32:45.270Z (statedb) file stream looking for (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:130){"base_block_num": 200}
2020-08-17T21:32:45.270Z (statedb) downloading archive file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:161){"filename": "0000000200"}
2020-08-17T21:32:45.270Z (statedb) file stream looking for (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:130){"base_block_num": 300}
2020-08-17T21:32:45.270Z (statedb) feeding from incoming file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:247){"filename": "0000000000"}
2020-08-17T21:32:45.270Z (statedb) downloading archive file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:161){"filename": "0000000300"}
2020-08-17T21:32:45.270Z (statedb) launching processing of file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:169){"base_filename": "0000000000"}
2020-08-17T21:32:45.270Z (statedb) open files (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:185){"count": 1, "filename": "0000000000"}
2020-08-17T21:32:45.270Z (statedb) launching processing of file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:169){"base_filename": "0000000100"}
2020-08-17T21:32:45.270Z (statedb) launching processing of file (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:169){"base_filename": "0000000200"}
2020-08-17T21:32:45.270Z (statedb) open files (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:185){"count": 3, "filename": "0000000200"}
2020-08-17T21:32:45.270Z (statedb) open files (bstream@v0.0.2-0.20200730171716-a46b819bf678/filesource.go:185){"count": 2, "filename": "0000000100"}
2020-08-17T21:32:45.273Z (statedb) processing block (forkable/forkable.go:131){"block": "#2 (00000002a1ec7ae214b9e43a904b6c010fb1260c9e8a12e5967bdbe451152a07)", "new_longest_chain": true}
2020-08-17T21:32:45.273Z (statedb) candidate LIB received is first streamable block of chain, assuming it's the new LIB (forkable/forkdb.go:88){"lib": "#1 (00000001d80c979db347eac322b4c6ecb34e885387b01661f699cebd32f79bc6)"}
2020-08-17T21:32:45.273Z (statedb) got longest chain (forkable/forkable.go:177){"block": "#2 (00000002a1ec7ae214b9e43a904b6c010fb1260c9e8a12e5967bdbe451152a07)", "chain_length": 1, "undos_length": 0, "redos_length": 0}
2020-08-17T21:32:45.273Z (statedb) sending block as new to consumer (forkable/forkable.go:319){"block": "#2 (00000002a1ec7ae214b9e43a904b6c010fb1260c9e8a12e5967bdbe451152a07)"}
2020-08-17T21:32:45.273Z (statedb) missing links to reach lib_num (forkable/forkable.go:206){"block": "#2 (00000002a1ec7ae214b9e43a904b6c010fb1260c9e8a12e5967bdbe451152a07)", "new_head_block": "#2 (00000002a1ec7ae214b9e43a904b6c010fb1260c9e8a12e5967bdbe451152a07)", "new_lib_num": 0}
2020-08-17T21:32:45.273Z (statedb) processing block (forkable/forkable.go:131){"block": "#3 (000000037ccac0013c9c8b70e13f9fbe844e61fffc5a9b1fb9422bda938c13c7)", "new_longest_chain": true}
2020-08-17T21:32:45.273Z (statedb) got longest chain (forkable/forkable.go:177){"block": "#3 (000000037ccac0013c9c8b70e13f9fbe844e61fffc5a9b1fb9422bda938c13c7)", "chain_length": 2, "undos_length": 0, "redos_length": 0}
2020-08-17T21:32:45.273Z (statedb) sending block as new to consumer (forkable/forkable.go:319){"block": "#3 (000000037ccac0013c9c8b70e13f9fbe844e61fffc5a9b1fb9422bda938c13c7)"}
2020-08-17T21:32:45.273Z (statedb) moving lib (forkable/forkable.go:218){"block": "#3 (000000037ccac0013c9c8b70e13f9fbe844e61fffc5a9b1fb9422bda938c13c7)", "lib_id": "00000002a1ec7ae214b9e43a904b6c010fb1260c9e8a12e5967bdbe451152a07", "lib_num": 2}
2020-08-17T21:32:45.273Z (statedb) block num gate passed (bstream@v0.0.2-0.20200730171716-a46b819bf678/gates.go:102){"gate_type": "inclusive", "at_block_num": 2, "gate_block_num": 0}
2020-08-17T21:32:45.273Z (dfuse)
################################################################
Fatal error in app statedb:
process block failed: encoding sharded request: gob: type not registered for interface: bstream.BasicBlockRef
################################################################
(launcher/launcher.go:174)
2020-08-17T21:32:45.273Z (dfuse) Application statedb shutdown unexpectedly, quitting (cli/start.go:156)
Error: unable to launch: process block failed: encoding sharded request: gob: type not registered for interface: bstream.BasicBlockRef
2020-08-17T21:32:45.273Z (cli) dfuse (derr@v0.0.0-20200730183817-a747f6f333ad/cli.go:25){"error": "unable to launch: process block failed: encoding sharded request: gob: type not registered for interface: bstream.BasicBlockRef"}
@NatPDeveloper Indeed, I did not test the batch reproc part of statedb and it’s definitely not working.
I’ll fix this tomorrow, I need to do it internally for our own use, so it will be the right moment to test it and fix all the small non-working elements.
Brief:
Trying to run statedb/trxdb for first 200k blocks without snapshot to populate dfuse-data/storage/...
Version:
d002b175eed2cf9fa7e058131bfccde812acf409