(merger) about to write merged blocks to storage location {"filename": "0003009600", "write_timeout": "5m0s", "lower_block_num": 3009600, "highest_block_num": 3009699}
Writing cached state to disk block=3,009,764
// mindreader node restarts
Loaded most recent local header number=3,009,765
Loaded most recent local fast block number=3,009,765
(codec) mindreader block stats {"block_num": 3009766, "duration": 277088, "stats": {"finalize_block":1}}
...
(merger) new bundler {"bundle_size": 100, "first_exclusive_highest_block_limit": 3009900}
`first_exclusive_highest_block_limit` should be `3009800`, so blocks `3009700-3009799` are skipped.

The other problem I have noticed is that the one-block folder doesn't have blocks `3009700-3009765`; the blocks in that folder start from `3009766`. As a result, the merger instance gets stuck and blocks relayer and firehose.
Whenever mindreader is restarted while processing historical blocks, it misses 100 blocks (one 100-block file). The merger skips 100 blocks because `bundleSize` is added twice to `lastMergedBlock` on this line: https://github.com/streamingfast/merger/blob/develop/app/merger/app.go#L100. The logs above show a real example from my node.
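To make the suspected off-by-one-bundle arithmetic concrete, here is a minimal sketch using the numbers from the logs above. The variable names (`lastMergedBlock`, `bundleSize`) are illustrative and may not match the actual identifiers in the merger code:

```go
package main

import "fmt"

func main() {
	const bundleSize = 100
	// Last block of the merged bundle file "0003009600" (covers 3009600-3009699).
	lastMergedBlock := uint64(3009699)

	// Expected: the next bundle starts at lastMergedBlock+1, so its
	// exclusive upper limit is one bundleSize past that.
	expected := lastMergedBlock + 1 + bundleSize
	fmt.Println("expected first_exclusive_highest_block_limit:", expected) // 3009800

	// Suspected bug: bundleSize is added a second time, so the bundle
	// covering 3009700-3009799 is never produced.
	buggy := lastMergedBlock + 1 + bundleSize + bundleSize
	fmt.Println("observed first_exclusive_highest_block_limit:", buggy) // 3009900
}
```

With the extra `bundleSize` the merger's new bundler starts one full bundle too far ahead, which matches the `3009900` value in the log line above.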