streamingfast / merger

Apache License 2.0

Mindreader restart produces block holes #15

Closed 0xSalman closed 4 months ago

0xSalman commented 2 years ago

Whenever mindreader is restarted while processing historical blocks, it misses 100 blocks (one 100-block file). Merger skips those 100 blocks because bundleSize is added twice to lastMergedBlock on this line: https://github.com/streamingfast/merger/blob/develop/app/merger/app.go#L100

A real example from my logs:

(merger) about to write merged blocks to storage location {"filename": "0003009600", "write_timeout": "5m0s", "lower_block_num": 3009600, "highest_block_num": 3009699}
Writing cached state to disk block=3,009,764

// mindreader node restarts

Loaded most recent local header number=3,009,765
Loaded most recent local fast block number=3,009,765
(codec) mindreader block stats {"block_num": 3009766, "duration": 277088, "stats": {"finalize_block":1}}
...
(merger) new bundler {"bundle_size": 100, "first_exclusive_highest_block_limit": 3009900}

first_exclusive_highest_block_limit should be 3009800 so that blocks 3009700-3009799 are not skipped.
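A minimal sketch of the boundary arithmetic (the function names here are hypothetical illustrations, not the actual merger code): with lastMergedBlock = 3009699 and bundleSize = 100, the exclusive upper limit of the next bundle should be lastMergedBlock + 1 + bundleSize = 3009800, while adding bundleSize twice yields the 3009900 seen in the log above.

```go
package main

import "fmt"

// nextExclusiveHighestBlockLimit is a hypothetical sketch of the expected
// bundler boundary: one bundle past the last merged block.
func nextExclusiveHighestBlockLimit(lastMergedBlock, bundleSize uint64) uint64 {
	return lastMergedBlock + 1 + bundleSize
}

// buggyLimit mimics the reported behavior: bundleSize is effectively added
// twice, so a whole 100-block bundle is skipped on restart.
func buggyLimit(lastMergedBlock, bundleSize uint64) uint64 {
	return lastMergedBlock + 1 + 2*bundleSize
}

func main() {
	fmt.Println(nextExclusiveHighestBlockLimit(3009699, 100)) // 3009800 (expected)
	fmt.Println(buggyLimit(3009699, 100))                     // 3009900 (observed)
}
```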

The other problem I have noticed is that the one-block folder is missing blocks 3009700-3009765; the blocks in that folder start at 3009766. As a result, the merger instance gets stuck and blocks the relayer and firehose.
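To illustrate why this stalls the pipeline (a hypothetical sketch, not the merger's actual implementation): the merger needs a contiguous run of one-block files to assemble a bundle, so a gap like 3009700-3009765 is never filled and nothing downstream progresses.

```go
package main

import "fmt"

// firstGap is a hypothetical helper that scans a sorted list of one-block
// file numbers and returns the first missing block, if any. A persistent
// gap like the one reported above would stall bundle assembly forever.
func firstGap(blockNums []uint64) (uint64, bool) {
	for i := 1; i < len(blockNums); i++ {
		if blockNums[i] != blockNums[i-1]+1 {
			return blockNums[i-1] + 1, true
		}
	}
	return 0, false
}

func main() {
	// One-block store as reported: jumps from 3009699 to 3009766.
	nums := []uint64{3009698, 3009699, 3009766, 3009767}
	if missing, ok := firstGap(nums); ok {
		fmt.Println(missing) // 3009700
	}
}
```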

matthewdarwin commented 2 years ago

Probably this is fixed with the new merger?