Open SozinM opened 1 year ago
@leoyvens do you know why this block wouldn't have been cleared from the cache when the reorg was handled?
We had a problem where a subgraph was stuck until we removed the duplicates from the cache.
@SozinM can you provide any more details / logs of what you observed here?
what version of Graph Node are you running?
Graph Node v0.31. It was some time ago, so no logs, but the problem occurred because the subgraph logic broke due to an incorrect block being processed.
Having two blocks for the same number in the block cache is expected. It would be unexpected for this to cause any issues, but we would need more info to debug that.
Example of the problem: we had a subgraph that failed with this error:

```
Mapping aborted at src/mapping.ts, line 72, column 7, with message: Unexpected null fromEn
wasm backtrace:
    0: 0x3e5a - <unknown>!src/mapping/handleTransferLINA
in handler `handleTransferLINA` at block #28842404 (abc680fa9e4df646864070bc179885f57b7b5c7502f30f9c82eb35f481b32b7f)", "block_number": 28842404, "block_hash": "0xabc680fa9e4df646864070bc179885f57b7b5c7502f30f9c82eb35f481b32b7f"
```
After we rewound by 1000 blocks, it failed again on the same block. After we fixed the cache by removing the duplicates and rewound by 1000 blocks again, it started to work correctly.
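For anyone hitting the same symptom, the manual fix boiled down to deleting the stale cache row for the affected block before rewinding. A rough sketch only: the `chain1.blocks` schema and table names come from our deployment and may differ in yours, and the canonical hash must be verified externally (block explorer or an archive RPC) before deleting anything; the hash literal here is just the one from the error above.

```sql
-- CAUTION: manual surgery on the block cache. Stop graph-node and take a
-- backup first. This keeps only the externally verified canonical entry
-- for the affected block number and removes any other (reorged) entries.
DELETE FROM chain1.blocks
WHERE number = 28842404
  AND hash <> '\xabc680fa9e4df646864070bc179885f57b7b5c7502f30f9c82eb35f481b32b7f';
```

Once the duplicate is gone, a rewind to a point comfortably before the reorg let the subgraph resume for us.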
Also @leoyvens, could you please point me to the place in the code where the block cache is read and duplicates are handled? I don't see any duplicate handling here: https://github.com/graphprotocol/graph-node/blob/27cbcdd0cc21bd9a22e604bf18fd0f6b6c8dc37e/chain/ethereum/src/ethereum_adapter.rs#L1258
We are also affected by this issue, specifically when using block data from the index node for a subgraph, for example: https://github.com/Sobal/network-blocks
Reorgs are more frequent on Solana (NeonEVM uses Solana block data), so this can cause frequent service breakdowns.
We observe that indexing stalls on a reorg for the subgraph above without entering a failed state; the only resolution is to delete the duplicate blocks and restart the index node.
hey @joehquak how deep can the reorgs be?
@azf20 32 blocks is the finalisation time for Solana
Bug report
When a reorg happens and graph-node has seen both forks, we end up with 2 entries in the cache table. Example for a reorg on Ethereum mainnet at block 17820205 with depth=1:
```sql
graph=# select hash, number, parent_hash from chain1.blocks where number=17820205;
```

(compare https://etherscan.io/block/17820205/f)
In our experience this sometimes causes errors in subgraphs, because the subgraph reads incorrect data from the cache (this needs confirming). We had a problem where a subgraph was stuck until we removed the duplicates from the cache.
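To check whether a deployment is affected at all, something like the following should list every block number with reorg leftovers (again assuming the chain's cache lives in `chain1.blocks`, as in the example above; the schema name depends on your setup):

```sql
-- List block numbers that have more than one cached entry,
-- together with the competing hashes.
SELECT number, count(*) AS entries, array_agg(hash) AS hashes
FROM chain1.blocks
GROUP BY number
HAVING count(*) > 1
ORDER BY number;
```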
Relevant log output
No response
IPFS hash
No response
Subgraph name or link to explorer
No response
Some information to help us out
OS information
None