Closed: Eduard-Voiculescu closed this issue 11 months ago
This is resolved now with firehose-ethereum v2.0.0-rc1?
Do you have any feedback for this @sduchesneau?
I think it was fixed even before that. But yes, the latest versions are stricter on read anyway. Let's close.
Post-Mortem:
Data Nexus has merge blocks which are corrupted. The real size should be … (value not captured). The merge block `14206400` contains the same block sequence multiple times; for example, block `14206400` appears more than once. More precisely, the merge block `14206400` contains these block sequences: … (listing not captured).
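For illustration, here is a minimal Rust sketch of the duplication check described above. It assumes the block numbers have already been decoded out of the merge file (decoding the merged-blocks file format itself is out of scope), and `duplicated_block_numbers` is a hypothetical helper, not part of firehose-ethereum:

```rust
use std::collections::HashSet;

// Given the block numbers decoded from a 100-block merge file,
// return every number that appears more than once.
fn duplicated_block_numbers(block_numbers: &[u64]) -> Vec<u64> {
    let mut seen = HashSet::new();
    let mut dups = Vec::new();
    for &num in block_numbers {
        // `insert` returns false when the value was already present.
        if !seen.insert(num) {
            dups.push(num); // e.g. 14206400 appearing a second time
        }
    }
    dups
}

fn main() {
    // Toy bundle standing in for merge block 14206400..14206499:
    let bundle = [14206400, 14206400, 14206401, 14206402];
    assert_eq!(duplicated_block_numbers(&bundle), vec![14206400]);
}
```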
Data Nexus has 2 Subgraph deployments of Uniswap v3. In their second Subgraph they encountered a panic: … (output not captured). This led us to investigate the issue with them to better understand how they are running the Subgraph.
First deployment of Uniswap v3 Subgraph: the first time the Substreams saw block `14206400`, it generated an `EntityChange` of type `Create` and added `1` to the `store_total_tx_count` store. Because the merge block file `14206400` contains block `14206400` twice, the second time the Substreams saw block `14206400` it generated an `EntityChange` of type `Update` and added another `1` to `store_total_tx_count`, bringing the value to `2`. When the Substreams saw block `14206499`, it flushed the store and the `graph_out` output to disk.
Second deployment of Uniswap v3 Subgraph (pointing to the SAME filesystem and SAME Substreams cache, but a DIFFERENT database): in this case, when the second Substreams reached block `14206400`, it read the `Update` `EntityChange` from the `graph_out` map output file and panicked because the `Entity` did not exist.
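As a simplified model of that failure (this is not graph-node's actual code, only an illustration of the failure mode), applying the cached `Update` against a fresh database that never saw the corresponding `Create` has nothing to update:

```rust
use std::collections::HashMap;

enum Operation { Create, Update }

struct Change { id: String, op: Operation, value: i64 }

// Apply one cached EntityChange to a toy key/value "database".
fn apply(db: &mut HashMap<String, i64>, change: Change) {
    match change.op {
        Operation::Create => { db.insert(change.id, change.value); }
        Operation::Update => match db.get_mut(&change.id) {
            Some(v) => *v = change.value,
            // A fresh database replaying a cached Update for an entity it
            // never created has nothing to update: this is the observed panic.
            None => panic!("entity `{}` does not exist", change.id),
        },
    }
}

fn main() {
    // The second deployment's database is empty, but the shared cache hands
    // it the Update produced by the duplicate sighting of block 14206400.
    let mut fresh_db: HashMap<String, i64> = HashMap::new();
    apply(&mut fresh_db, Change { id: "total".to_string(), op: Operation::Update, value: 2 });
}
```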