alexqrid opened this issue 3 months ago
After syncing for a while, the node panicked.
And after the restart here we go again with the wrong trusted batch:
Again, dropping a few blocks helped just for a while:
No luck: it syncs up to the head batch and then fails:
{"level":"info","ts":1710371744.7810743,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:236","msg":"consumer: Empty block range: [19429306, latest]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.7810829,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:143","msg":"consumer: received a fullSync but still have 1 items in channel to process, so not stopping consumer","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.7810907,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:119","msg":"consumer: processed ControlData[action:eventIsFullySynced param:19429305]. Result: %!s(<nil>)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.781105,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:183","msg":"consumer: processing rollupInfo #15: range:[19429306, latest] num_blocks [0] highest_block [19429305] statistics:wasted_time_waiting_for_data [0s] last_process_time [7.96µs] block_per_second [0.381153]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.7811155,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:236","msg":"consumer: Empty block range: [19429306, latest]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.783303,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:570","msg":"producer: Need a new value for Last Block On L1, doing the request old_block:19429335 -> new block:19429336","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.7833438,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:483","msg":"producer: New last block on L1: 19429336 -> fullRange: [19429305, 19429336] extendedRange: [19429336, 19429336]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.7833576,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:404","msg":"producer: producerWorking: still not synchronized with the new block range launch workers again","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.7834048,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:549","msg":"producer: launch_worker: num of launched workers: 1 ( working: 1 of 10 ) result: [inFilter:0 + inChannel:0 > maximum:25? ==> allow new req] segment [19429306, latest]/UNSAFE -> [LAUNCHED] [NoNextRange] ","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.821745,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:577","msg":"producer: Received responseRollupInfoByBlockRange: generic:[typeOfRequest: [rollup] duration: [38.256886ms] err: [<nil>] ] result:[ blockRange: [19429306, latest] len_blocks: [0] len_order:[0] lastBlockOfRangeSet [false] previousBlockSet [true]]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.821773,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:597","msg":"producer: sendind data to consumer: [0/1] -> range:[[19429306, latest]] Sending results [data] to consumer:data: blockRange: [19429306, latest] len_blocks: [0] len_order:[0] lastBlockOfRangeSet [false] previousBlockSet [true] NO_CTRL ","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8217943,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:267","msg":"producer: Status changed from [working] to [synchronized]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218043,"caller":"l1_parallel_sync/l1_rollup_info_producer.go:270","msg":"producer: send a message to consumer to indicate that we are synchronized. highestBlockRequested:19429305","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218393,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:183","msg":"consumer: processing rollupInfo #16: range:[19429306, latest] num_blocks [0] highest_block [19429305] statistics:wasted_time_waiting_for_data [0s] last_process_time [10.84µs] block_per_second [0.381060]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218496,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:236","msg":"consumer: Empty block range: [19429306, latest]","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218572,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:136","msg":"consumer: received a fullSync and nothing pending in channel to process, so stopping consumer. lastBlock: 19429305","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218632,"caller":"l1_parallel_sync/l1_rollup_info_consumer.go:119","msg":"consumer: processed ControlData[action:eventIsFullySynced param:19429305]. Result: consumer:stopped because is synchronized","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218682,"caller":"l1_parallel_sync/l1_sync_orchestration.go:154","msg":"orchestration: consumer finished","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218782,"caller":"l1_parallel_sync/l1_sync_orchestration.go:176","msg":"orchestration: consumer has finished. No error","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218882,"caller":"l1_parallel_sync/l1_sync_orchestration.go:186","msg":"orchestration: finished L1 sync orchestration With LastBlock. Last block synced: 19429305 err:nil","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8218968,"caller":"synchronizer/synchronizer.go:396","msg":"L1 state fully synchronized","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.824792,"caller":"synchronizer/synchronizer.go:329","msg":"latestSequencedBatchNumber: 1998737, latestSyncedBatch: 1998737, lastVerifiedBatchNumber: 1998734","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.824811,"caller":"synchronizer/synchronizer.go:336","msg":"Syncing trusted state (permissionless)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8248255,"caller":"l2_shared/trusted_batches_retrieve.go:84","msg":"syncTrustedState: Getting trusted state info","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371744.8995245,"caller":"l2_shared/trusted_batches_retrieve.go:96","msg":"syncTrustedState: latestSyncedBatch:1998737 syncTrustedState:1998740 (max Batch on network: 1998740)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371745.1578214,"caller":"l2_shared/processor_trusted_batch_sync.go:235","msg":"syncTrustedState: batch[1998737/1998740] mode nothing: Processing trusted batch: mode=nothing desc=exactly batches: Equal batch: 1998737","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371745.471844,"caller":"l2_sync_etrog/check_sync_status_to_process_batch.go:57","msg":"We have this GlobalExitRoot (0x31ec84e8e386139e8feaefd9563deb6808a27bf781dc87d1b9d7d9e53c2ebb4f) in L1block 19429221, so we are synced from L1 CheckL1SyncStatusEnoughToProcessBatch batchNumber:1998738 globalExitRoot: 0x31ec84e8e386139e8feaefd9563deb6808a27bf781dc87d1b9d7d9e53c2ebb4f ","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371745.4719331,"caller":"l2_shared/processor_trusted_batch_sync.go:235","msg":"syncTrustedState: batch[1998738/1998740] mode full: Processing trusted batch: mode=full desc=Batch is not on database, so is the first time we process it","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371749.7502549,"caller":"synchronizer/synchronizer.go:772","msg":"pending flushID: 31","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371749.9151192,"caller":"l2_sync_etrog/executor_trusted_batch_sync.go:402","msg":"syncTrustedState: batch[1998738/1998740] mode full: Batch 1998738: batchl2data len:12837 processed and stored: l2block[10727081-10727264] txs[36] oldStateRoot: 0xed624abc493144dc4ecda38ab468b0d95b86d821cb56b3029d8f4163b33abf76 -> newStateRoot:0x35bc939338961fc87dd04f8f18ff1f5d404016ec6b02b4b4440f82475439acb2","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371749.918751,"caller":"l2_shared/post_closed_batch_check_l2block.go:51","msg":"&{0xf4e9878c0850223e4ada96aed734d43a301e4ab30509d915ccfb43d8c7b1db61 0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347 0x148Ee7dAF16574cD020aFa34CC658f8F3fbd2800 0x35bc939338961fc87dd04f8f18ff1f5d404016ec6b02b4b4440f82475439acb2 0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421 0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] 0 0xc001b92c30 515 10727264 1125899906842624 0 1710371496 [] 0x0000000000000000000000000000000000000000000000000000000000000000 0xc0019d1e00 0x39b76d1b4f0942f92bdb482cf36050f8d1b5ed0b6639c674ad19b71eef9a25cb [] [] 0x0000000000000000000000000000000000000000000000000000000000000000 0xff6dea9b319acd937feb35ae0f6c9e4deb2d0b0c4c4c442a47319e3fe1d49e31}","pid":1,"version":"v0.6.2"}
{"level":"error","ts":1710371749.9189122,"caller":"l2_shared/processor_trusted_batch_sync.go:262","msg":"syncTrustedState: batch[1998738/1998740] mode full: error checking post closed batch. Error: %!(EXTRA *errors.errorString=last L2Block 10727264 in the database 0x78945459a4aeee50c3e7e6ab4f250223304b9cbb7f0059e4410e3e1b70bc62fc and the trusted batch 0x39b76d1b4f0942f92bdb482cf36050f8d1b5ed0b6639c674ad19b71eef9a25cb are different, string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:262 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ExecuteProcessBatch()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:190 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()\n/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()\n/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:319 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.2","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ExecuteProcessBatch\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:262\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:190\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState\n\t/src/synchronizer/synchronizer.go:648\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:337\nmain.runSynchronizer\n\t/src/cmd/run.go:319"}
{"level":"error","ts":1710371749.919067,"caller":"l2_shared/processor_trusted_batch_sync.go:192","msg":"syncTrustedState: batch[1998738/1998740] mode full: error processing trusted batch. Error: last L2Block 10727264 in the database 0x78945459a4aeee50c3e7e6ab4f250223304b9cbb7f0059e4410e3e1b70bc62fc and the trusted batch 0x39b76d1b4f0942f92bdb482cf36050f8d1b5ed0b6639c674ad19b71eef9a25cb are different%!(EXTRA string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:192 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()\n/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()\n/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:319 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.2","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:192\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState\n\t/src/synchronizer/synchronizer.go:648\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:337\nmain.runSynchronizer\n\t/src/cmd/run.go:319"}
{"level":"error","ts":1710371749.9191742,"caller":"l2_shared/trusted_batches_retrieve.go:139","msg":"syncTrustedState: batch[1998738/1998740] error processing trusted batch 1998738: last L2Block 10727264 in the database 0x78945459a4aeee50c3e7e6ab4f250223304b9cbb7f0059e4410e3e1b70bc62fc and the trusted batch 0x39b76d1b4f0942f92bdb482cf36050f8d1b5ed0b6639c674ad19b71eef9a25cb are different%!(EXTRA string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()\n/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()\n/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:319 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.2","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState\n\t/src/synchronizer/synchronizer.go:648\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:337\nmain.runSynchronizer\n\t/src/cmd/run.go:319"}
{"level":"warn","ts":1710371749.9193957,"caller":"synchronizer/synchronizer.go:340","msg":"error syncing trusted state. Error: last L2Block 10727264 in the database 0x78945459a4aeee50c3e7e6ab4f250223304b9cbb7f0059e4410e3e1b70bc62fc and the trusted batch 0x39b76d1b4f0942f92bdb482cf36050f8d1b5ed0b6639c674ad19b71eef9a25cb are different","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371750.9271817,"caller":"synchronizer/synchronizer.go:329","msg":"latestSequencedBatchNumber: 1998737, latestSyncedBatch: 1998737, lastVerifiedBatchNumber: 1998734","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371750.9272194,"caller":"synchronizer/synchronizer.go:336","msg":"Syncing trusted state (permissionless)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371750.927233,"caller":"l2_shared/trusted_batches_retrieve.go:84","msg":"syncTrustedState: Getting trusted state info","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371750.9589975,"caller":"l2_shared/trusted_batches_retrieve.go:96","msg":"syncTrustedState: latestSyncedBatch:1998737 syncTrustedState:1998740 (max Batch on network: 1998740)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371751.245062,"caller":"l2_shared/processor_trusted_batch_sync.go:235","msg":"syncTrustedState: batch[1998737/1998740] mode nothing: Processing trusted batch: mode=nothing desc=exactly batches: Equal batch: 1998737","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710371751.2560651,"caller":"synchronizer/synchronizer.go:838","msg":"Synchronized BLOCKED!: Wating for the flushID to be stored. FlushID to be stored: 31. Latest flushID stored: 30","pid":1,"version":"v0.6.2"}
Seeing the same across multiple nodes that have been upgraded
System information:
- zkEVM Node version: v0.6.1
- zkEVM Prover version: v5.0.4
- OS & Version: Linux container
- Network: Mainnet
2024-03-14T06:59:06.009Z ERROR l2_shared/trusted_batches_retrieve.go:139 syncTrustedState: batch[1998816/1998819] error processing trusted batch 1998816: last L2Block 10736259 in the database 0x000339ec8eac4e078dc88dda49f6fc5de896374e5b9f250f1731330cefea29fd and the trusted batch 0x0543160e3081dd8c42f5b8dbe94568c06c14209ca13301d06c572c8de547e268 are different%!(EXTRA string=
/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()
/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()
/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()
/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()
/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()
/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()
/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()
/src/cmd/run.go:319 main.runSynchronizer()
) {"pid": 1, "version": "v0.6.2"}
github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom
/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139
github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState
/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102
github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState
/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67
github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState
/src/synchronizer/synchronizer.go:648
github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync
/src/synchronizer/synchronizer.go:337
main.runSynchronizer
/src/cmd/run.go:319
Seeing the same
This issue has been seen with node 0.6.2 and prover 5.0.7, the current required versions for the hardfork
Same issue
Please follow these instructions: you need to use node v0.6.2 and executor v5.0.7. Versions 0.6.1 and 5.0.4 have issues that affect the blockhash calculation:
We did this, and after some catchup we still see issues. Here are some log snippets:
We did the rollback on one node.
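For anyone asking what the rollback actually was: it amounts to deleting recent L1 blocks from the state DB so the synchronizer re-fetches and re-processes from an earlier point. A sketch that only builds the statement — the `state.block` table name and the assumption that dependent rows (batches, L2 blocks) cascade on delete come from zkevm-node's state schema as we understand it, so verify against your own schema and back up the DB before running anything:

```python
def rollback_sql(l1_block: int) -> str:
    """Build the SQL for rolling the synchronizer back past `l1_block`.

    Assumption: zkevm-node stores synced L1 blocks in state.block and
    foreign keys cascade, so deleting a block row also drops the batches
    and L2 blocks derived from it. Verify before executing.
    """
    return f"DELETE FROM state.block WHERE block_num > {l1_block};"


# e.g. drop everything derived from L1 blocks after 19429000:
print(rollback_sql(19429000))
```

After the delete, restarting the node makes it re-sync the removed range from L1, which is why this only "helps for a while" when the trusted state itself is the problem.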
zkevm-sync-1 | 2024-03-14T17:47:27.420Z INFO synchronizer/synchronizer.go:396 L1 state fully synchronized {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | 2024-03-14T17:47:29.423Z INFO synchronizer/synchronizer.go:329 latestSequencedBatchNumber: 1998936, latestSyncedBatch: 1998938, lastVerifiedBatchNumber: 1998931 {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | 2024-03-14T17:47:29.423Z INFO synchronizer/synchronizer.go:336 Syncing trusted state (permissionless) {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | 2024-03-14T17:47:29.423Z INFO l2_shared/trusted_batches_retrieve.go:84 syncTrustedState: Getting trusted state info {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | 2024-03-14T17:47:29.521Z INFO l2_shared/trusted_batches_retrieve.go:96 syncTrustedState: latestSyncedBatch:1998938 syncTrustedState:1998938 (max Batch on network: 1998938) {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | 2024-03-14T17:47:59.620Z WARN l2_shared/trusted_batches_retrieve.go:120 syncTrustedState: batch[1998938/1998938] failed to get batch 1998938 from trusted state. Error: 502 - error code: 502 {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:120
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState
zkevm-sync-1 | /src/synchronizer/synchronizer.go:648
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync
zkevm-sync-1 | /src/synchronizer/synchronizer.go:337
zkevm-sync-1 | main.runSynchronizer
zkevm-sync-1 | /src/cmd/run.go:319
zkevm-sync-1 | 2024-03-14T17:47:59.621Z WARN synchronizer/synchronizer.go:340 error syncing trusted state. Error: 502 - error code: 502 {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync
zkevm-sync-1 | /src/synchronizer/synchronizer.go:340
zkevm-sync-1 | main.runSynchronizer
zkevm-sync-1 | /src/cmd/run.go:319
zkevm-sync-1 | 2024-03-14T17:48:02.643Z INFO synchronizer/synchronizer.go:329 latestSequencedBatchNumber: 1998936, latestSyncedBatch: 1998938, lastVerifiedBatchNumber: 1998931 {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | 2024-03-14T18:01:07.604Z ERROR l2_shared/processor_trusted_batch_sync.go:192 syncTrustedState: batch[1998939/1998940] mode nothing: error processing trusted batch. Error: when closing the batch, the batch is already close, but the data on state doesnt match the expected%!(EXTRA string=
zkevm-sync-1 | /src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()
zkevm-sync-1 | /src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:192 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()
zkevm-sync-1 | /src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()
zkevm-sync-1 | /src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()
zkevm-sync-1 | /src/cmd/run.go:319 main.runSynchronizer()
zkevm-sync-1 | ) {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:192
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState
zkevm-sync-1 | /src/synchronizer/synchronizer.go:648
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync
zkevm-sync-1 | /src/synchronizer/synchronizer.go:337
zkevm-sync-1 | main.runSynchronizer
zkevm-sync-1 | /src/cmd/run.go:319
zkevm-sync-1 | 2024-03-14T18:01:07.604Z ERROR l2_shared/trusted_batches_retrieve.go:139 syncTrustedState: batch[1998939/1998940] error processing trusted batch 1998939: when closing the batch, the batch is already close, but the data on state doesnt match the expected%!(EXTRA string=
zkevm-sync-1 | /src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()
zkevm-sync-1 | /src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()
zkevm-sync-1 | /src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()
zkevm-sync-1 | /src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()
zkevm-sync-1 | /src/cmd/run.go:319 main.runSynchronizer()
zkevm-sync-1 | ) {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState
zkevm-sync-1 | /src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState
zkevm-sync-1 | /src/synchronizer/synchronizer.go:648
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync
zkevm-sync-1 | /src/synchronizer/synchronizer.go:337
zkevm-sync-1 | main.runSynchronizer
zkevm-sync-1 | /src/cmd/run.go:319
zkevm-sync-1 | 2024-03-14T18:01:07.605Z WARN synchronizer/synchronizer.go:340 error syncing trusted state. Error: when closing the batch, the batch is already close, but the data on state doesnt match the expected {"pid": 1, "version": "v0.6.2"}
zkevm-sync-1 | github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync
zkevm-sync-1 | /src/synchronizer/synchronizer.go:340
zkevm-sync-1 | main.runSynchronizer
zkevm-sync-1 | /src/cmd/run.go:319
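While the trusted RPC is flaky (the 502s above), a quick way to see where a node stands relative to the trusted sequencer is to compare the batch counters that zkevm-node exposes over JSON-RPC (`zkevm_batchNumber`, `zkevm_virtualBatchNumber`, `zkevm_verifiedBatchNumber`). A stdlib-only sketch; the endpoint URLs in the comment are placeholders:

```python
import json
from urllib import request

# Batch counters from zkevm-node's JSON-RPC: head/trusted batch, last batch
# sequenced to L1, and last batch verified with a proof, respectively.
METHODS = ("zkevm_batchNumber", "zkevm_virtualBatchNumber",
           "zkevm_verifiedBatchNumber")


def parse_batch_number(result_hex: str) -> int:
    """The zkevm_* counters come back as hex-encoded quantities."""
    return int(result_hex, 16)


def rpc(url: str, method: str) -> int:
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": []}).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return parse_batch_number(json.loads(resp.read())["result"])


def compare_counters(local_url: str, trusted_url: str) -> None:
    """Mirror the latestSequencedBatchNumber / latestSyncedBatch /
    lastVerifiedBatchNumber log line against both endpoints."""
    for method in METHODS:
        print(f"{method}: local={rpc(local_url, method)} "
              f"trusted={rpc(trusted_url, method)}")


# e.g. (URLs are placeholders):
# compare_counters("http://localhost:8545", "https://zkevm-rpc.com")
```

If the local counters keep up with the trusted ones but the hash-mismatch errors persist, the divergence is in the stored state, not in sync lag.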
Still seeing issues.
Another of my nodes started showing different logs:
Guys can you pay attention and shed a light on what's happening please? @ToniRamirezM @tclemos @joanestebanr
A restart helped for a while, but then the node panicked:
Restarted again, synced for 5 minutes, then panicked again.
Something has been happening on the chain for the last 5 days.
Both of my nodes sync a few batches and then hit an error on a trusted batch:
"level":"error","ts":1710792658.8007498,"caller":"l2_shared/processor_trusted_batch_sync.go:262","msg":"syncTrustedState: batch[2000187/2000190] mode full: error checking post closed batch. Error: %!(EXTRA *errors.errorString=last L2Block 10866454 in the database 0x7088893fea67af8d094b8af2f16171c124f59c6cd000f92c5663bac8130051d3 and the trusted batch 0xff483ae382c7c22adcddded34b9b50f57f7a7213871b4a66a84f2946584d79f8 are different, string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:262 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ExecuteProcessBatch()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:190 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()\n/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()\n/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:319 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.2","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ExecuteProcessBatch\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:262\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:190\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState\n\t/src/synchronizer/synchronizer.go:648\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:337\nmain.runSynchronizer\n\t/src/cmd/run.go:319"}
{"level":"error","ts":1710792658.8009229,"caller":"l2_shared/processor_trusted_batch_sync.go:192","msg":"syncTrustedState: batch[2000187/2000190] mode full: error processing trusted batch. Error: last L2Block 10866454 in the database 0x7088893fea67af8d094b8af2f16171c124f59c6cd000f92c5663bac8130051d3 and the trusted batch 0xff483ae382c7c22adcddded34b9b50f57f7a7213871b4a66a84f2946584d79f8 are different%!(EXTRA string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:192 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()\n/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()\n/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:319 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.2","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*ProcessorTrustedBatchSync).ProcessTrustedBatch\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_sync.go:192\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:136\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState\n\t/src/synchronizer/synchronizer.go:648\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:337\nmain.runSynchronizer\n\t/src/cmd/run.go:319"}
{"level":"error","ts":1710792658.801048,"caller":"l2_shared/trusted_batches_retrieve.go:139","msg":"syncTrustedState: batch[2000187/2000190] error processing trusted batch 2000187: last L2Block 10866454 in the database 0x7088893fea67af8d094b8af2f16171c124f59c6cd000f92c5663bac8130051d3 and the trusted batch 0xff483ae382c7c22adcddded34b9b50f57f7a7213871b4a66a84f2946584d79f8 are different%!(EXTRA string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom()\n/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState()\n/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67 github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState()\n/src/synchronizer/synchronizer.go:648 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState()\n/src/synchronizer/synchronizer.go:337 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:319 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.2","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).syncTrustedBatchesToFrom\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:139\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*TrustedBatchesRetrieve).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/trusted_batches_retrieve.go:102\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/l2_sync/l2_shared.(*SyncTrustedStateExecutorSelector).SyncTrustedState\n\t/src/synchronizer/l2_sync/l2_shared/processor_trusted_batch_selector.go:67\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncTrustedState\n\t/src/synchronizer/synchronizer.go:648\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:337\nmain.runSynchronizer\n\t/src/cmd/run.go:319"}
{"level":"warn","ts":1710792658.8012958,"caller":"synchronizer/synchronizer.go:340","msg":"error syncing trusted state. Error: last L2Block 10866454 in the database 0x7088893fea67af8d094b8af2f16171c124f59c6cd000f92c5663bac8130051d3 and the trusted batch 0xff483ae382c7c22adcddded34b9b50f57f7a7213871b4a66a84f2946584d79f8 are different","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792659.8054812,"caller":"synchronizer/synchronizer.go:329","msg":"latestSequencedBatchNumber: 2000186, latestSyncedBatch: 2000186, lastVerifiedBatchNumber: 2000186","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792659.805522,"caller":"synchronizer/synchronizer.go:336","msg":"Syncing trusted state (permissionless)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792659.8055477,"caller":"l2_shared/trusted_batches_retrieve.go:84","msg":"syncTrustedState: Getting trusted state info","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792659.8377905,"caller":"l2_shared/trusted_batches_retrieve.go:96","msg":"syncTrustedState: latestSyncedBatch:2000186 syncTrustedState:2000190 (max Batch on network: 2000190)","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792660.0883129,"caller":"l2_shared/processor_trusted_batch_sync.go:235","msg":"syncTrustedState: batch[2000186/2000190] mode nothing: Processing trusted batch: mode=nothing desc=exactly batches: Equal batch: 2000186","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792660.0893412,"caller":"synchronizer/synchronizer.go:829","msg":"executor vs local: flushid=1876/1848, proverID=448e493f-ce37-4493-bee4-933d92c0c6bd/448e493f-ce37-4493-bee4-933d92c0c6bd","pid":1,"version":"v0.6.2"}
{"level":"info","ts":1710792660.0893958,"caller":"synchronizer/synchronizer.go:854","msg":"Pending Flushid fullfiled: 1848, executor have write 1876","pid":1,"version":"v0.6.2"}
Both my nodes stop on the same batch:
syncTrustedState: batch[2000191/2000192] mode full: error checking post closed batch. Error: %!(EXTRA *errors.errorString=last L2Block 10867022 in the database 0x9da6ad206d8b866c686d16c3c13b925f8829ed65a69aa063147c063a797709f8 and the trusted batch 0x3bfe8030c1def56cd408e1457e4ba7dce0b1d6b6393b4f019d61c9e3fd408069 are different
Seems like a bug in the software
This is still happening on both of our nodes.
This needs to be looked at as quickly as possible: the nodes sync normally if I rewind to a one-week-old DB snapshot, but once they reach the tip this starts happening again.
@alexqrid You're running an obsolete node & prover version.
These are the versions you need to be running:
Node version: v0.6.2
Prover version: v5.0.7
Bridge version: v0.4.2
After the upgrade, do the following:
I've always been running the new version and still have this problem. I followed the procedure and it kept the node syncing for a while, then it went back to the same error. Please acknowledge the problem and start sorting it out.
Stop giving people procedures that don't solve the issue. Please solve the issue.
@alexqrid You're running an obsolete node & prover version.
@obynonwane thanks, but unfortunately this is not true. I'm running the latest versions:
image: hermeznetwork/zkevm-node:v0.6.2
image: hermeznetwork/zkevm-prover:v5.0.7
Hi @alexqrid, I checked your logs. We have introduced features that check L2Block hashes to make sure they are the same as the trusted node's. I understand that this is a bit annoying, but it guarantees that you have the same L2Blocks as the trusted node. Here is more information about that:
v0.6.1: It checks the hash of the last L2Block after synchronizing from the trusted node (#3375).
v0.6.2: It checks the hash of each L2Block after synchronizing sequenced batches from L1 (#3442).
So the problem is that your database contains some L2Blocks with the wrong hash, and the current version of the node is able to detect that. Running the right version of the node alone is not going to fix this, because the problem was produced by an old version and is already in your node's database. The good news is that we now know the problem and can fix it:
I suggest running this tool, which will find the L2Block where the discrepancy starts. An example of executing it against mainnet:
./check_l2block_hashes_backwards_binary.sh https://zkevm-rpc.com/ http://localhost:8545/
This shows the exact point where the L2Block hashes start to differ. Based on this L2Block you can find the delete point for state.batch and state.block:
select virtual_batch.batch_num, virtual_batch.block_num from state.virtual_batch where batch_num=(select batch_num from state.l2block lb where block_num=$1);
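For reference, the backwards check the script performs can be sketched as below. This is only an illustration: the two stub functions stand in for real `eth_getBlockByNumber` calls against the trusted RPC and the local node, and the sketch assumes the head block already mismatches.

```shell
#!/bin/sh
# Sketch of a backwards hash comparison between a trusted RPC and a local node.
# In a real run, each stub would be a JSON-RPC call, e.g. (assumed endpoint shape):
#   curl -s "$RPC" -H 'Content-Type: application/json' \
#     -d '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["'"$hex"'",false],"id":1}' \
#     | jq -r .result.hash

# Stubs for the demo: the local node diverges from block 10742147 onwards.
trusted_hash() { echo "hash-$1"; }
local_hash() {
  if [ "$1" -ge 10742147 ]; then echo "bad-$1"; else echo "hash-$1"; fi
}

head=10742150   # current L2 head (assumed to mismatch already)
n=$head
# Walk backwards until the hashes agree; the block after that is the first bad one.
while [ "$n" -gt 0 ] && [ "$(trusted_hash "$n")" != "$(local_hash "$n")" ]; do
  n=$((n - 1))
done
echo "first mismatching L2 block: $((n + 1))"
# prints: first mismatching L2 block: 10742147
```

The block number this prints is what you would feed into the `state.l2block` lookup above to find the batch and L1 block delete points.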
@joanestebanr Appreciate your help! Can I just delete these bad blocks, or is it better to resync from scratch?
We have introduced features that check L2Block hashes to make sure they are the same as the trusted node's.
Does that mean there was a possibility of double spending, since hashes weren't validated and checked? And is the main reason it didn't happen the centralization around the foundation's single sequencer?
@alexqrid In both cases I suggest deleting from where it starts to be wrong, because that is going to be faster:
delete from state.batch where batch_num >= 1984821;
delete from state.block where block_num >= 19220466;
delete from state.batch where batch_num >= 1998875;
delete from state.block where block_num >= 19433281;
Hi @yorickdowne, I checked your logs and I see two problems:
error syncing trusted state. Error: 502 - error code: 502
Usually, retrying solves the situation.
@alexqrid
Does that mean there was a possibility of double spending, since hashes weren't validated and checked? And is the main reason it didn't happen the centralization around the foundation's single sequencer?
not at all!!
On the one hand, there are the L2Hashes, which are a representation of what happened and which obviously should match. We have found inconsistencies when comparing with Erigon (logs, cumulative gas, ...), but nothing in relation to the txs or state. On the other hand, there is the state root, and this has not changed or varied; therefore there is no difference in the state, only in how these L2Hashes were computed.
Thanks @joanestebanr, your suggestions helped and I was able to sync my nodes. Closing the issue, as the trigger and root cause were found.
I spoke too soon. The node is actually still syncing, but I ran the suggested tool again and noticed that it reports the same blocks mismatching:
I executed the deletion again:
(first bad l2block 10742147) : delete from state.batch where batch_num >= 1998875; delete from state.block where block_num >= 19433281;
It synced a few batches; I checked again and got the same result.
Hello. I also want to add that I have the same issue on two of my nodes. I deleted batches and blocks as suggested; the first time it helped, but after some time the nodes stopped syncing again, and after a second deletion the nodes can't reach the head, with this error:
ERROR synchronizer/synchronizer.go:626 error: checkL2block: L2BlockNumber: 10980670 localL2Block.Hash 0xf23f648827533c8df4dcd73cffcbf147839ba95e87a0afdea4e4a45896d0fd07 and trustedL2Block.Hash 0xd9f776eddee56423d0b91cdfbf500e90e80b24e8fa50899121cadbb087daac81 are different
zkEVM Node version: v0.6.4
zkEVM Prover version: v6.0.0
OS & Version: Linux container
Network: Mainnet
I upgraded the version to the latest and solved the out-of-sync problem as described above, but currently the node height is stuck at 10985472 again with the following logs
{"level":"info","ts":1711357977.4481509,"caller":"synchronizer/synchronizer.go:415","msg":"L1 state fully synchronized","pid":58,"version":"v0.6.4"}
{"level":"info","ts":1711357977.4506264,"caller":"synchronizer/synchronizer.go:348","msg":"latestSequencedBatchNumber: 2002026, latestSyncedBatch: 2001448, lastVerifiedBatchNumber: 2002013","pid":58,"version":"v0.6.4"}
{"level":"info","ts":1711357977.4506507,"caller":"synchronizer/synchronizer.go:394","msg":"Syncing L1 blocks sequentially lastEthBlockSynced=19494172","pid":58,"version":"v0.6.4"}
{"level":"info","ts":1711357977.4976807,"caller":"synchronizer/synchronizer.go:519","msg":"Syncing block 19494173 of 19510534","pid":58,"version":"v0.6.4"}
{"level":"info","ts":1711357977.4977312,"caller":"synchronizer/synchronizer.go:520","msg":"Getting rollup info from block 19494173 to block 19494273","pid":58,"version":"v0.6.4"}
{"level":"error","ts":1711357977.6226282,"caller":"state/l1infotree.go:74","msg":"error add new leaf to the L1InfoTree. Error: mismatched leaf count: 4928, expected: 5341\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:217 github.com/0xPolygonHermez/zkevm-node/log.Error()\n/src/state/l1infotree.go:74 github.com/0xPolygonHermez/zkevm-node/state.(*State).AddL1InfoTreeLeaf()\n/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:47 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process()\n/src/synchronizer/actions/processor_manager/processor_manager.go:66 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process()\n/src/synchronizer/synchronizer.go:625 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange()\n/src/synchronizer/synchronizer.go:533 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential()\n/src/synchronizer/synchronizer.go:395 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:318 
main.runSynchronizer()\n","pid":58,"version":"v0.6.4","stacktrace":"github.com/0xPolygonHermez/zkevm-node/state.(*State).AddL1InfoTreeLeaf\n\t/src/state/l1infotree.go:74\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process\n\t/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:47\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process\n\t/src/synchronizer/actions/processor_manager/processor_manager.go:66\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange\n\t/src/synchronizer/synchronizer.go:625\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential\n\t/src/synchronizer/synchronizer.go:533\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:395\nmain.runSynchronizer\n\t/src/cmd/run.go:318"}
{"level":"error","ts":1711357977.622718,"caller":"etrog/processor_l1_info_tree_update.go:49","msg":"error storing the l1InfoTree(etrog). BlockNumber: 19494192, error: mismatched leaf count: 4928, expected: 5341%!(EXTRA string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:49 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process()\n/src/synchronizer/actions/processor_manager/processor_manager.go:66 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process()\n/src/synchronizer/synchronizer.go:625 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange()\n/src/synchronizer/synchronizer.go:533 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential()\n/src/synchronizer/synchronizer.go:395 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:318 
main.runSynchronizer()\n)","pid":58,"version":"v0.6.4","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process\n\t/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:49\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process\n\t/src/synchronizer/actions/processor_manager/processor_manager.go:66\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange\n\t/src/synchronizer/synchronizer.go:625\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential\n\t/src/synchronizer/synchronizer.go:533\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:395\nmain.runSynchronizer\n\t/src/cmd/run.go:318"}
zkEVM Node version: v0.6.4
zkEVM Prover version: v6.0.0
OS & Version: Linux container
Network: Mainnet
{"level":"info","ts":1711375090.556431,"caller":"synchronizer/synchronizer.go:520","msg":"Getting rollup info from block 19494173 to block 19494273","pid":1,"version":"v0.6.4"}
{"level":"error","ts":1711375090.812974,"caller":"state/l1infotree.go:74","msg":"error add new leaf to the L1InfoTree. Error: mismatched leaf count: 4928, expected: 5341\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:217 github.com/0xPolygonHermez/zkevm-node/log.Error()\n/src/state/l1infotree.go:74 github.com/0xPolygonHermez/zkevm-node/state.(*State).AddL1InfoTreeLeaf()\n/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:47 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process()\n/src/synchronizer/actions/processor_manager/processor_manager.go:66 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process()\n/src/synchronizer/synchronizer.go:625 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange()\n/src/synchronizer/synchronizer.go:533 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential()\n/src/synchronizer/synchronizer.go:395 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:318 
main.runSynchronizer()\n","pid":1,"version":"v0.6.4","stacktrace":"github.com/0xPolygonHermez/zkevm-node/state.(*State).AddL1InfoTreeLeaf\n\t/src/state/l1infotree.go:74\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process\n\t/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:47\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process\n\t/src/synchronizer/actions/processor_manager/processor_manager.go:66\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange\n\t/src/synchronizer/synchronizer.go:625\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential\n\t/src/synchronizer/synchronizer.go:533\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:395\nmain.runSynchronizer\n\t/src/cmd/run.go:318"}
{"level":"error","ts":1711375090.813083,"caller":"etrog/processor_l1_info_tree_update.go:49","msg":"error storing the l1InfoTree(etrog). BlockNumber: 19494192, error: mismatched leaf count: 4928, expected: 5341%!(EXTRA string=\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()\n/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:49 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process()\n/src/synchronizer/actions/processor_manager/processor_manager.go:66 github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process()\n/src/synchronizer/synchronizer.go:625 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange()\n/src/synchronizer/synchronizer.go:533 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential()\n/src/synchronizer/synchronizer.go:395 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:318 
main.runSynchronizer()\n)","pid":1,"version":"v0.6.4","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer/actions/etrog.(*ProcessorL1InfoTreeUpdate).Process\n\t/src/synchronizer/actions/etrog/processor_l1_info_tree_update.go:49\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer/actions/processor_manager.(*L1EventProcessors).Process\n\t/src/synchronizer/actions/processor_manager/processor_manager.go:66\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange\n\t/src/synchronizer/synchronizer.go:625\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential\n\t/src/synchronizer/synchronizer.go:533\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:395\nmain.runSynchronizer\n\t/src/cmd/run.go:318"}
{"level":"error","ts":1711375090.813152,"caller":"synchronizer/synchronizer.go:627","msg":"error: mismatched leaf count: 4928, expected: 5341\n/src/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/src/log/log.go:217 github.com/0xPolygonHermez/zkevm-node/log.Error()\n/src/synchronizer/synchronizer.go:627 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange()\n/src/synchronizer/synchronizer.go:533 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential()\n/src/synchronizer/synchronizer.go:395 github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync()\n/src/cmd/run.go:318 main.runSynchronizer()\n","pid":1,"version":"v0.6.4","stacktrace":"github.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).ProcessBlockRange\n\t/src/synchronizer/synchronizer.go:627\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).syncBlocksSequential\n\t/src/synchronizer/synchronizer.go:533\ngithub.com/0xPolygonHermez/zkevm-node/synchronizer.(*ClientSynchronizer).Sync\n\t/src/synchronizer/synchronizer.go:395\nmain.runSynchronizer\n\t/src/cmd/run.go:318"}
{"level":"warn","ts":1711375090.8133898,"caller":"synchronizer/synchronizer.go:399","msg":"error syncing blocks: mismatched leaf count: 4928, expected: 5341","pid":1,"version":"v0.6.4"}
I have the same problem even after following the instructions. The node was running fine before this weekend's issues. Can you guys help, @joanestebanr @obynonwane?
Thanks a lot,
Stop the RPC, Synchronizer and Executor components (keep the state DB running)
Update node (RPC and Synchronizer) to v0.6.4 (keep RPC and Synchronizer stopped)
Update Executor to v6.0.0 (keep Executor stopped)
Connect to the state DB and run the following queries:
delete from state.batch where batch_num >= 2001300;
delete from state.block where block_num >= 19490457;
Start the components in this order:
5.1 Executor
5.2 Synchronizer
5.3 RPC
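The steps above can be collected into one script. This is only a sketch: the compose service names (`zkevm-executor`, `zkevm-sync`, `zkevm-rpc`) and the `$STATEDB_URL` variable are assumptions to adapt to your own deployment, and it prints each command instead of running it until you set `DRY_RUN=0`.

```shell
#!/bin/sh
# Hedged sketch of the recovery procedure. Defaults to a dry run that only
# prints the commands; set DRY_RUN=0 to actually execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

# 1-3. Stop RPC, Synchronizer, and Executor (state DB keeps running).
run docker compose stop zkevm-rpc zkevm-sync zkevm-executor
# Update the image tags in your compose file here: node -> v0.6.4, prover -> v6.0.0.

# 4. Delete the bad rows from the state DB (numbers are the ones from this thread;
#    replace them with your own divergence point).
run psql "$STATEDB_URL" -c 'DELETE FROM state.batch WHERE batch_num >= 2001300;'
run psql "$STATEDB_URL" -c 'DELETE FROM state.block WHERE block_num >= 19490457;'

# 5. Start order matters: Executor first, then Synchronizer, then RPC.
run docker compose up -d zkevm-executor
run docker compose up -d zkevm-sync
run docker compose up -d zkevm-rpc
```

Running it once with the default dry run lets you review the exact commands before pointing it at a live deployment.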
Why not add healing functionality to the binary? When a bad block is detected, unwind and proceed.
System information
zkEVM Node version:
v0.6.1
zkEVM Prover version: v5.0.4
OS & Version: Linux container
Network: Mainnet
Expected behaviour
Node is syncing fine.
Actual behaviour
Started to receive many errors related to the trusted batch. I work around it by deleting the latest batch from statedb.batch and ~10 blocks from statedb.block, but then the error comes back (see above).