Validators that ran with the old code (<v0.6.0) after Durango have created a faulty state. Did you copy a validator DB to new nodes? If so, they also have this faulty state.
We're working on a way to mitigate missed upgrades.
We have 7 validators; 4 of them were operating normally, but the other 3 fell into this state. I copied the DB from one of the four healthy validators to the others to mitigate the issue. I am looking forward to updates! Thank you.
Hello there! We just released Subnet-EVM v0.6.3, which focuses on this very issue. You can now coordinate an upgrade for your network to reschedule the Durango activation. I advise setting a timestamp in the future, far enough out that your network has time to coordinate the upgrade. See here for more information. If you have any questions, feel free to ask.
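For illustration, a minimal upgrade.json sketch of what rescheduling Durango via an override might look like. The top-level networkUpgradeOverrides key is the one discussed in this thread; the durangoTimestamp field name and the timestamp value are assumptions for illustration, not confirmed here. Use a Unix timestamp far enough in the future for every validator to upgrade first:

```json
{
  "networkUpgradeOverrides": {
    "durangoTimestamp": 1714000000
  }
}
```

The same upgrade.json (with the same timestamp) would presumably need to be placed on every node in the network before restarting, so the rescheduled activation is applied consistently.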
Hello, I have upgraded to avalanchego-v1.11.4 and subnet-evm_0.6.3, but it couldn't detect the networkUpgradeOverrides setting. I'm sharing the bootstrap logs.
Apr 22 07:41:28 vmi1540889.contaboserver.net avalanchego[1961877]: INFO [04-22|07:41:28.649] <2KV1ighhTjNpuQq8BVgHeJF3QHdF3KxhY9AqB9M1GfUuBCKjNo Chain> github.com/ava-labs/subnet-evm/plugin/evm/vm.go:288: Initializing Subnet EVM VM Version=v0.6.3 Config="{AirdropFile: SnowmanAPIEnabled:false AdminAPIEnabled:false AdminAPIDir: WarpAPIEnabled:false EnabledEthAPIs:[eth eth-filter net web3 internal-eth internal-blockchain internal-transaction] ContinuousProfilerDir: ContinuousProfilerFrequency:15m0s ContinuousProfilerMaxFiles:5 RPCGasCap:50000000 RPCTxFeeCap:100 TrieCleanCache:512 TrieDirtyCache:512 TrieDirtyCommitTarget:20 TriePrefetcherParallelism:16 SnapshotCache:256 Preimages:false SnapshotWait:false SnapshotVerify:false Pruning:false AcceptorQueueLimit:64 CommitInterval:4096 AllowMissingTries:false PopulateMissingTries:<nil> PopulateMissingTriesParallelism:1024 PruneWarpDB:false MetricsExpensiveEnabled:true LocalTxsEnabled:false TxPoolPriceLimit:1 TxPoolPriceBump:10 TxPoolAccountSlots:16 TxPoolGlobalSlots:5120 TxPoolAccountQueue:64 TxPoolGlobalQueue:1024 TxPoolLifetime:10m0s APIMaxDuration:0s WSCPURefillRate:0s WSCPUMaxStored:0s MaxBlocksPerRequest:0 AllowUnfinalizedQueries:false AllowUnprotectedTxs:false AllowUnprotectedTxHashes:[0xfefb2da535e927b85fe68eb81cb2e4a5827c905f78381a01ef2322aa9b0aee8e] KeystoreDirectory: KeystoreExternalSigner: KeystoreInsecureUnlockAllowed:false PushGossipPercentStake:0.9 PushGossipNumValidators:100 PushGossipNumPeers:0 PushRegossipNumValidators:10 PushRegossipNumPeers:0 PushGossipFrequency:100ms PullGossipFrequency:1s RegossipFrequency:30s PriorityRegossipAddresses:[] LogLevel:info LogJSONFormat:false FeeRecipient: OfflinePruning:false OfflinePruningBloomFilterSize:512 OfflinePruningDataDirectory: MaxOutboundActiveRequests:16 MaxOutboundActiveCrossChainRequests:64 StateSyncEnabled:false StateSyncSkipResume:false StateSyncServerTrieCache:64 StateSyncIDs: StateSyncCommitInterval:16384 StateSyncMinBlocks:300000 StateSyncRequestSize:1024 InspectDatabase:false SkipUpgradeCheck:true AcceptedCacheSize:32 TxLookupLimit:0 SkipTxIndexing:false WarpOffChainMessages:[]}"
Apr 22 07:41:28 vmi1540889.contaboserver.net avalanchego[1961877]: [04-22|07:41:28.651] ERROR chains/manager.go:331 error creating chain {"subnetID": "21HEmZx5zVHYcP3JzbmRGVsYdm3HjrM2BMEPoCpoS3fHmZshq9", "chainID": "2KV1ighhTjNpuQq8BVgHeJF3QHdF3KxhY9AqB9M1GfUuBCKjNo", "chainAlias": "2KV1ighhTjNpuQq8BVgHeJF3QHdF3KxhY9AqB9M1GfUuBCKjNo", "vmID": "nyfSdZmrxTXbJrxdUoqLegVGQzWF6RVL4jYn7Yr6NsMzpdrFA", "error": "error while creating new snowman vm rpc error: code = Unknown desc = failed to parse upgrade bytes: unknown precompile config: networkUpgradeOverrides"}
EDIT: My bad, I had included the configuration inside precompileUpgrades; when I appended this config to upgrade.json as its own key, it worked. Thank you!
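For anyone hitting the same "unknown precompile config: networkUpgradeOverrides" error, the takeaway is that the override belongs at the top level of upgrade.json, as a sibling of precompileUpgrades rather than nested inside it. A rough sketch of the working layout (the empty precompileUpgrades array stands in for whatever entries your file already has, and durangoTimestamp plus its value are illustrative assumptions):

```json
{
  "precompileUpgrades": [],
  "networkUpgradeOverrides": {
    "durangoTimestamp": 1714000000
  }
}
```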
Short story: We recently missed the major Warp messaging (Fuji) update and the subnet was stopped for 3 days, after which we synchronized all of the archive nodes and RPCs by copying the DB. However, a new RPC node that now tries to join the subnet receives a FATAL error and the bootstrapping process shuts down.
Describe the bug
Somehow the subnet produces a dirty block and inserts it through all of the validators, and then a newly joined node fails to bootstrap the subnet chain. I am sharing logs from both an archive validator node and a recent node (which is just an indexing node). After creating a certain block (2PGQzuTjzBEcVqTnNmVwhAsk4PsXV88M7DT7wRtgmC973J4TVf), the validator kills the subnet subprocess.
The last line of the recently joined RPC node's log shows the following error before the node shuts down.
Logs
1- The subnet archive validator node logs (truncated up to the faulty block ID):
2- The recent RPC node entering the network; the last line shows the FATAL error.
Operating System: Debian 11 x64
Any help or clarification would be appreciated.