filecoin-project / lotus

Reference implementation of the Filecoin protocol, written in Go
https://lotus.filecoin.io/

When the Node Cluster synchronization is inconsistent, a bug occurs #10024

Closed zelin44913 closed 1 year ago

zelin44913 commented 1 year ago

Checklist

Lotus component

Lotus Version

lotus version 1.19.0+mainnet+git.64059ca87

Describe the Bug

With Node Cluster enabled, if one of the nodes has been offline for too long, it takes a while for it to catch up on block height after its service is restored. The other two nodes are normal, but during this catch-up period lotus-miner produces a large number of error logs and window PoSt cannot finish normally.

{"level":"error","ts":"2023-01-14T15:17:29.518+0800","logger":"wdpost","caller":"wdpost/wdpost_run.go:98","msg":"runPoStCycle failed: failed to get chain randomness from beacon for window post (ts=2512445; deadline={2512445 2510064 40 2512464 2512524 2512444 2512394 48 2880 60 20 70}):\n github.com/filecoin-project/lotus/storage/wdpost.(*WindowPoStScheduler).runPoStCycle\n /nfs/go/lotus-1.19.0/storage/wdpost/wdpost_run.go:431\n - cannot draw randomness from the future"}

New blocks can't be mined either:

{"level":"info","ts":"2023-01-14T15:38:10.076+0800","logger":"miner","caller":"miner/miner.go:483","msg":"completed mineOne","tookMilliseconds":18,"forRound":2512517,"baseEpoch":2512455,"baseDeltaSeconds":1840,"nullRounds":61,"lateStart":false,"beaconEpoch":2608362,"lookbackEpochs":900,"networkPowerAtLookback":"21749528320385679360","minerPowerAtLookback":"6372975553019904","isEligible":true,"isWinner":false,"error":null}
{"level":"info","ts":"2023-01-14T15:38:40.097+0800","logger":"miner","caller":"miner/miner.go:483","msg":"completed mineOne","tookMilliseconds":17,"forRound":2512518,"baseEpoch":2512455,"baseDeltaSeconds":1870,"nullRounds":62,"lateStart":false,"beaconEpoch":2608363,"lookbackEpochs":900,"networkPowerAtLookback":"21749528526544109568","minerPowerAtLookback":"6372975553019904","isEligible":true,"isWinner":false,"error":null}

Logging Information

{"level":"info","ts":"2023-01-14T15:38:10.076+0800","logger":"miner","caller":"miner/miner.go:483","msg":"completed mineOne","tookMilliseconds":18,"forRound":2512517,"baseEpoch":2512455,"baseDeltaSeconds":1840,"nullRounds":61,"lateStart":false,"beaconEpoch":2608362,"lookbackEpochs":900,"networkPowerAtLookback":"21749528320385679360","minerPowerAtLookback":"6372975553019904","isEligible":true,"isWinner":false,"error":null}
{"level":"info","ts":"2023-01-14T15:38:40.097+0800","logger":"miner","caller":"miner/miner.go:483","msg":"completed mineOne","tookMilliseconds":17,"forRound":2512518,"baseEpoch":2512455,"baseDeltaSeconds":1870,"nullRounds":62,"lateStart":false,"beaconEpoch":2608363,"lookbackEpochs":900,"networkPowerAtLookback":"21749528526544109568","minerPowerAtLookback":"6372975553019904","isEligible":true,"isWinner":false,"error":null}

{"level":"error","ts":"2023-01-14T15:16:29.642+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacebvadripkrwei555nucoucfpw5tshqedbhzmvs7qjkmmrkrhb72qs: failed to load hamt node: ipld: could not find bafy2bzacebvadripkrwei555nucoucfpw5tshqedbhzmvs7qjkmmrkrhb72qs"}
{"level":"error","ts":"2023-01-14T15:16:30.153+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacebvadripkrwei555nucoucfpw5tshqedbhzmvs7qjkmmrkrhb72qs: failed to load hamt node: ipld: could not find bafy2bzacebvadripkrwei555nucoucfpw5tshqedbhzmvs7qjkmmrkrhb72qs"}
{"level":"error","ts":"2023-01-14T15:16:30.165+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: loading tipset {bafy2bzacecnu7dqg5tgrxperl7orpz6ekc7phtyu27bebz2yda6wdi6ugoy6o,bafy2bzacebkxohh2zxy3o5r4umsjxn36sl4gtanqyequtu3zaz3kwaoccnlpi,bafy2bzacec2adm4zvuu3yp6euztjcobrtj2nvlevereyuw6k2agfgmbnms4uq,bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y}: get block bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y: ipld: could not find bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y"}
{"level":"error","ts":"2023-01-14T15:16:30.170+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: loading tipset {bafy2bzacecnu7dqg5tgrxperl7orpz6ekc7phtyu27bebz2yda6wdi6ugoy6o,bafy2bzacebkxohh2zxy3o5r4umsjxn36sl4gtanqyequtu3zaz3kwaoccnlpi,bafy2bzacec2adm4zvuu3yp6euztjcobrtj2nvlevereyuw6k2agfgmbnms4uq,bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y,bafy2bzaceasts5ifrw4jfi276wwmevnkyuxowvnbdss67y5z6ibggl5cf4lsc}: get block bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y: ipld: could not find bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y"}
{"level":"error","ts":"2023-01-14T15:16:30.189+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: loading tipset {bafy2bzacecnu7dqg5tgrxperl7orpz6ekc7phtyu27bebz2yda6wdi6ugoy6o,bafy2bzacebkxohh2zxy3o5r4umsjxn36sl4gtanqyequtu3zaz3kwaoccnlpi,bafy2bzacec2adm4zvuu3yp6euztjcobrtj2nvlevereyuw6k2agfgmbnms4uq,bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y,bafy2bzaceasts5ifrw4jfi276wwmevnkyuxowvnbdss67y5z6ibggl5cf4lsc,bafy2bzaceby44pwtrwo7xlpu2d7mvz4ukbcddcacb7yxgv7mhr3obavj5adiq}: get block bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y: ipld: could not find bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y"}
{"level":"error","ts":"2023-01-14T15:16:30.534+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: loading tipset {bafy2bzacecnu7dqg5tgrxperl7orpz6ekc7phtyu27bebz2yda6wdi6ugoy6o,bafy2bzacebkxohh2zxy3o5r4umsjxn36sl4gtanqyequtu3zaz3kwaoccnlpi,bafy2bzaceafrxwrht6jdswquge5ok4tkhybn5z7a6z7nteepvr2eui7nnqy4q,bafy2bzacec2adm4zvuu3yp6euztjcobrtj2nvlevereyuw6k2agfgmbnms4uq,bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y,bafy2bzaceasts5ifrw4jfi276wwmevnkyuxowvnbdss67y5z6ibggl5cf4lsc,bafy2bzaceby44pwtrwo7xlpu2d7mvz4ukbcddcacb7yxgv7mhr3obavj5adiq}: get block bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y: ipld: could not find bafy2bzacecznli7k4i5ezglz4tcdujsyujstflu2p4tvaby5bj6rnghq5xy3y"}
{"level":"error","ts":"2023-01-14T15:16:59.639+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacecio7mujsem4oemu3dpc3mcd5parw3uo2cyrfr2mzyoabpbt6az5w: failed to load hamt node: ipld: could not find bafy2bzacecio7mujsem4oemu3dpc3mcd5parw3uo2cyrfr2mzyoabpbt6az5w"}
{"level":"error","ts":"2023-01-14T15:17:00.227+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacecio7mujsem4oemu3dpc3mcd5parw3uo2cyrfr2mzyoabpbt6az5w: failed to load hamt node: ipld: could not find bafy2bzacecio7mujsem4oemu3dpc3mcd5parw3uo2cyrfr2mzyoabpbt6az5w"}
{"level":"error","ts":"2023-01-14T15:17:00.239+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacecio7mujsem4oemu3dpc3mcd5parw3uo2cyrfr2mzyoabpbt6az5w: failed to load hamt node: ipld: could not find bafy2bzacecio7mujsem4oemu3dpc3mcd5parw3uo2cyrfr2mzyoabpbt6az5w"}
{"level":"info","ts":"2023-01-14T15:17:20.490+0800","logger":"bellperson::groth16::prover","caller":"/root/.cargo/registry/src/github.com-1ecc6299db9ec823/bellperson-0.22.0/src/groth16/prover.rs:592","msg":"prover time: 759.847030619s"}
{"level":"info","ts":"2023-01-14T15:17:22.401+0800","logger":"storage_proofs_core::compound_proof","caller":"/root/.cargo/registry/src/github.com-1ecc6299db9ec823/storage-proofs-core-12.0.0/src/compound_proof.rs:110","msg":"snark_proof:finish"}
{"level":"info","ts":"2023-01-14T15:17:22.401+0800","logger":"filecoin_proofs::api::window_post","caller":"/root/.cargo/registry/src/github.com-1ecc6299db9ec823/filecoin-proofs-12.0.0/src/api/window_post.rs:174","msg":"generate_window_post:finish"}
{"level":"info","ts":"2023-01-14T15:17:29.516+0800","logger":"wdpost","caller":"wdpost/wdpost_run.go:412","msg":"computing window post","cycle":"2023-01-14T15:02:30.659+0800","batch":0,"elapsed":892.663643361,"skip":0,"err":null}
{"level":"info","ts":"2023-01-14T15:17:29.518+0800","logger":"wdpost","caller":"wdpost/wdpost_run.go:262","msg":"post cycle done","cycle":"2023-01-14T15:02:30.659+0800","took":898.859359437}
{"level":"error","ts":"2023-01-14T15:17:29.518+0800","logger":"wdpost","caller":"wdpost/wdpost_run.go:98","msg":"runPoStCycle failed: failed to get chain randomness from beacon for window post (ts=2512445; deadline={2512445 2510064 40 2512464 2512524 2512444 2512394 48 2880 60 20 70}):\n    github.com/filecoin-project/lotus/storage/wdpost.(*WindowPoStScheduler).runPoStCycle\n        /nfs/go/lotus-1.19.0/storage/wdpost/wdpost_run.go:431\n  - cannot draw randomness from the future"}
{"level":"warn","ts":"2023-01-14T15:17:29.518+0800","logger":"wdpost","caller":"wdpost/wdpost_changehandler.go:254","msg":"Aborted window post Proving (Deadline: &{CurrentEpoch:2512445 PeriodStart:2510064 Index:40 Open:2512464 Close:2512524 Challenge:2512444 FaultCutoff:2512394 WPoStPeriodDeadlines:48 WPoStProvingPeriod:2880 WPoStChallengeWindow:60 WPoStChallengeLookback:20 FaultDeclarationCutoff:70})"}
{"level":"error","ts":"2023-01-14T15:17:30.049+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: loading tipset {bafy2bzaceaydw3ntwdnlkbuwifugzqzbvfn5olcihbu2ljcycwiuv3pxsucmo}: get block bafy2bzaceaydw3ntwdnlkbuwifugzqzbvfn5olcihbu2ljcycwiuv3pxsucmo: ipld: could not find bafy2bzaceaydw3ntwdnlkbuwifugzqzbvfn5olcihbu2ljcycwiuv3pxsucmo"}
{"level":"error","ts":"2023-01-14T15:17:30.198+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: loading tipset {bafy2bzaceaw3ugw5r2wv7gzc6onuyqnes4zuhplk3vleceikhdynp5up5kkbo,bafy2bzaceaydw3ntwdnlkbuwifugzqzbvfn5olcihbu2ljcycwiuv3pxsucmo}: get block bafy2bzaceaw3ugw5r2wv7gzc6onuyqnes4zuhplk3vleceikhdynp5up5kkbo: ipld: could not find bafy2bzaceaw3ugw5r2wv7gzc6onuyqnes4zuhplk3vleceikhdynp5up5kkbo"}
{"level":"error","ts":"2023-01-14T15:17:30.246+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzaceddebg4jzn6py4kqdptdeyv47cpldhghzixqpr7dfeure4ecpdxi2: failed to load hamt node: ipld: could not find bafy2bzaceddebg4jzn6py4kqdptdeyv47cpldhghzixqpr7dfeure4ecpdxi2"}
{"level":"error","ts":"2023-01-14T15:17:30.259+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzaceddebg4jzn6py4kqdptdeyv47cpldhghzixqpr7dfeure4ecpdxi2: failed to load hamt node: ipld: could not find bafy2bzaceddebg4jzn6py4kqdptdeyv47cpldhghzixqpr7dfeure4ecpdxi2"}
{"level":"error","ts":"2023-01-14T15:17:30.397+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzaceddebg4jzn6py4kqdptdeyv47cpldhghzixqpr7dfeure4ecpdxi2: failed to load hamt node: ipld: could not find bafy2bzaceddebg4jzn6py4kqdptdeyv47cpldhghzixqpr7dfeure4ecpdxi2"}
{"level":"error","ts":"2023-01-14T15:18:00.189+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacear6ywcakbff7adobvcdyp4vvjsg3v5a4fyoyiud6s4m34t3kb3a4: failed to load hamt node: ipld: could not find bafy2bzacear6ywcakbff7adobvcdyp4vvjsg3v5a4fyoyiud6s4m34t3kb3a4"}
{"level":"error","ts":"2023-01-14T15:18:00.213+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacear6ywcakbff7adobvcdyp4vvjsg3v5a4fyoyiud6s4m34t3kb3a4: failed to load hamt node: ipld: could not find bafy2bzacear6ywcakbff7adobvcdyp4vvjsg3v5a4fyoyiud6s4m34t3kb3a4"}
{"level":"error","ts":"2023-01-14T15:18:04.896+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacear6ywcakbff7adobvcdyp4vvjsg3v5a4fyoyiud6s4m34t3kb3a4: failed to load hamt node: ipld: could not find bafy2bzacear6ywcakbff7adobvcdyp4vvjsg3v5a4fyoyiud6s4m34t3kb3a4"}
{"level":"error","ts":"2023-01-14T15:18:29.657+0800","logger":"wdpost","caller":"wdpost/wdpost_sched.go:223","msg":"handling head updates in window post sched: failed to load miner actor: load state tree: failed to load state tree bafy2bzacedpdbvr2w3kzomca2csjnf7ypu5ds5vbflgyu2tvdia2icoxgscxu: failed to load hamt node: ipld: could not find bafy2bzacedpdbvr2w3kzomca2csjnf7ypu5ds5vbflgyu2tvdia2icoxgscxu"}

Repro Steps

  1. Run '...'
  2. Do '...'
  3. See error '...' ...
TippyFlitsUK commented 1 year ago

Hey @zelin44913 👋 Have you tried changing any of the config options in your daemon config.toml to change the default behaviour?

[Cluster]
  # EXPERIMENTAL. config to enabled node cluster with raft consensus
  #
  # type: bool
  # env var: LOTUS_CLUSTER_CLUSTERMODEENABLED
  #ClusterModeEnabled = false

  # A folder to store Raft's data.
  #
  # type: string
  # env var: LOTUS_CLUSTER_DATAFOLDER
  #DataFolder = ""

  # InitPeersetMultiAddr provides the list of initial cluster peers for new Raft
  # peers (with no prior state). It is ignored when Raft was already
  # initialized or when starting in staging mode.
  #
  # type: []string
  # env var: LOTUS_CLUSTER_INITPEERSETMULTIADDR
  #InitPeersetMultiAddr = []

  # LeaderTimeout specifies how long to wait for a leader before
  # failing an operation.
  #
  # type: Duration
  # env var: LOTUS_CLUSTER_WAITFORLEADERTIMEOUT
  #WaitForLeaderTimeout = "15s"

  # NetworkTimeout specifies how long before a Raft network
  # operation is timed out
  #
  # type: Duration
  # env var: LOTUS_CLUSTER_NETWORKTIMEOUT
  #NetworkTimeout = "1m40s"

  # CommitRetries specifies how many times we retry a failed commit until
  # we give up.
  #
  # type: int
  # env var: LOTUS_CLUSTER_COMMITRETRIES
  #CommitRetries = 1

  # How long to wait between retries
  #
  # type: Duration
  # env var: LOTUS_CLUSTER_COMMITRETRYDELAY
  #CommitRetryDelay = "200ms"

  # BackupsRotate specifies the maximum number of Raft's DataFolder
  # copies that we keep as backups (renaming) after cleanup.
  #
  # type: int
  # env var: LOTUS_CLUSTER_BACKUPSROTATE
  #BackupsRotate = 6

  # Tracing enables propagation of contexts across binary boundaries.
  #
  # type: bool
  # env var: LOTUS_CLUSTER_TRACING
  #Tracing = false
zelin44913 commented 1 year ago

This is my configuration file; it had not been modified when the bug occurred.

[API]
ListenAddress = "/ip4/*.*.*.*/tcp/1314/http"

[Libp2p]
ListenAddresses = ["/ip4/*.*.*.*/tcp/1413","/ip4/*.*.*.*/tcp/1413"]

[Raft]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWGB8gAQdraaaGQy9Y3V6zKp2cwXDHppUY9pdrGM1LgeUw","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWN3aexhuU3ezNokUzFkTB2EMgBHd8tbdmrViCggAZJ9Kn","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWDsSf9755KkPHyzSSXUdnUGjsc5vKs43GytDCYrpkGwGX"]

[Cluster]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWGB8gAQdraaaGQy9Y3V6zKp2cwXDHppUY9pdrGM1LgeUw","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWN3aexhuU3ezNokUzFkTB2EMgBHd8tbdmrViCggAZJ9Kn","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWDsSf9755KkPHyzSSXUdnUGjsc5vKs43GytDCYrpkGwGX"]
github-actions[bot] commented 1 year ago

Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 24 hours.

TippyFlitsUK commented 1 year ago

Remove the [Raft] section from your config.toml - the [Cluster] section replaces it.

The config options I posted above allow you to change cluster settings, including timeouts. Can you please try adjusting these settings to see if they produce a more desirable outcome?
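
For example, an uncommented [Cluster] section with more forgiving timeouts might look roughly like this (the values below are purely illustrative starting points, not tested recommendations):

[Cluster]
  # keep cluster mode and your existing peer set exactly as they are now
  ClusterModeEnabled = true
  #InitPeersetMultiAddr = [ ...your three existing /ip4/.../p2p/... entries... ]

  # illustrative values only: wait longer for a Raft leader and for Raft
  # network operations, and retry failed commits a few more times
  WaitForLeaderTimeout = "30s"
  NetworkTimeout = "3m20s"
  CommitRetries = 3
  CommitRetryDelay = "500ms"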

Let me know how it goes!!

github-actions[bot] commented 1 year ago

Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 24 hours.

github-actions[bot] commented 1 year ago

This issue was closed because it is missing author input.

zelin44913 commented 1 year ago

@TippyFlitsUK
Except for the following parameters, everything else is the default configuration, unmodified. I don't think the bug that appears now has anything to do with timeouts. You should add the relevant node health checks and not connect to a node whose synchronization is abnormal.


[API]
ListenAddress = "/ip4/*.*.*.*/tcp/1314/http"

[Libp2p]
ListenAddresses = ["/ip4/*.*.*.*/tcp/1413","/ip4/*.*.*.*/tcp/1413"]

[Raft]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWGB8gAQdraaaGQy9Y3V6zKp2cwXDHppUY9pdrGM1LgeUw","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWN3aexhuU3ezNokUzFkTB2EMgBHd8tbdmrViCggAZJ9Kn","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWDsSf9755KkPHyzSSXUdnUGjsc5vKs43GytDCYrpkGwGX"]

[Cluster]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWGB8gAQdraaaGQy9Y3V6zKp2cwXDHppUY9pdrGM1LgeUw","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWN3aexhuU3ezNokUzFkTB2EMgBHd8tbdmrViCggAZJ9Kn","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWDsSf9755KkPHyzSSXUdnUGjsc5vKs43GytDCYrpkGwGX"]
TippyFlitsUK commented 1 year ago

As I have already said above:

Remove the [Raft] section from your config.toml - the [Cluster] section replaces it.

zelin44913 commented 1 year ago

@TippyFlitsUK I removed the [Raft] configuration, but the result is still the same. I manually stopped one of the nodes for 30 minutes and then restored its service. While that node was down, the other two nodes in the cluster were running well, so the wnpost and wdpost of the lotus-miner process executed normally. But after the faulty node was brought back, neither wnpost nor wdpost could complete normally for about 10 minutes; only once the faulty node had fully caught up on block synchronization did lotus-miner return to normal.

[API]
ListenAddress = "/ip4/*.*.*.*/tcp/1314/http"

[Libp2p]
ListenAddresses = ["/ip4/*.*.*.*/tcp/1413","/ip4/*.*.*.*/tcp/1413"]

[Cluster]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWGB8gAQdraaaGQy9Y3V6zKp2cwXDHppUY9pdrGM1LgeUw","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWN3aexhuU3ezNokUzFkTB2EMgBHd8tbdmrViCggAZJ9Kn","/ip4/*.*.*.*/tcp/1413/p2p/12D3KooWDsSf9755KkPHyzSSXUdnUGjsc5vKs43GytDCYrpkGwGX"]
zelin44913 commented 1 year ago

Current Temporary Solution

  1. Temporarily change the listening port so that lotus-miner cannot connect to the faulty node while it is still catching up on block synchronization.
  2. After block synchronization completes (lotus sync wait), change the listening port back and restart the lotus daemon (see the sketch below).
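
Spelled out as config edits on the faulty node (the temporary port 1315 below is just an arbitrary example; our real API port is 1314):

# 1) While the faulty node is still catching up, move its API to a port that
#    lotus-miner is not configured to use, then restart the lotus daemon:
[API]
ListenAddress = "/ip4/*.*.*.*/tcp/1315/http"   # temporary example port

# 2) Once lotus sync wait returns on that node, change ListenAddress back to
#    the original /tcp/1314 port and restart the lotus daemon again.
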
github-actions[bot] commented 1 year ago

Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 24 hours.

github-actions[bot] commented 1 year ago

This issue was closed because it is missing author input.

zelin44913 commented 1 year ago

@TippyFlitsUK why?

strahe commented 9 months ago

It seems that hardly anyone uses this feature; the current Raft leader selection does not take the nodes' synchronization state into account, and slow synchronization is not treated as service unavailability.