Closed · marcosmartinez7 closed this issue 4 months ago
Based on this image
That is the situation of all the sealers: they just stop sealing, waiting for each other. It seems like a deadlock situation.
Which files can I check for errors, since the JS console isn't throwing anything?
This is the debug.stacks() info. I don't know if it is important here, but this is executing while the sealers are stuck:
I found that I have a lot of "block lost" messages on each node..
Can this be the problem? The chain was running with those warnings without any issues anyway..
Btw, could it be caused by a bad connection between nodes? I'm using DigitalOcean droplets.
NOTE: if I check eth.blockNumber I get 488676 or 488675, depending on the sealer.
We experienced a similar deadlock on our fork, and the cause was out-of-turn blocks all being difficulty 1, mixed with a bit of bad luck. When the difficulties are the same, a random block is chosen as canonical, which can produce split decisions. You can compare hashes and recent signers on each node to confirm whether your network is deadlocked in the same way. We had to both modify the protocol to produce distinct difficulties for each signer, and modify the same-difficulty tie-breaker logic to make more deterministic choices.
Thanks for the response. Can you give me an idea of how to "compare hashes and recent signers on each node to confirm if your network is deadlocked in the same way"?
Thanks
By getting the last 2 blocks from each node, you should be able to see exactly why they are stuck based on their view of the world. They all think that they have signed too recently, so they must disagree on what the last few blocks are supposed to be, so you'll see different hashes and signers for blocks with the same number (and difficulty!).
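Concretely, you can pull the head block and its parent from each sealer's geth console with `eth.getBlock` and diff the results by hand. A minimal sketch of that comparison in plain JS (runnable anywhere; `ambiguousSplit` is an illustrative helper, not a geth API, and the header fields match what `eth.getBlock` returns):

```javascript
// On each sealer's geth console, collect the head block and its parent:
//   > eth.getBlock(eth.blockNumber)
//   > eth.getBlock(eth.blockNumber - 1)
// Then compare across nodes. A pair of nodes shows the ambiguous split
// described above when they disagree on a block with the SAME number and
// SAME difficulty but DIFFERENT hashes.
function ambiguousSplit(a, b) {
  return a.number === b.number &&
         a.difficulty === b.difficulty &&
         a.hash !== b.hash;
}

// Example with made-up hashes (the real ones come from eth.getBlock):
const headSealer1 = { number: 488675, difficulty: 1, hash: "0xaaa..." };
const headSealer2 = { number: 488675, difficulty: 1, hash: "0xbbb..." };
console.log(ambiguousSplit(headSealer1, headSealer2)); // true
```

Different difficulties for the same block number (1 vs 2) are normal competing candidates; it is the same-number, same-difficulty, different-hash case that signals the stall.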
Good idea!!
Sealer 1
Last block 488676
Last -1 = 488675
Sealer 2
Last is 488675
The second node didn't reach block 488676.
The hashes of block 488675 are different, and the difficulties are different too (1 and 2).
For other blocks, like block 8, the hashes are equal and the difficulty is 2 for both..
It seems like all the blocks have difficulty 2 except that conflicting one.. Did you find any logical explanation for that?
Btw, I don't know why difficulty = 2, since the genesis file uses 0x1.
Thoughts?
The in-turn signer always signs with difficulty 2. Out-of-turn signers sign with difficulty 1. This is built into the clique protocol, and is the primary cause of this problem in the first place. It looks like you have 6 signers. You will have to check them all to make sense of this.
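The in-turn/out-of-turn rule above can be sketched like this (plain JS mirroring the difficulty constants in go-ethereum's consensus/clique package; a simplification, not the actual implementation):

```javascript
const DIFF_IN_TURN = 2; // block signed by the in-turn signer
const DIFF_NO_TURN = 1; // block signed out of turn

// In clique, the signer whose index (in the sorted signer list) equals
// blockNumber % signerCount is "in turn"; every other authorized signer
// seals out of turn.
function calcDifficulty(blockNumber, signerIndex, signerCount) {
  return blockNumber % signerCount === signerIndex ? DIFF_IN_TURN : DIFF_NO_TURN;
}

// With 6 signers, exactly one signer is in turn for block 488675:
const diffs = [0, 1, 2, 3, 4, 5].map(i => calcDifficulty(488675, i, 6));
console.log(diffs); // [1, 1, 1, 1, 1, 2] -- 488675 % 6 === 5
```

This is why a difficulty-2 block always means "sealed by the in-turn signer for that height", regardless of what the genesis difficulty says.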
So, if I found two signers (of my 6) with the same difficulty and different hashes, the deadlock would make sense, right?
Same block, different difficulty and different hash doesn't prove anything?
I have deleted the chaindata of the other node with the same last block, 488675.
Not necessarily. Those kinds of ambiguous splits happen very frequently with clique and would normally sort themselves out.
Are you still trying to recover this chain?
If they normally sort themselves out, then maybe the deadlock theory isn't valid..
What did you mean by "It looks like you have 6 signers. You will have to check them all to make sense of this."?
About the chain: I wanted to know what happened, basically. I don't know if I can provide any kind of logs, since the sealers just stopped to wait for each other and I don't have any other information.
Also, getting this scenario in a production environment sucks, since I can't continue mining.. and there is nothing in go-ethereum that guarantees this will not happen again.
So, just to make things clearer: if block 488675 has a different difficulty and a different hash, doesn't that prove there was an issue? Is it normal to have different hashes when comparing in-turn with out-of-turn blocks, then?
Resyncing the signers that you deleted may produce a different distributed state which doesn't deadlock. Or it could deadlock again right away (or at any point in the future). Making fundamental protocol changes to clique like we did for GoChain is necessary to avoid the possibility completely, but can't be applied to an existing chain (without coding in a custom hard fork). You could start a new chain with GoChain instead.
What did you mean by "It looks like you have 6 signers. You will have to check them all to make sense of this."?
They all have different views of the chain. You can't be sure why each one was stuck without looking at them all individually.
Ok, but what am I looking for?
Right now I'm deleting the chain data for all the nodes except 1 and resyncing the rest of them (5 signers) from that node.
About this comment:
"By getting the last 2 blocks from each node, you should be able to see exactly why they are stuck based on their view of the world. They all think that they have signed too recently, so they must disagree on what the last few blocks are supposed to be, so you'll see different hashes and signers for blocks with the same number (and difficulty!)."
If I see two in-turn or two out-of-turn blocks with the same difficulty and different hashes, will that confirm that they think they have signed too recently?
If they logged that they signed too recently then you can trust that they did. Inspecting the recent blocks would just give you a more complete picture of what exactly happened.
Well, I deleted all the chain data for the 5 sealers and synced from 1.
It started to work again, but there is a sealer that seems to have connectivity issues or something..
The sealer starts with 6 peers, then goes to 4, 3, 2, then again to 4, 6, etc...
And that's why I suppose the blocks are being lost... and probably that's why the synchronisation failure warning is thrown, since it is always the same node.
Any ideas why this is happening?
Connectivity issues, since they are separate droplets?
Any way to troubleshoot this?
Thanks
I don't think the peer count is related to lost blocks, and neither peers nor lost blocks are related to the logical deadlock caused by the same-difficulty ambiguity.
Regardless, you can use static/trusted enodes to add the peers automatically.
I added the nodes manually, but it is weird that one sealer is always having connectivity issues with the rest of the peers.
I will try the static/trusted nodes.
I will put the lost blocks in a separate issue, but I would like to have a response from the geth team about the initial problem, because it seems like I can run into another deadlock again.
Thanks @jmank88
PS: Do you think that the block sealing time can be an issue here? I'm using 10 secs.
'Lost blocks' are just blocks that were signed but didn't make the canonical chain. These happen constantly in clique, because most (~1/2) of the signers are eligible to sign at any given time, but only one block is chosen (usually the in-turn signer, with difficulty 2) - all of the other out-of-turn candidates become 'lost blocks'.
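The eligibility rule behind those "Signed recently, must wait for others" logs can be sketched as follows (my approximation of the snapshot check in go-ethereum's clique package; `canSign` is an illustrative name):

```javascript
// A signer may sign block `number` only if its most recent signature is at
// least floor(signerCount / 2) + 1 blocks in the past; otherwise geth logs
// "Signed recently, must wait for others". This is what keeps only roughly
// half of the signers eligible at any given height.
function canSign(number, lastSigned, signerCount) {
  const limit = Math.floor(signerCount / 2) + 1;
  return lastSigned === null || number - lastSigned >= limit;
}

// 6 signers => limit 4: a signer that sealed block 488672 must wait
// until block 488676 before it can seal again.
console.log(canSign(488675, 488672, 6)); // false
console.log(canSign(488676, 488672, 6)); // true
```

When every branch's recent-signer window excludes all of its eligible sealers' candidates, no node can extend its own branch, which is the deadlock discussed in this thread.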
PS: Do you think that the block sealing time can be an issue here? I'm using 10 secs.
Faster times might increase the chances of bad luck or produce more opportunities for it to go wrong, but the fundamental problem always exists.
Right, I understand. So, nothing to worry about in a PoA network then?
About time, yeah, I completely agree.
Thanks a lot!
Right, I understand. So, nothing to worry about in a PoA network then?
I'm not sure what you mean. IMHO the ambiguous difficulty issues are absolutely fatal flaws - the one affecting the protocol itself is much more severe, but the client changes I linked addressed deadlocks as well.
It's also worth noting that increasing the number of signers may reduce the chance of deadlock, as may having an odd number of signers rather than an even one.
Yes, sure. I mean, I didn't know about that, but it is really good information and I really appreciate it. I was talking about the lost-block warning; your explanation makes sense for PoA.
About the number of signers: yes, I have read about that, it makes sense. I have also implemented a PoC with just 2 sealers and, maybe I'm lucky, but in 700k blocks I did not experience this issue.
Right now I'm using an odd number.
Limiting to just 2 signers is a special case with no ambiguous same-difficulty blocks.
After removing 1 node and resyncing from the data of 1 of the nodes, I was running the network with 5 sealers without issues.
Summary:
After 1 day it got stuck again, but now in a weirder situation:
Sealer 1
Sealer 2
Sealer 4 (out of turn, with a different hash and parent hash)
Sealer 5 (out of turn, a 3rd side chain)
Sealer 6 (same hash as the side chain but a different parent)
The number of signers is 5
Each node is paired with 4 signers and 1 standard node
Last block -1: 503075
Sealer 1 (out of turn)
Sealer 2 (out of turn, same hash)
Sealer 4 (out of turn, different hash, same parent..)
Sealer 5 (in turn)
Sealer 6 (in turn too)
You can remove the stack traces, they are not necessary. This looks like a logical deadlock again. Can you double check your screenshots?
The last block -2 has some differences too; 2 nodes have different views of that block:
S1
S2
S4
S5
S6
Indeed, this looks like a multi-block fork, which has now stalled out with all branches having the same total difficulty.
The last block -3 is where they agree:
S1.
S2.
S3.
S4.
S5.
S6.
If they all signed their own versions, then the hashes would be different. This indicates the last point where they all agreed on the same block.
That's true, I edited my comment.
Could anybody from go-ethereum please give me a hint about what is happening here?
I have double-checked that the last block has difficulty 1 on each node.
Also, this is contradictory: I have checked the logs and I see that sealer 6 sealed the last block, and so did sealer 2, but the difficulty is 1 on each sealer when queried!
S1
S6
Also, it is weird that on another sealer I have 2 consecutive block sealings.
They are always speculatively sealing on whatever branch is the best that they have seen, so those logs do not look unusual.
I understand, but the last block having difficulty 1 on each node isn't usual, right? I mean, they did speculative sealing and the resulting chain includes a last block that wasn't sealed?
Do you relate this to a deadlock too? It seems more like the multiple chains were corrupted since that deadlock.
Can you elaborate? I'm not sure I understand. The reason for there being only difficulty 1 blocks at the head would be that the in-turn signer had signed too recently (out-of-turn) to sign again (according to whichever branch it was following locally).
So, basically, if there are multiple forks (wrong behaviour) this could happen, but it is not the expected situation (it leads to a deadlock).
It is certainly not the desired behavior, but it is not wrong as defined by the clique protocol. Plus the client is arguably too strict about the edge case of peers with same total difficulty branches, which may just be due to being written originally for ethash.
Well, I restarted again; it ran for about 10 hours and got deadlocked again.
Is there any information that I can provide for this bug?
@marcosmartinez7; Hi, this seems strange. I am not really sure about it, but do you mind if I ask: are you sure you are using different accounts for each miner (--unlock address)?
Yes, of course.
We are experiencing the same problem in our testnet and our production network. The chosen difficulties of 1 or 2 are the cause of this.
In Blockchain Federal Argentina (bfa.ar), we are sealing a new block every 5 seconds, and have seen this problem since we had around 8 sealers (now we are at around 14, I think).
I talked a bit with @marcosmartinez7 on Discord today, and it seems that one interesting solution could be to use prime numbers for difficulties, where if you are in-turn you have the highest possible prime number.
This is a protocol problem, as parts of the network do indeed get stuck on separate branches, just like @marcosmartinez7 experienced.
With monitoring you can detect it and do debug.rewind. Detecting it doesn't stop it from happening, though.
I talked a bit with @marcosmartinez7 on Discord today, and it seems that one interesting solution could be to use prime numbers for difficulties, where if you are in-turn you have the highest possible prime number.
I linked some of our fixes here: https://github.com/ethereum/go-ethereum/issues/18402#issuecomment-452141245, one of which was a protocol change to use dynamic difficulties from 1-n (for n signers) based on how recently each signer has signed. We've been running this on our mainnet since last May (5s blocks, 5-20 signers). Using primes is an interesting approach, but I'm not sure it's necessary (and it could cause trouble, especially with a high number of signers). One neat feature of using 1-n is that all eligible signers will always sign with a difficulty > n/2, therefore any two consecutive out-of-turn blocks will always have a total difficulty > n and thus greater than a single in-turn block, so there won't be any late re-orgs from lagging signers producing lower-numbered but higher-total-difficulty blocks (this is where I think primes would get you into trouble).
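A toy sketch of that 1-n idea (my reading of the description above, not GoChain's actual code; `dynamicDifficulty` and the ordering representation are illustrative):

```javascript
// Each signer's difficulty is n minus how many signers have signed more
// recently than it: the signer that signed longest ago gets n, the one
// that just signed gets 1.
function dynamicDifficulty(signerIndex, ordering) {
  // `ordering` lists signer indexes from least-recently-signed to
  // most-recently-signed.
  const n = ordering.length;
  return n - ordering.indexOf(signerIndex);
}

const ordering = [3, 0, 4, 1, 2]; // 5 signers; signer 3 signed longest ago
console.log(dynamicDifficulty(3, ordering)); // 5 -- the "in-turn" signer
console.log(dynamicDifficulty(0, ordering)); // 4
// Eligible signers (the least-recently-signed half) all get difficulty
// > n/2, so any two consecutive out-of-turn blocks (e.g. 4 + 3 = 7) outweigh
// a single in-turn block (5) -- the property described above.
```

The point is that ties become impossible at a single height, and lagging branches cannot accumulate enough total difficulty to trigger late re-orgs.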
This sounds pretty much like the deadlocks we experience on the Görli testnet. We were able to break it down to two issues that would greatly improve this situation:
the out-of-turn block sealing delay should be much, much higher. Currently, it is sometimes lower than the network latency, causing authority nodes with high geographical distance to constantly produce out-of-turn blocks. I suggest putting in at least a 5000 ms minimum delay before sealing out-of-turn blocks (plus a random delay of up to another 10000 ms). This is something that can be done without breaking the clique spec and will in most cases ensure that in-turn blocks always propagate through the network faster than out-of-turn blocks.
the choice of difficulty scores of 1 for out-of-turn blocks and 2 for in-turn blocks is not ideal. Two out-of-turn blocks have the same difficulty as one in-turn block. I believe in-turn blocks must be much, much heavier; I would recommend an in-turn difficulty score of 3 to make sure they always get priority and to avoid deadlock situations where you have two different chain tips with the same difficulty. Unfortunately, this would require a new spec / hardfork.
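For scale, here is a sketch comparing the current out-of-turn wiggle (as I understand go-ethereum's clique sealing code: a random delay up to (signerCount/2 + 1) × 500 ms) with the floor-plus-jitter suggested above (numbers taken from the comment; both function names are illustrative):

```javascript
// Current clique behaviour (my reading of go-ethereum's Seal logic):
// out-of-turn signers wait a random delay in [0, (signerCount/2 + 1) * 500ms).
function currentWiggleMs(signerCount) {
  return Math.floor(Math.random() * ((Math.floor(signerCount / 2) + 1) * 500));
}

// Suggested behaviour: a fixed 5000 ms floor plus up to 10000 ms of random
// delay, so out-of-turn blocks rarely beat in-turn blocks across the network.
function suggestedDelayMs() {
  return 5000 + Math.floor(Math.random() * 10000);
}

// With 6 signers the current maximum wiggle is only 2 seconds:
console.log((Math.floor(6 / 2) + 1) * 500); // 2000
```

With intercontinental latencies easily in the hundreds of milliseconds, a sub-2-second wiggle gives out-of-turn blocks a realistic chance of reaching peers before the in-turn block, which is the race the suggestion tries to eliminate.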
Reviewed in team call: @5chdn's suggestions are good. We could solve this by making the out-of-turn difficulty more complicated. There should be some deterministic order to out-of-turn blocks. @karalabe fears that this will introduce too much protocol complexity or large reorgs.
My suggestion: with N miners, let distance be the number of blocks since miner X last mined a block; if X seals a block, its difficulty is min(distance, N).
Example, 10 signers: the in-turn signer seals with difficulty 10, the one that sealed a round earlier with 9, and so on. Signers get a high difficulty if they sign in-turn:ish, but lower if they don't.
To exit the deadlock you can set the chain back to one canonical block using: debug.setHead(hex_value)
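That min(distance, N) proposal can be sketched as (my reading of the suggestion above; `suggestedDifficulty` is an illustrative name):

```javascript
// With N signers, a signer sealing a block gets difficulty min(distance, N),
// where distance is the number of blocks since that signer last sealed.
function suggestedDifficulty(distance, n) {
  return Math.min(distance, n);
}

// 10 signers: the in-turn signer (distance >= 10) seals at 10, the signer
// that sealed one round earlier at 9, one that sealed 2 blocks ago at 2.
console.log(suggestedDifficulty(12, 10)); // 10
console.log(suggestedDifficulty(9, 10));  // 9
console.log(suggestedDifficulty(2, 10));  // 2
```

Like the 1-n scheme, this spreads difficulties out so that two competing chain tips are far less likely to end up with equal total difficulty.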
@5chdn @fjl @karalabe PTAL on PR #19239
To exit the deadlock you can set the chain back to one canonical block using: debug.setHead(hex_value)
It might work, but we cannot always keep an eye on the nodes and resolve the deadlock by running a command or restarting them. It's a nightmare 😄
True! ^^
To exit the deadlock you can set the chain back to one canonical block using: debug.setHead(hex_value)
It might work, but we cannot always keep an eye on the nodes and resolve the deadlock by running a command or restarting them. It's a nightmare 😄
Also, the process of detecting where the chain has forked and selecting the correct block to reset to is not trivial.
System information
My current version is:
Expected behaviour
Keep signing normally.
Actual behaviour
I was running a go-ethereum private network with 6 sealers.
Each sealer is run by:
The blockchain was running well for about 1-2 months.
Today I found that all the nodes were having issues. Each node was emitting the message "Signed recently, must wait for others".
I checked the logs and found this message every hour, with no more information; the nodes were not mining:
Experiencing the same issue with 6 sealers, I restarted each node, but now I am stuck on
The first thing that is weird is that some nodes are stuck on 488677 and others are on 488676; this behaviour was reported in issue https://github.com/ethereum/go-ethereum/issues/16406, the same as for the user @lyhbarry.
Example: Signer 1
Signer 2
Note that there are no pending votes.
So, right now, I shut down and restarted each node, and I have found that:
So, the synchronisation fails, but I also just can't start signing again because each node is stuck waiting for the others. Does that mean the network is useless?
The comment of @tudyzhb on that issue mentions that:
After this problem, I took a look at the logs; each signer has these error messages:
Synchronisation failed, dropping peer peer=7875a002affc775b err="retrieved hash chain is invalid"
I also see some:
INFO [01-02|16:58:10.902] 😱 block lost number=488205 hash=1fb1c5…a41a42
This error about the hash chain was just a warning, so the nodes kept mining until the 2nd of January; then I saw this on each of the 6 nodes.
I was looking, and there are a lot of issues about this error; the most similar is the one I posted here, but it is unresolved.
Most of the issues' workarounds seem to be a restart, but in this case the chain seems to be in an inconsistent state and the nodes are always waiting for each other.
So,
These are other related issues:
https://github.com/ethereum/go-ethereum/issues/16444 (same issue, but I don't have pending votes in my snapshot)
https://github.com/ethereum/go-ethereum/issues/14381#
https://github.com/ethereum/go-ethereum/issues/16825
https://github.com/ethereum/go-ethereum/issues/16406