ethereum / go-ethereum

Go implementation of the Ethereum protocol
https://geth.ethereum.org
GNU Lesser General Public License v3.0

PoA network, all the sealers are waiting for each other after 2 months running, possible deadlock? #18402

Closed marcosmartinez7 closed 4 months ago

marcosmartinez7 commented 5 years ago

System information

My current version is:

Geth
Version: 1.8.17-stable
Git Commit: 8bbe72075e4e16442c4e28d999edee12e294329e
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.10.1
Operating System: linux
GOPATH=
GOROOT=/usr/lib/go-1.10

Expected behaviour

Keep signing blocks normally.

Actual behaviour

I was running a go-ethereum private network with 6 sealers.

Each sealer is run by:

directory=/home/poa
command=/bin/bash -c 'geth --datadir sealer4/ --syncmode full --port 30393 --rpc --rpcaddr localhost --rpcport 8600 --rpcapi net,web3,eth --networkid 30 --gasprice 1 --unlock someaddress --password sealer4/password.txt --mine'

The blockchain ran fine for about 1-2 months.

Today I found that all the nodes were having issues. Each node was emitting the message "Signed recently, must wait for others".

I checked the logs and found this message every hour, with no further information; the nodes were not mining:

Regenerated local transaction journal transactions=0 accounts=0
Regenerated local transaction journal transactions=0 accounts=0
Regenerated local transaction journal transactions=0 accounts=0
Regenerated local transaction journal transactions=0 accounts=0

Seeing the same issue with 6 sealers, I restarted each node, but now I am stuck at:

INFO [01-07|18:17:30.645] Etherbase automatically configured       address=0x5Bc69DC4dba04b6955aC94BbdF129C3ce2d20D34
INFO [01-07|18:17:30.645] Commit new mining work                   number=488677 sealhash=a506ec…8cb403 uncles=0 txs=0 gas=0 fees=0 elapsed=133.76µs
INFO [01-07|18:17:30.645] Signed recently, must wait for others

The first weird thing is that some nodes are stuck on block 488677 and others on 488676. The same behaviour was reported by @lyhbarry in https://github.com/ethereum/go-ethereum/issues/16406.

Example: Signer 1

[screenshot]

Signer 2

[screenshot]

Note that there are no pending votes.

So, right now, I shut down and restarted each node, and I have found that:

INFO [01-07|18:41:56.134] Signed recently, must wait for others 
INFO [01-07|19:41:42.125] Regenerated local transaction journal    transactions=0 accounts=0
INFO [01-07|18:41:56.134] Signed recently, must wait for others 

So synchronisation fails, but I also can't just start signing again because each node is stuck waiting for the others. Does that mean the network is useless?

The comment by @tudyzhb on that issue mentions:

Referring to clique sealing in v1.8.11, I think there is no effective mechanism to retry sealing when an in-turn/out-of-turn seal failure occurs, so our dev network easily becomes useless.

After this problem, I took a look at the logs; each signer has these error messages:

Synchronisation failed, dropping peer peer=7875a002affc775b err="retrieved hash chain is invalid"

INFO [01-02|16:42:10.902] Signed recently, must wait for others 
WARN [01-02|16:42:11.960] Synchronisation failed, dropping peer    peer=7875a002affc775b err="retrieved hash chain is invalid"
INFO [01-02|16:42:12.128] Imported new chain segment               blocks=1  txs=0 mgas=0.000 elapsed=540.282µs mgasps=0.000  number=488116 hash=269920…afd3c7 cache=5.99kB
INFO [01-02|16:42:12.129] Commit new mining work                   number=488117 sealhash=f7b00c…787d5c uncles=2 txs=0 gas=0     fees=0          elapsed=307.314µs
INFO [01-02|16:42:20.929] Successfully sealed new block            number=488117 sealhash=f7b00c…787d5c hash=f17438…93ffe3 elapsed=8.800s
INFO [01-02|16:42:20.929] 🔨 mined potential block                  number=488117 hash=f17438…93ffe3
INFO [01-02|16:42:20.930] Commit new mining work                   number=488118 sealhash=b09b33…1526ba uncles=2 txs=0 gas=0     fees=0          elapsed=520.754µs
INFO [01-02|16:42:20.930] Signed recently, must wait for others 
INFO [01-02|16:42:31.679] Imported new chain segment               blocks=1  txs=0 mgas=0.000 elapsed=2.253ms   mgasps=0.000  number=488118 hash=763a32…a579f5 cache=5.99kB
INFO [01-02|16:42:31.680] 🔗 block reached canonical chain          number=488111 hash=3d44dc…df0be5
INFO [01-02|16:42:31.680] Commit new mining work                   number=488119 sealhash=c8a5e7…db78a1 uncles=2 txs=0 gas=0     fees=0          elapsed=214.155µs
INFO [01-02|16:42:31.680] Signed recently, must wait for others 
INFO [01-02|16:42:40.901] Imported new chain segment               blocks=1  txs=0 mgas=0.000 elapsed=808.903µs mgasps=0.000  number=488119 hash=accc3f…44bc4c cache=5.99kB
INFO [01-02|16:42:40.901] Commit new mining work                   number=488120 sealhash=f73978…c03fa7 uncles=2 txs=0 gas=0     fees=0          elapsed=275.72µs
INFO [01-02|16:42:40.901] Signed recently, must wait for others 
WARN [01-02|16:42:41.961] Synchronisation failed, dropping peer    peer=7875a002affc775b err="retrieved hash chain is invalid"

I also see some:

INFO [01-02|16:58:10.902] 😱 block lost number=488205 hash=1fb1c5…a41a42

The hash chain error was just a warning, so the nodes kept mining until the 2nd of January; then I saw this on each of the 6 nodes:

[screenshot]

I see there are a lot of issues about this error; the most similar is the one I linked here, but it is unresolved.

Most of the workarounds in those issues seem to be a restart, but in this case the chain seems to be in an inconsistent state and the nodes are always waiting for each other.

So,

  1. Any ideas? Peers are connected and accounts are unlocked; the network just entered a deadlock after ~450k blocks.
  2. Are there any logs I can provide? I only see the warnings for the error described above and the lost blocks, but nothing from when the nodes stopped mining.
  3. Is this PR related? https://github.com/ethereum/go-ethereum/pull/18072
  4. Maybe it is related to the comment by @karalabe on this issue: https://github.com/ethereum/go-ethereum/issues/16406?
  5. Will upgrading from 1.8.17 to 1.8.20 solve this?
  6. In my opinion, this seems like a race condition or something similar, since I have 2 chains, one running for 2 months and the other for three months, and this is the first time this error has happened.

These are other related issues:

https://github.com/ethereum/go-ethereum/issues/16444 (same issue, but I don't have pending votes in my snapshot)

https://github.com/ethereum/go-ethereum/issues/14381

https://github.com/ethereum/go-ethereum/issues/16825

https://github.com/ethereum/go-ethereum/issues/16406

marcosmartinez7 commented 5 years ago

Based on this screenshot:

[screenshot]

That is the situation of all the sealers: they just stop sealing, waiting for each other. It seems like a deadlock situation.

Which files can I check for errors, since the JS console isn't throwing anything?

marcosmartinez7 commented 5 years ago

This is the debug.stacks() output. I don't know if it is important here, but this is what was executing while the sealers were stuck:

[screenshots of the stack traces]

marcosmartinez7 commented 5 years ago

I found that I have a lot of lost blocks on each node:

[screenshot]

Could this be the problem? The chain was running with those warnings without any issues anyway.

By the way, is it caused by a bad connection between nodes? I'm using DigitalOcean droplets.

NOTE: if I check eth.blockNumber I get 488676 or 488675, depending on the sealer.

jmank88 commented 5 years ago

We experienced a similar deadlock on our fork, and the cause was out-of-turn blocks all having difficulty 1, mixed with a bit of bad luck. When the difficulties are the same, a random block is chosen as canonical, which can produce split decisions. You can compare hashes and recent signers on each node to confirm whether your network is deadlocked in the same way. We had to both modify the protocol to produce distinct difficulties for each signer, and modify the same-difficulty tie-breaker logic to make more deterministic choices.

marcosmartinez7 commented 5 years ago

Thanks for the response. Can you give me an idea of how to "compare hashes and recent signers on each node to confirm if your network is deadlocked in the same way"?

Thanks

jmank88 commented 5 years ago

By getting the last 2 blocks from each node, you should be able to see exactly why they are stuck based on their view of the world. They all think that they have signed too recently, so they must disagree on what the last few blocks are supposed to be, so you'll see different hashes and signers for blocks with the same number (and difficulty!).
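
For example, a minimal sketch along these lines could pull the two newest headers from every sealer for comparison (the endpoint URLs are placeholders; substitute each sealer's --rpcaddr/--rpcport):

package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Placeholder endpoints, one per sealer; adjust to your --rpcport values.
	endpoints := []string{
		"http://localhost:8600",
		"http://localhost:8601",
	}
	ctx := context.Background()
	for _, url := range endpoints {
		client, err := ethclient.Dial(url)
		if err != nil {
			log.Printf("%s: %v", url, err)
			continue
		}
		// Latest header: number, difficulty, hash and parent hash.
		head, err := client.HeaderByNumber(ctx, nil) // nil = latest block
		if err != nil {
			log.Printf("%s: %v", url, err)
			continue
		}
		// Its parent, for the "last 2 blocks" comparison.
		parent, err := client.HeaderByNumber(ctx, new(big.Int).Sub(head.Number, big.NewInt(1)))
		if err != nil {
			log.Printf("%s: %v", url, err)
			continue
		}
		fmt.Printf("%s\n  head   #%v diff=%v hash=%s parent=%s\n  parent #%v diff=%v hash=%s\n",
			url, head.Number, head.Difficulty, head.Hash().Hex(), head.ParentHash.Hex(),
			parent.Number, parent.Difficulty, parent.Hash().Hex())
	}
}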

marcosmartinez7 commented 5 years ago

Good idea!!

Sealer 1

Last block 488676

[screenshot]

Last -1 = 488675

[screenshot]

Sealer 2

Last is 488675

[screenshot]

The second node didn't reach block 488676.

The hashes of block 488675 are different, but the difficulties are also different (1 and 2).

For other blocks, like block 8, the hashes are equal and the difficulty is 2 on both.

It seems like all the blocks have difficulty 2 except that conflicting one. Did you find any logical explanation for that?

By the way, I don't know why the difficulty is 2, since the genesis file uses 0x1.

Thoughts?

jmank88 commented 5 years ago

The in-turn signer always signs with difficulty 2; out-of-turn signers sign with difficulty 1. This is built into the clique protocol and is the primary cause of this problem in the first place. It looks like you have 6 signers. You will have to check them all to make sense of this.
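
A simplified, self-contained sketch of that rule (the real logic lives in consensus/clique; the signer-index arithmetic here is only for illustration):

package main

import (
	"fmt"
	"math/big"
)

var (
	diffInTurn = big.NewInt(2) // block sealed by the expected (in-turn) signer
	diffNoTurn = big.NewInt(1) // block sealed by any other authorized signer
)

// calcDifficulty mirrors the spirit of clique: the signer whose index in the
// sorted signer list equals blockNumber % signerCount is in-turn and seals
// with difficulty 2; everyone else seals out-of-turn with difficulty 1.
func calcDifficulty(blockNumber uint64, signerIndex, signerCount int) *big.Int {
	if blockNumber%uint64(signerCount) == uint64(signerIndex) {
		return new(big.Int).Set(diffInTurn)
	}
	return new(big.Int).Set(diffNoTurn)
}

func main() {
	// With 6 signers, exactly one of them is in-turn for block 488676.
	for idx := 0; idx < 6; idx++ {
		fmt.Printf("signer %d -> difficulty %v\n", idx, calcDifficulty(488676, idx, 6))
	}
}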

marcosmartinez7 commented 5 years ago

So, if I found two signers (out of my 6) with the same difficulty and a different hash, the deadlock would make sense, right?

Same block with different difficulty and different hash doesn't prove anything?

I have deleted the chaindata of the other node that had the same last block, 488675.

fail

jmank88 commented 5 years ago

Not necessarily. Those kinds of ambiguous splits happen very frequently with clique and would normally sort themselves out.

Are you still trying to recover this chain?

marcosmartinez7 commented 5 years ago

If it doesn't necessarily prove anything and these splits normally sort themselves out, then maybe the deadlock theory isn't valid.

What did you mean about "It looks like you have 6 signers. You will have to check them all to make sense of this."?

About the chain: basically I wanted to know what happened. I don't know if I can provide any kind of logs or anything else, since the sealers just stopped to wait for each other and I don't have any other information.

Also, getting into this scenario in a production environment sucks, since I can't continue mining, and there is nothing in go-ethereum that guarantees this will not happen again.

So, just to make things clearer: if block 488675 has a different difficulty and a different hash, doesn't that prove there was an issue? Or is it normal to have different hashes when comparing in-turn with out-of-turn blocks?

jmank88 commented 5 years ago

Resyncing the signers that you deleted may produce a different distributed state which doesn't deadlock. Or it could deadlock again right away (or at any point in the future). Making fundamental protocol changes to clique like we did for GoChain is necessary to avoid the possibility completely, but can't be applied to an existing chain (without coding in a custom hard fork). You could start a new chain with GoChain instead.

jmank88 commented 5 years ago

What did you mean about "It looks like you have 6 signers. You will have to check them all to make sense of this."?

They all have different views of the chain. You can't be sure why each one was stuck without looking at them all individually.

marcosmartinez7 commented 5 years ago

OK, but what am I looking for?

Right now I'm deleting the chain data for all the nodes except 1 and resyncing the rest of them (5 signers) from that node.

About this comment:

"By getting the last 2 blocks from each node, you should be able to see exactly why they are stuck based on their view of the world. They all think that they have signed too recently, so they must disagree on what the last few blocks are supposed to be, so you'll see different hashes and signers for blocks with the same number (and difficulty!)."

If I see two in-turn or two out-of-turn blocks with the same difficulty and different hashes, will that confirm that they think they have signed recently?

jmank88 commented 5 years ago

If I see two in-turn or two out-of-turn blocks with the same difficulty and different hashes, will that confirm that they think they have signed recently?

If they logged that they signed too recently then you can trust that they did. Inspecting the recent blocks would just give you a more complete picture of what exactly happened.

marcosmartinez7 commented 5 years ago

Well, I deleted all the chain data for the 5 sealers and synced from 1.

It started to work again, but there is one sealer that seems to have connectivity issues or something.

The sealer starts with 6 peers, then goes to 4, 3, 2, then back up to 4, 6, etc.

[screenshot]

And that's why I suppose the blocks are being lost, and probably why the synchronisation failure warning is thrown, since it's always the same node.

Any idea why this is happening?

Connectivity issues, since they are separate droplets?

Any way to troubleshoot this?

Thanks

jmank88 commented 5 years ago

I don't think the peer count is related to lost blocks, and neither peers nor lost blocks are related to the logical deadlock caused by the same-difficulty ambiguity.

Regardless, you can use static/trusted enodes to add the peers automatically.
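
For example, a static-nodes.json along these lines (the enode public keys, IPs and ports are placeholders) dropped into each sealer's <datadir>/geth/ directory — e.g. sealer4/geth/static-nodes.json — makes geth keep reconnecting to those peers across restarts:

[
  "enode://<sealer1-node-public-key>@10.0.0.1:30393",
  "enode://<sealer2-node-public-key>@10.0.0.2:30394"
]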

marcosmartinez7 commented 5 years ago

I add the nodes manually, but it is weird that one sealer is always having connectivity issues with the rest of the peers.

I will try the static/trusted nodes.

I will put the lost blocks in a separate issue, but I would like a response from the geth team about the initial problem, because it seems like I could run into another deadlock again.

Thanks @jmank88

PS: Do you think the block sealing time could be an issue here? I'm using 10 seconds.

jmank88 commented 5 years ago

'Lost blocks' are just blocks that were signed but didn't make the canonical chain. These happen constantly in clique, because most (~1/2) of the signers are eligible to sign at any given time, but only one block is chosen (usually the in-turn signer, with difficulty 2) - all of the other out-of-turn candidates become 'lost blocks'.
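
A tiny illustration of that ratio: per the clique rules, a signer may seal only one of any floor(N/2)+1 consecutive blocks, so N - floor(N/2) signers are eligible at each height (the signer counts below are just examples):

package main

import "fmt"

// eligibleSigners returns how many of n signers are not blocked by the
// "signed recently" rule at a given height.
func eligibleSigners(n int) int {
	return n - n/2
}

func main() {
	for _, n := range []int{2, 5, 6, 10} {
		fmt.Printf("%2d signers -> %d eligible per block\n", n, eligibleSigners(n))
	}
}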

jmank88 commented 5 years ago

PS: Do you think the block sealing time could be an issue here? I'm using 10 seconds.

Faster times might increase the chances of bad luck or produce more opportunities for it to go wrong, but the fundamental problem always exists.

marcosmartinez7 commented 5 years ago

Right, I understand. So, nothing to worry about in a PoA network then?

About the block time, yeah, completely agree.

Thanks a lot!

jmank88 commented 5 years ago

Right, I understand. So, nothing to worry about in a PoA network then?

I'm not sure what you mean. IMHO the ambiguous difficulty issues are absolutely fatal flaws - the one affecting the protocol itself is much more severe, but the client changes I linked addressed deadlocks as well.

jmank88 commented 5 years ago

It's also worth noting that increasing the number of signers may reduce the chance of deadlock, and having an odd number rather than an even one possibly helps as well.

marcosmartinez7 commented 5 years ago

Yes, sure. I mean, I didn't know about that, but it is really good information and I really appreciate it. I was talking about the lost block warning; your explanation makes sense for PoA.

About the number of signers: yes, I have read about that and it makes sense. I have also implemented a PoC with just 2 sealers and, maybe I'm lucky, but in 700k blocks I did not experience this issue.

Right now I'm using an odd number.

jmank88 commented 5 years ago

Limiting to just 2 signers is a special case with no ambiguous same-difficulty blocks.

marcosmartinez7 commented 5 years ago

After removing 1 node and resyncing from the data of one of the nodes, I was running the network with 5 sealers without issues.

Summary:

After 1 day it got stuck again, but now in an even weirder situation:

Last block 503076

Sealer 1

[screenshot]

Sealer 2

[screenshot]

Sealer 4 (out of turn, with a different hash and parent hash)

[screenshot]

Sealer 5 (out of turn, 3rd side chain)

[screenshot]

Sealer 6 (same hash as the side chain but different parent)

[screenshot]

The number of signers is 5

[screenshot]

Each node is paired with 4 signers and 1 standard node

[screenshot]

Last block -1: 503075

Sealer 1 (out of turn)

[screenshot]

Sealer 2 (out of turn, same hash)

[screenshot]

Sealer 4 (out of turn, different hash, same parent)

[screenshot]

Sealer 5 (in turn)

[screenshot]

Sealer 6 (in turn too)

[screenshot]

jmank88 commented 5 years ago

You can remove the stack traces, they are not necessary. This looks like a logical deadlock again. Can you double check your screenshots?

marcosmartinez7 commented 5 years ago

The last block -2 has some differences too; 2 nodes have different views of that block.

S1

[screenshot]

S2

[screenshot]

S4

[screenshot]

S5

[screenshot]

S6

[screenshot]

jmank88 commented 5 years ago

Indeed, this looks like a multi-block fork, which has now stalled out with all branches having the same total difficulty.

marcosmartinez7 commented 5 years ago

The last block -3 is where they agree:

S1.

[screenshot]

S2.

[screenshot]

S3.

[screenshot]

S4.

[screenshot]

S5.

[screenshot]

S6.

[screenshot]

jmank88 commented 5 years ago

If they all signed their own versions, then the hashes would be different. This indicates the last point where they all agreed on the same block.

marcosmartinez7 commented 5 years ago

That's true, I edited my comment.

Could anybody from the go-ethereum team please give me a hint about what is happening here?

I have double-checked that the last block has difficulty 1 on each node.

Also, this is contradictory: I have checked the logs and I see that sealer 6 sealed the last block, and so did sealer 2, but the difficulty is 1 on each sealer when queried!

S1

[screenshot]

S6

[screenshot]

Also, it is weird that on another sealer I have 2 consecutive block sealings:

[screenshot]

jmank88 commented 5 years ago

They are always speculatively sealing on whatever branch is the best that they have seen, so those logs do not look unusual.

marcosmartinez7 commented 5 years ago

I understand, but the last block having difficulty 1 on every node isn't usual, right? I mean, they did speculative sealing and the resulting chain includes a head block that wasn't sealed in turn?

Do you relate this to a deadlock too? It seems more like the multiple chains have been corrupted since that deadlock.

jmank88 commented 5 years ago

Can you elaborate? I'm not sure I understand. The reason for there being only difficulty 1 blocks at the head would be that the in-turn signer had signed too recently (out-of-turn) to sign again (according to whichever branch it was following locally).

marcosmartinez7 commented 5 years ago

So, basically, if there are multiple forks (wrong behaviour) this could happen, but it is not the expected situation (it leads to a deadlock).

jmank88 commented 5 years ago

It is certainly not the desired behavior, but it is not wrong as defined by the clique protocol. Plus the client is arguably too strict about the edge case of peers with same total difficulty branches, which may just be due to being written originally for ethash.

marcosmartinez7 commented 5 years ago

Well, I restarted again; it ran for about 10 hours and got deadlocked again.

Is there any information that I can provide for this bug?

usmananwar commented 5 years ago

@marcosmartinez7: Hi, this seems strange. I am not really sure about it, but do you mind if I ask: are you sure you are using different accounts for each miner (--unlock address)?

marcosmartinez7 commented 5 years ago

Yes, of course.

rlegene commented 5 years ago

We are experiencing the same problem in our testnet and our production network. The chosen difficulties of 1 and 2 are the cause of this.

In Blockchain Federal Argentina (bfa.ar), we are sealing a new block every 5 seconds, and have seen this problem since we had around 8 sealers (now we are at around 14, I think).

I talked a bit with @marcosmartinez7 on Discord today, and it seems that one interesting solution could be to use prime numbers for difficulties, where if you are in-turn you have the highest possible prime number.

This is a protocol problem, as parts of the network do indeed get stuck on separate branches, just like @marcosmartinez7 experienced.

With monitoring you can detect it and do debug.rewind. Detecting it doesn't stop it from happening, though.

jmank88 commented 5 years ago

I talked a bit with @marcosmartinez7 on Discord today, and it seems that one interesting solution could be to use prime numbers for difficulties, where if you are in-turn you have the highest possible prime number.

I linked some of our fixes here: https://github.com/ethereum/go-ethereum/issues/18402#issuecomment-452141245, one of which was a protocol change to use dynamic difficulties from 1-n (for n signers) based on how recently each signer has signed. We've been running this on our mainnet since last May (5s blocks, 5-20 signers). Using primes is an interesting approach, but I'm not sure it's necessary (and could cause trouble, especially with a high number of signers). One neat feature of using 1-n is that all eligible signers will always sign with a difficulty > n/2, therefore any two consecutive out-of-turn blocks will always have a total difficulty > n and thus greater than a single in-turn block, so there won't be any late re-orgs from lagging signers producing lower numbered but higher total difficulty blocks (this is where I think primes would get you in to trouble).
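
To illustrate the idea only (this is not the actual GoChain code, and it approximates "how recently each signer signed" by each signer's position in the rotation):

package main

import "fmt"

// calcDynamicDifficulty assigns each of n signers a distinct difficulty in
// 1..n for a given block: the in-turn signer gets n, and difficulty decreases
// with distance from the in-turn position. In steady rotation the eligible
// signers therefore all carry difficulties above n/2.
func calcDynamicDifficulty(blockNumber uint64, signerIndex, signerCount int) uint64 {
	inTurn := int(blockNumber % uint64(signerCount))
	dist := (signerIndex - inTurn + signerCount) % signerCount // wrap around the ring
	return uint64(signerCount - dist)
}

func main() {
	const n = 5
	for idx := 0; idx < n; idx++ {
		fmt.Printf("signer %d -> difficulty %d\n", idx, calcDynamicDifficulty(503077, idx, n))
	}
}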

5chdn commented 5 years ago

This sounds pretty much like the deadlocks we experience on the Görli testnet. We were able to break it down to two issues whose fixes would greatly improve this situation:

  1. The out-of-turn block sealing delay should be much, much higher. Currently it is sometimes lower than the network latency, causing authority nodes at large geographical distances to constantly produce out-of-turn blocks. I suggest putting in at least a 5000 ms minimum delay before sealing out-of-turn blocks (plus a random delay of up to another 10000 ms); see the sketch after this list. This can be done without breaking the clique spec and will in most cases ensure that in-turn blocks propagate through the network faster than out-of-turn blocks.

  2. The choice of difficulty scores 1 and 2 for out-of-turn and in-turn blocks is not ideal: two out-of-turn blocks carry the same total difficulty as one in-turn block. I believe in-turn blocks must be much, much heavier; I would recommend an in-turn difficulty score of 3 to make sure they always get priority and to avoid deadlock situations where you have two different chain tips with the same difficulty. Unfortunately, this would require a new spec / hard fork.
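
A minimal sketch of suggestion 1 (the constants mirror the 5000 ms floor and up-to-10000 ms random delay proposed above; the function name is illustrative and this is not go-ethereum's actual sealing code):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// outOfTurnDelay returns how long an out-of-turn signer would wait before
// broadcasting its block, so that the in-turn block usually propagates first.
func outOfTurnDelay() time.Duration {
	const floor = 5000 * time.Millisecond                                 // minimum delay before sealing out of turn
	jitter := time.Duration(rand.Int63n(int64(10000 * time.Millisecond))) // random extra delay
	return floor + jitter
}

func main() {
	rand.Seed(time.Now().UnixNano())
	fmt.Println("would wait", outOfTurnDelay(), "before sealing an out-of-turn block")
}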

fjl commented 5 years ago

Reviewed in team call: @5chdn's suggestions are good. We could solve this by making the out-of-turn difficulty more elaborate; there should be some deterministic order to out-of-turn blocks. @karalabe fears that this will introduce too much protocol complexity or large reorgs.

holiman commented 5 years ago

My suggestion:

Example, 10 signers:

yananli89 commented 5 years ago

To exit the deadlock you can set the chain back to one canonical block using: debug.setHead(hex_value)

hadv commented 5 years ago

@5chdn @fjl @karalabe PTAL on PR #19239

hadv commented 5 years ago

To exit the deadlock you can set the chain back to one canonical block using: debug.setHead(hex_value)

It might work, but we cannot always keep an eye on the nodes and resolve the deadlock by running a command or restarting the nodes. It's a nightmare 😄

yananli89 commented 5 years ago

True! ^^

marcosmartinez7 commented 5 years ago

To exit the deadlock you can set the chain back to one canonical block using: debug.setHead(hex_value)

It might work, but we cannot always keep an eye on the nodes and resolve the deadlock by running a command or restarting the nodes. It's a nightmare 😄

Also, the process of detecting where the chain has forked and selecting the correct block to reset to is not trivial.