alexauroradev opened 7 years ago
We found a problem during the implementation of this feature:
When a node is started, it must restore the leaders of the last blocks. Currently, only the round is stored in the header of the block. So with the new algorithm, we need to iterate over all blocks starting from genesis to recover the last leaders and determine the order of the leaders at the current height.
Possible solutions:

1. Store `leader: PublicKey` in the block header. Storing `leader: ValidatorId` is not an option, since the list of validators may change between blocks. But this means the size of the block header increases from 116 to 148 bytes, and I would like to keep block headers as small as possible.
2. Create a special table `last_leaders: List<PublicKey>` in storage. This table would not affect `state_hash`, but it would be updated atomically along with block commits. The disadvantage of this approach is introducing a new table for one specific leader selection algorithm (what if we want to provide different leader selection algorithms as consensus modules?).
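To make solution 2 concrete, here is a minimal sketch (names like `LastLeaders` and the `[u8; 32]` key type are illustrative, not Exonum API) of a bounded `last_leaders` structure that is updated as each block is committed and always remembers only the leaders of the latest `F` blocks:

```rust
use std::collections::VecDeque;

// Stand-in for a real public key type; purely illustrative.
type PublicKey = [u8; 32];

/// Hypothetical `last_leaders` table: a bounded queue holding the
/// public keys of the leaders of the latest `F` committed blocks.
struct LastLeaders {
    max_len: usize, // F: how many recent leaders to remember
    keys: VecDeque<PublicKey>,
}

impl LastLeaders {
    fn new(f: usize) -> Self {
        LastLeaders { max_len: f, keys: VecDeque::with_capacity(f) }
    }

    /// Would be called atomically as part of the block commit:
    /// record the leader of the freshly committed block and
    /// forget the oldest remembered one.
    fn on_block_commit(&mut self, leader: PublicKey) {
        if self.keys.len() == self.max_len {
            self.keys.pop_front();
        }
        self.keys.push_back(leader);
    }

    /// Leaders currently excluded from proposing.
    fn disabled(&self) -> impl Iterator<Item = &PublicKey> {
        self.keys.iter()
    }
}

fn main() {
    let mut table = LastLeaders::new(2);
    for i in 0u8..4 {
        table.on_block_commit([i; 32]);
    }
    // Only the leaders of the last F = 2 blocks remain.
    let remembered: Vec<u8> = table.disabled().map(|k| k[0]).collect();
    println!("{:?}", remembered); // prints [2, 3]
}
```

With such a table, a restarting node reads the queue directly instead of iterating over all blocks from genesis.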
cc @slowli @djsatok @yznovyak @VukW
Seemingly, there are two ways to approach this modification (make leader exclusion a consensus rule, or make it a policy), and I'm not sure which one is proposed:
I think we are talking about the second approach, because afaik full nodes and lightweight clients right now do not check whether the "author" of a committed block is correct w.r.t. the block height/round. (And I don't see an immediate reason why they should check it.)
If this is the case: if a node is not a validator, it does not care about the leader restriction at all. If it is a validator, it's enough to get information about the "authors" of the latest F blocks. This information could be indirectly committed to the block header, e.g., as part of the blockchain state.
@slowli
afaik full nodes and lightweight clients right now do not check whether the "author" of a committed block is correct
If a node is not a validator, it does not care about leader restriction at all.
Currently, this is true for committed blocks, but not for proposals. Full nodes, like validators, apply the same rule when handling propose messages: a propose should be signed by the correct leader.
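The rule "a propose should be signed by the correct leader" can be sketched as follows. This is an illustrative stand-in, not the actual Exonum code: `leader_id` here is a round-robin placeholder for whatever leader-selection algorithm is chosen, and the `Propose` fields are simplified.

```rust
type ValidatorId = u16;

// Simplified propose message; real messages also carry signatures, etc.
struct Propose {
    height: u64,
    round: u32,
    author: ValidatorId,
}

/// Placeholder leader-selection rule (round-robin over validators).
fn leader_id(height: u64, round: u32, validator_count: u16) -> ValidatorId {
    ((height + round as u64) % validator_count as u64) as ValidatorId
}

/// The check both validators and full nodes could apply while handling
/// a propose message: the author must match the expected leader.
fn validate_propose(p: &Propose, validator_count: u16) -> bool {
    p.author == leader_id(p.height, p.round, validator_count)
}

fn main() {
    // With 4 validators, the expected leader for (height 10, round 1) is 3.
    let good = Propose { height: 10, round: 1, author: 3 };
    let bad = Propose { height: 10, round: 1, author: 0 };
    assert!(validate_propose(&good, 4));
    assert!(!validate_propose(&bad, 4));
}
```

The point of the discussion above is whether this same check should also be applied retroactively to committed blocks, and by which kinds of nodes.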
@slowli @defuz
Make leader exclusion a consensus rule... Make leader exclusion a policy...
As for me, it is a metaphysical question: how strongly do we trust our validators? If we really trust them, then full nodes' checks for a committed block should differ from validators' checks for a block proposal (because validators have some additional responsibility to the other nodes). However, in this case full nodes are no longer completely "full". Are there already any other differences between full nodes and validators (except the right to vote)? It seems to me that ideally there should be no difference, so I believe every full node should be able to check this rule. Nevertheless, I'm not sure whether it should be part of the consensus; is that the only option?
Create a special table `last_leaders: List<PublicKey>` in storage. This table will not affect `state_hash`, but it will be atomically updated along with the block commits. The disadvantage of this approach is introducing a new table for one specific leader selection algorithm (what if we want to provide different leader selection algorithms as consensus modules?).
Using modular architecture for consensus?
TL;DR: every full node may check the correctness of the block leader for a committed block; we may create an additional table for leader election instead of saving the leader's `PublicKey` in the block header.
I'm feeling stupid, but I still don't see the problem. A lagging validator can request and download blocks up to the current height and validate them using the generic procedure (namely, checking precommits). Then, the validator can request Merkle proofs about the authors of the newest F blocks; this information can be stored in the blockchain state (say, in a special table). There are various ways to organize this info: e.g., a table with the latest F "authors"; a rolling Merkle table (not implemented, and I'm not sure how to recalculate its nodes efficiently). In each case, the lagging validator can request information using the general framework of GET requests, which is already available. In the 1st case, he can request information from the latest F blocks; in the 2nd and 3rd, from the latest block alone.
Now, there is a problem in that the blockchain state must be changed not via transactions, but rather "automatically" after (and/or before) processing any transactions. But this potential evolution is not specific to this proposal; e.g., we may want to store the blockchain height somewhere in the state, too.
Update the leader election algorithm to provide weak censorship resistance.
The changes are the following:

- A validator whose block has been committed enters a `disabled` state for `F` blocks (during the next `F` blocks it does not have the right to create new block proposals). The node behaves as usual in all other activities, including voting for a new block, signing messages, etc.
- The leader order at each height is a permutation of the remaining `M = N - F` validators. The number of the permutation is calculated as `T = Hash(H) mod M!`. Such a calculation provides a uniform distribution of the orders, i.e., Byzantine validators would be randomly distributed within the current height `H`.
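The permutation step above can be sketched as follows: take `T = Hash(H) mod M!` and decode `T` in the factorial number system (a Lehmer code) into an ordering of the `M` enabled validators. This is an illustrative sketch, not the project's implementation; `hash_of_height` is a toy mixer standing in for the real hash, and it assumes `M <= 20` so that `M!` fits in a `u64`.

```rust
/// M! (with 0! = 1); valid for m <= 20 so the result fits in a u64.
fn factorial(m: u64) -> u64 {
    (1..=m).product()
}

/// Toy stand-in for the real hash of the height H; NOT cryptographic.
fn hash_of_height(h: u64) -> u64 {
    h.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(31)
}

/// Decode permutation number `t` (0 <= t < m!) into an ordering of
/// `validators` via the factorial number system (Lehmer code).
fn leader_order(validators: &[u16], t: u64) -> Vec<u16> {
    let mut pool: Vec<u16> = validators.to_vec();
    let mut t = t;
    let mut order = Vec::with_capacity(pool.len());
    for i in (1..=pool.len() as u64).rev() {
        let f = factorial(i - 1);
        let idx = (t / f) as usize; // which remaining validator goes next
        t %= f;
        order.push(pool.remove(idx));
    }
    order
}

fn main() {
    let enabled = [0u16, 1, 2, 3]; // the M = N - F non-disabled validators
    let m_fact = factorial(enabled.len() as u64); // 4! = 24
    let t = hash_of_height(42) % m_fact; // T = Hash(H) mod M!
    let order = leader_order(&enabled, t);
    // The order is a permutation: every enabled validator appears once.
    let mut sorted = order.clone();
    sorted.sort();
    assert_eq!(sorted, enabled.to_vec());
    println!("{:?}", order);
}
```

Since distinct values of `T` decode to distinct permutations, a uniformly distributed `Hash(H)` yields a (near-)uniform choice among all `M!` orders, which is what spreads Byzantine validators randomly across the leader order at each height.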