Closed yutelin closed 2 years ago
Fantastic work, guys!
Great work!
Block insertion fails
Can you explain when block insertion might fail? I'm struggling to see why block insertion would ever fail for a valid proposal.
Return transaction fee to sender
Why not just accept zero-gasprice transactions?
We have implemented a simple faulty node that can make a validator run faulty behaviors during consensus.
Have you tried running the network with >=1/3 faulty nodes? If so, what does the result look like; what kinds of failures do you see in practice?
Thanks @vbuterin
Block insertion fails
Before actually inserting the block into the chain, the consensus layer only validates the block header. Insertion performs more checks, so it can fail for other reasons.
Return transaction fee to sender
You're right. We've updated the EIP accordingly.
testing >=1/3 faulty nodes?
Yes.
If there are more than 1/3 and less than 2/3 of faulty nodes, it will keep running round change and no consensus can be reached.
Theoretically it's also possible to finalize two conflicting blocks, if the proposer is one of the Byzantine nodes and makes two proposals and each get 2/3 prepares+commits. Though I guess that's fairly unlikely to happen in practice and so won't appear in that many random tests.
Each validator enters PRE-PREPARED upon receiving the PRE-PREPARE message with the following conditions: Block proposal is from the valid proposer. Block header is valid. Block proposal's sequence and round match the validator's state.
I know the meaning of block validity, but outside of PoW this is a little bit ambiguous. When is a block defined as valid or not without the proof-of-work?
sequence number should be greater than all pervious sequence numbers.
pervious -> previous
I like the structure, but for someone not accustomed to the terminology, using 2F + 1 without defining it until the Constants section makes it more difficult to understand.
@vbuterin
Theoretically it's also possible to finalize two conflicting blocks, if the proposer is one of the Byzantine nodes and makes two proposals and each get 2/3 prepares+commits. Though I guess that's fairly unlikely to happen in practice and so won't appear in that many random tests.
Yes, I think you are right. Suppose there are f+1 faulty nodes and 2f good nodes, and the proposer is among the faulty nodes. The proposer can send block A to the first f good nodes and block B to the other f good nodes. Then both groups can receive 2f+1 prepares+commits for blocks A and B respectively (f good nodes plus f+1 faulty ones). Thus two conflicting blocks can be finalized.
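The arithmetic behind this split attack can be checked with a quick sketch (illustrative Go, values made up for the example):

```go
package main

import "fmt"

func main() {
	f := 2
	n := 3*f + 1         // total validators, N = 3F + 1
	faulty := f + 1      // one more Byzantine node than the tolerated bound F
	honest := n - faulty // the 2f honest validators
	quorum := 2*f + 1

	// Each half of the honest validators sees f honest votes plus all
	// f+1 equivocating faulty votes for "its" block.
	votesPerHalf := honest/2 + faulty
	fmt.Println(votesPerHalf >= quorum) // true: both halves reach quorum
}
```

With only f faulty nodes the faulty votes could back at most one quorum, which is why the bound is N = 3F + 1.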
@deanstef
I know the meaning of block validity, but outside of PoW this is a little bit ambiguous. When is a block defined as valid or not without the proof-of-work?
Each validator puts `2F + 1` committed seals into the `extraData` field in the block header before inserting the block into the chain, which serves as the consensus proof of the associated block. `extraData` also contains the proposer seal so validators can verify the block source during consensus (same mechanism as in Clique).
@ice09 Thanks, we've updated this EIP accordingly.
Each validator puts 2F+1 committed seals into the extraData field in block header before inserting the block into the chain, which is seen as the consensus proof of the associated block. extraData also contains proposer seal for validators to verify the block source during consensus (same mechanism as in Clique).
Great! I was a little confused between Valid block and Consensus Proof; your response is also helpful for the meaning of validation in Clique. Thank you. Nice work, guys!
Round change timer expires.
Can you clarify when this timer starts? Is there one timer for the whole round, like in PBFT (well, in PBFT the timer starts once the client request is received), or is there a new timer at each phase (pre-prepared, prepared, etc.) as the figure seems to suggest?
Unless there is additional mechanism not described above (or perhaps I am just missing something), I think this protocol may have safety issues across round changes, as there does not seem to be anything stopping validators from committing a new block in a new round after others have committed in the previous round. This is what the "locking" mechanism in Tendermint addresses. In PBFT it's handled by broadcasting much more information during the round change. When you "blockchainify" PBFT, you can do away with this extra information if you're careful to introduce something like Tendermint's locking mechanism. I suspect that if you address these issues, you will end up with a protocol that is roughly identical (if not exactly identical) to Tendermint. Happy to discuss further and collaborate on this - great initiative!
@ebuchman
Can you clarify when this timer starts?
Yes, there is only one timer, which is reset/triggered at the beginning of every new round.
safety issues across round changes
Yes, in some extreme cases there might be safety issues. For example, say there is only one validator which receives 2F+1 commits while all the others do not. That validator would then insert a valid block into its chain while the others start a new round at the same block height. Eventually that might lead to conflicting blocks. We've put a locking mechanism in the remaining tasks section. And yeah, we're looking forward to collaborating with Tendermint!
A sticky proposer seems like it would be able to submit empty blocks or censor transactions if it never passed through the RoundChange state. As long as they submit valid blocks, they can hold their Proposer role indefinitely.
Blocks in Istanbul BFT protocol are final, which means that there are no forks and any valid block must be somewhere in the main chain.
Seems like a strong claim considering there is no penalty to being a faulty node (e.g. voting on multiple forks)
@kumavis
Faulty sticky proposer can keep generating empty valid blocks.
Yes, the sticky proposer policy can lead to this issue. We've listed "faulty proposer detection" in the remaining tasks section, aiming to resolve it. One possible way is to switch to a round robin policy whenever a validator sees an empty block. However, a sticky proposer can still hack around it by generating a very small block every round.
Block finality and penalty on faulty node.
Detecting faulty nodes deterministically is hard, which makes penalizing faulty nodes even harder. For simplicity, this PR doesn't dive into this topic; it might be worth exploring in a follow-up EIP and further research. Block finality is indeed a strong claim. In some rare cases, as @ebuchman pointed out, there might be safety issues. We listed it in the remaining tasks section as well, and are looking to resolve it by introducing some kind of locking mechanism.
Awesome work! Can you give us a sense of performance benchmark in terms of throughput and latency? Thanks!
@epoquehu
Throughput and latency
In our preliminary testing with a 4-validator setup, the consensus time took around 10ms ~ 100ms, depending on how many transactions are in each block. In our testing, we allow each block to contain up to 2000 transactions. Regarding throughput, the transactions per second (TPS) range from 400 ~ 1200; however, there are still too many Geth factors that significantly affect the result. We are trying to fix some of them and work around others. More comprehensive benchmarking and stress testing is still in progress. Stay tuned!
Is there any way to keep the nodekey (account private key) secured? Seems like it's left there unencrypted.
Great work on developing Istanbul!
One comment on "Does it still make sense to use gas?"
I've developed a testnet (using Ethermint) and modified the client to not charge gas. I wanted to bounce this idea off others to see whether it is valid...
To avoid the infinite loop problem, the validators ensure that the smart contracts being published to the blockchain are sent from a small set of white-listed accounts.
These accounts are trusted by the consortium to only publish smart contracts that have gone through a strict review process.
I suppose in the extreme edge case that a computationally expensive contract slipped through and was published by mistake, then the validators stop and roll back to before the event.
Does this sound reasonable?
Appreciate any feedback on the faults with such an implementation.
Thanks.
The current implementation (as found in Quorum) breaks the concept of the "pending" block, which is used in several calls, most notably in eth_getTransactionCount (PendingNonceAt in ethclient):
In Ethereum, the pending block means the latest confirmed block plus all pending transactions the node is aware of. This means that directly after a transaction is sent to the node (through RPC), the transaction count (aka nonce) in the "pending" block is increased. A lot of tools, like abigen in this repo or any other tool where tx signing occurs at the application level instead of in geth, rely on this for making multiple transactions at once. After the first one, the result of eth_getTransactionCount will increase so that a valid second tx can be crafted.
With the current implementation of Istanbul, the definition of the "pending block" seems to be different. When submitting a transaction, the result of eth_getTransactionCount for the sender in the "pending" block does not change. When a new block is confirmed (not containing this tx), it does change, however (while the value for "latest" doesn't). Then, on the next block confirmation, "latest" also changes because the tx is in the confirmed block.
So the "pending block" definition seems to have changed from "latest block + pending txs" to "the block that is currently being voted on". I consider this a bug; if it is done on purpose, it breaks a lot of existing applications (e.g. all users of abigen) and should be reconsidered.
I originally reported about this issue in the Quorum repo, but there doesn't seem to be a good place to report bugs in Istanbul other than here.
I'm sorry to disrupt the technical discussion here with a non-technical question: What is the intention for including this in the EIP repository? In particular I was wondering:
(1) Is this proposal seeking public protocol adoption? It seems private-chain focused, really aimed at extending quorum with the aim of also moving upstream to geth.
(2) Does the scope of EIPs in this repository extend beyond public chain protocol improvements?
I have used the set of extraData coding tools in the istanbul-tools repository to manually generate genesis.json and also defined the toml file, but when I start the nodes, it throws the error "Failed to decode message from payload", err="unauthorized address"
Fantastic work
Thank you guys very much for this great contribution. I would like to know about the progress on it.
@renuseabhaya I had the same issue. My problem was that with Istanbul, you do not use a "regular" account (meaning, an account that you generate using geth account new) to make nodes validators. You need to use the node key and create an account from the node key.
@yutelin Can you explain what the rationale was behind using an account address, derived from the node key, to identify validators instead of using the regular enode ID that is already being used for identifying nodes?
@michaelkunzmann-sap The enode id is derived from the node key.
@yutelin Yes, correct. So currently we are using
istanbul.propose("0x23971dab0b29c27fa0de9226c45bef04d9f39156", true)
Where 0x23971dab0b29c27fa0de9226c45bef04d9f39156 is the "address" of the node to be permitted. As far as I understand, this address does not represent a regular account like the ones we create with geth account new, since it is derived from the node key:
node_address = address(pub(node_key))
Since the enode id is also derived from the private node key (in its original purpose), is it possible to use the enode id instead of the address? This would save the extra step of generating an address from node key.
istanbul.propose("6f8a80d14311c39f35f516fa664deaaaa13e85b2f7493f37f6144d86991ec012937307647bd3b9a82abe2974e1407241d54947bbb39763a4cac9f77166ad92a0", true)
Ottoman testnet
We have set up a testnet for public testing. There are initially 4 validators and no designated faulty nodes. In the future, we want to extend it to 22 validators and set up a few faulty nodes amongst them.
Run a testnet node:
geth --ottoman
I have tried that with the newest geth
geth version Version: 1.8.14-unstable Architecture: amd64 Go Version: go1.10.3 Operating System: linux
but I get a
flag provided but not defined: -ottoman
So it is not yet part of vanilla geth? Only quorum?
In quorum the switch --ottoman
is recognized:
geth_quorum --ottoman
WARN [08-07|09:50:51] No etherbase set and no accounts found as default
INFO [08-07|09:50:51] Starting peer-to-peer node instance=Geth/v1.7.2-stable-df4267a2/linux-amd64/go1.9.3
INFO [08-07|09:50:51] Allocated cache and file handles database=~/.ethereum/ottoman/geth/chaindata cache=128 handles=1024
INFO [08-07|09:50:51] Writing custom genesis block
INFO [08-07|09:50:51] Initialised chain configuration config="{ChainID: 5 Homestead: 1 DAO: <nil> DAOSupport: true EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: 9223372036854775807 IsQuorum: false Engine: istanbul}"
INFO [08-07|09:50:51] Initialising Ethereum protocol versions="[63 62]" network=5
INFO [08-07|09:50:51] Loaded most recent local header number=0 hash=22919a…075196 td=1
INFO [08-07|09:50:51] Loaded most recent local full block number=0 hash=22919a…075196 td=1
INFO [08-07|09:50:51] Loaded most recent local fast block number=0 hash=22919a…075196 td=1
INFO [08-07|09:50:51] Regenerated local transaction journal transactions=0 accounts=0
INFO [08-07|09:50:51] Starting P2P networking
INFO [08-07|09:50:53] UDP listener up self=enode://fe329f4395d30db66cced5d750fd4395993f66ccd08c703ea2653b78cdd364b76938e13d2ab8cc5129a295fdc0d43ecc1dfb9c408b24639bbe42dd1091333251@[::]:30303
INFO [08-07|09:50:53] RLPx listener up self=enode://fe329f4395d30db66cced5d750fd4395993f66ccd08c703ea2653b78cdd364b76938e13d2ab8cc5129a295fdc0d43ecc1dfb9c408b24639bbe42dd1091333251@[::]:30303
INFO [08-07|09:50:53] IPC endpoint opened: ~/.ethereum/ottoman/geth.ipc
but then it does not sync.
Please update the hardcoded IP addresses of the bootnodes, or publish a script / list of current bootnodes. Thanks.
Hi, I have some issues with block creation (mining) using IBFT. I'm testing with 7 validator nodes: when I bring 4 nodes up, wait some time (around 30 minutes), and then bring the 5th node up, there is no block creation even after more than half an hour. However, if I bring all 5 nodes up at the same time, block creation happens normally. What might be the issue?
I have given more details here https://github.com/getamis/istanbul-tools/issues/113
Is there any way, using the ISTANBUL OPTIONS --istanbul.requesttimeout value and --istanbul.blockperiod value, to change the block creation time? By default, the block time is 1 sec; I would like to increase it to 10 sec. Thanks.
Any plans to integrate this into the official go-ethereum project?
One more question: In Clique, with N = 3*f + 1 nodes, if I wait for a TX to be confirmed in 2*f + 1 blocks, would this resemble the same consistency property (transaction finality) as in IBFT/PBFT? Of course it would be slower, but theoretically, would it be the same behaviour?
Is Gossip complete?
Hi, I have a question about IBFT's consensus getting stuck when the number of locked nodes is < n/3:
Imagine we have n=7 nodes, f=2. The nodes are A, B, C, D, E, F, G; F and G are Byzantine nodes.
First round: A proposes p1, and only E sees B-C-D-E-F vote PREPARE for p1 -> E locks on p1. The rest of the nodes time out at PREPREPARED.
Second round: B proposes p2, and only D sees A-C-D-F-G vote PREPARE for p2 -> D locks on p2. The rest of the nodes time out at PREPREPARED.
At this stage, F and G stop voting.
We have 5 honest nodes, but E and D cannot unlock to either p1 or p2, and A, B, C cannot reach consensus among themselves, since at most 4 nodes can vote for any one proposal while we need at least 5.
As far as I can see, the current implementation of locks is not sufficient to handle this case.
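The vote counting in this scenario can be sketched quickly (illustrative Go, not the actual implementation; it just restates the comment's arithmetic):

```go
package main

import "fmt"

func main() {
	quorum := 5 // 2f+1 with n=7, f=2

	// After F and G stop voting, the live nodes are A, B, C (unlocked),
	// D (locked on p2), and E (locked on p1).
	votesForP1 := 3 + 1 // A, B, C plus E; D cannot unlock from p2
	votesForP2 := 3 + 1 // A, B, C plus D; E cannot unlock from p1

	fmt.Println(votesForP1 >= quorum, votesForP2 >= quorum) // false false
}
```

Neither proposal can ever gather the 5 votes needed, so without an unlock rule the network stalls.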
This still has not made it to accepted EIP status, @axic? Eeek. Yes, so I very much agree.
With the EEA/EF Mainnet initiative, we really do need to be starting to consider EEA standards within the same EIP process, even if they do not apply to the ETH mainnet.
The EIP standards process needs to look at Ethereum-as-a-protocol, not purely the needs of $ETH.
When I raised that to @Souptacular in 2017, his response was that there was likely little appetite in the Core Devs group for taking on that extra load, considering that such proposals were not of direct benefit to ETH. Maybe the appetite is different now, especially with PegaSys people spanning both sides, @timbeiko and @shemnon being deeply involved with Core Devs, etc?
I am a bit confused, but I don't think anyone would have rejected this submitted as an EIP. As it stands today, this is only a discussion. When it gets submitted as a pull request, it can be merged as a draft and likely turned final, given it was implemented in multiple clients (and superseded already?).
Note that the Quorum implementation has recently changed the calculation for a quorum of validators to fix an issue. There are a bunch of details I'm not familiar with but this spec likely needs an update before it becomes final. From my memory of trying to implement IBFT1 I seem to recall some parts of this were misleading or wrong (or possibly the Quorum implementation was wrong but that's essentially become the standard for IBFT1 since it's what's in production). I should have raised them at the time (sorry) and would have to review the spec again now, though there are likely better people.
There is also ongoing work in the EEA to adopt a standard BFT consensus algorithm. I'm not sure what the status of that is. It does mean that we don't necessarily need this and other non-mainnet stuff as EIPs, the EEA spec may (or may not) be a better place for them.
@ajsutton My gut says that everything which can be an EIP should be an EIP, to avoid siloing between Public Ethereum and Enterprise Ethereum (which is exactly what happened with the EEA - deliberately at first, but with the intention of converging them back together in happier days - i.e. now).
There is nothing to say that all EIPs have to be implemented by ALL clients to be useful. There is nothing to say that all EIPs have to apply to the ETH mainnet to be accepted.
The fact that EIPs were NOT originally written for functionality like: JSON-RPCs, Swarm, Warp-Sync, Aura, Clique and more was a real problem. You were stuck with trying to be bug-for-bug compatible with Geth or with Parity.
Now we have more clients I would argue that pretty much EVERY useful feature from ETH1 clients, including EEA features, should have EIPs written for them - unless they are very experimental and new. The spec is what lets other clients adopt.
Note that the Quorum implementation has recently changed the calculation for a quorum of validators to fix an issue. There are a bunch of details I'm not familiar with but this spec likely needs an update before it becomes final. From my memory of trying to implement IBFT1 I seem to recall some parts of this were misleading or wrong (or possibly the Quorum implementation was wrong but that's essentially become the standard for IBFT1 since it's what's in production). I should have raised them at the time (sorry) and would have to review the spec again now, though there are likely better people.
There is also ongoing work in the EEA to adopt a standard BFT consensus algorithm. I'm not sure what the status of that is. It does mean that we don't necessarily need this and other non-mainnet stuff as EIPs, the EEA spec may (or may not) be a better place for them.
We modified the implementation to better handle dynamic validators, based on a reported issue with scaling a network from 1 validator to 4. We'll continue to enhance the protocol as IBFT. We are currently working on a TLA+ spec, with a few updates to the described protocol so far, which we'll also make available once it's completed, and we'd be more than happy to see it as an EIP. I thought this was originally an EIP.
Clique is an EIP issue just like IBFT - #225
It is an EIP actually: https://eips.ethereum.org/EIPS/eip-225
JSON-RPC has a doc that serves much like a spec, EEA references specific wiki edit versions - https://github.com/ethereum/wiki/wiki/JSON-RPC
It has an EIP too: https://eips.ethereum.org/EIPS/eip-1474
The Clique EIP was written by @karalabe in an unsuccessful attempt to "unfork" the different POA approaches after Parity "went first" with Aura and then a group of companies launched the Kovan testnet without even informing the Geth team:
https://medium.com/@Digix/announcing-kovan-a-stable-ethereum-public-testnet-10ac7cb6c85f
Parity did not "play ball" and implement Clique in Parity, and also did not author an EIP of their own for Aura, or propose any alternative standard which both teams could implement.
That was finally resolved by the Gorli project (co-funded by the EF and ETC Coop) which added Clique support to Parity. Thank you @soc1c, @aidanih and @YazzyYaz. ETC Coop paid $130K on our side for that to happen, and I believe that the EF matched that funding.
https://medium.com/ethereum-classic/building-a-better-unified-testnet-3f48490cd4e1 https://goerli.net/
The JSON-RPC EIP also happened a lot later than the original Wiki spec. Does Parity even comply with the EIP? I honestly do not know. The lack of alignment between Geth and Parity on that score has been an issue since 2016.
A Warp-Sync EIP would have been very useful. Aleth was leveraging that functionality at one stage, right, @axic? Is that still the case?
Swarm is "graduated" from EF funding now, and they have their own process, making an EIP moot at this stage:
https://github.com/ethersphere/SWIPs
ETC Labs have started funding Swarm now. And @tgerring has been funding personally. And they have partnered by @pipermerriam and Trinity team. Go @zelig :-)
https://medium.com/ethereum-classic-labs/ethereum-classic-labs-partnership-announcement-79328d5055f4
https://twitter.com/BobSummerwill/status/1174071570588815360
Hello, I have a small question.
Why "Istanbul" is used as the name? Is it from Ethereum Istanbul update?
The Istanbul name here predates the fork.
The Istanbul name here predates the fork.
Predates the fork? So why is it called Istanbul?? It is not related to Ethereum Istanbul, right?
Correct, @NoriMin.
IBFT was created by AMIS, a Taiwanese banking consortium, in 2017 and it is completely unrelated to the Istanbul hard fork.
They called it Istanbul as a riff on Byzantium Fault Tolerance.
Where Byzantium, Constantinople and Istanbul were the names assigned to the phases of what was originally planned as a single hard fork called Metropolis, the phase of the original ETH roadmap prior to Serenity.
Those all being different names which the real world city of Istanbul has had in its history (and being a metropolis).
@bobsummerwill I understood! Thank you:)
Hi, sorry, I would like to see if someone could give me details about the IBFT consensus mode.
For example, does IBFT choose a random node as the proposer in each round? And does IBFT choose which nodes participate in each round, or does it work with all nodes?
Change log: `extraData` tools.
Pull request:
https://github.com/ethereum/go-ethereum/pull/14674
Istanbul Byzantine fault tolerant consensus protocol
Note, this work is deeply inspired by Clique POA. We've tried to design as similar a mechanism as possible in the protocol layer, such as with validator voting. We've also followed its EIP style of putting the background and rationale behind the proposed consensus protocol to help developers easily find technical references. This work is also inspired by Hyperledger's SBFT, Tendermint, HydraChain, and NCCU BFT.
Terminology
Consensus
Istanbul BFT is inspired by the Castro-Liskov 99 paper. However, the original PBFT needs quite a bit of tweaking to make it work with a blockchain. First off, there is no specific "client" which sends out requests and waits for the results; instead, all of the validators can be seen as clients. Furthermore, to keep the blockchain progressing, a proposer is continuously selected in each round to create a block proposal for consensus. Also, for each consensus result, we expect to generate a verifiable new block rather than a bunch of read/write operations to the file system.
Istanbul BFT inherits from the original PBFT a 3-phase consensus: `PRE-PREPARE`, `PREPARE`, and `COMMIT`. The system can tolerate at most `F` faulty nodes in an `N`-validator network, where `N = 3F + 1`. Before each round, the validators pick one of them as the proposer, by default in a round-robin fashion. The proposer then proposes a new block and broadcasts it along with the `PRE-PREPARE` message. Upon receiving the `PRE-PREPARE` message from the proposer, a validator enters the `PRE-PREPARED` state and broadcasts a `PREPARE` message; this step makes sure all validators are working on the same sequence and the same round. Upon receiving `2F + 1` `PREPARE` messages, a validator enters the `PREPARED` state and broadcasts a `COMMIT` message; this step informs its peers that it accepts the proposed block and is going to insert it into the chain. Lastly, validators wait for `2F + 1` `COMMIT` messages to enter the `COMMITTED` state and then insert the block into the chain.

Blocks in the Istanbul BFT protocol are final, which means that there are no forks and any valid block must be somewhere in the main chain. To prevent a faulty node from generating a totally different chain from the main chain, each validator appends the `2F + 1` received `COMMIT` signatures to the `extraData` field in the header before inserting it into the chain. Thus blocks are self-verifiable and light clients can be supported as well. However, the dynamic `extraData` would cause an issue for block hash calculation: since the same block from different validators can carry different sets of `COMMIT` signatures, the same block could have different block hashes as well. To solve this, we calculate the block hash excluding the `COMMIT` signatures part. Therefore, we can still keep block/block-hash consistency as well as put the consensus proof in the block header.
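The happy path described above can be sketched as a small message-counting state machine (a simplified sketch, not the actual implementation; all names are illustrative and validity checks are omitted):

```go
package main

import "fmt"

// Phase models the states a validator moves through on the happy path.
type Phase int

const (
	NewRound Phase = iota
	PrePrepared
	Prepared
	Committed
)

type Validator struct {
	phase             Phase
	f                 int // max tolerated faulty validators
	prepares, commits int
}

func (v *Validator) quorum() int { return 2*v.f + 1 }

func (v *Validator) OnPrePrepare() {
	if v.phase == NewRound {
		v.phase = PrePrepared // proposal validity checks omitted in this sketch
	}
}

func (v *Validator) OnPrepare() {
	v.prepares++
	if v.phase == PrePrepared && v.prepares >= v.quorum() {
		v.phase = Prepared
	}
}

func (v *Validator) OnCommit() {
	v.commits++
	if v.phase == Prepared && v.commits >= v.quorum() {
		v.phase = Committed // ready to insert the block
	}
}

func main() {
	v := &Validator{f: 1} // N = 4 validators
	v.OnPrePrepare()
	for i := 0; i < 3; i++ { // 2F + 1 = 3 PREPARE messages
		v.OnPrepare()
	}
	for i := 0; i < 3; i++ { // 2F + 1 = 3 COMMIT messages
		v.OnCommit()
	}
	fmt.Println(v.phase == Committed) // true
}
```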
Consensus states
Istanbul BFT is a state machine replication algorithm. Each validator maintains a state machine replica in order to reach block consensus.
States:
- `NEW ROUND`: The proposer sends a new block proposal. Validators wait for the `PRE-PREPARE` message.
- `PRE-PREPARED`: A validator has received a `PRE-PREPARE` message and broadcasts a `PREPARE` message. Then it waits for `2F + 1` `PREPARE` or `COMMIT` messages.
- `PREPARED`: A validator has received `2F + 1` `PREPARE` messages and broadcasts `COMMIT` messages. Then it waits for `2F + 1` `COMMIT` messages.
- `COMMITTED`: A validator has received `2F + 1` `COMMIT` messages and is able to insert the proposed block into the blockchain.
- `FINAL COMMITTED`: A new block is successfully inserted into the blockchain and the validator is ready for the next round.
- `ROUND CHANGE`: A validator is waiting for `2F + 1` `ROUND CHANGE` messages on the same proposed round number.

State transitions:
- `NEW ROUND` -> `PRE-PREPARED`:
  - The proposer broadcasts a new block proposal and enters the `PRE-PREPARED` state.
  - Each validator enters `PRE-PREPARED` upon receiving the `PRE-PREPARE` message with the following conditions:
    - Block proposal is from the valid proposer.
    - Block header is valid.
    - Block proposal's sequence and round match the validator's state.
  - A validator then broadcasts a `PREPARE` message to other validators.
- `PRE-PREPARED` -> `PREPARED`:
  - A validator receives `2F + 1` valid `PREPARE` messages to enter the `PREPARED` state. Valid messages conform to the following conditions:
    - Matched sequence and round.
    - Matched block hash.
    - Messages from known validators.
  - A validator broadcasts a `COMMIT` message upon entering the `PREPARED` state.
- `PREPARED` -> `COMMITTED`:
  - A validator receives `2F + 1` valid `COMMIT` messages to enter the `COMMITTED` state. Valid messages conform to the following conditions:
    - Matched sequence and round.
    - Matched block hash.
    - Messages from known validators.
- `COMMITTED` -> `FINAL COMMITTED`:
  - A validator appends the `2F + 1` commitment signatures to `extraData` and tries to insert the block into the blockchain.
  - A validator enters the `FINAL COMMITTED` state when insertion succeeds.
- `FINAL COMMITTED` -> `NEW ROUND`:
  - Validators pick a new proposer and start the next round timer.

Round change flow
- A validator enters `ROUND CHANGE` when one of the following happens:
  - Round change timer expires.
  - Invalid `PREPREPARE` message.
  - Block insertion fails.
- The validator then broadcasts a `ROUND CHANGE` message along with the proposed round number and waits for `ROUND CHANGE` messages from other validators. The proposed round number is selected based on the following conditions:
  - If the validator has received `ROUND CHANGE` messages from its peers, it picks the largest round number which has `F + 1` `ROUND CHANGE` messages.
  - Otherwise, it picks `1 + current round number` as the proposed round number.
- Whenever a validator receives `F + 1` `ROUND CHANGE` messages on the same proposed round number, it compares that number with its own; if the received one is larger, the validator broadcasts a `ROUND CHANGE` message again with the received number.
- Upon receiving `2F + 1` `ROUND CHANGE` messages on the same proposed round number, the validator exits the round change loop, calculates the new proposer, and then enters the `NEW ROUND` state.

Proposer selection
Currently we support two policies: round robin and sticky proposer.
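A minimal sketch of how the two policies could differ (illustrative only, not the actual implementation; the function names are made up):

```go
package main

import "fmt"

// roundRobinProposer rotates the proposer on every new block as well as
// on round changes.
func roundRobinProposer(validators []string, lastProposerIdx, round int) string {
	return validators[(lastProposerIdx+round+1)%len(validators)]
}

// stickyProposer keeps the same proposer across blocks and only moves to
// the next validator on a round change (round > 0).
func stickyProposer(validators []string, lastProposerIdx, round int) string {
	return validators[(lastProposerIdx+round)%len(validators)]
}

func main() {
	vals := []string{"A", "B", "C", "D"}
	fmt.Println(roundRobinProposer(vals, 0, 0)) // B: rotates every block
	fmt.Println(stickyProposer(vals, 0, 0))     // A: unchanged without a round change
	fmt.Println(stickyProposer(vals, 0, 1))     // B: advanced by a round change
}
```

The sticky policy only hands the role over through `ROUND CHANGE`, which is what the faulty-proposer discussion earlier in this thread is about.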
Validator list voting
We use a similar validator voting mechanism as Clique and copy most of the content from the Clique EIP. Every epoch transaction resets the validator voting, meaning that if an authorization or de-authorization vote is still in progress, that voting process is terminated.

For all transaction blocks:
- Proposals reaching majority consensus (`VALIDATOR_LIMIT`) come into effect immediately.

Future message and backlog
In an asynchronous network environment, one may receive future messages which cannot be processed in the current state. For example, a validator can receive `COMMIT` messages while still in `NEW ROUND`. We call this kind of message a "future message". When a validator receives a future message, it puts the message into its backlog and tries to process it later whenever possible.
Optimization
To speed up the consensus process, a validator that receives `2F + 1` `COMMIT` messages before receiving `2F + 1` `PREPARE` messages jumps straight to the `COMMITTED` state, so that it does not need to wait for further `PREPARE` messages.
Constants
We define the following constants:
- `EPOCH_LENGTH`: Number of blocks after which to checkpoint and reset the pending votes. Suggested `30000` for the testnet to remain analogous to the mainnet `ethash` epoch.
- `REQUEST_TIMEOUT`: Timeout for each consensus round before firing a round change, in milliseconds.
- `BLOCK_PERIOD`: Minimum timestamp difference in seconds between two consecutive blocks.
- `PROPOSER_POLICY`: Proposer selection policy; defaults to round robin.
- `ISTANBUL_DIGEST`: Fixed magic number `0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365` of `mixDigest` in the block header for Istanbul block identification.
- `DEFAULT_DIFFICULTY`: Default block difficulty, which is set to `0x0000000000000001`.
- `EXTRA_VANITY`: Fixed number of extra-data prefix bytes reserved for proposer vanity. Suggested `32 bytes` to retain the current extra-data allowance and/or use.
- `NONCE_AUTH`: Magic nonce number `0xffffffffffffffff` to vote on adding a validator.
- `NONCE_DROP`: Magic nonce number `0x0000000000000000` to vote on removing a validator.
- `UNCLE_HASH`: Always `Keccak256(RLP([]))` as uncles are meaningless outside of PoW.
- `PREPREPARE_MSG_CODE`: Fixed number `0`. Message code for `PREPREPARE` messages.
- `COMMIT_MSG_CODE`: Fixed number `1`. Message code for `COMMIT` messages.
- `ROUND_CHANGE_MSG_CODE`: Fixed number `2`. Message code for `ROUND CHANGE` messages.

We also define the following per-block constants:
- `BLOCK_NUMBER`: Block height in the chain, where the height of the genesis block is 0.
- `N`: Number of authorized validators.
- `F`: Number of allowed faulty validators.
- `VALIDATOR_INDEX`: Index of the block validator in the sorted list of current authorized validators.
- `VALIDATOR_LIMIT`: Number of validators required to pass an authorization or de-authorization proposal. Must be `floor(N / 2) + 1` to enforce majority consensus on a chain.
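The relationships among these constants follow directly from `N = 3F + 1` and `floor(N / 2) + 1`, and can be sketched as:

```go
package main

import "fmt"

// constants derives the per-block constants above from the validator set
// size n: the fault bound F, the 2F+1 message quorum, and the voting
// majority VALIDATOR_LIMIT.
func constants(n int) (f, quorum, validatorLimit int) {
	f = (n - 1) / 3          // largest F such that n >= 3F + 1
	quorum = 2*f + 1         // messages needed to advance a consensus phase
	validatorLimit = n/2 + 1 // floor(N / 2) + 1
	return
}

func main() {
	f, q, lim := constants(7)
	fmt.Println(f, q, lim) // 2 5 4
}
```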
Block header

We didn't invent a new block header for Istanbul BFT. Instead, we follow Clique in repurposing the `ethash` header fields as follows:
- `beneficiary`: Address to propose modifying the list of validators with.
- `nonce`: Proposer's proposal regarding the account defined by the beneficiary field. It should be `NONCE_DROP` to propose deauthorizing the beneficiary as an existing validator, or `NONCE_AUTH` to propose authorizing the beneficiary as a new validator, and must be either `NONCE_DROP` or `NONCE_AUTH`.
- `mixHash`: Fixed magic number `0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365` for Istanbul block identification.
- `ommersHash`: Must be `UNCLE_HASH` as uncles are meaningless outside of PoW.
- `timestamp`: Must be at least the parent timestamp + `BLOCK_PERIOD`.
- `difficulty`: Must be filled with `0x0000000000000001`.
- `extraData`: Combined field for proposer vanity and RLP-encoded Istanbul extra data, where the Istanbul extra data contains the validator list, proposer seal, and committed seals.

Thus the `extraData` would be in the form of `EXTRA_VANITY | ISTANBUL_EXTRA`, where `|` represents a fixed index separating the vanity from the Istanbul extra data (not an actual separator character):
- The initial `EXTRA_VANITY` bytes (fixed) may contain arbitrary proposer vanity data.
- The `ISTANBUL_EXTRA` bytes are the RLP-encoded Istanbul extra data calculated from `RLP(IstanbulExtra)`, where `RLP()` is the RLP encoding function and `IstanbulExtra` is the Istanbul extra data, containing:
  - `Validators`: The list of validators, which must be sorted in ascending order.
  - `Seal`: The proposer's signature sealing of the header.
  - `CommittedSeal`: The list of commitment signature seals as consensus proof.
The Istanbul block hash calculation is different from the `ethash` block hash calculation for the following reasons:

- The proposer seal needs to be in `extraData` to prove the block is signed by the chosen proposer.
- `2F + 1` committed seals need to be in `extraData` as consensus proof, to prove the block has gone through consensus.

The calculation is still similar to the `ethash` block hash calculation, with the exception that we need to deal with `extraData`. We calculate the fields as follows:

#### Proposer seal calculation
By the time of proposer seal calculation, the committed seals are still unknown, so we calculate the seal with those parts left empty. The calculation is as follows:

- `Proposer seal`: `SignECDSA(Keccak256(RLP(Header)), PrivateKey)`
- `PrivateKey`: Proposer's private key.
- `Header`: Same as the `ethash` header, only with a different `extraData`.
- `extraData`: `vanity | RLP(IstanbulExtra)`, where in the `IstanbulExtra`, `CommittedSeal` and `Seal` are empty arrays.

#### Block hash calculation
While calculating the block hash, we need to exclude the committed seals since that data is dynamic between different validators. Therefore, we make `CommittedSeal` an empty array while calculating the hash. The calculation is:

- `Header`: Same as the `ethash` header, only with a different `extraData`.
- `extraData`: `vanity | RLP(IstanbulExtra)`, where in the `IstanbulExtra`, `CommittedSeal` is an empty array.

#### Consensus proof
Before inserting a block into the blockchain, each validator needs to collect `2F + 1` committed seals from other validators to compose a consensus proof. Once it receives enough committed seals, it fills the `CommittedSeal` in `IstanbulExtra`, recalculates the `extraData`, and then inserts the block into the blockchain. Note that since committed seals can differ between sources, we exclude that part while calculating the block hash, as in the previous section.

#### Committed seal calculation
The committed seal is calculated by each validator signing the block hash together with the `COMMIT_MSG_CODE` message code using its private key. The calculation is as follows:

- `Committed seal`: `SignECDSA(Keccak256(CONCAT(Hash, COMMIT_MSG_CODE)), PrivateKey)`.
- `CONCAT(Hash, COMMIT_MSG_CODE)`: Concatenation of the block hash and the `COMMIT_MSG_CODE` bytes.
- `PrivateKey`: Signing validator's private key.
### Block locking mechanism

The locking mechanism is introduced to resolve safety issues. In general, when a proposer is locked at a certain height `H` with a block `B`, it can only propose `B` for height `H`. Likewise, when a validator is locked, it can only vote on `B` for height `H`.

#### Lock
A lock `Lock(B, H)` contains a block and its height, meaning its owning validator is currently locked at a certain block `B` and height `H`. In the following, we also use `+` to denote "more than" and `-` to denote "less than". For example, `+2/3` validators denotes more than two-thirds of the validators, while `-1/3` validators denotes less than one-third of the validators.

#### Lock and unlock
- Lock: A validator locks when it receives `2F + 1` `PREPARE` messages on a block `B` at height `H`.
- Unlock: A validator unlocks at height `H` and block `B` when it fails to insert block `B` into the blockchain.
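These lock and unlock rules can be sketched as a small state machine (an illustrative sketch with hypothetical names, not the actual implementation):

```go
package main

import "fmt"

// lock tracks whether a validator is locked on a block at a height.
type lock struct {
	locked bool
	block  string // stand-in for a block hash
	height uint64
}

// onPrepareQuorum locks after 2F + 1 PREPARE messages on block b at height h.
func (l *lock) onPrepareQuorum(b string, h uint64) { *l = lock{true, b, h} }

// onInsertFailure unlocks when inserting the locked block fails.
func (l *lock) onInsertFailure(b string, h uint64) {
	if l.locked && l.block == b && l.height == h {
		*l = lock{}
	}
}

// mayVote reports whether the validator may vote on block b at height h:
// when locked, only the locked block at the locked height is allowed.
func (l *lock) mayVote(b string, h uint64) bool {
	return !l.locked || (l.block == b && l.height == h)
}

func main() {
	var l lock
	l.onPrepareQuorum("B", 10)
	fmt.Println(l.mayVote("B", 10))  // true: the locked block
	fmt.Println(l.mayVote("B'", 10)) // false: locked on B, cannot vote B'
	l.onInsertFailure("B", 10)
	fmt.Println(l.mayVote("B'", 10)) // true: unlocked after insertion failure
}
```

The protocol below is then just this state machine applied to each message type.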
#### Protocol (`+2/3` validators are locked with `Lock(B,H)`)

- `PRE-PREPARE`:
    - Locked proposer: Broadcasts `PRE-PREPARE` on `B`, and enters the `PREPARED` state.
    - Unlocked proposer: Broadcasts `PRE-PREPARE` on a new block `B'`.
    - Any validator receiving a `PRE-PREPARE` on an existing block: Ignore.
    - Locked validator:
        - Receives `PRE-PREPARE` on `B`: Broadcasts `PREPARE` on `B`.
        - Receives `PRE-PREPARE` on `B'`: Broadcasts `ROUND CHANGE`.
    - Unlocked validator:
        - Receives `PRE-PREPARE` on `B`: Broadcasts `PREPARE` on `B`.
        - Receives `PRE-PREPARE` on `B'`: Broadcasts `PREPARE` on `B'`. Note: `+2/3` are locked at `B`, which would lead to a round change.
- `PREPARE`:
    - Locked validator:
        - Receives `PREPARE` on `B`: Broadcasts `COMMIT` on `B`, and enters the `PREPARED` state. Note: a locked proposer has already entered `PREPARED` in the `PRE-PREPARE` stage.
        - Receives `PREPARE` on `B'`: Ignore. A validator cannot receive `+1/3` `PREPARE` on `B'` since `+2/3` are locked at `B`; thus the consensus round on `B'` will cause a round change. The validator cannot broadcast `ROUND CHANGE` directly here, since this `PREPARE` message can possibly come from a faulty node.
    - Unlocked validator:
        - Receives `PREPARE` on `B`: Waits for `2F + 1` `PREPARE` messages on `B`. Note: it may receive `2F + 1` `COMMIT` messages prior to receiving `2F + 1` `PREPARE` messages, since `+2/3` validators are locked at `B`. In this case, it jumps to the `COMMITTED` state directly.
        - Receives `PREPARE` on `B'`: Waits for `2F + 1` `PREPARE` messages on `B'`. Note: `+2/3` validators are locked on `B`, which would lead to a round change.
- `COMMIT`:
    - Receives `COMMIT` on `B`: Waits for `2F + 1` `COMMIT` messages.
    - Receives `COMMIT` on `B'`: Shouldn't happen.

#### Locking cases
- Round change:
    - `+2/3` are locked:
        - A locked proposer proposes `B`.
        - An unlocked proposer proposes `B'`, which will lead to another round change.
        - Eventually `B` will be committed by honest validators.
    - `+1/3 ~ 2/3` are locked:
        - A locked proposer proposes `B`.
        - An unlocked proposer proposes `B'`. However, since `+1/3` are locked at `B`, no validator can ever receive `2F + 1` `PREPARE` on `B'`, meaning no validator can become locked at `B'`. Also, those `+1/3` locked validators will not respond to `B'`, which eventually leads to a round change.
        - Eventually `B` will be committed by honest validators.
    - `-1/3` are locked:
        - A locked proposer proposes `B`.
        - An unlocked proposer proposes `B'`. If `+2/3` reach consensus on `B'`, the locked `-1/3` will get `B'` through synchronization and move to the next height. Otherwise, there will be another round change.
        - Eventually either `B` or another block `B'` will be committed.
- Round change caused by insertion failure:
    - `+2/3` validators will unlock block `B` at `H` and try to propose a new block `B'`.
    - `-1/3` validators insert the block successfully, but the others successfully trigger a round change, meaning `+1/3` are still locked at `Lock(B,H)`:
        - Proposer has `B`: The proposer will propose `B'` at `H'`, but `+1/3` are locked at `B`, so `B'` won't pass consensus, which will eventually lead to a round change. The other validators will either perform consensus on `B` or get `B` through synchronization.
        - Proposer doesn't have `B`: The proposer is unlocked from `B` and proposes `B'` at `H`. The rest is the same as case 1 above.
    - `+1/3` validators insert the block successfully, and `-2/3` are trying to trigger a round change at `H`:
        - Proposer has `B`: The proposer will propose `B'` at `H'`, but it won't pass consensus until `+1/3` get `B` through synchronization.
        - Proposer doesn't have `B`: The proposer is unlocked from `B` and proposes `B'` at `H`. The rest is the same as case 1 above.
    - `+2/3` validators insert the block successfully, and `-1/3` are trying to trigger a round change at `H`:
        - Proposer has `B`: The proposer will propose `B'` at `H'`, which may lead to a successful consensus. Then those `-1/3` need to get `B` through synchronization.
        - Proposer doesn't have `B`: The proposer is unlocked from `B` and proposes `B'` at `H`. Since `+2/3` already have `B` at `H`, this round would cause a round change.

### Gossip network
Traditionally, validators need to be strongly connected to reach stable consensus results, meaning every validator is directly connected to every other; however, in practical network environments, stable and constant p2p connections are hard to achieve. To resolve this, Istanbul BFT implements a gossip network to overcome this constraint. In a gossip network, validators only need to be weakly connected: any two validators are considered connected when they are either directly connected or connected through one or more validators in between. Consensus messages are relayed between validators.
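The core of such relaying is de-duplication: forward each consensus message to peers at most once, so weakly connected validators still receive every message without broadcast storms. A minimal sketch (with a hypothetical `relayCache`; the real implementation also bounds the cache, tracks per-peer knowledge, and so on):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// relayCache remembers which consensus messages have already been seen,
// keyed by message hash.
type relayCache struct {
	seen map[[32]byte]bool
}

func newRelayCache() *relayCache {
	return &relayCache{seen: make(map[[32]byte]bool)}
}

// shouldRelay reports whether msg is new; a validator would forward it
// to its peers only in that case, then drop later duplicates.
func (c *relayCache) shouldRelay(msg []byte) bool {
	h := sha256.Sum256(msg)
	if c.seen[h] {
		return false
	}
	c.seen[h] = true
	return true
}

func main() {
	c := newRelayCache()
	fmt.Println(c.shouldRelay([]byte("PREPARE B H=10"))) // true: first sight, relay
	fmt.Println(c.shouldRelay([]byte("PREPARE B H=10"))) // false: duplicate, drop
}
```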
### How to run
Running Istanbul BFT validators and nodes is similar to running an official node in a private chain. First, initialize the data folder from the genesis file; then start the validators, and finally the regular nodes.
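For illustration, a minimal sequence using stock geth flags might look like the following (the exact binary and flags for the Istanbul fork are assumptions here; consult the getamis/go-ethereum README for the authoritative commands):

```shell
# Assumed commands based on stock geth flags; the Istanbul fork's exact
# flags may differ -- check the repository README.
geth --datadir data init genesis.json                    # initialize the data folder
geth --datadir data --nodekey nodekey --syncmode "full"  # start a validator
geth --datadir data --syncmode "full"                    # start a regular node
```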
Note on `syncmode`: `--syncmode "full"` is required for the first set of validators to initialize a new network. Since we are using the fetcher to insert blocks, if we don't set full mode, the fetcher cannot insert the first block; please refer to the corresponding code in `eth/handler.go`. The sync mode only matters when there are existing blocks, so there is no impact on initializing a new network.

Later-joining validators don't need full mode, as they can get blocks via the downloader; after the first sync from peers, they will automatically switch to full mode.
### Command line options

#### Nodekey and validator

To be a validator, a node needs to meet the following condition: the address derived from its nodekey must be included in the `extraData`'s validators section.

#### genesis.json
To run an Istanbul BFT chain, the `config` field is required in `genesis.json`, and the `pbft` subfield must be present. Example as the following:

#### `extraData` tools

We've created a set of `extraData` coding tools in the istanbul-tools repository to help developers manually generate `genesis.json`.

Encoding: Before encoding, you need to define a toml file with `vanity` and `validators` fields for the proposer vanity and the validator set. Please refer to example.toml for an example. The output is a hex string which can be put into the `extraData` field directly.

Command:
Decoding: Use the `--extradata` option to pass the `extraData` hex string. The output shows the following, if present: vanity, validator set, seal, and committed seals.

Command:
### Ottoman testnet

We have set up a testnet for public testing. There are initially 4 validators and no designated faulty nodes. In the future, we want to extend it to 22 validators and set up a few faulty nodes among them.
#### Run testnet node

#### Faulty node
We have implemented a simple faulty node that can make a validator run faulty behaviors during consensus. There are six behaviors included in this implementation:

- `NotBroadcast`: The validator doesn't broadcast any message.
- `SendWrongMsg`: The validator sends out messages with wrong message codes.
- `ModifySig`: The validator modifies the message signatures.
- `AlwaysPropose`: The validator always sends out proposals.
- `AlwaysRoundChange`: The validator always sends `ROUND CHANGE` while receiving messages.
- `BadBlock`: The validator proposes a block with a bad body.

Run the following command to enable a faulty node:

Where `<MODE>` can be one of the following numbers:

- `0`: Disable faulty behaviors.
- `1`: Randomly run any faulty behavior.
- `2`: `NotBroadcast`.
- `3`: `SendWrongMsg`.
- `4`: `ModifySig`.
- `5`: `AlwaysPropose`.
- `6`: `AlwaysRoundChange`.
- `7`: `BadBlock`.

### Background
The idea of implementing a Byzantine fault tolerant (BFT) consensus came from the challenges we faced while building blockchain solutions for banks. We chose Ethereum as the baseline protocol mostly because of its smart contract capability. However, the built-in consensus, proof of work (ethash), is not the ideal choice when settlement finality and minimum latency are required.

Banking systems tend to form a private chain or consortium chain to run their applications, and PBFT is ideal for these settings. These environments require a higher degree of manageability and higher throughput, while validator scalability is not required. Many of the decentralization benefits of PoW in public chains become drawbacks in a private/consortium chain. On the other hand, designated validators in a PBFT environment map well to private/consortium chains.
### Remaining Tasks

- … `extraData` field, but should be fairly straightforward.
- … `worker.go` code.

### Notes and discussions
**Does it still make sense to use gas?**

Yes. We still need gas to prevent infinite loops and any other kind of EVM exhaustion.

**Does it make sense to charge gas in a consortium chain?**

The network would be vulnerable if every account had unlimited gas or unlimited transaction-sending power. However, to allow free transactions, one can run all validators with the gas price flag `--gasprice 0` to accept transactions at a gas price of zero.

**Put consensus proof in the next block?**
Currently our block header can vary in `extraData` depending on its source validator, because each validator puts the consensus proof into the block header. One way to resolve this is to put the proof in the next block: in the proposing stage, the proposer selects `2F + 1` commitment signatures of the previous block and puts them in the currently proposed block header. However, this would require each block to have one confirmation to reach finality (not instant finality).

**Proof of lock**

Inspired by Tendermint. We are still considering whether to add it to this EIP. Further efficiency benefits can be realized by reusing the currently proposed block in a round change situation.
### Contribution

The work was initiated and open-sourced by the Amis team. We're looking for developers around the world to contribute. Please feel free to contact us.

Forked repository (and original implementation branch): https://github.com/getamis/go-ethereum/tree/feature/pbft

### Clarifications and feedback

TBD