barryWhiteHat / roll_up

scale ethereum with snarks

Consensus Mechanism #7

Closed barryWhiteHat closed 5 years ago

barryWhiteHat commented 5 years ago

We need to build a consensus mechanism for More viable Roll_up so we can move the data availability guarantees off chain.

barryWhiteHat commented 5 years ago

Perhaps this should live here

To re-cap some discussion earlier.

Data availability can be guaranteed by splitting it into the 'tx pool' and the 'prover', this also provides an anti-censorship mechanism.

There is a consensus of nodes which represent the 'tx pool', they guarantee data availability.

If I want a TX included in a block, I must first prove to the 'tx pool' nodes that it's a valid transaction for a given merkle tree root (e.g. the current block).

The tx pool signs this information, confirming they have received proof that it's a valid transaction and hold all the data necessary for the prover to include it in a block.

I then take this information to the prover, I say 'I have proof, from the tx pool, that I have a valid transaction' without revealing any details about that transaction to the prover.

The prover then gives me a token/signature which guarantees they will include this transaction in the block, however they don't know any information about that transaction (yet).

For the next block, the prover then provides all the guarantees for transactions to the tx pool nodes - which reply with the details of the transactions.

This provides a mechanism that splits the responsibilities of data availability and proving: the prover must commit to what it will include without knowing what it is, and is guaranteed that all the information it needs will be available. Censoring a transaction would therefore require collusion between the tx pool nodes and the prover.

This is a half-baked idea, but is worth investigating.

Copied from https://github.com/barryWhiteHat/roll_up/issues/6
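The flow described above could be sketched as follows. This is a minimal illustration with hypothetical names; real signatures are replaced by HMACs for brevity, and the validity proof and merkle checks are elided.

```python
# Sketch of the tx-pool / prover split: the tx pool attests to a blinded
# commitment of a valid tx, and the prover commits to include it without
# seeing the tx itself. All names and keys here are illustrative.
import hashlib
import hmac

TXPOOL_KEY = b"txpool-secret"   # stand-in for the tx-pool nodes' signing key
PROVER_KEY = b"prover-secret"   # stand-in for the prover's signing key

def txpool_attest(tx: bytes, merkle_root: bytes) -> bytes:
    """Tx pool checks the tx against the current root (elided), stores its
    data, and signs a blinded commitment to it."""
    commitment = hashlib.sha256(tx + merkle_root).digest()
    return hmac.new(TXPOOL_KEY, commitment, hashlib.sha256).digest()

def prover_commit(commitment: bytes, attestation: bytes) -> bytes:
    """Prover verifies the tx-pool attestation over the commitment, without
    learning the tx, and returns a token promising inclusion."""
    expected = hmac.new(TXPOOL_KEY, commitment, hashlib.sha256).digest()
    assert hmac.compare_digest(expected, attestation), "bad attestation"
    return hmac.new(PROVER_KEY, commitment, hashlib.sha256).digest()

# User flow: get an attestation from the tx pool, then a blind inclusion
# commitment from the prover; the tx is only revealed at the next block.
tx, root = b"transfer A->B 5", b"current-root"
commitment = hashlib.sha256(tx + root).digest()
att = txpool_attest(tx, root)
token = prover_commit(commitment, att)
```

The key property is that `prover_commit` sees only the commitment, so the prover is bound to include the transaction before it can inspect it.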

PhABC commented 5 years ago

I believe the good thing about snarks and similar ZK constructions is that you don't need to care about double spending and some other 51%-attack outcomes, since proper state transitions are enforced by the proof verification on-chain. Hence, I believe all we need is a means to choose which validator is allowed to generate a proof for the next root update, and we don't need to care too much about slashing.

Ideally, validators should know it will be their turn for commitment t at least by t-1, otherwise all validators are forced to generate all the proofs. There are a lot of discussions on how to pick the next validator around the Casper/sharding research, and I think the current consensus is as follows (spitting this out from memory, so it might be wrong):

  1. Anyone can become a validator when they want, but there are certain "epochs" and "dynasties", which act as checkpoints when a new validator can join.

  2. Validators form a RANDAO to select the next validator.

RANDAO has some flaws (participants can choose not to reveal), but I recall Vitalik saying that they had a way to make it so that controlling the randomness didn't give you much. In our case, for now, we could simply add a penalty for not revealing that is equal to the current commitment reward. Hence, if a validator does not reveal so that they can be the next one to commit a root, they will lose money (the cost of generating the proof). This approach becomes risky for some dapps where the reward from fees is smaller than the reward of censoring some txs (e.g. FOMOED), but this is somewhat of a niche case imo.
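The penalty argument above can be made concrete with a small payoff sketch. The reward amount and the `payoff` function are illustrative, not from the repo:

```python
# Illustrative payoff for one RANDAO round: withholding the reveal costs a
# penalty equal to the commitment reward, so a validator who skips the
# reveal just to win the next commitment at best breaks even.
COMMITMENT_REWARD = 0.1  # ETH, illustrative value

def payoff(revealed: bool, wins_next_commit: bool) -> float:
    """Net payoff of a validator for one round under the proposed rule."""
    penalty = 0.0 if revealed else COMMITMENT_REWARD
    reward = COMMITMENT_REWARD if wins_next_commit else 0.0
    return reward - penalty

# Withholding while winning the commitment nets zero; withholding and
# still losing the commitment is a strict loss.
best_case = payoff(revealed=False, wins_next_commit=True)
worst_case = payoff(revealed=False, wins_next_commit=False)
```

This ignores the cost of generating the proof itself, which makes withholding strictly unprofitable in practice.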

Note that the probability of being selected should be proportional to amount staked.
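Stake-proportional selection from a shared seed (e.g. a RANDAO output) could be sketched like this; the function name and seed are hypothetical:

```python
# Deterministic, stake-weighted validator selection: hash the shared seed
# to a point in [0, total_stake) and walk the (sorted) stake table.
import hashlib

def select_validator(stakes: dict, seed: bytes) -> str:
    """Pick a validator with probability proportional to stake, given a
    uniformly random seed such as a RANDAO round output."""
    total = sum(stakes.values())
    point = int.from_bytes(hashlib.sha256(seed).digest(), "big") % total
    for validator, stake in sorted(stakes.items()):
        if point < stake:
            return validator
        point -= stake
    raise AssertionError("unreachable")

stakes = {"alice": 50, "bob": 30, "carol": 20}
winner = select_validator(stakes, b"randao-round-7")
```

Because every node computes the same function of the same seed, all honest nodes agree on who commits the next root without further communication.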

barryWhiteHat commented 5 years ago

> Hence, I believe all we need is a means to choose which validator is allowed to generate a proof for the next root update, and we don't need to care too much about slashing.

It depends on the properties you want the system to have. For an MVP I totally agree; I think the best way forward is proof of authority with a trusted prover.

So we can probably wait for the Casper work to mature before we decide on this.

PhABC commented 5 years ago

Most of Casper's work is on slashing conditions for things like double spends, double commits, etc., which we don't have in our case. I think for an MVP a central operator makes more sense indeed, but having a set of validators isn't complicated either, since there are a lot of things L1 PoS needs to care about that we don't.

barryWhiteHat commented 5 years ago

Here is a proof-of-burn consensus which allows us to run data availability guarantees inside the snark.

We have our basic roll_up, but instead of a single prover we have many, and each tx defines the prover it wants to be included by. Each tx burns a small portion of ETH. Then we have a "fork choice rule" which calculates the total ETH burned on a chain and picks the chain with the most ETH burned, so all users use the most-burned chain. Now suppose the data on that chain becomes unavailable. Someone forks that chain from the previous block and tells the users about it. The users agree the old chain is unavailable and move to the new one, where they start to burn. So the new chain starts to become heavier than the old chain, and the attacker has to start burning their own ETH if they want to stay the "most burned" chain. So now the attack costs money...

But you ask: why do we want to be on the most-burned chain? Because the contract only allows withdrawals when there is a single heaviest chain. If there are two, no one can withdraw. That is the basic idea.

They vote with their transaction fees, which are mandatory.

How is this better than Casper? In Casper a 51% attack can lock up all funds, and the price of the attack is fixed; in proof of burn, the price of the attack increases with the duration of the attack. It also works quite nicely with snarks, because the attacker cannot steal anyone's coins, only lock them up, and they have to pay to do so.
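The fork-choice and withdrawal rules above could be sketched as follows. This is a toy model under my own assumptions: chains are lists of blocks, each block records the ETH burned by its txs, and names like `fork_choice` are illustrative.

```python
# "Most burned" fork choice: pick the chain with the greatest cumulative
# ETH burn; withdrawals are only allowed when one chain is strictly
# heaviest, so a fork freezes funds until one side out-burns the other.

def total_burn(chain) -> float:
    """Total ETH burned across all blocks on a chain."""
    return sum(block["burned"] for block in chain)

def fork_choice(chains):
    """Return the chain with the most ETH burned on it."""
    return max(chains, key=total_burn)

def can_withdraw(chains) -> bool:
    """Withdrawals allowed only when there is a single heaviest chain."""
    burns = sorted((total_burn(c) for c in chains), reverse=True)
    return len(burns) == 1 or burns[0] > burns[1]

# Honest users keep burning fees on their fork; the attacker's fork falls
# behind unless the attacker keeps burning ETH to match them.
honest = [{"burned": 1.0}, {"burned": 1.2}]
attacker = [{"burned": 1.0}, {"burned": 0.4}]
best = fork_choice([honest, attacker])
```

Note how the attack cost grows with its duration: to remain the heaviest chain, the attacker must keep matching the honest users' mandatory burns block after block.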