kianenigma opened this issue 3 months ago
Hey @kianenigma, sorry for the late response!
You are absolutely correct in your analysis. We did not go into a lot of detail in the litepaper about the verification layer. There are a few approaches here.
Dividing the verification nodes into subcommittees: In this case, only a few validators (a.k.a. pessimistic executors) run the compute, and the others have the option to challenge their result. The subcommittees could be domain-specific (gaming, DeFi, etc.), each with different system requirements, and MRUs choose which subcommittee they want to work with. This might be quite close to Polkadot's validator set for parachains: the easiest to design and implement, and battle-tested, but not very scalable in our context.
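To make that concrete, here is a minimal sketch of a subcommittee with a challenge window. Everything in it (the types, `waitForChallenge`, the acceptance rule) is hypothetical and only illustrates the shape of the protocol, not an actual implementation:

```ts
// A minimal sketch of the subcommittee idea, assuming a simple
// challenge-window protocol. All types and helpers are illustrative.

type Domain = "gaming" | "defi" | "social";

interface Block {
  mruId: string;
  claimedRoot: string; // post-state root claimed by the MRU
  txs: Uint8Array;
}

interface Executor {
  id: string;
  execute(block: Block): string; // re-runs the STF, returns a post-state root
}

interface Subcommittee {
  domain: Domain;             // e.g. gaming vs. DeFi, different hardware specs
  executors: Executor[];      // the few pessimistic executors that re-execute
  challengeWindowMs: number;  // time the rest of the set has to dispute
}

// Placeholder: in practice this would listen for fraud proofs on the network.
async function waitForChallenge(block: Block, windowMs: number): Promise<boolean> {
  await new Promise((resolve) => setTimeout(resolve, windowMs));
  return false;
}

// A block is final once the assigned executors agree on the claimed
// post-state root and nobody challenges the result within the window.
async function verify(block: Block, sc: Subcommittee): Promise<boolean> {
  const roots = sc.executors.map((e) => e.execute(block));
  if (!roots.every((r) => r === block.claimedRoot)) return false;
  const challenged = await waitForChallenge(block, sc.challengeWindowMs);
  return !challenged;
}
```

The security of this mode rests on the challenge window: only the few pessimistic executors pay the re-execution cost, and correctness is enforced by everyone else's ability to dispute in time.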
Verification layer as a prover network: In this case, MRUs send blocks to the network and, based on availability, one (or more) nodes generate a proof for the MRU by re-executing it in a zkWASM VM (we compile our STFs into WASM to send to Vulcan anyway). The proof is then shared across the network, and the other nodes verify it to come to consensus. This way, only one node is required to re-execute the computation. An MRU can still optimistically receive a fast pre-confirmation if the node re-executes outside the VM (fast) first and generates the proof (slow) later.
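A rough sketch of that flow, assuming a zkWASM prover sits behind the stubbed calls below (`runNative`, `zkwasmProve`, and `zkwasmVerify` are stand-ins, not real APIs):

```ts
// Illustrative prover-network flow: one node re-executes and proves,
// every other node only verifies the proof. All functions are stubs.

interface Proof {
  bytes: Uint8Array;
  publicOutput: string; // the post-state root the proof attests to
}

// Fast path: re-execute the compiled WASM STF natively (no proving).
function runNative(wasmStf: Uint8Array, input: Uint8Array): string {
  throw new Error("stub: run the WASM STF, return the post-state root");
}

// Slow path: re-execute inside the zkWASM VM and emit a proof.
function zkwasmProve(wasmStf: Uint8Array, input: Uint8Array): Proof {
  throw new Error("stub: prove the same execution in a zkWASM VM");
}

// Cheap for everyone else: verify the proof instead of re-executing.
function zkwasmVerify(proof: Proof): boolean {
  throw new Error("stub: verify the zkWASM proof");
}

// The one assigned prover: fast pre-confirmation first, proof later.
function prove(wasmStf: Uint8Array, input: Uint8Array, claimedRoot: string) {
  const preConf = runNative(wasmStf, input) === claimedRoot; // fast pre-conf
  const proof = zkwasmProve(wasmStf, input);                 // generated later
  return { preConf, proof };
}

// Everyone else reaches consensus by checking the proof, not re-executing.
function accept(proof: Proof, claimedRoot: string): boolean {
  return zkwasmVerify(proof) && proof.publicOutput === claimedRoot;
}
```

The point is the asymmetry: one node pays the re-execution and proving cost, while every other node only pays proof verification, which stays cheap regardless of how heavy the MRU's compute is.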
MRU-side ZK proofs: Another, more promising track would be to have each MRU generate a proof itself and send it to the network. In this case, the network becomes a pure utility layer that abstracts L1 and DA away from the MRUs. This works; however, the feasibility of this mode is currently not great.
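For completeness, a sketch of how small the network's job becomes in this mode. `verifyProof`, `postToDA`, and `settleOnL1` are hypothetical hooks, not existing interfaces:

```ts
// If MRUs prove their own blocks, the network collapses to verify-and-relay.

interface ProvenBlock {
  mruId: string;
  stateRoot: string;
  proof: Uint8Array; // generated MRU-side, e.g. by a zkVM
}

// Placeholder: a real deployment would run a SNARK/STARK verifier here.
function verifyProof(proof: Uint8Array, stateRoot: string): boolean {
  throw new Error("stub: verify the MRU-supplied proof against the root");
}

// The network's whole job: check the proof, publish data, settle on L1.
async function relay(
  block: ProvenBlock,
  postToDA: (b: ProvenBlock) => Promise<void>,
  settleOnL1: (root: string) => Promise<void>
): Promise<boolean> {
  if (!verifyProof(block.proof, block.stateRoot)) return false;
  await postToDA(block);             // data availability
  await settleOnL1(block.stateRoot); // L1 settlement abstraction
  return true;
}
```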
We are trying a few of these (and some other) techniques to get past this problem. Would love to hear any other ideas you may have around it.
Hello there.
I have skimmed your docs and Litepaper around the verification network and have a fundamental question about the system.
If the premise of the verification layer is to (pessimistically, as you call it) re-execute everything the micro-rollups (MRs) do, then the scalability of the entire system is bounded by that of the verification layer.
If a trillion micro-rollups do a million transactions each, that is 10^12 × 10^6 = 10^18 transactions, an insurmountable amount of verification for the verification layer to do.
Moreover, all of this assumes you can get by with a verification layer that is merely one node. I suppose the goal is to have a network of nodes re-verify MRs, or else this system is not really secure, right?
As you scale the number of verification nodes to make verification more secure, they naturally become slower. If you want 100 nodes to be verifiers, that is 100x more secure, but a few orders of magnitude slower, because they have to gossip information among one another, and for generally all the same reasons every existing secure blockchain is slow.
And to scale a "network of nodes" that are all supposed to re-execute MRs, aren't we back at square one?
As in, this system is supposed to scale a slow distributed network (ETH), but as part of the solution it has created another distributed system that is itself not scalable?