Closed smtmfft closed 8 months ago
Added zlib support; it seems quite lightweight compared to the KZG commit: 2 segments vs 200 segments (2^20 cycles per segment).
One thing we're going to need to verify is the worst-case scenario, similar to RLP data. If someone can craft data that decodes (either correctly or incorrectly) to something very large, it may easily be impossible to prove this data stream because of massive memory or processing requirements. Or it may still be provable in practice, but at an extremely high cost, so it could be used to attack the protocol (cost to generate proof > proof bond).
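One way to defuse a "decompression bomb" like this is to cap the decoded size before inflating the whole stream. A minimal sketch using Python's `zlib` (the limit constant and function name here are hypothetical, for illustration only):

```python
import zlib

# Hypothetical cap on the decoded size a prover is willing to handle.
MAX_DECODED_BYTES = 1 << 20  # 1 MiB, an illustrative limit

def bounded_decompress(data: bytes, limit: int = MAX_DECODED_BYTES) -> bytes:
    """Inflate `data`, aborting if the output would exceed `limit` bytes."""
    d = zlib.decompressobj()
    out = d.decompress(data, limit)
    # A non-empty unconsumed_tail means decompression stopped at the cap
    # with compressed input still pending, i.e. the decoded stream is too big.
    if d.unconsumed_tail:
        raise ValueError("decoded stream exceeds limit")
    return out

# A tiny "bomb": 10 MiB of zeros compresses to roughly 10 KiB,
# but bounded_decompress refuses to expand it past the cap.
bomb = zlib.compress(b"\x00" * (10 * 1024 * 1024))
```

The same idea applies in the guest VM: inflate incrementally with a hard output cap, so a small blob can never force an unbounded number of cycles or memory pages.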
I think we can add some cost functions in the host to pre-evaluate the VM execution, but a conflicting issue here is that tier selection is random on-chain, so it may force the prover to build a ZK proof anyway. Or do we make those limits part of consensus (i.e., implement the same cost functions in the client)?
Yeah, those limits will have to be implemented in the client/consensus part as well. They need to be consistent and produce the same result in all cases, because we'll have to have ZK proofs for all blocks pretty soon, if not at mainnet.
And I guess this could even be an issue in the SGX prover.
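If the limits live in consensus, the key property is that the host, the client, and any prover (ZK or SGX) evaluate the exact same deterministic check. A sketch of what such a shared cost function might look like (all constants and names here are hypothetical, not part of any existing Taiko spec):

```python
# Hypothetical consensus-level limits. Every component (client, prover host,
# SGX enclave) must evaluate these identically, so a blob rejected by one
# is rejected by all, and the random on-chain tier selection can't force
# a prover into an unprovable workload.
MAX_COMPRESSED_BYTES = 128 * 1024   # illustrative blob payload cap
MAX_EXPANSION_RATIO = 32            # illustrative decoded/compressed cap

def blob_within_limits(compressed_len: int, decoded_len: int) -> bool:
    """Deterministic pre-check shared by client and prover."""
    if compressed_len == 0 or compressed_len > MAX_COMPRESSED_BYTES:
        return False
    # Bound the worst-case decoded size relative to the input size.
    return decoded_len <= compressed_len * MAX_EXPANSION_RATIO
```

A blob failing this check would simply be treated as invalid calldata by consensus, so no proof of the oversized decoding is ever required.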
How OP encode blob: https://www.notion.so/taikoxyz/How-OP-encodes-blob-8096cad7765d48cdab9667722330aef0