Transparent blockchains operate as follows: all participants maintain a copy of the (consensus-determined) application state. Transactions modify the application state directly, and participants check that the state changes are allowed by the application rules before coming to consensus on them.
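To make the transparent model concrete, here is a minimal Python sketch (the balance map and the transfer rule are invented for illustration, not taken from any particular chain): every participant holds the full state and re-checks each transaction against the application rules before accepting it.

```python
# Minimal sketch of the transparent model: the shared, consensus-determined
# application state is visible to everyone, and each participant checks a
# transaction against the rules before applying it.

state = {"alice": 100, "bob": 50}  # shared application state

def apply_transfer(state, sender, receiver, amount):
    """Check the application rule, then apply the transaction in place."""
    if amount <= 0 or state.get(sender, 0) < amount:
        raise ValueError("transaction violates application rules")
    state[sender] -= amount
    state[receiver] = state.get(receiver, 0) + amount

apply_transfer(state, "alice", "bob", 30)
```

Because every participant runs the same check on the same state, all honest participants agree on whether a given update is allowed.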
On a shielded blockchain, however, the state is fragmented across all users of the application, as each user has a view only of their "local" portion of the application state. Transactions update a user's state privately, and use a zero-knowledge proof to prove to all other participants that the update was allowed by the application rules.
There are then two main challenges involved in creating a shielded blockchain:
(1) Design of the application state and data flows to be compatible with per-user views of local portions of the chain state, and isolated updates to those portions that can be made private;
(2) Design of the cryptography that allows proving correctness of private updates.
Of these challenges, the first is more foundational for the system design, because it changes the entire architecture and assumptions about data availability. However, the second requires a large amount of detailed cryptographic design work (e.g., working through all of the details of a state update circuit).
Frontloading the cryptographic design work makes it difficult to develop the application iteratively and get rapid feedback on what aspects work well, but deferring it entirely risks creating a situation where the application state is designed incompatibly with private functionality. How do we balance this tension?
One way to tread a middle ground between these extremes is to separate (1) and (2): design the application state around (1) -- i.e., a fragmented application state with update proofs -- but replace the zero-knowledge proofs with "transparent proofs" that act as trivial proofs-of-knowledge. A transparent proof is just a packet containing the witness data, and its verification algorithm uses the public inputs to check the desired relation against the witness data directly.
While transparent proofs provide no privacy, they have the same interface as a zero-knowledge proof, so they ensure that the system's data flows are compatible with an actually private implementation, and they can be gradually replaced by real ZK proofs as the system design solidifies.
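As a concrete illustration, here is a minimal Python sketch of a transparent proof (the toy balance-update relation and all names are invented for this example): the "proof" is a packet carrying the witness, and the verifier re-checks the relation against it directly, using the same prove/verify interface a real ZK proof system would expose.

```python
from dataclasses import dataclass

# Toy relation: a private balance update is valid if the witness
# (old_balance, amount) satisfies old_balance - amount == new_balance
# with no balance going negative. Only new_balance is public.

@dataclass
class TransparentProof:
    """Stand-in for a ZK proof: the 'proof' is just the witness itself."""
    old_balance: int
    amount: int

def prove(old_balance: int, amount: int) -> TransparentProof:
    # A real ZK prover would output an opaque proof; here we simply
    # package the witness data into a packet.
    return TransparentProof(old_balance, amount)

def verify(public_new_balance: int, proof: TransparentProof) -> bool:
    # Check the relation against the witness directly, exactly as a
    # ZK circuit would constrain it.
    return (
        proof.old_balance >= 0
        and proof.amount >= 0
        and proof.old_balance - proof.amount == public_new_balance
    )

proof = prove(100, 30)
assert verify(70, proof)       # correct public output verifies
assert not verify(80, proof)   # wrong public input fails
```

Because only the prove/verify interface is visible to the rest of the system, swapping `TransparentProof` for a real zero-knowledge proof later does not change the surrounding data flows.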