Currently, the prover runs on a single node. For the first STARK, however, we can prove on multiple nodes and generate the first STARK proof in parallel.
The API Gateway implements aggregator.proto and forwards tasks to the Stage server. The Stage server splits the ELF into segments, stores the segments in shared storage, generates the task metadata, and waits for the Prover servers to fetch their tasks.
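The Stage-server step can be sketched as follows. This is a minimal illustration, not the actual implementation: the names `split_elf`, `SegmentTask`, and the segment size are hypothetical, and real segment boundaries would follow the zkVM's execution trace rather than raw byte offsets. The shape is the same, though: split the ELF, store each segment, and persist task metadata for Prover servers to poll.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from pathlib import Path

SEGMENT_SIZE = 1 << 16  # hypothetical fixed segment size in bytes


@dataclass
class SegmentTask:
    """Metadata a Prover server fetches to locate its segment."""
    proof_id: str
    segment_index: int
    segment_path: str
    checksum: str


def split_elf(elf_bytes: bytes, proof_id: str, storage_dir: Path) -> list[SegmentTask]:
    """Split the ELF into fixed-size segments, store each segment,
    and return the task metadata for the Prover servers."""
    storage_dir.mkdir(parents=True, exist_ok=True)
    tasks = []
    for i in range(0, len(elf_bytes), SEGMENT_SIZE):
        chunk = elf_bytes[i:i + SEGMENT_SIZE]
        index = i // SEGMENT_SIZE
        path = storage_dir / f"{proof_id}-seg-{index}.bin"
        path.write_bytes(chunk)
        tasks.append(SegmentTask(
            proof_id=proof_id,
            segment_index=index,
            segment_path=str(path),
            checksum=hashlib.sha256(chunk).hexdigest(),
        ))
    # Persist the metadata so Prover servers can poll for pending tasks.
    metadata_path = storage_dir / f"{proof_id}-tasks.json"
    metadata_path.write_text(json.dumps([asdict(t) for t in tasks]))
    return tasks
```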
The Batch Prover generates the first STARK proof and consumes most of the computing power.
After a fixed-size set of batch proofs is generated, the Stage server creates an Agg Prover task, and the Agg Prover aggregates all the batch proofs into a final STARK proof.
Finally, the Stage server creates a Final Prover task, and the Final Prover produces the final SNARK proof.
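The end-to-end flow above can be sketched with stand-in proof functions. The real Batch, Agg, and Final Provers do heavy cryptographic work, but the pipeline shape is the same: batch STARK proofs are produced in parallel across workers, aggregated into one STARK proof, then wrapped into a SNARK. All function names here are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor


def batch_prove(segment: str) -> str:
    # Stand-in for the Batch Prover: one STARK proof per segment.
    return f"stark({segment})"


def agg_prove(proofs: list[str]) -> str:
    # Stand-in for the Agg Prover: folds batch proofs into one STARK proof.
    return "agg[" + "+".join(proofs) + "]"


def final_prove(agg_proof: str) -> str:
    # Stand-in for the Final Prover: wraps the aggregated STARK into a SNARK.
    return f"snark({agg_proof})"


def prove(segments: list[str], workers: int = 5) -> str:
    # Batch proofs are independent, so they run in parallel
    # (in production, across the Batch Prover machines).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batch_proofs = list(pool.map(batch_prove, segments))
    return final_prove(agg_prove(batch_proofs))
```

Because `pool.map` preserves input order, the aggregation step sees the batch proofs in segment order regardless of which worker finished first.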
Given 5 machines, the number of Batch Provers should be 5, with one Batch Prover deployed on each machine.