quilt / pm

Quilt Project Management Repo

Extend Scout with Phase 2 Simulation, Long running #1

Open villanuevawill opened 4 years ago

villanuevawill commented 4 years ago

Build Phase 1 & 2 Into Scout (or Truffle of Eth 2)

Here is a general roadmap/structure for how you may integrate a Phase 2 simulation. This should essentially end up being what we describe as the "Truffle" for Eth 2. The steps below expand what Scout currently is into this long-running simulation.

  1. We need it to be a long-running process. You may start by looping through each shard block in the YAML file with a fixed amount of time in between to mimic "block times". This should introduce the basic tooling to initialize this as a harness or basic client (a rough sketch of this loop is included after this list).
  2. Server or interface for read access to the latest shard blocks and shard state roots. Most likely a harness starts the running process and the server in tandem. The harness keeps a read reference to the running shard chain.
  3. Introduce a pool or queue of shard block entries, and remove the post-state checking used for tests, since it is not needed in this "client" format. The pool or queue would be initialized from the shard blocks in the YAML file, and the shard process would pull off the queue to create each subsequent block.
  4. Introduce an endpoint to submit to the queue or pool of entries. This now allows for continued submission of blocks. Remove YAML initialization for this particular harness and create a default “Genesis State” (can be part of the harness tooling). At this point, data should be kept in memory. No need for a persistent store. The genesis state would already have execution environment code preloaded.
  5. Have a running beacon state or beacon chain with just crosslink data. The harness should initialize it and the shard should have a reference to it. This gets passed in as the Beacon State (already situated in Scout) to the shard runtime.
  6. We should be able to read crosslinks from other shards and should update the crosslink in the beacon state/chain with a delay of 1 block time (no forks and always added). This should be managed via the harness, which should have a read/write reference to the running processes. Each subsequent beacon state passed into the runtimes should have the updated crosslink for its shard. Crosslinks from other shards can just be mocked or generated on the fly (see the crosslink sketch after this list).
  7. Allow for EE deployments. The server/harness should have an EE deploy endpoint. The beacon state should be updated appropriately and the shard should now be able to access the new EE code.
  8. Allow for EEs to print balances to other EEs. See "The shard basic operating system" from the latest proposal. Shard state now has associated balances and follows this model. In the shard transition, the balances should be decremented/added appropriately.
  9. Introduce persistence or seeding. There should be a way to terminate the running process and save the current state of the system. It should be possible to seed a newly running harness with this seeded state.
  10. Gas limits/block size limits. Provide some basic mocking/setup of limits so blocks are limited in size.
  11. Allow for multiple EEs to be called in one block. For now, run these in sequence.
  12. Introduce a mempool of transactions that the "block producer" can read from and generate their own transaction package from.
  13. Introduce multiple shards in parallel (16). Can either have one master server that submits to each of the queues or run separate servers on separate threads. To keep pieces simple, I'd suggest running one master server. The master server should be part of a harness that initiates each of the 16 shards, holds a reference to them, and adds into their queue/pool. All crosslinks should be updated (see the multi-shard sketch after this list).
  14. Expand "The shard basic operating system" to print to EEs on other shards.
  15. Add some degree of "forkability".
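
A rough sketch of steps 1-4, just to make the shape concrete: a long-running harness that pulls shard blocks off an in-memory queue on a fixed "block time" tick, starting from a default genesis state. None of the types or function names below come from Scout; they are placeholders for illustration only.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Placeholder types -- the real harness would reuse Scout's block/state types.
#[derive(Clone, Default)]
struct ShardBlock {
    env_index: u64, // which execution environment the block targets
    data: Vec<u8>,  // opaque block payload
}

#[derive(Clone, Default)]
struct ShardState {
    state_root: [u8; 32],
    blocks: Vec<ShardBlock>,
}

/// Shared handle: a submit endpoint pushes blocks, the harness pops them (steps 3-4).
type BlockQueue = Arc<Mutex<VecDeque<ShardBlock>>>;

fn genesis_state() -> ShardState {
    // The real genesis state would have execution environment code preloaded.
    ShardState::default()
}

fn process_block(state: &mut ShardState, block: ShardBlock) {
    // Placeholder for the actual shard state transition (running the EE).
    state.blocks.push(block);
}

fn run_harness(queue: BlockQueue, block_time: Duration) {
    let mut state = genesis_state();
    loop {
        thread::sleep(block_time); // mimic "block times" (step 1)
        if let Some(block) = queue.lock().unwrap().pop_front() {
            process_block(&mut state, block);
            // A read-only server (step 2) would expose `state.state_root`
            // and the latest blocks from here.
        }
    }
}

fn main() {
    let queue: BlockQueue = Arc::new(Mutex::new(VecDeque::new()));
    // An HTTP submit endpoint (step 4) would clone `queue` and push into it.
    run_harness(queue, Duration::from_secs(6));
}
```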
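
Similarly, a toy version of the crosslink handling in steps 5-6: the harness keeps a beacon state that carries only crosslinks and folds each shard's newest state root into its crosslink one block late (no forks, always added). Again, all names are made up for the sketch.

```rust
// Illustrative only -- not the actual BeaconState passed into the Scout runtime.
#[derive(Clone, Copy, Default)]
struct Crosslink {
    shard: u64,
    data_root: [u8; 32],
}

#[derive(Clone, Default)]
struct BeaconState {
    crosslinks: Vec<Crosslink>,
}

struct Harness {
    beacon: BeaconState,
    // State root produced by the previous shard block; applied one block later.
    pending_root: Option<[u8; 32]>,
}

impl Harness {
    /// Called once per shard block, after the shard transition has run.
    /// Returns the beacon state snapshot passed into the next runtime call.
    fn on_shard_block(&mut self, shard: u64, new_state_root: [u8; 32]) -> BeaconState {
        // Apply the crosslink from the *previous* block (1 block-time delay).
        if let Some(root) = self.pending_root.take() {
            match self.beacon.crosslinks.iter_mut().find(|c| c.shard == shard) {
                Some(link) => link.data_root = root,
                None => self.beacon.crosslinks.push(Crosslink { shard, data_root: root }),
            }
        }
        self.pending_root = Some(new_state_root);
        self.beacon.clone()
    }
}

fn main() {
    let mut harness = Harness { beacon: BeaconState::default(), pending_root: None };
    harness.on_shard_block(0, [1; 32]);
    let beacon = harness.on_shard_block(0, [2; 32]);
    // The crosslink only reflects the first block's root -- one block behind.
    assert_eq!(beacon.crosslinks[0].data_root, [1; 32]);
}
```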
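
And for step 13, one possible shape for the master harness (again hypothetical, not from the codebase): it spawns the 16 shard processes, keeps a submission handle to each shard's queue, and routes submitted blocks to the right shard. Crosslink updates from each shard would flow back through the same harness so every shard sees the others' crosslinks.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

const SHARD_COUNT: usize = 16;

struct MasterHarness {
    // One submission handle per shard; the master server pushes into these.
    queues: Vec<Sender<Vec<u8>>>,
}

impl MasterHarness {
    fn start() -> Self {
        let mut queues = Vec::with_capacity(SHARD_COUNT);
        for shard in 0..SHARD_COUNT {
            let (tx, rx) = channel::<Vec<u8>>();
            queues.push(tx);
            thread::spawn(move || {
                // Each shard runs its own long-running loop, pulling blocks
                // off its queue, as in the single-shard sketch above.
                for block in rx {
                    println!("shard {}: processing {} bytes", shard, block.len());
                }
            });
        }
        MasterHarness { queues }
    }

    /// The master server's submit endpoint would call this.
    fn submit(&self, shard: usize, block: Vec<u8>) {
        self.queues[shard].send(block).expect("shard thread alive");
    }
}

fn main() {
    let master = MasterHarness::start();
    master.submit(3, vec![0u8; 128]);
    // Give the shard thread a moment before the process exits.
    std::thread::sleep(std::time::Duration::from_millis(100));
}
```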
tecywiz121 commented 4 years ago

> You may start by looping through each shard block in the YAML file with a determined amount of time in between to mimic "block times".

Would there be value in having an API to trigger block production, instead of a timer? Would make reproducible runs easier.

villanuevawill commented 4 years ago

Yeah for sure - I think a configuration can handle both cases.
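
Something along these lines, for example (names invented for the sketch, not from Scout or quilt/simulation): the configuration selects whether block production is driven by a timer or by an explicit trigger, e.g. from an endpoint.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::time::Duration;

// Hypothetical config: how the harness decides when to produce the next block.
enum BlockTrigger {
    /// Produce a block every `interval`, as in the original roadmap.
    Timer { interval: Duration },
    /// Produce a block only when a request arrives, e.g. from an HTTP
    /// endpoint -- this is what makes runs reproducible.
    Manual { requests: Receiver<()> },
}

fn run(trigger: BlockTrigger, mut produce_block: impl FnMut()) {
    match trigger {
        BlockTrigger::Timer { interval } => loop {
            std::thread::sleep(interval);
            produce_block();
        },
        BlockTrigger::Manual { requests } => {
            // Blocks are produced one-for-one with trigger requests.
            for _ in requests {
                produce_block();
            }
        }
    }
}

fn main() {
    // Manual mode: three explicit triggers, then the sender drops and `run` returns.
    let (tx, rx): (Sender<()>, Receiver<()>) = channel();
    for _ in 0..3 {
        tx.send(()).unwrap();
    }
    drop(tx);
    run(BlockTrigger::Manual { requests: rx }, || println!("produced a block"));
}
```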

SamWilsn commented 4 years ago

Now that we have a skeleton implementation over at quilt/simulation, I have a couple more follow-up questions: