Summary
Add an application that accepts and processes Turbine traffic into block data.
At a high level, this application should act as a "spy" on an existing stream of Turbine traffic.
Implement all steps of a "follower" validator short of any stateful processing in the runtime.
Most results of this work can then be repurposed into production validator code.
Unused bits can serve as integration tests.
Steps
Network interface for hooking into a stream of block data
Getting a stream of Turbine block data requires basic plumbing with the P2P network.
[ ] Use an external packet mirroring setup to splice Turbine traffic arriving at a real Solana validator to another Firedancer test box
Pro: Relatively easy to maintain
Con: Complicated initial setup
Con: Does not work with local test validators
Note: The Jito team has offered us free access to their product for this, but it should be easy to set up on local infra too
Shred Parser
[x] Parser supporting legacy and merkle-variant shreds
Initial PoH sync
The spy is not aware of the network-wide clock when it first starts.
[ ] Algorithm to train clock on incoming shred data when first starting up
[ ] Note: Once gossip is implemented, get initial clock from gossip instead
Leader Schedule
Accepting Turbine traffic requires knowledge of the leader schedule.
[ ] Define text-oriented file format for reading a leader schedule, implement parser (requires Base58)
[ ] Plumbing to periodically export this leader schedule file from a trusted node
[ ] Plumbing to periodically load this leader schedule file
[ ] Note: Once snapshots & database are implemented, use trust-on-first-use approach to acquire initial leader schedule -- then trustlessly derive subsequent schedules
Shred sigverify stage
[x] Implement high-performance/parallel sigverify stage for shreds (Unclear whether the existing verify stage should be extended or whether a second one should be implemented)
[x] Add SigVerify cache for Merkle shreds
Deshred & Block parser
Shred data will have to be defragmented and converted to entries.
[ ] Add shred reassemble buffer
Shreds arrive out-of-order, so need scratch space to reassemble them.
Might require thread safety depending on plumbing, as shreds might come in from multiple tiles
[ ] Add shred-to-batch defragmentation
[x] Add transaction parser
[ ] Add entry (block) parser (#27)
[ ] Add entry buffer
Processing each entry can take up to ~100ms, so shared memory needs to be allocated for block data and descriptors
PoH hash chain
Basic verifier to check whether blocks adhere to PoH constraints
Data dumping
Add command-line interface to dump data into an .ar file
Fork tracking
The network partitions occasionally; Firedancer needs to be able to keep a record of available forks
Next Goals