Constellation-Labs / constellation

:milky_way::satellite: Decentralized Application Integration Platform
Apache License 2.0

Optimize edge forking #1555

Open buckysballs opened 3 years ago

buckysballs commented 3 years ago

We need to enforce deterministic dimensions of the data dependency graph (width, depth, height) in order to prevent checkpoint forking, which makes redownloading the most expensive operation we currently perform. What appears to be happening is that as the cluster grows, tip usage forks more often, so all the snapshot hash proposals end up unique. By making the facilitators and tips deterministic we will get much more uniform proposals and therefore less data to redownload.

We can solve this fairly simply by determining the edge, the L1 owner, and the facilitators for any given height. Tips only matter for the owner of a consensus round; tip reuse only affects traversing the graph after it is built, as long as topological ordering is preserved. If there is a discrete state shift of facilitators for each snapshot window, then we can use a locality-sensitive hashing (LSH) function to associate a partition of tips with a single owner address. We may also want to incorporate barycentric subdivision into edge selection for performance, to help minimize height divergence.
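A minimal sketch of the deterministic-selection idea: every node hashes (height, node id) and all nodes independently arrive at the same owner and facilitators for a given height, with no coordination. This uses plain rendezvous hashing as a stand-in for the LSH-based tip partitioning described above; the function names and node-id format are illustrative, not the project's API.

```python
import hashlib

def _score(height: int, node_id: str) -> int:
    """Deterministic per-(height, node) score via sha256."""
    digest = hashlib.sha256(f"{height}:{node_id}".encode()).hexdigest()
    return int(digest, 16)

def owner_for_height(height: int, node_ids: list[str]) -> str:
    """The highest-scoring node owns the consensus round at this height.
    Every node computes the same answer locally."""
    return max(node_ids, key=lambda n: _score(height, n))

def facilitators_for_height(height: int, node_ids: list[str], k: int = 2) -> list[str]:
    """The next k highest-scoring nodes act as facilitators
    (matching the 'up to two others' constraint below)."""
    ranked = sorted(node_ids, key=lambda n: _score(height, n), reverse=True)
    return ranked[1:1 + k]
```

Because selection depends only on the height and the node set, proposals at a given height should reference the same tips regardless of which node evaluates them.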

1) Each node owns a consensus round and may participate in up to two others as a facilitator. Nodes calculate their facilitators via LSH. 2) If a node is a facilitator, it shares its tip from the previous round. 3) The owner kicks off the round and makes the block for this specific checkpoint depth. 4) Change the snapshot trigger to execute on an interval of dag depth. In the L0 we can take the greatest common subset of checkpoints within that specific depth which is in topological order.
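Step 4 can be sketched as follows: snapshots fire at fixed depth intervals, and the L0 keeps only the checkpoints common to every proposal at that depth, preserving topological order. The interval value and checkpoint representation here are assumptions for illustration.

```python
SNAPSHOT_DEPTH_INTERVAL = 10  # assumed interval; would be a tuneable parameter

def should_snapshot(depth: int) -> bool:
    """Trigger a snapshot at every multiple of the dag-depth interval."""
    return depth > 0 and depth % SNAPSHOT_DEPTH_INTERVAL == 0

def common_checkpoints(proposals: list[list[str]]) -> list[str]:
    """Greatest common subset of checkpoint hashes across proposals,
    keeping the topological order given by the first proposal."""
    common = set(proposals[0]).intersection(*map(set, proposals[1:]))
    return [c for c in proposals[0] if c in common]
```

With deterministic owners and facilitators, the per-node proposal sets should mostly coincide, so the common subset stays large and redownloads shrink.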