kayabaNerve opened 1 year ago
If we haven't observed a cosign within some timeout, we can fall back to a non-horizontally-scalable live cosigning protocol.
So we'd first attempt the horizontally scalable protocol (DoSable with 12% of stake); if its cosign is omitted, we fall back to each validator posting its own cosign and waiting for 67% participation (DoSable with 34%).
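A minimal sketch of that fallback logic, in Rust. The timeout value, type names, and stake-threshold helper are all illustrative assumptions, not Serai's actual API:

```rust
use std::time::{Duration, Instant};

// Hypothetical timeout before falling back; the actual value is undecided.
const COSIGN_TIMEOUT: Duration = Duration::from_secs(600);

/// Which cosigning path we're currently on.
enum CosignPath {
  /// The horizontally scalable protocol (DoSable with 12% of stake).
  Optimistic,
  /// Every validator posts its own cosign; we wait for 67% participation.
  Pessimistic,
}

struct CosignTracker {
  last_cosign: Instant,
  path: CosignPath,
}

impl CosignTracker {
  /// Fall back to the pessimistic path if no cosign was observed in time.
  fn update_path(&mut self) {
    if self.last_cosign.elapsed() > COSIGN_TIMEOUT {
      self.path = CosignPath::Pessimistic;
    }
  }

  /// On the pessimistic path, a block is considered cosigned once validators
  /// holding >= 67% of total stake have posted individual cosigns.
  fn pessimistic_threshold_met(cosigned_stake: u64, total_stake: u64) -> bool {
    // Multiply before dividing to avoid integer truncation.
    cosigned_stake * 100 >= total_stake * 67
  }
}
```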
I do like the idea of introducing this pessimistic path. Credit to Jeff Burdges for the suggestion.
Instead of basing cosigning on validator sets, we should investigate whether random subset sampling would enable us to maintain a fault tolerance of 33%. This would increase the computational cost of verifying cosigns, yet would notably reduce the cost of producing them.
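A sketch of deterministic subset sampling, assuming the `rand` and `rand_chacha` crates. The seed would presumably come from shared on-chain randomness so every observer derives the same subset; the names and subset size here are illustrative, not a concrete proposal:

```rust
use rand::{seq::SliceRandom, SeedableRng};
use rand_chacha::ChaCha20Rng;

/// Deterministically sample `subset_size` cosigners from the validator list.
/// The producer and every verifier run this with the same seed, so a
/// verifier's extra cost is re-deriving the subset and checking one signature
/// per sampled validator, rather than one signature per validator set.
fn sample_cosigners(
  validators: &[[u8; 32]],
  subset_size: usize,
  seed: [u8; 32],
) -> Vec<[u8; 32]> {
  let mut rng = ChaCha20Rng::from_seed(seed);
  validators.choose_multiple(&mut rng, subset_size).copied().collect()
}

fn main() {
  // Hypothetical validator keys and a seed derived from on-chain randomness.
  let validators: Vec<[u8; 32]> = (0u8 .. 100).map(|i| [i; 32]).collect();
  let cosigners = sample_cosigners(&validators, 20, [0u8; 32]);
  assert_eq!(cosigners.len(), 20);
}
```

The subset size would have to be chosen so that, with up to 33% of stake Byzantine, the probability of the sampled subset being majority-Byzantine stays negligible.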
We could also restore fault tolerance by removing the validator-set-based cosign, yet then cosign performance would degrade linearly with horizontal network growth :/