serai-dex / serai


Delegated PoS #55

Open kayabaNerve opened 2 years ago

kayabaNerve commented 2 years ago

Delegation will potentially increase centralization while also allowing anyone to bond Serai, not just people with large amounts of it. This maintains the ability to repay holders after theft (if a validator set is compromised), yet it also increases the odds of theft, as validator sets are no longer paying for the bond they use to enable their theft.

So: -fund security, +scalability, +potential centralization, +staking participation, while maintaining economic security (as viewed by the statement that we can reimburse users).

The issue with the reimbursement comment is that if we have a single validator set which commits theft of all funds, Serai is worthless. Sure, we have the bonds they paid for... they won't pay for them again with the assets they stole. We can only buy back assets so long as Serai as a whole still has backing, such as under a scheme with multiple validator sets, which will happen post-launch.

Beyond the multiple-validator-set discussion about the feasibility of re-acquiring assets, the question becomes whether the reduction in security is acceptable for the benefits we'd gain. Other considerations would be reducing the bond ratio, which would decrease fund security and no longer guarantee economic security (unless we blame everyone in the group, which may be required since we'd be unable to determine who actually did it). It would ensure the costs come from the actual validators who performed the theft though, while not changing centralization, yet would potentially be less scalable.

#53 has the following document regarding delegation.

# Validators

### Register (message)

- `validator` (signer): Address which will be the validator on Substrate.
- `manager`   (signer): Address which will manage this validator.
- `set`       (VS):     Validator set being joined.

Marks `validator` as a validator candidate for the specified validator set,
enabling delegation.

### Delegate (message)

  - `delegator` (signer):  Address delegating funds to `validator`.
  - `validator` (address): Registered validator being delegated to.
  - `amount`    (Amount):  Amount of funds being delegated to `validator`.

Delegated funds will be removed from `delegator`'s wallet and moved to
`validator`'s bond. If `validator`'s bond is not a multiple of the validator
set's bond, it is queued, and will become actively delegated when another
delegator reduces their bond.

Note: At launch, only `validator`'s manager will be able to delegate to
`validator`, and only in multiples of the validator set's bond.

### Undelegate (message)

  - `delegator` (signer):  Address removing delegated funds from `validator`.
  - `validator` (address): Registered validator no longer being delegated to.
  - `amount`    (Amount):  Amount of funds no longer being delegated to
`validator`.

If a sufficient amount of funds is queued, the `validator`'s operation
continues normally, shifting in queued funds. If the `validator` falls below a
multiple of the validator set's bond, they will lose a key share at the next
churn. Only then will this undelegation be processed, unless another party
delegates, forming a sufficient queue.

Note: At launch, only multiples of the validator set's bond will be valid.

### Resign (message)

  - `validator` (address): Validator being removed from the pool/candidacy.
  - `manager`   (signer):  Manager of `validator`.

If `validator` is active, they will be removed at the next churn. If they are
solely a candidate, they will no longer be eligible for delegations. All bond is
refunded after their removal.
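
For concreteness, here's a minimal sketch of the four messages above as Rust types; `AccountId`, `Amount`, and `ValidatorSet` are placeholders for whatever the runtime actually defines, not the actual Serai types.

```rust
// Illustrative message types; field names follow the document above.
type AccountId = [u8; 32];
type Amount = u64;
struct ValidatorSet; // "VS" in the document

enum ValidatorMessage {
  // Marks `validator` as a candidate for `set`, managed by `manager`.
  Register { validator: AccountId, manager: AccountId, set: ValidatorSet },
  // Moves `amount` from `delegator`'s wallet into `validator`'s bond.
  Delegate { delegator: AccountId, validator: AccountId, amount: Amount },
  // Removes `amount` from `validator`'s bond, queued/processed per the rules above.
  Undelegate { delegator: AccountId, validator: AccountId, amount: Amount },
  // Removes `validator` from the pool/candidacy; all bond refunded after removal.
  Resign { validator: AccountId, manager: AccountId },
}
```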

The commentary on delegation will be reviewed, as it was already left to post-launch, and there are obviously still aspects to discuss here. The API itself remains sane though, as a manager address may have multiple validators, and accordingly the separation of addresses/messages remains sane. It's just the heavy leaning towards DPoS in the future which will be removed while this is discussed.

We can also just leave this until after launch. It's highly dependent on the security we achieve, how limiting bond is to pool liquidity, and if we do set up multiple validator sets.

kayabaNerve commented 2 years ago

cc @TheArchitect108

kayabaNerve commented 1 year ago

For reasons other than Serai, I had to write a DPoS pallet. During that, the following designs were produced.


### Validator Sorting

The trie underlying Substrate offers lexicographic iteration over its keys. We define a `StorageMap`, `TopValidators`, which allocates a key prefix for our storage entries. Then, we define keys of `amount || blake2_128(amount || account) || account`.

We want to iterate from highest balance to lowest balance, so we encode the amount in big-endian byte order (placing the most-significant bytes first), then use a binary NOT to put higher-value amounts first.
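
A hedged sketch of that key encoding, assuming a 16-byte amount, a 32-byte account, and `sp_core::hashing::blake2_128` as the hash. Whether the hash preimage uses the raw or the NOT-ed amount isn't specified above, so using the encoded amount throughout is an assumption here.

```rust
use sp_core::hashing::blake2_128;

// Build the `TopValidators` key suffix for an (amount, account) pair. The
// prefix allocated by the StorageMap would be prepended by the runtime.
fn top_validators_key_suffix(amount: u128, account: &[u8; 32]) -> Vec<u8> {
  // Big-endian places the most-significant bytes first, so lexicographic key
  // order matches numeric order; NOT-ing flips it so higher amounts come first.
  let encoded: [u8; 16] = (!amount).to_be_bytes();

  // Hash preimage: amount || account (using the encoded amount here is an
  // assumption; the design only specifies `amount || account`).
  let mut preimage = Vec::with_capacity(16 + 32);
  preimage.extend_from_slice(&encoded);
  preimage.extend_from_slice(account);

  // amount || blake2_128(amount || account) || account
  let mut key = Vec::with_capacity(16 + 16 + 32);
  key.extend_from_slice(&encoded);
  key.extend_from_slice(&blake2_128(&preimage));
  key.extend_from_slice(account);
  key
}
```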

This does open up attacks where a malicious user can stake amounts sharing nibbles with other entries, increasing the trie depth. This is bounded to the nibble-length of the amount, yet is still a concern (as for a 16-byte amount, 32 unnecessary layers could be created, increasing the size of storage proofs by almost a kilobyte).

To work around this, we only insert members who pass some minimum amount threshold. This filters out (potentially) spam accounts, allowing us to reap the efficiency of our scheme.

The alternative would've been heavily inefficient in-memory sorts or a bucketing algorithm. To compare to bags-list, which is a bucketed series of linked lists, this achieves faster insertions/removals (as it doesn't have to read/write its siblings' pointers to itself), assuming a consistent storage depth. Iteration is also potentially quicker, if the trie key iteration caches the path whereas the bags-list would do fresh key lookups (though a higher-layer cache may also resolve this for bags-list). There's also no overhead from tracking bags since we directly have a list structure.

### Rewards

In order to issue rewards, a mapping from an awarded validator to their delegators must be present. The naive method would be a `Vec`, or more specifically a `BoundedVec` (to satisfy requirements). This fundamentally has a DoS risk: either the number of delegators must be limited, or it will be flooded to exhaust resources and stall the chain. Limiting the number prevents validators from receiving new delegators, however.

The solution is to allocate a storage key for the `ValidatorToDelegators` map, which is updated on every delegation. Each read/write would occur in O(log n). When a new session occurs, the map isn't cloned; instead, the hash of its intermediary node in the Merkle-Patricia trie is taken. This is an O(1) operation.

Then, rewards can accumulate over the session. After the session, they'd be claimable via Merkle proofs in an O(log n) process.

This prevents both possible DoS vectors present with delegators.

This is possible via `sp_io::default_child_storage`.
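
A rough sketch of how that could look with `sp_io::default_child_storage`; the child-trie key and the entry encoding are illustrative, and `root`'s exact signature varies across Substrate versions (newer ones also take a `StateVersion`), so treat this as an assumption-laden outline rather than the pallet's actual code.

```rust
use sp_io::default_child_storage as child;

// Illustrative child-trie key; the actual pallet would derive its own.
const VALIDATOR_TO_DELEGATORS: &[u8] = b"ValidatorToDelegators";

// Record/update a delegation: an O(log n) write into the child trie.
fn record_delegation(validator: &[u8; 32], delegator: &[u8; 32], amount: u128) {
  let mut key = Vec::with_capacity(64);
  key.extend_from_slice(validator);
  key.extend_from_slice(delegator);
  child::set(VALIDATOR_TO_DELEGATORS, &key, &amount.to_le_bytes());
}

// At session end, take the child trie's root instead of cloning the map: O(1).
// Rewards accrued over the finished session are then claimable against this
// root via Merkle proofs.
fn snapshot_session_root() -> Vec<u8> {
  // Older sp_io versions take only the storage key; newer ones also require a
  // StateVersion argument.
  child::root(VALIDATOR_TO_DELEGATORS)
}
```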


Substrate's provided DPoS pallet uses O(n) algorithms with limits which make it fine, yet AFAICT, this is how it'd be done scalably.

Upon further discussion, a versioned algorithm was proposed similar to how OpenZeppelin contracts handle historical token balances.

This would be a map of `(Session, Validator, Delegate) -> Amount`. New entries are made to the upcoming session. If a prior session had an entry, it's cloned into the new session. The validator accrues rewards over the session. At the end of the session, any delegator can claim rewards for that session. The relevant historical delegation entry is pulled up*, and rewards are paid accordingly.
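
A hedged sketch of that shape, using an in-memory `BTreeMap` in place of actual runtime storage; the names are illustrative, and the linear lookup is the naive form of "pulling up" the historical entry (the binary-searchable per-delegator metadata discussed next would replace it).

```rust
use std::collections::BTreeMap;

type Session = u32;
type Account = [u8; 32];
type Amount = u128;

#[derive(Default)]
struct Delegations {
  // (Session, Validator, Delegator) -> Amount
  entries: BTreeMap<(Session, Account, Account), Amount>,
}

impl Delegations {
  // Latest checkpoint at or before `session`, walking sessions backwards.
  // This linear walk is the naive form of the historical lookup.
  fn checkpoint(&self, session: Session, validator: Account, delegator: Account) -> Amount {
    (0 ..= session)
      .rev()
      .find_map(|s| self.entries.get(&(s, validator, delegator)).copied())
      .unwrap_or(0)
  }

  // New entries are made to the upcoming session, cloning forward any prior
  // entry (the OpenZeppelin-checkpoint-style behavior described above).
  fn delegate(&mut self, upcoming: Session, validator: Account, delegator: Account, amount: Amount) {
    let prior =
      if upcoming == 0 { 0 } else { self.checkpoint(upcoming - 1, validator, delegator) };
    self.entries.insert((upcoming, validator, delegator), prior + amount);
  }

  // After a session ends, a delegator's rewards for it are paid according to
  // this historical amount.
  fn delegation_for(&self, session: Session, validator: Account, delegator: Account) -> Amount {
    self.checkpoint(session, validator, delegator)
  }
}
```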

From my initial review and understanding of this, it's impossible (or at least difficult) to prune historical entries, as you can't know how many sessions a historical entry was live for. There may be ways for each delegator to have a vector of relevant metadata, which could be binary searched in logarithmic time though. Reading the vector would be O(n) in the number of sessions in which updates occurred, AND it would set a maximum number of updates (due to the requirements of using a `BoundedVec`).

This mechanism would avoid needing a Merkle proof though, and by doing so, reduce the size of claim transactions (at the cost of state size).