The chain triggers a subnet's emission calculation and step depending on its tempo. This means every few blocks we run a calculation to determine the correct emission for every subnet based on the current StakeMap. The calculation of emissions has been taking a long time, slowing block times.
This calculation appears to be slowed by the large number of entries in the StakeMap. To remedy this, two suggestions were made.
The accepted solution (expected live on chain Apr 29th) is (1): greatly reduce the number of delegator entries in the StakeMap, at the cost of limiting participation to TAO holders with more than a dynamic threshold of k TAO.
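As a hypothetical sketch (the real runtime logic lives in the chain's pallet code and is not shown here), the per-tempo emission pass can be thought of as one arithmetic step per StakeMap entry, so its cost grows with the number of delegators rather than the number of delegates:

```python
from typing import Dict

def distribute_emission(stake_map: Dict[str, float], emission: float) -> Dict[str, float]:
    """Hypothetical sketch, not the actual pallet code: split an emission
    pro-rata across every delegator entry in the StakeMap."""
    total_stake = sum(stake_map.values())
    # One step per entry: O(len(stake_map)) work on every subnet tempo
    return {
        account: emission * stake / total_stake
        for account, stake in stake_map.items()
    }
```

Shrinking `stake_map` (by threshold or by pooling) directly shrinks this loop.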
The issue
As mentioned, the issue with this change is that it restricts participation to only those holders above a certain TAO threshold.
What is a good solution (Acceptance Criteria)
A good solution would still reduce the runtime of the calculation during each subnet tempo step, while retaining participation for as many stakeholders as possible.
Existing Solutions
Polkadot implements an approach called Nomination Pools[^1] which addresses a similar issue by pooling stakers into one account, reducing the size of their stake entries while maximizing participation of individual (read: small) holders.
Regarding their implementation, some elements of their design are unclear, i.e., I am not sure why they chose to:
- separate rewards from stake
- rebalance shares (aka "points") on each join/leave
Though I think this may be because of Polkadot's stake bonding time requirements (Bittensor lacks this lock period).
I imagine they decided to rebalance on every change to the pool in order to maintain the 1:1 (points to funds) ratio because fractional ownerships are imprecise and can cause future issues. I think we could do the same with not much more hit to performance.
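A rebalance of that kind can be sketched as re-issuing shares so that one share again equals one unit of stake. The helper below is illustrative (my own naming, not Polkadot's actual API):

```python
from typing import Dict

def rebalance_to_one_to_one(share_map: Dict[str, float], share_sum: float,
                            pool_stake: float) -> Dict[str, float]:
    """Re-issue shares 1:1 with funds: each nominator keeps the same
    fraction of the pool, but every entry must be rewritten (O(n))."""
    return {
        who: (shares / share_sum) * pool_stake
        for who, shares in share_map.items()
    }
```

After the rebalance the new share total equals the pool stake, so fractional drift is reset, at the cost of touching every entry in the map.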
Proposed Solution
We could represent delegation as shares in a stake pool, similar to the Polkadot[^1] solution.
Then on subnet tempo steps, there are fewer accounts to consider in the StakeMap.
This would change the way delegation works, i.e., delegates would operate as stake pools, where the corresponding entry in the StakeMap is the pool's total stake.
- On emission to a pool, increase the pool stake and increase owner shares (or rebalance)
- On `add_stake`, update the shares of the staker (or all the shares)
- On `remove_stake`, update the shares of the unstaker (or all the shares)
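The share bookkeeping behind these operations reduces to proportional issuance; a minimal sketch (the helper name is my own, for illustration):

```python
def shares_to_issue(amount: float, pool_stake: float, share_sum: float) -> float:
    # A deposit buys the same fraction of new shares as the fraction
    # by which it grows the pool; the first deposit bootstraps 1 share.
    if pool_stake == 0:
        return 1.0
    return (amount / pool_stake) * share_sum
```

Unstaking is the mirror image: compute `(amount / pool_stake) * share_sum` and burn that many shares instead of issuing them.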
Considerations
It is useful to consider when the shares are adjusted versus the pool's stake (i.e. the ratio of shares to stake). There are currently two ideas:
1. Adjust the ratio on every change to the total stake
   a. i.e. maintain the 1:1 ratio of shares to TAO
2. Do not adjust the ratio at all
   a. increase the pool balance on emission
   b. i.e. commission to the pool owner increases their shares
   c. (de)issue new shares on (un)stake
The example python code implements the latter (2).
There is a trade-off here: (2) is less precise due to fractional ownership of the pool's stake, but requires fewer operations (e.g. rebalances). (1) is exact, but requires rebalancing the shares on every update to the pool, which may not improve runtime at all (yet to be tested).
An example in Python:
```python
from typing import Dict


class NominationPool:
    def __init__(self, owner: str, commission: float = 0.18):
        self.owner: str = owner
        self.commission: float = commission
        # Per-instance state (class-level defaults would be shared)
        self.shareMap: Dict[str, float] = {owner: 0.0}
        self.shareSum: float = 0.0
        self.poolStake: float = 0.0

    def addNomination(self, nominator: str, amount: float):
        # Find how much this amount will increase the pool
        if self.poolStake == 0:
            # Bootstrap: the first deposit is issued a single share
            new_shares = 1.0
        else:
            increase = amount / self.poolStake
            # Find the amount of shares to issue
            new_shares = increase * self.shareSum
        # Issue the shares
        if nominator not in self.shareMap:
            self.shareMap[nominator] = 0.0
        self.shareMap[nominator] += new_shares
        self.shareSum += new_shares
        # Increase the pool stake
        self.poolStake += amount

    def getNomination(self, nominator: str) -> float:
        return self.shareMap[nominator]

    def getStake(self, nominator: str) -> float:
        # Find the amount of stake the nominator has
        if self.shareMap.get(nominator, 0.0) == 0:
            return 0.0
        return (self.shareMap[nominator] / self.shareSum) * self.poolStake

    def getCommission(self) -> float:
        return self.commission

    def getPoolStake(self) -> float:
        return self.poolStake

    def getPoolSize(self) -> int:
        return len(self.shareMap)

    def getShareOfPool(self, nominator: str) -> float:
        return self.shareMap[nominator] / self.shareSum

    def emitThroughPool(self, amount: float):
        # Find the owner commission
        commission = amount * self.commission
        # Give the rest to the pool
        left_amount = amount - commission
        self.poolStake += left_amount
        # Add the commission to the owner (this issues the owner new shares)
        self.addNomination(self.owner, commission)

    def removeStake(self, nominator: str, amount: float):
        # Find the fraction of the pool being removed
        decrease = amount / self.poolStake
        # Find the amount of shares to remove
        remove_shares = decrease * self.shareSum
        # Remove the shares
        if self.shareMap[nominator] < remove_shares:
            raise ValueError("The nominator does not have enough shares")
        self.shareMap[nominator] -= remove_shares
        self.shareSum -= remove_shares
        # Decrease the pool stake
        self.poolStake -= amount
```
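To sanity-check the share math, here is a condensed, runnable walk-through (re-declaring a minimal version of the pool so the snippet stands alone; values are illustrative):

```python
from typing import Dict

class MiniPool:
    """Condensed copy of the NominationPool example, just enough for a demo."""
    def __init__(self, owner: str, commission: float = 0.18):
        self.owner = owner
        self.commission = commission
        self.shareMap: Dict[str, float] = {owner: 0.0}
        self.shareSum = 0.0
        self.poolStake = 0.0

    def addNomination(self, nominator: str, amount: float):
        # Proportional issuance; the first deposit bootstraps 1 share
        new_shares = 1.0 if self.poolStake == 0 else (amount / self.poolStake) * self.shareSum
        self.shareMap[nominator] = self.shareMap.get(nominator, 0.0) + new_shares
        self.shareSum += new_shares
        self.poolStake += amount

    def getStake(self, nominator: str) -> float:
        shares = self.shareMap.get(nominator, 0.0)
        return (shares / self.shareSum) * self.poolStake if shares else 0.0

    def emitThroughPool(self, amount: float):
        # Owner commission becomes new shares; the rest grows every share
        commission = amount * self.commission
        self.poolStake += amount - commission
        self.addNomination(self.owner, commission)

# Two equal nominators, then a 10 TAO emission at 18% commission
pool = MiniPool("delegate")
pool.addNomination("alice", 100.0)
pool.addNomination("bob", 100.0)
pool.emitThroughPool(10.0)
print(round(pool.getStake("alice"), 1))     # 104.1 (half of the 8.2 payout)
print(round(pool.getStake("delegate"), 1))  # 1.8   (the commission)
```

Note that on the subnet tempo step only the pool's single `poolStake` entry would appear in the StakeMap, regardless of how many nominators the pool holds.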
[^1]: https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/nomination-pools/src/lib.rs (the Polkadot/FRAME implementation)