opentensor / subtensor

Bittensor Blockchain Layer
The Unlicense

Chain Bloat #520

Open distributedstatemachine opened 2 weeks ago

distributedstatemachine commented 2 weeks ago

Description

Currently, the Bittensor Chain distributes emissions and updates staking information in every tempo. This frequent updating contributes to chain bloat and inefficiencies. To address this, we propose the following changes:

  1. Emission Distribution: Distribute emissions only after every 7200 blocks (one epoch).
  2. Staking Calculation: Changing the emission distribution schedule introduces a complication: without changing the underlying staking mechanism, a person could wait until block 7198, stake, and still receive the full epoch's rewards. To mitigate this, any stake added during an epoch will not count until the next epoch.
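The epoch arithmetic behind point 2 can be sketched as follows. This is a minimal standalone illustration, not pallet code; the names `epoch_of` and `stake_active_from` are hypothetical helpers, assuming a fixed 7200-block epoch:

```rust
const EPOCH_LENGTH: u64 = 7200;

/// Epoch index a given block belongs to.
fn epoch_of(block_number: u64) -> u64 {
    block_number / EPOCH_LENGTH
}

/// First block at which stake queued at `block_number` becomes active:
/// the next epoch boundary, so a late staker cannot capture the current
/// epoch's rewards.
fn stake_active_from(block_number: u64) -> u64 {
    (epoch_of(block_number) + 1) * EPOCH_LENGTH
}

fn main() {
    // Stake queued at block 7198 only counts from block 7200 onward,
    // closing the "stake at 7198, collect full rewards" loophole.
    assert_eq!(stake_active_from(7198), 7200);
    // Stake queued exactly at a boundary waits a full epoch.
    assert_eq!(stake_active_from(7200), 14_400);
    println!("ok");
}
```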

Acceptance Criteria

Tasks

impl<T: Config> Pallet<T> {
    pub fn generate_emission(block_number: u64) {
        // --- 1. Iterate across each network and add pending emission into stash.
        for (netuid, tempo) in <Tempo<T> as IterableStorageMap<u16, u16>>::iter() {
            // Skip the root network or subnets with registrations turned off
            if netuid == Self::get_root_netuid() || !Self::is_registration_allowed(netuid) {
                // Root emission or subnet emission is burned
                continue;
            }

            // --- 2. Queue the emission due to this network.
            let new_queued_emission: u64 = Self::get_subnet_emission_value(netuid);
            log::debug!(
                "generate_emission for netuid: {:?} with tempo: {:?} and emission: {:?}",
                netuid,
                tempo,
                new_queued_emission,
            );
        }
    }
}
impl<T: Config> Pallet<T> {
    pub fn distribute_emission() {
        for (netuid, _) in <Tempo<T> as IterableStorageMap<u16, u16>>::iter() {
            let Some(tuples_to_drain) = Self::get_loaded_emission_tuples(netuid) else {
                continue;
            };
            let mut total_emitted: u64 = 0;
            for (hotkey, server_amount, validator_amount) in tuples_to_drain.iter() {
                Self::emit_inflation_through_hotkey_account(
                    hotkey,
                    *server_amount,
                    *validator_amount,
                );
                total_emitted = total_emitted
                    .saturating_add(*server_amount)
                    .saturating_add(*validator_amount);
            }
            LoadedEmission::<T>::remove(netuid);
            TotalIssuance::<T>::put(TotalIssuance::<T>::get().saturating_add(total_emitted));
        }
    }
}
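The drain-and-account step above can be exercised in isolation. The sketch below assumes the emission tuple shape `(hotkey, server_amount, validator_amount)` from the snippet; `drain_emission` is a hypothetical free function standing in for the storage-backed logic:

```rust
/// Minimal sketch of the accounting in distribute_emission: sum the
/// server and validator amounts while draining the tuples, then bump
/// total issuance once, saturating rather than overflowing.
fn drain_emission(tuples: Vec<(&str, u64, u64)>, total_issuance: u64) -> u64 {
    let mut total_emitted: u64 = 0;
    for (_hotkey, server_amount, validator_amount) in tuples {
        total_emitted = total_emitted
            .saturating_add(server_amount)
            .saturating_add(validator_amount);
    }
    total_issuance.saturating_add(total_emitted)
}

fn main() {
    let tuples = vec![("hk1", 10, 5), ("hk2", 20, 0)];
    // 10 + 5 + 20 + 0 = 35 emitted on top of 1_000 issued.
    assert_eq!(drain_emission(tuples, 1_000), 1_035);
    println!("ok");
}
```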
impl<T: Config> Pallet<T> {
    pub fn block_step() -> Result<(), &'static str> {
        let block_number: u64 = Self::get_current_block_as_u64();
        log::debug!("block_step for block: {:?} ", block_number);

        // --- 1. Adjust difficulties.
        Self::adjust_registration_terms_for_networks();

        // --- 2. Calculate per-subnet emissions
        match Self::root_epoch(block_number) {
            Ok(_) => (),
            Err(e) => {
                log::trace!("Error while running root epoch: {:?}", e);
            }
        }

        // --- 3. Queue emission tuples (hotkey, amount).
        Self::generate_emission(block_number);

        // --- 4. Distribute emissions and apply queued stakes if end of epoch.
        if block_number % 7200 == 0 {
            Self::distribute_emission();
            Self::apply_queued_stakes();
        }

        // Return ok.
        Ok(())
    }
}
impl<T: Config> Pallet<T> {
    pub fn add_stake(
        origin: OriginFor<T>,
        hotkey: T::AccountId,
        amount: u64,
    ) -> DispatchResult {
        let who = ensure_signed(origin)?;
        // Queue the stake change
        QueuedStakes::<T>::append(&who, (hotkey.clone(), amount));
        Ok(())
    }
}
impl<T: Config> Pallet<T> {
    pub fn apply_queued_stakes() {
        for (coldkey, stakes) in QueuedStakes::<T>::iter() {
            for (hotkey, amount) in stakes {
                Self::increase_stake_on_coldkey_hotkey_account(&coldkey, &hotkey, amount);
            }
            QueuedStakes::<T>::remove(&coldkey);
        }
    }
}
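The semantics of the `add_stake` / `apply_queued_stakes` pair can be shown with plain in-memory maps. This is a simplified sketch with assumed names (`StakeBook`, string hotkeys instead of account IDs), not the pallet's storage model:

```rust
use std::collections::HashMap;

/// Queue-then-apply stake pattern: stake added mid-epoch is buffered
/// and only merged into active stake at the epoch boundary.
#[derive(Default)]
struct StakeBook {
    active: HashMap<&'static str, u64>, // hotkey -> active stake
    queued: HashMap<&'static str, u64>, // hotkey -> stake queued this epoch
}

impl StakeBook {
    fn add_stake(&mut self, hotkey: &'static str, amount: u64) {
        *self.queued.entry(hotkey).or_insert(0) += amount;
    }

    fn apply_queued_stakes(&mut self) {
        for (hotkey, amount) in self.queued.drain() {
            *self.active.entry(hotkey).or_insert(0) += amount;
        }
    }
}

fn main() {
    let mut book = StakeBook::default();
    book.add_stake("alice", 100);
    // Mid-epoch: the queued stake is not yet active.
    assert_eq!(book.active.get("alice"), None);
    book.apply_queued_stakes();
    // After the epoch boundary, it counts.
    assert_eq!(book.active.get("alice"), Some(&100));
    println!("ok");
}
```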

Additional Considerations

open-junius commented 2 weeks ago
  1. remove_stake should also be queued during the epoch
  2. Can we calculate the emission at the end of the epoch, to avoid the calculation on each block?
distributedstatemachine commented 2 weeks ago

> remove_stake should also be queued during the epoch

We don't care about removed stakes, as stakers cannot cheat the system by doing that. Stakers should be free to remove their stake at any time, but should understand the consequences. We can probably warn about this in the CLI.

> Can we calculate the emission at the end of the epoch, to avoid the calculation on each block?

I don't think it's feasible, as it complicates the Yuma Consensus calculations, especially with features like Liquid Alpha, where the prior epoch's performance is crucial.

distributedstatemachine commented 2 weeks ago

Notes from const: