llamaxyz / llama

Llama is an onchain governance and access control framework for smart contracts.
https://llama.xyz
MIT License

perf: benchmark gas usage and improve it #193

Closed: mds1 closed this issue 1 year ago

mds1 commented 1 year ago

First, for the revokePolicy method that loops over all roles:

  1. Check if we can revoke a policy if numRoles is 255 AND the user holds 0 roles
  2. Check if we can revoke a policy if numRoles is 255 AND the user holds 1 role
  3. Check if we can revoke a policy if numRoles is 255 AND the user holds 255 roles
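The benchmarks above could be sketched as a Foundry gas test. This is a rough sketch, not the repo's actual test code: the `ILlamaPolicy` interface, `revokePolicy` signature, and setup helpers are assumptions and will likely differ from the real contracts.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

// Hypothetical interface; the real policy contract's API may differ.
interface ILlamaPolicy {
  function revokePolicy(address user) external;
}

contract RevokePolicyGasTest is Test {
  ILlamaPolicy policy; // assumed to be deployed/configured in setUp with numRoles = 255

  // One test per scenario in the list above; forge's --gas-report (or
  // gasleft() deltas) surfaces the cost of looping over all 255 roles.
  function test_RevokePolicy_255Roles_UserHolds0() public {
    address user = makeAddr("holdsZeroRoles");
    uint256 before = gasleft();
    policy.revokePolicy(user);
    emit log_named_uint("gas: 255 roles, user holds 0", before - gasleft());
  }
}
```

Running `forge test --gas-report` against the three scenarios (0, 1, and 255 held roles) would show how revocation cost scales with the number of roles the loop must check.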

Then, from https://github.com/llama-community/vertex-v1/pull/190:

Improved gas usage: We have a lot of for loops. Solidity is not very smart with for loops, and will continually expand memory usage with each iteration of the loop instead of re-using the same memory (ref https://github.com/ethereum/solidity/issues/13885). This is bad because gas costs for memory usage scale quadratically, so large loops very quickly increase gas costs. By manually resetting the free memory pointer after each iteration of for loops (like https://github.com/foundry-rs/foundry/issues/3971#issuecomment-1398815011), we may be able to get significant gas reductions for large loops.
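A minimal sketch of the pointer-reset trick described above (the function and its body are illustrative, not code from this repo). The key constraint is that nothing allocated inside the loop iteration may be referenced after the pointer is rewound, since the next iteration will overwrite that memory:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

contract FmpResetExample {
  // Hashes a list of ids; each iteration allocates memory via abi.encode.
  function hashAll(uint256[] calldata ids) external pure returns (bytes32 acc) {
    for (uint256 i = 0; i < ids.length; ++i) {
      // Cache the free memory pointer before this iteration's allocations.
      uint256 fmp;
      assembly { fmp := mload(0x40) }

      // Without the reset below, this allocation advances the pointer every
      // iteration, and memory-expansion gas grows quadratically on large loops.
      acc = keccak256(abi.encodePacked(acc, keccak256(abi.encode(ids[i]))));

      // Rewind the pointer so the next iteration reuses the same region.
      // Safe only because no memory reference from this iteration escapes it.
      assembly { mstore(0x40, fmp) }
    }
  }
}
```

The reset turns per-iteration memory growth into constant reuse of one region, which is where the savings on large loops come from.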

mds1 commented 1 year ago

Another improvement could be not writing most of the action struct to storage and instead just storing a hash. On mainnet this would make things significantly cheaper, especially for actions with a lot of calldata. However, on L2s, where calldata drives gas costs, this would make action execution a lot more expensive (though not much more expensive than action creation), since the caller now has to provide the full action data again as calldata to execute.
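The hash-instead-of-struct pattern could look roughly like this. A hypothetical sketch, not the repo's design: the contract, field set, and function names are assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

contract ActionStoreSketch {
  mapping(uint256 => bytes32) public actionHashes;
  uint256 public nextId;

  function create(address target, uint256 value, bytes calldata data)
    external
    returns (uint256 id)
  {
    id = nextId++;
    // One SSTORE of a hash instead of writing the full action
    // (target, value, and potentially large calldata) to storage.
    actionHashes[id] = keccak256(abi.encode(target, value, data));
  }

  function execute(uint256 id, address target, uint256 value, bytes calldata data)
    external
  {
    // The full action must be resupplied as calldata and checked against
    // the stored hash -- this is the extra calldata cost on L2s.
    require(keccak256(abi.encode(target, value, data)) == actionHashes[id], "bad action");
    delete actionHashes[id];
    (bool ok,) = target.call{value: value}(data);
    require(ok, "call failed");
  }
}
```

The trade-off is exactly the one described above: storage writes shrink to a single word at creation, while execution must carry the whole action in calldata so the hash can be recomputed and verified.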

0xrajath commented 1 year ago

Adding this here https://github.com/llama-community/vertex/issues/59