Shortening revert strings to fit in 32 bytes will reduce deployment gas and reduce runtime gas whenever the revert condition is met. Revert strings longer than 32 bytes require at least one additional mstore, along with additional overhead for computing memory offsets, etc.
An even more gas-efficient approach is to use Solidity custom errors instead of revert strings.
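As a hedged illustration (the contract and error names below are hypothetical, not taken from the audited code), a revert string can be replaced with a custom error like this:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4; // custom errors require Solidity >= 0.8.4

contract CustomErrorExample {
    // A custom error is encoded as a 4-byte selector, so no string
    // data is stored in the bytecode or copied to memory on revert.
    error OnlyNonZeroAmount();

    function deposit(uint256 _value) external pure {
        // Before: require(_value > 0, "Only non zero amount");
        if (_value == 0) revert OnlyNonZeroAmount();
    }
}
```

Custom errors can also carry parameters, e.g. error InsufficientBalance(uint256 available, uint256 required), which is cheaper than concatenating values into a revert string.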
Add 1 to uEpoch in line 256 when initializing it instead of calculating uEpoch + 1 multiple times
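A sketch of what this could look like (the mapping names are assumed from a Curve-style VotingEscrow and may differ from the actual code; any surrounding zero-checks on uEpoch would need to be adjusted accordingly):

```solidity
// Before: uEpoch + 1 is recomputed at every use
// uint256 uEpoch = userPointEpoch[_addr];
// userPointHistory[_addr][uEpoch + 1] = userNewPoint;
// userPointEpoch[_addr] = uEpoch + 1;

// After: add 1 once, at initialization
uint256 uEpoch = userPointEpoch[_addr] + 1;
userPointHistory[_addr][uEpoch] = userNewPoint;
userPointEpoch[_addr] = uEpoch;
```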
Cache decimals in the constructor instead of accessing the storage twice
Cache penaltyRecipient in collectPenalty to avoid accessing the storage twice
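A possible shape for the decimals fix (the token interface and variable names here are assumptions, not the audited code):

```solidity
// Before: the require reads the decimals storage slot again
// right after it was written.
// decimals = IBasicToken(_stakingToken).decimals();
// require(decimals <= 18, "Cannot have more than 18 decimals");

// After: read once into a local, validate, then write storage once
uint256 decimals_ = IBasicToken(_stakingToken).decimals();
require(decimals_ <= 18, "Cannot have more than 18 decimals");
decimals = decimals_;
```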
Reduce the size of end in the LockedBalance struct so that the struct fits in 2 words instead of 3 (end is a timestamp, and by the time it would exceed the maximum uint96 value the contract will almost certainly no longer be active, given the limited size of the Point arrays)
Optimized struct:
// -------------- first word ----------------- --------------- second word ------------
// [ amount - 128 bits | delegated - 128 bits] [ delegatee - 160 bits | end - 96 bits ]
struct LockedBalance {
// amount and delegated are zipped to one word
int128 amount;
int128 delegated;
// delegatee and end are zipped to the second word
address delegatee;
uint96 end;
}
No need to copy the locked balance in _delegate (all of its call sites are at the end of the calling function, so the copied value is never actually used after the call)
Move line 506 before line 503 to avoid accessing locked_.end twice
Use shift right/left instead of division/multiplication - in lines 720 and 744 the calculation uint256 mid = (min + max + 1) / 2; can be optimized with shifting. The optimized version looks like this: uint256 mid = (min + max + 1) >> 1; (right-shifting by i is equivalent to dividing by 2**i, and left-shifting by i to multiplying by 2**i).
Use != 0 instead of > 0 for unsigned integer comparison - this can be seen in multiple places in the code:
contracts/VotingEscrow.sol::288 => if (epoch > 0) {
contracts/VotingEscrow.sol::412 => require(_value > 0, "Only non zero amount");
contracts/VotingEscrow.sol::448 => require(_value > 0, "Only non zero amount");
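The flagged comparisons, rewritten (file and line references as in the findings above):

```solidity
// contracts/VotingEscrow.sol::288
if (epoch != 0) {

// contracts/VotingEscrow.sol::412 and ::448
require(_value != 0, "Only non zero amount");
```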
Loops can be optimized in several ways. Take, for example, the loop in the _checkpoint function.
for (uint256 i = 0; i < 255; i++) {
// Hopefully it won't happen that this won't get used in 5 years!
// If it does, users will be able to withdraw but vote weight will be broken
iterativeTime = iterativeTime + WEEK;
int128 dSlope = 0;
if (iterativeTime > block.timestamp) {
iterativeTime = block.timestamp;
} else {
dSlope = slopeChanges[iterativeTime];
}
int128 biasDelta =
lastPoint.slope *
int128(int256((iterativeTime - lastCheckpoint)));
lastPoint.bias = lastPoint.bias - biasDelta;
lastPoint.slope = lastPoint.slope + dSlope;
// This can happen
if (lastPoint.bias < 0) {
lastPoint.bias = 0;
}
// This cannot happen - just in case
if (lastPoint.slope < 0) {
lastPoint.slope = 0;
}
lastCheckpoint = iterativeTime;
lastPoint.ts = iterativeTime;
lastPoint.blk =
initialLastPoint.blk +
(blockSlope * (iterativeTime - initialLastPoint.ts)) /
MULTIPLIER;
// when epoch is incremented, we either push here or after slopes updated below
epoch = epoch + 1;
if (iterativeTime == block.timestamp) {
lastPoint.blk = block.number;
break;
} else {
pointHistory[epoch] = lastPoint;
}
}
We can do multiple things here:
Variables in Solidity are already initialized to their default value, and explicitly initializing them to that same value costs extra gas. In the loop above, use uint256 i; instead of uint256 i = 0;. The initialization of dSlope to 0 is likewise redundant.
Use ++i and ++epoch instead of i++ and epoch++ to save gas in every iteration.
Use unchecked for the loop index increment. This is guaranteed not to overflow because i only runs from 0 to 255.
Consider using iterativeTime as the loop index (limiting it by its initial value + 256 weeks)
So after all these changes, the code will look something like this:
for (uint256 i; i < 255; ) {
// Hopefully it won't happen that this won't get used in 5 years!
// If it does, users will be able to withdraw but vote weight will be broken
iterativeTime = iterativeTime + WEEK;
int128 dSlope;
if (iterativeTime > block.timestamp) {
iterativeTime = block.timestamp;
} else {
dSlope = slopeChanges[iterativeTime];
}
int128 biasDelta =
lastPoint.slope *
int128(int256((iterativeTime - lastCheckpoint)));
lastPoint.bias = lastPoint.bias - biasDelta;
lastPoint.slope = lastPoint.slope + dSlope;
// This can happen
if (lastPoint.bias < 0) {
lastPoint.bias = 0;
}
// This cannot happen - just in case
if (lastPoint.slope < 0) {
lastPoint.slope = 0;
}
lastCheckpoint = iterativeTime;
lastPoint.ts = iterativeTime;
lastPoint.blk =
initialLastPoint.blk +
(blockSlope * (iterativeTime - initialLastPoint.ts)) /
MULTIPLIER;
// when epoch is incremented, we either push here or after slopes updated below
++epoch;
if (iterativeTime == block.timestamp) {
lastPoint.blk = block.number;
break;
} else {
pointHistory[epoch] = lastPoint;
}
unchecked {
++i;
}
}
If you choose to use `iterativeTime` as the loop index, it will look like this:
uint256 timeLimit = iterativeTime + (WEEK << 8); // i.e. iterativeTime + WEEK * 256
for (; iterativeTime < timeLimit; ) {
// Hopefully it won't happen that this won't get used in 5 years!
// If it does, users will be able to withdraw but vote weight will be broken
unchecked {
iterativeTime = iterativeTime + WEEK; // if this overflowed, the timeLimit calculation would have overflowed first
}
int128 dSlope;
if (iterativeTime > block.timestamp) {
iterativeTime = block.timestamp;
} else {
dSlope = slopeChanges[iterativeTime];
}
int128 biasDelta =
lastPoint.slope *
int128(int256((iterativeTime - lastCheckpoint)));
lastPoint.bias = lastPoint.bias - biasDelta;
lastPoint.slope = lastPoint.slope + dSlope;
// This can happen
if (lastPoint.bias < 0) {
lastPoint.bias = 0;
}
// This cannot happen - just in case
if (lastPoint.slope < 0) {
lastPoint.slope = 0;
}
lastCheckpoint = iterativeTime;
lastPoint.ts = iterativeTime;
lastPoint.blk =
initialLastPoint.blk +
(blockSlope * (iterativeTime - initialLastPoint.ts)) /
MULTIPLIER;
// when epoch is incremented, we either push here or after slopes updated below
++epoch;
if (iterativeTime == block.timestamp) {
lastPoint.blk = block.number;
break;
} else {
pointHistory[epoch] = lastPoint;
}
}
There are more loops in the contract that can be optimized using the same optimizations.