Auctions run at significantly different speeds for different prize tiers #15

code423n4 opened 1 year ago

code423n4 commented 1 year ago

Lines of code

https://github.com/GenerationSoftware/pt-v5-claimer/blob/main/src/Claimer.sol#L76-L78
https://github.com/GenerationSoftware/pt-v5-claimer/blob/main/src/Claimer.sol#L136
https://github.com/GenerationSoftware/pt-v5-claimer/blob/main/src/Claimer.sol#L262-L264
https://github.com/GenerationSoftware/pt-v5-claimer/blob/main/src/Claimer.sol#L223-L250
https://github.com/GenerationSoftware/pt-v5-claimer/blob/main/src/Claimer.sol#L289

Vulnerability details

Comments

The V5 implementation delegates the task of claiming prizes to a network of claimers. The fees received by a claimer are calculated via a Dutch auction and capped based on the prize size of the highest tier (the smallest prize). As a result, the gas cost could exceed the fee received by claimers, leading to prizes not being claimed. If any high-value prizes happen to be drawn during such a period, they will go unclaimed.

Mitigation

The new implementation computes the max fee based only on the prize tier being claimed for. As a result, the fees received by claimers now scale with the prize size of the tier being claimed, incentivising the lower tiers (i.e. those with larger prizes) to be claimed first and resulting in more fees paid to claimers.

New issue

Because all the tiers run on a single shared auction configuration, the effective auction for each tier now runs at a completely different speed.

Impact

If the _maximumFee parameter specified in the constructor is relatively small, the max fee for the lower prize tiers (larger prizes) will never be reached anyway. If the _maximumFee is relatively large, to give the auctions sufficient range, the auctions for the higher tiers (smaller prizes) ramp up very quickly to the per-tier max limit. The real impact is that the auctions now run inefficiently: fees are likely to be higher than they need to be for the higher tiers (i.e. the bots receive more in fees than they would be willing to accept).

Proof of Concept

Based on the updated implementation, the maximum fee to be paid is now a function of the tier being claimed for, not the total number of active tiers:

  function _computeMaxFee(uint8 _tier) internal view returns (uint256) {
    // The max fee is a fixed portion of the prize size of the tier being claimed.
    return UD60x18.unwrap(
      maxFeePortionOfPrize.intoUD60x18().mul(
        UD60x18.wrap(prizePool.getTierPrizeSize(_tier))
      )
    );
  }
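To illustrate how this changes the fee caps across tiers, here is a minimal Python sketch of the same arithmetic; the 50% fee portion matches the test suite mentioned later in this thread, while the liquidity figure and tier count are assumed example values:

```python
# Sketch of the per-tier max fee arithmetic in _computeMaxFee.
# The 50% fee portion follows the test suite; the liquidity value
# and tier layout are assumed for illustration only.

MAX_FEE_PORTION = 0.5        # maxFeePortionOfPrize = 50%
TIER_LIQUIDITY = 1_000_000   # assumed liquidity per tier (equal shares)

for tier in range(4):
    num_prizes = 4**tier                      # tier t has 4^t prizes
    prize_size = TIER_LIQUIDITY / num_prizes  # getTierPrizeSize analogue
    max_fee = MAX_FEE_PORTION * prize_size    # mirrors _computeMaxFee
    print(f"tier {tier}: prize {prize_size:>11,.0f}, fee cap {max_fee:>11,.0f}")
```

With equal liquidity per tier, each step up in tier divides the fee cap by four, so across ten tiers the caps span roughly six orders of magnitude.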

The return value of this call is used as the first parameter for calls to _computeFeePerClaim and in turn the last parameter of _computeFeeForNextClaim:

  function _computeFeeForNextClaim(
    uint256 _minimumFee,
    SD59x18 _decayConstant,
    SD59x18 _perTimeUnit,
    uint256 _elapsed,
    uint256 _sold,
    uint256 _maxFee
  ) internal pure returns (uint256) {
    uint256 fee = LinearVRGDALib.getVRGDAPrice(
      _minimumFee,
      _elapsed,
      _sold,
      _perTimeUnit,
      _decayConstant
    );
    // Cap the VRGDA fee at the per-tier maximum computed in _computeMaxFee.
    return fee > _maxFee ? _maxFee : fee;
  }

As you can see, the fee is capped based on the max fee for the prize tier being claimed for. This does what was intended; however, there is only one auction for all the tiers, so the decay constant is shared across all of them:

  constructor(
    PrizePool _prizePool,
    uint256 _minimumFee,
    uint256 _maximumFee,
    uint256 _timeToReachMaxFee,
    UD2x18 _maxFeePortionOfPrize
  ) {
    if (_minimumFee >= _maximumFee) {
      revert MinFeeGeMax(_minimumFee, _maximumFee);
    }
    prizePool = _prizePool;
    maxFeePortionOfPrize = _maxFeePortionOfPrize;
    // A single decay constant is derived from the global minimum/maximum fee
    // range and the time to reach the max fee, and is shared by every tier.
    decayConstant = LinearVRGDALib.getDecayConstant(
      LinearVRGDALib.getMaximumPriceDeltaScale(_minimumFee, _maximumFee, _timeToReachMaxFee)
    );
    minimumFee = _minimumFee;
    timeToReachMaxFee = _timeToReachMaxFee;
  }

The fee that can actually be collected is capped by whichever is lower: the auction _maximumFee or the per-tier maxFee. To allow all prize tiers to be claimed, the _maximumFee in the constructor will likely be increased; however, this also increases the decay constant, so the auctions ramp up faster. For tiers with a lower max fee this means the cap is reached significantly sooner, leading to inefficient auctions.
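To make this concrete, here is a rough sketch assuming the standard exponential VRGDA shape, where the decay constant is derived so the fee grows from _minimumFee to _maximumFee over _timeToReachMaxFee; all parameter values below are assumed for illustration:

```python
import math

# Assumed illustrative parameters, not taken from any deployment.
MIN_FEE = 1.0           # auction starting fee (normalized)
TIME_TO_MAX = 86_400.0  # one day, the shortest draw period

def decay_constant(max_fee: float) -> float:
    # Assuming the usual exponential VRGDA form, k is chosen so that
    # MIN_FEE * e**(k * TIME_TO_MAX) == max_fee.
    return math.log(max_fee / MIN_FEE) / TIME_TO_MAX

def time_to_hit(cap: float, k: float) -> float:
    # Seconds (with no claims sold yet) until the uncapped fee reaches cap.
    return math.log(cap / MIN_FEE) / k

TIER_CAP = 100.0  # assumed per-tier max fee for a high tier (small prize)
for auction_max in (1_000.0, 1_000_000.0):
    k = decay_constant(auction_max)
    hours = time_to_hit(TIER_CAP, k) / 3600
    print(f"_maximumFee {auction_max:>11,.0f}: tier cap hit after {hours:4.1f} h")
```

Under these assumptions, raising the global _maximumFee from 1,000x to 1,000,000x the minimum fee halves the time this tier takes to hit its cap (16 h down to 8 h), and every tier whose cap sits below the global maximum is squeezed into an ever-earlier portion of the curve.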

Tools used

Manual review

Recommendation

There is potentially another angle to resolve the issue reported in the original contest: a mechanism that allows the max fee to be increased if not enough prizes were claimed in the last draw, although this introduces additional complexity.

Alternatively, there should be an auction for each tier with its own decay constant (i.e. its own min and max fees) to ensure that all the auctions ramp up in a similar (but not necessarily identical) manner. In my opinion this option makes the most sense and is the least complex, and is therefore also the hardest to manipulate (if at all).
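A sketch of what per-tier decay constants could look like, reusing the exponential form assumed above; the caps, starting fee, and period are again assumed example values:

```python
import math

MIN_FEE = 1.0           # assumed common starting fee
TIME_TO_MAX = 86_400.0  # assumed: one day, the shortest draw period

# Assumed per-tier fee caps, shrinking 4x per tier like the prize sizes.
tier_caps = [500_000.0 / 4**t for t in range(5)]

# One decay constant per tier: each auction ramps from MIN_FEE to its own
# cap over the same one-day period, so every tier approaches its cap at
# the end of the period rather than partway through it.
for t, cap in enumerate(tier_caps):
    k = math.log(cap / MIN_FEE) / TIME_TO_MAX
    print(f"tier {t}: cap {cap:>10,.0f}, k = {k:.2e} per second")
```

Each auction then ramps in a similar (but not identical) manner, which is exactly the behaviour the recommendation describes.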

Assessed type

Other

c4-sponsor commented 11 months ago

asselstine marked the issue as sponsor acknowledged

stalinMacias commented 11 months ago

I think the new logic to calculate the fees is correct, right? And, as you mentioned, it requires the admin to make a "mistake" by setting too small or too large a value for the maxFee. Plus, claimers (bots) will be competing against each other to claim the prizes, and the new logic uses the prize size of the tier being claimed, which means the rewards are larger, and thus claimers can also earn more fees for claiming a larger prize. I think that is fair and incentivizes the claimers to perform their services.

djb15 commented 11 months ago

Yeah, so the change does resolve the original issue about high-value prizes potentially not being claimed (https://github.com/code-423n4/2023-07-pooltogether-findings/issues/415).

This report discusses a potentially unintended consequence of the change that results in inefficient auctions. Basically, since all prize tiers run on the same auction, they all use the same decay constant for the VRGDA algorithm. And in order for the fix to the original issue to work properly, the _maximumFee variable set in the constructor has to be sufficiently large (this is not a "mistake"). However, this variable also directly impacts the decay constant of the VRGDA algorithm; the higher the value, the steeper the curve.

So now, for a low-value tier, the auction price ramps up significantly faster than before (i.e. the increase in price per second is greater than it could/should be). I agree the claimer bots will still do their job properly, but because the increase per second is much larger, there is less granularity in the auction. The bots are now more likely to receive a larger fee percentage than they would have been willing to accept.
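As a rough numerical illustration of the granularity point, using the same assumed exponential form and example values as the earlier sketch:

```python
import math

MIN_FEE = 1.0
TIME_TO_MAX = 86_400.0  # one day, assumed
BLOCK_TIME = 12         # seconds, assumed block cadence

# Per-block fee growth for two assumed global _maximumFee choices: the
# larger the global max, the bigger the jump between consecutive blocks.
for auction_max in (1_000.0, 1_000_000.0):
    k = math.log(auction_max / MIN_FEE) / TIME_TO_MAX
    per_block = math.exp(k * BLOCK_TIME) - 1
    print(f"_maximumFee {auction_max:>11,.0f}: fee grows {per_block:.3%} per block")
```

Doubling the exponent roughly doubles the per-block step, so the first block at which a claim becomes profitable overshoots a bot's true reservation fee by a correspondingly larger margin.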

stalinMacias commented 11 months ago

The bots are now more likely to receive a larger fee percentage than they would have been willing to accept.

So, the impact is that bots/claimers can receive more fees than they would be willing to accept? Isn't that a good incentive to keep the bots active and doing their job?

djb15 commented 11 months ago

I think you're missing the key point that fees are subtracted from prize sizes. Therefore the actual users of the protocol are losing out. Bots are overincentivised.

Picodes commented 11 months ago

@djb15 do you have an example with credible values?

It seems to me that even if the speed is faster, it shouldn't reach a speed at which the auction is unusable. The limit for this to be of Medium severity would be if, under credible hypotheses, the decay per block were ~0.1% of the amounts at stake. Then it could indeed lead to a loss for users, as bots wouldn't be able to claim at a specific block.

c4-judge commented 11 months ago

Picodes marked the issue as satisfactory

c4-judge commented 11 months ago

Picodes changed the severity to QA (Quality Assurance)

djb15 commented 11 months ago

Sure @Picodes, happy to elaborate. Each tier is allocated the same number of shares (i.e. each tier has an equal claim to the liquidity distributed across the tiers). The prize size for a tier is the liquidity allocated to the tier divided by the number of prizes in the tier, and there can be a maximum of 10 tiers.

The lowest tier (i.e. the highest prize) has 4^0 = 1 prize, and the highest tier (the lowest prize) has 4^10 ~= 1,000,000 prizes. Therefore, with the same liquidity distributed to each tier, the grand prize is 1 million times larger than the smallest (daily) prize when there are 10 tiers.

With the change making claimer fees a percentage of the prize size being claimed, the VRGDA auction max fee needs to be sufficiently high for the auction to scale across the different prize sizes. Based on the test suite, the maximum fee portion (of the prize size being claimed) is 50%; so, for example, if I were claiming a prize worth 1 POOL, the maximum fee would be 0.5 POOL. If you wanted to scale the max fee up to 50% of the highest prize tier, the maximum value of the auction would need to be 500,000 times the size of the lowest prize; this would be unnecessary, but I'll touch on it again later.

Now, each claim period is the length of the shortest draw period, which is 1 day. A day contains 3600*24 = 86,400 ~= 100,000 seconds. So, for a linear auction that can scale to the largest prize size, the fee increases by roughly 10x (1,000%) the lowest prize size every second. Obviously a VRGDA auction is not linear, but there will be a portion of the curve where this behaviour is approached or exceeded.

Like I touched on above, having the auction scale to half the largest prize size would be unnecessary, since we just want to incentivise claiming enough to cover the gas costs. But based on the calculations above, reducing the auction cap by 10,000x would still result in a fee increase of 0.1% of the smallest prize per second in the near-linear region. That corresponds to an auction cap of only 100x the smallest prize size, which is insignificant given the 1,000,000x spread in prizes.
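For reference, the back-of-envelope numbers above can be reproduced directly, using the same linear simplification of the VRGDA curve as in the comment:

```python
# Reproduces the back-of-envelope numbers above, using the stated
# linear simplification of the VRGDA curve.

smallest_prize = 1.0                    # normalize the daily prize to 1
largest_prize = 4**10 * smallest_prize  # ~1,000,000x with 10 tiers
seconds_per_day = 3600 * 24             # 86,400 ~= 100,000

# Auction cap at the largest prize size: the linearized fee rises by
# roughly 10x the smallest prize every second (using ~100,000 seconds).
print(f"{largest_prize / seconds_per_day:.1f}x smallest prize per second")

# A cap 10,000x smaller (only ~100x the smallest prize) still adds about
# 0.1% of the smallest prize per second in the near-linear region.
reduced_cap = largest_prize / 10_000
print(f"{reduced_cap / seconds_per_day:.4f}x smallest prize per second")
```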

Hopefully that's clear, let me know if you disagree with any of the above.

c4-judge commented 11 months ago

This previously downgraded issue has been upgraded by Picodes
