sherlock-audit / 2024-05-pooltogether-judging


Nihavent - When `PrizePool` reserves are low and more than expected number of prizes are won, race conditions are created to claim prizes. Some users' prize funds will be locked due to `PrizePool::claimPrize()` call reverting. #112

Closed sherlock-admin4 closed 2 months ago

sherlock-admin4 commented 3 months ago

Nihavent

high

When PrizePool reserves are low and more than expected number of prizes are won, race conditions are created to claim prizes. Some users' prize funds will be locked due to PrizePool::claimPrize() call reverting.


Summary

Winner determination for a given draw and tier is a set of statistically independent events. That is, Alice winning a prize for a given draw and tier does not make it more or less likely that Bob will win a prize in the same draw and tier. Consequently, there may be more (or fewer) than the expected number of winners in any given draw. This can be an issue because the tierLiquidity.prizeSize for a given tier and draw is based on the expected number of winners, not the actual number of winners. In the case where there are more than the expected number of winners, there may not be enough liquidity in the reserve to cover the extra prize(s). Under these circumstances, there is a race for users to call PrizePool::claimPrize() before the liquidity in the tier runs out. Once tier liquidity runs out, there will be user(s) who are genuine winners for a given draw and tier (PrizePool::isWinner() returns true), but calls to PrizePool::claimPrize() for their address will revert, resulting in their prize funds being locked.

As an analogy, the protocol is currently like a lottery where there can be more than one grand prize winner (tier 0). In the event there are two grand prize winners, it allows both users to make a claim on the full grand prize instead of allocating an equal portion to each winner. The issue is present in each tier level with varying frequencies and impacts.

Vulnerability Detail

The expected number of prizes awarded for any given draw and tier is given by 4^t, where t is the tier number. For example, in tier 1 there are 4 expected prizes. In reality, this means that over a large number of draws, the average number of tier 1 prizes awarded per draw will approach 4.

https://github.com/sherlock-audit/2024-05-pooltogether/blob/main/pt-v5-prize-pool/src/libraries/TierCalculationLib.sol#L39-L41

The way this is implemented in tier t is by allowing users to generate 4^t different userSpecificRandomNumber values in the PrizePool::isWinner() function. This means users get 4^t chances to generate a random number in the winning zone (i.e. in tier t, users are 4^t times more likely to be able to claim a prize than in tier 0).

https://github.com/sherlock-audit/2024-05-pooltogether/blob/main/pt-v5-prize-pool/src/PrizePool.sol#L1022-L1029

    uint256 userSpecificRandomNumber = TierCalculationLib.calculatePseudoRandomNumber(
      lastAwardedDrawId_,
      _vault,
      _user,
      _tier,
@>    _prizeIndex,
      _winningRandomNumber
    );
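As a rough model of the call above (illustrative only — the contract hashes abi-encoded values with keccak256, whereas this sketch substitutes Python's `sha3_256`, and the draw, vault, and user values are made up), each distinct prize index hashes to a distinct pseudo-random number, giving a user in tier t effectively 4^t independent draws against the winning zone:

```python
import hashlib

def pseudo_random_number(draw_id, vault, user, tier, prize_index, winning_random_number):
    # Stand-in for TierCalculationLib.calculatePseudoRandomNumber: mix the
    # draw, vault, user, tier, prize index, and winning random number into one
    # uniformly distributed integer. (On-chain this is keccak256 over
    # abi-encoded values; sha3_256 here is only an illustrative substitute.)
    preimage = f"{draw_id}|{vault}|{user}|{tier}|{prize_index}|{winning_random_number}".encode()
    return int.from_bytes(hashlib.sha3_256(preimage).digest(), "big")

tier = 2
numbers = {pseudo_random_number(1, "0xVault", "0xAlice", tier, i, 12345)
           for i in range(4 ** tier)}
print(len(numbers))  # 16 distinct pseudo-random numbers: one per prize index
```

Because each prize index yields an independent-looking number, one user can win several prize indices in the same tier, and different users' outcomes never affect each other.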

The user wins a prize for a given vault, draw, tier, and prize index if their user-specific uniformly distributed pseudo-random number falls within the winning zone:

https://github.com/sherlock-audit/2024-05-pooltogether/blob/main/pt-v5-prize-pool/src/libraries/TierCalculationLib.sol#L68-L70

Due to the statistical independence of each user's random numbers generated, there is theoretically no limit to the number of prizes that can be won in a given draw and tier. This results in the possibility of the required liquidity in a tier exceeding the available liquidity in that tier.
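This variance can be demonstrated with a quick simulation (parameters are hypothetical; the per-check win probability is calibrated so that the *average* winner count equals the expected 4^t). With these settings, well over a third of draws produce more winners than the tier was sized for:

```python
import random

random.seed(7)
users, tier = 1000, 1
expected = 4 ** tier        # expected winners per draw in this tier
p = 1 / users               # per-(user, prizeIndex) win probability, calibrated
                            # so mean winners == users * expected * p == expected

draws, over = 2_000, 0
for _ in range(draws):
    # every user gets 4**tier independent chances (one per prize index)
    claims = sum(random.random() < p for _ in range(users * expected))
    if claims > expected:
        over += 1

print(f"{over / draws:.1%} of draws had more than {expected} winners")
```

For a Binomial(4000, 1/1000) winner count, theory predicts roughly 37% of draws exceed 4 winners, even though 4 is exactly the mean the liquidity is sized for.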

The issue is made possible because tierLiquidity.prizeSize is updated when a user calls PrizePool::claimPrize() via TieredLiquidityDistributor::_getTier() and TieredLiquidityDistributor::_computePrizeSize(). As shown below, the prize size returned is based on dividing remainingTierLiquidity by prizeCount, which is the expected prize count (4^t), not the actual prize count for that draw and tier.

https://github.com/sherlock-audit/2024-05-pooltogether/blob/main/pt-v5-prize-pool/src/abstract/TieredLiquidityDistributor.sol#L411-L429

  function _computePrizeSize(
    uint8 _tier,
    uint8 _numberOfTiers,
    uint128 _tierPrizeTokenPerShare,
    uint128 _prizeTokenPerShare
  ) internal view returns (uint104) {
    uint256 prizeCount = TierCalculationLib.prizeCount(_tier);
    uint256 remainingTierLiquidity = _getTierRemainingLiquidity(
      _tierPrizeTokenPerShare,
      _prizeTokenPerShare,
      _numShares(_tier, _numberOfTiers)
    );

    uint256 prizeSize = convert(
@>    convert(remainingTierLiquidity).mul(tierLiquidityUtilizationRate).div(convert(prizeCount))
    );

    return prizeSize > type(uint104).max ? type(uint104).max : uint104(prizeSize);
  }

Crucially, note that in the event there are fewer than the expected number of winners in a draw and tier, the protocol does not 'save' these extra prizes in reserves to protect against this issue occurring in the future. It simply inflates that tier's liquidity for future draws. As shown above, in future draws tierLiquidity.prizeSize will be recalculated based on remainingTierLiquidity and prizeCount.
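A simplified numeric model of this sizing (plain floats; ignoring the fixed-point UD60x18 math, claim fees, and the per-draw recomputation in the real contract) shows why a fifth winner in tier 1 cannot be paid once the expected four have claimed and the reserve is empty:

```python
def prize_size(remaining_liquidity, tier, utilization=1.0):
    # Mirrors the shape of _computePrizeSize: divide by the EXPECTED prize
    # count 4**tier, regardless of how many winners there actually are.
    return remaining_liquidity * utilization / (4 ** tier)

tier = 1
tier_liquidity, reserve = 100.0, 0.0
size = prize_size(tier_liquidity, tier)     # 25.0 each, sized for 4 winners

paid = reverted = 0
for claim in range(1, 6):                   # but 5 users actually won
    if size > tier_liquidity + reserve:
        print(f"claim {claim}: revert InsufficientLiquidity")
        reverted += 1
        continue
    from_tier = min(size, tier_liquidity)   # drain tier liquidity first,
    tier_liquidity -= from_tier             # then fall back on the reserve
    reserve -= size - from_tier
    paid += 1
    print(f"claim {claim}: paid {size}")
```

Claims 1–4 succeed and claim 5 reverts, which is exactly the race the PoC in the Code Snippet section demonstrates on-chain.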

Also note there is no protection against this issue by ensuring the reserves exceed a sufficient size. At any point, the drawManager can allocate any amount of the reserves to any address as a reward:

https://github.com/sherlock-audit/2024-05-pooltogether/blob/main/pt-v5-prize-pool/src/PrizePool.sol#L434-L449

The risk of at least one prize being unclaimable is present in any draw where a tier's prize size tierLiquidity.prizeSize is greater than the reserve amount. Note that unless the value of the grand prize is kept in the reserve, the issue will be present in every single draw.

Impact

When tierLiquidity.prizeSize exceeds TieredLiquidityDistributor._reserve, and there are more winners than the expected number of winners, a race to call PrizePool::claimPrize() begins. In tier t, the first 4^t prize claims will be successful, with each subsequent call reverting.

Addresses that were entitled to a prize, but were not the first 4^t winning addresses to claim will forfeit their prize.

Code Snippet

The POC below was adapted from a test in PrizePool.t.sol and shows a scenario where five users can make a prize claim in tier 1 for a given draw. There is not enough liquidity for all users to claim, so the fifth claim reverts.

Paste the following import and test in PrizePool.t.sol.


import { InsufficientLiquidity } from "../src/abstract/TieredLiquidityDistributor.sol";

  function test_ClaimPrizesRevertsScenarioOne() public {
    uint users = 50;
    uint balance = 1e18;
    uint totalSupply = users * balance;
    uint8 t = 1; // Show revert in tier 1
    uint256 yieldAmt = 100e18;

    // Vault contributes yield to prize pool
    contribute(yieldAmt);

    // Draw is awarded with this random number
    uint256 random = uint256(keccak256(abi.encode(1234)));
    awardDraw(random);

    // Based on this seed, addresses, and TWAB values, Users 10, 15, 16, 18, and 41 win a prize in tier 1
    address user10 = makeAddr(string(abi.encodePacked(uint(10))));
    address user15 = makeAddr(string(abi.encodePacked(uint(15))));
    address user16 = makeAddr(string(abi.encodePacked(uint(16))));
    address user18 = makeAddr(string(abi.encodePacked(uint(18))));
    address user41 = makeAddr(string(abi.encodePacked(uint(41))));
    address user42 = makeAddr(string(abi.encodePacked(uint(42))));

    // Mock user TWAB
    mockTwabForUser(address(this), user10, t, balance);
    mockTwabForUser(address(this), user15, t, balance);
    mockTwabForUser(address(this), user16, t, balance);
    mockTwabForUser(address(this), user18, t, balance);
    mockTwabForUser(address(this), user41, t, balance);
    mockTwabForUser(address(this), user42, t, balance);

    // Mock vault TWAB
    mockTwabTotalSupply(address(this), t, totalSupply);

    // 5 Different users are able to claim prizes for this draw in tier 1
    assertEq(prizePool.isWinner(address(this), user10, 1, 0), true);
    assertEq(prizePool.isWinner(address(this), user15, 1, 2), true);
    assertEq(prizePool.isWinner(address(this), user16, 1, 3), true);
    assertEq(prizePool.isWinner(address(this), user18, 1, 0), true);
    assertEq(prizePool.isWinner(address(this), user41, 1, 1), true);

    // There is not enough liquidity for all 5 winners
    assertGt(5 * prizePool.getTierPrizeSize(1), prizePool.getTierRemainingLiquidity(1) + prizePool.reserve());

    // Race condition has been created, first 4 users can claim
    assertEq(prizePool.getTierPrizeSize(1), prizePool.claimPrize(user10, 1, 0, user10, 0, address(this)));
    assertEq(prizePool.getTierPrizeSize(1), prizePool.claimPrize(user15, 1, 2, user15, 0, address(this)));
    assertEq(prizePool.getTierPrizeSize(1), prizePool.claimPrize(user16, 1, 3, user16, 0, address(this)));
    assertEq(prizePool.getTierPrizeSize(1), prizePool.claimPrize(user18, 1, 0, user18, 0, address(this)));

    // Fifth user's claim reverts due to insufficient liquidity
    vm.expectRevert(abi.encodeWithSelector(InsufficientLiquidity.selector, prizePool.getTierPrizeSize(1)));
    prizePool.claimPrize(user41, 1, 1, user41, 0, address(this));
  }

Tool used

Manual Review

Recommendation

Without changing the core logic of winner selection, my recommended mitigations fall into two categories:

  1. Wait until the end of the claim period before calculating tierLiquidity.prizeSize and distributing pool tokens. This means the payout values for a given draw and tier will be based on the actual number of winners, not the expected number of winners. As an example, if two addresses were to win the grand prize on the same draw, they would each receive half of the total grand prize. This implementation would be more like a lottery where the total prize pool in each tier is guaranteed, but not the value of each winner's payout, which is only determined once the number of successful winners is known.

  2. Manage the size of the reserve pool (TieredLiquidityDistributor._reserve) by taking steps to keep the reserve pool above some 'minimum reserve buffer':
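The first recommendation can be sketched as follows (hypothetical helper, not protocol code): settle a tier only at the end of its claim period and split the tier's liquidity pro-rata among the actual winners, so extra winners shrink each payout instead of causing reverts:

```python
def settle_tier(tier_liquidity, winners):
    # Payouts are computed from the ACTUAL winner count at period end,
    # not the expected 4**tier, so every winner can always be paid.
    if not winners:
        return {}
    share = tier_liquidity / len(winners)
    return {winner: share for winner in winners}

# two grand-prize winners in the same draw each receive half the pool
print(settle_tier(1_000.0, ["alice", "bob"]))  # {'alice': 500.0, 'bob': 500.0}
```

The tradeoff is that winners must wait until the claim period closes before payouts are known, and the claimer-incentive flow would need to be restructured around a settlement step.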

sherlock-admin4 commented 2 months ago

1 comment(s) were left on this issue during the judging contest.

infect3d commented:

the PoC does not take the _reserve into account, which exists for that specific purpose

trmid commented 2 months ago

This is a well known limitation and design choice of the prize pool. As mentioned in the issue, the prize pool operates on a statistical model, awarding prizes based on tier odds. The prize pool offers two different tools for mitigating this variance:

  1. The reserve can be used to backstop "over-awarded" prizes with extra liquidity. This can be managed manually through the contributeReserve function, or automatically through the cut of contributions captured by the reserveShares. However, in recent deployments, the reserve is no longer needed for this purpose due to a new parameter:
  2. The tierUtilizationRatio can be configured such that the prizes only use a portion of the available yield in each prize tier. When configured correctly, this alone is enough to ensure that there will be enough liquidity for every prize within reasonable statistical variance. Although the absolute possibility of an over-awarded prize is not completely removed by this, the chance of a race condition can be brought so low that there is no reasonable incentive for claimers to try to win the race. It is also worth noting that the prizes most likely to be over-awarded are the smallest prizes in the system, lowering the impact of the condition even further.

While the issue is valid, the prize pool already offers multiple configuration parameters to mitigate it such that any properly configured prize pool will not be significantly impacted.

nevillehuang commented 2 months ago

@trmid Was this design choice/limitation documented? If not I am leaning towards valid medium severity based on your comments above.

Nihavent commented 2 months ago

Escalate

I believe according to Sherlock rules this issue should be high.

Allow me to demonstrate this with two points:

  1. This issue is bound to happen in tier 0 (grand prize) without any external limitations, this isn't a question of 'if', only a question of 'when'.

  2. If point 1 is accepted, the impact is high because using the designed mitigations to prevent the issue from occurring in tier 0 would impose unrealistic constraints on the pool settings and have a material cost for stakers.

    • Mitigating this issue with the _reserve pool requires this value to be maintained above the liquidity in the tier 0 pool at all times; this is costly to stakers, and there is no apparent effort to implement this.
    • Mitigating this issue with the tierLiquidityUtilizationRate would require this value to be set close to 5e17 (50%), which means ~99.98% of grand prize pool claims would only receive half the tier 0 prize pool, along with every other tier only winning half the pool in every single draw.

I also don't believe it's relevant if the protocol team mentioned it was a design decision after the competition concluded. It should have been mentioned in 'design choices' or 'known issues/acceptable risks' section of the contest README:


The Details

  1. The likelihood of this issue occurring in the grand prize pool (tier 0) is high given the life of the protocol and the number of chains deployed.

    https://github.com/sherlock-audit/2024-05-pooltogether-Nihavent/blob/d244743ff201ab439e534e944cf19c7efc3e733f/pt-v5-prize-pool/src/libraries/TierCalculationLib.sol#L17-L27

    • When substituting values for the above deployment pattern, we see the probability of a grand prize being won on any single chain, on any given draw/day is given by:

$$ p(\text{grand prize win}) = \left( \frac{1}{91} \right)^{\sqrt{\frac{0 + 1 - (numTiers - 2)}{1 - (numTiers - 2)}}} = \frac{1}{91} $$

$$ p(\text{not two grand prizes same draw, single chain}) = 1 - \left( \frac{1}{91} \right)^2 $$

$$ p(\text{not two grand prizes won on the same draw over 1 year, single chain}) = \left( 1 - \left( \frac{1}{91} \right)^2 \right)^{365} $$

$$ p(\text{not two grand prizes won on the same draw over 1 year, 12 chains}) = \left( \left( 1 - \left( \frac{1}{91} \right)^2 \right)^{365} \right)^{12} \approx 0.589 $$

  2. The impact is high due to the proposed mitigations materially decreasing staker yield and not being enforced.

    • Given point 1. above, we can limit analysis of the impact to the grand prize pool.
    • Below are the designed mitigations for this issue:

    "The reserve can be used to backstop 'over-awarded' prizes with extra liquidity."

    • In order for the reserve to successfully protect against two users having a claim on the grand prize pool in the same draw, the size of the reserve must exceed the size of tier 0 liquidity at all times. This is a required invariant for a successful mitigation that is not enforced, and it would withhold a significant amount of yield from being won by stakers.
      • There is no automated balancing mechanism that adjusts RESERVE_SHARES to ensure that the above invariant is met.
      • Therefore, management of the reserves through contributeReserve() and allocateRewardFromReserve() is required, but there is no protection in these functions ensuring _reserve exceeds tier 0 liquidity.
      • Maintaining the reserve pool manually through contributeReserve() comes at a cost to the protocol itself (i.e. an admin/bot needs to provide liquidity to the reserve out of goodwill because there is no mechanism to collect this extra liquidity).
      • Additionally, maintaining a reserve pool which exceeds tier 0 liquidity is costly to stakers, as a significant chunk of their staking rewards is withheld by the protocol to protect against the vulnerability.

    "The tierLiquidityUtilizationRate can be configured such that the prizes only use a portion of the available yield in each prize tier."

    • In TieredLiquidityDistributor::_computePrizeSize(), tierLiquidityUtilizationRate is used to scale the prizeSize https://github.com/sherlock-audit/2024-05-pooltogether-Nihavent/blob/d244743ff201ab439e534e944cf19c7efc3e733f/pt-v5-prize-pool/src/abstract/TieredLiquidityDistributor.sol#L424-L426.
    • In order to protect against a second grand prize winner's PrizePool::claimPrize() call reverting, the tierLiquidityUtilizationRate should approach 50% (small allowance for reserves).
    • Setting the tierLiquidityUtilizationRate this low has material impact on stakers, namely 99.98% (1-(1/91^2)) of grand prize winners will only receive half of the grand prize pool.
    • The tierLiquidityUtilizationRate required to protect against reverts in higher tiers is potentially a lot higher than 50%, however the protocol only allows for a single tierLiquidityUtilizationRate that is applied to each tier.
    • Setting this ratio to 50% is penalising regular users in all tiers from their expected long-term APR.
    • There is no setter function for tierLiquidityUtilizationRate meaning it cannot be increased in periods where the reserve pool is healthy.

Depending on how the proposed mitigations are implemented, they may be more costly to stakers than the issue itself.

sherlock-admin3 commented 2 months ago

Escalate

I believe according to Sherlock rules this issue should be high.

Allow me to demonstrate this with two points:

  1. This issue is bound to happen in tier 0 (grand prize) without any external limitations, this isn't a question of 'if', only a question of 'when'.

  2. If point 1 is accepted, the impact is high because using the designed mitigations to prevent the issue from occurring in tier 0 would impose unrealistic constraints on the pool settings and have a material cost for stakers.

    • Mitigating this issue with the _reserve pool requires this value to be maintained above the liquidity in the tier 0 pool at all times; this is costly to stakers, and there is no apparent effort to implement this.
    • Mitigating this issue with the tierLiquidityUtilizationRate would require this value to be set close to 5e17 (50%), which means ~99.98% of grand prize pool claims would only receive half the tier 0 prize pool, along with every other tier only winning half the pool in every single draw.

I also don't believe it's relevant if the protocol team mentioned it was a design decision after the competition concluded. It should have been mentioned in 'design choices' or 'known issues/acceptable risks' section of the contest README:


The Details

  1. The likelihood of this issue occurring in the grand prize pool (tier 0) is high given the life of the protocol and the number of chains deployed.

    https://github.com/sherlock-audit/2024-05-pooltogether-Nihavent/blob/d244743ff201ab439e534e944cf19c7efc3e733f/pt-v5-prize-pool/src/libraries/TierCalculationLib.sol#L17-L27

    • When substituting values for the above deployment pattern, we see the probability of a grand prize being won on any single chain, on any given draw/day is given by:

$$ p(\text{grand prize win}) = \left( \frac{1}{91} \right)^{\sqrt{\frac{0 + 1 - (numTiers - 2)}{1 - (numTiers - 2)}}} = \frac{1}{91} $$

  • Therefore, the probability that two grand prizes are not won on the same draw, for any single chain is:

$$ p(\text{not two grand prizes same draw, single chain}) = 1 - \left( \frac{1}{91} \right)^2 $$

  • Therefore, the probability that two grand prizes are not won on the same draw, for any given chain over a period of 1 year, is:

$$ p(\text{not two grand prizes won on the same draw over 1 year, single chain}) = \left( 1 - \left( \frac{1}{91} \right)^2 \right)^{365} $$

  • Therefore, the probability that two grand prizes are not won on the same draw, on any of the 12 chains, over the period of 1 year is:

$$ p(\text{not two grand prizes won on the same draw over 1 year, 12 chains}) = \left( \left( 1 - \left( \frac{1}{91} \right)^2 \right)^{365} \right)^{12} \approx 0.589 $$

  • Therefore, there is a 41.1% chance of this issue occurring at least once on any of the 12 chains in any single year.
  • The table below shows this calculation extended further with the same parameters:

    | Years | No-occurrence probability | At least 1 occurrence probability |
    |-------|---------------------------|-----------------------------------|
    | 1     | 58.9%                     | 41.1%                             |
    | 3     | 20.4%                     | 79.6%                             |
    | 5     | 7.1%                      | 92.9%                             |
    | 10    | 0.5%                      | 99.5%                             |
  2. The impact is high due to the proposed mitigations materially decreasing staker yield and not being enforced.

    • Given point 1. above, we can limit analysis of the impact to the grand prize pool.
    • Below are the designed mitigations for this issue:

    "The reserve can be used to backstop 'over-awarded' prizes with extra liquidity."

    • In order for the reserve to successfully protect against two users having a claim on the grand prize pool in the same draw, the size of the reserve must exceed the size of tier 0 liquidity at all times. This is a required invariant for a successful mitigation that is not enforced, and it would withhold a significant amount of yield from being won by stakers.
      • There is no automated balancing mechanism that adjusts RESERVE_SHARES to ensure that the above invariant is met.
      • Therefore, management of the reserves through contributeReserve() and allocateRewardFromReserve() is required, but there is no protection in these functions ensuring _reserve exceeds tier 0 liquidity.
      • Maintaining the reserve pool manually through contributeReserve() comes at a cost to the protocol itself (i.e. an admin/bot needs to provide liquidity to the reserve out of goodwill because there is no mechanism to collect this extra liquidity).
      • Additionally, maintaining a reserve pool which exceeds tier 0 liquidity is costly to stakers, as a significant chunk of their staking rewards is withheld by the protocol to protect against the vulnerability.

    "The tierLiquidityUtilizationRate can be configured such that the prizes only use a portion of the available yield in each prize tier."

    • In TieredLiquidityDistributor::_computePrizeSize(), tierLiquidityUtilizationRate is used to scale the prizeSize https://github.com/sherlock-audit/2024-05-pooltogether-Nihavent/blob/d244743ff201ab439e534e944cf19c7efc3e733f/pt-v5-prize-pool/src/abstract/TieredLiquidityDistributor.sol#L424-L426.
    • In order to protect against a second grand prize winner's PrizePool::claimPrize() call reverting, the tierLiquidityUtilizationRate should approach 50% (small allowance for reserves).
    • Setting the tierLiquidityUtilizationRate this low has material impact on stakers, namely 99.98% (1-(1/91^2)) of grand prize winners will only receive half of the grand prize pool.
    • The tierLiquidityUtilizationRate required to protect against reverts in higher tiers is potentially a lot higher than 50%, however the protocol only allows for a single tierLiquidityUtilizationRate that is applied to each tier.
    • Setting this ratio to 50% is penalising regular users in all tiers from their expected long-term APR.
    • There is no setter function for tierLiquidityUtilizationRate meaning it cannot be increased in periods where the reserve pool is healthy.

Depending on how the proposed mitigations are implemented, they may be more costly to stakers than the issue itself.
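The probability table above can be reproduced with a few lines (same assumptions as the escalation: daily draws, 12 chains, and treating "two grand prizes in one draw" as having probability (1/91)^2; small last-digit differences are rounding):

```python
p_two_same_draw = (1 / 91) ** 2                   # two grand prizes in one draw
# probability of NO such draw across 365 daily draws on each of 12 chains
no_occurrence_1y = (1 - p_two_same_draw) ** (365 * 12)

for years in (1, 3, 5, 10):
    none = no_occurrence_1y ** years
    print(f"{years:>2}y: no occurrence {none:.1%}, at least one {1 - none:.1%}")
```

This yields ~58.9% / 41.1% for one year and ~0.5% / 99.5% at ten years, matching the table to rounding.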

You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

10xhash commented 2 months ago

Escalate

> "and there are more winners than the expected number of winners, a race to call PrizePool::claimPrize() begins. In tier t, the first 4^t prize claims will be successful, with each subsequent call reverting"

Only 4^t users are expected to be able to claim prizes. Because the system relies on probability, if more users become eligible to claim the prize, it is fine to have a race for the prize. The functionality to award from the reserve is added as an extra and is not supposed to mean that all excess users should also be rewarded.

sherlock-admin3 commented 2 months ago

Escalate

> "and there are more winners than the expected number of winners, a race to call PrizePool::claimPrize() begins. In tier t, the first 4^t prize claims will be successful, with each subsequent call reverting"

Only 4^t users are expected to be able to claim prizes. Because the system relies on probability, if more users become eligible to claim the prize, it is fine to have a race for the prize. The functionality to award from the reserve is added as an extra and is not supposed to mean that all excess users should also be rewarded.

You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

Nihavent commented 2 months ago

Escalate

> "and there are more winners than the expected number of winners, a race to call PrizePool::claimPrize() begins. In tier t, the first 4^t prize claims will be successful, with each subsequent call reverting" Only 4^t users are expected to be able to claim prizes. Because the system relies on probability, if more users become eligible to claim the prize, it is fine to have a race for the prize. The functionality to award from the reserve is added as an extra and is not supposed to mean that all excess users should also be rewarded.

Can you indicate where this is documented?

Race conditions are objectively bad for the protocol: VRGDA claimers will lose races to users who set up custom claimer bots to always claim faster than the default VRGDA (in a VRGDA, the price starts low!). The direct result is that casual users of the protocol will, on average, earn less yield than 'pro' users who set up custom claimer bots. Turning the protocol into a skill-based game doesn't seem intended.

The VRGDA claimer docs clearly indicate the protocol's intention to allow claims when the actual prize count exceeds 4^t, without mention of races or reverts due to slow claims:

> "The schedule starts immediately after the draw is awarded, and is set to take one quarter of the draw period. This provides plenty of leeway in the event that there are more prizes, statistically, for a given draw."

0x73696d616f commented 2 months ago

I had a conversation with the sponsor about this. It is at most medium because they set the utilization ratio to 50%. This means that 3 grand prizes are required, which is a (1/91)^3 ≈ 0.0001327% chance of happening per draw, so it is borderline low/medium given the very low likelihood.

WangSecurity commented 2 months ago

Based on the comment above, I believe medium is indeed more appropriate here. @10xhash did you escalate to keep this issue at medium, or do you believe it should be invalid?

10xhash commented 2 months ago

> Based on the comment above, I believe medium is indeed more appropriate here. @10xhash did you escalate to keep this issue at medium, or do you believe it should be invalid?

To be invalid, due to this comment

WangSecurity commented 2 months ago

Is it documented anywhere that the protocol only rewards the expected number of winners? As I understand it's not, but it's an acceptable risk, based on the sponsor's comment here. Hence, I agree with the Lead Judge that even though it may be acceptable behaviour, it still leads to winners not getting their prizes, although the probability is low. Hence, I believe medium severity is appropriate. Am I wrong anywhere?

10xhash commented 2 months ago

> Is it documented anywhere that the protocol only rewards the expected number of winners? As I understand it's not, but it's an acceptable risk, based on the sponsor's comment here. Hence, I agree with the Lead Judge that even though it may be acceptable behaviour, it still leads to winners not getting their prizes, although the probability is low. Hence, I believe medium severity is appropriate. Am I wrong anywhere?

Not documented, but inferable from the code and from the entire project being probability-based.

> "leads to winners not getting their prizes"

There can be an infinite number of actual winners, in the sense that the probability is never 0.

WangSecurity commented 2 months ago

I agree it's a design decision for the prize pool to operate in that way. Firstly, based on the sponsor's comment, the prize pool awards based on odds. Secondly, on the fact that there will be a reserve for such cases (as I understand), but still there is no guarantee that all excess winners will be awarded — only the expected number of prizes.

Hence, I believe this issue is indeed a design recommendation. Planning to reject @Nihavent's escalation because it asks for high severity, but accept @10xhash's escalation and invalidate the report.

Nihavent commented 2 months ago

@WangSecurity It may have been a design decision, and internally the sponsor may have accepted this issue up to an acceptable level of risk. But because this was not documented in the README, Watsons were not in a position to know these details at the time of the contest. Thus, we submitted issues that could cause losses to regular users of the protocol.

To say that because the probability of this issue occurring can never be reduced to 0, races are okay, is oversimplifying and ignores the mitigations, their externalities, and the fact that the protocol was clearly not designed for prize-claim races.

A key piece of evidence that this protocol was not designed with race conditions in mind is the VRGDA claimer contract, which auctions the claiming of prizes starting at a low price that increases over time. This logic always loses races to savvy users who claim their prizes first.

0x73696d616f commented 2 months ago

@Nihavent you could have asked the sponsor about this; it is a very general risk of probability-based awarding

Nihavent commented 2 months ago

@0x73696d616f true, it is a general risk, but as I and the duplicate https://github.com/sherlock-audit/2024-05-pooltogether-judging/issues/71 suggested, you could split the rewards based on the actual number of claimers at the end of the claim period. This avoids race conditions and increases fairness to all users of the protocol by ensuring that each tier's liquidity is fully paid out each draw.

As discussed in my escalation, setting the tierLiquidityUtilizationRate to 5e17 quite literally means that the yield in some tiers can never be fully paid out.

I've taken a general issue and described how it specifically impacts this protocol with respect to its mitigations. If this level of risk-vs-yield-reduction tradeoff was acceptable, I feel it should have been documented to avoid Watsons submitting an issue about it. The burden should not be on Watsons to ask the sponsors to confirm levels of acceptable risk when the README asked a question about it and it was not disclosed.

WangSecurity commented 2 months ago

Firstly, I say that this is a design decision not because the sponsor said so, but because the protocol distributes prizes based on tier odds. So the design of the protocol has a tradeoff: the number of prizes, calculated based on odds, may be less than the number of winners. And we have to remember that in this case the users don't lose their deposited funds; the "picked" winners simply cannot claim their prizes (I don't say it's good or anything, but the funds they deposited are not lost).

Secondly, a design decision doesn't equal an acceptable risk, and I wouldn't invalidate an issue if it were an acceptable risk made known after the contest. But when it's a design decision and the way the protocol works, all Watsons have a chance to ask the sponsors and clarify how the system works.

Thirdly, the constraints for this issue to occur are extreme, based on this comment.

Hence, based on all of these arguments together, I believe this is a design decision and this report is low/info.

The decision remains the same: reject @Nihavent's escalation, accept @10xhash's escalation, and invalidate the issue.

WangSecurity commented 2 months ago

Result: Invalid

Has duplicates

sherlock-admin4 commented 2 months ago

Escalations have been resolved successfully!

Escalation status: