code-423n4 / 2022-09-nouns-builder-findings


Founders can receive fewer tokens than expected #269

code423n4 opened 2 years ago

code423n4 commented 2 years ago

Lines of code

https://github.com/code-423n4/2022-09-nouns-builder/blob/7e9fddbbacdd7d7812e912a369cfd862ee67dc03/src/token/Token.sol#L118

Vulnerability details

Because the IDs of the founders' tokens are computed incorrectly, some of them can end up with an ID of 100 or higher and will therefore never be minted.

Proof of Concept

If a founder has a percentage of pct, then pct IDs between 0 and 99 should be assigned to him in the tokenRecipient mapping, such that whenever a token is minted with tokenId % 100 equal to one of his IDs, it is minted directly to him. But the modulo is not applied correctly when the mapping is filled, so some of a founder's IDs can reach 100 or more, which tokenId % 100 never equals.

It happens for example if Alice has 10% and Bob has 11%. For Alice, schedule is equal to 10, so Alice will have the IDs 0, 10, 20, ..., 90. For Bob, schedule is equal to 9, so he should have the IDs 1, 11, 21, ..., 91, 100. Indeed, Bob can't have ID 0 because it belongs to Alice, likewise for ID 10, etc. The eleventh and last ID given to Bob is 100, which can never be reached since tokenId % 100 is always below 100.

With two founders, this happens for example when the percentages satisfy (100 / pct1) - (100 / pct2) == 1 (integer division). So (11%, 12%) or (25%, 33%) will behave the same way, with even more tokens lost in some cases.
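The allocation above can be simulated with a minimal Python sketch. This is an assumption-laden model of the loop in Token.sol, not the contract's actual code: `allocate_buggy`, `token_recipient`, and `base_token_id` are illustrative names, and the "skip already-taken IDs" step stands in for the contract's next-token-id lookup.

```python
def allocate_buggy(percents):
    """Model of the founder allocation loop with the missing modulo bug."""
    token_recipient = {}  # base token id -> founder index
    for founder, pct in enumerate(percents):
        schedule = 100 // pct
        base_token_id = 0
        for _ in range(pct):
            # Skip IDs already claimed by earlier founders
            while base_token_id in token_recipient:
                base_token_id += 1
            token_recipient[base_token_id] = founder
            base_token_id += schedule  # bug: no `% 100` here

    return token_recipient

ids = allocate_buggy([10, 11])  # Alice 10%, Bob 11%
# Bob (founder index 1) is assigned 1, 11, 21, ..., 91, and 100:
# only 10 of his 11 IDs satisfy id < 100, so one token is never minted.
```

Running this confirms the write-up: ID 100 lands in the mapping for Bob, but since tokenId % 100 is always in [0, 99], that slot never matches a minted token.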

// SPDX-License-Identifier: MIT
pragma solidity 0.8.15;

import { TokenTest } from "./Token.t.sol";

contract TestMEP_M1 is TokenTest {

    function setUpMEP() internal returns (address alice, address bob, uint256 totalOwnership) {
        address[] memory wallets = new address[](2);
        uint256[] memory percents = new uint256[](2);
        uint256[] memory vestExpiries = new uint256[](2);

        wallets[0] = alice = address(0x0a);
        wallets[1] = bob = address(0x0b);

        percents[0] = 10;
        percents[1] = 11;
        totalOwnership = percents[0] + percents[1];

        vestExpiries[0] = type(uint256).max;
        vestExpiries[1] = type(uint256).max;

        deployWithCustomFounders(wallets, percents, vestExpiries);
    }

    // Verifies the bug on the first 100 tokens
    function testMEP_M1_1() public {

        (address alice, address bob, uint256 totalOwnership) = setUpMEP();

        // Minting 100 - totalOwnership tokens through the auction brings
        // the total supply to 100 (founder tokens are minted alongside).
        // Alice should receive 10 tokens
        // Bob should receive 11 tokens
        for (uint256 i; i < 100 - totalOwnership; ++i) {
            vm.prank(address(auction));
            token.mint();
        }

        assertEq(token.balanceOf(alice), 10, "Alice's balance is wrong");
        assertEq(token.balanceOf(bob), 11, "Bob's balance is wrong");
    }

    // Verifies the ownership percentages with the law of large numbers
    function testMEP_M1_2() public {

        (address alice, address bob, ) = setUpMEP();

        uint256 tokensToMint = 123456; // large enough to apply the law of large numbers
        for (uint256 i; i < tokensToMint; ++i) {
            vm.prank(address(auction));
            token.mint();
        }

        uint256 totalSupply = token.totalSupply();
        assertEq(token.balanceOf(alice) * 100 / totalSupply, 10, "Alice's percentage is wrong");
        assertEq(token.balanceOf(bob) * 100 / totalSupply, 11, "Bob's percentage is wrong");
    }
}

Recommended Mitigation Steps

Replace line 118 of Token.sol with `baseTokenId = (baseTokenId + schedule) % 100;`.
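To see the effect of the proposed fix, here is a hedged Python sketch of the same simplified allocation model with the modulo applied. Names (`allocate_fixed`, `token_recipient`) are illustrative, not the contract's.

```python
def allocate_fixed(percents):
    """Model of the founder allocation loop with the proposed modulo fix."""
    token_recipient = {}  # base token id -> founder index
    for founder, pct in enumerate(percents):
        schedule = 100 // pct
        base_token_id = 0
        for _ in range(pct):
            # Skip IDs already claimed by earlier founders
            while base_token_id in token_recipient:
                base_token_id += 1
            token_recipient[base_token_id] = founder
            base_token_id = (base_token_id + schedule) % 100  # fixed

    return token_recipient

ids = allocate_fixed([10, 11])
# Every assigned ID now stays in [0, 99], so all 21 founder
# allocations (10 for Alice, 11 for Bob) are actually mintable.
```

With the fix, Bob's eleventh ID wraps around past 100 and lands on a free slot below 100 instead of the unreachable ID 100.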

GalloDaSballo commented 2 years ago

The warden has shown how, due to a logical mistake / typo, founders will receive a lower allocation than expected.

This is contingent on certain allocations being set and will cause a "loss of yield" to the founders estimated to the lost allocation.

I believe the finding could be raised to High Severity due to the graveness of it, however, because it is contingent on specific inputs, I think Medium Severity to be appropriate

Minh-Trng commented 2 years ago

> I believe the finding could be raised to High Severity due to the graveness of it, however, because it is contingent on specific inputs, I think Medium Severity to be appropriate

I would like to ask to keep severity in sync with this one and all its duplicates, because they all reference the exact same issue (so either keep all of them as medium or all upgrade to high): https://github.com/code-423n4/2022-09-nouns-builder-findings/issues/499

Leaving some as Medium and others as high would essentially punish the wardens that decided to be more conservative with the severity estimate (full disclosure: my report is amongst them)

GalloDaSballo commented 2 years ago

> > I believe the finding could be raised to High Severity due to the graveness of it, however, because it is contingent on specific inputs, I think Medium Severity to be appropriate
>
> I would like to ask to keep severity in sync with this one and all its duplicates, because they all reference the exact same issue (so either keep all of them as medium or all upgrade to high): #499
>
> Leaving some as Medium and others as high would essentially punish the wardens that decided to be more conservative with the severity estimate (full disclosure: my report is amongst them)

Thank you for the heads-up. I believe there were so many duplicates of this finding that I made two piles in the judging sheet; I have yet to go over #499 and many other dups.

Feel free to check-in with me tomorrow if I haven't fixed the judging by then

tbtstl commented 2 years ago

I actually think this one should be high severity – it's important that these token ownership percentages end up consistent on a long enough time scale

GalloDaSballo commented 2 years ago

I appreciate the generosity of the sponsor. Ultimately, the bug can happen only if (baseTokenId + schedule) is greater than 100, which, because the schedule is 100 / X, will not always happen, and the loss will not be total but limited to the specific tokens for which the modulo is necessary.

For those reasons (conditionality of bug due to admin input), I think Medium Severity to be more appropriate.

I do recommend mitigation nonetheless