calculateL2TokenAddress() may return old L2GraphToken address from cache leading to critical damage to user funds #291

code423n4 commented 1 year ago

Lines of code

https://github.com/code-423n4/2022-10-thegraph/blob/309a188f7215fa42c745b136357702400f91b4ff/contracts/l2/gateway/L2GraphTokenGateway.sol#L156
https://github.com/code-423n4/2022-10-thegraph/blob/309a188f7215fa42c745b136357702400f91b4ff/contracts/l2/gateway/L2GraphTokenGateway.sol#L236

Vulnerability details

Description

L2GraphTokenGateway's outboundTransfer handles L2->L1 Graph token transfers, while finalizeInboundTransfer finalizes L1->L2 transfers.

outboundTransfer burns the transferred amount on L2 using the following code:

L2GraphToken(calculateL2TokenAddress(l1GRT)).bridgeBurn(outboundCalldata.from, _amount);

while finalizeInboundTransfer mints the transferred amount on L2:

L2GraphToken(calculateL2TokenAddress(l1GRT)).bridgeMint(_to, _amount);

The problem is that calculateL2TokenAddress may return a stale GraphToken address:

function calculateL2TokenAddress(address l1ERC20) public view override returns (address) {
    if (l1ERC20 != l1GRT) {
        return address(0);
    }
    return address(graphToken());
}

graphToken() is defined in Managed.sol:

function graphToken() internal view returns (IGraphToken) {
    return IGraphToken(_resolveContract(keccak256("GraphToken")));
}

_resolveContract returns the cached proxy address when one exists and queries the controller only when the cache entry is empty; a stale cached entry is never refreshed here:

function _resolveContract(bytes32 _nameHash) internal view returns (address) {
    address contractAddress = addressCache[_nameHash];
    if (contractAddress == address(0)) {
        contractAddress = controller.getContractProxy(_nameHash);
    }
    return contractAddress;
}
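
To make the stale window concrete, here is a minimal, self-contained sketch (hypothetical contract and interface names; only getContractProxy is taken from the snippets above) showing how a cached entry persists after the controller is updated:

interface IController {
    function getContractProxy(bytes32 _nameHash) external view returns (address);
}

contract StaleCacheDemo {
    mapping(bytes32 => address) public addressCache;
    IController public controller;

    // Mirrors _resolveContract: a non-zero cache entry always wins, so an
    // address cached before controller.setContractProxy() was called keeps
    // being returned until the cache is explicitly synced.
    function resolve(bytes32 _nameHash) public view returns (address) {
        address cached = addressCache[_nameHash];
        return cached != address(0) ? cached : controller.getContractProxy(_nameHash);
    }
}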

Note that anyone can sync the contracts using this function:

function syncAllContracts() external {
    _syncContract("Curation");
    _syncContract("EpochManager");
    _syncContract("RewardsManager");
    _syncContract("Staking");
    _syncContract("GraphToken");
    _syncContract("GraphTokenGateway");
}
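
_syncContract itself is not shown above; presumably it refreshes the cached address from the controller, roughly along these lines (a hedged reconstruction, not the verified source):

function _syncContract(string memory _name) internal {
    bytes32 nameHash = keccak256(abi.encodePacked(_name));
    address contractAddress = controller.getContractProxy(nameHash);
    // Overwrite any stale cache entry with the controller's current address.
    if (addressCache[nameHash] != contractAddress) {
        addressCache[nameHash] = contractAddress;
    }
}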

If a new GraphToken proxy is deployed on L2 (for example, as part of a fix), then until syncAllContracts() is called, two critical impacts are possible.

  1. Users finalizing transfers to L2 will be minted old Graph tokens instead of the new, functioning ones.
  2. Users transferring to L1 will burn old Graph tokens instead of the new ones, which lets an attacker effectively transfer the same balance twice.

Impact

Protocol users may lose funds in the window between the deployment of a new GraphToken proxy and the call to syncAllContracts().

Proof of Concept

The Graph wishes to replace the GraphToken contract and invalidate the previous one. The team deploys a new proxy and calls controller.setContractProxy() to register the new GraphToken with the gateway. After this update but before syncAllContracts() is called, a user withdraws all of their tokens to L1. The withdrawal burns the old GraphToken rather than the new one, so the attacker ends up with more value than they should (assuming their balance is also credited in the new proxy contract).

Tools Used

Manual audit

Recommended Mitigation Steps

Several options:

  1. Remove the caching functionality for GraphToken (a minimal sketch follows this list).
  2. Have any change to a contract proxy trigger an update call to every consumer of the controller (a subscription model).
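
A minimal sketch of option 1, assuming the Managed.sol internals shown above: resolve GraphToken directly from the controller on every call, so a newly registered proxy takes effect immediately and no stale-cache window exists.

function graphToken() internal view returns (IGraphToken) {
    // Bypass addressCache entirely; always ask the controller.
    return IGraphToken(controller.getContractProxy(keccak256("GraphToken")));
}

This trades one extra external call per resolution for the guarantee that setContractProxy() takes effect atomically.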

Note to judges: I am honestly not sure where on the severity scale this fits. I am pretty certain it's not QA because of the danger to user funds, and the preconditions make me feel it's probably not a high. I trust your judgement on this.

0xean commented 1 year ago

Inclined to mark this as QA. It requires the deployment of a new proxy, which means the canonical L2 Graph token is no longer the token. Even once that very unlikely scenario is realized, it requires the team to fail to configure the bridge correctly and call sync.

Will await sponsor review, but this seems pretty far outside the realm of practical outcomes.

pcarranzav commented 1 year ago

I agree with @0xean's assessment; this sounds more like QA. Most upgrades happen by simply pointing the proxy to a new implementation rather than setting a new proxy address. In a critical scenario where we need to replace the proxy (which would already mean something has gone terribly wrong), the Governor can execute the proxy change and the sync in a single block by using a tx batch (note that the Governor is a Gnosis Safe).

0xean commented 1 year ago

Downgrading to QA; closing as a dupe of #293, which is the warden's QA report.