code-423n4 / 2023-09-maia-findings


The check if (callee == address(0)) revert UnrecognizedBridgeAgent(); in RootBridgeAgent.sol and lack of data synchronization permits the sending of messages to removed branch bridge agents. #548

Open c4-submissions opened 1 year ago

c4-submissions commented 1 year ago

Lines of code

https://github.com/code-423n4/2023-09-maia/blob/f5ba4de628836b2a29f9b5fff59499690008c463/src/RootBridgeAgent.sol#L818
https://github.com/code-423n4/2023-09-maia/blob/f5ba4de628836b2a29f9b5fff59499690008c463/src/RootBridgeAgent.sol#L910

Vulnerability details

Impact

When a call is made from rootBridgeAgent to branchBridgeAgent, there's a check in place to prevent a call to address(0) to ensure the destination agent exists. However, this setup still allows deactivated Bridge Agents to receive messages.

Proof of Concept

In the contract CoreBranchRouter.sol, there's a function designed to remove a BranchBridgeAgent:

function _removeBranchBridgeAgent(address _branchBridgeAgent) internal {
    // Save Port Address to memory
    address _localPortAddress = localPortAddress;

    // Revert if it is not an active BridgeAgent
    if (!IPort(_localPortAddress).isBridgeAgent(_branchBridgeAgent)) revert UnrecognizedBridgeAgent();

    // Remove BridgeAgent
    IPort(_localPortAddress).toggleBridgeAgent(_branchBridgeAgent);
}

However, in practice, it doesn't truly remove the BranchBridgeAgent; it merely toggles its status:

function toggleBridgeAgent(address _bridgeAgent) external override requiresCoreRouter {
    isBridgeAgent[_bridgeAgent] = !isBridgeAgent[_bridgeAgent]; //@audit inactive

    emit BridgeAgentToggled(_bridgeAgent);
}

Additionally, there is no data synchronization between the Root Chain and the Branch Chain after a Bridge Agent is removed on its branch, so the getBranchBridgeAgent mapping on the Root Chain holds stale data after a branchBridgeAgent is toggled off.

As a result, the address of the toggled BranchBridgeAgent remains in the getBranchBridgeAgent mapping:

RootBridgeAgent.sol:

62:    /// @notice Chain -> Branch Bridge Agent Address. For N chains, each Root Bridge Agent Address has M =< N Branch Bridge Agent Address.
       mapping(uint256 chainId => address branchBridgeAgent) public getBranchBridgeAgent;

Consequently, the check if (callee == address(0)) revert UnrecognizedBridgeAgent(); doesn't function for "removed" agents, allowing them to still receive messages:

RootBridgeAgent.sol:

808:function _performCall(
    uint16 _dstChainId,
    address payable _refundee,
    bytes memory _payload,
    GasParams calldata _gParams
) internal {
    // Get destination Branch Bridge Agent
    address callee = getBranchBridgeAgent[_dstChainId];

    // Check if valid destination
    if (callee == address(0)) revert UnrecognizedBridgeAgent();
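The zero-address check passes for a toggled-off agent because its address is never cleared from the mapping. A minimal hardened sketch of the same function (the `isBranchBridgeAgentActive` mapping is hypothetical, not part of the audited codebase, and would need to be kept in sync from the branch):

```solidity
// Hypothetical activeness flag, updated whenever a branch agent is toggled.
mapping(uint256 chainId => bool active) public isBranchBridgeAgentActive;

function _performCall(
    uint16 _dstChainId,
    address payable _refundee,
    bytes memory _payload,
    GasParams calldata _gParams
) internal {
    // Get destination Branch Bridge Agent
    address callee = getBranchBridgeAgent[_dstChainId];

    // Check that the destination exists AND is still active
    if (callee == address(0) || !isBranchBridgeAgentActive[_dstChainId]) {
        revert UnrecognizedBridgeAgent();
    }
    // ... dispatch the LayerZero message as before
}
```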

Tools Used

VS Code

Recommended Mitigation Steps

Consider fully removing branch agents instead of toggling them, and ensure data synchronization between the branch chain and the root chain to address this issue effectively.
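A sketch of this mitigation, assuming a hypothetical `removeBridgeAgent` on the Port and a hypothetical cross-chain notification helper (the real codebase would route the sync through its existing LayerZero messaging path):

```solidity
function _removeBranchBridgeAgent(address _branchBridgeAgent) internal {
    // Save Port Address to memory
    address _localPortAddress = localPortAddress;

    // Revert if it is not an active BridgeAgent
    if (!IPort(_localPortAddress).isBridgeAgent(_branchBridgeAgent)) revert UnrecognizedBridgeAgent();

    // Fully remove the agent instead of flipping a boolean flag
    // (hypothetical function: deletes the isBridgeAgent entry outright)
    IPort(_localPortAddress).removeBridgeAgent(_branchBridgeAgent);

    // Notify the Root Chain so getBranchBridgeAgent[chainId] is cleared as well
    // (hypothetical helper: sends a cross-chain sync message)
    _notifyRootOfRemoval(_branchBridgeAgent);
}
```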

Assessed type

Invalid Validation

c4-pre-sort commented 1 year ago

0xA5DF marked the issue as sufficient quality report

c4-pre-sort commented 1 year ago

0xA5DF marked the issue as primary issue

0xBugsy commented 1 year ago

We believe this is a duplicate of #564

c4-judge commented 1 year ago

alcueca marked the issue as satisfactory

alcueca commented 1 year ago

I believe that this is different from #564: here the vulnerability is that Root-to-Branch messages are still allowed after the bridge agent is removed, while in #564 it is the bridge agent on the branch that can also send messages in the opposite direction, because of a different bug.

0xBugsy commented 1 year ago

Removing Bridge Agents really is something that should virtually never happen in production, but if it does, we need to make sure users are still able to retrieve their failed settlements 'stuck' in the branch. By deactivating the bridge agent, what we intend to achieve is removing its ability to request branch port tokens.

alcueca commented 1 year ago

From the sponsor:

If a branch bridge agent is deactivated, we should only be able to remove pending deposits/settlements and not spend any new tokens; that's why only the tx types that don't involve new token deposits/settlements are permitted.

alcueca commented 1 year ago

To be clarified in documentation

c4-judge commented 1 year ago

alcueca changed the severity to QA (Quality Assurance)

c4-judge commented 1 year ago

alcueca marked the issue as grade-b

viktorcortess commented 1 year ago

Hello Alcueca, and thanks to Mr. mcgrathcoutinho for drawing attention to my humble submission, and to MrPotatoMagic for his detailed explanation of the problem that we found.

Could you please explain why it was downgraded to QA grade-B when it describes the same problem?

From 564:

Impact: Bridge agents are removed/toggled off to stop communication to/from them (confirmed by sponsor - check image below) in case of some situation such as a bug in the protocol, or in case of an upgrade to a newer version of the protocol (in case LayerZero decides to upgrade/migrate their messaging library)

Note: Over here there is no call made back to the root chain again which means the synced branch bridge agent is still synced in root (as shown here). But even if it is synced in Root chain, there are no checks or even state variables to track activeness (bool isActive) of branch bridge agent on the Root chain end as well. Not related to this POC but good to be aware of.

From my submission:

Impact: When a call is made from rootBridgeAgent to branchBridgeAgent, there's a check in place to prevent a call to address(0) to ensure the destination agent exists. However, this setup still allows deactivated Bridge Agents to receive messages.

Additionally, data synchronization between the Root Chain and the Branch Chain is lacking after the removal of a Bridge Agent in its branch, resulting in outdated data in the getBranchBridgeAgent mapping on the Root Chain after toggling a branchBridgeAgent.

So, in both cases, the impact is that toggling doesn't work as intended, even though MrPotatoMagic described the other side of the problem.

In the comment above, you mentioned that my issue is different because the message is going in the opposite direction.

But there was nothing in the documentation about toggling that should work only in one direction, and as the screenshot from the sponsor says, it's for "disabling bridge agent from further communication" - and it doesn't work properly, because the bridge agent is still able to communicate after toggling.

However, the protocol operates as a network with a central root and numerous identical branches connected through the root. Messages travel in both directions, and it's a problem when an emergency function designed to halt this connection doesn't function properly, regardless of the direction it goes.

I appreciate the sponsor marking me as a duplicate of issue #564, and I hope that we could attract attention to the problem of toggling bridge agents.