code-423n4 / org

Code4rena Governance and Discussion

Approve Race Condition - Suggested NC or Invalid #51

Closed GalloDaSballo closed 9 months ago

GalloDaSballo commented 1 year ago

See my older thoughts: https://github.com/code-423n4/2022-01-yield-findings/issues/6 https://github.com/code-423n4/2022-05-backd-findings/issues/117

The approval race is technically real, and whether it counts as a threat depends on your perspective and bias.

If you believe that tokens you've approved still belong to you, then you may be inclined to say that approve is vulnerable to front-running.

However, we can counter that by noting that cancelling the allowance, whether via approve(X, 0) or via decreaseAllowance(X, 123), can itself be front-run, meaning there is always the possibility that an allowance is used before it's revoked.

The alternative perspective (the one I hold) is that granting an allowance is equivalent to giving your coins away, because the race condition above means that even while clawing them back you could still lose them.

The reason some people argue this to be valid is the case of increasing an allowance: first setting approve(X, 1) and then calling approve(X, n) will put the first allowed tokens at risk, as they could be "doubly spent".

My overall takeaway is to treat any allowance as if the tokens are already spent / lost.

And to set back to approve 0.
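The revocation race described above can be sketched with a toy allowance model (Python; the `Token` class is a hypothetical minimal ERC20-style ledger, and the front-run is modeled simply by executing Alice's spend before Bob's revocation lands):

```python
# Minimal ERC20-style ledger (hypothetical, for illustration only).
class Token:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}

    def approve(self, owner, spender, amount):
        # Plain ERC20 approve: overwrites whatever allowance was there.
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        key = (owner, spender)
        if self.allowances.get(key, 0) < amount:
            raise ValueError("insufficient allowance")
        self.allowances[key] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

t = Token({"bob": 100, "alice": 0})
t.approve("bob", "alice", 100)        # Bob grants the allowance

# Bob later broadcasts approve("alice", 0) to revoke, but Alice's
# transferFrom is mined first (the front-run):
t.transfer_from("alice", "bob", "alice", 100)
t.approve("bob", "alice", 0)          # the revocation lands too late

print(t.balances["alice"])  # 100: spent before the revocation took effect
```

The point of the sketch: revocation offers no guarantee, which is why any granted allowance is best treated as already spent.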

Given the above, I would recommend Non-Critical severity, or outright making it invalid, as race conditions are a property of public-mempool chains.

GalloDaSballo commented 1 year ago

Other Judges (Please add to this):

@0xean Initially High https://github.com/code-423n4/2022-08-olympus-findings/issues/77

Changed to QA https://github.com/code-423n4/2022-10-thegraph-findings/issues/293

@HardlyDifficult https://github.com/code-423n4/2022-02-foundation-findings/issues/14

Discussion with @alcueca (Suggested to invalidate) https://github.com/code-423n4/2022-01-yield-findings/issues/6#issuecomment-1028092982

pashov commented 1 year ago

I think the explanation of the vulnerability fails to show its actual problem. Here is an example scenario:

  1. Bob approves Alice to spend 100 of his tokens
  2. After a while, Bob decides Alice should actually be able to spend only 50 of his tokens and sends a tx to set the allowance to 50
  3. Alice sees this tx and front-runs it to spend 100 tokens
  4. Now Alice again has an allowance of 50 and she can transfer those tokens out, totalling 150 tokens transferred

So Bob has only approved Alice to spend 100 and then decreased to 50, but she was able to transfer 150. Yes, you could say when Bob approved Alice to spend 100 tokens he could have treated the tokens as spent/lost, but then he lost 50 more.
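The four steps above can be simulated with a toy ledger (Python; hypothetical `Token` class, with the front-run modeled by ordering Alice's spend before Bob's new approval):

```python
# Toy ledger reproducing the scenario above (hypothetical, for illustration).
class Token:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount  # overwrite, per ERC20

    def transfer_from(self, spender, owner, to, amount):
        key = (owner, spender)
        if self.allowances.get(key, 0) < amount:
            raise ValueError("insufficient allowance")
        self.allowances[key] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

t = Token({"bob": 200, "alice": 0})
t.approve("bob", "alice", 100)                 # 1. Bob approves 100
t.transfer_from("alice", "bob", "alice", 100)  # 3. Alice front-runs, spends 100
t.approve("bob", "alice", 50)                  # 2. Bob's "reduce to 50" lands after
t.transfer_from("alice", "bob", "alice", 50)   # 4. Alice spends the fresh 50

print(t.balances["alice"])  # 150: more than Bob ever intended to allow
```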

Essentially this is stealing value from a user which should result in at least a Medium severity vulnerability in my opinion.

trust1995 commented 1 year ago

I agree that "race conditions are a property of public mempool chains"; however, there are good ways to mitigate this issue at the contract level so that the specific risk of losing X+Y from approve(X), approve(Y) is removed. Since 90% of users, I would say, do not expect that they are trusting the spender for X+Y, but for MAX(X, Y), it is worth pointing out as an M risk for sponsors.

For those unfamiliar with double spend mitigation, the options are:

  1. Use increaseAllowance() and decreaseAllowance().
  2. Force approve to toggle between zero and a non-zero value.

This is not to say I don't agree that users should consider any approved tokens as spent due to frontrunning risks.
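Both mitigations can be sketched on one toy model (Python; `GuardedToken` is hypothetical, loosely mirroring the USDT-style zero-toggle check and the OZ-style increase/decrease helpers):

```python
# Hypothetical token sketch combining both mitigations (illustration only).
class GuardedToken:
    def __init__(self):
        self.allowances = {}

    def approve(self, owner, spender, amount):
        current = self.allowances.get((owner, spender), 0)
        # Mitigation 2: force approvals to toggle through zero, so a
        # live non-zero allowance cannot be silently overwritten.
        if current != 0 and amount != 0:
            raise ValueError("must reset allowance to zero first")
        self.allowances[(owner, spender)] = amount

    # Mitigation 1: relative adjustments instead of absolute overwrites.
    def increase_allowance(self, owner, spender, delta):
        key = (owner, spender)
        self.allowances[key] = self.allowances.get(key, 0) + delta

    def decrease_allowance(self, owner, spender, delta):
        key = (owner, spender)
        current = self.allowances.get(key, 0)
        if current < delta:
            raise ValueError("decreased allowance below zero")
        self.allowances[key] = current - delta

t = GuardedToken()
t.approve("bob", "alice", 100)
try:
    t.approve("bob", "alice", 150)           # non-zero -> non-zero: rejected
except ValueError as e:
    print(e)
t.increase_allowance("bob", "alice", 50)     # the safe way to go 100 -> 150
print(t.allowances[("bob", "alice")])        # 150
```

Either mechanism removes the specific X+Y exposure: at worst the spender can use the current allowance, never the sum of old and new.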

GalloDaSballo commented 1 year ago

> I agree "race conditions are a property of public mempool chains", however there are good ways to mitigate this issue at the contract level so that the specific risk of losing X+Y of approve(X), approve(Y) is removed. Since 90% of users, I would say, do not expect that they are trusting spender for X+Y, but for MAX(X,Y), it is worthy enough to point out as M risk for sponsors.
>
> For those unfamiliar with double spend mitigation, the options are:
>
> 1. increaseAllowance() and decreaseAllowance().
> 2. Force approves to toggle between zero and non-zero value
>
> This is not to say I don't agree that users should consider any approved tokens as spent due to frontrunning risks.

What severity would you recommend given your considerations?

trust1995 commented 1 year ago

Unless it is an issue we have evidence that sponsor is aware of, I would say M.

GalloDaSballo commented 1 year ago

Specific counterargument to the above: if that's a Med, then why aren't all allowances Med?

## POC
By design an allowance can be revoked via approve(X, 0), but that revocation can be front-run; as such you should never give an allowance and should only ever transfer.

## Remediation
Delete the approve function, allow exclusively transfer and calls

Why is this invalid vs the above?

GalloDaSballo commented 1 year ago

Additional counterargument against Med: why aren't missing address(0) checks also a Med, then?

Answer: it's not a Med because it's the user that self-directs the "odd behaviour". In the same way, approving non-zero to non-zero with a malicious target is an odd behaviour. If our severities have any consistency, then we cannot argue that Med is appropriate, even if you believe this is a valid vulnerability, due to the triple conditionality involved.

trust1995 commented 1 year ago

We can look at it as different threat scenarios: front-running risk of first order, and of second order.

  1. approve(100) and then approve(0): users are mostly aware that their 100 is at risk.
  2. approve(100) and then approve(150): users are aware that 150 is at risk, not that 250 is at risk.

Concerning your reasoning: If we have a majority that agrees this behavior is only exhibited by reckless users, then I would definitely agree to decreasing it to QA level. That's why it's important to get some more opinions on this.

pashov commented 1 year ago

Allowance is a good functionality and a great idea. The problem is the way ERC20's approve() function sets it: it just overwrites the old value in storage. If decreaseAllowance() is used instead, here is how my previous example looks:

  1. Bob approves Alice to spend 100 of his tokens
  2. After a while, Bob decides Alice should actually be able to spend only 50 of his tokens and sends a tx to decrease the allowance by 50
  3. Alice sees this tx and front-runs it to spend 100 tokens
  4. Now Bob's tx to decrease the allowance fails, because it tries to do the math 0 - 50 which will underflow and throw an error.

This way Alice could have spent a maximum of 100 tokens, instead of 150 - Bob is protected.
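The same toy-ledger sketch as before, extended with a `decrease_allowance` that reverts on underflow (Python; hypothetical classes, illustration only), shows the loss capped at 100:

```python
# Toy ledger with decreaseAllowance (hypothetical, for illustration).
class Token:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def decrease_allowance(self, owner, spender, delta):
        key = (owner, spender)
        current = self.allowances.get(key, 0)
        if current < delta:
            # Mirrors the Solidity revert on the 0 - 50 underflow.
            raise ValueError("decreased allowance below zero")
        self.allowances[key] = current - delta

    def transfer_from(self, spender, owner, to, amount):
        key = (owner, spender)
        if self.allowances.get(key, 0) < amount:
            raise ValueError("insufficient allowance")
        self.allowances[key] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

t = Token({"bob": 200, "alice": 0})
t.approve("bob", "alice", 100)                 # 1. Bob approves 100
t.transfer_from("alice", "bob", "alice", 100)  # 3. Alice front-runs, spends 100
try:
    t.decrease_allowance("bob", "alice", 50)   # 2/4. Bob's decrease reverts
except ValueError:
    pass

print(t.balances["alice"])  # 100, not 150: Bob's extra 50 is protected
```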

And for the zero address checks, I think this is a great resource - https://forum.openzeppelin.com/t/removing-address-0x0-checks-from-openzeppelin-contracts/2222

I'd say the zero-address-check vulnerability is not that a user inputs 0 by mistake, but that the call was "coded" incorrectly, missing an argument or something. This was an issue in older Solidity versions, but since there are now msg.data length validations, it is a much lower severity issue.

Picodes commented 1 year ago

I also think this should be QA. In addition to what @GalloDaSballo pointed out, the attack also requires that the attacker waits until they see an approval transaction in the mempool to act and extract the user's funds, which seems unlikely to me: the user could transfer their tokens, use a private RPC to change their allowance, etc. The most rational thing for the attacker is to exploit the approval as soon as possible.

0xean commented 1 year ago

+1 to QA

If the spender is malicious the user is already in a bad situation as outlined by @Picodes

The ability for the already-malicious spender to extract more is potentially true, but I just don't think it's all that likely. It's also worth considering that most people still use infinite approvals, in which case maximum value extraction can already occur with no front-running.

GalloDaSballo commented 1 year ago

> Allowance is a good functionality and a great idea. The problem is in the way ERC20's approve() function sets it, just overwriting the old value in storage. If for example decreaseAllowance() is used, here is how my previous example looks:
>
> 1. Bob approves Alice to spend 100 of his tokens
> 2. After a while, Bob decides Alice should actually be able to spend only 50 of his tokens and sends a tx to decrease the allowance by 50
> 3. Alice sees this tx and front-runs it to spend 100 tokens
> 4. Now Bob's tx to decrease the allowance fails, because it tries to do the math 0 - 50 which will underflow and throw an error.
>
> This way Alice could have spent a maximum of 100 tokens, instead of 150 - Bob is protected.
>
> And for the zero address checks, I think this is a great resource - https://forum.openzeppelin.com/t/removing-address-0x0-checks-from-openzeppelin-contracts/2222
>
> I'd say zero-address checks vulnerability is not that a user is inputting 0 by mistake, but if the call was "coded" incorrectly, missing an argument or something. This has been an issue in older Solidity versions but since now there are msg.data length validations, this is a much lower severity issue.

The example shows how this is ultimately a biased discussion (glass half-full vs. half-empty): you are claiming that decreaseAllowance reduces the total at risk to 100; I'm claiming that the 100 is at risk because of the initial allowance, and will act accordingly.

There's a point at which the line of personal responsibility has to be drawn: giving away any allowance is a dangerous operation, and we don't label it as a vulnerability but as a property of the system.

zobront commented 1 year ago

+1 to @GalloDaSballo arguments.

trust1995 commented 1 year ago

Agree that likelihood is very low. If we factor that in against the potentially high impact of lost user funds, and that only an approved spender may abuse it, it is between L and M. We can refer to the C4 risk criteria:

> QA (Quality Assurance): Includes both Non-critical (code style, clarity, syntax, versioning, off-chain monitoring (events, etc.)) and Low risk (e.g. assets are not at risk: state handling, function incorrect as to spec, issues with comments). Excludes Gas optimizations, which are submitted and judged separately.
>
> 2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.

Which one sounds more like the issue at hand?

GalloDaSballo commented 1 year ago

> Agree that likelihood is very low. If we factor that in against potential high impact of lost user funds, and that only approved spender may abuse, it is between L and M. We can refer to the C4 risk criteria
>
> QA (Quality Assurance): Includes both Non-critical (code style, clarity, syntax, versioning, off-chain monitoring (events, etc.)) and Low risk (e.g. assets are not at risk: state handling, function incorrect as to spec, issues with comments). Excludes Gas optimizations, which are submitted and judged separately.
>
> 2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
>
> Which one sounds more like the issue at hand?

Per the Rules, it only affects:

Because those are layers and layers of conditionality, and because approve is the standard for ERC20s I believe Non-Critical to be the only appropriate severity.

A Low severity implies this can happen with low likelihood given a single specific user mistake; this instead requires multiple layers of recklessness.

A Low severity also implies that the issue can happen reliably, while the given risk can only occur if all those conditions align; and when they do, it's a direct outcome of misinterpreting how a public-mempool blockchain works, and arguably a feature of it.

Per the above, I recommend Non-Critical.

Edit: Rephrased to Non-Critical, we have NC as a category which does cover both Informational and Non-Critical findings

GalloDaSballo commented 1 year ago

We need a malicious target as additional conditionality to give strength to your argument; otherwise we would default to saying "any allowance is risky", while your argument directly implies that only the marginal allowance (the delta) is the part at risk.

trust1995 commented 1 year ago

I can get behind that reasoning.

gpersoon commented 1 year ago

I would say the probability of it going wrong is low (you have to time it carefully, and the receiver of the funds has to be malicious, or at least a bit weird), and the impact is usually low to medium (unless large amounts are being transferred). Combining these would lead to a Low risk issue, with increaseAllowance/decreaseAllowance as the solution.

Additional thought: it also depends on who is doing the approve. If it is a smart contract and the approve is done in the following way, then there is no issue:

```solidity
if (allowance(...) >= 100) approve(..., 50);
```

If the approve is done via a user interface and the user first inspects the allowance, sees it is 100 and then sets it to 50 then there is a small time window (probably one block, e.g. 12 seconds), where this might be abused.

Note: without the change in approval it doesn't matter if the receiver is malicious because it can't access more than the allowance.
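The guarded pattern above can be sketched as follows (Python; toy `Token` and a hypothetical `guarded_lower` helper — the key property is that, inside a contract, the allowance read and the approve execute atomically in one transaction, leaving no front-run window between them):

```python
# Toy ledger plus a guarded-approve pattern (hypothetical sketch).
class Token:
    def __init__(self):
        self.allowances = {}

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def allowance(self, owner, spender):
        return self.allowances.get((owner, spender), 0)

def guarded_lower(token, owner, spender, expected, new_amount):
    # In a contract this check and the approve run in the same transaction,
    # so the allowance cannot change between them.
    if token.allowance(owner, spender) >= expected:
        token.approve(owner, spender, new_amount)

t = Token()
t.approve("bob", "alice", 100)

# Case 1: no front-run -> the allowance is lowered to 50 as intended.
guarded_lower(t, "bob", "alice", 100, 50)
print(t.allowance("bob", "alice"))  # 50

# Case 2: Alice has already drained the allowance in a front-run
# (modeled here by the allowance dropping to 0) -> no fresh 50 is granted.
t.approve("bob", "alice", 0)
guarded_lower(t, "bob", "alice", 100, 50)
print(t.allowance("bob", "alice"))  # 0
```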

GalloDaSballo commented 1 year ago

> I would say, the probability of it going wrong is low (you have to time it carefully and the receiver of the funds has to be malicious) and the impact usually is low to medium (unless maybe large amounts are being transferred). Combining would lead to a low risk issue. With solution of increase/decrease allowance.

I would like to ask you more precisely what you believe is at risk with respect to the allowance, and why the marginal allowance would be Low risk versus the entire allowance (assuming the allowance is given to a malicious target).

HickupHH3 commented 1 year ago

I'd be more inclined to give it a Medium severity if there had been an instance of the race condition being exploited. In the years I've been in crypto, I've never seen this be a problem or part of an actual exploit.

As @0xean pointed out, most users tend to give infinite approvals anyway.

+1 to NC, at the very most low.

MrToph commented 1 year ago

I agree with your first post; I never really saw this as an issue. Imo it's a failure of Alice to think that any existing approval won't be spent before her new approve transaction is mined; Alice doesn't understand how a blockchain works. For me, adding increaseAllowance is in the same category as adding additional validation/zero-checks for parameters: it gives users additional protection, but it's not strictly needed (she can set her approval to zero, check whether she's been front-run, and then set the appropriate new approval).

GalloDaSballo commented 11 months ago

I would like to shoutout @alcueca for being vindicated by the recent changes introduced by OZ

[Screenshot, 2023-09-13: the OpenZeppelin change deprecating increaseAllowance/decreaseAllowance]

Original discussion: https://github.com/code-423n4/2022-01-yield-findings/issues/6

KnownFactC4 commented 9 months ago

Per the Autumn 2023 C4 Supreme Court verdicts, the verdict on this issue is:

We have long rejected the finding as anything above NC. OZ has deprecated increaseAllowance and decreaseAllowance.

We officially confirm:

  1. Approve and safeApprove front-running is NOT a valid vulnerability.
  2. Approve and safeApprove are NOT deprecated.
  3. increaseAllowance and decreaseAllowance ARE deprecated, although usage of those functions is not regarded as a bug report.

Link to verdict: https://docs.google.com/document/d/1Y2wJVt0d2URv8Pptmo7JqNd0DuPk_qF9EPJAj3iSQiE/edit#heading=h.gel2fq608ycy