code-423n4 / 2024-03-coinbase-findings


(Still) incorrect check that point is not identity element. #183

c4-bot-4 opened 6 months ago

c4-bot-4 commented 6 months ago

Lines of code

https://github.com/code-423n4/2024-03-coinbase/blob/e0573369b865d47fed778de00a7b6df65ab1744e/src/FreshCryptoLib/FCL.sol#L79

Vulnerability details

Impact

The identity element may be passed as the public key in FCL.ecdsa_verify().

Proof of Concept

The check that the public key (Qx, Qy) is not the identity element is performed in FCL.ecAff_isOnCurve(), which has been corrected to

if (((0 == x) && (0 == y)) || x == p && y == p) {
    return false;
}

Here x and y are the coordinates of the public key. This still fails to consider the case where x and y are some higher multiple of p. The rest of the on-curve check, as well as all subsequent curve calculations, are (of course) done mod p, so such values are equivalent representations of the (0,0) identity element yet pass this critical check (graded High in the previous audit; see ID 8).
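
As an aside (not part of the original finding), the "done mod p" point can be illustrated directly: the curve arithmetic only sees residues modulo p, so two coordinate values that differ by p behave identically wherever mulmod and addmod are used. A minimal, hypothetical sketch:

pragma solidity ^0.8.0;

contract ResidueSketch {
    // curve prime field modulus (hypothetical demo contract, same constant as FCL)
    uint256 constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF;

    // For any x where x + p still fits in a uint256, the modular arithmetic used by
    // the curve formulas cannot distinguish x from x + p.
    function sameResidue(uint256 x) external pure returns (bool) {
        require(x <= type(uint256).max - p, "x + p would not fit in uint256");
        return mulmod(x, x, p) == mulmod(x + p, x + p, p)
            && addmod(x, 1, p) == addmod(x + p, 1, p);
    }
}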

Recommended Mitigation Steps

Check modulo p.

if ((0 == x % p) && (0 == y % p)) {
    return false;
}

Assessed type

Invalid Validation

c4-pre-sort commented 6 months ago

raymondfam marked the issue as sufficient quality report

c4-pre-sort commented 6 months ago

raymondfam marked the issue as primary issue

raymondfam commented 6 months ago

Could have had a coded POC to illustrate the flaw.

wilsoncusack commented 6 months ago

p * 2 exceeds uint256
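
A minimal sketch (not from the original thread) of the sponsor's point: p is larger than 2^255, so 2 * p cannot be represented in the uint256 that holds the coordinates.

pragma solidity ^0.8.0;

contract TwoPOverflowSketch {
    uint256 constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF;

    // type(uint256).max - p < p means p + p would wrap around, i.e. no multiple of p
    // other than 0 and p itself fits in the uint256 holding the x and y coordinates.
    function twoPExceedsUint256() external pure returns (bool) {
        return type(uint256).max - p < p;
    }
}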

c4-sponsor commented 6 months ago

wilsoncusack (sponsor) disputed

3docSec commented 6 months ago

As noted by the sponsor:

This still fails to consider the case where x and y are some higher multiple of p

Given that the constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF, no higher multiple of p can fit in the uint256 type that holds the x and y values.

c4-judge commented 6 months ago

3docSec marked the issue as unsatisfactory: Invalid

d3e4 commented 6 months ago

@wilsoncusack There is still a failure to check that the coordinates of the public key are valid. Since x and y are always reduced modulo p in subsequent calculations, if (x, y) is a valid key with x < 2^256 - p or y < 2^256 - p, then e.g. (x + p, y) is also accepted as a valid key. Such keys are rare, but not too hard to find (about one in two billion (2^31)). This means that an attacker can create a key pair such that, for any single signed message, he can produce up to three additional public keys which will all be validated by ecdsa_verify in the SmartWallet but rejected elsewhere (where they are hopefully correctly verified). This breaks the important assumption that a signature's validity is the same on every platform.
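
To make the "not too hard to find" point concrete, here is a hypothetical helper that is not part of the original comment: since p ≡ 3 (mod 4), any candidate x can be tested directly by checking whether x^3 + ax + b is a quadratic residue and, if so, recovering y by a single exponentiation. Small x values such as those used in the PoC further down the thread can be produced this way.

pragma solidity ^0.8.0;

contract SmallXFinderSketch {
    // secp256r1 parameters, as in FCL.sol
    uint256 constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF;
    uint256 constant a = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC;
    uint256 constant b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B;

    // Modular exponentiation via the EIP-198 precompile at address 0x05.
    function modexp(uint256 base, uint256 e, uint256 m) internal view returns (uint256) {
        (bool ok, bytes memory out) =
            address(0x05).staticcall(abi.encode(uint256(32), uint256(32), uint256(32), base, e, m));
        require(ok, "modexp failed");
        return abi.decode(out, (uint256));
    }

    // Returns (true, y) if some point (x, y) lies on the curve, (false, 0) otherwise.
    // Because p = 3 (mod 4), y = w^((p+1)/4) mod p is a square root of w whenever one exists.
    function yForX(uint256 x) public view returns (bool, uint256) {
        uint256 w = addmod(addmod(mulmod(mulmod(x, x, p), x, p), mulmod(a, x, p), p), b, p);
        uint256 y = modexp(w, (p + 1) / 4, p);
        if (mulmod(y, y, p) == w) {
            return (true, y); // e.g. x = 5, 6, 8, 9, as in the PoC below
        }
        return (false, 0);
    }
}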

3docSec commented 6 months ago

The above attack is novel compared to what was reported in the finding, so it can't be considered for judging.

However, it's worth noting that the scenario mentioned above requires the key to be known and to have specific characteristics, so the attacker has to be in control of the key; it's a self-hack attack.

The warden is invited not to involve the sponsor during the QA phase, which is addressed to judges. If anything needs an escalation, we will be the first ones to make it happen.

wilsoncusack commented 6 months ago

@d3e4 thanks for your time and attention on this. Could you please produce a proof of concept for this attack? The X and Y input values are still bound by the uint256 type even if they are subsequently reduced modulo p, so I am not sure I am following.

d3e4 commented 6 months ago

@wilsoncusack p < 2^256, so x + p may be in [p, 2^256 - 1], i.e. both x and x + p are in [0, 2^256 - 1], but only x is in [0, p - 1]. Here are four examples of public key pairs (x, y) and (x + p, y) such that both pass validation by ecAff_isOnCurve.

pragma solidity ^0.8.0;

contract Test {

    //curve prime field modulus
    uint256 constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF;
    //short weierstrass first coefficient
    uint256 constant a = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC;
    //short weierstrass second coefficient
    uint256 constant b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B;

    function testDuplicatePublicKey() public pure {
        uint256 x; uint256 y;

        (x,y) = (5, 31468013646237722594854082025316614106172411895747863909393730389177298123724);
        assert(ecAff_isOnCurve(x, y));                  //   correct   validation
        assert(ecAff_isOnCurve(x + p, y));              // incorrect   validation
        assert(ecAff_isOnCurveCORRECTED(x, y));         //   correct   validation
        assert(!ecAff_isOnCurveCORRECTED(x + p, y));    //   correct invalidation

        (x,y) = (6, 24739918328848417526480033809572464882617473122143770809273908485596236620747);
        assert(ecAff_isOnCurve(x, y));
        assert(ecAff_isOnCurve(x + p, y));
        assert(ecAff_isOnCurveCORRECTED(x, y));
        assert(!ecAff_isOnCurveCORRECTED(x + p, y));

        (x,y) = (8, 82784132184742902406611976374283817475915538652053173371854007222633983132083);
        assert(ecAff_isOnCurve(x, y));
        assert(ecAff_isOnCurve(x + p, y));
        assert(ecAff_isOnCurveCORRECTED(x, y));
        assert(!ecAff_isOnCurveCORRECTED(x + p, y));

        (x,y) = (9, 64265364456294082747112527522981594904960689488638802860996194286474266915479);
        assert(ecAff_isOnCurve(x, y));
        assert(ecAff_isOnCurve(x + p, y));
        assert(ecAff_isOnCurveCORRECTED(x, y));
        assert(!ecAff_isOnCurveCORRECTED(x + p, y));
    }

    function ecAff_isOnCurve(uint256 x, uint256 y) internal pure returns (bool) {
        if (((0 == x) && (0 == y)) || (x == p && y == p)) {
            return false;
        }
        unchecked {
            uint256 LHS = mulmod(y, y, p); // y^2
            uint256 RHS = addmod(mulmod(mulmod(x, x, p), x, p), mulmod(x, a, p), p); // x^3+ax
            RHS = addmod(RHS, b, p); // x^3 + a*x + b

            return LHS == RHS;
        }
    }

    function ecAff_isOnCurveCORRECTED(uint256 x, uint256 y) internal pure returns (bool) {
        // note >= instead of ==, and || instead of && (it wasn't || that was the problem, but the ==)
        if (((0 == x) && (0 == y)) || (x >= p || y >= p)) {
            return false;
        }
        unchecked {
            uint256 LHS = mulmod(y, y, p); // y^2
            uint256 RHS = addmod(mulmod(mulmod(x, x, p), x, p), mulmod(x, a, p), p); // x^3+ax
            RHS = addmod(RHS, b, p); // x^3 + a*x + b

            return LHS == RHS;
        }
    }
}

wilsoncusack commented 6 months ago

Thank you for this! Will respond shortly.

wilsoncusack commented 6 months ago

@3docSec we are validating the possible impact here, but I see this as a legitimate clarification of the original issue.

wilsoncusack commented 6 months ago

@d3e4 any comment on this?

This new finding seems correct to me. However the probability of it happening is low as they said and I am not sure what the attacker would gain from it. Usually we want to protect against the case where someone can forge a signature for a valid public key for which they do not have the private key. In this attack the attacker can choose 3 "invalid" public keys and generate valid signatures for them if it knows the valid private key.

d3e4 commented 6 months ago

@d3e4 any comment on this?

This new finding seems correct to me. However the probability of it happening is low as they said and I am not sure what the attacker would gain from it. Usually we want to protect against the case where someone can forge a signature for a valid public key for which they do not have the private key. In this attack the attacker can choose 3 "invalid" public keys and generate valid signatures for them if it knows the valid private key.

The only low probability here is in the sense that such a key arises by chance only once in every four (sorry about the miscalculation above) billion key generations. But the probability that there are such public keys out there in actual use is very high. Note that, starting from such a key, the attacker does not need to know the private key to generate the false duplicate public keys that validate the same message in ecdsa_verify. It is also certain that someone can create this kind of key deliberately if they want to.

I don't have enough insight into the rest of your project to say anything specific about how this could be abused, but could perhaps one kind of abuse be that the attacker uses this invalid key to successfully perform some action via the SmartWallet with some result, and then, through some other integration involving this key, the victims trying to refer to the same key in connection with the result of this action experience failure (or vice versa)? It could for example be in the context of proving ownership of a key, or be related to a registry of keys.

The "forging" enabled here is a forgery of the key. In the claim "this key has signed this message", this issue is an invalid verification of "this key", whereas the other kind of forgery arises from a failure to verify "has signed". Both must be verified to validate the full claim. In general, the issue arises when the signature is validated in different contexts. In essence, ecdsa_verify violates one of the main specifications which guarantee the security of ECDSA.

3docSec commented 6 months ago

I see this as a legitimate clarification of the original issue

OK @wilsoncusack , noted, thank you very much for weighing in.

We can confidently confirm the legitimacy of the issue and move on to discussing its impact.

The faulty logic is in ecAff_isOnCurve, which is used only once, at the top of ecdsa_verify:

File: FCL.sol
50:     function ecdsa_verify(bytes32 message, uint256 r, uint256 s, uint256 Qx, uint256 Qy) internal view returns (bool) {
51:         if (r == 0 || r >= n || s == 0 || s >= n) {
52:             return false;
53:         }
54: 
55:         if (!ecAff_isOnCurve(Qx, Qy)) {
56:             return false;
57:         }

The key point here is the parameters passed, Qx, Qy, which are the public key stored in the wallet:

File: CoinbaseSmartWallet.sol
299:         bytes memory ownerBytes = ownerAtIndex(sigWrapper.ownerIndex);
---
317:             (uint256 x, uint256 y) = abi.decode(ownerBytes, (uint256, uint256));

Which comes from the wallet owner having previously added this public key via an authenticated request.

Since ecAff_isOnCurve is stateless, it would be logically identical (and possibly more gas efficient 😉) if this function were called only once, in addOwnerPublicKey, to validate the public key input. It's then clear that this function is a verification of an input which, in this case, is always provided by a trusted source - the legitimate owner.
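
A minimal sketch of that idea, under stated assumptions (the function and storage names here are hypothetical, not the wallet's real API, and owner-only access control is omitted): validate the key once, at registration, so every later verification can assume a proper secp256r1 point.

pragma solidity ^0.8.0;

contract OwnerKeyRegistrationSketch {
    // secp256r1 parameters, as in FCL.sol
    uint256 constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF;
    uint256 constant a = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFC;
    uint256 constant b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B;

    // Hypothetical stand-in for the wallet's owner storage.
    mapping(uint256 => bytes) internal _ownerAtIndex;
    uint256 internal _nextOwnerIndex;

    // Hypothetical registration hook (the real wallet would also enforce owner-only access control).
    function registerOwnerPublicKey(uint256 x, uint256 y) external {
        // Reject out-of-range coordinates and the identity encoding up front.
        require(x < p && y < p && !(x == 0 && y == 0), "invalid coordinates");
        // y^2 == x^3 + a*x + b (mod p)
        uint256 rhs = addmod(addmod(mulmod(mulmod(x, x, p), x, p), mulmod(a, x, p), p), b, p);
        require(mulmod(y, y, p) == rhs, "not on secp256r1");
        _ownerAtIndex[_nextOwnerIndex++] = abi.encode(x, y);
    }
}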

In other words, the vulnerability allows an owner to add and use a public key that is NOT on the elliptic curve, but the system incorrectly accepts it as a valid public key.

Even if we assume the worst case - that these public keys that are not on the elliptic curve can be cracked instantly - the impact could still be categorized as user error, because it is the legitimate owner who provided a key that is not on the curve in the first place. This is what I meant by the expression "self-hack" that I used before without (my bad) elaborating further.

So while the finding is interesting, we all learned from it, and @d3e4 did a hell of a job discovering this hidden quirk and factually proving his statements, the C4 rules are clear that user errors fall under the QA umbrella.

c4-judge commented 6 months ago

3docSec changed the severity to QA (Quality Assurance)

c4-judge commented 6 months ago

3docSec marked the issue as grade-a

wilsoncusack commented 6 months ago

Thanks both.

@3docSec let's keep in mind the README, where we said

Exploits should only be considered in the context of a call path starting with ecdsa_verify. Other functions are not intended to be called directly.

and for invariants

FreshCryptoLib: All calls with valid sets of message, r, s, Qx, and Qy for the secp256r1 curve should return true. All calls with invalid sets of message, r, s, Qx, and Qy for the secp256r1 curve should revert or return false.

It seems to me this DOES constitute a false positive, which is significant. But in the context of our project it is not clear that it puts any funds at risk, and yes, it likely falls in the user error bucket.
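
A hypothetical sketch of how that false positive would be exercised against the invariant (the contract, import path, and input values are assumptions, not real data): any valid tuple whose public key has x < 2^256 - p is, per the analysis above, also expected to verify with Qx = x + p.

pragma solidity ^0.8.0;

import {FCL} from "../src/FreshCryptoLib/FCL.sol"; // import path assumed from the repo layout

contract InvariantSketch {
    uint256 constant p = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF;

    // msgHash, r, s, x, y are placeholders for a known-valid secp256r1 signature
    // whose public key satisfies x < 2^256 - p.
    function checkFalsePositive(bytes32 msgHash, uint256 r, uint256 s, uint256 x, uint256 y) external view {
        require(FCL.ecdsa_verify(msgHash, r, s, x, y), "baseline tuple should verify");
        // (x + p, y) is not a point on secp256r1, yet per the discussion above it passes
        // ecAff_isOnCurve and verifies identically, producing the false positive.
        require(FCL.ecdsa_verify(msgHash, r, s, x + p, y), "expected false positive");
    }
}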

d3e4 commented 6 months ago

@3docSec

Which comes from the wallet owner having previously added this public key via an authenticated request.

How does this work? Do you mean it is not possible to add this invalid key? Because if it is possible, this illustrates precisely my idea of a key registry: the attacker can now illegitimately claim to have this key registered as if it were valid!

It's then clear that this function is a verification of an input which, in this case, is always provided by a trusted source - the legitimate owner.

NO, NO, NO! This is the verification of an ECDSA signature. It is absolutely not to be trusted! This is precisely what needs to be verified!

I managed to find this audit report which found this very same issue, where they rated it as High, in case this offers any clearer perspective. https://reports.zellic.io/publications/biconomy-secp256r1/findings/high-secp256r1-validity-of-public-keys-is-not-checked

All calls with invalid sets of message, r, s, Qx, and Qy for the secp256r1 curve should revert or return false.

This was indeed listed as a "Main invariant". I think the breaking of main invariants should be very concerning.

3docSec commented 6 months ago

NO, NO, NO! This is the verification of an ECDSA signature. It is absolutely not to be trusted! This is precisely what needs to be verified!

The fault is not in the signature verification, as you wrote in this statement, but in the verification of the public key. I agree with you that a permissive signature verification is most likely critical severity, but that's very far from what this finding is about. This finding is about public key validation, and if this point is not clear - as it appears from your reply - I strongly recommend you review my reasoning above. I explained it step by step; please make a genuine effort to follow it, because the answer to the question "How does this work?" was already there for you.


The report you linked is about a generic Secp256r1 library, so how the library will be used is not known to the auditors, and they certainly had to score findings based on the worst case they could imagine. It was also produced by a private audit company, where severity assessment possibly follows different criteria than it does here.

Quoting the report:

The impact on the security of projects making use of the Secp256r1 library for signature verification is highly dependent on how signatures are otherwise used. See section ref↗ for a discussion of this as well as the reason for our severity rating.

If you follow the ref link, you will see the worst case they had in mind when awarding high severity:

The main point of possible concern would be if the same signature is independently verified in two (or more) places, with different code. The system as a whole might be built on the assumption that a signature is either valid or invalid and that all signature verifications will agree on validity.

This point totally makes sense, and it does not speak of any capability of signature forging - that's likely why they picked Medium impact and not the flashing red alert you'd expect from a leaky signature check.

CoinbaseSmartWallet does not fall into this "system as a whole" case, as CoinbaseSmartWallet._validateSignature is the single source of truth, with only one path for each type of signature; no agreement is to be made, and no assumption of consensus can be broken.


All calls with invalid sets of message, r, s, Qx, and Qy for the secp256r1 curve should revert or return false.

I agree that this invariant is broken here; I think we all do. However, the only actor who can break it is the legitimate wallet owner, by providing an incorrect public key as the authentication method for their own wallet, and without any consequence other than having recklessly made their own wallet less secure.

d3e4 commented 6 months ago

The fault is not in the signature verification as you wrote in this statement, but in the verification of the public key.

Public key verification is part of the signature verification algorithm. It is not something separate.

I explained it step by step, please do make a genuine effort in following it because the answer to the question How does it work? was there for you already.

I'm not sure what adding the key by an authenticated request involves. What you later say about how this is retrieved for public key validation, and your gas savings idea, are completely correct, but change nothing about the issue.

that's likely why they picked Medium impact and not the flashing red alert you'd expect from a leaky signature check.

It's sort of a flashing red alert in the context of ECDSA signature verification itself (hence "High"), but I could certainly see that it is only Medium in this specific application. I would like to note that violations of standards such as EIP-4626 are often considered Medium just by virtue of being violations. A violation of ECDSA signature verification is at least as bad.

I agree that this invariant is broken here, I think we all do. However, the only actor who can break it is the legitimate wallet owner, by providing an incorrect public key as the authentication method for their own wallet, and without any consequence other than having recklessly made their own wallet less secure.

Are we then saying that a main invariant is allowed to be broken? The key and signature are not exclusive to the wallet; if they were, there would be no need to invoke very expensive external security protocols. Therefore, when considering only a narrow subcontext, "the only actor who can break it" is a moot point, or false if interpreted more broadly.

3docSec commented 6 months ago

Thanks for sharing your opinion, I'll consider it carefully.

3docSec commented 6 months ago

Final verdict, based on facts gathered from wardens and the sponsor:

This issue will then remain Low severity; I have, however, requested that it be included in the final report due to its significance.