hats-bug-reporter[bot] opened this issue 1 month ago
This can be corrected by the market creator not posting questions that resolve to wildly different bounds. The most common use case for multi-scalar markets is asking related questions that all resolve to a % value, and that's not affected by this issue since they are bounded from 0% to 100%. Multi-scalar markets are not a one-size-fits-all solution.
Thanks for your feedback,
I have to disagree with you here, unfortunately.
The code shows that you assume payout values can be above uint128.max in supported markets; otherwise, the whole shrinking/capping to prevent overflows would not be implemented. Just because this will not happen in every market does not make the issue invalid. You state that, most commonly, the values will be between 0 and 100, but no restriction in the code prevents a correctly acting, non-malicious user from creating a market that could end up with such high results.
This issue clearly describes a bug where well-acting users would lose funds in a non-malicious market that anyone in the system can create. A market with this kind of ratio difference could be built for countless scenarios, and this issue will most likely occur, as anyone can deploy a market via the factory.
The max payout is a fixed value, and if an outcome gets more than this value for the payout, it will be considered to be this value. The code and comments explain why this maximum payout is determined, and its name is pretty explicit (maxPayout -> maximum payout in terms of allocation points).
For sure, if we could have perfect arithmetic in Solidity, we'd always have a payout proportional to the answers. But this is not the case, and we have to handle some edge cases where the answer to one question can be surprisingly higher than expected. For example, we could make a market on a Magic (MTG) match asking "How many times will Player draw [card name]?". In a classic match, a player may draw the card 0-4 times, but we can have weird edge cases where a player pulling off some combo is able to loop through his deck an unlimited number of times, drawing a card an unlimited number of times. The player could, for example, state that he will loop 2^1000 times. In this case, the resolution rules already ask to give the closest possible answer, so the answer to the question would be 2^256-2, which would cause overflows, preventing the payout. Capping it at 2^128-1 ensures that the outcome tokens of this specific card will redeem for approximately 1 and the others for approximately 0 (and if n cards like that are drawn a very large number of times, they will each redeem for approximately 1/n). Here we are handling some edge cases in the most graceful way possible.
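To illustrate, here is a minimal sketch of this capping, assuming a cap of uint128.max; the contract and function names are illustrative, not the actual market code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative sketch only: clamp a raw Reality answer to a fixed cap of
// allocation points so that later sums/products cannot overflow uint256.
contract PayoutCapSketch {
    // Hypothetical cap: 2^128 - 1 allocation points per outcome.
    uint256 public constant MAX_PAYOUT = type(uint128).max;

    // An "infinite" answer (e.g. 2^256 - 2, the closest representable
    // value) simply becomes MAX_PAYOUT, so the market still resolves
    // instead of reverting on an overflow.
    function capAnswer(uint256 rawAnswer) public pure returns (uint256) {
        return rawAnswer > MAX_PAYOUT ? MAX_PAYOUT : rawAnswer;
    }
}
```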
Hey @clesaege ,
I'm sorry, but I can't see how this issue should be invalid.
Also, I disagree with several of the arguments you state above:
In summary, you agreed that the scenario is possible and will lead to incorrect payout proportions. Nevertheless, you have invalidated the issue by ignoring the mitigation I proposed, which would scale all payouts precisely and mitigate the users' loss of profits. I kindly ask you to reconsider this, as there is no reason to invalidate this issue. All three things needed for a valid submission are present:
We are really discussing the handling of an edge case. You can disagree with the way it's handled and propose better solutions (and we can discuss those, but for my part I believe we should keep the code pretty simple and not try to overengineer around the limitations of integer computing), but the thing is that the code is doing exactly what it is supposed to do. It sets a max payout, in terms of allocation points per answer, such that we don't get an overflow if Reality were to return a very high value.
You can't use the argument that we are protecting against edge cases to state that the way we are handling edge cases is a vuln. And you can't misrepresent my comments.
- You have confirmed in the message above that this issue can result from a valid market scenario. -> Scenario is valid
That's an edge case, and we handle it in a graceful way such that it doesn't overflow and funds don't get stuck. We also have some interesting properties, such that when multiple values are basically "infinity", we distribute the funds equally. That was a deliberate choice, compared to just treating those as invalid.
- You agree that winners will get less reward than intended. -> Loss of Funds to users
I do not agree with that. I said: "For sure, if we could have perfect arithmetic in Solidity, we'd always have a payout proportional to the answers. But this is not the case, and we have to handle some edge cases where the answer to one question can be surprisingly higher than expected." And because perfect arithmetic doesn't exist, we do handle edge cases. Here, the way we do so is to cap the amount of allocation points an outcome can get. This is explained in the comments, the variable names are explicit, and the code is doing exactly what is both intended and described by the comments.
I'm sorry @clesaege, but you are just repeating the argument that the maximum is needed, which I never criticized.
"We are really on the handling of an edgecase. You can disagree on the way it's handled, propose better solutions (and we can discuss on those, but for my part I believe we should keep the code pretty simple and not try to overengineer to get around limitation of integer computing), but the thing is that the code is doing exactly what it is supposed to do. It's setting a max payout in term of allocation points per answer such that we don't get an overflow if reality would be to return a very high value."
Again, you are misstating my position as criticizing your maximum payout. The maximum payout is correct and should be there. The problem is that you are not handling the after-effects of this security measure correctly.
A similar scenario would be if you implemented a security feature in your DeFi product that blocks incorrectly acting users, and I then found a case where you wrongly block users who are acting correctly. Now, you reason that this is invalid because, in most cases, you block malicious users.
"That's an edgecase, and we handle it in a graceful way such that it doesn't overflow and funds don't get stucked. We also have some interesting properties such that when multiple values are basically "infinity", we distribute the funds equally. That was choice compared to just treat those as invalid."
While you handle the overflow issue correctly, you don't handle the other fund loss resulting from this mitigation. The other design choices mentioned in the rest of the paragraph are not relevant to the validity of this issue.
"I do not agree on that. I said "For sure, if we could have perfect arithmetic in solidity, we'd always have a payment proportional to the answers. But this is not the case and to handle some edgecases where it is possible for the answer to one question to have an answer surprisingly higher than expected.". And because perfect arithmetic doesn't exist, we do handle edgecases. Here the way we do so is to cap the amount of allocation points an outcome can get. This is explained in the comments, variable names are explicit and the code is doing exactly what is both intended and described by the comments."
A user is paid a smaller percentage of the pot than he should be, as returned by Reality. This is as clear a loss of funds as it gets. The rest of this argument is irrelevant to the issue and is just a general statement.
In total, you are just repeating that your payout cap is correct, which it is. However, the whole issue was never about implementing a payout cap, but about mitigating its after-effects. I'm going to condense all of this into a summary again so it's clear to everyone reading this:
"You are implementing a security feature for a scenario you expect"
Due to the limitations of integer computing, we designed the multi-scalar markets such that there is a maximum amount of allocation points which can be attributed to one outcome.
"This security feature has some after-effects that you didn't expect, which lead to a loss of funds"
I haven't seen any after-effect in your report. If the answer to some question is above the cap, the outcome will be attributed an amount of allocation points equal to the cap.
"My described mitigation can fully mitigate these after-effects / losses to users"
You describe a different design of the system, which, even if it has merits (and also the downside of extra complexity), is out of scope of this bounty. At this stage we are not doing any system redesign but only fixing vulns.
"In your logic, this is an invalid issue"
Indeed, because the code is doing what is intended, as described by the comments and conveyed by the variable names.
Hey @clesaege,
"I haven't seen any after effect in your report."
My report clearly showcases that a user gets a smaller share of the pot than he should, as returned by Reality. You are pretty much implementing a wrapper on top of Reality and unintentionally skewing the results. I would describe that as a pretty clear after-effect.
"You describe a different design of the system, which even if it has merits (and also cons of having extra complexity) is out of scope of this bounty."
I'm not at all describing a different system. You would pretty much just need to note by what percentage the value above the max was shrunk, if the cap was triggered, and at the end loop once more over all the other results and shrink them accordingly. The whole issue can be fixed in less than 10 loc. Attached is how this could be done:
```solidity
uint256 shrinkFactor = 0; // scale factor (per 100_000) of the largest capped answer; 0 = nothing capped
uint256 capIndex = 0; // index of the outcome whose payout stays at maxPayout
for (uint256 i = 0; i < numOutcomes; i++) {
    payouts[i] = uint256(realitio.resultForOnceSettled(questionsIds[i]));
    if (payouts[i] == uint256(INVALID_RESULT)) {
        payouts[i] = 0;
    } else if (payouts[i] > maxPayout) {
        // Factor by which this answer is shrunk to fit under the cap,
        // scaled by 100_000 as a buffer to reduce rounding errors.
        uint256 shrinkage = maxPayout * 100_000 / payouts[i];
        // For the case where multiple answers go over the max: keep the
        // smallest factor (the one of the largest answer), so the other
        // payouts are scaled relative to the largest raw answer.
        if (shrinkFactor == 0 || shrinkage < shrinkFactor) {
            shrinkFactor = shrinkage;
            capIndex = i;
        }
        payouts[i] = maxPayout;
    }
    allZeroesOrInvalid = allZeroesOrInvalid && payouts[i] == 0;
}
if (shrinkFactor != 0) {
    // Shrink every other payout by the same factor, keeping the final
    // proportions (approximately) equal to those of the raw answers.
    for (uint256 i = 0; i < numOutcomes; i++) {
        if (i != capIndex) {
            payouts[i] = payouts[i] * shrinkFactor / 100_000;
        }
    }
}
```
This can hardly be called a redesign of the whole system; it's a simple bug fix.
"in less than 10 loc"
Way too much to handle an edge case gracefully at the bounty stage.
"My report clearly showcases that a user gets a smaller share of the pot than he should, as returned by Reality."
Should according to you*. The design is to limit the maximum allocation points per outcome, and that's exactly what the code is doing. Here you are discussing system design, which is out of scope.
I think our discussion at this point has become unproductive. Neither of these arguments in any way invalidates the issue; it would be considered valid in any audit, whether via a bug bounty or a platform. Something being an edge case does not make an issue invalid, and something being intended but leading to unintended after-effects does not make it invalid either.
I understand your motivation to downgrade issues as hard as possible here, but this is just outside any grey area where we could debate valid versus invalid.
The issue is not invalid because it's an edge case; the issue is invalid because the code is working as intended, as demonstrated by the variable names and the comments. There is a limit of allocation points per outcome. If the answer to one question goes over this limit, it is shrunk back to this limit. From your report and comments, you just confirmed this behaviour and didn't display any unexpected behaviour.
If you still think your report is valid, please show a behaviour different from the one I just described.
We are circling back to the same argument here. The maximum payout does exactly what it should and should be kept. I also never stated that it should be removed in my issue or the following comments. But it has an after-effect, which you do not correctly mitigate.
A similar situation would be if you scaled a value to some number of decimals, correctly did some calculations where those decimals were needed, but in the end forgot to scale it back to its original decimals, leading to an error. In that case, the first step (shrinking/scaling) is correct and should be there, but you are forgetting a second step (rescaling, in both cases), which leads to issues. The first step being correct is not a reason why the error that occurs later, in the second step, is invalid.
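To illustrate that analogy with hypothetical code (not from the contracts under discussion), here is the two-step pattern where the second step is forgotten:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical illustration of the analogy above: scaling up for an
// intermediate calculation is correct, but forgetting the second step
// (scaling back down) corrupts the result.
contract RescaleAnalogy {
    // Step one (correct and necessary): scale a 6-decimal token amount
    // up to 18 decimals for a precise intermediate calculation.
    function toWad(uint256 amount6) internal pure returns (uint256) {
        return amount6 * 1e12;
    }

    // Buggy: returns an 18-decimal value to a caller expecting 6 decimals,
    // because the rescaling step is missing -- just like capping one payout
    // without rescaling the others.
    function halve(uint256 amount6) external pure returns (uint256) {
        uint256 wad = toWad(amount6);
        return wad / 2; // missing: / 1e12 to scale back to 6 decimals
    }
}
```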
The system contains a maximum amount of allocation points per outcome. It does not contain any rescaling when an answer exceeds the maximum amount of allocation points. You haven't shown an error in the code happening due to that, but only discussed a different design of the system.
Since we are circling back, this will be my last message on this topic.
This is a valid bug, as I said in a previous contest:
"this bug would come up in any report; you can decide not to fix it, but it is still a possible bug"
The problem here is that there is no mediator; there is only the whitehat and the protocol, and the protocol has the money, so the fewer bugs they accept, the less money the protocol spends and the more they can claim to be 'secure'.
~~Hats, on the other hand, has already charged; it doesn't affect them much. But I wonder what would happen if Hats were to take profits based on the money paid out by the protocol?~~
@rotcivegaf The fact that it didn't come up in 2 rounds of internal reports is evidence to the contrary. We pay for actual issues, not for anything someone would put in a report. People doing reports have little incentive not to include something if they think it might be an issue. Audit reports may also have scopes wider than simply verifying that the contracts do not do anything unexpected. There is a big difference between a bug and a disagreement on what the behaviour should be. The first is a finding to be rewarded; the second is out of scope.
"there is only the whitehat and the protocol, and the protocol has the money, so the fewer bugs they accept, the less money the protocol spends and the more they can claim to be 'secure'"
If something were a real issue, we would fix it and pay the hunter. You can see this in the previous competition (where $43,000 was paid). Who in their right mind would choose to make their protocol insecure to save on paying bounties? (I can agree that there is a conflict of interest on severity grading, but not on acceptance or rejection, as rejecting something real is going to affect the protocol way more than the hunter.)
Bringing up issues from past reports that were already lost is not relevant to this discussion, so this will be my last message on this topic.
"You haven't shown an error in the code happening due to that but only discussed a different design of the system."
I have clearly shown that a user will receive a smaller share of the payout than he would have received if the whole market had taken place on Reality without your wrapper. You are clearly skewing results in this case. That is an error.
Github username: @J4X_98
Twitter username: J4X_Security
Submission hash (on-chain): 0x44135b783d69a9e687774608b7a95fe80f37d846068dc807d9fcd19ee8603743
Severity: high
Description:
The issue with the resolveMultiScalarMarket() function is that when a payout exceeds the set maximum, only the highest payout is scaled down, but the others remain unchanged. This causes the winning option to receive a smaller percentage of the total pool than it should, leading to an incorrect distribution of funds and a loss for users who chose the correct outcome.
Attack Scenario:
When the contract calls resolveMultiScalarMarket() to resolve a market, a max payout is set to prevent overflows. As the payouts describe the ratio of payouts between the different options and will, in total, end up as one, one of the options may be very high in comparison to the others, so these values are possible. When the payouts are then handled, we can see that each option's payout value will be the numerator, and the sum of all will be the denominator.
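To make that concrete, here is an illustrative helper (hypothetical, not taken from the contracts) computing the share of the pool an outcome redeems under this numerator/denominator scheme:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative only: share_i = payouts[i] / sum(payouts), applied to a pool.
// Assumes payouts were already capped, so pool * payouts[i] fits in uint256.
library PayoutShare {
    function shareOf(uint256[] memory payouts, uint256 i, uint256 pool)
        internal
        pure
        returns (uint256)
    {
        uint256 total;
        for (uint256 j = 0; j < payouts.length; j++) {
            total += payouts[j];
        }
        return (pool * payouts[i]) / total;
    }
}
```

With this formula, capping one payout without rescaling the others necessarily changes every outcome's share of the pool.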
This leads to a problem, as we will incorrectly scale down the winner to the max but not the others' winnings accordingly. So, percentage-wise, the option with the very high payout (greater than uint128.max) will receive less of the pot than it deserves. Let's make a simple example to showcase this. For the sake of simplicity, we will assume that the max is 100; one option got 120, and the others (say, two of them) each got 5. The percentages should be 120/130 ≈ 92.3% for the winning option and 5/130 ≈ 3.8% for each of the others. But actually, due to the scaling, the distribution of the pool will be 100/110 ≈ 90.9% for the winning option and 5/110 ≈ 4.5% for each of the others. As you can see, the actual winning option will receive less of the pool than intended. This is a loss of funds to all users who correctly picked this option.
Mitigation:
The issue can be mitigated by also scaling the other values by as much as the one above the max was reduced, to keep the percentage the same, or at least close.