Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/66
0xbranded
Borrowing and funding fees for both longs and shorts suffer from two distinct sources of precision loss. The precision loss is consistently significant and can even result in total omission of fee payments for periods of time. This error is especially disruptive given the sensitive nature of funding fee calculations, both in determining liquidations (a core functionality) and in the payments received by LPs and funding recipients (representing a significant loss).
The first of the aforementioned sources of precision loss relates to the DENOM parameter defined and used in apply of math.vy:
DENOM: constant(uint256) = 1_000_000_000
def apply(x: uint256, numerator: uint256) -> Fee:
fee : uint256 = (x * numerator) / DENOM
...
return Fee({x: x, fee: fee_, remaining: remaining})
This function is in turn referenced (to extract the fee parameter in particular) in several locations throughout fees.vy, namely in determining the funding and borrowing payments made by positions open for a duration of time:
paid_long_term : uint256 = self.apply(fs.long_collateral, fs.funding_long * new_terms)
...
paid_short_term : uint256 = self.apply(fs.short_collateral, fs.funding_short * new_terms)
...
P_b : uint256 = self.apply(collateral, period.borrowing_long) if long else (
self.apply(collateral, period.borrowing_short) )
P_f : uint256 = self.apply(collateral, period.funding_long) if long else (
self.apply(collateral, period.funding_short) )
The comments for DENOM specify that it's a "magic number which depends on the smallest fee one wants to support and the blocktime." In fact, given its current value of $10^{9}$, the smallest representable fee per block is $10^{-7}$%. Given the average blocktime of 2.0 sec on the BOB chain, there are 15_778_800 blocks in a standard calendar year. Combined with the fee per block, this translates to an annual fee of 1.578%.
However, this is also the interval size for annualized fees under the current system. As a result, any fee falling below the next interval will be rounded down. For example, given an annualized funding rate in the neighborhood of 15%, there is potential for a nearly 10% error in the interest rate if rounding occurs just before the next interval. This error is magnified the smaller the funding rates become. An annual fee of 3.1% would round down to 1.578%, representing an error of nearly 50%. And any annualized fee below 1.578% will not be recorded at all, representing a 100% error.
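As a quick sanity check, a short Python sketch (illustrative only, assuming the 2.0 s blocktime and 15_778_800 blocks per year cited above) reproduces this granularity:

```python
# Sanity check of the fee granularity implied by DENOM (illustrative only).
DENOM = 1_000_000_000
BLOCKS_PER_YEAR = 15_778_800            # 365.25 days at 2.0 s per block

step = BLOCKS_PER_YEAR / DENOM          # smallest non-zero annual fee
print(f"annual fee granularity: {step:.4%}")            # ~1.5779%

# a "true" 3.1% annual rate floors to a per-block numerator of 1,
# which only accrues ~1.578% over a year (~50% error)
per_block = int(0.031 * DENOM / BLOCKS_PER_YEAR)        # 1
print(per_block, f"-> realized annual fee: {per_block * step:.4%}")
```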
The second source of precision loss combines with the aforementioned error, to both increase the severity and frequency of error. It's related to how percentages are handled in params.vy, particularly when the long/short utilization is calculated to determine funding & borrow rates. The utilization is shown below:
def utilization(reserves: uint256, interest: uint256) -> uint256:
return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100))
This function is in turn used to calculate borrowing (and funding rates, following a slightly different approach that similarly combines the use of utilization and scale), in [dynamic_fees](https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/params.vy#L33-L55) of params.vy:
def dynamic_fees(pool: PoolState) -> DynFees:
long_utilization : uint256 = self.utilization(pool.base_reserves, pool.base_interest)
short_utilization: uint256 = self.utilization(pool.quote_reserves, pool.quote_interest)
borrowing_long : uint256 = self.check_fee(
self.scale(self.PARAMS.MAX_FEE, long_utilization))
borrowing_short : uint256 = self.check_fee(
self.scale(self.PARAMS.MAX_FEE, short_utilization))
...
def scale(fee: uint256, utilization: uint256) -> uint256:
return (fee * utilization) / 100
Note that interest and reserves maintain the same precision. Therefore, the output of utilization will have just 2 digits of precision, resulting from the division of reserves by 100. However, this approach can similarly lead to fee rates losing a full percentage point in their absolute value. Since the utilization is used by dynamic_fees to calculate the funding / borrow rates, when combined with the formerly described source of precision loss the error is greatly amplified.
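For illustration, a minimal Python model of utilization and scale (not the contracts themselves) shows how whole-percent truncation discards up to a full percentage point:

```python
# Python model of params.vy's utilization() and scale(); integer division by
# (reserves // 100) truncates utilization to whole percentage points.
def utilization(reserves: int, interest: int) -> int:
    return 0 if (reserves == 0 or interest == 0) else interest // (reserves // 100)

def scale(fee: int, util: int) -> int:
    return (fee * util) // 100

reserves = 10_000_000 * 10**18
interest = 1_490_000 * 10**18                      # 14.9% of reserves
print(utilization(reserves, interest))             # 14 (truncated from 14.9)
print(scale(65, utilization(reserves, interest)))  # 9  (vs. 65 * 14.9 / 100 = 9.685)
```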
Consider a scenario where the long open interest is 199_999 · $10^{18}$ and the reserves are 10_000_000 · $10^{18}$. Under the current utilization functionality, the result would be a 1.9999% utilization rounded down to 1%. Further assuming a value of max_fee = 65 (this represents a 100% annual rate and a 0.19% 8-hour rate), the long borrow rate would round down to 0%. Had the 1.9999% utilization rate not been rounded down to 1%, the result would have been r = 1.3. In turn, the precision loss in DENOM would have effectively rounded this down to r = 1, resulting in a 2.051% borrow rate rounded down to 1.578%.
In other words, the precision loss in DENOM alone would have resulted in a 23% error in this case. But when combined with the precision loss in percentage points represented in utilization, a 100% error resulted. While the utilization and resulting interest rates will typically not be low enough to produce such drastic errors, this hopefully illustrates the synergistic combined impact of both sources of precision loss. Even at higher, more representative values for these rates (such as r = 10), errors in fee calculations exceeding 10% will consistently occur.
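The arithmetic of this combined scenario can be checked with a few lines of Python (illustrative only; the per-block rates and block count are those assumed above):

```python
# Illustrative check of the combined scenario (DENOM = 1e9, 2 s blocks,
# max_fee = 65, utilization 1.9999%). Not protocol code.
DENOM = 1_000_000_000
BLOCKS_PER_YEAR = 15_778_800

def annualize(per_block_rate: float) -> float:
    return per_block_rate * BLOCKS_PER_YEAR / DENOM

exact_r      = 65 * 1.9999 / 100        # 1.29999, no rounding anywhere
denom_only_r = 1                        # 1.29999 truncated by DENOM granularity
combined_r   = (65 * 1) // 100          # utilization truncated to 1% first -> 0

print(f"exact:        {annualize(exact_r):.3%}")       # ~2.051%
print(f"DENOM only:   {annualize(denom_only_r):.3%}")  # ~1.578% (~23% error)
print(f"both sources: {annualize(combined_r):.3%}")    # 0.000%  (100% error)
```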
All fees in the system will consistently be underpaid by a significant margin, across all pools and positions. Additionally, trust/confidence in the system will be eroded as fee application will be unpredictable, with sharp discontinuities in rates even given moderate changes in pool utilization. Finally, positions will be subsidized at the expense of LPs, since the underpayment of fees will make liquidations less likely and take longer to occur. As a result, LPs and funding recipients will have less incentive to provide liquidity, as they are consistently underpaid while taking on greater counterparty risk.
As an example, consider the scenario where the long open interest is 1_099_999 · $10^{18}$ and the reserves are 10_000_000 · $10^{18}$. Under the current utilization functionality, the result would be a 10.9999% utilization rounded down to 10%. Assuming max_fee = 65 (100% annually, 0.19% 8-hour), the long borrow rate would be r = 6.5, rounded down to r = 6. A 9.5% annual rate results, whereas the accurate result, had neither precision loss occurred, is r = 7.15, or 11.3% annually. The resulting error in the borrow rate is 16%.
Assuming a long collateral of 100_000 · $10^{18}$, LPs would earn 9_500 · $10^{18}$ when they should earn 11_300 · $10^{18}$, a shortfall of 1_800 · $10^{18}$ from longs alone. Additional borrow fee shortfalls would occur for shorts, in addition to shortfalls in funding payments received.
Liquidation from borrow rates alone should have taken 106 months based on the expected result of 11.3% per annum. However, under the 9.5% annual rate it would take 127 months to liquidate the position. This represents a 20% delay in liquidation time from borrow rates alone, not including the further delay caused by potentially underpaid funding rates.
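A rough check of these liquidation horizons, under the simplifying assumption that the position is liquidated once accrued borrow fees alone consume 100% of collateral (ignoring funding fees and PnL):

```python
# Rough liquidation-horizon check: months until accrued borrow fees alone
# reach 100% of collateral (ignoring funding fees and PnL).
months_expected = 100 / (11.3 / 12)     # ~106 months at the accurate 11.3% rate
months_actual   = 100 / (9.5 / 12)      # ~126-127 months at the rounded 9.5% rate
print(f"{months_expected:.0f} vs {months_actual:.0f} months, "
      f"a {(months_actual / months_expected - 1):.0%} delay")   # ~19-20%
```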
When PnL is further taken into account, these delays could mark the difference between a period of volatility wiping out a position or not. As a result, these losing positions could run for far longer than they otherwise would, and could even turn into winners. Not only are LP funds locked for longer as a result, they are also at greater risk of losing capital to their counterparty. On top of this, they are not paid their rightful share of profits, losing out on funds while taking on an unfavorably elevated risk.
Thus, not only do consistent, material losses (significant fee underpayment) occur but a critical, core functionality malfunctions (liquidations are delayed).
In the included PoC, three distinct tests demonstrate the individual sources of precision loss, as well as their combined effect. Scenarios similar to those discussed above were demonstrated, for example interest = 199_999 · $10^{18}$ with reserves = 10_000_000 · $10^{18}$ and a max fee of 65.
The smart contracts were stripped to isolate the relevant logic, and Foundry was used for testing. To run the test, clone the repo, place Denom.vy in vyper_contracts, and place Denom.t.sol, Cheat.sol, and IDenom.sol under src/test.
Manual Review
Consider increasing the precision of DENOM by at least 3 digits, i.e. DENOM: constant(uint256) = 1_000_000_000_000 instead of 1_000_000_000. Consider increasing the precision of percentages by 3 digits, i.e. divide / multiply by 100_000 instead of 100.
Each added digit of precision decreases the precision loss by an order of magnitude. In other words the 1% and 1.5% absolute errors in precision would shrink to 0.01% and 0.015% when using three extra digits of precision.
Consult Denom.vy for further guidance on the necessary adjustments to make to the various functions to account for these updated values.
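As a sketch of the effect of this change (assuming MAX_FEE is also scaled up by 10^3, as done in the PoC's Denom.t.sol via max_fee2 = 65_000), the earlier 1.9999% utilization scenario retains nearly all of its precision:

```python
# Effect of the proposed precision bump (DENOM = 1e12, percentages scaled by
# 100_000, MAX_FEE scaled to 65_000 as in the PoC). Illustrative only.
DENOM_NEW = 1_000_000_000_000
BLOCKS_PER_YEAR = 15_778_800

util = (199_999 * 10**18) // ((10_000_000 * 10**18) // 100_000)   # 1999 (i.e. 1.999%)
r    = (65_000 * util) // 100_000                                  # 1299
print(util, r)
print(f"annualized: {r * BLOCKS_PER_YEAR / DENOM_NEW:.4%}")        # ~2.0497% vs 0% before
```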
KupiaSecAdmin
Escalate
This is a system design choice. It just charges less fees, there is no loss of funds happening to the protocol. Also this can be modified by updating parameters.
sherlock-admin3
> Escalate
>
> This is a system design choice. It just charges less fees, there is no loss of funds happening to the protocol. Also this can be modified by updating parameters.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
msheikhattari
It's not a design choice; this is clearly a precision loss issue. There are real losses of funds and delayed liquidations that consistently occur given real-world parameters such as the blocktime. These are clearly documented in the PoC, with specific errors consistently exceeding 10% and sometimes even total omission of fee payment.
DENOM is a constant that cannot be updated. It must be corrected by increasing the decimals of precision.
Given the high likelihood and high impact of loss of funds not only from fee miscalculation but delayed/avoided liquidations, this is a high severity issue.
KupiaSecAdmin
@msheikhattari - The logic only modifies the fee ratio by rounding it down; if the ratio doesn't meet what the protocol team is looking for, they can simply increase the numerator by updating parameters. Also, even with a lower fee ratio, why does it cause a loss? If the borrow fee becomes less, there will be more position openers, who will also increase the utilization ratio, which in turn will increase the borrowing fee. Still, this is low/info at most.
msheikhattari
No parameters can be updated to fix this issue - DENOM must directly be changed (which is a constant). It currently supports annual fees of about 1.5% increments which is way too much precision loss with errors consistently exceeding 10% in all fee calculations.
The numerator is blocks * rate, the point is that the rate component cannot support fees with a precision below 1.5% because of the DENOM parameter that is too small.
I included a PoC which clearly demonstrates this currently unavoidable precision loss.
rickkk137
> No parameters can be updated to fix this issue - DENOM must directly be changed (which is a constant). It currently supports annual fees of about 1.5% increments which is way too much precision loss with errors consistently exceeding 10% in all fee calculations.
>
> The numerator is blocks * rate, the point is that the rate component cannot support fees with a precision below 1.5% because of the DENOM parameter that is too small.
>
> I included a PoC which clearly demonstrates this currently unavoidable precision loss.
I read your PoC; the root cause in this report is here, which is mentioned in this report, and the borrowing_paid calculation doesn't have an effect:
```vyper
def utilization(reserves: uint256, interest: uint256) -> uint256:
    return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100))
```
WangSecurity
Though this may be considered a design decision, the calculation of fees still has precision loss, which would pay out less in fees due to rounding down. Hence, this should've been included in the known issues question in the protocol's README, but it wasn't. Also, DENOM couldn't be changed after the contracts were deployed, but it wasn't flagged as a hardcoded variable that could be changed.
Hence, this should remain a valid issue. Planning to reject the escalation.
msheikhattari
Given that significant disruptions to liquidations and loss of funds result, this issue should in consideration for high severity.
WangSecurity
Agree with the above, as I understand, there are no extensive limitations, and the loss can be significant. Planning to reject the escalation but make the issue high severity.
rickkk137
root cause in this issue and #72 is same
KupiaSecAdmin
@WangSecurity - Even though DENOM can't be modified, denominator and numerator are updatable, which determines the fee ratio. So making this valid doesn't seem appropriate, and it can never be high severity.
rickkk137
@KupiaSecAdmin I agree with you; this issue talks about two parts. First one:
```vyper
def utilization(reserves: uint256, interest: uint256) -> uint256:
    return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100))
```
Second one:
```vyper
def apply(x: uint256, numerator: uint256) -> Fee:
    fee : uint256 = (x * numerator) / DENOM
```
The first one has precision loss but the second one doesn't.
KupiaSecAdmin
@rickkk137 - Regarding first one, isn't this issue same? https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/87
rickkk137
@KupiaSecAdmin no, I don't think so; imo #87 is not possible, I wrote my comment there.
msheikhattari
Please refer to the PoC; the issue is currently unmitigable due to the precision of all fees that results from the DENOM parameter. It doesn't result directly from any of the calculations but from the lowest representable precision of fee parameters. Let me know if I can clarify anything further.
WangSecurity
I believe @rickkk137 is correct that this and #72 have the same root cause and explain the same problem. The escalation will still be rejected, and #72 + #60 (duplicate) will be duplicated with this report because it goes more in-depth in explaining the issue and shows a higher severity.
msheikhattari
That issue is talking about setting min values for long_utilization and short_utilization, the only similarity is that they are both sources of precision loss. Otherwise the underlying issue is different.
WangSecurity
Yes, excuse me, focused too much on the utilisation part. The precision loss from Denom is relatively low based on the discussion under #126. But here the report combines 2 precision loss factors resulting in a major precision loss. Hence, I'm returning to my previous decision to reject the escalation, increase severity to high and leave it solo, because it combines 2 precision loss factors resulting in a more major issue than #72 which talks only about utilisation. Planning to apply this decision in a couple of hours.
aslanbekaibimov
@WangSecurity
The duplication rules assume we have a "target issue", and the "potential duplicate" of that issue needs to meet the following requirements to be considered a duplicate.
1. Identify the root cause
2. Identify at least a Medium impact
3. Identify a valid attack path or vulnerability path
4. Fulfills other submission quality requirements (e.g. provides a PoC for categories that require one)
Don't #72 and #60 satisfy all 4 requirements?
WangSecurity
Good question, but #72 and #60 identified only one source of precision loss, so the following should apply:
> The exception to this would be if underlying code implementations OR impact OR the fixes are different, then they may be treated separately.
That's the reason I think they should be treated separately. The decision remains, reject the escalation, increase severity to high.
rickkk137
> But here the report combines 2 precision loss factors resulting in a major precision loss
```vyper
def utilization(reserves: uint256, interest: uint256) -> uint256:
    """
    Reserve utilization in percent (rounded down).
    @audit this is actually rounded up...
    """
@>>> return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100))
```

```vyper
@external
@pure
def utilization2(reserves: uint256, interest: uint256) -> uint256:
    """
    Reserve utilization in percent (rounded down).
    @audit this is actually rounded up...
    """
@>>> return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100_000))
```
```solidity
function testCombined() public {
    // now let's see what would happen if we raised the precision of both fees and percents
    uint max_fee = 65;
    uint max_fee2 = 65_000; // 3 extra digits of precision lowers error by 3 orders of magnitude

    uint256 reserves = 10_000_000 ether;
    uint256 interest = 199_999 ether; // interest & reserves same in both, only differ in precision.

    uint256 util1 = denom.utilization(reserves, interest);
@>>> uint256 util2 = denom.utilization2(reserves, interest); // 3 extra digits of precision here also

    // borrow rate
    uint fee1 = denom.scale(max_fee, util1);
@>>> uint fee2 = denom.scale2(max_fee2, util2);

    assertEq(fee1 * 1_000, fee2 - 999); // fee 1 is 1.000, fee 2 is 1.999 (~50% error)
}
```
The watson passed util2 to the denom.scale2 function in his PoC, which made a huge difference; the utilization2 function is a custom function written by the watson.
**msheikhattari**
Yes, these combined sources of precision loss were demonstrated to have a far more problematic impact as shown in the PoC.
**WangSecurity**
But, as I understand from @rickkk137, his main point is that without the precision loss inside the utilisation, the loss would be small and not sufficient for medium severity. Is that correct @rickkk137?
**rickkk137**
Yes,correct
**msheikhattari**
The precision loss resulting from DENOM is more significant than that from the utilization. The PoC and described scenario in the issue provides exact details on the loss of each.
When combined, not only is the precision loss more severe but also more likely to occur.
**WangSecurity**
But as I see in #126, the precision loss from Denom is not that severe, is it wrong?
**msheikhattari**
That issue is slightly different, but what was pointed out here is that the most granular annual fee representable is about 1.6% - these are the intervals for fee rates as well (ex. 2x 1.6, 3x 1.6...)
Utilization on the other hand experiences precision loss of 1% in the extreme case (ex 14.9% -> 14%)
So in absolute terms the issue arising from DENOM is more significant. When combined, these issues become far more significant than implied by their nominal values, not only due to the multiplied loss of precision but also the increased likelihood of loss (precision loss from one source can bump a value just over the boundary of the other, as outlined in the PoC).
**WangSecurity**
Yeah, I see. #126 tries to show a scenario where DENOM precision loss would round down the fees to 0, and for that to happen, the fees or collateral have to be very small, which results in a very small loss. But this issue just shows the precision loss from DENOM and doesn't try to show rounding down to 0. That's the key difference between the two reports.
Hence, my decision remains that this will remain solo with high severity as expressed above. Planning to reject the escalation. The decision will be applied tomorrow at 10 am UTC:
> *Note: #126 won't be duplicated with this report as it doesn't show Medium impact*
**WangSecurity**
Result: High Unique
**sherlock-admin2**
Escalations have been resolved successfully!
Escalation status:
- [KupiaSecAdmin](https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/66/#issuecomment-2345173630): rejected
# Issue H-2: User can sandwich their own position close to get back all of their position fees
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/94
## Found by
0x37, KupiaSec, bughuntoor
## Summary
User can sandwich their own position close to get back all of their position fees
## Vulnerability Detail
Within the protocol, borrowing fees are only distributed to LP providers when the position is closed. Up until then, they remain within the position.
The problem is that in this way, fees are distributed evenly to LP providers, without taking into account the longevity of their LP provision.
This allows a user to avoid paying fees in the following way:
1. Flashloan a large sum and add it as liquidity
2. Close their position and let the fees be distributed (with most going back to them as they've got majority in the pool)
3. Withdraw their LP tokens
4. Pay back the flashloan
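A numeric sketch of the steps above (Python, with hypothetical figures; the recaptured share simply follows the attacker's fraction of LP supply at the moment the fees are distributed):

```python
# Numeric sketch (hypothetical figures): fees released on position close are
# shared pro-rata by current LP shares, so a just-in-time flashloaned deposit
# captures most of them.
existing_lp  = 1_000_000        # honest LP liquidity
flashloan_lp = 99_000_000       # attacker's flashloaned liquidity
fee_paid     = 10_000           # borrowing fee released when the position closes

attacker_share = flashloan_lp / (existing_lp + flashloan_lp)
print(f"attacker recaptures {attacker_share:.1%} of the fee "
      f"({fee_paid * attacker_share:,.0f} of {fee_paid:,})")   # 99.0%
```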
## Impact
Users can avoid paying borrowing fees.
## Code Snippet
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/positions.vy#L156
## Tool used
Manual Review
## Recommendation
Implement a system where fees are gradually distributed to LP providers.
## Discussion
**KupiaSecAdmin**
Escalate
This is invalid, `FEES.update` function is called after every action, so when the liquidity of flashloan is added, the accrued fees are distributed to prev LP providers.
**sherlock-admin3**
> Escalate
>
> This is invalid, `FEES.update` function is called after every action, so when the liquidity of flashloan is added, the accrued fees are distributed to prev LP providers.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
**spacegliderrrr**
Comment above is simply incorrect. `Fees.update` doesn't actually make a change on the LP token value - it simply calculates the new fee terms and makes a snapshot at current block.
The value of LP token is based on the pool reserves. Fees are actually distributed to the pool only upon closing the position, hence why the attack described is possible.
**rickkk137**
invalid
it is intended design
```python
def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
    if ts == 0: return mv
    else      : return (mv * ts) / pv

def g(lp: uint256, ts: uint256, pv: uint256) -> uint256:
    return (lp * pv) / ts
```
The Pool contract uses the above formulas to compute the amount of LP tokens to mint and also the burn value. For example, pool VEL-STX: VEL reserve = 1000, STX reserve = 1000, VEL/STX price = $1, LP total_supply = 1000 (pv stands for pool value, ts stands for total_supply and mv stands for mint value). LP_price = ts / pv = 1000 / 1000 = $1, meaning if a user deposits $1000 into the pool they get 1000 LP tokens in return. For burning, burn_value = lp_amount * pool_value / total_supply; based on the above example, total_supply = 2000 and pool_value = 2000 because the user deposited $1000 into the pool and minted 1000 LP tokens.
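For readability, a direct Python transcription of the quoted f()/g() math, using the same figures as the example above:

```python
# Python transcription of the quoted f()/g() share math, with the figures from
# the example above (pool value 1000, LP supply 1000, then a $1000 deposit).
def f(mv: int, pv: int, ts: int) -> int:
    # LP tokens minted for a deposit of value mv, given pool value pv and supply ts
    return mv if ts == 0 else (mv * ts) // pv

def g(lp: int, ts: int, pv: int) -> int:
    # value returned when burning lp tokens, given supply ts and pool value pv
    return (lp * pv) // ts

minted = f(1_000, 1_000, 1_000)   # 1000 LP tokens for a $1000 deposit
value  = g(1_000, 2_000, 2_000)   # burning them returns $1000 while pool value holds
print(minted, value)
```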
The issue wants to say a malicious user can retrieve the borrowing_fee by increasing the LP price, but when the malicious user and other users close their profitable positions, pool_value will be decreased [the protocol pays users' profit from the pool reserve], hence burn_value becomes less than in the usual state.
Required internal state for attack path:
WangSecurity
I see how it can be viewed as the intended design, but still, the user can essentially bypass the fees and LPs lose them, when they should've received them. Am I missing anything here?
rickkk137
The attack path is possible when the user's position is at a loss, and the loss amount has to be enough to increase the LP price. Note that if the loss amount is large enough, the position can be eligible for liquidation, and the user has to close his position before the liquidation bot.
spacegliderrrr
@WangSecurity Your comment is correct. Anytime the user is about to close their position (whether it be unprofitable, neutral or slightly profitable), they can perform the attack to avoid paying the borrowing fees, essentially stealing them from the LPs.
WangSecurity
> The attack path is possible when the user's position is at a loss, and the loss amount has to be enough to increase the LP price. Note that if the loss amount is large enough, the position can be eligible for liquidation, and the user has to close his position before the liquidation bot.
So these are the constraints:
Are the above points correct and is there anything missing?
deadrosesxyz
No, loss amount does not need to be large. Attack can be performed on any non-profitable position, so the user avoids paying fees.
WangSecurity
In that case, I agree that it's an issue that the user can indeed bypass the fees and prevent the LPs from receiving it. Also, I don't see the pre-condition of the position being non-profitable as an extensive limitation. Moreover, since it can be any non-profitable position, then there is also no requirement for executing the attack before the liquidation bot (unless the position can be liquidated as soon as it's non-profitable).
Thus, I see that it should be high severity. If something is missing or these limitations are extensive in the context of the protocol, please let me know. Planning to reject the escalation, but increase severity to high.
WangSecurity
Result: High Has duplicates
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/52
4gontuk, KupiaSec
The CONTEXT function in gl-sherlock/contracts/api.vy uses the <quote-token>/USD price for valuation, assuming a 1:1 peg between the quote token and USD. This assumption can fail during de-peg events, leading to incorrect valuations and potential exploitation.
The CONTEXT function calls the price function from the oracle contract to get the price of the quote token. This price is adjusted based on the quote_decimals, implying it is using the <quote-token>/USD price for valuation.
CONTEXT Function in api.vy:
The CONTEXT function calls the price function from the oracle contract to get the price of the quote token.

def CONTEXT(
base_token : address,
quote_token: address,
desired : uint256,
slippage : uint256,
payload : Bytes[224]
) -> Ctx:
base_decimals : uint256 = convert(ERC20Plus(base_token).decimals(), uint256)
quote_decimals: uint256 = convert(ERC20Plus(quote_token).decimals(), uint256)
# this will revert on error
price : uint256 = self.ORACLE.price(quote_decimals,
desired,
slippage,
payload)
return Ctx({
price : price,
base_decimals : base_decimals,
quote_decimals: quote_decimals,
})
price Function in oracle.vy:
The price function in oracle.vy uses the extract_price function to get the price from the oracle.
########################################################################
TIMESTAMP: public(uint256)
@internal
def extract_price(
quote_decimals: uint256,
payload : Bytes[224]
) -> uint256:
price: uint256 = 0
ts : uint256 = 0
(price, ts) = self.EXTRACTOR.extractPrice(self.FEED_ID, payload)
# Redstone allows prices ~10 seconds old, discourage replay attacks
assert ts >= self.TIMESTAMP, "ERR_ORACLE"
self.TIMESTAMP = ts
# price is quote per unit base, convert to same precision as quote
pd : uint256 = self.DECIMALS
qd : uint256 = quote_decimals
s : bool = pd >= qd
n : uint256 = pd - qd if s else qd - pd
m : uint256 = 10 ** n
p : uint256 = price / m if s else price * m
return p
########################################################################
PRICES: HashMap[uint256, uint256]
@internal
def get_or_set_block_price(current: uint256) -> uint256:
"""
The first transaction in each block will set the price for that block.
"""
block_price: uint256 = self.PRICES[block.number]
if block_price == 0:
self.PRICES[block.number] = current
return current
else:
return block_price
########################################################################
@internal
@pure
def check_slippage(current: uint256, desired: uint256, slippage: uint256) -> bool:
if current > desired: return (current - desired) <= slippage
else : return (desired - current) <= slippage
@internal
@pure
def check_price(price: uint256) -> bool:
return price > 0
# eof
extract_price Function in oracle.vy:
The extract_price function adjusts the price based on the quote_decimals, which implies it is using the <quote-token>/USD price for valuation.

def extract_price(
quote_decimals: uint256,
payload : Bytes[224]
) -> uint256:
price: uint256 = 0
ts : uint256 = 0
(price, ts) = self.EXTRACTOR.extractPrice(self.FEED_ID, payload)
# Redstone allows prices ~10 seconds old, discourage replay attacks
assert ts >= self.TIMESTAMP, "ERR_ORACLE"
self.TIMESTAMP = ts
# price is quote per unit base, convert to same precision as quote
pd : uint256 = self.DECIMALS
qd : uint256 = quote_decimals
s : bool = pd >= qd
n : uint256 = pd - qd if s else qd - pd
m : uint256 = 10 ** n
p : uint256 = price / m if s else price * m
return p
During a de-peg event, LPs can withdraw more value than they deposited, causing significant losses to the protocol.
@external
def mint(
base_token : address, #ERC20
quote_token : address, #ERC20
lp_token : address, #ERC20Plus
base_amt : uint256,
quote_amt : uint256,
desired : uint256,
slippage : uint256,
payload : Bytes[224]
) -> uint256:
"""
@notice Provide liquidity to the pool
@param base_token Token representing the base coin of the pool (e.g. BTC)
@param quote_token Token representing the quote coin of the pool (e.g. USDT)
@param lp_token Token representing shares of the pool's liquidity
@param base_amt Number of base tokens to provide
@param quote_amt Number of quote tokens to provide
@param desired Price to provide liquidity at (unit price using onchain
representation for quote_token, e.g. 1.50$ would be
1500000 for USDT with 6 decimals)
@param slippage Acceptable deviaton of oracle price from desired price
(same units as desired e.g. to allow 5 cents of slippage,
send 50000).
@param payload Signed Redstone oracle payload
"""
ctx: Ctx = self.CONTEXT(base_token, quote_token, desired, slippage, payload)
return self.CORE.mint(1, base_token, quote_token, lp_token, base_amt, quote_amt, ctx)
De-peg Event: The pegged token de-pegs to 0.70 USD (external event).
Withdraw:
def burn(
base_token : address,
quote_token : address,
lp_token : address,
lp_amt : uint256,
desired : uint256,
slippage : uint256,
payload : Bytes[224]
) -> Tokens:
"""
@notice Withdraw liquidity from the pool
@param base_token Token representing the base coin of the pool (e.g. BTC)
@param quote_token Token representing the quote coin of the pool (e.g. USDT)
@param lp_token Token representing shares of the pool's liquidity
@param lp_amt Number of LP tokens to burn
@param desired Price to provide liquidity at (unit price using onchain
representation for quote_token, e.g. 1.50$ would be
1500000 for USDT with 6 decimals)
@param slippage Acceptable deviaton of oracle price from desired price
(same units as desired e.g. to allow 5 cents of slippage,
send 50000).
@param payload Signed Redstone oracle payload
"""
ctx: Ctx = self.CONTEXT(base_token, quote_token, desired, slippage, payload)
return self.CORE.burn(1, base_token, quote_token, lp_token, lp_amt, ctx)
def CONTEXT(
base_token : address,
quote_token: address,
desired : uint256,
slippage : uint256,
payload : Bytes[224]
) -> Ctx:
base_decimals : uint256 = convert(ERC20Plus(base_token).decimals(), uint256)
quote_decimals: uint256 = convert(ERC20Plus(quote_token).decimals(), uint256)
# this will revert on error
price : uint256 = self.ORACLE.price(quote_decimals,
desired,
slippage,
payload)
return Ctx({
price : price,
base_decimals : base_decimals,
quote_decimals: quote_decimals,
})
To mitigate this issue, the protocol should use the <base-token>/<quote-token> price directly if available, or derive it from the <base-token>/USD and <quote-token>/USD prices. This ensures accurate valuations even if the quote token de-pegs from USD.
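As a sketch of the suggested derivation (a hypothetical helper in Python; the feed values, scaling, and the 0.70 USD de-peg figure are illustrative assumptions):

```python
# Hypothetical helper (not protocol code): derive a base/quote price from two
# USD feeds so a quote-token de-peg is reflected in valuations.
def cross_price(base_usd: int, quote_usd: int, quote_decimals: int) -> int:
    # both feed prices assumed to share the same feed precision;
    # result is quote per unit base, scaled to quote_decimals
    return base_usd * 10**quote_decimals // quote_usd

# BTC/USD = 50_000e8, USDT/USD = 0.70e8 (de-pegged) -> BTC/USDT ~= 71,428.57 USDT
print(cross_price(50_000 * 10**8, 70_000_000, 6))   # 71_428_571_428 (6 decimals)
```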
mePopye
Escalate
On behalf of the watson
sherlock-admin3
> Escalate
>
> On behalf of the watson
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
After additionally considering this issue, here's my understanding. Let's assume a scenario of 30% depeg and USDT = 0.7 USD.
Hence, even though it's not a direct loss of funds but a loss in value, this should be a valid medium (considering depeg as an extensive limitation). Thus, planning to accept the escalation and validate with medium severity. The duplicate is #113, are there any additional duplicates?
WangSecurity
Result: Medium Has duplicates
sherlock-admin2
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/72
aslanbek, pashap9990
The funding fee in some cases will be zero because of precision loss.
long_utilization = base_interest / (base_reserve / 100)
short_utilization = quote_interest / (quote_reserve / 100)
borrow_long_fee = max_fee * long_utilization [min = 10]
borrow_short_fee = max_fee * short_utilization [min = 10]
funding_fee_long = borrow_long_fee * (long_utilization - short_utilization) / 100
funding_fee_short = borrow_short_fee * (short_utilization - long_utilization) / 100
Let's assume Alice opens a long position with min collateral [5e6] and leverage 2x when BTC/USDT is $50,000:

long_utilization = 0.0002e8 / (1000e8 / 100) = 2e4 / 1e9 = 0.00002 [round down => 0]
short_utilization = 0 / (1e12 / 100) = 0

borrowing_long_fee = 100 * (0) = 0 [min fee = 1] ==> 10
borrowing_short_fee = 100 * (0) = 0 [min fee = 1] ==> 10

funding_fee_long = 10 * (0) = 0
funding_fee_short = 10 * 0 = 0

1000 blocks passed:

funding_paid = 5e6 * 1000 * 0 / 1_000_000_000 = 0
borrowing_paid = (5e6) * (1000 * 10) / 1_000_000_000 = 50
long_utilization and short_utilization are zero as long as base_interest < base_reserve / 100 and quote_interest < quote_reserve / 100.
Pool status:
"base_reserve" : 1000e8 BTC
"quote_reserve" : 1,000,000e6 USDT
"collector" : "0xCFb56482D0A6546d17535d09f571F567189e88b3",
"symbol" : "WBTCUSDT",
"base_token" : "0x03c7054bcb39f7b2e5b2c7acb37583e32d70cfa3",
"quote_token" : "0x05d032ac25d322df992303dca074ee7392c117b9",
"base_decimals" : 8,
"quote_decimals": 6,
"blocktime_secs": 3,
"parameters" : {
"MIN_FEE" : 1,
"MAX_FEE" : 100,
"PROTOCOL_FEE" : 1000,
"LIQUIDATION_FEE" : 2,
"MIN_LONG_COLLATERAL" : 5000000,
"MAX_LONG_COLLATERAL" : 100000000000,
"MIN_SHORT_COLLATERAL" : 10000,
"MAX_SHORT_COLLATERAL" : 200000000,
"MIN_LONG_LEVERAGE" : 1,
"MAX_LONG_LEVERAGE" : 10,
"MIN_SHORT_LEVERAGE" : 1,
"MAX_SHORT_LEVERAGE" : 10,
"LIQUIDATION_THRESHOLD": 5
},
"oracle": {
"extractor": "0x3DaF1A3ABF9dd86ee0f7Dd13a256400d01866E04",
"feed_id" : "BTC",
"decimals" : 8
}
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/params.vy#L63
The funding fee is always lower than what it really should be.
Place the test below in tests/test_positions.py and run it with pytest -k test_precision_loss -s:
def test_precision_loss(setup, open, VEL, STX, long, positions, pools):
    setup()
    open(VEL, STX, True, d(5), 10, price=d(50000), sender=long)
    chain.mine(1000)
    fee = positions.calc_fees(1)
    assert fee.funding_paid == 0
1. Scale up long_utilization and short_utilization.
2. Set a min value for long_utilization and short_utilization.
sherlock-admin3
Escalate

Let's assume interest = 1_099_999e18, reserve = 10_000_000e18, max_fee = 100:

long_utilization = interest / (reserve / 100) = 1_099_999e18 / (10_000_000e18 / 100) = 10.99 ~ 10 // round down
borrowing_fee = max_fee * long_utilization / 100 = 100 * 10.99 / 100 = 10.99

After one year:

// result without precision loss
block_per_year = 15_778_800
funding_fee_sum = block_per_year * funding_fee = 15778800 * 10.99 = 173,409,012
borrowing_long_sum = block_per_year * borrowing_fee = 15778800 * 10.99 = 173,409,012

borrowing_paid = collateral * borrowing_long_sum / DENOM = 1_099_999e18 * 173,409,012 / 1e9 = 190,749e18
funding_paid = collateral * funding_fee_sum / DENOM = 190,749e18

// result with precision loss
block_per_year = 15_778_800
funding_fee_sum = block_per_year * funding_fee = 15778800 * 10 = 157788000
borrowing_long_sum = block_per_year * borrowing_fee = 15778800 * 10 = 157788000

borrowing_paid = collateral * borrowing_long_sum / DENOM = 1_099_999e18 * 157788000 / 1e9 = 173,566e18
funding_paid = collateral * funding_fee_sum / DENOM = 173,566e18

Result: ~1% difference exists in the result.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
To clarify, due to this precision loss, there will be a 1% loss of the funding fee or am I missing something?
rickkk137
open_interest = 1,099,999e6, reserve = 10,000,000e6

long_util = open_interest / (reserve / 100) = 10.99
borrowing_fee = max_fee * long_util / 100 = 100 * 10.99 / 100 = 10.99
funding_fee = borrowing_fee * long_util / 100 = 10.99 * 10.99 / 100 = 1.20

Let's assume there is a long position with 10,000e6 collateral and the user wants to close it after a year [15,778,800 blocks per year].

Result without precision loss:

borrowing_paid = collateral * borrowing_sum / DENOM
borrowing_paid = 10,000e6 * 15778800 * 10.99 / 1e9 = 1,734,090,120 [meaning the user has to pay $1,734 as borrowing fee]
funding_paid = collateral * funding_sum / DENOM = 10,000e6 * 1.20 * 15778800 / 1e9 = 189,345,600 [meaning the user has to pay $189 as funding fee]

Result with precision loss:

borrowing_paid = collateral * borrowing_sum / DENOM = 10,000e6 * 15778800 * 10 / 1e9 = 1,577,880,000 [meaning the user has to pay $1,577 as borrowing fee]
funding_paid = collateral * funding_sum / DENOM = 10,000e6 * 1 * 15778800 / 1e9 = 157,788,000 [meaning the user has to pay $157 as funding fee]

LPs' loss = $157 [~1%]; the user pays $32 less than expected [32 * 100 / 189 ~ 16%].
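These figures can be reproduced with a few lines of Python (illustrative only; integer math, with per-block rates expressed in hundredths to avoid float rounding):

```python
# Reproduction of the figures above with integer math (rates in hundredths,
# so 1099 = 10.99, 120 = 1.20, etc.).
DENOM = 1_000_000_000
BLOCKS = 15_778_800
collateral = 10_000 * 10**6

def paid(rate_x100: int) -> int:
    return collateral * rate_x100 * BLOCKS // (DENOM * 100)

print(paid(1099), paid(1000))   # borrowing: 1,734,090,120 vs 1,577,880,000
print(paid(120), paid(100))     # funding:     189,345,600 vs   157,788,000
print((paid(1099) - paid(1000)) / 10**6,   # ~$156 borrow-fee shortfall for LPs
      (paid(120) - paid(100)) / 10**6)     # ~$32 funding-fee shortfall
```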
rickkk137
WangSecurity
I agree that this issue is correct and indeed identifies the precision loss showcasing the 1% loss. Planning to accept the escalation and validate with medium severity.
WangSecurity
Result: Medium Has duplicates
sherlock-admin3
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/74
PASCAL, pashap9990
LPs cannot set minimum base or quote amounts when burning LP tokens, leading to potential losses due to price fluctuations during transactions.
LPs cannot set the base received amount and the quote received amount
LPs may receive significantly lower amounts than expected when burning LP tokens, resulting in financial losses
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/api.vy#L104
Consider changing the config in tests/conftest.py. I do it for better understanding; fees aren't important in this issue:
PARAMS = {
'MIN_FEE' : 0,
'MAX_FEE' : 0,
'PROTOCOL_FEE' : 1000
...
}
Textual PoC:
we assume protocol fee is zero in this example
1-Bob mints 20,000e6 LP token[base_reserve:10,000e6, quote_reserve:10,000e6]
2-Alice opens long position[collateral 1000 STX, LEV:5,Price:1]
3-Price goes up til $2
4-Bob calls calc_burn[lp_amt:10,000e6,total_supply:20,000e6][return value:base 3750 VEL,quote 7500 STX]
5-Bob calls burn with above parameters
6-Alice calls close position
7-Alice's tx executed before Bob's tx
8-Bob's tx will be executed and Bob gets 3875 VEL and 4750 STX
9-Bob loses $2500
Coded PoC:
place this test in tests/test_positions.py and run this command pytest -k test_lost_assets -s
def test_lost_assets(setup, VEL, STX, lp_provider, LP, pools, math, open, long, close, burn):
    setup()
    #Alice opens position
    open(VEL, STX, True, d(1000), 5, price=d(1), sender=long)
    reserve = pools.total_reserves(1)
    assert reserve.base == 10000e6
    assert reserve.quote == 10000e6
    #Bob calls calc_burn, Bob's assets in terms of dollars are $15,000
    amts = pools.calc_burn(1, d(10000), d(20000), ctx(d(2)))
    assert amts.base == 3752500000
    assert amts.quote == 7495000000
    #Alice closes her position
    bef = VEL.balanceOf(long)
    close(VEL, STX, 1, price=d(2), sender=long)
    after = VEL.balanceOf(long)
    vel_bef = VEL.balanceOf(lp_provider)
    stx_bef = STX.balanceOf(lp_provider)
    amts = pools.calc_burn(1, d(10000), d(20000), ctx(d(2)))
    assert amts.base == 3877375030
    assert amts.quote == 4747749964
    #Bob's tx will be executed
    burn(VEL, STX, LP, d(10000), price=d(2), sender=lp_provider)
    vel_after = VEL.balanceOf(lp_provider)
    stx_after = STX.balanceOf(lp_provider)
    print("vel_diff:", (vel_after - vel_bef) / 1e6)  # 3877.37503 VEL
    print("stx_diff:", (stx_after - stx_bef) / 1e6)  # 4747.749964 STX
    #received value in terms of dollars is ~ $12,500, Bob lost ~ $2500
Consider adding min_base_amount and min_quote_amount to the burn function's params, or adding a min_assets_value; for example, when the price is $2, LPs set this param to $14,800, meaning the received value in the worst case has to be greater than $14,800.
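A minimal Python model of the proposed guard (parameter names and the error string are assumptions, not existing protocol code):

```python
# Python model of the proposed guard; parameter names and the error string are
# assumptions, not existing protocol code.
def check_burn_output(base_out: int, quote_out: int,
                      min_base_amt: int, min_quote_amt: int) -> None:
    # mirrors an assert that burn() could perform after computing the payout
    assert base_out >= min_base_amt and quote_out >= min_quote_amt, "ERR_SLIPPAGE"

# Bob's scenario: he expected ~3752.5 VEL / ~7495 STX; the sandwiched payout of
# 3877.37 VEL / 4747.75 STX fails his quote-side minimum and the burn reverts.
try:
    check_burn_output(3_877_375_030, 4_747_749_964,
                      min_base_amt=3_700_000_000, min_quote_amt=7_400_000_000)
except AssertionError:
    print("burn rejected: output below the LP's minimum")
```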
rickkk137
Escalate

The LP token price is computed directly based on the pool reserve and the LP token total supply, and the issue clearly states the received amount can be less than the expected amount; in the coded PoC the liquidity provider expected $15,000 but got $12,500 as a result.

Loss = $2500 [1.6%]
> Causes a loss of funds but requires certain external conditions or specific states, or a loss is highly constrained. The loss of the affected party must exceed 0.01% and 10 USD.
sherlock-admin3
Escalate
Slippage related issues showing a definite loss of funds with a detailed explanation for the same can be considered valid high
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
However, the LPs can add input slippage parameters, i.e. desired and slippage, to mitigate these issues. Am I missing something here?
rickkk137
@WangSecurity desired and slippage have just been used to control the price fetched from the oracle, which the protocol uses for converting quote to base or base to quote to compute the total pool's reserve in terms of the quote token. But there are 2 tips here.

Let's examine an example together: you have 1000 LP tokens and you want to convert them to USD, and pool_reserve and total_supply_lp are both 1000 in our example:

burn_value = lp_amt * pool_reserve / total_supply = 1000 * 1000 / 1000 = 1000 USD

Based on the above value you send your transaction to the network, but a profitable close transaction is executed before yours and gets $100 as payout, meaning the pool reserve is now 900:

burn_value = lp_amt * pool_reserve / total_supply = 1000 * 900 / 1000 = 900 USD

You get $900 instead of $1000, and you cannot control this situation as a user.
WangSecurity
Yeah, I see, thank you. Indeed, the situation which would cause an issue to the user happens after the slippage is checked and this conversion cannot be controlled by the user. Planning to accept the escalation and validate with medium severity. Are there any duplicates (non-escalated; I see there are some escalations about slippage; I will consider them as duplicates if needed)?
WangSecurity
Result: Medium Has duplicates
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/75
KupiaSec
In the api contract, 224 bytes is used as the maximum length for Redstone's oracle payload, but the oracle price data and signatures of 3 signers exceed 224 bytes, thus reverting transactions.
In every external function of api contract, it uses 224 bytes as maximum size for Redstone oracle payload.
However, the RedstoneExtractor requires oracle data from at least 3 unique signers, as implemented in the PrimaryProdDataServiceConsumerBase contract. Each signer needs to send token price information like the token identifier, price, timestamp, etc., plus 65 bytes of signature data.
Just with a basic calculation, the oracle payload size exceeds 224 bytes.
Here's some proof of how Redstone oracle data is used:
As shown from the proof above, the payload size of Redstone data is huge, so setting 224 bytes as upperbound reverts transactions.
Protocol does not work because the payload array size limit is too small.
https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/api.vy#L83 https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/api.vy#L58
Manual Review
The upperbound size of payload array should be increased to satisfy Redstone oracle payload size.
KupiaSecAdmin
Escalate
Information required for this issue to be rejected.
sherlock-admin3
> Escalate
>
> Information required for this issue to be rejected.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
@KupiaSecAdmin, could you help me understand how did you get the 9571 data size? can't see it in the links, but I assume I'm missing it somewhere.
KupiaSecAdmin
@WangSecurity - The data size is fetched from this transaction which uses RedStone oracle.
The calldata size of the transaction is 9574 bytes, but the function receives 2 variables which sum up to 64 bytes, so the remaining 9574 - 64 = 9510 bytes of data are for the RedStone oracle payload.
Maybe it includes some additional oracle data, but the main point is that the oracle price data of 3 signers can't fit in 224 bytes. Each oracle signature is 65 bytes, which means 195 bytes of signatures alone, and the oracle data should also include the price, timestamp, and token name. So it never fits in 224 bytes.
rickkk137
I used the minimal foundry package to generate some payloads:
bytes memory redstonePayload0 = getRedstonePayload("BTC:120:8,ETH:69:8");
//result length 440 chars
425443000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002cb4178004554480000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000019b45a50001921f3ed12d000000200000022af29790252e02178ff033567c55b4145e48cbec0434729f363a01675d9a88d618583b722a092e3e6d9944b69c79cadf793b2950f1a99fd595933609984ef1c01c0001000000000002ed57011e0000
bytes memory redstonePayload = getRedstonePayload("BTC:120:8");
//result length 312 chars
425443000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002cb41780001921f3ed1d400000020000001e3cabe40b42498dccea014557946f3bae4d29783c9dc7deb499c5a2d6d1901412cba9924d1c1fdfc429c07361ed9fab2789e191791a31ecdbbd77ab95a373f491c0001000000000002ed57011e0000
```vyper
def mint(
    base_token : address, #ERC20
    quote_token : address, #ERC20
    lp_token : address, #ERC20Plus
    base_amt : uint256,
    quote_amt : uint256,
    desired : uint256,
    slippage : uint256,
@>>> payload : Bytes[224]
```
The payload parameter's length is 224 bytes, which means we can pass a hex string with a max length of 448 characters; hence we can pass the above payload to the api functions. I want to say the length of the payload depends on the number of symbols we pass to get the payload, and the max size for 2 symbols is 440 characters.
KupiaSecAdmin
@rickkk137 - Bytes type is different from strings as documented here, Bytes[224] means 224 bytes.
rickkk137
Each pair of hexadecimal digits represents one byte, and in the above examples the first example's length is [440 / 2] 220 bytes and the second one's length is [312 / 2] 156 bytes. Furthermore, in the RedstoneExtractor contract the developer just uses index 0, which means the requestRedstonePayload function just gets one symbol as a parameter in Velar.
KupiaSecAdmin
@rickkk137 - Seems you misunderstand something here. You generated Redstone oracle payload using their example repo that results in 220 bytes, this is the oracle data of one signer. For Verla, it requires oracle data of 3 signers so that it can take median price of 3 prices.
And I agree with the statement that it uses one symbol, yes Verla only requires BTC price from Redstone oracle.
rickkk137
Data is packed into a message according to the following structure
The data signature is verified by checking if the signer is one of the approved providers
This means there is just one signature in every payload:
| Symbol | Value | Timestamp | Size(n) | Signature |
|---|---|---|---|---|
| 32b | 32b | 32b | 1b | 65b |

Max size for 1 symbol: 32 + 32 + 32 + 1 + 65 = 172 bytes (https://github.com/redstone-finance/redstone-evm-connector)
function getAuthorisedSignerIndex(
address signerAddress
) public view virtual override returns (uint8) {
if (signerAddress == 0x8BB8F32Df04c8b654987DAaeD53D6B6091e3B774) {
return 0;
} else if (signerAddress == 0xdEB22f54738d54976C4c0fe5ce6d408E40d88499) {
return 1;
} else if (signerAddress == 0x51Ce04Be4b3E32572C4Ec9135221d0691Ba7d202) {
return 2;
} else if (signerAddress == 0xDD682daEC5A90dD295d14DA4b0bec9281017b5bE) {
return 3;
} else if (signerAddress == 0x71d00abE308806A3bF66cE05CF205186B0059503) {
return 4;
} else {
revert SignerNotAuthorised(signerAddress);
}
}
WangSecurity
@KupiaSecAdmin just a small clarification, the idea that you need 3 signers in payload is based on which documentation?
KupiaSecAdmin
@WangSecurity - It comes from the RedStone implementation that Verla uses: https://github.com/redstone-finance/redstone-oracles-monorepo/blob/2bbf16cbbaa36f7046034dbbd968f3673a0657e8/packages/evm-connector/contracts/data-services/PrimaryProdDataServiceConsumerBase.sol#L12-L14
And you know, usually, using one signer data as oracle causes issue because its data can be malicious, that's how the protocol takes 3 signers and take median price among them.
WangSecurity
Unfortunately, I'm still not convinced enough this is actually a valid finding. Firstly, the transaction linked before, which has 9574 bytes size of calldata and uses the RedStone oracle, doesn't have the Redstone's payload as an input parameter as Velar does it. Secondly, this transaction is on Avalanche, while Velar will be deployed on Bob.
Hence, this is not a sufficient argument that 224 Bytes won't be enough.
Thirdly, payload is used when calling the extract_price function, which doesn't even use that payload. Hence, I don't see a sufficient argument for this being a medium, but before making the decision, I'm giving some time to correct my points.
rickkk137
@WangSecurity you can pass n assets to the fetchPayload function, and that isn't constant, meaning the payload length is flexible and depends on the protocol. When we look at extract_price, which just uses index 0, it means they are supposed to pass just one symbol to the Redstone function to get the payload, and based on the Redstone documentation the payload's length for just one symbol is 172 bytes, which is less than 224 bytes.
payload_size = n * (32 + 32) + 32 + 1 + 65
payload_size_for_one_asset = 1 * (32 + 32) + 32 + 1 + 65 = 172 bytes
payload_size_for_two_asset = 2 * (32 + 32) + 32 + 1 + 65 = 226 bytes
payload_size_for_three_asset = 3 * (32 + 32) + 32 + 1 + 65 = 290 bytes
...
KupiaSecAdmin
@WangSecurity - The 9574 bytes of payload is one example of Redstone payload.
The point is that it's obvious the price data can't fit in 224 bytes. As @rickkk137 mentioned, payload size for one asset of one signer is 172 bytes, but Verla requires oracle data of 3 signers, which will result in >500 bytes.
rickkk137
Velar protocol uses version 0.6.1 of redstone-evm-connector, and in this version RedstoneConsumerBase::getUniqueSignersThreshold returns 1 in the path @redstone-finance/evm-connector/contracts/core/RedstoneConsumerBase.sol; hence just one signer is required.
https://www.npmjs.com/package/@redstone-finance/evm-connector/v/0.6.1?activeTab=code
Also, when we look at the make file we realize Velar's developers directly copied the RedstoneConsumerBase contract without any changes. Furthermore, the usingDataService function gets the unique signer count as a parameter, and restricting the payload to 224 bytes means they want to pass 1 as the uniqueSignersCount:
const wrappedContract = WrapperBuilder.wrap(contract).usingDataService({
dataServiceId: "redstone-rapid-demo",
@>>> uniqueSignersCount: 1,
dataFeeds: ["BTC", "ETH", "BNB", "AR", "AVAX", "CELO"],
});
KupiaSecAdmin
@rickkk137 - Check RedstoneExtractor.sol; the contract inherits from the PrimaryProdDataServiceConsumerBase contract, which is located in "./vendor/data-services/PrimaryProdDataServiceConsumerBase.sol" (copied after make) and returns 3 as the number of signers.
WangSecurity
As I understand, @KupiaSecAdmin is indeed correct here, and the current payload size won't work when the contracts are deployed on the Live chain. After clarifying with the sponsor, they've said they used 224 only for testnet and will increase it for the live chain, but there's no info about it in README or code comments. Hence, this should be indeed a valid issue. Planning to accept the escalation and validate with Medium severity. Medium severity because there is no loss of funds, and the second definition of High severity excludes contracts not working:
> Inflicts serious non-material losses (doesn't include contract simply not working).
Hence, medium severity is appropriate:
> Breaks core contract functionality, rendering the contract useless or leading to loss of funds.
Are there any duplicates?
KupiaSecAdmin
@WangSecurity - Agree with having this as Medium severity. Thanks for your confirmation.
WangSecurity
Result: Medium Unique
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/79
KupiaSec
The protocol only allows equal or increased timestamp of oracle prices whenever an action happens in the protocol. This validation is wrong since it will lead to DoS for users.
The protocol uses RedStone oracle, where token prices are added as a part of calldata of transactions.
In the RedStone oracle, it allows prices from 3 minutes old up to 1 minute in the future, as implemented in RedstoneDefaultsLib.sol.
@internal
def extract_price(
quote_decimals: uint256,
payload : Bytes[224]
) -> uint256:
price: uint256 = 0
ts : uint256 = 0
(price, ts) = self.EXTRACTOR.extractPrice(self.FEED_ID, payload)
# Redstone allows prices ~10 seconds old, discourage replay attacks
assert ts >= self.TIMESTAMP, "ERR_ORACLE"
self.TIMESTAMP = ts
In oracle.vy, it extracts the token price from the RedStone payload, which also includes the timestamp at which the prices were generated.
As shown in the code snippet, the protocol reverts when the timestamp extracted from the calldata is smaller than the stored timestamp, thus forcing timestamps to only increase or equal the previous one.
This means that a user who executes a transaction with a price 1 minute old gets reverted when another transaction is executed with a price 30 seconds old.
NOTE: Network speeds around the world are not the same, so there can be considerable delays based on location, API availability, etc.
By abusing this vulnerability, an attacker can regularly make transactions with the newest prices, so that all other transactions with slightly older price data, like 10-20 seconds older, are reverted.
The vulnerability causes DoS for users who execute transactions with slightly older RedStone oracle data.
Manual Review
It's recommended to remove that non-decreasing timestamp validation. If the protocol wants more strict oracle price validation than the RedStone does, it can just use the difference between oracle timestamp and current timestamp.
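A plain-Python sketch of the suggested alternative (the 60-second bound and the timestamp units are assumptions):

```python
# Sketch of the suggested staleness check (the bound and its units are assumed),
# replacing the monotonic-timestamp requirement.
MAX_STALENESS = 60  # seconds, a protocol-chosen bound

def validate_oracle_ts(payload_ts: int, block_ts: int) -> None:
    # accept any sufficiently fresh payload, even if a newer one was already used
    assert block_ts - payload_ts <= MAX_STALENESS, "ERR_ORACLE"

validate_oracle_ts(payload_ts=1_700_000_050, block_ts=1_700_000_100)  # 50 s old: accepted
```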
KupiaSecAdmin
Escalate
Information required for this issue to be rejected.
sherlock-admin3
> Escalate
>
> Information required for this issue to be rejected.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
rickkk137
For every action the protocol fetches a RedStone payload and attaches it to the transaction. Let's assume:
User A fetches payload a at T1, user B fetches payload b at T2, user C fetches payload c at T3, with T3 > T2 > T1 and block.timestamp - [T1, T2, T3] < 3 minutes (I mean all of them are valid).
If all of them send their txs to the network, the sequence is very important: only with the sequence [T1, T2, T3] would none of the txs be reverted. If T2 is executed first then T1 will be reverted, and if T3 is executed first then T1 and T2 will be reverted.
contract RedstoneExtractor is PrimaryProdDataServiceConsumerBase {
+ uint lastTimeStamp;
+ uint price;
function extractPrice(bytes32 feedId, bytes calldata)
public view returns(uint256, uint256)
{
+ if(block.timestamp - lastTimeStamp > 3 minutes){
bytes32[] memory dataFeedIds = new bytes32[](1);
dataFeedIds[0] = feedId;
(uint256[] memory values, uint256 timestamp) =
getOracleNumericValuesAndTimestampFromTxMsg(dataFeedIds);
validateTimestamp(timestamp); //!!!
- return (values[0], timestamp);
+ lastTimeStamp = block.timestamp;
+ price = values[0];
+ }
+ return (price, lastTimeStamp);
}
}
But there isn't loss of funds; users can repeat their TXs.
KupiaSecAdmin
@rickkk137 - Thanks for providing the PoC. Of course there's no loss of funds, and users can repeat their transactions, but those can be reverted again. Overall, there's a pretty high chance that users' transactions will be reverted.
rickkk137
I agree with you, and this can be problematic. Here's an example of this approach, but the issue's final result depends on Sherlock rules.
WangSecurity
Not sure I understand how this can happen.
For example, we fetched the price from RedStone at timestamp = 10. Then someone fetches the price again, and for the revert to happen, the timestamp of that price has to be 9, correct?
How could that happen? Is it the user who chooses the price?
KupiaSecAdmin
@WangSecurity - As a real-world example, there will be multiple users who get a price from RedStone and call the Velar protocol with that price data. NOTE: price data (w/ timestamp) is added as calldata for every call. So among those users, there will be some calling the protocol with price timestamp 8, some with 9, some with 10.
If a call with price timestamp 10 executes first, the remaining calls with price timestamps 8 and 9 will revert.
WangSecurity
Thank you for this clarification. Indeed, this is possible, and it can happen even intentionally. On the other hand, this is only one-block DOS and the users could proceed with the transactions in the next block (assuming they use the newer price). However, this vulnerability affects the opening and closing of positions, which are time-sensitive functions. Hence, this should be a valid medium based on this rule:
Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue.
Planning to accept the escalation and validate with medium severity.
WangSecurity
Result: Medium Unique
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/82
Bauer, Greed, Japy69, KupiaSec, Waydou, bughuntoor, ctf_sec, y4y
Usage of tx.origin
to determine the user is prone to attacks
Within core.vy, the user on whose behalf it is called is fetched by using tx.origin:
self._INTERNAL()
user : address = tx.origin
This is dangerous, as any time a user calls or interacts with an unverified contract, or a contract which can change implementation, they're put at risk, as the contract can make a call to api.vy and act on the user's behalf.
Usage of tx.origin would also break compatibility with Account Abstraction wallets.
Any time a user calls any contract on the BOB chain, they risk getting their funds lost. Incompatible with AA wallets.
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/core.vy#L166
Manual Review
Instead of using tx.origin in core.vy, simply pass msg.sender as a parameter from api.vy.
T1MOH593
Escalate
Noticed there were 19 escalations on preliminary valid issues. This is final escalation to make it 20/20 🙂
sherlock-admin3
Escalate
Noticed there were 19 escalations on preliminary valid issues. This is final escalation to make it 20/20 🙂
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
bruh
WangSecurity
Planning to reject the escalation and leave the issue as it is.
WangSecurity
Result: Medium Has duplicates
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/83
0x37, 0xbranded, bughuntoor
Due to special requirements around receiving funding fees for a position, the funding fees received can be less than those paid. These funding fee payments are still paid, but a portion of them will never be withdrawn and becomes stuck funds. This also violates the contract specification that sum(funding_received) = sum(funding_paid).
In calc_fees
there are two special conditions that impact a position's receipt of funding payments:
# When there are negative positions (liquidation bot failure):
avail : uint256 = pool.base_collateral if pos.long else (
pool.quote_collateral )
# 1) we penalize negative positions by setting their funding_received to zero
funding_received: uint256 = 0 if remaining == 0 else (
# 2) funding_received may add up to more than available collateral, and
# we will pay funding fees out on a first come first serve basis
min(fees.funding_received, avail) )
If the position has run out of collateral by the time it is being closed, he will receive none of his share of funding payments. Additionally, if the available collateral is not high enough to service the funding fee receipt, he will receive only the greatest amount that is available.
These funding fee payments are still always made (deducted from remaining collateral), whether they are received or not:
c1 : Val = self.deduct(c0, fees.funding_paid)
When a position is closed under most circumstances, the pool will have enough collateral to service the corresponding fee payment:
# longs
base_collateral : [self.MATH.MINUS(fees.funding_received)],
quote_collateral: [self.MATH.PLUS(fees.funding_paid),
self.MATH.MINUS(pos.collateral)],
...
# shorts
base_collateral : [self.MATH.PLUS(fees.funding_paid), # <-
self.MATH.MINUS(pos.collateral)],
quote_collateral: [self.MATH.MINUS(fees.funding_received)],
When positions are closed, the original collateral (which was placed into the pool upon opening) is removed. However, the amount of funding payments a position made is added to the pool for later receipt. Thus, when positions are still open there is enough position collateral to fulfill the funding fee payment and when they close the funding payment made by that position still remains in the pool.
Only when the amount of funding a position paid exceeded its original collateral will there not be enough collateral to service the receipt of funding fees, as alluded to in the comments. However, it's possible for users to pay the full funding fee and still, if the borrow fee exceeds the collateral balance remaining thereafter, receive no funding fees at all. As a result, it's possible for funding fees to be paid which are never received.
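A small Python sketch of that case (the amounts are assumptions; it mirrors the described deduct-then-zero-out logic rather than the exact Vyper code):

```python
# Funding is deducted first, then borrowing; funding_received is zeroed if nothing remains.
collateral         = 100
funding_paid       = 60   # paid in full
borrowing_paid     = 50   # exceeds what is left after funding
funding_receivable = 7    # the position's pro-rata share from the other side

remaining = collateral - funding_paid                 # 40
remaining -= min(borrowing_paid, remaining)           # 0

funding_received = 0 if remaining == 0 else funding_receivable
print(funding_received)   # 0: the full funding fee was paid, yet none is received
```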
Further, even in the case of funding fee underpayment, setting the funding fee received to 0 does not remedy this issue. The funding fees which he underpaid were in a differing token from those which he would receive, so this only furthers the imbalance of fees received to paid.
core.vy
includes a specification for one of the invariants of the protocol:
# * funding payments match
# sum(funding_received) = sum(funding_paid)
This invariant is clearly broken as some portion of paid funding fees will not be received under all circumstances, so code is not to spec. This will also lead to some stuck funds, as a portion of the paid funding fees will never be deducted from the collateral. This can in turn lead to dilution of fees for future funding fee recipients, as the payments will be distributed evenly to the entire collateral including these stuck funds which will never be removed.
Manual Review
Consider an alternative method of accounting for funding fees, as there are many cases under the current accounting where fees received/paid can fall out of sync.
For example, include a new state variable that explicitly tracks unpaid funding fee payments and perform some pro rata or market adjustment to future funding fee recipients, specifically for that token.
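One possible shape of such accounting, as a hedged Python sketch (the names and the pro-rata haircut are assumptions, not the protocol's design):

```python
# Track funding that was charged vs. actually collected, and haircut receipts
# so that payouts never exceed what was really collected.
class FundingBook:
    def __init__(self) -> None:
        self.charged   = 0   # funding owed by payers
        self.collected = 0   # funding actually taken from payer collateral

    def charge(self, owed: int, collateral: int) -> int:
        paid = min(owed, collateral)     # cannot take more than the collateral
        self.charged   += owed
        self.collected += paid
        return paid

    def payout(self, pro_rata_share: float) -> float:
        # Scale the receiver's share by the fraction of charged funding that was collected.
        factor = self.collected / self.charged if self.charged else 1.0
        return pro_rata_share * self.charged * factor

book = FundingBook()
book.charge(owed=60, collateral=100)    # fully paid
book.charge(owed=80, collateral=50)     # underpaid by 30
print(book.payout(0.5) * 2)             # 110.0 == total collected; nothing is stranded
```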
spacegliderrrr
Escalate
Issue should be invalidated. It does not showcase any Medium-severity impact, but rather just a slightly incorrect code comment. Contest readme did not include said invariant as one that must always be true, so simply breaking it slightly does not warrant Medium severity.
sherlock-admin3
Escalate
Issue should be invalidated. It does not showcase any Medium-severity impact, but rather just a slightly incorrect code comment. Contest readme did not include said invariant as one that must always be true, so simply breaking it slightly does not warrant Medium severity.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
msheikhattari
If the protocol team provides specific information in the README or CODE COMMENTS, that information stands above all judging rules.
The included code comment was an explicit invariant outlined by the team:
# - the protocol handles the accounting needed to maintain the sytem invariants:
# * funding payments match
# sum(funding_received) = sum(funding_paid)
The problematic code was actually intended to enforce this invariant, however it does so incorrectly. In the case of a negative position due to fee underpayment, sum(funding_received) > sum(funding_paid)
. To correct for this, or serve as a deterrent, these positions will not receive their funding payment. However the token in which they underpaid is not the same token that they will not receive funding payment for. As a result the imbalance between funding paid and received is not corrected - it is actually worsened.
Not only that, but users may have paid their full funding payment and the sum(funding_received) = sum(funding_paid)
invariant holds. But if the remaining balance was then not enough to cover their borrow fee, they will not receive their funding payment which would actually cause this invariant to break.
This specification is the source of truth, and the code clearly does not abide by it. The issue proposes alternative methods for accounting funding fee payments to ensure this invariant consistently holds.
msheikhattari
Also I do think this issue is similar to #18 and #93
Will leave it to HoJ if they are combined since the source of the error is the same, but different impacts are described.
This issue discusses the code being not to spec and breaking an invariant. The other two issues mention a known issue from the sponsor of stuck/undistributed funds
WangSecurity
@spacegliderrrr is correct that breaking the invariant from the code comment doesn't necessarily make the issue medium severity. But, as I understand, it also works as intended. The position pays its fees, but if there is not enough collateral, then the position cannot pay the fee and this fee will be unpaid/underpaid. And, IIUC, in this protocol the borrowing fees come first, and if the remaining collateral cannot pay the funding fee in full, then it won't be paid in full. Hence, this is also enough contextual evidence that the comment is outdated. Do I miss anything here?
spacegliderrrr
@WangSecurity Most of your points are correct. Yes the code works as expected. Funding fee comes before borrowing fee. But for both of them, since due to fail to liquidate position on time, fees can exceed the total position collateral. Because of this, funding fees are paid with priority and if there's anything left, borrowing fees are paid for as much as there's left.
In the case where funding fees are overpaid (for more than the total position collateral), the other side receives these fees in a FIFO order, which is also clearly stated in the comments.
msheikhattari
@spacegliderrrr is correct, funding fees are paid before borrow fees. As such there is no evidence that the comment spec is outdated, nor does any other documentation such as the readme appear to indicate so.
In the case where funding fees are overpaid (for more than the total position collateral), the other side receives these fees in a FIFO order, which is also clearly stated in the comments.
To elaborate on this point, the vulnerability describes the case that funding_received
from the other side is greater than the funding_paid
. While it is indeed acknowledged fees are received in FIFO order, this is a problematic mitigation for this issue. This current approach is a questionable means of correcting the imbalance of funding payments to receipts. There are many cases where it not only doesn't correct for the funding payment imbalance, but actually worsens it, as explained above.
Regardless, this seems to be a clearly defined invariant. Not only from this comment but further implied by the logic of this fee handling, which penalizes negative positions to build up some "insurance" tokens to hedge against the case that funding fees are underpaid. It also intuitively makes sense for this invariant to generally hold as otherwise malfunctions can occur such as failure to close positions; several linked issues reported various related problems stemming from this invariant breaking as well.
WangSecurity
Another question I've got after re-reading the report. The impact section says there will be stuck tokens in the contract which will never be paid. But, as I understand, the problem is different. The issue is that these tokens are underpaid, e.g. if the funding fee is 10 tokens but the remaining collateral is only 8, then there are 2 tokens underpaid. So, how are there stuck tokens in the contract if the funding fee is not paid in full? Or does it refer to the funding_received being larger than the funding_paid actually?
WangSecurity
Since no answer is provided, I assume it's a typo in the report and the contract doesn't have fewer tokens, since the fee is underpaid, not overpaid. But the issue here is not medium severity, because the position's collateral is less than the funding fees and it cannot pay the full fee. Just breaking the invariant is not sufficient for medium severity. Planning to accept the escalation and invalidate the issue.
msheikhattari
Apologies for the delayed response @WangSecurity
Yes, the original statement was as intended. That portion of the report was pointing out that there are two problematic impacts of the current approach that cause sum(funding_paid) != sum(funding_received):
1. It's possible for funding_received by one side of the pool to exceed the funding_paid by the other side, in the case that a position went negative.
2. It's possible for funding_received to be less than funding_paid, even for positions which did not go insolvent due to funding rates.
The outlined invariant is broken which results in loss of funds / broken functionality. It will prevent closing of some positions in the former case, and will result in some funds being stuck in the latter case.
The current approach of penalizing negative positions is intended to 'build up an insurance' as stated by the team. But as mentioned here, not only does it not remediate the issue it actually furthers the imbalance. The funding tokens have already been underpaid in the case of a negative position - preventing those positions from receiving their fees results in underpayment of their funding as well.
WangSecurity
It's possible for funding_received by one side of the pool to exceed the funding_paid by the other side, in the case that a position went negative.
Could you elaborate on this? Similar to my analysis here: if the position is negative but the collateral is not 0, and the funding paid is larger than the position's collateral, the funding paid will be decreased to pos.collateral. The funding received will be 0 (if collateral is < funding paid, then c1: remaining = 0 and deducted = pos.collateral; then c2: remaining = 0 and deducted = 0; funding received = 0, because remaining = 0). So I'm not sure how funding_received can be > funding_paid, and I need a more concrete example with numbers.
It's possible for funding_received to be less than funding_paid, even for positions which did not go insolvent due to funding rates.
Agree that it's possible.
The outlined invariant is broken which results in loss of funds / broken functionality. It will prevent closing of some positions in the former case, and will result in some funds being stuck in the latter case.
However, the report doesn't showcase that this broken invariant (when funding received < funding paid) will result in any of this. The report only says that it will lead to a dilution of fees, but there is still no explanation of how.
Hence, without a concrete example of how this leads to positions not being closed, thus leading to losses of these positions, and owners, I cannot verify.
I see that the invariant doesn't hold, but this is not sufficient for Medium severity (only broken invariants from README are assigned Medium, not invariants from code comments). Hence, the decision for now is to invalidate this issue and accept the escalation. But, if there's a POC, how does this lead to loss of funds in case funding received < funding paid, or how does funding received > funding paid and how does this lead to a loss of funds, I will consider changing my decision.
WangSecurity
If no answer is provided, planning to accept the escalation and invalidate the issue.
msheikhattari
I see that the invariant doesn't hold, but this is not sufficient for Medium severity (only broken invariants from README are assigned Medium, not invariants from code comments)
Doesn't the below rule from the severity categorization apply here? That's what I had in mind when creating the issue, since this comment was the only source of documentation on this specification. There is no indication that this is out of date based on any other specs or code elsewhere.
Hierarchy of truth: If the protocol team provides no specific information, the default rules apply (judging guidelines).
If the protocol team provides specific information in the README or CODE COMMENTS, that information stands above all judging rules. In case of contradictions between the README and CODE COMMENTS, the README is the chosen source of truth.
Nevertheless, there is loss of funds in each case; let me first prove that funding_received > funding_paid is possible:
Then c2: remaining =0 and deducted =0. funding received =0, because remaining =0.
This is the crux of the issue I am reporting here. That's true in the case of a single position; the issue is that funding_received of that position being set to 0 does not mitigate the underpayment of funding which it made by going negative - they are opposite tokens in the pair.
While that position's funding_payment is capped at the collateral of the specific position, the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated:
paid_long_term : uint256 = self.apply(fs.long_collateral, fs.funding_long * new_terms)
received_short_term : uint256 = self.divide(paid_long_term, fs.short_collateral)
This is because this collateral is not excluded from fs.long_collateral; it must be explicitly subtracted upon liquidation.
Now the issue causing loss of funds on this side is that positions will fail to close. On the other side, where funding_received < funding_paid
, this is especially problematic in cases where the full funding payment was made, but the collateral fell short of the borrow fee.
In this case, the balance sum(funding_received) = sum(funding_paid)
was not broken by this position, as it made its full funding payment, but it will be excluded from receiving its portion of funding receipts. These tokens will not be directly claimable by any positions in the pool, causing loss of funds in that sense.
Upholding this balance of funding payments to receipts is an important invariant which causes loss of funds and protocol disruptions as outlined above. This is even acknowledged by the team, since this current approach is meant to build up an "insurance" by penalizing negative positions to pay out future negative positions.
What it fails to acknowledge is that the penalized position has already underpaid its funding fee (in token A), and now the balance is further distorted by eliminating its funding payment (in token B). To move closer to equilibrium between the two sides of funding, a direct approach such as socializing fee underpayments is recommended instead.
There are a few different moving parts so let me know if any component of this line of reasoning is unclear and I could provide specific examples if needed.
WangSecurity
Doesn't the below rule from the severity categorization apply here? That's what I had in mind when creating the issue, since this comment was the only source of documentation on this specification. There is no indication that this is out of date based on any other specs or code elsewhere
I didn't say the rule applies here, but if there's something said in the code comments and the code doesn't follow it, the issue still has to have at least Medium severity to be valid:
The protocol team can use the README (and only the README) to define language that indicates the codebase's restrictions and/or expected functionality. Issues that break these statements, irrespective of whether the impact is low/unknown, will be assigned Medium severity. High severity will be applied only if the issue falls into the High severity category in the judging guidelines.
So if the issue breaks an invariant from code comments, but it doesn't have Medium severity, then it's invalid.
This is the crux of the issue I am reporting here. That's true in the case of a single position, the issue is that funding_received of that position being set to 0 does not mitigate the underpayment of funding which it made by going negative - they are opposite tokens in the pair. While that positions funding_payment is capped at the collateral of the specific position, the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated:
That still doesn't prove how funding_received can be > funding_paid. The function calc_fees, which is where this calculation of funding_received and funding_paid happens, is called inside the value function, which is called only inside close and is_liquidatable, which in turn is called only inside liquidate.
Hence, this calculation of funding paid and received will be made only when closing or liquidating the position. So, the following is wrong:
the other side of the pool will continue to accrue funding receipts from this portion of the collateral until it's liquidated
It won't accrue because the position is either already liquidated or closed. Hence, the scenario of funding_received > funding_paid
is still not proven.
About the funding_received < funding_paid case: as I understand, it's intended that the position doesn't receive any funding fees in this scenario, which is evident from the code comment.
So if the position didn't manage to pay the full funding_paid (underpaid the fees), it is intentionally excluded from receiving any funding fees; it's not a loss of funds and these will be received by other users.
Hence, my decision remains: accept the escalation and invalidate the issue. If you still see that I'm wrong somewhere, you're welcome to correct me. But, to make it easier, provide links to the appropriate LOC you refer to.
msheikhattari
Hence, this calculation of funding paid and received will be made only when closing or liquidating the position. So, the following is wrong:
That's not quite correct. Yes, calc_fees is only called upon closing/liquidating the position. But this only calculates the user's pro-rata share of the accrued interest:
def calc(id: uint256, long: bool, collateral: uint256, opened_at: uint256) -> SumFees:
period: Period = self.query(id, opened_at)
P_b : uint256 = self.apply(collateral, period.borrowing_long) if long else (
self.apply(collateral, period.borrowing_short) )
P_f : uint256 = self.apply(collateral, period.funding_long) if long else (
self.apply(collateral, period.funding_short) )
R_f : uint256 = self.multiply(collateral, period.received_long) if long else (
self.multiply(collateral, period.received_short) )
return SumFees({funding_paid: P_f, funding_received: R_f, borrowing_paid: P_b})
The terms period.received_long and period.received_short are relevant here, and these are continuously, globally updated upon any interaction by any user with the system. As a result, those positions will count towards the total collateral until explicitly closed, inflating funding_received beyond its true value.
and it's not a loss of funds and these will be received by other users.
The point of this issue is that user's will not receive those lost funds. Since the global funding_received
included the collateral of positions which were later excluded from receiving their fee, the eventual pro-rata distribution of fees upon closing the position will not be adjusted for this. Thus some portion of fees will remain unclaimed.
The current approach is problematic with loss of funds and disrupted liquidation functionality. There are more direct ways to achieve the correct balance of funding payments to receipts.
WangSecurity
To finally confirm, the problem here is that these funds are just locked in the contract, and no one can get them, correct?
rickkk137
As I understand it, the main point in this report is that funding_received can exceed funding_paid, but this isn't correct:
It's possible for funding_received by one side of the pool to exceed the funding_paid by the other side, in the case that a position went negative.
WangSecurity
Yeah, there isn't a proof it can happen, the problem we are discussing now is that in cases where funding received is larger than funding paid (can happen when the position is negative), the funding received would be stuck in the contract with no one being able to get them. @rickkk137 would the funding fees be distributed to other users in this case?
rickkk137
In my PoC the position finally becomes negative, but funding received is equal to funding paid. And funding received wouldn't be stuck in the contracts; it will be distributed to later positions.
WangSecurity
Thank you for this correction. With this, the escalation will be accepted and the issue will be invalidated. The decision will be applied in a couple of hours.
msheikhattari
Hi @rickkk137 can you specifically clarify the scenario which the PoC is describing? Because from my understanding, it is not showing the case which I described. The nuance is subtle, allow me to explain
Let's say there is one position on the long side, two on the short side, and that longs are paying shorts. Now if one of the short positions goes negative, the other will not receive the total funding fees over the period, it will only get its pro-rata share from the time that the (now liquidated) position was still active.
For comparison, it seems your PoC is showing a single long and short, and you are showing that when the long is liquidated the short should still receive their funding payment. This is a completely different scenario from what is described in this issue.
To answer your earlier question @WangSecurity
To finally confirm, the problem here is that these funds are just locked in the contract, and no one can get them, correct?
Yes, thats correct. That portion of the funding payments is not accessible to any users.
rickkk137
When a position is penalized and its funding_received becomes zero, the pool's base or quote collateral will decrease:
@external
def close(id: uint256, d: Deltas) -> PoolState:
...
base_collateral : self.MATH.eval(ps.base_collateral, d.base_collateral),
quote_collateral : self.MATH.eval(ps.quote_collateral, d.quote_collateral),
This means other positions get more funding_received than before, because funding_received has an inverse relation to the base or quote collateral:
paid_long_term : uint256 = self.apply(fs.long_collateral, fs.funding_long * new_terms)
@>> received_short_term : uint256 = self.divide(paid_long_term, fs.short_collateral)
msheikhattari
The pro rata share of funding received will be correctly adjusted moving forward from liquidation. But the point is that the period.funding_received terms already included the now liquidated collateral, so the other positions do not receive adjusted distributions to account for that.
rickkk137
def query(id: uint256, opened_at: uint256) -> Period:
"""
Return the total fees due from block `opened_at` to the current block.
"""
fees_i : FeeState = Fees(self).fees_at_block(opened_at, id)
fees_j : FeeState = Fees(self).current_fees(id)
return Period({
borrowing_long : self.slice(fees_i.borrowing_long_sum, fees_j.borrowing_long_sum),
borrowing_short : self.slice(fees_i.borrowing_short_sum, fees_j.borrowing_short_sum),
funding_long : self.slice(fees_i.funding_long_sum, fees_j.funding_long_sum),
funding_short : self.slice(fees_i.funding_short_sum, fees_j.funding_short_sum),
@>>> received_long : self.slice(fees_i.received_long_sum, fees_j.received_long_sum),
@>>> received_short : self.slice(fees_i.received_short_sum, fees_j.received_short_sum),
})
Both will be updated when the liquidatable position is closed.
msheikhattari
That's not quite right. From [`current_fees`](https://github.com/sherlock-audit/2024-08-velar-artha/blob/18ef2d8dc0162aca79bd71710f08a3c18c94a36e/gl-sherlock/contracts/fees.vy#L137):
```vyper
paid_short_term : uint256 = self.apply(fs.short_collateral, fs.funding_short * new_terms)
received_long_term : uint256 = self.divide(paid_short_term, fs.long_collateral)
received_long_sum : uint256 = self.extend(fs.received_long_sum, received_long_term, 1)
```
So as mentioned in my point above, the collateral at the time of each global fee update is used. When a user later claims his pro-rata share, he will receive his fraction of the total collateral at the time of the fee update. Fee updates are performed continuously after each operation, and the total collateral may no longer be representative due to liquidated positions being removed from this sum. However, they were still included in the received_{short/long}_term, which is added onto the globally stored received_{short/long}_sum. Thus some share of these global fees is not accessible.
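A small Python sketch of that effect (all amounts are assumptions; it mirrors the paid/divide accounting rather than the fixed-point code):

```python
# Two shorts with equal collateral; longs pay 30 of funding for the term.
long_funding_paid = 30
short_collateral  = 100 + 100     # includes a short that will later be zeroed out

received_per_unit = long_funding_paid / short_collateral   # received_short_term

healthy_short   = received_per_unit * 100   # 15.0
penalized_short = 0                         # forced to zero on close/liquidation

print(long_funding_paid - (healthy_short + penalized_short))   # 15.0 never claimable
```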
WangSecurity
As I understand it, the fee state is checked twice in liquidate/close. Let's take close for example:
1. The user calls close, which calls value, which calls calc_fees, and that gives us 0 funding_received and thus 0 added to the quote_collateral.
2. core::close updates the Pool and then updates the fees.
3. The fee update still uses the old FeeState.base_collateral (assuming the scenario with two shorts) when calculating the fees received by shorts for this term; the FeeState.base_collateral is changed only after calculating received_short_term/sum.
So even though the closed/liquidated position didn't receive any fees and they should go to another short, the received_short_term is accounted as if there were two shorts open, each receiving their funding fees.
Hence, when the other short position gets the fees, they will add only a portion from that period, not the full fee.
Thus, I agree it should remain valid. However, medium severity should be kept because, in reality, if this situation occurs, the collateral of that closing/liquidatable position would be quite low (lower than funding_paid
), there would be many positions, so the individual loss would be smaller (in terms of the amount, not %), the period with incorrectly applied fees would be small (given the fact that fees are updated at each operation). Hence, the loss is very limited. Planning to reject the escalation and leave the issue as it is, it will be applied at 10 am UTC.
WangSecurity
Result: Medium Unique
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
WangSecurity
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/85
bughuntoor
First depositor could DoS the pool
Currently, when adding liquidity to a pool, the way LP tokens are calculated is the following:
mintValue * lpSupply / poolValue
def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
if ts == 0: return mv
else : return (mv * ts) / pv # audit -> will revert if pv == 0
However, this opens up a problem where the first user can deposit a dust amount in the pool which has a value of just 1 wei, and if the price drops before the next user deposits, the pool value will round down to 0. Then any subsequent attempts to add liquidity will fail, due to division by 0.
Unless the price goes back up, the pool will be DoS'd.
DoS
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/pools.vy#L178
Manual Review
Add a minimum liquidity requirement.
msheikhattari
Escalate
DoS has two separate scores on which it can become an issue:
- The issue causes locking of funds for users for more than a week (overrided to 4 hr)
- The issue impacts the availability of time-sensitive functions (cutoff functions are not considered time-sensitive). If at least one of these are describing the case, the issue can be a Medium. If both apply, the issue can be considered of High severity. Additional constraints related to the issue may decrease its severity accordingly. Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue.
Low at best. Per the severity guidelines, this is not DoS since no user funds are locked and no sensitive functionality is impacted (ongoing positions/LPs are not impacted). Additionally, this both assumes that no other LPs make any deposits within the same block as the attacker (as the price would be equivalent), and that the price is monotonically decreasing after the attack was initiated. Not only is it low impact, but also low likelihood.
sherlock-admin3
Escalate
DoS has two separate scores on which it can become an issue:
- The issue causes locking of funds for users for more than a week (overrided to 4 hr)
- The issue impacts the availability of time-sensitive functions (cutoff functions are not considered time-sensitive). If at least one of these are describing the case, the issue can be a Medium. If both apply, the issue can be considered of High severity. Additional constraints related to the issue may decrease its severity accordingly. Griefing for gas (frontrunning a transaction to fail, even if can be done perpetually) is considered a DoS of a single block, hence only if the function is clearly time-sensitive, it can be a Medium severity issue.
Low at best. Per the severity guidelines, this is not DoS since no user funds are locked and no sensitive functionality is impacted (ongoing positions/LPs are not impacted). Additionally, this both assumes that no other LPs make any deposits within the same block as the attacker (as the price would be equivalent), and that the price is monotonically decreasing after the attack was initiated. Not only is it low impact, but also low likelihood.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
While this indeed doesn't meet the DoS requirements, this issue can still result in the functionality of the protocol being blocked, with users unable to use this pool. I agree it's low likelihood, but it's still possible.
@spacegliderrrr since this issue relies on precision loss, it requires a POC; can you make one? Also, how low does the price have to go for the rounding down to occur?
spacegliderrrr
Price doesn't really matter - it's just a matter of depositing few enough tokens that they're worth 1 wei of quote token. So if for example the pair is WETH/USDC, a user would need to deposit ~4e8 wei WETH (considering a price of $2,500).
As for the PoC, because the code is written in Vyper and Foundry does not support it, I cannot provide a PoC.
WangSecurity
In that case, could you make a very detailed attack path, I see you provided an example of the price that would cause an issue, but still would like a very detailed attack path.
WangSecurity
One important thing to consider is the following question:
We will report issues where the core protocol functionality is inaccessible for at least 7 days. Would you like to override this value? Yes, 4 hours
So, I see that DOS rules say that the funds should be locked for a week. But, the question is about core protocol functionality being inaccessible. The protocol specified they want to have issues about core protocol functionality being inaccessible for 4 hours.
This finding indeed doesn't lead to locking funds or blocking time-sensitive functions, but it can lead to the core protocol functionality being inaccessible for 4 hours. I see that it's low likelihood, but likelihood is not considered when defining severity. Hence, even though this finding requires that there be no other depositors in the next block and that the price stays lower for the next 4 hours, this can happen. Thus, medium severity for this issue is appropriate. Planning to reject the escalation and leave the issue as it is.
msheikhattari
This issue doesn't impact ongoing operations, so its similar to frontrunning of initializers. No irreversible damage or loss of funds occur.
Core protocol functionality being inaccessible should have some ramifications like lock of funds or broken time-sensitive functionality (like withdrawals). No funds are in the pool when this issue is taking place.
msheikhattari
In any case this issue does need a PoC - per the severity criteria all issues related to precision loss require one.
WangSecurity
@spacegliderrrr could you make a detailed numerical POC, showcasing the precision loss and DOS.
Core protocol functionality being inaccessible should have some ramifications like lock of funds or broken time-sensitive functionality (like withdrawals). No funds are in the pool when this issue is taking place
As I've said previously, this still impacts the core protocol functionality, as the users cannot deposit into the pool, and this very well could last for more than 4 hours. Hence, this is sufficient for medium severity as the core protocol functionality is inaccessible for more than 4 hours.
spacegliderrrr
With a deposit of 4e8 wei WETH at a price of 2500e18, base_to_quote gives amt0 = 4e8 * 2500e18 / 1e18 = 1e12 and lowered = 1e12 / 1e12 = 1, i.e. a pool value of just 1 wei of quote token:
@external
@pure
def base_to_quote(tokens: uint256, ctx: Ctx) -> uint256:
lifted : Tokens = self.lift(Tokens({base: tokens, quote: ctx.price}), ctx)
amt0 : uint256 = self.to_amount(lifted.quote, lifted.base, self.one(ctx))
lowered: Tokens = self.lower(Tokens({base: 0, quote: amt0}), ctx)
return lowered.quote
If the price then drops to 2499e18: amt0 = 4e8 * 2499e18 / 1e18 = 0.9996e12 and lowered = 0.9996e12 / 1e12 = 0, so the pool value rounds down to 0 and any subsequent add-liquidity reverts with a division by zero in f:
def f(mv: uint256, pv: uint256, ts: uint256) -> uint256:
if ts == 0: return mv
else : return (mv * ts) / pv
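A standalone Python re-creation of the numbers above (integer division stands in for the Vyper math; the 4e8 wei WETH amount and the $2,500/$2,499 prices are the assumed values from the comment, and the 1e12 lowering factor assumes a 6-decimal quote token):

```python
LOWER = 10**12   # assumed decimal-lowering factor from the example above

def base_to_quote(base_wei: int, price_e18: int) -> int:
    amt0 = base_wei * price_e18 // 10**18   # value of the deposit in 18-decimal quote units
    return amt0 // LOWER                    # lowered to the quote token's decimals

def f(mv: int, pv: int, ts: int) -> int:
    # LP tokens minted for a deposit of value mv, given pool value pv and LP supply ts
    if ts == 0:
        return mv
    return (mv * ts) // pv                  # division by zero when pv == 0

deposit = 4 * 10**8                                  # ~4e8 wei WETH
print(base_to_quote(deposit, 2500 * 10**18))         # 1 -> first LP mints 1 wei of LP tokens
pool_value = base_to_quote(deposit, 2499 * 10**18)   # 0 -> pool value rounds down to zero
try:
    f(10**6, pool_value, 1)                          # next depositor
except ZeroDivisionError:
    print("add_liquidity reverts: pool is DoS'd until the price recovers")
```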
WangSecurity
As far as I can tell, the POC is correct and this issue will indeed happen. As was said previously, planning to reject the escalation and leave the issue as it is.
WangSecurity
Result: Medium Unique
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/89
bughuntoor
Whale LP providers can open positions on both sides to force users into high fees.
LP fees within the protocol are based on utilization percentage of the total funds in the pool. The problem is that this could easily be abused by LP providers in the following way.
Consider a pool where the majority of the liquidity is provided by a single user.
As long as the whale can maintain the majority of the liquidity provided, the attack remains profitable. If at any point they can no longer afford to maintain the majority, they can simply close their positions without taking a loss, so this is basically risk-free.
Loss of funds
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/params.vy#L33
Manual Review
Consider a different way to calculate fees
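A rough Python sketch of the economics being debated (every number below is an assumption, the fee curve is a linear stand-in, and fixed open/close fees and price risk are ignored):

```python
# Whale supplies most of the pool and opens positions on both sides to inflate
# utilization, which drives the borrow-fee rate up for every open position.
pool_reserves  = 1_000_000
whale_lp_share = 0.90          # assumed majority LP stake
whale_interest = 800_000       # whale's combined long + short open interest
other_interest = 100_000       # honest open interest that keeps paying fees

def borrow_rate(utilization_pct: float) -> float:
    return 0.0001 * utilization_pct     # illustrative linear fee curve

quiet_rate    = borrow_rate(100 * other_interest / pool_reserves)
inflated_rate = borrow_rate(100 * (other_interest + whale_interest) / pool_reserves)

fees_from_whale  = inflated_rate * whale_interest
fees_from_others = inflated_rate * other_interest
whale_net = whale_lp_share * (fees_from_whale + fees_from_others) - fees_from_whale

print(quiet_rate, inflated_rate, whale_net)   # whale_net > 0 only under favourable assumptions
```

Whether whale_net stays positive depends on the whale's LP share, how much honest interest keeps paying the inflated rate, and the opening fees the sketch ignores, which is exactly what the escalation disputes.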
msheikhattari
Escalate Invalid. Quoting a valid point from your own comment:
Issue should be low/info. Ultimately, all LPs would want is fees and this would give them the highest fees possible. Furthermore, the attack is extremely costly, as it would require user to lock up hundreds of thousands/ millions, losing a significant % of them. Any user would have an incentive to add liquidity at extremely high APY, which would allow for both new positions opens and LP withdraws.
This attack inflates borrow fees, but the high APY will attract other LP depositors which would drive the utilization back down to normal levels, reducing the fee. Unlike the issue that you were escalating, this one has no such time sensitivity - the market would naturally tend towards rebalance within the next several days / weeks. It's not reasonable to assume that the existing positions would remain open despite high fees and other LPs would not enter the market over the coming days/weeks.
Not only that, the other assumptions of this issue are incorrect:
If at any point they can no longer afford maintaining majority, they can simply close their positions without taking a loss, so this is basically risk-free.
Wrong. Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. And this is assuming he had a majority stake to begin with, which for the more liquid pools where the attack is most profitable due to a large amount of open interest, is a highly questionable assumption.
The game theory makes it unlikely that the whale would be able to extract enough extra fees to even make a profit net of the operating fees of such an attack.
sherlock-admin3
Escalate Invalid. Quoting a valid point from your own comment:
Issue should be low/info. Ultimately, all LPs would want is fees and this would give them the highest fees possible. Furthermore, the attack is extremely costly, as it would require user to lock up hundreds of thousands/ millions, losing a significant % of them. Any user would have an incentive to add liquidity at extremely high APY, which would allow for both new positions opens and LP withdraws.
This attack inflates borrow fees, but the high APY will attract other LP depositors which would drive the utilization back down to normal levels, reducing the fee. Unlike the issue that you were escalating, this one has no such time sensitivity - the market would naturally tend towards rebalance within the next several days / weeks. It's not reasonable to assume that the existing positions would remain open despite high fees and other LPs would not enter the market over the coming days/weeks.
Not only that, the other assumptions of this issue are incorrect:
If at any point they can no longer afford maintaining majority, they can simply close their positions without taking a loss, so this is basically risk-free.
Wrong. Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit. And this is assuming he had a majority stake to begin with, which for the more liquid pools where the attack is most profitable due to a large amount of open interest, is a highly questionable assumption.
The game theory makes it unlikely that the whale would be able to extract enough extra fees to even make a profit net of the operating fees of such an attack.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
WangSecurity
@spacegliderrrr do you have any counterarguments?
spacegliderrrr
@WangSecurity Issue above showcases a real issue which could occur if a whale decides to attack a pool.
Each opened long AND short position must pay a fixed fee, so the whale is taking a risk. He is betting that the current positions will not close, and his stake will not get diluted, just long enough to eke out a net profit.
True, there's some risk, though most of it can be mitigated. For example if the attack is performed when most opened positions are at negative PnL (which would mean closing them is profitable to the LP providers), most of the risk is mitigated as users have 2 choices - close early at a loss or keep the position open at high fees (either way, profitable for the LP provider).
the market would naturally tend towards rebalance within the next several days / weeks.
True, though as mentioned, it would take days/ weeks in which the whale could profit.
The issue does involve some game theory, but nonetheless shows an actual risk to honest users.
WangSecurity
I also agree there are lots of risks with this scenario. But it's still possible to impose losses on other users by arbitrarily increasing fees. The market would rebalance, but it can take even less than a day to cause losses to users. Hence, I agree that this issue should remain medium severity, because even though the issue has high constraints, it can still cause losses. Planning to reject the escalation.
WangSecurity
Result: Medium Unique
sherlock-admin4
Escalations have been resolved successfully!
Escalation status:
Source: https://github.com/sherlock-audit/2024-08-velar-artha-judging/issues/96
bughuntoor
User could have an impossible-to-close position if funding fees grow too big.
In order to prevent positions from becoming impossible to be closed due to funding fees surpassing collateral amount, there's the following code which pays out funding fees on a first-come first-serve basis.
# 2) funding_received may add up to more than available collateral, and
# we will pay funding fees out on a first come first serve basis
min(fees.funding_received, avail) )
However, this wrongly assumes that the order of action would always be for the side which pays funding fees to close their position before the side which claims the funding fee.
Consider the following scenario:
1. There is an open long position with X collateral and an open short position. The long position pays funding fees to the short position.
2. Eventually the funding fee grows larger than the whole long position (X + Y). It is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour).
3. Another long position is opened, also with X collateral. (Total quote collateral is currently 2X.)
4. The original long is closed. This does not have an impact on the total quote collateral, as it is increased by the funding_paid, which in our case will be counted as exactly as much as the collateral (as in these calculations it cannot surpass it). And it then subtracts that same quote collateral.
5. The original short is closed. funding_received is calculated as X + Y, and therefore that's the amount the total quote collateral is reduced by. The new total quote collateral is 2X - (X + Y) = X - Y.
6. The second long now attempts to close and subtract its X collateral: (X - Y) - X, which will underflow.
Marking this as High, as a user could abuse it to create a max leverage position which cannot be closed. Once it is done, because the position cannot be closed it will keep on accruing funding fees which are not actually backed up by collateral, allowing them to double down on the attack.
Loss of funds
https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/positions.vy#L250 https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/positions.vy#L263 https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/positions.vy#L211
Manual Review
Fix is non-trivial.
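For reference, a Python trace of the scenario above (X and Y are assumed amounts; the second long's own small funding_paid is ignored for simplicity):

```python
X, Y = 100, 20                           # per-long collateral and the funding overshoot

quote_collateral = X + X                 # two longs opened with X collateral each -> 2X

# Original long closes: +funding_paid (capped at its collateral), -its collateral.
funding_paid_long1 = min(X + Y, X)
quote_collateral += funding_paid_long1 - X            # still 2X

# Short closes: receives min(X + Y, available quote collateral).
funding_received_short = min(X + Y, quote_collateral)
quote_collateral -= funding_received_short             # 2X - (X + Y) = X - Y

# Second long tries to close: its X collateral must come out of quote_collateral.
if quote_collateral - X < 0:
    print("underflow:", quote_collateral, "-", X, "-> the position cannot be closed")
```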
KupiaSecAdmin
Escalate
The underflow does not happen by nature of the deduct function. Thus this is invalid.
sherlock-admin3
Escalate
The underflow does not happen by nature of the deduct function. Thus this is invalid.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
spacegliderrrr
Escalate
Severity should be High. Issue above describes how a user could open risk-free max leverage positions, basically making a guaranteed profit from the LPs.
Regarding @KupiaSecAdmin escalation above - please do double check the issue above. The underflow does not happen in deduct
but rather in the MATH.eval
operations. The problem lies within the fact that if order of withdraws is reversed, funding receiver can receive more fees than the total collateral (as long as it is available by other users who have said collateral not yet eaten up by funding fees). Then, some of the funding paying positions will be impossible to be closed.
sherlock-admin3
Escalate
Severity should be High. Issue above describes how a user could open risk-free max leverage positions, basically making a guaranteed profit from the LPs.
Regarding @KupiaSecAdmin escalation above - please do double check the issue above. The underflow does not happen in
deduct
but rather in theMATH.eval
operations. The problem lies within the fact that if order of withdraws is reversed, funding receiver can receive more fees than the total collateral (as long as it is available by other users who have said collateral not yet eaten up by funding fees). Then, some of the funding paying positions will be impossible to be closed.
You've created a valid escalation!
To remove the escalation from consideration: Delete your comment.
You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
ami0x226
This is invalid.
- The original long is closed. This does not have an impact on the total quote collateral, as it is increased by the funding_paid which in our case will be counted as exactly as much as the collateral (as in these calculations it cannot surpass it). And it then subtracts that same quote collateral.
When original long is closed, total quote collateral is changed.
File: gl-sherlock\contracts\positions.vy
209: quote_reserves : [self.MATH.PLUS(pos.collateral), #does not need min()
210: self.MATH.MINUS(fees.funding_paid)],
211: quote_collateral: [self.MATH.PLUS(fees.funding_paid),
212: self.MATH.MINUS(pos.collateral)],
Here, pos.collateral = X and fees.funding_paid = X + Y.
Then quote_collateral <- quote_collateral + X + Y - X = quote_collateral + Y = 2X + Y, and quote_reserves <- quote_reserves + X - X - Y = quote_reserves - Y.
When the original short is closed in step 5, the new total quote collateral is 2X + Y - (X + Y) = X, and there is no underflow in step 6.
As a result, the scenario of the report is wrong.
The loss occurs in quote_reserves, but in practice Y is small enough given frequent liquidations, and it should be assumed that liquidation is done correctly.
Especially because the report does not mention this vulnerability, I think this is invalid.
ami0x226
Also, funding paid cannot exceed the collateral of a position, because of the apply function.
File: gl-sherlock\contracts\math.vy
167: def apply(x: uint256, numerator: uint256) -> Fee:
172: fee : uint256 = (x * numerator) / DENOM
173: remaining: uint256 = x - fee if fee <= x else 0
174: fee_ : uint256 = fee if fee <= x else x
175: return Fee({x: x, fee: fee_, remaining: remaining})
File: gl-sherlock\contracts\fees.vy
265: def calc(id: uint256, long: bool, collateral: uint256, opened_at: uint256) -> SumFees:
269: P_f : uint256 = self.apply(collateral, period.funding_long) if long else (
270: self.apply(collateral, period.funding_short) )
274: return SumFees({funding_paid: P_f, funding_received: R_f, borrowing_paid: P_b})
spacegliderrrr
Heres, pos.collateral = X, fees.funding_paid = X + Y.
Here's where you're wrong. When the user closes their position, funding_paid cannot exceed pos.collateral. So fees.funding_paid == pos.collateral when closing the original long. Please re-read the issue and code again.
ami0x226
Heres, pos.collateral = X, fees.funding_paid = X + Y.
Here's where you're wrong. When the user closes their position, funding_paid cannot exceed pos.collateral. So fees.funding_paid == pos.collateral when closing the original long. Please re-read the issue and code again.
That's true. I mentioned it in the comment above:
Also, Funding paid cannot exceed collateral of a position from the apply function.
I just use fees.funding_paid = X + Y to follow step 2 of bughuntoor's scenario:
- Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour)
rickkk137
Invalid. funding_paid cannot exceed the collateral, and funding_received cannot be greater than funding_paid.
WangSecurity
@spacegliderrrr can you make a coded POC showcasing the attack path from the report?
WangSecurity
We've got the POC from the sponsor:
Hence, the issue is indeed valid. About the severity, as I understand, it's indeed high, since there are no extensive limitations, IIUC. Anyone is free to correct me and the POC, but from my perspective it's indeed correct.
But for now, planning to reject @KupiaSecAdmin escalation, accept @spacegliderrrr escalation and upgrade severity to high.
KupiaSecAdmin
@WangSecurity - No problem with having this issue as valid, but the severity should be Medium at most, I think. Because: 1) usually, in the real world, funding fees don't exceed the collateral; 2) when positions are at risk, liquidation bots will work most of the time.
WangSecurity
As far as I understand, this issue can be triggered intentionally, i.e. the first constraint can be reached intentionally, as explained at the end of the Vulnerability Detail section:
as a user could abuse it to create a max leverage position which cannot be closed
But you're correct that it depends on the liquidation bot malfunctioning, which is also mentioned in the report:
Eventually the funding fee grows larger than the whole long position (X + Y). it is due liquidation, but due to bot failure is not yet liquidated (which based on comments is expected and possible behaviour)
I agree that this is indeed an external limitation. Planning to reject both escalations and leave the issue as it is.
WangSecurity
Result: Medium Has duplicates
sherlock-admin3
Escalations have been resolved successfully!
Escalation status: