Consensys / defi-score

DeFi Score: An open framework for evaluating DeFi protocols
https://defiscore.io

Liquidity Is Not A Measure of Risk #7

Open KyleJKistner opened 4 years ago

KyleJKistner commented 4 years ago

> **Liquidity**
>
> The currently scoped platforms all attempt to incentivize liquidity by using dynamic interest rate models, which produce varying rates depending on the level of liquidity in each asset pool. However, incentivized liquidity does not mean guaranteed liquidity. A user takes on the risk that they will not be able to withdraw their lent-out assets on demand because all the assets are currently lent out.
>
> Liquidity risk is assessed by a single data point derivable from on-chain data: the level of liquidity. This data point is the 30-day EMA of liquidity, normalized using logarithmic min-max normalization of the amount of liquidity in USD across all of the available lending pools. The absolute level of liquidity is used instead of the percentage utilization (outstandingDebt/totalAssets) because it has the side effect of also scoring larger pools higher.
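For concreteness, a minimal sketch of that normalization step (not the project's actual implementation; pool names and liquidity histories are hypothetical):

```python
import numpy as np

def ema(series, span=30):
    """Exponential moving average with the standard span-based smoothing factor."""
    alpha = 2 / (span + 1)
    out = np.empty(len(series))
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1 - alpha) * out[i - 1]
    return out

def log_min_max_score(pool_liquidity_usd):
    """Normalize each pool's latest 30-day EMA of USD liquidity to [0, 1]
    using logarithmic min-max normalization across all pools."""
    emas = {pool: ema(hist)[-1] for pool, hist in pool_liquidity_usd.items()}
    logs = {pool: np.log(v) for pool, v in emas.items()}
    lo, hi = min(logs.values()), max(logs.values())
    return {pool: (v - lo) / (hi - lo) for pool, v in logs.items()}

# Hypothetical daily USD liquidity histories for three pools
pools = {
    "protocol_a": np.linspace(5e6, 8e6, 60),
    "protocol_b": np.linspace(4e7, 6e7, 60),
    "protocol_c": np.linspace(1e6, 1.5e6, 60),
}
print(log_min_max_score(pools))  # larger pools score closer to 1.0
```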

This has no bearing on the safety of the protocols from the perspective of lenders, and simply puts a thumb on the scale for bigger protocols, which is the opposite of the way risk actually flows. If the intention is to create information in the marketplace that can help lenders determine the risk-adjusted rate, then this is not only not useful but actively distortionary. Only factors that can cause the underlying lending pool to shrink in size due to losses should be considered. To this end, collateral quality is an excellent metric while size of the lending pool is an irrelevant metric.

This is a UX issue, not an issue of risk. For all of the protocols, borrowers will eventually be liquidated or forced to pull out as borrowing rates skyrocket. This means that, while possibly annoying, lenders will get their full funds back within a few weeks. It also means they'll be earning extremely high interest rates while they wait. Again, as this does not reflect a potential loss to lenders, it has no bearing on the effective risk-adjusted rate.
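To make the rate-skyrocketing mechanism concrete, here is a stylized utilization-based borrow rate curve of the kind these protocols use. The shape (a kink with a steep upper slope) is the common pattern; the specific parameters are hypothetical, not any protocol's actual settings.

```python
def borrow_rate(utilization, base=0.02, slope1=0.20, slope2=3.0, kink=0.80):
    """Stylized utilization-based borrow rate: gentle slope below the kink,
    steep slope above it, so rates spike as liquidity dries up."""
    if utilization <= kink:
        return base + slope1 * utilization
    return base + slope1 * kink + slope2 * (utilization - kink)

for u in (0.5, 0.8, 0.95, 1.0):
    print(f"utilization {u:.0%}: borrow APR {borrow_rate(u):.1%}")
# utilization 100% -> ~78% APR in this toy parameterization
```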

I would suggest that a better and more relevant criterion would be to quantify the throughput of the liquidation mechanism and to weigh this against the quantity of liquidation that would be needed given the level of borrowing activity on the platform. It should be noted that smaller protocols with a lower absolute quantity of funds being borrowed are actually safer because they require less throughput in their liquidation mechanism, and are therefore less likely to see borrowers default, ceteris paribus.
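A minimal sketch of what such a throughput metric could look like. The stress fraction and all dollar figures are illustrative assumptions, not measured values.

```python
def liquidation_headroom(max_throughput_usd_per_day, outstanding_borrow_usd,
                         stress_liquidation_fraction=0.25):
    """Compare the liquidation mechanism's daily capacity against the volume
    that would need liquidating in a stress scenario. Ratios >= 1 suggest the
    mechanism can keep up; the fraction and figures here are illustrative only."""
    required = outstanding_borrow_usd * stress_liquidation_fraction
    return max_throughput_usd_per_day / required

# Hypothetical: a small protocol with modest throughput vs. a large one
print(liquidation_headroom(2e6, 5e6))    # 1.6 -> comfortable headroom
print(liquidation_headroom(20e6, 400e6)) # 0.2 -> throughput shortfall
```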

eboadom commented 4 years ago

Agree with you @KyleJKistner on "liquidation power" (throughput of the liquidation mechanism against the liquidated amounts) as the most basic measurement of liquidity risk, but don't you think that the non-availability of liquidity for lenders to withdraw is also directly a risk? I am referring to liquidity that is not borrowed, and so accessible without the intervention of the liquidation mechanism. In my opinion, including this parameter in the liquidity risk measurement removes the dependency on potential imperfections in the liquidation mechanism, in terms of time and amounts.
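That unborrowed-liquidity parameter is straightforward to compute from the same on-chain data; a trivial sketch with hypothetical figures:

```python
def withdrawable_coverage(total_assets_usd, outstanding_debt_usd):
    """Fraction of lender deposits that could be withdrawn right now without
    waiting on repayments or liquidations (i.e., 1 - utilization)."""
    available = total_assets_usd - outstanding_debt_usd
    return available / total_assets_usd

print(withdrawable_coverage(10e6, 9.5e6))  # 0.05 -> only 5% withdrawable on demand
```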

KyleJKistner commented 4 years ago

> Don't you think that the non-availability of liquidity for lenders to withdraw is also directly a risk? I am referring to liquidity that is not borrowed, and so accessible without the intervention of the liquidation mechanism.

If we define a "risk" in terms of a probability of the underlying lending pool losing value, then I do not believe that non-availability of liquidity for lenders represents a meaningful risk. Certainly not in isolation.

> It removes the dependency on potential imperfections in the liquidation mechanism, in terms of time and amounts.

The availability of liquidity does not enable lenders to avoid the imperfections of a liquidation mechanism in any practical sense. Failures in the liquidation mechanism are sudden and unpredictable, a black swan event. There is a doubly exceptional case where the liquidation mechanism has failed in a permanent way and a bank run follows, so that the remaining lenders are repeatedly subjected to the imperfect mechanism and sustain repeated losses. But this is a sort of interactive risk that does not exist on its own. I think the scores should attempt to isolate primitive forms of risk (collateral risk, liquidation throughput, smart contract risk) before delving into interactive risks.

jclancy93 commented 4 years ago

Thanks for sharing your thoughts @KyleJKistner. I agree with a lot of the points here.

> This has no bearing on the safety of the protocols from the perspective of lenders, and simply puts a thumb on the scale for bigger protocols, which is the opposite of the way risk actually flows.

It does have the side effect of scoring larger pools higher, but I disagree that it is the opposite of the way risk flows. Higher liquidity equates to higher TVL for a protocol, and high TVL sustained over time means these protocols are more "battle-tested". No doubt it's an imperfect metric, though, and it doesn't directly relate to the financial risk of permanent loss. I personally think we can get much more quantitative on the financial-risk element of the score, and it is one of the most pressing updates we need to make to the scoring algorithm.

> I would suggest that a better and more relevant criterion would be to quantify the throughput of the liquidation mechanism and to weigh this against the quantity of liquidation that would be needed given the level of borrowing activity on the platform. It should be noted that smaller protocols with a lower absolute quantity of funds being borrowed are actually safer because they require less throughput in their liquidation mechanism, and are therefore less likely to see borrowers default, ceteris paribus.

Liquidation mechanics are really important for all of these protocols, and smaller protocols should have an advantage here: they should consume less of the on-chain liquidity pool and therefore face less risk of a market liquidity crisis. Something I've struggled with when thinking about liquidation mechanics is how to compare off-chain mechanisms like Compound's to on-chain mechanisms like bZx/Fulcrum's. I would be interested to hear any thoughts you have on this.

KyleJKistner commented 4 years ago

I think we agree that TVL is a positive signal in terms of smart contract security. Meanwhile, total amount borrowed is a negative in terms of counterparty risk.

> Something I've struggled with when thinking about liquidation mechanics is how to compare off-chain mechanisms like Compound's to on-chain mechanisms like bZx/Fulcrum's. I would be interested to hear any thoughts you have on this.

Generally I believe that Compound's and dYdX's throughput is more robust than Fulcrum's, which is in turn more robust than Nuo's. I would rank them as follows: dYdX > Compound >> Fulcrum >> Nuo. There are several factors to take into consideration, such as partial liquidations (these enable a protocol to sustain a higher overall borrowing amount), the size of the liquidation community (how many addresses frequently call in to liquidate), and access to underlying liquidity.

When it comes to a market like ETH/DAI, there's a lot of similarity in how capable Fulcrum, Compound, and dYdX (but not Nuo) are of handling liquidations, since all three essentially draw from the same liquidity pool, support partial liquidations, and allow permissionless community liquidations. The same is true of wBTC, since most of its liquidity is on Kyber. On the other hand, an asset like LINK would have a much easier time being liquidated on Compound than on Fulcrum, because liquidators on Compound can easily tap into the largest source of LINK liquidity, which is Binance, while Fulcrum is restricted to Kyber. That said, Fulcrum needs much less liquidity for its assets than Compound and dYdX because it is much smaller, so it is likely using less of its total maximum throughput.

It's difficult to put an exact quantity on throughput, but it is fairly straightforward to break down the components that make a mechanism more robust. One thing that can be quantified neatly is the size of the liquidity pool capable of being accessed.
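As a rough sketch of that last point, quantifying accessible liquidity against the debt that might need liquidating; all figures are hypothetical placeholders, not live market data:

```python
def liquidity_access_ratio(accessible_dex_liquidity_usd, outstanding_borrow_usd):
    """Rough robustness signal: how much liquidator-accessible market depth
    exists per dollar of debt that might need liquidating."""
    return accessible_dex_liquidity_usd / outstanding_borrow_usd

# Illustrative comparison for a single asset: liquidators restricted to one
# on-chain venue vs. liquidators free to source liquidity anywhere.
print(liquidity_access_ratio(3e6, 10e6))   # 0.3 -> thin relative to need
print(liquidity_access_ratio(50e6, 10e6))  # 5.0 -> ample depth
```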