sidestream-tech / unified-auctions-ui

Unified MakerDAO auctions
https://unified-auctions.makerdao.com
GNU Affero General Public License v3.0

Investigate functionality to start collateral auctions #363

Closed valiafetisov closed 2 years ago

valiafetisov commented 2 years ago

Goal

Get an understanding of how difficult it is to obtain all the information/rights/etc. needed to start new auctions.

Context

Starting new auctions or, in Maker terms, barking on underwater vaults is the functionality that is missing from a complete end-to-end auction flow. Currently, we only provide functionality to facilitate fair market participation in already-started collateral auctions and depend on other community members to start them, even though starting auctions is profitable by itself and might be required for newly onboarded collaterals.

Assets

Tasks

LukSteib commented 2 years ago

Just wanted to add the following resources here which are nice IMO:

KirillDogadin-std commented 2 years ago

overview on vaults at risk

curl 'https://papi.blockanalitica.com/maker/vaults-at-risk/' --compressed

just sends back the vaults at risk with no auth required, so i guess we could negotiate whether we're allowed to borrow this api's functionality in our ui.

The only question becomes whether it only displays vaults at risk or also the ones that are already undercollateralized.

KirillDogadin-std commented 2 years ago

Initiation of vault liquidation

Happens in the following manner:

  1. Extract the vault parameters (stored collateral and the debt) from vat. link
  2. Make sure that the value stored in the vault is less than the debt, otherwise the call throws the error Dog/not-unsafe. link
  3. Check that the trade is possible and that the overall value currently being auctioned does not exceed the limit (which is set in the contract). link
  4. If the value that can still be auctioned is less than the value contained in the vault, a partial liquidation is initiated. A check is performed so that the liquidated volume is not too small (regulated by the dust parameter of the collateral). link
  5. Vat confiscates the vault (the contents of the vault now belong to the vat, the debt belongs to the protocol). link
  6. Adjust the variable that stores the currently auctioned value. link
  7. Start the auction (with the side effect of incentives arriving at the specified wallet. The incentive amount can be predicted using the Clipper contract coefficients tip and chip)
  8. Emit the event. link
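To make the flow above concrete, here is a minimal, hypothetical sketch of triggering it with ethers.js (v5). The Dog address is a placeholder and the one-line ABI is an assumption based on the public dss sources; a real implementation should take both from the official chainlog/ABIs:

```typescript
import { ethers } from 'ethers';

// assumed: MCD_DOG address from the chainlog and the bark() signature from dog.sol
const DOG_ADDRESS = '0x...';
const DOG_ABI = ['function bark(bytes32 ilk, address urn, address kpr) external returns (uint256 id)'];

async function startCollateralAuction(vaultAddress: string, incentiveReceiver: string) {
    const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL);
    const signer = new ethers.Wallet(process.env.PRIVATE_KEY as string, provider);
    const dog = new ethers.Contract(DOG_ADDRESS, DOG_ABI, signer);
    // the collateral name is encoded as bytes32, e.g. 'ETH-A' -> 0x4554482d41...
    const ilk = ethers.utils.formatBytes32String('ETH-A');
    // kpr receives the tip/chip incentives; the call reverts with Dog/not-unsafe if the vault is safe
    const transaction = await dog.bark(ilk, vaultAddress, incentiveReceiver);
    return await transaction.wait();
}
```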

Detecting underwater and at-risk vaults

There are services that have already been listed.

  1. One of those is using the api mentioned at https://github.com/sidestream-tech/unified-auctions-ui/issues/363#issuecomment-1216329903 ,
  2. the other one is https://data-api.makerdao.network/redoc#tag/vaults which sadly:
    • only has an endpoint that lists the vaults with limited querying capabilities and returns a response that does not contain information about the vault in a format that would allow simply displaying it or determining whether the vault is underwater.
    • has two experimental endpoints for vaults at risk that seem to be identical

So regarding (2):

Both API-related questions:

Additional verification needed:


Questions to address in the future when there's clearer information

zoey-kaiser commented 2 years ago

Some additional UI based questions:

Very long term UI questions

valiafetisov commented 2 years ago

Happens in the following manner:

In order to review/validate the outlined process, you need to provide a link to the source of each conclusion for every statement in the numbered list.

What about authorisations? What about incentives? What about market prices, OSM prices, etc?

  • How many collaterals are supported?

Also, how is the support added? Automatically? If manually, what is the process?

You also missed the question about CORS and the possibility to allow our UIs to fetch data directly. I suggest creating a separate comment/private meeting issue to collect questions for the other CU.

  • Do we need to support any network except mainnet?
  • Do we need OSM based predictions about what happens to the vault soon-ish at the first version?

Value added and a rough effort estimation are needed to answer those kinds of questions. E.g.:

Do we ideally want to recreate the same graphs in our UI?

Effort estimation is needed. You as a designer can propose to display graph(s) if it adds value. Then, based on the added value and the effort needed to implement it, the scoping can take place.

How does the frontend barking flow look?

What about authorisations? What about incentives? What about market prices, OSM prices, wallet/vat balances, allowances?

KirillDogadin-std commented 2 years ago

What about authorisations?

If the question is whether any is needed, then apparently not.

What about incentives?

They arrive at the specified wallet; adjusted the comment.

What about market prices, OSM prices, etc?

The question is again very general and i might not get the purpose of it. In the context of starting the liquidation, these do not appear directly relevant, since the dog contract relies only on the information stored about the vault and the information stored about the collateral. So addressing OSM and market prices seems to be more of a second step, since it is more relevant to updating the price values on the chain. Which seems to be done via the poke method in the Spotter contract

You also missed the question

Added

I suggest to create a separate comment/private meeting issue to collect questions for the other CU.

The idea is not exactly clear. Are you implying something like this? #418

zoey-kaiser commented 2 years ago

Effort estimation is needed. You as a designer can propose to display graph(s) if it adds value. Then, based on the added value and the effort needed to implement it, the scoping can take place.

This effort estimation is hard to do right now. Adding graphs in the frontend is not very difficult with a good Vue package, however getting the data we need to display the graphs is the difficult part.

BlockAnalitica shows the following information in graphs:

We would have to see if the APIs return the history of the specific vault or only the current value. I would therefore pass the estimation onto @KirillDogadin-std, as he has better insight into the information we get from either the API or Blockchain (depending on what we choose).

How does the frontend barking flow look?

As @KirillDogadin-std stated, we currently do not believe that it is needed.

KirillDogadin-std commented 2 years ago

https://github.com/sidestream-tech/unified-auctions-ui/issues/363#issuecomment-1216613866

For now I can tell that there's a history of the vault in the dedicated endpoint at https://data-api.makerdao.network/redoc#tag/vaults/operation/read_vault_history_v1_vaults_vault_history__vault__get . To understand it completely we would need to get access to the documentation of this api (#418), because there are keys in the response json that are not named clearly enough. So far i see at least partial information coverage there, but i can't tell what's missing because of (yet again) the naming.

From the blockchain perspective: due to the fact that the liquidation ratio is set manually as far as i understood (https://github.com/makerdao/dss/blob/60690042965500992490f695cf259256cc94c140/src/spot.sol#L91), it's essentially required to go block by block to fetch the history of the liquidation ratio per collateral. For the liquidation ratio alone this would already mean that we'd have to develop block-by-block parsing functionality and populate a database we operate ourselves. Seems like an overly big amount of work.

zoey-kaiser commented 2 years ago

#363 (comment)

For now I can tell that there's a history of the vault in the dedicated endpoint at https://data-api.makerdao.network/redoc#tag/vaults/operation/read_vault_history_v1_vaults_vault_history__vault__get . To understand it completely we would need to get access to the documentation of this api, because there are keys in the response json that are not named clearly enough. So far i see at least partial information coverage there, but i can't tell what's missing because of (yet again) the naming.

From the blockchain perspective: due to the fact that the liquidation ratio is set manually as far as i understood (https://github.com/makerdao/dss/blob/60690042965500992490f695cf259256cc94c140/src/spot.sol#L91), it's essentially required to go block by block to fetch the history of the liquidation ratio per collateral. For the liquidation ratio alone this would already mean that we'd have to develop block-by-block parsing functionality and populate a database we operate ourselves. Seems like an overly big amount of work.

Thank you for your insight! I would say we settle on the following for now: Let us first create a simple mock to get the initial flow figured out. Once we have a first draft and more insight into what information the APIs can provide, we can discuss the value of adding at least some of the graphs.

valiafetisov commented 2 years ago

Barking on the auction

Detecting underwater and at-risk auctions

You mean vaults, not auctions, I suppose? Also, as suggested before, please refrain from using maker terminology if you don't describe particular technical details.

So addressing OSM and market prices seems to be more of a second step, since it is more relevant to updating the price values on the chain.

I just want you and me to have a broader picture before focusing on the implementation: what makes vaults risky, how often that happens, is it predictable, etc.

I would therefore pass the estimation onto @KirillDogadin-std,

Before passing it to @KirillDogadin-std, please follow the same process: first answer the question "What data would help the user understand what has happened/will happen?" and only then outline how to get this data (or where it's present), which will be the foundation for the estimations.

Idea is not exactly clear. Are you implying something like this? https://github.com/sidestream-tech/unified-auctions-ui/issues/418

I'm referring to this kind of issue: https://github.com/sidestream-tech/auction-ui/issues/428, titled Prepare meeting with X, where the questions/updates/etc are collected and then the meeting outcomes are documented.

KirillDogadin-std commented 2 years ago

You mean vaults, not auctions, I suppose? ...

Adjusted

I just want you and me to have a broader picture before focusing on the implementation: what makes vaults risky, how often that happens, is it predictable, etc.

Ok, then

KirillDogadin-std commented 2 years ago

Possibly useful link for the future: contains some terminology clarifications and formulas. Good cheatsheet. https://github.com/makerdao/developerguides/blob/master/vault/monitoring-collateral-types-and-vaults/monitoring-collateral-types-and-vaults.md

zoey-kaiser commented 2 years ago

Answer from @KirillDogadin-std in https://github.com/sidestream-tech/unified-auctions-ui/issues/419#issuecomment-1217795353

Do we only get vaults that are liquidated, ready to be liquidated or something else? (What does "at-risk" mean)

the minimal desired result is to have the vaults that are ready to be liquidated. the best case scenario is to also list vaults that are at risk. vaults at risk are the vaults that might soon become ready to be liquidated. Vaults that are liquidated are not in scope because these vaults are already being auctioned and this is covered by our ui.

What information do we get for every single vault?

Do we get historical data about the vaults or only the current values?

historical data might be useful at some point, but is not required for the minimal functionality, and it's questionable whether we should use it for displaying vaults at risk. The reason for that is the fact that we maybe could just orient ourselves on the current collateralization ratio and the value that is queued in the osm contract.

The rates module speaks of ...

Can we give the user an estimate

technically yes, but my question would be "how far in advance" would you like to make this prediction? First we have to define the "at risk" margin that we are going to use. E.g. with a margin of 10% we would consider predictions for all vaults that are within 10% of being underwater.

Can we give the user an approximate estimate of how much incentive they gain when liquidating a vault?

Yes, the values are stored in the clipper contract of the collateral (link)

What values can be used to create this percentage?

we have the collateralization and liquidation ratios.

in contract terms these are the formulas:

Collateralization Ratio = Vat.urn.ink * Vat.ilk.spot * Spot.ilk.mat / (Vat.urn.art * Vat.ilk.rate)
Liquidation ratio = Ilk.mat
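A minimal sketch of the same computation in code, assuming the raw values were already fetched from the chain (BigNumber.js is an assumed dependency; ink/art are wads, 1e18, while rate/spot/mat are rays, 1e27):

```typescript
import BigNumber from 'bignumber.js';

const WAD = new BigNumber(10).pow(18);
const RAY = new BigNumber(10).pow(27);

function getCollateralizationRatio(
    ink: BigNumber, // Vat.urn.ink: locked collateral, wad
    art: BigNumber, // Vat.urn.art: normalized debt, wad
    rate: BigNumber, // Vat.ilk.rate: accumulated stability fee, ray
    spot: BigNumber, // Vat.ilk.spot: max debt per unit of collateral, ray
    mat: BigNumber // Spot.ilk.mat: liquidation ratio, ray
): BigNumber {
    const maximumDebt = ink.div(WAD).times(spot.div(RAY)).times(mat.div(RAY));
    const actualDebt = art.div(WAD).times(rate.div(RAY));
    // > 1 means the vault is overcollateralized, < 1 means it is liquidatable
    return maximumDebt.div(actualDebt);
}
```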
zoey-kaiser commented 2 years ago
  • [ ] please adjust the comment with links and references to docs and/or contract lines before i answer.

I am referring to this. As we previously established, users need to call drip to recalculate certain aspects of the debt involved (hence the difference between the ideal rate and the actual rate).

My question would be: can we mock the calculations done by drip to show the users a preview of what the ideal rate might be compared to the actual rate? This is something we also do with the collateral auctions (locally recalculating the price drop every step by cut).

Yes, the values are stored in the clipper contract of the collateral (link)

Based on this I assume we will have a store with a record that stores the coin value (incentive amount) per collateral, is this correct? (Or do we embed it into our auction object?)

in contract terms these are the formulas:

Thank you for the clarification!

technically yes, but my question would be "how far in advance" would you like to make this prediction? First we have to define the "at risk" margin that we are going to use. E.g. with a margin of 10% we would consider predictions for all vaults that are within 10% of being underwater.

That is a very good question. Would your proposal be 10% or was it only an example? I currently do not really know what 10% would mean in this case. Could you elaborate on your decision-making process on why 10% (or another value) might make sense?

KirillDogadin-std commented 2 years ago

My question would be can we mock

yes, we can mock the calculation of this param.

Based on this I assume we will have a store

i think it's too early to go into technical implementation since we did not cover the whole picture of understanding yet.

Would your proposal be 10% or was it only an example?

just an example. in this case 10% is the collateralization ratio minus the liquidation ratio, so 150% CR - 140% LR = 10%

KirillDogadin-std commented 2 years ago

As promised, here's the summary of how the vaults function

interacting with vaults

Here i will try to cover the logic that underlies the whole collateralization.

opening the vault

Opening the vault essentially means that the wallet owner hands over the collateral to the contract and in exchange receives some amount of DAI. This means that the owner now owes a debt on the vault, and the collateral is the guarantee that this debt will be settled one way (repaid) or another (the vault is liquidated and the collateral is sold).

vault parameters

the vault's basic parameters are the debt of the owner (dai extracted) and the amount of collateral stored inside. The goal is always to store more value in the vault than the debt given out. There's a specific margin defined - the liquidation ratio. This parameter determines the point at which the vault's stored value is considered too small relative to the debt. Which leads us to (condition 1):

condition 1: the vault's stored value always has to be greater than a specific threshold determined by the debt and a predefined coefficient. See (1) below on how to determine if this condition is met

The vault's owner has to pay out stability fees for the received DAI. These fees are accumulated over time in the vault in the form of the additional debt. The stability fee is a fluid parameter that changes with time.

  1. Vault characteristics that are relevant to determine whether condition 1 is met:
    • rate - determines the accumulated stability fee.
    • spot - determines the maximum value of the debt per unit of collateral.
    • ink - stored amount of collateral
    • art - initial debt
    • The (condition 1) can be expressed as spot * ink > art * rate, where on the left side of the inequality we have the maximum allowed debt and on the right side we have the actual debt.

rate is a value that increments over time; updating it requires performing operations on the blockchain. Therefore, the keepers are incentivised to update the rates in order to be able to liquidate vaults and gain profit, because updating rate essentially updates the accumulated debt value and can therefore turn some vault into a liquidation target. (detailed in (2) below)

spot is something that is determined by the market. The keepers are incentivised to update its value to be able to liquidate vaults, because the update can lower the value that is considered to be stored in the vault. (detailed in (3) below)

  2. Updating rate is done via calling the drip function of the jug contract, which accepts the collateral name

    • the side effect is that this calls the fold function of the vat contract, which
      • updates the rate of the collateral stored in this contract (essentially copies over the value)
      • updates the cumulative debt values for the collateral that are stored in the vat and vow contracts.
  3. Updating the spot parameter is done through the spot contract via calling the poke function.

    • This function refers to the price feed and the stored-in-contract liquidation ratio mat to calculate the next value.
    • The role of the price feed in this case is taken by the so-called OSM module.
    • There's one osm contract per collateral.
    • One does not have to communicate with the OSM directly to update prices.
    • Extra: the OSM serves as a buffer which feeds the market prices of the collateral into the system with a delay. All OSMs are tracked in the mother contract here: to look up the osm for a collateral, one converts the string into a hex string and uses the "Read contract" functionality of etherscan to receive the address. You can use this to convert the string. E.g. "ETH-A" is "0x4554482d41", which leads to https://etherscan.io/address/0x81FE72B5A8d1A857d176C3E7d5Bd2679A9B85763, and then one finds out that the source of the price is the https://etherscan.io/address/0x64de91f5a373cd4c28de3600cb34c7c6ce410c85#code contract
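A hedged sketch of the two update calls described in (2) and (3), assuming ethers.js (v5); the addresses are placeholders and the one-line ABIs are assumptions based on the public jug.sol and spot.sol sources:

```typescript
import { ethers } from 'ethers';

const JUG_ABI = ['function drip(bytes32 ilk) external returns (uint256 rate)'];
const SPOTTER_ABI = ['function poke(bytes32 ilk) external'];

async function refreshCollateralParameters(signer: ethers.Signer, collateralType: string) {
    const ilk = ethers.utils.formatBytes32String(collateralType); // e.g. 'ETH-A'
    const jug = new ethers.Contract('0x...', JUG_ABI, signer); // assumed MCD_JUG address
    const spotter = new ethers.Contract('0x...', SPOTTER_ABI, signer); // assumed MCD_SPOT address
    await (await jug.drip(ilk)).wait(); // updates Vat.ilks(ilk).rate via Vat.fold
    await (await spotter.poke(ilk)).wait(); // recomputes Vat.ilks(ilk).spot from the OSM price and mat
}
```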

Tracking existing vaults

The cdp manager implements a linked list that allows retrieving information about all the existing vaults:

by having the collateral name we can get the necessary info from vat via the ilks mapping; each collateral is described as:

by having the vault owner address we can get the necessary info from vat via the urns mapping; each vault is described as:

Detecting vaults at risk

It is possible to do this via various means, so here i would just draft the brute-force approach (sketched in code below).

  1. initialize the index of the vault to be the first index of the linked list stored in the cdp contract
  2. Extract information from vat about the vault, calculate the difference spot * ink - art * rate. If it is close to zero but still greater than zero, the vault is at risk
  3. Address the linked list stored in the cdp contract, get the next vault's index.
  4. Extract information ....
  5. ....
  6. No more elements in the linked list, terminate.
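A rough sketch of this brute-force scan, assuming ethers.js (v5) and the documented CdpManager/Vat getters; the addresses are placeholders, and a simple id iteration from 1 to cdpi stands in for walking the per-user linked lists:

```typescript
import { ethers } from 'ethers';
import BigNumber from 'bignumber.js';

const CDP_MANAGER_ABI = [
    'function cdpi() external view returns (uint256)',
    'function urns(uint256) external view returns (address)',
    'function ilks(uint256) external view returns (bytes32)',
];
const VAT_ABI = [
    'function urns(bytes32, address) external view returns (uint256 ink, uint256 art)',
    'function ilks(bytes32) external view returns (uint256 Art, uint256 rate, uint256 spot, uint256 line, uint256 dust)',
];

async function findVaultsAtRisk(provider: ethers.providers.Provider, riskMargin = 0.1): Promise<number[]> {
    const manager = new ethers.Contract('0x...', CDP_MANAGER_ABI, provider); // assumed CDP_MANAGER address
    const vat = new ethers.Contract('0x...', VAT_ABI, provider); // assumed MCD_VAT address
    const vaultsAtRisk: number[] = [];
    const latestVaultId = (await manager.cdpi()).toNumber();
    for (let vaultId = 1; vaultId <= latestVaultId; vaultId++) {
        const urnAddress = await manager.urns(vaultId);
        const ilk = await manager.ilks(vaultId);
        const { ink, art } = await vat.urns(ilk, urnAddress);
        const { rate, spot } = await vat.ilks(ilk);
        // both sides are rad-scaled (wad * ray): maximum allowed debt vs actual debt
        const maximumDebt = new BigNumber(ink.toString()).times(spot.toString());
        const actualDebt = new BigNumber(art.toString()).times(rate.toString());
        if (maximumDebt.gt(actualDebt) && maximumDebt.lt(actualDebt.times(1 + riskMargin))) {
            vaultsAtRisk.push(vaultId); // still safe, but within the agreed risk margin
        }
    }
    return vaultsAtRisk;
}
```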

Sources

Documentation

Contracts

LukSteib commented 2 years ago

The cdp manager implements a linked list that allows to retrieve information about all the existing vaults:

  • collateral name
  • owner address

Can you please elaborate on that point a bit more? The cdpi provides us with the latest id. However, assuming this id to be incremental, some of the previous ids don't seem to be in the list (see screenshot below). Is this due to vaults that have been liquidated? Is there a way to easily determine the ids of vaults in existence?

(screenshot)

by having the vault owner address we can get the necessary info from vat via urns mapping, each vault is described as:

Was trying to reproduce this step manually via etherscan. urns takes two parameters - can you elaborate what these are?

spot - determines the maximum value of the debt per unit of collateral.

Still having a hard time wrapping my head around the concept of spot. Can you try to outline the relation between spot, the liquidation ratio and the liquidation price for a given vault?

KirillDogadin-std commented 2 years ago

In the description you are referring to more than the three contracts referenced, right? Can you add all of the related ones (like spot, jug)?

i've linked 2 directories that contain the contracts. If one is interested in other related contracts, they can still use the reference to go look at them. Also, specific contracts are already linked in the documentation articles at the top.

The cdpi provides us with the latest id.

The id is incremental; blockchain numbers are expressed in base 16, which means that 23 is not a valid request. The valid one is 0x23.

Is there a way to easily determine the ids of vaults in existence?

i did not see the line of code that deletes the vault's records.

For finding the non-liquidated vaults, the idea in my head that comes up in the form of a very straightforward logic is:

  1. get the first id of a vault that belongs to vat (aka is confiscated = liquidated)
  2. go through the list and write down all the ids that belong to vat.
  3. take the numbers from 0 to the value of the incremental counter, remove all numbers that are in the list of liquidated vaults.
  4. we now have a list of existing vaults.

urns takes two parameters - can you elaborate what these are?

collateral symbol and the owner.

so something like urns['0x4554482d41'][<wallet>] is a valid entry

Still having a hard time to wrap my head around the concept of spot. Can you try to outline the relation between spot, the liquidation ratio and the liquidation price for a given vault?

Liquidation price is something that i did not mention in the wiki; i would say it's the price of the contained collateral that would force your vault to be liquidated if the debt exceeds it. It's spot * collateral_amount.

With this definition, spot is the liquidation price of 1 unit of collateral.

LukSteib commented 2 years ago

The id is incremental; blockchain numbers are expressed in base 16, which means that 23 is not a valid request. The valid one is 0x23

Got your point, just wondering why it seems to be a valid request for other methods. See below. Anyways, let's not lose ourselves in these details for now.

(screenshots)

get the first id of the vault that belongs to vat (aka is confiscated = liquidated)

Where do you derive that a vault belonging to vat = a liquidated vault? How are you planning to get these ids?

collateral symbol and the owner

Providing an address received via calling cdpmanager.owner and a collateral string received via calling cdpmanager.ilks always yields ink and art of 0 (see example below). Any further hints what I am doing wrong?

(screenshot)

Spot is the value that is determined based on the market price: s = f(m)

thx for the further explainers!

KirillDogadin-std commented 2 years ago

just wondering why it seems to be a valid request for other methods.

ah, my bad, i thought that one is obligated to provide hex at all times in etherscan. Looking into the requests the service sends - no, my assumption was not correct. The 23 is perfectly valid. The 0s just indicate that this vault is the only vault that someone has opened. The wiki is probably written in an unclear way which suggests that there's one list that contains all of the vaults. However, there are multiple instances of those, which are created on a per-user basis.

So i open 7 vaults, you open 1 vault. My vaults will be in the linked list, your vault technically too, but the prev and next values will be 0, which essentially means a single vault.

Where do you derive that vault belonging to vat = liquidated vault?

Bad idea from my side. The CDP manager does not get affected during confiscation. So instead we're forced to go over the Bark events, fetch the vaults from there and then determine the owner via the cdp manager tools.

Any further hints what I am doing wrong?

are you sure that the vault is not empty?

(screenshot)

LukSteib commented 2 years ago

So i open 7 vaults, you open 1 vault. My vaults will be in the linked list, your vault technically too, but the prev and next values will be 0, which essentially means a single vault.

Ok makes sense.

Bad idea from my side. CDP manager does not get affected during confiscation. Then instead we're forced to go over the Bark events and fetch the vaults from there and then determine the owner via cdp manager tools.

Okay. So currently I have the following high-level picture in my head. Please re-iterate if flawed. In order to get an overview of currently active vaults we would need to

  1. fetch all events from opening a new vault via the appropriate contract
    • -> provides us with a long list of all vaults that have been created
  2. fetch all events from liquidating an existing vault via the appropriate contract
    • -> allows us to reduce the list from 1. to non-liquidated vaults
  3. For each vault of the reduced list after 2., fetch the collateral type and owner address via the appropriate contract(s)
  4. With the collateral type and owner address we can fetch the current vault parameters

are you sure that the vault is not empty?

Ok, got it. Was trying with the wrong parameters. To document step by step what I did in order to get the desired vault parameters:

  1. Searched for an exemplary active vault via https://maker.blockanalitica.com/simulations/vaults-at-risk/
    • went with Vault ID: 28300
  2. Queried the ilks method of the DssCdpManager with 28300 as input
    • obtained 0x4554482d42000000000000000000000000000000000000000000000000000000 as bytes32 for ETH-B
  3. Queried the urns method of the DssCdpManager with 28300 as input
    • obtained 0x24D86f0DEBe681a34C7bE7E3EaC6F3A6b2517100 as the vault address
  4. Queried the urns method of the Vat with the output from 2. as the first input and the output from 3. as the second input
    • obtained 3663707338812318488603 as output for ink (i.e. the vault's amount of collateral) and 3951524264999358736137941 as output for art (i.e. the vault's initial debt)
  5. Queried the ilks method of the Vat with the output from 2.
    • obtained 1111460595480448191015149282 as rate (i.e. stability fees?!) and 1205461538461538461538461538461 as spot (i.e. maximum value of debt per unit of collateral)
  6. Queried the ilks method of the Spotter with the output from 2.
    • obtained 1300000000000000000000000000 as ilk.mat (i.e. the liquidation ratio for the collateral type)
  7. Computed the collateralisation ratio of the vault at hand via the formula (worked through in the sketch below): Collateralization Ratio = Vat.urn.ink * Vat.ilk.spot * Spot.ilk.mat / (Vat.urn.art * Vat.ilk.rate)
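For reference, plugging the values from steps 4–6 into the formula (plain floating point is enough for a sanity check; real code should use BigNumber to avoid precision loss):

```typescript
// values copied from the etherscan walkthrough above
const ink = Number('3663707338812318488603') / 1e18; // ~3663.71 (wad)
const art = Number('3951524264999358736137941') / 1e18; // ~3951524.26 (wad)
const rate = Number('1111460595480448191015149282') / 1e27; // ~1.1115 (ray)
const spot = Number('1205461538461538461538461538461') / 1e27; // ~1205.46 (ray)
const mat = Number('1300000000000000000000000000') / 1e27; // 1.3, i.e. 130% (ray)

const collateralizationRatio = (ink * spot * mat) / (art * rate);
console.log(collateralizationRatio); // ~1.307, i.e. the vault is ~130.7% collateralized
```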

Liquidation price is something that i did not mention in the wiki; i would say it's the price of the contained collateral that would force your vault to be liquidated if the debt exceeds it. It's spot * collateral_amount.

The calculation of the liquidation price seems not correct. In the example used above with all the given params I would expect a liquidation price of $1,558.41 (as shown here) but have no clue on how to compute it.

KirillDogadin-std commented 2 years ago

The calculation of the liquidation price seems not correct

another and last speculation/guess: spot * liquidation ratio = the actual price that comes in from the osm

LukSteib commented 2 years ago

actual price that comes in from the osm

Is there a way for us to determine this via contracts? I know that

KirillDogadin-std commented 2 years ago

osm.peek() gives you the current price

LukSteib commented 2 years ago

osm.peek() gives you the current price

Ok, do you know whether this one is protected by auth? Receiving the error below when trying to execute via etherscan:

Error: Returned error: execution reverted: OSM/contract-not-whitelisted

KirillDogadin-std commented 2 years ago

indeed, there's a whitelist you have to be on in order to be able to call this function.

so i would say we stick to deriving the price from the known (available) values, which are spot and mat (the liquidation ratio / safety margin).
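A small sketch of that derivation: per the public spot.sol, Spotter.poke stores spot = (price / par) / mat (all in ray math), so the OSM-fed price can be reconstructed by inverting it. par is readable from the Spotter; BigNumber.js stands in for the ray arithmetic:

```typescript
import BigNumber from 'bignumber.js';

const RAY = new BigNumber(10).pow(27);

// inverts the Spotter formula spot = (price / par) / mat to recover the price
function deriveCollateralPrice(spot: BigNumber, mat: BigNumber, par: BigNumber): BigNumber {
    return spot.div(RAY).times(mat.div(RAY)).times(par.div(RAY));
}
```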

KirillDogadin-std commented 2 years ago

@LukSteib since you've mentioned that you lack understanding regarding the following functions, please first see if the previous posts raise any questions.

drip

Updating rate is done via calling the drip function of the jug contract, which accepts the collateral name

  • the side effect is that this calls the fold function of the vat contract, which
    • updates the rate of the collateral stored in this contract (essentially copies over the value)
    • updates the cumulative debt values for the collateral that are stored in the vat and vow contracts.

poke

The question is again very general and i might not get the purpose of it. In the context of starting the liquidation, these do not appear directly relevant, since the dog contract relies only on the information stored about the vault and the information stored about the collateral. So addressing OSM and market prices seems to be more of a second step, since it is more relevant to updating the price values on the chain. Which seems to be done via the poke method in the Spotter contract

KirillDogadin-std commented 2 years ago

Regarding the api's functionality:

the code of the vaults at risk extraction looks logically correct.

Although there's an alternative extraction logic that is more complex and takes a different approach to what is extracted from the database.

1st question: what went wrong / did not work so that the alternative endpoint was developed? 2nd question: what's contained in the database in edw_share.raw.storage_diffs, or where can one look up the table structure? Because the sqlalchemy models do not seem to have this defined.

For now it seems to me that the information provided by the vaults-at-risk endpoint is sufficient for our goals. The concern that then remains is the caching time. On top of that, i'm wondering how many vaults / how often the information contained in the 'vaults at risk' endpoint is not valid / is corrupted.

Overall the alternative logic seems like a lot of magic, and it would be really tricky to understand what's going on there without comments or documentation that provide the reasoning.

LukSteib commented 2 years ago

On the drip and poke discussion:

drip

Follow up questions on that:

poke

Probably I am still not capable of putting the different pieces together correctly:


On the api

For now it seems to me that the information provided by the vaults at risk endpoint is sufficient for our goals.

What do you mean by that? Would you recommend using the dedicated endpoint? Or recreating the logic and directly fetch from chain ourselves?

the code of the vaults at risk extraction looks logically correct.

Can you outline what is done there, similar to previous code investigations?

KirillDogadin-std commented 2 years ago

Is there a way to determine in advance what effect calling drip would have on the rate of the given collateral type? For a user it's important to know, since presumably calling drip will incur tx fees

yes, as usual we can reimplement on our side the function that computes the value in the contract.

Is there a chance for us to determine when drip was called for the last time for a given collateral type (e.g. by parsing events of the jug contract)?

yes, this is stored in the jug.sol contract in ilks mapping in the variable rho
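Given rho and duty (plus the global base) from the jug, the post-drip rate can be estimated locally. A minimal sketch, assuming jug.drip multiplies the stored rate by (base + duty)^Δt in ray math (per the public jug.sol) and using BigNumber.js:

```typescript
import BigNumber from 'bignumber.js';

BigNumber.config({ DECIMAL_PLACES: 45 }); // enough precision for ray math

const RAY = new BigNumber(10).pow(27);

function predictRateAfterDrip(
    currentRate: BigNumber, // Vat.ilks(ilk).rate, ray
    duty: BigNumber, // Jug.ilks(ilk).duty: per-second stability fee, ray
    base: BigNumber, // Jug.base, ray
    rho: number, // Jug.ilks(ilk).rho: unix timestamp of the last drip
    now: number // current unix timestamp
): BigNumber {
    const perSecondRate = duty.plus(base).div(RAY);
    // jug.drip computes rate = rpow(base + duty, now - rho) * rate
    return currentRate.times(perSecondRate.pow(now - rho));
}
```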

(as above) Is there a way for us to determine in advance what effect calling poke would have on the spot of a given collateral?

yes, the same answer from the above

Especially in light of your previous comments on the OSM price feed having a buffer, I am wondering whether poke would only need to be called once every X amount of time and otherwise it would be a useless way to burn tx fees

yes, you could burn money for nothing here potentially.


What do you mean by that?

in the context of the whole post i mean "the endpoint seems like it should work, but we know that it does not, since it's still experimental; now i wonder why. If it does not work for 2% of cases, we might just use it".

Would you recommend using the dedicated endpoint?

not before we find out why it is experimental (aka what cases are there when it does not provide accurate info and what info is not reliable in these cases)

Or recreating the logic and directly fetch from chain ourselves?

since it's relying on a database that has caching time, it does not make sense to do this on our side. We would write an api with the same/similar logic and access the same database. I don't see what we would win in this case.

Can you outline what is done there, similar to previous code investigations?

KirillDogadin-std commented 2 years ago

Own implementation of detecting vaults that are going underwater soon

  1. Fetch all the vaults that exist and store it in the database:
    • Get the value of the latest vault index cdpi from cdp manager contract
    • Extract owner and Collateral type for each of the vault indices via owns and ilks mappings in the cdp manager contract
      • new data: vaultIndex, vaultOwnerAddress, vaultCollateralType
    • Extract the locked collateral and normalized debt from each vault via urns mapping of Vat contract:
      • new data: vaultCollateralAmount, vaultInitialDebtDai
    • Extract the collateral parameters from the vat contract: stability fee rate, price with safety margin spot
      • new data: accumulatedStabilityRate, maxDaiPerCollateral
    • Extract the oracle address from the blockchain via osmmom contract
      • new data: oracleAddress
  2. Process the database rows where vault can be at risk:
    • agree on the margin that we use to determine whether the vault is at risk. Let's call it riskCoefficient
    • extract from the db all the rows which comply with the condition vaultCollateralAmount * maxDaiPerCollateral < accumulatedStabilityRate * vaultInitialDebt * riskCoefficient
    • refresh the values in these vaults to the latest parameters via blockchain
  3. Look up if the vault is going underwater
    • get the current price and the future price of the collateral
      • use the src variable of the osm oracle contract (can be obtained via oracleAddress) to find the supplier of the data and extract the future price via this supplier: use the LogMedianPrice event, [example of the median contract](https://etherscan.io/address/0x83076a2f42dc1925537165045c9fde9a4b71ad97#code).
        • new data: futureCollateralUnitPrice
      • use the values to compute the future spot value: futureMaxDaiPerCollateral = futureCollateralUnitPrice / par / mat where par and mat are extracted from the Spotter contract.
        • new data: futureMaxDaiPerCollateral
  4. Result:
    • vault is going underwater if futureMaxDaiPerCollateral * vaultCollateralAmount < accumulatedStabilityRate * vaultInitialDebt

Fetching of the data about the vaults apparently has to be periodically repeated. The reason for this is that interactions with vaults are supposed to be done through the cdp manager, and there are apparently no events emitted in this contract (e.g. frob), nor are there events emitted in vat.frob, which is in the call chain.
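A sketch of steps 3–4 in code, assuming all inputs were fetched as described above (names mirror the proposal; wads are 1e18, rays 1e27; BigNumber.js is an assumed dependency):

```typescript
import BigNumber from 'bignumber.js';

const WAD = new BigNumber(10).pow(18);
const RAY = new BigNumber(10).pow(27);

function isVaultGoingUnderwater(
    vaultCollateralAmount: BigNumber, // ink, wad
    vaultInitialDebtDai: BigNumber, // normalized debt art, wad
    accumulatedStabilityRate: BigNumber, // rate, ray
    futureCollateralUnitPrice: BigNumber, // queued OSM price from the LogMedianPrice event, wad
    par: BigNumber, // Spotter.par, ray
    mat: BigNumber // Spotter.ilks(ilk).mat, ray
): boolean {
    // futureMaxDaiPerCollateral = futureCollateralUnitPrice / par / mat
    const futureMaxDaiPerCollateral = futureCollateralUnitPrice
        .div(WAD)
        .div(par.div(RAY))
        .div(mat.div(RAY));
    const futureMaximumDebt = futureMaxDaiPerCollateral.times(vaultCollateralAmount.div(WAD));
    const actualDebt = vaultInitialDebtDai.div(WAD).times(accumulatedStabilityRate.div(RAY));
    return futureMaximumDebt.lt(actualDebt);
}
```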

valiafetisov commented 2 years ago

I am not sure if your comment contains a distinction between the initial loading of data (eg store it in the database) and updates of that data. Does it? Or do we need to refetch the same thing periodically?

Before I go into details, can you please estimate how many requests will need to be made in order to a) fetch vaults b) update vaults with latest data c) how much time those will take

KirillDogadin-std commented 2 years ago

added the segment about refetching.

KirillDogadin-std commented 2 years ago

a) fetch vaults - ~30k requests as of now, equal to the number of vaults.
b) 4 requests per vault to populate it with the data.
c) A lot of time.

Just extracting the owner from the cdp manager for 10 ids takes around 2.6 seconds:

%timeit -r 4 -n 3 getOwnerAddress()
2.62 s ± 95.9 ms per loop (mean ± std. dev. of 4 runs, 3 loops each)

then, 1 request takes 0.26 seconds or so; therefore, with 30k vaults, 5 requests per vault to fetch the information for the db, and 0.26 seconds per function call:

30000 * 5 * 0.26 = 39000 sec ≈ 10.8 hours

it's not bad to run this once, but then we have to write functionality that tracks the activity in the chain and reacts to function calls so that we keep the information up-to-date

valiafetisov commented 2 years ago

Since it's now obvious that your proposed solution wouldn't work, can you please outline an alternative technical solution?

KirillDogadin-std commented 2 years ago

Well, i don't see how one can avoid using the database for detection of such vaults, therefore from my perspective the problem to fix is keeping the database updated.

For this purpose we could use the same approach as https://github.com/makerdao/pymaker/blob/08821054d009a3b75fd83b248008002391a9c95a/pymaker/auctions.py#L617-L630 implements. There, the blocks are filtered based on the contract, and then the function that was called is determined based on the signature of the transaction.

Then the move with the database would be to have a service running that just tracks the transactions that are happening in the Vat contract (frob for vault manipulation and grab for vault liquidation)

If this approach sounds solid, then i can get a more technical description here.
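A rough sketch of that pymaker-style filtering, assuming ethers.js (v5); the Vat address is a placeholder, and the selectors are derived from the public vat.sol signatures. Note it only catches transactions sent directly to the Vat — calls arriving via the cdp manager or a proxy would need call tracing on top:

```typescript
import { ethers } from 'ethers';

const VAT_ADDRESS = '0x...'; // assumed MCD_VAT address
const FROB_SELECTOR = ethers.utils.id('frob(bytes32,address,address,address,int256,int256)').slice(0, 10);
const GRAB_SELECTOR = ethers.utils.id('grab(bytes32,address,address,address,int256,int256)').slice(0, 10);

async function findVaultChangesInBlock(provider: ethers.providers.Provider, blockNumber: number) {
    const block = await provider.getBlockWithTransactions(blockNumber);
    // keep transactions to the Vat whose calldata starts with the frob/grab selector
    return block.transactions.filter(
        transaction =>
            transaction.to?.toLowerCase() === VAT_ADDRESS.toLowerCase() &&
            (transaction.data.startsWith(FROB_SELECTOR) || transaction.data.startsWith(GRAB_SELECTOR))
    );
}
```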

valiafetisov commented 2 years ago

There, the blocks are filtered based on the contract, and then the function that was called is determined based on the signature of the transaction

I am just wondering if the filtering can happen on the provider side and be delivered to us via websockets. Can the events be used instead? I see that no events are emitted by the vat contract, but, for example, for the confiscation the new auction event is fired.

Would listening to frob and grab solve refetching of all 4 parameters per vault? Would we still need to refetch everything else (eg oracle prices, etc)? How many requests and how much time would we still need to fetch the initial data (or can it be skipped)? What are the alternatives for fetching this initial data from the chain (eg, using an external public api)?

KirillDogadin-std commented 2 years ago

Can the events be used instead?

  1. with barking and vault liquidation it is possible (sketched below). Here the setup is:
    • call the dog contract
    • the dog contract has auth in vat to call grab
    • the dog contract executes the function grab and emits the event
  2. with vault manipulation, no event that definitely allows knowing what happens with the vault was found by me.
    • vault manipulation should be done via the cdp manager (according to the documentation)
    • the cdp manager calls the frob function of the vat contract
    • neither the cdp manager nor vat emits any events during manipulations.
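For case (1), a possible sketch of the event-based tracking over websockets with ethers.js (v5); the Dog address is a placeholder and the Bark event signature comes from the public dog.sol:

```typescript
import { ethers } from 'ethers';

const DOG_ABI = [
    'event Bark(bytes32 indexed ilk, address indexed urn, uint256 ink, uint256 art, uint256 due, address clip, uint256 indexed id)',
];

function watchLiquidations(websocketUrl: string) {
    const provider = new ethers.providers.WebSocketProvider(websocketUrl);
    const dog = new ethers.Contract('0x...', DOG_ABI, provider); // assumed MCD_DOG address
    dog.on('Bark', (ilk, urn, ink, art, due, clip, auctionId) => {
        // e.g. mark the vault with this urn address as liquidated in the local database
        console.info(`vault ${urn} (${ethers.utils.parseBytes32String(ilk)}) was barked, auction ${auctionId}`);
    });
}
```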

Would listening to frob and grab solve refetching of all 4 parameters per vault?

i've poked around to get the proposal into a more solid condition and reverified:

KirillDogadin-std commented 2 years ago

global feature proposal

Types

https://github.com/sidestream-tech/unified-auctions-ui/pull/446

Services

LukSteib commented 2 years ago

Thx for the proposal.

Based on the previous discussions I am still lacking an answer on the question: How feasible is this approach? Can you provide an opinion on that?


Some scope and volume related input:

I've spent some time looking into solutions that display vaults (especially: https://tracker-vaults.makerdao.network/). Guiding question: How to limit scope of requests we potentially would need to make.

KirillDogadin-std commented 2 years ago

Do you know how it is determined whether a vault is active or not?

according to the repo it's just the amount of collateral stored in the vault. You can't know it from the blockchain; again - the database is required.

If you need to reduce the number of requests made, we can achieve that by asking for a table dump / querying the data api - that way we save the 30k blockchain requests (one per vault); after that we will only have to fill the leftover gap by requesting information about the collateral types from the blockchain (so N collaterals - N requests); i assume no more than 100 reqs are needed here ;).

valiafetisov commented 2 years ago

How feasible is this approach? Can you provide an opinion on that?

For my understanding, were those two questions already answered somewhere else?

Types

As discussed in the daily: https://github.com/sidestream-tech/unified-auctions-ui/pull/446#pullrequestreview-1096584426

KirillDogadin-std commented 2 years ago

For my understanding, were those two questions already answered somewhere else?

sry, forgot to document the verbal discussion;

yes & the answer is: it is feasible, given that we have access to an outside database which we "copy over" from periodically (the mentioned data-api, for example), OR if we do not mind using up a lot of requests and time from time to time to refresh the database state.

valiafetisov commented 2 years ago

yes & the answer is it is feasible given that we have the access to the outside database

  • DEV - path to recovery file. If file does not exist - generates 100 random numbers and populates the vault table with the ids from this list, caches the contents into file. Recovers from file if the path is set.

Probably a better name is VAULTS_RECOVERY_FILE

  • DO_REFETCH_ALL_VAULTS that triggers the complete refetch of the vaults. Does not trigger (throws error) if the DEV envvar is set.

I would argue that refetching should be the default logic until some isDevelopment flag is set.

  • UPDATE_FREQUENCY - number of seconds

Better naming is VAULTS_REFETCH_INTERVAL_SECONDS – env variable names for amounts should always include denomination units. Btw, are we actually planning to refetch vaults since I thought we would use websocket delivery to receive events about changes?

KirillDogadin-std commented 2 years ago

What would be the number of requests in that case? How much time will it need to sync to the latest info?

it depends on the negotiations' outcome.

E.g. for 30k vaults we could say that we're allowed to fetch 1000 vaults per request from their api. Then, if a single query takes 10 seconds, we would need 5 minutes if we do it synchronously.

What is the exact API endpoint we will be dependent upon?

https://data-api.makerdao.network/redoc#tag/vaults/operation/read_current_vaults_v1_vaults_current_state_get

/vaults/current_state

Can we still aim for the frontend to be able to work without this service and the database

The explainer here is not clear to me. Let me make a statement and let's see if it answers your question: /vaults/:id would not be an absolutely critical endpoint to have if the frontend already knows the ids it's interested in and wants to update the store state with the latest information.

What is the format of the file? Should we use sqlite as the database and the recovery file as well?

using sqlite is a good option. Then the recovery file is the sqlite database itself.

What would production also need this in case it restarts eg during deployment?

Seems like the "what" word is not intended to be in this question, otherwise please adjust it.

assuming that it's sqlite, yes. If it were e.g. postgres, then the recovery file would not be needed, since the database service would be persisting the stored data.

Where do we fetch this file during development? Is there some kind of endpoint on staging?

this question is not really clear: could you rephrase or give examples to elaborate?

I would argue that refetching should be the default logic until some isDevelopment flag is set.

ok, works for me


adjusted the proposal for the rest of comments, see the diff there

valiafetisov commented 2 years ago

The explainer here is not clear to me. Let me make a statement and let's see if it answers your question: /vaults/:id would not be absolutely critical endpoint to have if the frontend already knows the ids it's interested in and wants to update the store state with the latest information.

The question is: in case the user knows the vault id (eg is coming from twitter which just announced that vault #100 is in a liquidatable state), can we fetch all related data directly from the chain on page load instead of relying on our and DICU dbs: a) avoiding the need for the /vaults/:id endpoint, or b) doing this in case the endpoint throws an error/is outdated. In any case, we should also add lastSyncedAt to each Vault in the database

Where do we fetch this file during development? Is there some kind of endpoint on staging?

During the deployment, do we need to wait for 5 minutes every time we start a dev server and make a few thousand requests? Can you propose a mechanism to avoid it that is not an overkill? I imagine an endpoint that returns the sqlite file, so that it can be easily curled during development or even synced under the hood without an extra command

curl https://unified-auctions.makerdao.com/api/vaults.sqlite --output vaults.sqlite

Does not trigger (throws error) if the VAULTS_RECOVERY_FILE envvar is NOT set.

I imagine that it makes sense to have a default value for the VAULTS_RECOVERY_FILE env var.

KirillDogadin-std commented 2 years ago

The question is

yes, we can. adding the column to the db model

During the deployment, do we need to wait for 5 minutes every time we start a dev server and make a few thousand requests? Can you propose a mechanism to avoid it that is not an overkill? I imagine an endpoint that returns the sqlite file, so that it can be easily curled during development or even synced under the hood without an extra command

If i understand correctly, you're talking about the development flow where, each time we start the app, it's mandatory to throw thousands of requests. The initial proposal is to only throw 1 request, get ~100 vaults and that's it. Then, when the local file exists, it can be mounted into the image and not refetched every time.

I imagine that it makes sense to have default value for the VAULTS_RECOVERY_FILE env var.

adjusted

valiafetisov commented 2 years ago

yes, we can. adding the column to the db model

So option b) it is? Can you reflect it in the proposal?

If i understand correctly, you're talking about the development flow where, each time we start the app, it's mandatory to throw thousands of requests

I am talking about a way to have the complete experience without needing to refetch everything. Fetching only 100 vaults is not ideal in case you want to kickstart it and proceed with vault liquidations (eg the privately run bot will by default start without a database for a person who has just cloned it, or will start with an outdated database in case it wasn't run until a crash) – all those are valid use-cases.

KirillDogadin-std commented 2 years ago

So option b) it is? Can you reflect it in the proposal?

in fact i was leaning towards option a), where the specific vault is instead just fetched from the blockchain. And this is reflected accordingly already.

I imagine an endpoint that returns sqlite file, so that it can be easily curled from the development of even synced under the hood without extra command

so then, to get this proposal straight before i add it: