Just wanted to add the following resources here which are nice IMO:
overview on vaults at risk
curl 'https://papi.blockanalitica.com/maker/vaults-at-risk/' --compressed
It just sends back the vaults at risk with no auth required, so I guess we could negotiate whether it's allowed to borrow this API's functionality in our UI.
The only question becomes whether it only displays vaults at risk or also the ones that are already undercollateralized.
Initiation of vault liquidation
Happens in the following manner:
1. The vault has to be unsafe (the `Dog`/not-unsafe check) (link)
2. The vault's debt has to be above the minimal amount (`dust` parameter in collateral) (link)
3. The initiator receives the liquidation incentives (`tip` and `chip`)

A minimal sketch of the call itself follows below.
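To make the flow above concrete, here is a sketch of starting a liquidation with ethers.js. The dog address is a placeholder and the ABI fragment is reduced to the single call; both would need to be taken from the chainlog / deployed contracts:

```typescript
import { ethers } from 'ethers';

// Placeholder address: the real one comes from the MakerDAO chainlog (MCD_DOG)
const DOG_ADDRESS = '0x...';
// Reduced ABI: bark(ilk, urn, kpr) per dog.sol
const DOG_ABI = ['function bark(bytes32 ilk, address urn, address kpr) returns (uint256 id)'];

async function startLiquidation(signer: ethers.Signer, collateralType: string, vaultAddress: string) {
    const dog = new ethers.Contract(DOG_ADDRESS, DOG_ABI, signer);
    // the collateral name is passed as bytes32, e.g. 'ETH-A'
    const ilk = ethers.utils.formatBytes32String(collateralType);
    // the caller's address receives the tip and chip incentives
    const incentiveReceiver = await signer.getAddress();
    const transaction = await dog.bark(ilk, vaultAddress, incentiveReceiver);
    return transaction.wait();
}
```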
Detecting underwater and at-risk auctions
There are services that have already been listed.
So regarding (2):
Questions related to both APIs:
Additional verification needed:
Questions to address in the future when there's clearer information
Some additional UI-based questions:
How can we link the collateral auctions view and the new vaults view together?
The Blockanalitica site has many helpful graphs. Do we ideally want to:
How does the frontend barking flow look?
Very long-term UI questions
Happens in the following manner:
In order to review/validate the outlined process, you need to provide a link to the source of the conclusion for every statement in the numbered list.
What about authorisations? What about incentives? What about market prices, OSM prices, etc?
- How many collaterals are supported?
Also, how is support added? Automatically? If manually, what is the process?
You also missed the question about CORS and the possibility to allow our UIs to fetch data directly. I suggest creating a separate comment/private meeting issue to collect questions for the other CU.
- Do we need to support any network except mainnet?
- Do we need OSM-based predictions about what happens to the vault soon-ish in the first version?
Value added and rough effort estimation is needed to answer those kinds of questions. E.g.:
Do we ideally want to recreate the same graphs in our UI?
Effort estimation is needed. You as a designer can propose to display graph(s) if it adds value. Then, based on the added value and the effort needed to implement it, the scoping can take place.
How does the frontend barking flow look?
What about authorisations? What about incentives? What about market prices, OSM prices, wallet/vat balances, allowances?
What about authorisations?
If the question is whether any is needed, then apparently not.
What about incentives?
They arrive at the specified wallet; adjusted the comment.
What about market prices, OSM prices, etc?
The question is again very general and I might not get the purpose of it. In the context of starting the liquidation, these do not appear directly relevant, since the dog contract relies only on the information stored about the vault and the information stored about the collateral. So addressing OSM and market prices seems to be more of a second step, since it is more relevant to updating the price values on the chain, which seems to be done via the `poke` method in the Spotter contract.
You also missed the question
Added
I suggest to create a separate comment/private meeting issue to collect questions for the other CU.
The idea is not exactly clear. Are you implying something like this? #418
Effort estimation is needed. You as a designer can propose to display graph(s) if it adds value. Then, based on the added value and the effort needed to implement it, the scoping can take place.
This effort estimation is hard to do right now. Adding graphs in the frontend is not very difficult with a good Vue package; however, getting the data we need to display the graphs is the difficult part.
BlockAnalitica shows the following information in graphs:
We would have to see if the APIs return the history of the specific vault or only the current value. I would therefore pass the estimation onto @KirillDogadin-std, as he has better insight into the information we get from either the API or Blockchain (depending on what we choose).
How does the frontend barking flow look?
As @KirillDogadin-std stated, we currently do not believe that it is needed.
https://github.com/sidestream-tech/unified-auctions-ui/issues/363#issuecomment-1216613866
For now I can tell that there's a history of the vault in the dedicated endpoint at https://data-api.makerdao.network/redoc#tag/vaults/operation/read_vault_history_v1_vaults_vault_history__vault__get . To understand it completely we would need to get access to the documentation of this API (#418), because there are keys in the response JSON that are not named clearly enough. So far I see at least partial information coverage there, but I can't tell what's missing because of (yet again) the naming.
From the blockchain perspective: just due to the fact that the liquidation ratio is set manually, as far as I understood (https://github.com/makerdao/dss/blob/60690042965500992490f695cf259256cc94c140/src/spot.sol#L91), it's essentially required to go block by block to fetch the history of the liquidation ratio per collateral. The liquidation ratio alone would already mean that we'd have to develop block-by-block parsing functionality and populate a database that we operate ourselves. Seems like an overly big amount of work.
Thank you for your insight! I would say we settle on the following for now: Let us first create a simple mock to get the initial flow figured out. Once we have a first draft and more insight into what information the APIs can provide, we can discuss the value of adding at least some of the graphs.
Barking on the auction / Detecting underwater and at-risk auctions
You mean vaults, not auctions, I suppose? Also, as suggested before, please refrain from using maker terminology if you don't describe particular technical details.
So addressing osm and market prices seems to be more of a second step since it is more relevant to updating the values of the price in the chain.
I just want you and me to have a broader picture before focusing on the implementation: what makes vaults risky, how often that happens, is it predictable, etc.
I would therefore pass the estimation onto @KirillDogadin-std,
Before passing it to @KirillDogadin-std, please follow the same process: first answer the question "What data would help the user understand what has/will happen?", and only then outline how to get this data (or where it's present), which will be the foundation for the estimations.
Idea is not exactly clear. Are you implying something like this? https://github.com/sidestream-tech/unified-auctions-ui/issues/418
I'm referring to this kind of issue: https://github.com/sidestream-tech/auction-ui/issues/428, titled Prepare meeting with X, where the questions/updates/etc. are collected and then the meeting outcomes are documented.
You mean vaults, not auctions, I suppose? ...
Adjusted
I just want you and me to have a broader picture before focusing on the implementation: what makes vaults risky, how often that happens, is it predictable, etc.
Ok, then
Possibly useful link for the future: contains some terminology clarifications and formulas. Good cheatsheet. https://github.com/makerdao/developerguides/blob/master/vault/monitoring-collateral-types-and-vaults/monitoring-collateral-types-and-vaults.md
Answer from @KirillDogadin-std in https://github.com/sidestream-tech/unified-auctions-ui/issues/419#issuecomment-1217795353
Do we only get vaults that are liquidated, ready to be liquidated or something else? (What does "at-risk" mean)
The minimal desired result is to have the vaults that are ready to be liquidated. The best-case scenario is to also list vaults that are at risk. Vaults at risk are the vaults that might soon become ready to be liquidated. Vaults that are liquidated are not in scope, because these vaults are already being auctioned and this is covered by our UI.
What information do we get for every single vault?
Do we get historical data about the vaults or only the current values?
Historical data might be useful at some point, but is not required for the minimal functionality, and it's questionable whether we should use it for displaying vaults at risk. The reason for that is the fact that we could maybe just orient ourselves on the current collateralization ratio and the value that is queued in the OSM contract.
The rates module speaks of ...
Can we give the user an estimate
Technically yes, but my question would be "how far in advance" would you like to make this prediction? First we have to define the "at risk" margin that we are going to use, e.g. with a margin of 10% we would consider predictions for all vaults that are within 10% of being underwater.
Can we give the user an approximate estimate of how much incentive they gain when liquidating a vault?
Yes, the values are stored in the clipper contract of the collateral (link)
What values can be used to create this percentage?
We have the collateralization and liquidation ratios.
In contract terms, these are the formulas (a small sketch of the first one as code follows below):
Collateralization Ratio = Vat.urn.ink * Vat.ilk.spot * Spot.ilk.mat / (Vat.urn.art * Vat.ilk.rate)
Liquidation ratio = Ilk.mat
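A minimal sketch of the first formula as code, assuming the usual WAD (10^18) precision for `ink`/`art` and RAY (10^27) for `spot`, `rate` and `mat`:

```typescript
const RAY = 10n ** 27n;

// Collateralization ratio per the formula above, computed on raw on-chain integers.
// The result is RAY-scaled: a return value of exactly RAY means a ratio of 100%.
function collateralizationRatioRay(ink: bigint, spot: bigint, mat: bigint, art: bigint, rate: bigint): bigint {
    // (WAD * RAY * RAY) / (WAD * RAY) leaves a RAY-scaled ratio
    return (ink * spot * mat) / (art * rate);
}
```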
- [ ] Please adjust the comment with links and references to docs and/or contract lines before I answer.
I am referring to this. As we already previously established, users need to call `drip` to recalculate certain aspects of the debt involved (hence the difference between the ideal rate and the actual rate). My question would be: can we mock the calculations done by `drip` to show the users a preview of what the ideal rate might be compared to the actual rate? This is something we also do with the collateral auctions (locally recalculating the price drop every `step` by `cut`).
Yes, the values are stored in the clipper contract of the collateral (link)
Based on this I assume we will have a store with a record that stores the `coin` value (incentive amount) per collateral, is this correct? (Or do we embed it into our auction object?)
in contract terms these are the formulas:
Thank you for the clarification!
technically yes, but my question would be "how far in advance" would you like to make this prediction? First we have to define the "at risk" margin that we are going to use. e.g. with margin 10% we would consider predictions for all vaults that are within 10% from being underwater.
That is a very good question. Would your proposal be 10%, or was it only an example? I currently do not really know what 10% would mean in this case. Could you elaborate on your decision-making process on why 10% (or another value) might make sense?
My question would be can we mock
Yes, we can mock the calculation of this parameter.
Based of this I assume, we will have a store
I think it's too early to go into technical implementation, since we have not covered the whole picture of understanding yet.
Would your proposal be 10% or was it only an example?
Just an example. In this case 10% is the collateralization ratio minus the liquidation ratio, so 150% CR - 140% LR = 10%.
As promised, here's the summary on how the vaults function.
Here I will try to cover the logic behind the whole collateralization.
Opening the vault
Opening the vault essentially means that the wallet owner hands over the collateral to the contract and in exchange receives some amount of DAI. This means that the owner is in debt to the vault, and the collateral is the guarantee that this debt will be returned one way (repaid) or another (the vault is liquidated and the collateral is sold).
- call `open` in the cdp contract
- call `frob` and specify the changes in the amount of collateral stored + the changes in the debt

Vault parameters
The vault's basic parameters are the debt of the owner (DAI extracted) and the amount of collateral stored inside. The goal is to always store more value in the vault than the debt given out. There's a specific margin defined - the liquidation ratio. This parameter determines the point when the vault's stored value is considered too small relative to the debt. Which leads us to (condition 1):

Condition 1: the vault's stored value always has to be greater than the specific amount determined by the debt and a predefined coefficient. See (1) below on how to determine if this condition is met.

The vault's owner has to pay stability fees for the received DAI. These fees are accumulated over time in the vault in the form of additional debt. The stability fee is a fluid parameter that changes with time.

- `rate` - determines the accumulated stability fee.
- `spot` - determines the maximum value of the debt per unit of collateral.
- `ink` - stored amount of collateral.
- `art` - initial debt.

(1) The condition is `spot * ink > art * rate`, where on the left side of the inequality we have the maximum allowed debt and on the right side we have the actual debt.

`rate` is a value that increments over time. For this, we have to perform operations on the blockchain. Therefore the keepers are incentivised to update the rates in order to be able to liquidate vaults and gain profit, because updating `rate` essentially updates the accumulated debt value and therefore allows turning some vault into a liquidation target (detailed in (2) below).

`spot` is something that is determined by the market. The keepers are incentivised to update its value to be able to liquidate vaults, because this essentially updates the value that is counted for the vault, and it may become lower (detailed in (3) below).
(2) Updating `rate` is done via calling the `drip` function of the `jug` contract, which accepts the collateral name:
- the side effect is that this calls the `fold` function of the `vat` contract, which:
  - updates the `rate` of the collateral stored in this contract (essentially copies over the value)
  - updates the cumulative debt values for the collateral that are stored in the `vat` and `vow` contracts.

(3) Updating the `spot` parameter is done through the `spot` contract via calling the `poke` function:
- it uses the liquidation ratio `mat` to calculate the next value.

A sketch of both update calls follows below.
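A rough sketch of both updates from the keeper's side (ethers.js; the addresses are assumed given and the ABI fragments are reduced to the two functions named above):

```typescript
import { ethers } from 'ethers';

const JUG_ABI = ['function drip(bytes32 ilk) returns (uint256 rate)'];
const SPOTTER_ABI = ['function poke(bytes32 ilk)'];

// Refreshes both values for one collateral: `rate` via jug.drip and `spot` via spotter.poke
async function refreshCollateral(signer: ethers.Signer, jugAddress: string, spotterAddress: string, collateralType: string) {
    const ilk = ethers.utils.formatBytes32String(collateralType);
    const jug = new ethers.Contract(jugAddress, JUG_ABI, signer);
    const spotter = new ethers.Contract(spotterAddress, SPOTTER_ABI, signer);
    await (await jug.drip(ilk)).wait(); // side effect: vat.fold updates vat.ilks(ilk).rate
    await (await spotter.poke(ilk)).wait(); // side effect: vat.ilks(ilk).spot gets updated
}
```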
Tracking existing vaults
The cdp manager implements a linked list that allows to retrieve information about all the existing vaults:
- collateral name
- owner address

By having the collateral name we can get the necessary info from `vat` via the `ilks` mapping; each collateral is described as:
- `Art` - total debt for the collateral type
- `rate` - accumulated stability fee
- `spot` - maximum debt per unit of collateral
- `line` - debt ceiling
- `dust` - minimal allowed debt

By having the vault owner address we can get the necessary info from `vat` via the `urns` mapping; each vault is described as:
- `ink` - amount of stored collateral
- `art` - the debt of this vault without stability fees (how much DAI was extracted by the owner from this vault)

Detecting vaults at risk
It is possible to do this via various means, so here I would just draft the brute-force approach: go over the known vaults, fetch the information stored in `vat` about the vault, and calculate the difference `spot * ink - art * rate`. If it is close to zero but still greater than zero, the vault is at risk. A sketch of this check follows below.
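A sketch of the brute-force check for a single vault (ethers.js; the vat ABI fragments match the mappings described above, and the 10% at-risk margin is just the example value from the earlier discussion):

```typescript
import { ethers } from 'ethers';

const VAT_ABI = [
    'function urns(bytes32, address) view returns (uint256 ink, uint256 art)',
    'function ilks(bytes32) view returns (uint256 Art, uint256 rate, uint256 spot, uint256 line, uint256 dust)',
];

type VaultStatus = 'underwater' | 'at-risk' | 'safe';

async function classifyVault(vat: ethers.Contract, ilk: string, vaultAddress: string): Promise<VaultStatus> {
    const { ink, art } = await vat.urns(ilk, vaultAddress);
    const { rate, spot } = await vat.ilks(ilk);
    const maximumDebt = ink.mul(spot);
    const actualDebt = art.mul(rate);
    if (maximumDebt.lte(actualDebt)) return 'underwater';
    // "close to zero" interpreted here as: actual debt within 10% of the maximum
    if (maximumDebt.mul(100).lt(actualDebt.mul(110))) return 'at-risk';
    return 'safe';
}
```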
Sources
Documentation
Contracts

In the description you are referring to more than the three contracts referenced, right? Can you add all of the related ones (like `spot`, `jug`)?
)The cdp manager implements a linked list that allows to retrieve information about all the existing vaults:
- collateral name
- owner address
Can you please elaborate on that point a bit more? The `cdpi` provides us with the latest id. However, assuming this id to be incremental, some of the previous ids don't seem to be in the list (see screenshot below). Is this due to vaults that have been liquidated? Is there a way to easily determine the ids of vaults in existence?
by having the vault owner address we can get the necessary info from vat via urns mapping, each vault is described as:
Was trying to reproduce this step manually via Etherscan. `urns` takes two parameters - can you elaborate on what these are?
spot - determines the maximum value of the debt per unit of collateral.
Still having a hard time wrapping my head around the concept of `spot`. Can you try to outline the relation between `spot`, the liquidation ratio and the liquidation price for a given vault?
In the description you are referring to more than the three contracts referenced, right? Can you add all of the related ones (like spot , jug )
I've linked 2 directories that contain the contracts. If one is interested in other related contracts, they can still use the reference to go look at them. Also, specific contracts are already linked in the documentation articles at the top.
The cdpi provides us with the latest id.
The id is incremental; blockchain numbers are expressed in base 16, which means that `23` is not a valid request. The valid one is `0x23`.
Is there a way to easily determine the ids of vaults in existence?
I did not see a line of code that deletes the vault's records.
For finding the non-liquidated vaults, the straightforward logic that comes to my mind is:
- get the first id of the vault that belongs to `vat` (aka is confiscated = liquidated)

urns takes two parameters - can you elaborate what these are?
The collateral symbol and the owner. The collateral symbol is hex-encoded, e.g. `0x4554482d41` is ETH-A, so something like `urns['0x4554482d41'][<wallet>]` is a valid entry.
Still having a hard time to wrap my head around the concept of spot. Can you try to outline the relation between spot, the liquidation ratio and the liquidation price for a given vault?
Spot is the value that is determined based on the market price: `s = f(m)`, where `s` is spot, `m` is the market price and `f()` is the conversion logic. It defines "how much debt is allowed per unit of collateral". Let's say it's 10 DAI per unit.

The effect of `spot` (doesn't matter how it works here) is to make the protocol count it as you having 80 market-ETH in the vault.

Now, how `spot` is calculated: the approximate formula is `spot = market price / liquidation ratio`. So if the market price of ETH is 1000$ and the liquidation ratio is 200%, then spot is 500 DAI (here 1 DAI = 1$). Liquidation price is something that I did not mention in the wiki; I would say it's the price of the collateral contained that would force your vault to be liquidated if the debt exceeds it: it's `spot * collateral_amount`.

With this definition, `spot` is the liquidation price of 1 unit of collateral.
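The worked example above in two lines of code (plain numbers, ignoring the on-chain fixed-point precision and assuming the reference price `par` is 1):

```typescript
// spot ≈ market price / liquidation ratio
const approximateSpot = (marketPriceUsd: number, liquidationRatio: number): number =>
    marketPriceUsd / liquidationRatio;

approximateSpot(1000, 2.0); // 500 DAI of allowed debt per unit of collateral
```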
The id is incremental; blockchain numbers are expressed in base 16, which means that 23 is not a valid request. The valid one is 0x23
Get your point, just wondering why it seems to be a valid request for other methods. See below. Anyways, let's not lose ourselves in these details for now.
get the first id of the vault that belongs to vat (aka is confiscated = liquidated)
Where do you derive that vault belonging to vat = liquidated vault? How are you planning to get these ids?
collateral symbol and the owner
Providing an address received via calling `cdpmanager.owner` and a collateral string received via calling `cdpmanager.ilks` always yields `ink` and `art` of 0 (see example below). Any further hints on what I am doing wrong?
Spot is the value that is determined based on the market price: s = f(m)
Thanks for the further explanations!
just wondering why it seems to be a valid request for other methods.
Ah, my bad, I thought that one is obligated to provide hex at all times in Etherscan. Looking into the requests the service sends - no, my assumption was not correct. The `23` is perfectly valid. The 0s just indicate that this vault is the only vault that someone has opened. The wiki probably is written in an unclear way which suggests that there's one list that contains all of the vaults. However, there are multiple instances of those, which are created on a per-user basis.
So if I open 7 vaults and you open 1 vault: my vaults will be in the linked list, your vaults technically too, but the `prev` and `next` values will be `0`, which essentially is a single vault.
Where do you derive that vault belonging to vat = liquidated vault?
Bad idea from my side. The CDP manager does not get affected during confiscation. So instead we're forced to go over the `Bark` events, fetch the vaults from there and then determine the owner via the cdp manager tools.
Any further hints what I am doing wrong?
Are you sure that the vault is not empty?
So if I open 7 vaults and you open 1 vault: my vaults will be in the linked list, your vaults technically too, but the prev and next values will be 0, which essentially is a single vault.
Ok makes sense.
Bad idea from my side. The CDP manager does not get affected during confiscation. So instead we're forced to go over the Bark events, fetch the vaults from there and then determine the owner via the cdp manager tools.
Okay. So currently I have the following high-level picture in my head. Please reiterate if flawed. In order to get an overview of currently active vaults we would need to
Are you sure that the vault is not empty?
Ok, got it. Was trying with the wrong parameters. To document step by step what I did in order to get the desired vault parameters (a small verification script follows below):

1. Vault ID: `28300`
2. Called the `ilks` method of the DssCdpManager with `28300` as input
   - received `0x4554482d42000000000000000000000000000000000000000000000000000000` as bytes32 for ETH-B
3. Called the `urns` method of the DssCdpManager with `28300` as input
   - received `0x24D86f0DEBe681a34C7bE7E3EaC6F3A6b2517100` as the vault address
4. Called the `urns` method of the Vat with the output from 2. as the first input and the output from 3. as the second input
   - received `3663707338812318488603` as output for `ink` (i.e. the vault's amount of collateral) and `3951524264999358736137941` as output for `art` (i.e. the vault's initial debt)
5. Called the `ilks` method of the Vat with the output from 2.
   - received `1111460595480448191015149282` as `rate` (i.e. stability fees?!) and `1205461538461538461538461538461` as `spot` (i.e. maximum value of debt per unit of collateral)
6. Called the `ilks` method of the Spotter with the output from 2.
   - received `1300000000000000000000000000` as `ilk.mat` (i.e. liquidation ratio for the collateral type)
7. Computed `Collateralization Ratio = Vat.urn.ink * Vat.ilk.spot * Spot.ilk.mat / (Vat.urn.art * Vat.ilk.rate)`
   - result: `1.3073` -> 130.73 %
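As a cross-check, the same computation over the raw values from the steps above, assuming WAD (10^18) scaling for `ink`/`art` and RAY (10^27) for `rate`/`spot`/`mat`:

```typescript
const RAY = 10n ** 27n;

const ink = 3663707338812318488603n; // WAD, ~3663.7 units of collateral
const art = 3951524264999358736137941n; // WAD, ~3.95M DAI of initial debt
const rate = 1111460595480448191015149282n; // RAY, ~1.1115
const spot = 1205461538461538461538461538461n; // RAY, ~1205.46 DAI
const mat = 1300000000000000000000000000n; // RAY, 1.3 (130%)

// RAY-scaled collateralization ratio, printed as a percentage
const ratioRay = (ink * spot * mat) / (art * rate);
console.log(Number((ratioRay * 10000n) / RAY) / 100); // ~130.73 %
```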
Liquidation price is something that I did not mention in the wiki; I would say it's the price of the collateral contained that would force your vault to be liquidated if the debt exceeds it: it's spot * collateral_amount.
The calculation of the liquidation price seems not correct. In the example used above with all the given params I would expect a liquidation price of $1,558.41 (as shown here), but I have no clue on how to compute it.
Calculation of liquidation price seems not correct
Another and last speculation/guess: `spot * liquidation ratio` - the actual price that comes in from the OSM.
actual price that comes in from the osm
Is there a way for us to determine this via contracts? I know that:
- `ilks` of the Spotter contract for a certain collateral provides us with an address for `pip` (i.e. the OSM contract for this collateral type)
- via `src` of the respective OSM contract (e.g. OSM_STETHUSD) we get the address of the medianiser contract that is used for this collateral

osm.peek() gives you the current price
osm.peek() gives you the current price
Ok, do you know whether this one is protected by auth? Receiving the error below when trying to execute via Etherscan:
Error: Returned error: execution reverted: OSM/contract-not-whitelisted
Indeed, there's a whitelist you have to be on in order to be able to call this function.
So I would say we stick to deriving the price from the known (available) values, which are `spot` and `mat` (the liquidation ratio / safety margin).
@LukSteib since you've mentioned that you have gaps in understanding regarding the following functions, please first see if the previous posts answer them.
drip
Updating rate is done via calling drip function of jug contract that accepts the collateral name
- the side effect is that this calls fold function of vat contract which
- updates the rate of the collateral stored in this contract (essentially copies over the value)
- updates the cumulative debt values for the collateral that are stored in vat and vow contracts.
poke
The question is again very general and I might not get the purpose of it. In the context of starting the liquidation, these do not appear directly relevant, since the dog contract relies only on the information stored about the vault and the information stored about the collateral. So addressing OSM and market prices seems to be more of a second step, since it is more relevant to updating the price values on the chain, which seems to be done via the poke method in the Spotter contract.
Regarding the API's functionality:
The code of the vaults at risk extraction looks logically correct.
Although there's an alternative extraction logic that is more complex and takes a different approach to what is extracted from the database.
1st question: what went wrong / did not work, so that the alternative endpoint was developed?
2nd question: what's contained in the database in `edw_share.raw.storage_diffs`, or where can one look up the table structure? Because the sqlalchemy models do not seem to have this defined.
For now it seems to me that the information provided by the vaults at risk endpoint is sufficient for our goals. The concern that then remains is the caching time. On top of that, I'm wondering for how many vaults / how often the information contained in the 'vaults at risk' endpoint is not valid / is corrupted.
Overall, the alternative logic seems like a lot of magic and it would be really tricky to understand what's going on there without comments or documentation providing the reasoning.
On the `drip` and `poke` discussion:
drip
Follow-up questions on that:
- Is there a way to determine in advance what effect calling `drip` would have on the `rate` of the given collateral type? For a user it's important to know, since presumably calling `drip` will incur tx fees.
- Is there a chance for us to determine when `drip` was called for the last time for a given collateral type (e.g. by parsing events of the jug contract)?

poke
Probably I am still not capable of putting the different pieces together correctly:
- (as above) Is there a way for us to determine in advance what effect calling `poke` would have on the `spot` of a given collateral?
- Especially in light of your previous comments on the OSM price feed having a buffer, I am wondering whether `poke` would only need to be called once every X amount of time and otherwise it would be a useless way to burn tx fees.

On the api
For now it seems to me that the information provided by the vaults at risk endpoint is sufficient for our goals.
What do you mean by that? Would you recommend using the dedicated endpoint? Or recreating the logic and fetching directly from the chain ourselves?
the code of the vaults at risk extraction looks logically correct.
Can you outline what is done there, similar to previous code investigations?
Is there a way to determine in advance what effect calling drip would have on the rate of the given collateral type? For a user it's important to know since presumably calling drip will incur tx fees
Yes, as usual we can reimplement on our side the function that computes the value in the contract.
Is there a chance for us to determine when drip was called for the last time for a given collateral type (e.g. by parsing events of the jug contract)?
Yes, this is stored in the `jug.sol` contract, in the `ilks` mapping, in the variable `rho`. A sketch of reimplementing the rate computation follows below.
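If I read `jug.sol` correctly, the projected `rate` could be recomputed off-chain roughly like this (a sketch: `duty` comes from `jug.ilks(ilk)`, `base` from `jug.base`, `rho` is the last drip timestamp; `rpow` mirrors the contract's fixed-point exponentiation):

```typescript
const RAY = 10n ** 27n;

// Fixed-point exponentiation by squaring with RAY precision (mirrors the contract's rpow)
function rpow(x: bigint, n: bigint, base: bigint = RAY): bigint {
    let z = base;
    while (n > 0n) {
        if (n % 2n === 1n) z = (z * x) / base;
        x = (x * x) / base;
        n /= 2n;
    }
    return z;
}

// Rate that drip would produce if called at `now`, given the current on-chain values
function projectedRate(currentRate: bigint, duty: bigint, base: bigint, rho: bigint, now: bigint): bigint {
    return (rpow(duty + base, now - rho) * currentRate) / RAY;
}
```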
(as above) Is there a way for us to determine in advance what effect calling poke would have on the spot of a given collateral?
Yes, the same answer as above.
Especially in light of your previous comments on the OSM price feed having a buffer, I am wondering whether poke would only need to be called once every X amount of time and otherwise it would be a useless way to burn tx fees
Yes, you could potentially burn money for nothing here.
What do you mean by that?
In the context of the whole post I mean: "the endpoint seems like it should work, but we know that it does not, since it's still experimental; now I wonder why. If it does not work for 2% of cases, we might just use it".
Would you recommend using the dedicated endpoint?
Not before we find out why it is experimental (aka what cases are there when it does not provide accurate info, and what info is not reliable in these cases).
Or recreating the logic and fetching directly from the chain ourselves?
Since it relies on the database that has the caching time, it does not make sense to do this on our side. We would write an API with the same/similar logic and access the same database. I don't see what we would win in this case.
Can you outline what is done there, similar to previous code investigations?
- the collateral information (`rate`, `spot`, `Art`, `line`, `dust`) is extracted from the vat contract and recorded
- the collateral information (`mat`, `pip`) is extracted from the spot contract and recorded
- the vault information (`ink`, `art`) is extracted and recorded

Own implementation of detecting vaults that are going underwater soon
1. Get the number of vaults via `cdpi` from the cdp manager contract.
2. Go over the `owns` and `ilks` mappings in the cdp manager contract to collect `vaultIndex, vaultOwnerAddress, vaultCollateralType`.
3. Get the vault's state from the `urns` mapping of the `Vat` contract: `vaultCollateralAmount, vaultInitialDebtDai`.
4. Get the collateral's stability fee rate `rate` and the price with safety margin `spot` (stored below as `accumulatedStabilityRate` and `maxDaiPerCollateral`), as well as the `oracleAddress`.
5. Define a `riskCoefficient` and consider a vault at risk if `vaultCollateralAmount * maxDaiPerCollateral < accumulatedStabilityRate * vaultInitialDebt * riskCoefficient`.
6. Read the `src` variable of the OSM oracle contract (can be fetched via `oracleAddress`) to find the supplier of the data and extract the future price via this supplier: use the `LogMedianPrice` event, example of the median contract: https://etherscan.io/address/0x83076a2f42dc1925537165045c9fde9a4b71ad97#code . This yields `futureCollateralUnitPrice`.
7. Compute the future `spot` value: `futureMaxDaiPerCollateral = futureCollateralUnitPrice / par / mat`, where `par` and `mat` are extracted from the Spotter contract.
8. With `futureMaxDaiPerCollateral`, consider the vault as going underwater soon if `futureMaxDaiPerCollateral * vaultCollateralAmount < accumulatedStabilityRate * vaultInitialDebt`.

A sketch of steps 7-8 follows below.
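A sketch of steps 7-8 with the names from the list above (pure bigint math; obtaining `futureCollateralUnitPrice` from the medianizer events is assumed to have happened already):

```typescript
const RAY = 10n ** 27n;

// Step 7: future spot value, mirroring the spotter's price / par / mat in RAY math
function futureMaxDaiPerCollateral(futureCollateralUnitPrice: bigint, par: bigint, mat: bigint): bigint {
    return (((futureCollateralUnitPrice * RAY) / par) * RAY) / mat;
}

// Step 8: does the vault go underwater once the queued price becomes active?
function goesUnderwaterSoon(
    vaultCollateralAmount: bigint, // WAD
    vaultInitialDebt: bigint, // WAD
    accumulatedStabilityRate: bigint, // RAY
    futureMaxDai: bigint, // RAY
): boolean {
    return futureMaxDai * vaultCollateralAmount < accumulatedStabilityRate * vaultInitialDebt;
}
```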
Fetching the data about the vaults apparently has to be periodically repeated. The reason for this is that interactions with vaults are supposed to be done through the cdp manager, and apparently there are no events emitted in this contract (e.g. on `frob`), nor are there events emitted in `vat.frob`, which is in the call chain.
I am not sure if your comment distinguishes between the initial loading of data (e.g. storing it in the database) and updates of that data. Does it? Or do we need to refetch the same thing periodically?
Before I go into details, can you please estimate how many requests will need to be made in order to a) fetch vaults b) update vaults with latest data c) how much time those will take
Added the segment about refetching.
a) fetch vaults - ~30k requests as of now, equal to the number of the vaults; b) 4 requests per vault to populate it with the data; c) a lot of time:
Just extracting the owner from the cdp manager for 10 ids takes around 2.6 seconds:
%timeit -r 4 -n 3 getOwnerAddress()
2.62 s ± 95.9 ms per loop (mean ± std. dev. of 4 runs, 3 loops each)
So 1 request takes 0.26 seconds or so; therefore, with 30k vaults, 5 requests per vault to fetch information for the db, and 0.26 seconds per function call:
30000 * 5 * 0.26 = 39000 sec ≈ 10 hours
It's not bad to run this once, but then we have to write functionality that tracks the activity on the chain and reacts to function calls so that we keep the information up to date.
Since now it's obvious your proposed solution wouldn't work, can you please outline the alternative technical solution?
Well, I don't see how one can avoid using a database for the detection of such vaults; therefore, from my perspective, the problem to fix is keeping the database updated.
For this purpose we could use the same approach as https://github.com/makerdao/pymaker/blob/08821054d009a3b75fd83b248008002391a9c95a/pymaker/auctions.py#L617-L630 implements. There the blocks are filtered out based on the contract and then based on the signature of the transaction the function that was called is determined.
Then the move with the database would be to have a service running that just tracks the transactions that are happening in the Vat contract (`frob` for vault manipulation and `grab` for vault liquidation).
If this approach sounds solid, then I can produce a more detailed technical description here. A sketch of such tracking follows below.
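A sketch of that tracking service in our stack (ethers.js; mirrors the pymaker approach by matching the 4-byte function selector; note it only catches direct calls to the vat, calls routed through other contracts are internal transactions and would need tracing):

```typescript
import { ethers } from 'ethers';

const VAT_ADDRESS = '0x35D1b3F3D7966A1DFe207aa4514C12a259A0492B'; // mainnet MCD_VAT
const vatInterface = new ethers.utils.Interface([
    'function frob(bytes32 i, address u, address v, address w, int256 dink, int256 dart)',
    'function grab(bytes32 i, address u, address v, address w, int256 dink, int256 dart)',
]);

async function processBlock(provider: ethers.providers.Provider, blockNumber: number) {
    const block = await provider.getBlockWithTransactions(blockNumber);
    for (const transaction of block.transactions) {
        if (transaction.to?.toLowerCase() !== VAT_ADDRESS.toLowerCase()) continue;
        const selector = transaction.data.slice(0, 10); // '0x' + 4-byte signature
        for (const name of ['frob', 'grab'] as const) {
            if (selector === vatInterface.getSighash(name)) {
                // args.i is the collateral type, args.u the vault address: mark it for refetching
                const args = vatInterface.decodeFunctionData(name, transaction.data);
                console.log(name, args.i, args.u);
            }
        }
    }
}
```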
There the blocks are filtered out based on the contract and then based on the signature of the transaction the function that was called is determined
I am just wondering if the filtering can happen on the provider side and be delivered to us via websockets. Can the events be used instead? I see that no events are emitted by the vat contract, but, for example, for the confiscation the new auction event is fired.
Would listening to `frob` and `grab` solve refetching of all 4 parameters per vault? Would we still need to refetch everything else (e.g. oracle prices, etc.)? How many requests and how much time would we still need to fetch the initial data (or can it be skipped)? What are the alternatives for fetching this initial data from the chain (e.g., using an external public API)?
Can the events be used instead?
Partially: for `grab`, the liquidation calls `grab` and emits the event; `frob` is a function of the vat contract with no event.

Would listening to frob and grab solve refetching of all 4 parameters per vault?
I've poked around to get the proposal into a more solid condition and reverified:
- `vaultCollateralAmount, vaultInitialDebtDai` can be tracked this way (`vat.frob`, `vat.grab`)
- tracking `maxDaiPerCollateral` can be done via `Poke` event tracking of the `spot` contract
- tracking `accumulatedStabilityRate` has to be done by monitoring calls to `vat.fold`
Used sources of information:

- `dog` - https://github.com/makerdao/dss/blob/master/src/dog.sol
  - `bark` - liquidates the vault, emits the `Bark` event
  - `ilks` - collateral information
  - `Hole` - liquidation limit max
  - `Dirt` - current liquidation amount
- `vat` - https://github.com/makerdao/dss/blob/master/src/vat.sol
  - `frob` - adjusts the vault's debt and collateral
  - `fold` - adjusts the rate of the collateral
  - `ilks` - information about collateral type
  - `urns` - information about the vaults
- `jug` - https://github.com/makerdao/dss/blob/60690042965500992490f695cf259256cc94c140/src/jug.sol
  - `drip` - stability fee collection
  - `ilks` - known collaterals
- `spot` - https://github.com/makerdao/dss/blob/60690042965500992490f695cf259256cc94c140/src/spot.sol
  - `poke` - update the maximal debt per collateral, emits the `Poke` event
  - `ilks` - known collaterals
- `cdpManager` - https://github.com/makerdao/dss-cdp-manager/blob/master/src/DssCdpManager.sol
  - `urns` - addresses of vaults per vault number
  - `ilks` - collateral type per vault number
  - `cdpi` - number of vaults

Permissions, Authorizations that will be needed:
Structure

Types
- https://github.com/sidestream-tech/unified-auctions-ui/pull/446

Services
- `vaults` store
  - `vaults: Record<Vault['id'], VaultTransaction>`
  - `getVaultsAtRisk()`
  - `liquidateVault(index: number)`
  - `getVaultById(id: number)`
- `liquidations.ts`
  - `liquidateVault(collateralType: CollateralType, vaultAddress: string, incentiveReceiverAddress: string)` - calls `dog.bark()`
  - `fetchGlobalLiquidationLimits() -> LiquidationLimit` - calls `dog.Hole()` and `dog.Dirt()`
  - `fetchLiquidationLimits(collateral: CollateralType) -> LiquidationLimit` - calls `dog.ilks(collateral)`
- `vaults.ts`
  - `fetchCdpVault(index: number) -> VaultCdpContract` - calls `cdpManager.urns(index)` to get the vault address and `cdpManager.ilks(index)` to get the collateral type
  - `fetchVaultsCount() -> number` - calls `cdpManager.cdpi()`
  - `fetchVatVault(type: CollateralType, vaultAddress: string) -> { vault: VaultVatContract, collateral: Collateral }` - calls `vat.urns(type, vaultAddress)` to get the collateral amount and initial debt, and `vat.ilks(type, vaultAddress)` to get the collateral configuration
  - `fetchVault(index: number) -> Vault` - combines `fetchCdpVault` and `fetchVatVault` mentioned above
  - `getNextOsmPrice`
  - `getCurrentOsmPrice`

Endpoints
- `/vaults_at_risk/?offset=xx&limit=xx`
- `/backup/dump.sqlite`

Environment variables
- `VAULTS_RECOVERY_FILE` - path to the recovery file (default: `~/vaults.sqlite`). If the file does not exist, generates 100 random numbers, populates the vault table with the ids from this list and caches the contents into the file. Recovers from the file if the path is set.
- `DO_REFETCH_ALL_VAULTS` - triggers the complete refetch of the vaults. Does not trigger (throws an error) if the `VAULTS_RECOVERY_FILE` envvar is NOT set.
- `VAULTS_REFETCH_INTERVAL_SECONDS` - number of seconds

Update logic (a sketch of the event subscriptions follows below)
- run `fetchVault` from the definition above for every index less than `fetchVaultsCount`; return if we need to fetch vaults
- every `UPDATE_FREQUENCY` seconds run the update function
- subscribe to the `dog` contract to react to `Bark` events and set vaults to be liquidated in the db
- subscribe to the `spot` contract and listen to `Poke` to update the db values of `maxDebtPerCollateralUnit`
- use `provider.getBlockWithTransactions` to get blocks and track calls to the `vat` contract: `fold`, `frob`
  - `frob` influences the adjustment of the vault's initial debt and collateral amount
  - `fold` influences the adjustment of the collateral stability fee rate
- the vaults at risk are computed when the `/vaults_at_risk` endpoint is hit
- the `Vault` table represents the fetched vault state
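A sketch of the event-driven part of the update logic (ethers.js; the event signatures should be double-checked against dog.sol and spot.sol, and the two db helpers are hypothetical):

```typescript
import { ethers } from 'ethers';

const DOG_ABI = ['event Bark(bytes32 indexed ilk, address indexed urn, uint256 ink, uint256 art, uint256 due, address clip, uint256 indexed id)'];
const SPOTTER_ABI = ['event Poke(bytes32 ilk, bytes32 val, uint256 spot)'];

// Hypothetical db helpers, to be implemented against the Vault table
declare function markVaultAsLiquidated(ilk: string, urn: string): void;
declare function updateMaxDebtPerCollateralUnit(ilk: string, spot: ethers.BigNumber): void;

function subscribeToUpdates(provider: ethers.providers.Provider, dogAddress: string, spotterAddress: string) {
    const dog = new ethers.Contract(dogAddress, DOG_ABI, provider);
    const spotter = new ethers.Contract(spotterAddress, SPOTTER_ABI, provider);
    dog.on('Bark', (ilk, urn) => markVaultAsLiquidated(ilk, urn));
    spotter.on('Poke', (ilk, _val, spot) => updateMaxDebtPerCollateralUnit(ilk, spot));
}
```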
Thx for the proposal.
Based on the previous discussions I am still lacking an answer on the question: How feasible is this approach? Can you provide an opinion on that?
Some scope and volume related input:
I've spent some time looking into solutions that display vaults (especially: https://tracker-vaults.makerdao.network/). Guiding question: how to limit the scope of requests we potentially would need to make.
`total vaults` ~30k and `active vaults` ~2.7k. So only focusing on the latter figure would already quite significantly reduce the volume we are talking about.
E.g. `WSTETH-B` with 65 active vaults vs. `ETH-A` with 1209 active vaults.
Do you know how it is determined whether a vault is active or not?
According to the repo, it's just the amount of collateral stored in the vault. You can't know it from the blockchain; again, the database is required.
If you need to reduce the number of requests made, we can achieve it by asking for a table dump / querying the data api - there we can save the 30k blockchain requests for the vaults; after that we will only have to fill the leftover gap by requesting information about collateral types from the blockchain (so N collaterals - N requests); I assume no more than 100 requests are needed here ;).
How feasible is this approach? Can you provide an opinion on that?
For my understanding, were those two questions already answered somewhere else?
Types
As discussed in the daily: https://github.com/sidestream-tech/unified-auctions-ui/pull/446#pullrequestreview-1096584426
For my understanding, were those two questions already answered somewhere else?
Sorry, forgot to document the verbal discussion.
Yes, and the answer is: it is feasible, given that we have access to an outside database which we periodically "copy over" from (the mentioned data-api, for example), OR if we do not mind using up a lot of requests and time from time to time to refresh the database state.
Yes, and the answer is: it is feasible, given that we have access to an outside database
- What would be the number of requests in that case? How much time will it need to sync to the latest info?
- What is the exact API endpoint we will be dependent upon?
- Can we still aim for the frontend to be able to work without this service and the database, e.g. via the `/vaults/:id` endpoint from above?
DEV - path to recovery file. If file does not exist - generates 100 random numbers and populates the vault table with the ids from this list, caches the contents into file. Recovers from file if the path is set.
Probably a better name is `VAULTS_RECOVERY_FILE`. What is the format of the file? Should we use sqlite as the database and the recovery file as well? What would production also need this in case it restarts eg during deployment? Where do we fetch this file during development? Is there some kind of endpoint on staging?
DO_REFETCH_ALL_VAULTS - triggers the complete refetch of the vaults. Does not trigger (throws error) if the DEV envvar is set.
I would argue that refetching should be the default logic until some `isDevelopment` flag is set.
UPDATE_FREQUENCY - number of seconds
Better naming is `VAULTS_REFETCH_INTERVAL_SECONDS` – env variable names for amounts should always include denomination units. Btw, are we actually planning to refetch vaults, since I thought we would use websocket delivery to receive events about changes?
What would be the number of requests in that case? How much time will it need to sync to the latest info?
It depends on the negotiations' outcome.
E.g. for 30k vaults we could say that we're allowed to fetch 1000 vaults per request from their API. That's 30 requests; if a single query takes 10 seconds, we would need 5 minutes if we do it synchronously.
What is the exact API endpoint we will be dependent upon?
/vaults/current_state
Can we still aim for the frontend to be able to work without this service and the database
The explainer here is not clear to me. Let me make a statement and let's see if it answers your question: `/vaults/:id` would not be an absolutely critical endpoint to have if the frontend already knows the ids it's interested in and wants to update the store state with the latest information.
What is the format of the file? Should we use sqlite as the database and the recovery file as well?
Using sqlite is a good option. Then the recovery file is the sqlite database itself.
What would production also need this in case it restarts eg during deployment?
Seems like the "what" word is not intended to be in this question, otherwise please adjust it.
Assuming that it's sqlite, yes. If it were e.g. postgres, then the recovery file would not be needed, since the database service would be persisting the stored data.
Where do we fetch this file during development? Is there some kind of endpoint on staging?
This question is not really clear: could you rephrase or give examples to elaborate?
I would argue that refetching should be the default logic until some isDevelopment flag is set.
Ok, works for me.
Adjusted the proposal for the rest of the comments, see the diff there.
The explainer here is not clear to me. Let me make a statement and let's see if it answers your question: /vaults/:id would not be an absolutely critical endpoint to have if the frontend already knows the ids it's interested in and wants to update the store state with the latest information.
The question is: in case the user knows the vault id (e.g. is coming from Twitter that just announced that vault #100 is in a liquidatable state), can we fetch all related data directly from the chain on page load instead of relying on our and DICU dbs: a) avoiding the need for the `/vaults/:id` endpoint or b) doing this in case the endpoint throws an error / is outdated. In any case, we should also add `lastSyncedAt` to each `Vault` in the database.
Where do we fetch this file during development? Is there some kind of endpoint on staging?
During the deployment, do we need to wait for 5 minutes every time we start a dev server and make a few thousand requests? Can you propose a mechanism to avoid it that is not overkill? I imagine an endpoint that returns the sqlite file, so that it can be easily curled during development or even synced under the hood without an extra command:
curl https://unified-auctions.makerdao.com/api/vaults.sqlite --output vaults.sqlite
Does not trigger (throws error) if the VAULTS_RECOVERY_FILE envvar is NOT set.
I imagine that it makes sense to have a default value for the `VAULTS_RECOVERY_FILE` env var.
The question is
Yes, we can; adding the column to the db model.
During the deployment, do we need to wait for 5 minutes every time we start a dev server and make a few thousand requests? Can you propose a mechanism to avoid it that is not overkill? I imagine an endpoint that returns the sqlite file, so that it can be easily curled during development or even synced under the hood without an extra command
If I understand correctly, you're talking about the development flow where each time we start the app it's mandatory to make thousands of requests. The initial proposal is to only make 1 request, get ~100 vaults, and that's it. Then, when the local file exists, it can be mounted to the image and not refetched every time.
I imagine that it makes sense to have default value for the VAULTS_RECOVERY_FILE env var.
Adjusted.
yes, we can. adding the column to the db model
So option b) it is? Can you reflect it in the proposal?
If I understand correctly, you're talking about the development flow where each time we start the app it's mandatory to make thousands of requests
I am talking about a way to have the complete experience without needing to refetch everything. Fetching only 100 vaults is not ideal in case you want it to kickstart and proceed with vault liquidations (e.g. the `bot` that is privately run by default will be started without a database by a person who just cloned it, or will start with an outdated database in case it wasn't run since before a crash) – all those are valid use-cases.
So option b) it is? Can you reflect it in the proposal?
In fact I was leaning towards option a), where the specific vault is instead just fetched from the blockchain. And this is already reflected accordingly.
I imagine an endpoint that returns the sqlite file, so that it can be easily curled during development or even synced under the hood without an extra command
So then, to get this proposal straight before I add it: we run the `curl` command before starting the main functionality.
Goal
Get an understanding of how difficult it is to get all the information/rights/etc. to start new auctions.
Context
Starting new auctions or, in maker terms, `barking` on underwater vaults is the functionality that is missing from a complete end-to-end auction flow. Currently, we only provide functionality to facilitate fair market participation in started collateral auctions, and depend on other community members to start auctions, while it is actually profitable by itself and might be required for newly onboarded collaterals.

Assets
- `dog.bark` method to liquidate a vault: https://github.com/makerdao/dss/blob/fa4f6630afb0624d04a003e920b0d71a00331d98/src/dog.sol#L156-L237
- `clip.kick` method to start an auction: https://github.com/makerdao/dss/blob/fa4f6630afb0624d04a003e920b0d71a00331d98/src/clip.sol#L220-L266

Tasks