credmark / product-backlog

Repo for product ideas

Historical prices [Spike] #39

Open murphy opened 2 years ago

murphy commented 2 years ago

We need to be able to provide historical prices, fast. This implies that they need to be pre-computed.

Our current price model assumes the existence and predominance of Curve, Sushi, and Uni. Those assumptions don't hold in the past. We need to divide DeFi history into epochs and define the best pricing strategy for each.
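To make the epoch idea concrete, here is a minimal sketch of dispatching on block number; the boundaries and strategy names below are placeholders for illustration, not a proposal for the actual epochs.

```python
from bisect import bisect_right

# Illustrative epoch table: (first block of the epoch, pricing strategy).
# The real boundaries would be derived from historical DEX liquidity.
EPOCHS = [
    (0,          "oracle-only"),
    (10_000_000, "univ2-sushi"),
    (12_400_000, "univ2-v3-sushi-curve"),
]

def pricing_strategy(block_number: int) -> str:
    """Pick the strategy whose epoch contains the given block."""
    starts = [start for start, _ in EPOCHS]
    return EPOCHS[bisect_right(starts, block_number) - 1][1]

assert pricing_strategy(9_000_000) == "oracle-only"
assert pricing_strategy(15_042_265) == "univ2-v3-sushi-curve"
```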

MattCMK commented 2 years ago

@0xpetersatoshi @leafyoung I talked with Paul - I will go ahead and propose a draft for these epochs based on the historical liquidity of DEXs.

leafyoung commented 2 years ago

I have been drilling into this issue since last Friday and raised https://github.com/credmark/credmark-models-py/issues/163

It started with an investigation into a spike in TVL for the FEI/TRIBE pool, which I traced to a spike in the TRIBE price.

[chart: spike in FEI/TRIBE pool TVL]

| Date | Block Number | Price |
| --- | --- | --- |
| 2022.6.29 | 15042265 | 0.15269660445795039 |
| 2022.6.30 | 15047598 | 0.7481559507879423 |
| 2022.7.1 | 15053226 | 0.15642244758327012 |

The cause is that the liquidity calculation for the UniV2/V3/SushiSwap pools was inaccurate, which let prices from pools with very little liquidity into the blend. I have fixed it by using tick liquidity:

Result:

| Block Number | Price |
| --- | --- |
| 15042265 | 0.15269660445795039 |
| 15047598 | 0.1528340705990112 |
| 15053226 | 0.15642244758327012 |

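For illustration, a minimal sketch of the idea behind the fix: weight each pool's quote by its in-range (tick) liquidity so that thin pools cannot dominate the blended price. The pool numbers below are made up, not the actual FEI/TRIBE data or model code.

```python
from dataclasses import dataclass

@dataclass
class PoolQuote:
    price: float      # token price implied by the pool
    liquidity: float  # in-range (tick) liquidity, in quote-token terms

def blended_price(quotes: list[PoolQuote], min_liquidity: float = 0.0) -> float:
    """Liquidity-weighted average: pools at or below min_liquidity are
    dropped, and thin pools contribute proportionally little."""
    usable = [q for q in quotes if q.liquidity > min_liquidity]
    total = sum(q.liquidity for q in usable)
    if total == 0:
        raise ValueError("no pool with sufficient liquidity")
    return sum(q.price * q.liquidity for q in usable) / total

# Made-up numbers: one deep pool near 0.153 and one thin pool quoting a
# spiked 0.748 -- the thin pool barely moves the blend.
print(blended_price([PoolQuote(0.153, 1_000_000), PoolQuote(0.748, 2_000)]))
# ~0.154
```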
0xpetersatoshi commented 2 years ago

@MattCMK I'm a little confused about the goal of this issue. What are the acceptance criteria or Definition of Done? There is mention of historical prices and CRV, SUSHI, UNI. Are we talking about providing on-chain prices for these assets? If so, per block? Per day?

> We need to divide DeFi history into epochs and define the best pricing strategy for each.

Is the goal of this issue to ultimately define different epochs of DeFi?

leafyoung commented 2 years ago

@0xpetersatoshi I think this ticket can be divided into three parts:

  1. Epoch design
  2. Models' update
  3. Store model results in DB to provide fast access

leafyoung commented 2 years ago

The current model gateway architecture:

Model user <- Gateway <- Cache (the TimescaleDB)

We need to address the following points when storing model outputs in the Snowflake DB. My solution is attached.

  1. Model updates => obtain model versions from the cache.
  2. A full refresh requires a massive amount of computation and time => only perform updates periodically (weekly?), fetching already-computed data from the cache (see the sketch below).
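As a rough illustration of point 2, here is a minimal Python sketch of an incremental refresh keyed by (model, version, block). The in-memory store and every name here are hypothetical stand-ins, not the actual gateway, TimescaleDB, or Snowflake API.

```python
# In-memory stand-in for the real stores (TimescaleDB cache / Snowflake).
CACHE: dict[tuple[str, str, int], dict] = {}  # (model, version, block) -> output

def run_model(slug: str, version: str, block: int) -> dict:
    """Placeholder for an actual (expensive) model run."""
    return {"slug": slug, "version": version, "block": block}

def refresh(slug: str, version: str, head_block: int,
            step: int = 7200):  # ~1 day of ~12 s blocks; illustrative granularity
    """Periodic (e.g. weekly) refresh: compute only the blocks missing for
    this model *version*, so a version bump recomputes history while an
    unchanged model only fills in the blocks added since the last run."""
    done = [b for (s, v, b) in CACHE if (s, v) == (slug, version)]
    start = (max(done) + step) if done else 0
    for block in range(start, head_block + 1, step):
        CACHE[(slug, version, block)] = run_model(slug, version, block)

refresh("price.dex-blended", "1.0", head_block=15_053_226)  # initial backfill
refresh("price.dex-blended", "1.0", head_block=15_060_000)  # only new blocks run
```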

The DBT job could be set up with the following details.

leafyoung commented 2 years ago

https://github.com/credmark/credmark-model-framework-py/pull/148 https://github.com/credmark/credmark-models-py/pull/179

Performance gain from enabling local model runs (each test shows three runs):

1. Chainlink

    time credmark-dev run price.quote -i '{"base":"AAVE"}' -j -b 15210851 -l - --api_url=http://localhost:8700

* Before

real 0m13.969s
real 0m15.250s
real 0m14.398s


* After
    "contract.metadata": {
        "1.0": 4
    },

real 0m4.828s
real 0m4.235s
real 0m4.103s


2. Dex blended

    time credmark-dev run price.dex-blended -i '{"symbol":"AAVE"}' -j -b 15210855 -l - --api_url=http://localhost:8700

* Before

real 1m56.549s
real 1m50.403s
real 1m56.851s


* After
    "contract.metadata": {
        "1.0": 30
    },

real 0m8.365s
real 0m9.796s
real 0m8.758s



Bottlenecks cleared (a sketch of the caching idea follows below):

- Increased hits on the local contract cache, reducing DB operations
- Reduced the overhead of invoking a model run from the gateway (starting a process is costly)
- Reduced the overhead of setting up the web3 connection
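Not the framework's actual internals, but a minimal sketch of the contract-metadata caching idea behind the numbers above, using Python's `functools.lru_cache`. `fetch_metadata_from_db` and the 30 lookups are hypothetical; my reading is that the `"contract.metadata"` counts in the run output reflect lookups served locally rather than as repeated DB operations.

```python
from functools import lru_cache

DB_CALLS = 0  # count simulated DB round-trips

def fetch_metadata_from_db(address: str) -> dict:
    """Hypothetical stand-in for the real contract-metadata DB lookup."""
    global DB_CALLS
    DB_CALLS += 1
    return {"address": address, "abi": []}

@lru_cache(maxsize=4096)
def contract_metadata(address: str) -> dict:
    """Repeated lookups within a run are served from the local cache
    instead of going back to the DB each time."""
    return fetch_metadata_from_db(address)

# Illustrative: 30 metadata touches during one blended-price run hit the
# DB only once; the other 29 are local cache hits.
for _ in range(30):
    contract_metadata("0x0000000000000000000000000000000000000001")
print(DB_CALLS)  # 1
```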