bingen opened this issue 3 years ago
Ah, yes, this is what I considered initially. But we start running into weight-bound issues: this approach doesn't let the weights grow/shrink enough as the AMPL market cap changes.
The underlying BPool enforces the bounds MIN_WEIGHT = 1e18 and MAX_WEIGHT = 50e18 per token, and a cap of 50e18 on the total weight.
Say you start with initial weights of 7e18 and 7e18. Once AMPL expands beyond ~615% of its initial supply, the weights reach 43e18 and 7e18 and you hit the total-weight bound (and similarly in the contraction case).
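To make the bound-hitting concrete, here is a quick numeric sketch (plain Python, weights as floats in units of 1e18; the function names are mine, not the pool's) of the "only adjust AMPL's weight" scheme running into the total-weight cap:

```python
# Sketch only -- not the pool's actual code. Weights in units of 1e18.
MIN_WEIGHT = 1.0
MAX_WEIGHT = 50.0
MAX_TOTAL_WEIGHT = 50.0

def weights_after_rebase(w_ampl, w_other, supply_ratio):
    """'Target' scheme: scale only AMPL's weight by the supply change."""
    return w_ampl * supply_ratio, w_other

def within_bounds(*weights):
    return (all(MIN_WEIGHT <= w <= MAX_WEIGHT for w in weights)
            and sum(weights) <= MAX_TOTAL_WEIGHT)

# Starting at 7/7, the total-weight cap is reached when AMPL's weight
# hits 43, i.e. once supply grows past 43/7 ~= 6.14x (~615% of initial).
print(within_bounds(*weights_after_rebase(7.0, 7.0, 6.0)))  # True
print(within_bounds(*weights_after_rebase(7.0, 7.0, 6.2)))  # False
```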
Using the geometric mean lets us use the weight space more completely.
Hm, I see, interesting, thanks.
Anyway, it still seems easier to adjust the weights afterwards by dividing or multiplying by the smallest/largest. For instance, in that example you just divide by 7 and get roughly 6e18 and 1e18, so you still avoid the square root.
Yeah, that makes sense. For an n-token pool we'd still have to loop over the tokens to find the min/max, though that could be cheaper than the square root. Let me know if you try it.
The current implementation still runs into bound issues, but what you describe shouldn't, I think.
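The renormalization discussed above can be sketched in a few lines (plain Python floats, helper name mine): after scaling AMPL's weight, divide every weight by the pool minimum, which preserves all pairwise ratios (and hence spot prices) and, for an n-token pool, costs exactly the one O(n) loop mentioned.

```python
def renormalize_by_min(weights):
    """Divide all weights by the smallest one (one O(n) pass over the
    tokens), preserving every pairwise ratio -- and so spot prices."""
    w_min = min(weights)          # the loop to find the minimum
    return [w / w_min for w in weights]

# The 43e18 / 7e18 case from above: dividing by 7 gives ~6.14 and 1.0,
# pulling the total weight well away from the 50e18 cap, with no sqrt.
print(renormalize_by_min([43.0, 7.0]))
```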
Hi,
I’ve been looking into these dynamic-weight pools; really interesting.
I wonder what the problem would be with this alternate approach: leave all the weights the same except Ampleforth’s, which would simply be set to the target.
Maybe I’m missing something, but if it works, the advantage would obviously be that it’s much simpler; in particular it avoids the need for square roots, which are problematic in Solidity. If we call `y = x/100`, before we had:

`w_ampl_new = w_ampl * sqrt(1+y)`, `w_ti_new = w_ti / sqrt(1+y)`

while now we would have:

`w_ampl_new = w_ampl * (1+y)`, `w_ti_new = w_ti`

So if we divide `w_ampl_new / w_ti_new`, in both cases the ratio changes by the same factor `1+y` (which is the supply change), meaning that the proportion between the weights is the same, and I think that’s what we want to ensure in the new approach. In particular, that means prices will be the same too with the new approach.

Let me know if I’m missing something; if I am, I would love to get more context on why the original approach with the geometric mean, instead of the target, was chosen.
Thanks.