Closed tt293 closed 2 years ago
The LBFGS solver is likely not converging in the maximum iterations set due to conditioning of the system (the reserve numbers span 12 orders of magnitude, which can lead to numerical issues).
Try setting the keyword arg verbose=true in route! and see what the solver outputs.
Thanks for the quick reply!
It seems the solver does not even get through any iterations:
N = 3 M = 5
The initial X is infeasible. Restart with its projection.
At X0 3 variables are exactly at the bounds
At iterate 0 f= 5.41813D+24 |proj g|= 5.66405D+23
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
3 1 21 1 0 1 5.664D+23 5.418D+24
F = 5.4181273904498980E+024
ABNORMAL_TERMINATION_IN_LNSRCH
Now, using what you said about the orders of magnitude: in this small problem I can actually still obtain the correct solution by scaling the reserves for tokens 1 and 3 by 1e-10 and adjusting the prices to something like [5, 0.01, 0.01], since I want any profit to accrue in token 1. The solution then also comes out scaled down by 1e-10, but I can simply reinflate the numbers.
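The bookkeeping for this trick can be sketched as follows (an illustrative Python sketch, not the package's actual API; the reserve numbers and the solver's output are made up for the example, and the solver call itself is elided):

```python
# Sketch of the manual rescaling trick described above: scale the
# badly-conditioned token reserves down by a per-token factor, solve the
# well-conditioned problem, then reinflate the solution.

def scale_reserves(reserves, factors):
    """Multiply each token's reserves by its scale factor."""
    return [r * f for r, f in zip(reserves, factors)]

def reinflate(solution, factors):
    """Undo the scaling on a per-token solution vector."""
    return [x / f for x, f in zip(solution, factors)]

# Hypothetical numbers: tokens 1 and 3 are ~10-12 orders of magnitude
# larger than token 2, so scale them by 1e-10 as in the thread.
reserves = [4.2e22, 7.1e12, 3.9e23]
factors  = [1e-10, 1.0, 1e-10]
prices   = [5, 0.01, 0.01]   # adjusted so profit accrues in token 1

scaled_reserves = scale_reserves(reserves, factors)

# ... run the solver on scaled_reserves here; suppose it returns this
# (made-up) net flow in scaled units ...
scaled_solution = [1.0e12, -2.0e11, 5.0e12]

# Reinflating recovers the answer in original units.
solution = reinflate(scaled_solution, factors)
```

The point is that the solver only ever sees numbers of comparable magnitude, while the scale factors are carried outside the optimization and applied once at the end.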
However, even for a single token, I have seen reserves across different CFMMs span more than 10 orders of magnitude, and I wonder what a possible solution might be in that scenario?
I do like the elegance of the underlying approach, so I am curious whether this can actually be made more robust in some way.
Thanks!
In this case, as with many iterative methods, the important part is actually ensuring that the problem is correctly scaled. The termination conditions for many algorithms (such as this one, as you point out) depend on the actual magnitude of the numbers, which is why the algorithm works quite well when you scale everything down by 1e-10 vs. passing the amounts directly to the solver! (We have had a few reports of people using this method/package in production, so we suspect such scaling can be done.)
How to appropriately scale reserve amounts is left as an exercise to the reader :)
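For the harder case raised above, where a single token's reserves span many orders of magnitude across CFMMs, one common heuristic (an assumption here, not something prescribed in this thread or by the package) is to use a per-token scale such as the geometric mean of that token's reserves across all pools, so the scaled reserves cluster around 1:

```python
# Hypothetical per-token scaling heuristic: divide each token's reserves
# by the geometric mean of that token's reserves across pools.
import math

def geometric_mean_scales(reserves_per_pool):
    """reserves_per_pool: dict token -> list of reserves across pools.
    Returns token -> scale factor (geometric mean of its reserves),
    computed in log space to avoid overflow for huge reserves."""
    return {
        token: math.exp(sum(math.log(r) for r in rs) / len(rs))
        for token, rs in reserves_per_pool.items()
    }

# Made-up reserves: token A spans 10 orders of magnitude across two pools.
reserves = {
    "A": [1e3, 1e13],
    "B": [2e6, 8e6],
}
scales = geometric_mean_scales(reserves)
scaled = {t: [r / scales[t] for r in rs] for t, rs in reserves.items()}
# Token A's scaled reserves are 1e-5 and 1e5: still a wide range, but
# centered at 1 rather than sitting near 1e8, which is what the solver's
# magnitude-dependent termination tests care about.
```

This does not eliminate the spread within a token, but it centers every token's reserves at the same magnitude, which is usually what matters for solver termination tolerances.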
Going to add a note to the docs about numerical issues / scaling issues and then close this.
Updated docs
How to deal with Uniswap V3/Curve V2 pools?
Hello,
thanks for this great repo! It was easy to get running, and the examples worked fine. I then tried plugging in some actual numbers from Uniswap AMMs on a blockchain, for which I believe an arbitrage opportunity exists, but the netflows came out negative for two of the three tokens, and to me the amounts do not look like they are within tolerance of 0. Any ideas or suggestions on where and why this is going wrong?
Thanks