uricohen opened this issue 1 year ago
It is hard to say without seeing an example, but it is reasonable to guess that it is related to tolerances. Which tolerances should be blamed is harder to say though. We would be interested to see a test case if you have one you can share.
When you say "performance is reduced considerably", do you mean that the solver does not converge to full accuracy, or that it requires more iterations, or both / something else?
We do internal data scaling on $P$ and $A$ only, in an attempt to improve conditioning of the KKT matrix that we factor at every iteration. That doesn't take into account the scaling of the linear terms though, i.e. the linear part of the cost or RHS of the constraints.
As an example of the kind of issue that can cause: in a very large LP, say, very small linear terms can produce poor performance because the objective, and hence the duality gap, becomes tiny relative to the absolute duality gap tolerance `tol_gap_abs`. If that is what is happening, then scaling up all of the objective terms (making them all norm 1, say) could help.
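For concreteness, here is a minimal sketch of that workaround (this is not anything Clarabel does internally): a toy QP with tiny objective terms, re-solved after multiplying the whole objective by a constant. It assumes qpsolvers with its "clarabel" backend installed; the data and the choice of scaling factor are only illustrative.

```python
import numpy as np
import scipy.sparse as sp
from qpsolvers import solve_qp

rng = np.random.default_rng(0)
n, m = 20, 50

# A small QP whose objective terms are tiny (~1e-4), standing in for a
# problem whose data has been scaled down.
P = sp.csc_matrix(1e-4 * np.eye(n))
q = 1e-4 * rng.standard_normal(n)
G = sp.csc_matrix(rng.standard_normal((m, n)))
h = np.ones(m)

# Multiplying the whole objective by a positive constant leaves the minimizer
# unchanged, but the duality gap is no longer vanishingly small relative to
# the absolute gap tolerance.
scale = 1.0 / max(np.abs(q).max(), 1e-12)
x_raw = solve_qp(P, q, G, h, solver="clarabel")
x_scaled = solve_qp(scale * P, scale * q, G, h, solver="clarabel")

if x_raw is not None and x_scaled is not None:
    print("max difference between minimizers:", np.max(np.abs(x_raw - x_scaled)))
```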
I will try to create a minimal reproduction of the issue. If you would like to close this until I have one, please go ahead.
I'm a very happy user of Clarabel and have now moved away from all my previous choices (ECOS, SCS, quadprog).
I am using it from Python, via the CVXPY API and qpsolvers, to solve large-scale problems, e.g. 256 variables and 100K linear equality and inequality constraints.
I have now run into an issue where scaling the problem data by a factor of 100 changes the results considerably. Clarabel works well when the data mean is of order 1, but performance degrades markedly when the data is 100 times smaller, even though the two problems are equivalent.
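To make concrete what I mean by "equivalent", here is a rough sketch of the kind of reproduction I have in mind, using random placeholder data much smaller than my real problem (256 variables, but far fewer constraints) and the qpsolvers interface:

```python
import numpy as np
import scipy.sparse as sp
from qpsolvers import solve_qp

rng = np.random.default_rng(1)
n, m_eq, m_in = 256, 50, 1000  # the real problem has on the order of 100K constraints

# Random problem data with a strictly feasible point x0 built in.
x0 = rng.standard_normal(n)
P = sp.csc_matrix(np.eye(n))
q = rng.standard_normal(n)
A = sp.csc_matrix(rng.standard_normal((m_eq, n)))
b = A @ x0
G = sp.csc_matrix(rng.standard_normal((m_in, n)))
h = G @ x0 + 1.0

def solve_at_scale(c):
    # Multiplying all of the problem data by c > 0 leaves the feasible set and
    # the minimizer unchanged; only the objective value is scaled by c.
    return solve_qp(c * P, c * q, c * G, c * h, c * A, c * b, solver="clarabel")

x1 = solve_at_scale(1.0)   # data of order 1
x2 = solve_at_scale(0.01)  # same problem, data 100 times smaller
if x1 is not None and x2 is not None:
    print("relative difference:", np.linalg.norm(x1 - x2) / np.linalg.norm(x1))
```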
Is it a tolerance issue? Should I scale the data myself? What is your recommendation here?
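In case it is a tolerance issue, this is roughly how I understand the tolerances could be tightened from CVXPY, assuming CVXPY forwards solver keyword arguments to Clarabel's settings (the option names `tol_gap_abs` / `tol_gap_rel` are taken from Clarabel's documentation; please correct me if this is not the right way to set them):

```python
import cvxpy as cp
import numpy as np

n = 256
rng = np.random.default_rng(2)
q = 1e-2 * rng.standard_normal(n)  # placeholder data at the "small" scale

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(x) + q @ x)
constraints = [cp.sum(x) == 1, x >= 0]
problem = cp.Problem(objective, constraints)

# Tighter absolute/relative gap tolerances, intended to be passed through to Clarabel.
problem.solve(solver=cp.CLARABEL, tol_gap_abs=1e-12, tol_gap_rel=1e-12, verbose=True)
print(problem.status, problem.value)
```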
Well done, and best wishes.