m00ngoose opened this issue 1 year ago
Hi, it's difficult to say without knowing more about the issue.
For example, what exactly does "completely unable to converge" mean?
Is the solver converging but then stalling, or diverging straight away?
If you could at least share the solver statistics for a few failing instances (as e.g. in https://github.com/giaf/hpipm/issues/149), I could get an idea of the issue.
In general, checking these statistics gives a good picture of the convergence behavior.
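As a minimal sketch of dumping those per-iteration statistics from the C API (the getter names below are my assumption based on the HPIPM C interface — check the `hpipm_d_ocp_qp_ipm.h` header of your version; `ws` is an already-solved workspace):

```c
/* Sketch only: print the per-iteration IPM statistics table after a solve.
 * Assumes an existing, solved workspace `ws` and that the getters below
 * match your HPIPM version. */
int iter, stat_m;
double *stat;
d_ocp_qp_ipm_get_iter(&ws, &iter);      /* number of iterations performed */
d_ocp_qp_ipm_get_stat_m(&ws, &stat_m);  /* number of statistics per iteration */
d_ocp_qp_ipm_get_stat(&ws, &stat);      /* pointer to the statistics table */
for (int ii = 0; ii <= iter; ii++)
    {
    for (int jj = 0; jj < stat_m; jj++)
        printf("%e ", stat[ii * stat_m + jj]);
    printf("\n");
    }
```

The columns include quantities such as the step sizes and the residuals at each iteration; watching where they stall (e.g. step sizes collapsing to zero while residuals stay large) is usually the first diagnostic.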
You could also try out a couple of versions of BLASFEO (e.g. the Haswell and generic targets, and both the high-performance and reference linear algebra backends) and check that you get the same behavior, just to make sure that there are no issues there. Just remember to always completely clean and recompile HPIPM as well whenever you change the BLASFEO library.
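For example (a sketch of the build commands; `TARGET` and `LA` are BLASFEO's build options, the directory layout is assumed):

```shell
# Sketch: rebuild BLASFEO with a different target/backend, then
# completely clean and rebuild HPIPM against it.
cd blasfeo
cmake -S . -B build -DTARGET=GENERIC -DLA=REFERENCE
# ... or e.g. -DTARGET=X64_INTEL_HASWELL -DLA=HIGH_PERFORMANCE
cmake --build build

cd ../hpipm
rm -rf build
cmake -S . -B build
cmake --build build
```

If the failing instances behave identically across targets, the linear algebra kernels can be ruled out as the cause.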
About a few settings: alpha_min is just the tolerance that decides when to give up. You could try to increase mu0; in a way this corresponds to starting further away from the solution, but hopefully more robustly. reg_prim only has an effect if your problem is ill-conditioned or has a nearly singular Hessian. Again, it's difficult to say more without being able to see the problem formulation.
About partial/full condensing: it depends on what the issue is. In general they should not make much difference, but they can make convergence worse in the case of unstable systems. You could also try with and without removal of x0; just remember to mark those inequality constraints as equality ones if you want them to actually be removed:
https://github.com/giaf/hpipm/blob/master/examples/c/example_d_ocp_qp_x0emb.c#L142
https://github.com/giaf/hpipm/blob/master/examples/c/example_d_ocp_qp_x0emb.c#L175
In general, once you create the QP data using the codegen function, you can easily try out all the C examples without having to code them yourself:
https://github.com/giaf/hpipm/blob/master/examples/c/Makefile#L42
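As a sketch of that equality-marking step (the `nbxe`/`idxbxe` names are my assumption from HPIPM's equality-masked box constraints — verify against the linked example lines for your version):

```c
/* Sketch only: mark all nx0 box constraints on x at stage 0 as equalities,
 * so that x0 elimination actually removes them. Assumes existing `dim` and
 * `qp` objects and that lbx == ubx == x0 at stage 0. */
d_ocp_qp_dim_set_nbxe(0, nx0, &dim);  /* number of equality-marked x-box constraints */
int idxbxe0[nx0];
for (int ii = 0; ii < nx0; ii++)
    idxbxe0[ii] = ii;                 /* mark every state component */
d_ocp_qp_set_idxbxe(0, idxbxe0, &qp);
```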
Hi,
I'm looking into using HPIPM to replace another, less powerful solver. I've successfully constructed the inputs and get comparable outputs on some simplified problems. My issue is that when I try to add the complexity back in, convergence is very fragile: it is either completely unable to converge, or very dependent on the solver settings in a way that's currently opaque to me. I am not yet concerned with performance.
My request is something like: for someone comfortable in the domain of the problem setting, i.e. who knows the approximate magnitudes of the controls/states/lambdas, but less comfortable with interior-point methods, how should I go about investigating non-convergence (in particular exit code 2, min_step, rather than max iterations)? What should I dump out, what should I look for, and what sorts of changes does one make as a result? Relatedly, how should I go about picking solver settings? I'm particularly thinking of alpha_min, mu0 and reg_prim, as these seem(?) most relevant. Can you give any intuition on selecting values?
The only relevant thing I've found by googling is https://discourse.acados.org/t/error-status-3-acados-minstep-any-configuration-to-alleviate-it/536/2, but it seems Levenberg-Marquardt regularization is implemented in acados rather than in HPIPM, so I'm not sure whether it would help or how I could apply it without pulling in all of acados as a dependency.
Apologies that sharing full problems is difficult for legal reasons. To give some idea, I'm currently looking at OCP QP problems with 10-50 stages, 2 controls per stage, 5-15 states per stage, 0 slacks, and 5-15 box/general constraints in total (plus optionally 2 constraints per stage per control), with x0 embedded. Do you have any intuition on whether partial/full condensing is likely to improve matters?
Thanks in advance for any help you can give!