Open baggepinnen opened 1 month ago
I am also wondering about this. The original paper says
> TinyMPC ran [on the Crazyflie] at 500 Hz with a horizon length of N = 15 for the figure-eight tracking task and the attitude-recovery task.
and
> TinyMPC converged at all steps within a maximum of 7 iterations and under the allotted 2 ms solve time defined by the 500 Hz control frequency
Which sounds crazy good! But like you, I also find that whenever constraints are active it takes way more iterations to converge, especially when the prediction horizon is short enough for some constraints to "come as a surprise" and thus never be fully satisfied. And if it runs at 500 Hz on a microcontroller, a simulation should be nearly instant on a modern desktop.
But also for the obstacle avoidance task,
> For the obstacle-avoidance task { ... }. Additionally, we reduced the MPC frequency to 100 Hz and increased N to 20.
the constraints are likely going to be active a lot more of the time, requiring the lower frequency.
My current go-to is to use a very low maximum number of iterations and a longer prediction horizon. Of course, this may not be viable for some problems.
Some high-level thoughts for both of you (since I haven't had the time to run @baggepinnen's code yet but wanted to try to be somewhat responsive):
The secret behind most good robotics demos is a lot of hyperparameter tuning - whether MPC or RL or anything else
This is not how I view linear-quadratic MPC. The tuning is performed with control-oriented objectives, and the QP solver is a robust and mature technology that solves the problem I have specified. If we were talking about nonlinear MPC or extremely large problems I'd be willing to make some concessions, but solving (in this case very small) QPs efficiently should ideally not demand any tweaking of the problem specification.
Our 500 Hz controller ran, if I recall correctly, with a relatively coarse convergence tolerance (which is common for MPC), so one other issue you may be having is that running to very accurate convergence can be much slower. I would suggest early exits on convergence in general, and I would expect a full-featured ADMM solver (e.g. OSQP) to do a better job reaching a more accurate solution, as it adds in a number of extra bells and whistles for good convergence which we strip out for speed.
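For reference, the kind of early-exit check being discussed here (in OSQP-style ADMM solvers) looks roughly like the sketch below. The names (`x`, `z`, `y`, `eps_abs`, `eps_rel`) follow the usual ADMM notation and are my assumptions, not TinyMPC's actual internals:

```python
import numpy as np

# Sketch of an OSQP-style termination check for ADMM applied to
#   minimize 0.5 x'Px + q'x   subject to   z = Ax,  l <= z <= u.
# A coarse eps_abs/eps_rel gives the "early exit" behavior described above.
def converged(P, q, A, x, z, y, eps_abs=1e-3, eps_rel=1e-3):
    r_prim = A @ x - z                      # primal residual (split violation)
    r_dual = P @ x + q + A.T @ y            # dual residual (stationarity)
    eps_prim = eps_abs + eps_rel * max(np.linalg.norm(A @ x, np.inf),
                                       np.linalg.norm(z, np.inf))
    eps_dual = eps_abs + eps_rel * max(np.linalg.norm(P @ x, np.inf),
                                       np.linalg.norm(q, np.inf),
                                       np.linalg.norm(A.T @ y, np.inf))
    return (np.linalg.norm(r_prim, np.inf) <= eps_prim and
            np.linalg.norm(r_dual, np.inf) <= eps_dual)
```

The catch, as noted below, is that with a coarse tolerance the primal residual (i.e. the constraint violation) can be too large to be useful.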
I have tried running with relaxed tolerance, but the constraint violations would become so large that the result was meaningless.
I had the exact same problem running it on a development board; I thought I was doing something wrong. When the constraints were restrictive enough to be usable, the algorithm no longer converged within the maximum number of iterations.
I passed by this conversation by chance and thought I might as well give my perspective on this matter.
Algorithms based on ADMM tend to have vastly different convergence behaviors based on the problem you are trying to solve and its initial condition. You might converge in 10 iterations or get stuck at a relatively low accuracy with >10000 iterations. On the other hand, you have interior-point-based solvers which tend to be more robust but can be more costly, i.e., they normally robustly converge in ~10-20 iterations independent of the underlying problem, with a higher cost per iteration. You might take a look at HPIPM, which is specifically designed and optimized for MPC problems. The C interface is horrible to use, but you get an extremely high-performing MPC solver (this is what acados is using under the hood). If you need a more general QP solver (similar to OSQP), you might also find PIQP interesting (shameless plug on my side).
It also depends on where you deploy the solver. If you want to deploy on a microcontroller which has almost no compute and memory, you are more or less forced to go with an ADMM-type algorithm because you can't afford the online factorizations of the KKT system. This is also the main selling point of TinyMPC (not to discredit TinyMPC in this thread). But if you run on a more capable chip like a Raspberry Pi Zero, you are probably better off with an interior-point solver since it gives you very robust results (if you can afford higher sampling times). Think of it this way: what does a solver (like OSQP or TinyMPC) give you if it converges on average in 10 iterations, but sometimes craps out and suddenly takes >10000 iterations to converge? Are you willing to take the risk of destabilizing your system because the solver didn't converge? Using a more robust solver like HPIPM gives you at least robust solve times with highly accurate solutions. It's slower, yes, but you can count on a consistent solve time and plan accordingly.
In the end, it's an engineering decision on what you value more: speed or robustness :)
Hi there :wave:
I'm exploring using TinyMPC to generate MPC controllers from Julia via the JuliaControl ecosystem, and have set up a small example with 5 state variables, 1 input, and a prediction horizon of 10 (a double mass-spring-damper). It seems that no matter how I tune `rho`, I get poor performance whenever any constraint is active. Below is an example run, where the execution time of the optimization is plotted as a function of the iteration index on the right. Most runs of this tiny problem take several hundreds of ms, and this is on a desktop PC. OSQP is close to 100x faster on this example, converging to the same solution as TinyMPC with the best tuning of `rho` I could find.

I had a look at the example problems that were included in the TinyMPC repository, the pendulum-on-cart and quadrotor problems. Both of those examples have constraints set such that they are never active during the simulation, is this right? If so, why linear MPC over standard LQR in such an example?
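On the MPC-vs-LQR point: when no constraints are active and the terminal cost is the infinite-horizon Riccati matrix, finite-horizon linear-quadratic MPC reproduces the LQR law exactly, so active constraints are indeed the only thing the QP adds. A small sketch (the system here is an assumed toy double integrator, not the benchmark model; `Kinf`/`Pinf` named as in this post):

```python
import numpy as np

# Assumed toy system: discrete double integrator with dt = 0.1.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.eye(1)

# Infinite-horizon solution Pinf by iterating the discrete Riccati equation.
P = Q.copy()
for _ in range(10_000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next
Kinf = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Backward Riccati sweep over horizon N with terminal cost Pinf:
# the recursion is already at its fixed point, so every gain equals Kinf,
# and the first MPC input is exactly the LQR input -Kinf @ x.
N = 10
Pk = P.copy()
for _ in range(N):
    K0 = np.linalg.solve(R + B.T @ Pk @ B, B.T @ Pk @ A)
    Pk = Q + A.T @ Pk @ (A - B @ K0)

print(np.allclose(K0, Kinf))   # True: unconstrained MPC == LQR
```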
I pushed my little interface to a branch here https://github.com/JuliaControl/RobustAndOptimalControl.jl/blob/e74066e2c0205385d829e79061847c23a521339a/src/mpc.jl but I also include my Pluto notebook together with a rendered HTML file of the notebook below if you want to have a look. I changed the convergence tolerance and the maximum number of iterations in `codegen.cpp` such that the calculation of `Kinf` and `Pinf` would converge to something accurate.

TinyMPC_benchmark.zip
Do you have any example where you benchmark TinyMPC where constraints are active? Active constraints would be the only motivation I can see to use linear-quadratic MPC over LQR. Hopefully I have missed something in my little evaluation, but I have stared at the problem for a while now without any luck :sweat_smile: