google-deepmind / mujoco_mpc

Real-time behaviour synthesis with MuJoCo, using Predictive Control
https://github.com/deepmind/mujoco_mpc
Apache License 2.0

Bug in robust planner: not updating nominal policy before optimizing #314


MiaoDragon commented 3 months ago

Hi,

I'm a PhD student and use mujoco_mpc for my research. While experimenting with the project, I found a potential bug in the robust planner implementation. I noticed it by printing the times parameter of the sampling planner: the values stay at 0 the whole time. I traced the bug to robust_planner.cc, in

void RobustPlanner::OptimizePolicy(int horizon, ThreadPool& pool)

The problem is that the sampling planner calls UpdateNominalPolicy(horizon) before optimizing the policy, but the robust planner does not. One solution is to add UpdateNominalPolicy(horizon) to the RankedPlanner interface and call

delegate_->UpdateNominalPolicy(horizon);

before optimizing the policy candidates.
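A minimal, self-contained sketch of this proposal, with simplified and hypothetical class shapes (the real mjpc signatures take more arguments and a thread pool): UpdateNominalPolicy is added to the RankedPlanner interface, and RobustPlanner invokes it on the delegate before optimizing candidates.

```cpp
#include <cassert>
#include <vector>

// Hypothetical, simplified sketch of the reporter's proposal. Names mirror
// mjpc but signatures are illustrative, not the actual ones.
struct RankedPlanner {
  virtual ~RankedPlanner() = default;
  virtual void UpdateNominalPolicy(int horizon) = 0;  // proposed addition
  virtual int OptimizePolicyCandidates(int ncandidates, int horizon) = 0;
};

struct SamplingPlanner : RankedPlanner {
  std::vector<double> times;  // stand-in for the nominal policy's time knots

  void UpdateNominalPolicy(int horizon) override {
    times.resize(horizon);
    for (int i = 0; i < horizon; ++i) times[i] = 0.01 * i;  // resample knots
  }

  int OptimizePolicyCandidates(int ncandidates, int horizon) override {
    // ... sample noisy candidates around the nominal policy ...
    return ncandidates;
  }
};

struct RobustPlanner {
  RankedPlanner* delegate_;

  void OptimizePolicy(int horizon) {
    delegate_->UpdateNominalPolicy(horizon);  // the proposed fix
    delegate_->OptimizePolicyCandidates(/*ncandidates=*/4, horizon);
    // ... rank candidates under perturbed models, keep the best ...
  }
};
```

With this change, the delegate's time knots are refreshed on every robust planning step instead of staying at zero.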

Thanks and please let me know what you think.

yuvaltassa commented 3 months ago

I think this is @nimrod-gileadi's code?

nimrod-gileadi commented 3 months ago

Thanks for diagnosing this issue. I think you are right!

RobustPlanner is meant to be agnostic to which delegate it's using, though, and UpdateNominalPolicy is not part of the RankedPlanner interface. So I think the fix would be to move this call:

this->UpdateNominalPolicy(horizon);

from SamplingPlanner::OptimizePolicy to SamplingPlanner::OptimizePolicyCandidates.
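A sketch of this alternative, again with simplified and hypothetical signatures rather than the actual mjpc ones: the update moves onto the shared candidate path, so both the sampling planner's own OptimizePolicy and the robust planner's delegation pick it up without widening the interface.

```cpp
#include <cassert>
#include <vector>

// Hypothetical, simplified sketch of the suggested fix: move the
// nominal-policy update from SamplingPlanner::OptimizePolicy into
// OptimizePolicyCandidates, which RobustPlanner calls directly.
struct SamplingPlanner {
  std::vector<double> times;  // stand-in for the nominal policy's time knots

  void UpdateNominalPolicy(int horizon) {
    times.resize(horizon);
    for (int i = 0; i < horizon; ++i) times[i] = 0.01 * i;  // resample knots
  }

  // After the fix, the update lives on the shared candidate path.
  int OptimizePolicyCandidates(int ncandidates, int horizon) {
    UpdateNominalPolicy(horizon);  // moved here from OptimizePolicy
    // ... sample and roll out noisy candidates ...
    return ncandidates;
  }

  // Direct use of the sampling planner still updates, via the shared path.
  void OptimizePolicy(int horizon) {
    OptimizePolicyCandidates(/*ncandidates=*/1, horizon);
    // ... copy the best candidate into the policy ...
  }
};

struct RobustPlanner {
  SamplingPlanner* delegate_;

  // Previously this skipped UpdateNominalPolicy, leaving delegate_->times
  // at zero; the candidate path now refreshes it.
  void OptimizePolicy(int horizon) {
    delegate_->OptimizePolicyCandidates(/*ncandidates=*/4, horizon);
    // ... evaluate candidates under perturbed models, keep the best ...
  }
};
```

This keeps RobustPlanner delegate-agnostic: no new virtual method is needed, since every caller of OptimizePolicyCandidates gets a fresh nominal policy.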

Do you want to send us a pull request, or would you rather we fixed it?