RobotLocomotion / drake

Model-based design and verification for robotics.
https://drake.mit.edu

Solve constrained optimization through augmented Lagrangian #16442

Open hongkai-dai opened 2 years ago

hongkai-dai commented 2 years ago

Currently Drake doesn't implement optimization with the augmented Lagrangian method. We think this method can be very helpful for constrained optimization.
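For reference, a common form of the augmented Lagrangian for an equality-constrained problem (minimize f(x) subject to h(x) = 0) is the following; sign and scaling conventions vary across references:

```latex
\mathcal{L}_{\mu}(x,\lambda) \,=\, f(x) \,-\, \lambda^{\top} h(x) \,+\, \frac{\mu}{2}\,\lVert h(x)\rVert^{2}
```

Minimizing this over x for fixed \lambda is an unconstrained subproblem; the multiplier estimate is then refreshed with the first-order update \lambda \leftarrow \lambda - \mu\, h(x^*), and \mu is increased when the constraint violation has not decreased enough.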

The tentative plan includes supporting the following features:

RussTedrake commented 2 years ago

I also think that we should provide a method that constructs a new program as the augmented Lagrangian of the original problem (e.g., AddDecisionVariables for the original decision variables, taking the current values of the multipliers as an argument, and then AddCost for the augmented Lagrangian cost), no? I guess we also need to provide the updates for the multipliers. But that way, all of the existing solvers could be applied to the augmented Lagrangian formulation?

(Perhaps this is contained in your "gradient-based solver" bullet...?)
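A minimal sketch of that construction, assuming only equality constraints h(x) = 0 and fixed multiplier estimates; `make_al_program`, `f`, `h`, `lambda_val`, and `mu` are hypothetical names for illustration, not existing Drake API (the `pydrake.solvers` import path matches recent Drake releases):

```python
import numpy as np
from pydrake.solvers import MathematicalProgram

def make_al_program(f, h, num_x, lambda_val, mu):
    """Builds an unconstrained program whose cost is the augmented
    Lagrangian f(x) - lambda'h(x) + mu/2 * |h(x)|^2, for fixed
    multiplier estimates lambda_val and penalty weight mu."""
    prog = MathematicalProgram()
    # Re-create the original decision variables in the new program.
    x = prog.NewContinuousVariables(num_x, "x")
    h_x = np.asarray(h(x))
    # A single cost holding the augmented Lagrangian expression.
    prog.AddCost(f(x) - lambda_val.dot(h_x) + 0.5 * mu * h_x.dot(h_x))
    return prog, x
```

Any existing Drake solver can then be called on the returned program, e.g. with `f = lambda x: x.dot(x)` and `h = lambda x: np.array([x[0] + x[1] - 1])`.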

hongkai-dai commented 2 years ago

What I planned is like this:

  1. Create a new solver like AugmentedLagrangianSolver, which takes in a solver ID (like SNOPT/IPOPT/NLOPT).
  2. Inside this solver, when calling Solve() with a MathematicalProgram that has constraints, it first constructs a new MathematicalProgram corresponding to the augmented Lagrangian for the given Lagrangian multipliers and penalty term. Then the solver (with the ID provided in step 1) solves this unconstrained MathematicalProgram.
  3. It then updates the Lagrangian multipliers and the penalty term, and goes back to step 2.

Does this make sense? One implementation is done in Anzu at https://github.shared-services.aws.tri.global/robotics/anzu/blob/master/common/nevergrad_al.py; a rough sketch of the outer loop is below.
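A hedged sketch of that outer loop, reusing the hypothetical make_al_program from the sketch above (equality constraints only; the tolerance, iteration cap, and tenfold penalty growth are illustrative choices, not Drake defaults):

```python
import numpy as np

def solve_with_augmented_lagrangian(f, h, num_x, num_h, solver,
                                    max_outer_iters=20, tol=1e-6):
    """`solver` plays the role of the solver chosen by ID in step 1,
    e.g. IpoptSolver() from pydrake.solvers."""
    lambda_val = np.zeros(num_h)  # Lagrangian multiplier estimates.
    mu = 1.0                      # Penalty weight.
    x_val = np.zeros(num_x)
    for _ in range(max_outer_iters):
        # Step 2: build and solve the unconstrained AL subproblem,
        # warm-started at the previous solution.
        prog, x = make_al_program(f, h, num_x, lambda_val, mu)
        result = solver.Solve(prog, x_val)
        x_val = result.GetSolution(x)
        # Step 3: stop once feasible enough; otherwise update the
        # multipliers and penalty, and return to step 2.
        h_val = np.asarray(h(x_val), dtype=float)
        if np.linalg.norm(h_val) < tol:
            break
        lambda_val = lambda_val - mu * h_val  # First-order update.
        mu *= 10.0                            # Grow the penalty.
    return x_val
```

The multiplier update here matches the sign convention of the augmented Lagrangian written earlier in the thread.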

RussTedrake commented 2 years ago

Yes, that sounds great. I just wanted to make sure I understood the plan. Thanks.