Open hongkai-dai opened 2 years ago
I also think that we should provide a method that constructs a new program as the augmented Lagrangian of the original problem (e.g. AddDecisionVariables
for the original decision variables, taking the current value of the multipliers as an argument, and then AddCost
for the augmented Lagrangian cost), no? I guess we also need to provide the updates for the multipliers. But that way all of the existing solvers could be applied to the augmented Lagrangian formulation?
(Perhaps this is contained in your "gradient-based solver" bullet...?)
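To illustrate the idea being discussed (not Drake's actual API), here is a minimal sketch of the augmented Lagrangian method for an equality-constrained problem min f(x) s.t. h(x) = 0: each outer iteration minimizes the augmented Lagrangian cost over x with any existing unconstrained solver (SciPy here as a stand-in), then applies the first-order multiplier update. The example objective, constraint, and the `augmented_lagrangian` helper are all hypothetical names chosen for this sketch.

```python
import numpy as np
from scipy.optimize import minimize


def f(x):
    # Example objective: minimize x0^2 + x1^2.
    return x[0] ** 2 + x[1] ** 2


def h(x):
    # Example equality constraint: x0 + x1 - 1 = 0.
    return np.array([x[0] + x[1] - 1.0])


def augmented_lagrangian(f, h, x0, mu=10.0, outer_iters=10):
    """Sketch of the AL method: L_A(x) = f(x) + lam'h(x) + (mu/2)||h(x)||^2."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(h(x).shape)
    for _ in range(outer_iters):
        def L_A(x):
            hx = h(x)
            return f(x) + lam @ hx + 0.5 * mu * hx @ hx

        # Any existing unconstrained/gradient-based solver can be used here;
        # this is the point of reformulating the program as its AL.
        x = minimize(L_A, x).x
        # First-order multiplier update: lam <- lam + mu * h(x).
        lam = lam + mu * h(x)
    return x, lam


x_opt, lam_opt = augmented_lagrangian(f, h, [0.0, 0.0])
# For this problem the iterates approach x = (0.5, 0.5), lam = -1.
```

The key design point from the comment above: because each inner step is an unconstrained minimization of the AL cost, constructing the AL as a new program lets every existing solver handle the constrained problem, with the multiplier update performed between solves.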
What I planned is like this:
Does this make sense? One implementation is done in Anzu as https://github.shared-services.aws.tri.global/robotics/anzu/blob/master/common/nevergrad_al.py.
Yes, that sounds great. I just wanted to make sure I understood the plan. Thanks.
Currently Drake doesn't implement optimization with the augmented Lagrangian method. We think this method can be very helpful for constrained optimization.
The tentative plan includes supporting the following features: