Closed: ivborissov closed this 2 months ago
Please don't use this now. It's an experimental package.
Hi @ivborissov thanks for the offer of help. This is something I am very interested in and have already started working on https://github.com/SciML/Optimization.jl/pull/727. This will be generalized once the bundled implementation feels good enough - right now it's quite a naive implementation. Feel free to review the PR.
@mohamed82008 that approach is quite interesting. Correct me if I am misunderstanding: your hypothesis is that instead of the traditional iterative updates of the dual variables in the Lagrangian, you use another optimization solver to solve for the dual variables?
Yes it is very slow though and not required for convergence. I was just messing around.
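The contrast being discussed can be sketched as follows. This is a hedged, language-agnostic Python sketch (not SciML code); the objective, constraint, and the crude inner solver are all illustrative stand-ins. It shows the classical first-order dual update `lam += rho * c(x)`, the baseline that the "solve for the duals with another optimizer" idea would replace:

```python
# Hedged sketch (not Optimization.jl code): the classical augmented Lagrangian
# loop with a first-order multiplier update, on a toy problem:
#   minimize f(x) = (x - 2)^2   subject to   c(x) = x - 1 = 0

def f(x):
    return (x - 2.0) ** 2  # illustrative objective

def c(x):
    return x - 1.0         # illustrative equality constraint c(x) = 0

def aug_lagrangian(x, lam, rho):
    # L_rho(x, lam) = f(x) + lam * c(x) + (rho / 2) * c(x)^2
    return f(x) + lam * c(x) + 0.5 * rho * c(x) ** 2

def minimize_in_x(lam, rho):
    # Crude local-solver stand-in: fixed number of gradient steps on x.
    # Any local optimizer could be plugged in here instead.
    x, step = 0.0, 0.01
    for _ in range(2000):
        g = 2.0 * (x - 2.0) + lam + rho * (x - 1.0)  # dL/dx for this toy problem
        x -= step * g
    return x

lam, rho = 0.0, 10.0
for _ in range(20):
    x = minimize_in_x(lam, rho)   # primal subproblem in x
    lam += rho * c(x)             # classical first-order dual ascent update

print(round(x, 3), round(lam, 3))  # → 1.0 2.0 (feasible point, converged multiplier)
```

The alternative under discussion would replace the single `lam += rho * c(x)` step with a full inner optimization over `lam`, which (as noted above) can be much slower and is not required for convergence.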
@Vaibhavdixit02 Cool! Thanks for the link, I had missed that you had already started the implementation.
Hi @Vaibhavdixit02 ,
As a follow-up to this conversation https://discourse.julialang.org/t/global-constrained-nonlinear-optimization/111972/9, I'd be glad to contribute to implementing an Augmented Lagrangian for Optimization.jl, and I wanted to ask how you see this implementation.
These are the implementations I have found:

- a JSO-ecosystem implementation (uses the `:tron` solver for local optimization; potentially can be extended to support other JSO-compatible solvers)
- NLopt AUGLAG https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#augmented-lagrangian-algorithm (supports NLopt local and global algorithms for local optimization)
- IPOPT and KNITRO solvers

One idea is to make an interface to one of the existing implementations, but almost all of them restrict the choice of local optimizer to a specific package (NLopt) or to ecosystem standards (JSO). Implementing an Augmented Lagrangian at the Optimization.jl level has the advantage of supporting a very rich choice of local optimizers.
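The design point here (decoupling the augmented Lagrangian outer loop from any particular local solver) can be sketched generically. This is a hedged Python sketch, not the Optimization.jl API; `auglag`, `local_solver`, and the toy problem are all hypothetical names for illustration:

```python
# Hedged sketch of an augmented Lagrangian outer loop written against a
# generic `local_solver` callable, so that any local optimizer can be
# plugged in. Scalar x and a single equality constraint, for brevity.

def auglag(f, cons, local_solver, x0, lam0=0.0, rho=1.0, outer_iters=15):
    """Minimize f(x) subject to cons(x) == 0 (hypothetical helper)."""
    x, lam = x0, lam0
    for _ in range(outer_iters):
        # Inner subproblem: unconstrained minimization of the AL penalty.
        def subproblem(z):
            return f(z) + lam * cons(z) + 0.5 * rho * cons(z) ** 2
        x = local_solver(subproblem, x)   # any local optimizer slots in here
        lam += rho * cons(x)             # first-order multiplier update
        rho = min(2.0 * rho, 100.0)      # simple capped penalty schedule
    return x, lam

def naive_local_solver(obj, x0, step=1e-3, iters=20000):
    # Naive stand-in via central-difference gradient descent; a real solver
    # (BFGS, Nelder-Mead, a global method, ...) would replace this.
    x = x0
    for _ in range(iters):
        g = (obj(x + 1e-6) - obj(x - 1e-6)) / 2e-6
        x -= step * g
    return x

# Toy problem: minimize (x - 2)^2 subject to x - 1 = 0.
x, lam = auglag(lambda x: (x - 2.0) ** 2, lambda x: x - 1.0,
                naive_local_solver, x0=0.0)
print(round(x, 4))  # → 1.0
```

Because the outer loop only calls `local_solver(subproblem, x)`, the choice of local optimizer is unrestricted, which is exactly the advantage of implementing the method at the Optimization.jl level rather than wrapping NLopt- or JSO-specific implementations.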