SciML / Optimization.jl

Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
https://docs.sciml.ai/Optimization/stable/
MIT License

Augmented Lagrangian #731

Closed: ivborissov closed this issue 2 months ago

ivborissov commented 3 months ago

Hi @Vaibhavdixit02 ,

As a follow-up to this conversation https://discourse.julialang.org/t/global-constrained-nonlinear-optimization/111972/9, I'd be glad to contribute an Augmented Lagrangian implementation to Optimization.jl, and I wanted to ask how you envision this implementation.

These are the implementations I have found:

  1. https://github.com/JuliaSmoothOptimizers/Percival.jl (pure Julia; uses the :tron solver for local optimization and could potentially be extended to support other JSO-compatible solvers)
  2. NLopt AUGLAG https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#augmented-lagrangian-algorithm (supports NLopt local and global algorithms for the inner local optimization; see the sketch after this list)
  3. https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl (pure Julia; not sure about the status of the package)
  4. https://github.com/JuliaSmoothOptimizers/NCL.jl (pure Julia; supports the IPOPT and KNITRO solvers)
  5. https://github.com/pjssilva/NLPModelsAlgencan.jl (Julia interface to the Fortran-based Algencan optimizer)
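
For reference, this is roughly how NLopt's AUGLAG combines the outer augmented Lagrangian with a pluggable inner solver. A minimal sketch using NLopt.jl directly, with a toy objective and constraint of my own (not Optimization.jl code):

```julia
using NLopt

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1 (solution: [0.5, 0.5]).
# AUGLAG folds the constraints into an augmented Lagrangian and delegates the
# resulting subproblems to a subsidiary local (or global) optimizer.
opt = Opt(:AUGLAG, 2)
opt.xtol_rel = 1e-6

local_opt = Opt(:LD_LBFGS, 2)   # any NLopt algorithm can be plugged in here
local_opt.xtol_rel = 1e-6
opt.local_optimizer = local_opt

opt.min_objective = (x, g) -> begin
    length(g) > 0 && (g .= 2 .* x)   # in-place gradient, NLopt convention
    sum(abs2, x)
end
equality_constraint!(opt, (x, g) -> begin
    length(g) > 0 && (g .= 1.0)
    x[1] + x[2] - 1.0
end, 1e-8)

minf, minx, ret = optimize(opt, [0.0, 0.0])
```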

One idea is to provide an interface to one of the existing implementations, but almost all of them restrict the choice of local optimizer to a specific package (NLopt) or ecosystem standard (JSO). Implementing an Augmented Lagrangian at the Optimization.jl level would have the advantage of supporting a very rich choice of local optimizers.
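
To make the idea concrete, here is a rough sketch of the outer loop I have in mind at the Optimization.jl level, assuming equality constraints only and a plain first-order multiplier update; the names `auglag`, `f`, and `c` are placeholders of mine, not anything from the package:

```julia
using Optimization, OptimizationOptimJL, LinearAlgebra

# Minimal method-of-multipliers sketch: f is the objective f(x), c returns the
# vector of equality-constraint residuals, i.e. we want c(x) = 0.
function auglag(f, c, x0; ρ = 10.0, maxiter = 20, tol = 1e-8)
    x, λ = copy(x0), zeros(length(c(x0)))
    for _ in 1:maxiter
        # Inner subproblem: minimize the augmented Lagrangian over x.
        # Any Optimization.jl-compatible local solver could go here.
        Lρ = (x, p) -> f(x) + dot(λ, c(x)) + (ρ / 2) * sum(abs2, c(x))
        optf = OptimizationFunction(Lρ, Optimization.AutoForwardDiff())
        x = solve(OptimizationProblem(optf, x), LBFGS()).u
        viol = c(x)
        norm(viol) < tol && break
        λ = λ + ρ * viol   # first-order dual update
        ρ *= 2             # naive penalty increase
    end
    return x, λ
end
```

Swapping `LBFGS()` for any other solver the interface supports is exactly the flexibility I mean.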

mohamed82008 commented 3 months ago

https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl

Please don't use this now. It's an experimental package.

Vaibhavdixit02 commented 3 months ago

Hi @ivborissov, thanks for the offer of help. This is something I am very interested in and have already started working on in https://github.com/SciML/Optimization.jl/pull/727. It will be generalized once the bundled implementation feels good enough; right now it's quite a naive implementation. Feel free to review the PR.

Vaibhavdixit02 commented 3 months ago

@mohamed82008 that approach is quite interesting. Correct me if I am misunderstanding: your hypothesis is that, instead of the traditional iterative updates of the dual variables in the Lagrangian, you use another optimization solver to solve for the dual variables?
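
Writing it out to check my understanding: the classical method-of-multipliers step after each inner solve is the first-order update

$$\lambda_{k+1} = \lambda_k + \rho\, c(x_k),$$

whereas, if I read it right, you instead hand the dual function $d(\lambda) = \min_x L_\rho(x, \lambda)$ to a second solver and maximize it over $\lambda$?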

mohamed82008 commented 3 months ago

Yes, though it is very slow and not required for convergence. I was just messing around.

ivborissov commented 3 months ago

@Vaibhavdixit02 Cool! Thanks for the link, I had missed that you had already started the implementation.