stevengj / nlopt

library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained, optimization

Request: add a callback function for optimization algorithms #37

Open lgpage opened 9 years ago

lgpage commented 9 years ago

Could you please look into adding a callback function for the optimization algorithms, called at the end of each (outer / main) optimization loop iteration?

I tried adding this myself, specifically to the nlopt API and the MMA algorithm, but I could not get it right. The attempt can be found at https://github.com/lgpage/nlopt/tree/opt_cbs

stevengj commented 9 years ago

What would this be for? (I'm not even sure how I would define "outer iteration" in general.)

Maybe a callback that would be called each time a feasible point with an improved objective is found?
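As a sketch of what such an "improvement" callback could look like, here is a minimal, self-contained Python wrapper. This is illustrative only, not nlopt's API; `is_feasible` and `on_improvement` are hypothetical user-supplied functions, and the wrapper simply hooks into the objective that the solver would call:

```python
# Illustrative sketch (not part of the nlopt API): wrap an objective so a
# user callback fires whenever a feasible point improves on the best value
# seen so far.
import math

def improvement_wrapper(f, is_feasible, on_improvement):
    """Wrap objective f so on_improvement(x, fx) runs at each new feasible best."""
    best = [math.inf]  # mutable cell so the closure can update it

    def wrapped(x):
        fx = f(x)
        if is_feasible(x) and fx < best[0]:
            best[0] = fx
            on_improvement(x, fx)
        return fx

    return wrapped

# Toy usage: minimize x^2 subject to x >= 0.5, probing a few points by hand
# (a real solver would be the one calling the wrapped objective).
history = []
f = improvement_wrapper(lambda x: x * x,
                        lambda x: x >= 0.5,
                        lambda x, fx: history.append((x, fx)))
for x in [2.0, -1.0, 1.0, 0.6]:
    f(x)
print(history)  # only feasible, improving points: [(2.0, 4.0), (1.0, 1.0), (0.6, 0.36)]
```

Since the wrapper lives entirely on the user's side of the objective-function interface, this particular pattern already works with nlopt today; the open question in this issue is about hooks at points the user's code cannot see, such as iteration boundaries.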

lgpage commented 9 years ago

I'm not sure how wide the use case for this would be, probably not very wide, but my research needed it: topology optimization with the SIMP, adjoint, and MMA methods.

One approach to topology optimization with SIMP uses an increasing penalization scheme, where one needs to adjust the penalization parameter at the end of each MMA iteration and run the MMA for only a fixed number of iterations.

Without knowing too much about the other optimization algorithms implemented in nlopt, I would suggest a callback at the point where the optimization variable is updated for the next iteration.

jdumas commented 7 years ago

Hi,

More generally, I think it would be quite convenient to be able to call a custom callback at every minor/major iteration, e.g. when developing a graphical interface where one wants to update a window that visualizes the solution, or to print some statistics on convergence (showing how "fast" the objective value is decreasing, along with the values of the additional constraints).

stevengj commented 7 years ago

I still don't understand how to define an "outer" iteration in a way that is meaningful for all algorithms, or why a GUI interface can't just update every time the objective function is called.

jdumas commented 7 years ago

Well, for example I don't see the point of updating the GUI during the line-search part of a BFGS routine. I feel it would make the solution flicker in a way that is difficult to interpret, plus it can be costly. In the case of MMA, it would mean only showing feasible points after each inner iteration (I think?).

For most algorithms, which have a single loop where f is evaluated once per iteration, I would say there are only "outer" iterations (= the number of objective function evaluations) and no inner iterations. Then, for algorithms where it makes sense, like MMA or the Augmented Lagrangian, we can also define "inner" iterations.

Let me know if that makes sense.

jdumas commented 7 years ago

Another case where I think it would be useful to have a callback at every "step" would be to implement continuation methods. Let's say I have a constraint expressed as g(x) = \sum_i |x_i|^p - c, with a high value of p in order to approximate a max() function. It makes sense to start optimizing with a small value of p and then progressively increase it (e.g. double p every 20 iterations). But to do that we need a proper way to define an "iteration" (it is not simply whenever g is evaluated, because of the line search in BFGS, inner iterations in GCMMA, etc.). Doubling p during a line search might mess things up, but doubling it afterwards is probably fine.

Also, since most algorithms accumulate function gradients over several iterations (e.g. to approximate the Hessian), it is not satisfactory to call nlopt's solve() multiple times with increasing values of p, as one would like to "reuse" what has been accumulated over previous iterations.
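To make the max() approximation above concrete, here is a quick numerical check (plain Python, not nlopt code). It uses the p-norm form (\sum_i |x_i|^p)^{1/p}, which approaches max_i |x_i| as p grows; this illustrates why one would want continuation on p rather than starting with a large, stiff exponent:

```python
# Numerical illustration: the p-norm (sum_i |x_i|^p)^(1/p) approaches
# max_i |x_i| as p grows, which is why doubling p every N iterations
# (continuation) is attractive: start smooth, end sharp.
x = [0.5, 0.9, 1.0]

def p_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

for p in [2, 8, 32, 128]:
    print(p, p_norm(x, p))
# The printed values decrease toward max(x) = 1.0 as p doubles repeatedly,
# while the function becomes increasingly stiff near the maximum entry.
```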

stevengj commented 7 years ago

@jdumas, the problem with that is that you would need to understand an algorithm at a fairly deep level to know whether this modification would even converge. That doesn't mesh well with a "generic" callback feature implemented for every algorithm.

(Not to mention that there are much better ways to implement a max function in constraints by adding dummy variables.)
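To spell out that remark (this is the standard epigraph reformulation, not anything nlopt-specific): a max over entries inside a constraint can be handled exactly by one dummy variable t and linear inequalities,

```latex
\max_i x_i \le c
\quad\Longleftrightarrow\quad
\exists\, t :\; x_i - t \le 0 \;\; (i = 1,\dots,n), \qquad t - c \le 0
```

at the cost of one extra variable and n extra (linear) constraints, which is the trade-off against a single smooth p-norm constraint.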

jdumas commented 7 years ago

> (Not to mention that there are much better ways to implement a max function in constraints by adding dummy variables.)

This is true, but there are certain situations where you might want to avoid that. For example, MMA has a complexity of O(n*m), so it doesn't play very well with a large number of constraints.

Anyway, this was just an illustrative example. I understand your position; I just wish nlopt offered this possibility, at least for gradient-based algorithms. And if it allows the user to screw up, well, in the end it's their problem, not yours =).

tepperly commented 1 year ago

I am adding NLopt as an optional solver for a package that currently supports Ipopt, HiOp, and another implementation of MMA. Those three solvers support an iteration callback function that is called after each iteration. It would be nice if NLopt could support a similar iteration-callback API. It would also be nice if the Lagrange multipliers at the solution were available when the solver can provide an estimate of their value.
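For concreteness, the kind of API meant here (Ipopt calls it an "intermediate callback") looks roughly like the following. This is a toy gradient-descent loop written for illustration, not nlopt or Ipopt code: the callback observes per-iteration state and can request early termination by returning False.

```python
# Toy gradient descent with an Ipopt-style per-iteration callback.
# All names here are illustrative; nothing below is from nlopt's API.
def minimize(f, grad, x0, step=0.1, max_iter=100, callback=None):
    """Minimize f by gradient descent; callback(k, x, fx) may return False to stop."""
    x = x0
    for k in range(max_iter):
        x = x - step * grad(x)
        # The callback sees the iteration count, current iterate, and objective,
        # and can end the solve early, exactly once per (outer) iteration.
        if callback is not None and callback(k, x, f(x)) is False:
            break
    return x

# Usage: log iterates of f(x) = (x - 2)^2 and stop once we are close enough.
log = []
def cb(k, x, fx):
    log.append((k, fx))
    return fx > 1e-8   # returning False requests early termination

x = minimize(lambda x: (x - 2.0) ** 2,
             lambda x: 2.0 * (x - 2.0),
             x0=0.0, callback=cb)
print(len(log), x)  # terminates well before max_iter, with x near 2
```

The key property, and the one the thread keeps circling around, is that the callback fires once per well-defined outer iteration, not once per objective evaluation, which for line-search or inner-iteration algorithms are different things.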

stevengj commented 1 year ago

Lagrange multipliers are a separate issue, #327.