gonum / optimize

Packages for solving minimization problems [DEPRECATED]

[question] MajorIterations settings #169

Closed milosgajdos closed 8 years ago

milosgajdos commented 8 years ago

Thanks for all the awesome work on all of these awesome numerical packages!

I have a simple question about MajorIterations optimization settings.

I have noticed that when I set settings.MajorIterations to a small number, say 5, the objective function Func is sometimes called more than 5 times. I tested this with several values on the same objective function and saw the same behaviour. Even more interestingly, optimize.Local sometimes fails to converge after calling Func approximately 100 times, despite settings.MajorIterations being set to 5.

My intuition and expectation was that the optimization should stop iterating once the set number of iterations is reached and return IterationLimit.

I don't know whether this is a bug, a feature, or a total misunderstanding on my part. What I've noticed, though, is that for some problems it does behave according to the intuition I described above, i.e. Func really is called only settings.MajorIterations times.

btracey commented 8 years ago

The direct setting you are looking for is settings.FuncEvaluations. With that set to 5, the optimization will stop after 5 function evaluations.

The concept of an "iteration" is not synonymous with a function evaluation; it is a more nebulous term, and we adopt whatever view the particular optimizer takes. In the quasi-Newton methods, an "iteration" means "choose a direction and perform a line search", which often takes more than one function evaluation.

It sounds like you are supplying only Func and not Grad. At present, that means you will be using Nelder-Mead. It requires dim+1 function evaluations just to construct the initial simplex, and an iteration of Nelder-Mead (defined as an update of the simplex) can also require dim+1 evaluations. That is how such a big disparity between the two counts can arise.

In short, setting FuncEvaluations is likely what you want, but be aware that with a limit as low as 5 you might not get very far without supplying a gradient.

milosgajdos commented 8 years ago

Thanks for the excellent answer! The outcome did seem a bit "suspicious" to me :-)

The results I mentioned here actually did use a gradient as well. The particular optimization method I used was optimize.BFGS{}.

Thanks a lot for your explanation, it makes more sense to me now!

btracey commented 8 years ago

Could you test whether FuncEvaluations is doing what you think it should? I'm surprised that with BFGS it's taking 100 function evaluations in 5 iterations. Perhaps the problem isn't very well scaled?

milosgajdos commented 8 years ago

Yes, FuncEvaluations does seem to work as you described.

And again, yes, the problem is definitely not scaled, if by scaling you mean some kind of normalization to unit sizes. I tested this on hand-crafted data with no normalization performed.

Thanks for your answers!