Closed jonathanBieler closed 4 years ago
Cool to see an attempt at an Optim extension. I really want what you're doing to be feasible, so it's valuable to see an attempt. Let me get back to you later!
May I quickly note that, for your case, emulating the approach taken by NewtonTrustRegion may be what you want:

```julia
state.f_x_previous = value(d) # potentially value!(d, state.x)
copy!(state.x_previous, state.x)
```

to prevent an additional function evaluation. (See NLSolversBase for documentation on `value`.)
The methods that use a linesearch update `x_previous` in `perform_linesearch`: https://github.com/JuliaNLSolvers/Optim.jl/blob/master/src/utilities/perform_linesearch.jl#L29
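Putting the two pieces above together, a hedged sketch of how a custom `update_state` could record the previous iterate without re-evaluating the objective (the `FakeObjective`/`CMAESState` types and `record_previous!` name are stand-ins so the snippet runs standalone; in Optim.jl, `d` would be an NLSolversBase objective and `value(d)` its cached value):

```julia
# Stand-in for an NLSolversBase objective that caches its last value.
mutable struct FakeObjective
    f_cached::Float64
end
value(d::FakeObjective) = d.f_cached  # cached value, no re-evaluation

# Stand-in for a solver state carrying the fields discussed above.
mutable struct CMAESState
    x::Vector{Float64}
    x_previous::Vector{Float64}
    f_x_previous::Float64
end

# Record the current point before update_state overwrites it.
function record_previous!(d, state::CMAESState)
    state.f_x_previous = value(d)       # uses the cached objective value
    copyto!(state.x_previous, state.x)  # copy! became copyto! in Julia 1.0
    return state
end
```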
I've managed to fix some of these issues: I implemented `pick_best_f` and `pick_best_x`, which fix my OptimizationResults. I also wasn't using `value!` to update my function, and that was causing some problems.
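For a population-based method like CMAES, the state can track the best sampled point explicitly, so the two methods can just return those fields. A hedged sketch, not Optim.jl's actual definitions (the `BestTracker` type and its `x_best`/`f_best` field names are assumptions):

```julia
# Stand-in state that tracks the best point seen so far.
mutable struct BestTracker
    x_best::Vector{Float64}
    f_best::Float64
end

# The best point is tracked explicitly, so f_increased is irrelevant here.
pick_best_x(f_increased, state::BestTracker) = state.x_best
pick_best_f(f_increased, state::BestTracker, d) = state.f_best
```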
For convergence, ideally one could display conditions other than the default ones in the results (CMAES has a couple of conditions on the covariance matrix), but it's not that important.
The last point is actually quite important to me, as I want to modularize this, so please show me what you could show in the best of all worlds.
I have a bunch of extra conditions for convergence, so instead of:

```
 * Convergence: true
   * |x - x'| ≤ 1.0e-32: false
     |x - x'| = 1.17e-09
   * |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: true
     |f(x) - f(x')| = 0.00e+00 |f(x)|
   * |g(x)| ≤ 1.0e-08: false
     |g(x)| = NaN
   ...
```

I would get something like:

```
 * Convergence: true
   * |x - x'| ≤ 1.0e-32: false
     |x - x'| = 1.17e-09
   * |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: true
     |f(x) - f(x')| = 0.00e+00 |f(x)|
   * σ/σ_0 * 1/√(maximum(d)) > TolUpSigma (1e-32): false
     σ/σ_0 * 1/√(maximum(d)) = 1e-54
   ...
```
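The extra σ condition quoted above can be written as a small predicate. A hedged sketch (the function name is hypothetical, and it assumes `d` holds the eigenvalues of the CMA-ES covariance matrix, with `σ0` the initial step size):

```julia
# Divergence check on the step size, matching the printed condition
# σ/σ_0 * 1/√(maximum(d)) > TolUpSigma.
tol_up_sigma_exceeded(σ, σ0, d; TolUpSigma = 1e-32) =
    (σ / σ0) / sqrt(maximum(d)) > TolUpSigma
```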
I would maybe do something like this: an abstract type would be responsible for convergence assessment, say `ConvergenceAssessor` (a concrete subtype would be e.g. `AbsoluteXTolAssessor`), that takes `state`, `d` and `options` as arguments, updates itself to reflect whether the condition is met or not, and stores all the numbers needed to produce the summary. It would also have a `string` method that would produce one of the `*` blocks above. `assess_convergence` would return a `Vector` of these, and you could then just loop over them to check convergence and produce the output summary.
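A minimal sketch of what that design could look like (every name here is hypothetical, none of it is existing Optim.jl API, and `state` is reduced to the two fields the example needs):

```julia
abstract type ConvergenceAssessor end

# One concrete assessor: absolute tolerance on the change in x.
mutable struct AbsoluteXTolAssessor <: ConvergenceAssessor
    tol::Float64
    Δx::Float64
    converged::Bool
end
AbsoluteXTolAssessor(tol) = AbsoluteXTolAssessor(tol, NaN, false)

# Update the assessor in place and store the numbers for the summary.
function assess!(a::AbsoluteXTolAssessor, state)
    a.Δx = maximum(abs.(state.x .- state.x_previous))
    a.converged = a.Δx ≤ a.tol
    return a
end

# Produce one "* ..." block of the convergence summary.
summary_string(a::AbsoluteXTolAssessor) =
    "* |x - x'| ≤ $(a.tol): $(a.converged)\n    |x - x'| = $(a.Δx)"

# Loop over all assessors; overall convergence if any condition holds.
function assess_convergence(assessors, state)
    foreach(a -> assess!(a, state), assessors)
    return any(a -> a.converged, assessors)
end
```

Printing the summary is then just `join(summary_string.(assessors), "\n")`, and a method like CMAES could append its σ/covariance assessors to the default list.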
Btw, would you like CMAES in Optim in the long term? I think with the way it's done now it doesn't add much complication to use it as an independent package, but adding it in wouldn't be too hard either.
I would like to see it. Random packages tend to get bitrot, while things inside a common interface within an org tend to get upgrades. I think if you put CMAES in, then @antoine-levitt will come and add manifold support, you'll get complex number and numerical diff support, and when linear solver swapping and sparse diff come they'll sweep through and add them (whatever is applicable to this algorithm, of course). Adding constraints and a MathOptInterface binding will also come naturally.
I think you and @AStupidBear should double check each other's CMA/CMAES Julia implementations and see if they are in sync. If they are, then one into Optim would be my preferred choice.
@jonathanBieler did you end up contributing to blackboxoptim instead?
@pkofod No I have my version here:
https://github.com/jonathanBieler/CMAES.jl
I never managed to make it perform as well as the Python version; there are several versions in the original papers, so it's not super clear what is wrong.
There is also an implementation in https://github.com/wildart/Evolutionary.jl.
> I never managed to make it perform as well as the Python version; there are several versions in the original papers, so it's not super clear what is wrong.

How much worse is it? Is it in your blackboxoptimbenchmarks repo? I don't think so?
IIRC it was converging too fast and was failing on hard functions. It's not included in my blackboxoptimbenchmarks repo, no. I can try to update it if you want; the code I've posted is a bit outdated and messy.
Sorry if that's not quite the right place to ask these questions, but I don't know of others.
I'm working a bit on my CMAES implementation and it works pretty well but I have a couple of issues.
I included `x_previous` and `f_x_previous` in my state so I can use `default_convergence_assessment`, but I also need to add some other checks, and disable `f_increased`; I'm not sure what's the right way to do it. Should I put all my other conditions in `converged`?

I haven't found where `x_previous` and `f_x_previous` are updated in optimisers that use them. I added the update at the beginning of my `update_state`, but I think that will do an additional function evaluation. Is there some default way to update those? I also looked at `after_while` but I haven't managed to fix it. Thanks!