Closed rafaqz closed 1 month ago
Ok here's an MWE with rosenbrock:
julia> rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
rosenbrock (generic function with 2 methods)
julia> x0 = zeros(2)
2-element Vector{Float64}:
0.0
0.0
julia> lb = fill(-1.0, 2)
2-element Vector{Float64}:
-1.0
-1.0
julia> ub = fill(200.0, 2)
2-element Vector{Float64}:
200.0
200.0
julia> _p = [1.0, 100.0]
2-element Vector{Float64}:
1.0
100.0
julia>
julia> l1 = rosenbrock(x0, _p)
1.0
julia> prob = OptimizationProblem(rosenbrock, x0, _p; lb, ub)
OptimizationProblem. In-place: true
u0: 2-element Vector{Float64}:
0.0
0.0
julia> solve(prob, NelderMead())
ERROR: MethodError: objects of type Nothing are not callable
Stacktrace:
[1] (::OptimizationOptimJL.var"#20#24"{OptimizationCache{…}})(G::Vector{Float64}, θ::Vector{Float64})
@ OptimizationOptimJL ~/.julia/packages/OptimizationOptimJL/yMF3E/src/OptimizationOptimJL.jl:291
[2] gradient!!(obj::OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, x::Vector{Float64})
@ NLSolversBase ~/.julia/packages/NLSolversBase/kavn7/src/interface.jl:63
[3] gradient!
@ ~/.julia/packages/NLSolversBase/kavn7/src/interface.jl:51 [inlined]
[4] gradient!(obj::Optim.BarrierWrapper{OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, Optim.BoxBarrier{Vector{Float64}, Vector{Float64}}, Float64, Float64, Vector{Float64}}, x::Vector{Float64})
@ Optim ~/.julia/packages/Optim/EJwLF/src/multivariate/solvers/constrained/fminbox.jl:118
[5] optimize(df::OnceDifferentiable{…}, l::Vector{…}, u::Vector{…}, initial_x::Vector{…}, F::Fminbox{…}, options::Optim.Options{…})
@ Optim ~/.julia/packages/Optim/EJwLF/src/multivariate/solvers/constrained/fminbox.jl:327
[6] __solve(cache::OptimizationCache{…})
@ OptimizationOptimJL ~/.julia/packages/OptimizationOptimJL/yMF3E/src/OptimizationOptimJL.jl:306
[7] solve!
@ ~/.julia/packages/SciMLBase/wVDwN/src/solve.jl:179 [inlined]
[8] #solve#596
@ ~/.julia/packages/SciMLBase/wVDwN/src/solve.jl:96 [inlined]
[9] solve(::OptimizationProblem{…}, ::NelderMead{…})
@ SciMLBase ~/.julia/packages/SciMLBase/wVDwN/src/solve.jl:93
[10] top-level scope
@ REPL[104]:1
Some type information was truncated. Use `show(err)` to see complete types.
This is from https://github.com/SciML/Optimization.jl/issues/558. The fallback for handling box constraints when a solver doesn't support them directly is to wrap the solver in Fminbox, which requires gradients. You can work around this by passing an OptimizationFunction with an AD backend for now; we should update the wrapping as mentioned in the linked issue.
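The suggested workaround might look like this (a sketch using the MWE above; `AutoForwardDiff` is one of the AD backends listed in the Optimization.jl docs, and any other valid backend should work the same way):

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

x0 = zeros(2)
p = [1.0, 100.0]

# Wrap the objective in an OptimizationFunction with an explicit AD backend,
# so the Fminbox wrapper can obtain the gradients it needs for the bounds.
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0, p; lb = fill(-1.0, 2), ub = fill(200.0, 2))

sol = solve(prob, NelderMead())
```

With the AD backend attached, `cache.f.grad` is populated and the `Nothing is not callable` error goes away.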
A fix to the MWE above might help make this clearer?
I'm not sure what you mean by "passing an OptimizationFunction with AD backend".
Edit: I guess you mean something like:
using SciMLBase, Optimization
julia> prob = OptimizationProblem(OptimizationFunction(rosenbrock, SciMLBase.NoAD()), x0, _p; lb, ub)
No... that doesn't work either! Surely NoAD() should mean don't call f.grad??
My actual problems are not differentiable, so NoAD() is really what I need.
This works fine just using Optim directly:
using Optim
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
lower, upper = [-100, -100], [100, 100]
result = optimize(rosenbrock, lower, upper, zeros(2), NelderMead())
It also works with Fminbox:
result = optimize(rosenbrock, lower, upper, zeros(2), Fminbox(NelderMead()))
> No... that doesn't work either! Surely NoAD() should mean don't call f.grad??
I can see how the name might have confused you, but it is intended (and is not part of the public API, only used in the internals) to mean that derivatives are provided by the user and AD should not be used to generate them.
> I'm not sure what you mean by "passing an OptimizationFunction with AD backend"
I meant a valid AD backend from https://docs.sciml.ai/Optimization/stable/API/ad/
The reason for this error is still https://github.com/SciML/Optimization.jl/issues/558
You should use SAMIN (https://julianlsolvers.github.io/Optim.jl/stable/algo/samin/) if your objective is not differentiable.
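A sketch of that suggestion, reusing the MWE from above (SAMIN requires finite bounds, and typically needs an explicit iteration budget):

```julia
using Optimization, OptimizationOptimJL

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

prob = OptimizationProblem(rosenbrock, zeros(2), [1.0, 100.0];
                           lb = fill(-1.0, 2), ub = fill(200.0, 2))

# SAMIN is a derivative-free simulated-annealing method that handles
# box constraints natively, so no gradient (and no AD backend) is needed.
sol = solve(prob, SAMIN(); maxiters = 10_000)
```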
Thanks. I also found that it works with the NLopt Nelder-Mead.
But a fix for this would be good... it works fine in Optim.jl, even with Fminbox(). So when switching from Optim to Optimization, this is the first thing I would try to test a derivative-free optimisation.
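The NLopt route mentioned above might look like this (a sketch assuming the OptimizationNLopt wrapper; LN_NELDERMEAD is NLopt's derivative-free Nelder-Mead, which supports box constraints directly):

```julia
using Optimization, OptimizationNLopt

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

prob = OptimizationProblem(rosenbrock, zeros(2), [1.0, 100.0];
                           lb = fill(-1.0, 2), ub = fill(200.0, 2))

# NLopt's LN_* algorithms are local and derivative-free,
# so grad! is never called and no AD backend is required.
sol = solve(prob, NLopt.LN_NELDERMEAD())
```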
The point is not that it doesn't work with Optim. In Optim the gradients are automatically calculated if the method needs them (https://github.com/JuliaNLSolvers/Optim.jl/blob/01b9391383c562078f3ada4d3c5aeee76af6ca8a/src/multivariate/solvers/constrained/fminbox.jl#L83), always with ForwardDiff. In Optimization.jl you need to specify the AD library you want to use, so the right way is to provide a valid AD choice there. The issue is that the error message should be better; there's work in progress on that already by @ParasPuneetSingh, and hopefully we can finish it up pretty soon and do a release.
> You should use SAMIN https://julianlsolvers.github.io/Optim.jl/stable/algo/samin/ if your objective is not differentiable
FYI SAMIN doesn't support box constraints. Oops, that's SimulatedAnnealing(). SAMIN() seems to be working!!
FYI I ran into the same bug trying to switch from Optim.jl to Optimization.jl.
Describe the bug 🐞
When optimizing with NelderMead() things work completely fine unless I use the lb and ub keywords. When I try to enforce bounds (between zero and one for all parameters), Optimization.jl for some reason calls grad! some way into the optimization, but it looks like cache.f.grad here is nothing for NelderMead: https://github.com/SciML/Optimization.jl/blob/038c7b6bd34111cd4a2fdb74886bf7c05a026c30/lib/OptimizationOptimJL/src/OptimizationOptimJL.jl#L291
I'm not sure why grad! is called on a gradient-free method.
Expected behavior
The optimisation should run as normal with lb/ub args.
Minimal Reproducible Example 👟
I have a complicated simulation behind the objective function, and the error only comes halfway through optimizing it, so I'm not sure how to make an MWE. I'm not sure how this can happen or what is triggering it.
But this is essentially it, without the exact objective func:
Error & Stacktrace ⚠️
Environment (please complete the following information):
using Pkg; Pkg.status()
using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)
versioninfo()
Additional context
Add any other context about the problem here.