giadasp closed this issue 2 years ago
Can you post your code?
As a side remark, unless you are reporting a bug, you should post on discourse, first.
Since it works with the same inputs in JuMP 0.18.6, I guess it is a bug. Anyway, the code is the following:
X = rand(100, 2)
sp = rand(100)
r2 = rand(100)
nPar = 2

function myf(x::Vector, grad::Vector)
    nPar = size(x, 1)
    y = X * x
    if size(grad, 1) > 0
        grad = r2 - (sp ./ (1 .+ exp.(.-y)))
        grad = X' * grad
    end
    z = log.(1 .+ exp.(y))
    return sum(r2 .* y - (sp .* z))
end
opt = NLopt.Opt(:LD_SLSQP, nPar)
opt.max_objective = myf
opt.lower_bounds = bds.minPars
opt.upper_bounds = bds.maxPars
# opt.xtol_rel = 0.00000001
# opt.maxtime = 10.0
# opt.ftol_rel = 0.00000001
# opt.constrtol_abs = 0.0
pars_i = zeros(nPar)
# (minf, pars_i, ret) = NLopt.optimize(opt, pars_i)
(minf, pars_i, ret) = NLopt.optimize!(opt, pars_i)
There are two mistakes in the gradient calculation:

grad = r2 - (sp ./ (1 .+ exp.(.-y)))

is a 100-element vector (and so, probably, is the second assignment to grad), whereas grad should be a vector of 2 elements; and grad must be modified in place, as in the NLopt.jl tutorial:

grad[1] = ...
grad[2] = ...

What you were doing was creating a new grad variable: you can check that by running myf as follows

grad = zeros(2)
myf(some_value_for_x, grad)
grad

and verifying that grad has not been changed.
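A minimal sketch of the in-place fix, reusing the same X, sp, and r2 as in the code above: Julia's broadcast assignment `.=` writes into the array the caller passed in, whereas plain `=` only rebinds the local name.

```julia
X  = rand(100, 2)
sp = rand(100)
r2 = rand(100)

function myf(x::Vector, grad::Vector)
    y = X * x
    if length(grad) > 0
        # `.=` mutates the caller's grad in place;
        # `grad = ...` would create a new local array NLopt never sees.
        grad .= X' * (r2 .- sp ./ (1 .+ exp.(.-y)))
    end
    z = log.(1 .+ exp.(y))
    return sum(r2 .* y .- sp .* z)
end

g = zeros(2)
myf([0.1, 0.2], g)   # g now holds the 2-element gradient
```

The single `grad .= X' * (...)` line combines the two assignments from the original code; since X' is 2×100 and the bracketed term is a 100-element vector, the product has exactly the 2 elements NLopt expects.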
I haven't used JuMP, but maybe its solvers do not make use of the gradient.
Can this issue be closed?
Yes. I was assigning values to the grad vector in a wrong way. Thank you for the support!
On Wed, Sep 18, 2019, 08:40 Michele Zaffalon notifications@github.com wrote:
Can this issue be closed?
Can I open a related issue? I am trying the following constrained minimisation problem.
function ps(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 1
        grad[2] = 0
        grad[3] = 0
    end
    return x[1]
end
function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        grad[1,1] = w[1]
        grad[2,2] = 20x[2]
        grad[1,2] = -2x[2]
        grad[1,3] = -1
        grad[2,1] = w[2]
        grad[2,3] = -0.1
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1]x[1]
    z[2] = -(-10x[2]^2 + 0.1x[3]) + w[2]x[1]
end
opt = Opt(:LD_MMA, 3)
opt.min_objective = ps
opt.lower_bounds = [0, -Inf, -Inf]
opt.upper_bounds = [1, Inf, Inf]
inequality_constraint!(opt, (z, x, g) -> ps_con(z, x, g, [1, 1]), [1e-8, 1e-8])
(minf, minx, ret) = optimize(opt, [1, 1, 1])
but now I get: (1.0, [1.0, 1.0, 1.0], :FORCED_STOP)
There are two errors I can see: grad[1,3] and grad[2,3] index outside the gradient matrix (for a vector-valued constraint, grad is n×m, here 3×2, with grad[j,i] the derivative of constraint i with respect to x[j], so your indices are transposed), and a * is missing in the expression for z[1]. Could you please:
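Assuming the NLopt.jl convention that the constraint gradient is an n×m matrix with grad[j,i] = ∂cᵢ/∂xⱼ (my reading of the NLopt.jl documentation, so treat the layout as an assumption), the callback could be sketched as:

```julia
# grad is assumed 3×2: rows are variables x[1..3], columns are constraints z[1..2]
function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        # constraint 1: z[1] = -(x[2]^2 - 1 + x[3]) + w[1]*x[1]
        grad[1,1] = w[1]
        grad[2,1] = -2x[2]
        grad[3,1] = -1
        # constraint 2: z[2] = -(-10x[2]^2 + 0.1x[3]) + w[2]*x[1]
        grad[1,2] = w[2]
        grad[2,2] = 20x[2]
        grad[3,2] = -0.1
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1] * x[1]   # note the explicit *
    z[2] = -(-10x[2]^2 + 0.1x[3]) + w[2] * x[1]
    return nothing
end
```

With a 3×2 grad, an access like grad[1,3] throws a BoundsError inside the callback, which NLopt surfaces as the :FORCED_STOP return code seen above.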
Closing because this is not a bug in NLopt.
Please ask usage questions like this on the community forum: https://discourse.julialang.org/c/domain/opt/13
I don't know what's wrong with my code. I'm using the low-level NLopt wrapper because I need to define custom functions to be optimized over a vector of variables, and JuMP doesn't allow that. If I choose the LD_SLSQP algorithm I get a ROUNDOFF error; if I use LD_MMA instead, I get XTOL/FTOL reached but the values of the variables don't change from the starting values. Do you know what the problem could be? I tested all the functions I implemented and all of them work outside the NLopt optimization.
Thank you in advance for your support,
Giada