jump-dev / NLopt.jl

A Julia interface to the NLopt nonlinear-optimization library
https://nlopt.readthedocs.io/en/latest/

returns :XTOL_REACHED but doesn't modify the input values #136

Closed · opened by giadasp · closed 2 years ago

giadasp commented 5 years ago

I don't know what's wrong with my code. I'm using the low-level NLopt wrapper because I need to define custom functions to be optimized over a vector of variables, and JuMP doesn't allow that. If I choose the algorithm LD_SLSQP I get a ROUNDOFF error; if I use LD_MMA instead, I get Xtol and Ftol reached, but the values of the variables don't change from the starting values. Do you know what the problem could be? I tested all the functions I implemented and all of them work outside the NLopt optimization.

Thank you in advance for your support

Giada

mzaffalon commented 5 years ago

Can you post your code?

As a side remark: unless you are reporting a bug, you should post on Discourse first.

giadasp commented 5 years ago

Since it works with the same inputs in JuMP 0.18.6, I guess it is a bug. Anyway, the code is the following:

```julia
using NLopt

X = rand(100, 2)
sp = rand(100)
r2 = rand(100)
nPar = 2

function myf(x::Vector, grad::Vector)
    nPar = size(x, 1)
    y = X * x
    if size(grad, 1) > 0
        grad = r2 - (sp ./ (1 .+ exp.(.-y)))
        grad = X' * grad
    end
    z = log.(1 .+ exp.(y))
    return sum(r2 .* y - (sp .* z))
end

opt = NLopt.Opt(:LD_SLSQP, nPar)
opt.max_objective = myf
opt.lower_bounds = bds.minPars
opt.upper_bounds = bds.maxPars
# opt.xtol_rel = 0.00000001
# opt.maxtime = 10.0
# opt.ftol_rel = 0.00000001
# opt.constrtol_abs = 0.0
pars_i = zeros(nPar)
# (minf, pars_i, ret) = NLopt.optimize(opt, pars_i)
(minf, pars_i, ret) = NLopt.optimize!(opt, pars_i)
```

mzaffalon commented 5 years ago

There are two mistakes in the gradient calculation:

  1. `grad = r2 - (sp ./ (1 .+ exp.(.-y)))` rebinds the local name `grad` to a new array; it does not modify the vector that NLopt passed in, so the optimizer never sees a gradient and stops at the starting point.
  2. `grad` has length `nPar = 2`, but that intermediate expression is a length-100 vector; the intermediate belongs in a temporary, and only the final result `X' * (...)` should be written into `grad` in place.
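
For reference, a minimal corrected sketch of the callback (an editor's illustration rather than code from the thread, assuming the same `X`, `sp`, and `r2` as above); the key change is the in-place assignment `grad[:] = ...`:

```julia
function myf(x::Vector, grad::Vector)
    y = X * x
    if length(grad) > 0
        t = r2 .- sp ./ (1 .+ exp.(.-y))  # length-100 intermediate kept in a temporary
        grad[:] = X' * t                  # write in place so NLopt sees the update
    end
    z = log.(1 .+ exp.(y))
    return sum(r2 .* y .- sp .* z)
end
```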

mzaffalon commented 5 years ago

Can this issue be closed?

giadasp commented 5 years ago

Yes. I was assigning values to the grad vector in the wrong way. Thank you for the support!

mx-the-gray commented 3 years ago

Can I open a related issue? I am trying the following constrained minimisation problem.

```julia
function ps(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 1
        grad[2] = 0
        grad[3] = 0
    end
    return x[1]
end

function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        grad[1, 1] = w[1]
        grad[2, 2] = 20x[2]
        grad[1, 2] = -2x[2]
        grad[1, 3] = -1
        grad[2, 1] = w[2]
        grad[2, 3] = -0.1
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1]x[1]
    z[2] = -(-10x[2]^2 + 0.1x[3]) + w[2]x[1]
end

opt = Opt(:LD_MMA, 3)
opt.min_objective = ps
opt.lower_bounds = [0, -Inf, -Inf]
opt.upper_bounds = [1, Inf, Inf]

inequality_constraint!(opt, (z, x, g) -> ps_con(z, x, g, [1, 1]), [1e-8, 1e-8])
(minf, minx, ret) = optimize(opt, [1, 1, 1])
```

but now I get: (1.0, [1.0, 1.0, 1.0], :FORCED_STOP)
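
A note on the return code: NLopt generally returns `:FORCED_STOP` when the Julia objective or constraint callback throws an exception. A hypothetical wrapper (the name `debug_con` is illustrative, not from the thread) can surface the underlying error:

```julia
# Wrap the constraint so any exception is printed before NLopt force-stops.
debug_con(z, x, g) = try
    ps_con(z, x, g, [1, 1])
catch err
    @show err  # e.g. a BoundsError from writing grad[1, 3] into a 3x2 matrix
    rethrow()
end

inequality_constraint!(opt, debug_con, [1e-8, 1e-8])
```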

mzaffalon commented 3 years ago

There are two errors I can see:

  1. the gradient in the constraint function is a 3x2 matrix, but the routine tries to access `grad[1, 3]` and `grad[2, 3]`;
  2. you are missing a `*` in the expression for `z[1]`.
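
A sketch with both fixes applied (an editor's illustration, not code from the thread), assuming NLopt.jl's convention that the gradient of an m-dimensional constraint over n variables is an n-by-m matrix with `grad[j, i]` holding the derivative of constraint i with respect to variable j, so here 3x2:

```julia
function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        # column 1: gradient of the first constraint
        grad[1, 1] = w[1]
        grad[2, 1] = -2 * x[2]
        grad[3, 1] = -1
        # column 2: gradient of the second constraint
        grad[1, 2] = w[2]
        grad[2, 2] = 20 * x[2]
        grad[3, 2] = -0.1
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1] * x[1]  # explicit * added
    z[2] = -(-10 * x[2]^2 + 0.1 * x[3]) + w[2] * x[1]
end
```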

Could you please:

  1. indent your code and quote it with three backticks, and
  2. post these questions on the Julia mailing list?

odow commented 2 years ago

Closing because this is not a bug in NLopt.

Please ask usage questions like this on the community forum: https://discourse.julialang.org/c/domain/opt/13