Closed — tmigot closed this 1 week ago
`@operator` is the new `@register`, right?
@blegat probably knows how to fix this issue.
We need to get the list of `UserDefinedFunction` attributes and set them on the `nlp_model`, as is done here:
https://github.com/jump-dev/Ipopt.jl/blob/4c156461ef1fda3c9f015520197afda4e8ca3e26/src/MOI_wrapper.jl#L606-L611
Thanks @blegat, so to clarify: we should add something like this

```julia
attr = MOI.ListOfSupportedNonlinearOperators()
MOI.Nonlinear.register_operator(nlp_model, MOI.get(model, attr))
```

in the appropriate place, is that correct?
I'm wondering if we don't have a bug in MOI:

```julia
using JuMP, MathOptInterface

function hs87(args...; kwargs...)
    nlp = Model()
    x0 = [390, 1000, 419.5, 340.5, 198.175, 0.5]
    lvar = [0, 0, 340, 340, -1000, 0]
    uvar = [400, 1000, 420, 420, 10000, 0.5236]
    @variable(nlp, lvar[i] <= x[i = 1:6] <= uvar[i], start = x0[i])
    a = 131078 // 1000
    b = 148577 // 100000
    ci = 90798 // 100000
    d = cos(147588 // 100000)
    e = sin(147588 // 100000)
    @constraint(nlp, 300 - x[1] - 1 / a * x[3] * x[4] * cos(b - x[6]) + ci / a * d * x[3] == 0)
    @constraint(nlp, -x[2] - 1 / a * x[3] * x[4] * cos(b + x[6]) + ci / a * d * x[4]^2 == 0)
    @constraint(nlp, -x[5] - 1 / a * x[3] * x[4] * cos(b + x[6]) + ci / a * e * x[4]^2 == 0)
    @constraint(nlp, 200 - 1 / a * x[3] * x[4] * sin(b - x[6]) + ci / a * e * x[3]^2 == 0)
    function f1(t)
        return if 0 <= t <= 300
            30 * t
        elseif 300 <= t <= 400
            31 * t
        else
            eltype(x)(Inf)
        end
    end
    function f2(t)
        return if 0 <= t <= 100
            28 * t
        elseif 100 <= t <= 200
            29 * t
        elseif 200 <= t <= 1000
            30 * t
        else
            eltype(t)(Inf)
        end
    end
    @operator(nlp, op_f1, 1, f1)
    @expression(nlp, op_f1)
    @operator(nlp, op_f2, 1, f2)
    @expression(nlp, op_f2)
    @objective(nlp, Min, op_f1(x[1]) + op_f2(x[2]))
    return nlp
end

nlp = hs87()
moi_backend = backend(nlp)
MOI.get(moi_backend, MOI.ListOfSupportedNonlinearOperators())
```

This throws:

```
ERROR: MathOptInterface.GetAttributeNotAllowed{MathOptInterface.ListOfSupportedNonlinearOperators}: Getting attribute MathOptInterface.ListOfSupportedNonlinearOperators() cannot be performed: Cannot query MathOptInterface.ListOfSupportedNonlinearOperators() from `Utilities.CachingOptimizer` because no optimizer is attached (the state is `NO_OPTIMIZER`). You may want to use a `CachingOptimizer` in `AUTOMATIC` mode or you may need to call `reset_optimizer` before doing this operation if the `CachingOptimizer` is in `MANUAL` mode.
```

and yet:

```julia
julia> moi_backend.mode
AUTOMATIC::CachingOptimizerMode = 1
```
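For what it's worth, a possible workaround (a hedged sketch, assuming a solver such as Ipopt is installed — any solver would do): attach an optimizer before querying, since even in `AUTOMATIC` mode the state stays `NO_OPTIMIZER` until `set_optimizer` is called.

```julia
using JuMP, Ipopt  # assumes Ipopt is available; substitute any solver

nlp = hs87()  # the model built above
set_optimizer(nlp, Ipopt.Optimizer)
moi_backend = backend(nlp)
# Make sure the cache is attached to the optimizer before querying.
MOI.Utilities.attach_optimizer(moi_backend)
# Now the attribute query is forwarded to the attached optimizer.
ops = MOI.get(moi_backend, MOI.ListOfSupportedNonlinearOperators())
```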
The sentence "You may want to use a `CachingOptimizer` in `AUTOMATIC` mode or you may need to call `reset_optimizer` before doing this operation if the `CachingOptimizer` is in `MANUAL` mode." seems wrong in this case, since the mode is already `AUTOMATIC`.
@odow May I ask if I'm doing something wrong in the code snippet above? https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/issues/195#issuecomment-2330321889
We can't get the list of supported operators because you haven't selected a solver.
We could perhaps return something meaningful for this case where you have a CachingOptimizer with no optimizer attached.
Do you have a trick for us so that we can extract the operators and expect that they are supported?
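Not sure whether this is what MOI intends, but one conceivable fallback for the no-optimizer case (a hedged sketch — these constants do exist in `MOI.Nonlinear`, though whether they match what an eventual solver supports is an assumption) would be the default operator lists that `MOI.Nonlinear` itself can parse:

```julia
using MathOptInterface
const MOI = MathOptInterface

# Operators MOI.Nonlinear recognizes out of the box; user-defined
# operators registered on the model would still need to be added on top.
default_ops = vcat(
    MOI.Nonlinear.DEFAULT_UNIVARIATE_OPERATORS,
    MOI.Nonlinear.DEFAULT_MULTIVARIATE_OPERATORS,
)
# default_ops is a Vector{Symbol}, e.g. containing :sin, :exp, :+, :^
```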
I don't know how much it differs from `@register`, so I may be completely wrong.
I just wanted to do this kind of thing: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/pull/197/files#diff-47c27891e951c8cd946b850dc2df31082624afdf57446c21cb6992f5f4b74aa2R454-R459
Benoît implemented an optimizer, but none of the JuMP models in OptimizationProblems.jl have an optimizer attached.
> I just wanted to do this kind of thing: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/pull/197/files#diff-47c27891e951c8cd946b850dc2df31082624afdf57446c21cb6992f5f4b74aa2R454-R459
That code is wrong: it returns a `Vector{Symbol}`. I should add an example to the docstring:
https://github.com/jump-dev/MathOptInterface.jl/blob/5f5acaaffb3d0703f21d647605c87cf10a2796d0/src/attributes.jl#L2050-L2056
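Something along these lines, perhaps (a sketch; the exact contents of the returned vector depend on the solver):

```julia
using MathOptInterface, Ipopt
const MOI = MathOptInterface

# The attribute returns the operator *names* as symbols, not functions.
ops = MOI.get(Ipopt.Optimizer(), MOI.ListOfSupportedNonlinearOperators())
ops isa Vector{Symbol}  # true
:sin in ops             # presumably true for Ipopt
```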
You need to implement support for `MOI.UserDefinedFunction`. Here's how Ipopt does it:
https://github.com/jump-dev/Ipopt.jl/blob/4c156461ef1fda3c9f015520197afda4e8ca3e26/src/MOI_wrapper.jl#L600-L620
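For reference, the pattern in the linked Ipopt code boils down to something like this (a hedged sketch; the `src` / `nlp_model` names are mine):

```julia
# Collect every MOI.UserDefinedFunction attribute that was set on the
# source model and register it with the MOI.Nonlinear model.
for attr in MOI.get(src, MOI.ListOfModelAttributesSet())
    if attr isa MOI.UserDefinedFunction
        # attr.name is the operator symbol and attr.arity its argument
        # count; MOI.get returns the tuple of functions (f, [∇f, [∇²f]]).
        MOI.Nonlinear.register_operator(
            nlp_model,
            attr.name,
            attr.arity,
            MOI.get(src, attr)...,
        )
    end
end
```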
It might be faster if I just take a look and make a PR :smile:
This should be enough to point you in the right direction: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/pull/197#issuecomment-2333003371
I didn't test anything other than the hs87 example.
Thanks a lot @odow!!! :smiley:
I got the following issue when updating OptimizationProblems.jl to more recent versions of JuMP, which returned the following error. Any idea?