plasmo-dev / Plasmo.jl

A Platform for Scalable Modeling and Optimization

Upcoming refactoring of JuMP's nonlinear API #63

Status: closed (odow closed this issue 1 year ago)

odow commented 2 years ago

The upcoming release of JuMP v1.2 will break Plasmo. Read more here: https://discourse.julialang.org/t/ann-upcoming-refactoring-of-jumps-nonlinear-api/83052

This is going to affect Plasmo because you extend some internal functions: https://github.com/plasmo-dev/Plasmo.jl/blob/793e2feada744d144983d1c9a3833b38de9b5518/src/nlp_macro.jl#L83-L94

x-ref: https://github.com/jump-dev/JuMP.jl/pull/2955

Please ping me if you have questions.

jalving commented 2 years ago

Thanks for the heads-up. It is exciting that this is ahead of schedule. I'll start migrating Plasmo using the new PR.

odow commented 2 years ago

Let me know where you get stuck. Ideally, we should add publicly documented features to JuMP so there are no more hacks. Digging in, I've found a few things that I didn't think people had tried!

jalving commented 2 years ago

Hey @odow, I am getting an error when using a view to evaluate an objective gradient. I've attached the example I used to test this, along with the output. All of the other NLP evaluations seem to work with views; it is just the objective gradient that fails. Is this just a missing method?

Let me know if you would rather have a JuMP issue filed. This was using your od/moi-nonlinear branch with Julia 1.6.

using JuMP

model = Model()
@variable(model, x1[1:5] >= 0)
@variable(model, y1[1:5] >= 0)
@constraint(model, ref11, sum(x1) <= 10)
@constraint(model, ref12, sum(y1) <= 5)
@NLconstraint(model, x1[1]^2 + x1[2]^2 <= 5)
@variable(model, x2[1:5] >= 0)
@variable(model, y2[1:5] >= 0)
@constraint(model, ref1, sum(x2) <= 10)
@constraint(model, ref2, sum(y2) <= 5)
@NLconstraint(model, x2[1]^2 + x2[2]^2 <= 5)
@constraint(model, [i = 1:5], y1[i] == y2[i])
@NLobjective(model, Min, sum(x1[i] for i = 1:5)^3 + sum(x2[i]^2 for i = 1:5))

# run the NLP evaluator over the model's 20 variables
# (note: this `x1` shadows the JuMP variable container defined above)
x1 = ones(20)
g1 = zeros(20)
evaluator = JuMP.NLPEvaluator(model)
MOI.initialize(evaluator, [:Hess, :Jac, :Grad])

# works
MOI.eval_objective_gradient(evaluator, g1, x1)

# method error on SubArray
g_view = view(g1, 1:20)
MOI.eval_objective_gradient(evaluator, g_view, x1) 

Here is the output:

ERROR: MethodError: no method matching _extract_reverse_pass(::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, ::MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator, ::MathOptInterface.Nonlinear.ReverseAD._FunctionStorage)
Closest candidates are:
  _extract_reverse_pass(::Vector{T}, ::MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator, ::Union{MathOptInterface.Nonlinear.ReverseAD._FunctionStorage, MathOptInterface.Nonlinear.ReverseAD._SubexpressionStorage}) where T at /home/jordan/.julia/packages/MathOptInterface/kCmJV/src/Nonlinear/ReverseAD/reverse_mode.jl:323
Stacktrace:
 [1] eval_objective_gradient(d::MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator, g::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, x::Vector{Float64})
   @ MathOptInterface.Nonlinear.ReverseAD ~/.julia/packages/MathOptInterface/kCmJV/src/Nonlinear/ReverseAD/mathoptinterface_api.jl:179
 [2] eval_objective_gradient(evaluator::MathOptInterface.Nonlinear.Evaluator{MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator}, g::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, x::Vector{Float64})
   @ MathOptInterface.Nonlinear ~/.julia/packages/MathOptInterface/kCmJV/src/Nonlinear/evaluator.jl:136
 [3] top-level scope
   @ REPL[205]:1
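(Editor's note: before the fix landed on JuMP#master, an obvious workaround was to materialize the view into a plain Vector, evaluate into that, and copy the result back. This is a minimal self-contained sketch, not code from the thread; the model and variable names here are invented for illustration.)

```julia
using JuMP

model = Model()
@variable(model, z[1:2] >= 0)
@NLobjective(model, Min, z[1]^2 + z[2]^2)

evaluator = JuMP.NLPEvaluator(model)
MOI.initialize(evaluator, [:Grad])

x = ones(2)
g = zeros(2)
g_view = view(g, 1:2)

# Workaround: copy the SubArray into a Vector, evaluate, write back through
# the view so the parent array `g` receives the gradient.
g_tmp = Vector(g_view)
MOI.eval_objective_gradient(evaluator, g_tmp, x)
copyto!(g_view, g_tmp)
# ∇(z1^2 + z2^2) at z = (1, 1) is (2, 2), so g is now [2.0, 2.0]
```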
odow commented 2 years ago

Can you try on the latest JuMP#master and MOI v1.6.0?

jalving commented 2 years ago

Thanks! That fixed it. Almost done updating Plasmo.

jalving commented 2 years ago

All of the tests now pass locally. The update turned out to be less work than expected; removing all of the hacky NLP functions actually revealed a silent bug.

I ended up using the MOI.Nonlinear._bound private method for convenience. Are there issues with doing this? Will MOI always assume that NLP sets can produce bounds?

odow commented 2 years ago

I ended up using the MOI.Nonlinear._bound private method for convenience

:( We're trying to move away from needing to use private methods! What are you using it for?

odow commented 2 years ago

Oh, these? https://github.com/jump-dev/MathOptInterface.jl/blob/1ab507872e52574d3218282674c3084f06bc5e31/src/Nonlinear/types.jl#L236-L239 Please just copy those four lines into Plasmo so you don't need to use MOI.Nonlinear._ methods.
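(Editor's note: for readers of this thread, here is a hedged sketch of what vendoring those helpers might look like. The function name `_set_bounds` is invented to avoid colliding with MOI's private `_bound`; the set-to-bounds mapping is the standard one for MOI's scalar sets and should be checked against the linked lines of types.jl.)

```julia
import MathOptInterface as MOI

# Vendored equivalents of MOI.Nonlinear's private bound helpers: map each
# scalar set to a (lower, upper) tuple of variable/constraint bounds.
_set_bounds(s::MOI.LessThan) = (-Inf, s.upper)
_set_bounds(s::MOI.GreaterThan) = (s.lower, Inf)
_set_bounds(s::MOI.EqualTo) = (s.value, s.value)
_set_bounds(s::MOI.Interval) = (s.lower, s.upper)
```

With copies like these in the package, no private MOI.Nonlinear._ method is needed, e.g. _set_bounds(MOI.Interval(0.0, 1.0)) returns (0.0, 1.0).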

jalving commented 2 years ago

Done. No more MOI hacks in this package :)

odow commented 2 years ago

Awesome!

jalving commented 1 year ago

Closed by #66