jump-dev / JuMP.jl

Modeling language for Mathematical Optimization (linear, mixed-integer, conic, semidefinite, nonlinear)
http://jump.dev/JuMP.jl/

`TypeError` when using `rad2deg` in a `@NLexpression` #2455

Closed ferrolho closed 3 years ago

ferrolho commented 3 years ago

Hi! I think I have found a bug in JuMP.jl. I get a `TypeError` when I use `rad2deg` inside an `@NLexpression`. I have reproduced the issue with both Ipopt.jl and KNITRO.jl. If I write out the conversion by hand, the error is not triggered.

I stumbled upon this problem in a notebook I recently created for JuMPTutorials.jl, but since that PR is still under review I will link to a copy of the notebook in a repository of mine. Here is the notebook.

At some point in the notebook, you will find this comment (in a code cell)

# Helper functions

and you will see that the lines

```julia
@NLexpression(model, c_L[j = 1:n], a₀ + a₁ * rad2deg(α[j]))
@NLexpression(model, c_D[j = 1:n], b₀ + b₁ * rad2deg(α[j]) + b₂ * rad2deg(α[j])^2)
```

are commented out, because they trigger the error I am reporting.

If I replace the calls to `rad2deg(x)` with `180 * x / π`, the error goes away.

```julia
@NLexpression(model, c_L[j = 1:n], a₀ + a₁ * (180 * α[j] / π))
@NLexpression(model, c_D[j = 1:n], b₀ + b₁ * (180 * α[j] / π) + b₂ * (180 * α[j] / π)^2)
```
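Another user-level workaround (a sketch, not from the thread: it assumes the `JuMP.register` user-defined-function API available at the time, and `my_rad2deg` is a name I made up) is to register the conversion under a new name, so JuMP differentiates it with automatic differentiation instead of its built-in derivative table:

```julia
using JuMP, Ipopt

# Hypothetical workaround: wrap the conversion in a user-defined function.
my_rad2deg(x) = 180 * x / π

model = Model(Ipopt.Optimizer)
register(model, :my_rad2deg, 1, my_rad2deg; autodiff = true)

@variable(model, x <= 1)
@NLobjective(model, Max, my_rad2deg(x))
optimize!(model)
```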

Here is the stacktrace when using Ipopt.jl:

```
TypeError: in typeassert, expected Float64, got a value of type Int64

Stacktrace:
 [1] eval_univariate_2nd_deriv at /home/henrique/.julia/packages/JuMP/qhoVb/src/_Derivatives/forward.jl:440 [inlined]
 [2] forward_eval_ϵ(::Array{Float64,1}, ::JuMP._VectorView{ForwardDiff.Partials{5,Float64}}, ::Array{Float64,1}, ::JuMP._VectorView{ForwardDiff.Partials{5,Float64}}, ::Array{JuMP._Derivatives.NodeData,1}, ::SparseArrays.SparseMatrixCSC{Bool,Int64}, ::JuMP._VectorView{ForwardDiff.Partials{5,Float64}}, ::JuMP._VectorView{ForwardDiff.Partials{5,Float64}}, ::JuMP._Derivatives.UserOperatorRegistry) at /home/henrique/.julia/packages/JuMP/qhoVb/src/_Derivatives/forward.jl:374
 [3] _hessian_slice_inner(::NLPEvaluator, ::JuMP._FunctionStorage, ::JuMP._VectorView{ForwardDiff.Partials{5,Float64}}, ::JuMP._VectorView{ForwardDiff.Partials{5,Float64}}, ::Type{Val{5}}) at /home/henrique/.julia/packages/JuMP/qhoVb/src/nlp.jl:791
 [4] _hessian_slice(::NLPEvaluator, ::JuMP._FunctionStorage, ::Array{Float64,1}, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::Float64, ::Int64, ::Array{Float64,1}, ::Type{Val{5}}) at /home/henrique/.julia/packages/JuMP/qhoVb/src/nlp.jl:849
 [5] macro expansion at /home/henrique/.julia/packages/JuMP/qhoVb/src/nlp.jl:767 [inlined]
 [6] macro expansion at ./timing.jl:233 [inlined]
 [7] eval_hessian_lagrangian(::NLPEvaluator, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::Array{Float64,1}, ::Float64, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}) at /home/henrique/.julia/packages/JuMP/qhoVb/src/nlp.jl:744
 [8] eval_hessian_lagrangian(::Ipopt.Optimizer, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64, ::Array{Float64,1}) at /home/henrique/.julia/packages/Ipopt/P1XLY/src/MOI_wrapper.jl:1219
 [9] (::Ipopt.var"#eval_h_cb#50"{Ipopt.Optimizer,Array{Tuple{Int64,Int64},1}})(::Array{Float64,1}, ::Symbol, ::Array{Int32,1}, ::Array{Int32,1}, ::Float64, ::Array{Float64,1}, ::Array{Float64,1}) at /home/henrique/.julia/packages/Ipopt/P1XLY/src/MOI_wrapper.jl:1329
 [10] eval_h_wrapper(::Int32, ::Ptr{Float64}, ::Int32, ::Float64, ::Int32, ::Ptr{Float64}, ::Int32, ::Int32, ::Ptr{Int32}, ::Ptr{Int32}, ::Ptr{Float64}, ::Ptr{Nothing}) at /home/henrique/.julia/packages/Ipopt/P1XLY/src/Ipopt.jl:267
 [11] solveProblem(::IpoptProblem) at /home/henrique/.julia/packages/Ipopt/P1XLY/src/Ipopt.jl:513
 [12] optimize!(::Ipopt.Optimizer) at /home/henrique/.julia/packages/Ipopt/P1XLY/src/MOI_wrapper.jl:1441
 [13] optimize!(::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}) at /home/henrique/.julia/packages/MathOptInterface/ZJFKw/src/Bridges/bridge_optimizer.jl:264
 [14] optimize!(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}) at /home/henrique/.julia/packages/MathOptInterface/ZJFKw/src/Utilities/cachingoptimizer.jl:215
 [15] optimize!(::Model, ::Nothing; bridge_constraints::Bool, ignore_optimize_hook::Bool, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/henrique/.julia/packages/JuMP/qhoVb/src/optimizer_interface.jl:130
 [16] optimize! at /home/henrique/.julia/packages/JuMP/qhoVb/src/optimizer_interface.jl:106 [inlined] (repeats 2 times)
 [17] top-level scope at In[4]:115
 [18] include_string(::Function, ::Module, ::String, ::String) at ./loading.jl:1091
```

And here is the stacktrace when using KNITRO.jl:

```
ERROR: User routine for hess_callback returned -500.
       Could not evaluate second derivatives at the current point.
       No further progress can be made.

┌ Warning: Knitro encounters an exception in evaluation callback: TypeError(:typeassert, "", Float64, 0)
└ @ KNITRO /home/henrique/.julia/packages/KNITRO/8BikX/src/kn_callbacks.jl:261
```
odow commented 3 years ago

MWE:

```julia
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, x <= 1)
@NLobjective(model, Max, rad2deg(x))
optimize!(model)
```

The issue is that `switchexpr` contains bits like this:

```julia
if operator_id <= 52
    if operator_id == 51
        return 0::T
    else
        if operator_id == 52
            return 0::T
        end
    end
else
```

https://github.com/jump-dev/JuMP.jl/blob/1e7cc25fd9b9842a9f4e75bd5154f3ae3667f234/src/_Derivatives/forward.jl#L461-L496

I guess https://github.com/jump-dev/JuMP.jl/blob/1e7cc25fd9b9842a9f4e75bd5154f3ae3667f234/src/_Derivatives/forward.jl#L483 should be

```julia
ex = :(return convert(T, $deriv_expr)::T)
```
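The fix targets plain Julia behavior: a typeassert checks a value's type at runtime but never converts it, so `0::Float64` throws exactly the reported `TypeError`. A minimal sketch (`asserted` and `converted` are illustrative names, not JuMP code):

```julia
# A typeassert (`x::T`) checks the type at runtime; it does not convert.
asserted(x) = x::Float64

err = try
    asserted(0)  # Int input: typeassert expected Float64, got Int64
catch e
    e
end
@assert err isa TypeError

# Converting first, as the suggested patch does, succeeds:
converted(x) = convert(Float64, x)::Float64
@assert converted(0) === 0.0
```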
mlubin commented 3 years ago

Maybe add a special case for the derivative expression of `rad2deg`? It is unusual for the derivative to evaluate to an `Int`. I'm worried that if we add a `convert`, it will end up being a `MethodError` in some case we haven't thought of yet.
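Such a special case could look something like this hypothetical sketch (not JuMP's actual internals; `first_deriv`, `second_deriv`, and the `Val`-based dispatch are names made up for illustration). Since both derivatives of `rad2deg` are constants, they can be written as `Float64` literals so the generated `::T` typeassert never sees an `Int`:

```julia
# Hypothetical illustration: special-case the (constant) derivatives of
# rad2deg/deg2rad with Float64 literals instead of the Int literal `0`.
first_deriv(::Val{:rad2deg}, x) = 180 / π    # d/dx rad2deg(x), already Float64
second_deriv(::Val{:rad2deg}, x) = 0.0       # Float64 literal, not Int `0`
first_deriv(::Val{:deg2rad}, x) = π / 180
second_deriv(::Val{:deg2rad}, x) = 0.0

@assert first_deriv(Val(:rad2deg), 1.5) isa Float64
@assert second_deriv(Val(:rad2deg), 1.5) === 0.0
```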