jump-dev / JuMP.jl

Modeling language for Mathematical Optimization (linear, mixed-integer, conic, semidefinite, nonlinear)
http://jump.dev/JuMP.jl/

Improve Error Messages for Querying Modified Models #3155

Closed. pulsipher closed this issue 1 year ago.

pulsipher commented 1 year ago

With the following simple (yet a little contrived) example, I get a relatively unintuitive error:

using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
@variable(model, x[1:2] >= 0)
@objective(model, Min, x[1])
optimize!(model)

# Works for i = 1, but the first set_start_value marks the model as modified
# ("dirty"), so the value(x[2]) call on the second iteration throws:
for i in 1:2
    set_start_value(x[i], value(x[i]))
end
ERROR: OptimizeNotCalled()
Stacktrace:
 [1] get(model::Model, attr::MathOptInterface.VariablePrimal, v::VariableRef)
   @ JuMP C:\Users\bbgui\.julia\packages\JuMP\puvTM\src\JuMP.jl:1249
 [2] value(v::VariableRef; result::Int64)
   @ JuMP C:\Users\bbgui\.julia\packages\JuMP\puvTM\src\variables.jl:1032
 [3] value(v::VariableRef)
   @ JuMP C:\Users\bbgui\.julia\packages\JuMP\puvTM\src\variables.jl:1032
 [4] top-level scope
   @ .\REPL[19]:2

The warning provided in https://github.com/jump-dev/JuMP.jl/blob/9b4c88021178f6b1245a78cb1a23852bc1778eeb/src/JuMP.jl#L1206-L1211 would be more helpful, but it is only implemented for MOI.get with attr::MOI.AbstractModelAttribute, not for the other dispatches. It would be great to add it to the other methods as well.
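
For illustration, here is a hedged sketch of what the variable-attribute dispatch could look like with the same check added. This is not the actual JuMP source: the field model.is_model_dirty, the exact message, and the method layout are assumptions about JuMP's internals.

# Sketch only: emit the same warning before throwing OptimizeNotCalled from the
# variable-attribute path, mirroring the model-attribute method linked above.
function MOI.get(model::Model, attr::MOI.AbstractVariableAttribute, v::VariableRef)
    if MOI.is_set_by_optimize(attr) && model.is_model_dirty && mode(model) != DIRECT
        @warn(
            "The model has been modified since the last call to optimize!, " *
            "so solver results are unavailable; call optimize! again."
        )
        throw(OptimizeNotCalled())
    end
    return MOI.get(backend(model), attr, index(v))
end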

In this case, and in the larger, less contrived model that inspired it, I can use direct mode to get around this and avoid unnecessary loops. It would also be nice to have a flag when initializing the model to opt out of the is_model_dirty check. I am currently working on a benchmarking paper on iterative algorithms that use JuMP and Pyomo, and this capability would allow a more apples-to-apples comparison with Pyomo without resorting to direct mode and/or a different update strategy.
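
For reference, a minimal sketch of the direct-mode workaround on the same toy model; it assumes the solver (Gurobi here) still reports the solution after start values are set:

using JuMP, Gurobi

# direct_model attaches JuMP straight to the Gurobi model: there is no caching
# layer, so value() queries the solver directly and the dirty-model check is skipped.
model = direct_model(Gurobi.Optimizer())
@variable(model, x[1:2] >= 0)
@objective(model, Min, x[1])
optimize!(model)

for i in 1:2
    set_start_value(x[i], value(x[i]))  # no OptimizeNotCalled in direct mode
end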

odow commented 1 year ago

We should add the warning to variable and constraint attributes.

Documentation is here: https://jump.dev/JuMP.jl/stable/manual/solutions/#OptimizeNotCalled-errors

> and/or using a different update strategy.

I think this is unavoidable. JuMP and Pyomo have different mechanisms for accessing solutions. Where possible, JuMP queries information directly from the solver without caching. Before we added is_model_dirty, we used to hit very subtle bugs in how solvers invalidate solution information on problem modification.

Is it really a problem to change to this?

solution = value.(x)
set_start_value.(x, solution)

odow commented 1 year ago

Is this what you're after? https://github.com/jump-dev/JuMP.jl/pull/3156

pulsipher commented 1 year ago

> Is it really a problem to change to this?
>
> solution = value.(x)
> set_start_value.(x, solution)

It is not a problem for this simple example, but in my use case we only query and update a subset of the variables (determined by some solution results) out of a much larger set (e.g., millions of variables), so having to iterate over the variables twice is less than desirable. We could cache the subset of variables on the first pass, but then we end up adding allocations. Not a big deal, but I can get around all of this in direct mode.
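
For concreteness, the two-pass pattern being discussed looks roughly like this; keep_variable is a hypothetical filter standing in for whatever the solution results dictate:

# First pass: query while nothing has been modified since optimize!.
subset = [xi for xi in x if keep_variable(xi)]  # keep_variable is hypothetical
vals = value.(subset)                           # the extra allocations happen here

# Second pass: modifications only, after every query is done.
set_start_value.(subset, vals)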

> Is this what you're after? https://github.com/jump-dev/JuMP.jl/pull/3156

This resolves the core of the issue; the opt-out flag was merely a suggestion, and perhaps it is a little too niche.

odow commented 1 year ago

> We could cache the subset of variables on the first pass,

Yes, this is what I had in mind.

> but then we end up adding allocations.

Is this a bottleneck? A couple of allocations aren't going to hurt in comparison to the optimization.

> Not a big deal, but I can get around all of this in direct mode.

If you're aiming to be performant, you should probably be using direct mode anyhow.

If you're using Gurobi you should be fine. If you're using another solver, you'll want to test that they actually return the solution that you expect if you've modified the problem. These things can be tricky.
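
For example, a minimal sketch of that kind of sanity check; HiGHS is used purely as a stand-in for "another solver", and the behavior after the modification is exactly what you want to confirm:

using JuMP, HiGHS

model = direct_model(HiGHS.Optimizer())
@variable(model, x >= 0)
@objective(model, Min, x)
optimize!(model)
x_star = value(x)

set_objective_coefficient(model, x, 2.0)  # modify the problem after the solve

# Depending on the solver, the next query may return the stale solution, a new
# one, or throw; make sure it matches what your algorithm assumes.
try
    println(value(x) == x_star)
catch err
    println("solution invalidated by the modification: ", err)
end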