Finding infeasible constraints from the solver output using direct_model has been very helpful for my debugging: https://discourse.julialang.org/t/how-to-reference-jump-constraint-from-solver-index/57865
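Something like the following is the pattern I mean (a hedged sketch; Gurobi is just an example of a solver that supports direct mode, and the toy model is made up):

using JuMP, Gurobi

# direct_model attaches straight to the solver: no cache, no bridges,
# so JuMP constraints map one-to-one onto the solver's own model.
model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
@constraint(model, c, x <= 1)
ci = index(c)  # the MOI.ConstraintIndex used by the solver wrapper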
It's not that special and the code is ugly, but I wrote this function to easily print out conflicting constraints:
function conflict_cons(model)
    compute_conflict!(model)
    error_found = false
    conflict_constraint_list = ConstraintRef[]
    for (F, S) in list_of_constraint_types(model)
        for con in all_constraints(model, F, S)
            try
                if MOI.get(model, MOI.ConstraintConflictStatus(), con) == MOI.IN_CONFLICT
                    push!(conflict_constraint_list, con)
                    println(con)
                end
            catch err
                # Some constraint types error when queried for a conflict
                # status; report the first failure and keep going.
                if !error_found
                    error_found = true
                    @info "Only one error will be printed"
                    @error "error found with $F-in-$S constraint $con" err
                end
            end
        end
    end
    return conflict_constraint_list
end
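For example (a made-up usage sketch, assuming Gurobi since it supports IIS):

using JuMP, Gurobi  # JuMP re-exports MOI, which conflict_cons uses

model = Model(Gurobi.Optimizer)
set_silent(model)
@variable(model, x >= 1)
@constraint(model, c, x <= 0)  # contradicts the bound x >= 1
optimize!(model)               # model is infeasible
conflict_cons(model)           # prints both conflicting constraints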
I use this:
"""
print_conflict!(model)
Compute and print a conflict for an infeasible `model`.
"""
function print_conflict!(model)
    JuMP.compute_conflict!(model)
    ctypes = list_of_constraint_types(model)
    for (F, S) in ctypes
        cons = all_constraints(model, F, S)
        for i in eachindex(cons)
            isassigned(cons, i) || continue
            con = cons[i]
            cst = MOI.get(model, MOI.ConstraintConflictStatus(), con)
            cst == MOI.IN_CONFLICT && @info name(con) con
        end
    end
    return nothing
end
This only prints the conflict (I usually have only a dozen constraints tops), though it could easily be modified to return the list of constraints. Obviously, printing is not very useful if constraints / variables have no name...
I don't quite remember why I included the isassigned(cons, i) || continue. I think it's because I could sometimes declare array-shaped constraints without defining all the indices, or something like that.
It's interesting that we both wrote code to do approximately the same thing:
I often have thousands of constraints, and my function mostly works fine (except when computing the conflict takes excessive time, but that is usually a sign that my model is a mess), although it could certainly be improved.
I also tend to name all variables and constraints as this helps a lot with debugging.
In addition, a lot of the time I have a 'core' model that represents the core physical properties I care about; I then call this method for each experiment and add the things relevant to my current experiment. I find this helps because it gives me faith that I have a valid starting point. But this methodology is already described in the JuMP docs.
function create_model()
    model = Model()
    @variable(model, a)
    @constraint(model, capacity, a ≤ 10)
    return model
end

experiment1 = create_model()
@constraint(experiment1, ...)
@objective(experiment1, ...)
optimize!(experiment1)

experiment2 = create_model()
@constraint(experiment2, ...)
@objective(experiment2, ...)
optimize!(experiment2)
A relatively simple-to-implement strategy for debugging infeasibility is a bisection search on a list of constraint names: turn off half of them at a time and test whether the remainder is still infeasible.
This is what I have done for many years, especially for classes of models where no solvers support IIS. In a lot of cases, I don't actually need the set of constraints to be irreducible; I just need a small-ish number of constraints to look at and reason about what is causing the infeasibility. A rough sketch of the idea is below.
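Here is one way that bisection could look (is_infeasible, bisect_infeasible, and the use of HiGHS are all made up for illustration; the returned subset is still infeasible but not necessarily irreducible):

using JuMP, HiGHS

# Hypothetical helper: true if the model solves to INFEASIBLE.
function is_infeasible(model)
    set_optimizer(model, HiGHS.Optimizer)
    set_silent(model)
    optimize!(model)
    return termination_status(model) == INFEASIBLE
end

# Bisection over a list of constraint names. `build()` returns a fresh
# copy of the model; `names` is the current candidate set (every name is
# assumed to exist in the fresh model).
function bisect_infeasible(build, names)
    length(names) <= 1 && return names
    half = names[1:div(length(names), 2)]
    for keep in (half, setdiff(names, half))
        model = build()
        # Turn off every candidate constraint except the ones we keep.
        for n in setdiff(names, keep)
            delete(model, constraint_by_name(model, n))
        end
        if is_infeasible(model)
            return bisect_infeasible(build, keep)  # recurse on the smaller set
        end
    end
    return names  # the conflict spans both halves; stop here
end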
There's the copy_conflict function:
julia> using JuMP, Gurobi
julia> model = Model(Gurobi.Optimizer);
julia> set_silent(model)
julia> @variable(model, 0.1 <= x <= 0.9, Int)
x
julia> @variable(model, y <= 0)
y
julia> @constraint(model, x <= y)
x - y ≤ 0.0
julia> optimize!(model)
julia> compute_conflict!(model)
julia> conflict, index_map = copy_conflict(model);
julia> print(conflict)
Feasibility
Subject to
x ≥ 0.1
x ≤ 0.9
x integer
PR to improve docs here: https://github.com/jump-dev/JuMP.jl/pull/3039
Another thing we've been using in a project to verify solutions at small scales is an enumeration function for the integer variables (which have to be bounded):
function min_via_enum(f, n, values = fill(0:1, n))
    solutions = Iterators.product(values...)
    best_val = Inf
    best_sol = nothing
    for sol in solutions
        sol_vec = collect(sol)
        val = f(sol_vec)
        if best_val > val
            best_val = val
            best_sol = sol_vec
        end
    end
    return best_val, best_sol
end
In the mixed-integer case, f can itself optimize over the continuous variables with the integer variables fixed, as in the sketch below.
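For example (a made-up sketch, with HiGHS standing in for any LP solver): solve the continuous subproblem for each fixed 0-1 vector.

using JuMP, HiGHS

# For a fixed 0-1 vector `z`, solve the continuous subproblem and return
# its optimal value (or Inf if the subproblem is infeasible).
function continuous_subproblem(z)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= y <= 1)
    @constraint(model, y >= 0.5 - sum(z))
    @objective(model, Min, sum(z) + 2y)
    optimize!(model)
    return termination_status(model) == OPTIMAL ? objective_value(model) : Inf
end

best_val, best_z = min_via_enum(continuous_subproblem, 2)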
I have a start here: https://jump.dev/JuMP.jl/previews/PR3043/tutorials/getting_started/debugging/
Comments appreciated #3043
YALMIP has a great collection of debugging tips:
Here is another reference for prior art: the Math Program Inspector.
Thanks @ccoffrin : the bibliography in your link gives this nice reference to Bruce McCarl's paper here: https://ageconsearch.umn.edu/record/15553/files/30020403.pdf
I've merged a tutorial: https://jump.dev/JuMP.jl/dev/tutorials/getting_started/debugging/
From discussion with @ccoffrin, one thing we could add is a modeling suggestion to structure models so that they have complete recourse (that is, they never have an infeasible solution); a small sketch of the idea is below.
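For example (a made-up illustration): give the constraint a penalized slack variable so that it can always be satisfied.

using JuMP

model = Model()
@variable(model, x >= 0)
@variable(model, slack >= 0)                 # recourse variable
@constraint(model, demand, x + slack >= 10)  # always satisfiable
@objective(model, Min, x + 1_000 * slack)    # large penalty discourages slack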
The proposed feasibility relaxation tool is here: https://github.com/jump-dev/MathOptInterface.jl/pull/1995. We'd just need some plumbing on the JuMP side to tidy it up.
MOI.Utilities.PenaltyRelaxation has been added to MOI, and will be in the v1.11.0 release.
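For reference, a minimal MOI-level sketch of what it does, following the documented API (the toy model is made up):

import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)
c = MOI.add_constraint(model, 1.0 * x, MOI.LessThan(2.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)

# Relax `c` with a penalty of 2.0 per unit of violation; `map` relates
# each relaxed constraint to its penalized slack expression.
map = MOI.modify(model, MOI.Utilities.PenaltyRelaxation(Dict(c => 2.0)))
print(model)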
One area where I think we can do much more to help users is to provide tooling and documentation to help them debug and test the optimization models that they build.
I had this as one of the items in https://github.com/jump-dev/JuMP.jl/issues/2348, but I'm lifting it to a stand-alone issue.
I think what this needs is a new tutorial that goes over some common issues, and provides links to other resources.
If anyone has ideas/links, please post them below.
Motivation
@mtanneau's JuMP-dev talk discussed a lot of this, but it keeps coming up. I see a lot of common issues on Discourse where people are missing some strategies to debug things.
Prior art
It's also a topic that is surprisingly lacking in other modeling libraries.
One modeling system's documentation suggests that users add checks to their code to validate data, and notes that "you may have to spend some time reviewing your formulation before you discover what is wrong." The tooling suggestion is to print out a constraint or objective to look at the data. But that doesn't scale to large problems very well.
Gurobi has feasrelax and IIS support: https://www.gurobi.com/documentation/9.5/examples/diagnose_and_cope_with_inf.html
I found this excellent paper that should be required reading for anyone who encounters a bug while modelling:
It does have one great paragraph: "Just like science, debugging cannot be a cold, calculating and linear process. Both involve essential elements of inspired guesswork, hunches and sudden flashes of insight which cut through the mist."
They lay out a good framework for debugging as a table/flow-chart. We could aim to replicate that.
Other links
Documentation: debugging Julia code in general
Give examples of how different common Julia errors can arise (BoundsError, MethodError, ...); a small illustration follows.
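For example (a made-up illustration): off-by-one indexing into constraint data is a classic source of a BoundsError.

using JuMP

model = Model()
data = [1.0, 2.0, 3.0]          # only 3 coefficients...
@variable(model, x[1:4] >= 0)   # ...but 4 variables
try
    @constraint(model, sum(data[i] * x[i] for i in 1:4) <= 1)
catch err
    @show typeof(err)           # BoundsError
end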
Other resources
Documentation: debugging solver issues
If the solver returns something unexpected, 99% of the time there is a bug in your code.
Other resources
Erwin has a good tactic for debugging unbounded problems: http://yetanothermathprogrammingconsultant.blogspot.com/2018/08/the-best-way-to-debug-infeasible-models.html
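A rough sketch of that tactic (model, solver, and the 1e6 bound are all made up): add large artificial bounds, re-solve, and see which variables land on them.

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x - y <= 1)
@objective(model, Max, x + y)   # unbounded: x and y can grow together
optimize!(model)
@show termination_status(model) # typically DUAL_INFEASIBLE for an unbounded LP

# Add large artificial bounds and re-solve: variables that end up at the
# artificial bound point to the unbounded ray.
for v in all_variables(model)
    has_upper_bound(v) || set_upper_bound(v, 1e6)
end
optimize!(model)
for v in all_variables(model)
    value(v) >= 1e6 - 1e-6 && println(name(v), " is at its artificial bound")
end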
Documentation: unit testing
@test to ensure ...
primal_feasibility_report to validate (in)feasible solutions (see the sketch below)
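For example (a made-up model and point):

using JuMP, Test

model = Model()
@variable(model, x >= 0)
@constraint(model, c, 2x <= 1)

# A feasible point: the report of violations is empty.
@test isempty(primal_feasibility_report(model, Dict(x => 0.4)))

# An infeasible point: the report maps violated constraints to amounts.
report = primal_feasibility_report(model, Dict(x => 1.0))
@test report[c] ≈ 1.0   # 2 * 1.0 exceeds the right-hand side 1 by 1.0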
Tooling: feasibility relaxation
Some solvers have native support, but we should be able to write this as a function in JuMP.
Tooling: how to prove your formulation is correct
I've never really seen a good answer for this, and this just stung me. My model was:
So we need some way of easily validating inputs and outputs. We have primal_feasibility_report, so perhaps we could have some unit testing framework that lets you provide sets of variable values to assert that they are (in)feasible.
@mtanneau pointed out that small test cases in unit tests can be brittle if there are multiple optimal solutions, you're using a local solver, or you're timing out early.