Status: Closed (odow closed this 3 months ago)
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 98.35%. Comparing base (434940d) to head (d61127b).
Thank you so much, yes that is better.
The Big-M trick only works if the domain is bounded (in this case, by the Big-M itself), so I find it a bit self-contradictory. But I am fine with leaving it; who knows which readers will benefit from seeing the Big-M formulation there.
I guess the `M` is chosen by the user, whereas JuMP cannot automatically perform the same reformulation unless we know the bounds to infer a suitable `M`.
In practice, all decision variables in reality have bounds (you cannot have infinite anything). It's just that the user may not directly specify them, or know an appropriate value.
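To make that concrete: if the user does supply finite bounds, a suitable `M` can be derived mechanically. The following is a minimal sketch in plain Python arithmetic (not JuMP code; `infer_big_m` is a hypothetical helper, not part of any library): for an indicator constraint `z --> {sum(x) <= b}` with upper bounds `u` on `x`, any `M >= sum(u) - b` makes the big-M constraint non-binding when `z = 0`.

```python
# Hypothetical helper illustrating how a suitable big-M could be
# inferred from variable bounds; NOT part of JuMP.
def infer_big_m(upper_bounds, b):
    # For sum(x) <= b, the worst-case violation over the bounded box
    # is sum(upper_bounds) - b, so that value is a valid (tight) M.
    return sum(upper_bounds) - b

# Example: x[i] <= 50 for i = 1, 2, and the constraint sum(x) <= 1.
M = infer_big_m([50.0, 50.0], 1.0)
print(M)  # 99.0
```

Without such bounds there is no finite `M` that preserves the indicator semantics, which is exactly the problem with unbounded variables.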
I just saw this with the new release. "Trick 2" bothers me, because it is not possible to use big-M constraints if the variables do not have a finite domain; doing so actually adds bounds.
MWE for which this changes the result:
```julia
using JuMP
model = Model()
@variable(model, x[1:2])
@variable(model, z, Bin)
@constraint(model, z --> {sum(x) <= 1})
@objective(model, Max, sum(x))
```
Termination status: unbounded, while
```julia
using JuMP
model = Model()
@variable(model, x[1:2])
@variable(model, z, Bin)
M = 100  # the specific value does not matter
@constraint(model, sum(x) <= 1 + M * (1 - z))
@objective(model, Max, sum(x))
```
Termination status: optimal, objective value: `M + 1`.
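The discrepancy comes down to the big-M arithmetic: the indicator constraint imposes nothing when `z = 0`, while the big-M version still bounds `sum(x)` by `1 + M`. A small sketch in plain Python arithmetic (not JuMP code) of the right-hand side the big-M constraint produces for each value of `z`:

```python
M = 100

def big_m_rhs(z, M=M):
    # Right-hand side of the big-M constraint sum(x) <= 1 + M*(1 - z).
    return 1 + M * (1 - z)

# z = 1: the constraint is active, sum(x) <= 1, matching the indicator.
print(big_m_rhs(1))  # 1
# z = 0: the true indicator imposes nothing, but the big-M constraint
# still gives the finite bound sum(x) <= 1 + M, hence the optimum M + 1.
print(big_m_rhs(0))  # 101
```

So the big-M model is bounded (optimal value `M + 1`) precisely because it silently adds a bound that the indicator model does not have.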
Closes https://github.com/jump-dev/JuMP.jl/issues/3701
Is this better @schlichtanders?