Closed: a-torgovitsky closed this issue 4 years ago.
In `estbounds`, the feasibility of the problem is determined by evaluating the true problem if `estimate = FALSE`.
By the true problem, I mean the one that solves min/max `A.tgt x` subject to the constraints.
Looking back at this procedure, it seems that using `mincriterion` to determine whether the problem is feasible is a better approach, because `mincriterion` solves only one LP/QP while the approach that is currently used solves two LPs. Hence, do you think I should change this step to simply use `mincriterion`?
In addition, do you mean that if the objective value from `mincriterion` is less than the tolerance level used by Gurobi (which is 1e-9), I should consider the problem feasible?
Thanks!
If `estimate = FALSE`, I would just start by directly trying the "true problem" that you mention.
Then check Gurobi's return flag -- was the problem feasible or infeasible?
Actually, I don't think it's necessary to complicate things by falling back on the `estimate = TRUE` behavior in case of infeasibility. Just return `lb = +Inf` and `ub = -Inf` and a warning that the identified set is empty. I think this makes more sense, since `estimate = TRUE` is the default.
Done! I am also thinking about the possibility that only one of the min and max problems is feasible and the other is infeasible. It seems that this is possible when the feasible region defined by the constraints is unbounded, so that only one of the min/max LPs is feasible.
In these cases, should I return the infeasible bound as `lb = +Inf` or `ub = -Inf` (depending on which one is infeasible), and include a warning message saying that one of the bounds is infeasible?
Thanks!
Good point, but it's not possible that one of min or max is feasible and the other is not --- the constraint set is the same in both problems!
What you are describing is not infeasibility, it is unboundedness, which is a different concept.
If the max problem is unbounded then you would want to return `ub = +Inf`, because there is no upper bound.
This is perfectly meaningful; it just says that the identified set is a half-infinite interval.
Ah I see, yes you are right! It is possible that one of them is unbounded while the other is not, and this is different from infeasibility.
I will update my code to get the status codes for infeasibility and unboundedness from Gurobi so it returns the correct answer. Thanks!
Done! I have updated `estbounds` to reflect the changes.
Based on the definitions of the optimization status codes for Gurobi here, I consider the following cases:
- If the status is `OPTIMAL`, there is nothing else to do.
- If the status is `INF_OR_UNBD`, I set `DualReductions = 0` in `params` and run the LP/QP again. The results returned will be the ones from the run with `DualReductions = 0`.
- If the status is `UNBOUNDED`, I set the objective to 0 and run the LP/QP again, because the guide says that this status code says nothing about feasibility. If the LP/QP with the zero objective is `OPTIMAL`, then I set the result to `Inf` for the max problem and to `-Inf` for the min problem.
- If the status is neither `OPTIMAL` nor `UNBOUNDED`, a warning message is printed.
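These cases could be sketched roughly as follows. This is a hypothetical helper, not the actual `estbounds` code; the function and argument names are made up for illustration, and it only covers the decision logic (the Gurobi calls themselves are omitted):

```r
# Hypothetical helper (made-up names; not the actual estbounds code).
# `status` is Gurobi's status for the original LP/QP; `zero.obj.status` is
# the status of a re-run with a zero objective, used only when the original
# problem is UNBOUNDED; `sense` is "max" or "min".
resolve.bound <- function(status, objval = NULL,
                          zero.obj.status = NULL, sense = "max") {
  if (status == "OPTIMAL") {
    # Nothing else to do: keep the optimal value.
    objval
  } else if (status == "UNBOUNDED") {
    # UNBOUNDED says nothing about feasibility, so check the re-run with a
    # zero objective: if that is OPTIMAL, the constraint set is nonempty.
    if (identical(zero.obj.status, "OPTIMAL")) {
      if (sense == "max") Inf else -Inf
    } else {
      warning("The problem is infeasible.")
      NA
    }
  } else {
    # INF_OR_UNBD would first be re-solved with DualReductions = 0;
    # any remaining status triggers a warning.
    warning(sprintf("Unexpected status code: %s", status))
    NA
  }
}
```

For example, `resolve.bound("UNBOUNDED", zero.obj.status = "OPTIMAL", sense = "max")` returns `Inf`, matching the half-infinite-interval case above.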
Below are two examples to illustrate the above.
The objective function for both examples is x_1 + x_2.
The constraint for the first example is x_1 = 1 (the same for the `obs` and `shp` constraints).
The constraints for the second example are x_1 = 1 and x_1 = 2, so it is an infeasible LP.
I also leave the variable x_2 unbounded above for illustrative purposes.
> # Example 1: Bounded below and unbounded above
> Amat <- matrix(c(1,0), nrow = 1)
> lpm.inf <- lpmodel(A.obs = Amat,
+ A.shp = Amat,
+ A.tgt = matrix(c(1,1), nrow = 1),
+ beta.obs = 1,
+ beta.shp = 1)
>
> estbounds(lpmodel = lpm.inf,
+ solver = "gurobi",
+ estimate = FALSE)
True bounds: [1, Inf]
>
> # Example 2: I make the shape constraint contradict the obs constraint so the LP is infeasible
> lpm.inf2 <- lpm.inf
> lpm.inf2$beta.shp <- 2
> estbounds(lpmodel = lpm.inf2,
+ solver = "gurobi",
+ estimate = FALSE)
The identified set is empty.
Warning message:
In estbounds(lpmodel = lpm.inf2, solver = "gurobi", estimate = FALSE) :
The identified set is empty. Please check 'ub.status' and 'lb.status' for the status codes.
I also return the status codes for the upper and lower bounds as `ub.status` and `lb.status` to let the user check the codes if necessary.
In this example, the lower bound is optimal and the upper bound is infeasible. optimal-infeasible.RData.zip

First, this can't be correct, since again the constraints are the same for both problems. It also leads to nonsensical output:
> rm(list = ls())
> load("optimal-infeasible.RData")
> library("lpinfer")
>
> r <- estbounds(lpmodel = lpm, estimate = FALSE)
Warning message:
In estbounds(lpmodel = lpm, estimate = FALSE) :
The identified set is empty. Please check 'ub.status' and 'lb.status' for the status codes.
> print(r$lb.status)
[1] "OPTIMAL"
> print(r$ub.status)
[1] "INFEASIBLE"
> print(r)
True bounds: [0.749786195338944, -Inf]
In this second example, I am very surprised that Gurobi is returning infeasible. Are you using the default feasibility tolerances? The criterion is very close to zero. spurious-infeasible.RData.zip
> rm(list = ls())
> load("spurious-infeasible.RData")
> library("lpinfer")
>
> r <- estbounds(lpmodel = lpm, estimate = FALSE)
Warning message:
In estbounds(lpmodel = lpm, estimate = FALSE) :
The identified set is empty. Please check 'ub.status' and 'lb.status' for the status codes.
> print(r$lb.status)
[1] "INFEASIBLE"
> print(r$ub.status)
[1] "INFEASIBLE"
> print(r)
The identified set is empty.
>
> r <- estbounds(lpmodel = lpm, estimate = TRUE)
> print(r$mincriterion)
[1] -1.312181e-08
I figured out that the problem was that I was actually not using the default tolerance. I originally used 1e-9 in the code, which is much smaller than the default tolerance of 1e-6.
After using the default tolerance level, I get results that make more sense in your examples:
Example 1:
> library(lpinfer)
> r <- estbounds(lpmodel = lpm, estimate = FALSE)
> r
True bounds: [0.746135619607553, 0.764358633219347]
Example 2:
> r <- estbounds(lpmodel = lpm, estimate = FALSE)
> r
True bounds: [0.396098897474879, 0.76468163732417]
Ok, very good. In general, we want to stick with the defaults for Gurobi, unless we have a good reason to change them (e.g. in #78).
Then if we do change them, we want to remember to change them back afterwards, since the Gurobi solver instance is persistent across bootstrap runs. (That's how you coded it, right? We don't create a new solver each time we bootstrap?)
I see, I will stick with the defaults for Gurobi.
Actually, I think I created a new solver each time in the procedure (creating a new `params` object each time and then passing it to `gurobi`).
Do you mean that what I should do is just set up `params` at the beginning, and if I change any of the settings in one of the bootstrap runs, I have to reset them to the original values afterwards?
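For what it's worth, R's copy-on-modify semantics already help here: modifying a list inside a function only changes a local copy, so a per-run change to `params` cannot leak into later runs. A minimal sketch with made-up names:

```r
# Hypothetical sketch (made-up names): R lists are copy-on-modify, so a
# temporary option change made on a local copy never alters the shared list.
params <- list(OutputFlag = 0)        # set up once, before the bootstrap loop

with.dual.reductions.off <- function(params) {
  params$DualReductions <- 0          # modifies a local copy only
  params                              # this copy would be passed to gurobi()
}

params.tmp <- with.dual.reductions.off(params)
# params.tmp gains DualReductions = 0, while params itself is unchanged
```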
Based on my experience with other Gurobi APIs (or at least my imperfect memory of it...), I had thought that there was some sort of overhead with starting a Gurobi instance, pulling the license, and setting up aspects of the model.
For example, you might expect that we could have substantial cost savings since we are only changing the objective function on each bootstrap draw.
However, I read through the R section of the manual today and didn't find any reference to these issues or any way to do that.
So maybe I am mistaken and it is a non-issue.
Anyway, I looked at your code in `gurobi.optim` and I think it looks good.
Also, a minor comment: sticking with the defaults means just letting Gurobi choose. So you don't want to hard-code a value like this: https://github.com/conroylau/lpinfer/blob/85cac2c67bccf9cc86c7a26d7cb18cda2970e91f/R/optim.R#L59-L60 There shouldn't be any need to, since Gurobi will use its default, whether that is 1e-6, or whether they happen to change it in the future to some different value.
Sure, thank you for telling me about this. I will keep this in mind if I use other Gurobi APIs in the future.
I have also updated the Gurobi optimization code by replacing the options with an ellipsis (`...`), which represents any options that we might want to pass to Gurobi. If nothing is passed through the ellipsis, Gurobi's defaults will be used.
By the way, is it okay if I hard-code `OutputFlag` as 0 in Gurobi so it won't print the messages from solving the optimization problem?
Interesting, can you explain how the ellipsis works in this context? I'm not familiar with it. I just quickly tried this and it didn't work:
> params <- list(...)
Error: '...' used in an incorrect context
>
Is there something special about the way it appears in `gurobi.optim`?
And setting `OutputFlag` to 0 is fine. That one we have a good reason to change.
Sure, I think the ellipsis has to be used inside a function definition to capture the unnamed arguments. For example, I can define a function as follows:
f.demo <- function(a, ...) {
print(a)
x <- list(...)
print(x)
}
If I only pass `a = 1` to `f.demo`, then I get:
> f.demo(a = 1)
[1] 1
list()
Here, the list from the ellipsis is empty because I passed nothing to it.
On the other hand, if I want to pass something like options for the Gurobi solver, I can do something like this:
> f.demo(a = 1, Method = 1, ScaleFlag = 3)
[1] 1
$Method
[1] 1
$ScaleFlag
[1] 3
By doing it this way, I do not need to specify the exact arguments of the function in advance. I guess this is more flexible because I do not need to list all the Gurobi options in the package function `gurobi.optim` in advance.
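A minimal sketch of how this forwarding could look (a hypothetical `make.params` helper, not the actual `gurobi.optim` code): user options arrive via `...` and are merged with the hard-coded `OutputFlag`:

```r
# Hypothetical helper (not the actual gurobi.optim code): collect
# user-supplied Gurobi options from `...` and always silence the output.
make.params <- function(...) {
  user <- list(...)                        # e.g. list(Method = 1, ScaleFlag = 3)
  params <- utils::modifyList(list(OutputFlag = 0), user)
  params$OutputFlag <- 0                   # keep OutputFlag hard-coded to 0
  params
}
```

Calling `make.params(Method = 1, ScaleFlag = 3)` then yields a list with `OutputFlag = 0` plus the two user options, while `make.params()` alone leaves everything else at Gurobi's defaults.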
I see, thanks for the explanation! This seems like a really good solution to me!
I think we can close this issue now?
Sure!
How are we deciding whether the problem is infeasible or not?
Is it based on the return from the solver? Or based on the minimum objective value?
I am finding that in some cases that I know are feasible I get this warning message, and the `mincriterion` value is very small, like 1e-10, smaller than Gurobi's tolerance.