Closed benoitsteiner closed 3 years ago
Hi @benoitsteiner ,
Thank you so much for the PR! Most of the team is away at the moment (summer vacations), so it might take a few weeks before we are able to merge this. Meanwhile I'll start having a look on my side. Would you like this feature to be released in time for the competition?
Best, Maxime
Hi @gasse. It would be great to have this available for the competition.
Hi @benoitsteiner,
The PR looks good. However, as you've probably noticed, it is not intuitive to map the branching action_set to the Khalil2016 observation function. For that reason, we have made some (unreleased) changes related to that. Perhaps you could give your opinion on them.
In #210, we changed Khalil2016 to return an array that has as many rows as there are variables. Rows for non-branching candidates are simply filled with nan. That way the array can be directly indexed with the action set. It's a bit wasteful in terms of memory, but we believe it should reduce the confusion.
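To illustrate the #210 layout, here is a minimal numpy sketch of a padded observation that is indexed directly with the action set. The array names and sizes are illustrative only, not Ecole's actual API:

```python
import numpy as np

n_vars, n_features = 5, 3  # illustrative sizes

# Padded Khalil-style observation: one row per variable,
# with rows for non-candidate variables filled with NaN.
features = np.full((n_vars, n_features), np.nan)

# Indices of the current branching candidates (illustrative).
action_set = np.array([0, 2, 4])
features[action_set] = np.random.rand(len(action_set), n_features)

# The array can now be indexed directly with the action set,
# yielding one fully populated row per candidate.
candidate_features = features[action_set]
assert candidate_features.shape == (3, 3)
assert not np.isnan(candidate_features).any()
```

The cost of this scheme is the NaN-filled rows for non-candidate variables, which is the memory overhead mentioned above.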
@gasse @dchetelat This PR would not break existing code (and would make Khalil2016 more usable). We could release a bug fix with this, and introduce the new indexing in the next release.
Hi @AntoinePrv,
There are several ways to refer to variables in SCIP (original variables, transformed variables, and columns). This forced me to go back to the source code to figure out what's really going on. The change from columns to variables is a good one in the long run, since it makes the API more user friendly.
However, I'm not sure #210 is the best way to go for the competition, since it's not backward compatible with the existing code (at least in the NodeBipartite case), which is row/column centric. This would force participants to either generate new training/test sets or reindex their existing datasets. Another potential issue with #210 is that users would need to strip the NaN rows from their observations to ensure they can still train neural networks.
Would it be possible to merge #210 after the conclusion of the competition?
Yes, that's exactly the plan :) The big changes @AntoinePrv mentioned will be released in v0.8, after the competition ends. Meanwhile we can still release minor fixes during the competition, as long as they do not break things for the participants.
Best, Maxime
@benoitsteiner we've released this in v0.7.3.
The Branching environment has a flag to decide whether or not to extract pseudo candidates; this PR adds the same flag to the Khalil2016 observation function.