Open tle4336 opened 10 months ago
Could anyone help me with this issue: two distinct hashcodes are being mapped to the same named variable, leading the PuLP solver to treat them as two distinct variables rather than one. Is the way to circumvent this behavior to export the model to a local `.mps` file and then read it back immediately afterwards, so that a single fresh hashcode is generated for each named variable?
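For anyone following along, here is a minimal pure-Python sketch of why this happens. The `Var` class below is a hypothetical stand-in for a solver variable: like PuLP's variables, it does not define name-based equality, so Python's default identity-based `__hash__` applies and a dict of coefficients treats two same-named objects as two different keys.

```python
class Var:
    """Hypothetical stand-in for a named solver variable.

    It does NOT override __eq__/__hash__, so hashing falls back
    to object identity, as with pulp's variables.
    """
    def __init__(self, name):
        self.name = name

# Two objects representing the "same" variable by name,
# e.g. created independently in two child processes/threads.
x1 = Var("x")
x2 = Var("x")

coeffs = {}
coeffs[x1] = 1.0   # first copy of "x"
coeffs[x2] = 2.0   # same name, different identity -> a NEW key

print(len(coeffs))            # 2: treated as two distinct variables
print(hash(x1) == hash(x2))   # False: two distinct hashcodes
```

This is the "two hashcodes for the same named variable" symptom: the model silently carries two columns where the user intended one.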
You can check the docs for the different ways to export and import the problem from MPS or JSON.
I would like to revisit this topic again (https://github.com/coin-or/pulp/issues/155). Is there any way for us to reconcile the hashcodes created by multiple child processes by exporting the model (as an `.mps` file) to a local folder and then reading it back immediately afterwards? I read point #1 there, which says: "PuLP permits having variable names because it uses an internal code for each one. But we do not export that code. So we identify variables by their name only." My current model has tens of thousands of variables with distinct names (yes, absolutely no overlapping names) and thousands of constraints.

Based on that sentence, suppose I use multiple threads to add hundreds of constraints to a `pulp` object, then at the end of the process export that `pulp` object to an `.mps` file and read the model back immediately (the whole purpose being to ensure there is exactly ONE hashcode for each variable appearing in the objective function and constraints). I would like to ask for your expertise (@pchtsp @stumitchell): would this solve the issue of having two hashcodes for the same named variable appearing in the objective function and some of the constraints, given that those constraints were built in a multi-threaded environment?
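To make the proposed round trip concrete, here is a sketch in plain Python rather than PuLP (the `Var` class and constraint structures are hypothetical stand-ins, not PuLP's API). The point is that a name-keyed export format like `.mps` forgets object identity, so rebuilding from it naturally produces one canonical object (hence one hashcode) per name.

```python
class Var:
    """Hypothetical stand-in for a named solver variable (identity-hashed)."""
    def __init__(self, name):
        self.name = name

# Constraints built in parallel may each hold their OWN copy of "x".
constraints = [
    {Var("x"): 1.0, Var("y"): 2.0},  # thread 1's copies
    {Var("x"): 3.0},                 # thread 2's separate copy of "x"
]

# "Export": keep only names and coefficients, as an .mps file would.
exported = [{v.name: c for v, c in con.items()} for con in constraints]

# "Import": rebuild with exactly one canonical Var per name.
canonical = {}
def get_var(name):
    # Return the single shared object for this name, creating it once.
    if name not in canonical:
        canonical[name] = Var(name)
    return canonical[name]

rebuilt = [{get_var(n): c for n, c in con.items()} for con in exported]

# Every occurrence of "x" is now the same object: one hashcode.
xs = {v for con in rebuilt for v in con if v.name == "x"}
print(len(xs))   # 1
```

Under this sketch's assumptions, the answer to the question is plausibly yes: the round trip collapses duplicate objects because identity is re-derived from names on import. Whether PuLP's own MPS reader behaves exactly this way is for the maintainers to confirm.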