odow closed this pull request 1 year ago.
Patch and project coverage have no change. Comparison is base (620724b) 87.30% compared to head (95f7a45) 87.30%. Report is 1 commit behind head on master.
Hey @odow, thank you for your PR. This improves readability and performance at the same time, nice :) JuMP is a wonderful package. Some years ago, we even tried switching to Julia for our energy modelling system in order to benefit from JuMP, but it proved too cumbersome for various reasons...
Yes, JuMP's internal memory consumption has already been mentioned by a colleague of mine, but it is still relatively low. For comparability, I would keep the variable names (the other packages also keep them). Turning on the direct model would be totally fine by me.
I'll merge this one. If you have other propositions, feel free to create a PR :)
(I think I should remove or-tools from the list; I also had huge problems installing it...)
Hey! I've been meaning to check this project out. People keep mentioning it to me.
Your docs are very nice, and I quite like this style of modeling. I couldn't get the benchmarks to run (it told me it couldn't find `ortools-python`, and it took ages trying to find a feasible install?), but here are a couple of minor changes. (Feel free to close this PR if you would prefer to keep them as-is.)

I thought it might be helpful to explain why JuMP's memory usage is so high: we actually store three copies of the problem data, plus the `x` and `y` variables.

Here's a demonstration:
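As a minimal sketch of what that demonstration might look like (the use of HiGHS here and the `build_model` helper are my own illustrative choices, not from the original benchmark; the numbers will vary by machine):

```julia
# Build a small JuMP model in the default (cached) mode and measure
# how many bytes building it allocates.
using JuMP, HiGHS

function build_model(n)
    model = Model(HiGHS.Optimizer)
    @variable(model, x[1:n] >= 0)
    @variable(model, y[1:n] >= 0)
    @constraint(model, [i in 1:n], x[i] + y[i] <= 1)
    @objective(model, Max, sum(x) + sum(y))
    return model
end

build_model(10)  # warm-up call so @allocated measures the model, not compilation
bytes = @allocated build_model(10_000)
println("Cached model allocated ≈ $(round(bytes / 1e6; digits = 1)) MB")
```

In the default mode, those allocations include JuMP's cached copy of the problem in addition to what the solver itself stores, which is where the extra memory goes.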
The biggest change with `direct_model` is that memory usage drops by 20% (not the expected 1/3, because there's some other overhead, etc.).

If we turn off passing variable names to Gurobi, we get a further improvement:
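A sketch of both changes together (again assuming HiGHS rather than Gurobi for the solver; the API calls are standard JuMP):

```julia
# Direct mode: JuMP forwards each variable and constraint straight to the
# solver instead of keeping its own cached copy of the problem.
using JuMP, HiGHS

model = direct_model(HiGHS.Optimizer())
# Skip generating and passing string names like "x[1]" to the solver:
set_string_names_on_creation(model, false)
@variable(model, x[1:1_000] >= 0)
@objective(model, Min, sum(x))
optimize!(model)
```

Note that direct mode has some trade-offs (for example, you can't change the optimizer afterwards), which is part of why it isn't the default.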
But I wouldn't expect most users to care about this.