Closed dannyopts closed 9 months ago
This is the problem I have noticed and wanted to report.
What I have found is that the hack to save tokens closes the environment as soon as a solution is found. Other parts of the code still expect the environment to be up and running, so when you try to explore the solution it throws this error.
I was thinking about an option to retrieve the solution before the env is closed. But if the problem is infeasible and you try to compute the IIS, you will hit the same problem.
Maybe computing the IIS when using a remote server should require you to manage the env yourself, and this could just be added to the docs?
Since computing the IIS could happen at any time, I can't think of how this could work, unless the IIS had to be computed straight after the run via some wrapping function like solve_or_computeIIS which would itself manage the env creation?
I am currently experiencing a bug when using a remote Gurobi compute server.
The issue is that the env is destroyed before we retrieve the solution; if we lose that race, we can't retrieve the solution and blow up with an error.
More details on how to reproduce are below, but the TL;DR is: the job is aborted when we exit the stack on line 582. I presume deletion of this env on the compute-server side is then an asynchronous process. If we ask "in time" we can get the solved values back, but if not we error out.
I think the fix is simple: we just need to retrieve the solution before we exit the stack.
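The ordering problem can be demonstrated with a plain-Python sketch (no Gurobi involved; `RemoteEnv` and its behaviour are invented stand-ins for the compute-server env and the aborted job):

```python
from contextlib import ExitStack


class RemoteEnv:
    """Stand-in for a compute-server env: results vanish once it closes."""

    def __init__(self):
        self.alive = True
        self.result = None

    def solve(self):
        self.result = 42

    def fetch(self):
        if not self.alive:
            raise RuntimeError("env already closed: job aborted on the server")
        return self.result

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.alive = False  # closing the env aborts the remote job


# buggy order: fetch only after the stack has already closed the env
with ExitStack() as stack:
    env = stack.enter_context(RemoteEnv())
    env.solve()
try:
    env.fetch()
except RuntimeError as e:
    print("lost the race:", e)

# fixed order: fetch while the env is still open
with ExitStack() as stack:
    env = stack.enter_context(RemoteEnv())
    env.solve()
    solution = env.fetch()  # retrieved before the env is torn down
print("solution:", solution)
```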
A workaround is to pass in an env yourself rather than letting linopy create one for you.
Happy to create a PR to make this fix.
Steps to reproduce
You need a Gurobi compute server running with a valid license; I am doing this via Docker Compose.
Create a second .lic file pointing at this compute server.
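A minimal client-side license file for this looks something like the following (host and port are placeholders for wherever the Docker container is listening; point the client process at it via the `GRB_LICENSE_FILE` environment variable):

```
COMPUTESERVER=http://localhost:61000
```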
Then build a simple model and solve it, pointing at the Gurobi server.
Now run the model:
This "should" solve with no issues.
Now add a time.sleep(5) before calling get_solver_solution in run_gurobi (simulating some slowness in the OS); this seems realistic when working with larger models.
Now run the original model again
When I do this I see
Followed by