odow / SDDP.jl

A JuMP extension for Stochastic Dual Dynamic Programming
https://sddp.dev

Free up resources in rolling horizon #699

Closed pauleseifert closed 11 months ago

pauleseifert commented 11 months ago

Hi Oscar,

I have implemented a larger model and want to investigate a rolling horizon version.

The model is trained in serial mode, which reserves (stages * states_per_stage + 1) Gurobi instances. In each iteration of a loop, I load the new data and create a model = SDDP.MarkovianPolicyGraph(). The model is then trained, the policy is simulated, and the transition variables are saved to the hard drive before a new model = SDDP.MarkovianPolicyGraph() is created for the next period.

However, the Gurobi instances are not freed up in between iterations, so the run eventually exhausts the available licences or kills the Gurobi token server.

I tried setting model = nothing at the end of each loop, but without the expected result. Do you have any ideas on how to resolve this?

Best, Paul

odow commented 11 months ago

You can force the garbage collector with GC.gc(). But you should make sure that all references to the model are no longer available. The best way would be to wrap the rolling horizon step into a function that takes in the current state and returns the first few output steps. Then call the function from within the loop, and potentially force GC.gc() after each function call.
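A minimal sketch of that pattern, assuming a Gurobi-backed Markovian policy graph; load_data, build_subproblem!, next_state, and initial_state are hypothetical placeholders for your model's details:

```julia
using SDDP, Gurobi

function solve_period(t, incoming_state)
    data = load_data(t)  # hypothetical: load the data for period t
    model = SDDP.MarkovianPolicyGraph(;
        transition_matrices = data.transition_matrices,
        sense = :Min,
        lower_bound = 0.0,
        optimizer = Gurobi.Optimizer,
    ) do sp, node
        # Hypothetical: build the stage problem, using `incoming_state`
        # to fix the initial state of period t.
        build_subproblem!(sp, node, data, incoming_state)
    end
    SDDP.train(model; iteration_limit = 100)
    sims = SDDP.simulate(model, 1, data.state_names)
    # `model` goes out of scope when the function returns, so the garbage
    # collector can finalize the underlying Gurobi models.
    return next_state(sims)  # hypothetical: extract the outgoing state
end

state = initial_state()  # hypothetical
for t in 1:T
    state = solve_period(t, state)
    GC.gc()  # free the Gurobi instances from the previous period
end
```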

You can also create a single env = Gurobi.Env() object and pass that with optimizer = ()->Gurobi.Optimizer(env).
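For example (a sketch; the keyword arguments stand in for your existing model):

```julia
using SDDP, Gurobi

# One licensed environment, shared by every model in the loop, so each new
# model reuses the same licence/token instead of checking out a fresh one.
const env = Gurobi.Env()

model = SDDP.MarkovianPolicyGraph(;
    transition_matrices = transition_matrices,  # as in your existing model
    sense = :Min,
    lower_bound = 0.0,
    optimizer = () -> Gurobi.Optimizer(env),
) do sp, node
    # ... existing subproblem definition ...
end
```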

pauleseifert commented 11 months ago

That solved the problem, thanks!