odow / SDDP.jl

A JuMP extension for Stochastic Dual Dynamic Programming
https://sddp.dev

Accessing Optimal Decision #675

Closed: SolidAhmad closed this issue 1 year ago

SolidAhmad commented 1 year ago

Once we have trained the model and convergence has been achieved, we end up with a policy graph in which each subproblem at each node contains all the cuts generated in the backward passes. However, we don't have access to the explicit variable values that produce the lower bound; instead, we have to simulate to get an idea of how the variables interact with the policy graph. I am only interested in the first-stage optimal state variables, namely the state variables used to compute the lower bound in the last iteration. Is there a way to access or calculate these directly, as opposed to inferring them through simulations?

odow commented 1 year ago

You can get a decision rule for a node:

https://sddp.dev/stable/tutorial/first_steps/#Obtaining-the-decision-rule
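For example, something along these lines recovers the first-stage decision directly after training. This is a minimal sketch following the hydro-thermal model in that tutorial; the state name :volume, its initial value, the noise term, and the control names are placeholders for whatever your model uses.

```julia
using SDDP

# After SDDP.train(model), build the decision rule for node 1.
rule = SDDP.DecisionRule(model; node = 1)

# Evaluate the rule at the initial (incoming) state. The state key, its
# value, and the noise realization are assumptions taken from the
# tutorial's hydro-thermal example; replace them with your own. If node 1
# has no noise terms, omit the `noise` keyword.
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:volume => 150.0),
    noise = 50.0,
    controls_to_record = [:hydro_generation, :thermal_generation],
)

solution.outgoing_state   # first-stage optimal state variables
solution.stage_objective  # first-stage objective value
```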

If your first stage is deterministic, you can get the JuMP model from node 1 as follows:

sp = model[1].subproblem

But if your first stage is deterministic, then just do a single simulation and look at the values.
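A sketch of that, again assuming the tutorial's :volume state variable (a single replication is enough when node 1 is deterministic, because the first-stage decision is then identical in every replication):

```julia
using SDDP

# One replication, recording the state variable of interest.
# :volume is a placeholder; list your own state/control symbols.
simulations = SDDP.simulate(model, 1, [:volume])

stage1 = simulations[1][1]   # first replication, first stage
stage1[:volume].out          # outgoing first-stage state value
stage1[:stage_objective]     # first-stage objective
```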

SolidAhmad commented 1 year ago

> But if your first stage is deterministic, then just do a single simulation and look at the values.

I get that you meant stochastic. That makes sense, thank you!