[Closed] Esnilg closed this issue 3 years ago
You could imagine doing something like:

```julia
using SDDP, HiGHS

SDDP.LinearPolicyGraph(;
    # The constructor requires these keyword arguments; the values here
    # are illustrative.
    stages = 3,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    @variable(sp, var >= 0, SDDP.State, initial_value = 0)
    SDDP.parameterize(sp, 1:3) do w
        # `var.in` is fixed to the incoming state before `parameterize` runs,
        # so `fix_value` returns the incoming state value.
        v = fix_value(var.in)
        println("In $(node) with w=$(w), got var=$(v)")
        @stageobjective(sp, w + var.out)
    end
end
```
Why do you want to do this though? Do you just want to print the forward pass?
There is also this wildly undocumented `pre_optimize_hook` that you can set (https://github.com/odow/SDDP.jl/commit/88af622304a44a6481485dfaf2f6ca15255143f4), which has access to the incoming states.
Something like this:

```julia
using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(;
    # The constructor requires these keyword arguments; the values here
    # are illustrative.
    stages = 3,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    @variable(sp, var >= 0, SDDP.State, initial_value = 0)
    SDDP.parameterize(sp, 1:3) do w
        v = fix_value(var.in)
        println("In $(node) with w=$(w), got var=$(v)")
        @stageobjective(sp, w + var.out)
    end
end
```
```julia
for (k, node) in model.nodes
    SDDP.pre_optimize_hook(node) do model, node, state, noise, scenario_path, require_duals
        @show state
    end
end
```
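Putting the pieces together, here is a hedged end-to-end sketch: the hooks only fire once training actually solves subproblems, so a short `SDDP.train` call is appended. The stage count, bound, `HiGHS` optimizer, and `iteration_limit` are illustrative assumptions, not values from this thread.

```julia
using SDDP, HiGHS

# Build a small model; stages, bound, and optimizer are illustrative.
model = SDDP.LinearPolicyGraph(;
    stages = 3,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    @variable(sp, var >= 0, SDDP.State, initial_value = 0)
    SDDP.parameterize(sp, 1:3) do w
        @stageobjective(sp, w + var.out)
    end
end

# Register a hook on every node; `state` is the incoming state dictionary.
for (k, node) in model.nodes
    SDDP.pre_optimize_hook(node) do model, node, state, noise, scenario_path, require_duals
        @show state
    end
end

# Training triggers the hook before each subproblem solve.
SDDP.train(model; iteration_limit = 3, print_level = 0)
```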
Closing because this seems resolved. There isn't really a good reason to print the state variable during training, so it's not something I want to support long-term.
Hello Oscar, one question: is there any way to print a state variable (`var.in`) during SDDP training?