I have several random variables and am having trouble writing the subproblems to a file.
Why do you want to do this? What do you want to write? What do you hope to see and what would you use it for?
Does this code snippet help?
julia> using SDDP, HiGHS
julia> Ω = Dict(
           (1, 1) => [(Pd1 = 0, Pd2 = 0), (Pd1 = 0, Pd2 = 0)],
           (2, 1) => [(Pd1 = 212, Pd2 = 214), (Pd1 = 402, Pd2 = 407)],
           (2, 2) => [(Pd1 = 212, Pd2 = 284), (Pd1 = 402, Pd2 = 539)],
           (3, 1) => [(Pd1 = 281, Pd2 = 284), (Pd1 = 533, Pd2 = 539)],
           (3, 2) => [(Pd1 = 281, Pd2 = 377), (Pd1 = 533, Pd2 = 715)],
       )
Dict{Tuple{Int64, Int64}, Vector{NamedTuple{(:Pd1, :Pd2), Tuple{Int64, Int64}}}} with 5 entries:
  (3, 2) => [(Pd1 = 281, Pd2 = 377), (Pd1 = 533, Pd2 = 715)]
  (3, 1) => [(Pd1 = 281, Pd2 = 284), (Pd1 = 533, Pd2 = 539)]
  (1, 1) => [(Pd1 = 0, Pd2 = 0), (Pd1 = 0, Pd2 = 0)]
  (2, 2) => [(Pd1 = 212, Pd2 = 284), (Pd1 = 402, Pd2 = 539)]
  (2, 1) => [(Pd1 = 212, Pd2 = 214), (Pd1 = 402, Pd2 = 407)]
julia> model = SDDP.MarkovianPolicyGraph(
           # transition_matrices[t][i, j] is the probability of moving from
           # Markov state i in stage t-1 to Markov state j in stage t.
           transition_matrices = Array{Float64,2}[
               [1.0]',
               [0.5 0.5],
               [0.5 0.5; 0.5 0.5],
           ],
           sense = :Min,
           lower_bound = 0.0,
           optimizer = HiGHS.Optimizer,
       ) do subproblem, node
           t, markov_state = node
           @variable(subproblem, 0 <= pCmax <= 500, SDDP.State, initial_value = 0)
           @variables(subproblem, begin
               0 <= pE1 <= 400
               0 <= pE2 <= 400
               pC1 >= 0
               pC2 >= 0
               u[1:6]
           end)
           @constraints(subproblem, begin
               # The right-hand sides of con_1 and con_2 are placeholders;
               # SDDP.parameterize overwrites them with the sampled demands.
               con_1, pE1 + pC1 == 0
               con_2, pE2 + pC2 == 0
               sum(u) == 1
               pC1 <= pCmax.out + pCmax.in
               pCmax.out == sum(100 * (i - 1) * u[i] for i in 1:6)
           end)
           stage_cost = [1, 140_000, 70_000]
           @stageobjective(
               subproblem,
               35 * (pE1 + pE2) + 25 * (pC1 + pC2) + stage_cost[t] * pCmax.out
           )
           P = [6000 / 8760, 2760 / 8760]
           SDDP.parameterize(subproblem, Ω[(t, markov_state)], P) do ω
               set_normalized_rhs(con_1, ω.Pd1)
               set_normalized_rhs(con_2, ω.Pd2)
           end
           return
       end
A policy graph with 5 nodes.
Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
julia> node = model[(2, 1)]
Node (2, 1)
# State variables : 1
# Children : 2
# Noise terms : 2
julia> node.subproblem
A JuMP Model
Minimization problem with:
Variables: 13
Objective function type: AffExpr
`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 4 constraints
`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 1 constraint
`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 6 constraints
`VariableRef`-in-`MathOptInterface.LessThan{Float64}`: 3 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: con_1, con_2, pC1, pC2, pCmax, pE1, pE2, u
julia> print(node.subproblem)
Min 0
Subject to
con_1 : pE1 + pC1 = 0
con_2 : pE2 + pC2 = 0
u[1] + u[2] + u[3] + u[4] + u[5] + u[6] = 1
pCmax_out - 100 u[2] - 200 u[3] - 300 u[4] - 400 u[5] - 500 u[6] = 0
-pCmax_in - pCmax_out + pC1 ≤ 0
pCmax_out ≥ 0
pE1 ≥ 0
pE2 ≥ 0
pC1 ≥ 0
pC2 ≥ 0
_[13] ≥ 0
pCmax_out ≤ 500
pE1 ≤ 400
pE2 ≤ 400
julia> SDDP.parameterize(node, (Pd1 = 212, Pd2 = 214))
julia> print(node.subproblem)
Min 35 pE1 + 35 pE2 + 25 pC1 + 25 pC2 + 140000 pCmax_out + _[13]
Subject to
con_1 : pE1 + pC1 = 212
con_2 : pE2 + pC2 = 214
u[1] + u[2] + u[3] + u[4] + u[5] + u[6] = 1
pCmax_out - 100 u[2] - 200 u[3] - 300 u[4] - 400 u[5] - 500 u[6] = 0
-pCmax_in - pCmax_out + pC1 ≤ 0
pCmax_out ≥ 0
pE1 ≥ 0
pE2 ≥ 0
pC1 ≥ 0
pC2 ≥ 0
_[13] ≥ 0
pCmax_out ≤ 500
pE1 ≤ 400
pE2 ≤ 400
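If the goal is to get every subproblem into a file, one option is to loop over the nodes, fix each noise realization with SDDP.parameterize as above, and write out the resulting JuMP model. The following is only a sketch: model.nodes, node.noise_terms, and noise.term are internal SDDP.jl fields that may change between versions, and the filenames are illustrative.

julia> using JuMP

julia> for (name, node) in model.nodes
           for (i, noise) in enumerate(node.noise_terms)
               # Fix the random variables to this realization, exactly as the
               # manual SDDP.parameterize(node, ...) call above does.
               SDDP.parameterize(node, noise.term)
               # Write the parameterized subproblem; the format is inferred
               # from the file extension.
               write_to_file(node.subproblem, "node_$(name[1])_$(name[2])_noise_$(i).lp")
           end
       end

You can swap the .lp extension for .mps if your tooling prefers MPS files.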
Any update? If not, I will close this issue.
Closing as stale. Please re-open if you have any follow-up :smile:
Apologies, I had to switch tasks and left this one for a while. Thank you for your thorough follow-up; I will give your snippet a try.
I am modeling a generation expansion planning problem with a Markovian policy graph. I have several random variables and am having trouble writing the subproblems to a file. Specifically, how can I parameterize the model below so that I can write it to a file? Are there other ways to debug and inspect the training process? On this page we see a text log of the iterations and the nodes the model visits; is there a way to produce that log for a Markovian policy graph model?
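Something like the following is what I have in mind (a sketch: the iteration limit is arbitrary, and I am assuming SDDP.train's log_file keyword behaves the same for Markovian policy graphs):

julia> SDDP.train(model; iteration_limit = 10, log_file = "training.log")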
Said model: