Closed: daxida closed this issue 2 months ago.
See https://sddp.dev/stable/guides/add_integrality/
We do not necessarily converge to an optimal solution if there are integer variables.
You could try:
SDDP.train(model; iteration_limit = 10, duality_handler = SDDP.LagrangianDuality())
# or
SDDP.train(model; iteration_limit = 10, duality_handler = SDDP.StrengthenedConicDuality())
Thank you for your answer.
The Lagrangian returns 1.28 and the Conic 1.2, so no luck in that regard.
I will try some other approach and see if I can get rid of the integer variables to model this problem.
The Lagrangian returns 1.28 and the Conic 1.2, so no luck in that regard.
This makes sense because of the big-M formulation (even if the M is quite tight).
It has been a while since I wrote https://sddp.dev/stable/guides/add_integrality, and I think it stands up well.
For small problems like this, you're right. Just use JuMP and get a better answer. No debate there. But:
SDDP.jl cannot guarantee that it will find a globally optimal policy when some of the variables are discrete. However, in most cases we find that it can still find an integer feasible policy that performs well in simulation.
Moreover, when the number of nodes in the graph is large, or there is uncertainty, we are not aware of another algorithm that can claim to find a globally optimal policy.
In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.
If your problem is too big to solve as a monolithic JuMP problem, give SDDP a try. If it works, great. If it doesn't, try a different heuristic. You can't hope to solve the problem to global optimality no matter what algorithm you use.
Closing because this seems resolved. It is not a bug in SDDP.jl, and I think the docs summarize the issue well enough.
Please comment if you have more questions and I can re-open.
I'm sorry I could not find the time to answer earlier.
It was indeed not a bug. Maybe I should have posted this somewhere else.
I have not yet implemented the fully fledged model in SDDP, mostly because I am worried about performance, since the equivalent JuMP model can already take ~20 minutes for some scenarios, and I read this reply of yours in another user-question issue, #775:
As a rule of thumb, you should expect the SDDP model with uncertainty to take 10^3 - 10^5 times longer to solve than the deterministic model. If the deterministic model solves in less than a second, the SDDP model will still likely take hours to solve. If it takes longer, stop.
But now I'm not sure how to understand this part either:
If your problem is too big to solve as a monolithic JuMP problem, give SDDP a try.
Do you mean by monolithic JuMP problem the model when fed the (in my case up to ~8 or so, weekly) scenarios to solve simultaneously?
And lastly:
For small problems like this...
I'm not sure how to assess the effective size of the model for our conversation about performance. To be concrete, I will post the current complete version in JuMP here. Note that T can be either 168 (a week), 168 x 2, or 168 x 4, but the last two are quite slow at the moment (a scenario is passed here through ipt).
By monolithic, do you mean feeding multiple (let's say ~5) ipt instances to solve simultaneously?
Thank you again for your time.
model = Model(HiGHS.Optimizer)
set_silent(model)
T = length(ipt.demand)
solar = ipt.solar
demand = ipt.demand
added_demand = 0.1
demand .+= added_demand
grid = ipt.outage
diesel_power = ipt.info.diesel_power
ccoe = ipt.info.ccoe
dcoe = ipt.info.dcoe
b_initial = ipt.info.b_initial
b_min = ipt.info.b_min
capacity = ipt.info.capacity
grid_power = ipt.info.grid_power
coef = div(60, granularity_data)
w1 = ipt.info.w1
w2 = ipt.info.w2 * coef
w3 = ipt.info.w3 * coef
w4 = ipt.info.w4 * coef
# M parameter for the BigM technique
max_possible_produced = maximum([grid[t] + solar[t] - demand[t] for t in 1:T])
pessimist = diesel_power + max_possible_produced
M = pessimist # ~ 30
# Take the smallest M bigger than any possible b_t
M2 = 2
@variable(model, b_min <= b[1:T+1] <= 1)
fix(b[1], b_initial; force = true)
@variable(model, u_diesel[1:T], Bin)
@variable(model, u_grid[1:T], Bin)
@variable(model, big_m_helper[1:T], Bin)
@variable(model, max_delta_zero[1:T])
@variable(model, big_m_helper2[1:T], Bin)
@variable(model, min_trans_one[1:T])
# Track the amount of times we started the diesel generator
@variable(model, diesel_starts[1:T], Bin)
@constraint(model, [t in 2:T], diesel_starts[t] >= u_diesel[t] - u_diesel[t-1])
@constraint(model, diesel_starts[1] >= u_diesel[1])
# Track the maximum amount of time we used the diesel generator
# Uses McCormick envelope constraints
@variable(model, run_time[1:T] >= 0)
@variable(model, max_run_time[1:T] >= 0)
@constraint(model, run_time[1] == 0)
@constraint(model, max_run_time[1] == run_time[1])
@variable(model, bilin_prod[1:T] >= 0) # This is run_time * u_diesel
M3 = 25
xL = 0
xU = 1
yL = 0
yU = M3 # <= max possible of run_time
for t in 1:T
@constraints model begin
bilin_prod[t] >= xL * run_time[t] + yL * u_diesel[t] - xL * yL
bilin_prod[t] >= xU * run_time[t] + yU * u_diesel[t] - xU * yU
bilin_prod[t] <= xU * run_time[t] + yL * u_diesel[t] - xU * yL
bilin_prod[t] <= xL * run_time[t] + yU * u_diesel[t] - xL * yU
end
end
for t in 2:T
@constraints(model, begin
run_time[t] == bilin_prod[t-1] + u_diesel[t-1]
max_run_time[t] >= run_time[t]
max_run_time[t] >= max_run_time[t-1]
end)
end
# -------------- Heuristics
# NOTE: There is always a dip in the battery at the start, so only
# enforce a higher b_min from t = 15 onwards
@constraint(model, [t in 15:T+1], b[t] >= 0.3)
# NOTE: In solutions without this constraint it seems to be satisfied anyway
for t in 1:T
# Never use the grid and the diesel at the same time
@constraint(model, u_grid[t] + u_diesel[t] <= 1)
# Always use grid if possible
fix(u_grid[t], 1; force = true)
end
# -------------- Heuristics for u_diesel?
for t in 1:T
if grid[t] == 0
fix(u_grid[t], 0; force = true) # force to not use it if outage
end
delta = grid_power * u_grid[t] + diesel_power * u_diesel[t] + solar[t] - demand[t]
# Big M technique to get max(delta, 0)
@constraints(model, begin
max_delta_zero[t] >= 0
max_delta_zero[t] >= delta
max_delta_zero[t] <= M * big_m_helper[t]
max_delta_zero[t] <= delta + M * (1 - big_m_helper[t])
end)
# coe = delta > 0 ? ccoe * delta : dcoe * delta
coe = dcoe * delta - (dcoe - ccoe) * max_delta_zero[t]
# NOTE: The battery can not go over 100% => take the min(b_(t+1), 1)
trans = b[t] + coe / (granularity_data * capacity) # transition
# Big M technique to get min(b_t, 1)
@constraints(model, begin
min_trans_one[t] <= 1
min_trans_one[t] <= trans
min_trans_one[t] >= 1 - M2 * big_m_helper2[t]
min_trans_one[t] >= trans - M2 * (1 - big_m_helper2[t])
end)
@constraint(model, b[t+1] == min_trans_one[t])
end
@objective(
model,
Min,
w1 * sum(diesel_starts[t] for t in 1:T) +
w2 * sum(u_diesel[t] for t in 1:T) +
w3 * max_run_time[T] +
w4 * sum(u_grid[t] for t in 1:T)
)
optimize!(model)
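For reference, the big-M linearization of max(delta, 0) used in the model can be checked in isolation. The following is a minimal sketch (not part of the original model; the helper name and the choice of M are illustrative) that fixes delta to a constant and verifies that the four constraints pin the auxiliary variable to max(delta, 0):

```julia
using JuMP, HiGHS

# Verify that the big-M constraints recover y = max(delta, 0) for a
# fixed delta. M must be an upper bound on |delta|.
function check_big_m_max(delta; M = 30)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, y)
    @variable(model, z, Bin)  # plays the role of big_m_helper
    @constraints(model, begin
        y >= 0
        y >= delta
        y <= M * z
        y <= delta + M * (1 - z)
    end)
    # Any objective works: the four constraints already pin y down.
    @objective(model, Min, 0)
    optimize!(model)
    return value(y)
end

check_big_m_max(3.5)   # expect ~3.5
check_big_m_max(-2.0)  # expect ~0.0
```

The min(trans, 1) constraints in the model are the mirror image of this pattern, with M2 bounding the spread of the transition expression.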
Do you mean by monolithic JuMP problem,
I mean a single scenario.
Well I guess the only thing left is for me to actually write the code and see.
I think you can close this now, this time for good, and thank you again!
Well I guess the only thing left is for me to actually write the code and see.
Yip :smile:
thank you again
No problem
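One aside on the McCormick envelope used in the model above: when one factor is binary (u_diesel here) and the other is bounded below by 0, the four McCormick inequalities are exact, not just a relaxation, so bilin_prod is forced to equal run_time * u_diesel. A small sketch (hypothetical helper, same bounds xL = 0, xU = 1, yL = 0, yU = M as the model) checking this by enumeration:

```julia
# For x in {0, 1} and y in [0, M], the McCormick envelope of w = x * y
# collapses to a single point, i.e. it is exact.
function mccormick_bounds(x, y; M = 25)
    lower = max(0.0, y + M * x - M)  # the two >= inequalities
    upper = min(y, M * x)            # the two <= inequalities
    return lower, upper
end

for x in (0, 1), y in 0.0:5.0:25.0
    lo, hi = mccormick_bounds(x, y)
    @assert lo ≈ x * y && hi ≈ x * y
end
```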
Hello, I'm not sure if this is the right place to ask, but I've been looking at this for quite some time with no success.
This is about a very simple problem defined in the code below. What I have trouble understanding is how I am getting 1.2 as the bound for an objective function that is supposed to be the sum of a binary variable over time (I expected 2, and definitely not a fractional value).
The code contains the context, the deterministic JuMP model, and the corresponding deterministic SDDP model. I will also add the two plots that the code produces, which also hint to me that the bound should have been 2, based on the generator curve. I also added the program output at the end.
I'm not very familiar with the library yet, but increasing the number of iterations still produces a bound of 1.2.
And the two plots
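A fractional bound like 1.2 is consistent with cuts built from a relaxation of the integer subproblems, which is why the maintainer's answer above points to the duality handlers. As a toy illustration (unrelated to the thread's actual model), minimizing a sum of binaries can have an LP relaxation bound that is fractional even though the MIP optimum is integer:

```julia
using JuMP, HiGHS

# MIP: the coverage constraint forces both binaries to 1, so the
# optimum is 2. The LP relaxation bound is only 1.2.
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:2], Bin)
@constraint(model, x[1] + x[2] >= 1.2)
@objective(model, Min, x[1] + x[2])

optimize!(model)
objective_value(model)  # MIP optimum: 2.0

undo = relax_integrality(model)  # JuMP's built-in LP relaxation
optimize!(model)
objective_value(model)  # relaxation bound: 1.2
```

This is the same kind of gap the thread describes: a bound of 1.2 against a best policy of 2.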