**Closed** — jdkworld closed this issue 1 year ago
SDDP.jl is not guaranteed to find globally optimal solutions to problems with discrete variables. I guess I should make this clearer in the docs.
Okay, thanks for your answer. Are there any ways to bring it at least closer to an optimal solution? Are there ways to convert it into a linear program while still representing the discrete power-massflow relationship?
> Are there any ways to bring it at least closer to an optimal solution?
The short answer is no; this is a fundamental limitation of the SDDP method. (See the long answer below.)
Does your real problem have stochasticity? If it's just this deterministic problem, then SDDP is the wrong tool for the job. Solve it as a single MILP instead.
> Are there ways to convert it into a Linear Program while still representing the discrete power-massflow relationship?
No. Linear programs cannot represent non-convex or discontinuous functions.
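For intuition, here is a small illustrative sketch (not code from this thread; the values 16 kW standby, 180 kW maximum, and 2.9 maximum mass flow are taken from the model discussed later) showing why the electrolyser's power curve is outside what an LP can express:

```python
# Illustrative sketch only: the electrolyser's power draw is discontinuous
# at mdot_el = 0 (off = 0 kW, but any positive flow needs >= 16 kW standby),
# so no linear program can represent it without a binary on/off variable.
def electrolyser_power(mdot_el):
    if mdot_el == 0:
        return 0.0  # off state draws nothing
    # standby power plus a linear ramp up to 180 kW at full flow (2.9)
    return 16 + (180 - 16) / 2.9 * mdot_el

print(electrolyser_power(0.0))    # off: 0 kW
print(electrolyser_power(2.9))    # full load: ~180 kW
print(electrolyser_power(0.001))  # just above 16 kW: the discontinuity
```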
**The long answer:** I see you mentioned SDDiP. I find the discussion around SDDiP in the community unhelpful because it conflates two things.
First, how to compute dual variables. SDDP requires computing a subgradient of the objective with respect to the incoming state variable. The easiest option is to relax the MIP into an LP and then use linear programming duality, but you could also use Lagrangian duality. In fact, it doesn't even have to be a "dual": any algorithm that finds a valid subgradient would work. This is what the `duality_handler` argument in SDDP.jl controls.
Second, convergence. The SDDiP paper says that if all of your state variables are binary and you use Lagrangian duality to compute a subgradient, then you will converge to an optimal policy.
In your case, you have a continuous state variable, so convergence is not guaranteed, and "SDDiP" is irrelevant. You could, however, "binarize" the state variable:

```julia
@variable(sp, m_st_bin[1:90], Bin, SDDP.State, initial_value = 0)
@constraint(sp, sum(m_st_bin[i].out for i in 1:90) <= 1)
@variable(sp, m_st)
@constraint(sp, m_st == sum(i * m_st_bin[i].out for i in 1:90))
```
where `m_st` is now discretized and all state variables are binary. That means you would converge to a globally optimal policy of the new problem, but that is not the same as converging to a globally optimal policy of the continuous problem, and with 90 state variables the performance would be terrible.
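A language-agnostic sketch of what that unary encoding does (hypothetical `encode`/`decode` helper names; the real model expresses this through the constraints above):

```python
# Hypothetical sketch of the unary "binarization": an integer storage level
# in {0, ..., 90} becomes 90 binary indicators with at most one set, and is
# recovered as sum(i * bin_i), matching the two constraints above.
def encode(m_st, n=90):
    return [1 if i == m_st else 0 for i in range(1, n + 1)]

def decode(bins):
    return sum(i * b for i, b in enumerate(bins, start=1))

bins = encode(37)
assert sum(bins) <= 1          # mirrors the sum(...) <= 1 constraint
assert decode(bins) == 37      # round trip recovers the level
assert decode(encode(0)) == 0  # level 0 is "all indicators off"
```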
I think you can also do a better job with the MIP formulation. I haven't tested it, so I might have made a mistake, but isn't your model equivalent to something simpler, like:
```julia
model = SDDP.LinearPolicyGraph(
    stages = 24,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= m_st <= 90, SDDP.State, initial_value = 1)
    @variables(sp, begin
        u_mode, Bin
        0 <= mdot_el <= 2.9
        mdot_magic >= 0
        P_grid_imp >= 0
        P_grid_exp >= 0
    end)
    @expressions(sp, begin
        P_el, (180 - 16) / 2.9 * mdot_el + 16 * u_mode
    end)
    @constraints(sp, begin
        mdot_el <= 2.9 * u_mode
        m_st.out == m_st.in + mdot_el + mdot_magic - mdot_exp[t]
        (P_grid_imp - P_grid_exp) + P_pv[t] - P_el == 0
    end)
    @stageobjective(
        sp,
        C_dayahead[t] * (P_grid_imp - P_grid_exp) +
        C_fixedgrid[t] * P_grid_imp +
        100 * mdot_magic,
    )
end
```
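As a quick sanity check on the `P_el` expression in that formulation (a plain numeric check, not SDDP.jl code):

```python
# Numeric check of P_el = (180 - 16) / 2.9 * mdot_el + 16 * u_mode,
# together with the coupling constraint mdot_el <= 2.9 * u_mode.
slope = (180 - 16) / 2.9

def P_el(mdot_el, u_mode):
    return slope * mdot_el + 16 * u_mode

assert P_el(0.0, 0) == 0.0               # off: zero draw
assert P_el(0.0, 1) == 16.0              # standby: 16 kW
assert abs(P_el(2.9, 1) - 180.0) < 1e-9  # full load: 180 kW
# With mdot_el <= 2.9 * u_mode, u_mode = 0 forces mdot_el = 0, so the
# off state and the 16-180 kW operating range are both captured.
```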
A few comments on the code:

- `P_el` and `mdot_el`: one is a linear combination of the other.
- The `(180 - 16) / 2.8` constant doesn't match the upper bound of `2.9` for `mdot_el`.
- The `Mm` and `Mp` constants aren't tight for the upper bounds.

Sorry for the long answer. I hope it points you in the right direction :smile:
Thank you for your long answer! (No need to apologize for such a helpful reply :))
You might find it interesting to know the overall purpose of my script: I am writing a model predictive control (MPC) for this system. I want to take into account longer-term energy storage (10 days, up to seasonal energy (H2) storage). The detailed MILP problem for the MPC can solve a 7-day problem at hourly resolution in a reasonable timeframe. I use SDDP to take everything beyond the 7-day horizon into account. Currently, I do this by extracting the value functions from the SDDP algorithm and using these as a terminal cost in my MPC. The overall goal is to operate the whole system at the lowest cost possible. Because future weather and electricity prices cannot be predicted accurately, it is a stochastic problem. Since I am looking at seasonal energy storage, the horizon of the SDDP problem is 1 to 1.5 years in hourly timesteps. This has worked well so far with the LP equivalent (well, not really equivalent, since that is not possible, but it comes close).
To follow up on your answer:
The `2.9` upper bound for `mdot_el` was a typo. Binarizing `m_st` into 90 state variables does not seem like a good option to me, especially since I would like to experiment with bigger storage sizes in the end. Thank you so much for your help!
Interesting, I did not realize that all state variables have to be binary to converge to an optimal policy using Lagrangian duality.
:smile: yes, many papers brush this aside. And they use LP duality with continuous state variables while claiming to do "SDDiP."
The MIP formulation does not reach the optimal solution (visible from inspection of the results) and still gets stuck at some point. I think the best approach for me now is to keep experimenting with this as I add stochasticity.
Yeah. If you have integrality, then you should view SDDP as a heuristic that can incorporate uncertainty.
So focus not on whether SDDP.jl finds an optimal policy, but on whether the terminal value function improves your MPC. Since you'll never actually operate the SDDP.jl policy, it doesn't even matter whether it is feasible.
I've updated the documentation in #582 and marked this as closed.
Please re-open (or open a new issue) if you have further questions.
Hello,
This is more of a question so please let me know if there is a better place to ask this question.
I am using SDDP.jl to optimize an energy system to operate with the lowest possible cost. The energy system includes photovoltaics, grid power, electrolyser (device that makes hydrogen from electricity), hydrogen storage and hydrogen fueling station for cars. In an optimal solution, the hydrogen storage is filled during hours with PV generation or low electricity prices. In this example, I want to optimize for 24 hours.
The electrolyser is modeled with a binary variable indicating the on/off state, because there is a big difference in power consumption between the off state (0 kW) and the standby/on state (16-180 kW). See the image. I then use the big-M method to turn the binary variable into meaningful constraints.
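For readers unfamiliar with it, the effect of the big-M constant can be sketched numerically (an illustrative check, with the max flow of 2.9 assumed from this thread): a tight M gives a much stronger LP relaxation than a loose one.

```python
# Illustrative: the big-M pattern mdot_el <= M * u_mode couples the flow to
# the binary on/off state. In the LP relaxation u_mode can be fractional,
# and a loose M lets it be nearly zero, weakening the MIP bound.
def min_relaxed_u(mdot_el, M):
    return mdot_el / M  # smallest u_mode the relaxed constraint allows

assert min_relaxed_u(2.9, 2.9) == 1.0                    # tight M: u forced to 1
assert abs(min_relaxed_u(2.9, 1000.0) - 0.0029) < 1e-12  # loose M: u nearly free
```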
However, SDDP.jl is not able to find an optimal solution: after only 1-3 iterations it gets stuck at a non-optimal solution. I have tried several duality handlers, but nothing worked. I tried SDDiP as well, but apparently this does not exist anymore? Any LP equivalents of the model (constant electrolyser efficiency) work as expected.
What can I do to make it work and find an optimal solution? Thanks in advance for the help.
Here is my code: