odow / SDDP.jl

Stochastic Dual Dynamic Programming in Julia
https://sddp.dev

Model with binary variable does not find the optimal solution. #575

Closed jdkworld closed 1 year ago

jdkworld commented 1 year ago

Hello,

This is more of a question, so please let me know if there is a better place to ask it.

I am using SDDP.jl to optimize an energy system to operate at the lowest possible cost. The energy system includes photovoltaics, grid power, an electrolyser (a device that makes hydrogen from electricity), hydrogen storage, and a hydrogen fueling station for cars. In an optimal solution, the hydrogen storage is filled during hours with PV generation or low electricity prices. In this example, I want to optimize over 24 hours.

The electrolyser is modeled with a binary variable indicating the on/off state, because there is a big difference in power consumption between the off state (0 kW) and the standby/on state (16-180 kW); see the attached image. I then use the big-M method to turn the binary variable into meaningful constraints.

However, SDDP.jl is not able to find an optimal solution: after only 1-3 iterations it gets stuck at a non-optimal solution. I have tried several duality handlers, but nothing worked. I also tried SDDiP, but apparently that no longer exists? Any LP equivalents of the model (with constant electrolyser efficiency) work as expected.

What can I do to make it work and find an optimal solution? Thanks in advance for the help.

Here is my code:

using HiGHS
using SDDP
using Statistics
using GLPK
using CSV
using DataFrames
using Tables

P_pv = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.96244, 3.3567996, 10.802293666666666, 12.274539, 13.062315666666667, 3.3319430666666667, 0.11330176666666666, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
C_dayahead = [0.05026, 0.04874, 0.047240000000000004, 0.036289999999999996, 0.03009, 0.027309999999999997, 0.02853, 0.02684, 0.0287, 0.0357, 0.03753, 0.03836, 0.04395, 0.04307, 0.03882, 0.04802, 0.04893, 0.05099, 0.05279, 0.055170000000000004, 0.055049999999999995, 0.0507, 0.055119999999999995, 0.05511]
C_fixedgrid = [0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284, 0.1284]
mdot_exp = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3, 0.0, 0.0, 0.0, 0.0, 5, 0.0, 2, 0.0, 0.0]

Mm = 3    # big-M for the mass flow constraints
Mp = 200  # big-M for the power constraints

model = SDDP.LinearPolicyGraph(
    stages = 24,
    sense = :Min,
    lower_bound = 0,
    optimizer = GLPK.Optimizer,
) do subproblem, t
    # State variables
    @variable(subproblem, 0 <= m_st <= 90, SDDP.State, initial_value = 1)
    # Binary variables
    @variable(subproblem, u_mode, Bin)
    # Other variables
    @variables(subproblem, begin
        0.0 <= mdot_el <= 2.9
        0.0 <= mdot_el_on <= 2.9
        0.0 <= mdot_magic <= 30.0 
        16.0 <= P_el_on <= 180
        0.0 <= P_el <= 180
        P_grid_imp >= 0
        P_grid_exp <= 0
        P_grid
    end)
    # special constraints
    # @constraint(subproblem, !u_mode => {mdot_el_on == 0})
    # @constraint(subproblem, u_mode => {mdot_el_on == 3.04710769 * u_sp - 0.21936407})
    # Transition function and constraints
    @constraints(subproblem, begin
        P_el_on == (180-16)/2.8 * mdot_el_on + 16

        P_el <= P_el_on
        P_el <= Mp*u_mode
        P_el >= P_el_on - Mp * (1 - u_mode)

        mdot_el <= mdot_el_on
        mdot_el <= Mm * u_mode
        mdot_el >= mdot_el_on - Mm * (1 - u_mode)

        m_st.out == m_st.in + mdot_el  + mdot_magic - mdot_exp[t]
        P_grid + P_pv[t] - P_el == 0
        P_grid == P_grid_imp + P_grid_exp
    end)
    # Stage-objective
    @stageobjective(subproblem, C_dayahead[t] * P_grid + C_fixedgrid[t] * P_grid_imp + 100 * mdot_magic)
end

SDDP.train(model; iteration_limit = 10)
# , duality_handler = SDDP.BanditDuality()
# , duality_handler = SDDP.ContinuousConicDuality()
# , duality_handler = SDDP.LagrangianDuality()
# , duality_handler = SDDP.StrengthenedConicDuality()

simulations = SDDP.simulate(model, 1, [:m_st, :P_el, :mdot_el, :mdot_magic, :P_grid, :P_grid_imp, :P_grid_exp, :u_mode])

sim = 1
results = vcat(DataFrame.(simulations[sim])...)
results.m_st_in = [stage[:m_st].in for stage in simulations[sim]]
results.m_st_out = [stage[:m_st].out for stage in simulations[sim]]
#results.timestamp = optdata.timestamp
results.dayahead = C_dayahead
results.mdot_exp = mdot_exp
results.P_pv = P_pv
results.noise_term = replace(results.noise_term, nothing => missing)
results.objective_state = replace(results.objective_state, nothing => missing)
#CSV.write("D:\\filename.csv", results)


odow commented 1 year ago

SDDP.jl is not guaranteed to find globally optimal solutions to problems with discrete variables. I guess I should make this clearer in the docs.

jdkworld commented 1 year ago

Okay, thanks for your answer. Are there any ways to bring it at least closer to an optimal solution? Are there ways to convert it into a Linear Program while still representing the discrete power-mass flow relationship?

odow commented 1 year ago

Are there any ways to bring it at least closer to an optimal solution?

The short answer is no; this is a fundamental limitation of the SDDP method. (See the long answer below.)

Does your real problem have stochasticity? If it's just this deterministic problem, then SDDP is the wrong tool for the job. Solve it as a single MILP instead.
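
For concreteness, here is an untested sketch (not from this thread) of what that single MILP could look like in plain JuMP, reusing the data arrays defined above and the simplified electrolyser formulation shown further down:

using JuMP, HiGHS

det = Model(HiGHS.Optimizer)
@variable(det, 0 <= m_st[0:24] <= 90)      # storage level; m_st[0] is the initial fill
@variable(det, u_mode[1:24], Bin)          # electrolyser on/off
@variable(det, 0 <= mdot_el[1:24] <= 2.9)  # hydrogen production
@variable(det, mdot_magic[1:24] >= 0)      # penalized slack supply
@variable(det, P_grid_imp[1:24] >= 0)
@variable(det, P_grid_exp[1:24] >= 0)
@expression(det, P_el[t in 1:24], (180 - 16) / 2.9 * mdot_el[t] + 16 * u_mode[t])
@constraint(det, m_st[0] == 1)
@constraint(det, [t in 1:24], mdot_el[t] <= 2.9 * u_mode[t])
@constraint(det, [t in 1:24], m_st[t] == m_st[t-1] + mdot_el[t] + mdot_magic[t] - mdot_exp[t])
@constraint(det, [t in 1:24], P_grid_imp[t] - P_grid_exp[t] + P_pv[t] - P_el[t] == 0)
@objective(det, Min, sum(
    C_dayahead[t] * (P_grid_imp[t] - P_grid_exp[t]) +
    C_fixedgrid[t] * P_grid_imp[t] +
    100 * mdot_magic[t] for t in 1:24
))
optimize!(det)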

Are there ways to convert it into a Linear Program while still representing the discrete power-massflow relationship?

No. Linear programs cannot represent non-convex or discontinuous functions, and the jump from 0 kW (off) to 16 kW (standby) when the electrolyser switches on is exactly such a discontinuity.

The long answer: I see you mentioned SDDiP. I find the discussion around SDDiP in the community unhelpful, because it conflates two things.

First, how to compute dual variables. SDDP requires a subgradient of the objective with respect to the incoming state variable. The easiest option is to relax the MIP into an LP and then use linear programming duality, but you could also use Lagrangian duality. In fact, it doesn't have to be a "dual": any algorithm that finds a valid subgradient will work. This is what the duality_handler argument in SDDP.jl controls.
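
For example, the handler is just a keyword argument to SDDP.train (these are the same handlers listed in the commented-out code above):

# Sketch: choose how subgradients/duals are computed during training.
SDDP.train(
    model;
    iteration_limit = 10,
    duality_handler = SDDP.LagrangianDuality(),  # or ContinuousConicDuality(), StrengthenedConicDuality(), BanditDuality()
)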

Second, convergence. The SDDiP paper says that if all of your state variables are binary and you use Lagrangian duality to compute a subgradient, then you will converge to an optimal policy.

In your case, you have a continuous state variable, so convergence is not guaranteed, and "SDDiP" is irrelevant. Now, you could "binarize" the state variable:

    # Unary encoding of the storage level: 90 binary state variables replace
    # the continuous state. Note that state variables are accessed via
    # .in / .out in SDDP.jl.
    @variable(sp, m_st_bin[1:90], Bin, SDDP.State, initial_value = 0)
    @constraint(sp, sum(m_st_bin[i].out for i in 1:90) <= 1)
    @variable(sp, m_st)
    @constraint(sp, m_st == sum(i * m_st_bin[i].out for i in 1:90))

This discretizes m_st so that you have only binary state variables. You would then converge to a globally optimal policy of the new problem, but that is not the same as a globally optimal policy of the continuous problem, and you'll have 90 state variables, which means performance would be terrible.

I think you can also do a better job with the MIP formulation. I haven't tested, so I might have made a mistake, but isn't your model equivalent to something simpler like:

model = SDDP.LinearPolicyGraph(
    stages = 24,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= m_st <= 90, SDDP.State, initial_value = 1)
    @variables(sp, begin
        u_mode, Bin
        0 <= mdot_el <= 2.9
        mdot_magic >= 0
        P_grid_imp >= 0
        P_grid_exp >= 0
    end)
    @expressions(sp, begin
        # Power-mass flow line, with the 16 kW standby load tied to u_mode
        P_el, (180 - 16) / 2.9 * mdot_el + 16 * u_mode
    end)
    @constraints(sp, begin
        mdot_el <= 2.9 * u_mode  # production only when the electrolyser is on
        m_st.out == m_st.in + mdot_el + mdot_magic - mdot_exp[t]
        (P_grid_imp - P_grid_exp) + P_pv[t] - P_el == 0
    end)
    @stageobjective(
        sp, 
        C_dayahead[t] * (P_grid_imp - P_grid_exp) + 
        C_fixedgrid[t] * P_grid_imp + 
        100 * mdot_magic,
    )
end

Sorry for the long answer. I hope it points you in the right direction :smile:

jdkworld commented 1 year ago

Thank you for your long answer! (No need to apologize for such a helpful reply :))

You might find it interesting to know the overall purpose of my script: I am writing a model predictive controller (MPC) for this system. I want to take longer-term energy storage into account (from 10 days up to seasonal hydrogen (H2) storage). The detailed MILP for the MPC can solve a 7-day problem at hourly resolution in a reasonable timeframe; I use SDDP to take everything beyond the 7-day horizon into account. Currently, I do this by extracting the value functions from the SDDP algorithm and using them as a terminal cost in my MPC. The overall goal is to operate the whole system at the lowest possible cost. Because future weather and electricity prices cannot be predicted exactly, it is a stochastic problem. Since I am looking at seasonal energy storage, the horizon of the SDDP problem is 1-1.5 years in hourly timesteps. This has worked well so far with the LP equivalent (well, not truly equivalent, since that is not possible, but it comes close).
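
Roughly, one way to extract such a terminal value function in SDDP.jl looks like this (a sketch; the stage index and state value are illustrative, not from my actual setup):

# Sketch: query the trained cost-to-go approximation at a given stage/state.
V = SDDP.ValueFunction(model; node = 8)      # stage at the end of the MPC horizon (illustrative)
cost, duals = SDDP.evaluate(V; m_st = 10.0)  # approximate cost-to-go and its subgradient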

To follow up on your answer:

  1. My real problem does have stochasticity; I am still working on including that.
  2. Sorry, the 2.9 upper bound for mdot_el was a typo.
  3. Interesting, I did not realize that all state variables have to be binary to converge to an optimal policy using Lagrangian duality. Indeed, discretizing m_st into 90 state variables does not seem like a good option to me, especially since I would like to experiment with bigger storage sizes in the end.
  4. I tested your MIP formulation. I was not aware that SDDP could handle MIPs; SDDP is really new to me and I am still trying to understand everything. The MIP formulation does not reach the optimal solution (visible from visual inspection of the results) and still gets stuck at some point. However, it performs MUCH better than my original MILP formulation! I think the best approach for me now is to keep experimenting with this as I add stochasticity.

Thank you so much for your help!

odow commented 1 year ago

Interesting, I did not realize that all state variables have to be binary to converge to an optimal policy using Lagrangian duality

:smile: Yes, many papers brush this aside, and they use LP duality with continuous state variables while claiming to do "SDDiP."

The MIP formulation does not reach the optimal solution (visible from visual inspection of the results) and still gets stuck at some point. I think the best approach for me now is to keep experimenting with this as I add stochasticity. I do this by extracting the value functions from the SDDP algorithm and using these as a terminal cost in my MPC

Yeah. If you have integrality, then you should view SDDP as a heuristic that can incorporate uncertainty.

So focus not on whether SDDP.jl finds an optimal policy, but on whether the terminal value function improves your MPC. Since you'll never actually operate the SDDP.jl policy, it doesn't even matter whether that policy is feasible.

odow commented 1 year ago

I've updated the documentation in #582 and marked this as closed.

Please re-open (or open a new issue) if you have further questions.