spine-tools / SpineOpt.jl

A highly adaptable modelling framework for multi-energy systems
https://www.tools-for-energy-system-modelling.org/
GNU Lesser General Public License v3.0

Decomposition #183

Closed: spine-o-bot closed this issue 1 year ago

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 19, 2020, 10:09

The purpose of this issue is to discuss our approach to decomposition in SpineOpt.

Introduction:

Decomposition approaches take advantage of certain problem structures to separate them into multiple related problems that are each more easily solved. Decomposition also allows us to do the inverse, which is to combine independent problems into a single problem, where each can be solved separately but with communication between them (e.g. investment and operations problems).

Decomposition thus allows us to do a number of things:

Objective

Employ one or more decomposition techniques in SpineOpt to:

Example use of decomposition

Consider the following example: a typical long-term investments model is run, and its investment decisions feed into an operations model with no feedback. This is not ideal, because things happen in the operations model that affect the value of technologies and hence their investment value. It is not possible to consider these details in the investments model, because they would make the problem too large.

One approach to tackling this problem is Benders decomposition. In this example, it can be viewed in two ways:

  1. We have a single combined investments/operations problem that is decomposed into a related investment problem (the master problem) and operations problem(s) (the sub problem(s)).
  2. We have separate investments and operations problems that we would like to link together to allow the results of each to influence the other.

In general, Benders decomposition works by taking advantage of the structure of a problem to decompose it into a related master problem and sub problems. This works when there are variables that, if treated as constants, would allow the full problem to be solved as a set of independent sub problems. If we consider a combined investments/operations problem, fixing the investment variables allows the operational sub problems to be solved separately.

The influence of these fixed variables on the overall problem objective is communicated back to the master problem by means of constraint marginal values, which tell the master problem the marginal change in objective cost resulting from a change in the master problem variables (in this case, the investment decisions).

Take the units_available constraint in SpineOpt:

units_available(u, t) <= number_of_units(u, t) + units_invested_available(u, t)

Here, units_invested_available is a master problem variable that is fixed in the operational sub problems. The marginal value of the units_available constraint represents the change in objective function cost if another unit of this type were added in this timestep. This information is communicated back to the master problem in the form of Benders cuts, which allow the investment variables to be re-optimized with new information about their impact on operating costs. This allows detailed operations to influence investment decisions.
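
To make this concrete, here is a tiny JuMP sketch of that mechanism (the toy numbers, the HiGHS solver and the unserved-energy penalty are illustrative additions, not SpineOpt's generated model): fix the master problem variable in the sub problem, solve, and read the capacity constraint's marginal value.

using JuMP, HiGHS

sp = Model(HiGHS.Optimizer)
set_silent(sp)
units_invested_fixed = 2.0                # value handed down by the master problem
@variable(sp, flow >= 0)
@variable(sp, unserved >= 0)              # penalty slack keeps the sub problem feasible
@constraint(sp, units_available, flow <= 1.0 * (1 + units_invested_fixed))
@constraint(sp, balance, flow + unserved == 5.0)
@objective(sp, Min, 10 * flow + 1000 * unserved)
optimize!(sp)
dual(units_available)                     # -990.0 here: cost saved per extra MW of capacity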

This is a pretty qualitative description of the approach - more formal descriptions in the context of energy systems can be found here:

Some resources/references:

Overview: [image]

Learnings from trial implementation in SpineOpt

In the SpineOpt branch "decomposition" I am trialing an implementation of the decomposition structure described above; these are my initial learnings:

Proposed high-level approach

With the above in mind, I am proposing:

Advantages of a decomposed SpineOpt model

An example configuration that would use the proposed bi-level structure :

Master Problem (Long term investment model)

Sub Problem (shorter term operational model)

Some issues encountered so far:

# Sub problem version: note the use of `time_slice` and `to_time_slice`,
# which assume the single (sub problem) temporal structure.
function constraint_unit_lifetime_indices()
    unique(
        (unit=u, stochastic_path=path, t=t)
        for u in indices(unit_investment_lifetime)
        for t in time_slice(temporal_block=unit__investment_temporal_block(unit=u))
        for path in active_stochastic_paths(
            unique(
                ind.stochastic_scenario
                for ind in units_invested_available_indices(
                    unit=u, t=vcat(to_time_slice(TimeSlice(end_(t) - unit_investment_lifetime(unit=u), end_(t))), t),
                )
            )
        )
    )
end

# Master problem version: an otherwise identical duplicate that must call the
# `mp_`-prefixed accessors (`mp_time_slice`, `to_mp_time_slice`,
# `mp_units_invested_available_indices`).
function constraint_mp_unit_lifetime_indices()
    unique(
        (unit=u, stochastic_path=path, t=t)
        for u in indices(unit_investment_lifetime)
        for t in mp_time_slice(temporal_block=unit__investment_temporal_block(unit=u))
        for path in active_stochastic_paths(
            unique(
                ind.stochastic_scenario
                for ind in mp_units_invested_available_indices(
                    unit=u, t=vcat(to_mp_time_slice(TimeSlice(end_(t) - unit_investment_lifetime(unit=u), end_(t))), t),
                )
            )
        )
    )
end

You can see above that I need to_time_slice in the sub problem constraint indices and to_mp_time_slice in the master problem constraint indices, because we are assuming a single temporal structure. If, instead, the temporal structure were dimensioned by model and took a reference to the JuMP model as an argument, perhaps we could get around this (see the sketch below).
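
A minimal sketch of that idea, assuming time slices get stored per model (e.g. in the JuMP model's ext dict) so that a single accessor replaces the time_slice/mp_time_slice pair; the names here are illustrative, not SpineOpt's API:

using JuMP

set_time_slices!(m::Model, slices) = (m.ext[:time_slices] = slices; m)
time_slice(m::Model) = m.ext[:time_slices]            # one accessor for both models

mp = set_time_slices!(Model(), ["y2030", "y2035"])    # master: investment periods
sp = set_time_slices!(Model(), ["h0001", "h0002"])    # sub: operational periods

@assert time_slice(mp) != time_slice(sp)              # same function, model-specific data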

Next steps

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 19, 2020, 17:34

Sounds interesting. I need to read it a couple more times, I think, to really understand the issues. One thing that comes to mind immediately is whether or not using multiple model objects makes sense or applies here.

Re #172

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Sep 29, 2020, 09:17

mentioned in commit f64db0fc6be6b4ee94a41ffcc55a63a6161e6cc1

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Oct 22, 2020, 08:21

So I said I would provide more detail about the intended formulation.

So let's take the pretty minimal example of minimising capacity and operating cost for a fleet of units with a linear cost coefficient $operational\_cost_u$:

\begin{aligned}
minimise:&
\\
&+ \sum_u investment\_cost_u \, units\_invested_u\\
&+ \sum_{u,t} operational\_cost_u \, flow_{u,t}\\
subject\ to:&
\\
&flow_{u,t} \le max\_flow_u(units_u + units\_invested_u)\\
&\sum_{u} flow_{u,t} = demand_t \qquad \forall t
\end{aligned}

So this is a single problem that can't be decoupled over t, because the investment variable units_invested couples the timesteps together. If units_invested were a constant in the problem, then each t could be solved individually. This is the basic idea of Benders decomposition: we decompose the problem into a master problem and sub problems, with the master problem optimising the coupling investment variables, which are treated as constants in the sub problems.

The master problem in the initial Benders iteration is simply to minimise total investment costs:

\begin{aligned}
minimise:\ &Z
\\
subject\ to:\ &Z \ge \sum_u investment\_cost_u \, units\_invested_u
\end{aligned}

The solution to this problem yields values for the investment variables which are fixed as $\overline{units\_invested_u}$ in the sub problem and will clearly be zero in the first iteration.

The sub problem for Benders iteration b then becomes:

\begin{aligned}
minimise:&
\\
obj_b = &\sum_{u,t} operational\_cost_u \, flow_{u,t}\\
subject\ to:&
\\
&flow_{u,t} \le max\_flow_u(units_u + \overline{units\_invested_u}) \qquad [\mu_{b,u,t}]\\
&\sum_{u} flow_{u,t} = demand_t \qquad \forall t
\end{aligned}

This sub problem can be solved individually for each t. This is pretty trivial in this small example, but if we consider a single t to be a single rolling horizon instead, decoupling the investment variables means that each rolling horizon can be solved individually, rather than having to solve the entire model horizon as a single problem.

$\mu_{b,u,t}$ is the marginal value of the capacity constraint and can be interpreted as the decrease in the objective function at time t for an additional MW of capacity from unit u. This information is used to construct a Benders cut, which represents the reduction in the sub problem objective function that is possible in this Benders iteration by adjusting the variable units_invested. This is effectively the decrease in operating costs made possible by adding another unit of type u, and is expressed as:

$obj_{b} + \sum_{u,t}max\_flow_u \mu_{b,u,t} (units\_invested_u - \overline{units\_invested_{u,b}})$

In the first Benders iteration the value of the investment variables will have been zero, so $\overline{units\_invested_{u,b}}$ will be zero and the expression represents the total reduction in cost from the addition of a new unit of type u. This Benders cut is added to the master problem, which then becomes, for each subsequent Benders iteration b:

\begin{aligned}
minimise:\ &Z
\\
subject\ to:&
\\
&Z \ge \sum_u investment\_cost_u \, units\_invested_u\\
&Z \ge \sum_u investment\_cost_u \, units\_invested_u
+ obj_{b} + \sum_{u,t} max\_flow_u \, \mu_{b,u,t} (units\_invested_u - \overline{units\_invested_{u,b}}) \qquad \forall b
\end{aligned}

Note the Benders cuts are added as inequalities, because they represent an upper bound on the value we can get from adjusting the master problem variables in that Benders iteration. Consider the example of renewable generation: because its marginal cost is zero, on the first Benders iteration it could look like there would be a lot of value in increasing the capacity, given the marginal values from the sub problems. However, when the capacity variables are increased accordingly and curtailment occurs in the sub problems, the marginal values will be zero whenever curtailment occurs, and so other resources may become optimal in subsequent iterations.

Clearly this is a simple example, but I think it illustrates the general strategy. The pseudo code will look something like:

  initialise master problem
  initialise sub problem
  solve first master problem
  create master problem variable time series
  solve rolling spine opt model
  save zipped marginal values
  while master problem not converged
      update master problem
      solve master problem
      update master problem variable timeseries for benders iteration b
      rewind sub problem
      update sub problem
      solve rolling spine opt model
      save zipped marginal values
      test for convergence
  end
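
And here is a runnable sketch of that loop on the toy model above, using JuMP with HiGHS (the data, the solver choice and the unserved-energy penalty that keeps the first sub problem feasible are all illustrative additions, not SpineOpt code):

using JuMP, HiGHS

units    = [:coal, :wind]
inv_cost = Dict(:coal => 100.0, :wind => 80.0)   # investment_cost
op_cost  = Dict(:coal => 5.0,   :wind => 0.0)    # operational_cost
max_flow = Dict(:coal => 1.0,   :wind => 1.0)
existing = Dict(:coal => 0.0,   :wind => 0.0)    # units already built
demand   = [3.0, 5.0, 4.0]
T = eachindex(demand)

# Master problem: investment costs only; Benders cuts are appended below.
mp = Model(HiGHS.Optimizer); set_silent(mp)
@variable(mp, Z >= 0)
@variable(mp, x[units] >= 0)                     # units_invested (LP-relaxed)
@constraint(mp, Z >= sum(inv_cost[u] * x[u] for u in units))
@objective(mp, Min, Z)

for b in 1:20
    optimize!(mp)
    lb = objective_value(mp)                     # lower bound on total cost
    x̄ = Dict(u => value(x[u]) for u in units)

    # Sub problem with the investment variables fixed at x̄.
    sp = Model(HiGHS.Optimizer); set_silent(sp)
    @variable(sp, flow[units, T] >= 0)
    @variable(sp, unserved[T] >= 0)              # penalty slack for feasibility
    @constraint(sp, cap[u in units, t in T], flow[u, t] <= max_flow[u] * (existing[u] + x̄[u]))
    @constraint(sp, bal[t in T], sum(flow[u, t] for u in units) + unserved[t] == demand[t])
    @objective(sp, Min, sum(op_cost[u] * flow[u, t] for u in units, t in T) + 1000 * sum(unserved))
    optimize!(sp)
    obj_b = objective_value(sp)
    μ = Dict((u, t) => dual(cap[u, t]) for u in units, t in T)   # <= 0 for a min problem

    ub = sum(inv_cost[u] * x̄[u] for u in units) + obj_b          # cost of this trial solution
    if ub - lb <= 1e-6
        println("converged after $b iterations: ", x̄)
        break
    end

    # Benders cut for iteration b, matching the master problem formulation above.
    @constraint(mp, Z >= sum(inv_cost[u] * x[u] for u in units) + obj_b +
        sum(max_flow[u] * μ[(u, t)] * (x[u] - x̄[u]) for u in units, t in T))
end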

@mihlema @manuelma @Tasqu @jkiviluo @nnhjy

Is this helpful in terms of our discussion? Any thoughts?

It is clear from the above that @mihlema's suggestion of zipping the marginal values could work... i.e. we can rewrite the Benders cut term as:

$obj_{b} + \sum_{u}(units\_invested_u - \overline{units\_invested_{u,b}})max\_flow_u\sum_{t} \mu_{b,u,t}$

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Oct 28, 2020, 12:07

Thanks Jody for the description and sorry for only getting back to this now.

I think this looks pretty good, but I guess the devil is in the detail of the implementation. At first glance, it seems like we can cover everything with the functionality we currently have. The question is how we want to trigger the algorithm. I think it could make sense as a first step to have separate run_spine_opt algorithms. On the DB side we could have, for a start, model parameters (`is_benders_investment_decomposition` in combination with `is_subproblem`/`is_masterproblem`). I still hope we can go for a more generalized way in the long run, but to start simple this could work.

Two potential problems come to mind:

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 4, 2020, 16:16

So a problem I hadn't foreseen is getting the dual solution from the sub-problem. This is something GAMS appears to provide automatically for a MIP, but which we will have to do ourselves. This is my proposed approach:

Does anyone see any pitfalls here?

I want to try and do this in a way that is solver-independent, but I am not sure that JuMP will automatically pick up that the new problem is an LP and give me duals automatically...

Anyone any thoughts? @manuelma @jkiviluo @Tasqu @mihlema
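
For what it's worth, a sketch of that proposal in JuMP (the function name and structure are mine, not SpineOpt's): solve the MIP, fix every discrete variable at its solution value, drop integrality, and re-solve what is now a pure LP so that dual() becomes available. Newer JuMP versions bundle this pattern as fix_discrete_variables.

using JuMP

function resolve_as_lp_for_duals!(m::Model)
    optimize!(m)                                       # original MIP solve
    discrete = [v for v in all_variables(m) if is_integer(v) || is_binary(v)]
    vals = round.(value.(discrete))                    # read values before modifying m
    for (v, val) in zip(discrete, vals)
        is_integer(v) && unset_integer(v)
        is_binary(v) && unset_binary(v)
        fix(v, val; force = true)                      # pin to the MIP solution
    end
    optimize!(m)                                       # now a pure LP: duals are defined
    return m                                           # caller can query dual(con)
end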

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 5, 2020, 08:06

By the way, this seems to be exactly what GAMS does... I can see the following in the log files:

Fixing integer variables, and solving final LP...

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 10, 2020, 09:34

A quick update on this:

On top of this, I implemented a mechanism to report constraint dual values - very useful for getting power prices etc. Here is what I wrote in the commit:

Marginal value reporting: one can now add a constraint name, e.g. constraint_nodal_balance, to a report, and the constraint's marginal value will be reported in the output DB. When adding a constraint name as an output, we need to preface the actual constraint name with "constraint_" to avoid ambiguity with variable names (e.g. units_available). So to report the marginal value of units_available, we add an output object called "constraint_units_available".

This works as follows: save_marginal_values() now adds marginal values to m.ext[:values] if it finds an output in report__output() that matches a constraint name (prepended with "constraint_"). Once added to m.ext[:values], the output report writing takes care of the rest.
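
For reference, a sketch of that matching logic (assuming SpineOpt's convention of keeping constraints in m.ext[:constraints] keyed by constraint name; the code below is illustrative rather than the exact implementation):

function save_marginal_values!(m::Model)
    for (rpt, out) in report__output()
        name = string(out.name)
        startswith(name, "constraint_") || continue
        con_key = Symbol(name[(length("constraint_") + 1):end])    # e.g. :units_available
        haskey(m.ext[:constraints], con_key) || continue
        # one marginal value per constraint index; the report writer then
        # picks these up from m.ext[:values] like any other output
        m.ext[:values][Symbol(name)] = Dict(
            ind => dual(con) for (ind, con) in m.ext[:constraints][con_key]
        )
    end
end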

Issues:

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 10, 2020, 15:44

mentioned in commit 87286aa54c3a11fbc82390b190382b23617fc40e

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 10, 2020, 15:53

mentioned in commit 4a43c0c459cee420fcf627a0e984831ea147fd9b

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 10, 2020, 16:00

So using CBC doesn't work to get the dual values... we need to somehow switch solvers without rebuilding the model - but this doesn't sound immediately doable... unless there are solver-specific options/switches we can use - but then we don't have as much of a plug-and-play solver structure.

One way or another, we will want to calculate duals even regardless of decomposition, so we need to think about this.

Edit: from here it looks like we need to call CLP... but can we change solver without rebuilding the model? Perhaps we can copy it, but that sounds like a big performance hit. Any thoughts @manuelma ?

Optimizing model master_problem... 13.044423 seconds (23.71 M allocations: 1.161 GiB, 3.79% gc time)
Saving master problem results...  0.083956 seconds (185.75 k allocations: 9.286 MiB)
Processing master problem solution  0.000359 seconds (719 allocations: 28.766 KiB)
Fixing variable values...  0.030111 seconds (121.84 k allocations: 6.194 MiB)
Optimizing model operations...  0.199281 seconds (63.71 k allocations: 6.414 MiB)
Fixing integer values for final LP to obtain duals...  0.197367 seconds (294.23 k allocations: 13.362 MiB)
Optimizing final LP of operations to obtain duals...  0.011867 seconds (25.76 k allocations: 2.969 MiB)
Optimal solution found, objective function value: 2.052944223049287e7
Saving results...ERROR: LoadError: ArgumentError: ModelLike of type Cbc.Optimizer does not support accessing the attribute MathOptInterface.ConstraintDual(1)
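
A sketch of the switch-without-rebuilding idea on a toy model (Cbc/Clp as in the log above): JuMP's set_optimizer swaps the backend of an existing model, and the problem is copied to the new solver at the next optimize!, so there is a copy under the hood but no rebuild in user code.

using JuMP, Cbc, Clp

m = Model(Cbc.Optimizer)
@variable(m, units >= 0, Int)
@variable(m, flow >= 0)
@constraint(m, cap, flow <= 10 * units)
@constraint(m, bal, flow == 15)
@objective(m, Min, 100 * units + 5 * flow)
optimize!(m)                              # MIP solve with Cbc: units = 2

fix(units, value(units); force = true)    # pin the integer solution...
unset_integer(units)                      # ...and drop integrality
set_optimizer(m, Clp.Optimizer)           # switch to an LP solver that exposes duals
optimize!(m)
dual(bal)                                 # marginal cost of meeting demand, now available
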
spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 12, 2020, 09:49

mentioned in commit d5a113fe28da3f750f331c5f13fce6fe668d55a3

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 12, 2020, 12:51

mentioned in commit e8828cafe1377946a90ae4fe1e23a202e4589df9

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 16, 2020, 11:12

mentioned in commit 72515958d29c842bcb638874a8eca240124c64b5

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 16, 2020, 17:14

mentioned in commit 44f8aeb7957ed798659eff85881dfdd1887688cf

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 17, 2020, 09:02

mentioned in commit 261b1638cbd2e366d860cad759658dc8802b8455

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 11:25

mentioned in commit 1d0da19c8f0c95fadc04b5f6219cf0ea87ac7fcc

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 11:37

Well this is a nice milestone...

Test Summary: | Pass  Total
test set      |  688    688

Major kudos to @manuelma - the tests are enormously helpful in identifying issues.

We may be getting close to being able to merge this and work on adding investment variables for connections and storage. Although it's been a while since I last pulled master, which is the next step.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Nov 18, 2020, 11:39

Very nice! Although, I had the impression we had more tests, like 800-something in master??

spine-o-bot commented 3 years ago

In GitLab by @Tasqu on Nov 18, 2020, 12:48

Something like 800-900 I think, after I added a few more for the stochastic structure.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 14:05

So looks like there are more tests to do after I pull from master...

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 16:28

mentioned in commit 75731d1f46f4e239bde71319e2abbb9f5f3563de

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 16:28

mentioned in commit 686a6f3647100a583344bf736c017873990a82cb

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 16:35

mentioned in commit 5543693a94895777a934e655ad1f5bf682a787f7

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 19:49

OK, so this is more like it:

Test Summary: | Pass  Total
test set      |  844    844

I had commented some out while debugging!

@Tasqu Nice work on the model__stochastic_structure stuff and the multi-model-aware generate_stochastic_structure stuff. It required some refactoring on my side, but I think it's a neater, more consistent solution than what I had come up with.

Looks like I'm ready to fire up a merge request which I'll work on tomorrow.

Some of the check_data_structure checks need to be model-specific. Right now they are just considering multiple instances of a SpineOpt "operations" model, but there might be other models. For example, SpineOpt's "master" problem doesn't need nodes... but it may in the future. I have handled this using the model_type method parameter for now.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 18, 2020, 19:56

mentioned in commit 2771603016ef0c46d4f139e58e4c7e8f9e5cc415

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 09:58

mentioned in merge request !54

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 11:06

@mihlema @manuelma @Tasqu You will see that I have submitted MR !54
Since I went in the end with unit__investment_stochastic_structure in combination with model__stochastic_structure (and the same for temporals), there is very little impact on regular users who are not using decomposition. In fact, I believe the only breaking change right now is that the model__report relationship needs to be defined. Actually, maybe we can add a check for that.

Other than that, I believe no other changes are required.

It would be great to get this merged asap so I can work on the next steps, which I think are (in order of priority):

Thoughts?

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Nov 19, 2020, 11:07

I have just merged the downward ramps into master. I think you should merge the new master into your branch and check if all tests still pass. If they do, I have no problem with merging right now and tying up any loose ends in master.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 11:09

Great - there are some conflicts - I'll get working on those and report back

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 13:52

@manuelma @mihlema

In the latest master, I see that the following have been commented out in total_costs.jl - was this intentional?

  #:res_proc_costs,
  #:res_start_up_costs
spine-o-bot commented 3 years ago

In GitLab by @manuelma on Nov 19, 2020, 14:20

Not sure about this, I don't remember having commented them... @mihlema ?

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Nov 19, 2020, 14:36

Yes, that was indeed me, (sort of) intentionally. I am on the verge of removing them, because I don't think they're necessary. Are you using those at the moment? (Of course they shouldn't just be lying around there commented out - I'll pick this up.)

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 14:38

No, I'm not using them - it just looked a little suspiciously like it might have been a hangover from some debugging :-)

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 14:39

Anyway, just checking - I'll leave it as is.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Nov 19, 2020, 15:16

So all tests were passing after the merge with master:

Test Summary: | Pass  Total
test set      |  941    941

Someone has clearly been busy writing tests :-)

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Nov 19, 2020, 16:41

mentioned in commit 0a1e673311206c3c641ab3a77ffd1031cdb76fe5

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jan 26, 2021, 21:30

Again, I'm coming back to my all-time favorite: that Benders decomposition shouldn't be used in the way we're using it. Again with the same story as before: the duals of our lower-level problem are not meaningful. The reasoning is that the duals of the relaxed problem cannot reflect e.g. start-up costs (as these are constants in the objective function of the relaxed problem). So we will only get an optimal solution for the relaxed problem - this might be sufficient, I don't know. If we want to solve the original MILP problem to optimality, we should go for Dantzig-Wolfe decomposition.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jan 26, 2021, 22:24

There's no should or shouldn't in all of this. The question is: is it useful? I believe the answer is yes. We should be aware of the hazards and limitations. My hypothesis is that using decomposition in this way will get you a lot closer to the global optimum of a combined operations/planning problem than would be possible given the simplifications that would otherwise be needed to solve the same problem (e.g. fewer days, lower temporal resolution, etc.).

I will make the same point I always make... any optimisation model that has ever produced a marginal cost of energy accepts this issue with the duals. Lagrangian Relaxation, the industry standard for years, relies upon them. They are not perfect, but they are useful.

Also, it is an iterative approach which is robust in that it does not rely completely on the duals of the sub-problem. If an investment variable changes as a result of a dual value which subsequently fails to be representative of its true impact on the problem objective function, that gets "washed out" in the next iteration.

I wouldn't get too hung up on this from an ideological point of view. The proof of the pudding is in the eating, as they say - let's do some tests. If it's useful for a certain problem, it's useful, and that's all we're saying.

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jan 27, 2021, 02:50

Yes, I didn't mean to say that what we're doing is not useful. I just had a talk about this with Professor Kazempour from DTU, who suggested that in our case we should rather use Dantzig-Wolfe, as this would guarantee optimality in the MILP case. I guess my thinking is just that if there is a better way of doing it, we should think about it. But it is not a pressing issue.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jan 27, 2021, 09:11

@mihlema Thanks a million for this! We should definitely look into it!

mihlema commented 3 years ago

Most of the issues regarding decomposition have been addressed, and Benders decomposition has been implemented for investments. Next-step enhancements for decomposition are:

clizbe commented 1 year ago

Closing due to stale status.
