spine-tools / SpineOpt.jl

A highly adaptable modelling framework for multi-energy systems
https://www.tools-for-energy-system-modelling.org/
GNU Lesser General Public License v3.0

Linking related optimizations #318

Open spine-o-bot opened 3 years ago

spine-o-bot commented 3 years ago

In GitLab by @mihlema on May 6, 2020, 12:22

Summary

A complex rolling structure can for instance occur when multiple markets are cleared at different points in time with different roll_forward and window_duration settings.

Example

The sequential clearing of the day-ahead market and the intra-day market can be taken as an example. The day-ahead market is cleared once a day at Day0 12:00 (noon), optimizing the unit commitment decisions for the next day (Day1 00:00 - Day2 00:00) using the forecast values available at Day0 12:00. Translated into SpineModel temporal_structure:

The intra-day market is cleared hourly (e.g. at 12 a.m.) and optimizes the delivery for the next hour (e.g. 1 a.m.), taking into account the decisions from the DA market and using the forecast available at 12 a.m. Translated into SpineModel temporal_structure:
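
For concreteness, the two rolling structures could be sketched with parameter values along these lines (a rough sketch only; the names follow the roll_forward/window_duration terminology above, and all values are illustrative):

using Dates

# Day-ahead market: cleared once a day at noon, window covers the next day.
da_roll_forward = Day(1)          # roll the window one day at a time
da_window_duration = Day(1)       # optimize Day1 00:00 - Day2 00:00
da_clearing_time = Time(12)       # solve at Day0 12:00 using that forecast

# Intra-day market: cleared hourly, window covers the next hour.
id_roll_forward = Hour(1)
id_window_duration = Hour(1)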

@jkiviluo @DillonJ @manuelma @Tasqu

spine-o-bot commented 3 years ago

In GitLab by @mihlema on May 6, 2020, 12:23

changed the description

spine-o-bot commented 3 years ago

In GitLab by @mihlema on May 6, 2020, 12:27

changed the description

spine-o-bot commented 3 years ago

In GitLab by @mihlema on May 6, 2020, 14:49

changed the description

spine-o-bot commented 3 years ago

In GitLab by @manuelma on May 6, 2020, 18:27

I guess we can also have different model objects representing each rolling situation, and then find a way to alternate between them?

spine-o-bot commented 3 years ago

In GitLab by @Tasqu on May 7, 2020, 05:12

This would be my first attempt as well, since there might be other features the user wants to tweak between the day-ahead and intraday examples that have nothing to do with temporal_blocks.

However, I suppose we would have to change a lot of the current implementation in any case, since we would need to have common variables between multiple models that are currently stored in m.ext. Depending on the desired interactions between the models, defining these might be tricky.
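
As a rough illustration of the m.ext point, assuming plain JuMP models (the :variables key mirrors how SpineOpt stores variables, but the sharing mechanism here is just a sketch):

using JuMP

# Two models sharing one variable registry through their `ext` dicts,
# so one model can look up what the other registered.
shared_vars = Dict{Symbol,Any}()

m_a = Model()
m_a.ext[:variables] = shared_vars
m_b = Model()
m_b.ext[:variables] = shared_vars

# Model A registers a variable under a common key...
shared_vars[:units_on] = @variable(m_a, base_name = "units_on")

# ...and model B finds it under the same key.
m_b.ext[:variables][:units_on]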

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jun 11, 2020, 11:51

changed title from {-Complex rolling structures might not be possible with current temporal structure-} to {+Linking related optimizations+}

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jun 25, 2020, 10:57

I have been thinking about this a little bit, also with respect to #183. I think I am in favor of having a concise framework for linking models that can also be extended later on by other models.

The first idea was to have multiple models within one db. But this would lead to some issues regarding our current model structure. Given that multiple models within one db would follow different temporal_structures, we'd need to link temporal_blocks to different models. That would in turn mean that either (1) nodes would be connected to multiple temporal_blocks or (2) nodes that appear in both models would need to be duplicated.

(1) The first idea does not seem too great given the following example:

Unit_A and Unit_B are part of model_A and model_B, respectively. Both models represent the same geographical "system", meaning that they both share node_both. model_A follows tblk_A, model_B follows tblk_B. model_A optimizes flows from Unit_A to node_both, and model_B flows from Unit_B to node_both.

However, in this setting, there would be no way to tell the optimization of model_A that Unit_B doesn't belong to its optimization problem. Hence, we would need e.g. unit__model relationships or a parameter on each unit to indicate which model it belongs to. I don't find this a great solution.

(2) The second idea could work, but would quickly become annoying. If we want to have a linking constraint between the models, we'd need to enforce a mapping between the nodes, e.g. node_both_mA => node_both_mB. If both models share their complete geographical system, this seems very unintuitive.

Another option would be to link different models through different databases. But at the same time we would like to "have both models in SpineOpt at the same time" for performance (e.g. not writing back and forth to the db for the exchange between models, and keeping the ability to use the model update step). So far this brought me to the following idea: (image)

Model A, Model B and Model C share common data from a source db. (Note: I think this should rather be a "data pointer" than an import, but I don't know how that would be done.) Each model db contains the information about the model structure and which variables are active for the respective model.

The config db (or maybe tool?) holds information about the sequence of models, iteration and termination criterion, linking constraints. For Benders decomposition this could look something like this:

Furthermore, for this example, we need to specify the sequence in which the models are executed, e.g. model_a first.

The next step would be to define the intersection between the models: model A -> model B: unit_invested; model B -> model A: marginal costs. Of course we need to define when this information is transferred. The config also needs to hold the information on what the break criterion is, and when it will be evaluated.
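
For illustration, the config's exchange and break criterion could be plain data along these lines (all names here are made up):

# Which quantities flow between which models, and when.
exchange = [
    (from = :model_A, to = :model_B, item = :units_invested, when = :after_solve),
    (from = :model_B, to = :model_A, item = :marginal_costs, when = :after_solve),
]

# When to stop iterating between the models.
break_criterion = (max_iterations = 20, gap_tolerance = 1e-6)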

I think, we can provide a standard config for popular things, in particular decomposition (as we're also promising this).

From a SpineOpt perspective this generic approach would probably be a bit tricky and would require some significant changes to run_spineopt.jl, as the information from the config needs to be translated into an execution script (I believe).

This idea is not quite finalized and there are quite a few things that still need to be evaluated, for instance whether we could even do this with run_spineopt.jl, but I thought it's good to think about a generic way of doing this. I believe this would leave the door open for a lot of flexibility for future models. Any thoughts @Tasqu @manuelma @DillonJ @jkiviluo ?

spine-o-bot commented 3 years ago

In GitLab by @Tasqu on Jun 25, 2020, 11:55

Personally, I'm not against using separate input DBs for different models, although it sort of undermines the point of having a model ObjectClass to begin with. As you pointed out, the alternative would pretty much have to be adding the model dimension to every single RelationshipClass to avoid ambiguous definitions, which could get ugly. (Ideally, it would be nice to be able to store the data and all the required definitions for a model in the same database, but it could be too messy to be practical.)

I've previously given some thought to how the models run in a sequence could or should interact with each other, and to me the simplest way would seem to be having all the models run using the same variable dictionary. Since all the models would use common variables, there wouldn't be a need to externally define which variables from model A interact with which variables from model B etc., since this information would be "embedded" in the structures of models A and B respectively. However, this approach would require very precise control over which variables are saved from which models, so that only the desired variables from previous model runs would be fixed for the next ones. Furthermore, the structures of models A and B would have to be very precisely defined, and a more generic approach to constraining variables using other variables could be necessary, if we'd want to support constraining upper limits etc. in addition to simply fixing the variables from previous runs.
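
In plain JuMP terms, fixing a variable from a previous run could look like this (a generic sketch; how SpineOpt would wire the two models together is left open):

using JuMP

# Model A owns the investment decision.
m_a = Model()
@variable(m_a, 0 <= invested_a <= 10)
# ...solve m_a here; suppose the solved value is:
invested_a_value = 3.0

# Model B gets a copy of the variable, fixed to A's result.
m_b = Model()
@variable(m_b, invested_b >= 0)
fix(invested_b, invested_a_value; force = true)  # force: invested_b has a bound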

An alternative would of course be to run some script between models A and B to define how the results of A impact model B. However, I'm not sure if this could be generalized in any manner, or if the script would have to be tailor-made on a case-by-case basis. Furthermore, we'd have to discuss whether these scripts would be parts of SpineOpt or not, since this type of general "task scheduling" starts to sound like Toolbox functionality.

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jun 30, 2020, 08:18

I think we can't assume that each model handles the same variables. Even for Benders decomposition this is not the case (mp -> units_invested, m -> unit_flow). We could have different run_spinemodels on a case-by-case basis. However, this might become cumbersome for code maintenance: we would have to carefully keep all run_spinemodels updated once we add functionality (e.g. new constraints, new variables).

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jun 30, 2020, 08:20

Linking models using different algorithms

Problem identification

At the moment, run_spinemodel does not support the linking of multiple models. Possible use cases are Benders decomposition for investment models, linking of sequential markets with different rolling structures, agent-based models and so forth. The underlying algorithms can differ in their structure and have different "do" and "while" conditions: (image)

As it seems to be fairly difficult to incorporate multiple algorithms in one run_spinemodel, and as it would probably not be easy to extend this functionality, we'd need designated run_spinemodels on a case-by-case basis, which might cause some trouble in keeping all of them "in sync".

Proposal

These run_spinemodels do have some information in common though:

The only thing that is completely different is the underlying algorithm. All algorithms could be described through a sequence of calculation specifications (as illustrated above for Benders and DA-ID coupling). We provide an "algorithm generator" that evaluates the sequence of calculation specifications into lines of code. This algorithm is then written into run_spinemodel, resulting in the following structure:

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 30, 2020, 11:57

I think your alternative #1 is much simpler than e.g. having multiple databases. Defining which units are in which model is something that should be done anyway. I guess we're currently assuming that based on which nodes are in the model. However, you could have two alternative units, and a way to choose which one to use in which model run needs to be established.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 30, 2020, 20:56

I have to say this looks quite interesting. In principle, any program (or algorithm) can be represented using a tree-structure, so it's not impossible to develop that structure within our framework.

I think we need to start with a very simple program such as if condition A else B end and see what classes, objects, relationships and parameters we'd need to specify something like that.
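
As a starting point, that simple program could be represented as data roughly like this (every type and name below is hypothetical):

# `if cond then A else B end` as a tree of plain structs.
abstract type Step end

struct Call <: Step
    f::Symbol        # name of a calculation to run
end

struct Branch <: Step
    cond::Symbol
    then::Step
    otherwise::Step
end

interpret(s::Call, env) = env[s.f]()
interpret(s::Branch, env) =
    env[s.cond]() ? interpret(s.then, env) : interpret(s.otherwise, env)

# The program `if converged A else B end`:
env = Dict{Symbol,Function}(
    :converged => () -> false,
    :A => () -> println("run A"),
    :B => () -> println("run B"),
)
interpret(Branch(:converged, Call(:A), Call(:B)), env)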

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jul 1, 2020, 11:16

While that looks really cool, I'm afraid of how much work there will be to make it functional and versatile. The current graph does not have directionality, and supporting conditionality in the interface will mean quite a bit of work. And then, when someone wants to do something not supported, the only way to make it work is to upgrade the interface with even fancier logic.

How about something simpler (at least at first):
Allow including a Julia code snippet at the beginning of SpineOpt. The purpose of this snippet is to give the instructions on how to loop and iterate between sub-models. This code snippet could be a parameter in model, but it would really be a multi-line JSON. That code snippet would establish the logic between different models (loops and conditionals) - it would be calling SpineOpt with the right set of parameters and even parallelizing when possible etc. By default, it would just run SpineOpt as now.

E.g. a SpineOpt_decomposed would be represented by a single model entity in Spine Toolbox, but once it's inside SpineOpt it is divided into multiple sub-models as instructed by the code snippet. Since it's code, it gives lots of freedom to do whatever is needed (and unfortunately also things that should not be done).

Another choice would be to establish a standard code framework for loops/decomposition etc. that would be fed with model parameters that instruct how to implement the logic. This could be the second stage: first make the code snippet work, then we learn what kind of structures are needed and maybe we can generalize. After it's generalized, we can also think about how to present the logic in a graph like the one above.
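
A minimal sketch of the snippet idea: the parameter value arrives as a string and is evaluated with Base.include_string (which is also exactly where the code injection concern raised below comes in):

# The snippet would come from a model parameter; here it's hard-coded.
snippet = """
for i in 1:3
    println("solving sub-model, iteration ", i)
end
"""

# Compile and run the snippet in the Main module.
include_string(Main, snippet)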

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 1, 2020, 11:29

I agree it may require more resources than we actually have, unfortunately.

I like your idea @jkiviluo, but I think we should stay as far away as possible from passing code in parameter values, just because of code injection. If we're going to go with this, I think we should develop that data structure to represent code: just basic branching and looping, if and while, might be sufficient.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jul 3, 2020, 14:16

I'm ok either way. I was thinking that the code snippet would be the first step - something we can do safely now that there are no outside users. We get it to work first and then we can iterate it toward a data structure that directs the logic.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 6, 2020, 20:45

@mihlema and I did some brainstorming about this on Slack. Based on that, I wrote a proposal here.

I think the proposal works, but it's just a first attempt, so any comments are welcome.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 10, 2020, 10:12

So I had a go at implementing the decomposition structure using the approach described here: https://gitlab.vtt.fi/spine/model/-/wikis/Custom-programs-in-SpineOpt

I think it works - but I have some thoughts that maybe we need to think through :

  1. While it works - what is the advantage of the approach over writing the code directly? Writing the pseudocode for the high-level approach was certainly a lot easier than figuring out the correct relationships, objects and parameters to create. Further, there is a level of abstraction, in that the instructions are not the names of functions directly.

For example, here is the pseudocode for the decomposition structure I would like to implement:

setup_model(mp)                  # build the master (investment) problem
setup_model(sp)                  # build the operational subproblem
while optimize_model(mp)         # returns false once converged (see below)
    fix_mp_variables(mp, sp)     # pass master decisions down to the subproblem
    optimize_model(sp)
    while roll_forward(sp)       # roll the subproblem across the horizon
        optimize_model(sp)
    end
    save_marginal_values(sp)     # collect duals to feed back to the master
    rewind_model(sp)             # reset the subproblem window for the next pass
end
write_report(sp)

Would it be another option to simply restructure the existing code and have this high-level optimization control flow written directly in Julia, in a self-contained file which contains the main optimisation flow (it shouldn't look much more complicated than the pseudocode above) and any custom functions for that flow (e.g. save_marginal_values() is specific to the decomposition structure, and it would live in the custom optimisation control flow file).

Then we provide some mechanism to choose between alternative optimisation flows.

  2. The second point is that the proposal doesn't really address some of the implementation issues, such as multiple temporal structures / model-dependent time slices, and the higher-level issue of how we can have multiple models living side by side within Julia with access to SpineInterface.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 10, 2020, 11:32

The proposal was never meant to address point 2, and I agree we should focus on that first, but that's a different functionality. Once we figure that out, we can think about whether or not we want to create programs through data, and go back to that proposal.

So we need a way to make models coexist in the same db, while sharing the 'spatial' structure of nodes, units, etc, but having individual temporal structures, is that the requirement more or less?

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 10, 2020, 12:57

Perhaps we need to think it through more - and work through some use cases.

For example, during our case studies call, I was thinking of @mihlema's ideas for a multi-timescale market model that replicates day-ahead (and beyond) down to intraday and balancing. My first thoughts were that the different market timescales only differ in terms of the forecast error, temporal resolution and, perhaps, what constraints are included. My initial thoughts were that our flexible temporal and stochastic structure should be enough to capture all these differences and I wondered if different models were really needed to implement this - because in reality, day-ahead, intraday and balancing all form part of a single continuum and our structure is well suited to capturing this. However, the one thing we can't do now is have specific constraints apply during specific temporal blocks, but this seems trivial to do.

But what do you think of the idea of expressing the optimization flow in a single Julia file - wouldn't it be a bit simpler? What does the data-side system add? I feel that inevitably there would be something you need to do that you can't specify using data but easily could in a Julia script. My idea is that you would structure the current run_spinemodel code to include a single file that contains something like

setup_model(mp)
setup_model(sp)
while optimize_model(mp)
    fix_mp_variables(mp, sp)
    optimize_model(sp)
    while roll_forward(sp)
        optimize_model(sp)
    end
    save_marginal_values(sp)
    rewind_model(sp)
end
write_report(sp)

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 10, 2020, 13:00

basically, alternative run_spinemodel codes that are somehow selectable?

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 10, 2020, 13:04

I think I am already sold on that we don't need the program generator data side, at least not right now. I'm fine coding in Julia what we need. But for instance we need to learn how to do roll_forward(this_model) and rewind_model(that_other_model) and for that we don't have a proposal yet. We need to find a way to link models to temporal structures. Or maybe it's enough to keep one temporal structure per system and replicate it for each model in our code?

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Jul 10, 2020, 15:43

So we need a way to make models coexist in the same db, while sharing the 'spatial' structure of nodes, units, etc, but having individual temporal structures, is that the requirement more or less?

Depending on the level of generality, this might not be enough. An energy system can also be represented by different agents that all have their own sub-problems. Each agent has different units associated.

We should at least leave the door open for this functionality.

But what do you think of the idea of expressing the optimization flow in a single Julia file - wouldn't it be a bit simpler? What does the data-side system add? I feel that inevitably there would be something you need to do that you can't specify using data but easily could in a Julia script. My idea is that you would structure the current run_spinemodel code to include a single file that contains something like

setup_model(mp)
setup_model(sp)
while optimize_model(mp)
    fix_mp_variables(mp, sp)
    optimize_model(sp)
    while roll_forward(sp)
        optimize_model(sp)
    end
    save_marginal_values(sp)
    rewind_model(sp)
end
write_report(sp)

I have been thinking about this before. I guess the main motivation for creating this model generator, as proposed here and in the wiki, is that we'd have a strong link between the model data object and the actual optimization model. To me, this seems fairly clean.

I agree that simply writing different execution scripts might be easy. However, we'd need (I guess) model_method parameters such as:

In the code, we need to process this information. I guess it'd need to do something like this:

for mod in model()
    $(mod).ext = ...  # somehow splice the model name here
    initialize_temporal_structure($(mod))
    ...
end
if !isempty(indices(is_benders_subproblem))  # the db defines a Benders split
    sp = filter(m -> is_benders_subproblem(model=m) == true, model())
    mp = filter(m -> is_benders_subproblem(model=m) != true, model())  # the master problem
    setup_model($(mp))
    ...

For doing a similar thing for the market sequence we'd need:

And then the mapping as proposed above. Maintaining these scripts could become annoying.

I am already sold on that we don't need the program generator data side, at least not right now. I'm fine coding in Julia what we need. But for instance we need to learn how to do roll_forward(this_model) and rewind_model(that_other_model) and for that we don't have a proposal yet. We need to find a way to link models to temporal structures. Or maybe it's enough to keep one temporal structure per system and replicate it for each model in our code?

I guess the model sequence coming from the actual model object is the main advantage of the model generator proposal. roll_forward(this_model) is literally given in the data.

In the end, I guess the question boils down to whether we want new users to add new algorithms through a script (1) or through the database (2).

(1) to do:

(2) to do:

I say new users, because I think we will provide standard algorithms such as Benders decomposition either way. And in both cases, these standard algorithms can serve as examples of how to do it.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 10, 2020, 15:49

Those are good points, I generally agree. About roll_temporal_structure(this_model): the problem is that roll_temporal_structure currently takes no arguments, as the temporal structure is assumed to be unique and independent of any model. So we need to associate each model with a different temporal structure in Julia, so they can be rolled and 'rewinded' independently. But how do we do that in data? Do we need model__temporal_block, and how does it play along with node__temporal_block, for instance?
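
One possible shape for a per-model version keeps each model's window in its own ext dict, so two models can roll independently (key names and the helper are assumptions, not the actual SpineOpt implementation):

using Dates, JuMP

function roll_temporal_structure!(m::JuMP.Model, roll_forward::Dates.Period)
    m.ext[:window_start] += roll_forward
    m.ext[:window_end] += roll_forward
    # ...regenerate this model's time slices from the new window here...
    m
end

sp = Model()
sp.ext[:window_start] = DateTime(2020, 1, 1)
sp.ext[:window_end] = DateTime(2020, 1, 2)
roll_temporal_structure!(sp, Day(1))   # only sp's window moves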

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 10, 2020, 16:01

One point is that I'm not sure that there is a direct link between the model object in the datastore and the MOI model object... right now, the model object_class really just holds high-level system parameters.

Let's say the model object class did correspond to the MOI model, and the associated parameters were related to the MOI model... if we had multiple MOI models, it is likely they wouldn't have much commonality with respect to parameters. For example, the master problem MOI model might need a "marginal value tolerance" parameter (or whatever) that is very model-specific, yet it could be defined for every model, which doesn't make sense.

That said, there may be parameters common to all models - e.g. convergence tolerance, max iterations etc... but we're not passing those anyway - we have to do that inside Julia right now.

So my feeling is that each model should rather be its own object class, so you can define class-specific parameters. We could treat objects within that class as alternative sets of parameters.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 10, 2020, 16:07

We also need to ask ourselves what the high level requirements are...

Do we need to support multiple rolling structures? (We might, I'm not sure.)

The decomposition structure in Example 2 assumes no rolling at the outer level - but if we wanted this, it wouldn't be just two rolling structures, it would be nested rolling structures - i.e. the model_window of the inner loop might get rolled... but then this is another layer of complexity and it's sounding onerous.

Then, would two side-by-side rolling structures be useful? In my decomposition structure, assuming no rolling at the outer level, all I actually need is an independent set of timeslices that covers the full model window (and no rolling).

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 10, 2020, 16:14

Also - this question of agent-based models... where now a model is actually part of your system... that is a really different kettle of fish. It's not really model linking in the sense that we understand it (but it is certainly linking models).

I'm not sure that the functionality needed for that is the same functionality required for the higher-level linking. But maybe it is - we need to brainstorm that. Different objects related to different models sounds messy. We could perhaps link a model to an object, though, and have a different structure for that. And perhaps the additional data for that model lives elsewhere? A separate model DB, perhaps?

Anyway - the point is we might need different structures for the higher level iterative linking and for the model within a model case.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jul 13, 2020, 08:44

There are two different levels of model linking. One is the level of SpineOpt. There it should be feasible to set up a rule-based structure (model generator functionality). However, as Jody's example demonstrated, it might be wiser to start with code snippets that serve a specific modelling purpose, and learn from those whether and how to generalize. I think this current issue should be just about linking within SpineOpt.

The other level is how to make different modelling frameworks talk to each other. That needs to happen through Spine Toolbox. For instance, in the EU project TradeRES that recently started, we need to make optimization models talk to agent-based models (and there is a multitude of different programming languages at play). Issue 757 is a proposal from last week on how to handle that. It doesn't go into detail on how to make models actually exchange information. Unidirectional links should already be feasible through the current Toolbox workflow. Simple looping in Toolbox we've also discussed before (sub-DAGs), and that should enable most of the use cases. More complex stuff should be possible with scripting - that may be enough for now (or even for the duration of the Spine project). But let's continue that discussion in that issue (https://gitlab.vtt.fi/spine/toolbox/-/issues/757).

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 15, 2020, 08:42

So what is our next move here?

On our last call, we discussed rolling of the investment problem. Thinking about it a little further, in the context of the decomposed structure, I don't believe this is necessary. This is because the master investment problem is actually relatively small compared to the operational sub-problems, since it is completely decoupled. And in fact, solving the master investment problem over a number of years all in one go has many advantages, such as optimizing seasonal storage better and timing investments better.

The master problem objective function is basically:

min sum(
    investment_variable(u, t1) * marginal_value(t2)
    for u in indices(candidate_units)
    for t1 in investment_timeslices
    for t2 in t_in_t(investment_timeslices, operational_timeslices)
)

So depending on your investment temporal resolution (which could be years or months), there aren't many variables at all in the master problem, so it is not necessary to roll it. Rolling of the investment problem is only really required for a conventional investment problem formulation that simultaneously tries to optimise chronological operations (which is not the case in a decomposed structure - that's the point).
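
A runnable JuMP rendering of that objective, with made-up data and t_in_t stubbed as a lookup from investment slices to the operational slices they contain:

using JuMP

candidate_units = [:u1, :u2]
investment_timeslices = [:y2030, :y2035]
t_in_t = Dict(:y2030 => [:h1, :h2], :y2035 => [:h3, :h4])
marginal_value = Dict(:h1 => 3.0, :h2 => 2.5, :h3 => 2.0, :h4 => 1.5)

mp = Model()
@variable(mp, investment_variable[candidate_units, investment_timeslices] >= 0)
@objective(mp, Min,
    sum(investment_variable[u, t1] * marginal_value[t2]
        for u in candidate_units
        for t1 in investment_timeslices
        for t2 in t_in_t[t1]))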

So in this case, it doesn't look like we have a use case for nested rolling?

If the below is the decomposition algorithm we want to implement, what is our next move given the current state of the model? The work I have done is now very far behind the current master and I would need to start from scratch.

Given that, for decomposition, a higher-level rolling structure is not required, I found that all I needed to do for the structure below was create a second set of timeslices and related functions (e.g. to_timeslice) that operate over the entire model window.

Thoughts @manuelma @mihlema @Tasqu

setup_model(mp)
setup_model(sp)
while optimize_model(mp)
    fix_mp_variables(mp, sp)
    while optimize_model(sp)
        roll_forward(sp) || break
    end
    save_marginal_values(sp)
    rewind_model(sp)
end
write_report(sp)

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 15, 2020, 12:21

@mihlema a further thought... if we were to include agent-based models within the decomposed structure above - how would the algorithm look? I haven't worked with agent-based models and I'm not clear on the precise relationship between the agent models and, say, the operations problem - how often are they solved, for example? Once a day? Once per interval? Or are they directly included in the operations sub-problem - i.e. is it a single optimisation?

Perhaps you could write the pseudocode for an agent-based model so we know what would need to be supported if we want to at least leave the door open?

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 15, 2020, 13:17

Thanks @DillonJ that helps a lot. There's no apparent end condition for the outer loop though... How is that managed?

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jul 15, 2020, 13:54

The idea was that optimize_model(mp) would return false when some convergence criteria are met.
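
So the outer while loop terminates on a convergence check inside optimize_model(mp); roughly something like this (the bound fields and the gap formula here are assumptions, not SpineOpt's actual code):

mutable struct MasterState
    lower_bound::Float64
    upper_bound::Float64
end

function converged(s::MasterState; gap_tol = 1e-6)
    gap = (s.upper_bound - s.lower_bound) / max(abs(s.upper_bound), 1e-10)
    gap <= gap_tol
end

# optimize_model(mp) would solve the master, update the bounds, and
# return !converged(state), so the outer `while` stops at convergence.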

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jul 15, 2020, 14:04

I think we need model__temporal_block anyways, right? So we can have static time slices for the master problem spanning the whole horizon, and rolling time slices for the subproblem.

So a first step could be to generate the temporal structure on a per-model basis and adapt all the related functions in SpineOpt so they get a model object as argument. I propose I work on that now.

Once that's done, we can think about how to implement that particular pseudocode above. Is the current version of run_spineopt a particular case of it? Or are they completely separate? How does SpineOpt know it needs to run one or the other? How does SpineOpt know which is the master model object and which is the sub-model object - do we need an extra parameter for that?

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jul 17, 2020, 06:47

One type of agent-based model, and the one closest to SpineOpt, is one where the sub-problems are agents optimizing their individual profits, and a master problem represents the market clearing mechanism. In this case the problem would be solved as often as there are markets to clear.

I would think it would be easier to build a separate model for the purpose, but maybe borrowing constraints from SpineOpt. At least we shouldn't spend much time on an agent-based model in the Spine project.

spine-o-bot commented 3 years ago

In GitLab by @mihlema on Nov 22, 2020, 16:45

changed the description

mihlema commented 3 years ago

I believe we're quite satisfied with the functionality we have at the moment: an arbitrary number of models with separate temporal and stochastic structures. We have a script to support Benders decomposition, and we have the infrastructure to do more. Assigning priority 3 to this issue, in case we want to enhance the current implementation.

manuelma commented 1 year ago

Is this the issue where we should discuss specifying the sequence of solves? @jkiviluo @DillonJ @mihlema what do you think?

There is quite a bit of useful background here in my opinion. My concern at the moment is which way we should go:

  1. We could try and develop a data structure so that the user can specify their algorithm through data, including any sequence of solves, fixing variables from previous models' results, looping etc.
  2. We could progress the current approach of hard-coding algorithms like Benders and MGA, which are activated by specifying method parameter values.

I'm inclined to stay with 2. Option 1 seems amazing but very hard to get right; more likely we will end up with something pretty unusable. Or what do you think?

DillonJ commented 1 year ago

I think there is a difference between an optimisation algorithm (e.g. Benders) and workflows that implement a higher-level modelling objective. I don't think we would want to mess with the core code to implement some very specific user workflows.

I think that rather than thinking of linking sequences of solves within SpineOpt we should rather think of sequences of solves within Julia. SpineInterface and SpineOpt already have rich functionality in terms of accessing data, that we could leverage for some specific solve sequences that a user might want to implement. This way, we could get all the performance benefits without crowding the core code with a wide variety of workflows. With this in mind, we could focus on what additional interfaces and functionalities SpineInterface and SpineOpt could offer to help with this. For example, SpineOpt could expose the solution/output that is already in memory.

Edit: Also thinking of some sort of callback option, if a custom Julia workflow wanted to do something between SpineOpt rolling windows.

manuelma commented 1 year ago

@DillonJ so I understand you too want to stay away from building any algorithm through data?

DillonJ commented 1 year ago

@DillonJ so I understand you too want to stay away from building any algorithm through data?

Just seeing this now that @datejada has assigned this to me :-)

I think it would be too messy/clunky, and as we've seen with Benders, these things are so specific that it's very unlikely you would achieve any benefit from implementing some sort of generic data infrastructure to make this kind of thing possible.

I guess it would help to have a couple of use cases here to test the approaches?

My feeling is that linking solves in Julia would achieve the best tradeoff between ease of implementation and performance.

datejada commented 1 year ago

Four use cases for this action:

Two current use cases:

jkiviluo commented 1 year ago

We've just been implementing this for FlexTool and I like the way it works. To do it for SpineOpt, there need to be some additional parameters in the database that would help the wrapper code that runs SpineOpt to decide how to roll (possibly nested) and what to keep from each solve. Translating https://github.com/irena-flextool/flextool/issues/57 for SpineOpt:

jkiviluo commented 1 year ago

By the way, I think that building the nested model structure is the primary-level thing. Then, utilizing Benders in some of the models is at the secondary level, and can be done only where we have defined how to do Benders on that problem. Benders can sometimes remove the need for nesting models, but it's still a model-level feature (while the nested model structure is above that).

tarskul commented 10 months ago

@jkiviluo shared a link with documentation on the implementation in Flextool: https://irena-flextool.github.io/flextool/how_to/#how-to-use-nested-rolling-window-solves-investments-and-long-term-storage