TulipaEnergy / TulipaEnergyModel.jl

An energy system optimization model that is flexible, computationally efficient, and academically robust.
Apache License 2.0

Define scenario specification workflow #414

Open abelsiqueira opened 8 months ago

abelsiqueira commented 8 months ago

How do we expect scenarios to work?

Discussing some use cases with @gnawin.


Example: sensitivity study

abelsiqueira commented 7 months ago

Thinking about the scenario variant specification, loosely based on the diagram in #415. The main features that I see are:

Suggestion using a configuration file:

Example in YAML:

variant:
  - name: Double cap for ocgt->demand
    pre-graph:
      scale-capacity-flow:
        from: ocgt
        to: demand
        factor: 2
  - name: Variant with double demand
    pre-cluster:
      scale-asset-time-series:
        asset: demand
        factor: 2

or in TOML:

[[variant]]
name = "Double cap for ocgt->demand"
[variant.pre-graph.scale-capacity-flow]
from = "ocgt"
to = "demand"
factor = 2.0

[[variant]]
name = "Variant with double demand"
[variant.pre-cluster.scale-asset-time-series]
asset = "demand"
factor = 2.0
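
A minimal sketch of how a spec like the one above could be consumed, assuming hypothetical hooks; scale_capacity_flow!, scale_asset_time_series!, and the data argument are placeholders, and only the TOML standard library call is real:

using TOML

# Read the variant spec and apply each transformation.
# In a real workflow the stage key ("pre-graph", "pre-cluster") would decide
# *when* each transformation runs, not just which function is called.
function apply_variants(path, data)
    spec = TOML.parsefile(path)
    for variant in get(spec, "variant", [])
        @info "Applying variant" variant["name"]
        for (stage, transforms) in variant
            stage == "name" && continue
            for (transform, args) in transforms
                if transform == "scale-capacity-flow"
                    scale_capacity_flow!(data, args["from"], args["to"], args["factor"])  # hypothetical hook
                elseif transform == "scale-asset-time-series"
                    scale_asset_time_series!(data, args["asset"], args["factor"])  # hypothetical hook
                else
                    error("Unknown transformation: $transform")
                end
            end
        end
    end
    return data
end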

Suggestion using a Julia file:

Example:

@variant "Double cap for ocgt->demand" begin
    @create_graph begin
        graph["ocgt", "demand"].capacity *= 2
    end
end

@variant "Double demand" begin
    @cluster begin
        asset_profile["demand"] *= 2
    end
end

The file-based approach is probably easier to implement for a small number of options, but harder to extend. The code-based approach needs more thought.
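
For comparison, the code-based route could also be prototyped without a macro DSL by registering plain closures per stage; a rough sketch, where Variant, noop, and the two hooks are hypothetical names:

# Variants as named collections of per-stage callbacks, roughly
# equivalent to the @variant blocks above.
struct Variant
    name::String
    pre_graph::Function    # called just after the graph is built
    pre_cluster::Function  # called just before clustering
end

noop(_) = nothing

variants = [
    Variant("Double cap for ocgt->demand",
        graph -> (graph["ocgt", "demand"].capacity *= 2),
        noop),
    Variant("Double demand",
        noop,
        profiles -> (profiles["demand"] .*= 2)),
]

A run would then call v.pre_graph(graph) and v.pre_cluster(profiles) at the corresponding points of the workflow; the macros above could be syntactic sugar over exactly this kind of structure.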

gnawin commented 7 months ago

This already looks great to me.

Don't allocate memory (don't copy everything) for each one of them. This assumes running one variant at a time, so maybe that is the opposite of what we want?

Currently with COMPETES, we only do one at a time. So that would be our use case, unless we want to support parallel runs?

The file-based approach is probably easier to implement for a small number of options, but harder to extend.

For our use case, we only run a small number (<30, mostly around 10). Unless we support parallel runs, I can hardly imagine we will do more.

suvayu commented 7 months ago

These are higher-level points about how a user might want to specify scenarios, not so relevant to the lower-level implementation of scenarios.


These are what I can think of at the moment, but I have no idea how common/important they might be.

clizbe commented 6 months ago

It sounds like @suvayu is focusing on how to create a database for a single run, whereas @abelsiqueira is maybe thinking of specifying multiple runs simultaneously.

I see a few options:

1) Create data for each scenario, then have a way of running them all at once (or in series).
2) Also have an option to run multiple scenarios that only differ by 1 or 2 factors, such as a "wind production" set, which is the same as the base case but with varying levels of wind production.
3) Have a "scenario settings" file that can adjust certain common features, applied across multiple runs, such as "wind scalar" or "constraint X active" (see the sketch after this list). If we work out the ability, the file could start small and be expanded as people do analyses. This one is a bit tricky because it adds another level of abstraction that could be abused. Do you specify it in the database or in the scenario settings?
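
For option 3, a hypothetical sketch of what a settings overlay could look like; none of these keys exist in the package yet and run_scenario is a placeholder:

# A "scenario settings" file reduced to a dictionary of common knobs,
# merged onto the base case before each run.
base = Dict("wind_scalar" => 1.0, "constraint_X_active" => true)

scenarios = [
    Dict("name" => "low wind",  "wind_scalar" => 0.5),
    Dict("name" => "high wind", "wind_scalar" => 1.5),
    Dict("name" => "no constraint X", "constraint_X_active" => false),
]

for overrides in scenarios
    settings = merge(base, overrides)   # scenario values win over the base case
    # run_scenario(settings)            # hypothetical entry point
    @info "Would run" settings["name"] settings["wind_scalar"]
end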

datejada commented 6 months ago

Blocked by #289 and #547

clizbe commented 5 months ago

Let's have a discussion about this soon.

clizbe commented 5 months ago

Related #56

clizbe commented 1 month ago

> This already looks great to me.
>
> Don't allocate memory (don't copy everything) for each one of them. This assumes running one variant at a time, so maybe that is the opposite of what we want?
>
> Currently with COMPETES, we only do one at a time. So that would be our use case, unless we want to support parallel runs?
>
> The file-based approach is probably easier to implement for a small number of options, but harder to extend.
>
> For our use case, we only run a small number (<30, mostly around 10). Unless we support parallel runs, I can hardly imagine we will do more.

I agree that our current use case is a limited number of hand-crafted scenarios, but that's a limitation. I think enabling batch runs (whether in parallel or in series) will be an important new feature for better analyses. Sebastiaan is also pushing the idea of "having the model run constantly, exploring the scenario space," but we'll see what that turns into.
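
A batch runner along these lines could start as a simple loop over variants, with an opt-in threaded path for parallel runs; a hedged sketch, with run_variant standing in for whatever entry point the package ends up exposing:

# Run every variant, either in series or on multiple threads.
function batch_run(variants; parallel = false)
    results = Vector{Any}(undef, length(variants))
    if parallel
        Threads.@threads for i in eachindex(variants)
            results[i] = run_variant(variants[i])  # hypothetical entry point
        end
    else
        for i in eachindex(variants)
            results[i] = run_variant(variants[i])
        end
    end
    return results
end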