PyPSA / pypsa-ariadne

High resolution, sector-coupled model of the German Energy System
https://ariadneprojekt.de/en/model-documentation-pypsa/

Kopernikus Project Ariadne - Whole-System Model PyPSA-DE

This repository contains the whole-system model PyPSA-DE for the Kopernikus project Ariadne, based on the PyPSA toolbox and the PyPSA-Eur dataset. The model represents Germany with high geographic resolution, full sector coupling, and integration into the European energy system.

This repository contains the entire scientific project, including data sources and code. The philosophy behind this repository is that no intermediate results are included; all results are computed from raw data and code.

Clone the repository - including necessary submodules!

To start, you need to clone the PyPSA-Ariadne repository. Since the repository relies on Git Submodules to integrate the PyPSA-Eur dataset as a basis on which to expand, you need to include the --recurse-submodules flag in your git clone command:

git clone --recurse-submodules git@github.com:PyPSA/pypsa-ariadne.git

Alternatively, if you have already cloned the repository without the --recurse-submodules flag, you can run the following command:

git submodule update --init --recursive

This command first initializes your local configuration file, then fetches all data from the project(s) declared as submodule(s) (in this case, PyPSA-Eur) along with any nested submodules, and finally checks out the PyPSA-Eur commit pinned in the PyPSA-Ariadne repository.
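Put together, a minimal sketch of this alternative sequence looks like the following (the directory name pypsa-ariadne is the default clone target):

git clone git@github.com:PyPSA/pypsa-ariadne.git
cd pypsa-ariadne
git submodule update --init --recursive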

You can fetch and merge any new commits from the remote of the submodules with the following command:

git submodule update --remote

More information on Git Submodules can be found in the official Git documentation.

Getting ready

You need conda or mamba to run the analysis. Using mamba, you can create an environment from which the workflow can be run:

mamba env create -f environment.yaml
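Afterwards, activate the environment before running any snakemake commands. The environment name is defined in environment.yaml; pypsa-ariadne is assumed here:

mamba activate pypsa-ariadne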

For external users: Use config.public.yaml

The default workflow configured for this repository assumes access to the internal Ariadne2 database. Users that do not have the required login details can run the analysis based on the data published during the first phase of the Ariadne project.

This is done by providing an additional config file to the snakemake workflow. For every snakemake COMMAND specified in the instructions below, public users should use:

snakemake --configfile=config/config.public.yaml COMMAND
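For example, to run the full workflow (the snakemake -call invocation described further below), the public-user call would be:

snakemake --configfile=config/config.public.yaml -call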

The additional config file specifies the required database, model, and scenario names for Ariadne1. If public users wish to edit the default scenario specifications, they should change scenarios.public.yaml instead of scenarios.manual.yaml. More details on using scenarios are given below.

For internal users: Provide login details

The snakemake rule retrieve_ariadne_database logs into the internal Ariadne IIASA Database via the pyam package. The credentials for logging into this database have to be stored locally on your machine with ixmp4. To do this, activate the project environment and run:

ixmp4 login <username>

You will be prompted to enter your <password>.

Caveat: These credentials are stored on your machine in plain text.

To switch between internal and public use, the command ixmp4 logout may be necessary.

Run the analysis

Before running any analysis with scenarios, the rule build_scenarios must be executed. This will create the file config/scenarios.automated.yaml, which combines input data and CO2 targets from the IIASA Ariadne database with the specifications from the manual scenario file. (The manual scenario file is specified in the default config.yaml via the key run:scenarios:manual_file and is by default located at config/scenarios.manual.yaml.)

snakemake -call build_scenarios -f
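Public users combine this with the additional config file described above:

snakemake --configfile=config/config.public.yaml -call build_scenarios -f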

Note that the hierarchy of scenario files is the following: scenarios.automated.yaml > (any explicitly specified --configfiles) > config.yaml > config.default.yaml. Changes in the file scenarios.manual.yaml are only taken into account if the rule build_scenarios is executed.
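To place your own overrides between scenarios.automated.yaml and config.yaml in this hierarchy, pass an extra config file on the command line (config/my_overrides.yaml is a hypothetical file name used here for illustration):

snakemake --configfile=config/my_overrides.yaml -call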

For the first run, open config.yaml and set

enable:
    retrieve: true # set to false once initial data is retrieved
    retrieve_cutout: true # set to false once initial data is retrieved

and then run from the main repository directory:

snakemake -call

This will run all analysis steps to reproduce results.
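Once the initial data has been retrieved, the retrieve flags in config.yaml can be set back to false for subsequent runs to avoid re-downloading, as the inline comments above indicate:

enable:
    retrieve: false
    retrieve_cutout: false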

To generate a PDF of the dependency graph of all steps (build/dag.pdf), run:

snakemake -c1 --use-conda -f dag

Repo structure

Differences to PyPSA-Eur

License

The code in this repo is MIT licensed, see ./LICENSE.md.