PyPSA / pypsa-eur

PyPSA-Eur: A Sector-Coupled Open Optimisation Model of the European Energy System
https://pypsa-eur.readthedocs.io/

Prepare data for pathway optimisation with sector coupling #573

Open nworbmot opened 5 years ago

nworbmot commented 5 years ago

Heating

Other studies

Fraunhofer IEE (formerly IWES) has cost assumptions over time for vehicles, heating, etc., as does the Palzer PhD thesis; see the references in the Synergies of Sector Coupling paper. For example, http://www.energiesystemtechnik.iwes.fraunhofer.de/de/projekte/suche/laufende/interaktion_strom_waerme_verkehr.html compares diesel versus petrol versus EV costs for DE until 2050 (see Figure 0-9; costs in Table 10-35, but beware that the battery costs are too high since the study is old).

nworbmot commented 4 years ago

This is now done for electricity and building heating in scripts/add_existing_baseyear.py. Still to do:

nworbmot commented 4 years ago

For squashing assets with the same attributes to reduce the number of optimisation variables, my suggestion would be:

For the aggregation, e.g. all generators with the same bus, marginal cost, efficiency, p_min_pu and p_max_pu would be aggregated to a single representative. The existing p_nom values would be added up. If one of the generators is p_nom_extendable, then the representative is also p_nom_extendable.

For the disaggregation, all the new capacity goes to the generator with p_nom_extendable (if more than one is p_nom_extendable, it can be split, but in our use cases at most one should be). The dispatch n.generators_t.p is split among the generators in proportion to their p_nom.
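The proportional split of dispatch can be sketched with plain pandas. This is only an illustration: `disaggregate_dispatch` and the toy generator names are hypothetical, not part of PyPSA.

```python
import pandas as pd

def disaggregate_dispatch(p_rep, p_nom):
    """Split a representative's dispatch time series across the
    original generators in proportion to their nominal capacities."""
    weights = p_nom / p_nom.sum()
    # one dispatch column per original generator, scaled by its weight
    return pd.DataFrame({gen: p_rep * w for gen, w in weights.items()})

# toy example: 30 MW and 10 MW generators sharing one representative
p_nom = pd.Series({"gas-1990": 30.0, "gas-2000": 10.0})
p_rep = pd.Series([20.0, 40.0])  # representative dispatch in two snapshots
split = disaggregate_dispatch(p_rep, p_nom)
```

The same weights could be applied column-wise to a full `n.generators_t.p` frame once the mapping from generators to representatives is known.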

Here's code for identifying the mapping from generators to representative aggregated generators:

import pandas as pd

# attributes that must match for two components to be merged
attributes = {
    "Link": pd.Index(["marginal_cost", "p_max_pu", "p_min_pu"])
            .append(n.links.columns[n.links.columns.str.contains("efficiency")])
            .append(n.links.columns[n.links.columns.str.contains("bus")]),
    "Generator": pd.Index(["marginal_cost", "bus", "p_max_pu", "p_min_pu", "efficiency"]),
}

# store the mapping from each component to its representative here
mapping = {}

for c in n.iterate_components(attributes.keys()):

    mapping[c.name] = pd.Series(dtype=object)
    selection = c.df[attributes[c.name]].copy()

    # for time-dependent attributes, just compare the time average
    for attribute in attributes[c.name]:
        if attribute in c.pnl and not c.pnl[attribute].empty:
            selection.loc[c.pnl[attribute].columns, attribute + "_t"] = c.pnl[attribute].mean()

    for i in selection.index:
        found = False
        for j in mapping[c.name].index:
            # names are assumed to differ only in a 4-character build-year suffix
            if i[:-4] != j[:-4]:
                continue
            if selection.loc[i].equals(selection.loc[j]):
                print(i, "is the same as", j)
                found = True
                break
        if not found:
            mapping[c.name][i] = i
        else:
            mapping[c.name][i] = j

The new representative components are given by mapping[c.name].unique(). All others should be dropped and their p_nom added to the representative component; if any member of a group is p_nom_extendable, the representative should be marked p_nom_extendable too.
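Applying the mapping could look roughly like the following sketch, using a plain DataFrame in place of `c.df` and made-up component names (`apply_mapping` is not a PyPSA function):

```python
import pandas as pd

def apply_mapping(df, mapping):
    """Collapse components onto their representatives: sum p_nom,
    mark the representative extendable if any group member is, and
    drop the non-representatives. `mapping` maps each component
    name to the name of its representative."""
    grouped = df.groupby(mapping)
    agg = df.loc[mapping.unique()].copy()
    agg["p_nom"] = grouped["p_nom"].sum()
    agg["p_nom_extendable"] = grouped["p_nom_extendable"].any()
    return agg

# toy example: gas-2000 was found identical to gas-1990
df = pd.DataFrame({"p_nom": [30.0, 10.0, 50.0],
                   "p_nom_extendable": [False, True, False]},
                  index=["gas-1990", "gas-2000", "wind-2010"])
mapping = pd.Series({"gas-1990": "gas-1990",
                     "gas-2000": "gas-1990",
                     "wind-2010": "wind-2010"})
agg = apply_mapping(df, mapping)
```

In a real network the dropped rows would be removed with n.mremove and the remaining time series reassigned, but the groupby logic above is the core of the aggregation step.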