Thanks @trevorb1, I'll have a look at this tomorrow. Personally I'd say all the custom node stuff would merit its own script, but I think it is too large an effort to make those changes within the current project scope.
> Personally I'd say all the custom node stuff would merit its own script, but I think it is too large an effort to make those changes within the current project scope.
Super fair, and I totally agree. I added the custom node stuff! I still have to do a run testing the custom nodes against the existing code to make sure nothing has changed, but the bulk of the code should be ready for review :) Thanks!
@trevorb1 I like the way you've set things up by splitting into individual scripts rather than just functions (what I did). I think this could be a good base for amending other larger scripts as well whenever we have the time for that, though likely not within the current project. Two main comments for now; I may have more at a later point:
@trevorb1 I'm trying to figure out the necessity/benefit of the below section in main.py:

[code snippet from main.py omitted]

Perhaps this belongs in constants.py? (If we decide on that, let's discuss on Monday.) Also, I'm not sure why the Indian nodes are selected as (the only) custom nodes here.

The results are not in line with 'master', so this requires some debugging. Perhaps the projected demand values are an order of magnitude off? The first figure is based on your PR and the second on master.
@maartenbrinkerink Thanks for flagging the difference between the current results and my updates. I will debug that and ping you when it's fixed.
Why is the if/else required here? Shouldn't the workflow function in the same way every time, i.e. either read the paths from the preprocess snakefile or define them locally here?
I switched the demand projection rule from the `shell` directive to the `script` directive in snakemake (see here). An advantage of running the script directly through snakemake is that snakemake handles the parsing of inputs and makes the rule calls clearer. A disadvantage is that it makes the scripts harder to troubleshoot and test, in my opinion, because the scripts rely on snakemake for their inputs/outputs, which are not present if you run a script by itself (just my experience, but using the snakemake debug feature has been a little awkward).
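For context, a minimal sketch of what that switch looks like in a Snakefile; the rule name, paths, and params here are placeholders, not the actual definitions in this PR:

```python
# Hypothetical rule using the `script` directive: snakemake passes input/output/params
# to the Python script via an injected `snakemake` object instead of building a CLI call.
rule demand_projection:
    input:
        "resources/raw_demand.csv",
    output:
        "results/projected_demand.csv",
    params:
        scenario="default",
    script:
        "scripts/demand_projection.py"
```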
To be able to run the script independently, I added the if/else to check whether the script is being run through snakemake or not. If it is running through snakemake (i.e. executing the workflow), all input/output/params are grabbed from the rule definition. If the script is run independently (i.e. for debugging/testing), the input/output/params need to be supplied by the user (i.e. hardcoded, like you mentioned). The PyPSA-Eur team developed a nicer implementation that would be nice to incorporate in the future, here. Note, this also relates to issue #165.
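As an illustration of that guard, a minimal sketch, assuming hypothetical file names and a hypothetical `scenario` param (not the PR's actual code):

```python
"""demand_projection.py -- sketch of a script that runs under snakemake or standalone."""

import pandas as pd

if __name__ == "__main__":
    if "snakemake" in globals():
        # Executed via the workflow: snakemake injects a `snakemake` object
        # carrying the rule's input/output/params.
        input_file = snakemake.input[0]
        output_file = snakemake.output[0]
        scenario = snakemake.params.scenario
    else:
        # Executed directly (debugging/testing): paths and params are hardcoded.
        input_file = "resources/raw_demand.csv"
        output_file = "results/projected_demand.csv"
        scenario = "default"

    demand = pd.read_csv(input_file)
    # ... demand projection logic would run here ...
    demand.to_csv(output_file, index=False)
```

PyPSA-Eur's `mock_snakemake` helper replaces the hardcoded else-branch by building a `snakemake` object for a given rule directly from the Snakefile.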
@maartenbrinkerink I believe the old and new results match now! A quick point: the logic relating to the peak demand ratio isn't actually used at all; see here. Is this a mistake, or is this just old code? If it's old code I will just delete it in this PR! Thanks! :)
Hi @trevorb1. The workflow runs fine on my machine, so I will merge the PR! Regarding peak demand: this is unused code. Initially we were playing with the idea of allowing different scaling for total and peak demand to mimic e.g. demand response technologies, but we never got around to that. Feel free to remove it.
Description
In this PR I have refactored the demand module. The logic has not changed; I have only made the code more readable.
What is not yet included in the refactor is the custom node logic. @maartenbrinkerink, do you think it's best to keep the custom node logic with the main scripts, or should we handle all custom nodes at once at the end of the workflow?
Issue Ticket Number
na
Documentation
na