willu47 closed this 4 years ago
Hi @willu47 - is this in addition to the SelectedResults.csv output or is this instead of that? A lot of OSeMOSYS analysis out there is based on the existing output structure so I'm not sure we want to replace it, but adding the tables would be a good addition.
Hi @tniet, these new tables are an add-on at the moment. They make automated testing and processing of the results much easier...
Hi @willu47 - that's probably good, at least for now.
@tniet or @abhishek0208 - This is now ready for review and to merge into master if you are happy
Note that with these changes to the results writing, we are now able to demonstrate that the long and short versions of the code produce effectively identical results: there are some small rounding differences between the two formulations, but these are negligible.
Hi @willu47 - Looks good to me. Shall I do the merge?
And are there issues we can close from this as well? I think it closes #28 as well as #26?
Hi @tniet - nope, neither of those issues can be closed - #28 actually requires the re-introduction of the `TechWithCapacityToMeetPeakTS` parameter together with the constraint which used it. And the use of conditional operators in the results tables only affects the writing of results, not the generation of equations, so #26 is still open. I'll check through the other issues and tag any relevant ones in the description above.
Hi @willu47 - Sounds good. Not sure where those two numbers for issues came from now that I look at it again. I'll do the merge.
Table Format
Each table is produced using the following syntax:
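In outline, a GNU MathProg `table ... OUT` statement of roughly this shape is used (a sketch of the general form, not the exact statement in the model file):

```
table <name> {<indices> in <SETS>: <condition>} OUT "CSV" "<path>.csv":
    <index> ~ <COLUMN_NAME>, ..., <expression> ~ VALUE;
```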
For example:
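A plausible concrete instance, writing annual emissions (the table name, sets, and variable follow OSeMOSYS conventions, but this exact statement is an assumption, not copied from the model file):

```
table AnnualEmissionsResult
    {r in REGION, e in EMISSION, y in YEAR: AnnualEmissions[r, e, y] <> 0}
    OUT "CSV" "results/AnnualEmissions.csv":
    r ~ REGION, e ~ EMISSION, y ~ YEAR, AnnualEmissions[r, e, y] ~ VALUE;
```

The condition in the index set skips rows whose value is zero, which is where the file-size saving comes from.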
This produces CSV files in the `results` directory with headers matching the index: `REGION, EMISSION, YEAR, VALUE` for the above example. The conditional expressions in the initial index have been added to prevent writing of zero values, which reduces file size.
Identical results
The tests have been restructured and can be run using the Python package pytest. The tests contain a single fixture which sets up and runs the model, dumping the results into a temporary directory. Each test then compares the data written out from the model for a single parameter (corresponding to a single CSV file) with a canonical version hardcoded in the body of the test code. The tests can be viewed in `tests/test_gnu_mathprog.py`.