Closed: @ChaelKruip closed this issue 9 years ago.
YES! I am very much in favour of doing this.
In addition to making the calculation more transparent, it also shifts the research effort from trying to find the `capacity_credit` to finding reliable `load_profiles` (which we already have in many cases).
I started working on this, but I'm no longer convinced it's the right thing to do:
My preference would be to keep the LOLE calculation in ETEngine, but move it out of GraphApi into a dedicated LOLE calculation module. Otherwise, we'll move the core feature to Merit but still need just as much code in ETEngine to set it up.
However, my argument is a little less compelling if the feature is going to be needed outside of ETEngine (e.g. merit-convergence, DECC project, etc).
Discussed with @ChaelKruip, and this will be added to Merit after all. The feature may be desirable for the DECC project, and must-run data is already available in the M/O without having to actually run the calculation.
I think the general challenge here is that we have different 'applications'/'modules' nowadays, and they all need to communicate with each other, e.g.:

Before, we could just send the required data over, and that worked just fine, but I think it is time to think about a more structural approach to this. What do you think, @ChaelKruip and @antw?
> I think the general challenge here is that we have different 'applications'/'modules' nowadays, and they all need to communicate with each other.
I fully agree. I think we need to ensure that communications happen in a uniform way for all modules (and the ETM).
> Before, we could just send the required data over, and that worked just fine, but I think it is time now to think about a more structural approach to this....
In time, I'd very much like to be rid of the "monolithic" ETEngine; we've talked in the past about turning ETEngine into a RubyGem, but I think I'd like to go further and extract major features to entirely separate processes. This gives us (1) strong separation of concerns, (2) easier testing, (3) the possibility for each component to use different (more appropriate) languages without having to rewrite everything.
For example:
Alternatively, Celluloid may provide the means to do it all in one big process, but I don't know much about that at the moment.
I think this dream scenario is some way off – and I certainly don't have all the answers yet – but hopefully ETLoader will be the first step towards it. :sunglasses:
I like the direction this is going! One question though: why would ETSource coincide with the role of 'coordinator'? Is there a specific reason not to split off ETSource as a separate module as well and have a dedicated 'coordinator' handle the interfacing between everything?
> One question though: why would ETSource coincide with the role of 'coordinator'? Is there a specific reason not to split off ETSource as a separate module as well and have a dedicated 'coordinator' handle the interfacing between everything?
The diagram doesn't make this clear, but the "co-ordinator" and "ETSource" remain separate projects (or "modules", "repositories", whatever you want to call them). However, ETSource doesn't need to be a separate operating system process. I see the co-ordinator as loading the ETSource data, and providing sub-sets of that data to the other processes:
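That arrangement could be sketched roughly as below. This is purely illustrative: the `Coordinator` class, `subset_for` method, and the shape of the data set are assumptions for the sake of the example, not the actual ETSource API.

```ruby
# Hypothetical sketch of the co-ordinator idea: one process loads the full
# ETSource data set once, then hands each component only the sub-set of
# data it declares an interest in.
class Coordinator
  def initialize(dataset)
    # e.g. the parsed ETSource documents, keyed by topic
    @dataset = dataset
  end

  # Return only the parts of the data set matching the given keys.
  def subset_for(*keys)
    @dataset.select { |key, _| keys.include?(key) }
  end
end

dataset = {
  merit:   { load_profiles: [0.4, 0.6] },
  graph:   { nodes: [:coal_plant] },
  queries: { lole: 'count hours in which demand exceeds capacity' }
}

coordinator = Coordinator.new(dataset)
merit_data  = coordinator.subset_for(:merit)
# merit_data now holds only the Merit-relevant slice of the data set
```

The point of the design is that components never see (or depend on) data outside their own slice, which keeps their interfaces narrow.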
> I see the co-ordinator as loading the ETSource data, and providing sub-sets of that data to the other processes
That makes a lot of sense to me now :smile:
Excellent work and write up, @antw. I especially like that little crown on top of the coordinator. :smile:
One question though:
Would it be possible for the coordinator to get information from other sources than ETSource? This is required when we want to run Merit for other models (e.g. DECC).
> Would it be possible for the coordinator to get information from other sources than ETSource?
I haven't really thought any of this through properly, it was really just a brain-dump of how I think things could be organised eventually. What that diagram depicts is an ETEngine replacement.
If all of the processes have clearly-defined public interfaces, we could have a separate DECC web API (Sinatra, Graph, whatever) which talks directly to the Merit process.
A question that arises for me: would it be the responsibility of the coordinator to coordinate requests from an external app/vendor to just one of the projects/modules/services, or would they be able to talk to one directly?
> would it be the responsibility of the coordinator to coordinate requests from an external app/vendor to just one of the projects/modules/services, or would they be able to talk to one directly?
I don't know. I have literally put as much thought into this as deciding what meal to eat tonight. :laughing:
I would not expect each process to have its own web API, so either we'd write them as and when other projects needed them, or we'd have a way to tell the co-ordinator which components we want to use. The idea is that the "web API" in the above diagram is a replacement for the Rails part of ETEngine, and the co-ordinator is new, and manages calculating the graph, deciding which queries are to be run, etc.
Anyway, back to the real issue:
@ChaelKruip One minor problem I've hit is that the current LOLE calculation and peak electricity demand query can be run on both the `present` and `future` graphs, but Merit is only set up on the `future` graph. I think this means we're going to have to also set up (but not run) the M/O for the `present` graph? I'm guessing we can't do away with the `present` values for these two queries?
> One minor problem I've hit is that the current LOLE calculation and peak electricity demand query can be run on both the `present` and `future` graphs
I discussed this with @jorisberkhout and we think that only the future value is relevant for now. I suggest keeping the 'future only' loader intact for the time being. I'll think about possible applications / uses where we might need Merit / LOLE to run on both `present` and `future` graphs, but right now I don't think we need it.
Closing.
The LOLE calculation on ETE currently compares the reliable installed capacity to the demand curve for every hour and counts the hours that demand exceeds capacity. This calculation can be incorporated into Merit quite naturally as all the required information is known to Merit.
It would be best, though, if the LOLE functionality could be called independently of the merit order calculation, for reasons of performance and clarity.
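The calculation described above is simple to sketch. A minimal Ruby version, assuming an hourly demand curve and a single reliable-capacity figure (the `lole` name and its signature are illustrative, not Merit's actual API):

```ruby
# Hypothetical LOLE sketch: count the hours in which demand exceeds the
# reliable installed capacity, per the description above.
#
# demand_curve      - hourly demand values (e.g. MW), one per hour
# reliable_capacity - reliable installed capacity in the same unit
def lole(demand_curve, reliable_capacity)
  demand_curve.count { |demand| demand > reliable_capacity }
end

demand = [80.0, 95.0, 110.0, 120.0, 90.0, 70.0] # six example hours, in MW
puts lole(demand, 100.0) # => 2 (the 110.0 and 120.0 hours)
```

Because everything needed is just the demand curve and a capacity figure, this can indeed run as a standalone call without triggering the full merit order calculation.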