kyllingstad opened this issue 4 months ago
An `execution` represents one co-simulation run, so maybe just rename it to `cosim::run`?
The `simulator`/`slave` thing is tricky. They represent the same entity, but on different levels and for different users:

- `slave` is the lowest-level one, and it abstracts away the interface or protocol that lies between libcosim and the external subsimulator. That is, it provides a unified interface for subsimulators based on FMI 1.0, FMI 2.0, FMI 3.0 (we hope!), and Proxy-FMU. Its users are people who use libcosim to run co-simulations.
- `simulator` is the interface seen by co-simulation algorithms, observers, and manipulators. It has some extra features, like the ability to add variable modifiers. Also, it gets and sets variables individually, but the values are cached and bundled before they are passed on to `slave` to improve performance (we hope!). Its users are algorithm/observer developers.

A rough, hypothetical sketch of these two levels is shown below.
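Everything in this sketch is invented for illustration; the names and signatures do not reflect the actual libcosim headers, only the division of responsibilities described above.

```cpp
// Hypothetical, simplified illustration only.
#include <cstdint>
#include <functional>
#include <vector>

// Lowest level: unified wrapper around the external subsimulator protocol
// (FMI 1.0/2.0/3.0, proxy-fmu, ...). Variables are exchanged in bulk,
// one call per data type.
class slave
{
public:
    virtual ~slave() = default;
    virtual void do_step(double current_time, double step_size) = 0;
    virtual void get_real_variables(
        const std::vector<std::uint32_t>& refs,
        std::vector<double>& values) = 0;
    virtual void set_real_variables(
        const std::vector<std::uint32_t>& refs,
        const std::vector<double>& values) = 0;
    // ...integer/boolean/string variants omitted...
};

// Level seen by algorithms, observers, and manipulators: per-variable access
// plus variable modifiers. Values are cached and bundled before being
// forwarded to the underlying slave.
class simulator
{
public:
    virtual ~simulator() = default;
    virtual double get_real(std::uint32_t ref) = 0;              // served from the cache
    virtual void set_real(std::uint32_t ref, double value) = 0;  // buffered in the cache
    virtual void set_real_modifier(
        std::uint32_t ref, std::function<double(double)> modifier) = 0;
    virtual void do_step(double current_time, double step_size) = 0;  // flushes buffered
                                                                      // values to the slave
};
```

The point of the sketch is just the split: bulk, protocol-level transfer at the bottom; per-variable access, caching, and modifiers on top.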
Here are some ideas:

1. Rename `simulator` to `subsim` and `slave` to `simulator`. Rationale: Both are interfaces to simulators, but the former is used in a context where the simulator is always a sub-simulator within a co-simulation. The latter interface is used before it gets added to a co-simulation. At that point it is “just a simulator”, and in principle any kind of simulator could hide behind it – even an entire co-simulation.[^1] Also, `cosim::subsim` doesn't look too bad.
2. Drop the current `simulator` interface, and rename `slave` to `simulator` and use it within algorithms too. The “extra features” provided by `simulator` could instead be delegated to a wrapper which the algorithm can opt into – something along the lines of `modifiable_simulator_variable_cache`, only a bit terser. (I haven't thought enough about this to know that it will work, though.) Rationale: It would be nice to have just one interface for one concept.

[^1]: This is an interesting idea which was put into formal mathematical terms by Gomes et al. (2018) (Sec. 4.2). Perhaps the class (currently called) `execution` could implement the interface (currently called) `slave`?
For `model`, I think my preference would be to follow the newer convention established by `function` and `function_type`. So whatever `slave` ends up being called, `model` would be renamed to the same with a `_type` suffix – for example `slave_type`.
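To make the blueprint/instance parallel concrete, here is a hypothetical sketch. The name `slave_type` and the `instantiate()` method are invented for illustration, mirroring the `function_type`/`function` convention; they are not existing libcosim API.

```cpp
// Hypothetical naming sketch: a *_type is the blueprint, and calling
// instantiate() yields an independent instance of it.
#include <memory>
#include <string_view>

class slave;  // the instance: one subsimulator participating in a co-simulation

class slave_type  // the blueprint (currently called `model`), e.g. a loaded FMU
{
public:
    virtual ~slave_type() = default;

    // Multiple instances can be created from the same blueprint.
    virtual std::unique_ptr<slave> instantiate(std::string_view instance_name) = 0;
};
```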
Side comment: would it make sense to make these changes while also incorporating the new elements from FMI 3.0? That way, if backwards compatibility turns out not to be achievable, there is at least a good reason for it.
Perhaps. I don't know yet whether FMI 3.0 support will necessarily lead to backwards-incompatible changes to our interfaces, though; it could be that the new features only require additive changes. (Without having thought deeply about it, I suspect the big question is about array types and whether they can be supported by the current `{get,set}_<type>_variable[s]()` functions, and if not, whether it would be weird/cumbersome to introduce additional, completely separate getter and setter functions to deal with those.)
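Purely as a strawman for that parenthetical (the declarations below are invented for illustration and are not part of libcosim), “completely separate” array accessors might look something like this:

```cpp
// Hypothetical, invented declarations illustrating separate getter/setter
// functions for FMI 3.0 array variables. Here an array variable is
// identified by a single value reference and carries its own extents.
#include <cstddef>
#include <cstdint>
#include <vector>

class slave_with_arrays
{
public:
    virtual ~slave_with_arrays() = default;

    // Scalar accessors as before (one value per value reference)...
    virtual void get_real_variables(
        const std::vector<std::uint32_t>& refs,
        std::vector<double>& values) = 0;

    // ...plus separate accessors for array variables, where each value
    // reference maps to a whole (possibly multi-dimensional) block of values.
    virtual void get_real_array_variable(
        std::uint32_t ref,
        std::vector<std::size_t>& dimensions,
        std::vector<double>& values) = 0;
    virtual void set_real_array_variable(
        std::uint32_t ref,
        const std::vector<std::size_t>& dimensions,
        const std::vector<double>& values) = 0;
};
```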
Personally, I am not so worried about backwards compatibility, though, as long as we are still in the pre-1.0 stage and therefore have made no promises about it. But we shouldn't break client code often, nor should we break it for unimportant stuff. That's partly why I created this issue. The idea was to get all current API problems on the table and deal with them in one fell swoop before we finally commit to a 1.0 API, after which we do have to maintain backwards compatibility.
About this comment that I made earlier:
> Drop the current `simulator` interface, and rename `slave` to `simulator` and use it within algorithms too. The “extra features” provided by `simulator` could instead be delegated to a wrapper which the algorithm can opt into – something along the lines of `modifiable_simulator_variable_cache`, [...]
It reminds me of another issue that we could potentially address with an API change and code refactoring: The `slave_simulator` implementation is very complicated and hard to follow. I worry that it has become bloated because we try to cram too many concepts into one entity. Perhaps variable bundling (a performance optimisation) and variable modification (a feature) can be recast as two separate code entities, for example separate wrapper or proxy classes? I further suspect that the bloatedness of `slave_simulator` contributes to, or at least would prevent an easy solution to, the performance issues mentioned in #759. Maybe the subsimulator interface seen by `algorithm` really should be a quite low-level one so that there is more potential for performance optimisations within algorithm implementations? Or maybe we can keep the interfaces at a somewhat high level but allow ourselves to leak performance optimisations (e.g. buffer swapping) into them?
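One way to read that suggestion, sketched below with invented names: keep the low-level interface minimal and layer the two concerns as independent, composable wrappers instead of baking both into a single `slave_simulator`-style class. This is only a hedged illustration of the idea, not a worked-out design.

```cpp
// Hypothetical decomposition of today's slave_simulator into two independent,
// composable layers. All names are invented for illustration.
#include <cstdint>
#include <functional>
#include <unordered_map>

class slave;  // the minimal low-level interface

// Layer 1 (performance optimisation): bundles individual set calls into the
// bulk transfers that the underlying slave expects.
class bundling_proxy
{
public:
    explicit bundling_proxy(slave& s) : slave_(s) {}
    double get_real(std::uint32_t ref) const;
    void set_real(std::uint32_t ref, double value);
    void flush();  // forwards all pending values in one bulk call to the slave
private:
    slave& slave_;
    std::unordered_map<std::uint32_t, double> pending_;
};

// Layer 2 (feature): applies user-supplied modifiers on top of whatever sits
// below it (a bundling_proxy here, but it could wrap a slave directly).
class modifier_layer
{
public:
    explicit modifier_layer(bundling_proxy& base) : base_(base) {}
    void add_real_modifier(std::uint32_t ref, std::function<double(double)> f);
    void set_real(std::uint32_t ref, double value);  // applies modifier, then forwards
private:
    bundling_proxy& base_;
    std::unordered_map<std::uint32_t, std::function<double(double)>> modifiers_;
};
```

Whether the bundling/flush step then stays hidden or is exposed to the algorithm relates directly to the “low-level vs. somewhat high-level interface” question above.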
First off, a lot of good questions already from @kyllingstad in this thread.
I think we should use `model` and/or `model_instance` as the terminology for what's now either called a `slave` or a `simulator` – where `simulator` really means `sub_simulator`, since we naturally assume that a libcosim user is using libcosim because they want some co-simulation in their lives. This is the lowest level, and its only job, really, is to wrap FMILibrary.
To me, at least, the word `simulator` already implies a system, even if that system could be self-contained in a single FMU. Basically, a `model` is anything that exposes FMI; a `simulator` is normally a system of several such connected `model`s, or just a single `model`. The difference, then, is that `simulator` should expose all user APIs, whereas `model` should not.
The `algorithm` is naturally just part of the `simulator`'s configuration. I also agree that `cosim::execution` should be changed to, for example, simply `cosim::run` or `cosim::simulate`, and be integrated into the `simulator` interface rather than being its own class.
First of all, thank you very much, @kyllingstad. This discussion is very important, I feel. Fantastic that you opened it and are guiding it!
I agree with @restenb.
I, too, have always felt that the term `simulator` is not optimal. To me, at least, it has always felt ambiguous. Maybe that relates to what you expressed, @restenb.
Maybe as an additional proposal: for what we have so far termed `simulator`, SSP uses the term `component`. That is, a `component` is an instance of a `model`; `model` then means FMU in our case. Since a `component` is an instance of a `model` (FMU), a system structure can contain multiple `component`s that reference the same FMU, i.e. multiple instances of the same `model` (FMU).
I kind of like this.
If there is a place where `simulator` fits, then I tend to attribute the term to the co-simulation master algorithm, as the role running the system (and the show :-)).
In the survey paper by Gomes et al., which I mentioned earlier, they have made an effort to define and use a precise terminology for various co-simulation concepts. Maybe there is something we can use there. For example, they define the phrase “simulation unit”, which refers to one of the components/subsimulators/model instances in a co-simulation. (But, as they point out, a co-simulation also fulfils the criteria for being a simulation unit, which opens the door to nested co-simulations.)
To me, terms like `simulation unit` and `simulation component` seem a bit over-encompassing in their abstractness. That's another reason I prefer FMU = `model` and FMU being simulated = `model instance`. At least I think the model to be simulated somehow "deserves" its own terminology, distinct from other components that could possibly be included in a simulation setup. Our observers are one example of a simulation component that is not a `model`.
Calling it a `simulator` should then require the ability to `simulate` the model instances to meet some criteria for results, interactivity, and observability of the simulation. This includes an appropriate `orchestrator` – a point I see the paper also touches on – and possibly more (like our observers, again). In our case this is basically `fixed_step_algorithm` and `time_series_observer` by default.
As the libcosim API has grown, it has grown messy and unintuitive in places (as APIs tend to do). I am therefore opening this issue to highlight the current issues with it and to get feedback and suggestions for a possible future refactoring. This is by no means urgent, it's more on the “things to consider before a 1.0 release” list.
The most annoying part of it (to me, at least) is the whole simulator/slave stack. Here, we have two interfaces, `simulator` and `slave`, each of which has a purpose to serve, but the names do not make those purposes nor their differences evident in any way. Not to mention `slave_simulator`, the (only) implementation of the former interface, which only confuses things even more. Why the FMI-based `slave` implementation should be called `slave_instance` is not crystal clear either. Layer mix-ups aside, the names have their individual issues too. `simulator` represents a subsimulator, but the name would be equally good for the whole co-simulator. We have `slave`, but we don't have a `master`. And so on.

Then there is `model`, which is the blueprint for a `slave`. (That is, a `slave` is an instance of a `model`, and multiple instances can be created based on the same model.) But the term strongly suggests that the entity in question is based on a mathematical model, which need not be the case. For example, it could be an interface to a hardware device. A similar blueprint/instance relationship applies to functions, which were a late addition to the library. There, we use the names `function_type` and `function`, respectively. Perhaps something similar could be used for subsimulators?

Finally, I am not too fond of `execution`. If I remember correctly, we borrowed it from HLA terminology, but it does not seem to have caught on in the FMI sphere. And even though I wrote the code for that class and have been using it for years, it still feels strange to me.

I have some ideas for improvements, but I'll save them for the comments below.