spine-tools / SpineOpt.jl

A highly adaptable modelling framework for multi-energy systems
https://www.tools-for-energy-system-modelling.org/
GNU General Public License v3.0

Archetype implementation #8

Closed spine-o-bot closed 1 year ago

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 20, 2018, 12:20

How to implement the archetypes?

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 21, 2018, 18:40

One option is that archetypes are just on the database side. What goes into Spine Model would then be simplified. Similarly, we might want to have expressions and symbols resolved before they reach Spine Model. On the other hand, Julia is quite capable, and resolving problems with variations might benefit from having all that information in Spine Model.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 22, 2018, 12:23

I don't like the idea of processing that contains model logic between the database and Julia - you then have model information embedded in some code somewhere which could lead to problems similar to the WILMAR mess over time, thus defeating the goals of what we're trying to do here and interfering with the workflow we have envisaged.

This processing box simply doesn't exist in any of the visions we have for the tool!

The key here is implementing a nice data model that allows us to create a process with arbitrary inputs and outputs and allowing the definition of arbitrary constraints between the two. We have made good progress on this and it just needs a little final tweaking.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 26, 2018, 08:02

Hi folks,

I have come up with a refined proposal to implement unit archetypes. The proposal can be found in the following powerpoint presentation:

https://drive.google.com/open?id=14Z4LfqZZaI4xwNlWwNW1M95pVmxOUDQb

The idea is quite simple and exploits the JSON field and array data.

In terms of the implementation, it is quite simple also. We define the following new object classes:

Each unit_constraint object contains the constraint coefficients stored as a JSON array, where the terms can be expressions. We then associate the constraint with an archetype; when we associate that archetype with a unit, this tells the model to create the associated constraints and how to interpret the parameters that are associated with the unit.

One other change that we would have to make is that rather than an arbitrary number of unit_output relationships, we would need to define them as unit_output_commodity1, unit_output_commodity2 etc. This is so the model knows which commodity's activity variable the coefficients relate to.

Also, the implementation (as previous versions of it) relies on expressions which we will have to flesh out more. This will be another discussion topic in itself.
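The JSON-array idea can be sketched roughly as follows (a Python sketch for illustration; the slot names follow the proposal above, but the storage layout and the expression syntax are assumptions, not the actual Spine data model):

```python
import json

# Hypothetical storage: one coefficient per numbered commodity slot, where a
# string entry stands for an expression to be resolved from unit parameters.
raw = '{"unit_output_commodity1": 1.0, "unit_output_commodity2": "-fixed_output_ratio"}'

def resolve_coefficients(coeffs, unit_params):
    """Turn each entry into a numeric coefficient, resolving simple
    '+name'/'-name' expression strings against the unit's parameters."""
    terms = {}
    for slot, c in coeffs.items():
        if isinstance(c, str):
            sign = -1.0 if c.startswith("-") else 1.0
            c = sign * unit_params[c.lstrip("+-")]
        terms[slot] = c
    return terms

# Represents: electricity - fixed_output_ratio * heat == 0
terms = resolve_coefficients(json.loads(raw), {"fixed_output_ratio": 2.0})
```

Because the coefficients are keyed by numbered slot, the model can pair each one with the matching activity variable without ambiguity.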

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 27, 2018, 06:00

When comparing with https://docs.google.com/presentation/d/1_lLkTJBPOp53asl_3zLs_GY2j3q8a7P2tNSkrxQR2zk/edit#slide=id.g31266ca2d5_0_9 I don't see a couple of features that would help ensure the integrity of the unit.

In that slide, variables are objects connected also to the Archetype. This allows checking that the unit has the right number of connections to be using the archetype. In both approaches, the variables are also connected with the unit, but it's a bit unclear from Jody's slide whether you are connecting directly with a commodity. I.e. is unit_input_commodity1 a relationship between a unit and a commodity? Or is it an object of a new object class called 'input' or 'variable'? If the latter, how would it know which commodity it represents?

When the variable object is part of the archetype (has a relationship to the archetype), it also allows defining which input and output commodities are valid, by connecting commodities with the variable object. I.e. a gas turbine would be allowed to use certain gaseous fuels.

Grouping of constraints can be useful too, related to the next paragraph.

We should enable the possibility to change when each constraint is enforced over the solve horizon. This would allow switching from a detailed unit presentation close to real time to a more relaxed presentation farther in the horizon. I think the natural place for this parameter is in the relationship between the constraint (or group of constraints) and the unit (and one should probably use a symbol to be able to easily change the value over a set of units). However, we might want to consider also the temporal stage discussed in #17. I mean that maybe the unit presentation should be allowed to change only when the temporal stage (stochastic tree or time resolution) changes. This might help to avoid some problems in the formulation of equations later. Not sure about this at all though - at least it's not straightforward to include in the archetype. Maybe the 'when_to_apply' would need to be an object that is connected to specific temporal stage(s).

We also need to be able to connect the archetype with equations written in Spine Model. The constraint object should in this case link to the corresponding equation in Spine Model (and eventually we should be able to see the equation code from Spine Toolbox too, when needed).

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 27, 2018, 08:56

To me, the implementation at https://docs.google.com/presentation/d/1_lLkTJBPOp53asl_3zLs_GY2j3q8a7P2tNSkrxQR2zk/edit#slide=id.g31266ca2d5_0_9 appears way too heavy and cumbersome. I think you simply need to define way too much data in too many places. For example, for a unit with a single input and two outputs, with a constraint involving two of the variables, you would need to define, as far as I can see, at least the following (and I've probably missed stuff):

In my proposed approach we need to define:

Furthermore, for an existing archetype, to create a new constraint needs the following using Juha's proposal:

Using mine you need:

I think it is clear that your proposal is more functional and mine is more compact. But I feel yours is not workable in its current form - we need to find somewhere in the middle - do you think we can agree on that at least?

For me the biggest issue is the number of objects and relationships you need to define just to create a relatively simple constraint. There are two ways forward: we take my approach and try to build into it some validation/safeguards to ensure integrity, or we try to trim yours down. Using my approach you could, for example, do a validation Julia-side: "Unit X is attached to Archetype Y which expects two output commodities but only one is defined". You could trigger this validation at any time. It would be a function Julia-side, just like Manuel is suggesting to resolve expressions.
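That validation could look roughly like this (sketched here in Python for brevity; all names and the data layout are hypothetical, not the actual Spine API):

```python
# Each archetype declares how many output commodities it expects; the check
# compares that against the output relationships actually defined for a unit.
archetypes = {"Y": {"expected_outputs": 2}}
unit_archetype = {"X": "Y"}
unit_outputs = {"X": ["electricity"]}  # only one output relationship defined

def validate_outputs(unit):
    """Return an error message if the unit's outputs don't match its
    archetype's expectation, or None if everything is consistent."""
    arch = unit_archetype[unit]
    expected = archetypes[arch]["expected_outputs"]
    defined = len(unit_outputs.get(unit, []))
    if defined != expected:
        return (f"Unit {unit} is attached to Archetype {arch} which expects "
                f"{expected} output commodities but only {defined} is defined")
    return None

message = validate_outputs("X")
```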

Thoughts?

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 27, 2018, 08:59

We can define relationship parameters on the constraint_temporal_stage relationship to turn constraints on/off in different stages. We could also do this on the archetype_constraint_temporal_stage relationship if we want to do it only for certain archetypes.

it's bit unclear from Jody's slide whether you are connecting directly with a commodity? I.e. is unit_input_commodity1 a relationship between unit and a commodity?

Yes, unit_input_commodity1 is a relationship

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 27, 2018, 10:30

I have the feeling that the current discussion is addressing two distinct things: 'archetypes' and 'user constraints'.

I am strongly in favor of separating these two things. Just to make clear how I see it:

Currently, the majority/all of the discussion here seems to relate to the user constraints. If you agree with this, I recommend making a new issue for describing how to implement/define user constraints. This issue was intended to discuss how to represent and implement the information related to the interpretation layer being the archetype.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 27, 2018, 12:43

I don't think they are quite separate. A user constraint is something that could possibly involve any and all objects in the model - objects of different classes. For example the SNSP constraint in Ireland - it's unlikely you would find an off-the-shelf package that would handle such a specific constraint, which involves demand, generation, imports and exports. So it's great to have the functionality to be able to add such a constraint to your model using data.

By contrast, a unit archetype is a unit that has an arbitrary number of input commodities and output commodities. I am just calling the relationships between these inputs and outputs "constraints". We could equally call them something else to avoid confusion... for example, conversions. These "unit_constraints" or conversions just apply within the unit itself. User constraints are more broad in scope and can potentially involve any system element.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 27, 2018, 12:47

The key complexity in designing the specific data model that can handle defining arbitrary unit archetypes is defining the possible input and output commodities and the relationships between them (which we can generalise as a series of linear constraints). The trick is linking the coefficients that we define in data and associate with the archetype, with the actual unit and its parameters. We want something that is intuitive, robust and simple for the user to implement. I think it won't be possible to find the perfect solution, but I think we can definitely come up with something good!

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 28, 2018, 06:55

In response to @DillonJ: I understand that there is a difference between a user constraint like a SNSP constraint, and a constraint within a unit. However:

  1. Defining a way of describing generic constraints formed by data within the unit is still something fundamentally different from 'interpreting' the data presented in certain parameters.
  2. I think the principle of using data to define a SNSP constraint or a constraint within a unit could be more or less the same (also in TIMES, there are a number of user constraints which can be defined, both within a unit and outside a unit).

Regarding the last comment: I think the overall Spine Model equations should be sufficient to define units with an arbitrary number of inputs and outputs (as is currently the case), and to define the vast majority of possible constraints (some generic constraints can currently be induced via the parameters pRatioOutputInputFlow, pRatioInputInputFlow and pRatioOutputOutputFlow; however, more parameters and corresponding constraints are required). The constraints and parameters covered in Spine Model will not cover every imaginable case, so there will certainly be value in defining constraints by data. However, I do believe that there should not be a fundamental difference between user constraints within a unit and user constraints that are not.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 07:51

I like the distinction between user constraints and archetype @Poncelet is proposing. I got that idea from Espoo, that the archetype was kind of a way to switch between different sets of equations that are already available in the model. For example (it's easier for me to think in power systems), the user could choose between a DC model and an AC model by specifying an archetype parameter in their data - something like that.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 08:02

@Poncelet @manuelma but there is no suggestion that "user constraints" are related to archetypes in any way.

What we are suggesting is that we allow the user to define an arbitrary transfer function of a generic process that has arbitrary inputs and outputs. These are completely different to user constraints in my opinion.

Let's take the example of the fixed_output_ratio constraint. This is saying that electricity/heat = fixed_output_ratio.

If I understand correctly @Poncelet @manuelma are suggesting that we would hard code all the versions of this constraint that we can imagine and the archetype object would have parameters that would allow us to switch between the different versions of the constraint.

Then there is the generic transfer function vision which I believe is closer to what myself and @juha_k have been considering.

We can of course enable both. Best of both worlds?!

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 08:04

Plus one to enable both, archetypes and generic transfer function.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 08:07

The advantage of the generic transfer function approach is that it will avoid a lot of clutter building up in the Julia code for every conceivable process.

I would really like to be able to define a new unit archetype just using data rather than having to code many versions of the constraints in Julia.

But I also see the need for many standard unit_level constraints to be defined in code and have different versions of these for different types of unit - like ramp rates, min up and down times etc.

So in my mind, something like the fixed_output_ratio is something that might be better implemented data-side using the arbitrary transfer function type approach - mainly because it is creating a link between different commodities inside the process. Ramp rates, min up/down times, pmin, pmax etc. are all standard unit constraints that I think should be implemented Julia-side, which would make use of the archetype object when there are different formulations of these.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 08:09

@manuelma I think we can use the archetype object for both - it would contain the information about which version of hard coded constraints apply to units of that type and would also hold the generic transfer function type constraints (and it may of course hold none).

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 08:11

Yes, good idea.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 08:19

The more I think about it - the more I think this would be a very very powerful feature of the SPINE model, setting us apart from other tools. The intention of SPINE is to support energy system integration analysis by allowing the definition of a wide range of energy system integration problems.

With this archetype concept, we are allowing a user to create a new technology on the fly and define the parameters for it without touching constraints in the model. This is a nice feature. What I am really envisioning here is that if tomorrow someone wants to model a technology that takes electricity and heat as an input and produces cauliflowers as an output - this is possible without writing any constraints :-)

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 08:37

And let's not forget that we're trying to supply a user-friendly environment to also write JuMP equations easily - the convenience functions to access parameters.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 08:39

Yes! The point to emphasize here is that we are only talking about the very specific case where someone is creating a new technology and wants to define the relationships between the input and output commodities.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 28, 2018, 08:49

That's what I've had in mind, although apparently I haven't been very good at expressing it. It should be easy for the user to create a new type of unit (archetype) and apply that to a group of units. The archetype also helps to ensure that the right parameters are available for that type of unit.

I think we should be able to have the most commonly used equations within the Spine Model (including CHP type constraints). At least I'm more comfortable reading written equations than equations being formed from parameters. I would think user constraints would be mostly for system-specific, system-wide technical, regulatory and policy constraints that can be rather arbitrary.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 28, 2018, 09:03

When it comes to how difficult it is to add new archetypes or constraints (Jody's lists above), I would first say that archetypes and constraints are not defined often. Some additional trouble should be OK in order to allow more functionality and better data integrity. Also, many items on the list are only defined once at the whole data store level (input1, ...). When defining actual units, many properties can be inherited from the archetype and do not need to be re-defined. It might actually save trouble for the user. If the archetype uses gas as input1, then the unit will inherit that (and if there is a conflict, because the unit is connected to a wrong type of node, then we can give a warning).

Finally, the capability to say when each constraint is applied is very important.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 09:14

I have updated the proposal here:

https://drive.google.com/open?id=14Z4LfqZZaI4xwNlWwNW1M95pVmxOUDQb

If we relate the commodities to the archetype instead of the unit, then we get around the integrity issue completely.

We can easily control the application of constraints over time by using parameters that are defined on the constraint_temporal_stage relationship, or by using time-patterned parameters.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 09:58

Just wanted to confirm that this won't be needed for milestone v0.1 (end of August).

In other words, 'A' case studies should run without this functionality?

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 28, 2018, 10:08

When you connect a unit with a commodity, you need to define whether that's input1, output1 or whatever. Archetype could have this pre-defined and it would be then inherited to the unit - as long as there is a match. I guess that's implicit in your proposal.

Another issue is that some units can have interchangeable inputs - you could burn brown coal or black coal or even biomass up-to some level. But that's probably best left as a future worry. Also there are start-up fuels, but that should be possible using the same concepts.

@manuelma As long as we have something that works it should be ok to not have this by beginning of August. Also case studies are only due at M17 so there is some room. Of course case studies would like to have clear data structure when they are inputting data...

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 28, 2018, 10:17

@manuelma: I indeed think that Archetypes are not on the agenda for milestone v0.1.

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 28, 2018, 11:17

On the concepts (Archetype/User constraints/Unit Templates)

I guess we still have different ideas of what an archetype is. For you (@DillonJ), the essence of an archetype seems to be defining this 'arbitrary transfer function'. For @juha_k, it seems to be a combination of the arbitrary transfer function and allowing this information to be inherited by units. For me, the essence is the specification of how to interpret the hard-coded parameters of a certain unit.

How we call the different functionalities is something which can be discussed, but I do prefer to clearly separate these different concepts/functionalities:

  1. arbitrary transfer functions
  2. inheriting information
  3. Adding an interpretation layer

@juha_k :

It should be easy for the user to create a new type of a unit (archetype) and apply that to a group of units.

and

When defining actual units, then many properties can be inherited from the archetype and do not need to be re-defined. It might actually save trouble for the user. If the archetype uses gas as input1, then the unit will inherit that (and if there is a conflict if the unit is connected to a wrong type of node then we can give a warning).

and

When you connect a unit with a commodity, you need to define whether that's input1, output1 or whatever. Archetype could have this pre-defined and it would be then inherited to the unit - as long as there is a match. I guess that's implicit in your proposal.

It seems to me that this functionality you refer to (inheriting) is more what I thought we agreed to call the 'UnitTemplate', and for me is again a different concept (so different from defining an arbitrary transfer function and different from the interpretation layer). This UnitTemplate is basically functionality of Spine Toolbox, but does not provide relevant information for Spine Model.

On the implementation of the arbitrary transfer function functionality

Regarding the arbitrary transfer function functionality of a generic process, having arbitrary inputs and outputs. What I envisage is that Spine Model will allow this generic transfer function by using the hard-coded, but generically-defined, equations/parameters for almost all cases. Let's take the example of the fixed output ratio constraint.

This relates to the equation induced by the parameter p_RatioOutputOutputFlow in the Spine Model description. So what I am suggesting is that the user can simply define a unit, specify which are the input and output commodities, then specify the parameter p^{RatioOutputOutputFlow} for the unit and the commodity groups it applies to, and attach a value to this parameter. So for fixed ratios between different output commodities (or even groups of output commodities), there is just one hard-coded equation and one generic parameter required.

So to come back to the following:

Let's take the example of the fixed_output_ratio constraint. This is saying that electricity/heat = fixed_output_ratio. If I understand correctly @Poncelet @manuelma are suggesting that we would hard code all the versions of this constraint that we can imagine and the archetype object would have parameters that would allow us to switch between the different versions of the constraint.

I don't believe this is what I meant (if I understood your comment correctly). There is basically just one constraint in this case (as discussed above). By specifying the parameter pRatioOutputOutputCommodity of the unit, the required constraint would be generated.
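A minimal sketch of that idea (in Python, with hypothetical names): one hard-coded constraint template, driven entirely by which (unit, output group, output group) tuples have the ratio parameter defined in data:

```python
def ratio_output_output_constraints(p_ratio):
    """Emit one symbolic constraint per (unit, out_group1, out_group2) tuple
    for which the ratio parameter is defined in the data."""
    return [
        f"flow[{u},{cg1}] == {ratio} * flow[{u},{cg2}]"
        for (u, cg1, cg2), ratio in p_ratio.items()
    ]

# Specifying the parameter for a CHP plant generates the required constraint.
constraints = ratio_output_output_constraints(
    {("chp_plant", "electricity", "heat"): 0.8}
)
```

No new equation versions are needed: units without the parameter simply get no such constraint.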

So, to sum up. I envisage that to define an arbitrary transfer function of a generic process with arbitrary inputs and outputs:

@Poncelet @manuelma but there is no suggestion that "user constraints" are related to archetypes in any way.

I hope the above makes clear why I relate your idea of an archetype (creating this arbitrary transfer function) to user constraints.

The advantage of the generic transfer function approach is that it will avoid a lot of clutter building up in the Julia code for every conceivable process. I would really like to be able to define a new unit archetype just using data rather than having to code many versions of the constraints in Julia.

I do not believe we need a lot of equations or clutter in the Julia code to define the basic functionality with which the vast majority of the transfer functions could be generated.

But I also see the need for many standard unit_level constraints to be defined in code and have different versions of these for different types of unit - like ramp rates, min up and down times etc.

The different versions of equations are for me purely related to the level of detail (AC power flow Vs DC power flow Vs trade-based, detailed accounting of technical constraints or not, etc.).

So in my mind, something like the fixed_output_ratio is something that might be better implemented data side using the arbitrary transfer function type approach - mainly because it is creating a link between different commodities inside the process. Ramp rates, min up/down times, pmin, pmax etc. etc. are all standard unit constraints that I think should be implemented Julia side which would make use of the archetype object when there are different formulations of these

I prefer to have both implemented in Julia side (as in the above example of the pRatioOutputOutputFlow parameter and corresponding constraint).

I would think user constraints would be mostly for system specific technical system-wide, regulatory and policy constraints that can be rather arbitrary.

I guess that this will mostly be the case. However, I do not see why a user constraint could not be used to define a relationship/constraint within a specific unit, which we did not anticipate.

The more I think about it - the more I think this would be a very very powerful feature of the SPINE model, setting us apart from other tools. The intention of SPINE is to support energy system integration analysis by allowing the definition of a wide range of energy system integration problems. With this archetype concept, we are allowing a user to create a new technology on the fly and define the parameters for it without touching constraints in the model. This is a nice feature. What I am really envisioning here is that if tomorrow someone wants to model a technology that takes electricity and heat as an input and produces cauliflowers as an output - this is possible without writing any constraints :-)

I fully agree that the model should be able to do so, but I do believe we do not need purely data driven constraints for this (and should avoid this as much as possible).

I think we should be able to have most commonly used equations within the Spine Model (including CHP type constraints). At least I'm more comfortable reading written equations than equations being formed from parameters.

I fully agree here.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 11:43

Thanks very much for this post @Poncelet, it puts things in perspective.

I believe the equation should look something like this:

\sum\limits_{c \in cg2} p^{left}_{c,u,\ldots} v^{flow}_{c,u,out,t} \ \{\leq, =, \geq\} \ p^{ratio\_output\_output\_flow}_{u,cg2,cg1} \sum\limits_{c \in cg1} p^{right}_{c,u,\ldots} v^{flow}_{c,u,out,t}

so the parameters $p^{left}_{c,u, \ldots}$ and $p^{right}_{c,u, \ldots}$ are specified through data?
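Numerically, both sides of that equation are plain weighted sums over commodity groups, so specifying p^{left} and p^{right} through data would reduce constraint generation to something like this (a Python sketch; all names and values are hypothetical):

```python
def weighted_flow_sum(coefficients, flows):
    """Sum p_c * v_flow_c over the commodities in one group."""
    return sum(coefficients[c] * flows[c] for c in coefficients)

p_left = {"electricity": 1.0}          # coefficients for group cg2
p_right = {"heat": 1.0, "steam": 0.5}  # coefficients for group cg1
flows = {"electricity": 10.0, "heat": 8.0, "steam": 4.0}
ratio = 1.0  # p_ratio_output_output_flow

lhs = weighted_flow_sum(p_left, flows)
rhs = ratio * weighted_flow_sum(p_right, flows)  # 1.0*8.0 + 0.5*4.0
satisfied = lhs == rhs  # here the '=' sense of the constraint holds
```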

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Jun 28, 2018, 11:44

Regarding

  1. arbitrary transfer functions
  2. inheriting information
  3. Adding an interpretation layer

I agree that the archetype functionality should do 1 and 3 and partially 2.

Regarding inheriting information - yes, but mainly relationships - like unit_input_commodity1 and unit_output_commodity1 etc. I agree @Poncelet that the unit_template idea was what we had in mind for defining default parameters for a technology.

I also agree with @Poncelet that most of the equations should be defined in the model Julia code, but I would still like someone to be able to define a new technology using the archetype idea - because I think this is at the very heart of energy systems integration... for me, this is a core requirement to allow a user to define not just power to x but x to y and even y and z to w and x.

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 28, 2018, 12:27

@manuelma :

I agree that the additions of the parameters p^{left} and p^{right} would further generalize the constraint.

However, I have one comment, and one issue/question which we need to make a decision on:

The comment: With the parameters p^{left} and p^{right}, the parameter p^{RatioOutputOutputFlow} is no longer needed.

The issue/question: Having multiple parameters to generate a specific constraint makes it more flexible, but also less transparent for the user. Here, I think we need to strike a good balance between user-friendliness (but limited flexibility) and full flexibility (at the cost of some user-friendliness).

What I would propose is to have two equations in the model:

  1. The equation induced by the parameter p^{RatioOutputOutputFlow}, which is unchanged. This equation is fairly easy for the user to generate, and would probably suit >95% of the users' needs.
  2. The equation @manuelma proposed (but without the p^{RatioOutputOutputFlow} as it is superfluous now) for users who require more than what can be done in the first equation. As the parameters now purely represent coefficients to variables rather than something meaningful such as a ratio between different output commodities/commodity groups, this second equation is something I would call a 'User constraint'.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Jun 28, 2018, 12:32

Right, the question is whether the equation I proposed (without the ratio_output_output_flow parameter of course) is what @DillonJ and @juha_k have in mind.

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 28, 2018, 12:42

@DillonJ :

Regarding the following:

  1. arbitrary transfer functions
  2. inheriting information
  3. Adding an interpretation layer

    I agree that the archetype functionality should do 1 and 3 and partially 2.

I have a different opinion. What I would prefer is that these three functionalities are split (see further below).

Regarding this:

I also agree with @Poncelet that most of the equations should be defined in the model Julia code but I would still like someone to be able to define a new technology using the archtype idea - because I think this is at the very heart of energy systems integration... for me- this is a core requirement to allow a user to define not just power to x but x to y and even y and z to w and x.

What I do believe is that we at least agree on the functionality we want :). We might disagree on the implementation. Of course, we need the user to be able to define not just power to x but also x to y and even y to z to w and x (which basically refers to the arbitrary transfer function functionality). As in my earlier comment, I believe we can achieve this functionality with a combination of (i) generically defined hard-coded constraints, and (ii) a flexible way to allow the user to define their own constraints (user constraints) when unexpected relationships are required (an example of such a constraint was presented by @manuelma).

A unit could thus be described by entering the 'hard-coded' parameters of that unit and, where needed, the specific parameters relating to the user constraints of that unit (see Manuel's example; in the end, these are also hard-coded, but do not have a meaning, making them more difficult to interpret). If in addition we want to pass that information through to other units, we define the parameters (both the meaningful ones and those relating to user constraints) on a 'UnitTemplate' level. If we want to have a certain interpretation of the meaningful parameters, then we could have a relationship between an 'Interpretation' object (I've tried to not call this last object an archetype, to avoid confusion) and a Unit or UnitTemplate. So I believe that with this structure, we have everything in place to achieve all the functionality we want.

Wrapping up: how I believe we could achieve the desired functionality:

  1. Arbitrary transfer function -> combination of hard-coded, but generic and meaningful parameters, and user constraints (as in example manuel)
  2. Inheriting information -> UnitTemplate Object and relationship between a UnitTemplate and a Unit
  3. Adding an interpretation layer -> 'Interpretation' object (what I used to call an archetype) and a relationship between the 'Interpretation' object and a Unit or a UnitTemplate.
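The inheritance part (point 2) could be sketched as a simple merge of template defaults and unit-level overrides (a Python sketch; all object and parameter names are hypothetical):

```python
# A UnitTemplate holds default parameter values; a Unit points at a template
# and may override individual values.
unit_template = {"gas_turbine": {"input1": "gas", "efficiency": 0.38}}
unit_data = {"gt_helsinki": {"template": "gas_turbine", "efficiency": 0.40}}

def effective_parameters(unit):
    """Merge template defaults with unit-specific overrides (unit wins)."""
    data = dict(unit_data[unit])
    template = data.pop("template", None)
    merged = dict(unit_template.get(template, {}))
    merged.update(data)  # unit values override template defaults
    return merged

params = effective_parameters("gt_helsinki")
```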

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jun 28, 2018, 12:49

@manuelma

Right, the question is whether the equation I proposed (without the ratio_output_output_flow parameter of course) is what @DillonJ and @jkiviluo have in mind.

It is at least what I had in mind when talking about 'user constraints within a unit' :). Thanks for providing the example, I think it helps to align ideas!
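The referenced example equation is not reproduced in this thread; a generic linear 'user constraint within a unit' of the kind being discussed might take a form like the following, where all symbols are illustrative rather than the actual proposal:

$$\sum_{c \in out(u)} a_{u,c} \cdot flow_{u,c,t} \;+\; \sum_{c \in in(u)} b_{u,c} \cdot flow_{u,c,t} \;\le\; d_u \qquad \forall t,$$

with $a_{u,c}$, $b_{u,c}$ and $d_u$ being user-supplied coefficients attached to unit $u$, carrying no pre-defined model meaning.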

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jun 29, 2018, 06:31

I think I agree conceptually with @Poncelet's wrap-up, although I don't quite grasp the 'Interpretation' name for the archetype. What I still wonder is whether Kris meant that one could have a unit without an archetype? In my view, the archetype establishes what kind of unit is in question, and it therefore has to be present (connected to specific constraints and commodities). This allows checking for some input data errors. If we let Spine Model infer what kind of unit is in question based on which parameters are filled in, that would be more prone to mistakes. And a lot of modelling time is often spent finding those mistakes.

UnitTemplate is for default parameters (and it's useful to have them separate, since one might have several unit templates for one archetype due to technological and geographical differences).

I guess an arbitrary transfer function should also be possible to build fully from data (without hard-coded parameters). I doubt we would be able to hard-code all conceivable combinations. Often some dimensions are summed (like CO2 emissions over a region), and these can get rather complex.

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jul 2, 2018, 10:20

@juha_k : I guess it depends on what you define as an archetype. To be honest, I think the term 'archetype' is used in so many different ways that it's not easy to align on the idea again. I would therefore suggest getting rid of the term 'archetype'.

If I understand correctly what you describe here as an archetype is that it contains information regarding:

How I see it is as follows:

Both relationships are possible as they provide a different type of information

Both relationships are optional: inheritance of input/output commodities and parameter values is not strictly needed, and there should be a default interpretation for each parameter in case no Interpretation_Object is attached to the unit

Regarding the issue of mistake checking: what kind of tests did you have in mind?

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jul 3, 2018, 09:32

For me, archetype has only one meaning. It gives the structure for particular types of units and therefore enables enforcing validity checks (you give structure by associating pre-defined or user-defined constraints). One builds the archetype once and does it with care. Afterwards, it ensures that the instances of those archetypes (units) have the right parameters and relationships. I think what you are suggesting is more prone to input errors: since the interpretation is based on parameters, and the parameters can take on different meanings depending on the unit type, it is not possible to build good validations.

Regarding validation: if you are using an extraction CHP archetype, you need to have at least one input commodity, heat as output and electricity as output. You need to define the cv and cb ratios. Etc.
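As an illustration of the kind of validation meant here, an archetype could declare the required commodities and parameters, and a checker could flag incomplete units. This is a hypothetical sketch, not an actual Spine Toolbox API; every name in it is invented for the example:

```python
# Hypothetical sketch of archetype-based validation: the archetype
# declares what a conforming unit must have, and a checker reports
# what is missing. Names are illustrative, not actual Spine code.

extraction_chp = {
    "required_inputs": 1,                        # at least one fuel input
    "required_outputs": {"heat", "electricity"},
    "required_parameters": {"cb_ratio", "cv_ratio"},
}

def validate(unit, archetype):
    errors = []
    if len(unit["inputs"]) < archetype["required_inputs"]:
        errors.append("missing input commodity")
    missing_out = archetype["required_outputs"] - set(unit["outputs"])
    if missing_out:
        errors.append(f"missing outputs: {sorted(missing_out)}")
    missing_par = archetype["required_parameters"] - set(unit["parameters"])
    if missing_par:
        errors.append(f"missing parameters: {sorted(missing_par)}")
    return errors

unit = {"inputs": ["coal"], "outputs": ["electricity"],
        "parameters": {"cb_ratio": 0.2}}
print(validate(unit, extraction_chp))
# -> ["missing outputs: ['heat']", "missing parameters: ['cv_ratio']"]
```

The point of doing the check against the archetype, rather than inferring intent from whichever parameters happen to be filled in, is that the archetype states the expected structure explicitly.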

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jul 5, 2018, 09:33

@juha_k :

Regarding validation: if you are using an extraction CHP archetype, you need to have at least one input commodity, heat as output and electricity as output. You need to define the cv and cb ratios. Etc.

I don't see how this would work, as users can define their own commodities (which can for instance also be named high_temperature_heat or high_voltage_electricity).

In order to do these kinds of checks, we would need certain hard-coded, boolean attributes of commodity objects which can be specified when the user defines their commodities. For instance: when defining a new commodity which the user decides to call high_temperature_heat, the user can check a box saying that this is a 'heat' commodity.
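A minimal sketch of that proposal, with invented names: each user-defined commodity carries hard-coded type flags, so validation checks the flag rather than the arbitrary commodity name:

```python
# Sketch of the boolean-attribute proposal: user-named commodities
# carry hard-coded type flags, so archetype validation can check
# "has a heat output" without knowing the commodity's name.
# All names are illustrative only.

commodities = {
    "high_temperature_heat":    {"is_heat": True,  "is_electricity": False},
    "high_voltage_electricity": {"is_heat": False, "is_electricity": True},
}

def has_output_of_type(unit_outputs, flag):
    """True if any output commodity is flagged with the given type."""
    return any(commodities[c].get(flag, False) for c in unit_outputs)

outputs = ["high_temperature_heat", "high_voltage_electricity"]
print(has_output_of_type(outputs, "is_heat"))         # True
print(has_output_of_type(outputs, "is_electricity"))  # True
```

An extraction CHP check would then require `is_heat` and `is_electricity` outputs, whatever the user called those commodities.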

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Jul 5, 2018, 09:38

@juha_k :

For me, archetype has only one meaning. It gives the structure for particular types of units and therefore enables enforcing validity checks (you give structure by associating pre-defined or user-defined constraints). One builds the archetype once and does it with care. Afterwards, it ensures that the instances of those archetypes (units) have the right parameters and relationships. I think what you are suggesting is more prone to input errors: since the interpretation is based on parameters, and the parameters can take on different meanings depending on the unit type, it is not possible to build good validations.

So how does this differ from what I have called the 'UnitTemplate'?

Furthermore, I am really in favor of specifying the 'interpretation layer' separately from the inheritance of structure (inputs/outputs) and parameter values.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Jul 8, 2018, 08:57

Archetype would define structure and UnitTemplate would provide default parameter values (and it's handy to have these separate, since it can be useful to have multiple parameter templates for one structural archetype).

The archetype would contain the 'interpretation layer' (as discussed in the telco; maybe you wrote your comment before the telco).

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Aug 1, 2018, 12:57

Let's try again in order to continue the discussion. Here is what I think currently:

Archetype

UnitTemplate

Unit

Unit_group

'Interpretation_object' in this system is the selection of constraints (whether that happens in the unitTemplate, unit or in the unit_group)

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Aug 7, 2018, 09:45

@juha_k I think I get the idea. So a unit object will 'inherit properties' from all unittemplate objects it's related to. Now the question is, what happens when a unit object is related to an archetype object? Is the idea to raise an error whenever the unit definition is incomplete (or inconsistent) according to the archetype? If yes, does the validation need to happen Toolbox-side, say, when importing the database into the Toolbox?

spine-o-bot commented 3 years ago

In GitLab by @Poncelet on Aug 7, 2018, 14:15

@manuelma : I had a call with Juha last Friday to try to reach a consensus and consolidate the different views on archetypes, and to move forward. Based on that call, I put some "definitions" and examples on paper. I can share it if you want, but it might be more efficient if Juha and I first align our thoughts and then present the ideas, if that's alright with you.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Aug 7, 2018, 14:30

Thanks @Poncelet I can wait till you have something ready to share, no problem.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Aug 9, 2018, 10:06

Hi folks. @juha_k @Poncelet I am back from vacation and would like to be part of this discussion.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Aug 10, 2018, 12:24

The document is now in Spine Design folder. @DillonJ, @Poncelet and me will discuss it next week.

spine-o-bot commented 3 years ago

In GitLab by @manuelma on Aug 30, 2018, 14:42

@juha_k @Poncelet @DillonJ how far did you get on this? Can you post a link to the google document?

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Aug 31, 2018, 07:25

Still in the works. Let's see if we can 'publish' something today.

spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Sep 7, 2018, 07:29

@juha_k @Poncelet @manuelma

I have been thinking a lot about Archetypes and have some ideas how we could do it more generically right across all object classes. This is a little lengthy so please bear with me!

For me, a major problem with our concept of archetypes up to now is that we presuppose what commodities a user will have in their model, and for me the whole point of SPINE is that we don't make these kinds of assumptions. This is particularly true of the commodity validation aspect of the functionality that has been mentioned.

For example, we might implement a whole suite of archetypes, but the user only has to want one more or one fewer commodity for the whole thing to break...

For example, a user could want to do something as simple as adding an additional emission commodity... or what if a user wants a model with no electricity? We should support all of this; this is what will make SPINE different!

The fundamental point is that we don't know up front what commodities a user wants and we can't possibly put together a suite of archetypes for every possible combination.

The basic idea is this:

So this doesn't yet address one of the key goals of the archetype idea: to spare the user the pain and complexity of putting a unit together from scratch. We want to do this, but we face the challenge that we don't know what commodities the user wants in their model. So here comes the second part of the idea.

We implement a wizard and/or a series of scripts that puts together a barebones energy system model based on some elementary parameters:

We can do the same for unit templates: they appear in the unit list in their own category. And we do it so that all object classes automatically have an archetype and a template category. We can right-click on absolutely any object in the model and make it an archetype or a template, and we will have scripts to automatically create these based on information about the commodities in the model.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Sep 7, 2018, 07:56

I think it's ok to enable making archetypes from units, but my vision for the archetype is quite different from this (and they're not necessarily incompatible; there might just be two different uses for the archetype).

For me, the archetype is something that regular users do not typically mess with. It's a tool for the development of components and enables a component library. Those archetypes can then be well developed and well tested to work with the methods they have been linked with. The archetype-feature-method chain is the main thing in this view of the archetype. The commodity problem is secondary, although Jody makes a good point that it can be too constraining in some cases. However, I believe most of the time users will be happy with the regular 'electricity', 'heat', 'coal', etc. commodities, and it can be helpful that the archetype provides these. That said, we can make archetypes that do not dictate what the commodities are called. Or maybe there is a model-level feature that can turn off commodity tracking in archetypes when someone is not using the regular commodity names.

The emission example is a good one to ponder. Let's say you want to track NOx and the archetype does not include it. It's not simple in any case: the model will need to know what to do with that commodity. In the case of emissions, it's typically a function of fuel use (although NOx is also temperature dependent), so there should be a link to a method/constraint that will consider the additional commodity. If it's just replacing one commodity with another, it should be more straightforward. In any case, there are at least four different ways to cope with the commodity problem:

  1. the user must modify the archetype first (make a local version of the archetype) and establish what happens with the new commodity (it could be relatively easy to add an optional commodity alongside an existing one)
  2. it's allowed to add new (or replace existing) commodities (but the user might get a warning) and establish at the unit level what happens with the commodity
  3. commodity tracking can be turned off for the archetype or for the whole model instance. With new commodities, it still needs to be established how to deal with them.
  4. archetypes do not care about the names of commodities (only that you have a number of input and a number of output commodities, some of which may be optional). The user must do the mapping (e.g. input1 - coal...)
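Option 4 could be sketched as follows. Again this is a hypothetical illustration with invented names, not an actual implementation: the archetype exposes generic ports and the user supplies the mapping to concrete commodities:

```python
# Sketch of option 4: a commodity-agnostic archetype exposes generic
# ports ("input1", "output1", ...) and the user maps each port to a
# concrete commodity. All names are hypothetical.

archetype_ports = {"input1": "fuel in", "output1": "main product out"}

user_mapping = {"input1": "coal", "output1": "electricity"}

def bind_ports(ports, mapping):
    """Resolve each generic port to a user commodity; fail loudly on gaps."""
    unbound = set(ports) - set(mapping)
    if unbound:
        raise ValueError(f"unmapped ports: {sorted(unbound)}")
    return {port: mapping[port] for port in ports}

bound = bind_ports(archetype_ports, user_mapping)
# bound == {"input1": "coal", "output1": "electricity"}
```

The explicit failure on unmapped ports is what preserves the validation benefit even though the archetype never sees a commodity name.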
spine-o-bot commented 3 years ago

In GitLab by @DillonJ on Sep 7, 2018, 08:40

The second part of my idea, the automatic creation of archetypes based on information about the commodities in the model, solves the "users not having to mess with them" issue, no? And on top of that, we get a lot of useful generic functionality.

This idea of developing and testing archetypes really means we have a very specific model structure in mind at the outset, and that is something we conceived SPINE to avoid.

Whatever we do here, I really think that:

Creating archetypes using scripts that take in basic information about a user's model could be a way to achieve both objectives.

spine-o-bot commented 3 years ago

In GitLab by @jkiviluo on Sep 7, 2018, 13:46

@DillonJ and I had a call. I think we agreed that it's possible to have it both ways. My four-point list above should be able to accommodate the commodity issues, but we need to be careful to allow commodity-agnostic archetypes. Jody will prepare a list of functionalities Spine Toolbox would need to facilitate archetypes (both Kris's version and mine). Jody, Kris and I will try to finish that and share it with everybody.