sbmlteam / sbml-specifications

The specification documents for SBML.

Decide semantics of assignment/rule/reaction with no <math> #330

Closed sbmlsecretary closed 2 years ago

sbmlsecretary commented 8 years ago

There are many cases in the SBML specification where a rule or an assignment overrides the previously-declared value of an element. Now that the <math> child of those elements is optional, we need to decide what happens when something that used to have a value is declared as having a new value, but that value is not included in the model. (A concrete example appears later in this thread.)

I think it will be helpful if the spec explicitly calls out those situations. I think we basically have three options:

1) The value of the element at the point at which it is changed by something becomes undefined.
2) Simulation of a model in that situation may proceed as if the element with no child did not exist.
3) Simulations of such models are impossible, and an error should be given to the users.

The second and third options are both possible as an optional response of particular simulators if we go with 1), but those results would not be exchangeable. If we go with option 2), everybody must interpret the situation the same way, which makes simulations exchangeable again; if we go with 3) we send an explicit message with the spec that nobody should even try to simulate those models.

It is important to keep in mind that simulation is not the only use of SBML, but it remains the case that simulators will end up with these models, and need some explicit guidance about what to do.

It might be instructive to find out what the 'de-facto standard' is for what simulators do with models that have no KineticLaw, since those models already exist. If everyone does their own thing, we probably should go with 1. If everyone rejects the model outright, maybe we should go with 3. And if everybody proceeds as if the reaction wasn't there, we could go with 2. (Another option would be to go with 1, and have 2 or 3 as 'best practice'.)

Reported by: luciansmith

Original Ticket: sbml/sbml-specifications//332

sbmlsecretary commented 8 years ago

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

I don’t think this is really any different than a parameter with no value. Namely, the model is incomplete, so a simulator should notify the user of this fact. If it tries to carry on, odds are that it will be silently hiding a problem with the model. I cannot think of a simulation use case in which one would want to simulate an incomplete model. As you mention, SBML is for more than just simulation, and I do understand the value of incomplete models for these other purposes, but not for simulation.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

Here's a concrete example of what I'm trying to get at: is the following model underdetermined, or can it be simulated?

<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version2/core" level="3" version="2">
  <model>
    <listOfParameters>
      <parameter id="a" value="2" constant="true"/>
    </listOfParameters>
    <listOfInitialAssignments>
      <initialAssignment symbol="a"/>
    </listOfInitialAssignments>
  </model>
</sbml>
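[Editorial note: the following sketch is not part of the original comment. The situation in the example model can be detected mechanically; this uses Python's standard `xml.etree`, and the function name is invented here:]

```python
import xml.etree.ElementTree as ET

SBML_NS = "http://www.sbml.org/sbml/level3/version2/core"
MATHML_NS = "http://www.w3.org/1998/Math/MathML"

# The example model from the comment above: parameter 'a' has a declared
# value, but the initialAssignment that targets it has no <math> child.
MODEL = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version2/core" level="3" version="2">
  <model>
    <listOfParameters>
      <parameter id="a" value="2" constant="true"/>
    </listOfParameters>
    <listOfInitialAssignments>
      <initialAssignment symbol="a"/>
    </listOfInitialAssignments>
  </model>
</sbml>"""

def assignments_missing_math(sbml_string):
    # Collect the 'symbol' of every initialAssignment with no <math> child
    # (the <math> element lives in the MathML namespace).
    root = ET.fromstring(sbml_string)
    return [ia.get("symbol")
            for ia in root.iter("{%s}initialAssignment" % SBML_NS)
            if ia.find("{%s}math" % MATHML_NS) is None]

print(assignments_missing_math(MODEL))  # → ['a']
```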

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

My intuition says that solution 1) is the most plausible, where "undefined" would mean the value becomes NaN. Whatever happens to NaN values in the next step of the calculation is then the result of the model.

Original comment by: andreas-draeger

sbmlsecretary commented 8 years ago

Claiming that the value is actually NaN is a slightly stronger statement than stating that the value is undefined. Many SBML simulators take 'undefined' values in SBML models and fill them in with default values (while telling the user they are doing so). Stating explicitly that you should insert NaN would mean that this practice is actually incorrect, which we could say, but I think we probably shouldn't. It would be a relatively significant break with tradition, and might interfere with our other goal for L3v2: to let packages fill in undefined values for SBML elements.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

From the three choices above, I opt for (3) Simulations of such models are impossible, and an error should be given to the users. -- Simply because I do not like too many implicit assumptions.

Citing Chris: "the model is incomplete, so a simulator should notify the user of this fact. " +1.

Even more so, I think that it may not be the role of the specification to tell people what to do with incomplete/wrong SBML models. If the above underdetermined model is a good model in the sense of SBML, that is probably all we should care about. A hint that underdetermined models may result from the current SBML spec is good, but I would not go as far as telling people what to do with it in the simulation tool.

(But I may be wrong ;-))

Original comment by: dagwa

sbmlsecretary commented 8 years ago

I have a bit of a problem with "an error should be given". Does running a simulation which returns NaNs constitute enough of an error to fall under option 3? If so, then I could support it.

Original comment by: bgoli

sbmlsecretary commented 8 years ago

I would tend towards 2. The element with no child does not modify the set of equations defining the model. Effectively, it is invisible. It can be useful for future developments or to carry annotations. Having no child is different from explicitly declaring a non-value, I think. An absence is not a negation.

Original comment by: lenov

sbmlsecretary commented 8 years ago

For the record, Nicolas's conclusion is where I started when attempting to write up what things meant in the spec. There are statements like, "An Event with no Trigger will never fire, and neither will an Event with a Trigger with no 'math' child." or "An InitialAssignment with no 'math' child does not assign any value to its 'symbol'." I did have to be careful to say that this only applied in the absence of a package supplying the missing information.

Then I started to think, "Well, hmm, is this fair? Or is an InitialAssignment with no 'math' a stronger statement; more along the lines of 'This element has an initial value, but I do not know what it is,' instead of 'I am not supplying any information about the initial value of this element'."

I really want to know what existing software does when it encounters a reaction with no kinetic law. Does anyone know off the top of their heads, or do I have to do some actual research? ;-)

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

Our simulator gives an error message that the kinetic law is missing and halts.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

I guess the L3v1 spec does say "However, missing kinetic laws preclude the application of many model analysis techniques, including simulation."

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

Looks like Copasi treats it as if the reaction rate was zero. No warnings or anything, either.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

Same with libroadrunner: runs without complaint; assumes reaction rate of zero. (Also, I know libroadrunner will generally supply default values to everything if not supplied by the user.)

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

So, OK. Given that of the three simulators we investigated, two of them (essentially) follow option 2 ('proceed as if it didn't exist'), and one of them follows option 3 ('stop and give an error'), this colors our choices:

1) Say it's undefined; let people do what they want: this follows current behavior, which decreases exchangeability of underdefined models between tools (i.e. a user creates a model in libroadrunner, takes it to iBioSim, which suddenly complains). On the other hand, different user bases for the different tools may expect these different behaviors, and tool authors may wish to defend their actions.
2) Tell people to ignore underdefined constructs: this follows Copasi and libroadrunner behavior, and would encourage iBioSim to change its behavior.
3) Tell people to error out on underdefined constructs: this follows iBioSim, and encourages Copasi and libroadrunner to change their behavior.

Choosing either option 2 or 3 may improve exchangeability, but could decrease the user experience on the tools that have to change. OTOH, if we tell people to do this only for the new situations that are now newly valid in l3v2, we may prevent the sort of divergent behavior we now see. On the other other hand, we are still telling people what to do with their tools, and they may wish to provide a consistent user experience one way or the other ('try to guess what they want' vs. 'do only what I'm explicitly told'). Either approach may be valid in different contexts.
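[Editorial note: the three policies under discussion can be sketched as a tiny dispatch. This is purely illustrative; the function and policy names are invented for this sketch:]

```python
import math

def effective_rate(kinetic_law, policy):
    """Hypothetical handling of a Reaction whose kinetic-law math is missing.
    policy: 'undefined' -> option 1 (value is NaN; tools then do what they want),
            'ignore'    -> option 2 (as if the reaction were not there: rate 0),
            'error'     -> option 3 (refuse to simulate)."""
    if kinetic_law is not None:
        return kinetic_law()              # evaluate the supplied formula
    if policy == "undefined":
        return math.nan                   # NaN then propagates through the solver
    if policy == "ignore":
        return 0.0                        # Copasi/libroadrunner-style behavior
    raise ValueError("incomplete model: Reaction has no KineticLaw math")

print(effective_rate(None, "ignore"))     # → 0.0
```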

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

The reason I believe this should be an error is that missing math is more often than not an error in the model. If you go ahead and simulate, you are making it harder for the user to realize that they have forgotten to enter something. If they want the kinetic law to evaluate to zero, why not make them enter 0? This is in the spirit of "no defaults". Assuming a missing law evaluates to zero, rather than to an undefined value, means it has a default.

Whatever we decide, I think we should not make there be a different behavior for “no kinetic law” and “a kinetic law but no math”. This would certainly create confusion. My preference for reasons above is that they both evaluate to an “undefined value” meaning simulation should report an error to the user.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

With PySCeS if there is a reaction without a kinetic law I show a warning on load and return a NaN during the simulation, which essentially leads to all output being NaN.

However, this behaviour is specific to reactions as it allows one to do pure structural analysis with PySCeS on models without dummy rate equations being defined. Any other missing math or values throw errors on model load and/or simulation.

However, if there were InitialAssignments on stoichiometric coefficients that had missing math, this would also raise errors. I'm not a great fan of implicit behaviours.

Original comment by: bgoli

sbmlsecretary commented 8 years ago

This one is a minefield! In L3V1, SBML said a model with a missing math element is not valid. This left the field clear for simulator developers to choose the behaviour they wanted when dealing with an invalid model.

Now for L3V2 we are saying that the model is valid and it is not unrealistic to expect the specification to answer the question; what does it mean if this is missing. However, us doing that is not necessarily going to influence the simulator developers, who have already clearly made decisions about how to handle the situation. They will not necessarily change their approach because we said so!

You could also end up in the ridiculous situation where someone has a valid L3V2 model with missing math that a piece of software refuses to simulate (because we have said that is what should happen), but the way round it would be to convert it to an invalid L3V1 model that can be simulated by certain simulators.

If we are allowing things to be missing (!) then we have to go with 'this is undefined'.

Original comment by: sarahkeating

sbmlsecretary commented 8 years ago

But in the case where we have an existing parameter value and an initialAssignment with no math, we have a defined value. If we do not have a parameter value either, then the value is undefined, no matter what the decision is on the initialAssignment.

So saying "ignore a construct with a missing math in a mathematical context" does not impact the maths, I think. (Note the "in a mathematical context", which may be a way out of the current dilemma by cutting some slack to people who do not actually want to perform simulations.)

I am sympathetic with Chris' concern, but this is a tool issue. COPASI lists plenty of such situations. It warns the user and points to possible problems. But it does not stop the user from using possible problematic models if so they wish.

Original comment by: lenov

sbmlsecretary commented 8 years ago

However, I think a "best practice" should be described stating that when math is missing, a tool should report that it is missing. It is then free to make an assumption and carry forward, if it likes. But it is best that the tool states to the user that an assumption has been made.

This would allow tools that want to carry forward assuming 0, NaN, or simply failing to all be correct. It is just best if the tool tells the user that it is stepping outside the specification, and that user experiences with different tools may vary at this point.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

I like Nicolas's 'mathematical context' caveat.

OK, it seems like we're coalescing around a sort of a consensus here. What if I added the following to the definition of all SBML elements with now-optional 'math' children:

"In a mathematical context, the meaning of [this SBML object] with no 'math' child is undefined. An SBML Level 3 package may supply the missing information, but if not, individual tools may choose what to do, including ignoring the construct entirely, producing an error, prompting the user for additional input, or proceeding with a default value. It is considered best practice to inform the user about what the tool is doing in these situations."

This would be slightly modified in different contexts, for example, for the 'Trigger', I would change the middle part to '...ignoring the Event entirely...' and remove the 'proceeding with a default' clause, because there's no reasonable default 'Trigger' formula I can imagine.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

This works for me.

Original comment by: bgoli

sbmlsecretary commented 8 years ago

Works for me too.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

So, I went to go actually write up this change, and it's starting to rub me the wrong way again. As an example, originally, I had:

"Similarly, a \Priority with no \token{math} child leaves the priority of the event undefined. A package may define how to include extra information that would define how to apply this \Priority; otherwise, the event behaves as if it did not have a priority."

and then changed it to:

"Similarly, a \Priority with no \token{math} child leaves the priority of the event undefined. An SBML Level~3 package may supply this missing information. In a mathematical context, individual tools may choose what to do in this situation, including ignoring the \Priority entirely, producing an error, or prompting the user for additional input. It is considered best practice to inform the user about what the tool is doing in these situations."

and I can't help but think that the new version is nothing more than a time bomb for future problems and incompatibilities. In the original version, we can make a test case with expected output for an Event with a Priority that has no math. In the new version, we can create a model that nobody knows what to do with: produce an error? Treat the Event as if it had no Priority? Ignore the Event entirely?

Similarly, we now would have the case where an Event with no Trigger would simply never fire, but an Event with a Trigger with no math can produce an error or refuse to be simulated at all.

I do understand the appeal of not wanting to assume anything, but if we already knew what to do with a model without the construct, I'm no longer sure it's a bad idea to just continue to treat the model the same way when it has a partially-defined construct.

Could we get away with just letting tool writers produce a warning if they wanted?

"Similarly, a \Priority with no \token{math} child leaves the priority of the event undefined. A package may define how to include extra information that would define how to apply this \Priority; otherwise, the event behaves as if it did not have a priority. However, a tool may produce a warning in this situation if it so chooses."

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

I think you could go a little further and say a tool SHOULD produce a warning and COULD simply decide such a model is incomplete and not simulate it at all.

I really think it is problematic to allow simulator developers to silently interpret the absence of something as having meaning. I also do not like the absence of a trigger to mean it cannot trigger; this would imply that the trigger has a default of "false". If we really believe in no defaults, then it should be undefined whether it should trigger or not.

I think it is indeed a time bomb to not at least strongly encourage software developers to notify the user when they are leaving the space defined in the specification.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

There are a lot of things whose absence implies meaning, even in L3v1: the absence of an initial assignment for a value means that nothing is initially assigned to that value. The absence of a Priority means that an event, if triggered simultaneously with other events, may be executed in any order. The absence of a Delay means there is no time delay between the event trigger and it being executed. The absence of an assignment rule to a variable means that variable does not get anything assigned to it. The absence of an algebraic rule means there are no additional constraints on the variables in the model.

In L3v2, we now allow the absence of a Trigger, so we have to say what that means. We don't (in the current draft) say that it means that the Trigger is always 'false'; we say that it can never transition from 'false' to 'true', and thus the event never fires. If a Trigger is present but has no 'math' child, the same thing could be true. It's a simple explanation that makes intuitive sense, and can be tested.
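[Editorial note: the "can never transition from 'false' to 'true'" reading described above can be sketched as follows. This is illustrative only; the names are not from any real API:]

```python
def step_trigger(trigger_math, prev_value):
    """One evaluation step of an event trigger.
    An event fires on a False -> True transition of its trigger.
    With no trigger math (None), no transition can ever occur,
    so the event never fires -- the draft L3v2 reading described above."""
    if trigger_math is None:
        return False, prev_value          # no math: state cannot change
    value = bool(trigger_math())
    fired = (not prev_value) and value    # rising-edge detection
    return fired, value

print(step_trigger(None, False))          # → (False, False)
```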

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

Ok, point taken. However, most of these other items can still do something effective without the things you mention. An InitialAssignment without math seems incomplete and is likely a mistake that the user should fix before simulation. I think care needs to be taken to preserve the same result across tools.

I think we should either describe precisely the semantics of a missing element OR indicate that a tool SHOULD either quit simulation or warn the user that it is providing non-standard results at that point.

I still remember a poster by a student who showed simulations of many SBML models on different tools with different results, claiming this means that there is a problem with SBML. We want to try to prevent such situations.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

Right, it's exactly the 'different results in different tools' that I want to avoid. The simplest way to do this is to either precisely define the semantics of the missing elements, or to make missing elements illegal. Since we decided this situation was legal (to allow packages to supply missing semantics, among other reasons), the next best option (in my opinion) would be to define the semantics, and then put all the weird situations into the test suite. Especially since there's a relatively intuitive and consistent tack you can take: 'mathematically, treat the element like it didn't exist; warn the user'.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

Lucian's chain of argument sounds reasonable to me. I agree that it is better to define such intuitive rules as described two posts above. We should list where we can find such rules and declare all other cases of missing math etc. to result in undefined math with the need to display some kind of error message.

Original comment by: andreas-draeger

sbmlsecretary commented 8 years ago

Here's a summary of the current draft's take on the mathematical interpretation of missing constructs:

The one thing that's not on the list is 'a Reaction with no KineticLaw', since that's been true since forever ago, and SBML has never actually said what the semantic mathematical meaning of this situation is. Which, of course, has led to our current divergent set of behaviors (sigh). We could potentially describe the current suite of options, and then recommend a new 'best practice', perhaps?

We haven't actually talked about the validation rules yet, but I think it's probably a good idea to say that they are still in effect, particularly since if a package supplies the missing information, the same general constraints will still be needed: an AssignmentRule that somehow calculates its assignment from user data (to make up an example) will still require that the variable in question not be affected by a RateRule nor by a Reaction. It would also emphasize that validation is not the same as using a model in a mathematical context.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

> FunctionDefinition with no math: Number of arguments to the function can be anything, but no mathematical result is defined without additional information. [This is the one situation we cannot put into the test suite: a call to an empty function definition cannot have a default return value.]

Result NaN, maybe?

> KineticLaw with no math: same as Reaction with no KineticLaw, with the caveat that there might be LocalParameters that packages could use.
> Event Trigger with no math: same as Event with no Trigger.
> Event Priority with no math: same as Event with no Priority.
> Event Delay with no math: same as Event with no Delay.
> EventAssignment with no math: same as no EventAssignment.
> To add: validation rules still in effect.

I would still suggest that there be some sort of validation rule that indicates a model is complete for simulation. These can clearly be best-practice rules, but I think it is good to have them to find many of the situations above, which I believe are more often errors than deliberate choices made for some useful benefit. I understand the need to allow for the cases above, so models can be incomplete during construction, but I still feel that once you press the simulation button, you should be able to "check the model is complete" before allowing simulation to proceed. In iBioSim, we have a function that we call "checkModelCompleteness". Currently, it checks that:

1) Compartments have a size
2) Species have an initial amount/concentration
3) Parameters have an initial value
4) Reactions either have flux bounds OR a kinetic law
5) Local parameters within reactions have a value

We will need to obviously extend this now to look for missing math. I feel that having such a function in libSBML/JSBML would help promote a “best practice” of warning the user before simulating an incomplete model. If before simulation, a tool runs this check, they can stop immediately (as we do) OR provide a warning and carry on with assumed values.
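[Editorial note: a sketch of such a completeness check over the five points above. The dict-based model representation and the function name are invented here; iBioSim's actual checkModelCompleteness code does not appear in this thread:]

```python
def check_model_completeness(model):
    # 'model' is a plain dict standing in for a parsed SBML model.
    problems = []
    for c in model.get("compartments", []):
        if c.get("size") is None:
            problems.append("compartment '%s' has no size" % c["id"])
    for s in model.get("species", []):
        if s.get("initialAmount") is None and s.get("initialConcentration") is None:
            problems.append("species '%s' has no initial amount/concentration" % s["id"])
    for p in model.get("parameters", []):
        if p.get("value") is None:
            problems.append("parameter '%s' has no value" % p["id"])
    for r in model.get("reactions", []):
        if r.get("kineticLaw") is None and not r.get("fluxBounds"):
            problems.append("reaction '%s' has neither a kinetic law nor flux bounds" % r["id"])
        for lp in r.get("localParameters", []):
            if lp.get("value") is None:
                problems.append("local parameter '%s' has no value" % lp["id"])
    return problems

# An incomplete model: a parameter with no value, a reaction with no law.
model = {"parameters": [{"id": "a"}],
         "reactions": [{"id": "J0"}]}
print(check_model_completeness(model))
# → ["parameter 'a' has no value",
#    "reaction 'J0' has neither a kinetic law nor flux bounds"]
```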

The reason I’m so adamant about this is the process of checking this has found countless bugs not only in models, but also in our own software where things were getting lost for some reason.

> The one thing that's not on the list is 'a Reaction with no KineticLaw', since that's been true since forever ago, and SBML has never actually said what the semantic mathematical meaning of this situation is. Which, of course, has led to our current divergent set of behaviors (sigh). We could potentially describe the current suite of options, and then recommend a new 'best practice', perhaps?

To be consistent with your descriptions above, I would assume that for kinetic simulation, it would be as if the reaction does not exist, but, of course, FBA may still be applicable if there are bounds. I suppose we could change things to drop the reaction in our simulations too, but I would still present the warning to the user.

> We haven't actually talked about the validation rules yet, but I think it's probably a good idea to say that they are still in effect, particularly since if a package supplies the missing information, the same general constraints will still be needed: an AssignmentRule that somehow calculates its assignment from user data (to make up an example) will still require that the variable in question not be affected by a RateRule nor by a Reaction. It would also emphasize that validation is not the same as using a model in a mathematical context.

Agreed.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

I totally agree that having a universal 'can this model be simulated' check would be great. Do you think this needs to be added to the specification itself? Or do you think it works better as simply a libsbml function? If you think it should be a spec thing, we can file a new issue for that (to be added to the 'best practice' section, presumably).

If you and Brett (and others, I guess?) are OK with allowing simulations of reactions with no kinetic laws, I would be willing to add that to the spec, too. We could even say that this is a change for L3v2 models, so that simulators' old behaviors are still valid.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

> I totally agree that having a universal 'can this model be simulated' check would be great. Do you think this needs to be added to the specification itself? Or do you think it works better as simply a libsbml function? If you think it should be a spec thing, we can file a new issue for that (to be added to the 'best practice' section, presumably).

I think this is something that should be in the best practice section. However, my recollection is best practice is allowed to change at anytime, so it does not need to be before release of L3V2. However, it may not be that difficult to write. It is also a pretty simple function to write into validation.

> If you and Brett (and others, I guess?) are OK with allowing simulations of reactions with no kinetic laws, I would be willing to add that to the spec, too. We could even say that this is a change for L3v2 models, so that simulators' old behaviors are still valid.

I think it would be odd to treat kinetic laws without math in one way and reactions without kinetic laws in another. So, perhaps it is simplest to say that in either of these cases, the reaction has no effect in simulation.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

Just caught up, great discussion, just my current 2c.

In general I'm convinced by the "ignore the element containing missing math" idea, except when it comes to Reactions without a KineticLaw, where removing the Reaction completely from the model (even just for simulation) seems strange.

I guess mostly because a Reaction is more than a KineticLaw. What does "ignoring the Reaction" actually mean, removing it from the network? If so, this can have a significant effect on the model's stoichiometric structure and would mean that software that does both structural analysis and simulation would essentially be analysing two different reaction networks.

Also, in actual modelling terms, whether a Reaction returns a rate of zero, returns NaN, or throws an exception makes very little difference (if the software informs you about it at some point), and I think it will be a hard sell to try to convince any of the existing software developers that they are doing it wrong :-)

Original comment by: bgoli

sbmlsecretary commented 8 years ago

That makes sense, and has the added benefit of continuing existing behavior. I didn't quite mean to say that simulators should ignore the Reaction entirely, but instead that they should treat the reaction as if it had a reaction rate of zero. On reflection, though, this feels a little more like a 'meaningful default' instead of 'just ignore it', so I tend to agree that in this case, saying that the kinetic law is undefined seems like the way to go. (Not to mention that users of various tools have learned to expect that tool's behavior at this point, so a change in behavior not only means recoding those tools, but retraining those users.)

Chris is also correct that it would be weird to say that a missing KineticLaw meant one thing, but a KineticLaw with a missing 'math' child meant something else, so my inclination is to say they both mean the same thing: that the value of the KineticLaw is undefined.

This tracker item is getting a little unwieldy as far as figuring out 'what are we actually voting on at this point?', so let me see if I can sum up the current proposal:

Did I miss anything? Would anyone like to propose an alternative to any of the above points?

I'll open a new tracker item for Chris's 'best practices' suggestion for defining 'mathematically complete' models.

(^): we could say that a FunctionDefinition with no math returns NaN, perhaps? I'm not enamored of the idea, but that's more instinct than reason at this point. I'd be happier with mentioning it as a possibility, though.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

Just one remark: there could always be packages that redefine or extend elements and give meaning where it would otherwise be lacking. For instance, if a reaction has upper bound = lower bound = value > 0, but lacks a kinetic law, its flux could be interpreted as being that constant value, even though the core model would lack kinetic information.

Furthermore, an "isSimulatable" function would need to take the context / the simulation framework into account. Different rules may apply depending on how the model is going to be simulated. For instance, kinetic laws have now become totally irrelevant in FBC scenarios since local parameters are no longer used.

It might therefore be difficult to write such a check function in general and we would need to specify exactly what it should check and under which circumstances.

Original comment by: andreas-draeger

sbmlsecretary commented 8 years ago

Looks good.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

Yes, and you're absolutely right that this needs to be emphasized in these discussions in the spec. A major reason that we're relaxing these rules in the first place is so that packages can supply the missing information. While it seems like a good idea to tell people what to do if nothing supplies that information, telling them to look for it is important!

This is also a good point about the proposed 'best practices' section; I'll copy it over.

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

I get your point. It would certainly depend on what we mean by "is simulatable". It could be narrowed to "is an SBML core simulatable model", namely that when reduced to SBML core, the model is complete with respect to ODE simulation. This would include comp/arrays (with static arrays), since you could flatten and check. It would not include models with, say, Spatial/FBC/Multi/Qual etc. These packages would need their own "is type simulatable" checks.

Original comment by: ccmyers

sbmlsecretary commented 8 years ago

Well, with the FBC package we already do this with the "strict" attribute, which essentially enables a bunch of arbitrary, package-specific validation rules.

Original comment by: bgoli

sbmlsecretary commented 8 years ago

I agree, looks good.

Original comment by: bgoli

sbmlsecretary commented 8 years ago

So far, we have two editor votes in favor of the final version of this change from Brett and Andreas, though most people have expressed their opinions at some point in the process.

I've now updated the spec in SVN to reflect the final proposal; here are the relevant sections:


\changed{An \InitialAssignment with no \token{math} child leaves undefined what assignment is to be made to the corresponding \token{symbol}. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \InitialAssignment and define how a value is to be computed. In the absence of any such construct, no assignment is carried out when there is no \token{math} element. This leaves the model unchanged: any element that had a value will continue to have that value; any element whose value was undefined will continue to have its value undefined. A simulator encountering this situation may choose to produce a warning. No other validation rules are affected by the absence of a \token{math} child: it is still invalid to have an \InitialAssignment and an \AssignmentRule that assign to the same model element, for example.}


\changed{A \Rule with no \token{math} child leaves undefined how the rule behaves mathematically. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \Rule and define this behavior. In the absence of any such construct, no assignments or other changes to the model are carried out when there is no \token{math} element. This leaves the model unchanged: any element that had a value will continue to have that value; any element whose value was undefined will continue to have its value undefined. A simulator encountering this situation may choose to produce a warning. No other validation rules are affected by the absence of a \token{math} child: it is still invalid to have an \InitialAssignment and an \AssignmentRule that assign to the same model element, for example.}


\changed{A \Constraint with no \token{math} child does not define a mathematical constraint. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \Constraint and define how such a constraint is to be evaluated. In the absence of any such construct, no restriction on the model's behavior is implied. A simulator encountering this situation may choose to produce a warning.}


[From the Reaction section, when discussing how the KineticLaw is optional:] \changed{In the absence of any further definition, some simulators choose to give an error and refuse to simulate models that have a \Reaction with no \KineticLaw. Others assume that the effective rate of a \Reaction with no \KineticLaw is zero. Still others define this value to be not-a-number (\token{NaN}). This behavior is not standardized, and should not be relied upon when exchanging models for simulation.}


\changed{A \KineticLaw with no \token{math} child leaves undefined the mathematics of the corresponding \Reaction. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \KineticLaw and define how a value is to be computed. Otherwise, the model behaves mathematically as if no \KineticLaw was defined at all. This is sometimes useful when the modeler desires to define one or more \LocalParameter objects, which live in the \KineticLaw.}
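As a sketch of the \LocalParameter use case mentioned above (the identifier and value here are hypothetical, not from the spec text), a \KineticLaw with no \token{math} child can still serve as a container for local parameters:

```xml
<!-- A kineticLaw with no <math>: the reaction behaves as if it
     had no kineticLaw at all, but k1 is still declared and can
     be referenced by a package construct that supplies the math. -->
<kineticLaw>
  <listOfLocalParameters>
    <localParameter id="k1" value="0.1"/>
  </listOfLocalParameters>
</kineticLaw>
```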


\changed{Similarly, a \Trigger with no \token{math} child leaves undefined when the corresponding \Event will trigger. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \Trigger and define when the event is triggered. In the absence of any such construct, the event is never triggered. A simulator encountering this situation may choose to produce a warning.}


\changed{Similarly, a \Priority with no \token{math} child leaves the priority of the event undefined. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \Priority and define how a value is to be computed. In the absence of any such construct, the event behaves as if it did not have a priority. A simulator encountering this situation may choose to produce a warning.}


\changed{Similarly, a \Delay with no \token{math} child leaves the delay of the event undefined. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \Delay and define how a value is to be computed. In the absence of any such construct, the event is considered to execute as if it had no \Delay. A simulator encountering this situation may choose to produce a warning.}


\changed{An \EventAssignment with no \token{math} child leaves undefined what assignment is to be made to the corresponding \token{symbol}. The absence of a \token{math} element is permitted because it is possible for SBML Level~3 packages to add constructs that extend \EventAssignment and define how a value is to be computed. In the absence of any such construct, no assignment is carried out when there is no \token{math} element. This leaves the model unchanged: any element that had a value will continue to have that value; any element whose value was undefined will continue to have its value undefined. A simulator encountering this situation may choose to produce a warning. No other validation rules are affected by the absence of a \token{math} child: it is still invalid to have an \EventAssignment and an \AssignmentRule that assign to the same model element, for example.}
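Putting the \Event-related cases together (attribute values here are illustrative only), an event whose \Trigger, \Priority, and \Delay all lack \token{math} children would, under the proposed semantics, simply never fire:

```xml
<event useValuesFromTriggerTime="true">
  <!-- trigger with no <math>: the event is never triggered -->
  <trigger initialValue="false" persistent="true"/>
  <!-- priority with no <math>: as if the event had no priority -->
  <priority/>
  <!-- delay with no <math>: as if the event had no delay -->
  <delay/>
</event>
```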

Any other suggestions and/or votes are welcome!

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

I approve the text above.

Original comment by: andreas-draeger

sbmlsecretary commented 8 years ago

I agree

Original comment by: sarahkeating

sbmlsecretary commented 8 years ago

Original comment by: luciansmith

sbmlsecretary commented 8 years ago

With three editors approving the final-final version of this change (Brett, Andreas, Sarah), I am marking this as 'pending' and am adding it to the 'design changes' page for l3v2 (http://sbml.org/Documents/Specifications/SBML_Level_3/Version_1/Core/Design_changes_planned_for_the_Level_3_Version_2_Core_Specification). If Nicolas or Dagmar wish to re-open the issue at any time, they may do so. Otherwise, the change has been incorporated into the L3v2 SVN, and will be released with that specification.

Original comment by: luciansmith

sbmlsecretary commented 6 years ago

Original comment by: luciansmith

sbmlsecretary commented 6 years ago

With the release of L3v2, this issue is now resolved.

Original comment by: luciansmith