@brandomr @bgyori @jpfairbanks what do you think of this? Still needs a README with some details of the semantics it encodes.
This seems great to me. We could enhance the example to use distributions over the parameters and to include metadata extractions, but those seem like "bonus points" to me.
I think any issues around the edges will come out during the stress test later this week, so I feel like this is good enough to move forward with unless others have strong feelings about things that should be addressed prior to Thursday.
When taking a closer look, I did realize one thing that I might want to change: when proposing the `rate_constant` representation, I thought we could have it be any of (1) null, (2) a floating point number, or (3) the ID of a model parameter whose value/distribution is defined separately. I now realize that allowing (2) is not so great because it results in unnamed/unidentified parameters that are difficult to refer to. Could we just allow (1) and (3)?
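For concreteness, a rough sketch of what the schema entry could look like if we only allow (1) and (3). This is hypothetical, not the merged schema:

```python
# Hypothetical JSON Schema fragment for rate_constant, expressed as a Python dict.
# It allows only null or the string id of a parameter defined elsewhere in the model.
rate_constant_schema = {
    "oneOf": [
        {"type": "null"},
        {"type": "string", "description": "id of a model parameter defined separately"},
    ]
}
```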
@bgyori what you're proposing seems preferable to me in general, but I don't think it will matter from the HMI/TA4 perspective since we allow the `initials` to be numbers.
To make your suggestion concrete, in the Lotka Volterra example we'd move the `Wolves` rate constant into a parameter (e.g. `wrc`) and then refer to that id `wrc`, right?
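Roughly like this, I imagine (the keys and the numeric value are illustrative only, not the actual schema):

```python
# Hypothetical sketch of the Lotka-Volterra fragment with the Wolves rate
# constant pulled out into a named parameter; field names are illustrative.
model_fragment = {
    "vertices": [
        {"id": "wolves", "rate_constant": "wrc"},   # reference the parameter by id
    ],
    "parameters": [
        {"id": "wrc", "value": 0.3},                # value or distribution lives here
    ],
}
```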
What if I merge this as is, then add it as a discussion point to the agenda for tomorrow?
Yes, that's right. Sure, we can merge it as is for now.
Micah and I just talked over this format as merged. Since we don't want to reopen the discussion, I'm just documenting our interpretation and how it might differ from other implementations.
There are signs and rates on the vertices and edges. In our internal representation we are going to store the vertices without signs but with Float64 rates, and store edges with signs and nonnegative rates. When reading a model, we fold each vertex's sign into its stored rate; when writing a model, we will emit positive vertex rates and set the sign to the sign of the internally stored rate.
That is a little inconsistent, but it is the easiest compromise with the merged framework. If all the vertex and edge rates are positive, then I think we get the unambiguously correct behavior; when vertex or edge rates are negative, the correct behavior is ambiguous.
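To make the vertex convention concrete, here is a minimal sketch of the read/write rule we have in mind (the `sign` and `rate_constant` keys are placeholders, not necessarily the schema's actual field names):

```python
# Sketch of the vertex convention only; edges keep their sign plus a nonnegative rate.

def read_vertex_rate(vertex: dict) -> float:
    """Fold the vertex sign into a single signed float for internal storage."""
    sign = 1.0 if vertex["sign"] else -1.0
    return sign * vertex["rate_constant"]

def write_vertex_rate(internal_rate: float) -> dict:
    """Emit a nonnegative rate and carry the sign of the stored rate in the sign field."""
    return {"sign": internal_rate >= 0, "rate_constant": abs(internal_rate)}
```

With that round trip, anything we write out has positive vertex rates, which is the unambiguous case described above.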
We don't have time to relitigate anything in this spec, so I'm just documenting our interpretation. Other implementations should take a similar interpretation, but can do their own thing if they want.
This implements the JSON schema validation for our new regulatory network JSON schema.
Resolves #11
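For reference, a minimal sketch of how a model file could be checked against the schema from Python using the `jsonschema` package (file names are placeholders, and this is not necessarily how the validation in this PR is implemented):

```python
import json
from jsonschema import validate, ValidationError

# Placeholder paths; point these at the schema and a model instance.
with open("regnet_schema.json") as f:
    schema = json.load(f)
with open("lotka_volterra.json") as f:
    model = json.load(f)

try:
    validate(instance=model, schema=schema)
    print("Model conforms to the regulatory network schema.")
except ValidationError as err:
    print(f"Validation failed: {err.message}")
```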