Closed: MauriceHendrix closed this issue 4 years ago.
See my slack comment...
Seems more like a cellmlmanip / sympy issue.
You can stop `sympy.exp` eval-ing in the transpiler by wrapping the Sympy function. Add something like:

```python
def _exp_handler(self, node):
    def _wrapped_exp(expr):
        return sympy.exp(expr, evaluate=False)
    return _wrapped_exp
```

in `cellmlmanip.transpiler.Transpiler`. You'll need to add `'exp': self._exp_handler` to the `self.handlers` dict and remove `exp` from `SIMPLE_MATHML_TO_SYMPY_NAMES`.
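For illustration, a standalone sketch (plain sympy, not the transpiler itself) of what `evaluate=False` buys you on an `exp` call:

```python
import sympy

# by default, exp folds a Float argument straight into a number
folded = sympy.exp(sympy.Float(-2500))
print(folded.is_Float)  # True: already collapsed to a constant

# with evaluate=False the call stays symbolic in the expression tree
kept = sympy.exp(sympy.Float(-2500), evaluate=False)
print(kept.is_Float)           # False
print(kept.func is sympy.exp)  # True
```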
I made changes to transpiler in https://github.com/ModellingWebLab/cellmlmanip/commit/28f461ddd22195c9ab962eae5308dc49755ff8fd. (189-transpiler-exp-evaluate)
Does that fix your error?
> I made changes to transpiler in 28f461d. (189-transpiler-exp-evaluate) Does that fix your error?
Makes no difference; the generated code is the same! I tried changing all my simplify calls to sympify(..., evaluate=False) but that didn't help (the offending equations aren't built by my code).
Interestingly, if I throw `1.0 / (1.0 + exp((-(var_ICaL__V + 60.0)) / 0.024))` into simplify I get:

```
1.0*exp(41.6666666666667*var_ICaL__V)/(1.0*exp(41.6666666666667*var_ICaL__V) + 1.83567266916216e-1086)
```
If you add `evaluate=False` in different parts of cellmlmanip, can you find one that works? I'm guessing with this we've ruled out 'simplification' in the transpiler stage, so can we narrow down where it's happening (possibly multiple places)? Do we even need double-wrapping or something silly?
> Interestingly, if I throw `1.0 / (1.0 + exp((-(var_ICaL__V + 60.0)) / 0.024))` into simplify I get: `1.0*exp(41.6666666666667*var_ICaL__V)/(1.0*exp(41.6666666666667*var_ICaL__V) + 1.83567266916216e-1086)`
Sympy is sitting on top of arbitrary precision, so it does work! It's when you wrap the very small number in a `float()` for printing that you get back a zero.
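A standalone illustration of that, using the constant hiding in the offending equation:

```python
import sympy

# exp(-60.0 / 0.024) == exp(-2500), the constant from the offending equation
tiny = sympy.exp(sympy.Float(-60.0) / sympy.Float(0.024))

print(tiny)         # ~1.83e-1086: fine in sympy's arbitrary precision
print(float(tiny))  # 0.0: the double conversion used for printing underflows
```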
> Do we even need double-wrapping or something silly?
The brute-force way is to wrap the `sympy.exp` in a `sympy.UnevaluatedExpr`, but then it's up to you to `doit()` when required.
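A quick standalone sketch of how `UnevaluatedExpr` behaves:

```python
import sympy

x = sympy.Symbol('x')

# wrapped factors are not combined on construction...
e = sympy.UnevaluatedExpr(sympy.exp(x)) * sympy.UnevaluatedExpr(sympy.exp(-x))
print(e)          # exp(x)*exp(-x)

# ...until you explicitly ask for evaluation
print(e.doit())   # 1
```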
> Sympy is sitting on top of arbitrary precision, so it does work! It's when you wrap the very small number in a `float()` for printing that you get back a zero.
aah that's interesting!
Right, so it's "caused" by the printing, but if I adjust the printing not to wrap it in a float but to just print, I get a C++ compile error:

```
error: floating constant truncated to zero [-Werror=overflow]
```
Yeah, I'd expect that, because it's not really the printing zero that's at fault, it's the 'simplification' that is only true if you're evaluating in arbitrary precision, which generated code isn't. So it's really the simplification that needs to stop in this case.
> Yeah, I'd expect that, because it's not really the printing zero that's at fault, it's the 'simplification' that is only true if you're evaluating in arbitrary precision, which generated code isn't. So it's really the simplification that needs to stop in this case.
any idea where though? I can't really find any simplifications?
I thought we were using `sympy.Float` objects, which do have a limited precision? https://docs.sympy.org/latest/modules/evalf.html#floating-point-numbers

Or am I misunderstanding that, @tamuri ?
Hmm, but initially all numbers are `NumberDummy` objects, and only substituted for floats late in the process, so perhaps before that point sympy assumes it can treat them as arbitrary precision?
At that stage it shouldn't simplify though, should it? Because they're dummies
> Or am I misunderstanding that, @tamuri ?
I believe Sympy increases the precision of the Float as necessary. I guess there's probably an upper bound. e.g.

```python
s.N(s.pi, 100000)
# and
In [121]: type(s.N(s.pi, 100000))
Out[121]: sympy.core.numbers.Float
```

OK, it says something like that here too: http://www.cfm.brown.edu/people/dobrush/am33/SymPy/numeric.html#accuracy-and-error-handling
> any idea where though? I can't really find any simplifications?

I think it's the `subs` and `xreplace` calls that are responsible.
> I think it's the `subs` and `xreplace` calls that are responsible.
I was just about to suggest the same :)
You could wrap each of those calls with `with sympy.evaluate(False):`, but at some point you'll have to `.doit()` ...probably just before you print?
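Something like this (standalone sketch: the substitution stays unevaluated inside the context manager, and `doit()` evaluates on demand later):

```python
import sympy

x = sympy.sympify('y*y', evaluate=False)

# do the substitution inside the context manager so nothing folds...
with sympy.evaluate(False):
    sub = x.subs({'y': '3.0'})
print(sub)  # 3.0*3.0

# ...and only evaluate right before printing / code generation
print(sub.doit())
```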
I'm just trying out `evaluate=False` with `subs`, but it seems not to work as I expect:

```python
>>> x = sympy.sympify('y*y', evaluate=False)
>>> x
y*y
>>> x.subs({'y': '3.0'}, evaluate=False)
9.00000000000000
>>> x.subs({'y': 't'}, evaluate=False)
t**2
```
`subs` doesn't do anything with the `evaluate` argument. You need to wrap with the context manager:

```python
In [123]: import sympy

In [124]: x = sympy.sympify('y*y', evaluate=False)

In [125]: x
Out[125]: y*y

In [126]: x.subs({'y': '3.0'}, evaluate=False)
Out[126]: 9.00000000000000

In [127]: with sympy.evaluate(False):
     ...:     print(x.subs({'y': '3.0'}))
     ...:
3.0*3.0
```
> `subs` doesn't do anything with the `evaluate` argument. You need to wrap with the context manager.
aah ok
that works
> I just realised we already have an option implemented for this: `strip_units=False`, e.g. `model.get_equations_for([v1], strip_units=False)`. The naming isn't the most intuitive; might it be a good idea to rename it to `evaluate=False`, or something like `no_simplify`, or whatever?

Well, except that the main purpose of the `strip_units` option is that it controls whether to strip unit information from the numbers. The simplification is just a side effect, and apparently could be avoided using the context as Asif suggests above, if we want to do so?
I removed the comment as actually it didn't work
This goes back to the 'two different sympy representations in the same model' problem, which makes it tricky to understand what's going on.
I understand the idea of waiting with API design until we're more clear on what the requirements are, but we do spend a lot of time confusing each other :D
The `livshitz_rudy_2007` model generated with chaste_cg fails the chaste test:

- the cellml: livshitz_rudy_2007.cellml
- the pycml output: livshitz_rudy_2007.cpp
- the chaste_cg output: livshitz_rudy_2007.cpp

My dev test tells me it's the same except for 3 equations that look like the one below, for example for `var_ICaL__dss1`:

pycml:

```cpp
const double var_ICaL__V = var_chaste_interface__cell__V;
const double var_ICaL__dss1 = 1.0 / (1.0 + exp((-(var_ICaL__V + 60.0)) / 0.024));
```

cellmlmanip (formatted for chaste):

```cpp
const double var_ICaL__dss1 = 1.0 / (1.0 + 0.0 * exp(-41.666666666666664 * var_chaste_interface__cell__V));
```
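For the record, a standalone sketch of why the second form is wrong at runtime, not just cosmetically different. Near V = -60 the exponential overflows a double, and IEEE arithmetic gives `0.0 * inf == nan` (Python's `math.exp` raises on overflow, so `float('inf')` is used here to mimic the C++ behaviour):

```python
import math

V = -60.05  # a voltage just below -60 mV, where the two forms disagree

# pycml's form of var_ICaL__dss1: well-behaved here
correct = 1.0 / (1.0 + math.exp(-(V + 60.0) / 0.024))
print(correct)  # roughly 0.11

# the cellmlmanip form multiplies 0.0 by exp(-41.67 * V) ~= exp(2502);
# in C++ doubles that exponential is inf, and 0.0 * inf is nan
broken = 1.0 / (1.0 + 0.0 * float('inf'))
print(broken)  # nan
```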