JCGoran closed this 2 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 86.67%. Comparing base (70553cc) to head (da9dad0).
Logfiles from GitLab pipeline #204052 (:white_check_mark:) have been uploaded here!
Logfiles from GitLab pipeline #204522 (:white_check_mark:) have been uploaded here!
I would have preferred not to replace the test, but to have both: the existing one to check the accuracy of v(t), and the new one to check stability. To have both, it might be better to make the constant 0.005 a parameter that we can set from the Python script. Then one could use the same MOD file for both cases.
Logfiles from GitLab pipeline #205206 (:white_check_mark:) have been uploaded here!
> To have both it might be better to make the constant 0.005 a parameter that we can set from the Python script.

Since the current workflow is `nrnivmodl` -> `python simulate.py`, I don't see a way to set the parameters of the mod file from the Python script itself (that is, unless we add another script beforehand which modifies the AST of the mod file).
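For what it's worth, short of a script that modifies the AST, a pre-processing step run before `nrnivmodl` could substitute the constant textually. A minimal sketch (the function name and the purely textual matching are assumptions for illustration, not part of the actual workflow):

```python
import re

def set_mod_constant(mod_source: str, name: str, value: float) -> str:
    """Replace the assigned value of a constant in MOD source text.

    Matches assignments like ``c = 0.005`` and swaps in the new value.
    Purely textual; no NMODL parsing involved.
    """
    pattern = rf"(\b{re.escape(name)}\s*=\s*)[-+0-9.eE]+"
    return re.sub(pattern, rf"\g<1>{value}", mod_source)

mod = "PARAMETER {\n    c = 0.005\n}\n"
print(set_mod_constant(mod, "c", 0.5))  # PARAMETER { c = 0.5 }
```

This is fragile (it knows nothing about MOD syntax), which is part of why a proper RANGE/GLOBAL variable is the nicer solution.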
Added new test (simulate2.py, with mod file leonhard2).
Logfiles from GitLab pipeline #207481 (:white_check_mark:) have been uploaded here!
You can have global or range variables in the MOD file:

```
UNITS {
    (mA) = (milliamp)
}

NEURON {
    SUFFIX leonhard2
    NONSPECIFIC_CURRENT il
    RANGE c
}

ASSIGNED {
    il (mA/cm2)
}

BREAKPOINT {
    il = c * (v - 1.5)
}
```

(or similar). And then set from Python:

```python
s.leonhard_c = 0.05
# s.c_leonhard = 0.05
```

(or some variant thereof.)
> You can have global or range variables in the MOD file: ...

Huh, that's neat, updated!
Logfiles from GitLab pipeline #209316 (:white_check_mark:) have been uploaded here!
It's now a bit hard to understand how the two tests are really different. One is about checking the accuracy of v(t). The other is about checking the stability of the numerical method, e.g. that the method is fully implicit.

We check the former by using small time-steps and checking all the values along the way. Whereas, to me, checking stability is about taking a (stupidly) large timestep and checking that it doesn't explode. I understand that we can check both by simply cranking c up to a higher value, under the assumption that dt doesn't automatically shrink. However, I can't judge if 0.5 is large enough compared to the sane value 0.005.

The worry is that future maintenance would remove the second compare(0.5) because it's "redundant".
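To illustrate the stability point outside NEURON (this is a standalone sketch of the linear model behind `il = c * (v - 1.5)`, not the actual test): for dv/dt = -c (v - 1.5), explicit Euler diverges whenever c*dt > 2, while the fully implicit scheme decays toward 1.5 for any positive dt, which is exactly what a large-timestep stability check exploits.

```python
def explicit_euler(v, c, dt, steps):
    # v_{n+1} = v_n - dt * c * (v_n - 1.5); unstable when c * dt > 2
    for _ in range(steps):
        v = v - dt * c * (v - 1.5)
    return v

def implicit_euler(v, c, dt, steps):
    # v_{n+1} = (v_n + dt * c * 1.5) / (1 + dt * c); stable for any dt > 0
    for _ in range(steps):
        v = (v + dt * c * 1.5) / (1 + dt * c)
    return v

c, dt = 0.5, 10.0  # deliberately huge timestep: c * dt = 5 > 2
print(explicit_euler(0.0, c, dt, 20))  # magnitude ~1e12, exploded
print(implicit_euler(0.0, c, dt, 20))  # stays near the fixed point 1.5
```

With c = 0.005 and the same dt, c*dt = 0.05 and even the explicit scheme is stable, which is why cranking c up is what turns the run into a stability test.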
Logfiles from GitLab pipeline #209442 (:white_check_mark:) have been uploaded here!
Since NEURON's ODE solver is not extremely accurate in the intermediate region where the jump is present, we only compare the final values (without the fix for the conductances, the result for v is a bunch of NaNs). This requires updating the references again: https://github.com/BlueBrain/nmodl-references/pull/9
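A final-value-only comparison along those lines (the helper name and tolerance here are hypothetical, not taken from the actual test) might look like:

```python
import math

def compare_final(v_trace, v_ref, rel_tol=1e-9):
    """Compare only the last value of the trace against the reference,
    failing loudly on NaN instead of letting it slip through isclose."""
    v_final = v_trace[-1]
    assert not math.isnan(v_final), "v exploded to NaN"
    assert math.isclose(v_final, v_ref, rel_tol=rel_tol), (v_final, v_ref)

compare_final([0.0, 1.2, 1.49999999999], 1.5)  # passes
```

The explicit NaN check matters because `math.isclose(nan, x)` is simply `False`, so without it a blown-up simulation and a merely inaccurate one would fail with the same message.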