Thanks for the PR! It looks very good.

`rtol` being below `1.19e-7` is exclusively due to my ignorance. I just checked it, and if we set `rtol` to `1e-7` (the `numpy.testing` standard for `assert_allclose`), all tests pass except for `test_meanfield_calcs.py::Test_transfer_function_1p_taylor::test_correct_output[mean_driven_regime]`, where we get
E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0
E
E Mismatched elements: 20 / 20 (100%)
E Max absolute difference: 1.22686527e-05
E Max relative difference: 5.68353909e-06
E x: array([[2.196778-1.211425e-03j, 2.20737 -1.294038e-03j],
E [2.142894-3.561053e-01j, 2.145234-3.785869e-01j],
E [1.99727 -6.656993e-01j, 1.980996-6.995807e-01j],...
E y: array([[2.196765-1.211419e-03j, 2.20737 -1.294038e-03j],
E [2.142882-3.561033e-01j, 2.145233-3.785869e-01j],
E [1.997259-6.656957e-01j, 1.980996-6.995806e-01j],...
Would you say that this is a reasonable change due to the improved numerics? And what would be the right way to proceed? Should we raise the tolerance to `1e-7`? And does it make any difference that Python uses `float64` as its standard?
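As a side note (not part of the original comment), the relevant numpy defaults and machine epsilons can be checked directly; `1.19e-7` is exactly the `float32` epsilon, while `float64` resolves relative differences down to about `2.2e-16`:

```python
import numpy as np

# Default tolerances of assert_allclose are rtol=1e-7, atol=0.
np.testing.assert_allclose(1.0, 1.0 + 5e-8)   # passes: relative error below 1e-7

# Machine epsilon of the two float types:
print(np.finfo(np.float32).eps)  # 1.1920929e-07  -> the 1.19e-7 seen in the tests
print(np.finfo(np.float64).eps)  # 2.220446049250313e-16 -> Python's default float
```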
Thanks for checking the tests!

Regarding the failing test with `rtol=1e-7`: Is it possible to check whether the test is deep in the mean-driven regime, i.e. `(mu - V_th_rel) / sigma >> 1` (I still don't really understand how the test suite works :sweat_smile:)? I know that the old implementation runs into trouble in this regime, so this could explain the larger difference.

Regarding the tolerance of the test: I honestly don't know what the best practices are, but I think one should certainly not test beyond machine precision. Machine precision is basically `1 + eps == 1`, so that error is inherently there, and it accumulates through each successive operation; thus I think one should not aim for machine precision at the level of the entire (quite complex) computation here. But I have no clue which value is reasonable; I guess there is some black magic / experience involved. I saw float32 precision used in other libraries (don't remember where) and this might be reasonable for float64 algorithms. Probably best to ask experienced people like @mhelias, @diesmann, or @heplesser.
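To make the machine-precision point concrete (again just an illustrative aside): `eps` is the spacing of `float64` values around 1.0, so increments below roughly half of it are rounded away entirely:

```python
import numpy as np

eps = np.finfo(np.float64).eps    # ~2.22e-16, spacing of float64 values around 1.0
print(1.0 + eps == 1.0)           # False: a full eps is still resolved
print(1.0 + eps / 2 == 1.0)       # True: half the spacing rounds back to 1.0
```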
Yes, it's a good idea to ask the experts and I will do that. But this general question shouldn't hold us back from merging this PR :D.
Yes, it's possible to get the exact parameters:
mu = <Quantity([104.40491181 97.38091181], 'millivolt')>
sigma = <Quantity([20.61420113 20.58426277], 'millivolt')>
V_th_rel = <Quantity(20.0, 'millivolt')>
So we get `(mu - V_th_rel) / sigma` = `<Quantity([4.09450317 3.75922678], 'dimensionless')>`.
Would you say this is in the shady regime of the old implementation?
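For completeness, the quoted ratio can be reproduced with plain numpy (values copied from the quantities above, units stripped):

```python
import numpy as np

mu = np.array([104.40491181, 97.38091181])    # mV
sigma = np.array([20.61420113, 20.58426277])  # mV
V_th_rel = 20.0                               # mV

print((mu - V_th_rel) / sigma)  # [4.09450317 3.75922678]
```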
Mmm, `mu` is certainly very large, but `sigma` is as well. That's a case that I did not look at in detail. If `sigma` were not that large, `|y_th| = |mu - V_th_rel| / sigma` would be large and would lead to very large `exp(y_th**2)` factors, e.g. in `Phi`, that cause trouble. But I don't know if a large `mu` alone can be problematic.
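As a generic illustration of why a small `sigma` (and hence a large `|y_th|`) is dangerous (not the actual `Phi` code): `exp(y_th**2)` already overflows `float64` once `|y_th|` exceeds roughly 26.6:

```python
import numpy as np

# float64 tops out at ~1.8e308, i.e. exp(x) overflows for x > ~709,
# so exp(y_th**2) becomes inf once |y_th| exceeds ~26.6.
for y_th in [4.1, 10.0, 27.0]:
    with np.errstate(over='ignore'):
        print(y_th, np.exp(y_th**2))
# 4.1  -> ~2.0e+07 (fine)
# 10.0 -> ~2.7e+43 (large but representable)
# 27.0 -> inf      (overflow)
```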
Do you happen to know if another test of `nu_fb433` is that deep in the mean-driven regime (say `mu > 80 mV`)?
We have a test run with exactly the same parameters for `nu0_fb433`, and that one passes without any problems.

Just noticed that the rtol for both `nu0_fb` and `nu0_fb433` is pretty high, `rtol=0.05`; this was already the case prior to #56. Tightening the `rtol` (to `1e-6`) leads to
E AssertionError:
E Not equal to tolerance rtol=1e-06, atol=0
E
E Mismatched elements: 1 / 1 (100%)
E Max absolute difference: 2.76750186
E Max relative difference: 0.0132945
E x: array(205.40136)
E y: array(208.168862)
for `nu0_fb433`, and `nu0_fb` fails with a similarly large difference.
Due to the high rate of >200 spks/s, I assume that the failing test corresponds to the `mu` and `sigma` we discussed above; can you confirm this?

If this is true, I think the culprit for `Test_transfer_function_1p_taylor` is indeed the difference in the output rate. I think the absolute difference of ~2 spks/s is likely an artifact of the numerics, and I (naturally) tend to trust the new implementation more, in particular because I know that it tolerates the noise-free limit `|y_th| >> 1` very well.
Dug a bit deeper. I think the issue is a bug in the test suite in `real_shifted_siegert`; there was a `/np.sqrt(2)` instead of a `/2` (the latter is in agreement with `aux_calcs` as well as Fourcaud & Brunel, but please double check).
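For context, a hypothetical sketch of where such a factor typically enters (this is not the actual `real_shifted_siegert` code and all example values are made up; please check against `aux_calcs`): in the Fourcaud & Brunel correction, threshold and reset are shifted by `(alpha / 2) * sigma * sqrt(tau_s / tau_m)` with `alpha = sqrt(2) * |zeta(1/2)|`, i.e. the prefactor is `/2`, not `/np.sqrt(2)`:

```python
import numpy as np

# Illustrative values only (not from the test suite):
sigma = 20.6      # mV, noise strength
tau_m = 10.0      # ms, membrane time constant
tau_s = 0.5       # ms, synaptic time constant
V_th_rel = 20.0   # mV, relative threshold
V_0_rel = 0.0     # mV, relative reset

alpha = np.sqrt(2) * 1.4603545088095868            # sqrt(2) * |zeta(1/2)|
shift = alpha / 2 * sigma * np.sqrt(tau_s / tau_m)  # note: / 2, not / np.sqrt(2)

V_th_shifted = V_th_rel + shift
V_0_shifted = V_0_rel + shift
```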
Now I could tighten the tolerance to `rtol=1e-6` and it still works for `nu_0` and `nu0_fb`. Increasing the accuracy further does not work because even `quad` reaches its limits, I think (see the `IntegrationWarning`). `nu0_fb433` fails with a difference of `O(1e-4)`, but this is expected as it is compared to `real_shifted_siegert` instead of the Taylor version, right?
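A side note on the `quad` limit (generic scipy behaviour, not specific to this test suite): `scipy.integrate.quad` only targets a relative accuracy of about `1.49e-8` by default and raises an `IntegrationWarning` when it cannot reach the requested tolerance, so a reference value computed this way cannot meaningfully back an `rtol` far below `1e-6`–`1e-7`. A Siegert-like integrand as an example:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# quad's defaults are epsabs=1.49e-08 and epsrel=1.49e-08 (~sqrt of float64 eps),
# so the reference integral itself carries an error on that order.
value, abserr = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), -1.0, 1.0)
print(value, abserr)  # abserr is quad's own estimate of the absolute error
```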
Oh, that's great! I always thought that the discrepancy was due to the naive implementation of the integration and therefore considered this test nothing more than an additional cross-check.
And regarding your question:

> Due to the high rate of >200 spks/s, I assume that the failing test corresponds to the mu and sigma we discussed above; can you confirm this?

Yes, those were the results for the high-`mu` regime.
I will have a look at the code soon. Currently, I am at a retreat and can only answer during the coffee breaks.
No hurry, enjoy the retreat! :smiley: I have to admit that I am confused. As far as I can see, the fixture test of `nu0_fb433` shows that the new and old implementations agree to a very high accuracy. Thus, I don't see how the larger deviations in `transfer_function_1p_taylor` can occur.

As I said, for the moment I replaced most tolerances with `rtol=1e-7`.
Additionally, I found a bug in `test_correct_output` for `Test_nu0_fb433`: it didn't load the expected output correctly, and unfortunately an empty expected output let the test pass silently. This is resolved now.
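For illustration (a generic pitfall, not the actual fixture-loading code): `assert_allclose` on empty arrays passes silently, so a simple size check on the loaded fixture catches this kind of bug:

```python
import numpy as np

expected = np.array([])   # e.g. a fixture that silently failed to load
result = np.array([])

# Passes without complaint: there is simply nothing to compare.
np.testing.assert_allclose(result, expected, rtol=1e-7)

# A guard like this in the test would have made the failure loud:
if expected.size == 0:
    print("expected fixture is empty - check the fixture loading")
```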
Correspondingly, we now get a similar difference for `nu0_fb433` as for the transfer function:
E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0
E
E Mismatched elements: 1 / 1 (100%)
E Max absolute difference: 0.00120236
E Max relative difference: 5.77590215e-06
E x: array(208.167659)
E y: array(208.168862)
So I guess the difference is due to improved numerics. If you agree, I think we are ready to overwrite the old fixtures and set this as our new standard.
Great, thanks, this clears my confusion!
Yes, I agree that the remaining difference is due to the improved numerics and I think it makes sense to set new fixtures.
Thanks for the constructive PR review!
Addresses #58, building on #56.

- Simply gets rid of exponential prefactors like `exp(-y_th**2)`, combines them analytically, and multiplies them in at the very end (see the sketch below). Of course fully vectorised.
- Adds a bit of boilerplate between `nu_0` and `_nu0_dPhi`. Wasn't sure if this was worth introducing yet another helper function, please just let me know which way you like better.
- Several tests fail but (if I saw it correctly) with max abs. diff. in the `1e-15` range. Still, please have a thorough look!
- Did not add any unit tests because the new functions are minor and I think the best way to test them is to directly test `nu0_fb433`.
- Fixing `method=Taylor` in `test_no_negative_firing_rates` still leads to negative rates. As I don't see any dangerous points in the numerics (had a look at the numbers, they are all in a very reasonable range), I guess that this is indeed inherent in the formula.

PS: Out of curiosity, why is the `rtol` in the tests below `1.19e-7`, which is machine precision for single-precision floats?
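To illustrate the kind of rearrangement meant above (a generic sketch of the technique, not the actual `aux_calcs` code): keeping the exponential prefactor analytic and combining it with the rest of the expression avoids the intermediate over-/underflow that the naive product produces. scipy's `erfcx(y) = exp(y**2) * erfc(y)` is a standard example of such an analytically combined form:

```python
import numpy as np
from scipy.special import erfc, erfcx

y = np.array([1.0, 10.0, 30.0])

# Naive product of the two factors: exp(y**2) overflows while erfc(y)
# underflows, so the product degenerates to inf * 0 = nan for large y.
with np.errstate(over='ignore', under='ignore', invalid='ignore'):
    naive = np.exp(y**2) * erfc(y)

# Analytically combined form: finite and accurate for all y.
stable = erfcx(y)

print(naive)   # [~0.4276, ~0.0561, nan]     -- last entry lost to inf * 0
print(stable)  # [~0.4276, ~0.0561, ~0.0188] -- finite everywhere
```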