Open stephanlachnit opened 2 years ago
I think this is due to floating point precision
Almost certainly. Quite a few of the tests are super finicky regarding numerical precision, which is one of the reasons we often seed the RNG when we need "random" numbers --- if we took actual (pseudo)random numbers the tests would "fail". This actually requires a more fundamental decision on what we want from the tests and how accurate they should be, since the results will also depend on e.g. your scipy installation.
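To illustrate the point about seeding (this is a generic sketch, not the actual symfit test code): seeding the RNG makes the synthetic "noisy" data bit-identical on every run, so a fit result can be compared against hard-coded expected values without intermittent failures.

```python
import numpy as np

def make_noisy_data(seed=42):
    # Seed the generator so the "random" noise is identical on every run;
    # without this, tests that compare fit results against fixed expected
    # values can fail intermittently.
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 10, 100)
    y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=x.size)
    return x, y

x1, y1 = make_noisy_data()
x2, y2 = make_noisy_data()
assert np.array_equal(y1, y2)  # same seed -> bit-identical data
```

Note this only makes the *input* deterministic; the optimizer's result can still differ across platforms (e.g. x87 80-bit vs SSE 64-bit floating point), which is the cross-architecture issue discussed here.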
I updated symfit 0.5.4 to include 30f01207eebfdb3a9da33f182d7e52e0cb2469b4 in Debian and enabled the CI to run pytest. Interestingly, the CI fails on i386 (= x86 32-bit) consistently, while passing on all other architectures. The test in question is
tests/test_minimizers.py::test_multiprocessing
It seems like the Parameter `a` is fitted to the maximum value of 20; I'm not entirely sure why. I think this is due to floating point precision (see https://wiki.debian.org/ArchitectureSpecificsMemo#Floating_point). Maybe better starting estimates could prevent this test from failing? I also don't see why the array is shuffled with a static seed before the fitting procedure, which seems kind of odd to me. Here is the relevant part of the log: