tBuLi / symfit

Symbolic Fitting; fitting as it should be.
http://symfit.readthedocs.org
MIT License

`tests/test_minimizers.py::test_multiprocessing` fails on i386 #348

Open stephanlachnit opened 2 years ago

stephanlachnit commented 2 years ago

I updated symfit 0.5.4 to include 30f01207eebfdb3a9da33f182d7e52e0cb2469b4 in Debian and enabled the CI to run pytest. Interestingly, the CI consistently fails on i386 (= x86 32-bit), while passing on all other architectures. The failing test is `tests/test_minimizers.py::test_multiprocessing`.

It seems that Parameter `a` is fitted to its maximum value of 20; I'm not entirely sure why. I think this is due to floating-point precision (see https://wiki.debian.org/ArchitectureSpecificsMemo#Floating_point). Maybe better starting estimates could prevent this test from failing? I also don't see why the array is shuffled with a static seed before the fitting procedure; that seems odd to me.
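For illustration, a minimal sketch of what tighter starting estimates might look like (the values below are hypothetical, not a tested fix):

```python
from symfit import Parameter

# Hypothetical tightening of the test's parameters: start closer to the true
# slopes (1, 2, 3) and shrink the bounds, so a bounded minimizer is less
# likely to end up stuck at the upper bound when rounding differs per platform.
a_par = Parameter('a', value=2.0, min=0.0, max=5.0)
b_par = Parameter('b', value=1.0, min=0.0, max=2.0)
```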

Here is the relevant part of the log:

autopkgtest [10:49:31]: test pytest: [-----------------------
============================= test session starts ==============================
platform linux -- Python 3.10.2, pytest-6.2.5, py-1.10.0, pluggy-0.13.0
rootdir: /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src, configfile: pytest.ini
collected 148 items

symfit/contrib/interactive_guess/tests/test_interactive_fit.py ......... [  6%]
.........                                                                [ 12%]
tests/test_argument.py ....                                              [ 14%]
tests/test_auto_fit.py ......                                            [ 18%]
tests/test_constrained.py .....s............                             [ 31%]
tests/test_distributions.py ..                                           [ 32%]
tests/test_finite_difference.py ........                                 [ 37%]
tests/test_fit_result.py ..........                                      [ 44%]
tests/test_general.py .........s..............s......                    [ 65%]
tests/test_global_opt.py ....                                            [ 68%]
tests/test_minimize.py ......                                            [ 72%]
tests/test_minimizers.py .....F.                                         [ 77%]
tests/test_model.py ............                                         [ 85%]
tests/test_objectives.py .....                                           [ 88%]
tests/test_ode.py ........                                               [ 93%]
tests/test_support.py .........                                          [100%]

=================================== FAILURES ===================================
_____________________________ test_multiprocessing _____________________________

    def test_multiprocessing():
        """
        To make sure pickling truly works, try multiprocessing. No news is good
        news.
        """
        np.random.seed(2)
        x = np.arange(100, dtype=float)
        a_values = np.array([1, 2, 3])
        np.random.shuffle(a_values)

        def gen_fit_objs(x, a, minimizer):
            """Generates linear fits with different a parameter values."""
            for a_i in a:
                a_par = Parameter('a', 4.0, min=0.0, max=20)
                b_par = Parameter('b', 1.2, min=0.0, max=2)
                x_var = Variable('x')
                y_var = Variable('y')

                con_map = {y_var: {x_var, a_par, b_par}}
                model = CallableNumericalModel({y_var: f}, connectivity_mapping=con_map)

                fit = Fit(
                    model, x, a_i * x + 1, minimizer=minimizer,
                    objective=SqrtLeastSquares if minimizer is not MINPACK else VectorLeastSquares
                )
                yield fit

        minimizers = subclasses(ScipyMinimize)
        chained_minimizer = (DifferentialEvolution, BFGS)
        minimizers.add(chained_minimizer)

        pool = mp.Pool()
        for minimizer in minimizers:
            results = pool.map(worker, gen_fit_objs(x, a_values, minimizer))
            a_results = [res.params['a'] for res in results]
            # Check the results
>           assert a_values == pytest.approx(a_results, 1e-2)
E           assert array([3, 2, 1]) == approx([20.0 ± 2.0e-01, 1.9999999837078741 ± 2.0e-02, 0.999999984299464 ± 1.0e-02])
E            +  where approx([20.0 ± 2.0e-01, 1.9999999837078741 ± 2.0e-02, 0.999999984299464 ± 1.0e-02]) = <function approx at 0xf72c4220>([20.0, 1.9999999837078741, 0.999999984299464], 0.01)
E            +    where <function approx at 0xf72c4220> = pytest.approx

tests/test_minimizers.py:341: AssertionError
=============================== warnings summary ===============================
symfit/core/operators.py:53
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/operators.py:53: DeprecationWarning: invalid escape sequence '\*'
    """

symfit/core/support.py:318
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/support.py:318: DeprecationWarning: invalid escape sequence '\*'
    """

symfit/core/fit.py:37
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/fit.py:37: DeprecationWarning: invalid escape sequence '\_'
    """

symfit/core/minimizers.py:222
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/minimizers.py:222: DeprecationWarning: invalid escape sequence '\*'
    '''

symfit/core/minimizers.py:339
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/minimizers.py:339: DeprecationWarning: invalid escape sequence '\*'
    """

symfit/core/minimizers.py:811
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/minimizers.py:811: DeprecationWarning: invalid escape sequence '\*'
    """

symfit/core/fit_results.py:31
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/fit_results.py:31: DeprecationWarning: invalid escape sequence '\*'
    """

symfit/core/objectives.py:393
  /tmp/autopkgtest-lxc.zqapa3ng/downtmp/build.L5J/src/symfit/core/objectives.py:393: DeprecationWarning: invalid escape sequence '\c'
    """

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/test_minimizers.py::test_multiprocessing - assert array([3, 2, 1...
======= 1 failed, 144 passed, 3 skipped, 8 warnings in 269.22s (0:04:29) =======
Exception ignored in: <function Pool.__del__ at 0xefc81c88>
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 268, in __del__
    self._change_notifier.put(None)
  File "/usr/lib/python3.10/multiprocessing/queues.py", line 378, in put
    self._writer.send_bytes(obj)
  File "/usr/lib/python3.10/multiprocessing/connection.py", line 205, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/usr/lib/python3.10/multiprocessing/connection.py", line 416, in _send_bytes
    self._send(header + buf)
  File "/usr/lib/python3.10/multiprocessing/connection.py", line 373, in _send
    n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
autopkgtest [10:54:02]: test pytest: -----------------------]
autopkgtest [10:54:02]: test pytest:  - - - - - - - - - - results - - - - - - - - - -
pytest               FAIL non-zero exit status 1
pckroon commented 2 years ago

> I think this is due to floating-point precision

Almost certainly. Quite a few of the tests are very finicky about numerical precision, which is one of the reasons we often seed the RNG when we need "random" numbers --- if we used actual (pseudo)random numbers, the tests would sometimes "fail". This really calls for a more fundamental decision about what we want from the tests and how accurate they should be, since the results also depend on e.g. your scipy installation.
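For what it's worth, a minimal sketch of what the static seed buys the test: the shuffle (and therefore which worker fits which slope) is identical on every run, so any failure is reproducible rather than flaky.

```python
import numpy as np

# With a fixed seed, the permutation is the same on every run and platform,
# so the test compares each fit against a known, reproducible target order.
np.random.seed(2)
a_values = np.array([1, 2, 3])
np.random.shuffle(a_values)
print(a_values)  # matches the array([3, 2, 1]) order seen in the failure log
```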