tslearn-team / tslearn

The machine learning toolkit for time series analysis in Python
https://tslearn.readthedocs.io
BSD 2-Clause "Simplified" License

Continuous integration failing test on Linux for test check_pipeline_consistency of class LearningShapelets #426

Closed YannCabanes closed 1 year ago

YannCabanes commented 2 years ago

This bug was first noticed in the continuous integration tests of PR #411 (now merged), but it seems unrelated to that PR. The CI tests fail on Linux but pass on Windows and macOS. I use Linux with Python 3.8, and the tests pass on my local computer. The failing test concerns the class tslearn.shapelets.shapelets.LearningShapelets and is reached through the following call chain: test_all_estimators (tslearn/tests/test_estimators.py) --> check_estimator (tslearn/tests/test_estimators.py) --> check_pipeline_consistency (tslearn/tests/sklearn_patches.py).
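For context, check_pipeline_consistency fits the estimator both standalone and wrapped in a scikit-learn Pipeline, then asserts that both paths produce (nearly) identical outputs. Below is a minimal sketch of that kind of check, assuming scikit-learn is installed; LogisticRegression is used here as a deterministic stand-in for LearningShapelets, and the data is synthetic:

```python
import numpy as np
from numpy.testing import assert_allclose
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic data for illustration only.
rng = np.random.RandomState(0)
X = rng.randn(30, 4)
y = (X[:, 0] > 0).astype(int)

# Fit the estimator directly, and the same estimator inside a Pipeline.
est = LogisticRegression(random_state=0).fit(X, y)
pipe = make_pipeline(LogisticRegression(random_state=0)).fit(X, y)

# The consistency check passes only when both paths agree within tolerance.
assert_allclose(est.predict_proba(X), pipe.predict_proba(X),
                rtol=1e-07, atol=1e-09)
```

For a deterministic estimator like this one the check passes trivially; the failing CI run suggests the two fits of LearningShapelets did not produce the same model.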

YannCabanes commented 2 years ago

Here is the continuous integration test error message:

```
=================================== FAILURES ===================================
___________ test_all_estimators[LearningShapelets-LearningShapelets] ___________

name = 'LearningShapelets'
Estimator = <class 'tslearn.shapelets.shapelets.LearningShapelets'>
```

```python
@pytest.mark.parametrize('name, Estimator', get_estimators('all'))
def test_all_estimators(name, Estimator):
    """Test all the estimators in tslearn."""
    allow_nan = (hasattr(checks, 'ALLOW_NAN')
                 and Estimator().get_tags()["allow_nan"])
    if allow_nan:
        checks.ALLOW_NAN.append(name)
    if name in ["GlobalAlignmentKernelKMeans", "ShapeletModel",
                "SerializableShapeletModel"]:
        # Deprecated models
        return
    check_estimator(Estimator)
```

```
tslearn/tests/test_estimators.py:215:
tslearn/tests/test_estimators.py:197: in check_estimator
    check(estimator)
/opt/hostedtoolcache/Python/3.9.14/x64/lib/python3.9/site-packages/sklearn/utils/_testing.py:311: in wrapper
    return fn(*args, **kwargs)
tslearn/tests/sklearn_patches.py:558: in check_pipeline_consistency
    assert_allclose_dense_sparse(result, result_pipe)
```

```
x = array([[3.7043095e-03],
       [6.7453969e-01],
       [6.3824987e-01],
       [1.2295246e-03],
       [2.0980835e-05]...4e-03],
       [8.6247969e-01],
       [1.4195442e-03],
       [5.0067902e-06],
       [9.4977307e-01]], dtype=float32)
y = array([[0.40121353],
       [0.06187719],
       [0.05123574],
       [0.21641088],
       [0.2602595 ],
       [0.076...
       [0.25475943],
       [0.12683961],
       [0.27159142],
       [0.29283226],
       [0.16161257]], dtype=float32)
rtol = 1e-07, atol = 1e-09, err_msg = ''
```

```python
def assert_allclose_dense_sparse(x, y, rtol=1e-07, atol=1e-9, err_msg=""):
    """Assert allclose for sparse and dense data.

    Both x and y need to be either sparse or dense, they
    can't be mixed.

    Parameters
    ----------
    x : {array-like, sparse matrix}
        First array to compare.

    y : {array-like, sparse matrix}
        Second array to compare.

    rtol : float, default=1e-07
        relative tolerance; see numpy.allclose.

    atol : float, default=1e-9
        absolute tolerance; see numpy.allclose. Note that the default here is
        more tolerant than the default for numpy.testing.assert_allclose, where
        atol=0.

    err_msg : str, default=''
        Error message to raise.
    """
    if sp.sparse.issparse(x) and sp.sparse.issparse(y):
        x = x.tocsr()
        y = y.tocsr()
        x.sum_duplicates()
        y.sum_duplicates()
        assert_array_equal(x.indices, y.indices, err_msg=err_msg)
        assert_array_equal(x.indptr, y.indptr, err_msg=err_msg)
        assert_allclose(x.data, y.data, rtol=rtol, atol=atol, err_msg=err_msg)
    elif not sp.sparse.issparse(x) and not sp.sparse.issparse(y):
        # both dense
        assert_allclose(x, y, rtol=rtol, atol=atol, err_msg=err_msg)
```

```
E           AssertionError:
E           Not equal to tolerance rtol=1e-07, atol=1e-09
E
E           Mismatched elements: 30 / 30 (100%)
E           Max absolute difference: 0.7881605
E           Max relative difference: 23.541649
E            x: array([[3.704309e-03],
E                  [6.745397e-01],
E                  [6.382499e-01],...
E            y: array([[0.401214],
E                  [0.061877],
E                  [0.051236],...

/opt/hostedtoolcache/Python/3.9.14/x64/lib/python3.9/site-packages/sklearn/utils/_testing.py:418: AssertionError
```
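The reported tolerances put the failure in perspective: numpy's assert_allclose checks |actual - desired| <= atol + rtol * |desired|, so a max relative difference of 23.5 against rtol=1e-07 is nowhere near a rounding issue. A small sketch of the criterion, using hypothetical values rather than data from this test run:

```python
import numpy as np
from numpy.testing import assert_allclose

# numpy's criterion: |actual - desired| <= atol + rtol * |desired|
actual = np.array([1.0000001])
desired = np.array([1.0])

# Passes: the ~1e-07 difference is within rtol=1e-06 * |desired|.
assert_allclose(actual, desired, rtol=1e-06, atol=0)

# Fails: 1e-08 + 1e-09 is tighter than the ~1e-07 difference.
try:
    assert_allclose(actual, desired, rtol=1e-08, atol=1e-09)
    raised = False
except AssertionError:
    raised = True
print(raised)  # True
```

With differences this large, the two fits are effectively different models, which points to unseeded randomness in training rather than floating-point noise.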