scikit-image / scikit-image

Image processing in Python
https://scikit-image.org

libatlas 3.10.3 related failures on debian #7399

Open lagru opened 7 months ago

lagru commented 7 months ago

From https://github.com/scikit-image/scikit-image/issues/7391#issuecomment-2054159456:

> I did, the mentioned ones are gone, but now there is another one; this time in amd64 (and obviously unrelated):
>
> ```
> _____________________ test_polynomial_weighted_estimation ______________________
>
>     def test_polynomial_weighted_estimation():
>         # Over-determined solution with same points, and unity weights
>         tform = estimate_transform('polynomial', SRC, DST, order=10)
>         tform_w = estimate_transform(
>             'polynomial', SRC, DST, order=10, weights=np.ones(SRC.shape[0])
>         )
>         assert_almost_equal(tform.params, tform_w.params)
>
>         # Repeating a point, but setting its weight small, should give nearly
>         # the same result.
>         point_weights = np.ones(SRC.shape[0] + 1)
>         point_weights[0] = 1.0e-15
>         tform1 = estimate_transform('polynomial', SRC, DST, order=10)
>         tform2 = estimate_transform(
>             'polynomial',
>             SRC[np.arange(-1, SRC.shape[0]), :],
>             DST[np.arange(-1, SRC.shape[0]), :],
>             order=10,
>             weights=point_weights,
>         )
>
> >       assert_almost_equal(tform1.params, tform2.params, decimal=4)
>
> skimage/transform/tests/test_geometric.py:666:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> /usr/lib/python3.11/contextlib.py:81: in inner
>     return func(*args, **kwds)
> /usr/lib/python3.11/contextlib.py:81: in inner
>     return func(*args, **kwds)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> args = (.compare at 0x7ffad9691580>, array([[ 1.16431672e-10, -1.37662973e-11, 1... -7.77541170e-06, 1.12016980e-06, 3.45277758e-05,
>        -1.55254763e-05, -1.16166818e-05, -6.45696044e-06]]))
> kwds = {'err_msg': '', 'header': 'Arrays are not almost equal to 4 decimals', 'precision': 4, 'verbose': True}
>
>     @wraps(func)
>     def inner(*args, **kwds):
>         with self._recreate_cm():
> >           return func(*args, **kwds)
> E           AssertionError:
> E           Arrays are not almost equal to 4 decimals
> E
> E           Mismatched elements: 3 / 132 (2.27%)
> E           Max absolute difference: 0.00044363
> E           Max relative difference: 26.96123959
> E            x: array([[ 1.1643e-10, -1.3766e-11,  1.5589e-07,  1.0080e-07,  9.3082e-06,
> E                      2.4074e-05,  2.8658e-05, -5.3848e-05,  1.0834e-10,  5.3736e-09,
> E                     -1.7943e-07,  2.8586e-08, -9.4791e-06,  7.8816e-05, -2.8878e-05,...
> E            y: array([[-9.0369e-07, -1.1004e-05, -2.1400e-07,  6.5641e-06,  9.2999e-06,
> E                      1.9720e-05, -3.2870e-05, -5.0437e-05, -1.3570e-05, -9.9079e-05,
> E                      2.0259e-04, -1.4415e-05, -1.1206e-05,  1.7683e-05, -2.8268e-05,...
>
> /usr/lib/python3.11/contextlib.py:81: AssertionError
> ```
>
> In ppc64el and loongarch64, I get:
>
> ```
> _______________________ test_ellipse_parameter_stability _______________________
> _____________________________ test_reproducibility _____________________________
>
>     def test_reproducibility():
>         """ensure cut_normalized returns the same output for the same input,
>         when specifying random seed
>         """
>         img = data.coffee()
>         labels1 = segmentation.slic(img, compactness=30, n_segments=400, start_label=0)
>         g = graph.rag_mean_color(img, labels1, mode='similarity')
>         results = [None] * 4
>         for i in range(len(results)):
>             results[i] = graph.cut_normalized(
>                 labels1, g, in_place=False, thresh=1e-3, rng=1234
>             )
>         graph.cut_normalized(labels1, g, in_place=False, thresh=1e-3, rng=1234)
>
>         for i in range(len(results) - 1):
> >           assert_array_equal(results[i], results[i + 1])
>
> skimage/graph/tests/test_rag.py:224:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> args = (, array([[0, 0, 0, ..., 0, 0, 0],
>        [0, 0, 0, ..., 0, 0, 0],
>        [0, 0, 0, ..., 0, 0, 0...35, 135, ..., 11, 11, 11],
>        [135, 135, 135, ..., 11, 11, 11],
>        [135, 135, 135, ..., 11, 11, 11]]))
> kwds = {'err_msg': '', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}
>
>     @wraps(func)
>     def inner(*args, **kwds):
>         with self._recreate_cm():
> >           return func(*args, **kwds)
> E           AssertionError:
> E           Arrays are not equal
> E
> E           Mismatched elements: 231553 / 240000 (96.5%)
> E           Max absolute difference: 394
> E           Max relative difference: 1.
> E            x: array([[0, 0, 0, ..., 0, 0, 0],
> E                  [0, 0, 0, ..., 0, 0, 0],
> E                  [0, 0, 0, ..., 0, 0, 0],...
> E            y: array([[ 0,  0,  0, ..., 11, 11, 11],
> E                  [ 0,  0,  0, ..., 11, 11, 11],
> E                  [ 0,  0,  0, ..., 11, 11, 11],...
>
> /usr/lib/python3.12/contextlib.py:81: AssertionError
> ```
>
> All logs [here](https://buildd.debian.org/status/logs.php?pkg=skimage&ver=0.23.2%7Erc1-1&suite=experimental).
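
For anyone trying to reproduce this locally, the two failing tests whose file paths appear in the quoted tracebacks can be re-run on their own by pytest node ID; a minimal sketch (`test_ellipse_parameter_stability` is omitted because the quoted log does not show its module path):

```python
# Re-run just the failing tests from the tracebacks above, against whichever
# NumPy/SciPy stack (libatlas- or libopenblas-backed) is being investigated.
# Run from the root of an installed/built scikit-image source tree.
import pytest

pytest.main([
    "-v",
    "skimage/transform/tests/test_geometric.py::test_polynomial_weighted_estimation",
    "skimage/graph/tests/test_rag.py::test_reproducibility",
])
```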

There are some arch-specific failures on Debian that seem related to libatlas. This affects `test_polynomial_weighted_estimation` on amd64, and `test_ellipse_parameter_stability` and `test_reproducibility` on ppc64el and loongarch64.
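
To confirm that libatlas really is the BLAS/LAPACK implementation the failing builds load at runtime (Debian switches `libblas.so.3` / `liblapack.so.3` through the alternatives system), something along these lines could be run in the affected build environment; a rough sketch, assuming Linux:

```python
# Report the BLAS/LAPACK configuration NumPy and SciPy were built against,
# and list the BLAS/LAPACK shared objects actually mapped into this process.
import numpy as np
import scipy

np.show_config()
scipy.show_config()

# On Linux, /proc/self/maps shows which libblas.so.3 / liblapack.so.3 the
# alternatives system resolved to (e.g. ATLAS vs. OpenBLAS vs. reference).
with open("/proc/self/maps") as maps:
    libs = {
        line.split()[-1]
        for line in maps
        if any(key in line for key in ("blas", "lapack", "atlas"))
    }
for path in sorted(libs):
    print(path)
```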

See also https://github.com/scikit-image/scikit-image/issues/7391#issuecomment-2056655869 for the dependency differences that might have caused these test assertion failures:

> The main difference seems to be that the failing build used libatlas 3.10.3 instead of blas 3.12.0 / libopenblas 0.3.26. All other differences seem minor:
>
> - Python distutils, lib2to3, and tk went from 3.12.2 to 3.12.3,
> - pyproject-metadata differs only in the Debian release number,
> - liblua 5.4.6, gringo 5.6.2, clasp 3.3.5, and aspcud 1.9.6 are only in the failed (experimental) build,
> - Python defcon 0.10.3 is only in the failed (experimental) build,
> - Python ufolib2 0.16.0 is only in the successful (unstable) build.
>
> I must say that I don't know why libatlas was preferred over blas in the experimental build; it is a numpy dependency, and libblas is the preferred one there. However, they should give the same results, shouldn't they?
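
For `test_polynomial_weighted_estimation` at least, not necessarily: with `order=10` the polynomial transform has 66 monomial coefficients per output coordinate (132 in total, matching the `3 / 132` in the log above), so unless `SRC` provides at least 66 control points the least-squares system is rank-deficient, and the computed solution depends on backend-specific singular-value cutoffs and rounding. A rough sketch with made-up points (not the actual `SRC`/`DST` fixtures) to illustrate:

```python
# Rough sketch with made-up points (NOT the SRC/DST fixtures from
# test_geometric.py): check how well-posed an order-10 2D polynomial fit is
# when estimated from only a handful of control points.
import numpy as np

order = 10
rng = np.random.default_rng(0)
src = rng.uniform(-15, 15, size=(8, 2))  # hypothetical control points

# One column per monomial x**(j - i) * y**i with total degree j <= order,
# i.e. (order + 1) * (order + 2) / 2 = 66 columns.
cols = [
    src[:, 0] ** (j - i) * src[:, 1] ** i
    for j in range(order + 1)
    for i in range(j + 1)
]
A = np.column_stack(cols)

print(A.shape)                   # (8, 66): 66 unknowns, only 8 equations
print(np.linalg.matrix_rank(A))  # at most 8, so the system is rank-deficient
print(np.linalg.cond(A))         # huge: column scales span many orders of magnitude

# Any least-squares solve of a rank-deficient system like this relies on a
# factorization with a small-singular-value cutoff, so ATLAS, OpenBLAS and
# reference LAPACK can return noticeably different coefficient vectors, each
# of which fits the control points essentially equally well.
```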

cc @olebole

github-actions[bot] commented 4 weeks ago

Hello scikit-image core devs! There hasn't been any activity on this issue for more than 180 days. I have marked it as "dormant" to make it easy to find.

To our contributors, thank you for your contribution, and apologies if this issue fell through the cracks! Hopefully this ping will help bring some fresh attention to the issue. If you need help, you can always reach out on our forum.

If you think that this issue is no longer relevant, you may close it, or we may do it at some point (either way, it will be done manually).