tensorly / viz

Easy visualization and evaluation of matrix and tensor factorization models
https://tensorly.org/viz/
MIT License

[JOSS Review] Some Tests are failing #15

Closed sara-02 closed 1 year ago

sara-02 commented 1 year ago

Following the steps outlined in the dev setup guide (http://tensorly.org/viz/stable/contributing.html#development-environment) and making the changes pointed out in #14, I ran pytest and 2 tests failed. The screenshot is attached below.

sara-02 commented 1 year ago

https://github.com/openjournals/joss-reviews/issues/4754

yngvem commented 1 year ago

Good catch! For the visualisation test, it seems like you stumbled upon an unfortunate seed where the core consistency was negative. The first test that fails is a doctest; it could fail in the rare circumstance where TensorLy (due to a bad initialisation) fails to find a good estimate of the true underlying factors. We could make this test more robust by using multiple initialisations (like we do in the example scripts), but maybe that would be too much for a simple example like this?

What are your thoughts on this, @sara-02? Do you think it's better for the doctest to have a higher likelihood of passing (though I cannot remember this test ever failing in the CI pipeline), or would you prefer that the example stay as straightforward as possible?

Also, do you still have the log from running the tests? It should have a seed at the top that might be useful for debugging (though NumPy's RNG no longer tries to give the same result on different platforms, so it may not help).
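For reference, replaying a failure from a logged seed only requires feeding that seed back into the same generator; a minimal illustration with NumPy's `default_rng` (the seed value here is arbitrary, standing in for whatever the test log printed):

```python
import numpy as np

seed = 12345  # stand-in for the seed printed at the top of the test log
draws_a = np.random.default_rng(seed).standard_normal(5)
draws_b = np.random.default_rng(seed).standard_normal(5)

# The same seed reproduces the same draws on the same platform,
# which is what makes a logged seed useful for replaying a failure.
assert np.allclose(draws_a, draws_b)
```

The caveat above still applies: NumPy only guarantees reproducibility for the same NumPy version on the same platform, so a seed from one machine may not reproduce the failure on another.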

sara-02 commented 1 year ago

@yngvem I do not have the logs handy, but I can try running the tests again and share the logs if they fail.

yngvem commented 1 year ago

Sounds good! We also tried to come up with a fix for the predictive_power doctest in https://github.com/tensorly/viss/pull/17; hopefully that should do the trick if it still fails.