Closed jlaehne closed 1 year ago
Hello!
Just wanted to share that the failing test runs fine with my configuration, so I don't think it is an issue caused by an upstream library change:
lumispy 0.2.3.dev0 (main branch), hyperspy 1.8.0.dev0 (both RELEASE_next_minor and RELEASE_next_patch branches)
============================= test session starts =============================
collecting ... collected 2 items
test_cl_spectrum.py::TestCLSpectrum::test__make_signal_mask PASSED [ 50%]
test_cl_spectrum.py::TestCLSpectrum::test_remove_spikes PASSED [100%]
======================== 2 passed, 3 warnings in 3.57s ========================
Process finished with exit code 0
I have python 3.10.9 and numpy 1.23.5.
Could it be related to the changes in the way random numbers are generated? See https://github.com/hyperspy/hyperspy/pull/3103
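To illustrate why a change in random number generation could matter: assuming hyperspy#3103 moved from the legacy global `np.random` functions to `numpy.random.default_rng` (my assumption, I have not checked the diff), the two APIs draw different sequences even when seeded with the same value, so any test that hard-codes values derived from random data would break:

```python
import numpy as np

# Legacy global-state API (Mersenne Twister): seed once, draw values.
np.random.seed(1)
legacy = np.random.normal(size=3)

# New Generator API (PCG64 by default): same seed, different algorithm.
rng = np.random.default_rng(1)
modern = rng.normal(size=3)

# The two APIs do NOT produce the same sequence for the same seed.
print(np.allclose(legacy, modern))  # False
```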
Ah sorry I just noticed that the tests have been commented out in the code ^^
Indeed it fails on my machine too.
Changing the way random is seeded does not really seem to change anything, though.
I think @ericpre is right and it's just a float comparison issue in the test. The test passes or fails depending on which random numbers are generated if I don't fix the seed to 1.
Changing line 55 to s.data[0, 2, 29] += 1.001 fixes it.
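As a hypothetical minimal illustration of the borderline-threshold problem (the array names and the threshold value here are made up; the real spike-removal logic lives in HyperSpy and is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(scale=1e-4, size=100)

# A spike injected by adding exactly 1.0 sits right at a threshold near 1,
# so whether it is detected depends on the noise drawn in that run.
data_fixed = noise.copy()
data_fixed[29] += 1.001  # safely above the threshold on every run

threshold = 1.0  # stand-in for an automatically derived threshold
print(int(np.argmax(data_fixed > threshold)))  # prints 29
```

Adding 1.001 instead of 1.0 lifts the peak clearly above the threshold, which is why the one-character change makes the test deterministic.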
I have opened PR #187 that fixes it.
Some recent update to HyperSpy or an upstream library breaks the test for automatic peak removal:
You can see the full output here: https://github.com/LumiSpy/lumispy/actions/runs/4451990112/jobs/7819241076
The smaller of the two artificial peaks is not caught on every run if the threshold is auto. I commented out lines 70 and 85 in test_cl_spectrum.py to make the tests run through in #184. I also set decimal=4 (instead of 5) in all assertions, as it was losing accuracy on some runs. So far, I have not found why the numerics suddenly seem to be less reliable.
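For reference, a minimal sketch of what relaxing decimal changes, using numpy's assert_almost_equal directly (the exact assertion calls in the lumispy tests are not reproduced here):

```python
import numpy as np

# assert_almost_equal checks abs(actual - desired) < 1.5 * 10**(-decimal),
# so decimal=5 tolerates ~1.5e-5 while decimal=4 tolerates ~1.5e-4.
a, b = 1.00000, 1.00004  # differ by 4e-5, i.e. in the fifth decimal place

np.testing.assert_almost_equal(a, b, decimal=4)  # passes

try:
    np.testing.assert_almost_equal(a, b, decimal=5)
except AssertionError:
    print("fails at decimal=5")
```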
In principle, the function should still run, but it seems to be less robust.
Any ideas, @jordiferrero or @ericpre?