LumiSpy / lumispy

Luminescence data analysis with HyperSpy.
https://lumispy.org

Fix test for `remove_spikes` #185

Closed · jlaehne closed this 1 year ago

jlaehne commented 1 year ago

Some recent update to HyperSpy or an upstream library breaks the test for automatic spike removal:

```
>       np.testing.assert_almost_equal(s3.data[0, 2, 29], 1, decimal=5)
E       AssertionError: 
E       Arrays are not almost equal to 5 decimals
E        ACTUAL: 2.000005802611809
E        DESIRED: 1
```

You can see the full output here: https://github.com/LumiSpy/lumispy/actions/runs/4451990112/jobs/7819241076

The smaller of the two artificial spikes is not caught on every run when the threshold is set to `auto`.
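
For context, here is a minimal sketch of the failure mode; the signal shape, spike positions, and the exact `remove_spikes` call are assumptions reconstructed from the error output above, not the verbatim test code:

```python
import numpy as np
import lumispy as lum

# Illustrative reconstruction: flat unit spectra with two artificial
# spikes of different heights (shape and positions guessed from the
# error output, not copied from the real test).
s = lum.signals.CLSpectrum(np.ones((2, 3, 30)))
s.data[1, 0, 14] += 3  # large spike: reliably detected
s.data[0, 2, 29] += 1  # small spike: sometimes missed with threshold='auto'

s3 = s.remove_spikes(threshold="auto")
# Fails intermittently: when the small spike is not detected, the value
# stays at ~2 instead of being interpolated back to ~1.
np.testing.assert_almost_equal(s3.data[0, 2, 29], 1, decimal=5)
```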

I commented out lines 70 and 85 in test_cl_spectrum.py to make the tests pass in #184. I also set `decimal=4` (instead of 5) in all assertions, as accuracy was being lost on some runs.
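
For reference, `np.testing.assert_almost_equal(actual, desired, decimal=k)` tolerates absolute differences up to `1.5 * 10**-k`, so relaxing `decimal` from 5 to 4 widens the allowed residual tenfold:

```python
import numpy as np

# assert_almost_equal(actual, desired, decimal=k) checks
# abs(actual - desired) < 1.5 * 10**-k
np.testing.assert_almost_equal(1.00002, 1, decimal=4)  # passes: 2e-5 < 1.5e-4
try:
    np.testing.assert_almost_equal(1.00002, 1, decimal=5)  # 2e-5 >= 1.5e-5
except AssertionError:
    print("decimal=5 is too strict for this residual")
```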

So far, I have not found out why the numerics suddenly seem less reliable.

In principle the function still runs, but it seems to have become less robust.

Any ideas, @jordiferrero or @ericpre?

Attolight-NTappy commented 1 year ago

Hello!

Just wanted to share that the offending test runs fine with my configuration, so I don't think it is an issue caused by an upstream library change:

- lumispy 0.2.3.dev0 (main branch)
- hyperspy 1.8.0.dev0 (both RELEASE_next_minor and RELEASE_next_patch branches)

```
============================= test session starts =============================
collecting ... collected 2 items

test_cl_spectrum.py::TestCLSpectrum::test__make_signal_mask PASSED       [ 50%]
test_cl_spectrum.py::TestCLSpectrum::test_remove_spikes PASSED           [100%]

======================== 2 passed, 3 warnings in 3.57s ========================

Process finished with exit code 0
```

I have Python 3.10.9 and NumPy 1.23.5.

ericpre commented 1 year ago

Could it be related to the changes in the way random numbers are generated? See https://github.com/hyperspy/hyperspy/pull/3103
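
For context: NumPy's legacy global-state API and the newer `Generator` API produce different streams for the same seed, so any seeded test data changes if a library switches between them. A minimal demonstration, independent of hyperspy:

```python
import numpy as np

# Same seed, different bit generators -> different "random" test data.
np.random.seed(1)                  # legacy MT19937 global state
legacy = np.random.normal(size=2)

rng = np.random.default_rng(1)     # new-style Generator (PCG64)
modern = rng.normal(size=2)

print(legacy)  # [ 1.62434536 -0.61175641]
print(modern)  # different values: the two streams are incompatible
```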

Attolight-NTappy commented 1 year ago

Ah, sorry, I just noticed that the tests have been commented out in the code ^^

Indeed it fails on my machine too.

Changing the way the random generator is seeded does not really seem to change anything, though.

Attolight-NTappy commented 1 year ago

I think @ericpre is right and it's just a float comparison issue in the test. The test passes or fails depending on which random numbers are generated if I don't pin it to a fixed seed of 1.

Changing line 55 to `s.data[0, 2, 29] += 1.001` fixes it.
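
Presumably a spike height of exactly 1 is borderline for the automatic threshold, and the small nudge pushes it safely above. A reconstruction of the change, reusing `s` from the sketch further up (the exact surrounding test code is assumed):

```python
# Before: spike height exactly 1, borderline for threshold='auto'
s.data[0, 2, 29] += 1
# After (PR #187): slightly taller spike, detected deterministically
s.data[0, 2, 29] += 1.001
```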

I have opened PR #187, which fixes it.