Moritz-Alexander-Kern opened this issue 1 year ago
At first glance:
Based on the information provided, the discrepancy in the results is most likely due to differences in how floating-point operations are handled by the respective hardware or compiler. These variations lead to minor differences in the output.
For WPLI:
The max absolute difference of 1.33781874e-14 does not point to a fundamental mistake in the Elephant implementation of WPLI. More likely, the tolerances set for the unit test with np.allclose are too tight, so that even these tiny platform-dependent differences are flagged as errors. The fix is therefore probably to relax the tolerance for this unit test.
Update: Tolerance for WPLI fixed.
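For illustration, a minimal sketch of the tolerance question (the reference values below are placeholders, not the actual ground-truth LFP data; only the ~1.3e-14 deviation comes from the report above):

```python
import numpy as np

# Placeholder values standing in for the WPLI ground-truth comparison; only the
# ~1.3e-14 deviation is taken from the failure report.
wpli_reference = np.array([0.31415926535897931, 0.27182818284590452])
wpli_elephant = wpli_reference + 1.34e-14

# With an absolute tolerance at the 1e-15 level, the platform-dependent
# rounding is flagged as a failure ...
print(np.allclose(wpli_elephant, wpli_reference, rtol=0.0, atol=1e-15))  # False

# ... while a slightly relaxed tolerance accepts it without hiding genuine
# implementation errors.
print(np.allclose(wpli_elephant, wpli_reference, rtol=0.0, atol=1e-12))  # True
```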
For MultitaperCoherence:
This is possibly a similar situation to the one encountered for WPLI. A first step could be to change the assertion so that the arrays are compared with np.allclose (or np.testing.assert_allclose) instead of exact equality. That way, the failure report also quantifies the discrepancy between the expected and calculated values.
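A rough sketch of what that change could look like (the phase_lag array here is a stand-in with values at the scale seen in the failure below, not the real output of multitaper_coherence):

```python
import numpy as np

# Stand-in for the phase lag returned for two identical signals: zero in exact
# arithmetic, but only zero up to ~1e-18 rounding residue in practice.
phase_lag = np.array([0.0, 1.703118e-18, -5.909415e-18, 9.996524e-19])

# The current exact comparison fails on almost every element:
#     np.testing.assert_array_equal(phase_lag, np.zeros(phase_lag.size))

# An allclose-style assertion with an explicit absolute tolerance passes here
# and, when it does fail, reports the maximum absolute/relative difference.
np.testing.assert_allclose(phase_lag, np.zeros(phase_lag.size), atol=1e-15)
```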
For MultitaperCoherence this might be related to https://github.com/numpy/numpy/issues/24000.
I tracked it down to these lines: https://github.com/NeuralEnsemble/elephant/blob/2651da96c121f69f11d3775fd6acceab9f0d4e38/elephant/spectral.py#L703-L704
For now I suggest waiting for the release of numpy 1.25.1; the fix for numpy issue #24000 is scheduled for that release.
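To make the mechanism concrete, here is a minimal illustration (not the actual code in spectral.py): once the cross-spectrum of a signal with itself carries an imaginary residue at rounding level, which can differ between compiler/BLAS builds, np.angle turns it into phase lags on the order of 1e-18 instead of exact zeros:

```python
import numpy as np

# Toy cross-spectrum values with rounding-level imaginary residues, at the
# scale reported in the failing test below.
cross_spectrum = np.array([2.0 + 0.0j, 2.5 + 3.0e-18j, 1.8 - 5.0e-18j])

# np.angle (i.e. arctan2(imag, real)) maps the residues to tiny non-zero
# phase lags, so an exact comparison against zeros cannot pass reliably.
print(np.angle(cross_spectrum))   # approx. [ 0.0e+00  1.2e-18 -2.8e-18]
```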
Tested with numpy==1.26.2, no changes.
Tested with numpy==1.26.4, no changes.
=================================================================================== test session starts ====================================================================================
platform linux -- Python 3.10.12, pytest-8.1.1, pluggy-1.4.0
rootdir: /home/kern/git/INM-6/elephant
plugins: anyio-4.2.0
collected 33 items
elephant/test/test_spectral.py .........................F....... [100%]
========================================================================================= FAILURES =========================================================================================
____________________________________________________________ MultitaperCoherenceTestCase.test_multitaper_cohere_perfect_cohere _____________________________________________________________
self = <elephant.test.test_spectral.MultitaperCoherenceTestCase testMethod=test_multitaper_cohere_perfect_cohere>
def test_multitaper_cohere_perfect_cohere(self):
# Generate dummy data
data_length = 10000
sampling_period = 0.001
signal_freq = 100.0
noise = np.random.normal(size=(1, data_length))
time_points = np.arange(0, data_length * sampling_period,
sampling_period)
signal = np.cos(2 * np.pi * signal_freq * time_points) + noise
# Estimate coherence and phase lag with the multitaper method
freq1, coh, phase_lag = elephant.spectral.multitaper_coherence(
signal,
signal,
fs=1/sampling_period,
n_segments=16)
> np.testing.assert_array_equal(phase_lag, np.zeros(phase_lag.size))
elephant/test/test_spectral.py:980:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<built-in function eq>, array([ 0.00000000e+00, 1.70311846e-18, -5.90941469e-18, 9.99652423e-19,
3.00045105...., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]))
kwds = {'err_msg': '', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
E AssertionError:
E Arrays are not equal
E
E Mismatched elements: 587 / 589 (99.7%)
E Max absolute difference: 7.0909479e-18
E Max relative difference: inf
E x: array([ 0.000000e+00, 1.703118e-18, -5.909415e-18, 9.996524e-19,
E 3.000451e-18, -1.319078e-19, 1.620084e-18, -1.323537e-18,
E 7.722359e-19, 2.487218e-18, 2.764620e-18, 3.428711e-18,...
E y: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
E 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
E 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,...
/usr/lib/python3.10/contextlib.py:79: AssertionError
Describe the bug
The EBRAINS TC team have notified me about the following issue when testing Elephant on JUSUF and Galileo100.
When trying to install Elephant on JUSUF, two tests fail (on Galileo100, only the WPLI ground-truth test fails):
elephant/test/test_phase_analysis.py::WeightedPhaseLagIndexTestCase::test_WPLI_ground_truth_consistency_real_LFP_dataset
elephant/test/test_spectral.py::MultitaperCoherenceTestCase::test_multitaper_cohere_perfect_cohere
To Reproduce
$ spack install --test root py-elephant
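The two failing tests can also be run directly with pytest from a checkout of the Elephant repository (assuming the test dependencies are installed):
$ pytest elephant/test/test_phase_analysis.py::WeightedPhaseLagIndexTestCase::test_WPLI_ground_truth_consistency_real_LFP_dataset elephant/test/test_spectral.py::MultitaperCoherenceTestCase::test_multitaper_cohere_perfect_cohere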
Expected behavior
No error should be raised when running the unit tests.
Environment
$ spack debug report
Error