Closed by scott-huberty 1 month ago
@scott-huberty I am not sure what to think of `_check_sfreq()`. It is the first time I notice this function; I suppose it got added during some bug fix? It seems like an MNE issue rather than a PyLossless issue. We are not used to working with floating-point sampling frequencies because EGI works with an integer sfreq, but many manufacturers report a calibrated sfreq, which will normally not match an integer exactly and can vary slightly from one amp to the next. Normally, the mapping between annotations (in seconds) and events (in samples) for epoching should work as long as the times specified in the annotations fall on integer multiples of the sampling period, and numerical-error issues (e.g., due to roundoff) should probably be dealt with by MNE rather than by us, since they chose to support floating-point sfreq. Just my two cents. You may be aware of a subtlety motivating the need for `_check_sfreq()` that I am not.
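To make the roundoff concern concrete, here is a minimal stand-alone sketch (the numbers are made up for illustration and are not taken from any PyLossless test): with a calibrated, non-integer rate, an annotation onset in seconds rarely lands on an exact integer sample, so the seconds→samples conversion must round.

```python
# Hypothetical illustration of the seconds -> samples mapping with a
# calibrated (non-integer) sampling frequency.
sfreq = 499.987   # e.g. a calibrated amp rate, slightly off 500 Hz
onset_s = 12.0    # annotation onset in seconds

exact_sample = onset_s * sfreq       # 5999.844 -- not an integer sample
sample = int(round(exact_sample))    # rounds to 6000

# Mapping the rounded sample back to seconds gives a slightly shifted onset:
recovered_onset = sample / sfreq
error_ms = (recovered_onset - onset_s) * 1e3
print(f"sample={sample}, round-trip error = {error_ms:.3f} ms")
```

The sub-millisecond error here is harmless for most epoching, but it is exactly the kind of numerical discrepancy the comment above argues MNE, rather than PyLossless, should own.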
@christian-oreilly It's been a long time, but if I recall correctly we worked on this together in a pair-programming session during my PhD. What you said above mostly matches my understanding of the problem. Here are the related issues/PRs:
Here is my understanding of what got us here: `test_simulated` has a non-integer sfreq. I think we discussed, but never implemented, adding an option to the pipeline config or the `.run` signature, like `force_integer_sfreq`, that the user could set to `False` if they are okay with the behavior described above.
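For reference, a sketch of what such an option might look like. Since this was never implemented, both `force_integer_sfreq` and the helper below are hypothetical, not part of the PyLossless API:

```python
def target_sfreq(sfreq: float, force_integer_sfreq: bool = True) -> float:
    """Hypothetical helper: decide which rate the pipeline would resample to.

    If ``force_integer_sfreq`` is True and the recording has a calibrated,
    non-integer rate, round to the nearest integer rate (a lossy resample);
    otherwise leave the original rate untouched.
    """
    if force_integer_sfreq and not float(sfreq).is_integer():
        return float(round(sfreq))
    return float(sfreq)


print(target_sfreq(499.987))                             # -> 500.0
print(target_sfreq(499.987, force_integer_sfreq=False))  # -> 499.987
```

Defaulting to `True` would preserve the current behavior, while `False` would let users with calibrated amps opt out of the lossy resample.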
As far as MNE goes, I suppose you are right that an issue could be submitted there, but I'm not sure how high a priority it would be for them.
Yeah, I remembered the general issue that this can cause; I just did not remember the `_check_sfreq()` function and the fact that we solved this by systematically resampling to an integer frequency... which is a lossy operation. Anyway, we can revisit this down the line if someone raises an issue related to using the pipeline with floating-point frequencies. For now, to the extent that this approach fixes the issues for the tests, I think it is low priority and fine as-is.
Sounds good 🙏 @christian-oreilly
This fixes a few things, and the CIs will tell us whether we need to fix a few more.
- A `FutureWarning` was being triggered in torch by MNE-ICALabel. I've already submitted a fix to MNE-ICALabel, but in the meantime I'm just filtering out the warning in our tests. Once MNE-ICALabel 0.7 is released we can remove the `pytest.mark.filterwarnings`.
- `openneuro-py` and `torch` no longer need to be handled separately, since those packages are now listed in `requirements_testing.txt`.
- Added `pytest.mark.xfail` on `test_TopoViz`, because it was failing due to some issue with chromedriver. The failing test also caused the CIs to hang for a really long time, so I'm marking this test to be skipped until we can fix it.
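The warning suppression in the first bullet can be exercised outside pytest with the stdlib `warnings` machinery. The `noisy_ica_call` stand-in below is hypothetical; it only mimics the torch `FutureWarning`, it is not the real MNE-ICALabel call:

```python
import warnings


def noisy_ica_call():
    # Stand-in for the MNE-ICALabel call that currently triggers a
    # FutureWarning inside torch (hypothetical, for illustration only).
    warnings.warn("torch API deprecation", FutureWarning)
    return "labels"


# Equivalent in spirit to pytest.mark.filterwarnings("ignore::FutureWarning"):
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    result = noisy_ica_call()

print(result)  # the warning is suppressed; the call still returns "labels"
```

Scoping the filter to the test (rather than ignoring `FutureWarning` globally) keeps other deprecation warnings visible, which is why the PR uses the per-test `pytest.mark.filterwarnings` marker.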