Yes, definitely, a somewhat more formal system for testing FOOOF would be great.
Purely qualitatively, this could just be a collection of PSDs that one has to eyeball after any relevant change, which, if housed in a notebook, could also serve as an extended demo / tutorial.
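Roughly, such a notebook cell could look something like this (just a sketch of the idea; the folder layout, file names, and settings are placeholders, not anything that exists in the repo):

```python
# Sketch: loop over a folder of saved PSDs and plot each model fit,
# for visual inspection after any relevant change to the algorithm.
from pathlib import Path

import numpy as np
from fooof import FOOOF

psd_dir = Path('test_psds')  # hypothetical folder of .npz files with 'freqs' & 'spectrum'

for psd_file in sorted(psd_dir.glob('*.npz')):
    data = np.load(psd_file)
    fm = FOOOF(peak_width_limits=[1, 8], max_n_peaks=6)
    fm.fit(data['freqs'], data['spectrum'], freq_range=[3, 40])
    fm.plot()            # eyeball the fit for this PSD
    fm.print_results()   # print the fitted parameters & error alongside the plot
```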
And/or, a bit more quantitatively, this could be an automatic test suite that checks the model fit on a group of PSDs, verifying that the fit and/or error is at least as good as with the current master version of the algorithm. This would inherit the issues of over-interpreting fit/error, but it could be a way to cover a greater number of cases: it could simply flag PSDs whose fit 'significantly changes', prompting you to go and look at them.
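A rough sketch of that regression-style check (the baseline file, data layout, and tolerance below are assumptions, just to show the shape of it):

```python
# Sketch: refit a group of test PSDs and flag any whose error got notably worse
# relative to error values saved from the current master version.
import numpy as np
from fooof import FOOOFGroup

freqs = np.load('freqs.npy')                       # hypothetical saved test data
spectra = np.load('spectra.npy')                   # shape: [n_psds, n_freqs]
baseline_errors = np.load('baseline_errors.npy')   # per-PSD errors saved from master

fg = FOOOFGroup(peak_width_limits=[1, 8], max_n_peaks=6)
fg.fit(freqs, spectra, freq_range=[3, 40])
new_errors = fg.get_params('error')

# Flag fits that 'significantly changed', prompting a manual look
tolerance = 0.02   # arbitrary threshold, would need tuning
flagged = np.where(new_errors > baseline_errors + tolerance)[0]
print('PSDs to inspect:', flagged)
```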
So, we moved away from using real data in the test suite. There is still some in the tutorials.
Overall, this is not something I currently envision adding to the core FOOOF repository. I can imagine maybe having a 'data examples' repository on the organization that does something similar to this idea though, which might become more relevant at some future point if we or someone else explores different fit functions or different algorithms entirely.
That's more of a development idea though - so unless someone wants to argue that we should still do this, here in this repo, I think I'll close this issue.
Would it be useful to have a collection of test PSDs from different recording modalities (LFP, ECoG, EEG, MEG, etc.), not for any quantitative analysis of fooof, but for quickly eyeballing that it still fits something reasonable across all the possible test cases when trying new algorithms?