NeuroTechX / moabb

Mother of All BCI Benchmarks
https://neurotechx.github.io/moabb/
BSD 3-Clause "New" or "Revised" License

Creating data splitters for moabb evaluation #624

Open brunaafl opened 3 weeks ago

brunaafl commented 3 weeks ago

Based on the issue https://github.com/NeuroTechX/moabb/issues/612#issue-2328074142, I've created three data splitters, one for each type of moabb evaluation: WithinSubjectSplitter, CrossSessionSplitter, and CrossSubjectSplitter, defined in splitters.py. I've also added two evaluation splits, OfflineSplit and TimeSeriesSplit, and one meta-split, SamplerSplit, defined in meta_splitters.py.
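To illustrate the idea, here is a minimal sketch of what a within-subject splitter could look like. The class name matches the PR, but the interface and internals are my assumption: I assume metadata is a pandas DataFrame with a 'subject' column (moabb's convention) and lean on scikit-learn's StratifiedKFold for the per-subject folds.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold

class WithinSubjectSplitter:
    """Sketch of a within-subject splitter: stratified k-fold CV run
    independently inside each subject's trials.

    Hypothetical interface; assumes `metadata` is a DataFrame with a
    'subject' column, following moabb's metadata conventions.
    """

    def __init__(self, n_folds=5, random_state=42):
        self.n_folds = n_folds
        self.random_state = random_state

    def split(self, y, metadata):
        y = np.asarray(y)
        for subject in metadata["subject"].unique():
            # Absolute indices of this subject's trials
            idx = np.flatnonzero((metadata["subject"] == subject).to_numpy())
            cv = StratifiedKFold(
                n_splits=self.n_folds, shuffle=True,
                random_state=self.random_state,
            )
            # Fold within the subject, then map back to absolute indices
            for train, test in cv.split(idx, y[idx]):
                yield idx[train], idx[test]
```

Every yielded (train, test) pair stays inside a single subject, so downstream code can fit and score one subject at a time.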

For the intra-subject splitters (WithinSubjectSplitter and CrossSessionSplitter), I assumed that data and metadata from all subjects are already known and loaded, which may conflict with the lazy loading done in these cases. Therefore, I based them on 'Individual' versions (IndividualWithin and IndividualCrossSession) that only assume metadata from a specific subject.
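A minimal sketch of the 'Individual' idea, under my own assumptions: the splitter only sees one subject's metadata (so the subject's data can be loaded lazily, right before splitting), and leave-one-session-out is implemented with scikit-learn's LeaveOneGroupOut over a 'session' column. The class name here echoes IndividualCrossSession but the signature is hypothetical.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

class IndividualCrossSessionSplitter:
    """Sketch: leave-one-session-out split for a single subject.

    Only that subject's metadata is required, which keeps the splitter
    compatible with lazy loading. Hypothetical interface; assumes
    `metadata` is a DataFrame with a 'session' column.
    """

    def split(self, y, metadata):
        y = np.asarray(y)
        sessions = metadata["session"].to_numpy()
        cv = LeaveOneGroupOut()
        # LeaveOneGroupOut only needs sample count from X, so a dummy works
        X_dummy = np.zeros((len(y), 1))
        for train, test in cv.split(X_dummy, y, groups=sessions):
            yield train, test
```

An outer evaluation loop could then iterate over subjects, load each one on demand, and feed only that subject's labels and metadata to the splitter.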

I also ended up creating two draft versions of an evaluation integrating all modalities (Group and LazyEvaluation in unified_eval.py), with LazyEvaluation trying to load data for the intra-subject evaluations only when needed. However, after looking at https://github.com/NeuroTechX/moabb/issues/481 and https://github.com/NeuroTechX/moabb/pull/486, this may not be the best or easiest approach, so I stopped working on it since it may not be that useful.

I'm now working on building the tests and refining the code.