Open jsosulski opened 4 years ago
What we need is a better way of notating metadata that gives us some flexibility in the criteria we use for searching. As of now that is not implemented, so my suggestion is to write your own paradigm for now. In the future, if there is more interest in expanding this repo, we should rewrite BaseDataset so that, instead of hard-coding certain attributes, it has a properly schema-checked metadata attribute.
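As a rough sketch of what such a schema-checked metadata attribute could look like (all names here are hypothetical, not an existing moabb API):

```python
from dataclasses import dataclass, field

# Hypothetical set of allowed paradigm names, for illustration only.
ALLOWED_PARADIGMS = {"p300", "imagery", "ssvep"}

@dataclass
class DatasetMetadata:
    """Sketch of a validated metadata container a future BaseDataset could hold."""
    subjects: list
    sessions_per_subject: int
    paradigm: str
    extra: dict = field(default_factory=dict)  # free-form, searchable keys

    def __post_init__(self):
        # Schema check at construction time instead of hard-coded attributes.
        if self.paradigm not in ALLOWED_PARADIGMS:
            raise ValueError(f"unknown paradigm: {self.paradigm!r}")
        if self.sessions_per_subject < 1:
            raise ValueError("sessions_per_subject must be >= 1")

meta = DatasetMetadata(subjects=[1, 2, 3], sessions_per_subject=2,
                       paradigm="p300", extra={"isi": 200})
```

The `extra` dict is where dataset-specific search criteria (conditions, acquisition parameters) could live without changing the base class.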
Reviving this issue. How about adopting a results structure as is done in BIDS?
E.g. an example structure from a BIDS file:
sub-<label>[_ses-<label>]_task-<label>[_acq-<label>][_run-<index>][_recording-<label>]_physio.json
Here we have:
- sub: subject, we already have this
- ses: session, we already have this
- task: so far we assume every subject in a session / dataset performs the same task
- acq: acquisition parameters, i.e. conditions. I would need something like this to push my dataset to moabb
- run: we have this already
- recording: I dunno

For the current datasets we can just plug in "default" or something for all the columns we don't need/want. What do you think @sylvchev? Would an approach like this fix your issues @Div12345?
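To make the idea concrete, a minimal sketch of composing and parsing such an entity string (these helpers are illustrative, not existing moabb or BIDS-tool functions):

```python
def build_bids_name(entities):
    """Join entity -> label pairs into a BIDS-style name.

    `entities` is an ordered mapping; "default" can be plugged in for
    columns a dataset does not need, as suggested above.
    """
    return "_".join(f"{key}-{value}" for key, value in entities.items())

def parse_bids_name(name):
    """Recover the entity -> label mapping from a BIDS-style name."""
    return dict(part.split("-", 1) for part in name.split("_"))

name = build_bids_name({"sub": "01", "ses": "A", "task": "oddball",
                        "acq": "isi200", "run": "1"})
# name == "sub-01_ses-A_task-oddball_acq-isi200_run-1"
```

A real implementation would also validate entity ordering and the allowed key set against the BIDS specification, which this sketch skips.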
Seems nice to stick with a structure close to the one used in BIDS!
It seems reasonable to have a structure like the one you mentioned. We could indeed use a default parametrization for existing datasets. This will require updating the BaseDataset and Results classes?
I can check in the next few days what changes would be necessary, but that sounds like the gist of it.
An added benefit of having this structure internally is that it would make adding BIDS datasets quite trivial.
Hi @jsosulski,
Checking the status of old issues!
I was wondering, does this issue still make sense?
Hi, I am currently writing a wrapper for a P300 dataset of ours. We recorded auditory oddball data under different conditions, e.g. varying interstimulus intervals (ISIs). How would you recommend handling this with moabb's dataset architecture? In my opinion it does not make sense to pool data across ISIs, so I can either split the conditions using 'fake subjects', i.e. 'subject_1_isi_200' and 'subject_1_isi_500', or use 'fake sessions'.
A different approach would be to define this in a custom paradigm, but I reckon this defeats the purpose of the benchmark aspect of moabb?
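For reference, the 'fake subjects' workaround above could be sketched like this (a hypothetical pattern, not an official moabb mechanism), encoding the ISI in the subject label so conditions are never pooled and can be recovered later:

```python
def make_fake_subject(subject, isi_ms):
    """Encode the ISI condition into the subject label ('fake subject')."""
    return f"{subject}_isi_{isi_ms}"

def split_fake_subject(label):
    """Recover the real subject and ISI condition from a fake-subject label."""
    subject, _, isi = label.partition("_isi_")
    return subject, int(isi)

# One fake subject per (real subject, ISI) pair keeps conditions separate.
labels = [make_fake_subject(s, isi)
          for s in ("subject_1", "subject_2")
          for isi in (200, 500)]
```

The downside, as noted, is that any within-subject structure across conditions is invisible to the benchmark, which is exactly what proper acq-style metadata would avoid.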