SPOClab-ca / BENDR


Using downstream.py with BCI Competition IV 2a dataset #8

Open SalehAce1 opened 2 years ago

SalehAce1 commented 2 years ago

Hello,

I'm trying to use the BCI Competition IV 2a data from http://bbci.de/competition/iv/. The files I get have the .gdf extension. Here is how I set up my downstream.yml:

Configuratron:
  use_only:
    - bci_iv_2a
  preload: True
  sfreq: 256
  deep1010:
    return_mask: False

encoder_weights: "configs/models/encoder.pt" 
context_weights: "configs/models/contextualizer.pt"

datasets: !include configs/downstream_datasets.yml

and here is my downstream_datasets.yml:

bci_iv_2a:
  name: "BCI Competition IV 2a"
  toplevel: configs/datasets/bci
  tmin: -2
  tlen: 6
  data_max: 100
  data_min: -100
  extensions:
    - .gdf
  picks:
    - eeg
  train_params:
    epochs: 15
    batch_size: 60 # This dataset likes batches of ~60 (community is overfitting this)
  lr: 0.00005

Once I run downstream.py with the command python3 downstream.py BENDR --random-init --results-filename "results/BENDR_random_init.xlsx", I get the error:

Skipping A08T.gdf. Exception: ('No stim channels found, but the raw object has annotations. Consider using mne.events_from_annotations to convert these to events.',).
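For reference, the conversion the error message suggests does seem to work when applied to one of these files directly. This is just my own sanity-check sketch with MNE (the file path is an example, and this is not what downstream.py does internally):

import mne

# Load one of the competition's GDF recordings (path is an example).
raw = mne.io.read_raw_gdf("configs/datasets/bci/A01T.gdf", preload=True)

# The trial markers live in annotations rather than a stim channel,
# so convert them to an events array that can be used for epoching.
events, event_id = mne.events_from_annotations(raw)
print(event_id)    # mapping from annotation label (e.g. '769'-'772') to event code
print(events[:5])  # rows of [sample, 0, event_code]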

I am not sure how to proceed from here.

Thank you.

MichaelMaiii commented 2 years ago

I guess the author didn't use the original .gdf files, so you would have to specify the filename format, the events, and so on yourself. The other problem is that the evaluation session files don't contain the events corresponding to the class labels. Given that, it is difficult to use the BCI IV 2a dataset with DN3 unless the author releases the preprocessed version of the dataset.
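For what it's worth, the competition site distributes the true labels of the evaluation sessions separately as .mat files, so they can be re-attached afterwards. A rough sketch (assuming the label variable is named classlabel, which is what the distributed files use as far as I know):

import scipy.io

# True labels for the evaluation session are shipped separately (e.g. A01E.mat).
mat = scipy.io.loadmat("A01E.mat")
labels = mat["classlabel"].ravel()  # values 1-4, one per trial
print(labels.shape, labels[:10])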

Vpredictor commented 2 years ago

I am wondering the same thing. Could you give us some guidance? @kostasde

dsb589 commented 2 years ago

Thanks for raising this, @SalehAce1 - I am running into a similar problem. When I try to run downstream.py on the bci_iv_2a dataset without setting the extensions entry to .gdf, I get this error: AssertionError: datasets should not be an empty iterable. If I do set it to .gdf, I get this error: cannot reshape array of size 520975 into shape (57855,newaxis). I'm wondering if you've seen either of these errors by any chance. Also, do you know if you are using the "Data sets 2a" download from the competition page linked above? That is what I've been trying to use, but perhaps it is wrong. @kostasde
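In case it helps narrow down the reshape error, MNE can be used to check the sampling rate, channel count, and sample count of a given recording (a minimal sketch; the path is just an example):

import mne

raw = mne.io.read_raw_gdf("configs/datasets/bci/A01T.gdf", preload=False)
print(raw.info["sfreq"])  # native sampling rate (250 Hz for this dataset)
print(len(raw.ch_names))  # 25 channels: 22 EEG + 3 EOG
print(raw.n_times)        # total samples in the recording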