Closed kevin116mitchell closed 4 years ago
You can fit the EEGNet model to any input size; you just have to configure it. Also, if you're referring to the BCI Challenge data from Kaggle (https://www.kaggle.com/c/inria-bci-challenge), that dataset has 56 EEG channels (the extra channels are, I believe, eye-movement channels, which we didn't use).
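If you need to drop those extra channels first, a minimal sketch (the assumption here is that the 56 EEG channels come before the 3 non-EEG channels; check epochs.ch_names to confirm the ordering for your data):

```python
import numpy as np

# Hypothetical epoch array from the Kaggle set after resampling:
# (trials, channels, samples) = (2720, 59, 161).
data = np.zeros((2720, 59, 161))

# Keep only the first 56 channels, assumed to be the EEG channels.
eeg_only = data[:, :56, :]
print(eeg_only.shape)  # (2720, 56, 161)
```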
So your code would look something like this:

from EEGModels import EEGNet

model = EEGNet(nb_classes = 2, Chans = 56, Samples = 161)
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam',
              metrics = ['accuracy'])
fittedModel = model.fit(X_train, Y_train, batch_size = 64, epochs = 300,
                        validation_data = (X_validate, Y_validate),
                        class_weight = class_weights)
where, since you're using half the training data, X_train would have shape (2720, 1, 56, 161). Similarly, if you use validation data, it should have shape (val_trials, 1, 56, 161).
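To get from MNE's (trials, channels, samples) output to that 4-D input, you just insert a singleton axis; a sketch (shapes follow the numbers above):

```python
import numpy as np

# Hypothetical epoch array as returned by epochs.get_data():
# (trials, channels, samples).
X = np.zeros((2720, 56, 161))

# EEGNet's channels-first input expects (trials, 1, channels, samples),
# so add a singleton "kernel" axis after the trial axis.
X_train = X[:, np.newaxis, :, :]
print(X_train.shape)  # (2720, 1, 56, 161)
```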
The analysis in the paper used epoch windows of length 160 points (1.25 s at 128 Hz), so I'd guess you're accidentally including one extra point, either at the beginning or the end of the trial. This shouldn't change the results in any meaningful way, though.
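The off-by-one comes from the window being inclusive of both endpoints, which this arithmetic sketch illustrates:

```python
sfreq = 128.0           # sampling rate in Hz
tmin, tmax = 0.0, 1.25  # epoch window in seconds

# MNE keeps both endpoints of [tmin, tmax], so the epoch length is
# (tmax - tmin) * sfreq + 1 samples: 160 + 1 = 161.
n_samples = int(round((tmax - tmin) * sfreq)) + 1
print(n_samples)  # 161
```

Trimming the final point (e.g. `X = X[..., :160]`) would recover the exact 160-sample window from the paper, but as noted above, the extra sample is harmless.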
Okay this helped. Thank you!
Hey, I'm wondering if you could help me with downsampling. I'm trying to implement EEGNet on the Kaggle BCI Challenge data as you did in your paper.
When I call epochs.resample(sfreq = 128), I'm left with an epoch array of shape (2720, 59, 161) (2720 because I'm only using half the training data). I also get a weird shape if I downsample the raw data before epoching. I understand it has something to do with the extraction window of [0, 1.25], but I'm not sure how to compensate for that. Any help would be appreciated, thanks!