Open realblack0 opened 3 years ago
Hi, please use EEGNetv4 (https://github.com/braindecode/braindecode/blob/644ac12752e962c6a2506a071bebc4378297450b/braindecode/models/eegnet.py#L25) to reproduce the results from braindecode.
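For reference, a minimal sketch of instantiating that model for BCI Competition IV 2a (22 channels, 4 classes); keyword names may differ slightly between braindecode versions and the window length here is only an illustrative choice:

from braindecode.models import EEGNetv4

# Illustrative instantiation for 22 EEG channels, 4 motor-imagery classes and
# 2-second windows at 250 Hz (500 samples); adjust to your own epoching.
model = EEGNetv4(
    in_chans=22,
    n_classes=4,
    input_window_samples=500,
    kernel_length=32,  # EEGNet-8,2-style kernel length discussed in this thread
)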
I just tried EEGNetv4 from braindecode, but the results were still the same (mean accuracy 61). I think the problem may lie in the preprocessing. Could you give me advice on preprocessing?
@realblack0 See the following Colab notebook. The preprocessing is taken from this repo: https://github.com/iis-eth-zurich/eeg-tcnet . It consists of basic segmentation and normalization; no filtering is applied. I've used it to test the equivalence of the Keras and Braindecode EEGNet implementations.
Results:
- Keras:
  - Model : EEGNet Mean accuracy all dataset: 0.72126762548198 std. 0.08706506238035744
  - Model : EEGNet Mean Kappa all dataset: 0.6284418754376309 std. 0.11591307087949579
- PyTorch (EEGNetV4):
  - Model : EEGNETV4 Mean accuracy for all dataset: 0.7274140515382533 std. 0.06291401587087057
  - Model : EEGNETV4 Mean Kappa for all dataset: 0.6364804236516348 std. 0.08392478435633635
The notebook is self-contained, with all steps from data download to evaluation. Here it is, feel free to play with it: https://colab.research.google.com/drive/1ANF8PwvtUPawTeQt4Uu4iwscpyhHBgvM?usp=sharing
Thanks for your reply, but I want to ask where the normalization is in the Colab notebook; I only see segmentation.
The normalization is applied with these lines:

from sklearn.preprocessing import StandardScaler

# Standardize each of the 22 EEG channels independently.
# The scaler is fit on the training trials only, then applied to both sets.
for j in range(22):
    scaler = StandardScaler()
    scaler.fit(X_train[:, 0, j, :])
    X_train[:, 0, j, :] = scaler.transform(X_train[:, 0, j, :])
    X_test[:, 0, j, :] = scaler.transform(X_test[:, 0, j, :])
@okbalefthanded: I had the same issue as @realblack0 and scaling the input resolved the issue for EEGNet.
Further, I tried to reproduce the ShallowNet results with the original model from braindecode (pytorch/skorch). The major issue here was regularization: adding a kernel_constraint to the first (or the first two) layers did not help, but adding a kernel_constraint to the final layer improved performance by roughly 10% accuracy over all subjects and resolved the issue.
@robintibor: Maybe this Conv2dWithConstraint should be added to the original ShallowNet Implementation?
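For context, such a constraint can be implemented as a thin wrapper around the layer that re-normalizes the weight before every forward pass; below is a minimal sketch for the final linear layer, analogous to braindecode's Conv2dWithConstraint (not its exact code):

import torch
from torch import nn

class LinearWithConstraint(nn.Linear):
    # Linear layer whose weight rows are re-normalized to an L2 norm of at
    # most `max_norm` before each forward pass, mimicking Keras' max_norm
    # kernel constraint on the dense layer.
    def __init__(self, *args, max_norm=0.25, **kwargs):
        self.max_norm = max_norm
        super().__init__(*args, **kwargs)

    def forward(self, x):
        self.weight.data = torch.renorm(
            self.weight.data, p=2, dim=0, maxnorm=self.max_norm
        )
        return super().forward(x)

The default max_norm=0.25 mirrors the norm_rate used on EEGNet's dense layer in the Keras reference; for ShallowNet the appropriate value would need to be chosen.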
@martinwimpff Indeed, the kernel constraint has a significant effect on the results. For my PyTorch implementation of the EEGModels code, I re-implemented the kernel constraint for both the Conv2D and Linear layers as it is implemented in Keras. The MaxNorm function is:
import torch

def MaxNorm(tensor, max_value, axis=0):
    # Re-scale `tensor` so that its L2 norm along `axis` does not exceed
    # `max_value`, mirroring Keras' MaxNorm kernel constraint.
    eps = 1e-7
    norms = torch.sqrt(torch.sum(torch.square(tensor), axis=axis, keepdims=True))
    desired = torch.clip(norms, 0, max_value)
    return tensor * (desired / (norms + eps))
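One way to use it (a sketch only, not necessarily how it is wired into the training loop above) is to re-project the constrained weight after every optimizer step; model.classifier and max_value=0.25 are illustrative names/values:

# Hypothetical training-loop fragment: enforce the max-norm constraint on the
# classification layer after each parameter update. For an nn.Linear weight of
# shape (out_features, in_features), constraining each output unit means
# taking the norm over axis=1.
optimizer.step()
with torch.no_grad():
    w = model.classifier.weight
    w.copy_(MaxNorm(w, max_value=0.25, axis=1))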
Hello, I would like to replicate EEGNet using PyTorch and validate it on the BCI Competition IV 2a dataset. I've encountered an issue with low accuracy. Could you please provide guidance on how to set max_value for MaxNorm? Additionally, can you share the steps you followed for preprocessing the dataset? Have you normalized the data? Could you also share your code? @okbalefthanded
The default values from the Keras/TF implementation will produce similar results with the PyTorch version. Check your pre-processing operations first; they are the most crucial part. Yes, we do normalize the data before training. For easier comparison I suggest running this Colab notebook, where the Keras implementations of both EEGNet and EEG-TCNet are reproduced: https://github.com/okbalefthanded/eeg-tcnet/blob/master/eeg_tcnet_colab.ipynb
Hi, I know this question may be way outdated, but I've noticed that the PyTorch part of the Colab code uses the same model for all 9 subjects instead of re-initializing the model's weights. Once the model is trained on Subject 1's data, the same model is then trained on Subject 2's data; the model trained on Subjects 1 and 2 is then trained and tested on Subject 3's data, and so on. I think it makes more sense to re-initialize the model before training on each new subject's data, but then the mean accuracy drops to around 62%.
The PyTorch code in the Colab notebook uses Braindecode, which is in fact based on skorch (a scikit-learn-compatible API for PyTorch). So at the instantiation of a new NeuralNetClassifier, the model is set to an initial state with random weights.
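In other words, per-subject re-initialization amounts to building a fresh classifier inside the subject loop, roughly like this (a sketch assuming braindecode's EEGClassifier and EEGNetv4; the loader, arguments and hyperparameters are illustrative, not the notebook's exact code):

from braindecode import EEGClassifier
from braindecode.models import EEGNetv4

for subject in range(1, 10):
    # load_subject is a hypothetical helper returning arrays of shape
    # (n_trials, n_channels, n_samples) and integer labels.
    X_train, y_train, X_test, y_test = load_subject(subject)

    # A new EEGNetv4 instance per subject -> fresh random weights each time.
    net = EEGNetv4(in_chans=22, n_classes=4,
                   input_window_samples=X_train.shape[-1])
    clf = EEGClassifier(net, max_epochs=500, batch_size=64)
    clf.fit(X_train, y_train)
    print(subject, clf.score(X_test, y_test))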
Hello.
I am trying to reproduce the experimental results of BCI Competition IV dataset 2a for within-subject classification in the paper. I reused the EEGNet class but got a mean accuracy of 60-63. I expected around 68.
I read #7 and tried EEGNet-8,2 with kernLength = 32. I did 4-fold blockwise cross-validation that splits the training set into three equal contiguous partitions (96/96/96) and selects each of the three partitions in turn as the validation set, while keeping the test set (288) fixed. So there were three training runs for each subject.
I preprocessed using braindecode, and I also tried preprocessing with scipy, but got similar accuracy. I think I missed something in preprocessing. Could you check the code below and help me find what I missed? Or I would be thankful if you shared the preprocessing code you used.
Here is code using braindecode: