This is the PyTorch implementation of the FBCNet architecture for EEG-BCI classification.
FBCNet is designed to effectively extract the spectro-spatial discriminative information that characterizes EEG motor imagery (EEG-MI) while avoiding overfitting in the presence of small datasets. At its core, the FBCNet architecture is composed of four stages: a multi-view data representation, in which the EEG is spectrally filtered into multiple sub-bands; a spatial filtering stage applied to each band; a variance layer that extracts temporal features; and a fully connected classification layer.
The multi-view EEG representation followed by spatial filtering allows the extraction of spectro-spatial discriminative features, and the variance layer provides a compact representation of the temporal information.
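For orientation, here is a minimal PyTorch sketch of an FBCNet-style forward pass covering these four stages. The hyperparameters (22 channels, 9 sub-bands, 32 spatial filters per band, 4 variance windows, 4 classes) and the class name FBCNetSketch are assumptions chosen for illustration; the actual network definition lives in the codes directory.

```python
# A minimal, illustrative FBCNet-style model. Layer sizes and names below are
# assumptions for this sketch, not the exact values used in this repository.
import torch
import torch.nn as nn

class FBCNetSketch(nn.Module):
    def __init__(self, n_chans=22, n_bands=9, m_filters=32, n_windows=4, n_classes=4):
        super().__init__()
        # Stage 2: spatial filtering, learned separately for every sub-band
        # (depthwise convolution across the electrode dimension).
        self.spatial = nn.Sequential(
            nn.Conv2d(n_bands, n_bands * m_filters, kernel_size=(n_chans, 1),
                      groups=n_bands),
            nn.BatchNorm2d(n_bands * m_filters),
        )
        self.n_windows = n_windows
        # Stage 4: fully connected classification layer.
        self.classify = nn.Linear(n_bands * m_filters * n_windows, n_classes)

    def forward(self, x):
        # x: (batch, n_bands, n_chans, n_times), i.e. the multi-view
        # (filter-bank) representation of each EEG trial (Stage 1).
        x = self.spatial(x)                  # (batch, n_bands*m, 1, n_times)
        x = x.squeeze(2)                     # (batch, n_bands*m, n_times)
        # Stage 3: variance layer. Log-variance over non-overlapping time
        # windows gives a compact temporal summary of each filtered signal.
        x = x.reshape(x.shape[0], x.shape[1], self.n_windows, -1)
        x = torch.log(x.var(dim=-1) + 1e-6)  # (batch, n_bands*m, n_windows)
        return self.classify(x.flatten(1))

# Quick shape check on random data (9 sub-bands, 22 channels, 1000 samples).
logits = FBCNetSketch()(torch.randn(8, 9, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```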
This repository is designed as a toolbox that provides all the necessary tools for training and testing BCI classification networks. All the core functionalities are defined in the codes directory. The package requirements needed to run the codes are listed in req.text, and complete instructions for using this toolbox are provided in instructions.txt.
The scripts cv.py and ho.py in /codes/classify/ are the entry points to this toolbox, running the cross-validation and hold-out analyses, respectively.
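The networks in this toolbox operate on a multi-view (filter-bank) EEG representation. Purely as an illustration of that representation, and not as a reproduction of the repository's own preprocessing pipeline, the following sketch band-pass filters a raw EEG trial into several sub-bands with SciPy. The band edges, filter order, sampling rate, and the helper name filter_bank are assumed values.

```python
# Illustrative construction of a multi-view (filter-bank) EEG representation.
# Band edges, filter order, and sampling rate are assumptions for this sketch;
# they are not taken from this repository's preprocessing code.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank(trial, fs=250, bands=((4, 8), (8, 12), (12, 16), (16, 20),
                                      (20, 24), (24, 28), (28, 32),
                                      (32, 36), (36, 40))):
    """trial: (n_chans, n_times) raw EEG -> (n_bands, n_chans, n_times)."""
    views = []
    for low, high in bands:
        sos = butter(4, [low, high], btype='bandpass', fs=fs, output='sos')
        views.append(sosfiltfilt(sos, trial, axis=-1))
    return np.stack(views)

x = filter_bank(np.random.randn(22, 1000))
print(x.shape)  # (9, 22, 1000)
```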
The classification results for FBCNet and the competing architectures are reported in the paper referenced below.
If you find this architecture or toolbox useful, please cite the following paper:
Ravikiran Mane, Effie Chew, Karen Chua, Kai Keng Ang, Neethu Robinson, A. P. Vinod, Seong-Whan Lee, and Cuntai Guan, "FBCNet: A Multi-view Convolutional Neural Network for Brain-Computer Interface," arXiv preprint arXiv:2104.01233 (2021). https://arxiv.org/abs/2104.01233
We thank Ding Yi for assistance with code preparation.