This repository provides code for the Attention Temporal Convolutional Network (ATCNet) proposed in the paper: Physics-informed attention temporal convolutional network for EEG-based motor imagery classification
Authors: Hamdi Altaheri, Ghulam Muhammad, Mansour Alsulaiman
Center of Smart Robotics Research, King Saud University, Saudi Arabia
In addition to the proposed ATCNet model, the models.py file includes implementations of other related methods that can be compared with ATCNet, including TCNet_Fusion, EEGTCNet, MBEEG_SENet, EEGNet, DeepConvNet, and ShallowConvNet (see the table below).
The following table shows the performance of ATCNet and the reproduced models on the BCI Competition IV-2a dataset (BCI 4-2a) and the High Gamma Dataset (HGD), based on the methodology defined in the main_TrainValTest.py file:
| Model | #params | BCI 4-2a: training time (min)¹ ² | BCI 4-2a: accuracy (%) | HGD*: training time (min)¹ ² | HGD*: accuracy (%) |
|---|---|---|---|---|---|
| ATCNet | 113,732 | 13.5 | 81.10 | 62.6 | 92.05 |
| TCNet_Fusion | 17,248 | 8.8 | 69.83 | 65.2 | 89.73 |
| EEGTCNet | 4,096 | 7.0 | 65.36 | 36.8 | 87.80 |
| MBEEG_SENet | 10,170 | 15.2 | 69.21 | 104.3 | 90.13 |
| EEGNet | 2,548 | 6.3 | 68.67 | 36.5 | 88.25 |
| DeepConvNet | 553,654 | 7.5 | 42.78 | 43.9 | 87.53 |
| ShallowConvNet | 47,364 | 8.2 | 67.48 | 61.8 | 87.00 |
¹ Using an Nvidia GTX 1080 Ti 12GB GPU.
² 500 epochs, without early stopping.
\* Please note that HGD is for "executed movements", NOT "motor imagery".
This repository includes the implementation of the following attention schemes in the attention_models.py file:

- Multi-head self-attention (mha)
- Multi-head attention with locality self-attention (mhla)
- Convolutional block attention module (cbam)
- Squeeze-and-excitation (se)

These attention blocks can be called using the attention_block(net, attention_model) method in the attention_models.py file, where net is the input layer and attention_model selects the attention mechanism from five options: None, 'mha', 'mhla', 'cbam', and 'se'.
Example:
```python
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten
from attention_models import attention_block

inputs = Input(shape=(10, 100, 1))
block1 = Conv2D(1, (1, 10))(inputs)
block2 = attention_block(block1, 'mha')  # mha: multi-head self-attention
output = Dense(4, activation="softmax")(Flatten()(block2))
```
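The same call works for the other options: passing 'se' or 'cbam', for example, inserts the corresponding squeeze-and-excitation or CBAM block with no other changes to the model.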
The preprocess.py file loads and divides the dataset based on two approaches:

1. Subject-specific (subject-dependent): the model is trained and tested on data from the same subject.
2. Subject-independent (leave-one-subject-out, LOSO): the model is tested on one held-out subject and trained on the data of all remaining subjects.

The get_data() method in the preprocess.py file loads the dataset and splits it into training and testing sets. It uses the subject-specific approach by default; to use the subject-independent (LOSO) approach instead, set the parameter LOSO = True.
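A minimal usage sketch is below. Only the LOSO flag is documented above; the data path, the subject argument, and the return values are illustrative assumptions, so check preprocess.py for the exact signature:

```python
from preprocess import get_data

data_path = './data/BCI4_2a/'  # hypothetical path to the downloaded dataset

# Subject-specific (default): train and test on the same subject
X_train, y_train, X_test, y_test = get_data(data_path, subject=1)

# Subject-independent: leave-one-subject-out (LOSO)
X_train, y_train, X_test, y_test = get_data(data_path, subject=1, LOSO=True)
```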
ATCNet is inspired in part by the Vision Transformer (ViT), but differs from it in several key respects.
The ATCNet model consists of three main blocks:

1. Convolutional (CV) block: encodes low-level spatio-temporal information within the MI-EEG signal through convolutional layers.
2. Attention (AT) block: highlights the most important information in the temporal sequence using multi-head self-attention (MSA).
3. Temporal convolutional (TC) block: extracts high-level temporal features from the highlighted information using a temporal convolutional network (TCN).
Figure: the transition of data through the ATCNet model.
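To inspect these transitions yourself, the model can be instantiated from models.py and summarized. This is a hedged sketch: the builder name ATCNet_ and its parameters are assumptions, with values reflecting the paper's BCI 4-2a setup (4 motor imagery classes, 22 EEG channels, 1125 time samples per trial):

```python
from models import ATCNet_  # assumed builder name in models.py

# Hypothetical parameters for BCI 4-2a: 4 MI classes, 22 EEG channels,
# 1125 time samples per trial; check models.py for the exact signature.
model = ATCNet_(n_classes=4, in_chans=22, in_samples=1125)
model.summary()  # prints the output shape after each layer/block
```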
Models were trained and tested on a single GPU, an Nvidia RTX 2070 8GB (driver version 512.78, CUDA 11.3), using Python 3.7 with the TensorFlow framework. Anaconda 3 was used on Ubuntu 20.04.4 LTS and Windows 11. The following packages are required:
The BCI Competition IV-2a dataset needs to be downloaded, and the data path should be set in the data_path variable in the main_TrainValTest.py file. The dataset can be downloaded from the BCI Competition IV website (https://www.bbci.de/competition/iv/).
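Putting the pieces together, here is a hedged end-to-end sketch matching the table's methodology (500 epochs, without early stopping). The loss, optimizer, batch size, and the get_data/ATCNet_ signatures are illustrative assumptions; main_TrainValTest.py is the authoritative pipeline:

```python
from tensorflow.keras.optimizers import Adam
from preprocess import get_data
from models import ATCNet_  # assumed builder name

data_path = '/path/to/BCI4_2a/'  # set to your local copy of the dataset

# Assumed return values; see preprocess.py for the actual ones
X_train, y_train, X_test, y_test = get_data(data_path, subject=1)

model = ATCNet_(n_classes=4, in_chans=22, in_samples=1125)
model.compile(loss='sparse_categorical_crossentropy',  # assumes integer class labels
              optimizer=Adam(learning_rate=1e-3),      # illustrative learning rate
              metrics=['accuracy'])

# Matches the reported methodology: 500 epochs, without early stopping
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=500, batch_size=64)
```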
If you find this work useful in your research, please cite it using the following BibTeX entries:
```
@article{9852687,
  title={Physics-Informed Attention Temporal Convolutional Network for EEG-Based Motor Imagery Classification},
  author={Altaheri, Hamdi and Muhammad, Ghulam and Alsulaiman, Mansour},
  journal={IEEE Transactions on Industrial Informatics},
  year={2023},
  volume={19},
  number={2},
  pages={2249--2258},
  publisher={IEEE},
  doi={10.1109/TII.2022.3197419}
}

@article{10142002,
  title={Dynamic convolution with multilevel attention for EEG-based motor imagery decoding},
  author={Altaheri, Hamdi and Muhammad, Ghulam and Alsulaiman, Mansour},
  journal={IEEE Internet of Things Journal},
  year={2023},
  volume={10},
  number={21},
  pages={18579--18588},
  publisher={IEEE},
  doi={10.1109/JIOT.2023.3281911}
}

@article{altaheri2023deep,
  title={Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: A review},
  author={Altaheri, Hamdi and Muhammad, Ghulam and Alsulaiman, Mansour and Amin, Syed Umar and Altuwaijri, Ghadir Ali and Abdul, Wadood and Bencherif, Mohamed A and Faisal, Mohammed},
  journal={Neural Computing and Applications},
  year={2023},
  volume={35},
  number={20},
  pages={14681--14722},
  publisher={Springer},
  doi={10.1007/s00521-021-06352-5}
}
```