eeyhsong / EEG-Conformer

EEG Transformer 2.0. i. Convolutional Transformer for EEG Decoding. ii. Novel visualization - Class Activation Topography.
GNU General Public License v3.0

Question about using SEED dataset #1

Closed · comojin1994 closed this 1 year ago

comojin1994 commented 1 year ago

Hi,

I'm impressed with your paper.

After reading the paper, I got some questions about using and evaluating the SEED dataset.

In the paper, you mentioned that each session contains 3394 trials, segmented from the original data using a non-overlapping one-second time window. However, I would like to know more details about the dataset settings.

Q1. Is the input shape of the data 62 × 200 (number of channels × one second of data at a 200 Hz sampling rate)?
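For context, my current segmentation looks like this (a rough NumPy sketch of what I assume the windowing to be, not your code):

```python
import numpy as np

FS = 200          # SEED sampling rate (Hz)
WIN = FS          # non-overlapping one-second window -> 200 samples
N_CHANNELS = 62   # SEED electrode count

def segment_trial(trial: np.ndarray) -> np.ndarray:
    """Cut one continuous trial (62 x T) into non-overlapping
    one-second windows of shape (62, 200); any tail shorter
    than one second is dropped."""
    n_windows = trial.shape[1] // WIN
    return np.stack([trial[:, i * WIN:(i + 1) * WIN]
                     for i in range(n_windows)])

# e.g. a dummy 3-minute clip yields 180 segments of shape (62, 200)
dummy_trial = np.random.randn(N_CHANNELS, 180 * FS)
print(segment_trial(dummy_trial).shape)  # (180, 62, 200)
```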

Q2. There are 15 subjects, each with three sessions, and each session consists of 15 clips. Following the train/test setting in Zheng et al., did you also use the first 9 clips as the training set and the last 6 clips as the test set? If not, could you explain your setup in more detail?

W.-L. Zheng and B.-L. Lu, "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks," IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, pp. 162-175, 2015.
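In code, I understand that split roughly as follows (my own sketch; `session_trials` is a hypothetical list of the 15 per-clip recordings in presentation order):

```python
import numpy as np

# Sketch of the 9:6 protocol in Zheng & Lu (2015): within each session,
# the first 9 movie clips form the training set and the last 6 clips
# form the test set. Dummy clips stand in for the real recordings.
session_trials = [np.random.randn(62, 200 * 60) for _ in range(15)]
train_trials = session_trials[:9]   # clips 1-9
test_trials = session_trials[9:]    # clips 10-15
```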

Q3. If Q2 is right, did you average the results from the 45 sessions (15 subjects × 3 sessions) to calculate the final results?

Q4. If it's alright with you, could you share your preprocessing code for the SEED dataset?

Thank you for sharing your excellent research.

eeyhsong commented 1 year ago

Hello @comojin1994, sorry for the late reply.

  1. Yes, the input uses one-second data with a sampling rate of 200 Hz.
  2. I use 5-fold cross-validation for the SEED dataset [1] instead of the 9:6 split. Both evaluation protocols are widely used for this dataset. The 15 clips in each SEED session were elicited with different movie stimuli, and I think there may be more significant differences in the emotions produced by different clips (see the sketch after this list).

    [1] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, "Identifying stable patterns over time for emotion recognition from EEG," IEEE Transactions on Affective Computing, vol. 10, no. 3, pp. 417-429, 2017.

  3. Yes, I averaged the results of 45 sessions to get the final results.
  4. I will organize the code and release it later.
  5. I think it is interesting to use this way of combining convolution and self-attention as a new backbone for EEG analysis.
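To make points 2 and 3 concrete, here is a rough sketch of the evaluation procedure (`train_and_eval` stands in for the actual training code, which I will release later):

```python
import numpy as np
from sklearn.model_selection import KFold

def evaluate_all_sessions(sessions, train_and_eval, n_folds=5, seed=0):
    """5-fold CV within each of the 45 sessions (15 subjects x 3 sessions),
    then averaging the per-session accuracies.
    `sessions` is an iterable of (X, y) pairs with X of shape
    (n_windows, 62, 200); `train_and_eval` returns test accuracy."""
    session_accs = []
    for X, y in sessions:
        kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        fold_accs = [train_and_eval(X[tr], y[tr], X[te], y[te])
                     for tr, te in kf.split(X)]
        session_accs.append(np.mean(fold_accs))   # per-session accuracy
    return np.mean(session_accs), np.std(session_accs)
```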

Best wishes. 🤝

comojin1994 commented 1 year ago

Thank you for answering!! 😃

384068026 commented 1 year ago

Thank you for your answer!! 😃

Hello, my research focuses on emotion recognition from EEG signals. I have previously run extensive experiments on the SEED series of datasets you mentioned, and I have also studied this author's other project, EEG-Transformer. Regarding your question, I intend to modify the relevant code and write a script that processes the SEED datasets from the raw data into the structure required by EEG-Transformer. I would like to discuss this with you; if you agree, please add my QQ: 384068026