Hello @comojin1994, sorry for the late reply.
[1] Zheng, Wei-Long, Jia-Yi Zhu, and Bao-Liang Lu. "Identifying stable patterns over time for emotion recognition from EEG." IEEE Transactions on Affective Computing 10.3 (2017): 417-429.
Best wishes. 🤝
Thank you for answering!! 😃
Thank you for your answer!! 😃
Hello, my research area is EEG-based emotion recognition. I have previously run extensive experiments on the SEED-series datasets you mentioned, and I have also studied this author's other project, EEG-Transformer. Regarding your question, I am planning to modify the relevant code and write a pipeline that takes the SEED-series data from the raw recordings to the structure that EEG-Transformer expects. I would like to discuss this with you first; if you agree, please add my QQ: 384068026
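To make the proposal concrete, here is a minimal sketch (my own code, not this repository's) of the first step I have in mind: loading one raw SEED session file and cutting it into non-overlapping one-second windows. The `*_eeg1`…`*_eeg15` key pattern, the file naming, and the 200 Hz rate follow the standard SEED release as I understand it; matching the exact EEG-Transformer input format would still be a separate step.

```python
import numpy as np
import scipy.io as sio

FS = 200  # SEED EEG is downsampled to 200 Hz

def segment_session(mat_path):
    """Cut one SEED session into non-overlapping one-second (62 x 200) windows."""
    mat = sio.loadmat(mat_path)
    # Trial arrays are stored under keys like 'djc_eeg1' ... 'djc_eeg15',
    # where the prefix is the subject's initials.
    keys = sorted((k for k in mat if '_eeg' in k),
                  key=lambda k: int(k.split('_eeg')[1]))
    segments, clip_ids = [], []
    for clip_idx, key in enumerate(keys):
        data = mat[key]                    # shape: (62 channels, n_samples)
        n_windows = data.shape[1] // FS    # drop the trailing partial second
        for w in range(n_windows):
            segments.append(data[:, w * FS:(w + 1) * FS])
            clip_ids.append(clip_idx)
    return np.stack(segments), np.array(clip_ids)

segs, clips = segment_session('1_20131027.mat')  # a file name from the SEED release
print(segs.shape)  # expected: roughly (3394, 62, 200) per session
```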
Hi,
I'm impressed with your paper.
After reading the paper, I have some questions about how the SEED dataset is used and evaluated.
In the paper, you mentioned that each session contains 3394 trials, segmented from the original data using a non-overlapping one-second time window. However, I would like to know more details about the dataset settings.
Q1. Is the input shape of the data 62 x 200 (number of channels x one second of data at a 200 Hz sampling rate)?
Q2. There are 15 subjects, with three sessions per subject, and each session consists of 15 clips. If you follow the train/test setting of Zheng et al. [1], did you also use the first 9 clips as the training set and the last 6 clips as the test set (see the sketch below)? If this is not the case, could you explain your split in more detail?
Q3. If Q2 is right, did you average the results over all 45 sessions (15 subjects x 3 sessions) when reporting performance?
Q4. If it's alright with you, could you share your preprocessing code for the SEED dataset?
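For context, this is the setting I am assuming in Q1-Q3, following my reading of Zheng et al. [1] (a sketch, not your code): `segments` and `clip_ids` stand for the per-session arrays, and `evaluate_session` is a hypothetical per-session routine.

```python
import numpy as np

# Dummy per-session arrays so the sketch is self-contained:
# 'segments' holds the one-second windows, 'clip_ids' the clip (0-14) each came from.
segments = np.zeros((3394, 62, 200))
clip_ids = np.sort(np.random.randint(0, 15, size=3394))

train_mask = clip_ids < 9        # first 9 clips -> train set (Q2)
x_train = segments[train_mask]
x_test = segments[~train_mask]   # last 6 clips -> test set

# Q3: average the per-session accuracies over all 45 runs.
# accs = [evaluate_session(subj, sess) for subj in range(15) for sess in range(3)]
# mean_acc, std_acc = np.mean(accs), np.std(accs)
```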
Thank you for sharing your excellent research.