eeyhsong / NICE-EEG

[ICLR 2024] M/EEG-based image decoding with contrastive learning. i. Proposes a contrastive learning framework to align image and EEG representations. ii. Resolves brain activity for biological plausibility.
https://arxiv.org/abs/2308.13234
MIT License
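The contrastive alignment described above can be sketched as a symmetric InfoNCE objective over paired EEG and image embeddings. This is a minimal NumPy illustration of the idea, not the repo's actual implementation; the function name, temperature value, and embedding shapes are assumptions for the example.

```python
import numpy as np

def info_nce(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE loss for matched (EEG, image) embedding pairs.

    eeg_emb, img_emb : (batch, dim) arrays; row i of each is a matched pair.
    Hypothetical sketch -- not the NICE-EEG training code.
    """
    # L2-normalize so the dot product is cosine similarity
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = eeg @ img.T / temperature           # (batch, batch) similarity
    labels = np.arange(logits.shape[0])          # matched pairs on the diagonal

    def xent(l):
        # row-wise softmax cross-entropy against the diagonal labels
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the EEG->image and image->EEG directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned embeddings drive the loss toward zero, while mismatched ones keep it high, which is the training signal that pulls the two modalities together.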

self-attention #4

Open WLC125630WLC opened 3 months ago

WLC125630WLC commented 3 months ago

When I read the code in your nice_stand.py file, I didn't see you using self-attention or graph attention mechanisms, but you describe this part in your paper. (screenshot: Image 1)

2050271010 commented 3 months ago

The dataset indicates the presentation scheme was a 100 ms stimulus followed by a 200 ms blank, but Section 4.1 of the paper states 100 ms stimulus and 100 ms blank. Why the inconsistency? Also, how is the 1000 ms in Figure 2(B) obtained, and what does it contain? Within this 1000 ms, how is accuracy measured for different time windows?

peasant98 commented 1 month ago

Hi @eeyhsong, any update on this? It seems that the main code doesn't include the attention module -- is it still needed?

eeyhsong commented 1 month ago

Hello! @peasant98 @WLC125630WLC

We have added the spatial modules part in https://github.com/eeyhsong/NICE-EEG/blob/main/spatial_modules.py. It's an interesting use of attention to show what brain patterns the model has learned.
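For readers who want the gist of channel-wise self-attention before opening spatial_modules.py: the sketch below treats EEG channels as tokens and returns the attention map, which is what makes the learned spatial patterns inspectable. It is a generic single-head illustration with hypothetical shapes (63 channels, 16 features), not the code from that file.

```python
import numpy as np

def channel_self_attention(x, wq, wk, wv):
    """Single-head self-attention across EEG channels (hypothetical sketch).

    x          : (channels, features) one EEG sample; channels act as tokens
    wq, wk, wv : (features, d) query/key/value projection matrices
    Returns the attended features and the (channels, channels) attention map,
    which can be visualized to see which spatial channels the model relates.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])        # scaled dot-product
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over channels
    return attn @ v, attn
```

Each row of the returned map sums to 1, so it reads directly as "how much channel i attends to channel j" over the scalp.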

Best.

eeyhsong commented 1 month ago

Hi @2050271010, please check the dataset carefully. The stimuli are 100 ms each, followed by a 100 ms blank screen. We segment the 1000 ms after stimulus onset to retain the full response.
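The segmentation described above, and the per-window accuracy question from earlier in the thread, can be sketched as follows. The 250 Hz sampling rate, 100 ms window step, and function names are assumptions for illustration, not the repo's preprocessing code.

```python
import numpy as np

def epoch_after_onset(raw, onsets, sfreq=250, win_ms=1000):
    """Cut fixed-length epochs starting at each stimulus onset.

    raw    : (channels, total_samples) continuous recording
    onsets : iterable of onset sample indices
    Hypothetical sketch of the "1000 ms after onset" segmentation.
    """
    n = int(sfreq * win_ms / 1000)                # samples per epoch
    return np.stack([raw[:, o:o + n] for o in onsets])

def time_windows(epoch, sfreq=250, step_ms=100):
    """Split one epoch into consecutive windows, e.g. to score decoding
    accuracy at different latencies within the 1000 ms segment."""
    step = int(sfreq * step_ms / 1000)
    return [epoch[:, s:s + step] for s in range(0, epoch.shape[1], step)]
```

Evaluating the decoder separately on each window from `time_windows` is one straightforward way to plot accuracy as a function of time after stimulus onset.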