prajwalsingh / EEGStyleGAN-ADA

PyTorch code for the paper "Learning Robust Deep Visual Representations from EEG Brain Recordings". [WACV 2024]
MIT License

about the dataset CVPR40 5-95HZ #2

Closed qiuyuesong closed 7 months ago

qiuyuesong commented 7 months ago

"Hello, thank you very much for your excellent work. I encountered a problem while trying to reproduce the dataset CVPR40 5-95HZ. I experienced overfitting using the EEG feature extraction method from your article. Could you please explain how you extracted the EEG features for the 5-95Hz data?"

prajwalsingh commented 7 months ago

@qiuyuesong Thank you for reading our work.

For the 5-95 Hz dataset, we first trained an EEG encoder and an image encoder in a CLIP-based setting: we took a pre-trained ResNet50 and froze all of its layers except the last, which we replaced with a fully connected layer that projects the ResNet50 features into the EEG embedding space. We trained the complete method end-to-end for a certain number of epochs; the ablation study for this is in our paper. After training, we computed the top-k accuracy.

We have just updated the EEGClip code in the repository. You can have a look at it for better understanding.

qiuyuesong commented 7 months ago


Gratitude beyond words.
