prajwalsingh / EEGStyleGAN-ADA

PyTorch code for the paper "Learning Robust Deep Visual Representations from EEG Brain Recordings". [WACV 2024]
MIT License

about the 'imageckpt/eegfeat_all_0.6875.pth' #8

Closed xuxiran closed 6 months ago

xuxiran commented 6 months ago

Thank you very much for your excellent work and the open code!

After reading your paper and the closed issues, I have reproduced most of the results in your paper except "EEGClip.py", which I will try to reproduce in a few days.

However, I have a question about "EEGStyleGAN-ADA_CVPR40".

The question is about 'imageckpt/eegfeat_all_0.6875.pth'. I downloaded the file from the link you provided and generated good pictures. However, could I replace 'eegfeat_all_0.6875.pth' with one of the checkpoints I reproduced in "EEG2Feat"? I get many checkpoint files when reproducing "EEG2Feat"; for example, "eegfeat_all_0.9754464285714286.pth" seems better than the file you provided. So why do we need to use 'eegfeat_all_0.6875.pth' when generating pictures?

Thank you very much for your excellent work and the open code again!
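Since the validation score is embedded in each checkpoint's filename (`eegfeat_all_<score>.pth`, as in the files mentioned above), comparing reproduced checkpoints can be done programmatically; a minimal sketch (the helper name is made up for illustration):

```python
import re

def best_checkpoint(names):
    """Return the checkpoint filename with the highest score suffix.

    Filenames follow the pattern eegfeat_all_<score>.pth, where
    <score> is the validation accuracy recorded at save time.
    """
    pattern = re.compile(r"eegfeat_all_([0-9.]+)\.pth$")
    scored = []
    for name in names:
        m = pattern.search(name)
        if m:
            scored.append((float(m.group(1)), name))
    if not scored:
        raise ValueError("no matching checkpoints")
    return max(scored)[1]

names = ["eegfeat_all_0.6875.pth", "eegfeat_all_0.9754464285714286.pth"]
print(best_checkpoint(names))  # eegfeat_all_0.9754464285714286.pth
```

As the maintainer's reply below clarifies, though, a higher EEG2Feat score alone is not the whole story for the `imageckpt/` file.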

prajwalsingh commented 6 months ago

Hi @xuxiran, thank you for reading our work.

The checkpoint in the "imageckpt/" folder is actually used for image-to-image translation. After training the EEG2Feat network, we trained a network for Image2EEG features. The goal is to transform an unseen image into the EEG feature space and then reconstruct it with EEGStyleGAN-ADA. So 68.75% is the accuracy we obtained when transforming images into EEG space (using the extracted image features).
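In other words, the Image2EEG stage learns a mapping from image-feature space into the EEG feature space that EEGStyleGAN-ADA consumes. A toy, linear stand-in for that mapping (the real repo trains a network; the dimensions and least-squares fit here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend features: 2000 images with 64-d image features mapping into
# a 32-d EEG feature space. true_W is an invented ground-truth map so
# we can verify the fit afterwards.
img_feats = rng.normal(size=(2000, 64))
true_W = rng.normal(size=(64, 32))
eeg_feats = img_feats @ true_W

# Fit a linear Image2EEG map by least squares.
W, *_ = np.linalg.lstsq(img_feats, eeg_feats, rcond=None)

# Project an unseen image into EEG feature space; in the real
# pipeline this projection would be fed to EEGStyleGAN-ADA.
new_img = rng.normal(size=(1, 64))
eeg_like = new_img @ W
print(np.allclose(new_img @ true_W, eeg_like, atol=1e-6))  # True
```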

You can replace the checkpoint in eegckpt, but for imageckpt you would have to improve the Image2EEG mapping itself.

Let us know if you have any further questions.

xuxiran commented 6 months ago

Thank you very much for your kind explanation! It is very convincing, and I will try to reproduce it today.

I found that I forgot to ask a question: in the "config.py" of "EEG2Feat_Unseen" there is a new dataset path, for example:

```python
base_path       = '/path to data folder/'
train_path      = 'data/filter_eeg_imagenet40_cvpr_2017_raw/train/*/*'
validation_path = 'data/filter_eeg_imagenet40_cvpr_2017_raw/test/*/*'
test_path       = 'data/filter_eeg_imagenet40_cvpr_2017_raw/*/*/*'
```

I am wondering how to get the dataset "filter_eeg_imagenet40_cvpr_2017_raw".
From my understanding, this dataset should be the same as the previous one ("EEG2Feat"). Therefore, I directly used the "eeg_imagenet40_cvpr_2017_raw" dataset, for example:

```python
base_path       = '/path to data folder/eeg_imagenet40_cvpr_2017_raw/'
train_path      = '/train/*/*'
validation_path = '/test/*/*'
test_path       = '/*/*/*'
```

Is it correct for me to do this? Or perhaps I should preprocess the raw dataset?
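As a quick sanity check on whether a candidate dataset directory actually matches config-style glob patterns like `train/*/*`, counting the matched files per split can help; a minimal sketch (the mock layout built below is hypothetical, not the repo's exact structure):

```python
import os
import tempfile
from glob import glob

def count_split(base_path, pattern):
    """Count files matched by a config-style glob such as 'train/*/*'."""
    return len(glob(os.path.join(base_path, pattern)))

# Build a tiny mock layout: <base>/train/<class>/<sample>.npy
base = tempfile.mkdtemp()
for cls in ("dog", "cat"):
    class_dir = os.path.join(base, "train", cls)
    os.makedirs(class_dir)
    for i in range(3):
        open(os.path.join(class_dir, f"sample{i}.npy"), "w").close()

print(count_split(base, "train/*/*"))  # 2 classes x 3 files = 6
```

If the count comes back zero, the directory layout and the glob pattern disagree, which usually points to a missing preprocessing or renaming step.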

prajwalsingh commented 6 months ago

@xuxiran , yes, it's the same dataset used for EEG2Feat. Sorry for the confusion caused by the code; it is not the cleanest.

xuxiran commented 6 months ago

Hello, here I am again. Thank you very much for your previous kind explanation! I have finished running the "Image2EEG" code. One more question: when I ran "train.py", the "Val KMeans score Proj" was always lower than 0.32. [screenshot]

However, when I ran "evaluate.py", the result was 0.67, which is similar to the result implied by "eegfeat_all_0.6875.pth".

Did I misunderstand something here? Why is it that the training score is close to 1 and the validation score is only 0.32, yet the test-set result is still relatively good (0.67)?

prajwalsingh commented 6 months ago

Hi @xuxiran , I think there might be some issue with the network depth or hyper-parameters, because when we train the network, we obtain the reported accuracy on the validation set. We are attaching a snapshot of the same.

[screenshot]
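For reference, a "KMeans score" of this kind is typically computed by clustering the learned features with k-means and scoring how well the clusters agree with the true labels. A minimal numpy sketch of one common majority-vote variant (this is an illustration of the metric family; the repo's exact implementation may differ):

```python
import numpy as np

def kmeans_score(feats, labels, k, iters=50, seed=0):
    """Cluster features with k-means, then score by majority vote:
    each cluster is assigned its most common true label, and the
    score is the fraction of points matching their cluster's label."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for c in range(k):
            pts = feats[assign == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    correct = 0
    for c in range(k):
        cluster_labels = labels[assign == c]
        if len(cluster_labels):
            correct += np.bincount(cluster_labels).max()
    return correct / len(labels)

# Two well-separated blobs should cluster almost perfectly.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
print(kmeans_score(feats, labels, k=2))
```

A large gap between training and validation on such a metric, as described above, usually indicates overfitting of the feature extractor rather than a bug in the metric itself.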

xuxiran commented 6 months ago

Hello @prajwalsingh , here I am again. I tried running the code on another server, and this is the result. [screenshot]

[screenshot] The score seems to keep getting lower...

Maybe you are right that there is some issue with the network depth or hyper-parameters. However, would you mind sharing some possible hyper-parameters? I can give them a try. (I am using the parameters as downloaded from GitHub.)
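If it helps the discussion, these are the kinds of knobs that usually move a validation k-means score in feature-learning setups like this one; the values below are illustrative guesses for a sweep, NOT the repository's official settings:

```python
# Illustrative hyper-parameters for an EEG feature-encoder sweep.
# These values are guesses for experimentation, not the official
# EEGStyleGAN-ADA configuration.
sweep_config = {
    "encoder_layers": [2, 4],        # network depth to vary
    "hidden_size": [128, 256],
    "projection_dim": [128],
    "batch_size": [128, 256],
    "lr": [1e-4, 3e-4],
    "weight_decay": [0.0, 1e-4],
}

# Total number of runs in a full grid sweep over these options.
n_runs = 1
for options in sweep_config.values():
    n_runs *= len(options)
print(n_runs)  # 2 * 2 * 1 * 2 * 2 * 2 = 32
```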

prajwalsingh commented 6 months ago

Hi @xuxiran , I re-ran the code on my system and again got similar accuracy, around 59%+ (training is still ongoing). I have re-uploaded the code, although nothing has changed from the previous version. You can find a snapshot below.

I hope you are using the raw dataset only; in this work, we focused on the raw CVPR40 dataset.

[screenshot]