prajwalsingh / EEGStyleGAN-ADA

PyTorch code for the paper "Learning Robust Deep Visual Representations from EEG Brain Recordings" [WACV 2024].

Is the EEG encoder only a one-layer LSTM? #12

Closed. ChiShengChen closed this issue 7 months ago.

ChiShengChen commented 7 months ago

Thanks for the interesting work! I have a question about the EEG encoder in ./EEGClip/EEG_encoder.py. In your paper, you mention that the EEG encoder is a multilayer LSTM network, but from what I can see in ./EEGClip/EEG_encoder.py and config.py, the code only uses a single-layer LSTM. Maybe I overlooked something?

[Screenshot: 2024-03-13 02-01-05] [Screenshot: 2024-03-13 02-09-04]

prajwalsingh commented 7 months ago

Hi @ChiShengChen, thank you for reading our work.

You have not overlooked anything; your observation is correct. Our initial experiments used an LSTM with four layers, as shown in EEG2Feat, and we used the same for EEG_encoder. To save computation time, we kept it at one layer for EEGClip. You can also train the network with four layers, and it will work.
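For illustration, here is a minimal sketch of what such a configurable-depth LSTM encoder could look like. The class name, argument names, and dimensions below are hypothetical, not the repo's actual API; see ./EEGClip/EEG_encoder.py and config.py for the real implementation:

```python
import torch.nn as nn

class EEGFeatEncoder(nn.Module):
    """Hypothetical LSTM-based EEG encoder; the real one lives in ./EEGClip/EEG_encoder.py."""
    def __init__(self, n_channels=128, hidden_dim=256, embed_dim=128, n_layers=1):
        super().__init__()
        # num_layers is the only knob that changes between the two setups:
        # 1 in the released EEGClip config, 4 in the initial EEG2Feat experiments.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_dim,
                            num_layers=n_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def forward(self, eeg):               # eeg: (batch, time, channels)
        _, (h_n, _) = self.lstm(eeg)      # h_n: (n_layers, batch, hidden_dim)
        return self.proj(h_n[-1])         # embed the last layer's final hidden state

shallow = EEGFeatEncoder(n_layers=1)      # EEGClip depth
deep = EEGFeatEncoder(n_layers=4)         # EEG2Feat depth
```

Switching depth is then a one-argument change rather than an architectural difference.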

ChiShengChen commented 7 months ago

@prajwalsingh Thanks for the quick reply! But I'm still confused by the code structure: is EEG-CLIP's EEG encoder (a one-layer LSTM) different from the final EEGStyleGAN-ADA's (a 4-layer LSTM)? My understanding is that EEG-CLIP's EEG encoder is the same as EEGStyleGAN-ADA's, and the code in ./EEGClip is only there to demo EEG-CLIP, so the one-layer LSTM is not exactly the EEG encoder used in the paper, right? Thanks for your patience!

prajwalsingh commented 7 months ago

Yes, you are right. Both are the same architecture; they differ only in depth. We have shown an application of EEGClip for the image retrieval task (that was the initial plan).
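As a rough illustration of that retrieval setup, assuming the EEG and image encoders map into a shared embedding space (the function and variable names below are hypothetical, not from the repo):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def retrieve_images(eeg_emb, img_emb, k=5):
    """Rank a gallery of image embeddings by cosine similarity to EEG embeddings.

    eeg_emb: (B, D) EEG features; img_emb: (N, D) image features.
    Returns the indices of the top-k images for each EEG sample."""
    eeg_emb = F.normalize(eeg_emb, dim=-1)  # unit-normalize both modalities
    img_emb = F.normalize(img_emb, dim=-1)
    sims = eeg_emb @ img_emb.t()            # (B, N) cosine similarity matrix
    return sims.topk(k, dim=-1).indices
```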

We haven't tried using the pre-trained EEG encoder weights obtained with the EEGClip method for image generation.

For EEGStyleGAN-ADA, we trained the EEG encoder using triplet loss and later used it for conditional training of the GAN.
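A minimal sketch of that triplet-loss training step, reusing the hypothetical EEGFeatEncoder from the earlier comment (the margin, learning rate, and sampling details are assumptions, not the repo's actual settings):

```python
import torch
import torch.nn as nn

encoder = EEGFeatEncoder(n_layers=4)        # four-layer LSTM, per the reply above
triplet = nn.TripletMarginLoss(margin=1.0)  # margin is an assumed value
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def train_step(anchor_eeg, positive_eeg, negative_eeg):
    # anchor and positive share a class label; negative is drawn from another class
    a = encoder(anchor_eeg)
    p = encoder(positive_eeg)
    n = encoder(negative_eeg)
    loss = triplet(a, p, n)                 # pull (a, p) together, push (a, n) apart
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The resulting encoder embeddings can then condition the GAN's generator during EEGStyleGAN-ADA training.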