Closed ChiShengChen closed 7 months ago
Hi @ChiShengChen , thank you for reading our work.
You have not overlooked anything; this is correct. Our initial experiments used an LSTM with four layers, as shown in EEG2Feat, and we used the same for EEG_encoder. Due to computation time, we kept it at 1 layer for EEG CLIP. You can also train the network with four layers, and it will work.
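The depth difference described above comes down to a single `num_layers` hyperparameter on the LSTM. A minimal sketch (class name, channel counts, and feature dimension are illustrative assumptions, not the repo's exact code):

```python
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Hypothetical sketch of an LSTM-based EEG encoder whose depth is a
    hyperparameter: 1 layer for the EEG-CLIP variant (faster to train),
    4 layers for the EEGStyleGAN-ADA variant, same architecture otherwise."""

    def __init__(self, in_channels=128, hidden_size=256, num_layers=4, feat_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=in_channels, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_size, feat_dim)

    def forward(self, x):
        # x: (batch, time, channels); take the last time-step as the feature
        out, _ = self.lstm(x)
        return self.proj(out[:, -1])

clip_encoder = EEGEncoder(num_layers=1)  # EEG-CLIP setting
gan_encoder = EEGEncoder(num_layers=4)   # EEGStyleGAN-ADA setting
```

Both variants produce feature vectors of the same dimension; only the stacked-LSTM depth (and therefore training cost) differs.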
@prajwalsingh Thanks for the quick reply! But I'm still confused by the code structure: is EEG-CLIP's EEG encoder (one-layer LSTM) different from the final EEGStyleGAN-ADA's (4-layer LSTM)? My understanding is that EEG-CLIP's EEG encoder is the same as EEGStyleGAN-ADA's, and the code in ./EEGClip
is only for showing the EEG-CLIP demo, so the one-layer LSTM is not exactly the EEG encoder used in the paper, right?
Thanks for your patience!
Yes, you are right. Both are the same architecture; they differ only in depth. We have shown an application of EEGClip for the image retrieval task (initially, that was the only plan).
We haven't tried using the EEG encoder weights pre-trained with the EEGClip method for image generation.
For EEGStyleGAN-ADA, we trained the EEG encoder using triplet loss and later used it for conditional training of the GAN.
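The triplet-loss stage mentioned above can be sketched as follows. This is a hedged illustration, not the repo's actual training loop: the flattened linear "encoder", tensor shapes, and margin are stand-in assumptions, and in practice anchor/positive would be EEG segments from the same image class with the negative drawn from a different class.

```python
import torch
import torch.nn as nn

# Stand-in for the LSTM EEG encoder, kept trivially small for illustration.
encoder = nn.Sequential(
    nn.Flatten(),                # (batch, time * channels)
    nn.Linear(440 * 128, 128),   # project to a 128-d feature space
)

triplet = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Dummy batch: anchor/positive share a class, negative does not (here random).
anchor = torch.randn(8, 440, 128)
positive = torch.randn(8, 440, 128)
negative = torch.randn(8, 440, 128)

loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
opt.zero_grad()
loss.backward()
opt.step()
# After convergence, the encoder is frozen and its features are used to
# condition the GAN, as described in the reply above.
```

The key design point is that the encoder is trained first and then reused unchanged, so the GAN sees a fixed EEG feature space during conditional training.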
Thanks for the interesting work! I have some questions about the EEG encoder in
./EEGClip/EEG_encoder.py
. In your paper, you mentioned that the EEG encoder is a multilayer LSTM network, but I found that in ./EEGClip/EEG_encoder.py
and config.py
the code only has a single-layer LSTM; maybe I overlooked something?