Closed: qiaoyub closed this issue 3 months ago.
Thanks for your interest in our work.
With the default settings, the GPU needs at least 40 GB of memory. We have not measured the system-memory requirements, since system memory is usually sufficient, but I would expect at least 16 or 32 GB to be needed.
If you run the code on a 16 GB machine, you may need to comment out the code that loads the CLIP embeddings, as they are not required for training the EEG encoders. Additionally, use torch.tensor(0) as a placeholder for the commented-out variables to avoid incompatible-variable issues in the subsequent code.
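For illustration only, a minimal sketch of the placeholder trick described above; `load_sample` and its arguments are hypothetical names, not the repository's actual code:

```python
import torch

def load_sample(eeg_path, clip_path):
    """Hypothetical helper showing how to skip the CLIP embeddings during encoder training."""
    eeg = torch.load(eeg_path)           # the EEG signal is still needed for encoder training
    # clip_emb = torch.load(clip_path)   # commented out: CLIP embeddings are not needed here
    clip_emb = torch.tensor(0)           # placeholder keeps the return signature unchanged
    return eeg, clip_emb
```

The placeholder keeps any downstream unpacking code working without loading the large embedding tensors into RAM.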
Thank you for your reply, I have one more question! If I only need the classification task of this work to obtain textual prompts for its EEG labels, can I skip the detailed descriptions obtained through BLIP?
Also, at which training stage can I evaluate the EEG classification accuracy (CA) shown in Table 3 of your paper? Looking forward to your reply.
Hello, I checked my machine and it has two 32 GB memory sticks, so in principle this problem should not occur! If I only need the EEG classifier from this work, is there some way I can obtain just the EEG classification part?
- If you want to obtain the EEG classifier, you only need to follow "Train the model" through step 4; steps 5 and 6 are not required.
- No text descriptions or labels are needed, as the classification task uses one-hot encoded class labels (see the sketch below).
- I have not received reports of similar issues from others, so I suspect there is a problem with your local setup. I'm not sure why "已杀死" ("Killed") is occurring, as it does not seem to be a standard Python error message. Please check whether other external programs might be interrupting the process. If the issue cannot be resolved, I recommend running the experiments on the Colab platform, which provides a more stable environment.
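As a minimal illustration of the one-hot targets mentioned above (the variable names are hypothetical; 40 is the dataset's class count):

```python
import torch
import torch.nn.functional as F

# Integer class labels (0-39) are all the classifier needs; no text prompts are involved.
labels = torch.tensor([3, 17, 39])
one_hot = F.one_hot(labels, num_classes=40).float()
print(one_hot.shape)  # torch.Size([3, 40])
```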
1. Thank you for your reply. I changed len(train_dataset) to 5000 so that the code could run; with 5000 samples the displayed memory usage was 48 GB. If I used the original length directly, it would exceed the memory capacity (see the subsampling sketch below).
2. One last question: how much does reducing the training length to 5000 affect the accuracy?
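A minimal sketch of one way to take such a 5000-sample subset without editing the dataset length; the TensorDataset below is a dummy stand-in with arbitrary shapes, not the repository's real data:

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Dummy stand-in for the real training set; sizes and shapes here are arbitrary placeholders.
train_dataset = TensorDataset(torch.randn(10000, 8), torch.randint(0, 40, (10000,)))

# A random 5000-sample subset avoids biasing toward the first samples in the file.
subset = Subset(train_dataset, torch.randperm(len(train_dataset))[:5000].tolist())
print(len(subset))  # 5000
```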
The extent of the impact is difficult to assess since we have not trained on 5,000 samples. However, generally speaking, performing unsupervised learning with half the number of samples could significantly affect the final results. You can comment out the loading of CLIP embeddings, which should greatly alleviate your memory usage issues.
Thank you for your reply. I only need the EEG classification module. Do the CLIP embeddings of the annotations really not affect the accuracy of EEG classification?
Thank you for your work. I will keep trying.
1. I have found that my training results are not very good.
2. Is the EEG classification module of this work a combination of the time encoding and the frequency encoding?
3. If you could send me the pretrained weights of the EEG classification module you trained, I would be very grateful!! My QQ email is 1784648041@qq.com
1. Yes, the CLIP embeddings of the annotations do not affect the accuracy of EEG classification.
2. The classification result for each subject should be similar to Table 2 of our paper (arXiv version). If the results for the corresponding subjects differ significantly from those reported in the table, then in addition to the possibility that you are using insufficient data, please check whether steps 1, 2, and 3 were executed correctly.
3. Yes, but they need to be jointly fine-tuned.
4. I'm sorry, I did not save the checkpoints of the classification network; I only saved the model used for generating images. I suggest you run the code on a machine with more memory, or use Colab.
Sorry, as I mentioned in point 4 of my previous comment, I currently only have the final model used for generating images.
The category label is defined by the dataset creators, and you can find it in 'eeg_5_95_std.pth'; it is unrelated to any trained model. You can obtain the label (0-39) of any signal along with its ImageNet identifier. However, you cannot directly obtain a specific textual description, such as 'cat', though I have provided more information in 'cascaded_diffusion.py'.
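A minimal sketch of reading those labels; it assumes the commonly reported layout of this file (a dict with 'dataset', 'labels', and 'images' entries), so verify the keys against your local copy:

```python
import torch

# Assumed layout: {'dataset': [...], 'labels': [...], 'images': [...]} -- check your local copy.
data = torch.load("eeg_5_95_std.pth")

sample = data["dataset"][0]          # one EEG recording entry
class_idx = sample["label"]          # integer class label in 0-39
wnid = data["labels"][class_idx]     # corresponding ImageNet synset identifier (wnid string)
print(class_idx, wnid)
```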
Thank you very much, looking forward to more work from you!
You are welcome. Wishing you smooth progress in your research.
Can you provide the model used for generating images? I would like to learn more about image generation! My QQ email is 1784648041@qq.com
Sure, done.
Hello, I encountered an issue. When I followed the steps to set up everything and ran main.py, a problem occurred.
(brainvis) cqy@fuying-System-Product-Name:~/myproject/BrainVis$ python main.py
data loaded
已杀死 (Killed)
It seems that when loading the data, the system's memory limit was exceeded. Have you encountered this issue before?
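For context (an editorial sketch, not part of the repository): "已杀死" ("Killed") is typically the Linux out-of-memory killer terminating the process. A quick pre-flight check with the third-party psutil package can confirm how much RAM is free before the large data load; the 48 GB threshold below is only a rough guess based on the usage reported earlier in this thread:

```python
import psutil  # third-party: pip install psutil

# Check available system RAM before the large data load in main.py.
free_gb = psutil.virtual_memory().available / 1024 ** 3
print(f"Available RAM: {free_gb:.1f} GB")

# Rough threshold based on the 48 GB usage reported above; adjust to your setup.
if free_gb < 48:
    print("Probably not enough RAM: consider commenting out the CLIP-embedding loading, "
          "or run on a machine / Colab instance with more memory.")
```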