RomGai / BrainVis

Official code repository for the paper: "BrainVis: Exploring the Bridge between Brain and Visual Signals via Image Reconstruction"
https://brainvis-projectpage.github.io/
MIT License

Run main.py error #11

Closed qiaoyub closed 1 month ago

qiaoyub commented 1 month ago

Hello, I encountered an issue. When I followed the steps to set up everything and ran main.py, a problem occurred.

(brainvis) cqy@fuying-System-Product-Name:~/myproject/BrainVis$ python main.py
data loaded
已杀死 (Killed)

It seems that when loading the data, the system's memory limit was exceeded. Have you encountered this issue before?

RomGai commented 1 month ago

Thanks for your interest in our work.

In the default settings, the GPU needs at least 40 GB of memory. We have not measured the system-memory requirements, since system memory is usually sufficient; however, I would expect at least 16-32 GB is needed.

If you run the code on a 16 GB machine, you might need to comment out the code that loads the CLIP embeddings, as they are not required during training of the EEG encoders. Additionally, use torch.tensor(0) as a placeholder for the commented-out variables to avoid incompatible-variable issues in the subsequent code.
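The placeholder pattern described above could be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual code: the class name `EEGDataset` and the `(eeg, label, clip_embedding)` tuple layout are assumptions for the example.

```python
import torch

# Hypothetical sketch of the placeholder trick: when CLIP embeddings are
# not loaded (to save RAM while training the EEG encoder), a dummy
# torch.tensor(0) keeps the returned tuple shape-compatible with the
# rest of the pipeline. Names here are illustrative only.
class EEGDataset(torch.utils.data.Dataset):
    def __init__(self, eeg_signals, labels, load_clip=False):
        self.eeg_signals = eeg_signals
        self.labels = labels
        self.load_clip = load_clip  # skip the large CLIP tensors to save memory

    def __len__(self):
        return len(self.eeg_signals)

    def __getitem__(self, idx):
        if self.load_clip:
            clip_emb = ...  # load the precomputed CLIP embedding here
        else:
            clip_emb = torch.tensor(0)  # placeholder keeps the tuple intact
        return self.eeg_signals[idx], self.labels[idx], clip_emb
```

Downstream code that unpacks three values per sample then continues to work unchanged; only code that actually consumes the CLIP embedding needs to be commented out.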

qiaoyub commented 1 month ago

Thank you for your reply; I have one more question. If I only need the classification task from this work, to obtain textual prompts for the EEG labels, can I skip the detailed captions generated with BLIP?

qiaoyub commented 1 month ago

At which training stage can I measure the EEG classification accuracy (CA) reported in Table 3 of your paper? Looking forward to your reply.

qiaoyub commented 1 month ago

> Thanks for your interest in our work.
>
> In the default settings, the GPU needs at least 40 GB of memory. We have not measured the system-memory requirements, since system memory is usually sufficient; however, I would expect at least 16-32 GB is needed.
>
> If you run the code on a 16 GB machine, you might need to comment out the code that loads the CLIP embeddings, as they are not required during training of the EEG encoders. Additionally, use torch.tensor(0) as a placeholder for the commented-out variables to avoid incompatible-variable issues in the subsequent code.

Hello, I checked my machine: it has two 32 GB RAM sticks, so in theory this problem should not occur! If I only need the EEG classifier from this work, is there some way to obtain just the classification part?

RomGai commented 1 month ago
  1. If you want to obtain the EEG classifier, you only need to complete steps 1-4 of "Train the model"; steps 5 and 6 are not required.

  2. No text descriptions or labels are needed, since the classification task uses one-hot codes.

  3. I have not received reports of similar issues from others, so I suspect there might be a problem with your local setup. I'm not sure why "已杀死" ("Killed") appears, as it is not a standard Python error message. Please check whether other external programs might be interrupting the process. If the issue cannot be resolved, I recommend running the experiments on Colab, which provides a more stable environment.
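For context: "已杀死" is the shell's "Killed" message under a Chinese locale, typically printed when the Linux OOM killer terminates a process that exhausted RAM. A generic Linux-side check (not part of this repository) can confirm how much memory is actually available before loading the dataset:

```python
import os

# Available physical memory, queried via sysconf (Linux/glibc names;
# these keys may not exist on macOS or Windows).
page_size = os.sysconf("SC_PAGE_SIZE")
avail_pages = os.sysconf("SC_AVPHYS_PAGES")
avail_gib = page_size * avail_pages / 1024**3
print(f"available RAM: {avail_gib:.1f} GiB")

# If the process was OOM-killed, the kernel log records it:
#   dmesg -T | grep -i "out of memory"
```

If the reported figure is well below the dataset's working-set size, the "Killed" message is almost certainly the OOM killer rather than a bug in the code.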

qiaoyub commented 1 month ago
>   1. If you want to obtain the EEG classifier, you only need to complete steps 1-4 of "Train the model"; steps 5 and 6 are not required.
>   2. No text descriptions or labels are needed, since the classification task uses one-hot codes.
>   3. I have not received reports of similar issues from others, so I suspect there might be a problem with your local setup. I'm not sure why "已杀死" ("Killed") appears, as it is not a standard Python error message. Please check whether other external programs might be interrupting the process. If the issue cannot be resolved, I recommend running the experiments on Colab, which provides a more stable environment.

1. Thank you for your reply. I changed len(train_dataset) to 5000 so it could run; with 5000 samples the reported memory usage was 48 GB, and with the original length it exceeded the memory capacity. [screenshots]

2. Finally, one more question: how much does reducing the training length to 5000 affect accuracy?

RomGai commented 1 month ago

The extent of the impact is difficult to assess since we have not trained on 5,000 samples. However, generally speaking, performing unsupervised learning with half the number of samples could significantly affect the final results. You can comment out the loading of CLIP embeddings, which should greatly alleviate your memory usage issues.

qiaoyub commented 1 month ago

> The extent of the impact is difficult to assess since we have not trained on 5,000 samples. However, generally speaking, performing unsupervised learning with half the number of samples could significantly affect the final results. You can comment out the loading of CLIP embeddings, which should greatly alleviate your memory usage issues.

Thank you for your reply. I only need the EEG classification module. Do the CLIP embeddings of the annotations affect the accuracy of EEG classification?


Thank you for your work. I will continue to try

qiaoyub commented 1 month ago

1. I have found that my training results are not very good. [screenshot]

2. Is the EEG classification module of this work a combination of time encoding and frequency encoding?

3. If you could send me the pretrained weights of the EEG classification module, I would be very grateful! My QQ email is 1784648041@qq.com

RomGai commented 1 month ago
  1. The CLIP embeddings of the annotations do not affect the accuracy of EEG classification.

  2. The classification result for each subject should be similar to Table 2 of our paper (arXiv version). If the results for the corresponding subjects differ significantly from those reported in the table, in addition to the possibility that the data you are using is insufficient, please check whether steps 1, 2, and 3 were executed correctly.

  3. Yes, but they need to be jointly fine-tuned.

  4. I'm sorry, I didn't save the checkpoints of the classification network; I only saved the model used for generating images. I suggest running the code on a machine with more memory, or on Colab.

RomGai commented 1 month ago
  1. Sorry, as I mentioned in point 4 of my previous comment, I currently only have the final model used for generating images.

  2. The category labels are defined by the dataset creators; you can find them in 'eeg_5_95_std.pth', which is unrelated to any trained model. You can obtain the label (0-39) of any signal along with its identifier in ImageNet. However, you cannot directly obtain a specific description such as 'cat'; I have provided more information in 'cascaded_diffusion.py'.
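Reading the labels out of 'eeg_5_95_std.pth' might look roughly like the sketch below. The key layout ('dataset' as a list of per-sample dicts, 'labels' as a list of 40 ImageNet WNIDs) follows the commonly distributed EEG-ImageNet release and is an assumption here; a small mock dict stands in for the real file, which is not available in this context.

```python
import torch

# Mock standing in for the real 'eeg_5_95_std.pth'; the structure below
# (keys 'dataset', 'labels', 'images') is assumed, not verified against
# the actual file.
mock = {
    "dataset": [
        {"eeg": torch.zeros(128, 440), "label": 3, "image": 10, "subject": 1},
        {"eeg": torch.zeros(128, 440), "label": 7, "image": 55, "subject": 2},
    ],
    "labels": [f"n{i:08d}" for i in range(40)],  # 40 ImageNet WNIDs
    "images": [],
}
torch.save(mock, "eeg_5_95_std_mock.pth")

data = torch.load("eeg_5_95_std_mock.pth")
for sample in data["dataset"]:
    class_idx = sample["label"]        # integer label in 0-39
    wnid = data["labels"][class_idx]   # corresponding ImageNet identifier
    print(class_idx, wnid)
```

Mapping a WNID back to a human-readable name (e.g. 'cat') requires an external WordNet/ImageNet lookup, which is why the description is not directly available from the .pth file itself.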

qiaoyub commented 1 month ago

Thank you very much, looking forward to more work from you!

RomGai commented 1 month ago

You are welcome. Wishing you smooth progress in your research.

qiaoyub commented 2 weeks ago

Can you provide the model used for generating images? I would like to learn more about image generation! My QQ email is 1784648041@qq.com

RomGai commented 2 weeks ago

Sure, done.