Open · samin9796 opened this issue 6 months ago
Thank you for your interest in our study! We have not been able to find the problem yet, but could you please confirm that you are using eval.py to evaluate the model?
@youngeun1209 Thank you for your reply.
Yes, I used eval.py to evaluate the model. One thing to point out: there is no --trained_model argument in eval.py, so I used the model_config argument instead to avoid the error.
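For reference, this is roughly the argument handling I am working around; only model_config is confirmed to exist in eval.py, and everything else in this sketch is illustrative, not taken from the repo:

```python
# Minimal sketch of eval.py's argument handling as I understand it.
# Only --model_config is confirmed to exist; --trained_model is not defined,
# which is why I currently route the checkpoint path through --model_config.
import argparse

parser = argparse.ArgumentParser(description="Evaluate the pretrained model")
parser.add_argument("--model_config", type=str, required=True,
                    help="config path (here also used to point at the trained model)")
# parser.add_argument("--trained_model", type=str)  # absent in the current eval.py
args = parser.parse_args()
print("Using model_config:", args.model_config)
```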
You can place the trained model in 'loc_g', i.e. in the location determined by the logDir, sub, and task arguments.
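For clarity, here is a minimal sketch of how that checkpoint location might be assembled from those arguments; the join order, the example values, and the interpretation of 'loc_g' are assumptions based on this reply, not taken from the repo:

```python
# Sketch only: assumes the trained-model directory ('loc_g') is built
# from the logDir, sub, and task arguments roughly like this.
import os

def trained_model_dir(log_dir: str, sub: str, task: str) -> str:
    """Directory where eval.py is expected to find the trained model."""
    return os.path.join(log_dir, sub, task)

if __name__ == "__main__":
    loc_g = trained_model_dir("./logs", "sub-01", "SpokenEEG")  # example values
    print("Place the pretrained model under:", loc_g)
```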
Hi! I am trying to evaluate the pre-trained model for spoken EEG on the test samples, but the results are nowhere near as good as the demo. Could you please confirm that the original config files (the ones used for the pre-trained model) were shared in this repo? Any pointers on why I cannot reproduce similar results would be highly appreciated!
Thank you so much!