Hello, thank you for your work — I enjoyed reading it. However, when I run cmt_evaluate.py, a problem occurs.
This is my input:
```
python cmt_evaluate.py --project_path "./results/first" --data_path "./extract_dataset_single_epoch" --val_data_list [4] --model_type "Epoch" --batch_size 1 --is_interpret True
```
And this is the output:

```
Getting Results ===================================>
Traceback (most recent call last):
  File "E:\quan\Cross-Modal-Transformer-master\cmt_evaluate.py", line 73, in <module>
    main()
  File "E:\quan\Cross-Modal-Transformer-master\cmt_evaluate.py", line 68, in main
    eval_epoch_cmt(data_loader, device, args)
  File "E:\quan\Cross-Modal-Transformer-master\models\epoch_cmt.py", line 328, in eval_epoch_cmt
    pred, _, _ = test_model(val_eeg.float().to(device), val_eog.float().to(device), finetune=True)
  File "D:\ProgramData\anaconda3\envs\cmt\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\quan\Cross-Modal-Transformer-master\models\epoch_cmt.py", line 42, in forward
    self_eeg = self.eeg_atten(eeg)
  File "D:\ProgramData\anaconda3\envs\cmt\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\quan\Cross-Modal-Transformer-master\models\model_blocks.py", line 138, in forward
    src = self.window_embed(x)
  File "D:\ProgramData\anaconda3\envs\cmt\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\quan\Cross-Modal-Transformer-master\models\model_blocks.py", line 105, in forward
    b, _, _ = x.shape
ValueError: too many values to unpack (expected 3)
```
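For what it's worth, that unpacking line expects `x.shape` to have exactly three entries, so I suspect my input tensor is arriving with an extra dimension. A minimal sketch of what triggers the error (the shapes below are hypothetical, just to illustrate; a `torch.Size` unpacks like a plain tuple):

```python
# A 3-D shape (batch, time, channel) unpacks fine:
shape_3d = (1, 3000, 1)
b, _, _ = shape_3d  # OK, b == 1

# A 4-D shape, e.g. with an extra leading dim, raises the same error:
shape_4d = (1, 1, 3000, 1)
try:
    b, _, _ = shape_4d
except ValueError as e:
    print(e)  # too many values to unpack (expected 3)
```

So the question may be whether the data loader is handing the model an extra batch/segment dimension with my settings.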
Thanks again for your work!