Closed Aminang closed 4 months ago
Please refer to issue #6 (https://github.com/NUSTM/FacialMMT/issues/6). Even with the solution proposed there, the problem persists. The following is the error message after I executed the command. How can I solve it?

```
load MELD_multimodal_T+A+V_train...
- Creating new train data
Traceback (most recent call last):
  File "main.py", line 119, in <module>
    trg_train_data = get_multimodal_data(args, 'train')
  File "E:\FacialMMT-main\utils\util.py", line 95, in get_multimodal_data
    data = loading_multimodal_data(args, split)
  File "FacialMMT-main\utils\dataset.py", line 303, in loading_multimodal_data
    meld_text_features = meld.preprocess_data()
  File "FacialMMT-main\src\meld_bert_extraText.py", line 89, in preprocess_data
    temp_utt.append(tokenizer.tokenize(utterance))
UnboundLocalError: local variable 'tokenizer' referenced before assignment
```
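This kind of `UnboundLocalError` typically means `tokenizer` is only assigned inside a conditional branch that was skipped, for example when the pre-trained model name or path does not match. A minimal sketch of the pattern (the condition and the stand-in tokenizer are assumptions for illustration, not the actual repository code):

```python
def preprocess(utterance, plm_name):
    # `tokenizer` is bound only inside this branch; if the branch is
    # skipped, the name never exists in the local scope.
    if plm_name == "roberta-large":
        tokenizer = str.split  # stand-in for a real pre-trained tokenizer
    # When the branch above was skipped, the next line raises:
    # UnboundLocalError: local variable 'tokenizer' referenced before assignment
    return tokenizer(utterance)

print(preprocess("hello world", "roberta-large"))  # ['hello', 'world']
try:
    preprocess("hello world", "bert-base-uncased")
except UnboundLocalError as err:
    print(type(err).__name__)  # UnboundLocalError
```

So the first thing to verify is whether the branch that loads the tokenizer actually runs, i.e. whether the pre-trained model name/path matches what the code expects.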
Is it an environment setup issue?

My environment is configured according to the requirements.
https://github.com/NUSTM/FacialMMT/blob/main/src/meld_bert_extraText.py#L69
Is the path of the loaded pre-trained model correct?

After changing the path, that issue was fixed, but a new problem has appeared. The following is the error message:
It feels like the data isn't loading properly.
Check again that the path to the data entry is correct.
Hello, I printed the output dimension of text_utt_linear, as well as curr_dia_mask and batchUtt_in_dia_idx. When traversing curr_dia_mask, the value == 1 condition is satisfied while curr_utt_in_dia_idx == 0, so the program executes curr_utt_len = index - 1, which leaves curr_utt_len equal to -1. As a result, text_utt_linear[i][1:curr_utt_len+1] raises an error at the end. Here is the output:
https://github.com/NUSTM/FacialMMT/blob/main/src/models.py#L107
the number of dialogue is 2?
When batch_size=1, the above problem also exists.
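The failure mode described above can be reproduced in isolation. The shapes and values below are toy assumptions, not the real model's; with a plain list the degenerate slice is silently empty, while on a tensor the empty result then breaks downstream operations, matching the reported error:

```python
# Toy stand-in for one feature row of text_utt_linear.
text_utt_linear = [[float(t) for t in range(8)]]

# If the traversal stops at index 0 (the reported case), curr_utt_len
# becomes -1 and the slice degenerates to [1:0], i.e. nothing at all.
index = 0
curr_utt_len = index - 1                      # -> -1
bad = text_utt_linear[0][1:curr_utt_len + 1]  # slice [1:0]
print(bad)                                    # []

# A healthy case for comparison: stopping at index 5 keeps 4 elements.
index = 5
curr_utt_len = index - 1                      # -> 4
ok = text_utt_linear[0][1:curr_utt_len + 1]   # slice [1:5]
print(ok)                                     # [1.0, 2.0, 3.0, 4.0]
```

This suggests the data feeding curr_dia_mask is degenerate (the stopping condition should not fire at index 0), which points back to a data-loading problem rather than the slicing code itself.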
I don't seem to be very clear about it either.
I can ensure that the code, data, and models I upload are correctly loaded and yield the corresponding results.
So, you can check each line in debug mode. Before that, please make sure paths of all model loading and data loading are correct.
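Before stepping through in debug mode, a quick sanity check can rule out loading problems. A minimal sketch, using the checkpoint paths from the command in this thread (assumptions about your local layout; data paths can be appended to the same list):

```python
import os

# Paths taken from the command used in this thread; adjust to your setup.
paths = [
    "FacialMMT-RoBERTa/multimodal_T+A+V_RoBERTa.pt",
    "FacialMMT-RoBERTa/best_swin_RoBERTa.pt",
]
for p in paths:
    # Print OK/MISSING for each expected file before launching main.py.
    print(("OK      " if os.path.isfile(p) else "MISSING ") + p)
```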
If you have any further questions, we can continue the discussion. Issue #8 will remain open.
Hey, when I execute the command `python main.py --choice_modality T+A+V --plm_name roberta-large --load_multimodal_path FacialMMT-RoBERTa/multimodal_T+A+V_RoBERTa.pt --load_swin_path FacialMMT-RoBERTa/best_swin_RoBERTa.pt --doEval 1`, the program returns the error `UnboundLocalError: local variable 'tokenizer' referenced before assignment`. How can I fix it?