Hello, I used your training and validation datasets with qa_eval_train.py to train the qa-evaluator model, but the validation accuracy always stays around 0.5, which in turn produces the same final score, output[0][0][1], for every input. May I ask what the reason for this is? I really couldn't find the problem.
I have not made any changes to the training code, yet the final validation accuracy is always around 0.5, and the final score output[0][0][1] is always identical, for example:
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
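In case it helps, this is the kind of sanity check I have been running to see whether the model has collapsed to a single prediction. It is a minimal sketch: the names model and val_loader, and the (input_ids, attention_mask, labels) batch layout, are my assumptions about a typical PyTorch setup, not necessarily how qa_eval_train.py actually structures its data.

import torch
from collections import Counter

def check_for_collapse(model, val_loader, device="cuda"):
    # Assumed: `model` is a two-label sequence classifier whose first
    # output is the logits tensor, so output[0][0][1] is the raw logit
    # of the "correct" class for the first example in the batch.
    model.eval()
    label_counts = Counter()
    all_logits = []
    with torch.no_grad():
        for input_ids, attention_mask, labels in val_loader:
            output = model(input_ids.to(device),
                           attention_mask=attention_mask.to(device))
            all_logits.append(output[0].cpu())  # logits, shape [batch, 2]
            label_counts.update(labels.tolist())
    logits = torch.cat(all_logits)
    preds = logits.argmax(dim=1)
    # If the model has collapsed, every row of `logits` is (nearly)
    # identical: the per-class std is ~0 and one class receives all the
    # predictions, pinning accuracy at the majority-class rate (~0.5 for
    # balanced labels).
    print("label distribution:", label_counts)
    print("logit std per class:", logits.std(dim=0))
    print("predicted class counts:", Counter(preds.tolist()))

Since every score comes out as the same tensor(-0.4724, ...), I would expect the logit std here to be near zero, which would confirm the model is ignoring its input.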
Please help me, thank you.