AMontgomerie / question_generator

An NLP system for generating reading comprehension questions

issues on bert-base-cased-qa-evaluator #22

Open chaozz98 opened 1 year ago

chaozz98 commented 1 year ago

Hello, I have used your training and validation datasets with qa_eval_train.py to train the qa-evaluator model, but the validation accuracy is always around 0.5, which results in the same score in the final output[0][0][1]. May I ask what the reason for this is? I really couldn't find the problem. I have not made any changes to the training code, yet the validation accuracy always stays around 0.5, so the final score output[0][0][1] is identical for every input, for example:

tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
tensor(-0.4724, device='cuda:0')
(the same value is repeated for every QA pair)

Please help me, thank you.
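For context, here is a minimal sketch of how that score is typically read out. This assumes the published checkpoint `iarfmoose/bert-base-cased-qa-evaluator` on the Hugging Face Hub and a standard sequence-classification head; the example question and answer are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint name; the evaluator is a BERT sequence classifier
# with two classes (invalid / valid QA pair).
MODEL_NAME = "iarfmoose/bert-base-cased-qa-evaluator"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical QA pair for illustration.
question = "What is the capital of France?"
answer = "Paris"

# The question and answer are encoded together as a sequence pair.
encoded = tokenizer(question, answer, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# output[0] is the logits tensor of shape (batch_size, 2);
# output[0][0][1] is the logit for the "valid QA pair" class,
# which is the score referenced in the report above.
score = output[0][0][1]
print(score)
```

If the model has collapsed to a constant prediction, this logit will be identical for every QA pair, which would be consistent with a validation accuracy stuck around 0.5 on a balanced dataset.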