Closed SandhyaSuryanarayana closed 4 years ago
I am also facing a similar issue: when I use custom domain data to fine-tune the model, the accuracy decreases. For fine-tuning, I am using a JSON file generated from the cdqa-annotator. Is the annotated dataset messing with the model?
Hi, could you please share the size of your dataset (i.e. the number of question-answer pairs)? Did you try to fine-tune the model trained on SQuAD 1.1, or did you use the pre-trained BERT model (with no fine-tuning on SQuAD)?
I used the model trained on SQuAD 1.1, and for each paragraph there were around 3-4 question-answer pairs. I have attached my JSON file: sample_cdqa-v1.1-2.zip
Awaiting your response. I would appreciate it if you could help me with this issue.
I think your training / fine-tuning data is too small. There are only 10 paragraphs and 38 QA pairs. By comparison, SQuAD has about 80k QA pairs in its training set. The model is clearly over-fitting.
I'm using a customized CSV file (created using pdf_converter) and fine-tuned the model using a SQuAD-like dataset (created using the annotator) to build a QA model.
I'm getting wrong answers for most of the questions. I did read another similar issue that was raised here and tried varying the retriever_score_weight parameter of the .predict() method, but it does not seem to help. However, the correct answer is in the top 5 predictions when I set n_predictions to 5.
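For context, cdQA re-ranks candidate answers with a weighted combination of the retriever and reader scores, which is what retriever_score_weight controls. A minimal self-contained sketch of that weighting (the candidate answers and score values below are made up for illustration, not output from a real model):

```python
# Sketch of cdQA-style score combination:
# final = w * retriever_score + (1 - w) * reader_score, where w = retriever_score_weight.
def rank_predictions(candidates, retriever_score_weight=0.35):
    """Re-rank (answer, retriever_score, reader_score) tuples by the combined score."""
    def combined(c):
        _, retriever_score, reader_score = c
        return (retriever_score_weight * retriever_score
                + (1 - retriever_score_weight) * reader_score)
    return sorted(candidates, key=combined, reverse=True)

# Illustrative candidates only:
candidates = [
    ("answer A", 0.9, 0.2),  # strong retriever match, weak reader score
    ("answer B", 0.3, 0.8),  # weak retriever match, strong reader score
]
# A low weight lets the reader dominate; a high weight lets the retriever dominate.
print(rank_predictions(candidates, retriever_score_weight=0.2)[0][0])  # answer B
print(rank_predictions(candidates, retriever_score_weight=0.9)[0][0])  # answer A
```

This is why varying retriever_score_weight only helps when the correct answer already scores well under one of the two components; if both scores are poor (e.g. due to over-fitting), no weighting will surface it at rank 1.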
I want to know if there is any way I can improve the accuracy of my predictions, because I'm getting wrong answers only when I'm using the custom data. The accuracy is much better when I ask questions on the bnp dataset.
Also, even when I ask the same questions that I used while fine-tuning (in the JSON file), I still get wrong answers. Is there anything I can do about that?
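One way to quantify this is to compute exact match over the annotated training file itself: if the model misses even questions it was trained on, that supports the over-fitting (or pipeline-mismatch) diagnosis. A hedged sketch below walks the SQuAD-v1.1 layout the annotator produces; `predict_fn` is a hypothetical stand-in for your pipeline's predict call, and the tiny dataset is invented for illustration:

```python
import re
import string

def normalize(text):
    """SQuAD-style answer normalization: lowercase, drop punctuation,
    articles, and extra whitespace before comparing strings."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match_on_train(squad_data, predict_fn):
    """Fraction of training questions whose prediction exactly matches a gold answer.
    `predict_fn` is a hypothetical stand-in for the QA pipeline's predict call."""
    total, correct = 0, 0
    for article in squad_data["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                total += 1
                prediction = predict_fn(qa["question"])
                golds = {normalize(a["text"]) for a in qa["answers"]}
                if normalize(prediction) in golds:
                    correct += 1
    return correct / max(total, 1)

# Tiny made-up example in the annotator's SQuAD-v1.1 layout:
data = {"data": [{"paragraphs": [{"context": "BNP Paribas was founded in 2000.",
                                  "qas": [{"question": "When was BNP Paribas founded?",
                                           "answers": [{"text": "2000"}]}]}]}]}
print(exact_match_on_train(data, lambda q: "2000"))  # 1.0
```

If the score on your 38 training pairs is low, the reader is not even memorizing the training set, which points to a data-size or formatting problem rather than a hyperparameter one.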
This is the code that I'm executing.
Any help is really appreciated.
Thanks & regards, Sandhya