FadelMF opened 1 year ago
Hi, I had a similar problem before when fine-tuning a normalized model. What I can suggest is to fine-tune a non-normalized version of the model, or just pick a model that is already non-normalized. I don't know exactly what causes this issue, but that is what fixed it for me.
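For readers unsure what "normalized" means in this context: a normalized model applies L2 normalization to its output embeddings, so every vector has unit length and cosine similarity reduces to a plain dot product. A minimal numpy sketch (the vectors here are made up for illustration; in practice they would come from something like `model.encode(sentences)`):

```python
import numpy as np

# Hypothetical embedding vectors, one per sentence (illustrative values).
emb = np.array([[0.6, 0.8],
                [3.0, 4.0]])

# A "normalized" model outputs unit-length vectors: every L2 norm is ~1.0.
norms = np.linalg.norm(emb, axis=1)
print(norms)  # → [1. 5.]  (first row is already unit length, second is not)

# Manually normalizing makes all rows unit length:
unit = emb / norms[:, None]
print(np.linalg.norm(unit, axis=1))  # → [1. 1.]
```

Checking the norms of your model's embeddings this way is a quick test of whether you are working with a normalized model.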
Hi @carlesoctav, thank you for the answer and feedback. Sorry for the late response. Can you explain what a normalized versus a non-normalized model is? Also, what do you think about starting from the my-language-bert-model and fine-tuning it on my dataset, instead of using all-mpnet-base-v2 for my language and tasks? Or alternatively, taking all-mpnet-base-v2, continuing pre-training on my dataset (which contains my language), and then fine-tuning it on my downstream task?
Thank you in advance!
Hello sir. @nreimers
I fine-tuned S-BERT on my summarization dataset, training with MultipleNegativesRankingLoss and evaluating with the triplet evaluator. My anchor sentence is the title of an online news article, the positive sentences are the extractive summary sentences, and the negative sentences are the remaining article sentences that are not part of the summary.
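For reference, MultipleNegativesRankingLoss scores each anchor against every positive in the batch: the matching positive should win, and all other in-batch positives act as negatives. A minimal numpy sketch of the score matrix it optimizes (all names and values here are illustrative, not the library's internals):

```python
import numpy as np

def mnrl_scores(anchors, positives, scale=20.0):
    """Cosine-similarity score matrix as used by an in-batch negatives loss.
    Row i should take its maximum on the diagonal (anchor i vs. its own
    positive); every other column in row i acts as an in-batch negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    return scale * a @ p.T

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
positives = anchors + 0.01 * rng.normal(size=(4, 8))  # positives close to anchors
scores = mnrl_scores(anchors, positives)
# Each anchor's best match should be its own positive (the diagonal):
print(np.argmax(scores, axis=1))
```

The training objective is then a softmax cross-entropy over each row with the diagonal as the target, which is why batch composition matters so much for this loss.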
The evaluator reports a good result: 96% accuracy. But when I use the fine-tuned model for semantic search, every similarity score is 0.9999+ or 1.0. Am I doing something wrong?
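A quick way to diagnose this symptom is to compute pairwise cosine similarities for a handful of clearly unrelated sentences: if the embedding space has collapsed, every pair scores near 1.0 regardless of content. A small numpy sketch of that check, using synthetic vectors in place of real model outputs:

```python
import numpy as np

def cosine_sim_matrix(emb):
    """Pairwise cosine similarities between the rows of emb."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return unit @ unit.T

rng = np.random.default_rng(1)

# Healthy embeddings: distinct sentences get clearly different vectors,
# so off-diagonal similarities are well below 1.0.
healthy = rng.normal(size=(5, 16))
print(cosine_sim_matrix(healthy).round(2))

# Collapsed embeddings: every sentence maps to (almost) the same vector,
# so every similarity is ~1.0 -- the symptom described above.
collapsed = np.ones((5, 16)) + 1e-4 * rng.normal(size=(5, 16))
print(cosine_sim_matrix(collapsed).round(2))
```

If real embeddings from the fine-tuned model look like the second case for unrelated inputs, the problem is in the model or training setup, not in the semantic-search code.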
I really need help with my thesis. Thank you.