ajinkya2903 opened this issue 3 years ago (status: Open)
0.6 is not that high. But the results depend on your training data and the chosen loss function.
I chose ContrastiveLoss as the loss function, and I had a considerable amount of training data. Is my choice of loss function making a difference here? I know 0.6 is not that high, but for some other irrelevant pairs it gives a cosine score around 0.8. Is there any way to fix this?
Thanks
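One thing worth checking: the standard contrastive loss (Hadsell et al., the formulation sentence-transformers' `ContrastiveLoss` is based on) only pushes dissimilar pairs apart until their distance exceeds the margin, then contributes zero gradient, so negatives can settle at moderate similarity rather than being driven far apart. A minimal numpy sketch of that formula, assuming a hypothetical margin of 0.5:

```python
import numpy as np

def contrastive_loss(dist, label, margin=0.5):
    # label 1 = similar pair, 0 = dissimilar pair.
    # Similar pairs are pulled together (squared distance);
    # dissimilar pairs are pushed apart ONLY while their
    # distance is still below the margin.
    pos = label * dist ** 2
    neg = (1 - label) * np.maximum(margin - dist, 0.0) ** 2
    return 0.5 * (pos + neg)

# A dissimilar pair already farther apart than the margin
# contributes zero loss, so training stops separating it further.
print(contrastive_loss(0.8, 0))  # 0.0 -> no gradient for this negative
print(contrastive_loss(0.2, 0))  # nonzero -> still being pushed apart
```

If that margin behaviour is the culprit, increasing the margin or trying a harder-negative-focused loss (e.g. `OnlineContrastiveLoss` or `MultipleNegativesRankingLoss` in sentence-transformers) are common alternatives to experiment with.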
Hey @nreimers, I have fine-tuned distilbert-nli-mean-tokens on my custom data. It produces an embedding for every input sentence, but it gives high cosine scores for irrelevant sentence pairs, and I don't understand why. Can you suggest a solution for this?
Thanks
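For anyone debugging a similar issue: it helps to look at the distribution of cosine scores over a batch of known-relevant and known-irrelevant pairs, rather than judging single scores in isolation, since "high" is relative to what the model produces overall. A minimal sketch of the cosine computation itself, using toy vectors standing in for `model.encode(...)` outputs (the vector values here are made up for illustration):

```python
import numpy as np

def cosine(u, v):
    # cosine similarity = dot product of the L2-normalized vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy 4-dim "embeddings" in place of real model outputs
a = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0, 0.0])  # same direction as a
c = np.array([0.0, 1.0, 0.0, 1.0])  # orthogonal to a

print(cosine(a, b))  # ~1.0 (identical direction)
print(cosine(a, c))  # 0.0 (orthogonal)
```

Comparing the score histograms of relevant vs. irrelevant pairs shows whether the model separates them at all; if the two distributions overlap heavily, more or harder negative training pairs are usually needed.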