Hi,
I am trying to understand whether there is any need for calibration of sentence-embedding models. My use case is that I fine-tune one of the sentence-embedding models and use the embeddings to compute the cosine similarity of two text inputs to decide whether they are related.
Since I train with the online contrastive loss and the cosine distance metric, I assume that the cosine score of two embeddings is directly proportional to how related the input texts are.
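To make the comparison step concrete, here is a minimal sketch of what I do with the embeddings. The vectors below are toy stand-ins for the fine-tuned model's output; only the cosine computation itself matters here:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for the model's output on two text inputs
emb1 = np.array([0.1, 0.3, 0.5])
emb2 = np.array([0.1, 0.3, 0.5])
emb3 = np.array([0.5, -0.2, 0.1])

print(cosine_similarity(emb1, emb2))  # identical vectors score highest
print(cosine_similarity(emb1, emb3))
```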
Am I wrong in assuming so? Is there a need to evaluate the model for calibration and to retrain or modify it using a calibration algorithm?
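For context on what I mean by "calibration algorithm": something like Platt scaling, which maps raw cosine scores to probabilities. The following is only a toy sketch with synthetic scores and labels (not real model output), fitting a sigmoid by gradient descent on log loss:

```python
import numpy as np

def platt_scale(scores: np.ndarray, labels: np.ndarray,
                lr: float = 0.1, steps: int = 2000) -> tuple[float, float]:
    """Fit p(relevant) = sigmoid(a * score + b) via gradient descent (Platt scaling)."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels  # derivative of log loss w.r.t. the logit
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Synthetic cosine scores and relevance labels, for illustration only
scores = np.array([0.9, 0.8, 0.75, 0.4, 0.3, 0.1])
labels = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

a, b = platt_scale(scores, labels)
calibrated = 1.0 / (1.0 + np.exp(-(a * scores + b)))  # probabilities in (0, 1)
```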
Thanks.