Hi Nils,
I have English, Chinese, and Indonesian text data for a semantic search use case.
I have sentence pairs in various language combinations, each labeled with a similarity score.
I tried a pretrained XLM-R sentence embedding model, and it performs better than a model fine-tuned on my sentence pair data with cosine similarity loss.
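For context, this is roughly the fine-tuning setup I used (a minimal sketch with the sentence-transformers fit API; the model name, example pairs, and hyperparameters are placeholders, not my exact configuration):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Pretrained multilingual XLM-R sentence embedding model (placeholder name)
model = SentenceTransformer("paraphrase-xlm-r-multilingual-v1")

# Scored sentence pairs in mixed language combinations; labels in [0, 1]
train_examples = [
    InputExample(texts=["How is the weather today?", "今天天气怎么样？"], label=0.95),
    InputExample(texts=["I like cats.", "Saya suka anjing."], label=0.30),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Fine-tune with cosine similarity loss on the scored pairs
train_loss = losses.CosineSimilarityLoss(model=model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```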
Q1. To get good semantic similarity with aligned representations across the 3 languages, is it better to use MSE loss or cosine similarity loss?
Q2. Would you recommend training a student sentence embedding model for only the 3 languages above and then experimenting with that?
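For Q2, is something like the setup below what you have in mind? (A minimal sketch following the sentence-transformers multilingual knowledge-distillation example, where MSE loss pulls the student's embeddings toward a teacher's; the model names and parallel-data file paths are placeholders I would swap for my own data.)

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: a strong monolingual (English) sentence embedding model (placeholder name)
teacher_model = SentenceTransformer("paraphrase-distilroberta-base-v2")

# Student: XLM-R with mean pooling, to be aligned to the teacher's vector space
word_embedding_model = models.Transformer("xlm-roberta-base")
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
student_model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Parallel sentences (tab-separated source/target); hypothetical file paths
train_data = ParallelSentencesDataset(student_model=student_model, teacher_model=teacher_model)
train_data.load_data("parallel-sentences-en-zh.tsv.gz")
train_data.load_data("parallel-sentences-en-id.tsv.gz")
train_dataloader = DataLoader(train_data, shuffle=True, batch_size=32)

# MSE loss: student embeddings of source and target are both regressed
# onto the teacher's embedding of the source sentence
train_loss = losses.MSELoss(model=student_model)
student_model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=1000)
```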