Yes, this is currently a work in progress. We hope to start the training process soon.
For German & English, there are some MSMARCO English-German models on the hub: https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch
Interesting. To avoid double training, I used
teacher_model_name = 'multi-qa-MiniLM-L6-cos-v1'
student_model_name = 'sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2'
and got the result after 2,000 steps at https://drive.google.com/drive/folders/1--U-RQJscmfiZ7BxCzayLBLImO10HsRc?usp=sharing, which can be reused.
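For reference, the training followed the multilingual knowledge-distillation recipe from the sentence-transformers make_multilingual example; a minimal sketch, assuming a tab-separated EN-DE parallel-sentence file (the file name, hyperparameters, and output path below are placeholders):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: strong English model; student: multilingual model that learns to
# map translations onto the teacher's embedding space.
teacher_model = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')
student_model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

train_data = ParallelSentencesDataset(student_model=student_model, teacher_model=teacher_model)
# Placeholder file: one "english_sentence<TAB>german_sentence" pair per line.
train_data.load_data('parallel-sentences-en-de.tsv.gz')

train_dataloader = DataLoader(train_data, shuffle=True, batch_size=64)
# The student is trained to reproduce the teacher's embeddings (MSE loss).
train_loss = losses.MSELoss(model=student_model)

student_model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=1000,
    output_path='output/multi-qa-MiniLM-en-de',  # placeholder output directory
)
```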
Eagerly awaiting the results of this training!
I just made this comment in another similar issue - it should solve this problem.
Has anyone here tried the newest multilingual Cross-Encoder model? It is built on multilingual MiniLMv2 and trained on a multilingual version of the MS MARCO dataset. It doesn't appear to be in the SBert documentation, but I just stumbled upon it while browsing HF. https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1
There isn't any benchmark data for it, but this paper seems to have used a fairly similar process and shows that these multilingual datasets/models are very competitive with their monolingual counterparts. https://arxiv.org/pdf/2108.13897.pdf
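For anyone who wants to try it, the model loads with the regular CrossEncoder class; a minimal sketch (the query/passage pairs are made-up examples):

```python
from sentence_transformers import CrossEncoder

# Multilingual cross-encoder mentioned above; scores (query, passage) pairs.
model = CrossEncoder('cross-encoder/mmarco-mMiniLMv2-L12-H384-v1')

# Made-up English and German pairs; a higher score means more relevant.
scores = model.predict([
    ('How many people live in Berlin?', 'Berlin has around 3.7 million inhabitants.'),
    ('Wie viele Menschen leben in Berlin?', 'New York City liegt an der Ostküste der USA.'),
])
print(scores)  # one relevance score per pair
```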
I have a corpus of 144,491 entries, each around 2,000 characters long, with phrases in English and German.
Each entry is monolingual.
My goal is to enter a query, such as a question or a set of keywords, and have it return the index of the best-fitting entry in the corpus.
I am currently using sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 with a
This gives reasonable results, but is there a better approach?
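For context, my setup is roughly the standard bi-encoder similarity search, something like this (the corpus and query below are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

# Placeholder corpus; the real one has 144,491 monolingual EN/DE entries.
corpus = ['First English entry ...', 'Erster deutscher Eintrag ...']
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = 'example question or keywords'
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine-similarity search for the best-fitting corpus entries.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)[0]
for hit in hits:
    print(hit['corpus_id'], hit['score'])
```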
I am asking because this is an asymmetric semantic search, which, according to your description, should use the MS MARCO models, yet those are English-only and https://www.sbert.net/examples/training/ms_marco/multilingual/README.html seems unfinished.
Is the idea to
Which approach using SBERT do you suggest?