Open SuvroBaner opened 4 years ago
Our main objective is to achieve the same result you obtained using "distiluse-base-multilingual-cased" (repo_link) for Hindi and English. Could you please share how to train models so that their vector spaces are aligned, independent of the language? Thanks.
Hi @nreimers, Any update on this?
Hi, the code and paper should be released in March. Unfortunately, I cannot say much more about it here yet.
Hello @nreimers, May I know if you have published this? If yes, can you please share it with me.
Regards,
Hi @chiragsanghvi10 Yes, the paper is released: https://arxiv.org/abs/2004.09813
The code is integrated in this repository: https://www.sbert.net/examples/training/multilingual/README.html
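For readers who want the gist of how the aligned vector spaces are obtained: the paper above trains a multilingual student model so that its embeddings of an English sentence *and* of its translation both match a monolingual English teacher's embedding (multilingual knowledge distillation). The real training uses transformer encoders from the linked repository; the snippet below is only a minimal numpy sketch of the MSE objective, where the "student" is a linear map over toy feature vectors and the teacher embeddings are random stand-ins.

```python
import numpy as np

# Sketch of the distillation objective: for parallel pairs (english, hindi),
# train the student so that student(english) ~ teacher(english) and
# student(hindi) ~ teacher(english). Here "student" is just a linear map W
# over toy features, and T holds stand-in teacher embeddings.
rng = np.random.default_rng(42)
n_pairs, feat_dim, emb_dim = 64, 16, 8

X_en = rng.normal(size=(n_pairs, feat_dim))   # toy features of English sentences
X_hi = rng.normal(size=(n_pairs, feat_dim))   # toy features of Hindi translations
T = rng.normal(size=(n_pairs, emb_dim))       # teacher embeddings of the English side

W = rng.normal(scale=0.1, size=(feat_dim, emb_dim))  # student parameters

def loss(W):
    # MSE of student-vs-teacher for both the English and the translated side
    return np.mean((X_en @ W - T) ** 2) + np.mean((X_hi @ W - T) ** 2)

initial = loss(W)
lr = 0.01
for _ in range(200):
    # gradient of the two MSE terms with respect to W
    grad = 2 * X_en.T @ (X_en @ W - T) / (n_pairs * emb_dim) \
         + 2 * X_hi.T @ (X_hi @ W - T) / (n_pairs * emb_dim)
    W -= lr * grad
final = loss(W)
print(final < initial)  # the distillation loss decreases
```

In the actual repository this objective is applied to transformer encoders over large parallel corpora; the sketch only shows why the student's English and Hindi embeddings end up in the same space as the teacher's.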
Best Nils Reimers
@nreimers thank you for sharing the paper :) Really good work, but the documentation link for the multilingual models is still broken.
Which link, i.e. what is the URL? Then I can fix it.
We are testing BERT on a cross-lingual dataset with different permutations: either both sentences are in English, both are in Hindi, or one is in English and the other in Hindi, as explained in the attached file.
Our observations are in the attached file. We are not quite sure if our approach is correct. Could you please share more information on this? Thanks. bert_models_benchmarking.xlsx
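For what it's worth, the kind of permutation check described above (en-en, hi-hi, en-hi pairs) is usually scored with cosine similarity between sentence embeddings. With an aligned model such as distiluse-base-multilingual-cased you would obtain the vectors via `model.encode(...)`; the sketch below uses small stand-in vectors instead of a real model so it runs as-is, purely to illustrate the comparison.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-in embeddings: in an aligned vector space, a sentence and its
# translation should map to nearby points, regardless of language.
emb_en = np.array([0.9, 0.1, 0.2])      # e.g. an English sentence
emb_hi = np.array([0.85, 0.15, 0.25])   # its Hindi translation (stand-in)
emb_other = np.array([-0.2, 0.9, 0.1])  # an unrelated sentence (stand-in)

sim_pair = cosine(emb_en, emb_hi)       # high: translation pair
sim_unrelated = cosine(emb_en, emb_other)  # low: unrelated pair
print(sim_pair > sim_unrelated)
```

If the en-hi scores from a model track the en-en scores for the same sentence pairs, the vector spaces are well aligned; a large gap between the permutations is the usual symptom of a model without cross-lingual alignment.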