I think you didn't do anything wrong, because I tried other Chinese sentences and they worked fine.
For instance, I tried '能帮我<mask>' ("Can you help me <mask>").
Maybe '猫喜欢<mask>' ("Cats like <mask>") would be worth trying too.
In order to get the probability of a specified target word, I think you need to revise the fill_mask function in fairseq/models/roberta/hub_interface.py. At line 187, prob = logits.softmax(dim=0), prob holds the probabilities for all the words in the dictionary. You would need to encode the target word using the dictionary first and then read off the probability at the encoded index, roughly as in the sketch below.
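A minimal sketch of that idea, assuming a recent fairseq where the hub interface exposes bpe, task.source_dictionary, and task.mask_idx (names taken from hub_interface.py; the helper target_prob and the xlmr.base hub name are just for illustration, and the sketch assumes the target encodes to a single BPE piece):

```python
import torch

xlmr = torch.hub.load('pytorch/fairseq', 'xlmr.base')
xlmr.eval()

def target_prob(model, masked_sentence, target_word):
    """Probability of target_word at the single <mask> in masked_sentence."""
    # Mirror fill_mask(): BPE-encode the text around the mask token.
    spans = masked_sentence.split('<mask>')
    bpe = ' <mask> '.join(model.bpe.encode(s.rstrip()) for s in spans).strip()
    tokens = model.task.source_dictionary.encode_line(
        '<s> ' + bpe + ' </s>', append_eos=False, add_if_not_exist=False
    ).long()
    masked_index = (tokens == model.task.mask_idx).nonzero()

    with torch.no_grad():
        features, _ = model.model(tokens.unsqueeze(0), features_only=False)
    logits = features[0, masked_index, :].squeeze()
    prob = logits.softmax(dim=0)  # distribution over the full vocabulary

    # Encode the target with the same BPE, then look it up in the dictionary.
    # If the word splits into several pieces there is no single index, so
    # this sketch only handles single-piece targets.
    target_bpe = model.bpe.encode(' ' + target_word).strip()
    target_idx = model.task.source_dictionary.index(target_bpe)
    return prob[target_idx].item()

print(target_prob(xlmr, 'Cats likes <mask>.', 'sleeping'))
```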
Hope this helps.
This issue has been automatically marked as stale. If this issue is still affecting you, please leave any comment (for example, "bump"), and we'll keep it open. We are sorry that we haven't been able to prioritize it yet. If you have any new additional information, please include it with your comment!
Closing this issue after a prolonged period of inactivity. If this issue is still present in the latest release, please create a new issue with up-to-date information. Thank you!
Hello,
I am trying to get masked word predictions for languages other than English with XLM-RoBERTa.
The English example worked more or less fine.
Maybe I am doing something wrong. How do I use the multilingual XLM-RoBERTa for the masked-word task? Ideally, I want to query the model with target words. For instance,
xlmr.fill_mask('Cats likes <mask>.', targets=['sleeping'])
so that I would get the probability for the word "sleeping", and then do the same for other languages. A minimal example of the call I ran is below. Thanks! Anna
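Here is a minimal sketch of what I ran (I loaded the published xlmr.large checkpoint via torch.hub; as far as I can tell, the stock fill_mask only takes a topk argument, not targets):

```python
import torch

# Load a published XLM-R checkpoint through torch.hub.
xlmr = torch.hub.load('pytorch/fairseq', 'xlmr.large')
xlmr.eval()  # disable dropout for deterministic predictions

# fill_mask returns the top-k candidates as
# (filled sentence, probability, predicted token) tuples.
print(xlmr.fill_mask('Cats likes <mask>.', topk=5))
```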