facebookresearch / MUSE

A library for Multilingual Unsupervised or Supervised word Embeddings

Using --dico_train identical_char still needs dictionaries #42

Closed DavidGOrtega closed 6 years ago

DavidGOrtega commented 6 years ago

According to the docs:

when set to "identical_char" it will use identical character strings between source and target languages to form a vocabulary.

I understood that the dictionary would be created from the given corpus.

DavidGOrtega commented 6 years ago

To clarify the issue a bit:

python supervised.py --src_lang en --tgt_lang en --src_emb ./../x/embeddings/wiki.en.vec --tgt_emb ./../x/embeddings/car_brands.vec --n_refinement 5 --dico_train identical_char --max_vocab -1
INFO - 04/17/18 19:20:43 - 0:04:50 - Starting iteration 0...
Traceback (most recent call last):
  File "supervised.py", line 96, in <module>
    evaluator.all_eval(to_log)
  File "/mnt/dgortega/MUSE/src/evaluation/evaluator.py", line 190, in all_eval
    self.word_translation(to_log)
  File "/mnt/dgortega/MUSE/src/evaluation/evaluator.py", line 94, in word_translation
    method=method
  File "/mnt/dgortega/MUSE/src/evaluation/word_translation.py", line 89, in get_word_translation_accuracy
    dico = load_dictionary(path, word2id1, word2id2)
  File "/mnt/dgortega/MUSE/src/evaluation/word_translation.py", line 49, in load_dictionary
    assert os.path.isfile(path)
AssertionError
aconneau commented 6 years ago

Hi, when using identical_char, the training dictionary will indeed be built from the words that appear in both the source and target vocabularies.
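
A rough sketch of what that amounts to (the word lists below are made up for illustration):

# Illustrative sketch only: identical_char effectively pairs every word string
# that occurs in both the source and target vocabularies with itself.
src_vocab = ["the", "car", "bmw", "toyota", "apple"]   # hypothetical source words
tgt_vocab = ["bmw", "toyota", "ford"]                  # hypothetical target words

train_dico = [(w, w) for w in src_vocab if w in set(tgt_vocab)]
print(train_dico)  # [('bmw', 'bmw'), ('toyota', 'toyota')]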

The issue that you're getting is not linked to the training dictionary but to the validation dictionary, which is used to evaluate how well the alignment model is doing after each Procrustes iteration.

By default, MUSE uses the dictionary located at data/crosslingual/dictionaries/$src_lang-$tgt_lang.5000-6500.txt. Since you're using "en" for both src and tgt, there is no such evaluation dictionary. But you can simply create one manually by taking the English words that appear in data/crosslingual/dictionaries/en-fr.5000-6500.txt and duplicating the first column. This is actually what we did for the English-English experiments in the Appendix of our paper.
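
A minimal sketch of that duplication step, assuming the default dictionary layout mentioned above:

# Sketch: build an en-en evaluation dictionary by keeping the English (first)
# column of en-fr.5000-6500.txt and pairing each word with itself.
src_path = "data/crosslingual/dictionaries/en-fr.5000-6500.txt"
out_path = "data/crosslingual/dictionaries/en-en.5000-6500.txt"

with open(src_path) as fin, open(out_path, "w") as fout:
    for line in fin:
        en_word = line.split()[0]                    # first column is the English word
        fout.write(en_word + " " + en_word + "\n")   # duplicate it as source and target

If the resulting file lives somewhere else, its path can be passed with --dico_eval.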

Thanks

DavidGOrtega commented 6 years ago

Hi, right after creating this issue I tried exactly that, but:

1) using the dictionary en-en.5000-6500.txt
2) using a custom dictionary with the words that appear in the target embeddings

Both experiments fail; I even followed the solutions given in #40.

My target embedding is indeed very small, built just from car brands. My hope was that MUSE would be able to align such a small embedding; in fact, 62 out of 67 words in the car-brands corpus are present in the source.

Would MUSE be able to work in those circumstances? I'm doing this experiment because, theoretically, there is no minimum corpus size for creating an embedding, and I was interested to see whether MUSE would align the words...

DavidGOrtega commented 6 years ago

When I try to run with my small dictionary, which is in fact just the words of the embeddings, I find:

Traceback (most recent call last):
  File "supervised.py", line 96, in <module>
    evaluator.all_eval(to_log)
  File "/mnt/dgortega/MUSE/src/evaluation/evaluator.py", line 192, in all_eval
    self.dist_mean_cosine(to_log)
  File "/mnt/dgortega/MUSE/src/evaluation/evaluator.py", line 157, in dist_mean_cosine
    src_emb = src_emb / src_emb.norm(2, 1, keepdim=True).expand_as(src_emb)
TypeError: norm received an invalid combination of arguments - got (int, int, keepdim=bool), but expected one of:
 * no arguments
 * (float p)
 * (float p, int dim)
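
(For context: the rejected keepdim argument points at an older torch build; a version-agnostic way to write the same row-wise normalization, as a sketch rather than the library's code, is below.)

# Sketch (not MUSE code): reshape the per-row L2 norm back to a column before
# broadcasting, which avoids the keepdim argument entirely.
import torch

src_emb = torch.randn(10, 300)                 # stand-in embedding matrix
norms = src_emb.norm(2, 1).view(-1, 1)         # L2 norm per row, as a column
src_emb = src_emb / norms.expand_as(src_emb)   # same result as keepdim=True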
DavidGOrtega commented 6 years ago

Hi, the previous issue was a PyTorch error. After reinstalling, it got further. However, I now hit another error:

scores = emb2.mm(emb1[i:min(n_src, i + bs)].transpose(0, 1)).transpose(0, 1)
ValueError: result of slicing is an empty tensor

which is related to #39 and #31. I have changed my --dico_max_rank as suggested and I still get the same error. My target embedding corpus is just 67 words, as stated before...
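
(A rough illustration, with made-up sizes, of why such a tiny vocabulary trips this line: once the batch start index passes the number of rows actually present, the slice selects nothing, which older torch versions rejected outright.)

# Illustration only: batching past the 67 rows of a tiny embedding matrix
# yields a zero-row slice; older torch raised "result of slicing is an empty
# tensor" on exactly this kind of slice.
import torch

emb1 = torch.randn(67, 300)              # stand-in for the 67-word embedding
n_src, bs = 1500, 128                    # sizes assumed from a larger setting
for i in range(0, n_src, bs):
    batch = emb1[i:min(n_src, i + bs)]   # empty once i >= 67
    if batch.shape[0] == 0:
        print("empty batch at i =", i)
        break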

viking-sudo-rm commented 6 years ago

This is the same error I am getting in #40. Based on #39, it seems to come from having a small number of words in the vocabulary, but I'm still not sure how to address it.

viking-sudo-rm commented 6 years ago

I was able to figure out how to resolve #40 by looking at src/evaluation/evaluator.py. It might prove helpful to you as well.

DavidGOrtega commented 6 years ago

@viking-sudo-rm In my case, because my target corpus is barely 67 words, I also had to change line 136 of evaluation/word_translation.py:

top_matches = scores.topk(100, 1, True)[1]

I changed the top-k from 100 to 10:

top_matches = scores.topk(10, 1, True)[1]
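
(A small illustration with made-up numbers of why the unpatched call fails here: topk cannot ask for more candidates than the scored dimension holds.)

# Illustration: topk(k, dim, ...) requires k <= the size of that dimension,
# so asking for the top 100 out of only 67 candidates fails.
import torch

scores = torch.randn(5, 67)           # 5 query words scored against 67 targets
top10 = scores.topk(10, 1, True)[1]   # fine: 10 <= 67
# scores.topk(100, 1, True)           # would raise: k=100 > 67 candidates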

glample commented 6 years ago

Thank you @DavidGOrtega, you are right: topk(10, ...) is enough since we only report precision at 1, 5, and 10. This is fixed in https://github.com/facebookresearch/MUSE/commit/0546ef8addc4f1769ef558b4b6353fa9078d6d26