[Open] sayeedk06 opened this issue 5 years ago
Hi, did you solve this? I'm getting the same problem.
Well, in my case it turned out I was getting the error because I wasn't using the full vocabulary. If you don't use the full vocab size, you need to set `full_vocab` to `False` in the code; by default it is hard-coded to `True`.
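For reference, this is the call the traceback below points at (`src/trainer.py`, line 251, inside `export()`). A minimal sketch of the workaround described above; the matching target-side call is assumed to be symmetric:

```python
# src/trainer.py, inside Trainer.export() -- line 251 per the traceback.
# Workaround sketch: reload only the --max_vocab most frequent words
# instead of the entire .vec file by switching full_vocab to False.
params.src_dico, src_emb = load_embeddings(params, source=True, full_vocab=False)
# Target-side call (assumed symmetric) gets the same change:
params.tgt_dico, tgt_emb = load_embeddings(params, source=False, full_vocab=False)
```

The trade-off is that the exported embeddings then only cover the truncated vocabulary rather than every word in the pre-trained file.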
Thanks! This was the solution for me as well
```
INFO - 07/12/19 11:07:16 - 0:24:21 - * Reloading the best model from /home/ubuntu/cse495/test/MUSE/dumped/debug/xtoktkxg3w/best_mapping.pth ...
INFO - 07/12/19 11:07:16 - 0:24:21 - Reloading all embeddings for mapping ...
INFO - 07/12/19 11:09:36 - 0:26:41 - Loaded 2519370 pre-trained word embeddings.
Traceback (most recent call last):
  File "unsupervised.py", line 186, in <module>
    trainer.export()
  File "/home/ubuntu/cse495/test/MUSE/src/trainer.py", line 251, in export
    params.src_dico, src_emb = load_embeddings(params, source=True, full_vocab=True)
  File "/home/ubuntu/cse495/test/MUSE/src/utils.py", line 406, in load_embeddings
    return read_txt_embeddings(params, source, full_vocab)
  File "/home/ubuntu/cse495/test/MUSE/src/utils.py", line 309, in read_txt_embeddings
    embeddings = np.concatenate(vectors, 0)
MemoryError
```
I get this error after running the following command:

```
python unsupervised.py --src_lang en --tgt_lang bn --src_emb data/wiki.en.vec --tgt_emb data/wiki.bn.vec --n_refinement 5 --max_vocab 15000 --n_epochs 2 --batch_size 25
```
What's the issue here? I even decreased `batch_size` and `max_vocab`.
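For what it's worth, `--max_vocab` and `--batch_size` only bound the training-time vocabulary; the traceback shows the crash happens afterwards, in `trainer.export()`, which reloads the whole embedding file with `full_vocab=True`. A rough back-of-envelope for why that reload can exhaust RAM (the dtype is an assumption; the word count and dimensionality come from the log and the standard fastText wiki vectors):

```python
# Rough estimate of the export-time memory footprint.
n_words = 2_519_370      # from the log: "Loaded 2519370 pre-trained word embeddings"
dim = 300                # wiki.en.vec / wiki.bn.vec are 300-dimensional
bytes_per_float = 4      # assuming float32; float64 would double this

gb = n_words * dim * bytes_per_float / 1024**3
print(f"~{gb:.1f} GB for one embedding matrix")  # ~2.8 GB

# np.concatenate(vectors, 0) briefly holds both the per-row arrays and the
# combined result, so peak usage is roughly double that, per language.
```

So on a machine with only a few GB of free memory, the export step can fail even when training itself runs fine, which is consistent with the `full_vocab=False` workaround above.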