pnmartinez opened 4 years ago
Hi Pablo,
we used the nltk English stopwords as the vocab for our model. It looks like nltk increased their vocab size (it used to be 9,887 words and now includes 9,889 words). In order to use our pre-trained model, you should download the same vocab we used (or remove the newly added stopwords). I will probably fix it in the coming days.
let me know if you need more help.
best, Elior
Hello @eliorav !
I see you've closed the issue, but the Google Colab is still showing the same error.
I think we can keep the issue open until the Colab is properly updated, what do you think?
Hello @eliorav,
I do need further help: can you point me to the nltk version that you used? Thank you.
Hello! First of all, thank you for the contribution!
**The error**

I was trying to see what the model comes up with, so I went to the linked Google Colab. However, there may be a torch version problem (just my hypothesis), as I am getting a dimension-mismatch error while initializing the generators and discriminators (the cells under "Prepare the networks"). See the log below.

**Discussion**

The Colab notebook should be updated in order for it to be useful. Do you have any idea what may be happening here? @eliorav
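Given the explanation earlier in the thread, the dimension mismatch is likely the 9,887-vs-9,889 vocab difference rather than torch itself. A hedged sketch of how one might diagnose this when loading a checkpoint: compare tensor shapes and list every disagreement instead of failing on the first. The helper name `shape_mismatches` and the sample shapes are my own, not from the repo:

```python
# Hedged sketch (names and shapes are assumptions): when loading a pre-trained
# state dict raises a size-mismatch error, report exactly which tensors
# disagree between the model and the checkpoint.
def shape_mismatches(model_shapes, ckpt_shapes):
    """Return {name: (model_shape, ckpt_shape)} for tensors whose shapes differ."""
    return {
        name: (model_shapes[name], ckpt_shapes[name])
        for name in model_shapes.keys() & ckpt_shapes.keys()
        if model_shapes[name] != ckpt_shapes[name]
    }

# Stand-in shapes: an embedding sized for 9,889 words vs a checkpoint trained
# with 9,887 (the counts mentioned earlier in the thread).
model = {"embedding.weight": (9889, 128), "fc.weight": (64, 128)}
ckpt  = {"embedding.weight": (9887, 128), "fc.weight": (64, 128)}
print(shape_mismatches(model, ckpt))
```

If the only offending tensors are the vocab-sized ones, pinning nltk (or trimming the two extra stopwords) should fix the Colab without touching the torch version.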