Open Liranbz opened 4 years ago
Yeah, I'm trying to train with word2vec. Word2vec embeddings come in several sizes (e.g. 100d, 200d, 300d), i.e. a 1-D array with 100 values per word for the 100d model.
Can anyone help me figure out where I should change the dimension values? For example, what values should be replaced in the lines below?

```python
self.embedding(input).view(1, 1, -1)
return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)
```
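In case it helps: in the tutorial the embedding dimension is set by the size passed to `nn.Embedding`, not by those two lines (the `view(1, 1, -1)` only reshapes, and `-1` infers the embedding dimension automatically). A minimal sketch, assuming you already have your word2vec vectors as a tensor (the matrix below is random, standing in for the real one):

```python
import torch
import torch.nn as nn

# Hypothetical: a vocabulary of 5 words with 300-d word2vec vectors.
# In practice you would build this matrix from your trained word2vec model.
vocab_size, embed_dim = 5, 300
pretrained_weights = torch.randn(vocab_size, embed_dim)

# from_pretrained builds an nn.Embedding initialized with the word2vec
# vectors; freeze=False lets them be fine-tuned during training.
embedding = nn.Embedding.from_pretrained(pretrained_weights, freeze=False)

word_index = torch.tensor([2])                    # index of some word
embedded = embedding(word_index).view(1, 1, -1)   # same view as the tutorial
print(embedded.shape)  # torch.Size([1, 1, 300]) -- the -1 picks up embed_dim
```

So the `view` lines can stay as they are; what has to change is the embedding layer itself and anything downstream that assumed its old dimension.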
@Liranbz Did you get this sorted out?
@NarenInD @Liranbz Have you found a solution? I have been looking for the same thing. Thank you.
torchtext currently supports pretrained GloVe, FastText, and CharNGram embeddings. Other embeddings can be loaded using `torchtext.vocab.Vectors`. If anyone is interested, I can edit the tutorial to show how you could use those.
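For anyone looking for a starting point: `torchtext.vocab.Vectors` reads a plain-text file with one `word v1 v2 …` line per token (the word2vec `.vec` text format). A minimal sketch of the same idea in plain PyTorch, in case torchtext isn't installed (the file name and toy vectors are made up):

```python
import torch

# Write a tiny embeddings file in the word2vec text format: one
# "word v1 v2 ..." line per token. The 3-d values here are toy data.
with open("toy.vec", "w") as f:
    f.write("hello 0.1 0.2 0.3\n")
    f.write("world 0.4 0.5 0.6\n")

# Hand-rolled loader mirroring what torchtext.vocab.Vectors produces:
# a word-to-index mapping (stoi) and a (vocab_size x dim) weight matrix.
stoi, rows = {}, []
with open("toy.vec") as f:
    for line in f:
        word, *values = line.split()
        stoi[word] = len(rows)
        rows.append([float(v) for v in values])
vectors = torch.tensor(rows)

print(vectors.shape)           # torch.Size([2, 3])
print(vectors[stoi["world"]])  # the vector for "world"
```

With torchtext available you would instead do `vecs = torchtext.vocab.Vectors(name="toy.vec")` and read `vecs.stoi` / `vecs.vectors`; either way the resulting matrix can be handed to `nn.Embedding.from_pretrained`.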
Hi, thank you for your tutorial! I tried to replace the embedding with pre-trained word embeddings such as word2vec; here is my code:
The dimension size of this word2vec model is 300. Do I need to change anything else in my Encoder?
Thank you!
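For what it's worth, with 300-d word2vec vectors the GRU's input size must match the embedding dimension. A sketch of an encoder along the lines of the tutorial's `EncoderRNN`, with the embedding layer swapped for pretrained vectors (the weight matrix below is random, standing in for a real word2vec matrix):

```python
import torch
import torch.nn as nn

class EncoderRNN(nn.Module):
    def __init__(self, pretrained_weights, hidden_size):
        super().__init__()
        embed_dim = pretrained_weights.size(1)  # 300 for 300-d word2vec
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding.from_pretrained(pretrained_weights,
                                                      freeze=False)
        # The GRU input size must equal the embedding dimension, even
        # when that differs from hidden_size.
        self.gru = nn.GRU(embed_dim, hidden_size)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output, hidden = self.gru(embedded, hidden)
        return output, hidden

    def init_hidden(self):
        return torch.zeros(1, 1, self.hidden_size)

# Random stand-in for a real (vocab_size x 300) word2vec matrix.
weights = torch.randn(10, 300)
enc = EncoderRNN(weights, hidden_size=256)
out, h = enc(torch.tensor([3]), enc.init_hidden())
print(out.shape, h.shape)  # torch.Size([1, 1, 256]) torch.Size([1, 1, 256])
```

So besides loading the weights, the only structural change is that the recurrent layer's input size becomes 300 instead of `hidden_size`.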