Hello,
I want to use pre-trained Word2Vec and GloVe embeddings, and I do not want them to be trained. So I commented out these lines in the model's forward function:
emb = embedded_dropout(self.encoder, input, dropout=self.dropoute if self.training else 0)
emb = self.lockdrop(emb, self.dropouti)
Now the embeddings are returned as an Embedding object, but the rest of the code expects a tensor. I do not know what should be in this tensor or how exactly to produce it.
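If I understand the original code correctly, the lookup still has to happen somewhere, so instead of passing the module itself I would call it on the input indices. A minimal sketch of what I mean (assuming self.encoder is an nn.Embedding, as in the original model; the sizes here are just placeholders):

import torch
import torch.nn as nn

# stand-in for self.encoder: vocabulary of 10000, embedding size 300
encoder = nn.Embedding(num_embeddings=10000, embedding_dim=300)

# (seq_len, batch) tensor of token ids, like the model's input
input = torch.randint(0, 10000, (35, 20))

# calling the module performs the lookup and returns a tensor
emb = encoder(input)  # shape (35, 20, 300)
print(emb.shape)

As far as I can tell, calling the module (rather than handing the Embedding object to the next layer) is what yields the tensor the downstream code expects.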
Another question: how do I map the pretrained embeddings to the tokens of the corpus? It appears that all the code assumes the same ntokens for both the corpus vocabulary and the embedding matrix.
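To make the question concrete, here is the kind of mapping I have in mind. This is only a toy sketch with names I made up: idx2word stands in for the corpus vocabulary (corpus.dictionary.idx2word in the original code), and glove stands in for a word-to-vector dict loaded from the pretrained file.

import torch
import torch.nn as nn

emsize = 4  # toy dimension; real GloVe would be 50/100/200/300
idx2word = ["<unk>", "the", "cat", "sat"]  # corpus vocabulary order
glove = {"the": [0.1, 0.2, 0.3, 0.4], "cat": [0.5, 0.6, 0.7, 0.8]}

ntokens = len(idx2word)
weights = torch.empty(ntokens, emsize)

for idx, word in enumerate(idx2word):
    if word in glove:
        # corpus index idx gets the pretrained vector for this word
        weights[idx] = torch.tensor(glove[word])
    else:
        # word missing from the pretrained vocabulary: small random init
        nn.init.uniform_(weights[idx], -0.1, 0.1)

# freeze=True sets requires_grad=False, so the embeddings stay fixed
encoder = nn.Embedding.from_pretrained(weights, freeze=True)

Would building the weight matrix in corpus-vocabulary order like this, and then using freeze=True, be the right way to keep the embeddings untrained?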