Open CodeMiningCZW opened 1 year ago
I found one blog (in Japanese) that might be useful https://zenn.dev/selllous/articles/retnet_tutorial.
A simple nn.Embedding(vocab_size, embedding_size)
will work.
Or you can refer to our example on language modeling.
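To make the suggestion concrete, here is a minimal sketch of building the embedding yourself and handing it to the model. The `RetNetDecoder`/`RetNetConfig` wiring in the comment is an assumption based on torchscale's typical decoder API and may differ in your version; the `nn.Embedding` part itself is standard PyTorch.

```python
import torch
import torch.nn as nn

# Sizes are placeholders -- use your tokenizer's vocab size and the
# model's embedding dimension.
vocab_size, embedding_dim = 32000, 512

# This is the embed_tokens module the error is asking for.
embed_tokens = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)

# Sanity check: (batch, seq_len) token ids -> (batch, seq_len, embedding_dim)
tokens = torch.randint(1, vocab_size, (2, 16))
out = embed_tokens(tokens)
print(tuple(out.shape))  # (2, 16, 512)

# Hypothetical wiring (argument names assumed, check your torchscale version):
# from torchscale.architecture.config import RetNetConfig
# from torchscale.architecture.retnet import RetNetDecoder
# config = RetNetConfig(vocab_size=vocab_size, decoder_embed_dim=embedding_dim)
# decoder = RetNetDecoder(config, embed_tokens=embed_tokens)
```

The key point is that the model does not create its own token embedding; you construct one (or reuse a pretrained one) and pass it in.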
I also encountered this problem. When I try to use the encoder and decoder modules separately, the code raises an error. I would also like to know where the problem is and how to solve it.
A simple
nn.Embedding(vocab_size, embedding_size)
will work. Or you can refer to our example on language modeling.
from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, Embedding
I can't find the transformer module this import refers to.
In the RetNet model, embed_tokens is not given, so I can't run the code. When I use this model, what should I pass for the token embeddings parameter? Or how do I define embed_tokens?