Open enze5088 opened 4 years ago
```python
model = torch.load(your_model_file)
vocab = WordVocab.load_vocab(your_vocab_file)

tokenEmb = model.state_dict()['embedding.token.weight']
segEmb = model.state_dict()['embedding.segment.weight']
posEmb = model.state_dict()['embedding.position.weight']
```
```python
token_emb = tokenEmb[vocab.to_seq("word")[0]]
```
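To make the lookup above concrete: the token embedding matrix has one row per vocabulary entry, so indexing it by a word's vocab index returns that word's vector. Here is a minimal sketch with a toy vocab and a plain-list matrix standing in for `WordVocab` and the real `tokenEmb` tensor (the class and values below are invented for illustration):

```python
# Toy stand-in for WordVocab: maps words to integer indices,
# and to_seq returns a list of indices like the real API.
class ToyVocab:
    def __init__(self, words):
        self.stoi = {w: i for i, w in enumerate(words)}

    def to_seq(self, sentence):
        return [self.stoi[w] for w in sentence.split()]

vocab = ToyVocab(["<pad>", "hello", "word"])

# Toy token embedding matrix: one 4-dim row per vocab entry.
tokenEmb = [
    [0.0, 0.0, 0.0, 0.0],  # <pad>
    [0.1, 0.2, 0.3, 0.4],  # hello
    [0.5, 0.6, 0.7, 0.8],  # word
]

idx = vocab.to_seq("word")[0]  # the vocab index of "word"
token_emb = tokenEmb[idx]      # that row is the word's vector
print(idx, token_emb)
```

With the real tensor the indexing is the same, just `tokenEmb[idx]` on a `torch.Tensor` instead of a list.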
Thank you, but I want to get the vector corresponding to each word, so I'm a little confused about how to read it out of the weight matrix.
I see. Thank you very much.
Is `vocab.to_seq("word")[0]` the index corresponding to "word"? Can we just take the corresponding row of the matrix directly?
I get an error: `ModuleNotFoundError: No module named 'model.bert'`.
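This error usually appears because `torch.load` unpickles a full model using the class's original import path (here `model.bert`), so the package from the training repo must be importable when you load the checkpoint. A hedged sketch of one common fix, running your script with the repo root on `sys.path` (the path below is a placeholder, not the actual location):

```python
import sys

# Hypothetical path to the cloned training repo that contains the
# `model` package; torch.load needs it importable to resolve
# `model.bert` while unpickling.
repo_root = "/path/to/BERT-pytorch"
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

# After this, the load in the snippet above can resolve the class:
#   import torch
#   model = torch.load(your_model_file)
print(sys.path[0])
```

Alternatively, running the script from inside the repo root (so `model/` is in the working directory) has the same effect.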
How to Output Embedded Sentence Vector
I want to output the word vector
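For the sentence-vector part of the question: one common approach (not necessarily what this repo does internally) is to average the per-token vectors into a single fixed-size vector. A minimal sketch with plain lists; with torch tensors the same thing is `hidden_states.mean(dim=0)`:

```python
def mean_pool(token_vectors):
    """Average a list of equal-length token vectors into one vector."""
    n = len(token_vectors)
    dim = len(token_vectors[0])
    return [sum(v[i] for v in token_vectors) / n for i in range(dim)]

# Two toy 4-dim token vectors standing in for a sentence's embeddings.
tokens = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
]
sentence_vec = mean_pool(tokens)
print(sentence_vec)  # elementwise mean of the two rows
```

Other pooling choices (taking the first token's vector, max-pooling) exist; mean pooling is just a simple baseline.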