Closed · duncan1315 closed this issue 3 years ago
Double-check your word-embeddings file. I suspect some of its words contain whitespace. Example:

hello 0.01 0.1 -1.2
yes 0.2 1.2 3
kn ow 1.3 4.1 -2.0

Here the word `kn ow` contains a space, so that line splits into five tokens instead of four.
You can solve this problem by changing the following line in `load_pretrain_emb` in function.py:

tokens = line.split()
In my case I changed `split()` to `split('\t')` — the right delimiter depends on your embedding data — and I also deleted the first line of the file, which stores the vocabulary size and embedding dimension.
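To illustrate the fix described above, here is a minimal sketch of a tolerant loader. It is a hypothetical helper (not the project's actual code): it assumes lines are either `word\tv1 v2 ...` (tab between word and vector) or plain whitespace-separated, skips a `<vocab_size> <dim>` header line if present, and drops malformed rows instead of asserting.

```python
def load_pretrain_emb(embedding_path, expected_dim=None):
    """Load a text embedding file, tolerating words that contain spaces
    when the word and vector are separated by a tab."""
    embedd_dict = {}
    embedd_dim = expected_dim
    with open(embedding_path, encoding="utf-8") as f:
        for line_no, line in enumerate(f):
            line = line.rstrip("\n")
            if not line:
                continue
            if "\t" in line:
                # word\tv1 v2 v3 ... : the word may contain spaces
                word, vec_str = line.split("\t", 1)
                values = vec_str.split()
            else:
                tokens = line.split()
                # skip a '<vocab_size> <dim>' header on the first line
                if line_no == 0 and len(tokens) == 2:
                    continue
                word, values = tokens[0], tokens[1:]
            if embedd_dim is None:
                embedd_dim = len(values)
            if len(values) != embedd_dim:
                # malformed row (e.g. a spaced word without a tab): skip it
                continue
            embedd_dict[word] = [float(v) for v in values]
    return embedd_dict, embedd_dim
```

With a file containing `kn ow\t1.3 4.1 -2.0`, the tab split recovers the full word `kn ow` instead of treating `ow` as part of the vector.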
I replaced the embeddings with my own pretrained embeddings and got this error. Where can I fix it? Thanks.

assert (embedd_dim + 1 == len(tokens))
AssertionError
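For reference, the assertion fires because a word containing a space adds an extra token when the line is split on whitespace. A small reproduction (the lines and dimension here are illustrative):

```python
embedd_dim = 3  # each vector has 3 values, so a valid line splits into 4 tokens

good_line = "hello 0.01 0.1 -1.2"
tokens = good_line.split()
assert embedd_dim + 1 == len(tokens)  # passes: ['hello', '0.01', '0.1', '-1.2']

bad_line = "kn ow 1.3 4.1 -2.0"  # the "word" kn ow contains whitespace
tokens = bad_line.split()
print(len(tokens))  # 5 tokens, so embedd_dim + 1 == len(tokens) is False
# assert embedd_dim + 1 == len(tokens)  # would raise AssertionError
```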