ricardopieper closed this issue 5 years ago
@ricardopieper Can you share the model file for testing?
Yes, here it is: http://143.107.183.175:22980/download.php?file=embeddings/fasttext/cbow_s50.zip
You can find more here: http://nilc.icmc.usp.br/embeddings
One of the offending lines starts with "R$ 0,00"; the loader explodes because "0,00" can't be parsed as a float.
I changed the fastText model loader class to the following:
```python
def read(self, file_path, max_num_vector=None):
    with open(file_path, 'r', encoding='utf-8') as f:
        header = f.readline()
        self.vocab_size, self.emb_size = map(int, header.split())

        for i, line in enumerate(f):
            tokens = line.split()
            # The last emb_size fields are the vector values; everything
            # before them is the (possibly multi-word) token.
            word = " ".join(tokens[:len(tokens) - self.emb_size])
            values = np.array([float(val) for val in tokens[-self.emb_size:]])
```
The idea is that, if we know the size of the word vectors, we can load just the last N split values and treat the rest of the line as the word itself.
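As a toy illustration of that idea (hypothetical 3-dimensional vector, not a real line from the model file):

```python
import numpy as np

# The token "R$ 0,00" contains a space, so a naive split() yields
# more fields than emb_size + 1.
emb_size = 3  # assumed dimension, for illustration only
line = "R$ 0,00 0.1 0.2 0.3"
tokens = line.split()

# Keep the last emb_size fields as the vector; the rest is the word.
word = " ".join(tokens[:len(tokens) - emb_size])
values = np.array([float(val) for val in tokens[-emb_size:]])

print(word)    # R$ 0,00
print(values)  # [0.1 0.2 0.3]
```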
I made a pull request describing the fix: https://github.com/makcedward/nlpaug/pull/8. Feel free to evaluate the solution.
Which pre-trained embeddings are you using when you hit this bug?
@makcedward the same I mentioned earlier. Here's a bit more context:
For Portuguese word embeddings, we like to use USP's models (USP = Universidade de São Paulo, Brazil). They provide models of varied sizes. In particular, we're using fastText, though we could use any other (for our particular case, fastText seems to be a bit better).
The one I'm using is this one: http://143.107.183.175:22980/download.php?file=embeddings/fasttext/cbow_s50.zip
You can find more here: http://nilc.icmc.usp.br/embeddings
Also, I'm afraid the same fix has to be applied to all the other model loaders (GloVe, word2vec, etc.), but I haven't checked.
@ricardopieper
After studying the pre-trained embeddings from http://nilc.icmc.usp.br/embeddings, I found that the word2vec, GloVe, and fastText embeddings all follow the same file format as fastText's (FB's official embeddings).
I will suggest using FasttextAug to load those models.
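For reference, that shared layout is a header line with vocab_size and emb_size, followed by one token per line with emb_size float values. A minimal sketch with a fake in-memory file (the token names and values are made up for illustration):

```python
import io

# Tiny fake model in the shared text format: header "vocab_size emb_size",
# then one line per token followed by emb_size float values.
fake_model = io.StringIO(
    "2 3\n"
    "casa 0.1 0.2 0.3\n"
    "R$ 0,00 -0.1 0.5 0.9\n"
)

header = fake_model.readline()
vocab_size, emb_size = map(int, header.split())
print(vocab_size, emb_size)  # 2 3
```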
I will apply the following change to read the content correctly.
```python
def read(self, file_path, max_num_vector=None):
    with open(file_path, 'r', encoding='utf-8') as f:
        header = f.readline()
        self.vocab_size, self.emb_size = map(int, header.split())

        for i, line in enumerate(f):
            tokens = line.split()
            # The last emb_size fields are the vector values.
            values = tokens[-self.emb_size:]
            # Locate where the values start in the raw line; everything
            # before that point is the (possibly multi-word) token.
            value_pos = line.find(' '.join(values))
            word = line[:value_pos - 1]
            values = np.array([float(val) for val in values])
```
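Applied to the same kind of line, this recovers the multi-word token by locating where the numeric values begin (toy values, assumed 3-dimensional vector):

```python
# Toy example with an assumed 3-dimensional vector and single spaces
# between fields, which the find()-based approach relies on.
emb_size = 3
line = "R$ 0,00 0.1 0.2 0.3\n"

tokens = line.split()
values = tokens[-emb_size:]            # ['0.1', '0.2', '0.3']
value_pos = line.find(" ".join(values))
word = line[:value_pos - 1]            # everything before the values

print(word)  # R$ 0,00
```

Note that this assumes the fields are separated by single spaces; if the raw line used other whitespace, `' '.join(values)` would not match and `find()` would return -1.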
Some model files contain embeddings for multi-word tokens (e.g. the NILC embeddings for Portuguese), which makes the model-loading code explode. For instance, one of the offending lines in the model file starts with "R$ 0,00".
The same does not happen in spaCy, for instance.
I fixed it in my local dev environment and might make a pull request later.