sehsanm / embedding-benchmark

Word embedding benchmark project by Shahid Beheshti University NLP Lab
GNU General Public License v3.0

Load and Use Models #30

Closed mohamad-mehdi-jafari closed 5 years ago

sehsanm commented 5 years ago

Any update? Others are waiting for this code!

abb4s commented 5 years ago

Hi @mohamad-mehdi-jafari, I need a method that returns the vector corresponding to a given word, e.g. model.getVec("someword"). I noticed that you implemented a getter in __getitem__(indexOfword), but it needs the word's index to return the vector. I think you can implement this simply by building a dict mapping each word to its vector, which plays the role of a hashmap here:

    def __init__(self, vocabulary, vectors):
        # Map each word directly to its vector for O(1) lookup by word.
        self.wordDict = {vocabulary[i]: vectors[i] for i in range(len(vocabulary))}

    def getVec(self, word):
        return self.wordDict[word]
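For reference, a minimal self-contained sketch of how this could look in practice (the `Model` class name here is hypothetical, standing in for the project's existing model class):

    class Model:  # hypothetical stand-in for the project's model class
        def __init__(self, vocabulary, vectors):
            # zip pairs each word with its vector; the dict gives O(1) lookup
            self.wordDict = dict(zip(vocabulary, vectors))

        def getVec(self, word):
            return self.wordDict[word]

    model = Model(["king", "queen"], [[0.1, 0.2], [0.3, 0.4]])
    print(model.getVec("king"))  # prints [0.1, 0.2]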

Thanks.

sehsanm commented 5 years ago

@mohamad-mehdi-jafari Please make the changes and submit your code ASAP. We are all waiting for this PR to be merged.

mohamad-mehdi-jafari commented 5 years ago

I fixed the issues (hopefully)!

sehsanm commented 5 years ago

Can you please remove the models folder?