Closed Bekyilma closed 7 years ago
Hi,
did you load the model this way: "infersent = torch.load('infersent.allnli.pickle', map_location=lambda storage, loc: storage)" ?
If not, please retry this way. If you did load it this way, could you print the full error message?
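For reference, a minimal sketch of what that `map_location` argument does (the mapping function itself is plain Python; the `torch.load` call, shown commented out, requires PyTorch and the pickle file from the thread):

```python
# CPU-only loading sketch for the InferSent pickle.
# The map_location callable receives each (storage, location) pair found in
# the checkpoint and returns the storage unchanged, so every tensor stays on
# the CPU instead of being restored to the GPU device it was saved from.
def cpu_map(storage, loc):
    return storage

# Actual usage, as in the comment above (run where the pickle exists):
# import torch
# infersent = torch.load('infersent.allnli.pickle', map_location=cpu_map)
```

Newer PyTorch versions also accept the shorthand `map_location='cpu'`.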
Thanks
Hi, yes, I loaded the model as you mentioned. Here is the entire error message:
AssertionError Traceback (most recent call last)
It seems that you are using an old version of InferSent.
Can you pull the latest version and try again?
Thanks
I cloned the latest version, but it's still raising the same error.
Setting infersent.use_cuda = False, as suggested in https://github.com/facebookresearch/InferSent/issues/22#issuecomment-325218843, fixed it.
We removed "use_cuda" from encoder/models.py but forgot to remove it from the main models.py; I guess that's where the error comes from.
https://github.com/facebookresearch/InferSent/commit/6c36aa3a62f5197d385095a98d13a88c2f650054 should fix this, and you shouldn't have to set infersent.use_cuda = False anymore.
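For context, the failure pattern is roughly the following. This is a hypothetical stand-in, not the actual InferSent source: a use_cuda flag is restored from the pickle, and encode() moves batches to the GPU whenever the flag is still True, which asserts on a CPU-only PyTorch build.

```python
class EncoderSketch:
    """Hypothetical stand-in for the encoder's stale use_cuda handling."""

    def __init__(self, use_cuda=True):
        # Flag pickled with the model; True if it was saved on a GPU machine.
        self.use_cuda = use_cuda

    def encode(self, sentences):
        if self.use_cuda:
            # On the real model this would be a .cuda() call, which raises
            # "AssertionError: Torch not compiled with CUDA enabled"
            # on a CPU-only PyTorch build.
            raise AssertionError("Torch not compiled with CUDA enabled")
        # Placeholder "embedding": one value per sentence.
        return [len(s.split()) for s in sentences]


model = EncoderSketch(use_cuda=True)  # state as restored from the pickle
model.use_cuda = False                # the workaround from this thread
embeddings = model.encode(["hello world"])
```

With the linked commit, the flag is no longer consulted, so the manual override becomes unnecessary.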
Thanks!
Hi, I was trying out the encoder as-is from InferSent/encoder/demo.ipynb. It works fine until I execute these lines to encode the sentences:
embeddings = model.encode(sentences, bsize=128, tokenize=False, verbose=True)
print('nb sentences encoded : {0}'.format(len(embeddings)))
I am running it on CPU, but it raises the following error:
"AssertionError: Torch not compiled with CUDA enabled"