facebookresearch / InferSent

InferSent sentence embeddings

CPU version? #8

Closed anupamme closed 7 years ago

anupamme commented 7 years ago

Hello,

Is a CPU version of this code available or on the roadmap?

aconneau commented 7 years ago

Yes, just load the model this way:

    model = torch.load('infersent.allnli.pickle', map_location=lambda storage, loc: storage)
    model.use_cuda = False

as mentioned in https://github.com/facebookresearch/InferSent/blob/master/encoder/play.ipynb.

I will add this information directly to the README.
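For completeness, a minimal end-to-end sketch of encoding sentences on CPU. It assumes the pretrained pickle and the GloVe vectors have been downloaded as described in the README, and uses the encoder API shown there (set_glove_path, build_vocab, encode); the paths and sentences below are placeholders.

```python
import torch

# Load the pretrained InferSent model on CPU only.
model = torch.load('infersent.allnli.pickle',
                   map_location=lambda storage, loc: storage)
model.use_cuda = False

# Point the encoder at the GloVe word vectors (placeholder path).
model.set_glove_path('glove.840B.300d.txt')

# Build the vocabulary from the sentences to encode, then encode them.
sentences = ['A man is playing a guitar.',
             'Two dogs are running through a field.']
model.build_vocab(sentences, tokenize=True)
embeddings = model.encode(sentences, tokenize=True)

print(embeddings.shape)  # one 4096-d vector per sentence
```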

aconneau commented 7 years ago

Added the CPU option to the README in bc165ec3af3cf5df4f46b4906185dc24050ae7ff.