Closed anupamme closed 7 years ago
Yes, just load the model this way:
model = torch.load('infersent.allnli.pickle', map_location=lambda storage, loc: storage)
model.use_cuda = False
as mentioned in https://github.com/facebookresearch/InferSent/blob/master/encoder/play.ipynb .
I will add this information directly to the README.
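The `map_location` trick above can be demonstrated without the InferSent pickle itself: saving any tensor and reloading it through a `storage`-returning lambda keeps every storage on the CPU. This is a minimal sketch using an in-memory buffer, not the repository's own code:

```python
import io
import torch

# Save a tensor to an in-memory buffer, then reload it with the same
# map_location callback used in the thread. The lambda receives each
# storage and its original location tag, and returning the storage
# unchanged keeps it in host (CPU) memory even if it was saved on GPU.
buf = io.BytesIO()
torch.save(torch.zeros(3), buf)
buf.seek(0)

t = torch.load(buf, map_location=lambda storage, loc: storage)
print(t.device)  # cpu
```

The same callback applied to `infersent.allnli.pickle` remaps the model's CUDA tensors to CPU at load time; setting `model.use_cuda = False` afterwards stops the encoder from trying to move inputs back to the GPU.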
Added CPU option in README bc165ec3af3cf5df4f46b4906185dc24050ae7ff
Hello,
Is a CPU version of this code available or on the roadmap?