Closed windweller closed 7 years ago
Actually I was able to fix this by adding:
infersent.use_cuda = False
It is possible that I might be using an older version of InferSent and the newer version made this line unnecessary.
Indeed, the need for the line "infersent.use_cuda = False" was removed in a recent commit. Now you just need to call ".cpu()" or ".cuda()" on the model to switch between CPU and GPU.
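A minimal sketch of the device switch described above, using a stand-in `torch.nn.Linear` module in place of a loaded InferSent model (the actual loading code is not shown in this thread):

```python
import torch

# Stand-in for a loaded InferSent model; any nn.Module behaves the same way.
model = torch.nn.Linear(4, 2)

# Move the model (and all its parameters) to CPU.
model = model.cpu()
print(next(model.parameters()).device)  # cpu

# Move it to GPU only when one is actually available.
if torch.cuda.is_available():
    model = model.cuda()
```

`.cpu()` and `.cuda()` move every parameter and buffer of the module, so no per-attribute flag like `use_cuda` is needed anymore.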
If you're on CPU, you may want to experiment with the parameter k in:
torch.set_num_threads(k)
In my case, using fewer CPU cores than my server had made the generation of embeddings faster (from 40 to 70 sentences/s).
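A quick sketch of tuning that parameter; the value 2 here is just an illustrative choice, not a recommendation from the thread:

```python
import torch

# Limit the number of threads PyTorch uses for intra-op parallelism on CPU.
# Counter-intuitively, a value below the physical core count can be faster,
# as reported above (40 -> 70 sentences/s in that user's setup).
torch.set_num_threads(2)

# Verify the setting took effect.
print(torch.get_num_threads())  # 2
```

A reasonable approach is to benchmark embedding generation over a small batch of sentences for several values of k and keep the fastest.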
Hi,
I got this warning:
Any idea why this is happening, and why it is still calling cuDNN even though I want to run on CPU?