facebookresearch / InferSent

InferSent sentence embeddings

AssertionError: Torch not compiled with CUDA enabled #28

Closed Bekyilma closed 7 years ago

Bekyilma commented 7 years ago

Hi, I was trying out the encoder as-is from InferSent/encoder/demo.ipynb. It works fine until I execute this line to encode the sentences:

```python
embeddings = model.encode(sentences, bsize=128, tokenize=False, verbose=True)
print('nb sentences encoded : {0}'.format(len(embeddings)))
```

I am running it on CPU, but it raises the following error:

"AssertionError: Torch not compiled with CUDA enabled"

aconneau commented 7 years ago

Hi,

did you load the model this way?

```python
infersent = torch.load('infersent.allnli.pickle', map_location=lambda storage, loc: storage)
```

If not, please retry with that call. If you did load it this way, could you post the full error message?
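As a self-contained illustration of this loading pattern (a sketch: a plain tensor stands in for the real `infersent.allnli.pickle`), `map_location=lambda storage, loc: storage` remaps every pickled storage onto the CPU, so a model saved on a GPU machine can be unpickled on a CUDA-less build:

```python
import os
import tempfile

import torch

# Save a toy object the way a model checkpoint would be saved.
# (Stand-in for infersent.allnli.pickle, which we don't have here.)
path = os.path.join(tempfile.mkdtemp(), "toy.pickle")
torch.save(torch.ones(3), path)

# map_location keeps each storage where the lambda returns it: on CPU.
loaded = torch.load(path, map_location=lambda storage, loc: storage)
print(loaded.device.type)  # cpu
```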

Thanks

Bekyilma commented 7 years ago

Hi, yes, I loaded the model as you mentioned. Here is the entire error message:

Nb words kept : 128201/130068 (98.56 %)

```
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 embeddings = infersent.encode(sentences, bsize=128, tokenize=False, verbose=True)
      2 print('nb sentences encoded : {0}'.format(len(embeddings)))

/Users/bereket/Documents/InferSent/models.py in encode(self, sentences, bsize, tokenize, verbose)
    196                                  volatile=True)
    197             if self.use_cuda:
--> 198                 batch = batch.cuda()
    199             batch = self.forward((batch, lengths[stidx:stidx + bsize])).data.cpu().numpy()
    200             embeddings.append(batch)

/Users/bereket/anaconda/envs/python2/lib/python2.7/site-packages/torch/autograd/variable.pyc in cuda(self, device_id, async)
    277
    278     def cuda(self, device_id=None, async=False):
--> 279         return CudaTransfer.apply(self, device_id, async)
    280
    281     def cpu(self):

/Users/bereket/anaconda/envs/python2/lib/python2.7/site-packages/torch/autograd/_functions/tensor.pyc in forward(ctx, i, device_id, async)
    149             return i.cuda(device_id, async=async)
    150         else:
--> 151             return i.cuda(async=async)
    152
    153     @staticmethod

/Users/bereket/anaconda/envs/python2/lib/python2.7/site-packages/torch/_utils.pyc in _cuda(self, device, async)
     64     else:
     65         new_type = getattr(torch.cuda, self.__class__.__name__)
---> 66         return new_type(self.size()).copy_(self, async)
     67
     68

/Users/bereket/anaconda/envs/python2/lib/python2.7/site-packages/torch/cuda/__init__.pyc in _lazy_new(cls, *args, **kwargs)
    264     @staticmethod
    265     def _lazy_new(cls, *args, **kwargs):
--> 266         _lazy_init()
    267         # We need this method only for lazy init, so we can remove it
    268         del _CudaBase.__new__

/Users/bereket/anaconda/envs/python2/lib/python2.7/site-packages/torch/cuda/__init__.pyc in _lazy_init()
     82         raise RuntimeError(
     83             "Cannot re-initialize CUDA in forked subprocess. " + msg)
---> 84     _check_driver()
     85     torch._C._cuda_init()
     86     torch._C._cuda_sparse_init()

/Users/bereket/anaconda/envs/python2/lib/python2.7/site-packages/torch/cuda/__init__.pyc in _check_driver()
     49 def _check_driver():
     50     if not hasattr(torch._C, '_cuda_isDriverSufficient'):
---> 51         raise AssertionError("Torch not compiled with CUDA enabled")
     52     if not torch._C._cuda_isDriverSufficient():
     53         if torch._C._cuda_getDriverVersion() == 0:

AssertionError: Torch not compiled with CUDA enabled
```
aconneau commented 7 years ago

It seems that you are using an old version of InferSent.

Can you pull the latest version and try again?

Thanks

Bekyilma commented 7 years ago

I cloned the latest version, but it's still raising the same error.

Bekyilma commented 7 years ago

Setting `infersent.use_cuda = False`, as suggested in https://github.com/facebookresearch/InferSent/issues/22#issuecomment-325218843, fixed it.

aconneau commented 7 years ago

We removed the `use_cuda` attribute in encoder/models.py but forgot to remove it from the main models.py; I guess that's where the error comes from.

https://github.com/facebookresearch/InferSent/commit/6c36aa3a62f5197d385095a98d13a88c2f650054 should fix this, and you shouldn't have to set `infersent.use_cuda = False` anymore.

Thanks!