facebookresearch / InferSent

InferSent sentence embeddings

RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index' #129

Closed: IvanPavlyshyn closed this issue 5 years ago

IvanPavlyshyn commented 5 years ago

Hi, I get an error when trying to execute the `model.encode` command from the demo example. Could you please help me fix it?

Code:

```python
embeddings = model.encode(sentences, bsize=128, tokenize=False, verbose=True)
print('nb sentences encoded : {0}'.format(len(embeddings)))
```

Error:

```
RuntimeError                              Traceback (most recent call last)
in
----> 1 embeddings = model.encode(sentences, bsize=128, tokenize=False, verbose=True)
      2 print('nb sentences encoded : {0}'.format(len(embeddings)))

~\Documents\Quora_QnA\models.py in encode(self, sentences, bsize, tokenize, verbose)
    221                 batch = batch.cuda()
    222             with torch.no_grad():
--> 223                 batch = self.forward((batch, lengths[stidx:stidx + bsize])).data.cuda().numpy()
    224             embeddings.append(batch)
    225         embeddings = np.vstack(embeddings)

~\Documents\Quora_QnA\models.py in forward(self, sent_tuple)
     60         idx_sort = torch.from_numpy(idx_sort).cuda() if self.is_cuda() \
     61             else torch.from_numpy(idx_sort)
---> 62         sent = sent.index_select(1, idx_sort)
     63
     64         # Handling padding in Recurrent Networks

RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
```
aconneau commented 5 years ago

Did you find a fix?

IvanPavlyshyn commented 5 years ago

@aconneau Yeah, setting `use_cuda = True` fixed it.
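For anyone hitting the same error: the `sent` tensor and the `idx_sort` index built inside `forward()` must live on the same device, and `self.is_cuda()` decides that by inspecting the model's own parameters. So the model has to be moved to the GPU (or kept on CPU) consistently, as the demo's `use_cuda` flag does. A minimal sketch, assuming `models.py` from this repo and a downloaded encoder pickle are on your path (the params and the `infersent2.pkl` filename follow the README; adjust to your local setup):

```python
import torch
from models import InferSent  # models.py from the InferSent repo

# Model params as described in the README (version 2 = fastText-based encoder).
params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
                'pool_type': 'max', 'dpout_model': 0.0, 'version': 2}
model = InferSent(params_model)
model.load_state_dict(torch.load('encoder/infersent2.pkl'))

# The demo's use_cuda flag. Moving the whole model to the GPU makes
# model.is_cuda() return True inside forward(), so idx_sort is also
# sent to the GPU and index_select no longer mixes CPU/CUDA tensors.
use_cuda = torch.cuda.is_available()
model = model.cuda() if use_cuda else model
```

If you pass `batch.cuda()` tensors into a model whose parameters are still on the CPU (or vice versa), you get exactly the "Expected object of backend CUDA but got backend CPU" error above, so the flag should match wherever the model's weights actually live.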