maciejkula / spotlight

Deep recommender models using PyTorch.
MIT License

GPU to CPU Pred #131

Closed smart-patrol closed 5 years ago

smart-patrol commented 5 years ago

First, thanks for the awesome package.

I have a model trained on a GPU and want to run inference on a CPU:

```python
import torch

model = torch.load('/implicit_new.pt', map_location={'cuda:0': 'cpu'})
```

Loading works fine, but calling `predict` on the loaded model object gives the error below:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
in ()
----> 1 model.predict(1000)

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/spotlight/factorization/implicit.py in predict(self, user_ids, item_ids)
    305         user_ids, item_ids = _predict_process_ids(user_ids, item_ids,
    306                                                   self._num_items,
--> 307                                                   self._use_cuda)
    308
    309         out = self._net(user_ids, item_ids)

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/spotlight/factorization/_components.py in _predict_process_ids(user_ids, item_ids, num_items, use_cuda)
     20         user_ids = user_ids.expand(item_ids.size())
     21
---> 22     user_var = gpu(user_ids, use_cuda)
     23     item_var = gpu(item_ids, use_cuda)
     24

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/spotlight/torch_utils.py in gpu(tensor, gpu)
      7
      8     if gpu:
----> 9         return tensor.cuda()
     10     else:
     11         return tensor

RuntimeError: cuda runtime error (30) : unknown error at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp:70
```
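The traceback points at spotlight's `gpu` helper in `torch_utils.py`: `predict` passes the model's saved `_use_cuda` flag, and the helper calls `.cuda()` whenever that flag is true, even though `map_location` already moved the weights to the CPU. A minimal sketch of that logic, using a hypothetical `FakeTensor` stand-in (so it runs without torch or a GPU):

```python
# Reimplementation of spotlight's torch_utils.gpu helper, as shown
# in the traceback above.
def gpu(tensor, gpu=False):
    if gpu:
        return tensor.cuda()  # fails when no CUDA runtime is available
    return tensor


class FakeTensor:
    """Hypothetical stand-in for a torch tensor on a machine without CUDA."""

    def cuda(self):
        # Mimics the "cuda runtime error (30)" seen on a CUDA-less box.
        raise RuntimeError("cuda runtime error (30) : unknown error")


t = FakeTensor()

# With the flag still set (as on a model trained with use_cuda=True),
# the helper tries to move the tensor to the GPU and raises:
try:
    gpu(t, gpu=True)
except RuntimeError as e:
    print("raised:", e)

# With the flag cleared, the tensor passes through untouched:
assert gpu(t, gpu=False) is t
```

This suggests a possible workaround (an untested assumption, not a documented spotlight API): after loading, set `model._use_cuda = False` so that `predict` takes the CPU branch of the helper.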
maciejkula commented 5 years ago

That looks like a transient PyTorch problem. Can you reproduce it reliably?