Closed: smart-patrol closed this issue 5 years ago
First, thanks for the awesome package.

I have a model trained on a GPU and want to run inference on a CPU:

```python
import torch
model = torch.load('/implicit_new.pt', map_location={'cuda:0': 'cpu'})
```
That works fine, but when I then call `predict` on the loaded model object, I get the following:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
```
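One thing worth checking: the dict form `{'cuda:0': 'cpu'}` only remaps storages that were on `cuda:0`, while `map_location=torch.device('cpu')` remaps every device. A minimal sketch of a CPU round-trip via a `state_dict` rather than the full pickled module (the `nn.Linear` stand-in and the relative file name are illustrative assumptions, not the reporter's actual model):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the saved model; the real file is /implicit_new.pt.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), 'implicit_new.pt')

# map_location=torch.device('cpu') remaps *all* CUDA storages to CPU,
# not just those saved from cuda:0 as the dict form does.
state = torch.load('implicit_new.pt', map_location=torch.device('cpu'))

cpu_model = nn.Linear(4, 2)
cpu_model.load_state_dict(state)
cpu_model.eval()  # disable dropout/batch-norm updates for inference

with torch.no_grad():
    out = cpu_model(torch.randn(1, 4))
assert out.shape == (1, 2)
```

Loading a `state_dict` also sidesteps stray device references that can survive inside a fully pickled module and resurface as a CUDA/CPU mismatch at predict time.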
That looks like a transient PyTorch problem. Can you reproduce it reliably?