Closed mattmcclean closed 4 years ago
The problematic line of code is the following. I wonder if the problem is because the callback class named TrainEvalCallback is set by default (see code snippet). Do I need to remove this callback from the learner to run inference only?
You should use the function load_learner with cpu=True instead of torch.load.
Here your learn.dls have serialized their device attribute, which is a GPU because you trained in an env with CUDA enabled I guess, so it tries to put the model on the same device to match. load_learner will fix that problem for you.
I am getting the following error when attempting to do inference on a fastai2 model using a CPU-only server. The error is the following:
AssertionError: Torch not compiled with CUDA enabled
The version of PyTorch is 1.3.1+cpu and the fastai2 version is 0.0.11.
The stack trace is the following:
The sample code I am running is the following: