Closed: AndersGiovanni closed this issue 3 years ago.
Thanks! This is a mistake on my part.
I will fix this.
hi @AGMoller
At long last I have implemented this (NERDA==1.0.0). See the functions:

```python
model.save_network
model.load_network_from_file
```

Thank you so much for your feedback.
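A minimal usage sketch (the positional `"model.bin"` path and the need to recreate the model with the same configuration before loading are assumptions on my side; see the docs for the exact signatures):

```python
# 'model' is assumed to be an already configured/trained NERDA model
# (a NERDA.models.NERDA instance); "model.bin" is just an example path.

# On the machine where the model was trained: save the network weights.
model.save_network("model.bin")

# On another machine: build a NERDA model with the same configuration
# (same transformer, tag scheme, etc.), then load the saved weights into it.
model.load_network_from_file("model.bin")
```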
First of all thanks a lot for making this project. You've made it super simple to train a custom model!
I experienced some issues with using a trained and saved model (on GPU) on another computer running on CPU, and I thought I'd share how to deal with it.
I could both `torch.save()` and `torch.load()` a model on my GPU pc, as you write in #14. Running my model on another pc with only a CPU should then be handled by providing `map_location=torch.device('cpu')`, as PyTorch describes in its documentation. So I tried that, of course, with code along these lines:
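(A rough sketch with a placeholder file name; the important part is the `map_location` argument.)

```python
import torch

# Load the NERDA model that was trained and saved with torch.save() on the GPU machine.
# map_location remaps every tensor in the checkpoint to the CPU.
model = torch.load("nerda_model.bin", map_location=torch.device("cpu"))
```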
Printing `model.device` would return `cpu`, and everything seemed to be working properly.

Next, when I wanted to `predict_text`, I received this assertion error: `AssertionError: Torch not compiled with CUDA enabled`. Super weird, since I had checked that the model was on CPU.

It turned out that the `NERDANetwork`, which is a `torch.nn.Module`, was still cast to the 'old' GPU device. So when I printed `model.network.device`, `cuda` was returned, despite `model.device` returning `cpu`.
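For anyone hitting the same error, the mismatch is easy to spot by printing both device attributes (the values in the comments are what I got on my machine):

```python
print(model.device)          # cpu
print(model.network.device)  # cuda  <- the underlying torch.nn.Module still targets the old GPU
```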
So my solution was to cast both the loaded model and the `NERDANetwork` to `cpu`:
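Something along these lines (the file name is a placeholder, and depending on your NERDA version you may only need some of these lines):

```python
import torch

cpu = torch.device("cpu")

# Load the model saved on the GPU machine, remapping all tensors to the CPU.
model = torch.load("nerda_model.bin", map_location=cpu)

# Cast the NERDA model itself to CPU ...
model.device = cpu

# ... and the underlying NERDANetwork (a torch.nn.Module), whose device
# attribute otherwise still points at the old CUDA device.
model.network.device = cpu
model.network.to(cpu)
```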
I hope you'll find it useful!