GateNLP / gate-lf-pytorch-json

PyTorch wrapper for the LearningFramework GATE plugin
Apache License 2.0

Make sure nothing on CUDA gets saved to the wrapper file #38

Closed: johann-petrak closed this issue 5 years ago

johann-petrak commented 5 years ago

Currently we save the optimizer instance, which holds references to the model parameters, and those parameters may live on CUDA. When the saved file is restored on a machine without CUDA, this causes an error because PyTorch tries to put the tensors back onto the CUDA device.
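A minimal sketch of the failure mode, assuming plain `torch.save`/`torch.load` for the wrapper file (the plugin's actual serialization code may differ):

```python
import torch

model = torch.nn.Linear(4, 2).cuda()             # parameters live on CUDA
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Pickling the optimizer also pickles the CUDA parameter tensors it
# references, so the device ends up baked into the saved file.
torch.save({"optimizer": optimizer}, "wrapper.pt")

# On a CPU-only machine this raises:
#   RuntimeError: Attempting to deserialize object on a CUDA device
#   but torch.cuda.is_available() is False ...
state = torch.load("wrapper.pt")

# Load-time workaround (does not fix the underlying problem of saving
# the optimizer in the first place):
state = torch.load("wrapper.pt", map_location=torch.device("cpu"))
```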

We should remove the optimizer from the fields that get saved and make sure no other torch tensors that could be on CUDA get saved either. (It may be possible to simply check whether each field's type comes from the torch package and skip storing those, as sketched below.)
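A hedged sketch of that check, using hypothetical class and field names (this is not the plugin's actual wrapper class): overriding `__getstate__` so that any field whose type is defined in the torch package is dropped before pickling.

```python
import pickle

class ModelWrapper:
    """Hypothetical wrapper; field names are illustrative only."""

    def __init__(self, module, optimizer, config):
        self.module = module          # torch.nn.Module, possibly on CUDA
        self.optimizer = optimizer    # holds references to CUDA parameters
        self.config = config          # plain Python data, safe to pickle

    def __getstate__(self):
        # Exclude every field whose type lives in the torch package,
        # so no CUDA-resident tensors end up in the pickle.
        return {
            k: v for k, v in self.__dict__.items()
            if type(v).__module__.split(".", 1)[0] != "torch"
        }

    def __setstate__(self, state):
        self.__dict__.update(state)
        # torch objects must be restored separately, e.g. the module via
        # torch.load(..., map_location="cpu") and a freshly built optimizer.
        self.module = None
        self.optimizer = None

# pickle.dumps(wrapper) now succeeds even if module/optimizer are on CUDA,
# because those fields never enter the pickled state.
```

With this approach the optimizer would be recreated fresh after loading, so any hyperparameters it needs (learning rate etc.) would have to be stored separately as plain Python values.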