golsun / DialogRPT

EMNLP 2020: "Dialogue Response Ranking Training with Large-Scale Human Feedback Data"
MIT License

CPU inference problem #7

Open · pablogranolabar opened this issue 3 years ago

pablogranolabar commented 3 years ago

Hi @golsun, I'm hitting another snag with CPU inference:

$ python3 src/generation.py play -pg=restore/medium_ft.pkl -pr=restore/updown.pth --cpu --sampling
loading from restore/medium_ft.pkl
loading from restore/updown.pth
/home/ec2-user/.local/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at  /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "src/generation.py", line 224, in <module>
    ranker = get_model(args.path_ranker, cuda)
  File "/home/ec2-user/kompanion.ai/DialogRPT/src/score.py", line 17, in get_model
    model.load(path)
  File "/home/ec2-user/kompanion.ai/DialogRPT/src/model.py", line 102, in load
    weights = torch.load(path)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 774, in _legacy_load
    result = unpickler.load()
  File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 730, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
golsun commented 3 years ago

Hi @pablogranolabar,

For CPU inference, instead of weights = torch.load(path) as in model.py, you can use weights = torch.load(path, map_location=torch.device('cpu')).
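A minimal sketch of how that line in src/model.py could be made device-aware rather than hard-coded to CPU; only the torch.load call corresponds to the traceback above, and the rest of the method body shown here is illustrative:

import torch

def load(self, path):
    # Choose a map_location so checkpoints saved on a GPU can still be
    # deserialized on a CPU-only machine, while a GPU is used when available.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    weights = torch.load(path, map_location=device)
    # ... the rest of the original method consumes `weights` unchanged

With a change along these lines, the --cpu flag should work on a machine without an NVIDIA driver, and the same code still loads the checkpoint onto the GPU when one is present.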