Without this change, I am still getting this error:
(bgm2) seki@xubuntu-20:~/src/BackgroundMattingV2$ python inference_video.py --model-type mattingrefine --model-backbone resnet101 --model-checkpoint Model/PyTorch/pytorch_resnet101.pth --video-src ../zerobox-v2/resource/group15B_Short.avi --video-bgr ../zerobox-v2/resource/background_group15B.png --output-dir output_video_group15B --output-type com fgr pha err ref --device cpu
/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "inference_video.py", line 128, in <module>
    model.load_state_dict(torch.load(args.model_checkpoint), strict=False)
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 853, in _load
    result = unpickler.load()
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/seki/miniconda3/envs/bgm2/lib/python3.8/site-packages/torch/serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
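The RuntimeError itself names the workaround: pass map_location to torch.load so tensors saved on a CUDA machine are remapped to the CPU at load time. A minimal, self-contained sketch of that idea, using a throwaway checkpoint instead of the real pytorch_resnet101.pth file:

```python
import os
import tempfile

import torch

# Save a tiny state dict to stand in for a real checkpoint.
ckpt_path = os.path.join(tempfile.gettempdir(), "demo_ckpt.pth")
torch.save({"w": torch.zeros(2, 2)}, ckpt_path)

# map_location forces every storage in the file onto the CPU,
# even if the checkpoint was written on a CUDA device.
state = torch.load(ckpt_path, map_location=torch.device("cpu"))
print(state["w"].device)  # cpu
```

Applied to inference_video.py, line 128 of the traceback would become `model.load_state_dict(torch.load(args.model_checkpoint, map_location=torch.device('cpu')), strict=False)`.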