I'm on a macOS machine with an M2 (Apple Silicon) and want to test inference, but I don't have CUDA. I've tried setting the device to CPU, but I still get an error. Any suggestions?
/Library/Python/3.11/lib/python/site-packages/torch/serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
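The error itself points at the fix: the checkpoint was saved from a CUDA device, so `torch.load` tries to restore its tensors onto a GPU unless you remap them. Passing `map_location=torch.device("cpu")` forces every storage onto the CPU at load time. A minimal self-contained sketch (using an in-memory buffer in place of your actual checkpoint file):

```python
import io
import torch

# Save a tensor to an in-memory buffer to stand in for a checkpoint file.
buf = io.BytesIO()
torch.save(torch.zeros(3), buf)
buf.seek(0)

# map_location remaps any CUDA-saved storages onto the CPU,
# so loading works on machines where torch.cuda.is_available() is False.
t = torch.load(buf, map_location=torch.device("cpu"))
print(t.device)  # cpu
```

For your case that would mean something like `torch.load("your_checkpoint.pth", map_location=torch.device("cpu"))` (path hypothetical). Note that on Apple Silicon you can also check `torch.backends.mps.is_available()` and move the loaded model to the `"mps"` device afterwards for GPU-accelerated inference.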