Closed JieZou1 closed 4 years ago
Hi Jie,
I have the same issue as yours, and I managed to find a workaround (for the inference part only): the root cause is that there is a lot of code calling Tensor.cuda(), which moves tensors/variables onto a CUDA device for efficiency.
You can replace these .cuda() calls with .to(device=cpu_device)
It works fine on my Mac.
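As a minimal sketch of that workaround, the usual device-agnostic pattern in PyTorch is to pick a device once and move tensors with .to() instead of hard-coding .cuda() (the variable names here are illustrative, not from the project's source):

```python
import torch

# Pick the device once; falls back to CPU when CUDA is unavailable (e.g. on a Mac).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3)
# Instead of the hard-coded x.cuda(), move the tensor with .to():
x = x.to(device)
print(x.device)  # "cpu" on a machine without CUDA
```

The same .to(device) call works for models (nn.Module), so one device variable can replace every .cuda() call in the inference path.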
Okay, great! Many thanks for the information, and I will try it out.
Hi,
I am trying to run inference on my Mac, which has a CPU only. I have tried changing demo.py to use the CPU:
cfg.merge_from_list(["MODEL.DEVICE", "cpu"])
But I still get the "Torch not compiled with CUDA enabled" error message. Any idea what is wrong? Can I run inference on the CPU? Many thanks.
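A likely explanation, sketched below: the config override only changes the MODEL.DEVICE value, but any explicit .cuda() call elsewhere in the code bypasses the config entirely and fails on a CPU-only PyTorch build (this is a standalone illustration, not code from demo.py):

```python
import torch

# The configured device is respected by .to()-style calls:
device = torch.device("cpu")
x = torch.ones(2, 2).to(device)  # works on a CPU-only build

# But a hard-coded .cuda() call ignores the config and fails
# when PyTorch was built without CUDA support:
if not torch.cuda.is_available():
    try:
        x.cuda()
    except (AssertionError, RuntimeError) as e:
        print("hard-coded .cuda() failed:", e)
```

So setting "MODEL.DEVICE" to "cpu" is necessary but not sufficient; the remaining .cuda() calls in the codebase also need to be replaced, as described in the workaround above.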
Best Regards Jie