MhLiao / MaskTextSpotter

A PyTorch implementation of Mask TextSpotter
https://github.com/MhLiao/MaskTextSpotter

[QUESTION] Run inference on CPU #60

Closed JieZou1 closed 4 years ago

JieZou1 commented 4 years ago

Hi,

I am trying to run the inference on my MAC, which has CPU only. I have tried changing the demo.py to use CPU:

cfg.merge_from_list(["MODEL.DEVICE", "cpu"])

But I still get the "Torch not compiled with CUDA enabled" error. Any idea what is wrong? Can I run inference on CPU? Many thanks.

Best regards, Jie

JayveeHe commented 4 years ago

Hi Jie,

I had the same issue and managed to find a workaround (for the inference part only). The root cause is that the code calls Tensor.cuda() in many places, which moves tensors/variables onto CUDA devices for efficiency.

You can replace these .cuda() calls with .to(device=cpu_device).

It works fine on my Mac.

JieZou1 commented 4 years ago

Okay, great! Many thanks for the information, and I will try it out.