roytseng-tw / Detectron.pytorch

A PyTorch implementation of Detectron. Both training from scratch and running inference directly from pretrained Detectron weights are supported.
MIT License

Inference on CPU #222

Open · ashnair1 opened this issue 5 years ago

ashnair1 commented 5 years ago

Is it possible to run inference on CPU? The forward function of `roi_Xconv1fc_gn_head_panet` in `fast_rcnn_heads.py` relies on the GPU version of RoIAlign. How can this be worked around?
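For reference, newer versions of torchvision ship a CPU-capable RoIAlign in `torchvision.ops`. This repo predates it, so the following is only a sketch of how the op could be swapped in, not code from the repo:

```python
import torch
from torchvision.ops import roi_align

# Illustrative only (assumes torchvision >= 0.3, which this repo does not use):
# torchvision's roi_align runs on both CPU and GPU tensors.
features = torch.randn(1, 256, 50, 50)  # (N, C, H, W) feature map on CPU
# RoIs as (batch_index, x1, y1, x2, y2) rows, the format roi_align expects
rois = torch.tensor([[0.0, 10.0, 10.0, 40.0, 40.0]])
pooled = roi_align(features, rois, output_size=(7, 7),
                   spatial_scale=1.0, sampling_ratio=2)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```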

jmills09 commented 5 years ago

@ash1995

I'm running a fork of this repository that adds CPU compatibility to just the inference script.

Unfortunately, I have already modified the entire repo beyond the point of it likely being useful, because I'm using a non-COCO dataset (it's https://github.com/NuTufts/Detectron.pytorch/tree/cpu_train, but I don't recommend trying to use it).

The basic change I had to make was replacing all of the `.cuda()` calls that push tensors to a device with `tensor.to(torch.device(device_id))`, picking the device from the incoming blob:

```python
# Choose the target device from the input tensor, then move X to it.
if blobs_in[0].is_cuda:
    device_id = blobs_in[0].get_device()
else:
    device_id = 'cpu'
X = X.to(torch.device(device_id))  # .to() returns a new tensor; reassign it
```
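On PyTorch >= 0.4 the `is_cuda` branch can collapse into a single call, since `tensor.device` already carries both the device type and the index. A minimal sketch of a helper built on that (`move_like` is a hypothetical name, not something in the repo):

```python
import torch

def move_like(x: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: move x to whatever device ref lives on,
    # so the same call site works for both CPU and GPU inference.
    # Note: .to() is not in-place; the caller must use the return value.
    return x.to(ref.device)
```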

This allows you to set the device to CPU. I also had to adjust `data_parallel` so that it could handle CPU devices; in the repo above I created a `datasingular` module that has those changes.
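For inference, the `data_parallel` changes can often be sidestepped by only wrapping the model when a GPU is actually present. A minimal sketch of that idea (my own, not the fork's `datasingular` code):

```python
import torch
import torch.nn as nn

def maybe_data_parallel(model: nn.Module) -> nn.Module:
    # Sketch only: wrap in DataParallel when CUDA is available, otherwise
    # return the bare model so CPU inference skips the scatter/gather path.
    if torch.cuda.is_available():
        return nn.DataParallel(model.cuda())
    return model  # left on CPU
```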