VITA-Group / FasterSeg

[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
MIT License

inferencing on a cpu #33

Closed ghost closed 4 years ago

ghost commented 4 years ago

@chenwydj can the final trained model use only cpu for inferencing?

chenwydj commented 4 years ago

Hi @deepseek!

Thank you for your interest in our work!

Yes, you can run inference on the CPU. Just remove all the `.cuda()` calls in the code. You may also need to map the pretrained model onto the CPU when loading it.
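A minimal sketch of what this looks like, using a stand-in `nn.Sequential` model rather than FasterSeg's actual network class (the real checkpoint path and architecture will differ). The key piece is `map_location` in `torch.load`, which remaps CUDA tensors in the checkpoint onto the CPU:

```python
import torch
import torch.nn as nn

# Stand-in for the FasterSeg network; replace with the real model class.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# Save a checkpoint the way the training code would.
torch.save(model.state_dict(), "checkpoint.pth")

# On a CPU-only machine: map_location moves any CUDA tensors to CPU.
state = torch.load("checkpoint.pth", map_location=torch.device("cpu"))
model.load_state_dict(state)
model.eval()

# No .cuda() calls anywhere, so the input and model both stay on CPU.
with torch.no_grad():
    out = model(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 8, 64, 64])
```

Without `map_location`, loading a checkpoint saved from a GPU raises an error on machines where CUDA is unavailable.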