Is it possible to do inference on CPU in Colab? (issue opened by AK391, closed 3 years ago)
Hey AK, in its current state the code supports inference on GPU only, not CPU, though it would be possible to add this option with some minor code changes.
This change will make PTI extremely slow (I'd estimate around 10 minutes per identity), especially in Colab. May I ask why you want to run PTI on CPU instead of GPU?
@danielroich thanks, I was looking to port PTI to Gradio Hub (https://gradio.app/hub), which does support GPU, but it is fairly new right now and unstable.
@AK391 Done :)
I should add that running PTI on CPU is very slow, but it can be done from now on. All you have to do in the inference notebook is change global_config.device from 'cuda' to 'cpu' under Configuration Setup, and everything will run on CPU instead of GPU.
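For example, the cell would look roughly like this (a minimal sketch; the configs import path assumes the PTI repo layout, and the torch.cuda fallback is just an optional convenience rather than part of the notebook):

```python
import torch

# The notebooks read the target device from this global config
# (assumed module path: configs/global_config.py in the PTI repo).
from configs import global_config

# Default is 'cuda'; setting 'cpu' runs everything on CPU (much slower).
global_config.device = 'cuda' if torch.cuda.is_available() else 'cpu'
```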
Hope it helps! Daniel
@danielroich thanks, 10 minutes per identity is pretty long, but I'll try to see what I can do.