danielroich / PTI

Official Implementation for "Pivotal Tuning for Latent-based Editing of Real Images" (ACM TOG 2022) https://arxiv.org/abs/2106.05744
MIT License

inference on cpu #8

Closed AK391 closed 3 years ago

AK391 commented 3 years ago

Is it possible to do inference on CPU in Colab?

danielroich commented 3 years ago

Hey AK, at the moment the code does not support inference on CPU, only GPU, although it would be possible to add this option with some minor code changes.

This change would make PTI extremely slow (I predict ~10 min per identity), especially in Colab. May I ask what the purpose of running PTI on CPU instead of GPU is?

AK391 commented 3 years ago

@danielroich thanks, I was looking to port PTI to Gradio Hub (https://gradio.app/hub), which does support GPU, but it is fairly new and unstable right now

danielroich commented 3 years ago

@AK391 Done :)

I might add that running PTI is still very slow on CPU, but it can be done from now on. All you have to do in the inference notebook is change global_config.device from 'cuda' to 'cpu' under Configuration Setup, and everything will run on CPU instead of GPU.
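For reference, a minimal sketch of that cell (the `from configs import global_config` import path is assumed from the repo layout; adjust it to match the notebook):

```python
# Configuration Setup (sketch): pick the device once; the rest of PTI
# reads it from global_config. Import path assumed from the repo layout.
import torch
from configs import global_config

# Use the GPU when one is available, otherwise fall back to CPU.
global_config.device = 'cuda' if torch.cuda.is_available() else 'cpu'
```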

Hope it helps! Daniel

AK391 commented 3 years ago

@danielroich thanks, 10 mins per identity is pretty long, but I'll try to see what I can do