eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Training time for the provided pretrained models #255

Closed: shivangi-aneja closed this issue 2 years ago

shivangi-aneja commented 2 years ago

I was wondering if you could share some statistics on how long it took to train the provided pretrained models. More specifically, the GPU hours and hardware architecture used to train pSp (StyleGAN inversion) on the FFHQ dataset would be really helpful. I already sent an email about this, so feel free to reply there.

yuval-alaluf commented 2 years ago

We ran our inversion experiments with a single P40 GPU for 300,000 iterations. I don't recall how many days this turns out to be.
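For a rough sense of scale, the wall-clock time can be estimated from the iteration count given above. A minimal sketch follows; the seconds-per-iteration figure is an assumed placeholder, not a measured value from the authors:

```python
# Back-of-envelope estimate of pSp training time.
# 300,000 iterations comes from the reply above; the
# seconds-per-iteration throughput is an ASSUMPTION and
# should be replaced with a measured value on your hardware.
ITERATIONS = 300_000
SEC_PER_ITER = 1.0  # assumed throughput on a single P40

total_hours = ITERATIONS * SEC_PER_ITER / 3600
print(f"~{total_hours:.0f} GPU hours (~{total_hours / 24:.1f} days)")
```

Under that assumption this comes out to roughly 83 GPU hours, i.e. about 3.5 days; an actual run could differ substantially depending on batch size and throughput.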

shivangi-aneja commented 2 years ago

Many thanks for the quick response :)