Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
I was wondering if you could share some statistics on how long it took to train these models. Specifically, the GPU hours and hardware used to train pSp (StyleGAN Inversion) on the FFHQ dataset would be really helpful. I have already sent an email about this, so feel free to reply there.