eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

How to train a face frontalization model with a compressed size, and how much FFHQ data is needed for training #285

Closed abdul756 closed 2 years ago

yuval-alaluf commented 2 years ago

I am not sure what you mean by compressed size. For the frontalization task, we trained on the entire FFHQ dataset, which contains 70,000 images. You could probably use fewer images and still get reasonable results.

abdul756 commented 2 years ago

Thanks, Yuval, for your quick reply. What I mean is: how can I reduce the size of the face frontalization model from 1 GB down to 500 MB or less?

yuval-alaluf commented 2 years ago

Compressing our model is out of the scope of this work. If you want to compress it, you can look at standard techniques for reducing model size, such as pruning and distillation. You can also take a look at where most of the model's parameters are and try to redesign the architecture a bit to match your constraints.
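
As a starting point, here is a minimal sketch (not part of this repository) of the two suggestions above: first locating where the parameters live, then applying magnitude pruning via `torch.nn.utils.prune`. The `net` below is a placeholder toy model so the snippet runs on its own; substitute your loaded pSp network.

```python
# Hedged sketch: inspect per-submodule parameter counts, then prune.
# `net` is a hypothetical placeholder; replace it with your loaded pSp model.
import torch.nn as nn
import torch.nn.utils.prune as prune


def param_breakdown(model: nn.Module) -> None:
    """Print the parameter count of each top-level submodule."""
    total = sum(p.numel() for p in model.parameters())
    for name, child in model.named_children():
        n = sum(p.numel() for p in child.parameters())
        print(f'{name:20s} {n:>12,d} params ({100 * n / total:5.1f}%)')
    print(f'{"total":20s} {total:>12,d} params')


def prune_linear_conv(model: nn.Module, amount: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in every Linear/Conv2d layer.

    Note: unstructured pruning only makes the weights sparse; the checkpoint
    does not shrink on disk unless you also store the weights in a sparse
    format or use structured pruning that removes whole channels.
    """
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            prune.l1_unstructured(module, name='weight', amount=amount)
            prune.remove(module, 'weight')  # bake the pruning mask into the weights


# Placeholder model for demonstration; swap in your pSp network here.
net = nn.Sequential(nn.Conv2d(3, 64, 3), nn.ReLU(), nn.Linear(64, 10))
param_breakdown(net)
prune_linear_conv(net, amount=0.5)
```

Running `param_breakdown` on the full pSp model should tell you whether the encoder or the StyleGAN decoder dominates the checkpoint size, which is where a redesign would pay off most.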