eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Toonify the output more #83

Closed · cvmlddev closed this issue 3 years ago

cvmlddev commented 3 years ago

Hi, I trained pixel2style2pixel on a paired toon dataset (created using StyleGAN2, around 8,000 pairs). I trained the pSp model with the following hyperparameters: lpips_lambda=0.8, l2_lambda=1, id_lambda=1, w_norm_lambda=0.025, stylegan_weights=pretrained_models/ffhq_cartoon_blended.pt.
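For context, the full invocation was along these lines (the dataset type, batch sizes, and paths below are placeholders following the repo's documented toonify recipe, not my exact setup):

```bash
# Train pSp on paired toon data with the toonified StyleGAN2 generator.
python scripts/train.py \
  --dataset_type=toonify \
  --exp_dir=/path/to/experiment \
  --workers=8 \
  --batch_size=8 \
  --test_batch_size=8 \
  --test_workers=8 \
  --val_interval=2500 \
  --save_interval=5000 \
  --encoder_type=GradualStyleEncoder \
  --start_from_latent_avg \
  --lpips_lambda=0.8 \
  --l2_lambda=1 \
  --id_lambda=1 \
  --w_norm_lambda=0.025 \
  --stylegan_weights=pretrained_models/ffhq_cartoon_blended.pt
```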

I am actually getting pretty good results (see attached result 5002_01).

What I would like is a stronger toon effect; as it stands, the output looks too much like the source. I was wondering which hyperparameter could help me achieve this? Thanks

yuval-alaluf commented 3 years ago

You could try decreasing the L2 and LPIPS loss weights a bit. By doing so, the network may be less inclined to preserve the fine details of the source (e.g., eye size). However, we haven't really tried this ourselves, so I am not sure how much of an effect it will have.
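For example, you could relaunch with the same recipe but lower reconstruction weights; the values below are illustrative starting points, not something we've validated:

```bash
# Same toonify recipe as before, with the pixel-wise (L2) and perceptual
# (LPIPS) weights reduced so the output is pulled less strongly toward
# the source photo. Sweep these values for your own data.
python scripts/train.py \
  --dataset_type=toonify \
  --exp_dir=/path/to/experiment_lower_recon \
  --workers=8 \
  --batch_size=8 \
  --test_batch_size=8 \
  --test_workers=8 \
  --val_interval=2500 \
  --save_interval=5000 \
  --encoder_type=GradualStyleEncoder \
  --start_from_latent_avg \
  --lpips_lambda=0.4 \
  --l2_lambda=0.5 \
  --id_lambda=1 \
  --w_norm_lambda=0.025 \
  --stylegan_weights=pretrained_models/ffhq_cartoon_blended.pt
```

The point is the direction rather than the exact numbers: lowering l2_lambda and lpips_lambda relaxes how strictly the output must reconstruct the source, leaving more room for the toonified StyleGAN prior to shape the result.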

Also, while I don't know what your paired data looks like, the results you get with pSp are heavily influenced by the toon data you train on. If your toon training data does not have a strong toon effect, the results you get at inference will be similarly mild.

Overall, however, I think your results look very good! 💪

watertianyi commented 1 year ago

> Hi, I trained pixel2style2pixel on a paired toon dataset… [quotes the original post above]

How did you create your paired dataset?

xuguozhi commented 1 year ago

Could you share more details about the paired data?