eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Training a toonify model with paired data #182

Closed applech666 closed 3 years ago

applech666 commented 3 years ago

Thanks for the excellent work. I've managed to train a toonify model on my own paired dataset, and it is starting to look good, but the results are not as close to the target as I expected. 0011_97500

I use the following command:

python scripts/train.py --dataset_type=toonify --exp_dir=./experiment --workers=2 --batch_size=2 --test_batch_size=2 --test_workers=2 --val_interval=2500 --save_interval=2500 --encoder_type=GradualStyleEncoder --start_from_latent_avg --lpips_lambda=0.8 --l2_lambda=1.0 --id_lambda=0.1 --w_norm_lambda=0.005 --stylegan_weights=pretrained_models/stylegan2-ffhq-config-f.pt

I have 2700 pairs of data in my dataset. Could you please give me some advice? Thanks again.

applech666 commented 3 years ago

First, do I need to pre-train a decoder model based on my cartoon data set?

yuval-alaluf commented 3 years ago

First, do I need to pre-train a decoder model based on my cartoon data set?

Yes. I noticed that you are using the StyleGAN generator trained on FFHQ, which outputs real faces. That is why you're getting outputs that look more realistic than your toon data. Note that we uploaded a link to the toonify StyleGAN generator in this repo, so using that is a good starting point rather than training a GAN yourself.
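A minimal sketch of the adjusted training command, assuming the blended toonify generator checkpoint (ffhq_cartoon_blended.pt, mentioned later in this thread) has been downloaded into pretrained_models/; all other flags are taken unchanged from the original command:

```shell
# Same flags as before, but --stylegan_weights now points at the toonify
# generator instead of the FFHQ one. The checkpoint path is an assumption:
# adjust it to wherever you saved ffhq_cartoon_blended.pt.
python scripts/train.py \
  --dataset_type=toonify \
  --exp_dir=./experiment \
  --workers=2 \
  --batch_size=2 \
  --test_batch_size=2 \
  --test_workers=2 \
  --val_interval=2500 \
  --save_interval=2500 \
  --encoder_type=GradualStyleEncoder \
  --start_from_latent_avg \
  --lpips_lambda=0.8 \
  --l2_lambda=1.0 \
  --id_lambda=0.1 \
  --w_norm_lambda=0.005 \
  --stylegan_weights=pretrained_models/ffhq_cartoon_blended.pt
```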

applech666 commented 3 years ago

Do I just need to set --stylegan_weights=psp_ffhq_toonify.pt?

applech666 commented 3 years ago

I then set --stylegan_weights=ffhq_cartoon_blended.pt, but the results seem worse.

yuval-alaluf commented 3 years ago

I then set --stylegan_weights=ffhq_cartoon_blended.pt, but the results seem worse.

Can you clarify what you mean by worse?

huangfaan commented 3 years ago

First, do I need to pre-train a decoder model based on my cartoon data set?

Hey, man. I like your cartoon dataset. I am a student also learning toonification. Could you tell me where your cartoon dataset is from? I'd like to get it. Thanks!

applech666 commented 3 years ago

0001_127500

step is 127500

huangfaan commented 3 years ago

0001_127500

step is 127500

Hello? Could you tell me where your cartoon dataset is from? I look forward to your reply, thanks.

applech666 commented 3 years ago

First, do I need to pre-train a decoder model based on my cartoon data set?

Hey, man. I like your cartoon dataset. I am a student also learning toonification. Could you tell me where your cartoon dataset is from? I'd like to get it. Thanks!

here (https://github.com/eladrich/pixel2style2pixel/issues/21)

li-car-fei commented 2 years ago

I then set --stylegan_weights=ffhq_cartoon_blended.pt, but the results seem worse.

Can you clarify what you mean by worse?

What does ffhq_cartoon_blended.pt do?

li-car-fei commented 2 years ago

I just need to set --stylegan_weights = psp_ffhq_toonify.pt ?

When I do the same as you, I get the error: self.decoder.load_state_dict(ckpt['g_ema'], strict=False) KeyError: 'g_ema'
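The KeyError arises because the two checkpoints have different layouts: psp_ffhq_toonify.pt is a full pSp checkpoint whose weights live under a 'state_dict' key (with 'encoder.' / 'decoder.' prefixes), while --stylegan_weights expects a raw StyleGAN2 checkpoint that stores the generator under 'g_ema'. Below is a hypothetical sketch (using plain dicts in place of real tensors) of how one might extract generator weights from either layout; pick_generator_weights is a made-up helper, not part of the repo:

```python
def pick_generator_weights(ckpt):
    """Return generator weights from either checkpoint layout.

    Raw StyleGAN2 checkpoints keep the EMA generator weights under 'g_ema';
    full pSp checkpoints keep everything under 'state_dict', with decoder
    (generator) weights prefixed by 'decoder.'.
    """
    if "g_ema" in ckpt:
        # Raw StyleGAN2 checkpoint: ready to load into the decoder directly.
        return ckpt["g_ema"]
    if "state_dict" in ckpt:
        # Full pSp checkpoint: strip the 'decoder.' prefix and drop
        # encoder weights.
        prefix = "decoder."
        return {k[len(prefix):]: v
                for k, v in ckpt["state_dict"].items()
                if k.startswith(prefix)}
    raise KeyError("unrecognized checkpoint format: expected 'g_ema' or 'state_dict'")
```

In practice this means that for --stylegan_weights you should pass a raw generator checkpoint such as ffhq_cartoon_blended.pt, not the full pSp checkpoint psp_ffhq_toonify.pt.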