williamyang1991 / DualStyleGAN

[CVPR 2022] Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
1.64k stars · 254 forks

How to style perfectly #109

Open seyeon01 opened 2 months ago

seyeon01 commented 2 months ago

I ran the code from GitHub as-is, but it doesn't work well. What principle does the code use to combine the original photo with the anime photo?

Is it a matter of adjusting the number of iterations and the batch size at several levels, or of adjusting `--style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 --L2_reg_loss 0.015` in finetune_dualstylegan.py?

I tried until the end, but when I adjusted the parameters, either the face shape didn't come out right, the original image wasn't preserved, or the colors looked strange. Which parameters should I adjust to reproduce the results shown on GitHub?
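For reference, the loss weights quoted above are passed as command-line flags. A hypothetical invocation might look like the following; only the loss-weight flags are taken from the question, while the remaining arguments (`--iter`, `--batch`, the style name) are assumptions about the script's interface and not verified against the repo:

```shell
# Hypothetical fine-tuning invocation -- only the loss-weight flags are
# quoted from the question above; the other arguments are placeholders.
python finetune_dualstylegan.py --iter 1500 --batch 4 \
    --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 \
    --L2_reg_loss 0.015 cartoon
```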

seyeon01 commented 2 months ago


So what I understand is that I should adjust the parameters.

williamyang1991 commented 2 months ago

Yes. I tune the parameters for each dataset, since we found that different styles have very different characteristics. I mainly tune λID and λreg for each style dataset to achieve ideal performance. These two keep the network from model collapse.
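As a generic illustration (not the actual training loop in finetune_dualstylegan.py), the quoted weights combine the individual losses into a single objective; raising λID and λreg strengthens the terms that anchor the stylized output to the source identity and regularize the fine-tuning, which is what counteracts model collapse:

```python
# Generic sketch of a weighted loss combination, using the default weights
# quoted in the thread as an assumption. Here the loss terms are plain
# numbers standing in for the actual loss tensors.
def total_loss(style, cx, perc, ident, reg,
               lambda_style=0.25, lambda_cx=0.25, lambda_perc=1.0,
               lambda_id=1.0, lambda_reg=0.015):
    """Combine the individual loss terms with their weights."""
    return (lambda_style * style + lambda_cx * cx + lambda_perc * perc
            + lambda_id * ident + lambda_reg * reg)

# With every raw loss equal to 1, the total is just the sum of the weights.
print(round(total_loss(1, 1, 1, 1, 1), 3))  # -> 2.515
```

Increasing `lambda_id` or `lambda_reg` makes the identity and regularization terms dominate the gradient, pulling the generator back toward the source face and the pretrained weights.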

Also, a large batch size (e.g., 8×4 = 32) gives better results than a small batch size.

seyeon01 commented 2 months ago

Thank you for your response. Due to my computer's limited performance, I need to reduce the number of iterations when I increase the batch size. Will this affect the results?

williamyang1991 commented 1 month ago

I think it is OK.
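On the batch-size-versus-memory trade-off above: gradient accumulation is a standard workaround (a general technique, not a feature of this repo) that reaches a large effective batch on limited hardware by averaging gradients over several micro-batches before each optimizer step:

```python
# Minimal sketch of gradient accumulation, with plain numbers standing in
# for per-sample gradients. Averaging 8 micro-batches of 4 reproduces the
# gradient of one batch of 32, at the memory cost of a batch of 4.
def accumulated_mean(per_sample_grads, micro_batch):
    """Average within each micro-batch, then across micro-batches."""
    chunks = [per_sample_grads[i:i + micro_batch]
              for i in range(0, len(per_sample_grads), micro_batch)]
    micro_means = [sum(c) / len(c) for c in chunks]
    return sum(micro_means) / len(micro_means)

grads = [float(g) for g in range(32)]            # stand-in per-sample gradients
full = sum(grads) / len(grads)                   # one big batch of 32
accum = accumulated_mean(grads, micro_batch=4)   # 8 micro-batches of 4
print(abs(full - accum) < 1e-9)  # -> True
```

The equivalence holds when the micro-batches are equal-sized; in PyTorch the same effect comes from calling `backward()` on each micro-batch and calling `optimizer.step()` only once every N micro-batches.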