seyeon01 opened 2 months ago
I ran the code from GitHub as-is, but it doesn't produce the expected result. What principle does the code use to combine the original photo with the anime photo?
Is it a matter of adjusting the iterations and batch size, or of adjusting --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 --L2_reg_loss 0.015 in finetune_dualstylegan.py?
I tried to the end, but whenever I adjusted the parameters, either the face shape didn't come out, the original image wasn't learned, or the colors looked wrong. Which parameters should I adjust to get results like those shown on GitHub?
My understanding is that the parameters need to be tuned.
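For context, the command I ran looks roughly like this (the five loss weights are the ones above; --iter, --batch, --ckpt, and the style name "cartoon" are my assumptions based on the DualStyleGAN README, so treat anything beyond the loss flags as illustrative):

```bash
# Sketch only: loss weights as quoted in this thread; --iter, --batch,
# the checkpoint path, and the style name are assumptions, not from here.
python finetune_dualstylegan.py --iter 1500 --batch 4 \
    --ckpt ./checkpoint/cartoon/generator-pretrain.pt \
    --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 \
    --id_loss 1 --L2_reg_loss 0.015 \
    cartoon
```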
Yes. I tune the parameters for each dataset, since we found that different styles have very different characteristics. I mainly tune λID and λreg for each style dataset to achieve ideal performance; these two keep the network from model collapse.
Also, a large batch size (e.g., 8*4=32) gives better results than a small batch size.
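Concretely, λID corresponds to --id_loss and λreg to --L2_reg_loss, so a tuning run might look like the sketch below. The exact values and the multi-GPU launch are only illustrative assumptions, not a recommended setting:

```bash
# Illustrative only: raising --id_loss pushes toward preserving identity,
# raising --L2_reg_loss regularizes against collapse; both values below
# are hypothetical. 8 images/GPU x 4 GPUs gives the effective batch of 32.
python -m torch.distributed.launch --nproc_per_node=4 finetune_dualstylegan.py \
    --iter 1500 --batch 8 \
    --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 \
    --id_loss 1.5 --L2_reg_loss 0.02 \
    cartoon
```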
Thank you for your response. Due to my computer's limited performance, I need to reduce the number of iterations when I increase the batch size. Will this affect the results?
I think it is OK.
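A rough rule of thumb for that trade-off (my own assumption, not something validated here): keep iter × batch, i.e. the total number of images seen, roughly constant:

```bash
# Assumption: if the default were 1500 iterations at batch 4 (6000 images
# seen), then at batch 8 about 750 iterations covers the same amount of
# data, since 750*8 == 1500*4.
python finetune_dualstylegan.py --iter 750 --batch 8 \
    --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 \
    --id_loss 1 --L2_reg_loss 0.015 \
    cartoon
```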