Hi, @HideUnderBush! Thanks for your amazing work!
I'm trying to reproduce the face2anime experiment on the Danbooru dataset, but I've run into some problems. Could you give me some advice?
Step 1: Following your scripts, I used the 512 px StyleGAN2 checkpoint pretrained on the FFHQ dataset as the base model and fine-tuned it on the Danbooru dataset. (I didn't change any other parameters; is that right?)
Step 2: I ran `closed_form_factorization.py` on the checkpoint after 35000 training iterations (`35000.pt`) to obtain the `factor.out` file.
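For context, my understanding is that the closed-form factorization step amounts to taking singular vectors of the generator's style-modulation weights. A minimal sketch of that idea (with a random matrix standing in for the real concatenated modulation weights, so the names here are illustrative, not the script's actual API):

```python
import torch

def factorize(weight: torch.Tensor, n_factors: int = 5) -> torch.Tensor:
    """Closed-form factorization sketch: the top right-singular vectors of a
    style-modulation weight matrix give candidate semantic directions."""
    # weight: (out_dim, latent_dim); Vh rows are eigenvectors of W^T W
    _, _, vh = torch.linalg.svd(weight)
    return vh[:n_factors]  # (n_factors, latent_dim) directions

# toy stand-in for the generator's modulation weights
w = torch.randn(256, 512)
directions = factorize(w)
print(directions.shape)  # torch.Size([5, 512])
```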
Step 3: I tried to invert an image (size 512), but when the optimization finished I got an almost black result, and the MSE loss was very large (about 1.4-1.7).
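To make sure I understand the inversion step correctly: I believe it optimizes a latent code against the target image under a reconstruction loss. A toy sketch of that loop (a frozen linear map stands in for the StyleGAN2 generator; the real projector also uses a perceptual loss and noise regularization, and none of these names come from your code):

```python
import torch

torch.manual_seed(0)
# Stand-in "generator": frozen linear map from a 512-d latent to an image
G = torch.nn.Linear(512, 3 * 16 * 16)
for p in G.parameters():
    p.requires_grad_(False)

# Target that G can actually produce, so inversion should succeed
target = G(torch.randn(1, 512)).detach()

z = torch.randn(1, 512, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

losses = []
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(G(z), target)
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"initial MSE: {losses[0]:.4f}, final MSE: {losses[-1]:.4f}")
```

In this toy setting the MSE drops steadily toward zero, which is why a loss stuck at 1.4-1.7 on my real run makes me suspect something is wrong earlier in my pipeline.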
Are there any key points I might have missed? I'd appreciate it if you could point out any mistakes in my steps. Thanks for your work!