HideUnderBush / UI2I_via_StyleGAN2

Unsupervised image-to-image translation method via pre-trained StyleGAN2 network

Request for anime dataset #10

Open annihi1ation opened 3 years ago

annihi1ation commented 3 years ago

Thanks for your magnificent research! I wonder if I could get your anime dataset, since none of my own datasets give me reasonable results.

HideUnderBush commented 3 years ago

Thanks for your interest!

We used the Danbooru dataset for the anime model. You can find:

1. An introduction to the dataset on gwern's blog: https://www.gwern.net/Danbooru2020
2. A more user-friendly download page: https://gist.github.com/stormraiser/a8066517b0b60a50c701ee9c8f720691
3. Note that Danbooru (the original version) is not an anime face dataset; a cropped face version is available here: https://www.kaggle.com/lukexng/animefaces-512x512

Note that in our paper we actually used 256x256 anime data and fine-tuned a 256x256 FFHQ StyleGAN2 pre-trained model.
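Since the Kaggle crop is 512x512 and the pre-trained model here is 256x256, the images need to be downscaled first. A minimal sketch using Pillow, assuming the Kaggle archive has been unpacked into a local folder (both folder names below are hypothetical):

```python
from pathlib import Path
from PIL import Image

SRC = Path("animefaces-512x512")  # hypothetical: the unpacked Kaggle archive
DST = Path("animefaces-256x256")  # output folder matching the 256x256 model
DST.mkdir(exist_ok=True)

for img_path in sorted(SRC.iterdir()):
    if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    img = Image.open(img_path).convert("RGB")
    # Lanczos downsampling keeps more detail than bilinear at a 2x reduction
    img = img.resize((256, 256), Image.LANCZOS)
    img.save(DST / (img_path.stem + ".png"))
```

If this repo follows rosinality's stylegan2-pytorch pipeline, the resized folder would then be packed into an LMDB before training, e.g. `python prepare_data.py --out anime_lmdb --size 256 animefaces-256x256` (again, an assumption about the training script).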

annihi1ation commented 3 years ago

Thanks for the reply! So you mean that we just need to fine-tune the FFHQ pre-trained model with the cropped dataset? Please let me know if I have misunderstood anything. Again, it is kind of you to help me on such short notice.

HideUnderBush commented 3 years ago

Sorry for the late reply.

Well, you could put it that way: you need to fine-tune the FFHQ pre-trained model with the Danbooru dataset (make sure the image size of the Danbooru data you use/crop matches that of the pre-trained model). To achieve better quality, you may want to read our paper and look at other techniques such as layer swapping. There are also some hyper-parameters that affect the results; detailed analysis can be found in the experiment comparisons and ablation study.
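For reference, layer swapping usually amounts to merging the state dicts of the original and fine-tuned generators at a chosen resolution threshold. Below is a minimal sketch of that idea in PyTorch; the key layout (`conv1`, `convs.N`, `to_rgb1`, `to_rgbs.N`) and the `"g_ema"` checkpoint field are assumptions based on rosinality's stylegan2-pytorch, which this kind of repo typically builds on, and the checkpoint file names are hypothetical:

```python
import torch

def resolution_of(key):
    """Map a generator state_dict key to the resolution its layer produces,
    or None for resolution-independent parameters (mapping network, input
    constant, noise buffers)."""
    if key.startswith(("conv1", "to_rgb1")):
        return 4  # the 4x4 base block
    # convs has two layers per resolution from 8x8 up; to_rgbs has one
    for prefix, per_res in (("convs.", 2), ("to_rgbs.", 1)):
        if key.startswith(prefix):
            idx = int(key[len(prefix):].split(".")[0])
            return 2 ** (3 + idx // per_res)
    return None

def layer_swap(ffhq_sd, anime_sd, swap_res=32):
    """Blend two generators: layers below `swap_res` come from the
    fine-tuned anime model (coarse structure), the rest from the original
    FFHQ model (fine texture). The direction shown is one option; it can
    just as well be swapped the other way."""
    blended = {}
    for key, value in ffhq_sd.items():
        res = resolution_of(key)
        src = anime_sd if (res is not None and res < swap_res) else ffhq_sd
        blended[key] = src[key].clone()
    return blended

ffhq  = torch.load("ffhq_256.pt",  map_location="cpu")["g_ema"]
anime = torch.load("anime_256.pt", map_location="cpu")["g_ema"]
merged = layer_swap(ffhq, anime, swap_res=32)
```

Which generator supplies the coarse band, and where to place `swap_res`, are design choices; the paper's comparison and ablation results are the place to check which setting suits your data.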

annihi1ation commented 3 years ago

Thanks for the reply! I wonder if you could share your fine-tuned anime StyleGAN2 model. You know, I am just a student who cannot afford a high-performance GPU to train the model. 😂

Looking forward to your reply!