clovaai / mxfont

Official PyTorch implementation of MX-Font (Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts), ICCV 2021

Training strategy #6

Closed ecnuycxie closed 3 years ago

ecnuycxie commented 3 years ago

Hello, thank you for your impressive work. I noticed that you set 'max_iter' to 800,000, and the training dataset contains 439 different styles (each with nearly 6,000 characters). When I set 'batch' to 8, the training only covers between 2 and 3 epochs, which seems strange to me. Am I right?
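A rough back-of-the-envelope check of the numbers quoted above, assuming one target character per style counts as a single training sample (a simplification, not the repository's actual data pipeline):

```python
# Rough epoch estimate from the numbers quoted in this issue, assuming one
# target character per style counts as one training sample.
max_iter = 800_000          # training iterations from the config
batch_size = 8              # batch size used by the questioner
num_styles = 439            # fonts in the training set
chars_per_style = 6_000     # approximate characters per font

samples_per_epoch = num_styles * chars_per_style     # ~2.63M samples
epochs = max_iter * batch_size / samples_per_epoch
print(f"approx. epochs: {epochs:.2f}")               # ~2.43, i.e. between 2 and 3
```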

8uos commented 3 years ago

Hi, thanks for your question. For generative models, the number of iterations usually matters more than the number of epochs. We observed that 650,000 iterations are enough for our model (with 6 experts). Also, a data point actually includes more than one image (7 images in this code: 3 style images, 3 content images, and 1 target image), so the model sees 8 * 7 = 56 images per iteration, which is many more images than you estimated.
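An illustrative sketch of the image throughput implied by this answer; the dict layout, key names, and glyph resolution are hypothetical placeholders, not the repository's actual batch format:

```python
# Illustrative only: how many images the model processes per iteration under
# the setup described above (3 style refs + 3 content refs + 1 target glyph).
import torch

batch_size = 8
image_size = 128  # assumed glyph resolution, for illustration only

# one data point = 3 style references + 3 content references + 1 target glyph
sample = {
    "style_imgs":   torch.zeros(3, 1, image_size, image_size),
    "content_imgs": torch.zeros(3, 1, image_size, image_size),
    "target_img":   torch.zeros(1, 1, image_size, image_size),
}
imgs_per_sample = sum(v.shape[0] for v in sample.values())   # 7

print(batch_size * imgs_per_sample)             # 56 images per iteration
print(batch_size * imgs_per_sample * 650_000)   # ~36.4M images over training
```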

ecnuycxie commented 3 years ago

Thanks for your reply! I still wonder: from which metrics did you observe that the iterations are enough?

8uos commented 3 years ago

Unfortunately, the reported metrics such as the losses do not clearly reflect the quality of the generated images, as with many other generative models. However, we observed that the quality of generated images improves as the accuracies of the content and style classifiers on generated images (reported as AC_g_acc_c and AC_g_acc_s) increase.
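A minimal sketch of how such a metric can be tracked: accuracy of auxiliary content/style classifiers evaluated on generated glyphs. `content_clf`, `style_clf`, and the label tensors are hypothetical placeholders, not the repository's actual modules or logging keys.

```python
import torch

@torch.no_grad()
def classifier_accuracy(clf, generated_imgs, labels):
    """Fraction of generated images the classifier assigns to the correct class."""
    logits = clf(generated_imgs)          # (N, num_classes)
    preds = logits.argmax(dim=1)
    return (preds == labels).float().mean().item()

# During evaluation, rising values of both accuracies were reported to
# correlate with better sample quality (AC_g_acc_c / AC_g_acc_s):
# acc_c = classifier_accuracy(content_clf, fake_imgs, content_labels)
# acc_s = classifier_accuracy(style_clf, fake_imgs, style_labels)
```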

SanghyukChun commented 3 years ago

Closing the issue, assuming the answer resolves the problem. Please re-open the issue as necessary.