yizhiwang96 / deepvecfont

[SIGGRAPH Asia 2021] DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning
MIT License

How to continue training from the pre-trained model? #28

Open graphl opened 1 year ago

graphl commented 1 year ago

Hi, thank you for making this! I loaded the parameters of the pre-trained model and continued training, but now the results look more like the fonts I added. This is the code I appended in main.py:

[image: code appended to main.py]

[image: result, merge_128]

[image: VAL/Loss curve]

Is this overfitting? Or is it because the Adam optimizer's state was not loaded? Is this the right way to continue training my method on top of the pre-trained model?
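For context, this is roughly what I mean by loading the optimizer state as well. It is a minimal PyTorch sketch with dummy placeholders (the model, file name, and checkpoint keys are my own, not the repo's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical sketch, not the repo's actual checkpoint layout: resuming
# training with both the model weights and the Adam optimizer state.
model = nn.Linear(8, 8)                    # stand-in for the DeepVecFont model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Save both state dicts so a later run can resume cleanly.
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "ckpt.pth")

ckpt = torch.load("ckpt.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
# Without this line, Adam restarts with zeroed moment estimates, so the
# first updates after resuming can be much larger than before the restart.
optimizer.load_state_dict(ckpt["optimizer"])
```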

yizhiwang96 commented 1 year ago

Use .load_state_dict before the training (e.g., in this place). Do not call .eval(), because the model is being trained. If your dataset is small, it is more like fine-tuning, and you only need to train for a few steps.
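A minimal sketch of that setup (the model, loss, and file name here are dummies for illustration, not the repo's code): load the weights before the loop and stay in train mode.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                      # dummy stand-in for the actual model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
torch.save(model.state_dict(), "pretrained.pth")  # pretend this is the released checkpoint

# Load the pre-trained weights *before* the training loop starts.
model.load_state_dict(torch.load("pretrained.pth", map_location="cpu"))
# Stay in train mode; .eval() disables dropout and freezes batch-norm
# statistics, which is only appropriate at test time.
model.train()

for step in range(50):                       # a few steps suffice for a small dataset
    x = torch.randn(4, 8)
    loss = model(x).pow(2).mean()            # dummy loss, just for the sketch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```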

graphl commented 1 year ago

Thank you for the reply; I've tried it. The method you suggested works well, but I ran into another problem. I added some of my own fonts and trained for 11 epochs, but when I test with my own trained parameters, the results are not very good.

Here are some of my test results.

I found that the weight (stroke thickness) of the font seems to affect the result:

  1. case 1 [image: merge_128]
  2. case 2 [image: merge_128]
  3. case 3 [image: merge_128]

The thicker the font, the better the result; the thinner the font, the worse. What should I do to make thin fonts work better?