yeungchenwa / FontDiffuser

[AAAI2024] FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning
https://yeungchenwa.github.io/fontdiffuser-homepage/

Hello, I am very interested in your research after reading your paper. But I have a confusion about the second stage of training. #39

Open jingmingtao opened 4 months ago

jingmingtao commented 4 months ago


May I ask how to load the model from the first stage during the second stage of training? Is it the `total_model.pth` from the first stage? It seems that it cannot be loaded. The downloaded `scr_210000.pth` can be loaded, but what is the relationship between that file and the first stage of my own training? Please help answer, thank you very much!

yeungchenwa commented 4 months ago

Hi @jingmingtao. Sorry that the previous code may have confused you; I have updated it. During phase 2, the trained parameters obtained in phase 1 are resumed. You can set `phase_1_ckpt_dir` to the directory containing the checkpoint files obtained in phase 1.
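A minimal sketch of what resuming phase-1 weights at the start of phase 2 might look like, assuming a PyTorch state-dict checkpoint; the file name `unet.pth` and the function name here are illustrative assumptions, not the repository's exact layout:

```python
import os
import tempfile
import torch

def load_phase_1_checkpoint(model: torch.nn.Module, phase_1_ckpt_dir: str) -> None:
    """Load phase-1 weights into the model before phase-2 training.

    The checkpoint file name "unet.pth" is an assumption for
    illustration, not necessarily FontDiffuser's actual layout.
    """
    ckpt_path = os.path.join(phase_1_ckpt_dir, "unet.pth")
    state_dict = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state_dict)

# Round-trip demo with a stand-in model.
if __name__ == "__main__":
    model = torch.nn.Linear(4, 4)
    with tempfile.TemporaryDirectory() as ckpt_dir:
        torch.save(model.state_dict(), os.path.join(ckpt_dir, "unet.pth"))
        resumed = torch.nn.Linear(4, 4)
        load_phase_1_checkpoint(resumed, ckpt_dir)
```

If loading fails with missing/unexpected keys, it usually means the checkpoint holds a different module (or a wrapped dict such as `{"state_dict": ...}`) than the model you are loading into, which would explain why an SCR checkpoint loads while a phase-1 file does not.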