yeungchenwa / FontDiffuser

[AAAI2024] FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning
https://yeungchenwa.github.io/fontdiffuser-homepage/

Is there a typo in the description of the training phases? #56

Open l257737602 opened 2 months ago

l257737602 commented 2 months ago

https://github.com/yeungchenwa/FontDiffuser?tab=readme-ov-file#training---phase-2

> After the phase 2 training, you should put the trained checkpoint files (unet.pth, content_encoder.pth, and style_encoder.pth) to the directory phase_1_ckpt. During phase 2, these parameters will be resumed.

Do you mean "After the phase 1 training"? These checkpoints are produced by phase 1 and then loaded when phase 2 starts, so "After the phase 2 training" looks like a typo.