
Diff-Font


Official PyTorch implementation of the paper *Diff-Font: Diffusion Model for Robust One-Shot Font Generation*. arXiv version

Dependencies


```
torch>=1.10.0
tqdm
opencv-python
scikit-learn
pillow
tensorboardX
blobfile>=1.0.5
mpi4py
attrdict
pyyaml
```

Dataset


方正字库 (FounderType) provides free font downloads for non-commercial use.

Example directory hierarchy

```
data_dir
    |--- font1
    |--- font2
           |--- 00000.png
           |--- 00001.png
           |--- ...
    |--- ...
```
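Each font directory acts as one style, and the zero-padded filename index identifies the character. A minimal sketch of enumerating training samples from this hierarchy (the helper name `list_samples` is illustrative, not the repo's code):

```python
import os

def list_samples(data_dir):
    # Walk the data_dir/font_name/NNNNN.png hierarchy and collect
    # (font_name, image_path) pairs; font_name doubles as the style label.
    samples = []
    for font in sorted(os.listdir(data_dir)):
        font_dir = os.path.join(data_dir, font)
        if not os.path.isdir(font_dir):
            continue
        for fname in sorted(os.listdir(font_dir)):
            if fname.endswith('.png'):
                samples.append((font, os.path.join(font_dir, fname)))
    return samples
```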

Usage


Prepare dataset

```shell
python font2img.py --ttf_path ttf_folder --chara total_chn.txt --save_path save_folder --img_size 80 --chara_size 60
```

Conditional training

Fine-tuning

After conditional training, we suggest an additional fine-tuning step.

Test

Modify the configuration file `cfg/test_cfg.yaml`.

Key settings for testing:

```yaml
chara_nums: 6625
num_samples: 10
stroke_path: './char_stroke.txt'
model_path: 'path_to_trained_model' # the EMA checkpoint is suggested
sty_img_path: 'path_to_reference_image'
total_txt_file: './total_chn.txt'
gen_txt_file: './gen_char.txt' # txt file for generation
img_save_path: './result' # path to save generated images
classifier_free: True
cont_scale: 3.0 # content guidance scale
sk_scale: 3.0 # stroke guidance scale
```
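Given the `pyyaml`/`attrdict` dependencies, the config is presumably parsed with PyYAML. A minimal loading sketch (the helper name `load_cfg` is hypothetical):

```python
import yaml

def load_cfg(path):
    # Parse the YAML test configuration into a plain dict.
    with open(path, encoding='utf-8') as f:
        return yaml.safe_load(f)
```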

Then run:

```shell
python sample.py --cfg_path cfg/test_cfg.yaml
```
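With `classifier_free: True`, `cont_scale` and `sk_scale` weight the content and stroke conditions in the guided noise prediction. A sketch of multi-condition classifier-free guidance in the standard additive form (the exact combination used by Diff-Font may differ):

```python
def guided_eps(eps_uncond, eps_cont, eps_stroke, cont_scale=3.0, sk_scale=3.0):
    # Standard additive classifier-free guidance over two conditions:
    # each conditional noise prediction is pushed away from the
    # unconditional one, scaled by its guidance weight.
    return (eps_uncond
            + cont_scale * (eps_cont - eps_uncond)
            + sk_scale * (eps_stroke - eps_uncond))
```

Larger scales enforce the condition more strongly at the cost of sample diversity; a scale of 1.0 with a zero unconditional term reduces to plain conditional sampling.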

Visual display

*(example generation results)*

FAQ

1. The content of the generated characters is incorrect.

Please check whether each font in the training dataset contains all of the characters listed in the `.txt` file.
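A quick way to run this check, assuming images are named by zero-padded character index as in the hierarchy above (the helper name `missing_chars` is hypothetical):

```python
import os

def missing_chars(data_dir, total_txt):
    # Read the character list, then report, per font, the indices whose
    # expected image file (e.g. 00012.png) is absent.
    with open(total_txt, encoding='utf-8') as f:
        chars = [c for c in f.read() if not c.isspace()]
    missing = {}
    for font in sorted(os.listdir(data_dir)):
        files = set(os.listdir(os.path.join(data_dir, font)))
        absent = [i for i in range(len(chars)) if '%05d.png' % i not in files]
        if absent:
            missing[font] = absent
    return missing  # empty dict means every font covers every character
```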

2. The generated character images are unclear and structurally incomplete.

This indicates that the model has not been trained sufficiently. Please continue training the model.

Acknowledgements


This project is based on openai/guided-diffusion and DG-Font.