Open ShaihW opened 1 year ago
Hello, can you provide the cfg file and several generated images?
Thank you for the answer. Attached are 2 examples from the same epoch (but with a different style image fed to the network). I concatenated the 10 results into one figure of 2 rows and 5 columns.
the training cfg is:
data_dir: '/home/shai/Font-diff/eng_ch_a/'
chara_nums: 62
diffusion_steps: 1000
noise_schedule: 'linear'
image_size: 80
num_channels: 128
num_res_blocks: 3
lr: 0.0002
batch_size: 8
log_interval: 250
save_interval: 30000
train_step: 420000
attention_resolutions: '40, 20, 10'
sty_encoder_path: './pretrained_models/cont_model_78.ckpt' #### trained with DG-Font
model_save_dir: './treng_trained_models'
classifier_free: False
total_train_step: 800000
resume_checkpoint: ""
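One thing worth double-checking in the cfg above: in guided-diffusion-style code, the comma-separated `attention_resolutions` string is typically converted to downsample rates via `image_size // res` before being passed to the UNet, so '40, 20, 10' with `image_size: 80` would mean attention at downsample rates 2, 4, and 8. A quick sketch of that conversion (an assumption about how this repository parses the field, not verified against its source):

```python
# Hypothetical parse: convert the 'attention_resolutions' string into
# downsample rates the way guided-diffusion-style script_util code does.
image_size = 80
attention_resolutions = "40, 20, 10"

ds_rates = [image_size // int(r) for r in attention_resolutions.split(",")]
print(ds_rates)  # downsample rates at which attention is applied
```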
test cfg:
dropout: 0.1
chara_nums: 62
diffusion_steps: 1000
noise_schedule: 'linear'
image_size: 80
num_channels: 128
num_res_blocks: 3
batch_size: 5
num_samples: 10
attention_resolutions: '40, 20, 10'
use_ddim: False
timestep_respacing: ddim25
model_path: '/home/shai/Font-diff/treng_trained_models/model540000.pt'
sty_img_path: '/home/shai/Font-diff/eng_chars/id_9936/00001.png'
total_txt_file: './total_en.txt'
gen_txt_file: './en_char.txt'
img_save_path: './result_img_EN_5'
classifier_free: True
cont_scale: 3.0
sk_scale: 3.0
You can set 'classifier_free' to 'False' in the test cfg and try it again.
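For reference, the suggested change amounts to a single line in the test cfg shown above (the comment about the guidance scales is my assumption about how classifier-free sampling usually works, not confirmed from this repository's code):

```yaml
# test cfg: disable classifier-free guidance at sampling time
classifier_free: False
# cont_scale / sk_scale presumably only take effect when classifier_free is True
```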
May I ask a question? How did you get the file "cont_model_78.ckpt"? Is it obtained from the code? If I want to get the sty_encoder, how should I do it? Thanks a lot.
Trained it using DG-Font
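To make that answer concrete: after training DG-Font, one would carve the style-encoder weights out of the full checkpoint. Below is a minimal sketch of that key-filtering step; the `style_enc.` prefix, the stand-in dict, and the file names are assumptions for illustration, not DG-Font's actual checkpoint layout. With a real checkpoint you would wrap the same logic in `torch.load` / `torch.save`.

```python
def extract_submodule(state_dict, prefix):
    """Keep only keys under `prefix`, stripping the prefix so the
    resulting dict can be loaded into the standalone submodule."""
    return {k[len(prefix):]: v for k, v in state_dict.items()
            if k.startswith(prefix)}

# Stand-in for a checkpoint loaded with torch.load(path);
# the 'style_enc.' prefix is hypothetical.
full_ckpt = {
    "style_enc.conv1.weight": "...",
    "style_enc.conv1.bias": "...",
    "content_enc.conv1.weight": "...",
}
sty_state = extract_submodule(full_ckpt, "style_enc.")
print(sorted(sty_state))  # ['conv1.bias', 'conv1.weight']
```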
Any luck with this?
same problem
Fixed by setting 'classifier_free' to 'False' in the test cfg.
Trained it using DG-Font
That doesn't clarify the problem. I also need to apply this to an English font. What's the method? There are 2 problems in your code I am stuck on:
sty_encoder_path: './pretrained_models/chinese_styenc.ckpt' # path to pre-trained style encoder
Where do I get an English style encoder?
stroke_path: './chinese_stroke.txt' # encoded strokes
How do I make an English stroke set like this file, with 1s and 0s?
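On the stroke file: for Chinese, each line apparently maps a character to a fixed-length 0/1 vector over a stroke inventory. One hypothetical way to build an analogous file for English is to define your own fixed attribute inventory and annotate each glyph against it. The attribute names, glyph annotations, and output file name below are all made up for illustration; the real chinese_stroke.txt format should be checked against the repository.

```python
# Hypothetical component inventory for Latin letters; the Chinese file
# uses a fixed stroke set, so for English you must invent your own
# fixed-length attribute set and keep its order consistent everywhere.
ATTRS = ["vertical", "horizontal", "diagonal", "curve", "dot"]

# Made-up annotations: which components each glyph contains.
GLYPHS = {
    "A": {"diagonal", "horizontal"},
    "B": {"vertical", "curve"},
    "i": {"vertical", "dot"},
}

with open("english_stroke.txt", "w", encoding="utf-8") as f:
    for ch, comps in GLYPHS.items():
        bits = " ".join("1" if a in comps else "0" for a in ATTRS)
        f.write(f"{ch} {bits}\n")
# e.g. the line for "A" would be: A 0 1 1 0 0
```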
Hello, I would like to ask why I get an error like the following when I use a ckpt trained by DG-Font? I would appreciate a reply from you!
Traceback (most recent call last):
File "/home/llll/PycharmProjects/Fontdiffproject/Font-diff/train.py", line 122, in
Hi, first, thanks for this great repository.
I am trying to train such a diffusion network for an English font. I trained a style encoder that works fine. I plugged it into the diffusion model, removed the strokes (my cfg file does not contain a "stroke_path:" line), supplied English chars in gen_char.txt and total_eng.txt (replacing the total_chn.txt file), and of course supplied my English dataset.
Now the problem is that the results are bad. I also suspect that they look a bit Chinese.
I will appreciate any help with this.
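One quick thing to rule out before debugging model quality: the file names below come from this thread, and the assumption (not verified against the repository) is that a character's class index is derived from its position in the total character list, so every character in gen_char.txt must also appear in total_en.txt. A small consistency check one could run before training:

```python
# Sanity check (file names taken from the thread): every character to
# be generated should appear in the total character list, since the
# class index is presumably derived from its position in total_en.txt.
def check_charsets(total_path, gen_path):
    total = set(open(total_path, encoding="utf-8").read().split())
    gen = set(open(gen_path, encoding="utf-8").read().split())
    missing = gen - total
    if missing:
        raise ValueError(f"chars in {gen_path} missing from {total_path}: {missing}")
    return len(total), len(gen)

# Usage: check_charsets("./total_en.txt", "./en_char.txt")
```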