Hxyz-123 / Font-diff


Generation of Characters outside of total_chn.txt #24

Open lly0101 opened 12 months ago

lly0101 commented 12 months ago

The model is an inspiring one. When using your model, I found that we can only generate characters inside total_chn.txt, i.e., inside the dataset. When I try to generate characters outside that list, the torch dimensions (in the embedding layer) do not match. I am not sure how the content in the character attribute encoder is encoded. Is there a way to input characters that are not included in total_chn.txt and generate them, without re-training the whole model? Thank you.
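The dimension mismatch described above is the classic out-of-vocabulary failure for a lookup embedding: the character-to-index table is built from total_chn.txt at training time, so a character absent from that file has no row in the embedding matrix. A minimal sketch of the failure mode (the file name and contents here are stand-ins, not the repo's actual loading code):

```python
# Sketch of an out-of-vocabulary lookup failure. The char->index table is
# fixed at training time from the character list file, so any character
# missing from it has no embedding row and inference cannot proceed.

def build_char_index(chars):
    """Map each training character to a fixed embedding row index."""
    return {ch: i for i, ch in enumerate(chars)}

train_chars = ["一", "二", "三"]        # stand-in for total_chn.txt contents
char2idx = build_char_index(train_chars)

def lookup(ch):
    if ch not in char2idx:
        # The failure mode from the issue: no embedding row exists for
        # this character, so it cannot be generated without retraining
        # (or remapping it onto an existing index).
        raise KeyError(f"character {ch!r} is not in total_chn.txt")
    return char2idx[ch]

print(lookup("二"))   # → 1
```

This is why the question above is hard to work around: the embedding weights for unseen characters simply do not exist in the trained checkpoint.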

czzerone commented 1 month ago

> The model is an inspiring one. When using your model, I found that we can only generate characters inside total_chn.txt, i.e., inside the dataset. When I try to generate characters outside that list, the torch dimensions (in the embedding layer) do not match. I am not sure how the content in the character attribute encoder is encoded. Is there a way to input characters that are not included in total_chn.txt and generate them, without re-training the whole model? Thank you.

Hello, can you tell me where to get the model? I did not find any trained models released by the author.

Outlanderll commented 3 weeks ago

Hello, I would like to know what is in the ./char_stroke.txt file referenced in test_cfg.yaml. Looking forward to your reply!

lly0101 commented 3 weeks ago

> Hello, I would like to know what is in the ./char_stroke.txt file referenced in test_cfg.yaml. Looking forward to your reply!

The stroke encoding, i.e., chinese_stroke.txt if you are generating Chinese characters.
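For readers unfamiliar with such stroke-encoding files, loading one generally amounts to parsing a per-character line into a mapping. The exact format of chinese_stroke.txt is an assumption here; this sketch takes each line to be a character followed by whitespace-separated stroke codes:

```python
# Hedged sketch of loading a stroke-encoding table such as
# chinese_stroke.txt. The file format is ASSUMED for illustration:
# one character per line, followed by its stroke codes, e.g. "木 1 2 3 4".

def load_stroke_table(lines):
    """Parse lines of 'char code code ...' into {char: [codes]}."""
    table = {}
    for line in lines:
        parts = line.split()
        if not parts:
            continue                      # skip blank lines
        char, strokes = parts[0], [int(s) for s in parts[1:]]
        table[char] = strokes
    return table

sample = ["木 1 2 3 4", "人 3 4"]         # toy stand-in for the real file
print(load_stroke_table(sample)["人"])    # → [3, 4]
```

The character attribute encoder can then consume these per-character stroke sequences; consult the repo's own data-loading code for the authoritative format.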

lly0101 commented 3 weeks ago

> The model is an inspiring one. When using your model, I found that we can only generate characters inside total_chn.txt, i.e., inside the dataset. When I try to generate characters outside that list, the torch dimensions (in the embedding layer) do not match. I am not sure how the content in the character attribute encoder is encoded. Is there a way to input characters that are not included in total_chn.txt and generate them, without re-training the whole model? Thank you.
>
> Hello, can you tell me where to get the model? I did not find any trained models released by the author.

I trained it myself.

Outlanderll commented 4 days ago

> Hello, I would like to know what is in the ./char_stroke.txt file referenced in test_cfg.yaml. Looking forward to your reply!
>
> The stroke encoding, i.e., chinese_stroke.txt if you are generating Chinese characters.

Thank you very much for your reply! May I ask whether you have used this model to generate other scripts, such as Arabic and English? The ckpt I trained with DG-Font raises the following error when I apply it here:

```
Traceback (most recent call last):
  File "/home/llll/PycharmProjects/Fontdiffproject/Font-diff/train.py", line 122, in <module>
    main()
  File "/home/llll/PycharmProjects/Fontdiffproject/Font-diff/train.py", line 56, in main
    model.sty_encoder.load_state_dict(tmp_dict)
  File "/home/llll/anaconda3/envs/fontdiffuser/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for StyleEncoder:
	Missing key(s) in state_dict: "features.0.weight", "features.0.bias", "features.1.weight", "features.1.bias", "features.1.running_mean", "features.1.running_var", "features.4.weight", "features.4.bias", "features.5.weight", "features.5.bias", "features.5.running_mean", "features.5.running_var", "features.8.weight", "features.8.bias", "features.9.weight", "features.9.bias", "features.9.running_mean", "features.9.running_var", "features.11.weight", "features.11.bias", "features.12.weight", "features.12.bias", "features.12.running_mean", "features.12.running_var", "features.15.weight", "features.15.bias", "features.16.weight", "features.16.bias", "features.16.running_mean", "features.16.running_var", "features.18.weight", "features.18.bias", "features.19.weight", "features.19.bias", "features.19.running_mean", "features.19.running_var", "features.22.weight", "features.22.bias", "features.23.weight", "features.23.bias", "features.23.running_mean", "features.23.running_var", "features.25.weight", "features.25.bias", "features.26.weight", "features.26.bias", "features.26.running_mean", "features.26.running_var", "cont.weight", "cont.bias".
```
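A "Missing key(s) in state_dict" error like this usually means the checkpoint was saved from a different module architecture than the one loading it (here, a DG-Font style encoder vs. Font-diff's StyleEncoder, whose expected "features.*" and "cont.*" keys are absent from the checkpoint). A hedged, torch-free sketch of how one might diagnose it by comparing key sets (the key names below are toy examples, not the real modules'):

```python
# Sketch of diagnosing a load_state_dict key mismatch: compare the keys
# the target module expects against the keys the checkpoint provides.
# "missing" keys trigger exactly the RuntimeError seen in the traceback.

def diff_state_dict_keys(model_keys, ckpt_keys):
    """Return which keys the model expects but the checkpoint lacks,
    and which keys the checkpoint has but the model does not know."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    return {
        "missing": sorted(model_keys - ckpt_keys),      # cause the error
        "unexpected": sorted(ckpt_keys - model_keys),   # silently dropped
    }

# Toy example mirroring the mismatch in the traceback:
model_keys = ["features.0.weight", "features.0.bias", "cont.weight"]
ckpt_keys = ["conv1.weight", "conv1.bias"]
print(diff_state_dict_keys(model_keys, ckpt_keys))
```

With a real module, the same comparison can be run on `model.sty_encoder.state_dict().keys()` versus the loaded checkpoint dict's keys; if every model key is missing, as in the traceback, the two architectures simply do not match and no key renaming will fix it.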
