Lightning-AI / litgpt

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
https://lightning.ai
Apache License 2.0

Resolve garbled output characters #1422

Open fireyanci opened 5 months ago

fireyanci commented 5 months ago

Hello. I want my model to have Chinese language ability, but the data and compute required for full-parameter training in Chinese are huge, so I used a Chinese Llama model trained by another open-source project. I find the litgpt project very convenient, so I converted the model from that project into a lit model. The converted model can output Chinese, but the output text contains garbled characters. How can I fix this? I look forward to your reply. Thank you.

(Appendix: Chinese open-source model GitHub address: https://github.com/LlamaFamily/Llama-Chinese?tab=readme-ov-file ; Chinese open-source model files on Hugging Face: https://huggingface.co/FlagAlpha/Llama3-Chinese-8B-Instruct/tree/main)

fireyanci commented 5 months ago

It can understand my question and produce a corresponding response, but some of the output characters are garbled.

rasbt commented 5 months ago

I wonder if it is related to the tokenizer? It could also be a limitation of the terminal when displaying certain characters. Unfortunately, I am not very familiar with working with those characters. One thing you could try is adding a print(<expected special characters>) to the script to see whether this is an issue with the terminal output.
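For reference, a minimal sketch of that kind of check: print the expected Chinese characters directly and inspect the terminal's encoding. If they render correctly, the terminal is probably not the culprit. The sample string below is an arbitrary illustration, not taken from the issue.

```python
# Minimal sketch: rule out a terminal/encoding problem before blaming the model.
# The sample string is an arbitrary example, not output from the checkpoint.
import sys

print("stdout encoding:", sys.stdout.encoding)   # typically expected to be UTF-8
print("sample Chinese text:", "你好，世界")        # if this renders garbled, it is a terminal issue
```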

fireyanci commented 5 months ago

Thank you very much for your reply

fireyanci commented 5 months ago

Because I've been too busy lately, I've only now started trying this method. I can print those characters from the terminal, so I'm not sure whether this is related to the tokenizer. To give the model Chinese language ability, the developers of the Chinese Llama repository expanded the tokenizer.
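Since the terminal can print Chinese, a quick next diagnostic is to round-trip a Chinese string through the converted checkpoint's tokenizer. The following is a minimal sketch, assuming the converted checkpoint sits in the hypothetical directory below and that `litgpt.tokenizer.Tokenizer` exposes `encode()`, `decode()`, and `vocab_size` as in recent litgpt releases; exact paths and attribute names may differ by version.

```python
# Minimal sketch (assumption: converted checkpoint directory path is hypothetical,
# and litgpt.tokenizer.Tokenizer behaves as in recent litgpt releases).
from pathlib import Path

from litgpt.tokenizer import Tokenizer

checkpoint_dir = Path("checkpoints/FlagAlpha/Llama3-Chinese-8B-Instruct")  # hypothetical path
tokenizer = Tokenizer(checkpoint_dir)

text = "你好，世界"  # arbitrary Chinese sample: "Hello, world"
ids = tokenizer.encode(text)
roundtrip = tokenizer.decode(ids)

print("vocab size :", tokenizer.vocab_size)
print("token ids  :", ids.tolist())
print("decoded    :", roundtrip)
print("lossless   :", roundtrip == text)  # False points to a tokenizer/vocab mismatch
```

If the round trip is not lossless, or the vocab size does not match the expanded tokenizer shipped by the Chinese Llama repository, the conversion probably did not pick up the expanded vocabulary, which would explain the garbled characters.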