OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
GNU General Public License v3.0

Does the LLaMA pre-trained model you used (at the link given below) support Chinese? #127

Open · jzssz opened 1 year ago

jzssz commented 1 year ago

You mentioned "download the LLaMA-7B from [Hugging Face](https://huggingface.co/nyanko7/LLaMA-7B/tree/main) (unofficial)" in https://github.com/OpenGVLab/LLaMA-Adapter#inference. Does that pre-trained model support Chinese?
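For reference, a minimal sketch of fetching that unofficial checkpoint with the `huggingface_hub` client; the target directory is just an example path, not something prescribed by the README.

```python
# Minimal sketch: download the unofficial LLaMA-7B checkpoint linked above.
# Requires `pip install huggingface_hub`; local_dir is an example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="nyanko7/LLaMA-7B",  # the unofficial mirror referenced in the README
    local_dir="./LLaMA-7B",      # where the checkpoint files will be placed
)
```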

csuhan commented 1 year ago

Our LLaMA-Adapter (V1) does not support Chinese, but you can try ImageBind-LLM, which supports both English and Chinese.
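For anyone landing here, a rough text-only sketch of querying ImageBind-LLM in Chinese, loosely following the demo in the repo's imagebind_LLM directory. The `llama.load` / `llama.format_prompt` calls mirror that README; the weight path, the empty multimodal input dict, and the prompt are illustrative assumptions, and Chinese output may additionally require the merged Chinese LLaMA weights discussed below.

```python
# Rough sketch after the imagebind_LLM demo; treat the paths and the
# text-only call pattern as assumptions, not the repo's exact API.
import llama  # the `llama` package bundled in imagebind_LLM/

llama_dir = "/path/to/llama_model_weights"  # example path to original LLaMA weights

model = llama.load("7B", llama_dir, knn=True)  # adapter checkpoint auto-downloads
model.eval()

# Chinese prompt: "Please briefly introduce the Great Wall."
prompt = llama.format_prompt("请简单介绍一下长城。")

# No image/audio inputs here, so the multimodal dict is left empty (assumption).
results = model.generate({}, [prompt], max_gen_len=256)
print(results[0].strip())
```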

hyyuan123 commented 9 months ago

> Our LLaMA-Adapter (V1) does not support Chinese, but you can try ImageBind-LLM, which supports both English and Chinese.

Hello, the get_chinese_llama.py file in https://github.com/OpenGVLab/LLaMA-Adapter/tree/main/imagebind_LLM is missing. Could you add it back?