ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

Multi-LoRA merge: vocab size mismatch when merging chinese_alpaca_lora_plus with my own pre-trained weights #851

Closed: lyq080700 closed this issue 9 months ago

lyq080700 commented 9 months ago

Required checks before submission

Issue type

Model conversion and merging

Base model

LLaMA-Plus-7B

Operating system

Linux

Detailed description of the problem

My pre-trained model was trained after vocabulary expansion; the expanded vocabulary size is 58535. I now want to train another set of weights on top of the existing alpaca_lora weights, but the vocabulary sizes do not match: my pre-trained vocabulary is larger. Is it feasible to comment out the code in the merge script that compares the vocab sizes of the two LoRAs?

```python
if len(tokenizers_and_loras) == 2:
    t1_vocab_size = len(tokenizers_and_loras[0]["tokenizer"])
    t2_vocab_size = len(tokenizers_and_loras[1]["tokenizer"])
    assert t1_vocab_size <= t2_vocab_size, \
        (f"The vocab size of the first tokenizer is {t1_vocab_size}\n"
         f"The vocab size of the second tokenizer is {t2_vocab_size}, found to be smaller than {t1_vocab_size}\n"
         "This is not the intended use. Please check your model and tokenizer.")
```
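For anyone hitting the same assertion, a quick pre-flight check of the tokenizer shipped with each LoRA can confirm the ordering problem before running the merge. This is a minimal sketch, not part of the repository's scripts; the paths are placeholders, and it assumes each LoRA directory contains its own tokenizer files:

```python
# Minimal pre-flight check (not from the repo): verify that the LoRAs are
# listed in non-decreasing tokenizer vocab order, as the merge script
# requires. Paths are placeholders for your own checkpoints.
from transformers import LlamaTokenizer

lora_paths = [
    "/path/to/chinese_alpaca_lora_plus",  # smaller vocab (49954)
    "/path/to/my_pretrained_lora",        # larger vocab (58535)
]

sizes = [len(LlamaTokenizer.from_pretrained(p)) for p in lora_paths]
print(sizes)

# The script asserts the sizes are non-decreasing in the order the LoRAs
# are passed, so the smaller-vocab LoRA must come first.
assert all(a <= b for a, b in zip(sizes, sizes[1:])), \
    "Reorder the LoRAs so vocab sizes are non-decreasing."
```

Note that passing the check only tells you the script will run; whether reordering (or removing the assertion) yields a semantically sensible merge for a LoRA trained against a different tokenizer is a separate question.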

Dependencies (required for code-related issues)

# Paste your dependency information here

Run logs or screenshots

```
# Paste your run log here
Traceback (most recent call last):
  File "/root/scripts/merge_llama_with_chinese_lora_low_mem.py", line 263, in <module>
    assert t1_vocab_size<=t2_vocab_size, \
AssertionError: The vocab size of the first tokenizer is 58353
The vocab size of the second tokenizer is 49954, found to be smaller than 58353
This is not the intended use. Please check your model and tokenizer.
```
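For context on why the assertion fires: these LoRA checkpoints carry full embedding/LM-head tensors for their own tokenizer, so each merge step sets the embedding table to that LoRA's vocab size. Growing the table is safe, but applying a smaller-vocab LoRA after a larger-vocab one would discard the rows for the expanded tokens. The following is a rough, hypothetical illustration of that shape constraint, not the script's actual merge code:

```python
# Rough illustration (hypothetical, not the script's code) of the shape
# constraint behind the assertion. Hidden size 4096 as in LLaMA-7B.
import torch

hidden = 4096
embed_58353 = torch.randn(58353, hidden)  # table after applying the 58353-vocab LoRA
embed_49954 = torch.randn(49954, hidden)  # table saved with the 49954-vocab LoRA

# Applying the smaller-vocab LoRA second would shrink the table,
# silently discarding the rows for the expanded tokens:
lost_rows = embed_58353.shape[0] - embed_49954.shape[0]
print(f"{lost_rows} expanded-vocabulary rows would be lost")  # 8399

# Hence the script requires vocab sizes to be non-decreasing in the order
# the LoRAs are applied; commenting out the assert does not make the
# underlying shapes compatible.
```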
github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 9 months ago

Closing the issue, since no updates were observed. Feel free to re-open if you need any further assistance.