ymcui / Chinese-LLaMA-Alpaca-2

Chinese LLaMA-2 & Alpaca-2 LLMs (phase-2 project) with 64K long-context models
Apache License 2.0

Error message: TypeError: __init__() got an unexpected keyword argument 'enable_lora' #439

Closed · Mr1994 closed this issue 9 months ago

Mr1994 commented 9 months ago

Pre-submission checklist

Issue type

Model conversion and merging

Base model

Chinese-LLaMA-2 (7B/13B)

Operating system

Linux

Describe the problem in detail

I tried to merge my trained LoRA weights with the earlier 7B model and hit this error:

```
python scripts/merge_llama2_with_chinese_lora_low_mem.py \
    --base_model /llma2/llama.cpp/models/chinese-alpaca-2-7b-hf \
    --lora_model /llma2/output_dir/pt_lora_model/ \
    --output_dir /llma2/new_data/ \
    --output_type huggingface \
    --verbose
```

```
================================================================================
Base model: /llma2/llama.cpp/models/chinese-alpaca-2-7b-hf
LoRA model: /llma2/output_dir/pt_lora_model/
Loading /llma2/output_dir/pt_lora_model/
Traceback (most recent call last):
  File "/llma2/Chinese-LLaMA-Alpaca-2/scripts/merge_llama2_with_chinese_lora_low_mem.py", line 240, in <module>
    lora_config = peft.LoraConfig.from_pretrained(lora_model_path)
  File "/llma2/Chinese-LLaMA-Alpaca-2/Chinese-LLaMA-Alpaca-2/lib/python3.9/site-packages/peft/config.py", line 134, in from_pretrained
    config = config_cls(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'enable_lora'
```

### Dependencies (must be provided for code-related issues)

```
$ pip show peft
Name: peft
Version: 0.6.2
Summary: Parameter-Efficient Fine-Tuning (PEFT)
Home-page: https://github.com/huggingface/peft
Author: The HuggingFace team
Author-email: sourab@huggingface.co
License: Apache
Location: /llma2/Chinese-LLaMA-Alpaca-2/Chinese-LLaMA-Alpaca-2/lib/python3.9/site-packages
Requires: accelerate, numpy, packaging, psutil, pyyaml, safetensors, torch, tqdm, transformers
Required-by:
```

Logs or screenshots


![image](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/assets/15978040/b9146758-0bbe-4e46-8e47-c33135b56661)
ymcui commented 9 months ago

Try downgrading peft to 0.3.0. The version you are using is too new, and the API has likely changed.
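For reference, the downgrade is just `pip install peft==0.3.0`. If a newer peft has to stay installed, one possible workaround (a hedged sketch only, not part of the repo's merge script; the key-filtering approach is an assumption) is to drop legacy keys from `adapter_config.json` that the installed `LoraConfig` no longer defines, such as `enable_lora`, before building the config:

```python
import json
from dataclasses import fields
from pathlib import Path

import peft

# Path taken from the merge command above.
lora_model_path = Path("/llma2/output_dir/pt_lora_model")

# Load the adapter config that was written at training time.
raw = json.loads((lora_model_path / "adapter_config.json").read_text())

# Keep only keyword arguments that the installed LoraConfig still defines;
# legacy keys such as 'enable_lora' are dropped instead of raising TypeError.
valid = {f.name for f in fields(peft.LoraConfig)}
dropped = sorted(k for k in raw if k not in valid)
lora_config = peft.LoraConfig(**{k: v for k, v in raw.items() if k in valid})

print("Ignored legacy keys:", dropped)
print(lora_config)
```

Downgrading as suggested above remains the simpler route; the sketch only illustrates why the newer `LoraConfig` rejects the old config file.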

Mr1994 commented 9 months ago

That fixed it. This feels like a very easy pitfall to hit. May I submit a patch, or could you change this on your side? Pinning the dependencies to peft==0.3.0 torch==2.0.1 transformers==4.35.0 sentencepiece==0.1.99 bitsandbytes==0.41.1, i.e. fixing peft at peft==0.3.0 instead of the earlier peft>=0.3.0, would avoid a lot of problems.
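A pinned list along those lines (versions copied verbatim from the comment above; placing them in a requirements.txt is only a sketch of the suggestion) would look like:

```
peft==0.3.0
torch==2.0.1
transformers==4.35.0
sentencepiece==0.1.99
bitsandbytes==0.41.1
```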