```
# Paste the run log here (inside this code block)
python3 merge_llama3_with_chinese_lora_low_mem.py \
    --base_model meta-llama/Meta-Llama-3-8B \
    --lora_model ../../Chinese-LLaMA-Alpaca-2/scripts/training/output/checkpoint-241690/pt_lora_model/ \
    --output_type huggingface
================================================================================
Base model: meta-llama/Meta-Llama-3-8B
LoRA model: ../../Chinese-LLaMA-Alpaca-2/scripts/training/output/checkpoint-241690/pt_lora_model/
Loading ../../Chinese-LLaMA-Alpaca-2/scripts/training/output/checkpoint-241690/pt_lora_model/
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/mlsteam/data/Q21/nick/Chinese-LLaMA-Alpaca-3/scripts/merge_llama3_with_chinese_lora_low_mem.py", line 234, in <module>
    lora_config = peft.LoraConfig.from_pretrained(lora_model_path)
  File "/usr/local/lib/python3.10/dist-packages/peft/config.py", line 137, in from_pretrained
    config = config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'enable_lora'
```
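This `TypeError` typically means the checkpoint's `adapter_config.json` was written by an older `peft` release: `LoraConfig.from_pretrained` forwards every key in that JSON file to `LoraConfig.__init__`, and newer `peft` versions no longer accept keys such as `enable_lora`. A minimal sketch of a possible workaround, assuming the stale key simply needs to be removed from the JSON before re-running the merge (the directory and config values below are illustrative, not taken from the actual checkpoint):

```python
import json
import os
import tempfile

# Build a toy adapter_config.json containing a stale key, to illustrate the
# cleanup. In practice, point lora_dir at your real pt_lora_model directory.
lora_dir = tempfile.mkdtemp()
cfg_path = os.path.join(lora_dir, "adapter_config.json")
with open(cfg_path, "w") as f:
    json.dump({"peft_type": "LORA", "r": 64, "lora_alpha": 128,
               "enable_lora": None}, f)

# LoraConfig.from_pretrained passes every JSON key to __init__, so any key
# the installed peft version no longer defines raises TypeError. Dropping
# the offending key from the JSON lets the config load again.
with open(cfg_path) as f:
    cfg = json.load(f)
cfg.pop("enable_lora", None)
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(cfg_path) as f:
    cleaned = json.load(f)
print("enable_lora" in cleaned)  # → False
```

Back up `adapter_config.json` before editing it; alternatively, pinning `peft` to the version used during training avoids touching the checkpoint at all.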
The following items must be checked before submitting

Issue type: Model conversion and merging
Base model: Llama-3-Chinese-8B (base model)
Operating system: Linux

Detailed description of the issue
Dependencies (required for code-related issues)
Run logs or screenshots