huggingface / peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0

Trained with QLoRA at rank 4; merging and exporting the model on CUDA fails with KeyError: 'base_model.model.model.model.layers.14.mlp.down_proj' #2213

Open xiaoheiyue opened 15 hours ago

xiaoheiyue commented 15 hours ago

### System Info

File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 299, in init_adapter model = _setup_lora_tuning( ^^^^^^^^^^^^^^^^^^^ File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 181, in _setup_lora_tuning model: "LoraModel" = PeftModel.from_pretrained(model, adapter, **init_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 545, in from_pretrained model.load_adapter( File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 1151, in load_adapter self._update_offload(offload_index, adapters_weights) File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 1028, in _update_offload safe_module = dict(self.named_modules())[extended_prefix]


KeyError: 'base_model.model.model.model.layers.14.mlp.down_proj'
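
Note that the failing key has one more `model.` segment than a wrapped model normally registers (`base_model.model.model.layers.14.mlp.down_proj` would be the expected name), which points to a prefix mismatch when `_update_offload` rebuilds the lookup key. A toy sketch of the failing lookup (illustrative only, not PEFT's actual code):

```python
# Illustrative only: a toy reproduction of the failing dict lookup.
# The PeftModel-wrapped model registers modules under names like this:
modules = {"base_model.model.model.layers.14.mlp.down_proj": object()}

# If the prefix is extended from an adapter key that already carries an
# extra "model." level, the lookup key gains a fourth "model" segment:
extended_prefix = "base_model.model." + "model.model.layers.14.mlp.down_proj"

try:
    safe_module = modules[extended_prefix]
except KeyError as err:
    print(err)  # 'base_model.model.model.model.layers.14.mlp.down_proj'
```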

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)

### Reproduction

LLaMA-Factory with peft 0.12.0

### Expected behavior

I hope this issue can be resolved so the merge completes normally.
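
For reference, the merge I am trying to perform corresponds to the standard PEFT merge/export flow (a minimal sketch; paths are placeholders, and the base model is loaded in fp16 rather than 4-bit so the LoRA deltas can be folded directly into the base weights):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model unquantized for merging (placeholder paths)
base = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the LoRA deltas into the base weights and export
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
AutoTokenizer.from_pretrained("path/to/base-model").save_pretrained("path/to/merged-model")
```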
JINO-ROHIT commented 13 hours ago

Looks like some mismatch in the module names. Can you check which names the loaded model actually registers, using

print(dict(model.named_modules()).keys())
xiaoheiyue commented 12 hours ago

> Looks like some mismatch in the module names. Can you check which names the loaded model actually registers, using
>
> print(dict(model.named_modules()).keys())

Do you mean checking the original model? The trained LoRA adapter has the same number of layers as the model, and every layer has the attention q/k/v projections as well as the MLP modules.
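
One way to answer that concretely is to compare both sides: the module names registered by the loaded model versus the key prefixes stored in the adapter checkpoint. A sketch (the adapter filename and path are assumptions; `model` is the loaded PeftModel):

```python
from safetensors import safe_open

# Names the wrapped model actually registers
module_names = set(dict(model.named_modules()).keys())

# Key prefixes stored in the trained adapter checkpoint (placeholder path)
with safe_open("path/to/adapter_model.safetensors", framework="pt") as f:
    adapter_keys = list(f.keys())

for key in adapter_keys:
    # Keys look like "...layers.14.mlp.down_proj.lora_A.weight";
    # strip the trailing ".lora_A.weight" to get the module prefix
    prefix = key.rsplit(".", 2)[0]
    if prefix not in module_names:
        print("not registered:", prefix)
```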