xiaoheiyue opened this issue 15 hours ago (status: Open)
Looks like a module-name mismatch. Can you check the registered module names with:

print(dict(model.named_modules()).keys())
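For reference, a minimal sketch of that check, assuming a transformers-loaded base model (the model path below is a placeholder, not taken from this issue):

```python
from transformers import AutoModelForCausalLM

# Placeholder path: substitute the base model the adapter was trained on.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Print every registered submodule name. The LoRA target modules
# (q_proj/k_proj/v_proj, the MLP projections, etc.) must appear here
# under exactly the prefixes recorded in the adapter checkpoint.
print(dict(model.named_modules()).keys())
```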
Is that for inspecting the original model? The trained LoRA adapter has the same number of layers as the model, and every layer's attention q/k/v modules as well as the MLP ones are all there.
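One way to verify that directly is to dump the parameter keys stored in the adapter checkpoint and compare their prefixes against the module names printed above. A minimal sketch, assuming the adapter was saved as adapter_model.safetensors (the path is a placeholder):

```python
from safetensors import safe_open

# Placeholder path: point at the trained LoRA output directory.
with safe_open("saves/lora/adapter_model.safetensors", framework="pt") as f:
    # Keys typically look like
    # "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight";
    # the middle portion must match a name from model.named_modules().
    for key in sorted(f.keys()):
        print(key)
```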
System Info
File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 299, in init_adapter model = _setup_lora_tuning( ^^^^^^^^^^^^^^^^^^^ File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 181, in _setup_lora_tuning model: "LoraModel" = PeftModel.from_pretrained(model, adapter, **init_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 545, in from_pretrained model.load_adapter( File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 1151, in load_adapter self._update_offload(offload_index, adapters_weights) File "/home/mukuro/softwares/miniconda3/envs/qwen2.5/lib/python3.11/site-packages/peft/peft_model.py", line 1028, in _update_offload safe_module = dict(self.named_modules())[extended_prefix]