QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

Error "Target module QuantLinear() is not supported" when running qlora finetune #431

Closed — ivanzfb closed this issue 1 year ago

ivanzfb commented 1 year ago

Command: `sh finetune/finetune_qlora_single_gpu.sh`; Python: 3.10; transformers: 4.32.0; torch: 2.0.1. The error is as follows:

```
Traceback (most recent call last):
  File "/home/zfb/text2sql/Qwen-main/finetune.py", line 358, in <module>
    train()
  File "/home/zfb/text2sql/Qwen-main/finetune.py", line 336, in train
    model = get_peft_model(model, lora_config)
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/mapping.py", line 98, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 893, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 112, in __init__
    self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/tuners/lora.py", line 180, in __init__
    self.add_adapter(adapter_name, self.peft_config[adapter_name])
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/tuners/lora.py", line 194, in add_adapter
    self._find_and_replace(adapter_name)
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/tuners/lora.py", line 352, in _find_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target)
  File "/root/software/miniconda3/lib/python3.10/site-packages/peft/tuners/lora.py", line 305, in _create_new_module
    raise ValueError(
ValueError: Target module QuantLinear() is not supported. Currently, only torch.nn.Linear and Conv1D are supported.
```
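For context, a minimal sketch of the call pattern that triggers this error (the model ID, LoRA hyperparameters, and target module names are illustrative assumptions, not the repo's exact finetune.py code):

```python
# Sketch: applying LoRA to the GPTQ-quantized Int4 checkpoint via peft.
# All names below are illustrative assumptions, not Qwen's exact script.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat-Int4",   # GPTQ checkpoint: its projection layers are
    trust_remote_code=True,     # auto-gptq QuantLinear modules, not nn.Linear
    device_map="auto",
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj", "w1", "w2"],  # assumed Qwen layer names
)

# peft < 0.5.0 has no GPTQ dispatch, so wrapping QuantLinear targets fails:
#   ValueError: Target module QuantLinear() is not supported.
model = get_peft_model(model, lora_config)
```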

JustinLin610 commented 1 year ago

Is the model you're passing in an Int4 model?

ivanzfb commented 1 year ago

> Is the model you're passing in an Int4 model?

Yes, it's the INT4 model, and normal inference works fine.

nlp4whp commented 1 year ago

This is probably a peft version issue: peft only integrated GPTQ in v0.5.0, so versions below 0.5.0 raise the "QuantLinear is not supported" error.
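Following that comment, a quick guard one could drop in before calling `get_peft_model` (a sketch, not the repo's own code; `packaging` is already installed as a transformers/peft dependency):

```python
# Fail fast with a clear message when peft predates GPTQ support (v0.5.0).
import peft
from packaging import version  # ships as a transformers/peft dependency

if version.parse(peft.__version__) < version.parse("0.5.0"):
    raise RuntimeError(
        f"peft {peft.__version__} predates GPTQ support; "
        "upgrade with: pip install -U 'peft>=0.5.0'"
    )
```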

Kk1984up commented 1 year ago

Was this ever resolved? I'm hitting the same error.

yd-53 commented 11 months ago

> This is probably a peft version issue: peft only integrated GPTQ in v0.5.0, so versions below 0.5.0 raise the "QuantLinear is not supported" error.

Hi, my peft version is 0.7.1 and I still get this error. What could be the cause?
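One possible diagnostic for this case (an assumption about a likely cause, not a confirmed fix): even on newer peft, LoRA is only dispatched onto QuantLinear when the loaded model is recognized as GPTQ-quantized and a compatible auto-gptq is present, so it may help to verify the stack end to end:

```python
# Print the peft/auto-gptq versions and check that the loaded model actually
# carries a GPTQ quantization config that peft can recognize.
from importlib.metadata import version
from transformers import AutoModelForCausalLM

print("peft:", version("peft"))
print("auto-gptq:", version("auto-gptq"))  # peft's GPTQ path depends on it

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat-Int4", trust_remote_code=True, device_map="auto"
)
# Should print a GPTQ quantization config; if it is None, peft sees bare
# QuantLinear modules it cannot match and raises the same ValueError.
print(getattr(model.config, "quantization_config", None))
```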

LIXUEGUANG002 commented 10 months ago

I'm getting this error too. Why was this issue closed?

```
Traceback (most recent call last):
  File "/mnt/pan2/lixueguang/BigModel/embeddingandLLM/Qwen-7B-Chat-Int4/finetune.py", line 374, in <module>
    train()
  File "/mnt/pan2/lixueguang/BigModel/embeddingandLLM/Qwen-7B-Chat-Int4/finetune.py", line 349, in train
    model = get_peft_model(model, lora_config)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/mapping.py", line 116, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/peft_model.py", line 947, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/peft_model.py", line 119, in __init__
    self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 111, in __init__
    super().__init__(model, config, adapter_name)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/tuners/tuners_utils.py", line 93, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/tuners/tuners_utils.py", line 231, in inject_adapter
    self._create_and_replace(peft_config, adapter_name, target, target_name, parent, optional_kwargs)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 193, in _create_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target, kwargs)
  File "/opt/miniconda3/lib/python3.11/site-packages/peft/tuners/lora/model.py", line 317, in _create_new_module
    raise ValueError(
ValueError: Target module QuantLinear() is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.
```

jklj077 commented 10 months ago

As packages are updated, historical solutions can stop working. Please describe the problem in a new issue and also include the versions of the relevant packages, such as peft and auto-gptq.
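In that spirit, a small snippet for collecting the versions to paste into a new issue (a sketch; the names below are the standard pip distribution names, adjust if your environment differs):

```python
# Print the versions of the packages most relevant to this error.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("peft", "auto-gptq", "optimum", "transformers", "torch"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```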