modelscope / ms-swift

Use PEFT or Full-parameter to finetune 350+ LLMs or 90+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

Runtime error after QLoRA fine-tuning of qwen2-7b-instruct-int8 #1161

Closed. taisenki closed this issue 3 months ago.

taisenki commented 3 months ago

Describe the bug: The fine-tuning command is as follows:

CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model_type qwen2-7b-instruct-int8 \
    --model_id_or_path Qwen2-7B-Instruct-GPTQ-Int8 \
    --sft_type lora \
    --tuner_backend peft \
    --template_type AUTO \
    --dtype fp16 \
    --quant_method gptq \
    --dataset AI-ModelScope/alpaca-gpt4-data-zh#500 AI-ModelScope/alpaca-gpt4-data-en#500 swift/self-cognition#500 \
    --model_name 小纬 XiaoWei \
    --model_author 小纬 XiaoWei \
    --num_train_epochs 1 \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.05 \
    --lora_target_modules ALL \
    --gradient_checkpointing true \
    --batch_size 1 \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --use_flash_attn true \
    --output_dir /mnt/f/collector/model/lora/ \
    --train_dataset_sample -1 \
    --check_dataset_strategy warning \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --eval_steps 100 \
    --save_steps 100 \
    --save_total_limit 2 \
    --logging_steps 10
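For context, what this swift sft invocation does under the hood is roughly: load the GPTQ-quantized base model and attach trainable LoRA adapters to it via peft. A minimal, simplified sketch of that setup (the model path and explicit target-module list below are illustrative assumptions, not lifted from swift's internals):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumption: Qwen2-7B-Instruct-GPTQ-Int8 is a local path or hub id for the
# GPTQ-quantized base model; transformers dispatches loading to auto_gptq.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen2-7B-Instruct-GPTQ-Int8", torch_dtype="auto", device_map="auto"
)

# Mirrors --lora_rank 8 --lora_alpha 32 --lora_dropout_p 0.05; the explicit
# module list stands in for --lora_target_modules ALL.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable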

The command used to launch the model after fine-tuning:

CUDA_VISIBLE_DEVICES=0 swift deploy --model_type qwen2-7b-instruct-int8 --ckpt_dir qwen2-7b-instruct-int8/v0-20240617-143336/checkpoint-93
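As a side note, swift deploy serves an OpenAI-compatible HTTP API (port 8000 by default). A minimal client call against it might look like the following; the host, port, and request shape here are assumptions based on the documented defaults, not part of this report:

import requests

# Assumption: the server started by `swift deploy` is listening on
# localhost:8000, and the model name matches the deployed --model_type.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "qwen2-7b-instruct-int8",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64,
    },
)
print(resp.json())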

The model loads without errors, but it fails as soon as inference starts. The error output is as follows:

Traceback (most recent call last):
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/peft/peft_model.py", line 1491, in generate
    outputs = self.base_model.generate(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/transformers/generation/utils.py", line 1758, in generate
    result = self._sample(
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/transformers/generation/utils.py", line 2397, in _sample
    outputs = self(
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1149, in forward
    outputs = self.model(
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1034, in forward
    layer_outputs = decoder_layer(
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 748, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 644, in forward
    query_states = self.q_proj(hidden_states)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/taisenki/anaconda3/envs/swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1582, in _call_impl
    result = forward_call(*args, **kwargs)
TypeError: QuantLinear.forward() got an unexpected keyword argument 'adapter_names'

Your hardware and system info:
torch 2.3.0
transformers 4.41.2
auto_gptq 0.7.1+cu121
flash-attn 2.5.9.post1
ms-swift 2.1.0
peft 0.11.1
xformers 0.0.26.post1

How should this be handled?
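For readers hitting the same traceback: the TypeError arises because peft's LoRA wrapper forwards an adapter_names keyword argument down to the wrapped base layer, while auto_gptq's QuantLinear.forward() does not accept that keyword. As an unofficial stopgap (a sketch under stated assumptions, not the fix the maintainers shipped), one can strip the kwarg with a forward pre-hook before it reaches QuantLinear:

def strip_adapter_names(module, args, kwargs):
    # torch >= 2.0 forward pre-hook (with_kwargs=True): drop the peft-only
    # kwarg that QuantLinear.forward() does not understand.
    kwargs.pop("adapter_names", None)
    return args, kwargs

# Assumption: `model` is the already-loaded peft-wrapped model instance.
for module in model.modules():
    if type(module).__name__ == "QuantLinear":
        module.register_forward_pre_hook(strip_adapter_names, with_kwargs=True)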

Jintao-Huang commented 3 months ago

fixed
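(The "fixed" above presumably refers to a patch in the ms-swift repository itself; if you encounter this error on ms-swift 2.1.0, upgrading to a newer release with pip install -U ms-swift, or installing from the repository's main branch, should pick up the fix.)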