LDLINGLINGLING / MiniCPM_Series_Tutorial

Projects and tutorials for MiniCPM and MiniCPM-V, covering six topics: inference, quantization, edge deployment, fine-tuning, technical reports, and applications.

MiniCPM-V_2_6_awq_int4 with vllm #3

Open tangent2018 opened 3 weeks ago

tangent2018 commented 3 weeks ago

Model weight loading error when running MiniCPM-V_2_6_awq_int4 with vllm

vLLM environment: docker image vllm/vllm-openai:v0.5.4

Model downloaded with: git clone https://www.modelscope.cn/linglingdan/MiniCPM-V_2_6_awq_int4.git

Code to reproduce:

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL_NAME = "../../models/MiniCPM-V_2_6_awq_int4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
llm = LLM(
    model=MODEL_NAME,
    gpu_memory_utilization=1,
    trust_remote_code=True,
    max_model_len=2048,
    enforce_eager=True,
)

Traceback:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/tangent/AIChat/engines/OpenBMB/MiniCPM-V/code_vllm/local_try.py", line 14, in <module>
[rank0]:     llm = LLM(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 158, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 445, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 249, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 36, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 139, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 722, in load_model
[rank0]:     self.model = get_model(model_config=self.model_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 324, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 152, in _initialize_model
[rank0]:     quant_config = _get_quantization_config(model_config, load_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 93, in _get_quantization_config
[rank0]:     quant_config = get_quant_config(model_config, load_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/weight_utils.py", line 132, in get_quant_config
[rank0]:     return quant_cls.from_config(hf_quant_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/quantization/awq_marlin.py", line 77, in from_config
[rank0]:     return cls(weight_bits, group_size, has_zp, lm_head_quantized)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/quantization/awq_marlin.py", line 44, in __init__
[rank0]:     verify_marlin_supported(self.quant_type,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/quantization/utils/marlin_utils.py", line 88, in verify_marlin_supported
[rank0]:     raise ValueError(err_msg)
[rank0]: ValueError: Marlin does not support weight_bits = uint4. Only types = [] are supported (for group_size = 128, min_capability = 75, zp = True).
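
Note: the ValueError above is raised by vLLM's awq_marlin backend, which auto-detects AWQ checkpoints and then rejects the uint4 zero-point configuration in this environment. As a possible workaround (an assumption, not something confirmed in this thread), you can force vLLM's plain AWQ kernel instead of the Marlin path by passing quantization="awq" to the LLM constructor:

from vllm import LLM

MODEL_NAME = "../../models/MiniCPM-V_2_6_awq_int4"

# Passing quantization="awq" skips the awq_marlin auto-detection that raised
# the ValueError above; whether the plain AWQ kernel then loads this checkpoint
# correctly is an assumption, not verified in this thread.
llm = LLM(
    model=MODEL_NAME,
    quantization="awq",
    gpu_memory_utilization=1,
    trust_remote_code=True,
    max_model_len=2048,
    enforce_eager=True,
)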
LDLINGLINGLING commented 3 weeks ago

Hello, which version of autoawq are you using?

tangent2018 commented 3 weeks ago

I used the model from https://www.modelscope.cn/linglingdan/MiniCPM-V_2_6_awq_int4.git. I thought autoawq is not necessary to run vllm?

LDLINGLINGLING commented 3 weeks ago

I think you should try installing my fork of autoawq:

git clone https://github.com/LDLINGLINGLING/AutoAWQ.git
cd AutoAWQ
pip install -e .
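
For reference, the failing check reads the quantization_config section of the model's config.json (the hf_quant_config in the traceback). A minimal diagnostic sketch, assuming the local model path from the reproduction code above, to print exactly what vLLM is handed:

import json
from pathlib import Path

# Assumed local path to the cloned checkpoint, as in the reproduction code
MODEL_NAME = "../../models/MiniCPM-V_2_6_awq_int4"

config = json.loads((Path(MODEL_NAME) / "config.json").read_text())
print(json.dumps(config.get("quantization_config", {}), indent=2))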

Heroman2Space commented 3 weeks ago

Same issue. The environment is: torch==2.4.0, torchvision==0.19.0, autoawq==0.2.6+cu121, autoawq_kernels==0.0.6, cuda==12.1

Does anyone know how to fix it?

tangent2018 commented 2 weeks ago

I tried to install AutoAWQ, but AutoAWQ needs torch==2.3.1 while vllm (0.5.4) uses torch==2.4.0. Perhaps I should build an environment from scratch rather than using the docker image. If you plan to build a docker image for MiniCPM-V_2_6_awq_int4 with vllm, please tell me.
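
A small version check (not from this thread) that may help confirm whether installing the AutoAWQ fork pulled in a different torch inside the vllm image:

from importlib.metadata import PackageNotFoundError, version

# Print the installed versions of the packages involved in the conflict
for pkg in ("torch", "torchvision", "vllm", "autoawq"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")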

LDLINGLINGLING commented 2 weeks ago

OK, I've been very busy recently. I'll give it a try when I have time.