InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] lmdeploy lite auto_awq量化错误 #2243

Open ZanePoe opened 2 months ago

ZanePoe commented 2 months ago


Describe the bug

The command lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4 --search-scale True --batch-size 8 fails with an error. However, the command lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4 (without --search-scale and --batch-size) works fine: quantization and inference both succeed.

Reproduction

lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4 --search-scale True --batch-size 8

Environment

sys.platform: linux
Python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 2080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.5, V12.5.82
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.2.2+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.17.2+cu121
LMDeploy: 0.5.2.post1+
transformers: 4.42.4
gradio: 4.38.1
fastapi: 0.111.1
pydantic: 2.8.2
triton: 2.2.0
NVIDIA Topology: 
        GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-27    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

Loading calibrate dataset ...
Token indices sequence length is longer than the specified maximum sequence length for this model (1104488 > 128000). Running this sequence through the model will result in indexing errors
Traceback (most recent call last):
  File "/home/zanepoe/miniconda3/envs/lmdeploy/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
             ^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/cli/entrypoint.py", line 36, in run
    args.run(args)
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/cli/lite.py", line 139, in auto_awq
    auto_awq(**kwargs)
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/apis/auto_awq.py", line 105, in auto_awq
    vl_model, model, tokenizer, work_dir = calibrate(model,
                                           ^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/apis/calibrate.py", line 242, in calibrate
    calib_ctx.calibrate(all_data)
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/quantization/calibration.py", line 269, in calibrate
    _ = model(data.to(self.device))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/.cache/huggingface/modules/transformers_modules/c24133cef34ff7a7010f1e97c113effdead0966b/modeling_chatglm.py", line 892, in forward
    hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
                                                                      ^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/.cache/huggingface/modules/transformers_modules/c24133cef34ff7a7010f1e97c113effdead0966b/modeling_chatglm.py", line 722, in forward
    layer_ret = layer(
                ^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/quantization/calibration.py", line 464, in _forward
    auto_scale_block(mod, batch_kwargs[i], self.w_bits,
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/quantization/calibration.py", line 366, in auto_scale_block
    _auto_get_scale(
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/quantization/calibration.py", line 359, in _auto_get_scale
    best_ratio = _search_module_scale(module2inspect, layers, inp.value,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/lmdeploy/lite/quantization/calibration.py", line 306, in _search_module_scale
    org_out = block(x, **kwargs)
              ^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zanepoe/miniconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: SelfAttention.forward() missing 2 required positional arguments: 'attention_mask' and 'rotary_pos_emb'
RunningLeon commented 2 months ago

Hi, please check https://github.com/InternLM/lmdeploy/issues/2210

lyc728 commented 2 months ago

Hi, I can now run w8a8 quantization, but loading the model fails with this error:

Traceback (most recent call last):
  File "/data/liuyuanchao/swift/lmdeploy/dss_quato.py", line 14, in <module>
    pipe = pipeline(model_path, chat_template_config=ChatTemplateConfig('llama3'))
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/api.py", line 89, in pipeline
    return pipeline_class(model_path,
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/serve/vl_async_engine.py", line 24, in __init__
    super().__init__(model_path, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/serve/async_engine.py", line 190, in __init__
    self._build_turbomind(model_path=model_path,
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/serve/async_engine.py", line 235, in _build_turbomind
    self.engine = tm.TurboMind.from_pretrained(
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/turbomind.py", line 340, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/turbomind.py", line 144, in __init__
    self.model_comm = self._from_hf(model_source=model_source,
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/turbomind.py", line 235, in _from_hf
    output_model = OUTPUT_MODELS.get(output_model_name)(
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 80, in __init__
    super().__init__(input_model, cfg, to_file, out_dir)
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 172, in __init__
    self.cfg = self.get_config(cfg)
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 92, in get_config
    w1s, _, _ = bin.ffn_scale(i)
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/source_model/llama_awq.py", line 52, in ffn_scale
    return ensure_fp16orint32(self._ffn(i, 'scales'))
  File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/source_model/llama.py", line 103, in _ffn
    tensor = self.params[
KeyError: 'llm.model.layers.0.mlp.gate_proj.scales'
RunningLeon commented 2 months ago

w8a8 is only supported by the PyTorch engine. Please set backend_config to PytorchEngineConfig when using pipeline:

from lmdeploy import pipeline, PytorchEngineConfig

# model_path: the directory produced by the quantization step
pipe = pipeline(model_path,
                backend_config=PytorchEngineConfig(tp=1,
                                                   session_len=4096,
                                                   max_prefill_token_num=4096,
                                                   cache_max_entry_count=0.5))
lyc728 commented 2 months ago

It still errors: raise ValueError( ValueError: The model's quantization config from the arguments has no quant_method attribute. Make sure that the model has been correctly quantized

RunningLeon commented 2 months ago

@lyc728 Hi, sorry for the misunderstanding. auto_awq is for w4a16 and smooth_quant is for w8a8. In your case you are using w4a16, which is only supported by the TurboMind engine and requires model_format='awq'. This is how to use it in a pipeline: https://lmdeploy.readthedocs.io/en/latest/quantization/w4a16.html#inference

from lmdeploy import pipeline, TurbomindEngineConfig
engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline("./internlm2_5-7b-chat-4bit", backend_config=engine_config)
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)


lyc728 commented 2 months ago

Hi, I followed this document for quantization: https://github.com/InternLM/lmdeploy/blob/main/docs/zh_cn/quantization/w8a8.md with lmdeploy lite smooth_quant internlm/internlm-chat-7b --work-dir ./internlm-chat-7b-w8. Now the model cannot be loaded for inference. If I set model_format='awq', that is for w4, which does not match the model I converted.
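
For reference, a smooth_quant (w8a8) output is expected to be loaded through the PyTorch engine rather than TurboMind; a minimal sketch, assuming ./internlm-chat-7b-w8 is the work dir produced by the command above and the model is on the PyTorch engine's supported list:

from lmdeploy import pipeline, PytorchEngineConfig

# Hypothetical path: the --work-dir written by the smooth_quant command above.
pipe = pipeline('./internlm-chat-7b-w8',
                backend_config=PytorchEngineConfig(tp=1))
print(pipe(['Hi, please introduce yourself']))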

RunningLeon commented 2 months ago

According to these tables, glm4 is not supported for w8a8 (PyTorch engine) but is supported for w4a16 (TurboMind engine): https://lmdeploy.readthedocs.io/en/latest/supported_models/supported_models.html#models-supported-by-pytorch https://lmdeploy.readthedocs.io/en/latest/supported_models/supported_models.html#models-supported-by-turbomind
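
Putting the two steps together for glm4, a minimal sketch of the w4a16 route, assuming the quantized weights land in ./models/glm-4-9b-chat-int4 as in the original command:

from lmdeploy import pipeline, TurbomindEngineConfig

# Step 1 (CLI, from the original report):
#   lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4
# Step 2: load the AWQ (w4a16) weights with the TurboMind engine.
engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline('./models/glm-4-9b-chat-int4', backend_config=engine_config)
print(pipe(['Hi, pls intro yourself', 'Shanghai is']))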

lyc728 commented 2 months ago

I'm using InternVL2 and MiniCPM 2.5.

lyc728 commented 2 months ago

OK.