InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Qwen1.5 inference not supported #1697

Closed: zzc0208 closed this issue 5 months ago

zzc0208 commented 5 months ago


Describe the bug

I want to run inference with qwen1.5-32B-awq using lmdeploy, but it fails with ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not currently imported. The lmdeploy repository says Qwen1.5 is supported, but in practice inference does not work.

Reproduction

lmdeploy serve api_server ~/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ --model-name qwen1.5-32B --server-port 2333 --quant-policy 8

Environment

(sd) linjl@bme-server:~$ lmdeploy check_env
sys.platform: linux
Python: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
PyTorch: 2.2.2+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.7
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.17.2+cu118
LMDeploy: 0.4.2+
transformers: 4.30.2
gradio: 3.41.2
fastapi: 0.111.0
pydantic: 2.7.2
triton: 2.2.0

Error traceback

(sd) linjl@bme-server:~$ lmdeploy serve api_server /home/linjl/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ --model-name qwen1.5-32B --server-port 2333 --quant-policy 8
2024-05-31 19:04:55,679 - lmdeploy - WARNING - AutoConfig.from_pretrained failed for /home/linjl/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ. Exception: 'qwen2'
Traceback (most recent call last):
  File "/home/linjl/anaconda3/envs/sd/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 37, in run
    args.run(args)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/cli/serve.py", line 303, in api_server
    run_api_server(args.model_path,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/serve/openai/api_server.py", line 1191, in serve
    VariableInterface.async_engine = pipeline_class(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 206, in __init__
    self._build_turbomind(model_path=model_path,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 253, in _build_turbomind
    self.engine = tm.TurboMind.from_pretrained(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 387, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 160, in __init__
    self.tokenizer = Tokenizer(model_path)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/tokenizer.py", line 504, in __init__
    self.model = HuggingFaceTokenizer(model_folder)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/tokenizer.py", line 201, in __init__
    self.model = AutoTokenizer.from_pretrained(model_dir,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 688, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not currently imported.
lvhan028 commented 5 months ago
ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not currently imported.

The log shows Qwen2Tokenizer does not exist. Could you try the following code to check if the tokenizer works well?

from transformers import AutoTokenizer
# use the absolute model path; from_pretrained does not expand '~'
tokenizer = AutoTokenizer.from_pretrained('/home/linjl/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ')

BTW, please add --model-format awq when launching the service.
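With that flag, the reproduction command above becomes:

lmdeploy serve api_server ~/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ --model-name qwen1.5-32B --server-port 2333 --quant-policy 8 --model-format awq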

zzc0208 commented 5 months ago

I tried the code you provided; it runs as follows:

Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('/home/linjl/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ')
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
>>> 
zzc0208 commented 5 months ago

I tried adding the --model-format awq option when launching the api server, but it still fails with ValueError: Tokenizer class Qwen2Tokenizer does not exist or is not currently imported.

linkerr commented 5 months ago

transformers: 4.30.2

Qwen1.5 requires transformers>=4.37.0
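For example, in a pip-managed environment:

pip install -U "transformers>=4.37.0"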

zzc0208 commented 5 months ago

Hello, I updated the transformers version, but now I seem to be hitting another problem:

(sd) linjl@bme-server:~$ lmdeploy serve api_server ~/text-generation-webui/models/Qwen_Qwen1.5-32B-Chat-AWQ --model-name qwen1.5-32B --server-port 2333 --quant-policy 8 --model-format awq
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Convert to turbomind format:   0%|          | 0/64 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/linjl/anaconda3/envs/sd/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 37, in run
    args.run(args)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/cli/serve.py", line 303, in api_server
    run_api_server(args.model_path,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/serve/openai/api_server.py", line 1191, in serve
    VariableInterface.async_engine = pipeline_class(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 206, in __init__
    self._build_turbomind(model_path=model_path,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 253, in _build_turbomind
    self.engine = tm.TurboMind.from_pretrained(
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 387, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 161, in __init__
    self.model_comm = self._from_hf(model_source=model_source,
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 296, in _from_hf
    output_model.export()
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 274, in export
    self.export_transformer_block(bin, i)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 127, in export_transformer_block
    self.save_split(qkv_sz, f'layers.{i}.attention.w_qkv.scales_zeros', -1)
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 252, in save_split
    self.export_weight(split, f'{prefix}.{i}{ext}')
  File "/home/linjl/anaconda3/envs/sd/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 230, in export_weight
    tm_tensor.copy_from(torch_tensor)
RuntimeError: [TM][ERROR]  Assertion fail: /lmdeploy/src/turbomind/python/bind.cpp:294 

(sd) linjl@bme-server:~$    
lvhan028 commented 5 months ago

Hi, @zzc0208 If you are using the official https://huggingface.co/Qwen/Qwen1.5-32B-Chat-AWQ released by Qwen, lmdeploy cannot support it at the moment: that model was quantized with group_size 32, while lmdeploy supports group_size 128.

  "quantization_config": {
    "bits": 4,
    "group_size": 32,
    "modules_to_not_convert": null,
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true
  },

One workaround is to re-quantize qwen1.5-32b-chat with the lmdeploy lite auto_awq tool. We also plan to support group_size = 32 and 64, but that will land later, probably around July.
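For reference, a sketch of that re-quantization command, assuming the original fp16 Qwen/Qwen1.5-32B-Chat weights are available (the work-dir name is a placeholder; --w-group-size 128 matches what lmdeploy currently supports):

lmdeploy lite auto_awq Qwen/Qwen1.5-32B-Chat --w-bits 4 --w-group-size 128 --work-dir ./qwen1.5-32b-chat-4bit-gs128

The resulting work-dir can then be served with --model-format awq as above.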

zzc0208 commented 5 months ago

Got it, thank you.