InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] AWQ 4-bit quantization errors out with --tp 4 #1646

Closed. zeroleavebaoyang closed this issue 3 months ago.

zeroleavebaoyang commented 3 months ago

Describe the bug

qwen1.5-32b-chat-awq-4bit runs fine with --tp 2, but errors out with --tp 4.

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Convert to turbomind format:   0%|          | 0/64 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/nlp/miniconda3/envs/baoy/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 37, in run
    args.run(args)
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/cli/serve.py", line 283, in api_server
    run_api_server(args.model_path,
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/serve/openai/api_server.py", line 1191, in serve
    VariableInterface.async_engine = pipeline_class(
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 206, in __init__
    self._build_turbomind(model_path=model_path,
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 254, in _build_turbomind
    self.engine = tm.TurboMind.from_pretrained(
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 396, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 170, in __init__
    self.model_comm = self._from_hf(model_source=model_source,
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 305, in _from_hf
    output_model.export()
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 273, in export
    self.export_transformer_block(bin, i)
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 156, in export_transformer_block
    self.save_split(w2_sz, f'layers.{i}.feed_forward.w2.scales_zeros', 0)
  File "/home/nlp/miniconda3/envs/baoy/lib/python3.10/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 246, in save_split
    assert tensor.shape[split_dim] % tp == 0
AssertionError
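The failure is the final assertion: save_split requires the tensor dimension being partitioned to be evenly divisible by tp. A minimal numpy sketch of that constraint (the function body and shapes here are illustrative assumptions, not lmdeploy's actual implementation):

import numpy as np

def save_split(tensor, name, split_dim, tp):
    # Each of the tp ranks must receive an equally sized slice along
    # split_dim, hence the guard that raises in base.py.
    assert tensor.shape[split_dim] % tp == 0, name
    return np.split(tensor, tp, axis=split_dim)

# Illustrative scales_zeros-like tensor: 214 quantization groups along dim 0.
t = np.zeros((214, 5120))
save_split(t, 'w2.scales_zeros', 0, 2)  # OK: 214 % 2 == 0
save_split(t, 'w2.scales_zeros', 0, 4)  # AssertionError: 214 % 4 == 2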

Reproduction

CUDA_VISIBLE_DEVICES=0,1,2,3 nohup lmdeploy serve api_server /opt/nlp/pretrain_models/Qwen1.5-32B-Chat-awq4 \
    --model-name qwen \
    --server-name 0.0.0.0 \
    --server-port 23333 \
    --tp 4 \
    --rope-scaling-factor 2.0 \
    --session-len 32000 \
    --quant-policy 8 \
    --model-format awq > 32.log 2>&1 &

Environment

~/llm_trainer/scripts$ lmdeploy check_env
sys.platform: linux
Python: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7,8,9: NVIDIA GeForce RTX 4090
CUDA_HOME: /usr/local/cuda-11.8
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.1.2+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

TorchVision: 0.16.2+cu121
LMDeploy: 0.4.1+d010a97
transformers: 4.40.2
gradio: Not Found
fastapi: 0.111.0
pydantic: 2.7.1
triton: 2.1.0

Error traceback

No response

AllentDan commented 3 months ago

The tensor shapes of the quantization parameters are not divisible by tp, so the model cannot be split for tensor parallelism at that degree.
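A back-of-the-envelope check of why --tp 2 passes while --tp 4 fails, assuming Qwen1.5-32B's intermediate_size of 27392 and the common AWQ group size of 128 (both figures are assumptions, not taken from the log above):

# w2.scales_zeros has one entry per quantization group along the split dim.
intermediate_size = 27392   # assumed for Qwen1.5-32B
group_size = 128            # assumed AWQ default
groups = intermediate_size // group_size  # 214

for tp in (2, 4):
    ok = groups % tp == 0
    print(f'tp={tp}: {groups} % {tp} = {groups % tp} ->', 'OK' if ok else 'AssertionError')

# tp=2: 214 % 2 = 0 -> OK
# tp=4: 214 % 4 = 2 -> AssertionError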