InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Severe output repetition when serving an AWQ model with the TurboMind engine #2334

Closed. LLouice closed this issue 1 month ago.

LLouice commented 1 month ago


Describe the bug

I AWQ-quantized the Qwen2-72B-Instruct model (to rule out model-specific effects, this experiment uses the official model rather than an SFT model). For multi-TP deployment, following the pad-0 workaround, I first zero-padded the intermediate hidden layer and then quantized. The quantized model deploys successfully with both vLLM and the LMDeploy PyTorch engine, and the outputs are normal.
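
For reference, a minimal sketch of the pad-0 workaround described above. This is not taken from this issue: the padding rule (each TP shard a multiple of the AWQ group size), the group size of 128, and the output path are assumptions for illustration.

# Sketch: zero-pad the MLP intermediate dimension before AWQ quantization
# (assumptions: group_size=128; Qwen2-style mlp with gate_proj/up_proj/down_proj).
# Loading a 72B checkpoint this way needs substantial CPU RAM; shown for clarity.
import torch
from transformers import AutoModelForCausalLM

tp = 4
group_size = 128  # assumed AWQ group size

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B-Instruct", torch_dtype=torch.float16)

inter = model.config.intermediate_size  # 29568 for Qwen2-72B
step = tp * group_size                  # each TP shard must divide into groups
pad = (-inter) % step                   # 29568 -> pad by 128 -> 29696

if pad:
    for layer in model.model.layers:
        mlp = layer.mlp
        for name in ("gate_proj", "up_proj"):  # [inter, hidden]: pad output rows
            w = getattr(mlp, name).weight.data
            getattr(mlp, name).weight.data = torch.cat(
                [w, w.new_zeros(pad, w.shape[1])], dim=0)
        w = mlp.down_proj.weight.data           # [hidden, inter]: pad input cols
        mlp.down_proj.weight.data = torch.cat(
            [w, w.new_zeros(w.shape[0], pad)], dim=1)
    model.config.intermediate_size = inter + pad
    # A full implementation would also update each Linear's
    # in_features/out_features before saving the padded checkpoint.

model.save_pretrained("./Qwen2-72B-Instruct-padded")  # hypothetical output path

The AWQ quantization step then runs on the padded checkpoint as usual; zero rows/columns leave the layer's function unchanged.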

However, after converting the model to TurboMind format, the served model suffers from severe repetition that makes the output unusable.

  1. Repeated output

     (screenshot: Snipaste_2024-08-19_16-08-14)
  2. Normal output

     (screenshot: Snipaste_2024-08-19_15-43-37)

Reproduction

Convert the AWQ model into TurboMind format

convert_awq() {
    local tp=4
    local model_name=$1
    local model_path=$2

    lmdeploy convert \
        --model-format awq \
        --tp "$tp" \
        --dst-path "$DIR/models/AWQ/$model_name" \
        qwen \
        "$model_path"
}

Serve the TurboMind-format model

serve_api_awq() {
    local model_name=${1-Qwen2-72B-Instruct}
    local tp=${2-4}
    local port=${3-"9006"}
    local backend=${4-'turbomind'}

    NCCL_DEBUG=WARN \
        lmdeploy serve api_server \
        --backend "${backend}" \
        --server-name 0.0.0.0 \
        --server-port "$port" \
        `#--enable-prefix-caching` \
        --tp "$tp" \
        --model-name "$model_name" \
        --model-format awq \
        `#--quant-policy 8` \
        --chat-template "$DIR/template/chatml.json" \
        "$DIR/models/AWQ/$model_name"
}

Call the API

_fetch() {
    local msg=$1
    curl "http://${server_ip}:${server_port}/v1/chat/completions" \
        -H "Content-Type: application/json" \
        -d @- <<EOF
{
  "model": "${model_name}",
  "messages": [{"role": "user", "content": "$msg"}],
  "skip_special_tokens": false,
  "temperature": 0.3,
  "max_tokens": 512,
  "repetition_penalty": 1.2
}
EOF
}

Environment

sys.platform: linux
Python: 3.9.19 (main, May  6 2024, 19:43:03) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3: NVIDIA A800-SXM4-80GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.66
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.3.1+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.18.1+cu121
LMDeploy: 0.5.3+
transformers: 4.44.0
gradio: 4.37.2
fastapi: 0.112.1
pydantic: 2.8.2
triton: 2.3.1
NVIDIA Topology: 
    GPU0    GPU1    GPU2    GPU3    mlx5_0  mlx5_1  mlx5_2  mlx5_3  mlx5_4  mlx5_5  mlx5_6  mlx5_7  mlx5_8  CPU Affinity    NUMA Affinity
GPU0     X  NV8 NV8 NV8 NODE    PXB PXB NODE    NODE    SYS SYS SYS SYS 0-31,64-95  0
GPU1    NV8  X  NV8 NV8 NODE    PXB PXB NODE    NODE    SYS SYS SYS SYS 0-31,64-95  0
GPU2    NV8 NV8  X  NV8 NODE    NODE    NODE    PXB PXB SYS SYS SYS SYS 0-31,64-95  0
GPU3    NV8 NV8 NV8  X  NODE    NODE    NODE    PXB PXB SYS SYS SYS SYS 0-31,64-95  0
mlx5_0  NODE    NODE    NODE    NODE     X  NODE    NODE    NODE    NODE    SYS SYS SYS SYS     
mlx5_1  PXB PXB NODE    NODE    NODE     X  PIX NODE    NODE    SYS SYS SYS SYS     
mlx5_2  PXB PXB NODE    NODE    NODE    PIX  X  NODE    NODE    SYS SYS SYS SYS     
mlx5_3  NODE    NODE    PXB PXB NODE    NODE    NODE     X  PIX SYS SYS SYS SYS     
mlx5_4  NODE    NODE    PXB PXB NODE    NODE    NODE    PIX  X  SYS SYS SYS SYS     
mlx5_5  SYS SYS SYS SYS SYS SYS SYS SYS SYS  X  PIX NODE    NODE        
mlx5_6  SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX  X  NODE    NODE        
mlx5_7  SYS SYS SYS SYS SYS SYS SYS SYS SYS NODE    NODE     X  PIX     
mlx5_8  SYS SYS SYS SYS SYS SYS SYS SYS SYS NODE    NODE    PIX  X      

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

No response

lvhan028 commented 1 month ago

First turn off random sampling (temperature=0.0). Under that condition, do you still see the repetition?
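
For concreteness, a minimal sketch of that check (host, port, model name, and prompt are placeholders matching the scripts above): resend the same request with temperature set to 0.0 so decoding is greedy.

# Sketch: repeat the request with sampling disabled (greedy decoding).
import requests

resp = requests.post(
    "http://0.0.0.0:9006/v1/chat/completions",  # placeholder host/port
    json={
        "model": "Qwen2-72B-Instruct",
        "messages": [{"role": "user", "content": "你好"}],
        "temperature": 0.0,  # disables random sampling
        "max_tokens": 512,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])

If repetition persists at temperature=0.0, the cause is in the engine's forward pass rather than in the sampling parameters.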

lzhangzz commented 1 month ago

@LLouice Which commit exactly are you using? Versions before b28a1d048491b9ffd6d1bff48a424d40622ae147 do not support running AWQ on V100.

LLouice commented 1 month ago

First turn off random sampling (temperature=0.0). Under that condition, do you still see the repetition?

Yes, it still repeats.

LLouice commented 1 month ago

@LLouice Which commit exactly are you using? Versions before b28a1d0 do not support running AWQ on V100.

The latest one, installed from this dev build: https://github.com/zhyncs/lmdeploy-build/releases/download/f8f8543/lmdeploy-0.5.3+cu121+f8f8543-cp39-cp39-manylinux2014_x86_64.whl. Sorry! The check_env info I posted earlier was wrong; I have just updated it. All operations were performed on A800 GPUs.

lzhangzz commented 1 month ago

@LLouice

I suggest trying the newer build https://github.com/zhyncs/lmdeploy-build/releases/tag/b28a1d0 ; in earlier versions, the sliced-k accumulation used lower precision.
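
As a toy illustration of the point (this is not the actual TurboMind kernel; the reduction length is chosen arbitrarily): accumulating a long dot product in fp16, as a lower-precision sliced-k reduction effectively does with its partial sums, drifts measurably compared with accumulating in fp32.

# Sketch: fp16 vs fp32 accumulation of the same dot product.
import numpy as np

rng = np.random.default_rng(0)
k = 29696  # reduction length; value chosen only for illustration
a = rng.standard_normal(k).astype(np.float16)
b = rng.standard_normal(k).astype(np.float16)

# fp16 running sum: each partial result is rounded back to half precision
acc16 = np.float16(0.0)
for x, y in zip(a, b):
    acc16 = np.float16(acc16 + x * y)

# fp32 accumulation of the same products
acc32 = float(np.dot(a.astype(np.float32), b.astype(np.float32)))

print(f"fp16 accumulation: {float(acc16):+.3f}")
print(f"fp32 accumulation: {acc32:+.3f}")
print(f"difference:        {abs(float(acc16) - acc32):.3f}")

Once the running sum grows large, fp16's limited mantissa rounds away much of each new addend, so per-step rounding errors can accumulate to a visible error, which is the kind of drift that higher-precision accumulation avoids.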

LLouice commented 1 month ago

@LLouice

I suggest trying the newer build https://github.com/zhyncs/lmdeploy-build/releases/tag/b28a1d0 ; in earlier versions, the sliced-k accumulation used lower precision.

Yep, I tried this build and the problems are all resolved, thanks!