vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: KeyError: 'model.layers.24.mlp.down_proj.weight' for llama 7b model SqueezeLLM quantization #4198

Open condy0919 opened 6 months ago

condy0919 commented 6 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (conda-forge gcc 12.3.0-5) 12.3.0
Clang version: 14.0.6
CMake version: version 3.29.2
Libc version: glibc-2.31

Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU(s):                          192
On-line CPU(s) list:             0-191
Thread(s) per core:              2
Core(s) per socket:              48
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       AuthenticAMD
CPU family:                      23
Model:                           49
Model name:                      AMD EPYC 7642 48-Core Processor
Stepping:                        0
Frequency boost:                 enabled
CPU MHz:                         3299.906
CPU max MHz:                     2300.0000
CPU min MHz:                     1500.0000
BogoMIPS:                        4599.98
Virtualization:                  AMD-V
L1d cache:                       3 MiB
L1i cache:                       3 MiB
L2 cache:                        48 MiB
L3 cache:                        512 MiB
NUMA node0 CPU(s):               0-47,96-143
NUMA node1 CPU(s):               48-95,144-191
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] onnx==1.15.0
[pip3] onnx-graphsurgeon==0.3.27
[pip3] onnxruntime==1.16.3
[pip3] torch==2.2.2
[pip3] torch-hgemm==0.1.0
[pip3] torch-int==0.0.0
[pip3] torchaudio==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] blas                      1.0                         mkl    defaults
[conda] cudatoolkit               11.8.0              h4ba93d1_13    conda-forge
[conda] libjpeg-turbo             2.0.0                h9bf148f_0    pytorch
[conda] mkl                       2023.1.0         h213fc3f_46344    defaults
[conda] mkl-service               2.4.0           py311h5eee18b_1    defaults
[conda] mkl_fft                   1.3.8           py311h5eee18b_0    defaults
[conda] mkl_random                1.2.4           py311hdb19cb5_0    defaults
[conda] numpy                     1.26.2          py311h08b1b3b_0    defaults
[conda] numpy-base                1.26.2          py311hf175353_0    defaults
[conda] pytorch                   2.2.2           py3.11_cuda12.1_cudnn8.9.2_0    pytorch
[conda] pytorch-cuda              12.1                 ha16c6d3_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     2.2.1                    pypi_0    pypi
[conda] torch-hgemm               0.1.0                    pypi_0    pypi
[conda] torch-int                 0.0.0                    pypi_0    pypi
[conda] torchaudio                2.2.1               py311_cu121    pytorch
[conda] torchtriton               2.2.0                     py311    pytorch
[conda] torchvision               0.17.1              py311_cu121    pytorch
[conda] triton                    2.2.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X  NV12    NV12    NV12    NV12    NV12    NV12    NV12    SYS SYS SYS SYS 0-47,96-143 0       N/A
GPU1    NV12     X  NV12    NV12    NV12    NV12    NV12    NV12    SYS SYS SYS SYS 0-47,96-143 0       N/A
GPU2    NV12    NV12     X  NV12    NV12    NV12    NV12    NV12    SYS SYS SYS SYS 0-47,96-143 0       N/A
GPU3    NV12    NV12    NV12     X  NV12    NV12    NV12    NV12    SYS SYS SYS SYS 0-47,96-143 0       N/A
GPU4    NV12    NV12    NV12    NV12     X  NV12    NV12    NV12    NODE    NODE    NODE    NODE    48-95,144-191   1       N/A
GPU5    NV12    NV12    NV12    NV12    NV12     X  NV12    NV12    NODE    NODE    NODE    NODE    48-95,144-191   1       N/A
GPU6    NV12    NV12    NV12    NV12    NV12    NV12     X  NV12    PXB PXB PXB PXB 48-95,144-191   1       N/A
GPU7    NV12    NV12    NV12    NV12    NV12    NV12    NV12     X  PXB PXB PXB PXB 48-95,144-191   1       N/A
NIC0    SYS SYS SYS SYS NODE    NODE    PXB PXB  X  PIX PXB PXB             
NIC1    SYS SYS SYS SYS NODE    NODE    PXB PXB PIX  X  PXB PXB             
NIC2    SYS SYS SYS SYS NODE    NODE    PXB PXB PXB PXB  X  PIX             
NIC3    SYS SYS SYS SYS NODE    NODE    PXB PXB PXB PXB PIX  X              

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3

🐛 Describe the bug

from vllm import LLM

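# Reproduce: load a local Llama-2 7B checkpoint with SqueezeLLM quantization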
model = LLM(model='path/to/local/llama2/7b', quantization='squeezellm', gpu_memory_utilization=0.4)

The tail of the traceback is:

  File "/data2_7T/condy/.mambaforge/envs/dev/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 411, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'model.layers.24.mlp.down_proj.weight'
>>>

This issue is similar to #4013, but it still fails with the latest vLLM.

condy0919 commented 6 months ago

The params_dict constructed from dict(self.named_parameters()) only contains the SqueezeLLM-quantized parameters.

So a key like model.layers.0.self_attn.qkv_proj.weight no longer exists in it.
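A minimal self-contained sketch of the mismatch (not vLLM's actual code; the qweight/lookup_table names are an assumption about what the SqueezeLLM linear method registers):

# Sketch only: why load_weights raises KeyError after quantization.
checkpoint_names = ["model.layers.24.mlp.down_proj.weight"]  # tensor names stored on disk

params_dict = {  # names the quantized model actually registers
    "model.layers.24.mlp.down_proj.qweight": None,        # assumed packed weight
    "model.layers.24.mlp.down_proj.lookup_table": None,   # assumed lookup table
}

for name in checkpoint_names:
    param = params_dict[name]  # KeyError: the fp16 'weight' entry is gone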

RyanWMHI commented 6 months ago

I used a try/except with continue to skip over the missing weights.
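Concretely, that amounts to something like the following inside load_weights in llama.py (a hedged sketch, not the exact vLLM source; it assumes default_weight_loader is the fallback loader for plain tensors):

for name, loaded_weight in weights:
    try:
        param = params_dict[name]
    except KeyError:
        # The checkpoint tensor has no counterpart in the quantized model
        # (e.g. fp16 'weight' replaced by packed 'qweight'), so skip it.
        continue
    weight_loader = getattr(param, "weight_loader", default_weight_loader)
    weight_loader(param, loaded_weight)

Skipping unmatched tensors keeps loading going, but it can also hide genuine name mismatches, so outputs should be validated afterwards.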

condy0919 commented 6 months ago

I used a try/except with continue to skip over the missing weights.

Is accuracy on downstream tasks affected?

gushob21 commented 1 month ago

Facing this issue with --quantization=gguf

figuernd commented 3 weeks ago

Same issue with Mistral Large and BitsAndBytes quantization.