vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: vLLM does not support FP8 KV cache when using FlashInfer #6537

Open kuangdao opened 1 month ago

kuangdao commented 1 month ago

Your current environment


Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.35

Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.119-19.0009.44-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
GPU 4: NVIDIA L20
GPU 5: NVIDIA L20
GPU 6: NVIDIA L20
GPU 7: NVIDIA L20

Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   52 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          384
On-line CPU(s) list:             0-383
Vendor ID:                       AuthenticAMD
BIOS Vendor ID:                  Red Hat
Model name:                      AMD EPYC 9K84 96-Core Processor
BIOS Model name:                 3.0
CPU family:                      25
Model:                           17
Thread(s) per core:              2
Core(s) per socket:              96
Socket(s):                       2
Stepping:                        0
BogoMIPS:                        5200.07
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ibpb vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 avx512_bf16 clzero xsaveerptr wbnoinvd arat avx512vbmi umip avx512_vbmi2 vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       6 MiB (192 instances)
L1i cache:                       6 MiB (192 instances)
L2 cache:                        192 MiB (192 instances)
L3 cache:                        768 MiB (24 instances)
NUMA node(s):                    2
NUMA node0 CPU(s):               0-191
NUMA node1 CPU(s):               192-383
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] flashinfer==0.1.0+cu121torch2.3
[pip3] numpy==1.22.2
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.14.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.3.0
[pip3] torch-tensorrt==0.0.0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.1
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    NODE    NODE    SYS     SYS     SYS     SYS     0-191   0               N/A
GPU1    NODE     X      PIX     NODE    SYS     SYS     SYS     SYS     0-191   0               N/A
GPU2    NODE    PIX      X      NODE    SYS     SYS     SYS     SYS     0-191   0               N/A
GPU3    NODE    NODE    NODE     X      SYS     SYS     SYS     SYS     0-191   0               N/A
GPU4    SYS     SYS     SYS     SYS      X      NODE    NODE    NODE    192-383 1               N/A
GPU5    SYS     SYS     SYS     SYS     NODE     X      PIX     NODE    192-383 1               N/A
GPU6    SYS     SYS     SYS     SYS     NODE    PIX      X      NODE    192-383 1               N/A
GPU7    SYS     SYS     SYS     SYS     NODE    NODE    NODE     X      192-383 1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

export CUDA_VISIBLE_DEVICES=1
export VLLM_ATTENTION_BACKEND=FLASHINFER

python -m vllm.entrypoints.openai.api_server \
    --port=8006 \
    --host=0.0.0.0 \
    --served-model-name=vllm_fp8_kvcache \
    --model=deepseek-ai/DeepSeek-Coder-V2-Instruct \
    --quantization fp8 \
    --kv-cache-dtype=fp8 \
    --tensor-parallel-size=1 \
    --dtype=bfloat16 \
    --max-model-len=8192
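For reference, a minimal offline sketch that exercises the same configuration through the Python API (the prompt is a placeholder; VLLM_ATTENTION_BACKEND=FLASHINFER must still be set in the environment before launching):

from vllm import LLM, SamplingParams

# Same engine settings as the server command above.
llm = LLM(
    model="deepseek-ai/DeepSeek-Coder-V2-Instruct",
    quantization="fp8",
    kv_cache_dtype="fp8",
    tensor_parallel_size=1,
    dtype="bfloat16",
    max_model_len=8192,
)

# Placeholder prompt just to trigger a generation and hit the attention path.
outputs = llm.generate(["Write a hello-world in Python."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)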

I run vLLM with the script above. I want fp8 attention, where the Q·transpose(K) and softmax·V computations inside attention are performed in fp8 by FLASHINFER, as discussed in https://github.com/vllm-project/vllm/issues/6246. The error is shown below.

[Screenshot 2024-07-18 17:21:55 showing the error traceback]
kuangdao commented 1 month ago

I set the vLLM version to 0.5.2 and the flashinfer version to 0.8 and got the same error. Can anyone tell me why?

kuangdao commented 1 month ago

Is anyone looking into this? @DarkLight1337

DarkLight1337 commented 1 month ago

@Yard1 can you help with this?

Yard1 commented 1 month ago

I'm pretty sure this is not supported yet. We should be raising an error...
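A minimal sketch of the kind of guard Yard1 is describing; the function name and call site are hypothetical and only illustrate where such a check could live, they are not vLLM's actual code:

# Hypothetical validation helper, not vLLM's actual implementation.
def _check_flashinfer_kv_cache_dtype(attention_backend: str, kv_cache_dtype: str) -> None:
    # FP8 KV cache is reported as unsupported with the FLASHINFER backend,
    # so fail fast instead of crashing later inside the attention kernel.
    if attention_backend == "FLASHINFER" and kv_cache_dtype.startswith("fp8"):
        raise NotImplementedError(
            "FP8 KV cache is not yet supported with the FLASHINFER attention "
            "backend; use --kv-cache-dtype=auto or a different attention backend."
        )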

kuangdao commented 1 month ago

Thanks. When will this feature be supported? @Yard1

suluner commented 1 month ago

I met the same issue. Is there any plan to support this feature?

learninmou commented 4 weeks ago

Same error here. I hope flashinfer will support the FP8 KV cache together with CUDA graphs.