vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn' #6249

Open LSC527 opened 2 months ago

LSC527 commented 2 months ago

Your current environment

PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB

Nvidia driver version: 525.85.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 57 bits virtual
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Vendor ID:                       GenuineIntel
BIOS Vendor ID:                  Intel(R) Corporation
Model name:                      Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
BIOS Model name:                 Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
CPU family:                      6
Model:                           106
Thread(s) per core:              2
Core(s) per socket:              32
Socket(s):                       2
Stepping:                        6
Frequency boost:                 enabled
CPU max MHz:                     3500.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5200.00
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       3 MiB (64 instances)
L1i cache:                       2 MiB (64 instances)
L2 cache:                        80 MiB (64 instances)
L3 cache:                        96 MiB (2 instances)
NUMA node(s):                    2
NUMA node0 CPU(s):               0-31,64-95
NUMA node1 CPU(s):               32-63,96-127
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable, IBPB
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.16.0
[pip3] optree==0.11.0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+a9bc1a364
[pip3] torch==2.3.0
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.1
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    CPU Affinity    NUMA Affinity
GPU0     X  NV8 NV8 NV8 NV8 NV8 NV8 NV8 PXB PXB NODE    NODE    SYS SYS SYS SYS 0-31,64-95  0
GPU1    NV8  X  NV8 NV8 NV8 NV8 NV8 NV8 PXB PXB NODE    NODE    SYS SYS SYS SYS 0-31,64-95  0
GPU2    NV8 NV8  X  NV8 NV8 NV8 NV8 NV8 NODE    NODE    PXB PXB SYS SYS SYS SYS 0-31,64-95  0
GPU3    NV8 NV8 NV8  X  NV8 NV8 NV8 NV8 NODE    NODE    PXB PXB SYS SYS SYS SYS 0-31,64-95  0
GPU4    NV8 NV8 NV8 NV8  X  NV8 NV8 NV8 SYS SYS SYS SYS PXB PXB NODE    NODE    32-63,96-127    1
GPU5    NV8 NV8 NV8 NV8 NV8  X  NV8 NV8 SYS SYS SYS SYS PXB PXB NODE    NODE    32-63,96-127    1
GPU6    NV8 NV8 NV8 NV8 NV8 NV8  X  NV8 SYS SYS SYS SYS NODE    NODE    PXB PXB 32-63,96-127    1
GPU7    NV8 NV8 NV8 NV8 NV8 NV8 NV8  X  SYS SYS SYS SYS NODE    NODE    PXB PXB 32-63,96-127    1
NIC0    PXB PXB NODE    NODE    SYS SYS SYS SYS  X  PIX NODE    NODE    SYS SYS SYS SYS
NIC1    PXB PXB NODE    NODE    SYS SYS SYS SYS PIX  X  NODE    NODE    SYS SYS SYS SYS
NIC2    NODE    NODE    PXB PXB SYS SYS SYS SYS NODE    NODE     X  PIX SYS SYS SYS SYS
NIC3    NODE    NODE    PXB PXB SYS SYS SYS SYS NODE    NODE    PIX  X  SYS SYS SYS SYS
NIC4    SYS SYS SYS SYS PXB PXB NODE    NODE    SYS SYS SYS SYS  X  PIX NODE    NODE
NIC5    SYS SYS SYS SYS PXB PXB NODE    NODE    SYS SYS SYS SYS PIX  X  NODE    NODE
NIC6    SYS SYS SYS SYS NODE    NODE    PXB PXB SYS SYS SYS SYS NODE    NODE     X  PIX
NIC7    SYS SYS SYS SYS NODE    NODE    PXB PXB SYS SYS SYS SYS NODE    NODE    PIX  X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7

🐛 Describe the bug

I am loading DeepSeek-V2 with FP8 quantization. It seems that torch does not implement cat for fp8 tensors on CUDA. Maybe I should report this issue to PyTorch instead, but I still want you guys to be informed.
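
For reference, the failure reproduces outside vLLM with plain PyTorch. The sketch below assumes PyTorch 2.3 with a CUDA device; the uint8 reinterpretation at the end is only a hypothetical workaround illustration, not something vLLM does.

import torch

# Minimal repro: torch.cat has no CUDA kernel for Float8_e4m3fn, which is
# what DeepseekV2MoE.pack_params hits via torch._utils._flatten_dense_tensors.
a = torch.randn(4, 4, device="cuda").to(torch.float8_e4m3fn)
b = torch.randn(4, 4, device="cuda").to(torch.float8_e4m3fn)
try:
    torch.cat([a, b])
except RuntimeError as e:
    print(e)  # "cat_cuda" not implemented for 'Float8_e4m3fn'

# Hypothetical workaround sketch: fp8 elements are one byte each, so they can
# be reinterpreted as uint8, concatenated, and viewed back as fp8.
flat = torch.cat([a.view(torch.uint8), b.view(torch.uint8)]).view(torch.float8_e4m3fn)
print(flat.dtype, flat.shape)  # torch.float8_e4m3fn torch.Size([8, 4])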

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "DeepSeek-Coder-V2-Instruct"
quantized_model_dir = "DeepSeek-Coder-V2-Instruct-FP8-Dynamic"

# Define quantization config with dynamic activation scales
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="dynamic")
# For dynamic activation scales, there is no need for calibration examples
examples = []

# Load the model, quantize, and save checkpoint
model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config, trust_remote_code=True, device_map="cpu")
model.quantize(examples)
model.save_quantized(quantized_model_dir)

from vllm import LLM, SamplingParams

llm = LLM(quantized_model_dir,
          tensor_parallel_size=8,
          trust_remote_code=True,
          max_model_len=8192,
          enforce_eager=True,
          quantization="fp8")
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/work/serve/deepseekv2_test/deepseekv2_eval.py", line 62, in <module>
[rank0]:     llm = LLM(args.model, tensor_parallel_size=args.tensor_parallel_size, trust_remote_code=True,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 149, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 414, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 243, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 25, in __init__
[rank0]:     super().__init__(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 42, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 85, in _init_executor
[rank0]:     self._run_workers("load_model",
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 136, in _run_workers
[rank0]:     driver_worker_output = driver_worker_method(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 133, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 243, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 267, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 104, in _initialize_model
[rank0]:     return model_class(config=model_config.hf_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 467, in __init__
[rank0]:     self.model = DeepseekV2Model(config, cache_config, quant_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 429, in __init__
[rank0]:     self.layers = nn.ModuleList([
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 430, in <listcomp>
[rank0]:     DeepseekV2DecoderLayer(config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 369, in __init__
[rank0]:     self.mlp = DeepseekV2MoE(config=config, quant_config=quant_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 113, in __init__
[rank0]:     self.pack_params()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 137, in pack_params
[rank0]:     self.w1 = torch._utils._flatten_dense_tensors(w1)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/_utils.py", line 509, in _flatten_dense_tensors
[rank0]:     return torch._C._nn.flatten_dense_tensors(tensors)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_device.py", line 78, in __torch_function__
[rank0]:     return func(*args, **kwargs)
[rank0]: RuntimeError: "cat_cuda" not implemented for 'Float8_e4m3fn'
mgoin commented 2 months ago

@LSC527 The issue is that vLLM's DeepSeek-V2 MoE implementation doesn't support FP8 yet, and FP8 MoE is not supported on Ampere GPUs (your A800s are Ampere). You need Ada Lovelace or Hopper GPUs for native FP8 hardware support.
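
For reference, FP8 tensor cores require compute capability 8.9 (Ada Lovelace) or 9.0 (Hopper), while the A800 reports 8.0. A quick illustrative check:

import torch

# FP8 (e4m3/e5m2) tensor-core support starts at compute capability 8.9
# (Ada Lovelace) / 9.0 (Hopper); the A800 is Ampere and reports 8.0.
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")
print("Native FP8 support:", (major, minor) >= (8, 9))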