vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Installation]: NotImplementedError get_device_capability #8243

Closed: joestein-ssc closed this issue 6 days ago

joestein-ssc commented 1 week ago

Your current environment

Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: AlmaLinux release 8.10 (Cerulean Leopard) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28

Python version: 3.11.9 (main, Jul 2 2024, 16:32:17) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)] (64-bit runtime)
Python platform: Linux-4.18.0-553.8.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3

Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8468
Stepping: 8
CPU MHz: 3800.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 107520K
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
NUMA node2 CPU(s): 24-35
NUMA node3 CPU(s): 36-47
NUMA node4 CPU(s): 48-59
NUMA node5 CPU(s): 60-71
NUMA node6 CPU(s): 72-83
NUMA node7 CPU(s): 84-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.44.2
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.0@32e7db25365415841ebc7c4215851743fbb1bad1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0  X     NV18  NV18  NV18  NV18  NV18  NV18  NV18  SYS   0-11          0              N/A
GPU1  NV18  X     NV18  NV18  NV18  NV18  NV18  NV18  SYS   24-35         2              N/A
GPU2  NV18  NV18  X     NV18  NV18  NV18  NV18  NV18  SYS   36-47         3              N/A
GPU3  NV18  NV18  NV18  X     NV18  NV18  NV18  NV18  SYS   12-23         1              N/A
GPU4  NV18  NV18  NV18  NV18  X     NV18  NV18  NV18  SYS   48-59         4              N/A
GPU5  NV18  NV18  NV18  NV18  NV18  X     NV18  NV18  PIX   72-83         6              N/A
GPU6  NV18  NV18  NV18  NV18  NV18  NV18  X     NV18  SYS   84-95         7              N/A
GPU7  NV18  NV18  NV18  NV18  NV18  NV18  NV18  X     SYS   60-71         5              N/A
NIC0  SYS   SYS   SYS   SYS   SYS   PIX   SYS   SYS   X

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_bond_0

How you are installing vllm

We are running on Kubernetes (which works for test CUDA containers) using the vLLM 0.6.0 container; we also tried 0.5.4 and hit the same issue.

The full error inside the container is:

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 230, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 31, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 740, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 636, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 840, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 272, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 270, in __init__
    self.model_executor = executor_class(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 46, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 37, in _init_executor
    self.driver_worker = self._create_worker()
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 104, in _create_worker
    return create_worker(**self._get_create_worker_kwargs(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 23, in create_worker
    wrapper.init_worker(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 444, in init_worker
    self.worker = worker_class(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 99, in __init__
    self.model_runner: GPUModelRunnerBase = ModelRunnerClass(
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 842, in __init__
    self.attn_backend = get_attn_backend(
  File "/usr/local/lib/python3.10/dist-packages/vllm/attention/selector.py", line 108, in get_attn_backend
    backend = which_attn_to_use(num_heads, head_size, num_kv_heads,
  File "/usr/local/lib/python3.10/dist-packages/vllm/attention/selector.py", line 215, in which_attn_to_use
    if current_platform.get_device_capability()[0] < 8:
  File "/usr/local/lib/python3.10/dist-packages/vllm/platforms/interface.py", line 28, in get_device_capability
    raise NotImplementedError
NotImplementedError


youkaichao commented 1 week ago

that's strange. it looks like vllm cannot identify the platform you are using.

what's the output of this code?

from vllm.platforms import current_platform
print(current_platform)
joestein-ssc commented 1 week ago
>>> print(current_platform)
<vllm.platforms.cuda.CudaPlatform object at 0x7f2584bdf650>
>>> print(current_platform.get_device_capability())
(9, 0)
>>> print(current_platform.get_device_name())
NVIDIA H100 80GB HBM3
>>> print(current_platform.is_full_nvlink())
True
youkaichao commented 1 week ago

are you using the correct Python? if your current_platform is CudaPlatform, it should have a get_device_capability function.

joestein-ssc commented 1 week ago

I am running the vLLM Docker container on Kubernetes; I tried tags 0.5.3, 0.5.4, and 0.6.0.

The error occurs inside the container.

youkaichao commented 1 week ago

can you try to run that snippet within the container?
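
For example, something like this (a minimal check using plain torch, independent of vLLM's platform code; an illustrative snippet, not from the vLLM docs):

import torch

# If the driver is mounted into the container, these should succeed.
print(torch.cuda.is_available())              # expect True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))        # e.g. "NVIDIA H100 80GB HBM3"
    print(torch.cuda.get_device_capability(0))  # e.g. (9, 0)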

joestein-ssc commented 1 week ago
>>> print(current_platform.get_device_capability())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/vllm/platforms/interface.py", line 28, in get_device_capability
    raise NotImplementedError
NotImplementedError
>>> print(current_platform.get_device_name())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/vllm/platforms/interface.py", line 32, in get_device_name
    raise NotImplementedError
NotImplementedError
youkaichao commented 1 week ago

something is wrong with the docker container.

what's the command you use and the image you use?

joestein-ssc commented 1 week ago

we deploy through Kubernetes; the image is vllm/vllm-openai:v0.6.0 (we also tried 0.5.4 and 0.5.5)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: llama31405binstruct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llama31405binstruct
  template:
    metadata:
      labels:
        app: llama31405binstruct
    spec:
      containers:
      - env:
        - name: HF_TOKEN
          valueFrom:
            secretKeyRef:
              key: token
              name: hugging-face
        envFrom:
        - configMapRef:
            name: llama31405binstruct
        image: vllm/vllm-openai:v0.6.0
        command: ["python3"]
        args: ["-m", "vllm.entrypoints.openai.api_server", "--model", "meta-llama/Meta-Llama-3.1-405B-Instruct", "--max-model-len", "128000"]
        imagePullPolicy: Always
        livenessProbe:
          initialDelaySeconds: 30
          periodSeconds: 10
          tcpSocket:
            port: 8000
        name: app
        ports:
        - containerPort: 8000
          name: service-port
          protocol: TCP
        readinessProbe:
          initialDelaySeconds: 60
          periodSeconds: 30
          tcpSocket:
            port: 8000
      nodeSelector:
        usage: ai
      tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Exists
youkaichao commented 1 week ago

can you try to debug what's happening in the container?

the code should be in https://github.com/vllm-project/vllm/blob/main/vllm/platforms/__init__.py
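
for reference, the selection logic is roughly like the sketch below (a paraphrase of the idea, not the exact vLLM source): the CUDA platform is chosen only when the NVIDIA management library loads and reports devices; otherwise detection falls through to UnspecifiedPlatform, whose methods raise NotImplementedError.

def detect_platform() -> str:
    # Paraphrased sketch, not the exact vllm/platforms/__init__.py source.
    try:
        import pynvml  # shipped as the nvidia-ml-py package
        pynvml.nvmlInit()  # fails when the NVIDIA driver is not visible
        try:
            if pynvml.nvmlDeviceGetCount() > 0:
                return "cuda"
        finally:
            pynvml.nvmlShutdown()
    except Exception:
        pass  # no usable NVIDIA driver in this environment
    return "unspecified"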

zifeitong commented 1 week ago

I suggest you take a look at the full error log. That get_device_capability error message is somewhat confusing; I saw the same error message when I misconfigured CUDA.

youkaichao commented 1 week ago

That get_device_capability error message is somewhat confusing

if you know the root cause and how to raise meaningful information, we'd love to fix it.

zifeitong commented 1 week ago

That get_device_capability error message is somewhat confusing

if you know the root cause and how to raise meaningful information, we'd love to fix it.

Will do if I am able to reproduce it.

joestein-ssc commented 6 days ago

We figured this out, in case anyone else hits this issue: we were missing runtimeClassName: nvidia in the spec.template.spec section.
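
For anyone landing here, the addition sits at the pod spec level of the Deployment above (a minimal fragment; everything else in the manifest stays the same):

spec:
  template:
    spec:
      runtimeClassName: nvidia  # the missing line
      containers:
      # ... unchanged ...

Without the NVIDIA RuntimeClass, the pod runs under the default container runtime, so the driver libraries (e.g. libcuda.so.1) are never mounted into the container, which would explain why platform detection falls back to a platform whose queries raise NotImplementedError.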

jonashaag commented 4 days ago
ImportError('libcuda.so.1: cannot open shared object file: No such file or directory')
...
(Pdb) current_platform
<vllm.platforms.interface.UnspecifiedPlatform object at 0x71d267da64e0>
(Pdb) current_platform.get_device_capability()

The root cause seems to be missing CUDA, but this should be handled better (it should not crash).
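
A quick way to check for that condition from inside the container (an illustrative snippet, not part of vLLM):

import ctypes

try:
    # libcuda.so.1 is the driver library named in the ImportError above
    ctypes.CDLL("libcuda.so.1")
    print("libcuda.so.1 found")
except OSError as exc:
    print(f"driver library missing: {exc}")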

youkaichao commented 4 days ago

@jonashaag what do you mean by that? What is your expected behavior if cuda is missing?

jonashaag commented 3 days ago

A better error message

youkaichao commented 3 days ago

we can't do anything here, because sometimes we do use vllm on an UnspecifiedPlatform, e.g. when developing on a laptop, where we only want Python import statements to work.

it is the user's responsibility to make sure cuda is working.

tbh, I think the error is already clear. you want to run cuda, and it shows you UnspecifiedPlatform, which clearly means your cuda is missing.
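
concretely, the pattern in vllm/platforms/interface.py looks roughly like this (a simplified sketch, not the exact source): the base class lets imports succeed everywhere, and only concrete platforms implement the device queries.

from typing import Tuple

class Platform:
    def get_device_capability(self) -> Tuple[int, int]:
        raise NotImplementedError  # what UnspecifiedPlatform surfaces

class CudaPlatform(Platform):
    def get_device_capability(self) -> Tuple[int, int]:
        # The real code queries the driver; (9, 0) is what an H100 reports.
        return (9, 0)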