vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

Can the VLLM framework support Huawei's 910B chip in the later stage? #3052

Open shikungo opened 4 months ago

shikungo commented 4 months ago

Can the VLLM framework support Huawei's 910B chip in the later stage?

WangJianQ-cmd commented 1 week ago
  Traceback (most recent call last):
    File "/root/wangjianqiang/miniforge-pypy3/envs/Qwen/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
      main()
    File "/root/wangjianqiang/miniforge-pypy3/envs/Qwen/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/root/wangjianqiang/miniforge-pypy3/envs/Qwen/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/tmp/pip-build-env-sz4wv293/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 327, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=[])
    File "/tmp/pip-build-env-sz4wv293/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 297, in _get_build_requires
      self.run_setup()
    File "/tmp/pip-build-env-sz4wv293/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 313, in run_setup
      exec(code, locals())
    File "<string>", line 415, in <module>
    File "<string>", line 341, in get_vllm_version
  RuntimeError: Unknown runtime environment
  [end of output]

It seems it isn't supported; when I tried to install it, it reported that CUDA was missing.
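The traceback above ends in vLLM's setup.py (`get_vllm_version`) raising `RuntimeError: Unknown runtime environment`: at build time vLLM probes for a backend it supports (such as CUDA, ROCm, or Neuron) and aborts when none is found, which is what happens on an Ascend 910B host with no CUDA toolkit. A minimal sketch of that failure mode (the function name and flags here are illustrative, not vLLM's actual code):

```python
# Illustrative sketch of the check behind "Unknown runtime environment".
# The probe flags are stand-ins for real environment detection; a host
# with only Huawei Ascend drivers falls through to the error branch.

def detect_runtime(has_cuda: bool, has_rocm: bool, has_neuron: bool) -> str:
    """Pick a backend tag, or fail the way the traceback above shows."""
    if has_cuda:
        return "cuda"
    if has_rocm:
        return "rocm"
    if has_neuron:
        return "neuron"
    # No supported accelerator detected: same RuntimeError as in the log.
    raise RuntimeError("Unknown runtime environment")
```

So the pip failure is expected on a 910B machine: adding Ascend support would require a new detection branch plus an Ascend backend, not just an installation fix.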