OpenBMB / MiniCPM

MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
Apache License 2.0

[Bug]: Error installing vLLM under WSL #232

Closed: h122skite closed this issue 4 weeks ago

h122skite commented 1 month ago

Is there an existing issue for this?

Describe the bug

Installing vLLM according to the tutorial fails with an error.

To Reproduce

(ragenv) jayho@AINB03:~/project$ pip install git+https://github.com/OpenBMB/vllm.git@minicpm3
Collecting git+https://github.com/OpenBMB/vllm.git@minicpm3
  Cloning https://github.com/OpenBMB/vllm.git (to revision minicpm3) to /tmp/pip-req-build-ql0y75ov
  Running command git clone -q https://github.com/OpenBMB/vllm.git /tmp/pip-req-build-ql0y75ov
  Running command git checkout -b minicpm3 --track origin/minicpm3
  Switched to a new branch 'minicpm3'
  Branch 'minicpm3' set up to track remote branch 'minicpm3' from 'origin'.
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  ERROR: Command errored out with exit status 1:
   command: /home/jayho/kivy_venv/bin/python3 /tmp/tmpqi7366_e get_requires_for_build_wheel /tmp/tmpgwhdya9a
       cwd: /tmp/pip-req-build-ql0y75ov
  Complete output (19 lines):
  /tmp/pip-build-env-ixlwmvqb/overlay/lib/python3.8/site-packages/torch/_subclasses/functional_tensor.py:258: UserWarning: Failed to initialize NumPy: numpy.core.multiarray failed to import (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
    cpu = _conversion_method_template(device=torch.device("cpu"))
  Traceback (most recent call last):
    File "/tmp/tmpqi7366_e", line 280, in <module>
      main()
    File "/tmp/tmpqi7366_e", line 263, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/tmp/tmpqi7366_e", line 114, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/tmp/pip-build-env-ixlwmvqb/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=[])
    File "/tmp/pip-build-env-ixlwmvqb/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
      self.run_setup()
    File "/tmp/pip-build-env-ixlwmvqb/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 318, in run_setup
      exec(code, locals())
    File "<string>", line 458, in <module>
    File "<string>", line 354, in get_vllm_version
    File "<string>", line 324, in get_nvcc_cuda_version
  AssertionError: CUDA_HOME is not set
  ----------------------------------------
ERROR: Command errored out with exit status 1: /home/jayho/kivy_venv/bin/python3 /tmp/tmpqi7366_e get_requires_for_build_wheel /tmp/tmpgwhdya9a Check the logs for full command output.
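
The traceback shows the build failing inside vLLM's setup.py, which calls nvcc to detect the CUDA version; with no CUDA toolkit installed in the WSL environment, CUDA_HOME is unset and the assertion fires. A quick way to check whether a toolkit is visible before retrying (standard shell commands, not part of the original report):

    echo "$CUDA_HOME"                # typically /usr/local/cuda when a toolkit is installed
    which nvcc && nvcc --version     # the vLLM source build needs nvcc on PATH or under CUDA_HOME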

Expected behavior

The build succeeds.

Screenshots

No response

Environment

- OS: Ubuntu 20.04
- CUDA: no CUDA
- Device: WSL

Additional context

No response

LDLINGLINGLING commented 1 month ago

Hi, this tutorial covers what to do if vLLM fails to install: https://modelbest.feishu.cn/wiki/LrdMwKKt3iZgoYkQlPRcvY1PnXc#share-OU2AdWJSao2zKLx5g3ocoBBGnVF
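
For reference, the usual fix on WSL is to install the NVIDIA CUDA toolkit inside the WSL distribution and point the build at it before rerunning pip. A minimal sketch, assuming the toolkit sits at the default /usr/local/cuda path (adjust to your actual install location):

    export CUDA_HOME=/usr/local/cuda                          # directory containing bin/nvcc
    export PATH="$CUDA_HOME/bin:$PATH"                        # make nvcc visible to setup.py
    export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"
    pip install git+https://github.com/OpenBMB/vllm.git@minicpm3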