vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Installation]: vllm install error in jetson agx orin #7575

Open · FanZhang91 opened this issue 2 months ago

FanZhang91 commented 2 months ago

Your current environment

OS: Ubuntu 22.04 jammy
Kernel: aarch64 Linux 5.15.136-tegra
Uptime: 1d 23h 22m
Shell: bash 5.1.16
Disk: 137G / 1.8T (8%)
CPU: ARM Cortex-A78AE @ 12x 2.2016GHz
GPU: Orin (nvgpu)
RAM: 3162MiB / 62841MiB
Nova Config: nova-devkit
Nova Version: 1.3.0
JetPack: 6.0+b106
torch: 2.3.0 (PyTorch aarch64 wheels)

How you are installing vllm

I want to install vLLM on a Jetson AGX Orin. I tried `pip install vllm` and also building from source with `pip install -e .`, but both methods fail with the error below.
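For reference, a sketch of the two install paths attempted (assuming the second runs from a source checkout of vllm-project/vllm):

```sh
# From PyPI
pip install vllm

# From a source checkout of vllm-project/vllm
pip install -e .
```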

```
############################# error info ################################
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple/
Collecting vllm
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/1c/e2/b437c1d5a39814acfc706e714eceee7fe80455a9666a13b7582205b30345/vllm-0.5.4.tar.gz (958 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [21 lines of output]
      /tmp/pip-build-env-94kz6585/overlay/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py:258: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:84.)
        cpu = _conversion_method_template(device=torch.device("cpu"))
      fatal: not a git repository (or any of the parent directories): .git
      <string>:56: RuntimeWarning: Failed to get commit hash: Command '['git', 'rev-parse', 'HEAD']' returned non-zero exit status 128.
      Traceback (most recent call last):
        File "/home/xxx/miniconda3/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/xxx/miniconda3/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/home/xxx/miniconda3/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/tmp/pip-build-env-94kz6585/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 327, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=[])
        File "/tmp/pip-build-env-94kz6585/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 297, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-94kz6585/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 313, in run_setup
          exec(code, locals())
        File "<string>", line 458, in <module>
        File "<string>", line 379, in get_vllm_version
      RuntimeError: Unknown runtime environment
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.

[notice] A new release of pip is available: 24.1.2 -> 24.2
[notice] To update, run: pip install --upgrade pip
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
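The build dies in `get_vllm_version` inside vllm's `setup.py`. Below is a minimal, self-contained sketch of what that check amounts to (simplified; the helper names and version strings are illustrative, not vllm's exact code):

```python
# Illustrative sketch of vllm's build-time platform detection (not the
# actual setup.py). pip's isolated build environment installs a plain
# aarch64 torch from PyPI, which is CPU-only, so neither check below
# passes and the build aborts with "Unknown runtime environment".
import torch

def _is_cuda() -> bool:
    # CUDA builds of torch expose a CUDA version string.
    return torch.version.cuda is not None

def _is_hip() -> bool:
    # ROCm builds of torch expose a HIP version string.
    return getattr(torch.version, "hip", None) is not None

def get_vllm_version() -> str:
    if _is_cuda():
        return "0.5.4"        # CUDA wheel
    if _is_hip():
        return "0.5.4+rocm"   # ROCm wheel
    raise RuntimeError("Unknown runtime environment")

if __name__ == "__main__":
    print(get_vllm_version())
```

The earlier `Failed to initialize NumPy` warning likely points at the same root cause: the isolated build environment, not your installed Jetson torch, is what the build actually sees.
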
youkaichao commented 2 months ago

This might be because you are on an arm64 platform, which is not directly supported yet.

see also https://github.com/vllm-project/vllm/issues/2021

FanZhang91 commented 2 months ago

> This might be because you are on an arm64 platform, which is not directly supported yet.
>
> See also #2021

Is there a plan to support vLLM on the ARM architecture, or is there a workaround in the meantime?
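Not an official answer, but one commonly suggested workaround for this class of failure is to build against the CUDA-enabled Jetson torch wheel you already have by disabling pip's build isolation. A sketch (unverified on Jetson; the exact package list is an assumption about the build-time requirements):

```sh
# Build-time deps must already be present in the environment, since
# --no-build-isolation skips pip's isolated build environment.
pip install numpy cmake ninja packaging setuptools wheel

# Build vllm from source against the existing (CUDA-enabled) torch,
# instead of the CPU-only torch pip would otherwise fetch.
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e . --no-build-isolation
```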

shahizat commented 2 months ago

Hi, I'm also interested in running vLLM on the NVIDIA AGX Orin dev kit.

conroy-cheers commented 1 week ago

Looking to add/restore Jetson support in #9735