...
copying vllm/model_executor/layers/fused_moe/configs/E=8,N=8192,device_name=NVIDIA_H100_80GB_HBM3,dtype=fp8_w8a8.json -> build/lib.linux-x86_64-cpython-312/vllm/model_executor/layers/fused_moe/configs
running build_ext
-- The CXX compiler identification is GNU 14.2.1
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build type: RelWithDebInfo
-- Target device: cpu
-- Found Python: /home/fengyu/projects/vllm/venv/bin/python (found version "3.12.6") found components: Interpreter Development.Module Development.SABIModule
-- Found python matching: /home/fengyu/projects/vllm/venv/bin/python.
CMake Warning at venv/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
  static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
  venv/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:120 (append_torchlib_if_found)
  CMakeLists.txt:84 (find_package)
-- Found Torch: /home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/torch/lib/libtorch.so
-- Enabling core extension.
CMake Warning at cmake/cpu_extension.cmake:73 (message):
  vLLM CPU backend using AVX2 ISA
Call Stack (most recent call first):
  CMakeLists.txt:110 (include)
-- CPU extension compile flags: -fopenmp;-DVLLM_CPU_EXTENSION;-mavx2
-- Enabling C extension.
CMake Error at cmake/cpu_extension.cmake:123 (add_dependencies):
  Cannot add target-level dependencies to non-existent target "default".

  The add_dependencies works for top-level logical targets created by the
  add_executable, add_library, or add_custom_target commands.  If you want to
  add file-level dependencies see the DEPENDS option of the add_custom_target
  and add_custom_command commands.
Call Stack (most recent call first):
  CMakeLists.txt:110 (include)
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
  File "/home/fengyu/projects/vllm/setup.py", line 520, in <module>
    setup(
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/__init__.py", line 117, in setup
    return distutils.core.setup(**attrs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 183, in setup
    return run_commands(dist)
           ^^^^^^^^^^^^^^^^^^
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 199, in run_commands
    dist.run_commands()
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 954, in run_commands
    self.run_command(cmd)
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/dist.py", line 950, in run_command
    super().run_command(command)
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
    cmd_obj.run()
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/dist.py", line 950, in run_command
    super().run_command(command)
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 973, in run_command
    cmd_obj.run()
  File "/home/fengyu/projects/vllm/setup.py", line 263, in run
    super().run()
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/command/build_ext.py", line 98, in run
    _build_ext.run(self)
  File "/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
    self.build_extensions()
  File "/home/fengyu/projects/vllm/setup.py", line 225, in build_extensions
    self.configure(ext)
  File "/home/fengyu/projects/vllm/setup.py", line 205, in configure
    subprocess.check_call(
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/home/fengyu/projects/vllm', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DVLLM_TARGET_DEVICE=cpu', '-DCMAKE_C_COMPILER_LAUNCHER=ccache', '-DCMAKE_CXX_COMPILER_LAUNCHER=ccache', '-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache', '-DCMAKE_HIP_COMPILER_LAUNCHER=ccache', '-DVLLM_PYTHON_EXECUTABLE=/home/fengyu/projects/vllm/venv/bin/python', '-DVLLM_PYTHON_PATH=/home/fengyu/projects/vllm:/usr/lib/python312.zip:/usr/lib/python3.12:/usr/lib/python3.12/lib-dynload:/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages:/home/fengyu/projects/vllm/venv/lib/python3.12/site-packages/setuptools/_vendor']' returned non-zero exit status 1.
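For what it's worth, the CalledProcessError at the bottom is not a separate Python bug: subprocess.check_call raises it for any child process that exits non-zero, so the long setuptools traceback only wraps CMake's exit status 1. The real failure is the add_dependencies error earlier in the log. A minimal sketch of that wrapping behaviour:

```python
import subprocess
import sys

# check_call raises CalledProcessError whenever the child exits non-zero,
# carrying the child's return code. Here we spawn a Python child that
# exits with status 1, mimicking the failed cmake configure step.
try:
    subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(1)"])
except subprocess.CalledProcessError as exc:
    print(exc.returncode)  # prints 1
```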
The same error occurs when building the Docker image.
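If it helps triage: the configure dies because cmake/cpu_extension.cmake:123 calls add_dependencies on a target named "default" that is apparently never created when VLLM_TARGET_DEVICE=cpu. A possible guard, purely as a sketch (the dependent target name "_C" and the if(TARGET ...) fix are my assumptions, not vLLM's actual code):

```cmake
# Hypothetical patch sketch for cmake/cpu_extension.cmake:123: only attach
# the dependency when the "default" aggregate target exists in this
# configure. The target name "_C" is a guess, used here for illustration.
if(TARGET default)
  add_dependencies(default _C)
endif()
```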
Before submitting a new issue...
[X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
Your current environment
How you are installing vllm
I am following the CPU installation instructions: https://docs.vllm.ai/en/latest/getting_started/cpu-installation.html
Error: see the build log above.