vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: tensor parallel processes not working in vllm_cpu #8756

Closed · park12sj closed this 4 hours ago

park12sj commented 5 hours ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
INFO 09-24 11:00:05 importing.py:10] Triton not installed; certain GPU-related functions will not be available.
PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 12.3.0
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.17

Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No devices found.
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.0
/usr/lib64/libcudnn_adv_infer.so.8.9.0
/usr/lib64/libcudnn_adv_train.so.8.9.0
/usr/lib64/libcudnn_cnn_infer.so.8.9.0
/usr/lib64/libcudnn_cnn_train.so.8.9.0
/usr/lib64/libcudnn_ops_infer.so.8.9.0
/usr/lib64/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                72
On-line CPU(s) list:   0-71
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz
Stepping:              7
CPU MHz:               2700.001
BogoMIPS:              4400.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              25344K
NUMA node0 CPU(s):     0-17,36-53
NUMA node1 CPU(s):     18-35,54-71
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0+cpu
[pip3] torchvision==0.19.0+cpu
[pip3] transformers==4.44.2
[conda] numpy          1.26.4      pypi_0  pypi
[conda] pyzmq          26.2.0      pypi_0  pypi
[conda] torch          2.4.0+cpu   pypi_0  pypi
[conda] torchvision    0.19.0+cpu  pypi_0  pypi
[conda] transformers   4.44.2      pypi_0  pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.post2@9e5ec35b1f8239453b1aaab28e7a02307db4ab1f
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
```

The environment was set up by building from source, following https://docs.vllm.ai/en/latest/getting_started/cpu-installation.html#build-from-source

Model Input Dumps

No response

🐛 Describe the bug

According to the documentation below, tensor parallelism is supported on CPU, but it does not work for me: https://docs.vllm.ai/en/latest/getting_started/cpu-installation.html#related-runtime-environment-variables

For example, I set:

export VLLM_CPU_OMP_THREADS_BIND="0-17|18-35"
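Each `|`-separated group in this variable binds the OpenMP threads of one tensor-parallel rank, so with two groups I expected two worker processes, one per NUMA node. As a side note, a quick way to check which cores belong to which node is plain util-linux `lscpu` (not vLLM-specific):

```bash
# List CPU id -> NUMA node -> physical core, dropping the '#' header
# comments, so that each group in VLLM_CPU_OMP_THREADS_BIND can be
# kept on a single NUMA node (avoids cross-node memory traffic).
lscpu --parse=CPU,NODE,CORE | grep -v '^#'
```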

But only one worker process starts, and its OpenMP threads are bound only to the first group (cores 0-17):

INFO 09-24 10:50:13 selector.py:183] Cannot use _Backend.FLASH_ATTN backend on CPU.
INFO 09-24 10:50:13 selector.py:128] Using Torch SDPA backend.
INFO 09-24 10:50:13 cpu_worker.py:211] OMP threads binding of Process 1563:
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1563, core 0
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1754, core 1
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1755, core 2
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1756, core 3
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1757, core 4
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1758, core 5
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1759, core 6
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1760, core 7
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1761, core 8
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1762, core 9
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1763, core 10
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1764, core 11
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1765, core 12
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1766, core 13
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1767, core 14
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1768, core 15
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1769, core 16
INFO 09-24 10:50:13 cpu_worker.py:211]  OMP tid: 1770, core 17
INFO 09-24 10:50:13 cpu_worker.py:211] 
INFO 09-24 10:50:13 selector.py:183] Cannot use _Backend.FLASH_ATTN backend on CPU.
INFO 09-24 10:50:13 selector.py:128] Using Torch SDPA backend.


bigPYJ1151 commented 5 hours ago

Did you set -tp=2?

park12sj commented 4 hours ago

I missed the arg 😅 Thank you!
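For future readers, a minimal sketch of the working combination, assuming the OpenAI-compatible `vllm serve` entrypoint (the model name is illustrative):

```bash
# Bind one |-separated core group to each tensor-parallel rank:
# rank 0 -> cores 0-17 (NUMA node 0), rank 1 -> cores 18-35 (NUMA node 1).
export VLLM_CPU_OMP_THREADS_BIND="0-17|18-35"

# The binding variable alone does not spawn extra ranks; the engine
# must also be told to shard the model across two workers.
# The model name below is illustrative.
vllm serve facebook/opt-125m --tensor-parallel-size 2   # -tp 2 is the shorthand
```

With `-tp=2` set, the engine spawns one worker per rank, and each worker picks up its own group from `VLLM_CPU_OMP_THREADS_BIND`, matching the per-rank binding log shown above.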