Closed: clintg6 closed this issue 2 weeks ago.
Oh, this is a HIP version. Let me ping the AMD folks.
cc @hongxiayang
To reproduce:

```shell
git clone https://github.com/vllm-project/vllm.git
cd vllm
docker build -f Dockerfile.rocm -t vllm-rocm .
docker run -it --network=host --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device /dev/kfd --device /dev/dri vllm-rocm
```
Then, in the terminal, run these Python commands:

```python
from vllm import LLM

llm = LLM("facebook/opt-13b", tensor_parallel_size=4, distributed_executor_backend="mp")
output = llm.generate("San Francisco is a")
```
Running `vllm serve` also crashes if Ray is not specified:

```shell
vllm serve facebook/opt-13b --tensor-parallel-size 4
```
I have the same problem (launching Llama 3.1 70B): Ubuntu 20.04, ROCm 6.1, 4x MI250. Edit: after `export VLLM_WORKER_MULTIPROC_METHOD=spawn` it's working now.
I was not able to reproduce this on an MI250X with my repo (the last commit hash is 85ad7e2d012edd87de9e84e93ed3204c80599695).
One thing I would like to mention: please use BuildKit to build the Docker image, if you haven't already:

```shell
DOCKER_BUILDKIT=1 docker build -f Dockerfile.rocm -t vllm-rocm .
```
Given the error shown:

```text
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```

can you (@clintg6) please try the spawn start method, as @KajetanA2 suggested, to see if that helps?
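For reference, a minimal sketch of that workaround applied from Python rather than the shell. The environment-variable name comes from this thread; the vLLM calls are commented out because they require a ROCm/CUDA machine, so only the environment setup actually runs here:

```python
import os

# Force vLLM's multiprocessing workers to use the 'spawn' start method,
# which avoids the "Cannot re-initialize CUDA in forked subprocess" error.
# This must be set before vLLM initializes any workers.
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"

# from vllm import LLM  # requires a ROCm/CUDA environment
# llm = LLM("facebook/opt-13b", tensor_parallel_size=4,
#           distributed_executor_backend="mp")
# output = llm.generate("San Francisco is a")

print(os.environ["VLLM_WORKER_MULTIPROC_METHOD"])
```

The same effect can be had by exporting the variable in the shell before running `vllm serve`.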
Thanks @KajetanA2, this resolved the issue on my end. Now I'm just wondering why spawn isn't set as the default.
Something changed between https://github.com/vllm-project/vllm/commit/85ad7e2d012edd87de9e84e93ed3204c80599695 and current main; it used to work without needing to set spawn.
In my local environment, where it still works, the default value of `VLLM_WORKER_MULTIPROC_METHOD` (in `envs.py`) was "spawn", while current main changed the default to "fork". That is why it used to work in a ROCm environment without setting the environment variable.
cc @youkaichao Looks like we should restore the previous behavior and set the default back to "spawn" for ROCm. Agree?
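A hypothetical sketch of what such a platform-dependent default could look like. The function name and the `is_rocm` flag are illustrative, not vLLM's actual `envs.py` code; the idea is simply "spawn" on ROCm, "fork" elsewhere, with the environment variable still taking precedence:

```python
import os

def default_worker_multiproc_method(is_rocm: bool) -> str:
    """Illustrative sketch (not vLLM's real code): choose 'spawn' on ROCm
    and 'fork' elsewhere, overridable via VLLM_WORKER_MULTIPROC_METHOD."""
    default = "spawn" if is_rocm else "fork"
    return os.environ.get("VLLM_WORKER_MULTIPROC_METHOD", default)
```

With this shape, ROCm users get the safe default back while other platforms keep the current "fork" behavior.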
Please let me know if https://github.com/vllm-project/vllm/pull/7926 is enough to fix this.
@clintg6 Can you help verify @youkaichao's PR above, with both the generate and serve use cases?
Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.5.0.dev20240726+rocm6.1
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.1.40091-a8dbc0c19

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 17.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-6.1.2 24193 669db884972e769450470020c06a6f132a8a065b)
CMake version: version 3.26.4
Libc version: glibc-2.31

Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.1.40093
MIOpen runtime version: 3.1.0
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 17
Model name: AMD EPYC 9554 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 3762.9880
CPU min MHz: 1500.0000
BogoMIPS: 6190.45
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 128 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d

Versions of relevant libraries:
[pip3] mypy==1.7.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.9.1
[pip3] pytorch-triton-rocm==3.0.0+21eae954ef
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.0.dev20240726+rocm6.1
[pip3] torchvision==0.20.0.dev20240726+rocm6.1
[pip3] transformers==4.44.1
[pip3] triton==3.0.0
[conda] No relevant packages
ROCM Version: 6.1.40093-bd86f1708
Neuron SDK Version: N/A
vLLM Version: 0.5.4@d3b5b98021ca2030a0056121122a8965f2328fa2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology: Could not collect
```

🐛 Describe the bug
vLLM crashes when using the multiprocessing distributed executor but works fine when Ray is specified instead. By default vLLM uses mp for distributed workloads, so it still crashes even if `distributed_executor_backend` isn't specified.
Error output is below: