vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0
26.56k stars · 3.89k forks

[Bug]: Critical distributed executor bug #7791

Closed · clintg6 closed this issue 2 weeks ago

clintg6 commented 3 weeks ago

Your current environment

The output of `python collect_env.py` ```text PyTorch version: 2.5.0.dev20240726+rocm6.1 Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 6.1.40091-a8dbc0c19 OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: 17.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-6.1.2 24193 669db884972e769450470020c06a6f132a8a065b) CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-72-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-) Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 6.1.40093 MIOpen runtime version: 3.1.0 Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 52 bits physical, 57 bits virtual CPU(s): 256 On-line CPU(s) list: 0-255 Thread(s) per core: 2 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 2 Vendor ID: AuthenticAMD CPU family: 25 Model: 17 Model name: AMD EPYC 9554 64-Core Processor Stepping: 1 Frequency boost: enabled CPU MHz: 1500.000 CPU max MHz: 3762.9880 CPU min MHz: 1500.0000 BogoMIPS: 6190.45 Virtualization: AMD-V L1d cache: 4 MiB L1i cache: 4 MiB L2 cache: 128 MiB L3 cache: 512 MiB NUMA node0 CPU(s): 0-63,128-191 NUMA node1 CPU(s): 64-127,192-255 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d Versions of relevant libraries: [pip3] mypy==1.7.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] optree==0.9.1 [pip3] pytorch-triton-rocm==3.0.0+21eae954ef [pip3] pyzmq==26.2.0 [pip3] 
torch==2.5.0.dev20240726+rocm6.1 [pip3] torchvision==0.20.0.dev20240726+rocm6.1 [pip3] transformers==4.44.1 [pip3] triton==3.0.0 [conda] No relevant packages ROCM Version: 6.1.40093-bd86f1708 Neuron SDK Version: N/A vLLM Version: 0.5.4@d3b5b98021ca2030a0056121122a8965f2328fa2 vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled GPU Topology: Could not collect ```

🐛 Describe the bug

vLLM crashes when using the multiprocessing (mp) distributed executor backend but works fine when ray is specified instead. Since mp is the default backend for distributed workloads, the crash also occurs when distributed_executor_backend isn't specified.

from vllm import LLM
llm = LLM("facebook/opt-13b", tensor_parallel_size=4, distributed_executor_backend="mp")
output = llm.generate("San Francisco is a")

Error output is below:

(VllmWorkerProcess pid=17500) WARNING 08-22 18:13:07 logger.py:147] VLLM_TRACE_FUNCTION is enabled. It will record every function executed by Python. This will slow down the code. It is suggested to be used for debugging hang or crashes only.
(VllmWorkerProcess pid=17500) INFO 08-22 18:13:07 logger.py:151] Trace frame log is saved to /tmp/vllm/vllm-instance-23db0e96103a493c8d8ca99a7a192568/VLLM_TRACE_FUNCTION_for_process_17500_thread_139868197311680_at_2024-08-22_18:13:07.936704.log
(VllmWorkerProcess pid=17498) Process VllmWorkerProcess:
(VllmWorkerProcess pid=17498) Traceback (most recent call last):
(VllmWorkerProcess pid=17498)   File "/opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
(VllmWorkerProcess pid=17498)     self.run()
(VllmWorkerProcess pid=17498)   File "/opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/process.py", line 108, in run
(VllmWorkerProcess pid=17498)     self._target(*self._args, **self._kwargs)
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/executor/multiproc_worker_utils.py", line 210, in _run_worker_process
(VllmWorkerProcess pid=17498)     worker = worker_factory()
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/executor/gpu_executor.py", line 23, in create_worker
(VllmWorkerProcess pid=17498)     wrapper.init_worker(**kwargs)
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/worker/worker_base.py", line 444, in init_worker
(VllmWorkerProcess pid=17498)     self.worker = worker_class(*args, **kwargs)
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/worker/worker.py", line 99, in __init__
(VllmWorkerProcess pid=17498)     self.model_runner: GPUModelRunnerBase = ModelRunnerClass(
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/worker/model_runner.py", line 842, in __init__
(VllmWorkerProcess pid=17498)     self.attn_backend = get_attn_backend(
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/attention/selector.py", line 108, in get_attn_backend
(VllmWorkerProcess pid=17498)     backend = which_attn_to_use(num_heads, head_size, num_kv_heads,
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/attention/selector.py", line 206, in which_attn_to_use
(VllmWorkerProcess pid=17498)     if current_platform.get_device_capability()[0] != 9:
(VllmWorkerProcess pid=17498)   File "/vllm-workspace/vllm/platforms/rocm.py", line 15, in get_device_capability
(VllmWorkerProcess pid=17498)     return torch.cuda.get_device_capability(device_id)
(VllmWorkerProcess pid=17498)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 504, in get_device_capability
(VllmWorkerProcess pid=17498)     prop = get_device_properties(device)
(VllmWorkerProcess pid=17498)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 518, in get_device_properties
(VllmWorkerProcess pid=17498)     _lazy_init()  # will define _get_device_properties
(VllmWorkerProcess pid=17498)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 300, in _lazy_init
(VllmWorkerProcess pid=17498)     raise RuntimeError(
(VllmWorkerProcess pid=17498) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=17499) Process VllmWorkerProcess:
INFO 08-22 18:13:08 selector.py:121] Using ROCmFlashAttention backend.
(VllmWorkerProcess pid=17499) Traceback (most recent call last):
(VllmWorkerProcess pid=17499)   File "/opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
(VllmWorkerProcess pid=17499)     self.run()
(VllmWorkerProcess pid=17499)   File "/opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/process.py", line 108, in run
(VllmWorkerProcess pid=17499)     self._target(*self._args, **self._kwargs)
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/executor/multiproc_worker_utils.py", line 210, in _run_worker_process
(VllmWorkerProcess pid=17499)     worker = worker_factory()
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/executor/gpu_executor.py", line 23, in create_worker
(VllmWorkerProcess pid=17499)     wrapper.init_worker(**kwargs)
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/worker/worker_base.py", line 444, in init_worker
(VllmWorkerProcess pid=17499)     self.worker = worker_class(*args, **kwargs)
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/worker/worker.py", line 99, in __init__
(VllmWorkerProcess pid=17499)     self.model_runner: GPUModelRunnerBase = ModelRunnerClass(
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/worker/model_runner.py", line 842, in __init__
(VllmWorkerProcess pid=17499)     self.attn_backend = get_attn_backend(
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/attention/selector.py", line 108, in get_attn_backend
(VllmWorkerProcess pid=17499)     backend = which_attn_to_use(num_heads, head_size, num_kv_heads,
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/attention/selector.py", line 206, in which_attn_to_use
(VllmWorkerProcess pid=17499)     if current_platform.get_device_capability()[0] != 9:
(VllmWorkerProcess pid=17499)   File "/vllm-workspace/vllm/platforms/rocm.py", line 15, in get_device_capability
(VllmWorkerProcess pid=17499)     return torch.cuda.get_device_capability(device_id)
(VllmWorkerProcess pid=17499)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 504, in get_device_capability
(VllmWorkerProcess pid=17499)     prop = get_device_properties(device)
(VllmWorkerProcess pid=17499)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 518, in get_device_properties
(VllmWorkerProcess pid=17499)     _lazy_init()  # will define _get_device_properties
(VllmWorkerProcess pid=17499)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 300, in _lazy_init
(VllmWorkerProcess pid=17499)     raise RuntimeError(
(VllmWorkerProcess pid=17500) Process VllmWorkerProcess:
(VllmWorkerProcess pid=17499) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=17500) Traceback (most recent call last):
(VllmWorkerProcess pid=17500)   File "/opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
(VllmWorkerProcess pid=17500)     self.run()
(VllmWorkerProcess pid=17500)   File "/opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/process.py", line 108, in run
(VllmWorkerProcess pid=17500)     self._target(*self._args, **self._kwargs)
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/executor/multiproc_worker_utils.py", line 210, in _run_worker_process
(VllmWorkerProcess pid=17500)     worker = worker_factory()
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/executor/gpu_executor.py", line 23, in create_worker
(VllmWorkerProcess pid=17500)     wrapper.init_worker(**kwargs)
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/worker/worker_base.py", line 444, in init_worker
(VllmWorkerProcess pid=17500)     self.worker = worker_class(*args, **kwargs)
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/worker/worker.py", line 99, in __init__
(VllmWorkerProcess pid=17500)     self.model_runner: GPUModelRunnerBase = ModelRunnerClass(
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/worker/model_runner.py", line 842, in __init__
(VllmWorkerProcess pid=17500)     self.attn_backend = get_attn_backend(
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/attention/selector.py", line 108, in get_attn_backend
(VllmWorkerProcess pid=17500)     backend = which_attn_to_use(num_heads, head_size, num_kv_heads,
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/attention/selector.py", line 206, in which_attn_to_use
(VllmWorkerProcess pid=17500)     if current_platform.get_device_capability()[0] != 9:
(VllmWorkerProcess pid=17500)   File "/vllm-workspace/vllm/platforms/rocm.py", line 15, in get_device_capability
(VllmWorkerProcess pid=17500)     return torch.cuda.get_device_capability(device_id)
(VllmWorkerProcess pid=17500)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 504, in get_device_capability
(VllmWorkerProcess pid=17500)     prop = get_device_properties(device)
(VllmWorkerProcess pid=17500)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 518, in get_device_properties
(VllmWorkerProcess pid=17500)     _lazy_init()  # will define _get_device_properties
(VllmWorkerProcess pid=17500)   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 300, in _lazy_init
(VllmWorkerProcess pid=17500)     raise RuntimeError(
(VllmWorkerProcess pid=17500) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
ERROR 08-22 18:13:08 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 17498 died, exit code: 1
INFO 08-22 18:13:08 multiproc_worker_utils.py:123] Killing local vLLM worker processes
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File /vllm-workspace/vllm/executor/multiproc_worker_utils.py:169, in ProcessWorkerWrapper._enqueue_task(self, future, method, args, kwargs)
    168 try:
--> 169     self._task_queue.put((task_id, method, args, kwargs))
    170 except BaseException as e:

File /opt/conda/envs/py_3.9/lib/python3.9/multiprocessing/queues.py:88, in Queue.put(self, obj, block, timeout)
     87 if self._closed:
---> 88     raise ValueError(f"Queue {self!r} is closed")
     89 if not self._sem.acquire(block, timeout):

ValueError: Queue <multiprocessing.queues.Queue object at 0x7f337ad7c580> is closed

The above exception was the direct cause of the following exception:

ChildProcessError                         Traceback (most recent call last)
Cell In[2], line 2
      1 from vllm import LLM
----> 2 llm = LLM("facebook/opt-13b", tensor_parallel_size=4)
      3 output = llm.generate("San Franciso is a")

File /vllm-workspace/vllm/entrypoints/llm.py:175, in LLM.__init__(self, model, tokenizer, tokenizer_mode, skip_tokenizer_init, trust_remote_code, tensor_parallel_size, dtype, quantization, revision, tokenizer_revision, seed, gpu_memory_utilization, swap_space, cpu_offload_gb, enforce_eager, max_context_len_to_capture, max_seq_len_to_capture, disable_custom_all_reduce, **kwargs)
    152     raise TypeError(
    153         "There is no need to pass vision-related arguments anymore.")
    154 engine_args = EngineArgs(
    155     model=model,
    156     tokenizer=tokenizer,
   (...)
    173     **kwargs,
    174 )
--> 175 self.llm_engine = LLMEngine.from_engine_args(
    176     engine_args, usage_context=UsageContext.LLM_CLASS)
    177 self.request_counter = Counter()

File /vllm-workspace/vllm/engine/llm_engine.py:473, in LLMEngine.from_engine_args(cls, engine_args, usage_context, stat_loggers)
    471 executor_class = cls._get_executor_cls(engine_config)
    472 # Create the LLM engine.
--> 473 engine = cls(
    474     **engine_config.to_dict(),
    475     executor_class=executor_class,
    476     log_stats=not engine_args.disable_log_stats,
    477     usage_context=usage_context,
    478     stat_loggers=stat_loggers,
    479 )
    481 return engine

File /vllm-workspace/vllm/engine/llm_engine.py:270, in LLMEngine.__init__(self, model_config, cache_config, parallel_config, scheduler_config, device_config, load_config, lora_config, speculative_config, decoding_config, observability_config, prompt_adapter_config, executor_class, log_stats, usage_context, stat_loggers, input_registry)
    266 self.input_registry = input_registry
    267 self.input_processor = input_registry.create_input_processor(
    268     model_config)
--> 270 self.model_executor = executor_class(
    271     model_config=model_config,
    272     cache_config=cache_config,
    273     parallel_config=parallel_config,
    274     scheduler_config=scheduler_config,
    275     device_config=device_config,
    276     lora_config=lora_config,
    277     speculative_config=speculative_config,
    278     load_config=load_config,
    279     prompt_adapter_config=prompt_adapter_config,
    280     observability_config=self.observability_config,
    281 )
    283 if not self.model_config.embedding_mode:
    284     self._initialize_kv_caches()

File /vllm-workspace/vllm/executor/distributed_gpu_executor.py:25, in DistributedGPUExecutor.__init__(self, *args, **kwargs)
     21 # Updated by implementations that require additional args to be passed
     22 # to the _run_workers execute_model call
     23 self.extra_execute_model_run_workers_kwargs: Dict[str, Any] = {}
---> 25 super().__init__(*args, **kwargs)

File /vllm-workspace/vllm/executor/executor_base.py:46, in ExecutorBase.__init__(self, model_config, cache_config, parallel_config, scheduler_config, device_config, load_config, lora_config, speculative_config, prompt_adapter_config, observability_config)
     44 self.prompt_adapter_config = prompt_adapter_config
     45 self.observability_config = observability_config
---> 46 self._init_executor()

File /vllm-workspace/vllm/executor/multiproc_gpu_executor.py:137, in MultiprocessingGPUExecutor._init_executor(self)
    133     signal.signal(signal.SIGTERM, shutdown)
    135 self.driver_worker = self._create_worker(
    136     distributed_init_method=distributed_init_method)
--> 137 self._run_workers("init_device")
    138 self._run_workers("load_model",
    139                   max_concurrent_workers=self.parallel_config.
    140                   max_parallel_loading_workers)

File /vllm-workspace/vllm/executor/multiproc_gpu_executor.py:186, in MultiprocessingGPUExecutor._run_workers(self, method, async_run_tensor_parallel_workers_only, max_concurrent_workers, *args, **kwargs)
    180     return [
    181         worker.execute_method(method, *args, **kwargs)
    182         for worker in self.non_driver_workers
    183     ]
    185 # Start all remote workers first.
--> 186 worker_outputs = [
    187     worker.execute_method(method, *args, **kwargs)
    188     for worker in self.workers
    189 ]
    191 driver_worker_method = getattr(self.driver_worker, method)
    192 driver_worker_output = driver_worker_method(*args, **kwargs)

File /vllm-workspace/vllm/executor/multiproc_gpu_executor.py:187, in <listcomp>(.0)
    180     return [
    181         worker.execute_method(method, *args, **kwargs)
    182         for worker in self.non_driver_workers
    183     ]
    185 # Start all remote workers first.
    186 worker_outputs = [
--> 187     worker.execute_method(method, *args, **kwargs)
    188     for worker in self.workers
    189 ]
    191 driver_worker_method = getattr(self.driver_worker, method)
    192 driver_worker_output = driver_worker_method(*args, **kwargs)

File /vllm-workspace/vllm/executor/multiproc_worker_utils.py:176, in ProcessWorkerWrapper.execute_method(self, method, *args, **kwargs)
    174 def execute_method(self, method: str, *args, **kwargs):
    175     future: ResultFuture = ResultFuture()
--> 176     self._enqueue_task(future, method, args, kwargs)
    177     return future

File /vllm-workspace/vllm/executor/multiproc_worker_utils.py:172, in ProcessWorkerWrapper._enqueue_task(self, future, method, args, kwargs)
    170 except BaseException as e:
    171     del self.tasks[task_id]
--> 172     raise ChildProcessError("worker died") from e

ChildProcessError: worker died
youkaichao commented 3 weeks ago

Oh, this is a HIP version. Let me ping the AMD folks.

cc @hongxiayang

clintg6 commented 3 weeks ago

To reproduce:

git clone https://github.com/vllm-project/vllm.git
cd vllm
docker build -f Dockerfile.rocm -t vllm-rocm .
docker run -it --network=host --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device /dev/kfd --device /dev/dri vllm-rocm

Then run these Python commands inside the container:

from vllm import LLM
llm = LLM("facebook/opt-13b", tensor_parallel_size=4, distributed_executor_backend="mp")
output = llm.generate("San Francisco is a")

Running vllm serve also crashes if ray is not specified

 vllm serve facebook/opt-13b --tensor-parallel-size 4
KajetanA2 commented 3 weeks ago

I have the same problem (launching Llama 3.1-70B) on Ubuntu 20.04, ROCm 6.1, 4x MI250. Edit: after export VLLM_WORKER_MULTIPROC_METHOD=spawn, it's working now.
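
For the Python generate path, a minimal sketch of the same workaround (assuming VLLM_WORKER_MULTIPROC_METHOD is read before the worker processes are started, so it is set before the LLM is constructed) could look like this; the same environment variable also applies when launching vllm serve:

import os

# Workaround sketch: force vLLM's worker processes to use the 'spawn'
# start method instead of 'fork', since CUDA/HIP cannot be re-initialized
# in a forked subprocess.
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"

from vllm import LLM

llm = LLM("facebook/opt-13b", tensor_parallel_size=4,
          distributed_executor_backend="mp")
output = llm.generate("San Francisco is a")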

hongxiayang commented 2 weeks ago

I was not able to reproduce this on MI250X with my repo (the last commit hash is 85ad7e2d012edd87de9e84e93ed3204c80599695).

One thing I would like to mention: please use BuildKit to build the Docker image if you aren't already using it.

DOCKER_BUILDKIT=1 docker build -f Dockerfile.rocm -t vllm-rocm . 
hongxiayang commented 2 weeks ago

As the error shows (RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method), and as @KajetanA2 has suggested, can you (@clintg6) please try the spawn start method to see if that helps?

clintg6 commented 2 weeks ago

Thanks @KajetanA2, this resolved the issue on my end. Now I am just wondering why spawn isn't set as the default.

hongxiayang commented 2 weeks ago

Something changed between https://github.com/vllm-project/vllm/commit/85ad7e2d012edd87de9e84e93ed3204c80599695 and current main. It used to be OK without needing to set it to spawn.

hongxiayang commented 2 weeks ago

In my local env where it still works, the default value of VLLM_WORKER_MULTIPROC_METHOD was "spawn" (envs.py), while current main changed the default to "fork".

That is why it used to work without needing to set the environment variable in a ROCm environment.
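
For reference, a rough sketch of how such an environment-variable default is typically read in envs.py (the exact code in the repo may differ); what changed is the fallback value, from "spawn" to "fork":

import os

# Illustrative sketch only: the worker start method falls back to "fork"
# unless VLLM_WORKER_MULTIPROC_METHOD is set explicitly.
VLLM_WORKER_MULTIPROC_METHOD = os.getenv("VLLM_WORKER_MULTIPROC_METHOD", "fork")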

cc @youkaichao Looks like we should restore the previous behavior and set the default for ROCm back to "spawn" again, agree?
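
Purely as an illustration of that suggestion (not necessarily how the actual fix is implemented), the default could be made platform-dependent; is_hip is assumed here to be the existing ROCm/HIP check in vllm.utils:

import os

from vllm.utils import is_hip  # assumed helper that reports a ROCm/HIP build

# Illustrative sketch only: default to "spawn" on ROCm and "fork" elsewhere,
# while still honoring an explicit user override.
_default = "spawn" if is_hip() else "fork"
VLLM_WORKER_MULTIPROC_METHOD = os.getenv("VLLM_WORKER_MULTIPROC_METHOD", _default)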

youkaichao commented 2 weeks ago

please let me know if https://github.com/vllm-project/vllm/pull/7926 is enough to fix this.

hongxiayang commented 2 weeks ago

@clintg6 Can you help verify @youkaichao's PR above, with both the generate and serve use cases?