vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Misc]: Segmentation Fault in vLLM API Server during Model Initialization (NCCL Error: Unhandled System Error) #9156

Open shreyasp-07 opened 1 week ago

shreyasp-07 commented 1 week ago

Anything you want to discuss about vllm.

I'm hitting a segmentation fault while running the vLLM API server with Ray for distributed inference. The failure appears to happen during NCCL initialization: NCCL reports an "unhandled system error" during distributed setup, and the process then crashes with a segmentation fault (SIGSEGV).

Command I am running:

```bash
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3.1-70B-Instruct \
    --trust-remote-code \
    --device cuda \
    --tensor-parallel-size 4 \
    --pipeline-parallel-size 2 \
    --gpu-memory-utilization 0.9 \
    --swap-space 10 \
    --dtype bfloat16 \
    --api-key <REDACTED> \
    --enforce-eager \
    --max-model-len 110000 \
    --max-seq-len-to-capture 1100 \
    --disable-custom-all-reduce
```

Error Logs:

```text
INFO vLLM API server version 0.6.0
INFO args: Namespace(...)
INFO Started engine process with PID <REDACTED>
INFO Connecting to existing Ray cluster at address: <REDACTED>...
INFO vLLM is using nccl==2.20.5
ERROR NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details)
Fatal Python error: Segmentation fault
```
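Since the NCCL error itself suggests rerunning with `NCCL_DEBUG=INFO`, here is a minimal sketch of how I could enable that before relaunching the same command. `NCCL_DEBUG` and `NCCL_DEBUG_SUBSYS` are standard NCCL environment variables; the `INIT,NET` filter is just an assumption to keep the output focused, and because Ray workers are separate processes the variables would also need to be visible in their environment.

```bash
# Sketch: enable NCCL debug logging, as the error message suggests.
# NCCL_DEBUG and NCCL_DEBUG_SUBSYS are standard NCCL env vars;
# the INIT,NET filter is an assumption to focus on setup/transport output.
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=INIT,NET

# Then rerun the same api_server command shown above. With Ray, these
# variables should also be set in the environment the Ray cluster
# (and its workers) was started with, or they may not take effect there.
```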


DarkLight1337 commented 1 week ago

Can you run collect_env.py to show your GPU setup? cc @youkaichao
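For reference, the usual way to produce that report (assuming `collect_env.py` still lives at the root of the vllm repository on `main`):

```bash
# Fetch and run vLLM's environment collection script.
# The path on the main branch is an assumption; adjust if it has moved.
wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
python collect_env.py
```

Please paste the full output here.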