vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

systems with SELinux #2230

Open alitirmizi23 opened 8 months ago

alitirmizi23 commented 8 months ago

Hi.

Whenever I start api_server.py on an SELinux-enabled system, the server gets stuck at "Started a local Ray instance" and nothing else happens. Does SELinux prevent spawning multiple processes or workers? Is it incompatible with Ray?
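For anyone hitting this, a quick way to confirm whether SELinux is the blocker is to check the enforcement mode and look for recent AVC denials. The sketch below is not part of vLLM; it just reads the SELinux filesystem directly and shells out to `ausearch` (from the audit package, which usually needs root):

```python
import pathlib
import subprocess

def selinux_mode() -> str:
    """Return 'enforcing', 'permissive', or 'disabled' by reading
    /sys/fs/selinux/enforce, which exists on SELinux-enabled kernels."""
    enforce_file = pathlib.Path("/sys/fs/selinux/enforce")
    if not enforce_file.exists():
        return "disabled"
    return "enforcing" if enforce_file.read_text().strip() == "1" else "permissive"

if __name__ == "__main__":
    mode = selinux_mode()
    print(f"SELinux mode: {mode}")
    if mode == "enforcing":
        # Recent AVC denials often show which Ray/worker operation was
        # blocked (requires the audit package; run as root).
        subprocess.run(["ausearch", "-m", "AVC", "-ts", "recent"], check=False)
```

If temporarily switching to permissive mode with `sudo setenforce 0` lets the server get past "Started a local Ray instance", then an SELinux policy is denying something Ray needs (commonly shared-memory or socket access), and the logged denials can be converted into a local policy module with `audit2allow`.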

panxnan commented 5 months ago

I have the same issue too.