vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

systems with SELinux #2230

Open alitirmizi23 opened 11 months ago

alitirmizi23 commented 11 months ago

Hi.

Whenever I start api_server.py on an SELinux-enabled system, the server gets stuck at "Started a local Ray instance" and nothing else happens. Does SELinux prohibit spawning multiple processes or workers? Is it incompatible with Ray?
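For anyone debugging this: a quick way to check whether SELinux enforcement is the likely culprit is to confirm the current mode and then look for AVC denials in the audit log. Below is a minimal diagnostic sketch, not part of vLLM or Ray, that assumes the standard `getenforce` tool from the SELinux userland is installed:

```python
import shutil
import subprocess


def selinux_mode() -> str:
    """Return the current SELinux mode ('Enforcing', 'Permissive',
    'Disabled'), or 'Unavailable' if getenforce is not installed."""
    if shutil.which("getenforce") is None:
        return "Unavailable"
    result = subprocess.run(
        ["getenforce"], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()


if __name__ == "__main__":
    mode = selinux_mode()
    print(f"SELinux mode: {mode}")
    if mode == "Enforcing":
        # If the server hangs after "Started a local Ray instance",
        # recent AVC denials in the audit log would point to SELinux
        # blocking Ray's worker processes. Inspect them with:
        #   sudo ausearch -m AVC -ts recent
        print("Enforcing mode: check the audit log for AVC denials.")
```

If denials show up, temporarily switching to permissive mode (`sudo setenforce 0`) and retrying the server can confirm whether enforcement is the cause before writing a proper SELinux policy. This is a troubleshooting step, not a confirmed fix for this issue.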

panxnan commented 8 months ago

I have the same issue.

github-actions[bot] commented 2 weeks ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!