Open · Jimmy-Lu opened 3 months ago
I forwarded the issue to the Anyscale folks (the company behind Ray). Meanwhile, you can try the multiprocessing backend: https://docs.vllm.ai/en/latest/serving/distributed_serving.html
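As a sketch of that suggestion: vLLM can run tensor parallelism with its multiprocessing executor instead of Ray by passing `distributed_executor_backend="mp"` to `LLM`. The model name and prompt below are placeholders, not from the original report; this is an engine-configuration sketch, not the reporter's script.

```python
from vllm import LLM, SamplingParams

# Configure tp=2 with the multiprocessing backend, bypassing Ray entirely.
# "facebook/opt-125m" is a placeholder model chosen only for illustration.
llm = LLM(
    model="facebook/opt-125m",
    tensor_parallel_size=2,
    distributed_executor_backend="mp",  # "mp" = multiprocessing, "ray" = Ray
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
for out in outputs:
    print(out.outputs[0].text)
```

If the same script works with `"mp"` but fails with `"ray"`, that would further point to the Ray deployment rather than vLLM itself.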
Can you share a bit about how to reproduce this?
@Jimmy-Lu you can follow the issue template to report a detailed environment configuration, so that they can help more.
the error itself doesn't seem to be related to vllm.

- how did you deploy ray?
- is it consistent, or one time?
- is just using `ray.init()` in that cluster working?

> is just using `ray.init()` in that cluster working?

I ran the offline_inference script above and Ray auto-deployed. I also tried `ray start`. The error is consistent. `ray.init()` works.
I built vLLM from source and then ran the script above. After the error, I tried different Ray versions, but none of them worked.
do you have some time next week? I'd love to pair program to troubleshoot the issue
> do you have some time next week? I'd love to pair program to troubleshoot the issue

Yes, thank you!
Your current environment
The Ray version is 2.10.0 and the vLLM version is 0.5.0+cu117.
🐛 Describe the bug
Using tp=2 with the code listed below:
`ray start` does not work: