vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Installation]: Any plans on providing vLLM pre-compiled for ROCm? #4017

Open · satyamk7054 opened this issue 7 months ago

satyamk7054 commented 7 months ago

Hi, are there any plans to provide vLLM releases that are pre-compiled for AMD GPUs?

How you are installing vllm

https://docs.vllm.ai/en/latest/getting_started/amd-installation.html
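Concretely, that means building from source. A rough sketch of what the linked docs describe, assuming ROCm and a ROCm build of PyTorch are already installed (the requirements file name may differ between releases):

```bash
# Build vLLM from source against ROCm (sketch; exact file names vary by release)
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements-rocm.txt  # ROCm-specific Python dependencies
python setup.py develop               # compiles the HIP/ROCm kernels locally
```

The compile step is the slow part, which is why a pre-built artifact would help.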

WoosukKwon commented 7 months ago

Hi @satyamk7054, we are working on releasing an official vLLM docker image for ROCm. Please stay tuned, and use our Dockerfile (Dockerfile.rocm) to build your own container for now.
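A minimal sketch of that workflow (the image tag is a placeholder, and the exact passthrough flags depend on your environment):

```bash
# Build the ROCm image from the repo root (tag name is arbitrary)
docker build -f Dockerfile.rocm -t vllm-rocm .

# Run it with the AMD GPU devices passed through; these are the
# usual ROCm passthrough flags, adjust for your setup
docker run -it \
    --device /dev/kfd --device /dev/dri \
    --group-add video \
    --ipc=host \
    vllm-rocm
```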

satyamk7054 commented 7 months ago

Hi @WoosukKwon, thank you for your response.

Are there any plans to provide a pre-compiled artifact as well?

linchen111 commented 4 months ago

> Hi @satyamk7054, we are working on releasing an official vLLM docker image for ROCm. Please stay tuned, and use our Dockerfile (Dockerfile.rocm) to build your own container for now.

+1

github-actions[bot] commented 3 weeks ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!