EmbeddedLLM / vllm

vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
https://vllm.readthedocs.io
Apache License 2.0

[Feature]: vllm 0.4.1 on ROCm #27

Closed · linchen111 closed this issue 1 week ago

linchen111 commented 6 months ago

🚀 The feature, motivation and pitch

Hello, I am using the vLLM 0.2.6 image, but when I tried to install a newer version of vLLM myself, such as 0.4.1, the build failed (I am running on an MI250X). Do you have any plans to update the images on Docker Hub?
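For reference, here is a minimal sketch of building a newer vLLM image for ROCm from source rather than waiting for an updated Docker Hub image. It assumes the checkout contains a `Dockerfile.rocm` (present in upstream vLLM around the 0.4.x releases) and that the host has a working ROCm driver stack; the `vllm-rocm` image tag is just a placeholder:

```bash
# Clone the vLLM source at the desired release (0.4.1 here).
git clone --branch v0.4.1 https://github.com/vllm-project/vllm.git
cd vllm

# Build the ROCm image from the repo's ROCm Dockerfile.
# BuildKit is required for the features this Dockerfile uses.
DOCKER_BUILDKIT=1 docker build -f Dockerfile.rocm -t vllm-rocm .

# Launch a container with the GPUs exposed.
# /dev/kfd and /dev/dri are the standard ROCm device nodes.
docker run -it --rm \
  --network=host \
  --ipc=host \
  --group-add=video \
  --device /dev/kfd --device /dev/dri \
  vllm-rocm
```

If the image builds but inference still fails on the MI250X, checking that the base ROCm version in `Dockerfile.rocm` matches the host driver is usually the first thing to verify.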

Alternatives

No response

Additional context

No response

github-actions[bot] commented 1 week ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!