EmbeddedLLM/vllm-rocm
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
https://vllm.readthedocs.io
Apache License 2.0
V0.1.4 rocm update readme #7
Closed
kliuae closed this 8 months ago

kliuae commented 8 months ago
Added instructions on model downloading
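
The README instructions themselves aren't quoted in this PR thread. As a minimal sketch of what downloading a model for vLLM typically involves, assuming the weights live on the Hugging Face Hub (the repo ID `facebook/opt-125m` below is an illustrative placeholder, not taken from this PR):

```python
# Minimal sketch: fetch model weights from the Hugging Face Hub for use with vLLM.
# Requires `pip install huggingface_hub`; the repo ID below is a placeholder.
from huggingface_hub import snapshot_download

# Downloads the full model repository into the local HF cache and returns its path.
# vLLM can then be given either the repo ID or this local directory as the model.
local_dir = snapshot_download(repo_id="facebook/opt-125m")
print(f"Model files available at: {local_dir}")
```

Note that vLLM can also pull weights on first use when given a Hub repo ID directly; pre-fetching like this just makes the local cache location explicit.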