EmbeddedLLM/vllm
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
https://vllm.readthedocs.io
Apache License 2.0
Update README #6
Closed
kliuae closed this 1 year ago
kliuae commented 1 year ago
Add instructions to pull from the pre-built Docker image.
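
For reference, the kind of instructions being added generally look like the sketch below. The image name `embeddedllm/vllm-rocm` and its tag are assumptions for illustration only; the actual image name and tag are whatever the updated README specifies, which this issue does not state.

```sh
# Pull the pre-built image from the registry.
# NOTE: "embeddedllm/vllm-rocm:latest" is a hypothetical name;
# substitute the image name given in the project's README.
docker pull embeddedllm/vllm-rocm:latest

# Start an interactive container from the pulled image.
docker run -it --rm embeddedllm/vllm-rocm:latest
```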