EmbeddedLLM/vllm
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
https://vllm.readthedocs.io
Apache License 2.0
Merge latest working rocm branch #17
Closed by tjtanaa 11 months ago

tjtanaa commented 11 months ago
Features

- Automatic code-path selection (see the sketch after this list)
- Support Llama 2
- Support SqueezeLLM on ROCm
- Add documentation (amd-installation.rst) describing how to set up the ROCm version of vLLM
- Run format.sh on all the code
- Add base amd.Dockerfile
- Merge with the latest branch as of 2023-11-29 (commit e19a64c7eff2085790dbf71851208fa2dd31ca4d)
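For illustration, here is a minimal sketch of how automatic code-path selection between CUDA and ROCm builds can work, not necessarily what this PR implements. It relies on PyTorch's torch.version.hip, which is a version string on ROCm builds and None on CUDA builds; the backend names below are hypothetical.

```python
# Minimal sketch of CUDA/ROCm code-path selection, assuming a PyTorch
# install. torch.version.hip is a string (e.g. "5.7.0") on ROCm builds
# of PyTorch and None on CUDA builds.
import torch


def is_hip() -> bool:
    """Return True when running on a ROCm (HIP) build of PyTorch."""
    return torch.version.hip is not None


def select_backend() -> str:
    # Backend names here are hypothetical, for illustration only.
    return "rocm-path" if is_hip() else "cuda-path"


if __name__ == "__main__":
    print(f"HIP build: {is_hip()}; selected backend: {select_backend()}")
```

Dispatching on the build-time metadata rather than probing devices at runtime keeps the check cheap and lets the same wheel select the right kernels on either platform.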