EmbeddedLLM / vllm

vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
https://vllm.readthedocs.io
Apache License 2.0

Updated Dockerfile #5

Closed: kliuae closed this pull request 1 year ago

kliuae commented 1 year ago

Updated the Dockerfile to install dependencies and vLLM automatically, and added a script to run throughput benchmarking.
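
The diff itself is not quoted in this thread, so the following is only a minimal sketch of what a Dockerfile that installs dependencies and vLLM automatically, then runs a throughput benchmark, might look like. The base image, paths, model name, and every flag shown are assumptions for illustration; the benchmark command refers to vLLM's stock `benchmarks/benchmark_throughput.py`, and the script this PR actually adds may differ.

```dockerfile
# Sketch only: the base image, paths, and benchmark flags below are
# assumptions for illustration, not the actual diff in this PR.
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04

# System packages needed to fetch the source and build vLLM's CUDA extensions.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace/vllm

# Install Python dependencies first so this layer is cached across rebuilds,
# then copy the source tree and install vLLM itself.
COPY requirements.txt .
RUN python3 -m pip install --no-cache-dir -r requirements.txt
COPY . .
RUN python3 -m pip install --no-cache-dir .

# Default command: run a throughput benchmark. This refers to vLLM's stock
# benchmarks/benchmark_throughput.py; exact flags vary by version, and the
# model choice here is purely illustrative.
CMD ["python3", "benchmarks/benchmark_throughput.py", \
     "--model", "facebook/opt-125m", \
     "--input-len", "128", "--output-len", "128", \
     "--num-prompts", "100"]
```

With such a file in place, `docker build -t vllm-bench .` followed by `docker run --gpus all vllm-bench` would build the image and run the benchmark end to end (the `vllm-bench` tag is hypothetical).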