ZJUICI / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

WIP: support table multi-data based on vllm-0.5.4 #2

Closed by zTaoplus 2 months ago

zTaoplus commented 2 months ago

TODO: