neuralmagic / nm-vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
https://nm-vllm.readthedocs.io
fix upload assets name
#373
Closed
derekk-nm closed this 4 months ago
derekk-nm commented 4 months ago
Fixes the name of the uploaded assets to reflect that we're also uploading to pypi.org.