neuralmagic / nm-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://nm-vllm.readthedocs.io

Use shared actions #309

Closed · dbarbuzzi closed 4 months ago

dbarbuzzi commented 5 months ago

This PR updates the workflows to use the shared `install-testmo` and `set-python` actions (and deletes the local versions). At the time of this writing: