mlc-ai/mlc-llm — Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/ · Apache License 2.0
#2213 · Closed

How can I deploy a single-card MLC-LLM model? I want the model inference to run only on one card, not distributed.

137591 commented 4 months ago
📚 Documentation
tqchen commented 4 months ago

By default, MLC-LLM deployment is single-card; no extra configuration is needed to keep inference on one GPU.
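To make extra sure inference never touches other GPUs, one common approach is to hide all but one device from the CUDA runtime before the model loads. Here is a minimal sketch: `CUDA_VISIBLE_DEVICES` is the standard CUDA environment variable, while the `MLCEngine` usage shown in the comment is an assumption based on the MLC-LLM Python API and the model URL is a placeholder.

```python
import os

# Restrict this process to a single GPU *before* any CUDA runtime initializes.
# "0" selects the first visible device; change the index to pin a different card.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# From here, load the model as usual. Assumed MLC-LLM API (illustrative only):
#   from mlc_llm import MLCEngine
#   engine = MLCEngine("HF://mlc-ai/SomeModel-q4f16_1-MLC")  # placeholder URL
# With only one device visible, the runtime cannot shard inference across cards.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

This pins at the process level, so it also guards against any library that would otherwise enumerate and use multiple GPUs.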