FMInference / FlexLLMGen

Running large language models on a single GPU for throughput-oriented scenarios.
Apache License 2.0

Allow FlexGen to use locally downloaded models #111

Open · Vinkle-hzt opened this issue 1 year ago

Vinkle-hzt commented 1 year ago

Support local models via a new --local argument, for example:

python3 -m flexgen.flex_opt --model /home/username/model/facebook/opt-1.3b --local

This will load the locally downloaded opt-1.3b model instead of fetching opt-1.3b from Hugging Face.
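
For illustration, a minimal sketch of how such a --local flag could be wired up with argparse; the argument names mirror the proposal, but the path-resolution logic below is an assumption, not FlexGen's actual implementation:

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str, default="facebook/opt-1.3b",
                    help="Hugging Face model name, or a local directory when --local is set.")
parser.add_argument("--local", action="store_true",
                    help="Treat --model as a path to a locally downloaded model.")
args = parser.parse_args()

if args.local:
    # Hypothetical check: expect the weights to already exist on disk and fail early if not.
    if not os.path.isdir(args.model):
        raise FileNotFoundError(f"Local model directory not found: {args.model}")
    model_path = args.model
else:
    # Otherwise keep the usual Hugging Face model identifier.
    model_path = args.model

print(f"Loading model from: {model_path}")
```

With this kind of flag, the command from the proposal would resolve /home/username/model/facebook/opt-1.3b as a local directory rather than as a Hugging Face model name.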