FMInference / FlexLLMGen

Running large language models on a single GPU for throughput-oriented scenarios.
Apache License 2.0

Faster and memory-efficient weight download #69

Closed: Ying1123 closed this issue 1 year ago

Ying1123 commented 1 year ago

Download weights in a faster and more memory-efficient way, and convert them into the FlexGen format.
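One common way to keep weight conversion memory-efficient is to write each tensor to its own on-disk array so it can later be memory-mapped on demand instead of loading the whole checkpoint into RAM. The sketch below illustrates that idea with plain NumPy; the helper names (`convert_to_per_tensor_files`, `load_tensor_mmap`) and the file layout are hypothetical and not necessarily what FlexGen's converter actually does.

```python
import os
import tempfile

import numpy as np


def convert_to_per_tensor_files(state_dict, out_dir):
    """Write each tensor of a checkpoint to its own .npy file.

    Hypothetical helper: storing tensors individually means a later
    loader can memory-map just the tensors it needs, keeping peak
    RAM low during conversion and loading.
    """
    os.makedirs(out_dir, exist_ok=True)
    for name, tensor in state_dict.items():
        path = os.path.join(out_dir, name + ".npy")
        np.save(path, np.ascontiguousarray(tensor))


def load_tensor_mmap(out_dir, name):
    """Memory-map a single converted tensor on demand (read-only)."""
    return np.load(os.path.join(out_dir, name + ".npy"), mmap_mode="r")


# Usage with a toy "checkpoint" standing in for real model weights.
ckpt = {"decoder.embed_tokens.weight": np.arange(12, dtype=np.float32).reshape(3, 4)}
out_dir = tempfile.mkdtemp()
convert_to_per_tensor_files(ckpt, out_dir)
w = load_tensor_mmap(out_dir, "decoder.embed_tokens.weight")
print(w.shape)  # (3, 4)
```

Because each tensor lives in its own file, a downloader could also fetch and convert shards one at a time, bounding memory use by the largest single tensor rather than the whole model.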