FMInference / FlexLLMGen

Running large language models on a single GPU for throughput-oriented scenarios.
Apache License 2.0
9.18k stars 548 forks

Link to paper.pdf is broken #14

Closed by pdh 1 year ago

pdh commented 1 year ago

The referenced file https://github.com/Ying1123/FlexGen/blob/main/docs/paper.pdf doesn't appear to exist.

merrymercy commented 1 year ago

It is fixed: https://github.com/FMInference/FlexGen/blob/main/docs/paper.pdf