FMInference / FlexLLMGen

Running large language models on a single GPU for throughput-oriented scenarios.
Apache License 2.0
9.2k stars 547 forks

Update docs/paper.md #102

Closed · shotarok closed this pull request 1 year ago

shotarok commented 1 year ago

What

Why