turboderp / exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs

[Feature Suggestion] SmoothQuant (W8A8) leads to ~50% better throughput #311

Open DreamGenX opened 5 months ago

DreamGenX commented 5 months ago

Hey there,

Thank you for making Exllama!

Most quants tend to have worse throughput and latency than fp16 inference. SmoothQuant (W8A8) quantizes both weights and activations to INT8, which avoids the per-weight dequantization step and lets the matmuls run on INT8 kernels, giving much higher throughput and better latency than fp16.
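For context, here is a rough sketch of the SmoothQuant idea in PyTorch: activation outliers are migrated into the weights via a per-channel scale, so both activations and weights quantize well to INT8. The helper names and the per-tensor quantization are illustrative only, not AutoSmoothQuant's actual API; real implementations use per-channel scales and fused INT8 GEMM kernels.

```python
import torch

def smooth_and_quantize(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5):
    """Sketch of the SmoothQuant transform.

    x: activations [tokens, in_features], w: weights [out_features, in_features].
    """
    # Per-input-channel smoothing scale: s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)
    act_max = x.abs().amax(dim=0).clamp(min=1e-5)
    w_max = w.abs().amax(dim=0).clamp(min=1e-5)
    s = (act_max ** alpha) / (w_max ** (1 - alpha))

    # Fold the scale into the weights: (X / s) @ (W * s).T == X @ W.T
    x_smooth = x / s
    w_smooth = w * s

    # Symmetric INT8 quantization (per-tensor here for brevity)
    def quant_int8(t):
        scale = t.abs().max() / 127.0
        return (t / scale).round().clamp(-128, 127).to(torch.int8), scale

    xq, sx = quant_int8(x_smooth)
    wq, sw = quant_int8(w_smooth)

    # INT8 x INT8 -> INT32 matmul, rescaled once at the end; no per-weight dequant
    y = (xq.to(torch.int32) @ wq.to(torch.int32).T).float() * (sx * sw)
    return y
```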

There's a repo for SmoothQuant that lets you create SmoothQuant models and run inference with them: https://github.com/AniZpZ/AutoSmoothQuant

There's also a PR for vLLM by @AniZpZ, but given the complexity of vLLM, it is unlikely to get merged any time soon: https://github.com/vllm-project/vllm/pull/1508

Exllama is known for its speed, so I think this could fit really well here.