vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Feature]: Support W4A8KV4 Quantization(QServe/QoQ) #4763

Open bratao opened 1 month ago

bratao commented 1 month ago

🚀 The feature, motivation and pitch

This library, https://github.com/mit-han-lab/qserve, introduces a number of innovations. The most important is the W4A8KV4 quantization scheme, referred to in the paper (https://arxiv.org/abs/2405.04532) as QoQ.

The key insight driving QServe is that the efficiency of LLM serving on GPUs is critically influenced by operations on low-throughput CUDA cores. Building on this insight, the QoQ algorithm introduces progressive quantization, which allows low dequantization overhead in W4A8 GEMM. Additionally, we develop SmoothAttention to effectively mitigate the accuracy degradation incurred by 4-bit KV quantization. In the QServe system, we perform compute-aware weight reordering and take advantage of register-level parallelism to reduce dequantization latency. We also make fused attention memory-bound, harnessing the performance gain brought by KV4 quantization. As a result, QServe improves the maximum achievable serving throughput of Llama-3-8B by 1.2x on A100 and 1.4x on L40S, and of Qwen1.5-72B by 2.4x on A100 and 3.5x on L40S, compared to TensorRT-LLM.
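To make the "progressive quantization" idea above concrete, here is a minimal NumPy sketch of how a two-level W4 weight format could look: weights are first quantized to INT8 with per-channel floating-point scales, then further quantized to INT4 with per-group *integer* scales, so the runtime INT4→INT8 step is just a cheap integer multiply in front of an INT8 GEMM. This is only an illustrative approximation of the scheme described in the paper, not the QServe kernels or any existing vLLM API; the group size, function names, and rounding details are assumptions.

```python
import numpy as np

GROUP_SIZE = 128  # hypothetical group size; QServe's actual choice may differ


def quantize_w4a8_progressive(w_fp16: np.ndarray):
    """Two-level ("progressive") weight quantization sketch.

    w_fp16: [out_channels, in_channels] weights, in_channels divisible by GROUP_SIZE.
    Returns (INT4-valued weights stored in int8, int8 group scales, fp16 channel scales).
    """
    assert w_fp16.shape[1] % GROUP_SIZE == 0

    # Stage 1: per-output-channel symmetric INT8 quantization with FP scales.
    s_channel = np.maximum(np.abs(w_fp16).max(axis=1, keepdims=True) / 127.0, 1e-8)
    w_int8 = np.clip(np.round(w_fp16 / s_channel), -127, 127).astype(np.int8)

    # Stage 2: quantize the INT8 intermediate to INT4 per group, with *integer*
    # group scales so the runtime INT4 -> INT8 step needs no floating point.
    oc, ic = w_int8.shape
    w_groups = w_int8.reshape(oc, ic // GROUP_SIZE, GROUP_SIZE).astype(np.int32)
    s_group = np.maximum(np.abs(w_groups).max(axis=2, keepdims=True) // 7, 1)
    w_int4 = np.clip(np.round(w_groups / s_group), -7, 7).astype(np.int8)  # values fit in 4 bits
    return w_int4, s_group.astype(np.int8), s_channel.astype(np.float16)


def dequantize_to_int8(w_int4: np.ndarray, s_group: np.ndarray) -> np.ndarray:
    """Cheap INT4 -> INT8 step that would precede an INT8 tensor-core GEMM;
    the FP16 per-channel scale is applied once, after the GEMM."""
    w_int8 = w_int4.astype(np.int32) * s_group.astype(np.int32)  # stays within int8 range
    return w_int8.reshape(w_int4.shape[0], -1).astype(np.int8)
```

The point of the second level being integer-scaled is that the dequantization on the critical path avoids FP work on low-throughput CUDA cores, which is the bottleneck the paper identifies; the expensive FP16 per-channel rescale happens only once per output tile after the INT8 GEMM.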

Alternatives

No response

Additional context

No response

super-ahn commented 6 days ago

+1. It's the only quantization algorithm that was designed with throughput in mind from the start.

haichuan1221 commented 5 days ago

+1

RanchiZhao commented 4 days ago

+1