
[Feature]: Quark quantization format upstream to VLLM #10294


Open kewang-xlnx opened 4 days ago

Quark is a comprehensive cross-platform toolkit designed to simplify and enhance the quantization of deep learning models. Supporting both PyTorch and ONNX models, Quark empowers developers to optimize their models for deployment on a wide range of hardware backends, achieving significant performance gains without compromising accuracy. Here is the introduction to Quark. Currently, the format of the quantized model exported by Quark is different from the formats supported by VLLM, so we need to contribute codes to VLLM to add support for the Quark format.

Quark Format

1) The configuration file config.json of the Quark format
2) Key names and data types in the Quark safetensors (see the inspection sketch after these listings)

```
model.layers.1.self_attn.k_proj.input_scale,   torch.float16
model.layers.1.self_attn.k_proj.weight,        torch.float8_e4m3fn
model.layers.1.self_attn.k_proj.weight_scale,  torch.float16
model.layers.1.self_attn.o_proj.input_scale,   torch.float16
model.layers.1.self_attn.o_proj.weight,        torch.float8_e4m3fn
model.layers.1.self_attn.o_proj.weight_scale,  torch.float16
model.layers.1.self_attn.q_proj.input_scale,   torch.float16
model.layers.1.self_attn.q_proj.weight,        torch.float8_e4m3fn
model.layers.1.self_attn.q_proj.weight_scale,  torch.float16
model.layers.1.self_attn.v_proj.input_scale,   torch.float16
model.layers.1.self_attn.v_proj.weight,        torch.float8_e4m3fn
model.layers.1.self_attn.v_proj.weight_scale,  torch.float16
```

3) KV scale format, present if KV cache quantization is used

```
model.layers.1.self_attn.k_proj.output_scale,  torch.float16
model.layers.1.self_attn.v_proj.output_scale,  torch.float16
```
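
The key layout above can be verified directly from a Quark-exported checkpoint. Below is a minimal inspection sketch, assuming a single-file export named model.safetensors (a hypothetical name; large models may be sharded across several files):

```python
# Inspect the key names and dtypes of a Quark FP8 checkpoint.
# "model.safetensors" is an assumed file name; sharded checkpoints
# would need each shard opened in turn.
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt") as f:
    for name in sorted(f.keys()):
        if "layers.1.self_attn" in name:
            print(f"{name}\t{f.get_tensor(name).dtype}")

# Expected lines include, e.g.:
#   model.layers.1.self_attn.k_proj.weight        torch.float8_e4m3fn
#   model.layers.1.self_attn.k_proj.weight_scale  torch.float16
```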

Design

Add the Quark format to the ROCm/vllm repo by creating a directory for it under vllm/model_executor/layers/quantization, containing the following files.

  1. quark.py: implements and manages the quantization configuration and processing for the Quark quantization format in LLMs.
  2. quark_moe.py: implements and manages the same for LLMs with an MoE structure.
  3. schemes/quark_scheme.py: an abstract base class for the various quantization schemes in Quark, defining the structure for weight creation, the forward pass, and post-loading weight processing (a sketch follows this list).
  4. schemes/quark_fp8.py: provides the implementation of the W8A8 FP8 quantization scheme within the Quark framework.
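
To make the intended structure concrete, here is a minimal sketch of what the abstract base class in schemes/quark_scheme.py could look like, modeled on the existing scheme abstractions under vllm/model_executor/layers/quantization (e.g. compressed-tensors). The method names and signatures are assumptions for illustration, not the final API:

```python
from abc import ABC, abstractmethod
from typing import Optional

import torch


class QuarkScheme(ABC):
    """Abstract base class for Quark quantization schemes (hypothetical sketch)."""

    @abstractmethod
    def create_weights(self, layer: torch.nn.Module, input_size: int,
                       output_size: int, params_dtype: torch.dtype,
                       **extra_weight_attrs) -> None:
        """Register the quantized weight and scale parameters on the layer."""
        raise NotImplementedError

    @abstractmethod
    def process_weights_after_loading(self, layer: torch.nn.Module) -> None:
        """One-time post-processing after checkpoint weights are loaded,
        e.g. requantizing or fusing scales."""
        raise NotImplementedError

    @abstractmethod
    def apply_weights(self, layer: torch.nn.Module, x: torch.Tensor,
                      bias: Optional[torch.Tensor] = None) -> torch.Tensor:
        """Run the quantized forward pass for the layer."""
        raise NotImplementedError
```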

In the first stage, we will integrate Quark's FP8 quantization into vLLM; other Quark formats, such as INT4/INT8 with per_tensor/per_channel/per_group granularity, will be integrated later as needed.
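
As a rough illustration of the W8A8 FP8 scheme targeted in this first stage (static per-tensor scales, matching the weight/weight_scale/input_scale tensors listed above), here is a small self-contained sketch. This is reference math, not vLLM code; a real kernel would use a fused FP8 GEMM rather than dequantizing:

```python
import torch


def quantize_fp8_per_tensor(t: torch.Tensor):
    """Per-tensor static FP8 (e4m3) quantization: one scale per tensor."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = t.abs().max().clamp(min=1e-12) / finfo.max
    q = (t / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return q, scale.to(torch.float16)


w = torch.randn(128, 64)                   # full-precision weight
x = torch.randn(8, 64)                     # activation batch
w_q, w_scale = quantize_fp8_per_tensor(w)  # -> weight, weight_scale
x_q, x_scale = quantize_fp8_per_tensor(x)  # -> input_scale (from calibration in practice)

# Dequantize-and-matmul reference for y = x @ w.T:
y = (x_q.float() * x_scale) @ (w_q.float() * w_scale).t()
```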

simon-mo commented 1 day ago

In general, we welcome a contribution that converts the Quark format to the standardized format of LLM Compressor (https://github.com/vllm-project/llm-compressor); @robertgshaw2-neuralmagic and @mgoin can help provide pointers.
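
For context, a conversion along these lines would largely amount to rewriting Quark's quantization_config in config.json into the compressed-tensors schema (the FP8 tensor names listed above already align closely with what vLLM loads). The snippet below is only a hedged sketch; the target field names follow compressed-tensors conventions as commonly documented and should be checked against the llm-compressor documentation:

```python
import json

# Hedged sketch: express a Quark FP8 per-tensor static scheme as a
# compressed-tensors style quantization_config. Field names are
# assumptions for illustration; verify against llm-compressor's docs.
with open("config.json") as f:
    config = json.load(f)

fp8_spec = {"num_bits": 8, "type": "float", "strategy": "tensor",
            "symmetric": True, "dynamic": False}
config["quantization_config"] = {
    "quant_method": "compressed-tensors",
    "config_groups": {
        "group_0": {
            "targets": ["Linear"],
            "weights": fp8_spec,
            "input_activations": fp8_spec,
        }
    },
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```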