An easy-to-use LLM quantization and inference toolkit based on the GPTQ algorithm (weight-only quantization).
07/05/2024 🚀🚀 v0.9.5: Intel QBits support added for [2,3,4,8]-bit quantization/inference on CPU. CUDA kernels have been fully deprecated in favor of Exllama (v1/v2)/Marlin/Triton.
07/03/2024 🚀 v0.9.4: HF Transformers integration added and a bug fix for Gemma 2 support.
07/02/2024 🚀 v0.9.3: Added Gemma 2 support, faster PPL calculations on GPU, and more code/arg refactoring.
06/30/2024 🚀 v0.9.2: Added auto-padding of model in/out-features for exllama and exllama v2. Fixed quantization of OPT and DeepSeek V2-Lite models. Fixed inference for DeepSeek V2-Lite.
06/29/2024 🚀🚀🚀 v0.9.1: With 3 new models (DeepSeek-V2, DeepSeek-V2-Lite, DBRX Converted), a new BITBLAS format/kernel, proper batching of the calibration dataset resulting in a >50% quantization speedup, security hash checks of loaded model weights, tons of refactoring/usability improvements, bug fixes, and much more.
06/20/2024 ✨ GPTQModel v0.9.0: Thanks for all the work from the ModelCloud team and the open-source ML community for their contributions!
We want GPTQModel to be highly focused on GPTQ-based quantization and target inference compatibility with HF Transformers, vLLM, and SGLang.
GPTQModel is an opinionated fork/refactor of AutoGPTQ with the latest bug fixes, more model support, faster quant inference, faster quantization, better quants (as measured by PPL), and a pledge from the ModelCloud team that we, along with the open-source ML community, will make every effort to keep the library up-to-date with the latest advancements, model support, and bug fixes.
We will backport bug fixes to AutoGPTQ on a case-by-case basis.
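Since HF Transformers is one of the stated inference-compatibility targets, here is a minimal, hedged sketch of loading an already-quantized GPTQ checkpoint directly through Transformers. The model id is a placeholder, and this path assumes a Transformers build with GPTQ support (via Optimum) plus a GPTQ kernel backend installed:

```python
# Sketch only: load an existing GPTQ checkpoint through HF Transformers.
# "your-org/your-model-GPTQ" is a placeholder id; Transformers dispatches GPTQ
# weights to an installed GPTQ backend, so make sure one is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model-GPTQ"  # placeholder GPTQ-quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("gptqmodel is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```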
* Gemma 2 Model Support
* DeepSeek-V2 Model Support
* DeepSeek-V2-Lite Model Support
* ChatGLM Model Support
* MiniCPM Model Support
* Phi-3 Model Support
* Qwen2MoE Model Support
* DBRX (Converted Model) Model Support
* Sym=False Support. AutoGPTQ has unusable sym=false. (Re-quant required; a config sketch follows the table below.)
* lm_head module quant inference support for further VRAM reduction.
* Better quality quants as measured by PPL (test config: sym=True + FORMAT.GPTQ, TinyLlama).
* .from_quantized() / .from_pretrained() api.
* lm_head quantization support by integrating with Intel/AutoRound.

Model Support:

| Model | | Model | | Model | | Model | |
|---|---|---|---|---|---|---|---|
| Baichuan | ✅ | DeepSeek-V2-Lite | 🚀 | Llama | ✅ | Phi/Phi-3 | 🚀 |
| Bloom | ✅ | Falcon | ✅ | LongLLaMA | ✅ | Qwen | ✅ |
| ChatGLM | 🚀 | Gemma 2 | 🚀 | MiniCPM | 🚀 | Qwen2MoE | 🚀 |
| CodeGen | ✅ | GPTBigCode | ✅ | Mistral | ✅ | RefinedWeb | ✅ |
| Cohere | ✅ | GPTNeoX | ✅ | Mixtral | ✅ | StableLM | ✅ |
| DBRX Converted | 🚀 | GPT-2 | ✅ | MOSS | ✅ | StarCoder2 | ✅ |
| Deci | ✅ | GPT-J | ✅ | MPT | ✅ | XVERSE | ✅ |
| DeepSeek-V2 | 🚀 | InternLM | ✅ | OPT | ✅ | Yi | ✅ |
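As a quick illustration of the Sym=False feature listed above, a hedged config sketch (assuming the sym flag carries over from AutoGPTQ's quantize config; the remaining values mirror the quantization example further below):

```python
# Sketch: asymmetric (sym=False) quantization config. The `sym` flag is assumed
# to carry over from AutoGPTQ's quantize config; models previously quantized
# with AutoGPTQ's sym=False need to be re-quantized with GPTQModel.
from gptqmodel import QuantizeConfig

quant_config = QuantizeConfig(
    bits=4,          # 4-bit weights
    group_size=128,  # quantization group size
    sym=False,       # asymmetric quantization
)
```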
We aim for 100% compatibility with models quantized by AutoGPTQ <= 0.7.1 and will consider syncing future compatibility on a case-by-case basis.
GPTQModel is currently Linux-only and requires an Nvidia GPU with CUDA compute capability >= 6.0.
WSL on Windows should work as well.
ROCm/AMD support will be re-added in a future version after everything on ROCm has been validated. Only fully validated features will be re-added from the original AutoGPTQ repo.
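To confirm a GPU meets the compute-capability requirement before installing, a quick PyTorch check (assumes torch is already installed):

```python
# Check that the local Nvidia GPU satisfies CUDA compute capability >= 6.0.
import torch

assert torch.cuda.is_available(), "GPTQModel currently requires an Nvidia GPU"
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
assert (major, minor) >= (6, 0), "CUDA compute capability >= 6.0 is required"
```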
```bash
# clone repo
git clone https://github.com/ModelCloud/GPTQModel.git && cd GPTQModel

# compile and install
pip install -vvv --no-build-isolation .

# if you have `uv` package version 0.1.16 or higher, you can use `uv pip` for potentially better dependency management
uv pip install -vvv --no-build-isolation .

# alternatively, use the bundled install script
bash install.sh

# or install the published package instead of building from source
pip install gptq-model --no-build-isolation
```
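After installing by either route, a quick import check confirms the package is usable (the imports match the quantization example below):

```python
# Sanity check after installation: these imports should succeed without error.
from gptqmodel import GPTQModel, QuantizeConfig
print("gptqmodel import OK")
```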
Warning: this is just a showcase of the basic GPTQModel api. It uses only one calibration sample to quantize a very small model, and the quality of a model quantized with so few samples may not be good.
Below is an example of the simplest use of gptqmodel to quantize a model and run inference after quantization:
```python
from transformers import AutoTokenizer
from gptqmodel import GPTQModel, QuantizeConfig
pretrained_model_dir = "facebook/opt-125m"
quant_output_dir = "opt-125m-4bit"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
calibration_dataset = [
tokenizer(
"The world is a wonderful place full of beauty and love."
)
]
quant_config = QuantizeConfig(
bits=4, # 4-bit
group_size=128, # 128 is a good balance between quality and performance
)
# load the un-quantized model; by default, the model is loaded into CPU memory
model = GPTQModel.from_pretrained(pretrained_model_dir, quant_config)
# quantize the model; calibration_dataset must be a list of dicts whose only keys are "input_ids" and "attention_mask"
model.quantize(calibration_dataset)
# save quantized model
model.save_quantized(quant_output_dir)
# load quantized model to the first GPU
model = GPTQModel.from_quantized(quant_output_dir)
# inference with model.generate
print(tokenizer.decode(model.generate(**tokenizer("gptqmodel is", return_tensors="pt").to(model.device))[0]))
```
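As the warning above notes, a single calibration sample is only for demonstration. A hedged sketch of building a more realistic calibration_dataset from your own texts (the texts, sample count, and max length are placeholders):

```python
# Sketch: build a larger calibration set from your own corpus. The texts below
# are placeholders; in practice use a few hundred samples that resemble the
# data the model will see at inference time, then pass the list to model.quantize().
texts = [
    "Large language models can be compressed with GPTQ weight-only quantization.",
    "Calibration data should reflect the model's target domain.",
    # ... add a few hundred more samples
]

calibration_dataset = [
    tokenizer(text, truncation=True, max_length=512)  # yields input_ids / attention_mask
    for text in texts
]
```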
For more advanced features of model quantization, please refer to this script.
Read the gptqmodel/models/llama.py code, which explains in detail via comments how model support is defined. Use it as a guide when submitting PRs for new models; most models follow the same pattern.
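For orientation, the sketch below shows the general shape of such a model definition. The attribute names and import path are illustrative assumptions only; treat gptqmodel/models/llama.py as the authoritative reference:

```python
# Illustrative sketch only: attribute names and import path are assumptions;
# the authoritative pattern is in gptqmodel/models/llama.py.
from gptqmodel.models.base import BaseGPTQModel  # assumed import path

class MyModelGPTQ(BaseGPTQModel):
    # non-repeating modules that sit outside the decoder-layer stack
    base_modules = ["model.embed_tokens", "model.norm"]
    # where the repeating decoder layers live in the HF module tree
    layers_node = "model.layers"
    layer_type = "MyModelDecoderLayer"
    # quantizable projections inside each decoder layer, grouped by stage
    layer_modules = [
        ["self_attn.q_proj", "self_attn.k_proj", "self_attn.v_proj"],
        ["self_attn.o_proj"],
        ["mlp.up_proj", "mlp.gate_proj"],
        ["mlp.down_proj"],
    ]
```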
You can use tasks defined in gptqmodel.eval_tasks to evaluate the model's performance on specific downstream tasks before and after quantization.
The predefined tasks support all causal language models implemented in 🤗 Transformers and in this project.
Tutorials provide step-by-step guidance for integrating gptqmodel with your own project, along with some best-practice principles.
Examples provide plenty of example scripts for using gptqmodel in different ways.
Currently, gptqmodel supports: LanguageModelingTask, SequenceClassificationTask, and TextSummarizationTask; more tasks will come soon!
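A hedged sketch of what running one of these tasks can look like; the constructor arguments and dataset/column names below are illustrative assumptions, so consult gptqmodel.eval_tasks for the actual signatures:

```python
# Illustrative sketch only: argument names and the dataset are assumptions;
# check gptqmodel.eval_tasks for the actual task signatures.
from gptqmodel.eval_tasks import LanguageModelingTask

task = LanguageModelingTask(
    model=model,                           # a model loaded via GPTQModel
    tokenizer=tokenizer,
    data_name_or_path="tatsu-lab/alpaca",  # placeholder dataset id
    prompt_col_name="instruction",         # placeholder column names
    label_col_name="output",
)
print(task.run())  # run before and after quantization and compare the scores
```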
GPTQModel will use Marlin, Exllama v2, and Triton kernels, in that order, for maximum inference performance.