vllm-project / llm-compressor

Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
Apache License 2.0

[Question] Does MiniCPM-V 2.6 currently support INT8/FP8 quantization? #848

Open wjj19950828 opened 1 month ago

wjj19950828 commented 1 month ago

Does MiniCPM-V 2.6 currently support INT8/FP8 quantization?

thanks~

robertgshaw2-neuralmagic commented 1 month ago

We are actively working on support for VLMs, but this is not yet in the current release of llm-compressor. We certainly welcome contributions to this effort if you are interested in helping out!

mgoin commented 1 month ago

Hi @wjj19950828, we currently have experimental support for dynamic FP8 quantization of VLMs on the main branch.

I was able to produce an FP8-dynamic MiniCPM-V 2.6 model (https://huggingface.co/nm-testing/MiniCPM-V-2_6-FP8-dynamic) using this script:

from transformers import AutoProcessor, AutoModelForCausalLM

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot, wrap_hf_model_class

MODEL_ID = "openbmb/MiniCPM-V-2_6"

# Load model.
model_class = wrap_hf_model_class(AutoModelForCausalLM)
model = model_class.from_pretrained(MODEL_ID, torch_dtype="auto", trust_remote_code=True).to("cuda")
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)

# Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to fp8 with per-channel scales via PTQ
#   * quantize the activations to fp8 dynamically with per-token scales
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*lm_head", "re:resampler.*", "re:vpm.*"],
)

# Apply quantization and save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-dynamic"
oneshot(model=model, recipe=recipe, output_dir=SAVE_DIR, trust_remote_code_model=True)
processor.save_pretrained(SAVE_DIR)

It seems to load into vLLM 0.6.3 just fine, but I haven't evaluated the model yet:

vllm serve MiniCPM-V-2_6-FP8-dynamic --trust-remote-code
...
INFO 10-17 00:09:51 model_runner.py:1072] Loading model weights took 9.0869 GB
...
INFO:     Uvicorn running on socket ('0.0.0.0', 8000) (Press CTRL+C to quit)
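
For a quick sanity check of the served model, it can be queried through vLLM's OpenAI-compatible chat API. A minimal sketch using the openai Python client; the prompt and image URL below are placeholders, not something from this thread:

from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MiniCPM-V-2_6-FP8-dynamic",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            # Placeholder image URL; replace with a real one.
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)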
donpromax commented 2 weeks ago

I also tried FP8 dynamic quantization for InternVL2 and it works, but I failed to quantize InternVL2 using W8A8-INT8.
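
For reference, a W8A8-INT8 attempt along the lines of the llm-compressor int8 examples would look roughly like the sketch below. Unlike FP8 dynamic, int8 activation quantization needs calibration data; the checkpoint name, the ignore patterns for the InternVL2 vision modules, the calibration dataset, and the sample counts are all assumptions here, not a confirmed working configuration:

from transformers import AutoProcessor, AutoModelForCausalLM

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot, wrap_hf_model_class

MODEL_ID = "OpenGVLab/InternVL2-8B"  # assumed checkpoint
SAVE_DIR = MODEL_ID.split("/")[1] + "-W8A8"

# Load model and processor.
model_class = wrap_hf_model_class(AutoModelForCausalLM)
model = model_class.from_pretrained(MODEL_ID, torch_dtype="auto", trust_remote_code=True).to("cuda")
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)

# SmoothQuant + GPTQ to int8 weights and activations; the ignore patterns for the
# vision tower and projector are guesses based on InternVL2 module names.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["re:.*lm_head", "re:vision_model.*", "re:mlp1.*"]),
]

# Calibrated one-shot quantization on a text-only dataset, as in the int8 examples.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    output_dir=SAVE_DIR,
    max_seq_length=2048,
    num_calibration_samples=512,
    trust_remote_code_model=True,
)
processor.save_pretrained(SAVE_DIR)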