huggingface / optimum-intel

🤗 Optimum Intel: Accelerate inference with Intel optimization tools
https://huggingface.co/docs/optimum/main/en/intel/index

Quantization support for CausalVisualLMs #951

Closed: nikita-savelyevv closed this 2 weeks ago

nikita-savelyevv commented 1 month ago

What does this PR do?

Add support for (data-aware) compression of CausalVisualLMs:

When quantization_config is given, the language model is compressed according to it. The other model parts, including the vision and text embeddings models, are compressed to int8_asym.

Example:

optimum-cli

optimum-cli export openvino --task image-text-to-text -m llava-hf/llava-v1.6-mistral-7b-hf \
--weight-format int4 --dataset contextual --awq --num-samples 32 ./llava_int4_awq

Python API

from pathlib import Path

import requests
from PIL import Image
from transformers import AutoConfig, AutoProcessor, AutoTokenizer, TextStreamer

from optimum.intel.openvino import OVModelForVisualCausalLM, OVQuantizationMethod, OVWeightQuantizationConfig

models_parent_dir = Path("./models")
# 4-bit data-aware AWQ compression settings for the language model;
# trust_remote_code is updated below once the model architecture is chosen
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    quant_method=OVQuantizationMethod.AWQ,
    dataset="contextual",
    num_samples=32,
    trust_remote_code=False,
    processor=None,
    tokenizer=None,
)

compress = True
model_arch = "llava"
# model_arch = "nanollava"
# model_arch = "minicpmv"

trc = False  # trust_remote_code
if model_arch == "llava":
    model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
    model_path = models_parent_dir / "llava-v1.6-mistral-7b-hf/FP32"

    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=trc)
    tokenizer = None
elif model_arch == "nanollava":
    model_id = "qnguyen3/nanoLLaVA"
    model_path = models_parent_dir / "nanoLLaVA/FP32"
    trc = True

    config = AutoConfig.from_pretrained(model_id, trust_remote_code=trc)
    processor = AutoProcessor.from_pretrained(config.mm_vision_tower, trust_remote_code=trc)
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=trc)
elif model_arch == "minicpmv":
    model_id = "openbmb/MiniCPM-V-2_6"
    model_path = models_parent_dir / "MiniCPM-V-2_6/FP32"
    trc = True

    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=trc)
    tokenizer = None
else:
    raise ValueError(f"Unsupported model_arch: {model_arch}")

quantization_config.trust_remote_code = trc

# Export the model to OpenVINO IR (uncompressed) if it has not been exported yet
if not model_path.exists():
    OVModelForVisualCausalLM.from_pretrained(
        model_id, export=True, trust_remote_code=trc, load_in_8bit=False, compile=False
    ).save_pretrained(model_path)
    processor.save_pretrained(model_path)
    if tokenizer is not None:
        tokenizer.save_pretrained(model_path)

# Apply 4-bit AWQ compression by reloading with the quantization config, then save and reload the compressed model
if compress:
    model = OVModelForVisualCausalLM.from_pretrained(
        model_path, trust_remote_code=trc, quantization_config=quantization_config
    )
    compressed_model_path = model_path / "../int4"
    model.save_pretrained(compressed_model_path)
    model = OVModelForVisualCausalLM.from_pretrained(compressed_model_path, trust_remote_code=trc)
else:
    model = OVModelForVisualCausalLM.from_pretrained(model_path, trust_remote_code=trc)

image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = model.preprocess_inputs(processor, "What are these?", raw_image, tokenizer=tokenizer)

if model_arch == "nanollava":
    streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    output_ids = model.generate(**inputs, max_new_tokens=128, use_cache=True, streamer=streamer)
else:
    output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
    print(processor.decode(output[0], skip_special_tokens=True))
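
For reference, export and 4-bit compression can also be combined into a single from_pretrained call instead of the two-step flow above. A minimal sketch, assuming the llava-hf/llava-v1.6-mistral-7b-hf checkpoint and the built-in contextual calibration dataset:

from optimum.intel.openvino import OVModelForVisualCausalLM, OVQuantizationMethod, OVWeightQuantizationConfig

# Export to OpenVINO IR and compress in one call: the language model gets
# 4-bit AWQ weights, the other submodels fall back to int8_asym.
q_config = OVWeightQuantizationConfig(
    bits=4,
    quant_method=OVQuantizationMethod.AWQ,
    dataset="contextual",
    num_samples=32,
)
model = OVModelForVisualCausalLM.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    export=True,
    quantization_config=q_config,
)
model.save_pretrained("./llava_int4_awq")

This mirrors the optimum-cli command shown earlier.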


HuggingFaceDocBuilderDev commented 1 month ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

AlexKoff88 commented 3 weeks ago

@eaidova, @l-bat, please help with review