[EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
Support for VLMs: LLaVA, InternVL2, LLaMA 3.2, and Qwen2VL. Loading image-text calibration datasets is now more accurate and convenient. LLaVA, InternVL2, and Qwen2VL also support mixed-dataset calibration (combining pure image-text and text-only calibration data).
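As a rough illustration of what a mixed-dataset calibration setup could look like, here is a hypothetical YAML sketch — the key names (`calib`, `type`, `datasets`, `modality`) are illustrative assumptions, not LLMC's actual config schema; consult the repo's config files for the real format:

```yaml
# Hypothetical sketch of mixed image-text + text-only calibration.
# All keys below are assumptions for illustration, not LLMC's schema.
calib:
  type: mixed            # combine several calibration sources
  n_samples: 128         # total calibration samples drawn across datasets
  seq_len: 512
  datasets:
    - name: coco_captions
      modality: image_text   # paired image + caption samples
      ratio: 0.5
    - name: pileval
      modality: text         # text-only samples
      ratio: 0.5
```

The idea is that quantizing a VLM on image-text pairs alone can under-represent its language-only behavior, so blending in text-only samples keeps both modalities calibrated.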