TokenPacker

The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM".


Comparisons with existing methods πŸ’‘

Updates πŸ“Œ

What is TokenPacker πŸ‘€

TokenPacker is a novel visual projector that adopts a coarse-to-fine scheme to inject enriched fine-grained features into the condensed visual tokens. Using TokenPacker, we can compress the visual tokens by 75%∼89% while achieving comparable or even better performance across diverse benchmarks, with significantly higher efficiency.
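
The arithmetic behind that range is simple. With LLaVA-1.5's CLIP-ViT-L/14-336 encoder (a standard-setting assumption, not stated here), each image yields (336/14)² = 576 visual tokens, so the compression ratios used below map to these reductions:

```python
# Token-count arithmetic behind the "75%~89%" claim. The 576-token baseline
# assumes the CLIP-ViT-L/14-336 encoder used by LLaVA-1.5.
base_tokens = (336 // 14) ** 2               # 576 visual tokens per image

for ratio in (4, 9, 16):                     # TokenPacker compression ratios
    kept = base_tokens // ratio
    print(f"1/{ratio}: {kept} tokens kept, {1 - kept / base_tokens:.1%} cut")

# 1/4: 144 tokens kept, 75.0% cut
# 1/9: 64 tokens kept, 88.9% cut
# 1/16: 36 tokens kept, 93.8% cut
```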

Algorithms

We provide pseudo-code to showcase the detailed processing flow.

Core codes

As a visual projector, TokenPacker is implemented as the class TokenPacker, which can be found in multimodal_projector/builder.py. A simplified sketch of the idea follows.
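
For orientation, here is a minimal PyTorch sketch of the coarse-to-fine flow: queries come from spatially downsampled (coarse) tokens, each query cross-attends to the fine-grained tokens in its local region, and the result is projected into the LLM embedding space. All names, shapes, and defaults below are illustrative assumptions, not the repository's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenPackerSketch(nn.Module):
    """Illustrative coarse-to-fine projector (NOT the official implementation).

    Takes (B, N, C) visual tokens with N = H*W, H == W, and returns
    (B, N / scale**2, out_dim) condensed tokens for the LLM.
    """

    def __init__(self, in_dim=1024, out_dim=4096, scale=2, num_heads=8):
        super().__init__()
        self.scale = scale  # 2 -> keep 1/4 of the tokens, 3 -> 1/9, 4 -> 1/16
        self.attn = nn.MultiheadAttention(in_dim, num_heads, batch_first=True)
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.GELU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x):
        B, N, C = x.shape
        H = W = int(N ** 0.5)
        s = self.scale
        grid = x.transpose(1, 2).reshape(B, C, H, W)

        # Coarse queries: spatially downsampled tokens.
        q = F.avg_pool2d(grid, kernel_size=s).flatten(2).transpose(1, 2)

        # Fine keys/values: the s x s fine-grained region under each query.
        kv = F.unfold(grid, kernel_size=s, stride=s)           # (B, C*s*s, L)
        kv = kv.reshape(B, C, s * s, -1).permute(0, 3, 2, 1)   # (B, L, s*s, C)

        # Each coarse query attends only to its own local region.
        L = q.shape[1]
        out, _ = self.attn(
            q.reshape(B * L, 1, C),
            kv.reshape(B * L, s * s, C),
            kv.reshape(B * L, s * s, C),
        )
        out = out.reshape(B, L, C) + q       # residual on the coarse queries
        return self.proj(out)                # project into the LLM space

packer = TokenPackerSketch()
tokens = torch.randn(2, 576, 1024)           # e.g. CLIP-ViT-L/14-336 features
print(packer(tokens).shape)                  # torch.Size([2, 144, 4096])
```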

Comparisons with various projectors

High-Resolution Image Understanding with TokenPacker πŸ”¬

To support efficient high-resolution image understanding, we further develop an effective image cropping method, TokenPacker-HD.
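
In spirit, such a cropping scheme picks a patch grid (within a patch budget) matching the image's aspect ratio, splits the resized image into fixed-size crops, and keeps a downsized global view alongside them. The sketch below is a hedged illustration of that general any-resolution recipe; the grid search, the 336 crop size, and the patch ordering are assumptions, not the exact TokenPacker-HD procedure:

```python
def choose_grid(img_w, img_h, max_patches=9):
    """Pick the (cols, rows) crop grid whose aspect ratio best matches the
    image, within the patch budget; ties prefer more patches."""
    best, best_err = (1, 1), float("inf")
    for cols in range(1, max_patches + 1):
        for rows in range(1, max_patches // cols + 1):
            err = abs(cols / rows - img_w / img_h)
            if err < best_err or (err == best_err and
                                  cols * rows > best[0] * best[1]):
                best, best_err = (cols, rows), err
    return best  # e.g. a square image with max_patches=9 -> (3, 3)

def crop_image(img, crop=336, max_patches=9):
    """img is a PIL.Image. Returns local crops plus a global view."""
    cols, rows = choose_grid(*img.size, max_patches=max_patches)
    resized = img.resize((cols * crop, rows * crop))
    crops = [
        resized.crop((c * crop, r * crop, (c + 1) * crop, (r + 1) * crop))
        for r in range(rows) for c in range(cols)
    ]
    return crops + [img.resize((crop, crop))]  # patches + global thumbnail
```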

Install πŸ› οΈ

  1. Clone this repository and navigate to the TokenPacker folder
    git clone https://github.com/CircleRadon/TokenPacker.git
    cd TokenPacker
  2. Install packages
    conda create -n tokenpacker python=3.10 -y
    conda activate tokenpacker
    pip install --upgrade pip  # enable PEP 660 support
    pip install -e .
  3. Install additional packages for training
    pip install -e ".[train]"
    pip install flash-attn --no-build-isolation
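
A quick way to sanity-check the environment afterwards (the package name llava is an assumption: this repo mirrors LLaVA-1.5's setup, so the editable install from step 2 should expose it; adjust if the layout differs):

```python
# Verify that PyTorch, CUDA, and the editable install are all visible.
import torch
import llava  # assumed package name from the LLaVA-1.5 codebase

print(torch.__version__, "CUDA available:", torch.cuda.is_available())
```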

Training πŸš€

LLaVA-TokenPacker

Dataset

To make a fair comparison, we use the same training data as in LLaVA-1.5, i.e., LLaVA-Pretrain-558K for stage 1, and Mix665k for stage 2.

Training

LLaVA-TokenPacker-HD

Dataset

To obtain competitive high-resolution performance, we use the 2.7M data samples organized by Mini-Gemini, i.e., 1.2M for stage 1 and 1.5M for stage 2.

Training


Experiments

Model Zoo

| Model | Max Res. | Compression Ratio | Token Num. | Max Patch Num. | Training Data | Download |
|---|---|---|---|---|---|---|
| TokenPacker-7b | 336x336 | 1/4 | 144 | - | 558K+665K | checkpoints |
| TokenPacker-13b | 336x336 | 1/4 | 144 | - | 558K+665K | checkpoints |
| TokenPacker-HD-7b | 1088x1088 | 1/4 | ~954 | 9 | 1.2M+1.5M | checkpoints |
| TokenPacker-HD-13b | 1088x1088 | 1/4 | ~954 | 9 | 1.2M+1.5M | checkpoints |
| TokenPacker-HD-13b | 1344x1344 | 1/4 | ~1393 | 16 | 1.2M+1.5M | checkpoints |
| TokenPacker-HD-13b | 1344x1344 | 1/9 | ~619 | 16 | 1.2M+1.5M | checkpoints |
| TokenPacker-HD-13b | 1344x1344 | 1/16 | ~347 | 16 | 1.2M+1.5M | checkpoints |
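
To experiment with a downloaded checkpoint, a loader along these lines should apply, assuming this fork retains LLaVA's load_pretrained_model builder; all paths and names below are placeholders:

```python
# Hedged loading sketch: assumes the repo keeps LLaVA's model builder API
# (llava.model.builder.load_pretrained_model); adjust names to your setup.
from llava.model.builder import load_pretrained_model

model_path = "/path/to/TokenPacker-7b"        # a locally downloaded checkpoint
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,                          # full checkpoint, no LoRA base
    model_name="llava-tokenpacker-7b",        # placeholder model name
)
```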


Visualization

We provide some visual examples.

High-resolution image understanding.

TODO List πŸ“

Acknowledgement πŸ’Œ

BibTeX πŸ–ŠοΈ

@misc{TokenPacker,
  title={TokenPacker: Efficient Visual Projector for Multimodal LLM},
  author={Wentong Li and Yuqian Yuan and Jian Liu and Dongqi Tang and Song Wang and Jianke Zhu and Lei Zhang},
  year={2024},
  eprint={2407.02392},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}