TIGER-AI-Lab / Mantis

Official code for the paper "Mantis: Interleaved Multi-Image Instruction Tuning" (TMLR 2024)
https://tiger-ai-lab.github.io/Mantis/
Apache License 2.0

Mantis: Interleaved Multi-Image Instruction Tuning (TMLR 2024)

This repository contains the code for our TMLR 2024 paper, Mantis (https://arxiv.org/abs/2405.01483).



🤔 Recent years have witnessed a wide array of large multimodal models (LMMs) that effectively solve single-image vision-language tasks. However, their ability to solve multi-image vision-language tasks remains limited.

😦 Existing multi-image LMMs (e.g., OpenFlamingo, Emu, Idefics) mostly gain their multi-image ability through pre-training on hundreds of millions of noisy interleaved image-text examples from the web, which is neither efficient nor effective.

🔥 Therefore, we present Mantis, an LLaMA-3-based LMM that takes interleaved text and images as input, trained on Mantis-Instruct with academic-level resources (i.e., 36 hours on 16xA100-40G).

🚀 Mantis achieves state-of-the-art performance on five multi-image benchmarks (NLVR2, Q-Bench, BLINK, MVBench, Mantis-Eval) while maintaining strong single-image performance on par with CogVLM and Emu2.

🔥 News

Installation

conda create -n mantis python=3.10
conda activate mantis
pip install -e .
# install flash-attention
pip install flash-attn --no-build-isolation

Inference

You can run inference with the following command:

cd examples
python run_mantis.py
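
If you prefer to call the model from your own Python script, the sketch below shows roughly what multi-image generation looks like. It is a minimal sketch, not the official example: the hub id TIGER-Lab/Mantis-8B-siglip-llama3, the bare <image>-placeholder prompt, and the stock transformers LLaVA classes are assumptions; examples/run_mantis.py remains the supported entry point.

# Minimal multi-image inference sketch (assumptions noted above; see examples/run_mantis.py for the official script)
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "TIGER-Lab/Mantis-8B-siglip-llama3"  # assumed Hugging Face hub id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# One <image> placeholder per input image; the example script applies the model's chat template for you.
images = [Image.open("image_1.jpg"), Image.open("image_2.jpg")]
prompt = "<image> <image> What are the differences between the two images?"
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))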

Training

Install the requirements with the following command:

pip install -e .[train,eval]
cd mantis/train

Our training scripts follow the coding format and model structure of Hugging Face. Unlike the LLaVA GitHub repo, our models can be loaded directly from the Hugging Face model hub.
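
In practice this means a checkpoint loads with nothing more than from_pretrained, with no weight-conversion step. A minimal sketch, assuming the hub id TIGER-Lab/Mantis-8B-siglip-llama3 and that the checkpoint resolves through the standard auto classes:

# Sketch: load a Mantis checkpoint directly from the Hugging Face hub (hub id and auto classes are assumptions)
from transformers import AutoConfig, AutoProcessor, AutoModelForVision2Seq

model_id = "TIGER-Lab/Mantis-8B-siglip-llama3"  # assumed hub id
config = AutoConfig.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)
print(type(model).__name__, config.model_type)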

Training examples with different data formats

(These example data are pre-prepared in the data/examples/ folder, so you can check the data format and debug the training script directly. Set CUDA_VISIBLE_DEVICES to the GPU you want to use.)
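
For orientation, one interleaved multi-image sample looks roughly like the Python sketch below. The field names are hypothetical and chosen only for illustration; the files in data/examples/ define the authoritative schema.

# Hypothetical illustration of a single multi-image training sample (not the authoritative schema)
example = {
    "id": "example-0001",
    "images": ["images/example-0001_1.jpg", "images/example-0001_2.jpg"],
    "conversation": [
        {"role": "user",
         "content": "<image> <image> Which of the two images shows a red car?"},
        {"role": "assistant",
         "content": "The second image shows a red car; the first shows a blue one."},
    ],
}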

Training examples with different models

We support training Mantis based on the Fuyu architecture and the LLaVA architecture. You can train the models with the following commands:

Training Mantis based on LLaMA3 with CLIP/SigLIP encoder:

Training Mantis based on Fuyu-8B:

Note: see mantis/train/README.md for more details.

All training scripts can be found in mantis/train/scripts.

Evaluation

To reproduce our evaluation results, please check mantis/benchmark/README.md

Data

Downloading

You can easily download and prepare Mantis-Instruct with the following command (downloading and extracting may take about an hour):

python data/download_mantis_instruct.py --max_workers 8
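
As an alternative way to inspect the text portion of the data, the subsets can also be browsed from the hub with the datasets library. This is a sketch under the assumption that the dataset is hosted at TIGER-Lab/Mantis-Instruct with per-source subset names such as "nlvr2"; the download script above remains the supported way to fetch and extract the images.

# Sketch: peek at one Mantis-Instruct subset from the Hugging Face hub (repo id and subset name are assumptions)
from datasets import load_dataset

ds = load_dataset("TIGER-Lab/Mantis-Instruct", "nlvr2", split="train")
print(ds)       # number of rows and column names
print(ds[0])    # one interleaved multi-image sample (text fields plus image references)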

Model Zoo

Mantis Models

Our trained Mantis models are available on the 🤗 Hugging Face model hub.

Run models

Chat CLI

We provide a simple chat CLI for Mantis models. You can run the following command to chat with Mantis-8B-siglip-llama3:

python examples/chat_mantis.py

Intermediate Checkpoints

Intermediate checkpoints obtained after pre-training the multi-modal projectors are also available for reproducibility. Please note that these checkpoints still need further fine-tuning on Mantis-Instruct; they are not working models on their own.

Acknowledgement

Star History


Citation

@article{Jiang2024MANTISIM,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
  journal={Transactions on Machine Learning Research},
  year={2024},
  volume={2024},
  url={https://openreview.net/forum?id=skLtdUVaJa}
}