
πŸ”₯ LLaVA-MORE πŸ”₯ Enhancing Visual Instruction Tuning with LLaMA 3.1

[![HuggingFace](https://img.shields.io/badge/πŸ€—_LLaVA_MORE-1d8c0a)](https://huggingface.co/collections/aimagelab/llava-more-66aa6c49167e190bf27e7be4) [![HuggingFace](https://img.shields.io/badge/πŸ€—_AImageLab_-white)](https://huggingface.co/aimagelab) [![Website](https://img.shields.io/badge/AImageLab-red)](https://aimagelab.ing.unimore.it/imagelab)
#### [Federico Cocchi](https://federico1-creator.github.io/Federico_Cocchi/), [Nicholas Moratelli](https://nicholasmoratelli.github.io), [Davide Caffagni](https://github.com/dcaffo98), [Sara Sarto](https://github.com/sarasarto),
#### [Marcella Cornia](https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=90), [Lorenzo Baraldi](https://www.lorenzobaraldi.com/), and [Rita Cucchiara](https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=1)

Citation

If you make use of our work, please cite our repo:

@misc{cocchi2024llavamore,
      title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
      author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
      url={https://github.com/aimagelab/LLaVA-MORE},
      year={2024}
}

Table of Contents

  1. Overview
  2. Performance
  3. Checkpoints
  4. Installation
  5. Training
  6. Inference
  7. Acknowledgments

Overview

LLaVA-MORE enhances the well-known LLaVA architecture by integrating, for the first time, LLaMA 3.1 as the language model. We are publicly releasing the stage-one and stage-two checkpoints of our first model, which has 8B parameters.

To further support the research community in enhancing Multimodal LLM performance, we are also releasing the training code and scripts for distributed training.

Remember to star the repository to stay updated on future releases πŸ€—!

Performance

In this section, we present the performance of our model compared to other versions of LLaVA across different multimodal datasets.

Benchmarks and Comparisons on Multimodal Instruction Datasets from the Literature

| Model Name | Text-VQA* | Science-QA | AI2D | SEED-vid | SEED-all | SEED-img | MMMU | MMBench-Cn | MMBench-En | POPE | GQA | MME-P | MME-C |
|----------------------|:---------:|:----------:|:----:|:--------:|:--------:|:--------:|:----:|:----------:|:----------:|:----:|:---:|:------:|:-----:|
| LLaVA-v1.5-7B | 58.2 | 69.0 | 56.4 | 42.0 | 61.6 | 66.8 | 34.2 | 56.5 | 65.3 | 85.6 | 62.4 | 1474.3 | 314.6 |
| LLaVA-v1.5-LLaMA3-8B | 57.6 | 74.2 | 60.7 | 42.0 | 64.3 | 70.1 | 37.3 | 65.4 | 70.3 | 85.4 | 63.5 | 1544.4 | 330.3 |
| **LLaVA-MORE-8B** | 58.4 | 76.3 | 61.8 | 42.4 | 64.1 | 69.8 | 39.4 | **68.2** | 72.4 | 85.1 | 63.6 | 1531.5 | **353.3** |
| **LLaVA-MORE-8B-S2** | 60.9 | 76.7 | 62.2 | 42.3 | 64.2 | 69.9 | 38.7 | 65.8 | 71.1 | **86.5** | 64.5 | **1563.8** | 293.2 |
| **LLaVA-MORE-8B-siglip** | 62.1 | **77.5** | **63.6** | **46.1** | **65.8** | **71.0** | 39.8 | **68.2** | **73.1** | 86.1 | 64.6 | 1531.0 | 315.4 |
| **LLaVA-MORE-8B-S2-siglip** | **63.5** | 77.1 | 62.7 | 44.7 | 65.5 | **71.0** | **40.0** | 68.0 | 71.8 | 86.0 | **64.9** | 1541.4 | 336.4 |

* The Text-VQA results are computed with OCR tokens included in the input prompt.

Checkpoints

In the table below, you can find links to our πŸ€— Hugging Face models.

| Model Name | πŸ€— Hugging Face | Summary |
|------------|-----------------|---------|
| LLaVA_MORE-llama_3_1-8B-pretrain | Hugging Face Model | Pretrained on LCS-558K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-S2-pretrain | Hugging Face Model | Pretrained on LCS-558K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-S2-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-siglip-pretrain | Hugging Face Model | Pretrained on LCS-558K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-siglip-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-S2-siglip-pretrain | Hugging Face Model | Pretrained on LCS-558K and using LLaMA 3.1 8B Instruct as LLM backbone |
| LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K and using LLaMA 3.1 8B Instruct as LLM backbone |
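
If you prefer to download a checkpoint locally before running inference, the sketch below uses the huggingface_hub client. The repository ID comes from the table above; the local directory and token placeholder are illustrative assumptions.

# Minimal sketch: download a released checkpoint from the Hugging Face Hub.
# The repo_id comes from the table above; local_dir and the token value are placeholders.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning",
    local_dir="checkpoints/LLaVA_MORE-llama_3_1-8B-finetuning",
    token="hf_read_token",  # replace with your Hugging Face read token
)
print(f"Checkpoint files downloaded to: {local_dir}")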

Installation

To create the conda environment named `more`, use the following instructions. This environment includes all the packages needed to run the code in this repo.

conda create -n more python==3.8.16
conda activate more
pip install -r requirements.txt

Note that the requirements are heavily inspired by the original LLaVA repo.
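
As a quick sanity check after installation (a minimal sketch, assuming PyTorch is pulled in by the requirements as in the original LLaVA setup), you can verify that the environment sees your GPU:

import torch

# Print the installed PyTorch version and check that a CUDA device is visible.
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())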

Training

To help the community train complex systems in distributed scenarios, we are publicly releasing not only the source code but also the bash scripts needed to train LLaVA-MORE on HPC facilities with a SLURM scheduler.

To further extend the reproducibility of our approach, we are also releasing the wandb logs of the training runs.

Pretraining

sbatch scripts/more/11_pretrain_llama_31_acc_st_1.sh

Finetuning

sbatch scripts/more/12_finetuning_llama_31_acc_st_1.sh

Visual Backbones

As mentioned before, LLaVA-MORE introduces the use of LLaMA 3.1 within the LLaVA architecture for the first time. However, this repository goes beyond that single enhancement. We have also incorporated the ability to use different visual backbones, such as SigLIP, and various methods for managing image resolutions (S2).

With that in mind, you can view this repo as an effort to expand the study of Multimodal LLMs in multiple directions and as a starting point for adding new features that improve the connection between images and language.

You can find more references in this folder: scripts/more.

Inference

You can try our LLaVA-MORE with LLaMA 3.1 on the image-to-text task using the following script.

source activate more
cd local/path/LLaVA-MORE
export PYTHONPATH=.

# Hugging Face read token and tokenizer/model path
export HF_TOKEN=hf_read_token
export TOKENIZER_PATH=aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning 

python -u llava/eval/run_llava.py

If you run into out-of-memory problems, consider loading the model weights in 8-bit (`load_in_8bit=True`).
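
For reference, here is a minimal 8-bit loading sketch. It assumes this fork keeps the original LLaVA builder interface (llava.model.builder.load_pretrained_model and llava.mm_utils.get_model_name_from_path); check llava/eval/run_llava.py for the exact entry point used in this repo.

# Minimal sketch: load LLaVA-MORE weights in 8-bit to reduce GPU memory usage.
# Assumes this fork keeps the original LLaVA builder interface; verify against llava/eval/run_llava.py.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
    load_8bit=True,  # quantize weights to 8-bit at load time
)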

Acknowledgments

We thank the LLaVA team for open-sourcing a modular codebase to extend and train different models within the LLaVA family. We are also happy users of the lmms-eval library, which has significantly reduced the evaluation time of our checkpoints across different datasets.

We also thank CINECA for the availability of high-performance computing resources used to train LLaVA-MORE. This work is supported by the PNRR-M4C2 project FAIR - Future Artificial Intelligence Research and by the PNRR project ITSERR - Italian Strengthening of ESFRI RI Resilience.

In case you face any issues or have any questions, please feel free to create an issue. Additionally, we welcome you to open a pull request to integrate new features and contribute to our project.