We introduce the Dense Connector: a simple, effective, and plug-and-play vision-language connector that significantly enhances existing MLLMs by leveraging multi-layer visual features, with minimal additional computational overhead! We hope this work provides useful insights and serves as a basic building block for future MLLM development!
The Dense Connector leverages multi-layer visual features to enrich the visual representations and strengthen the visual perception capabilities of Multimodal Large Language Models (MLLMs), and it can be easily integrated into existing MLLMs. We provide three instantiations of the Dense Connector: Sparse Token Integration (STI), Sparse Channel Integration (SCI), and Dense Channel Integration (DCI). Dense Channel Integration achieves the best results.
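The sketch below illustrates the channel-wise fusion idea behind Dense Channel Integration: features from multiple ViT layers are aggregated into groups, concatenated with the final-layer feature along the channel dimension, and projected into the LLM embedding space. This is a minimal illustration only; the class name, layer grouping, feature dimensions, and two-layer MLP projector are assumptions here, not the repository's exact implementation.

```python
# Minimal sketch of channel-wise multi-layer fusion in the spirit of
# Dense Channel Integration (DCI). Grouping, dimensions, and the MLP
# projector below are illustrative assumptions.
import torch
import torch.nn as nn


class DenseChannelIntegration(nn.Module):
    def __init__(self, vit_dim=1024, llm_dim=4096, num_groups=2):
        super().__init__()
        self.num_groups = num_groups
        # Project the channel-concatenated features to the LLM embedding size.
        self.projector = nn.Sequential(
            nn.Linear(vit_dim * (num_groups + 1), llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, hidden_states):
        # hidden_states: list of per-layer visual features, each [B, N, vit_dim].
        final_feat = hidden_states[-1]
        earlier = hidden_states[:-1]
        group_size = len(earlier) // self.num_groups
        # Average the features within each group of adjacent layers.
        group_feats = [
            torch.stack(earlier[i * group_size:(i + 1) * group_size]).mean(dim=0)
            for i in range(self.num_groups)
        ]
        # Concatenate along the channel dimension, then project for the LLM.
        dense = torch.cat(group_feats + [final_feat], dim=-1)
        return self.projector(dense)


# Example with random features standing in for the hidden states of a ViT-L:
# feats = [torch.randn(1, 576, 1024) for _ in range(25)]
# visual_tokens = DenseChannelIntegration()(feats)  # -> [1, 576, 4096]
```

Because the extra information travels along the channel dimension, the number of visual tokens fed to the LLM stays the same, which is why the added compute is small.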
Please follow the instructions below to install the required packages.
Clone this repository:
```bash
git clone https://github.com/HJYao00/DenseConnector.git
cd DenseConnector
```
Install the package:
```bash
conda create -n dc python=3.10 -y
conda activate dc
cd DenseConnector
pip install --upgrade pip
pip install -e .
```
Install additional packages for training the Dense Connector:
```bash
pip install ninja
pip install flash-attn --no-build-isolation
```
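After installation, the snippet below is an optional sanity check; it assumes PyTorch is pulled in by `pip install -e .` and simply confirms that the packages installed above import cleanly and that CUDA is visible.

```python
# Optional sanity check for the freshly created "dc" environment.
import torch
import flash_attn  # installed above for training with FlashAttention

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)
```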
Please refer to the documentation for dataset preparation and training details.
We evaluate the Dense Connector across 19 diverse benchmarks, including 11 image benchmarks and 8 video benchmarks. The testing procedures for both images and videos can be found here.
Please visit our Model Zoo to access all publicly available Dense Connector checkpoints. We scale the LLM from 2.7B to 70B parameters, incorporating the latest open-source large language models, Llama3-8B-Instruct and Llama3-70B-Instruct.
We provide several dialogue examples, with additional results available in the paper.
If you find this repository useful, please consider giving it a star🌟 and citing🖇️ our paper.
```bibtex
@article{yao2024dense,
  title={Dense Connector for MLLMs},
  author={Yao, Huanjin and Wu, Wenhao and Yang, Taojiannan and Song, YuXin and Zhang, Mengxi and Feng, Haocheng and Sun, Yifan and Li, Zhiheng and Ouyang, Wanli and Wang, Jingdong},
  journal={Advances in Neural Information Processing Systems},
  year={2024}
}
```
We extend our gratitude to the open-source efforts of LLaVA, Mini-Gemini and FreeVA.