
[ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model
Apache License 2.0


Introduction

GroundingGPT is an end-to-end multimodal grounding model that accurately comprehends inputs and possesses robust grounding capabilities across multiple modalities, including images, audio, and video. To address the issue of limited grounding data, we construct a diverse, high-quality multimodal training dataset enriched with spatial and temporal information, which serves as a valuable resource for further advances in this field. Extensive experimental evaluations validate the effectiveness of GroundingGPT on understanding and grounding tasks across modalities.

More details are available on our project page.


The overall structure of GroundingGPT. Blue boxes represent video input; yellow boxes represent image input.
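Grounding models of this kind typically express spatial grounding as normalized bounding-box coordinates embedded in the generated text. The exact output format used by GroundingGPT may differ; the sketch below (helper name and coordinate convention are illustrative, not from this repository) shows how such a normalized `[x1, y1, x2, y2]` box would map back to pixel coordinates:

```python
def denormalize_box(box, width, height):
    """Convert a normalized [x1, y1, x2, y2] box (values in [0, 1])
    into pixel coordinates for an image of the given size."""
    x1, y1, x2, y2 = box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# e.g. a model response containing "... at [0.10, 0.25, 0.50, 0.75] ..."
# maps to pixel coordinates on a 640x480 image:
print(denormalize_box([0.10, 0.25, 0.50, 0.75], width=640, height=480))
```

Temporal grounding for video can be handled analogously, with normalized start/end times scaled by the clip duration.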

News

Dependencies and Installation

    git clone https://github.com/lzw-lzw/GroundingGPT.git
    cd GroundingGPT
    conda create -n groundinggpt python=3.10 -y
    conda activate groundinggpt
    pip install -r requirements.txt 
    pip install flash-attn --no-build-isolation

Training

Training model preparation

Training dataset preparation

Training


Inference

Demo

Acknowledgement

Citation

If you find GroundingGPT useful for your research and applications, please cite it using this BibTeX:

@inproceedings{li2024groundinggpt,
  title={{GroundingGPT}: Language-Enhanced Multi-modal Grounding Model},
  author={Li, Zhaowei and Xu, Qi and Zhang, Dong and Song, Hang and Cai, Yiqing and Qi, Qi and Zhou, Ran and Pan, Junting and Li, Zefeng and Tu, Vu and others},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={6657--6678},
  year={2024}
}