TencentARC / ST-LLM

[ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners"

ST-LLM

ST-LLM: Large Language Models Are Effective Temporal Learners

[![hf](https://img.shields.io/badge/🤗-Hugging%20Face-blue.svg)](https://huggingface.co/farewellthree/ST_LLM_weight/tree/main) [![arXiv](https://img.shields.io/badge/Arxiv-2404.00308-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2404.00308) [![License](https://img.shields.io/badge/Code%20License-Apache2.0-yellow)](https://github.com/farewellthree/ST-LLM/blob/main/LICENSE)


News :loudspeaker:

Introduction :bulb:

| Method | MVBench | VcgBench | | | | | | VideoQABench | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | Avg | Correct | Detail | Context | Temporal | Consist | MSVD | MSRVTT | ANet |
| VideoChatGPT | 32.7 | 2.38 | 2.40 | 2.52 | 2.62 | 1.98 | 2.37 | 64.9 | 49.3 | 35.7 |
| LLaMA-VID | - | 2.89 | 2.96 | 3.00 | 3.53 | 2.46 | 2.51 | 69.7 | 57.7 | 47.4 |
| Chat-UniVi | - | 2.99 | 2.89 | 2.91 | 3.46 | 2.89 | 2.81 | 65.0 | 54.6 | 45.8 |
| VideoChat2 | 51.1 | 2.98 | 3.02 | 2.88 | 3.51 | 2.66 | 2.81 | 70.0 | 54.1 | 49.1 |
| ST-LLM | 54.9 | 3.15 | 3.23 | 3.05 | 3.74 | 2.93 | 2.81 | 74.6 | 63.2 | 50.9 |

Demo 🤗

Please download the conversation weights from here and follow the instructions in Installation first. Then, run the Gradio demo:

CUDA_VISIBLE_DEVICES=0 python3 demo_gradio.py --ckpt-path /path/to/STLLM_conversation_weight
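The two steps above (fetch the weights, then launch the demo) can be sketched as a single script. This is a hypothetical sketch, not part of the official instructions: the repo id is taken from the Hugging Face badge above, and the `--local-dir` target name is an assumption.

```shell
# Assumption: weights are pulled with huggingface-cli; the local
# directory name is arbitrary and only needs to match --ckpt-path.
huggingface-cli download farewellthree/ST_LLM_weight --local-dir ./STLLM_conversation_weight

# Launch the Gradio demo on GPU 0, pointing at the downloaded weights.
CUDA_VISIBLE_DEVICES=0 python3 demo_gradio.py --ckpt-path ./STLLM_conversation_weight
```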

We have also prepared a local script that is easy to modify: demo.py

Examples 👀

Installation 🛠️

Clone our repository, create a Python environment, and activate it via the following commands:

git clone https://github.com/farewellthree/ST-LLM.git
cd ST-LLM
conda create --name stllm python=3.10
conda activate stllm
pip install -r requirement.txt

Training & Validation :bar_chart:

Instructions for data preparation, training, and evaluation can be found in trainval.md.

Acknowledgement 👍

Citation ✏️

If you find the code and paper useful for your research, please consider starring this repo and citing our papers:

@article{liu2023one,
  title={One for all: Video conversation is feasible without video instruction tuning},
  author={Liu, Ruyang and Li, Chen and Ge, Yixiao and Shan, Ying and Li, Thomas H and Li, Ge},
  journal={arXiv preprint arXiv:2309.15785},
  year={2023}
}
@article{liu2024stllm,
  title={ST-LLM: Large Language Models Are Effective Temporal Learners},
  author={Liu, Ruyang and Li, Chen and Tang, Haoran and Ge, Yixiao and Shan, Ying and Li, Ge},
  journal={arXiv preprint arXiv:2404.00308},
  year={2024}
}