
# Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos

🏑 Project Page | 📄 Paper | 🤗 Dataset | 🤗 Checkpoints

## Abstract

### Problem Scenario

This paper considers the problem of Multi-Hop Video Question Answering (MH-VidQA) in long-form egocentric videos. The task requires not only answering visual questions, but also localizing multiple relevant time intervals within the video as visual evidence.

### Baseline Method

We develop an automated pipeline to mine multi-hop question-answering pairs with associated temporal evidence, enabling the construction of a large-scale dataset for instruction tuning. We then propose a novel architecture, termed GeLM, that leverages the world-knowledge reasoning capabilities of multi-modal large language models (LLMs) while incorporating a grounding module that retrieves temporal evidence from the video via flexible grounding tokens.
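
To make the grounding mechanism more concrete, here is a minimal sketch of how a grounding token's hidden state could be projected to a normalized time interval. The names (`GroundingHead`, the hidden size, the two-layer projection) are illustrative assumptions, not the actual GeLM implementation; see `baseline/gelm/` for the real code.

```python
import torch
import torch.nn as nn


class GroundingHead(nn.Module):
    """Hypothetical sketch: project the LLM hidden state of each generated
    grounding token to a normalized (start, end) time interval."""

    def __init__(self, hidden_size: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 2),  # (start, end), squashed to [0, 1]
        )

    def forward(self, grounding_hidden_states: torch.Tensor) -> torch.Tensor:
        # grounding_hidden_states: (num_grounding_tokens, hidden_size),
        # one row per grounding token the model emitted in its answer.
        intervals = torch.sigmoid(self.proj(grounding_hidden_states))
        # Ensure start <= end within each predicted interval.
        return torch.sort(intervals, dim=-1).values


# Two grounding tokens -> two candidate evidence intervals, rescaled by
# the video duration (e.g., a 180-second clip) during post-processing.
head = GroundingHead()
print(head(torch.randn(2, 4096)) * 180.0)
```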

## 📂 Directory Structure

```
MultiHop-EgoQA/
├── baseline/                       # Our Baseline Method
│   ├── checkpoints/                # Checkpoints of LLMs
│   │   ├── vicuna-v1-3-7b/
│   ├── datasets/                   # Save path of datasets
│   │   ├── multihop_qa/
│   │   │   ├── features/
│   │   │   ├── train_annotations.json
│   │   ├── activitynet-captions/
│   │   │   ├── intern_feature/
│   │   │   ├── val_1.json
│   │   ├── temporal_reasoning/
│   ├── gelm/                       # Implementation of the GeLM model
│   ├── llava/                      # LLaVa code base
│   ├── scripts/                    # Scripts for evaluating the baseline method
│   │   ├── eval_multihop_qa.sh     # Evaluate GeLM on MultiHop-EgoQA
│   │   └── eval_rtl.sh             # Evaluate GeLM on ActivityNet-RTL
│   └── pyproject.toml              # Configuration file
│
├── benchmark/                      # Benchmarking tools and metrics
│   ├── metrics/                    # Metrics calculation
│   └── zero-shot-inference/        # Zero-shot inference codes
```

## Datasets

See Dataset Preparation.
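
For orientation, each training sample pairs a multi-hop question with its answer and the temporal evidence that grounds it. The snippet below shows a hypothetical layout of one entry; the field names are assumptions for illustration, not the actual schema of `train_annotations.json`.

```python
# Hypothetical illustration of one instruction-tuning sample; the real
# schema in datasets/multihop_qa/train_annotations.json may differ.
example_annotation = {
    "video_id": "example_video",
    "question": "Where did I put the keys after I locked the drawer?",
    "answer": "On the kitchen counter.",
    "evidence_intervals": [   # multiple grounded segments, in seconds
        [12.5, 18.0],         # locking the drawer
        [95.2, 101.7],        # placing the keys
    ],
}
```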

## Baseline Method

Training setup: Ubuntu 18.04, CUDA 12.1, 4x Nvidia H800 (80GB)

### Training

1. Installing the environment. (A quick sanity check of the installation is sketched after this list.)

    ```bash
    cd baseline

    conda create -n gelm python=3.10 -y
    conda activate gelm

    pip install --upgrade pip  # enable PEP 660 support
    pip install -e .

    pip install ninja
    pip install flash-attn --no-build-isolation
    ```

2. Downloading the LLM checkpoints and saving them under `checkpoints/`.

    ```bash
    git clone https://huggingface.co/lmsys/vicuna-13b-v1.3
    ```

3. Training.

    ```bash
    # Training on MultiHop-EgoQA
    bash scripts/finetune_multihop_qa.sh

    # Training on ActivityNet-RTL
    bash scripts/finetune_rtl.sh

    # Training on both MultiHop-EgoQA and ActivityNet-RTL
    bash scripts/finetune_mixed.sh
    ```
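
The sanity check mentioned in step 1 can be as simple as the following sketch (assuming a CUDA-capable machine; this script is not part of the repository). It confirms that PyTorch sees the GPUs and that flash-attn was built correctly:

```python
# Sanity check for the gelm environment: CUDA visibility and flash-attn.
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())

try:
    import flash_attn
    print("flash-attn version:", flash_attn.__version__)
except ImportError as err:
    print("flash-attn not importable:", err)
```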


### Checkpoints

We provide two GeLM-7B checkpoints, trained on MultiHop-EgoQA and ActivityNet-RTL respectively, on [Hugging Face](https://huggingface.co/SurplusDeficit/GeLM).
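
If you prefer to fetch the weights programmatically instead of via `git clone`, a minimal sketch using `huggingface_hub` (assuming the package is installed in your environment) would be:

```python
# Download the released GeLM checkpoints from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="SurplusDeficit/GeLM")
print("Checkpoints downloaded to:", local_dir)
```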

### Evaluation

1. Evaluation on MultiHop-EgoQA

    ```bash
    cd benchmark/metrics
    bash evaluate.sh
    ```

2. Evaluation on ActivityNet-RTL

    ```bash
    cd baseline
    bash scripts/eval_rtl.sh
    ```
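
Temporal grounding quality in this setting is commonly scored by interval overlap between predicted and ground-truth evidence segments. As a rough illustration only, and not the metric code shipped under `benchmark/metrics`, a temporal IoU can be computed as follows:

```python
# Illustrative temporal IoU between a predicted and a ground-truth
# segment, each given as (start, end) in seconds.
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


print(temporal_iou((12.0, 18.0), (15.0, 20.0)))  # 0.375
```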

## 🫡 Acknowledgements