MRNet

Maskable Retentive Network for Video Moment Retrieval

Source code for our ACM MM 2024 paper

Task Example: The goal of both MR tasks, NLMR (natural language moment retrieval) and SLMR (spoken language moment retrieval), is to predict the temporal boundaries $(\tau_{start}, \tau_{end})$ of the target moment described by a given query $q$ (text or audio modality).
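For instance, given the query "a person pours a glass of water" over a two-minute video, the model should output a pair such as $(\tau_{start}, \tau_{end}) = (12.4s, 18.9s)$ (values here are purely illustrative).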

 Two important characteristics:
 1) Temporal association between video clips: the temporal correlation between two video clips weakens as their temporal distance grows (see the sketch after this list);
 2) Redundant background interference: the background contains a great deal of redundant information that interferes with recognizing the current event, and this redundancy is even worse in long videos.
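Characteristic 1 is what a retention-style decay encodes: the weight between two clips shrinks exponentially with their temporal distance. A minimal sketch of such a decay matrix, assuming a symmetric form and a hand-picked decay rate (our illustration, not the repository's exact implementation):

    import torch

    def decay_matrix(num_clips, gamma=0.9):
        """D[n, m] = gamma ** |n - m|: the farther apart two clips are in
        time, the smaller their mutual weight (characteristic 1)."""
        idx = torch.arange(num_clips)
        dist = (idx.unsqueeze(1) - idx.unsqueeze(0)).abs().float()
        return gamma ** dist

    # decay_matrix(4, gamma=0.5) ->
    # tensor([[1.0000, 0.5000, 0.2500, 0.1250],
    #         [0.5000, 1.0000, 0.5000, 0.2500],
    #         [0.2500, 0.5000, 1.0000, 0.5000],
    #         [0.1250, 0.2500, 0.5000, 1.0000]])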

Approach

The architecture of the Maskable Retentive Network (MRNet). We adopt modality-specific attention modes: we set Unlimited Attention for the language-related attention regions to maximize cross-modal mutual guidance, and apply a new Maskable Retention to the video-to-video branch $\mathcal{A}(v\to v)$ for enhanced video sequence modeling.
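A minimal sketch of how these modality-specific modes can be assembled into a single multiplicative mask over a joint [text; video] sequence (the shapes, the symmetric decay, and the drop criterion are our assumptions; the actual MRNet layers differ in detail):

    import torch

    def modality_specific_mask(num_text, num_clips, gamma=0.9, drop=None):
        """Language-related regions (q->q, q->v, v->q) get Unlimited Attention
        (all ones); the video-to-video block decays with temporal distance and
        can additionally be masked, i.e. Maskable Retention."""
        total = num_text + num_clips
        mask = torch.ones(total, total)               # Unlimited Attention
        idx = torch.arange(num_clips)
        dist = (idx.unsqueeze(1) - idx.unsqueeze(0)).abs().float()
        vv = gamma ** dist                            # retention decay on v->v
        if drop is not None:                          # e.g. redundant background entries,
            vv = vv * (~drop).float()                 # a boolean (num_clips, num_clips) mask
        mask[num_text:, num_text:] = vv
        return mask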


Download and prepare the datasets

1. Download the datasets (Optional).

2. For convenience, the extracted input data features can be downloaded directly from baiduyun (passcode: d4yl).

3. Text and audio feature extraction (Optional; a minimal encoding sketch follows this list).

 cd preprocess
 python text_encode.py
 python audio_encode.py

4. Set your own dataset path in the following .py file.

  ret/config/paths_catalog.py

5. Or prepare the files in the following structure (Optional).

  MRNet
  ├── configs
  ├── dataset
  ├── ret
  ├── data
  │   ├── activitynet
  │   │   ├── *text features
  │   │   ├── *audio features
  │   │   └── *video c3d features
  │   ├── charades
  │   │   ├── *text features
  │   │   └── *video i3d features
  │   └── tacos
  │       ├── *text features
  │       └── *video c3d features
  ├── train_net.py
  ├── test_net.py
  └── ···
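For step 3, here is a minimal sketch of query feature extraction, assuming a BERT-style text encoder; the repository's preprocess/text_encode.py (and audio_encode.py, which presumably builds on librosa) may use different models and pooling:

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Sketch only: the model choice and per-token (unpooled) output are
    # assumptions, not necessarily what preprocess/text_encode.py does.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

    @torch.no_grad()
    def encode_query(text):
        """Return per-token text features for one query, shape (num_tokens, 768)."""
        inputs = tokenizer(text, return_tensors="pt")
        return encoder(**inputs).last_hidden_state.squeeze(0)

    feats = encode_query("a person pours a glass of water")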

Dependencies

pip install yacs h5py terminaltables tqdm librosa transformers
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch

Training

ActivityNet

1) python train_net.py --config-file checkpoints/best/activity/config.yml

TACoS

1) Copy the TACoS-specific model file over the default one:

    cd ret/modeling/ret_model
    cp ret_model_tacos.py ret_model.py

2) From the repository root:

    python train_net.py --config-file checkpoints/best/tacos/config.yml

Charades

Please wait for the update.


Testing

ActivityNet

1) Download the model weight file from Google Drive to the checkpoints/best/activity folder.

2) python test_net.py --config-file checkpoints/best/activity/config.yml --ckpt checkpoints/best/activity/pool_model_14.pth

TACoS

1) Download the model weight file from Google Drive to the checkpoints/best/tacos folder.

2) As for training, copy the TACoS-specific model file over the default one:

    cd ret/modeling/ret_model
    cp ret_model_tacos.py ret_model.py

3) From the repository root:

    python test_net.py --config-file checkpoints/best/tacos/config.yml --ckpt checkpoints/best/tacos/pool_model_110e.pth

Charades

Please wait for the update.


LICENSE

The annotation files and many parts of the implementation are borrowed from MMN. Our code is released under the MIT license.