Francis-Rings / ILA


[ICCV'2023 Oral] Implicit Temporal Modeling with Learnable Alignment for Video Recognition

This is an official implementation of ILA, a new temporal modeling method for video action recognition.

Implicit Temporal Modeling with Learnable Alignment for Video Recognition
accepted by ICCV 2023
Shuyuan Tu, Qi Dai, Zuxuan Wu, Zhi-Qi Cheng, Han Hu, Yu-Gang Jiang

[arxiv] [pdf] [supp] [slides]

(Figure: ILA performance)

News

Environment Setup

To set up the environment, run the following commands:

pip install torch==1.11.0
pip install torchvision==0.12.0
pip install pathlib
pip install mmcv-full
pip install decord
pip install ftfy
pip install einops
pip install termcolor
pip install timm
pip install regex
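If preferred, the same dependencies can be installed in a single command (a convenience equivalent of the list above):

pip install torch==1.11.0 torchvision==0.12.0 pathlib mmcv-full decord ftfy einops termcolor timm regex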

Install Apex as follows:

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
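If the CUDA extension build fails (for example, due to a CUDA/compiler version mismatch), Apex can also be installed in Python-only mode; this is a fallback suggestion rather than part of the original instructions, and it skips the fused CUDA kernels:

pip install -v --disable-pip-version-check --no-cache-dir ./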

Note that this branch is for Kinetics-400 (K400). For Something-Something v2 (SSv2), please refer to the SSv2 branch.

Data Preparation

To download the Kinetics datasets, you can refer to mmaction2 or CVDF. Something-Something v2 can be obtained from the official website.

Due to limited storage, we decode the videos on the fly using decord.

We provide the following way to organize the dataset:
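For example, a typical layout keeps the raw videos together with train/val annotation lists (this sketch assumes an mmaction2-style list file mapping each video path to a label id; the exact annotation format expected by this repo may differ):

data/kinetics400/
├── train/                # raw .mp4 videos
├── val/
├── train.txt             # each line: relative/path/to/video.mp4 label_id
└── val.txt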

Train

The training configurations are located in configs. For example, you can run the following command to train ILA-ViT-B/16 with 8 frames on Something-Something v2.

python -m torch.distributed.launch --nproc_per_node=8 main.py -cfg configs/ssv2/16_8.yaml --output /PATH/TO/OUTPUT --accumulation-steps 8
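The command above assumes 8 GPUs. On fewer GPUs, you can raise --accumulation-steps proportionally to keep the effective batch size roughly constant (a suggested adjustment, not from the original instructions); for example, on 4 GPUs:

python -m torch.distributed.launch --nproc_per_node=4 main.py -cfg configs/ssv2/16_8.yaml --output /PATH/TO/OUTPUT --accumulation-steps 16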

Note:

Test

For example, you can run the following command to evaluate ILA-ViT-B/16 with 8 frames on Something-Something v2.

python -m torch.distributed.launch --nproc_per_node=8 main.py -cfg configs/ssv2/16_8.yaml --output /PATH/TO/OUTPUT --only_test --resume /PATH/TO/CKPT --opts TEST.NUM_CLIP 4 TEST.NUM_CROP 3
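The --opts TEST.NUM_CLIP 4 TEST.NUM_CROP 3 setting presumably evaluates 4 temporal clips × 3 spatial crops (12 views per video). For a quicker sanity check, a single-view evaluation can be run with the same flags (a suggested variant; expect lower accuracy than the multi-view protocol):

python -m torch.distributed.launch --nproc_per_node=8 main.py -cfg configs/ssv2/16_8.yaml --output /PATH/TO/OUTPUT --only_test --resume /PATH/TO/CKPT --opts TEST.NUM_CLIP 1 TEST.NUM_CROP 1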

Note:

Main Results in the Paper

This is the original implementation released for open-source use. The following table reports the accuracy from the original paper.

Bibtex

If this project is useful for you, please consider citing our paper:

@inproceedings{tu2023ila,
  title={Implicit Temporal Modeling with Learnable Alignment for Video Recognition},
  author={Tu, Shuyuan and Dai, Qi and Wu, Zuxuan and Cheng, Zhi-Qi and Hu, Han and Jiang, Yu-Gang},
  booktitle={ICCV},
  year={2023}
}

Acknowledgements

Parts of the code are borrowed from mmaction2, Swin, and X-CLIP. Sincere thanks for their wonderful work.