This is the official PyTorch implementation of our ICML 2021 paper, Is Space-Time Attention All You Need for Video Understanding?. In this repository, we provide PyTorch code for training and testing our proposed TimeSformer model. TimeSformer provides an efficient video classification framework that achieves state-of-the-art results on several video action recognition benchmarks such as Kinetics-400.
If you find TimeSformer useful in your research, please use the following BibTeX entry for citation.
@inproceedings{gberta_2021_ICML,
author = {Gedas Bertasius and Heng Wang and Lorenzo Torresani},
title = {Is Space-Time Attention All You Need for Video Understanding?},
booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
month = {July},
year = {2021}
}
We provide TimeSformer models pretrained on Kinetics-400 (K400), Kinetics-600 (K600), Something-Something-V2 (SSv2), and HowTo100M datasets.
name | dataset | # of frames | spatial crop | acc@1 | acc@5 | url |
---|---|---|---|---|---|---|
TimeSformer | K400 | 8 | 224 | 77.9 | 93.2 | model |
TimeSformer-HR | K400 | 16 | 448 | 79.6 | 94.0 | model |
TimeSformer-L | K400 | 96 | 224 | 80.6 | 94.7 | model |
name | dataset | # of frames | spatial crop | acc@1 | acc@5 | url |
---|---|---|---|---|---|---|
TimeSformer | K600 | 8 | 224 | 79.1 | 94.4 | model |
TimeSformer-HR | K600 | 16 | 448 | 81.8 | 95.8 | model |
TimeSformer-L | K600 | 96 | 224 | 82.2 | 95.6 | model |
name | dataset | # of frames | spatial crop | acc@1 | acc@5 | url |
---|---|---|---|---|---|---|
TimeSformer | SSv2 | 8 | 224 | 59.1 | 85.6 | model |
TimeSformer-HR | SSv2 | 16 | 448 | 61.8 | 86.9 | model |
TimeSformer-L | SSv2 | 64 | 224 | 62.0 | 87.5 | model |
name | dataset | # of frames | spatial crop | single clip coverage | acc@1 | url |
---|---|---|---|---|---|---|
TimeSformer | HowTo100M | 8 | 224 | 8.5s | 56.8 | model |
TimeSformer | HowTo100M | 32 | 224 | 34.1s | 61.2 | model |
TimeSformer | HowTo100M | 64 | 448 | 68.3s | 62.2 | model |
TimeSformer | HowTo100M | 96 | 224 | 102.4s | 62.6 | model |
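The single clip coverage values appear to follow directly from the number of frames and the temporal sampling stride: assuming the stride of 32 frames implied by the 8x32 naming convention and 30 fps video, an 8-frame clip spans roughly 8 x 32 / 30 ≈ 8.5 seconds. A minimal sketch of this calculation (the 30 fps frame rate and the stride of 32 are our assumptions, not values read from the code):

# Sketch: single clip coverage ≈ num_frames * sampling_stride / fps.
def clip_coverage_seconds(num_frames, sampling_stride=32, fps=30.0):
    return num_frames * sampling_stride / fps

for frames in (8, 32, 64, 96):
    print(frames, round(clip_coverage_seconds(frames), 1))  # ~8.5, 34.1, 68.3, 102.4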
We note that these models were re-trained using a slightly different implementation than the one used in the paper. Therefore, there might be a small difference in performance compared to the results reported in the paper.
You can load the pretrained models as follows:
import torch
from timesformer.models.vit import TimeSformer
model = TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time', pretrained_model='/path/to/pretrained/model.pyth')
dummy_video = torch.randn(2, 3, 8, 224, 224) # (batch x channels x frames x height x width)
pred = model(dummy_video)  # (2, 400)
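The output is a tensor of unnormalized class scores. For inference, a typical usage pattern (a sketch, not part of the repository's API) is to put the model in eval mode and apply a softmax to obtain class probabilities:

import torch
import torch.nn.functional as F

model.eval()  # disable dropout for inference
with torch.no_grad():
    logits = model(dummy_video)          # (batch, num_classes) class scores
    probs = F.softmax(logits, dim=-1)    # convert scores to probabilities
    top5_prob, top5_idx = probs.topk(5)  # top-5 class probabilities and indices per clip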
First, create a conda virtual environment and activate it:
conda create -n timesformer python=3.7 -y
source activate timesformer
Then, install the following packages:
pip install torchvision
or conda install torchvision -c pytorch
pip install 'git+https://github.com/facebookresearch/fvcore'
pip install simplejson
pip install einops
pip install timm
conda install av -c conda-forge
pip install psutil
pip install scikit-learn
pip install opencv-python
pip install tensorboard
Lastly, build the TimeSformer codebase by running:
git clone https://github.com/facebookresearch/TimeSformer
cd TimeSformer
python setup.py build develop
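After the build, you can optionally run a quick sanity check to confirm that the package imports correctly and that your GPUs are visible to PyTorch (a minimal sketch, using the same import path as the example above):

import torch
from timesformer.models.vit import TimeSformer  # should import without errors after the build

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if CUDA GPUs are visible
print(torch.cuda.device_count())  # number of GPUs PyTorch can see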
Please use the dataset preparation instructions provided in DATASET.md.
Training the default TimeSformer, which uses divided space-time attention and operates on 8-frame clips cropped at 224x224 spatial resolution, can be done using the following command:
python tools/run_net.py \
--cfg configs/Kinetics/TimeSformer_divST_8x32_224.yaml \
DATA.PATH_TO_DATA_DIR path_to_your_dataset \
NUM_GPUS 8 \
TRAIN.BATCH_SIZE 8 \
You may need to pass the location of your dataset on the command line by adding DATA.PATH_TO_DATA_DIR path_to_your_dataset, or you can simply add

DATA:
  PATH_TO_DATA_DIR: path_to_your_dataset

to the YAML config file, so that you do not need to pass it on the command line every time.
If you want to use a smaller number of GPUs, you need to modify the .yaml configuration files in configs/. Specifically, you need to modify the NUM_GPUS, TRAIN.BATCH_SIZE, TEST.BATCH_SIZE, and DATA_LOADER.NUM_WORKERS entries in each configuration file. The BATCH_SIZE entry should be the same as or higher than the NUM_GPUS entry. In configs/Kinetics/TimeSformer_divST_8x32_224_4gpus.yaml, we provide a sample configuration file for a 4-GPU setup.
If you want to experiment with different space-time self-attention schemes, e.g., space-only or joint space-time attention, use the following commands:
python tools/run_net.py \
--cfg configs/Kinetics/TimeSformer_spaceOnly_8x32_224.yaml \
DATA.PATH_TO_DATA_DIR path_to_your_dataset \
NUM_GPUS 8 \
TRAIN.BATCH_SIZE 8 \
and
python tools/run_net.py \
--cfg configs/Kinetics/TimeSformer_jointST_8x32_224.yaml \
DATA.PATH_TO_DATA_DIR path_to_your_dataset \
NUM_GPUS 8 \
TRAIN.BATCH_SIZE 8 \
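The same attention schemes can also be selected through the Python constructor shown earlier via the attention_type argument. A sketch, assuming the strings 'space_only' and 'joint_space_time' (the accepted values are defined in timesformer/models/vit.py, so please verify the exact names there):

from timesformer.models.vit import TimeSformer

# Assumed attention_type strings; check timesformer/models/vit.py for the exact names.
space_only_model = TimeSformer(img_size=224, num_classes=400, num_frames=8,
                               attention_type='space_only')
joint_st_model = TimeSformer(img_size=224, num_classes=400, num_frames=8,
                             attention_type='joint_space_time')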
If you want to train more powerful TimeSformer variants, e.g., TimeSformer-HR (operating on 16-frame clips sampled at 448x448 spatial resolution), and TimeSformer-L (operating on 96-frame clips sampled at 224x224 spatial resolution), use the following commands:
python tools/run_net.py \
--cfg configs/Kinetics/TimeSformer_divST_16x16_448.yaml \
DATA.PATH_TO_DATA_DIR path_to_your_dataset \
NUM_GPUS 8 \
TRAIN.BATCH_SIZE 8 \
and
python tools/run_net.py \
--cfg configs/Kinetics/TimeSformer_divST_96x4_224.yaml \
DATA.PATH_TO_DATA_DIR path_to_your_dataset \
NUM_GPUS 8 \
TRAIN.BATCH_SIZE 8 \
Note that for these models you will need a set of GPUs with ~32GB of memory.
Use TRAIN.ENABLE and TEST.ENABLE to control whether training or testing is required for a given run. When testing, you also have to provide the path to the checkpoint model via TEST.CHECKPOINT_FILE_PATH:
python tools/run_net.py \
--cfg configs/Kinetics/TimeSformer_divST_8x32_224_TEST.yaml \
DATA.PATH_TO_DATA_DIR path_to_your_dataset \
TEST.CHECKPOINT_FILE_PATH path_to_your_checkpoint \
TRAIN.ENABLE False \
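At test time, video-level predictions are typically obtained by averaging scores over several clips (and spatial crops) sampled from each video; in this codebase the exact protocol is driven by the TEST.* config options. A rough illustration of the clip-averaging idea (a sketch, not the repository's evaluation pipeline):

import torch

def video_level_prediction(model, clips):
    # clips: list of (1, C, T, H, W) tensors sampled from one video.
    # Averages softmax scores over clips, mirroring common multi-clip evaluation.
    model.eval()
    with torch.no_grad():
        scores = [torch.softmax(model(clip), dim=-1) for clip in clips]
    return torch.stack(scores).mean(dim=0)  # (1, num_classes)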
To train TimeSformer via Slurm, please check out our single-node Slurm training script slurm_scripts/run_single_node_job.sh.
Distributed training is available via Slurm and submitit. First, install submitit:
pip install submitit
To train the TimeSformer model on Kinetics using 4 nodes with 8 GPUs each, use the following command:
python tools/submit.py --cfg configs/Kinetics/TimeSformer_divST_8x32_224.yaml --job_dir /your/job/dir/${JOB_NAME}/ --num_shards 4 --name ${JOB_NAME} --use_volta32
We provide a script for launching Slurm jobs in slurm_scripts/run_multi_node_job.sh.
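Under the hood, tools/submit.py uses submitit to request Slurm resources and launch the training entry point on each node. A heavily simplified sketch of a submitit launch (the resource values and partition name below are placeholders, not the settings used by the provided scripts):

import submitit

executor = submitit.AutoExecutor(folder="/your/job/dir")
executor.update_parameters(
    nodes=4,                            # 4 nodes ...
    gpus_per_node=8,                    # ... with 8 GPUs each
    tasks_per_node=8,                   # one process per GPU
    timeout_min=60 * 24,                # example time limit (24 hours)
    slurm_partition="your_partition",   # placeholder partition name
)

def train():
    # In the real script this invokes the TimeSformer training loop.
    pass

job = executor.submit(train)
print(job.job_id)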
To finetune from an existing PyTorch checkpoint, add the following lines to the command line (or to the YAML config):
TRAIN.CHECKPOINT_FILE_PATH path_to_your_PyTorch_checkpoint
TRAIN.FINETUNE True
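When finetuning on a dataset with a different label space, the classification head is sized by the num_classes argument of the constructor. A minimal sketch using the Python API from above (the class count and checkpoint path are placeholders; within the training pipeline the TRAIN.FINETUNE option above handles checkpoint loading for you):

from timesformer.models.vit import TimeSformer

# Placeholder values: 174 classes (an SSv2-sized label space) and a local checkpoint path.
# Whether a mismatched classification head is re-initialized automatically depends on the
# checkpoint-loading code, so treat this as illustrative only.
model = TimeSformer(img_size=224, num_classes=174, num_frames=8,
                    attention_type='divided_space_time',
                    pretrained_model='/path/to/your/checkpoint.pyth')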
If you want to experiment with the long-term video modeling task on HowTo100M, please download the train/test split files from here.
The code was developed using Python 3.7 on Ubuntu 20.04. For training, we used four GPU compute nodes, each containing 8 Tesla V100 GPUs (32 GPUs in total). Other platforms or GPU cards have not been fully tested.
The majority of this work is licensed under the CC-NC 4.0 International license. However, portions of the project are available under separate license terms: SlowFast and pytorch-image-models are licensed under the Apache 2.0 license.
We actively welcome your pull requests. Please see CONTRIBUTING.md and CODE_OF_CONDUCT.md for more info.
TimeSformer is built on top of PySlowFast and pytorch-image-models by Ross Wightman. We thank the authors for releasing their code. If you use our model, please consider citing these works as well:
@misc{fan2020pyslowfast,
author = {Haoqi Fan and Yanghao Li and Bo Xiong and Wan-Yen Lo and
Christoph Feichtenhofer},
title = {PySlowFast},
howpublished = {\url{https://github.com/facebookresearch/slowfast}},
year = {2020}
}
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}