This repository contains the official PyTorch implementation of the paper: Learning from Temporal Gradient for Semi-supervised Action Recognition, CVPR 2022.
The code is built upon MMAction2. We recommend following the official MMAction2 tutorials to prepare the environment and the datasets (without temporal gradient extraction). Please use the provided extraction script (e.g., the script for UCF101) if needed.
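For reference, a temporal gradient is simply the pixel-wise difference between consecutive frames. Below is a minimal sketch of the idea, not the repository's actual extraction script; the function name and the (T, H, W, C) array layout are illustrative assumptions:

```python
import numpy as np

def temporal_gradient(frames: np.ndarray) -> np.ndarray:
    # frames: (T, H, W, C) uint8 frames decoded from a video clip.
    # Cast to a signed type first so the subtraction does not wrap around.
    signed = frames.astype(np.int16)
    # The temporal gradient at step t is the pixel-wise difference between
    # frame t+1 and frame t, giving an array of shape (T-1, H, W, C).
    return signed[1:] - signed[:-1]

# Toy clip: two 1x1 single-channel "frames".
clip = np.array([[[[10]]], [[[250]]]], dtype=np.uint8)
print(temporal_gradient(clip))  # -> [[[[240]]]]
```

The signed cast matters: subtracting uint8 arrays directly would wrap modulo 256 instead of producing negative differences.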
First, run the following scripts to prepare the conda environment.
Install Miniconda (Optional)
bash install_miniconda.sh
Conda create environment
bash create_mmact_env.sh
bash prepare_ucf101.sh "number of your cpu threads"
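The preparation script takes the number of CPU threads as an argument; if you are unsure of the count on your machine, Python's standard library can report it:

```python
import os

# Number of logical CPUs visible to the OS; pass this value
# (or a smaller one) to the dataset preparation scripts.
print(os.cpu_count())
```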
First, link the videos_train and videos_val folders to ./data/kinetics400/.
bash prepare_k400.sh "number of your cpu threads"
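If you prefer scripting the links, here is a small sketch using Python's standard library; src_root is a placeholder for wherever your Kinetics-400 videos actually live:

```python
import os

src_root = "/path/to/kinetics400"  # placeholder: your actual video location
dst_root = "./data/kinetics400"

os.makedirs(dst_root, exist_ok=True)
for split in ("videos_train", "videos_val"):
    link = os.path.join(dst_root, split)
    if not os.path.islink(link):
        # On POSIX systems a symlink can be created even if the
        # target directory does not exist yet.
        os.symlink(os.path.join(src_root, split), link)
```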
For example,
bash exps/8gpu-rawframes-ucf101/our_method/exp3_ucf101_20percent_180e_align0123_1clip_weak_sameclip_ptv_new_loss_half.sh
If you use our code or paper in your research or wish to refer to our results, please use the following BibTeX entry.
@InProceedings{xiao2021learning,
title={Learning from Temporal Gradient for Semi-supervised Action Recognition},
author={Xiao, Junfei and Jing, Longlong and Zhang, Lin and He, Ju and She, Qi and Zhou, Zongwei and Yuille, Alan and Li, Yingwei},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022}
}
Code is built upon MMAction2 and video-data-aug.