
TEA: Temporal Excitation and Aggregation for Action Recognition (CVPR2020)

The PyTorch code of the TEA Module.

Requirements

Data Preparation

Please refer to the TSN repo and the TSM repo for detailed guides on data pre-processing.

The List Files

A list file specifies the video data: each line contains the absolute path of an extracted video frame folder, the number of video frames, and the video label. A typical line in the file looks like:

/data/xxx/xxx/something-something/video_frame_folder 100 12
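As a rough illustration, list files in this format could be generated with a short Python script like the sketch below. The helper name, its arguments, and the assumed one-image-per-frame layout are hypothetical; adapt them to however your frames were extracted.

```python
import os

def write_list_file(frame_root, video_labels, out_path):
    """Write a TSN/TSM-style list file.

    video_labels: {video_folder_name: integer_class_label} -- hypothetical input format.
    Assumes each video folder under frame_root contains one image file per frame.
    """
    with open(out_path, 'w') as f:
        for name, label in sorted(video_labels.items()):
            folder = os.path.abspath(os.path.join(frame_root, name))
            num_frames = len(os.listdir(folder))  # count extracted frames
            f.write(f'{folder} {num_frames} {label}\n')
```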

Finally, the absolute paths of your generated list files should be added to ops/dataset_configs.py
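As a rough sketch of what such an entry might look like, the snippet below follows the TSM-style convention (a per-dataset function returning the category count, list-file paths, frame root, and frame-name pattern). The function name, return tuple, and paths are assumptions; match them to the actual contents of ops/dataset_configs.py.

```python
import os

ROOT_DATASET = '/data/xxx/xxx/'  # assumed root of the extracted frame folders

def return_somethingv1(modality):
    # Hypothetical TSM-style config entry; adapt names and paths to your setup.
    if modality != 'RGB':
        raise NotImplementedError('no such modality: ' + modality)
    filename_categories = 174  # number of Something-Something V1 classes
    root_data = os.path.join(ROOT_DATASET, 'something-something')
    filename_imglist_train = '/data/xxx/xxx/train_videofolder.txt'
    filename_imglist_val = '/data/xxx/xxx/val_videofolder.txt'
    prefix = '{:05d}.jpg'  # frame file naming pattern
    return filename_categories, filename_imglist_train, filename_imglist_val, root_data, prefix
```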

Training TEA

We provide several examples for training TEA models on different datasets. Please refer to Appendix B of our paper for more training details.

Testing

Two inference protocols are used in our paper: 1) the efficient protocol and 2) the accuracy protocol. For both protocols we provide example scripts for testing TEA models.
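To sketch the difference between the two protocols: the efficient protocol scores a single clip, while the accuracy protocol averages softmax scores over multiple clips and crops. The snippet below is a minimal illustration of that averaging step; the model and the way clips are sampled are placeholders, and the exact clip/crop counts should be taken from the paper.

```python
import torch

@torch.no_grad()
def video_score(model, clips):
    """Average class scores over a list of clips.

    clips: list of input tensors, each shaped as the model expects.
    Efficient protocol: a single center-cropped clip -> len(clips) == 1.
    Accuracy protocol: multiple clips x multiple crops -> average their scores.
    """
    model.eval()
    scores = [torch.softmax(model(clip), dim=1) for clip in clips]
    return torch.stack(scores, dim=0).mean(dim=0)  # (batch, num_classes)
```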

Pre-trained Models

Currently, we do not provide the original pre-trained models on STHV1, STHV2, and Kinetics, since we have reorganized the code structure and renamed the TEA modules for the public release; the old models cannot be loaded under the new names. We plan to retrain the models with the new code and release them for evaluation.

The released code has been verified: you will get performance similar to our paper if you follow the exact training settings of TEA (see issue 2 and issue 4).