
CMCS-Temporal-Action-Localization

Code for 'Completeness Modeling and Context Separation for Weakly Supervised Temporal Action Localization' (CVPR2019).

Paper and Supplementary.

Recommended Environment

Prerequisites

Feature Extraction

We employ UntrimmedNet or I3D features in the paper.

We recommend re-extracting the features yourself using these two repos:

Or use the features pre-extracted by us (Warning: Not easy to download):

  1. Download the features:
  2. Join the split zip files with zip --fix {} --out {}, then unzip the result (see the example after this list).
  3. Put the extracted folder into the parent folder of this repo. (Or change the paths in the config file.)
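
For example, with hypothetical file names for the downloaded archive and the joined output (substitute the actual files):

    # Hypothetical file names; substitute the actual downloaded archive.
    zip --fix cmcs-features.zip --out cmcs-features-full.zip
    unzip cmcs-features-full.zip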

Other features can also be used.

Generate Static Clip Masks

Static clip masks are used for hard negative mining. They are included in the downloaded features. If you want to generate the masks yourself, refer to tools/get_flow_intensity_anet.py.
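
For intuition, here is a minimal sketch of the idea rather than the actual script: it marks clips whose mean optical-flow intensity falls below a threshold as static, so they can serve as hard negatives. The function name and threshold are assumptions; see tools/get_flow_intensity_anet.py for the real implementation.

    import numpy as np

    def static_clip_mask(flow_intensity, threshold=1.0):
        # flow_intensity: (num_clips,) mean optical-flow magnitude per clip.
        # threshold: assumed cutoff below which a clip counts as static.
        # True marks a (near-)static clip usable as a hard negative.
        return np.asarray(flow_intensity) < threshold

    # Hypothetical per-clip flow intensities for one video.
    mask = static_clip_mask([0.2, 3.5, 0.1, 2.8])  # [True, False, True, False]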

Check ActivityNet Videos

The URLs of some videos in this dataset are no longer valid. Check video availability and generate the file anet_missing_videos.npy.
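
A hedged sketch of such a check is shown below; the URL list and the HTTP test are assumptions, and only the output file name anet_missing_videos.npy comes from this repo.

    import urllib.request
    import numpy as np

    def is_available(url, timeout=10):
        # Naive reachability test: any failure of a HEAD request is taken
        # to mean the video is gone. A real check may need to be stricter.
        try:
            req = urllib.request.Request(url, method='HEAD')
            with urllib.request.urlopen(req, timeout=timeout):
                return True
        except Exception:
            return False

    # Hypothetical mapping from ActivityNet video IDs to source URLs.
    video_urls = {'v_abc123': 'https://www.youtube.com/watch?v=abc123'}

    missing = [vid for vid, url in video_urls.items() if not is_available(url)]
    np.save('anet_missing_videos.npy', np.array(missing))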

Run

  1. Train models with weak supervision (skip this step if you use our trained models):

    python3 train.py --config-file {} --train-subset-name {} --test-subset-name {} --test-log
  2. Test and save the class activation sequences (CAS):

    python3 test.py --config-file {} --train-subset-name {} --test-subset-name {} --no-include-train
  3. Localize actions using the CAS:

    python3 detect.py --config-file {} --train-subset-name {} --test-subset-name {} --no-include-train

Predictions are saved in output/predictions. For THUMOS14, final performance is saved in an npz file in output; for ActivityNet, final performance can be obtained via the official dataset evaluation API.
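
For a quick look at the THUMOS14 results, the npz file can be loaded as below; the exact file name under output is not fixed here, so the path is hypothetical.

    import numpy as np

    results = np.load('output/results.npz')  # hypothetical file name
    for key in results.files:
        print(key, results[key])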

Settings

Our method is evaluated on THUMOS14 and ActivityNet with I3D or UNT features. The experiment settings and their arguments are listed below.

   config-file                      train-subset-name   test-subset-name
1  configs/thumos-UNT.json          val                 test
2  configs/thumos-I3D.json          val                 test
3  configs/anet12-local-UNT.json    train               val
4  configs/anet12-local-I3D.json    train               val
5  configs/anet13-local-I3D.json    train               val
6  configs/anet13-server-I3D.json   train               test
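
For example, setting 2 (THUMOS14 with I3D features) expands to:

    python3 train.py --config-file configs/thumos-I3D.json --train-subset-name val --test-subset-name test --test-log
    python3 test.py --config-file configs/thumos-I3D.json --train-subset-name val --test-subset-name test --no-include-train
    python3 detect.py --config-file configs/thumos-I3D.json --train-subset-name val --test-subset-name test --no-include-train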

Trained Models

Our trained models are provided in this folder. To use them, run test.py and detect.py with the config files in this folder.

Citation

@InProceedings{Liu_2019_CVPR,
  author    = {Liu, Daochang and Jiang, Tingting and Wang, Yizhou},
  title     = {Completeness Modeling and Context Separation for Weakly Supervised Temporal Action Localization},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}

License

MIT