Hendricks, Lisa Anne, et al. "Localizing Moments in Video with Natural Language." ICCV (2017).
Find the paper here and the project page here.
```
@inproceedings{hendricks17iccv,
  title = {Localizing Moments in Video with Natural Language},
  author = {Hendricks, Lisa Anne and Wang, Oliver and Shechtman, Eli and Sivic, Josef and Darrell, Trevor and Russell, Bryan},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year = {2017}
}
```
License: BSD 2-Clause license
Preliminaries: I trained all my models with the BVLC caffe version. Before you start, look at "utils/config.py" and change any paths as needed (e.g., perhaps you want to point to a Caffe build in a different folder).
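For reference, the kind of edit "utils/config.py" expects looks roughly like the sketch below. The variable names here are hypothetical, not copied from the repo; check the actual file for the real names before editing.

```python
# Illustrative sketch of path overrides in "utils/config.py".
# NOTE: these variable names are assumptions; use the names that
# actually appear in the file.
caffe_dir = "/home/you/caffe"            # point to your BVLC Caffe build
pycaffe_dir = caffe_dir + "/python"      # where pycaffe lives
data_dir = "data"                        # folder with the annotation json files
```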
Evaluation
Look at "utils/eval.py" if you would like to evaluate a model that you have trained. Below are instructions for evaluating the models proposed in the paper:
You should get the following outputs:
| Model | Rank@1 | Rank@5 | mIoU |
| --- | --- | --- | --- |
| RGB val | 0.2442 | 0.7540 | 0.3739 |
| Flow val | 0.2626 | 0.7839 | 0.4015 |
| Fusion val (lambda 0.5) | 0.2765 | 0.7961 | 0.4191 |
| RGB test | 0.2312 | 0.7336 | 0.3549 |
| Flow test | 0.2583 | 0.7540 | 0.3894 |
| Fusion test (lambda 0.5) | 0.2708 | 0.7853 | 0.4053 |
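The fusion numbers above combine the RGB and flow models by weighting their scores. A minimal sketch of that late fusion, assuming each model emits one distance-style score per candidate moment (lower is better) and that lambda weights the flow stream; the function and variable names are mine, not the repo's:

```python
import numpy as np

def fuse_scores(rgb_scores, flow_scores, lam=0.5):
    """Late fusion of per-moment scores from the RGB and flow models.

    Assumes lower scores are better and that lam weights the flow
    stream; lam=0.5 corresponds to the fusion rows reported above.
    """
    rgb_scores = np.asarray(rgb_scores, dtype=float)
    flow_scores = np.asarray(flow_scores, dtype=float)
    return lam * flow_scores + (1.0 - lam) * rgb_scores

# Rank candidate moments by fused score, best (lowest) first.
fused = fuse_scores([0.2, 0.5, 0.1], [0.4, 0.3, 0.6])
ranking = np.argsort(fused)
```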
Training
Use "run_job_rgb.sh" to train an RGB model and "run_job_flow.sh" to train a flow model. You should be able to rerun these scripts and get similar numbers to those reported in the paper.
To access the dataset, please look at the json files in the "data" folder; the different fields in those files are described below. Our annotations include descriptions which are temporally grounded in videos. For easier annotation, each video is split into 5-second temporal chunks: the first temporal chunk corresponds to seconds 0-5 in the video, the second to seconds 5-10, and so on.
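The 5-second chunk convention above makes it mechanical to convert a chunk-index range to seconds and to score a prediction with temporal IoU (the quantity behind the mIoU metric). A small sketch; the helper names are my own, not from the repo:

```python
def chunks_to_seconds(start_chunk, end_chunk, chunk_len=5):
    """Convert an inclusive chunk-index range to a (start_sec, end_sec) span.

    Chunk 0 covers seconds 0-5, chunk 1 covers seconds 5-10, etc.
    """
    return start_chunk * chunk_len, (end_chunk + 1) * chunk_len

def temporal_iou(span_a, span_b):
    """Intersection-over-union between two (start, end) second spans."""
    inter = max(0, min(span_a[1], span_b[1]) - max(span_a[0], span_b[0]))
    union = max(span_a[1], span_b[1]) - min(span_a[0], span_b[0])
    return inter / union if union > 0 else 0.0

chunks_to_seconds(1, 2)  # chunks 1-2 cover seconds (5, 15)
```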
To download the videos from AWS, use the script download_videos_AWS.py:

```
python download_videos_AWS.py --download --video_directory DIRECTORY
```
There are 13 videos that are not on AWS, which you may download from my website here (I don't have enough space to store all the videos on my website -- sorry!).
Use the script download_videos.py:

```
python download_videos.py --download --video_directory DIRECTORY
```
When I originally released the dataset, ~3% of the original videos had been deleted from Flickr. You may access them here. If you find that more videos are missing, please download the videos via the AWS links above.
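To check which annotated videos are still missing after downloading, something like the sketch below works. It assumes each annotation carries a 'video' field and that files are saved as '<video>.mp4'; adjust both to match the actual json schema and the naming used by the download scripts.

```python
import json
import os

def find_missing_videos(json_path, video_directory, ext=".mp4"):
    """Return video ids from the annotation json with no file on disk.

    Assumes each annotation has a 'video' field and that downloaded
    files are named '<video>.mp4'; adapt if your layout differs.
    """
    with open(json_path) as f:
        annotations = json.load(f)
    videos = {ann["video"] for ann in annotations}
    on_disk = set(os.listdir(video_directory)) if os.path.isdir(video_directory) else set()
    return sorted(v for v in videos if v + ext not in on_disk)
```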
You can view the Creative Commons licenses in "video_licenses.txt".
You can access pre-extracted features for RGB here and for flow here. These are downloaded automatically by "download/get_models.sh". To extract flow, I used the code here.
I provide re-extracted features in the Google Drive above. You can use this script to create a dict with averaged RGB features, and this script to do the same for flow. The averaged features will be slightly different from the original release, but this did not change any trends in the results.
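The averaging itself is simple to reproduce. Here is a sketch of building a dict of averaged features, assuming per-video features arrive as (num_frames, feature_dim) arrays; the keys, shapes, and file layout are illustrative, not the repo's actual format.

```python
import numpy as np

def average_features(per_frame_features):
    """Collapse per-frame features into one averaged vector per video.

    per_frame_features: dict mapping video id -> (num_frames, dim) array.
    Returns a dict mapping video id -> (dim,) mean feature vector.
    """
    return {vid: np.asarray(feats, dtype=float).mean(axis=0)
            for vid, feats in per_frame_features.items()}

avg = average_features({"vid1": [[0.0, 2.0], [2.0, 4.0]]})
# avg["vid1"] is the frame-wise mean, array([1., 3.])
```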