This package contains the accompanying code for the following paper:
Tu, Yunbin, et al. "Video Description with Spatial-Temporal Attention", which appeared as a full paper in the Proceedings of the ACM International Conference on Multimedia, 2017 (ACM MM'17). A Baidu Cloud link is also provided.
The code is forked from yaoli/arctic-capgen-vid.
First, clone our repository:
$ git clone https://github.com/tuyunbin/Video-Description-with-Spatial-Temporal-Attention.git
Here, msvd_data contains 7 pkl files needed to train and test the model.
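If you would like to peek inside these files, a minimal sketch is shown below; the file name is a placeholder, so substitute any of the 7 pkl files.
import pickle
# Replace 'FILENAME.pkl' with one of the 7 pkl files shipped in msvd_data.
with open('msvd_data/FILENAME.pkl', 'rb') as f:
    data = pickle.load(f)
print(type(data))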
Theano can be installed by following its installation instructions; note that Theano has its own dependencies as well. Alternatively, Theano can be installed through Anaconda. If you install Theano the first way, you may hit the error "no module named pygpu"; if so, install it with Anaconda instead. You do not need to change your Python environment for this; simply prepend Anaconda to your PATH whenever you use Theano:
$ export PATH="/home/tuyunbin/anaconda2/bin:$PATH"
(change this to the path of your own Anaconda installation)
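Once Theano (and pygpu, if you use the GPU backend) is installed, a quick sanity check like the one below should run without errors; it simply compiles and evaluates a trivial function and prints the device Theano is using.
import numpy as np
import theano
import theano.tensor as T

# Trivial function that doubles a matrix; compiling it exercises the Theano backend.
x = T.matrix('x')
f = theano.function([x], 2 * x)
print(theano.config.device)
print(f(np.ones((2, 2), dtype=theano.config.floatX)))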
coco-caption. Install it by simply adding it to your $PYTHONPATH.
Jobman. After cloning it, please add it to your $PYTHONPATH as well.
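For example, assuming both repositories were cloned into your home directory (adjust the paths to your own setup):
$ export PYTHONPATH="$PYTHONPATH:$HOME/coco-caption:$HOME/jobman"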
Finally, you will also need to install h5py, since we will use hdf5 files to store the preprocessed features.
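If you want to inspect a downloaded feature file, a minimal h5py sketch is shown below; FEATURES.h5 is a placeholder for the actual file name.
import h5py
# Replace 'FEATURES.h5' with the path of a downloaded feature file.
with h5py.File('FEATURES.h5', 'r') as f:
    for name, item in f.items():
        print(name, item)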
The pre-processed datasets used in our paper are available at this link; a Baidu Cloud mirror is also provided.
The pre-processed global, motion and local features used in our paper can be downloaded at the following links:
global features (a Baidu Cloud mirror is also provided).
motion features (a Baidu Cloud mirror is also provided).
local features (the Baidu Cloud extraction code is h7nq); a Google Drive link is also provided.
In our paper, we used local features extracted from the fc7 layer of a Faster R-CNN network, with 8 local features per frame. You can extract a different number of local features with Faster R-CNN if you prefer; a sketch of keeping a fixed number of regions is shown below.
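The extraction code is linked above rather than included here, but the general idea of keeping a fixed number of region features per frame can be sketched as follows; the scores and fc7 arrays are random stand-ins for real Faster R-CNN outputs, and selecting the highest-scoring regions is just one possible rule.
import numpy as np

NUM_REGIONS = 8                      # number of local features kept per frame
scores = np.random.rand(300)         # stand-in for Faster R-CNN region scores
fc7 = np.random.rand(300, 4096)      # stand-in for fc7 features of each region
top_idx = np.argsort(scores)[::-1][:NUM_REGIONS]
local_feats = fc7[top_idx]           # shape: (8, 4096)
print(local_feats.shape)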
Note: since the amount of data for MSR-VTT-10K is very large, we do not provide the data we used for it. You can train your model on this dataset with the same code, but don't forget to shuffle train_id when training the model (see the sketch below).
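A minimal sketch of shuffling the training ids (the id list here is purely hypothetical):
import random

train_ids = ['video%d' % i for i in range(10)]   # hypothetical list of training ids
random.seed(1234)
random.shuffle(train_ids)                        # shuffle in place before batching
print(train_ids)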
First, you need to download the pre-trained model at this link (a Google Drive link) and add it to your $PYTHONPATH.
Second, go to common.py and change the following two lines
RAB_DATASET_BASE_PATH = '/home/tuyunbin/Video-Description-with-Spatial-Temporal-Attention/msvd_data/'
RAB_EXP_PATH = '/home/sdc/tuyunbin/msvd_result/Video-Description-with-Spatial-Temporal-Attention/exp/'
according to your specific setup. The first path is the parent directory containing the msvd_data folder. The second path specifies where you would like to save all of the experimental results.
Before testing the model, we suggest running data_engine.py and checking that it finishes without any error. It is also useful to verify that the coco-caption evaluation pipeline works properly by running metrics.py, again without any error.
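Concretely, both checks amount to running the two scripts from the repository root:
$ python data_engine.py
$ python metrics.py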
Finally, you can use our trained model by setting the following option to True in config.py:
'reload_': True,
If you want to train a model from scratch instead, set 'reload_' to False in config.py.
Now you are ready to launch training:
$ THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 python train_model.py
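If no GPU is available (or cuda0 is not visible to Theano), the same script can be run on the CPU, although training will be much slower:
$ THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python train_model.py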
If you find this helps your research, please consider citing:
@inproceedings{tu2017video,
title={Video Description with Spatial-Temporal Attention},
author={Tu, Yunbin and Zhang, Xishan and Liu, Bingtao and Yan, Chenggang},
booktitle={Proceedings of the 2017 ACM on Multimedia Conference},
pages={1014--1022},
year={2017},
organization={ACM}
}
@ARTICLE{8744407,
author={C. {Yan} and Y. {Tu} and X. {Wang} and Y. {Zhang} and X. {Hao} and Y. {Zhang} and Q. {Dai}},
journal={IEEE Transactions on Multimedia},
title={STAT: Spatial-Temporal Attention Mechanism for Video Captioning},
year={2020},
volume={22},
number={1},
pages={229-241},
}
Running train_model.py for the first time takes much longer, since Theano needs to compile many things on the first run and cache them on disk for future runs. You will probably see some warning messages on stdout; it is safe to ignore all of them. Both the model parameters and the configuration are saved (the saving path is printed on stdout, so it is easy to find). The most important thing to monitor is train_valid_test.txt in the exp output folder: it is a big table recording all metrics at each validation.
My email is tuyunbin1995@foxmail.com.
Any discussions and suggestions are welcome!