We use a spatial stream and a temporal stream, built on VGG-16 and CNN-M respectively, to model video information. LSTMs are stacked on top of the CNNs to model long-term dependencies between video frames. A minimal architecture sketch follows the reference list below. For more information, see these papers:
Two-Stream Convolutional Networks for Action Recognition in Videos
Fusing Multi-Stream Deep Networks for Video Classification
Modeling Spatial-Temporal Clues in a Hybrid Deep Learning Framework for Video Classification
Towards Good Practices for Very Deep Two-Stream ConvNets
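As a rough illustration of the stacked CNN+LSTM idea (not the project's exact code): a frozen VGG-16 extracts features from each frame via TimeDistributed, and an LSTM models the frame sequence. The layer sizes, frame count, and the use of Keras's bundled ImageNet weights (rather than the vgg16_weights.h5 file mentioned below) are assumptions.

    # Illustrative only: a frozen VGG-16 applied per frame, with an LSTM
    # stacked on top to capture long-term dependencies across frames.
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.layers import (Dense, GlobalAveragePooling2D,
                                         Input, LSTM, TimeDistributed)
    from tensorflow.keras.models import Model

    NUM_FRAMES, NUM_CLASSES = 16, 20   # illustrative; CCV defines 20 categories

    cnn = VGG16(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))
    cnn.trainable = False              # freeze the CNN while training the LSTM

    frames = Input(shape=(NUM_FRAMES, 224, 224, 3))
    x = TimeDistributed(cnn)(frames)                   # (frames, 7, 7, 512)
    x = TimeDistributed(GlobalAveragePooling2D())(x)   # one 512-d vector/frame
    x = LSTM(256)(x)                                   # temporal modeling
    out = Dense(NUM_CLASSES, activation="softmax")(x)

    model = Model(frames, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")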
Here are the steps to run the project on the CCV dataset:
First create a directory named env, then run the following commands inside it to set up a virtual environment. The project assumes a requirements.txt file listing the modules it needs (a sample sketch follows the commands).
$ mkdir env
$ cd env
$ virtualenv venv-video-classification
$ source venv-video-classification/bin/activate
$ cd ..
$ pip install -r requirements.txt
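The repository does not pin its dependencies here, so the exact package set is an assumption; something like the following covers a Keras/OpenCV pipeline of this kind:

    numpy
    h5py
    opencv-python
    tensorflow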
Get the YouTube data, remove broken videos and negative instances, and finally create a pickle file of the dataset by running the scripts in the utility_scripts folder.
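The utility scripts themselves are not shown here; as a rough sketch under assumptions (the file layout, the labels dict mapping video IDs to class IDs, and the output name dataset.pkl are all hypothetical), the pickling step boils down to:

    # Hypothetical sketch of the pickling step; not the utility_scripts code.
    import os
    import pickle

    def build_index(video_dir, labels):
        """Map kept videos to labels; broken/negative IDs are absent from labels."""
        index = []
        for name in os.listdir(video_dir):
            video_id = os.path.splitext(name)[0]
            if video_id in labels:
                index.append((os.path.join(video_dir, name), labels[video_id]))
        return index

    with open("dataset.pkl", "wb") as f:
        pickle.dump(build_index("videos", {"vid_001": 3}), f)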
Temporal Stream (in the temporal folder):
Run temporal_vid2img to create the optical-flow frames and related files (a flow-extraction sketch follows these steps)
Run temporal_stream_cnn to start the temporal stream training
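The temporal stream consumes optical-flow images rather than raw frames. A hedged sketch of the flow-extraction step using OpenCV's Farneback algorithm (the actual temporal_vid2img logic, file names, and parameters are assumptions):

    # Illustrative optical-flow extraction for the temporal stream.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("videos/vid_001.mp4")
    ok, prev = cap.read()
    assert ok, "could not read the first frame"
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Save the x and y flow components as rescaled grayscale images
        for c, axis in enumerate("xy"):
            comp = cv2.normalize(flow[..., c], None, 0, 255, cv2.NORM_MINMAX)
            cv2.imwrite("flow_%s_%04d.jpg" % (axis, i), comp.astype(np.uint8))
        prev_gray = gray
        i += 1
    cap.release()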
Spatial Stream (in the spatial folder):
Run spatial_vid2img to create the static frames and related files (a frame-extraction sketch follows these steps)
Download the vgg16_weights.h5 file from here and put it in the spatial folder
Run spatial_stream_cnn to start the spatial stream training
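A hedged sketch of what the static-frame extraction can look like; the sampling policy, frame count, and output naming used by spatial_vid2img are assumptions:

    # Illustrative static-frame extraction for the spatial stream.
    import cv2

    def extract_frames(video_path, out_prefix, num_frames=16):
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        step = max(total // num_frames, 1)
        for i in range(num_frames):
            cap.set(cv2.CAP_PROP_POS_FRAMES, i * step)  # seek to sampled frame
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (224, 224))       # VGG-16 input size
            cv2.imwrite("%s_%02d.jpg" % (out_prefix, i), frame)
        cap.release()

    extract_frames("videos/vid_001.mp4", "frames/vid_001")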
Temporal Stream LSTM: the code will be updated soon
Spatial Stream LSTM: the code will be updated soon (an illustrative sketch of the LSTM stage follows)
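Until that code lands, here is an illustrative sketch (not the project's implementation) of the usual recipe for these LSTM stages: precompute per-frame CNN features, then train an LSTM on the feature sequences. The sequence length, feature dimension, and layer sizes are assumptions.

    # Hypothetical LSTM stage trained on precomputed per-frame CNN features.
    import numpy as np
    from tensorflow.keras.layers import Dense, Dropout, Input, LSTM
    from tensorflow.keras.models import Model

    SEQ_LEN, FEAT_DIM, NUM_CLASSES = 16, 4096, 20  # illustrative sizes

    inp = Input(shape=(SEQ_LEN, FEAT_DIM))  # one CNN feature vector per frame
    x = LSTM(512)(inp)
    x = Dropout(0.5)(x)
    out = Dense(NUM_CLASSES, activation="softmax")(x)

    model = Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Dummy batch, just to show the expected tensor shapes
    feats = np.random.rand(8, SEQ_LEN, FEAT_DIM).astype("float32")
    labels = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, 8)]
    model.fit(feats, labels, epochs=1, batch_size=4)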