yjxiong / caffe

A fork of Caffe with OpenMPI-based multi-GPU (mainly data-parallel) support for action recognition and more. For further documentation, please see the original readme.
http://caffe.berkeleyvision.org/

Video frame sampling #192

Closed · propaganda12 closed this issue 6 years ago

propaganda12 commented 6 years ago

Hi, Professor Xiong. In your paper "Towards Good Practices for Very Deep Two-Stream ConvNets", it is mentioned that 25 frames are sampled from each video for the spatial net. However, after reading src/caffe/layers/video_data_layer.cpp and vgg_16_rgb_train_val_fast.prototxt, I could not find anything that handles frame sampling. Have I missed something, and how do I set the sampling frequency? Looking forward to your reply, and thank you.

JosephChenHub commented 6 years ago

As I recall, the sampling process lies in the definition of VideoDataLayer.

Best wishes, Chen Zuyao


propaganda12 commented 6 years ago

Thanks for your response. I have read video_data_layer.cpp again and found that the parameter "num_segments" controls the sampling frequency; setting num_segments=25 ensures that 25 frames are sampled per video.
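For reference, a minimal sketch of how the data layer might be configured in the prototxt. This assumes the layer type is VideoData and that num_segments lives under video_data_param; the source path and the other field names here are my guess from the layer definition, not copied from the actual vgg_16_rgb_train_val_fast.prototxt:

```
layer {
  name: "data"
  type: "VideoData"
  top: "data"
  top: "label"
  video_data_param {
    source: "rgb_frame_list.txt"   # illustrative list file of extracted frame folders
    new_length: 1                  # one RGB frame per sampled snippet
    num_segments: 25               # number of snippets sampled per video
    modality: RGB
  }
}
```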

yjxiong commented 6 years ago

25 is used during testing, not training. You should use the dedicated test scripts for that; please see the TSN repo for details.