chuckcho / video-caffe

Video-friendly caffe -- comes with the most recent version of Caffe (as of Jan 2019), a video reader, 3D(ND) pooling layer, and an example training script for C3D network and UCF-101 data

run a demo #69

Closed DehaiZhao closed 7 years ago

DehaiZhao commented 7 years ago

I only know how to extract features from a video. Is there a script to run a video demo like the one shown on the website http://www.cs.dartmouth.edu/~dutran/c3d/? Thanks a lot.

chuckcho commented 7 years ago

No. It would involve extracting video clips, running them through classification, overlaying the classification results on the frames, and assembling them back into a video -- straightforward, but time-consuming to actually implement.
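
For reference, here is a minimal sketch of that pipeline in pycaffe + OpenCV. It is not code from this repository: the deploy prototxt, weights, and label file paths are hypothetical, and the blob names ("data", "prob") and input shape (1, 3, 16, 112, 112) are assumptions that must match your own C3D deploy definition.

```python
#!/usr/bin/env python
# Hypothetical sketch: read a video, form 16-frame clips, classify each clip
# with a trained C3D model, overlay the predicted label, and write an
# annotated output video. Paths, blob names, and label list are assumptions.
import cv2
import numpy as np
import caffe

DEPLOY = 'c3d_ucf101_deploy.prototxt'          # hypothetical path
WEIGHTS = 'c3d_ucf101_iter_20000.caffemodel'   # hypothetical path
LABELS = [l.strip() for l in open('ucf101_labels.txt')]  # hypothetical label file
CLIP_LEN, SIZE = 16, 112

caffe.set_mode_gpu()
net = caffe.Net(DEPLOY, WEIGHTS, caffe.TEST)

cap = cv2.VideoCapture('input.avi')
fps = cap.get(cv2.CAP_PROP_FPS) or 25
writer = None
frames = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
    if len(frames) == CLIP_LEN:
        # Build a (1, 3, T, H, W) blob from the clip (BGR, no mean subtraction here).
        clip = np.array([cv2.resize(f, (SIZE, SIZE)) for f in frames], dtype=np.float32)
        blob = clip.transpose(3, 0, 1, 2)[np.newaxis, ...]
        net.blobs['data'].data[...] = blob
        prob = net.forward()['prob'][0]
        label = LABELS[int(prob.argmax())]

        # Overlay the clip-level prediction on every frame of the clip.
        for f in frames:
            cv2.putText(f, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                        1.0, (0, 255, 0), 2)
            if writer is None:
                h, w = f.shape[:2]
                writer = cv2.VideoWriter('annotated.avi',
                                         cv2.VideoWriter_fourcc(*'XVID'),
                                         fps, (w, h))
            writer.write(f)
        frames = []

cap.release()
if writer is not None:
    writer.release()
```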

pelun commented 7 years ago

I made a demo. First I extract video clips, classify each image, and then choose the most frequent label as the label for the whole video. Is that OK?
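
A minimal sketch of that majority-vote step, assuming the per-clip (or per-frame) labels come from whatever classifier is used upstream:

```python
# Pick the most frequent predicted label as the video-level label.
from collections import Counter

def video_label(clip_predictions):
    """Return the most common label across all clip/frame predictions."""
    return Counter(clip_predictions).most_common(1)[0][0]

# Example: per-clip labels -> single video-level label
print(video_label(['ApplyLipstick', 'ApplyLipstick', 'Archery']))  # ApplyLipstick
```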

chuckcho commented 7 years ago

sounds fine.

pelun commented 7 years ago

Thanks. But I haven't handled the temporal signal.

chuckcho commented 7 years ago

Classification is based on one frame only? That still works, but it won't be as good as C3D at classifying motion-based concepts.

pelun commented 7 years ago

Thanks. Could you help me check this approach: load a video, extract video clips, train on the frames, and choose the most frequent label as the label for the video?

pelun commented 7 years ago

@chuckcho I can't get the predicted class label to show.

chuckcho commented 7 years ago

@pelun Can you elaborate please?

pelun commented 7 years ago

Oh, I ran the test_ucf101.sh file with a single video, but it doesn't show the label in the terminal.

chuckcho commented 7 years ago

I see your point. test_ucf101.sh only reports top-1 accuracy. I'll add Python code to predict a class for a given video clip (or a whole video file) in the near future. Thanks.
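
Until that script lands, one possible sketch (not the author's planned code) of predicting a single class for a whole video: run C3D over consecutive 16-frame clips and average the softmax scores. The blob names ("data", "prob"), input shape, and file paths are assumptions; adapt them to your deploy prototxt and trained model.

```python
# Hypothetical whole-video prediction by averaging per-clip softmax scores.
import cv2
import numpy as np
import caffe

def predict_video(video_path, net, labels, clip_len=16, size=112):
    cap = cv2.VideoCapture(video_path)
    frames, probs = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (size, size)).astype(np.float32))
        if len(frames) == clip_len:
            blob = np.array(frames).transpose(3, 0, 1, 2)[np.newaxis, ...]
            net.blobs['data'].data[...] = blob
            probs.append(net.forward()['prob'][0].copy())
            frames = []
    cap.release()
    if not probs:
        return None, 0.0                      # video shorter than one clip
    mean_prob = np.mean(probs, axis=0)        # average scores over all clips
    return labels[int(mean_prob.argmax())], float(mean_prob.max())

caffe.set_mode_gpu()
net = caffe.Net('c3d_ucf101_deploy.prototxt',            # hypothetical path
                'c3d_ucf101_iter_20000.caffemodel',      # hypothetical path
                caffe.TEST)
labels = [l.strip() for l in open('ucf101_labels.txt')]  # hypothetical label file
print(predict_video('v_Archery_g01_c01.avi', net, labels))
```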