bryanyzhu / two-stream-pytorch

PyTorch implementation of two-stream networks for video action recognition
MIT License
568 stars 150 forks

Testing from Video input #13

Closed Zumbalamambo closed 6 years ago

Zumbalamambo commented 6 years ago

How do I test it in real time, or on an arbitrary video as input, using VideoCapture?

bryanyzhu commented 6 years ago

Right now, the input is a folder of frames corresponding to a video, as shown here. We then evenly sample 25 frames from the video and average the per-frame prediction results.
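A minimal sketch of the sampling-and-averaging scheme described above. The function names and the exact index formula are assumptions for illustration, not the repo's actual code:

```python
import numpy as np

def sample_indices(num_frames, num_samples=25):
    # Evenly pick `num_samples` frame indices across a video of `num_frames`
    # frames (one index from the middle of each of 25 equal segments).
    step = num_frames / num_samples
    return [int(step * i + step / 2) for i in range(num_samples)]

def average_prediction(frame_scores):
    # Average per-frame class scores into a single video-level prediction.
    return np.mean(np.stack(frame_scores), axis=0)
```

For a 100-frame video this yields indices 2, 6, 10, ..., 98; the model scores each sampled frame and the averaged vector gives the final class probabilities.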

If you want to do online testing (e.g. on streaming video), I suggest you look into this file, especially line 78. For example, you can keep a buffer of incoming frames read with VideoCapture. Whenever you have 25 frames, stack them into a batch as in line 78 and feed the batch to the model for prediction. Hope this helps.
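The buffering idea above can be sketched as follows. The `FrameBuffer` class is hypothetical (not part of the repo), and the commented OpenCV loop assumes `cv2` is installed:

```python
from collections import deque
import numpy as np

class FrameBuffer:
    # Collect incoming frames; emit a stacked batch whenever `size`
    # frames have accumulated, then start over.
    def __init__(self, size=25):
        self.size = size
        self.frames = deque()

    def push(self, frame):
        self.frames.append(frame)
        if len(self.frames) == self.size:
            batch = np.stack(self.frames)  # shape: (size, H, W, C)
            self.frames.clear()
            return batch
        return None

# Hypothetical streaming loop with OpenCV:
# cap = cv2.VideoCapture(0)
# buf = FrameBuffer(25)
# while cap.isOpened():
#     ok, frame = cap.read()
#     if not ok:
#         break
#     batch = buf.push(frame)
#     if batch is not None:
#         pass  # preprocess `batch` and pass it to the model, as in line 78
```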

thassan66 commented 6 years ago

Can I get your trained model and evaluate it on UCF101 to generate results?

bryanyzhu commented 6 years ago

@taimur99 Sure, but I didn't quite understand your question. I have already shared my trained models for UCF101 split1 in this repo, both the VGG16 and ResNet152 models, and I also share the test script to generate the results. Whether you want to test on UCF101, test on any other video, or use the models as feature extractors, you can modify the test script a little bit to get what you want. Hope this helps.

thassan66 commented 6 years ago

Okay, Thanks