wei-tim / YOWO

You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization

Is there any way to test on an unknown video? #62

Open zhaoleo1111 opened 3 years ago

zhaoleo1111 commented 3 years ago

Can you give me some support for testing on my own video?

GeLee-Q commented 3 years ago

Hi, I want this too. I have a project where I need to address this using action localization.

okankop commented 3 years ago

I will soon update the paper with AVA dataset results. The repo will also be updated accordingly. I will also provide a webcam demo.

okankop commented 3 years ago

YOWO is extended for AVA dataset! Please check out the updated repo!

MoaazAbdulrahman commented 3 years ago

@okankop Thanks for your great effort. Can I use test_video_ava.py for ucf24 or any other custom dataset?

okankop commented 3 years ago

@MoaazAbdulrahman I wrote test_video_ava.py for the AVA dataset. For the ucf24 dataset, you need to use a YOWO model trained on ucf24 and modify the inference part accordingly, because ucf24 and AVA are quite different, as highlighted in Table 1 of updated_paper.
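For anyone adapting the inference part, here is a minimal sketch of a clip-based loop for a ucf24-trained model. It is not the repo's script: `CLIP_LEN`, `INPUT_SIZE`, and the `(1, C, T, H, W)` layout are assumptions, the real values must come from your ucf24 config, and `model` is assumed to be a YOWO instance loaded from a ucf24-trained checkpoint.

```python
import cv2
import numpy as np
import torch

# Assumed values; take the actual clip length and input size from the ucf24 config.
CLIP_LEN = 16
INPUT_SIZE = 224

@torch.no_grad()
def run_on_video(model, video_path):
    """Slide a CLIP_LEN-frame window over the video and yield raw model outputs."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        rgb = cv2.resize(rgb, (INPUT_SIZE, INPUT_SIZE)).astype(np.float32) / 255.0
        frames.append(rgb)
        if len(frames) > CLIP_LEN:
            frames.pop(0)
        if len(frames) == CLIP_LEN:
            clip = torch.from_numpy(np.stack(frames)).permute(3, 0, 1, 2)  # (C, T, H, W)
            clip = clip.unsqueeze(0)                                       # (1, C, T, H, W)
            preds = model(clip)  # decode boxes/scores with the ucf24 (24-class) head
            yield frame_bgr, preds
    cap.release()
```

The AVA-specific decoding and label mapping in test_video_ava.py would then be replaced with ucf24's 24 action classes and whatever post-processing (e.g. NMS) the ucf24 evaluation uses.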

MoaazAbdulrahman commented 3 years ago

@okankop Thank you. I just need to know whether I need to perform any preprocessing on the input images other than resizing. I am working on building inference for the ucf24 dataset.
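Whatever the answer, the preprocessing at inference has to match what the training pipeline applied. As an illustration only (these are assumptions, not confirmed repo settings), a per-frame transform might look like this, with mean/std normalization added only if training used it:

```python
import cv2
import numpy as np
import torch

# Assumed settings: 224x224 input and [0, 1] scaling. If the ucf24 training
# pipeline also subtracted a mean / divided by a std, apply the same statistics
# here, otherwise detections will degrade.
INPUT_SIZE = 224
MEAN = None   # e.g. (0.485, 0.456, 0.406) if training used ImageNet stats
STD = None    # e.g. (0.229, 0.224, 0.225)

def preprocess_frame(frame_bgr):
    """OpenCV BGR frame -> float32 RGB tensor of shape (3, H, W)."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    rgb = cv2.resize(rgb, (INPUT_SIZE, INPUT_SIZE)).astype(np.float32) / 255.0
    if MEAN is not None and STD is not None:
        rgb = (rgb - np.array(MEAN, dtype=np.float32)) / np.array(STD, dtype=np.float32)
    return torch.from_numpy(rgb).permute(2, 0, 1)
```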

ronnie659 commented 2 years ago

@MoaazAbdulrahman were you able to do that?