mzolfaghari / ECO-efficient-video-understanding

Code and models of paper " ECO: Efficient Convolutional Network for Online Video Understanding", ECCV 2018
MIT License

frames for Kinetics dataset #17

Closed sophia-wright-blue closed 6 years ago

sophia-wright-blue commented 6 years ago

Hello,

In the script 'create_list_kinetics.m', you have the following path:

path_DB_rgb='/datasets/kinetics/train/db_frames//'

I'm assuming this folder contains the frames for the kinetics videos. Are the frames for the videos available online somewhere, or is there a script available to split the videos into the frames?

I tried running 'main.py' from your PyTorch implementation and got the following error:

when running

---> 20 for i, (input, target) in enumerate(train_loader):

......

FileNotFoundError: [Errno 2] No such file or directory: '/kinetics/pumping_gas/ib5PzcBeYIc_000004_000014/0004.jpg'

Thanks,

mzolfaghari commented 6 years ago

Hi @sophia-wright-blue

Yes, you need to extract the frames. For some datasets you can find pre-extracted frames online, but I don't think that's the case for Kinetics. To extract frames you can use this script.

Please let me know if you still have problems extracting frames.
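(A hedged sketch, not code from the repository.) The linked extraction script is a shell wrapper around ffmpeg; the same idea in Python might look like the following, where the 25 fps default, the JPEG quality setting, and the `%04d.jpg` naming pattern are all assumptions that should be matched against what the training lists expect:

```python
import subprocess

def build_ffmpeg_cmd(video_path, out_dir, fps=25):
    # Resample the video to a fixed frame rate and dump zero-padded JPEGs.
    # fps=25, JPEG quality 2, and the 0001.jpg pattern are assumptions,
    # not values taken from the repository's extraction script.
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps={fps}",
        "-qscale:v", "2",
        f"{out_dir}/%04d.jpg",
    ]

def extract_frames(video_path, out_dir, fps=25):
    # Raises CalledProcessError if ffmpeg fails (e.g. corrupt video).
    subprocess.run(build_ffmpeg_cmd(video_path, out_dir, fps), check=True)
```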

sophia-wright-blue commented 6 years ago

Thank you so much for your response, and thank you for your patience with the basic questions.

I'm trying to use your PyTorch code from scratch. To clarify, here are the steps:

1). Download the Kinetics dataset. To do this, I've found the following link:

https://github.com/activitynet/ActivityNet/tree/master/Crawler/Kinetics

Do you have a better script for doing this step?

2). Once I have downloaded all of the videos into a folder, I need to extract the frames from the videos. You have this script:

https://github.com/mzolfaghari/chained-multistream-networks/blob/master/scripts/extract_frames_frmRate.sh

I'd greatly appreciate your help with the exact command required to run the script to extract frames for all the videos (and how many frames should be extracted?). I have the videos downloaded in a folder named kinetics/videos and would like to extract the frames to the folder kinetics/frames.
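The batch step described above could be sketched as follows. This is not the repository's script: it assumes a `kinetics/videos/<class>/<clip>.mp4` layout and mirrors it to `kinetics/frames/<class>/<clip>/0001.jpg`, which is only a guess at the layout `create_list_kinetics.m` expects:

```python
import os
import subprocess

def frame_dir_for(video_path, videos_root, frames_root):
    # Map videos_root/<class>/<clip>.mp4 to frames_root/<class>/<clip>/
    rel = os.path.relpath(video_path, videos_root)
    stem, _ = os.path.splitext(rel)
    return os.path.join(frames_root, stem)

def extract_all(videos_root="kinetics/videos",
                frames_root="kinetics/frames", fps=25):
    # Walk the video tree and extract frames for every .mp4 found.
    for dirpath, _, files in os.walk(videos_root):
        for name in files:
            if not name.endswith(".mp4"):
                continue
            video = os.path.join(dirpath, name)
            out_dir = frame_dir_for(video, videos_root, frames_root)
            os.makedirs(out_dir, exist_ok=True)
            subprocess.run(
                ["ffmpeg", "-i", video, "-vf", f"fps={fps}",
                 os.path.join(out_dir, "%04d.jpg")],
                check=True,
            )
```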

3). Once the frames have been extracted, do we need to create the training and testing lists by running the script:

https://github.com/mzolfaghari/ECO-efficient-video-understanding/blob/master/scripts/create_lists/create_list_kinetics.m

or is there an equivalent python script?
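For anyone looking for a Python starting point: the .m script essentially writes one line per clip. A rough sketch under the assumption of a `<frame_dir> <num_frames> <label>` three-column format (modeled on common TSN-style list files; check it against the dataset loader in the PyTorch code before using):

```python
import os

def create_list(frames_root, classes, out_file):
    """Write one '<frame_dir> <num_frames> <label>' line per clip.

    `classes` maps class-folder name -> integer label. The three-column
    format is an assumption, not taken from create_list_kinetics.m.
    """
    with open(out_file, "w") as f:
        for cls, label in sorted(classes.items()):
            cls_dir = os.path.join(frames_root, cls)
            if not os.path.isdir(cls_dir):
                continue
            for clip in sorted(os.listdir(cls_dir)):
                clip_dir = os.path.join(cls_dir, clip)
                n = len([x for x in os.listdir(clip_dir)
                         if x.endswith(".jpg")])
                if n > 0:  # skip clips whose extraction failed
                    f.write(f"{clip_dir} {n} {label}\n")
```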

4). Once we have the frames extracted and lists created, we can run the script:

https://github.com/mzolfaghari/ECO-pytorch/blob/master/scripts/run_ECOLite_kinetics.sh

This would give us the trained model.

5). Use the trained model for inference on some test videos.

Once again, greatly appreciate your help and guidance with this.

mzolfaghari commented 6 years ago

@sophia-wright-blue

1. We used the same scripts!
2. The necessary scripts are provided in this folder. Please check the code.
3. We don't have a Python script for this.
4. Yes, after extracting the frames and creating the lists you can run the code and get the final model.
5. Yes.

sophia-wright-blue commented 6 years ago

thank you so much @mzolfaghari !

I hate to keep bothering you about this. Do you have any scripts to help with step 5), to do inference on arbitrary mp4 files, or a PyTorch equivalent of

https://github.com/mzolfaghari/ECO-efficient-video-understanding/blob/master/scripts/online_recognition/online_recognition.py
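For offline inference, one piece of the pipeline can be sketched independently of the model checkpoint: sampling one frame per uniform segment from an extracted clip, mirroring the TSN-style segment sampling that ECO builds on (this sampling scheme is an assumption about the test-time pipeline, and the model-loading details are omitted entirely):

```python
def sample_indices(num_frames, num_segments=16):
    """Pick one frame index per uniform segment (the segment's center).

    Indices are 1-based to match frame files named 0001.jpg, 0002.jpg, ...
    """
    if num_frames < num_segments:
        # Clip shorter than the segment count: repeat the last frame.
        return [min(i, num_frames - 1) + 1 for i in range(num_segments)]
    seg_len = num_frames / num_segments
    return [int(seg_len * i + seg_len / 2) + 1 for i in range(num_segments)]
```

The selected frames would then be resized, normalized, stacked, and passed through the trained network in evaluation mode.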