hassony2 / kinetics_i3d_pytorch

Inflated i3d network with inception backbone, weights transferred from TensorFlow
MIT License

About extracting features from my dataset #30

Open galaxysan opened 5 years ago

galaxysan commented 5 years ago

Hi, I want to ask how I can use this code to extract features from my own video dataset. The input to your code is a .npy file, but how do I produce that .npy file from my own videos? I can't find any data preprocessing code in this repo.

Thanks

SJYbetter commented 5 years ago

Hello, this is common: not all of the code was pushed to this repo, so you need to preprocess the videos you want to run yourself. This link may help: https://scikit-image.org/docs/dev/user_guide/video.html You can convert the video to frames with ffmpeg, then convert the frames to a .npy file.
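A minimal sketch of that pipeline, for reference. The file names are hypothetical, and the 224x224 center crop, the rescaling to [-1, 1], and the (1, T, H, W, 3) layout are assumptions based on the original Kinetics preprocessing; check the repo's demo script (e.g. i3d_pt_demo.py) for the exact shape it expects.

```python
# Sketch: video -> JPEG frames with ffmpeg -> stacked numpy array -> .npy
import glob
import os
import subprocess

import numpy as np
from PIL import Image

video_path = "my_video.mp4"   # hypothetical input video
frame_dir = "frames"          # hypothetical output folder

# 1. Video -> JPEG frames with ffmpeg (resize height to 256, keep aspect ratio).
os.makedirs(frame_dir, exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", video_path, "-vf", "scale=-2:256",
     f"{frame_dir}/frame_%05d.jpg"],
    check=True,
)

# 2. Frames -> numpy array: center-crop to 224x224, rescale pixels to [-1, 1].
frames = []
for path in sorted(glob.glob(f"{frame_dir}/frame_*.jpg")):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    frames.append(np.asarray(img, dtype=np.float32) / 255.0 * 2 - 1)

clip = np.stack(frames)[None]        # (1, T, 224, 224, 3)
np.save("my_video_rgb.npy", clip)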

galaxysan commented 5 years ago

Thanks. I saw that PyAV's API can help me convert the images to .npy. Since the input to I3D is RGB features and flow features, I think I should have two files per video, one with the RGB features and one with the flow features, right? So the output of PyAV's API is just a type conversion of the images, right? How can I get the two feature files? Looking forward to your reply, thanks.
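For the PyAV route, a hedged sketch of what that "type conversion" looks like: decoding gives you raw RGB frames as numpy arrays, which only covers the RGB stream; the flow stream still has to be computed separately from consecutive frames (see the next comment). The file names are hypothetical, and resizing/cropping is omitted here.

```python
# Sketch: decode a video with PyAV into an RGB numpy array and save it as .npy
import av
import numpy as np

container = av.open("my_video.mp4")
frames = [
    frame.to_ndarray(format="rgb24")       # (H, W, 3) uint8 per frame
    for frame in container.decode(video=0)
]
rgb = np.stack(frames).astype(np.float32) / 255.0 * 2 - 1   # rescale to [-1, 1]
np.save("my_video_rgb.npy", rgb[None])     # add a batch dimension
```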

ed-fish commented 4 years ago

Hi, check out this issue on the TensorFlow repo, which has a notebook for the optical flow preprocessing in Python: https://github.com/deepmind/kinetics-i3d/issues/87
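As a rough sketch of what that notebook does, the flow stream is typically computed with TV-L1 optical flow on consecutive greyscale frames, with values truncated to [-20, 20] and rescaled to [-1, 1], as described in the DeepMind kinetics-i3d README. This assumes opencv-contrib-python is installed (for the cv2.optflow module) and that frames have already been decoded and resized (e.g. as above); the function name is mine.

```python
# Sketch: TV-L1 optical flow between consecutive frames, saved as the flow stream
import cv2
import numpy as np

def compute_tvl1_flow(frames_uint8):
    """frames_uint8: (T, H, W, 3) uint8 RGB frames; returns (T-1, H, W, 2) flow."""
    tvl1 = cv2.optflow.createOptFlow_DualTVL1()
    grey = [cv2.cvtColor(f, cv2.COLOR_RGB2GRAY) for f in frames_uint8]
    flows = []
    for prev, curr in zip(grey[:-1], grey[1:]):
        flow = tvl1.calc(prev, curr, None)       # (H, W, 2) float32
        flow = np.clip(flow, -20, 20) / 20.0     # truncate, then rescale to [-1, 1]
        flows.append(flow)
    return np.stack(flows)

# flow = compute_tvl1_flow(frames)           # frames: uint8 RGB array
# np.save("my_video_flow.npy", flow[None])   # add a batch dimension for the model
```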