YongyiTang92 opened this issue 6 years ago
Ping! Any update on this?
I found that the pretrained model works better with flow images extracted after resizing the RGB frames. I also used OpenCV 3.3 (4.0 may work too) instead of 3.4, since I found some differences in cv::cuda::OpticalFlowDual_TVL1 between versions. With this I got comparable accuracy for the fusion results, while the flow-only results are still slightly worse.
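For reference, a minimal sketch of the resize-before-flow step I mean (the helper name is just illustrative): I resize so the shorter side is 256 pixels and only then compute flow.

```python
import cv2

def resize_shorter_side(frame, target=256):
    # Resize so the shorter side is `target` pixels, preserving aspect ratio.
    h, w = frame.shape[:2]
    scale = target / min(h, w)
    return cv2.resize(frame, (round(w * scale), round(h * scale)),
                      interpolation=cv2.INTER_LINEAR)
```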
Thank you very much for the feedback. This is helpful. I'll look into that.
Has anyone tried calculating optical flow using Python OpenCV? I can't seem to get good results with that preprocessing, but it might also be my lack of understanding about which parameters to use. I'm using OpenCV 4.1.0:

```python
import cv2
import numpy as np

# prev and curr are consecutive grayscale frames
optical_flow = cv2.optflow.createOptFlow_DualTVL1()
flow_frame = optical_flow.calc(prev, curr, None)
flow_frame = np.clip(flow_frame, -20, 20)
flow_frame = flow_frame / 20.0
```
Thanks for any comments!
Actually, the code I used above works fine and produces good results on the example video. But it would still be nice to get pointers if it is missing something from the original preprocessing. Thanks
I think your code is correct, but the Python interface is too slow. Do you have any idea how to speed it up? I actually used the C++ interface of OpenCV for flow extraction.
Yes, it's slow, on my desktop flow calc runs at about 4 fps. I just wanted to reproduce for now. Don't know if it's possible to bring it up to 25 fps with Python - I'd guess that it isn't. Is the speed fine with C++?
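If CPU throughput is the main bottleneck, one workaround (not from the original pipeline, just a suggestion) is to parallelize across frame pairs, since each TV-L1 call only depends on its own pair of frames. A rough sketch, with illustrative names:

```python
import cv2
import numpy as np
from multiprocessing import Pool

def flow_for_pair(pair):
    prev, curr = pair  # consecutive grayscale frames
    # Each worker builds its own TV-L1 instance; OpenCV objects don't pickle.
    tvl1 = cv2.optflow.createOptFlow_DualTVL1()
    flow = tvl1.calc(prev, curr, None)
    return np.clip(flow, -20, 20) / 20.0

def flows_parallel(gray_frames, workers=8):
    pairs = list(zip(gray_frames[:-1], gray_frames[1:]))
    with Pool(workers) as pool:
        return pool.map(flow_for_pair, pairs)
```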
Hello, does anyone know how the frame sampling was done? Is it just nearest sampling?
It is fine with C++: it runs at about 40-50 fps on a 2080 Ti with the OpenCV CUDA interface.
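For anyone who wants to stay in Python: a CUDA-enabled OpenCV build exposes the same TV-L1 implementation through the cv2.cuda bindings. A minimal sketch, assuming the flattened class name cv2.cuda_OpticalFlowDual_TVL1 (the exact binding name can differ between OpenCV versions, so check your build):

```python
import cv2

# Requires an OpenCV build with CUDA enabled; the class name below is an
# assumption -- verify it against your own build.
tvl1 = cv2.cuda_OpticalFlowDual_TVL1.create()

def gpu_flow(prev_gray, curr_gray):
    g_prev, g_curr = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
    g_prev.upload(prev_gray)
    g_curr.upload(curr_gray)
    flow = tvl1.calc(g_prev, g_curr, None)
    return flow.download()  # HxWx2 float32 flow field
```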
> Hello, does anyone know how the frame sampling was done? Is it just nearest sampling?

Hi, do you have any ideas? I am also wondering how the video resampling is done...
Hi,
the preprocessing code has been released as part of mediapipe, see here: https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/media_sequence
Best,
Joao
Just to reply to the resampling question: this is what I did with my videos, which were originally 1280x720 at 30 fps (using ffmpeg on the Ubuntu command line):

```
ffmpeg -y -r 30 -i input.avi -r 25 -filter:v scale=456:256 -sws_flags bilinear output.avi
```

The output should be a bilinearly interpolated video at 25 fps with the smaller side of the video at 256 px, as described in the paper and/or README file.
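Note that scale=456:256 is specific to 16:9 inputs like mine. For landscape videos of other sizes, a variant like the following (scale=-2:256 lets ffmpeg pick an even width that preserves the aspect ratio) should give a smaller side of 256:

```
ffmpeg -y -i input.avi -r 25 -filter:v "scale=-2:256" -sws_flags bilinear output.avi
```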
If I follow the steps and use kinetics_dataset.py on v_CricketShot_g04_c01.avi, I do not get RGB and flow files. Please do elaborate on how to preprocess the .avi file to generate the RGB and flow data.
Thanks for your help
I think Jiuqiang may be able to help.
Joao
Thanks Joao. Just to clarify, I am following the steps outlined under "custom videos in the Kinetics format": I change VIDEO_PATH to point to the .avi, build media_sequence_demo, and run kinetics_dataset.py. I do see an output file kinetics_700_custom_25fps_rgb_flow-00000-of-00001, but I am not sure about the next step.
I was hoping it would generate the RGB and flow files in a format that I can then use as input to evaluate_sample. Not sure if that is the intent of the release of the preprocessing code.
I can successfully generate the tfrecord file for v_CricketShot_g04_c01.avi. Please see https://github.com/google/mediapipe/issues/257#issuecomment-555654883 for the details. Thanks!
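To go from that tfrecord back to the numpy arrays that evaluate_sample expects, something along these lines should work. The feature-list keys below are assumptions based on mediapipe's media_sequence format, so print one parsed SequenceExample to confirm them for your build:

```python
import cv2
import numpy as np
import tensorflow as tf

# Assumed feature-list keys -- inspect one record to verify them.
RGB_KEY = "image/encoded"
FLOW_KEY = "FORWARD_FLOW/image/encoded"

def decode_frames(record_path, key):
    """Yield one (num_frames, H, W, C) array per SequenceExample."""
    for raw in tf.data.TFRecordDataset(record_path):
        seq = tf.train.SequenceExample.FromString(raw.numpy())
        frames = [
            cv2.imdecode(np.frombuffer(f.bytes_list.value[0], np.uint8),
                         cv2.IMREAD_UNCHANGED)
            for f in seq.feature_lists.feature_list[key].feature
        ]
        yield np.stack(frames)
```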
Hi, I would like to know how to preprocess Kinetics-400 to reproduce the results. I found that extracting TV-L1 flow before rescaling the RGB images leads to worse flow recognition accuracy. So, currently, I first resample the videos at 25 fps, then extract the RGB frames and resize them so the shorter side is 256 pixels. I am using the OpenCV 3.4 version of cv::cuda::OpticalFlowDual_TVL1 for flow extraction on the resized grayscale frames. All pixel values are rescaled as mentioned in the project. Am I missing any details in this preprocessing procedure, or is this the right way to extract the optical flow? Thanks.
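Concretely, the pixel rescaling step I mean follows the README: RGB rescaled to [-1, 1], and flow truncated to [-20, 20] and then rescaled to [-1, 1]. A minimal sketch of just that step:

```python
import numpy as np

def rescale_rgb(frame_uint8):
    # RGB pixels from [0, 255] to [-1, 1], as described in the README.
    return frame_uint8.astype(np.float32) / 255.0 * 2.0 - 1.0

def rescale_flow(flow):
    # Truncate flow to [-20, 20], then rescale to [-1, 1].
    return np.clip(flow, -20.0, 20.0) / 20.0
```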