RomeroBarata / skeleton_based_anomaly_detection

Code for the CVPR'19 paper "Learning Regularity in Skeleton Trajectories for Anomaly Detection in Videos"

Dear sir: how do you extract your optical flow files #5

Open answer123answe opened 5 years ago

answer123answe commented 5 years ago

Dear Sir: I read your paper and code. First of all, thank you very much for sharing the code. I am also doing research on related topics and would like to test on my own videos, but I am not sure how the optical flow files (.csv) are calculated in your method. Could you share your code for the optical flow calculations? Thank you very much!

RomeroBarata commented 5 years ago

Hi,

The .csv files are created by tracking the people in the video. Since we do tracking by detection, the optical flow is used only as a criterion to aid the tracking process. Sparse optical flow is available in the OpenCV library and you can find usage examples here: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_lucas_kanade.html (use the keypoints of the skeleton as the input features to the algorithm)
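A minimal sketch of that idea (not the authors' released code): propagate the skeleton keypoints from one frame to the next with OpenCV's sparse Lucas-Kanade optical flow, as in the linked tutorial. The `keypoints` array here is a hypothetical (N, 2) array of joint coordinates from a pose detector.

```python
import cv2
import numpy as np

def predict_keypoints(prev_gray, curr_gray, keypoints):
    """Predict where skeleton keypoints move in the next frame.

    prev_gray, curr_gray: consecutive grayscale frames.
    keypoints: (N, 2) array of (x, y) joint coordinates (hypothetical input).
    Returns the predicted (N, 2) positions and an (N,) boolean status mask.
    """
    pts = keypoints.reshape(-1, 1, 2).astype(np.float32)
    # Sparse Lucas-Kanade optical flow on the skeleton keypoints only.
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
    )
    return next_pts.reshape(-1, 2), status.reshape(-1).astype(bool)
```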

I should be able to share the tracking code, but I'm fairly busy at the moment so it might take a while for me to do that.

Kind regards, Romero

answer123answe commented 5 years ago

Dear Romero: First of all, thank you very much for replying despite your busy schedule. Your advice and guidance have helped me a lot. I hope you can share your tracking code when you are less busy. Thank you very much! Best wishes!

Dr. Zhang


roystonrodrigues commented 5 years ago

Dear Romero, thanks for your wonderful work on anomaly detection using skeleton trajectories. We would be very happy if you could add the tracking code to the repository (the step after poses have been extracted with AlphaPose). We have read your paper but could not find many details about the tracking procedure. If you could share some pointers on how the tracking should be done after extracting poses with AlphaPose, that would also be very helpful.

Thanks, Royston

RomeroBarata commented 5 years ago

Hi everyone,

I'm sorry I still haven't been able to put the tracking code online.

The tracking is done by essentially solving an assignment problem. We have skeletons in the current frame and a list of skeletons in past frames, and we want to assign scores between the skeletons in the past frames and the skeletons in the current frame. As long as we have a way of getting these scores, we can put them all in a matrix (skeletons in past frames against skeletons in the current frame) and call the Hungarian algorithm to return the best assignment. If I recall correctly, the way I computed the score was by creating small bounding boxes (e.g. 20x20 pixels) around the keypoints of a skeleton and checking whether the keypoints of the skeleton predicted by sparse optical flow fell into those boxes. For each keypoint on which the detection and the prediction agreed I gave a score of 1, and 0 otherwise. This is roughly how I implemented the tracking.
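A rough sketch of that matching step, under stated assumptions and not the released code: count, for each (past skeleton, current skeleton) pair, the keypoints whose optical-flow prediction lands inside a small box around the detected keypoint, then solve the assignment with `scipy.optimize.linear_sum_assignment` (a Hungarian-style solver). The array shapes and the `box_size` default are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_skeletons(predicted, detected, box_size=20):
    """Match past skeletons to current detections.

    predicted: (P, K, 2) keypoints propagated from past frames by sparse optical flow.
    detected:  (C, K, 2) keypoints detected in the current frame.
    Returns a list of (past_index, current_index) matches.
    """
    half = box_size / 2.0
    scores = np.zeros((len(predicted), len(detected)))
    for i, pred in enumerate(predicted):
        for j, det in enumerate(detected):
            # Score 1 per keypoint whose prediction falls inside the box
            # centred on the detected keypoint, 0 otherwise.
            inside = np.all(np.abs(pred - det) <= half, axis=1)
            scores[i, j] = inside.sum()
    # The solver minimises cost, so negate the agreement scores.
    rows, cols = linear_sum_assignment(-scores)
    return [(r, c) for r, c in zip(rows, cols) if scores[r, c] > 0]
```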

I can't really say when I'll be able to put the code online, so in the meantime I'd recommend having a look at some other tracking frameworks, e.g. https://github.com/Guanghan/lighttrack

Kind regards, Romero