timctho / convolutional-pose-machines-tensorflow

Apache License 2.0

Dear Author, how can I run the demo_cpm_hand with a video? #51

Open lilyswang opened 5 years ago

lilyswang commented 5 years ago

Dear Author, how can I run the demo_cpm_hand with a video? Thanks.

DilaraAlbayrak commented 5 years ago

Hi, I am also trying to do that. Have you found a way to get it working? Could you, or anyone else who knows how to give a video as input and get an appropriate output, share how?

asifzhcet11 commented 5 years ago

Hi @lilyswang and @DilaraSina ,

In theory it's possible. In this repo, a Kalman filter is implemented to track the hands, and the initial position of the hand is assumed to be at the center of the frame. So if the hand in the input video is at the center initially, or at any point in time, it can be detected. However, there is another approach you can use, achieved with the following steps.

1) Detect the hand in the frame (using body pose estimation or another hand detection algorithm).
2) Crop, zoom, and pad the hand area so that the hand covers the maximum area of the crop and the cropped image is 256 x 256 pixels.
3) Pass the crop to the joint point detector.
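Step 2 above can be sketched roughly as follows. This is just a minimal illustration, not code from this repo: `crop_hand` is a hypothetical helper, the bounding box is assumed to come from whatever detector you use in step 1, and the resize is a naive nearest-neighbour one (in practice you would use `cv2.resize`):

```python
import numpy as np

def crop_hand(frame, bbox, out_size=256):
    """Crop a square patch around a detected hand box and resize it.

    frame: H x W x 3 uint8 image; bbox: (x, y, w, h) from any hand detector.
    The box is expanded to a square (so the hand covers the maximum area),
    padded with zeros where it falls outside the frame, and resized to
    out_size x out_size with nearest-neighbour sampling.
    """
    x, y, w, h = bbox
    side = max(w, h)                       # square crop around the hand
    cx, cy = x + w // 2, y + h // 2        # box centre
    x0, y0 = cx - side // 2, cy - side // 2

    patch = np.zeros((side, side, 3), dtype=frame.dtype)
    # intersection of the square crop with the frame (zero-pad the rest)
    fx0, fy0 = max(x0, 0), max(y0, 0)
    fx1 = min(x0 + side, frame.shape[1])
    fy1 = min(y0 + side, frame.shape[0])
    patch[fy0 - y0:fy1 - y0, fx0 - x0:fx1 - x0] = frame[fy0:fy1, fx0:fx1]

    # naive nearest-neighbour resize to the network input size
    idx = np.arange(out_size) * side // out_size
    return patch[idx][:, idx]
```

The result is a 256 x 256 x 3 image ready for step 3.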

Hope it will help you :)

Thanks, Asif

DilaraAlbayrak commented 5 years ago

Thanks for your answer. The Readme file says 'You can also use video files like .avi, .mp4, .flv'. My understanding is that we can pass files with these extensions and get an output; how can we do that?

asifzhcet11 commented 5 years ago

Hi @DilaraSina,

If you just use the model from the code, then you can pass any image you want and get an output. For example, read the video using the OpenCV library and pass each frame to the model, meeting the model's input criteria, i.e. a 256 x 256 crop of the hand in which the hand covers the maximum area.
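A rough sketch of that per-frame loop, under stated assumptions: `run_on_frames`, `detect_hand`, and `predict_joints` are all hypothetical placeholders for your own hand detector and CPM joint model, not functions from this repo, and the resize is a simplified nearest-neighbour stand-in for proper preprocessing:

```python
import numpy as np

def run_on_frames(frames, detect_hand, predict_joints, size=256):
    """Run a hand detector + joint predictor over a stream of frames.

    frames: any iterable of H x W x 3 arrays. With OpenCV you would
    typically build it like:
        cap = cv2.VideoCapture("hands.mp4")   # .avi / .mp4 / .flv etc.
        frames = iter(lambda: cap.read()[1], None)
    detect_hand(frame) -> (x, y, w, h) or None; predict_joints(crop)
    is whatever your joint point model exposes.
    """
    results = []
    for frame in frames:
        bbox = detect_hand(frame)
        if bbox is None:                 # no hand found in this frame
            results.append(None)
            continue
        x, y, w, h = bbox
        crop = frame[y:y + h, x:x + w]
        # naive nearest-neighbour resize to the size x size model input
        ri = np.arange(size) * crop.shape[0] // size
        ci = np.arange(size) * crop.shape[1] // size
        results.append(predict_joints(crop[ri][:, ci]))
    return results
```

The same loop works for a webcam (`cv2.VideoCapture(0)`) or a file path, which is presumably what the Readme's list of supported video extensions refers to.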

Many Thanks,

Asif