Demo on YouTube
Paper Abstract
Uses Docker
$ docker pull jgravity/tensorflow-opencv:odin
$ docker run -it --name odin jgravity/tensorflow-opencv:odin /bin/bash
# git clone https://github.com/PJunhyuk/people-counting-pose
# cd people-counting-pose
# chmod u+x ./compile.sh && ./compile.sh
# cd models/coco && chmod u+x ./download_models_wget.sh && ./download_models_wget.sh && cd -
# cd testset && chmod u+x ./download_testset_wget.sh && ./download_testset_wget.sh && cd -
# python video_tracking.py -f '{video_file_name}'
Supported video types: mov, mp4
Put the target video file in the ./testset folder.
-f, --videoFile = Path to Video File
-w, --videoWidth = Width of Output Video
-o, --videoType = Extension of Output Video
# python video_tracking.py -f 'test_video_01f.mov'
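For reference, here is a minimal sketch of how the -f/-w/-o options above could be wired with argparse and OpenCV. This is an illustration only, not the actual video_tracking.py; the default width of 640 and the output file name are assumptions.

import argparse
import os

import cv2

# Hypothetical option parsing matching the -f/-w/-o flags listed above.
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--videoFile', required=True,
                    help='Path to Video File (inside ./testset)')
parser.add_argument('-w', '--videoWidth', type=int, default=640,
                    help='Width of Output Video (default is an assumption)')
parser.add_argument('-o', '--videoType', default='mp4',
                    help='Extension of Output Video')
args = parser.parse_args()

cap = cv2.VideoCapture(os.path.join('testset', args.videoFile))
fps = cap.get(cv2.CAP_PROP_FPS)  # OpenCV 3+ constant
ok, frame = cap.read()
if not ok:
    raise SystemExit('Could not read ' + args.videoFile)

# Scale the output to the requested width, keeping the aspect ratio.
scale = float(args.videoWidth) / frame.shape[1]
size = (args.videoWidth, int(frame.shape[0] * scale))

out = cv2.VideoWriter(os.path.join('testset', 'output.' + args.videoType),
                      cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
while ok:
    out.write(cv2.resize(frame, size))  # tracking/drawing would happen here
    ok, frame = cap.read()
cap.release()
out.release()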
To copy the resulting video out of the container, run on the host:
> docker cp odin:/people-counting-pose/testset/{video_file_name} ./
# python video_pose.py -f '{video_file_name}'
Supported video types: mov, mp4
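As a rough illustration of that restriction, a check like the following could gate the input file. The helper is hypothetical, not code from this repository.

import os

SUPPORTED_TYPES = ('.mov', '.mp4')

def check_video_type(video_file_name):
    # Reject anything that is not mov/mp4, per the note above.
    ext = os.path.splitext(video_file_name)[1].lower()
    if ext not in SUPPORTED_TYPES:
        raise ValueError('Unsupported video type: ' + ext)

check_video_type('test_video_01f.mov')  # passes silently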
Use the Docker image jgravity/tensorflow-opencv, or install the required dependencies yourself.
Check results_log
@inproceedings{insafutdinov2017cvpr,
    title = {ArtTrack: Articulated Multi-person Tracking in the Wild},
    booktitle = {CVPR'17},
    url = {http://arxiv.org/abs/1612.01465},
    author = {Eldar Insafutdinov and Mykhaylo Andriluka and Leonid Pishchulin and Siyu Tang and Evgeny Levinkov and Bjoern Andres and Bernt Schiele}
}

@inproceedings{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele}
}
pose-tensorflow - Human pose estimation with the TensorFlow framework
object-tracker - Object tracker written in Python using dlib and OpenCV
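As an illustration of the pattern object-tracker builds on, here is a minimal dlib correlation-tracker loop over OpenCV frames. The initial bounding box is a made-up example; in the real pipeline it would come from the pose detector.

import cv2
import dlib

cap = cv2.VideoCapture('testset/test_video_01f.mov')
ok, frame = cap.read()

# dlib's correlation tracker works on RGB images; OpenCV reads BGR.
tracker = dlib.correlation_tracker()
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
tracker.start_track(rgb, dlib.rectangle(100, 100, 200, 300))  # made-up box

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tracker.update(rgb)  # returns a confidence score
    pos = tracker.get_position()  # float-coordinate rectangle
    cv2.rectangle(frame, (int(pos.left()), int(pos.top())),
                  (int(pos.right()), int(pos.bottom())), (0, 255, 0), 2)
cap.release()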