
Towards Good Practices for Deep 3D Hand Pose Estimation

By Hengkai Guo (Updated on Aug 9, 2017)

Description

This is the project page for the works "Region Ensemble Network: Improving Convolutional Network for Hand Pose Estimation" and "Towards Good Practices for Deep 3D Hand Pose Estimation". This repository includes the prediction results for comparison, the prediction code and the visualization code. More details will be released in the future. Here are live results from a Kinect 2 sensor using the model trained on ICVL:

(See result1.gif and result2.gif for live demo results.)

Results

Here we provide the testing results of the basic network (results/dataset_basic.txt) and the region ensemble network (results/dataset_ren_nx6x6.txt) for the ICVL, NYU and MSRA datasets in our paper. We also provide the testing labels (labels/dataset_test_label.txt), the computed centers (labels/dataset_center.txt, which can be computed with evaluation/get_centers.py) and the corresponding image names (labels/dataset_test_list.txt). Currently, the MSRA center computation is not available because the image loading function is missing.

For results and labels, each line corresponds to one image and contains J x 3 numbers, the (x, y, z) coordinates of the J joint locations. The (x, y) values are in pixels and z is in mm.
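
The following is a minimal sketch (not part of this repository) of how such a file can be parsed with numpy; the helper name load_joints is illustrative only.

import numpy as np

def load_joints(path):
    # Each row holds J x 3 values: (x, y, z) for every joint,
    # with x and y in pixels and z in mm.
    data = np.loadtxt(path)
    # -> shape (num_images, J, 3); J is inferred from the row length
    return data.reshape(data.shape[0], -1, 3)

joints = load_joints('results/icvl_ren_9x6x6.txt')
print(joints.shape)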

Evaluation

Please use the Python script evaluation/compute_error.py for evaluation, which requires numpy and matplotlib. Here is an example:

$ python evaluation/compute_error.py icvl results/icvl_ren_9x6x6.txt
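
For illustration only, a simplified mean per-joint error can be computed directly on the stored (x, y, z) values as below (ICVL uses 16 joints); compute_error.py remains the reference implementation and may process the coordinates differently.

import numpy as np

pred = np.loadtxt('results/icvl_ren_9x6x6.txt').reshape(-1, 16, 3)
gt = np.loadtxt('labels/icvl_test_label.txt').reshape(-1, 16, 3)
# Euclidean distance per joint, averaged over all joints and images
mean_error = np.linalg.norm(pred - gt, axis=2).mean()
print('mean per-joint error:', mean_error)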

Visualization

Please use the Python script evaluation/show_result.py for visualization, which also requires OpenCV:

$ python evaluation/show_result.py icvl your/path/to/ICVL/images/test/Depth --in_file=results/icvl_ren_4x6x6.txt

This displays the testing results overlaid on the images. Press 'q' to exit.
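
For reference, a rough sketch of drawing joints on a depth image with OpenCV is shown below; it is an illustration, not the actual show_result.py.

import cv2
import numpy as np

def draw_pose(depth_img, joints_uvz):
    # Normalize the depth map for display and mark each (x, y) joint location.
    vis = cv2.normalize(depth_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR)
    for x, y, _ in joints_uvz:
        cv2.circle(vis, (int(x), int(y)), 3, (0, 0, 255), -1)
    return vis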

Prediction

Please use the Python script evaluation/run_model.py for prediction with the predefined centers in the labels directory:

$ python evaluation/run_model.py icvl ren_4x6x6 your/path/to/output/file your/path/to/ICVL/images/test

The script depends on pyCaffe. Please install Caffe first.
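
As a rough orientation, prediction with pyCaffe follows the pattern below; the file names and the 'data' blob name are assumptions for illustration and may differ from what run_model.py actually uses.

import caffe
import numpy as np

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()
net = caffe.Net('models/deploy_ren_4x6x6.prototxt',   # network definition (assumed name)
                'models/icvl_ren_4x6x6.caffemodel',   # trained weights (assumed name)
                caffe.TEST)

# A preprocessed depth crop around the hand center, shaped like the input blob.
crop = np.zeros(net.blobs['data'].data.shape, dtype=np.float32)
net.blobs['data'].data[...] = crop
output = net.forward()  # dict: output blob name -> predicted joint coordinates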

Models

The Caffe models can be downloaded from BaiduYun, GoogleDrive or here. Please put them in the models directory. (For MSRA, we only provide the model for fold 1 due to memory limits.)

Realsense Realtime Demo

We provide a realtime hand pose estimation demo using an Intel Realsense device.

Using pyrealsense

If you are using pyrealsense v0.x or v1.x, please run the following script for the demo:

$ python demo/realsense_realtime_demo_pyrealsense_1.x.py

If you are using pyrealsense v2.0 or above, please run the following script for the demo:

$ python demo/realsense_realtime_demo_pyrealsense_2.x.py

Using librealsense

First, compile and install the Python wrapper. Once everything is working properly, run the following script for the demo:

$ python demo/realsense_realtime_demo_librealsense2.py

Note that we only use a naive depth thresholding method to detect the hand, so the hand must stay within the range [0, 650mm] from the camera for this demo to work. We tested this realtime demo with an Intel Realsense SR300.
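
A sketch of this kind of naive depth thresholding is given below; the function name and the centroid logic are illustrative assumptions rather than the demo's actual code.

import numpy as np

def detect_hand_center(depth_mm, max_depth=650):
    # Treat all valid pixels closer than max_depth as the hand and
    # use their centroid as the crop center for the pose network.
    mask = (depth_mm > 0) & (depth_mm < max_depth)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean(), depth_mm[mask].mean()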

Citation

Please cite the papers in your publications if they help your research:

@article{guo2017towards,
  title={Towards Good Practices for Deep 3D Hand Pose Estimation},
  author={Guo, Hengkai and Wang, Guijin and Chen, Xinghao and Zhang, Cairong},
  journal={arXiv preprint arXiv:1707.07248},
  year={2017}
}
@article{guo2017region,
  title={Region Ensemble Network: Improving Convolutional Network for Hand Pose Estimation},
  author={Guo, Hengkai and Wang, Guijin and Chen, Xinghao and Zhang, Cairong and Qiao, Fei and Yang, Huazhong},
  journal={arXiv preprint arXiv:1702.02447},
  year={2017}
}

License

This program is free software, released under the GNU General Public License v2.

Feedback

Please email guohengkaighk@gmail.com if you have any suggestions or questions.

History

Feb 11, 2020: Update Google Drive link for models

Aug 9, 2017: Update papers

July 23, 2017: Add script for center computation and results for the new paper

May 22, 2017: Intel Realsense realtime demo

May 15, 2017: More visualization and demos

May 9, 2017: Models and bugs fixed

May 6, 2017: Visualization and prediction codes

April 8, 2017: Evaluation codes