
DeepHandMesh: A Weakly-Supervised Deep Encoder-Decoder Framework for High-Fidelity Hand Mesh Modeling

Introduction

This repo is the official PyTorch implementation of DeepHandMesh: A Weakly-Supervised Deep Encoder-Decoder Framework for High-Fidelity Hand Mesh Modeling (ECCV 2020, Oral).

Demo

DeepHandMesh dataset

Directory

Root

The ${ROOT} directory is organized as below.

${ROOT}
|-- data
|-- common
|-- main
|-- output
|-- demo

Data

You need to follow the directory structure of the data folder as below; a sanity-check sketch follows the tree.

${ROOT}
|-- data
|   |-- images
|   |   |-- subject_1
|   |   |-- subject_2
|   |   |-- subject_3
|   |   |-- subject_4
|   |-- annotations
|   |   |-- 3D_scans_decimated
|   |   |   |-- subject_4
|   |   |-- depthmaps
|   |   |   |-- subject_4
|   |   |-- keypoints
|   |   |   |-- subject_4
|   |   |-- KRT_512
|   |-- hand_model
|   |   |-- global_pose.txt
|   |   |-- global_pose_inv.txt
|   |   |-- hand.fbx
|   |   |-- hand.obj
|   |   |-- local_pose.txt
|   |   |-- skeleton.txt
|   |   |-- skinning_weight.txt

Output

You need to follow the directory structure of the output folder as below; a sketch that creates these folders follows the tree.

${ROOT}
|-- output
|   |-- log
|   |-- model_dump
|   |-- result
|   |-- vis
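
If these folders do not exist yet, a minimal sketch like the following creates them. The folder names come from the tree above; the script itself is an assumption, not part of the repo.

# Hypothetical helper: create the output sub-folders listed above under ${ROOT}.
import os

for folder in ('log', 'model_dump', 'result', 'vis'):
    os.makedirs(os.path.join('output', folder), exist_ok=True)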

Running DeepHandMesh

Prerequisites

Start

Train

In the main folder, run

python train.py --gpu 0-3 --subject 4

to train the network on GPUs 0, 1, 2, and 3. --gpu 0,1,2,3 can be used instead of --gpu 0-3. You can use --continue to resume training. Only subject 4 is supported for training.
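For reference, the --gpu argument accepts either a range ("0-3") or a comma-separated list ("0,1,2,3"). Below is a minimal sketch of how such an argument can be expanded into explicit GPU ids; the helper name and parsing logic are assumptions and are not necessarily what the repo's code does.

# Hypothetical sketch: expand "--gpu 0-3" or "--gpu 0,1,2,3" into "0,1,2,3"
# before setting CUDA_VISIBLE_DEVICES. Not guaranteed to match the repo's code.
import os

def expand_gpu_ids(arg):
    if '-' in arg:
        start, end = arg.split('-')
        return ','.join(str(i) for i in range(int(start), int(end) + 1))
    return arg

os.environ['CUDA_VISIBLE_DEVICES'] = expand_gpu_ids('0-3')  # -> "0,1,2,3"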

Test

Place the trained model at output/model_dump/subject_${SUBJECT_IDX}.

In the main folder, run

python test.py --gpu 0-3 --test_epoch 4 --subject 4

to test the network on GPUs 0, 1, 2, and 3 with snapshot_4.pth.tar. --gpu 0,1,2,3 can be used instead of --gpu 0-3.
Only subject 4 is supported for testing. The script saves images and output meshes.
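
The checkpoint name follows the snapshot_${TEST_EPOCH}.pth.tar pattern. Below is a hedged sketch of how the checkpoint path could be resolved and loaded; the variable names and the checkpoint key are assumptions, not the repo's exact code.

# Hypothetical sketch: locate and load the checkpoint for --test_epoch 4 --subject 4.
# The directory layout follows this README; the loading code itself is an assumption.
import os
import torch

subject_idx, test_epoch = 4, 4
ckpt_path = os.path.join('output', 'model_dump', 'subject_%d' % subject_idx,
                         'snapshot_%d.pth.tar' % test_epoch)
checkpoint = torch.load(ckpt_path, map_location='cpu')
# model.load_state_dict(checkpoint['network'])  # the 'network' key is an assumption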

Results

Here I report the results of DeepHandMesh and provide pre-trained DeepHandMesh models.

Pre-trained DeepHandMesh

Effect of Identity- and Pose-Dependent Correctives

Comparison with MANO

Reference

@InProceedings{Moon_2020_ECCV_DeepHandMesh,
  author = {Moon, Gyeongsik and Shiratori, Takaaki and Lee, Kyoung Mu},
  title = {DeepHandMesh: A Weakly-supervised Deep Encoder-Decoder Framework for High-fidelity Hand Mesh Modeling},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020}
}

License

DeepHandMesh is CC-BY-NC 4.0 licensed, as found in the LICENSE file.