
Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose

(Figure: pose-to-mesh qualitative results)

News

Introduction

This repository is the official PyTorch implementation of Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose (ECCV 2020). Below is the overall pipeline of Pose2Mesh.

(Figure: overall pipeline)
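For intuition, the mesh regression operates on a graph whose nodes are joints or mesh vertices. Below is a minimal sketch of a vanilla graph convolution layer of the kind this pipeline builds on; the actual Pose2Mesh layers are spectral (Chebyshev) graph convolutions and differ in detail.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Vanilla graph convolution X' = ReLU(A_hat @ X @ W), where A_hat is the
    symmetrically normalized adjacency (with self-loops) of a fixed
    skeleton/mesh graph. Illustration only, not the repo's exact layer."""
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        a = adj + torch.eye(adj.size(0))   # add self-loops
        d = a.sum(dim=1).pow(-0.5)         # D^{-1/2}
        self.register_buffer('a_hat', d[:, None] * a * d[None, :])

    def forward(self, x):                  # x: (batch, num_nodes, in_dim)
        return torch.relu(self.a_hat @ self.linear(x))
```

For example, GraphConv(2, 64, adj) with a per-joint adjacency could embed an input 2D pose; stacking such layers over the mesh graph yields per-vertex 3D coordinates.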

Install Guidelines

Quick Demo

Results

Here I report the performance of Pose2Mesh.

:muscle: Update: We improved the performance on 3DPW using GT meshes obtained from NeuralAnnot on COCO and AMASS. The annotations from NeuralAnnot are yet to be released.
:muscle: Update: The performance on 3DPW has improved by using 2D detections from DarkPose, which improves upon HRNet.

(Table: benchmark results)

The table below shows the results when the input is the groundtruth 2D human pose. For the Human3.6M benchmark, Pose2Mesh is trained on Human3.6M. For the 3DPW benchmark, Pose2Mesh is trained on Human3.6M and COCO.

|           | MPJPE    | PA-MPJPE |
|-----------|----------|----------|
| Human3.6M | 51.28 mm | 35.61 mm |
| 3DPW      | 63.10 mm | 35.37 mm |
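
For reference, MPJPE is the mean Euclidean distance between predicted and groundtruth 3D joints, and PA-MPJPE is the same error after a rigid Procrustes alignment (optimal scale, rotation, and translation). A minimal NumPy sketch of both metrics (illustrative, not this repo's evaluation code):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error over (J, 3) joint arrays, in input units (mm)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes-aligning pred to gt (scale, rotation, translation)."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g                      # center both point sets
    U, S, Vt = np.linalg.svd(g.T @ p)                  # SVD of cross-covariance
    sign = np.sign(np.linalg.det(U @ Vt))              # guard against reflection
    R = U @ np.diag([1.0, 1.0, sign]) @ Vt             # optimal rotation
    s = (S * [1.0, 1.0, sign]).sum() / (p ** 2).sum()  # optimal scale
    return mpjpe(s * p @ R.T + mu_g, gt)
```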

We provide qualitative results on SURREAL to show that Pose2Mesh can recover 3D shape to some degree. Please refer to the paper for more discussion.

(Figure: qualitative results on SURREAL)

Directory

Root

The ${ROOT} directory is organized as below.

```
${ROOT}
|-- data
|-- demo
|-- lib
|-- experiment
|-- main
|-- manopth
|-- smplpytorch
```

Data

The data directory structure should follow the hierarchy below.

```
${ROOT}
|-- data
|   |-- Human36M
|   |   |-- images
|   |   |-- annotations
|   |   |-- J_regressor_h36m_correct.npy
|   |   |-- absnet_output_on_testset.json
|   |-- MuCo
|   |   |-- data
|   |   |   |-- augmented_set
|   |   |   |-- unaugmented_set
|   |   |   |-- MuCo-3DHP.json
|   |   |   |-- smpl_param.json
|   |-- COCO
|   |   |-- images
|   |   |   |-- train2017
|   |   |   |-- val2017
|   |   |-- annotations
|   |   |-- J_regressor_coco.npy
|   |   |-- hrnet_output_on_valset.json
|   |-- PW3D
|   |   |-- data
|   |   |   |-- 3DPW_latest_train.json
|   |   |   |-- 3DPW_latest_validation.json
|   |   |   |-- darkpose_3dpw_testset_output.json
|   |   |   |-- darkpose_3dpw_validationset_output.json
|   |   |-- imageFiles
|   |-- AMASS
|   |   |-- data
|   |   |   |-- cmu
|   |-- SURREAL
|   |   |-- data
|   |   |   |-- train.json
|   |   |   |-- val.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json
|   |   |-- images
|   |   |   |-- train
|   |   |   |-- test
|   |   |   |-- val
|   |-- FreiHAND
|   |   |-- data
|   |   |   |-- training
|   |   |   |-- evaluation
|   |   |   |-- freihand_train_coco.json
|   |   |   |-- freihand_train_data.json
|   |   |   |-- freihand_eval_coco.json
|   |   |   |-- freihand_eval_data.json
|   |   |   |-- hrnet_output_on_testset.json
|   |   |   |-- simple_output_on_testset.json
```

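The J_regressor_*.npy files are joint regressor matrices that produce 3D joint locations as a linear combination of mesh vertices; this is how joints are read off a predicted mesh. A minimal sketch, assuming the standard (17, 6890) Human3.6M regressor shape:

```python
import numpy as np

# 17 Human3.6M joints regressed from the 6890 SMPL mesh vertices
J_regressor = np.load('data/Human36M/J_regressor_h36m_correct.npy')  # (17, 6890)

mesh_vertices = np.zeros((6890, 3))   # e.g. vertices output by the SMPL layer
joints = J_regressor @ mesh_vertices  # (17, 3) 3D joint coordinates
```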
If you have a problem with 'download limit' when trying to download datasets from google drive links, please try this trick.

  • Go to the shared folder, which contains the files you want to copy to your drive
  • Select all the files you want to copy
  • In the upper right corner, click the three vertical dots and select “make a copy”
  • The files are then copied to your personal Google Drive account, and you can download them from there.

PyTorch SMPL and MANO layers
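
The repo bundles smplpytorch and manopth for differentiable SMPL/MANO mesh layers. A minimal usage sketch of the SMPL layer, following the upstream smplpytorch demo (the model_root path must point at the downloaded SMPL .pkl files and may differ in this repo's setup):

```python
import torch
from smplpytorch.pytorch.smpl_layer import SMPL_Layer

smpl_layer = SMPL_Layer(
    center_idx=0,
    gender='neutral',
    model_root='smplpytorch/native/models')  # directory with the SMPL .pkl files

pose_params = torch.zeros(1, 72)   # axis-angle pose (24 joints x 3)
shape_params = torch.zeros(1, 10)  # PCA shape coefficients

verts, joints = smpl_layer(pose_params, th_betas=shape_params)
# verts: (1, 6890, 3) mesh vertices; joints: (1, 24, 3) SMPL joints
```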

Experiment

The experiment directory will be created as below.

```
${ROOT}
|-- experiment
|   |-- exp_*
|   |   |-- checkpoint
|   |   |-- graph
|   |   |-- vis
```

Pretrained model weights

Download the pretrained model weights from here into the corresponding directories below.

```
${ROOT}
|-- experiment
|   |-- posenet_human36J_train_human36
|   |-- posenet_cocoJ_train_human36_coco_muco
|   |-- posenet_smplJ_train_surreal
|   |-- posenet_manoJ_train_freihand
|   |-- pose2mesh_human36J_train_human36
|   |-- pose2mesh_cocoJ_train_human36_coco_muco
|   |-- pose2mesh_smplJ_train_surreal
|   |-- pose2mesh_manoJ_train_freihand
|   |-- posenet_human36J_gt_train_human36
|   |-- posenet_cocoJ_gt_train_human36_coco
|   |-- pose2mesh_human36J_gt_train_human36
|   |-- pose2mesh_cocoJ_gt_train_human36_coco
```

Running Pose2Mesh

(Figure: input joint set topologies)

Start

Train

Select a config file in ${ROOT}/asset/yaml/ and train. You can change the training set and the pretrained PoseNet with your own *.yml file.

1. Pre-train PoseNet

To train from scratch, you should pre-train PoseNet first.

Run

```
python main/train.py --gpu 0,1,2,3 --cfg ./asset/yaml/posenet_{input joint set}_train_{dataset list}.yml
```

2. Train Pose2Mesh

Copy best.pth.tar from ${ROOT}/experiment/exp_*/checkpoint/ to ${ROOT}/experiment/posenet_{input joint set}_train_{dataset list}/, for example as shown below. Or download the pretrained weights following this.
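
For example (the exp_* directory name depends on your PoseNet run; substitute the actual names):

```
cp ${ROOT}/experiment/exp_*/checkpoint/best.pth.tar ${ROOT}/experiment/posenet_{input joint set}_train_{dataset list}/
```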

Run

```
python main/train.py --gpu 0,1,2,3 --cfg ./asset/yaml/pose2mesh_{input joint set}_train_{dataset list}.yml
```

Test

Select a config file in ${ROOT}/asset/yaml/ and test. You can change the pretrained model weights. To save sampled outputs to .obj files, set the TEST.vis value to True in the config file.

Run

```
python main/test.py --gpu 0,1,2,3 --cfg ./asset/yaml/{model name}_{input joint set}_test_{dataset name}.yml
```
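
Setting TEST.vis saves meshes as Wavefront .obj files, which are plain-text vertex and face lists. A minimal writer sketch (illustrative; not necessarily the repo's exact routine):

```python
def save_obj(path, verts, faces):
    """Write a triangle mesh as Wavefront .obj: 'v x y z' per vertex and
    'f i j k' per face (.obj face indices are 1-based)."""
    with open(path, 'w') as f:
        for x, y, z in verts:
            f.write(f'v {x} {y} {z}\n')
        for i, j, k in faces:
            f.write(f'f {i + 1} {j + 1} {k + 1}\n')
```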

Reference

```
@InProceedings{Choi_2020_ECCV_Pose2Mesh,
  author = {Choi, Hongsuk and Moon, Gyeongsik and Lee, Kyoung Mu},
  title = {Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020}
}
```

Related Projects

I2L-MeshNet_RELEASE
3DCrowdNet_RELEASE
TCMR_RELEASE
Hand4Whole_RELEASE
HandOccNet
NeuralAnnot_RELEASE