
I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image

News

There was a code mistake here: the translation during the rigid alignment was wrong. The results in my paper became better after I fixed the error.
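For context, the fix concerns the similarity (Procrustes) alignment used for PA-MPJPE: the translation has to map the scaled and rotated prediction centroid onto the ground-truth centroid. The snippet below is a minimal numpy sketch of that alignment, not the repository's exact code.

# Minimal sketch (not this repo's exact code) of the similarity (Procrustes)
# alignment used for PA-MPJPE. The key point of the fix: the translation must
# map the scaled, rotated prediction centroid onto the ground-truth centroid,
# i.e. t = mu_gt - s * R @ mu_pred.
import numpy as np

def rigid_align(pred, gt):
    """Align pred (J x 3) to gt (J x 3) with scale, rotation, translation."""
    mu_pred, mu_gt = pred.mean(0), gt.mean(0)
    x, y = pred - mu_pred, gt - mu_gt              # centered point sets
    cov = x.T @ y                                  # 3 x 3 covariance
    U, S, Vt = np.linalg.svd(cov)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    s = S.sum() / (x ** 2).sum()                   # optimal scale
    t = mu_gt - s * (R @ mu_pred)                  # corrected translation
    return s * (R @ pred.T).T + t                  # aligned prediction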

Introduction

This repo is the official PyTorch implementation of I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image (ECCV 2020). Our I2L-MeshNet won first and second place in the 3DPW challenge (unknown association track) on the part orientation and joint position metrics, respectively. :tada:

Quick demo

Directory

Root

The ${ROOT} directory is organized as below.

${ROOT}  
|-- data  
|-- demo
|-- common  
|-- main  
|-- output  

Data

You need to follow the directory structure of the data folder as below.

${ROOT}  
|-- data  
|   |-- Human36M  
|   |   |-- rootnet_output  
|   |   |   |-- bbox_root_human36m_output.json  
|   |   |-- images  
|   |   |-- annotations   
|   |   |-- J_regressor_h36m_correct.npy
|   |-- MuCo  
|   |   |-- data  
|   |   |   |-- augmented_set  
|   |   |   |-- unaugmented_set  
|   |   |   |-- MuCo-3DHP.json
|   |   |   |-- smpl_param.json
|   |-- MSCOCO  
|   |   |-- rootnet_output  
|   |   |   |-- bbox_root_coco_output.json  
|   |   |-- images  
|   |   |   |-- train2017  
|   |   |   |-- val2017  
|   |   |-- annotations  
|   |   |-- J_regressor_coco_hip_smpl.npy
|   |-- PW3D
|   |   |-- rootnet_output  
|   |   |   |-- bbox_root_pw3d_output.json  
|   |   |-- data
|   |   |   |-- 3DPW_train.json
|   |   |   |-- 3DPW_validation.json
|   |   |   |-- 3DPW_test.json
|   |   |-- imageFiles
|   |-- FreiHAND
|   |   |-- rootnet_output  
|   |   |   |-- bbox_root_freihand_output.json  
|   |   |-- data
|   |   |   |-- training
|   |   |   |-- evaluation
|   |   |   |-- freihand_train_coco.json
|   |   |   |-- freihand_train_data.json
|   |   |   |-- freihand_eval_coco.json
|   |   |   |-- freihand_eval_data.json

PyTorch SMPL and MANO layer
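For SMPL, a commonly used PyTorch layer is the one from the smplpytorch package. The snippet below is a hedged sketch of loading it and producing a mesh and 3D joints from SMPL parameters; the model_root path is an illustrative assumption that depends on where the SMPL model files were downloaded, not necessarily the path this repo expects.

# Hedged sketch: using the PyTorch SMPL layer from the smplpytorch package to
# turn SMPL parameters into a mesh and 3D joints.
import torch
from smplpytorch.pytorch.smpl_layer import SMPL_Layer

smpl_layer = SMPL_Layer(
    center_idx=0,                                # joint used to center the output
    gender='neutral',
    model_root='smplpytorch/native/models')      # location of SMPL model files (assumption)

pose = torch.zeros(1, 72)                        # 24 joints x 3 axis-angle parameters
shape = torch.zeros(1, 10)                       # 10 shape (beta) coefficients
verts, joints = smpl_layer(pose, th_betas=shape) # (1, 6890, 3) vertices, (1, 24, 3) joints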

Output

You need to follow the directory structure of the output folder as below.

${ROOT}  
|-- output  
|   |-- log  
|   |-- model_dump  
|   |-- result  
|   |-- vis  

Running I2L-MeshNet

Start

Train

1. lixel stage

First, you need to train I2L-MeshNet in the lixel stage. In the main folder, run

python train.py --gpu 0-3 --stage lixel 

to train I2L-MeshNet in the lixel stage on GPUs 0,1,2,3. --gpu 0,1,2,3 can be used instead of --gpu 0-3.
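The --gpu flag accepts either a range ("0-3") or an explicit list ("0,1,2,3"). The helper below is a hypothetical sketch of how such an argument can be normalized; the function name and behavior are illustrative and not taken from the repository's config code.

# Hypothetical helper (not necessarily the repo's implementation) that turns a
# --gpu argument in either "0-3" or "0,1,2,3" form into a comma-separated id
# string suitable for CUDA_VISIBLE_DEVICES.
def parse_gpu_ids(arg: str) -> str:
    if '-' in arg:                               # range form, e.g. "0-3"
        start, end = map(int, arg.split('-'))
        ids = list(range(start, end + 1))
    else:                                        # list form, e.g. "0,1,2,3"
        ids = [int(x) for x in arg.split(',')]
    return ','.join(str(i) for i in ids)

# parse_gpu_ids("0-3") == parse_gpu_ids("0,1,2,3") == "0,1,2,3"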

2. param stage

Once you have pre-trained I2L-MeshNet in the lixel stage, you can resume training in the param stage. In the main folder, run

python train.py --gpu 0-3 --stage param --continue

to train I2L-MeshNet in the param stage on GPUs 0,1,2,3. --gpu 0,1,2,3 can be used instead of --gpu 0-3.
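For reference, a hedged sketch of what resuming with --continue typically involves: picking the most recent snapshot_*.pth.tar in output/model_dump/ (the naming follows the test log below) and restoring the training state. The helper name and the checkpoint dictionary keys are assumptions for illustration, not a verified description of train.py.

# Hedged sketch of resuming from the latest snapshot in output/model_dump/.
# The 'network'/'optimizer'/'epoch' keys are assumptions, not verified.
import glob, os.path as osp
import torch

def load_latest_snapshot(model, optimizer, model_dir='../output/model_dump'):
    snapshots = glob.glob(osp.join(model_dir, 'snapshot_*.pth.tar'))
    latest = max(snapshots, key=lambda p: int(p.split('_')[-1].split('.')[0]))
    ckpt = torch.load(latest)
    model.load_state_dict(ckpt['network'])        # key name is an assumption
    optimizer.load_state_dict(ckpt['optimizer'])  # key name is an assumption
    return ckpt['epoch'] + 1                      # epoch to resume from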

Test

Place the trained model at output/model_dump/. Choose the stage you want to test, either lixel or param.

In the main folder, run

python test.py --gpu 0-3 --stage $STAGE --test_epoch 20  

to test I2L-MeshNet in the $STAGE stage (one of lixel and param) on GPUs 0,1,2,3 with the model trained for 20 epochs. --gpu 0,1,2,3 can be used instead of --gpu 0-3.

Results

Here I report the performance of I2L-MeshNet.

Human3.6M dataset

3DPW dataset

$ python test.py --gpu 4 --stage param --test_epoch 8
>>> Using GPU: 4
Stage: param
08-16 13:56:54 Creating dataset...
loading annotations into memory...
Done (t=7.05s)
creating index...
index created!
Get bounding box and root from ../data/PW3D/rootnet_output/bbox_root_pw3d_output.json
08-16 13:57:04 Load checkpoint from ../output/model_dump/snapshot_8.pth.tar
08-16 13:57:04 Creating graph...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8879/8879 [17:42<00:00,  3.58it/s]
MPJPE from lixel mesh: 93.47 mm
PA MPJPE from lixel mesh: 60.87 mm
MPJPE from param mesh: 99.34 mm
PA MPJPE from param mesh: 61.80 mm
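For reference, the MPJPE numbers above come from regressing 3D joints from the predicted mesh with a joint regressor (e.g. J_regressor_h36m_correct.npy in the data tree) and averaging the per-joint Euclidean distance after root alignment; PA-MPJPE additionally applies the rigid alignment sketched in the News section. The snippet below is a minimal numpy sketch under those assumptions; the root joint index and the meters-to-millimeters conversion are illustrative.

# Minimal sketch (assumptions noted above) of MPJPE from a predicted mesh.
import numpy as np

def mpjpe_from_mesh(pred_verts, gt_joints, J_regressor, root_idx=0):
    """pred_verts: (V, 3) mesh in meters; gt_joints: (J, 3); J_regressor: (J, V)."""
    pred_joints = J_regressor @ pred_verts            # regress joints from mesh
    pred_joints = pred_joints - pred_joints[root_idx] # root-relative coordinates
    gt = gt_joints - gt_joints[root_idx]
    return np.linalg.norm(pred_joints - gt, axis=1).mean() * 1000.0  # mm, assuming meters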

Troubleshooting

Reference

@InProceedings{Moon_2020_ECCV_I2L-MeshNet,  
author = {Moon, Gyeongsik and Lee, Kyoung Mu},  
title = {I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image},  
booktitle = {European Conference on Computer Vision (ECCV)},  
year = {2020}  
}