
IGANet: Interweaved Graph and Attention Network for 3D Human Pose Estimation

Interweaved Graph and Attention Network for 3D Human Pose Estimation,
Ti Wang, Hong Liu, Runwei Ding, Wenhao Li, Yingxuan You, Xia Li,
In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023

Results on Human3.6M

Here, we compare our IGANet with recent state-of-the-art methods on the Human3.6M dataset. The 2D poses detected by the cascaded pyramid network (CPN) are used as input. We use $\S$ to mark methods that use an additional refinement module. The evaluation metric is Mean Per Joint Position Error (MPJPE) in mm.

| Models | MPJPE |
| :----- | ----: |
| GraFormer | 51.8 mm |
| MGCN $\S$ | 49.4 mm |
| IGANet | 48.3 mm |
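
For reference, MPJPE is the mean Euclidean distance between the predicted and ground-truth 3D joint positions, usually computed on root-relative coordinates. Below is a minimal PyTorch sketch of the metric; the 17-joint layout and batch shapes are illustrative assumptions, not taken from this code base.

import torch

def mpjpe(predicted: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean Per Joint Position Error, in the units of the inputs (here mm).

    predicted, target: (batch, joints, 3) root-relative 3D joint positions.
    """
    assert predicted.shape == target.shape
    # Per-joint Euclidean distance, averaged over joints and batch.
    return torch.norm(predicted - target, dim=-1).mean()

# Example with the 17-joint Human3.6M skeleton.
pred = torch.randn(8, 17, 3)
gt = torch.randn(8, 17, 3)
print(f'MPJPE: {mpjpe(pred, gt):.2f} mm')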

Dataset setup

Setup from original source

You can obtain the Human3.6M dataset from the Human3.6M website, and then set it up using the instructions provided in VideoPose3D.

Setup from preprocessed dataset (Recommended)

You can also download the preprocessed data from here.

${POSE_ROOT}/
|-- dataset
|   |-- data_3d_h36m.npz
|   |-- data_2d_h36m_gt.npz
|   |-- data_2d_h36m_cpn_ft_h36m_dbb.npz
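
As a quick sanity check after downloading, the archives follow the VideoPose3D format. The sketch below shows one way to inspect them; the key names 'positions_3d', 'positions_2d', and 'metadata' are assumptions based on that format, not verified against this repository.

import numpy as np

# Paths are relative to ${POSE_ROOT}/.
data_3d = np.load('dataset/data_3d_h36m.npz', allow_pickle=True)
data_2d = np.load('dataset/data_2d_h36m_cpn_ft_h36m_dbb.npz', allow_pickle=True)

# 3D ground truth: a dict of {subject: {action: (frames, joints, 3) array}}.
positions_3d = data_3d['positions_3d'].item()
print(list(positions_3d.keys()))   # e.g. ['S1', 'S5', ...]

# 2D CPN detections: same nesting, one array per camera view.
positions_2d = data_2d['positions_2d'].item()
print(data_2d['metadata'])         # keypoint layout information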

Dependencies

Create the conda environment:

conda env create -f environment.yml

Test the pre-trained model

The pre-trained model can be found here. Please download it and put it in the directory given by 'args.previous_dir' ('./pre_trained_model').

To test the pre-trained model on Human3.6M:

python main.py --reload --previous_dir "./pre_trained_model" --model model_IGANet --layers 3 --gpu 0
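
Under the hood, --reload restores a saved state dict from --previous_dir before evaluation. A rough, generic sketch of that step is shown below; the '.pth' extension and "newest file wins" selection are assumptions, not this repository's exact logic.

import glob
import torch
import torch.nn as nn

def load_pretrained(model: nn.Module, previous_dir: str = './pre_trained_model') -> nn.Module:
    """Load the most recent checkpoint found under previous_dir into model."""
    ckpt_path = sorted(glob.glob(f'{previous_dir}/*.pth'))[-1]
    state_dict = torch.load(ckpt_path, map_location='cpu')
    model.load_state_dict(state_dict)
    return model.eval()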

Train the model from scratch

The log file, trained model, and other outputs of each training run are saved in the './checkpoint' folder.

For Human3.6M:

python main.py --train --model model_IGANet --layers 3 --nepoch 20 --gpu 0
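
For orientation, a single-frame 2D-to-3D training step typically looks like the sketch below. This is a generic outline, not the repository's actual training loop; the loss, optimizer handling, and checkpoint naming are assumptions.

import os
import torch

def train_one_epoch(model, loader, optimizer, epoch, ckpt_dir='./checkpoint'):
    """One epoch over (2D pose, 3D pose) pairs, then save a checkpoint."""
    model.train()
    for pose_2d, pose_3d in loader:            # (B, 17, 2) and (B, 17, 3) tensors
        pred_3d = model(pose_2d)               # lift 2D keypoints to 3D
        loss = torch.norm(pred_3d - pose_3d, dim=-1).mean()   # MPJPE-style loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    os.makedirs(ckpt_dir, exist_ok=True)
    torch.save(model.state_dict(), os.path.join(ckpt_dir, f'epoch_{epoch}.pth'))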

Demo

This visualization code is designed for single-frame based models, making it easy for you to perform 3D human pose estimation on a single image or video.

Before starting, please complete the following preparation steps:

Testing on an in-the-wild image:

python demo/vis.py --type 'image' --path './demo/images/running.png' --gpu 0

Testing on an in-the-wild video:

python demo/vis.py --type 'video' --path './demo/videos/running3s.mp4' --gpu 0
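
Conceptually, the demo handles each image (or each video frame) independently: run an off-the-shelf 2D detector, feed the keypoints to the trained model, and render the predicted 3D pose. The per-frame sketch below illustrates that idea; detect_2d and lift_to_3d are placeholders rather than this repository's API, and the normalization follows the common VideoPose3D convention.

import numpy as np

def estimate_3d_pose(frame_bgr, detect_2d, lift_to_3d):
    """Single-frame pipeline: image -> 2D keypoints -> 3D pose.

    detect_2d: callable returning (17, 2) pixel keypoints (placeholder).
    lift_to_3d: callable wrapping the trained single-frame model (placeholder).
    """
    keypoints = detect_2d(frame_bgr)                  # (17, 2) in pixel coordinates
    h, w = frame_bgr.shape[:2]
    # Scale keypoints to roughly [-1, 1] before lifting (an assumed preprocessing step).
    norm_kpts = keypoints / w * 2 - np.array([1, h / w])
    return lift_to_3d(norm_kpts[None])                # (1, 17, 3) predicted 3D joints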

Acknowledgement

Our code builds on the following repositories. We thank the authors for releasing their code.

License

This project is licensed under the terms of the MIT license.