Shimingyi / MotioNet

A deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video [ToG 2020]
https://rubbly.cn/publications/motioNet/
BSD 2-Clause "Simplified" License

Problem in testing of repository #22

Closed lisa676 closed 3 years ago

lisa676 commented 3 years ago

Hi @Shimingyi, I'm facing a problem while testing this wonderful repository.

1: Using h36m_gt_t.pth, when I run evaluate.py the BVH files are saved, but they are empty. There is no skeleton motion, only joint info, in the generated BVH files; you can check the attached BVH files in the zip.

BVH_files.zip

I also got `RuntimeWarning: invalid value encountered in true_divide` here and here.

After debugging, I found many NaN values in poses_2d_root, pred_bones, pre_rotations, pre_rotations_full, pre_pose_3d, pre_proj, rotations, and translations.

So is this problem (empty BVH files) caused by the RuntimeWarning or by the NaN values?
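For reference, that warning typically comes from a division whose denominator is zero, which produces NaN. A minimal numpy reproduction (my own sketch, not the repo's code):

```python
import numpy as np

# A zero-length bone vector divided by its norm reproduces the warning
vec = np.zeros(3)
norm = np.linalg.norm(vec)  # 0.0
with np.errstate(invalid="warn"):
    unit = vec / norm  # emits the RuntimeWarning and yields NaN
print(np.isnan(unit).all())  # True, and the NaNs propagate downstream
```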

2: Using wild_gt_tcc.pth I got a runtime error. Please check the attached txt file for the error details, because the log is too long to paste here.

error.txt

Shimingyi commented 3 years ago

Hi @lisa676 ,

I tested the code on my machine, and it works well.

Would you mind checking your PyTorch version? I wrote 0.4.1 in my requirements.txt, but I think it is better to install PyTorch following the instructions on the official site. You can install the latest version, but make sure the CUDA version matches your machine.
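A quick way to check which versions your environment actually has (a small sketch; run it in the same environment you use for evaluate.py):

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the wheel was built against
print(torch.cuda.is_available())  # whether PyTorch can see your GPU
```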

Best, Mingyi

lisa676 commented 3 years ago

@Shimingyi Thanks for your prompt response. I tested it with the latest version of PyTorch; it gave new errors, but after changing the input size it works with wild_gt_tcc.pth. With h36m_gt_t.pth the problem is still the same. Does that mean we can't use h36m_gt_t.pth on arbitrary outside videos?

Shimingyi commented 3 years ago

Sorry, I still cannot reproduce your problem. I cloned my repo from scratch and tested it, and everything works fine. Maybe you can follow my commands to install it again?

git clone https://github.com/Shimingyi/MotioNet.git
cd MotioNet

conda create -n motionet python=3.7
source activate motionet
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch # Update cudatoolkit to match your CUDA version
pip install -r requirements.txt

mkdir checkpoints
# Download the three pre-trained models from https://drive.google.com/drive/folders/19hO4eVV8cE61aVg3dA-hClVjrtiJhq8d
# Download the dataset from https://drive.google.com/drive/folders/1mvRPqtsNp46grBQ9feYish8evhEkm_9O
# Put models into ./checkpoints
# Put data file into ./data

mkdir output
python evaluate.py -r ./checkpoints/wild_gt_tcc.pth -i demo
python evaluate.py -r ./checkpoints/h36m_gt.pth -i h36m
python evaluate.py -r ./checkpoints/h36m_gt_t.pth -i h36m

Here is my output from these commands, and I think it looks fine: results.zip

lisa676 commented 3 years ago

@Shimingyi thanks again for your help. Now it is working completely. I have another question: I read your paper abstract and debugged the code, and it seems I can't get 3D joint locations directly from your model, since the model predicts joint rotations and a single symmetric skeleton. If we need 3D joint locations, is there any way to get them from your model?

Shimingyi commented 3 years ago

@lisa676 That's great :) You can get the 3D joint locations through forward kinematics, which means applying the rotations to the skeleton. It can be found here: pred_pose_3d and the forward_fk function.
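As a rough illustration of what forward kinematics does (a toy single chain with my own hypothetical names, not the repo's forward_fk implementation):

```python
import numpy as np

def rot_z(theta):
    # 3x3 rotation matrix about the z axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(offsets, rotations, root=None):
    # Walk down a simple chain: each joint position is its parent's
    # position plus the accumulated rotation applied to the bone offset.
    positions = [np.zeros(3) if root is None else root]
    R = np.eye(3)
    for offset, rot in zip(offsets, rotations):
        R = R @ rot
        positions.append(positions[-1] + R @ offset)
    return np.stack(positions)

# Two unit-length bones along x; the second joint is bent 90 degrees
offsets = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
rotations = [np.eye(3), rot_z(np.pi / 2)]
print(forward_kinematics(offsets, rotations)[-1])  # end effector near [1, 1, 0]
```

The real model does the same accumulation over the full skeleton hierarchy, with the bone lengths and per-frame rotations it predicts.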

lisa676 commented 3 years ago

@Shimingyi thanks for your kind help. I'm trying to draw the 3D joints from pred_pose_3d, but the data is a tensor. I mean to say I want to plot the 17 joints from pred_pose_3d using matplotlib or plotly.

Shimingyi commented 3 years ago

You can use this line to convert the tensor to a numpy array so that you can use matplotlib:

pred_pose_3d_numpy = pred_pose_3d.cpu().numpy()
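Then, assuming pred_pose_3d_numpy has shape (frames, 17, 3) (the shape is my assumption based on the 17-joint skeleton mentioned above), a minimal matplotlib sketch to scatter one frame:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, also works without a display
import matplotlib.pyplot as plt

def plot_joints(pose_3d_numpy, frame=0, path="frame.png"):
    # pose_3d_numpy: (frames, 17, 3) array of joint positions
    joints = pose_3d_numpy[frame]
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2])
    fig.savefig(path)
    plt.close(fig)
    return joints

# Example with random data standing in for the network output
joints = plot_joints(np.random.rand(2, 17, 3))
```

Drawing bones as well would additionally need the parent index of each joint, which you can read from the skeleton definition in the repo.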