microsoft / multiview-human-pose-estimation-pytorch

This is an official PyTorch implementation of "Cross View Fusion for 3D Human Pose Estimation" (ICCV 2019).
MIT License

Pre-trained Model and corresponding heatmaps #17

Closed lisa676 closed 4 years ago

lisa676 commented 4 years ago

@CHUNYUWANG Hi, thanks for providing the pre-trained model. I have a question: where can we get the corresponding heatmaps file (predicted_heatmaps.h5)? Could you also upload the heatmaps file and share the link? Thanks

CHUNYUWANG commented 4 years ago

@lisa676 You can get predicted_heatmaps.h5 by running run/pose2d/valid.py. The heatmap file itself is too large to share.
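
Once you have generated the file, a quick sanity check helps before evaluation. Here is a minimal sketch using h5py; the dataset names inside the file are not documented in this thread, so list the actual keys first and adjust accordingly:

```python
# Minimal sanity check of a generated predicted_heatmaps.h5.
# Assumes the top-level entries are datasets; adjust if the file uses groups.
import h5py

with h5py.File('predicted_heatmaps.h5', 'r') as f:
    print(list(f.keys()))  # discover the actual dataset names
    for name, dset in f.items():
        # Heatmaps are typically (num_samples, num_joints, height, width).
        print(name, dset.shape, dset.dtype)
```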

lisa676 commented 4 years ago

@CHUNYUWANG Thanks for your reply. I generated the heatmaps file, but when I test with your pre-trained model I get very poor results: an average MPJPE of 415. When I train the model myself I get 211, which is also poor. All the settings are the same as provided in your code. I don't know what the problem is.
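
(For reference, MPJPE is the mean per-joint position error in millimetres. A minimal numpy sketch of the metric, independent of this repo's evaluation code:

```python
# Mean per-joint position error (MPJPE) in mm.
# pred and gt are (num_samples, num_joints, 3) arrays of 3D joint positions.
import numpy as np

def mpjpe(pred, gt):
    # Euclidean distance per joint, averaged over joints and samples.
    return np.linalg.norm(pred - gt, axis=-1).mean()
```

An MPJPE of 415 mm is far worse than the results reported in the paper, which points to a data or evaluation problem rather than a model problem.)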

CHUNYUWANG commented 4 years ago

Did you generate the data using this toolbox: https://github.com/CHUNYUWANG/H36M-Toolbox?

Can you show the training and validation logs? You should be able to find them in the "output" directory.

You can also check the saved images in the "output" directory to see if they are reasonable.

lisa676 commented 4 years ago

@CHUNYUWANG Yes, I generated it using H36M-Toolbox, but only every 5th frame, e.g. image_1, image_6, not every frame. So for validation I have a validation pkl file based on those 5-frame-sampled images. Do you think the poor results are due to this reduced dataset?

CHUNYUWANG commented 4 years ago

It could be that the way you processed the data leads to a misalignment between the images and the ground truth. In the current pipeline, we generate the images/labels for the whole dataset first. Then, in the dataset file (https://github.com/microsoft/multiview-human-pose-estimation-pytorch/blob/3f301f6f9fb1e1bed838e04829add7d3df0529f2/lib/dataset/multiview_h36m.py#L97), we use 1/5 of the data for training and 1/64 of the data for testing.
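
To make the failure mode concrete, here is a paraphrased sketch of that subsampling step (illustrative, not the exact code in multiview_h36m.py): the grouping list indexes into the full generated dataset, so if you pre-sample every 5th frame before generating labels, the indices no longer point at the frames they were built for.

```python
# Paraphrased sketch of the dataset subsampling described above; the
# function name is illustrative, not the repo's actual API.
def subsample_grouping(grouping, is_train):
    """Keep every 5th group for training and every 64th for testing."""
    step = 5 if is_train else 64
    return grouping[::step]

# Example: with 100 groups, training keeps indices 0, 5, 10, ...
groups = list(range(100))
print(subsample_grouping(groups, is_train=True)[:5])  # [0, 5, 10, 15, 20]
print(subsample_grouping(groups, is_train=False))     # [0, 64]
```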

As I suggested in my previous reply, you could check the saved debug images in the "output" directory. If the 2D poses there look accurate while the MPJPE is still huge, then it is very likely that the generated data have a misalignment problem between the images and the ground-truth labels.

lisa676 commented 4 years ago

@CHUNYUWANG Thanks so much for your response. I'll double-check it.