IcarusWizard / PixelwiseRegression

PyTorch release for paper "Pixel-wise Regression: 3D Hand Pose Estimation via Spatial-form Representation and Differentiable Decoder"
MIT License

MSRA #6

Closed · YuehengLuo closed this 1 year ago

YuehengLuo commented 1 year ago

Hello, this is my first experience with 3D hand pose estimation. I would like to know the difference between MSRA and the other datasets: is it the different subjects? Also, how can I visualize the MSRA results? I ran python test_samples.py --dataset MSRA but it failed. Finally, how can I visualize the metrics (1. mean 3D error, 2. fraction of frames within a distance threshold) in the code? Thank you very much.

IcarusWizard commented 1 year ago

Hi @YuehengLuo,

Regarding your questions:

  1. Yes, each dataset has its own subjects. The MSRA dataset has 9 subjects.
  2. To run test_sample.py, please make sure you follow the instructions in the README to place the dataset correctly and download the pre-trained models.
  3. The metric plots are drawn with awesome-hand-pose-estimation. Please check their repo for more details; see also the sketch below.
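For reference, here is a minimal sketch of how these two metrics are typically computed for hand pose estimation. This is not the exact code from either repo; the array shapes, units, and function names are assumptions for illustration:

```python
import numpy as np

def mean_3d_error(pred, gt):
    """Mean 3D joint error in mm, averaged over all frames and joints.

    pred, gt: (N, J, 3) arrays of predicted / ground-truth joint
    locations in millimetres (N frames, J joints).
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

def fraction_within_distance(pred, gt, thresholds_mm):
    """Fraction of frames whose worst (maximum) joint error falls below
    each threshold -- the success-rate curve commonly plotted for
    hand pose estimation."""
    worst = np.linalg.norm(pred - gt, axis=-1).max(axis=-1)  # (N,)
    return np.array([(worst < t).mean() for t in thresholds_mm])
```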

Hope these help.

YuehengLuo commented 1 year ago

Thanks for your response. For question 3, I used the evaluation method from that link, but I found the mean 3D error on MSRA is only 7.985 mm on average (using the result.txt provided with your pre-trained model), which does not match the paper. Is there a mistake, or is the provided model not the best one?

IcarusWizard commented 1 year ago

The result we released is for the TMM version of the paper.

For the arXiv version, we made a mistake in the 9-fold cross-validation, which made the reported result better than it should be.
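For context, MSRA is conventionally evaluated with leave-one-subject-out cross-validation over its 9 subjects. A minimal sketch of the intended protocol follows; the subject names P0..P8 follow the dataset convention, and the loop is illustrative rather than code from this repo:

```python
# MSRA's 9 subjects are conventionally named P0..P8.
subjects = [f"P{i}" for i in range(9)]

# Leave-one-subject-out: 9 folds, each testing on exactly one subject
# that the model never saw during training.
for held_out in subjects:
    train_subjects = [s for s in subjects if s != held_out]
    print(f"fold {held_out}: train on {train_subjects}, test on {held_out}")
```

The reported number is then the mean 3D error averaged over the 9 held-out test sets; if frames from a test subject leak into training, the error drops artificially, which is the kind of mistake described above.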

YuehengLuo commented 1 year ago

I’m grateful for your help. Thank you for the quick response.