facebookresearch / InterHand2.6M

Official PyTorch implementation of "InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image", ECCV 2020

Which dataset version produced the results in your paper? #59

Closed MengHao666 closed 3 years ago

MengHao666 commented 3 years ago

Hi, first of all, thanks for your great work.

There are 2 versions of your dataset, v1.0 and v0.0. Which version of the dataset did you use for training and testing in your paper? The paper seems not to have been updated since, as I can only see one version on arXiv.org.

Good luck, HaoMeng

mks0601 commented 3 years ago

Hi, v0.0 is an initial version, which is smaller than the dataset that I used in the paper. You should use v1.0, which is a full InterHand2.6M dataset. Thanks!

MengHao666 commented 3 years ago

> Hi, v0.0 is an initial version, which is smaller than the dataset that I used in the paper. You should use v1.0, which is the full InterHand2.6M dataset. Thanks!

So, just to confirm: do you mean you used v1.0 for both training and testing in your paper?

MengHao666 commented 3 years ago

And why does the pretrained model that you provide have better performance than what you describe in your paper?

mks0601 commented 3 years ago

Where did you get the result and compare it with the result of the paper?

MengHao666 commented 3 years ago

just in the master branch of this repo, which is named "Pre-trained InterNet"

mks0601 commented 3 years ago

Did you download the model and test on the test set?

MengHao666 commented 3 years ago

> Did you download the model and test on the test set?

Sure. It is slightly better than the result reported in the paper. I tested on the human-annot subset.

mks0601 commented 3 years ago

Ah, I see the reason. The pre-trained model in this repo was trained on InterHand2.6M v0.0, which is a subset of InterHand2.6M v1.0. As mentioned above, InterHand2.6M v1.0 is the full dataset. We should have updated the pre-trained model by retraining on v1.0. Sorry for the late update :(

MengHao666 commented 3 years ago

> Ah, I see the reason. The pre-trained model in this repo was trained on InterHand2.6M v0.0, which is a subset of InterHand2.6M v1.0. As mentioned above, InterHand2.6M v1.0 is the full dataset. We should have updated the pre-trained model by retraining on v1.0. Sorry for the late update :(

Thanks for your reply. In any case, it seems that the results you reported in the paper are based on v0.0. So, if I want to do some work on v1.0 and then compare my results with yours, should I train on v1.0 with your training settings? Do you plan to train on v1.0 soon and open-source that model?

mks0601 commented 3 years ago

Sorry for the unclear description.

The results in the paper are based on v1.0. The models in the paper are trained on v1.0 and tested on v1.0. You can directly compare your results with the results of the paper if you want to do some work on v1.0.

The models in this repo are trained on v0.0.

MengHao666 commented 3 years ago

> tested on v1.0.

OK. However, it looks very strange: when tested on the same v1.0 dataset, the model trained on v0.0 gets a slightly better result than the one trained on v1.0. Do you know the reason? I am curious about it. Also, would you open-source a model trained on v1.0? I filter out some samples from the test set and also want a fairer comparison, and retraining would cost me a lot of time.

mks0601 commented 3 years ago

I'm not sure about the reason... How big is the difference? I think random initialization alone can make some difference. As v0.0 contains all hand sequences (just with some frames missing), I think training on v0.0 can give a good result. Unfortunately, I don't have a model trained on v1.0 :( I should train it again. But as I said, there will only be a small difference between the two models (trained on v0.0 and v1.0).

MengHao666 commented 3 years ago

> I'm not sure about the reason... How big is the difference? I think random initialization alone can make some difference. As v0.0 contains all hand sequences (just with some frames missing), I think training on v0.0 can give a good result. Unfortunately, I don't have a model trained on v1.0 :( I should train it again. But as I said, there will only be a small difference between the two models (trained on v0.0 and v1.0).

The difference is about 1 mm lower on both MPJPE and MRRPE when I test on the human-annot subset. If you won't retrain it, it would be better to note that the provided model was trained on v0.0; otherwise someone comparing their results with those in the paper will get confused.

If you have time to retrain it, I will wait for the models. Otherwise I need to retrain it myself for a fair comparison...

Anyway, thanks for the warm reply.
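For readers unfamiliar with the metrics being compared above: MPJPE (mean per-joint position error) and MRRPE (mean relative-root position error) are the evaluation metrics from the InterHand2.6M paper. Below is a minimal NumPy sketch of their standard definitions, not the repo's actual evaluation code; the function names and array shapes here are my own assumptions:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: average Euclidean distance (mm)
    between predicted and ground-truth 3D joint coordinates."""
    # pred, gt: (num_samples, num_joints, 3) arrays in millimeters
    return np.linalg.norm(pred - gt, axis=-1).mean()

def mrrpe(pred_rroot, pred_lroot, gt_rroot, gt_lroot):
    """Mean Relative-Root Position Error: error of the right-hand root
    position expressed relative to the left-hand root position."""
    # each argument: (num_samples, 3) array of root-joint positions in mm
    rel_pred = pred_rroot - pred_lroot
    rel_gt = gt_rroot - gt_lroot
    return np.linalg.norm(rel_pred - rel_gt, axis=-1).mean()

# Toy check: a uniform 1 mm x-offset on every predicted joint gives MPJPE = 1.0
gt = np.zeros((4, 21, 3))
pred = gt + np.array([1.0, 0.0, 0.0])
print(mpjpe(pred, gt))  # 1.0
```

With these definitions, a "1 mm lower" result simply means the average 3D joint (or relative root) error shrank by about a millimeter.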

mks0601 commented 3 years ago

I was planning to train it again, but I've been kind of busy :( Before the training, I'll add the note you requested. Thanks!

MengHao666 commented 3 years ago

OK, thanks. I hope you can inform me when you've finished. I might also train it myself. Thanks!

mks0601 commented 3 years ago

The model trained on v1.0 is now available here.

MengHao666 commented 3 years ago

> The model trained on v1.0 is now available here.

Thanks for sharing. It would also be better if you could mention this in README.md, as it might confuse others too.