Closed ThibaultGROUEIX closed 3 years ago
I think your results are aligned with https://competitions.codalab.org/competitions/24938#results. Did you evaluate on all of 3DPW or only on its test set?
Thanks a lot for a fast reply!
Isn't the PA-MPJPE different by 7 points?
I evaluated on 3DPW test set using almost the same code as VIBE for evaluation.
Why doesn't it match the number reported in the table? Is the evaluation carried out differently, or is the model different? cf. the Kolotouros et al. row [37]
@ikvision Hi, may I know what do you mean by "evaluate on all 3DPW or only on it test set"? I suppose it should be evaluated on test set.
Thanks.
In the paper they used only the test set, while in the ECCV competition the entire dataset is used for evaluation: "In this challenge, we do not use the original splits in the dataset; we use the entire dataset, including its train, validation and test splits, for evaluation. Your algorithm MUST NOT use any part of the 3DPW dataset for training"
Now that I read the question more carefully, it seems to be about reproducing the SPIN results (as reported in the SPIN paper). Therefore, I think this issue should be moved to https://github.com/nkolot/SPIN/issues. Concerning reproducing VIBE results, an updated paper is coming soon: https://github.com/mkocabas/VIBE/issues/99
No, I am actually interested in the SPIN model used in the VIBE codebase. I'd like to know if it is exactly the same model as the one from the SPIN repo, or if there are any changes that explain this difference in performance.
Hi @ThibaultGROUEIX,
The result you get is correct, and it is identical to what we report in Table 2 of the paper (see the 4th row). The SPIN results in Table 1 are copied from their paper.
The accuracy gap is due to a difference in 3DPW data preprocessing. We use the exact same preprocessing as HMMR (Kanazawa et al., CVPR 2019), which is slightly different from SPIN's preprocessing. Hence, even though we use the same pretrained checkpoint released by the authors, we get different results on 3DPW.
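To illustrate why a preprocessing difference alone can shift metrics: a common step in these pipelines is cropping a square region around the person's bounding box, padded by a scale factor, before resizing it to the network input. If two codebases use different scale constants, the same checkpoint sees different crops. Below is a hypothetical sketch of this effect; the scale factors and the `square_crop_params` helper are illustrative assumptions, not the actual HMMR or SPIN constants.

```python
import numpy as np

def square_crop_params(bbox, scale_factor):
    """Given bbox = (x_min, y_min, x_max, y_max), return the center and side
    length of a padded square crop around the box."""
    x0, y0, x1, y1 = bbox
    center = np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
    side = scale_factor * max(x1 - x0, y1 - y0)
    return center, side

bbox = (100, 50, 220, 350)  # a tall person box, in pixels
for s in (1.1, 1.2):  # two plausible-but-different padding conventions
    center, side = square_crop_params(bbox, s)
    print(f"scale={s}: center={center}, side={side:.0f}")
```

With a 300-pixel-tall box, the two conventions produce 330- and 360-pixel crops, so the person occupies a different fraction of the resized input image in each case.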
This was something we wanted to note in the paper, but I forgot to add it. Thanks for pointing it out; I will include it in the paper.
Thanks for the swift clarification @mkocabas and congrats again on VIBE!!
Thanks a lot! I am closing this issue for now. Feel free to reopen if needed.
Dear authors,
Thanks for the great paper and great codebase! I evaluated the SPIN pretrained model that VIBE is based on, using your evaluation code on 3DPW, and found: MPJPE: 102.4041, PA-MPJPE: 60.0952, PVE: 129.1991, ACCEL: 29.2282, ACCEL_ERR: 29.9531.
Do you know what could explain the difference from the numbers reported in your paper for SPIN? Thanks in advance,
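For readers unfamiliar with the metrics discussed in this thread, here is a minimal sketch of how MPJPE, PA-MPJPE, and acceleration error are typically computed. This is not the exact VIBE/SPIN evaluation code; the joint count, units, and finite-difference convention are assumptions. The key point for the gap discussed above is that PA-MPJPE first aligns the prediction to the ground truth with a similarity transform (Procrustes), so it is invariant to global rotation, translation, and scale.

```python
import numpy as np

def compute_similarity_transform(S1, S2):
    """Align S1 (N x 3 joints) to S2 (N x 3) with a similarity transform
    (rotation + translation + scale), i.e. orthogonal Procrustes with scaling."""
    mu1, mu2 = S1.mean(axis=0), S2.mean(axis=0)
    X1, X2 = S1 - mu1, S2 - mu2
    var1 = (X1 ** 2).sum()
    K = X1.T @ X2                      # 3x3 cross-covariance
    U, s, Vt = np.linalg.svd(K)
    Z = np.eye(3)                      # sign correction: enforce det(R) = +1
    Z[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    R = Vt.T @ Z @ U.T
    scale = np.trace(R @ K) / var1
    t = mu2 - scale * (R @ mu1)
    return scale * (R @ S1.T).T + t

def mpjpe(pred, gt):
    """Mean per-joint position error, in the same units as the input."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes-aligning the prediction to the ground truth."""
    return mpjpe(compute_similarity_transform(pred, gt), gt)

def accel_error(pred_seq, gt_seq):
    """Mean difference of second-order finite differences (accelerations)
    over a (T, N, 3) joint sequence."""
    pa = pred_seq[:-2] - 2 * pred_seq[1:-1] + pred_seq[2:]
    ga = gt_seq[:-2] - 2 * gt_seq[1:-1] + gt_seq[2:]
    return np.linalg.norm(pa - ga, axis=-1).mean()
```

Because of the alignment step, a prediction that is a rotated, scaled, and shifted copy of the ground truth has a PA-MPJPE of essentially zero while its raw MPJPE stays large, which is why the two numbers can diverge by several points when preprocessing (and hence the global pose) differs.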