Hi, thank you for your interest in our challenge. I checked the BEDLAM leaderboard. It supports submitting joints and vertices instead of pose parameters. Is that what you want?
Thanks for your reply. Yes, I would like to submit joints and vertices. Would it be possible for you to adapt the leaderboard website, please?
I see, got your point. I will try to implement this over the weekend.
Great, looking forward to this implementation. Thanks.
Hi,
I have added support for submitting joints and vertices, see this commit: https://github.com/xiexh20/behave-dataset/commit/be765d46ee246b2e014a4de3781908d847fecb2f
However, CodaLab only accepts files smaller than 300MB, which means you have to save the vertices as np.float16 data to reduce the file size. Fortunately, the numeric difference between float64 and float16 is smaller than 0.02mm, which is negligible.
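For example, a minimal sketch of writing such a submission file with float16 arrays (the dictionary keys below are placeholders; the exact format is defined in the commit above):

```python
import pickle
import numpy as np

# Placeholder prediction for one image; the exact dictionary keys and file
# layout are defined in the linked commit, so treat these as examples only.
joints = np.zeros((24, 3))      # SMPL joints, float64 by default
vertices = np.zeros((6890, 3))  # SMPL vertices, float64 by default

pred = {
    "joints": joints.astype(np.float16),      # cast to float16 to stay under 300MB
    "vertices": vertices.astype(np.float16),
}

with open("pred.pkl", "wb") as f:
    pickle.dump(pred, f)
```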
Let me know if you have further questions. Have fun!
Wow nice, thanks @xiexh20 for your commit. One remark: the shapes here for joints and vertices should be respectively (24, 3) and (6890, 3).
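A tiny sanity check along those lines (just a sketch; per-person arrays assumed):

```python
import numpy as np

def check_shapes(joints: np.ndarray, vertices: np.ndarray) -> None:
    """Verify per-person prediction shapes before writing the submission file."""
    assert joints.shape == (24, 3), f"expected (24, 3) joints, got {joints.shape}"
    assert vertices.shape == (6890, 3), f"expected (6890, 3) vertices, got {vertices.shape}"
```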
oh I see, good catch. I have updated the doc, thanks!
Hi @xiexh20 , I just made a first submission to your challenge for the human mesh recovery task.
I think that the v2v_behave metric is not ranked correctly: at the moment higher is better, while it should be lower is better, if I am not mistaken?
And for AUC_* we only see 1 digit after the decimal point; would it be possible to show at least two digits?
One more question: would it be possible to get the camera intrinsics for the test images?
Thanks for organizing this competition,
Hi,
Yes, v2v is lower-is-better; that is why you are ranked 2nd in v2v_behave.
I have changed the leaderboard to show AUC at 2 digits. Let me know if you have further questions.
Hi, thanks for your reply. Sorry, my bad. However, there is an error for MPJPE_PA_icap: it should be lower is better, but 48.3 is ranked 2nd behind 54.9.
Coming back to my other question: do you plan to release the camera intrinsics (focal length, principal point) for the training and test sets? I cannot find them for either BEHAVE or InterCap. Thanks for your help,
Good catch, I have updated the leaderboard with the correct ranking.
You can find the intrinsics here: https://github.com/xiexh20/behave-dataset/blob/main/challenges/lib/config.py#L7
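In case it helps, a minimal pinhole-projection sketch using such intrinsics (the fx, fy, cx, cy values are placeholders; take the real ones from config.py):

```python
import numpy as np

def project_points(points_cam: np.ndarray, fx: float, fy: float,
                   cx: float, cy: float) -> np.ndarray:
    """Project (N, 3) camera-space points to (N, 2) pixels (pinhole model, no distortion)."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)
```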
Nice! Thanks a lot @xiexh20
Hi @xiexh20, in the leaderboard you are reporting metrics from 3 different splits (behave, icap and synz), however on the CodaLab website only BEHAVE and InterCap are introduced. Where do the SYNZ dataset/samples come from? And how can we know which images of the test sets belong to SYNZ? Thanks for your answer.
Hi,
The synz test images are synthetic renderings of some interactions from the BEHAVE test set. They are used only to understand the model performance difference between synthetic and real data. They will NOT be used to determine the final ranking. All these images are randomized, so it is not possible to identify their source from the released information.
Hi @xiexh20, thanks for this repo and dataset. I am wondering if we could directly store the vertices in the pkl files for the HMR task instead of storing the pose and betas parameters of the SMPL model? It would be more convenient to allow both formats, similar to what the AGORA and BEDLAM leaderboards do. Also, methods that directly regress vertices (e.g. METRO, FastMETRO, TORE) cannot participate in this challenge because of this requirement on the data format. I hope you see my point. Thanks for your time.
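For reference, recovering vertices from pose/betas is straightforward for parametric methods, e.g. with the smplx package (a sketch; the model path is a placeholder), whereas vertex-regression methods have no such parameters to submit:

```python
import torch
import smplx

# Sketch: recover SMPL vertices/joints from pose and betas with the smplx package.
# The model path is a placeholder; point it to your local SMPL model files.
model = smplx.create("/path/to/smpl_models", model_type="smpl", gender="neutral")

betas = torch.zeros(1, 10)         # shape parameters
body_pose = torch.zeros(1, 69)     # 23 body joints x 3 axis-angle values
global_orient = torch.zeros(1, 3)  # root orientation

out = model(betas=betas, body_pose=body_pose, global_orient=global_orient)
vertices = out.vertices.detach().numpy()[0]   # (6890, 3)
joints = out.joints.detach().numpy()[0][:24]  # first 24 are the SMPL kinematic joints
```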