Closed Guptajakala closed 4 years ago
Hi @Guptajakala,
Thank you for your interest in our work!
If you check this file, you will find a like_v1
flag which will allow you to spot the v1 configuration of HO3D I experimented with, along with the list of training sequences.
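As a rough sketch of how such a flag might select sequences (the function name and the full sequence list below are illustrative assumptions, not the repo's actual code; only "SM1" and "MC2" are named in this thread):

```python
# Illustrative sketch only: the real like_v1 flag lives in the repo's
# HO3D dataset file. Sequence names other than "SM1"/"MC2" are
# hypothetical placeholders.

def get_split_sequences(split, like_v1=True):
    """Return sequence names for a split under the early-release (v1)
    configuration discussed in this issue."""
    if not like_v1:
        # The v2 release uses the official train/eval split instead.
        raise NotImplementedError("v2 uses the official CodaLab split")
    test_seqs = ["SM1", "MC2"]                    # small v1-style test set
    all_seqs = ["MC1", "MC2", "SM1", "SS1"]       # hypothetical full list
    train_seqs = [s for s in all_seqs if s not in test_seqs]
    return train_seqs if split == "train" else test_seqs
```

The point is simply that flipping one flag swaps between the small early-release split and the full v2 protocol.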
All the best!
Yana
Thanks very much for the reply! For the testing results reported in the paper, you are using {"MC2"}, is that right?
Hi, I also have a question about this. Since you can set like_v1=True or False to change the configuration, I wonder: did you use like_v1=True to get the results in your paper early on, but later train on the full training set and submit that result to the HO3D-v2 CodaLab competition? Is that true? @hassony2 Thank you very much!
Hi @MengHao666 and @Guptajakala ,
It is even a bit more confusing, let me try to clarify.
Following the early release of HO3D (which was the only version available at submission time), I evaluated on the single sequence to provide a fair comparison with figure 8 of the early-release version. Post-submission, I later used the full HO3D dataset, which resulted in the CodaLab submission. (See section 5.2: "We use 14 sequences for training and the remaining one sequence for testing.")
For other experiments on HO3D I used the somewhat larger but still very small test set of 2 sequences, {"SM1", "MC2"}, following the HO3D authors' suggestion at that time. Note that this is still a very small evaluation set.
For leaderboard submissions I indeed used the full HO3D dataset, as per the CVPR release (which I refer to as "v2").
At submission time we used this small subset because the full dataset was not yet available. I would definitely suggest moving to the full HO3D dataset for any future experiments: given the very small size of the early-release dataset, it is difficult to draw meaningful and generalizable conclusions from evaluating on only 1 or 2 objects (which is of course also a concern for the results I report).
Let me know if this clarifies the raised questions!
Yana
Thanks very much for your clarification!
@Guptajakala Hi, since you asked about training details on the HO3D-v2 dataset, have you submitted results on the CodaLab leaderboard? Would you be willing to share your e-mail? I really want to ask some questions about the leaderboard result. Thank you!
Hi, I have a question. Hasson's model estimates both object and hand pose (given the object model). Do you estimate the object during submission, since we do not know the object models of the test set? Does the warping section work without considering the object?
Hi, in my opinion the object model is necessary for the warping section to work. As for the submission, I have learned that both the baseline result and hassony2's submission used the HO3D-v2 dataset only.
All the best.
Hi, thanks for the great work!
For HO3D you mentioned that " HO3D Optional: Download the HO3D-v2 dataset. Note that all results in our paper are reported on a subset of the current dataset which was published as an early release. The results are therefore not directly comparable with the final published results which are reported on the v2 version of the dataset. "
Do you remember exactly which subset you were using? I'm working on a project which may need to follow the same protocol for a fair comparison later. HO3D currently uses an online competition, and the evaluation-set ground truths are not publicly available. If I understand correctly, you were not using that same evaluation set at the time?