Closed · canuck6ix closed this issue 4 years ago
Hi, there are pre-defined data splits (training/testing sets) for those datasets. I used the pre-defined splits.
Thanks. I couldn't find the data split for Hands17. Can you point me to it please?
Read here: http://icvl.ee.ic.ac.uk/hands17/challenge/ You should send an email to the organizers.
Will do, thanks. I thought it might have been published before!
Sorry to bug you again, Moon. I just heard back from the challenge organizers that the selection of validation data was left up to the participants. Can you please tell me whether you used a validation set at all? If so, which images in the training data (957K) did you use for validation? Thanks.
Please note I'm not asking about evaluation, which is done on the challenge servers (seen/unseen/occluded).
I didn't make an additional validation set from the training set. You can adjust hyperparameters using the test set of this dataset or of other datasets.
Hi @mks0601, I have a question regarding cross-validation: according to your paper, the 'leave-one-subject-out' strategy was used for MSRA cross-validation. I was wondering what strategy you used for the NYU and Hands17 datasets. Thank you!
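For context, leave-one-subject-out on MSRA means nine folds, each holding out one of the dataset's nine subjects for testing and training on the other eight. Below is a minimal sketch of that split, assuming the data is laid out as one folder per subject (P0 ... P8); the directory layout and helper are illustrative placeholders, not this repo's actual code.

```python
# A minimal sketch of leave-one-subject-out cross-validation on MSRA,
# assuming the data is stored as one folder per subject (P0 ... P8).
# The paths and helper names are illustrative, not this repo's code.
import os

SUBJECTS = [f"P{i}" for i in range(9)]  # the MSRA dataset has 9 subjects

def leave_one_subject_out(root):
    """Yield (train_dirs, test_dir) pairs, one fold per held-out subject."""
    for held_out in SUBJECTS:
        train_dirs = [os.path.join(root, s) for s in SUBJECTS if s != held_out]
        test_dir = os.path.join(root, held_out)
        yield train_dirs, test_dir

# Train on 8 subjects and evaluate on the 9th; the reported MSRA error is
# typically the average over all 9 folds.
for train_dirs, test_dir in leave_one_subject_out("./data/MSRA"):
    pass  # build datasets from train_dirs / test_dir and run one fold
```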