Hi,
OK, thank you~
Sorry to bother you again. Would you please tell me the training parameters on the hands17 dataset? More specifically, what are the batch size, initial learning rate, learning policy, step size or step values, and maximum number of training epochs? By the way, have you trained the REN model on the hands17 dataset for comparison? I guess it can achieve similar performance because the training data is large enough. What's your opinion? Thanks again.
Hi, here are the training parameters on the hands17 dataset:
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 200000
display: 100
max_iter: 800000
momentum: 0.9
weight_decay: 0.0005
The batch size is 128, the same as in all the other experiments in our paper.
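For reference, these values correspond to a standard Caffe solver configuration. Below is a minimal sketch of what such a solver.prototxt might look like; the "net" path is a placeholder, not the actual file name used in this repo, and note that in Caffe the batch size of 128 is set in the data layer of the training net, not in the solver.

# Hypothetical solver.prototxt assembling the settings reported above.
# "net" is a placeholder path; the repo's real prototxt may differ.
net: "models/hands17/train_val.prototxt"  # training net; its data layer sets batch_size: 128
base_lr: 0.001            # initial learning rate
lr_policy: "step"         # drop the learning rate in fixed steps
gamma: 0.1                # multiply the lr by 0.1 at each step
stepsize: 200000          # step the lr every 200k iterations
display: 100              # print training loss every 100 iterations
max_iter: 800000          # total number of training iterations
momentum: 0.9
weight_decay: 0.0005
solver_mode: GPU          # assumption: GPU training, not stated in the thread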
We did try training the REN model on the hands17 dataset. I can't remember exactly what the performance was now, but as far as I can recall, Pose-REN consistently performed better than REN (by a few millimeters, perhaps).
In case you are now working on the HANDS17 challenge, it should be noted that the pre-trained models on the hands17 dataset released in this repo are not exactly the ones used for our final entry in the HANDS17 Challenge. As stated in the challenge document, we used REN as the Init-Net, and there were also several additional improvements/tricks. The pre-trained models on the hands17 dataset in this repo are not intended to reproduce our performance in the challenge, but to provide a more stable and accurate hand pose estimator for various applications.
OK, I will try training again and hope to get performance as good as yours. Thank you!
Hi, thanks for sharing your project. I have two questions here,