This is the pretraining code for PeCLR, an equivariant contrastive learning framework for 3D hand pose estimation, presented at ICCV 2021.
Hi, thanks for the released code! I have some questions about training the 2.5D hand representation for hand pose estimation.
From the related papers, I understand that the hand pose estimator's loss has three terms: 1) the 2D pixel coordinates on the image plane, 2) the scale-normalized relative z, and 3) the scale-normalized root z (after refinement). Is my understanding correct? And what weight values are used to balance these three loss terms?
Your understanding is correct! We use the same weights as explained in the supplementary of the following paper: https://arxiv.org/pdf/2003.09282.pdf (p.27)
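For anyone reading along, the three-term loss described above can be sketched as a weighted sum. This is a minimal illustration, not the repo's actual implementation: the function name `hand_pose_loss`, the use of a mean L1 penalty per term, and the default weights of 1.0 are all assumptions; the real weights are those given in the linked supplementary.

```python
import numpy as np

def hand_pose_loss(pred_2d, gt_2d, pred_z_rel, gt_z_rel, pred_z_root, gt_z_root,
                   w_2d=1.0, w_z=1.0, w_z_root=1.0):
    """Hypothetical sketch of the three-term 2.5D loss.

    Terms:
      1) 2D pixel coordinates on the image plane,
      2) scale-normalized relative z (depth of each joint relative to the root),
      3) scale-normalized root z (after refinement).
    The weights are placeholders; see the supplementary of
    https://arxiv.org/pdf/2003.09282.pdf (p.27) for the values actually used.
    """
    def l1(a, b):
        # Mean absolute error over all elements (L1 penalty, assumed here).
        return np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)).mean()

    return (w_2d * l1(pred_2d, gt_2d)
            + w_z * l1(pred_z_rel, gt_z_rel)
            + w_z_root * l1(pred_z_root, gt_z_root))
```

With unit weights and per-term mean errors of 1, 2, and 3, the combined loss is simply their sum, 6.0.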
Thanks for your help!