Deng-Y opened this issue 3 years ago
Thank you for your quick response! Yes, I have read Pose2Mesh, and I understand why you cite it here.
From your words, I gather that the 3D mesh is currently more of a by-product of the 3D pose, and that most papers aim to maximize the accuracy of the 3D pose rather than the 3D mesh. Am I right? After all, ground-truth 3D meshes are not easy to obtain.
Correct. Maybe I can incorporate additional supervision, such as depth maps or silhouettes, for more accurate 3D shapes.
Do you have depth maps for the InterHand2.6M dataset? Silhouettes would be a more readily available form of weak supervision.
No I don't :(
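For reference, a silhouette term like the one discussed above is usually added through a differentiable renderer. Below is a minimal sketch of that idea, assuming PyTorch3D, a MANO mesh already transformed into camera coordinates, and a placeholder camera; it is only an illustration, not code from NeuralAnnot or InterHand2.6M.

```python
import math
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    BlendParams,
    FoVPerspectiveCameras,
    MeshRasterizer,
    MeshRenderer,
    RasterizationSettings,
    SoftSilhouetteShader,
)


def silhouette_loss(verts, faces, gt_mask, device="cpu"):
    """Compare a soft-rendered MANO silhouette against a ground-truth mask.

    verts:   (1, 778, 3)  MANO vertices, already in camera space
    faces:   (1, 1538, 3) MANO triangle indices
    gt_mask: (1, H, H)    binary hand silhouette in [0, 1]
    """
    blend = BlendParams(sigma=1e-4, gamma=1e-4)
    raster_settings = RasterizationSettings(
        image_size=gt_mask.shape[-1],
        # blur radius commonly used for soft silhouette rendering
        blur_radius=math.log(1.0 / 1e-4 - 1.0) * blend.sigma,
        faces_per_pixel=50,
    )
    renderer = MeshRenderer(
        rasterizer=MeshRasterizer(
            # placeholder camera; real use needs the dataset's intrinsics/extrinsics
            cameras=FoVPerspectiveCameras(device=device),
            raster_settings=raster_settings,
        ),
        shader=SoftSilhouetteShader(blend_params=blend),
    )
    rendered = renderer(Meshes(verts=verts, faces=faces))  # (1, H, H, 4)
    pred_mask = rendered[..., 3].clamp(0.0, 1.0)           # alpha channel = soft silhouette
    return torch.nn.functional.binary_cross_entropy(pred_mask, gt_mask)
```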
Hello! I read the NeuralAnnot paper and have some questions. Can you help me?
NeuralAnnot takes a single-view image as input and outputs a set of MANO parameters. Thus, for a single hand pose in InterHand2.6M, you will get multiple sets of MANO parameters, one per view. How do you fuse the MANO parameters from different views?
How do you estimate that the fitting error is about 5 mm?
NeuralAnnot is only supervised with the 3D pose (i.e., 3D keypoints) without shape information. Can it really learn to predict the shape parameters of MANO or SMPL?
Thank you! Looking forward to your reply!
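Regarding the last two questions, the sketch below illustrates why keypoint-only supervision still constrains the shape parameters, and how a fitting error "in mm" is typically computed (mean per-joint position error). It assumes the smplx Python package and locally downloaded MANO model files; the paths and tensors are placeholders, and this is not the authors' actual annotation code.

```python
import torch
import smplx  # pip install smplx; MANO model files from https://mano.is.tue.mpg.de

# Hypothetical path: "models" is assumed to contain mano/MANO_RIGHT.pkl.
mano = smplx.create("models", model_type="mano", is_rhand=True, use_pca=False)

# Pose AND shape are free parameters, but the loss below only looks at 3D joints.
global_orient = torch.zeros(1, 3, requires_grad=True)
hand_pose = torch.zeros(1, 45, requires_grad=True)   # per-joint axis-angle (use_pca=False)
betas = torch.zeros(1, 10, requires_grad=True)       # MANO shape coefficients
transl = torch.zeros(1, 3, requires_grad=True)

out = mano(global_orient=global_orient, hand_pose=hand_pose,
           betas=betas, transl=transl)
pred_joints = out.joints                    # (B, J, 3) joints regressed from the posed mesh, in metres
gt_joints = torch.rand_like(pred_joints)    # placeholder for the dataset's 3D keypoints

# Keypoint-only loss: betas still receive gradients because they change bone
# lengths and hence the regressed joint positions.
loss = torch.nn.functional.l1_loss(pred_joints, gt_joints)
loss.backward()
print(betas.grad.abs().sum())  # non-zero: shape is (weakly) constrained by 3D pose alone

# One common way a fitting error of "about N mm" is reported: mean per-joint
# Euclidean distance between fitted and ground-truth joints, in millimetres.
mpjpe_mm = (pred_joints.detach() - gt_joints).norm(dim=-1).mean() * 1000.0
```

Because the shape coefficients change bone lengths, the regressed joint positions depend on shape, so 3D keypoints alone do provide some weak shape supervision; they say little, however, about surface thickness, which is presumably why the extra depth or silhouette cues were discussed above.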