Hi,
Thanks for the awesome work! I have a healthcare dataset with a large domain shift compared to the publicly available in-the-wild datasets.
I am wondering how to generate the ground-truth (GT) meshes for my videos so that I can train the network the way you did.
Are the steps as follows?
1. Detect the 2D keypoints using OpenPose and crop the hand image (see the cropping sketch after this list).
2. Estimate the MANO shape and pose parameters from the RGB crops using a repo such as https://hassony2.github.io/obman.html.
3. Pass these parameters to MANO to obtain an initial GT mesh, which will likely be imperfect.
4. Keep varying the shape and pose parameters manually, starting from the estimates above, until the fit looks satisfactory (a gradient-based alternative is sketched after this list).
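
For step 1, here is a minimal cropping sketch, assuming the OpenPose hand keypoints have already been exported to JSON with its `--write_json` flag; the file paths, the margin, and the confidence threshold are placeholders:

```python
import json
import numpy as np
from PIL import Image

def crop_hand(image_path, keypoint_json_path, margin=0.3, min_conf=0.1):
    """Crop a square hand region around the detected 2D keypoints."""
    img = Image.open(image_path)
    with open(keypoint_json_path) as f:
        data = json.load(f)
    # OpenPose writes hand keypoints as a flat [x0, y0, c0, x1, y1, c1, ...] list.
    kps = np.array(data['people'][0]['hand_right_keypoints_2d']).reshape(-1, 3)
    valid = kps[kps[:, 2] > min_conf, :2]      # keep confident detections only
    x_min, y_min = valid.min(axis=0)
    x_max, y_max = valid.max(axis=0)
    side = max(x_max - x_min, y_max - y_min) * (1 + margin)
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    box = (int(cx - side / 2), int(cy - side / 2),
           int(cx + side / 2), int(cy + side / 2))
    return img.crop(box)
```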
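For steps 2-4, instead of varying the parameters purely by hand, one common alternative is to fit them with gradient descent through a differentiable MANO layer such as manopth (https://github.com/hassony2/manopth, by the same author as the ObMan page above). This is only a sketch under several assumptions: `keypoints_2d` is a random placeholder for real OpenPose detections, the intrinsics `cam_K` and the translation initialization are made up, and the MANO and OpenPose joint orderings would need to be matched in practice:

```python
import torch
from manopth.manolayer import ManoLayer

# MANO layer from manopth; mano_root must point at the downloaded
# MANO model pickle files (requires registration on the MANO website).
mano_layer = ManoLayer(mano_root='mano/models', use_pca=True,
                       ncomps=15, side='right')

# Learnable parameters: 3 axis-angle global-rotation values + 15 PCA pose
# coefficients, 10 shape coefficients, and a global translation in mm
# (matching manopth's output units).
pose = torch.zeros(1, 3 + 15, requires_grad=True)
shape = torch.zeros(1, 10, requires_grad=True)
trans = torch.tensor([[0.0, 0.0, 600.0]], requires_grad=True)

# Placeholder inputs: real OpenPose detections and calibrated camera
# intrinsics would go here.
keypoints_2d = torch.rand(21, 2) * 256
cam_K = torch.tensor([[500.0, 0.0, 128.0],
                      [0.0, 500.0, 128.0],
                      [0.0, 0.0, 1.0]])

opt = torch.optim.Adam([pose, shape, trans], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    verts, joints = mano_layer(pose, shape)  # (1, 778, 3) and (1, 21, 3), in mm
    cam_pts = joints[0] + trans              # move joints into the camera frame
    proj = cam_pts @ cam_K.T                 # pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]
    loss = ((proj - keypoints_2d) ** 2).mean()   # 2D reprojection error
    loss = loss + 1e-3 * (shape ** 2).mean()     # keep the shape plausible
    loss.backward()
    opt.step()

# `verts` now holds the fitted mesh vertices; `mano_layer.th_faces`
# gives the triangle faces for saving or visualizing the mesh.
```

Frames where the fit still looks wrong could then be corrected by hand, which is where your step 4 would come in.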
Is this how it is done, or am I missing something obvious?
I am new to graphics, so any help would be greatly appreciated.
Thanks a lot