Closed · wangyidong3 closed this issue 3 years ago
Hi @wangyidong3 , thank you for your interest in our work!
U-CMR doesn't require any special annotations; it only needs images and masks as input. To set it up for a new class, I suggest writing a Dataset that inherits `BaseDataset` and implements the `get_anno(self, index)` function. Here is an example. You can provide ground-truth cameras if they're available (possibly computed via SfM over keypoints, as done in CMR), but they're optional: they're only used to visualize camera-pose error during training.
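The setup described above might look like the following sketch. Note that the `BaseDataset` stub, the constructor signature, the returned field names, and the file layout here are my assumptions for illustration, not U-CMR's actual API:

```python
import os

class BaseDataset:
    # Stand-in for U-CMR's BaseDataset; the real class additionally handles
    # cropping, augmentation, and tensor conversion.
    def __init__(self, img_dir, mask_dir):
        self.img_dir = img_dir
        self.mask_dir = mask_dir
        # Assume one mask per image, with matching filenames.
        self.names = sorted(os.listdir(img_dir))

class NewClassDataset(BaseDataset):
    """Hypothetical dataset for a new category: only images + masks needed."""

    def get_anno(self, index):
        name = self.names[index]
        return {
            'img_path': os.path.join(self.img_dir, name),
            'mask_path': os.path.join(self.mask_dir, name),
            # Optional ground-truth camera; only used to report pose error.
            'sfm_pose': None,
        }
```

The key point is that `get_anno` only has to point at an image and its mask; the optional camera field can stay empty.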
re: Template mesh: I downloaded freely available models from the internet, simplified, symmetrized, and UV-unwrapped them in Blender, and exported them as .obj files. I also post-processed the meshes with PyMesh so they have roughly uniform edge lengths. I'll share more detailed instructions on this soon.
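To sanity-check the "similar edge-lengths" property after remeshing, a quick pure-Python check can be used (this helper is my own illustration, not part of U-CMR or PyMesh):

```python
import math

def edge_length_stats(verts, faces):
    """Return (mean length, max/min ratio) over the unique edges of a
    triangle mesh. A ratio close to 1 means the tessellation is nearly
    uniform, which is what the PyMesh post-processing aims for.

    verts: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    """
    edges = set()
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((min(a, b), max(a, b)))  # undirected edge, deduplicated
    lengths = [math.dist(verts[a], verts[b]) for a, b in edges]
    return sum(lengths) / len(lengths), max(lengths) / min(lengths)
```

Running this before and after remeshing shows whether the edge-length spread actually tightened.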
Note that if you already have a template mesh (as a .obj), it can be fed to U-CMR directly via `--shape_path=filename.obj`. If the .obj file defines a UV-texture mapping, use it by additionally passing the `--textureUnwrapUV` option.
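Whether to pass `--textureUnwrapUV` comes down to whether the .obj declares texture coordinates, i.e. contains `vt` records. A tiny helper to check (my own, not part of U-CMR):

```python
def obj_has_uv(path):
    """Return True if a Wavefront .obj file defines UV texture coordinates.

    'vt u v' lines declare per-vertex texture coordinates; their presence
    means the mesh ships with a UV unwrapping that U-CMR can reuse.
    """
    with open(path) as f:
        return any(line.startswith('vt ') for line in f)
```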
Thank you very much. I will post an update once the new dataset trains successfully.
Hi @shubham-goel , thanks for your great work!
If I want to train and test on other classes, how do I generate the annotation files and the template mesh files?