Closed lll-gen closed 3 years ago
The mesh vertices are obtained by forwarding the SMPL parameters through the SMPL layer. For a new dataset, all you have to do is create another data/$DB_NAME/$DB_NAME.py, referring to an existing data/$DB_NAME/$DB_NAME.py as a template.
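To illustrate the idea of "forwarding SMPL parameters to get mesh vertices", here is a minimal sketch of the shape-blendshape step only, with made-up tiny array sizes (the real SMPL layer uses 6890 vertices, 10 shape coefficients, plus pose-dependent blendshapes and linear blend skinning, which are omitted here):

```python
import numpy as np

# Hypothetical miniature "SMPL-like layer": only the shape blendshape
# step, with tiny random arrays standing in for the real model data.
rng = np.random.default_rng(0)
n_verts, n_betas = 5, 3

v_template = rng.standard_normal((n_verts, 3))          # rest-pose template mesh
shapedirs = rng.standard_normal((n_verts, 3, n_betas))  # shape blendshapes
betas = rng.standard_normal(n_betas)                    # per-subject shape params

# Shaped mesh: template plus a linear combination of the blendshapes
v_shaped = v_template + shapedirs @ betas
print(v_shaped.shape)  # (5, 3)
```

The actual repo loads the real SMPL model files, but the per-vertex linear structure is the same.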
Thanks for your reply! I have another question: how is joint_cam obtained, and what is it used for?
joint_cam holds the 3D joints in the camera coordinate system, provided by whatever dataset you're using (Human3.6M, 3DPW, etc.), so they are the ground-truth labels. They can also be obtained via the forward pass of the network.
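When obtained from the network output rather than from dataset labels, camera-space joints are typically regressed from the mesh vertices with a joint-regressor matrix. A minimal sketch with hypothetical tiny sizes (SMPL's regressor is 24 x 6890):

```python
import numpy as np

# Hypothetical joint regressor: each row is a convex combination of
# vertices that averages to one joint location.
rng = np.random.default_rng(1)
n_joints, n_verts = 4, 10
J_regressor = rng.random((n_joints, n_verts))
J_regressor /= J_regressor.sum(axis=1, keepdims=True)  # rows sum to 1

vertices_cam = rng.standard_normal((n_verts, 3))  # mesh in camera coords
joint_cam = J_regressor @ vertices_cam            # (n_joints, 3) 3D joints
```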
Hello, I want to ask why the camera parameters don't include R and T. In other words, are the coordinates only in the camera coordinate system, not the world coordinate system?
Could you let me know which dataset you're talking about?
First of all, thanks for your patient responses! I have three questions:
Question 1: At present I use my own dataset, which only contains SMPL parameters, rendered images, and camera parameters. Does that work?
Question 2: When I downloaded your MuCo dataset, I found that the camera parameters contain only the focal length and principal point. Aren't rotation and translation matrices needed to transform to the world coordinate system?
Question 3: Further, can I abandon the SMPL representation? In other words, can I obtain the coordinates by non-rigid registration instead of using the SMPL parameters?
Q1. Yes, it may work. Q2. The 3D coordinates and SMPL parameters of the MuCo dataset are camera-centered. Q3. I don't understand your question :(
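Because the 3D labels are already camera-centered, projecting them to pixels needs only the intrinsics (focal length and principal point); no extrinsic R and T are required. A small sketch with made-up intrinsics:

```python
import numpy as np

# Pinhole projection of camera-space 3D joints using only intrinsics.
# Values below are illustrative, not from any real dataset.
f = np.array([1500.0, 1500.0])   # focal lengths (fx, fy)
c = np.array([1024.0, 1024.0])   # principal point (cx, cy)

joint_cam = np.array([[0.1, -0.2, 3.0]])           # X, Y, Z in camera coords
uv = f * joint_cam[:, :2] / joint_cam[:, 2:3] + c  # pixel coordinates
print(uv)  # [[1074.  924.]]
```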
Q3: I mean, I don't get the mesh coordinates from the SMPL parameters; I obtain the vertex coordinates directly, the other way around.
Q3. That would be no problem.
Hi, I'm back again! Do I need to rerun the RootNet code to get the root joint depth in order to obtain the final mesh?
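For context on why a root depth matters at all: the network predicts a root-relative mesh, and an absolute root position (with depth from something like RootNet) is needed to place it in camera space. A sketch with illustrative numbers:

```python
import numpy as np

# Root-relative mesh vertices (meters); the first vertex is the root.
mesh_rel = np.array([[0.0, 0.0, 0.0],
                     [0.1, 0.2, -0.05]])

# Absolute root joint in camera coordinates; the Z component is the
# kind of depth a root-localization network would supply.
root_cam = np.array([0.3, -0.1, 4.2])

mesh_cam = mesh_rel + root_cam  # absolute camera-space mesh
print(mesh_cam[1])  # [0.4  0.1  4.15]
```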
Great job! I would like to ask why the SMPL parameters are needed in the training process; the loss function in the paper does not use the SMPL parameters. If I use my own dataset, what should I do?