Hi, for datasets with 3D labels, such as MPII-3D, all the pseudo-GTs are fitted to the 3D GT joint coordinates. This means they live in the same space as the 3D GT joints (the world coordinate system) and can be projected onto the image using the dataset's camera parameters.
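For concreteness, projecting world-coordinate joints (or SMPL vertices) with camera parameters boils down to a standard pinhole projection. This is a minimal NumPy sketch; `R`, `t`, `K`, and the joint array are illustrative placeholders, not values from the MPII-3D calibration:

```python
import numpy as np

def project_to_image(points_world, R, t, K):
    """Project Nx3 world-coordinate points to pixel coordinates."""
    points_cam = points_world @ R.T + t   # world -> camera (X_cam = R @ X_world + t)
    uvw = points_cam @ K.T                # camera -> homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide

# toy camera, not a real MPII-3D calibration
K = np.array([[1500.0, 0.0, 1024.0],
              [0.0, 1500.0, 1024.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 3000.0])          # camera 3 m in front (mm units)
joints_world = np.random.randn(17, 3) * 100.0
print(project_to_image(joints_world, R, t, K).shape)  # (17, 2)
```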
Thank you so much! However, I ran into some issues while training a model with the SMPL labels together with the 2D and 3D annotations originally provided by MPII-3D. My model estimates the pose and shape parameters, I extract the vertex positions from the SMPL model, regress the estimated keypoint coordinates from them, and compute the loss against the ground truth. Is that training procedure reasonable? I also noticed that the SMPL labels you provide include a "trans" parameter; could that parameter be related to the issue I am seeing? Looking forward to your response.
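To pin down the role of "trans": a minimal PyTorch sketch of the loss computation described above could look like the following. The function name, shapes, and joint regressor are assumptions for illustration, not code from this repository:

```python
import torch
import torch.nn.functional as F

def keypoint_loss(pred_vertices, pred_trans, J_regressor, gt_joints_world):
    """L1 loss between regressed and ground-truth 3D joints (world frame)."""
    # 'trans' moves the mesh from the SMPL local frame into world coordinates;
    # without it the prediction sits near the origin while the GT does not.
    verts_world = pred_vertices + pred_trans[:, None, :]
    pred_joints = torch.einsum('jv,bvc->bjc', J_regressor, verts_world)
    return F.l1_loss(pred_joints, gt_joints_world)

# toy shapes: batch of 2, 6890 SMPL vertices, 17 joints
pred_vertices = torch.randn(2, 6890, 3)
pred_trans = torch.randn(2, 3)
J_regressor = torch.rand(17, 6890)
J_regressor = J_regressor / J_regressor.sum(dim=1, keepdim=True)  # rows sum to 1
gt_joints_world = torch.randn(2, 17, 3)
print(keypoint_loss(pred_vertices, pred_trans, J_regressor, gt_joints_world))
```

Alternatively, many pipelines subtract the root joint from both prediction and ground truth, making the loss root-relative and removing "trans" from the comparison entirely; mixing up these two conventions is a common source of large constant offsets in the loss.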
I'm not sure I follow. For the pseudo-GT of 3D datasets, such as the MPII-3D dataset (https://vcai.mpi-inf.mpg.de/3dhp-dataset/), the output SMPL vertices are in the world coordinate system. You can check this here: https://github.com/mks0601/NeuralAnnot_RELEASE/blob/main/MPI-INF-3DHP/demo_smpl.py
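A minimal sketch of what the linked demo does, assuming the `smplx` package and the SMPL model files are available; `pose`, `shape`, and `trans` stand in for one frame of the released pseudo-GT (the key names in the actual JSON may differ):

```python
import torch
import smplx

# SMPL_MODEL_DIR must contain the SMPL model files (downloaded separately)
smpl = smplx.create('SMPL_MODEL_DIR', model_type='smpl')

pose = torch.zeros(1, 72)    # 24 joints x 3 axis-angle values (placeholder)
shape = torch.zeros(1, 10)   # betas (placeholder)
trans = torch.zeros(1, 3)    # root translation from the pseudo-GT

out = smpl(global_orient=pose[:, :3],   # first 3 values: root orientation
           body_pose=pose[:, 3:],       # remaining 69: body joint rotations
           betas=shape,
           transl=trans)                # transl places the mesh in world space
verts_world = out.vertices              # (1, 6890, 3), projectable with the
                                        # dataset's camera parameters
print(verts_world.shape)
```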
Got it, I will look into it further. Thanks again!
Thank you for your excellent work! The SMPL parameters were fitted on a size-reduced image, so do they match the 2D and 3D keypoint annotations provided by the original MPII-3D dataset?
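For what it's worth, resizing an image does not affect 3D labels in the world coordinate system at all; only the camera intrinsics need rescaling before the 2D projection lines up with the resized image. A minimal sketch (`rescale_intrinsics` is a hypothetical helper, not part of this repository):

```python
import numpy as np

def rescale_intrinsics(K, sx, sy):
    """Adapt a 3x3 intrinsics matrix to an image resized by (sx, sy)."""
    K = K.copy()
    K[0, 0] *= sx   # fx
    K[0, 2] *= sx   # cx
    K[1, 1] *= sy   # fy
    K[1, 2] *= sy   # cy
    return K

# e.g. the original image was downscaled to half resolution in both axes
K = np.array([[1500.0, 0.0, 1024.0],
              [0.0, 1500.0, 1024.0],
              [0.0, 0.0, 1.0]])
print(rescale_intrinsics(K, 0.5, 0.5))
```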