Official PyTorch implementation of "I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image", ECCV 2020
Hi,
I noticed something that seems peculiar in the augmentation code, and I'd be grateful if you could clarify it for me. In your dataset class (for example, Human36M.py), when you apply the rotation augmentation to the image (the variable `rot`, in degrees, in your code), you don't apply the same rotation to the image-aligned mesh/joint data (`h36m_joint_img`, `smpl_mesh_img`, and `smpl_joint_img`). Instead, only the camera-space joints (`smpl_joint_cam`) are rotated, using the `rot_aug_mat` matrix. Shouldn't the augmentation also be applied to `h36m_joint_img`, `smpl_mesh_img`, and `smpl_joint_img`?
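For concreteness, here is a minimal, self-contained sketch of the pattern I am describing. It is not a copy of your code: the array shapes, the placeholder data, and the sign convention of the rotation are my assumptions, made only to illustrate that `rot` reaches the camera-space joints but not the image-aligned ones.

```python
import numpy as np

# Hypothetical stand-ins for what the dataloader provides (shapes are assumptions).
smpl_joint_cam = np.random.randn(29, 3).astype(np.float32)  # camera-space joints (x, y, z)
smpl_joint_img = np.random.rand(29, 3).astype(np.float32)   # image-aligned joints (x, y, depth)

rot = 30.0  # rotation-augmentation angle in degrees, sampled per training example

# Rotation about the camera z-axis; the sign convention here is my assumption.
rot_rad = np.deg2rad(-rot)
rot_aug_mat = np.array([[np.cos(rot_rad), -np.sin(rot_rad), 0.0],
                        [np.sin(rot_rad),  np.cos(rot_rad), 0.0],
                        [0.0,              0.0,             1.0]], dtype=np.float32)

# Only the camera-space joints get rotated ...
smpl_joint_cam = (rot_aug_mat @ smpl_joint_cam.T).T

# ... while the image-aligned coordinates are used as loaded,
# even though the image itself was rotated by `rot` during cropping.
```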
Thanks