rohanksaxena opened 2 years ago
annotate_camera_train, annotate_real_train, and annotate_test_data use three different approaches to obtain the ground-truth pose labels. Why not use Umeyama alignment for all of them, since the NOCS map, depth map, and camera intrinsics are available in all three cases? Also, the paper only mentions the Umeyama method.
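For context, the Umeyama method referred to above estimates a similarity transform (scale, rotation, translation) that best aligns two point sets in the least-squares sense, here the object's NOCS coordinates and the corresponding points back-projected from the depth map. A minimal NumPy sketch of the standard closed-form solution (this is an illustrative implementation, not the repository's exact code):

```python
import numpy as np

def umeyama(src, dst):
    """Estimate similarity transform (s, R, t) with dst ~= s * R @ src + t.

    src, dst: (N, 3) corresponding point sets, e.g. NOCS coordinates and
    depth points back-projected with the camera intrinsics.
    Returns scale s (float), rotation R (3x3), translation t (3,).
    """
    assert src.shape == dst.shape
    n, dim = src.shape
    # Center both point sets.
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    # Cross-covariance between the centered sets.
    cov = dst_c.T @ src_c / n
    U, D, Vt = np.linalg.svd(cov)
    # Reflection correction keeps R a proper rotation (det(R) = +1).
    S = np.eye(dim)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1
    R = U @ S @ Vt
    # Scale from the singular values and the source variance.
    var_src = (src_c ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

On noise-free correspondences this recovers the exact ground-truth pose and scale; with noisy depth it gives the least-squares optimal similarity transform, which is why it is a natural choice whenever both the NOCS map and the depth map are available.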
Hello, during data preprocessing in the pose_data.py file there are separate methods for annotating the CAMERA train and REAL train datasets. For the CAMERA train dataset, Umeyama alignment is performed between the ground-truth NOCS map and the depth image, but the same is not done for the REAL dataset. Can you please explain why this is the case?