chensong1995 / HybridPose

HybridPose: 6D Object Pose Estimation under Hybrid Representation (CVPR 2020)

Experiments on the Truncation LineMOD #66

Closed JiChun-Wang closed 2 years ago

JiChun-Wang commented 3 years ago

Hi, I tried to test your provided pre-trained models on the Truncation LineMOD dataset introduced in PVNet, but the evaluation result is always 0.0. I would sincerely appreciate your explanation! The input images are 256*256 crops from the LineMOD dataset, so I only changed the camera intrinsic parameters in the generate_data function.
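
For reference, this is roughly how I adjust the intrinsics for the crops (a minimal sketch, assuming the crop is a plain 256x256 window at offset (crop_x, crop_y) in the original frame; the function name crop_intrinsics is just for illustration):

```python
import numpy as np

# Standard LineMOD camera intrinsics (as used by PVNet / HybridPose)
K_orig = np.array([[572.4114,   0.0,    325.2611],
                   [  0.0,    573.5704, 242.0490],
                   [  0.0,      0.0,      1.0   ]])

def crop_intrinsics(K, crop_x, crop_y):
    # Cropping only shifts the principal point; focal lengths stay the same
    K_crop = K.copy()
    K_crop[0, 2] -= crop_x
    K_crop[1, 2] -= crop_y
    return K_crop

# Example: a 256x256 patch whose top-left corner is at (crop_x, crop_y)
K_crop = crop_intrinsics(K_orig, crop_x=100.0, crop_y=80.0)
```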

chensong1995 commented 3 years ago

Hello there,

Thanks for your interest in our work! My suggestion is to visualize the network predictions first and see whether the problem lies in the network regression or in the pose estimation. My suspicion is that a constant transformation matrix is missing during the 2D-3D alignment.
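
For example, something along these lines could overlay the predicted 2D keypoints on the input crop (just a sketch; kpts_2d is assumed to be the network's Nx2 keypoint prediction in pixel coordinates of the crop):

```python
import cv2

def draw_keypoints(image, kpts_2d):
    # Misplaced points suggest a regression problem; well-placed points with a
    # wrong final pose point to a problem in the pose estimation stage.
    for x, y in kpts_2d:
        cv2.circle(image, (int(round(x)), int(round(y))), 3, (0, 0, 255), -1)
    return image
```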

I hope this helps!

JiChun-Wang commented 3 years ago

Following your suggestion, I found that the visualized network output is correct, and a constant transformation matrix is now added during the 2D-3D alignment, but the final result is still wrong, so it seems the problem is in the pose estimation.

JiChun-Wang commented 3 years ago

I also followed your ablation code and set the weights for the three hybrid representations to 1, 0, 0, where the initial pose is obtained from cv2.solvePnP. The ADD result for the cat object is 0.2714 for the initial prediction, but only 0.0407 for the final prediction.
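
For completeness, this is essentially the ADD computation I use to score both predictions (a sketch of the standard ADD metric, not the exact evaluation code in this repository; model_pts is the Nx3 model point cloud and the poses are 3x4 [R|t] matrices):

```python
import numpy as np

def add_metric(model_pts, pose_pred, pose_gt, diameter, threshold=0.1):
    # Transform the model points with the predicted and ground-truth poses
    pts_pred = model_pts @ pose_pred[:, :3].T + pose_pred[:, 3]
    pts_gt = model_pts @ pose_gt[:, :3].T + pose_gt[:, 3]
    # ADD: mean distance between corresponding transformed model points
    mean_dist = np.linalg.norm(pts_pred - pts_gt, axis=1).mean()
    # A prediction counts as correct if ADD < threshold * object diameter
    return mean_dist < threshold * diameter
```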

chensong1995 commented 3 years ago

Thanks for your follow-up. After you add the transformation matrix, make sure to visualize the ground-truth pose labels and check that they look right. One way to do this is to project the 3D model into 2D using the ground-truth labels and then render the model on the image. Once we are certain about the labels and alignment, you may want to set a few breakpoints in PoseRegression::RefinePose and investigate the optimization trajectory. Perhaps do some calculations by hand and verify the debugger output against your hand calculation. On a new dataset, we may or may not need some hyperparameter tuning. I hope this helps!
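
A sanity check along those lines could look roughly like this (a sketch, not the exact code in this repository; model_pts is the Nx3 object model, R and t the ground-truth rotation and translation, and K the adjusted intrinsics for the crop):

```python
import cv2
import numpy as np

def draw_gt_projection(image, model_pts, R, t, K):
    # Project the 3D model points into the image with the ground-truth pose
    rvec, _ = cv2.Rodrigues(R)
    pts_2d, _ = cv2.projectPoints(np.asarray(model_pts, dtype=np.float64),
                                  rvec, t.reshape(3, 1), K, None)
    pts_2d = pts_2d.reshape(-1, 2).astype(int)
    # The projected points should lie on the object if labels and intrinsics are right
    for u, v in pts_2d:
        if 0 <= u < image.shape[1] and 0 <= v < image.shape[0]:
            cv2.circle(image, (u, v), 1, (0, 255, 0), -1)
    return image
```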

YC0315 commented 3 years ago

> Hi, I tried to test your provided pre-trained models on the Truncation LineMOD dataset introduced in PVNet, but the evaluation result is always 0.0. I would sincerely appreciate your explanation! The input images are 256*256 crops from the LineMOD dataset, so I only changed the camera intrinsic parameters in the generate_data function.

Hello, did you create the Truncation_linemod dataset yourself? Could you send me a copy? Thank you very much!