j96w / DenseFusion

"DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion" code repository
https://sites.google.com/view/densefusion
MIT License

Visualizing DenseFusion pred/target point clouds on the 2D image [unable to reproduce the paper's true dis error] #196

Open ghost opened 3 years ago

ghost commented 3 years ago

Hi. In your LineMOD testing example, you build a predicted point cloud, pred, by transforming the object model points with the estimated pose, which consists of the rotation my_r and the translation my_t. This prediction is compared against target, the same model points transformed by the ground-truth pose.
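
For reference, this is roughly how I understand pred, target, and the dis error to be computed in eval_linemod.py (a sketch from memory rather than the exact repository code; quaternion_matrix is the helper from lib/transformations.py, which takes a (w, x, y, z) quaternion):

```python
import numpy as np
from transformations import quaternion_matrix  # lib/transformations.py in this repo

# my_r: estimated quaternion (w, x, y, z); my_t: estimated translation, shape (3,)
# model_points: (N, 3) points sampled from the object's 3D model
R = quaternion_matrix(my_r)[:3, :3]        # quaternion -> 3x3 rotation matrix
pred = np.dot(model_points, R.T) + my_t    # model points under the estimated pose
# target: the same model points under the ground-truth pose, from the dataloader
dis = np.mean(np.linalg.norm(pred - target, axis=1))  # the "dis" error reported at test time
```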

I am attempting to reproduce the figures from your paper, in which these point clouds are projected onto the 2D RGB images. Using your evaluation code and the downloaded trained checkpoints, the projected predictions do not visually line up with the objects in the RGB images.

I am projecting with OpenCV's cv2.projectPoints, roughly in the form cv2.projectPoints(model_points, rvec, tvec, cam_mat, dist_coeffs), where rvec and tvec are derived from my_r and my_t, and the camera matrix is built from the camera intrinsics (cam_fx, cam_fy, cam_cx, cam_cy) given in dataset.py.
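
Concretely, here is a minimal sketch of my projection code (the intrinsic values are the LineMOD numbers I read out of datasets/linemod/dataset.py; I convert the quaternion my_r to a Rodrigues vector first, since cv2.projectPoints expects one rather than a quaternion, and I assume zero lens distortion):

```python
import cv2
import numpy as np
from transformations import quaternion_matrix  # lib/transformations.py in this repo

# LineMOD camera intrinsics as listed in dataset.py
cam_fx, cam_fy = 572.41140, 573.57043
cam_cx, cam_cy = 325.26110, 242.04899
cam_mat = np.array([[cam_fx, 0.0, cam_cx],
                    [0.0, cam_fy, cam_cy],
                    [0.0, 0.0, 1.0]])
dist = np.zeros((5, 1))  # assuming undistorted LineMOD images

def draw_points(img, points_3d, my_r, my_t, color):
    """Project an (N, 3) point cloud with pose (quaternion my_r, translation my_t) onto img."""
    R = quaternion_matrix(my_r)[:3, :3]   # quaternion -> 3x3 rotation matrix
    rvec, _ = cv2.Rodrigues(R)            # 3x3 rotation -> Rodrigues vector
    tvec = np.asarray(my_t, dtype=np.float64).reshape(3, 1)
    pts_2d, _ = cv2.projectPoints(points_3d.astype(np.float64), rvec, tvec, cam_mat, dist)
    for x, y in pts_2d.reshape(-1, 2):
        cv2.circle(img, (int(round(x)), int(round(y))), 1, color, -1)
    return img

# draw the predicted pose in green on the RGB frame
img = draw_points(img, model_points, my_r, my_t, (0, 255, 0))
```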

https://imgur.com/a/Qfh1RRL

As you can see, while the reported distance error between the predicted and target points is low, the visual error against the object in the scene is large. Is there a more accurate way to reproduce your paper's visualization, or is the prediction constructed in the example incorrect? I could not find any code or explanation of the visualization method in either the DenseFusion paper or this repository.

Thank you.