Gorilla-Lab-SCUT / BiCo-Net

Code for "BiCo-Net: Regress Globally, Match Locally for Robust 6D Pose Estimation"
MIT License

How to get the Pred mask of PVN3D on YCB-Video dataset? #3

Closed: ZJU-PLP closed this issue 2 years ago

ZJU-PLP commented 2 years ago

Dear author:

Would you mind sharing the method or details used to generate the Pred masks of PVN3D on the YCB-Video dataset? Currently I can only download these masks from the link you provide, but I cannot find any details in your paper about how they were produced.

aragakiyui611 commented 2 years ago

You might need to wait a few days, as the author is busy at the moment.

ZJU-PLP commented 2 years ago

@aragakiyui611 OK, thanks for your reply. I look forward to your update.

ZJU-PLP commented 2 years ago

@aragakiyui611 Hi, would you mind sharing the details of this when you have time?

aragakiyui611 commented 2 years ago

A brief reply: run PVN3D's inference, project the point cloud into image-space coordinates, and fill each corresponding pixel with the predicted segmentation class id; that gives you the mask.
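For reference, a minimal sketch of that projection step, assuming YCB-Video's 640x480 images and a standard pinhole camera model; the function name and arguments are hypothetical and are not part of the PVN3D or BiCo-Net code:

```python
import numpy as np

def points_to_seg_mask(points_cam, class_ids, K, img_h=480, img_w=640):
    """Project per-point segmentation predictions into a 2D class-id mask.

    points_cam: (N, 3) points in camera coordinates (meters)
    class_ids:  (N,)   predicted segmentation class id for each point
    K:          (3, 3) camera intrinsic matrix
    """
    mask = np.zeros((img_h, img_w), dtype=np.uint8)
    z = points_cam[:, 2]
    valid = z > 1e-6                      # keep only points in front of the camera
    zs = np.where(valid, z, 1.0)          # avoid divide-by-zero for invalid points
    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy
    u = np.round(K[0, 0] * points_cam[:, 0] / zs + K[0, 2]).astype(np.int64)
    v = np.round(K[1, 1] * points_cam[:, 1] / zs + K[1, 2]).astype(np.int64)
    keep = valid & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    mask[v[keep], u[keep]] = class_ids[keep]
    return mask
```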

ZJU-PLP commented 2 years ago

@aragakiyui611 Thanks for your reply. I still do not understand how to run PVN3D's inference. Would you mind sharing more details?

Another question: how did you plot Figure 3 ("Comparative evaluation with different levels of occlusion on the YCB-Video benchmark") in your paper? I looked into DenseFusion but could not find the details. Would you mind sharing how the different levels of occlusion on the YCB-Video benchmark are computed for this plot?

aragakiyui611 commented 2 years ago
  1. Run the test/inference script of PVN3D.
  2. Please refer to the supplementary materials of DenseFusion (a rough sketch of the binning procedure is given below).
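For reference, a minimal sketch of the occlusion binning, assuming the occlusion level of an instance is measured as the invisible fraction of the object's projected surface and that per-instance ADD-S errors have already been computed; the helper names are hypothetical and this is not the authors' actual plotting script:

```python
import numpy as np

def occlusion_level(visible_mask, full_mask):
    """Fraction of the object's projected surface that is occluded.

    visible_mask: (H, W) bool, object pixels visible in the label image
    full_mask:    (H, W) bool, object pixels when the model is rendered
                  unoccluded at the ground-truth pose
    """
    total = full_mask.sum()
    return 0.0 if total == 0 else 1.0 - visible_mask.sum() / float(total)

def accuracy_by_occlusion(occlusions, adds_errors, threshold=0.02,
                          bins=np.arange(0.0, 1.01, 0.1)):
    """Bin instances by occlusion level and report the ADD-S < 2 cm
    success rate per bin (one point per bin on a Figure-3-style curve)."""
    occlusions = np.asarray(occlusions)
    correct = np.asarray(adds_errors) < threshold
    rates = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (occlusions >= lo) & (occlusions < hi)
        rates.append(correct[sel].mean() if sel.any() else np.nan)
    return bins[:-1], np.array(rates)
```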
ZJU-PLP commented 2 years ago

@aragakiyui611

  1. OK, I will give it a try.
  2. Thanks for the pointer. Furthermore, would you mind sharing the script, if possible?