Closed: hillaric closed this issue 3 years ago
Hello, ethnhe. I am very interested in your work; it is very effective. I have a question about the PVN3D segmentation results. In Table 7, you compare your segmentation results with Mask-RCNN, but is the 2D mIoU the same as the 3D mIoU?

The results reported in the paper are all in 3D. Specifically, for Mask-RCNN, we projected the sampled point cloud fed into PVN3D back onto the 2D image and took the corresponding labels predicted by Mask-RCNN to calculate the mIoU. Since the sampled point cloud is randomly selected from the whole image, the distribution of labels on the point cloud is close to that of the RGB image.

Thanks.
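For readers wondering what "project the sampled point cloud back to the 2D image and take the predicted labels" looks like in practice, here is a minimal NumPy sketch. It assumes points are already in the camera frame and that `K` is the pinhole intrinsic matrix; the function names (`project_points`, `miou`) and the label-map indexing are illustrative assumptions, not PVN3D's actual code:

```python
import numpy as np

def project_points(cloud_xyz, K):
    """Project camera-frame 3D points (N, 3) to integer pixel coords (N, 2)."""
    uv = (K @ cloud_xyz.T).T          # (N, 3) homogeneous image coords
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide by depth
    return np.round(uv).astype(int)   # (u, v) pixel indices

def miou(pred, gt, num_classes):
    """Mean IoU over classes that appear in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Look up the Mask-RCNN label for each sampled 3D point, then score it:
# label_map is the (H, W) per-pixel prediction from Mask-RCNN.
def point_cloud_miou(cloud_xyz, gt_point_labels, label_map, K, num_classes):
    uv = project_points(cloud_xyz, K)
    pred_point_labels = label_map[uv[:, 1], uv[:, 0]]  # index as (row=v, col=u)
    return miou(pred_point_labels, gt_point_labels, num_classes)
```

Because the sampled points are drawn roughly uniformly over the image, the per-point mIoU computed this way tracks the per-pixel 2D mIoU closely, which is why the two numbers can look the same in Table 7.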