facebookresearch / ContrastiveSceneContexts

Code for CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts"
MIT License

Questions about metrics and visualization #25

Closed sjtuchenye closed 2 years ago

sjtuchenye commented 2 years ago

Firstly, thank you for your wonderful work! I have two questions about the implementation.

  1. I noticed that you compute the segmentation IoU on the voxel-wise predictions (the input is voxelized). Are all the metrics reported in the paper computed this way?
  2. Also, for ScanNet I can find the function that maps the predictions back to the points, but for Stanford3D there seems to be no such function. How do you visualize the segmentation results on Stanford3D? By visualizing the voxels directly?

Thanks again for your great work; looking forward to your reply.

Sekunde commented 2 years ago

Hi,

  1. Yes, all the mIoU numbers are computed on voxels. We use a 2cm voxel size on ScanNet and 5cm on Stanford3D, following PointContrast.
  2. There are multiple ways of visualizing it. If you want good graphics for a paper, I would recommend mapping the predictions back to the points and rendering them properly; for debugging purposes, visualizing the voxels is enough, e.g. treating the voxel coordinates as a point cloud.
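For reference, the map-back step in point 2 can be sketched with plain NumPy. `voxelize` and `voxel_preds_to_points` below are illustrative names, not functions from this repo (which relies on its sparse-network library's quantization); the key idea is keeping the inverse index from point to voxel so per-voxel predictions can be broadcast back to every original point.

```python
import numpy as np

def voxelize(points, voxel_size):
    # Quantize continuous coordinates to an integer voxel grid and
    # remember, for every point, which unique voxel it fell into.
    coords = np.floor(points / voxel_size).astype(np.int32)
    unique_coords, inverse = np.unique(coords, axis=0, return_inverse=True)
    return unique_coords, inverse.reshape(-1)

def voxel_preds_to_points(voxel_labels, inverse):
    # Broadcast each per-voxel prediction back to its original points.
    return np.asarray(voxel_labels)[inverse]

# Toy example with the 5cm voxel size used for Stanford3D.
rng = np.random.default_rng(0)
points = rng.random((1000, 3))                        # fake point cloud
voxels, inverse = voxelize(points, voxel_size=0.05)
voxel_labels = rng.integers(0, 13, size=len(voxels))  # fake per-voxel predictions
point_labels = voxel_preds_to_points(voxel_labels, inverse)
assert point_labels.shape == (len(points),)
```

For a quick debug view, the `voxels` array itself can be saved or plotted as a point cloud, which is the "visualize the voxel coordinates" option mentioned above.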
sjtuchenye commented 2 years ago


Thank you for solving my questions. Great work!