astra-vision / MonoScene

[CVPR 2022] "MonoScene: Monocular 3D Semantic Scene Completion": 3D Semantic Occupancy Prediction from a single image
https://astra-vision.github.io/MonoScene/
Apache License 2.0

How did you get the occupancy labels of the NYUv2 dataset? #104

Open zicha4555 opened 6 days ago

zicha4555 commented 6 days ago

Thanks for your great work! Since I noticed that the download URL on the page contains "monoscene", I want to know how you obtained the occupancy labels for the NYUv2 dataset. I could not find them on the official NYUv2 web page or in its paper, even though the MonoScene paper states that "NYUv2 [58] has 1449 Kinect captured indoor scenes, encoded as 240x144x240 voxel grids labeled with 13 classes". If you generated the occupancy labels yourselves, could you please provide more details about how you did it?

anhquancao commented 6 days ago

Hi @zicha4555, please take a look at this issue.

zicha4555 commented 5 days ago

> Hi @zicha4555, please take a look at this issue.

Thank you for replying! After some research, I learned that the SSC annotations for NYU were produced by SSCNet by voxelizing 3D models of the NYU scenes, which were created in an earlier work titled "Support surfaces prediction for indoor scene understanding."
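For anyone else arriving at this thread: once you have the voxelized SSC labels, a quick sanity check is to load the grid and count voxels per class. The sketch below uses a synthetic array as a stand-in for a real label file; the (240, 144, 240) shape and the 13-class count come from the MonoScene paper, while the use of 255 as an unknown/ignore label is an assumption borrowed from common SSC pipelines (e.g. SSCNet) and may differ in the actual files.

```python
import numpy as np

NUM_CLASSES = 13          # per the MonoScene paper's description of NYUv2
IGNORE_LABEL = 255        # assumed "unmeasured/ignore" marker, as in SSCNet-style data

# Synthetic stand-in for a real voxel-label grid (replace with your loaded array)
rng = np.random.default_rng(0)
voxels = rng.integers(0, NUM_CLASSES, size=(240, 144, 240), dtype=np.uint8)
voxels[:, 0, :] = IGNORE_LABEL  # mark one slab as unknown, for illustration

# Count voxels per class, excluding the ignore label
valid = voxels != IGNORE_LABEL
counts = np.bincount(voxels[valid].ravel(), minlength=NUM_CLASSES)
for cls, n in enumerate(counts):
    print(f"class {cls:2d}: {n} voxels")
```

Running a check like this against the downloaded labels is a cheap way to confirm the grid dimensions and class range match what the paper describes before training.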